AI Imitating Humans

AI imitating humans, or the other way around?

In the past few months, I’ve had the same boring conversation several times: did I really write this myself, or did an AI help? Aside from the fact that I thought using AI for productivity and efficiency was fairly accepted practice by now, I find the question mildly irritating.

The latest example occurred a few days ago, while I was preparing for a meeting. I had written some background material entirely in my own words: typed out the usual way, with a keyboard and a screen, and with plenty of time spent staring into space in search of inspiration. It was a little long, a little too careful, and maybe even a bit heavy. In other words, it sounded just like me.

Someone on my team suggested I try some new features in the grammar and writing tool we use. I wanted the tool to help me tighten the document and make it punchier, less ponderous. But then I found myself staring at the “AI-Detector” button, which claims to judge whether a piece of text is likely AI-generated, and I couldn’t help myself.

The verdict: 47% likely AI-generated.

I’m lucky that I don’t panic easily, because this could have shaken my confidence. If my own writing, made without any prompts or shortcuts, now seems algorithmic to a machine, what are we really detecting?

After the initial surprise and mild outrage, I mostly found it funny. Still, it points to a bigger and more troubling issue: AI-detection tools are really just making educated guesses. Even their creators admit as much: OpenAI has been explicit that AI text detectors are unreliable, with high false-positive rates, especially for careful, formal, or non-native English writing. That hits home for me, since English isn’t my first language and my writing has always been more careful as a result.

Linguists and educators have voiced similar worries. Studies show these tools often confuse clarity, consistency, or a traditional structure with “machine-likeness,” while missing real AI-generated text that a human has lightly edited (see, for example, Why Is It So Hard to Tell If a Piece of Text Was Written by AI?).

Which brings me to my second, more personal grievance.

I’m constantly told to remove dashes because they’re seen as “obvious AI signals”, and this I find genuinely frustrating. I’ve used dashes my whole life. I like them, and I choose them over commas and semicolons. They aren’t a habit I picked up from language models – they were mine long before the models existed.

Enough.

Style isn’t proof of automation. Many human writers, like journalists, essayists, and academics, have strong, consistent habits. Removing those habits just to pass an AI-detection test risks flattening our voices at a time when voice matters most.

Journalists and scholars have pointed out that the more writing is shaped by generic “best practice,” the harder it becomes to tell thoughtful human work from machine output (as explored by the Columbia Journalism Review).

So, here’s my small declaration of independence: I’m taking back my dashes. They don’t belong to AI. They never did. I won’t edit the quirks out of my writing just to please a tool that admits it’s wrong half the time anyway.

Rant over — at least for now.

