A new open-source plug-in called “Humanizer” lets AI models like Anthropic’s Claude avoid writing like an AI. The tool works by instructing the model not to use the very patterns Wikipedia editors identified as telltale signs of AI-generated text. The irony is hard to miss: the plug-in relies directly on a list compiled by humans trying to spot machine-written content.
The plug-in, created by tech entrepreneur Siqi Chen, feeds Claude a curated list of 24 language quirks (overly formal phrasing, excessive adjectives, and repetitive sentence structures) that Wikipedia’s WikiProject AI Cleanup flagged as common in AI writing. Chen published the tool on GitHub, where it has quickly gained traction with over 1,600 stars.
The Context: Why This Matters
The rise of AI writing has led to a parallel effort to detect it. Wikipedia editors began systematically identifying AI-generated articles in late 2023, publishing a formal list of patterns in August 2024. Now, that very list is being used to circumvent detection. This highlights the cat-and-mouse game between AI generation and detection tools. It also underscores a key problem: AI can be prompted to mimic human writing styles, making reliable detection increasingly difficult.
How Humanizer Works
The tool isn’t a magic bullet. It’s a “skill file” for Claude Code, Anthropic’s coding assistant: a structured instruction file that the model loads and follows on demand. Unlike an ad-hoc system prompt, skill files use a standardized format the model is built to interpret reliably. Even so, language models don’t follow instructions perfectly, so the Humanizer doesn’t guarantee flawless results. A rough sketch of the format follows.
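Chen’s actual file is on GitHub; purely to illustrate the format, a Claude Code skill is a Markdown file with YAML frontmatter, typically placed at a path like .claude/skills/humanizer/SKILL.md. The path, field values, and rules below are a paraphrased sketch, not the plug-in’s real contents:

```markdown
---
name: humanizer
description: Rewrite prose to avoid common signs of AI-generated text.
---

When writing or editing prose, avoid these patterns:

- Do not inflate significance ("marking a pivotal moment", "a testament to").
- Do not use tourism-brochure adjectives ("breathtaking", "nestled within").
- Do not tack on empty symbolism ("symbolizing the region's commitment to...").
- State plain facts; cut editorializing transitions and filler.
```

In Anthropic’s skill format, the short description is what the model reads first; the full rule list is loaded only when the model judges the skill relevant to the task at hand.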
Testing shows the tool makes AI output sound less precise and more casual, but it doesn’t improve factuality. In some cases, it could even harm coding ability. One of the instructions, for example, tells the AI to “have opinions” rather than simply report facts, a counterproductive suggestion for technical documentation.
What AI Writing Looks Like (According to Wikipedia)
The Wikipedia guide provides concrete examples. AI writing often uses inflated language: “marking a pivotal moment” instead of “happened in 1989.” It favors tourism-brochure descriptions (“breathtaking views,” “nestled within scenic regions”). It also adds unnecessary phrases like “symbolizing the region’s commitment to innovation.” The Humanizer tool attempts to replace these patterns with plain facts.
For example, with the skill active, the AI would rewrite:
Before: “The Statistical Institute of Catalonia was officially established in 1989, marking a pivotal moment in the evolution of regional statistics in Spain.”
After: “The Statistical Institute of Catalonia was established in 1989 to collect and publish regional statistics.”
The Problem With AI Detection
Even with detailed rules, AI writing detectors are unreliable. There’s no foolproof way to distinguish human from machine-generated text, and AI models can be prompted to avoid specific patterns, as the Humanizer demonstrates. OpenAI, for example, struggled for years to stop its models from overusing the em dash, yet the pattern can be suppressed with the right instructions.
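As a toy demonstration of that weakness, here is a minimal Python sketch of a detector built directly from a published phrase list; the phrases and scoring are illustrative, not Wikipedia’s actual criteria. A model handed the same list as rewrite rules evades the detector by construction:

```python
import re

# A few phrases of the kind WikiProject AI Cleanup flags
# (an illustrative subset, not Wikipedia's actual list).
TELLTALE_PATTERNS = [
    r"marking a pivotal moment",
    r"breathtaking views?",
    r"nestled within",
    r"symbolizing .{0,40} commitment to",
]

def ai_writing_score(text: str) -> int:
    """Count how many telltale patterns occur in the text (higher = more AI-like)."""
    return sum(bool(re.search(p, text, re.IGNORECASE)) for p in TELLTALE_PATTERNS)

before = (
    "The Statistical Institute of Catalonia was officially established in 1989, "
    "marking a pivotal moment in the evolution of regional statistics in Spain."
)
after = (
    "The Statistical Institute of Catalonia was established in 1989 "
    "to collect and publish regional statistics."
)

print(ai_writing_score(before))  # 1 -> flagged as AI-like
print(ai_writing_score(after))   # 0 -> the rewrite sails through
```

Any text rewritten to follow the list scores zero on a detector built from it, which is precisely the loophole the Humanizer exploits.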
The Humanizer is a symptom of this arms race: every published detection method doubles as a recipe for evading it, making AI writing more convincing in the process. As models become more sophisticated at mimicking human styles, that cycle is likely to continue.