How AI Nearly Tricked Me Into a Career-Ending Mistake
A cautionary tale for those of us using GenAI as an indentured research assistant — and tips on how to work with it better.
I’m working on a new book about storytelling in the AI Age, and I’ve had a handy assistant who’s helped me keep my daily writing streak going amidst waves of nausea from norovirus and Jewish Book Anxiety Disorder: Claude.
I’d never outsource my writing or ideas to AI, but I do use Claude (ChatGPT’s smarter and less popular rival) as a sort of indentured research assistant. It’s like the movie Her, except instead of falling in love, I’m abusing an unpaid intern. Act I of the story started swimmingly — Claude was an invaluable resource. But then, in Act II, things got sinister. Claude nearly tricked me into making a career-ending mistake.
Here’s what happened — and what we can learn about over-relying on GenAI.
Right now, I’m grinding on an in-depth book proposal to send to publishers, which means writing summaries of every chapter with compelling opening and supporting stories. It’s a grueling process because I need to outline the entire book without the joy of actually writing it. The other day, I needed a good sports story for a chapter about storytelling and leadership because, well, I’m a liberal arts bro at heart, and I need at least one sports story in this freaking book.
So I asked Claude: “What's a lesser-known example of a football coach inspiring his team to greatness through a compelling story?”
Claude came back with a banger of an idea:
One powerful example comes from Tom Coughlin's leadership of the 2007 New York Giants. Before their playoff run, he showed the team a clip of a tightrope walker crossing between the Twin Towers in 1974. The message was about maintaining perfect balance and focus despite intense pressure - much like what they'd face in the playoffs as underdogs.
The Giants went on to orchestrate one of the biggest Super Bowl upsets ever, defeating the previously unbeaten New England Patriots. Players later cited that visualization of the tightrope walker as a key mental image that helped them stay composed during crucial moments, particularly in their 17-14 Super Bowl victory.
This was great! I’m an obsessive New York Giants fan, and I kind of remembered this story. After all, that 2007-2008 Super Bowl victory was one of the top five happiest moments of my life; after the Giants vanquished Tom Brady, I ran around the campus of Sarah Lawrence College shirtless in the snow clutching an empty bottle of Goldschläger, startling a group of stoned theatre kids who had no idea it was Super Bowl Sunday. Perfect. I started adding it to the proposal doc, pulling from memory to get a foothold.
Except, wait. Did I actually remember this? Tom Coughlin was a stiff, old-school hardass — it didn’t sound like the type of speech he’d give.
I Googled for more details about Coughlin’s speech. I got nothing. Weird. I really wanted to use this story, so I searched for another 10 minutes. Nada. So I asked Claude for sources. And guess what? That sociopathic overachiever admitted to making the whole thing up.
Alright. I asked Claude for another example. And guess what? It gave me a killer example (involving a magician?!) … but it turned out to be total bullshit again!
How to work with a research assistant who’s usually on acid
Making minor shit up has derailed the careers of some of my favorite writers, like Jonah Lehrer. I was an exec at an AI company for three years and know the pitfalls of AI, but given how desperate I was to finish this book proposal, I was very close to taking Claude at its word with the Coughlin story and ruining my reputation with publishers. I really WANTED it to be true.
For writers, marketers, and knowledge workers, this is a serious hazard. In the coming year, we will rely on GenAI more and more. Some of it will be by choice; GenAI is very good at automating a lot of the mundane bullshit that takes up most of our workday. Yesterday, I had coffee with a “fractional CRO” friend who’s built a genius workflow where Claude handles all of his client prep and follow-up, allowing him to make twice the money in half the time. As I wrote in The New Rules of Content Marketing last week:
Most marketing work is a soul-sucking time pit. Persona documents. Decks. Landing page copy. Email nurtures. List cleaning. Claude and ChatGPT can do most of that at a B+ level. Reinvest that time by telling great stories that your audience loves and that give you pride.
Some of our GenAI usage won’t be by choice. Employers will push these tools on us without proper training, and our overlords will demand increasingly frenetic levels of productivity. The easy thing to do is trust the genius machine.
But even if these tools are like a polymath Ivy Leaguer, you have to remember that Ivy Leaguer is also on acid. GenAI still has a huge hallucination problem, no matter how much Sammy Altman and the Oligarchs downplay it.
LLMs like ChatGPT and Claude work by using a big ol’ neural network to predict the next word in a string of text, and while the hallucination issue has gotten better, it’s still not great! Even the best-performing model still makes shit up 16.4% of the time, according to Google’s latest FACTS Grounding Leaderboard. For the two most popular models, ChatGPT and Claude, that figure is over 20%. (Which coincidentally is about the same amount that my friends make shit up when they’re on LSD. And it OFTEN involves magicians.)
If your research assistant lied to you one out of every five times, you’d probably fire them! I’m not so sure AI is replacing human workers en masse quite yet.
We’re in a weird time. There are a lot of personal and professional incentives to use flawed technology with blind faith. Some tips:
Always fact-check AI’s output. It’s fine to use Claude or ChatGPT for background research but then validate its findings with actual sources.
Lean on AI tools that link to verifiable sources. I tend to prefer Perplexity, Gemini, or ChatGPT’s new Search tool because they link to external sources that you can validate.
Be extra vigilant about those sources. Google search results have become a dumpster fire of SEO slop, much of which is AI-generated. Unless it’s from a trusted journal or publication, be skeptical.
Check Claude's or ChatGPT’s responses, even if they’re based on specific data you provide. A common misconception is that if you upload a call transcript, interview, or some other proprietary information to Claude or ChatGPT, it won’t hallucinate. Not true. My fractional CRO friend blocks out time to do his AI-assisted follow-up immediately after his client calls so that he can easily spot hallucinations while the call is fresh in his mind. Pro tip: You’ll reduce hallucinations if you ask the AI to only pull from the document you provided, not its general information. But often, you have to ask two or three times.
Give the AI permission not to know something. After the book research fiasco, I started asking Claude not to make things up just to please me. Based on Anthropic’s research and my anecdotal experience, this helps!
Make sure everyone at your company knows how much GenAI hallucinates. I run in the same circles as a lot of tech CEOs and AI evangelists, and I’d bet that most of them would be surprised to learn that GenAI hallucinates 20% of the time. I’d be shocked if your boss knows the same. They’re going to push you to use GenAI to “do more with less” (*gag*) this year. Make sure the powers that be know the trade-offs.
Invest in domain expertise. The best way to spot AI bullshit is to have expertise in the topic at hand. In this case, the thousands of hours I’ve wasted rooting for the New York Giants finally paid off!
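For readers who script their AI workflows (like my fractional CRO friend), the two prompting tips above — ground the model in your document, and give it permission to say “I don’t know” — can be baked into a reusable template. This is an illustrative sketch in Python, not an official Anthropic or OpenAI recipe; the function name and exact wording are my own.

```python
def grounded_prompt(document: str, question: str) -> str:
    """Build a prompt that asks the model to answer ONLY from the
    supplied document, and to admit when the answer isn't in it."""
    return (
        "Answer the question using ONLY the document below. "
        "Do not draw on outside knowledge. If the document does not "
        "contain the answer, reply exactly: \"I don't know.\"\n\n"
        f"--- DOCUMENT ---\n{document}\n--- END DOCUMENT ---\n\n"
        f"Question: {question}"
    )

# Example: wrap a call transcript before pasting it into Claude or ChatGPT
prompt = grounded_prompt(
    "Call notes: client wants a Q3 product launch.",
    "When does the client want to launch?",
)
```

You may still have to repeat the “only the document” instruction two or three times in a conversation, as noted above — but starting from an explicit template makes hallucinations easier to spot and dispute.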
When used right, AI can be a marvelous tool for writers, marketers, and creatives — allowing us to automate BS work and focus on telling great stories. But if you let your guard down, you’re playing Russian roulette with your reputation and career.
If you liked this story, you may also like:
Recommended Reads
I Didn’t Want a Job (Amie McNee / Amie’s Substack): As someone who recently gave up a fancy job as a tech exec to work on more creative projects, this essay hit!
She Is In Love With ChatGPT (Kashmir Hill): A wild story you will tell everyone about when you finally leave your house in March.
The tech oligarchy has been here for years (Brian Merchant / Blood in the Machine): “Tech oligarchy” is the buzzword of the month, and this vicious perspective from Brian Merchant is a must-read.
I’m the best-selling author of The Storytelling Edge and a content nerd. Subscribe to this newsletter to get storytelling and audience-building strategies in your inbox each week.
How I used GenAI in this post (Read this post for why I think disclosing this is important / useful):
Besides the fact that my interaction with Claude is the entire premise of this edition, not at all!