29 Comments

This is a great article. Thank you. I support your disclosure idea to take the initiative and show some transparency. I use very little AI-generated content in my work, but knowing how others use it is helpful and makes me see how I could use it to support my process as a creative.


Glad it resonated with you!


Joe, great post and idea. (I underestimated your reaction when I tagged you about this ad!)

And I love what @Eric is doing at Credtent!

I think for broad adoption the transparency needs to be simple and painless. It sounds like you want to be even more specific than the current Credtent badges, for example, in your disclosures. Those badges seem appropriate for a finished/published work. In your case, maybe it’s a badge (or checkable box) for each phase of the creative process? The badges might be a bit much for the audience, psychologically and visually, depending on the scenario or media type.

To a point made earlier, I do think too many don't care because most don't have a real appreciation for anything above mid. Pick almost any topic, media, or consumer good.

But I also think this strengthens your self-governance approach to ethics, accountability, and transparency.


Oh damn I forgot you're the one who sent me this!

And yeah -- I like the idea of badges, but for the purposes I'm talking about, I think that specificity and citation is important. Specificity because I think there's a lot to gain from us not hiding how we use AI and actually talking about it -- we'll learn from one another. Citation because given the way GenAI is trained, it feels appropriate to me to cite when you used it for research.

My issue with the badges would be that they spark more questions than they answer. What does it really mean for a post to be "AI-assisted"? Does "human-created" mean that no AI was used at all? Not entirely sure how to solve this! Maybe it's a combo of a badge and the kind of disclosure I'm talking about. I'd love your thoughts; this is a fairly half-baked idea I threw out into the ether for exchanges like this!


That Fiverr ad -- holy shit. I don't know how I missed it. And thanks (?) for sharing it. (LOL) Wow, wow, wow.

Appreciate this one, Joe!


It's both shocking and not shocking at the same time! Fiverr was an arch nemesis when we were building Contently because at the same time we were telling brands that they should pay writers at least $1/word, Fiverr was out there telling them they should pay writers $5/story.


Great post! Strangely enough, these stats also work if you swap 'fried onions' for 'AI' and 'cheesesteaks' for 'content'!

Creatives must now ask: 'So, you want dat wit' or witout?'


Brad, are you drunk? And eating a cheesesteak at 11 am?


Joe: This is a great post, which I very much appreciate. To me, a big problem with the AI-generated content is that it sucks, but in a way that people don't seem to mind.

I write regularly about the NBA. I used AI to help brainstorm headlines, which was helpful. I ended up combining ideas in an iterative process that involved me, ChatGPT, and my wife.

Also, just wanted to share -- I ran my article through ChatGPT to ask for proofreading and tips to improve. The suggestions were laughably bad. Things like "correcting" lines written for specific effects like slowing down the reader in certain spots. At one point, it flagged a sentence as too negative when in context, it was clearly sarcasm.

So, I do share the concern about creatives being replaced by AI because AI actually isn't very good at creative work.


Yeah, generative AI generally sucks as an editor because these models are mid by design: they learn to generate text from the most common patterns in their training data, which is composed of everything their Silicon Valley overlords could scrape off the internet, copyright be damned.

They're then fine-tuned through Reinforcement Learning from Human Feedback (RLHF), in which humans rate the AI's output, often in “digital sweatshops” in Africa and Southeast Asia, and are instructed to sway the AI toward safe, inoffensive outputs.

Which is why their output so often sounds like it was written by a painfully boring grad student from Connecticut named Brett.
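Since "learn the most common patterns, predict the likeliest next word" is the whole game, here's a toy sketch of the idea (entirely my own invented example -- a bigram frequency table, nowhere near how real LLMs actually work, but the same flavor of training objective):

```python
from collections import defaultdict, Counter

# Toy "language model" (hypothetical sketch, not any real system):
# count which word most often follows which, then always pick the
# single most common continuation.

corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat sat by the fish ."
).split()

# Build the next-word frequency table from adjacent word pairs.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def continue_text(word, steps=4):
    out = [word]
    for _ in range(steps):
        if word not in following:
            break
        # Greedy choice: the most statistically average continuation wins.
        word = following[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(continue_text("the"))  # prints: the cat sat on the
```

Greedy most-common-next-word decoding is exactly the mechanism that produces the blandest possible text, which is the "mid by design" point.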

I'm curious if you could say more about this though — "a big problem with the AI-generated content is that it sucks, but in a way that people don't seem to mind"?

What do you mean exactly? That most readers don't have the taste to really give a crap about AI's writing being painfully mid?


Joe, I can't like this article enough! As a marketer and content creator I have remained optimistic while simultaneously rolling my eyes at the crap that gets churned out as “helpful” or “informative” content. AI will get you quantity packaged as a collection of words that check off the boxes on some optimization list. But AI cannot deliver quality content. I’m talking about the stuff that is truly creative, fresh, provocative, or inspiring. Maybe someday??

You offer a simple and direct solution: a show-your-process/tools tag that would actually be a great resource for writers and readers alike. It will certainly help separate the talented creators from the high-volume producers.

*This comment was entirely written by a human with a soul and appetite for original thought that encourages further engagement.*


Thanks Shawn! Any way you'd want me to structure the disclosure differently? I'm obviously going to keep doing it, but want to do it in a way that's actually helpful to readers.


Ultimately, I think the best approach will be one that feels most natural to you. My personal preference would be to include it almost like a footnote at the bottom. That way the details and transparency are available without distracting a reader from a call to action or related next steps you might be guiding them to.


One thing I'm considering doing for next time is to attach each instance to a footnote of where it appears in the text. This is how I'm doing it in the book I'm working on -- we'll see if it translates well to Substack!


Interesting test. How will you include any notes on the use of AI for brainstorming, research, or editing? Do you think the use of AI in these applications should be noted?

You’ve definitely got me hooked on this practice now, so please share what you learn with each iteration. 🤔


I think of ChatGPT and Claude as mass plagiarism one word at a time.

This is a great idea and post.

I had jotted down an idea very similar to what you propose here -- just mention at the end of a post if I used genAI as a prompt or editor (Claude does make a mighty-fine editor in a pinch).

I'll probably forget for the first few, but I'm going to start doing this with all of my new content.

The reality of AI is that when companies start trying to roll this out, they're going to realize how shitty their back-end content systems are. And there's a dearth of IT talent on both the governance and the AI side of things. Massive clusterfucks are coming as the c-suite focuses on how AI can drop headcount versus actually thinking through how it can make their businesses better.


Thanks! Can you say more on this? "The reality of AI is that when companies start trying to roll this out, they're going to realize how shitty their back-end content systems are."

What do you mean by their back-end content systems?


Current AI isn't really AI -- genAI is more like massively fast prediction of what the next word should be. All of that output relies on having good, solid content to train your fancy-dancy AI tool. And most companies don't pay attention to information architecture/IG issues.

So think about a shared drive or the shitton of SharePoint setups that are all over the place across some vendors. Which version of a document is the final one to feed your learning model? The one that's marked final? The final.v2 one? The final.final.ceosaysitsfinal file name? If you feed all three into the backend, you're going to end up with incorrect, outdated, or contradictory information for your bright shiny new AI engine.
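To make the "which final is final?" mess concrete, here's a hypothetical toy sketch (invented filenames, invented heuristic -- the point being that any such guesswork is a poor substitute for real version metadata from an ECM system):

```python
import re

# Hypothetical sketch: naive triage of "which file is final?" before
# feeding documents into a training/RAG pipeline. These filenames are
# invented examples of the versioning chaos described above.
filenames = [
    "report.final.docx",
    "report.final.v2.docx",
    "report.final.final.ceosaysitsfinal.docx",
    "report.draft.docx",
]

def finality_score(name):
    # Crude heuristic: the more "final"-ish and version-ish tokens in
    # the filename, the later we guess the version is. Exactly the kind
    # of guessing that proper version control makes unnecessary.
    return len(re.findall(r"final|v\d+", name))

winner = max(filenames, key=finality_score)
print(winner)  # prints: report.final.final.ceosaysitsfinal.docx
```

Pick wrong (or feed in all four) and the "bright shiny new AI engine" trains on contradictory drafts.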

That's why Salesforce is massively bleating on about agentic AI, but has also bought an ECM company recently, because Benioff is smart enough to understand that they don't have control over the unstructured content (documents, emails, PDFs, etc.) that makes up the vast majority of content within orgs -- including the companies that run their sales/marketing on Salesforce. Oh, and over 50% of business docs/records are still on paper. How are you gonna get that info into an LLM to train your AI engine? (There's a 30-plus-year-old industry that has a solution for this problem -- I was an editor in it and still have a toe dipped in via work with an analyst company in the space.)

The marketing hype and fear and excitement over AI is far outpacing current ability to implement.

I'm actually starting to pull together some material to write a post about this to get it sorted out more firmly in my own head.

Basically, AI needs good data. Companies (on the whole) do a shit job of managing their unstructured data. That's gonna be a problem.


This is right on. There was a HUGE misconception during the ridiculous early post-ChatGPT hype that you could just take all of your shitty unstructured data, throw it into an LLM sandbox environment, and voilà -- you can ask it questions, it'll be accurate, and the shitty data doesn't matter.

Of course that's not what happened, and to your point, the data hygiene in most companies is absolutely awful. So now Bain and McKinsey are selling data management services at obscene price points to enterprise companies so they can do generative AI, but even when that's successful, you still have the hallucination question, which means you can only really use the tool for use cases where you have a moderate-to-high tolerance for error.

This doesn't mean that GenAI can't be useful, but it does mean that all of the reasons that companies suck at digital transformation are still extremely debilitating, and GenAI doesn't fix any of that -- in fact, it may make it worse, since this is a technology that employees are actively terrified of, mostly because CEOs can't stop talking about how they're thirsty to cut jobs.


I'm thinking there's going to be a few successes, but mostly companies are gonna be wasting money -- especially the ones getting bilked by McKinsey and Gartner. Figure a combo of hubris, ignorance, and unclear objectives will lead to an even higher-than-normal failure rate for attempts at agentic AI, the hot new thang on the block.


Yeah, I'd anticipate a lot longer road than people are projecting for companies to figure out which processes they're actually willing to outsource to AI -- which is why the agent-fueled layoff projections for next year I'm seeing are likely off.


Yeah, I'd agree with that. Was talking to an analyst that tracks agentic AI/IDP, and that Salesforce Atlas reasoning engine they just announced is basically smoke and mirrors. Feels like an announcement to smokescreen the press and their customers while they keep working to integrate the ECM product they bought so they can get to half of what they're claiming in the press release . . . someday. LOL. Software vendors, doing bullshit marketing that hurts everyone (even themselves) to make a splash!

Funny you mention process, I think there's going to be an uptick in process mining to help figure that out. Though, again, that's still a slow process, albeit faster than mapping flows with sticky notes on a whiteboard.


Thanks for advancing the concept of AI transparency. Credtent.org has set industry standards for AI transparency in content. Our three-badge system is easy to use, and our Content Origin Guide explains how to use the badges across various formats. More details are on our site, and the free Badge Tool is here: https://badges.credtent.org/content-origin


This is very cool! Are you seeing LLMs pay out writers on your platform yet?


We are focused on building up our content right now. LLMs are waiting for the license launch in Q1 2025. We’ve been helping people opt out for months.


So basically you opt out as part of Credtent, and then Credtent helps you license your opted-out content to models once they get desperate for high-quality training data?


Bingo!


Very cool! Count me in


Really love this approach, and I’d say in general it’s similar in type to how I use AI ... though in truth, I use it very little. If I can’t think of ways to restate a metaphor, I might ask for ideas. If I need an explanation for how something works (research), I’ll do that. And if I have written a whole section of text and it just doesn’t seem to work well, I may ask Gemini or Claude for a suggested rewrite that I then study and learn from, and then rewrite the whole thing myself. I’ll take your suggestion to disclose -- that’s the way I want to treat my readers. But mostly, I’m put off by the use of AI, and I steer away from creators who use it (especially if they don’t say anything).
