Transparency in AI Content

How should we be thinking about transparency in AI content generation? Good question. I certainly do not have all the answers, but here are some factors I consider when there’s AI content in my work.

First, however, I have to say that the controversy about AI’s use of the em dash is a bit annoying. Apparently, many people think that when they see an em dash—used to set off a phrase or comment instead of two commas—it’s a telltale sign of AI at work.

Au contraire!

Ever since Mrs. Smith’s English class in high school, I have always, and will always, use the em dash. It’s a far more emphatic way to make a point. (I will admit, however, to continually overusing parenthetical phrases, but that’s a topic for another post.) The fact that AI has commandeered it is irritating, but I’ll manage.

The point is, you cannot always tell what is AI-generated and what is not. This challenge will get harder each day, no doubt. That makes it even more incumbent upon us human writers to commit to transparency in AI content.

Mitigating Risks When Using AI for Content Generation

Currently, my primary AI provider is Claude from Anthropic. There are a number of reasons why, and I’m happy to go into them with anyone who wants to know. However, I expect that my alliances may shift as capabilities and practices evolve.

As Claude reminds me at the end of each query, “Claude is AI and can make mistakes. Please double-check responses.” Boy, he’s not kidding! Here are some of the ways I try to mitigate the risks associated with using AI for content.

Require attribution. Early in my AI journey I was frustrated by the amount of $#!^ AI would just make up. Now, in every conversation I require attribution for any of the statements it makes, complete with links to the public pages from which the facts come. If I share research from AI, I will share AI’s sources as well. My hard line here no doubt comes from my degree in journalism, which taught me to always attribute opinions and facts to their sources. (Thank you, Prof. Sheppard.)

Check for plagiarism. Even if you attribute content that AI has found for you, it’s important to confirm you’re not inadvertently lifting someone else’s work and presenting it as yours. Services like TextGuard and Grammarly offer AI-content and plagiarism checking.

Have AI check the work of AI. As I’ve mentioned, Claude is my go-to AI source. However, once I am happy with its output, I will often share it with Perplexity to have it review Claude’s work for errors, balance, and coherence.

AI in the Background

When it comes to transparency in AI content generation, here are the primary ways I use AI in content development—and the uses for which I don’t personally feel disclosure is necessary. (If you disagree, let’s discuss! I’m open to other points of view.)

Basic research. It’s an incredible tool for researching facts (remember those?) with broader context than you can get with Google (or my new fave, Duck Duck Go). For example, when crafting the script for one of my YouTube videos recently, I asked it to research brands that have shifted their core messages over time and why. Attribution is critical here if you include any of the observations or suggestions it makes.

Outlines. Often I have a flurry of concepts in my head for a blog post, script, or white paper. Asking AI to suggest an outline for my ideas not only saves me time and energy, it provides alternatives I might not have come up with on my own.

Repurposing. There have been a couple of times when I’ve asked Claude to take longer blog posts I have written and repurpose them into a smaller, more succinct snippet of the idea. All the words remain my own. Just fewer of them.

Proofreading. Oh my gosh, this has been a game changer for me. My proofreading skills have never been impressive. With the advent of automatic spell check, they got even weaker. I’ve found AI to be a godsend for keeping my work error free-er.

Niche utility writing. Some forms of writing require a specific expertise that I do not have. Landing pages, for example. A sample prompt for me might look like: “You are an expert digital marketer in the B2B space. Write the headline and core content for a high-converting landing page for the attached white paper.”

AI in the Foreground

White papers. I have used AI to expand upon ideas within white papers and supply more data to support any of my opinions. Using AI in this manner is fraught with risks, unless you mitigate them in the ways I listed above. Also, make certain that you agree with how AI has riffed on your ideas. Next, include an endnote that acknowledges AI’s involvement in the white paper in general, or specific passages in particular.

Blog posts. I personally don’t use AI to draft blog posts, but it’s certainly a role it can fulfill. Here, a disclosure would go a long way in building trust as well as demonstrating an understanding of how to use AI efficiently and ethically.

Service delivery. The verbal portion of our Messaging Audits, a review of a client’s messages and those of five competitors, is built through a proprietary AI model. Here, we lean into AI’s contribution because it brings more objectivity to the process than a human might, which for these purposes is an added benefit.

More involved research. One client recently had a unique research request, which would have been beyond our capabilities without AI. For the end product, we shared the findings and included the large language model (LLM), version, and the specific prompt used. That level of transparency may have been a bit extreme, but when it comes to protecting and building client relationships, I’d rather share too much than not enough.

How to Disclose

In full transparency about transparency, I will admit that my disclosures of AI use have been inconsistent at best. This blog post is my public commitment to doing better!

What’s more meta than asking AI how to disclose AI? When I queried Claude/Sonnet 4.6 in April of 2026, here was the response:

  • Light use: “This article was edited with AI assistance. All reporting and conclusions are the author’s own.”
  • Moderate use: “Portions of this piece were drafted with the assistance of [Tool/version]. A human editor reviewed and revised the content for accuracy, fairness, and editorial standards.”
  • Heavy use: “This content was substantially generated using [Tool/version]. It was reviewed, fact-checked, and edited by the author(s) prior to publication. [Organization’s] AI use policy is available at [link].”
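For teams that publish regularly, templates like these could even be wired into a publishing workflow so the right disclosure is appended automatically. Here is a minimal Python sketch of that idea; the function name and template wording are illustrative, not part of any real tool:

```python
# Hypothetical helper: map AI-usage levels to disclosure templates.
# Placeholders like {tool} and {link} are filled in per piece.
DISCLOSURES = {
    "light": (
        "This article was edited with AI assistance. "
        "All reporting and conclusions are the author's own."
    ),
    "moderate": (
        "Portions of this piece were drafted with the assistance of {tool}. "
        "A human editor reviewed and revised the content for accuracy, "
        "fairness, and editorial standards."
    ),
    "heavy": (
        "This content was substantially generated using {tool}. It was "
        "reviewed, fact-checked, and edited by the author(s) prior to "
        "publication. {org}'s AI use policy is available at {link}."
    ),
}

def disclosure(level: str, **details: str) -> str:
    """Return the disclosure text for a given usage level,
    with any placeholders filled from keyword arguments."""
    return DISCLOSURES[level].format(**details)

# Example: a moderately AI-assisted post.
print(disclosure("moderate", tool="Claude Sonnet 4.6"))
```

The payoff of a small helper like this is consistency: the disclosure language gets decided once, in one place, rather than rewritten (or forgotten) post by post.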

The main takeaway for transparency in AI content generation is that human accountability is non-negotiable. Always ensure that AI serves as an assistant, not a proxy.

What do you think of these approaches? This topic is ever-evolving and I am open to having my perspectives evolve as well.


This article was edited with AI assistance. All reporting and conclusions are the author’s own.
