AI-Generated Content Disclosure: FTC Guidelines and Best Practices for 2026
Everything e-commerce brands need to know about disclosing AI-generated content—from FTC rules and platform policies to the EU AI Act and practical implementation.

AI-generated content is now a core part of how e-commerce brands produce marketing materials. From product photos to AI UGC ads, synthetic media is everywhere—and regulators have taken notice. The Federal Trade Commission, major advertising platforms, and international bodies like the EU have all introduced or expanded rules governing how brands must disclose AI-generated content to consumers. Getting this wrong isn't just a compliance risk—it can mean fines, ad account suspensions, and lasting damage to brand trust.
The Regulatory Landscape in 2026
The FTC's position on AI-generated marketing content has evolved significantly over the past two years. What started as general guidance about deceptive practices has crystallized into specific enforcement actions and rulemaking that directly affect any brand using AI to create ads, product images, or influencer-style content.
In late 2024, the FTC issued its first enforcement actions specifically targeting AI-generated testimonials. By mid-2025, the Commission published updated Endorsement Guides that explicitly addressed synthetic media and AI-generated personas. And in early 2026, the FTC finalized rules around "AI Transparency in Advertising," creating clearer (though still evolving) standards for when and how brands must disclose AI involvement in their marketing.
The core principle hasn't changed: the FTC prohibits deceptive acts and practices. What's new is the specificity. The Commission now treats undisclosed AI-generated content as a form of material misrepresentation when consumers would reasonably expect the content to depict real people, real experiences, or unaltered products. This distinction—between what consumers would reasonably expect and what brands actually deliver—is the key to understanding every disclosure rule that follows.
What the FTC Actually Requires
The FTC's requirements for AI content disclosure flow from three overlapping legal frameworks: material connection disclosures, deceptive practices rules, and the updated Endorsement Guides.
Material connection disclosures
The FTC has long required disclosure of any "material connection" between an endorser and an advertiser—payment, free products, employment relationships. In 2026, this extends to the connection between a brand and an AI system that generates content appearing to come from a real person. If your ad features what looks like a customer testimonial but was actually generated by AI, the connection between the "endorser" (the AI persona) and your brand is material because consumers would expect that person to be real and to have genuinely used your product.
Deceptive practices rules
Section 5 of the FTC Act prohibits "unfair or deceptive acts or practices." The FTC has made clear that using AI-generated content is deceptive when it creates a false impression that would likely affect a consumer's purchasing decision. This includes AI-generated before/after photos that show impossible results, fake customer reviews written by language models, synthetic influencer endorsements presented as real, and product images that materially misrepresent a product's appearance or capabilities.
Updated Endorsement Guides
The 2025 update to the FTC's Endorsement Guides explicitly addresses AI-generated endorsements. Key provisions include:
- AI-generated testimonials must be disclosed as such, even if the underlying sentiment reflects real customer feedback.
- Synthetic influencers—AI-generated personas that function like traditional influencers—must be clearly identified as non-human.
- Brands are liable for AI-generated endorsements just as they are for endorsements from real people.
- Disclosures must be "clear and conspicuous," meaning they can't be buried in fine print or hidden behind a tap.
Platform-Specific Rules
Beyond federal regulation, every major advertising platform has implemented its own AI content policies. These often go further than what the FTC requires, and violating them can result in ad disapprovals, account restrictions, or permanent bans.
Meta's AI content labeling
Meta requires advertisers to disclose when ads contain AI-generated or AI-modified imagery of realistic people or events. Their system automatically detects some AI content using embedded signals and C2PA metadata, but advertisers bear the primary disclosure responsibility. As of early 2026, Meta's policy requires a visible "AI-generated" or "Made with AI" label on paid content featuring synthetic people, AI-altered product demonstrations, and any AI-generated audio or video that depicts realistic scenarios. Failure to disclose can result in ad rejection and, for repeat offenders, account-level restrictions.
TikTok's synthetic media policy
TikTok's approach is more aggressive. Their Community Guidelines require disclosure of all "realistic AI-generated content" via a built-in content label, not just a caption disclosure. For TikTok Shop—an increasingly important channel for e-commerce brands—product images must reflect the actual product, and AI-generated lifestyle imagery must be labeled. TikTok also prohibits AI-generated content that impersonates real people without consent, which affects how brands use AI influencer strategies.
Google's AI content guidelines
Google Ads requires disclosure for AI-generated content in election-related and issue-based advertising, and has extended disclosure recommendations (not yet requirements) to commercial advertising featuring synthetic people. For Google Shopping, product images must accurately represent the item being sold. AI-generated product photos are permitted but must not alter the product's appearance in ways that would mislead buyers. Google Merchant Center policies explicitly prohibit "unrealistic enhancement" of product images, whether the enhancement is AI-generated or manual.
The EU AI Act: What E-Commerce Brands Need to Know
If you sell internationally—or even if your ads reach EU consumers—the EU AI Act introduces additional transparency requirements that went into effect in phases starting in 2025. For e-commerce brands using AI-generated content, the key provisions are:
- Transparency for AI-generated content — Article 50 of the AI Act requires that AI-generated or manipulated content (images, audio, video, text) be marked in a machine-readable format and, in many cases, disclosed to end users. This applies to any content that "appreciably resembles existing persons, objects, places, or other entities or events" and would "falsely appear to a person to be authentic."
- Deep fake provisions — AI-generated content depicting real or realistic-looking people must be explicitly labeled as artificially generated. This goes beyond the FTC's approach by requiring machine-readable watermarking in addition to visible disclosures.
- Exemptions for creative and commercial use — Standard product photography enhancement, artistic stylization, and content that is "obviously artificial" (such as clearly cartoon-style graphics) may not require disclosure under the AI Act's proportionality principle. However, the line between "obviously artificial" and "realistically synthetic" is still being interpreted.
- Penalties — Non-compliance with transparency obligations can result in fines of up to 15 million euros or 3% of global annual turnover, whichever is higher. For most e-commerce brands, the reputational risk of non-compliance is equally significant.
The practical takeaway: if you're running ads that reach EU audiences or selling on EU marketplaces, embed machine-readable AI content markers (like C2PA) and add visible disclosures to any AI-generated content featuring realistic people or scenarios.
When You Must Disclose
Not all AI-generated content requires disclosure. The rules center on consumer expectations and potential for deception. Here are the cases where disclosure is required across virtually all jurisdictions and platforms:
AI-generated testimonials and reviews
Any content that appears to be a real customer sharing a genuine experience must be disclosed as AI-generated. This includes UGC-style ads featuring AI personas describing product benefits, AI-written reviews posted to your product pages, and composite testimonials where AI combines real feedback into a synthetic narrative. Even if the underlying sentiment is based on real customer data, presenting AI-generated content as organic customer testimony without disclosure is deceptive.
Synthetic influencers
If you're using AI-generated personas as brand ambassadors or in sponsored content, disclosure is mandatory. Consumers have a right to know they're engaging with a non-human entity. This applies whether the AI influencer has a persistent identity (a named, recurring character) or appears as a one-off model in an ad. The FTC has specifically called out AI influencers as requiring the same level of disclosure as paid human endorsers—plus the additional disclosure that the "person" is AI-generated.
AI product photos that alter reality
Product images that use AI to change the product's color, size, texture, or functionality in ways that don't reflect the actual item require disclosure—or, more accurately, they should not be used at all. Most platform policies and FTC guidance treat materially misleading product images as straightforwardly deceptive, regardless of whether they were created by AI or Photoshop. The AI-specific angle is that generative tools make it easy to create hyper-realistic product shots that subtly misrepresent what the buyer will receive.
When Disclosure Is Not Required
Many common uses of AI in content creation do not trigger disclosure requirements. Understanding these exemptions is just as important as knowing the rules, because over-disclosing can create unnecessary friction and consumer confusion.
Standard product photography enhancement
Using AI to remove backgrounds, adjust lighting, correct white balance, or perform standard retouching on real product photos does not require disclosure. These are the digital equivalents of traditional photography post-processing, and no reasonable consumer would expect product images to be unedited. The key distinction is that the product itself remains accurately represented.
Background generation
Placing a real product photo on an AI-generated background—a kitchen counter, a living room, an outdoor scene—generally does not require disclosure. The product is real; only the context is synthetic. However, if the generated background makes specific claims (for example, showing a product in a medical setting to imply clinical use), that could cross into deceptive territory regardless of disclosure.
Style transfer and color grading
Applying AI-powered style filters, color grading, or aesthetic adjustments to real photography does not typically require disclosure. These are considered standard creative tools, equivalent to applying a filter in Lightroom. As long as the resulting image doesn't materially misrepresent the product, no disclosure is needed.
AI-assisted copywriting
Using AI tools to draft, edit, or refine marketing copy does not currently require disclosure in most jurisdictions. The FTC's focus is on content that appears to come from a specific person or that makes factual claims. Generic marketing copy—headlines, product descriptions, email subject lines—generated or assisted by AI is treated the same as copy written by a human employee.
Best Practices for Disclosure
Even where the rules are clear, implementation matters. A disclosure that's technically present but practically invisible doesn't meet the FTC's "clear and conspicuous" standard. Here's how to do it right:
Where to place labels
- In the content itself — The most effective placement is directly on or immediately adjacent to the AI-generated content. For images, use a visible overlay label. For video, include it in the first few seconds and in any accompanying text.
- In the caption or description — For social media posts and ads, include the disclosure in the primary text, not just in hashtags or comments. Platform-native labels (like Meta's "Made with AI" tag) supplement but don't replace your own disclosure.
- In ad copy — For paid ads, the disclosure should be visible without scrolling or tapping. The FTC's standard is that consumers should see the disclosure before engaging with the content.
What language to use
- Keep it simple — "Made with AI" or "AI-generated image" is clear and universally understood. Avoid jargon like "synthetically rendered" or "computationally generated."
- Be specific when needed — If only part of the content is AI-generated (for example, a real product on an AI background with an AI model), specify what was generated: "Model and background created with AI. Product shown is actual item."
- Don't over-qualify — Phrases like "partially AI-assisted" or "AI-enhanced" can be vague enough to be unhelpful. If the core visual element (a person, a scene, a demonstration) is AI-generated, say so directly.
Making disclosures consumer-friendly
The goal is transparency, not self-sabotage. Research shows that consumers are increasingly comfortable with AI-generated marketing content when they feel informed rather than deceived. A straightforward "Created with AI" label actually builds trust. It signals that your brand is modern, transparent, and confident in its content quality. Frame the disclosure as part of your brand's commitment to honesty, not as a legal footnote to be hidden.
Building a Disclosure Policy: Template for E-Commerce Brands
Every e-commerce brand using AI to generate marketing content should have a written disclosure policy. Here's a framework:
1. Audit your AI content usage
Document every way your brand uses AI in content creation: product photos, lifestyle imagery, model generation, ad copy, UGC-style content, email personalization, and social media posts. Categorize each use as "disclosure required," "disclosure recommended," or "no disclosure needed" based on the guidelines above.
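The audit can live in a spreadsheet or in code. As a minimal sketch, here is one way to encode the categorization as a lookup table; the tier names and use-case keys are hypothetical examples, not an official taxonomy:

```python
# Hypothetical audit ledger mapping AI use cases to disclosure tiers.
# Tier names and use-case keys are illustrative, not an official taxonomy.
REQUIRED = "disclosure_required"
RECOMMENDED = "disclosure_recommended"
NOT_NEEDED = "no_disclosure_needed"

AUDIT = {
    "ugc_style_testimonial": REQUIRED,       # AI persona presented as a customer
    "synthetic_influencer": REQUIRED,        # non-human brand ambassador
    "ai_background_generation": NOT_NEEDED,  # real product, synthetic context
    "background_removal": NOT_NEEDED,        # standard retouching
    "ai_lifestyle_imagery": RECOMMENDED,     # platform rules vary (e.g. TikTok Shop)
}

def disclosure_tier(use_case: str) -> str:
    # Unknown use cases fall through to manual review, consistent with
    # the "default to disclosure" principle discussed later in this guide.
    return AUDIT.get(use_case, "needs_review")

print(disclosure_tier("ugc_style_testimonial"))  # disclosure_required
print(disclosure_tier("ai_voiceover"))           # needs_review
```

Defaulting unknown cases to review rather than silence keeps new content types from slipping through the audit unclassified.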
2. Define disclosure standards by channel
Each advertising channel and platform has different technical capabilities and policy requirements. Your policy should specify exact disclosure language, placement, and format for each channel—Meta ads, Google Shopping, TikTok, your own website, email marketing, and marketplace listings.
3. Assign ownership
Designate who in your organization is responsible for applying disclosures to each content type. In small teams, this might be the same person who produces the content. In larger organizations, compliance review should be a defined step in the content production workflow.
4. Create a review cadence
Platform policies and regulatory requirements change frequently. Schedule quarterly reviews of your disclosure policy to ensure it aligns with current rules. Monitor FTC announcements, platform policy updates, and (if you sell internationally) EU regulatory guidance.
5. Document everything
Maintain records of your AI content production, including which tools were used, what prompts or inputs were provided, and what disclosures were applied. This documentation protects you in the event of a regulatory inquiry and demonstrates good-faith compliance.
How AI UGC Platforms Handle Disclosure
Responsible AI content platforms build compliance infrastructure directly into their products. Here's what to look for—and how tools like ppl.studio approach the problem:
Built-in metadata
Professional AI UGC platforms embed metadata in generated images that identifies them as AI-created. This metadata travels with the file and can be read by platforms (like Meta) that check for AI content signals during the ad review process. This isn't just good practice—it's becoming a technical requirement as platforms build automated AIGC (AI-generated content) detection into their ad review pipelines.
EXIF tagging
Standard EXIF metadata fields can be used to indicate AI generation. Tools like ppl.studio tag generated images with relevant metadata that identifies the content as AI-generated, the platform that created it, and the date of generation. This provides a tamper-resistant record of the content's origin.
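Exact EXIF tag choices vary by tool. A related machine-readable marker that platforms such as Meta have said they read is IPTC's `DigitalSourceType` property, carried in XMP metadata. The sketch below builds a minimal XMP packet with only the standard library; embedding the packet into the image file itself would still require an imaging library or CLI tool:

```python
import xml.etree.ElementTree as ET

# IPTC's controlled-vocabulary value for media created by a generative model.
TRAINED_ALGORITHMIC_MEDIA = (
    "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
)

NS = {
    "x": "adobe:ns:meta/",
    "rdf": "http://www.w3.org/1999/02/22-rdf-syntax-ns#",
    "Iptc4xmpExt": "http://iptc.org/std/Iptc4xmpExt/2008-02-29/",
}

def build_xmp_packet() -> str:
    """Build a minimal XMP packet declaring an asset AI-generated."""
    for prefix, uri in NS.items():
        ET.register_namespace(prefix, uri)
    xmpmeta = ET.Element(f"{{{NS['x']}}}xmpmeta")
    rdf = ET.SubElement(xmpmeta, f"{{{NS['rdf']}}}RDF")
    desc = ET.SubElement(rdf, f"{{{NS['rdf']}}}Description")
    # The DigitalSourceType attribute is what automated labeling systems look for.
    desc.set(f"{{{NS['Iptc4xmpExt']}}}DigitalSourceType", TRAINED_ALGORITHMIC_MEDIA)
    return ET.tostring(xmpmeta, encoding="unicode")

print(build_xmp_packet())
```

The `trainedAlgorithmicMedia` URI is the IPTC vocabulary term for fully AI-generated content; other terms in the same vocabulary cover composites and algorithmically enhanced media.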
Content Credentials (C2PA)
The Coalition for Content Provenance and Authenticity (C2PA) standard is emerging as the industry benchmark for AI content transparency. C2PA embeds a cryptographically signed manifest in content files that records the content's creation history—what tools were used, what inputs were provided, and what modifications were made. Major platforms including Adobe, Microsoft, Google, and Meta support C2PA verification. Adopting C2PA-compliant tools future-proofs your content against tightening regulations and platform requirements. Learn more about detection approaches in our guide on AIGC detection tools and methods.
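As a rough sketch of what this looks like in practice, a manifest declaration for an AI-generated image might resemble the JSON below. Field names follow one version of the open-source c2patool's manifest-definition format and may differ across tool versions; the `claim_generator` value is a hypothetical placeholder:

```json
{
  "claim_generator": "example-ai-studio/1.0",
  "assertions": [
    {
      "label": "c2pa.actions",
      "data": {
        "actions": [
          {
            "action": "c2pa.created",
            "digitalSourceType": "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
          }
        ]
      }
    }
  ]
}
```

Note that the `digitalSourceType` value reuses the same IPTC vocabulary term that appears in XMP metadata, so the visible label, the XMP marker, and the C2PA manifest can all tell a consistent story about the asset's origin.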
Future-Proofing: Anticipated Regulation Changes
The regulatory environment for AI-generated content is moving in one direction: more disclosure, not less. Here's what to anticipate and how to prepare:
Expanding FTC enforcement
The FTC has signaled that it will pursue more enforcement actions against brands using AI-generated content deceptively. Expect the scope of required disclosures to widen—the Commission has indicated interest in addressing AI-generated product descriptions, AI-personalized pricing displays, and AI-curated review summaries. Brands that build disclosure into their workflow now will have a significant advantage over those scrambling to comply later.
State-level legislation
Multiple U.S. states have introduced or passed AI transparency laws that go beyond federal requirements. California's AI Transparency Act, for example, requires disclosure of AI-generated content in commercial communications and imposes additional requirements for content depicting minors. Monitor state-level developments, especially in states where you have significant customer bases.
Technical standards becoming legal requirements
C2PA and similar content provenance standards are likely to move from "best practice" to "legal requirement" in the coming years. The EU AI Act already requires machine-readable marking of AI content. The U.S. is likely to follow. Choosing AI content tools that support these standards now means you won't need to re-tool when regulations catch up.
Platform enforcement escalation
Major ad platforms are investing heavily in automated AIGC detection. As these systems become more accurate, the cost of non-disclosure will rise. Ads flagged as AI-generated but lacking disclosure will face higher rejection rates and potential account penalties. Proactive disclosure preempts this entirely.
How to prepare now
- Adopt C2PA-compliant tools — Choose AI content platforms that embed content credentials. This is the single most future-proof step you can take.
- Default to disclosure — When in doubt about whether a specific use case requires disclosure, disclose. The downside of unnecessary disclosure is minimal. The downside of missing a required disclosure is significant.
- Build compliance into your workflow — Don't treat disclosure as an afterthought. Make it a standard step in your content production pipeline, the same way you QA creative assets before publishing.
- Train your team — Everyone involved in content creation—designers, marketers, media buyers—should understand what requires disclosure and how to apply it on each platform.
- Stay informed — Follow FTC announcements, platform policy updates, and industry groups like the Content Authenticity Initiative. Regulations are moving fast, and quarterly policy reviews are the minimum.
The Bottom Line
AI-generated content disclosure is not a barrier to using AI in marketing—it's a competitive advantage. Brands that are transparent about their AI usage build consumer trust, avoid regulatory risk, and position themselves as responsible innovators. The regulatory trend is unmistakable: disclosure requirements will only increase. The brands that build compliance into their AI content workflows today won't just avoid fines—they'll earn the trust that drives long-term customer loyalty.
For a deeper understanding of AI-generated content and its marketing applications, explore our complete guide to AIGC. To see how detection technology works and what it means for your content strategy, read our breakdown of AIGC detection tools and methods. And for practical techniques on producing AI content that meets quality and compliance standards, check out our guide on reducing AIGC detection signals.
Create compliant AI UGC at scale
ppl.studio generates marketing-grade AI content with built-in metadata, C2PA support, and disclosure-ready outputs—so you stay compliant without slowing down production.
Start free with ppl.studio

Founder of ppl.studio. Building AI tools for product marketing teams who need visual content at scale without the production overhead.