[Image: AI-generated photorealistic female face]

The real question isn’t “Can we use it?” It’s “When does it make sense for our brand?”

For marketers, the possibilities of AI seem endless: faster production, lower costs, more creative freedom. Smaller firms in particular can use AI to produce professional-quality content that previously required large budgets. Recent research shows that top-tier AI visuals can match or even surpass human-created images on realism, aesthetics, and performance metrics such as click-through rates, but only when the creative brief and execution are well crafted.

AI works best when it amplifies what a brand already stands for: imagination, innovation, digital fluency, or creative experimentation. It is far riskier in categories built on trust, expertise, human warmth, or craftsmanship, where synthetic images can easily feel like shortcuts or identity violations.

Quality control still matters. Even advanced tools can produce distorted hands, odd reflections, or emotional flatness. Human judgment remains irreplaceable for ensuring that final visuals align with brand values and expectations of precision. When the fit is right and the brand curates carefully, AI can expand creative possibilities. When the fit is wrong, it can quietly erode trust far faster than it saves money.

The Risk

When there is a mismatch between brand and marketing material, consumers don’t just critique the image; they question the brand.

Category-code violations

Every brand operates within “category codes,” or unwritten rules about what signals professionalism, credibility, and authenticity. These codes shape how consumers interpret a brand’s identity. When AI-generated images break these expectations, the brand looks inconsistent.

For example, Brilleland, a Norwegian optical chain, recently faced backlash after using slightly distorted AI-generated human models in a campaign. Consumers didn’t see creative experimentation; they saw a brand that appeared careless about the values of precision and human care it claims to uphold.

Perceived shortcuts and fakery

AI-generated human models can also suggest a brand is cutting corners, replacing real representation with synthetic people. In categories where expertise and empathy matter, swapping real people for digital ones can feel dishonest, especially if it isn’t disclosed.

When consumers sense fakery, even unintentionally, trust evaporates.

For example, when premium brands known for craftsmanship use AI, consumers may read it as a cost-saving move that clashes with the brand’s quality standards.

Soullessness

AI-generated imagery is often seen as soulless and cold. For brands that rely on emotional resonance, this creates a direct mismatch.

Consider Coca-Cola’s AI-generated Christmas trailer, now in its second year. Though technically polished, it continues to draw online criticism as soulless, because it undermines the very values the brand tries to embody during the festive season: warmth, nostalgia, human connection.

When emotional cues are central to a brand’s identity, AI-generated imagery can feel hollow, making even high-quality visuals ring false.

The Disclosure Dilemma

Research shows that making it clear an image was created with AI can lower perceived authenticity and, in turn, reduce brand trust, purchase intention, and brand image.

Yet not disclosing AI-generated imagery may be even riskier. If consumers later discover that photorealistic images were produced synthetically, they may feel misled or betrayed, which can quickly damage brand trust and trigger backlash.

Moreover, synthetic human images may alter perceptions of how “real people” should look, homogenize diversity, and subtly reshape appearance norms, raising ethical concerns that go beyond transparency.

Norway already has clear guidelines for disclosing retouched advertising images, and companies are generally advised to apply the same disclosure mark when AI-generated visuals depict humans. The EU AI Act moves in the same direction: it requires disclosure of content that could be seen as a ‘deepfake’ and transparency whenever users interact with AI systems. However, neither framework clearly specifies how these rules apply to everyday commercial advertising, leaving companies uncertain about when and how AI-generated content should be disclosed.

But then comes the harder question: where should we draw the line? Should every ad that uses AI be labeled, even if only minor elements are AI-generated? Should AI involvement be disclosed only when photorealistic synthetic people could be mistaken for real ones (i.e., deepfakes)? What about AI-generated music or background elements? As these tools become commonplace, we need a much broader discussion about what is acceptable, one that allows creative use of these new technologies while also protecting consumers.

Published 29 January 2026
