In this column for WARC, Dr Karen Nelson-Field is joined by WARC’s Lena Roland. Both argue that, as Gen AI ramps up, the industry’s core value remains its understanding of real people, whether in media and attention measurement or in ensuring that emotion, imagination and wisdom sit at the heart of the next iteration of the web.

In this section, Karen Nelson-Field, Founder and CEO at Amplified Intelligence, asks how the industry can challenge non-human involvement in creative development while continuing to accept non-human involvement in measuring human attention.

Unsurprisingly, generative AI and its application to creativity emerged as the most widely discussed subject at the Cannes Festival of Creativity. Interestingly, though, most of the discussion leaned towards criticism rather than support. In one corner, proponents acknowledged the potential of generative AI systems to produce innovative and creative content across industries such as art, design and entertainment.

In the other corner, significantly more individuals expressed concerns about the loss of jobs and the future of creativity. They argued that without any human touchpoints, the authenticity, emotional depth and uniqueness of content would likely suffer. Aline Santos Farhat, Chief Brand Officer and Chief Equity Diversity & Inclusion Officer at Unilever, posed a poignant question: “How can we possibly create remarkable content without human contribution? Where are the humans in all of this?”

MIT, a prominent AI research hub, houses the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) and is associated with two AI pioneers: Marvin Minsky, known for leading the ‘man versus machine’ debate on machines independently replicating human cognitive abilities; and J.C.R. Licklider, renowned for spearheading the ‘man PLUS machine’ debate, focused on human-machine collaboration and augmented, or amplified, intelligence.

In a recent article on generative AI, MIT advocates a man PLUS machine approach. Senior Lecturer Paul McDonagh-Smith highlights the significance of human-centric capabilities like creativity, curiosity and compassion in the businesses that use these technologies successfully, arguing that integrating more human qualities into robots and machines is crucial.

Even ChatGPT, an AI large language model, uses a vast amount of human-generated data as ground truth to learn and generate its responses. That ground truth is a large collection of publicly available sources such as books, articles, websites and other human-written material. After the pre-training phase, the model is fine-tuned and continues to receive direct input from human reviewers so that it improves responsiveness, addresses biases and handles new scenarios. This is exactly what a human feedback loop should do.
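For readers who want to see the mechanics, here is a minimal Python sketch of a human feedback loop in principle. It is purely illustrative: the toy model, its features and the simulated reviewer are all invented for this column, and this is not how ChatGPT is actually built.

```python
import random

# Toy "model": ranks candidate responses by learned feature weights.
# A real system would be a neural network refined with reinforcement
# learning from human feedback; plain weights keep the loop visible.
weights = {"helpful": 0.0, "specific": 0.0, "vague": 0.0}

def model_score(features):
    """The model's current opinion of a response."""
    return sum(weights.get(f, 0.0) for f in features)

def human_prefers(a, b):
    """Stand-in for a human reviewer supplying the ground truth:
    prefers responses that are helpful and specific."""
    ideal = {"helpful", "specific"}
    return len(ideal & set(a)) >= len(ideal & set(b))

# Invented candidate responses, described only by their features.
candidates = [
    ["helpful", "specific"],
    ["helpful", "vague"],
    ["vague"],
]

LEARNING_RATE = 0.1
for _ in range(200):
    a, b = random.sample(candidates, 2)
    # The human judgement, not the model's own output, is the signal.
    preferred, rejected = (a, b) if human_prefers(a, b) else (b, a)
    # Nudge weights so the model ranks the preferred response higher.
    for f in preferred:
        weights[f] += LEARNING_RATE
    for f in rejected:
        weights[f] -= LEARNING_RATE

print(weights)  # "helpful" and "specific" rise; "vague" sinks
print(model_score(["helpful", "specific"]) > model_score(["vague"]))  # True
```

The point of the sketch is the direction of information flow: the human verdict is the ground truth, and the model only ever moves towards it.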

While the man-and-machine debate will no doubt continue at advertising festivals to come, what ultimately floored me in Cannes were the striking ironies surrounding the rallying cry of ‘Where’s the human?’

On the same Croisette, an uprising of industry folk championed the need for human involvement in creativity, yet in the same breath accepted the rise of non-human attention measurement. So how can we, as an industry, challenge non-human involvement in creative development yet continue to accept non-human involvement in measuring human attention, such as pixel tags that predict human attention without a single human involved anywhere in the process (the second most discussed topic at Cannes)?

The query ‘Where’s the human?’ evoked the concept of ground truthing, a term for data collected through deep and direct observation of real features and substance in context, then used as training data for scalable predictive algorithms (see WARC’s DNA of an Attention Economy). Just like ChatGPT’s pre-training phase, the deeper and more accurate the ground truth data, the better a predictive algorithm will perform. And the more complex the prediction task, the deeper and more accurate the ground truth data needs to be.

This is precisely why attention proxies prove inadequate at predicting (complex) human attention. Signals collected at the impression level through tracking pixels, such as time-in-view, scroll speed, aspect ratio and viewability, inherently encompass indicators of both attention and inattention. Furthermore, the contribution of these signals varies across platforms, formats and ad units, so simple combinations of them cannot be relied on as attention metrics. Relying solely on impression data fails to capture the intricacies of human behaviour. The simple answer is better human attention training data; only then will these attention proxies have a chance to deliver what the industry needs from them: accuracy and consistency.
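To make the contrast concrete, here is a hypothetical Python sketch. Every field name, weight and label in it is invented for illustration (it is not Amplified Intelligence’s methodology): a proxy applies one fixed, guessed formula to every impression, while a ground-truth approach learns its weights from human-panel labels and can do so separately for each platform, format and ad unit.

```python
from dataclasses import dataclass

@dataclass
class Impression:
    time_in_view: float   # seconds the ad was viewable
    scroll_speed: float   # scroll speed while on screen, normalised to 0-1
    viewability: float    # share of the ad's pixels on screen, 0-1

def proxy_score(imp: Impression) -> float:
    # Proxy approach: one fixed, guessed weighting of impression signals,
    # applied identically across platforms, formats and ad units; that
    # one-size-fits-all assumption is exactly the weakness described above.
    return 0.5 * imp.time_in_view + 0.3 * imp.viewability - 0.2 * imp.scroll_speed

def fit_attention_model(impressions, human_looked, lr=0.01, steps=2000):
    # Ground-truth approach: the same signals, but the weights are learned
    # from human-panel labels (did a real person actually look?) gathered
    # for one specific platform/format, then used to predict at scale.
    w = [0.0, 0.0, 0.0]
    for _ in range(steps):
        for imp, label in zip(impressions, human_looked):
            x = (imp.time_in_view, imp.viewability, imp.scroll_speed)
            pred = sum(wi * xi for wi, xi in zip(w, x))
            err = label - pred
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
    return w

# Tiny invented "panel": one attentive view, one fast scroll-past.
panel = [Impression(2.0, 0.1, 1.0), Impression(0.2, 0.9, 0.6)]
looked = [1.0, 0.0]  # labels from direct human observation (eye tracking)
print(fit_attention_model(panel, looked))
```

The learning loop is just ordinary least-mean-squares regression; the point is not the algorithm but the source of the labels. In the proxy, the weights are assumptions. In the trained model, they come from observed human behaviour.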

So I propose that instead of merely asking ‘Where’s Waldo?’, a clever and catchy phrase used to highlight the deficiencies of viewability measurement, we should also be asking ‘Where’s the human?’. Both seek-and-find questions are equally important for the future of accurate attention products and for the future of authentic, emotional and unique creative content.

Let me finish with this: if ChatGPT were based on a limited amount of training data, lacked continuous human review and had a reported accuracy below 85%, would you trust it the way you do? Probably not.

The antidote to the AI web: more emotion, imagination, and wisdom

In the following section, Lena Roland, Head of Content for WARC Strategy, warns that too much automation can crowd out creativity and emotion, the qualities behind the kind of work that drives fame.

From a creative and strategy development perspective, new tools like ChatGPT and Google Bard, and image-generating apps like DALL-E and Midjourney, look set to have a huge impact on marketing and on how brands reach consumers.

Advances in Gen-AI mean we’re at the genesis of a new internet era, with the ability to create ever more content, at rapid speed and scale. If we are to avoid exacerbating the pitfalls of web 2.0, we must ask, where is the human? Where is the human in the strategic process, creative execution and consumer research? And importantly, where is the human to provide checks and balances as this powerful technology is unleashed?

While Gen-AI is seen as a boon for creative thinking, it also has limitations. Its output is often predictable, which can lead to unimaginative, bland work. If everyone is inputting the same prompts and reaching the same conclusions, where’s the distinctiveness?

Strategists like Tom Roach, VP Brand Strategy at Jellyfish, use ChatGPT as a starting point, a way to surface and rule out predictable, conformist thinking. Once the obvious answers have been identified, the human strategist or creative can bring the imagination, originality, nuance, humour, empathy and brilliance.

Like most things in advertising, it’s not a case of one versus the other but a smart blend of both.

Too much automation can stifle creativity

One of the promises of tools like ChatGPT is next-level personalisation at scale and at rapid pace. Done well, personalisation offers a better customer experience, more relevance and, ultimately, commercial growth.

But because personalisation leans into narrow targeting, it can come at the expense of big, emotive, broad-reach campaigns: the work that drives fame, creates shared meaning and builds mental availability.

Too much targeting attracts what System1’s Orlando Wood calls “narrow-beam attention”, characterised by advertising that is rigid and detached. In contrast, Wood’s research found, ads that attract “broad-beam attention” and engage broad-reach audiences feature more characters, humour and emotion. As a result, creatively, the industry is “editing out humanity and play”, says strategist Martin Weigel.

In his Off Kilter newsletter, Paul Worthington, president of Invencion, argues that “with less human creativity to feed the machines”, tools like ChatGPT will leave marketers with “a synthetic loop, which will increasingly struggle to provide novel or interesting creative solutions as it gets stuck in a rut of creativity.”

On a societal level, an over-reliance on data and automation has already created some very serious problems. Undesirable outcomes include ad budgets directed to, and wasted on, low-quality sites, and troublesome algorithms serving people content designed to polarise.

As Gen-AI tools become more prevalent, human input will be essential to build a healthier, safer and more sustainable web. For example, although the climate crisis is raging in people’s news feeds, ChatGPT et al don’t know that generating more content on more sites will also generate more carbon emissions. It is the job of humans – today’s business leaders – to raise important questions, to imagine unintended consequences, to reduce harm and to course-correct.

Similarly, ChatGPT does not know whether the large language model (LLM) data it is trained on is biased, nor does the technology know if the content it generates will cause harm to brands, consumers and society. Like all technology, AI is a tool that can be used for good and bad. And it’s only as good or as bad as those who use it.

Beyond data, going where Gen-AI and LLMs can’t

Business strategy author and CEO advisor Roger Martin has argued that “great business decisions often require imagination more than data”. To reinforce the point, he states that the genius of Apple’s Steve Jobs and automaker Henry Ford “is in their ability to imagine products or processes that never existed before”.

Going further still, at this year’s Cannes Festival of Creativity, Martin Weigel, CSO, AMV BBDO, observed that “strategy is a future creating process, and by its nature a future facing process… Strategy runs into the desired better future, and then turns around to face the present to work out how to make that real today… Strategy is imagination,” he said, during a session titled “Strategy is constipated, imagination is the laxative”.

It’s questionable whether Gen-AI tools can create the future given that they are, by their nature, backward-looking: they scrape and repurpose content that has already been published.

Because algorithms and precision engineering tend to produce generic and banal work, “there will be a premium for being really, really human. And for being a really, really talented and original human,” Weigel argued.

Traits like imagination, empathy, wisdom and emotion mean humans can go where algorithms can’t.

The best creative work identifies a human truth, meets a human need, or addresses a human pain point. While data has a role to play, for many, the optimum way to get to real human truths is to spend time with communities and subcultures, in real life, through things like ethnography.

At Cannes, Weigel advised strategists to “get out of the office and off the screen and take a look at where the cultural innovations are happening. Go to people who are passionate about your brand because they probably have one foot in the future and are one step ahead of you already”.

This article is published with permission of WARC. You can read the original in full here and discover a wealth of other industry leading insights on their website.