In her latest Forbes Business Council article, Dr Karen Nelson-Field explains how industry players must be all-in for the long haul and prepared to thoroughly recalibrate existing practices to fuel fairness, accountability and precision, rooted in rigorous attention data.


As an enthusiastic proponent of the test, learn and iterate approach, I find it remarkable that digital advertising has taken so long to recognize what's truly behind its persistent measurement challenges and to start exploring different approaches. A glance at industry surveys quickly shows that marketers who were frustrated by poor performance clarity in 2015 are still at odds with CEOs over how campaign success is quantified, including which metrics to measure.

At the same time, general hesitancy around prodding systems that were assembled in haste, and far from structurally sound to begin with, isn't so surprising either. For the past 10 years, understanding of audience engagement has been substantially propped up by assumptions drawn from metadata and time-in-view proxies such as viewability.

This reliance on inward-focused, device-side data made a certain kind of sense during the initial explosion of digital media, when demand surged for fast, scalable measurement solutions. Now, however, advertisers who want more accurate and granular insight into human attention are realizing the need to cast their gaze outward and collect data about, and from, real humans.

Where It Began


Viewability grew out of efforts to bring discipline to chaotic digital trading. Following reports of spiraling fraud and measurement error, and estimates that half of ad impressions weren't actually reaching audiences, industry heavyweights joined forces in a bid to restore order, including America's national advertising and agency associations (the ANA and 4As) and the Interactive Advertising Bureau (IAB).

Eventually this assembly helped establish the Media Rating Council's (MRC) minimum viewability threshold (50% of an ad's pixels in view for one second for display, or two seconds for video), and it brought some positives. Based on the logical idea that ads must be seen to make an impact, it meant advertisers only paid for impressions that had the opportunity to do so. Because traditional audience surveys were regarded as too slow, small and costly for mass use, the threshold was deemed a good fit for ensuring accountability without hampering advertising scale.

Much, however, has changed since the viewability journey kicked off in 2011. At the technical level, it's become clear that JavaScript code executed in browsers is limited: it cannot reliably determine where ads sit on a page, whether browsers are hidden behind other programs, or whether viewers are human rather than bots. Importantly, there has also been dawning awareness that long-running measures don't tell advertisers the full performance story.

Metrics such as time-in-view track how long ads meet the basic MRC bar on a user's device. What they don't capture is how individuals actually engage with those ads, which is crucial for multiple reasons, especially given the variability of human attention.
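To make concrete what these device-level metrics do and don't capture, here is a minimal sketch of MRC-style viewability and time-in-view logic. The sampling model, class names and tick interval are illustrative assumptions on my part, not any measurement vendor's actual implementation; note that nothing in it says anything about the person behind the screen.

```python
from dataclasses import dataclass

@dataclass
class ViewSample:
    """One measurement tick for an ad slot (e.g., sampled every 100 ms)."""
    pixels_in_view: float  # fraction of the ad's pixels on screen, 0.0-1.0
    is_video: bool

MRC_PIXEL_THRESHOLD = 0.5  # 50% of pixels in view
DISPLAY_SECONDS = 1.0      # continuous seconds required for display ads
VIDEO_SECONDS = 2.0        # continuous seconds required for video ads

def time_in_view(samples: list, tick_seconds: float) -> float:
    """Total seconds during which the ad met the MRC pixel bar."""
    return sum(tick_seconds for s in samples
               if s.pixels_in_view >= MRC_PIXEL_THRESHOLD)

def is_mrc_viewable(samples: list, tick_seconds: float) -> bool:
    """True if the impression crossed the MRC minimum: 50% of pixels
    in view for one continuous second (display) or two (video)."""
    required = VIDEO_SECONDS if (samples and samples[0].is_video) else DISPLAY_SECONDS
    run = 0.0
    for s in samples:
        # The run resets whenever the ad drops below the pixel threshold.
        run = run + tick_seconds if s.pixels_in_view >= MRC_PIXEL_THRESHOLD else 0.0
        if run >= required:
            return True
    return False
```

Both functions operate purely on pixels and timers, which is exactly the gap the article describes: an impression can score well here while the viewer's attention is elsewhere entirely.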

Viewability Vs. Humanity: Measurement Fails


In my experience, people constantly switch in and out of attentive states. Such natural behavior isn't unknown to advertisers, but it's precisely what device-level measures such as time-in-view miss.

Using these metrics as indicators of ad effectiveness assumes that high sustained viewability automatically means strong engagement, when in reality audiences may have been flitting between attention and distraction the entire time. This measurement gap presents obvious problems for advertisers, chief among them paying for more engagement than they actually receive. On the brighter side, mounting evidence of viewability's failings is building momentum to reframe current measurement practices around true audience attention.

Last year, my company's research highlighted a notable gulf between viewability and attention data. Spanning 20,000 online impressions across four countries and seven popular platforms, the study found that 70% of impressions met MRC stipulations.

More recently, I took part in conducting an evaluation that expanded to a random sample of 60,000 ads and showed that attention rates in the first 10 seconds of viewable ad delivery are typically low. After the initial assessment found that less than a third garnered active attention, the test was repeated across a broader data set 28 times and consistently supported two key findings: time-in-view does not predict active human attention, yet active human attention has a causal relationship with attentive time-in-view. In other words, in-view delivery isn't a guarantee of meaningful attention, but focused engagement can keep eyeballs on ads for longer.

A Long-Running Refurb


A shake-up of current norms is in motion, propelled by increasing pressure to optimize ad spend and the now-validated value of attention-based measurement. So far, work to pull down pillars built on time-in-view and other proxies has gained impressive ground; according to the Attention Council, use of attention metrics among a longitudinal cohort of brands and agencies has reached 55% for analysis of media quality, while 24% are buying or optimizing media against attention.

From where I’m sitting, this suggests we’re on the right track for developing an ecosystem in which reliable modeling of moment-to-moment audience attention offers a firm foundation for informed media planning, trading and optimization. But it’s important to acknowledge that there is still some way to go. Despite encouraging signs that we are crossing the divide into mass adoption, the need to uphold quality and maintain vigilance remains.

Part of this is about embedding analytical models designed to ensure robust measurement. For instance, there are growing opportunities to leverage media buying and verification tools that don't rely exclusively on impression-level metatags, combining that data with the output of algorithms trained on real, continually refreshed attention data.
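In outline, such a hybrid approach might blend the traditional device-level signal with a modeled human-attention estimate. The sketch below is purely illustrative: the function name, weights and cap are my own assumptions, not any vendor's scoring logic.

```python
def attention_adjusted_score(impression: dict, predicted_attention_seconds: float) -> float:
    """Blend an impression-level proxy with a modeled attention estimate.

    `impression` carries device-level signals (the traditional proxies);
    `predicted_attention_seconds` would come from an algorithm trained on
    real human attention data. All weights here are hypothetical.
    """
    # Device-level proxy: did the impression clear the basic viewability bar?
    proxy_signal = 1.0 if impression.get("mrc_viewable", False) else 0.0
    # Human signal: modeled seconds of active attention, capped and normalized.
    human_signal = min(predicted_attention_seconds, 10.0) / 10.0
    # Weight the human signal more heavily than the device proxy.
    return 0.3 * proxy_signal + 0.7 * human_signal
```

The design point is the asymmetry: a viewable impression with no predicted attention scores low, reflecting the finding that in-view delivery alone doesn't guarantee engagement.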

The other essential element is understanding that taking down a ramshackle but established system is not easy. Industry players must be all-in for the long haul and prepared to thoroughly recalibrate existing practices to fuel fairness, accountability and precision, rooted in rigorous attention data.

This article was first published on the Forbes website as part of their Forbes Business Council series.