Day 1 of the 2019 AudienceXScience conference from the Advertising Research Foundation (ARF) was held April 15th in Jersey City. This annual fixture in the media research industry calendar – a rebrand of the ARF’s 2006-2018 Audience Measurement conference – again brought together many luminaries to shed light on the current state of media measurement.
A surprisingly large 550 registrations were announced for AudienceXScience, indicating that the conference is in good health. However, part of that may be due to the ARF killing off its other long-running annual conference this year (called re:think for many years before being rebranded ConsumerXScience in 2018).
Among the recurrent themes this year are:
- Attribution and its issues continue to be the hot topic in measurement
- One-to-segment may be a better targeting approach than one-to-one, especially given future developments regarding privacy
- Data quality and the need for “ground truth panels” continue to make a comeback
Detailed Notes for Day 1
See Day 2 Notes here
Below are notes from each of the panels/presentations I attended. These notes are by necessity distilled down based on how quickly I could take notes, so they do not reflect the totality of the presentations or discussions. I apologize in advance to any presenters who feel short-changed, misinterpreted, or misquoted.
Opening Remarks – Scott McDonald, ARF President
McDonald feels there has been an improvement in measuring video in the past year, at least in coverage. But there are still blind spots, and there are still uncooperative sellers who won’t open their walled gardens; advertisers need to pressure Amazon and others to open their systems to measurement. Nor is there consensus yet on a cross-platform video measurement approach that takes into account both TV and digital. McDonald repeatedly called out “parochial concerns” as roadblocks – companies wanting to keep their data walled up to gain a competitive advantage.
Advertising in a Modern Media Company – Rick Welday, Xandr Media (AT&T)
Welday spent some time on the advertising structure within AT&T: WarnerMedia with premium advertising opportunities, AT&T with the ability to serve addressable ads across multiple channels, and Xandr as AT&T’s adtech solution. Key trends include 1) addressability is scaling; 2) addressable is becoming easier to buy; 3) addressable is expanding into other areas; 4) advertisers are committing to always-on budgets, enabling digital optimization.
Frequency capping continues to be an issue: an example showed 70% of impressions being served to 28% of targets. However, using Xandr increases efficiency and allows advertisers to reach the “gold” light TV viewer. But Xandr currently works only with the two minutes of local avail time given to MVPDs.
The future includes improved frequency delivery and ad sequencing, convergence of local avails with national ads, and format innovations via AR, MR, and 5G. Welday is very bullish on 5G and its potential to bridge the rural and digital divides.
Transforming Measurement – Megan Clarken, Nielsen
Overall media use has increased from 50 hours/week in 2003 to 75 hours/week in 2018 – an increase of 50%. Targeted advertising spend grew from 2017 to 2019: from $2.4B to $6.8B for linear TV ads, and from $47B to $73B for digital TV ads.
Is there a problem with “measurement”? No, measurement is being done (by Nielsen, of course). There are, however, issues with the overall system:
- alignment on comparability
- everything should be measured and available, like for TV – all see all
- how to avoid fraud
- improvements in the ecosystem to support this goal
Many people are unaware of what Nielsen can do with de-duping audiences and with measurement within walled gardens.
Planning in an AI World – Brad Smallwood, Facebook
82% of display ads are bought using automated systems. Agencies, advertisers, and platforms need to think differently – “liquidity” and “signals”.
Liquidity allows each $1 to be spent on the next most valuable impression. An automated system selects the most valuable impression and creative, and serves it to the right person in real time.
Signals are behavioral data that machine learning uses to make predictions. They drive improvement in ROI for advertisers.
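To make “liquidity” and “signals” a bit more concrete, here is a minimal Python sketch of the idea (not Facebook’s actual system): a model scores each available impression from behavioral signals, and the next dollar is routed to the highest-scoring impression and creative. The signal names and scoring weights are invented for illustration; a real system would use a trained model inside an auction, not fixed weights.

```python
from dataclasses import dataclass

@dataclass
class Impression:
    user_id: str
    creative: str
    signals: dict  # behavioral signals, e.g. {"recent_visits": 3, "purchase_intent": 0.4}

def predicted_value(imp: Impression) -> float:
    # Stand-in for a machine-learned model that turns signals into a value estimate;
    # the weights here are arbitrary placeholders.
    return 0.4 * imp.signals.get("recent_visits", 0) + 0.6 * imp.signals.get("purchase_intent", 0.0)

def spend_next_dollar(available: list) -> Impression:
    # "Liquidity": route the next dollar to the single most valuable
    # impression/creative combination available at this moment.
    return max(available, key=predicted_value)

candidates = [
    Impression("u1", "video_a", {"recent_visits": 2, "purchase_intent": 0.3}),
    Impression("u2", "video_b", {"recent_visits": 5, "purchase_intent": 0.7}),
]
print(spend_next_dollar(candidates).user_id)  # -> u2
```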
Automated systems like these are only as good as the data passed into them. And do the signals align with the end goal of a campaign? E.g., advertising ROI and optimization are two different things.
He feels the implication for Nielsen and measurement is this: how can Nielsen make marketing better? It should be a marketing-improvement company, not a counting company. It should add value rather than being a cost center.
Counting the Right Viewers in OTT Measurement – Nielsen
We should be measuring people, not devices, for both linear and digital.
- Connected TV audiences are different from both linear TV and digital audiences
- Should be measured at the persons level
- This will assist dynamic ad insertion (DAI)
More Than Impressions: OTT in the TV Daypart Model – Roku & TVision
How do attention (measured by eyes-on-screen) and OTT translate into TV’s traditional daypart model? OTT has similar co-viewing levels to linear TV, but attention to commercials is 50% higher for OTT. Why?
- Intentional viewing
- Can’t skip ads
- Captive audience – channel surfing is much more difficult than in the past
These OTT advantages persist across the total day. Final points: 1) OTT is TV – mostly same viewing habits; 2) OTT has higher attention; 3) OTT breaks the linear daypart model.
Quantifying and Aligning Emotion – Magid & Warner Bros Entertainment
This paper discussed efforts by WB to help their affiliates align the local news promos shown in syndicated Warner Bros programs with the content in those programs, allowing greater synergy in brand image and increasing audience flow into local news.
For Ellen, 99% of affiliates use it to lead into local news; high levels also for Warner Bros programs Dr Phil and Judge Judy. Particularly for the feel-good Ellen, the typical “if it bleeds it leads” style of news promotion can cause cognitive dissonance and actually decrease intent to view the news.
A series of surveys and focus groups, the former making use of Magid’s Emotional DNA metric, showed that the more tonally aligned the news promo is with the program, the better the tune-in rate. A key point is to use a positive spin in the promo, even if it’s a serious story. An example would be “Suspects identified and being pursued by police” rather than “Killers on the run!”.
The findings are being shared with news directors at the affiliates.
In or Out? – WarnerMedia
Advanced TV includes data-driven linear TV. Audience Now is WM’s (née Turner’s) own targeting system. It has been proven to drive outcomes – an example showed 1.6x ROAS against target for campaigns using Audience Now versus those not using it.
It uses three components: 1) spot-level measurement via EDO; 2) Nielsen Catalina data; 3) Kantar surveys.
Audio and Video at the Intersections of Digital Video and Linear TV – Omnicom & Tunity
This paper discussed out-of-home (OOH) measurement. There is a gap for OOH measures where audio cannot be heard. This is addressed by the Tunity app, which apparently streams the audio of muted programs to a user through their smartphone. The Tunity data was analyzed to look at OOH viewing behaviors.
Key takeaways:
- Tunity app did indeed capture OOH viewing
- A substantial amount of use of the app was “in home” as well as OOH
- Location of viewing was a substantial influence on viewing behavior
- Need to think about how OOH viewing can contribute to the TV audience
- Consider including OOH into cross-platform measures
How a Truth Set Can Power Data Accuracy Verification – Ericsson Emodo
Emodo is the digital advertising arm of Ericsson. There is much focus on media quality but little on the data underlying how we decide to buy. Segments, bid request metadata, and attribution studies are all dependent on data.
Raw data can be 46% inaccurate; even filtered data can be 34% inaccurate. Emodo can use Ericsson’s cell-tower-level data from all mobile service providers to validate GPS location data (their data are not dependent on device, OS, or carrier).
When questioned further, the presenter had difficulty articulating why Emodo’s data are a truth set: “It’s hard to explain;” “Scale and completeness”.
Takeaways: 1) carve out data quality from media quality; 2) seek proof of data quality, not just indicators; 3) recognize the key role that “truth sets” should play in scaling data.
Calibrating Bias in Online Samples for High Quality Surveys at Scale – MRI/Simmons
This presentation made some very pertinent points, mainly reminding people that online panels and surveys are not representative in the same way traditional probability samples are. This is a key point that, from experience, I know people ignore, forget, or are not even aware of.
Sample bias tends to be narrow; in other words, most of a survey using a non-probability sample can be perfectly fine, but a few areas are not representative of the real world. Analysis of data using Simmons’ National Consumer Sample showed deviations in topic areas such as:
- Online shopping
- Communications
- Video streaming
- Use of tech
- Numerous psychographic attributes
Use of demographic weighting does not eliminate these differences; it only moderates them a little (see the sketch below for why). The bottom line: do not ask questions about online usage or attitudes of a non-probability online sample.
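For context on what demographic weighting does (and doesn’t do), here is a minimal post-stratification sketch in Python; the age cells, sample shares, and population benchmarks are invented for illustration. It shows why weighting only rebalances who is in the sample, not how respondents within each cell behave.

```python
# Hypothetical sample composition vs. a population benchmark for one demographic.
sample_share = {"18-34": 0.45, "35-54": 0.35, "55+": 0.20}      # share of completes
population_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}  # e.g. census benchmark

# Post-stratification weight per cell: population share / sample share.
weights = {cell: population_share[cell] / sample_share[cell] for cell in sample_share}
print(weights)  # 18-34 weighted down (~0.67), 35-54 unchanged (1.0), 55+ weighted up (~1.75)

# Every respondent in a cell gets the same weight, so if the 18-34s who joined the
# panel stream more video than 18-34s in general, that skew survives the weighting.
```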
[personal note: this argument was made for years by Knowledge Networks in support of its probability-based panel called KnowledgePanel (now part of Ipsos). Unfortunately, these arguments typically fell on deaf ears; researchers acknowledged the numerous papers put out by KN on the topic, but getting them to actually spend the extra money for KnowledgePanel sample was a much more difficult task. I wish MRI/Simmons better success than we had!]
A Segments Journey – clypd, Acxiom, MRI/Simmons
This presentation discussed taking segments from MRI to other environments. The issue: audience consistency. Offline and digital measures represent identities and attitudes differently.
They followed five segments from MRI to the Nielsen-MRI fusion, and also MRI to Acxiom to DMPs, publishers, etc.
For the segments, they evaluated segment sizes and how well the profiles compared (using 47 variables). For the Nielsen-MRI fusion, the matching was good; with the digital fusion, the matching was (as expected) not as good. Issues included ID fuzziness, loss of scale, drop-off, and impact.
Correlations for digital segments were in the range of 0.62 to 0.71, compared with 0.89 to 0.97 for the Nielsen-MRI segments. But given the inherent differences in the datasets, digital segments should not be expected to match the correlations of the two probability-based datasets. (The sketch below shows the kind of profile comparison involved.)
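As an illustration of the kind of profile comparison described above (not their actual data or method), here is a short sketch that correlates a segment’s profile across 47 variables in a source dataset with the same segment’s profile in a downstream dataset, using simulated values:

```python
import numpy as np

rng = np.random.default_rng(0)
source_profile = rng.uniform(0, 1, 47)                      # e.g. index values on 47 profile variables
digital_profile = source_profile + rng.normal(0, 0.25, 47)  # noisier version after digital onboarding

# Pearson correlation between the two profiles; a noisier match yields a lower r,
# as with the 0.62-0.71 range reported for the digital segments.
r = np.corrcoef(source_profile, digital_profile)[0, 1]
print(round(r, 2))
```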
Standards, Research and Rationale – George Ivie, Media Ratings Council
Measurement needs to move from gross impressions to targeted characteristics, and the quality of the digital side of measurement needs to be raised to that of TV. The standard is based on consistency for video exposures: it provides a stronger content focus for digital and a stronger ad focus for TV.
There are rules for granularity and comparability, durations and completions, and practices for appending audience characteristics. Because of its establishment in current agency systems, the 30-second base is being used.
Is it for planning or currency? Both, but mainly as a currency. Planning tools, which are not the basis of sales, don’t require the same rigor. Duration-weighted video impressions (DWVI) are getting almost all the debate and comment, despite taking up only 4 of the 70 pages of the draft document.
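For a rough sense of the duration-weighting idea against the :30 base (my own simplified assumption – the actual formula is defined in the MRC draft standard, not here):

```python
def dwvi(seconds_viewed: float, base: float = 30.0) -> float:
    # Each exposure counts in proportion to time viewed against a 30-second base,
    # capped at one full impression (a simplifying assumption for illustration).
    return min(seconds_viewed / base, 1.0)

exposures = [6, 15, 30, 45]  # seconds viewed per exposure
print(round(sum(dwvi(s) for s in exposures), 2))  # 0.2 + 0.5 + 1.0 + 1.0 = 2.7 duration-weighted impressions
```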
Going Beyond :30s, :15s or :06s – Vas Bakopoulos, Mobile Marketing Association
This was the first study to pass the new ARF Certification Program and dealt with attention and cognitive load. Mobile ads do more in one second than we think. Attention is almost always similar and cognition follows closely.
Focus on creative in the first second. Ads that fail, fail in the first second. For longer exposures, are you overpaying for unneeded exposure if key effects are almost immediate?
Advance Toward Digital Audience Quality – Robin Opie, Oracle
Poor audience quality results from several factors:
- Bad actors
- Weak ID graphs
- Over-extension of data
- Quality of source data
- Bad modeling
Oracle employs a number of different processes to combat bad quality, including:
- Audience health
- Model diagnostics
- Ecosystem diagnostics
- Real-world validation
- ID graph accuracy
Grow Your Brand With Better Audience Targeting – Nishat Mehta, IRI
Top tips for targeting:
- Quality @ scale (what is the highest quality at the highest scale?)
- Recency of data
- Future proofing (getting ahead of regulations – is data collected now in a way that will be legal in the future? Example – he feels traceable tender will not survive in the future)
Should a big brand be microtargeting? Does that defeat the purpose of building a big-umbrella brand? Plus he feels microtargeting is too creepy.
Paving the Way for News Organizations – Lisa Ryan Howard, NY Times
[note: This might have been the worst-presented session of the entire conference, with Ms Howard spending most of her time standing in one place, hand on hip, looking down to read the teleprompter… not the type of dynamic presenter needed at 5 PM.]
This presentation basically reviewed the NY Times’ advertising assets, and how they have adjusted to the current digital era. A brand needs to matter… and consumers need to know what matters. The NYT has expanded into audio with podcasts, and into TV with an upcoming series on the FX network.
The NYT ReaderScope application gives advertisers insights into what topics are being read by their targets, and insights into contextual advertising.
CampaignScope is an advertising tool that profiles content and what each impression was exposed to/read. Ad sales are currently still mostly audience buys, but the NYT wants to move more advertising to contextual, which it feels is more advantageous both in terms of effectiveness and the reader experience.
END OF DAY ONE
See Day 2 Notes here
David Tice is the principal of TiceVision LLC, a media research consultancy.
– Don’t miss future posts by signing up for email notifications here.
– Read my new book about TV, “The Genius Box”. Details here.