
Conference Summary: Mavericks of Media 2019

As guest-blogger for the 2019 edition of the Mavericks of Media conference (which is put on by knect365), I wrote up summaries of the keynotes and break-out sessions I attended. You can find the daily summaries on the knect365 website:

Day 1 of the 2019 Mavericks of Media conference (July 10, 2019)
Day 2 of the 2019 Mavericks of Media conference (July 11, 2019)

Enjoy!

David Tice is the principal of TiceVision LLC, a media research consultancy.
Don’t miss future posts by signing up for email notifications here.
– Read my new book about TV, “The Genius Box”. Details here.

The Smart TV “Box Killer” Breaks Out

A new media milestone appears to have been reached, as the new Entertainment in the Connected Home report [details here] from Hub Entertainment Research shows that US TV households now have an average of one (1.0) smart TV with internet access, enabling streaming directly to the set itself.

In one sense, this milestone technically means little other than smart TVs have become mainstream. But it is a psychological benchmark just like the Dow crossing 20,000 or 25,000.

In a real sense, it compares to other TV-related psychological benchmarks, like when the average number of TV sets exceeded the average number of people in the home (around 2003), or when VCRs (1999*) and DVD players (2011*) reached near-ubiquity at 90% of TV households.

The Possible Victims

Unlike other TV-related technologies introduced over the past couple of decades, the rise of the connected smart TV isn’t introducing a new box to hook up to a set. It is giving viewers freedom from those boxes. Internet connectivity within a set can potentially remove many boxes…

  • the pay TV STB – if you switch to a vMVPD
  • the DVR – if your vMVPD service offers a cloud DVR
  • the DVD or Blu-ray player – if you rely on OTT or SVOD services to watch movies or catalog TV series
  • the streaming media player (Roku/Fire TV/etc) – this capability will migrate into smart TVs
  • the stereo/amplifier of a home theater system – if you have a set that supports Bluetooth or WiFi transmission of sound to external speakers
  • even the hand-held box of the remote control may become obsolete as more sets gain internet-driven, voice-activated controls

Videogame consoles may survive longer due to the greater processing power and memory needed for games, but this box will also become expendable as cloud gaming prospers, internet speeds increase, and lag times lessen.

As we’ve seen for many years, consumers want fewer boxes and fewer wires. Consider how quickly TiVo’s separate boxes were pushed aside by pay TV STBs with built-in DVRs, despite TiVo’s superior technology and user interface. Or how popular TV combination units (TV sets integrated into a single box with VCRs or DVD players – or even both!) became in the latter stages of the VCR/DVD product cycles.

The Only Question is Adoption Rate

The trend towards smart TVs isn’t going away. It’s hard to find any TV set for sale now that doesn’t have smart capability as a standard feature. The only question will be how many of these sets will end up being connected to the internet, and how deep viewers will go to activate and use all the box-killer applications available to them. Evidence so far indicates this may take a while.

Disclosure: The author works as a consultant for Hub Research and is project manager for the Entertainment+Tech Tracker research series, of which Entertainment in the Connected Home is one report.
*per The Home Technology Monitor, published by Statistical Research Inc. (1999) and by Knowledge Networks (2011)

David Tice is the principal of TiceVision LLC, a media research consultancy.
Don’t miss future posts by signing up for email notifications here.
– Read my new book about TV, “The Genius Box”. Details here.

Friday Finds: Avengers Endgame

Friday Finds shares a piece of content I’ve recently experienced.

Today’s find: Avengers: Endgame
Genre: Superhero, Feature Film
Origin: Marvel Studios (Disney)
Find it: Every movie theater

Certainly this edition of Friday Finds isn’t bringing forward a little-known piece of content. Avengers: Endgame will likely be the most successful movie of all time. It made a billion dollars in just one weekend.

Today’s post is to salute the Marvel Studios and Disney team for bringing forward a 22-movie, 10-year effort to create the Marvel Cinematic Universe (MCU).

With almost no hiccups – OK, putting aside Edward Norton’s The Incredible Hulk – and relying on Marvel’s so-called “B list” of heroes, Kevin Feige and the Marvel Studios team crafted a movie series that was not only incredibly lucrative for Disney but also creatively successful.

Think it’s easy to take characters that have existed for decades, with hundreds of stories already written, and bring them to the screen? Take a look at Warner Bros.’ efforts to bring its corporate cousin’s DC Universe to the screen, which would seem to be foolproof with Batman, Superman, and a host of known characters.

Or look at the lack of consistent success with Marvel’s “A” list characters, which were licensed out long ago by Marvel before its acquisition by Disney. There have been Fox’s series of uninspiring Fantastic Four movies, and Sony’s ups and downs with Spider-Man. Arguably Fox’s X-Men/Wolverine/Deadpool franchise has been the most successful, but it still lacks the consistent performance of the MCU.

The Rare Ketchup – Movie Comparison

Endgame manages to pull together the threads of 21 previous movies to create a movie that works about as well as could be expected. It starts rather slowly in the first 20 or so minutes before the action picks up – like getting ketchup out of a bottle, it trickles out, then suddenly there is almost more than you want! The film does a good deal of fan service with references to past movies in the MCU series. It closes the loop in the stories of some of our characters (to be vague and non-spoilery). And its lack of a post-credits scene puts a punctuation mark on its status as the end of a long story arc.

Thor’s Hammer Hits Hard

As a minor comic book nerd, I was always in the demo targeted by the MCU. From the strains of Black Sabbath’s Iron Man at the end of the first Iron Man trailer, they had me on the hook. And that hook was set for good during the post-credits scene in Iron Man 2 that showed Thor’s hammer. I literally jumped out of my seat in excitement, an exhibition my son will never let me forget.

Will the next large arc of the MCU be as successful as the first? It is unlikely that lightning will strike twice (sorry, Thor) in terms of actors, directors, characters, and screenwriters. But there is no reason why it can’t continue to excel, even if not at the same level as the past 10 years.

David Tice is the principal of TiceVision LLC, a media research consultancy.
Don’t miss future posts by signing up for email notifications here.
– Read my new book about TV, “The Genius Box”. Details here.

Scenes from the ARF 2019 AudiencexScience Conference – Day 2

Day 2 of the 2019 AudienceXScience conference from the Advertising Research Foundation (ARF) was held April 16th in Jersey City. This annual fixture in the media research industry calendar – a rebrand of the ARF’s 2006-2018 Audience Measurement conference – again brought together many luminaries to shed light on the current state of media measurement.

Detailed Notes for Day 2
See Day 1 Notes here

Below are notes from each of the panels/presentations I attended. These notes are by necessity distilled down based on how quickly I could take notes, so they do not reflect the totality of the presentations or discussions. I apologize in advance to any presenters who feel short-changed, misinterpreted, or misquoted.

Marketing Effectiveness in the Digital Era – Les Binet, adam&eveDDB

Presented results from a long-term analysis of UK data, which has led to four books. Fundamental principle: Brand (building) vs activation (sales). Activation can be high efficiency and high ROI – but while it creates sales blips, it does not build growth. Brand building creates long-term memories – broader reach, different attention, more memorable activities; its decay is slower, which leads to long-term growth.

How to maximize effectiveness

  • Penetration is always the main driver of growth; reach is king
  • Maximize mental availability; brand awareness/salience/fame
  • Messages vs emotions; rational is used for activation, emotional for brand effects

Invest in share of voice; if share of voice is greater than share of market, the brand grows. An optimal budget should be 60% brand, 40% activation. The fundamental rules haven’t changed with the emergence of digital. Digital increases efficiency and makes activation easier, but brand building is still more important in the long run.
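
To make the rule of thumb concrete, here is a minimal sketch of the arithmetic (my own illustration, not from Binet’s presentation; the function names and example figures are hypothetical):

```python
def excess_share_of_voice(share_of_voice: float, share_of_market: float) -> float:
    """Excess share of voice (ESOV): positive values imply conditions for growth."""
    return share_of_voice - share_of_market

def split_budget(total: float, brand_share: float = 0.60) -> tuple[float, float]:
    """Split a media budget between brand building and activation,
    defaulting to the 60/40 optimum cited in the talk."""
    brand = total * brand_share
    return brand, total - brand

if __name__ == "__main__":
    print(round(excess_share_of_voice(0.25, 0.18), 2))  # +0.07 -> SOV ahead of SOM, growth expected
    print(split_budget(1_000_000))                       # (600000.0, 400000.0)
```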

The Race to Own the Future of TV – Julie DeTraglia, Hulu; Natasha Hritzuk, WarnerMedia; Ali Rana, Snapchat

— AT&T’s sale of Hulu?
JD: no change in near term at Hulu
NH: WarnerMedia are treating their upcoming DTC service as a CPG product, not a tech product. Doing UX research for consumer features. Content discovery and personalization are key attributes.
— Snapchat issues?
AR: they have Discover for storytelling from respected partners. Both scripted and unscripted shows. Content for mobile is very different from regular TV.
— Future of appointment TV?
NH: Appointment TV now is when people get together to watch, not a set time based on broadcast schedule
JD: viewers want their content on every screen. Most of Hulu viewing is a connected TV in the living room
— Ads?
JD: Hulu wants to offer choice and flexibility in ads, just as it does for viewing its programs. The ad load is less than regular TV; viewers can choose ads or have interactivity. All this leads to a more effective ad environment. Placing ads on “pause” screens gave them another space they could use without interrupting viewing (since viewing was already paused)

— Measurement?
NH: are we putting the cart before the horse by focusing on developing current measures, when they are working on ad experiences that bypass traditional ads; shouldn’t the measurement match the new experiences, rather than trying to fit new experiences into the old measures?
JD: Measurement needs to be part of the ecosystem. She has to do attribution with different vendors depending on the measure needed. Hulu now does some attribution directly, so as to bypass those vendors.
AR: In 5 years, all advertising will be “performance” advertising (i.e., paying only on results, not exposures or impressions)

— Importance of diversity
NH: It’s a given. May need to offer multiple service options to serve all consumers.
JD: Same. Their research covers all types of persons.
AR: Snap has diverse user base and staff.

Seeking a Framework for Measurement – Radha Subramanyam, CBS

Media measurement has historically been about counting; in the future it will need to add outcomes as well. The current state of attribution research is that there are no consistent outcome measures or standards, and the impact of linear TV is underestimated.

The current state of counting is that it is too complex and still siloed. She wants simplification: a total audience count across all devices that gives total program and commercial audiences.

Philosophy for the future

  • Data comes in all sizes
  • Consumer analytics need to be aligned (survey and passive measures)
  • There is an art and a science to interpreting the meaning of data – the art focuses on storytelling

And apparently, if she’s on your team and she yells at you, it’s a sign she cares.

Exploring the Multiple Dimensions of Attention – MediaScience & Google

What is attention? Desk analysis of existing literature revealed there is attention (in a continuum from Passive to Active) and inattention. There is much academic research on attention but little on inattention.

Attention is the absence of inattention, and inattention can be accurately measured. In the lab, blink duration and eye fixations per second had the highest accuracy in measuring attention/inattention.

Within attention the best measures may be dependent on the content viewed, or the intended outcome of the stimulus.

Next steps are 1) a pilot to see if measures of attention translate to ads and 2) confirming the best measures for ad attention.

The Future of Audience-Based Buying – Comscore

This session was really just a review of OpenAP without any new insights. It was also somewhat ironic as WarnerMedia (Turner), one of the founders of OpenAP, announced three days later it was dropping out of the OpenAP system.

OpenAP is helping network sales teams and their buying partners utilize new datasets. These can be used for planning, buying, posting, and auditing.

Demand for OpenAP has been “limited” but expanding. Despite the free access, the presenters said about 1,000 individual users have signed up.

Consistent segment definitions can be used across network groups with secure segment sharing. It also allows independent 3rd party posting.

From Proxy-Based Optimization to People-Based Optimization – Survata

The problem is proxies: today, optimization is typically done against viewability, CPMs, and reach, but not against outcomes (such as brand lift).

To enable auto-optimization, need to move from campaign level to persons-level reporting (the latter being modeled). Also need single KPI to optimize against (such as funnel impact).

Can’t use traditional survey research; need “programmatic scale”
Can’t use traditional panel accuracy; need superior data accuracy
Don’t use look-alike respondents; need causal AI

Cross-Platform Insights Every Influencer Will Cite This Year – Nielsen

This was pretty much a recitation of relevant results from the latest Total Audience Report from Nielsen.

There has been a 182% year-over-year increase in connected TV (CTV) impressions
There are currently about 10 billion(!) hours per month of CTV viewing time in the USA, translating to about 75 hours/month of CTV time among CTV users.
CTV adds about a 16% increment to a P18-49 audience.
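
As a quick back-of-envelope check on those figures (my own arithmetic, not a Nielsen statistic), the total hours and per-user hours together imply a CTV user base somewhere around 130 million people:

```python
# Back-of-envelope check on the Nielsen CTV figures cited above.
# The implied user count is my own inference, not a reported number.
total_ctv_hours_per_month = 10e9   # ~10 billion hours/month of US CTV viewing
hours_per_ctv_user = 75            # ~75 hours/month among CTV users

implied_ctv_users = total_ctv_hours_per_month / hours_per_ctv_user
print(f"Implied CTV users: {implied_ctv_users / 1e6:.0f} million")  # ~133 million
```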

Erwin Ephron Demystification Award

Congrats to Leslie Wood!

Brand Purpose and Cinema – NCM, ScreenVision, MESH

Many brand experiences are perceived as neutral, whereas consumers and brands both want “purpose”. This study used Real-Time Experience Tracking (RET), a one-week brand experience diary.

Paid brand touchpoints are seen as less engaging and persuasive than owned or earned touchpoints. But paid can be a first step to drive people to the better-received owned/earned experiences.

Cinema cuts through neutrality [as one would expect from an NCM/ScreenVision presentation]. Two-thirds of cinema brand experiences were positive, more than for any other touchpoint, and cinema was particularly helpful among the 18-24 demo. TV and cinema together work even better.

A Levi’s case study was presented. Cinema exposures generated 2x the engagement of TV alone; 93% found cinema memorable compared with 71% for TV.

Can Data Privacy Be Good for Brands? – Dan Linton, W2O Group

The risk of harm is real. Examples are physical (such as when Fitbit jogging data revealed secret military/CIA bases) and emotional (such as when a woman miscarried but was still followed by baby advertising online).

The California Consumer Privacy Act will have a large impact, and is being followed by similar laws in WA, VT, OR, CT, IL, and TX. GDPR is already impacting the EU.

But GDPR did not kill off digital advertising in the EU. In fact, privacy ethics are not detrimental but can be a positive differentiator for a brand/ad tech service. There are many positives that can result. These include:

  • Getting ahead of the curve in terms of what data are collected and how – and if any will fall foul of new laws
  • Becoming aware of, and organizing, data streams. Where are they from? Why do we use them? Are they really needed? Where are they stored? Is there PII to worry about?
  • Being transparent will build trust
  • Giving consumers a reason to engage and share their data

Presenting the ARF Code of Conduct – Paul Donato, CRO of the ARF

Donato discussed the recently announced ARF Code of Conduct. What makes it different?

  • A focus on research, not activation-type data
  • A commitment that requires reading and agreeing to the terms
  • A chain of trust between elements of the research process
  • Includes automated, location, and AI-driven research
  • There are monitoring KPIs; the ARF can see how many times the terms have been read and agreed to
  • There is a required annual compliance report

Companies can apply online and it is voluntary. It was made voluntary to avoid company lawyers resisting a more structured commitment.

[Donato completely sidestepped the whole issue of compliance. The code is a nice idea but it has no teeth – there is no active enforcement by the ARF and it’s dependent on someone being a whistle-blower. And the penalty of having the ARF seal rescinded would likely have no effect other than temporary embarrassment]

Too Much Math, Too Little Meaning – Rishad Tobaccowala, Publicis

We are in the 3rd connected age (1st = initial computer/browser based; 2nd = computer + smartphone; 3rd = internet of things, all is connected)

Issues:

  • Erosion of trust
  • Closed-mindedness – we need to do “A/B testing” on our own beliefs, i.e., consider other viewpoints
  • Rising inequality
  • These are all the dark side of the first two connected ages

What’s missing isn’t data about how to solve many of these issues; it’s the will to implement solutions

Purpose – what are we doing all this for?
Poetry – where is the art/beauty in what is being done?
People – you need to change people or keep them and change their mindset

END OF DAY TWO – END OF CONFERENCE
See Day 1 Notes here

David Tice is the principal of TiceVision LLC, a media research consultancy.
Don’t miss future posts by signing up for email notifications here.
– Read my new book about TV, “The Genius Box”. Details here.

Scenes from the ARF 2019 AudiencexScience Conference – Day 1

Day 1 of the 2019 AudienceXScience conference from the Advertising Research Foundation (ARF) was held April 15th in Jersey City. This annual fixture in the media research industry calendar – a rebrand of the ARF’s 2006-2018 Audience Measurement conference – again brought together many luminaries to shed light on the current state of media measurement.

A surprisingly large 550 registrations were announced for AudienceXScience, indicating that the conference is in good health. However, part of that may be due to the ARF killing off its long-running annual conference this year (called re:think for many years before being rebranded ConsumerXScience in 2018).

Among the recurrent themes this year are:

  • Attribution and its issues continue to be the hot topic in measurement
  • One-to-segment may be a better targeting approach than one-to-one, especially given future developments re privacy
  • Data quality and the need for “ground truth panels” continues to make a comeback

Detailed Notes for Day 1
See Day 2 Notes here

Below are notes from each of the panels/presentations I attended. These notes are by necessity distilled down based on how quickly I could take notes, so they do not reflect the totality of the presentations or discussions. I apologize in advance to any presenters who feel short-changed, misinterpreted, or misquoted.

Opening Remarks – Scott McDonald, ARF President

McDonald feels there had been an improvement in measuring video in the past year, at least in coverage. But there are still blind spots and there are still uncooperative sellers who won’t open their walled gardens.  Advertisers need to pressure Amazon and others to open their systems to measurement. But there is no consensus yet on a cross-platform video measurement that takes into account both TV and digital. McDonald repeatedly called out “parochial concerns” as roadblocks – companies wanting to keep their data walled up to gain a competitive advantage.

Advertising in a Modern Media Company – Rick Welday, Xandr Media (AT&T)

Welday spent some time on the advertising structure within AT&T: WarnerMedia with premium advertising opportunities, AT&T with the ability to serve addressable ads across multiple channels, and Xandr as AT&T’s adtech solution. Key trends include 1) addressability scaling; 2) addressable becoming easier to buy; 3) addressable expanding into other areas; 4) advertisers committing to always-on budgets, enabling digital optimization.

Frequency capping continues to be an issue. Example showed 70% of impressions were served to 28% of targets. However, using Xandr increases efficiency and allows advertisers to reach the “gold” light TV viewer. But Xandr right now only works with the 2 min of local avail time given to MVPDs.
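
To show how a concentration figure like that is computed from per-person frequency data, here is a minimal sketch (synthetic numbers of my own, not Xandr’s data; the function name is hypothetical):

```python
import numpy as np

def impression_concentration(frequencies: np.ndarray, top_share: float = 0.28) -> float:
    """Share of all impressions received by the most heavily exposed
    `top_share` fraction of targets, given per-person impression counts."""
    ordered = np.sort(frequencies)[::-1]            # heaviest-exposed people first
    cutoff = int(round(top_share * len(ordered)))
    return ordered[:cutoff].sum() / ordered.sum()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic, heavily skewed exposure distribution
    freqs = rng.lognormal(mean=1.0, sigma=1.0, size=100_000)
    share = impression_concentration(freqs)
    print(f"Top 28% of targets receive {share:.0%} of impressions")
```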

The future includes improvements in frequency delivery and in ad sequencing. Local avails converge with national ads. Format innovations via AR, MR, and 5G. Very bullish on 5G and on its potential ability to bridge rural and digital divides.

Transforming Measurement – Megan Clarken, Nielsen

Overall media use has increased from 50 hours/week in 2003 to 75 hours/week in 2018 – an increase of 50%. Targeted advertising has increased from 2017 to 2019 from $2.4B to $6.8B for linear TV ads, and from $47B to $73B for digital TV ads.

Is there a problem with “measurement”? No, measurement is being done (by Nielsen, of course). There are issues with the overall system:

  • alignment on comparability
  • everything should be measured and available, like for TV – all see all
  • how to avoid fraud
  • improvements in the ecosystem to support this goal

Many people are unaware of what Nielsen can do with de-duping audiences and with measurement within walled gardens.

Planning in an AI World – Brad Smallwood, Facebook

82% of display ads are bought using automated systems. Agencies, advertisers, and platforms need to think differently – “liquidity” and “signals”.

Liquidity allows each $1 to be spent on the next most valuable impression. An automated system selects the most valuable impression and creative, and serves it to the right person in real time.

Signals are behavioral data that machine learning uses to make predictions. They drive improvement in ROI for advertisers.
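
As a toy illustration of what “liquidity” and “signals” describe – ranking available impressions by model-predicted value and spending the next dollar on the best one – here is a minimal sketch (my own example; the field names and numbers are hypothetical, not Facebook’s system):

```python
from dataclasses import dataclass

@dataclass
class Impression:
    user_id: str
    creative: str
    predicted_conversion: float   # model output driven by behavioral "signals"
    value_per_conversion: float   # what a conversion is worth to the advertiser

def next_best_impression(candidates: list[Impression]) -> Impression:
    """Greedy 'liquidity' rule: put the next dollar on the impression with the
    highest expected value (predicted conversion rate x value per conversion)."""
    return max(candidates, key=lambda i: i.predicted_conversion * i.value_per_conversion)

if __name__ == "__main__":
    pool = [
        Impression("u1", "video_15s", 0.012, 40.0),      # expected value 0.48
        Impression("u2", "static_banner", 0.004, 40.0),  # expected value 0.16
        Impression("u3", "video_15s", 0.009, 60.0),      # expected value 0.54
    ]
    best = next_best_impression(pool)
    print(best.user_id, best.creative)   # u3 video_15s
```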

Automated systems like these are only as good as the data passed into them. And do the signals align with the end goal of a campaign? E.g., advertising ROI and optimization are two different things.

He feels that the implication for Nielsen and measurement is how can Nielsen make marketing better? It should be a marketing improvement company, not a counting company. It should add value rather than being a cost center.

Counting the Right Viewers in OTT Measurement – Nielsen

We should be measuring people not devices for both linear and digital.

  • Connected TV audiences are different from both linear TV and digital audiences
  • Should be measured at the persons level
  • This will assist dynamic ad insertion (DAI)

More Than Impressions: OTT in the TV Daypart Model – Roku & TVision

How do attention (measured by eyes-on-screen) and OTT translate into TV’s traditional daypart model? OTT has similar co-viewing levels as linear TV, but attention to commercials is 50% higher for OTT. Why?

  • Intentional viewing
  • Can’t skip ads
  • Captive audience – channel surfing is much more difficult than in the past

These OTT advantages persist across the total day. Final points: 1) OTT is TV – mostly same viewing habits; 2) OTT has higher attention; 3) OTT breaks the linear daypart model.

Quantifying and Aligning Emotion – Magid & Warner Bros Entertainment

This paper discussed efforts by WB to help their affiliates align the local news promos shown in syndicated Warner Bros programs with the content in those programs, allowing greater synergy in brand image and increasing audience flow into local news.

For Ellen, 99% of affiliates use it to lead into local news; high levels also for Warner Bros programs Dr Phil and Judge Judy. Particularly for the feel-good Ellen, the typical “if it bleeds it leads” style of news promotion can cause cognitive dissonance and actually decrease intent to view the news.

A series of surveys and focus groups, the former making use of Magid’s Emotional DNA metric, showed that the more tonally aligned the news promo is with the program, the better the tune-in rate. A key point is to use a positive spin in the promo, even if it’s a serious story. An example would be “Suspects identified and being pursued by police” rather than “Killers on the run!”.

The findings are being shared with news directors at the affiliates.

In or Out? – WarnerMedia

Advanced TV includes data-driven linear TV. Audience Now is WM’s (née Turner’s) own targeting system. It has been proven to drive outcomes – an example showed 1.6x the ROAS target for campaigns using Audience Now vs. those not using it.

It uses three components: 1) spot-level measurement via EDO; 2) Nielsen Catalina data; 3) Kantar surveys.

Audio and Video at the Intersections of Digital Video and Linear TV – Omnicom & Tunity

This paper discussed out-of-home (OOH) measurement. There is a gap for OOH measures where audio cannot be heard. This is addressed by the Tunity app, which apparently streams the audio of muted programs to a user through their smartphone. The Tunity data was analyzed to look at OOH viewing behaviors.

Key takeaways:

  • Tunity app did indeed capture OOH viewing
  • A substantial amount of use of the app was “in home” as well as OOH
  • Location of viewing was a substantial influence on viewing behavior
  • Need to think about how OOH viewing can contribute to the TV audience
  • Consider including OOH into cross-platform measures

How a Truth Set Can Power Data Accuracy Verification – Ericsson Emodo

Emodo is the digital advertising arm of Ericsson. There is so much focus on media quality but so little on how we decide to buy. Segments, bid request metadata, and attribution studies are all dependent on data.

Raw data can be 46% inaccurate; even filtered data can be 34% inaccurate. Emodo can use Ericsson’s cell-tower-level data from all mobile service providers to validate GPS location data (their data are not dependent on device, OS, or carrier).

When questioned further, the presenter had difficulty articulating why Emodo’s data are a truth set: “It’s hard to explain;” “Scale and completeness”.

Takeaways: 1) Carve out data quality from media quality; 2) seek proof of data quality not just indicators; 3) recognize the key role that “truth sets” should play in scaling data

Calibrating Bias in Online Samples for High Quality Surveys at Scale – MRI/Simmons

This presentation made some very on-point points, mainly reminding people that online panels and surveys are not representative in the same way traditional probability samples are. This is a key point that, from experience, I know people ignore, forget, or are not even aware of.

Sample bias tends to be narrow; in other words, most of a survey using a non-probability sample can be perfectly fine but then a few points are not representative of the real world. Analysis of data using Simmons’ National Consumer Sample showed some deviations in topic areas such as:

  • Online shopping
  • Communications
  • Video streaming
  • Use of tech
  • Numerous psychographic attributes

Use of demo weighting does not address these differences; it only moderates them a little. The bottom line: do not ask questions about online usage or attitudes of a non-probability online sample.
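
A tiny simulation makes the weighting point concrete (entirely synthetic data of my own, not Simmons’ analysis): if the people who join an opt-in panel differ on a tech-related behavior within every demographic group, weighting the panel back to Census-style demographics barely moves the estimate.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 200_000

# Synthetic population: an age group and a tech behavior correlated with
# whatever drives people into an opt-in online panel.
age_young = rng.random(N) < 0.40
streams_video = np.where(age_young, rng.random(N) < 0.70, rng.random(N) < 0.40)

# Opt-in "panel": heavy streamers are far more likely to join, within every age group.
join_prob = np.where(streams_video, 0.08, 0.02)
in_panel = rng.random(N) < join_prob

# Post-stratify the panel back to the population's age distribution.
weights = np.ones(in_panel.sum())
for group, target_share in [(True, 0.40), (False, 0.60)]:
    mask = age_young[in_panel] == group
    weights[mask] = target_share / mask.mean()

print(f"Population streaming rate:    {streams_video.mean():.0%}")            # ~52%
print(f"Unweighted panel estimate:    {streams_video[in_panel].mean():.0%}")  # ~81%
print(f"Demo-weighted panel estimate: {np.average(streams_video[in_panel], weights=weights):.0%}")  # ~80%
# The demo weighting barely moves the tech-related estimate, because the bias
# operates within age groups, not between them.
```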

[personal note: this argument was made for years by Knowledge Networks in support of its probability-based panel called KnowledgePanel (now part of Ipsos). Unfortunately, these arguments typically fell on deaf ears; researchers acknowledged the numerous papers put out by KN on the topic, but getting them to actually spend the extra money for KnowledgePanel sample was a much more difficult task. I wish MRI/Simmons better success than we had!]

A Segments Journey – clypd, Acxiom, MRI/Simmons

This presentation discussed taking segments from MRI to other environments. The issue: audience consistency. Offline and digital measures represent identities and attitudes differently.

They followed five segments from MRI to the Nielsen-MRI fusion, and also MRI to Acxiom to DMPs, publishers, etc.

For the segments, they evaluated the segment sizes and how well the profiles compared (using 47 variables). As for the Nielsen-MRI fusion, there was good matching. With the digital fusion, the matching was (as expected) less good. Issues included ID fuzziness, loss of scale, drop off, and impact.

Correlations for digital segments were in the range of 0.62 to 0.71 compared with the Nielsen-MRI segments which were 0.89 to 0.97. But due to the inherent differences in the datasets, it should not be expected that digital segments match the correlation of the two probability-based datasets.

Standards, Research and Rationale – George Ivie, Media Ratings Council

Need to move from gross impressions to targeted characteristics. Need to increase the quality of the digital side of measurement to that of TV. The standard is based on consistency for video exposures. Provides stronger content focus for digital, stronger ad focus for TV.

There are rules for granularity and comparability, durations and completions, and practices for appending audience characteristics. Because of its establishment in current agency systems, the 30-second base is being used.

Is it for planning or currency? Both, but mainly as a currency. Planning tools, which are not the basis of sales, don’t require the same rigor. Duration weighted video impressions (DWVI) is getting almost all the debate and comment, despite taking up only 4 of the 70 pages of the draft document.
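
For readers unfamiliar with the concept, here is a rough sketch of what duration weighting means, assuming each view is credited in proportion to seconds viewed against the 30-second base noted above (my own reading of the idea, not the MRC draft’s exact formula):

```python
def duration_weighted_impressions(view_durations_sec: list[float], base_sec: float = 30.0) -> float:
    """Sketch of a duration-weighted video impression (DWVI) count:
    each view contributes viewed-seconds / base, capped at one full impression.
    This illustrates the concept; it is not the MRC draft's formula."""
    return sum(min(seconds, base_sec) / base_sec for seconds in view_durations_sec)

# Views of 30s, 15s, and 6s would count as 1.0 + 0.5 + 0.2 = 1.7 weighted impressions
print(duration_weighted_impressions([30, 15, 6]))
```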

Going Beyond :30s, :15s or :06s – Vas Bakopoulos, Mobile Marketing Association

This was the first study to pass the new ARF Certification Program and dealt with attention and cognitive load. Mobile ads do more in one second than we think. Attention is almost always similar and cognition follows closely.

Focus on creative in the first second. Ads that fail, fail in the first second. For longer exposures, are you overpaying for unneeded exposure if key effects are almost immediate?

Advance Toward Digital Audience Quality – Robin Opie, Oracle

Poor audience quality results from several factors:

  • Bad actors
  • Weak ID graphs
  • Over-extension of data
  • Quality of source data
  • Bad modeling

Oracle employs a number of different processes to combat bad quality, including:

  • Audience health
  • Model diagnostics
  • Ecosystem diagnostics
  • Real-world validation
  • ID graph accuracy

Grow Your Brand With Better Audience Targeting – Nishat Mehta, IRI

Top tips for targeting:

  • Quality @ scale (what is the highest quality at the highest scale?)
  • Recency of data
  • Future proofing (getting ahead of regulations – is data collected now in a way that will be legal in the future? Example – he feels traceable tender will not survive in the future)

Should a big brand be microtargeting? Does that defeat the purpose of building a big-umbrella brand? Plus he feels microtargeting is too creepy.

Paving the Way for News Organizations – Lisa Ryan Howard, NY Times

[note: This might have been the worst-presented session of the entire conference, with Ms Howard spending most of her time standing in one place, hand on hip, looking down to read the teleprompter… not the type of dynamic presenter needed at 5 PM.]

This presentation basically reviewed the NY Times’ advertising assets, and how they have adjusted to the current digital era. A brand needs to matter… and consumers need to know what matters. The NYT has expanded into audio with podcasts, and into TV with an upcoming series on the FX network.

The NYT ReaderScope application gives advertisers insights into what topics are being read by their targets, and insights into contextual advertising.

CampaignScope is an advertising tool that profiles content and what each impression was exposed to/read. They are currently still mostly audience buys, but want to move more advertising to contextual, which they feel is more advantageous both in terms of effectiveness and the reader experience.

END OF DAY ONE
See Day 2 Notes here

David Tice is the principal of TiceVision LLC, a media research consultancy.
Don’t miss future posts by signing up for email notifications here.
– Read my new book about TV, “The Genius Box”. Details here.

Is There An Elon Musk For Media Measurement?

News in recent weeks called out the troublesome business situation in the media measurement space. Both Nielsen (which is rumored to be finding it difficult to find a buyer) and Comscore (which forced out its CEO and president after less than a year) highlight the difficulties even the key companies in this space are experiencing, quite apart from the difficulty of measuring today’s media use.

[The following post is adapted from the recently published book “The Genius Box: How the “Idiot Box” Got Smart & Is Changing the Television Business”. “The Genius Box” is available in paperback or digital format from Amazon, Barnes & Noble, Apple iBooks, and most major online booksellers. A short-term discount is available at the BookBaby store through April 17th. Use code ARF2019PRINT for paperback, ARF2019EBOOK for ebooks.]

In most industries, the seller delivers a discrete product or service to the buyer – but in TV and media, buyers and sellers transact their business based on market research results (audience estimates, also called “ratings”). Because the audience measures account for billions of dollars in spending, media research has traditionally been subject to high levels of scrutiny, an important consideration to keep in mind when considering the future of audience measurement.

Disruption Isn’t As Easy As Some Might Think

It would seem that, in today’s world, a business such as audience measurement of electronic media – led by a near-monopolist for half a century – would be a ripe target for disruption and new entrants. But it is not that easy. There are numerous “structural” issues that stand in the way of progress, separate from developing a holistic, cross-platform solution.

These obstacles include:

  • Nielsen exploiting its monopoly power in terms of revenue and agreements, and generally implementing improvements only when faced with potential competitors
  • On the TV network side, a reluctance to fund two parallel measurements – most past models of Nielsen competitor roll-outs assume that the new entrant would have to run parallel with Nielsen for at least some period
  • TV network sales people preferring to sell a “Nielsen” currency because of the prestige of the name itself
  • Getting agencies to buy into an audience measurement system developed or led by TV networks, since the assumption is that a method led by the sellers will disadvantage the buyers.

Despite its protestations to the contrary, Nielsen wields the power of a monopoly – one that US courts said was OK, even before Nielsen gobbled up one of its only potential competitors, Arbitron, in 2013. Being the sole arbiter of the national television currency for decades, and of local television since 1993, Nielsen has been a perennial lightning rod for critics, with some good reason. It is expensive and seemingly slow to innovate unless it perceives a competitive threat.

In Defense of Nielsen

The ratings giant does have a difficult mission – trying to keep up with the constant change in media while still maintaining the strict quality its clients demand (or at least the previous generation of research heads used to demand). Media researchers have been bashing Nielsen for the three decades I have been in the industry, but no one yet has been willing to fully fund an alternative. For many in the industry, to paraphrase Churchill’s comment about democracy: Nielsen is seemingly the worst form of audience measure, except for all the others.

Despite calls for disruptive entrants, what I perceive from many in the industry is resignation to Nielsen’s dominance. As with the Borg from Star Trek: The Next Generation, “resistance is futile,” given that Nielsen has faced down about a dozen potential competitors as well as an antitrust suit over the past 50 years.

Who Could Step Up?

Only the most deep-pocketed, risk-tolerant firms would even be tempted to enter this space, as the barriers to entry for a new currency-quality measure are now so high. Alphabet, Amazon, and Facebook all have the money and would likely have a great deal of interest in the viewer data stream; but their positioning as competitors in this space – both between themselves and with regular television – would almost certainly prevent any one of them from creating a widely accepted advanced measurement.

Perhaps someone could interest Elon Musk once he gets a man on Mars – that might be the easier task!

David Tice is the principal of TiceVision LLC, a media research consultancy.
Don’t miss future posts by signing up for email notifications here.
– Read my new book about TV, “The Genius Box”. Details here.

Brexit Dramedy Streaming Daily

One of the benefits of being a consultant and working primarily at home is being able to have some entertainment on in the background. And the past few weeks have been full of drama – and farce – as I’ve followed Brexit coverage from the UK.

Let me step back a second. All of my family (except my brother) are English, so I’ve always been quite an Anglophile and have followed British politics and culture. There was the shock of the Brexit win in a UK referendum in 2016 and the ill-timed general election that cost Theresa May her majority. This has only been exceeded by the current rush to a Brexit deadline without an agreement being approved by Parliament.

The weeks prior to the original “Brexit Day,” this past Friday March 29th, have been filled with fascinating content from the floor of Parliament and political intrigue worthy of a BBC/PBS co-production. Whether a drama or farce is another question altogether.

I bring this up in this column for a number of reasons – the content, the featured players, and the role our contemporary streaming media world played in my ability to watch and listen to each day’s developments.

The Media

Let’s discuss the latter part first. While some Americans have discovered the weekly Prime Minister’s Question time on C-SPAN, broader live coverage of events requires going a little deeper on media’s bench. I found out that I could get a few good sources using a combination of Roku apps and YouTube. This was across a number of different devices – my Roku TV, the Roku box attached to another TV, the YouTube portal that is in my FiOS program guide, and YouTube apps on my phone, tablet, and computer. I was, admittedly, getting a little obsessive about watching!

Sky News streams its live broadcast on YouTube (Brexit or no Brexit) so that is a reliable source of coverage with analysis. Spottier coverage comes from ITV News (mostly they just have a feed from Parliament, sometimes they have a studio feed with analysts) or Channel 4. BBC News, surprisingly, does not stream live video coverage outside the UK (at least that I could *legally* access). But it does have a helpful live blog/Twitter feed on its website.

I even scouted around audio sources like the TuneIn and Radio.com apps. Here I found some free live streams from BBC4, BBC5, and independent radio stations in the UK. Unfortunately, the latter seem to lean towards US-style talk radio so I mostly skipped those.

The bottom line is that I’ve been able to stitch together a pretty decent coverage of events as they’ve transpired across the Atlantic.

The Content

I find the content quite entertaining to watch. After a couple of weeks, I’m now familiar with many of the idiosyncrasies of Parliament. My favorite is when insults are hurled at “the honourable gentleman” or “my right honourable friend,” because using a member’s name is a no-no.

The big winner, in my eyes, is the Speaker, John Bercow. Mr. Bercow could easily have a future after all this is over. He could be the UK equivalent of Judge Wapner or Judge Judy. His interjections of “Ooor-dah!” have created a new catch phrase in my house. Other popular Bercow-isms being learned by new viewers are “Division!” (members move to voting lobbies), “Lock!” (the lobbies are locked to record final votes) and “Unlock!” (the votes have been presented and the lobbies can be unlocked). All his expressions end in an exclamation point, by the way.

Aside from Mr. Bercow, we have the Prime Minister, Mrs. May, who continues to try over and over to get her agreement approved despite losing votes each time (three and counting). Most PMs would have been forced to resign by now, but she is like a relentless zombie. Across from her is Jeremy Corbyn, leader of the opposition Labour Party. He throws a lot of insults and implements blocking tactics but without really doing much to resolve this critical national issue.

Other characters are the leaders of the smaller parties like the SNP (Scottish National Party) and the DUP (Democratic Unionist Party). The latter enabled May and the Conservatives to form a government after the 2017 election, but they have held May’s Brexit agreement hostage over the way it treats Northern Ireland.

Another favorite of mine is member Michael Fabricant, who appears to sport an obvious and somewhat ridiculous Trump-like toupee. Or else, he just has had a very long run of bad hair days.

When Will It End?

At the moment, the way forward for the UK is quite unclear. There could be a last minute agreement; a crash out of the EU with no deal; a lengthy extension; or there could be a reversal of Brexit altogether. There is certain to be a general election before long. And depending on the final terms of a Brexit, the UK itself could be threatened by a vote for Scottish independence to allow it to rejoin the EU.

This “series” will be continuing for quite a long time, no matter what happens. I just hope my internet doesn’t give out in the middle of an important vote.

David Tice is the principal of TiceVision LLC, a media research consultancy.
Don’t miss future posts by signing up for email notifications here.
– Read my new book about TV, “The Genius Box”. Details here.

Friday Finds: “Apollo 11”

Friday Finds shares a piece of content I’ve recently experienced.

Today’s find: Apollo 11
Genre: Documentary, Feature Film
Origin: CNN Films, Statement Pictures
Find it: Cinemas (mostly art house or specialty)

In this edition of Friday Finds, it’s time to start celebrating the 50th anniversary of the Moon landing by seeing Apollo 11. This documentary is an excellent recap of the historic mission, solely using original film shot in that period. It’s highly recommended.

The story behind much of the film used is quite interesting. NASA had contracted with a film studio to cover the mission using theatrical, wide-screen 70mm cameras. Never used, this film sat forgotten for over four decades in a NASA storage facility – I keep imagining the warehouse in Indiana Jones – until the documentary crew discovered it.

The use of this large format film provides some segments in Apollo 11 that are of amazing quality. One sequence showing the Saturn V rocket lifting off and clearing the launch gantry is so crisp, it could have been CGI. The segments of the mission itself are supplemented by interesting non-flight sequences. One example is the crowds awaiting lift-off on the shores of Florida near the Kennedy Space Center – which included a dapper Johnny Carson in sports jacket and ascot.

Another choice by the filmmakers was to use historic audio as narration. Walter Cronkite and other news reporters describe different aspects of the flight at an overall level. Recordings from Earth-Apollo transmissions provide insight into flight details. This was fine for someone like me, who lived through the era, or those with a particular interest in the Space Race. But I fear it might be a little thin for younger people not so familiar with the Apollo flights.

Nine Year Old Space Cadet

As a nine-year-old space fanatic in 1969, I had followed every flight as closely as I could. For me, Apollo 11 is a nice reminder of those days. First Man, the biopic about Neil Armstrong released last Fall, was also a worthy reminder of the Space Race. Although it was a bit dry at times, this was perhaps indicative of its subject, who barely raised his blood pressure even during lift-off or the Moon landing.

It’s hard to believe that flight was half a century ago. Or that I’m old enough to remember something half a century ago! Despite the successes (and sacrifices) of the Space Shuttle program, it does seem a great shame that the Moon landing didn’t lead to a more extensive stay on the Moon or other journeys beyond Earth orbit. But it’s a great reminder of what the United States is capable of when enough people and resources are thrown at a problem.

David Tice is the principal of TiceVision LLC, a media research consultancy.
Don’t miss future posts by signing up for email notifications here.
– Read my new book about TV, “The Genius Box”. Details here.

What’s the Outcome of Outcomes-Based Sales?

Aside from “attribution,” the “outcomes-based sales guarantee” seems to be the emerging hot phrase in TV sales this winter. With the upfronts only a scant two months away, we are likely to hear more about this. But do we really know what these sales teams mean?

“Outcomes-based sales” has been thrown around by the likes of A+E Networks, NBCU, and Hulu in recent months. Just by dint of competition, other network groups are certain to want to get in on the conversation. And let’s face it – in an ideal world, the accomplishment of intended outcomes is the best way to measure the value of a media buy.

Those Devilish Details

But the devil is in the details, and of these we know very little from the few deals that have been discussed in public. One of the things a true measure of outcomes requires is some way to assign the different elements of a campaign to a specific outcome. This leads back to our other buzzword, attribution, a nascent science that has its share of opaque black boxes and blind spots.

But data aside, there is perhaps something more important to consider. As I note in my book The Genius Box, a full-scale outcomes-based measure of advertising should be considered a partnership between the media company, advertising brand, and its agency. There are so many elements at play that are out of the hands of the media company, it is hard to see how it, by itself, can guarantee an outcome.

Let’s quickly look at a few elements. A TV network (or AVOD service) can guarantee that it will put so many eyes of a particular target audience on an ad, in a safe brand environment, and perhaps in context relative to content. But at that point, many factors emerge that the network has no control over:

  • is the creative and the brand message of the ad interesting and compelling?
  • how well is the product priced in the marketplace?
  • do people perceive the brand well in the real world?
  • if pushing to a website or app, how well does that interface work for consumers? Is it easy to find the product online and to buy it?
  • if pushing to a retail location, are they conveniently located? Are the stores organized well so it’s easy to find the product? Are the stores clean? Is the staff welcoming and knowledgeable?

A Whopper of an Example

Let’s take a concrete example. I really like the recent Burger King ads with the (somewhat creepy) King. I see them quite often, and I used to eat at BK quite often. But in my area of the country, most BKs have closed; the ones that remain are often in run-down shape, with few customers, and workers who just go through the motions. It’s a sad place, and one I don’t really care to go to anymore. So should the TV network that put those BK ads in front of me be punished on an “outcomes” basis, when it’s really an issue with BK and its franchisees that comes between me and buying a Whopper?

Few of us are – or will be – on the inside of these deals, so it will be interesting to see how outcomes-based selling plays out in this and future upfronts, and how much detail can be gleaned. Perhaps they will start with simple measures like ticket sales or digital/foot traffic. But as the requests get more complex, with a focus on actual sales, I think there will have to be a recognition that media can only guarantee part of the sales outcome equation.

David Tice is the principal of TiceVision LLC, a media research consultancy.
Don’t miss future posts by signing up for email notifications here.
– Read my new book about TV, “The Genius Box”. Details here.

Dave the Research Grouch: Pew Goes Online

The Grouch was actually happy last week. The Pew Research Center announced it was moving away from telephone-based research to an online research panel recruited using a traditional, representative probability-based sample.

Pew is home to the Pew Internet Project and multiple other political and social research centers. It has long done research to a standard that The Grouch would tell anyone to emulate. But its one drawback was reliance on RDD telephone samples (even if gussied up with cell phone supplements).

There is another aspect of this move that makes The Grouch happy. It is another example that vindicates his belief in representative probability-based online research panels. This is because the Pew panel was developed using the same concepts and team as KnowledgePanel, the probability-based panel used by The Grouch for 15 years during his time with Knowledge Networks and GfK.

Now part of Ipsos after its acquisition of much of GfK, KnowledgePanel is almost unique in the world as the only large-scale implementation of an access panel of its type. Pew is not the first client to have used KN/GfK to recruit and maintain a proprietary panel using similar methods to KnowledgePanel (names you would know but I can’t share).

What’s the Big Deal?

The distinctive aspect of the recruitment of these panels, compared with opt-in internet panels, is people can’t volunteer to join the panel. An address-based sample from the US Postal Service is used to recruit the panel. Basically, you are eligible to be selected in a recruitment batch if you have a valid mailing address. And to enable a cross-section of all US homes, offline homes are given a netbook and internet access.

In this way, a true random selection can be made and response rates can be calculated, unlike with opt-in samples. This is because it is known exactly how many have been asked and how many cooperated. It was – and still may be – the only online research panel accepted for peer-reviewed academic research.
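
As an illustration of why that matters, here is a minimal sketch of the response-rate arithmetic that address-based recruitment makes possible, and that opt-in panels, with no defined denominator, cannot support (the numbers are hypothetical and the formula is simplified, not the full AAPOR definition):

```python
def recruitment_response_rate(addresses_sampled: int,
                              eligible_addresses: int,
                              households_joined: int) -> float:
    """Simplified recruitment response rate for an address-based sample:
    households that joined the panel / eligible sampled addresses.
    (Formal AAPOR rates also handle unknown-eligibility cases.)"""
    if eligible_addresses > addresses_sampled:
        raise ValueError("eligible addresses cannot exceed addresses sampled")
    return households_joined / eligible_addresses

# Hypothetical recruitment batch: 50,000 addresses drawn, 46,000 valid and occupied,
# 6,900 households complete recruitment and join the panel.
print(f"{recruitment_response_rate(50_000, 46_000, 6_900):.1%}")   # 15.0%
```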

I won’t dive much more into this whole topic. But there are clearly applications where a truly representative panel is a superior choice. These would include trying to nail down high-quality estimates for a population or for making important business decisions.  There are certainly uses for opt-in samples as well. These would be where the level of data quality needed may not justify the added research expense that results from the costs of recruiting and maintaining a probability-based research panel.

The Grouch Emerges

To get grouchy at least once in this post: too many experienced researchers today have no idea that a random sample doesn’t just mean a random pick from any sample source. The sample has to originate from a probability-based panel to be truly representative in the classical research sense. They also don’t realize that more sample doesn’t mean better data, or that an opt-in survey whose demos match Census distributions isn’t necessarily truly representative.

The use of an expensive recruited panel is never an easy sell in these days of procurement departments driving down costs, and where awareness of traditional measures of quality is quickly disappearing from the research gene pool. It is encouraging to see Pew step up and make the investment in quality sample. This should result in furthering their tradition of quality research.

David Tice is the principal of TiceVision LLC, a media research consultancy.
Don’t miss future posts by signing up for email notifications here.
– Read my new book about TV, “The Genius Box”. Details here.