
Archive for Research business

Is There An Elon Musk For Media Measurement?

News in recent weeks called out the troublesome business situation in the media measurement space. Both Nielsen (which is rumored to be having difficulty finding a buyer) and Comscore (which forced out its CEO and president after less than a year) highlight the difficulties even the key companies in this space are experiencing, quite apart from the difficulty of measuring today’s media use.

[The following post is adapted from the recently published book “The Genius Box: How the “Idiot Box” Got Smart & Is Changing the Television Business”. “The Genius Box” is available in paperback or digital format from Amazon, Barnes & Noble, Apple iBooks, and most major online booksellers. A short-term discount is available at the BookBaby store through April 17th; use code ARF2019PRINT for the paperback or ARF2019EBOOK for the ebook.]

In most industries, the seller delivers a discrete product or service to the buyer – but in TV and media, buyers and sellers transact their business based on market research results (audience estimates, also called “ratings”). Because the audience measures account for billions of dollars in spending, media research has traditionally been subject to high levels of scrutiny, an important consideration to keep in mind when considering the future of audience measurement.

Disruption Isn’t As Easy As Some Might Think

It would seem that, in today’s world, a business such as audience measurement of electronic media – led by a near-monopolist for half a century – would be a ripe target for disruption and new entrants. But it is not that easy. There are numerous “structural” issues that stand in the way of progress, separate from developing a holistic, cross-platform solution.

These obstacles include:

  • Nielsen exploiting its monopoly power in terms of revenue and agreements, and generally implementing improvements only when faced with potential competitors
  • On the TV network side, a reluctance to fund two parallel measurements – most past models of Nielsen competitor roll-outs assume that the new entrant would have to run parallel with Nielsen for at least some period
  • TV network sales people preferring to sell a “Nielsen” currency because of the prestige of the name itself
  • Getting agencies to buy into an audience measurement system developed or led by TV networks, since the assumption is that a method led by the sellers will disadvantage the buyers.

Despite its protestations to the contrary, Nielsen wields the power of a monopoly – one that US courts said was OK, even before Nielsen gobbled up one of its only potential competitors, Arbitron, in 2013. Being the sole arbiter of the national television currency for decades, and of local television since 1993, Nielsen has been a perennial lightning rod for critics, with some good reason. It is expensive and seemingly slow to innovate unless it perceives a competitive threat.

In Defense of Nielsen

The ratings giant does have a difficult mission – trying to keep up with the constant change in media while still maintaining the strict quality its clients demand (or at least the previous generation of research heads used to demand). Media researchers have been bashing Nielsen for the three decades I have been in the industry, but no one yet has been willing to fully fund an alternative. For many in the industry, to paraphrase Churchill’s comment about democracy: Nielsen is seemingly the worst form of audience measure, except for all the others.

Despite calls for disruptive entrants, what I perceive from many in the industry is resignation to Nielsen’s dominance. As with the Borg from Star Trek: The Next Generation, “resistance is futile,” given that Nielsen has faced down about a dozen potential competitors as well as an antitrust suit over the past 50 years.

Who Could Step Up?

Only the most deep-pocketed, risk-tolerant firms would even be tempted to enter this space, as the barriers to entry for a new currency-quality measure are now so high. Alphabet, Amazon, and Facebook all have the money and would likely have a great deal of interest in the viewer data stream; but their positioning as competitors in this space – both among themselves and with regular television – would almost certainly prevent any one of them from creating a widely accepted advanced measurement.

Perhaps someone could interest Elon Musk once he gets a man on Mars – that might be the easier task!

David Tice is the principal of TiceVision LLC, a media research consultancy.
Don’t miss future posts by signing up for email notifications here.
– Read my new book about TV, “The Genius Box”. Details here.

Dave the Research Grouch: Pew Goes Online

The Grouch was actually happy last week. The Pew Research Center announced it was moving away from telephone-based research to an online research panel recruited using a traditional, representative probability-based sample.

Pew is home to the Pew Internet Project and multiple other political and social research centers. It has long done research to a standard that The Grouch would tell anyone to emulate. But its one drawback was reliance on RDD telephone samples (even if gussied up with cell phone supplements).

There is another aspect of this move that makes The Grouch happy. It is another example that vindicates his belief in representative probability-based online research panels. This is because the Pew panel was developed using the same concepts and team as KnowledgePanel, the probability-based panel used by The Grouch for 15 years during his time with Knowledge Networks and GfK.

Now part of Ipsos after its acquisition of much of GfK, KnowledgePanel is nearly unique in the world as a large-scale implementation of an access panel of its type. Pew is not the first client to have used KN/GfK to recruit and maintain a proprietary panel using methods similar to KnowledgePanel’s (names you would know but I can’t share).

What’s the Big Deal?

The distinctive aspect of the recruitment of these panels, compared with opt-in internet panels, is that people can’t volunteer to join. An address-based sample from the US Postal Service is used to recruit the panel. Basically, you are eligible to be selected in a recruitment batch if you have a valid mailing address. And to enable a cross-section of all US homes, offline homes are given a netbook and internet access.

In this way, a true random selection can be made and response rates can be calculated, unlike with opt-in samples. This is because it is known exactly how many have been asked and how many cooperated. It was – and still may be – the only online research panel accepted for peer-reviewed academic research.
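
To make the contrast concrete, here is a minimal Python sketch of the arithmetic. The numbers are invented for illustration, and the calculation is a simplified stand-in for the more detailed AAPOR response-rate formulas – not how KnowledgePanel or Pew actually report their rates.

```python
# Hypothetical illustration: a response rate is only defined when the
# denominator (everyone invited) is known, as with an address-based sample.

def response_rate(invited: int, completed: int, ineligible: int = 0) -> float:
    """Completes divided by eligible invitees (a simplified stand-in for
    the more detailed AAPOR response-rate definitions)."""
    eligible = invited - ineligible
    return completed / eligible

# Address-based recruitment: the mailing list defines the denominator.
print(f"ABS recruitment: {response_rate(invited=10_000, completed=1_200, ineligible=400):.1%}")

# Opt-in panel: there is no defined denominator of people "asked,"
# so an equivalent response rate simply cannot be calculated.
```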

I won’t dive much more into this whole topic. But there are clearly applications where a truly representative panel is a superior choice. These would include nailing down high-quality estimates for a population or making important business decisions. There are certainly uses for opt-in samples as well – cases where the level of data quality needed doesn’t justify the added expense of recruiting and maintaining a probability-based research panel.

The Grouch Emerges

To get grouchy at least once in this post: too many experienced researchers today have no idea that a random sample doesn’t just mean a random pick from any sample source – the sample has to originate from a probability-based panel to be truly representative in the classical research sense. They also don’t realize that more sample doesn’t mean better data, or that an opt-in survey whose demos match Census distributions is not thereby made truly representative.
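
To illustrate that last point, here is a toy simulation – every number is invented, and the scenario is hypothetical – showing why weighting an opt-in sample to Census-style demo targets does not remove selection bias when the propensity to opt in is related to the behavior being measured.

```python
# Toy simulation (invented numbers): matching an opt-in sample's demos to
# Census targets does not fix selection bias when willingness to opt in
# is correlated with the behavior being measured.

import random
random.seed(42)

# Population: 40% "young", 60% "older". Heavy streamers are more common
# among the young, and (by assumption) far likelier to join an opt-in panel.
population = []
for _ in range(100_000):
    young = random.random() < 0.40
    streamer = random.random() < (0.60 if young else 0.30)
    opt_in_prob = 0.30 if streamer else 0.05
    population.append((young, streamer, opt_in_prob))

true_rate = sum(s for _, s, _ in population) / len(population)

# The opt-in "sample": whoever chooses to volunteer.
sample = [(y, s) for y, s, p in population if random.random() < p]
raw_rate = sum(s for _, s in sample) / len(sample)

# Post-stratify so the sample's young/older split matches the population's
# 40/60 split -- the "demos equal Census" fix.
target = {True: 0.40, False: 0.60}
share = {g: sum(1 for y, _ in sample if y == g) / len(sample) for g in (True, False)}
weights = [target[y] / share[y] for y, _ in sample]
weighted_rate = sum(w * s for (_, s), w in zip(sample, weights)) / sum(weights)

print(f"True streamer rate:        {true_rate:.1%}")      # ~42%
print(f"Opt-in sample, unweighted: {raw_rate:.1%}")       # ~81%
print(f"Opt-in sample, weighted:   {weighted_rate:.1%}")  # still ~79%, nowhere near truth
```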

The use of an expensive recruited panel is never an easy sell in these days of procurement departments driving down costs, when awareness of traditional measures of quality is quickly disappearing from the research gene pool. It is encouraging to see Pew step up and make the investment in quality sample. This should further its tradition of quality research.

David Tice is the principal of TiceVision LLC, a media research consultancy.
Don’t miss future posts by signing up for email notifications here.
– Read my new book about TV, “The Genius Box”. Details here.

Entering the Gen Z Zone

As guest-blogger for the 2019 Media Insights & Engagement Conference (staged by knect365), I am putting some of the overarching themes I heard at the conference in perspective. In this second post-conference piece, I discuss what was said at the conference about Gen Z, the rising group of young adults.

“A number of presentations at the 2019 Media Insights & Engagement Conference talked about the newest generation for us to worry about: Gen Z. Presentations or keynotes touching on Gen Z were given by Viacom, Freeform, ABC, TiVo, BBC America, and Zebra Intelligence/Ipsy…”

Read the rest of the post at the knect365 website here.


The MIE conference was held in Los Angeles January 29-31. Details about the conference can be found here.

David Tice is the principal of TiceVision LLC, a media research consultancy.
Don’t miss future posts by signing up for email notifications here.
– Read my new book about TV, The Genius Box. Details here.

Advice for Future Researchers

As a guest-blogger for the 2019 edition of the Media Insights & Engagement Conference (which is put on by knect365), I have put some of the themes I heard at the conference in perspective. In this first post, I discuss what was said at the conference about what future – or up-and-coming – researchers should know.

“Up-and-coming or future researchers were on the minds of several presentations at the 2019 Media Insights & Engagement Conference, which took place January 29-31 in Los Angeles. These included a panel of high-level research execs, a session from Viacom, and a tech perspective. And, at least two of the “Off the Record Industry Conversations” discussed future researchers, or researchers now vs. then.

There seemed to be three main themes that I took away from these sessions…”

Read the rest of the post at the knect365 website here.


The MIE conference was held in Los Angeles January 29-31. Details about the conference can be found here.

David Tice is the principal of TiceVision LLC, a media research consultancy.
Don’t miss future posts by signing up for email notifications here.
– Read my new book about TV, The Genius Box. Details here.

Scenes from the 2019 CIMM Summit

The eighth annual CIMM Cross-Platform Video Measurement & Data Summit was held on February 7th at the Time Warner Center in New York. As always, this annual fixture in the media research industry provided an interesting discussion about the state of media measurement.

Among the recurrent themes were:

  • C-3 and C-7 measures, meant to be temporary, are now 12 years old and do not seem to be going anywhere – despite not reflecting today’s viewers
  • Greater transparency is still needed at all levels
  • The need for “ground truth panels” seems to be making a comeback
  • Attribution continues to be the hot topic in measurement

In something of a change from previous editions, no one from Nielsen or Comscore (or any start-up measurement service, for that matter) presented or was part of a panel.

The hand-outs, press releases, and deck from the summit are available on the CIMM website, as are materials from earlier summits.

This was the first CIMM Summit since CIMM was acquired by the ARF back in October. I hope that CIMM and the ARF will continue to offer this summit, and to keep it free so that all those with an interest are able to attend.

Detailed Notes

Below are notes from each of the panels/presentations. These are by necessity distilled down based on how quickly I could take notes, so they do not reflect the totality of the discussions.

After a short kick-off by CIMM CEO and Managing Director Jane Clarke, the first session featured an interview of Krishan Bhatia of NBCUniversal.

  • C-3 and C-7 are outdated by today’s viewing habits
  • C-Flight introduction by NBCU came with little pushback. There is some friction around the work but not about the concept
  • They are working on attribution, campaign measurement, and how to prove performance across all NBCU media
  • He is skeptical that there will ever again be a one-size-fits-all solution
  • 34% of NBCU consumption is now on digital – expect it to be up to 50% very soon

The next session was a panel featuring Rob Master of Unilever, David Cohen of MAGNA, and Laura Nathanson of Disney to discuss business needs for cross-measurement and metrics.

  • RM: There is no common solution. Industry needs to develop a common vernacular to discuss. Can’t be perfect – what is now? near? next?
  • LN: Disney adjusted by moving all media sales under one group. The “plumbing” is an issue – need to plumb and test
  • DC: C-3 and C-7 are no longer sufficient. Need to move to exact commercial minute measurement. In the mid-/long-term, need to look at audible and visual measures across all platforms.
  • RM: Unilever doesn’t care so much about addressability – they have broad markets
  • LN: But then Unilever should use addressability to send different creative to various segments within a broad demo
  • One key thought to close:
    • RM: Transparency and dialog around counting
    • DC: Let’s “start by starting” – need to get moving
    • LN: Just because it’s hard doesn’t mean we shouldn’t do it – it’s the reason we should do it

Next, an overview of this year’s update of the CIMM TV attribution whitepaper was presented by Jim Spaeth and Alice Sylvester of Sequent Partners. Attribution was then discussed by Claudio Marcus of Freewheel and Lisa Giacosa of Spark Foundry.

  • What is the state of the art of attribution?
    • LG: I’m excited and hungry [for more]
    • CM: Like in the UK train stations, “Watch the Gap”. There are gaps in cross-platform attribution, and brand/longer-term effects
  • CM: Biggest effect so far on automotive. Auto had moved money from TV to digital – but attribution showed TV drove the digital exposures. Moving back to TV. Media & Entertainment another area – TV program promotion
  • LG: Need to understand content effects. Can’t just follow short-term ROI over a cliff.
  • JS: Need to use baseline sales as a basis for calculating incremental effects of attribution media (a simple sketch of this arithmetic follows below)
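
As an aside, the baseline logic in that last bullet might look something like the following minimal sketch. This is my own illustration with invented numbers, not anything presented at the summit.

```python
# Hypothetical illustration of incrementality against a baseline:
# attribution should credit the media only with outcomes above what the
# baseline would have produced anyway.

def incremental_lift(exposed_rate: float, baseline_rate: float, exposed_households: int) -> dict:
    """Conversions attributable to exposure, above the baseline expectation."""
    incremental_rate = exposed_rate - baseline_rate
    return {
        "incremental_rate": incremental_rate,
        "incremental_conversions": round(incremental_rate * exposed_households),
    }

# Invented numbers: 2.4% of exposed households converted vs a 1.9% baseline.
print(incremental_lift(exposed_rate=0.024, baseline_rate=0.019, exposed_households=500_000))
# Crediting the full 2.4% to the campaign would overstate its effect.
```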

Following a break, there were brief updates of the Taxi Complete (AD-ID and EIDR) and Data Label initiatives.

Another panel discussed Deduplicating Reach for Content and Ads, featuring Radha Subramanyam of CBS, Eric Cavanaugh of Publicis, Beth Rockwood of Turner, and Ed Gaffney of GroupM and moderated by Scott McDonald of the ARF.

  • EC: A good quality attribution should be getting deduplication as a byproduct
  • BR: how things fit together is a big issue
  • RS: need both counting and outcome measures. But we need to up-level the conversation: There are lots of products and data, but are we any closer to making sense of media and marketing together? Need a commonsense playbook at a high level.
  • EG: Need dedup in place before this year’s upfront – or the 2020 upfront.
  • RS: Vendors need to listen closely to needs. Their solutions are not necessarily addressing the needs.
  • EC: We also need to know about content to be able to place ads in context.
  • EG: Blindspots are getting smaller but there are new ones popping up every day
  • EC: We are getting one-off fixes to blindspots but need integrated response
  • RS: Integrating projectable and non-projectable samples is doable but needs more investment
  • BR: The technical issues of integration are easier than making the theory work
  • RS: In terms of privacy, one-to-many is less threatening than true one-to-one marketing

Is there One Metric to Rule Them All? Kavita Vazirani of NBCU, Brian Hughes of MAGNA, George Ivie of the MRC, and Sheryl Feldinger of Google discussed this topic.

  • BH: Need exact minute commercial ratings
  • SF: Need equitable (with TV) transparency at exposure and second-by-second ratings
  • KV: Need to measure effort vs return. Shouldn’t we be focusing on cross-platform measures rather than arguing about TV measures?
  • BH: already does second-by-second with MediaOcean, which is an old platform – so it can be done today
  • GI: MRC is working on standard definitions with partners and industry, aiming for impression-based duration-weighted data by 2021. Measures to include exposure, viewability, duration-weighting, complete exposure to an ad.
  • SF: Wants absolute exposure. Her work shows that a 5- or 10-second exposure elicits a similar response, regardless of the total length of an ad
  • KV: Disagrees. She claims the only time a 6 second ad worked was as part of a larger integrated campaign
  • GI: There is a big gap in content measurement in digital. For content measurement in a cross-platform world, customer journey analysis is something that should be syndicated (eg, third party)
  • All: agree audio status needs to be known (muted vs non-muted)

The last panel talked about Audience-Based Buying Platforms for TV/Video. This panel included Bryson Gordon of Viacom, Mike Law of Dentsu Aegis, Bob Ivins of NCC Media, and Mike Welch of Xandr.

  • BI: Inertia is real. Need to get marketers to “cross the bridge” and not turn back halfway across. We need standards and transparency.
  • MW: Can help reach low incidence/low viewing HHs
  • BI: Need an automated platform like Google and Facebook. Still too many manual transfers between different applications
  • BG: users on OpenAP have already created 1,872 segments
  • Opportunities in 2019
    • BI: More inventory and optimization
    • ML: Platform, optimization, interactivity
    • BG: Automated workflows, cross-platform delivery, unified posting
    • MW: Platform, true cross-platform delivery

To wrap up the afternoon, Jack Smith of GroupM told us about what he saw at the 2019 CES conference.

  • The three areas to pay most attention to are Assistants (Alexa, etc); Autonomy (self-driving cars); and Simulation (VR/AR).
  • It is important to understand how algorithms work – what products are suggested when Alexa is asked to buy something. Should brands have an avatar to speak for themselves, rather than relying on Amazon and the like?
  • Most everything will still be on screens. How are these to be measured?
  • Top takeaways: 1) Interface revolution. 2) Immersion environments. 3) The ethics of tech in general.

David Tice is the principal of TiceVision LLC, a media research consultancy.
Don’t miss future posts by signing up for email notifications here.
– Read my new book about TV, “The Genius Box”. Details here.

2019 MIE Conference Summaries

As guest-blogger for the 2019 edition of the Media Insights & Engagement Conference (which is put on by knect365), I wrote up summaries of the keynotes and the break-out sessions I attended. You can find the daily summaries on the knect365 website:

Day 1 of the 2019 MIE conference: Day 1 (Jan 29 2019)
Day 2 of the 2019 MIE conference: Day 2 (Jan 30 2019)
Day 3 of the 2019 MIE conference: Day 3 (Jan 31 2019)

Also, read my three pre-conference posts here:

2019’s New SVOD Services: Blitzkrieg or War of Attrition?

Connected TVs: Corporate Connections as Important as Internet Connections

Does AVOD News Reveal a New Phase of SVOD?

 

David Tice is the principal of TiceVision LLC, a media research consultancy.
Don’t miss future posts by signing up for email notifications here.
– Read my new book about TV, “The Genius Box”. Details here.

Most Popular Posts of 2018

2018 is coming to a close and it’s time to take a look back. Which TiceVision blog posts have had the most interest in the past year?

Third Place

In a virtual tie for third place are two posts:

3a. Quick Takes from the ARF AudienceXScience Conference – as the name implies, in this June post I share some of my thoughts on the 2018 edition of this long-running conference, the good (as always, some interesting sessions) and the bad (its lack of diversity in companies and presenters).

3b. Drake vs The Beatles: Let it Be – In this July post, I take issue with press comparisons that claim Drake outdoes The Beatles. These comparisons don’t take into account differences in how the Hot 100 is calculated now vs the 1960s.

Second Place

2. In second place for the year is Dave the Research Grouch: Another Data Fluff Piece. This post, one of the generally popular “Dave the Research Grouch” series, takes exception to press coverage of a data release by Inscape, the Vizio division that monetizes its TV set viewing data.

First Place

My most popular post of the year, by a margin of almost 2-to-1 over the runners-up, is Foreverspin Tops? More Like Forever Annoying Ads. This post has the longest legs of my 2018 posts, with at least a reader or two every week since being published last February. In the post, I take issue with the bad side of digital advertising, exemplified by the Foreverspin Tops ads that followed me for years.

Happy Holidays!

Whether you observe Christmas, Hanukkah, Kwanzaa, or another winter holiday, I hope all my readers have – or have had – an enjoyable holiday season. And best wishes for your happiness and success in 2019!

  • Don’t miss any of my 2019 posts by signing up for email notifications here
  • Haven’t read my new book about TV, The Genius Box? It’s available in paperback and e-book formats. Book details and ordering info here

David Tice is the principal of TiceVision LLC, a media research consultancy.

ARF-CIMM is good news, but let’s get CREative

There was interesting news in the audience measurement business yesterday. Several outlets covered the announcement that the Advertising Research Foundation (ARF) will acquire the Coalition for Innovative Media Measurement (CIMM). As a couple of articles noted, this is a continuation of the trend in consolidation in many sectors of the media business.

I’ve been on the sharp end of trying to sell syndicated research studies to a decreasing pool of clients because of consolidation. I can imagine that CIMM was dealing with similar issues among its membership in the wake of the Disney-Fox, Discovery-Scripps, and other recent deals. The ARF, facing an increased battle to be relevant, gets a high-profile, major initiative “off the shelf.” It seems to be a win-win situation for both sides.

Let’s Get CREative

But let’s be adventurous and go for a trifecta. There are also the assets of the Council for Research Excellence (CRE) sitting out there, in the wake of its defunding by Nielsen at the end of 2017. These would be a nice complement to CIMM’s body of work. In my own view, I tended to think of the CRE as dealing more with the micro issues of audience measurement while CIMM took much broader, macro brushstrokes. At the least, the CRE’s work deserves an archival home if (when?) the plug is finally pulled on the CRE website.

In any case, congratulations to the ARF and CIMM on their new marriage. Let’s hope this blended family adds some new audience research to its existing initiatives.

David Tice is the principal of TiceVision LLC, a media research consultancy.
Read his new book, “The Genius Box” – details here
Get notifications of new posts – sign up at right or at bottom of this page.

Label Surveys As Well As Data

It was with great interest that I read of the new “data transparency label.” This label is being released for comment by several of the media alphabet associations – the AMA, ARF, and CIMM.

[Example data transparency label – datalabel.org]

In the manner of the nutrition labels mandated by the FDA, the hope is that these labels will bring clarity to the torrent of data being aimed at big data applications in media, particularly advertising targeting. Adopting a very brief but standard reporting structure, the labels will give users of data a high-level assessment of the quality of the numbers being injected into their algorithmic black boxes. (And by the way, notice there is no equivalent transparency effort about those black boxes; but that’s another story.)

Survey Nutrition Too?

This is important news in that corner of the research, data, and analytics world. What would I like to see? An equivalent nutrition label for publicly released surveys, perhaps sponsored by the Insights Association (the 2017 amalgamation of CASRO and the MRA). The label would define a required minimum of information to be released with research conducted by its members. This would include items such as:

  • Who paid or sponsored the poll
  • A description of the sample
  • Mode of collection
  • Probability or non-probability sample
  • Dates for fielding
  • Standard error for probability samples, or some “equivalent” for non-probability samples

This information should be enough to quickly evaluate the bias and relative level of quality of a publicly released survey. In fact, some of this information may already be required, but in reality is rarely available in press articles or from the entity releasing the survey.
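
For illustration only, here is a minimal sketch of what such a label could look like as a data structure. The field names come straight from the list above; the margin-of-error helper is the standard 95% formula for a proportion from a simple random sample; and none of this is an official Insights Association or datalabel.org specification.

```python
# Hypothetical "survey nutrition label" carrying the items listed above.
# Illustrative only -- not an Insights Association or datalabel.org spec.
from dataclasses import dataclass
from math import sqrt
from typing import Optional

@dataclass
class SurveyLabel:
    sponsor: str                 # who paid for or sponsored the poll
    sample_description: str      # e.g., "1,000 US adults 18+"
    mode: str                    # e.g., "online", "telephone", "mail"
    probability_sample: bool     # probability vs non-probability design
    field_dates: str             # e.g., "2019-03-01 to 2019-03-10"
    sample_size: int
    margin_of_error: Optional[float] = None  # meaningful only for probability samples

def moe_95(n: int, p: float = 0.5) -> float:
    """95% margin of error for a proportion from a simple random sample."""
    return 1.96 * sqrt(p * (1 - p) / n)

label = SurveyLabel(
    sponsor="Acme Widgets (hypothetical)",
    sample_description="1,000 US adults 18+, national sample",
    mode="online",
    probability_sample=True,
    field_dates="2019-03-01 to 2019-03-10",
    sample_size=1_000,
    margin_of_error=moe_95(1_000),  # roughly +/- 3.1 points
)
print(label)
```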

Too Busy to Process

The press is too inundated with press releases and too busy filling a 24/7 demand for content to bother evaluating PR surveys anymore (read MediaPost‘s disclaimer on their Research Intelligencer newsletter). It’s all just grist for the content mill. But maybe with a very simple label, they will be tempted to think once in a while. At the least, the rest of us could do the thinking ourselves, given the right information.

David Tice is the principal of TiceVision LLC, a media research consultancy.
Read his new book, “The Genius Box” – details here
Get notifications of new posts – sign up at right or at bottom of this page.

Dave the Research Grouch: Variety and Cowan

Last week, Variety (and multiple others) published a report on a new study from Cowan & Co. on Netflix use, and it’s hard to decide at whom to get grouchy: at Variety, for writing up an article with no context, or at Cowan, for dropping survey results without publishing any details about its study.

Let’s look at the headline first – “Netflix Is No. 1 Choice for TV Viewing, Beating Broadcast, Cable and YouTube (Study)”. What, according to the article, did the survey results actually say? That people self-reported they used Netflix (27%) “more often” to view than cable TV (20%) or broadcast TV (18%).

Let’s parse this out a bit. First, consider that Nielsen reported in Q1 2017 that 90% of viewing time is still on traditional TV networks. Sure, there are issues with Nielsen but even so it is reasonable to assume that it’s not too far off. This means that in terms of actual viewing time among the total population, Netflix is nowhere near the most-watched platform despite what people may say they “use most often.”

Second is the rather subjective decision to compare broadcast and cable separately against Netflix. It’s been my experience that people with a streaming agenda tend to also be the ones who say viewers can’t tell or don’t care about cable vs broadcast. But that would ruin the headline, because it would change to “Legacy TV Networks Are No. 1 Choice for TV Viewing [38%], Beating Netflix [27%] and YouTube.”

This point is emphasized further when the data for homes with pay TV are shown. Most trusted studies show that a majority of Netflix homes still have pay TV in some form, and here the difference is even more pronounced, with 45% choosing legacy broadcast or cable and 24% Netflix. No attention-getting, disruptive headline from that.

The Frowns are Awarded

Thus a big Research Grouch frown is aimed at Variety (and other sites) for publishing these data without any context at all – context one would hope the beat writers in this area would know enough to include.

Cowan doesn’t escape without a frown either, for my pet peeve – promoting a study without publishing anything about it on its own site. I could not find anything on Cowan’s website or a press release with which to follow up. I understand that we don’t need a dissertation, but if you’re going to promote research, then at least have some basic details available to read outside the lens of the press, who (from experience) are notoriously fast-and-loose with their interpretation of research results. What age was the sample? When was it fielded? How was it weighted?

David Tice is the principal of TiceVision LLC, a media research consultancy.
Get notifications of new posts – sign up at right or at bottom of this page.