
Will Control of Data Control the Future of Media?

Please click on this link to see my second guest blog for this year’s Media Insights & Entertainment conference – Will Control of Data Control the Future of Media?

MIE conference logo

I will be blogging on-site with session recaps from the 2018 conference, February 6th through 8th… look for my updates via my Twitter feed or via the conference Twitter.

David Tice is the principal of TiceVision LLC, a media research consultancy.
Get notifications of new posts – sign up at right or at bottom of this page.

Scenes from the CIMM Summit

The following are some quick notes and comments I jotted down from CIMM’s Cross-Platform Media Measurement & Data Summit (CIMM = Coalition for Innovative Media Measurement). The CIMM Summit was held February 1st at the Time Warner Center in New York City, and was well attended by the movers and shakers of the media research world.

Slides and videos of some or all of the presentations should be available soon on CIMM’s website.

Notes and Comments

Opening with Jane Clarke, CEO of CIMM
Jane reviewed CIMM’s manifesto and progress made in the past year. [I think how much actual progress was made will depend on one’s viewpoint: buyer, seller, vendor, or the MRC (Media Ratings Council)]

Fireside Chat with Rishad Tobaccowala, Publicis
– he believes there will be a 20-30% reduction in impression-based advertising in the next few years
– “data connecting to data” is the next level of connectivity
– advertising has been in the segmentation business (breaking up large audiences); it’s now entering the aggregation business (collecting smaller, fragmented audiences to create reach)

Buyers and Sellers Speak Out (panel)
– highly entertaining panel!
– Cost per rating point (getting a program in front of a viewer) for ad-supported content is skyrocketing (Joe Marchese, FOX)
– Data about viewers of an ad impression are quickly becoming almost more valuable than the impact of the impression itself (Lou Paskalis, Bank of America)
– You cannot create the same reach of a Super Bowl ad with digital, even if you had a year [assuming full viewing of ad] (Marchese)
– We are now being forced to create valuable, engaging marketing content (Paskalis)

CIMM Attribution Provider Comparison Study (panel)
– Attribution models are like Christmas trees; you can turn them to look good and hide the bare spots (Newcombe Clark, AIG)

Disney-ABC Multiplatform TV Attribution, Phase 2
– There are three main drivers to multiplatform ROI: Audience size, consumer commitment to content, and consumer perception of quality (Cindy Davis, Disney/ABC). [this was highly quantitative but very reminiscent of survey-based work we did at Knowledge Networks in early/mid 2000s]
– “Smarts”, “edge”, and “relatability” are three of the eight Magid Emotional DNA attributes, and the best indicators of multiplatform ROI
– Report supposedly available at

Creating a Data Relationship with TV Viewers (Channel 4, UK)
– An audience analysis that was interesting if not particularly groundbreaking
– Did show some very cool personalized ads served on digital streaming
– Trying to sell on “ABC”: Audience demos, Behavioral info, Content/Context (building ads or pods related to the content being viewed)

Coffee Break

Industry Associations Speak Out (panel)
– “Muscle memory” [mentioned several times earlier in the day as a reason why various stakeholders don’t adopt new methods] is a good thing, because we need to consider both legacy standards and all viewers [eg, measurement needs to include all viewers, even those still watching VCRs, not just who consumes digital content] (George Ivie, Media Ratings Council)
– We as an industry need “TED Talks” to discuss marketing successes, not just continual talk about the challenges we are facing. (Bob Liodice, ANA)
– We eventually will need MRC audit and accreditation of sales or brand lift providers. If we are validating the data going in, then the loop should be closed by accrediting the lift calculations (Ivie)

Who’s Getting It Right? (panel)
– We need progress not perfection (Kate Sirkin, Publicis)
– Gaps that need to be filled:
— Complete multiplatform system, both pipes and data (Brian Hughes, MAGNA)
— Vendors need to take the time to understand our business to know what the business questions are (Lisa Heimann, NBC)
— Measuring attention or engagement; Magid’s Emotional DNA doesn’t scale (Howard Shimmel, Turner)
— Be prepared to validate your methods (Daniel Slotwiner, Facebook)
— Transparency and validation. Measurement is now a team sport (Elissa Lee, Google)

Programmatic TV (panel)
– It still takes a long time to evaluate a campaign, up to six months (Dan Aversano, Turner)
– We could use a quick read on campaigns using proxy data (Greg Pharo, Coca-Cola)
– If you make one [national] ad addressable, then the whole program can’t be C3 rated by Nielsen’s rules (Aversano)
– There are too many layers, each with their hand out for a piece of the pie; this can force us to do what we CAN rather than what we would LIKE (Mike Bologna, One2One Media)
– Cycle times are becoming more and more compressed between pitch, sale, and execution (Aversano)

Is the TV Industry Ready for Ad Ratings?
– Results of 27 interviews with industry leaders by Artie Bulgrin
– In 1987, the PeopleMeter came on; in 2009, C3 ratings; in 2017, separate measures of content and of ads
– Having a standard cross-platform currency is seen as important but NOT critical
– Having an accurate measure of net reach and duplication IS seen as critical but doesn’t have to be “currency quality”


Dave the research grouch: another viewing data fluff piece

Time to call out another example of “publicity research” that seems to say something but proves very little. Today’s fickle finger gets pointed at Inscape, Vizio’s division that is trying to monetize their TV set data, and data released to Deadline and Broadcasting & Cable about 2017 viewing.

Just quickly looking at the data in the article, I suspected that any measure that had “Teen Mom 2” in the top five programs viewed via DVR, VOD, and OTT may be somewhat skewed. I decided to look deeper and see if there was any definition of the sample – who was measured, etc.

The short version is there is such a paucity of supporting information that it’s hard to tell anything about these reported data. The article has some of the usual phrases thrown around to impress those who don’t know any better about audience measurement – “second-by-second viewing,” “7.7 million households,” “can go granular and capture more precise information on viewing.” As the old saying goes, one can be very precise but not very accurate. Large samples and second-by-second measures don’t guarantee accurate, reliable measures – only a lot of data points.

What we can glean from the article is that these data represent sets in 7.7 million homes, or roughly six percent of all TV households. But we don’t know what proportion of sets these Vizio sets represent (out of the roughly 2.7 sets per TV home), which rooms these sets are in, or who is viewing them. Plus, we don’t know the profiles of those who have given opt-in permission to be tracked by Inscape – how different are they from Vizio owners overall, or the general population? This all informs the value of the information.
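As a quick sanity check of that six percent claim, the arithmetic works out (note the TV-household base below is my own assumption, roughly Nielsen’s 2017-18 US universe estimate, not a figure from the article):

```python
# Back-of-envelope coverage check; the 119.6M TV-household base is my
# assumption (roughly Nielsen's 2017-18 US universe estimate), not Inscape's.
inscape_homes = 7_700_000
tv_households = 119_600_000
coverage = inscape_homes / tv_households
print(f"Inscape homes cover about {coverage:.1%} of US TV households")  # about 6.4%
```

Of course, set coverage is not the same as viewer coverage – which is exactly the gap in the reporting.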

With their ACR measurement, Vizio/Inscape claims to be able to measure broadcast, DVR, OTT, and VOD viewing on a set. Again, there is no explanation of how this is accomplished (eg, how is the ACR match attributed to a source?) or what other limitations there may be in the measurement. Are there gaps in the measurement that would be helpful to know about when assessing the data?

I did go further afield, looking for a press release or other material beyond what was published in Deadline or B&C – but there is no other information I could find at the Vizio or Inscape websites.

Free your data! (at least the basics)

Thus the bottom line is that there is little supporting data or context, and regular readers know I don’t think too highly of data published without any context. Obviously a press release doesn’t have to be a dissertation, but there should be some basic information available.

As a result, in my opinion these viewing data only really represent some measure of some viewing by some unknown population – not exactly the kind of information that can be used “as is” for meaningful extension of industry knowledge. Presumably Vizio/Inscape has more detailed and useful data they share with their clients and partners – and it would be nice to see some of these basics also shared when pursuing press attention.


What those PwC-Netflix stories yesterday didn’t mention

Please note the TiceVision blog is on a reduced publication schedule through Jan 2.

Yesterday, there was a raft of stories about a new PwC report that claimed Netflix had now pulled even with pay TV in subscribers. This seemed a bit off to me, based on my own research done earlier this year, and my recall of Netflix’s own numbers. For Q3 2017, Netflix reported 53 million US subscribers – roughly 20% of the 250 million adults in the US or, if you assume one sub per home, 40% of US households. Even accounting for churn over a survey period and people sharing passwords, how did PwC end up, as these articles reported, with 73% of Americans being Netflix users?
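The back-of-envelope math behind those figures is simple enough to check (the 250 million adults figure is from above; the ~126 million household count is my own assumed figure, in line with US Census estimates for 2017):

```python
# Rough check of Netflix's reported Q3 2017 US reach; the household base
# is my assumption (US Census put 2017 at roughly 126M households).
netflix_subs = 53_000_000
us_adults = 250_000_000
us_households = 126_000_000

pct_of_adults = netflix_subs / us_adults          # about 21%
pct_of_households = netflix_subs / us_households  # about 42%, assuming one sub per home

print(f"{pct_of_adults:.0%} of US adults, {pct_of_households:.0%} of US households")
```

Either way you slice it, the company’s own numbers land nowhere near 73% of Americans.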

I decided to dig a little deeper, which many of the reporters seem unwilling – or not aware enough – to do. The digging showed a different picture than painted in many of the articles.


The first big consideration is that the PwC page describing the study does clearly state that the sample is people age 18-59 with a household income over $40,000/year – far from a representative sample of all Americans or all households. Perhaps some reporters mentioned that, but none of the half-dozen articles I read made that distinction. This is important, since that age range and income bracket would be much more likely to have internet in the home and thus be able to subscribe to Netflix.


Another aspect not discussed anywhere on PwC’s landing page for the report, or in the downloaded report itself, is the methodology or sample used. I’ll presume it was an online sample. Again, this would not be representative of Americans as a whole; since by definition an online sample consists only of internet users, this sample would of course be more likely to subscribe to or use Netflix. And there is no mention anywhere of whether Spanish-dominant persons were interviewed so as to be inclusive of “all” Americans.


Another issue in the press articles is that the terms Netflix “subscribers” and “users” were used indiscriminately – some articles used one term, some the other. The PwC report specifically uses the term “users” – an important distinction some reporters missed, as users may or may not be subscribers (in past research I found about 15% of Netflix users say they use other people’s logins). By using “subscribers,” some of the reporters add to the misinterpretation of the data. And neither the articles nor the PwC supporting material defines “users” – are these “ever” users of Netflix, regular users, or something else? And what is the difference in time spent on Netflix versus pay TV channels?

Always consider the limitations of any report

Words have meaning. Methods have consequences. Firms that publish – and the press that covers – reports such as these should keep that in mind. While this PwC study may show some interesting trends for the slice of the US public to which it applies, it should not be mistaken for a definitive profile of Americans’ access or usage, which is how many of the press articles presented it.


Throwing cold water on cord-never hyperbole

Last week’s article in MediaPost on the TV of Tomorrow conference included another example of the hyperventilating hyperbole that unfortunately drives our industry’s conversation about cord-cutters and cord-nevers.

In an unattributed quote in the article, the author writes “Millennials are cord-nevers who didn’t grow up in a world of TV networks.” Whether this is her opinion, or something a speaker said, is not specified. But whomever the quote belongs to, they are vastly incorrect.

The definition of a millennial varies, but for the purposes of this post, let’s say it’s people born between 1982 and 2004. And, again, for the purposes of this post, we’ll use high quality data from The Home Technology Monitor, published by SRI (1981-2001), Knowledge Networks (2001-2011), and GfK (2012 to present)*. This respected source has always used a representative probability-based sample that includes all homes, including offline and Spanish-dominant homes.

So let’s look at a couple of years with millennial kids:

— 1999 (kids 0 to 17 years old are millennials): pay TV penetration in these homes was 78%

— 2004 (kids 0 to 17 years old are millennials): pay TV penetration in these homes was 81%.

As can be seen from these two snapshots, millennials most definitely grew up in a corded world. Their homes were very familiar with TV networks and pay TV.

In fact, even this year – when adult millennials are ages 18 to 35 – the presence of traditional pay TV in households headed by a millennial is still a majority, at 59%.

Perhaps the author (or whomever she quoted) meant Gen Z and not millennials. Or maybe it was meant to say one of these groups is somewhat more likely to be cord-nevers. But the statement as published is an example of the received (incorrect) “wisdom” that comes out of many digital-focused reports and presentations from people unfamiliar with the long-term trends of TV reception and use.

All that being said, are cord-cutting and cord-nevers a significant issue for the TV industry? Do TV stakeholders need to learn to play in an increasingly streaming world? Absolutely. But let’s not exacerbate the issue by passing along poor data.

And lastly, don’t get me started on later in the column when a media research leader used her child as an example of changing media use – a topic I covered some years ago here.

*Disclosure: the author ran The Home Technology Monitor between 1995 and 2017, and was employed by GfK until October 2017


A great example of the effects of weighting

Nate Silver’s FiveThirtyEight website, now owned and hosted by ESPN, features an interesting mix of articles that generally fall into two disparate topics: sports and political polling. An article posted yesterday by Silver himself offered a stimulating discussion of the disparities in the polls for the Alabama senate election, delving into topics that seemingly matter less and less to researchers: the impact of modes of interview, response rates, representativeness of samples, and weighting.

Let’s just focus on the last topic, weighting. Even among experienced and savvy researchers, weighting can still be a bit of an arcane dark art. Among the inexperienced – or those who never learned any better – weighting is just a magic black box that supposedly fixes whatever might be wrong with a crappy (to use a highly technical term) sample. Silver’s article included a great, real-life example of the effects of weighting.

To their credit, especially given my usual attitude towards self-service survey companies, SurveyMonkey published 10 different outcomes of their data for the Alabama senate race – the underlying unweighted data were the same; only the weights differed. The various outcomes showed a range from a 9-point Jones lead to a 10-point Moore lead. As Silver points out, these results emphasize “the extent to which polling can be an assumption-driven exercise”.

These results also show the impact of not just demographic weighting but attitudinal or behavioral weighting – in this case, forcing a sample to match things like political affiliation or likelihood to vote from a previous election, which may or may not reflect contemporary conditions.
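To make that effect concrete, here is a minimal post-stratification sketch with invented numbers (these are illustrative only, not SurveyMonkey’s actual data): the same raw sample, weighted to two different turnout assumptions, produces opposite winners.

```python
# Minimal post-stratification sketch with invented numbers: the same raw
# sample, weighted to two different turnout assumptions, flips the winner.

raw_sample = {
    # party: (respondents, share supporting candidate A)
    "dem": (300, 0.92),
    "rep": (450, 0.10),
    "ind": (250, 0.55),
}

def weighted_margin(sample, targets):
    """Weight each party cell to its assumed share of the electorate and
    return candidate A's margin in a two-way race."""
    total_n = sum(n for n, _ in sample.values())
    a_share = 0.0
    for party, (n, pct_a) in sample.items():
        cell_share = n / total_n
        weight = targets[party] / cell_share  # post-stratification weight
        a_share += cell_share * weight * pct_a
    return 2 * a_share - 1

# Two plausible assumptions about who will actually turn out:
turnout_2016_like = {"dem": 0.35, "rep": 0.45, "ind": 0.20}
turnout_dem_surge = {"dem": 0.45, "rep": 0.35, "ind": 0.20}

print(f"2016-like electorate: A by {weighted_margin(raw_sample, turnout_2016_like):+.1%}")
print(f"Surge electorate:     A by {weighted_margin(raw_sample, turnout_dem_surge):+.1%}")
```

Every cell’s raw responses are identical in both runs; only the assumed composition of the electorate changes, and that alone swings the result from one candidate to the other by double digits.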

Getting to the (data) point

The point of this post is not to denigrate weighting but merely to point out this nice example of the potential impact of weighting and its potential influence on data outcomes and insights. Get your weighting scheme right, and it no doubt helps. Get it wrong, and it can lead to incorrect conclusions that have significant implications for a candidate or for the success of a business (depending on the sphere of the survey).

Better yet, if you start out with a high-quality sample – rather than the least expensive one – and demand high cooperation rates, weighting should be of secondary consequence.


Dave the research grouch: Attribution

Attribution seems to be an increasingly popular topic. Recently several networks decided to pilot a new attribution model that includes TV. This effort is intended to head off a number of existing attribution models that only include digital exposures.

The primary problem with either flavor of attribution model is that neither includes all sources of exposure, only those that are easiest to measure or that vested interests are willing to pay to measure. Even if we have digital and TV, what about radio (valued for the last exposure when heading to a store), magazines (valued for trust and a platform for presenting detailed ads), sponsorships (affinity with the consumer’s “tribes”), and so forth?

This is a bit reminiscent of so-called “cross-platform media” measures over the past decade, a term that drove me crazy, because most really only covered digital media. And even then, some were only desktop or mobile browsers, no apps included; and forget about inclusion of streaming to digital TV.

A second issue is how these attribution models are presented, which typically is the relative value of each media source’s influence on a purchase. Up to now, this usually meant that the last medium consumed gets the credit, with little or no “attribution” to the stack of other media sources which may have come before. What about the ad that informed a consumer that a product existed, or convinced a consumer of the value of a brand, or gave a consumer the information on a product that would lead to a purchase?

Putting aside the issue of whether all relevant media are measured, a larger overarching matter is the seemingly intractable issue of measuring the influence of every exposure on every medium and its relative impact on purchasing – what I would think are necessary variables for true attribution. Even for exposure and time spent, some media will have to be respondent-reported, not passively measured. Attentiveness to a property or the ad within it – that’s going to rely on self-report. The trust in, and influence of, each property and its ads, for the specific product of interest – self-reported also. How are these models going to measure all of these aspects for each ad exposure, with so many factors being subjective?

To what I attribute my grouchiness

I’m not naive enough to expect that such a data vacuum will remain empty, even if the solutions offered are questionable in some respects. Solutions are filling this space, and some may be very successful businesses – there have certainly been enough examples of that over the years. My grumble is that the nomenclature used and the way data are presented in these services may not accurately reflect their limits, leading to specious headlines and underinformed users.


It’s Audience Measurement Groundhog Day, Again

As reported yesterday by AdWeek, Linda Yaccarino of NBC Universal held her invitation-only meeting about shaking up the legacy approach to video measurement. Going by the AdWeek report, as well as several stories in other publications, it seems the meeting – which promised “a meaningful plan for action and follow” – didn’t provide the industry with much in the way of specifics or general guidance.

Ever since this meeting was announced with much fanfare, it seemed a questionable endeavor. Giving NBCU the benefit of the doubt that this was a sincere attempt rather than a stunt, what exactly did Yaccarino hope to accomplish in a day that the Council for Research Excellence (CRE), Coalition for Innovative Media Measurement (CIMM), the Media Ratings Council (MRC) – among others – have spent the last decade or more exploring?

Ever since I started my second career in media research in 1994, there have been calls to revise legacy measurement, generally centered around perceived shortcomings of Nielsen. In the ’90s, it was the always entertaining, if cringe-inducing, public excoriations of Nielsen by NBC’s Nick Schiavone. By the early 2000s, it was issues around DVRs. Then in the late 2000s, C3/C7 ratings. Now, how to come up with a single integrated measure of video across all platforms.

Bill Murray in Groundhog Day

If Bill Murray were in media research, maybe this is how his Groundhog Day would go: he’s destined to wake up every day until Nielsen, ComScore, or someone else manages to measure all his viewing across TV, phone, and tablet.

Despite these repeated bouts of conscience, the truth is that meaningful change won’t happen until media companies on both sides of the table are ready to withhold their cash from the currencies – but they can’t withhold it without another currency to buy and sell on.

Structural issues

Also from past experience, there are “structural” issues that stand in the way of progress, quite aside from the ability to evolve a solve-all solution…

— Nielsen will exploit its monopoly power (yes, the US courts found it is a monopoly, but that it’s OK), and rapid improvements generally only come in response to potential competitors

— the barrier to entry is now so high that only the most deep-pocketed, risk-friendly firms would even be tempted (Alphabet, are you listening?)

— on the network side, there will almost certainly be a reluctance to fund two parallel measurements (most past models of competitive roll-out assume that the new entrant would have to run parallel with Nielsen for at least some period)

— also, network sales people will prefer to sell under a Nielsen currency because of the prestige of the name. (this I know from personal experience, because my team lost two small audience measures simply because another company had a bigger name than we did)

— getting agencies to buy into a network-led development is also problematic (the assumption is that a method led by the sellers will disadvantage the buyers)


Poor Bill the media researcher. Once today’s issues are resolved, the complaints about measuring 5G mobile or ATSC 3.0 TV sets will start to roll in… time to wake up again!


Quick thoughts on Nielsen-Comcast

Today’s quick thought…

With Nielsen’s announcement of their partnership with Comcast on set-top box data for local market audiences, it’s clear that measurement purists will have to accept the fact that future audience measures will be based mostly on “virtual” viewers, not real ones. Set-top box data – like router meter data for streaming devices – is a device measure, not a persons measure. Therefore, actual viewing – if there is viewing, who is viewing, how long is the viewing, and coviewing – all has to be modeled.

Realistically, there are probably no other viable options given today’s audience fragmentation and the need for program-level ratings. And tuning data will certainly be more reliable than paper diaries were. But buying and selling on modeled viewers has to scare buyers and sellers alike.

While a number of services have been doing modeling from STB data for a number of years, none have had to carry the mantle of currency like Nielsen. There are a lot of potential pitfalls, from the quality of the data stream from the STBs to how the models are derived and validated, so it will be interesting to see how this plays out. And whether the final resulting service will be accredited by the MRC (the subject of a good article here).


Dave the research grouch: methods, please

Echo Dot

Yesterday, TechCrunch reported on a new projection from Juniper Research saying smart speakers (Echo, Home) would be in 55 percent of US homes by 2022. Having measured adoption of media devices for 20 years, this sounded more than a little like another episode of marketing hype that drives executives into a tizzy (Researcher, we need to be in front of this!) and researchers crazy (Boss, you can’t believe everything you read!).

My first instinct was to try to put this estimate in context with data I do trust. Looking at long-term trends from GfK’s Home Technology Monitor*, smart speakers reaching 55 percent by 2022 would make them the second-quickest-adopted media device of the past 35 years – behind only DVD players, and ahead of VCRs, cell phones, tablets, and broadband. I therefore decided to dig a little deeper.

Following the link to Juniper from the TechCrunch article, and then to various pages at Juniper including its press release, there is no specific information at all about how these estimates were derived. Consumer surveys? Interviews with experts? By asking Siri? Crystal balls?

I can’t say Juniper’s projection is wrong, but I can say that such a lack of transparency is a key issue, not only for research companies but for those who publish such findings. While TechCrunch (and most publishers, for that matter) may consider research minutiae boring for its readers, it should provide at least some context in which to evaluate claims like these. As for research companies: you can only help your stature by, again, providing some minimum level of information. If you’re doing high-quality research, it shouldn’t be an issue – it should be a selling point – so why hide it?

My own experience in researching so-called “smart speakers” over the past year (using a large-scale, projectable sample of consumers) is that expected uptake levels are quite low. Until Amazon and Google convince consumers to do more than listen to Spotify or Pandora, it’s my opinion these devices will be challenged to meet the high adoption rates seen for other devices in the past.

*disclosure: I ran The Home Technology Monitor until leaving GfK in October 2017
