
Archive for Dave the Research Grouch

Dave the Research Grouch: Pew Goes Online

The Grouch was actually happy last week. The Pew Research Center announced it was moving away from telephone-based research to an online research panel recruited using a traditional, representative probability-based sample.

Pew is home to the Pew Internet Project and multiple other political and social research centers. It has long done research to a standard that The Grouch would tell anyone to emulate. But its one drawback was reliance on RDD telephone samples (even if gussied up with cell phone supplements).

There is another aspect of this move that makes The Grouch happy. It is another example that vindicates his belief in representative probability-based online research panels. This is because the Pew panel was developed using the same concepts and team as KnowledgePanel, the probability-based panel used by The Grouch for 15 years during his time with Knowledge Networks and GfK.

Now part of Ipsos following its acquisition of much of GfK, KnowledgePanel is virtually unique: it remains the only large-scale implementation of an access panel of its type. Pew is not the first client to have used KN/GfK to recruit and maintain a proprietary panel built on methods similar to KnowledgePanel's (names you would know but I can't share).

What’s the Big Deal?

The distinctive aspect of the recruitment of these panels, compared with opt-in internet panels, is that people can't volunteer to join. An address-based sample from the US Postal Service is used to recruit the panel: basically, you are eligible to be selected in a recruitment batch if you have a valid mailing address. And to ensure a cross-section of all US homes, offline homes are given a netbook and internet access.

In this way, a true random selection can be made and response rates can be calculated, unlike with opt-in samples. This is because it is known exactly how many have been asked and how many cooperated. It was – and still may be – the only online research panel accepted for peer-reviewed academic research.

I won't dive much more into this whole topic. But there are clearly applications where a truly representative panel is the superior choice, such as nailing down high-quality estimates for a population or informing important business decisions. There are certainly uses for opt-in samples as well: cases where the level of data quality needed may not justify the added expense of recruiting and maintaining a probability-based research panel.

The Grouch Emerges

To get grouchy at least once in this post: too many experienced researchers today have no idea that a random sample doesn't just mean a random pick from any sample source. The sample has to originate from a probability-based panel to be truly representative in the classical research sense. They also don't realize that more sample doesn't mean better data, or that an opt-in survey whose demos match Census distributions isn't thereby truly representative.

The use of an expensive recruited panel is never an easy sell in these days of procurement departments driving down costs, and with awareness of traditional measures of quality quickly disappearing from the research gene pool. It is encouraging to see Pew step up and make the investment in quality sample. It should further their tradition of quality research.

David Tice is the principal of TiceVision LLC, a media research consultancy.
Don't miss future posts by signing up for email notifications here.
– Read my new book about TV, "The Genius Box". Details here.

Most Popular Posts of 2018

2018 is coming to a close and it’s time to take a look back. Which TiceVision blog posts have had the most interest in the past year?

Third Place

In a virtual tie for third place are two posts:

3a. Quick Takes from the ARF AudienceXScience Conference – as the name implies, in this June post I share some of my thoughts on the 2018 edition of this long-running conference, the good (as always, some interesting sessions) and the bad (its lack of diversity in companies and presenters).

3b. Drake vs The Beatles: Let it Be – In this July post, I take issue with press comparisons that claim Drake outdoes The Beatles. These comparisons don't take into account differences in how the Hot 100 is calculated now versus the 1960s.

Second Place

2. In second place for the year is Dave the Research Grouch: Another Data Fluff Piece. This post, one of the generally popular "Dave the Research Grouch" series, takes exception to press coverage of a data release by Inscape, the Vizio division that monetizes its TV set viewing data.

First Place

My most popular post of the year, by a margin of almost 2-to-1 over the runners-up, is Foreverspin Tops? More Like Forever Annoying Ads. This post has the longest legs of my 2018 posts, with at least a reader or two every week since being published last February. In the post, I take issue with the bad side of digital advertising, exemplified by the Foreverspin Tops ads that followed me for years.

Happy Holidays!

Whether you observe Christmas, Hanukkah, Kwanzaa, or another winter holiday, I hope all my readers have – or have had – an enjoyable holiday season. And best wishes for your happiness and success in 2019!

  • Don’t miss any of my 2019 posts by signing up for email notifications here
  • Haven’t read my new book about TV, The Genius Box? It’s available in paperback and e-book formats. Book details and ordering info here

David Tice is the principal of TiceVision LLC, a media research consultancy.

Dave the Research Grouch: iSpot.TV and MediaPost

Fall is in the air, Christmas ads have started on TV, and the Research Grouch has emerged Grinch-like from his cave. Today's offenders are iSpot.TV and MediaPost – because it always takes both a company looking for publicity and a news outlet willing to publish it.

Thursday's story in MediaPost, "Shorter TV Ads Command More Viewer Attention," discussed findings from iSpot.TV's analysis of "37,854 TV commercials across 4.7 million TV ad airings." The first alarm bells go off. When huge numbers are tossed around, it's often to lend legitimacy to the sketchier numbers that follow – as if large sample sizes were some sort of guarantee of quality.

Strike Out

The article noted several differences in "Attention Score" – a score that was left undefined. I don't expect to be told how it's calculated, but I do expect to be told how "attention" is defined, since presumably these scores are calculated solely from digital data and not from tracking eye gaze. Strike one on MediaPost.

Strike two comes from the conclusion that 10-second commercials have a better Attention Score than 30-second ads. The scores are "91.0 to 91.5" and "90.0", respectively. But no context is given in the article as to what constitutes a significant difference. Delving into iSpot.TV's own report, I found that they do say a difference of "a few points is significant." Assuming "a few" has its typical meaning, that would be 3 to 4 points. Apply this to the headline finding, and the difference of 1 to 1.5 points is not really significant.

Another difference called out as "much more notable", between the 10-second spots and 60-second spots (a score of 88 to 88.5, and thus a difference of 1.5 to 2 points), appears not to be significant either.
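To make the arithmetic on the headline comparison explicit, here is a minimal sketch (in Python) using the scores quoted above; reading "a few points" as a threshold of 3 points is my assumption, not iSpot.TV's.

    # Back-of-the-envelope check of the headline comparison, using the reported figures.
    ten_second_scores = (91.0, 91.5)    # 10-second ads, as reported
    thirty_second_score = 90.0          # 30-second ads, as reported
    threshold = 3.0                     # assumes "a few points" means at least 3

    gaps = [abs(score - thirty_second_score) for score in ten_second_scores]
    print(f"Gap between 10-second and 30-second ads: {min(gaps)} to {max(gaps)} points")
    print("Exceeds the significance threshold?", max(gaps) >= threshold)   # False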

Strike three on the MediaPost article, or at least a foul tip, is not questioning the inclusion of 10-second ads. Does anyone actually sell those? I've heard of 6s, 15s, and 30s, but I've not read about 10s being a standard length for TV commercials. A curious choice by iSpot.TV.

Credit Where It’s Due

I will give some credit to iSpot.TV for publishing a report on which the MediaPost article was based (free to download if you give them your email info). And they get credit for including the significance information that was lacking in the article. However, nowhere in the report, or anywhere on the iSpot.TV website, is the derivation of the Attention Score addressed. To me, attention is only measured by actual eyes-on or ears-on an ad. I’m very curious how it is defined in this case.

As I’ve mentioned before in this space, I don’t expect writers to be experts on research, but there should be some level of intellectual curiosity rather than just regurgitating a press release. And I don’t expect companies to give away proprietary information, but if you’re going to publicize something, at least give enough information to answer some basic research questions about your service.

David Tice is the principal of TiceVision LLC, a media research consultancy.
Read his new book, “The Genius Box” – details here
Get notifications of new posts – sign up at right or at bottom of this page.

Label Surveys As Well As Data

It was with great interest I read of the new “data transparency label.” This label is being released for comment by several of the media alphabet associations – the AMA, ARF, and CIMM.

[Image: example of the data transparency label, via datalabel.org]

In the manner of the nutrition labels mandated by the FDA, these labels are intended to bring clarity to the torrent of data being aimed at big data applications in media, particularly advertising targeting. By adopting a very brief but standard reporting structure, the labels will give users of data a high-level assessment of the quality of the numbers being injected into their algorithmic black boxes. (And by the way, notice there is no equivalent transparency effort for those black boxes; but that's another story.)

Survey Nutrition Too?

This is important news in that corner of the research, data, and analytics world. What would I like to see? An equivalent nutrition label for publicly released surveys, perhaps sponsored by the Insights Association (the 2017 amalgamation of CASRO and the MRA). The label would specify a minimum level of information to be released with research conducted by its members. This would include items such as:

  • Who paid or sponsored the poll
  • A description of the sample
  • Mode of collection
  • Probability or non-probability sample
  • Dates for fielding
  • Standard error for probability samples, or some “equivalent” for non-probability samples

This information should be enough to quickly evaluate the bias and relative level of quality of a publicly released survey. In fact, some of this information may already be required, but in practice it is rarely available in press articles or from the entity releasing the survey.
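For the standard-error item in particular, the calculation is simple enough that there is little excuse to omit it. Here is a minimal sketch for a proportion from a probability sample; the 1,000-respondent sample size and 50% proportion are hypothetical values chosen purely for illustration.

    import math

    # Standard error and 95% margin of error for a proportion
    # estimated from a simple random (probability) sample.
    n = 1000    # hypothetical number of respondents
    p = 0.5     # hypothetical proportion; 50% gives the widest interval

    standard_error = math.sqrt(p * (1 - p) / n)
    margin_of_error = 1.96 * standard_error           # 95% confidence level
    print(f"Standard error: {standard_error:.3f}")                      # ~0.016
    print(f"Margin of error: +/- {margin_of_error * 100:.1f} points")   # ~3.1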

Too Busy to Process

The press is too inundated with press releases and too busy filling a 24/7 demand for content to bother to evaluate PR surveys anymore (read MediaPost's disclaimer on its Research Intelligencer newsletter). It's all just grist for the content mill. But maybe with a very simple label, reporters would be tempted to think once in a while. At the least, the rest of us could do the thinking, given the right information.

David Tice is the principal of TiceVision LLC, a media research consultancy.
Read his new book, “The Genius Box” – details here
Get notifications of new posts – sign up at right or at bottom of this page.

Drake vs The Beatles: Let It Be

As a (young) Boomer, I was a little dismayed last week. I saw that Billboard declared that Drake had taken away a record from The Beatles – most songs in the Hot 100's Top 10 in a given week. Drake's seven songs had beaten The Beatles' five songs. This record had stood for 54 years, since 1964.

I don’t have anything in particular against Drake. I know little about him other than he’s Canadian and seems to be at a lot of NBA games. But as a researcher, I was curious how he had broken such a long-standing record, especially against my generation’s touchstone music group.

We Can Work It Out

A little digging around on the internet quickly made it apparent that this record-breaking is about as meaningful as saying Drake's seven apples break The Beatles' record of five oranges. As clickbait, it's great; as a real comparison, it leaves something to be desired.

Although Billboard does not publish its methodology, numerous online sources discuss how the calculation of the Hot 100 has changed many times over the years. These changes reflect both shifts in how people listen to music and the metric the industry was looking for in a particular era (e.g., popularity or profitability). In 1964, it seems that sales of singles and radio airplay accounted for much of the calculation, with more weight toward sales. In contrast, today's calculations are based mostly on radio airplay, streaming requests across all types of sources including YouTube, and digital sales.

If one were a bitter Boomer, one could argue that five songs ranked in the Hot 100 largely because people actually had to pay for the records is a greater achievement than seven songs ranked mostly on the strength of free radio and free or subscription-based streaming audio.

Or is total reach the best measure? The Beatles sold 25 million records in 1964. If one considers the 10-19 age range their target market, that works out to about one Beatles record sold per 1.4 members of the 35 million youngsters in that cohort in 1964. I could not find similar data for Drake; but with a 10-19 population of about 42 million in 2018, he would have to sell 30 million song or album downloads to proportionately equal The Beatles' 1964 sales. But we'll never really know which is better, given how people get music today: there is little need to buy music when all-you-can-listen subscription streaming is available.
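For anyone who wants to check that back-of-the-envelope math, here is a minimal sketch using the figures cited above; the 2018 number is the hypothetical break-even sales level, not an actual figure for Drake.

    # Proportional comparison of 1964 Beatles sales to a hypothetical 2018 equivalent.
    beatles_sales_1964 = 25_000_000   # records sold in 1964
    teens_1964 = 35_000_000           # US population aged 10-19 in 1964
    teens_2018 = 42_000_000           # US population aged 10-19 in 2018 (approx.)

    print(f"1964: about one Beatles record per {teens_1964 / beatles_sales_1964:.1f} teens")
    equivalent_2018 = teens_2018 * beatles_sales_1964 / teens_1964
    print(f"2018 equivalent: about {equivalent_2018 / 1e6:.0f} million downloads")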

Come Together

Are the Yankees of 2018 better than the Yankees of 1964? While both played baseball, they played two very different games and need to be considered in the context of the differences between eras. The bottom line is that comparing Hot 100 lists from these different eras is no more meaningful than comparing baseball's hitters and pitchers of today against those of 1964.

*** Jan 2019 update: just to emphasize the difference above, someone named “A Boogie Wit Da Hoodie” hit number one on the Hot 100 by selling only 823 copies – but had 83 million streams *** 

David Tice is the principal of TiceVision LLC, a media research consultancy.
Get notifications of new posts – sign up at right or at bottom of this page.

Dave the Research Grouch: Variety and Cowan

Last week, Variety (and multiple other outlets) published a report on a new study from Cowan & Co. on Netflix use, and it's hard to decide at whom to get grouchy: at Variety, for writing an article with no context, or at Cowan, for dropping survey results without publishing any details about the study.

Let’s look at the headline first – “Netflix Is No. 1 Choice for TV Viewing, Beating Broadcast, Cable and YouTube (Study)”. What, according to the article, did the survey results actually say? That people self-reported they used Netflix (27%) “more often” to view than cable TV (20%) or broadcast TV (18%).

Let’s parse this out a bit. First, consider that Nielsen reported in Q1 2017 that 90% of viewing time is still on traditional TV networks. Sure, there are issues with Nielsen but even so it is reasonable to assume that it’s not too far off. This means that in terms of actual viewing time among the total population, Netflix is nowhere near the most-watched platform despite what people may say they “use most often.”

Second is the rather subjective decision to compare broadcast and cable separately against Netflix. It’s been my experience that people with a streaming agenda tend to also be the ones who say viewers can’t tell or don’t care about cable vs broadcast. But that would ruin the headline, because it would change to “Legacy TV Networks Are No. 1 Choice for TV Viewing [38%], Beating Netflix [27%] and YouTube.”

This point is emphasized further when the data for homes with pay TV are shown. Most trusted studies show that a majority of Netflix homes still have pay TV in some form, and here the difference is even more pronounced, with 45% choosing legacy broadcast or cable and 24% Netflix. No attention-getting, disruptive headline from that.

The Frowns are Awarded

Thus a big Research Grouch frown is aimed at Variety (and other sites) for publishing these data without any context at all – context one would hope the beat writers in this area would know enough to include.

Cowan doesn't escape without a frown either, for my pet peeve: promoting a study without publishing anything about it on its own site. I could not find anything on Cowan's website, or a press release, with which to follow up. I understand that we don't need a dissertation, but if you're going to promote research, at least have some basic details available to read outside the lens of the press, who (from experience) are notoriously fast and loose with their interpretation of research results. What age was the sample? When was it fielded? How was it weighted?

David Tice is the principal of TiceVision LLC, a media research consultancy.
Get notifications of new posts – sign up at right or at bottom of this page.

Dave the Research Grouch: Recruiting isn’t Representativeness

The Research Grouch got a double whammy Monday morning. First, having seen the headline "We Are Finally Getting Better at Polls" on Bloomberg.com, I clicked right to the piece. Once there, however, I discovered it was a self-promoting opinion piece by a UK polling company.

Already grouchy from feeling somewhat misled, I found the content of the piece ratcheted up the grouch level further. The author discussed what he considered to be new, innovative ways to reach "representative" samples for polling:

  • Using IM instead of email to reach out to respondents (OK, that makes sense)
  • Showing respondents how their responses sit against others taking the poll (debatable, especially if it's raw instantaneous data)
  • Giving people surveys on topics they like in order to maintain interest (again, debatable)
  • Emotional testing rather than direct questions (not sure how this solves a sample issue)
  • Recruiting from non-political or non-news websites, or from social media (diversification of online recruiting does not create representativeness)

About the only indisputable point made in the piece is that politically active people need to be recruited in their correct proportions to get the best data. If that is considered news to political pollsters in the UK, then no wonder they had issues predicting recent elections.

I may be a grouch but I’m not against innovation. These suggestions are certainly ways to potentially increase respondent engagement and diversify online sample sources. But, despite claims from online research firms everywhere, a volunteer opt-in online sample is by definition not representative regardless of panel size, recruitment techniques, weighting, or other manipulations.

Such samples may often give the same answers as a truly representative sample, but that doesn't make them representative. Neither does borrowing terminology, such as response rates and error margins, from traditional probability-based research; those concepts really don't apply to volunteer samples of any kind. A serious problem for this industry is that there is a whole generation of researchers who don't realize this.

There really are probability-based online panels

I do recognize it's a different world today than 20 years ago. Opt-in online samples are generally "good enough" for many applications, and I've used them myself many times. But, at least in the USA, there are online access panels recruited using traditional probability techniques (including GfK's KnowledgePanel, NORC's AmeriSpeak, and RAND's American Life Panel) which are available for important research, whether political polls or key business decisions.

Yes, these panels are expensive compared to opt-in sample, but you get what you pay for – and as many pollsters, candidates, and businesses have found over the years, the most expensive research is bad research.

David Tice is the principal of TiceVision LLC, a media research consultancy.
Get notifications of new posts – sign up at right or at bottom of this page.

Foreverspin Tops? More like forever annoying ads!

Apologies for the scarcity of posts this week – the Media Insights and Engagement Conference had trouble with its website, so the stream of posts I expected to publish from that meeting is still waiting to go up. So in the meantime, let's close the week with a small rant…

Is anyone else being followed around by the digital ads from "Foreverspin Top"? These are showing up on my work computer, home computer, phone, and tablet, and have been for what I'd guess is at least 12 to 18 months. And they represent everything wrong with the digital ad ecosystem.


Almost from the time I first saw this ad – and it's virtually the same ad, if not exactly the same, over this whole time period – I've asked Google to stop serving it to me. I report the ad as annoying, or as something I have no interest in, or as one I've seen multiple times. And yet it still shows up, like I'm living in a digital ad Groundhog Day.

And how did some Google algorithm select me for this ad? I have no interest in tops, especially "luxury tops" that start at $35. Is it because the tops are made in Canada and somehow Google found out I was born in Canada? Or because I had long arguments about what the spinning top meant at the end of Inception?

And since we're asking questions: why, if Google is so smart, do its algorithms still serve me ads that I've reported having no interest in at least a dozen times?

There are others, but this is the best example of why I laugh to myself at conferences or presentations when people talk about the wonders of Big Data and its ability to personalize and target advertising. Sure, sometimes it works fine; but all it takes is a bad actor like Foreverspin to poison the well of public opinion for every digital advertiser. And this is especially of concern in this age when we want consumers to be more comfortable sharing data with media networks. All the talk of building trust and transparency goes out the window when a little money overrides stated consumer sentiment.

David Tice is the principal of TiceVision LLC, a media research consultancy.
Get notifications of new posts – sign up at right or at bottom of this page.

Dave the Research Grouch: Another Viewing Data Fluff Piece

Time to call out another example of "publicity research" that seems to say something but proves very little. Today's fickle finger points at Inscape, the Vizio division that is trying to monetize its TV set data, and at data about 2017 viewing released to Deadline and Broadcasting & Cable.

Just quickly looking at the data in the article, I suspected that any measure that had “Teen Mom 2” in the top five programs viewed via DVR, VOD, and OTT may be somewhat skewed. I decided to look deeper and see if there was any definition of the sample – who was measured, etc.

The short version is there is such a paucity of supporting information that it’s hard to tell anything about these reported data. The article has some of the usual phrases thrown around to impress those who don’t know any better about audience measurement – “second-by-second viewing,” “7.7 million households,” “can go granular and capture more precise information on viewing.” As the old saying goes, one can be very precise but not very accurate. Large samples and second-by-second measures don’t guarantee accurate, reliable measures – only a lot of data points.

What we can glean from the article is that these data represent sets in 7.7 million homes, or roughly six percent of all TV households. But we don’t know what proportion of sets these Vizio sets represent (out of the roughly 2.7 sets per TV home), which rooms these sets are in, or who is viewing them. Plus, we don’t know the profiles of those who have given opt-in permission to be tracked by Inscape – how different are they from Vizio owners overall, or the general population? This all informs the value of the information.

With its ACR measurement, Vizio/Inscape claims to be able to measure broadcast, DVR, OTT, and VOD viewing on a set. Again, there is no explanation of how this is accomplished (e.g., how an ACR match is attributed to a source) or what other limitations there may be in the measurement. Are there gaps it would be helpful to know about when assessing the data?

I did go further afield to look for more information, such as a press release or other material beyond what was published in Deadline or B&C, but there is nothing else I could find on the Vizio or Inscape websites.

Free your data! (at least the basics)

Thus the bottom line is there is little supporting data or context, and regular readers know I don’t think too highly of data published without any context. Obviously a press release doesn’t have to be a dissertation but there should be some basic information available.

As a result, in my opinion these viewing data only really represent some measure of some viewing by some unknown population – not exactly the kind of information that can be used “as is” for meaningful extension of industry knowledge. Presumably Vizio/Inscape has more detailed and useful data they share with their clients and partners – and it would be nice to see some of these basics also shared when pursuing press attention.

David Tice is the principal of TiceVision LLC, a media research consultancy.
Get notifications of new posts – sign up at right or at bottom of this page.

What those PwC-Netflix stories yesterday didn’t mention

Please note the TiceVision blog is on a reduced publication schedule through Jan 2.

Yesterday, there was a raft of stories about a new PwC report claiming that Netflix now equals pay TV in subscribers. This seemed a bit off to me, based on my own research done earlier this year and my recall of Netflix's own numbers. For Q3 2017, Netflix reported 53 million US subscribers – roughly 20% of the 250 million adults in the US or, if you assume one subscription per home, about 40% of US households. Even accounting for churn over a survey period and people sharing passwords, how did PwC end up, as these articles reported, with 73% of Americans being Netflix users?
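As a quick sanity check on those figures: the household count below is my assumption (roughly 126 million US households in 2017); the other numbers are the ones cited above.

    # How far is Netflix's own Q3 2017 subscriber count from the reported 73%?
    netflix_subscribers = 53_000_000
    us_adults = 250_000_000
    us_households = 126_000_000   # assumed approximate 2017 US household count

    print(f"Share of US adults:     {netflix_subscribers / us_adults:.0%}")       # ~21%
    print(f"Share of US households: {netflix_subscribers / us_households:.0%}")   # ~42%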

I decided to dig a little deeper, which many of the reporters seem unwilling – or not aware enough – to do. The digging showed a different picture than painted in many of the articles.

Sample

The first big consideration is that the PwC page describing the study does clearly state that the sample is people aged 18-59 with a household income over $40,000/year – far from a representative sample of all Americans or all households. Perhaps some reporters mentioned that, but none of the half dozen articles I read made the distinction. This is important, since that age range and income bracket would be much more likely to have internet in the home and thus be able to subscribe to Netflix.

Method

Another aspect not discussed anywhere on PwC's landing page for the report, or in the downloaded report itself, is the methodology or sample source used. I'll presume it was an online sample. Again, this would not be representative of Americans as a whole; since by definition an online sample consists only of internet users, it would of course be more likely to subscribe to or use Netflix. And there is no mention anywhere of whether Spanish-dominant persons were interviewed, to be inclusive of "all" Americans.

Terminology

Another issue in the press articles is that the terms Netflix "subscribers" and "users" were used indiscriminately – some used one term, some the other. The PwC report specifically uses the term "users" – an important distinction the reporters for some articles missed, as users may or may not be subscribers (in past research I found about 15% of Netflix users say they use other people's logins). By using "subscribers," some of the reporters added to the misinterpretation of the data. And neither the articles nor the PwC supporting material explains how "users" are defined – are these people "ever" users, regular users, or something else? And what is the difference in time spent on Netflix versus pay TV channels?

Always consider the limitations of any report

Words have meaning. Methods have consequences. Firms that publish reports such as these – and the press that covers them – should keep that in mind. While this PwC study may show some interesting trends for the slice of the US public to which it applies, it should not be mistaken for a definitive profile of Americans' access or usage, which is how many of the press articles presented it.

David Tice is the principal of TiceVision LLC, a media research consultancy.
Get notifications of new posts – sign up at right or at bottom of this page.