How TView Works

 

TView is a respondent-level, personal probability system.

 

 

Aggregate Audience Data

 

Survey research lets you make discoveries and draw conclusions such as this:

 

"The average rating for NBC on May 23rd was a 3.7 for Women 25-54."

 

We call that a summary measure, or an aggregate measure, because it aggregates what individuals did into a lump total.  When you look at GRP totals, reach results, daypart rankings, lists of popular TV shows or magazines, or comparisons of how different groups of people watch TV, you are looking at aggregate measures.

 

Years ago, the only way media planners could evaluate television schedules was by making estimates from reports on what portion of a whole group did something.  For example, a look-up table or printed curve might have been consulted to learn that 300 points placed in prime time would produce a 55 reach (or whatever).  Then, after a planner learned that the prime portion of the plan got a 34 reach and the late night portion got a 22, some method was needed to combine these numbers into a plan total.
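
To make that combination step concrete, here is a small hypothetical sketch in Python (this is not how TView works; it simply illustrates the kind of ad hoc duplication formula, T = A + B - fAB, described below). The duplication factor f is an assumed input that had to come from outside research or a model.

    # Illustrative only: combining two daypart reaches with an ad hoc
    # duplication formula.  Reaches are expressed as fractions (34 => 0.34).
    def combined_reach(reach_a, reach_b, f=1.0):
        # f = 1.0 assumes random (independent) duplication between the two
        # dayparts; any other value must be estimated externally.
        return reach_a + reach_b - f * reach_a * reach_b

    # Prime reach of 34 and late night reach of 22, as in the example above:
    print(round(100 * combined_reach(0.34, 0.22), 1))   # 48.5 with f = 1.0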

 

Even today, in smaller U.S. markets the only available data on television usage describes aggregate behavior. In U.S. radio, the only data available at any level is aggregate data.

 

Having only aggregate data creates a serious bottleneck for deeper research.  With only aggregate information, it's very difficult to say what happens when exposures are combined. Suppose we have a 200-point introductory period and a 350-point sustaining period; what is the combined result? Or, we have a prime schedule and a daytime schedule; what do they generate together?  Great effort has to be put into creating regressions (such as reach = f(grps, x1, x2, x3)), ad hoc models (such as T = A + B - fAB) or probability models (such as beta-binomial or gamma-Poisson) in an attempt to fabricate such answers.
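
As one illustration of that modeling burden, here is a minimal sketch of the beta-binomial approach mentioned above (not any supplier's actual model; the one-spot and two-spot reaches used to fit it are hypothetical values). Two parameters are fitted from a pair of aggregate observations, and reach for any number of spots is then extrapolated from the fitted model.

    # Illustrative beta-binomial reach model fitted from aggregate data only:
    # r1 = reach of one insertion, r2 = reach of two insertions (fractions).
    def fit_beta_binomial(r1, r2):
        u, v = 1.0 - r1, 1.0 - r2          # probabilities of zero exposure
        s = (v - u) / (u * u - v)          # alpha + beta
        return r1 * s, u * s               # alpha, beta

    def reach(alpha, beta, n):
        # P(zero exposures in n spots) = product over i of (beta+i)/(alpha+beta+i)
        p0 = 1.0
        for i in range(n):
            p0 *= (beta + i) / (alpha + beta + i)
        return 1.0 - p0

    a, b = fit_beta_binomial(0.20, 0.35)   # hypothetical one- and two-spot reaches
    print(round(100 * reach(a, b, 10), 1)) # extrapolated reach (%) for a 10-spot schedule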

 

 

Respondent-Level Audience Data

 

With respondent data, what we get from our data suppliers is a whole lot of information about every person in the survey panel. It may look something like this:

 

Alice K. is a 64-year-old woman in Billings, Montana.  On May 23rd she watched CBS from 8:00 to 8:08 pm, then from 8:12 (after the commercial block) to 8:27 pm, then switched to Discovery Channel which she watched from 8:28 to 8:45.

 

... and so on for each of the 80,000 people in the survey panel!

 

Respondent-level analysis means that everything is calculated by looking at how individuals view television, estimating for each person whether he or she will see a proposed campaign and, if so, how many exposures it will deliver.  Currently (for U.S. users), that means TView is able to think about each of the 80,000 people in the Nielsen panel and analyze them one by one.
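
A respondent-level database can be pictured as a collection of records like the one for Alice above. The sketch below shows one way such a record might be represented; the field names, ID and weight are illustrative assumptions, not TView's or Nielsen's internal format.

    from dataclasses import dataclass
    from datetime import datetime
    from typing import List

    @dataclass
    class ViewingInterval:
        network: str          # e.g. "CBS" or "Discovery Channel"
        start: datetime       # tuning start
        end: datetime         # tuning end

    @dataclass
    class Respondent:
        respondent_id: int
        age: int
        gender: str
        market: str           # e.g. "Billings, Montana"
        weight: float         # projection weight to the population (made-up value below)
        viewing: List[ViewingInterval]

    # Alice's record from the example above:
    alice = Respondent(
        respondent_id=10234, age=64, gender="F", market="Billings, Montana", weight=1.0,
        viewing=[
            ViewingInterval("CBS", datetime(2015, 5, 23, 20, 0), datetime(2015, 5, 23, 20, 8)),
            ViewingInterval("CBS", datetime(2015, 5, 23, 20, 12), datetime(2015, 5, 23, 20, 27)),
            ViewingInterval("Discovery Channel", datetime(2015, 5, 23, 20, 28), datetime(2015, 5, 23, 20, 45)),
        ],
    )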

 

 

Advantages of Respondent-Level Data

 

With rich respondent data, new analytical possibilities emerge.

 

1. By focusing on individual people rather than melted-down groups, we preserve much more of their unique viewing interests and patterns when we analyze a plan.
2. The notion of a “daypart” can be considerably broadened. Dayparts can be user-defined, and can include or exclude specific channels or times of day. As an example, years ago in the old Nielsen Persons Cume Study all we had was a single category for “Broadcast Prime”. Nowadays you can break out a daypart in many, many ways.  You can set the specific days and times of interest, perhaps with different times on different days. You can also define a daypart to consist of specific genres of programs (dramas, sitcoms, etc.) or even lists of specific programs.
3. Because a respondent database is used, a more precise definition of the target audience is possible. This definition can include complex logical groupings. For example, you might wish to study Adults 25-54 in higher income homes in A counties, or Women 21+ who have children and who own a dog (see the sketch following this list).
4. Respondent data can be supplemented with additional research.  Examples:
a. U.S. Nielsen also asks about moviegoing habits, and can categorize respondents by where they live into PRIZM clusters.
b. Nielsen Catalina Solutions and Nielsen Buyer Insights develop brand and product information directly tied to the Nielsen television panel.
5. Respondent data enables fusion analysis with other sources.  For example, MRI data is available, fusing their rich set of product and psychographic insights into the television panel.
6. We can take draws of subsets of respondents.  This enables replication and cross-validation studies.
7. Actual respondent data is used (in some way) to calculate reach. This eliminates the need for increasingly complex (and thus precarious) models to handle tough issues of audience accumulation and media fragmentation. Goodbye, regression. Goodbye, beta-binomial. (Note that this does not necessarily mean direct tabulation of respondent data. It just means that we use this individual data in some way.)
8. In the same way, respondent data allows us to eliminate daypart and vehicle duplication models and factors, which has been an even more precarious undertaking. Goodbye, mystical f-factors (and the multiple regressions used to predict them).
9. Having respondent data offers the potential of powerful re-coding of respondents according to known or discovered group memberships.  In TView's TeleDemo™ capability, we can create demos such as Men who never watch NFL football, or Women who were exposed to our introductory campaign.
10. A specific area that gains from this kind of re-coding is analysis of the frequency distribution of exposure and of viewing by volume of viewing. Quintile analysis using medium-based (rather than schedule-based) definitions of quintiles becomes straightforward.
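
To make the target-definition point in item 3 concrete, here is a small hypothetical sketch of how a complex logical target could be expressed as a filter over respondent records. The attribute names and sample values are illustrative assumptions, not TView's actual interface or data layout.

    # Hypothetical respondent attributes; a real panel database supplies these.
    respondents = [
        {"id": 1, "gender": "F", "age": 34, "has_children": True,  "owns_dog": True,  "weight": 1.2},
        {"id": 2, "gender": "F", "age": 19, "has_children": False, "owns_dog": True,  "weight": 0.9},
        {"id": 3, "gender": "M", "age": 45, "has_children": True,  "owns_dog": False, "weight": 1.1},
    ]

    # "Women 21+ who have children and who own a dog"
    def in_target(r):
        return r["gender"] == "F" and r["age"] >= 21 and r["has_children"] and r["owns_dog"]

    target = [r for r in respondents if in_target(r)]
    target_population = sum(r["weight"] for r in target)   # projected size of the target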

 

We believe that, back in the days of aggregate data, the earliest versions of TView had the best tools in the industry for constructing and using algorithmic models of television audiences. (The SHS “BBDX” model of television viewing was provably superior to regression techniques in robustness, quality of fit to real-world data, and suitability for follow-up analyses.) So it was somewhat hard to say goodbye to the old ways of doing things. Nonetheless, the precision, robustness and flexibility of respondent data are absolutely irresistible, and we are delighted to be breaking new ground with this exciting data stream.

 

 

Where You Can Get Respondent-Level Data

 

In media research today, respondent data is available from such sources as:

 

U.S. Nielsen All-Minute
U.S. Nielsen Mid-Minute
U.S. Nielsen Persons Cume Studies (yes, the PCS did provide respondent-level data, but sadly it has been discontinued by Nielsen)
Numeris Canada (formerly BBM)
IBOPE in Latin America
Print and omnibus studies from Mediamark Research (MRI)

 

But also note that respondent data is NOT available from several other sources, including:

 

U.S. Nielsen Station Index VIP
U.S. Nielsen Local Monthlies
U.S. Nielsen Audio (formerly Arbitron)
U.S. Nielsen Mobile video usage (as of February 2015)

 

This has a very significant implication! Software tools that rely on these aggregate sources do not have the richness of respondent data from which to draw conclusions and make estimates.  Instead, they must fall back on the modeling and ad hoc methods described above to provide estimates.

 

 

Personal Probability

 

Great, we're thinking about the viewership of individual people.  But how does TView make estimates for each of those people?  That's where "personal probability" comes in! The system has the complete record of how each person viewed television while a member of the survey panel (in the U.S., that's Nielsen).  That means it has a pretty good idea of the patterns in each person's preferences for viewing times, days of the week, networks, programs and so on.  When a media planner defines a daypart in TView, the system builds, for each person, the probability that he or she will see the next spot, and the full set of spots, in each daypart on each requested network.
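
The general idea can be pictured with a deliberately simplified sketch (this is not TView's actual algorithm, just an assumed illustration): take the share of a daypart's minutes that a respondent actually spent viewing a network as an estimate of that person's chance of seeing the next spot there.

    # Simplified illustration of a personal probability: the fraction of a
    # daypart's minutes that a respondent spent viewing the network, used as
    # an estimate of the chance of seeing the next spot in that daypart.
    def personal_probability(viewed_minutes_in_daypart, total_daypart_minutes):
        if total_daypart_minutes == 0:
            return 0.0
        return viewed_minutes_in_daypart / total_daypart_minutes

    # Example: Alice watched 23 of the 60 minutes of a "CBS, 8-9 pm" daypart
    # in her record above (8 minutes plus 15 minutes).
    p_alice = personal_probability(23, 60)   # about 0.38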

 

 

Putting It Together

 

So, when a media strategist specifies a plan, the system thinks about how that plan is likely to be seen (or not seen!) by each respondent individually.  A prediction is made for each and every person of the number of times that person will be exposed to airings of spots in the campaign -- once, twice, more times, or not at all.  Only when this has been completed for all of the respondents does TView add them up into totals expressing the portion of the target demo that has been exposed.  The totals you see for GRPs, reach, frequency and everything else are assembled simply by adding up what is projected for all of these individuals.
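
Conceptually, that final add-up step looks something like the following sketch. The per-person exposure predictions and projection weights are made-up values standing in for what the personal-probability step would produce; the variable names are illustrative.

    # Per-respondent predictions: (projection weight, predicted exposure count).
    predictions = [(1.2, 0), (0.9, 3), (1.1, 1), (1.0, 2)]

    population  = sum(w for w, _ in predictions)
    impressions = sum(w * n for w, n in predictions)
    reached     = sum(w for w, n in predictions if n >= 1)

    grps      = 100.0 * impressions / population         # gross rating points
    reach_pct = 100.0 * reached / population              # % exposed at least once
    avg_freq  = impressions / reached if reached else 0   # average frequency among those reached

    print(round(grps, 1), round(reach_pct, 1), round(avg_freq, 2))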

 

Copyright © 2015, Mediaocean. All Rights Reserved.