If you’ve been in the entertainment research and testing biz for any period of time, you’ve likely heard of Vision Critical’s EVP of Media and Entertainment, Aaron Paquette. With more than 18 years of experience in the industry, including stints at Nielsen Entertainment, CBS and Sony Pictures Television, it’s safe to say that Aaron has made his mark on the industry. At Dialsmith, we’ve been fortunate to recently collaborate with Aaron and Vision Critical (VC) on several online research projects where we’ve integrated our online dial testing (powered by Perception Analyzer Online) with VC’s Viewer Insights Community.
On the heels of this round of collaborative projects, Aaron was kind enough to spend a few minutes chatting with us and covering some topics of interest to Dialsmith clients.
Q: Can you give us some background on the type of projects you’re working on and clients you’re serving at Vision Critical?
Aaron: At VC, we think of “media” very holistically to include TV, film, new media, music, sports, news and more. We consult with about a dozen clients on what we call Premium Insight Communities or Voice of Market engagements; these are where clients pay a little bit extra to have media experts consult on their communities and run their research activities. We conduct all forms of media research: on client communities, among pre-targeted members of the Vision Critical Springboard America community (this is what we call “Voice of Market”), and among nationally representative survey-takers on ad hoc studies. Our clients range from sports leagues to cable/satellite providers to television networks to studios to new media companies and media technology providers. The variety keeps things interesting!
Q: What do you find to be the most exciting aspect of being a part of the entertainment research industry?
Aaron: I’d say the fact that it’s constantly changing. Between mergers, acquisitions, spinoffs, new technologies and new business models, our industry is constantly evolving and our clients constantly have new research needs. We, in turn, continually have new methodologies to offer them to meet these needs. When I started a long time ago, everybody was trying to figure out cable TV. Now, they’re trying to figure out SVOD, multiplatform usage and social media. It seemed crazy back in 1996 that consumers had “57 channels and nothing on” (to quote the Springsteen song). Today, they have 500+ linear channels, SVOD services, websites producing content, social media sites filled with content, and if they’re still bored, they can make their own videos and post them. Imagine how many more choices they’ll have over the next 18 years.
Q: I know “dial testing” plays a role in the research you do. Where do you find that it adds value and what types of projects that you’re involved with lend themselves or don’t lend themselves to this methodology?
Aaron: For entertainment testing, the dials add value any time clients are trying to understand moment-by-moment reactions to a piece of content. In traditional survey research or focus groups, it’s fairly easy to learn how a consumer feels about a piece of content in its totality. However, it’s very difficult to understand how the specific moments are playing. For starters, respondents forget individual moments after being exposed to a longer piece of content. And secondly, their feelings about those individual moments are colored by what comes next. But dial testing can capture that visceral reaction in the moment, allowing researchers to understand what’s working and what’s not on a second-by-second basis, as well as the “flow” throughout an entire piece. We use this deeper level of feedback to make editing decisions, remove entire scenes, reorder elements and quicken pace. We also use this to ask thoughtful follow-up questions that can address moments of confusion, boredom or offensiveness. This has proved valuable in evaluating pilots, presentations, existing series, music, new media content; really anything that you can play for respondents with a fixed-time duration.
Q: Can you point to one or two “aha moments” that dial testing has revealed during any of your projects?
Aaron: We were just talking about one today. We were recently testing a pilot episode of a new television comedy, looking to identify scenes that were working and scenes that weren’t. Through the survey results, we saw that the show tested better in total among men than women—a surprising result as women tended to favor this client’s shows. However, on the dial, we saw something interesting. There was a climactic scene during the pilot that men just loved. It came about two-thirds of the way through the show, and it caused men’s dials to peak, creating a “halo effect” around the final third of the show. Women, on the other hand, didn’t show as much interest in the scene. Their dials flattened, and were then lukewarm for the rest of the program. So this indicated that this one scene was very polarizing by gender, and seemed to color each gender’s response to the rest of the show, and in fact, their entire evaluation of the program afterwards. That’s a lot of power for one scene, and the dial traces illustrated it very clearly.
Q: You say you’re a pop culture junkie. What about the work that you do feeds that guilty, or not-so-guilty, pleasure?
Aaron: Ha, I think that my work and my own viewing habits kind of feed off of each other. I watch about six hours of primetime each night, and I think that being a heavy viewer of TV helps inform how I evaluate a program. It gives me some sense of how a program might stack up to the “best in class” for its genre, and allows me to better understand comparisons that respondents themselves might make. And when I test something I really like, it inspires me to watch it on my own. I once tested a drama pilot with the Perception Analyzer dials. It was a so-so pilot that tested okay, but not great. I found it very confusing, and kept picking up new things every time I screened it. The network gave it a green light, retooled the pilot a bit based on testing, and made some thoughtful improvements in future episodes. That program is now one of my favorite dramas on TV, and I might not have found it or been so committed to watching it had I not been involved in testing it. And on the flip side, without the dial testing, it might not be as solid a program as it is today.
Q: What aspect of the entertainment industry inspired you to work on the research and media testing side?
Aaron: Growing up, I was always fascinated by TV—the shows themselves and the networks carrying them. I enjoyed following the creative side of the business and the business side; things like scheduling, upfronts, new fall seasons, executive comings and goings, etc. I channeled that interest into an internship at CBS when I was 19. CBS has a very robust program-testing practice, so I was exposed to that early on, and it was love at first sight. My boss at the time said that our job in research was to be the “voice of the viewer.” When executives in other departments would “give notes,” they were really just sharing their own opinion, and it could be right or wrong. However, when a research exec provides viewer feedback, he or she is providing an assessment straight from the consumer, and that carries a lot of weight. I’ve seen testing put big hits on the air that nobody thought would be successful, take marginal shows and make them much better, get weak shows cancelled before they even hit the air, get actors replaced or get characters and relationships heavily modified. Viewers have a lot of power when they take part in a program test, and I see it as an honor to be their spokesperson.
Q: What has been the most interesting or exciting project you’ve had a chance to be a part of working with media testing?
Aaron: One of my first projects is still one of my favorites. In 1996, I was at CBS for my first pilot season and we had big new projects in the works from Bill Cosby, Ted Danson, Rhea Perlman, etc. There was this other sitcom pilot we were testing, featuring a little-known comedian named Ray Romano, called “Everybody Loves Raymond.” The pilot was laugh-out-loud funny, though, and it ended up as one of the highest-testing shows we saw that year. Those strong results helped get it on the air, but it was put in a weak time slot.
We tested the next few episodes, and they didn’t perform quite as well. I think the show was still trying to find its “voice.” The writers were trying to get a handle on the characters and relationships, and Ray was finding his way as an actor. The testing helped by providing crucial feedback during this period and the show improved from a creative standpoint. It was then upgraded to Monday nights and the rest is TV history, as they say. Years later, a TV columnist unearthed our testing reports on the second and third episodes, and used them to try to discredit the testing process. But I firmly believe that testing helped get the show on the air, helped it hit its stride creatively, and played a key role in it becoming the mega hit and TV classic we know it as today.
Thanks, Aaron, for the great stories and insights! If you want to keep tabs on what Aaron is up to, you can follow him on Twitter or check out his postings on the Vision Critical Blog. You can also check out this recorded webinar featuring Aaron and Dialsmith’s Online Products Manager Aaron Korach discussing real-world examples and results from some of the entertainment research we’ve worked on with Vision Critical.