How to balance length and frequency of online video ads

Mark Husak, director of media development, Europe, Millward Brown

Online video provides a tremendous opportunity to shape how your brand message reaches people, and whom it reaches, in ways television never could. From frequency capping and the optimisation of dwell time or click-through, to context and page position, the choice and control you have over brand content is truly mind-boggling.

But some of the big, basic questions still remain, questions advertisers are still grappling with even for conventional TV. How long should my ad be? What is the most efficient frequency of exposure? Do people have the patience for longer formats online, or is frequency the way to beat the clutter? Or is it simply a matter of how much you can afford?

Dynamic Logic research helps shed some light on these questions. And the answer is… well, it depends on what the brand’s objectives are for using online video in the first place. Let’s first look at brand awareness:

Brand Awareness Uplift (Control vs. Exposed)

                        Frequency
Video Length (sec.)     1        2        3-5      6-9      10+
10 or less              1.02%    1.08%    2.17%    3.39%    3.69%
11-20                   1.48%    2.31%    2.22%    2.45%    4.86%
21-30                   2.53%    3.90%    5.22%    5.81%    7.56%
Over 30                 2.13%    5.85%    3.90%    3.69%    4.51%

Source: Dynamic Logic Market Norms. Based on over 650 case studies with over 500,000 control and 225,000 exposed survey respondents.

Short spots build awareness, and more exposure builds it further: up to +3.69% on average for videos under 10 seconds at 10+ exposures, and up to +7.56% on average for 21-30 second videos at the same frequency. Longer spots outperform shorter ones and continue to build brand awareness as exposure increases. For example, in the 3-5 frequency range, videos of 10 seconds or less achieve an average uplift of 2.17%, rising to 5.22% on average for 21-30 second videos in the same frequency range.

This holds on average for anything up to 30 seconds. But look at what happens for ads longer than the standard 30 seconds. Brand awareness uplift is maximised at about two exposures (+5.85%). After that, extra exposures add little; in fact, they may hurt slightly as people tune out, with uplifts ranging from 3.90% at 3-5 exposures to 4.51% at 10+. The best effect comes from running a slightly shorter spot more often.
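
To make the diminishing-returns pattern concrete, here is a minimal sketch in Python, using only the averages quoted in the table above (treating the frequency bands simply as ordered steps is our own simplification), that prints the marginal awareness gain from moving to each higher frequency band:

```python
# Marginal brand-awareness gain per step up in frequency band,
# using the Dynamic Logic averages quoted in the table above.
awareness_uplift = {  # video length -> uplift (%) at frequency bands 1, 2, 3-5, 6-9, 10+
    "10 or less": [1.02, 1.08, 2.17, 3.39, 3.69],
    "11-20":      [1.48, 2.31, 2.22, 2.45, 4.86],
    "21-30":      [2.53, 3.90, 5.22, 5.81, 7.56],
    "Over 30":    [2.13, 5.85, 3.90, 3.69, 4.51],
}
bands = ["1", "2", "3-5", "6-9", "10+"]

for length, uplifts in awareness_uplift.items():
    steps = [
        f"{a}->{b}: {round(y - x, 2):+}"
        for a, b, x, y in zip(bands, bands[1:], uplifts, uplifts[1:])
    ]
    print(f"{length:>10} sec  " + "   ".join(steps))
# Only the "Over 30" row turns negative after the second exposure,
# which is the tune-out effect described above.
```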

Now let’s look at purchase intent.

Purchase Intent (Control vs. Exposed)

                        Frequency
Video Length (sec.)     1        2        3-5      6-9      10+
10 or less              0.86%    1.11%    IFR      1.68%    1.76%
11-20                   0.91%    0.58%    1.45%    2.00%    2.11%
21-30                   0.97%    2.33%    3.57%    3.50%    2.65%
Over 30                 2.27%    2.71%    2.03%    2.16%    1.36%

Source: Dynamic Logic Market Norms. Based on over 650 case studies with over 500,000 control and 225,000 exposed survey respondents.

The effect of frequency here paints a grimmer picture. The shortest spots (under 10 seconds) remain largely immune, growing from +0.86% uplift after one exposure to +1.76% at the 10+ frequency level. Even 11-20 second ads continue to grow purchase intent after 10 or more repetitions (to +2.11%). For longer spots, however, there appears to be an inverse relationship between length and viewers’ tolerance for repetition. For example, 21-30 second videos achieve their maximum uplift at 3-5 exposures; past this point, additional frequency hurts the brand as patience reaches its limit.

For longer spots (over 30 seconds), the threshold is even lower. Two exposures achieve the maximum (+2.71%), after which additional activity simply antagonises the viewer.
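
A similar sketch, assuming the purchase-intent averages from the table above and treating “IFR” as missing data (our own handling, for illustration only), finds the frequency band where intent peaks for each video length:

```python
# For each video length, find the frequency band where the purchase-intent
# uplift quoted above peaks. "IFR" (not reported) is treated as missing data.
intent_uplift = {
    "10 or less": [0.86, 1.11, None, 1.68, 1.76],
    "11-20":      [0.91, 0.58, 1.45, 2.00, 2.11],
    "21-30":      [0.97, 2.33, 3.57, 3.50, 2.65],
    "Over 30":    [2.27, 2.71, 2.03, 2.16, 1.36],
}
bands = ["1", "2", "3-5", "6-9", "10+"]

for length, uplifts in intent_uplift.items():
    peak_uplift, peak_band = max(
        (u, b) for u, b in zip(uplifts, bands) if u is not None
    )
    print(f"{length:>10} sec: intent peaks at frequency {peak_band} (+{peak_uplift}%)")
# The two shorter formats peak at 10+ exposures, 21-30 second spots peak at
# 3-5 exposures, and over-30 second spots peak at just two.
```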

What does all this mean? Well, if you’re simply trying to get your brand known or keep it top of mind, plenty of frequency with short spots online works just fine. Longer lengths work even better, but you need to balance effectiveness with efficiency.

On the other hand, if you are trying to really influence purchase in the short term, longer spots may be the way to go, but be careful not to overdo it on the frequency. You already have their attention. Use it wisely.

The Value of a Fan

Buying Ad Inventory – Presentation from eMarketer

Beware GA: Maybe Lying to You – from SEER Interactive

Many people have noticed and commented on the recent changes to how Google Analytics reports sessions and tracks image referrals. As part of reviewing the effects these changes may have had on our clients, I wanted to dig into our Analytics to make sure that our data is as accurate as possible. One of the great things about working at an agency is that you have tons of data sources across all sorts of industries and verticals, so identifying systemic issues becomes much easier.

However, when digging into this data I found some startling results that date back well before the above-mentioned changes went into effect. To be clear, the examples below are not (in my humble opinion) a result of, or even correlated with, the recent changes to Google Analytics, but they are certainly something you should review when validating your past and future numbers.

Referring Keywords

Now, the first place I turn when validating data is always Google Analytics, the source of the data itself (imagine that!). When I took a look at some data for a client that had seen increases of late, at first things looked great; our targeted keywords were moving in the right direction and there were no abnormal spikes across one or two keywords that would indicate spam or faulty reporting.

But just as I was cinching up my party hat and lacing on my dancing shoes, I noticed some strange long-tail terms driving traffic. Take this example:

Not only is this an 8-word phrase that is only vaguely related to the client in question, the grammar is all out of whack. In fact, Google even tried to correct me when I searched for this:

So now that this has caught my attention, I wanted to dig a little deeper.

Please Note: I recognize that 25 visits is not something that will necessarily make-or-break our reporting. However, this is just an example of how this phenomenon was observed over a bunch of keywords across multiple clients.

Now, we know that we recorded 25 visits over the past month, but how were these split up?

Hmm, ok. Already I’m scratching my head, but at the same time, our judicial system requires us to prove beyond a shadow of a doubt, so maybe a news story or an algorithm shift got us some temporary rankings for this phrase. Skeptical as I was, if these visits were legitimate, you would think there would be some variation in the source, right? Well, according to the same Google Analytics that tells us we got a spike in visits for this keyword, all of these visits came from the same City:

Note: When changing our location to Castro Valley, the website did not appear until the #11 result on Google Chrome Incognito (to prevent personalization bias).

The visits also all came from the same browser:

And even the same Screen Resolution:

So now that I’m pretty confident there are some issues here, I wanted to give 2 more examples of where we’re seeing this issue arise under different circumstances.
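
For an agency with many clients, this check can be scripted rather than done by hand. The sketch below is hypothetical: it assumes you have exported a visit-level report to a CSV with one row per visit and columns for keyword, city, browser and screen resolution (the filename and column names are placeholders), and it flags keywords whose visits show no diversity at all in those dimensions:

```python
# Flag keywords whose visits are suspiciously uniform: every visit shares
# the same city, browser and screen resolution, as in the example above.
# Assumes a hypothetical per-visit CSV export; column names are placeholders.
import pandas as pd

visits = pd.read_csv("keyword_visits.csv")  # columns: keyword, city, browser, screen_resolution

diversity = visits.groupby("keyword").agg(
    visits=("keyword", "size"),
    cities=("city", "nunique"),
    browsers=("browser", "nunique"),
    resolutions=("screen_resolution", "nunique"),
)

# More than a handful of visits, yet only one city, browser and resolution:
suspicious = diversity[
    (diversity["visits"] >= 10)
    & (diversity[["cities", "browsers", "resolutions"]].max(axis=1) == 1)
]
print(suspicious.sort_values("visits", ascending=False))
```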

Do I Have a Stalker?

In a client call earlier this week, we discussed the curious fact that one of the top traffic-driving keywords for the entire month of June was the name of one of the Sales Executives. Now, if this were the CEO or Director of Marketing who had just spoken at a conference or been featured on the news, that would be one thing. No, this was a Sales Executive who was herself a little confused as to why her name would drive so many visits to the corporate website. When looking into the data, we saw a lot of the same consistencies we referenced above: same browser, operating system, city, screen resolution, etc. However, what was different about this example was that the visits were dispersed over several days:

Let’s Put it to the Test!

To figure this out, the client performed a test: he copied a random block of text from the homepage, searched for it, and clicked through to the site’s listing in Google. What happened next gives us the greatest cause for concern: over the course of the next week, the client did not close the tab in which he had accessed the SERP listing. To be clear, he did not revisit the tab, refresh it, or re-search for the query in Google. Yet over the course of that week, the single visit he made as a test was recorded as 27 visits in Google Analytics:

You will notice that no visits were recorded on Saturday and Sunday, when he was out of the office and not actively using his computer. While we’re not certain what caused this activity, there is some speculation that it may have been an auto-refresh in the browser, or that visits were recorded as the computer recovered from “Sleep Mode,” but we have no confirmation at this point. (We are, however, currently testing whether this might be the case.)

Once the tab was closed and the cache was cleared on Friday, July 22, these visits were no longer being recorded and the traffic for the keyword returned to zero.

They Just Can’t Get Enough!

The last example shows how you can spot potentially faulty reporting by looking for abnormalities in your metrics. The term in question is one that we’ve been tracking for quite some time, so we know for a fact that rankings on this term have not moved over the past two weeks. However, this week we saw visits spike with no discernible explanation. The Sherlock Holmes in me was, once again, intrigued. How did a term that experienced no rankings movement, no notable new references, and no change in search activity that I could distinguish spike so much?

Rather than rehash the above points, I suggest you also take a look at any abnormalities in pages per visit or average time on site, as these will show you places where there was unusual (and potentially incorrect) reporting of the activity on your site. Normally, we wouldn’t place too much stock in these metrics, but for the sake of this argument they do provide valuable insight into the shortcomings of Google Analytics. You will notice from the below that we saw absurd increases in these metrics:

The biggest thing to look at here is the comparison to the site average: if we averaged 55 pages per visit, this wouldn’t be remarkable. However, a 1,300% increase in pages per visit and an almost 1,600% increase in average time on site are strong indicators that something is wrong and that this data is likely skewing your results to appear more favorable than they really are.
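
If you want to scan for this systematically rather than eyeballing individual keywords, a rough sketch might look like the following. The CSV export, its column names and the 500% threshold are all placeholders of our own choosing, and the mean across keywords stands in for the true site average:

```python
# Flag keywords whose engagement metrics deviate wildly from the average,
# as with the 1,300% / 1,600% jumps described above.
# Assumes a hypothetical per-keyword CSV export; column names are placeholders.
import pandas as pd

keywords = pd.read_csv("keyword_engagement.csv")  # columns: keyword, pages_per_visit, avg_time_on_site

metrics = ["pages_per_visit", "avg_time_on_site"]
site_avg = keywords[metrics].mean()  # proxy for the site-wide average

# Percentage deviation of each keyword from that average
deviation = (keywords[metrics] - site_avg) / site_avg * 100

flagged = keywords[(deviation.abs() > 500).any(axis=1)]  # 500% threshold is arbitrary
print(flagged[["keyword"] + metrics])
```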

So what does this mean for you?

Before you celebrate monumental wins (or losses for that matter), you should always validate the data. Just as we should always consider “what strategy may have caused this?” we should also consider what reporting deficiencies may have caused it.

Another suggestion is to look at Unique or Absolute Visitors, which from our review seem to provide more realistic numbers and are not affected by (for example) the changes to how sessions are being recorded.

Finally, always make sure you’re looking beyond your top 3, 5, or even 10 keywords. While a reporting error of 25 visits (as with our first example) may not be a huge deal, a 25-visit error across 50 or 100 keywords can, in the aggregate, seriously hurt the validity of your data.

Are you seeing similar results in your data? We encourage you to share your thoughts or feedback in the comments section below. If you have specific questions you can also feel free to reach out to

Read: Beware: Google Analytics May Be Lying to You | SEER Interactive

Roland Berger Strategy Consultants – Trend Compendium 2030

Gen Xers Are Online Media Kings

A new eMarketer report, “Gen X: Demographic Profile and Marketing Approaches,” indicates that 34- to 45-year-old consumers are as comfortable with digital media as with traditional media. “To effectively engage with Gen X, brands need a strategy that incorporates multiple channels—including mobile, social and online video—with authentic, relevant messaging,” the report notes.

The report suggests brands would be wise to include online video in the media mix, alongside TV.

Gen X constitutes the largest online video audience. eMarketer forecasts that 74.2% of Gen X internet users will watch online video at least monthly in 2011, and that percentage is expected to grow to 80% by 2015.

Most Gen Xers are online. eMarketer estimates that 88% of the segment are web users in 2011, and that number is expected to increase to 90.9% by 2015. They are slightly more likely than the general population to visit online retail sites and significantly more likely to visit mobile retail sites, according to comScore.

Top product categories purchased online by Gen Xers in fall 2010 included apparel, airline tickets, books and hotel reservations, concluded a survey by GfK MRI.

For more information see http://www.emarketer.com

Content Snackers Become Cord Cutters; Change The TV World As We Know It

Every five or so years for the past two decades the introduction of an Internet connection to a new device type has created a boom in disruptive businesses. Most of these booms—computers, followed by mobile phones, gaming consoles and now tablets—have been clearly successful. Others (remember the Network Computer?) have been ill-timed.

Now manufacturers, and a growing ecosystem of partners to support them, are betting big that consumers are finally poised to accept an Internet connection in their most cherished living-room technology mainstay, the television. Players from Samsung to Sony are bringing the so-called Connected TV (CTV) to market en masse, and you’ll see a big push this holiday season. There are already upwards of fourteen million CTVs in North America, and an estimated 65 percent of TVs sold in 2012 will be CTVs.

With every platform change, both new and established companies have lined up to try to capture a share of the redistribution of rewards that inevitably comes when consumers change their habits. North American television advertising is certainly no exception, as a host of companies, old and new, line up to try to capture their share of that $62 billion annual advertising feast.

While there has been some preparation to date, incumbents have an incredibly hard time cannibalizing existing revenue streams for growing, but yet to mature, new revenue streams. We’ve seen this with everything from books to brokerages. And in the TV world, we are seeing it on display with the recent stutter of Hulu, the pioneering archetype, catching arrows in their back from erstwhile incumbent partners as they bravely forge ahead.

Such is the nature of distribution when the business advantage is built primarily on pricing and bundling, and carefully restricted access, not on real consumer demonstrated desires and behaviors.

Technology has always been on the side of the consumer, especially in the realm of television viewing. You may not remember now, but broadcasters bitterly fought the arrival of cable in the 70s. And while it seems absurd now, given it has created hundreds of billions in revenue, studios fought against the arrival of DVDs in the late 90s. The early titles were a handful of B movies released by Warner Brothers in conjunction with Toshiba. It was all Toshiba could get at the time.

We may be seeing another disruption today. With a new wave of CTV content applications, the pricing and access advantage of cable television may dissipate. Imagine downloading a TNT program application directly from Turner rather than paying a cable company for access to Turner content. Content providers themselves already, or will soon, have the tools to reach their audience directly on the big screen. Turner could pocket 100% of any subscription fee and advertising revenue rather than having to share with a distribution partner.

The traditional distribution players are betting, but not banking, on the fact that new television distribution will look substantially similar to old television distribution. They are expanding their services to include on-demand viewing and hoping much will continue as before with consumers paying a fee for content bundles.

But what if that’s not the way it goes down? What if, like mobile phones and the PC before them, consumers choose to snack on content delivered directly to them by the content providers themselves, effectively removing the pricing, bundling and access advantage of traditional cable and satellite television distribution? In that world the power of delivery, and of advertising insertion, shifts directly to content providers, device manufacturers and the ecosystem of direct, Internet-connected business partners they surround themselves with. In that scenario, online advertising businesses have a distinct advantage over traditional distribution businesses, as they are already in the market pumping billions of video ads through existing devices like PCs, mobile phones and tablets.

Sure, distribution incumbents like Comcast could make IP-connected set-top boxes that consumers use to access content directly, unbundled or a la carte, but that erodes their existing revenue model around cable pricing. The industry calls folks who end-run cable to get their content directly from content companies “cord cutters.” A recent Morgan Stanley report concluded that cable companies would have to double the internet access fees of so-called cord cutters to make up for the lost revenue on cable TV packages.

There is change brewing. Years in this business, and witness to its booms and busts, have taught all of us to be cautious of absolutist rhetoric opining the end of any particular distribution channel. Consumers have shown a remarkable ability to expand their entertainment appetites, and new consumption habits largely prove additive, not cannibalistic (except for my poor print friends, of course). So be suspect of anyone who claims that all programming or advertising is going to be wholly delivered in a particular way. But the numbers themselves are so enormous, and the opportunity so large, that even a ten percent swing in consumer viewing habits from cable and satellite to Connected TV applications and cord cutters would represent a shift of $6.2 billion in advertising spending. That, to me, is a scenario worth preparing for.

http://techcrunch.com/2011/08/21/content-snackers-cord-cutters-tv/

Global Map of Social Networking

Every Hour of TV You Watch May Shorten Your Lifespan By 22 Minutes

08/17/11 by Ben Parr

In case you needed more proof that watching excessive amounts of TV is bad for your health: new research shows that there is a correlation between the amount of time you spend in front of the TV and how long you live.

A study by researchers at the University of Queensland in Australia has concluded that, for every hour of television watched after age 25, the average human lifespan drops by 22 minutes. A person who watches six hours of TV per day will, on average, live five years less than people who spend less time on the couch and in front of the television screen. Those are some scary numbers.
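
The arithmetic roughly checks out. As a back-of-the-envelope check, assuming viewing continues from age 25 to about 80 (an assumption of ours, not stated in the article), the 22-minute figure does add up to roughly five years:

```python
# Back-of-the-envelope check of the study's headline numbers.
# Assumption (not from the article): viewing runs from age 25 to about 80.
MINUTES_LOST_PER_HOUR_WATCHED = 22
HOURS_WATCHED_PER_DAY = 6
YEARS_OF_VIEWING = 80 - 25

hours_watched = HOURS_WATCHED_PER_DAY * 365.25 * YEARS_OF_VIEWING   # ~120,000 hours of TV
years_lost = hours_watched * MINUTES_LOST_PER_HOUR_WATCHED / 60 / 24 / 365.25
print(f"{years_lost:.1f} years of lifespan lost")                   # prints roughly 5.0
```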

The study tracked data from 11,000 Australian participants over the age of 25. It was published earlier this month in the British Journal of Sports Medicine.

This study doesn’t prove that TV is quietly killing us. It’s more likely that lack of exercise and bad eating habits are shortening the lifespans of TV couch potatoes. A person who spends six hours a day staying active is almost certainly going to live longer than a person who likes to lean back in a recliner watching countless episodes of Judge Judy or Law and Order: SVU.

It’s not just TV watching that’s bad for you, either. We recently learned that sitting in front of the computer for six hours a day increases your risk of death by 40%. And with Americans watching more video than ever, the health problem is growing. That’s why we’re fans of the stand-up desk.

http://mashable.com/2011/08/17/tv-lifespan-study/

The 2011 Great Debate — Rent vs. Buy

A great way to visualise data online. Try it!
