The Philadelphia Eagles may still be reveling in Sunday’s historic Super Bowl victory over the New England Patriots, but many game-day advertisers won’t be quite as happy in the aftermath. After the vast creative (and financial!) investment that goes into running a prized Super Bowl ad, it should have been a time for businesses to reap the rewards of a guaranteed increase in traffic. Unfortunately, we identified several websites that went down during this peak time, undoing so much hard work with each second of poor performance.
This year, plenty of commercials were actually released prior to kick-off. This means that our teams, using data in Apica Synthetic, had the opportunity to monitor different companies’ performance and availability as Super Bowl Sunday unfolded.
What we found was that many major brand sites just couldn’t handle the volume of traffic driven by the commercials, with some experiencing load delays of nearly 30 seconds and others falling over completely. What was going wrong?
Downtime – It Stinks
Here is the longest homepage URL timeout during the Super Bowl. As you can see, Febreze’s “Bleep don’t stink” ad caused its website to really suffer.
We released a consumer expectations survey last year, and its findings showed that 40% of visitors will leave a site if it takes longer than ten seconds to load. Here, the longest load time was a whopping 26.5 seconds!
Diving deeper into the problems of website performance, HTTP 500 status codes indicated internal server errors. What does this mean? More extensive and damaging downtime. In fact, Febreze’s site was down for as long as an hour during Super Bowl Sunday. As a business leader, if you were paying this much for the opportunity to advertise on one of the world’s biggest annual platforms, your website and other supporting marketing vehicles should be tested and ready to go! Sadly, it looks like some hadn’t read the playbook.
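To make the diagnosis concrete, here is a minimal sketch of how a synthetic check might bucket HTTP status codes and flag the 5xx responses described above. The `classify` and `check` function names and the timeout are illustrative assumptions, not Apica’s actual implementation; the status ranges follow the HTTP specification.

```python
# Minimal availability-check sketch (illustrative; not a real monitoring product).
import time
from urllib.request import urlopen
from urllib.error import HTTPError, URLError

def classify(status):
    """Bucket an HTTP status code the way a monitor's alerting might."""
    if 200 <= status < 300:
        return "ok"
    if 400 <= status < 500:
        return "client error"
    if 500 <= status < 600:
        return "server error"  # e.g. 500 Internal Server Error, as seen here
    return "other"

def check(url, timeout=10):
    """Return (bucket, elapsed_seconds) for one GET; network failures count as down."""
    start = time.monotonic()
    try:
        with urlopen(url, timeout=timeout) as resp:
            resp.read()
            return classify(resp.status), time.monotonic() - start
    except HTTPError as err:  # urlopen raises on 4xx/5xx responses
        return classify(err.code), time.monotonic() - start
    except URLError:
        return "down", time.monotonic() - start
```

Run on a schedule, a check like this would have surfaced the hour-long 500 outage within minutes rather than after the game.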
I Like Beer, But…
Let’s look at site performance based on Michelob Ultra’s “I Like Beer” 60-second ad. Over a 24-hour period on Sunday, you can see two big ‘dots’ of downtime, not to mention the tallest spike, where it looks like the site took 50 seconds to load. Seriously? I like a beer, but I’m not going to wait 50 seconds to check out your website! And as industry research has been showing for a few years, no one else is going to either.
The company did a big marketing/publicity push designed for the Super Bowl and it looks like they had a breach of their service level agreement (SLA) during the most critical ad time of their year. It’s a worst-case scenario that prior planning and preparation would have avoided.
It’s so important to test your applications before the big day; it’s not as if the winning QB just turned up to the Super Bowl without having trained and assumed everything would be fine, is it?
What can we learn from this?
It’s all very well pointing fingers, but organizations need to understand what they can do to avoid similar downtime in the future. Companies of all sizes, across all industries, need to load test their sites to handle increased traffic and anticipate when that increase will occur. Hint – probably during halftime at the Super Bowl. Stretch your site to its limit if you want to understand where potential problems might lie.
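As a rough illustration of the load-testing idea, the sketch below fires concurrent requests at a URL and reports the worst latency and the count of 5xx errors. It is a toy, not a substitute for a real load-testing tool; to keep it runnable anywhere, it targets a throwaway local server rather than a production site, and the concurrency and request counts are arbitrary assumptions.

```python
# Toy concurrent load-test sketch. A real test would use far higher volumes
# and a purpose-built tool; this only shows the shape of the exercise.
import threading
import time
from concurrent.futures import ThreadPoolExecutor
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
from urllib.request import urlopen

class Handler(BaseHTTPRequestHandler):
    """Trivial stand-in for the site under test; always answers 200."""
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Length", "2")
        self.end_headers()
        self.wfile.write(b"ok")
    def log_message(self, *args):  # keep test output quiet
        pass

def timed_get(url, timeout=10):
    start = time.monotonic()
    with urlopen(url, timeout=timeout) as resp:
        resp.read()
        return resp.status, time.monotonic() - start

def load_test(url, concurrency=20, requests=100):
    """Hit url with concurrent GETs; return (worst, average, count of 5xx)."""
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        results = list(pool.map(lambda _: timed_get(url), range(requests)))
    latencies = [t for _, t in results]
    errors = sum(1 for status, _ in results if status >= 500)
    return max(latencies), sum(latencies) / len(latencies), errors

if __name__ == "__main__":
    server = ThreadingHTTPServer(("127.0.0.1", 0), Handler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    url = f"http://127.0.0.1:{server.server_address[1]}/"
    worst, avg, errors = load_test(url)
    print(f"worst={worst:.3f}s avg={avg:.3f}s 5xx={errors}")
    server.shutdown()
```

Ramping `concurrency` upward until latency or errors climb is the crude version of stretching a site to its limit.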
Synthetic monitoring, too, provides an overall picture of the site’s availability over time, which is critical information. Monitoring matters more than ever. If a commercial with no direct link to a website or specific application can impact availability and performance, it’s crucial for organizations to always be monitoring and testing for performance, and never resting on their laurels.
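The “availability over time” picture boils down to simple arithmetic over a series of check results. The sketch below, with made-up sample data, shows two numbers a monitor would report: uptime percentage and the longest consecutive outage (the “down for an hour” figure above is exactly this kind of streak).

```python
# Turning a series of synthetic-check results into availability figures.
# Each sample is (timestamp, up_bool); the data format is an assumption.

def availability(samples):
    """Percent of checks that succeeded."""
    if not samples:
        return 100.0
    up = sum(1 for _, ok in samples if ok)
    return 100.0 * up / len(samples)

def longest_outage(samples):
    """Longest consecutive run of failed checks, in number of checks."""
    worst = run = 0
    for _, ok in samples:
        run = 0 if ok else run + 1
        worst = max(worst, run)
    return worst

# Hypothetical hour of five-minute checks: down for checks 3, 4, and 5.
samples = [(i * 300, i not in {3, 4, 5}) for i in range(12)]
print(availability(samples))    # 75.0
print(longest_outage(samples))  # 3 consecutive failures = 15 minutes down
```

With five-minute checks, a streak of twelve failures would be the hour-long outage seen on Super Bowl Sunday.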
Integrating website, application, and API performance testing and monitoring into your product and business strategy has never been as critically important as it is now. Being ready for every eventuality means that organizations aren’t just taking a punt on their customers’ satisfaction. In a culture of constant availability, downtime is simply no longer an option.