The Philadelphia Eagles may still be reveling in their historic Super Bowl victory over the New England Patriots on Sunday, but many game-day advertisers won't be quite as happy in the aftermath. After the vast creative (and financial!) investment in running a prized Super Bowl ad, it should have been a time for businesses to reap the rewards of a guaranteed increase in traffic. Unfortunately, we identified several websites that went down during this peak period, undoing so much hard work with each second of poor performance.

This year, plenty of commercials were released prior to kick-off, which meant our teams, using data in Apica Synthetic, could monitor different companies' performance and availability as Super Bowl Sunday unfolded. What we found was that many major brand sites simply couldn't handle the volume of traffic driven by the commercials, with some experiencing load delays of nearly 30 seconds and others falling over completely. What was going wrong?

Downtime – It Stinks

Here is the longest homepage URL timeout during the Super Bowl. As you can see, Febreze's "Bleep don't stink" ad caused its website to really suffer as a consequence. We released a consumer expectations survey last year, and based on the findings, 40% of visitors will leave a site if it takes longer than ten seconds to load. Here, the longest load time was a whopping 26.5 seconds!

Diving deeper into the website's performance problems, an HTTP 500 error code indicated internal server errors. What does this mean? More extensive and damaging downtime. In fact, Febreze's site was down for as much as an hour during Super Bowl Sunday. As a business leader, if you were paying this much for the opportunity to advertise on one of the world's biggest annual platforms, your website and other supporting marketing vehicles should be tested and ready to go! Sadly, it looks like some hadn't read the playbook.
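The numbers above can be made concrete with a small sketch. To be clear, this is not how Apica Synthetic works internally; it is a minimal, self-contained illustration (the function names and thresholds are my own, with the ten-second cut-off taken from the survey finding cited above) of how a synthetic check might classify a single measurement, and of how an hour of downtime translates into a daily availability figure.

```python
def classify_check(status_code, load_time_s, slow_threshold_s=10.0):
    """Classify one synthetic check result.

    HTTP 5xx responses count as downtime; loads slower than the
    threshold (ten seconds, per the survey finding above) count as
    'slow' -- the point at which roughly 40% of visitors give up.
    """
    if status_code >= 500:
        return "down"
    if load_time_s > slow_threshold_s:
        return "slow"
    return "ok"

def availability(check_results):
    """Fraction of checks that did not come back 'down'."""
    up = sum(1 for result in check_results if result != "down")
    return up / len(check_results)

# The 26.5-second load observed for the Febreze homepage:
print(classify_check(200, 26.5))   # "slow"
# An internal server error during the outage window:
print(classify_check(500, 0.4))    # "down"

# One check per minute over Super Bowl Sunday, with a
# one-hour outage (60 failed checks out of 1,440):
day = ["down"] * 60 + ["ok"] * 1380
print(f"availability: {availability(day):.1%}")  # 95.8%
```

Note what the last line implies: a single hour of downtime drags a full day's availability below 96%, far short of the "three nines" (99.9%) targets common in service level agreements.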
I Like Beer, But…

Let's look at site performance during Michelob Ultra's 60-second "I Like Beer" ad. Over a 24-hour period on Sunday, you can see two big 'dots' of downtime, not to mention the tallest spike, where it looks like the site took 50 seconds to load. Seriously? I like a beer, but I'm not going to wait 50 seconds to check out your website! And as industry research has shown for several years, no one else is going to either.

The company made a big marketing and publicity push built around the Super Bowl, and it looks like it suffered a breach of its service level agreement (SLA) during the most critical ad time of its year. It's a worst-case scenario that prior planning and preparation would have avoided. It's so important to test your applications before the big day; it's not as if the winning QB just turned up to the Super Bowl without having trained and assumed everything would be fine, is it?

What can we learn from this?

It's all very well pointing fingers, but organizations need to understand what they can do to avoid similar downtime in the future. Companies of all sizes, across all industries, need to load test their sites to handle increased traffic and anticipate when that increase will occur. Hint – probably during halftime at the Super Bowl. Stretch your site to its limit if you want to understand where potential problems might lie. Synthetic monitoring, too, provides an overall picture of a site's availability over time, which is critical information. Monitoring matters more than ever.
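To make the load-testing advice concrete, here is a minimal sketch of the idea: fire concurrent requests at a site and look at tail latency, not just the average. The helper names and the throwaway local stub server are my own inventions so the example runs offline; a real load test would target a staging copy of the production site, at far higher volumes, with a dedicated tool.

```python
import threading
import time
from concurrent.futures import ThreadPoolExecutor
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
from urllib.request import urlopen

class _StubHandler(BaseHTTPRequestHandler):
    """Stand-in for a real site so the sketch is runnable offline."""
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Length", "2")
        self.end_headers()
        self.wfile.write(b"ok")
    def log_message(self, *args):
        pass  # silence per-request logging

def run_load_test(url, total_requests=50, concurrency=10):
    """Fire GETs concurrently; return the sorted latencies in seconds."""
    def timed_get(_):
        start = time.perf_counter()
        with urlopen(url) as resp:
            resp.read()
        return time.perf_counter() - start
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        return sorted(pool.map(timed_get, range(total_requests)))

# Spin up the stub server on a free local port.
server = ThreadingHTTPServer(("127.0.0.1", 0), _StubHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

latencies = run_load_test(f"http://127.0.0.1:{server.server_port}/")
p95 = latencies[int(len(latencies) * 0.95) - 1]
print(f"{len(latencies)} requests, p95 latency {p95 * 1000:.1f} ms")
server.shutdown()
```

The p95 figure is the one to watch: a comfortable average can hide the slow tail that, per the survey finding above, is exactly what drives visitors away.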
With 2017 nearly over, organizations are hotly anticipating what the new year will bring in terms of trends, developments and hidden surprises. It's been a busy year for the software testing industry. Increased automation adoption within testing teams – where agile development boosted the speed of releases – has led to QA becoming more embedded within development teams than ever before. As software testing and monitoring gain importance, it's time to ask: what's next? Here are my 2018 predictions for our industry:

Bigger AI breakthroughs

Next year, we'll see businesses make real breakthroughs in utilizing artificial intelligence (AI) and machine learning – namely, to better understand captured data. It can be difficult to see how broad concepts such as AI manifest physically; intelligent objects or 'things' bridge that gap. In the past, connected devices sent data off for limited processing; today, machine learning enables devices to turn that data into tangible insight, transforming the behavior of IoT devices worldwide.

UX monitoring will be key

With 2017's major outages in mind, it's clear the industry has not progressed quickly enough to address the exponential growth of IoT and the API economy. Although some businesses are achieving great things in testing and monitoring, many organizations are still focusing their efforts on speed instead of quality, security and resilience. Looking ahead to the new year, companies need to address the overall quality of their services to remain competitive. Consequently, we'll see a shift in focus to monitoring the customer experience, along with a need for thorough end-to-end testing embedded within the delivery lifecycle.

Differentiation will come down to availability

If 2018 sees vendors offering similar capabilities, how will consumers decide where to spend their money?
Differentiation of services will come down to availability, ease of use and a consistent, high-quality experience. Increased reliance on IoT devices, their data and their management will also drive the need for high availability of the API services these devices interact with. Monitoring the availability of these APIs is key to ensuring that organizations can keep running – especially in the manufacturing space – and that business intelligence data can be trusted by leaders.

The CMO role will evolve

Historically, software testing and monitoring have been the responsibility of the IT department, be that the development teams for testing or operations on the monitoring side. Next year, with digital transformation underway in most organizations, in addition to the explosion of connected devices and the data processing derived from IoT, focus will shift onto application quality as well as customer experience. Testing and monitoring should therefore be of keen interest to the COO and the CMO, and this will result in more rounded testing, with team members drawn from different parts of the organization. That's a potential step change in the type of testing that occurs.

Validation of results will transform

In 2018, we will see further adoption of AI: major software vendors will increasingly embed machine learning within their core applications. In addition, machine learning will become a standard platform for data analytics in new development initiatives – benefiting IoT vendors most of all, given the explosion of data in the market. This will challenge the testing community, as new ways of testing and validating the results from AI will need to be identified.

Although I have confidence in these predictions, I'm aware that there will be events in the new year that no one in the industry could foresee. However, we at Apica look forward to the challenge. Here's to a successful 2018!