When it comes to maintaining constant ‘uptime’ for customer experience, even the most reputable industry players face hurdles. In 2018, businesses would do well to get to grips with their services in terms of improving availability, driving optimum performance and delivering the ultimate customer experience. Here are some new customer experience challenges businesses are facing – and tips on how to solve them.

Addressing availability before user experience

Last year, we witnessed several major website and application outages, from Virgin Money Giving crashing the night before the London Marathon to WhatsApp going down on New Year’s Eve. The financial and reputational impact on businesses, whether caused by traffic volume or IT glitches, can be severe. It was recently revealed that hundreds of UK parents were unable to access their tax-free childcare benefits via an HMRC-run website, resulting in unpaid – and unhappy – childcare providers.

These examples demonstrate how much work businesses still need to put in before they can really compete on user experience. Failing to ensure constant availability and enforce a user-first mentality directly affects market positioning and, often, the delivery of vital services.

How the supply chain shapes user experience

Application supply chains are becoming increasingly long and complex. Consequently, we can no longer infer a positive user experience by monitoring individual components alone; the experience needs to be measured at the edge, at a macro level. Exercising and testing the full supply chain could be a real market differentiator in 2018 for businesses looking to achieve user experience excellence.

The UX-driven evolution of C-suite roles

This year, it will become more important for the entire C-suite to be involved in monitoring the ‘performance dashboard’ as businesses become increasingly digital.
It’s essential that roles evolve soon – for example, the CIO and CDO roles will begin to merge as the need for UX awareness and appreciation takes over from technical knowledge alone. The Head of Customer Experience will play a key role in supporting digital services. In some organizations, this could mean that CDOs start to tackle more customer experience requirements; in others, it may mean that, over time, the CDO becomes the customer experience role.

By designing user-friendly digital services that perform at all times – including in periods of peak traffic – large industry players will be able to meet customer expectations with their user experience offerings. Smaller organizations, in turn, gain an opportunity to compete in the ever-evolving digital landscape.

Businesses will put web monitoring and load testing first

How long is ten seconds for a website home page to load? Long enough to lose you 40% of potential users if they can’t access what they want in that time frame. Ultimately, regardless of whether a site or service’s graphical user interface is optimized for a great user experience, if performance and response times fail to meet user expectations, companies will suffer. In the not-too-distant future, I believe we’ll see business leaders focus on web and application monitoring and load testing so that UX teams can concentrate on doing what they do best: developing forward-thinking digital services that make the user experience faster and better.
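As a rough illustration of the ten-second rule described above, here is a minimal Python sketch that flags slow page-load samples and estimates the visitors lost to them. The function names and sample figures are illustrative inventions, not any vendor’s API; the threshold and abandonment rate come from the survey figures cited in this article.

```python
# Illustrative sketch only: flag page-load samples that breach a UX threshold
# and estimate the visitors lost. Constants reflect the article's survey
# figures (10-second limit, 40% abandonment); everything else is hypothetical.

LOAD_TIME_THRESHOLD_S = 10.0   # users start abandoning past this point
ABANDON_RATE = 0.40            # share of visitors lost beyond the threshold

def flag_slow_samples(load_times_s):
    """Return the load-time samples (in seconds) that breach the threshold."""
    return [t for t in load_times_s if t > LOAD_TIME_THRESHOLD_S]

def estimated_visitors_lost(total_visitors, load_times_s):
    """Rough estimate: apply the abandonment rate to the share of visitors
    who saw a slow load, assuming samples are representative of traffic."""
    if not load_times_s:
        return 0
    slow_fraction = len(flag_slow_samples(load_times_s)) / len(load_times_s)
    return int(total_visitors * slow_fraction * ABANDON_RATE)

samples = [1.2, 3.4, 26.5, 8.9, 14.0]        # seconds, made-up measurements
print(flag_slow_samples(samples))             # the two breaching samples
print(estimated_visitors_lost(10_000, samples))
```

Even a back-of-the-envelope model like this makes the business case concrete: two slow samples out of five translates into hundreds of lost visitors at modest traffic volumes.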
The Philadelphia Eagles may still be reveling in their victory over the New England Patriots in Sunday’s historic Super Bowl win, but many game-day advertisers won’t be quite as happy in the aftermath. After the vast creative (and financial!) investment in running a prized Super Bowl ad, it should have been a time for businesses to reap the rewards of a guaranteed increase in traffic. Unfortunately, we identified several websites that went down during this peak time, undoing so much hard work with each second of poor performance.

This year, plenty of commercials were released prior to kick-off. This meant that our teams, using data in Apica Synthetic, had the opportunity to monitor different companies’ performance and availability as Super Bowl Sunday unfolded. What we found was that many major brand sites simply couldn’t handle the volume of traffic driven by the commercials, with some experiencing load delays of nearly 30 seconds and others falling over completely. What was going wrong?

Downtime – It Stinks

Here is the longest homepage URL timeout during the Super Bowl. As you can see, Febreze’s “Bleep don’t stink” ad caused its website to really suffer as a consequence. We released a consumer expectations survey last year, and based on the findings, 40% of visitors will leave a site if it takes longer than ten seconds to load. Here, the longest load time was a whopping 26.5 seconds!

Diving deeper into the problems of website performance, an HTTP 500 error code indicated internal server errors. What does this mean? More extensive and damaging downtime. In fact, Febreze’s site was down for up to an hour during Super Bowl Sunday. As a business leader, if you were paying this much for the opportunity to advertise on one of the world’s biggest annual platforms, your website and other supporting marketing vehicles should be tested and ready to go! Sadly, it looks like some hadn’t read the playbook.
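For readers wondering how a figure like “down for up to an hour” is derived, here is a minimal sketch of the idea, assuming evenly spaced probes where any 5xx status code (or a timeout) marks the whole interval as down. The function and the sample probe log are hypothetical, not how Apica Synthetic actually computes it.

```python
# Hypothetical sketch: tally downtime from a log of periodic HTTP probes.
# A 5xx status (server error) or None (timeout) counts the interval as down.

def summarize_downtime(status_codes, interval_s=60):
    """Return (total_downtime_s, longest_outage_s) from evenly spaced probes."""
    down = [code is None or code >= 500 for code in status_codes]
    total_down_s = sum(down) * interval_s
    longest = run = 0
    for is_down in down:
        run = run + 1 if is_down else 0   # length of the current outage streak
        longest = max(longest, run)
    return total_down_s, longest * interval_s

# Invented probe log: ten healthy minutes, an hour of HTTP 500s, then recovery.
log = [200] * 10 + [500] * 60 + [200] * 5
print(summarize_downtime(log))   # total and longest outage, in seconds
```

With one-minute probes, sixty consecutive 500s is exactly the hour-long worst case described above, which is why probe frequency matters: coarser intervals can hide short outages entirely.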
I Like Beer, But…

Let’s look at site performance based on Michelob Ultra’s “I Like Beer” 60-second ad. Over a 24-hour period on Sunday, you can see two big ‘dots’ of downtime, not to mention the tallest spike, where it looks like the site took 50 seconds to load. Seriously? I like a beer, but I’m not going to wait 50 seconds to check out your website! And as industry research has been showing for a few years, no one else is going to either.

The company made a big marketing and publicity push for the Super Bowl, and it looks like it suffered a breach of its service level agreement (SLA) during the most critical ad time of its year. It’s a worst-case scenario that prior planning and preparation would have avoided. It’s so important to test your applications before the big day; it’s not as if the winning QB just turned up to the Super Bowl without having trained and assumed everything would be fine, is it?

What can we learn from this?

It’s all very well pointing fingers, but organizations need to understand what they can do to avoid similar downtime in the future. Companies of all sizes, across all industries, need to load test their sites to handle increased traffic and anticipate when that increase will occur. Hint – probably during halftime at the Super Bowl. Stretch your site to its limit if you want to understand where potential problems might lie. Synthetic monitoring, too, provides an overall picture of a site’s availability over time, which is critical information. Monitoring matters more than ever.
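To make the load-testing lesson concrete, here is a toy, self-contained sketch of the core idea: fire many concurrent requests at a target and record per-request status and latency. It spins up a throwaway local server so it can run anywhere; a real test would point a purpose-built load-testing tool at a staging copy of the production site with far higher volumes. Everything here is illustrative, not a production harness.

```python
# Toy load-test sketch: a throwaway local HTTP server plus a pool of
# concurrent clients measuring per-request latency. Illustrative only.
import http.server
import threading
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

class QuietHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):  # silence per-request console logging
        pass

# Port 0 asks the OS for any free port; run the server on a daemon thread.
server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), QuietHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_port}/"

def timed_get(_):
    """One virtual user: fetch the page, return (status_code, latency_s)."""
    start = time.perf_counter()
    with urllib.request.urlopen(url) as resp:
        resp.read()
        code = resp.status
    return code, time.perf_counter() - start

# 100 requests from 20 concurrent "users".
with ThreadPoolExecutor(max_workers=20) as pool:
    results = list(pool.map(timed_get, range(100)))

server.shutdown()
codes = [code for code, _ in results]
latencies = [latency for _, latency in results]
print(len(results), "requests, worst status", max(codes),
      f"worst latency {max(latencies):.3f}s")
```

The questions a real load test answers are the same ones this toy surfaces in miniature: at what concurrency do latencies climb, and at what point do non-200 responses start appearing?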
StubHub, one of the most well-known online ticket exchange companies, crashed for more than 20 minutes following the Georgia Bulldogs’ double-overtime win on January 1st. Although StubHub has yet to reveal the cause of the crash, in the age of instant digital response Georgia fans immediately took to Twitter to announce they had “broken StubHub”, and many poked fun at Bulldog pride and its ability to overthrow a major player in the industry.

Despite the mockery on social media, it’s unlikely that users will abandon StubHub entirely; however, many may have jumped to some of StubHub’s competitors – Razorgator, Vivid Seats, or SeatGeek, to name a few. Apica released a consumer expectations survey in mid-2017 showing that consumers have little tolerance for slow websites, often abandoning them within ten seconds.

What can we learn from this?

Companies, especially those that cater to end-users, need to test and monitor their sites to handle traffic spikes, no matter how big or small. Application performance monitoring matters more than ever, and if StubHub can go down from a relatively small, regionalized event, so can your business. To this end, integrating website, application, and API performance testing and monitoring into your product and business strategy has never been as critically important as it is now. Find out how Apica can help your business ensure application performance monitoring today. Take a look at our products: Apica LoadTest, Apica Synthetic.
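At its core, synthetic monitoring means probing a service on a fixed schedule and deriving an availability figure from the pass/fail log. A minimal sketch of that arithmetic, with an invented probe log shaped like the incident above (a 20-odd-minute outage in a day of one-minute probes); real tooling does far more, but the headline number is computed this way.

```python
# Minimal availability arithmetic over a synthetic-check log. The probe log
# below is invented for illustration: one-minute probes across a day, with
# a 21-minute outage in the middle (loosely mirroring the StubHub incident).

def availability_pct(probe_results):
    """probe_results: list of booleans, True meaning the probe succeeded."""
    if not probe_results:
        return 100.0
    return 100.0 * sum(probe_results) / len(probe_results)

day = [True] * 700 + [False] * 21 + [True] * 719   # 1440 one-minute probes
print(f"{availability_pct(day):.2f}% available")
```

A 21-minute outage still leaves a dashboard reading above 98.5% for the day, which is exactly why raw availability percentages need to be paired with when the downtime happened: twenty minutes during a traffic spike costs far more than the same twenty minutes at 4 a.m.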
With 2017 nearly over, organizations are hotly anticipating what the new year will bring in terms of trends, developments and hidden surprises in the software testing industry. It’s been a busy year: increased automation adoption within testing teams – where agile development boosted speed of release – has led to QA becoming more embedded within development teams than ever before. As software testing and monitoring gain increasing importance, it’s time to ask – what’s next? Here are my 2018 predictions for our industry.

Bigger AI breakthroughs

Next year, we’ll see businesses make real breakthroughs in utilizing artificial intelligence (AI) and machine learning – namely, to better understand captured data. It can be difficult to see how broad concepts such as AI manifest physically; intelligent objects, or ‘things’, bridge that gap. In the past, connected devices sent data for limited processing; today, machine learning enables devices to transform that data into tangible insight, transforming the behavior of IoT devices worldwide.

UX monitoring will be key

With 2017’s major outages in mind, it’s clear the industry has not progressed quickly enough to address exponential IoT and API economy growth. Although some businesses are achieving great things in testing and monitoring, many organizations are still focusing their efforts on speed instead of quality, security and resilience. Looking ahead to the new year, companies need to address the overall quality of their services to remain competitive. Consequently, we’ll see a shift in focus toward monitoring the customer experience, as well as the need for thorough end-to-end testing embedded within the delivery lifecycle.

Differentiation will come down to availability

If 2018 sees vendors offering similar capabilities, how will consumers decide where to spend their money?
Differentiation of services will come down to availability, ease of use and a consistent, high-quality experience. Increased reliance on IoT devices, their data and their management will also drive the need for high availability of the API services these devices interact with. Monitoring the availability of these APIs is key to ensuring that organizations can run – especially in the manufacturing space – and that business intelligence data can be trusted by leaders.

The CMO role will evolve

Historically, software testing and monitoring have been the responsibility of the IT department, be that the development teams for testing or operations on the monitoring side. Next year, with digital transformation underway in most organizations, in addition to the explosion of connected devices and the data processing derived from IoT, focus will shift onto application quality as well as customer experience. Testing and monitoring should therefore be of keen interest to the COO and the CMO, and this will result in more rounded testing, with team members coming from different parts of the organization. That’s a potential step change in the type of testing that occurs.

Validation of results will transform

In 2018, we will see further adoption of AI: major software vendors will increasingly embed machine learning within their core applications. In addition, machine learning will become a standard platform for data analytics in new development initiatives – benefiting IoT vendors most of all, due to the explosion of data in the market. This will challenge the testing community, as new ways of testing and validating the results from AI will need to be identified.

Although I have confidence in these predictions, I’m aware that there will be events in the new year that no one in the industry could foresee. We at Apica look forward to the challenge. Here’s to a successful 2018!