It’s no surprise that Gartner has estimated that by 2020, there will be seven billion connected business devices in existence. Companies are investing vast sums in IoT, but the question remains: how many of these devices are actually useful and deliver value? From the connected coffee machine that will tell you when you’re nearly out of pods, to the intuitive egg tray designed to deliver the earth-shattering news that the dreaded sell-by date is one day closer, there are countless ‘Internet of Useless Things’ devices out there we can probably live without. What’s worse is that there is real potential for organizations to spend millions on IoT projects without clear business objectives, and without strategic testing plans in place to ensure that these devices safely deliver as intended.

Technology to provide tangible real-life benefits

Klas Bendrik, SVP & CIO at Volvo Car Group, has got it right. On embracing IoT, he referred to creating more efficient and safer cars, as well as improving people’s journeys: “We take the best available technology and make it work in the most useful way for our customers. It’s about using technology to provide tangible real-life benefits, rather than providing technology just for the sake of it.” It’s a great point, and other companies would do well to deliver on this kind of value-focused approach. It also highlights that the performance and availability of connected devices will become important differentiators in an ever more competitive and crowded marketplace.

When in doubt, test again

Simply put, companies invested in IoT have to put the time into strategic monitoring and testing to guarantee continuous performance that actually adds tangible business value and stands up to the test of time and popularity. Connected IoT devices depend heavily on the speed of communication, which can expose them to issues such as slow internet connections or unreliable network hardware.
So in this sense, it’s critical to test IoT devices to ensure that they’re not failing to respond or losing data. Proactively monitoring your websites, applications, and APIs 24/7 is always going to be key to providing a good customer experience; doing this intermittently is just not an option. It means that you can fix any issues before they escalate, and before customers ‘hit a wall’ in their user experience and start complaining about availability or performance issues. Time is always of the essence in these instances, because key performance indicators like page load time are intrinsically linked to loss of visitors. If you test people’s patience with slow load times or other performance issues, you really do risk losing trade.

When it comes to IoT, cyber-crime and data privacy are other issues that should be considered. Who wants to get hacked by the egg tray or the not-so-conscientious coffee machine? Testing needs to push applications in all areas of performance, including security.

Within the year, the IoT market will likely be more than double the size of the smartphone, PC, tablet, connected car, and wearable markets combined. By then, let’s hope that there is a growing trend for IoT for business value, not just for IoT’s sake, and that companies recognize the need for proper testing to deliver a safe, reliable IoT user experience.
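To make the 24/7 monitoring idea concrete, here is a minimal sketch of the kind of synthetic probe such tooling runs: fetch a URL, record the response time, and flag the check as failed if the status is not 200 or the load time breaches a threshold. This is an illustrative example, not Apica’s product code; the `probe` function and the ten-second budget are assumptions, and the demo spins up a throwaway local server only so the snippet is self-contained.

```python
import threading
import time
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

def probe(url, timeout=10.0):
    """Fetch `url` once; return (status, elapsed_seconds, ok)."""
    start = time.monotonic()
    status = None
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            status = resp.status
    except Exception:
        pass  # network error or timeout: status stays None
    elapsed = time.monotonic() - start
    return status, elapsed, (status == 200 and elapsed < timeout)

class _AlwaysUp(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"up")
    def log_message(self, *args):
        pass  # keep the demo output quiet

# Self-contained demo: probe a throwaway local server.
server = HTTPServer(("127.0.0.1", 0), _AlwaysUp)
threading.Thread(target=server.serve_forever, daemon=True).start()
status, elapsed, ok = probe(f"http://127.0.0.1:{server.server_port}/")
server.shutdown()
```

A real deployment would run probes like this on a schedule, from many geographic locations, and alert the moment `ok` turns false.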
When it comes to maintaining constant ‘uptime’ for customer experience, even the most reputable industry players face hurdles. In 2018, businesses would do well to get to grips with their services in terms of improving availability, driving optimum performance, and delivering the ultimate customer experience. Here are some new customer experience challenges businesses are facing – and tips on how to solve them:

Addressing availability before user experience

Last year, we witnessed several major website and application outages, from Virgin Money Giving crashing the night before the London Marathon, to WhatsApp going down on New Year’s Eve. The financial and reputational impact on businesses, whether caused by traffic volume or IT glitches, can be severe. It was recently revealed that hundreds of UK parents are unable to access their tax-free childcare benefits via an HMRC-run website, resulting in unpaid – and unhappy – childcare providers. These examples demonstrate how much work businesses still need to put in before they can really compete on a user experience level. Failing to ensure constant availability and enforce a user-first mentality directly affects market positioning and, often, the delivery of vital services.

The supply chain improving user experience

Application supply chains are becoming increasingly long and complex. Consequently, we can no longer assume a positive user experience simply by monitoring individual components; the experience needs to be measured at the edge, at a macro level. Exercising and testing the supply chain could be a real market differentiator in 2018 for businesses looking to achieve user experience excellence.

The UX-driven evolution of C-suite roles

This year, it will become more important for the entire C-suite to be involved in monitoring the ‘performance dashboard’, as businesses become increasingly digital.
It’s essential that roles evolve soon – for example, the CIO and CDO will begin to merge as the need for UX awareness and appreciation takes over from technical knowledge alone. When it comes to supporting digital services, the Head of Customer Experience will play a key role. In some organizations, this could mean that CDOs start to tackle more customer experience requirements; in others, it may mean that over time, CDOs become the customer experience role. Through the design of user-friendly digital services that perform at all times – including in periods of peak traffic – large industry players will be able to meet customer expectations with their user experience offerings. In addition, smaller organizations are given an opportunity to compete in the ever-evolving digital landscape.

Businesses will put web monitoring and load testing first

How long is ten seconds for a website home page to load? Long enough to lose you 40% of potential users if they can’t access what they want in that time-frame. Ultimately, regardless of whether a site or service’s graphical user interface is optimized for a great user experience, if performance and response times fail to meet user expectations, companies will suffer. In the not-too-distant future, I believe we’ll see business leaders focus on web and application monitoring and load testing so that UX teams can concentrate on doing what they do best: developing forward-thinking digital services that make the user experience faster and better.
The Philadelphia Eagles may still be reveling in their victory over the New England Patriots in Sunday’s historic Super Bowl win, but many game-day advertisers won’t be quite as happy in the aftermath. After the vast creative (and financial!) input to running a prized Super Bowl ad, it should have been a time for businesses to reap the rewards of a guaranteed increase in traffic. Unfortunately, we identified several websites that went down during this peak time, undoing so much hard work with each second of poor performance. This year, plenty of commercials were actually released prior to kick-off. This means that our teams, using data in Apica Synthetic, had the opportunity to monitor different companies’ performance and availability as Super Bowl Sunday unfolded. What we found was that many major brand sites just couldn’t deal with the volume of traffic driven by the commercials, with some experiencing load delays of nearly 30 seconds and others falling over completely. What was going wrong?

Downtime – It Stinks

Here is the longest homepage URL timeout during the Super Bowl. As you can see, Febreze’s “Bleep don’t stink” ad caused its website to really suffer as a consequence. We released a consumer expectations survey last year, and based on the findings, 40% of visitors will leave a site if it takes longer than ten seconds to load. Here, the longest load time was a whopping 26.5 seconds! Diving deeper into the problems of website performance, an HTTP 500 error code indicated internal server errors. What does this mean? More extensive and damaging downtime. In fact, Febreze’s site was down for a maximum of an hour during Super Bowl Sunday. As a business leader, if you were paying this much for the opportunity to advertise on one of the world’s biggest annual platforms, your website and other supporting marketing vehicles should be tested and ready to go! Sadly, it looks like some hadn’t read the playbook.
I Like Beer, But…

Let’s look at site performance based on Michelob Ultra’s “I Like Beer” 60-second ad. Over a 24-hour period on Sunday, you can see two big ‘dots’ of downtime, not to mention the tallest spike, where it looks like the site took 50 seconds to load. Seriously? I like a beer, but I’m not going to wait 50 seconds to check out your website! And as industry research has been showing for a few years, no one else is going to either. The company did a big marketing/publicity push designed for the Super Bowl, and it looks like they had a breach of their service level agreement (SLA) during the most critical ad time of their year. It’s a worst-case scenario that prior planning and preparation would have avoided. It’s so important to test your applications before the big day; it’s not as if the winning QB just turned up to the Super Bowl without having trained and assumed everything would be fine, is it?

What can we learn from this?

It’s all very well pointing fingers, but organizations need to understand what they can do to avoid similar downtime in the future. Companies of all sizes, across all industries, need to load test their sites to handle increased traffic and anticipate when that increase will occur. Hint – probably during halftime at the Super Bowl. Stretch your site to its limit if you want to understand where potential problems might lie. Synthetic monitoring, too, provides an overall picture of the site’s availability over time, which is critical information. Monitoring matters more than ever.
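The load-testing advice above can be made concrete with a toy example. This is a deliberately tiny sketch in plain Python, not a real load-testing tool such as Apica LoadTest: it fires a batch of concurrent requests at a self-hosted local endpoint, then reports the error count and the achieved requests-per-second. The request and worker counts are arbitrary illustrative values.

```python
import threading
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class _Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")
    def log_message(self, *args):
        pass  # silence per-request logging

def hit(url):
    """One virtual-user request: returns (status, elapsed_seconds)."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status, time.monotonic() - start
    except Exception:
        return None, time.monotonic() - start

# Throwaway local target so the sketch is self-contained.
server = ThreadingHTTPServer(("127.0.0.1", 0), _Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_port}/"

n_requests, n_users = 50, 8  # tiny, illustrative load
start = time.monotonic()
with ThreadPoolExecutor(max_workers=n_users) as pool:
    results = list(pool.map(hit, [url] * n_requests))
wall = time.monotonic() - start
server.shutdown()

errors = sum(1 for status, _ in results if status != 200)
throughput = len(results) / wall  # achieved requests per second
```

Against a production site you would ramp `n_users` up gradually and watch for the point where response times start to fluctuate or `errors` climbs, which is exactly the limit you want to find before halftime does it for you.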
By 2020, there will be billions of devices connected to the Internet of Things (IoT). From the Google Homes and Fitbits in our personal lives to smart cities connecting everything from traffic lights to public transport and water supply systems, IoT devices are producing masses of data to improve virtually every aspect of the day-to-day. As an example, the automotive industry has risen to the challenge of applying IoT to cars. Devices can access road data and road conditions, or even call for help in an emergency. Manufacturers are no longer just “car companies” – they’ve evolved to become software companies and data centers, connecting to a variety of interconnected servers providing data.

Over the next few years, there will be a seismic shift in the automotive industry, delivering the power of the Internet and IoT to vehicles. This places a huge reliance on having battle-tested APIs so that IoT-enabled devices and services do not crash. Monitoring, advanced testing, and ongoing maintenance will be everything to the new ‘tech’ automotive companies of the future, on a scale never seen before, and the manufacturers that can bring the truly connected vehicle closer to reality first will have a considerable competitive advantage in the marketplace. A good performance monitoring and testing tool offers a fully developed IoT strategy to ensure companies across all verticals can develop and deploy complex scripts with bank-class security.

What does a good IoT strategy include?

1. End-to-end application visibility

The end-to-end aspect is really important for any large IoT deployment. Cycles are running in production, with new solutions being deployed regularly. Being able to program and set up monitoring provides complete, end-to-end visibility. Quality of service, showing SLAs, and application performance based on real device interactions are key.

2. Security and global execution

An enterprise IoT organization needs bank-class security and local certificates to validate that access to each app, vehicle, or device is secure. The certificate is required to authenticate and encrypt all communication. You need a tool that can fully emulate and run encrypted communication based on client certificates and session tickets for secure environments. Apica’s global network allows execution from more than 84 countries and 2,400 nodes worldwide, matching any deployment map.

3. IoT protocol support

There are only a few performance monitoring/digital experience monitoring vendors with a clear-cut IoT strategy. Many vendors offer API monitoring, but an IoT strategy requires very complex scripting of APIs, protocol support, and security functionality. Apica supports REST, WebSocket, MQTT, X.509 Certificates, Source Module support, and Java Programming Plugins. With our MQTT over WebSocket protocol support, you can monitor performance and test large-scale IoT systems.

4. Enterprise IT monitoring and testing

Enterprise companies need to test and monitor apps, network infrastructure, and APIs to ensure the end-user experience is flawless. Apica offers these services as an integrated part of all IT operations; however, you don’t have to be an IT expert to see and understand the full picture. Contact us today to learn more about implementing an IoT strategy.
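As a rough illustration of the client-certificate requirement described above, the snippet below shows how a client-side mutual-TLS context might be configured with Python’s standard `ssl` module. This is a generic sketch, not Apica’s implementation, and the certificate file names are hypothetical.

```python
import ssl

# Client-side context: verify the server's certificate and be ready to
# present a client certificate (mutual TLS), as IoT back ends often require.
ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy protocol versions

# Hypothetical file names -- a real device would load its own X.509
# client certificate and private key here so the server can authenticate it:
# ctx.load_cert_chain(certfile="device-client.pem", keyfile="device-client.key")
```

The default context already enables hostname checking and certificate verification; the device-side `load_cert_chain` call is what turns ordinary TLS into the mutual authentication the article describes.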
StubHub, one of the most well-known online ticket exchange companies, crashed for more than 20 minutes following the Georgia Bulldogs’ double-overtime win in Atlanta on January 1st. Although StubHub has yet to reveal the cause of the crash, in the age of instant digital response Georgia fans immediately took to Twitter to announce they had “broken StubHub”. In addition, many fans poked fun at the idea that Bulldog pride could overthrow a major player in the industry. Despite the mockery on social media, it’s unlikely that users will abandon StubHub; however, many may have jumped to some of StubHub’s competitors – Razorgator, Vivid Seats, or SeatGeek, to name a few. Apica released a consumer expectations survey in mid-2017 showing that consumers have little tolerance for slow websites, often abandoning them within 10 seconds.

What can we learn from this? Companies, especially those who cater to an end-user, need to test and monitor their sites to handle spiked traffic, no matter how big or small. Application performance monitoring matters more than ever, and if StubHub can go down from a relatively small, regionalized event, so can your business. To this end, integrating website, application, and API performance testing and monitoring into your product and business strategy has never been as critically important as it is now. Find out how Apica can help your business ensure application performance monitoring today. Take a look at our products: Apica LoadTest, Apica Synthetic
Klarna’s 7 secrets for maintaining four nines uptime

1. Implement end-to-end responsibility
2. Get started on a shift in architecture (microservices in a cloud platform, and graceful degradation)
3. Keep centralized Incident Management (operations land, OPs knowledge)
4. Support proper Problem Management
5. Do continuous improvement/feedback – on all levels (lives, dev-teams, retros, incident reports)
6. Save minutes/seconds in communication
7. Service customers from the new platform

Last week, 451 Research partnered with Apica’s customer Klarna, a $319M+ fintech company, to understand how they improved their availability and increased their bottom line and overall customer satisfaction using a proactive monitoring toolset, including Apica Synthetic. Proactive DEM (digital experience monitoring) has become a business requirement for all digital organizations; in a time when application complexity continues to rise and cloud migration is no longer just a strategic vision, businesses are investing in people, practices, and tools to ensure their critical internal and external applications are always on and high performing. Watch the full webinar today.
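For context on what “four nines” means in practice, the arithmetic is simple: a 99.99% availability target leaves roughly 52.6 minutes of allowed downtime per year. A quick illustrative calculation (not from the webinar):

```python
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 (ignoring leap years)

def allowed_downtime_minutes(availability):
    """Minutes of downtime per year permitted by an availability target."""
    return (1.0 - availability) * MINUTES_PER_YEAR

four_nines = allowed_downtime_minutes(0.9999)   # about 52.6 minutes/year
three_nines = allowed_downtime_minutes(0.999)   # about 525.6 minutes/year
```

Each extra nine cuts the annual downtime budget by a factor of ten, which is why the practices in Klarna’s list (fast communication, centralized incident management) focus so heavily on saving minutes and seconds.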
With 2017 nearly over, organizations are hotly anticipating what the new year will bring in terms of trends, developments, and hidden surprises in the software testing industry. For the software testing industry, it’s been a busy year. Increased automation adoption within testing teams – where agile development boosted speed of release – has led to QA becoming more embedded within development teams than ever before. As software testing and monitoring gain increasing importance, it’s time to ask: what’s next? Here are my 2018 predictions for our industry:

Bigger AI breakthroughs

Next year, we’ll see businesses really make breakthroughs when it comes to utilizing artificial intelligence (AI) and machine learning – namely to better understand captured data. It can be difficult to see how wider concepts such as AI manifest physically – intelligent objects or ‘things’ bridge that gap. In the past, connected devices sent data for limited processing; today, machine learning enables devices to transform that data into tangible insight, transforming the behavior of IoT devices worldwide.

UX monitoring will be key

With 2017’s major outages in mind, it’s clear the industry has not progressed quickly enough to address exponential IoT and API economy growth. Although some businesses are achieving great things in testing and monitoring terms, many organizations are still focusing efforts on speed instead of quality, security, and resilience. Looking ahead to the new year, companies need to address the overall quality of their services to remain competitive. Consequently, we’ll see a shift in focus to monitoring the customer experience, as well as the need for thorough end-to-end testing embedded within the delivery lifecycle.

Differentiation will come down to availability

If 2018 sees vendors offering similar capabilities, how will consumers decide where to spend their money?
Differentiation of services will come down to availability, ease of use, and a consistent, high-quality experience. Increased reliance on IoT devices, their data, and their management will also drive the need for high availability of the API services that these devices interact with. Monitoring the availability of these APIs is key to ensuring that organizations can run – especially in the manufacturing space – and that business intelligence data can be trusted by leaders.

The CMO role will evolve

Historically, software testing and monitoring has been the responsibility of the IT department, be that the development teams for testing, or operations on the monitoring side. Next year, with digital transformation underway in most organizations, in addition to the explosion of connected devices and data processing derived from IoT, focus will shift onto application quality as well as customer experience. Thus, testing and monitoring should be of keen interest to the COO and the CMO: this will result in more rounded testing, with team members coming from different parts of the organization. That’s a potential step change in the type of testing that would occur.

Validation of results will transform

In 2018, we will see further adoption of AI: major software vendors will increasingly embed machine learning within their core applications. In addition, machine learning will become a standard platform for data analytics for new development initiatives – benefiting IoT vendors most of all, due to the explosion of data in the market. This will challenge the testing community, as new ways of testing and validating the results from AI need to be identified.

Although I have confidence in these predictions, I’m aware that there will be events in the new year that no one in the industry could foresee. However, we at Apica look forward to the challenge. Here’s to a successful 2018!
As the holiday season approaches, the IT and Development teams for many ecommerce and online retail brands have already begun their application performance testing to gain answers to business-critical questions. What will happen when thousands upon thousands of users concurrently visit the web app to conduct their holiday gift shopping? At what point will the application present a performance bottleneck? How well will the application scale, and what will be the effect on the end-user as it does?

Conducting comprehensive performance testing is the first step of the process to yield critical insights into the maximum performance and scalability of a new or existing application. A common approach is to select a prioritized use case, create a script, and then run a test where the load is slowly increased until a point where response times begin to fluctuate. This point is commonly known as the application’s ‘maximum throughput’ for the selected use case. Maximum throughput: the number of transactions per second an application can handle; that is, the number of transactions produced over time during a test.

Simply running tests isn’t enough

However, it’s not enough to simply run the tests. In order to properly assess the results, IT and Development teams must dig into the details for a true and accurate picture of the maximum performance and scalability of an application. Apica recognizes the value of giving users the ability to view this information quickly and easily. With the latest release of Apica LoadTest, we have now added two new graphs for page and transaction throughput in relation to the number of virtual users in the test.

Leverage Page and Transaction throughput for app insights

The differences in how these tests are set up and executed within Apica LoadTest are minimal, but how they are utilized, the results they produce, and the insights they expose are significant.
The ability to leverage both of these data points helps to effectively and efficiently test and analyze the performance of web applications. Page throughput is the number of pages requested per second compared to the number of concurrent virtual users at a given point in time during the ramp-up of the load. To drill down even further into an application’s performance, Apica has also added a new graph for transaction throughput. In Apica LoadTest, a transaction can be defined as one or more requests. Leveraging this chart allows users to drill down and measure the performance of a single HTTP/S request, for simplified bottleneck identification. For now, the two new graphs can be accessed in the Apica LoadTest Live View. Soon, Apica will be incorporating the graphs into the test result details, as well as in reports.

In short, throughput is a key concept for good performance testers to understand, and it is one of the top metrics used to measure how well an application is performing under duress and how that strain could impact performance. With an eye towards the holiday season, many application performance testers will be utilizing these data points for holistic insights into the business’ most important questions. For more information on application performance, send a note to firstname.lastname@example.org or contact us for a free trial.
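As a rough illustration of the throughput metric itself (a generic sketch, not how Apica LoadTest computes its graphs), page throughput can be derived by bucketing page-completion timestamps into whole seconds and counting the completions per bucket. The function name and sample data below are invented for the example.

```python
from collections import Counter

def pages_per_second(timestamps):
    """Bucket page-completion timestamps (seconds since test start)
    into whole seconds and count pages completed in each second."""
    buckets = Counter(int(t) for t in timestamps)
    if not buckets:
        return {}
    # Include empty seconds so gaps (stalls) are visible as zeros.
    return {sec: buckets.get(sec, 0) for sec in range(max(buckets) + 1)}

# Invented ramp-up data: completions arrive faster each second.
ts = [0.2, 0.7, 1.1, 1.3, 1.8, 2.0, 2.2, 2.5, 2.9]
rates = pages_per_second(ts)
peak = max(rates.values())  # highest observed pages/second in the ramp
```

Plotting `rates` against the concurrent virtual-user count at each second is, in essence, what a page-throughput graph shows: the point where the rate stops rising with added users marks the application’s maximum throughput.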
Oxx, the investment company for scale-up stage B2B software businesses focusing on the UK, the Nordics, and Israel, recently led a $12m funding round in Apica. We sat down with Mikael Johnsson, Oxx’s General Partner, to discuss the investment, market opportunities, and his thoughts on the world of performance testing.

Tell us more about Oxx and the company’s mission.

I co-founded Oxx with Richard Anton following several years spent working together at Amadeus Capital Partners. Today, our focus is on investing in B2B ‘scale-up’ stage software companies with serious commercial potential. Most investment companies focus on either early or late stage and are most interested in Internet and consumer brands – but there’s a large number of really great companies operating in the B2B software space; they just scale differently than consumer companies and need investors who understand and support this. Oxx’s plan is to invest in 2 or 3 of these companies per year, and support them all the way, “from promise to success”.

How did you discover Apica?

We’ve been connected to the co-founder Sven Hammar for quite a few years now, tracking Apica’s progress throughout. At Oxx, we like to know and track companies for a few years before we invest, so we understand the numbers and the challenges they face and can make an informed decision. As Apica matured and broadened its testing and monitoring portfolio to meet the demands of the digital enterprise, we recognized a company at a very exciting stage in its maturity. Another pull factor for Oxx was the addition of Carmen Carey as CEO – demonstrating that Apica had entrepreneurial and commercial leadership in place.

What market opportunities do you see for Apica moving forward?
In our opinion, Apica is being presented with a huge market opportunity – the industry is going through massive disruption via digital transformation, and organizations are now faced with the challenge of ‘stitching’ multiple components together into a coherent and attractive user experience. Whether it’s cloud services, websites, or apps, they need to control the user experience by monitoring and testing the performance of numerous and disparate systems. Add device proliferation to the mix (APIs working on APIs on APIs, and so on) and we can see that the task in hand for these organizations is significant, and mission-critical. This undoubted challenge actually provides a fabulous opportunity for a company like Apica.

What are your predictions for the Internet of Things (IoT) and API Economy?

The API Economy is already here, with organizations building applications by leveraging readily available components, cloud services, and APIs. The IoT Economy, on the other hand, is on the cusp of becoming mainstream technology, and it’s incredibly exciting to witness. For example, an Apica customer in the automotive industry is turning its vehicles into digital hubs – testimony to how quickly the IoT is developing in just one sector.

Apica has helped so many companies to optimize system performance. Do you believe companies have reached a tipping point where they’ve realized how important this is?

For a large number of organizations, performance testing is a big, dark hole, and this is inexcusable at a time when your business stands or falls on the customer’s digital experience. It’s really interesting to see how many companies just haven’t got this message yet, despite the significant cost of outages. I was recently at the Apica Apex customer event, where one speaker highlighted the cost of downtime – 30-40 million dollars for just a 15-minute outage. Looked at this way, what Apica offers is a no-brainer; it’s just like buying property insurance.