Moving Beyond the ‘Internet of Useless’ Things to Deliver Value

It’s no surprise that Gartner has estimated that by 2020, there will be seven billion connected business devices in existence. Companies are investing vast sums in IoT, but the question remains: how many of these devices are actually useful and deliver value? From the connected coffee machine that will tell you when you’re nearly out of pods, to the intuitive egg tray designed to deliver the earth-shattering news that the dreaded sell-by date is one day closer, there are countless ‘Internet of Useless Things’ devices out there we can probably live without. What’s worse is that there is real potential for organizations to spend millions on IoT projects without clear business objectives, and without strategic testing plans in place to ensure that these devices safely deliver as intended.

Technology to provide tangible real-life benefits

Klas Bendrik, SVP & CIO at Volvo Car Group, has got it right. On embracing IoT, he referred to creating more efficient and safer cars, as well as helping people’s journeys: “We take the best available technology and make it work in the most useful way for our customers. It’s about using technology to provide tangible real-life benefits, rather than providing technology just for the sake of it.” It’s a great point, and other companies would do well to deliver on this type of approach, focused on real value. It also highlights that the performance and availability of connected devices will become important differentiators in an ever more competitive and crowded marketplace.

When in doubt, test again

Simply put, companies invested in IoT have to put the time into strategic monitoring and testing to guarantee continuous performance that actually adds tangible business value and stands up to the test of time and popularity. Connected IoT devices depend heavily on the speed of communication, which can open them up to issues such as slow internet connections or unreliable network hardware. In this sense, it’s critical to test IoT devices to ensure that they’re not failing to respond or losing data. Proactively monitoring your websites and applications, not to mention APIs, 24/7 is key to providing a good customer experience – doing this intermittently is just not an option. It means that you can fix any issues before they escalate and before customers ‘hit a wall’ in their user experience and start complaining about availability or performance issues. Time is always of the essence in these instances, because key performance indicators like page load time are intrinsically linked to loss of visitors. If you test people’s patience with slow load times or other performance issues, you really do risk losing trade. When it comes to IoT, cyber-crime and data privacy are other issues that should be considered. Who wants to get hacked by the egg tray or the not-so-conscientious coffee machine? Testing needs to push applications in all areas of performance, including security. Within the year, the IoT market will likely be more than double the size of the smartphone, PC, tablet, connected car, and wearable markets combined. By then, let’s hope that there is a growing trend towards IoT for business value rather than IoT for its own sake, and that companies recognize the need for proper testing to deliver a safe, reliable IoT user experience.
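To make the ‘monitor 24/7’ advice concrete, here is a minimal sketch of a synthetic availability check in Python. The URL, thresholds, and alert stub are illustrative assumptions, not a reference to any particular product:

```python
import time
import requests

# Hypothetical values for illustration; tune them for your own service.
TARGET_URL = "https://api.example.com/health"
TIMEOUT_SECONDS = 10         # consumers abandon slow sites; fail fast
CHECK_INTERVAL_SECONDS = 60  # one synthetic check per minute, around the clock

def alert(message: str) -> None:
    # Stand-in for a real alerting channel (pager, chat, ticket system).
    print(f"ALERT: {message}")

def check_once(url: str) -> None:
    """Run a single synthetic check: flag errors and slow responses."""
    started = time.monotonic()
    try:
        response = requests.get(url, timeout=TIMEOUT_SECONDS)
        elapsed = time.monotonic() - started
        if response.status_code >= 500:
            alert(f"server error {response.status_code} from {url}")
        elif elapsed > 5.0:
            alert(f"slow response ({elapsed:.1f}s) from {url}")
        else:
            print(f"OK {url} in {elapsed:.2f}s")
    except requests.RequestException as exc:
        alert(f"no response from {url}: {exc}")

if __name__ == "__main__":
    while True:
        check_once(TARGET_URL)
        time.sleep(CHECK_INTERVAL_SECONDS)
```

A real monitoring service would run checks like this continuously from many geographic locations and route alerts to an on-call channel, but the principle is the same: catch the failure before the customer does.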

Facing up to the new customer experience challenges in 2018

When it comes to maintaining constant ‘uptime’ for customer experience, even the most reputable industry players face hurdles. In 2018, businesses would do well to get to grips with their services in terms of improving availability, driving optimum performance and delivering the ultimate customer experience. Here are some new customer experience challenges businesses are facing – and tips on how to solve them.

Addressing availability before user experience

Last year, we witnessed several major website and application outages, from Virgin Money Giving crashing the night before the London Marathon, to WhatsApp going down on New Year’s Eve. The financial and reputational impact on businesses, whether caused by traffic volume or IT glitches, can be severe. It was recently revealed that hundreds of UK parents are unable to access their tax-free childcare benefits via an HMRC-run website, resulting in unpaid – and unhappy – childcare providers. These examples demonstrate how much work businesses still need to put in before they can really compete on a user experience level. Failing to ensure constant availability and enforce a user-first mentality directly affects market positioning and, often, the delivery of vital services.

The supply chain improving user experience

Application supply chains are becoming increasingly long and complex. Consequently, we can no longer assume a positive user experience simply by monitoring individual components – the experience needs to be measured at the edge, at a macro level. Exercising and testing the supply chain could be a real market differentiator in 2018 for businesses looking to achieve user experience excellence.

The UX-driven evolution of C-suite roles

This year, it will become more important for the entire C-suite to be involved in monitoring the ‘performance dashboard’ as businesses become increasingly digital. It’s essential that roles evolve soon – for example, the CIO and CDO will begin to merge as the need for UX awareness and appreciation takes over from technical knowledge alone. When it comes to supporting digital services, the Head of Customer Experience will play a key role. In some organizations, this could mean that CDOs start to tackle more customer experience requirements; in others, it may mean that over time, CDOs become the customer experience role. Through the design of user-friendly digital services that perform at all times – including in periods of peak traffic – large industry players will be able to meet customer expectations with their user experience offerings. In addition, smaller organizations are given an opportunity to compete in the ever-evolving digital landscape.

Businesses will put web monitoring and load testing first

How long is ten seconds for a website home page to load? Long enough to lose you 40% of potential users if they can’t access what they want in that time-frame. Ultimately, regardless of whether a site or service’s graphical user interface is optimized for a great user experience, if performance and response times fail to meet user expectations, companies will suffer. In the not-too-distant future, I believe we’ll see business leaders focus on web and application monitoring and load testing so that UX teams can concentrate on doing what they do best: developing forward-thinking digital services that make the user experience faster and better.
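Measuring the experience ‘at the edge, at a macro level’ means timing what a user actually sees, stage by stage, rather than inferring health from individual backend components. As a minimal, hedged sketch (assuming the open-source pycurl library and a placeholder URL), a single edge request can be broken down like this:

```python
from io import BytesIO

import pycurl

URL = "https://www.example.com/"  # placeholder: your user-facing endpoint

buffer = BytesIO()
curl = pycurl.Curl()
curl.setopt(pycurl.URL, URL)
curl.setopt(pycurl.WRITEDATA, buffer)
curl.setopt(pycurl.FOLLOWLOCATION, True)
curl.perform()

# Each value is seconds elapsed from the start of the request.
print(f"DNS lookup:    {curl.getinfo(pycurl.NAMELOOKUP_TIME):.3f}s")
print(f"TCP connect:   {curl.getinfo(pycurl.CONNECT_TIME):.3f}s")
print(f"TLS handshake: {curl.getinfo(pycurl.APPCONNECT_TIME):.3f}s")
print(f"First byte:    {curl.getinfo(pycurl.STARTTRANSFER_TIME):.3f}s")
print(f"Total:         {curl.getinfo(pycurl.TOTAL_TIME):.3f}s")

if curl.getinfo(pycurl.TOTAL_TIME) > 10.0:
    print("Over ten seconds: expect to lose around 40% of potential users")
curl.close()
```

The stage-by-stage breakdown shows why edge measurement matters: a healthy web server can still deliver a poor experience if DNS, TLS, or a third-party dependency in the supply chain is slow.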

How to avoid being ‘First Down’: Lessons to Learn from Super Bowl LII

The Philadelphia Eagles may still be reveling in their victory over the New England Patriots in Sunday’s historic Super Bowl win, but many game-day advertisers won’t be quite as happy in the aftermath. After the vast creative (and financial!) input to running a prized Super Bowl ad, it should have been a time for businesses to reap the rewards of a guaranteed increase in traffic. Unfortunately, we identified several websites that went down during this peak time, undoing so much hard work with each second of poor performance. This year, plenty of commercials were actually released prior to kick-off. This means that our teams, using data in Apica Synthetic, had the opportunity to monitor different companies’ performance and availability as Super Bowl Sunday unfolded. What we found was that many major brand sites just couldn’t deal with the volume of traffic driven by the commercials, with some experiencing load delays of nearly 30 seconds and others falling over completely. What was going wrong?

Downtime – It Stinks

Our data captured the longest homepage URL timeout during the Super Bowl: Febreze’s “Bleep don’t stink” ad caused its website to really suffer as a consequence. We released a consumer expectations survey last year and, based on the findings, 40% of visitors will leave a site if it takes longer than ten seconds to load. Here, the longest load time was a whopping 26.5 seconds! Diving deeper into the problems of website performance, an HTTP 500 error code indicated internal server errors. What does this mean? More extensive and damaging downtime. In fact, Febreze’s site was down for as much as an hour during Super Bowl Sunday. As a business leader, if you were paying this much for the opportunity to advertise on one of the world’s biggest annual platforms, your website and other supporting marketing vehicles should be tested and ready to go! Sadly, it looks like some hadn’t read the playbook.

I Like Beer, But…

Let’s look at site performance around Michelob Ultra’s “I Like Beer” 60-second ad. Over a 24-hour period on Sunday, our monitoring recorded two big ‘dots’ of downtime, not to mention the tallest spike, where it looks like the site took 50 seconds to load. Seriously? I like a beer, but I’m not going to wait 50 seconds to check out your website! And as industry research has been showing for a few years, no-one else is going to either. The company did a big marketing/publicity push designed for the Super Bowl, and it looks like they had a breach of their service level agreement (SLA) during the most critical ad time of their year. It’s a worst-case scenario that prior planning and preparation would have avoided. It’s so important to test your applications before the big day; it’s not as if the winning QB just turned up to the Super Bowl without having trained and assumed everything would be fine, is it?

What can we learn from this?

It’s all very well pointing fingers, but organizations need to understand what they can do to avoid similar downtime in the future. Companies of all sizes, across all industries, need to load test their sites to handle increased traffic and anticipate when that increase will occur. Hint – probably during halftime at the Super Bowl. Stretch your site to its limit if you want to understand where potential problems might lie. Synthetic monitoring, too, provides an overall picture of the site’s availability over time, which is critical information. Monitoring matters more than ever. If a commercial with no direct link to a website or…
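As an illustration of the kind of pre-event load test argued for above, here is a minimal Python sketch. The URL, user counts, and thresholds are placeholders, and a real test would generate far more load from distributed machines; the point is simply to see whether errors and slow responses appear as concurrency rises:

```python
import time
from concurrent.futures import ThreadPoolExecutor

import requests

TARGET_URL = "https://www.example.com/"  # placeholder
CONCURRENT_USERS = 100                   # placeholder; scale up gradually
REQUESTS_PER_USER = 10

def simulated_user(_: int) -> list[tuple[int, float]]:
    """Each simulated user issues a burst of requests and records results."""
    results = []
    for _ in range(REQUESTS_PER_USER):
        started = time.monotonic()
        try:
            status = requests.get(TARGET_URL, timeout=30).status_code
        except requests.RequestException:
            status = 0  # treat timeouts and connection errors as failures
        results.append((status, time.monotonic() - started))
    return results

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
        all_results = [
            r
            for user_results in pool.map(simulated_user, range(CONCURRENT_USERS))
            for r in user_results
        ]

    failures = sum(1 for status, _ in all_results if status == 0 or status >= 500)
    slow = sum(1 for _, elapsed in all_results if elapsed > 10.0)
    print(f"{len(all_results)} requests: {failures} failed, {slow} took over 10s")
```

Run against a staging environment well before the big day, a test like this surfaces exactly the HTTP 500s and 26-second load times described above, while there is still time to fix them.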

4 key requirements for an enterprise IoT strategy

By 2020, there will be billions of devices connected to the Internet of Things (IoT). From the Google Homes and Fitbits in our personal lives to smart cities connecting everything from traffic lights to public transport and water supply systems, IoT devices are producing masses of data to improve virtually every aspect of day-to-day life. As an example, the automotive industry has risen to the challenge of applying IoT to cars. Devices can access road data and road conditions, or even call for help in an emergency. Manufacturers are no longer just “car companies” – they’ve evolved to become software companies and data centers, connecting to a variety of interconnected servers providing data. Over the next few years, there will be a seismic shift in the automotive industry, delivering the power of the Internet and IoT to vehicles. This places a huge reliance on having battle-tested APIs so that IoT-enabled devices and services do not crash. Monitoring, advanced testing, and ongoing maintenance will be everything to the new ‘tech’ automotive companies of the future, on a scale never seen before, and the manufacturers that can bring the truly connected vehicle closer to reality first will have a considerable competitive advantage in the marketplace. A good performance monitoring and testing tool supports a fully developed IoT strategy, ensuring companies across all verticals can develop and deploy complex scripts with bank-class security.

What does a good IoT strategy include?

1. End-to-end application visibility

The end-to-end aspect is really important for any large IoT deployment. Cycles are running in production, with new solutions being deployed regularly. Being able to program and set up monitoring provides complete end-to-end visibility. Quality of service, SLA reporting, and application performance based on real device interactions are key.

2. Security and global execution

An enterprise IoT organization needs bank-class security and local certificates to validate that access to each app, vehicle, or device is secure. The certificate is required to authenticate and encrypt all communication. You need a tool that can fully emulate and run encrypted communication, based on client certificates and session tickets, for secure environments. Apica’s global network allows execution from more than 84 countries and 2,400 nodes worldwide, matching any deployment map.

3. IoT protocol support

There are only a few performance monitoring/digital experience monitoring vendors with a clear-cut IoT strategy. Many vendors offer API monitoring, but an IoT strategy requires very complex scripting of APIs, protocol support, and security functionality. Apica supports REST, WebSocket, MQTT, X.509 Certificates, Source Module support, and Java Programming Plugins. With our MQTT over WebSocket protocol support, you can monitor performance and test large-scale IoT systems.

4. Enterprise IT monitoring and testing

Enterprise companies need to test and monitor apps, network infrastructure, and APIs to ensure the end-user experience is flawless. Apica offers these services as an integrated part of all IT operations; however, you don’t have to be an IT expert to see and understand the full picture. Contact us today to learn more about implementing an IoT strategy.
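To ground the protocol and certificate points above, here is a minimal sketch of an MQTT round-trip check over TLS with an X.509 client certificate, using the open-source paho-mqtt library (1.x callback API). The broker address, topic, and certificate paths are placeholders, and a real IoT monitor would repeat this check continuously from many locations:

```python
import time

import paho.mqtt.client as mqtt

BROKER_HOST = "broker.example.com"   # placeholder
TOPIC = "monitoring/heartbeat"       # placeholder

def on_connect(client, userdata, flags, rc):
    # Subscribe to our own heartbeat topic once connected.
    client.subscribe(TOPIC)

def on_message(client, userdata, message):
    # Record when the published heartbeat came back to us.
    userdata["received_at"] = time.monotonic()

state = {}
client = mqtt.Client(userdata=state)
client.on_connect = on_connect
client.on_message = on_message

# X.509 client certificate: the broker authenticates us, we verify the broker.
client.tls_set(
    ca_certs="ca.pem",      # placeholder paths
    certfile="client.pem",
    keyfile="client.key",
)

client.connect(BROKER_HOST, 8883)
client.loop_start()
time.sleep(2)  # allow connect and subscribe to complete

sent_at = time.monotonic()
client.publish(TOPIC, payload=b"ping", qos=1)
time.sleep(2)  # allow the message to make the round trip

if "received_at" in state:
    print(f"round trip: {(state['received_at'] - sent_at) * 1000:.0f} ms")
else:
    print("ALERT: heartbeat not received")
client.loop_stop()
```

The round-trip latency of a publish/subscribe cycle is a useful proxy for what a connected vehicle or sensor actually experiences, which is exactly the ‘real device interaction’ view described above.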

How application performance monitoring could have defended the StubHub sack

StubHub, one of the most well-known online ticket exchange companies, crashed for more than 20 minutes following the Georgia Bulldogs’ double-overtime win in Atlanta on January 1st. Although StubHub has yet to reveal the cause of the crash, in the age of instant digital response Georgia fans immediately took to Twitter to announce they had “broken StubHub”, and many poked fun at Bulldog pride and its ability to overthrow a major player in the industry. Despite the jokes on social media, it’s unlikely that users will abandon StubHub altogether; however, many may have jumped to some of StubHub’s competitors – Razorgator, Vivid Seats, or SeatGeek, to name a few. Apica released a consumer expectations survey in mid-2017 showing that consumers have little tolerance for slow websites, often abandoning them within 10 seconds.

What can we learn from this? Companies, especially those who cater to an end-user, need to test and monitor their sites to handle spiked traffic, no matter how big or small. Application performance monitoring matters more than ever, and if StubHub can go down from a relatively small, regionalized event, so can your business. To this end, integrating website, application, and API performance testing and monitoring into your product and business strategy has never been as critically important as it is now. Find out how Apica can help your business with application performance monitoring today. Take a look at our products: Apica LoadTest, Apica Synthetic

7 secrets to maintaining four nines uptime using performance monitoring tools

Klarna’s 7 secrets for maintaining four nines uptime

1. Implement end-to-end responsibility
2. Get started on a shift in architecture (microservices in a cloud platform and graceful degradation – see the sketch below)
3. Keep centralized Incident Management (operations land, OPs knowledge)
4. Support proper Problem Management
5. Do continuous improvement/feedback – on all levels (lives, dev-teams, retros, incident reports)
6. Save minutes/seconds in communication
7. Service customers from the new platform

Last week, 451 Research partnered with Apica’s customer Klarna, a $319M+ fintech company, to understand how they improved their availability, increased their bottom line and raised overall customer satisfaction using a proactive monitoring toolset, including Apica Synthetic. Proactive DEM (digital experience monitoring) has become a business requirement for all digital organizations. At a time when application complexity continues to rise and cloud migration is no longer just a strategic vision, businesses are investing in people, practices, and tools to ensure their critical internal and external applications are always on and high-performing. Watch the full webinar today.
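The ‘graceful degradation’ mentioned in point 2 means that when a downstream dependency fails, a service returns a reduced but still useful response instead of an error, so the user-facing experience stays up. A minimal sketch, with invented service names and fallback values:

```python
import requests

def get_recommendations(user_id: str) -> dict:
    """Return personalized data, degrading gracefully if the backend is down."""
    try:
        response = requests.get(
            f"https://recs.internal.example/users/{user_id}",  # placeholder service
            timeout=0.5,  # tight timeout: fail fast rather than hang the page
        )
        response.raise_for_status()
        return {"items": response.json(), "personalized": True}
    except requests.RequestException:
        # Degraded mode: serve a static best-sellers list so the page still works.
        return {"items": ["best-seller-1", "best-seller-2"], "personalized": False}
```

The design choice is deliberate: a slightly worse answer delivered instantly beats a perfect answer that never arrives, which is how a microservices platform can keep four nines of user-visible uptime even while individual components fail.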

Shaping up for 2018 – five predictions for the software testing industry

With 2017 nearly over, organizations are hotly anticipating what the new year will bring in terms of trends, developments and hidden surprises. For the software testing industry, it’s been a busy year. Increased automation adoption within testing teams – where agile development boosted speed of release – has led to QA becoming more embedded within development teams than ever before. As software testing and monitoring gains importance, it’s time to ask: what’s next? Here are my 2018 predictions for our industry.

Bigger AI breakthroughs

Next year, we’ll see businesses really make breakthroughs when it comes to utilizing artificial intelligence (AI) and machine learning – namely to better understand captured data. It can be difficult to see how wider concepts such as AI manifest physically – intelligent objects or ‘things’ bridge that gap. In the past, connected devices sent data for limited processing; today, machine learning enables devices to transform that data into tangible insight, transforming the behavior of IoT devices worldwide.

UX monitoring will be key

With 2017’s major outages in mind, it’s clear the industry has not progressed quickly enough to address exponential IoT and API economy growth. Although some businesses are achieving great things in testing and monitoring terms, many organizations are still focusing efforts on speed instead of quality, security and resilience. Looking ahead to the new year, companies need to address the overall quality of their services to remain competitive. Consequently, we’ll see a shift in focus to monitoring the customer experience, as well as the need for thorough end-to-end testing embedded within the delivery lifecycle.

Differentiation will come down to availability

If 2018 sees vendors offering similar capabilities, how will consumers decide where to spend their money? Differentiation of services will come down to availability, ease of use and a consistent, high-quality experience. Increased reliance on IoT devices, their data and their management will also drive the need for high availability of the API services that these devices interact with. Monitoring the availability of these APIs is key to ensuring that organizations can run – especially in the manufacturing space – and that business intelligence data can be trusted by leaders.

The CMO role will evolve

Historically, software testing and monitoring has been the responsibility of the IT department, be that the development teams for testing or operations on the monitoring side. Next year, with digital transformation underway in most organizations, in addition to the explosion of connected devices and data processing derived from IoT, focus will shift onto application quality as well as customer experience. Thus, testing and monitoring should be of keen interest to the COO and the CMO: this will result in more rounded testing, with team members coming from different parts of the organization. That’s a potential step change in the type of testing that would occur.

Validation of results will transform

In 2018, we will see further adoption of AI: major software vendors will increasingly embed machine learning within their core applications. In addition, machine learning will become a standard platform for data analytics for new development initiatives – benefiting IoT vendors most of all, due to the explosion of data in the market. This will challenge the testing community, as new ways of testing and validating the results from AI need to be identified.

Although I have confidence in these predictions, I’m aware that there will be events in the new year that no one in the industry could foresee. However, we at Apica look forward to the challenge. Here’s to a successful 2018!

Retail Rankings: Apica’s Web Performance Index Reveals Black Friday and Cyber Monday Winners and Losers

New York, London – December 4th, 2017. One week on from Cyber Monday, Apica – the leading provider of comprehensive software testing and monitoring solutions – has unveiled the 2017 Apica Web Performance Index (WPI). The annual index evaluates and ranks the web performance of 200 of the top e-commerce websites in the US and Europe during one of the busiest retail periods on the calendar. 2017 saw one million dollars per minute being spent at the peak of Black Friday sales, indicating that regular website and application testing, especially at peak times, has never been more important. Based on Cyber Monday performance, using a technical value called ‘DOM Complete’ (the time it takes for a page to fully load and respond to users), the Apica WPI is also an indicator of how the e-commerce industry is performing as a whole. This year, 184 of the 200 companies were seen as ‘healthy’ based on Apica’s measurements – that is, fully loading within 10 seconds – producing a 2017 Apica WPI score of 92%. Based on Apica’s index, a score of 75% or more typically indicates a healthy market.

Apica Web Performance Index Highlights 2017

- WPI 2017 score: 92/100
- Overall Cyber Monday 2017 WPI winner: Hayneedle (1.367-second load time)
- 42% of US companies improved their load time on Black Friday compared to a normal day
- 42% of US companies improved their load time on Cyber Monday compared to a normal day
- Approximately 1 in 10 UK companies were not able to load their site within 10 seconds
- Amazon, Apple and Clarks were all amongst the performance leaders in the UK

Winners and losers

75% of UK e-retailers maintained or improved their performance on Black Friday 2017, compared to a sampled random day. This was a notably higher figure than in the US market, at 48%. However, the US displayed an excellent set of ‘top 10’ performers, with an average load time of just 1.8 seconds. At the other end of the spectrum, the worst-performing 10% of US retailers averaged 9.9 seconds to fully load, compared to 9.45 seconds for the same 10% in the UK. Overall, 96% of US e-commerce platforms loaded their sites within 10 seconds on Black Friday, whereas 90% of UK sites managed the same. No US site experienced an outage on Black Friday, whereas 5% of those monitored in the UK suffered downtime ranging from 2 minutes to 2 hours. Carmen Carey, CEO of Apica, said: “This year’s Index score of 92/100 demonstrates that almost all organizations are developing and acting on awareness of the importance of web performance, especially at peak times, in driving business value.” Website performance is a primary factor in completing transactions on e-commerce sites, with multiple reports indicating that just a few minutes of downtime can result in significant lost profits. See how the top 50 US retail/ecommerce sites ranked here.

Free 30-day Trial: Load Test
Free 30-day Trial: Synthetic Monitoring

About Apica

Leading enterprises rely on the Apica Web Excellence Suite to test and monitor their mission-critical business systems, APIs, and web and mobile applications. Apica enables businesses to get detailed real-time performance, uptime and capacity insights, ensuring an outstanding end-user experience and optimized IT operations. Apica’s suite – available as SaaS, on-premise and hybrid solutions – is trusted by 400+ leading brands globally. Apica has offices in Stockholm, New York, London and Santa Monica. To learn more about Apica, visit www.apicasystems.com // @apicasystems

Media inquiries:
UK: Ed Clark, The CommsCo, eclark@thecommsco.com, +44 (0) 2082 961 875
US: Maria Doyle, Doyle Strategic Communications, maria@doylestratcomm.com, 781-964-3536
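For readers curious how a ‘DOM Complete’ value like the one behind the Index can be collected, here is a minimal sketch using Selenium and the browser’s Navigation Timing API. The URL is a placeholder and a production index would use a far more controlled methodology (fixed locations, repeated samples, warm/cold cache handling), but the underlying measurement looks like this:

```python
from selenium import webdriver

URL = "https://www.example-retailer.com/"  # placeholder

driver = webdriver.Chrome()  # assumes a local Chrome/chromedriver setup
try:
    driver.get(URL)
    # Navigation Timing API: DOM Complete relative to navigation start, in ms.
    dom_complete_ms = driver.execute_script(
        "const t = window.performance.timing;"
        "return t.domComplete - t.navigationStart;"
    )
    print(f"DOM Complete: {dom_complete_ms / 1000:.2f}s")
    if dom_complete_ms > 10_000:
        print("Over the 10-second threshold – unhealthy in WPI terms")
finally:
    driver.quit()
```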

LoadRunner or JMeter: A False Premise

When I speak to QA managers in my daily routine, I often hear a common scenario: they own LoadRunner as their performance load testing solution, but they have been told by C-level executives within their organization to replace it with a “free” open source solution like JMeter. This is almost always due to the excessive price of LoadRunner’s annual renewal, coupled with the fact that every year it becomes more outdated and less relevant to the complex demands of today’s load testers.

LoadRunner or JMeter?

As LoadRunner’s deficiencies become more apparent, keeping it becomes harder to justify to price-conscious IT organizations, and rightly so. Here are the five major deficiencies LoadRunner customers tell me frustrate them:

1. A Windows-only architecture that lacks support for Apple and Linux operating systems.
2. Scripts can only be created through complex programming (C++ and JavaScript), so testers with this know-how are both expensive and hard to find.
3. Not DevOps-friendly – limited or awkward support for DevOps tools like Jenkins, JIRA, Bamboo, CodePipeline, etc.
4. API support – API tests can only be created manually through complex and time-consuming programming.
5. Server-deployed – LoadRunner lacks the real-time test collaboration and unlimited scalability you see with SaaS-based load testing solutions.

Given the high cost and lack of innovation of LoadRunner, it’s no wonder companies decide to make a 180-degree turn to open source tools. However, in many cases, this choice is moving from a difficult situation to a much worse one. JMeter brings its own set of challenges as a load testing solution:

- Like LoadRunner, it requires users to know complex programming to create scripts.
- Open source tools come without vendor tech support, so you need to search the internet for answers, which may or may not be helpful and can be time-consuming.
- It’s not secure. How many trojan horses have hackers left in the open source code? I don’t know, and neither does anyone else. Your security officer will probably not be happy with an open source tool either.
- Also like LoadRunner, it does not scale well when pushed to the large or even mega tests that many organizations now require.
- Free? There are many costs associated with open source software: the lack of ongoing maintenance and support, the effort of creating new scripts – not to mention the security risk.

As a sales professional, I use Salesforce. I use it because it is simply the best CRM solution on the market in my experience. Yes, it costs my company money, and yes, there are many open source tools that I could use. But as a professional, I have decided to give myself the best chance at success. My management is not going to congratulate me if I miss my sales goals but saved the company money by using an open source CRM. That’s not how the world works.

LoadRunner or JMeter is a false choice. There is a third alternative: Apica LoadTest. Apica LoadTest offers a modern alternative with modern capabilities:

- Advanced Scripting Engine – create realistic and complex load test scripts without programming; it also runs Selenium, LoadRunner, and JMeter scripts.
- OS Support – Windows, Linux, and Apple.
- Modern deployment options – SaaS, on-premise, or hybrid.
- Unmatched scalability – up to tens of millions of concurrent virtual users.
- API Test – chain together complex API calls without programming (a generic sketch of API chaining follows after this list).
- Agile DevOps support – Jenkins, Bamboo, Git, AWS, TeamCity, CodePipeline and more.
- Native APM Integrations – AppDynamics, Dynatrace, New Relic.
- Multi-Team Support – real-time, centralized, better collaboration and superior testing.
- 24/7/365 Global Support – all performed in-house.
- Secure – Will…
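As promised above, here is a generic, hedged illustration of what ‘chaining’ API calls means in a load test scenario. This is plain Python with invented endpoints and field names – it is not Apica’s scripting engine, just the underlying idea: each step extracts a value from the previous response and feeds it into the next call:

```python
import requests

BASE = "https://api.example-shop.com"  # invented endpoint for illustration

def one_user_journey(username: str, password: str) -> None:
    """A chained API scenario: login -> browse -> add to cart."""
    session = requests.Session()

    # Step 1: authenticate and capture a token from the response body.
    login = session.post(
        f"{BASE}/login",
        json={"user": username, "pass": password},
        timeout=10,
    )
    login.raise_for_status()
    token = login.json()["token"]  # value from step 1 feeds step 2
    session.headers["Authorization"] = f"Bearer {token}"

    # Step 2: use the token to fetch a product list.
    products = session.get(f"{BASE}/products", timeout=10)
    products.raise_for_status()
    first_id = products.json()[0]["id"]  # value from step 2 feeds step 3

    # Step 3: act on data extracted from the previous call.
    cart = session.post(f"{BASE}/cart", json={"product_id": first_id}, timeout=10)
    cart.raise_for_status()

if __name__ == "__main__":
    one_user_journey("demo", "demo-password")
```

In a real load test, many thousands of these journeys would run concurrently against a staging environment; the hard part a tool has to solve is exactly this extraction and reuse of values between steps, at scale, without hand-written code.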