Google Compute Engine (GCE) is relatively new to the world of cloud computing, so a common question we see is: how does this new cloud platform perform? Challenge accepted!
It was therefore very interesting to run a public test of a real application (WordPress) hosted on a leading cloud management platform and cloud infrastructure.
Together with our partners RightScale and Google, Apica tested a typical WordPress site to the limit and validated a few points on how best to scale a cloud application on this platform. Below you can see the system architecture on top of Google Compute Engine, managed by RightScale.
Read Brian Adler's breakdown of the events, scaling configuration, and other details on RightScale's blog.
The goal was to kick the tires and verify how well a classic WordPress application could scale, with target loads of up to 200,000 concurrent users and more than 20 million pageviews per hour. That's quite a challenge for any cloud platform, and we saw no reason to treat GCE any differently!
From a testing perspective, we applied a ramp period of approximately 15 minutes to provide a realistic introduction of the 200,000 concurrent users. During the test we served 330,000 pageviews per minute, and network throughput was approximately 2.3 Gbit/s at 23,000 requests per second.
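As a sanity check, the reported steady-state figures can be combined into a few derived per-request numbers. The requests-per-pageview and bytes-per-request values below are back-of-the-envelope derivations from the test figures, not separately measured results:

```python
# Back-of-the-envelope check of the reported steady-state numbers.
# Only the first three values come from the test; the rest are derived.
pageviews_per_min = 330_000
requests_per_sec = 23_000
throughput_gbit_s = 2.3

pageviews_per_sec = pageviews_per_min / 60                      # 5,500 pageviews/s
requests_per_pageview = requests_per_sec / pageviews_per_sec    # ~4.2 requests/pageview
kbytes_per_request = throughput_gbit_s * 1e9 / 8 / requests_per_sec / 1024  # ~12 KB/request

print(f"{pageviews_per_sec:.0f} pageviews/s, "
      f"{requests_per_pageview:.1f} requests/pageview, "
      f"{kbytes_per_request:.1f} KB/request")
```

Roughly four HTTP requests per pageview at about 12 KB each is plausible for a WordPress page once static assets are cached by the clients.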
We focused on a mix of test scripts/scenarios that would both force pages to be updated and provide the typical searches and updates of a blog. This strategy ensures a more realistic test configuration and load, typical of any major blog on a leading cloud platform. We wanted to make sure we really stressed the application without being shielded by caching technologies, which would give a misleading picture of overall performance. We did this deliberately, with frequent updates to blog pages with new content and with search events that also stressed the backend layers, not just the caching function in the web server.
With Apica’s Proxy Sniffer, we created the following load scenarios:
- Browse the home page, select a random page, then select a random article
- Browse the home page, perform a search, load result article
- Browse the home page, open random article, post comment
- Browse the home page, log in to site, post article, logout
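To make the scenarios concrete, here is an illustrative sketch of the second scenario (home page, search, load a result article). This is not the actual ProxySniffer script; the base URL and search terms are placeholders, and the `?s=` query string is the standard WordPress search parameter:

```python
# Illustrative sketch of the "browse home page, perform a search,
# load result article" scenario. Placeholder URL and terms.
import random
import requests

BASE_URL = "http://example-wordpress-site.com"    # placeholder target
SEARCH_TERMS = ["cloud", "scaling", "wordpress"]  # hypothetical search terms

def search_scenario(session: requests.Session) -> None:
    # Step 1: browse the home page
    home = session.get(BASE_URL, timeout=10)
    home.raise_for_status()

    # Step 2: perform a search (WordPress uses the "s" query parameter)
    term = random.choice(SEARCH_TERMS)
    results = session.get(BASE_URL, params={"s": term}, timeout=10)
    results.raise_for_status()

    # Step 3: load a result article. In the real script the article link is
    # extracted from the search response, so the URL stays dynamic per run.
    # (Response parsing omitted here for brevity.)
```

The key property, mirrored from the real scripts, is that the article URL in step 3 is taken from the previous response rather than hard-coded, which is what keeps the scenario valid as content changes.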
The scripts were recorded and made dynamic (so they could be re-run even when URLs changed based on responses) by our performance team. Naturally we used our favorite test tool, ProxySniffer, to quickly and easily record the scenarios and adapt them as required. This set the stage for unmatched scalability and data measurements for large tests over Apica's Global Load Test Network.
ProxySniffer produces, based on the recording, a Java class file that can be deployed to multiple test clusters and therefore scale to many millions of users. We have become known in the industry for easily deploying very large-scale tests, from thousands to millions of concurrent users. This experience has enabled Apica to run "Mega-Load tests" with frequency and ease, since we mostly use our own infrastructure. It's one thing to simply generate load; the greater challenge is to produce the load from outside the target cloud in a controlled way. You need real latency and traffic patterns, and that can't be achieved if you are testing from the same network.
For those who wondered, the ProxySniffer recording session was made with a standard web browser (Firefox, Chrome, etc.). In another test context you could add mobile user agents if you need to validate how the server API and GUI adapt to the user agent.
In order to make the load balancers work properly we needed multiple IP addresses and ranges, so we used our premium Apica Test Network to allocate the server clusters. Apica's Test Network consists of hundreds of physical servers, and if that's not enough we can always add on-demand cloud capacity and locations whenever the need arises.
For this test, our rig consisted of 80 servers located within the continental US. The load was set from 2,000 to 5,000 concurrent users per server, depending on server capacity. Because ProxySniffer runs so efficiently, it allows us to truly maximize the number of concurrent users per server; we have tested setups with 10,000 users per server, running 64-bit Windows on large instances. Being able to launch loads from known network environments and non-virtualized servers is a big differentiator, allowing for clean and repeatable tests at massive scale. If you only have a limited time window, you don't want to redo the test because of weak performance from just a few cluster members!
The test was scheduled for a one-hour duration with a ramp up over 15 minutes: from 0 to 200,000 concurrent users. This means we introduced around 13,000 to 14,000 new users per minute onto the site.
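The ramp rate follows directly from the schedule:

```python
# Ramp-up rate implied by the schedule: 0 -> 200,000 users in 15 minutes
target_users = 200_000
ramp_minutes = 15

users_per_minute = target_users / ramp_minutes
print(f"{users_per_minute:,.0f} new users per minute")  # ~13,333/minute
```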
This was one of the main test goals: verifying that the scaling provided by RightScale and the GCE instances could keep up with the user peak by scaling up aggressively (over-provisioning) and then scaling back during the test to match the desired load goals.
The graph below represents the actual load test results:
1. The Ramp Up – where we introduced the visitors
2. The Scale Up – where we reached our target load
3. Load Stability – once we achieved the desired capacity, GCE and RightScale served more than 20 million page views using a maximum of 42 servers, 35 of them being n1-standard-2-d machines
Where the $$$ matters
Typically, in a pure capacity-planning setup, we test high-performing machines against low-performing ones to see whether the users-per-dollar ratio is more favorable than with standard machines.
The big upside in this test was the platform's ability to scale up and down and still match the user load, even with standard machines. Very cool!
The price structure of GCE – a per-minute charge – allows you to over-provision and then quickly scale down, which is smarter than reactively attempting to scale up and catch the load.
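A toy comparison shows why per-minute billing makes over-provisioning cheap. The $0.10/hour rate here is a hypothetical figure for illustration, not GCE's actual price:

```python
# Toy cost comparison for over-provisioning under per-minute billing.
# The hourly rate is hypothetical, not an actual GCE price.
RATE_PER_MINUTE = 0.10 / 60   # assumed $0.10/hour per instance

def cost(instances: int, minutes: int) -> float:
    """Cost of running `instances` machines for `minutes` minutes."""
    return instances * minutes * RATE_PER_MINUTE

# Over-provision 20 extra instances just for a 15-minute ramp, then drop them:
overprovision = cost(20, 15)
# Under hourly billing, those same 20 instances would bill for a full hour:
hourly = cost(20, 60)

print(f"${overprovision:.2f} (per-minute) vs ${hourly:.2f} (hourly minimum)")
```

With per-minute billing you pay only for the ramp window, so briefly running extra capacity ahead of the load costs a fraction of what the same headroom costs under hourly billing.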
Overall, GCE/RightScale provides a very fast and scalable platform. Here are some of our tips for scaling optimally:
- Test and validate – no obvious application bottlenecks
- Test and validate performance per added server
- Test and validate spin-up time
- Define pre-scaling batches: how many machines you need to meet expanding user load
- Synchronize your up and down triggers
- Validate error rate and stability during scaling
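The pre-scaling-batch tip can be sketched as a simple calculation: each scaling batch must cover the users who arrive while new instances are still booting. The per-server capacity and spin-up time below are assumptions for illustration (the 5,000-user figure matches the upper end of the load-generator range above, not a measured app-server capacity):

```python
# Sketch of sizing pre-scaling batches: servers per batch must absorb the
# users arriving during spin-up. Capacity and spin-up time are assumptions.
import math

ramp_users_per_minute = 13_333   # from the 15-minute ramp to 200,000 users
users_per_server = 5_000         # assumed per-server capacity (illustrative)
spin_up_minutes = 3              # assumed instance spin-up time (illustrative)

# Users arriving while a new batch boots, divided by per-server capacity:
servers_per_batch = math.ceil(ramp_users_per_minute * spin_up_minutes
                              / users_per_server)
print(servers_per_batch)  # prints 8
```

This is also why validating spin-up time matters: a slower boot directly inflates the batch size needed to stay ahead of the ramp.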
Testing a cloud platform is demanding, both from a technology perspective and in process setup. The Apica load test platform is well equipped to provide a flexible solution for different types of applications and test setups.
You can use the platform in a tool/self-service fashion or scale up to a turnkey project with full scripting and execution support. Server-side analytics are supported using either an Apica local server agent or, better yet, more advanced APM tools like AppDynamics. The Apica Test Network provides all the capacity you need in a flexible way and can be configured to match your expected load from multiple locations that mirror your user traffic.
If you need help with Cloud migration and capacity planning, we can provide the tools, platform and expertise for your project.
We test; you scale, improve, and take the business risk out of deployment!