CloudShare Automation: Quality Testing Simplified

By Danielle Arad - September 25, 2014
4 min read

As software continues to “eat the world”, everything in the development and deployment process is scrutinized and subjected to automation. Whether you find this revolutionary or business as usual, it’s worth considering: What can be automated?

CloudShare aims to automate the repetitive, error-prone tasks that trap development and testing teams in technological “red tape.” We’ve written a lot about our experiences, from our automated testing process, to our development team’s “road to continuous integration”, to our guides in the support forum covering common automated deployment scenarios.

What Does CloudShare Automate?

CloudShare’s virtual labs automate the creation, cloning, editing and sharing of full IT environments. This automated IT gives anyone – developer, tester, sales engineer, or trainer – access to environments from anywhere, in minutes. This is why most users come to CloudShare in the first place.

Once these IT tasks are automated, new possibilities, processes and ideals emerge. In 2006, Martin Fowler described ideal testing practices this way:

The point of testing is to flush out, under controlled conditions, any problem that the system will have in production. A significant part of this is the environment within which the production system will run. If you test in a different environment, every difference results in a risk that what happens under test won’t happen in production.

As a result you want to set up your test environment to be as exact a mimic of your production environment as possible. Use the same database software, with the same versions, use the same version of operating system. Put all the appropriate libraries that are in the production environment into the test environment, even if the system doesn’t actually use them. Use the same IP addresses and ports, run it on the same hardware.

But, he noted (in ’06), “in reality there are limits. If you’re writing desktop software it’s not practicable to test in a clone of every possible desktop with all the third party software that different people are running. Similarly some production environments may be prohibitively expensive to duplicate … I’ve noticed a growing interest in using virtualization to make it easy to put together test environments.”
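Fowler’s parity principle translates naturally into an automated check. Here is a minimal sketch of one, assuming a hand-maintained production manifest; the manifest keys, versions, and the `psql` probe are illustrative assumptions, not part of CloudShare or Fowler’s text:

```python
import platform
import subprocess

# Hypothetical manifest: the versions production is known to run.
PRODUCTION_MANIFEST = {
    "os": "Linux",
    "python": "3.12",
    "postgres": "16.2",
}

def local_versions():
    """Collect the same facts from the current (test) environment."""
    return {
        "os": platform.system(),
        "python": ".".join(platform.python_version_tuple()[:2]),
        # Assumes psql is on PATH; swap in your own database probe.
        "postgres": subprocess.run(
            ["psql", "--version"], capture_output=True, text=True
        ).stdout.split()[-1],
    }

def parity_gaps(manifest, actual):
    """Return every key where test and production disagree."""
    return {key: (manifest[key], actual.get(key))
            for key in manifest
            if actual.get(key) != manifest[key]}
```

Running `parity_gaps(PRODUCTION_MANIFEST, local_versions())` at the start of a test run turns “is this environment production-like?” from a vague worry into a pass/fail answer.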

Virtual Labs Enable Automated Testing on Production-Like Environments

Fowler mentions two constraints: cloning and cost. If you can clone the production system, you can test against it. And if you can scale clones without nightmarish maintenance requirements or capital outlay, then you can test against a full testing matrix.

As he explains:

It’s then relatively straightforward to install the latest build and run tests. Furthermore this can allow you to run multiple tests on one machine, or simulate multiple machines in a network on a single machine.

What Does Automated Testing Look Like?

At CloudShare, we see it in our processes, our product and our customers. We automate tests from the UI to the API to the back-end. And many customers have done the same.
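What does one such automated check look like in practice? A tiny sketch in Python: the `environment_status` rule is invented for illustration and is not CloudShare’s actual API, but the shape – a behavior pinned down once by assertions, then guarded on every run – is the essence of automated testing at any layer:

```python
# Hypothetical back-end rule: an environment is "ready" only
# when every virtual machine in it is running.
def environment_status(vm_states):
    if vm_states and all(state == "running" for state in vm_states):
        return "ready"
    return "pending"

# The automated test states the rule once, then guards it forever.
def test_environment_status():
    assert environment_status(["running", "running"]) == "ready"
    assert environment_status(["running", "booting"]) == "pending"
    assert environment_status([]) == "pending"

test_environment_status()
```

The same pattern scales up: UI tests assert on rendered pages, API tests assert on responses, and a scheduler runs them all on every build.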

When it works, it’s invisible, with CloudShare handling thousands of virtual machines, scripts and tests. Though it lacks the physical plant of typical automated production systems, its results are nonetheless impressive and easy to visualize.

For example, Etsy describes its homegrown process this way:

“We track everything: number of logins, number of login errors, dollars of transactions through the site, bugs reported on the forum — you name it. We batch these up and aggregate the numbers into 10-minute increments, then show the graphs. A vertical line here is a deploy to production.”

Of course, we are not all as crafty as Etsy. But with automated IT and testing in CloudShare, the results can look very similar. Take this example of a CloudShare user, developing and testing a legacy app and a new app, hundreds of times a week:

[Chart: a CloudShare user’s test runs over one week]

Their test frequency – in just one week! – is impressive, and likely correlates with high overall test coverage. But, as Etsy and others know, the important stats are much harder to track – bugs not released, systems that don’t go down, and users who are unaffected. Fowler describes quality testing as at first “vague exhortations”, then a “solid action” plan, and finally a “non-event”. In this sense, the answer to “What can be automated?” is the simple, non-intrusive non-event that happens continuously after a software release: quality.