The software powering today's hyper-growth enterprises is no longer packaged. It isn't for sale; it's being built from scratch by internal engineers using new distributed frameworks.
This shift towards software eating the world means that the core business is the software itself. And the people, infrastructure, and processes required to run and scale that software are a business's intellectual property.
One example is MemSQL, whose software is evolving so quickly that it has outgrown every commercially available code testing solution, forcing the company to build its own. Previously, I spoke with Eric Frenkiel, co-founder and CEO of MemSQL. Frenkiel, a former Facebook sales engineer, believes that the future of databases is distributed, and his company was recently named the number one operational data warehouse by analyst firm Gartner.
I recently caught up with Carl Sverre, a principal software architect at MemSQL, to learn about how containers are fueling their product's evolution, and how their home-grown testing platform has grown from a broom closet to a massive cluster to keep up with the rate of iteration on the product.
TechRepublic: Tell me a little bit about the testing demands faced by MemSQL.
Sverre: There are a couple of big trends that are pushing today's software testing requirements.
First, modern practices around Agile software methodology and frequent, iterative releases spawned the whole continuous integration/continuous delivery (CI/CD) movement. You need automated build and test procedures so that, as you introduce new features, the code is tested and integrated on the fly. This kind of test automation puts a huge strain on CPU cycles, memory, networking, and storage. But you ship software a lot faster!
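The CI loop Sverre describes boils down to building and testing every change automatically, and failing fast when a step breaks. A minimal sketch in Python; the step names and commands are hypothetical placeholders, not MemSQL's actual pipeline:

```python
import subprocess

def ci_pipeline(steps):
    """Run each (name, command) pipeline step in order.

    Stops at the first failing step, mimicking a fail-fast CI run,
    and reports which step broke the build.
    """
    for name, cmd in steps:
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            return (name, False)  # this step failed; halt the pipeline
    return (None, True)           # every step passed

# `true` and `false` are POSIX stand-ins for real build/test commands.
stage, ok = ci_pipeline([
    ("build", ["true"]),
    ("test",  ["true"]),
])
```

In a real pipeline each step would be a build script or test-suite invocation, but the control flow (run everything on every change, stop at the first failure) is the same.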
The other trend is distributed computing. Modern applications handle far more users and far larger data volumes than traditional testing suites were built for, so you need a new architecture. You are no longer running tests on a single server or monolithic stack; software needs to be tested across so-called "four corner" scenarios. By testing all of the outside corners, or boundaries, you can make sure your software performs under the most extreme conditions.
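The "four corner" idea amounts to enumerating every combination of the extreme values along each test dimension. A minimal Python sketch, assuming two hypothetical dimensions (cluster size and concurrent clients; the specific values are illustrative, not MemSQL's):

```python
from itertools import product

# Hypothetical boundary values for two test dimensions.
DIMENSIONS = {
    "nodes":   [1, 64],      # smallest and largest cluster under test
    "clients": [1, 10_000],  # single user vs. extreme concurrency
}

def corner_scenarios(dimensions):
    """Yield every combination of the extreme (boundary) values --
    the four 'corners' when there are two dimensions."""
    keys = list(dimensions)
    for values in product(*(dimensions[k] for k in keys)):
        yield dict(zip(keys, values))

scenarios = list(corner_scenarios(DIMENSIONS))
# Two dimensions with two extremes each -> four corner scenarios.
```

Adding a third dimension (say, row size) would double the corner count, which is why this style of testing eats hardware so quickly.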
Commercial testing tools were designed for the monolithic stack. So companies like MemSQL have invested tremendous resources in testing our own software against many permutations, and that led us to create a highly customized and automated test platform. We were big Pokémon fans back when we first built our platform, so we called it Psyduck.
TechRepublic: How did you go about building Psyduck?
Sverre: So, years ago we found ourselves in this first generation of testing that required lots of unique tests to run daily, each with its own scenario and a highly variable runtime. All of those tests required tons of RAM and multiple cores, so you can imagine the scale you have to think about to test a platform like that. There wasn't an off-the-shelf solution. We had to build it ourselves.
Psyduck started out innocently enough, put together on a few machines just to get something going, and it suited our needs for a time. But as MemSQL's testing needs exploded, the manual operations that had to be performed on each machine made it really challenging to scale. We also didn't have the test isolation we needed.
For a time, moving onto a virtual machine infrastructure partially addressed those challenges. Each VM ran its own specific types of tests, so we had reproducible images, and virtual machines provided some nice test isolation. But ultimately, VMs became problematic: they were too slow to launch and created a big performance drag.
TechRepublic: In hindsight, what were the hardest parts about building out Psyduck?
Sverre: Well, as I said, containers bring some nice isolation between individual tests, but there's a lot more to it than that when it comes to delivering the actual resources required for running the tests. Storage and networking in particular have been tricky areas for containers as the industry shakes out best practices and common standards. The container networking model is completely different from the traditional enterprise networking model, and delivering storage resources to many different containers is very different from configuring storage for monolithic applications.
When you are running containers at large scale, you need control over how those containers claim resources so they don't collide with each other (the so-called "noisy neighbor" problem). With unit tests this becomes especially important, because some tests require far more resources than others, and some tests are short-running while others can take hours to complete.
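One common way to tame noisy neighbors is to cap each test container's CPU and memory at launch. A sketch that assembles such a `docker run` invocation (`--cpus` and `--memory` are standard Docker flags; the image name, test name, and limits are hypothetical, not Psyduck's actual configuration):

```python
def container_test_cmd(image, test_name, cpus="2", memory="4g"):
    """Build a `docker run` command that launches one test in its own
    throwaway container, with hard CPU and memory caps so a greedy
    test can't starve its neighbors."""
    return [
        "docker", "run", "--rm",   # discard the container after the test
        f"--cpus={cpus}",          # cap CPU shares for this test
        f"--memory={memory}",      # cap RAM for this test
        image, "run-test", test_name,
    ]

# Hypothetical image and test identifiers for illustration.
cmd = container_test_cmd("memsql/test-runner:latest", "replication_basic")
```

A scheduler can then hand long-running, resource-hungry tests larger caps while packing many short tests onto the same host without interference.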
We actually have been beta testing a hyper-converged appliance from a company called Diamanti, started by ex-Cisco UCS folks, to bring more automation and guaranteed performance to how storage and network I/O are delivered to containers. Presently, these challenges are very "do-it-yourself" for companies building CI/CD test pipelines. We think that automated network and storage features from players like Diamanti are a big step forward in bringing the kind of mature tooling to containers that already exists in the virtual machine world.
Matt Asay is a veteran technology columnist who has written for CNET, ReadWrite, and other tech media. Asay has also held a variety of executive roles with leading mobile and big data software companies.