Diamanti had a legacy app it needed to bring into the 21st Century. Here's what the company learned along the way.
Containers may be the hot new thing, but most large enterprises are buried in not-so-hot legacy apps. Upgrading them or adding features can take months or even years. Adding insult to injury, maintenance on these old apps is expensive. Enterprises are rushing to upgrade their infrastructure and migrate apps to the cloud, but legacy apps complicate that process.
Vendors are lining up to help, most notably Docker. At DockerCon in April 2017 the company launched a new program called Modernize Traditional Apps (MTA) to help companies modernize legacy applications written in .NET and Java by containerizing them...without changing a single line of code. If this sounds too good to be true, it just might be. Much more believable was Diamanti's work to move its legacy application to the Docker container format, a process not without a few tears, as Arvind Gupta, Diamanti's technical marketing engineer, told me in an interview.
The pain of containerizing old stuff
For Diamanti, the goal was to containerize its automated testing application. Diamanti is a relatively new startup, but this particular app closely resembles other legacy applications that typically run on high-performance computing clusters. It's the kind of application common in pharma, oil and gas, and other fields running computationally intensive simulations over large datasets. As such, it's a good model for both how and why to modernize an app through containers, preferably without breaking things and without expensive code refactoring.
As Gupta told me: "It was certainly painful to manage the existing CI/CD platform for running parallel jobs. That's why we went for a container approach in the first place. We wanted to achieve the same results with less complexity and more scalability." Did it work? Yes..."eventually."
That "eventually" hints at lots of bother, but along the way Diamanti learned a great deal about containerizing the jobs using Docker. For example, debugging a container failure is one of the hardest tasks because you don't have clear visibility inside the container. It takes time to understand why a container or pod died, or why a container is not running as expected.
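The article doesn't detail Diamanti's debugging workflow, but commands like the following are the usual starting points for that kind of investigation (the pod, namespace, and container names here are placeholders):

```shell
# Inspect events and state transitions for a failing pod
# ("my-pod" and namespace "ci" are placeholder names)
kubectl describe pod my-pod -n ci

# Logs from the current container, and from the previous
# instance if the container crashed and restarted
kubectl logs my-pod -n ci
kubectl logs my-pod -n ci --previous

# For plain Docker, the exit code and OOM-kill flag often
# explain why a container died
docker inspect --format '{{.State.ExitCode}} {{.State.OOMKilled}}' my-container
```

Even with these, the visibility problem Gupta describes is real: the commands tell you *that* a container died and with what status, but reconstructing *why* often still means re-running it interactively with a shell.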
Also, containers run as root by default, so the problem becomes how a company can read and write NFS volumes that are actually owned by other users. In Diamanti's case, Gupta detailed, "we overcame this problem with a workaround, but we need to still figure out a more secure way." Despite the less-than-ideal approach, "once we had a successful Docker image, it was completely portable and scalable," Gupta said.
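Gupta doesn't say what Diamanti's workaround was, but one common approach to this permissions mismatch is simply to run the container as a non-root UID/GID that matches the owner of the NFS export (the UID 1001, mount path, and image name below are illustrative):

```shell
# Run as a specific non-root user matching the NFS export's owner
# (uid/gid 1001, the mount path, and the image name are placeholders)
docker run --user 1001:1001 \
  -v /mnt/nfs/testdata:/data \
  my-test-image:latest

# Alternatively, bake the user into the image itself
# (Dockerfile fragment):
#   RUN useradd --uid 1001 ciuser
#   USER ciuser
```

The trade-off is that hard-coding a UID couples the image to one storage environment, which may be why Gupta still wanted "a more secure way."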
To manage cluster nodes, the company turned to Kubernetes, offering "a completely fresh and flexible approach compared to the old system," Gupta said. However, he also noted that "setting up Kubernetes is not for the faint of heart." Fortunately, Gupta went on, "our Diamanti D10 appliance simplified the Kubernetes setup and eliminated most of the other common problems." For those that don't have the benefit of Diamanti's appliance, Gupta described the common hiccups to me:
Before simplifying our life with our own appliance, we tried to set up Kubernetes ourselves. We started with Minikube, which helped in understanding Kubernetes concepts, but it was limited to a single node and it was hard to map the volumes, as it was running in a VM. We hit a wall trying to set up vanilla Kubernetes distributions across two to three different nodes. It was very complicated and took a long time to set up. Networking setup was very hard. Then we tried to set it up using kubeadm. It was much easier, but networking was still a problem and the containers were not able to talk to the external license server we had.
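For readers attempting the same path, the kubeadm flow Gupta describes looks roughly like this; note that installing a pod network add-on is a separate, easy-to-miss step, and skipping it is a classic source of exactly the networking problems he mentions. (The CIDR, Flannel manifest URL, and join parameters are illustrative; check your chosen CNI plugin's documentation.)

```shell
# On the control-plane node: initialize the cluster.
# The pod CIDR must match what the CNI plugin expects
# (10.244.0.0/16 is Flannel's default).
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

# Make kubectl work for the current user
mkdir -p $HOME/.kube
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Install a pod network add-on (Flannel shown as one example;
# the manifest URL may differ by version)
kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml

# On each worker node: join using the token kubeadm init printed
# sudo kubeadm join <control-plane-ip>:6443 --token <token> \
#   --discovery-token-ca-cert-hash <hash>
```

Reaching services outside the cluster, like the external license server Gupta mentions, then depends on the CNI plugin's egress behavior and any firewall rules between the nodes and that server.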
The pleasure of containerization
That's the pain. The upside, however, is arguably worth it. As Gupta told me, "our previous legacy app still served its purpose but it was hard to find help and support. Containers freed us from those concerns." Containers also gave Diamanti more control and a way to clean up their continuous integration flow and infrastructure, with the promise of more flexibility down the road.
Containers represent the modern application lifecycle, which is about much greater agility and velocity from deployment to production. That's the positive. The negative, according to Gupta, is that "as that lifecycle accelerates, it exposes operational weaknesses that had been latent in IT all along." In other words, "if you're driving a slow vehicle down a bumpy road, it's an inconvenience, but it's okay. This is the traditional six to 12-month release cycle for your ERP systems and things like that. But when you want to go to highway speeds, it fundamentally becomes unsafe and you have business risks in trying to get that application lifecycle accelerated when you're dealing with operational processes simply not built for that world."
In the modern containerized application world, we're looking at continuous integration, we're looking at releases on a daily basis or sometimes even more frequently. Developers frankly expect a lot more. Developers are accustomed to having programmatic access to resources where they deploy their application. They're accustomed to elasticity (being able to rapidly scale up and scale down), and they're accustomed to very rapid deployments and speed.
By containerizing legacy applications, enterprises bring back this flexibility to developers. It costs some bother and tears, and it will expose flaws in IT processes that have grown calcified with age, but the promise of agility for developers is worth it.