Databases and DevOps seem like oil and water. While databases are seen as static, solid entities, DevOps is known for agility and continuous delivery, which requires a lot of change in very little time. However, without data, applications have nothing to deliver. Databases need to be part of DevOps rather than work at cross purposes with it, as difficult as that may seem. Experts say that keeping databases flexible with the right tools and technology, and in some cases using separate and smaller data stores, can go a long way toward achieving that goal.
The biggest thing about DevOps is that it is supposed to be agile and rapid. “If the core of your business value is in the data you store and manage for your customers, having the flexibility to change that data quickly, easily, and reliably is critical,” said Blake Smith, director of infrastructure engineering at Sprout Social. “Continuous delivery is all about cutting down the time from value creation to getting that value into your customers’ hands.”
Unfortunately, most teams overlook changing the data models when they’re changing their application code with continuous delivery, Smith said. While using schema-less or NoSQL database technologies can address the need for flexibility, teams often don’t build a strategy to manage changes in data structure. This leads to databases that don’t work with agile applications. Companies should adapt to changing business data needs in the application while keeping the complexity low, he added. This begins in the database.
Where databases fit in with DevOps
By its very definition, DevOps is the convergence of development and operations working as a team. Databases fall into the realm of operations, therefore becoming an essential part of DevOps, according to Robert Heuts, director of software engineering at POP. Operations can’t ignore what development is doing, and vice versa, when it comes to making sure data is there to fuel the applications.
“The issue with databases is that they are non-transient by design, and this makes progressing them through a continuous delivery pipeline a lot harder than code, for example,” Heuts said. To complicate matters further, production data needs to move through the pipeline in anonymized form, making the database one of the more difficult aspects of DevOps. It takes more effort, but ensuring schemas and content can be migrated across continuous delivery landscapes securely and effectively can make a huge difference.
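Anonymizing production data on its way down the pipeline can be as simple as masking identifying fields with one-way hashes before the rows reach test environments. A minimal sketch of that idea (the field names and masking scheme here are illustrative, not taken from any tool mentioned above):

```python
import hashlib

def anonymize_row(row, pii_fields):
    """Replace PII values with stable one-way hashes, so lower
    environments keep realistic data shape without real identities."""
    masked = dict(row)
    for field in pii_fields:
        if field in masked and masked[field] is not None:
            digest = hashlib.sha256(str(masked[field]).encode()).hexdigest()
            masked[field] = digest[:12]  # shortened, non-reversible token
    return masked

# Hypothetical production row being copied into a staging database.
row = {"id": 42, "email": "jane@example.com", "plan": "pro"}
print(anonymize_row(row, ["email"]))
```

Because the hash is deterministic, joins and duplicate detection still behave the same across environments, while the original values never leave production.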
Dealing with schemas
However, just changing the schema of the database programmatically won’t solve every problem, said Joshua Eichorn, chief technology officer at Pagely. “A new release might require a schema change, requiring lock-step deployment with the rest of the app.” This can be time-consuming.
Unlike other parts of the stack, schema changes can’t be made on a copy of the database and then swapped in the way a team would replace an instance or container. The database is a primary component of the application and can’t be flipped out easily. As a result, teams often treat the database as a separate entity and apply schema changes manually, outside the continuous delivery setup, Eichorn said.
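One common way to loosen that lock-step coupling (a general pattern, not something Eichorn describes) is expand/contract: ship an additive, backward-compatible schema change first, so old and new releases can run against the same database during the rollout. A minimal sketch using SQLite:

```python
import sqlite3

# Expand/contract sketch: add a nullable column first ("expand") so
# old application code keeps working while new code rolls out; the
# old column is dropped later ("contract") once nothing reads it.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('ada')")
conn.commit()

# Expand step: purely additive, safe for in-flight old releases.
conn.execute("ALTER TABLE users ADD COLUMN display_name TEXT")

# The old query still works unchanged...
old = conn.execute("SELECT id, name FROM users").fetchall()
# ...and new code can begin backfilling and using the new column.
conn.execute("UPDATE users SET display_name = name WHERE display_name IS NULL")
new = conn.execute("SELECT display_name FROM users").fetchall()
print(old, new)
```

The contract step (dropping `name`) only ships after every running release has stopped reading it, which is what removes the need to deploy schema and code in lock-step.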
Shifting toward a more agile database
Operations can also take a cue from development when it comes to managing databases. Microservices and immutable build and replace infrastructure are leading teams away from monolithic databases and to smaller data stores, according to Kevin McGrath, senior CTO architect at Sungard Availability Services. This means shifting from one relational or NoSQL database that serves up data for many applications to several databases that support each service.
This helps support the continuous integration and continuous delivery of DevOps. Code needs to move through pipelines quickly, and database changes need to follow suit, McGrath said. Smaller changes can be pushed to production quickly with feature flags that enable or disable tables, columns, documents, and stored procedures. Databases are then pulled into infrastructure as code and start to look like any other configuration item.
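A sketch of the feature-flag idea, with hypothetical flag and column names: the schema change lands in production first, and a flag decides when application code actually starts reading the new column.

```python
# Hypothetical flag store; in practice this would be a config service
# or flag SDK rather than a module-level dict.
FLAGS = {"use_display_name": False}

def user_label(row):
    """Return the label for a user row; the new display_name column
    is only consulted once the flag is flipped on."""
    if FLAGS["use_display_name"] and row.get("display_name"):
        return row["display_name"]
    return row["name"]  # fallback keeps behavior identical until the flip

row = {"name": "ada", "display_name": "Ada L."}
before = user_label(row)          # flag off: old behavior
FLAGS["use_display_name"] = True
after = user_label(row)           # flag on: new column in use
print(before, after)
```

Because enabling the column is a config change rather than a deploy, a bad rollout can be reversed by flipping the flag back, without touching the schema.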
That doesn’t mean changing the data and structure of databases continuously, McGrath said. “The data and structure of a database should only change as much as the service that fronts it demands.” While it may seem counterintuitive, regularly rolling the infrastructure the database resides on, practicing restores from backup, and failing over from master to slave nodes will pay dividends over time. It tests how the databases and the entire environment handle failures, which helps gauge how much load they can handle, he said.
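The restore drill McGrath describes can be rehearsed in miniature. This sketch uses SQLite's backup API as a stand-in for whatever backup tooling a team actually runs: take a snapshot, "lose" the primary, and verify the data survives in the restored copy.

```python
import sqlite3

# Stand-in for a primary database node.
primary = sqlite3.connect(":memory:")
primary.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
primary.execute("INSERT INTO orders (total) VALUES (9.99)")
primary.commit()

# Take a backup, then simulate losing the primary entirely.
replica = sqlite3.connect(":memory:")
primary.backup(replica)   # snapshot via SQLite's online backup API
primary.close()

# The drill passes only if the restored copy serves the data.
restored = replica.execute("SELECT total FROM orders").fetchone()
print(restored)
```

The point of running this routinely, per the article, is not the mechanics of any one tool but proving, before an outage, that the backup path and the failover path both actually work.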
It’s clear that databases will always be intertwined with DevOps as the development team creates applications that rely on databases to fuel them. For operations teams, smaller data stores, schema changes, and repeated failure testing can help provide the back end needed for both teams to achieve agile deployment and continuous delivery.