I recently talked with an analyst who is new to the DevOps space, and he asked a question that our team often assumes everyone knows the answer to – why would someone need a tool like Orca when so many automation tools are available now?
In an ideal world with a small greenfield shop that is 100% automated, the answer is that you probably don’t need a drift detection tool. But very few established companies fit that description. More common are the shops with a large number of existing applications, more infrastructure than they can manage, and a mish-mash of technologies. For these teams, microservices in containers isn’t a viable near- or even medium-term solution. They’re having enough trouble managing their current applications and simply don’t have the bandwidth to move everything to the cloud and containerize it.
I work with teams all the time that have automated provisioning in place, yet when they request server builds, one server somehow ends up different from the others. Often they don’t find the difference until the deployed application won’t come up. Then they bring in the DBAs and developers, making it an “all hands on deck” scenario until they find the problem, which usually turns out to be an update that was applied to one server but not the others, a package that’s a newer version on one server, or middleware that was configured incorrectly on a single box. You get the idea.
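To make the package-drift case concrete, here is a minimal sketch of comparing package inventories across servers. It assumes each server’s installed packages have already been collected (for example, from `rpm -qa` or `dpkg -l` output) into a `{package: version}` mapping; the server names and versions below are hypothetical.

```python
def find_drift(inventories):
    """Report packages whose versions differ across servers.

    inventories: {hostname: {package_name: version}}
    Returns {package_name: {hostname: version_or_"<missing>"}} for
    every package that is not identical on all hosts.
    """
    all_packages = set()
    for packages in inventories.values():
        all_packages.update(packages)

    drift = {}
    for pkg in sorted(all_packages):
        versions = {host: packages.get(pkg, "<missing>")
                    for host, packages in inventories.items()}
        if len(set(versions.values())) > 1:  # more than one distinct version
            drift[pkg] = versions
    return drift

# Hypothetical inventories from three "identical" web servers:
inventories = {
    "web01": {"openssl": "1.1.1k", "nginx": "1.20.1"},
    "web02": {"openssl": "1.1.1k", "nginx": "1.18.0"},  # older nginx
    "web03": {"openssl": "1.1.1n", "nginx": "1.20.1"},  # patched openssl
}

for pkg, versions in find_drift(inventories).items():
    print(pkg, versions)
```

Trivial as it looks, this is exactly the comparison nobody does by hand until an outage forces it; a drift detection tool runs it continuously across the whole stack.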
Perhaps a bigger problem is when the provisioning is done correctly – server, middleware, database – but someone on the production team tweaks the middleware configuration, edits a configuration file, or makes a seemingly innocuous change to the database, and everything blows up. It’s an “all hands on deck” scenario again, with the DBAs, developers, and Ops team all trying to figure out what went wrong, what changed between the current configuration and the one from two days ago, or why the application runs in UAT but not Production.
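The “what changed since two days ago” question is just a diff between configuration snapshots taken at different times. A minimal sketch, assuming snapshots are stored as key/value settings (the setting names and values here are hypothetical):

```python
def diff_snapshots(old, new):
    """Return settings added, removed, or changed between two snapshots.

    Each snapshot is a {setting: value} dict. The result maps each
    differing setting to a (before, after) pair; None marks a setting
    absent from that snapshot.
    """
    changes = {}
    for key in old.keys() | new.keys():
        before, after = old.get(key), new.get(key)
        if before != after:
            changes[key] = (before, after)
    return changes

# Hypothetical middleware config captured two days ago vs. today:
snapshot_two_days_ago = {"max_connections": "200", "heap_size": "2g"}
snapshot_today = {"max_connections": "50", "heap_size": "2g", "debug": "true"}

for key, (before, after) in diff_snapshots(snapshot_two_days_ago,
                                           snapshot_today).items():
    print(f"{key}: {before!r} -> {after!r}")
```

With snapshots in hand, the same diff answers the UAT-versus-Production question too: compare the two environments’ snapshots instead of two points in time.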
Having a drift detection tool that quickly tells you what changed, when it changed, and how it compares to other environments across the application stack is game-changing. Being able to easily see that an unexpected schema change happened as the result of a release, or that a middleware configuration keeps changing when it shouldn’t, can shave days off your team’s MTTD (mean time to detection) and free you up to plan that cloud strategy, if that’s where you’re headed.