Demystifying DevOps: How Analytics Helps in IT Operations


DevOps is not just about automation or the cloud. Adopting DevOps in IT operations and technology delivery means disrupting ingrained corporate culture, cutting through the hype, democratizing knowledge, and using the right data, tools, and processes to meet organizational aspirations.

Many organizations contend with managing the costs, timeliness, demand, and supply of technologies, solutions, and resources across their businesses. There’s the expectation to meet accurate timetables, the need to improve productivity, the responsibility to ensure quality and performance, and the commitment to contribute to the bottom line.

These expectations propelled DevOps and Agile principles to the fore. They alleviated the burdens of managing the company’s technology products and services, and the IT operations and development teams who deliver and maintain them. Their benefits to the likes of Netflix, ING, and Nordstrom — along with 74% of recently surveyed enterprises — are reimagining operating models and reiterating the significance of cross-functional collaboration.

Yet more than a decade after DevOps coalesced into IT operations, many companies still struggle to implement it at scale, let alone adopt it fully. While 83% of surveyed IT decision-makers said their organization is adopting DevOps, the vast majority remain stuck in the middle. Amid the noise and hype, DevOps and Agile can dwell somewhere between industry best practices, management expectations, “cloudwashing,” and cargo cult programming.

The result: decisions based on broad, ingrained, yet ambiguous rules, comparisons, and assumptions. They do little to empower decision-makers to influence the quality, cost, speed, and utility of the tools and technologies their company uses. With no concrete insights to draw from, it’s difficult to justify investing in the capabilities or resources needed to meet the pressing demand for digitalization that many organizations face.

We’ve partnered and worked with many enterprises, and over the years we’ve found that certain myths remain entrenched even among companies with sizable technology portfolios.


Myth: DevOps can solve all problems in IT delivery.

DevOps can provide tremendous benefits: automation, improved feature rollout, decrease in outages, faster deployment, and quicker rollbacks, fixes, and updates through iteration, to name a few.

However, DevOps is not a panacea. DevOps is inherently tied to corporate culture that builds on existing or familiar components and unites them to improve development and operations. It’s not enough to simply introduce new tools and fill new roles. Trying to do a new thing in the old way is self-defeating. Adopting DevOps principles requires upending deep-seated culture and mindset that organizations might find too onerous to break out of. It’s no surprise that by 2023, 90% of DevOps-related initiatives are projected to fail due to cultural issues.

There’s no single, definitive approach to adopting DevOps. It will be as unique as the people who implement it within the business. Some companies, for example, might find DevOps more effective when focusing on product delivery. Others might find it cumbersome when they’re building highly customized, enterprise-level solutions. Outcomes can also be influenced by the openness (or reluctance) of stakeholders and the company’s formal approval processes.

It’s more reasonable to consider DevOps as a journey, rather than a destination. DevOps strives for continuous improvement, constantly seeking better outputs and greater efficiencies.


Myth: AI can replace the company’s support teams.

Artificial intelligence (AI) is everywhere: over 90% of companies surveyed last year have dabbled with AI in some form, whether testing or scaling proofs of concept (PoCs), implementing limited use cases, or fully utilizing it across the business. By 2024, 70% of enterprises will use the cloud to operationalize AI projects.

Digital transformation accentuated the adoption of AI. In many aspects, however, technology still can’t replace the human touch — at least for now.

While machines can efficiently perform and automate a task, they currently lack the artistry in doing it. A tool or platform may recommend an approach, but a human expert uniquely understands when to adjust and what nuances and subtleties are needed.

AI-powered tools certainly help, but they’re not yet mature enough to tackle every complexity. They can perform exceedingly well at the task they were designed for, but if the conditions or settings of that task change even slightly, they will predictably fail. The technical feasibility and use cases of AI also differ significantly across industries and activities. Observability in IT operations is another example: personnel monitoring telemetry from well-instrumented tools can catch and resolve corner cases before they even affect the business.

AI is constantly evolving, however, and we expect it to advance beyond automating or solving routine problems. We’re already seeing it in healthcare, wealth management, and insurance.

Like DevOps, AI is not a silver bullet. It’s more realistic to see (and adopt) AI as a complement to human expertise, working together to improve the company’s agility, productivity, and experience.


Myth: Operations is only about costs and can be handed over cheaply.

Quite the contrary: Strong, agile, and well-equipped operations teams can reduce costs by up to 60% while improving the quality and timeliness of technology delivery and use. Recently surveyed decision-makers in more than 2,300 organizations echo this, having prioritized investment in IT operations to deal with the pandemic, optimize costs, speed up their pace of innovation, and improve user and customer engagement. These decision-makers placed strong emphasis on the value proposition and business impact of the IT team’s performance because the ROIs are tangible: lower mean time to resolution (MTTR) per incident, a higher rate of detected performance issues, and increased revenue from newly created digital services.
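MTTR is one of the simpler ROIs to make concrete. A minimal sketch of how it can be computed from incident records, assuming a hypothetical record shape with ISO-8601 timestamps (real ticketing systems expose these fields differently):

```python
from datetime import datetime

def mean_time_to_resolution(incidents):
    """Average resolution time, in hours, over a list of incident records.

    Each record is a dict with ISO-8601 'opened' and 'resolved' timestamps.
    The field names are illustrative, not from any specific ticketing tool.
    """
    durations = [
        (datetime.fromisoformat(i["resolved"])
         - datetime.fromisoformat(i["opened"])).total_seconds() / 3600
        for i in incidents
        if i.get("resolved")  # skip incidents that are still open
    ]
    return sum(durations) / len(durations) if durations else 0.0

incidents = [
    {"opened": "2023-01-05T09:00:00", "resolved": "2023-01-05T11:30:00"},
    {"opened": "2023-01-06T14:00:00", "resolved": "2023-01-06T15:00:00"},
    {"opened": "2023-01-07T08:00:00", "resolved": None},  # still open
]
print(round(mean_time_to_resolution(incidents), 2))  # 1.75
```

Tracking this number per month, per service, is what turns “operations quality” from an assumption into a measurable trend.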

The challenge is in knowing how and where to connect the dots. Many assume that operations is only about costs because they find it difficult to calculate its financial benefits, many of which are inconspicuous without the tools to measure them. And with a lack of visibility and capability to measure, they’d be inclined to hand over operations to third parties. Organizations tend to do this without realizing the hidden costs and unforeseen expenses that often overshadow why they’re farming out their operations in the first place.

Moving operations calls for the right context and shrewdness. The decision should account for the regulatory, security, privacy, quality, and organizational risks that might outweigh an apparent financial advantage.


Analytics gives more context to technology delivery and maintenance.

These are just some of the myths that, in part, motivated us to establish our own standards and best practices for delivering and maintaining IT solutions. We draw on data and analytics to continuously improve our own teams, enable our clients to make informed decisions when allocating resources, and give stakeholders fuller visibility into the variables that affect their investments.

Our best practices are underpinned by principles that enable our teams to transform operations at speed and scale:

Build on existing tools and methodologies to improve processes. Technologies sit on troves of data about how they operate. This data can be engineered in ways that reveal insights into their performance and impact on the business and its people.

We use value stream mapping (VSM), for instance, to find and simplify overcomplicated and overlapping processes. Our approach to VSM also helps us uncover gaps in oversimplified or unsupervised processes that might lead to security or compliance issues. We use and integrate existing tools for process mining, which enables our support teams to understand and improve how business processes run. Our knowledge-centered service bridges all these elements together, using collective, in-depth data to quickly resolve issues.

Design and create tools and processes that optimize workflows. We customize business intelligence systems and data visualization technologies to track and measure processes as well as strengthen governance in our teams and the services they support.

We use predictive analytics, for example, to overcome the complexities of analyzing thousands of monthly incidents in multivendor environments. This enables our support teams to be proactive, rather than reacting to outages or incidents only after they happen. All of this is documented in a knowledge base that allows our developers and operations teams to transfer their know-how. Everything is visible to management through unified, dynamic dashboards that monitor the performance of all services in near real time.
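The idea behind this kind of proactive alerting can be sketched very simply: compare the latest incident volume per category against its trailing baseline and flag what is trending upward. This is a deliberately crude illustration with made-up data and thresholds, not the predictive model itself:

```python
def flag_trending(counts_by_month, window=3, threshold=1.5):
    """Flag incident categories whose latest monthly count exceeds
    `threshold` times their trailing-window average -- a crude signal
    that a service needs proactive attention before it degrades further.

    counts_by_month: dict mapping category -> monthly counts, oldest
    first. Window and threshold are illustrative choices.
    """
    flagged = {}
    for category, counts in counts_by_month.items():
        if len(counts) <= window:
            continue  # not enough history to form a baseline
        baseline = sum(counts[-window - 1:-1]) / window
        latest = counts[-1]
        if baseline and latest > threshold * baseline:
            flagged[category] = (latest, round(baseline, 1))
    return flagged

history = {
    "database": [12, 10, 11, 30],  # sudden spike -> flagged
    "network":  [20, 22, 19, 21],  # steady -> not flagged
}
print(flag_trending(history))  # {'database': (30, 11.0)}
```

A real deployment would replace the trailing average with a trained model, but the workflow is the same: surface the anomaly before it becomes an outage.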

Use cloud financial operations (FinOps) to optimize costs. Organizations that don’t have defined cost optimization plans will end up overshooting budgets on cloud services by up to 70%. FinOps — which unites technology, finance, and business teams — lets us strike a balance between cost, speed, and quality while fostering transparency and accountability in spending, owning, and using cloud resources.

Our approach to FinOps includes finding quick wins and implementing long-term, strategic goals:

  • Constantly monitor and optimize infrastructure, resources, and processes to avoid nonessential and redundant costs.

  • Analyze the financial impact and risks of software components, PoCs, and customized cloud solutions.

  • Review the enterprise software architecture and its financial impact on the cloud environment or platform.
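The quick wins in the list above often start with spotting idle or underused resources. A hypothetical sketch over utilization samples; the resource names, thresholds, and CPU-only focus are simplifications for illustration:

```python
def find_idle_resources(usage, cpu_threshold=5.0, min_samples=24):
    """Return resources whose average CPU utilization (%) falls below
    a threshold -- candidates for downsizing or shutdown.

    usage: dict mapping resource name -> list of hourly CPU samples.
    Real FinOps tooling also weighs memory, network, and business
    context before recommending action.
    """
    idle = []
    for name, samples in usage.items():
        if len(samples) < min_samples:
            continue  # too little data to judge fairly
        avg = sum(samples) / len(samples)
        if avg < cpu_threshold:
            idle.append((name, round(avg, 1)))
    return sorted(idle, key=lambda item: item[1])

usage = {
    "staging-vm": [2.0] * 24,   # barely used around the clock
    "prod-api":   [40.0] * 24,  # genuinely busy
}
print(find_idle_resources(usage))  # [('staging-vm', 2.0)]
```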

By applying robust policies and cost management strategies to each of our projects, we enable our partners and clients to get more value and performance from their cloud resources while also instilling financial responsibility.

Indeed, DevOps is more than just automation or the cloud. Our approach to technology delivery and ownership, for instance, uses the instrumentation of our development, IT operations, and support teams to measure and improve a technology, solution, or tool’s quality and adoption. We use data and analytics to track performance — the velocity of releases, the rollout of features, defects and misconfigurations, outages in production environments, and cost overruns, to name a few.
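Release-level measures like these can be derived directly from deployment records. A minimal sketch computing deployment frequency and change failure rate, assuming a hypothetical record schema with a boolean failure flag (CI/CD systems expose this differently):

```python
def release_metrics(deployments, period_days):
    """Compute deployment frequency (per day) and change failure rate
    from a list of deployment records over a given period.

    deployments: list of dicts with a boolean 'failed' field
    (an illustrative schema, not any specific CI/CD tool's API).
    """
    total = len(deployments)
    if total == 0:
        return {"deploys_per_day": 0.0, "change_failure_rate": 0.0}
    failures = sum(1 for d in deployments if d.get("failed"))
    return {
        "deploys_per_day": round(total / period_days, 2),
        "change_failure_rate": round(failures / total, 2),
    }

deploys = [{"failed": False}] * 18 + [{"failed": True}] * 2
print(release_metrics(deploys, period_days=10))
# {'deploys_per_day': 2.0, 'change_failure_rate': 0.1}
```

Watching these two numbers together guards against the obvious failure mode: shipping faster while quietly breaking more.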

All of these are drawn from our own people’s principles, experiences, and activities. New technologies are testing the capacity and capability of businesses to digitally transform and deliver value. Harnessing them requires near-constant disruption of the organization’s people, processes, and technologies. These don’t evolve in a vacuum — they must change holistically so businesses can navigate the chaos and noise that such disruptions create.
