There has long been a divide between development and operations teams. But recently, there has been a movement, within both small startups and massive enterprise organizations alike, to break down these metaphorical walls and build bridges of shared accountability between the two teams. With the emergence of roles like DevOps and Site Reliability Engineering (SRE), we can see the introduction of more collaborative approaches to delivering reliable software. Still, in the context of increasingly distributed and complex systems and tooling, when things go awry, accountability often remains unclear. In the heat of battle, when an application breaks and customers are feeling the burn, who is ultimately responsible for ensuring application reliability? Do enterprises with DevOps workflows have the right processes in place to ensure quick resolution of issues?

Modern development and operations professionals rely on a variety of tools and processes to build and maintain their applications. The adoption of DevOps as the foundation of the tool-chain and workflows used by today's professionals is a key indicator of a successful transformation.

Time and tide wait for no one

Modern software production stops for no one, and everyone is needed to keep it rolling. Great speed and friction produce a lot of heat, and when everything is on fire all the time, even the best engineers struggle to keep the train speeding smoothly without getting burned.

Back in the day, things were simple

In days gone by, software projects were far simpler things than what we know today. As we moved from single-process desktop apps to large-scale, distributed, cloud-based solutions, simplicity was run over by the move-fast-and-break-things mentality. Supporting and maintaining software evolved from a simple task carried out by small teams with a basic skill set, to company-wide efforts requiring our best engineers. Nowadays, software projects comprise multiple services and micro-services distributed in the cloud (as well as on-prem, at the edge, and on physical devices). Each service is created by a different developer, a different team, and maybe even a different department or a third party. However, all of these parts must play together harmoniously like a beautiful orchestra, and as we've mentioned before: stopping or pausing is not an option.

DevOps has gone from fringe concept to mainstream juggernaut in a matter of just a few years.

What was once a niche approach to retool the way teams or organizations develop, deploy, and maintain applications has evolved to become a business imperative that impacts all areas of the company.

DevOps is not a product or tool; it's a culture that has evolved organically to meet the needs of a more rapid pace of IT. Many organizations already use DevOps processes or tools without knowing what DevOps is.

Faster delivery, greater security, and higher quality have always been aspirations for any organization creating apps. Today, through the use of agile for creation, DevOps for building, and cloud for delivery, these aspirations are achievable for world-spanning apps by even small teams with tight budgets.

Open source plays a large role in achieving these changes. Rewriting open source code that works well is rare: when a library or application can be downloaded and integrated to meet the needs of your current development effort, writing it from scratch has much less appeal and is normally not cost-effective. Using open source code is faster and generally more convenient for developers.

Benefits of DevOps

Companies that practice DevOps have reported significant benefits, including: significantly shorter time to market, improved customer satisfaction, better product quality, more reliable releases, improved productivity and efficiency, and an increased ability to build the right product through rapid experimentation.


Because DevOps is intended to be a cross-functional mode of working rather than a single tool, teams assemble sets (or "toolchains") of multiple tools. Such DevOps tools are expected to fit into one or more of these categories, reflecting key aspects of the development and delivery process:

  1. Code — code development and review, source code management tools, code merging
  2. Build — continuous integration tools, build status
  3. Test — continuous testing tools that provide feedback on business risks
  4. Package — artifact (binary) repository, application pre-deployment staging
  5. Release — change management, release approvals, release automation
  6. Configure — infrastructure configuration and management, Infrastructure as Code tools
  7. Monitor — application performance monitoring, end-user experience

Note that there exist different interpretations of the DevOps toolchain (e.g. Plan, Create, Verify, Package, Release, Configure, and Monitor).
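To make the ordering concrete, here is a minimal sketch in plain shell: one placeholder stage per category, run in sequence, with the chain stopping at the first failure. The echo commands stand in for real tools (Git, Maven, Jenkins, Ansible, and so on); the specific tool names are assumptions, not part of the toolchain definition.

```shell
#!/bin/sh
# One pass through a DevOps toolchain: each stage runs in order and the
# chain stops at the first failure. The echo bodies are placeholders for
# real tool invocations.

DONE=""
stage() {
    name=$1; shift
    if "$@"; then
        DONE="$DONE$name "                      # record the completed stage
    else
        echo "$name failed: chain stops here" >&2
        exit 1
    fi
}

stage Code      echo "merge reviewed changes into the mainline"
stage Build     echo "compile on every commit (continuous integration)"
stage Test      echo "run the automated test suite"
stage Package   echo "publish the artifact to a binary repository"
stage Release   echo "promote the approved build"
stage Configure echo "apply infrastructure as code"
stage Monitor   echo "watch performance and user experience"
```

Swapping any placeholder for a failing command stops everything after it, which is exactly the feedback loop the toolchain is meant to provide.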

Some categories are more essential in a DevOps toolchain than others.

A chain is only as strong as its weakest link

There is a lot to do when implementing a DevOps toolchain, but most of the global issues either fall outside the realm of the DevOps team or are already being addressed in the act of moving to DevOps. With automation taking over a larger portion of the build/test cycle, many roadblocks are broken down just by implementing DevOps. Issues with approvals will remain, since most organizations do not attempt to run from commit to release without human interaction, but they are greatly reduced when a system catches a failed build and notifies the responsible developer automatically.

The Periodic Table of DevOps Tools from XebiaLabs illustrates the expanse of available tools and the category each one supports.

Productivity and code quality are the top two performance indicators for developers and DevOps

Where do I start?

One question asked by every individual or organization looking to start their DevOps journey is 'Where do I start?'

The answer may not be as simple as the one we found. When I first started working in DevOps, my company (comprising over 300 developers across around 50 different teams) used a source control management tool (also referred to as a version control system) called Dimensions.

Dimensions is an older version control system that works out of a traditional database, where each check-in and check-out is an independent database transaction. This is a very inefficient way to perform source control management.

The system was so bad that developers did not even bother using it, because a simple checkout of the latest version of the source code would take about 40 minutes. Considering that our move to DevOps was meant to deliver code faster, better, and safer, Dimensions was a huge show-stopper.

Source control systems today (like Git) are more advanced: transactions are recorded as deltas, tracking the changes rather than creating a new copy of each file.
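As a small illustration (assuming git is on the PATH), the following throwaway script creates a 1000-line file, commits it, makes a one-line change, and shows that the recorded difference between the two revisions is just that one line. The repository path and user identity are placeholders for the demo.

```shell
#!/bin/sh
# Demonstrate that a small edit to a large file costs a small recorded
# change, not a fresh copy of everything, in a modern SCM like Git.

repo=$(mktemp -d)                        # scratch repository
cd "$repo" || exit 1
git init -q
git config user.email dev@example.com    # placeholder identity
git config user.name  "Demo Dev"

seq 1 1000 > big.txt                     # a 1000-line "source file"
git add big.txt
git commit -qm "initial import"

echo "one more line" >> big.txt          # a one-line change
git add big.txt
git commit -qm "small change"

# The history shows exactly one changed line between the two commits:
git diff --stat HEAD~1 HEAD
```

Compare this with a system that stores a full copy of the file per check-in: the cost of the small change above would be the cost of the whole file.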

So, our first step was to implement a more efficient source control system.

Moving from a system like Dimensions (or from no SCM at all) to an efficient SCM system takes significant knowledge gathering, training, effort, and time.

Our first task was to perform an analysis and a proof of concept (POC) of the new system, and to understand the effort required from the core "DevOps" team to provide, support, train on, and maintain it.

Our initial analysis brought up two potential tools that could be implemented: Subversion and GitHub.

To finalize our decision, we considered the following:

    • Hosting (Server or cloud)
    • Maintenance efforts (including support, outages, etc.)
    • Training efforts (keep in mind, we were introducing a tool and process that the current landscape was unfamiliar with; we needed to train all developers on using the new tool and on the true concepts of SCM)
    • Rollout efforts (staged approach vs full-blast launch)

(We settled on Subversion as a stepping stone before finally migrating to Git.)


Once the source control system was set up and in use, the next stage of our DevOps process was to introduce the build pipeline. To achieve a successful build pipeline, all developers need to understand the concepts of continuous integration, continuous delivery, and continuous deployment.

It was also the responsibility of the "DevOps" team to identify a feasible tool that could support the requirements of the organization and to implement a successful build pipeline.

A few things to consider for a stable build pipeline:

    • Build tools (Ant, Maven, Gradle)
    • Continuous integration tools (Jenkins, Bamboo, etc.)

The CI tool must also support:

    • Unit Testing
    • Static Code Analysis
    • Regression Testing
    • Automated Functional testing
    • Artifact (Binary) Repository
    • Release Orchestration
    • Delivery
    • Deployment
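The fail-fast behaviour described earlier, where a failed build stops the pipeline and the responsible developer is notified, can be sketched in plain shell. The stage bodies below (true/false) are placeholders; a real pipeline would invoke the actual test runner, static analyser, and packaging tools configured in your CI server.

```shell
#!/bin/sh
# Fail-fast pipeline sketch: stages run in order, the first failure stops
# the run and records which stage broke. Stage bodies are placeholders.

STATUS=ok
run() {
    name=$1; shift
    echo "stage: $name"
    if ! "$@"; then
        STATUS="failed:$name"
        # A real CI server would now notify the author of the breaking
        # commit, e.g. using the address from: git log -1 --format=%ae
        echo "stage '$name' failed: notifying committer" >&2
        return 1
    fi
}

run "unit tests"        true  &&
run "static analysis"   true  &&
run "regression tests"  false &&  # simulated failure
run "package artifact"  true

echo "pipeline result: $STATUS"
```

Because the stages are chained with `&&`, the "package artifact" stage never runs once the simulated regression failure is hit, which is exactly the roadblock-removal the automated build/test cycle provides.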

More in future articles ...