Choosing the Right First Steps in DevOps Mainframe Transformation

October 4, 2023 

Sometimes, you just need to get started. Especially when you have a big, challenging mainframe transformation ahead of you. You can’t get intimidated by the size of the project ahead. You can’t worry about planning everything perfectly. You can’t spend too much time trying to convince every skeptical stakeholder that the transformation will be a big success.

You just need to get started, to achieve some “quick wins”, to improve over time, and to convince the skeptics through tangible results, no matter how small they might look.

We took this approach with a recent client.

For the past 18+ months we have moved this client through a massive DevOps transformation of their mainframe application development capability.

Over this timeframe, we have made substantial progress… but it all began with a very small, very imperfect, and very isolated initial project.

In this article, we will explore why we took this approach, the challenges we faced getting this client started with their DevOps transformation, and how we chose and delivered this specific first project to just get the client started down the right path to a successful long-term transformation.

The Story So Far…

This is Part 3 of a longer series that will tell the full story of our client’s transformation.

In Part 1, we introduced the client and their project. This client is a multi-billion-dollar technology company that employs hundreds of thousands of people, and operates a central mainframe. When they found us, they had been following 20+ year old legacy processes to develop applications for their mainframe. They partnered with us to transform their mainframe application development processes, and to bring a modern DevOps approach to this critical area of their business.

You can read Part 1 here.

In Part 2, we discussed one of the foundational principles we brought to this project: working with the client to develop a dedicated DevOps team. This team would support every element of the client’s transformation. They would act as the transformation’s “champions”, and they would help the client and all relevant teams standardize and automate their mainframe application development processes. Establishing this team set the stage for the rest of the transformation work that lay ahead for this client.

You can read Part 2 here.

In this article, we’re going to start to dig into the actual, hands-on, nuts-and-bolts work of transforming this client’s mainframe application development processes.

The whole story is:

  1. Bringing DevOps to Mainframe Application Development: A Real-World Story
  2. The Key to Large-Scale Mainframe DevOps Transformation: Building the Right Dedicated Team
  3. Choosing the Right First Steps in DevOps Mainframe Transformation
  4. Improving the Initial Mainframe Automation and Creating a “Two Button Push” Process
  5. How to Select and Implement the Right Tools for Mainframe DevOps Transformation
  6. Picking the Right Tools: A Deep Dive Into the Most Popular Options for Mainframe DevOps
  7. Completing the DevOps Pipeline for Mainframe: Designing and Implementing Test Automation
  8. Making Change Stick: How to Get Mainframe Teams Onboard with New Strategies, Processes, and Tools

Where to Begin? The First Step in a Long Transformation Journey

This client’s DevOps transformation had one big, central goal: to standardize and automate all of their mainframe application deployment processes.

Easier said than done.

This client had been following their existing processes for 20+ years. They used these processes to develop hundreds of new applications and features every year. They had 20+ global teams working in silos who were very used to doing things their way. And pretty much nobody at this client believed that any part of any one of their processes could be standardized and automated, let alone that all of their processes could be transformed.

We faced a big challenge. Not only did we have to find a way to automate and standardize all of their processes, but we also had to find a way to convince them that this transformation was even possible!

There were lots of ways we could have attempted to do this.

We could have launched a massive education campaign that attempted to get everyone on board with the concept of our DevOps transformation project…

We could have carved off a huge chunk of their mainframe application development processes, automated it, standardized it, and handed it back to them…

We could have performed any number of big, impressive-looking actions, planned perfectly down to the nth degree, to show this client and their teams that their large-scale DevOps transformation was doable…

But instead, we started small.

To begin, we focused on creating one tiny, simple, strategic, and imperfect set of automations that only impacted a very narrow slice of the client’s mainframe application development processes, users, and outcomes.

Here’s why.

Identifying and Planning to Solve the Real Barrier to Transformation

When we first looked at this project, we saw two big challenges to overcome.

  1. We had to solve the technical challenges of helping this client automate and standardize their processes.
  2. We had to solve the people challenges of getting their teams to accept the transformation and actually use these automations.

The project’s technical challenges were big, but they were relatively easy to solve: we knew how to write the necessary automations.

But the project’s people challenges were another story… and in some ways even more important to overcome.

We knew the client’s DevOps transformation would be a failure if we couldn’t get their teams on board.

And they had good reasons to be skeptical of this transformation.

They had been following their own way of doing things for many years, sometimes decades. They knew their processes were imperfect, but they knew their processes worked well enough to get by… and they had no evidence that our proposed mainframe application automation and standardization would work at all, let alone deliver improved outcomes.

What’s more, they had already attempted to automate their processes before. The client’s 20+ mainframe application development teams had written many automation scripts. But these scripts were narrowly focused, poorly documented, and never really shared between teams.

In short: the client’s past work in this area had never added up to the sort of end-to-end automated and standardized process environment that we were talking about. Even if these teams believed that what we were attempting to achieve was possible, they had a hard time believing that it could work for them.

We took this resistance seriously. Overcoming it, and convincing the client’s teams that their processes could be automated and standardized in even the smallest, most imperfect way, mattered even more than building great automation.

We knew if we could do this, then we could start to disarm their concerns for the larger project, and to begin to build each team’s trust and belief in our project. If we could accomplish something small, it would lay a foundation that we could build upon for the next sets of bigger and bigger challenges.

Now a quick note: we couldn’t pick a process that was too small and would look insignificant when completed. We needed to show that mainframe application automation was both possible and worth doing. So we needed to automate something that would produce a meaningful, measurable impact once it was completed.

With all this in mind, we set out to decide what elements of this client’s work we could automate first as a “proof of concept” for the project as a whole, and looked for something we could automate relatively quickly and easily, but which would still result in a meaningful process improvement once it was done.

Here is what we chose, and why.

We chose to automate the manual installation of application packages onto the TEST system, a process performed by the client’s operations analysts, for a few reasons:

It Was Slow and Error-Prone: It was performed entirely through manual actions open to human error, and it usually took a full day for the operations analyst to find a time slot to install the package.

If we automated this process, it would create an immediate, tangible, and undeniable improvement for the teams.

It Was Highly Visible: It was a critical part of the deployment process, and many different stakeholders and members of the team had their hands or their eyes on it.

If we improved this process, we would demonstrate our work to a large number of people whom we had to win over to make the larger project a success.

It Was Already Started: The operations analysts had already prepared some automation scripts for this work, in order to simplify their day-to-day work and avoid repetitive actions. They simply had not shared, standardized, or operationalized these scripts in a broadly applicable way, nor documented or organized them so that others could use them.

If we worked with these scripts, we could not only produce our automations faster, but we could demonstrate the collaborative nature of this project and how we were simply there to improve work they already wanted to complete.

This process fit most of the criteria for our “proof of concept”.

However, it was still a bit too large and complex to automate as quickly as we hoped.

So instead of taking on the process as a whole, we broke it down into individual components, and selected a couple of smaller “chunks” that we could focus our initial automation around.

We decided to automate two components of this process.

  1. After the developer wrote their code, they would hand it off, along with installation instructions, to the TEST system. There, an operations analyst would manually perform every action within the installation instructions (things like copying datasets, backing up data and database structures, and submitting the jobs).
    We decided to standardize and automate as many of these installation instructions as possible.
  2. During this process, if the operations analyst encountered any problems, they would tell the developer, and the developer would create a code change to fix the problem. The operations analyst would then need to combine the developer’s initial code with their fixes.
    We decided to automate the combining of the initial code with the fixes.

Now that we knew what we were automating and standardizing, we got to work.

How We Automated and Standardized Our First Processes

Essentially, we followed a simple, streamlined 6-step process that produced this client’s first significant automation of their mainframe application development processes.

Here’s what we did.

  • Step 1:
    We selected the most common and recurring installation instructions that operations analysts most often received from developers. To do this, we analyzed all of the instructions that we could find documented. We collaborated heavily with the operations analysts at this stage to ensure we properly understood and selected the most useful installation instructions to automate first. Finally, we created a shared document that collected these instructions, and listed, for each, its resolution status (whether or not it could be used) and its automation status.
  • Step 2:
    We organized all of the installation instructions that we collected. We lined them up in a linear, step-by-step manner. We then reviewed the lists of installation instructions and steps, and determined which steps had multiple viable ways to be completed. When we encountered a step that had multiple instructions, and multiple possible ways of being completed, we chose the instruction that we felt was best, in order to standardize and simplify the process as a whole under a single, unified, end-to-end procedure.
  • Step 3:
    Using the document we created in Step 2, we once again collaborated with the client’s operations analysts to define clear requirements for this MVP process, identifying the most frequent required actions they had to take before they could send a code change to production.
  • Step 4:
    We collected and consolidated all of the automation scripts for these processes that the operations analysts had already developed. As mentioned, each team member had their own set of automation scripts that they had written, most of which were not meaningfully documented nor organized.
  • Step 5:
    As we worked through these steps, we configured the client’s CI/CD tool components on their mainframe’s test and production systems. We needed to install agents on every system present in these processes, and we needed to properly configure each of these agents.
  • Step 6:
    Finally, we created the client’s first pipeline in their CI/CD tool (they used IBM® UrbanCode® Deploy). To execute the automation scripts, we used the UCD z/OS® plugin to submit JCL jobs onto the system directly from UCD. Now, this last step was not the “best” way to use their CI/CD tool, as it only used about 1% of the available functionality. However, it was the “best” way to perform this automation for our purposes, in order to create a working “proof of concept” of automation as quickly as possible.

The Results: Setting the Stage for Wide-Scale Transformation

Overall, this process only took about 6 weeks to complete, and it delivered on the outcomes that we originally sought.

  • It produced practical automation. Operations analysts were able to immediately utilize these automations to install applications much faster and easier than they could following their legacy methods.
  • It created a standardized approach. Operations analysts from all 20+ of the client’s teams began to follow the same processes to complete these steps of the application installation procedure.
  • It reduced deployment times. Operations analysts saved a ton of time, and were able to focus on unifying and transforming additional components of their installation process in a collaborative manner.
  • It proved automation was possible. Most important, it showed everyone, including both developers and operations analysts, that we could work together to automate their core processes.

Now, these were big wins, and they laid a foundation that we built upon throughout the rest of our shared transformation efforts.

And yet, we don’t wish to paint too perfect of a picture here.

After all, this project was never designed to be perfect, and there were certain limitations baked into it that everyone had to accept to achieve these outcomes in the short timeframe that we did.

Let’s take a quick look at what was imperfect with our mainframe application automation, and why we knowingly left these issues in it.

The Key to Rapid Success: Setting Priorities and Overcoming Perfectionism

Off the top of our heads, here are a few technical issues, imperfections, or instances of incomplete functionality that were present within the first automation we created with this client:

  • We didn’t use most of the main functionalities provided by UrbanCode Deploy.
  • We didn’t do much automation of the code promotion and deployment subprocesses.
  • We didn’t do any automation at all for testing, documentation, and communication.
  • The infrastructure failed often and required constant assistance.
  • We couldn’t process multiple deployments at once due to initial designs of the JCL and REXX automations.
  • We didn’t standardize requirements for code development and promotion, leaving dozens of use cases during every deployment that needed individualization.
  • Overall, 90% of the installation process still required manual action to complete.

Make no mistake, these are sizable issues that we had to fix later in the ongoing mainframe transformation.

But for this initial project, we could live with them. We chose to let them remain for the first 6 weeks, and to laser-focus on achieving our top priority: simply demonstrating to the client that meaningful automation could be achieved at all.

This was a worthwhile trade-off, and it offers a valuable lesson: when you are attempting a huge transformation, you are always going to have to make trade-offs.

To make these trade-offs palatable, you will need to know what your priorities are and what you are truly trying to achieve at any moment, and you will need to know what issues you are willing to live with while you focus on achieving what matters most first.

This prioritization can be uncomfortable, but it’s essential if you want to move fast, and to shake up old ways of doing things.

Our client quickly came to see the benefits of this approach. They were used to following a rigid, slow, and perfectionist Waterfall approach to change (as are many mainframe teams).

But from this very first project, they began to understand the benefits of a more Agile methodology. They began to learn that you don’t need to get everything perfect to start, but that you can perform partial adjustments, and still produce concrete, tangible, incremental movements that build to a radical change in how your operations work.

Keeping the Ball Rolling: Next Steps

This initial project laid the foundation for the rest of our ongoing mainframe transformation effort with this client. It established the guiding principles that we have used for the last 18 months, and more importantly, it has made our massive transformation efforts look and feel much more manageable and achievable for all our stakeholders.

In the next article, we will explore the steps we took to build off our new foundation, and how we began to add more functionality to the client’s installation process and further improve their MVP pipeline.

If you would like to learn more about this project without waiting, then contact us today and we will talk you through the elements of driving a DevOps transformation for mainframe that are most relevant to your unique context.
