How to Select and Implement the Right Tools for Mainframe DevOps Transformation
In some ways, you are only as good as your tools.
With the wrong tools, you will struggle to construct efficient, effective processes.
With the right tools, you will have a much easier time building a smooth DevOps capability.
Unfortunately, many mainframe teams either use suboptimal tools or use the right tools in a suboptimal manner, making their job harder than it needs to be.
We faced this issue with a recent client.
For the past 18+ months, we have moved this client through a massive DevOps transformation of their mainframe application development capability.
As part of this transformation, we have upgraded a few of the foundational tools their teams leverage, and shown them how to improve the use of the foundational tools they have kept.
In this article, we will dive deep into this work. We will:
- Explain why it is so critical to have the right tooling in place for mainframe DevOps.
- Share our overall approach to selecting the right tooling for our clients.
- And show you how we approached tool selection, and new tool implementation, for three of this client’s most critical applications.
Let’s dive in.
The Story So Far…
This is Part 5 of a longer series that will tell the full story of our client’s transformation.
This client is a multi-billion-dollar technology company that employs hundreds of thousands of people, and uses a central mainframe to drive many of their critical actions.
When they came to us, they had been following 20+ year old legacy processes to develop applications for their mainframe. They partnered with us to transform their mainframe application development process, and to bring a modern DevOps approach to this critical area of their business.
So far, we have shared a top-level overview of this project, discussed the importance of building dedicated teams for mainframe DevOps, explained the initial automation we created for this client, and shown how we improved that initial automation over time.
In this article, we will shift our focus to the tools underlying each of these steps in our client’s transformation, and how we improved them in parallel to the more visible steps of their transformation.
As we begin, let’s take a quick moment to explore—and stress—the importance of selecting the correct tooling for mainframe DevOps.
Why Choosing the Right Tooling is So Important for Mainframe DevOps
Mainframe application development is already a highly tool-dependent function.
DevOps takes this tool-dependence, and brings it to the next level.
Think about it. DevOps is structured around multiple development phases. The actions within each of these phases are largely automated, and the different phases are linked together with the end goal of moving code from submission to deployment as smoothly, seamlessly, and touch-free as possible.
Within mainframe DevOps, you will need to use tools to:
- Automate each of the various actions taken within each phase.
- Track those actions as they are taken.
- Communicate those actions—and their results—to relevant team members.
- And establish connections between each of the different phases, so the code can move from one to the next.
In short: mainframe DevOps requires multiple tools. Each of these tools must meet a complex set of technical requirements that ensure it can perform its own tasks very well, and also operate as an effective element of a larger tool ecosystem.
However, as important as these technical considerations are when selecting the right tools for your mainframe DevOps capability, they are only the tip of the iceberg.
You must also consider a few “soft” requirements that play just as big a role in determining whether a tool will be a good fit for your environment.
Three Considerations When Selecting Tools for Mainframe DevOps
No mainframe DevOps tool operates in a vacuum.
Every tool operates within a complex web of other tools, users, and projects already in motion— none of which can be halted to accommodate a change in tech.
As such, any time you consider implementing a new tool, it’s important to first evaluate it along three core factors.
- Can the Tool Do the Job?
First, determine whether the tool can do its job from a purely technical perspective, and whether it will play nice with other tools. Core considerations include:
  - Does this tool have the features required to do the job?
  - Can this tool integrate well with the other tools used by the team in their DevOps processes?
  - Is this tool flexible and extensible enough to grow and evolve as the team's DevOps process grows and evolves?
- How Popular is This Tool?
Next, determine if the tool is well known and commonly used, and whether existing and new users will already know how to operate it. Core considerations include:
  - Is this a widely used tool around the world?
  - Do existing members of the team already know how to use this tool (reducing its implementation time)?
  - Would new people who join the team later already know how to use this tool (reducing their onboarding time)?
  - Does this tool have a lot of documentation and support behind it (reducing the time and effort required to learn it)?
- Can We Introduce This Tool in a Non-Disruptive Manner?
Finally, determine if you can switch the teams to the new tool without disrupting your existing projects, processes, and day-to-day operations. Core considerations include:
  - Will introducing this tool interrupt existing users, processes, and projects in motion? If so, are there ways to minimize the impact?
  - How much will this new tool change the day-to-day working life of users and management? Will they have to learn anything new?
  - Will introducing this tool require time-consuming management discussions, demand difficult changes to budget, or otherwise disrupt things "at the top"?
By running every tool you are evaluating through these questions, you will gain a good idea of whether or not it will be a viable replacement for an existing tool in your environment.
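One lightweight way to make these three factors concrete is a simple weighted scorecard. The sketch below is purely illustrative (the weights, factor names, and 0-10 rating scale are our assumptions, not a standard), but it gives a team a single comparable number for each candidate tool.

```java
import java.util.Map;

// Hypothetical weighted scorecard for the three evaluation factors above.
// Weights and the 0-10 rating scale are illustrative; tune them to your
// organization's priorities.
public class ToolScorecard {
    static final Map<String, Double> WEIGHTS = Map.of(
            "capability", 0.5,  // Can the tool do the job?
            "popularity", 0.3,  // How popular is this tool?
            "disruption", 0.2); // Can we introduce it non-disruptively?

    /** Ratings are 0-10 per factor; returns the weighted total. */
    public static double score(Map<String, Integer> ratings) {
        return WEIGHTS.entrySet().stream()
                .mapToDouble(e -> e.getValue() * ratings.getOrDefault(e.getKey(), 0))
                .sum();
    }
}
```

A tool that scores well on capability but poorly on disruption may still lose to a slightly less capable tool the team can adopt without derailing projects in motion, which is exactly the trade-off the weights make visible.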
But before you even start to evaluate alternatives, there’s one even more important and fundamental question to ask …
“Do I even need to change any of my tools, or will my existing tools work if I just use them a little more effectively?”
Our Approach: Start with What You Have, and Upgrade Over Time
When making a change, it’s always important to keep the end goal in mind.
In this case, the end goal is not to replace your tools for the sake of replacing your tools.
The end goal is a more complete, effective, and efficient mainframe DevOps capability.
And sometimes, that can be achieved by simply improving how you use the tools you already have.
When we first start working with a client to transform their mainframe DevOps capability, we almost always start by just using whatever tools they already have, even when we know those tools are technically not the best option available.
We do this for a few simple reasons. We know the client’s existing tools will at least have a baseline functionality to do their job. We know the client’s users will already know how to use these tools, so there’s no training required. And we know there will be little-to-no interruption of anyone’s day-to-day work, processes, or projects.
But most importantly, we know that if we use our client's existing tools, we can just get started transforming their mainframe DevOps capability.
We won’t have to wait for management approval of new tools.
We won’t have to go through the whole procurement process.
We won’t have to take days, weeks, or months to train their users.
We can just immediately begin to build out their automated DevOps pipeline.
And along the way, we gain real-world insight into what’s actually required to improve their mainframe DevOps capability.
We learn which of their existing tools will be sufficient over the long term. We learn which of their existing tools need to be replaced. And we build the data, credibility, and support we need to make a meaningful business case that validates any recommendation we make when we do eventually argue for a change in tooling.
We followed this exact approach with our recent client, and it allowed us to make intelligent choices—and persuasive arguments—regarding what to keep and what to change in their mainframe DevOps tooling.
Here’s how we did it.
Upgrading Tools for a Real-World Mainframe DevOps Transformation: First Steps
When we first started working with this client, they were utilizing the following fundamental tools:
- IBM UrbanCode Deploy (UCD): Our client used this as their base CI/CD tool, creating pipelines for code deployment across their different environments.
- Rational Team Concert (RTC): Our client used this as their team collaboration tool, primarily to track code changes. They weren't taking advantage of RTC's wealth of other features.
- Rational Quality Manager (RQM): Our client had selected RQM as their test management tool for our work together, though they were not using it prior.
True to our approach, at the start of our project we took these tools and used them to establish a stable pipeline. Over time, we came to see which of these tools we felt we could keep, and which we felt the client should replace.
Here’s what we came to see:
- UCD: We determined that UCD was a viable option that we would be able to continue to use sufficiently and sustainably throughout the life of our client’s mainframe DevOps capability. As such, we decided to keep it.
- RTC: The client’s high-level management actually decided to change from RTC to Jira for their organization as a whole, before we could make a recommendation to do so. However, even though we were not involved in this decision-making process, we would have made this same recommendation on our own.
- RQM: We determined that RQM would not be sufficient. We determined that TestRail would be a superior and more sustainable tool, and we communicated this to our client. Ultimately, the client agreed to make this change, and we implemented it.
Now, deciding which of these tools to keep, and which to replace, was one thing.
But actually making the switch was a whole other effort.
Let’s take a minute to dig into how we switched this client from RQM to TestRail.
We’ll dive into more detail regarding how we selected which tool to replace RQM with, how we approached implementing the new tool, and how we overcame the resistance from the client’s teams throughout the whole process.
How We Switched RQM for TestRail: The Tool Selection Process
Once we determined that RQM was not sufficient or sustainable for this client’s mainframe DevOps capability, we immediately began to both look for alternatives, and to make the case to their management teams that they needed to make a change.
This process took a lot of time, effort, and attention, and it played out over multiple phases, each with its own challenges.
Here’s what the whole process looked like.
Phase I: List Development
We provided our client with technical advice on why RQM wouldn’t work over the long term, and what they should seek in a new tool. We then provided them with a list of potential test management tools that met these requirements better than RQM. In this specific case, we selected tools that:
- Complied with the client’s security rules.
- Had enough existing plug-ins for UCD.
- Had well-documented REST APIs.
- Were widely used, and well known by the client's teams.
Phase II: Management Review
We then submitted our prioritized list of recommended replacement tools to the client’s point person responsible for their tooling decisions. She performed her own research, and presented everything to their management teams. We ultimately had to shorten our list of recommended tools, which allowed the client to reach out to each and agree on terms for trial licenses.
Phase III: Tool Trials
Our client’s management team ultimately approved two of the tools on our list. We created trial environments for both of those tools, and began to see how well they would serve our client’s needs. We defined and replicated the functionality we would use them for in a real world environment, made sure we could replicate all needed functionality, and gained as close to a real-world view as possible of which would better serve our client’s needs.
Phase IV: Tool Recommendations and Management’s Decision
After we tested both of the approved tools in their trial environments, we provided a detailed comparative analysis between them to management, which included our recommendation of which of the two tools would best serve our client’s needs. Management then reviewed our analysis and recommendation, and ultimately selected TestRail.
Once our client selected TestRail, we immediately helped them implement it.
How We Switched RQM for TestRail: The Tool Implementation Process
We guided our client through a dedicated process to swap RQM for TestRail in the least disruptive manner possible.
Our process followed these steps.
Step 1: Visioning
As we experimented with the trial version of TestRail (mentioned above), we created a documented vision for how our client’s teams would use the new tool in their real environment, and this documentation helped guide our implementation.
Step 2: Defining
We mapped the automation pipeline we had developed, and defined each of the different touchpoints where we would use TestRail. At each of these touchpoints, we also defined the specific functions TestRail would perform, and the specific integrations that we would need to establish to ensure it all worked together.
Step 3: Test Planning
We took our working test management solution that we had built in RQM—including everything from test scripts to notifications—and made sure that each of these functions would remain the same in TestRail. We created a checklist that included each of them and ensured each was replicated before we finished our migration.
Step 4: Scripting
We developed integration scripts for each of the test automation touchpoints we defined in Step 2. These included making sure TestRail would receive information about executed test suites by their IDs, showing the progress during test execution, and sending the results to the appropriate tool.
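The scripts themselves are client-specific, but the reporting half of this step can be sketched against TestRail's public REST API (v2). The `add_result_for_case` endpoint path and the default `status_id` codes below come from TestRail's documented API; the class, base URL, and IDs are illustrative assumptions of ours.

```java
// Sketch of the reporting side of a TestRail integration script. The endpoint
// path and status codes follow TestRail's public REST API (v2); the class
// itself, the base URL, and the IDs are illustrative.
public class TestRailReporter {
    // TestRail's default result statuses: 1 = passed, 2 = blocked,
    // 4 = retest, 5 = failed.
    public static final int PASSED = 1;
    public static final int FAILED = 5;

    private final String baseUrl;

    public TestRailReporter(String baseUrl) {
        this.baseUrl = baseUrl;
    }

    /** TestRail addresses individual results by run ID and case ID. */
    public String resultUrl(long runId, long caseId) {
        return baseUrl + "/index.php?/api/v2/add_result_for_case/"
                + runId + "/" + caseId;
    }

    /** JSON body for the POST; a real script would also set basic-auth headers. */
    public String resultPayload(int statusId, String comment) {
        return "{\"status_id\": " + statusId + ", \"comment\": \"" + comment + "\"}";
    }
}
```

In our actual scripts, builders like these fed an HTTP client that posted results as each automated suite finished, which is what kept TestRail's view of a run current during execution rather than only at the end.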
Step 5: Integrating
We integrated TestRail into the pipeline, and ensured it executed the suite of testing automations at the right time. With RQM, we used a plug-in to ensure this execution, but no equivalent plug-in was available for TestRail. Instead, we developed a Java handler for the test request queue, deployed the program module onto one of our servers, and included a script that adds test suites to the queue.
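We can't reproduce the client's handler here, but its core mechanic is simple: pipeline scripts append suite IDs to a queue, and the handler drains that queue and triggers each suite in order. A minimal sketch, with all names being our own illustrative assumptions:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Minimal sketch of a test-request queue handler (all names illustrative).
// Pipeline scripts enqueue TestRail suite IDs; the handler drains the queue
// and triggers each suite in FIFO order.
public class TestRequestQueue {
    private final BlockingQueue<Long> queue = new LinkedBlockingQueue<>();
    private final List<Long> executed = new ArrayList<>();

    /** Called by the pipeline-side script to request a suite run. */
    public void submit(long suiteId) {
        queue.offer(suiteId);
    }

    /** Drains all pending requests and "runs" each suite in order. */
    public List<Long> drainAndRun() {
        List<Long> pending = new ArrayList<>();
        queue.drainTo(pending);
        for (Long suiteId : pending) {
            // The real handler would launch the automation suite here and
            // report progress and results back to TestRail.
            executed.add(suiteId);
        }
        return List.copyOf(executed);
    }
}
```

In the deployed module, the drain loop runs continuously on the server; the in-memory queue above stands in for whatever persistent queue your environment provides.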
Step 6: Documenting
We developed instructions for how to use TestRail within this pipeline, and shared them with our client’s teams. These instructions included—in detail—what we had changed and how team members could best use the new tool.
Step 7: Educating
We also performed dedicated educational sessions with relevant team members to ensure everyone understood how to use TestRail, and to answer any questions they might have.
Step 8: Handing-Off
Finally, we worked with our client’s teams to create and share full documentation for how TestRail worked within their pipeline. We made sure this documentation would help them self-troubleshoot any issues that they encountered, and that it would help them rapidly onboard new users into their project.
Overall, this entire change from RQM to TestRail took about two months to complete. It required two weeks to research the client’s environment and design our plan, one month to define and implement the technical solution, and two weeks to educate and train their teams on TestRail.
It also took about two months to move this client from RTC to Jira, and overall we consider this timeline representative of what to expect when replacing an existing tool with a superior alternative within a mainframe DevOps capability.
How We Switched RQM for TestRail: Overcoming Objections
As could be expected, we did face some resistance from our client’s existing teams and management when we made the switch to TestRail (and, to a lesser extent, when we switched to Jira).
Our client’s front-line users didn’t have any specific issues with either TestRail or Jira; they just didn’t want to change how they operated. They were used to RQM and RTC, and they didn’t see the point in changing.
We saw this coming, so we preempted their concerns by beginning to educate our client’s users on TestRail and Jira long before the migration was complete and they would be forced to start using the new tools. We also did everything we could to minimize the differences between how they used their existing tools and how they would use their new ones. For example, we used the same names for fields and replicated the same workflows with the highest degree of fidelity possible.
Our client’s managers predictably worried about the costs of the new tools. To ease their concerns, we presented multiple tool options at multiple price points. We also calculated the measurable benefits of each tool in order to give management a meaningful sense of what ROI they could expect from making the change.
In addition, we used the new tools to deliver things management liked but didn’t have with the old tools, such as more informative dashboards and statistics, and we fixed design mistakes from our initial build with the old tools. The result was a more stable pipeline overall, making the benefits of switching as tangible and visible as possible.
Overall, we were able to disarm both sets of resistance with relative ease, and over time both front-line users and management came to clearly see the advantages of using TestRail and Jira over their predecessors.
Next Steps: Our Tool Analysis, In Depth
In our next article, we will dive even deeper into the subject of tooling.
We will provide you with a thorough analysis of each tool we discussed in this article. We will walk you through the pros and cons of each, and offer our field-tested perspective on which of these tools might be an ideal fit for your specific environment and needs.
If you would like to learn more about this project without waiting, then contact us today and we will talk you through the elements of driving a DevOps transformation for mainframe that are most relevant to your unique context.