Completing the DevOps Pipeline for Mainframe: Designing and Implementing Test Automation

December 18, 2023 

Every transformation is a journey, and the bigger the transformation, the longer the journey.

We recently worked through an 18+ month transformation for a client of ours. We worked with them to modernize their legacy mainframe application development capability, and to give them a truly modern, automated, DevOps pipeline for mainframe.

During this transformation, we modernized every stage of their pipeline, starting with a few “proof of concept” automations, and then gradually automating larger and more critical stages of their development process.

In this article, we will dive deep into how we created the final (and in some ways the biggest and most important) automation for our client: the automation of their testing process.

To do so, we will:

  • Explain the big problems created by manual testing processes and why it’s so important to automate testing to correct those problems.
  • Explore how we designed and created our client’s new test automation process, and how we overcame their team’s resistance to adopting it.
  • Outline what mainframe automation testing now looks like for our client, and how implementing mainframe test automation has changed their teams’ day-to-day life for the better.

Let’s dive in.


The Story So Far…

This is Part 7 of a longer series that will tell the full story of our client’s transformation.

  1. Bringing DevOps to Mainframe Application Development: A Real-World Story
  2. The Key to Large-Scale Mainframe DevOps Transformation: Building the Right Dedicated Team
  3. Choosing the Right First Steps in DevOps Mainframe Transformation
  4. Improving the Initial Mainframe Automation and Creating a “Two Button Push” Process
  5. How to Select and Implement the Right Tools for Mainframe DevOps Transformation
  6. Picking the Right Tools: A Deep Dive Into the Most Popular Options for Mainframe DevOps
  7. Completing the DevOps Pipeline for Mainframe: Designing and Implementing Test Automation
  8. Making Change Stick: How to Get Mainframe Teams Onboard with New Strategies, Processes, and Tools

This client is a multi-billion-dollar technology company that employs hundreds of thousands of people, and uses a central mainframe to drive many of their critical operations.

When they came to us, they had been following 20+ year old legacy processes for mainframe application development. They partnered with us to transform their mainframe application development process, and to bring a modern DevOps approach to this critical area of their business.

So far, we have shared a top-level overview of this project, discussed the importance of building dedicated teams for mainframe DevOps, explained the initial automation we created for this client, showed how we improved that initial automation over time, and discussed how we helped them select and onboard new, modern mainframe tools.

In this article, we will discuss the next and final step in this client’s transformation.

The Missing Piece of This DevOps Mainframe Transformation

Ultimately, every step in this client’s transformation drove towards a single purpose: we were helping them build a fully automated deployment pipeline for their mainframe applications.

Once completed, this pipeline would be able to promote code from our client’s DEVELOPMENT environment to their PRODUCTION environment without any manual steps or intervention.

Once a developer has completed their code in DEVELOPMENT, all they have to do is “push” one “button” (this could be a GUI, a template submission script, or something else), and the pipeline will take all the necessary steps to deploy the code to PRODUCTION.

At this point, we were very close to achieving this “one button push” DevOps pipeline for mainframe.

We had already moved our client through a dramatic transformation. They had already adopted many DevOps principles and tools, and their pipeline was far more automated than before.

We had already built a DevOps pipeline for mainframe and integrated it with RTC and Slack, and developers were moving code through the pipeline with two button pushes. Our client was migrating changes between their environments with minimal intervention, and they were already leveraging a far more agile and efficient mainframe application development process.

However, they still did not have a fully automated deployment pipeline.

They were still performing manual testing. Developers still needed to take their code from DEVELOPMENT, manually submit a script, and send their code to a TEST environment. There, a QA lead, with the help of the operations team, would manually test it before moving it to PRODUCTION.

In short, before we could complete our client’s DevOps pipeline for mainframe, we also had to automate testing.

Closing the Gap: Why It Was So Important to Create Mainframe Test Automation

Testing is a big part of the mainframe application development pipeline, and our client was still following a long and winding manual process. It looked like this:

  • Step 1: When code was installed on TEST, the testing team received a notification and built a test plan for the code change.
  • Step 2: After the test plan was ready, the assigned tester would prepare a list of technical steps required. This list included instructions, such as what data should be loaded to the TEST database and which application parts should be run.
  • Step 3: The list was then sent to the operations analyst via email or RTC record comments. There was no standard communication procedure, which made the process difficult to manage, even within the application team, especially when someone was away from work and others had to pick up and continue the work for them.
  • Step 4: The operations analysts would then complete the work whenever they had the time to do so. Testing activities were not scheduled in the operations analysts’ calendars and were only completed in an ad hoc manner, which meant that response times varied depending on the operations team’s workload and availability.
  • Step 5: Once all of the testing instructions were executed, the operations analyst sent a notification to the tester, who checked the output data.
  • Step 6: Finally, when all of the tests were completed, the tester would provide permission to send code changes to production.

This was the best-case scenario.

In the worst-case scenario, there was a lot of back-and-forth before they even got to step one.

You see, our client’s developers found it so easy to ship code to TEST that they began to send changes without running them on a development system first. When testers tried to run this code they received syntax errors. These errors added up and eventually began to drive our client’s testers crazy. Over time, they began to just send stories back to development without even starting to test them.

This growing point of friction created a lot of back-and-forth between developers and testers that wasted a ton of time and attention, and made manual testing even less efficient.

A Big Problem: Why Manual Testing Wasn’t Working

Our client’s manual testing processes created a lot of problems for a lot of stakeholders.

  • Their Process Was Too Slow: Even small tests consumed a lot of working hours and took a long time to complete. Operations analysts had to schedule a time slot for each of their manual testing actions. These analysts were busy, and tests would simply stack up untouched until the right analyst found some free time in their schedule. When the analyst did complete their tasks, they would then need to go back and double-check every action they took to make sure the test completed properly.
  • Their Communication Was Fragmented: Every test required a flood of messages. Analysts had to explain their results. Testers had to explain their approach. To make matters worse, none of these communications were standardized: every tester and analyst had their own communication processes. Some preferred chat, others email, others left notes in RTC records. This flood of communication and lack of standardization made the testing process slower and more complicated than it needed to be, and made it near-impossible for new testers or analysts to join a test mid-stream.
  • They Were Not Documenting Their Tests: They didn’t use any test management tool. Instead, they stored test scenarios in various documents scattered across different computers, team folders, and repositories. Some scenarios weren’t documented at all, and lived only in the mind of the tester who created them (often built on high-barrier-to-entry technologies). This made every test project even more complex and confusing, and made it near-impossible for new team members to join projects and get up to speed quickly on how things were being done.
  • It Did Not Fit Our DevOps Vision: Manual testing just did not fit into the DevOps transformation we were guiding this client through. Manual testing caused problems for every other automation we created, and it prevented the client from seeing and deriving full value from the automated pipeline we were building.
  • Worst of All, Test Quality Suffered: Because tests were not documented and were always performed ad hoc, there was no standard quality level that code had to pass to move to production. Different teams would test code at different levels of detail, some teams would skip tests that took too long, and sometimes teams would skip testing entirely because they judged the risk of the new feature failing to be low.

Overall, it was clear to us that our client could not continue to perform manual testing. We needed to replace their legacy processes with automation.

Here’s how we did it.

Step-by-Step: How We Created Mainframe Test Automation

We knew that automating our client’s test processes would not be easy.

We were going to have to develop a new test strategy, deploy new technical solutions to drive the automation, and integrate it all into their existing DevOps pipeline for mainframe. In addition, we were going to have to perform a lot of training and education for their testing and analyst teams, as our new approach was going to be very different from their existing processes.

But rather than get overwhelmed or delay the process by overthinking it, we followed a quick, simple, and adaptable step-by-step process.

Step One: Collect Test Functions to Automate

First, we focused on creating a Minimum Viable Product (MVP) for mainframe test automation.

To do so, we followed a process that was very similar to how we collected automation points for the deployment process. We simply collected and analyzed the most common actions that testers had been sending to their operations teams.

We worked closely with operations on this step to make sure we selected test actions that were frequently submitted, or that had to be performed on every single code change before installation. Each of these actions is common in most test frameworks, but we tailored them specifically for mainframe. They included:

  • Executing JCL jobs
  • Copying datasets
  • Starting IBM® Z® Workload Scheduler (formerly known as Tivoli Workload Scheduler or TWS) applications
  • Executing IBM® Db2 for z/OS® utilities
  • Other common test actions

We then created a document that explained the functionality of every requested test. If we saw multiple ways to perform the same test, we selected the single best approach and recommended that everyone on the client’s team follow it.

This helped us unify and standardize our client’s testing processes, and these documents served both as the first iteration of the automation and as the basis for later improvements when we implemented mainframe automated testing in the final DevOps pipeline.

Step Two: Create Mainframe Test Automation Scripts

Once we had collected these common and required test actions, we created a test script for each of them. Each test script included the specific parameters a tester would use to execute it, and we designed the scripts so they could be added to as many test suites as needed and be reused and executed over and over again. From there, when a tester created a test suite, they would choose the test scripts they needed and gather them in a strictly defined order.

Every time there was a code change, the tester would need to create a test suite by gathering the appropriate test scripts, and then providing the appropriate test suite ID to the automated deployment process.
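To make this concrete, here is a minimal, hypothetical sketch of what one such reusable script could look like for the “Executing JCL jobs” action. The script name, template path, placeholder, and helper are ours for illustration, not the client’s actual assets:

    #!/bin/sh
    # run_jcl_job.sh -- hypothetical sketch of a reusable "Executing JCL jobs"
    # test script. The tester supplies only parameters; no coding is required.
    TEMPLATE="$1"        # e.g. templates/copy_dataset.jcl
    DATASET="$2"         # tester-supplied parameter, e.g. TEST.INPUT.DATA

    # Render the JCL template by substituting the tester's parameter for the
    # &DATASET placeholder.
    sed "s/&DATASET/$DATASET/g" "$TEMPLATE" > /tmp/job.jcl

    # Hand the rendered JCL to the interlayer for submission (see Step Three).
    ./submit_to_mainframe.sh /tmp/job.jcl

Because each script is parameterized rather than hard-coded, the same script can appear in any number of test suites with different execution variables.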

Step Three: Integrate Into the Pipeline

To integrate test automation into the pipeline, we used the existing UCD-RQM plug-in. We configured the RQM server URL and the adapter, which was installed on the client’s internal Windows server.

We linked the scripts to RQM, and to simplify their usage we created a “wrapping” test case for each suite as a usage example. Each of these “wrapping” test cases was a test case with a linked test script and an execution variables section that we created specifically for that script and populated with its parameters.

All test scripts were executed via an RQM command-line adapter. This adapter can be installed on Windows® or Linux® machines, but it was not designed for mainframe, so we had to create an interlayer server.
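To illustrate the interlayer’s role: one standard z/OS facility that a script on a Windows or Linux machine can use to hand work to the mainframe is the FTP server’s JES interface, which accepts an uploaded file as JCL and submits it as a batch job. This is a hedged sketch, consistent with the FTP-based flow described later in this article; the host and credential names are placeholders:

    #!/bin/sh
    # submit_to_mainframe.sh -- sketch of an interlayer submission step.
    # With SITE FILETYPE=JES, the z/OS FTP server treats an uploaded file
    # as JCL and submits it to the JES input queue.
    # MFHOST, MFUSER, and MFPASS are placeholders for the client's values.
    JCL_FILE="$1"

    ftp -n "$MFHOST" <<EOF
    user $MFUSER $MFPASS
    quote SITE FILETYPE=JES
    put $JCL_FILE
    quit
    EOF

On a successful PUT, the FTP server reports the JES job ID, which the interlayer can then use to poll for completion and retrieve the job output.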

Step Four: Documentation

Our mainframe test automation process introduced a lot of new functionality, and we had to describe it all in detail for testers, as well as for developers, business analysts, and anyone else who might be involved in the process.

To document the new process, we wrote a user guide where we described:

  1. The test automation approach that we used
  2. Each test script that we created and how to run it
  3. How to gather test cases into test suites
  4. How to send prepared tests to the deployment pipeline to be executed

We will discuss our approach to documentation in greater detail in our next article. For now, just note that we didn’t show anyone the test automation framework we developed until our user guide was ready, because we knew it would look very new and raise a lot of questions.

Step Five: Education and Support for the Teams

Finally, we made sure every relevant team member learned their role in the new mainframe test automation process. We will discuss how we did this a little later in this article.

When we finished, we had created the following mainframe test automation architecture:

  1. The test script is triggered from RQM.
  2. Input and output information is stored, and requests to the mainframe are prepared on an interlayer machine.
  3. The required scripts on the mainframe are executed.
  4. The results go back from the mainframe to the interlayer machine, and then back to RQM, where they are presented in a graphical interface.

Our Mainframe Test Automation Architecture in Action: Real-World Example

Here is a simple example of this architecture in action. We will use the example of running a TWS application (a sketch of the interlayer side follows the list):

  • The tester provides an application name as input.
  • The test script receives the appropriate parameter and pastes it into the JCL sample.
  • The test script then moves the generated JCL to the JES input queue, and waits until the application is completed.
  • We use the EQQYCAIN utility to control the “TWS current plan”.
  • The JCL job adds a job net to the “current plan” and removes its dependencies.
  • The interlayer shell script generates a timestamp and provides it to the JCL job (alongside the application name).
  • We receive logs of the executed steps using FTP. The interlayer script uploads them to RQM, and the tester can see all of the required information on the test case execution results page.
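Here is a compressed, hypothetical sketch of how these steps could hang together on the interlayer. The helper script names are ours, and the EQQYCAIN control statements are deliberately elided, because they depend on the specific current-plan operation being performed:

    #!/bin/sh
    # run_tws_application.sh -- sketch of the interlayer side of the TWS example.
    APPL="$1"                      # application name supplied by the tester
    TS=$(date +%Y%m%d%H%M%S)       # timestamp generated on the interlayer

    # 1. Paste the application name and timestamp into the JCL sample. The
    #    sample runs EQQYCAIN to add the job net to the TWS current plan
    #    and remove its dependencies (control statements elided).
    sed -e "s/&APPL/$APPL/g" -e "s/&TS/$TS/g" templates/tws_run.jcl > /tmp/tws_run.jcl

    # 2. Move the generated JCL to the JES input queue (see the earlier sketch).
    ./submit_to_mainframe.sh /tmp/tws_run.jcl

    # 3. Wait until the application completes, then pull the step logs back
    #    over FTP and upload them to RQM for the tester to review.
    ./wait_for_completion.sh "$APPL" "$TS"
    ./fetch_logs_to_rqm.sh "$APPL" "$TS"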

While this process is smooth, seamless, and automated, we still encountered substantial resistance from our client’s teams when implementing it.

Here’s how we overcame that resistance.

Overcoming Resistance: How We Got Our Client’s Teams Onboard

We faced a lot of resistance to test automation from the client’s teams.

On the one hand, this was normal: their teams were very accustomed to doing things the old way, and they were always reluctant to adopt any new process that we developed for them.

On the other hand, they were particularly critical of our test automation framework, and even more resistant to adopting it than they had been to our previous processes.

Now, their resistance was somewhat understandable. They could see that our mainframe test automation process contained good ideas that would improve their existing processes and give them needed new functionality. But some of their team members just didn’t believe that our test automation process would work, simply because they had never automated their test actions before, and they had never seen a similar solution in action anywhere else.

To minimize this resistance and to drive adoption, we took two steps.

First, We Made Our Mainframe Test Automation Solution as Technically Simple as Possible

We minimized the technical skills needed to perform test automation. We made sure the process required no scripting, coding, code repositories, or Version Control System (VCS), so that testers could create test scenarios without performing anything too technical. We also made sure our test automation solution was easy to expand and extend with new functions. Finally, we also created standardized structures for business requirements and test plans.

As a whole, we did everything we could to make our test automation solution flexible and simple, to formalize each test’s business requirements, and to make tests as easy to automate as possible.

We made sure there were no more ad hoc “Looks good, we can send it to production!” moments. Instead, we focused on creating an approach where every decision is based on a structured test report that aligns the technical and business parts of the process, and where anyone at any technical level can follow the process to the letter.

Second, We Provided Abundant Documentation, Training, and Education

We knew that test automation would seem very different to our client’s teams, so we did everything we could to get them knowledgeable, capable, and comfortable with the process.

To do so, we developed detailed documentation for everything, and offered a two-week training program that included separate sessions for each team (depending on their location).

These training sessions included:

  • Test automation theory
  • Technical design of the module
  • Examples of real automation cases based on each team’s input
  • Practical assistance for everyone during their first time using RQM

We had to provide much wider and more detailed documentation, training, and education for test automation than any other part of the client’s DevOps transformation, but at the end of the day it was worth it. The client’s teams learned and adopted test automation, and they’ve generated a wide range of benefits because of it.

The New Test Automation Process: What It Looks Like

After going through this work, our client gained a much simpler, more streamlined, and more effective test automation process. They no longer need to involve their operations team at all, their testers are now the only ones involved, and the whole process proceeds from one “button push”.

Our client’s new test automation process looks like this:

  1. A tester builds a test plan. They create a detailed plan that can be implemented within the test automation process. They have to perform this work carefully, as a badly designed plan will cause problems in the automation process and create more work for everyone.
  2. They prepare the input data. Previously, they loaded test data from remote sources. Now, they load test data from files attached to test cases. This isolates the testing process from any unpredictable changes to the data.
  3. They create test cases. Because test cases can be reused, the tester sometimes does not need to create a test case from scratch, and can go from point 1 directly to point 4 in the process. In that case, they just copy the appropriate template or sample and fill in the execution variables that match their test plan’s needs.
  4. They gather the test cases into a test suite. They must create the test suite before the code is ready, primarily using pre-defined test cases and test scripts.

  5. They provide the suite ID within the package. They send the test suite ID to the developer, who then sends their code with the ID in it as soon as the code is ready (a sketch of how the pipeline consumes this ID follows below).
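To show how the suite ID might be consumed downstream, here is a minimal, hypothetical sketch of the pipeline’s test stage. The property name and the trigger helper are illustrative only; in the client’s actual pipeline, this step was handled through the UCD-RQM plug-in:

    #!/bin/sh
    # pipeline_test_stage.sh -- hypothetical sketch of the test stage.
    # Reads the suite ID that travels with the package and asks RQM to
    # execute that suite before anything moves toward PRODUCTION.
    SUITE_ID=$(grep '^test.suite.id=' package.properties | cut -d= -f2)

    if [ -z "$SUITE_ID" ]; then
        echo "No test suite ID found in the package; failing the stage." >&2
        exit 1
    fi

    # Placeholder for the plug-in step that starts suite execution in RQM
    # and waits for a pass/fail verdict.
    ./trigger_rqm_suite.sh "$SUITE_ID"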

What They Got: Benefits of Test Automation

Our client’s test automation process is now much faster and more efficient than their prior manual process, and it corrected the big problems in that prior process.

  • They now perform tests much faster. Previously, teams spent several days moving code through the testing process. Now, with their DevOps process and test automation in place, their teams can move any code change from DEVELOPMENT, through testing, and to PRODUCTION within an hour. Their teams have grown used to fast deployments and now pay close attention to fixing any friction in the system: whenever they encounter even a short delay, they do everything they can to find the issue and fix it so that deployments run smoothly.
  • Communication has greatly improved. They no longer waste time performing endless communications between teams. Many communications have been removed from the system, and most remaining communications have been standardized.
  • All tests are being documented. All test cases are now stored in a test management tool (RQM). Now, every time there’s a code change, the system stores a link to the test suite within the test management tool. This allows our client’s teams to review test plans and test results for every code change.
  • It filled the gap in our DevOps vision. With test automation, we filled the final gap in the client’s DevOps pipeline. Tests are now automated and built into the pipeline, which closed the gap between test and production and moved our client from a “two button push” to a complete “one button push” process.
  • Most important, test quality improved. Our client’s teams can no longer skip any testing steps, or skip testing as a whole. Test automation takes care of those problems and records all test actions in an RTC record for every code change. All actions are standardized and now require more careful planning, providing greater assurance that each action is completed properly and that code moves to PRODUCTION without issues.

By designing and implementing test automation, we have closed the final gap in our DevOps pipeline. Our client can now run automated tests as soon as they complete development of new code. All tests will be executed automatically, and then pushed to PRODUCTION. There is no longer any delay between stages, and there is no need for manual intervention to move the package through the pipeline.

Now, there are additional considerations. For example, there are real users in the PRODUCTION environment, so you can’t push changes to that environment at any given moment. Our client’s teams still need to schedule the right date and time to promote code changes to PRODUCTION. To address this point, we developed additional scripts to place a code change on hold after it has passed its test, and before its deployment.
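As an illustration, such a hold can be as simple as a gate script that waits for the team’s approved promotion window before releasing the change. Everything here is a hypothetical sketch, not the client’s actual script:

    #!/bin/sh
    # release_gate.sh -- hypothetical hold-and-release gate before PRODUCTION.
    # The change waits here after passing its tests until the scheduled
    # promotion window for this application arrives.
    WINDOW_FILE="deploy_windows/$1"   # per-application window, e.g. 202312182300
    NOW=$(date +%Y%m%d%H%M)

    until [ -f "$WINDOW_FILE" ] && [ "$NOW" -ge "$(cat "$WINDOW_FILE")" ]; do
        sleep 300                     # re-check every five minutes
        NOW=$(date +%Y%m%d%H%M)
    done
    echo "Deployment window open; releasing change to PRODUCTION."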

Yet aside from these small subtleties and tweaks, our client’s new automated pipeline does exactly what it was intended to do, and now provides a true “one button push” solution.

Final Point: Diving Deep Into Our Training, Education, and Documentation Process

In our next and final article in this series, we will give you a deep dive into how we produce documentation, conduct training, educate our client’s teams, and overall make sure that our new processes are adopted and performed.

But if you would like to learn more about this project without waiting, then contact us today and we will talk you through the elements of driving a DevOps transformation for mainframe that are most relevant to your unique context.
