Making Change Stick: How to Get Mainframe Teams Onboard with New Strategies, Processes, and Tools
Change is almost never easy.
You can build the best new strategy…
You can develop the best new processes…
You can select and implement the best new tools…
You can create a new way of doing things that is undeniably better than your old way…
And you might still have a hard time driving change, bringing your people onboard with your new strategy, and getting them to use the new processes and tools you give them.
We faced this challenge again and again with a recent client. We moved them through an 18+ month transformation of their mainframe application development process, and every step of the way we faced significant resistance to change from their existing team.
In this article, we’ll explain how we overcame this resistance.
We will explore:
- Why standard approaches to end user adoption and change management are not enough to drive big transformations in legacy fields like mainframe.
- The new approach we developed to drive effective change and ensure end user adoption, built on two pillars: detailed user guides and extended training.
- How we overcame resistance from our client’s management, and how our new approach has changed their day-to-day life for the better.
Let’s dive in.
Table of contents
- The Story So Far…
- Internal Resistance Makes Sense: Why Mainframe Teams Don’t Like to Change
- The Myth of Natural Change: Why Common Approaches to End User Adoption are Not Enough
- Pillar One: Detailed User Guides
- Pillar Two: Extended Training
- What’s Changed: Getting Buy-In and Driving Results
- The Road Ahead
The Story So Far…
This is the final part of a series we wrote to tell the full story of our client’s transformation.
This client is a multi-billion-dollar technology company that employs hundreds of thousands of people, and uses a central mainframe to drive many of their critical actions.
When they came to us, they had been following 20+ year old legacy processes to develop applications for their mainframe. They partnered with us to transform their mainframe application development process, and to bring a modern approach and DevOps culture transformation to this critical area of their business.
In our previous articles, we explored every step of this transformation — from how we created a dedicated DevOps team for our client to how we automated every stage of their pipeline.
In this final article, we will describe in detail how we drove every one of these changes and built a DevOps culture, despite the significant resistance we faced from our client’s existing mainframe teams.
To begin, let’s briefly touch on why we faced so much resistance in the first place.
Internal Resistance Makes Sense: Why Mainframe Teams Don’t Like to Change
Let’s be clear about one thing — our client’s mainframe teams were filled with smart, capable professionals who wanted to do their job well and who wanted to do everything they could to help their company succeed.
They were not resisting the changes we proposed out of apathy, or because they wanted to sabotage their company, or because they couldn’t understand the changes we were driving.
They were just used to doing things a certain way, and they felt their way was the best way.
This is perfectly natural, normal, and understandable. Most of our client’s team members had been working in mainframe for decades. They had their own legacy strategies, processes, and tools that they knew worked just fine. We were asking them to do their jobs in a brand-new way, and it made sense that they felt skeptical and resisted the changes we were driving.
What’s more, the changes we proposed were very modern. Most of our client’s team members had never used our new strategies, processes, and tools before, and had not even heard of anyone else using them. They had no proof that what we were proposing would work, let alone provide a better alternative to the tried-and-true methods they had always used.
As such, we needed to find a way to convince them to give our new methods a fair shot. And we quickly learned that we couldn’t rely on standard methods of end user adoption and DevOps culture transformation to drive the changes we were proposing.
Here’s why.
The Myth of Natural Change: Why Common Approaches to End User Adoption are Not Enough
Now, this project was not the first time we had transformed a client’s mainframe application development capability. We had already driven multiple large and small transformations for many clients, and along the way we had learned a hard truth — common approaches to end user adoption rarely work to drive real change.
You see, most common approaches to end user adoption and large-scale change operate from the same misconception. They assume that end users will naturally adopt whatever new strategies, processes, and tools you give them, and that this adoption will automatically boost team productivity and performance.
We have seen this misconception play out across multiple vectors, with multiple clients, and from multiple schools of thought.
For example, we’ve led transformations at many enterprises. In almost every case, management believed that we only needed to demo our new processes and tools in front of their end users; those users would then know exactly how to use them and would teach the rest of the team to do the same.
And big, slow, legacy enterprises are not the only ones who believe in this myth of natural adoption. The Agile Manifesto’s principle of valuing “working software over comprehensive documentation” is often read to mean that creating detailed documentation is a waste of time and that building the DevOps pipeline itself is enough to ensure adoption.
Over the years we have stopped believing in this myth that end users will just naturally adopt whatever changes you give them. We have learned that end users need a lot more hands-on guidance to adopt even small changes — and that they need a lot of convincing to adopt the sort of large-scale transformations that we were driving this particular client through.
However, at the start of this transformation, our client’s management asked us to stick fairly close to the conventional model of end user adoption. At their request, we simply ran the standard one or two demo sessions every time we released a new process or deployed a new tool, and hoped that it would be enough this time.
But this approach was not enough for the client’s end users, their developers and testers, to learn the new processes and tools we were giving them. What’s more, because they were skeptical about the whole concept of bringing DevOps and a culture transformation to their mainframe teams, they didn’t share what they learned from our sessions with the rest of their teams, and what little they did pick up remained siloed and fragmented.
In short: by following standard approaches with this new client, old-timers were not learning our new approach, newcomers were not hearing what we were teaching, and our client’s teams often forgot to use their new DevOps pipeline altogether.
It was a painful reminder that the standard approach to driving change management and end user adoption just doesn’t work. Thankfully, we were able to take the poor results of this initial approach, and show our client’s management team that the standard approach they requested was resulting in:
- Slow adoption of new processes and tools. Because end users didn’t really know how to perform their new processes or use their new tools, they were dragging their feet on adopting our new methods, and kept doing things the old way for as long as they felt they could get away with it.
- Excess burden and dependency on external support teams. We were spending a lot of our billable hours answering the same end user questions over and over again — and we worried that our end users wouldn’t be able to use their new processes and tools if we were no longer there to help them.
We presented these issues to our client’s management, and convinced them that we needed to develop a new approach to end user adoption — one that would ensure their teams would actually learn and use the new processes and tools we gave them.
We eventually developed a new approach that revolved around two core pillars:
- Pillar One: Detailed User Guides
- Pillar Two: Extended Training
Here’s what each pillar looks like, in-depth.
Pillar One: Detailed User Guides
Why We Developed Detailed User Guides
First, we saw that our client’s teams were not learning everything they needed to know from our one or two demo sessions. We designed these sessions to be very thorough and to cover each new topic end-to-end, but no matter what we did, our client’s teams kept coming to us with follow-up questions on how to use the new processes and tools we gave them.
Seeing this, we realized that end users just need a lot more detailed information on how to leverage every new tool and process they receive, and they need to be able to refer back to this information on-demand when they are first learning their new way of performing their work.
This led us to create the first pillar of our new approach to end user adoption: heavy documentation through detailed user guides.
We began to create highly detailed user guides every time we completed one piece of our solution and one stage of our client’s transformation. For example, we created detailed guides for test automation, for the mainframe artifact deployment pipeline, for Jira integration, and for pretty much every other change that we drove.
We created these user guides for all team members — developers, testers, business analysts, etc. — and made sure that everyone in every team could understand the entire new process or toolset even if they were not directly using it.
What We Included in Our Detailed User Guides
We populated each of our user guides with every piece of information related to the process or tool that we could think of. In addition, we collected every support question we received for each process or tool and added it to the relevant user guide. Over time, we developed a standard format for our user guides, which included the following information, structured in the following manner:
- A Theoretical Explanation of the New Approach: For example, for test automation, we explained why automated testing is important for maintaining application quality. We can’t just say, “This code looks OK, I guess it is correct.” Instead, we need to prepare inputs for every possible case and define the outputs we would consider correct for each one. As a result, the main steps of our testing process are loading the test input, executing the application, and comparing the result against its acceptable outcome (a minimal sketch of this idea follows this list). In this user guide we also explained the fundamentals of Test-Driven Development (TDD).
- A Step-by-Step Guide on How to Use the Functionality: For example, for the start of the pipeline, we explained how to configure code review requirements, how to create test cases in test management tools, and how to follow our new Jira workflow.
- The Most Common Mistakes Users Can Make, and How to Fix Them: For example, when starting the pipeline, a user might run it without filling in the Jira fields properly; for instance, they might leave out the expected production installation time. If this happens, the pipeline will fail, so the user guide explains the fix through field validation, and we duplicated this information in the FAQ (see the second sketch after this list). For another example, with test automation, during moments of high testing activity the test automation instance can become unavailable directly from RQM while still being available from UCD. We highlight this in our FAQ and recommend that users repeat their attempt from UCD rather than report it as a bug.
- A Description of Reports for Test Automation and Code Review, Including Tips on How to Find the Root Cause of a Failure: For example, the code review report for IBM DataStage packages consists of several HTML files. Each file shows the result of reviewing one IBM DataStage job, and each job report has several sections. Our user guide describes this structure and links these reports together to make them easy to generate and understand.
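To make these examples more concrete, here are two minimal sketches in Python. They are illustrations only, not the client’s actual framework or pipeline code: the file layout, the executable name, and the field names (such as expected_production_installation_time) are hypothetical.

The first sketch shows the core test automation idea described above: load a prepared test input, execute the application, and compare the actual result against the outcome we consider acceptable.

```python
import subprocess
from pathlib import Path

def run_test_case(case_dir: Path) -> bool:
    """Run one test case and report whether its output matches the expected result."""
    input_file = case_dir / "input.txt"        # prepared test input for this case
    expected_file = case_dir / "expected.txt"  # the output we consider correct

    # Execute the application under test with the prepared input.
    completed = subprocess.run(
        ["./application-under-test", str(input_file)],  # hypothetical executable
        capture_output=True,
        text=True,
    )

    # Compare the actual result against the acceptable outcome.
    return completed.stdout.strip() == expected_file.read_text().strip()

if __name__ == "__main__":
    for case in sorted(Path("test-cases").iterdir()):
        print(f"{case.name}: {'PASS' if run_test_case(case) else 'FAIL'}")
```

The second sketch shows the kind of field validation mentioned in the common-mistakes example: check that the required Jira fields are filled in and fail fast with a clear message, instead of letting the pipeline fail partway through.

```python
# Hypothetical required fields; a real pipeline would read them from the Jira issue.
REQUIRED_FIELDS = [
    "summary",
    "expected_production_installation_time",  # the field users most often forgot
]

def find_missing_fields(issue_fields: dict) -> list:
    """Return the required fields that are missing or empty for a Jira issue."""
    return [name for name in REQUIRED_FIELDS if not issue_fields.get(name)]

# Example issue with one required field left empty.
issue = {
    "summary": "Deploy payment batch update",
    "expected_production_installation_time": "",
}

missing = find_missing_fields(issue)
if missing:
    raise SystemExit(f"Pipeline stopped: please fill in these Jira fields first: {missing}")
```

In practice, a check like this would run as one of the first pipeline steps so that users get immediate, actionable feedback.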
How We Delivered Our Detailed User Guides
Finally, we had to figure out the right way to share these guides and deliver the information in them to our client’s teams. This was challenging because members of the client’s teams spanned different generations, came from different cultures, and had different preferences for how they consumed information.
To accommodate everyone, we tried multiple formats for sharing our user guides. These included:
Wiki Pages
At first we just created wiki pages for all of our instructions and end user guides. Wiki pages were common in our client’s organization and familiar to everyone.
However, our client’s wiki system had a very basic interface. It was outdated, it was hard to organize and search the information we put in it, and we could not format the text in an intuitive way.
As we created guides for more and more complex topics, our documentation grew more complex as well, and the wiki interface could no longer accommodate what we were building. Over time, many people simply stopped opening the wikis and ignored the documentation we put in them.
Cloud Storage + Announcements on Slack
After we tried wiki pages, we moved on to cloud storage.
All of our user guides were originally created as presentations. We find it easy to draw algorithms and visualize important steps and concepts in slides, and it’s easy to save them as PDFs for simple sharing.
So we began uploading our user guide slides, presentations, and demo session documentation to cloud storage. This allowed us to manage access to different documents, and to separate user guides from design docs while still storing everything in one place. Overall this has been a useful solution, and we continued to use it for a long time.
Git
Finally, we also began to use a Git repository to share our user guides and other forms of documentation.
In some cases, Git offers clear advantages over simply uploading end user documentation to cloud storage. We found that we often had to adjust our documentation after we finished active development on a new solution, implemented it, and began supporting it with end users. We were also constantly updating our documents during bug fixes and feature additions, and each time we changed a user guide we had to notify our client’s teams via Slack.
At first, members of our DevOps team adjusted the documentation directly in cloud storage, which meant we struggled with version control and with team members accidentally deleting each other’s work.
After this happened a few times, we put all of our documentation — from user guides to design documents — into a Git repository. With Git we no longer lost or accidentally altered our documentation, we were able to better collect feedback from our end users, and we were able to iterate our documents faster.
Git is still not perfect; it can be time consuming to clone the repository or manage complex access rights. But overall it’s the best solution we have found for hosting and sharing our detailed end user documentation.
Pillar Two: Extended Training
Why We Performed Extended Training
Heavily documenting our processes and tools and creating detailed user guides was a big improvement, but we still faced big challenges to end user adoption. The client’s teams were still not adopting their new processes and tools as quickly as we wanted, and the information in the guides was not spreading as widely as we hoped, no matter how much people talked formally and informally about these processes and tools. Some people did not even know our user guides existed!
To boost team engagement, we added the second pillar of our new approach to end user adoption: extended training.
How We Performed Extended Training
We based our extended training around two types of live sessions.
Demos
For most of our processes and tools, we were able to drive effective end user adoption through a combination of user guides and demo sessions. We performed the demo sessions live, but recorded them and added the recordings to our user guides. This was a big help, especially for newcomers.
In our demos, we showed that it wasn’t such a big deal to use the new technical solutions we were suggesting, and we shared critical details on how to use the solutions effectively.
Over time, we developed a standard structure for demos. In each demo we would:
- Describe the theory behind the new approach
- Show the new functionality in a real-world usage scenario. For example, for starting the DevOps pipeline, we demonstrated deploying a code change, paying attention to the relevant details, and walked the team through the solution step by step.
- Explain how it works under the hood. This was the client’s engineers’ favorite part. For them it was much easier to get used to a new solution after they knew how it was created and how it worked.
- Open discussion with the team (Q&A). We let them ask questions and share their opinion about what we should add into the pipeline.
Practical Education Sessions
In addition to demos, we ran practical education sessions.
From our experience, demo sessions delivered over video conferencing tools are often not enough to really drive end user adoption. These sessions are good for delivering information, but they aren’t ideal for driving real change.
And the bigger the change, the less effective online demo sessions are on their own. For example, with this client we gave four different demo sessions when rolling out their new test automation framework, but even that wasn’t enough to overcome resistance and convince their teams that our new solution would work.
In an ideal world, we would combine online demo sessions with face-to-face practical education sessions for every change we made; we have found face-to-face training to be far more effective for driving end user adoption.
We couldn’t do that for every change, but the biggest changes in this client’s transformation did call for dedicated education sessions, face-to-face or online, and we worked with their management to arrange several meetings with small teams to explain our new processes and tools.
During these meetings, we:
- Discussed their doubts about the new approach.
- Provided answers to their most common questions and concerns by lecturing with whiteboard drawings.
- Prepared and performed a small sample exercise where the attendees were “forced” to log into the new tool, touch and play with it a bit, test the functionality we developed, work with different test results, and fix failed test cases — all to see and feel with their own hands that our new solution worked.
- After each session, we asked each attendee to provide a real case where they would use their legacy approach, and to then explain how they could use the new process or tool instead.
This approach worked: our client’s teams began discussing the exercises among themselves, and even held a small competition over who could finish all of the exercises first.
Along the way, they helped each other learn the approach and functionality of our new solution. Ultimately, we ran similar education sessions for every big change we proposed that was not being adopted through demos and documentation alone.
What’s Changed: Getting Buy-In and Driving Results
As a whole, our client was very receptive to this new approach to training, education, and driving change within their teams.
We faced only one major objection to our new approach to end user adoption: it was more effective, but it also took more time and effort to execute. Managers worried that our people would spend too much time producing documentation, and that we would delay the release of new processes and tools while we waited to document them.
We did two things to overcome these concerns:
- We only had one person work on documentation while the team developed the new solution in parallel, and then other team members reviewed the docs when they were completed. This limited the amount of internal bandwidth we devoted to creating our end user guides.
- We did not wait to complete the end user guide before we released whatever process or tool we were documenting. We made sure we had a short introduction for the new solution, and we performed a general demonstration for a wide group of team members, but we produced our detailed and beautiful user guides after release.
But when it came down to it, our client’s managers got on board with our new approach when they saw the improved results we were driving. By following our new approach, we have been able to significantly accelerate the adoption of new processes and tools by end users, and teams have been able to experience the results of optimizing their processes. Our client and their teams are much more capable of learning these new processes and tools and getting the information they need when they hit a sticking point.
Over time, our new approach has created some big improvements in our client’s day to day life. Specifically:
- The client’s DevOps transformation began to move much faster due to the detailed documentation and support during educational sessions.
- Developers and testers refer to our user guides when they need information, instead of wasting time waiting for those answers from our Slack channel.
- Newcomers are able to review demo sessions and user guides to quickly get up to speed on their new team’s internal processes, and to learn tools they might not know.
- We have an easy procedure for delivering new functionality: we update our user guides, send a notification to the team, and schedule a demo session, which turns a large part of each team into active users of the new solution.
- These demos have become an integral part of the day-to-day life of their DevOps teams, and we love to use them to share our solutions and to see firsthand our client’s teams becoming more and more open to the DevOps concept.
- Sometimes we still receive questions in Slack where the answer is available in the user guides, but now we can save time and just send the user a link and show them where to quickly find the information they need.
By developing and deploying this new approach to end user adoption, we have created multiple benefits for our client and their teams, including:
- The DevOps team spends less time supporting the solution.
- Our user guides serve as a single source of truth.
- Fewer false-positive bugs are reported.
- There is less conflict between teams, thanks to documented rules and responsibilities.
- The client can support their pipeline even if we stop working together.
If you’d like these benefits, then we suggest you give our new approach a try.
The Road Ahead
This was the last piece of the puzzle in our client’s mainframe DevOps transformation. By developing an effective method for driving end user adoption of new tools and processes, and for building a DevOps culture, we were able to move our client’s teams through a long series of changes that were radically different from anything they had done before.
We will not pretend that these changes suddenly became easy or that all resistance melted away. We will simply say that with this process the changes became possible, team engagement increased, and resistance became something we could reliably and systematically overcome.
Thanks to this change management process, our client now operates an automated DevOps pipeline for their mainframe application development. They have modernized their mainframe function, and completed a transformation that they once worried would be impossible. We continue to work with them to modernize and improve additional elements of their mainframe capability, and we continue to use this change management process to drive every new solution that we propose.
If you are interested in learning more, or in discussing how you might be able to move your own mainframe capability through a similar transformation, then reach out today.