Servers

10 things you should do for a successful mainframe migration

Done right, a mainframe migration can dramatically reduce IT costs, free up your budget to fund new initiatives, and improve system performance and application quality.

IT cost reduction opportunities have never been so good. Rapid, low-risk mainframe migrations to distributed environments are becoming the norm. Thousands of MIPS are being moved to Windows, UNIX, and Linux through six- to 12-month migration projects. And mainframes are being decommissioned with accelerating frequency worldwide.

Multimillion-dollar IT cost reductions inside a year or two provide a ready opportunity to fund ongoing application and infrastructure modernization initiatives out of the existing, now-reduced IT budget. So what's the catch? Doing it right.

Mainframe migrations succeed when they're done right. More than a decade of large-scale enterprise mainframe migrations has produced best practices and strategies that establish robust distributed solutions — ones that outperform their previous mainframe environments and offer more flexibility and scalability for ongoing modernization.

One global investment firm recently moved more than 2,000 MIPS to a combined Windows and Linux environment in seven months. Its approach was to stage the migration of 18 applications across a period of 12 months; the firm just completed its sixth production cutover, with 12 to go in the next five months. Another company, Owens & Minor, a $7 billion distributor of medical and surgical supplies, cut over into production and shut down its 800 MIPS mainframe, saving $5 million in annual IT costs in the first year. It immediately applied those savings to fund its ongoing IT modernization and improvement plan, starting with the prioritized creation and deployment of previously mainframe-based functionality as Web services. Projects to migrate 10,000 MIPS or more are now becoming common.

So what do you need for a successful migration? Selection of the right solution and solution provider is just the beginning. Experience gained through dozens of large migration projects suggests these 10 primary best practices.

1: Ensure that all 10 components common to all migrations are designed for and assigned clear ownership

Establish ownership for delivery of each of the 10 components and obtain firm delivery schedule commitments. The 10 components are:

  • Primary programming languages (like COBOL, PL/I, and Natural)
  • Secondary programming languages (like Easytrieve and Assembler)
  • Data infrastructure and data stored in files and relational databases
  • Batch application infrastructure (including JCL, supporting utilities, and the job scheduler)
  • Online application infrastructure (including the TP system and user interface screens)
  • Application- and system-level security (like RACF, Top Secret, and ACF2)
  • Output, content, and report management (like CA-View/Deliver, ASG-Mobius, and IBM FileNet)
  • Development, test, and QA infrastructure
  • Production, failover, and disaster recovery infrastructure
  • Application modernization architecture and tooling

2: Apply expert solution architecture advice to target solution design

During the Analysis & Design phase, apply expert advice to the technical solution design. The primary deliverable is a documented design that ensures the right fit for your unique requirements and target environment.

3: Apply strong project management and solution architecture expertise across your project lifecycle

Show your modernization project due respect. Do not minimize the value of strong project management and solution architect support for your project.

4: Assign strong subject matter expertise to ensure solution adoption and success

Gain the expected value from your solution. Assign strong internal subject matter experts (SMEs) to champion, endorse, support, and sustain continuing solution value improvement.

5: Prepare robust problem resolution processes early (toward the end of the Build phase going into the Test phase)

Avoid wasted time in post-migration testing and production support. Implement and use the right tools and train a core technical team in problem resolution processes and procedures before going into production.

6: Leverage pre-existing test processes to a maximum extent

Again, avoid wasted time in post-migration testing and production support. Thoroughly prepare test data and scripts, leveraging your existing testing assets and processes.

7: Adopt an incident-tracking solution from the start of your project

Adopt and use an internal incident-tracking solution from the very beginning. Using a help desk incident-tracking solution as a central repository:

  • Makes the process efficient from the outset.
  • Ensures visible accountability and reporting.
  • Avoids "lost issues."
  • Preserves valuable resolution approaches and solutions in a searchable, reportable form for future reference.
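The value of a central, searchable repository can be sketched in a few lines. This is a toy illustration, not any specific help-desk product; the incident fields, statuses, and sample issue text are all hypothetical.

```python
from dataclasses import dataclass

# Toy incident repository: every issue is recorded once, stays searchable,
# and can be reported on. Fields and statuses are illustrative only.

@dataclass
class Incident:
    id: int
    component: str      # e.g. "batch", "online", "security"
    summary: str
    status: str = "open"
    resolution: str = ""

class IncidentLog:
    def __init__(self):
        self._incidents = []

    def open(self, component, summary):
        inc = Incident(len(self._incidents) + 1, component, summary)
        self._incidents.append(inc)
        return inc

    def resolve(self, inc_id, resolution):
        inc = self._incidents[inc_id - 1]
        inc.status, inc.resolution = "resolved", resolution
        return inc

    def search(self, keyword):
        # Search summaries and resolutions so past fixes stay discoverable.
        kw = keyword.lower()
        return [i for i in self._incidents
                if kw in i.summary.lower() or kw in i.resolution.lower()]

log = IncidentLog()
log.open("batch", "Job ABEND S0C7 in converted COBOL step")
log.resolve(1, "Packed-decimal field initialized differently on target")
print([i.id for i in log.search("packed")])
```

Even a sketch this small shows the payoff: the "lost issue" problem disappears because resolution notes remain searchable long after the migration team disbands.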

8: Organize support processes and operations environment support during the Project Initiation phase

Address internal support processes and other operational considerations during project planning and solution design to assist in developing a realistic delivery schedule and minimize rework or unexpected delays. The application team running a project frequently lacks experience with infrastructure projects and related procurement/support processes in the target environment. It is important for the delivery team(s) to understand the internal processes, lead times, change windows, lockdown schedules, and other constraints.

9: Limit data conversion planning to "input files"

Segregate "input files" from other file types and gather all associated record layouts. Only input files (VSAM, QSAM, etc.) require conversion, so it's important to distinguish input files from temporary and output files. Data migration can be a significant part of the project; effort invested up front saves budget and schedule.
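As a simplified illustration of this triage, the sketch below classifies datasets referenced in JCL DD statements by their DISP parameter. The sample JCL, dataset names, and classification rules are assumptions for illustration only; a real inventory needs a proper JCL parser plus scheduler- and program-level analysis.

```python
import re

def classify_dd(disp: str) -> str:
    """Return 'input', 'temporary', or 'output' for a JCL DISP parameter.
    Simplified heuristic: existing datasets are inputs; new datasets that
    are deleted or passed are temporaries; new kept datasets are outputs."""
    parts = [p.strip() for p in disp.strip("()").split(",")]
    status = parts[0].upper()
    normal = parts[1].upper() if len(parts) > 1 and parts[1] else "KEEP"
    if status in ("SHR", "OLD", "MOD"):
        return "input"          # existing data read (or extended) by the job
    if status == "NEW" and normal in ("DELETE", "PASS"):
        return "temporary"      # created and discarded within the job
    return "output"             # newly created and kept/cataloged

def scan_jcl(jcl_text: str) -> dict:
    """Map each DSN found in DD statements to its classification."""
    results = {}
    for m in re.finditer(r"DSN=([A-Z0-9.]+),DISP=(\([^)]*\)|\w+)", jcl_text):
        results[m.group(1)] = classify_dd(m.group(2))
    return results

# Hypothetical JCL fragment for demonstration.
sample = """
//STEP1  EXEC PGM=PAYROLL
//MASTER DD DSN=PROD.PAYROLL.MASTER,DISP=SHR
//WORK   DD DSN=TEMP.PAYROLL.WORK,DISP=(NEW,DELETE)
//REPORT DD DSN=PROD.PAYROLL.REPORT,DISP=(NEW,CATLG)
"""

print(scan_jcl(sample))
```

In this sketch only PROD.PAYROLL.MASTER would be flagged for data conversion; the work and report files are created by the jobs themselves and need no migration.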

10: Plan specifically for cost savings, then track achievement

Gain the expected value from your solution. Implement strong leadership and teaming approaches in your production environment, with the mandate and accountability to measure and deliver the ROI agreed upon when the solution was procured. Quantify your ROI opportunity and measure results. IT cost reductions well in excess of 50% or 60% are common, and application maintenance and development productivity improvements usually exceed 20%.
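Quantifying the ROI can start as simple arithmetic: compare annual run costs before and after cutover and track the gap against plan. The figures below are hypothetical placeholders, not drawn from the article's case studies.

```python
def annual_savings(mainframe_cost: float, distributed_cost: float) -> tuple:
    """Return (absolute savings, percentage reduction) per year."""
    savings = mainframe_cost - distributed_cost
    pct = savings / mainframe_cost * 100
    return savings, pct

# Hypothetical example: $8M/year mainframe vs. a $3M/year distributed target.
savings, pct = annual_savings(8_000_000, 3_000_000)
print(f"${savings:,.0f} saved per year ({pct:.1f}% reduction)")
```

The point of writing even this down is accountability: a number agreed at procurement time becomes a target you measure cutover-by-cutover, rather than a claim that quietly evaporates.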

The payoff

If you get it right, you'll be positioned for the future, and these business improvement metrics could be yours:

  • 50+% IT cost reduction
  • 25+% development productivity improvement
  • 30+% system performance improvement
  • 15+% application quality improvement


Malcolm Marais is vice president, North American technical services at Micro Focus.

4 comments
Tony Hopkinson

If you re-engineer any legacy computer system, most of those improvements should occur. If they don't, why in Cthulhu's name would you bother? I bet I could take any existing distributed solution that had been in place as long as some of the mainframe ones have and achieve the same sorts of numbers. Sheesh, just getting your new (competent!) guy to rewrite your existing code base can deliver the last three improvements, and possibly get a good way towards number one as well.

I do agree with you about planning (specifically scope management) for cost savings. If you don't, everybody important and his dog will p1ss away the resources you allocated for the switch on the pet want they could never get money allocated to in and of itself.

Please bear in mind that no matter how well managed, there will be constant disruption to existing services. Their recipients won't be expecting cheaper; they'll be expecting better. If you are really lucky, they might even have a useful definition of what better is.

I did one of these, and every improvement that ended up in the new system (aside from aging hardware) I could have done with the existing kit. In fact, we ended up keeping the VAXes and most of the green screens, relegating that platform to a basic display system, because that turned out to be cheaper in the short and medium term than buying lots of clients and trebling the network infrastructure in one go.

stso9daa

I can just picture the landscape of the thousands of servers required compared to what one mainframe can do. The author forgets to point out that if your application has thousands of users, you can no longer use the term 'response time'. You will quickly see the system architect disappear behind the veil of the "it has to be the network" excuse... Then guess what: you keep buying bigger machines, and you end up back where you started...

Marcel den Hartog

It should actually be #1: know the real cost of your mainframe. More and more, I am finding that the mainframe cost center has been used as the garbage can for IT spend. Historically, the mainframe was THE consumer of energy in the datacenter; in many cases, the MF cost center carries >50% of the energy costs while consuming only 5-10%. Software costs, project management costs, consultants, and in one case even the corporate jet all end up in the MF cost center. So, before you jump, figure out the REAL cost of your distributed environment...

Deepali Mayekar

Migration is a very high-risk initiative undertaken by companies with the goal of 100% success. Migration projects are long-running endeavors during which the business and the organization evolve in a natural way. Managing scope creep is a challenge that needs to be agreed between the business and the IT vendors. This understanding is key to the success of the project.
