Inside A Working Cloud Migration Journey With DXC

A guide to the cloud journey with DXC Technology - there are many levels and layers inside a modern, complex cloud migration; it pays to know where you stand at any given moment in time.

Cloud migration is a big job. Moving enterprise systems off largely analogue (often paper-based and outdated) processes and creating new cloud-based digital workflows is a major undertaking that requires an expansive amount of planning and holistic awareness. Organizations on this path are looking to embrace data-focused, automated and cost-efficient systems that will provide a platform for new applications and services.

It's easy to say out loud, but it's a tough task to accomplish in real-world practice.

Taking incumbent systems into the new era of cloud computing often requires the IT estate that a business has spread over older mainframe systems to be re-architected, refactored, re-interfaced, retested, resecured and ultimately rehosted and retested once again.

Having (willingly) overseen more than his fair share of cloud migration projects, Joe Rodgers is chief technology officer for DXC's joint venture with Lloyd's and the International Underwriters Association, which is behind the major cloud transformation of the London insurance market. DXC Technology is a company known for its work managing mission-critical systems and operations while modernizing and optimizing data architectures.

Rodgers recognizes that mainframe-to-cloud migration is a key objective for many businesses; however, these same companies often struggle to define a clear strategy for getting there. Indeed, organizations often say it is difficult to identify a clean scope of work - one that enables the decoupling of the critical parts of the systems that will be re-platformed.

On one hand, as mainframe-skilled experts and experience in the IT organization become scarcer, mainframe applications can be viewed as opaque boxes that block progress. On the other hand, where skills do exist, they are often entrenched in the typical operating models and techniques applied to mainframe change. "Technology leaders and architects who understand modern design principles, techniques, tools and processes often don't know mainframe systems," explained Rodgers.

Because cloud migration initiatives (from mainframe, or simply from the pre-virtualization era) are often regarded inside an organization as wholesale efforts - large, expensive and potentially tangled up in various forms of bureaucratic red tape - they can be poorly understood internally (and indeed externally, with partners, supply chain connections and so on), which may reduce the overall appetite for change inside the business.

"But, you know, the mainframe (and for that matter most forms of legacy system) is just a computer running applications and basic services that support security, transaction management and other features. When you boil it down, it is not so different to a modern digital cloud-native platform. It can be decomposed - and modern techniques can be applied to it. If the right skills, tools and processes are in place, then the gap between legacy and the target estate can be narrowed and changes can be made to support transition," asserted Rodgers.

At the coalface of the IT department looking to move more of its incumbent stack to cloud, we often find a lack of resources, an inflexible approach to scalability (i.e. the lifeblood feature that we adore cloud for) and poor software language support for modern application development.

However, says DXC's Rodgers, the problem is often the complexity of the enterprise software applications themselves. This is often down to the prevalence of batch processing (scheduled software jobs that run automatically inside IT systems), which in itself creates complexity and often leads to high degrees of coupling between applications and monolithic designs.

This notion of coupling is a term often used in modern IT environments - or, more accurately now, decoupling: the act of defining and working with data resources or application components (or both) in a way where their value and identity are separated and abstracted from the underlying (or upper-tier) computing structure that they live on and integrate with.
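
To make the decoupling idea concrete, here is a minimal sketch in Python (the names and the insurance flavour are hypothetical, not DXC's code) of business logic written against an abstract store, so that a legacy backing system can be swapped for a cloud service without touching the logic itself:

    from abc import ABC, abstractmethod

    class PolicyStore(ABC):
        """Abstract interface: business logic depends on this, not on any concrete system."""
        @abstractmethod
        def get_policy(self, policy_id: str) -> dict: ...

    class MainframePolicyStore(PolicyStore):
        """Adapter wrapping the legacy system (stubbed here for illustration)."""
        def get_policy(self, policy_id: str) -> dict:
            # In reality this might call the mainframe via a messaging or API bridge.
            return {"id": policy_id, "source": "mainframe"}

    class CloudPolicyStore(PolicyStore):
        """Adapter backed by a cloud-native service (stubbed here for illustration)."""
        def get_policy(self, policy_id: str) -> dict:
            # In reality this might call a managed database or a REST endpoint.
            return {"id": policy_id, "source": "cloud"}

    def quote_premium(store: PolicyStore, policy_id: str) -> str:
        # Business logic is written against the abstraction, so either store can be injected.
        policy = store.get_policy(policy_id)
        return f"quoting against policy {policy['id']} from {policy['source']}"

    if __name__ == "__main__":
        print(quote_premium(MainframePolicyStore(), "P-001"))
        print(quote_premium(CloudPolicyStore(), "P-001"))

The point of the pattern is that the migration happens behind the interface; the calling code never needs to know which substrate answered.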

"This [scenario] also leads to complexity in integrating with modern transactional processing systems. Older systems often comprise poorly designed data models and don't follow the design principles that would be second nature today. These factors can make it very difficult to identify clear boundaries within systems that would allow the architecture to be decomposed into components and migrated or integrated. These factors also make testing complex and difficult, and this can make the transition slow and expensive," said Rodgers.

Migration from mainframe systems can be very difficult for the many reasons cited above. At each stage of a legacy-to-cloud transformation, new application features need to be re-evaluated with the business and, once again, tested for user acceptance and tested for performance, functionality and security.

Drawing from his experience working with DXC Technology customers, particularly in London's insurance market, Rodgers says he has learned to get customers to think beyond the constraints of current system behavior.

"When I talk about large digital transformations, I often use Monty Pythons Holy Grail proud builder analogy as a case in point. He proudly proclaims - I built a castle in a swamp. But it fell in. So I built another one. And that fell in and so on. The point is that you must prepare and lay proper foundations. At the same time, you will have a transition period where you will probably need to invest in the existing incumbent pre-cloud architecture to allow mainframe or other component parts to be safely decoupled and transformed to achieve the target architecture, said Rodgers.

The key principle for migration to the cloud is to design for the cloud. Hosting legacy-designed applications on the cloud can quickly become expensive. These applications should be decomposed and decoupled (remember decoupling?), making use of containerization platforms, serverless technologies and the PaaS and SaaS capabilities that the cloud providers offer to accelerate development.
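
As a rough illustration of what "design for the cloud" can mean in practice (a sketch only, with a hypothetical business rule; the handler signature follows the common AWS Lambda convention), a decomposed unit of logic might run as a small, stateless serverless function rather than inside a monolith:

    import json

    def handler(event, context):
        """Lambda-style entry point: one small, stateless unit of business logic.

        The platform handles scaling and availability; the function only
        transforms input to output, keeping it cheap to run and easy to test.
        """
        body = json.loads(event.get("body") or "{}")
        policy_id = body.get("policy_id", "unknown")
        # Hypothetical pricing rule for illustration only.
        premium = 100.0 if body.get("risk", "low") == "low" else 250.0
        return {
            "statusCode": 200,
            "body": json.dumps({"policy_id": policy_id, "premium": premium}),
        }

    if __name__ == "__main__":
        demo_event = {"body": json.dumps({"policy_id": "P-001", "risk": "high"})}
        print(handler(demo_event, None))

Because the function holds no state of its own, the provider can scale it on demand and bill per invocation - the consumption-based model the article describes.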

It is also key not to make a cloud migration strategy an island. It is likely that cloud-hosted services and an organization's on-premises datacentre services will need to co-exist, which means that cloud strategy needs to be a hybrid one from the outset. This also allows modernized target operating models to be implemented during the transition, front-loading a key risk to the migration.

In terms of the skills needed to migrate to cloud, this will depend on the nature of the transformation, but bolstering the skills and resources on the mainframe and legacy environments themselves is generally an extremely wise move.

"There is a high likelihood of change being driven into new cloud-native systems, but organizations should also prepare themselves realistically for a certain level of attrition. Remember, clouds spin up as well as down, and new platform paradigms come to the fore every decade - and sometimes more often than that," said DXC's Rodgers.

He also advises organizations to bring in people with experience of large-scale digital transformations. This typically means architects, program managers and technical leaders who should also have knowledge of the business. Strong lines of product ownership are critical, i.e. the business decision-makers need to be engaged in the transformation.

"The applications that can be delivered into production first should be tackled first. It is often tempting to start with user- or customer-facing systems or external channels due to their higher visibility. However, back-end systems are often easiest to decouple, dual-run and release early into production. Large transformation programs often succeed in completing the build but fail to deliver to production because of the scale of the service transition and the implementation of new business and technology operating models," said Rodgers.
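
The "dual run" Rodgers mentions is commonly implemented as a parallel-run harness. A minimal sketch (hypothetical functions, not DXC tooling) routes each request through both the legacy and the new implementation, keeps the legacy answer authoritative and logs any divergence:

    import logging

    logging.basicConfig(level=logging.INFO)
    log = logging.getLogger("dual-run")

    def legacy_calculate(claim: dict) -> float:
        # Stand-in for a call into the legacy/mainframe path.
        return round(claim["amount"] * 0.9, 2)

    def cloud_calculate(claim: dict) -> float:
        # Stand-in for the re-platformed cloud implementation.
        return round(claim["amount"] * 0.9, 2)

    def settle_claim(claim: dict) -> float:
        """Serve from legacy (still the system of record) while shadow-running the new path."""
        expected = legacy_calculate(claim)
        try:
            actual = cloud_calculate(claim)
            if actual != expected:
                log.warning("divergence for claim %s: legacy=%s cloud=%s",
                            claim["id"], expected, actual)
        except Exception:
            log.exception("cloud path failed for claim %s", claim["id"])
        return expected  # the legacy result remains authoritative during transition

    if __name__ == "__main__":
        print(settle_claim({"id": "C-42", "amount": 1000.0}))

Once the divergence log stays quiet for long enough, the roles can be flipped and the legacy path retired.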

Long business and technology change freezes can also be problematic, so more frequent and smaller releases that allow other activity to continue are preferable. Whether applications are cheap or expensive to run on the cloud often depends heavily on their design.

"Applications designed for the cloud should be lightweight, taking advantage of modern frameworks to reduce their footprints and make them more suitable for container or serverless technologies. They should be decomposed to use the native PaaS services provided by the cloud providers as highly available, scalable and consumption-based services," clarified DXC's Rodgers.

The highest cost of all is engineering. Although cloud hosting costs may look high, they may not be that expensive if they save a significant amount of engineering and longer-term maintenance cost.
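
A quick back-of-the-envelope calculation makes that trade-off visible - all figures below are invented assumptions for illustration, not DXC numbers:

    # Hypothetical annual figures for illustration only.
    legacy_hosting = 50_000        # cheaper raw hosting on amortized legacy kit
    legacy_engineering = 400_000   # scarce mainframe skills, slow change cycles
    cloud_hosting = 120_000        # managed PaaS/serverless looks pricier up front
    cloud_engineering = 220_000    # automation and modern tooling cut labour cost

    legacy_total = legacy_hosting + legacy_engineering
    cloud_total = cloud_hosting + cloud_engineering
    print(f"legacy: {legacy_total:,}  cloud: {cloud_total:,}  saving: {legacy_total - cloud_total:,}")
    # prints: legacy: 450,000  cloud: 340,000  saving: 110,000

Under these assumed figures the pricier hosting bill is more than offset by the engineering saving, which is precisely Rodgers' point.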

Modern cloud platforms are also well suited to automation. Although this increases initial set-up costs, it can reduce longer-term maintenance costs and change lead times while allowing for more cycles, improved security, reduced friction and fewer support handoffs through the use of DevSecOps models - and it can mitigate the loss of skills and knowledge over the longer term.

"If I can leave one final piece of advice here, it would come down to a handful of (I hope) hard-hitting points," said Rodgers. "Lay foundations. Be ready to change. Be ready to test. Harden as early as possible. Nothing counts other than that which has made it to live production status. Statistically, a soft implementation makes sense if you can do it. Invest in the legacy system if it means a better and smoother transition. Don't forget to bring the business with you. At DXC we call this Doing Cloud Right, if you will indulge my use of our company mantra, to describe the need to focus on business outcomes and successfully manage a mix of cloud, multicloud and on-premises platforms."

Overall, as a risk factor to be aware of, Rodgers and his team say that the flexibility and delivery acceleration that can be achieved in the cloud can lead to services being built out too quickly, without forward planning and control - and it can be difficult to recover if this happens. The result can be cost complexity and security risk. At the same time, it is difficult to reap the full benefits of cloud adoption without a level of lock-in with the cloud provider, so a delicate balance needs to be struck here.

Moving to the cloud means moving to an environment that is always changing and developing. This may require a culture shift in some organizations, as they need to be prepared for continuous change.
