Adoption of Cloud-Native Architecture, Part 1: Architecture Evolution and Maturity

Key Takeaways

Architecture stabilization gaps and anti-patterns can emerge as part of a hasty microservices adoption.

Understanding the caveats and pitfalls of historic paradigm shifts should enable us to learn from previous mistakes and position our organizations to thrive on the latest technology waves.

It's important to know the pros and cons of different architectural styles like monolithic apps, microservices, and serverless functions.

Architecture evolution follows a repeating cycle: an initial stage of not knowing the best practices in the new paradigm accelerates technical debt; as the industry develops new patterns to address the gaps, teams adopt the new standards and patterns.

Consider the architecture patterns as strategies that favor rapid technological evolution while protecting the business apps from volatility.

Technology trends such as microservices, cloud computing, and containerization have been evolving so quickly in recent years that most of these technologies are now part of the day-to-day duties of top IT engineers, architects, and leaders.

We live in a cloud-enabled world. However, being cloud-enabled does not mean being cloud-native. In fact, it's not only possible but dangerous to be cloud-enabled without being cloud-native.

Before we examine these trends and discuss what architectural and organizational changes corporations should implement to take full advantage of a cloud-enabled world, it is important to look at where we have been, where we are, and where we are going.

Understanding the caveats and pitfalls of the historic paradigm shifts should allow us to learn from previous mistakes and position our organizations to thrive on the latest waves of this technology.

As we briefly walk through this evolution, we'll be exploring the concept of anti-patterns: common responses to a recurring problem that are usually ineffective and risk being counterproductive.

This article series will describe the anti-patterns mentioned.

For the last 50 years or so, software architecture and application hosting models have experienced major transformation from mainframes to microservices and serverless.

Figure 1 shows this evolution of architecture models and the paradigms they promoted.

Figure 1: Architecture evolution from mainframe to cloud and microservices

Back in the '70s and '80s, mainframes were the dominant computing model. Mainframes are based on a centralized data storage and computing model, with basic client terminals used for data entry and data display on primitive screens.

The original mainframe computers used punch cards, and most of the computation happened within batch processes. There was no online processing: nothing was handled in real time, so every result waited on the next batch run.

Some evolution happened within the mainframe paradigm with the introduction of online processing and user interface terminals. The overall paradigm of a massive central unit of processing contained within the four walls of a single organization still had a "one size fits all" approach, however, and that was only partially able to supply the capabilities needed by most business applications.

Client/server architecture put most of the logic on the server side and some of the processing on the client. Client/server was the first attempt in distributed computing to replace the mainframe as the primary hosting model for business applications.

In the first few years of this architecture, the development community was still writing software for client/server using the same procedural, single-tier principles that they had used for mainframe development, which resulted in anti-patterns like spaghetti code and the blob. This organic growth of software also resulted in other anti-patterns like big ball of mud. The industry had to find ways to stop teams from following these bad practices and so had to research what was necessary to write sound client/server code.

This research effort mapped out several anti-patterns and best-practice design and coding patterns. It introduced a major improvement called object-oriented programming (OOP), with its inheritance, polymorphism, and encapsulation features, along with paradigms for dealing with decentralized data (as opposed to a mainframe's single version of the truth) and guidance on how the industry could cope with the new challenges.
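
To make those three OOP features concrete, here is a minimal, self-contained Java sketch; the `Account` and `SavingsAccount` names are illustrative, not from the article:

```java
// Encapsulation: balance is private and can only change through deposit().
class Account {
    private double balance;                 // hidden internal state

    void deposit(double amount) {
        if (amount <= 0) throw new IllegalArgumentException("amount must be positive");
        balance += amount;
    }

    double interest() { return 0.0; }       // overridable behavior

    double balance() { return balance; }
}

// Inheritance: SavingsAccount reuses Account and specializes interest().
class SavingsAccount extends Account {
    @Override
    double interest() { return balance() * 0.02; }
}

public class OopDemo {
    public static void main(String[] args) {
        // Polymorphism: the variable's static type is Account, but the
        // overridden SavingsAccount.interest() is the one that runs.
        Account account = new SavingsAccount();
        account.deposit(100.0);
        System.out.println(account.interest()); // prints 2.0
    }
}
```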

The client/server model was based on a three-tier architecture consisting of presentation (UI), business logic, and data tiers. But most applications were written using a two-tier model, with a thick client encapsulating all presentation, business, and data-access logic and hitting the database directly. Although the industry had started to discuss the need to separate presentation from business logic from data access, that practice didn't really become vital until the advent of Internet-based applications.
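
As a purely illustrative sketch of that separation (the class and method names here are ours, not the article's), the three tiers can be kept behind distinct boundaries even inside a single Java process:

```java
import java.util.Map;
import java.util.Optional;

// Data tier: the only layer that knows how customers are stored.
interface CustomerRepository {
    Optional<String> findNameById(long id);
}

// Business tier: rules live here, with no UI or storage details.
class CustomerService {
    private final CustomerRepository repository;
    CustomerService(CustomerRepository repository) { this.repository = repository; }

    String greetingFor(long id) {
        return repository.findNameById(id)
                .map(name -> "Hello, " + name)
                .orElse("Hello, guest");
    }
}

// Presentation tier: renders output, delegating all logic to the service.
public class CustomerScreen {
    public static void main(String[] args) {
        CustomerRepository inMemory = id -> Optional.ofNullable(Map.of(1L, "Ada").get(id));
        CustomerService service = new CustomerService(inMemory);
        System.out.println(service.greetingFor(1L)); // Hello, Ada
    }
}
```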

In general, this model was an improvement over the mainframe's limitations, but the industry soon ran into its own limits, such as needing to install the client application on every user's computer and the inability to scale individual business functions at a fine-grained level.

During the mid-90s, the Internet revolution occurred and a completely new paradigm arrived with it. Web browsers became the client software, while web and application servers hosted all the processing and logic. The World Wide Web (www) paradigm promoted a true three-tier architecture, with presentation (UI) code hosted on web servers, business logic (API) on application servers, and data stored in database servers.

The development community started to migrate from thick (desktop) clients to thin (web) clients, driven mainly by ideas like service-oriented architecture (SOA) that reinforced the need for a three-tiered architecture, and fueled by improvements to client-side technologies and the rapid evolution of web browsers. This move sped up time to market and required no installation of the client software. But developers were still creating software as tightly coupled designs, leading to the jumble and other anti-patterns.

In response, the industry came up with evolved three-tiered architectures and practices such as domain-driven design (DDD), enterprise integration patterns (EIP), SOA, and loosely coupled techniques.

The first decade of the 21st century saw a major transformation in application hosting when hosting became available as a service in the form of cloud computing. Application use cases requiring capabilities like distributed computing, networking, storage, and compute became much easier to provision with cloud hosting, at a reasonable cost compared to traditional infrastructure. Consumers could also take advantage of the elasticity of resources, scaling up and down based on demand and paying only for the storage and compute they actually used.

The elastic capabilities introduced in IaaS and PaaS allow for a single instance of a service to scale as needed, eliminating duplication of instances for the sake of scalability. However, these capabilities cannot compensate for the duplication of instances for other purposes, such as having multiple versions, or as a byproduct of monolith deployments.

The appeal of cloud-based hosting was that dev and ops teams no longer had to worry about server infrastructure. It offered three different hosting options: infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS).

PaaS became the sweet spot among the cloud options because it allows developers to host their own custom business application without having to worry about provisioning or maintaining the underlying infrastructure.

Even though cloud hosting encouraged modular application design and deployment, many organizations found it enticing to lift and shift legacy applications that had not been designed to work on an elastic distributed architecture directly to the cloud, resulting in a somewhat modern anti-pattern called "monolith hell".

To address these challenges, the industry came up with new architecture patterns like microservices and 12-factor apps.
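
As a small taste of what those patterns prescribe, factor III of the 12-factor methodology requires configuration to come from the environment rather than be baked into the build. A minimal Java sketch, with environment variable names of our own choosing:

```java
// 12-factor, factor III: read config from the environment so the same
// artifact can run unchanged in dev, test, and production.
public class TwelveFactorConfig {
    public static void main(String[] args) {
        // Fall back to safe defaults when a variable is absent (e.g., local dev).
        String dbUrl = envOrDefault("DATABASE_URL", "jdbc:postgresql://localhost:5432/app");
        int port     = Integer.parseInt(envOrDefault("PORT", "8080"));

        System.out.printf("Starting on port %d against %s%n", port, dbUrl);
    }

    private static String envOrDefault(String name, String fallback) {
        String value = System.getenv(name);
        return (value == null || value.isBlank()) ? fallback : value;
    }
}
```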

Moving to the cloud also presented the industry with the challenge of managing application dependencies on third-party libraries and technologies. Developers started struggling with too many options and not enough criteria for selecting third-party tools, and we started seeing some dependency hell.

Dependency hell can occur at different levels: among shared libraries, among business services, and across the underlying technology stack.

Library-based dependency hell is a packaging challenge; the latter two are design challenges. A future article in this series will examine these dependency-hell scenarios in more detail and offer design patterns that avoid their unintended consequences and prevent the proliferation of technologies.

Software design practices like DDD and EIP have been available since about 2003, and some teams were already developing applications as modular services back then, but traditional infrastructure, like heavyweight J2EE application servers for Java applications and IIS for .NET applications, didn't help with modular deployments.

With the emergence of cloud hosting and especially PaaS offerings like Heroku and Cloud Foundry, the developer community had everything it needed for true modular deployment and scalable business apps. This gave rise to the microservices evolution. Microservices offered the possibility of fine-grained, reusable functional and non-functional services.

Microservices gained popularity in 2013 and 2014. They are powerful and enable smaller teams to own the full-cycle development of specific business and technical capabilities. Developers can deploy or upgrade code at any time without adversely impacting other parts of the system (client applications or other services). Services can also be scaled up or down based on demand, at the individual service level.

A client application that needs a specific business function calls the appropriate microservice without requiring its developers to code the solution from scratch or to package the solution as a library in the application. The microservices approach encouraged contract-driven development between service providers and service consumers. This sped up overall development time and reduced dependencies among teams. In other words, microservices made teams more loosely coupled and accelerated the development of solutions, which is critical for organizations, especially startups.
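
A minimal sketch of what that contract-driven relationship can look like in code; the `InventoryClient` contract and all names here are hypothetical, and a real consumer would bind the contract to an HTTP client rather than a local stub:

```java
// The contract, agreed between provider and consumer teams and versioned
// independently of either implementation.
interface InventoryClient {
    int unitsInStock(String sku);
}

// Consumer side: depends only on the contract, so the provider team can
// redeploy or rewrite its service without breaking this code.
class OrderService {
    private final InventoryClient inventory;
    OrderService(InventoryClient inventory) { this.inventory = inventory; }

    boolean canFulfill(String sku, int quantity) {
        return inventory.unitsInStock(sku) >= quantity;
    }
}

public class ContractDemo {
    public static void main(String[] args) {
        // A stub standing in for the real call to the inventory microservice.
        InventoryClient stub = sku -> "ABC-1".equals(sku) ? 5 : 0;
        System.out.println(new OrderService(stub).canFulfill("ABC-1", 3)); // true
    }
}
```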

Microservices also help establish clear boundaries between business processes and domains (e.g., customer versus order versus inventory). They can be developed independently within that vertical modularity known as the "bounded context" in the organization.
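
To make the bounded-context idea concrete, here is a hypothetical Java sketch in which the customer context owns the full customer profile while the order context keeps only the customer identity it needs; none of these names come from the article:

```java
// Customer context: owns the full customer profile.
class CustomerProfile {            // lives in, say, com.example.customers
    final long id;
    final String name;
    final String email;
    CustomerProfile(long id, String name, String email) {
        this.id = id; this.name = name; this.email = email;
    }
}

// Order context: refers to the customer only by identity, never by
// reaching into the customer context's internal model.
class Order {                      // lives in, say, com.example.orders
    final long customerId;         // a reference across the context boundary
    final String sku;
    Order(long customerId, String sku) {
        this.customerId = customerId; this.sku = sku;
    }
}

public class BoundedContextDemo {
    public static void main(String[] args) {
        CustomerProfile ada = new CustomerProfile(1L, "Ada", "ada@example.com");
        Order order = new Order(ada.id, "ABC-1"); // only the id crosses the boundary
        System.out.println("Order for customer " + order.customerId);
    }
}
```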

This shift also accelerated the evolution of other good practices, like DevOps, and provided agility and faster time to market at the organizational level. Each development team would own one or more microservices in its domain and be responsible for the whole process: designing, coding, and deploying to production, as well as post-production support and maintenance.

However, similar to the previous architecture models, the microservices approach ran into its own issues.

Legacy applications that had not been designed as microservices from the bottom up started being cannibalized in attempts to force them into a microservices architecture, leading to the anti-pattern known as monolith hell. Other attempts artificially broke monolithic applications into several microservices even though the resulting microservices were not isolated in terms of functionality and still heavily depended on other microservices broken out of the same monolith. This is the anti-pattern called microliths.

It's important to note that monoliths and microservices are two different patterns, and the latter is not always a replacement for the former. If we are not careful, we can end up creating tightly coupled, intermingled microservices. The right option depends on the business and scalability requirements of an application's functionality.

Another undesired side effect of the microservices explosion is the so-called "Death Star" anti-pattern. Microservices proliferation without a governance model in terms of service interaction and service-to-service security (authentication and authorization) often results in a situation where any service can willy-nilly call any other service. It also becomes a challenge to monitor how many services are being used by different client applications without decent coordination of those service calls.
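
To illustrate the governance gap, the check below is the kind of service-to-service authorization whose absence produces a Death Star: a hypothetical allow-list keyed by caller identity. In practice the caller's identity would come from mTLS or a signed token, and enforcement is increasingly delegated to a service mesh rather than hand-rolled like this:

```java
import java.util.Map;
import java.util.Set;

// Hypothetical sketch: each service declares which callers may invoke it,
// instead of letting any service call any other service at will.
public class ServiceAuthz {
    // caller service -> callee services it is allowed to reach
    private static final Map<String, Set<String>> ALLOW = Map.of(
            "order-service",   Set.of("inventory-service", "payment-service"),
            "billing-service", Set.of("payment-service"));

    static boolean isAllowed(String caller, String callee) {
        return ALLOW.getOrDefault(caller, Set.of()).contains(callee);
    }

    public static void main(String[] args) {
        System.out.println(isAllowed("order-service", "payment-service")); // true
        System.out.println(isAllowed("order-service", "billing-service")); // false: not on the allow-list
    }
}
```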

Figure 2 shows how organizations like Netflix and Twitter ran into this nightmare scenario and had to come up with new patterns to cope with a "death by Death Star" problem.

Figure 2: Death Star architectures due to microservices explosion without governance

Although the examples depicted in figure 2 might look like extreme cases that only happen to giants, do not underestimate the exponential destructive power of cloud anti-patterns. The industry must learn how to operate a weapon that is massively larger than anything the world has seen before. "Great power involves great responsibility," said Franklin D. Roosevelt.

Emerging architecture patterns like service mesh, sidecar, service orchestration, and containers can be effective defense mechanisms against malpractices in the cloud-enabled world.

Organizations should understand these patterns and drive adoption sooner rather than later.

With the emergence of cloud platforms, and especially of container orchestration technologies like Kubernetes, the service mesh has been gaining attention. A service mesh is the bridge between application services that adds capabilities like traffic control, service discovery, load balancing, resilience, observability, security, and so on. It allows applications to offload these capabilities from application-level libraries and lets developers focus on business logic.
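
With a mesh in place, application code can stay as plain as the hedged Java sketch below: the call targets a logical service name (`inventory-service` is an illustrative in-cluster hostname), and the sidecar proxy, not the application, supplies discovery, retries, load balancing, and mutual TLS:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class MeshClientDemo {
    public static void main(String[] args) throws Exception {
        // No retry loops, no discovery lookups, no TLS setup in app code:
        // the sidecar proxy intercepts this call and applies the mesh's
        // traffic, resilience, and security policies.
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://inventory-service/stock/ABC-1"))
                .build();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}
```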

Some service mesh technologies like Istio also support features like chaos injection so that developers can test the resilience and robustness of their application and its potentially dozens of interdependent microservices.

Service mesh fits nicely on top of platform as a service (PaaS) and container as a service (CaaS), and enhances the cloud-adoption experience with the above-mentioned common platform services.

A future article will delve into service-mesh-based architectures, discussing specific use cases and comparing solutions with and without a service mesh.

Another trend that has received a lot of attention in the last few years is serverless architecture, also known as serverless computing. Serverless goes a step further than the PaaS model in that it fully abstracts server infrastructure from the application developers.

In serverless, we write business services as functions and deploy those functions to the cloud infrastructure. Some examples of serverless technologies are AWS Lambda, Spring Cloud Function, Google Cloud Functions, and Microsoft Azure Functions.
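
On most of these platforms the deployable unit is literally a function. The minimal Java sketch below shows the shape of such a unit as a plain `java.util.function.Function`; the customer-notification capability is our own illustrative example, and each platform wraps such a function in its own handler or bean registration:

```java
import java.util.function.Function;

// The whole "service" is a single function from request to response.
// On a serverless platform this would be registered as the handler
// (e.g., a Spring Cloud Function bean or an AWS Lambda RequestHandler).
public class NotifyCustomerFunction {
    static final Function<String, String> NOTIFY =
            customerId -> "queued notification for customer " + customerId;

    public static void main(String[] args) {
        // Locally we can exercise the function directly; in the cloud the
        // platform invokes it per event and scales it to zero when idle.
        System.out.println(NOTIFY.apply("42"));
    }
}
```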

The serverless model sits in between PaaS and SaaS in the cloud-hosting spectrum, as shown in the diagram below.

Figure 3: Cloud computing, containers, service mesh, and serverless

In a conclusion similar to that of the monolith-versus-microservices discussion, not all solutions should be implemented as functions. Also, we should not replace all microservices with serverless functions, just as we shouldn't break down every monolithic app into microservices. Only fine-grained business and technical functions, like user authentication or customer notification, should be designed as serverless functions.

Depending on application functionality and on non-functional requirements like performance, scalability, and transaction boundaries, we should choose the appropriate monolith, microservices, or serverless model for each specific use case. It's typical to need all three of these patterns in a single solution architecture.

If not designed properly, serverless solutions can end up becoming nanoliths, where each function is tightly coupled with other functions or microservices and cannot operate independently.

Complementary trends like container technologies came out around the same time as microservices to help with deploying services and apps in environments that offer true isolation of business services and scalability at the individual service level. Container technologies like Docker, containerd, and rkt, along with orchestrators like Kubernetes, complement microservices development very well. Nowadays, we cannot mention one (microservices or containers) without the other.

As mentioned earlier, it's important to know the pros and cons of the three architectural styles: monolithic apps, microservices, and serverless functions. A written case study on monolith versus microservices describes in detail one decision to avoid microservices.

Table 1 highlights the high-level differences between these three options.

Note: Sometimes teams artificially break down related functions into microservices and then experience the limitations of the microservices model.

Two of the table's observations about serverless stand out: the application is completely shut down when there is no traffic, and dev teams don't have to care about the underlying infrastructure.

Table 1: Service architecture models and when to use or avoid them

It's important for us to keep an eye on the anti-patterns that may develop in our software architecture and code over time. Anti-patterns not only cause technical debt but, more importantly, could drive subject-matter experts out of the organization, leaving it with only the people who don't care about architecture deviations or anti-patterns.

After the brief history above, let's focus on the stabilization gaps and anti-patterns that can emerge as part of a hasty microservices adoption.

Specific factors like an organization's team structure, its business domains, and the skillsets in a team determine which applications should be implemented as microservices and which should remain monolithic solutions. But we can look at some general considerations for choosing to design a solution as a microservice.

Eric Evans's book Domain-Driven Design (DDD) transformed how we develop software. Evans promoted the idea of looking at business requirements from a domain perspective rather than from one based on technology.

Microservices can be considered a derivation of the book's aggregate pattern. But many software development teams are taking the microservices design concept to the extreme by attempting to convert all of their existing apps into microservices. This has led to anti-patterns like monolith hell, microliths, and others.
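
For reference, an aggregate in DDD is a cluster of objects that is modified only through a single root enforcing the invariants. A minimal, hypothetical Java sketch (the order domain and its rules are illustrative only):

```java
import java.util.ArrayList;
import java.util.List;

// Aggregate root: Order. Outside code never touches OrderLine directly;
// every change goes through the root, which guards the invariants.
class Order {
    private final List<OrderLine> lines = new ArrayList<>();
    private boolean submitted;

    void addLine(String sku, int quantity) {
        if (submitted) throw new IllegalStateException("order already submitted");
        if (quantity <= 0) throw new IllegalArgumentException("quantity must be positive");
        lines.add(new OrderLine(sku, quantity));
    }

    void submit() {
        if (lines.isEmpty()) throw new IllegalStateException("cannot submit an empty order");
        submitted = true;
    }

    private record OrderLine(String sku, int quantity) {}
}

public class AggregateDemo {
    public static void main(String[] args) {
        Order order = new Order();
        order.addLine("ABC-1", 2);
        order.submit();
        System.out.println("submitted");
    }
}
```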

Following are some of the anti-patterns that architecture and dev teams need to be careful about, including those introduced above: monolith hell, microliths, the Death Star, and nanoliths.

We'll look in more detail at each of these anti-patterns in the next article.

To close the stabilization gaps and counter the anti-patterns found in the different application hosting models, the industry has come up with evolved architecture patterns and best practices.

These architecture models, stabilization gaps, and patterns are summarized in the table below.

Table 2: Application hosting models, anti-patterns, and patterns

Figure 4 shows all these architecture models, the stabilization gaps in the form of anti-patterns, and the evolved design patterns and best practices.

Figure 4: Architecture evolution and application-hosting models

Figure 5 lists the steps of architecture evolution, including the initial stage of not knowing the best practices in the new paradigm, which accelerates the technical debt. As the industry develops new design patterns to address the stabilization gaps, teams adopt the new standards and patterns in their architecture.

Figure 5: Architecture models and adoption of new patterns

IT leaders must protect their investments against the rapid, ever-growing transformation of technologies while providing a stable array of business applications running on a constantly evolving and optimizing technological foundation. IT executives across the globe have been dealing with this problem more and more frequently.

They, and we, should embrace the evolution of technology, but not at the price of constant instability in the apps supporting the business. Disciplined, systematic architecture should be able to deliver just that. Consider the patterns discussed in this article series as strategies that favor rapid technological evolution while protecting the business apps from volatility. Let's explore how that can be done in the next article.
