How to achieve deep observability in a hybrid cloud world and …

Changed economic conditions mean IT leaders are under pressure to do more with less, and the dream of moving all workloads into the public cloud has not been realised, says Gigamon CEO Shane Buckley. For cost, performance and scalability reasons, hybrid cloud is the reality in 2023.

But IT organisations still need to meet governance, risk and compliance (GRC) requirements, so the observability market is changing.

"Cloud is simple when it is; if it isn't, then it's really complicated" and hybrid cloud is complicated, he says.

Cloud-first observability vendors need to find ways of looking inside hybrid environments (eg, containers) in order to capitalise on what they already have.

Conversely, traditional approaches to observability combine log and network data analysis, but that network data is not available in public clouds. Furthermore, attackers are known to penetrate systems and then lie dormant for several months to "fool the enterprise into thinking everything's fine" before turning off logging and exfiltrating data.

The company has some 15 years' experience in on-premises and private cloud observability, and has brought that to public cloud and containers.

Gigamon's approach is able to obtain data from inside every cloud platform, capturing and aggregating traffic, and then transforming and enriching it to deliver actionable intelligence that can be connected to an organisation's observability and security stack, including detection and response systems, extended detection and response systems, and data lakes.
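As a rough illustration of that capture, aggregate, enrich and export flow, here is a toy pipeline in Python. All names and fields are hypothetical, chosen for the sketch; it shows the shape of the approach, not Gigamon's product or API.

```python
# Illustrative only: a toy capture -> aggregate -> enrich -> export
# pipeline. Names and fields are hypothetical, not Gigamon's API.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Flow:
    src: str
    dst: str
    dst_port: int
    bytes_sent: int

def aggregate(flows):
    """Roll raw traffic records up into per-conversation byte totals."""
    totals = defaultdict(int)
    for f in flows:
        totals[(f.src, f.dst, f.dst_port)] += f.bytes_sent
    return totals

def enrich(totals):
    """Attach context (here, a crude service label) so downstream
    tools receive actionable records rather than raw traffic."""
    port_names = {443: "https", 5432: "postgres"}
    return [{"src": s, "dst": d, "service": port_names.get(p, str(p)),
             "bytes": b} for (s, d, p), b in totals.items()]

def export(records, sinks):
    """Fan the same enriched telemetry out to every attached tool
    (SIEM, detection and response, data lake)."""
    for sink in sinks:
        for record in records:
            sink(record)

flows = [Flow("10.0.0.5", "10.0.1.9", 5432, 1400),
         Flow("10.0.0.5", "10.0.1.9", 5432, 900)]
export(enrich(aggregate(flows)), sinks=[print])
```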

That way, he explains, customers can keep all their existing tools, and when a workload is moved (eg, from an on-premises container to a container in the public cloud), Gigamon moves the telemetry with it.

Furthermore, organisations need consistent telemetry across hybrid clouds, whether they are doing 'lift and shift' migrations or modernising their applications to run in the cloud.

Things can become particularly complicated where microservices are involved. Applications might work well when all the parts are in the same place, but performance can suffer greatly if they become split between the data centre and the cloud. Without deep observability, it can be hard to determine what has gone wrong, because application performance monitoring tools do not work in the cloud. Buckley is aware of an application where the response time soared from around three seconds to three minutes in these circumstances.
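A back-of-the-envelope calculation shows how dramatic the effect can be. If one user request fans out into thousands of sequential service-to-service calls, the per-call round-trip time dominates; the figures below are illustrative assumptions, not measurements from the application Buckley mentions.

```python
# Why a chatty application slows down when split between the data
# centre and the cloud. Call count and round-trip times are
# illustrative assumptions only.
CALLS_PER_REQUEST = 3000   # sequential inter-service calls
LAN_RTT_S = 0.001          # ~1 ms between servers in one data centre
WAN_RTT_S = 0.060          # ~60 ms between data centre and cloud

for label, rtt in [("co-located", LAN_RTT_S), ("split", WAN_RTT_S)]:
    print(f"{label:>10}: {CALLS_PER_REQUEST * rtt:6.1f} s per request")
# co-located:    3.0 s per request
#      split:  180.0 s per request (three minutes)
```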

With Gigamon, IT organisations are able to see what is happening, and reallocate applications appropriately to improve performance.

It typically takes two to three years to modernise an application, and some have to be completely rewritten. A complete rewrite is rarely feasible, especially when IT is under pressure to do more with less and the benefits of such projects are unclear. So the likely choice is to keep using the old application, but in a container located on-premises or in the cloud.

Fewer workloads than originally expected have been moved to the cloud, Buckley says, and 'Cloud 2.0' reflects a realisation that some applications do not run well in the cloud (especially those that involve sending data back to the data centre), therefore they should not be moved into the cloud. From Gigamon's perspective, "it's whatever's best for the customer."

Gigamon ANZ country manager Jonathan Hatchuel points out that one large bank's "everything in the cloud" policy is now read as if it were "everything in the hybrid cloud," reflecting Cloud 2.0 thinking.

Whether an application is being kept on-premises or moved to the cloud, Gigamon can help. Organisations can choose Tanzu, Kubernetes, OpenShift or whichever platform works best for them, and Gigamon can provide the telemetry needed to ensure applications are working properly, says Buckley.

Public cloud "is important, but not a panacea," so some 90 percent of organisations are adopting hybrid cloud. That is often a hybrid multi-cloud strategy, especially when SaaS applications such as Salesforce and Workday are part of the picture.

That hybrid strategy, especially for larger organisations, includes a program of consolidating and renovating data centres in the right locations (power and cooling capacity are among the criteria), using modern virtualisation and container technology.

A particular use of public cloud that makes financial and operational sense is to provide burst capacity to deal with peak loads. This is another situation where Gigamon's technology can be used to provide the telemetry to feed the organisation's management and security tools.
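A minimal sketch of the routing decision behind bursting, assuming a made-up on-premises capacity figure; in practice this logic lives in load balancers and autoscalers rather than application code.

```python
# Burst-capacity sketch: serve steady-state load on-premises and
# spill only the excess to the public cloud. The capacity figure
# and the rule itself are illustrative assumptions.
ON_PREM_CAPACITY = 1000  # requests/sec the data centre handles well

def route(load_rps: int) -> dict:
    """Split incoming load between on-prem and cloud burst capacity."""
    on_prem = min(load_rps, ON_PREM_CAPACITY)
    return {"on_prem": on_prem, "cloud_burst": load_rps - on_prem}

print(route(800))   # {'on_prem': 800, 'cloud_burst': 0}
print(route(1500))  # {'on_prem': 1000, 'cloud_burst': 500}
```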

Zero trust architectures are becoming a required element of hybrid cloud security, he says, in part due to US Executive Order 14028, which requires, among other things, that all US federal agencies develop a plan to implement zero trust. The idea is spreading to other nations, including Australia.

Hatchuel points out that all of Gigamon's Australian customers are seeing pressure as Federal Government regulations change. Australia is regarded as a pioneer in critical infrastructure security (as shown by the appointment of a Minister for Cyber Security and the promulgation of the Essential Eight), but there have been several large, high-profile security breaches.

"Government has set some policy for the regulatory environment around critical infrastructure," he notes, and organisations need to comply. But as one local Gigamon customer observed, "you can't regulate what you can't see," so visibility is a key to security.

Gigamon is already part of most government infrastructure, says Buckley, as well as that of most service providers, and most enterprises that aren't already customers are looking at it. "Gigamon has the Who's Who of Australian business" and government.

While security models vary, they all say you cannot have blind spots in a network, especially for east-west traffic between servers in a data centre. Nothing should be implicitly trusted: zero trust means everything has to prove it can be trusted. Making sure that happens requires continuous visibility, and Gigamon's approach includes monitoring east-west traffic, not only by extracting metadata from packet headers but also by acting as a man in the middle that can break and inspect encrypted traffic on behalf of security tools.
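To make the first technique concrete, the standard-library Python sketch below pulls flow metadata out of an IPv4/TCP header without reading the (possibly encrypted) payload. It is illustrative only and implies nothing about how Gigamon implements this; break-and-inspect of encrypted traffic is a separate capability not shown here.

```python
# Header-level metadata extraction (illustrative, standard library
# only). Only the IPv4/TCP headers are parsed; payload bytes, which
# may be encrypted, are never read.
import socket
import struct

def flow_metadata(packet: bytes) -> dict:
    """Return the flow 5-tuple for an IPv4/TCP packet."""
    ihl = (packet[0] & 0x0F) * 4               # IPv4 header length
    proto = packet[9]                          # 6 = TCP
    src, dst = packet[12:16], packet[16:20]
    sport, dport = struct.unpack("!HH", packet[ihl:ihl + 4])
    return {"src": socket.inet_ntoa(src), "dst": socket.inet_ntoa(dst),
            "proto": proto, "sport": sport, "dport": dport}

# A hand-built header: 10.0.0.5:49152 -> 10.0.1.9:443 over TCP.
hdr = bytes([0x45, 0, 0, 40, 0, 0, 0, 0, 64, 6, 0, 0,
             10, 0, 0, 5, 10, 0, 1, 9]) + struct.pack("!HH", 49152, 443)
print(flow_metadata(hdr))
# {'src': '10.0.0.5', 'dst': '10.0.1.9', 'proto': 6,
#  'sport': 49152, 'dport': 443}
```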

"Our strategy is to completely empower IT to move workloads wherever they want."

"Over time, tools will change," but Gigamon is committed to remaining the Switzerland of observability, connecting any source system with any management or security tool. "We're giving optionality to the business."

Buckley was in Australia as part of Gigamon's GigaTour of 26 locations in 16 countries across three continents.
