Cloud Native Computing and AI: A Q&A with CNCF’s Head of Ecosystem – The New Stack

Artificial intelligence, and Generative AI in particular, has become a top subject of conversation in fields from food to fashion and just about everything else. It is making huge inroads in software development by generating documentation, alleviating developers' cognitive overload and churning out code, including test code. AI has also added value to platform engineering and its automation.

At the center of this rebirth of AI is cloud native computing and the Cloud Native Computing Foundation (CNCF).

So, in advance of this year’s KubeCon+CloudNativeCon EU, to be held in Paris March 19-22, I caught up with Taylor Dolezal, head of ecosystem and AI at CNCF, to discuss AI and Cloud Native. Dolezal has worked as a senior developer advocate for HashiCorp and a site reliability engineer for Walt Disney Studios. He started his IT career by founding Pixelmachinist, a software solutions company focused on businesses in the Cleveland area.

In this interview, Dolezal talks about how AI is affecting the CNCF and how the foundation is spearheading efforts toward ethical AI. He describes the success of the Kubernetes community, which has managed to unify infrastructure, and how those “lessons learned” could be used to help developers and architects. He also explores the synergies between AI and Cloud Native technologies and communities.

Generative AI in general and ChatGPT in particular seem to have impacted every facet of everyday life. Is this something that is going to impact cloud native computing, which to date has primarily dealt with infrastructure and has been somewhat removed from AI?

I have had the opportunity to witness the incredible potential of Generative AI and related technologies across many business verticals. In cloud native computing, which has traditionally focused on infrastructure, the emergence of Generative AI is not just an adjacent trend but a core driver of innovation. It prompts us to rethink our infrastructure paradigms to accommodate AI workloads, sharpen platform engineering with AI-driven insights, and ensure our systems are AI-ready. This integration represents a significant shift in how we design, deploy, and manage cloud native solutions, making AI an integral component of our ecosystem.
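As a concrete, hypothetical illustration of what “accommodating AI workloads” can mean in practice, the sketch below shows a Kubernetes Job that schedules a model-training container onto a GPU node. The image name and resource amounts are placeholders, and the `nvidia.com/gpu` resource assumes the NVIDIA device plugin is installed on the cluster:

```yaml
# Hypothetical sketch: a Kubernetes Job requesting a GPU for a
# training workload. Image name and resource sizes are placeholders.
apiVersion: batch/v1
kind: Job
metadata:
  name: llm-fine-tune
spec:
  backoffLimit: 2
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: trainer
          image: registry.example.com/train:latest  # placeholder image
          resources:
            requests:
              cpu: "4"
              memory: 16Gi
            limits:
              memory: 16Gi
              nvidia.com/gpu: 1  # steers the pod onto a GPU node via the device plugin
```

Declaring GPUs as a schedulable resource this way is one reason cloud native platforms are a natural home for AI workloads: the same orchestration primitives used for stateless services also place, retry, and scale training jobs.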

The AI & Data landscape is pretty daunting. Are you satisfied with the community participation and how the CNCF and the Linux Foundation have addressed this?

The contributions of our community members toward shaping the AI and Data landscape have been illuminating and helpful to the greater community. The CNCF is collaborating with the Linux Foundation to create an environment that encourages innovation in AI and data. We have launched multiple initiatives, including projects, working groups, and educational efforts, to make AI technologies accessible to developers and companies.

This high-level engagement is crucial to navigating the complexities of AI training and inference while keeping our community at the forefront of this technological evolution.

Model training and deployment for Large Language Models (LLMs) require a lot of infrastructure. However, the diverse nature of these disparate platforms can be intimidating for software developers and architects to comprehend and use. Just as Kubernetes unified infrastructure, is the end goal of the CNCF to provide a unified AI platform?

The complexity and diversity of machine learning models, their training, and the platforms used to deploy them pose a significant challenge for developers and architects. Taking inspiration from the success of Kubernetes in unifying infrastructure, the CNCF envisions a future where similar frameworks can improve the developer experience of AI workloads.

By hosting projects that promote productivity, encourage innovation, and provide broader access to advanced AI capabilities within the cloud native ecosystem, we aim to spotlight the progress made within our community. As a vendor-neutral foundation, we aren’t seeking to select a single platform that works for all (no kingmaking) but instead, provide options that allow adopters and builders to make the best possible choices in a composable, iterative way within their organizations.

Data is at the very core of all this and generally, a huge corpus of data is required to provide reliable services. Generating test data that is free of biases for training is important. Can you highlight some initiatives and tactical plans to address the gaps vis-a-vis data?

Our community acknowledges the vital role played by data in AI. Therefore, we continuously improve and discuss the best practices for handling data. We also support open source tools for data validation and storage. We encourage community-led projects that promote ethical AI. We aim to set new standards for responsible AI development in the cloud native landscape by bringing the community together and, most importantly — working together in public.

Multimodal AI has been eclipsed by the recent interest in Generative AI. If it’s not there (yet), is there something you would like to see that will likely make a profound impact on multimodal AI?

Although Generative AI has gained a lot of attention lately, multimodal AI has significant potential to enrich cloud native applications. I foresee future projects using multimodal AI to improve observability, security, and user experience in cloud native platforms. This will have a profound impact on the delivery and consumption of services.

Can you provide an example(s) that drives home the impact of multimodal AI on Cloud Native?

Multimodal AI integration has been a significant breakthrough in enhancing the adaptability and intelligence of applications across various domains. The healthcare sector is showing prominent examples of this impact. By leveraging cloud native architectures, multimodal AI improves patient care and diagnostics by analyzing diverse data, from medical imaging to electronic health records and real-time patient monitoring data.

Multimodal AI enables healthcare applications to provide more precise diagnostics, personalized treatment plans, and predictive health insights. This integration not only streamlines the healthcare delivery process but also enhances the scalability and efficiency of these applications, thanks to the inherent advantages of cloud native technologies such as microservices, containerization, and dynamic orchestration.

What are your predictions for AI-based announcements at KubeCon EU 2024? Anything else you would like to add?

Looking ahead to KubeCon EU 2024, I anticipate that there will be significant announcements within our ecosystem that relate to AI-based tooling, security enhancements, and sustainability initiatives within the cloud native landscape. The integration of AI in cloud native is likely to take center stage, showcasing innovations that facilitate easier adoption, scalability, and management of AI workloads. I’m looking forward to seeing a strong emphasis on ethical AI practices and community-driven projects that bridge the gap between AI technologies and cloud native principles.
