Category Archives: Cloud Hosting

3 Under-the-Radar Cloud Computing Stocks With a Massive Growth Runway – InvestorPlace

Source: jossnat/Shutterstock

Current predictions show the cloud computing industry growing at a rapid pace in the coming years. Much of this growth stems from novel approaches to cloud technology, such as hybrid clouds, multi-clouds, and edge computing. These developing technologies now affect several sectors and have fueled the rise of several under-the-radar cloud computing stocks.

Traditional cloud computing applications include e-commerce, healthcare, and entertainment, but with the technology advancing, several custom applications are developing. As such, newer cloud computing companies are flourishing in their respective niches, producing potentially lucrative stocks to keep an eye on.

However, as a general disclaimer, the stocks mentioned in this article are currently on the growth path. Their market capitalization and overall maturity are low relative to the cloud computing ventures of tech giants like Microsoft and Amazon. With this in mind, here are three companies exploring new frontiers in cloud computing that could lead to some serious growth.

Source: monticello / Shutterstock.com

By specializing in cloud hosting and Infrastructure-as-a-Service (IaaS), DigitalOcean Holdings (NYSE:DOCN) has carved out a niche among developers and startups. The company currently offers several technical derivations of cloud computing, such as virtual machines and custom-managed Kubernetes services for containerized applications.

One particular service the company offers that helps it stand out is its App Platform, monetized as a platform-as-a-service. This service allows developers to build and deploy applications without having to manage any server-side integration. DigitalOcean further sweetens the deal for small businesses and independent developers by offering hourly billing to allow customers to pay for services as they need them.
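As a rough illustration of why hourly billing appeals to intermittent workloads, consider a development server that only runs during working hours. The rates below are hypothetical placeholders, not DigitalOcean's actual prices:

```python
# Hypothetical comparison of hourly vs. flat monthly billing for a
# small virtual machine that runs only during working hours.
HOURLY_RATE = 0.018      # assumed $/hour (illustrative, not a real price)
MONTHLY_FLAT = 12.00     # assumed flat monthly price for the same machine

hours_used = 8 * 22      # 8 hours/day, 22 working days in a month

hourly_cost = hours_used * HOURLY_RATE
print(f"Pay-as-you-go: ${hourly_cost:.2f} vs. flat rate: ${MONTHLY_FLAT:.2f}")
```

Under these assumed rates, the part-time workload costs a fraction of the flat monthly fee, which is exactly the economics that attracts independent developers.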

Furthermore, with AI making software development more and more accessible to smaller companies, demand for DigitalOcean's streamlined services could increase. I believe DOCN's niche and targeted products give it a significant runway for growth, should the stars align for it.

Source: Blackboard / Shutterstock

Another of the under-the-radar cloud computing stocks to watch is Duos Technologies Group (NASDAQ:DUOT). Though not as new as other stocks on this list, the company tailors its cloud solutions to the rail industry. By committing to designing proprietary AI and technologies around inspecting railcars, DUOT has been able to continue growing alongside global rail projects.

However, this growth has not come without costs, and the company operated at a loss last year. Still, DUOT's technology has become the industry standard for railway safety and inspection; as a result, the company was entrusted with scanning over 8.5 million railcars in 2023.

Investors should be diligent when considering a cloud computing stock like DUOT. Its future profitability remains tied to the regulation of rail safety standards across the US and beyond. As such, should regulators make DUOT's services mandatory for all railcars in the US, the company's growth could explode.

Source: Pavel Kapysh / Shutterstock.com

Last but not least, one of the niche cloud computing stocks to consider is Fastly (NYSE:FSLY), which has staked its future on edge computing. Dubbed the Edge Cloud Platform, Fastly's global server network is strategically positioned closer to users than traditional data centers.

With this proximity, the company offers customers reduced latency, since data travels shorter distances, resulting in faster loading and responsiveness. As such, the company's primary customers are providers of entertainment, e-commerce, and fintech services. Fastly's edge computing can process large volumes of data at high speed, enabling everything from financial calculations to high-definition streaming.
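The latency benefit of proximity follows directly from propagation delay. A back-of-the-envelope sketch (the distances are assumptions chosen purely for illustration; real latency also includes routing and processing overhead):

```python
# Best-case round-trip propagation delay over fiber, where light travels
# at roughly two-thirds the speed of light in vacuum (~200,000 km/s).
SPEED_IN_FIBER_KM_S = 200_000

def round_trip_ms(distance_km: float) -> float:
    """Best-case round-trip time in milliseconds for a given distance."""
    return 2 * distance_km / SPEED_IN_FIBER_KM_S * 1000

central_dc = round_trip_ms(4000)  # user to a distant regional data center (assumed)
edge_pop = round_trip_ms(100)     # user to a nearby edge point of presence (assumed)
print(f"central: {central_dc:.1f} ms, edge: {edge_pop:.1f} ms")
```

Even in this idealized model, moving the server from 4,000 km away to 100 km away cuts the floor on round-trip time from tens of milliseconds to about one, which is why edge networks matter for interactive workloads.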

Though the company has not consistently reported net income yet, the potential is strong.

FY23 saw a 56.6% increase in net profit margin, which brought the company closer to profitability. If Fastly can grow revenue in 2024 past the tipping point, its valuation could see a significant increase.

On the date of publication, Viktor Zarev did not have (either directly or indirectly) any positions in the securities mentioned in this article. The opinions expressed in this article are those of the writer, subject to the InvestorPlace.com Publishing Guidelines.

Viktor Zarev is a scientist, researcher, and writer specializing in explaining the complex world of technology stocks through dedication to accuracy and understanding.

Read the rest here:
3 Under-the-Radar Cloud Computing Stocks With a Massive Growth Runway - InvestorPlace

Cloud Native Computing Foundation Announces the Winners of its First CloudNativeHacks Hackathon – PR Newswire

First-ever hackathon showcases innovative and unique solutions to pressing sustainability challenges

PARIS, March 22, 2024 /PRNewswire/ -- KubeCon + CloudNativeCon Europe -- The Cloud Native Computing Foundation (CNCF), which builds sustainable ecosystems for cloud native software, today announced the first-, second-, and third-place winners of CloudNativeHacks.

During KubeCon + CloudNativeCon Europe 2024, CNCF, in collaboration with the United Nations, hosted its first-ever hackathon, CloudNativeHacks, to focus on advancing the delivery of the UN Sustainable Development Goals (SDGs). Sponsored by Heroku, the hackathon challenged individuals and teams to develop proofs of concept to help support these development goals, working together to solve pressing issues and contribute meaningfully to creating a better, more sustainable world.

CloudNativeHacks Winners

First place: Team Urban Unity - Carolina Lindqvist and Syed Ali Raza Zaidi - whose project addresses SDG 11: Sustainable Cities and Communities and SDG 17: Partnerships for the Goals.

Team Urban Unity, from Switzerland and the UK, developed a proof of concept for a platform that democratizes urban planning. They created a map where urban planners can drop a pin to propose a new building; local residents who would rather have, say, a park can then provide feedback on the proposal. It is a platform for the people, run by the people.

Second place: Team Forrester - Radu-Stefan Zamfir, Alex-Andrei Cioc, and George-Alexandru Tudurean - whose project addresses SDG 13: Climate Action and SDG 15: Life on Land.

Team Forrester, from Romania, developed an app that spreads awareness and handles automatic detection and monitoring of deforestation globally, leveraging AI, open source software, and publicly available data such as satellite imagery.

Third place: Team Potato - Inhwan Hwang, Sungjin Hong, and Myeonghun Yu - whose project addresses SDG 5: Gender Equality and SDG 11: Sustainable Cities and Communities.

Team Potato from Korea developed a project that creates a crowd-guarded route, a collaborative map using luminance to gauge the safety of a chosen walking path.

"As we celebrate ten years of Kubernetes, it has been an honor to see #TeamCloudNative come together to use cloud native technologies to help create a more sustainable future," said Arun Gupta, Vice President and General Manager, Open Ecosystem at Intel and Chairperson of the Governing Board for CNCF. "I am so proud of the participants and want to congratulate the winners."

"Congratulations to the winners of the first-ever CloudNativeHacks event," said Priyanka Sharma, Executive Director of the Cloud Native Computing Foundation. "It was inspiring to see the diverse and innovative ideas and I am thrilled that cloud native technologies were the building blocks for creating applications that help impact our world for generations to come."

"As a technology that accelerates the development of applications, it is great to support the first ever CloudNativeHacks and see applications that help with the sustainability of our planet built in just two days," said Bob Wise, CEO of Heroku. "We look forward to seeing how these applications can change the future."

The hackathon was presided over by a panel of judges from the cloud native community and the United Nations, including:

Winners received $10,000, $5,000, and $2,500 respectively.

Additional Resources

About the Cloud Native Computing Foundation: Cloud native computing empowers organizations to build and run scalable applications with an open source software stack in public, private, and hybrid clouds. The Cloud Native Computing Foundation (CNCF) hosts critical components of the global technology infrastructure, including Kubernetes, Prometheus, and Envoy. CNCF brings together the industry's top developers, end users, and vendors, and runs the largest open source developer conferences in the world. Supported by more than 800 members, including the world's largest cloud computing and software companies, as well as over 200 innovative startups, CNCF is part of the nonprofit Linux Foundation. For more information, please visit http://www.cncf.io.

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our trademark usage page. Linux is a registered trademark of Linus Torvalds.

Media Contact: Jessie Adams-Shore, The Linux Foundation, [emailprotected]

SOURCE Cloud Native Computing Foundation

See the original post:
Cloud Native Computing Foundation Announces the Winners of its First CloudNativeHacks Hackathon - PR Newswire

What Is Cloud Security? – CrowdStrike

Cloud security definition

Cloud security is a discipline of cybersecurity focused on the protection of cloud computing systems. It involves a collection of technologies, policies, services, and security controls that protect an organization's sensitive data, applications, and environments.

Cloud computing, commonly referred to as the cloud, is the delivery of hosted services like storage, servers, and software through the internet. Cloud computing allows businesses to reduce costs, accelerate deployments, and develop at scale.

Cloud security goals:


As companies continue to transition to a fully digital environment, the use of cloud computing has become increasingly popular. But cloud computing comes with cybersecurity challenges, which is why understanding the importance of cloud security is essential in keeping your organization safe.

Over the years, security threats have become incredibly complex, and every year, new adversaries enter the field. In the cloud, all components can be accessed remotely 24/7, so the lack of a proper security strategy puts all of that data at risk at once. According to the CrowdStrike 2024 Global Threat Report, cloud environment intrusions increased by 75% from 2022 to 2023, with a 110% year-over-year increase in cloud-conscious cases and a 60% year-over-year increase in cloud-agnostic cases. Additionally, the report revealed that the average breakout time for interactive eCrime intrusion activity in 2023 was 62 minutes, with one adversary breaking out in just 2 minutes and 7 seconds.

Cloud security should be an integral part of an organization's cybersecurity strategy, regardless of its size. Many believe that only enterprise-sized companies are victims of cyberattacks, but small and medium-sized businesses are among the biggest targets for threat actors. Organizations that do not invest in cloud security face immense risks, including suffering a data breach and falling out of compliance when managing sensitive customer data.


An effective cloud security strategy employs multiple policies and technologies to protect data and applications in cloud environments from every attack surface. Some of these technologies include identity and access management (IAM) tools, firewall management tools, and cloud security posture management tools, among others.

Organizations also have the option to deploy their cloud infrastructures using different models, which come with their own sets of pros and cons.

The four available cloud deployment models are:

Public cloud: This model is the most affordable, but it is also associated with the greatest risk, because a breach in one account puts all other accounts at risk.

Private cloud: The benefit of this deployment model is the level of control it provides an individual organization. Additionally, it provides enhanced security and eases compliance, making it the model most leveraged by organizations that handle sensitive information. However, it is expensive to use.

Hybrid cloud: The biggest benefits of this deployment model are the flexibility and performance it offers.

Most organizations use a third-party CSP such as Google Cloud Platform (GCP), Amazon Web Services (AWS), and Microsoft Azure to host their data and applications. Strong cloud security involves shared responsibility between these CSPs and their customers.

It is important not to rely only on security measures set by your CSP; you should also implement security measures within your organization. Though a solid CSP should have strong security to protect from attackers on its end, if there are security misconfigurations, privileged access exploitations, or some form of human error within your organization, attackers can potentially move laterally from an endpoint into your cloud workload. To avoid issues, it is essential to foster a security-first culture by implementing comprehensive security training programs to keep employees aware of cybersecurity best practices, common ways attackers exploit users, and any changes in company policy.

The shared responsibility model outlines the security responsibilities of cloud providers and customers based on each type of cloud service: software as a service (SaaS), platform as a service (PaaS), and infrastructure as a service (IaaS).

This table breaks down the customer's share of the security responsibility by cloud service model:

SaaS: misconfigurations, workloads, and data

PaaS: endpoints, user and network security, and workloads

IaaS: endpoints, user and network security, workloads, and data

The dynamic nature of cloud security opens up the market to multiple types of cloud security solutions, which are considered pillars of a cloud security strategy. These core technologies include:

It is essential to have a cloud security strategy in place. Whether your cloud provider has built-in security measures or you partner with the top cloud security providers in the industry, you can gain numerous benefits from cloud security. However, if you do not employ or maintain it correctly, it can pose challenges.

The most common benefits include:

Unlike traditional on-premises infrastructure, the public cloud has no defined perimeter. This lack of clear boundaries poses several cybersecurity challenges and risks.

Failure to properly secure cloud workloads makes the application and organization more susceptible to breaches, delays app development, compromises production and performance, and puts the brakes on the speed of business.

In addition, organizations tend to rely on the default access controls of their cloud providers, which can become an issue in multi-cloud or hybrid cloud environments. Insider threats can do a great deal of damage with their privileged access, knowledge of where to strike, and ability to hide their tracks.

To address these cloud security risks, threats, and challenges, organizations need a comprehensive cybersecurity strategy designed around vulnerabilities specific to the cloud.

Though cloud environments can be open to vulnerabilities, there are many cloud security best practices you can follow to secure the cloud and prevent attackers from stealing your sensitive data.

Some of the most important practices include:

Why embrace Zero Trust?

The basic premise of the Zero Trust principle in cloud security is to not trust anyone or anything in or outside the organization's network. It ensures the protection of sensitive infrastructure and data in today's world of digital transformation. The principle requires all users to be authenticated, authorized, and validated before they get access to sensitive information, and they can easily be denied access if they don't have the proper permissions.
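A deny-by-default access check captures the idea in a few lines. This is a minimal sketch with hypothetical names; real Zero Trust implementations also evaluate device posture, request context, and continuous signals:

```python
from dataclasses import dataclass

@dataclass
class Request:
    user_authenticated: bool  # identity verified (e.g., MFA passed)
    token_valid: bool         # session/credential has not expired or been revoked
    permissions: set          # permissions attached to the verified identity

def allow(request: Request, required_permission: str) -> bool:
    """Deny by default: every check must pass before access is granted."""
    if not request.user_authenticated:
        return False
    if not request.token_valid:
        return False
    return required_permission in request.permissions

req = Request(user_authenticated=True, token_valid=True,
              permissions={"read:reports"})
print(allow(req, "read:reports"))   # granted: all checks pass
print(allow(req, "write:reports"))  # denied: permission is missing
```

The key design choice is that there is no code path that grants access implicitly; any failed or missing check falls through to a denial.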

CrowdStrike has redefined security with the world's most complete CNAPP that secures everything from code to cloud and enables the people, processes, and technologies that drive modern enterprise.

With a 75% increase in cloud-conscious attacks in the last year, it is essential for your security teams to partner with the right security vendor to protect your cloud, prevent operational disruptions, and protect sensitive information in the cloud. CrowdStrike continuously tracks 230+ adversaries to give you industry-leading intelligence for robust threat detection and response.

The CrowdStrike Falcon platform contains a range of capabilities designed to protect the cloud. CrowdStrike Falcon Cloud Security stops cloud breaches by consolidating all the critical cloud security capabilities that you need into a single platform for complete visibility and unified protection. Falcon Cloud Security offers cloud workload protection; cloud, application, and data security posture management; CIEM; and container security across multiple environments.


Read the original post:
What Is Cloud Security? - CrowdStrike

Nvidia Blackwell GPUs to be offered via AWS, Microsoft, Google, Oracle and others – DatacenterDynamics

On the back of Nvidia announcing its latest Blackwell line of GPUs, the hyperscale cloud providers have all announced plans to offer access to them later this year.

Oracle, Amazon, Microsoft, and Google have all said they will offer access to the new GPUs through their respective cloud platforms at launch. Lambda and NexGen, both GPU cloud providers, have said they will soon offer access to Blackwell hardware.

The launch of the H100 Hopper GPU saw niche cloud providers including CoreWeave and Cirrascale get first access, with H100 instances coming to the big cloud platforms later.

Malaysian conglomerate YTL, which recently moved into developing data centers, is also set to host and offer access to a DGX supercomputer.

Singaporean telco Singtel is also set to launch a GPU cloud service later this year.

Applied Digital, a US company previously focused on hosting cryptomining hardware, has also announced it will host Blackwell hardware.

Oracle said it plans to offer Nvidia's Blackwell GPUs via its OCI Supercluster and OCI Compute instances. OCI Compute will adopt both the Nvidia GB200 Grace Blackwell Superchip and the Nvidia Blackwell B200 Tensor Core GPU.

Oracle also said Nvidia's Oracle-based DGX Cloud cluster will consist of GB200 NVL72 systems, combining 72 Blackwell GPUs and 36 Grace CPUs interconnected by fifth-generation NVLink. Access will be available through GB200 NVL72-based instances.

"As AI reshapes business, industry, and policy around the world, countries and organizations need to strengthen their digital sovereignty in order to protect their most valuable data," said Safra Catz, CEO of Oracle.

"Our continued collaboration with Nvidia and our unique ability to deploy cloud regions quickly and locally will ensure societies can take advantage of AI without compromising their security."

Google announced its adoption of the new Nvidia Grace Blackwell AI computing platform. The company said Google is adopting the platform for various internal deployments and will be one of the first cloud providers to offer Blackwell-powered instances.

The search and cloud company also said the Nvidia H100-powered DGX Cloud platform is now generally available on Google Cloud. The company said it will bring Nvidia GB200 NVL72 systems, which combine 72 Blackwell GPUs and 36 Grace CPUs interconnected by fifth-generation NVLink, to its cloud infrastructure in future.

"The strength of our long-lasting partnership with Nvidia begins at the hardware level and extends across our portfolio - from state-of-the-art GPU accelerators, to the software ecosystem, to our managed Vertex AI platform," said Google Cloud CEO Thomas Kurian.

"Together with Nvidia, our team is committed to providing a highly accessible, open, and comprehensive AI platform for ML developers."

Microsoft also said it will be one of the first organizations to bring the power of Nvidia Grace Blackwell GB200 and advanced Nvidia Quantum-X800 InfiniBand networking to the cloud and will be offering them through its Azure cloud service.

Microsoft also announced the general availability of its Azure NC H100 v5 virtual machine (VM), based on the Nvidia H100 NVL platform, which is designed for midrange training and inferencing.

"Together with Nvidia, we are making the promise of AI real, helping drive new benefits and productivity gains for people and organizations everywhere," said Satya Nadella, chairman and CEO of Microsoft.

"From bringing the GB200 Grace Blackwell processor to Azure to new integrations between DGX Cloud and Microsoft Fabric, the announcements we are making today will ensure customers have the most comprehensive platforms and tools across every layer of the Copilot stack, from silicon to software, to build their own breakthrough AI capability."

Blackwell hardware is also coming to Amazon Web Services (AWS). The companies said AWS will offer the GB200 Grace Blackwell Superchip and B100 Tensor Core GPUs on its cloud platform.

AWS will offer the Blackwell platform, featuring GB200 NVL72, with 72 Blackwell GPUs and 36 Grace CPUs interconnected by fifth-generation NVLink. The cloud provider also plans to offer EC2 instances featuring the new B100 GPUs deployed in EC2 UltraClusters. GB200s will also be available on Nvidia's DGX Cloud within AWS.

"The deep collaboration between our two organizations goes back more than 13 years, when together we launched the world's first GPU cloud instance on AWS, and today we offer the widest range of Nvidia GPU solutions for customers," said Adam Selipsky, CEO at AWS.

"Nvidia's next-generation Grace Blackwell processor marks a significant step forward in generative AI and GPU computing. When combined with AWS's powerful Elastic Fabric Adapter networking, Amazon EC2 UltraClusters' hyperscale clustering, and our unique Nitro System's advanced virtualization and security capabilities, we make it possible for customers to build and run multi-trillion-parameter large language models faster, at massive scale, and more securely than anywhere else."

In its own announcement, GPU cloud provider Lambda Labs said it would be one of the first companies to deploy the latest Blackwell hardware.

The GB200 Grace Blackwell Superchip and the B200 and B100 Tensor Core GPUs will be available through Lambda's On-Demand and Reserved Cloud, and Blackwell-based DGX SuperPODs will be deployed in Lambda's AI-Ready Data Centers.

NexGen, a GPU cloud and Infrastructure-as-a-Service provider, also announced it would be among the first cloud providers to offer access to Blackwell hardware.

The company said it will provide these services as part of its AI Supercloud, which is itself planned for Q2 2024.

"Being one of the first Elite Cloud Partners in the Nvidia Partner Network to offer Nvidia Blackwell-powered products to the market marks a major milestone for our business," said Chris Starkey, CEO of NexGen Cloud.

"Through Blackwell-powered solutions, we will be able to equip customers with the most powerful GPU offerings on the market, empowering them to drive innovation whilst achieving unprecedented efficiencies. This will help unlock new opportunities across industries and enhance the way we use AI, both now and in the future."

Malaysia's YTL, which is developing data centers in Johor, is moving to become an AI cloud provider.

The company this week announced the formation of YTL AI Cloud, a specialized provider of GPU-based computing. The new unit will deploy and manage one of the world's most advanced supercomputers on Nvidia's Grace Blackwell-powered DGX Cloud.

The YTL AI Supercomputer will reportedly deliver more than 300 exaflops of AI compute.

The supercomputer will be located in a facility at the 1,640-acre YTL Green Data Center Campus, Johor. The site will reportedly be powered via 500MW of on-site solar capacity.

YTL Power International Managing Director Dato' Seri Yeoh Seok Hong said: "We are proud to be working with Nvidia and the Malaysian government to bring powerful AI cloud computing to Malaysia.

"We are excited to bring this supercomputing power to the Asia Pacific region, which has been home to many of the fastest-growing cloud regions and many of the most innovative users of AI in the world."

In the US, Applied Digital also said it would be "among the pioneering cloud service providers" offering Blackwell GPUs. Further details weren't shared.

Applied develops and operates next-generation data centers across North America to cater to high-performance computing (HPC). It was previously focused on hosting cryptomining hardware. The company also has a cloud offering through Sai Computing.

"Applied Digital demonstrates a profound commitment to driving generative AI, showcasing a deep understanding of its transformative potential. By seamlessly integrating infrastructure, Applied breathes life into generative AI, recognizing the critical role of GPUs and supporting data center infrastructure in its advancement," said Wes Cummins, CEO and chairman of Applied Digital.

Singaporean telco Singtel announced it will be launching its GPU-as-a-Service (GPUaaS) in Singapore and Southeast Asia in the third quarter of this year.

At launch, Singtel's GPUaaS will be powered by Nvidia H100 Tensor Core GPU clusters operated in existing upgraded data centers in Singapore. In addition, Singtel - like everyone else - will be among the world's first to deploy GB200 Grace Blackwell Superchips.

Bill Chang, CEO of Singtel's Digital InfraCo unit and its Nxera regional data center business, said: "We are seeing keen interest from the private and public sectors, which are raring to deploy AI at scale quickly and cost-effectively.

"Our GPUaaS will run in AI-ready data centers specifically tailored for intense compute environments, with purpose-built liquid-cooling technologies for maximum efficiency and the lowest PUE, giving customers the flexibility to deploy AI without having to invest in and manage expensive data center infrastructure."

Original post:
Nvidia Blackwell GPUs to be offered via AWS, Microsoft, Google, Oracle and others - DatacenterDynamics

What is Quantum Cloud Computing? Definition & How it Works – Techopedia


Read the original:
What is Quantum Cloud Computing? Definition & How it Works - Techopedia

VMware by Broadcom reportedly offers olive branch to some cloud providers – TechRadar

Broadcom appears to have softened its stance somewhat by giving some VMware customers a peace offering.

An exclusive report by The Register notes the shift comes several weeks after the company decided to terminate VMware's Cloud Services Provider (VCSP) program, among other fairly significant shakeups.

The news also comes just a few days after the company's CEO, Hock Tan, addressed customer unease following Broadcom's $61 billion acquisition of VMware at the end of 2023.

Previously, VCSP assisted partners in offering VMware applications as managed services; however, Broadcoms move left many partners in limbo, facing the prospect of not being able to continue offering VMware-powered services.

In response to feedback, Broadcom is now reported to have introduced a white label program that would allow cloud providers that do not meet Broadcom's core licensing requirements to continue operating by partnering with established affiliates.

This peace offering does two things: it preserves existing partnerships and allows smaller providers to continue operating their businesses, and it ensures that VMware services continue to serve their customers and end users.

The decision comes just in time, before the looming deadline for terminating VMware's CSP program, and offers a viable alternative for those who had been threatened with partner status termination.


Though the move does address some key concerns, Broadcom continues to be monitored by industry analysts globally, and it remains to be seen whether the company is indeed on track to boost VMware's profitability.

However, many are wondering whether this is too little too late, with customers already jumping ship to other hypervisor alternatives.

Follow this link:
VMware by Broadcom reportedly offers olive branch to some cloud providers - TechRadar

Fluences Cloudless Platform Goes Live as Alternative to AWS, Google Cloud – Tekedia

In an era where cloud computing has become synonymous with big tech giants like AWS and Google Cloud, a new player has emerged with a revolutionary proposition. Fluence, a decentralized platform, has officially launched its Cloudless Platform, offering a unique alternative to the centralized services that currently dominate the market.

The launch of Protocol Village marks a significant milestone for Fluence, as it represents a major leap forward in the realm of decentralized networks. This innovative platform is designed to enhance collaboration and interoperability between different protocols, fostering a more unified and efficient blockchain ecosystem.

The Cloudless Platform by Fluence is designed to operate without the need for traditional cloud infrastructure. Instead, it leverages a network of independent nodes to provide computing resources and data storage. This approach not only challenges the status quo of cloud services but also aims to address some of the key concerns associated with them, such as privacy, security, and vendor lock-in.


Fluence's platform operates on the principle of protocol interoperability, allowing various applications and services to communicate seamlessly. This is achieved through Protocol Village, a suite of protocols that ensure compatibility and smooth operation across different systems and services.

One of the standout features of Fluences platform is its commitment to open-source development. By fostering a community-driven approach, Fluence encourages innovation and collaboration among developers. This open ecosystem is expected to accelerate the development of new applications and services that can run on the Cloudless Platform.

As businesses and individuals become increasingly aware of the implications of data sovereignty and digital autonomy, Fluence's Cloudless Platform presents an attractive proposition. It offers users control over their data while providing a robust and scalable solution for their computing needs.

The launch also sets Fluence on course to redefine the landscape of cloud computing. With its decentralized model, Fluence is poised to empower users with greater freedom and flexibility in how they manage their digital resources.

The implications of Protocol Village are far-reaching. By providing a common ground for various protocols to interact, it paves the way for more seamless integration of services and applications. This not only benefits developers but also end-users who will enjoy a more cohesive experience across different blockchain platforms.

Looking ahead, the launch of Protocol Village is just the beginning. It sets the stage for a future where decentralized networks can operate more harmoniously, unlocking new possibilities for innovation and growth in the digital world.

As we move forward, it will be interesting to observe how the market responds to this alternative approach to cloud computing. Will Protocol Village and Fluence's Cloudless Platform disrupt the dominance of established players? Only time will tell.


Follow this link:
Fluence's Cloudless Platform Goes Live as Alternative to AWS, Google Cloud - Tekedia

Kubernetes: the driving force behind cloud-native evolution – SiliconANGLE News

As KubeCon + CloudNativeCon Europe 2024 draws to a close, the event leaves behind a rich tapestry of insights and advancements in the world of cloud computing and Kubernetes.

This year's conference, marked by its vibrant atmosphere of nostalgia mixed with forward-looking enthusiasm, celebrated the significant milestone of Kubernetes' tenth anniversary. Attendees and speakers alike delved into retrospective discussions about Kubernetes' transformative journey over the past decade while also casting a keen eye on its future, particularly in the realms of artificial intelligence integration and the evolving landscape of cloud-native technologies.

"Absolutely packed, really good interest across the board from community developers," said Dustin Kirkland (pictured, second from left), guest analyst on theCUBE, SiliconANGLE Media's livestreaming studio. "Some maybe new to the open-source and CNCF community, but a lot of enterprise interest, a lot of EU enterprise interest in the various solutions that are surrounding us here on the show floor."

During the event, Kirkland and his co-analysts, Rob Strechay (right), Savannah Peterson (second from right), and Joep Piscaer (left), discussed the significant attendance and community engagement, the evolution and maturity of Kubernetes and cloud-native technologies, and the anticipation of their future integration with AI and data science. (* Disclosure below.)

The conference highlighted the shift toward a developer-centric approach in cloud-native technology, noting the importance of empowering developers to build meaningful solutions for businesses, according to Piscaer. This focus is indicative of the maturity of Kubernetes and the broader ecosystem, moving beyond infrastructure concerns to address more complex, business-driven requirements.

"Kubernetes is just mature; like you say, people have implemented it. We're looking at the developer: how to empower them, how to enable them to actually build something that makes sense for the business," Piscaer said. "That's what excites me in this show, actually having those conversations about what the developers need, what the business needs. We're kind of in a phase where we can just say the infrastructure part is there, it's a commodity again, which I just enjoy."

The integration of new contributors and the expansion of the community also plays a critical role, according to Strechay. The diversity and growth of the community contribute significantly to the evolution of cloud-native technologies. With more than half of the attendees being newcomers, there's a vibrant exchange of ideas and experiences that enriches the ecosystem.

"Backstage has more individual contributors than any of the other projects that are out there. It may not have all of the contributions, but to me, that's the interface between platform engineering and developer," Strechay said. "I think we've heard through a number of the discussions we've had this week: how do you make that? How do you make platform engineering understand what the developers need? And, by the way, now you've got this guy called a data scientist who's trying to put other things and models in places that models haven't been before."

In summary, the latest KubeCon + CloudNativeCon has underscored a pivotal shift in the cloud-native ecosystem. The focus is now firmly on empowering developers and integrating AI and data science into cloud-native strategies. This evolution, backed by a diverse and growing community, sets a promising trajectory for the future of cloud-native technologies.

Here's the complete video interview, part of SiliconANGLE's and theCUBE Research's coverage of KubeCon + CloudNativeCon Europe:

(* Disclosure: TheCUBE is a paid media partner for the KubeCon + CloudNativeCon Europe event. No sponsors have editorial control over content on theCUBE or SiliconANGLE.)


Follow this link:
Kubernetes: the driving force behind cloud-native evolution - SiliconANGLE News

10 Cloud Security Issues & Tips to Manage Them – Techopedia


More:
10 Cloud Security Issues & Tips to Manage Them - Techopedia

AWS and NVIDIA Extend Collaboration to Advance Generative AI Innovation – NVIDIA Blog

GTC – Amazon Web Services (AWS), an Amazon.com company (NASDAQ: AMZN), and NVIDIA (NASDAQ: NVDA) today announced that the new NVIDIA Blackwell GPU platform unveiled by NVIDIA at GTC 2024 is coming to AWS. AWS will offer the NVIDIA GB200 Grace Blackwell Superchip and B100 Tensor Core GPUs, extending the companies' longstanding strategic collaboration to deliver the most secure and advanced infrastructure, software, and services to help customers unlock new generative artificial intelligence (AI) capabilities.

NVIDIA and AWS continue to bring together the best of their technologies, including NVIDIA's newest multi-node systems featuring the next-generation NVIDIA Blackwell platform and AI software, AWS's Nitro System and AWS Key Management Service (AWS KMS) advanced security, Elastic Fabric Adapter (EFA) petabit-scale networking, and Amazon Elastic Compute Cloud (Amazon EC2) UltraCluster hyper-scale clustering. Together, they deliver the infrastructure and tools that enable customers to build and run real-time inference on multi-trillion-parameter large language models (LLMs) faster, at massive scale, and at a lower cost than previous-generation NVIDIA GPUs on Amazon EC2.

"The deep collaboration between our two organizations goes back more than 13 years, when together we launched the world's first GPU cloud instance on AWS, and today we offer the widest range of NVIDIA GPU solutions for customers," said Adam Selipsky, CEO at AWS. "NVIDIA's next-generation Grace Blackwell processor marks a significant step forward in generative AI and GPU computing. When combined with AWS's powerful Elastic Fabric Adapter networking, Amazon EC2 UltraClusters' hyper-scale clustering, and our unique Nitro System's advanced virtualization and security capabilities, we make it possible for customers to build and run multi-trillion-parameter large language models faster, at massive scale, and more securely than anywhere else. Together, we continue to innovate to make AWS the best place to run NVIDIA GPUs in the cloud."

"AI is driving breakthroughs at an unprecedented pace, leading to new applications, business models, and innovation across industries," said Jensen Huang, founder and CEO of NVIDIA. "Our collaboration with AWS is accelerating new generative AI capabilities and providing customers with unprecedented computing power to push the boundaries of what's possible."

Latest innovations from AWS and NVIDIA accelerate training of cutting-edge LLMs that can reach beyond 1 trillion parameters

AWS will offer the NVIDIA Blackwell platform, featuring GB200 NVL72, with 72 Blackwell GPUs and 36 Grace CPUs interconnected by fifth-generation NVIDIA NVLink. When connected with Amazon's powerful networking (EFA), and supported by advanced virtualization (AWS Nitro System) and hyper-scale clustering (Amazon EC2 UltraClusters), customers can scale to thousands of GB200 Superchips. NVIDIA Blackwell on AWS delivers a massive leap forward in speeding up inference workloads for resource-intensive, multi-trillion-parameter language models.

Based on the success of the NVIDIA H100-powered EC2 P5 instances, which are available to customers for short durations through Amazon EC2 Capacity Blocks for ML, AWS plans to offer EC2 instances featuring the new B100 GPUs deployed in EC2 UltraClusters for accelerating generative AI training and inference at massive scale. GB200s will also be available on NVIDIA DGX Cloud, an AI platform co-engineered on AWS that gives enterprise developers dedicated access to the infrastructure and software needed to build and deploy advanced generative AI models. The Blackwell-powered DGX Cloud instances on AWS will accelerate development of cutting-edge generative AI and LLMs that can reach beyond 1 trillion parameters.

Elevate AI security with AWS Nitro System, AWS KMS, encrypted EFA, and Blackwell encryption

As customers move quickly to implement AI in their organizations, they need to know that their data is being handled securely throughout their training workflow. The security of model weights (the parameters that a model learns during training and that are critical for its ability to make predictions) is paramount to protecting customers' intellectual property, preventing tampering with models, and maintaining model integrity.
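The tamper-detection idea behind model-weight integrity can be illustrated with a short, generic sketch. This is not AWS or NVIDIA code; it simply shows the basic principle of recording a cryptographic digest of serialized weights so that any later modification of the bytes is detectable:

```python
# Generic illustration of weight-integrity checking (not AWS/NVIDIA code):
# a SHA-256 digest of the serialized weights is recorded after training,
# and any later byte-level change to the weights changes the digest.
import hashlib


def weights_digest(weights: bytes) -> str:
    """Return a hex SHA-256 digest of serialized model weights."""
    return hashlib.sha256(weights).hexdigest()


trained = b"\x00\x01\x02\x03"            # stand-in for serialized weights
reference = weights_digest(trained)       # recorded at training time

tampered = b"\x00\x01\x02\x04"            # one byte flipped

assert weights_digest(trained) == reference   # intact weights verify
assert weights_digest(tampered) != reference  # tampering is detected
```

Production systems such as the Nitro Enclave flow described below go much further (encrypting the weights themselves and controlling key access), but the digest comparison captures the core integrity guarantee.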

AWS AI infrastructure and services already have security features in place to give customers control over their data and ensure that it is not shared with third-party model providers. The combination of the AWS Nitro System and the NVIDIA GB200 takes AI security even further by preventing unauthorized individuals from accessing model weights. The GB200 allows inline encryption of the NVLink connections between GPUs and encrypts data transfers, while EFA encrypts data across servers for distributed training and inference. The GB200 will also benefit from the AWS Nitro System, which offloads I/O for functions from the host CPU/GPU to specialized AWS hardware to deliver more consistent performance, while its enhanced security protects customer code and data during processing on both the customer side and AWS side. This capability, available only on AWS, has been independently verified by NCC Group, a leading cybersecurity firm.

With the GB200 on Amazon EC2, AWS will enable customers to create a trusted execution environment alongside their EC2 instance, using AWS Nitro Enclaves and AWS KMS. Nitro Enclaves allow customers to encrypt their training data and weights with KMS, using key material under their control. The enclave can be loaded from within the GB200 instance and can communicate directly with the GB200 Superchip. This enables KMS to communicate directly with the enclave and pass key material to it in a cryptographically secure way. The enclave can then pass that material to the GB200, protected from the customer instance, preventing AWS operators from ever accessing the key or decrypting the training data or model weights, and giving customers unparalleled control over their data.

Project Ceiba taps Blackwell to propel NVIDIA's future generative AI innovation on AWS

Announced at AWS re:Invent 2023, Project Ceiba is a collaboration between NVIDIA and AWS to build one of the world's fastest AI supercomputers. Hosted exclusively on AWS, the supercomputer is available for NVIDIA's own research and development. This first-of-its-kind supercomputer is being built using the new NVIDIA GB200 NVL72, a system featuring fifth-generation NVLink, that scales to 20,736 B200 GPUs connected to 10,368 NVIDIA Grace CPUs. The system scales out using fourth-generation EFA networking, providing up to 800 Gbps per Superchip of low-latency, high-bandwidth networking throughput, capable of processing a massive 414 exaflops of AI, a 6x performance increase over earlier plans to build Ceiba on the Hopper architecture. NVIDIA research and development teams will use Ceiba to advance AI for LLMs, graphics (image/video/3D generation) and simulation, digital biology, robotics, self-driving cars, NVIDIA Earth-2 climate prediction, and more to help NVIDIA propel future generative AI innovation.
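The Ceiba figures quoted above are internally consistent, as a quick arithmetic check shows. The rack count below is derived from the press release's numbers (72 GPUs and 36 CPUs per GB200 NVL72 system) rather than stated in it:

```python
# Sanity-check the Project Ceiba numbers quoted in the press release.
GPUS_PER_NVL72 = 72        # Blackwell GPUs per GB200 NVL72 system
CPUS_PER_NVL72 = 36        # Grace CPUs per GB200 NVL72 system
TOTAL_GPUS = 20_736        # total B200 GPUs in Ceiba
TOTAL_CPUS = 10_368        # total Grace CPUs in Ceiba
TOTAL_EXAFLOPS = 414       # quoted AI throughput

systems = TOTAL_GPUS // GPUS_PER_NVL72          # 288 NVL72 systems
assert systems * GPUS_PER_NVL72 == TOTAL_GPUS   # GPU count divides evenly
assert systems * CPUS_PER_NVL72 == TOTAL_CPUS   # CPU count is consistent

# Implied per-GPU throughput: roughly 20 petaflops of AI compute.
petaflops_per_gpu = TOTAL_EXAFLOPS * 1000 / TOTAL_GPUS
print(systems, round(petaflops_per_gpu, 1))     # 288 20.0
```

The 2:1 GPU-to-CPU ratio also matches the GB200 Superchip design of two Blackwell GPUs per Grace CPU.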

AWS and NVIDIA collaboration accelerates development of generative AI applications and advances use cases in healthcare and life sciences

AWS and NVIDIA have joined forces to offer high-performance, low-cost inference for generative AI with Amazon SageMaker integration with NVIDIA NIM inference microservices, available with NVIDIA AI Enterprise. Customers can use this combination to quickly deploy FMs that are pre-compiled and optimized to run on NVIDIA GPUs to SageMaker, reducing the time-to-market for generative AI applications.

AWS and NVIDIA have teamed up to expand computer-aided drug discovery with new NVIDIA BioNeMo FMs for generative chemistry, protein structure prediction, and understanding how drug molecules interact with targets. These new models will soon be available on AWS HealthOmics, a purpose-built service that helps healthcare and life sciences organizations store, query, and analyze genomic, transcriptomic, and other omics data.

AWS HealthOmics and NVIDIA Healthcare teams are also working together to launch generative AI microservices to advance drug discovery, medtech, and digital health, delivering a new catalog of GPU-accelerated cloud endpoints for biology, chemistry, imaging, and healthcare data so healthcare enterprises can take advantage of the latest advances in generative AI on AWS.

See the original post here:
AWS and NVIDIA Extend Collaboration to Advance Generative AI Innovation - NVIDIA Blog