Category Archives: Cloud Servers

This startup says it can glue all your networks together in the cloud – The Register

Multi-cloud networking startup Alkira has decided it wants to be a network-as-a-service (NaaS) provider with the launch of its cloud area networking platform this week.

The upstart, founded in 2018, claims this platform lets customers automatically stitch together multiple on-prem datacenters, branches, and cloud workloads at the press of a button.

The subscription is the latest evolution of Alkira's multi-cloud platform, introduced back in 2020. The service integrates with all major public cloud providers (Amazon Web Services, Google Cloud, Microsoft Azure, and Oracle Cloud) and automates the provisioning and management of their network services.

"Cloud was supposed to make life easier, but it has grown more complex as customers struggle to manage islands of networking, each with its own rules and tools. They thought they were buying agility, but what arrived was a mountain of complexity and technical debt," Alkira CEO Amir Khan argued in a canned statement.

He argues that today's network architectures were never designed for the level of change that the cloud has introduced. "Until now, enterprises had a choice between shoehorning last-generation technology into the cloud or using orchestration tools to hide the complexity."

Rather than building its own private network as vendors like Aryaka (yes, Aryaka) have done, or relying on telecommunications providers like many SD-WAN vendors, Alkira piggybacks on the global network backbones that interconnect the public cloud providers' datacenters.

For example, if a customer needs to connect a workload running in AWS to another running in GCP or Azure, the platform automatically configures and connects the virtual networks on each of the respective public clouds.

However, since launching the platform, Alkira has introduced several additional capabilities including support for branch-to-branch communications and hybrid-cloud networking for customers with a mix of on-prem and cloud infrastructure.

The company has also announced integrations with several large security and network vendors like Cisco, Fortinet, Check Point, Palo Alto Networks, and Aruba to enable customers to deploy the service alongside their existing infrastructure.

Alkira's Cloud Area Networks service consolidates these capabilities into a single platform, and adds support for Terraform and REST APIs for integration with customers' continuous integration and continuous delivery (CI/CD) pipelines.
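To make the CI/CD angle concrete, here is a minimal sketch of driving a NaaS provisioning API from a pipeline job. Alkira's actual REST API is not documented here, so the endpoint, payload fields, and token variable below are hypothetical, illustrative only.

```python
# Illustrative sketch: calling a hypothetical NaaS REST API from a CI/CD job.
# The base URL, payload shape, and env var are invented for this example.
import os
import requests

API_BASE = "https://api.example-naas.com/v1"   # hypothetical base URL
TOKEN = os.environ["NAAS_API_TOKEN"]           # injected by the CI pipeline

def connect_clouds(aws_vpc_id: str, azure_vnet_id: str) -> dict:
    """Request a connection between an AWS VPC and an Azure VNet."""
    resp = requests.post(
        f"{API_BASE}/connections",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"endpoints": [
            {"cloud": "aws", "network_id": aws_vpc_id},
            {"cloud": "azure", "network_id": azure_vnet_id},
        ]},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    print(connect_clouds("vpc-0abc123", "vnet-prod-eu"))
```

The same call could equally be expressed as a Terraform resource; the point is that network plumbing becomes another artifact the pipeline applies.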

Altogether, this functionality has helped the multi-cloud startup secure multiple high-profile contracts with the likes of Warner Music Group, Tekion, and Koch Industries. The latter was one of the company's largest financiers and has deployed Alkira's services to connect its more than 700 locations around the world.

However, Alkira is far from the only vendor vying for a piece of the NaaS market. The business faces competition from many of the same cloud providers on which its service relies.

As more enterprise workloads have made their way into the cloud, AWS, GCP, and Azure have all launched cloud transport services for customers that need to connect workloads running across multiple regions. Many of these services also support using their private networks as an alternative to multi-protocol label switching (MPLS) or broadband connectivity for branch-to-branch communications. Amazon's Cloud WAN service introduced late last year is one such example.
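For a flavor of what these transport services look like in practice, here is a rough boto3 sketch of bootstrapping AWS Cloud WAN, which is exposed through the NetworkManager API. The core network policy document (which defines segments and edge locations) and error handling are omitted, so treat this as an outline under stated assumptions, not a working deployment.

```python
# Rough sketch of bootstrapping AWS Cloud WAN with boto3. The policy document
# and attachments (VPCs, VPNs, Connect peers) are omitted for brevity.
import boto3

nm = boto3.client("networkmanager", region_name="us-west-2")

# Cloud WAN hangs off a global network container object.
global_net = nm.create_global_network(Description="example global network")
gn_id = global_net["GlobalNetwork"]["GlobalNetworkId"]

# The core network is the managed backbone; in practice you would attach a
# policy document next, then attach workload networks to its segments.
core = nm.create_core_network(
    GlobalNetworkId=gn_id,
    Description="example core network",
)
print(core["CoreNetwork"]["CoreNetworkId"])
```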

Meanwhile, Alkira also faces competition from traditional SD-WAN vendors like Cisco and Fortinet, which have leaned on these cloud transport services as a means of extending network architectures customers are already familiar with to multi-cloud networking use cases.

Go here to see the original:
This startup says it can glue all your networks together in the cloud - The Register

ZTE intros ‘cloud laptop’ that draws just five watts of power – The Register

Chinese telecom equipment maker ZTE has announced what it claims is the first "cloud laptop": an Android-powered device that consumes just five watts and links to its cloud desktop-as-a-service.

Announced this week at the partially state-owned company's 2022 Cloud Network Ecosystem Summit, the machine, model W600D, measures 325mm x 215mm x 14mm, weighs 1.1kg and includes a 14-inch HD display, full-size keyboard, HD camera, and Bluetooth and Wi-Fi connectivity. An unspecified eight-core processor drives it, and a 40.42 watt-hour battery is claimed to last for eight hours.
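As a quick sanity check on those numbers, the claimed battery capacity and power draw line up almost exactly with the quoted runtime:

```python
# Sanity check of ZTE's claimed figures: a 40.42 Wh battery at a constant
# 5 W draw gives almost exactly the quoted eight hours.
battery_wh = 40.42
draw_w = 5
print(f"{battery_wh / draw_w:.2f} hours")  # 8.08 hours
```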

It seems the primary purpose of this thing is to access a cloud-hosted remote desktop in which you do all or most of your work. ZTE claimed its home-grown RAP protocol ensures these remote desktops will be usable even on connections of a mere 128Kbit/sec, or with latency of 300ms and packet loss of six percent. That's quite a brag.

ZTE's rendering of its W600D 'cloud laptop'

As such, the machine is basically a client endpoint connected to ZTE's uSmart cloud PC service, and this is suggested for use in almost any setting, most especially when multiple users share a physical machine at home or work.

ZTE already has a cloud PC for the desktop: the W100D, a pack-of-cards-sized device similar to Alibaba's Wuying device.

Alibaba released its virtual computer earlier this year. The Wuying is designed for use with Alibaba Cloud and is available in Singapore or China. Alibaba also suggests its cloudy client device as an option for consumers or businesses.

Desktop-as-a-service is seldom offered to consumers, anywhere. Now two of China's mightiest tech outfits think the nation has an appetite for such services and accompanying devices.

ZTE may struggle to find a market for the W600D outside China, given the company is so distrusted in the US that the FCC will literally reimburse medium and small carriers (or at least promise to, when there's enough money) who remove and replace the company's products.

This does not mean China's PC market is terminal, but it could mean terminals will take a chunk of China's PC market.

Continued here:
ZTE intros 'cloud laptop' that draws just five watts of power - The Register

Chinese startup hires chip godfather and TSMC vet to break into DRAM biz – The Register

A Chinese state-backed startup has hired legendary Japanese chip exec Yukio Sakamoto as part of a strategy to launch a local DRAM industry.

Chinese press last week reported that Sakamoto has joined an outfit named SwaySure, also known as Shenzhen Sheng Weixu Technology Company or Sheng Weixu for brevity.

Sakamoto's last gig was as senior vice president of Chinese company Tsinghua Unigroup, where he was hired to build up a 100-employee team in Japan with the aim of making DRAM products in Chongqing, China. That effort reportedly faced challenges along the way, some related to US sanctions, others to recruitment.

The company scrapped major memory projects in two cities and was forced into bankruptcy last year, before Beijing arranged a bailout.

While that venture failed, 75-year-old Sakamoto's CV remains hard to match. He was once president of Japan's Elpida Memory, a major Apple supplier with the capacity to produce over 185,000 300mm wafers per month. Micron bought the company in 2013.

Sakamoto's new employer, which he claims will be his last, was established in March with ¥5 billion ($745 million) of registered capital and is 100 percent controlled by Shenzhen state-owned assets, according to Chinese state media.

Its main products are listed as general-purpose DRAM chips for datacenters and smartphones, developed by teams in Japan and China.

Sakamoto will join Taiwan Semiconductor Manufacturing Co (TSMC) veteran Liu Xiaoqiang, said Chinese state media outlet Global Times. Although Liu left TSMC three years ago, the hire raises eyebrows given China's yearning for Taiwanese talent, complete with accusations of poaching and speculation about aggressive methods to obtain it.

Beijing has been extremely eager to achieve tech self-sufficiency amid US sanctions in an already critical supply chain environment. In October 2020, China set a goal of growing all its own tech at home by 2035.

Unfortunately for the Middle Kingdom, that goal seems more elusive by the day. Analyst house IC Insights predicted that by 2026, China will only produce 20 percent of the chips it uses.

Previous attempts to create a steady domestic DRAM stream in China have been thwarted by pesky things like IP laws. In addition to Tsinghua's failure to thrive, state-owned Fujian Jinhua Integrated Circuit Company was indicted on industrial espionage charges in the US and banned from importing semiconductor equipment and materials from the States.

Instead, the market remains dominated by the likes of Korea's Samsung and SK hynix, plus US company Micron. According to IC Insights [PDF], the trio held 94 percent of global DRAM market share in 2021.

More:
Chinese startup hires chip godfather and TSMC vet to break into DRAM biz - The Register

Zscaler bulks up AI, cloud, IoT in its zero-trust systems – The Register

Zscaler is growing the machine-learning capabilities of its zero-trust platform and expanding it into the public cloud and network edge, CEO Jay Chaudhry told devotees at a conference in Las Vegas today.

Along with the AI advancements, Zscaler at its Zenith 2022 show in Sin City also announced greater integration of its technologies with Amazon Web Services, and a security management offering designed to enable infosec teams and developers to better detect risks in cloud-native applications.

In addition, the biz also is putting a focus on the Internet of Things (IoT) and operational technology (OT) control systems as it addresses the security side of the network edge. Zscaler, for those not aware, makes products that securely connect devices, networks, and backend systems together, and provides the monitoring, controls, and cloud services an organization might need to manage all that.

Enterprises are looking for ways to protect workloads and data that are increasingly being run, accessed, and created outside the central datacenter, making legacy perimeter security defenses increasingly outdated, Chaudhry opined during his keynote Wednesday.

"Workloads, somewhat like users, talk to the internet," he said. "Workloads talk to other workloads, so zero trust plays an important role."

Zscaler has been banging on the idea of zero trust since the rollout of its first cloud services in 2008. Zero trust essentially operates on the premise that no user, device, or application on the network inherently can be trusted. Instead, a zero-trust framework relies on identity, behavior, authentication, and security policies to verify and validate everything on the network and to determine such issues as access and privileges.
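As a conceptual illustration of that premise (the zero-trust model in general, not Zscaler's implementation), a default-deny access check might look like the following: identity, device posture, and entitlement are all verified on every request, and anything unverified is refused.

```python
# Conceptual sketch of zero trust: deny by default, verify everything.
# Policy contents and fields are invented for illustration.
from dataclasses import dataclass

@dataclass
class Request:
    user: str
    device_compliant: bool
    mfa_verified: bool
    app: str

# Policy: which users may reach which applications.
ACCESS_POLICY = {"alice": {"crm", "wiki"}, "bob": {"wiki"}}

def authorize(req: Request) -> bool:
    """Default-deny: nothing on the network is inherently trusted."""
    if not req.mfa_verified:          # authentication
        return False
    if not req.device_compliant:      # device posture
        return False
    return req.app in ACCESS_POLICY.get(req.user, set())  # entitlement

print(authorize(Request("alice", True, True, "crm")))   # True
print(authorize(Request("bob", True, True, "crm")))     # False
```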

It's a booming space, with analyst biz MarketsandMarkets recently forecasting the global zero-trust market growing from $27.4 billion this year to $60.7 billion by 2027. Zero trust has also become a buzzword in the industry, with a growing number of vendors claiming they offer such capabilities.

Chaudhry said his company is working to build out an integrated, cloud-based platform that gives enterprises tightly integrated services rather than a collection of point products that need to be pulled together by an organization.

The latest offerings are designed to expand what its Zero Trust Exchange architecture can do. Zscaler's Posture Control agentless offering is integrated into Zero Trust Exchange to prioritize risk, including unpatched vulnerabilities in containers and virtual machines, cloud service misconfigurations and excessive permissions.

It also scans workloads and detects and resolves issues early in the development lifecycle before they become problems in production. Posture Control is the second step in Zscaler's efforts to secure workloads, following the release last year of Cloud Connector, which Chaudhry said eliminated the need for multiple virtual firewalls.

"Workloads need to securely communicate, but in addition to that, when you are launching those workloads, you want to make sure they are configured right there are hundreds and hundreds of configurations around the workloads and you also need to make sure that the right people have the right access, entitlement and permissions," the CEO said. "In addition, you need to make sure the attack surface is minimized."

The new AI and machine learning capabilities integrated into the Zero Trust Exchange are aimed at both improving the user experience and better protecting the network against the rising numbers and sophistication of cyberattacks. According to Zscaler research, there was a 314 percent increase in encrypted attacks between September 2020 and 2021 and an 80 percent increase in ransomware attacks between February 2021 and March 2022, with a 117 percent jump in double-extortion attacks.

There also was a more than 100 percent [PDF] year-over-year rise in phishing attacks in 2021, it claimed.

AI and machine learning technologies are fed by data, and Zscaler's security cloud inspects more than 240 million transactions a day, extracting more than 300 trillion signals that can feed its AI and machine learning algorithms. This now includes AI-powered phishing prevention, AI-based policy recommendations to stop the lateral movement of cyberthreats, and user-to-app segmentation to reduce the attack surface, he said.

There is also an autonomous risk-based policy engine to enhance network integrity and enable customized policies based on risk scores applied to users, devices, apps and content, plus AI-driven root-cause analysis capabilities to accelerate mean time to resolution.
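To make the risk-score idea concrete, here is a toy sketch of a risk-based policy engine. The signals, weights, and thresholds are invented for illustration and bear no relation to Zscaler's actual scoring.

```python
# Toy risk-based policy engine: weighted signals roll up into a score,
# and policy actions key off score bands. All values are invented.
SIGNAL_WEIGHTS = {"unusual_geo": 0.3, "new_device": 0.2,
                  "sensitive_app": 0.3, "anomalous_volume": 0.2}

def risk_score(signals: dict[str, bool]) -> float:
    return sum(w for name, w in SIGNAL_WEIGHTS.items() if signals.get(name))

def policy_action(score: float) -> str:
    if score >= 0.6:
        return "block"
    if score >= 0.3:
        return "step-up-auth"
    return "allow"

print(policy_action(risk_score({"unusual_geo": True,
                                "sensitive_app": True})))  # block
```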

Chaudhry said customer demand drove the development of IoT and OT security capabilities in the platform. Enterprises said that many of their plants and factories rely on traditional security components that open them to ever-increasing cyberthreats.

"You can actually define those solutions within the factory floor or you can send telemetry from IoT or OT devices from your data lake at Azure, AWS or wherever else securely without doing VPN devices," the CEO said, noting that the company is partnering with Siemens developing and integrating products in this area.

Go here to see the original:
Zscaler bulks up AI, cloud, IoT in its zero-trust systems - The Register

Cloud services are convenient, but they hurt smart home longevity – Digital Trends

So many of the smart home products people know and love rely on the cloud.

Alexa, Google Assistant, and Siri all rely on the cloud for language processing. Even many smart lights need the cloud to turn on or off. It's one thing when these cloud services provide some additional function (like storage or automation presets), but relying on them for basic functionality? That's a terrible idea.

While it does feel like smart home technology has been around for a while, the industry is still relatively young. Many of the companies that sprung out of the initial wave of interest in the early 2010s were startups, and it's safe to say that quite a few were created only to later fail.

These companies produced products that are now little more than expensive paperweights. And it's not just smaller companies, either. Lowe's started the Iris smart home platform, only for it to shut down in March 2019.

Insteon announced the death of its cloud servers several months back, and iHome has shut down its services too. While iHome didn't have the broadest product range, Insteon was well-known. It was one of the easiest ways to convert an older home into a smart home due to its use of in-wall wiring to send smart signals. While there is some hope left for Insteon customers (their devices will become, at worst, dumb switches), the same can't be said for a lot of other products.

It seems like everything is cloud this and cloud that, but a cloud server isn't an easy thing to maintain. A single server can cost as much as $400 per month to maintain, and a large company will have multiple servers. An average back-end infrastructure might be $15,000 or more per month for even a medium-sized company.
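A back-of-the-envelope calculation using the article's own figures shows the scale of the commitment:

```python
# Back-of-the-envelope using the article's figures: a $15,000/month back end
# at $400/month per server implies dozens of servers that must stay funded
# for as long as the product is expected to keep working.
per_server = 400      # $/month, upper-end figure from the article
infra_total = 15_000  # $/month for a medium-sized company
print(infra_total / per_server)  # 37.5 servers
```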

If a product isn't generating enough revenue (or a company is relying on Kickstarter or Indiegogo funding), then cloud servers will be one of the first things to go when a company looks to cut costs. When this happens, it's most often the customer that suffers.

Features get cut, functionality thins out, and products offer a lot less benefit than you initially expected when you bought them.

While there are a lot of benefits to using cloud servers, there are just as many downsides. They're not as secure as on-device processing, for one.

If smart home devices rely on local processing, the entire system improves. I shouldn't need internet access to tell the lights in my home to turn off, especially if I have to dedicate a port on my router to a smart hub. If the hub can't relay a basic on/off command, what's the point of it?

Alexa and Google Assistant devices could translate your commands into actions without the need to relay them through an external server. Natural language processing isn't as difficult as it once was, and new-and-improved chips offer dramatically more power without an increase in size. (This is also a good time to mention that HomeKit should pick up the pace and implement the new M1 chips into Siri processing to close the gap with the competition.)
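A minimal sketch of the local-first pattern being argued for, with invented device names and a placeholder cloud call: basic commands are handled on the hub itself, and the cloud is reserved for genuinely optional extras.

```python
# Local-first command handling: the hub services basic commands itself and
# falls back to a cloud service only for non-essential features.
LOCAL_COMMANDS = {"on", "off", "toggle"}

def handle(device: str, command: str) -> str:
    if command in LOCAL_COMMANDS:
        # A real hub would write to the device over Zigbee/Z-Wave/Thread here;
        # no internet round trip, and it keeps working if the vendor folds.
        return f"{device}: {command} (handled locally)"
    return cloud_fallback(device, command)

def cloud_fallback(device: str, command: str) -> str:
    # Placeholder for optional cloud features (e.g., history, presets).
    return f"{device}: {command} (sent to cloud)"

print(handle("living-room-light", "off"))
```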

Perhaps the biggest indication that on-device processing is the way to go is that if a company shuts its doors, the products still retain functionality. Customers won't be cheated out of their money in two or three years just because financing fell through or startup funding ran out.

Insteon, iHome, and Iris are just the tip of the iceberg. There's already enough skepticism about smart home devices as it is. If customers can't feel like they're making a good investment, the industry won't grow, and development will stall. On-device processing can provide customers with some assurance in their investment and continuing functionality, even if the company doesn't produce anything new.

More here:
Cloud services are convenient, but they hurt smart home longevity - Digital Trends

TikTok has moved all its US traffic to Oracle’s cloud servers – Protocol

The Code, which was developed by 34 signatories (including Meta, Google, TikTok and Microsoft), is essentially a list of disinformation-fighting practices tech companies can employ if they want to demonstrate they're at least trying to mitigate risk and stay in compliance with the Digital Services Act in Europe.

"To be credible, the new Code of Practice will be backed up by the DSA, including for heavy dissuasive sanctions," Thierry Breton, European Commissioner for Internal Market, said in a statement on Thursday. "Very large platforms that repeatedly break the Code and do not carry out risk mitigation measures properly risk fines of up to 6% of their global turnover."

The list of signatories also includes Twitter, Twitch, Vimeo, Clubhouse, Adobe and a range of civil society, research and fact-checking groups. Notably missing from the list, however, are other tech giants, including Apple and Telegram, which have played a particularly key role in the spread of misinformation around the war in Ukraine. Amazon is also largely missing, with the exception of livestreaming platform Twitch, which the company owns.

Not every company that did sign on has committed to every line item in the code, leading to some ongoing conflict even among signatories. In some cases, that could be because the commitment just isn't relevant to their business. In others, it could mean tech platforms are picking and choosing the commitments that are the easiest for them to pull off.

Still, the list of companies that have signed up, and what they have signed up for, is significant, and could lead to dramatically more transparency into some of the world's biggest platforms.

Companies now have a six-month window to implement the code. Here are a few of the biggest promises they're making:

The Code would increase pressure on platforms to not only cease carrying disinformation but also avoid the placement of advertising next to disinformation content or on sources that repeatedly violate these policies.

The companies committed to creating dedicated searchable ad repositories and ensuring that political ads come with a disclaimer and details about how much an ad cost and how long it ran. Meta and Google already offer this, but the Code would encourage even more platforms that want to stay on the right side of the DSA to provide this visibility. (Of course, it could also, alternatively, push some platforms to cut off political ads altogether, as Twitter, LinkedIn and Twitch already do, a move some argue has only made it harder for small campaigns and advocacy groups to get their messages out.)

The Code requires companies to offer researchers automated access to "non-personal, anonymized, aggregated or manifestly made public" data.

"This is potentially huge," Mathias Vermeulen, director of European data rights agency AWO, said in a tweet. "It could entail the development of a Crowdtangle platform for all these companies."

"In the words of Joe Biden, it's a big fucking deal, and potentially an inflection point in the history of social media," tweeted CrowdTangle founder Brandon Silverman, who has become an outspoken advocate for transparency in the tech industry. "But whether that's true will be determined in all the work that happens from this point on... and there's a lot."

Under the Code, very large platforms (defined as having more than 45 million average monthly active users in the EU) will have to report every six months on their progress implementing the Code. Other companies will report on an annual basis.

The signatories agreed to work more closely across platforms to compare notes on manipulative user behavior they're encountering. That's a potentially meaningful shift, which would give smaller companies operating in Europe the benefit of visibility into what the largest players with the most resources are seeing and what they're doing about it.

The hardest part of regulating tech is that innovation often outpaces the law itself. The Code establishes a task force, which will review and adapt the commitments "in view of technological, societal, market and legislative developments."

More here:
TikTok has moved all its US traffic to Oracle's cloud servers - Protocol

Cloud hosting group sees clear skies of recovery as market begins to normalise | TheBusinessDesk.com – The Business Desk

Liverpool-based cloud hosting provider SysGroup said it is starting to see a normalisation of market conditions following the pandemic, and anticipates further acquisition possibilities following two recent additions to the group.

The business announced its annual results today, for the year to March 31, 2022, which revealed a fall in revenues, but a sizeable improvement in pre-tax profits.

Sales of £14.75m compared with £18.13m the previous year, but pre-tax profits soared 192% to £600,000, which chief executive Adam Binks said reflected the strength of the group's business model.

Adjusted EBITDA of £2.82m was slightly down on last year's £2.91m figure.

Net cash stood at £2.99m, up from £1.88m a year ago.

During the reporting period the group completed its project to deliver a unified platform of systems, Project Fusion, which has resulted in significant benefits across all operations.

It achieved the successful migration to SysCloud 2.0, the group's multi-tenanted cloud platform, which went fully live in May 2022, delivering higher client performance and group efficiency with greater capacity from less physical space.

A unified sales and marketing hub opened in Manchester with a number of highly targeted campaigns planned for fiscal year 2023 to drive new customer engagement and continue to build its sales pipeline, and customer approval scores were comfortably ahead of the 97% target throughout the entire year.

Also, its office rationalisation completed, with a refurbishment programme delivered in Newport and closure of the Telford site, which will generate a small operational saving.

In the first quarter of the current financial year, the business acquired Edinburgh-based Truststream Security Solutions, a fast-growing provider of cyber security solutions which enhances SysGroup's security services and gives the group a presence in Scotland from which to grow, and Independent Network Solutions, trading as Orchard Computers, further enhancing the group's presence in the South West region and complementing its South Wales-based operations.

Both acquisitions are expected to be immediately earnings enhancing.

Adam Binks said: "The adjusted EBITDA performance and strong cash generation in a year when turnover was impacted by COVID highlights the strength of our business model."

"We have invested to drive future growth whilst maintaining prudent financial discipline throughout the business. Operationally, the group is ideally placed to take advantage of conditions as they begin to normalise and we have started to see the early green shoots of such a recovery."

He added: "The acquisitions of Truststream and Orchard added further customers, expertise and geographical reach and demonstrate our ongoing commitment to be consolidators in this highly fragmented market."

M&A activity in our sector is picking up and we believe there will be further opportunities that we can take advantage of during the course of this year. With a clear strategy for both organic and inorganic growth, the board is confident in the future.

And he revealed: "Towards the end of the last financial year we began to see the green shoots of recovery for new business, with existing clients beginning to engage on projects and an increasing pipeline of opportunities from new potential clients."

"Whilst these are still early days and we must remain cautious, I am confident that we will see improvements to both revenue and EBITDA performance in this new financial year."

Read more from the original source:
Cloud hosting group sees clear skies of recovery as market begins to normalise | TheBusinessDesk.com - The Business Desk

Google recasts Anthos with hitch to AWS Outposts – The Register

Google Cloud's Anthos on-prem platform is getting a new home under the search giant's recently announced Google Distributed Cloud (GDC) portfolio, where it will live on as a software-based competitor to AWS Outposts and Microsoft Azure Stack.

Introduced last fall, GDC enables customers to deploy managed servers and software in private datacenters, at communication service provider facilities, or at the edge.

Its latest update sees Google reposition Anthos on-prem, introduced back in 2020, as the bring-your-own-server edition of GDC, now called GDC Virtual. Using the service, customers can extend Google Cloud-style management and services to applications running on-prem.

For example, customers can use the service to provision and manage Google Kubernetes Engine (GKE) clusters on virtual machines or bare-metal servers in their own datacenters, and do it all from the Google Cloud Console.
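As a rough illustration of that single management plane, the public google-cloud-container client can enumerate GKE clusters across locations in a project. Whether clusters managed through GDC Virtual/Anthos on-prem surface through this exact API is an assumption here, not something Google's announcement confirms; the sketch simply shows the kind of one-console view being pitched.

```python
# Minimal sketch: list GKE clusters across all locations in a project using
# the public google-cloud-container client. On-prem/GDC Virtual visibility
# through this exact API is an assumption, not confirmed by the announcement.
from google.cloud import container_v1

client = container_v1.ClusterManagerClient()
response = client.list_clusters(parent="projects/my-project/locations/-")
for cluster in response.clusters:
    print(cluster.name, cluster.location, cluster.status)
```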

GDC Virtual doesn't appear to introduce any new functionality not already found in Anthos on-prem.

"Customers of Anthos on-premises will continue to enjoy the consistent management and developer experience they have come to know and expect, with no changes to current capabilities, pricing structure, or look and feel across user interfaces," Chen Goldberg, GM and VP of engineering for cloud-native runtimes at Google, said in a statement.

The announcement marks the latest evolution to the Anthos hybrid cloud platform, which launched in early 2019 following Thomas Kurian's appointment as CEO of Google Cloud.

Anthos was initially conceived as a way to extend a consistent management plane to applications running in multiple clouds (GCP, AWS, Azure, etc.) or workloads that customers weren't ready to see leave the corporate datacenter.

The idea was that customers could manage their workloads wherever they were deployed and migrate them to GCP with minimal retooling. The platform quickly picked up additional features, including integration with VMware's vSphere VM management suite, and a migration tool designed to re-wrap virtual machines to run in containers on GKE.

Google's motivations don't appear to have changed much in that regard. The company cites customers with significant investment in their own VM environments, or those wishing to migrate their applications to the cloud, as the target market for GDC Virtual.

Despite the emphasis on GDC, we're told the platform isn't so much the spiritual successor to Anthos but rather a consolidation of SKUs powered by the platform, aimed at simplifying customer journeys. Or, put another way, making Anthos a little less confusing for customers.

Only time will tell whether we'll see Anthos subjected to the same fate as so many of Google's products. I'm looking at you, Google Talk, I mean Hangouts, or wait, is it Chat now?

The rest is here:
Google recasts Anthos with hitch to AWS Outposts - The Register

StorPool Named ‘Storage Optimization Company of the Year’ At Storage Awards XIX – Business Wire

SOFIA, Bulgaria--(BUSINESS WIRE)--StorPool Storage today announced that it won the Storage Optimization Company of the Year award at the 2022 Storage Awards. The company was previously honored by the Storage Awards with wins for Software Defined Storage (SDS) Vendor of the Year in 2020 and One to Watch Product in 2017.

The "Storries" awards are a premier IT sector event that recognizes the industrys finest products, companies and people. Winners were chosen via online voting by Storage Magazine readers with results presented at a black-tie gala dinner on June 9 in London.

StorPool accelerates the world by storing data more productively and helping businesses streamline their operations. StorPool storage systems are ideal for storing and managing the data of demanding primary workloads: databases, web servers, virtual desktops, real-time analytics solutions, and other mission-critical software. Under the hood, the primary storage platform provides thin-provisioned volumes to the workloads and applications running in on-premise clouds. The native multi-site, multi-cluster and BC/DR capabilities supercharge hybrid- and multi-cloud efforts at scale.
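For readers unfamiliar with the term, here is a conceptual sketch of what thin provisioning means (the idea, not StorPool's implementation): a volume advertises a large logical size, but physical blocks are allocated only on first write.

```python
# Conceptual sketch of thin provisioning: logical capacity is promised up
# front, physical space is consumed only when blocks are actually written.
class ThinVolume:
    def __init__(self, logical_blocks: int, block_size: int = 4096):
        self.logical_blocks = logical_blocks
        self.block_size = block_size
        self.blocks: dict[int, bytes] = {}   # allocated on demand

    def write(self, lba: int, data: bytes) -> None:
        assert 0 <= lba < self.logical_blocks
        self.blocks[lba] = data.ljust(self.block_size, b"\x00")

    def read(self, lba: int) -> bytes:
        # Unwritten blocks read back as zeros without consuming space.
        return self.blocks.get(lba, b"\x00" * self.block_size)

    @property
    def physical_bytes(self) -> int:
        return len(self.blocks) * self.block_size

vol = ThinVolume(logical_blocks=1 << 20)     # 4 GiB of logical capacity
vol.write(42, b"hello")
print(vol.physical_bytes)                    # 4096 bytes actually allocated
```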

"While awards cannot necessarily tell whether products, services or companies are truly better than others, they serve as a good barometer of quality and success when presented from those within the industry," said Boyan Ivanov, CEO at StorPool Storage. "Being nominated for this years Storage Awards and being chosen by the readers of Storage Magazine as the Storage Optimization Company of the Year is especially satisfying because of the third-party validation of our ongoing efforts. We are thankful to all who took time to recognize us and look forward to continuing our work delivering our next-generation primary storage platform for demanding workloads."

About StorPool Storage

StorPool Storage is a primary storage platform designed for large-scale cloud infrastructure. It is the easiest way to convert sets of standard servers into primary or secondary storage systems. The StorPool team has experience working with various clients: Managed Service Providers, Hosting Service Providers, Cloud Service Providers, enterprises and SaaS vendors. StorPool Storage comes as software plus a fully managed data storage service that transforms standard hardware into fast, highly available and scalable storage systems.

View original post here:
StorPool Named 'Storage Optimization Company of the Year' At Storage Awards XIX - Business Wire

Linux Foundation Announces Open Programmable Infrastructure Project to Drive Open Standards for New Class of Cloud Native Infrastructure – Yahoo…

Data Processing Units (DPUs) and Infrastructure Processing Units (IPUs) are changing the way enterprises deploy and manage compute resources across their networks; OPI will nurture an ecosystem to enable easy adoption of these innovative technologies

SAN FRANCISCO, June 21, 2022 /PRNewswire/ -- The Linux Foundation, the nonprofit organization enabling mass innovation through open source, today announced the new Open Programmable Infrastructure (OPI) Project. OPI will foster a community-driven, standards-based open ecosystem for next-generation architectures and frameworks based on DPU and IPU technologies. OPI is designed to facilitate the simplification of network, storage and security APIs within applications to enable more portable and performant applications in the cloud and datacenter across DevOps, SecOps and NetOps.

Founding members of OPI include Dell Technologies, F5, Intel, Keysight Technologies, Marvell, NVIDIA and Red Hat, with a growing number of contributors representing a broad range of leading companies in fields ranging from silicon and device manufacturers, ISVs, test and measurement partners, and OEMs to end users.

"When new technologies emerge, there is so much opportunity for both technical and business innovation but barriers often include a lack of open standards and a thriving community to support them," said Mike Dolan, senior vice president of Projects at the Linux Foundation. "DPUs and IPUs are great examples of some of the most promising technologies emerging today for cloud and datacenter, and OPI is poised to accelerate adoption and opportunity by supporting an ecosystem for DPU and IPU technologies."

DPUs and IPUs are increasingly being used to support high-speed network capabilities and packet processing for applications like 5G, AI/ML, Web3, crypto and more because of their flexibility in managing resources across networking, compute, security and storage domains. Instead of the servers being the infrastructure unit for cloud, edge or the data center, operators can now create pools of disaggregated networking, compute and storage resources supported by DPUs, IPUs, GPUs, and CPUs to meet their customers' application workloads and scaling requirements.
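A toy model of that disaggregation idea, with invented pool sizes and workload shapes: the operator schedules against pools of CPU cores, DPU/IPU offload slots, and GPUs rather than allocating whole servers.

```python
# Toy model of disaggregated resource pools. Pool sizes and the sample
# workload are invented purely to illustrate the scheduling shift.
from dataclasses import dataclass

@dataclass
class ResourcePools:
    cpu_cores: int
    dpu_offload_slots: int   # e.g., packet processing, crypto, storage virt
    gpu_count: int

    def allocate(self, cpu: int, dpu: int, gpu: int) -> bool:
        if (cpu <= self.cpu_cores and dpu <= self.dpu_offload_slots
                and gpu <= self.gpu_count):
            self.cpu_cores -= cpu
            self.dpu_offload_slots -= dpu
            self.gpu_count -= gpu
            return True
        return False

pools = ResourcePools(cpu_cores=512, dpu_offload_slots=64, gpu_count=16)
# A 5G packet-processing-style workload: light on CPU, heavy on DPU offload.
print(pools.allocate(cpu=8, dpu=12, gpu=0))   # True
print(pools.dpu_offload_slots)                # 52
```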


OPI will help establish and nurture an open and creative software ecosystem for DPU and IPU-based infrastructures. As more DPUs and IPUs are offered by various vendors, the OPI Project seeks to help define the architecture and frameworks for the DPU and IPU software stacks that can be applied to any vendor's hardware offerings. The OPI Project also aims to foster a rich open source application ecosystem, leveraging existing open source projects, such as DPDK, SPDK, OvS, P4, etc., as appropriate. The project intends to:

Define DPU and IPU,

Delineate vendor-agnostic frameworks and architectures for DPU- and IPU-based software stacks applicable to any hardware solutions,

Enable the creation of a rich open source application ecosystem,

Integrate with existing open source projects aligned to the same vision such as the Linux kernel, and,

Create new APIs for interaction with, and between, the elements of the DPU and IPU ecosystem, including hardware, hosted applications, host node, and the remote provisioning and orchestration of software

With several working groups already active, the initial technology contributions will come in the form of the Infrastructure Programmer Development Kit (IPDK), which is now an official sub-project of OPI governed by the Linux Foundation. IPDK is an open source framework of drivers and APIs for infrastructure offload and management that runs on a CPU, IPU, DPU or switch. In addition, NVIDIA DOCA, an open source software development framework for NVIDIA's BlueField DPU, will be contributed to OPI to help developers create applications that can be offloaded, accelerated, and isolated across DPUs, IPUs, and other hardware platforms.

For more information visit: https://opiproject.org; start contributing here: https://github.com/opiproject/opi.

Founding Member Comments

Geng Lin, EVP and Chief Technology Officer, F5: "The emerging DPU market is a golden opportunity to reimagine how infrastructure services can be deployed and managed. With collective collaboration across many vendors representing both the silicon devices and the entire DPU software stack, an ecosystem is emerging that will provide a low friction customer experience and achieve portability of services across a DPU enabled infrastructure layer of next generation data centers, private clouds, and edge deployments."

Patricia Kummrow, CVP and GM, Ethernet Products Group, Intel: "Intel is committed to open software to advance collaborative and competitive ecosystems and is pleased to be a founding member of the Open Programmable Infrastructure project, as well as fully supportive of the Infrastructure Processor Development Kit (IPDK) as part of OPI. We look forward to advancing these tools, with the Linux Foundation, fulfilling the need for a programmable infrastructure across cloud, data center, communication and enterprise industries, making it easier for developers to accelerate innovation and advance technological developments."

Ram Periakaruppan, VP and General Manager, Network Test and Security Solutions Group, Keysight Technologies: "Programmable infrastructure built with DPUs/IPUs enables significant innovation for networking, security, storage and other areas in disaggregated cloud environments. As a founding member of the Open Programmable Infrastructure Project, we are committed to providing our test and validation expertise as we collaboratively develop and foster a standards-based open ecosystem that furthers infrastructure development, enabling cloud providers to maximize their investment."

Cary Ussery, Vice President, Software and Support, Processors, Marvell: "Data center operators across multiple industry segments are increasingly incorporating DPUs as an integral part of their infrastructure processing to offload complex workloads from general purpose to more robust compute platforms. Marvell strongly believes that software standardization in the ecosystem will significantly contribute to the success of workload acceleration solutions. As a founding member of the OPI Project, Marvell aims to address the need for standardization of software frameworks used in provisioning, lifecycle management, orchestration, virtualization and deployment of workloads."

Kevin Deierling, vice president of Networking at NVIDIA: "The fundamental architecture of data centers is evolving to meet the demands of private and hyperscale clouds and AI, which require extreme performance enabled by DPUs such as the NVIDIA BlueField and open frameworks such as NVIDIA DOCA. These will support OPI to provide BlueField users with extreme acceleration, enabled by common, multi-vendor management and applications. NVIDIA is a founding member of the Linux Foundation's Open Programmable Infrastructure Project to continue pushing the boundaries of networking performance and accelerated data center infrastructure while championing open standards and ecosystems."

Erin Boyd, director of emerging technologies, Red Hat: "As a founding member of the Open Programmable Infrastructure project, Red Hat is committed to helping promote, grow and collaborate on the emergent advantage that new hardware stacks can bring to the cloud-native community, and we believe that the formalization of OPI into the Linux Foundation is an important step toward achieving this in an open and transparent fashion. Establishing an open standards-based ecosystem will enable us to create fully programmable infrastructure, opening up new possibilities for better performance, consumption, and the ability to more easily manage unique hardware at scale."

About the Linux Foundation

Founded in 2000, the Linux Foundation and its projects are supported by more than 1,800 members; it is the world's leading home for collaboration on open source software, open standards, open data, and open hardware. Linux Foundation's projects are critical to the world's infrastructure including Linux, Kubernetes, Node.js, Hyperledger, RISC-V, and more. The Linux Foundation's methodology focuses on leveraging best practices and addressing the needs of contributors, users and solution providers to create sustainable models for open collaboration. For more information, please visit us at linuxfoundation.org.

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see its trademark usage page: http://www.linuxfoundation.org/trademark-usage. Linux is a registered trademark of Linus Torvalds. Red Hat is a registered trademark of Red Hat, Inc. or its subsidiaries in the U.S. and other countries.

Marvell Disclaimer: This press release contains forward-looking statements within the meaning of the federal securities laws that involve risks and uncertainties. Forward-looking statements include, without limitation, any statement that may predict, forecast, indicate or imply future events or achievements. Actual events or results may differ materially from those contemplated in this press release. Forward-looking statements speak only as of the date they are made. Readers are cautioned not to put undue reliance on forward-looking statements, and no person assumes any obligation to update or revise any such forward-looking statements, whether as a result of new information, future events or otherwise.

Media Contact: Carolyn Lehman, The Linux Foundation, clehman@linuxfoundation.org


View original content:https://www.prnewswire.com/news-releases/linux-foundation-announces-open-programmable-infrastructure-project-to-drive-open-standards-for-new-class-of-cloud-native-infrastructure-301571791.html

SOURCE The Linux Foundation

Visit link:
Linux Foundation Announces Open Programmable Infrastructure Project to Drive Open Standards for New Class of Cloud Native Infrastructure - Yahoo...