Category Archives: Cloud Servers

This self-sustainable cloud server is powered by the energy of growing tomatoes indoor! – Yanko Design

Picture a post-apocalyptic future where human beings don't have the liberty of depending on power stations. Self-sustainable systems are the norm and utilizing every ounce of available energy is vital for survival. A dystopian, tech-infused future world where computing systems don't have any external source of abundant energy. Straight out of that sci-fi scenario comes the Warm Earth server system by Ilja Schamle, a Design Academy Eindhoven graduate.

The DIY cloud server system embodies the symbiotic relationship between technology and nature. The project is all about using renewable energy extracted from tomato vines to run the cloud server on its own. In turn, the heat the server dissipates is cyclically used to maintain the optimum temperature for the vegetables to grow. As conceptual as this might seem, the project was part of the Missed Your Call graduate exhibition at Milan design week.

The DIY project houses the tomato plants within the server racks, and the server is mounted on the exterior of the rig. A ventilation shaft equipped with fans channels the hot air into the interior of the cabinet, essentially turning it into a greenhouse. The tomatoes power the server courtesy of plant-microbial fuel cell technology developed by researchers at Wageningen University in the Netherlands. This literally turns vegetables into batteries!

Nothing goes to waste: the plants perform photosynthesis, turning sunlight into chemical energy and storing it as sugars and proteins. The excess nutrition is excreted via the roots as waste, where bacteria break it down to release energy, which is then harvested as electricity. Since the servers are indoors, solar-powered grow lamps act as the source of sunlight. The electrons released by the microbes are attracted to an iron and activated-carbon grid placed at the bottom of the pot, which functions as a conductor. For now, the system can produce enough energy to sustain a single website, but with more research and development it could grow into a much larger system.

Warm Earth is a self-sustaining, geeky mashup that few could have imagined before now. According to Schamle, the amount of content consumed at present is only destined to rise, and the energy required to run the systems behind it will be colossal. The artificial ecosystem will change the perception of data centers as mere dungeons for hosting servers. They will become an important part of future homes, where they aren't kept hidden from sight!

Designer: Ilja Schamle

Read more here:
This self-sustainable cloud server is powered by the energy of growing tomatoes indoor! - Yanko Design

What IBM i Shops Want From Cloud, And How To Do It Right – IT Jungle

September 27, 2021, by Timothy Prickett Morgan

It is no secret to readers of The Four Hundred that we are big proponents of so-called cloud computing, which doesn't just include access to slices of servers but also storage to keep their data and networking to link them to the world and, if multiple slices share work, to link them to storage and to each other.

We never liked the term cloud, because it connotes a fuzzy kind of infrastructure when quite the opposite is true. We still don't like calling it cloud computing, but language is created by consensus, not by fiat, so sometimes we have to yield. But there was a better metaphor, and one we might want to revive if this term can shake off some of its own bad connotations.

Way back in the dawn of time in 2003, when Big Blue launched its Supercomputing On Demand service and standards for what the academics were calling grid computing were evolving to allow computing centers to interoperate and share work, the term we came up with to describe what was happening was the obvious and far more accurate utility computing. And as we pointed out at the time, almost two decades ago, it was not entirely obvious how this On Demand model being espoused by the major IT platform providers was different from the Application Service Provider (ASP) wave that started as the client/server revolution of the late 1980s and early 1990s merged with the Internet software stack of the mid-to-late 1990s and for the first time allowed companies to use applications remotely and under a subscription model that looked like electricity service, telephone service, or cable service. This has evolved over the ensuing time into what we now know as Software as a Service, or SaaS, which is all well and good for those companies who can get by using code designed for some kind of class average across industries and sizes.

But as AS/400 and IBM i shops know perhaps better than any other base, true differentiation in the market comes from crafting applications that specifically match the needs of the business. There was never a question that IT matters, which was a tempest in a teacup when Nicholas Carr wrote IT Doesn't Matter for the Harvard Business Review around the same time that IBM started its On Demand effort under new chief executive officer Sam Palmisano. A few months later, after online retailer Amazon.com noticed that people were building rudimentary applications on top of the APIs it had opened up on its online store, Andy Jassy, now chief executive officer at Amazon, took control of what would become Amazon Web Services, today the world's largest, most complex, most complete, and arguably most expensive public cloud, which has managed to attain millions of unique customers.

It is not lost on us that many of the attributes of the original AS/400 platform and integrated stack of operating systems, databases, file systems, and programming runtimes all running on highly available, distributed computing hardware are embodied by the AWS cloud and its followers, such as Microsoft Azure and Google Cloud. In fact, in 2012, we quipped that it should be called AWS/400; at that time, only six years after it had been launched, it had about the same revenue stream and the same customer count as the original AS/400 base at its peak, which, by the way, took IBM 29 years to reach after the launch of the System/3 in 1969.

Despite the success of AWS and its imitators, and the realization of something that looks like the utility model that we and others conceived of two decades ago (a kind of return, with a new twist, to the early days of the shared computing, service bureau model that IBM started off with mainframes in the 1960s), we are simultaneously perplexed that cloud has not taken off in the IBM i base and also not surprised, because the cloud, as it is currently delivered by the many excellent providers in the market, is missing a few vital things.

The first thing to remember is that cloud is a consumption model for a highly scalable platform that has utility pricing and a shared service bureau to bring the price down (well, down more than it might otherwise be, but it still ain't cheap). But cloud is not a panacea. The world's largest clouds have very sophisticated and scalable infrastructure, and it can be made to run some of the biggest distributed computing jobs on the planet. While this is intellectually interesting, it just doesn't matter to a lot of companies, which is why there are still many tens of millions of companies that are still buying their own infrastructure and installing it in their own data closets and datacenters.

Most IBM i shops have persistent databases with fairly consistent workloads. Yes, they have processing peaks during key buying seasons, and they also have peaks at the end of the week, the end of the month, the end of the quarter, and the end of the year, too. But there are ways of buying utility-style capacity on a temporary basis with the Capacity Upgrade On Demand (CUoD) features of IBM's Power Systems to deal with this, or simply overprovisioning the server from the get-go to deal with peaks. This may not be the most efficient way to use capital, but it works, and firing up cloud capacity 24/7 for the five, six, or seven years that many IBM i on Power Systems shops make use of their machine is far more expensive.
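The shape of that argument is easy to sketch with back-of-the-envelope math. All figures below are hypothetical placeholders, not IBM or cloud list prices; the point is only that an always-on hourly rate compounds quickly over a multi-year service life:

```python
# Hypothetical cost comparison: owning an overprovisioned server for its
# full service life versus renting comparable cloud capacity 24/7.
# Every number here is an illustrative assumption, not a vendor price.

YEARS = 6
HOURS_PER_YEAR = 8766            # 365.25 days * 24 hours

server_purchase = 150_000        # assumed up-front hardware cost
annual_maintenance = 15_000      # assumed support/maintenance per year
on_prem_total = server_purchase + annual_maintenance * YEARS

cloud_rate_per_hour = 8.00       # assumed rate for a comparable instance
cloud_total = cloud_rate_per_hour * HOURS_PER_YEAR * YEARS

print(f"On premises over {YEARS} years: ${on_prem_total:,.0f}")
print(f"Cloud 24/7 over {YEARS} years:  ${cloud_total:,.0f}")
```

Under these assumed numbers the always-on cloud bill comes out well above the owned machine; the arithmetic flips in the cloud's favor only when utilization is low or bursty, which is exactly the workload profile most IBM i shops do not have.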

Moreover, IBM i shops have long since figured out how to make use of that excess capacity when it is not needed for running online transaction processing (OLTP) workloads, supporting partitions with other infrastructure workloads like file serving or Web serving or even analytics and batch processing. And at some point, we suspect that future Power Systems machines will be running machine learning training models by night and applying machine learning inference by day, embedded in the applications themselves.

The point is, while the cloud utility model is attractive from an intellectual standpoint, and being able to scale workloads up and down and to turn them off and therefore not pay for them when you are not using them is truly evolutionary, it just isn't all that valuable for IBM i shops. And as evidence, all we need to do is talk to the big clouds. IBM has 125 customers on its Power Systems Virtual Server cloud instances, and the other true cloud providers have several dozens to hundreds of their own. There are even more companies that have what are really hosted IBM i instances, which are not utility as we have defined it, where you can turn it on and turn it off at will. Call it 500 to 1,000 true cloud customers and maybe several thousand hosted customers, against an IBM i base that numbers somewhere between 120,000 and 150,000 unique customers, depending on who you ask.

This is after a decade and a half of very hard pushing by many companies, many of which are listed in the Related Stories section below. And while many of these companies have been successful, it is hard to say that cloud has taken the IBM i base by storm the way it has for other customers. We are beginning to think that IBM i shops need something that feels like cloud in terms of the operational expense pricing model, but really is a combination of hosting plus managed services layered on top that solves real problems.

Think about it. The public clouds are successful because developers needed a cheap place to try out new ideas and new services to make new kinds of applications, and when their companies were successful (think of Netflix running on AWS), they needed to scale like crazy as well as increase their application scope to try to make some money. The big clouds solved the infrastructure problems of millions of developers and for several thousand, and now several tens of thousands of, enterprises. While there are some companies that have gone all in with AWS and other clouds, this is a lot more rare than anyone wants to talk about. IBM is right that hybrid cloud models, mixing on premises and cloud infrastructure, are the future for most companies.

IBM i shops are not fearful, but they are conservative. There is a lot of talk about how IBM i shops are afraid of change, afraid of loss of control, and afraid of the lack of security out there on the cloud. They aren't afraid of change; most IT managers, system administrators, and programmers in the IBM i space have seen so much change in their many decades that it would make your head spin if you were born after 1990. They are not believers in change for the sake of change, no question about that. So let's just put to bed the idea that IBM i shops are afraid of anything.

They surely are skeptical of some of the claims people make about cloud being cheaper than on premises infrastructure, and from the survey data that we have seen, they are indeed worried about security and performance on what is in essence a shared utility. They have data sovereignty issues, many of them compelled by law in financial services, insurance, healthcare, and other industries. They rightly worry about connectivity between their users and the systems running in the cloud, and because of the pricing complexity of cloud services, they worry about how they can budget the costs.

There is a lot to worry about, and no one wants to go first and find out about the differences between on premises and the cloud the hard way. And even though they pay a premium for their IBM i on Power Systems iron, they can't get nickel-and-dimed to death on a cloud (or dollared or ten-dollared, for that matter). They want to bring order to the financing of IT, but they don't want to lose control of IT. That is taking it too far, and that is why we are seeing so many datacenter repatriations after a wave of all-in cloud customer stories.

But we think the issue of resistance to the cloud among the vast majority of IBM i customers is even larger than all of this. After watching this for years, we have come to the conclusion that IBM i shops want a full, vertically integrated experience out of their infrastructure provider. This is the ideology that the AS/400 represented and that the IBM i platform continues. And we think they want to throw back all the way to what IBM originally delivered with the System/360 mainframe, when capacity on the machines was rented, often located in a service bureau because few companies could afford to buy mainframes, and Big Blue provided all kinds of training and programming services to help customers get the full use of the capacity they bought. The capacity was expensive and the help was free.

These days, the capacity is nearly free thanks to Moore's Law, and the help that IBM i shops, with an aging population and a large technical debt, so desperately need is too expensive. Something has to give, and someone needs to provide a vertically integrated set of hardware, software, and services that helps customers get their platforms, and the applications that run on them, modernized. Updating the hardware is necessary, but not sufficient. We need a utility model for application programming and modernization as much as we need a utility model for hardware capacity and technical support. And anyone who can bring these all together will probably be able to get IBM i shops excited about what will still very likely be called the cloud. But we will all know it is more than that.

Public Cloud Dreams Becoming A Reality for IBM i Users

Comarch's PowerCloud Gives IBM, Microsoft, And Google A Run For The Money

Thoroughly Modern: Clearing Up Some Cloud And IBM i Computing Myths

Skytap To Expand IBM i Cloud Offering

IBM i on Google Cloud Appears To Be Stuck in Alpha

Skytap Offers Deals and Discounts in IBM, Azure Clouds

IBM i Headed To Azure By Way Of Skytap

Microsoft Wants to Migrate Your IBM i Code to Azure

IBM i Clouds Proliferating At Rapid Clip

It's Getting Cloud-i In Here

Big Blue Finally Brings IBM i To Its Own Public Cloud

Public Cloud Dreaming For IBM i

A Better Way To Skin The IBM i Cloud Cat

Blue Chip Builds Out 1.5 Million CPW IBM i Cloud

Key Info Unlocks Its Cloud

Deconstructing IBM i Cloud Migration Myths

Steady Growth For The Connectria Cloud

Sirius Considers Expanding Its Power Cloud Capacity

Mobile, Modernization, And Cloud See The Money In 2013

Abacus Wants You To Run In Its Cloud, And For Your Health

IBM Buys SoftLayer To Build Out Hosting, Cloud Businesses

Corus360 Builds Power Systems Cloud In Atlanta

Infor and Abacus Launch System i Cloud

One MSP's Clear View Of The Future Of Cloud ERP

I, Cloud-i-us

IBM's Power-Based SmartClouds on the Horizon

Wanted: Cloud-i i-nfrastructure

See the article here:
What IBM i Shops Want From Cloud, And How To Do It Right - IT Jungle

Server market size to reach $145.31 billion by 2028 – Help Net Security

The global server market size is expected to reach $145.31 billion by 2028, according to ResearchAndMarkets. It is expected to expand at a CAGR of 7.8% from 2021 to 2028.
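The two figures in the projection can be cross-checked with the standard compound-growth formula. The snippet below (an illustrative sanity check, not part of the ResearchAndMarkets report) back-solves for the 2021 base that the 2028 target and 7.8% CAGR imply:

```python
# Back out the implied 2021 market size from the report's projection:
# value_2028 = value_2021 * (1 + CAGR) ** years
target_2028 = 145.31   # $B by 2028, per the report
cagr = 0.078           # 7.8% compound annual growth rate
years = 2028 - 2021    # 7 compounding periods

implied_2021 = target_2028 / (1 + cagr) ** years
print(f"Implied 2021 market size: ${implied_2021:.1f}B")  # roughly $85.9B
```

So the projection is internally consistent with a 2021 server market somewhere in the mid-$80-billion range.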

The demand for servers is anticipated to grow considerably over the forecast period owing to the growing focus on the timely update of IT infrastructure worldwide. The rising adoption of data analytics among enterprises to understand consumer trends has resulted in the growing adoption of IT networking equipment.

Furthermore, the rollout of 5G networks and technologies such as the Internet of Things (IoT), cloud computing, and virtualization is expected to fuel the demand for high-performance computing servers.

The rising preference for contactless payments and remote working amid the COVID-19 pandemic is expected to drive the need for high-speed data processing and storage capacity across various industry verticals.

Advanced technologies have paved the way for connected appliances and autonomous vehicles, which has prompted IT infrastructure companies to opt for the latest, advanced storage solutions, including flash memory and solid-state drives (SSD), for storing crucial business data.

Meanwhile, the demanding and changing configurations required by cloud service providers are driving the demand for servers. For instance, in May 2020, Facebook released its third generation Yosemite scalable server, which is equipped with Cooper Lake CPU and six memory modules. Such developments are expected to cause an increase in the average selling prices of servers, which is expected to subsequently benefit the market growth.

Several enterprises are shifting to managed data center services from colocation data centers owing to the cost advantages offered by managed data center services.

Managed data centers allow enterprises to adopt virtual servers by renting the networking equipment, connecting devices and peripherals, and cloud space. The cloud server space can be private or shared, which again allows the enterprises to reduce the total cost of ownership.

The market is witnessing increasing competition between original equipment manufacturers (OEMs) and original design manufacturers (ODMs). OEMs manufacture servers and sell them through resellers and distributors, while ODMs design and manufacture similar servers and sell them directly to the customer.

Besides, ODMs cater to the demand for servers customized according to the user configuration. The increasing demand for customized requirements is expected to drive server sales through ODMs.

The market is characterized by intense competition among established market players. Key market players are focused on product innovation and the introduction of new technologies to their server portfolios. For instance, in September 2019, Dell EMC introduced new products in its PowerEdge server portfolio.

These new servers are equipped with 2nd Gen AMD EPYC processors, which help to easily manage the platform and offer superior performance to the user. The new servers are built specifically for modern data centers for multi-cloud approaches.

View original post here:
Server market size to reach $145.31 billion by 2028 - Help Net Security

Unisys Named a Leader in Next-Gen Private and Hybrid Cloud Managed Services by Advisory Firm ISG in the US, UK and Brazil – PRNewswire

BLUE BELL, Pa., Sept. 27, 2021 /PRNewswire/ -- Unisys Corporation (NYSE: UIS) today announced that leading global technology research and advisory firm Information Services Group (ISG) has recognized the company as a global leader for its cloud and infrastructure solutions in reports published in the U.S., U.K. and Brazil.

The ISG Provider Lens "Next-Gen Private/Hybrid Cloud - Data Center Services & Solutions" report, published in the third quarter, summarizes the relative capabilities of more than 50 software vendors/service providers. Each provider is positioned based on quantitative data collected from providers, ISG internal data and/or data obtained through secondary research. In each quadrant, providers are categorized as being Leaders, Product Challengers, Contenders or Market Challengers.

In the U.S. report, ISG ranks Unisys as a "Leader" in the Managed Services (Mid-Market) quadrant, which reflects a provider's ability to offer ongoing management services for private and hybrid clouds, as well as platforms that comprise physical and virtual servers and networking components. Leaders in this quadrant have established a proven track record of helping clients in planning the transformation of each workload and maximizing the performance of those workloads in the cloud.

According to the U.S. report, "Unisys is highly capable of offering a complete range of traditional and hybrid cloud infrastructure management services to help clients with transformation engagements that are cost effective, secure and efficient, by using its proprietary solutions such as CloudForte platform and other resources." ISG also observed that "Unisys focuses on offering professional services, cloud migration, security and managed services to enterprises of all sizes and has a strong security practice, which allows it to cater to the highly regulated markets."

In addition to the U.S. report, ISG issued versions of The ISG Provider Lens "Next-Gen Private/Hybrid Cloud - Data Center Services & Solutions" report for the U.K. and Brazil. Unisys was named a "Leader" in the "Managed Services (Mid-Market)" quadrant in the U.K and a "Leader" in the "Managed Services (large accounts)" quadrant in Brazil.

"This acknowledgment from a premier advisory firm like ISG reaffirms our reputation and expertise in managed cloud, network and security solutions," said Mike Morrison, senior vice president and general manager, Cloud and Infrastructure Solutions, Unisys. "We deliver secure, end-to-end solutions across public, private, hybrid and multi-cloud environments that enable our clients to transform their businesses by maximizing the operational and financial benefits of cloud and infrastructure transformation."

This is the latest recognition for Unisys cloud and infrastructure solutions. Leading global analyst firm NelsonHall recently named Unisys a Leader in cognitive and self-healing IT infrastructure management and has also named Unisys a Leader in the vendor evaluation for Cloud Infrastructure Brokerage, Orchestration & Management.

To learn more about this research and why ISG recognizes Unisys as a Leader, click here or download the report here.

About Unisys
Unisys is a global IT solutions company that delivers successful outcomes for the most demanding businesses and governments. Unisys offerings include digital workplace solutions, cloud and infrastructure solutions, enterprise computing solutions, business process solutions and cybersecurity solutions. For more information on how Unisys delivers for its clients across the commercial, financial services and government markets, visit http://www.unisys.com.

Follow Unisys on Twitter and LinkedIn.

RELEASE NO.: 0927/9849

Unisys and other Unisys products and services mentioned herein, as well as their respective logos, are trademarks or registered trademarks of Unisys Corporation. Any other brand or product referenced herein is acknowledged to be a trademark or registered trademark of its respective holder.

UIS-C

SOURCE Unisys Corporation

Read the original here:
Unisys Named a Leader in Next-Gen Private and Hybrid Cloud Managed Services by Advisory Firm ISG in the US, UK and Brazil - PRNewswire

Arm Neoverse: Powering the Next-Generation of High-Performance Computing – Eetasia.com

Article By : Stephen Las Marias

Arm's Neoverse platform and ecosystem can help foster innovation and growth with successful deployment in the hyperscale and enterprise cloud data centers.

India's digital economy is in a stage of exciting growth. With over a billion mobile phones in use in the country and around 700 million internet subscribers, the opportunities for an ecosystem powered by digitalization are endless.

In fact, India is now one of the leaders in data consumption and generation worldwide. The outbreak of the COVID-19 pandemic in 2020 further accelerated the adoption of cloud computing in the country as enterprises sent employees to work from home and schools turned to online education. Add to this the demand for online services driven by video streaming and gaming as people stayed home amid lockdowns and movement control orders, along with social media platforms and increasing e-commerce activity.

All of these trends are fueling the growth of the country's data center infrastructure industry. According to JLL India, India's data center industry is expected to reach 1,007 MW by 2023, more than double its existing capacity of 447 MW.

"The growth of the digital economy is going to lead to the growth of India's data center industry over the next few years," said Eddie Ramirez, Sr. Director of Marketing, Infrastructure, at Arm. "The Ministry of Electronics and IT (MeitY) published a report last year saying that by 2025, there will be $4.9 billion spent on data centers within the country."

Ramirez leads the go-to-market and ecosystem team for Arm's infrastructure line of business. "For us, infrastructure is everything in the data center, including the networks, such as 5G, that power data that goes across the world," he said. "We are the group that's looking at how to improve compute power for the infrastructure."

In a recent webinar titled "Disrupting Cloud Data Centers with Arm Neoverse," Ramirez discussed how Arm's Neoverse platform and ecosystem can help foster innovation and growth, with successful deployments in hyperscale and enterprise cloud data centers. He also highlighted the comprehensive hardware and software ecosystem that enables and optimizes customers' application development and deployment on Arm-based infrastructure.

The Neoverse Platform

Conceptualizing the Neoverse platform, Ramirez said they started with a simple question: "How do we build a platform that can get you more compute from the same power output?"

"If all these data centers are going to be built over the next five years in India, how do we scale the compute to use that space most efficiently? Every data center has a certain power footprint that it has to operate within," he said.

That was the fundamental question that Arm addressed with the Neoverse platform. Designed specifically for the infrastructure and cloud computing segments, Arm's Neoverse platform (starting with the N1 and E1, released in 2019, followed by the N2 and V1, released this year) is the foundation for next-generation cloud-to-edge infrastructure, delivering high-performance, secure, and scalable computing solutions along with a robust hardware and software ecosystem.

Since 2018, when Arm first announced Neoverse, the company has seen a wave of adoption throughout cloud-to-edge infrastructure. The rich diversity of hardware and software solutions that have come to market enabled by Neoverse-based compute is now deployed in cloud data centers, HPC systems, 5G networks, and out to edge gateways, providing cost savings, power efficiency, and compute performance gains.

"We are now seeing cloud service providers like AWS [Amazon Web Services] and Oracle adopting Arm and offering compute instances that are both high performing and offer cost advantages," said Ramirez.

Designed by its Annapurna Labs team, AWS's own server CPU, called Graviton2, delivers 64 Arm Neoverse N1 cores on 7nm manufacturing technology. With Neoverse, AWS was able to demonstrate 40% better price-performance running on Graviton2-based compute instances than what it had before with legacy architectures.

"That's really significant, because not only are they able to build their own processors, but they are also now more in control of their supply chain," said Ramirez. "But to actually be able to pass on these very significant performance and cost savings to their end customer really puts them in a different class of cloud providers."

AWS now has several EC2 compute instances running on Graviton2. Most recently, the company launched new extra-high-memory X2gd instances, which, in some cases, are providing over 80% better throughput compared to the older X1 models.

"We were so excited by the performance benefits that we at Arm are now shifting more of our EDA workflows to Graviton2. And we're happy with the overall performance and TCO benefits we have achieved," said Ramirez.

Another cloud service provider embracing Arm's Neoverse platform is Oracle. Oracle is known for its database software, but it also has quite a significant presence with Oracle Cloud. It launched its Arm-based cloud instances utilizing two-socket servers equipped with Ampere Computing's Altra 80-core CPUs, for a total of 160 Arm Neoverse N1 cores per server. The systems include 1TB of memory and 250Gbps networking. "This powerful server allows customers the flexibility to right-size compute and memory to support their needs," explained Ramirez.

He said Oracle was the first to announce penny-per-core-hour pricing that customers can use, bringing the cost of compute down significantly for customers that are using the public cloud.
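What a penny per core-hour works out to in practice is easy to illustrate. The calculation below is illustrative arithmetic based only on the rate quoted in the article (the VM size and the 730-hour billing month are assumptions; check Oracle's current price list for real rates):

```python
# Translate "a penny per core-hour" into a monthly bill for a small,
# always-on virtual machine. Figures other than the rate are assumptions.
rate_per_core_hour = 0.01   # $0.01 per core-hour, as quoted in the article
cores = 4                   # hypothetical VM size
hours_per_month = 730       # common cloud billing convention (365 * 24 / 12)

monthly_cost = rate_per_core_hour * cores * hours_per_month
print(f"{cores}-core VM running 24/7: ${monthly_cost:.2f}/month")  # $29.20
```

At that rate, even a VM that never sleeps costs under $30 a month, which is the price point Ramirez is pointing at when he says it brings the cost of compute down significantly.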

Enabling the Next-Generation of HPC

"One of the things that differentiates Neoverse from some of the x86-type architectures out there is that we focus our designs on single-threaded performance versus using this concept of multithreading, where different threads share the same core," said Ramirez.

This brings more predictable performance, according to him. "If you are using a public cloud on an Arm Neoverse core, you can be sure that your virtual machine is accessing the full core on its own; you are not time-sharing with other customers," Ramirez explained.

This also provides benefits from a security standpoint because you are isolated to that single core.

And then it is not just about the cores, but the interconnects. "Our Neoverse CPU cores combined with our Coherent Mesh Interconnect products enable superior performance for high-core-count systems," said Ramirez.

Last but not least, it is also about the generational uplift. "The other thing that we look at is how we ensure that we deliver generation-to-generation performance improvements. With our newer roadmap on the N series and V series, we are now able to achieve a 40% to 50% performance uplift. That's really kind of been unheard of. And that level of performance improvement from one generation to the other is very unique to Arm," said Ramirez.

The future CPU designs that will be powered by Arm Neoverse will enable continued scaling in data center performance.

"We are already seeing traction with our new Neoverse platforms. One example is MeitY in India, which has decided to license the Neoverse V1 platform for its exascale HPC CPU design. They join other exascale HPC initiatives in Europe, via SiPearl, and in Asia, through ETRI, who have also announced adoption of Neoverse V1," said Ramirez.

Enabling an Ecosystem

"At Arm, we are working every day to ensure that software can easily be developed and deployed on Arm platforms," said Ramirez. "We see a future where all of the world's shared digital data will, at some point in its lifetime, be processed on Arm. To execute this vision requires significant investment in software and support for the developers who write the code."

He noted that developers are also rapidly adopting cloud-native software. "We have a significant footprint of OSS projects and independent software vendors already supporting the Arm 64-bit architecture," Ramirez said. "We were really excited to learn from Docker that there are now over 100,000 containers written for Arm processors on their site today."

He added that the other part of cloud native is deploying CI/CD (continuous integration/continuous delivery) tools to ensure that anything developers change or modify, and any features they add to their software, get tested daily.

"One of the things that we have done to help spur that is the Works on Arm program, where we are offering CPU cycles (they could be virtualized or they could be bare metal servers) that developers can take advantage of for free as part of their co-development process," said Ramirez. "The ecosystem has come a long way on Arm, and it also helps that we have partners, like AWS, who are contributing to the ecosystem, as well as several independent software vendors that have made the effort to port and optimize on Arm."

"There are now several OEMs and ODMs offering Arm-based servers in India. Companies like Foxconn, Wiwynn, and Gigabyte have deployed multiple SKUs of Ampere Altra-based servers," said Ramirez. "We continue to see more OEMs engaging us every day. And we are also excited to work with local vendors in India who may be interested in supporting Arm-based servers as well."

Innovations in the Pipeline

Arm is a company that focuses on relentless IP innovation, and one of the things it introduced earlier this year is the Armv9 architecture, its first major upgrade in a decade.

According to Ramirez, one of the improvements in v9 is security, enabling things like confidential compute. "This is where you can ensure that a customer's user data is effectively protected not only within the processor, but even within the virtual machine or within the container application that runs on that processor," he explained. "We are also introducing enhancements to performance. Part of v9 is our scalable vector technology. Vectors, essentially one-dimensional arrays of data, have been around since the first supercomputers. With Armv9 and the SVE2 upgrade, chip designers now have a lot more flexibility in the vector lengths that they want to deploy. This will all help with delivering higher performance for workloads like genomics, computer vision, VR, and even machine learning on CPU."
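The "vector-length agnostic" idea behind SVE2 can be sketched in software terms (this is a loose analogy in plain Python, not Arm code): the same loop consumes data one "vector register" at a time, and the result is identical whatever register width the hardware happens to implement; wider vectors simply need fewer iterations.

```python
# Analogy for vector-length agnostic execution: process data in chunks of a
# configurable "vector length" and verify the answer is width-independent.
def vla_sum(data, vector_length):
    """Sum `data` in chunks of `vector_length` elements, mimicking a vector
    unit that consumes one full register of lanes per iteration."""
    total = 0
    for i in range(0, len(data), vector_length):
        chunk = data[i:i + vector_length]  # one "vector register" of work
        total += sum(chunk)                # one "vector operation"
    return total

data = list(range(1000))
# A 4-lane and a 16-lane "machine" produce the same result.
assert vla_sum(data, 4) == vla_sum(data, 16) == sum(data)
```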

India and Beyond

Guru Ganesan, President of Arm India, sees wide adoption of Arm technology in the Indian cloud computing and telecom space in the coming years.

"Public cloud end-user spending in India is forecast to be over $4B in 2021. Large enterprises, medium businesses, and start-ups in India will see significant performance and cost benefits by moving to higher-performance and power-efficient Arm-based CPUs in the cloud. Additionally, as companies become more conscious of the environmental impact, it is important to consider the energy efficiency of Arm-based computing. Our engagement with the Indian government on the HPC front is progressing well, with MeitY starting to develop an HPC processor based on Arm Neoverse technology," said Ganesan.

"We have done a few supercomputing projects; most notable is the Fugaku supercomputer in Japan, where we helped enable RIKEN to build the most powerful supercomputer in the world using Fujitsu's A64FX CPU," said Ramirez. "That has delivered almost 7.6 million cores of processing power, so we are very excited to see what we can do with entities like MeitY in India. Not only for the cloud space, but for the HPC space and academics, or companies using supercomputing power, we are hoping that we bring such high-performance solutions to the Indian market."

Arm is also working with other countries beyond India, in projects including 5G network deployments.

"India's telecom ecosystem, consisting of the network operators as well as OEMs, is actively pursuing development of modular, interoperable, best-in-class hardware and software elements to build state-of-the-art, scalable, and manageable 5G networks," said Ganesan. "Arm-based products, built on the concept of heterogeneous compute, offer a complete set of solutions all the way from the radio unit to the core, to enable deployment of high-performance networks with the lowest TCO."

"We've been working very closely with different countries in Southeast Asia on how to enable, for example, O-RAN initiatives. We've been a big part of O-RAN, and this will have a big impact in Southeast Asia, as they are now looking at deploying 5G networks in different countries," said Ramirez. "That has been a big initiative for us, to participate in those standards groups, to drive this open architecture for 5G networks."

"With respect to Southeast Asia, opportunities abound both with AWS, as Eddie mentioned, and Oracle, as it is expanding its Arm instance presence in all regions globally," said Amaresh Iyer, Senior Manager, Segment Marketing, Infrastructure BU at Arm. "And the pricing Oracle offers on Arm instances in OCI is a golden opportunity for a lot of developers, especially in countries like India and in Southeast Asia, to take advantage of when it comes to testing, porting, and recategorizing their workloads on Arm, at a very low cost."

Iyer noted that another big factor offered by Arm is its sustainable, power-efficient processors. "In these countries where power is a big issue, having power-efficient data center infrastructure is very important. That's very attractive to markets like India and countries in Southeast Asia," he explained. "We see a lot of developments happening in 5G and the Internet of Things, and we also play in all of those market segments as part of the Arm IP infrastructure. And we have a global view, from edge to cloud, and Arm has an IP offering in each of those segments. These are extensive technology offerings that are secure, scalable, power efficient, and high-performance, and suitable for many different markets worldwide."


Visit link:
Arm Neoverse: Powering the Next-Generation of High-Performance Computing - Eetasia.com

Microsoft cloud storage: is OneDrive or Azure right for your business? – ITProPortal

Microsoft is one of the best cloud storage providers, and offers some of the best cloud storage for business too. But before you can dive into Microsoft cloud storage, you have to choose between two different products: OneDrive and Azure.

Microsoft OneDrive is a file storage service that integrates with the Microsoft 365 productivity suite. Employees can collaborate in real time on documents in either web or desktop versions of apps like Word, Excel, and PowerPoint. Plus, employees get their own cloud storage vault where they can keep their files.

Microsoft Azure, on the other hand, is a comprehensive cloud hosting service that enables you to store files, run servers in the cloud, and much more. It's best suited for developing software, running big data analyses, or handling massive databases for the apps your business runs on. Azure storage is more expensive than OneDrive, and doesn't integrate with Microsoft 365 apps.

In this guide, we'll cover how OneDrive and Azure work with productivity tools like Microsoft 365, how file sharing and collaboration work, and how data is stored in the cloud on the two platforms. Whereas OneDrive is best for businesses that simply want cloud storage and file sharing with Microsoft 365 apps, Azure is best for businesses that want a scalable and highly advanced cloud computing platform.

We compared Microsoft OneDrive and Azure not just on their file storage capabilities, but on the totality of what the two services can do. We'll cover the following topics:

It's relatively easy to get started with OneDrive. You can add employees to your business account using their email addresses, and each employee gets their own dashboard. The dashboard is accessible on the web, as a desktop app, or as a mobile app.

OneDrive also integrates with the Windows File Explorer, enabling you to access your OneDrive cloud storage alongside files stored locally on your computer.

Azure is a bit more complicated to set up. Once you sign up, you can access the dashboard, called the Azure Portal, over the web or through desktop and mobile apps.

The Azure Portal starts out empty, but there's a list of all the services available in Azure on the left-hand side of the screen. You can add modules from this list to the Portal to build a custom dashboard and access Azure's tools. You'll need to find the Azure Active Directory module and use it to add employees to your business's Azure deployment.

Within the Portal, you can set up custom dashboards that display your virtual machines, storage systems, and more. To create a new cloud storage space, you can browse the available storage types in the left-hand services menu, and launch a new storage container from there.

It's tricky to compare pricing between OneDrive and Azure because they use completely different pricing schemes. OneDrive charges a flat monthly fee per user, starting from $5 a month for 1TB of storage per user.

Azure operates on a pay-as-you-go basis. You're charged based on the amount of data you have stored, what type of cloud servers your data is stored on, and how often you make changes to your files. As a rough estimate, storing 1TB of data with Azure typically costs around $20 a month.

You can also reserve storage with Azure for up to three years at a time. This results in a larger upfront bill, but Microsoft offers discounts of up to 38% for these extended contracts.
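To make the two pricing models concrete, here is a back-of-the-envelope calculator using the illustrative rates above ($5 per user per month for OneDrive, roughly $20 per TB per month for Azure, and the up-to-38% reserved-capacity discount). Real Azure bills also vary with storage tier and transaction volume, so treat these numbers as rough estimates only.

```python
# Rough monthly-cost sketch using the article's illustrative rates.
ONEDRIVE_PER_USER = 5.00       # USD per user per month (1TB included)
AZURE_PER_TB = 20.00           # USD per TB per month, rough estimate
AZURE_RESERVED_DISCOUNT = 0.38 # maximum discount for reserved capacity

def onedrive_monthly(users: int) -> float:
    """Flat per-user pricing: each user pays a fixed fee for 1TB."""
    return users * ONEDRIVE_PER_USER

def azure_monthly(terabytes: float, reserved: bool = False) -> float:
    """Pay-as-you-go storage cost, optionally with the reserved discount."""
    cost = terabytes * AZURE_PER_TB
    return cost * (1 - AZURE_RESERVED_DISCOUNT) if reserved else cost

print(onedrive_monthly(10))               # 10 users  -> 50.0
print(azure_monthly(10))                  # 10TB PAYG -> 200.0
print(round(azure_monthly(10, reserved=True), 2))  # reserved -> 124.0
```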

One of the major differences between OneDrive and Azure is how they integrate with Microsoft 365, Microsoft's productivity suite, which includes apps like Word, Excel, PowerPoint, Teams, and Outlook.

OneDrive is baked into all Microsoft 365 apps. In any of these apps, you have the option to save files directly to your OneDrive storage, and you can set up syncing so that changes are saved to the cloud in real time.

Even better, it's possible for multiple people to work on a file simultaneously if it's stored in the cloud. That's true whether you use Microsoft's online office apps or the desktop versions of apps like Word and Excel. So multiple employees can collaborate on a document without the risk of creating multiple divergent copies.

Azure, on the other hand, doesn't offer special integrations with Microsoft 365 apps. You'll need to manually save files created in Microsoft 365 to a storage container in Azure. Alternatively, you can run a Microsoft 365 deployment inside the Azure cloud and save files directly to Azure's storage containers, but this requires you to first set up a virtual machine in the cloud.

In addition, you must be online at all times to use Microsoft 365 apps with Azure, whereas you can work offline with OneDrive, and files will save to the cloud automatically when you reconnect.

Microsoft Azure is a comprehensive cloud computing platform, not just an online cloud storage service. In fact, cloud storage in Azure is designed to interface with the virtual machines, workflows, and software development environments you create on the platform. This enables you to run big data analyses or to access information databases when running applications in the cloud.

Some of the features Azure offers in this respect include virtual Windows and Linux machines with scalable computing power, premade AI models for analyzing data, and developer tools for building apps in the cloud. Azure also offers a content delivery network and tools for enabling single sign-on for your employees.

OneDrive, in contrast, is simply a cloud storage service. You can move files around in the cloud, but that's it. Any computing tasks that cannot be done using the online Office 365 suite must be done on your local network.

The process for backing up files is somewhat different between OneDrive and Azure.

With OneDrive, there are no choices that you need to make about how your files are backed up. Microsoft automatically stores your data in multiple data centers for redundancy. Files are saved in hot storage, meaning that they can be accessed from anywhere in the world immediately.

OneDrive has a service level agreement (SLA) of 99.9%, meaning that Microsoft guarantees no more than about 43 minutes of downtime per 30-day month.

With Azure, you have a number of choices to make about how to store your data. Azure offers the Files module for storing file libraries, the Blob module for storing unstructured data, and the Tables module for storing NoSQL data. On top of that, you'll need to decide whether your data is stored hot, for active use, or cold, for archival purposes.

You also get to choose the primary data center region your data is stored in with Azure. Of course, all data is backed up to multiple data centers for redundancy. Azure comes with an SLA of 99.99%, meaning less than about 4.3 minutes of downtime per 30-day month.
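Converting an SLA percentage into an allowed-downtime budget is simple arithmetic over the minutes in a billing period; for a 30-day month:

```python
# Translate an uptime SLA percentage into maximum downtime per 30-day month.
MINUTES_PER_MONTH = 30 * 24 * 60  # 43,200 minutes in a 30-day month

def max_downtime_minutes(sla_percent: float) -> float:
    """Downtime budget (in minutes) implied by an uptime SLA percentage."""
    return MINUTES_PER_MONTH * (1 - sla_percent / 100)

print(round(max_downtime_minutes(99.9), 1))   # 99.9%  -> 43.2 minutes
print(round(max_downtime_minutes(99.99), 2))  # 99.99% -> 4.32 minutes
```

Note how each extra "nine" cuts the downtime budget by a factor of ten.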

Both OneDrive and Azure enable you to share files both inside and outside your organization.

With OneDrive, file sharing is relatively simple. You can select any file or group of files and share them via a link or invitation. In addition, you can password-protect shared files or set an expiration date on sharing links. Administrators have the option to limit sharing of certain types of files outside your organization.

With Azure, files are by default accessible to anyone with access to your business's Azure Portal. It is not possible to create personal, employee-specific file storage systems within Azure. However, you can control access to files using the Azure Active Directory module. This module also lets you invite guest users from outside your business to access specific files for collaboration.

If OneDrive and Azure aren't a fit for your business, there are plenty of alternatives to choose from. However, you'll still need to make the same decision as to whether your business needs a cloud storage platform like OneDrive or a cloud computing platform like Azure.

OneDrive's main competitor is Google Drive, which is part of Google Workspace. Just as OneDrive integrates with Microsoft 365, Google Drive integrates with Google productivity apps like Docs, Slides, Gmail, and Calendar. However, it's worth pointing out that Google Workspace only includes web-based apps, whereas Microsoft 365 apps are available in both web and desktop versions.

Google Workspace plans start at $6 per user a month and include 30GB of storage per user, so you won't get as much storage per dollar as with OneDrive. The main reason to go with Google Workspace is if you prefer Google's productivity suite to Microsoft's. You can find out more about this cloud storage service in our Google Drive review.

An alternative to Azure for cloud computing is Amazon Web Services (AWS). AWS is years ahead of Azure in deploying advanced computing resources, including quantum computing applications. It also offers ultra-cheap storage through AWS Glacier, a service for long-term storage of rarely accessed data. For businesses with a lot of data, AWS can be cheaper than Azure, and offers a wider variety of big data analysis pipelines.

OneDrive and Azure are very different Microsoft cloud storage platforms that serve different purposes. OneDrive is a file storage platform that integrates seamlessly with Microsoft 365 productivity apps. Azure is a cloud computing platform thats designed to facilitate big data analysis, software development, and cloud server deployment.

For businesses that simply want to store files in the cloud to promote collaboration and reduce dependence on physical hard drives, OneDrive is likely to be the better choice. It's very easy to use, and if your business uses Microsoft 365 apps like Word, Excel, Outlook, or Teams, you likely already have access to OneDrive. OneDrive makes file sharing simple, and employees can even edit files at the same time.

For businesses that currently operate in the cloud or want to take advantage of cloud computing, Azure may make more sense. While it's more complicated to set up than OneDrive, Azure makes your data available to applications and workflows running in the cloud. With Azure, you can flexibly access as much computing power as you need to analyze big data or develop your own custom applications.

OneDrive has almost everything you want in a cloud storage platform. It's affordable and highly secure, with robust encryption frameworks. Business customers also get access to a wide range of compliance and auditing capabilities...Deep Microsoft 365 integration makes OneDrive perfect for working online, and it is our top pick for businesses wanting a premium digital workspace and communication ecosystem. Score: 4/5

Choosing between Microsoft OneDrive and Azure can be a fork in the road for your business. Ultimately, which is better comes down to whether you just want cloud storage for business or a full-fledged cloud hosting service. If you're thinking about migrating your entire business to the cloud, make sure you set yourself up with a roadmap for success.

Visit link:
Microsoft cloud storage: is OneDrive or Azure right for your business? - ITProPortal

Veea Introduces A Breakthrough Smart Computing Hub And An Ultracompact Wi-Fi 6 Mesh Router Product At Qualcomm’s Smart Cities Accelerate Global…

STAX and BOLT Products Deliver Unprecedented Edge Processing, Full Range of Connectivity, Enterprise-Class Security, Modularity, Scalability and Ease-of-Use

NEW YORK and SAN DIEGO, Sept. 27, 2021 /PRNewswire/ -- Veea Inc., an innovation leader in integrated smart edge connectivity and computing, today introduced two new models of VeeaHub products, STAX and BOLT, at Qualcomm's signature global event, Smart Cities Accelerate 2021, being held in San Diego on September 28th and 29th. Both products will be on display with live demonstrations of several applications and solutions supported by these products including "Automation for Smart Spaces" and Trollee.


STAX is the world's most advanced and currently the only stackable, ultracompact, multi-purpose, multi-protocol edge computing product with integrated wireless access, including Wi-Fi 6, server-class processing, and mesh scalability. Its design supports VeeaWare, Veea's leading edge software platform for running containerized applications on one VeeaHub or a micro-cloud of VeeaHub units that are clustered together on a mesh network at the "Device Edge" of the user premises. STAX and BOLT are two new members of the VeeaHub platform product family, primarily based on highly integrated Qualcomm components for a wide range of smart vertical use cases.

STAX is a groundbreaking Smart Computing Hub product supported by extensive cloud-based network management, monitoring and maintenance services that are field-proven with the current models of VeeaHub for a variety of edge use cases for the past several years. Its powerful Linux server can support multiple applications concurrently on one or several VeeaHub units across the mesh network installed at the user premises in a manner similar to how Wi-Fi Access Points are installed.

VeeaWare software platform applications are all secured by a chain of trust that includes hardware secure boot supported by bootstrap and enrollment servers with Single Sign-on (SSO) across the entire platform. Additional, highly differentiated features include unique secure lightweight Docker containers with digitally signed software applications and network interfaces.


Finally, VeeaHubs utilize the most powerful commercially available encryption for all network connections on the local network and for WAN connections to cloud-managed services and cloud-managed security functions, as well as other remote connections. STAX offers a powerful and feature-rich alternative to simple Wi-Fi Access Points (APs) and basic 4G/5G WAN connectivity product solutions. Its highly integrated platform provides enterprise-class tri-band Wi-Fi 6, Internet of Things (IoT) connectivity, a 64-bit quad-core Linux processor and up to 2TB of local storage. Cellular connectivity solutions are offered through optional 4G (Gigabit LTE) and 5G (Sub-6 GHz) SD-WAN stackable modules as either a primary or a failover WAN connection with or without an optional cloud-based full security stack that exceeds the functionality and features of most next generation firewalls (NGFW).

STAX, with a 4G or 5G module, can be bootstrapped and activated over a cellular connection with its combination of USIM, eSIM and vSIM capabilities, which supports most carrier networks globally by virtue of switching between any of its SIM functions at activation. STAX is implemented entirely through one 12-layer PCBA, measuring 4 in. x 4 in. on its sides, that integrates all STAX functionality to provide for an affordable product in the most compact form factor compared to other products in its category on the market today.

STAX incorporates IoT connectivity including Bluetooth Low Energy (BLE) and Classic and a range of IEEE 802.15.4 protocol-based solutions including Zigbee. VeeaHub nodes installed on the local network instantaneously create a self-organizing Connectivity Mesh, called vMesh, that simultaneously provides for a Computing Mesh, a microservice-based Service Mesh, an Application Mesh and an Edge Intelligence Mesh that also supports a highly accurate indoor positioning solution.

With STAX, Veea provides yet another comprehensive edge platform product delivering all-in-one SASE-ready edge connectivity and server-grade edge processing with secure edge applications and service management across a scalable mesh network of VeeaHubs. Other STAX product features include:

Modular design supporting add-on functionality with stackable modules, with more modules under development for introduction over the next twelve months

Simple plug-and-play setup with PoE supported through a stackable module

Highly compact form-factor (approx. 4.2 in. x 4.2 in. x 2 in. or 10 cm x 10 cm x 5 cm) made possible through patented innovations

Highly unique integrated antenna/heat-sink design

Fan-less with no special cooling required

With multiple connectivity capabilities, server functions, simplicity of its activation and "fleet-managed"-like remote monitoring and maintenance features, it is an ideal product to be offered by service providers, including ISPs, MNOs, and MSOs, for many use cases including as a highly secure remote connectivity solution for Work-from-Home (WFH) use cases.

Veea, in parallel, introduces BOLT, which brings enterprise-class blazing fast Wi-Fi 6 mesh router technology for highly simplified deployments in the same form factor, with optional features such as "Automation for Smart Spaces", which is ideal for home, hotel and dormitory rooms, nursing homes, and many other consumer-friendly use cases. Automation for Smart Spaces provides for an "all-in-one" solution for integration with products and peripherals from several leading vendors' lighting, HVAC, air quality management, security alarm, cameras, door locks, shades, garage door openers and other home or office automation use cases.

"The VeeaHub STAX and VeeaHub BOLT extend the capabilities of our edge platform, which is raising the bar in edge computing simplicity," said Allen Salmasi, founder and CEO of Veea. "These breakthrough products represent the best-in-class solutions for edge computing at the Device Edge, which is the first and the more critical network touchpoint prior to making the WAN connection. The solution architecture substantially minimizes the backhaul requirements for data intensive activities, improves resiliency for mission critical edge computing applications and facilitates data ownership and privacy to meet regulatory requirements such as GDPR. We're making it simple, predictable, cost effective and most efficient to support a wide range of edge use cases as edge computing applications grow exponentially. With a very small physical footprint that nonetheless provides server-grade processing and scalable storage, we have broken the edge barrier with products that include zero touch plug-and-play ease of use and multilevel security features that the market demands today."

Learn more about VCH25 STAX, VHC20 BOLT, and all of Veea's products here. Learn more about the Qualcomm Smart Cities Accelerate 2021 event here.

Mr. Salmasi will take the Main Stage to present his vision for the next generation of edge technologies and real-world applications on Tuesday, at 11:40 AM.

About Veea

Veea is redefining and simplifying secure edge computing that improves application responsiveness, reduces bandwidth costs, and eliminates central cloud dependency. VeeaHub Smart Computing Hubs integrate a full range of connectivity options, application processing power, and a full security stack to form an elastic edge computing platform with a dynamic connectivity and application mesh that can easily be deployed and centrally managed from the cloud. Veea Edge Services run across this application mesh to deliver secure remote access, IoT/IIoT/AIoT, and a wide range of smart applications. These elements along with a range of groundbreaking vertical-specific applications comprise the Veea Edge Platform, serving the needs of organizations across Smart Buildings, Smart Energy, Smart Cities, Smart Construction, Smart Farming, Smart Retail, and other industry verticals. Veea's Virtual Trusted Private Networking (vTPN) solution, based on a unique and highly secure VPN technology and cloud-managed full stack security services, makes it simple and affordable to securely connect for most smart vertical market applications including the remote and work-from-home workforce and branch offices. Veea was formed in 2014 and is headquartered in New York, NY, with its development activities primarily located in its engineering offices in Bath, UK, and Iselin, New Jersey, USA, along with sales and support offices located at multiple locations throughout the US, France, South Korea, and Brazil. Veea was named by Gartner as a 2021 Cool Vendor in Edge Computing. For more information, visit veea.com. Follow us on Twitter and LinkedIn.


View original content to download multimedia:https://www.prnewswire.com/news-releases/veea-introduces-a-breakthrough-smart-computing-hub-and-an-ultracompact-wi-fi-6-mesh-router-product-at-qualcomms-smart-cities-accelerate-global-conference-301385499.html

SOURCE Veea Inc.

Excerpt from:
Veea Introduces A Breakthrough Smart Computing Hub And An Ultracompact Wi-Fi 6 Mesh Router Product At Qualcomm's Smart Cities Accelerate Global...

Cloud is hot, and it’s only going to get hotter – Channel Asia Singapore

There's no denying that cloud is arguably the most important and in-demand segment of the global IT landscape, but new research suggests the business world is far from saturated with cloud technology, leaving plenty of room for further growth.

Indeed, fewer than 20 per cent of organisations have put more than 50 per cent of their operational data in the cloud, according to market analyst firm IDC.

This is despite organisations ranking cloud highest among investment priorities for operations over the next five years, according to IDC's recent Future of Operations survey, which looked at the top operational technology investment priorities.

Cloud, as the top investment priority identified by organisations, was followed by wireless connectivity and artificial intelligence (AI) and machine learning (ML), although actual investment in current AI and ML projects tells a more complicated story, according to IDC.

While many organisations cited AI and ML as an important future technology investment area, most survey respondents indicated that they had no plans to use AI to analyse operational data in the next several years, the firm noted.

At the same time, as indicated by IDC's survey respondents, many enterprises had yet to move their operational data from on-premises systems to the cloud.

But this is changing.

"A point of resistance just a few years ago, organisations are now prioritising investments and building strategies for putting operational data into the cloud," said Leif Eriksen, research vice president, future of operations, at IDC.

"And, while the momentum is irrefutable, organisations will need to develop a specific cloud data management strategy that addresses organisational needs and objectives," he added.

Finding the foundations for the next level of cloud maturity

Clearly, despite the existing prevalence of cloud technology, which today seems to have infiltrated just about every aspect of the business world's collective IT footprint, many companies remain very much at the beginning of their cloud journeys.

However, organisations' use of cloud is moving to the next level of maturity, spurring the adoption of a cross-functional set of services to drive innovation in a digital-first economy, according to IDC.

From the analyst firm's perspective, these so-called foundational cloud services (FCS) for compute, data, and app frameworks will drive competitive development across the whole cloud market.

In fact, IDC estimates that annual recurring revenue (ARR) derived from its FCS catch-all category will increase from just under US$100 billion in 2020 to more than US$300 billion in 2025, with a compound annual growth rate (CAGR) of 28.8 per cent.
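A quick sanity check on IDC's figures: compounding roughly US$100 billion of 2020 ARR forward at a 28.8% CAGR for the five years to 2025 does indeed land above US$300 billion.

```python
# Verify IDC's projection: $100B ARR compounding at 28.8% CAGR for 5 years.
def project(start_billions: float, cagr: float, years: int) -> float:
    """Project a starting value forward at a compound annual growth rate."""
    return start_billions * (1 + cagr) ** years

arr_2025 = project(100, 0.288, 5)
print(round(arr_2025))  # ~354, consistent with "more than US$300 billion"
```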

"Digital is now a permanent, yet dynamic fixture in our world, built on the digital infrastructure and platform technologies of a cloud foundation," said Rick Villars, group vice president, worldwide research at IDC.

"When organisations want to pursue some digital-based capability or intelligently leverage data to their advantage, they can do so because they have rapid access to the foundational cloud services offered by the leading cloud services providers," he added.

IDC defines its FCS category specifically as containing elements of the infrastructure-as-a-service (IaaS), platform-as-a-service (PaaS) and system infrastructure-as-a-service (SISaaS) markets.

Breaking down the compute, data and app frameworks areas further, IDC classes compute services as including technology elements such as virtualised x86 compute, bare metal compute, block storage, accelerated compute and software-defined compute software.

Meanwhile, data services include data management systems, object storage, file storage and event stream processing software. At the same time, app framework services include integration software, deployment-centric application platforms and AI lifecycle software.

Around these core offerings are what IDC calls usage multiplier services, which are largely low or no-fee services that encourage greater or more effective use of high-value services by making it easier to adopt, connect, deploy, track, secure and update those services. Such services include load balancing and DNS, as well as marketplaces and bundles of open-source software solutions.

Combined, the services within these combined portfolios accounted for more than half of all IaaS, PaaS and SISaaS revenue in 2020 and are expected to grow to more than two-thirds of all revenue in 2025, according to IDC.

Unsurprisingly, these service segments and the markets that house them are dominated by the top public cloud service providers on the planet, with Amazon Web Services (AWS), Microsoft, Google, Alibaba Group, IBM, Tencent, Huawei and Oracle together holding a combined market share of more than 60 per cent last year.

Enterprises on the hunt for robust partner ecosystems

It is expected that organisations will adopt a range of strategies for embracing FCS portfolios, according to IDC, with some enterprise customers set to select a primary FCS partner while others likely to choose more diversified cloud deployment strategies.

However, regardless of the FCS strategy selected, enterprises are expected to place a high priority on extensions to providers' FCS portfolios in the areas of expanded service deployment options, such as edge, network, and core, automated governance services, and robust partner ecosystems.

These factors are set to keep vendors on their toes.

"Demand for FCS is increasing, indicating customer expectations are being met by the providers in these areas. However, this is no time to rest," said Lara Greden, research director, PaaS, at IDC. "In a market characterised by rapid innovation, FCS providers must continually prove that they are willing to invest in innovation at a high level.

"Customers are seeking outcomes, not technology solutions. The key will be to differentiate, build mindshare and redefine [or] productise portfolios by use cases."

From IDC's perspective, several factors are behind the rising demand for foundational cloud services rather than similar IaaS and PaaS services from individual providers.

A big part of the differentiation between foundational and non-foundational services, according to IDC, is the available, affordable, and standardised infrastructure offered by FCS providers, which gives developers the ability to rapidly build, test, and deploy innovative applications.

Meanwhile, the availability of multiple deployment options, as with hybrid cloud, along with technologies that bring portability to applications and let customers choose the best-matched cloud provider for a given workload, is another driving factor.

Another draw is that service-based consumption of IT infrastructure has the potential to let end users reduce capital spending, optimise operating expenses and focus the efforts of IT personnel on achieving business goals rather than routine infrastructure and data management.

Finally, data-centric foundational cloud services can provide fully automated data capabilities that address the significant increases in data volumes and storage associated with mobile and edge devices, IDC said.

Earlier this year, fellow industry analyst firm Synergy Research Group claimed that 2020 marked the first time global enterprise spend on cloud infrastructure services outstripped enterprise spending on data centre hardware and software.

Synergy's analysis suggested enterprise spending on cloud infrastructure services continued to ramp up aggressively last year, growing by 35 per cent, to reach almost US$130 billion.

At the same time, in 2020, worldwide spending on the enterprise data centre hardware and software typically used in on-premises environments, comprising servers, storage, networking, security and associated software, was US$89 billion.

The analyst firm pointed out that the ratio between the two market segments continued a decade-long trend of explosive growth in cloud and virtual stagnation in the market for enterprise-owned data centre equipment.
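As a quick sanity check on the Synergy figures quoted above, the reported 35 per cent growth and the two 2020 totals imply a prior-year cloud figure and a cloud-to-on-prem ratio; the short sketch below derives both (the implied 2019 number is a back-calculation, not a figure from the article):

```python
# Arithmetic check on the Synergy figures quoted above, in US$ billions.
cloud_2020 = 130.0    # enterprise spend on cloud infrastructure services
on_prem_2020 = 89.0   # enterprise data centre hardware and software
growth = 0.35         # reported year-over-year growth for cloud spend

# Back-calculate the implied 2019 cloud spend from the growth rate.
implied_cloud_2019 = cloud_2020 / (1 + growth)
ratio = cloud_2020 / on_prem_2020

print(f"Implied 2019 cloud spend: ~${implied_cloud_2019:.0f}B")  # ~$96B
print(f"2020 cloud vs on-prem ratio: {ratio:.2f}x")              # ~1.46x
```

The ratio of roughly 1.46 to 1 is the gap the article describes between the two market segments.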


Read this article:
Cloud is hot, and it's only going to get hotter - Channel Asia Singapore

AMD Scores Its Highest Server CPU Market Share in Years: Report – Tom’s Hardware

AMD's recent server efforts have been tremendous. The company not only released CPUs with 64 cores and worked hard to win designs with server makers and cloud giants like Google, but it also prioritized producing EPYC processors over client CPUs and GPUs. These efforts paid off in the second quarter as, according to a report from analyst firm Omdia, AMD has reached its highest server CPU market share in years.

According to the report, around 3.4 million data center servers were sold in the second quarter of 2021 (flat year over year). Additionally, server makers earned $21.5 billion in revenue, driven by growing demand from hyperscale cloud service providers.

AMD controlled 16% of data center servers, Intel lost some revenue share to AMD, and Arm-based servers continued to progress, albeit in a limited number of cases. Omdia says that Ampere's Altra (deployed by Oracle) and chips from Fujitsu and Huawei are the most successful server-grade Arm SoCs.

An avid reader will certainly notice that the data from Omdia appears to differ from data shared by Mercury Research last month, which shows that AMD commanded around 11.6% of server unit share in Q2 2021. This happens because Omdia includes all types of general-purpose servers in its report, such as mainstream/datacenter machines (blades, rack servers, whitebox servers used by hyperscalers, tower servers, hyperconverged infrastructure servers), edge servers (a small emerging category), and four-socket and beyond servers.

Other firms do not include certain niche types of servers (e.g., machines with four or more CPUs) that sometimes happen to be categories in which AMD does not participate, which is why server-related reports from IDC, Mercury Research, and Omdia can have different perspectives. In the case of Omdia, it only covers data center machines.
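The divergence between trackers comes down to which machines count toward the denominator (and numerator) of a share calculation. The sketch below illustrates the effect with the article's 16% and 11.6% percentages; the absolute unit counts are hypothetical, chosen only to show how the same vendor volume yields different shares under different category definitions:

```python
# Illustrative only: the percentages come from the article, but the unit
# counts are hypothetical. Different trackers include different server
# categories, so the share denominator (and numerator) shifts.
amd_units = 0.544e6       # hypothetical AMD unit volume
omdia_total = 3.4e6       # data center servers in Q2 2021 (per Omdia)
other_total = 4.69e6      # hypothetical base under a broader definition

omdia_share = amd_units / omdia_total   # ~16%
other_share = amd_units / other_total   # ~11.6%

print(f"Omdia-style share: {omdia_share:.1%}")
print(f"Other-tracker share: {other_share:.1%}")
```

In practice both the numerator and denominator change between trackers; the sketch varies only the denominator for simplicity.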

Omdia says that AMD's 16% share in data center servers is the highest share that the company has ever reached in this market segment, but it didn't elaborate. AMD controlled over 25% of the server market in Q3 2006, based on data from Mercury Research. It's important to remember that the server market is not only larger than it was in 2006 in terms of units, but the machines themselves are becoming more expensive, so there's more revenue to be gained.

Omdia says accelerated adoption of AMD's EPYC processors by hyperscalers in general, and Google in particular, helped AMD increase its share in Q2 2021. Cloud providers were Intel's stronghold for years, but it appears AMD is beginning to gain traction here, just like it gained traction with enterprises.

Speaking of hyperscalers, it is necessary to note that they use whitebox servers produced by companies like Wiwynn, QCT (Quanta), Tyan (MiTAC), and Ingrasys (Foxconn). These companies controlled 26% of the market and produced $5.566 billion worth of servers in Q2 2021, up 17% quarter-over-quarter and 9% year-over-year. In fact, hyperscale cloud service providers like AWS, Azure, Facebook, and Google consumed more servers in the second quarter than any server vendor shipped during this timeframe.

As for the big server vendors, Dell EMC maintained its lead and sold $3.655 billion worth of servers in Q2 2021. The company was followed by HPE with $2.727 billion (nearly a billion US dollars behind), whereas Inspur was No. 3 with $2.285 billion. Inspur's sales grew 45% quarter-over-quarter, so the company probably delivered several large orders to its clients in China; meanwhile, its server sales dropped by 5% year-over-year. By contrast, Lenovo increased its server revenue to $1.652 billion, up 15% QoQ and 13% YoY.

Since demand for servers is growing, everyone in the supply chain benefits. Omdia expects server revenue to hit $92 billion for 2021, an increase of 11% compared to 2020. For obvious reasons, companies like AMD benefit more than makers of smaller components like power management ICs (PMICs) or network controllers. Ironically, though, makers of those small components may put further growth of the server market (and therefore server revenue) at risk in the second half of the year.

Lead times for some server components extended to 52 to 70 weeks by early July, forcing some manufacturers to procure loads of cheap but critical components, putting additional pressure on the supply chain. PMICs are made in 200-mm fabs that are relatively cheap, but building additional capacity takes a long time, so shortages will persist for quite a while.

According to multiple researchers, AMD is gaining server market share as the server market grows due to demand from hyperscale cloud service providers. AMD has successfully won designs with enterprise server makers and is now gaining traction with hyperscalers, which almost guarantees steady shipment growth.

For now, AMD's EPYC CPUs hold an indisputable trump card over Intel's Xeon processors: their core count. However, component shortages could slow down server shipments and thus AMD's expansion.


Inspur Comes Out on Top with Superior AI Performance in MLPerf Inference V1.1 – Business Wire

SAN JOSE, Calif.--(BUSINESS WIRE)--Recently, MLCommons, a well-known open engineering consortium, released the results of MLPerf Inference V1.1, the leading AI benchmark suite. In the very competitive Closed Division, Inspur ranked first in 15 out of 30 tasks, making it the most successful vendor at the event.

Inspur Results in MLPerf™ Inference V1.1

Vendor | Division            | System   | Model         | Accuracy          | Score     | Units
Inspur | Data Center, Closed | NF5688M6 | 3D-UNet       | Offline, 99%      | 498.03    | Samples/s
       |                     | NF5688M6 | 3D-UNet       | Offline, 99.9%    | 498.03    | Samples/s
       |                     | NF5488A5 | DLRM          | Offline, 99%      | 2,607,910 | Samples/s
       |                     | NF5688M6 | DLRM          | Server, 99%       | 2,608,410 | Queries/s
       |                     | NF5488A5 | DLRM          | Offline, 99.9%    | 2,607,910 | Samples/s
       |                     | NF5688M6 | DLRM          | Server, 99.9%     | 2,608,410 | Queries/s
       | Edge, Closed        | NE5260M5 | 3D-UNet       | Offline, 99%      | 93.49     | Samples/s
       |                     | NE5260M5 | 3D-UNet       | Offline, 99.9%    | 93.49     | Samples/s
       |                     | NE5260M5 | BERT          | Offline, 99%      | 5,914.13  | Samples/s
       |                     | NF5688M6 | BERT          | SingleStream, 99% | 1.54      | Latency (ms)
       |                     | NF5688M6 | ResNet50      | SingleStream, 99% | 0.43      | Latency (ms)
       |                     | NE5260M5 | RNN-T         | Offline, 99%      | 24,446.9  | Samples/s
       |                     | NF5688M6 | RNN-T         | SingleStream, 99% | 18.5      | Latency (ms)
       |                     | NF5688M6 | SSD-ResNet34  | SingleStream, 99% | 1.67      | Latency (ms)
       |                     | NF5488A5 | SSD-MobileNet | SingleStream, 99% | 0.25      | Latency (ms)

Developed by Turing Award winner David Patterson and leading academic institutions, MLPerf is the leading industry benchmark for AI performance. Founded in 2020 and based on MLPerf benchmarks, MLCommons is an open non-profit engineering consortium dedicated to advancing standards and metrics for machine learning and AI performance. Inspur is a founding member of MLCommons, along with over 50 other leading organizations and companies from across the AI landscape.

In the MLPerf Inference V1.1 benchmark test, the Closed Division included two categories: Data Center (16 tasks) and Edge (14 tasks). The Data Center category covered six models: Image Classification (ResNet50), Medical Image Segmentation (3D-UNet), Object Detection (SSD-ResNet34), Speech Recognition (RNN-T), Natural Language Processing (BERT), and Recommendation (DLRM). A high accuracy mode (99.9%) was set for BERT, DLRM, and 3D-UNet. Every model task was evaluated in both Server and Offline scenarios, with the exception of 3D-UNet, which was only evaluated in the Offline scenario. For the Edge category, the Recommendation (DLRM) model was removed and the Object Detection (SSD-MobileNet) model was added; a high accuracy mode (99.9%) was set for 3D-UNet only. All Edge models were tested for both Offline and Single Stream inference.
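The task counts fall out of these scenario rules. A small sketch can reconstruct the 16 Data Center and 14 Edge task totals by enumerating models, scenarios, and high-accuracy variants (the counting helper is our own, not part of the MLPerf tooling):

```python
# Reconstruct the Closed Division task counts described above from the
# scenario rules: each model runs in its scenarios at 99% accuracy, and
# high-accuracy models repeat those scenarios at 99.9%.
def count_tasks(models, scenarios, offline_only, high_accuracy):
    tasks = 0
    for model in models:
        n_scenarios = 1 if model in offline_only else len(scenarios)
        tasks += n_scenarios              # standard 99% accuracy mode
        if model in high_accuracy:
            tasks += n_scenarios          # extra 99.9% accuracy mode
    return tasks

dc_models = ["ResNet50", "3D-UNet", "SSD-ResNet34", "RNN-T", "BERT", "DLRM"]
dc = count_tasks(dc_models, ["Server", "Offline"],
                 offline_only={"3D-UNet"},
                 high_accuracy={"BERT", "DLRM", "3D-UNet"})

edge_models = ["ResNet50", "3D-UNet", "SSD-ResNet34", "RNN-T", "BERT",
               "SSD-MobileNet"]
edge = count_tasks(edge_models, ["Offline", "SingleStream"],
                   offline_only=set(),
                   high_accuracy={"3D-UNet"})

print(dc, edge)  # 16 14
```

The totals match the 16 Data Center and 14 Edge tasks stated above, confirming the category breakdown is internally consistent.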

In the extremely competitive Closed Division, in which mainstream vendors compete, all participants were required to use the same models and optimizers, making it easy to evaluate and compare AI computing system performance among vendors. Nineteen vendors, including Nvidia, Intel, Inspur, Qualcomm, Alibaba, Dell, and HPE, participated in the Closed Division. A total of 1,130 results were submitted, including 710 for the Data Center category and 420 for the Edge category.

Full-Stack AI Capabilities Ramp up Performance

Inspur achieved excellent results in this MLPerf competition with its three AI servers NF5488A5, NF5688M6, and NE5260M5.

Inspur ranked first in 15 tasks covering all AI models, including Medical Image Recognition, Natural Language Processing, Image Classification, Speech Recognition, Recommendation, and Object Detection (SSD-ResNet34 and SSD-MobileNet). The results show that from cloud to edge, Inspur is ahead of the industry in nearly all aspects. Inspur made large strides in performance in various tasks in the Data Center category compared to previous MLPerf events despite no changes to its server configuration: its results in Image Classification (ResNet50) and Speech Recognition (RNN-T) increased by 4.75% and 3.83% respectively compared to the V1.0 competition just six months ago.

The outstanding performance of Inspur's AI servers in the MLPerf Benchmark Test can be credited to Inspur's exceptional system design and full-stack optimization in AI computing systems. Through precise calibration and optimization, CPU and GPU performance as well as the data communication between CPUs and GPUs were able to reach the highest levels for AI inference. Additionally, by enhancing the round-robin scheduling for multiple GPUs based on GPU topology, the performance of a single GPU or multiple GPUs can be increased nearly linearly.
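The article does not detail Inspur's topology-aware scheduler, but the round-robin idea it describes can be sketched generically: GPUs are ordered by an assumed topology ranking (e.g., interconnect proximity) and inference batches are dispatched to them in rotation. Everything below, including the device names, is hypothetical:

```python
from itertools import cycle

# Hypothetical sketch of topology-aware round-robin batch dispatch.
# gpus_by_topology is assumed to be pre-sorted by interconnect proximity,
# so consecutive batches land on well-connected devices in rotation.
def round_robin_dispatch(batches, gpus_by_topology):
    """Assign each inference batch to a GPU in topology order."""
    assignment = {}
    gpu_cycle = cycle(gpus_by_topology)
    for batch_id in batches:
        assignment[batch_id] = next(gpu_cycle)
    return assignment

gpus = ["gpu0", "gpu1", "gpu2", "gpu3"]  # assumed topology order
plan = round_robin_dispatch(range(6), gpus)
print(plan)  # {0: 'gpu0', 1: 'gpu1', 2: 'gpu2', 3: 'gpu3', 4: 'gpu0', 5: 'gpu1'}
```

Because each GPU receives an equal stream of work, throughput scales close to linearly with the number of devices, which is the behaviour the paragraph above attributes to Inspur's optimization.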

Inspur NF5488A5 was the only AI server in this MLPerf competition to support eight 500W A100 GPUs with liquid cooling technology, which significantly boosted AI computing performance. Among mainstream high-end AI servers with 8 NVIDIA A100 SXM4 GPUs, Inspur's servers came out on top in all 16 tasks in the Closed Division under the Data Center category.

As a leading AI computing company, Inspur is committed to the R&D and innovation of AI computing, including both resource-based and algorithm platforms. It also works with other leading AI enterprises to promote the industrialization of AI and the development of AI-driven industries through its Meta-Brain technology ecosystem.

To view the complete results of MLPerf Inference v1.1, please visit: https://mlcommons.org/en/inference-datacenter-11/ and https://mlcommons.org/en/inference-edge-11/

About Inspur Information

Inspur Information is a leading provider of data center infrastructure, cloud computing, and AI solutions, and the world's second-largest server manufacturer. Through engineering and innovation, Inspur delivers cutting-edge computing hardware design and extensive product offerings to address important technology arenas like open computing, cloud data center, AI, and deep learning. Performance-optimized and purpose-built, our world-class solutions empower customers to tackle specific workloads and real-world challenges. To learn more, please go to https://www.inspursystems.com/.
