Category Archives: Cloud Servers

In the AI era, the edge is the new cloud – TechRadar

Over the last decade or so, businesses have migrated more and more workloads from on-premises servers to the cloud, in an effort to capitalize on the flexibility and cost savings on offer.

As a result, the global cloud computing market is set to be worth upwards of $250 billion this year, a large proportion of which will fall into the pockets of hyperscalers such as Amazon Web Services, Microsoft Azure and Google Cloud.

However, various signs suggest the tide is beginning to shift in a different direction, with a larger proportion of computing taking place outside centralized datacenters once again.

According to Mike Vildibill, VP & GM of Cloud Edge AI at semiconductor company Qualcomm, the rise of artificial intelligence (AI) will combine with a number of other factors to push computing back towards the edge of the network, where latency is just as important as raw performance.

"The next mega-trend is now underway," he told TechRadar Pro. "Previously, we saw a lot of computation moving to the cloud, but a yo-yo effect is creating a need for computation closer to the edge, where both the data and consumers of the data reside."

"There's still a need for a centralized cloud, but even the hyperscalers recognize that the cloud is coming to the edge. Instead of residing in some far-flung datacenter, it might be in the trunk of your car, at an intersection, or bolted to the side of a building. That's the future."

Although Qualcomm made its name in the mobile computing space with its Snapdragon line of chips, which continue to compete at the top of the market, the company recently launched a new line of business that is quickly gaining momentum.

The focus is on building high-performance server chips specifically designed to accelerate AI inference, both in the cloud and at the edge. Manufactured on a 7nm process, the company's latest Cloud AI 100 accelerators lead the market in both performance density and energy efficiency, per MLPerf benchmarks.

For example, Qualcomm's Cloud AI 100 Edge Development Kit (AEDK) was found to achieve 240 inferences per second per watt (inf/sec/watt) for ResNet-50, a neural network commonly used to benchmark inference performance. For comparison, Nvidia's AGX Xavier managed 60 inf/sec/watt, a quarter of that figure.
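That efficiency gap can be sanity-checked with a little arithmetic; the sketch below uses only the two figures quoted above.

```python
# Perf-per-watt comparison using the ResNet-50 inference figures quoted above,
# in inferences per second per watt (inf/sec/watt).
qualcomm_aedk = 240       # Qualcomm Cloud AI 100 Edge Development Kit
nvidia_agx_xavier = 60    # Nvidia AGX Xavier

ratio = qualcomm_aedk / nvidia_agx_xavier
print(f"Qualcomm advantage: {ratio:.0f}x")  # -> Qualcomm advantage: 4x
```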

While the company is working with customers to accelerate inference in a datacenter setting with its Cloud AI 100 platform, Vildibill is most enthusiastic about new opportunities at the edge.

The poster-child use case for edge computing, he explained, is autonomous driving, whereby a car performs inference on the data pulled from various cameras and sensors to plot a route without the input of a driver.

If an obstruction suddenly appears on the road (say, a child walks out from behind a parked car), a course correction needs to be calculated almost instantaneously, in such a way that only edge computing makes possible.

"The laws of physics dictate that data cannot move quickly enough between the car and a cloud datacenter and back again in sufficient time for disaster to be averted," said Vildibill. "You need to do the processing closer to where the data resides."

And this is just one of many examples; Qualcomm says its customers are finding various new use cases for inference at the edge, from monitoring shelf stock in a retail store environment to checking factory workers are wearing the necessary protective gear. In conjunction with 5G, edge computing is also enabling a new breed of augmented and virtual reality (AR/VR) applications that wouldn't otherwise be feasible.

The new emphasis on AI accelerators means Qualcomm has found itself dealing with a brand new class of customer, which includes not only the hyperscalers but any organization interested in deploying AI at the edge. And this strategy appears to be paying off.

According to the company's latest earnings figures, the IoT segment (which houses the Cloud AI 100 platform) took in $5.1 billion in fiscal 2021, up 67% on the previous year. And Vildibill told us Qualcomm's efforts in the server chip space are only going to continue ramping up.

It's not just the shift towards the edge that Qualcomm is interested in, however. It's the intersection of this new trend and another: the drive towards sustainable computing. With many companies now committing to ever more ambitious carbon pledges, the ability to run workloads in a sustainable manner has become a top priority.

"A very important element of the puzzle is that it's not computing at any cost; you've got to be able to do this processing efficiently, in a sustainable way," explained Vildibill.

"What Qualcomm is trying to do is drive more effective, powerful and power-efficient means of processing at the edge, which will save not just on the energy bill, but on the carbon footprint too."

As Qualcomm continues to explore opportunities in the server chip market, the firm is aiming to develop an extensive roadmap of products with power efficiency at their heart, Vildibill says. The company will also continue to enhance its software, in a bid to draw even greater energy efficiency from its current Cloud AI 100 product line.

If Qualcomm is able to unseat Nvidia, the historic leader in AI acceleration, with this focus on maximizing performance per watt, the economic opportunity could be massive. And despite the company's relative inexperience in the space, Vildibill is confident about its prospects.

"An increased focus on sustainability, the explosion of AI and the shift towards edge computing have come together to create the perfect storm. And we believe we're in a perfect position, at the perfect time, to address the market," he said.



Foxconn Reports Better-Than-Expected Q3 Profits – iPhone in Canada

Foxconn reported a better-than-expected third-quarter profit on Friday, helped by strong smartphone demand as people continue to work remotely through the coronavirus pandemic.

Taiwan's Foxconn, which assembles iPhones for Apple, said on Friday it expected revenue from its key smartphone business to slide more than 15 percent in the quarter ending December, hurt by the ongoing global shortage of components, explains Reuters.

The company previously said it felt only a small impact from the year-long global chip shortage but had cautioned that rising COVID-19 cases in Asia could hurt its supply chain.

"Given the on-and-off COVID situation globally, we expect the component shortage will extend to at least the second half of next year, which is longer than our previous estimate of the first half of 2022," Foxconn Chairman Young Liu told investors.

"Our revenue performance for this year was better than our previous expectation, which will be a higher comparison base for next year. We will be rather cautious about next year's outlook," Liu said. "COVID and inflation are two very major factors to influence [the global economy], and it's still hard to predict."

Liu said for the final quarter of 2021, revenue for consumer electronics, as well as computing products, could decline from a year ago due to the component shortage and a relatively high base in the same period last year. Revenue for cloud server and networking products could be flat compared with a year ago, while the company's component business may still see growth for the October to December period, the chairman added. Liu added that Foxconn's cloud and networking business will be the company's most important growth driver next year.

Foxconn reported record earnings for the July to September period on Friday despite the global chip crunch. Its net profit surged 20 percent on the year to 36.9 billion New Taiwan dollars ($1.32 billion USD). In the first three quarters of this year, Foxconn's net profit climbed 70 percent on the year to a record NT$94.92 billion.

Foxconn is the largest Apple supplier, but it also counts Google, HP, Facebook, Microsoft, Amazon and Cisco as its clients. The company provides product assembly and components for everything from smartphones, tablets and smartwatches to PCs, servers and cars.


ThycoticCentrify builds on vision for modern PAM with latest integration – IT Brief New Zealand

ThycoticCentrify has leveraged its platform to integrate with Secret Server, its privileged account and session management solution.

The combination gives Secret Server customers access to a range of SaaS services, establishing the foundation of modern PAM strategies and centralising access and visibility to credentials for faster time to access, risk identification and resolution, the company states.

Customers now have access to credentials vaulted in multiple Secret Server instances from a single portal.

In addition, Secret Server can now consume platform capabilities such as enhanced remote access with VPN-less login and extensive second factors for multi-factor authentication (MFA).

With the 21.7 release of its Cloud Suite product, ThycoticCentrify also delivers centralised, fine-grained control of access and privilege for Windows and Linux servers.

With PAM policies centrally managed in the platform, organisations can scope varying degrees of privileged access that better align with job functions, allowing administrators to elevate permissions, just in time, to run privileged applications or commands.

Unless their identity is consistent when users log in to different Linux systems, mount central file shares, and create files and folders, the file system can deny access, affecting productivity, the company states.

In the 21.7 release, when a user with a Linux profile defined in the platform logs into a Linux server, Cloud Suite ensures their correct profile attributes are associated with the session.

The clients on the host systems perform user identifier and group identifier rationalisation and preserve this across user sessions. Resource access is assured, avoiding a disruption in usage, according to the company.

In addition, ThycoticCentrify has also extended the MFA redirection capabilities of Cloud Suite. Privileged users can now perform additional authentication on behalf of another user, such as alternate-admin or dash-A accounts.

With MFA redirection, second factors of authentication only need to be configured on the main user's account. They will then be applied when using any alternate administrative accounts and an MFA policy is triggered.

For example, system administrators may have a primary low-privilege account for routine tasks such as email and web surfing, and additional alternate-admin or dash-A accounts used for privileged tasks.

MFA redirection previously supported Centrify's mobile app as the only second factor. The new feature extends this capability to all second factors supported by the platform.

According to the company, benefits include reduced second-factor maintenance for administrators, as well as for applications using service accounts that require additional proof of legitimacy from a human.

ThycoticCentrify chief technology officer David McNeely says, "Our platform is the foundational layer that connects ThycoticCentrify's core vaulting and privilege elevation solutions, leveraging the similar cloud architectures of each to deliver new insights and value for modern, hybrid enterprises."

"Centralising access empowers security and IT teams to quickly access a range of accounts across multiple vaults, whether optimising day-to-day operations or during time-critical instances such as active cyber attacks."


European server sales sink to 4-year low: Cloud, software-defined and chip shortage blamed – The Register

Server sales across the European channel fell to their lowest level in four years over the third quarter of 2021, as the long-awaited recovery in infrastructure spending failed to show up with shrinking volumes reported for 18 countries.

The numbers collated by Context show 91,021 servers were sold via distribution in calendar Q3, with hefty double-digit declines recorded in some of the largest countries that consume the systems. It is estimated Context captures up to 60 per cent of the total server market volumes in the region*.

"We were waiting for a rebound with recovery from COVID-19 but it looks like it is not happening for now," Gurvan Meyer, enterprise business analyst at Context told The Register.

The reasons? The pandemic has sped up customers' switch to a hybrid IT environment, software-defined infrastructure played a part too, and so did the ongoing supply chain wobbles.

Meyer said he believes that "infrastructure management has progressed impressively in the last two to three years, businesses have become more efficient in terms of using their hardware resources; there was quite a lot of over-provisioning in the past and IT teams have caught up."

Sales to end users in Germany slid 3.9 per cent year on year; they sank almost 26 per cent in the UK, and dropped 6.4 per cent in France.

Context also compared unit sales in Q3 2021 with those made in Q3 2019, before the pandemic began: Germany was down 30.3 per cent in those two years, the UK was down 24.6 per cent, and France was down 11.2 per cent. A further 15 countries reported double-digit drops in year-on-year comparison.

Meyer told us: "Some countries are more advanced in their digital transformation than others, and countries are structurally different (service economy in the UK versus industrial economy in Germany) and I tend to think that the rather soft market we see in the UK, for example, is down partly to the fact that the UK is slightly more advanced in terms of hybrid infrastructure than, let's say, Italy."

As for worldwide infrastructure-as-a-service spending, Canalys estimated Q3 expansion of 35 per cent to $49.9bn, with AWS, Microsoft and Google accounting for 61 per cent of the entire market. Yet even this corner of the tech industry isn't immune to the crippling shortages affecting multiple industries.

"Overall computer demand is outgrowing chip manufacturing capabilities, and infrastructure expansion may become limited for the cloud service providers," said Blake Murray, a research analyst at Canalys.

We asked the market watcher for European-specific stats but it seems they are not yet at a stage to be made public.

Canalys said the impact of the global chip shortages on the cloud giants is "imminent" as data centre component makers are seeing lead times extended and prices rising. Just last week, for example, data centre networking outfit Arista said the lead times to secure certain parts were stretching to 80 weeks.

Glenn O'Donnell, veep and research director at Forrester, said the auto industry has become the poster child for chip shortages.

"The impact extends far beyond autos: home appliances, consumer electronics, medical devices, farm equipment, and even toys are all affected. It is hitting corporate IT hard, as data centre equipment, cloud services, PCs, and even Apple struggle to get these essential parts," he said.

Computacenter, one of Europe's largest resellers, said in September that customers were recommencing projects but that getting hold of enough kit was the issue, not demand.

"The ongoing supply shortages in the industry have risen to the top of our challenges," said colourful CEO Mike Norris.

* Some customers including trade clients buy servers direct and do not use distribution, so their figures aren't tracked by Context.


Amazon Cloud can save the world: AWS plays the green card – Blocks and Files

AWS is proclaiming that businesses in Europe can reduce energy use by nearly 80 per cent when they run their applications on the AWS Cloud instead of operating their own datacentres. The claim is found in an AWS-commissioned report by 451 Research.

We learn that companies could potentially further reduce carbon emissions from an average workload by up to 96 per cent once AWS meets its goal to be powered by 100 per cent renewable energy, a target the company is on a path to achieve by 2025. The 451 Researchers found that, compared to the computing resources of the average European company, cloud servers are roughly three times more energy efficient, and AWS datacentres are up to five times more energy efficient.
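The "nearly 80 per cent" headline figure is consistent with the "up to five times more energy efficient" claim; a back-of-envelope check using only the numbers above:

```python
# If AWS datacentres are up to 5x more energy efficient than the average
# European company's own facilities, the same workload needs 1/5 the energy.
efficiency_multiple = 5
energy_fraction = 1 / efficiency_multiple      # 0.2 of the original energy
reduction = (1 - energy_fraction) * 100        # percentage of energy saved
print(f"Energy reduction: {reduction:.0f}%")   # -> Energy reduction: 80%
```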

Chris Wellise, director of sustainability at AWS, is quoted in the blog: "AWS is proud to collaborate with businesses and governments to help meet their sustainability goals. We believe we have responsibilities to the communities where we operate, and to us, that means sustainability and environmental stewardship."

Surely every business understands it has responsibilities to the communities in which it operates. Amazon is just one of many businesses hurriedly running a green climate change banner up its corporate flagpole as it tries to ride on the back of customers' climate change concerns to boost its own business.

AWS claims, via the 451 Researchers, that moving a megawatt (MW) of a typical compute workload from a European organisation's datacentre to the AWS Cloud could reduce carbon emissions by up to 1,079 metric tonnes of carbon dioxide per year. So AWS is effectively saying: if you want your European compute operations to emit less carbon, move them to AWS, increase AWS's profits and increase its carbon emissions.
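The 1,079 tonnes-per-megawatt figure implies an avoided carbon intensity we can back out; this is a rough check, assuming the 1 MW workload runs around the clock for a year.

```python
# Back-of-envelope: what avoided carbon intensity does the claim imply?
# Assumption (not stated in the report): a 1 MW workload running 24x7.
megawatts = 1
hours_per_year = 24 * 365                   # 8,760 hours
mwh_per_year = megawatts * hours_per_year   # 8,760 MWh consumed per year
tonnes_co2_saved = 1_079                    # figure quoted above

# Implied avoided carbon intensity, in kg of CO2 per kWh.
kg_per_kwh = tonnes_co2_saved * 1_000 / (mwh_per_year * 1_000)
print(f"~{kg_per_kwh:.3f} kg CO2 avoided per kWh")  # roughly 0.123 kg/kWh
```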

Amazon's total carbon emissions were the equivalent of 60.64 million metric tonnes of carbon dioxide in 2020. That was 19 per cent more than the 51.17 million metric tonnes it emitted in 2019, which was itself 15 per cent higher than its 2018 total. This data comes from its own annual Sustainability Report.

It says it is the world's largest corporate buyer of renewable energy and invites organisations to join The Climate Pledge, a commitment to becoming net-zero carbon by 2040, ten years ahead of the Paris Agreement. Amazon co-founded The Climate Pledge. It is not known if Blue Origin, the space tourism company whose rockets and rocket-building activities emit CO2, and which was founded by Amazon founder and ex-CEO Jeff Bezos, has signed up as well.

Amazon itself, the ecommerce behemoth, has only committed to making 50 per cent of all its shipments net-zero carbon by 2030, five years after its AWS-powered-by-100-per-cent-renewable-energy goal. Amazon actually aims to reach net-zero carbon emissions across all its operations by 2040, ten years after that. As part of this it intends to use only zero-carbon-fuel ocean shipping by 2040.

Amazon's revenues were $386 billion in 2020 and it made a profit of $21.3 billion. With this amount of financial firepower at its disposal, the company could move faster to net-zero status if it optimised its activities for sustainability over profitability. As a tiny example, the money spent on the 451 Research report could have been spent instead on reducing Amazon's own carbon emissions, but that wouldn't have provided such a good marketing opportunity for this self-interested business.


AMD Deepens Its Already Broad Epyc Server Chip Roadmap – The Next Platform

The hyperscalers, cloud builders, HPC centers, and OEM server manufacturers of the world who build servers for everyone else all want, more than anything else, competition between component suppliers and a regular, predictable, almost boring cadence of new component introductions. This way, everyone can consume on a regular schedule and those ODMs and OEMs who actually manufacture the twelve million servers (and growing) consumed each year can predict demand and manage their supply chains.

As many wise people have said, however, IT organizations buy roadmaps, they don't buy point products, because they have to manage risk and get as much of it out of their products and organizations as they possibly can.

AMD left the server business for all intents and purposes in 2010, after Intel finally got a good 64-bit server chip design out the door with the Nehalem Xeon 5500 architecture that came out in early 2009, largely copied from AMD's wildly successful Opteron family of chips. AMD's early Opterons were innovative, sporting 64-bit processing, HyperTransport interconnect, and multiple cores on a die, and essentially made Intel look like a buffoon for pushing only 32-bit Xeons and trying to get the enterprise to adopt 64-bit Itanium chips. But by 2010, AMD had been delayed on delivering several generations of Opterons and had made an architectural fork that did not pan out. When Intel pulled back on Itanium and designed many generations of competitive 64-bit Xeon server chips, AMD was basically pushed out of the datacenter. But by 2015, Intel had been slowing the pace of innovation and driving up prices, and the market was clamoring for more competition, so AMD reorganized itself and got to work creating what has become its Epyc comeback, this time once again coinciding with Intel leaving its own flanks exposed for attack because of delays in its 10 nanometer and 7 nanometer chip making processes.

Intel, under the guiding hand of chief executive officer Pat Gelsinger, is getting its chip manufacturing house in order and also getting back to a predictable and more rapid cadence of performance and feature enhancements, and that means AMD has to do the same thing. And as part of its Data Center Premier event this week, the top brass at AMD unrolled the roadmap and showed that they were not only going to be sticking to a regular cadence and flawless execution for the Epyc generations, but were going to be deepening the Epyc roadmap to include different variations and SKUs to chase very specific portions of the server market and very precise workloads.

Ahead of the keynote by Lisa Su, AMD's president and chief executive officer, Mark Papermaster, the company's chief technology officer, and Forrest Norrod, general manager of AMD's Datacenter and Embedded Solutions Group, walked through the deepening roadmap for the Epyc server chips. This was done in the context of the unveiling of the Milan-X Epyc 7003 with 3D V-Cache, which boosts performance by 50 percent on many HPC and AI workloads and which is coming out in the first quarter of 2022, and the Aldebaran Instinct MI200 GPU accelerator, which is starting to ship now and notably in the 1.5 exaflops Frontier supercomputer being installed at Oak Ridge National Laboratory. Milan-X and Instinct MI200 were the highlights of the AMD event this week, to be sure, but they were not the only things that AMD talked about on its roadmap, and there is other chatter we need to bring into the picture as well that pushes this roadmap even further than AMD itself did this week.

"Both of them are the culmination of a lot of work over the last four years to start broadening our product portfolio in the datacenter," Norrod explained, referring to Milan-X and Aldebaran. "So particularly on the CPU side, you should think about the first three stops in Italy, and that we are sort of on one train, barreling down the road to get to market relevance in a reasonable footprint with one socket, one fundamental part. It has long been our belief that as we pass a certain point, particularly given the increasing workload complexity in the datacenter, that we were going to have to begin broadening our product offerings, still always being mindful of how do we do it in such a way that we preserve our execution fidelity. And we need to make it really easy for customers to adopt the more workload specific products. That is a central theme of what we talked about: workload specificity, having products that are tuned for particular segments of the datacenter market. And by doing so, we make sure that we can continue to offer leadership performance and leadership TCO in each one of those segments."

Norrod made no specific promises, but said that we should expect the broadening and deepening of the portfolio of chips and products with AMD compute GPUs as well.

In her keynote address, Su carved the datacenter up into four segments, and explained how AMD would be targeting each of them with unique silicon.

"General purpose computing covers the broadest set of mainstream workloads, both on-prem and in the cloud," Su explained. "Socket-level performance is an important consideration for these workloads. Technical computing includes some of the most demanding workloads in the datacenter. And here, per-core performance matters the most for these workloads. Accelerated computing is focused on the forefront of human understanding, addressing scientific fields like climate change, materials research, and genomics, and highly parallel and massive computational capability is really the key. And with cloud-native computing, maximum core and thread density are needed to support hyperscale applications. To deliver leadership compute across all these workloads, we must take a tailored approach focused on innovations in hardware, software, and system design."

With that, let's take a look at the Epyc roadmap that Su, Norrod, and Papermaster talked about and then look at the augmented and extended one that we put together to give you an even fuller picture.

Here's the Epyc roadmap they all talked about:

You can see that the Milan-X chip has been added, and so has another chip in the Genoa series, called Bergamo and sporting the Zen 4c core, a variant of the forthcoming Zen 4 core and a different packaging of the compute chiplets than the standard Genoa parts will have. But that's not all you get.

There is also the Trento variant of the Milan processor, which will be used as the CPU host to the MI200 GPU accelerators in the Frontier system. And then there will be a second generation of 5 nanometer Epyc processors, and we have caught wind of a high core count version code-named Turin, which, now that we see the more revealing AMD server chip roadmap, looks very much like a follow-on to Bergamo, not to Genoa. Which implies a different follow-on to Genoa for which we do not yet have a codename. (Might we suggest Florence? Maybe Venice after that?)

Anyway, here is our extended version of AMD's Epyc roadmap:

Let's walk through this.

Milan-X, as we know from this week, will comprise a couple of SKUs of the Milan chip with two banks of L3 cache stacked on top of the native L3 cache on the die, tripling the total L3 cache to boost performance. We know from the presentations that there is a 16-core variant and a 64-core variant, and we presume there might be a few more variants with 24 cores and 32 cores, possibly 48 cores, with all of them getting the proportional amount of extra L3 cache (3x the cache per core) added.
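Assuming Milan's published 32 MB of L3 per compute die (CCD) and eight CCDs on a 64-core part (figures not stated in the text above), the "tripling" works out as follows:

```python
# Milan-X L3 arithmetic: two stacked cache banks triple the native L3.
# Assumptions: 32 MB native L3 per compute die (CCD), 8 CCDs on a 64-core part.
native_l3_per_ccd_mb = 32
stacked_banks = 2                 # each stacked bank matches the native 32 MB
l3_per_ccd_mb = native_l3_per_ccd_mb * (1 + stacked_banks)   # 96 MB per CCD

ccds_on_64_core_part = 8
total_l3_mb = l3_per_ccd_mb * ccds_on_64_core_part
print(f"{l3_per_ccd_mb} MB per CCD, {total_l3_mb} MB total L3")  # 96 MB per CCD, 768 MB total
```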

With Trento, what we have heard is that the I/O and memory hub chiplet on the Milan processor complex has been enhanced in two ways. The first is that the Infinity Fabric 3.0 interconnect is supported on the I/O hub, which means the Trento chip can share memory coherently with any Instinct MI200 accelerators attached to it. This is a necessary feature for Frontier because Oak Ridge had coherent CPU-GPU memory on the prior Summit supercomputer based on IBM Power9 CPUs and Nvidia V100 GPU accelerators. The other enhancement with the Trento I/O and memory hub chiplet is rumored to be support for DDR5 main memory on the controllers. For all we know, the Trento hub chiplet also supports PCI-Express 5.0 controllers and the CXL accelerator protocol, which might be useful in Frontier.

Milan, Milan-X, and Trento all fit into the SP3 server socket, which tops out at a 400 watt TDP.

With the Genoa and Bergamo chips, AMD is moving to the 5 nanometer chip etching processes from Taiwan Semiconductor Manufacturing Co, and Papermaster said that at ISO frequency, this process delivers twice the transistor density and twice the transistor power efficiency while also boosting the switching performance of the transistors by 25 percent. To be super clear: This is not a Milan to Genoa statement, but a 7 nanometer process to 5 nanometer process statement, and how this results in server chip performance depends on architecture and how AMD turns the dials on the frequency and voltage curves. AMD is also moving to a larger SP5 socket for these processors.

Genoa is based on the Zen 4 core, and Bergamo is based on the Zen 4c core, which has the same instructions per clock (IPC) improvements over the Zen 3 core in the Milan family of chips and the same microarchitecture (so no software tweaks are necessary to use it), but sits at a different point on the frequency and voltage optimization curve and has some optimizations in the cache hierarchy that make Bergamo more suited to having more compute chiplets, or CCDs, in the Epyc package. That Zen 4 core IPC uplift is expected to be in the range of 29 percent compared to the Zen 3 core, so this is going to be a big change in single-thread performance as well as throughput performance with Genoa. Bergamo will take throughput performance to an even higher extreme, but will sacrifice some per-thread performance to get there.

The Genoa Epyc 7004 will have 96 Zen 4 cores across four banks of three compute tiles, for a total of a dozen compute tiles, and an I/O and memory hub that supports DDR5 memory, PCI-Express 5.0 controllers, and the CXL protocol on top of that for linking accelerators, memory, and storage to the compute complex. Genoa is launching sometime in 2022; we don't have much clarity as to when because AMD is timing itself to keep ahead of Intel, which keeps changing its launch dates for the Sapphire Rapids and Granite Rapids Xeon SPs.

There are a couple of ways to get to the 128 Zen 4c cores that Bergamo will offer. Instead of the twelve 8-core compute tiles in Genoa, the Bergamo chip could employ eight 16-core tiles. The die could also have twelve 12-core tiles, with some of the cores on each tile disabled to dial the core count back to 128 in the Bergamo package. The latter seems as likely as the former, and if both processors have twelve memory controllers, as is rumored, then it will be the latter scenario. The Trento I/O and memory hub supports eight compute chiplets and the Genoa I/O and memory hub supports twelve compute chiplets, so AMD could go either way to get to Bergamo. But if it used the Trento I/O and memory hub, then Bergamo would be relegated to only eight memory controllers, and that would cause a compute-to-memory capacity and bandwidth imbalance. It looks like Bergamo will use the Genoa I/O and memory hub, therefore, and have some partially disabled tiles so it maxes out at 128 cores instead of 144. All Papermaster said is that Bergamo has a different physical design and a different chiplet configuration from Genoa, so everyone is guessing at this point.
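The two routes to 128 cores described above can be laid out explicitly. This is only a sketch of the arithmetic; the tile counts are the article's speculation, not confirmed specifications.

```python
# Two speculative ways to reach Bergamo's 128 Zen 4c cores, per the text above.

# Option A: eight 16-core compute tiles, all cores enabled.
option_a = 8 * 16                  # 128 cores

# Option B: twelve 12-core tiles built (144 cores), with 16 cores disabled.
full_build = 12 * 12               # 144 cores as manufactured
option_b = full_build - 16         # 128 cores enabled

assert option_a == option_b == 128
print(f"Option A: {option_a} cores; Option B: {full_build} built, {option_b} enabled")
```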

The Bergamo chip will plug into the same SP5 socket as Genoa, which is what the hyperscalers and cloud builders care about. Bergamo will be available in the first half of 2023 according to Su, but Norrod initially said that it could be end of 2022 to early 2023 for the launch, and then backed off to say early 2023. It's not clear why this will take so long to come to market. It could be that the hyperscalers and cloud builders only recently talked AMD into taking the risk and incurring the extra cost of making a special SKU of the Genoa processor.

After that come kickers to Genoa and Bergamo, and it is looking like the Bergamo kicker is in fact the recently rumored 256-core Turin processor, based on the future Zen 5c core.

We don't think the stock, general purpose kicker to Genoa would jump from 96 cores to 256 cores, but jumping to 192 cores would be reasonable. And so that is what we think will be in the Genoa kicker, which is labeled with ??? in our extended roadmap above. (We will call it Florence until we are told otherwise.) This chip might have four compute tiles, each with twelve Zen 5 cores, in each core complex, and four core complexes on the package to reach that theoretical 192 cores in the general purpose Epyc 7005. The Turin hyperscale variant would have 256 cores and a thermal design point of a whopping 600 watts, so people are saying. The compute tile here could be based on 16 Zen 5c cores, packed into a four-tile compute complex, with four of these on the package.
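The tile arithmetic in these scenarios is easy to sanity-check. A throwaway sketch, with the caveat that every configuration here is speculation from this roadmap reading, not a confirmed AMD spec:

```python
def package_cores(tiles: int, cores_per_tile: int, dudded: int = 0) -> int:
    """Cores in a package: tiles times cores per tile, minus dudded cores."""
    return tiles * cores_per_tile - dudded

print(package_cores(12, 8))              # Genoa: twelve 8-core Zen 4 tiles -> 96
print(package_cores(8, 16))              # Bergamo, scenario one -> 128
print(package_cores(12, 12, dudded=16))  # Bergamo, scenario two, dudded down -> 128
print(package_cores(16, 12))             # speculative "Florence": 4 complexes of 4 tiles -> 192
print(package_cores(16, 16))             # rumored Turin: sixteen 16-core Zen 5c tiles -> 256
```

The Bergamo scenarios show why the memory-controller rumor matters: both paths land on 128 cores, but only the twelve-tile layout lines up with twelve memory controllers.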

We think there will be Genoa-X and Florence-X variants with stacked 3D V-Cache, and there is even a possibility to see Bergamo-X and Turin-X variants that also have enhanced L3 caches. Why not?

There is talk that the Epyc 7005s will be based on TSMC's 3 nanometer processes, but we think AMD will try to get two generations of chips out of 5 nanometers, with the Genoa kicker and Turin based on a refined 5 nanometer process, much as Rome was a first pass at 7 nanometers and Milan is a second pass. This is particularly the case if TSMC is having delays with its 3 nanometer processes, as was rumored two months ago. The Epyc 7005s are probably a late 2024 to early 2025 product; again, it will depend on a lot of moving parts, how well or poorly Intel is doing, and whatever else is happening in the server space at that time. The 10 exaflops generations of supercomputers will require these CPUs.

We strongly suspect that the Genoa kicker and the Turin processor will fit into the same SP5 server socket as Genoa and Bergamo. Server makers freak out if you do a socket change with every generation.

See original here:
AMD Deepens Its Already Broad Epyc Server Chip Roadmap - The Next Platform

Red Hat Extends Foundation for Multi-Cloud Transformation and Hybrid Innovation with Version 8.5 of Red Hat Enterprise Linux – Database Trends and…

Red Hat recently announced the general availability of Red Hat Enterprise Linux 8.5, the latest version of its enterprise Linux platform. The new release provides new capabilities to meet evolving and complex IT needs, from enhanced cloud-native container innovations to extending Linux skills with system roles, on whatever footprint customers require.

"Linux is the common language spoken across nearly every public cloud, private cloud, edge deployment and data center," said Gunnar Hellekson, general manager, Red Hat Enterprise Linux, Red Hat. "Red Hat Enterprise Linux 8.5 reinforces the role of the world's leading enterprise Linux platform in the multi-cloud ecosystem, providing new capabilities to meet evolving and complex IT needs, from enhanced cloud-native container innovations to extending Linux skills with system roles, on whatever footprint our customers require."

According to Red Hat, recent studies indicate that organizations are realizing that using public cloud exclusively may not be economically feasible for long-term scale. At the same time, it notes, Gartner predicts that by 2026, public cloud spending will exceed 45% of all enterprise IT spending, up from less than 17% in 2021.

Red Hat says it has long championed a hybrid multi-cloud world, where customers can choose the environment and technologies that build on a flexible, more consistent foundation.

The updated platform extends Red Hat Insights services, builds on existing container management capabilities and makes it easier for IT teams to set up workload-specific systems wherever they may exist across a multi-cloud world.

Red Hat Insights, Red Hat's predictive analytics service for identifying and remediating potential system issues, is available by default through almost all Red Hat Enterprise Linux subscriptions. With the launch of Red Hat Enterprise Linux 8.5, Insights adds new capabilities around vulnerability, compliance and remediation, helping organizations more effectively manage Red Hat Enterprise Linux environments across multicloud and hybrid cloud environments, even when it comes to nuanced security or compliance scenarios.

According to Red Hat, containers are a crucial component of modern DevOps implementations, which in turn are key to the adoption of multi-cloud and hybrid cloud strategies. Supporting these strategies, Red Hat Enterprise Linux 8.5 offers:

In addition, Red Hat says, as modern IT environments spread across multiple public clouds, virtualized environments, private clouds, on-premise servers and edge devices, the IT operations experience is becoming more complex. To help address complexity and to extend the existing skills of both new and experienced IT operations teams, Red Hat Enterprise Linux 8.5 adds support for new Red Hat Enterprise Linux system roles. System roles are preset configurations for Red Hat Enterprise Linux systems, enabling IT teams to more easily support specific workloads from the cloud to the edge. Red Hat Enterprise Linux 8.5 now includes:

In addition to these capabilities, Red Hat Enterprise Linux 8.5 also adds support for OpenJDK 17 and .NET 6 for developers seeking to modernize and build next-generation applications. The Red Hat Enterprise Linux web console has also been enhanced, making it possible to manage live kernel patching operations and manage overall performance. And, finally, enhancements to Image Builder introduce broader support for creating customized Red Hat Enterprise Linux images on bare metal for edge deployments and for assembling images that have distinct file systems to meet organization-specific internal standards and security compliance requirements.

For more information, read the Red Hat Enterprise Linux 8.5 release notes or view the product documentation for Red Hat Enterprise Linux 8.5.

Go here to read the rest:
Red Hat Extends Foundation for Multi-Cloud Transformation and Hybrid Innovation with Version 8.5 of Red Hat Enterprise Linux - Database Trends and...

Wiwynn Showcases High Performance OCP OAI Server and Immersion Cooling Solutions at OCP Global Summit 2021 – Yahoo Finance

Solutions from cloud to edge; optimizations for AI training, low PUE and edge environment

TAIPEI, Nov. 8, 2021 /PRNewswire/ -- Wiwynn (TWSE: 6669), an innovative cloud IT infrastructure provider for data centers, announced it will exhibit at the OCP Global Summit 2021, November 9-10, showing its Open Compute Project (OCP) based cloud and edge servers, in addition to the Habana OCP Accelerator Module (OAM) based Open Accelerator Infrastructure (OAI) platform. It will also showcase the world's first two-phase immersion cooled edge server and OAI server to address the surging power consumption and demand for low PUE in datacenters.


"We are excited to exhibit at OCP Global Summit and demonstrate our latest development in server, storage and high-performance OAI server. By integrating the latest CPU platforms with cutting-edge compute acceleration, 48V DC-in, and advanced cooling technologies, our offerings bring the most optimized performance to applications from cloud to edge," said Dr. Sunlai Chang, Wiwynn's President. "We are committed to the vibrant community and will continue to innovate for workload optimization while contributing to sustainable development for the datacenters."

As a major server partner for hyperscale datacenters and a leading OCP Solution Provider, Wiwynn will exhibit its next-generation OCP-based 1P/2P servers using processor platforms, including x86 and ARM, to address the needs of diverse workload optimization. Wiwynn's field-proven two-phase immersion cooling solution, designed for hyperscale datacenters to save up to 90% of cooling energy, will also be highlighted in response to the surging power consumption and demand for low PUE in datacenters.
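To put the cooling-energy claim in context: PUE (power usage effectiveness) is total facility power divided by IT power. A quick sketch with purely illustrative numbers (ours, not Wiwynn's) shows how a 90% cut in cooling energy moves the metric:

```python
def pue(it_kw: float, cooling_kw: float, other_kw: float = 0.0) -> float:
    """Power usage effectiveness: total facility power over IT power."""
    return (it_kw + cooling_kw + other_kw) / it_kw

# Hypothetical air-cooled facility: 1 MW of IT load, 400 kW of cooling overhead.
baseline = pue(1000.0, 400.0)          # PUE 1.4
# Same facility after saving 90% of cooling energy with immersion cooling.
immersion = pue(1000.0, 400.0 * 0.1)   # PUE 1.04
print(baseline, immersion)
```

A PUE approaching 1.0 means nearly all facility power goes to the servers themselves, which is why the datacenter operators cited here chase cooling efficiency so aggressively.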

For its edge offerings, Wiwynn's OCP openEDGE based solutions, EP100 and ES200, are well suited for the central unit/distributed unit (CU/DU) of 5G open radio access network (RAN), MEC and 5GC software, as well as platforms for AI edge applications such as the 5G smart factory. Considering the diverse edge environment, Wiwynn will unveil the world's first two-phase immersion cooling edge platform, EP200, with 2000W cooling capability within 2U height. The compact and integrated design allows edge servers like the ES200 to operate without air conditioning in harsh environments while still delivering massive computing capability.


For AI/deep learning (DL) training, Wiwynn will showcase its latest OCP Accelerator Module (OAM) based OCP Accelerator Infrastructure (OAI) server, SV600G4. It is one of Wiwynn's collaborations with Habana Labs, an industry-leading developer of purpose-built deep learning AI processors. SV600G4 integrates the server motherboard and the Universal Baseboard (UBB) that adopts the fully connected OAM architecture with 100Gb/s OAM interlink, and features eight Habana Gaudi AI training processors. In addition to air cooling, Wiwynn optimized the system to support liquid cooling options for high density deployment. For the OCP event, Wiwynn has partnered with LiquidStack, an industry-leading data center thermal management company, to demonstrate the world's first OAI server cooled by a two-phase liquid immersion DataTank delivering 3kW of compute power per RU.

"We are excited to collaborate with Wiwynn on the development of their high-performance OAI solution and benefit from access to solutions optimized for both air cooling and liquid cooling," said Eitan Medina, chief business officer of Habana Labs. "Habana is committed to bringing increased operational efficiencies to our data center customers. With Wiwynn's experience in cloud datacenters and design capabilities in system integration, thermal and advanced cooling, their cooling innovations can be catalysts for Habana's drive to datacenter adoption of our purpose-built deep learning solutions."

In addition to the showcase at booth #C2, Wiwynn will have speakers at Expo Hall Stage Talks and Executive Tracks to present the company's "Cloud to Edge 2.0" offerings and its outlook for future technology trends. Wiwynn will also present in eight engineering workshops, diving deep into topics including OCP Accelerator Infrastructure (OAI), DC-SCM, Open System Firmware (OSF), modular BMC, system management, immersion cooling, and liquid cooling.

Come and explore the Synergy of Edge and Cloud together.

Wiwynn's OCP Summit 2021 Event page

OCP Global Summit

About Wiwynn

Wiwynn is an innovative cloud IT infrastructure provider of high-quality computing and storage products, plus rack solutions for leading data centers. We aggressively invest in next generation technologies for workload optimization and best TCO (Total Cost of Ownership). As an OCP (Open Compute Project) solution provider and platinum member, Wiwynn actively participates in advanced computing and storage system designs while constantly implementing the benefits of OCP into traditional data centers.

For more information, please visit the Wiwynn website or contact sales@wiwynn.com. Follow Wiwynn on Facebook and LinkedIn for the latest news and market trends.


View original content to download multimedia:https://www.prnewswire.com/news-releases/wiwynn-showcases-high-performance-ocp-oai-server-and-immersion-cooling-solutions-at-ocp-global-summit-2021-301418424.html

SOURCE Wiwynn

Read the rest here:
Wiwynn Showcases High Performance OCP OAI Server and Immersion Cooling Solutions at OCP Global Summit 2021 - Yahoo Finance

Apple letting third-party shops replace screens is the right thing to do, but why was it ever in question? – iMore

We received a spot of good news yesterday with the revelation that Apple plans to release a software update that will allow third-party repair shops to swap out iPhone 13 screens without breaking Face ID, but why was that ever in question?

Backing up for a moment, let's remind everyone of what was going on.

As iFixit and others noted, replacing an iPhone 13 screen is relatively easy except for one new chip attached to the glass. That chip isn't an issue for Apple and its authorized repair centers but it is kryptonite to third-party shops. See, that chip renders Face ID unusable if it isn't swapped from the old screen or re-programmed. Unfortunately, most repair shops can't do either, effectively breaking Face ID.

It's a whole thing and, frankly, not a great look for Apple.

iFixit:

This unprecedented lockdown is unique to Apple and totally new in the iPhone 13. It is likely the strongest case yet for right to repair laws. And it's all because of a chip about the size of a Tic-Tac, tucked into the bottom of a screen.

(...)

The iPhone 13 is paired to its screen using this small microcontroller, in a condition repair techs often call "serialization." Apple has not provided a way for owners or independent shops to pair a new screen. Authorized technicians with access to proprietary software, Apple Services Toolkit 2, can make new screens work by logging the repair to Apple's cloud servers and syncing the serial numbers of the phone and screen. This gives Apple the ability to approve or deny each individual repair.
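Conceptually, the pairing check iFixit describes boils down to comparing serial numbers, with only Apple's tooling able to update the stored pairing. A sketch of the idea (our illustration only, not Apple's actual implementation or data model):

```python
# Pairing table keyed by phone serial; in reality this state lives on the device
# and is synced through Apple's cloud servers by authorized technicians.
pairings: dict[str, str] = {"PHONE-001": "SCREEN-A"}

def face_id_enabled(phone: str, installed_screen: str) -> bool:
    """Face ID stays functional only if the installed screen matches the paired one."""
    return pairings.get(phone) == installed_screen

def authorized_repair(phone: str, new_screen: str) -> None:
    """Stand-in for Apple Services Toolkit 2: sync the new screen serial to the phone."""
    pairings[phone] = new_screen

print(face_id_enabled("PHONE-001", "SCREEN-A"))  # True: original screen
print(face_id_enabled("PHONE-001", "SCREEN-B"))  # False: unsynced third-party swap
authorized_repair("PHONE-001", "SCREEN-B")
print(face_id_enabled("PHONE-001", "SCREEN-B"))  # True: after an authorized sync
```

The middle case is exactly the third-party repair problem: a perfectly good screen, rejected because the pairing record was never updated.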

However, a new report by The Verge says that Apple will issue a software update that fixes all of this, although it isn't known when that will happen.

That's good news, and it's the right thing to do. But it doesn't explain what's going on here: is the current situation the result of a bug, for example? That seems unlikely, which means that we're dealing with expected behavior. But why?

Apple sometimes says it does similarly odd things in the name of security: it wants to ensure that people can't swap out components as a way to get around biometric security measures like Face ID and, in the past, Touch ID. But there are two issues with that in this instance:

The current iPhone 13 is undoubtedly the best iPhone we've seen to date, but its display still breaks like any other. It's a reasonably safe assumption that people will regularly replace screens on these things, and what percentage of those replacements will be done by Apple or its partners? As things stand, I can only imagine a ton of people being left without a functional Face ID. That wasn't part of the plan, right?

I don't have the answers to any of these questions, but I've reached out to Apple to ask. Fingers crossed that this was all just a bug after all.

Follow this link:
Apple letting third-party shops replace screens is the right thing to do, but why was it ever in question? - iMore

Oracle expands in Middle East with new cloud region in Abu Dhabi – The National

Oracle opened a new cloud region, a complex that houses at least two data centres, in Abu Dhabi to provide storage capacity to regional enterprises amid soaring demand.

This will be the second cloud region of the Austin, Texas-based company in the UAE and its third in the Middle East.

"The UAE and Middle East are priority regions and the company has made significant investments to enhance its infrastructure, physical presence, human resources and other support capabilities in the region," Jae Sook Evans, Oracle's chief information officer, told The National.


"This long-term commitment from Oracle has translated into massive investments to help organisations of all sizes achieve their digital transformation projects," said Ms Evans, without disclosing the value of the investment.

The cloud industry is booming globally. The GCC's public cloud market is expected to more than double in value to reach $2.4 billion by 2024, up from $956 million last year, the International Data Corporation said.
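Reading "last year" as 2020 (this article ran in November 2021), the IDC forecast implies a compound annual growth rate of roughly 26%; the four-year span is our inference, not IDC's stated figure:

```python
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate implied by start and end values."""
    return (end / start) ** (1 / years) - 1

# $956M (assumed 2020) growing to $2.4B by 2024, per the IDC forecast above.
rate = cagr(956.0, 2400.0, 4)
print(f"{rate:.1%}")  # 25.9%
```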

For regional businesses, moving to a cloud system hosted by a specialised company such as Oracle, Amazon Web Services or SAP is more economical than creating their own infrastructure of servers, hardware and security networks, industry experts said. It also brings down the overall cost of ownership.

"Businesses have realised the numerous benefits, including higher return on investment, the ability to constantly innovate, boost security and create a scalable business model that is quick to respond to a changing economic environment, a key priority post-Covid," said Ms Evans.


Oracle, whose local clients include DP World, Abu Dhabi Customs, Emaar Properties, Saudi Arabia Tourism Development Fund, Saudi Railway Company, Mashreq Bank and Saudi Arabia Mining Company, reported nearly $7.4bn in global revenue from its cloud services and licence support business in the quarter that ended on August 31.

The cloud services business accounted for more than 75 per cent of its total sales of $9.7bn.

In July last year, the company opened its first cloud region in Jeddah that was followed by another centre in Dubai in October 2020. Last month, Oracle said it planned to open a second cloud region in Saudi Arabia's upcoming futuristic city Neom.

"With the Dubai and Abu Dhabi cloud regions, Oracle has the required infrastructure to work with public as well as private organisations to accelerate their digital transformation," Richard Smith, Oracle's executive vice president for technology in Europe, Middle East and Africa, said.

Dr Thani Al Zeyoudi, Minister of State for Foreign Trade, said the UAE is committed to developing an innovative and knowledge-based economy that encourages the development and deployment of the technologies of the future, and that attracting human, financial and technological capital to the nation is central to these ambitions.

"Oracle's continued investment into the UAE will only accelerate this process," he said.

The company has 34 cloud regions globally and aims to open 10 new centres across Europe, the Middle East, Asia and Latin America over the next year.

Oracle's cloud regions will boost the cyber resilience of the country, mitigate incidents of cyber crime and increase international collaboration, Dr Mohamed Hamad Al Kuwaiti, head of the UAE government's cyber security, said.

"Oracle's two cloud regions in the UAE are important investments towards providing cyber resilience and secure digital infrastructure for organisations to enjoy the full benefits of cloud computing," said Mr Al Kuwaiti.

Several global players are establishing data centres in the region as the cloud market picks up.

Last year, IBM unveiled two data centres in the UAE, its first foray into the Middle East and Africa cloud storage market. In 2019, Amazon Web Services opened three data centres in Bahrain. Germany's SAP has centres in Dubai, Riyadh and Dammam, which house servers for local cloud computing clients.

Alibaba Cloud, a comparatively smaller player and the cloud computing arm of the Chinese e-commerce giant, opened its first regional data centre in Dubai in 2016.

Updated: November 9th 2021, 5:00 AM

See the original post:
Oracle expands in Middle East with new cloud region in Abu Dhabi - The National