Category Archives: Cloud Servers
How the DoD’s future war-fighting needs are shaping cloud vendors’ products – C4ISRNet
The U.S. Defense Department's expectation that future wars will be fought across dispersed, disconnected environments is driving changes to its cloud needs. Industry is preparing for that reality.
With the nascent concept of connecting the best sensor from any location with the best shooter in any service, known as Joint All-Domain Command and Control, the defense industrial base is seeing a shift in the Pentagon's need for tools that people can access from any location.
Cloud computing, which allows users to store data more cheaply and access it remotely, is a core principle of the department's digital modernization strategy. With distributed war fighting on the horizon, the department will need tactical cloud capabilities available in remote places.
At IBM, customers ask for cloud environments that will allow users to access data across security classifications.
"We're seeing much more interest in more mobile environments, [including] more distributed, mobile environments that can operate at multiple [classification] levels," said Terry Halvorsen, former Defense Department chief information officer and IBM general manager for public sector client development.
The department's cloud computing needs are expected to grow from an estimated $2.1 billion in fiscal 2020 to about $2.7 billion in 2023, an increase of about 29 percent, according to an analysis by Bloomberg Government.
Several efforts will drive much of that growth: increased use of cloud-native applications and remote collaboration tools, continued migration of legacy systems, and the department's artificial intelligence push.
The demand for cloud access from any setting is part of a fundamental transition in how the DoD views the technology, said Hillery Hunter, chief technology officer of IBM cloud.
"Cloud is no longer a conversation about being in one place," she said. "In the next three to five years it really is a conversation about cloud being a consistent platform that spans all the way from the original data center out to the edge."
In the future the military wants to process data, such as drone footage or vehicle-mounted sensor data, in the tactical environment, rather than transporting it back to data centers thousands of miles away, a process that sucks up precious bandwidth and takes too much time.
That need is driving investment by major cloud providers in smaller servers and processing devices for war fighters in remote environments. It is evident, for example, in the Air Force's Advanced Battle Management System, which uses cloud services from vendors Microsoft and AWS through indefinite delivery, indefinite quantity contracts.
"We have to create devices that can operate in those austere environments," said Rick Wagner, president of Microsoft Federal for government customers. "Some of that goes with custom design. This is clearly not simply pulling something off the shelf that's been developed, so it's driving us into doing more work in creating our own devices, optimizing them to work in those environments and work with our cloud."
At Microsoft, joint war fighting is pushing the company to think about how they can get cloud-computing capabilities into disconnected and challenging environments, such as outfitting an individual ship with cloud-enabled devices or giving every battle group a data center.
"How can we get to the point where we get pieces of the cloud everywhere the DoD is operating and then you can start tying things together?" Wagner asked. "At Microsoft, that is one of the things we are trying to optimize for: how do we do compute at the edge, where the data sits, reduce the amount of time you've got to move data back and forth, and be able to operate with it?"
Beyond that, the department needs to be able to easily and securely pass data between classified and unclassified environments, another requirement that has industry brainstorming new options.
"There is a push to do things more remotely, and so the idea of cross-domain solutions starts to become a big capability," Wagner said. "How do we work from unclassified to secret to top secret and beyond over a consolidated environment where you've got the same tool sets on every environment?"
The Pentagon plans to provide data access at different classification levels through the Joint Enterprise Defense Infrastructure cloud, but its enterprise cloud has been mired in controversy and is potentially on the verge of being scrapped over a court battle. That cloud contract, experts said, is a gaping question for what the DoD's future cloud needs will look like.
The JEDI cloud also would provide some tactical edge capabilities. Former DoD CIO Dana Deasy consistently presented JEDI as a solution that would allow soldiers at the battlefront to access data, once noting that on a trip to Afghanistan, soldiers had to use three different systems to identify an adversary, make a decision and find friendly forces.
"Over the next three to five years, it'll really depend on if JEDI gets off the ground in fiscal 2021, which it may possibly do at the very end [of the year] if the legal decision goes the Pentagon's way," said Chris Cornillie, a federal market analyst at Bloomberg Government. "Otherwise, you have the DoD starting from scratch and looking to replace that big general-purpose cloud that JEDI represents with a more federated structure."
The JEDI program is billed as a cloud that will host 80 percent of the DoD's systems, deliver data to the war fighter at the edge and enable artificial intelligence development. In the absence of that cloud due to court cases and protests, services and other components have had to find other solutions. Cornillie suspects the military branches and fourth estate agencies will continue with the solutions they adopted to fill the void.
"If JEDI is scrapped, will they try to recompete another big cloud contract? I think that's yet to be determined," Cornillie said. "I don't think we'll have one big cloud contract, and certainly not one destined for a single vendor and a single cloud provider."
See the original post:
How the DoD's future war-fighting needs are shaping cloud vendors' products - C4ISRNet
Where the Cloud Can’t Save You – Analytics Insight
Generals will divide their forces to ensure that their army survives a battle. The cloud takes you to the next level: you can achieve the same thing by replicating your application, being in multiple places at once to keep a service window always open for your users.
However, if the split forces still report to a single commander, and he gets taken captive, it doesn't matter in how many places you are located; none of them will be able to function. The army as a whole is vulnerable at a single point of failure.
A distributed application can operate in several locations throughout a cloud network, but if every instance of that application is connected to the same database, you have the same problem. Your entire system is vulnerable at the point of its database, even on the cloud.
In order for an army to split up and continue on its mission, local commanders must possess the same information as the general and the authority to issue commands in his name.
In order for a database to give the same agility to the entire application, it must be distributed: several copies of that database must exist across multiple nodes, each holding the same information and able to operate independently.
Relational databases date back to the 1970s, long before the cloud or any type of distributed network. They were designed to work on a single server and to provide all your data needs from one location.
Their maturity and speed enable applications worldwide to serve their users accurately and quickly.
Their challenge is their inherently monolithic structure. Because of the complexity of how a relational model puts data together, running on multiple nodes while constantly replicating to itself raises that complexity to unsustainable levels.
Relational models restrict their applications to a single point of failure. Even with above-average availability, cloud platforms do have outages. If such an outage were to hit a relational database, the entire application relying on that database would be disabled until a new server could be found.
Nonrelational databases were developed alongside the cloud a little over a decade ago. Like cloud platforms, they were developed to be distributed from scratch.
Without the need for multiple tables or even schemas, the most common type of nonrelational database, the document database, is natural for a distributed system where multiple copies of your database can sit at the backend of multiple copies of your application.
The best type of database has a master-master structure, where each copy can perform both reads and writes to your data. If you have a database cluster of three nodes and one goes down, the other two databases have full ability and authority to keep working. Even if the majority of your nodes go down, you can still provide service to your users.
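To make that failover behaviour concrete, here is a minimal Python sketch of how a client might keep writing when nodes of a three-node master-master cluster disappear. The node URLs, endpoint path and document shape are hypothetical, not any particular database's API; real distributed databases such as RavenDB ship client libraries that handle this node selection and retry logic for you.

```python
import requests

# Hypothetical node URLs for a three-node, master-master database cluster.
NODES = [
    "https://db-node-1.example.com",
    "https://db-node-2.example.com",
    "https://db-node-3.example.com",
]

def write_document(doc_id: str, body: dict) -> str:
    """Try each node in turn; any surviving node can accept the write."""
    last_error = None
    for node in NODES:
        try:
            resp = requests.put(f"{node}/docs/{doc_id}", json=body, timeout=2)
            resp.raise_for_status()
            return node  # the node that accepted the write
        except requests.RequestException as err:
            last_error = err  # node is down or unreachable; try the next one
    raise RuntimeError(f"All nodes unavailable: {last_error}")

# Even if two of the three nodes are down, the write still lands somewhere:
# accepted_by = write_document("orders/42", {"status": "shipped"})
```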
Provisioning servers on the cloud closest to where your users are reduces the distance from their device to your application, increasing performance. It also reduces load on any one database at any one time.
It's like being on the longest of three lines in the supermarket when an announcement is made: "Will the people at the end of the longest line please move to the front of the shortest?" How awesome is that?
Like our universe, data is in a state of constant expansion. There is always more traffic, more volume, even the amount of information you can store in one unit of data is rising.
To manage that, while keeping performance robust in the face of ever more information to administer, you need a distributed system. If your application relies on one massive database, the cloud can't save you.
The best way to attack the data of tomorrow is with the finest tools of today.
Oren Eini is the CEO of RavenDB, a NoSQL distributed database, and RavenDB Cloud, its managed cloud service (DBaaS). Oren is a Microsoft MVP and a DZone Hall of Famer with over 3.5 million views over ten years of writing about NoSQL database technology, the .NET ecosystem, and software development. He has been blogging for more than 15 years under his alias Ayende Rahien.
Read this article:
Where the Cloud Can't Save You - Analytics Insight
Server Microprocessor Market Detailed Analysis of Business Overview, Statistics and Forecasts to 2026| IBM, Oracle KSU | The Sentinel Newspaper – KSU…
The server microprocessor market was valued at USD 15.19 billion in 2020 and is expected to reach USD 17.89 billion by 2026, at a CAGR of 2% over the forecast period 2021-2026.
Increasing data center footprints and demand from cloud service providers are prompting the growth of the server microprocessor market. Dominated by the duopoly of Intel and AMD, the market for server microprocessors is undergoing product innovation. Companies are recognizing the performance needs of modern workloads such as data analytics, machine learning and artificial intelligence, and are improving their designs accordingly.
The expansion of mobile broadband, growth in big data analytics, and cloud computing are driving the demand for new data center infrastructure. As of 2017, the US held almost 45% of the world's cloud and internet data centers, according to CNNIC. North America (US and Canada) housed around 2,854 data centers as of 2017, making it a prominent market for server microprocessors.
In line with global cloud penetration, cloud service providers such as Google are expanding their data center footprint across regions to keep up with the demand for high-performance computing. Such trends drive the server microprocessor market.
Competitive Landscape
The server microprocessor market has been dominated by Intel for the past few years, with AMD as its closest competitor and other lower-volume players such as IBM and Oracle. The market demands strong investment in R&D and technology partnerships to address the needs of servers and data centers. AMD is likely to take a slight share from Intel in the server microprocessor market, considering its new product rollouts and competitive pricing strategy. Vendors in the market have been launching new microprocessors for next-generation data centers. The following are recent developments in the market:
May 2018: Cavium announced the general availability of ThunderX2, its second generation of Armv8-A SoC processors for next-generation data center, cloud, and high-performance computing applications. The company, recently acquired by Marvell, worked with over 60 different partners (including OEMs, ODMs, and independent software and hardware vendors) to enable the deployment of ThunderX2-based platforms and finally reached the commercialization phase.
June 2019: Marvell (parent company of Cavium) announced a broader strategic partnership with Arm to accelerate the design and development of next-generation Marvell ThunderX server processor technology. Under the new agreement, Arm will support Marvell's R&D in the server processor technology area for at least three more years (until 2022). With this partnership, Marvell aims to expand its Arm-based server roadmap to enable the next generation of cloud and data center infrastructure.
Key Market Trends
Open Instruction Set Architecture (ISA) to Gain Traction Amid US-China Trade War
The trade war between China and the United States may have a certain impact on the market, considering that US companies (such as Intel), as well as Arm, have pledged to cut off Huawei's access to critical semiconductor components, such as SoCs and CPUs. This might trigger increased dependency on open-source instruction set architectures (ISAs) such as MIPS and RISC-V. For instance, in July 2019, Alibaba Group unveiled its first self-designed microprocessor, which marks a key step in China's efforts to promote chip self-sufficiency. The launch underscores the efforts of Chinese technology giants to address the trade clashes with the US (over access to technology). Alibaba's chip (not produced by Alibaba itself, but by another Chinese foundry such as Semiconductor Manufacturing International Corp.) has been designed to power consumer devices such as smart speakers, self-driving cars, and other internet-connected equipment requiring high-performance computing.
Europe Making Efforts to Develop Key Competence in Microprocessors
Europe is estimated to provide significant scope for servers powered by microprocessors, as compute-intensive applications and cloud adoption are increasing. Cloud computing is one of the strategic digital technologies promoted by the European Union for enhancing productivity and enabling better services by enterprises. These initiatives create significant demand for data centers in the region, thus driving the market.
Although North America has been the larger source of demand for server microprocessors, regional initiatives such as the European Processor Initiative (EPI) are likely to have an impact on the market.
In December 2018, the European Commission announced the selection of the European Processor Initiative (EPI) consortium. The aim of the initiative is to develop, co-design, and introduce a low-power microprocessor to the European market, thus retaining a significant part of that technology in Europe.
The EPI consortium proposes to create a long-term economic model by delivering a family of processors for the following markets: high-performance computing, data centers and servers, and autonomous vehicles.
See the original post here:
Server Microprocessor Market Detailed Analysis of Business Overview, Statistics and Forecasts to 2026| IBM, Oracle KSU | The Sentinel Newspaper - KSU...
Global Server Shipment for 2021 Projected to Grow by More than 5% YoY, with Successive QoQ Increases in Demand for ODM Direct Servers, Says TrendForce…
Enterprise demand for cloud services has been rising steadily in the past two years owing to rapidly changing global markets and uncertainties brought about by the COVID-19 pandemic. TrendForce's investigations find that most enterprises have been prioritizing cloud service adoption across applications ranging from AI to other emerging technologies, as cloud services have relatively flexible costs. Case in point: demand from clients in the hyperscale data center segment constituted more than 40% of total demand for servers in 4Q20, and this figure may approach 45% for 2021. For 2021, TrendForce expects global server shipments to increase by more than 5% YoY and ODM Direct server shipments to increase by more than 15% YoY.
Global server shipments for 2Q21 are expected to increase by 20% QoQ and remain unaffected by material shortages
Thanks to the accelerating pace of enterprise cloud migration and the long queue of server orders left unfulfilled last year as a result of the pandemic, server ODMs will likely receive an increasing number of client orders each quarter this year. For instance, ODM vendors saw a 1% QoQ growth in L6 server barebones orders from their clients in 1Q21, but this growth is expected to reach 15-18% in 2Q21. TrendForce's analysis indicates that, apart from server ODMs maintaining strong momentum, server OEMs (or server brands) will also be able to significantly raise their unit shipments in 2Q21. Quarterly total shipments from server OEMs for 2Q21 are currently projected to increase by 20% compared with 1Q21, which was the traditional off-season. The COVID-19 pandemic is a major contributor to shipment growth because it has caused a paradigm shift in corporate work practices and spurred companies to accelerate their cloud migrations. The effects of the pandemic have also provided a window of opportunity for traditional server OEMs, including HPE and Dell, to develop new business models such as hybrid cloud solutions or colocation services that allow their customers to pay as they go, in addition to their existing sales of whole servers.
It should be pointed out that not only is the shortage of materials within the server supply chain as yet unresolved, but the long lead times for certain key components are also showing no signs of abating. However, in response to the pandemic's impact on the industry last year, server manufacturers have now transitioned to a more flexible procurement strategy by sourcing from two or three suppliers instead of a single supplier for a single component, as this diversification allows them to mitigate the risk of potential supply chain disruptions. TrendForce therefore believes that the current supply of key components, including BMCs and PMICs, is sufficient for server manufacturers, without any noticeable risk of supply chain disruptions in the short run.
Huawei and Inspur maintain brisk server shipments due to favorable domestic governmental policies and demand from cloud service providers
China's server demand, which accounted for about 27.2% of the global total in 1Q21, continues to grow annually. Favorable policies and support from domestic cloud service providers are the main demand drivers in the country. Shipments from domestic server OEMs have remained fairly robust in China on account of the build-out of hyperscale data centers across the country. Another reason is that Chinese telecom companies procure servers mostly from domestic manufacturers. Taken together, these factors directly contributed to the server shipments of Inspur and Huawei in 1Q21.
Huawei's server shipments are relatively unaffected by the US-China dispute, even though the sanctions enforced by the US government constrained Huawei's component supply. Demand for Huawei servers has been boosted by telecom tenders and procurement from domestic enterprise clients. A QoQ growth rate of roughly 10% is projected for 2Q21 on account of a new round of government tenders. For 2021 as a whole, Huawei's annual shipments are still forecast to register a YoY growth rate of about 5%. Thanks to infrastructure programs and rising orders from data centers, Inspur is expected to capture around 30% of China's total server demand in 2021. On the matter of product strategy, Inspur already has a sizable ODM business with tier-1 Chinese cloud service providers (i.e., Baidu, ByteDance, Alibaba, and Tencent). The volume of incoming orders for the first half of this year will also be quite substantial, because tier-2 cloud service providers and e-commerce platforms such as JD.com, Kuaishou, and Meituan will be injecting significant demand.
For more information on reports and market data from TrendForce's Department of Semiconductor Research, please click here, or email Ms. Latte Chung from the Sales Department at lattechung@trendforce.com
NSI to sponsor British Security Awards’s Apprentice of the Year Award | Security News – SourceSecurity.com
Physical security and the cloud: why one can't work without the other
Human beings have a long-standing relationship with privacy and security. For centuries, we've locked our doors, held close our most precious possessions, and been wary of the threats posed by thieves. As time has gone on, our relationship with security has become more complicated, as we now have much more to be protective of. As technological advancements in security have got smarter and stronger, so have those looking to compromise it.
Cybersecurity
Cybersecurity, however, is still incredibly new to humans when we look at the long relationship that we have with security in general. As much as we understand the basics, such as keeping our passwords secure and storing data in safe places, our understanding of cybersecurity as a whole is complicated, and so is our understanding of the threats that it protects against.
However, physical security and cybersecurity are often interlinked. Business leaders may find themselves weighing up the different risks to the physical security of their business. As a result, they implement CCTV in the office space, and alarms are placed on doors to help repel intruders.
Importance of cybersecurity
But what happens when the data that is collected from such security devices is also at risk of being stolen, and you don't have to break through the front door of an office to get it? The answer is that your physical security can lose its power to keep your business safe if your cybersecurity is weak.
As a result, cybersecurity is incredibly important to empower your physical security. We've seen the risks posed by cybersecurity hacks in recent news. Video security company Verkada recently suffered a security breach as malicious attackers obtained access to the contents of many of its live camera feeds, and a recent report by the UK government says two in five UK firms experienced cyberattacks in 2020.
Cloud computing: the solution
Cloud computing offers a solution. The cloud stores your information in data centres located anywhere in the world and is maintained by a third party, such as Claranet. As the data sits on hosted servers, it is easily accessible while not being at risk of being stolen through your physical device.
Here's why cloud computing can help to ensure that your physical security and the data it holds aren't compromised.
Cloud anxiety
It's completely normal to speculate whether your data is safe when it's stored within a cloud infrastructure. As we are effectively outsourcing our security by storing our important files on servers we have no control over - and, in some cases, limited understanding of - it's natural to worry about how vulnerable this is to cyber-attacks.
The reality is, the data that you save on the cloud is likely to be a lot safer than that which you store on your device. Cyber hackers can try to trick you into clicking on links that deploy malware, or pose as a help desk trying to fix your machine. As a result, they can access your device, and if this is where you're storing important security data, then it is vulnerable.
Cloud service providers
Cloud service providers offer security that is a lot stronger than the software that is likely in place on your personal computer. Hyperscalers such as Microsoft and Amazon Web Services (AWS) are able to hire countless more security experts than any individual company, save the corporate behemoths, could afford. These major platform owners have responsibility for thousands of customers on their cloud and are constantly working to enhance the security of their platforms. The security provided by cloud service providers such as Claranet is an extension of these capabilities.
Cloud resistance
Cloud servers are located in remote locations that workers don't have access to. They are also encrypted, which is the process of converting information or data into code to prevent unauthorised access.
Additionally, cloud infrastructure providers like ourselves look to regularly update your security to protect against viruses and malware, leaving you free to get on with your work without any niggling worries about your data being at risk from hackers.
Data centres
Cloud providers are also able to provide sophisticated security measures and solutions in the form of firewalls and artificial intelligence, as well as data redundancy, where the same piece of data is held within several separate data centres. This is effectively super-strong backup and recovery, meaning that if a server goes down, you can access your files from a backup server.
Empowering physical security with cybersecurity
By storing the data gathered by your physical security in the cloud, you're not just significantly reducing the risk of cyber-attacks, but also protecting it from physical threats such as damage in the event of a fire or flood.
Rather than viewing your physical security and cybersecurity as two different entities, treat them as part of one system: if one is compromised, the other is also at risk. They should work in tandem to keep your whole organisation secure.
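To make the encryption point above concrete, here is a minimal sketch of client-side encryption before data is handed to a cloud provider, using Python's widely available cryptography package (not a library named in the article); the payload and the key handling are illustrative assumptions only, not any provider's recommended workflow.

```python
from cryptography.fernet import Fernet

# Generate a symmetric key; in practice this would live in a key-management
# service, never alongside the data it protects.
key = Fernet.generate_key()
cipher = Fernet(key)

# Hypothetical sensor/camera payload to be protected before it leaves the site.
plaintext = b"2021-04-06T02:14:00Z camera-07 motion event"

# Only the ciphertext is handed to the cloud provider; without the key,
# the stored copy is unreadable even if the account is breached.
ciphertext = cipher.encrypt(plaintext)

# Recovery (on another site, or after an outage) only needs the key.
assert cipher.decrypt(ciphertext) == plaintext
```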
Continue reading here:
NSI to sponsor British Security Awards's Apprentice of the Year Award | Security News - SourceSecurity.com
Introducing the Cloud-Native Supercomputing Architecture – HPCwire
Historically, supercomputers were designed to run a single application and were confined to a small set of well-controlled users. With AI and HPC becoming primary compute environments for wide commercial use, supercomputers now need to serve a broad population of users and to host a more diverse software ecosystem, delivering non-stop services dynamically. New supercomputers must be architected to deliver bare-metal performance in a multi-tenancy environment.
The design of a supercomputer focuses on its most important mission: maximum performance with the lowest overhead. The goal of the cloud-native supercomputer architecture is to maintain these performance characteristics while meeting cloud services requirements: least-privilege security policies and isolation, data protection, and instant, on-demand AI and HPC services.
The data processing unit, or DPU, is an infrastructure platform that's architected and designed to deliver infrastructure services for supercomputing applications while maintaining their native performance. The DPU handles all provisioning and management of hardware and virtualization of services: computing, networking, storage, and security. It improves the overall performance of multi-user supercomputers by optimizing the placement of applications and by optimizing network traffic and storage performance, while assuring quality of service.
DPUs also support protected data computing, making it possible to use supercomputing services to process highly confidential data. The DPU architecture securely transfers data between client storage and the cloud supercomputer, executing data encryption on behalf of the user.
The NVIDIA BlueField DPU consists of the industry-leading NVIDIA ConnectX network adapter, combined with an array of Arm cores; purpose-built, high-performance-computing hardware acceleration engines with full data-center-infrastructure-on-a-chip programmability; and a PCIe subsystem. The combination of the acceleration engines and the programmable cores enables migrating the complex infrastructure management and user isolation and protection from the host to the DPU, simplifying and eliminating overheads associated with them, as well as accelerating high-performance communication and storage frameworks.
By migrating the infrastructure management, user isolation and security, and communication and storage frameworks from the untrusted host to the trusted infrastructure control plane that the DPU is a part of, truly cloud-native supercomputing is possible for the first time. CPUs or GPUs can increase their compute availability to the applications and operate in a more synchronous way for higher overall performance and scalability.
The BlueField DPU enables a zero-trust supercomputing domain at the edge of every node, providing bare-metal performance with full isolation and protection in a multi-tenancy supercomputing infrastructure.
The BlueField DPU can host untrusted multi-node tenants and ensure that supercomputing resources used by one tenant will be handed over clean to a new tenant. As part of this process, the BlueField DPU protects the integrity of the nodes, reprovisions resources as needed, clears states left behind, provides a clean boot image for a newly scheduled tenant, and more.
HPC and AI communication frameworks such as Unified Communication X (UCX), Unified Collective Communications (UCC), Message Passing Interface (MPI), and Symmetrical Hierarchical Memory (SHMEM) provide programming models for exchanging data between cooperating parallel processes. These libraries include point-to-point and collective communication semantics (with or without data) for synchronization, data collection, or reduction purposes. These libraries are latency and bandwidth sensitive and play a critical role in determining application performance. Offloading the communication libraries from the host to the DPU enables parallel progress in the communication periods and in the computation periods (that is, overlapping) and reduces the negative effect of system noise.
BlueField DPUs include dedicated hardware acceleration engines (for example, NVIDIA In-Network Computing engines) to accelerate parts of the communication frameworks, such as data reduction-based collective communications and tag matching. The other parts of the communication frameworks can be offloaded to the DPU Arm cores, enabling asynchronous progress of the communication semantics. One example is leveraging BlueField for MPI non-blocking, All-to-All collective communication. The MVAPICH team at Ohio State University (OSU) and the X-ScaleSolutions team have migrated this MPI collective operation into the DPU Arm cores with the OSU MVAPICH MPI and have demonstrated 100 percent overlapping of communication and computation, which is 99 percent higher than using the host CPU for this operation.
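The overlap pattern itself can be expressed in generic MPI terms. The sketch below uses mpi4py's non-blocking Ialltoall to start the exchange and then computes while the transfer progresses; it illustrates the programming model only, and does not reproduce the DPU offload or the OSU MVAPICH measurements described above. The script name and buffer sizes are assumptions for illustration.

```python
# Run with something like: mpiexec -n 4 python overlap_alltoall.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
size = comm.Get_size()

chunk = 1 << 20  # elements exchanged with each peer (illustrative size)
sendbuf = np.full(size * chunk, comm.Get_rank(), dtype=np.float64)
recvbuf = np.empty(size * chunk, dtype=np.float64)

# Start the non-blocking all-to-all; the exchange progresses (ideally on the
# NIC/DPU) while the host CPU keeps computing.
req = comm.Ialltoall(sendbuf, recvbuf)

# Independent local computation overlapped with the communication.
local = np.random.rand(chunk)
partial = np.sin(local).sum()

# Block only when the exchanged data is actually needed.
req.Wait()
total = recvbuf.sum() + partial
```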
Parallel Three-Dimensional Fast Fourier Transforms (P3DFFT) is a library used for large-scale computer simulations in a wide range of fields, including studies of turbulence, climatology, astrophysics, and material science. P3DFFT is written in Fortran90 and is optimized for parallel performance. It uses MPI for interprocessor communication and greatly depends on the performance of MPI All-to-All. Leveraging the OSU MVAPICH MPI over BlueField, the OSU and X-ScaleSolutions teams have demonstrated a 1.4X performance acceleration for P3DFFT.
1. The performance tests were conducted by Ohio State University on the HPC-AI Advisory Council's Cluster Center, with the following system configuration: 32 servers with dual-socket Intel Xeon 16-core E5-2697A V4 CPUs @ 2.60GHz (a total of 32 cores per node), 256GB DDR4-2400MHz RDIMM memory, and a 1TB 7.2K RPM SATA 2.5-inch hard drive per node. The servers were connected with NVIDIA BlueField-2 InfiniBand HDR100 DPUs and an NVIDIA Quantum QM7800 40-port HDR 200Gb/s InfiniBand switch.
Extracting the highest possible performance from supercomputing systems while achieving efficient utilization has traditionally been incompatible with the secured, multi-tenant architecture of modern cloud computing. A cloud-native supercomputing platform provides the best of both worlds for the first time, combining peak performance and cluster efficiency with a modern zero-trust model for security isolation and multi-tenancy.
Learn more about the NVIDIA Cloud-Native Supercomputing Platform.
More:
Introducing the Cloud-Native Supercomputing Architecture - HPCwire
Cloud vs in-house disaster recovery – Times of Malta
More and more companies of all sizes are moving to the cloud, but why is it time to also move disaster recovery systems to the cloud?
Partnering with a provider that meets their disaster recovery (DR) needs will allow organisations to protect themselves from threats such as system failures and to focus on growing their business rather than addressing unknown risk factors. One of the benefits of using disaster recovery as a service is that one does not have to invest money and resources in owning and maintaining an on-premises disaster recovery environment. It may be tempting to implement every step of a disaster recovery plan in-house, but smaller companies that lack a dedicated IT team may find it easier to use a third-party solution.
Cloud computing is cheap because of its economies of scale, and outsourcing usually gives one exactly what one needs. The plot thickens for companies that use a software-as-a-service (SaaS) provider, which in turn relies on third-party cloud providers to host its services.
There are also several smaller, lesser-known players who focus much of their effort on providing high-quality disaster recovery as a service (DRaaS), but there is a shortage of them. A single-point disaster can weaken one's business, and a backup and logging service is extremely important if one needs to perform disaster recovery after a failure and see where something went wrong.
DRaaS can be a great option for small and medium-sized enterprises that lack the expertise to design and test an effective disaster recovery plan. Poor management of the location and nature of backups, as well as of the availability of backup data, can create a single point of failure or a disaster that can weaken the company.
A disaster can also affect a wide geographical area, which means that backups can be affected even if they are in the same region as the main office.
If one wants to use the cloud for DR/business continuity (BC) planning, there are some problems one needs to face. For disaster recovery, this means understanding how critical business applications behave in a cloud environment. If one relies on cloud disaster recovery software, one also needs to examine the specifics of what one is buying. These tests not only help to establish whether the disaster recovery plan is working, but can also provide insight into problems that can occur during a disaster.
It is a misconception that one does not have to worry about resilience and recovery when one deploys workloads in the cloud. While such services exist, there is certainly much more to consider before an organisation can be considered safe. It is important to note that while cloud providers have certain responsibilities, companies and cloud customers are responsible for planning an effective disaster recovery strategy. One probably has a plan to protect their company data, employees and business. Management will feel safer knowing that one knows the risks and has adapted the disaster recovery plan accordingly.
This is one of the main reasons why a disaster recovery plan is needed for both cloud services and in-house services, and for both cloud providers and cloud customers. Other problems that could put a business in a bad situation if it is not prepared include lack of access to critical infrastructure and other critical resources, such as backup and recovery equipment.
The nature of the cloud makes it less secure than traditional options, and more preparation is needed at the software, platform and infrastructure levels to ensure security. While the cloud provides excellent disaster recovery capabilities, it is not a cheap alternative to the in-house approach. If one's environment is already in the cloud, it may be useful to use a cloud provider as an option to restore data. One can work with a cloud disaster recovery partner to implement the design and set up the disaster recovery infrastructure. Cloud DR partners have access to a wide range of resources, including data centres, cloud servers, storage and network infrastructure, and disaster management tools.
While backing up important data is an integral part of a company's IT strategy, backing up is not the same as having a disaster recovery plan. The last thing one wants to discover is that their backups failed at the very moment they lost their data. That is why it is critical for cloud providers to define exactly what their policies are when it comes to the backup process and disaster recovery.
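As a concrete illustration of the point that backing up is not the same as having a plan, the short Python sketch below checks that the most recent backup archive exists, is fresh, and matches the checksum written by the backup job. The directory layout, file naming and age threshold are assumptions for illustration, not any provider's tooling.

```python
import hashlib
import time
from pathlib import Path

# Hypothetical location where a nightly backup job drops its archives.
BACKUP_DIR = Path("/var/backups/nightly")
MAX_AGE_HOURS = 26  # a daily backup older than this suggests the job failed

def verify_latest_backup() -> None:
    archives = sorted(BACKUP_DIR.glob("*.tar.gz"), key=lambda p: p.stat().st_mtime)
    if not archives:
        raise RuntimeError("No backup archives found: the backup job never ran")

    latest = archives[-1]
    age_hours = (time.time() - latest.stat().st_mtime) / 3600
    if age_hours > MAX_AGE_HOURS:
        raise RuntimeError(f"Latest backup {latest.name} is {age_hours:.1f}h old")

    # Compare against the checksum file written by the backup job itself.
    expected = Path(str(latest) + ".sha256").read_text().split()[0]
    actual = hashlib.sha256(latest.read_bytes()).hexdigest()
    if actual != expected:
        raise RuntimeError(f"Checksum mismatch for {latest.name}: archive corrupt")

if __name__ == "__main__":
    verify_latest_backup()
    print("Latest backup is present, fresh and intact")
```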
This article was prepared by collating various publicly available online sources.
Claude Calleja, Executive, eSkills Malta Foundation
More here:
Cloud vs in-house disaster recovery - Times of Malta
Microsoft submerges cloud servers in liquid – Fudzilla
A rack of servers is now being used for production loads in what looks like a liquid bath.
While immersion cooling has existed in the industry for a few years now, Vole claims it is "the first cloud provider that is running two-phase immersion cooling in a production environment".
The cooling works by completely submerging server racks in a specially designed non-conductive fluid. The fluorocarbon-based liquid removes heat as it directly contacts components; because it has a low boiling point (122 degrees Fahrenheit, or 50 degrees Celsius), it boils off, condenses, and falls back into the bath as raining liquid.
This creates a closed-loop cooling system, reducing costs as no energy is needed to move the liquid around the tank, and no chiller is needed for the condenser either.
Vole's data centre advanced development group vice-president Christian Belady told The Verge: "The rack will lie down inside that bath tub, and what you'll see is boiling just like you'd see boiling in your pot. The boiling in your pot is at 100 degrees Celsius, and in this case, it's at 50 degrees Celsius."
Just so long as the server does not try to put its feet up by the taps, it should be OK.
See more here:
Microsoft submerges cloud servers in liquid - Fudzilla
The OVHCloud fire: Assessing the after-effects on datacentre operators and cloud users – ComputerWeekly.com
The OVHCloud datacentre campus fire in Strasbourg, France, sent shockwaves through the hyperscale cloud community when it happened in early March 2021, but the industry-wide after-effects of the event could be transformational, in terms of addressing shortcomings in enterprise attitudes towards cloud backups and disaster recovery, while also changing the way that datacentre operators worldwide approach fire suppression.
The fire occurred in the early hours of Wednesday 10 March 2021, with the firm's five-storey SBG2 datacentre destroyed outright during the blaze, while another facility, dubbed SBG1, incurred some damage. Two other datacentres at the site, known as SBG3 and SBG4, were switched off as a post-fire precaution and were reportedly undamaged by the incident.
Even so, OVHCloud customers across Europe were hit by service interruptions and downtime as a result of the incident, and in the weeks that have followed, the firm has been racing to bring their applications and workloads back online again.
These efforts have included embarking on a widescale clean-up of the datacentre campus, but simultaneously the firm has been drawing on the fact it builds all its own servers in-house to rapidly replace the server capacity destroyed during the fire.
The company operates 15 datacentres in Europe, and it also moved to make any spare capacity within these sites available to affected customers. At the time of writing, OVHCloud's service status page for the Strasbourg facility stated that it is still in the throes of rolling out replacement server capacity at alternative datacentre locations for customers who had workloads housed in SBG2 and the partially destroyed parts of SBG1.
Both facilities housed a mix of public cloud, bare metal and virtual private server (VPS) services, with the company confirming that 80% of the public cloud-hosted virtual machines these datacentres hosted were back online as of Tuesday 6 April 2021. Meanwhile, 25% of its bare metal services had been restored, and 34% of its bare metal-based VPS services were also back online.
In SBG1 specifically, 35% of the bare metal cloud servers were back online as of Tuesday 6 April 2021, the company's service status site confirmed, with OVHCloud stating its hope to have 95% of services back in action by the end of this week.
The update further confirmed that SBG4 and SBG3 are operating at 99% availability for customers.
In a video update posted on 22 March 2021, OVHCloud founder and chairman Octave Klaba shared details of how efforts to restore services for affected customers were progressing, but also confirmed that the root cause of the fire is still the subject of an ongoing investigation that is set to run for a while yet.
"The investigation is ongoing," he said, and involves law enforcement, insurance personnel and other assorted financial experts. "It will take a few months to have the conclusion of this investigation, and once we have it all, we'll share it with you."
Initial reports in the wake of the event, however, have suggested the onset of the blaze may have been linked to work carried out on an Uninterruptible Power Supply (UPS) at the site on the day leading up to the fire.
"Early indicators point to the failure of a UPS, causing a fire that spread quickly," said Andy Lawrence, executive director of research at the datacentre resiliency think tank the Uptime Institute, in a March 2021 blog post. "At least one of the UPSs had been extensively worked on earlier in the day, suggesting maintenance issues may have been a main contributor."
Although there is no way of knowing for sure at this point, it is possible the UPS in question was deployed next to a battery cabinet that overheated and caused a fire, offered Lawrence.
"Although it is not best practice, battery cabinets (when using valve-regulated lead acid or VRLA batteries) are often installed next to the UPS units themselves," he wrote. "This may not have been the case at SBG2, [but] this type of configuration can create a situation where a UPS fire heats up batteries until they start to burn and can cause fire to spread rapidly."
While the investigation into the cause of the fire continues, Klaba said during the video update that the company is committed to using the incident to develop new industry standards, setting out how best to tackle fires within datacentres.
Presently, best practice techniques and standards for fire detection, suppression and extinguishment within datacentres vary according to the location of the datacentre itself, but also what type of equipment is deployed in each room, he said.
"[There are] different kinds of fire [extinguishing techniques] for an electrical fire and a different kind for a fire coming from the servers. Whatever the standard is, we [have] decided to over-secure all our datacentres," said Klaba.
In addition to this, he continued, OVHCloud has set itself a goal of creating a fire testing laboratory, within which the firm will test how fires progress within different datacentre settings, and has committed to sharing the findings from that work with the wider industry.
"We decided to create a lab where I want to test. I want to see how the fire is going in the different kinds of rooms, and to find the best way to extinguish the fire in all kinds of these situations. I want also to share the conclusions that we will have in this lab with all the industry," he said.
"Because we don't want to have this kind of incident in our datacentre, but also nobody wants to have this kind of an incident in [their] datacentre at all, and the industry has to evolve, and to evolve its standards."
Datacentre fires are a mercifully rare occurrence, but that does not stop them from being a constant concern for operators, stated the Uptime Institute's Lawrence in an April 2021 blog post about the frequency of such incidents.
"Uptime Institute's database of abnormal incidents, which documents over 8,000 incidents shared by members since its inception in 1994, records 11 fires in datacentres, less than 0.5 per year," wrote Lawrence. "All of these were successfully contained, causing minimal damage and disruption."
Lawrence goes on to observe in the post that it is often the systems put in place to suppress fires that do more damage than the fires themselves.
"In recent years, accidental discharge of fire suppression systems, especially high-pressure clean agent gas systems, has actually caused significantly more serious disruption than fires, with some banking and financial trading datacentres affected by this issue," wrote Lawrence.
He also offers operators some fire prevention advice, in terms of the steps they should take to ensure the relatively low incidence of fires reported in the sector continues.
"Responsibility for fire regulation is covered by the local authority having jurisdiction, and requirements are usually strict, but rules may be stricter for newer facilities, so good operational management is critical for older datacentres," he said.
"Uptime Institute advises that all datacentres use very early smoke detection apparatus systems and maintain appropriate fire barriers and separation of systems. Well-maintained water sprinkler or low-pressure clean agent fire suppression systems are preferred. Risk assessments primarily aimed at reducing the likelihood of outages will also pick up obvious issues with these systems."
While the OVHCloud datacentre fire can serve as a cautionary tale for other operators about how to avoid their facilities befalling a similar fate, what about the firm's customers, who have experienced a prolonged period of service disruption as a result of the incident? What lessons can they learn from all this?
According to Christophe Bertrand, senior analyst at TechTarget-owned Enterprise Strategy Group, the number one lesson that enterprises need to learn from this incident, regardless of whether they are an OVHCloud customer or not, is the importance of backing up their data.
"Whatever you do as a business, you are always responsible for your data. From a compliance and governance standpoint, you as a business are responsible for securing the ability to recover your own data," he told Computer Weekly.
"Just because you have placed data with a third-party software-as-a-service (SaaS) or cloud infrastructure provider, you're still responsible for your data," said Bertrand. "If something happens, and anything could happen, on your premises or with the cloud service you use, you should always be in a position to recover your data."
"What we have [with OVHCloud] is possibly a situation where maybe people thought, because it was with a third-party provider, it was automatically protected and backed up," he said. "[So] tough luck, because the data is your data and it's on you as a business if you don't have a backup somewhere else."
For some of the firms affected by the fire, the lack of a backup could be fatal, said Bertrand. "I really feel for the small companies that were affected by it, because [the fire] is certainly not their fault, but if they didn't have a backup that was strategically thought through and placed somewhere where they could recover their data, then they made a mistake. And it may be a fatal one. I think some businesses will close based on that."
"They may also now incur some additional issues as well," he said. "They have a liability to their end users, or maybe some business partners, and maybe some compliance exposures too? Compliance exposures, for sure, because you're not really supposed to lose data."
A common misconception that IT buyers often have about cloud is that they mistake the fact their data is accessible from anywhere as proof that it is backed-up and will always be available in the event of an outage, said Bertrand.
"My research shows this big disconnect in terms of protection of data that's in cloud environments, because somehow people conflate availability with protection," he said.
OVHCloud's Klaba made a similar observation during one of his post-fire video updates, where he made a public commitment to provide the firm's customers with free data backups as standard in future, rather than as a paid-for add-on.
"It seems globally the customers understand what we are delivering, but some customers don't understand exactly what they have bought, so we don't want to jump into this discussion by saying we will explain better what we are delivering. What we are doing is we will increase security, and we will deliver the higher security of backups for all customers in different datacentres," he said.
And, in Klaba's view, this could lead other cloud firms to follow suit in due course. "This incident will change our way of delivering the services, but I believe it will also change the standards of the industry and the market," he said in a video update to customers dated 16 March 2021.
Jon Healy, operations director at datacentre management services provider Keysource, said the entire incident serves to reinforce why disaster recovery is something neither datacentre operators nor cloud users can afford to overlook.
"One hundred percent service availability is an expected standard today, but putting this in place for some requires comprehensive planning and can have both technical and commercial implications which need to be considered in order for it to be effective," he said.
Given the average lifespan of a datacentre, there is every chance that while fires might be scarce now, that could change in the future.
Given the exponential increase in facilities built in the early noughties, the core infrastructure reaches end of life in 10 to 20 years, and the capital investment to replace or upgrade remains high. Will we see more events like this, and what will this mean for the industry?
One area in which ESG's Bertrand and others have commended OVHCloud is the transparency and openness of its communications with customers in the wake of the fire, which have included regular video updates from Klaba, daily despatches on the situation via his Twitter feed, and service status updates from the company directly on its web pages.
"They seem to have been very transparent, communications-wise, which is a real sign of maturity," he said. "There is probably only so much they can share, and they have to be cautious because of this process in place to figure out what happened, but you don't get the sense that they're hiding anything."
Originally posted here:
The OVHCloud fire: Assessing the after-effects on datacentre operators and cloud users - ComputerWeekly.com
Intel Partners Debut Latest Servers Based on the New Intel Gen 3 ‘Ice Lake’ Xeons – HPCwire
Fresh from Intel's launch of the company's latest third-generation Xeon Scalable Ice Lake processors on Tuesday April 6, Intel server partners Cisco, Dell EMC, HPE and Lenovo simultaneously unveiled their first server models built around the latest chips.
And though arch-rival AMD may have won the first round of the latest global chip fight by unveiling its latest next-generation Epyc server chips three weeks before Intel's products, back on March 15, Intel is apparently not giving up any ground in the fight.
Intel is touting the new Xeon Scalable chips as having performance that is up to 46 percent better than the company's previous generation of chips, along with major improvements in security, flexibility and more.
The Ice Lake processors are 10nm chips with up to 40 cores per processor, up from 28 cores in the previous-generation Cascade Lake chips. Supporting up to 6 terabytes of system memory per socket, the chips provide eight channels of DDR4-3200 memory and up to 64 lanes of PCIe Gen4 per socket, compared with eight channels of DDR4-2933 and up to 48 lanes of PCIe Gen3 per socket for the previous chips. The new chips also include features such as Intel Software Guard Extensions (SGX), Intel Total Memory Encryption (TME) and Intel Speed Select Technology (SST), and are compatible with the latest version of Intel Optane persistent memory modules (PMem). PCIe Gen4 provides twice the throughput of the earlier PCIe Gen3 specification.
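For readers who want to confirm whether a given Linux host actually exposes features such as SGX, the kernel advertises supported extensions as flags in /proc/cpuinfo. The snippet below is a minimal sketch assuming a Linux system; the exact flag names used here (for example "sgx") are assumptions that depend on kernel version and on whether the feature is enabled in firmware.

```python
# Minimal sketch: report whether selected CPU feature flags are present
# on a Linux host by reading /proc/cpuinfo. The flag names checked here
# are assumptions; they vary with kernel version and firmware settings.
FEATURES = ["sgx", "avx512f", "avx512_vnni"]

with open("/proc/cpuinfo") as f:
    flags = set()
    for line in f:
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())
            break

for feature in FEATURES:
    status = "present" if feature in flags else "not reported"
    print(f"{feature}: {status}")
```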
Rob Enderle, principal analyst with Enderle Group, told EnterpriseAI that the launch of Ice Lake is arguably one of the most critical launches this decade for Intel because of previous delays in getting these chips to market. "According to OEMs, Intel's inability to advance their process technology and remain competitive put them arguably two years behind AMD," said Enderle.
The Ice Lake family includes 56 SKUs, grouped across 10 segments: 13 are optimized for highest per-core scalable performance (8 to 40 cores, 140-270 watts), 10 for scalable performance (8 to 32 cores, 105-205 watts), 15 target four- and eight-socket systems (18 to 28 cores, 150-250 watts), and three are single-socket optimized parts (24 to 36 cores, 185-225 watts). There are also SKUs optimized for cloud, networking, media and other workloads. All but four SKUs support Intel Optane Persistent Memory 200 series technology.
In comparison, AMD's recently announced Epyc Milan CPUs will be available in 19 SKUs, from a flagship 64-core version to 8-core versions built for a myriad of server workloads. For AI users, the big AMD news was that the latest generation of AMD server chips shows promise in improving performance for many AI processes, according to the company.
For Intel's partners, the bolstered specifications and capabilities of the Ice Lake chips are what they needed to offer fresh, more robust and more powerful servers to customers with ever-increasing compute workloads.
While Cisco, Dell EMC, HPE and Lenovo are the first Intel server partners to announce their new hardware at the launch of the Ice Lake chips, other server partners are expected to announce their own boosted server line-ups soon as well.
Here's a rundown of the first Ice Lake-equipped server products:
Cisco Unveils Three Server Models
Cisco begins its Ice Lake transformation with three new Unified Computing System (UCS) server models that incorporate the new CPUs: the Cisco UCS B200 M6, C220 M6 and C240 M6 servers, built for today's hybrid and diverse computing environments.
Highlighting the latest UCS servers is native integration with the Cisco Intersight hybrid cloud operations platform, which aims to make it easier for customers to manage their infrastructure, wherever it is located, through a policy-based system.
With the new capabilities built into the latest Intel chips, the new Cisco servers will be able to handle a wide range of workloads for customers, including Virtual Desktop Infrastructure (VDI), databases, AI and machine learning, big data and more, according to Cisco. The new servers are expected to be generally available within about 90 days.
"For over twelve years, Cisco and Intel have been committed to pushing the boundaries in the server market, together delivering many industry-leading innovations," DD Dasgupta, vice president of product management for the Cisco cloud and compute business unit, said in a statement. "Today's announcement continues this tradition, and it could not come at a more crucial time. As customers' hybrid cloud journeys accelerate, the need for simple yet powerful solutions increases. Cisco and Intel are proud to deliver solutions that not only meet the demands of today's workloads but provide the foundations necessary to embrace new and emerging technologies."
Dell EMC Reiterates Its Ice Lake Plans
Although Intel is officially debuting its new chips today, Dell actually got a leg up on the competition by announcing its plans for its first Ice Lake-equipped servers back on March 17, right after the latest AMD Epyc chips were unveiled.
That's when Dell unveiled its PowerEdge R750 server, as well as its PowerEdge R750xa, which the company said is purpose-built to boost acceleration performance for machine learning training, inferencing and AI. The PowerEdge R750xa is a dual-socket, 2U server that supports up to four double-wide GPUs and six single-wide GPUs. Other Dell server models using the new Intel chips are the C6520, the MX750 and the R750, according to the company. The servers are expected to be available globally in May 2021. Several other models, including the Dell EMC PowerEdge R750xs, the R650xs, the R550, the R450 and the ruggedized PowerEdge XR11 and XR12, are expected to be available in the second quarter of 2021.
"Dell Technologies is focused on helping businesses benefit from emerging technologies and innovations that will help them reach their goals faster," Rajesh Pohani, the company's vice president of server product management, said in a statement. "Through our close collaboration with Intel, Dell EMC PowerEdge servers deliver better performance and security than ever before, putting customers on a path to autonomous infrastructure that will make IT simpler, more powerful and serve as the innovation engine for moving businesses forward."
HPE Unveils Eight Ice Lake Server Models
At HPE, Intel's latest Gen 3 chips are being integrated across eight server lines. These include the HPE ProLiant DL360 Gen10 Plus, HPE ProLiant DL380 Gen10 Plus and HPE ProLiant DL110 Gen10 Plus standard servers, as well as the HPE Synergy 480 Gen10 Plus server line, which is built for composable, software-defined infrastructure for hybrid cloud environments.
Also getting the new chips are the HPE Edgeline EL8000 Converged Edge systems and the HPE Edgeline EL8000T Converged Edge systems, which are ruggedized and built for extreme edge use cases.
In HPE's high performance computing (HPC) and AI server lines, the HPE Apollo 2000 Gen10 Plus systems, built for HPC workloads such as modeling, simulations and deep learning, as well as AI modeling and training, and the HPE Cray EX supercomputer lineup are also getting Ice Lake CPUs.
Four New Lenovo ThinkSystem Servers with Ice Lake CPUs
Lenovo debuted four new ThinkSystem server models, built for customer workloads in HPC, AI, modeling and simulation, cloud, VDI, advanced analytics and more, that incorporate many of the advancements of the latest Intel Ice Lake chips.
The four new server models are the ThinkSystem SR650 V2, the SR630 V2, the ST650 V2 and the SN550 V2, which can be configured in a myriad of ways to meet business demands:
The ThinkSystem SR650 V2 is a 2U, two-socket server aimed at customers from SMBs to large enterprises and managed cloud service providers, providing speed and expansion along with flexible storage and I/O for business-critical workloads. The systems use Intel Optane persistent memory 200 series and include support for faster PCIe Gen4 networking.
The ThinkSystem SR630 V2 is a 1U, two-socket server that includes optimized performance and density for hybrid data center workloads such as cloud, virtualization, analytics, computing and gaming.
The ThinkSystem ST650 V2 is a new two-socket mainstream tower server that uses a slimmer 4U chassis to make it easier and more flexible to deploy in remote offices or branch offices (ROBO), technology or retail locations.
The ThinkSystem SN550 V2 is part of the Lenovo Flex System family. Designed for enterprise performance and flexibility in a compact footprint, the SN550 V2 is a blade server node that is optimized for performance, efficiency and security for a wide range of business-critical workloads, including cloud, server virtualization, databases and VDI.
Later in 2021, Lenovo expects to bring Intel's latest Ice Lake processors to its edge computing server line with the introduction of a new, highly ruggedized edge server designed to handle the extreme performance and environmental conditions of telecommunications, manufacturing and smarter cities use cases. More details will be announced later in the year.
Intel Again Takes Charge: Analysts
Despite Intel's earlier delays getting these Ice Lake chips to market and to its partners, the company remains the dominant vendor in the world of CPUs, said analyst Enderle.
"Unlike AMD, which needs a sizeable competitive edge to displace Intel, all Intel needs is to be good enough to hold on to its base," he said. "Ice Lake is a forklift upgrade, meaning you can't just replace an older Intel processor with it; you'll likely need to replace the server. OEMs generally prefer a complete product replacement over a parts upgrade because they are far more lucrative."
For customers, that is less welcome because of the disruption of server replacements, as well as the higher related costs, said Enderle. As a result, this is unlikely to force a competitive replacement of newer AMD servers, he said. "Those companies preferring performance over all else may still prefer AMD Epyc over Intel's latest. But shops wanting to remain homogeneous with aging servers will appreciate the extra performance Ice Lake brought and were unlikely to embrace AMD anyway."
Where Intel really gains over AMD is in its stronger control over manufacturing, which should also help the company during raw materials shortages, further offsetting its disadvantages, said Enderle. "While I doubt Ice Lake is strong enough to reverse the erosion of Intel's base to AMD, it should slow it and give Intel time to bring out their next generation, which should be far more competitive."
Karl Freund, founder and principal HPC, AI and machine learning analyst with Cambrian AI Research, agrees.
"Intel has demonstrated the company's broad spectrum of technology prowess and leadership in this announcement, from CPUs to memory to encryption and networking," said Freund. "AMD still enjoys hard-earned leadership in many CPU metrics, including performance per core and per socket, but on most other features, such as AI performance, Intel clearly has the lead."
This article originally appeared on sister website EnterpriseAI.news.
View post:
Intel Partners Debut Latest Servers Based on the New Intel Gen 3 'Ice Lake' Xeons - HPCwire