Category Archives: Cloud Servers
AWS outperforms rivals in test of cloud capabilities – TechRepublic
The 2020 Cloud Report from Cockroach Labs finds Google Cloud Platform and Microsoft Azure are catching up to Amazon.
Cockroach Labs tested the speed and strength of the three major cloud providers and found that Amazon Web Services holds an edge over Google Cloud Platform and Microsoft Azure.
In the 2020 Cloud Report, Azure did best with the CPU performance test but AWS offered the best network and I/O capabilities. The testers found that GCP made significant improvements over last year's report and had the best showing in network throughput.
Cockroach Labs tested the three providers on a series of microbenchmarks and customer-like workloads. The goal was to understand the performance of each cloud provider overall as well as the strength of each company's machine types.
Cockroach Labs vetted the results with the major cloud providers for a review of the setup of the machines and benchmarks. Cockroach Labs posted the testing process and the results in this public repository. Paul Bardea, Charlotte Dillon, Nathan VanBenschoten, and Andy Woods of Cockroach Labs wrote the 2020 report.
The performance tests and testing tools included:
In the CPU tests, the best-performing Azure machines achieved significantly better results on the microbenchmark.
The testers found that the "top performing Azure machines use 16 cores with 1 thread per core while the other clouds use hyperthreading across all instances and use 8 cores with 2 threads per core to achieve 16 vCPUs." The authors caution that avoiding hyperthreading may have inflated the benchmark results and may not represent performance on other workloads. They also said that these results are highly correlated with the clock frequency of each instance type.
The reviewers changed the network test setup this year, generating load from multiple clients and measuring the results at a single destination server.
The throughput comparison tests found that GCP's network performed much better than AWS or Azure: "Not only do their top performing machines beat each network's top performing machines but so do their bottom performing machines."
The report's authors note that last year AWS outperformed GCP in network tests.
In the latency comparisons, GCP improved over last year's report but AWS won the race again with Azure far behind both competitors: "Even the best machine on Azure is more than five times worse than on AWS or GCP."
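The exact tooling and configuration behind these network numbers live in the public repository mentioned above. Purely as an illustration of the revised setup (several clients driving load at a single destination server), a throughput sweep could be scripted along the lines below; the hostnames are placeholders and the flags are not the report's actual harness.

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

# Hypothetical hosts; substitute real client and server addresses.
SERVER = "dest-server.internal"
CLIENTS = ["client-1", "client-2", "client-3", "client-4"]

def run_client(client: str) -> str:
    # Drive 60 seconds of TCP traffic from one client toward the single
    # destination server using 8 parallel streams. iperf3 must already be
    # running in server mode on SERVER (`iperf3 -s`).
    cmd = ["ssh", client, "iperf3", "-c", SERVER, "-P", "8", "-t", "60", "-J"]
    return subprocess.run(cmd, capture_output=True, text=True).stdout

with ThreadPoolExecutor(max_workers=len(CLIENTS)) as pool:
    results = list(pool.map(run_client, CLIENTS))

# Each JSON result contains per-stream and aggregate throughput; summing the
# aggregates across clients approximates the total load arriving at the server.
```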
Cloud providers offer two types of storage hardware: locally attached storage and network attached storage. Each provider has a different label for these two types:
AWS: instance store volumes (locally attached) and Elastic Block Store volumes (network attached)
Azure: temporary disks (locally attached) and managed disks (network attached)
GCP: local SSDs (locally attached) and persistent disks (network attached)
Cockroach also tested throughput and latency in this category. The testers used a "configuration of sysbench that simulates small writes with frequent syncs for both write and read performance" and measured read and write capabilities separately.
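The precise sysbench invocation is published in the repository; the sketch below only approximates the configuration described (small random writes with an fsync after every write), so the flag values are assumptions rather than the report's settings.

```python
import subprocess

def sysbench_fileio(mode: str, threads: int) -> str:
    # mode: "rndwr" for random writes, "rndrd" for random reads.
    # --file-fsync-freq=1 issues an fsync after every write, approximating the
    # "small writes with frequent syncs" behaviour described in the report.
    cmd = [
        "sysbench", "fileio",
        "--file-total-size=10G",
        f"--file-test-mode={mode}",
        "--file-block-size=4096",
        "--file-fsync-freq=1",
        f"--threads={threads}",
        "run",
    ]
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

# The test files must exist first: `sysbench fileio --file-total-size=10G prepare`.
# Read and write performance are then measured in separate runs, as the testers did.
write_out = sysbench_fileio("rndwr", threads=4)
read_out = sysbench_fileio("rndrd", threads=4)
```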
AWS won the write round with "superior write storage performance with the i3en machine type."
Azure had an advantage over the other two providers in handling higher thread counts: AWS and GCP hit a bottleneck at four threads, but Azure continued to increase write IOPS up to 16 threads. The report states that Azure write IOPS, after falling behind at smaller thread counts, excel for applications with more threads.
AWS's storage optimized machines live up to their billing as strong choices when optimizing for storage performance. Azure can't reliably outperform AWS on read throughput and the provider's read latency is extremely variable.
The report found that AWS wins the combined storage read comparison across all categories with its i3 machine type.
In the TPC-C benchmark, the testers measured the number of orders processed per minute and the total number of warehouses supported. They found that all clouds were within 5% of each other, although AWS came out on top.
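For context on how those two numbers relate: the TPC-C specification caps throughput at roughly 12.86 new-order transactions per minute (tpmC) per warehouse, so the supported warehouse count effectively bounds the achievable tpmC. A back-of-the-envelope check, with made-up numbers, looks like this:

```python
# TPC-C allows at most ~12.86 tpmC per warehouse, so efficiency is the
# fraction of that theoretical ceiling a system actually sustains.
MAX_TPMC_PER_WAREHOUSE = 12.86

def tpcc_efficiency(tpmc: float, warehouses: int) -> float:
    return tpmc / (warehouses * MAX_TPMC_PER_WAREHOUSE)

# Hypothetical example: 25,000 tpmC measured against 2,000 supported warehouses.
print(f"{tpcc_efficiency(25_000, 2_000):.1%}")   # ~97.2% of the ceiling
```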
The comparison found that "the highest performing machine types from each cloud are also the same machine types which performed the best on the CPU and Network Throughput tests."
Both AWS's c5n.4xlarge and GCP's c2-standard-16 won the CPU, Network Throughput, and Network Latency tests, while Azure's Standard_DS14_v2 won the CPU and Network Throughput tests.
However, the machine types that won the read and write storage tests (AWS's i3.4xlarge and i3en.6xlarge, GCP's n2-standard-16, and Azure's Standard_GS4) varied in their TPC-C performance.
The authors said this suggests that these tests are less influential in determining OLTP performance and that OLTP workloads like TPC-C are often limited by compute resources.
See the article here:
AWS outperforms rivals in test of cloud capabilities - TechRepublic
Hyperscale operators accounted for a third of all data center spending in first three quarters of 2019 – FierceTelecom
It's the hyperscalers' world and we're just living in it. By most any measure, the hyperscale service providers are ascendant in the industry across all levels.
New data from Synergy Research Group (SRG) found that hyperscale operators accounted for 33% of all spending on data center hardware and software in the first three quarters of 2019. That's an increase from 26% in the first three quarters of 2017 and from the 15% posted in the same timeframe in 2014.
Over the same time period, the total market has increased in size by more than 34%, primarily due to increased spending by the hyperscale providers.
By contrast, spending by service providers and enterprises has increased by a measly 6%, according to SRG.
The hyperscale spending is being driven by the continued robust growth in social networking and the strong demand for public cloud services. Enterprise spending has been under pressure due to the ongoing shift in workloads from private networks to the public cloud, according to SRG.
Hyperscale operators are the world's largest providers across various service sectors including infrastructure-as-a-service (IaaS), platform-as-a-service (PaaS), software-as-a-service, search engines, social networking and e-commerce.
SRG's data showed that total data center infrastructure equipment revenues, including both cloud and non-cloud, hardware and software, were $38 billion in the recent third quarter. Combined, servers, operating systems, storage, networking and virtualization software accounted for 96% of the data center infrastructure market, with the remainder coming from network security and management software.
According to recent research by Dell'Oro Group, the worldwide server and storage systems market declined 2% in 2019 due to macroeconomic factors and declining commodity costs.
Dell EMC is the leader in both server and storage revenue, according to both SRG and Dell'Oro Group, while Cisco was dominant in the networking sector.
Among the top vendors, SRG said Microsoft and VMware featured heavily in the vendor rankings due to their leadership status in server OS and virtualization applications, respectively.
Below those four, the other leading vendors in the market were HPE, Huawei, Inspur, and Lenovo. Among the major vendors, Inspur and Huawei chalked up the largest growth.
Original design manufacturers (ODMs) were also represented in the rankings because they supply hardware to the hyperscale providers, according to SRG.
"We are seeing very different scenarios play out in terms of data center spending by hyperscale operators and enterprises," said John Dinsdale, a chief analyst at Synergy Research Group, in a statement. "On the one hand, revenues at the hyperscale operators continue to grow strongly, driving increased demand for data centers and data center hardware. There is an ever-increasing number of hyperscale data centers, many of which continue to be expanded. Those huge data centers are crammed full of servers and other hardware, which are on a frequent refresh cycle.
"On the other hand, we see a continued decline in the volume of servers being bought by enterprises. The impact of those declines is balanced by steady increases in server average selling prices, as IT operations demand ever-more sophisticated server configurations, but overall spending by enterprises remains almost flat. These trends will continue into the future."
RELATED: Hyperscale data center count passes the 500 milestone in 3Q - report
In October, SRG said hyperscale data centers hit a new high-water mark in the third quarter.
The number of hyperscale data centers increased to 504 at the end of the third quarter, which tripled the total from the beginning of 2013. The total number of data centers increased by 55 over the last four quarters, which marked a bigger increase than was seen in the previous four quarters, according to SRG.
Over the past four quarters, new data centers from the likes of Google, Amazon Web Services, and Alibaba Cloud have been opened in 15 different countries, with the U.S., Hong Kong, Switzerland and China seeing the largest number of additions.
Originally posted here:
Hyperscale operators accounted for a third of all data center spending in first three quarters of 2019 - FierceTelecom
IoT news of the week for Dec. 13, 2019 – Stacey on IoT
Here's a security partnership to merge IT and OT: Pulse Secure has teamed up with Nozomi Networks to provide a comprehensive security monitoring service that can track badly behaving devices on IT (information technology) and OT (operational technology) networks. When malicious behavior is detected, Pulse Secure can quarantine the device or segment it from the network. Pulse Secure has several products designed to protect IT networks and Nozomi has the OT expertise. What's interesting about the deal is that security has become an ecosystem play as opposed to something a single company can offer. Prakash Mana, VP of product management with Pulse Secure, says that in the last few years those in the security world have realized that they have to work together to stop threats, which makes partnerships like the one Pulse Secure and Nozomi just signed more important. (Pulse Secure)
UL is building security labels for IoT devices: UL, the standards company that ensures our electrical devices aren't going to cause a fire or blow out our wiring, has been wading into the cybersecurity realm for a while. A few years back, it suggested some security measures for smart home devices, and now it's offering a bit more clarity (and probably tackling the complexity associated with IoT security) with a five-tier labeling scheme. The tiers start with bronze for the devices meeting the least stringent parameters and go all the way up to diamond. In an interview with CNET, a UL spokesperson said that most IoT devices out there today probably wouldn't meet the bronze standard. The interview comes on the heels of a June whitepaper published by UL that lays out the different categories. Of course, with all UL standards, you have to pay to see what they entail. (CNET)
Using IoT to keep an eye on infrastructure: It's no secret that our nation's roads, bridges, dams, and other physical infrastructure components need some TLC. While the government stalls on funding, municipalities are turning to IoT to figure out which of them need the most urgent attention. But this is also a global problem, and companies are trying to build 3-D models of hard infrastructure with drones, cameras, and AI to figure out exactly which bridges, sewers, railways and more are in the most danger. (ZDNet)
Ecosystems are the new oil: Y'all know that I am all about the need for ecosystems and ongoing relationships when it comes to building out comprehensive IoT products. It looks like others are coming around to this thinking, too. Not only did Dr. Irene Petrick from Intel, my podcast guest this week, spend 10 minutes talking about it with me, but the CEO of Packet also wrote an excellent blog post on the topic. He likens our current obsession with data extraction to fossil fuel extraction, pointing out that it can be a short-sighted goal and has the power to cause irreparable harm. Instead, he urges taking an ecosystem approach with efforts to create value from data, whereby companies and players invest in the ecosystem to create value for all of the participants. I wholeheartedly believe this is what all of our investments in digital transformations should be about, and predict that we're on the verge of a shift in how we think about building a business. Read it and be forewarned. (Packet)
Look, a UL-certified in-wall smart outlet: There are a lot of firsts with this product from ConnectSense. The company has made an in-wall outlet that can be connected to the internet and controlled via an app, and the outlet has passed a rigorous certification process from Underwriters Laboratories (UL). The UL certification is rare in the smart outlet category. The outlet comes in a 15- or 20-amp version that costs $79 and $99, respectively. It also works with Siri, Alexa, and Google Assistant. (The Verge)
LORIOT adds three new availability zones for its LoRaWAN network offering: LoRa is a long-range wireless networking technology that operates in unlicensed spectrum. Several companies are using it to build out connectivity services for the IoT. One of them is LORIOT, which makes software for organizations that want to build and operate their own LoRaWAN networks. LORIOT serves up its software from a worldwide network of 16 distributed public servers, three of which were just announced this month. Those servers are in Oregon, Singapore, and The Netherlands, and the new zones will mean lower latencies for folks providing networks in those areas. (LORIOT)
IoT jobs are moving to the cloud: Nutanix, a company that provides software to help build and manage flexible cloud-style computing on-premise, has issued its annual report on trends in cloud computing that includes a small tidbit focused on where applications are running. Surprisingly, while overall more workloads are moving away from the cloud, IoT workloads are moving toward the cloud. The report notes a 3.2% increase in IoT workloads moving to the cloud, whereas workloads happening in a traditional data center have fallen 1.3% and the number of jobs running in private clouds is down 8.1%. Equally surprising is how even as more IoT workloads move to the cloud, the report cites IoT and edge computing as reasons for the increase in computing jobs leaving the cloud and returning to local servers or traditional data centers. The rationale behind that shift is to reduce latency and also to ensure more security. (Nutanix)
Wearables are taking clinical trials by storm: I've discussed the potential impact that wearable devices can have on the way clinical trials are performed, in particular to make them more accessible and cheaper. And now here's an entire paper discussing the topic in depth, including pointing out that since 2013 the number of digital elements used in clinical trials has skyrocketed. No wonder Apple's so interested in health. (Harvard Business School)
GreenWaves launches a new IoT chip: I profiled GreenWaves, a startup building a RISC-V-based processor for the IoT, last year ahead of the launch of its first chip. Now it has released a second processor designed to process 10x larger neural networks while consuming 5x less power. It also added support for bank-grade encryption and several other features designed to improve how it handles machine learning at the edge. For a deep dive into the specs, click through on the release. (GreenWaves)
See the original post:
IoT news of the week for Dec. 13, 2019 - Stacey on IoT
Data Sovereignty: The imperative for action – DatacenterDynamics
In this frenetic landscape, in which huge amounts of data are harvested, stored and analyzed 24 hours a day, governments have moved with uncommon swiftness to provide statutory instruments that seek to regulate the flow of information.
This has included the assertion of data sovereignty, in which governments enforce their own privacy laws on data stored within their jurisdictions. It is a rebuff of sorts to the global economy, a reimposition of sovereign interest.
For businesses, this has created a raft of compliance obligations and strategic imperatives, as well as the need for informed decisions about where their data is stored, how that data is managed and protected when shared across borders, and how IT systems are set up.
The rapid take-up of cloud-based data storage exposes companies to issues of data sovereignty. With the rising popularity of cloud computing, data sovereignty issues have become a greater focus for companies concerned about threats to the integrity and security of their data.
Data sovereignty becomes an issue when a company's data servers are located outside the country in which the business is domiciled, and governments insist that this data is subject to the laws of the country in which it is collected or processed.
Businesses need to have a robust and comprehensive data security strategy and vigorous internal procedures to protect and secure data. The onus is on businesses to understand how their data is stored, who owns it and how it moves.
Businesses also need to:
Data gravity is a metaphor introduced into the IT lexicon by San Francisco software engineer Dave McCrory in 2010. The idea is that data and applications are attracted to each other, similar to the attraction between objects that is explained by the law of gravity. As data sets grow larger and larger they become more difficult to move. So, the data stays put and applications and processing power moves to where the data resides.
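A rough transfer-time calculation makes the point concrete: even over a dedicated 10 Gb/s link running flat out, a petabyte takes more than a week to move. The numbers below are illustrative and ignore protocol overhead.

```python
def transfer_days(data_bytes: float, link_gbps: float) -> float:
    # Convert link speed from gigabits per second to bytes per second,
    # then express the transfer time in days.
    bytes_per_second = link_gbps * 1e9 / 8
    return data_bytes / bytes_per_second / 86_400

petabyte = 1e15
print(f"{transfer_days(petabyte, 10):.1f} days")   # roughly 9.3 days at 10 Gb/s
```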
Barriers become even more challenging if you want to run analytics in the cloud on data stored in the enterprise, or vice versa. These new realities of ever-growing data sets suggest the need to design enterprise IT architectures in a manner that reflects data gravity. Alternatively, companies could consolidate their data in a cloud platform where the analytics capabilities reside (and which includes data sovereignty guarantees).
General Data Protection Regulation (GDPR)
The European Union's GDPR covers data protection for EU citizens. The GDPR also addresses the transfer of personal data outside the EU and European Economic Area (EEA). It supersedes the Data Protection Directive.
With the advent of the GDPR, organizations have reviewed their data sovereignty requirements and capabilities.
Brexit: in or out?
All countries in the EU benefit from what might be called the free movement of data. This currently applies to the UK in the same way that it does to the other 27 members.
However, when the UK leaves the EU, it may or may not still be included in this free market in data. Current EU data protection legislation states that special precautions need to be taken when personal data is transferred to countries outside the European Economic Area that do not provide EU-standard data protection.
If data sovereignty isn't included in any finalized Brexit deal, or if the "no deal" scenario eventuates, then UK businesses could be directly affected. Post-Brexit, the UK would no longer be covered by data agreements between the EU and other countries, such as the EU-US Privacy Shield Framework.
If the EU does not grant equivalency to the UK post-Brexit, the safest thing to do when it comes to data sovereignty issues is to make sure that data is migrated to UK-based data centers.
In the digital economy, organizations are information-rich. They have never possessed such extensive reserves of personal data nor have they been closer to their customers as a result. Digital consumers have benefited from customized product and service offerings, enhanced customer experiences and the ability to intimately engage with their favorite brands across multiple platforms.
But with the ability of organizations to collect unprecedented amounts of data across multiple technology platforms comes great responsibility, and challenges - not least compliance obligations and strategic imperatives, as well as the need for informed decisions about where their data is stored, how that data is managed and protected, and how vendors are chosen.
How well organizations deal with the risks posed by data sovereignty is the latest challenge in the digital transformation of the economy.
Read the original post:
Data Sovereignty: The imperative for action - DatacenterDynamics
Google Transfer Service makes moving your data to the cloud easier than ever – ITProPortal
After building services that allow businesses to physically move their data from local data centres and on-premise solutions to Google Cloud, the search engine and cloud giant has now built another complementary solution to make the process even smoother.
Transfer Service is a new part of Google Cloud designed to move the data digitally. It is first and foremost for businesses with billions of files and petabytes of data, and should bear the brunt of the work, validating the integrity of the data, as it moves it to the cloud.
The service will use as much bandwidth as it has at its disposal to make sure transfer times are as short as they can be. Any potential failures are handled automatically by the agent.
Google promises a relatively painless process. All the business needs to do is install the agent on the local server and select the directories that need moving. The rest is handled by the service itself. Obviously, the business can monitor and manage the transfer through the Google Cloud console.
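For teams that prefer automation over the console, a transfer job can in principle also be created through the Storage Transfer API. The Python sketch below is illustrative only: the project ID, directory and bucket are placeholders, and the request field names (particularly posixDataSource) are assumptions that should be checked against the current API reference.

```python
from googleapiclient import discovery

# Build a client for the Storage Transfer Service REST API.
# Assumes application default credentials are already configured.
client = discovery.build("storagetransfer", "v1")

job = {
    "description": "on-prem archive to Cloud Storage",
    "status": "ENABLED",
    "projectId": "my-project",                     # placeholder project
    "transferSpec": {
        # Directory on the local server where the transfer agent is installed.
        "posixDataSource": {"rootDirectory": "/mnt/archive"},
        # Destination bucket in Cloud Storage.
        "gcsDataSink": {"bucketName": "my-archive-bucket"},
    },
}

created = client.transferJobs().create(body=job).execute()
print(created["name"])  # e.g. transferJobs/...
```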
Even though the main benefits and key selling points seem to be archiving and disaster recovery, Google says it also wants to onboard organisations looking to shift workloads and use machine learning to analyse data.
"I see enterprises default to making their own custom solutions, which is a slippery slope as they can't anticipate the costs and long-term resourcing," Senior Analyst Scott Sinclair says in a Google blog post.
"With Transfer Service for on-premises data (beta), enterprises can optimize for TCO and reduce the friction that often comes with data transfers. This solution is a great fit for enterprises moving data for business-critical use cases like archive and disaster recovery, lift and shift, and analytics and machine learning."
Read the original here:
Google Transfer Service makes moving your data to the cloud easier than ever - ITProPortal
Why Middle East startups are choosing the cloud – Khaleej Times
The cloud has revolutionised the way businesses operate, especially startups. It's uncommon now to find a startup that isn't cloud native; most chose to adopt a cloud infrastructure from the beginning. Businesses across the Middle East such as Careem, Anghami, Boutiqaat, Mrsool and many more, have been able to grow and innovate quickly, seamlessly underpinned by their highly secure, agile and flexible cloud infrastructure. Startups approach cloud and, more importantly, security, with a different viewpoint when compared to larger established organisations who are still struggling to marry together new capabilities with legacy systems.
Smart investments: When starting a business, managing costs is critical, so investments that deliver the highest possible value and return on investment are a must. In the cloud, startups only pay for the services they use. This approach enables them to avoid the large upfront expense of owned infrastructure and to manage their IT at a lower cost than an on-premises environment. However, low cost does not mean low functionality. On the contrary, a startup operating on cloud infrastructure has access to the same services and capabilities as the largest enterprise or government customers. That includes entire teams dedicated to security that satisfy the security and compliance needs of the most risk-sensitive organisations. This allows startups to compete on an even playing field, innovate quickly and bring products to market, all with the knowledge that they have world-class security in place to protect against the most prevalent threats.
Scalability: Startups are ambitious, tenacious and hungry to expand, so building and scaling their business on the cloud is a natural choice. Simply by embracing the cloud, they can scale rapidly, giving them the ability to add or remove resources to meet evolving business demands as required. Instead of investing in data centres, servers and service level agreements, cloud technology allows startups to react faster and more flexibly, to experiment, innovate and better serve customers.
Speed and agility: The cloud provides an opportunity for startups to optimise existing IT systems and to increase operational efficiencies, while driving business agility and growth. This is achieved by allowing companies to significantly decrease the time it takes to provision and de-provision IT infrastructure. While a physical server could take weeks or months to procure and provision, a cloud server takes minutes.
Security: Startups must make security a top priority, regardless of size. A security breach can impact start-ups by hurting their reputation and customer base, and can have repercussions on the larger organisations these businesses do business with. Start-ups need to bake in security from the ground up to make sure they are not the weak link in a supply chain.
Security automation: Time is precious for startups, and automating security tasks enables them to be more secure by reducing human configuration errors and giving teams more time to work on other tasks critical to the business. Automation can also offer a smarter approach to detecting potential threats through its ability to monitor patterns of behaviour. Being able to identify changes in behaviour means potential attacks can be identified and dealt with immediately.
Vinod Krishnan is head of the Mena region at Amazon Web Services. Views expressed are his own and do not reflect the newspaper's policy.
See original here:
Why Middle East startups are choosing the cloud - Khaleej Times
The 10 Hottest New Business And Enterprise Servers Of 2019 – CRN: Technology news for channel partners and solution providers
The demand for enterprise computing over the past two years has been at an historic high, as businesses seek next-generation server performance to meet their digital transformation needs.
In 2019, the worldwide server market is projected to exceed $100 billion in revenue with each quarter generating on average more than $20 billion in server sales, according to IT market research firm IDC. Although market leaders like Dell Technologies and Hewlett Packard Enterprise witnessed a fall in server sales this year, the global server market is expected to pick back up in 2020.
Hardware server innovation is being led by the likes of Dell, HPE, Inspur and Lenovo, which are accelerating computing performance, storage-class memory and next-generation input/output (I/O) for workloads around artificial intelligence, cloud, virtual desktop infrastructure (VDI) and edge computing.
CRN breaks down the ten hottest business and enterprise servers that led the way in 2019.
Original post:
The 10 Hottest New Business And Enterprise Servers Of 2019 - CRN: Technology news for channel partners and solution providers
Pulseway Introduces the All New, Integrated Cloud Backup Solution – PR Web
Pulseway Cloud Backup
DUBLIN (PRWEB) December 12, 2019
Pulseway, a leading provider of mobile-first, cloud-first remote monitoring and management (RMM) software, is excited to close 2019 with the launch of a brand new cloud backup product that is built directly into both the web-based platform and mobile application, allowing users to back up their files regardless of their location. Pulseway Cloud Backup delivers a clean and easy-to-use interface built to follow a coherent structure aligned with the rest of the platform, allowing users to check backup statuses, calculate a backup health score, schedule backup jobs, and recover data for physical and virtual servers, workstations and documents. This new feature helps organizations ensure that their data is always protected.
"Pulseway Cloud Backup has been in the works for some time now and we are extremely proud of the end result that enables our customers to easily and securely backup their data," said Marius Mihalec, Founder and CEO of Pulseway. "I am thrilled to end 2019 with the launch of Pulseway Cloud Backup. Simple, efficient and flexible - that's what lies at the heart of our product vision and Pulseway Cloud Backup is no exception."
Pulseway Cloud Backup is fully integrated with the Pulseway ecosystem. The user can configure it from the WebApp under the cloud backup section and additionally can perform critical tasks from the mobile app. The native mobile application gives users the ability to backup already enrolled systems with a click of a button and allows time-critical operations, such as restoring deleted files and folders, to be performed on-the-go from anywhere, using a device closest to them.
"I am extremely excited about the new Pulseway Cloud Backup launch, it's built directly into the platform and enables me and my team to manage all of our backup needs from one unified dashboard," said Phil Law, Managing Director of Spicy Support. "The UI and the experience are very seamless and the flow correlates to the IT management functionality, which saves us a lot of time and eliminates the need for multiple portals and platforms."
About Pulseway: MMSOFT Design, Ltd. is the maker of Pulseway, mobile-first IT management software that helps busy IT administrators look after their IT infrastructure on the go. Pulseway is used by over 5,800 businesses worldwide including DELL, Louis Vuitton, Canon and Siemens.
The rest is here:
Pulseway Introduces the All New, Integrated Cloud Backup Solution - PR Web
Broadcom Launches Another Tomahawk Into The Datacenter – The Next Platform
If hyperscalers, cloud builders, HPC centers, enterprises, and both OEMs and ODMs like one thing, it is a steady drumbeat of technology enhancements to drive their datacenters forward. It is hard to reckon what is more important: the technology or the drumbeat, but it is painfully obvious when both fail and it is a thing of beauty to watch when both are humming along together.
Broadcom, a company that was founded in the early 1990s as a supplier of chips for cable modems and set-top boxes, started down the road to datacenter switching and routing in January 1999 with its $104 million acquisition of Maverick Networks, and followed that up in September 2002 with a $533 million acquisition of Altima Communications. Broadcom was already designing its own ASICs for datacenter networking gear, but these were for fairly simple Layer 2 Ethernet switches, and Maverick was working on higher-end, beefier ASICs that combined Layer 2 switching and Layer 3 routing functions on the same device. Altima made networking chips that ended up in networking devices sold mostly to SMBs, but gave Broadcom more networking customers and a broader engineering and patent portfolio to pull from.
Broadcom got serious about switching when blade servers took off in the datacenter in the early 2000s, when the hyperscalers were not even really megascale yet and when the public cloud was still just a bunch of talk about utility computing and grid computing. It unveiled its first mass-produced collection of chips for building 10 Gb/sec Ethernet switches (which did not even have codenames, apparently), made up of nine chips. In 2007, the Scorpion chip provided 24 ports running at 10 Gb/sec or 40 Gb/sec and 1 Tb/sec of aggregate bandwidth, and the merchant silicon business was off to the races as the hyperscalers were exploding and Amazon had just launched its public cloud a year earlier. The $178 million deal in December 2009 to take control of Dune Networks, which still carries on as the Jericho StrataDNX line of deep buffer switches, was pivotal for the company's merchant silicon aspirations and coincides with the rise of the hyperscalers and cloud builders and their particular needs on their network backbones.
The Trident family, which really ramped up merchant capabilities compared to the captive chips made by the networking incumbents such as Cisco Systems, Juniper Networks, Hewlett Packard (3Com), and Dell (Force10 Networks), came out in 2010, aimed mostly at enterprises that needed more features and bandwidth than the Jericho line could provide but that did not need the deep buffers. The Tomahawk line debuted in 2014, which stripped out features that hyperscalers and cloud builders did not need (such as protocols they had no intention of using) but which included more routing functions and larger tables, and lower power consumption made possible by 25 GHz lane speeds that Google and Microsoft drove the IEEE to accept when it really wasn't in the mood to do that initially.
Broadcom has been advancing all three families of silicon with a pretty steady cadence. The Jericho 2 chip, rated at 9.6 Tb/sec of aggregate bandwidth and driving 24 ports at 400 Gb/sec with deep buffers based on HBM stacked memory, was announced in March 2018 and started shipping in production in February of this year. With the Trident 4 ASIC unveiled in June of this year, Broadcom supported up to 12.8 Tb/sec of aggregate bandwidth, using PAM-4 encoding to drive 25 GHz lanes on the SERDES to an effective speed of 50 Gb/sec per lane, and was able to drive 128 ports at 100 Gb/sec or 32 ports at 400 Gb/sec. The Trident 4 chip weighed in at 21 billion transistors and is a monolithic device etched in 7 nanometer processes from fab partner Taiwan Semiconductor Manufacturing Corp.
Believe it or not, the Trident 4, which was the fattest chip in terms of transistor count we had ever heard of when it was unveiled this year, was not up against the reticle limit of chip making gear. But we suspect that the Tomahawk 4 announced this week is pushing up against the reticle limits, with over 31 billion transistors etched using the same 7 nanometer processes. The Trident 4 and Tomahawk 3 chip from January 2018 were pin compatible, since they had an equal number of SERDES. With the doubling up of SERDES with the Tomahawk 4, there was no way to keep Tomahawk 4 pin compatible with these two prior chips. But there is hope for Trident 5...
The Tomahawk line has come a long way in its five years, as you can see:
The original Tomahawk 1 chip from 2014 was etched using 28 nanometer processes from TSMC and had a mere 7 billion transistors supporting its 128 long-reach SERDES running at 25 GHz using non-return to zero (NRZ) encoding, which has two signal levels to encode one bit per signal. The Tomahawk 1 delivered 3.2 Tb/sec of aggregate bandwidth, which was top of the line five years ago. With the PAM-4 encoding added with recent switch ASICs, you can have four signal levels per lane and encode two bits of data, driving up the effective bandwidth without increasing the clock speed above 25 GHz. This is how the Tomahawk 3, Trident 4, and Tomahawk 4 have been growing their bandwidth. The SERDES count on the die has also been going up as processes have shrunk, with the Tomahawk 4 doubling up to 512 of the Blackhawk SERDES, of which the Tomahawk 3 had 256 implemented in 16 nanometers, thus delivering a doubling of aggregate bandwidth across the Tomahawk 4 ASIC to 25.6 Tb/sec.
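For readers checking the math, the aggregate-bandwidth figures quoted above are simply lane count multiplied by the effective per-lane speed (symbol rate times bits encoded per symbol). A quick calculation confirms the progression:

```python
def aggregate_tbps(serdes_lanes: int, baud_ghz: float, bits_per_symbol: int) -> float:
    # Effective lane speed = symbol rate * bits encoded per symbol (NRZ=1, PAM-4=2).
    return serdes_lanes * baud_ghz * bits_per_symbol / 1_000

print(aggregate_tbps(128, 25, 1))   # Tomahawk 1, NRZ:    3.2 Tb/s
print(aggregate_tbps(256, 25, 2))   # Tomahawk 3, PAM-4: 12.8 Tb/s
print(aggregate_tbps(512, 25, 2))   # Tomahawk 4, PAM-4: 25.6 Tb/s
```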
The Tomahawk 4 is a monolithic chip, like prior generations of Broadcom StrataXGS and StrataDNX chips, and Broadcom seems intent on staying monolithic as long as it can without resorting to the complexity of chiplets. Even if smaller chips tend to increase yields, adding two, four, or eight chiplets to a package creates assembly and yield issues of their own. Some CPU suppliers (like AMD and IBM) have gone with chiplets, but others are staying monolithic (Intel, Ampere, Marvell, HiSilicon, Fujitsu, and IBM with some Power9 chips), and there are reasons for both.
When it comes to networking, says Peter Del Vecchio, product line manager for the Tomahawk and Trident lines at Broadcom, monolithic is the way to go.
"We have seen some of our competition move to multi-die implementations just to get to 12.8 Tb/sec," Del Vecchio tells The Next Platform, "and the obvious one there is the Tofino 2 chip from Intel (formerly Barefoot Networks). Just for the benefits of power and performance, if you can keep all of the traces on a single piece of silicon, that definitely provides benefits. And that is why we wanted to stay with a monolithic design for this device."
Having a fatter device means eliminating hops on the network, too, and also eliminating the cost of those chips and the networking gear that has to wrap around them. If you wanted to build a switch with 25.6 Tb/sec of aggregate networking bandwidth using the prior generation of Tomahawk 3 ASICs, you would need six such devices, as shown below:
It takes six devices to connect 256 ports using the current Tomahawk 3 chip, assuming that half of the bandwidth on each ASIC (6.4 Tb/sec) is used for server downlinks running at 100 Gb/sec (64 ports) and half the bandwidth is aggregated and used as uplinks to the next level up in the modular switch (we presume it would be 16 ports running at 400 Gb/sec). It takes four of those first-level Tomahawk 3 ASICs to create 256 100 Gb/sec downlinks, plus two more to cross-connect the four chips together in a non-blocking fashion with two paths across the pair of additional Tomahawk 3 ASICs. This architecture adds two more hops to three-quarters of the port-to-port paths (some of them stay within a single switch ASIC), so the latency is not always higher than with a single chip, but the odds favor it. If you cut down on the number of second-level ASICs, then you might get congestion, which would increase latency.
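The arithmetic behind that six-device count, using the article's assumption that each Tomahawk 3 splits its 12.8 Tb/sec evenly between server downlinks and uplinks, can be sanity-checked in a few lines:

```python
ASIC_TBPS = 12.8          # Tomahawk 3 aggregate bandwidth
DOWNLINK_GBPS = 100       # server-facing port speed
TARGET_DOWNLINKS = 256

# Half of each leaf ASIC's bandwidth faces the servers in a non-blocking design.
downlinks_per_leaf = (ASIC_TBPS / 2) * 1_000 // DOWNLINK_GBPS   # 64 ports per leaf
leaf_chips = TARGET_DOWNLINKS / downlinks_per_leaf              # 4 leaf chips

# The four leaves together send 4 * 6.4 = 25.6 Tb/s of uplink traffic,
# which needs two more 12.8 Tb/s ASICs as the second (spine) stage.
spine_chips = (leaf_chips * ASIC_TBPS / 2) / ASIC_TBPS           # 2 spine chips

print(leaf_chips + spine_chips)                                  # 6 devices in total
```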
Now, shift to a single Tomahawk 4 ASIC, and you can have 256 100 Gb/sec ports all hanging off the same device, which in the case of the prototype platform built by Broadcom will be a 2U form factor switch with 64 ports running at 400 Gb/sec and four-way cable splitters breaking each port down into 100 Gb/sec ports. Every port is a single hop away from any other port across those 256 ports, and according to Del Vecchio, the cost at the switch level will go down by 75 percent and the power consumption will also drop by 75 percent.
Broadcom is not providing specific pricing for its chips, and it is an incorrect assumption that Broadcom will charge the same price for the Tomahawk 4 as it did for the Tomahawk 3. On the contrary, with these improvements, we expect that Broadcom will be able to charge more for the ASIC (but far less than 2X of course) probably on the order of 25 percent to 30 percent more for that 2X increase in throughput and reduction in latency.
Speaking of latency, here is another chart that Del Vecchio shared that put some numbers on the latency decrease using servers chatting with an external NVM-Express flash cluster:
In this case, the flash cluster gets twice as many endpoints running at 100 Gb/sec and the latency between the servers and the disaggregated NVM-Express flash servers drops by 60 percent. (Exact latency numbers were not given by Broadcom, and neither is price or watts per port on any of its ASICs or die size or watts for any of the ASICs.)
Let's think about this for a second. The CPU business has been lucky to double the number of cores every two to three years, and in many cases has not really done this. (Intel's Cascade Lake-AP doubled-up processors sort of count, but not really given the wattages.) So that means you can get a little less than 2X the performance in the same two-socket machine every two to three years. There will be exceptions, when a lot of vendors can double up within one year, but this will not hold for the long term.
What Broadcom is doing here is cutting the number of chips it needs to provide a port at a given speed by a factor of 6X every two years. Not 2X every two to three years, but 6X every two years like clockwork. Even if every successive chip gives you 30 percent more money, you need to sell roughly 4.6X more chips to get the same revenue stream, which means that your customer base has to be more than doubling their port counts at a given speed, or doubling up their port speed, or a mix of both, every year for the money to work out for a chip maker like Broadcom. This is a much rougher business in this regard than the CPU business for servers. But clearly, the demand for bandwidth is not abating, and despite intense competition, Broadcom still has the dominant share of the bandwidth sold into datacenter networks, as it has had for the better part of a decade.
That 25.6 Tb/sec of aggregate bandwidth on the Tomahawk 4 chip can be carved up in a number of ways, including 64 ports at 400 Gb/sec, 128 ports at 200 Gb/sec, and 256 ports at 100 Gb/sec. It takes cable splitters to chop it down by a factor of two or four, and you might be thinking: Why stop there? Why not 512 ports running at 50 Gb/sec or even 1,024 ports running at 25 Gb/sec and really push the radix to the limits and also create a massive muscle of network cables coming off each port? The answer is you can't, because to keep the chip size manageable, Broadcom had to limit the queues and other features to a maximum of 256 ports. The cutting down of physical ports with splitters is not free. So, for instance, supporting 100 Gb/sec ports requires more queues and buffering. Which is why you don't see ports split all the way down to 10 Gb/sec natively on the chip, although you can get a 100 Gb/sec port to negotiate down to 40 Gb/sec or 10 Gb/sec and throw the extra bandwidth out the window.
In a certain sense, a modern CPU, whether it is monolithic or comprised of chiplets, is really a baby NUMA server crammed down into a socket, and it takes fewer and fewer servers in a distributed computing cluster to reach a certain number of cores, the unit of compute performance more or less. Similarly, with every new generation of switch ASICs, vendors like Broadcom are able to eliminate layers of the network by constantly doubling up the number of SERDES on the device, thus allowing whole layers of the network to be eliminated, assuming of course you want a non-blocking network as hyperscalers and cloud builders do. And as we have shown above, the increasing bandwidth and radix of each generation of device allows each network cluster (for that is what a modular switch and a full-blown Clos network spanning a hyperscale datacenter is, after all) to have fewer and fewer nodes for a given port count.
The architecture of the Tomahawk 4 chip is very similar to that of the Tomahawk 3, and while you might not be aware of it, there were 1 GHz Arm processor cores on both switch ASICs to run some firmware and do telemetry processing, plus some other Arm cores on the SERDES to run their own firmware. (A switch chip is a hybrid computing device these days, too, just like an FPGA or DSP generally is.) The Trident 4 and Tomahawk 4 ASICs have four of the Arm cores for running the telemetry and instrumentation, twice that of their respective predecessors.
The buffer size on the Tomahawk 3 was 64 MB, and we presume that it is at least double this on the Tomahawk 4, but Broadcom is not saying.
The thing to remember about the hyperscalers is that their packet processing pipelines are not that complicated, but their need to have a lot of telemetry and instrumentation from their networks is vital because with 100,000 devices on a datacenter-scale network, understanding and then shaping traffic is the key to consistent performance.
So the Broadcom networking software stack includes in-band telemetry, real-time SERDES link quality meters, a way to see into all packet drops, flow and queue tracking, and detection of microbursts and elephant flows.
Perhaps equally importantly for hyperscaler and cloud builder customers, Broadcom is documenting and opening up each and every API used with the Tomahawk, Trident, and Jericho families of chips. Among other things, this will help these companies, which by and large create their own network operating systems, better support them, but it will also allow for open NOS initiatives (such as ArcOS from Arrcus) to more easily port their code and support it on Broadcom chips. The OpenNSA API documentation is a superset of the OpenNSL API library that maps to the Broadcom SDK, which was previously available. It is the whole shebang, as they say.
The Tomahawk 4 chip is sampling now and production will be ramping fast, with Del Vecchio expecting a ramp as fast as or faster than those of the Tomahawk 3 and the Trident 4. So expect Tomahawk 4 devices next summer.
Read more here:
Broadcom Launches Another Tomahawk Into The Datacenter - The Next Platform
Edge predictions for 2020: From SD-WAN and cloud interconnection to security – Small Business
By 2023, half of enterprise-generated data will be created and processed outside the data centre or cloud
Few areas of the enterprise face as much churn as the edge of the network. Experts say a variety of challenges drive this change: from increased SD-WAN access demand to cloud-interconnected resources and IoT, the traditional perimeter of the enterprise is shifting radically and will continue to do so throughout 2020.
One indicator: Gartner research that says by 2023, more than 50% of enterprise-generated data will be created and processed outside the data centre or cloud, up from less than 10% in 2019.
Hand-in-hand with that change is a shift in what technologies are supported at the edge of the network, and that means information processing, content collection and delivery are placed closer to the sources, repositories and consumers of this information. Edge networking tries to keep the traffic and processing local to reduce latency, exploit the capabilities of the edge and enable greater autonomy at the edge, Gartner says.
"The scope of enterprise WAN networks is broadening. No longer is it only from a branch edge to a data-centre edge. Now the boundaries have shifted across the LAN from individual clients and devices on the one end and across the WAN to individual containers in data centres or clouds on the other," said Sanjay Uppal, vice president and general manager of VMware's VeloCloud Business Unit. "This broadening of the WAN scope is a direct consequence of the democratisation of data generation and the need to secure that data. So, we end up with edges at clients, servers, devices, branches, private data centres, public data centres, telco POP, RAN and the list goes on. Additionally, with IoT and mobility taking hold at the enterprise, the edge is moving out from the traditional branch to the individual clients and devices."
The evolution of business applications from monolithic constructs to flexible containerised workloads necessitates the evolution of the edge itself to move closer to the application data, Uppal said. This, in turn, requires the enterprise network to adjust and meet and exceed the requirements of the modern enterprise.
Such changes will ultimately make defining what constitutes the edge of the network more difficult.
"With increased adoption of cloud-delivered services, unmanaged mobile and IoT devices, and integration of networks outside the enterprise (particularly partners), the edge is more difficult to define. Each of these paradigms extends the boundaries of today's organisations," said Martin Kuppinger, principal analyst with KuppingerCole Analysts AG. "On the other hand, there is a common perception that there is no such perimeter anymore, with statements such as 'the device is the perimeter' or 'identity is the new perimeter.' To some extent, all of this is true and wrong. There still might be perimeters in defined micro-segments. But there is not that one, large perimeter anymore."
The enterprise is not the only arena that will see continued change in 2020; there are big changes afoot on the WAN as well.
Analysts from IDC wrote earlier this year that traditional enterprise WANs are increasingly not meeting the needs of digital businesses, especially as it relates to supporting SaaS apps and multi- and hybrid-cloud usage. Enterprises are interested in easier management of multiple connection types across their WAN to improve application performance and end-user experience, hence the growth of SD-WAN technologies.
"The market for branch-office WAN-edge functionality continues to shift from dedicated routing, security and WAN optimisation appliances to feature-rich software-defined WAN and, to a lesser extent, [universal customer-premises equipment] platforms," Gartner wrote. "SD-WAN is replacing routing and adding application-aware path selection among multiple links, centralised orchestration and native security, as well as other functions. Consequently, it includes incumbent and emerging vendors from multiple markets (namely routing, security, WAN optimisation and SD-WAN), each bringing its own differentiators and limitations."
One of the biggest changes for 2020 could come around the SD-WAN. One of the drivers stems from the relationships that networking vendors such as Cisco, VMware, Juniper, Arista and others have with the likes of Amazon Web Services, Microsoft Azure, Google Anthos and IBM RedHat.
An indicator of those changes came this month when AWS announced a slew of services for its cloud offering that included new integration technologies such as AWS Transit Gateway, which lets customers connect their Amazon Virtual Private Clouds (VPCs) and their on-premises networks to a single gateway. Aruba, Aviatrix, Cisco, Citrix Systems, Silver Peak and Versa have already announced support for the technology, which promises to simplify and enhance the performance of SD-WAN integration with AWS cloud resources.
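For readers unfamiliar with the AWS side of that integration: creating a Transit Gateway and attaching a VPC to it is a couple of API calls. The boto3 sketch below is illustrative only; the region, VPC and subnet IDs are placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create the Transit Gateway that SD-WAN appliances and VPCs will attach to.
tgw = ec2.create_transit_gateway(Description="sd-wan hub")
tgw_id = tgw["TransitGateway"]["TransitGatewayId"]

# Attach an existing VPC to the gateway (IDs below are placeholders).
ec2.create_transit_gateway_vpc_attachment(
    TransitGatewayId=tgw_id,
    VpcId="vpc-0123456789abcdef0",
    SubnetIds=["subnet-0123456789abcdef0"],
)
```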
The ecosystem around this type of cloud interconnection is likely one of the hottest areas of growth for 2020, experts say.
"SD-WAN is critical for businesses adopting cloud services, acting as a connective tissue between the campus, branch, IoT, data centre and cloud," said Sachin Gupta, senior vice president of product management with Cisco Enterprise Networking, in a recent Network World article. "It brings all the network domains together and delivers the outcomes business requires."
It must align user and device policies, and provide assurance to meet application service-level agreements. It must deliver robust security to every device and every cloud that the enterprise's data touches. The AWS Transit Gateway will let IT teams implement consistent network and data security rules, he said.
All of these edge transformations will most certainly bring security challenges. Kuppinger noted a few including:
Each of these situations is beyond the traditional edge and can increase your enterprise attack surface and risk, Kuppinger said. Once identified, enterprises must figure out how to secure the edges and get more complete visibility to all risks and mitigations. New tools may be needed. Some organisations may choose to engage more managed security services.
The perimeter needs to be everywhere, hence the advent of the zero-trust architecture, VMware's Uppal said. This requires an end-to-end view where posture is checked at the edge, and based on that assessment network traffic is segmented, both to reduce the attack surface and the blast radius; i.e., first reduce the likelihood that something is going to go wrong, but if it does, then minimise the impact, Uppal said.
As traffic traverses the network, security services (letting through the good while blocking the bad) are inserted based on policy. Here again, the network of cloud services that dynamically sequences security based on business policy is critical, Uppal said.
Going forward, enterprise organisations might need to focus less on the network itself. Protect the services, protect the communication between devices and services, protect the devices and the identities of the users accessing these devices. This is very much what the zero trust paradigm has in mind; notably, this is not primarily zero-trust networks, but zero trust at all levels, Kuppinger said.
The most important learning is: protecting just the network at its edge is not sufficient anymore. If there is a defined network, either physical (such as in OT) or virtual (such as in many data centers), this adds to protection, Kuppinger said.
The mixture of cloud and security services at the edge will lead to another trend in 2020, one that Gartner calls secure access service edge (SASE), which is basically the melding of network and security-as-a-service capabilities into a cloud-delivered package. By 2024, at least 40% of enterprises will have explicit strategies to adopt SASE, up from less than 1% at year-end 2018, Gartner says.
SASE is in the early stages of development, Gartner says. Its evolution and demand are being driven by the needs of digital business transformation due to the adoption of SaaS and other cloud-based services accessed by increasingly distributed and mobile workforces, and to the adoption of edge computing.
Early manifestations of SASE are in the form of SD-WAN vendors adding network security capabilities and cloud-based security vendors offering secure web gateways, zero-trust network access and cloud-access security broker services, Gartner says.
Regardless of what it is called, it is clear the melding of cloud applications, security and new edge WAN services will be increasing in 2020.
We are seeing the rise of microservices in application development, allowing applications to be built based upon a collection of discrete technology elements. Beyond new application architectures, there are demands for new applications to support IoT initiatives and to push compute closer to the user for lower latency and better application performance, VMware's Uppal said. With the maturation of Kubernetes, what is needed is the next set of application development and deployment tools that work cooperatively with the underlying infrastructure, compute, network and storage to serve the needs of that distributed application.
IDG News Service
Read the original:
Edge predictions for 2020: From SD-WAN and cloud interconnection to security - Small Business