
Insights for L&D: The top 10 technical skills of 2020 – HR Dive

As you consider where to invest your learning in the decade ahead or which tools to adopt within your organization, look to these 10 hot technical skills. These frameworks, languages, and cloud computing platforms represent skills that have grown in popularity based on data from Udemy's 50+ million global learners over the last three years, from 2016 to 2019.

For more L&D and workforce training insights, download the 2020 Workplace Learning Trends Report: The Skills of the Future.

The #1 technical skill for 2020 is TensorFlow, a deep learning Python library developed by Google. The library is used to uncover insights and predictions from large datasets. Since its initial release, TensorFlow has amassed an ecosystem of tools and the ability to use the library in languages other than Python, including JavaScript and Swift.
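For readers who have never touched the library, here is a minimal sketch of what TensorFlow code looks like in Python. It assumes TensorFlow 2.x is installed and uses synthetic data; the model and numbers are purely illustrative.

```python
# Minimal TensorFlow sketch: fit a one-neuron model to noisy samples of y = 3x + 2.
import numpy as np
import tensorflow as tf

x = np.random.rand(1000, 1).astype("float32")
y = 3.0 * x + 2.0 + 0.05 * np.random.randn(1000, 1).astype("float32")

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(1,))])
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.1), loss="mse")
model.fit(x, y, epochs=20, verbose=0)   # learn the weight and bias from the data

print(model.predict(np.array([[1.0]], dtype="float32")))  # should be roughly 5.0
```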

According to a Gartner prediction, by 2021, 15% of all customer service interactions will be handled completely by chatbots. Chatbot technology is software powered by AI to mimic human conversation. A boon for scaling customer service teams and offering round-the-clock support, chatbots recreate the way a human interacts with customers to handle administrative tasks, sales queries, or frequently asked questions (FAQs).

While Amazon's AWS cloud computing platform remains the market leader among cloud providers, Microsoft's Azure cloud services are becoming a popular option for enterprises requiring strong security implementation and alignment with the suite of Microsoft services already in use by the organization.

Through a branch of artificial intelligence called computer vision, computing systems learn to identify and analyze static images and videos. This technology is now applied to help self-driving cars identify obstacles on the road, accurately diagnose medical conditions, and even identify old photos of your parents. Democratizing the use of computer vision for developers of all experience levels is an open-source library called OpenCV.
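As a rough illustration of how OpenCV lowers the barrier to entry, the Python sketch below runs a classic edge-detection step; the file name is a placeholder and the opencv-python package is assumed to be installed.

```python
import cv2

# Load an image in grayscale; "photo.jpg" is a placeholder path.
image = cv2.imread("photo.jpg", cv2.IMREAD_GRAYSCALE)
if image is None:
    raise FileNotFoundError("photo.jpg not found")

# The Canny detector highlights edges, a common building block for object detection.
edges = cv2.Canny(image, threshold1=100, threshold2=200)
cv2.imwrite("edges.jpg", edges)
```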

Part of the deep learning branch of artificial intelligence, neural networks are algorithms built to function like the neurological pathways of the brain. By mimicking the complex way humans learn, these algorithms can recognize patterns within complex datasets and generate independent insights on the data.
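To make the idea concrete, here is a minimal sketch, in plain NumPy, of a tiny two-layer network learning the XOR pattern by gradient descent. It is illustrative only; real systems rely on frameworks such as TensorFlow.

```python
# Illustrative two-layer neural network (plain NumPy) learning the XOR pattern.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)           # XOR targets

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)             # hidden layer of 8 "neurons"
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)             # output neuron
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(5000):
    h = sigmoid(X @ W1 + b1)                              # forward pass: hidden activations
    out = sigmoid(h @ W2 + b2)                            # forward pass: prediction
    d_out = (out - y) * out * (1 - out)                   # error signal at the output
    d_h = (d_out @ W2.T) * h * (1 - h)                    # error signal at the hidden layer
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)  # gradient-descent updates
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

print(out.round(2))  # should approach [[0], [1], [1], [0]]
```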

Year after year, Linux, an open-source operating system based on the Unix operating system, takes top honors in the most commonly used platform and most loved platform categories in Stack Overflow's developer survey. It's estimated that 16,000 developers have contributed to the Linux kernel since 2005, and the OS can be found on 96% of the world's top one million web servers. Linux doesn't show any signs of losing popularity in the coming decade, so companies want to ensure their teams can properly administer and maintain their internal Linux instances.

Ethereum is an open-source, decentralized software platform based on blockchain technology. While Ethereum does have a cryptocurrency, Ether, developers are creating applications that run on Ethereum because they know it will run as programmed without downtime, censorship, fraud, or other third-party interference.

From servers to IoT sensors to syslogs, a staggering amount of the data companies process is machine-generated rather than human-generated. Monitoring, analyzing, and reporting on this high volume of data is a challenge for even the most skilled IT teams, which is where Splunk becomes an important tool.

Quantum geographic information system (QGIS) is a term many professionals aren't familiar with, but it's a skill with increasing use in the age of mobile-first and Wi-Fi-equipped devices. QGIS is a type of GIS software that stores geospatial data: data with a geographic component, such as map coordinates. Spatial data can be found everywhere from Google Earth to satellite data to your GPS-connected fitness tracker.
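QGIS itself is desktop GIS software (with its own Python API), but the flavor of geospatial data is easy to illustrate with plain Python: a pair of latitude/longitude points and the haversine formula for great-circle distance. The coordinates below are just an example.

```python
# Great-circle distance between two latitude/longitude points (haversine formula).
import math

def haversine_km(lat1, lon1, lat2, lon2):
    r = 6371.0                                   # mean Earth radius in kilometres
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# e.g. two GPS fixes, one in London and one in Paris: roughly 340 km apart.
print(round(haversine_km(51.5074, -0.1278, 48.8566, 2.3522), 1))
```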

In the top five of developers' most-loved languages is Kotlin, a programming language used for Android development. Because it's a language for the Java Virtual Machine, Kotlin can be used anywhere Java is used. Google has even made Kotlin an official language for Android app development.

Read more:
Insights for L&D: The top 10 technical skills of 2020 - HR Dive


Why tales of cloud repatriation are largely wishful thinking – ITProPortal

What's next in the development of cloud? There are a growing number of success stories, while public cloud providers are significantly increasing capacity and adding new services. However, others have begun talking about cloud repatriation, i.e. organisations moving some services back to on-premise infrastructure. There is also the growth of the IoT to consider - will the need for devices to process information in real time make cloud's latency an issue?

The answers will not be found by talking to vendors, each of whom will be keen to push their particular agenda as the solution to every organisation's needs, however diverse those needs may be. I suspect that cloud repatriation is wishful thinking by vendors experiencing a lack of server and storage sales. It suits specific use cases such as Dropbox or Netflix, who built their own data centres after exponential increases in sales made their public cloud costs soar. It might also be appropriate for very consistent and stable workloads that do not need scaling, although those could use reserved instances to ensure cloud costs remain under control. And it might be the result of a failed attempt to move services to cloud, for example after getting the architecture wrong.

In my experience of working with customers across a range of sectors and running a wide variety of applications, cloud remains the ultimate goal, and moving applications to cloud makes complete business sense. However, this will not all be SaaS, and it is definitely not one size fits all. Each organisation needs to find its own solution, driven by the applications used, the volumes of data handled and the organisation's future business strategy.

For most, the future will be hybrid cloud, making multi-cloud management a key requirement. The continued development of containerisation and general agreement on Kubernetes for orchestration across the major cloud providers will play a key role by making applications truly portable, thus encouraging migration between cloud providers. Edge computing is necessary for IoT and intelligent devices, but data analysis from these devices benefits from the scale and flexibility of centralised cloud processing and storage.

To get cloud right, and avoid any potential need for repatriation in the future, organisations need to understand three things. First, cloud is not a single concept, but means different things to different people. It is pretty good for most applications, but it is not the best answer to everything, and other options are still valid. Cloud applications are designed around a number of assumptions which may not apply to some organisations and how they work.

Second, a move to cloud is not just an IT transformation: it requires a refocusing of the business, plus new skills and new ways of working, in order to succeed.

Third, cloud almost certainly will not save an organisation money unless that organisation fundamentally reengineers how it works. While cloud takes away in-house IT operations, it creates the need for new skills, such as billing management and managing one or more cloud providers to ensure they deliver the agreed service.

When considering cloud, SaaS should be the first logical choice. A good SaaS service will provide everything you need, be easy to use, and cost a similar amount to or less than in-house provision. Of course, each vendor wants you to take their SaaS, which is fine if you can use it seamlessly. You need to consider where your data is going to be held (will it be secure?) and how the SaaS solution will work with other critical line-of-business applications.

For example, Microsoft Office 365 is in effect SaaS, and it makes sense for the majority of organisations that are using Exchange, SharePoint and MS Office. However, there are ramifications for other systems. Office applications in the cloud are patched and upgraded automatically. This means that if an organisation has another application which only integrates with an older version of Office, perhaps because the vendor has not yet developed an upgrade or no longer provides support, there will come a time when it will no longer talk to Office applications. This could be a significant problem without advance preparation.

If SaaS is not available, the next best option is PaaS, whereby you install an application on top of a managed database service or development environment. This requires the application to be using an up-to-date, widely supported database such as Oracle, SQL Server or MySQL. PaaS services providing 15-year-old Informix or ProgressDB environments are rather harder to find.

A third option is hosting on IaaS, which means moving the application, as-is or with minor enhancements, to operate from a cloud provider's infrastructure. The only responsibilities retained in-house are licences and support from the application provider. Alternatively it could be run by a managed service provider, who will handle all the complexity.

A final option is to configure IT as a private cloud and run it in-house, ready to move to SaaS when a suitable solution becomes available. Some applications cannot be moved in the short or medium term because particular dependencies would require too much work to eliminate. When the choice is between paying, say, several thousand pounds annually to host an application as it is, or ten times that cost to redevelop it using an open API for cloud PaaS, the balance of cost versus benefit is clear.

Once an organisation has moved services to the cloud and retired its in-house infrastructure, it has to accept what the chosen provider offers unless it migrates services again. There is a higher degree of lock-in to a cloud provider than to an IT vendor, and moving services between providers is not yet straightforward, as their services are not directly comparable. Vendors will naturally focus on their short-term income, rather than how applications will work together in the long term.

Fortunately, we are seeing the growing use of containers, which make it easier to move applications between cloud providers, although this won't work for legacy applications, which would need to be fundamentally redeveloped. Cobol will not go into containers easily, whereas Python will.

As all the major cloud providers now support Kubernetes container management, organisations who have (re)developed their applications to containers will be able to take advantage of cloud broking between providers. This could either be an in-house role, if an organisation has the capability, or be provided by a third party.

Although I'm a strong believer in the benefits of cloud for many applications, I expect to see a move back to decentralisation for the growing number of intelligent devices, such as robotics in manufacturing. These devices are in effect small-scale datacentres in their own right and need to process information in real time, so for them the latency of cloud is becoming a major issue and the need to have intelligence at the edge will increase.

This is simply on-premise computing re-imagined: the next step in the regular waves of centralisation and decentralisation which have characterised IT over the last 40 years. The growth of edge computing certainly doesn't mean that cloud is dying. Each organisation will need to consider its own use case and choose the most appropriate solution, depending on how much real-time processing it needs. All will benefit from the scale and flexibility of centralised cloud processing and storage, from construction companies putting together consortia to deliver specific projects such as Crossrail and HS2, who require capacity for a finite amount of time, to public sector organisations who can hand routine applications to a cloud provider in order to focus on their core activities.

Even organisations working at the cutting edge of robotics and AI will benefit from cloud's scale and capacity. However, their smart devices will need to rely on inbuilt intelligence, supported by cloud services.

Richard Blanford, chief executive, Fordway

See the article here:
Why tales of cloud repatriation are largely wishful thinking - ITProPortal


Google ups the ante in cloud arms race with new regions – Data Economy

For many of today's applications and workloads, cloud computing offers the enterprise a host of advantages over traditional data centers, including lower operational and capital expenditures, improved time to market, and the ability to dynamically adjust provisioning to meet changing needs globally. Consequently, there has been a massive shift to cloud migration over the past decade, with cloud computing trends showing significant year-over-year growth since it was first introduced, and Cisco predicting that by 2021 cloud data centers will process 94 percent of all workloads. According to MarketsandMarkets, the global cloud computing market is projected to surge at a compound annual growth rate (CAGR) of 18 percent to reach approximately $623.3 billion by 2023, up from $272 billion in 2018.
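As a quick sanity check on projections like these, the compound-growth relation is simple: future value = present value × (1 + CAGR)^years. A short Python sketch using the figures quoted above:

```python
# Rough check of the projection quoted above: $272 billion in 2018 growing
# at an 18 percent compound annual growth rate (CAGR) for five years.
present_billion, cagr, years = 272.0, 0.18, 5
future_billion = present_billion * (1 + cagr) ** years
print(round(future_billion, 1))   # about 622, close to the reported $623.3 billion
```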

Today, however, we are seeing more companies bringing workloads back into their data centers or edge environments after having them run in the cloud for several years, because they didn't originally fully understand their suitability in a cloud environment. 451 Research has referred to this dynamic as cloud repatriation, and a recent survey found that 20 percent of cloud users had already moved at least one or more of their workloads from the public cloud to a private cloud, and another 40 percent planned to do so in the near future.

All of this raises a deceptively simple question: How do I know when a workload would be better off running in or outside of the cloud?

When Latency, Availability and Control Are Key

As with any IT decision, an inadequately researched, planned and tested process is likely to cause setbacks for enterprise end-users when the organization at large is uncertain whether to move an application or workload out of the public cloud and return it to an on-premises data center or edge environment.

Very often, moving an application or workload from the cloud makes good business sense when critical operational benchmarks are not being met. This might mean inconsistent application performance, high network latency due to congestion, or concerns about data security. For example, we know of one Fortune 500 financial services firm that was pursuing an initiative to move its applications and data to the public cloud and only later discovered that its corporate policy prohibited placement of personally identifiable information (PII) and other sensitive data beyond its internal network and firewall. Although many security standards are supported by public cloud providers, because of its internal policy the financial organization opted to keep its data on-premises.


Some companies, such as Dropbox, have chosen to migrate away from the public cloud to benefit their bottom line. While cost is but one criterion for leaving, it is a major one. In the wake of leaving the cloud, Dropbox was able to save nearly $75 million over two years.

Generally speaking, applications that are latency sensitive or have large datasets that require transport between various locations for processing are prime candidates for repatriation. Consider smart cities and IoT-enabled systems, which create enormous amounts of data. While cloud computing provides a strong enabling platform for these next-gen technologies because it provides the necessary scale, storage and processing power, edge computing environments will be needed to overcome limitations in latency and the demand for more local processing.

Additionally, if your applications and databases require very high availability or redundancy, they may be best suited to private or hybrid clouds. Repatriation also provides improved control over the applications and enables IT to better plan for potential problems.

Yes, moving to the cloud means a decrease in rack space, power usage and IT requirements, which results in lower installation, hardware, and upgrade costs. Moreover, cloud computing does liberate IT staff from ongoing maintenance and support tasks, freeing them to focus on building the business in more innovative ways. And yet, while many businesses are attracted to the gains associated with public or hybrid cloud models, they often do not fully appreciate the strategy necessary to optimize their performance. Fortunately, there are tools to help IT teams better understand how their cloud infrastructure is performing.

Demystifying Cloud Decision-Making

No matter the shape of an organization's cloud, whether public, private or hybrid, data center management solutions can provide IT staff with greater visibility and real-time insight into power usage, thermal consumption, server health and utilization. Among the key benefits are better operational control, infrastructure optimization and reduced costs.

Before any organization moves its data to the public cloud, the IT staff needs to understand how its systems perform internally. The unique requirements of its applications, including memory, processing power and operating systems, should determine what it provisions in the cloud. Data center management solutions collect and normalize data to help teams understand their current on-premise implementation, empowering them to make more informed decisions as to what is necessary in a new cloud configuration.

Intel Data Center Manager is a software solution that collects and analyzes the real-time health, power, and thermals of a variety of devices in data centers. Providing the clarity needed to improve data center reliability and efficiency, including identifying underlying hardware issues before they impact uptime, these tools bring invaluable insight to increasingly cloudy enterprise IT environments, demystifying the question of on-premises, public and hybrid cloud decision-making.

Here are some factors to consider when making a decision about embarking on a course of cloud repatriation:

If you answered yes to a majority of the questions above, it might be time to consider cloud repatriation.


Read more here:
Google ups the ante in cloud arms race with new regions - Data Economy


Cloud Technologies in Health Care Market Growth Prospects and Outlook 2020-2026 | Athenahealth, CareCloud,Vmware (Dell,Merge Healthcare, IBM…

Cloud computing can be defined as the practice of using remote servers, in place of a local server or network, to store, manage, and process data. The use of cloud therefore moves the data center infrastructure outside of the organization. This report analyzes and discusses the market for cloud computing in the healthcare sector. The revenue from cloud services has been tracked in the report. The healthcare cloud computing market is segmented by application, deployment, service, end user, and geography.

The report discusses many vital industry facets that strongly influence the global Cloud Technologies in Health Care industry, including an extensive study of competitive edge, the latest technological advancements, the region-wise industry environment, contemporary market and manufacturing trends, leading market contenders, and current consumption tendencies of the end user. The report also covers market size, market share, growth rate, revenue, and the previously reported CAGR along with its forecast estimation.

The global healthcare cloud computing market is projected to reach USD 51.9 billion by 2024, from an estimated USD 23.4 billion in 2019 at a CAGR of 17.2% during the forecast period.

Get Sample Copy of The Report NOW!

https://www.marketinsightsreports.com/reports/03061883657/global-cloud-technologies-in-health-care-market-size-status-and-forecast-2020-2026/inquiry?source=bestresearchreports&Mode=21

Top Leading Manufacturers Profiled in the Cloud Technologies in Health Care Market Report:

Athenahealth, CareCloud, VMware (Dell), Merge Healthcare, IBM Corporation, ClearData, Carestream Health, Lexmark International, NTT Data, Iron Mountain

The market research study focuses on these types:

Software-as-a-Service (SaaS), Platform-as-a-Service (PaaS), Infrastructure-as-a-Service (IaaS)

The study also focuses on these applications: clinical information systems and nonclinical information systems.

Inquire for Discount of Cloud Technologies in Health Care Market Report at:

https://www.marketinsightsreports.com/reports/03061883657/global-cloud-technologies-in-health-care-market-size-status-and-forecast-2020-2026/discount?source=bestresearchreports&Mode=21

The report highlights major developments and changing trends adopted by key companies over a period of time. For a stronger and more stable business outlook, the report on the global market carries key projections that can be practically studied.

The Cloud Technologies in Health Care market analysis report, recently added by the research firm, helps readers make informed business decisions. The research report further identifies the market segmentation along with its sub-types. The Cloud Technologies in Health Care market is expected to grow at a significant CAGR during the forecast period. Various factors are responsible for the market's growth, and these are studied in detail in this research report.

This research report represents a 360-degree overview of the competitive landscape of the Cloud Technologies in Health Care Market. Furthermore, it offers massive data relating to recent trends, technological advancements, tools, and methodologies. The research report analyzes the Cloud Technologies in Health Care Market in a detailed and concise manner for better insights into the businesses.

The report evaluates the growth rate and the market value based on market dynamics and growth-inducing factors. The analysis draws on the latest industry news, opportunities and trends. The report contains a comprehensive market analysis and vendor landscape in addition to a SWOT analysis of the key vendors.

Recent Developments

The report provides key statistics on the market status of the Cloud Technologies in Health Care market manufacturers and is a valuable source of guidance and direction for companies and individuals interested in the industry.

The report provides a basic overview of the industry including its definition, applications and manufacturing technology.

The Cloud Technologies in Health Care market report presents the company profile, product specifications, capacity, production value, and 2014-2020 market shares for key vendors.

The total market is further divided by company, by country, and by application/type for the competitive landscape analysis.

The report estimates 2020-2026 market development trends of Cloud Technologies in Health Care Market.

Analysis of upstream raw materials, downstream demand and current market dynamics is also carried out.

The report makes some important proposals for a new project of Cloud Technologies in Health Care Industry before evaluating its feasibility.

The research includes historic data from 2015 to 2020 and forecasts until 2026, which makes the report an invaluable resource for company executives, marketing executives, sales and product managers, consultants, analysts, and other stakeholders looking for key industry data in readily accessible documents with clearly presented tables and graphs.

In conclusion, the Cloud Technologies in Health Care market report presents a descriptive analysis of the parent market based on its leading players and on past, present and projected information, serving as a practical guide for all competitors in the Cloud Technologies in Health Care industry. Our team of expert research analysts is trained to provide in-depth market research for each individual sector, helping readers understand industry data in the most precise way.

Contact US:

Irfan Tamboli (Head of Sales), Market Insights Reports

Phone: +1 704 266 3234

Mob:+91-750-707-8687

sales@marketinsightsreports.com

irfan@marketinsightsreports.com

Here is the original post:
Cloud Technologies in Health Care Market Growth Prospects and Outlook 2020-2026 | Athenahealth, CareCloud,Vmware (Dell,Merge Healthcare, IBM...


Your ‘Love Is Blind’ addiction is not heating the planet yet – Grist

Chances are you've heard the song Despacito, by Puerto Rican artists Luis Fonsi and Daddy Yankee. Its distinction as the most-watched YouTube video of all time suggests it was unavoidable when the song was released in 2017.

Nearly a year ago, when the video became the first clip to pass 5 billion views on YouTube (it's since reached 6.6 billion), Fortune Magazine published a more alarming statistic: Streaming the nearly five-minute video that many times required as much computing power as 40,000 U.S. homes use in a year.

As more people ramp up their online activity, streaming Netflix shows like Love Is Blind, shopping on Amazon, gaming, banking, and so on, the data centers needed to make that happen are multiplying worldwide. And all that computational activity takes energy. These cavernous buildings need significant amounts of electricity to operate and cool their humming servers, the central data repositories for individual devices on a network.


Despite the eye-popping number associated with all those Despacito views, researchers say our soaring internet use hasn't driven an equally huge boom in electricity use, yet. That's mainly thanks to improvements in energy efficiency.

Globally, demand for data center services rose by 550 percent between 2010 and 2018, a new study found. But the facilities' energy use grew by only 6 percent in that same period. In the United States, the world's biggest data center market, energy use actually plateaued over that time period, a sharp departure from the early 2000s, when a doubling of data center output meant a doubling in energy demand.

"There's been a drastic decoupling in the amount of data center services provided and the energy use," said Sarah Smith, who coauthored a recent paper in the journal Science. Smith is a senior scientific engineering associate at the Lawrence Berkeley National Laboratory in California.

There are a number of efficiency-related reasons for this decoupling: Companies are increasingly moving servers out of office buildings and into large, shared facilities, which allow for more efficient use of cooling and ventilation systems. Meanwhile, buildings in colder climates like Finland and Sweden use naturally chilled air and water to keep servers from overheating. Servers themselves are improving, with the latest models using far less energy than their power-hungry predecessors.

Smith and her colleagues' paper suggests that earlier reports have exaggerated the environmental effects of all our binge-watching and cloud computing. While some experts dispute the conclusion, they still agreed that energy efficiency has made huge dents in data centers' electricity appetite. But everyone who spoke to Grist was united in warning that such measures will only hold back this growing hunger for so long.

Inside each data center, thousands of pizza-box-shaped servers are stacked in rows upon rows of racks. An individual server can do the work of about 10 computers, in terms of storing, moving, and analyzing data. Engineers are continuously tweaking server designs so the machines consume less electricity to process data. Since 2010, the amount of electricity used per computation has dropped by a factor of four. While older servers use the same amount of electricity whether they're active or idle, newer models use only about a third as much energy when idle.

Data-center operators are likewise installing more energy-efficient equipment to cool and circulate air and to illuminate long, narrow corridors. That's leading to steady reductions in the power usage effectiveness of data centers. That measure compares a data center's total energy use to the amount of energy needed to run the computing equipment itself. A power usage effectiveness of 2, as Smith explained, means that for every watt used on computing, another is required to do the cooling and everything else in the building. Today's highly efficient data centers have an index of about 1.1. By contrast, a server room in a typical office building might have an effectiveness of 3 or 4, which is one reason why many businesses are opting for shared, so-called hyperscale facilities.
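Since power usage effectiveness is just a ratio, a tiny worked example makes the comparison concrete (a sketch with illustrative numbers only):

```python
# Power usage effectiveness (PUE) = total facility power / IT equipment power.
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    return total_facility_kw / it_equipment_kw

print(pue(2000, 1000))  # 2.0 - one extra watt of overhead for every watt of computing
print(pue(1100, 1000))  # 1.1 - typical of today's most efficient hyperscale facilities
```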

And it's not just data centers. Energy efficiency is helping reduce the environmental footprint of other global industries. On land, architects and construction firms are designing buildings to harness more daylight and reduce artificial lighting, as well as using sustainable insulation materials and thicker-paned windows to shrink heating and cooling needs. In 2018, about 250 global architectural firms said they expected to slash the predicted energy use of their new buildings by nearly half, compared to a 2003 baseline. That avoided energy consumption could help prevent 17.7 million metric tons of carbon dioxide emissions, equivalent to taking 4 million passenger cars off the road, and save more than $3.3 billion in operational costs, the American Institute of Architects recently reported.

Meanwhile, at sea, cargo shipping companies are designing vessels to guzzle less fuel. Many newer ships can plug into shoreside electricity supplies to avoid running their massive diesel engines at berth. CMA CGM of France is equipping ships with electronically controlled engines to optimize performance, while China's COSCO Shipping is outfitting vessels with new propellers and hulls to reduce wave resistance. As a result, according to the Clean Cargo Working Group, emissions from container ships dropped by 37 percent on average (per container, per mile) from 2009 to 2017.

Energy efficiency improvements can make a meaningful difference both for a company's bottom line and for the environment, but they can only do so much and stretch so far before they hit a physical limit.

According to energy experts, efficiency measures should be able to absorb the next doubling of global data center output and keep electricity use steady for roughly the next four years. Beyond that, it will be harder to stem a surge in power demand without significant changes to computing technology.

"Once almost all computing loads shift to hyperscale facilities, then the benefits of shifting away from really inefficient corporate data centers just run out, and you'll have to do some other things," said Jonathan Koomey, a coauthor of the Science paper and a longtime data center researcher.

One way to further limit a facility's environmental toll is to connect it to renewable energy sources. Many data centers still rely on energy generated by coal- and natural gas-fired power plants. In China, coal supplies about three-fourths of the electricity that the country's cloud operations consume, according to a study by Greenpeace and Chinese academic institutions. In Virginia, major U.S. tech companies like Amazon, which is setting up its HQ2 in the state's Washington, D.C., suburbs, are expanding their presence without adding any additional supplies of wind or solar power. Amazon's Virginia-based data centers are powered by only 12 percent renewable energy, a separate Greenpeace report said.

"We need to make sure we're building this digital infrastructure in a way that's not making the climate change problem worse and taking us in the wrong direction," said Gary Cook, a former Greenpeace researcher who is now a director for the environmental corporate responsibility watchdog Stand.earth.

Cook said the Science study likely underestimated the amount of electricity that today's data centers use. He pointed to other reports that found that facilities in the U.S., European Union, and China together consumed around 400 terawatt-hours of electricity annually, nearly 2 percent of global electricity consumption. In the Science paper, researchers said that global data center energy use was about half that amount, or 205 terawatt-hours.

Some of the discrepancy can be explained by researchers' different approaches to modeling data center activity. Gathering real numbers from actual cloud setups is notoriously difficult, in part because tech companies aren't willing to share the information. The new report also doesn't factor in specialized computers used for mining cryptocurrencies like Bitcoin or Ethereum (the carbon footprint of which has been hotly debated), and it doesn't consider artificial intelligence or virtual reality applications, which are also computationally intensive, said Lotfi Belkhir, a mechanical engineering professor at McMaster University in Canada.

He said he expects existing efficiency measures will become tapped out sooner than other researchers implied. Unless computing technology takes a quantum leap very soon, Belkhir said, "We're bound for a major uptick in data center energy consumption that will continue to grow exponentially."

But Koomey and Smith argue that data centers can still do more to boost efficiency before the rising energy demand becomes too much. Scientists are developing liquid cooling technologies that place computer chips in direct contact with water or another liquid. By some accounts, this approach could reduce cooling costs by at least 80 percent compared to conventional whirring fans. State and federal governments can adopt energy-efficiency standards or renewable energy requirements to ensure that data centers use only the most advanced technology and cleanest power available. Requiring data centers to report information, even anonymously, would also give researchers more tools for designing and improving their operations.

"We need to make an effort to be prepared for more demand in the future," Smith explained. "That being said, we can look at what has happened in the industry in the last decade and see it as an inspiration."

And that's worth celebrating with a little reggaeton, at least for now.


Read the original post:
Your 'Love Is Blind' addiction is not heating the planet yet - Grist


The Serious Business Of Being A Server OEM – The Next Platform

Not everybody is a hyperscaler or large public cloud builder, and no two companies are happier about that than Dell Technologies and Hewlett Packard Enterprise, the two largest original equipment manufacturers in the world for servers and storage. They are also the two companies that chased plenty of sales at these webscale datacenter operators in years gone by but which have learned, of necessity, to walk away from deals where they can't make money or would even lose money.

The two companies gave us a glimpse into their respective datacenter businesses in recent days, and we plowed into the figures to try to get a sense of what is going on in the enterprise, basically taking a snapshot before whatever effects of the coronavirus outbreak hit the server market. Even without Covid-19, both Dell and HPE were struggling, and that is not a good sign. We aren't sure what it means yet, but Intel also reported that enterprise spending for its Data Center Group was soft, down 7 percent in its most recent quarter, and again that was before Covid-19 started slowing down sales of just about everything but disinfectant, facemasks, flu medicine, and ibuprofen.

Because Dell is the dominant OEM seller of both servers and storage, we will cover it first. Dell's fiscal year ends in January each year, and after it was done taking itself private and acquiring all of enterprise server virtualization juggernaut VMware and its EMC enterprise storage juggernaut parent, Dell rejiggered its financial presentations to include these businesses in a consolidated format.

It is immediately obvious, of course, that Dell is a very large company and is indeed the largest IT supplier in the world. This was an aspiration that Michael Dell, the company's founder, has had for at least as long as we have been watching the glasshouse, and say what you will, but Dell did it. But what is also glaringly obvious is that this business has to work hard to be profitable given all of the grief Dell gets from other OEMs like HPE, Lenovo, Cisco Systems, Inspur, Sugon and the original design manufacturers like Foxconn, WiWynn, Inventec, and others as well as the in-betweener Supermicro. (The lines between OEM and ODM are blurry, and Dell used to have a very large custom server business of its own before Facebook started the Open Compute Project.) As we have said before, the systems business and the storage business are very cut-throat, and we are thankful that companies step up to the plate and do the job each and every quarter, because someone has to build this stuff. But the margins on hardware are nothing like what we used to see in the 1980s and 1990s, or even the 2000s, and what margins there are in systems end up going mostly to Intel, with some sent to the makers of DRAM and flash and GPUs, some sent to Microsoft, and even less sent to Red Hat.

In the fourth quarter ended on January 31, Dell's revenues were just a tad over $24 billion, up eight-tenths of a point compared to the year-ago quarter, but it flipped to a net income of $408 million from a loss of $299 million. This is progress indeed, considering the softness in server and storage sales. The Servers and Networking division posted sales of $4.27 billion, down 18.7 percent year on year against a very tough compare, when Dell kissed $10 billion in system sales in Q4 F2019. Jeff Clarke, vice chairman and chief operating officer at Dell, said in a conference call with Wall Street analysts that the server market was soft in China (due in part to the coronavirus, we think, given the end-of-January period), but that large enterprises in both the United States and Europe also pulled back the reins on system spending as well. On the storage front, sales of the vast portfolio that Dell has amassed dropped by 3.2 percent to $4.49 billion, and Dell added that it would be rebranding its storage products under the Power brand by May. (We presume they meant PowerStore, to match the PowerEdge server brand and PowerConnect switch brand.) In the past two years, Dell has gone from 80 distinct storage products down to 20, and Clarke called out hyperconverged infrastructure (HCI) as a key driver of growth in the coming year, along with core enterprise storage and data protection hardware and software.

All told, the Infrastructure Services Group had $8.76 billion in sales, down 11.5 percent, and an operating income of $1.11 billion, off 12.1 percent and comprising 12.7 percent of revenues.

The picture for the full year is a little different for ISG, as this summary table shows:

And here is the data going back to fiscal 2016 that shows the ups and downs of the datacenter business groups at the new and embiggened Dell:

"Server and networking revenue declined in fiscal year 2020, but profitability was up as we didn't chase unprofitable server deals in a down market," explained Clarke. "Our long-term server share trajectory remains strong. We are winning in the consolidation." Clarke added that Dell was the number one supplier of mainstream servers for seven quarters in a row, by which we presume he means enterprise and SMB class PowerEdge machines and not custom or semi-custom iron, and that market watcher IDC reckoned this mainstream server sector would see growth of 3.3 percent in calendar 2020; he then said Dell believed it would outpace this market. This would be driven by more expensive iron, and particularly that which runs AI workloads, which are heavy on everything except the irony. Clarke said that demand in China was down around 35 percent but the rest of the world was down only 5 percent. The real softness outside of China was in large enterprises in North America, who might have been catching wind of coronavirus early and tapping the server spending brakes.

An interesting tidbit: In the call, Clarke said that Dell has about 30,000 unique server buyers every quarter, and that only about half of them buy storage (presumably he meant external storage) from Dell. Moreover, Dell has about three times as many server buyers as storage buyers, so the opportunity to sell across the former Dell and former EMC silos is still quite large. Simplifying and updating the storage products this year is part of the plan. To try to boost ISG sales, both for servers and storage, in fiscal 2021, Dell has invested over $1 billion in capacity and coverage in the past two and a half years to chase more deals. And Clarke thinks, with customers digesting the large numbers of servers they bought in the prior two years, there is a new consumption wave coming this year.

From Clarke's mouth to those 240,000 customer ears...

As you can see, VMware continues to be the profit engine at Dell, with sales up 18.5 percent in fiscal Q4 to $3.122 billion and operating profits up 17.7 percent to a very healthy $1.03 billion (32.8 percent of revenues). In the fourth quarter, VMware closed the acquisition of Pivotal, a business it had acquired years ago, spun out, and then reacquired. Pivotal is, of course, the provider of the Cloud Foundry application framework and is going to be delivering an integrated Kubernetes platform, which we expect to be hearing more about in the coming weeks.

Even with all that revenue and the improving profits, Dell's debt load is still high after acquiring EMC (and therefore VMware) a few years back, having taken itself private a few years before that. About a third of Dell's cash is parked in VMware, and it spent $11 billion at the end of 2018 on a special dividend relating to its going public again through a tracking stock tied to its ownership of VMware, which landed Dell back on the New York Stock Exchange as a public company. VMware still reports separate financials, but we are not getting into those today except for the mention above and to point out that vSAN virtual storage bookings were up around 15 percent and NSX virtual networking bookings were up over 20 percent. (We will be drilling down into VMware more next week, when the company talks more about its strategy for the future.)

The point is, Dell still has a lot of debt, which is one of the reasons why it is selling off its RSA Security division, one of the big acquisitions that EMC made back in 2006 for $2.1 billion, after it acquired VMware in 2003 for a mere $635 million just before VMware was getting ready to go public. Dell is selling RSA Security for $2.1 billion, so the Dell collective is getting its bait back, got all the RSA revenue for the past 14 years for free, and now gets to use the funds to pay down more debt.

Now, let's have a gander at Hewlett Packard Enterprise. In the quarter ended in January, HPE posted sales of $6.95 billion, down 8 percent, and net income was up by 88.1 percent to $333 million, due mostly to cost cutting maneuvers, even with a small increase in research and development spending (which now includes Cray). HPE has $3.17 billion of cash in the bank, which is respectable given its size. Here's the HPE revenue and income chart over time:

As is immediately obvious, HPE in its older Hewlett-Packard incarnation included PCs and printers as well as some systems and PC software and a sizeable enterprise services (outsourcing and systems integration) business, and was in fact larger than the new and embiggened Dell. HPE had huge writeoffs from its ill-fated Autonomy software acquisition, and it has sold off its enterprise services and most of its software businesses as well as spinning off PCs and printers into HP Inc, and now it is considerably smaller. And Dell's core systems business, servers and storage together at $8.76 billion, is also around 70 percent larger than HPE's $5.67 billion in the most recent similar quarter. That is thanks in part to Dell becoming the biggest shipper of OEM servers in the world, passing HPE seven quarters ago as HPE started walking away from deals where it could not make money, something Dell is also starting to do. It also comes from buying EMC, the largest enterprise storage supplier, and having a storage business that is about five times larger than the storage array business of HPE.

We have been tracking the HPE core systems business over time in its various incarnations and presentation styles, and here is what it looks like if you shake all the non-datacenter stuff out:

That orange vertical line shows the demarcation between the older ways of classifying servers and storage (which did not include services, systems software, or maintenance) and the new way announced in February and backcast for nine quarters. Relatively new chief executive officer Antonio Neri has rejiggered the company's financial reporting categories to reflect the business in the wake of the acquisition of Cray and the breaking up of the Pointnext services category that frankly no one really understood very well anyway.

In the new categorization, the Compute division sells rack, tower, and blade servers bearing the ProLiant brand. The HPC & MCS division is, as the abbreviations suggest, the High Performance Computing and Mission Critical Systems lines. The HPC machines are the Cray XC and CS lines as well as the Apollo lines; the Apollo machines are replacing the CS machines in the new HPE lineup, and the SGI UV3000 and UV300 NUMAlink machines have been merged with the Superdome X NUMA boxes that HPE created itself to create a new line of fat NUMA boxes, called Superdome Flex, aimed at enterprise and data analytics workloads in the MCS division; these sit alongside the Superdome X machines that support Linux, Windows Server, and OpenVMS. All of these machines are arguably about high performance, in one way or another, and hence belong together. That is not to say that there are not clusters of ProLiant servers that are being used as HPC machines or even AI engines when gussied up with GPUs or other kinds of accelerators. All of the break/fix maintenance and systems software for these machines are included in their revenue streams. The Storage division includes all of HPE's storage arrays and HCI wares, again including the support and systems software, and the Advisory and Professional Services division does just what it says it does and was bundled into Pointnext previously. The Intelligent Edge business is still largely made up of Aruba hardware and software, but now includes datacenter networking instead of having it shoved into Compute. Financial Services is still, well, financial services.

Having gone through all of that, here is what the last nine quarters of HPE's numbers look like under the new classifications:

And here is a chart showing the divisions:

Aside from the general trend bending downward slightly in the Compute business (again, mainly ProLiant servers), the other lines are amazingly flat. So much so that we checked them three times just to make sure. The services attached to Storage and Compute help smooth out the curves a bit.

But without a question, demand in the January quarter was impacted by a bunch of issues, and Neri didn't dodge it. "Our Q1 revenues were impacted by a number of factors," Neri explained to Wall Street analysts in a call going over the numbers. "First, like many of our peers, we continue to see uneven and unpredictable demand due to macro uncertainty. This has resulted in longer sales cycles and delayed customer decisions. Second, commodity supply constraints disrupted our ability to meet our customers' demand this quarter, particularly in our Compute and HPC businesses. Additionally, the outbreak of the coronavirus at the end of January impacted component manufacturing, resulting in higher costs and backlogs. In both of these cases, we have established specific mitigation and recovery plans with each of our suppliers."

The net effect of all of this is that HPE is cutting $300 million from its free cash flow for fiscal 2021, which is now set in the range of $1.6 billion to $1.8 billion.

There were some bright spots in the HPE Compute business. If you take Tier 1 clouds in China out of the mix, unit shipments of ProLiant machines were up in the mid-single digits, according to Neri. While Cray has won some big deals that will eventually come to the top line at HPE, that is one to two years away and there is no guarantee that these big exascale systems will have high profits for HPE. That said, the HPC & MCS division had 6 percent growth, and Tarek Robbiati, chief financial officer at HPE, said that the HPC business, presumably dominated by all the recent Cray deals, had won over $2 billion in business that would be booked between now and 2023. How that revenue will stream in remains to be seen, and it is also unclear how much HPE can bring to the bottom line. But obviously HPE can buy CPUs and GPUs at higher volumes and lower unit prices than Cray ever could, and that should help make what used to be Cray more profitable. That means operating income will rise for the HPC & MCS division.

So not only did the $1.3 billion acquisition of Cray by HPE last September fulfill the goal of former chief executive officer Peter Ungaro to make Cray into a $1 billion and growing business, but it might even make that Cray business more profitable than it could ever have been on its own, even as it expands its revenue footprint again, perhaps to $2 billion. It's hard to say for sure, but what we do know is that SGI and Cray survived for decades and didn't really profit that much or that often, because HPC is a tough, tough business with the most demanding compute, storage, and networking challenges and the most stingy budgets, even if they are big. And HPE, in its many guises, has tried to have a datacenter business that was as profitable as that of IBM, which has had its own revenue and profit growth issues in the past decade.

Being an OEM is hard. Be grateful someone is still doing the job of engineering and building machines and trying to make it up in volume. It is often one of those thankless jobs. And if the OEMs stop, you will be thrown to the tender budgetary mercies of the clouds, and that is going to be truly expensive.

See the original post here:
The Serious Business Of Being A Server OEM - The Next Platform


8 Israeli Companies Named To Top 100 Promising AI Startups Worldwide | News Briefs – NoCamels – Israeli Innovation News

Eight Israeli-founded companies were featured in the annual AI 100 finalists report put together by New York-based research firm CB Insights, which recognizes top firms worldwide that are pushing the boundaries of AI research and redefining industries. The report was published last week.

CB Insights said the listed companies were selected from nearly 5,000 startups based on several factors including patent activity, business relations, investor profile, market potential, competitive landscape, team strength, and tech novelty.

The selected firms are working on solutions across 15 core industries, including healthcare, retail and warehouse, and finance and insurance.

The Israeli companies are:

Lemonade, the Israeli-founded, NY-headquartered insurance tech company that uses behavioral economics, AI and chatbots to deliver renters and homeowners insurance policies in over two dozen states across the US.

Lemonade's renters insurance pricing starts at $5 per month and its homeowners insurance starts at $25 per month. The company also has a charitable component, where revenue left over after paying claims goes to charities of users' choice.

Lemonade announced last month that it will begin offering pet insurance this year.

Healthy.io, the Israeli medical tech startup that developed a platform to turn smartphones into sophisticated diagnostics devices capable of analyzing urine samples. Healthy.io has two FDA clearances for a smartphone-based urine albumin test called Dip.io, which aids the diagnosis of chronic kidney disease; it developed a consumer-focused UTI testing service in partnership with UK pharmacies, and recently unveiled a new digital solution in the US for the management of chronic wounds.

Healthy.io was founded in 2013 by Yonatan Adiri, who also serves as CEO, and has raised some $90 million to date.

Zebra Medical Vision, the Israeli AI medical imaging insights company that developed platforms to read medical scans and automatically detect anomalies. Through its development and use of different algorithms, Zebra Medical has been able to identify visual symptoms for diseases such as breast cancer, osteoporosis, and fatty liver as well as conditions such as aneurysms and brain bleeds.

The company received its fourth FDA 510(k) clearance in November for its HealthCXR device for the identification and triaging of pleural effusion (water in the lungs) in chest X-rays.

Viz.ai, a medical imaging company that helps optimize emergency treatment using deep-learning technology to analyze CT scans and automatically detect and alert physicians of early signs of large vessel occlusion strokes. The platform, Viz LVO, helps triage patients directly to a stroke specialist and fast-track life-saving care. The software is available in over 300 hospitals across the US.

Viz was founded in 2016 by a global team of experts, including Dr. David Golan, an Israeli statistics and AI expert. The startup has offices in San Francisco and Tel Aviv and has raised over $70 million to date.

Razor Labs, a Tel Aviv-based startup founded in 2016 that builds tailor-made neural networks and helps enterprise companies reap the benefits arising out of the AI revolution.

The startup's DataMind platform virtualizes manufacturing processes and its VisualMind video analytics platform uses several AI applications for corporate objectives. The company also runs an eight-week educational program focused on deep learning engineering.

Snyk, the Israeli-founded, UK-based cybersecurity company that provides security solutions for vulnerabilities in open source libraries. Founded in 2015, Snyk recently raised a $150 million investment at a valuation of over $1 billion.

The company's major clients include Google, Microsoft, Salesforce, and Adobe.

SentinelOne, the Israeli cybersecurity firm that developed an AI-based platform to secure endpoints including laptops, PCs, servers, cloud servers, and IoT devices. Its system analyzes data in real time to identify anomalies and respond to attacks.

Founded in 2013, the company recently raised $200 million in a funding round led by NY-based equity firm Insight Partners at a company valuation of more than $1 billion. SentinelOne has offices in Mountain View, California and a development center in Israel.

ClimaCell, the US-based, AI-powered weather intelligence platform company created by Israeli founders. ClimaCell automates operational decisions and action plans based on how historic, real-time, and future weather will impact businesses. It produces hyper-local weather forecasts using vast quantities of traditional and non-traditional data (IoT devices, drones, cellular signals, sat com signals, and street cameras) and targets weather-sensitive industries.

The company is based in Boston and has more than 150 corporate clients including Delta, JetBlue, the New England Patriots, and ride-sharing, Israeli-founded startup Via.

Continued here:
8 Israeli Companies Named To Top 100 Promising AI Startups Worldwide | News Briefs - NoCamels - Israeli Innovation News


Microsoft Is #1 in the Cloud, and CFO Amy Hood Is Bullish on the Future – Cloud Wars

Speaking at an investors conference earlier this week, Microsoft CFO Amy Hood offered a range of insights into the company's strategy while calling out particularly strong performances "across each of our cloud properties" and in Windows for security.

In a Q&A session with Morgan Stanley lead software analyst Keith Weiss, Hood consistently expressed her optimism in a catchphrase that's a particular favorite of hers and CEO Satya Nadella's: that she "feels good."

And there are many, many things in the Microsoft portfolio and playbook about which Hood is feeling good. All underpin Microsoft's ability to have something close to a stranglehold on the #1 spot on the Cloud Wars Top 10. Such as:

Hood is not only a superb financial leader but also deeply steeped in all aspects of Microsoft's business. So let's hear directly from her on 10 reasons she's bullish (and feels good) about Microsoft's prospects for the future, drawn from the Microsoft transcript of the conversation with Weiss.

If you look at where we've really focused, I think you see it in our results across each of our cloud properties, as well as in Windows for security. And you see those results in the bookings, and for me, bookings is really just a statement of both obviously years of research-and-development investment, followed by sales-and-marketing investments to support that based on opportunity.

Most of those are multiyear commitments that customers are making, and people don't do that lightly, and we certainly don't take them lightly once made. So I think it's less of a one-year view that I have on these things and more of a multiyear frame, and feeling well-positioned for the next decade.

If you think about it, over the next decade, do any of us sort of sit and say IT will be a bigger or smaller piece of GDP no matter what that number is? I absolutely believe it will be a larger percentage of GDP in the next decade than it is today.

And if that's true, being well-positioned for that growth, no matter its pacing quarter to quarter, tends to be just how I look at the world in terms of demand. And the conversations we're having with customers really reflect that.

The reality that compute will need to exist at the edge and in the cloud, a thing that we call generally hybrid, I think is a reality that people are now beginning to see.

We've been architected that way from the beginning and have been talking about the advantages of that also from the beginning. I do feel that some of these recent announcements we've made make that more and more real for people, when you talk about the importance of latency, or sovereignty, or privacy or security in terms of the portfolio we have and how customers think about it.

[For the past few years, Hood has consistently hammered home the point that Microsoft has zero preference about whether customers buy cloud or on-premises products and services because it's Microsoft's job to provide a seamless architecture for both rather than steering them to one or the other.]

That number for us is best seen in the server and cloud products KPI. We talked about that for a long time, and I know people really focus on one or the other. But if you think about what customers are asking, and how customers are contracting and what they are trying to achieve, it's far more similar to the all-up KPI ... One-third of the customers who have those Hybrid Benefits are starting to use them in Azure ... I do believe that going forward that opportunity for us is just beginning. And that again sort of reinforces why I tend to focus more on the server and cloud products KPI.

I tend to believe that any view of trying to get a specific number in any specific quarter to say, "Did the server on-prem number go up or down and the Azure number go up or down, and is there some large drag or not," is probably getting a little myopic in terms of the overall trend line, which is: is the overall cloud opportunity represented by on-prem, hybrid and Azure continuing to show the growth and execution that we feel good about in terms of the explosive TAM and our market opportunity within that TAM?

I also think the fullness of our commercial-cloud offerings matter. It's not just Azure, for example. It's the conversations we're able to have with customers around their total digital transformation by workload, by solution. And it stems across Microsoft 365, Dynamics, as well as Azure in terms of this really mattering to customers, and relevance, and adding industry layers. And I feel quite good about that.

[Weiss asked Hood to name any premium services that have become real hits.]

I think if you looked across our data story, it's a place that I would, in particular, highlight great progress over the past year. And I think we're pretty excited about what the next year has to offer.

[While Hood was likely pretty excited about various parts of Microsoft's business, the data segment was the only one that elicited that specific praise from her.]

Microsoft 365 is the solution that brings together Office 365, our EMS story, as well as Windows Commercial. It's not simply a bundle; it's about innovation done at the core that is shared amongst those products that get sold within that bundle. And the reason we spend a lot of time talking about it is it really is the product that we lead with, with our customers. It is, in fact, their language with us increasingly over what really a modern and secure workplace and collaborative environment and experience for users looks like ...

We've said recently that 25% of our Office 365 contracts are executed this way. And really what that's meant to do is to tell people, wow, this is really the hero motion for us in terms of engaging with our customers and the conversation they want to have with us.

Dynamics is really fundamentally about addressing what I think just sets the early part of curve on reinventing business processes as they work today with the vast amounts of data that are available in a cost-effective manner ... It's a place where we have spent the year investing a lot more in sales capacity, which we're excited about ... And this is a place where I just think we're in the very early parts of the cycle in terms of what reinventing business processes will look like for the next decade. So this is certainly a place where I think my optimism remains high. We did see Dynamics 365 growth accelerate in Q2 on a larger base of business.

Every time I start to get the impression that Microsoft's trying to do too many things too quickly, I take a look at comments from Amy Hood. Her insights, her rational thinking, and her unfailingly consistent set of principles about what the company is doing and why are always, always, grounded fully in reality.

So if Amy Hood frequently states that she "feels good" about various aspects of Microsoft's breakaway business, it's not hard to see why.

View original post here:
Microsoft Is #1 in the Cloud, and CFO Amy Hood Is Bullish on the Future - Cloud Wars


Supermicro Plants A Flag At The Edge – The Next Platform

In a short time, the edge has become the crucial third leg holding up the IT stool, joining traditional on-premises datacenters and the public clouds. That is not surprising, given the increasingly distributed nature of the enterprise. The cloud, the proliferation of mobile devices, the Internet of Things (IoT), the growing need for more real-time analysis of the massive amounts of data being generated outside of core datacenters, and now the promise of high speeds and low latency from 5G networks all play a role in driving demand for more compute and storage capability closer to the users and devices creating all that data.

It is still early in the development of the edge, however, and how it will evolve remains to be seen. But there is a belief among some system makers that, while the edge has particular capability needs and the applications running out there will in many ways dictate what the infrastructure looks like, there has to be commonality in the infrastructure components from the edge back through the cloud and into the datacenter. Dell EMC officials, for example, often talk about the continuum of the IT environment from one point to another.

Supermicro is seeing small but growing demand from enterprises that want to extend out to the edge but make it essentially an extension of their datacenters rather than something separate, according to Michael Clegg, vice president and general manager of 5G and embedded/IoT for the company.

"Some people that have assets out at the edge (the tower companies, the telcos, anybody that has an edge datacenter) and the big datacenter players are starting to say, 'How can I take my traditional network and push out with these to the edge?'" Clegg tells The Next Platform, noting that such interest dovetails with Supermicro's strategy of using a building-block model with chips, GPUs, storage and other components that are the same in systems at the edge and in the datacenter. "It's a design, engineering and packaging problem. Not to trivialize it, but it takes that experience. Typically, you get companies that are either very good at these sorts of industrial computers and you get companies that are good at just traditional servers. What you're doing with these types of technologies (and this is what's happening with edge) is you're taking the technology that's been in the traditional server and you're trying to repackage that into almost an industrial server application. You're starting to bridge those two together and that's a new expertise that we're definitely going down that path with."

The vendor has been building out its portfolio of offerings for 5G and edge computing environments, including the single-socket SuperServer E403-9P-FN2T and 1019-FN2T systems, which were rolled out last year. Those are designed to complement datacenter systems like the multi-node BigTwin and high-density SuperBlade and MicroBlade systems that can support virtualized 5G in the network core.

Supermicro this week unveiled the first of what will be a family of small, rugged, IP65-rated edge systems aimed at outdoor environments, designed for 5G RAN (radio access network) workloads, artificial intelligence (AI) inferencing, and similar applications that need to run as close to the user as possible, often in hostile environments.

The systems come with multiple configurations, including options for either Intel Xeon D system-on-a-chip (SoC) or Xeon Scalable CPUs. There also are three PCIe slots that can be used for GPUs (important for AI inferencing at the edge) or field-programmable gate arrays (FPGAs). Storage options include SSDs, M.2 and EDSFF (Enterprise and Data Center SSD Form Factor). The systems are built through Supermicro's building-block architectural approach.

"We have motherboards that are the core building block," Clegg says. "Around that we have storage blocks, GPU blocks, power supply blocks and then chassis. It becomes a little bit of a packaging exercise: which combination of these things do I want to do? When you go outdoors you have different constraints, power constraints and, most importantly, temperature constraints. Packaging and meeting the extended environmental conditions is the more difficult piece. Usually general-purpose server chips are not designed to [operate over] extended temperatures. Intel does have a class of products that [can handle extreme temperatures], so you pick it out of a smaller pool of processors. For us it's really going [general purpose] and then doing some custom design as we need it."

Telcos and other utilities are embracing edge computing environments, as are such industries as smart cities, physical surveillance and retail, according to Clegg. The keys to the edge are not only having compute out there, but also storage, expansion slots for GPU or network accelerators, and high levels of performance. Supermicro started out with Xeon D in its edge devices, but customers pushed for higher-performance and more scalable solutions that could handle datacenter-type applications.

The ramp to 5G is a key driver behind the edge and Supermicro's efforts behind the IP65 systems. For most 5G environments, the goal is to separate the software from the underlying hardware, essentially running the applications on general-purpose systems. Supermicro has been working with Intel on its FlexRAN initiative, in which a hardware system is built from off-the-shelf components and tasks that had run on dedicated networking boxes are run on the commodity hardware, Clegg says.

For vendors like Supermicro, there are essentially two parts of 5G networks. The first is the core network, which is going cloud-native, adopting a lot of virtualization on general-purpose servers and running everything on top of it as a software stack. The other is the radio, which is pinned in by such constraints as digital signal processing. It can be run in a controlled environment like a core or regional datacenter, but in rural areas it needs to be moved to the edge, closer to where base stations are located.

"That's the genesis [of the new system]," Clegg says. "Basically, 5G is moving into general-purpose servers to put product out in the field. What we had developed is we had a distribution unit that was designed to go into an edge datacenter or a regional datacenter, but we were getting requests from our customers to put that essentially up close to the tower itself. That's what drove this development initially. We have taken one of our servers and repackaged that [and] put it inside the IP65 boxes, a complete outdoor system."

5G holds the promise of 1,000 times more bandwidth than 4G and LTE, a tenth of the latency, and the ability to support millions of connected devices per mile, a significant capability in the era of IoT. It also will enable such technologies as AI and machine learning, data processing and analytics, and virtual reality at the edge, and will be crucial in the evolution of such new markets as autonomous vehicles.

"There's the 5G network itself and then there's the applications that are going to utilize the 5G network," Clegg says. "Because the 5G network itself has become virtualized and cloud-native, you often get these two spoken about as if they're one. The one is enabling the other, though. The work that's been done in 5G to create these virtualized x86-type platforms, in some cases operators are saying, 'I will use the same hardware. I will put a unit at the edge of my network (a regional datacenter) and I will run network services on that unit because that's just general-purpose computing. If it adds extra horsepower, I can also spin an application container up on that same unit.'"
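As a concrete illustration of spinning an application container up on spare edge capacity, the sketch below uses the Docker SDK for Python to start a workload on a remote edge unit that is already running network functions. The host address, image name, and resource limits are hypothetical, and this is only one of several ways an operator might schedule such workloads (Kubernetes at the edge is another common approach).

# Illustrative sketch: launch an application container on an edge unit that is
# already running virtualized network functions. The host, image, and limits
# below are hypothetical examples, not a specific operator's configuration.
import docker

# Connect to the Docker daemon on the edge node (assumes SSH access is configured).
client = docker.DockerClient(base_url="ssh://operator@edge-node-01")

container = client.containers.run(
    image="registry.example.com/video-analytics:latest",  # hypothetical workload
    name="edge-analytics",
    detach=True,
    restart_policy={"Name": "unless-stopped"},
    nano_cpus=2_000_000_000,  # cap at roughly two CPUs so network functions keep headroom
    mem_limit="4g",
)
print(f"started {container.name} ({container.short_id})")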

What drives 5G is low latency, he says, and low latency drives compute towards the edge.

"That's probably the catalyst piece of 5G," Clegg says. "As people think about why 5G is very different from 4G, it's about the idea of ultra-low latency, sort of the hidden theme in 5G. But the other factor of 5G itself is a virtualized architecture that's enabling people to really adopt these different compute models. You can probably see some dramatic capex savings down the road as we do some resource shaving at these edge nodes. That's going to be the other catalyst that doesn't exist today."

Read more:
Supermicro Plants A Flag At The Edge - The Next Platform


Cloud Backup Market is expected to expand at the highest CAGR by 2026: Acronis International GmbH, Asigra Inc., Barracuda Networks, Inc, Carbonite -…

Cloud Backup Market Industry Forecast To 2026

Cloud backup, also known as online backup, is a strategy for backing up data that involves sending a copy of the data over a proprietary or public network to an off-site server. The server is usually hosted by a third-party service provider, which charges the backup customer a fee based on capacity, bandwidth, or number of users. In the enterprise, the off-site server might be owned by the company, but the chargeback method would be similar.

Increasing adoption of cloud-based technologies and the need to manage voluminous data sets have driven enterprises to adopt cloud backup solutions. Adoption has also increased because of benefits such as simple management and monitoring, real-time backup and recovery, simple integration of cloud backup with an enterprise's other applications, data deduplication, and customer support.

North America is estimated to hold the largest market share in 2017, while APAC is projected to be the fastest-growing region with the highest CAGR due to the rising data generation in many countries. The emergence of the cloud and mandatory government regulations are simultaneously helping boost the growth of the cloud backup market in this region.
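The mechanics described above (copying data over the network to an off-site server, with deduplication to avoid re-sending content the server already holds) can be sketched in a few lines of Python. The endpoint, authentication header, and API paths below are hypothetical and do not correspond to any particular vendor's API; this is a minimal illustration of the idea, not a production backup client.

# Minimal sketch of an incremental cloud backup client: hash each file,
# skip content the off-site server already stores (crude deduplication),
# and upload the rest over HTTPS. Endpoint and token are hypothetical.
import hashlib
import pathlib
import requests

BACKUP_ENDPOINT = "https://backup.example.com/api/v1"   # hypothetical service
AUTH = {"Authorization": "Bearer <token>"}              # hypothetical credential

def sha256_of(path: pathlib.Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def backup_directory(root: str) -> None:
    for path in pathlib.Path(root).rglob("*"):
        if not path.is_file():
            continue
        digest = sha256_of(path)
        # Ask the server whether it already stores this content.
        head = requests.head(f"{BACKUP_ENDPOINT}/blobs/{digest}", headers=AUTH, timeout=30)
        if head.status_code == 200:
            continue  # deduplicated: identical content already backed up
        with path.open("rb") as fh:
            requests.put(
                f"{BACKUP_ENDPOINT}/blobs/{digest}",
                data=fh,
                headers={**AUTH, "X-Original-Path": str(path)},
                timeout=60,
            )

if __name__ == "__main__":
    backup_directory("/data/to/backup")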

This research report provides the size of the global Cloud Backup Market for the base year 2020 and a forecast for the period from 2020 to 2026.

Major manufacturers covered: Acronis International GmbH, Asigra Inc., Barracuda Networks, Inc., Carbonite, Inc., Code42 Software, Inc., Datto, Inc., Druva Software, Efolder, Inc., International Business Machines Corporation, Iron Mountain Incorporated, Microsoft Corporation, Veeam Software

Get a PDF sample copy (including TOC, tables, and figures) @ https://garnerinsights.com/Global-Cloud-Backup-Market-Size-Status-and-Forecast-2019-2025#request-sample

Types of Cloud Backup covered: Public Cloud, Private Cloud, Hybrid Cloud

Applications of Cloud Backup covered: Small and Medium-Sized Enterprises, Large Enterprises

The global Cloud Backup Market is studied on the basis of pricing, demand and supply dynamics, total volume produced, and the revenue generated by the products. Manufacturing is studied with regard to various contributors such as manufacturing plant distribution, industry production, capacity, and research and development. The report also provides market evaluations including SWOT analysis, investments, return analysis, and growth trend analysis.

To get this report at a discounted rate, click here: https://garnerinsights.com/Global-Cloud-Backup-Market-Size-Status-and-Forecast-2019-2025#discount

Regional Analysis for the Cloud Backup Market

North America (the United States, Canada, and Mexico)
Europe (Germany, France, UK, Russia, and Italy)
Asia-Pacific (China, Japan, Korea, India, and Southeast Asia)
South America (Brazil, Argentina, Colombia, etc.)
The Middle East and Africa (Saudi Arabia, UAE, Egypt, Nigeria, and South Africa)

Get Full Report Description, TOC, Table of Figures, Chart, etc. @ https://garnerinsights.com/Global-Cloud-Backup-Market-Size-Status-and-Forecast-2019-2025


Get Full Report @ https://garnerinsights.com/Global-Cloud-Backup-Market-Size-Status-and-Forecast-2019-2025

In conclusion, the Cloud Backup Market report is a reliable source of market data that can help accelerate your business. The report covers the principal regions and economic scenarios, along with product pricing, profit, supply, capacity, production, demand, market growth rate, and forecasts. In addition, the report presents a new-project SWOT analysis, an investment feasibility study, and an investment return analysis.

Contact Us:
Mr. Kevin Thomas
+1 513 549 5911 (US)
+44 203 318 2846 (UK)
Email: [emailprotected]

Read more from the original source:
Cloud Backup Market is expected to expand at the highest CAGR by 2026 : Acronis International GmbH, Asigra Inc., Barracuda Networks, Inc, Carbonite -...
