Category Archives: Cloud Servers
Four Technologies That Are Changing The Way We Do Business – TechShout!
Most technology investors keenly monitor the progress of companies in these sectors. Governments also typically grant R&D tax credits to them because of the immense potential they have for the economy. Here are the top four technologies that are changing business as we know it:
Probably the most useful innovation on the internet, cloud computing has almost wholly revolutionised how businesses run. Cloud computing allows individuals and organisations to create, share, process and store files and programs on internet servers.
Cloud computing services are broadly divided into three categories: Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS) and Software-as-a-Service (SaaS).
There is a massive drop in overhead costs for corporations that embrace cloud computing. However, that's not all; businesses that have taken advantage of cloud computing services have reported productivity increases of almost 40%. They've also reported greater flexibility and collaboration. Companies can outsource work and monitor it effectively, largely thanks to cloud computing.
IoT, or the Internet of Things, is a term used to describe a complex system of interrelations between multiple devices and users that communicate with one another without human intervention. The advent of cloud computing is what has made IoT possible.
With IoT, large businesses can now monitor their operations in real-time while ensuring efficiency.
Another change IoT has made to businesses is making them data-centric. Data collected from multiple sources, including user interactions, is now being used to inform corporate decisions. The long-term benefit is that organisations can more accurately predict what customers want and produce it, a win-win for both parties.
One of the great innovations of modern-day technology is teaching machines to learn. Closely intertwined with IoT, AI is a field that has taken on a life of its own. Artificial Intelligence allows machines to perform complex functions without the help of humans.
AI, through machine learning, allows organisations to go beyond the automation of processes. Algorithms are designed to pick up trends from IoT or other data sources and analyse them to make decisions.
One of the more common examples of how AI is influencing businesses is the chatbots on e-commerce websites. These bots are trained to respond to consumer inquiries and help shoppers find the products they are looking for. As with most AI applications, they are cheaper and more efficient than hiring human assistants.
Although blockchain is almost synonymous with cryptocurrencies, they are two completely different notions. Blockchain is a secure digital ledger used to keep track of things, including financial transactions. The distributed nature of its storage also means that records can be appended at an incredible rate.
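To make the "secure digital ledger" idea concrete, here is a minimal, illustrative Python sketch (not any production blockchain; the record contents and field names are invented for the example) showing how hash-linking blocks makes past records tamper-evident:

```python
import hashlib
import json
import time

def make_block(records, prev_hash):
    """Bundle records with a timestamp and the previous block's hash."""
    block = {
        "timestamp": time.time(),
        "records": records,
        "prev_hash": prev_hash,
    }
    # The block's identity is a hash over its entire contents, so any
    # change to a past record changes every hash that follows it.
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()
    ).hexdigest()
    return block

def verify(chain):
    """Recompute each hash and check the links between blocks."""
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if block["hash"] != expected:
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = [make_block(["genesis"], "0" * 64)]
chain.append(make_block(["alice pays bob 5"], chain[-1]["hash"]))
print(verify(chain))  # True
chain[0]["records"] = ["tampered"]
print(verify(chain))  # False -- the ledger detects the edit
```

Real blockchains add distribution and consensus on top of this linking, which is what lets many parties append records quickly without a central gatekeeper.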
Unsurprisingly, one of the most significant applications of blockchain technology is in finance. The differing currencies and regulations across borders make sending and receiving money complex for multinational organisations. With blockchain technology, payments can now be made instantly across continents through instruments like bitcoin.
Other applications of blockchain technology in recent times include supply chain technology and food safety. According to some estimations, at least 30% of manufacturing organisations with over $5bn in assets will have implemented blockchain technology by 2023.
So, while the hype around cryptos might have reduced to a whimper, blockchain in itself is growing by leaps and bounds.
Business growth is one of the drivers of technology development. In the race to make services and operations more efficient, technology will always be at the fore. At the end of the day, everyone is better off, whether they are producers or consumers.
Follow this link:
Four Technologies That Are Changing The Way We Do Business - TechShout!
Meet the power players behind Microsoft’s Azure cloud – Business Insider
Microsoft's cloud business is on the rise and the Redmond, Washington-based company has assembled a team of high-powered executives to upend its rivals.
Microsoft Azure has long been considered the No. 2 cloud provider versus dominant Amazon Web Services, but that perception has started to change.
"Azure is the primary growth engine for the company and positions them to have a leading marketshare in a potentially multitrillion-dollar opportunity in the future of computing," RBC Capital analyst Alex Zukin said.
To be sure, Microsoft still has a lot of catching up to do. Gartner in a report released over the summer pegged the 2018 market share for AWS at 47.8% and that of Microsoft Azure at 15.5%. But Microsoft has scored some significant wins and recent moves indicate the company is prioritizing the cloud above all else.
Perhaps most significant is the company's recent win of a $10 billion cloud computing contract with the Pentagon. AWS was considered the frontrunner, but experts say the win puts Microsoft in the same league as AWS.
"It signals to the market Microsoft is no longer a runner-up and can be viewed as a leader in the category where they can surpass AWS in certain areas," Zukin said.
To lead that charge, Microsoft has assembled a team of high-powered executives to guide its all-important cloud strategy. We spoke with insiders and experts who said that these were the 19 power players to watch within Microsoft's cloud business.
Meet Microsoft's ace cloud team:
Continued here:
Meet the power players behind Microsoft's Azure cloud - Business Insider
Hyperscale Data Center Market – Global Outlook and Forecast 2019-2024: Adoption of Cloud-based Services and Big Data Driving Hyperscale Data…
DUBLIN--(BUSINESS WIRE)--The "Hyperscale Data Center Market - Global Outlook and Forecast 2019-2024" report has been added to ResearchAndMarkets.com's offering.
The hyperscale data center market is expected to grow at a CAGR of over 9% during the period 2018-2024.
Hyperscale construction in terms of area and power will be high in the US, the UK, Germany, China & Hong Kong, Ireland, Brazil, Canada, the Netherlands, Singapore, Japan, South Korea, Australia, India, France, Denmark, Sweden, and Norway. The adoption of cloud-based services and big data is driving the hyperscale data center surge.
Tax incentives offered by regulatory agencies worldwide are likely to play an important role in the development of hyperscale facilities construction. A majority of development over the past few years has been concentrated in those regions that offer tax incentives. These tax breaks yield high savings for service operators. For instance, Google negotiated a 100%, 15-year sales tax exemption for a $600-million data center in New Albany, Ohio, in 2018.
Similarly, Facebook is likely to receive about $150 million through property tax incentives for building a facility in Utah. Tax incentives are being offered to grow the digital economy through multi-million-dollar investments. Many developing countries are looking to lure investors by providing incentives and land for development during the forecast period. Tax incentives, which are a major criterion in the site selection process, help to generate business opportunities for local sub-contractors. Hence, the availability of attractive tax incentives is expected to drive the hyperscale data center market.
Hyperscale Data Center Market: Segmentation
This research report includes detailed segmentation by IT infrastructure, electrical infrastructure, mechanical infrastructure, general construction, and geography. The demand for servers in the cloud environment is likely to grow during the forecast period as service providers expand their presence globally. The server market is expected to witness demand for servers with multicore processors. Storage capacity will grow as the average number of virtual machines per physical server continues to grow. The US is likely to witness growth in the server segment, and shipments are projected to increase during the forecast period.
The use of Lithium-ion UPS systems will continue to grow among hyperscale data center operators during the forecast period. Vendors are continually innovating their UPS solutions to increase efficiency and reduce cost. Diesel generators are likely to witness growth in the US. However, gas and bi-fuel generators are expected to witness steady growth due to the increased awareness of carbon emissions, especially in the US.
The adoption of switchgears will grow because of the increased construction of large and mega facilities that require medium and high-voltage switchgears. The adoption of basic rack PDUs is expected to decline with the higher adoption of metered, monitored, switched, and metered-by outlet PDUs.
The use of indirect evaporative coolers and air/water-side economizers is likely to continue, as most hyperscale facilities are being developed in countries that experience a cold climate for more than 4,000 hours per year. The facilities in Southeast Asia, China, India, the Middle East, Africa, and Latin America are likely to prefer chilled water systems.
A majority of the existing development in the US is being carried out in locations that offer at least 4,000 hours of free cooling. Facilities established in the southwestern US incorporate energy-efficient water-based cooling systems, with on-site water treatment plants saving a minimum of 30% of the water consumed. In the US, a majority of states provide tax incentives for data centers, including job-based tax incentives.
The growing hyperscale construction will be a major boost to contractors and sub-contractors operating in the market. Most projects established in MEA are of greenfield development type.
Key Vendor Analysis
Competition among cloud service providers to establish multiple cloud regions and increase the customer base for their service offerings is driving investment in hyperscale facility construction. The market for infrastructure suppliers is becoming more competitive year over year. Infrastructure suppliers are continuously innovating their product portfolios to increase their revenue shares. Competition will be high among providers supplying mission-critical and high-performance infrastructure solutions.
Schneider Electric, Eaton, Vertiv, and ABB are leading the electrical infrastructure market. Cummins, Caterpillar, and MTU On Site Energy have a strong presence in the generator market.
Market Dynamics
Market Growth Enablers
Market Growth Restraints
Market Opportunities & Trends
Key Company Profiles
Other Prominent Vendors
For more information about this report visit https://www.researchandmarkets.com/r/kwkkeg
Inside Intel’s billion-dollar transformation in the age of AI – Fast Company
As I walked up to the Intel visitor center in Santa Clara, California, a big group of South Korean teenagers ran from their bus and excitedly gathered round the big Intel sign for selfies and group shots. This is the kind of fandom you might expect to see at Apple or Google. But Intel?
Then I remembered that Intel is the company that put the silicon in Silicon Valley. Its processors and other technologies provided much of the under-the-hood power for the personal computer revolution. At 51 years old, Intel still has some star power.
But it's also going through a period of profound change that's reshaping the culture of the company and the way its products get made. As ever, Intel's main products are the microprocessors that serve as the brains of desktop PCs, laptops and tablets, and servers. They're wafers of silicon coated with millions or billions of transistors, each of which has an on and off state corresponding to the binary ones-and-zeros language of computers.
Since its earliest days, Intel has achieved a steady increase in processor power by jamming ever more transistors onto that piece of silicon. The pace was so steady that Intel cofounder Gordon Moore could make his famous 1965 prediction that the number of transistors on a chip would double every two years. Moore's Law held true for many years, but Intel's transistor-cramming approach has reached a point of diminishing returns, analysts say.
Meanwhile, the demand for more processing power has never been greater. The rise of artificial intelligence, which analysts say is now being widely used in core business processes in almost every industry, is pushing the demand for computing power into overdrive. Neural networks require massive amounts of computing power, and they perform best when teams of computers share the work. And their applications go far beyond the PCs and servers that made Intel a behemoth in the first place.
"Whether it's smart cities, whether it's a retail store, whether it's a factory, whether it's a car, whether it's a home, all of these things kind of look like computers today," says Bob Swan, Intel's CEO since January 2019. The tectonic shift of AI and Intel's ambitions to expand have forced the company to change the designs and features of some of its chips. The company is building software, designing chips that can work together, and even looking outside its walls to acquire companies that can bring it up to speed in a changed world of computing. More transformation is sure to come as the industry relies on Intel to power the AI that will increasingly find its way into our business and personal lives.
Today, it's mainly big tech companies with data centers that are using AI for major parts of their business. Some of them, such as Amazon, Microsoft, and Google, also offer AI as a cloud service to enterprise customers. But AI is starting to spread to other large enterprises, which will train models to analyze and act upon huge bodies of input data.
This shift will require an incredible amount of computation. And AI models' hunger for computing power is where the AI renaissance runs head-on into Moore's Law.
For decades, Moore's 1965 prediction has held a lot of meaning for the whole tech industry. Both hardware makers and software developers have traditionally linked their product road maps to the amount of power they can expect to get from next year's CPUs. Moore's Law "kept everyone dancing to the same music," as one analyst puts it.
Moore's Law also implied a promise that Intel would continue figuring out, year after year, how to deliver the expected gain in computing power in its chips. For most of its history, Intel fulfilled that promise by finding ways to wedge more transistors onto pieces of silicon, but it's gotten harder.
"We're running out of gas in the chip factories," says Moor Insights & Strategy principal analyst Patrick Moorhead. "It's getting harder and harder to make these massive chips, and make them economically."
It's still possible to squeeze larger numbers of transistors into silicon wafers, but it's becoming more expensive and taking longer to do so, and the gains are certainly not enough to keep up with the requirements of the neural networks that computer scientists are building. For instance, the biggest known neural network in 2016 had 100 million parameters, while the largest so far in 2019 has 1.5 billion parameters, an order of magnitude larger in just a few years.
That's a very different growth curve than in the previous computing paradigm, and it's putting pressure on Intel to find ways to increase the processing power of its chips.
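A quick back-of-the-envelope check in Python, using only the figures quoted above, shows how far apart the two curves are:

```python
# Rough comparison of the two growth rates mentioned above (illustrative
# figures from the article: 100M parameters in 2016, 1.5B in 2019).
model_growth = (1.5e9 / 100e6) ** (1 / 3)   # per-year growth, 2016-2019
moore_growth = 2 ** (1 / 2)                  # Moore's Law: 2x every 2 years

print(f"Model size growth:  {model_growth:.2f}x per year")   # ~2.47x
print(f"Moore's Law growth: {moore_growth:.2f}x per year")   # ~1.41x
```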
However, Swan sees AI as more of an opportunity than a challenge. He acknowledges that data centers may be the primary Intel market to benefit, since they will need powerful chips for AI training and inference, but he believes that Intel has a growing opportunity to also sell AI-compatible chips for smaller devices, such as smart cameras and sensors. For these devices, it's the small size and power efficiency, not the raw power of the chip, that makes all the difference.
"There's three kinds of technologies that we think will continue to accelerate: One is AI, one is 5G, and then one is autonomous systems, things that move around that look like computers," says Swan, Intel's former CFO who took over as CEO when Brian Krzanich left after allegations of an extramarital affair with a staffer in 2018.
We're sitting in a large, nondescript conference room at Intel's headquarters. On the whiteboard at the front of the room, Swan draws out the two sides of Intel's businesses. On the left side is the personal computer chip business, from which Intel gets about half of its revenue now. On the right is its data center business, which includes the emerging Internet of Things, autonomous car, and network equipment markets.
"We expand [into] this world where more and more data [is] required, which needs more processing, more storage, more retrieval, faster movement of data, analytics, and intelligence to make the data more relevant," Swan says.
Rather than taking a 90-something-percent share of the $50 billion data center market, Swan is hoping to take a 25% share of the larger $300 billion market that includes connected devices such as smart cameras, futuristic self-driving cars, and network gear. It's a strategy that he says "starts with our core competencies, and requires us to invent in some ways, but also extends what we already do." It might also be a way for Intel to bounce back from its failure to become a major provider of technology to the smartphone business, where Qualcomm has long played an Intel-like role. (Most recently, Intel gave up on its major investment in the market for smartphone modems and sold off the remains to Apple.)
The Internet of Things market, which includes chips for robots, drones, cars, smart cameras, and other devices that move around, is expected to reach $2.1 trillion by 2023. And while Intels share of that market has been growing by double digits year-over-year, IoT still contributes only about 7% of Intels overall revenue today.
The data center business contributes 32%, the second-largest chunk behind the PC chip business, which contributes about half of total revenue. And it's the data center that AI is impacting first and most. That's why Intel has been altering the design of its most powerful CPU, the Xeon, to accommodate machine learning tasks. In April, it added a feature called DL Boost to its second-generation Xeon CPUs, which offers greater performance for neural nets with a negligible loss of accuracy. It's also the reason that the company will next year begin selling two new chips that specialize in running large machine learning models.
By 2016, it had become clear that neural networks were going to be used for all kinds of applications, from product recommendation algorithms to natural language bots for customer service.
Like other chipmakers, Intel knew it would have to offer its large customers a chip whose hardware and software were purpose-built for AI, which could be used to train AI models and then draw inferences from huge pools of data.
At the time, Intel lacked a chip that could do the former. The narrative in the industry was that Intel's Xeon CPUs were very good at analyzing data, but that the GPUs made by Intel's rival in AI, Nvidia, were better for training, an important perception that was impacting Intel's business.
So in 2016, Intel went shopping and spent $400 million on a buzzy young company called Nervana that had already been working on a ripping-fast chip architecture designed for training AI.
It's been three years since the Nervana acquisition, and it's looking like it was a smart move by Intel. At a November event in San Francisco, Intel announced two new Nervana Neural Network Processors, one designed for running neural network models that infer meaning from large bodies of data, the other for training the networks. Intel worked with Facebook and Baidu, two of its larger customers, to help validate the chip design.
Nervana wasn't the only acquisition Intel made that year. In 2016, Intel also bought another company, called Movidius, that had been building tiny chips that could run computer vision models inside things such as drones or smart cameras. Intel's sales of the Movidius chips aren't huge, but they've been growing quickly, and they address the larger IoT market Swan's excited about. At its San Francisco event, Intel also announced a new Movidius chip, which will be ready in the first half of 2020.
Intel Nervana NNP-I for inference [Photo: courtesy of Intel Corporation]

Many of Intel's customers do at least some of their AI computation on regular Intel CPUs inside servers in data centers. But it's not so easy to link those CPUs together so they can tag-team the work that a neural network model needs. The Nervana chips, on the other hand, each contain multiple connections so that they easily work in tandem with other processors in the data center, Nervana CEO and founder Naveen Rao tells me.
"Now I can start taking my neural network and I can break it apart across multiple systems that are working together," Rao says. "So we can have a whole rack [of servers], or four racks, working on one problem together."
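Rao is describing model parallelism. As a generic illustration (this has nothing to do with Nervana's actual software stack, and the layer sizes are invented), splitting a network across two devices in PyTorch looks roughly like this, with two CUDA devices standing in for accelerators linked by a chip-to-chip fabric:

```python
import torch
import torch.nn as nn

# Hypothetical two-stage split of a model across two devices; requires a
# machine with at least two CUDA devices to run as written.
stage1 = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU()).to("cuda:0")
stage2 = nn.Sequential(nn.Linear(4096, 10)).to("cuda:1")

x = torch.randn(32, 1024, device="cuda:0")
h = stage1(x)                 # first half runs on device 0
y = stage2(h.to("cuda:1"))    # activations hop to device 1 for the rest
print(y.shape)                # torch.Size([32, 10])
```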
Naveen Rao, Intel corporate vice president and general manager of the Intel Artificial Intelligence Products Group, displays an Intel Neural Network Processor for inference during his keynote address Tuesday, November 12, 2019, at Intel's AI Summit in San Francisco. [Photo: Walden Kirsch/Intel Corporation]

In 2019, Intel expects to see $3.5 billion in revenue from its AI-related products. Right now, only a handful of Intel customers are using the new Nervana chips, but they're likely to reach a far wider user base next year.
The Nervana chips represent the evolution of a long-held Intel belief that a single piece of silicon, a CPU, could handle whatever computing tasks a PC or server needed to do. This widespread belief began to change with the gaming revolution, which demanded the extreme computational muscle needed for displaying complex graphics on a screen. It made sense to offload that work to a graphics processing unit, a GPU, so that the CPU wouldn't get bogged down with it. Intel began integrating its own GPUs with its CPUs years ago, and next year it will release a free-standing GPU for the first time, Swan tells me.
That same thinking also applies to AI models. A certain number of AI processes can be handled by the CPU within a data center server, but as the work scales up, it's more efficient to offload it to another specialized chip. Intel has been investing in designing new chips that bundle together a CPU and a number of specialized accelerator chips in a way that matches the power and workload needs of the customer.
"When you're building a chip, you want to put a system together that solves a problem, and that system [often] requires more than a CPU," Swan says.
In addition, Intel now relies far more on software to drive its processors to higher performance and better power efficiency. This has shifted the balance of power within the organization. According to one analyst, software development at Intel is now an equal citizen with hardware development.
In some cases, Intel no longer manufactures all its chips on its own, an epoch-shifting departure from the company's historical practice. Today, if chip designers call for a chip that some other company might fabricate better or more efficiently than Intel, it's acceptable for the job to be outsourced. The new Nervana chip for training, for example, is manufactured by the semiconductor fabricator TSMC.
Intel has outsourced some chip manufacturing for logistical and economic reasons. Because of capacity limitations in its most advanced chip fabrication processes, many of its customers have been left waiting for their orders of new Intel Xeon CPUs. So Intel outsourced the production of some of its other chips to other manufacturers. Intel sent a letter to its customers earlier this year to apologize for the delay and lay out its plans for catching up.
All these changes are challenging long-held beliefs within Intel, shifting the company's priorities, and rebalancing old power structures.
In the midst of this transformation, Intel's business is looking pretty good. Its traditional business of selling chips for personal computers is down 25% from five years ago, but sales of Xeon processors to data centers are "rocking and rolling," as analyst Mike Feibus says.
Some of Intel's customers are already using the Xeon processors to run AI models. If those workloads grow, they may consider adding on the new Nervana specialized chips. Intel expects the first customers for these chips to be hyperscalers, or large companies that operate massive data centers: the Googles, Microsofts, and Facebooks of the world.
It's an old story that Intel missed out on the mobile revolution by ceding the smartphone processor market to Qualcomm. But the fact is that mobile devices have become vending machines for services delivered to your phone via the cloud's data centers. So when you stream that video to your tablet, it's likely an Intel chip is helping serve it to you. The coming of 5G might make it possible to run real-time services such as gaming from the cloud. A future pair of smart glasses might be able to instantly identify objects using a lightning-fast connection to an algorithm that's running in a data center.
All of that adds up to a very different era than when the technological world revolved around PCs with Intel inside. But as AI models grow ever more complex and versatile, Intel has a shot at being the company best equipped to power them, just as it has powered our computers for almost a half-century.
See original here:
Inside Intel's billion-dollar transformation in the age of AI - Fast Company
Amazon’s cloud business bombards the market with dozens of new features as it looks to preserve its lead – CNBC
Amazon's cloud business now has over 175 different services for customers to use. That's up from more than 100 services two years ago and 140 last year.
Don't worry, no one will ask you to name them all. But the fact is, Amazon Web Services, a 13-year-old division of the e-commerce company, is coming out with new technologies for its customers really fast, making the competition look like slackers.
It's important for Amazon Web Services to show off new ideas, as it's Amazon's main source of operating income. Amazon is ahead of all other companies in the growing cloud infrastructure market, where software developers can pay for however much computing and storage they use, rather than rely on their companies' existing facilities. It helps that Amazon was earlier to market than other big competitors like Microsoft and Google, but it's maintained that position by continuously adding new features.
In 2018 Amazon controlled about 47.8% of the market, according to technology industry research firm Gartner. That's down from 49.4% in 2017. Amazon would like to see its share widen, not narrow.
At the annual AWS re:Invent conference in Las Vegas on Tuesday, Amazon announced new chips to run customers' applications in its data centers, plus new services and feature enhancements for developers to check out. Although AWS boss Andy Jassy snuck in a few potshots at competitors Google, IBM, Microsoft and Oracle, he spent more of his stage time touting existing and new capabilities before an audience of 65,000. It was about tools; in other words, it was about adding to Amazon's technological lead. To underline the point, he called to the stage Goldman Sachs CEO David Solomon and Cerner CEO Brent Shafer, who talked up their companies' use of AWS.
"There were AWS 28 launches announced today, 23 of which were made during Andy's keynote," an AWS spokesperson told CNBC in an email on Tuesday.
Highlights included:
Graviton2. AWS is launching more powerful processors it developed in house based on the Arm architecture to power computing resources, representing an alternative to existing cloud servers containing Intel and AMD chips. The chips promise to provide lower cost for the same level of performance in tasks like handling user requests in applications, analyzing user data or monitoring performance.
Wavelength. The new Wavelength service, built through collaborations with Verizon and other service providers, will enable faster cloud computing and storage services to keep applications moving quickly as 5G arrives.
Fraud Detector. A new service for fraud detection will help companies suss out fake sign-ups and transactions from stolen credit cards. It draws on knowledge Amazon has built up over the years about selling products online.
Contact Lens. New analytics technology for its Connect contact center service can recognize people's emotions on phone calls coming in from customers, so representatives can provide better support.
Kendra. Another service, Kendra, will be able to search for information stored in various enterprise content repositories, including Box and Microsoft's SharePoint.
Managed Apache Cassandra Service. Amazon revealed a new service for using the open-source database Cassandra that will compete with products from a start-up called DataStax. The company has done this before with companies like Elastic and MongoDB.
CodeGuru. A new offering that programmers can tap when they want a computer to review their source code so that it runs efficiently. The service will work with code storage service GitHub, which is owned by cloud rival Microsoft.
SageMaker. People who come up with artificial-intelligence models can now use a web application from AWS called SageMaker Studio IDE that's designed just for that work. In addition, a new tool called SageMaker Autopilot can help customers train AI models; all one has to do is feed it some data.
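For illustration, here is a hedged sketch of what kicking off an Autopilot job might look like through the boto3 SageMaker client. The bucket paths, role ARN, job name, and target column are all placeholders, and the exact parameters should be checked against the AWS documentation:

```python
import boto3

sm = boto3.client("sagemaker")

# Hypothetical S3 locations and IAM role -- replace with your own.
sm.create_auto_ml_job(
    AutoMLJobName="demo-autopilot-job",
    InputDataConfig=[{
        "DataSource": {"S3DataSource": {
            "S3DataType": "S3Prefix",
            "S3Uri": "s3://my-bucket/training-data/",
        }},
        "TargetAttributeName": "label",  # the column Autopilot should predict
    }],
    OutputDataConfig={"S3OutputPath": "s3://my-bucket/autopilot-output/"},
    RoleArn="arn:aws:iam::123456789012:role/MySageMakerRole",
)
```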
There's more, but the point is that AWS just rolled out a whole bunch of bells and whistles for companies big and small. If keeping track of all the new stuff feels overwhelming, that's kind of the point. Amazon wants people to feel like it's coming out with so much that no one can keep up.
The only thing Amazon did not announce: price cuts.
Follow @CNBCtech on Twitter for the latest tech industry news.
What is Infrastructure-as-a-Service? Everything you need to know about IaaS – TechRadar
Not every company has a vast IT operation. This might involve a data center with business servers, network switches and equipment, storage -- and the related IT service management staff needed to run it. Yet, with the emergence of cloud computing for storage and web-based software, the concept of outsourcing the computing power itself to the cloud became viable.
Known as Infrastructure-as-a-Service (or IaaS), the idea is to take most of the complexity of IT involving servers, storage and networking and move it out to the cloud, where it is managed by a third party. In essence, IaaS gives you access to a data center in the cloud, although there are some important things to know about how this actually works.
Before diving into the key components of Infrastructure-as-a-Service, it's important to understand how the concept developed. Cloud computing became more viable once Internet speeds increased, host providers started addressing security concerns, and businesses started relying on web-based apps (known as Software-as-a-Service, or SaaS). A next evolutionary step, called Platform-as-a-Service (or PaaS), involves the hardware and operating systems needed to run corporate apps or customer-facing apps; companies can focus on the applications and not the hardware (patches, security, updates, and maintenance).
Infrastructure-as-a-Service expands on both of these models. Typically, this means the entire IT operation is cloud-based, including the software, servers, networks, and storage. Let's cover each of those, and also explain what is not part of Infrastructure-as-a-Service.
Knowing the key components of Infrastructure-as-a-Service is important, especially since there are still aspects that are managed by your company and not the cloud provider. As mentioned, IaaS typically involves three key components: the servers, network, and storage.
As with most web-based apps, Infrastructure-as-a-Service almost always involves hosted software. This can be the business apps used to run your company, the email clients, the office productivity apps, and just about anything you can think of to run your business. However, it might not include the in-house software you develop and host.
For servers, the cloud provider is tasked with all of the maintenance, updates, endpoint security, and management related to keeping the cloud running at optimal levels. You can trust that the infrastructure you run on the remote cloud servers is maintained properly. Companies with on-premise data centers know that it often requires a full staff of operators to install servers, keep them updated, and fix any problems.
Storage is another key component and the classic (original) definition of cloud computing. Most companies first realized the benefits of the cloud when they started using web-based apps and relying on cloud storage, meaning elastic file storage that can expand and contract to meet your demands and company growth strategies. To end-users in your company, cloud storage appears to be infinite and always expanding.
Infrastructure-as-a-Service also involves network monitoring and management, which can likewise expand and change as needed for your company. This can involve all of the network security features you might need, network management and throttling, and maintenance.
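To see what "a data center in the cloud" means in practice, here is a minimal sketch of provisioning a server through an API call, using AWS's boto3 library purely as a familiar example; any IaaS provider exposes an equivalent interface. The AMI ID and key pair name are placeholders:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Ask the provider for a server: no rack, no cabling, just an API call.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical machine image
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    KeyName="my-keypair",             # hypothetical SSH key pair
)
instance_id = response["Instances"][0]["InstanceId"]
print(f"Launched {instance_id}")
```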
It's important to know that Infrastructure-as-a-Service does not remove all IT work from the equation. What is often left for the company to manage is any custom, in-house software development, plus the business computers, printers, and mobile devices such as smartphones that attach to the cloud and benefit from Infrastructure-as-a-Service. Often, there is a middleware component as well, especially if you also use an internal data center and need to make connections to the Infrastructure-as-a-Service provider or between custom apps.
As you can imagine, the key benefit here is reduced complexity. The cloud hosting provider absorbs most of the complexity of managing and updating servers, maintaining network topologies, and making sure the storage is always available and archived. As a company moves from SaaS only, to PaaS used with custom apps, up to IaaS as a more complete solution, the benefits also increase in terms of dealing with less and less complexity.
Another benefit has to do with security. Many companies are dealing with security issues on a continual basis -- security on servers, networks, within storage archives, and even with end-users. With Infrastructure-as-a-Service, the security issues move from the data center out to the end-user, and IT staff will typically shift to a support role for end-users where they can assist with problems but also educate employees about proper security protocols.
Another shift is that IT employees become partners with the host provider, and their role tends to be more about on-premise support. This often frees staff to focus on strategy, partnering with the provider to orchestrate cloud services, and developing long-term plans for IT operations, without the typical micro-management duties involved with servers, networks, and storage.
In the end, Infrastructure-as-a-Service is a way to outsource complexity and refocus on internal needs, employee support, and in-house development and infrastructure duties.
Read the original post:
What is Infrastructure-as-a-Service? Everything you need to know about IaaS - TechRadar
Report: Growing HCI Space Boosted by Cloud, AI – Virtualization Review
The hyperconverged infrastructure (HCI) space is growing fast as it adjusts to new technologies and factors such as artificial intelligence and hybrid/multicloud implementations, according to a new report from research firm Gartner Inc.
The company's new "Magic Quadrant for Hyperconverged Infrastructure" report finds Nutanix and VMware at the top of a pack of "Leaders" that also includes Dell EMC, Cisco and HPE. Gartner describes HCI as "a category of scale-out software-integrated infrastructure that applies a modular approach to compute, network and storage on standard hardware, leveraging distributed, horizontal building blocks under unified management." It basically substitutes old-world, proprietary, hardware-based, purpose-built systems with software-centric, integrated systems running on commercial off-the-shelf servers, with the focus on virtualized networking, compute and storage.
The report says better HCI scalability and management functionality will result in 70 percent of enterprises running some form of HCI (that is, appliance, software, cloud-tethered) by 2023, up from less than 30 percent this year.
While the movement is growing, it's also evolving, Gartner said, with some implementations leveraging AI to automatically improve performance and prevent failures, and others increasingly supporting different kinds of cloud implementations.
The cloud, the report indicates, can almost be thought of as a double-edged sword.
"For most HCI vendors, the public cloud is an extension of the strategy, but also could be a strategic threat if IT leaders buy public cloud services in lieu of spending on their own infrastructure," the report says. Furthermore, different kinds of cloud applications are being considered. "At the same time, HCI vendors have expanded their strategy to embrace hybrid/multicloud deployments, as either backup targets or disaster recovery options, or as an alternative for on-premises infrastructure for unpredictable or cyclical resource requirements."
Overall, Gartner said of the market, "Hyperconverged infrastructure solutions are making substantial inroads into a broader set of use cases and deployment options, but limitations exist. Infrastructure and operations (I&O) leaders should view HCI solutions as tools in the toolbox, rather than as panaceas for all IT infrastructure problems."
As noted, many of those tools come from vendors in the "Leaders" quadrant: Nutanix, VMware, Dell EMC, Cisco and HPE. Microsoft was the only vendor in the "Visionaries" section, while Huawei and Pivot3 were named "Challengers" and the final camp of "Niche Players" consisted of Scale Computing, Huayun Data Group, Red Hat, Sangfor Technologies, StorMagic, DataCore and StarWind.
To be eligible for the study, HCI vendors had to meet a set of functional criteria laid out by Gartner.
Gartner also revisited the cloud factor in providing context for the report.
"One of the attractions of integrated systems and HCI is the potential to create a cloudlike provisioning model while maintaining physical control of IT assets and data on-premises in the data center, remote site or branch office," the report said. "Over the next few years, cloud deployment models will become increasingly important to meet both short-term scale-up/scale-down requirements and backup and disaster-recovery requirements. An important question for users is whether HCI is a stepping stone to the cloud or a 'foreseeable future' resting place for applications; and ultimately, whether it is a good alternative to the public cloud from performance, manageability at scale and cost perspectives."
A copy of the report licensed for distribution is available from VMware here.
About the Author
David Ramel is an editor and writer for Converge360.
Read the original here:
Report: Growing HCI Space Boosted by Cloud, AI - Virtualization Review
How AWS Plans to Speed Up the Cloud – Toolbox
Amazon Web Services is aiming to make the most of 5G telecoms by embedding cloud resources at the network edge, closer to users' devices. It will also help the global leader in cloud computing platforms to distance itself from competitors.
AWS Wavelength, a new service announced at the Amazon subsidiary's re:Invent conference in Las Vegas, features tools for storage, analytics, compute and databases that speed performance by lowering latencies.
Such latencies, or network delays, occur when data and instructions traverse the connection points that separate devices like smartphones and sensors from data centers.
It's among a host of new products, services and features trotted out at the conference. They include Outposts, which lets enterprise users run AWS on premises, and Local Zones, which delivers select AWS services to specific geographic areas.
With them, customers can exploit 5G's wider spectrum of frequencies for applications that range from streaming video to self-guided machines and smart cities. They can also reduce power consumption and bandwidth use.
Installing servers in the data centers of network operators cuts transfer times by taking links out of those chains. As a result, AWS claims latencies can be lowered from hundreds of milliseconds to the single digits.
That's vital for emerging technologies like self-driving vehicles, which must parse and model sensor data to make decisions in real time. According to AWS, Wavelength can improve speeds by 20x over existing 4G networks.
The Amazon subsidiary is partnering with network operators in major markets worldwide. They include Verizon, which is testing its mobile edge computing service in Chicago and intends to expand its 5G Ultra Wideband to 30 cities by year's end.
British mobile operator Vodafone, South Korea's SK Telecom and Japan's KDDI have also signed on and will begin offering Wavelength next year.
AWS says Wavelength will serve 69 zones in 22 regions, enabling worldwide coverage as the new standard gets rolled out.
Mapbox and the Finnish augmented-reality specialist Varjo Technologies are putting Wavelength through trials. Varjo is using the service's improved speeds to blend virtual reality with real-time image rendering to improve resolution for immersive computing applications. The apps can run without the need for dedicated local servers.
Mapbox's 1.7-million-member developer community can benefit from the artificial intelligence that guides users around obstacles like traffic jams and road construction as they drive to their destinations. Company execs say Wavelength permits automatic updating based on data from millions of sensors, allowing customers to refresh maps in pages and applications faster with 5G.
The Outposts service lets companies bring in cloud-ready rack servers for low-latency apps, which AWS installs, maintains and updates. Local Zones lets users tap AWS infrastructure for faster processing of media and entertainment, advertising technologies, electronic design automation and machine learning.
With nearly half the global market for cloud services, AWS isn't content to rest on its considerable lead over second-place Microsoft Azure and stragglers Google, IBM and Oracle.
The measure of success will be whether lower latencies translate to faster transitions among corporate users seeking to offload more of their storage and computing infrastructure to outsource providers.
View original post here:
How AWS Plans to Speed Up the Cloud - Toolbox
20 VPS providers to shut down on Monday, giving customers two days to save their data – ZDNet
At least 20 web hosting providers have hastily notified customers today, Saturday, December 7, that they plan to shut down on Monday, giving their clients two days to download data from their accounts before servers are shut down and wiped clean.
The list of providers that notified customers about their impending shutdown includes:
ArkaHosting
Bigfoot Servers
DCNHost
HostBRZ
HostedSimply
Hosting73
KudoHosting
LQHosting
MegaZoneHosting
n3Servers
ServerStrong
SnowVPS
SparkVPS
StrongHosting
SuperbVPS
SupremeVPS
TCNHosting
UMaxHosting
WelcomeHosting
X4Servers
All the services listed above offer cheap low-end virtual private servers (VPSes). The providers appear to be using servers hosted in ColoCrossing data centers, a source told ZDNet.
Furthermore, all the websites feature a similar page structure, share large chunks of text, use the same CAPTCHA technology, and have notified customers using the same email template.
All clues point to the fact that all 20 websites are part of an affiliate scheme or a multi-brand business run by the same entity.
The initial reaction on bulletin boards dedicated to discussing web hosting topics was that someone might be sabotaging the company behind all these VPS providers by sending spoofed emails and hoping that customers jump ship.
This proved to be false. In the hours after they received the notifications, several users confirmed the email's legitimacy by analyzing email headers, confirmed the shutdown with the support staff at their respective VPS provider, and found a copy of the same message in their web hosting dashboards.
Since then, customers have shifted from surprise to anger. Some said inquiries about refunds remained unanswered.
Those who didn't lose too much money quickly realized they were set to work through the weekend, as they had to download all their data and find a new provider in order to avoid prolonged downtime on Monday, when the 20 providers are set to shut off servers.
Online, the phrase "exit scam" is now being mentioned in several places [1, 2]. Some theories claim the company behind all these VPS providers is running away with the money it made in Black Friday and Cyber Monday deals.
Paranoia is high, and for good reasons. As several users have pointed out, the VPS providers don't list physical addresses, don't list proper business registration information, and have no references to their ownership. Effectively, they look like ghost companies.
Requests for comment sent by ZDNet to some of the VPS providers remained unanswered before this article's publication.
A user impacted by the shutdown told ZDNet that the number of VPS providers shutting down might also be higher than 20, as not all customers might have shared the email notification online with others.
Another source pointed out that a search for a server IP address used by one of the soon-to-close VPS providers shows it has also been used by other companies providing cheap VPS hosting services -- companies whose websites share a similar structure and templates with some of the services shutting down.
(h/t Bad Packets)
Read the original:
20 VPS providers to shut down on Monday, giving customers two days to save their data - ZDNet
Finally: AWS Gives Servers A Real Shot In The Arm – The Next Platform
Finally, we get to test out how well or poorly a well-designed Arm server chip will do in the datacenter. And we don't have to wait for any of the traditional and upstart server chip makers to convince server partners to build and support machines, and the software partners to get on board and certify their stacks and apps to run on the chip. Amazon Web Services is an ecosystem unto itself, and it owns a lot of its own stack, so it can just mic-drop the Graviton2 processor on the stage at re:Invent in Las Vegas and dare Marvell, Ampere, and anyone else who cares to try to keep up.
And that is precisely what Andy Jassy, chief executive officer of AWS, did in announcing the second generation of server-class Arm processors that the cloud computing behemoth has created with its Annapurna Labs division, making it clear to Intel and AMD alike that it doesn't need X86 processors to run a lot of its workloads.
It's funny to think of X86 chips as being a legacy workload that costs a premium to make and therefore costs a premium to own or rent, but this is the situation that AWS is itself setting up on its infrastructure. It is still early days, obviously, but if even half of the major hyperscalers and cloud builders follow suit and build custom (or barely custom) versions of the Arm Holdings Neoverse chip designs, which are very good indeed and on a pretty aggressive cadence and performance roadmap, then a representative portion of annual X86 server chip shipments could move from X86 to Arm in a very short time, call it two to three years.
Microsoft has made no secret that it wants to have 50 percent of its server capacity on Arm processors, and has recently started deploying Marvell's Vulcan ThunderX2 processors in its Olympus rack servers internally. Microsoft is not talking about the extent of its deployments, but our guess is that it is on the order of tens of thousands of units, which ain't but a speck against the millions of machines in its server fleet. Google has dabbled in Power processors for relatively big iron and has done some deployments, but again we don't know the magnitude. Google was rumored to be the big backer that Qualcomm had for its Amberwing Centriq 2400 processor, and there are persistent whispers that it might be designing its own server and SmartNIC processors based on the Arm architecture, but given the licensing requirements, it seems just as likely that Google would go straight to the open source RISC-V instruction set and work to enhance that. Alibaba has dabbled with Arm servers for the past three years, and in July announced its own Xuantie 910 chip, based on RISC-V. Huawei Technology's HiSilicon chip design subsidiary launched its 64-core Kunpeng 920, which we presume is a variant of Arm's own Ares Neoverse N1 design and which we presume will be aimed at Chinese hyperscalers, cloud builders, telcos, and other service providers. We think that Amazon's Graviton2 probably looks a lot like the Kunpeng 920, in fact, and they probably borrow heavily from the Arm Ares design. As is the case with all Arm designs, they do not include memory controllers or PCI-Express controllers, which have to be designed or licensed separately from third parties.
This time last year, AWS rolled out the original Graviton Arm server chip, which had 16 vCPUs running at 2.3 GHz; it was implemented in 16 nanometer processes from Taiwan Semiconductor Manufacturing Corp. AWS never did confirm if the Graviton processor had sixteen cores with no SMT or eight cores with two-way SMT, but we think it does not have SMT and that it is just a stock Cosmos core, itself a tweaked Cortex-A72 or Cortex-A75 core, depending. The A1 instances on the EC2 compute facility at AWS could support up to 32 GB of main memory and had up to 10 Gb/sec of network bandwidth coming out of its server adapter and up to 3.5 Gb/sec of Elastic Block Storage (EBS) bandwidth. We suspect that this chip had only one memory controller with two channels, something akin to an Intel Xeon D aimed at hyperscalers. This was not an impressive Arm server chip at all, and more akin to a beefy chip that would make a very powerful SmartNIC.
"In the history of AWS, a big turning point for us was when we acquired Annapurna Labs, which was a group of very talented and expert chip designers and builders in Israel, and we decided that we were going to actually design and build chips to try to give you more capabilities," Jassy explained in his opening keynote at re:Invent. "While lots of companies, including ourselves, have been working with X86 processors for a long time, and Intel is a very close partner and we have increasingly started using AMD as well, if we wanted to push the price/performance envelope for you, it meant that we had to do some innovating ourselves. We took this to the Annapurna team and we set them loose on a couple chips that we wanted to build that we thought could provide meaningful differentiation in terms of performance and things that really mattered and we thought people were really doing it in a broad way. The first chip that they started working on was an Arm-based chip that we called our Graviton chip, which we announced last year as part of our A1 instances, which were the first Arm-based instances in the cloud, and these were designed to be used for scale-out workflows, so containerized microservices and web-tier apps and things like that."
The A1 instances have thousands of customers, but as we have pointed out in the past and just now, the original Graviton is not a great server chip in terms of its throughput, at least not compared to its peers. But AWS knew that, and so did the rest of us. This was a testing of the waters.
"We had three questions we were wondering about when we launched the A1 instances," Jassy continued. "The first was: Will anybody use them? The second was: Will the partner ecosystem step up, support the tool chain required for people to use Arm-based instances? And the third was: Can we innovate enough on this first version of this Graviton chip to allow you to use Arm-based chips for a much broader array of workloads? On the first two questions, we've been really pleasantly surprised. You can see this on the slide, the number of logos; loads of customers are using the A1 instances in a way that we hadn't anticipated, and the partner ecosystem has really stepped up and supported our base instances in a very significant way. The third question, whether we can really innovate enough on this chip, we just weren't sure about, and it's part of the reason why we started working a couple of years ago on the second version of Graviton, even while we were building the first version, because we just didn't know if we're going to be able to do it. It might take a while."
Chips tend to, and from what little we know, the Graviton2 is much more of a throughput engine and can also, it looks like, hold its own against modern X86 chips at the core level, too, where single thread performance is the gauge.
The Graviton2 chip has over 30 billion transistors and up to 64 vCPUs, and again, we think these are real cores, not a thread count from half the number of cores. We know that Graviton2 is a variant of the 7 nanometer Neoverse N1, which means it is a derivative of the Ares chip that Arm created to help get customers up to speed. The Ares Neoverse N1 has a top speed of 3.5 GHz, with most licensees driving the cores, which do not have simultaneous multithreading built in, at somewhere between 2.6 GHz and 3.1 GHz, according to Arm. The Ares core has 64 KB of L1 instruction cache and 64 KB of data cache, and the instruction caches across the cores are coherent on a chip. (This is cool.) The Ares design offers 512 KB or 1 MB of private L2 cache per core, and the core complex has a special high bandwidth, low latency pipe called Direct Connect that links the cores to a mesh interconnect tying all of the elements of the system on chip together. The way Arm put together Ares, it can scale up to 128 cores in a single chip or across chiplets; the 64-core variant had eight memory controllers, eight I/O controllers and 32 core pairs with their shared L2 caches.
We think Graviton2 probably looks a lot like the 64-core Ares reference design with some features added in. One of those features is memory encryption, which is done with 256-bit keys that are generated on the server at boot time and that never leave the server. (It is not clear what encryption technique is used, but it is probably AES-256.)
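AWS has not disclosed the mechanism, but assuming it is something like AES-256 in an authenticated mode, the concept of a locally generated 256-bit key protecting data looks roughly like this Python sketch using the `cryptography` package. This is purely illustrative, not Amazon's implementation:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Generate a fresh 256-bit key, analogous to a per-boot, per-server key
# that never leaves the machine.
key = AESGCM.generate_key(bit_length=256)
aesgcm = AESGCM(key)

nonce = os.urandom(12)                   # 96-bit nonce, unique per message
plaintext = b"page of server memory"
ciphertext = aesgcm.encrypt(nonce, plaintext, None)
assert aesgcm.decrypt(nonce, ciphertext, None) == plaintext
```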
Amazon says that the Graviton2 chip can deliver 7X the integer performance and 2X the floating point performance of the first Graviton chip. The first stat makes sense at the chip level, and the second stat must be at the core level or it makes no sense. (AWS was vague.) Going from 16 cores to 64 cores gives you 4X more integer performance, moving from 2.3 GHz to 3.2 GHz would give you another 39 percent, and going all the way up to 3.5 GHz would give you roughly 52 percent over the base clock, yielding 6X overall. The rest would be improvements in cache architecture, instructions per clock (IPC), and memory bandwidth across the hierarchy. Doubling up the width of floating point vectors is easy enough and normal enough. AWS says further that the Graviton2 chip has per-core caches that are twice as big and additional memory channels (it almost has to by definition) and that these features together allow a Graviton2 to access memory 5X faster than the original Graviton. Frankly, we are surprised that it is not more like 10X faster, particularly if Graviton2 has eight DDR4 memory channels running at 3.2 GHz, as we suspect it does.
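The arithmetic behind that estimate is easy to check:

```python
# Reconstructing the 7X integer-performance estimate from the text.
cores      = 64 / 16     # 4.00x from the core count
clock_low  = 3.2 / 2.3   # ~1.39x if Graviton2 runs at 3.2 GHz
clock_high = 3.5 / 2.3   # ~1.52x if it runs at 3.5 GHz

print(f"Cores x clock (3.2 GHz): {cores * clock_low:.2f}x")   # ~5.57x
print(f"Cores x clock (3.5 GHz): {cores * clock_high:.2f}x")  # ~6.09x
# The remaining gap to 7X would come from IPC, cache, and memory bandwidth.
```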
Here is where it gets interesting. AWS compared a vCPU running on the current M5 instances to a vCPU running on the forthcoming M6g instances based on the Graviton2 chip. AWS was not specific about what test was used on what instance configuration, so the following data could be a mixing of apples and applesauce and bowling balls. The M5 instances are based on Intel's 24-core Skylake Xeon SP-8175 Platinum running at 2.5 GHz; this chip is custom made for AWS, with four fewer cores and a slightly higher clock speed (400 MHz higher) than the stock Xeon SP-8176 Platinum part. Here is how the Graviton2 M6g instances stacked up against the Skylake Xeon SP instances on a variety of workloads on a per-vCPU basis:
Remember: These comparisons are pitting a core on the Arm chip against a HyperThread (with the consequent reduction in single-thread performance that comes with boosting chip throughput). These are significant performance increases, but AWS was not necessarily putting its best Xeon SP foot forward in the comparisons. The EC2 C5 instances are based on Cascade Lake Xeon SP processors with an all-core turbo frequency of 3.6 GHz, and it looks like they have a pair of 24-core chips with HyperThreading activated to deliver 96 vCPUs in a single image. The R5 instances are based on Skylake Xeon SP-8000 series chips (precisely which one is unknown) with cores running at 3.1 GHz; it looks like these instances also have a pair of 24-core chips with HyperThreading turned on. These are both much zippier than the M5 instances on a per-vCPU basis, and more scalable in terms of throughput across the vCPUs, too. It is very likely that the extra clock speed on these C5 and R5 instances would close the per-vCPU performance gap. (It is hard to say for sure.)
The main point here is that we suspect that AWS can make processors a lot cheaper than it can buy them from Intel; 20 percent would be enough of a reason to do it, but Jassy says the price/performance advantage is around 40 percent. (Presumably that is comparing the actual cost of designing and creating a Graviton2 against what we presume is a heavily discounted custom Skylake Xeon SP used in the M5 instance type.) And because of that, AWS is rolling out Graviton2 processors to sit behind Elastic MapReduce (Hadoop), Elastic Load Balancing, ElastiCache, and other platform-level services on its cloud.
For the rest of us, there will be three different configurations of the Graviton2 chips available as instances on the EC2 compute infrastructure service: general purpose (M6g and M6gd), compute optimized (C6g and C6gd), and memory optimized (R6g and R6gd).
The g designates the Graviton2 chip and the d designates that it has NVM-Express flash for local storage on the instance. All of the instances will have 25 Gb/sec of network bandwidth and 18 Gb/sec of bandwidth for the Elastic Block Storage service. There will also be bare metal versions, and it will be interesting to see if AWS implemented the CCIX interconnect to create two-socket or even four-socket NUMA servers or stuck with a single-socket design.
The M6g and M6gd instances are available now, and the compute and memory optimized versions will be available in 2020. The chip and the platform and the software stack are all ready, right now, from the same single vendor. When is the last time we could say that about a server platform? The Unix Wars... three decades ago.
Read the original here:
Finally: AWS Gives Servers A Real Shot In The Arm - The Next Platform