Category Archives: Cloud Servers

1&1: the most trusted partner for SMBs – TechRadar

This feature has been brought to you by 1&1

As one of the most successful hosting providers worldwide, 1&1 has a deep knowledge of its markets. Leveraging this experience, 1&1 knows that SMBs need hosting products that are tailored to their requirements, easy to use and customisable.

As the most trusted partner for SMBs, we offer a broad range of products and solutions our customers need for running their business and being successful online. We act as a one-stop shop for everything customers need to carry out their digital transformation. Our extensive portfolio features products for hosting, cloud and email applications tailored to the specific needs of experts, home users, small companies and freelancers. From basic servers up to high-end cloud servers, 1&1 provides solutions for a variety of user needs.

The latest products in our website creation and hosting portfolio are:

Security is very important to us, and customers benefit from the many security features built into our products and infrastructure:

Check out 1&1 website packages here.

Read this article:
1&1: the most trusted partner for SMBs - TechRadar

Our view: Lessons learned from Amazon cloud outage – The Salem News

Tuesday's widespread internet outage carries many lessons for us all, from the need to back up cloud servers to an awareness that without the internet, we are in big trouble.

Hundreds of thousands of websites crashed Tuesday from about 12:30 p.m. to around 4:30 p.m. The outage was a result of problems at one of Amazon's server farms, located in a Virginia warehouse. It affected companies large and small, from the likes of Yahoo and Apple to the North of Boston Media Group. It affected how people work, shop or look at pet-trick videos. It affected Huffington Post, Imgur, Business Insider and many other mainstream media sites.

In short, it affected a lot of people.

Amazon Web Services, which is distinct from the company's retail business, hosts a large portion of what is known as the cloud, that vast repository of shared computer memory. Tangibly, Amazon's cloud is a series of data centers: enormous, climate-controlled warehouses that keep the flow of data moving all over the world.

Amazon Web Services is far from the only company that leases shared server space in the cloud, but it is a major player. By one account, it carries and stores data for about 30 percent of the companies on the internet. That's a lot of businesses. It's been around since 2006 and, with the exception of a few blips along the way, has remained reliable.

Amazon said Tuesday's outage was caused by high error rates in one part of its Simple Storage Service. That may be Greek to most, but what it means is that a major portion of a public utility that people rely on as much as electricity and water was offline. In 1990, not even Al Gore could have known how important the internet, much less cloud computing, would become to how the world operates.

Amazon's outage showed how dependent we have all become on reliable internet and shared computing services. Here at the North of Boston Media Group, for example, access to essential production software was shut off for nearly half the day. The experience was a lesson to all that when these systems go down, we and everyone else need a back-up plan.
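
One practical shape such a back-up plan can take is keeping critical objects replicated to a second region and falling back to the copy when the primary region fails. The sketch below is a minimal illustration in Python with boto3; the bucket names and regions are hypothetical placeholders, and it assumes the replica copy already exists.

```python
# Minimal sketch of a cross-region S3 fallback, assuming critical objects
# are already replicated to a second bucket. Bucket names and regions here
# are hypothetical placeholders.
import boto3
from botocore.exceptions import ClientError, EndpointConnectionError

PRIMARY = {"region": "us-east-1", "bucket": "prod-assets"}
FALLBACK = {"region": "us-west-2", "bucket": "prod-assets-replica"}

def fetch_object(key: str) -> bytes:
    """Try the primary region first; fall back to the replica on failure."""
    for target in (PRIMARY, FALLBACK):
        s3 = boto3.client("s3", region_name=target["region"])
        try:
            response = s3.get_object(Bucket=target["bucket"], Key=key)
            return response["Body"].read()
        except (ClientError, EndpointConnectionError):
            continue  # this region is unavailable or erroring; try the next one
    raise RuntimeError(f"object {key!r} unavailable in all regions")
```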

One solution for businesses is to build their own on-site data centers. But these are costly, particularly compared to the relatively inexpensive services offered by Amazon.

With this wake-up call come questions beyond the obvious ones about how this happened, which Amazon so far hasn't publicly detailed. One question is this: What are the ramifications of allowing companies such as Amazon or Google to carry such large pieces of our shared computing and internet architecture?

It's kind of like the financial crisis of 2008, when huge financial companies began failing and needed government bailouts because they were too big to fail. Is Amazon now too big to fail? What, if anything, can be done about it?

It seems unlikely that President Donald Trump's administration will want to start telling internet businesses what to do, but it might be time for some scrutiny on the issue.

Public utilities are heavily regulated, from electric companies to water distribution systems. Isn't the internet a public utility? Wouldn't it be wise for an independent third party to take a look at what's happening?

This case doesn't appear to involve a malicious attack, but the problem of hacking is real and has dealt severe blows to the internet in the past. With everyone so concerned about security these days, it makes sense to take a look at the behemoth that Amazon Web Services has become.

Continue reading here:
Our view: Lessons learned from Amazon cloud outage - The Salem News

Bouncing Back To Private Clouds With OpenStack – The Next Platform

March 1, 2017 Timothy Prickett Morgan

There is an adage, not quite yet old, suggesting that compute is free but storage is not. Perhaps a more accurate and, as far as public clouds are concerned, apt adaptation of this saying might be that computing and storage are free, and so is inbound networking within a region, but moving data out of a public cloud is brutally expensive, and it is even more costly to span regions.

So much so that, at a certain scale, it makes sense to build your own datacenter and create your own infrastructure hardware and software stack that mimics the salient characteristics of one of the big public clouds. What that tipping point in scale is really depends on the business and the sophistication of the IT organization that supports it; Intel has suggested it is somewhere around 1,200 to 1,500 nodes. But clearly, just because a public cloud has economies of scale does not mean that it passes all of those benefits on to customers. One need only look as far as the operating profits of Amazon Web Services to see this. No one is suggesting that AWS does not provide value for its services. But in its last quarter, it brought nearly $1 billion to its middle line out of just under $3.5 billion in sales, and that is software-class margins for a business that is very heavily into building datacenter infrastructure.

Some companies, say the folks that run the OpenStack project, are ricocheting back from the public cloud to build their own private cloud analogues, and for economic reasons. Luckily, it is getting easier to use tools like OpenStack to support virtual machine, bare metal, and container environments. This is, of course, a relative thing, too. No one would call OpenStack easy, but the same holds true for any complex piece of software such as the Hadoop data analytics stack or the Mesos cluster controller, just to call out two.

"People are realizing that the public cloud, and in particular the hyperscale providers like AWS, Google, and Microsoft, are really in some cases the most expensive way to do cloud computing," Mark Collier, chief operating officer at the OpenStack Foundation, tells The Next Platform. "There was this misconception early on that, because of economies of scale, the hyperscale clouds would be cheaper. But if you look at the earnings releases of AWS and others, their growth rates are slowing in the public cloud, and we think that has a lot to do with cost. So we are starting to see some major users of AWS standing up private clouds powered by OpenStack and moving certain strategic workloads off of AWS and repatriating them internally. There are a lot of reasons for this, but cost is the biggest driver."

OpenStack users have, over the past three years, moved from tire kicking to production

The public cloud is worth a premium over private clouds for a bunch of reasons, not the least of which is that customers using public clouds do not have to pay the capital costs of infrastructure or the management costs of making it run well. And having the ability to do utility-priced, instant on-and-off capacity is also worth a premium, and we know this because steady-state rental of capacity on clouds costs less than on-demand capacity. (As we would expect.) But, says Collier, a lot of customers have steady-state workloads that just run, and even though there are ways to bring the costs down on public clouds where the virtual machines just sit there, day in and day out, customers moving off AWS to a private cloud can see anywhere from a 50 percent to 70 percent cost reduction for these constantly running jobs.

Those are some big numbers, and we would love to see this further quantified and qualified.

Collier also points out that OpenStack is used to create dozens of big public clouds, so it is not just a private cloud technology. (Rackspace Hosting, one of the co-founders of OpenStack along with NASA, operates what is probably the largest OpenStack cloud for its Cloud Servers and Cloud Storage services.)

"Public and private clouds are all growing, but customers are getting more strategic about where to place their workloads so they get their money's worth," says Collier. "And if you are paying to turn resources on and off, and you are not doing that, then you are wasting your money. People are no longer wondering when they are moving to clouds; they pretty much know everything is going in a cloud environment. But now they are thinking about which type makes sense. People are starting to dig into the numbers."

It is hard to say how much of the compute, storage, and networking capacity installed worldwide (and only running enterprise applications at that) is on private clouds versus public clouds versus traditional, uncloudy infrastructure. And Collier was not in a mood to take a wild guess about how, years hence, this pie chart might shake out. But he concurred with us that it might look like a 50-50 or 60-40 split between private and public cloud capacity over the long haul. A lot will depend on economics, both in terms of what the public clouds charge and what enterprises can afford in terms of building their own cloud teams and investing in infrastructure.

If Amazon Web Services offered a private cloud you could plunk into a datacenter, and at a private cloud price, this would certainly change things. But it also might make AWS a whole lot less money, which is why we think the top brass at AWS are perhaps not so keen on the idea. They might be able to double, triple, or quadruple their aggregate compute and storage, but not make more money doing it unless customers decided to use AWS management on their baby private AWS clouds, should those ever come to pass.

And having a private VMware cloud running in AWS datacenters, as will be done this year, does not count. We are not sure of much in this world, but we fully expect for capacity on this VMware Cloud on AWS service to cost considerably more than hosting a private cloud based on the ESXi hypervisor, vCenter management tools, and some of the vRealize cloud management tools.

There are a couple of things that are giving OpenStack a second wind, and it is not just the backdraft effect off of big public clouds by enterprise customers.

For one thing, OpenStack is getting more refined and more polished, as demonstrated by the Ocata release that was put out by the community two weeks ago. This release had a relatively short development cycle, coming out about two months ahead of the usual cadence, but the future Pike release will get back to the normal six-month release cadence that OpenStack has adhered to for years now.

One big change with the Ocata release of OpenStack is that the horizontal scaling mechanism for the Nova compute portion of OpenStack, called Cells, has gotten a V2 update and is not only ready for primetime, but is running with Nova by default starting with Ocata. In essence, Cells allows for multiple instances of the Nova compute controller (including its database and queue) to be distributed in a single cluster and be federated for management. Cells was developed by Rackspace, has been used in production since August 2012 and has been in development formally for OpenStack since the Grizzly release back in 2012, and it can be used to federate clustered Nova controllers within a datacenter or region or across regions.

Nova also now includes a feature called the placement and resource scheduler (it does not yet have a funky name because it has not been busted free of Nova), but Jonathan Bryce, executive director of the OpenStack Foundation, says that this scheduler could eventually be broken free and used to control certain aspects of other portions of the OpenStack stack. This is a new way of managing the assets that comprise a cloud (servers, storage devices, networking equipment, and so on), adding intelligence to that placement. So, for instance, it tracks the kinds of devices and their capacities and performance, and with a set of APIs you can request that a workload be deployed on a specific collection of resources, and this scheduler can find it and make it happen through Nova.
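
To give a rough feel for the "set of APIs" mentioned above, the placement service exposes an HTTP API that can be asked which resource providers could satisfy a given resource request. The sketch below is only an illustration: the endpoint URL and token are placeholders, and the exact microversion a deployment accepts may differ by release.

```python
# Rough sketch of querying the OpenStack placement API for providers that can
# satisfy a resource request. Endpoint, token, and microversion are assumed.
import requests

PLACEMENT_URL = "http://controller:8778"  # hypothetical placement endpoint
TOKEN = "gAAAA..."                        # keystone token placeholder

def allocation_candidates(vcpus: int, ram_mb: int, disk_gb: int) -> dict:
    """Ask placement which resource providers could host this workload."""
    resp = requests.get(
        f"{PLACEMENT_URL}/allocation_candidates",
        params={"resources": f"VCPU:{vcpus},MEMORY_MB:{ram_mb},DISK_GB:{disk_gb}"},
        headers={
            "X-Auth-Token": TOKEN,
            "OpenStack-API-Version": "placement 1.10",  # assumed microversion
        },
    )
    resp.raise_for_status()
    # The response lists allocation_requests and provider_summaries that a
    # scheduler (or Nova) can then turn into an actual placement decision.
    return resp.json()

candidates = allocation_candidates(vcpus=4, ram_mb=8192, disk_gb=100)
```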

The first and second generations of cloud, according to OpenStack

"The idea is that we are on the second generation of clouds, and they are easier to run, and that makes them more cost effective and also opens them up for deployment by more people," says Bryce, "which sets up a virtuous cycle. But the other attribute of Gen 2 clouds is that they do more things. When OpenStack was just starting, it was basic virtualization with self-service and elastic provisioning. When you look at it now, what you see are cloud native applications, but also things like SAP and network function virtualization workloads. So the private cloud today costs less, but it also does more. So having a more intelligent scheduler that makes sure you put an NFV workload onto a server that has high performance networking gear, or you put a data analytics workload onto something that has high performance I/O, these are the things that end up making these new clouds extremely capable and able to run these new workloads."

And this is also why OpenStack use is exploding in new markets, particularly China, where there is no established virtualization player and lots of companies are greenfield installations.

With OpenStack now seven years old, it has become a reasonably mature platform thanks to the hard work of thousands of software engineers and the enlightened self-interest of their employers. And it is reasonable to ask if OpenStack, like other open source infrastructure components such as the Linux kernel and the bits that wrap around it to make it an operating system, is largely done.

OpenStack has thousands of marquee enterprise customers, and this is just a sampling

"There's always something more to do," says Bryce. "OpenStack is an interesting animal in some ways because it has these very primitive core functions such as virtualization and networking, and those are necessary for every single workload, every single application that runs on any platform. Those are key, and fairly stable and mature. Where we are seeing exciting work still happen is how you leverage and integrate these infrastructure primitives to meet new workloads."

For instance, a lot is happening in the OpenStack community with software containers right now. Not only is OpenStack being containerized itself so it can be deployed and managed better, but containers are being added atop either virtualized or bare metal OpenStack clouds so they can be used to manage other applications that in turn run on OpenStack.

"When you layer dynamic application management through containers on top of programmable infrastructure, you really get the best of both worlds," Bryce explains. "But in order to achieve this, you need tight integration between the two."

Just as was the case with server virtualization based on hypervisors when it became popular on X86 platforms a decade ago, there is much weeping and gnashing of teeth with regard to both networking and storage underpinning container environments. So OpenStack shops are combining the Neutron virtual networking with Cinder block storage and the Kubernetes container scheduler, or gluing together Nova compute with Cinder block storage and Docker container runtimes. The Kuryr project provides the link between Docker and Neutron (hence its name, a play on courier), and a subproject called Fuxi connects Cinder block storage and Manila shared file systems to Docker in a similar fashion.
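
As a rough illustration of the kind of glue Kuryr provides, the Docker SDK for Python can create a container network whose backing driver is Kuryr, so that containers attached to it get their networking from Neutron. This is a sketch under assumptions: kuryr-libnetwork is installed and registered under the driver name "kuryr", and the subnet and image are placeholders.

```python
# Sketch: creating a Docker network backed by Neutron through Kuryr.
# Assumes kuryr-libnetwork is installed and registered as driver "kuryr";
# the subnet and container image below are placeholders.
import docker

client = docker.from_env()

ipam = docker.types.IPAMConfig(
    pool_configs=[docker.types.IPAMPool(subnet="10.10.0.0/24")]
)
net = client.networks.create("neutron-backed-net", driver="kuryr", ipam=ipam)

# Containers attached to this network get their ports and IPs from Neutron.
container = client.containers.run("nginx", detach=True, network=net.name)
```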

See original here:
Bouncing Back To Private Clouds With OpenStack - The Next Platform

Glitch in Amazon web servers causes problems for popular sites – The Guardian

The case highlights how reliant the internet has become on several players, including Amazon, used by tens of thousands of web services for hosting and backing up data. Photograph: Loic Venance/AFP/Getty Images

Amazon's S3 cloud service experienced an outage of several hours on Tuesday that caused problems for many websites and mobile apps that rely on it, including Medium, Business Insider, Slack, Quora and Giphy.

The company said earlier on Tuesday that it was experiencing "high error rates" on the platform, affecting a large part of the east coast of the US. Then on Tuesday afternoon, Amazon posted on its service health dashboard that the issue had been resolved:

As of 1:49 PM PST, we are fully recovered for operations for adding new objects in S3, which was our last operation showing a high error rate. The Amazon S3 service is operating normally.

The Amazon Simple Storage Service (S3) is used by tens of thousands of web services for hosting and backing up data, including the Guardian, which was heavily affected.

The problem had also affected some internet-connected devices, such as smartphone-controlled light switches.

The outage even affected a site called "Is It Down Right Now?", which monitors when other sites are down.

The case highlights how reliant the internet has become on several players, including Amazon, Cloudflare and Google, which provide the expensive centralized infrastructure on which the web runs.

The so-called cloud is actually made up of thousands and thousands of powerful computer servers, stored by Amazon and others in huge server farms. The companies build and maintain them so that smaller players don't have to.

It's convenient and flexible (you only pay for the storage you use) for companies that don't have the resources or skills to do it themselves; that is, until there's a problem.

See the original post here:
Glitch in Amazon web servers causes problems for popular sites - The Guardian

Behind AMD’s Big Plan for Data Center – Market Realist

Are AMD's ABCs Worth the Price Tag? PART 12 OF 17

In the preceding part of this series, we discussed how Advanced Micro Devices' (AMD) Polaris GPU (graphics processing unit) pushed the company's Computing and Graphics revenue to a two-year high in fiscal 4Q16. The company is now expanding the GPU market beyond gaming and into the data center.

With the advent of deep learning and AI (artificial intelligence), more and more cloud companies are using accelerators like GPUs and FPGAs (field-programmable gate arrays) for their deep learning work.

Nvidia (NVDA) is a leader in the AI market. Its data center revenue rose 23% sequentially and 205% on a YoY (year-over-year) basis in the quarter ended January 2017. Meanwhile, Intel's (INTC) data center revenue rose 4.4% sequentially and 8% on a YoY basis during the same quarter. This shows that the trend is moving away from x86 CPUs (central processing units) to GPUs.

Until now, AMD only supplied x86 server chips to data centers. Now, it's expanding its offerings to include Radeon Instinct, which is a combination of its GPU, CPU, and open source software. AMD has already secured orders from Google (GOOG) and Alibaba (BABA) to supply GPUs for their data centers.

AMD is developing its next-generation Vega GPU and Zen-based server CPU, Naples, to expand the breadth of its customers beyond traditional and cloud servers to include the embedded infrastructure and communications markets. With Naples, AMD is targeting cloud, big data, and traditional enterprise applications that require more threads, more memory, and I/O-bound (input/output) workloads.

To be sure, AMD is focusing more on the cloud, as design wins convert into revenue faster than networking and storage.

Radeon Instinct and the Naples CPU are expected to hit the market by the end of fiscal 2Q17. AMD stated that it is receiving a strong response from customers for its new products. As the design wins take some time to be reflected in earnings, the effect would likely be visible in fiscal 4Q17.

In the meantime, AMD's Enterprise, Embedded, and Semi-Custom segment will continue to be influenced by semi-custom seasonality. Continue to the next part for a closer look.

Visit link:
Behind AMD's Big Plan for Data Center - Market Realist

How many instances of Windows Nano Server can run on 1 TB of RAM? – TechTarget

One of the premier new features of Windows Server 2016 is Windows Nano Server, essentially a stripped-down, headless install that is 90% smaller than a full Windows Server install. What does that mean with respect to cloud server workloads and better security resulting from having a much smaller attack surface?

Windows Nano Server is a new feature that's part of Windows Server 2016. It's a headless version of Windows Server that's even smaller than Windows Server Core. Some of the stats say that it has a 93% smaller virtual hard disk footprint, 92% fewer critical bulletins and requires 80% fewer reboots. That could be a big feature for cloud implementations. Nano Server, Hyper-V containers and Storage Spaces Direct are three major new features of Windows Server 2016.

Nano Server offers improved density, because you can run a lot more Nano servers than you can full-size Windows servers. That's particularly important if you're trying to develop cloud applications. The Nano Server's reduced size definitely gives it a much smaller attack surface. That makes it a far more secure kind of operating system.

I recall Microsoft doing research where they were running Windows Nano Server as a virtualization host. They found that a Windows Nano Server host with 1 TB of RAM was capable of running 1,000 Nano Server virtual machines on it. That gives you an idea of the kind of density that you can get out of Nano Server and some of its advantages. Of course, with no GUI and no browser, there's a lot less to attack. So, it is more secure.

One of the negatives, though, is that it is going to be different to manage because there's no local login. There's no UI. You have to manage Windows Nano Server completely remotely. That's going to probably cause a learning hurdle and some slow adoption for a lot of companies, because sometimes that's difficult to get into. That's one of the reasons why Windows Server Core isn't running everywhere. It's a little bit more difficult to manage. So, that requirement on remote management will probably provide at least an initial hesitation for running Nano Server.
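
To give a sense of what managing a server "completely remotely" looks like in practice, administration of a headless install typically happens over PowerShell remoting (WinRM). The snippet below is a minimal sketch using the third-party pywinrm library; the host name, credentials and the command are placeholders, and it assumes WinRM is already reachable on the target.

```python
# Minimal sketch of managing a headless Windows host over WinRM with pywinrm.
# Host, credentials, and the PowerShell command are placeholders.
import winrm

session = winrm.Session("nano-host.example.com",
                        auth=("Administrator", "placeholder-password"))

# Run a PowerShell command remotely, e.g. list a few running services.
result = session.run_ps(
    "Get-Service | Where-Object Status -eq 'Running' | Select-Object -First 5 Name"
)
print(result.status_code)        # 0 on success
print(result.std_out.decode())   # command output returned over the wire
```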

The densities and security benefits that you can get through Windows Nano Server are significant, and that is definitely the way that application development, especially in the cloud, is headed.

What are the key benefits to Microsoft's Nano Server?

Microsoft Nano Server: Does less really mean more?

Hyper-V and Windows Server 2016 containers: The same, but different

See the original post:
How many instances of Windows Nano Server can run on 1 TB of RAM? - TechTarget

Function as a service, or serverless computing: Cloud’s next big act? – TechTarget

Many developers at Expedia, the online travel giant, can disregard the need to provision servers when releasing new code. They can dismiss the need to manage and maintain servers as well.

Instead, they can simply focus on their code, releasing it when their work is done without thinking about the technology stack that will run it.

"All you're doing is writing your software code and then you're packaging it and you're letting someone else worry about whether the environment is ready for you," explained Kuldeep Chowhan, a principal engineer at Expedia.

Some may dismiss this as a return to siloed practices, back before the days of DevOps when development and operations worked separately.

But coders at Expedia and a growing number of companies are embracing function as a service -- also known as serverless computing -- as the next evolution in cloud computing, moving away from the need to spin up servers as they release new functionality for apps and thus becoming more agile and responsive to changing business needs in the process.

CIOs, analysts said, should now consider where they can adopt serverless computing to bring agility, speed, scalability and cost benefits to their enterprise IT departments.

"If I'm developing software, I don't care about the machine it's running on. I care about my ideas and getting it to the users as fast as possible and as cheap as possible. That's what serverless offers by giving me a platform that can take software code straight into the servers and provide it then to my end users," said Karoly Sepsy, a DevOps consultant with consulting firm Contino.

Function as a service/serverless computing (also known as serverless architecture) is an event-driven compute service offered by cloud providers which executes the code, or function, only when needed.

"Serverless is the next step in cloud IaaS, making it easier to scale [and] take advantage of more complex automation areas -- all without having to administer servers and without having to pay for server capacity when not using them," said Holger Mueller, principal analyst and vice president with Constellation Research.

Offerings in this space include AWS Lambda (the first on the market, introduced in 2014), Google Cloud Functions, IBM OpenWhisk and Microsoft Azure Functions.

Function as a service/serverless computing differs from cloud computing in a few key ways, and those differences are what produce its benefits as well as the challenges associated with using it.

When using serverless computing, coders upload code snippets packaged as a function that carries out a specific task. The code only runs when triggered by an event. But while the coder is responsible for the code itself, the service provider manages the compute stack that runs it; the provider automatically provisions the compute and storage resources needed for that function.
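
As a concrete picture of what one of these uploaded code snippets looks like, the sketch below is a minimal handler for AWS Lambda's Python runtime. The event fields shown are hypothetical; the real event shape depends on the trigger (an API call, a storage notification, a queue message and so on).

```python
# Minimal sketch of a function-as-a-service handler (AWS Lambda, Python runtime).
# The event fields here are hypothetical; real events depend on the trigger.
import json

def lambda_handler(event, context):
    """Carry out one specific task per invocation, e.g. total up an order."""
    order_id = event.get("order_id", "unknown")
    total = sum(item.get("price", 0) for item in event.get("items", []))
    return {
        "statusCode": 200,
        "body": json.dumps({"order_id": order_id, "total": total}),
    }
```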

Users (generally enterprise IT departments) then are billed on a pay-per-use basis, determined by the number of requests served and the compute time needed to run the code, metered in increments of 100 milliseconds. On the other hand, if the code is never triggered, the user is never billed.
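
The pay-per-use arithmetic can be sketched directly from the request count, the billed duration and the memory size. The figures below are assumptions based on AWS Lambda's published list prices around that period (roughly $0.20 per million requests and $0.00001667 per GB-second); the sketch ignores the free tier and is only meant to show how such a bill is built up.

```python
# Back-of-the-envelope FaaS cost estimate. Prices are assumptions based on
# AWS Lambda list pricing circa 2017: $0.20 per million requests and
# $0.00001667 per GB-second of compute; the free tier is ignored.
import math

REQUEST_PRICE_PER_MILLION = 0.20   # assumed list price, USD
GB_SECOND_PRICE = 0.00001667       # assumed list price, USD

def estimate_monthly_cost(requests: int, duration_ms: float, memory_mb: int) -> float:
    """Per-request charge plus compute time metered in 100 ms increments."""
    billed_ms = math.ceil(duration_ms / 100) * 100
    request_cost = requests / 1_000_000 * REQUEST_PRICE_PER_MILLION
    gb_seconds = requests * (billed_ms / 1000.0) * (memory_mb / 1024.0)
    return request_cost + gb_seconds * GB_SECOND_PRICE

# Hypothetical workload: 100 million short invocations at 256 MB,
# which works out to roughly $103 a month at these assumed prices.
print(f"${estimate_monthly_cost(100_000_000, duration_ms=120, memory_mb=256):,.2f}")
```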

Serverless computing differs from other cloud services, such as Infrastructure as a Service and Platform as a Service, in that under those models, users must spin up virtual machines for their applications and also deploy their codebase as an entire application.

"With function as a service, the infrastructure is completely hidden from the consumer, and it's event-driven, so your function can run when it's required to run and not at other times," Sepsy said.

Those distinctions matter, he and other analysts said.

One of the biggest benefits with serverless computing, cited by users and analysts, is the reduced cost that comes for paying only when the code runs.

"You can get away with using less compute capacity for your systems," Sepsy says.

Expedia, for instance, paid only $550 a month for 2.3 billion computations in AWS Lambda in December, Chowhan said, explaining that it was multiples less than what the company would have paid with a traditional cloud deployment of the code.

Scalability is another cited benefit. "You can serve extreme spikes in traffic," Sepsy said.

Function as a service also helps users achieve faster speed to market with their code.

"You're only writing the actual function code that runs. If you don't have to spend the effort on creating and maintaining infrastructure automation, you don't have to write the code to control the infrastructure, you don't have to worry about running containers or virtual machines, so you can move faster," said Donnie Berkholz, research director for development, DevOps and IT Ops at 451 Research.

Serverless computing won't work well in some circumstances, though, and it does present challenges for enterprises looking to use it.

For starters, not all programs work well in function as a service, analysts and consultants said. Enterprise IT leaders would likely find it hard to justify, at least at the start, breaking down monolithic applications into functions.

Also, extremely long-running jobs and highly parallel jobs that require a lot of communication between them while they're running aren't good candidates for use with this technology, Berkholz said. Nor are functions where there's a great deal of data transfer between public and on-premises systems.

"They won't work well for cost reasons or responsiveness," he said.

Analysts and consultants also noted some challenges and limitations that users will have to consider if moving to function as a service.

First, organizations need to remember that function as a service vendors have certain parameters, particularly around the languages used, noted Jordan Taylor, a DevOps practitioner with Contino. Lambda, for instance, currently only supports Node.js, Java, C# and Python.

Companies might also be limited in their ability to move to function as a service if they don't have the right skills or practices to fit with this new way of working. "You could have the most perfect piece of technology, but if it's not with the right workflow or technical configuration or process, it's not going to work," Taylor said.

To that point, Chowhan said IT will need to develop a self-service capability for coders to deploy. "It's something critical for the enterprise [to be successful]," he said.

Additionally, Berkholz said users will have to learn to effectively manage the far greater number of components in their software stack if moving to function as a service.

"To know what's there, so you can [for example] understand how secure it is and whether there are updates, enterprise IT will have to manage it differently. A lot of the [management] software today works with that at a small scale," he said. "But as you grow with FaaS [function as a service], you could go from [managing] one monolithic application with 1 million lines of codes to [managing] bits of 200 lines of codes in functions that's 5,000 functions."

Recent stories by Mary K. Pratt:

The case for OpenStack in the enterprise

This is how to secure a seat on the board

How to start and grow a DevOps team

Link:
Function as a service, or serverless computing: Cloud's next big act? - TechTarget

White boxes set to shift server buying habits – Computer Weekly – ComputerWeekly.com

The latest financial results from HPE have highlighted a growing issue facing server makers around the effects of cloud computing.

In its first-quarter 2017 results, HPE reported an 11% decline in server revenue due to what CFO Tim Stonesifer described as a softer-than-expected core server market combined with some execution challenges.

In a transcript of the earnings call for the quarter, posted on the Seeking Alpha financial blogging site, HPE CEO Meg Whitman said: "Revenue was impacted by a tough market environment, particularly in core servers and storage. We saw significantly lower demand from one customer, a major tier one service provider facing a very competitive environment."

In a question-and-answer session with financial analysts, Whitman admitted that cloud computing was affecting HPE's server and storage business, but she said the company was now focusing on private and hybrid clouds, using its Synergy product range.

"We are ramping our Synergy offering; we've got the power of SGI and our high-performance compute that was part of HPE," she said. "Synergy is important because it allows us to provide on-premise private cloud alternatives at public cloud economics, both the total cost of ownership as well as the consumption-based pricing model. And we have now seen a number of customers move workloads off the public cloud back into an on-premise datacentre because it's more cost-effective."

One of the questions raised at the HPE fiscal briefing concerned the impact of contract manufacturers or original design manufacturers (ODMs) that produce white box servers for cloud providers. Whitman would not be drawn on whether HPE was seeing strong competition from white box server makers. When asked about the lower demand for HPE servers from the tier one service provider that directly contributed to the lower server sales in the quarter, she said: "I'm not entirely sure. What I will tell you is that they have dramatically decreased their purchasing below commitments that they had made to us."

But it is not uncommon for cloud and service providers to choose white box servers to reduce infrastructure costs, resulting in lower sales by the major server companies. John Dinsdale, a chief analyst and research director at Synergy Research Group, said: "For traditional IT infrastructure suppliers, there is one fly in the ointment: hyperscale cloud providers account for an ever-increasing share of datacentre gear, and many of them are on a continued drive to deploy own-designed servers, storage and networking equipment, manufactured for them by ODMs. ODMs in aggregate now control a large and growing share of public cloud infrastructure shipments."

Over the last few years, the Open Compute Project (OCP), originally devised by Facebook, has been gaining traction as a hardware specification for hyperscale datacentre computing. White box servers that meet the OCP specification can be deployed in datacentres. Benefits include lower costs for both hardware and energy, plus interoperability and compatibility, so hardware from different manufacturers can be deployed side by side.

As Computer Weekly has previously reported, Facebook claims that using OCP kit saved the social media site about $1.2bn in IT infrastructure costs within its first three years of use, by formulating its own designs and managing its supply chain.

In a Computer Weekly blog post, James Bailey, director of datacentre hardware provider Hyperscale IT, said that Rackspace, another founding member of OCP, had used the architecture to deploy white box servers for its OnMetal product. Microsoft is also a frequent contributor and runs more than 90% of its hardware as OCP, according to Bailey.

And it is not only service providers. Goldman Sachs is also believed to have a significant footprint of OCP equipment in its datacentres. Again, white box servers are being used instead of servers from the major hardware companies.

What this means is that non-traditional server manufacturers are increasingly taking market share from the established providers. In November 2016, IDC's quarterly server tracker reported that the ODM segment (white box servers) accounted for 10% of the market in terms of value, making white box makers collectively the third-biggest server supplier with 10.3% of the overall market, ahead of the likes of Lenovo (7.9%), Cisco (7.3%) and IBM (6.9%).

"Other than Cisco, all major US-based suppliers experienced significant global revenue declines year over year, while many international and smaller suppliers were able to find areas of growth," said Lloyd Cohen, research director for computing platforms at IDC. "As large enterprise accounts slowed their demand for servers, small businesses and startups continued to grow their IT portfolios via non-traditional channels with innovative supply chain strategies. It will be interesting to see how this segment develops over time."

HPE's rival IBM is already shifting its business away from on-premise datacentre computing. IBM sold its x86 server business to Lenovo in 2014. In January this year, Ginni Rometty, IBM chairman, president and chief executive officer, said the company's shift from its core business to so-called strategic imperatives accounted for 40% of its earnings.

The full impact of the white box server makers is yet to be felt. According to analyst firm Gartner, ODMs are not particularly effective at dealing with enterprises and small and mid-sized businesses. Partnering with an organisation that could meet the needs of these customers more effectively would allow the white box server suppliers to benefit from more comprehensive sales and marketing programmes, according to Gartner's report, Lack of comprehensive go-to-market mechanisms keeps server ODMs in check for now.

The analyst warned that the biggest risk to the major server manufacturers was if and when a major systems integrator partnered with a white box server supplier, which would provide a deep customer relationship, plus sales and marketing expertise.

More here:
White boxes set to shift server buying habits - Computer Weekly - ComputerWeekly.com

Google First to Upgrade Cloud Data Centers with Intel’s Latest Chips – The VAR Guy

Brought to you by Data Center Knowledge

Google has upgraded servers in cloud data centers across five availability regions with Intel's latest Xeon processors, codenamed Skylake. The company claims it is the first cloud provider to do so.

Amazon said last year it expected to launch Skylake-powered C5 instances on its Amazon Web Services cloud sometime in early 2017. Microsoft has not revealed plans to upgrade to Skylake, but the blog AnandTech has deduced from a company blog post that Intel's latest and greatest in data center tech is likely to appear in the next-generation Open Compute servers the giant said were in the works last November under the codename Project Olympus.

The processors are geared for workloads that require high performance, such as scientific modeling, genomic research, 3D rendering, data analytics, and engineering simulations, Urs Hölzle, Google's senior VP of cloud infrastructure, wrote in a blog post.

These applications will benefit from the new chips' Advanced Vector Extensions (AVX-512) feature. In Google's internal tests, the feature improved application performance by up to 30 percent, Hölzle said.

Google optimized Skylake for all its Google Compute Engine VMs, including standard, highmem, highcpu, and Custom Machine Types. Cloud servers powered by Skylake are initially available in five Google cloud regions: Western US, Eastern US, Central US, Western Europe, and Eastern Asia Pacific.
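
For readers who want to land on the new chips deliberately, Compute Engine lets an instance request a minimum CPU platform at creation time. The snippet below is a rough sketch using the Google API Python client; the project, zone, image and the "Intel Skylake" platform string are assumptions rather than a definitive recipe.

```python
# Rough sketch: creating a Compute Engine VM that requests a minimum CPU
# platform. Project, zone, image, and the exact platform string are assumptions.
from googleapiclient import discovery

compute = discovery.build("compute", "v1")
project, zone = "my-project", "us-central1-a"   # placeholders

config = {
    "name": "skylake-test-vm",
    "machineType": f"zones/{zone}/machineTypes/n1-standard-4",
    "minCpuPlatform": "Intel Skylake",          # assumed platform label
    "disks": [{
        "boot": True,
        "autoDelete": True,
        "initializeParams": {
            "sourceImage": "projects/debian-cloud/global/images/family/debian-9",
        },
    }],
    "networkInterfaces": [{"network": "global/networks/default"}],
}

# Kicks off an asynchronous insert operation that can be polled for completion.
operation = compute.instances().insert(project=project, zone=zone, body=config).execute()
```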

This is a second major processor upgrade announcement from Google's cloud services division this week. On Tuesday, the company said it had added the option to spin up bare-metal GPUs along with cloud VMs for machine learning and other compute-heavy applications.

Read the original here:
Google First to Upgrade Cloud Data Centers with Intel's Latest Chips - The VAR Guy

Internet security cataclysm Cloudbleed hits Singapore. Here's a list of over 2k local domains affected – Coconuts Hong Kong

In case you've yet to hear, a tiny bug in Cloudflare's code caused huge security problems by leaking an unspecified amount of data, including confidential information such as passwords, personal information, and more, all over the internet. This rare but worrying security disaster has since been labeled Cloudbleed.

To put it simply, one small character hiding among the long chunks of code that make up Cloudflare's security services ended up being the catalyst for compromising sensitive data on various (major) websites.

According to a blog post on Cloudflare's site, this major security leak was caused by (as described by Gizmodo) the company's decision to use a new HTML parser called cf-html. An HTML parser is an application that scans code to pull out relevant information like start tags and end tags. This makes it easier to modify that code.

Complications turned up when the code in cf-html clashed with Cloudflare's old parser, Ragel, creating what is known as a buffer overrun vulnerability.

In layman's terms, Cloudflare's new software tried to store user data in its usual spot, but that place had run out of space. So it stored the remaining data elsewhere, where it was picked up by sites like Google.

Simply put, with critical security data such as passwords and personal information leaked, expect hackers to grab the opportunity to use this information to compromise the security and trust of these domains. In an age where so much information is stored on cloud servers, the seriousness of this situation cannot be overstated. Here's a site where you can check if you've recently visited any sites that were hit by the bug.

With the number of industries operating in Singapore, there'll definitely be some companies that use Cloudflare's services and are thus not immune to the Cloudbleed phenomenon. IP addresses, passwords from password managers, messages from dating sites, and much more data have been leaked, according to The Verge. For those interested, there's a whole long list (numbering in the thousands, mind you) of local domains affected by Cloudbleed, but here are just some of the notable ones:

http://birdpark.com.sg/

http://www.avgantivirus.com.sg/

https://buysinglit.sg/

https://www.foodpanda.sg/

https://www.tech.gov.sg/

This situation has since been contained and fixed, but we still urge everyone to step up their security with two-factor authentication (2FA) where it's available, or to just change passwords periodically, as you should regardless of internet security cataclysms.

More:
Internet security cataclysm Cloudbleed hits Singapore. Here's a list of over 2k local domains affected - Coconuts Hong Kong