Category Archives: Cloud Servers

Chef expands its cloud and container menu – ZDNet

Chef, a leading DevOps company, announced at ChefConf 2017 that it was adding new capabilities to its flagship Continuous Automation/DevOps program, Chef Automate. This enables enterprises to transition from server- and virtual machine (VM)-based IT systems to cloud-native and container-first environments with consistent automation and DevOps practices.


Chef started as an open-source cloud configuration management and deployment application. It's meant to help anyone orchestrate servers in a cloud or just in a departmental data center. Instead of system administrators sweating over management programs that were designed for single, stand-alone servers, Chef enables DevOps users to spin up dozens or hundreds of server instances in seconds.

That's still its primary use, but in the eight years since Chef was created, we've moved from server- and VM-dominated data centers to container- and cloud-based infrastructures. That's where Chef Automate steps in. Ken Cheney, Chef's CMO, explained: "We're helping organizations with where they are at today, but we provide a bridge to the future, [showing] how they can go about delivering software across those environments."

While Chef Automate was only introduced in 2016, it was already facing stiff competition. Container orchestration programs such as Kubernetes, Mesosphere Marathon, and Docker swarm mode are already major players. Still, that isn't stopping Chef from trying to move from server and VM DevOps to cloud and container DevOps.

Chef Automate is being extended with new capabilities for cloud-native and container-first environments.

Chef also released InSpec-AWS, InSpec-Azure, and InSpec-vSphere as incubation projects that bring code compliance to the cloud. These projects provide resources to test, interact with, and audit these cloud platforms directly, and to easily access their configuration from within InSpec.

In addition, Chef released its Habitat Builder service, a software-as-a-service (SaaS) platform for building Habitat packages used to package, manage, and run apps, along with new Habitat productivity capabilities.

Chef can do all this on the most popular public clouds, including Amazon Web Services (AWS) OpsWorks, Microsoft Azure, and VMware vRealize 7, on both Windows and Linux platforms.

For companies already using Chef as their DevOps tool of choice, this makes Chef even more promising as they move to a cloud-native, container-driven IT world. For those who haven't committed to Chef, it gives them reason to try Chef for their IT meals. I think it quite possible they'll find Chef's recipes delicious.


The rest is here:
Chef expands its cloud and container menu - ZDNet

Hitachi, IBM to collaborate in mainframes in the cloud era – Nikkei Asian Review

TOKYO -- Hitachi will supply IBM-made mainframe computers loaded with its own operating systems starting in fiscal 2018, beating a retreat from hardware development in the age of cloud servers.

The arrangement was announced Tuesday. Hitachi will continue developing operating systems for mainframe machines. New products will offer improved compatibility with Hitachi's "internet of things" platform, Lumada.

Mainframes have been widely used across Japan's public and private sectors for in-house computer systems since they emerged in the 1950s. Japan was a particularly big market, with Hitachi, IBM, NEC, Fujitsu and others all competing for a piece of the pie at one point.

But the tide turned in the 1990s, when computer servers loaded with Windows and Linux operating systems became widespread. Japan's mainframe shipments topped 1 trillion yen ($8.94 billion) in the mid-1990s, but slid to below 45 billion yen in fiscal 2015, according to the Japan Electronics and Information Technology Industries Association.

That changing market climate prompted Hitachi's eventual exit from hardware. At the same time, the company recognizes persistent demand among companies that value stable system operation and security. There is also promise for use in the internet of things, a business Hitachi sees as a growth field.

The turning fortunes of IBM are symbolic of the technological shift. The company's sales fell for a 20th straight quarter in the January-March period of this year. The company was hit by the advent of servers and then by the spread of cloud servers. This is in stark contrast to Amazon.com's cloud business, which is the world's largest. The business logged $890 million in operating profit in the January-March period, accounting for 90% of Amazon's profit.

In Japan, where cloud computing has not become as widespread, IT companies have been logging relatively solid earnings. Yet Fujitsu, NEC and Hitachi all saw revenue in the IT segment decline in the year ended in March, making it imperative for them to respond to the growth of cloud computing.

(Nikkei)

Excerpt from:
Hitachi, IBM to collaborate in mainframes in the cloud era - Nikkei Asian Review

Op-ed: Utah’s tech renaissance threatened unless Congress acts to update archaic law – Deseret News

Some of the world's most innovative cloud computing companies are based in Utah, but an international patchwork of outdated and conflicting laws on how law enforcement can access our online data threatens the entire industry. Fortunately, Sen. Orrin Hatch has taken a leadership role in pressing Congress to establish clarity for U.S. companies doing business overseas, to protect our individual privacy rights and to help law enforcement do its job more effectively.

As a longtime Utah resident and a technology and marketing consultant based in Salt Lake City, I am incredibly proud of the renaissance of entrepreneurship and innovation taking hold across our state. At the heart of this growth are startups building powerful cloud computing software services, which run on internet-connected servers located around the globe. For example, Farmington-based Pluralsight is redefining the future of learning with its education-on-demand platform, and in American Fork, Domo is revolutionizing business management through its much-lauded business cloud solutions.

Cloud computing technology allows even the smallest, most remote companies to serve clients around the globe, and the cloud works most efficiently when it allows them to store their data where it makes the most technical, rather than the most geographical, sense. However, the future of cloud computing opportunities for Utah's innovators remains murky unless we update the legal framework governing how law enforcement can access data stored overseas.

Implemented in 1986, years before the cloud was even invented, the Electronic Communications Privacy Act (ECPA) is outdated and ambiguous, and it is proving harmful to the success of our businesses, the trust of our customers, and the ability of law enforcement to do its job effectively. Because of the ambiguities codified within this statute, companies providing cloud services are caught precariously between international legal jurisdictions. For example, when faced with a U.S. law enforcement request to gather data stored in a cloud server in Italy, American companies are forced to choose between abiding by the data access laws of the United States or Italy.

This ambiguity is having an unintentional chilling effect on our ability to do business around the world. Without clarity on where and how U.S. warrants may be used to access cloud data, Utah companies I work with can be hesitant to store data overseas. Meanwhile, foreign companies are increasingly hesitant to house data in the United States because they are concerned about the privacy of their data.

The result has been a series of time-consuming lawsuits in the U.S. Court of Appeals with differing outcomes, while uncertainty for companies and citizens builds and law enforcement's access to data is not clarified. Congress needs to act.

Hatch helped jump-start Congress's interest in this issue last year when he introduced the International Communications Privacy Act (ICPA). Though it did not pass, this legislation would have helped to clarify the responsibilities of businesses storing data overseas, ensure law enforcement has the tools it needs to access information abroad and restore the trust of foreign companies storing data in the United States. This week, legislators on the Senate Judiciary Committee have an opportunity to revisit this vital issue.

It's imperative that Congress quickly address the ambiguity within our current law. As every company becomes a software company, we need legislation that supports our companies' ability to store data overseas, protects our individual privacy rights and helps U.S. law enforcement do its important job. Utah's tech renaissance, and the success of cloud-driven companies across our country, depends on it.

Jeff Hadfield is the founder of 1564B, based in Salt Lake City. 1564B focuses on technical markets and helps companies in Utah, and across the country, reach their marketing, sales and content goals.

Read more here:
Op-ed: Utah's tech renaissance threatened unless Congress acts to update archaic law - Deseret News

Google’s Firebase taps serverless Cloud Functions – InfoWorld


By Paul Krill

Editor at Large, InfoWorld | May 22, 2017

Firebase, Google Cloud's back end and SDK for mobile and web application development, is being enhanced with serverless compute capabilities. Google Cloud Functions for Firebase, now available in a beta release, allows developers to run back-end JavaScript code that responds to events triggered by Firebase features and HTTPS requests.

Developers upload their code to Google's cloud, and the functions are run in a managed Node.js environment. There is no need for users to manage or scale their own servers. "[Cloud Functions] enables true server-less development," Google's Ben Galbraith said. Like AWS Lambda and Microsoft's Azure Functions, Cloud Functions allows users to deploy and run code without provisioning servers. Developers code to cloud APIs, and the cloud takes care of managing and scaling the functions.
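To make the model concrete, here is a minimal sketch of an HTTPS-triggered function, assuming the firebase-functions Node.js API as documented around the time of the beta; the function name and response text are illustrative placeholders, not part of Google's announcement:

```typescript
// Minimal Cloud Functions for Firebase sketch (TypeScript), assuming the
// beta-era firebase-functions API; the names below are hypothetical.
import * as functions from 'firebase-functions';

// Google invokes this handler in its managed Node.js environment whenever
// the function's HTTPS endpoint receives a request; the developer never
// provisions or scales a server.
export const helloWorld = functions.https.onRequest((request, response) => {
  response.send('Hello from Cloud Functions for Firebase!');
});
```

Deploying through the Firebase CLI (firebase deploy) publishes the function at a Google-managed HTTPS endpoint, with scaling handled automatically.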

Acquired by Google in 2014, Firebase features a cross-platform SDK with capabilities for cloud data storage and synchronization across devices. It also provides app usage analytics and tools for serving in-app advertising and sending targeted notifications to users.

Google has also just released a beta version of Firebase Performance Monitoring. The service provides insight into the performance of iOS and Android mobile apps by monitoring startup times, network response times, and other aspects of app performance. The data can be analyzed in the Firebase Console.

Google also has begun open-sourcing Firebase SDKs, describing it as the first step toward open-sourcing client libraries. "We're starting by open sourcing several products in our iOS, JavaScript, Java, Node.js, and Python SDKs. We'll be looking at open sourcing our Android SDK as well," said Salman Qadri, Firebase product manager. Admin SDKs to access Firebase on privileged environments are now open source as well, including the recently launched Python SDK.

Paul Krill is an editor at large at InfoWorld, whose coverage focuses on application development.


Excerpt from:
Google's Firebase taps serverless Cloud Functions - InfoWorld

Report: AI Tells AWS How Many Servers to Buy and When – Talkin’ Cloud

Brought to you by Data Center Knowledge

Internet giants Google, Microsoft, Amazon, and Facebook use Machine Learning to enhance their services for end users, such as real-time search suggestions, face recognition in photos, voice commands, or cloud services for software developers, but they also use Artificial Intelligence to optimize their internal operations. Google revealed in 2014 that it uses Machine Learning to improve energy efficiency of its data centers, and Amazon's use of AI to manage warehouses for its e-commerce business hasn't been a secret since at least 2015.

So, it comes as no surprise that Amazon Web Services, the company's cloud services arm, also applies Machine Learning to one of the toughest puzzles in data center management: capacity planning. AWS uses Machine Learning to forecast cloud data center capacity demand and to figure out where on the planet to store additional data center components so that it can expand capacity quickly where and when it's needed.

AWS CEO Andy Jassy revealed the practice in front of an audience at this week's Foundations of Science Breakfast by the Pacific Science Center, GeekWire reported. According to Jassy, the company buys an enormous number of servers on a regular basis.

The report doesn't provide much detail about what kinds of input data the company's Machine Learning algorithm uses to forecast demand, but according to GeekWire, one of the primary data sources appears to be its cloud sales team.

Read this article:
Report: AI Tells AWS How Many Servers to Buy and When - Talkin' Cloud

Google Targets Nvidia With Learning-Capable Cloud TPU – ExtremeTech

Only a week after Nvidia's new AI-focused Volta GPU architecture was announced, Google aims to steal some of its thunder with its new second-generation Tensor Processing Unit (TPU), which it calls the Cloud TPU. While its first-generation chip was only suitable for inferencing, and therefore didn't pose much of a threat to Nvidia's dominance in machine learning, the new version is equally at home with both the training and running of AI systems.

At 180 teraflops (trillion floating point operations per second), Google's Cloud TPU packs more punch, at least by that one measure, than the Volta-powered Tesla V100 at 120 teraflops. However, until both chips are available, it won't be possible to get a sense of a real-world comparison. Much like Nvidia has built servers out of multiple V100s, Google has also constructed TPU Pods that combine 64 TPUs (64 x 180 teraflops = 11,520 teraflops) to achieve 11.5 petaflops of performance.

For Google, this performance is already paying off. As one example, a Google model that required an entire day to train on a cluster of 32 high-end GPUs (probably Pascal) can be trained in an afternoon on one-eighth of a TPU Pod (a full pod is 64 TPUs, so that means 8 TPUs). Of course, standard GPUs can be used for all sorts of other things, while the Google TPUs are limited to the training and running of models written using Google's tools.

Google is making its Cloud TPUs available as part of its Google Compute offering, and says that they will be priced similarly to GPUs. That isn't enough information to say how they will compare in cost to renting time on an Nvidia V100, but I'd expect it to be very competitive. One drawback, though, is that the Google TPUs currently only support TensorFlow and Google's tools. As powerful as they are, many developers will not want to get locked into Google's machine learning framework.

While Google is making its Cloud TPU available as part of its Google Compute cloud, it hasn't said anything about making it available outside Google's own server farms. So it isn't competing with on-premise GPUs, and certainly won't be available on competing clouds from Microsoft and Amazon. In fact, it is likely to push those rivals to deepen their partnerships with Nvidia.

The other company that should probably be worried is Intel. It has been woefully behind in GPUs, which means it hasn't made much of a dent in the rapidly growing market for GPGPU (general-purpose computing on GPUs), of which machine learning is a huge part. This is just one more way in which chip dollars that could have gone to Intel won't.

Big picture, more machine learning applications will be moving to the cloud. In some cases, if you can tolerate being pre-empted, it's already less expensive to rent GPU clusters in the cloud than it is to power them locally. That equation is only going to get more lopsided with chips like the Volta and the new Google TPU being added to cloud servers. Google knows that the key to increasing its share of that market is having more leading-edge software running on its chips, so it is making 1,000 Cloud TPUs available for free to researchers willing to share the results of their work.

The rest is here:
Google Targets Nvidia With Learning-Capable Cloud TPU - ExtremeTech

Cloud provider snubs SAN for StorPool hyper-converged infrastructure – ComputerWeekly.com

London-based managed services provider Coreix has opted for StorPool software-defined storage in preference to SAN storage. The company has built hyper-converged infrastructure instead, using SuperMicro x86 boxes as a server and storage platform.


The move allowed Coreix to avoid a large capital outlay on SAN storage and instead scale up from a few servers.

Coreix provides hosting, managed services, private and hybrid cloud, servers and colocation from its London datacentres to about 600 clients using some 1,500 physical servers plus Dell and EMC storage arrays.

It was reluctant to spend a lot of money on large SAN arrays that don't last forever.

The company wanted to build a public cloud offering to provide enterprise-class applications to customers, but its initial efforts using CloudStack as a platform were frustrated by Dell iSCSI SAN storage that struggled to perform adequately, said Paul Davies, technical director at Coreix.

"We had issues of IOPS and resiliency, and the SANs were generally over-contested. A SAN can be extremely resilient, but to get the IOPS you need to spend £250,000," he said.

Coreix looked around for new products to support the offering. "We didn't want to spend on a chassis that could take 1PB from day one. SANs involve a lot of capex [capital expenditure]; it's cost-prohibitive for us. We needed a model where we could scale," said Davies.

Coreix deployed a hyper-converged architecture based on 10 SuperMicro servers with four KVM virtual machine hypervisors and StorPool storage, using OnApp's cloud orchestration platform. Total storage capacity is around 20TB using 600GB flash drives.

StorPool offers software-defined storage that pools capacity from commodity servers (it specifies recommended server components such as CPU, RAM and network card) fitted with SATA drives (HDD or flash), providing performance of up to 100,000 IOPS per node.

It can provide hyper-converged infrastructure by pooling those resources so that the same box offers both server and storage capacity.

For Coreix, the advantage of building systems in-house from commodity hardware is the ability to scale from a few instances of server and storage hardware without having to spend on a big-ticket SAN.

"It's about cost-efficiency and flexibility and not being tied to one vendor," said Davies. "We can put our own CPUs in and add storage. We can buy as we grow and don't have to buy a big chassis to start with. With a SAN you always get caught on something. It's just more cost-efficient to do it this way."

Originally posted here:
Cloud provider snubs SAN for StorPool hyper-converged infrastructure - ComputerWeekly.com

Cisco’s servers are stuck in limbo, look likely to stay there – The Register

Comment Cisco has missed out on the blade-to-rack server shift, its sales growth has turned negative, it doesn't sell to cloud providers and it has a small market share. Should it invest to grow or get out of servers altogether?

Cisco's third fiscal 2017 quarter results were disappointing, with a 1 per cent decline in revenue year-on-year to $11.9bn. The data centre segment, meaning UCS servers mostly, made $767m in revenue and was down 5 per cent. It constitutes just 6 per cent of Cisco's overall revenues.

In the previous quarter data centre revenues were $790m, down 4 per cent year-on-year, and in the quarter before that they were $834m, down 3 per cent year-on-year. There is a pattern of decline here.

Stifel analyst and MD Aaron Rakers has charted this, showing Cisco data centre revenues and the year-on-year percentage change.

The chart shows actual numbers plus estimates across nine quarters.

UCS servers blazed a bright trail in the sky when they first arrived. What is going on?

Overall server sales are down, according to both Gartner and IDC. Dell and HPE lead the market, followed by IBM, Lenovo and Huawei.

IDC gave Cisco a 6.3 per cent market share in 2016's fourth quarter, with HPE having a 23.6 per cent share, Dell 17.6 per cent, IBM 12.3 per cent and Lenovo 6.5 per cent. Original design manufacturer (ODM) suppliers accounted for 7.9 per cent. Why is Cisco lagging?

Rakers charted quarterly server sales by architecture over the past few years.

Rack-optimised server sales are the big winners, with blade server sales second, a long way behind, and growth stopping. Density-optimised server sales are flattish, towers are in decline and large systems are the smallest category, although growing slightly.

Rakers next plotted Cisco's UCS server sales in the blade and rack segments, showing both revenues and revenue share percentages.

Most of Cisco's UCS revenues come from blade server sales, the declining second-placed architecture, and not rack servers, the main and growing segment. The conclusion is inescapable: Cisco has misread the server market badly, with revenue growth slowing drastically and then stopping from its first fiscal 2015 quarter, two and a half years ago.

Rakers said: "Cisco continues to face a misaligned portfolio for the mix from blade to rack servers i.e. Cisco has ~30 per cent revenue share in blades; sub-4 per cent share in rack servers."

Cisco sells its servers to enterprises, not to the hyperscalers or cloud service providers, which buy instead from ODMs such as Supermicro and Chinese server suppliers such as Inspur.

Cisco has been pushing its HyperFlex hyperconverged infrastructure appliance (HCIA), using OEM'd Springpath software. In March Cisco said it had gained 1,100 HyperFlex customers after nine months of sales. Nutanix has around 5,400 and we expect Dell EMC to be in that kind of area soon.

In its third-quarter results announcement Cisco did not update the 1,100 customer number. A Stifel survey of Cisco's VARs/resellers found 16 per cent thought HyperFlex was best positioned in the HCIA market while 40 per cent thought Nutanix was the leader. Some 66 per cent had sold HyperFlex systems into existing Cisco accounts, not new customers.

Rakers said that some 20 per cent of server revenues come from sales into the public cloud, and Cisco does not sell there, with ODMs and white box servers having around a 40 per cent share.

To sum up, Cisco's servers account for 6 per cent of its overall revenues, and these revenues have been declining for four quarters in a row. It has a 6.3 per cent share of the overall market, but a less than 5 per cent share in the biggest and growing rack server section. Its progress in the HCIA market was off to a good start, but it lags a long way behind market leaders Nutanix and Dell. HPE, by buying SimpliVity, is becoming a stronger competitor.

Finally, it is not a supplier to the public cloud server market.

It seems to us that, to make progress with servers, Cisco needs to get into rack servers in a big way. But there is a more fundamental question: what is its goal here? Does it want to be a leading server supplier, up with Dell and HPE? Or is it content to have a sub-10 per cent share of the market, selling into its installed base and under continual attack from Dell, HPE and the various Chinese and ODM suppliers?

If it wants to get up with the leaders then it has to spend a lot of money on engineering development and so forth. That will be a hard call when overall revenues are declining, servers are just 6 per cent of its business and it's laying people off.

Perhaps Cisco should step back, take a deep breath, and decide to exit the server market, selling its UCS business to Lenovo, say. Perhaps on the other hand it could try something radical, like buying Supermicro.

That doesn't fit with our view of Cisco, which moved into servers as an adjacent market to its core networking business. It then moved into storage as an adjacent market to servers, and failed. We think Cisco sees its server market prospects as being limited, and we can't see it making the investments needed to become a top four or five server supplier.

Looking ahead we reckon there'll likely be product line tweaking, statements of renewed commitment and determination, but little actual change in its situation. Servers are too large a part of Cisco's revenues to throw away, too small a part to be worth investing heavily in, and not in a dire enough situation to need a radical fix. They're stuck in limbo and look likely to stay there.

See more here:
Cisco's servers are stuck in limbo, look likely to stay there - The Register

Why we still fear working in the cloud – Augusta Free Press

Published Thursday, May 18, 2017, 3:00 pm


Whenever there is a new technology, there is a normal amount of concern that comes along with the territory. It's like the old saying goes: we naturally fear the unknown. But why do we still fear cloud computing when it's been around for so many years now and when, in actuality, the Internet itself is the Cloud? Even so, when asked why they aren't using more cloud-based services, private individuals, nonprofits and businesses alike all express one or more of the same key concerns over working in the Cloud, the first being security and privacy.

The funny thing about this concern is the fact that most of the big hacks we have heard about in recent years have involved local mainframes, not the cloud. For example, it wasn't a cloud server that was breached in the Oracle hack of 2016; it was the company's MICROS point-of-sale division, leading to a significant amount of panic within its customer base. Then there were the breaches within the Trump real estate systems, which had nothing to do with a cloud-based platform, because the system they were using was terribly antiquated and based on local hard drives.

A bit of advice seems to be called for here in light of all the concerns over security and privacy. Most IT professionals understand that cloud servers are actually many times more secure than local mainframes, thanks to 24/7 on-site security teams and the latest patches being applied as soon as problems are identified, so you might want to invest in a little PR. A well-rounded digital marketing agency such as Single Grain not only provides backlinks and PPC ads; it is also expert in content geared towards public relations. Let the public know that your cloud services have the latest security technology and what measures you will take to ensure customers' safety. Marketing pros like Single Grain are able to overcome objections even before they are made, and sometimes this is exactly what you need to do when fear is out of control!

The second greatest fear about cloud computing concerns service quality. Many startups have given the whole industry a bad name because they lacked the expertise or resources to offer the level of service their clients required. You see this time and again when major brands relegate their call centers to some obscure third-world country where the customer service reps don't even speak English fluently enough to understand what you are asking them!

In order to address this fear, it is suggested that you keep operations at home and seek out the top talent within your industry. Customers have a right to expect the service they are paying for, and if they encounter a glitch, no matter how trivial it may seem to you, it is nonetheless significant to them. If you've already gotten bad press, hire a digital marketing agency to engage in a full-out PR campaign. Nothing builds a business faster than word of mouth, but the opposite also holds true. Working in the cloud should be a worry-free process, so make every effort to offer that to your customers, and when all else fails, enlist the help of digital marketers to overcome any bad press. In the end, it pays.

Go here to see the original:
Why we still fear working in the cloud - Augusta Free Press

Nvidia: This Could Work Out Great, Says Bernstein – Barron’s – Barron’s

Bernstein analyst Stacy Rasgon started coverage of GPU chip maker Nvidia with the equivalent of a Buy rating and a $165 price target, arguing its market for ...

Read the rest here:
Nvidia: This Could Work Out Great, Says Bernstein - Barron's - Barron's