Category Archives: Cloud Servers

Is That Old Cloud Instance Running? How Visibility Saves Money in the Cloud – Business 2 Community

Is that old cloud instance running?

Perhaps you've heard this around the office. It shouldn't be too surprising: anyone who's ever tried to load the Amazon EC2 console has quickly found how difficult it is to keep a handle on everything that is running. Only one region gets displayed at a time, which makes it common for admins to be surprised when the bill comes at the end of the month. In today's distributed world, it not only makes sense for different instances to be running in different geographical regions, but it's encouraged from an availability perspective.


On top of this multi-region setup, many organizations are moving to a multi-cloud strategy as well. Many executives are stressing to their operations teams that it's important to run systems in both Azure and AWS. This provides extreme levels of reliability, but also complicates the day-to-day management of cloud instances.

So is that old cloud instance running?

You may get a chuckle out of the idea that IT administrators can lose servers, but it happens more frequently than we like to admit. If you only ever log in to US-East1, you might forget that your dev team in San Francisco uses US-West2 as its main development environment. Or perhaps you set up a second cloud environment to make sure your apps all work properly, but forgot to shut those instances down before returning to your main cloud.

That's where a single-view dashboard can provide administrators with unprecedented visibility into their cloud accounts. This is a huge benefit that leads to cost savings right off the bat, as the cloud servers you forgot about or thought you turned off can be seen in a single pane of glass. Knowledge is power: now that you know an instance exists, you can turn it off. You also get an easy view into how your environment changes over time, so you'll be aware if instances get spun up in various regions.
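
As a rough illustration of what such a dashboard does under the hood, the sketch below uses the AWS SDK for Python (boto3) to sweep every region for running EC2 instances. It is a minimal example of the cross-region sweep, not any particular vendor's product:

```python
# Minimal sketch: list every running EC2 instance in every region,
# so nothing hides behind the console's one-region-at-a-time view.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
regions = [r["RegionName"] for r in ec2.describe_regions()["Regions"]]

for region in regions:
    client = boto3.client("ec2", region_name=region)
    paginator = client.get_paginator("describe_instances")
    pages = paginator.paginate(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    )
    for page in pages:
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                print(f"{region}  {instance['InstanceId']}  {instance['InstanceType']}")
```

Run on a schedule and diffed against yesterday's output, even a script this small surfaces the forgotten instance before the end-of-month bill does.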

This level of visibility is also freeing: it lets you use more regions without fear of losing instances. Many folks know they should be distributed geographically, but don't want to deal with the headache of keeping track of the sprawl. By tracking all of your regions and accounts in one easy-to-use view, you can start to fully benefit from cloud computing without wasting money on unused resources.

Go here to see the original:
Is That Old Cloud Instance Running? How Visibility Saves Money in the Cloud - Business 2 Community

4 Stocks Set to Trump Earnings in the Technology Space – Investorplace.com

Technology has been one of the best-performing sectors in 2017. The Technology SPDR (NYSEARCA:XLK) has returned 9.1%, compared with the S&P 500's gain of 5.2% on a year-to-date basis. (Read More: 5 Hottest Tech ETFs of 2017)

The sector is well poised to flourish, driven by improving macroeconomic conditions in the U.S. and global markets. The rebound in the U.S. economy, as evident from improvements in GDP numbers, the Consumer Confidence Index, the unemployment rate and factory activity data, presents a significant growth opportunity for technology stocks.

Moreover, sector-specific factors like the rapid adoption of cloud computing, growing demand for Artificial Intelligence (AI) applications and the expanding use of the Internet of Things (IoT) in homes, offices and cars are key catalysts.

The rapid evolution of the hybrid cloud (a combination of on-premise servers and public cloud servers) is an important factor behind the adoption of cloud computing among enterprises, primarily due to faster data transfer and heightened privacy. Per Gartner, organizations that adopt hybrid infrastructure will optimize costs and increase efficiency.

Per MarketsAndMarkets projections, the hybrid cloud market is estimated to grow at a CAGR of 22.5% over the 2016-2021 time frame. In dollar terms, the market will grow from $33.28 billion in 2016 to $91.74 billion by 2021.
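
Those two figures are consistent with each other: applying the standard compound-annual-growth-rate definition over the five-year span recovers the quoted rate.

$$\text{CAGR} = \left(\frac{91.74}{33.28}\right)^{1/5} - 1 \approx 2.757^{0.2} - 1 \approx 0.225 = 22.5\%$$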

Meanwhile, AI-based applications have gained immense popularity among enterprises owing to their positive effects on operations. According to Accenture (ACN), AI-based technologies are projected to boost labor productivity by up to 40% by fundamentally changing the way work is done and reinforcing the role of people to drive growth in business.

Advancements in AI tools and related applications like Big Data analytics, Natural Language Processing (NLP), Machine Learning and Deep Learning are primarily responsible for AI's massive growth projections. Per MarketsAndMarkets data, the overall AI market is anticipated to be worth $16.06 billion by 2022, growing at a CAGR of 62.9% within the 2016-2022 time frame.

Projections for IoT are also significantly bullish. According to Forbes, which quoted BCG data, spending on IoT technologies, apps and solutions will reach almost $267 billion by 2020.

The technology space continues to be investors' favorite owing to its dynamic nature. The improving macro-environment, solid underlying fundamentals and impressive growth opportunities have been primarily responsible for the sector's earnings momentum, which is expected to persist in the first quarter.

According to the latest Earnings Preview, the sector's total earnings are expected to grow 10.7% from the same period last year on 6.2% higher revenues. This is higher than the 9.2% earnings growth witnessed in fourth-quarter 2016 on a 5.6% increase in the top line.

With the existence of a number of industry players, finding the right stocks that have the potential to beat earnings could be a daunting task. Our proprietary methodology, however, makes it fairly simple for you. You could narrow down the list of choices by looking at stocks that have the combination of a favorable Zacks Rank #1 (Strong Buy), 2 (Buy) or 3 (Hold) and a positive Earnings ESP.

Earnings ESP is our proprietary methodology for determining stocks that have the best chance to surprise with their next earnings announcement. It provides the percentage difference between the Most Accurate estimate and the Zacks Consensus Estimate.
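
Written out, the percentage difference described above takes this form (a sketch of the standard definition; Zacks' exact convention may differ in details such as sign handling):

$$\text{Earnings ESP} = \frac{\text{Most Accurate Estimate} - \text{Zacks Consensus Estimate}}{\lvert \text{Zacks Consensus Estimate} \rvert} \times 100\%$$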

Our research shows that for stocks with this combination, the chance of a positive earnings surprise is as high as 70%.

Let's take a look at four technology providers that have the right combination of elements to post an earnings beat this quarter:


Read this article:
4 Stocks Set to Trump Earnings in the Technology Space - Investorplace.com

Letting the Cat Out of the Bag: Public Cloud has Latency Issues! – InfoWorld


Technology, like everything else, has trends or cycles. Cloud started more than 10 years ago and was the hot, new tech trend. But now, are things starting to shift again? Are organizations thinking twice before automatically moving essential workloads to the public cloud?

The answer is yes, and for a variety of reasons. A few born-in-the-cloud companies have now moved from the public cloud back to on-premises data centers; Dropbox is a high-profile example. And public cloud performance (or the lack thereof) was a big reason why.

Letting the cat out of the bag: Public cloud is all about capacity, not performance

When businesses choose to put their applications in the public cloud, they are sharing infrastructure with a lot of other people. Of course, this can be a good solution because it means that you only pay for what you need when you need it. Public cloud also gives businesses the ability to scale up or down based upon demand.

But don't forget the whole business model of public cloud: time-sharing. The provider is giving everyone a slice of the timeshare pie, which means that the provider is promising capacity, not performance. I am not the first person to let this particular cat out of the bag. I just want to reiterate it: yes, public cloud providers do place performance limits on the services they provide.

Of course, for workloads you deploy on premises, you get to decide what the performance slice should be. Having this choice is imperative for applications that require reduced latency, such as those for big data and financial services.

Are new technologies making data centers new again?

Looking forward, two new technologies are now available that can boost performance for all applications. These technologies are containers and composable infrastructure. Running containers on composable infrastructure can ensure better performance for all applications.

Containers are an open source form of OS-level virtualization: they share a common lightweight Linux OS and keep only the pieces that are unique to each application inside the container. That means you can fit many more containers on a particular server than virtual machines (VMs).

A big benefit of containers is increased performance. And when you run containers on bare metal, performance is increased even more! This is because containers running on bare metal don't require a hardware emulation layer that separates the applications from the server.

HPE and Docker recently tested the performance of applications running inside a single, large VM versus directly on top of a Linux operating system installed on an HPE server. When bare-metal Docker servers were used, performance of CPU-intensive workloads increased by up to 46%. For businesses where performance is paramount, these results tell a compelling story.

Yet, some companies have hesitated to move containers out of virtual machines and onto bare metal because of perceived drawbacks of running containers on bare-metal servers. These drawbacks, such as difficulties with managing physical servers, are definitely relevant when considering yesterday's data center technologies. Composable infrastructure helps overcome these challenges by making management simple through highly automated operations controlled through software.

Composable infrastructure consists of fluid pools of compute, storage, and fabric that can dynamically self-assemble to meet the needs of an application or workload. These resources are defined in software and controlled programmatically through a unified API, thereby transforming infrastructure into a single line of code that is optimized to the needs of the application.
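
As a concrete illustration of "defined in software and controlled programmatically through a unified API," the sketch below composes a server profile over REST. The endpoint paths, payload fields, and token are invented for illustration; this is not HPE's actual API.

```python
# Hypothetical sketch of composing a compute resource through a
# unified REST API. All names here are placeholders, not a real product API.
import requests

API = "https://composer.example.com/rest"      # hypothetical composer endpoint
HEADERS = {"Auth": "example-session-token"}    # placeholder auth token

# Define the desired server in software: the profile draws compute,
# storage, and fabric from the fluid resource pools described above.
profile = {
    "name": "container-host-01",
    "compute": {"cores": 32, "memoryGiB": 256},
    "storage": [{"sizeGiB": 500, "raid": "RAID1"}],
    "fabric": [{"network": "prod-net", "bandwidthGbps": 10}],
}

# One API call asks the infrastructure to assemble physical resources
# matching the profile -- the "single line of code" idea in practice.
resp = requests.post(f"{API}/server-profiles", json=profile,
                     headers=HEADERS, timeout=30)
resp.raise_for_status()
print("Profile created:", resp.json().get("uri"))
```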

Because composable infrastructure is so simple to deploy and easy to use, it removes many of the drawbacks you would traditionally encounter when deploying containers on bare-metal. The end result is better performance at lower costs within your own data center. The combination of containers and composable infrastructure is a marriage made in heaven.

A hybrid IT cloud strategy solves the performance problem of public cloud

When considering where to deploy, first consider the performance needs of your application. Then compare those performance needs against the service levels offered by public cloud vendors and what you can deliver on premises. As I wrote in a previous article, businesses need to determine which workloads should be in the public cloud and which ones should remain on traditional IT or a private cloud. And thanks to today's new technologies, containers and composable infrastructure, staying with traditional data-center deployments may just be the better choice.

To learn more about containers running on HPE bare-metal servers, click here. To read about the benefits of HPE's first composable infrastructure, HPE Synergy, read HPE Synergy for Dummies. To find out how HPE can help you determine a workload placement strategy and how to best meet your service level agreements, check out HPE Pointnext.


Link:
Letting the Cat Out of the Bag: Public Cloud has Latency Issues! - InfoWorld

Embrace our cloud, damn you: Microsoft dangles 40% discount on Azure instances – The Register


Microsoft has started offering substantial Windows Server licence discounts as an incentive to embrace its cloud.

Redmond has rolled out its Azure Hybrid Use Benefit scheme, which it says can cut up to 40 per cent off the price of Windows Server virtual instances on Azure.

Azure Hybrid Use Benefit applies to two-processor and 16-core Windows Server licences covered under Microsoft's volume Software Assurance programme.

The 40 per cent saving depends on usage, instance type and location.

Microsoft has wrapped the discount with new tools as a further incentive to drive uptake of Azure.

Also released was Azure Site Recovery to migrate virtual machines from AWS, VMware, Hyper-V or physical servers. The service lets you tag virtual machines within the Azure portal without needing to employ PowerShell.

A Cloud Migration Assessment has also been rolled out, which lets you discover servers in your on-prem setting and analyse their hardware configuration.

Read the original post:
Embrace our cloud, damn you: Microsoft dangles 40% discount on Azure instances - The Register

Comcast Business offers direct connection to IBM Cloud network – Computerworld

Comcast Business announced Thursday it now offers direct, dedicated network links to the IBM Cloud global network.

The move positions Comcast Business, a unit of Comcast, to compete against AT&T, Verizon, Bell Canada and other telecom service providers already offering IBM Cloud Direct Link services.

Comcast Business already claims to be the nation's biggest cable provider to small and mid-sized businesses. The IBM partnership could be a way for Comcast Business to grow, especially among larger businesses and enterprises.

Enterprises will have "more choices for connectivity so they can store data, optimize their workloads and execute mission-critical applications in the cloud, whether it be on-premise, off-premise or a combination of the two," said Jeff Lewis, vice president of data services at Comcast Business, in a statement. Customers can select speeds up to 10 gigabits/second.

Other than the price of connectivity and the ability to potentially offer lower prices than AT&T and Verizon, analysts said they aren't sure what Comcast is providing enterprise customers that is distinct. Neither Comcast nor IBM announced pricing.

"Business services is the only area of substantial growth at Comcast right now," said Bill Menezes, an analyst at Gartner. "It makes sense for Comcast to align with as many major partners as possible so their customers see Comcast as a significant, broad player who can meet their requirements across major regions and services. Cloud is a major demand item for the enterprise right now and Comcast doesn't want to miss out on business by having too few customer options.''

The same can be said for IBM as well. "IBM is trying to connect with all the major carriers and network connectors to ensure that they are not shut out of the enterprise cloud business," said Jack Gold, an analyst at J. Gold Associates. "IBM sees this partnership as potential leverage, particularly against the likes of Microsoft Azure, Google Cloud and AWS, which have a larger share of the enterprise cloud market than IBM does."

Gold said IBM also wants to appeal to mid-sized companies that are less likely to have their own dedicated networks and more likely to outsource that capability to carriers and cable providers. "This is a potentially easier path for such businesses when they go to the cloud," he said.

IBM is saying to their customers that "we are making the connection part of cloud really easy for you," Gold added.

Some enterprise customers can be expected to buy the Direct Link service from Comcast Business, Gold said. "Enterprises definitely need help with cloud implementations and anything that can make it easier for them is a good thing," he said.

Still, the bigger question is whether IBM can be the cloud provider of choice "given that so many enterprises are Microsoft-centric and have Azure on their minds as the preferred path," Gold added. Microsoft has made the transition to Azure easier with its Azure Stack, an on-premises version of Azure cloud.

"Also, Google is pushing hard in the enterprise area now," Gold said.

A recent IDC survey showed that 73% of respondents have developed a hybrid strategy, but only 13% said they have all the skills and processes in place to execute on that strategy. IBM said a secure and dedicated connection to its cloud service, like the one Comcast Business is offering, will allow enterprises to easily preserve their existing IT investments while transitioning to a hybrid cloud environment. There, they can build next-generation cognitive computing and services around the internet of things.

IBM has a global network of more than 50 data centers across 19 countries, while Comcast Business boasts that its network connects to nearly 500 data centers and cloud exchanges for access to multiple cloud providers.

Comcast Business and IBM said a direct connection to the cloud will help with better performance, security and availability, especially compared with doing business over the open internet. Comcast Business, like many others, offers customers a service level agreement -- a contract that states such things as the level of network reliability, up-time and other factors.

Most companies are evaluating how to get to the cloud and for the next few years will build hybrid approaches that rely on both on-premises and public cloud servers, Gold said. "Comcast and all the carriers and internet service providers want to jump on the bandwagon that is cloud," he added.

More here:
Comcast Business offers direct connection to IBM Cloud network - Computerworld

AI Boom Boosts GPU Adoption, High-Density Cooling – Data Center Frontier (blog)

A row of eight NVIDIA graphics processing units (GPUs) packed into a Big Sur machine learning server at Facebook's data center in Prineville, Oregon. (Photo: Rich Miller)

The data center team at eBay is plenty familiar with high density data centers. The e-commerce giant has been running racks with more than 30 kilowatts (kW) of power density at the SUPERNAP in Las Vegas, seeking to fill every available slot in racks whenever possible.

But as eBay has begun applying artificial intelligence (AI) to its IT operations, the company has deployed more servers using graphics processing units (GPUs) instead of traditional CPUs.

"From a data center power and cooling perspective, they're a real challenge," said Serena DeVito, an Advanced Data Center Engineer at eBay. "Most data centers are not ready for them. These are really power hungry little boxes."

The rise of artificial intelligence, and the GPU computing hardware that often supports it, is reshaping the data center industry's relationship with power density. New hardware for AI workloads packs more computing power into each piece of equipment, boosting the power density (the amount of electricity used by servers and storage in a rack or cabinet) and the accompanying heat. The trend is challenging traditional practices in data center cooling, and prompting data center operators to adopt new strategies and designs.

All signs suggest that we are in the early phase of the adoption of AI hardware by data center users. For the moment, the trend is focused on hyperscale players, who are pursuing AI and machine learning at Internet scale. But soon there will be a larger group of companies and industries hoping to integrate AI into their products, and in many cases, their data centers.

Amazon Web Services, Microsoft Azure, Google Cloud Platform and IBM all offer GPU cloud servers. Facebook and Microsoft have each developed GPU-accelerated servers for their in-house machine learning operations, while Google went a step further, designing and building its own custom silicon for AI.

"AI is the fastest-growing segment of the data center, but it is still nascent," said Diane Bryant, the Executive VP and General Manager of Intel's Data Center Group. Bryant says that 7 percent of servers sold in 2016 were dedicated to AI workloads. While that is still a small percentage of its business, Intel has invested more than $32 billion in acquisitions of Altera, Nervana and Mobileye to prepare for a world in which specialized computing for AI workloads will become more important.

The appetite for accelerated computing shows up most clearly at NVIDIA, the market leader in GPU computing, which has seen its revenue from data center customers leap 205 percent over the past year. NVIDIA's prowess in parallel processing was seen first in supercomputing and high-performance computing (HPC), and supported by facilities with specialized cooling using water or refrigerants. The arrival of HPC-style density in data centers is driven by the broad application of machine learning technologies.

"Deep learning on NVIDIA GPUs, a breakthrough approach to AI, is helping to tackle challenges such as self-driving cars, early cancer detection and weather prediction," said NVIDIA cofounder and CEO Jen-Hsun Huang. "We can now see that GPU-based deep learning will revolutionize major industries, from consumer internet and transportation to health care and manufacturing. The era of AI is upon us."

And with the dawn of the AI era comes a rise in rack density, first at the hyperscale players and soon at multi-tenant colocation centers.

How much density are we talking about? "A kilowatt per rack unit is common with these GPUs," said Peter Harrison, the co-founder and Chief Technical Officer at Colovore, a Silicon Valley colocation business that specializes in high-density hosting. "These are real deployments. These customers are pushing to the point where 30kW or 40kW loads (per cabinet) are easily possible today."
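
Harrison's two figures line up: at roughly a kilowatt per rack unit, populating 30 to 40U of a standard 42U cabinet yields exactly the loads he describes.

$$1\,\mathrm{kW/U} \times (30\text{-}40)\,\mathrm{U} \approx 30\text{-}40\,\mathrm{kW\ per\ cabinet}$$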

A good example is Cirrascale, a service provider that specializes in GPU-powered cloud services for AI and machine learning. Cirrascale hosts some of its infrastructure in custom high-density cabinets at the ScaleMatrix data center in San Diego.

"These technologies are pushing the envelope," said Chris Orlando, the Chief Sales and Marketing Officer and a co-founder of ScaleMatrix. "We have people from around the country seeking us out because they have dense platforms that are pushing the limits of what their data centers can handle. With densities and workloads changing rapidly, it's hard to see the future."

Cirrascale, the successor to the Verari HPC business, operates several rows of cabinets at ScaleMatrix, which house between 11 and 14 GPU servers per cabinet, including some connecting eight NVIDIA GPUs using PCIe, a configuration also seen in Facebook's Big Sur AI appliance and the NVIDIA DGX-1 "supercomputer in a box."

Over the past decade, there have been numerous predictions of the imminent arrival of higher rack power densities. Yet extreme densities remain limited, primarily seen in HPC. The consensus view is that most data centers average 3kW to 6kW a rack, with hyperscale facilities running at about 10kW per rack.

Yet the interest in AI extends beyond the HPC environments at universities and research labs, bringing these workloads into cloud data centers. Service providers specializing in high-density computing have also seen growing business from machine learning and AI workloads. These companies use different strategies and designs to cool high-density cabinets.

A TSCIF aisle containment system inside the SUPERNAP campus in Las Vegas. (Photo: Switch)

The primary strategy is containment, which creates a physical separation between cold air and hot air in the data hall. One of the pioneers in containment has been Switch, whose SUPERNAP data centers use a hot-aisle containment system to handle workloads of 30kW a rack and beyond. This capability has won the business of many large customers, allowing them to pack more computing power into a smaller footprint. Prominent customers include eBay, with its historic focus on density, which hosts its GPU-powered AI hardware at the SUPERNAPs in Las Vegas.

For hyperscale operators, data center economics dictates a middle path on the density spectrum. Facebook, Google and Microsoft operate their data centers at higher temperatures, often above 80 degrees in the cold aisle. This saves money on power and cooling, but those higher temperatures make it difficult to manage HPC-style density. Facebook, for example, seeks to keep racks around 10 kW, so it runs just four of its Big Sur and Big Basin AI servers in each rack. Each unit occupies 3U of rack space.

Facebook's machine learning servers feature eight NVIDIA GPUs, which the custom chassis design places directly in front of the cool air being drawn into the system, removing preheat from other components and improving the overall thermal efficiency. Microsoft's HGX-1 machine learning server, developed with NVIDIA and Ingrasys/Foxconn, also features eight GPUs.

A custom rack in a Google data center packed with Tensor Processing Unit hardware for machine learning. (Photo: Google)

While much of the AI density discussion has focused on NVIDIA gear, GPUs aren't the only hardware being adopted for artificial intelligence computing, and just about all of these chips result in higher power densities.

Google decided to design and build its own AI hardware centered on the Tensor Processing Unit (TPU), a custom ASIC tailored for Google's TensorFlow open source software library for machine learning. An ASIC (Application-Specific Integrated Circuit) is a chip that can be customized to perform a specific task, squeezing more operations per second into the silicon. A board with a TPU fits into a hard disk drive slot in a data center rack.
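
For context on what the TPU accelerates, here is a minimal TensorFlow example using the 1.x-era API that was current when this article ran. The matrix multiply below is the kind of dense operation the custom silicon is built to speed up; this is generic illustration, not TPU-specific code.

```python
# Minimal TensorFlow 1.x example: a large matrix multiply, the core
# operation behind most neural-network layers.
import tensorflow as tf

a = tf.random_normal([1024, 1024])
b = tf.random_normal([1024, 1024])
c = tf.matmul(a, b)  # dense linear algebra an accelerator speeds up

with tf.Session() as sess:
    result = sess.run(c)
    print(result.shape)  # (1024, 1024)
```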

"Those TPUs are more energy dense than a traditional x86 server," said Joe Kava, the Vice President of Data Centers at Google. "If you have a full rack of TPUs, it will draw more power than a traditional rack. It hasn't really changed anything for us. We have the ability in our data center design to adapt for higher density. As a percentage of the total fleet, it's not a majority of our (hardware)."

Tomorrow: We look at data center service providers focused on GPU hosting, and how they are designing for extreme density.

See the rest here:
AI Boom Boosts GPU Adoption, High-Density Cooling - Data Center Frontier (blog)

Microsoft’s new software tool helps enterprises evaluate cloud move – PCWorld


IT professionals who want help getting a handle on a potential cloud migration have a new tool from Microsoft. The company is offering a Cloud Migration Assessment service that walks customers through an evaluation of the resources they currently use, in order to determine what a move to the cloud would cost.

Microsoft's cost calculation is driven in part by the Azure Hybrid Use Benefit, which lets customers apply their existing Windows Server licenses with Software Assurance to virtual machines running in Microsoft's cloud. That means customers only have to pay the base price for the compute resources they use.

Also starting Wednesday, all customers can invoke the discount from the Azure Management Portal. In the past, this type of deployment of discounted virtual machine images was limited to companies who have enterprise agreements with Microsoft. Others had to use Azure PowerShell to configure the discounts.
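
For those non-portal paths, here is a minimal sketch of what the configuration amounts to, shown with the Azure SDK for Python rather than PowerShell: setting a VM's license_type to "Windows_Server" invokes the Hybrid Use Benefit. Method names vary across SDK versions (older releases use update instead of begin_update), and the resource names below are placeholders.

```python
# Sketch: apply the Hybrid Use Benefit to an existing VM by setting
# license_type, signalling that an existing Windows Server licence
# with Software Assurance covers this machine.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient
from azure.mgmt.compute.models import VirtualMachineUpdate

client = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")

poller = client.virtual_machines.begin_update(
    "my-resource-group",   # placeholder resource group
    "my-windows-vm",       # placeholder VM name
    VirtualMachineUpdate(license_type="Windows_Server"),
)
print("license_type:", poller.result().license_type)
```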

The moves are part of Microsoft's overall push to get its enterprise customers to move more of their workloads from on-premises servers to the Azure public cloud. The tech titan has been emphasizing tools for running hybrid cloud configurations for quite some time.

"In the past year, we've seen lots of other vendors also starting to talk about hybrid and realizing that it's central to the vast majority of organizations' IT strategies," said Julia White, Microsoft's corporate vice president for Azure marketing. "And this push here, whether it be the migration tools or in general, better amplifying and clarifying our hybrid capabilities, is all in the essence of recognizing that [hybrid] is the approach for most customers, and it needs to be done in a way that can be durable."

The Cloud Migration Assessment tool lets users manually enter the compute, networking and storage resources that they're already using, or import the same information from an Excel file that's either user-composed or generated by the Microsoft Assessment and Planning Toolkit.

Microsoft's tool takes that information and provides users with a graph that shows them a model for the costs of continuing to run a data center, along with how much they'll pay for running the same workloads in Azure. The tool offers a set of default assumptions about how much an on-premises deployment costs, but customers who have information about the costs associated with their environment can input those instead.
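
The shape of that comparison is easy to sketch. The cost categories and figures below are invented defaults for illustration, not Microsoft's actual model; the point is that every assumption is a parameter the user can override with numbers from their own environment.

```python
# Illustrative cost comparison in the spirit of the assessment tool.
# All defaults are made-up per-server, per-year assumptions.
def onprem_cost(servers, years, per_server_hw=4000, power_cooling=1200, admin=800):
    # Hardware amortized over 3 years, plus annual power/cooling and admin.
    return servers * years * (per_server_hw / 3 + power_cooling + admin)

def azure_cost(servers, years, vm_rate_hr=0.10, hours_per_year=8760, hybrid_benefit=0.40):
    # Base compute price only, with an optional Hybrid Use Benefit discount.
    return servers * years * vm_rate_hr * hours_per_year * (1 - hybrid_benefit)

for y in (1, 3, 5):
    print(f"{y}y  on-prem ${onprem_cost(50, y):,.0f}  azure ${azure_cost(50, y):,.0f}")
```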

In order to get access to the tool, users have to hand over their name, contact information, and the name of their company. Microsoft will use that to follow up with users about their experience, and will also work to connect those companies with partner businesses that can help with migration if that makes sense.

"Much like Microsoft in general, we remain very partner-led," White said. "And so, when we can match a great partner with a customer that needs them, that's what we aim to do."

On top of all this, Microsoft also announced that its Azure Site Recovery migration tool will be updated in the coming weeks so that users can more easily use AHUB discounts when migrating from other environments. When that update goes through, users will be able to tag Windows Server VMs that they're migrating for hybrid use discounts. That may entice people to move their Windows Server virtual machines from AWS and on-premises hardware into Azure by making it easier to do so.

Blair Hanley Frank is primarily focused on the public cloud, productivity and operating systems businesses for the IDG News Service.

Continued here:
Microsoft's new software tool helps enterprises evaluate cloud move - PCWorld

Oracle CEO: We Can Beat Amazon and Microsoft Without as Many Data Centers – Fortune

Conventional wisdom in the public cloud market is that there are three leaders: Amazon Web Services, followed by Microsoft Azure and Google Cloud Platform.

Those companies assemble and sell massive arrays of servers, storage, and networking to businesses, most of which don't want to build more of their own data centers. Toward that end, those three cloud superpowers alone spent roughly $31 billion last year to extend their data center capacity around the world, according to the Wall Street Journal, which tabulated that total from corporate filings.

By comparison, Oracle (ORCL), which is making its own public cloud push, spent about $1.7 billion. To most observers, that looks like a stunning mismatch.

But Mark Hurd, Oracle's co-chief executive, would beg to differ. In his view, there are data centers and then there are data centers. And Oracle's data centers, he said, can be more efficient because they run Oracle hardware and supercharged databases.

"We try not to get into this capital expenditure discussion. It's an interesting thesis that whoever has the most capex wins," Hurd said in response to a question from Fortune at a Boston event on Tuesday. "If I have two-times faster computers, I don't need as many data centers. If I can speed up the database, maybe I need one fourth as may data centers. I can go on and on about how tech drives this."


"Our core advantage is what we've said all along, which is that it's about the intellectual property and the software, not about who's got the most real estate," Hurd added. "We have spent billions over the past year, but in isolation, that's a discrete argument that I find interesting, but not fascinating."

Following up via email, Hurd said: "This isn't a battle of capex. This is about R&D, about technology, software, innovation and IP; and then the capex to make it work."

Oracle has said it runs its data centers on Oracle Exadata servers, which are turbocharged machines that differ fundamentally from the bare-bones servers that other public cloud providers deploy by the hundreds of thousands in what is called a scale-out model. The idea is that when a server or two among the thousands fail, as they will, the jobs get routed to still-working machines. It's about designing applications that are easily redeployed.

Oracle is banking more on what techies call a "scale-up" model in which fewer, but very powerful, computers (in Exadata's case, each with its own integrated networking and storage) take on big workloads.

Oracle execs, including executive chairman Larry Ellison, have argued that Oracle's big machines can actually work cheaper and more efficiently than the other public cloud configurations. Many industry analysts have their doubts, maintaining that Oracle must spend much more to catch up with Amazon. Toward that end, in January, Oracle announced plans to add three new data center farms within six months and more to come.

There are those who think that Fortune 500 companies relying on Oracle databases and financial applications give Oracle an advantage, because such companies are loath to move those workloads to another cloud provider, despite AWS wooing them with promises of easy migrations and other perks.

In late March, AWS chief executive Andy Jassy claimed the company had converted 22,000 databases from other vendors to its own database services. AWS (AMZN) does not break out which databases those customers had been using.

Hurd took up that point as well: "How much database market will Oracle lose to [Amazon] Aurora? My guess is close to zero." (Aurora is one of several database options that AWS offers.)

"The third largest database in the world is IBM DB2, and it's been going out of business for 20 years," Hurd said in a characterization that IBM ( ibm ) would dispute. "If it was so easy to replace databases, DB2 market share would be zero."

That is because most databases, which companies rely on as the basis for core accounting and financial operations, run custom programming, which is hard to move.

Still, in Amazon, Oracle (as well as IBM, Microsoft, and virtually every legacy information technology provider) faces a huge challenge. AWS is on track to log more than $14 billion in revenue this year.

Note: (April 12, 2017 12:55 p.m.) This story was updated to add an additional quote from Oracle's Mark Hurd.

Go here to see the original:
Oracle CEO: We Can Beat Amazon and Microsoft Without as Many Data Centers - Fortune

Global Cloud Servers Industry 2017 Market Solutions, Opportunities, Applications, Trends & Services : IBM, Fujitsu … – MilTech

Brooklyn, NY (SBWIRE), 04/11/2017: This research report provides an in-depth analysis of the global Cloud Servers market. The report offers insights into instrumental figures, news, and facts pertaining to the market at both global and regional levels. It serves as a repository of analysis and data regarding various important parameters including application, technology, and product.

The report covers global Cloud Servers market competition by top manufacturers, with production, price, revenue (value) and market share for each manufacturer; the top players include:

Dell, HP, IBM, Oracle, Cisco, Fujitsu, Hitachi, NEC

Geographically, this report is segmented into several key Regions, with production, consumption, revenue (million USD), market share and growth rate of Cloud Servers in these regions, from 2012 to 2022 (forecast), covering

United States, EU, China, Japan, South Korea, Taiwan

To Get Free Sample Copy of this Report Visit @ http://www.qyresearchreports.com/sample/sample.php?rep_id=1016550&type=E

On the basis of product, this report displays the production, revenue, price, market share and growth rate of each type, primarily split into

Public Cloud, Private Cloud, Hybrid Cloud, Community Cloud

The research report uses tools such as value chain analysis and investment feasibility and return analysis to offer an extensive understanding of the nature of the Cloud Servers industry. The former includes an analysis of the cost structure of the manufacturing capacity, the product catalog, and industry policies that influence the global Cloud Servers market.

Reliable forecasts by industrial experts on critical aspects such as production, price, and profit are also included in the research report. The report sheds light on import and export data, and upstream raw materials and downstream demand in the global Cloud Servers market. In addition, the report also provides recommendations that can help existing players as well as new entrants in formulating crucial business strategies.

Explore Complete Report in detail @ http://www.qyresearchreports.com/report/global-cloud-servers-market-research-report-2017.htm

Table of Contents

1 Cloud Servers Market Overview
1.1 Product Overview and Scope of Cloud Servers
1.2 Cloud Servers Segment by Type (Product Category)
1.2.1 Global Cloud Servers Production and CAGR (%) Comparison by Type (Product Category) (2012-2022)
1.2.2 Global Cloud Servers Production Market Share by Type (Product Category) in 2016
1.2.3 Public Cloud
1.2.4 Private Cloud
1.2.5 Hybrid Cloud
1.2.6 Community Cloud
1.3 Global Cloud Servers Segment by Application
1.4 Global Cloud Servers Market by Region (2012-2022)
1.4.1 Global Cloud Servers Market Size (Value) and CAGR (%) Comparison by Region (2012-2022)
1.4.2 United States Status and Prospect (2012-2022)
1.4.3 EU Status and Prospect (2012-2022)
1.4.4 China Status and Prospect (2012-2022)
1.4.5 Japan Status and Prospect (2012-2022)
1.4.6 South Korea Status and Prospect (2012-2022)
1.4.7 Taiwan Status and Prospect (2012-2022)

Explore Latest QYResearch News & Articles @ http://www.qyresearchreports.com/press-releases.htm

See original here:
Global Cloud Servers Industry 2017 Market Solutions, Opportunities, Applications, Trends & Services : IBM, Fujitsu ... - MilTech

Data centers decline as users turn to rented servers – Computerworld

Data centers are declining worldwide both in numbers and square footage, according to IDC -- a remarkable change for an industry that has seen booming growth for many years.

Users are consolidating data centers and increasingly renting server power. These two trends are having a major impact on data center space.

The number of data centers worldwide peaked at 8.55 million in 2015, according to IDC. That figure began declining last year, and is expected to drop to an expected 8.4 million this year. By 2021, the research firm expects there to be 7.2 million data centers globally, more than 15% fewer than in 2015.
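
That "more than 15%" claim checks out arithmetically against IDC's own figures:

$$\frac{8.55 - 7.2}{8.55} \approx 15.8\%$$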

The global square footage of data centers, after recent boom times, is also expected to slide. In 2013, data centers totaled 1.6 billion square feet. That was when big service providers like Amazon, Microsoft and Google were building huge data center complexes -- pushing square footage globally to 1.8 billion this year.

But IDC expects that number to decline from now on. Cloud adoption is a major reason for the trend.

"Consider the adoption of Office 365," said Tad Davies, who heads consulting services at Bick Group, a data center consultancy. "Easy to move to and eliminates infrastructure in my data center," he said. "Same for CRM."

Consolidation is also playing a role, said Davies, as are new approaches to computing. New firms are adopting "cloud first" strategies, he said. "As they grow into larger organizations, the data center is never created."

Large users -- especially the U.S. government -- have been shrinking their data center space to drive efficiency. Better server utilization often means more consolidation.

While the biggest decline is affecting in-house data centers, said IDC, service provider data centers continue to expand. But even there, the pace of growth is moderating as the market matures.

Despite stagnant growth, data centers are still needed, Davies said. That's because there are limits to what can go into the cloud.

"Many applications that end users have built and further refined over the years are not cloud compatible," he said. "To get there requires significant re-architecture as well as investment."

The cloud is not necessarily less expensive than an on-premise operation, said Davies. But it does provide speed, flexibility and an operating expense, or OPEX, model.

In terms of revenue, the data center system market, which includes software and hardware, is barely growing, according to research firm Gartner.

"Enterprises are moving away from buying servers from the traditional vendors and instead renting server power in the cloud from companies such as Amazon, Google and Microsoft," John-David Lovelock, research vice president at Gartner, said in a statement. "This has created a reduction in spending on servers, which is impacting the overall data center system segment."

Last year, spending on data centers declined 0.1%, said Gartner. This year it's expected to increase by only 0.3%.

Link:
Data centers decline as users turn to rented servers - Computerworld