
New solution guarantees 100 percent uptime for private cloud storage – BetaNews

Whether public or private, one of the key factors businesses consider in choosing a cloud service is to ensure maximum availability.

Cloud storage specialist Scality is announcing its new HALO Cloud Monitor, a 24/7 solution to provide customers continuous uptime for their managed private cloud storage environments.

Designed to work with the Scality RING object storage platform, HALO continuously monitors customer environments in real time and provides predictive analytics to ensure storage systems are performing optimally. Comprehensive dashboards offer diagnostic metrics, monitoring system-level statistics, component processes, memory, disk and many other key elements. It provides user-friendly visualization of events, proactive fault and incident detection, configuration assistance and system health checks.

The HALO system uses smart learning, employing previous system behavior to define predictive range key performance indicators. These KPIs can then detect changes in the storage environment before they become problematic. Automatic alerts are triggered to notify key personnel who can then proactively respond and maintain continuous uptime.
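The article doesn't describe HALO's internals, but a predictive-range KPI of the kind described (learn a band from previous behavior, alert when a new reading leaves it) can be sketched roughly like this; the metric, the k=3 threshold, and all names here are illustrative assumptions, not Scality's implementation:

```python
from statistics import mean, stdev

def predictive_band(history, k=3.0):
    """Derive a KPI band from previous system behavior: mean +/- k std devs."""
    mu, sigma = mean(history), stdev(history)
    return mu - k * sigma, mu + k * sigma

def check_metric(history, latest, k=3.0):
    """Return True (alert) if the latest reading falls outside the learned band."""
    low, high = predictive_band(history, k)
    return not (low <= latest <= high)

# Disk-latency samples (ms) from a healthy period, then two new readings.
baseline = [4.1, 3.9, 4.3, 4.0, 4.2, 3.8, 4.1, 4.0]
print(check_metric(baseline, 4.2))   # within the band -> False, no alert
print(check_metric(baseline, 9.5))   # far outside -> True, notify personnel
```

In a real monitor the band would be recomputed on a rolling window per metric, so the "predictive range" tracks gradual drift while still flagging abrupt changes.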

"Our Scality RING system is designed to be 100 percent available and now with Scality HALO we have the cloud monitoring assurance for customers to guarantee 100 percent uptime for their Scality RING and S3 environments," says Daniel Binsfeld, VP of DevOps and Global Support at Scality. "Our customers, both service providers and enterprise companies, must deliver on strong service level agreements to their users. With Scality HALO we provide peace of mind and confidence that downtime can become a thing of the past for everyone."

HALO comes in two program levels: a fully featured DCS (Dedicated Care Service) edition and a Standard edition. The Standard edition is available to all Scality customers for free and includes up to 15 diagnostic metrics. The premium Scality HALO edition, with a 100 percent availability guarantee, is available to Scality Dedicated Care Service customers through the company's alliance partners and global network of ISVs and resellers.

For more information visit the Scality website.

Photo Credit: Sakonboon Sansri/Shutterstock


What The Rise Of Cloud Computing Means For IT Pros – Forbes

Twenty years ago, enterprise CIOs began using public cloud computing applications to ease the basic IT headache of maintaining and updating all systems and applications. Ever since, industry watchers have been predicting those CIOs would someday ...


High-performance computation is available by cloud computing … – Science Daily


A group of researchers has developed the world's first system for flexibly providing high-performance computation by cloud computing.


Ambitious Alibaba takes aim at the kings of cloud computing – TechCrunch

When you think of the biggest cloud players in the world, one company you might not consider is Alibaba, the Chinese e-commerce giant that held a record $25 billion U.S. IPO in 2014.

Alibaba entered the cloud computing business in 2009, just three years after Amazon launched its cloud division, AWS. Alibaba's cloud computing efforts are among the ambitious projects that the company is pursuing aggressively.

It's impossible not to note the similarities between the two companies. While Alibaba is the premier e-commerce company in China, Amazon is the biggest in the U.S.

The utter dominance of both is proven on paper: NASDAQ-listed Amazon's market cap exceeds $400 billion, while Alibaba is valued at $250 billion according to its NYSE share price. When it comes to the cloud, the nature of their core businesses and the size of their computing requirements both necessitate computing on a massive scale. Both believe they can parlay that knowledge and experience into a significant business offering cloud services to others.

Two years ago, Alibaba decided to take the cloud part of its business more seriously and expand outside of China with a billion-dollar investment in Aliyun (now known as Alibaba Cloud in English). At the time, Alibaba Cloud's president Simon Hu made a bold prediction, telling Reuters, "Our goal is to overtake Amazon in four years, whether that's in customers, technology, or worldwide scale."

We're at the halfway mark now, and while that goal seems unlikely at this point, Alibaba has begun to make its presence felt, particularly in China and the rest of Asia. In fact, there's plenty of evidence that Alibaba Cloud can play an important part in Alibaba's overall business.

Battling giants

Up until its financial commitment to its cloud business in 2015, Alibaba was content to use the scale of its e-commerce services (which range from a marketplace and branded mall to payment services and digital banking, and count nearly 500 million users) to bring in customers for its cloud business in China. Moving out to the rest of the world presents far greater challenges.

Still, Alibaba's cloud unit has been growing at a brisk pace, with triple-digit year-over-year growth in each of its last seven quarters, including 115 percent in its most recent report in December. Based on that growth, Alibaba Cloud is probably one or two quarters from reaching break-even or profit, but it has already surpassed the $1 billion run-rate mark courtesy of $254 million in revenue in its most recent quarter. Not bad, but not close to AWS, which grew at a more modest 47 percent rate for a total of $3.53 billion for the quarter, or a run rate of over $14 billion.
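The run-rate figures quoted here are just a single quarter's revenue annualized, which is easy to verify:

```python
def annual_run_rate(quarterly_revenue):
    """Annualize one quarter's revenue (the usual 'run rate' convention)."""
    return quarterly_revenue * 4

# Figures from the article, in millions of dollars.
print(annual_run_rate(254))    # Alibaba Cloud: 1016 -> just over $1 billion
print(annual_run_rate(3530))   # AWS: 14120 -> over $14 billion
```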

Photo: Qilai Shen/Bloomberg via Getty Images

That's a stark difference, and it shows just how far Alibaba has to go in the cloud business to catch AWS.

However, Alibaba might be doing better than you think. According to Synergy Research Group, Alibaba is sixth in the world behind AWS, Microsoft, Google, IBM and Salesforce in infrastructure, platform and hosted private cloud services (this ranking does not include Salesforce's more substantial SaaS business).

"For cloud infrastructure services (IaaS, PaaS, hosted private cloud services), Alibaba is now ranked sixth, based on worldwide revenues in Q4. For China specifically, while AWS and Microsoft are in the top five ranking in China, the market is led by Alibaba (a long way out in front) followed by China Telecom. Alibaba market share is running at 40 percent [in China] and has been increasing with time," John Dinsdale, Synergy's chief analyst and research director, told TechCrunch.

Alibaba itself says the cloud unit counts 765,000 paying customers as of the last quarter. That figure represents an increase of about 114,000 on the previous quarter, although no equivalent number was given for the previous year.

Moving beyond China

While several different analysts agree with Synergy's assessment of Alibaba as the clear number one cloud vendor in China, Alibaba Cloud Global GM Ethan Yu concedes that the market is still a few years behind the U.S., and there is plenty of room for growth, keeping in mind that China itself represents a massive potential market.

"The addressable market is getting bigger in China, with only single-digit IT spending in the cloud and the rest in on-prem software and hardware spending. There is still enough buy out there to move up to the cloud," Yu said in an interview with TechCrunch. He saw 2015 as the year it all changed (the same year Alibaba invested the $1 billion into its cloud operation).

Photo: Alibaba

"I think in 2015, adopting infrastructure in the cloud, there was suddenly a change, a tipping point where most [Chinese] CIOs found it quite acceptable to use the cloud in some ways," he said. But even as the market shifts in China, the company has made it clear that its ambitions stretch far beyond its home country.

"China is a big market, but the cloud market just started to grow, which gave us a good foundation. We think we can do more outside of China, but we are a few years behind. We started our global footprint a couple of years ago. We have 14 global data centers, including 8 outside of China," he explained.

Like AWS, Alibaba Cloud began with smaller customers, but as it sets its sights higher in the market, it wants to lure enterprise customers to the platform. The company says that it has proven it can handle the workload from larger customers based on its abilities to handle its own massive e-commerce and financial services businesses.

Of course, landing enterprise customers in the U.S., where the new president has sent signals of tougher trade relations with China, may prove difficult. Yu said he needs to see how trade talk plays out, but he added, "For now, we don't have any comments on that... but our position is very firm. A friendly commercial relationship will help both parties."

Alibaba might actually find itself better positioned than others in the current climate in the U.S. Executive chairman Jack Ma held a meeting with the (then) president-elect in early January, which culminated in a promise that Alibaba would create one million new jobs in the U.S.

Neither man provided details on how they would achieve that, and the promise looks like little more than grandstanding by Ma or an effort to curry favor with the new administration. Either way, Alibaba will get its first real signal soon enough. Ant Financial, an Alibaba-affiliated fintech firm and another ambitious project, is acquiring U.S.-based MoneyGram in an $880 million deal that is pegged to close in the second half of this year, assuming that regulators and the government OK it.

The cloud unit and Ant Financial, which is close to raising $3 billion in debt funding for M&A deals, are two areas Ma and Alibaba look to for the future. Meanwhile, Alibaba's core e-commerce business is performing above expectations (it smashed analyst forecasts for its final quarter of 2016 and raised its expectations for the remainder of the financial year), but the e-commerce giant wants to develop businesses that can reduce its reliance on its core services in China.

Those services accounted for 87 percent of the RMB 53.25 billion ($7.67 billion) revenue grossed in the last quarter.

Photo: VCG/Getty Images

Alibaba Cloud contributed just $215 million to that figure, with a small $49 million loss, but revenue was up 50 percent on the previous quarter alone and 115 percent on the previous year.

Although these growth figures are impressive, at this pace it would still take years to reach $1 billion per quarter, so Alibaba has focused on expanding its geographic footprint, pushing its cloud business into Europe, Australia, Japan and the Middle East by opening four new data centers last November. The company has also expanded existing sites, recently doubling its capacity in Hong Kong to address increasing demand.
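A back-of-the-envelope projection shows why catching up takes years: even if the reported 115 percent year-over-year growth held and compounded evenly each quarter (a big assumption), growing $215 million a quarter to $1 billion a quarter takes roughly nine quarters.

```python
import math

def quarters_to_target(current, target, yoy_growth):
    """Quarters needed to hit a quarterly-revenue target, assuming the
    year-over-year growth rate holds and compounds evenly each quarter."""
    quarterly_factor = (1 + yoy_growth) ** 0.25
    return math.ceil(math.log(target / current) / math.log(quarterly_factor))

# $215M/quarter growing 115% YoY, against a $1B/quarter target.
print(quarters_to_target(215, 1000, 1.15))  # -> 9 quarters, a bit over two years
```

In practice growth rates decay as the base gets larger, so the real horizon is likely longer than this simple model suggests.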

Alibaba isn't just relying on the cloud to generate new revenue; it is investing in what it knows: e-commerce. The company picked up a stake in Paytm, India's top mobile wallet firm, and an online sales firm, and elsewhere in India it has been linked with a deal for Amazon rival Snapdeal. There have been many rumors but no investment; nonetheless, Jack Ma has spoken publicly of his desire to expand into India, and it wouldn't be a shock if he oversaw another deal to ensure that the plan isn't entirely reliant on Paytm.

Elsewhere, last year Alibaba snapped up a controlling share in Lazada, the largest online shopping site in Southeast Asia, a region of more than 600 million consumers and increasing internet connectivity. While a 2016 report co-authored by Google suggested that online commerce in Southeast Asia will rise to reach $88 billion by 2025, the region is another slow burner for Alibaba. Online is thought to account for under five percent of commerce in the region, while Lazada has yet to break even, let alone post a profit.

That really sums up many of Alibaba's bets. It is still early days, and the reliance remains on Taobao (its marketplace) and T-Mall (its service for brands) in China, but there's enough money in the bank to push its business interests in India, Southeast Asia and the cloud towards a higher chunk of revenue. And Ant Financial is also helping grow its e-commerce footprint abroad with investments in the U.S., Korea, Southeast Asia and beyond. In that respect, the cloud may be Alibaba's longest shot or its grandest ambition.

While it's not impossible for a company with the resources and reach of Alibaba to make a spirited play for cloud market share outside of Asia, it would take some unlikely shifts in the current balance of power in the market for it to reach Simon Hu's ambitious goal of catching AWS.


When Amazon’s cloud storage fails, lots of people get wet – ABC News

Usually people don't notice the "cloud" unless, that is, it turns into a massive storm. That was the case Tuesday, when Amazon's huge cloud-computing service suffered a major outage.

Amazon Web Services, by far the world's largest provider of internet-based computing services, suffered an unspecified breakdown in its eastern U.S. region starting about midday Tuesday. The result: unprecedented and widespread performance problems for thousands of websites and apps.

While few services went down completely, thousands, if not tens of thousands, of companies had trouble with features ranging from file sharing to webfeeds to loading any type of data from Amazon's "simple storage service," known as S3. Amazon services began returning around 4 p.m. EST, and an hour later the company noted on its service site that S3 was fully recovered and "operating normally."

THE CONCENTRATED CLOUD

The breakdown shows the risks of depending heavily on a few big companies for cloud computing. Amazon's service is significantly larger by revenue than any of its nearest rivals: Microsoft's Azure, Google's Cloud Platform and IBM, according to Forrester Research.

With so few large providers, any outage can have a disproportionate effect. But some analysts argue that the Amazon outage doesn't prove there's a problem with cloud computing; it just highlights how reliable the cloud normally is.

The outage, said Forrester analyst Dave Bartoletti, shouldn't cause companies to assume "the cloud is dangerous."

Amazon's problems began when one S3 region based in Virginia began to experience what the company called "increased error rates." In a statement, Amazon said as of 4 p.m. EST it was still experiencing errors that were "impacting various AWS services."

"We are working hard at repairing S3, believe we understand root cause, and are working on implementing what we believe will remediate the issue," the company said.

WHY S3 MATTERS

Amazon S3 stores files and data for companies on remote servers. Amazon started offering it in 2006, and it's used for everything from building websites and apps to storing images, customer data and commercial transactions.

"Anything you can think about storing in the most cost-effective way possible," is how Rich Mogull, CEO of data security firm Securosis, puts it.

Since Amazon hasn't said exactly what is happening yet, it's hard to know just how serious the outage is. "We do know it's bad," Mogull said. "We just don't know how bad."

At S3 customers, the problem affected both "front-end" operations (meaning the websites and apps that users see) and back-end data processing that takes place out of sight. Some smaller online services, such as Trello, Scribd and IFTTT, appeared to be down for a while, although all have since recovered.

The corporate message service Slack, by contrast, stayed up, although it reported "degraded service" for some features. Users reported that file sharing in particular appeared to freeze up.

The Associated Press' own photos, webfeeds and other online services were also affected.

TECHNICAL KNOCKOUTAGE

Major cloud-computing outages don't occur very often (perhaps every year or two), but they do happen. In 2015, Amazon's DynamoDB service, a cloud-based database, had problems that affected companies like Netflix and Medium. But usually providers have workarounds that can get things working again quickly.

"What's really surprising to me is that there's no fallback; usually there is some sort of backup plan to move data over, and it will be made available within a few minutes," said Patrick Moorhead, an analyst at Moor Insights & Strategy.
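In outline, the kind of fallback Moorhead describes is a client that walks an ordered list of replicas and moves to the next one when a region fails. The sketch below uses stand-in store objects and a made-up exception type, not the real AWS API:

```python
class RegionUnavailable(Exception):
    """Raised when a storage region cannot serve requests (hypothetical)."""

def fetch_with_fallback(key, stores):
    """Try each replica store in order, falling back when one is unavailable."""
    last_error = None
    for store in stores:
        try:
            return store.get(key)
        except RegionUnavailable as err:
            last_error = err  # remember the failure, try the next replica
    raise last_error or RegionUnavailable("no stores configured")

class FakeStore:
    """Stand-in for a regional object store, for illustration only."""
    def __init__(self, data, healthy=True):
        self.data, self.healthy = data, healthy
    def get(self, key):
        if not self.healthy:
            raise RegionUnavailable("region down")
        return self.data[key]

primary = FakeStore({"report.csv": b"contents"}, healthy=False)  # simulated outage
replica = FakeStore({"report.csv": b"contents"})
print(fetch_with_fallback("report.csv", [primary, replica]))  # served by the replica
```

The hard part in production isn't this loop; it's keeping the replicas consistent and paying for cross-region storage, which is why many S3 customers ran single-region.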

AFTEREFFECTS

Forrester's Bartoletti said the problems on Tuesday could lead to some Amazon customers storing their data on Amazon's servers in more than one location, or even shifting to other providers.

"A lot more large companies could look at their application architecture and ask 'how could we have insulated ourselves a little bit more,'" he said. But he added, "I don't think it fundamentally changes how incredibly reliable the S3 service has been."


Osaka University researchers create flexible cloud-based computing system – Digital Trends

Why it matters to you

Cloud computing is not a new concept, but this project demonstrates that there are still major advances to be made in the field.

A group of researchers working at Osaka University in Japan has created the world's first system capable of delivering flexible computation via cloud computing. The team created a piece of management software that allows the user to customize various aspects of the setup to suit the task at hand.

Traditionally, when high-performance computation is carried out in the cloud, a server with a fixed configuration is used. The drawback of this kind of system is that the initial build can be expensive, and much of its computational muscle might go unused in ordinary operation.

By contrast, the project that's been carried out at Osaka University allows the user to tailor various aspects of the system to their needs. The number of servers in use and the network connection can be controlled in this way, but the system is also capable of making adjustments to hardware components.


Users can tweak the GPUs being used as part of the computation, or the solid-state drives being used to store data, according to a report from Phys.org. This functionality allows for a flexible cloud computing solution that can efficiently carry out all manner of different tasks.

Cloud computing can be implemented to help research projects crunch numbers when they don't have capable enough hardware on-site. This kind of flexible system could potentially help researchers all over the world gain better access to the computational power they need to carry out their work.

It's hoped that this project, which was led by visiting professor Takashi Yoshikawa, will continue to be developed and will eventually be widely used. The system was previously shown at the Supercomputing 16 conference held in Salt Lake City, Utah, in November 2016.


Grand Challenge: Exploring the power of cloud computing for research partnerships – CU Boulder Today

Amazon Web Services. Google. IBM. Microsoft. These are just a few of the major tech movers and shakers partnering with researchers and big data providers (most recently, the National Oceanic and Atmospheric Administration, or NOAA) to invest in a new way of supporting a data-enabled economy: cloud computing.

The advantages and opportunities that come with working in the cloud are potentially significant for researchers, especially in terms of multidisciplinary collaboration, something CU Boulder's Earth Lab team discovered firsthand after entering a cooperative research partnership with DigitalGlobe last September. The agreement allows Earth Lab researchers to access and work through DigitalGlobe's 80-petabyte, cloud-based library of high-resolution satellite imagery, data and analytics tools.

The ease of access to powerful data on such a massive scale has proven a key catalyst as Earth Lab works to advance Earth and space science research alongside other pillars of CU Boulder's Grand Challenge. The experience has sparked an inevitable question: How might cloud computing enhance and streamline the research being performed at CU Boulder campus-wide?

Terri Fiez, vice chancellor for Research & Innovation, has selected a team housed within the Grand Challenge initiative to execute a definition study exploring how research computing on the cloud might benefit CU Boulder and its partners in the future.

"Cloud computing has the potential to enhance existing collaborations and stimulate new ones between CU Boulder and its many research partners, both internal and external," says Fiez. "Discovering how the cloud can best support our researchers will be a key step forward in developing our long-term strategy as the innovation university."

While the need for high-performance computing (HPC) will likely remain in the coming years and beyond, a hybrid strategy that integrates cloud computing is quickly becoming a viable, and even vital, approach. Cloud computing delivers the same resources as a traditional data center at a lower operational cost, allowing users to rent services on an as-needed basis without the upfront capital expense of provisioning HPC resources.
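The rent-versus-provision trade-off comes down to a simple break-even comparison between pay-per-hour rental and amortized capital plus upkeep; all the figures below are illustrative assumptions, not CU Boulder's:

```python
def cheaper_option(hours_per_year, cloud_rate, hpc_capex, hpc_years, hpc_opex_per_year):
    """Compare the yearly cost of renting cloud node-hours against amortizing
    an on-premises HPC purchase over its service life."""
    cloud_cost = hours_per_year * cloud_rate
    hpc_cost = hpc_capex / hpc_years + hpc_opex_per_year
    return "cloud" if cloud_cost < hpc_cost else "on-prem"

# A lab that bursts 400 node-hours/year vs. one running 20,000 node-hours/year,
# against a $100k cluster amortized over 5 years with $10k/year upkeep,
# at an assumed $3/node-hour cloud rate.
print(cheaper_option(400, 3.0, 100_000, 5, 10_000))      # -> cloud
print(cheaper_option(20_000, 3.0, 100_000, 5, 10_000))   # -> on-prem
```

This is why the hybrid strategy wins: bursty, collaborative workloads favor renting, while sustained high utilization still favors owned HPC.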

The flexibility of the cloud platform also promises to maximize the speed, scale and collaborative output of research partnerships.

"Using virtualization approaches like containers in the cloud allows researchers to better collaborate with partners, since they are already using those approaches," says Thomas Hauser, director of research computing for CU Boulder. "Containerized computational approaches enable CU researchers to create reproducible research workflows and share those approaches with our collaborators."

Larry Levine, director of Information Technology for CU Boulder, says he expects the campus to eventually move toward a "cloud-first" philosophy where the cloud is the default (but not the only) answer for investigators' computing needs. The question is always: "What is the most optimized, efficient and cost-effective way to share data and manage access to that data?"

Levine says, "There's no right or wrong answer. It will depend on [the] type of work people are trying to get done."


How many instances of Windows Nano Server can run on 1 TB of RAM? – TechTarget

One of the premier new features of Windows Server 2016 is Windows Nano Server, essentially a stripped-down, headless install that is 90% smaller than a full Windows Server install. What does that mean with respect to cloud server workloads and better security resulting from having a much smaller attack surface?


Windows Nano Server is a new feature that's part of Windows Server 2016. It's a headless version of Windows Server that's even smaller than Windows Server Core. Some of the stats say that it has a 93% smaller virtual hard disk space, 92% fewer critical bulletins and it requires 80% fewer reboots. That could be a big feature for cloud implementations. Nano Server, Hyper-V containers and Storage Spaces Direct are three major new features of Windows Server 2016.

Nano Server offers improved density, because you can run a lot more Nano servers than you can full-size Windows servers. That's particularly important if you're trying to develop cloud applications. The Nano Server's reduced size definitely gives it a much smaller attack surface. That makes it a far more secure kind of operating system.

I recall Microsoft doing research in which it ran Windows Nano Server as a virtualization host. Microsoft found that a Nano Server host with 1 TB of RAM was capable of running 1,000 Nano Server virtual machines. That gives you an idea of the kind of density that you can get out of Nano Server and some of its advantages. Of course, with no GUI and no browser, there's a lot less to attack. So, it is more secure.
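That density claim is easy to sanity-check: 1 TB of host RAM split across 1,000 VMs leaves roughly 1 GB each, minus whatever the host reserves for itself (the 64 GB overhead figure below is an assumption, not Microsoft's number):

```python
def ram_per_vm_gb(host_ram_tb, vm_count, host_overhead_gb=64):
    """Rough RAM available to each VM on a host, after reserving some
    memory for the virtualization host itself."""
    return (host_ram_tb * 1024 - host_overhead_gb) / vm_count

# 1 TB host running 1,000 Nano Server VMs, as in the test described above.
print(round(ram_per_vm_gb(1, 1000), 2))  # ~0.96 GB per VM
```

A sub-gigabyte allotment per VM is only plausible because Nano Server's footprint is so much smaller than a full Windows Server install; the same host could run far fewer full-size guests.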

One of the negatives, though, is that it is going to be different to manage because there's no local login. There's no UI. You have to manage Windows Nano Server completely remotely. That's going to probably cause a learning hurdle and some slow adoption for a lot of companies, because sometimes that's difficult to get into. That's one of the reasons why Windows Server Core isn't running everywhere. It's a little bit more difficult to manage. So, that requirement on remote management will probably provide at least an initial hesitation for running Nano Server.

The densities and security benefits that you can get through Windows Nano Server are significant, and that is definitely the way that application development, especially in the cloud, is headed.



Behind AMD’s Big Plan for Data Center – Market Realist

Are AMD's ABCs Worth the Price Tag? PART 12 OF 17

In the preceding part of this series, we discussed how Advanced Micro Devices' (AMD) Pascal GPU (graphics processing unit) pushed the company's Computing and Graphics revenue to a two-year high in fiscal 4Q16. The company is now expanding the GPU market beyond gaming and into the data center.

With the advent of deep learning and AI (artificial intelligence), more and more cloud companies are using accelerators like GPUs and FPGAs (field-programmable gate arrays) for their deep learning work.

Nvidia (NVDA) is a leader in the AI market. Its data center revenue rose 23% sequentially and 205% year-over-year (YoY) in the quarter ended January 2017. Meanwhile, Intel's (INTC) data center revenue rose 4.4% sequentially and 8% YoY during the same quarter. This shows that the trend is moving away from x86 CPUs (central processing units) to GPUs.

Until now, AMD only supplied x86 server chips to data centers. Now, it's expanding its offerings to include Radeon Instinct, which is a combination of its GPU, CPU, and open source software. AMD has already secured orders from Google (GOOG) and Alibaba (BABA) to supply GPUs for their data centers.

AMD is developing its next-generation Vega GPU and the Zen-based server CPU Naples to expand the breadth of its customers beyond traditional and cloud servers to include embedded infrastructure and communications markets. With Naples, AMD is targeting cloud, big data, and traditional enterprise applications that require more threads, more memory, and heavy I/O (input/output).

To be sure, AMD is focusing more on the cloud, as design wins there convert into revenue faster than in networking and storage.

Radeon Instinct and the Naples CPU are expected to hit the market by the end of fiscal 2Q17. AMD stated that it is receiving a strong response from customers for its new products. As design wins take some time to show up in earnings, the effect would likely be visible in fiscal 4Q17.

In the meantime, AMD's Enterprise, Embedded, and Semi-Custom segment will continue to be influenced by semi-custom seasonality. Continue to the next part for a closer look.


Google First to Upgrade Cloud Data Centers with Intel’s Latest Chips – The VAR Guy

Brought to you by Data Center Knowledge

Google has upgraded servers in cloud data centers across five availability regions with Intel's latest Xeon processors, codenamed Skylake. The company claims it is the first cloud provider to do so.

Amazon said last year it expected to launch Skylake-powered C5 instances on its Amazon Web Services cloud sometime in early 2017. Microsoft has not revealed plans to upgrade to Skylake, but the blog AnandTech has deduced from a company blog post that Intel's latest and greatest in data center tech is likely to appear in the next-generation Open Compute servers the giant said were in the works last November under the codename Project Olympus.

The processors are geared for workloads that require high performance, such as scientific modeling, genomic research, 3D rendering, data analytics, and engineering simulations, Urs Hölzle, Google's senior VP of cloud infrastructure, wrote in a blog post.

These applications will benefit from the new chips' Advanced Vector Extensions (AVX-512) feature. In Google's internal tests, the feature improved application performance by up to 30 percent, Hölzle said.

Google optimized Skylake for all its Google Compute Engine VM types, including standard, highmem, highcpu, and Custom Machine Types. Cloud servers powered by Skylake are initially available in five Google Cloud regions: Western US, Eastern US, Central US, Western Europe, and Eastern Asia Pacific.

This is the second major processor upgrade announcement from Google's cloud services division this week. On Tuesday, the company said it had added the option to spin up bare-metal GPUs along with cloud VMs for machine learning and other compute-heavy applications.
