
Tintri storage strategy shoots straight at public cloud – TechTarget

Tintri storage is taking a cloud approach with its VM-aware arrays. Co-founder and CTO Kieran Harty said Tintri is adapting an API-based web services approach to help IT organizations deal with "pressure from the CEO to do things differently" by applying the economics and scale of the cloud.


Harty was Tintri's original CEO until Ken Klein took over in 2013. Prior to launching Tintri, Harty spent seven years as an executive vice president of engineering at VMware, leading the delivery of ESX Server, VirtualCenter and VMware desktop virtualization products.

Tintri VMstore hybrid and all-flash arrays are based on a deep integration with VMware. The Tintri arrays operate at the virtual machine (VM) and disk level, replacing conventional file abstractions with VM-aware storage. Harty gave SearchCloudStorage an update on Tintri's emerging cloud strategy and planned product rollouts in 2017.

You say Tintri's cloud strategy is based on automated web services. What does it entail and how does it change the way people use Tintri storage arrays?

Kieran Harty: A lot of our customers have a huge impetus to do cloud initiatives. The public cloud model has taken off in a big way with our customer base. They want their existing infrastructure and applications to be available with the agility of the public cloud. They don't want to rewrite their applications or redo their compliance for a cloud environment.

Are you trying to avoid getting pigeonholed as a storage-only vendor?

Harty: We're still an all-flash array vendor. Obviously, our VMstore flash storage itself is invaluable. There's a lot of complexity associated with implementing good storage in terms of performance, cost and reliability. But storage is table stakes.

We're taking a web services approach that allows you to automate everything. The basis of web services is that everything is defined using APIs [to provide] the right level of abstraction.

How would a customer use Tintri storage to build an on-premises cloud?

Harty: Today, more than one-third of Tintri customers are using our platform to build their own cloud with varying degrees of cloud capabilities. They use our web services architecture to automate common tasks like provisioning new virtual machines or applying Quality of Service policies, to scale out while optimizing the location of every individual virtual machine, and to apply predictive analytics to anticipate their future need for capacity and performance.

When we asked our customer advisory board a year ago whether they needed connections from their own data centers to the public cloud (S3, Azure, etc.), they said no. But there has been a transition within the last year, and they have decided they do need it. So I think it's going to be a gradual process.
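To make the web services model concrete, here is a minimal sketch of per-VM automation against a hypothetical REST endpoint. The base URL, paths, field names and token are illustrative assumptions for the example, not Tintri's actual API.

```python
# Hypothetical sketch of VM-granular storage automation over a REST API.
# Endpoint paths, field names and the auth scheme are illustrative only.
import requests

BASE = "https://vmstore.example.com/api"       # placeholder array address
HEADERS = {"Authorization": "Bearer <token>"}  # placeholder credential

def snapshot_vm(vm_name: str, label: str) -> None:
    """Request a snapshot of a single VM rather than a whole LUN or volume."""
    resp = requests.post(f"{BASE}/vms/{vm_name}/snapshots",
                         json={"label": label}, headers=HEADERS, timeout=30)
    resp.raise_for_status()

def set_vm_qos(vm_name: str, min_iops: int, max_iops: int) -> None:
    """Apply a Quality of Service policy scoped to one virtual machine."""
    resp = requests.put(f"{BASE}/vms/{vm_name}/qos",
                        json={"minIOPS": min_iops, "maxIOPS": max_iops},
                        headers=HEADERS, timeout=30)
    resp.raise_for_status()

if __name__ == "__main__":
    snapshot_vm("sql-prod-01", "pre-patch")
    set_vm_qos("sql-prod-01", min_iops=1000, max_iops=5000)
```

Self-service provisioning and orchestration tools can call the same kind of endpoints, which is the automation being described here.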

More companies want a cloud implementation on premises behind their firewall. Does Tintri plan to emulate the Amazon cloud computing model?


Harty: Yes, we're taking a very similar approach to the enterprise environment. Compute has been a web service since VMware introduced its hypervisor products. You also have Microsoft Hyper-V and containers emerging. Network is becoming a service with things like VMware NSX and Cisco's ACI (Application Centric Infrastructure). But storage has not [developed as a service] at the same pace.

Most storage vendors aren't using a web services approach. They design their architectures using the same physical concepts you have within the traditional data center, where you create LUNs and volumes and have people provision and interpret the stats associated with a particular VM. The model we have is one in which all your workflows are fully automated, including the storage, compute and networking that is provided by other vendors. We're trying to provide you [with] the agility of the public cloud, but within your own data center.

How quickly will you roll out support for public clouds?

Harty: We plan to start integration with AWS later this year, which will give you the ability to send and retrieve granular VM snapshots with Amazon. Gartner is using the term 'cloud-inspired infrastructure' to describe this uptake, and it's very consistent with what we see as well. We're starting with AWS because it's the big gorilla. We'll add Azure support if we see customer demand for it.
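As a rough illustration of what sending and retrieving granular VM snapshots with Amazon could look like from a script, the sketch below uses the standard boto3 S3 client. The bucket name, key layout and file paths are placeholder assumptions, not details of Tintri's planned integration.

```python
# Minimal sketch of copying a per-VM snapshot to and from Amazon S3.
# Bucket, keys and paths are placeholders for illustration only.
import boto3

s3 = boto3.client("s3")
BUCKET = "example-vm-snapshots"  # assumed bucket name

def send_snapshot(vm_name: str, snapshot_path: str) -> None:
    """Upload one VM's snapshot file to S3."""
    key = f"{vm_name}/{snapshot_path.rsplit('/', 1)[-1]}"
    s3.upload_file(snapshot_path, BUCKET, key)

def retrieve_snapshot(vm_name: str, snapshot_name: str, dest_path: str) -> None:
    """Pull a single VM's snapshot back down, e.g. to restore just that VM."""
    s3.download_file(BUCKET, f"{vm_name}/{snapshot_name}", dest_path)

if __name__ == "__main__":
    send_snapshot("sql-prod-01", "/snapshots/sql-prod-01-2017-03-01.snap")
    retrieve_snapshot("sql-prod-01", "sql-prod-01-2017-03-01.snap",
                      "/restore/sql-prod-01.snap")
```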

What concerns do you have about the recent AWS outage? What redundancies does your cloud strategy include to ensure customers have access to their Tintri storage?

Harty: The AWS outage validates our belief that customers should have a multifaceted cloud strategy. We allow customers to have data in AWS and [also] have data on premises via Tintri enterprise cloud. In the case of an outage, Tintri storage would still be able to offer data on premises via snapshots. Tintri has the ability to replicate data to multiple different sites. If you care about the availability of your data, you can replicate it to multiple sites as well as to AWS, which will give you high availability even during an outage.

What new Tintri storage products are you developing as part of the virtualization strategy?

Harty: The basis for what we do is that everything is done in an automated fashion. You can create a snapshot on a VM that talks to us, the storage vendor. You can set [Quality of Service] on the VM, which talks to us, the storage vendor. And you can integrate those workflows in an automated, self-service way. That is very distinct in terms of the capabilities we provide, and it rests on the concept that everything we do is a web service.

We've introduced support for VMware vRealize Orchestrator that allows you to orchestrate more general storage workflows. We've had asynchronous replication, but we just introduced synchronous replication capabilities. That allows you to have a disaster recovery capability where all the data is sent to a remote site. You can have two VMstore arrays up to 60 miles apart. Data gets written to both arrays. If there's a failure of the primary array, the secondary array takes over.
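As a toy illustration of the synchronous replication contract described here, the sketch below acknowledges a write only after both the primary and the secondary array have committed it, so the surviving array always holds current data after a failover. The Array class and its methods are stand-ins, not Tintri's implementation.

```python
# Toy model of synchronous replication: a write succeeds only when it is
# durable on both arrays. Classes and methods here are illustrative only.

class Array:
    def __init__(self, name: str):
        self.name = name
        self.blocks: dict[int, bytes] = {}

    def write(self, block_id: int, data: bytes) -> bool:
        self.blocks[block_id] = data  # stands in for a durable commit
        return True

def synchronous_write(primary: Array, secondary: Array,
                      block_id: int, data: bytes) -> bool:
    """Acknowledge the write only when both sites hold the data."""
    if primary.write(block_id, data) and secondary.write(block_id, data):
        return True
    return False  # an unacknowledged write is retried or triggers failover

if __name__ == "__main__":
    site_a, site_b = Array("primary"), Array("secondary")
    assert synchronous_write(site_a, site_b, 42, b"vm-disk-block")
    assert site_a.blocks[42] == site_b.blocks[42]  # both copies match
```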


Read more:
Tintri storage strategy shoots straight at public cloud - TechTarget


The embarrassing reason behind Amazon’s huge cloud computing outage this week – Washington Post

Amazon is back with an apology and an explanation for a high-profile malfunction that caused websites all across the Internet to grind to a halt for hours on Tuesday.

The online retail giant, which runs a popular cloud computing platform for sites such as Airbnb, Netflix, reddit and Quora, is blaming the outage on a simple and perhaps somewhat amusing employee mistake.

A team member was doing a bit of maintenance on Amazon Web Services Tuesday, trying to speed up the billing system, when he or she tapped in the wrong codes and inadvertently took a few more servers offline than the procedure was supposed to, Amazon said in a statement Thursday. With a few mistaken keystrokes, the employee wound up knocking out systems that supported other systems that help AWS work properly.

The cascading failure meant that many websites could no longer make changes to the information stored on Amazon's cloud platform. For everyday users, that meant being unable to load pages, transfer files or take other actions on some of the sites they regularly use.

"In this instance, the tool used allowed too much capacity to be removed too quickly," Amazon said. "We have modified this tool to remove capacity more slowly and added safeguards to prevent capacity from being removed when it will take any subsystem below its minimum required capacity level."

Translation: Employees will no longer be able to unplug whole parts of the Internet by mistake.
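For illustration, here is a hedged sketch of the kind of safeguard Amazon describes: cap any removal request so that no subsystem falls below its minimum required capacity, and take servers out gradually rather than all at once. The function, thresholds and server names are assumptions for the example, not AWS's actual tooling.

```python
# Illustrative safeguard: never remove capacity below a required floor,
# and remove what is allowed slowly. Names and numbers are made up.
import time

def remove_capacity(active_servers: list, requested: int,
                    minimum_required: int, pause_seconds: float = 0.0) -> list:
    """Remove at most `requested` servers without violating the floor."""
    allowed = max(0, min(requested, len(active_servers) - minimum_required))
    removed = []
    for _ in range(allowed):
        removed.append(active_servers.pop())  # take servers out one at a time
        if pause_seconds:
            time.sleep(pause_seconds)  # slow removal gives monitoring time to react
    return removed

if __name__ == "__main__":
    fleet = ["s3-index-%d" % i for i in range(10)]
    # An operator asks for 8, but a floor of 6 caps the removal at 4.
    print(remove_capacity(fleet, requested=8, minimum_required=6))
```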

Amazon said it was sorry for the outage's effect on its customers and vowed to learn from the incident. One immediate next step? The company said it will subdivide its servers even more than before to reduce blast radius and improve recovery, should something like this happen again.

(Amazon chief executive Jeffrey P. Bezos owns The Washington Post.)

See the original post here:
The embarrassing reason behind Amazon's huge cloud computing outage this week - Washington Post


Oracle Expands Cloud Services with Launch of Exadata Cloud Machine – Top Tech News

By Jef Cozza / Top Tech News. Updated March 02, 2017.

"With these new Edge Gateways, customers will be able to securely transfer and analyze important data at the edge of the network to glean real-time intelligence from the physical world," Dell said in a statement, "Ideal deployments include a vehicle, a refrigerated trailer, a remote oil pump in the desert, digital signs in an elevator or inside of the HVAC units on a rooftop of a casino."

Real-Time Analysis

The 3000 series is designed to appeal to customers looking for faster, real-time analysis of massive amounts of data produced by devices on their networks in order to perform immediate decision-making, Dell said. In many cases, it can be too expensive for enterprises to move all the data from the edge of the network near the devices to the data center.

Computing at the edge, on the other hand, can help determine which data sets are relevant and need to be sent back to the data center or the cloud for further analytics and longer term storage, saving bandwidth and reducing costs and security concerns.
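A minimal sketch of that edge-side filtering idea, assuming a simple threshold rule: analyze readings on the gateway and forward only the out-of-band values to central storage. The thresholds and the upload stub are placeholders, not Dell's software.

```python
# Illustrative edge filtering: keep only readings outside the normal band
# and forward just those to the data center or cloud, saving bandwidth.

def filter_readings(readings, low, high):
    """Return only the readings that fall outside the normal operating band."""
    return [r for r in readings if r < low or r > high]

def forward_to_cloud(relevant):
    # Stand-in for an HTTPS or MQTT upload from the gateway to central storage.
    print("uploading %d readings of interest" % len(relevant))

if __name__ == "__main__":
    pump_temperatures = [21.0, 21.4, 85.2, 20.9, 19.8, 90.1]  # example sensor data
    forward_to_cloud(filter_readings(pump_temperatures, low=5.0, high=60.0))
```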

"As the number of connected devices becomes more ubiquitous, we know that intelligent computing at the edge of the network is critical. The IoT continues to enhance customer experience, drive business growth and improve lives, making it central to organizations' digital transformation strategies," said Andy Rhodes, vice president and general manager, Internet of Things at Dell, in the statement. "The small and mighty 3000 Series opens up new opportunities for our customers and partners to get smarter with their data and make big things happen."

A Rugged Alternative

The company said the 3000 Series is a rugged alternative to its 5000 Series, which is designed to excel in fixed use cases that require modular expansion, large sensor networks and more advanced edge analytics. The 3000 Series, by contrast, is geared toward fixed and mobile use cases requiring smaller sensor networks, tight spaces and simpler analytics.

The series consists of three different models. The first is geared to the industrial automation and energy management markets, and comes with a multi-function I/O port and programmable serial ports. The second is aimed at the transportation and logistics markets, with a CAN bus for land/marine protocol and integrated ZigBee for mesh sensor networks.

The last model targets digital signage and retail enterprises, and is equipped with a display port output for video displays and 3.5mm line in/line out for quality audio streaming.

Each of the three models features an Intel Atom processor, 2 GB of RAM, and support for operating temperatures between minus 30 degrees Celsius (minus 22 degrees Fahrenheit) and 70 degrees Celsius (158 degrees Fahrenheit).

Excerpt from:
Oracle Expands Cloud Services with Launch of Exadata Cloud Machine - Top Tech News


Amazon Web Services ‘Operating Normally’ After Broad Internet Disruptions – Investor’s Business Daily

Amazon (AMZN) said Amazon Web Services is "operating normally" after the cloud computing unit reported problems that triggered widespread outages and disruptions for several hours on many websites and apps.

AWS issued this update at 5:08 p.m. ET: "As of 1:49 PM PST, we are fully recovered for operations for adding new objects in S3, which was our last operation showing a high error rate. The Amazon S3 service is operating normally."

Previously, AWS had said that its S3 service was experiencing "high error rates."

Amazon is the largest provider of cloud services, followed by Microsoft (MSFT) Azure and Alphabet's (GOOGL) Google Cloud Platform.

Cloud computing is mushrooming as customers ditch the hassles of running their own computers and seek better speed and performance.

AWS revenue jumped 55% in 2016 to $12.2 billion. Analysts estimate that Microsoft's Azure cloud service topped $2.6 billion in 2016 revenue, with Google at around $1 billion.

Amazon stock slipped 0.4% to 845.04 in the stock market today, after hitting an all-time high of 860.86 on Thursday.

RELATED:

Amazon Analysts Turn Sour Following Q4 Results Below Views


Continued here:
Amazon Web Services 'Operating Normally' After Broad Internet Disruptions - Investor's Business Daily


Cloud Computing Enters its Second Decade – DATAQUEST

By: David Mitchell Smith, vice president and Gartner Fellow

Cloud computing was originally a place to experiment, and has come a long way as a critical part of today's IT. After 10 years, companies should look for even wider-scale investments.

In its first decade, cloud computing was disruptive to IT; looking into the second decade, it is maturing and becoming an expected part of most next-generation disruptions (such as AI and digital business). For the past 10 years, cloud computing has changed the expectations and capabilities of the IT department, but now it is a necessary catalyst for innovation across the company.

As the technology matures, objections to cloud computing are lessening, although myths and confusing technology terms continue to plague the space.

As it enters its second decade, cloud computing is increasingly becoming a vehicle for next-generation digital business, as well as for agile, scalable and elastic solutions. CIOs and other IT leaders need to constantly adapt their strategies to leverage cloud capabilities.

It's not too late to begin planning a roadmap to an all-in cloud future. Here are a few predictions about what that future will look like.

By 2020, anything other than a cloud-only strategy for new IT initiatives will require justification at more than 30% of large-enterprise organizations.

During the past decade, cloud computing has matured on several fronts. Today, most security analysis suggests that mainstream cloud computing is more secure than on-premises IT. Cloud services are more often functionally complete, and vendors now offer migration options.

Importantly, innovation is rapidly shifting to the cloud, with many vendors employing a cloud-first approach (and some beginning cloud-only approaches) to product design and some technology and business innovations available only as cloud services. This includes innovations in the Internet of Things and artificial intelligence.

As the pressure to move to cloud services increases, more organizations are creating roadmaps that reflect the need to shift strategy. At these organizations, projects that propose on-site resources are considered conservative, because the reduced agility and narrower innovation options weaken their competitive position. Enterprises will begin to pressure IT departments to embrace cloud computing.

Keep in mind that not all projects can use cloud services, due to regulatory or security concerns. Also, some enterprises might lack the necessary skill sets and talent.

By 2021, more than half of global enterprises already using cloud today will adopt an all-in cloud strategy. The key to an all-in cloud strategy is not to lift and shift data center content. Instead, enterprises should evaluate what applications within the data center can be replaced with SaaS, refactored or rebuilt. However, an all-in strategy will have more impact on IT compared to a cloud-first strategy.

By and large, companies that have shifted to all-cloud have not returned to traditional on-premises data centers, with even large companies embracing third-party cloud infrastructure.

Enterprises should begin to plan a roadmap for their cloud strategy, and ensure that lift and shift is only being done when necessary, such as part of data center consolidation efforts.

Here is the original post:
Cloud Computing Enters its Second Decade - DATAQUEST


The advantages of cloud computing – Sacramento Business Journal


I see it time and again in small- to mid-sized organizations that buy their own infrastructure. They waste money by over-purchasing capacity or not sharing capacity properly.

Read the rest here:
The advantages of cloud computing - Sacramento Business Journal


Microsoft said to cut purchases of HPE servers for cloud service – Information Management

(Bloomberg) -- Hewlett Packard Enterprise Co. is losing business from Microsoft Corp., one of the world's largest users of servers, the latest sign of trouble for the pioneering computer maker as it struggles with the rise of cloud services, people familiar with the matter said.

Hewlett Packard Enterprise Chief Executive Officer Meg Whitman said last week her company saw "significantly lower demand" for servers from a tier-1 service provider, but without identifying the customer. Tier-1 service providers are typically major cloud and telecom companies.

The softer demand came from Microsoft, the people said, as the software giant pushes for lower prices from hardware providers to help it efficiently expand its public cloud service and keep up with rivals Amazon.com Inc. and Alphabet Inc.'s Google. Spokeswomen for Microsoft and HPE declined to comment.

Late last year, the Redmond, Washington-based company unveiled a new in-house cloud server design that it will require hardware vendors to follow. This forces HPE and rival Dell Technologies Inc. to compete against lower-cost generic, commodity manufacturers. Already, Microsoft has been using less-expensive gear for its data centers and the new design is set to be fully implemented later this year.

"We will continue to meet customer demand by expanding data center capacity while driving efficiencies through new technologies," Microsoft Chief Financial Officer Amy Hood said on a call with analysts in January when it announced earnings.

Microsoft's Azure public cloud business reported a 93 percent revenue surge in the final quarter of 2016 as more businesses opted for the flexibility and ease of accessing computing power and storage over a network instead of building their own data centers.

HPE has been a leading seller of the servers that go into these corporate data centers. But the shift to the public cloud means businesses don't need to buy their own servers anymore. Selling to the big cloud providers is harder, either because they demand bigger volume discounts or, increasingly, because they design their own cheaper servers.

"Within Tier 1 we had a much lower demand from a single large customer," Whitman said on the call, noting that these types of deals arent as profitable as other parts of the server business. "The Tier 1 business is very competitive, and well see what happens there."

HPE last week cut its adjusted profit forecast for the current fiscal year and reported sales that missed analysts' projections for the third consecutive quarter.

Visit link:
Microsoft said to cut purchases of HPE servers for cloud service - Information Management


Massive AWS Cloud Server Outage Was Caused by an Incorrect Debugging Command – 1redDrop (blog)

When you're a company that's disrupting nearly everything it touches, anything you do is reported almost instantaneously in the media. And that attention is amplified when you take down the websites of some of the world's best-known tech brands for five hours or so. That's what happened to Amazon Web Services on Tuesday, February 28, 2017.

The effects of the outage have been discussed to shreds, but it now turns out that a simple mistake in a command given during a debugging run of its billing system was the root cause of the outage.

The wrong command didn't directly cause the outage; rather, it took down the wrong S3 servers, forcing a full restart. Considering the scale at which AWS servers operate, that restart took a few hours; hence, the outage.

The fact that a wrong command can execute the wrong operation is understandable. After all, that's where human error plays a role. The interesting thing is that Amazon is now going to make changes to the system so incorrect commands are not able to trigger an outage like this one.

We don't know what those changes are, other than the fact that capacity removal is being capped (the allowable limit was too high) and recovery times are being greatly reduced, but will that prevent another outage from ever happening? Hardly. It's just one hole that's being plugged.

What we need to understand is that any technology can be broken. IBM says that the cloud can be made more secure than the static security layers typically found in traditional infrastructure, but who's going to account for human error, as was clearly the case in the AWS outage?

Thanks for reading our work! We invite you to check out our Essentials of Cloud Computing page, which covers the basics of cloud computing, its components, various deployment models, historical, current and forecast data for the cloud computing industry, and even a glossary of cloud computing terms.


The rest is here:
Massive AWS Cloud Server Outage Was Caused by an Incorrect Debugging Command - 1redDrop (blog)


1&1: the most trusted partner for SMBs – TechRadar

This feature has been brought to you by 1&1

As one of the most successful hosting providers worldwide, 1&1 has a deep knowledge of its markets. Leveraging this experience, 1&1 knows that SMBs need hosting products that are tailored to their requirements, easy to use and customisable.

As the most trusted partner for SMBs, we offer the broad range of products and solutions our customers need for running their businesses and being successful online. We act as a one-stop shop for everything customers need to carry out their digital transformation. Our extensive portfolio features products for hosting, cloud and email applications tailored to the specific needs of experts, home users, small companies and freelancers. From basic servers up to high-end cloud servers, 1&1 provides solutions for a variety of user needs.

The latest products in our website creation and hosting portfolio are:

Security is very important to us, and customers benefit from the many security features built into our products and infrastructure:

Check out 1&1 website packages here.

Read this article:
1&1: the most trusted partner for SMBs - TechRadar


Amazon’s web servers are down and it’s causing trouble across the internet – The Verge

Amazon's web hosting services are among the most widely used out there, which means that when Amazon's servers go down, a lot of things go down with them. That appears to be happening today, with Amazon reporting high error rates in one region of its S3 web services, and a number of services going offline because of it.

Trello, Quora, IFTTT, and Splitwise all appear to be offline, as are websites built with the site-creation service Wix; GroupMe seems to be unable to load assets (The Verge's own image system, which relies on Amazon, is also down); and Alexa is struggling to stay online, too. Nest's app was unable to connect to thermostats and other devices for a period of time as well.

Isitdownrightnow.com also appears to be down as a result of the outage.

Amazon has suffered brief outages before that have knocked services including Instagram, Vine, and IMDb offline. There don't appear to be any truly huge names impacted by this outage so far, but as always, its effects are widespread because of just how many services, especially smaller ones, rely on Amazon.

There's no estimate on when service will be restored, but Amazon says it is actively working on remediating the issue.

Read the original here:
Amazon's web servers are down and it's causing trouble across the internet - The Verge
