When Amazon’s cloud storage fails, lots of people get wet – Savannah Morning News

NEW YORK – Usually people don't notice the cloud unless, that is, it turns into a massive storm. That was the case Tuesday, when Amazon's huge cloud-computing service suffered a major outage.

Amazon Web Services, by far the world's largest provider of internet-based computing services, suffered an unspecified breakdown in its eastern U.S. region starting about midday Tuesday. The result: unprecedented and widespread performance problems for thousands of websites and apps.

While few services went down completely, thousands, if not tens of thousands, of companies had trouble with features ranging from file sharing to webfeeds to loading any type of data from Amazon's Simple Storage Service, known as S3. Amazon's services began returning around 4 p.m. EST, and an hour later the company noted on its service site that S3 was fully recovered and operating normally.

THE CONCENTRATED CLOUD

The breakdown shows the risks of depending heavily on a few big companies for cloud computing. Amazon's service is significantly larger by revenue than any of its nearest rivals: Microsoft's Azure, Google's Cloud Platform and IBM, according to Forrester Research.

With so few large providers, any outage can have a disproportionate effect. But some analysts argue that the Amazon outage doesn't prove there's a problem with cloud computing; it just highlights how reliable the cloud normally is.

The outage, said Forrester analyst Dave Bartoletti, shouldn't cause companies to assume the cloud is dangerous.

Amazon's problems began when one S3 region based in Virginia began to experience what the company called "increased error rates." In a statement, Amazon said as of 4 p.m. EST it was still experiencing errors that were "impacting various AWS services."

"We are working hard at repairing S3, believe we understand root cause, and are working on implementing what we believe will remediate the issue," the company said.

WHY S3 MATTERS

Amazon S3 stores files and data for companies on remote servers. Amazon started offering it in 2006, and it's used for everything from building websites and apps to storing images, customer data and commercial transactions.

"Anything you can think about storing in the most cost-effective way possible," is how Rich Mogull, CEO of data security firm Securosis, puts it.

Since Amazon hasn't said exactly what is happening yet, it's hard to know just how serious the outage is. "We do know it's bad," Mogull said. "We just don't know how bad."

At S3 customers, the problem affected both front-end operations, meaning the websites and apps that users see, and back-end data processing that takes place out of sight. Some smaller online services, such as Trello, Scribd and IFTTT, appeared to be down for a while, although all have since recovered.

The corporate messaging service Slack, by contrast, stayed up, although it reported degraded service for some features. Users reported that file sharing in particular appeared to freeze up.

The Associated Press' own photos, webfeeds and other online services were also affected.

TECHNICAL KNOCKOUT

Major cloud-computing outages don't occur very often, perhaps every year or two, but they do happen. In 2015, Amazon's DynamoDB service, a cloud-based database, had problems that affected companies like Netflix and Medium. But usually providers have workarounds that can get things working again quickly.

"What's really surprising to me is that there's no fallback; usually there is some sort of backup plan to move data over, and it will be made available within a few minutes," said Patrick Moorhead, an analyst at Moor Insights & Strategy.

AFTEREFFECTS

Forrester's Bartoletti said the problems on Tuesday could lead to some Amazon customers storing their data on Amazon's servers in more than one location, or even shifting to other providers.

"A lot more large companies could look at their application architecture and ask, 'How could we have insulated ourselves a little bit more?'" he said. But he added, "I don't think it fundamentally changes how incredibly reliable the S3 service has been."
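One concrete form that insulation can take is reading from a replicated bucket in a second region when the primary fails. The sketch below shows the idea using the boto3 SDK; the bucket names and regions are hypothetical, and a real deployment would pair this with S3 cross-region replication so both buckets hold the same objects.

```python
# A minimal client-side sketch of multi-region redundancy for S3 reads.
import boto3
from botocore.exceptions import BotoCoreError, ClientError

# Hypothetical replicated buckets: primary first, fallback second.
REPLICAS = [
    ("example-app-data-east", "us-east-1"),
    ("example-app-data-west", "us-west-2"),
]

def get_object_with_fallback(key: str) -> bytes:
    last_error = None
    for bucket, region in REPLICAS:
        try:
            s3 = boto3.client("s3", region_name=region)
            return s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        except (ClientError, BotoCoreError) as err:
            last_error = err  # this region is unavailable; try the next one
    raise RuntimeError(f"all replicas failed for {key!r}") from last_error
```

The trade-off is paying for duplicate storage and replication traffic during the years when nothing goes wrong, which is exactly the calculation the outage forced many customers to revisit.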

Update: AWS cloud storage back online after outage cripples popular sites – GeekWire

An outage of Amazon Web Services' Simple Storage Service (S3) impacted sites across the internet Tuesday. Continue reading for our story and updates as events unfolded.

UPDATE 2:15 p.m.: Amazon Web Services has fixed all the issues related to the cloud storage outage Tuesday. Here is the latest message from the service health dashboard: "As of 1:49 p.m. Pacific, we are fully recovered for operations for adding new objects in S3, which was our last operation showing a high error rate. The Amazon S3 service is operating normally."

UPDATE 1:20 p.m.: The online world is starting to spin again as Amazon is fixing the problems that caused "high error rates" at its data centers in Virginia, knocking out many prominent websites and apps Tuesday morning. Here is the latest update: "S3 object retrieval, listing and deletion are fully recovered now. We are still working to recover normal operations for adding new objects to S3."

UPDATE 1:00 p.m.: It appears Amazon is close to fixing the problem; it posted the following message on the service health dashboard, expecting error rates to improve within the hour: "We are seeing recovery for S3 object retrievals, listing and deletions. We continue to work on recovery for adding new objects to S3 and expect to start seeing improved error rates within the hour."

UPDATE: The outage remains ongoing, but Amazon Web Services has fixed the issue with the service health dashboard, which was previously not showing the Simple Storage Service outage. The company also believes it has identified the cause of the outage.

Here is the latest update from AWS:

"Update at 11:35 AM PST: We have now repaired the ability to update the service health dashboard. The service updates are below. We continue to experience high error rates with S3 in US-EAST-1, which is impacting various AWS services. We are working hard at repairing S3, believe we understand root cause, and are working on implementing what we believe will remediate the issue."

Original story below.

Amazon Web Services' Simple Storage Service, its cloud storage program, started experiencing high error rates at data centers in Northern Virginia just before 10 a.m. Pacific Tuesday morning, knocking down service to many of the countless websites and applications that use AWS, such as Expedia, Slack, Medium and the U.S. Securities and Exchange Commission.

The outage appears to be affecting the AWS service health dashboard, as it shows no outages currently. Amazon adjusted by posting a special message at the top of the site that reads as follows: "We're continuing to work to remediate the availability issues for Amazon S3 in US-EAST-1. AWS services and customer applications depending on S3 will continue to experience high error rates as we are actively working to remediate the errors in Amazon S3."
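For applications caught in an event like this, the standard client-side mitigation is to retry transient server errors with exponential backoff and jitter rather than hammering the service. A generic sketch, with hypothetical bucket and key names:

```python
# Retry S3 reads with exponential backoff plus jitter on server-side errors.
import random
import time

import boto3
from botocore.exceptions import ClientError

def get_with_backoff(bucket: str, key: str, max_attempts: int = 5) -> bytes:
    s3 = boto3.client("s3")
    for attempt in range(max_attempts):
        try:
            return s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        except ClientError as err:
            status = err.response["ResponseMetadata"]["HTTPStatusCode"]
            if status < 500 or attempt == max_attempts - 1:
                raise  # a client-side error, or we are out of retries
            time.sleep(2 ** attempt + random.random())  # 1s, 2s, 4s... + jitter
    raise RuntimeError("unreachable")
```

Backoff only helps ride out brief blips; in an hours-long regional outage like this one, requests eventually fail regardless, which is why multi-region redundancy of the kind sketched earlier matters.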

In an ironic twist, some websites dedicated to telling people when sites are down are down.

Amazon is the king of cloud computing, capturing more than 40 percent of the market, according to a recent report. AWS topped $12 billion in sales for the year, up 55 percent from the year before, blowing past its goal of reaching $10 billion in sales in 2016.

That said, the internet is not happy with the outage and people are venting their frustrations on Twitter. Others seem to be embracing the outage, terming it a "digital snow day."

Box revenues near $400m as cloud storage demands grow – www.computing.co.uk

Box has announced revenues of $398.6m for its most recent fiscal year, an increase of 32 per cent on its previous financial year, as demand for its cloud storage and content services continues to grow.

Despite the increase in revenues, the firm still posted a loss of $150m, although this was an improvement on the $210m it lost in fiscal 2016. Box also posted its first quarter of positive free cash flow in Q4, at $10m, a notable milestone for any young company.

Investors will have noted this with mild optimism, as well as the fact that billings rose by 23 per cent to $454m in fiscal 2017, as it demonstrates that more firms are willing to pay for access to its cloud storage platform and the tools it offers to work with, and share, this data.

Indeed, Box said it now has a paying customer base of over 71,000 firms, including high-profile brands such as Volkswagen Group of America, Discovery Communications, and Spotify.

Aaron Levie, co-founder and CEO of Box, was upbeat about the results, saying they demonstrated that the firm's focus is paying off and, if it continues, will see it reach the milestone $1bn revenue target.

"Box is raising the bar in cloud content management. We've consistently delivered innovative new products, set the standard for security and compliance, and helped customers in every industry move to the cloud with confidence," he said.

"We are driving towards a $1 billion long-term revenue target, and this year we plan to invest for scale while continuing to drive operating leverage."

Levie also took to Twitter to tout the fact that the company became free-cash-flow positive in its fourth quarter and that revenue for its next fiscal year is guided to pass $500m.

Amazon escapes the internet outage caused by its own cloud computing service – Mashable


When Amazon's cloud computing service failed on Tuesday, more than half of the internet's top 100 e-commerce websites were affected. Which site didn't get burned? Amazon.com, of course. The AWS S3 cloud computing service backs more than 150,000 ...

Federal Agencies Unable to Completely Leverage Cloud Computing – Read IT Quik

A new survey by Deloitte and the Government Business Council (GBC), a market research company specializing in government, titled "Mastering the Migration: A Candid Survey of Federal Leaders on the State of Cloud Computing," has found that the way federal organizations migrate their data and applications to the cloud is limiting the value of those migrations.

The survey, which studied the responses of 328 senior employees from major defense, government and civilian agencies, found that only 24% of respondents believed that cloud computing had a positive impact on their organization. While only 6% reported a negative impact, about 70% said that cloud computing had no noticeable impact on their organization.

The Obama administration announced the cloud-first initiative in 2011. To comply with it, many government organizations and agencies have moved their applications from government-owned data centers to the cloud. But these migrations merely "lifted and shifted" existing applications and data to cut costs and add convenience. They did not factor in the impact of the move on functionality, IT architecture and users, leaving the migrated applications unintuitive, slow to load and more difficult to work with.

"While most respondents agree that cloud computing should provide many benefits, what we are seeing is that federal agencies that have implemented the cloud may still be working on bringing those benefits fully to fruition and/or communicating those benefits that have been achieved," says Nicholas McClusky, director of research & strategic insights, GBC.

About 41% of respondents found their organization's cloud migration efforts to be problematic, mixed or non-existent, with fewer than 10% finding them successful. The study attributes these inefficiencies and difficulties to a lack of expertise and skills, security concerns and budget constraints, along with the inflexibility and complexity of legacy applications.

Even though top agency executives such as CIOs and CFOs have shown positive interest in the cloud, the survey's findings suggest that IT leadership should also become involved during cloud migrations to help realize the cloud's value. Success in implementing the cloud requires that the organization and its cyber practices evolve along with the IT services portfolio.

"The promise of the cloud is huge, but the journey isn't easy," says Doug Bourgeois, managing director, Deloitte Consulting LLP, who is also the team leader for federal technology. "Cloud value cannot be achieved through technology aloneit's about governance, security, and transformation. This report validates that support for cloud in federal agencies is growing, but perceptions of its impact vary significantly. Agencies should rethink their core development principles and strategy for migration to the cloud."

While the cloud is the future for federal agencies, institutional policies for data sharing and security will have to change to fit the new cloud architecture, ensuring ease of operations, stability, performance and agility.

Our view: Lessons learned from Amazon cloud outage – The Salem News

Tuesday's widespread internet outage carries many lessons for us all, from the need to back up cloud servers to an awareness that without the internet, we are in big trouble.

Hundreds of thousands of websites crashed Tuesday from about 12:30 p.m. to around 4:30 p.m. The outage was a result of problems at one of Amazon's server farms, located in a Virginia warehouse. It affected companies large and small, from the likes of Yahoo and Apple to the North of Boston Media Group. It affected how people work, shop or look at pet-trick videos. It affected Huffington Post, Imgur, Business Insider and many other mainstream media sites.

In short, it affected a lot of people.

Amazon Web Services, which is distinct from the company's retail business, hosts a large portion of what is known as the cloud, that vast repository of shared computer memory. Tangibly, Amazon's cloud is a series of data centers: enormous, climate-controlled warehouses that keep the flow of data moving all over the world.

Amazon Web Services is far from the only company that leases shared server space in the cloud, but it is a major player. By one account, it carries and stores data for about 30 percent of the companies on the internet. That's a lot of businesses. It's been around since 2006 and, with the exception of a few blips along the way, has remained reliable.

Amazon said Tuesday's outage was caused by "high error rates" in one part of its Simple Storage Service. That may be Greek to most, but what it means is that a major portion of a public utility that people rely on as much as electricity and water was offline. In 1990, not even Al Gore could have known how important the internet, much less cloud computing, would become to how the world operates.

Amazon's outage showed how dependent we have all become on reliable internet and shared computing services. Here at the North of Boston Media Group, for example, access to essential production software was shut off for nearly half the day. The experience was a lesson to all that when these systems go down, we and everyone else need a backup plan.

One solution for businesses is to build their own, on-site data centers. But these are costly, particularly compared to the relatively inexpensive services offered by Amazon.

With this wake-up call come questions beyond the obvious ones about how this happened, which Amazon so far hasn't publicly detailed. One question is this: What are the ramifications of allowing companies such as Amazon or Google to carry such large pieces of our shared computing and internet architecture?

It's kind of like the financial crisis of 2008, when huge financial companies began failing and needed government bailouts because they were too big to fail. Is Amazon now too big to fail? What, if anything, can be done about it?

It seems unlikely that President Donald Trump's administration will want to start telling internet businesses what to do, but it might be time for some scrutiny on the issue.

Public utilities are heavily regulated, from electric companies to water distribution systems. Isn't the internet a public utility? Wouldn't it be wise for an independent third party to take a look at what's happening?

This case doesn't appear to involve a malicious attack, but the problem of hacking is real and has dealt severe blows to the internet in the past. With everyone so concerned about security these days, it makes sense to take a look at the behemoth that Amazon Web Services has become.

Cloud hosting ‘critical’ to sustainability, says Defra CTO – PublicTechnology.net

Defra staff are due to move into offices at 2 Marsham Street next year. Photo credit: Steve Cadman, CC BY 2.0

The department has created the UnITy programme to oversee the exit of its existing IT contracts with IBM and Capgemini as it replaces them with a number of smaller, more flexible contracts.

It has already launched a competition for its office printer network, announcing last month that there were 25 suppliers interested, and has now opened procurement for its hosting and application support services.

The call, which chief technology officer Chris Howes described as "the biggest yet," will involve securely hosting 355 applications and providing supporting infrastructure services to 21,000 end users.

Howes said that hosting and support of applications was the area of ICT where the biggest efficiencies and savings can be made.

He added that, as the lead government organisation for sustainability, Defra has to be an exemplar in this area, but that Defra's applications are currently hosted from five data centres and around 150 server rooms spread across the regions.

Moving services to cloud hosting wherever possible would be "critical" to the work, he said.

"Consuming cloud-based services will mean that we will no longer need to buy static provision; we can simply flex our provision up and down as our needs ebb and flow."

Howes stressed that the work would be "a journey of incremental improvements, not a big bang transformational event," but that the importance of hosting and application support services "can't be overestimated."

The exercise will allow the department to increase efficiencies by standardising infrastructure, operating systems and services, he said, while the new services will support new ways of working and ensure that Defra's ICT is more resilient, with fewer outages and failures.

Defra is also carrying out a "spring clean" of its applications to identify which can be decommissioned as work to lift and shift them gets underway. The department is changing some of the existing applications as well, Howes said.

Database-as-a-service platform introduces encryption-at-rest – BetaNews

While storing data in the cloud is undoubtedly convenient, it also introduces risks, and encryption is increasingly seen as a way of helping to combat them.

Database-as-a-service company mLab is introducing encryption-at-rest as an opt-in data security measure for customers of its most popular plans, at no additional cost.

The mLab platform currently manages nearly 500,000 MongoDB deployments across Amazon Web Services, Google Cloud Platform, and Microsoft Azure. Encryption-at-rest will be available to mLab's Database-as-a-Service customers on Dedicated Standard and High Storage plans, covering deployments across both Amazon Web Services and Google Cloud Platform.

The company already offers customers in-transit encryption via SSL to secure data transmission over networks. Adding encryption-at-rest extends mLab's commitment to enterprise security by encrypting data on disk and wherever backups are stored. The feature is designed to have minimal performance impact on the database.

"As the cloud services industry matures, many customers, especially enterprises, are developing programs to perform due diligence on their portfolio of service providers," says Jared D Cottrell, CTO of mLab. "Whether an industry regulation or best practice, encryption-at-rest is one of the most commonly-requested security features. Encryption-at-rest provides a layer of protection against unauthorized access to sensitive data, especially attacks directed at the physical devices on which the data is stored. mLab's encryption-at-rest feature gives our customers greater peace of mind."

You can find out more on the mLab website.

Research proposes ‘full-journey’ email encryption – The Stack

A group of researchers from UT Austin, NYU and Cornell has developed a scheme for genuine end-to-end email encryption, though that term might need to be redefined in the context of their project.

Traditional email encryption only provides security in transit between mail servers; once on the servers themselves, the emails are processed as plain text, facilitating processes such as spam filtering.

The group proposes a system called Pretzel, built on a cryptographic protocol that permits two parties to jointly compute a function without revealing their private inputs to each other, and extends the concept to email.

However, the researchers admit that providers will need to furnish additional computing resources in order to handle the encryption process.

The benefit of the scheme is the near-impossibility of intercepting and decrypting emails captured in transit. Gaining control of network nodes is a widespread practice on both sides of the law, with headlines in recent years going to official and malfeasant actors taking control of Tor exit nodes with a view to de-anonymising traffic.

In practice, genuine end-to-end encryption has been available via PGP since the early 1990s, and the functionality is offered by some of the larger providers, notably those who are party to the decrypted emails at the client end, at which point the information can be monetised by targeted advertising.

But the researchers note that the limited availability of PGP has more commercial than governmental imperatives behind it:

"A crucial reason (at least the one that is often cited) is that encryption appears to be incompatible with value-added functions (such as spam filtering, email search, and predictive personal assistance) and with the functions by which free webmail providers monetize user data (for example, topic extraction). These functions are proprietary; for example, the provider might have invested in training a spam filtering model, and does not want to publicize it (even if a dedicated party can infer it). So it follows that the functions must execute on providers' servers with access to plaintext emails."

Pretzel's innovation is in following up email decryption (usually provided by public/private keys, as in PGP) with a second protocol that operates between the email provider and each mail recipient, called secure two-party computation (2PC). 2PC schemes can process any function in a manner hidden from one or more of the concerned parties.

However, the processing needs of full-scale 2PC systems would not be realistic as a transport mechanism, so the researchers have produced a slimmed-down variant specialised to linear operations, with certain algorithmic functionality baked into the procedure.
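To give a flavor of what linear classification under encryption looks like, here is a minimal sketch using the third-party python-paillier (phe) library, whose ciphertexts support addition and multiplication by plaintext scalars. It is a simplification rather than Pretzel's actual protocol; the paper adds secret sharing, packing and other optimizations, and the feature values, weights and threshold below are invented.

```python
# pip install phe -- an additively homomorphic (Paillier) encryption library.
from phe import paillier

# Recipient: generate a keypair; only the public key goes to the provider.
public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

# Recipient: encrypt the email's bag-of-words feature vector.
email_features = [3, 0, 1, 2]  # hypothetical word counts
encrypted_features = [public_key.encrypt(x) for x in email_features]

# Provider: the spam model's weights stay private. Scaling ciphertexts by
# plaintext weights and summing them needs only the public key, so the
# provider computes Enc(<w, x>) without ever seeing the message contents.
spam_weights = [5, -2, 1, 4]  # hypothetical model weights
encrypted_score = sum(w * c for w, c in zip(spam_weights, encrypted_features))

# Recipient: decrypt the single aggregate score -- not the model weights.
score = private_key.decrypt(encrypted_score)
print("spam" if score > 10 else "ham")  # arbitrary demo threshold
```

Note the asymmetry this buys: the provider learns nothing about the email body, and the recipient learns only one number rather than the provider's proprietary model.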

At the moment the team's implementation of Pretzel permits core commercial operations such as email scanning (for advertising or spam-identification purposes) and a limited subset of other usual mail server functions. The researchers hope to accommodate predictive personal assistance services and virus scanning in the future, as well as the ability to hide metadata, some of the most fiercely sought information among security services and hackers alike.

"Ultimately, our goal is just to demonstrate an alternative. We don't claim that Pretzel is an optimal point in the three-way tradeoff among functionality, performance, and privacy; we don't yet know what such an optimum would be. We simply claim that it is different from the status quo (which combines rich functionality, superb performance, but no encryption by default) and that it is potentially plausible."

Livecoin, the Fourth Largest Altcoin Exchange, Is Now Available in Eight Languages – Coinspeaker

Source: Livecoin. Place/Date: London, UK – February 28th, 2017

The addition of eight different languages to the platform makes Livecoin a truly global altcoin exchange.

Livecoin, the fourth largest altcoin exchange on the internet, has announced a range of new features and platform improvements. These changes provide unprecedented ease of access for the global cryptocurrency community. The platform has extended support to eight languages: English, Chinese, Spanish, Portuguese, Russian, Italian, French and Indonesian (Bahasa), allowing a significant number of international traders to use Livecoin in their own native language.

The Livecoin platform now sports a new, simple and easy-to-use interface optimized for convenient trading. The platform has also standardized its minimum order amount at 0.0001 BTC for all cryptocurrency pairs. With these latest changes, Livecoin stands true to its commitment to making it easy for traders of all experience levels to engage in cryptocurrency trading. While beginners get the hang of cryptocurrency trading through a user-friendly platform, seasoned traders can use the platform's various tools to execute profitable trades.

Livecoin is constantly adding new cryptocurrencies, having recently included Iconomi, SpectreCoin and BitConnect. Users can expect more altcoin pairs to be introduced soon.

Started as an exchange for just Bitcoin and Litecoin, Livecoin has since turned into a gateway to the crypto-market. Livecoin prides itself on its highly functional and customizable interface, with different levels of sophistication suitable for both new traders and seasoned market sharks. All coins are examined thoroughly before being added to the roster, and with a stringent review process in place, Livecoin today lists 85 different altcoins.

Livecoin offers free debit cards to its traders for swift cash withdrawals. Meanwhile, any funds stored on the platform are secured in cold storage to ensure their safekeeping.

Learn more about Livecoin at https://www.livecoin.net/

Disclosure: Livecoin is the source of this content. Virtual currency is not legal tender, is not backed by the government, and accounts and value balances are not subject to consumer protections. This press release is for informational purposes only. The information does not constitute investment advice or an offer to invest.
