
Bitcoin passes gold in value for the first time ever – ConsumerAffairs

You may have heard of the gold standard, but it seems that a type of online currency is making a bid for supremacy in the financial world.

Reports yesterday indicate that the price of one bitcoin surpassed the price of one ounce of gold for the first time ever. While both are considered alternative assets by the financial community, the milestone lends some credence to past claims that the online currency might one day reign supreme for investors.

For those who don't know, bitcoin is a type of digital currency that consumers hold electronically. However, unlike other forms of currency, it is sent directly from one entity to another and is not controlled by a central source, like a bank. The cryptocurrency has a number of advantages, but due to the anonymity associated with its trading, it has also been used for a number of scams and illicit activities.

While the value of bitcoin has gone up and down since it was introduced to the market, its recent increase in value may indicate that investors are taking it more seriously. Currently, traders of the currency are awaiting an SEC decision that would allow the Winklevoss Bitcoin ETF to become the first bitcoin exchange-traded fund (ETF) in the U.S. market.

The ruling, which is set to be announced on March 11, would open the digital currency to a wider range of investors. However, some analysts have said that the bitcoin ETF has less than a 25% chance of being approved, according to an earlier CoinDesk report.

As of Friday morning, gold had once again climbed back over bitcoin in value; one bitcoin was selling for $1,284.58, while one ounce of gold was selling for $1,319.60.

Read the rest here:
Bitcoin passes gold in value for the first time ever - ConsumerAffairs

Read More..

Bitcoin: Can RBI ignore the elephant in the room? – Economic Times

By Arnav Joshi

Virtual currencies like Bitcoin are all the rage in FinTech, and could potentially transform global commerce in the years ahead. Users are adopting them in the thousands each day and the value of trade in these currencies is witnessing unparalleled growth.

The world over, regulators are working out carefully-crafted regulations to foster Bitcoin growth. In India, however, even with the new cashless push by the government and existing Bitcoin trade spiking post-demonetisation, the Reserve Bank of India (RBI) continues to shy away from recognising and regulating virtual currencies.

On February 1, the RBI issued yet another cautionary press release, on the back of an earlier one issued in December 2013, warning users of a risk they are likely to already be aware of -- that it (the RBI) does not regulate and has not licensed any virtual currencies in India, and anyone using them does so at their own risk.

A month later, on March 1, RBI Deputy Governor R. Gandhi raised concerns over virtual currencies, saying they pose potential financial, legal, customer protection and security-related risks.

While the central bank seems to be insulating itself from the repercussions of these currencies remaining unregulated, their use continues to grow exponentially across the world, including in India.

According to an August 2016 (pre-demonetisation) estimate, the number of users of Bitcoin (the most prominent of several virtual currencies) in India stood at 50,000 and growing. India now also has a large number of prominent Bitcoin exchanges, such as BTCXIndia, Coinsecure, Unocoin and Zebpay. Globally, by some estimates, Bitcoin users alone could breach five million by 2019.

The latest red flag from the RBI may well have been prompted by the recent surge in the price of Bitcoin on Indian Bitcoin exchanges post-demonetisation. Bitcoin is a freely tradable currency, and has its own exchanges (including in India) where users can sign up and speculate, buying and selling Bitcoins for other currencies (such as the rupee).

After the cash ban, Bitcoin was quoted at a 20-25 per cent premium over cost. As of March 2, one Bitcoin was trading at Rs 90,000; in October 2016, it was trading at Rs 40,000.

The question that arises then is how long can the RBI afford to adopt a hands-off approach to virtual currencies, when regulators elsewhere are adopting proactive measures?

The RBI's research wing, the Institute for Development & Research in Banking Technology, issued a white paper on the applications for blockchain technology in the banking and financial sectors in India in January 2017, which acknowledges the prominence of virtual currencies, but steers towards the underlying distributed ledger (blockchain) technology, rather than virtual currency regulation.

A large number of countries, not just in the West but in India's own neighbourhood, have either adopted or are close to adopting virtual currency regulation in some form. These include China, Russia, Singapore and the Philippines, which issued guidelines for virtual currency exchanges as recently as January.

Interestingly, the precursor to regulation in a number of these countries was a set of warnings similar to those issued by the RBI. However, these warnings largely came around 2013, at a time when understanding of the technology and the use of virtual currencies was far more limited than it is today.

In 2017, when users, trading and payments in these currencies are growing and maturing faster than ever, the warn-watch-wait approach simply will not work.

There are a number of downsides to not bringing in regulation when virtual currency use in India is still modest. Prominent among these is that regulation which kicks in when products and technologies have become systemic will invariably cause friction between regulators on the one hand, and businesses and users on the other, requiring stakeholders to make slow and possibly expensive changes to the way they transact.

Another issue is the key role regulation plays in consumer awareness and security. While the RBI may sleep soundly having issued its caveat emptor, given the attractive investment opportunity and ease of use and access virtual currencies offer, users are likely to throw caution to the wind and invest anyway.

The clear downside to this is that investors will likely fall prey to unregulated and unscrupulous Bitcoin exchanges and wallet operators (similar to a Paytm or Mobikwik, but exclusive to storing Bitcoin). Without any oversight, these operators rely on self-regulation. They could have severe gaps in data security, could charge exorbitant interest and transaction fees, and in a worst-case scenario, disappear with investor money altogether.

More importantly, the jury is still out on whether virtual currencies can be used to pseudonymously finance crime, including terrorism, and given the sensitive security scenario in India, it is important for the government to understand, and for the law to control, who can buy them and what they can do with them. As transactions grow, so will the chances and potential for virtual currency-related fraud.

Legal scholars Jack Goldsmith and Timothy Wu have said "government regulation works by cost and bother, not by hermetic seal", which appears to be the line the RBI is taking on virtual currencies.

With emerging technologies, however, especially those as radical as virtual currencies, governments are increasingly learning that the cost and bother of reactive regulation can be substantially greater than proactive regulation.

If the Indian government is serious about its cashless drive, it will have to consider virtual currencies as an integral part of the panacea being touted for our archaic economy.

It is up to the government and the RBI to lead the way by bringing forward-looking regulation for virtual currencies sooner rather than later, because there is already much catching-up to do.

The writer is a Senior Associate at J. Sagar Associates and advises internet and emerging technology clients. Views expressed are personal. He can be contacted at arnav.jo@gmail.com

See the original post:
Bitcoin: Can RBI ignore the elephant in the room? - Economic Times

Read More..

Dash Becomes Third-Most Valuable Cryptocurrency Based On … – The Dash Times (blog)

The past 48 hours have proven to be quite intriguing for anyone involved in the Dash cryptocurrency. With prices spiking to a new all-time high yesterday afternoon, things got off to a good start. Although Dash has seen a small correction since then, the price per individual coin still hovers around the US$46 mark. Dash is now the third-most valuable cryptocurrency in existence.

People who have been holding onto their Dash for some time were more than happy to see the recent price increase take form. With the value increasing by 450% over the past few days, it is evident the demand for privacy-centric altcoins is bigger than ever before. Dash has been around for several years now, yet never saw such a spectacular price increase up until the past two days.

All of this positive momentum has catapulted Dash to the third rank on Coinmarketcap. To put this into perspective, Dash is now the third-biggest cryptocurrency based on its market cap. At a price of US$46.09 per coin, the total market cap sits at just over US$328 million. That is quite an impressive feat, and it allowed Dash to bypass Ripple, which had held the third-largest market cap for quite some time. Dash is still a long way from overtaking Ethereum, though, as there is a US$1.5bn gap between the two right now.

It is difficult to explain why the Dash price saw such an impressive surge all of a sudden. There has been positive news, as Dash has been officially integrated into point-of-sale devices. In doing so, the manufacturers of these devices aim to make cryptocurrency payments more accessible to merchants and more common among consumers all over the world. That news alone would not propel Dash to the third spot on Coinmarketcap, though. It is evident some cryptocurrency traders and speculators had a role to play in all of this as well.

One Reddit user explained how the parabolic rise of Dash can be attributed to the Poloniex exchange. On this cryptocurrency exchange platform, users can lend out their Dash balance as a way to generate passive interest once the money is repaid. Using leverage to margin trade has been one of the primary reasons why Poloniex became the number one altcoin exchange in the world today. Users borrow cryptocurrencies from others and bet on which way the market will evolve.

Considering Dash saw a bullish trend, a lot of traders aimed to borrow funds to open long positions on the Dash price. However, some people were betting the Dash price would crash and opened short positions, which requires a Dash balance in the first place. With the demand to borrow Dash on the rise -- a lot of people expected a price crash -- the amount of bitcoin flowing into the market exploded exponentially. It became impossible to open new shorts due to the lack of available Dash, hence the bullish price trend could be maintained without problems.

With shorts no longer able to match the longs opened on Dash, it was evident something had to give sooner or later. Shorters were forced to buy back Dash at a loss, causing a short squeeze. A lot of people made good profits, while others suffered big losses, as a result of this unexpected price movement. This is only part of the reason why the Dash price shot up, but it goes to show there was a lot of momentum caused by speculators and traders. It is good to see someone explaining the situation in this manner, that much is certain.
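To make the mechanics concrete, here is a minimal sketch of the arithmetic behind a short squeeze; the prices and position size are hypothetical and are not taken from Poloniex data.

# Hypothetical illustration of why a short squeeze forces buying: a trader
# borrows Dash, sells it, and must later buy it back to repay the loan.
# If the price rises instead of falling, closing the position locks in a loss.

def short_pnl(borrowed_coins, entry_price, exit_price):
    """Profit or loss in USD for a short position closed at exit_price."""
    proceeds_from_sale = borrowed_coins * entry_price  # coins sold right after borrowing
    cost_to_buy_back = borrowed_coins * exit_price     # coins repurchased to repay the loan
    return proceeds_from_sale - cost_to_buy_back

# Short 100 Dash at US$20; the squeeze pushes the price to US$46 before the
# position is closed, so the trader must realise a US$2,600 loss.
print(short_pnl(100, 20.0, 46.0))  # -2600.0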

In the end, the price momentum for Dash has somewhat kept its flow going. A lot of people expected a retrace to US$20 per coin or less, yet that has not happened so far. Instead, the price has seemingly found a stable floor for now. Dash remains the third-most valuable cryptocurrency, and that will not change anytime soon by the look of things. Whether or not this trend can be turned into long-term momentum for Dash remains to be seen.

Header image courtesy of Shutterstock

See the rest here:
Dash Becomes Third-Most Valuable Cryptocurrency Based On ... - The Dash Times (blog)

Read More..

Overcome problems with public cloud storage providers – TechTarget

If you have a new app or use case requiring scalable, on-demand or pay-as-you-go storage, one or more public cloud storage services will probably make your short list. It's likely your development team has at least dabbled with cloud storage, and you may be using cloud storage today to support secondary uses such as backup, archiving or analytics.


While cloud storage has come a long way, its use for production apps remains relatively limited. Taneja Group surveyed enterprises and midsize businesses in 2014 and again in 2016, asking whether they were running any business-critical workloads (e.g., ERP, customer relationship management [CRM] or other line-of-business apps) in a public cloud (see "Deployments on the rise"). Less than half were running one or more critical apps in the cloud in 2014, and that percentage grew to just over 60% in 2016. Though cloud adoption for critical apps has increased significantly, many IT managers remain hesitant about committing production apps and data to public cloud storage providers.

Concerns about security and compliance are big obstacles to public cloud storage adoption, as IT managers balk at having critical data move and reside outside data center walls. Poor application performance, often stemming from unpredictable spikes in network latency, is another top-of-mind issue. And then there's the cost and difficulty of moving large volumes of data in and out of the cloud or within the cloud itself, say when pursuing a multicloud approach or switching providers. Another challenge is the need to reliably and efficiently back up cloud-based data, traditionally not well supported by most public cloud storage providers.

How can you overcome these kinds of issues and ensure your public cloud storage deployment will be successful, including for production workloads? We suggest using a three-step process to assess, compare and contrast providers' key capabilities, service-level agreements (SLAs) and track records so you can make a better informed decision (see: "Three-step approach to cloud storage adoption").

Let's examine specific security, compliance and performance capabilities as well as SLA commitments you should look for when evaluating public cloud storage providers.

Maintaining cloud data storage security is generally understood to operate under a shared responsibility model: The provider is responsible for security of the underlying infrastructure, and you are responsible for data placed on the cloud as well as devices or data you connect to the cloud.

All three major cloud storage infrastructure-as-a-service providers (Amazon Web Services [AWS], Microsoft Azure and Google Cloud) have made significant investments to protect their physical data center facilities and cloud infrastructure, placing a particular emphasis on securing their networks from attacks, intrusions and the like. Smaller and regional players tend also to focus on securing their cloud infrastructure. Still, take the time to review technical white papers and best practices to fully understand available security provisions.

Though you will be responsible for securing the data you connect or move to the cloud, public cloud storage providers offer tools and capabilities to assist. These generally fall into one of three categories of protection: data access, data in transit or data at rest.

Data access: Overall, providers allow you to protect and control access to user accounts, compute instances, APIs and data, just as you would in your own data center. This is accomplished through authentication credentials such as passwords, cryptographic keys, certificates or digital signatures. Specific data access capabilities and policies let you restrict and regulate access to particular storage buckets, objects or files. For example, within Amazon Simple Storage Service (S3), you can use Access Control Lists (ACLs) to grant groups of AWS users read or write access to specific buckets or objects and employ Bucket Policies to enable or disable permissions across some or all of the objects in a given bucket. Check each provider's credentials and policies to verify they satisfy your internal requirements. Though most make multifactor authentication optional, we recommend enabling it for account logins.
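As a rough illustration of the kind of data access controls described above, the sketch below uses the boto3 SDK to attach a bucket policy granting one trusted account read access; the bucket name and account ID are placeholders, and the policy statement would need to be adapted to your own requirements.

import json
import boto3

BUCKET = "example-reports-bucket"  # placeholder bucket name

# Grant read-only access to objects in the bucket to a single trusted AWS
# account, mirroring the scoped permissions discussed above.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowReadFromTrustedAccount",
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111122223333:root"},  # placeholder account
        "Action": ["s3:GetObject"],
        "Resource": "arn:aws:s3:::%s/*" % BUCKET,
    }],
}

s3 = boto3.client("s3")
s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))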

Data in transit: To protect data in transit, public cloud storage providers offer one or more forms of transport-level or client-side encryption. For example, Microsoft recommends using HTTPS to ensure secure transmission of data over the public internet to and from Azure Storage, and offers client-side encryption to encrypt data before it's transferred to Azure Storage. Similarly, Amazon provides SSL-encrypted endpoints to enable secure uploading and downloading of data between S3 and client endpoints, whether they reside within or outside of AWS. Verify that the encryption approach in each provider's service is rigorous enough to comply with relevant security or industry-level standards.
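boto3, for instance, talks to S3 over HTTPS by default; the short sketch below simply makes that explicit when uploading a file. Endpoint, bucket and file names are placeholders, and this is an illustration of the transport-encryption point rather than a prescribed AWS configuration.

import boto3

# Being explicit about TLS documents the intent to keep data encrypted in transit.
s3 = boto3.client(
    "s3",
    use_ssl=True,                             # refuse plain-HTTP transport
    endpoint_url="https://s3.amazonaws.com",  # TLS-protected endpoint
)

# The upload travels over the encrypted channel.
s3.upload_file("quarterly-report.pdf", "example-reports-bucket", "reports/q1.pdf")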

Data at rest: To secure data at rest, some public cloud storage providers automatically encrypt data when it's stored, while others offer a choice of having them encrypt the data or doing it yourself. Google Cloud Platform services, for instance, always encrypt customer content stored at rest. Google encrypts new data stored in persistent disks using the 256-bit Advanced Encryption Standard (AES-256) and offers you the choice of having Google supply and manage the encryption keys or doing it yourself. Microsoft Azure, on the other hand, enables you to encrypt data using client-side encryption (protecting it both in transit and at rest) or to rely on Storage Service Encryption (SSE) to automatically encrypt data as it is written to Azure Storage. Amazon's offering for encrypting data at rest in S3 is nearly identical to Microsoft Azure's.
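In boto3 terms, requesting server-side encryption at rest is a per-object parameter; the sketch below shows both the S3-managed AES-256 option and a KMS-managed key. Bucket, object and key names are placeholders.

import boto3

s3 = boto3.client("s3")

# Server-side encryption with S3-managed keys (AES-256).
with open("quarterly-report.pdf", "rb") as f:
    s3.put_object(
        Bucket="example-reports-bucket",
        Key="reports/q1.pdf",
        Body=f,
        ServerSideEncryption="AES256",
    )

# Alternatively, encrypt with a customer-managed KMS key (key alias is a placeholder).
s3.put_object(
    Bucket="example-reports-bucket",
    Key="reports/q2.pdf",
    Body=b"example payload",
    ServerSideEncryption="aws:kms",
    SSEKMSKeyId="alias/example-storage-key",
)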

Also, check for data access logging -- to enable a record of access requests to specific buckets or objects -- and data disposal (wiping) provisions, to ensure data's fully destroyed if you decide to move it to a new provider's service.
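Access logging, too, can be switched on per bucket; a minimal boto3 sketch (source bucket, log bucket and prefix are placeholders) looks like this.

import boto3

s3 = boto3.client("s3")

# Record every access request against the source bucket into a separate log bucket.
s3.put_bucket_logging(
    Bucket="example-reports-bucket",
    BucketLoggingStatus={
        "LoggingEnabled": {
            "TargetBucket": "example-access-logs",
            "TargetPrefix": "reports-bucket/",
        }
    },
)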

Your provider should offer resources and controls that allow you to comply with key security standards and industry regulations. For example, depending on your industry, business focus and IT requirements, you may look for help in complying with Health Insurance Portability and Accountability Act, Service Organization Controls 1 financial reporting, Payment Card Industry Data Security Standard or FedRAMP security controls for information stored and processed in the cloud. So be sure to check out the list of supported compliance standards, including third-party certifications and accreditations.

Unlike security and compliance, for which you can make an objective assessment, application performance is highly dependent on IT environment, including cloud infrastructure configuration, network connection speeds and the additional traffic running over that connection. If you're achieving an I/O latency of 5 to 10 milliseconds running with traditional storage on premises, or even better than that with flash storage, you will want to prequalify application performance before committing to a cloud provider. It's difficult to anticipate how well a latency-sensitive application will perform in a public cloud environment without actually testing it under the kinds of conditions you expect to see in production.
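A quick way to start that prequalification is simply to time small reads and writes from wherever the application will actually run; the sketch below does this against S3 with boto3. The bucket name is a placeholder, and a serious evaluation would use production-like object sizes, concurrency and network paths.

import statistics
import time
import boto3

s3 = boto3.client("s3")
BUCKET = "example-latency-test"  # placeholder; the bucket must already exist
payload = b"x" * 4096            # small 4 KB test object

def timed_ms(fn):
    start = time.perf_counter()
    fn()
    return (time.perf_counter() - start) * 1000.0

writes = [timed_ms(lambda i=i: s3.put_object(Bucket=BUCKET, Key="probe/%d" % i, Body=payload))
          for i in range(20)]
reads = [timed_ms(lambda i=i: s3.get_object(Bucket=BUCKET, Key="probe/%d" % i)["Body"].read())
         for i in range(20)]

print("median write latency: %.1f ms" % statistics.median(writes))
print("median read latency:  %.1f ms" % statistics.median(reads))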

Speed of access is based, in part, on data location, meaning you can expect better performance if you colocate apps in the cloud alongside their data. If you're planning to store primary data in the cloud but keep production workloads running on premises, evaluate the use of an on-premises cloud storage gateway -- such as Azure StorSimple or AWS Storage Gateway -- to cache frequently accessed data locally and (likely) compress or deduplicate it before it's sent to the cloud.

To further address the performance needs of I/O-intensive use cases and applications, major public cloud storage providers offer premium storage capabilities, along with instances that are optimized for such workloads. For example, Microsoft Azure offers Premium Storage, allowing virtual machine disks to store data on SSDs. This helps solve the latency issue by enabling I/O-hungry enterprise workloads such as CRM, messaging and other database apps to be moved to the cloud. As you might expect, these premium storage services come with a higher price tag than conventional cloud storage.

Bottom line on application performance: Try before you buy.

A cloud storage service-level agreement spells out guarantees for minimum uptime during monthly billing periods, along with the recourse you're entitled to if those commitments aren't met. Contrary to many customers' wishes, SLAs do not include objectives or commitments for other important aspects of the storage service, such as maximum latency, minimum I/O performance or worst-case data durability.

In the case of the "big three" providers' services, the monthly uptime percentage is calculated by subtracting from 100% the average percentage of service requests not fulfilled due to "errors," with the percentages calculated every five minutes (or one hour in the case of Microsoft Azure Storage) and averaged over the course of the month.

Typically, when the uptime percentage for a provider's single-region, standard storage service falls below 99.9% during the month, you will be entitled to a service credit. (Though it's not calculated this way for SLA purposes, 99.9% availability implies no more than 43 minutes of downtime in a 30-day month.) The provider will typically credit 10% of the current monthly charges for uptime levels between 99% and 99.9%, and 25% for uptime levels below 99% (Google Cloud Storage credits up to 50% if uptime falls below 95%). Microsoft Azure Storage considers storage transactions failures if they exceed a maximum processing time (based on request type), while Amazon S3 and Google Cloud Storage rely on internally generated error codes to measure failed storage requests. Note that the burden is on you as the customer to request a service credit in a timely manner if a monthly uptime guarantee isn't met.
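Expressed as code, the credit tiers summarised above reduce to a simple lookup. The figures come from the article's summary of the "big three" SLAs at the time; check each provider's current terms before relying on them.

def service_credit_pct(monthly_uptime_pct, provider="aws_or_azure"):
    """Service credit owed, as a percentage of the monthly bill, for a
    single-region standard storage service (tiers as described above)."""
    if monthly_uptime_pct >= 99.9:
        return 0   # SLA met, no credit
    if monthly_uptime_pct >= 99.0:
        return 10  # between 99% and 99.9% uptime
    if provider == "google" and monthly_uptime_pct < 95.0:
        return 50  # Google Cloud Storage credits up to 50% below 95%
    return 25      # below 99% uptime

# 99.9% availability implies roughly 43 minutes of downtime in a 30-day month.
downtime_minutes = 30 * 24 * 60 * (1 - 0.999)
print(round(downtime_minutes))             # ~43
print(service_credit_pct(99.5))            # 10
print(service_credit_pct(94.0, "google"))  # 50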

Also, carefully evaluate the SLAs to determine whether they satisfy your availability requirements for both data and workloads. If a single-region service isn't likely to meet your needs, it may make sense to pay the premium for a multi-region service, in which copies of data are dispersed across multiple geographies. This approach increases data availability, but it won't protect you from instances of data corruption or accidental deletions, which are simply propagated across regions as data is replicated.

With these guidelines and caveats in mind, you can better assess whether public cloud storage makes sense for your particular use cases, data and applications. If public cloud storage providers' service-level commitments and capabilities fall short of meeting your requirements, consider developing a private cloud or taking advantage of managed cloud services.

Though public cloud storage may not be an ideal fit for your production data and workloads, you may find it fits the bill for some of your less demanding use cases.

Companies move toward public cloud storage

Evaluate all variables in the cloud storage equation

Public, private or hybrid? What's the right cloud storage for you?

Original post:
Overcome problems with public cloud storage providers - TechTarget

Read More..

AWS claims human error to blame for US cloud storage outage – ComputerWeekly.com

Amazon Web Services (AWS) says human error caused the cloud storage system outage, which lasted several hours and affected thousands of customers earlier this week.


Amazon's Simple Storage Service (S3), which provides backend support for websites, applications and other cloud services, ran into technical difficulties on the morning of Tuesday 28 February in the US, returning error messages to those trying to use it.

The cloud service giant revealed the cause in a post-mortem-style blog post, and explained the issue can be traced back to some exploratory work its engineers were doing to establish why the S3 billing system was performing so slowly.

During this process, a number of servers providing underlying support for two S3 subsystems were accidentally removed, requiring a full restart, which caused the problems.

"An authorised S3 team member using an established playbook executed a command which was intended to remove a small number of servers for one of the S3 subsystems that is used by the S3 billing process," said the blog post.

"Unfortunately, one of the inputs to the command was entered incorrectly and a larger set of servers was removed than intended."

This affected instances of S3 run out of the firm's US East-1 datacentre region in Virginia, US, causing havoc for a number of high-profile websites and service providers, including the cloud-based collaboration platform, Box, and instant and group messaging site, Slack.

The outage also had a knock-on impact on a number of AWS services, hosted from US East-1, that rely on S3 for backend support, including Amazon Elastic Compute Cloud (EC2), AWS Elastic Block Store, and AWS Lambda.

It also caused the AWS service status page to stop working, causing problems for users keen to find out when the firm's systems would be back up and running again.

The downtime has prompted numerous industry commentators to speak up about the risks involved with running a business off the infrastructure of a single cloud provider, while others have seized on it to reinforce the importance of having a robust business continuity strategy in place.

AWS, however, goes on to say its platforms are built to be highly resilient, but the full-scale restart of S3 took much longer than anticipated.

"We build our systems with the assumption that things will occasionally fail, and we rely on the ability to remove and replace capacity as one of our core operational processes," said the post.

"While this is an operation that we have relied on to maintain our systems since the launch of S3, we have not completely restarted the index subsystem or the placement subsystem in our larger regions for many years.

"S3 has experienced massive growth over the last several years and the process of restarting these services and running the necessary safety checks to validate the integrity of the metadata took longer than expected," it added.

The incident has prompted AWS to re-evaluate the setup of its S3 infrastructure, the blog post continues, to prevent similar incidents from occurring in future.

"We want to apologise for the impact this event caused for our customers. While we are proud of our long track record of availability with Amazon S3, we know how critical this service is to our customers, their applications and users, and their businesses. We will do everything we can to learn from this event and use it to improve our availability even further," it concluded.

More here:
AWS claims human error to blame for US cloud storage outage - ComputerWeekly.com

Read More..

Australian crowdsourcing platform turns to cloud storage to fuel growth – ComputerWeekly.com

When Airtasker hit the magic hockey stick growth spurt it had always wanted, the Australian crowdsourcing service provider realised it needed to change its storage strategy.


In the four years after its launch in 2012, Airtasker attracted 45,000 users and put through A$1m (US$760,000) worth of jobs. Today, the platform boasts some 950,000 users.

"We have been growing really fast over the past 18 months," said Paul Keen, chief technology officer at Airtasker, which has an annualised revenue run rate of A$85m for the 2017 calendar year. "Eighteen months ago, that would have been close to A$15m."

Much of Airtasker's success may be attributed to its emphasis on building trust between job posters and the workers who bid for 65,000 jobs each month, from transcribing videos to cleaning bedrooms.

Through the platform, workers will quote for jobs, while job posters award jobs based on the quotes and the worker's task history, profile and reputation rating. Airtasker takes a 15% cut of the payment for completed jobs.

"What we find now is that people assign and bid for tasks based on reputation -- we have 400,000 reviews on our site," said Keen. "When you have plenty of reviews to go by, you're likely to choose the person who's not necessarily the cheapest, but the best at a job."

Keen joined Airtasker a year ago when the company was climbing through its hockey stick growth curve. He had to make sure the company's technology enabled rather than impeded growth.

At the time, Airtasker was using four production servers in a managed environment. "They were about as powerful as a Macbook Pro," he said. "It was a pretty basic scenario. We had a managed NAS (network attached storage), but we had no idea what it was really doing or when it would run out of storage."

Keen had already seen the problems that a shared NAS system could bring from his experience at his previous company. "When that NAS went down, the whole site went down and there's nothing you could do," he said. "You've got to wait several hours for the environment to come back up."

To avoid similar issues, and the onerous task of managing a NAS system, Airtasker decided to move to Amazon Web Services (AWS), where infrastructure management would be taken care of. "I don't know why you would want to manage storage nowadays when there are enough providers out there that can do it for you," said Keen.

Keen liked the fact that cloud service providers such as AWS offer snappy elasticity that ensures Airtasker's storage capacity scales up as the business grows.

"Recently, we had to go from half a gigabyte of database storage to two terabytes," he said. "It took just two clicks of a button for that to happen with zero downtime. Half an hour later, my entire storage environment was moved to the new environment."
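The article does not say which managed database service sits behind this, but on AWS the same kind of online storage growth can be requested programmatically; the sketch below assumes an RDS instance and uses a placeholder identifier.

import boto3

rds = boto3.client("rds")

# Grow allocated storage for a hypothetical RDS instance to 2 TB. For supported
# engines, RDS applies the change while the database stays online.
rds.modify_db_instance(
    DBInstanceIdentifier="example-production-db",  # placeholder instance name
    AllocatedStorage=2048,                         # gigabytes
    ApplyImmediately=True,
)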

The initial move to AWS, however, took more than two months, as Airtasker implemented an immutable infrastructure strategy, where IT components are replaced rather than changed when managing services and deploying software.

Immutable infrastructure is an approach that is only possible with cloud platforms, which have the automation capabilities required to build and deploy components. Airtasker worked with Rackspace on the deployment scripts.

"Instead of upgrading our storage environment, we build a separate environment next to it, do some testing and switch users over. Once we are comfortable with the new environment, we destroy the old one," said Keen.
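In spirit, the immutable approach Keen describes looks something like the sketch below: stand up replacement instances from a known image, put them behind the load balancer, then retire the old environment. The AMI, target group and instance IDs are placeholders, and this is a generic illustration rather than Airtasker's actual deployment scripts.

import boto3

ec2 = boto3.client("ec2")
elb = boto3.client("elbv2")

OLD_INSTANCE_IDS = ["i-0123456789abcdef0"]  # placeholder: the environment being replaced
TARGET_GROUP_ARN = ("arn:aws:elasticloadbalancing:ap-southeast-2:"
                    "111122223333:targetgroup/example/abc123")  # placeholder

# 1. Build the replacement environment from an immutable machine image.
new = ec2.run_instances(ImageId="ami-0abc1234example", InstanceType="t3.medium",
                        MinCount=1, MaxCount=1)
new_ids = [i["InstanceId"] for i in new["Instances"]]
ec2.get_waiter("instance_running").wait(InstanceIds=new_ids)

# 2. Run smoke tests against the new instances here before switching traffic.

# 3. Switch users over, then destroy the old environment.
elb.register_targets(TargetGroupArn=TARGET_GROUP_ARN,
                     Targets=[{"Id": i} for i in new_ids])
elb.deregister_targets(TargetGroupArn=TARGET_GROUP_ARN,
                       Targets=[{"Id": i} for i in OLD_INSTANCE_IDS])
ec2.terminate_instances(InstanceIds=OLD_INSTANCE_IDS)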

By building from scratch, Keen said he is now able to grow the companys storage capacity in an elastic manner, given that he knows exactly what is going on in the deployment script.

The best part is that this approach is invisible to Airtasker's developers, who need to work fast and focus on customer needs instead of firefighting. "There's no value in building storage environments, which should just be there for customers," said Keen.

Keen concedes there is supplier lock-in with using AWS, but it doesn't bother him. "This is my third AWS migration and I have come to the conclusion that if you want to get the best out of these public clouds, there's a whole degree of supplier lock-in," he said.

Outside of IT, Airtasker's employees use Google Drive and Dropbox for cloud storage. "We like that there's a lot of security around those services, and it's the provider's responsibility to ensure governance. We also store all the logs, but that's more for audit purposes," he added.

See more here:
Australian crowdsourcing platform turns to cloud storage to fuel growth - ComputerWeekly.com

Read More..

Infrastructure provisioning made easier with hybrid cloud storage – TechTarget

Better access to digital information has opened new revenue opportunities in nearly every industry. Whether it involves business intelligence and analytics, mobility or the internet of things, it's clear the next level of business competitiveness is being built upon a foundation of data availability. None of this is news, of course. And, if you're reading this column, you are likely already heavily involved in the ongoing battle to ensure that IT resources keep pace with the increasing demands placed on business applications and data.

The good news is IT infrastructure innovation has accelerated as well. Hardware, for example, continues to become quicker and more affordable. As a result, storage systems perform faster, scale higher and hold more capacity than ever. In theory, you'd think the two -- (1) growing demands served by (2) ever-more capable infrastructure -- would cancel each other out. But that's not how it works in the real world. While there are a number of reasons for this inconsistency, one that doesn't get discussed enough is the time to provision new storage capacity.

When storage vendors discuss time to provision, they tend to focus on how easy it is to set up and configure an array physically located on site in a rack with adequate power and cooling. Here, setup time is often a very small portion of the entire process. The true time to provision, however, encompasses everything that occurs from the moment you identify a storage resource need to the moment newly acquired resources are made available to applications. The full end-to-end process can take months, and the few minutes or hours it takes to set up the final storage array is only a small part of the overall pain. Meanwhile, application demands continue to increase while the provisioning process plays out.


Delays in provisioning infrastructure deployments not only slow down new IT initiatives. In this era where business competitiveness is often determined by data access, delays can negatively impact revenue opportunities and the bottom line as well. For years, you could address the time-to-provision challenge by simply deploying more storage capacity than immediately necessary, giving the environment room to support near-term growth during the sometimes lengthy process of new storage system procurement. While still considered a best practice by some, having excess infrastructure just sitting around doing nothing adds unnecessary cost, a nonstarter in this age of tighter budgets.

One obvious method for reducing the time to provision storage is using public cloud services. While this can provide near-immediate access to new capacity, performance, security and other business considerations often lead many firms to prudently retain a significant portion of data on premises. The trick here is to achieve cloud-like agility in storage provisioning while maintaining those on-premises capabilities required by many workloads. There are a number of options available to help improve, or at least mask, time-to-provision challenges for on-premises infrastructure.

IT demands change so rapidly that new resources are often needed immediately, not months down the road. Some look to the public cloud to solve these challenges, but these services alone aren't right for everyone or every workload. In response, on-premises vendors are offering greater intelligence and more flexibility in payment options to ease the burden of deploying new capacity on site. While there are benefits to this approach, it can still be a challenge to match the agility of the public cloud. For that, hybrid clouds have stepped in as an excellent option to deliver on-premises performance and security while integrating the agility of public cloud infrastructure.

Provisioning a software-defined data center

How automation and provisioning mesh with cloud computing

Develop a private cloud environment

Read more:
Infrastructure provisioning made easier with hybrid cloud storage - TechTarget

Read More..

Last chance to get a lifetime subscription to pCloud Premium Cloud Storage for just $59.99 – Neowin

Today's highlighted deal comes from our Software section of Neowin Deals, where it's your last chance to save 87% off* a lifetime subscription to pCloud Premium Cloud Storage. Save, share and enjoy even your largest files quickly and securely with pCloud.

You've got too many files in your life to effectively manage on just one device, which is where pCloud comes in handy. A supremely secure web storage space for all of your photos, videos, music, documents, and more, pCloud gives you an easily accessible place to store your valuables without taking up any precious data on your devices. With unrivaled transfer speed and security, pCloud makes saving and sharing memories extremely easy.

For specifications, and license info please click here.

A lifetime license to pCloud Premium Cloud Storage normally represents an overall recommended retail pricing* of $478.80, but it can be yours for just $59.99 for a limited time, a saving of $418.81.

Stick with Neowin Deals and earn credit or even deeper discounts.

Get this deal or learn more about it | View more discounted offers in Online Courses

That's OK. If this offer doesn't interest you, why not check out our giveaways on the Neowin Deals website? There's also a bunch of freebies you can check out here.

Or try your luck on The Ultimate Entertainment Center Giveaway. All you have to do is sign up to enter this giveaway.


Disclosure: This is a StackCommerce deal or giveaway in partnership with Neowin; an account at StackCommerce is required to participate in any deals or giveaways. For a full description of StackCommerce's privacy guidelines, go here. Neowin benefits from shared revenue of each sale made through our branded deals site, and it all goes toward the running costs. *Values or percentages mentioned above are subject to StackCommerce's own determination of retail pricing.

Read this article:
Last chance to get a lifetime subscription to pCloud Premium Cloud Storage for just $59.99 - Neowin

Read More..

Scality adds HALO cloud-based monitoring services to RING – TechTarget

Scality has added cloud-based monitoring services to its RING object storage that perform predictive analytics to improve uptime.

The Scality HALO Cloud Monitor, launched this week, is available for on-premises customers and service providers who build private clouds on RING.


HALO remote cloud-based monitoring services allow users to gauge the health of servers, network bandwidth and storage. HALO uses machine learning to analyze previous system behavior and define a group of key performance indicators (KPIs) to detect changes in the storage environment that indicate potential problems in the system. The KPIs are predefined by Scality support experts, but users have the option to customize them.
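Scality has not published the model behind HALO, but the general pattern of learning a baseline for a KPI and alarming on sharp deviations can be sketched simply. The example below flags samples more than three standard deviations from a rolling baseline; the window, threshold and latency figures are assumptions for illustration, not Scality's method.

from statistics import mean, stdev

def detect_anomalies(samples, window=30, threshold=3.0):
    """Flag KPI samples that deviate sharply from the recent baseline --
    a toy stand-in for the learned-baseline alerting described above."""
    alarms = []
    for i in range(window, len(samples)):
        baseline = samples[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma and abs(samples[i] - mu) > threshold * sigma:
            alarms.append((i, samples[i]))
    return alarms

# Example: steady disk latency (in ms) with one sudden spike at the end.
latency_ms = [5.0 + 0.2 * (i % 3) for i in range(60)] + [42.0]
print(detect_anomalies(latency_ms))  # [(60, 42.0)]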

The standard version of the HALO Cloud Monitor pulls information from 15 metrics for predictive analysis and capacity planning. Scality also offers a full-scale Dedicated Care Services (DCS) version that gathers data based on 100 metrics. Both versions support Amazon Simple Storage Service-based deployments.

HALO's centralized dashboard shows system-level statistics on memory, disks, CPUs and storage. It gives visuals of events and offers proactive incident detection and system health checks. Alarms are triggered when behavioral changes are detected in the object storage.

"It continuously measures from a user perspective to make sure [the system] is working well," said Daniel Binsfeld, Scality's vice president of DevOps and global support.

The standard version of the cloud-based monitoring service offering became generally available in January. DCS went into beta in February with general availability scheduled for this month.

George Crump, president and founder of analyst firm Storage Switzerland, said Scality's HALO cloud monitoring services show that object storage technology is maturing.

"It's almost a big data program," Crump said. "It analyzes the data that the object storage has about itself. It gives the ability to consume and act upon that data. Most systems have general information about what is going on, but they don't provide the ability to consume that data."


Scality offers a 100% uptime guarantee with its DCS program. DCS provides capacity planning and root-cause analysis of the KPIs that detect problems in the system. DCS includes Scality's in-house support team. If the cloud-based monitoring service becomes unavailable at any time, the customer does not have to pay a service fee for the affected time period.

"If you use HALO with our DCS program, then our people are doing the monitoring," said Paul Turner, Scality's chief marketing officer. "Those customers get a 100% availability guarantee. We make sure the system is always up and running."

Scality RING software uses a decentralized distributed architecture, providing concurrent access to data stored on x86-based hardware. RING's core features include replication and erasure coding for data protection, auto-tiering and geographic redundancies inside a cluster.

Object storage makes use of erasure coding for data resilience and to avoid the use of RAID. As the capacities of hard disk drives grow, RAID rebuilds become time-consuming when large drives fail.
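As a toy illustration of why erasure coding avoids RAID-style rebuilds of whole drives, the sketch below uses simple XOR parity (a single-parity code, far simpler than the schemes RING actually ships) to rebuild one lost fragment from the surviving ones.

def xor_fragments(fragments):
    """Byte-wise XOR of equally sized fragments."""
    out = bytearray(len(fragments[0]))
    for frag in fragments:
        for i, b in enumerate(frag):
            out[i] ^= b
    return bytes(out)

# Split an object into k data fragments and add one parity fragment (k + 1 total).
data_fragments = [b"OBJE", b"CT-S", b"TORE"]
parity = xor_fragments(data_fragments)

# Lose any single fragment...
lost_index = 1
survivors = [f for i, f in enumerate(data_fragments) if i != lost_index] + [parity]

# ...and rebuild only that fragment from the survivors, leaving healthy data untouched.
rebuilt = xor_fragments(survivors)
assert rebuilt == data_fragments[lost_index]
print(rebuilt)  # b'CT-S'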

Choosing storage for cloud-based applications

Scale-out NAS or object storage for unstructured data?

Object storage gaining ground over NAS

Read the original:
Scality adds HALO cloud-based monitoring services to RING - TechTarget

Read More..