Category Archives: Cloud Hosting

Cost Reduction Strategies on Java Cloud Hosting Services – InfoQ.com

Key Takeaways

Cloud resources can be expensive, especially when you are forced to pay for resources that you don't need; on the other hand, resource shortages cause downtimes. What's a developer to do? In this article we discuss techniques for finding the golden mean that lets you pay for just the resources you actually consume, without being limited as your application's capacity requirements scale.

The first step to any solution of course is admitting that you have a problem. Below are some details on the issue that many cloud users face.

Almost every cloud vendor offers the ability to choose from a range of different VM sizes. Choosing the right VM size can be a daunting task; too small and you can trigger performance issues or even downtimes during load spikes. Over-allocate? Then during normal load or idle periods all unused resources are wasted. Does this scenario look familiar from your own cloud hosted applications?

And when the project starts growing horizontally, the resource inefficiency issue replicates in each instance, and so, the problem grows proportionally.

In addition, if you need to add just a few more resources to the same VM, the only way out with most current cloud vendors is to double your VM size. See the sample AWS offering below.

Exacerbating the problem, you incur downtime when you move: stopping the current VM, performing all the steps of application redeployment or migration, and then dealing with the inevitable associated challenges.

This shows that VMs are not especially flexible or efficient in terms of resource usage and limit adjustment under variable loads. Such lack of elasticity leads directly to overpaying.

If scale out is not helping to use resources efficiently, then we need to look inside our VMs for a deeper understanding of how vertical scaling can be implemented.

Vertical scaling optimizes the memory and CPU usage of any instance according to its current load. If configured properly, this works perfectly for both monoliths and microservices.

Setting up vertical scaling inside a VM by adding or removing resources on the fly, without downtime, is a difficult task. VM technologies provide memory ballooning, but it is not fully automated; it requires tooling to monitor memory pressure in the host and guest OS and then trigger scaling up or down as appropriate. This doesn't work well in practice, as memory sharing needs to be automatic in order to be useful.

Container technology unlocks a new level of flexibility thanks to its out-of-the-box automatic resource sharing among containers on the same host, with the help of cgroups. Resources that are not consumed within the limit boundaries are automatically shared with other containers running on the same hardware node.

And unlike VMs, the resource limits in containers can be easily scaled without reboot of the running instances.

As a result, the resizing of the same container on the fly is easier, cheaper and faster than moving to larger VMs.
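
To make the cgroup limits concrete, here is a minimal sketch (an illustration added for this discussion, not production code) of how a Java process running in a container can read the memory limit imposed on it and compare it with what the JVM has actually committed. It assumes the cgroup v1 filesystem layout, where the limit is exposed at /sys/fs/cgroup/memory/memory.limit_in_bytes; cgroup v2 exposes it at /sys/fs/cgroup/memory.max instead.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class CgroupMemoryCheck {
    public static void main(String[] args) throws IOException {
        // cgroup v1 memory limit for this container (assumption: the v1 hierarchy is mounted)
        Path limitFile = Paths.get("/sys/fs/cgroup/memory/memory.limit_in_bytes");
        long containerLimit = Long.parseLong(Files.readAllLines(limitFile).get(0).trim());

        Runtime rt = Runtime.getRuntime();
        long mib = 1024 * 1024;
        System.out.printf("Container memory limit: %d MiB%n", containerLimit / mib);
        System.out.printf("JVM committed heap:     %d MiB%n", rt.totalMemory() / mib);
        System.out.printf("JVM max heap (-Xmx):    %d MiB%n", rt.maxMemory() / mib);
    }
}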

There are two types of containers: application and system containers. An application container (such as Docker or rkt) typically runs a single process, whereas a system container (LXD, OpenVZ) behaves like a full OS and can run full-featured init systems such as systemd, SysVinit, and OpenRC, which allow processes to spawn other processes such as openssh, crond, or syslogd together inside a single container. Both types support vertical scaling with resource sharing for higher efficiency.

Ideally, on new projects you want to design around application containers from the ground up, as it is relatively easy to create the required images using publicly available Docker templates. But there is a common misconception that containers are good only for greenfield applications (microservices and cloud-native). Experience and real-world use cases prove that it is possible to migrate existing workloads from VMs to containers without rewriting or redesigning applications.

For monolithic and legacy applications it is preferable to use system containers, so that you can reuse the architecture, configuration, etc. that were implemented in the original VM design: use standard network configurations like multicast, run multiple processes inside a container, avoid issues with incorrect memory limit determination, write to the local file system and keep it safe during container restarts, troubleshoot issues and analyze logs in the established way, use a variety of configuration tools based on SSH, and rely freely on other important "old school" practices.

To migrate from VMs, the monolithic application topology should be decomposed into small logical pieces distributed among a set of interconnected containers. A simple representation of the decomposition process is shown in the picture below.

Each application component should be placed inside an isolated container. This approach can simplify the application topology in general, as some specific parts of the project may become unnecessary within a new architecture.

For example, Java EE WebLogic Server consists mainly of three kinds of instances required for running in a VM: the administration server, the node manager and the managed servers. After decomposition, we can get rid of the node manager role, which is designed as a VM agent to add and remove managed server instances; now they are added automatically by the container orchestration platform and attached directly to the administration server using a set of WLST (WebLogic Server Scripting Tool) scripts.

To proceed with migration, you need to prepare the required container images. For system containers, that process might be a bit more complex than for application containers, so either build them yourself or use an orchestrator like Jelastic with pre-configured system container templates.

And finally, deploy the project itself and configure the needed interconnections.

Now each container can be scaled up and down on the fly with no downtime. A container is much thinner than a virtual machine, so this operation takes much less time than scaling VMs. And the horizontal scaling process becomes very granular and smooth, as a container can be easily provisioned from scratch or cloned.

For scaling Java vertically, it is not sufficient to just use containers; you also need to configure the JVM properly. Specifically, the garbage collector you select should provide memory shrinking at runtime.

Such a GC packs all the live objects together, removes garbage objects, and uncommits and releases unused memory back to the operating system. In contrast, with a non-shrinking GC or non-optimal JVM start options, a Java application holds on to all committed RAM and cannot be scaled vertically according to the application load. Unfortunately, the JDK 8 default Parallel garbage collector (-XX:+UseParallelGC) does not shrink the heap and therefore does not solve the issue of inefficient RAM usage by the JVM. Fortunately, this is easily remedied by switching to Garbage-First (-XX:+UseG1GC).

Let's look at the example below. Even if your application has low RAM utilization (blue in the graph), the unused resources cannot be shared with other processes or containers, as the memory is fully allocated to the JVM (orange).

However, the good news for the Java ecosystem is that as of JDK 9, the modern shrinking G1 garbage collector is enabled by default. One of its main advantages is the ability to compact free memory space without lengthy GC pause times and to uncommit unused heap.

Use the following parameter to enable G1 if you are on a JDK release older than 9: -XX:+UseG1GC

The following two parameters configure the vertical scaling of memory resources: -Xms, which sets the small initial heap the JVM starts with, and -Xmx, which caps the maximum heap it can grow to.

Also, the application should periodically invoke a full GC, for example via System.gc(), during a low-load or idle stage. This process can be implemented inside the application logic or automated with the help of the external Jelastic GC Agent.
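
As a rough sketch of how such a trigger could look (an assumption for illustration, not the Jelastic GC Agent itself, with isIdle() as a hypothetical placeholder for your own load detection), a scheduled task can request a full GC during idle periods so that a shrinking collector such as G1 has the chance to uncommit unused heap:

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class IdleGcTrigger {

    // Hypothetical placeholder: replace with real load detection
    // (request rate, queue depth, CPU usage, etc.).
    private static boolean isIdle() {
        return true;
    }

    public static void main(String[] args) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        // Every 5 minutes, request a full GC if the application is idle,
        // letting G1 compact the heap and release unused memory to the OS.
        scheduler.scheduleAtFixedRate(() -> {
            if (isIdle()) {
                System.gc();
            }
        }, 5, 5, TimeUnit.MINUTES);
    }
}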

In the graph below, we show the result of activating the following JVM start options with delta time growth of about 300 seconds:

-XX:+UseG1GC -Xmx2g -Xms32m

This graph illustrates the significant improvement in resource utilization compared to the previous sample. The reserved RAM (orange) increases slowly corresponding to the real usage growth (blue). And all unused resources within the Max Heap limits are available to be consumed by other containers or processes running in the same host, and not wasted by standing idle.

This proves that a combination of container technology and G1 provides the highest efficiency in terms of resource usage for Java applications in the cloud.

The last (but not least) important step is to choose a cloud provider with a "pay per use" pricing model in order to be charged only based on consumption.

Cloud computing is very often compared to electricity usage, in that it provides resources on demand and offers a "pay as you go" model. But there is a major difference: your electric bill doesn't double when you use a little more power!

Most of the cloud vendors provide a "pay as you go" billing model, which means that it is possible to start with a smaller machine and then add more servers as the project grows. But as we described above, you cannot simply choose the size that precisely fits your current needs and will scale with you, without some extra manual steps and possible downtimes. So you keep paying for the limits: for a small machine at first, then for one double the size, and ultimately for horizontal scaling across several underutilized VMs.

In contrast to that, a "pay as you use" billing approach considers the load on the application instances at a present time, and provides or reclaims any required resources on the fly, which is made possible thanks to container technology. As a result, you are charged based on actual consumption and are not required to make complex reconfigurations to scale up.

But what if you are already locked into a vendor with running VMs, paying for the limits, and not ready to change? There is still a possible workaround to increase efficiency and save money. You can take a large VM, install a container engine inside it, and then migrate the workloads from all of the small VMs. In this way, your application will run inside containers within the VM, a kind of "layer cake", but this helps to consolidate and compact used resources, as well as to release and share unused ones.

Realizing the benefits of vertical scaling helps to quickly eliminate a set of performance issues, avoid unnecessary complexity from rashly implemented horizontal scaling, and decrease cloud spend regardless of application type, monolith or microservice.

Ruslan Synytsky is CEO and co-founder of Jelastic, delivering multi-cloud Platform-as-a-Service for developers. He designed the core technology of the platform that runs millions of containers in a wide range of data centers worldwide. Synytsky worked on building highly-available clustered solutions, as well as enhancements of automatic vertical scaling and horizontal scaling methods for legacy and microservice applications in the cloud. Rich in technical and business experience, Synytsky is actively involved in various conferences for developers, hosting providers, integrators and enterprises.

See the original post:
Cost Reduction Strategies on Java Cloud Hosting Services - InfoQ.com

President Trump Could Cost US Cloud Computing Providers More Than $10 billion by 2020 – The Data Center Journal

The U.S. cloud computing industry stands to lose more than $10 billion by 2020 as a result of President Trump's increasingly shaky reputation on data privacy, according to the latest research from secure data center experts Artmotion.

Growth for U.S. cloud computing providers is already thought to be slowing. Although IDC's latest Worldwide Public Cloud Services Spending Guide suggests that the US will generate more than 60% of total worldwide cloud revenues to 2020, the country is expected to experience the slowest growth rate of the eight regions in the analysis.

However, this forecast slowdown does not factor in the effect that President Trump's controversial record on data privacy has had on business confidence in the U.S. as a data hosting location. This coincides with a rapid increase in people expressing unfavorable opinions about the U.S. more generally. In fact, the latest study from the Pew Research Center highlights that just 22% of people have confidence in President Trump to do the right thing when it comes to international affairs.

As a result of this growing uncertainty, Artmotion's new analysis suggests that U.S. cloud providers will experience further slowing of growth in the next three years, creating estimated losses of $10.1 billion for the industry between 2017 and 2020.

Mateo Meier, CEO of Artmotion, commented: In a market that is still expected to grow significantly in the next few years, it is vital that U.S. service providers continue to attract new customers in order to retain market share. Despite the U.S.'s current dominance of the global cloud computing market, there is no certainty that the status quo will be maintained. Perhaps the key reason for US cloud providers to be fearful is that this isn't the first time we've been here.

Edward Snowden's revelations about PRISM and the NSA's mass surveillance techniques were hugely damaging to U.S. cloud companies. They also encouraged many businesses to completely rethink their data strategies, rather than continuing to trust that U.S. cloud providers would guarantee the levels of data security and privacy they need. The impact that President Trump could have needs to be understood in that context.

Artmotions full analysis is available as a free download here.

Link:
President Trump Could Cost US Cloud Computing Providers More Than $10 billion by 2020 - The Data Center Journal

Why 2017 is the Year to Understand Cloud Computing – Business 2 Community

The Cloud has become a major buzzword in business for very good reason. Small businesses and large enterprises alike can take advantage of cloud computing to build and expand the computer based infrastructure behind the scenes. Follow this guide to better understand what cloud computing is, how it works, and how you can take advantage.

In the old world of web servers and internet infrastructure, websites and other online assets were typically limited to one main server, or a few linked servers using tools called load balancers, to process and send data, whether it be a customer facing website or internal facing application. The advent of content delivery networks (CDNs) powered up those servers to host and serve data from the edge of the network for faster serving and sometimes lower costs.

As computing demand exploded with the rise of the smartphone and high-speed internet, consumer and business needs downstream of those servers continue to creep upward. Cloud computing has emerged as the best option to handle an array of computing needs for startups and small businesses, thanks to the ability to start at a low cost and scale, almost infinitely, as demand grows. Advances in cloud technology at Amazon, Google, Microsoft, IBM, Oracle, and other major cloud providers are making cloud computing more desirable for all businesses.

When cloud computing first emerged, large enterprises were the only businesses able to afford the cost of elastic, flexible computing power. Now, however, those costs are more likely a drop in the bucket for small businesses.

For example, I use the cloud to store and serve videos for Denver Flash Mob, a side hustle business I run with my wife. Our monthly bill is typically around a dollar or two, and heavy months lead to a bill around five bucks. No big deal! My lending startup Money Mola is also cloud-based, with costs for both a development server and a public-facing server running us around $30 per month.

The first time I logged into Amazon Web Services (AWS) it seemed like I needed a computer science degree to use it! I had a hard time doing even basic tasks outside of uploading and sharing videos. Thankfully Amazon has made using AWS much easier, though it is not without its challenges.

I'm a pretty techy guy, so my skillset is a bit more advanced than the average computer user's. I have set up AWS to send outgoing transactional emails, automatically back up websites, and more on my own. If you are willing and able to hire a cloud expert, the possibilities of the cloud are endless. Anything from web hosting to artificial intelligence and big data analysis can run in the cloud.

The most basic way to get started with cloud computing is website and computer backups. If you use WordPress for your website, setting up cloud backups is simple with one of a handful of plugins like Updraft Plus. If you can use the WordPress dashboard, you can set up cloud backups with Updraft Plus. It is quick and easy, and includes out-of-the-box support for storage from companies like AWS, Dropbox, Google Drive, Rackspace Cloud, and other services. The paid plugin version adds access to Microsoft OneDrive and Azure, Google Cloud Storage, and other options.

I run several backups of both my laptop and my web based assets. If my home were to be burglarized or burned down, the cloud has me covered. If my laptop is stolen, I have a backup at home and in the cloud. Redundant backups are not optional, they are a must in 2017.

In addition to safe, secure backups, the cloud can reach far corners of the planet. Utilizing cloud based CDNs, you know your customers will get every video and web page they want with near instant speeds.

Let's say your business has a popular video you want to share around the world. With a cloud CDN, you upload your video once to the web. Then the CDN takes over and creates copies of that video file in data centers around the world. Whenever a customer clicks to view that video, they are served a copy from the closest data center to their location.

Thanks to the power of a CDN, you don't have to send viewers in Australia, London, Bangkok, and Buenos Aires a video from your web server in Texas. Each one gets a local copy so they get their video even faster, offering a better customer experience. App-based businesses can even run multiple versions of their app in data centers around the world, ensuring every user has the same great experience.

It doesn't matter what your business does; there is some way the cloud can help you achieve better results. The cloud is only going to grow and become more prominent in business. Older computing methods will go the way of the fax machine. If you want serious computing success with scalability and flexibility, the cloud is your best option.

Read the original post:
Why 2017 is the Year to Understand Cloud Computing - Business 2 Community

What You NEED To Look For In A Cloud Hosting SLA – TG Daily (blog)

In the modern world of business IT, the cloud is king; that's just a fact. According to a recent survey, 95% of all businesses are using either public or private cloud hosting services, and the vast majority of businesses are contracting with at least six different cloud computing providers.

This makes sense, of course. Cloud computing is inexpensive, reliable, and available even to SMEs (Small-To-Midsized-Enterprises), who often could not afford expensive, on-site IT infrastructure.

However, not all Vancouver cloud hosting companies are the same. As the cloud becomes more and more important to critical business operations, robust Service Level Agreements (SLAs) are essential for any business with a cloud hosting partner.

Essentially, an SLA is a legally-binding document that defines performance standards, uptime, and customer support standards between a cloud provider and a business.

In this document, things such as expected network uptime, Mean Time Between Failure (MTBF), data throughput, and server/software performance are defined in plain language.

The requirements both for the hosting provider and the customer are also defined, as are the next steps that can be taken if either party fails to uphold their end of the contract.

An SLA is the single most important document you'll sign when choosing a new cloud hosting partner. So here's what you should look for before signing a new cloud hosting SLA.

Cloud hosting SLAs are complicated documents, but there are some simple things that you can look for to ensure you're signing an SLA from a reputable company.

System Uptime: This is the single most important guarantee you can get on your SLA. Any reputable cloud hosting company should offer system uptime of 99.9% or higher, and have clear guarantees for compensation in case they fail to uphold the system uptime standards outlined in the SLA.

Clear Support And Response Time Guidelines: Your SLA should include guarantees both for the level of customer support, and response times from support staff. Try to choose a cloud hosting provider that offers 24/7 customer support, and has a clear policy for fast, reliable response times.

Detailed Definitions For Data Ownership And Management: Any SLA you sign should include details about data ownership. You must make it clear that your company still owns any data hosted by a third party.

Your SLA should include language that makes your data ownership clear as well as detailed next steps for retrieval of proprietary data in case you must break the service contract.

Clearly-Defined System Security: An SLA should always include a set of security standards that are clearly defined, and testable by you, or a third party.

Your SLA should also allow you to take appropriate security precautions if desired such as using a third-party auditing service to ensure your data is properly protected.

Steps That Can Be Taken In Case Of Non-Compliance Or Disputes: If your cloud hosting provider fails to uphold their SLA, there must be proper, legal steps that your company can take to exit the contract, or obtain compensation from the company.

A clear strategy for resolving conflicts should be defined, as should a clear exit strategy that can be implemented in case the terms of the contract are breached.

Any reputable cloud hosting company in Canada should be willing to create an SLA with these terms, and if you find that your potential partner is unwilling to create a comprehensive SLA for any reason, walk away. You should never enter a contract with a cloud hosting provider without an SLA; the risks are simply too great.

An SLA is a multifunctional legal document. It protects both you and your cloud hosting partner, and ensures that your business relationship is mutually beneficial.

For this reason, you should only do business with reputable companies that offer robust SLAs. And if you follow these tips and understand the basics behind SLAs, you're sure to find success when searching for a cloud hosting partner in Canada!

Read more from the original source:
What You NEED To Look For In A Cloud Hosting SLA - TG Daily (blog)

President Trump could cost US cloud computing providers more than $10 billion by 2020 – Bdaily

The US cloud computing industry stands to lose more than $10 billion by 2020 as a result of President Trump's increasingly shaky reputation on data privacy, according to the latest research from secure data centre experts Artmotion.

Growth for US cloud computing providers is already thought to be slowing. Although IDC's latest Worldwide Public Cloud Services Spending Guide suggests that the US will generate more than 60% of total worldwide cloud revenues to 2020, the country is expected to experience the slowest growth rate of the eight regions in the analysis.

However, this forecast slowdown does not factor in the effect that President Trump's controversial record on data privacy has had on business confidence in the US as a data hosting location. This coincides with a rapid increase in people expressing unfavourable opinions about the US more generally. In fact, the latest study from the Pew Research Center highlights that just 22% of people have confidence in President Trump to do the right thing when it comes to international affairs.

As a result of this growing uncertainty, Artmotion's new analysis suggests that US cloud providers will experience further slowing of growth in the next three years, creating estimated losses of $10.1 billion for the industry between 2017 and 2020.

Mateo Meier, CEO of Artmotion, commented: In a market that is still expected to grow significantly in the next few years, it is vital that US service providers continue to attract new customers in order to retain market share. Despite the US's current dominance of the global cloud computing market, there is no certainty that the status quo will be maintained. Perhaps the key reason for US cloud providers to be fearful is that this isn't the first time we've been here.

Edward Snowden's revelations about PRISM and the NSA's mass surveillance techniques were hugely damaging to US cloud companies. They also encouraged many businesses to completely rethink their data strategies, rather than continuing to trust that US cloud providers would guarantee the levels of data security and privacy they need. The impact that President Trump could have needs to be understood in that context.

Continued here:
President Trump could cost US cloud computing providers more than $10 billion by 2020 - Bdaily

Microsoft No Longer a PC Company with Deals Like Halliburton, Says Credit Suisse – Barron’s

Shares of Microsoft (MSFT) are up 88 cents, or 1.2%, at $73.03, after the company this morning announced a deal with oil and gas giant Halliburton (HAL), in which Microsoft's Azure cloud computing service will host the latter's iEnergy service for exploration and production.

In response, Credit Suisse's Michael Nemeroff reiterates an Outperform rating on Microsoft shares, writing that the company is moving away from its "legacy tools and cyclical PC business."

Among the details of the collaboration, Microsoft said it will "allow the companies to apply voice and image recognition, video processing and AR/Virtual Reality to create a digital representation of a physical asset using Microsoft's HoloLens and Surface devices," after gathering data from sensors placed on infrastructure.

Additionally, the companies will utilize digital representation for oil wells and pumps at the IoT edge using the Landmark Field Appliance and Azure Stack, said Microsoft.

Under the headline "Not your father's Microsoft anymore," Nemeroff writes that deals like this one are helping to diversify Microsoft:

While the economics of this deal were not disclosed, we view these types of announcements, and this one in particular, as a prime example of how MSFT is purposely steering its long-term corporate strategy away from its legacy tools and cyclical PC business, and towards the next generation of software technology that will foster incremental productivity gains to create wider competitive advantages for its early adopter customers, which we expect to become technology standards over time. Unlike many of the other cloud platforms that primarily offer commodity-like cloud hosting, Azure's IoT edge, machine learning and augmented reality capabilities, bundled together, distinguish Microsoft by offering numerous high-level products and services within its Azure platform that truly enable digital transformations beyond simply exporting data workloads to the cloud, and is the reason why we remain quite bullish on MSFT's cloud strategy, which seems to be gaining momentum (Credit Suisse Survey Suggests Inflection Point for Azure).

See the rest here:
Microsoft No Longer a PC Company with Deals Like Halliburton, Says Credit Suisse - Barron's

State of Cloud – 2017 – Read IT Quik

Cloud has taken the world by storm. So much so that Gartner reports that by 2020, more than $1 trillion will be spent in shifting businesses from traditional systems to the cloud, effectively showcasing how far the global industry has progressed in cloud adoption.

The cloud system has exploded and now houses many different segments. There has been a massive shakedown in the industry, and as the dust settles, the main players have emerged. This article identifies these trailblazers in the different segments present in the cloud arena.

Cloud Governance

Cloud Governance manages cloud resource management, operations automation, cost management, and the security and compliance of cloud consumption. A Cloud Security Alliance survey found that 34.8% of enterprises with more than 5,000 employees have a cloud governance committee charged with creating and enforcing cloud policies. This shows the growing clamor for a capable cloud governance mechanism that provides control over the consumption, productivity improvements and security of multi-cloud setups.

Some of the leading players are CoreStack, Rightscale and Jamcracker.

Cloud Orchestration

Cloud orchestration tools help in sequencing the different automation tasks and ensure that they are carried out in a consolidated process workflow. The cloud orchestration market is expected to grow from USD 4.95 billion in 2016 to USD 14.17 billion by 2021, a CAGR of 23.4%, marking rapid growth. This growth can be attributed to the increasing need felt by small and medium enterprises to optimize resources, and to organizations' inclination towards self-service provisioning.

Leading players in this segment are Cloudify, Terraform and Ansible. It's worth noting that this segment has seen a flurry of acquisitions.

Cloud Management

Cloud Management tools help businesses plan, build and operate multiple clouds from a single pane, with their centralized management features. Many of the players in this segment have, however, since diverged into other areas such as Cloud Governance.

Some of the players are ParkMyCloud, BMC's Cloud Lifecycle Management, etc.

Cloud Brokerage

Cloud brokerage service providers are intermediaries who help businesses interact with cloud providers and provide customized integration and consulting services. However, this segment has lost traction as a pure-play service in the past few years. These days most Cloud Brokers also provide other services such as Cloud Management or Cloud Governance services.

Cloud Market Place

Cloud Marketplaces provide distributors and resellers with e-commerce platforms for selling cloud-based services to organizations. The large cloud service providers are expanding their footprint to support distributors and resellers for more market penetration. The SaaS cloud marketplace is another area that is picking up.

The top Cloud Marketplaces are Appdirect, ComputeNext, Mirakl.

Cloud Eco System

The Cloud Ecosystem has undergone continuous expansion in the past few years. There were many small but promising players who were acquired by the who's who of the tech world, such as Amazon, Microsoft and Google. Today, AWS offers 85+ ecosystem services, while Azure offers 90+.

Also, the open-source platform OpenStack has announced 50+ incubated projects to strengthen its cloud ecosystem.

Cloud Security

Security is a key concern for organizations given the mounting cyberattacks. Recently CloudFlare made news for the wrong reasons, when sensitive data pertaining to top businesses such as Uber, Fitbit, Medium, Yelp, Zendesk, etc. was leaked. At this stage, organizations are exploring effective cloud security options to safeguard from business risks. Security is fast becoming an in-built consideration in any new product release, not an afterthought. The current trends are:

Cloud Service Providers such as AWS, Azure and Google Compute Engine are coming up with their own on-demand security services. This may be a threat to SaaS-based security service providers, but their ability to provide security services across these multiple clouds augurs well for them.

Cloud Migration

Migration is one area in cloud that has many wrinkles to iron out. Migration can be in the form of physical-to-cloud, virtual-to-cloud or cloud-to-cloud, and can be carried out via a virtual image or at the operating-system level. These migrations are usually carried out by cloud SIs in partnership with migration tool vendors such as AppZero and Racemi (operating-system based) or Zerto and DoubleTake (virtual-image based).

These cloud service providers and platforms, too, are adding migration tools to their arsenal; cases in point are the AWS Migration Tools and Azure's App Service Migration Assistant. While migration is not as seamless as it ought to be, there is light at the end of the tunnel in the form of containers.

Cloud Service Provider

We all know the biggies in this field. If Cloud technology powers your business, then you most probably should thank one of these - Amazon AWS, Microsoft Azure, Google Compute Engine and IBM Softlayer.

At the very beginning, there were other big brands competing too, such as HP Public Cloud and Cisco Public Cloud. However, they later modified their strategies and closed their public cloud offerings. Rackspace, another well-known name, was pitted against AWS, but later pivoted to managed support services. It was then acquired by Apollo Global Management.

Recently, Synergy Research released the market share information of the major players, where AWS emerged as the winner in the race for Public Cloud Leadership.

Cloud Platform

The clear winners in the Public Cloud domain are AWS and Azure. AWS raked in $3.5 billion in sales in Q4 2016, a 47% increase from Q4 2015. Azure is gaining mileage, with a JP Morgan analyst estimating a cool $2.5 billion in revenue in Q4 2016. Microsoft has disclosed a 93% revenue rise from Q4 2015 to Q4 2016.

In the private cloud domain, OpenStack, the open-source platform, is ruling the roost. The many small players in the open-source ecosystem have been acquired by the big fish, Red Hat, IBM and Dell EMC, consolidating the segment. Microsoft is trying to make inroads in private cloud with Azure Stack, but its impact remains to be seen.

Meanwhile, VMWare is still the number one provider in the virtualization segment.

Cloud Service Delivery Platform

A Cloud Service Delivery Platform enables hosting service providers to offer unified cloud service delivery for their cloud business. The need of the hour is service delivery platforms that provide self-service provisioning and run the services business end to end.

The market leaders are Atomia, Cloud Portal Business Manager (CPBM) and Talligent.

However, it is to be noted that since large enterprises often build their own Cloud Service Delivery Platform, accelerated growth can't be expected in this segment.

PaaS Service Providers

PaaS provides computing platform services, which typically include hosted development kits, database tools, and application management capabilities. Companies use these services as scalable environments for new or expanding applications aimed at larger audiences. While Azure and Google started with PaaS and moved to IaaS, AWS started with IaaS and added PaaS as part of its ecosystem of services.

Leading PaaS Service Providers are Salesforce, Google App Engine, Redhat Openshift, IBM Bluemix, Engine Yard and Heroku, which was later acquired by Salesforce.

Upcoming PaaS services are AWS Elastic Beanstalk and Azure App Service.

PaaS Platform

Though touted as the future of the cloud, the cloud PaaS platform couldn't live up to market expectations due to evolution issues. PaaS platforms didn't provide fresh approaches for application migration, nor did they provide the control and flexibility required to manage complex environments. Also, the evolution of cloud orchestration and containers sidelined the focus and growth of PaaS platforms, as those technologies provided most of the features offered by PaaS platforms with fewer constraints.

The major players are Cloud Foundry and OpenShift. These two adopted Docker to provide Container Image Mobility features. Cloud Foundry also announced multiple siblings such as Stackato, Appfog and Iron-foundry, which were subsequently acquired.

Object Storage Service Providers

Object Storage Services are used by almost every organization adopting cloud, as they are used for managing configuration files, backups, logs, files, etc. The current trend in object storage is hybrid object storage, which bursts to the cloud from enterprise object storage based on certain policies.

AWS S3 is unanimously considered the market leader in cloud object storage. For accelerated hybrid cloud storage, AWS subscribers have to use the Amazon Storage Gateway service. A much more economical AWS option for cloud object storage is AWS Glacier.

Major players are Azure Blob Storage, SwiftStack, Rackspace, Internap, Google Cloud Storage, Tiscali and Scality.

Object Storage Platform

While it has been around for quite some time, the cloud object storage platform is slowly picking up speed as a viable competitor to block and file storage. These platforms help organizations enjoy content storage that's simple, flexible and, most importantly, scalable. They also enable easy retrieval of data from storage for usage.

OpenStack Swift is the leading opensource platform in this segment. It is important to note that some of the largest storage clouds such as Rackspace, SwiftStack and Internap have been built using OpenStack Swift.

Hybrid object storage is supported via APIs provided by object storage platforms to integrate with the object storage service providers like AWS S3 or Azure Blob. Upcoming players in this segment are Redhat, Ceph, Cloudian, Scality and EMC ECS.

Cloud Storage as A Service Provider

As per IDG's 2016 Enterprise Cloud Computing Survey, 21% of respondents predict that data storage/data management apps will be a high-priority area for their organization's cloud migration plans in 2017. With the deluge of data, courtesy of analytics and insight-driven strategies, cloud storage-as-a-service providers are betting big on organizations' demand for storage.

That said, the enterprise storage market is poised for major disruptions this year, as per TechTarget. While AWS and Azure offer storage services, they are nowhere near enough to meet the requirements of large enterprises.

Hence, a few players like Zadara have emerged to tackle this niche.

Block Storage Platform

The leading open-source block storage platforms are OpenStack Cinder and Ceph Block Storage. The OpenStack cloud platform supports both Cinder and Ceph for block storage. On the container side, Flocker is a leading open-source container data volume orchestrator for Dockerized applications.

Cloud File Hosting Service Providers

File hosting is mostly driven by mobile and IoT devices. It has also been an eventful year for enterprises, as their file servers have now been migrated to cloud hosting service providers. The challenge for the major players in this segment is how they tackle the myriad requirements of enterprises: performance, migration, security and availability.

File hosting service providers form a segment that has matured and consists of quite a number of contenders. We have the biggies (Dropbox, Google Drive, Microsoft OneDrive, Amazon Cloud Drive, Apple's iCloud) on one side, and other players such as SugarSync and MediaFire on the other.

Enterprise File Sharing Platform

As per a report, the Enterprise File Sharing and Synchronization (EFSS) market is expected to grow by 25.7% by 2021. The EFSS trend, demand for cloud-based integration, and Bring-Your-Own-Device (BYOD) policies have led to widespread adoption of EFSS solutions. Companies are leveraging EFSS solutions to effectively share and manage the voluminous data processed every day, while ensuring data security.

Leading players are ShareFile, Box, ownCloud, Egnyte, SparkleShare, Seafile, Pydio, Syncplicity, and WatchDox. ownCloud, Seafile and Pydio are open-source platforms in this segment.

Conclusion

As per Bain & Company's research brief titled The Changing Faces of the Cloud, the global cloud IT market revenue is predicted to increase from $180B in 2015 to $390B in 2020, attaining a Compound Annual Growth Rate (CAGR) of 17%. In the same period, SaaS-based apps are predicted to grow at an 18% CAGR, and IaaS/PaaS is predicted to increase at a 27% CAGR.

These numbers are indicative of the market sentiment towards cloud and give an idea of the rate at which the technology has been and will be adopted. Cloud has transformed businesses and helped establish new business models that were previously unthinkable. And in that, cloud has greatly advanced the state of business, internationally.

And the future seems promising. At the 2016 Google Next Conference, Eric Schmidt made a great many predictions with respect to computing. He believes that NoOps will become mainstream and that serverless architecture will be the next wave of computing.

Another area that is deemed to revolutionize clouds future is containerization and container orchestration, with popular open source projects such as Docker, Kubernetes and Mesos providing benefits of efficiency for memory, CPU and storage.

Read the rest here:
State of Cloud - 2017 - Read IT Quik

Did Snap Make a Mistake With Its $3 Billion in Cloud Contracts? – Madison.com

When Snap (NYSE: SNAP) filed its IPO registration with the SEC, it revealed that it recently signed a $2 billion deal with Google, the Alphabet (NASDAQ: GOOG) (NASDAQ: GOOGL) subsidiary, to use its cloud platform to support Snapchat. Snap then went out and signed a $1 billion deal with Amazon (NASDAQ: AMZN) to use Amazon Web Services for additional cloud hosting.

These contracts have certain stipulations that require Snap to spend a certain amount with each company every year through 2021. And while they've been very beneficial to Snap's hosting costs per user, Snapchat's sluggish user growth puts a big question mark over whether Snap will meet its spending commitments with Google and Amazon.

Snap's commitments with Google and Amazon are a bit different. The Google deal is for a relatively flat amount every year while the Amazon deal ramps up each year.

Year    Google           Amazon
2017    $400 million*    $50 million
2018    $400 million*    $125 million
2019    $400 million*    $200 million
2020    $400 million*    $275 million
2021    $400 million     $350 million

*Snap can defer up to 15% of the amount to a subsequent year

The deals make it so Snap is committed to spending a bare minimum of $390 million on hosting fees this year: 85% of the $400 million Google commitment (after the maximum 15% deferral) plus the $50 million Amazon commitment. Snap's hosting fees for the first and second quarters were $99 million and $106 million, respectively. Those amounts will continue to climb as Snapchat adds more users and features, so it seems like Snap will hit the bare minimum this year.

But Snap discloses its commitments in its 10-Q every quarter, and it says it still has $243 million left in contractual hosting commitments for 2017. That's down from $350 million in the first quarter. Snap's commitment disclosure implies that it doesn't plan on using any of the slack in its Google contract to defer its hosting costs at this point. But it also implies a significant ramp in cloud spending over the second half of the year, and it's unclear if Snap's data usage will climb that much. If it doesn't, it will have to push back some of its commitments into the next year, raising the bar for user growth and increased engagement on Snapchat.

If Snap fails to meet its minimum obligation with Google, it forfeits any amount below that 15% buffer. Amazon's deal is a bit more lenient in that it allows Snap to prepay for AWS hosting to meet the minimum commitment.

Snap doesn't necessarily need all of its hosting costs to come from users. It could come from its customers. Snap launched its self-serve ad platform earlier this year and it's rolled out several new tools to make it easier for small businesses to create, target, and measure ad performance on Snapchat.

Snap is aggressively courting businesses to its automated ad buying platforms and that could help make up some of the gap in cloud hosting costs in the second half of the year. All of those video ads and measurement data need to be stored somewhere.

That said, it's not clear how great of a return Snap is getting on its self-serve ads. Management noted the auction format of its automated ad buying platform resulted in lower average ad prices for Snapchat in the second quarter. It expects ad prices to rebound in the long run, however, as more businesses join the auction for ads. But it could incur a lot of expenses, including hosting fees in the meantime, as it onboards a lot of new customers.

It'll be interesting to see if Snap actually uses the full $400 million commitment to Google in 2017. Investors should keep an eye on any disclosure in the third or fourth quarter about deferring some of its commitment to 2018. It does have some leeway, but unless it starts attracting a lot of new users and a lot of new advertisers, it might have to admit it's bitten off a bit more than it can chew.

Suzanne Frey, an executive at Alphabet, is a member of The Motley Fool's board of directors. Adam Levy owns shares of Amazon. The Motley Fool owns shares of and recommends Alphabet (A shares), Alphabet (C shares), and Amazon. The Motley Fool has a disclosure policy.

Read more:
Did Snap Make a Mistake With Its $3 Billion in Cloud Contracts? - Madison.com

Right Networks continues focus on tech improvement with new CIO – Accounting Today

Right Networks, which provides cloud hosting services for CPA firms, has named Jim Walsh its new chief information officer (CIO). He will be guiding the technology strategy for the company's cloud infrastructure, and will lead the scaling of the corporate IT and security functions.

Prior to joining Right Networks, Walsh served as vice president and CIO at Virtustream for the past year. Before Virtustream, Walsh spent seven years at Constant Contact helping develop and scale the software-as-a-service company's enterprise systems and technology. Before Constant Contact, Walsh held senior technology management executive positions at Peoplefluent, Saba Software, and Concentra.

This year, Right Networks released a user management portal app, allowing accountants to manage staff permissions and client onboarding themselves should they want to. The company also reached the 100,000 user mark in June.

"We are pleased to welcome Jim Walsh to our executive team as Right Networks takes the next step to expand and grow our reach in the marketplace," said John Farrer, CEO of Right Networks, in a statement. "Jim's extensive experience in helping develop teams, workflow and technology operations for mid-stage and later-stage SaaS companies makes him an excellent addition to the team."

Read this article:
Right Networks continues focus on tech improvement with new CIO - Accounting Today