Category Archives: Cloud Hosting

Global Cloud Hosting Service Market 2019 Industry Research, Segmentation, Key Players Analysis and Forecast to 2024 – News Midget

MRInsights.biz has added a new statistical analysis to its repository, titled Global Cloud Hosting Service Market Growth (Status and Outlook) 2019-2024. The report comprises in-depth case studies on the various countries involved in Cloud Hosting Service production, and offers information related to import and export along with the current business chain in the market at the global level. It identifies the opportunities and restraints in the market, and analyzes the technical barriers, other issues, and cost-effectiveness affecting it. The research study also provides a detailed segmentation of the global market by technology, product type, application, and process, along with market trends for each application.

The next section covers factors that are affecting the growth of the market in a positive way, together with investment opportunities, recommendations, and trends currently shaping the market. Top key market players and their complete profiles are also highlighted in the report. Moreover, the key regions expected to achieve the fastest growth over the forecast period are identified. The world's main regional market conditions are discussed along with product price, profit, capacity, production, capacity utilization, supply, demand, industry growth rate, and so on.

DOWNLOAD FREE SAMPLE REPORT: https://www.mrinsights.biz/report-detail/195489/request-sample

The main companies in this survey are: HostGator, Liquid Web Hosting, SiteGround, A2 Hosting, DreamHost, InMotion, Bytemark Cloud, 1&1 IONOS, Hostwinds, Cloudways, AccuWeb, BlueHost, FatCow, and Vultr

Geographically, this report studies the top producers and consumers, focusing on product capacity, production, value, consumption, market share, and growth opportunity in these key regions: the Americas (United States, Canada, Mexico, Brazil), APAC (China, Japan, Korea, Southeast Asia, India, Australia), Europe (Germany, France, UK, Italy, Russia, Spain), and the Middle East & Africa (Egypt, South Africa, Israel, Turkey, GCC countries).

On the basis of product, this report presents the production, revenue, price, market share, and growth rate of each type, primarily split into Linux Servers Cloud and Windows Servers Cloud.

On the basis of end users/applications, this report focuses on the status and outlook for major applications/end users, covering consumption (sales), market share, and growth rate for each application, including Commercial Operation, Government Department, and Others.

Market Status:

The report draws on data integration and analysis capabilities, together with the relevant findings, to anticipate the strong future growth of the Cloud Hosting Service market across all of its geographical and product segments. The expansion of every segment is evaluated over the forecast period from 2019 to 2024. Several significant variables that are predicted to shape the industry and determine the future direction of the market were employed in creating the report.

ACCESS FULL REPORT: https://www.mrinsights.biz/report/global-cloud-hosting-service-market-growth-status-and-195489.html

Moreover, the report provides a thorough estimation of the market through a detailed qualitative overview, historical data, and verified estimates of Cloud Hosting Service market size. It also examines the competitive landscape of the industry to gauge competition at both the domestic and global levels.

Customization of the Report: This report can be customized to meet the client's requirements. Please connect with our sales team (sales@mrinsights.biz), who will ensure that you get a report that suits your needs. You can also get in touch with our executives on +1-201-465-4211 to share your research requirements.


Global Managed Hybrid Cloud Hosting Market 2019 by Manufacturers, Countries, Type and Application, Forecast to 2025 – World Industry Reports

The Global Managed Hybrid Cloud Hosting Market report includes an elaborate summary of the market that provides in-depth knowledge of its various segments. It presents a detailed analysis based on thorough research of the overall market, particularly on questions that border on market size, growth scenario, potential opportunities, operating landscape, trend analysis, and competitive analysis. The information includes company profiles, annual turnover, the types of products and services provided, and income generation, giving businesses direction on the important steps to take. The report delivers pinpoint analysis of varying competition dynamics and keeps readers ahead of competitors such as Amazon Web Services (AWS), Microsoft, Tata Communications, Rackspace, Datapipe, Sify, NTT Communications, NxtGen, BT, CtrlS Datacenters, CenturyLink, Dimension Data (NTT Communications), Fujitsu, Singtel, and Telstra.

View Sample Report @ www.marketresearchstore.com/report/global-managed-hybrid-cloud-hosting-market-2019-by-496401#RequestSample

The main objective of the report is to guide the user in understanding the Managed Hybrid Cloud Hosting market in terms of its definition, classification, market potential, latest trends, and the challenges the market is facing. In-depth research and studies were carried out in preparing the report, and readers will find it very beneficial for understanding the market in detail. The aspects and information are presented using figures, bar graphs, pie diagrams, and other visual representations, which strengthens the pictorial presentation and helps in grasping the industry facts much better.

This research report covers the world's crucial regional market share, size (volume), and trends, including product profit, price, value, production, capacity, capacity utilization, supply, demand, and industry growth rate.

Geographically this report covers all the major manufacturers from India, China, the USA, the UK, and Japan. The present, past and forecast overview of the Managed Hybrid Cloud Hosting market is represented in this report.

The study is segmented by the following product types: Cloud-based and On-premises.

The major applications/end-user industries are as follows: Manufacturing, Retail, Financial, Government, and Others.

Managed Hybrid Cloud Hosting Market Report Highlights:

1) The report provides a detailed analysis of current and future market trends to identify investment opportunities
2) In-depth company profiles of key players and upcoming prominent players
3) Global Managed Hybrid Cloud Hosting market trends (drivers, constraints, opportunities, threats, challenges, investment opportunities, and recommendations)
4) Strategic recommendations in key business segments based on the market estimations
5) The research methodologies adopted by the leading organizations driving the Managed Hybrid Cloud Hosting market

Research Parameters / Research Methodology

Primary Research:

The primary sources involve industry experts from the global Managed Hybrid Cloud Hosting industry, including management organizations, processing organizations, and analytics service providers across the industry's value chain. All primary sources were interviewed to gather and authenticate qualitative and quantitative information and to determine future prospects.

In the extensive primary research process undertaken for this study, industry experts such as CEOs, vice presidents, marketing directors, technology and innovation directors, founders, and related key executives from various key companies and organizations in the global Managed Hybrid Cloud Hosting industry were interviewed to obtain and verify both qualitative and quantitative aspects of this research study.

Secondary Research:

Secondary research provided crucial information about the industry value chain, the total pool of key players, and application areas. It also assisted in market segmentation according to industry trends down to the bottom-most level, geographical markets, and key developments from both market- and technology-oriented perspectives.

Inquiry for Buying Report: http://www.marketresearchstore.com/report/global-managed-hybrid-cloud-hosting-market-2019-by-496401#InquiryForBuying

Thanks for reading this article; you can also get individual chapter-wise sections or region-wise report versions, such as North America, Europe, or Asia. Also, if you have any special requirements, please let us know and we will offer you the report as you want it.


‘Big 3’ Public Cloud Providers: 4 Reasons Not to Use Them – ITPro Today

When most folks think cloud, three names come straight to mind: AWS, Azure and Google Cloud. (People may even be thinking AWS more so than usual, with AWS re:Invent in full swing.) These public clouds, known collectively as the Big Three, have dominated the public cloud computing market for at least the past five years. But just because there are three major public cloud providers does not mean you have to use one of them.

Indeed, AWS, Azure and Google Cloud are hardly the only public cloud providers out there. There are a variety of other contenders, ranging from the general-purpose clouds associated with major enterprises, like Oracle's and IBM's, to public clouds from smaller vendors that specialize in only certain types of cloud services, like Wasabi and Backblaze.

This raises the question: When might you decide not to use one of the Big Three public cloud providers and instead opt for a lesser-known option?

To answer that question, let's start by considering why you would choose one of the Big Three. The reasons are obvious enough, but they are worth spelling out:

Each of these factors helps to make AWS, Azure or Google Cloud a compelling choice for many workloads.

But just because the Big Three are the most popular public cloud providers, it doesn't make them the best choice for every workload and deployment. Following are reasons why you might want to consider an alternative public cloud.

Perhaps the most obvious is cost. Depending on what you are deploying on the cloud, a Big Three vendor may or may not offer the most cost-efficient solution.

This tends to be particularly true in situations where you only need to run a certain type of workload on a cloud. In that case, you might find a better price by choosing a vendor that specializes in that service, rather than turning to one of the general-purpose public cloud providers.

For example, if all you need is cloud storage, a vendor that specializes in storage, like Backblaze or Wasabi, may provide better pricing than the storage services available from AWS, Azure and Google Cloud.
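To make that kind of comparison concrete, here is a minimal bottom-up cost sketch. The provider names and prices are hypothetical placeholders, not actual rate cards; plug in current figures from each vendor's pricing page before drawing any conclusions.

```python
# Rough monthly object-storage cost comparison.
# All prices below are hypothetical placeholders, not real rate cards.

def monthly_cost(tb_stored, tb_egress, storage_per_tb, egress_per_tb):
    """Estimated monthly bill: storage charges plus egress charges."""
    return tb_stored * storage_per_tb + tb_egress * egress_per_tb

# (storage $/TB-month, egress $/TB) -- placeholder numbers only.
providers = {
    "general-purpose cloud": (23.0, 90.0),
    "storage specialist A": (6.0, 0.0),
    "storage specialist B": (5.0, 10.0),
}

for name, (storage_price, egress_price) in providers.items():
    cost = monthly_cost(tb_stored=50, tb_egress=10,
                        storage_per_tb=storage_price,
                        egress_per_tb=egress_price)
    print(f"{name}: ${cost:,.2f}/month")
```

Even at this level of simplification, the exercise shows how quickly egress pricing can dominate the bill for some workloads, which is exactly the kind of line item a specialist vendor competes on.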

Likewise, you may find that the Big Three vendors offer less choice or customization for a given type of workload than does another, smaller vendor.

Here again, this is often particularly true in situations where you have a certain type of workload to deploy. For instance, each of the Big Three clouds lets you run Kubernetes-based workloads. However, a variety of other vendors specialize specifically in cloud-based Kubernetes (or container-based apps in general), like OpenShift Online or Platform9.

Although most public cloud providers have data centers spread around the world, these centers are not always spread evenly. In some situations, you may opt not to use one of the Big Three clouds because it lacks data centers (or enough data centers) in a given geographic area that you need to serve.

For example, if most of your users are in Asia, you might prefer Alibaba Cloud over one of the Big Three. Alibaba has more than two dozen Asia-based cloud regions, whereas most other major public clouds have only a few, if any. On the other hand, Alibaba's presence in Europe and North America is more limited.

Choosing a cloud provider that offers many hosting options in a particular region can help improve performance in that region (because it means data centers are closer to your users). Presence in a particular region may also simplify compliance requirements, in the event that regulations require workloads to be hosted in a certain country.

Each of the Big Three clouds offers dozens of services. In general, having this array of options is a good thing.

But for organizations where IT governance is lacking or oversight is lax, too many choices can become a negative. They can lead to what I call cloud sprawl, or the temptation to launch new cloud services just because you can.

You can avoid this temptation by choosing a cloud provider that simply doesn't offer so many services. For example, if your basic cloud computing needs amount to IaaS, you might decide to make it an organizational policy to use Rackspace instead of AWS, Azure or Google. Rackspace offers a fairly extensive list of IaaS-related cloud services, but it doesn't offer a lot of other options that could result in cloud sprawl.

It's worth noting that we are living in the age of multicloud. Many companies are no longer choosing just one cloud or another. However, in many cases, multicloud strategies are oriented around combining two or more of the Big Three clouds, rather than mixing a Big Three cloud with a lesser-known alternative.

As long as you are comfortable with the complexities that come with multicloud, then, by all means, adopt a multicloud architecture. But as you build your multicloud strategy, keep in mind that multicloud doesn't have to involve just AWS and Azure, or just Azure and Google Cloud. You can mix and match other public clouds into your multicloud architecture, as well. In fact, you don't need to include any of the Big Three clouds in a multicloud strategy at all; you could build a multicloud architecture out of alternative clouds alone.

There are some good reasons to build a cloud computing strategy based on AWS, Azure and/or Google Cloud. But there are other good reasons for looking beyond the Big Three and considering lesser-known or more specialized public cloud computing vendors.


Logz.io Unveils First-Ever Open Source-Based Cloud Observability Platform Powered by ELK and Grafana – GlobeNewswire

BOSTON and TEL AVIV, Israel, Dec. 03, 2019 (GLOBE NEWSWIRE) -- Logz.io, a leading solution for open source based log management and cloud security, today announced the launch of the first-ever Cloud Observability Platform, powered by the open source ELK and Grafana. The platform enables engineers to reduce time to resolution, increase their productivity, and integrate security into DevOps workflows. It is delivered as a fully managed, developer-centric cloud service providing a single pane of glass for monitoring, troubleshooting and securing distributed cloud workloads and Kubernetes.

As engineering teams build and ship code faster, they employ technologies such as Kubernetes and serverless, resulting in application stacks that are distributed, abstracted, and difficult to monitor. As a result, achieving observability in modern IT environments has become cumbersome and time-consuming. To solve this issue, engineers prefer to use open source tools, such as ELK and Grafana, because they are accessible, easy to set up, community-driven, and purpose-built to solve developers' problems. In addition, they are cloud-native and easy to integrate with modern infrastructure such as Kubernetes and other open source projects.

However, open source tools can be difficult to maintain and scale, costing engineers both time and effort. Logz.io's Cloud Observability Platform enables engineers to use the best open source tools on the market without the complexity of managing and scaling them.

Powered by both Kibana and Grafana, the Observability Platform makes it easy for engineers to correlate between metrics and logs, providing complete visibility into Kubernetes and distributed cloud workloads. In addition, Logz.ios Cloud Observability Platform features out-of-the-box proactive alerting and advanced machine learning capabilities so engineers can identify and resolve issues and threats faster.

The Cloud Observability Platform is the culmination of three unique product offerings, which together provide visibility into all layers of a given environment: Log Management built on ELK, Infrastructure Monitoring based on Grafana, and an ELK-based Cloud SIEM.

"As today's builders and creators, developers rely on open source for its flexibility, creativity and innovation, but scaling, managing and hosting open source monitoring and logging tools can be resource- and time-intensive," said Tomer Levy, CEO of Logz.io. "We firmly believe developers are most productive when they are free to use community-driven, open-source tools, but we recognize the challenges that come along with scaling these solutions to fit businesses. We built the Logz.io Cloud Observability Platform because we want every software engineer in every company to have access to tools like ELK and Grafana without being bogged down by maintenance or scale."

The Logz.io Observability Platform will premiere at AWS re:Invent booth #2213, where the company's product experts will showcase the platform and provide demos to event attendees. For more information on Logz.io's Observability Platform, contact lauren@logz.io.

About Logz.io
Logz.io is a cloud observability platform that enables engineers to use the best open source tools in the market without the complexity of managing and scaling them. Logz.io offers three products: Log Management built on ELK, Infrastructure Monitoring based on Grafana, and an ELK-based Cloud SIEM. These are offered as fully managed, developer-centric cloud services designed to help engineers monitor, troubleshoot and secure their distributed cloud workloads more effectively. Engineering-driven companies like Turner Broadcasting, Siemens, and Unity use Logz.io to simplify monitoring and security workflows, increasing developer productivity, reducing time to resolve issues, and increasing the performance and security of their mission-critical applications.


Join Us For The IBM i On The Public Cloud Webinar – IT Jungle

December 4, 2019 | Timothy Prickett Morgan

After so many years of waiting, it looks like IBM i shops are going to have a wide variety of options when it comes to acquiring true cloud computing to either replace or augment their on-premises systems.

IBM, Google, Microsoft, and Skytap are all offering slices of Power9 machines, which complement the cloudy and hosted infrastructure that has been available for a number of years from Connectria, iInTheCloud, UCG Technologies, LightEdge Solutions, Data Storage Corp, Source Data Products, Secure Information and Services, and First Option IT, whose offerings fall on the spectrum from traditional hosting to cloud. There is clearly a lot going on here, after a decade and a half of waiting for what I used to call utility computing before Amazon Web Services uncloaked from stealth back in March 2006 and everyone started using its cloud metaphor.

On December 5 at 1 p.m. Eastern, we will be participating in a webinar hosted by John Blair, founder and president of Blair Technology Solutions, to talk about all things cloud as they relate to the IBM i platform. We did a profile of Blair Technology back in early November, and the company is offering service layers on top of the public cloud offerings from IBM, Google, and Microsoft, which is partnering with Skytap because of the IBM i expertise that it has developed over several years.

The webinar will go over the current state of the cloud for IBM i as well as the various scenarios where cloud capacity makes sense initially for customers (disaster recovery and high availability are the obvious starting points for IBM i shops) and how this expands out to either running test/development in the cloud or moving applications wholesale to a public cloud and getting rid of on-premises iron entirely. This is not a cheap option from an operational perspective, but it does add flexibility, and that is worth something. We will also talk about the various assessment services, migration services, and managed services that are layered on top of these public cloud offerings. There is still plenty of stuff that the big public clouds don't do for IBM i shops, and a service provider like Blair Technology can fill in those gaps. And rapid templating, something that Skytap has been doing, is also a key feature. Everything we said about IBM i also applies to AIX, of course.

The IBM Power on Public Cloud webinar will last for 45 minutes, including plenty of time for questions and answers from the audience. You can sign up for the live webinar at this link, and we hope that you will do so. We look forward to sharing our thoughts about IBM i on the cloud and hearing yours.


Amazon reveals new server chip to take on Intel – MyBroadband

Amazon.com Inc.'s cloud unit keeps trying to eat away at Intel Corp.'s stranglehold on the server chip market.

Amazon Web Services has developed a more powerful version of its own chips to power services for cloud-computing customers, as well as some of AWS's own programs. AWS Chief Executive Andy Jassy on Tuesday introduced a second-generation chip, called Graviton2, aimed at general-purpose computing tasks. He didn't specify a release date.

The company last year unveiled its first line of Graviton chips, which it said would support new versions of its main EC2 cloud-computing service. Prior to that, Amazon and other big cloud operators had almost exclusively used Intel Xeon chips.

The company said at the time that the Graviton-backed cloud service would be available at a significantly lower cost than existing offerings run on Intel processors.

Intel's chips account for more than 90% of the server chip market and handle most tasks at the biggest cloud providers, including Amazon, Microsoft Corp. and Alphabet Inc.'s Google. But these companies are also announcing plans to use chips from Intel's main rival, Advanced Micro Devices Inc.

AMD has forecast it will top 10% in server processor market share by mid-2020, a target that analysts at Instinet LLC said in a note is achievable.

Jassy said on Tuesday that Intel is "a very close partner," but that "to push the envelope on prices, we had to do some innovating ourselves."

Amazon is using its 2015 acquisition of startup Annapurna Labs, which Jassy called "a big turning point for us," to design its own chips. The new processor uses technology from SoftBank Group Corp. unit ARM Holdings, a standard that dominates in mobile phones.


SysGroup (LON:SYS) Hits New 1-Year Low at $33.50 – TechNewsObserver

SysGroup PLC (LON:SYS) hit a new 52-week low during mid-day trading on Monday. The stock traded as low as GBX 33.50 ($0.44) and last traded at GBX 35.49 ($0.46), with a volume of 152 shares changing hands. The stock had previously closed at GBX 36 ($0.47).

Separately, Shore Capital restated a house stock rating on shares of SysGroup in a research report on Monday, November 25th.

The company has a 50 day simple moving average of GBX 37.12 and a two-hundred day simple moving average of GBX 39.52. The company has a quick ratio of 0.59, a current ratio of 0.77 and a debt-to-equity ratio of 12.05. The company has a market capitalization of $17.30 million and a P/E ratio of -15.91.
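For readers unfamiliar with those metrics, here is a small illustrative sketch of how they are computed. The balance-sheet inputs below are made-up numbers for demonstration, not SysGroup's actual accounts.

```python
# Illustrative formulas for the metrics quoted above.
# All input figures are made up; they are not SysGroup's accounts.

def simple_moving_average(closes):
    """Mean of the closing prices passed in (e.g. the last 50 days)."""
    return sum(closes) / len(closes)

def quick_ratio(current_assets, inventory, current_liabilities):
    """Short-term liquidity, excluding inventory."""
    return (current_assets - inventory) / current_liabilities

def current_ratio(current_assets, current_liabilities):
    return current_assets / current_liabilities

def debt_to_equity(total_debt, shareholder_equity):
    return total_debt / shareholder_equity

print(simple_moving_average([36.0, 35.5, 34.8, 33.9, 35.49]))  # 5-day demo
print(quick_ratio(current_assets=5.0, inventory=1.2, current_liabilities=4.9))
print(debt_to_equity(total_debt=13.0, shareholder_equity=1.08))
```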

About SysGroup (LON:SYS)

SysGroup plc, together with its subsidiaries, provides cloud hosting and managed IT services in the United Kingdom and internationally. It operates through two segments: Managed Services and Value Added Resale (VAR) of Products/Services. The Managed Services segment offers various forms of managed services to customers.


How to match your IT workloads to the right cloud – TechBeacon

When it comes to managing a multi-cloud world, matching your workloads to the best cloud hosting platforms is one of the biggest challenges. Rational decision making often gives way to an emotional exercise, where beliefs, biases, and other human behaviors set the stage for a less-than-optimal hosting strategy.

If you use the model described below, as our team did, you'll increase your chances of establishing a fact-based, data-driven hosting strategy that's easier to define and execute, while avoiding any perceptions of bias in your recommendation.


As Cloud CTO at Micro Focus, I was asked to help build a model that we could apply with as little prejudice as possible. So our team established a set of core principles that enabled us to build a balanced model that we can consistently use to evaluate the placement of specific workloads as well as for our overall hosting strategy.

It could work for you, too. The core principles are:

While your hosting decision model should support placement decisions for multiple workload types, there is almost no end to the number of workload types you could define. That's why you need to introduce a usability challenge into the model.

In this, less is more. We narrowed our list to three core workload types: development and rapid prototyping, traditional production, and cloud-native production.

Development and rapid prototyping workloads include everything development and testing teams might require from a hosting provider to develop and test their code.

Traditional production workloads are those that rely on the base infrastructure-as-a-service (IaaS) set of resources and have no cloud software-as-a-service (SaaS) requirements. You can deploy them in almost any public or private cloud environment.

Cloud-native production includes cutting-edge, cloud-reliant workloads that make moderate to heavy use of cloud concepts and/or rely on cloud platform-as-a-service (PaaS) offerings.


While building our model, we analyzed many KPIs from our body of research, picked a set of KPIs that formed the model's core, and then categorized those into five dimensions.

Building a model with dimensions helps us to have logical KPI groupings and to establish a scoring system per dimension. In this way, we can easily evaluate a hosting environment based on how well it scores in each dimension, instead of comparing each and every KPI across hosting options.

Figure 1: The five dimensions of a hosting assessment model. (GTM is "go-to-market" strategy.) Source: Micro Focus

Here's a high-level view of the model's five dimensions, along with a few KPIs for each.

This dimension establishes the hosting environment's security and compliance posture, allowing you to weigh how secure and compliant each one is.

These KPIs evaluate employee background checks, physical access, access logs, and how cryptographic keys are being managed. The KPIs should also evaluate support for ISO 27001 or GDPR to assess compliance posture.

Comparing hosting providers on cost can be a futile exercise if you're focused on migrating a large data center, which is too big and has many variables, or if you're comparing compute or storage units, which is too granular.

The hosting decision model introduces the concept of application comparison. For every workload type you pick, you need a poster-child application that you can model in each environment you're evaluating. Calculate the infrastructure cost for hosting that application from the bottom up, and then compare between providers.

Account for your labor costs for each application, since they can differ per environment. For example, a private cloud has infrastructure support requirements that don't exist with public cloud. A best practice is to use labor per compute unit (virtual server), then multiply by the number of servers within the application model.

Finally, if you wish to gain insight into how each environment might affect your organization's earnings, your cost model should include an earnings before interest, taxes, depreciation, and amortization (EBITDA) impact, expressed as a percentage.
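As a rough sketch of that per-application cost model, the snippet below builds the monthly cost bottom-up from infrastructure and labor per virtual server and expresses the annual cost delta as a percentage of EBITDA. The class, field names, and every figure are illustrative assumptions, not values from the Micro Focus model.

```python
from dataclasses import dataclass

@dataclass
class AppCostModel:
    """Bottom-up monthly cost of one poster-child application."""
    servers: int              # virtual servers in the application model
    infra_per_server: float   # monthly infrastructure cost per server
    labor_per_server: float   # monthly labor cost per compute unit

    def monthly_cost(self) -> float:
        return self.servers * (self.infra_per_server + self.labor_per_server)

def ebitda_impact_pct(current_annual_cost, proposed_annual_cost, ebitda):
    """Annual cost delta expressed as a percentage of EBITDA."""
    return (current_annual_cost - proposed_annual_cost) / ebitda * 100

# Placeholder figures -- not from any real assessment.
private = AppCostModel(servers=20, infra_per_server=400.0, labor_per_server=150.0)
public = AppCostModel(servers=20, infra_per_server=450.0, labor_per_server=40.0)

delta = ebitda_impact_pct(private.monthly_cost() * 12,
                          public.monthly_cost() * 12,
                          ebitda=2_000_000)
print(f"Moving this application would free roughly {delta:.2f}% of EBITDA per year")
```

Note how the labor term follows the best practice above: a per-server labor rate multiplied by the server count in the application model, so environments with different support burdens can be compared on equal footing.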

Here you evaluate the potential support each environment provider offers. This may highlight whether the environment provider will be able to deliver the level of support you need to properly rely on the provider for hosting services, as per your expected service-level agreement (SLA).

Some of your KPIs should measure the provided support level and the number of dedicated technical resources the hosting service provider will assign to you.

Since some workloads will be hosted with an environment provider to drive business, you should establish what potential business leverage a provider could deliver. This could be a critical insight that guides your hosting decision.

KPIs might include an established joint go-to-market strategy, the amount of market development funds the hosting entity will provide, and how many joint and aligned global system integrators or regional system integrators are available.

Assessing environment resilience and performance is a key factor in meeting internal and customer SLAs, so properly evaluating these criteria is critically important.

To obtain such metrics you might need to rely on your previous experience to calculate an average number of incidents, mean time to repair, or the performance and availability of sample applications. However, you could also obtain publicly accessible information about hosting providers to calculate the KPIs.

Some KPIs, such as whether the hosting entity supports demand elasticity, zero-downtime upgrades, and multi-zone availability, may be readily available from the provider's marketing literature.

Now that you have identified your model's dimensions and supported workload types, you can determine which workload types best align with your various dimensions.

For example, development and rapid prototyping might lean more toward hosting environments that optimize for cost, while traditional production might be better suited to environments that optimize for quality of service and security.

You can introduce this bias into your model with a weighting scheme where positively biased dimensions receive a higher weighted score than do other dimensions for a given workload type. See the images below for specific examples.

Once you have defined your model, it's time to populate its dimensions and KPIs with data for the cloud hosting platforms of choice. For this exercise, you need to gather data from your experience in hosting workloads, industry benchmarks, and any self-assessments made public by the hosting environment providers.

For balanced KPI results, you need between four and six months of data to counter any seasonality and other biases within the datasets. Remove outliers by using the median instead of the average.

Once you have calculated the KPIs, assign a score between 0 and 10 to each dimension. Since each KPI is likely to have a different impact on the overall dimension score, apply your weighting logic as you calculate the dimension score.
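A minimal sketch of that scoring step, under assumptions of my own: each KPI is aggregated with the median to blunt outliers, normalized onto a 0-10 scale against stated best and worst values, then combined into a dimension score using per-KPI weights. The KPI names, bounds, and weights below are illustrative, not the ones in our production model.

```python
from statistics import median

def kpi_score(samples, best, worst):
    """Median-based KPI score normalized to 0-10 (10 = best).
    Pass best < worst for lower-is-better KPIs such as repair time."""
    frac = (median(samples) - worst) / (best - worst)
    return 10 * max(0.0, min(1.0, frac))

def dimension_score(weighted_kpis):
    """Weighted average of (weight, score) pairs, still on the 0-10 scale."""
    total = sum(w for w, _ in weighted_kpis)
    return sum(w * s for w, s in weighted_kpis) / total

# Illustrative "quality of service" dimension for one provider:
uptime = kpi_score([99.95, 99.97, 99.90], best=100.0, worst=99.0)
mttr = kpi_score([4, 2, 6, 3], best=0.0, worst=24.0)  # hours; lower is better
print(f"QoS dimension score: {dimension_score([(0.6, uptime), (0.4, mttr)]):.2f}")
```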

The outcome of this phase is your cloud assessment model for each cloud-hosting option. Each should have a score for every dimension, as well as detailed KPI scores within those dimensions.

This gives you a standard lens through which to differentiate your cloud hosting options.

Using the weighting scheme you created for each workload, evaluate each cloud hosting provider for each workload type. Do this by combining the cloud hosting dimension score with the workload weight for each dimension, normalized between zero and 10.

You've now created an overall score for each combination of workload type and cloud-hosting platform. The higher the score for a specific workload type, the more aligned that cloud hosting platform is for that workload.
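Here is one way that final step could look in code. The platform names, dimension scores, and workload weights are invented for illustration; only the mechanics (a weighted sum of dimension scores, normalized back onto the 0-10 scale) follow the model described above.

```python
# Dimension scores per cloud platform (0-10), e.g. from the scoring step above.
platform_scores = {
    "cloud A": {"security": 9, "cost": 5, "support": 8, "gtm": 7, "qos": 9},
    "cloud B": {"security": 7, "cost": 9, "support": 6, "gtm": 5, "qos": 7},
}

# Per-workload dimension weights (illustrative assumptions only).
workload_weights = {
    "dev/prototyping":        {"security": 1, "cost": 3, "support": 1, "gtm": 1, "qos": 1},
    "traditional production": {"security": 3, "cost": 1, "support": 2, "gtm": 1, "qos": 3},
}

def alignment(scores, weights):
    """Weighted dimension scores, normalized back onto the 0-10 scale."""
    return sum(scores[d] * w for d, w in weights.items()) / sum(weights.values())

for workload, weights in workload_weights.items():
    for platform, scores in platform_scores.items():
        print(f"{workload:24s} {platform}: {alignment(scores, weights):.2f}")
    best = max(platform_scores, key=lambda p: alignment(platform_scores[p], weights))
    print(f"  -> best fit for {workload}: {best}")
```

With these made-up inputs, the cost-weighted development workload lands on the cheaper platform while production lands on the platform stronger in security and quality of service, which is exactly the behavior the weighting scheme is meant to produce.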

By establishing this baseline, you'll provide a hosting decision recommendation that matches workload types with the right cloud hosting platform.

There are cases, however, that might impose additional requirements that cut across your recommendation results. For example, if a government or geographical presence is required, then your recommended cloud hosting platform must support that.

The lesson here: Build your overall cloud hosting strategy on your model's output while allowing for a certain percentage of cases that will go out of bounds.

Matching your workloads to the right cloud hosting platforms need not become an emotional exercise. Follow the steps above and you'll have a much more rational, data-driven basis for making those decisions while avoiding any perception of bias.



Cloud Performance Varies Across the World, New Report Finds – ITPro Today

There can be a good deal of variation in performance across cloud providers, the 2019-2020 edition of ThousandEyes' Cloud Performance Benchmark report found.

The 72-page report is based on a study that looked at Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, Alibaba Cloud and IBM Cloud over a period of 30 days from a number of vantage points. ThousandEyes collected 320 million data points from 98 global metro locations to gauge performance across the public cloud services.

The report found that both AWS and Alibaba Cloud rely on the public internet for much of the data transport, which means there is the potential for unknown risks to cloud performance to be introduced as neither company owns the data links. AWS does have its own private network, known as the AWS Global Accelerator, though apparently it doesn't always outperform the public internet.

"When AWS launched its Global Accelerator in November 2018, the intent was to let customers use the AWS private backbone network for a feerather than use the public internet, which is AWS default behavior," Angelique Medina, ThousandEyes' director of product marketing, told IT Pro Today. "While there are many examples in various regions around the world where the Global Accelerator trumps the internet connectivity path in performance, the ThousandEyes Cloud Performance Benchmark found examples of negligible improvementand even cases of worse performancewhen compared to default AWS connectivity via the internet."

Typically, the cloud providers use either public internet links or private links, with IBM being the only cloud provider with a hybrid approach to cloud connectivity from users to hosting regions. According to ThousandEyes, depending on the hosting region and the expanse of the IBM Cloud backbone, user traffic rides the internet longer or enters the cloud provider's backbone closer to the end user.

Looking at global cloud performance, ThousandEyes found roughly similar levels of performance across Western Europe and North America. The same cannot be said for other regions, with GCP having 2.5 to 3 times the network latency of its rivals, when measuring connectivity from Europe to India.
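ThousandEyes uses its own global vantage points and methodology for those measurements. As a much cruder stand-in, you can get a rough feel for regional latency differences by timing TCP connections to region-specific endpoints yourself; the sketch below is such an approximation, not the report's methodology, and the endpoints are just examples to replace with the paths you care about.

```python
import socket
import time
from statistics import median

def tcp_connect_ms(host, port=443, samples=5, timeout=3.0):
    """Median TCP connect time in milliseconds -- a crude latency proxy."""
    times = []
    for _ in range(samples):
        start = time.perf_counter()
        try:
            with socket.create_connection((host, port), timeout=timeout):
                times.append((time.perf_counter() - start) * 1000)
        except OSError:
            pass  # drop failed attempts rather than skew the median
    return median(times) if times else None

# Replace with region-specific endpoints for the routes you want to compare.
for host in ("ec2.ap-south-1.amazonaws.com", "storage.googleapis.com"):
    print(host, tcp_connect_ms(host))
```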

There is also a dramatic reduction in performance for all cloud traffic heading into China, thanks to China's so-called "Great Firewall," which is a robust content filtering machine.

"Employing a multitude of censorship toolssuch as IP blocking, DNS tampering and hijacking, deep packet inspection, and keyword filteringthe Great Firewall is designed to ensure that online content aligns with the government party line," the report states. "Privacy and ethics concerns aside, one of the drawbacks to this system is a vast reduction in performance."

ThousandEyes is in the business of providing network-level visibility for its users. Alongside the cloud benchmark report, the company announced its new Internet Insights service, which goes beyond what it had been offering in the past.

Medina explained that ThousandEyes' core service provides cross-layer visibility into application delivery over the internet. In contrast, she noted that Internet Insights is service provider-centric and provides a broad view of internet health. It leverages telemetry data derived from the testing performed by all of ThousandEyes' customers to identify outage events in service provider networks.

"Internet Insights is highly complementary to our existing offering, as it enables our customers to understand their application delivery in the context of the wider internet," Medina said. "It also enables our customers to manage external providers more effectively because they now have historical visibility into availability issuesnot just globally, but regionally as well."

The improved visibility has already been a big help to one ThousandEyes customer. The customer was fielding complaints about inability to connect to a service, according to Medina. It could see in its ThousandEyes tests that there was network packet loss in an upstream telecom provider, but it couldn't determine the scope of the issue or why so many customers appeared to be impacted.

"Using Internet Insights, they were able to trace the cause to widespread internet issues that were caused by a Cloudflare route leak," she said. "They were able to mitigate the impact of the route leak early enough to get ahead of the issue by communicating with customers and working with one of their providers to reroute traffic around the most significantly impacted zones."


Application modernisation in 2020 and beyond – why businesses need to be ready now – CIO Australia

Research from the CSIRO has found that digital technologies could be worth as much as $315 billion to the Australian economy by 2028.1 That return to the economy will be driven across a number of areas, perhaps most significantly AI, but the story within the story is that organisations will need to invest in application modernisation through digital transformation to prepare their businesses for this new, digital-first way of working.

The drive behind application modernisation can't just be because it's the hot new trend, however. Organisations that approach application modernisation on vague promises of the benefits of the cloud and improved productivity will find themselves in a similar position to now in a few years, with a legacy environment that no longer supports the organisation's competitive position in the market.

Gartner predicts that by 2023, 40 per cent of professional workers will expect orchestrated business application experiences and capabilities like they do their music streaming experience. "The human desire to have a work environment similar to their personal environment continues to rise, one where they can assemble their own applications to meet job and personal requirements in a self-service fashion," Gartner notes. "The consumerisation of technology and introduction of new applications have elevated the expectations of employees as to what is possible from their business applications."2

Simply hosting applications in the cloud, which is the extent of the application modernisation strategy for many organisations, will not deliver what Gartner is predicting.

Instead, organisations need to take a deeply strategic approach to application modernisation. Many organisations struggle to build a strategy around application modernisation, and are often unsure of the approach to modernisation that they need to undertake, whether that's refactoring, rehosting, or otherwise.

Determining the right approach to application modernisation can be a substantial project in its own right. Depending on how the databases and environment are structured, moving an application to the cloud may become a major undertaking for a team of developers. It's important to get it right, however: if managed poorly, the application is likely to again become a piece of legacy software, inhibiting the business from working competitively.

Into 2020 and beyond, CIOs and other business leaders will need to approach application modernisation with a mindset of reimagining it from the ground up, with a focus on better security, faster speed, and consolidated systems.

How organisations will look to app modernisation in the new year

Western Australia's School Curriculum and Standards Authority (SCSA) is one example of an organisation that faced the urgent and pressing need to modernise its applications. Previously holding student information for grades 11 and 12, SCSA was required to start holding records from kindergarten right through to grade 12, which meant a jump from 60,000 records to 465,000. There was no way the legacy systems were going to manage the load.

SCSA found its solution with Insight, which helped SCSA move to a cloud-based environment running on Azure, with Kubernetes deployed to package the databases and applications into containers and orchestrate the pod lifecycles.

As a result, after an engagement of just a few months, including a comprehensive planning period, SCSA had a solution it could rapidly scale and was fully digital-ready. Both the internal team and the schools working with SCSA can now reliably and rapidly access the services and applications provided by SCSA.

For more information, visit au.insight.com

It's all about the foundations

The cloud is now the standard approach to applications. Everything is online, and everything needs to be available from anywhere, regardless of location or device. There are meaningful productivity and efficiency gains to this approach; and significantly, there are consequences to not moving legacy applications into the cloud. These include:

Insight's approach with each of its customers, and the reason the SCSA project was such a success, is to identify and build a foundation for all applications within a business that is extensible for future needs. There is a range of different approaches that can be taken with application modernisation, from the relatively simple process of rehosting an application on the cloud through to a complete re-coding or replacement of an application, and within the typical environment a number of different approaches will be needed, depending on the state of each individual application. What determines the overall success of a modernisation project is whether the foundations are in place first, both in terms of technology, such as whether the organisation is cloud-ready and far enough along with its digital transformation strategy to start application modernisation, and in terms of strategy, such as determining the five- and 10-year goals of the applications.

The cloud can be a complex environment. Some of the tools used to manage the transition to the cloud and operation within it, such as containerisation through Kubernetes, are effective but need careful planning and a change management process within the organisation first.

2020 will be a big year for application modernisation. Organisations that develop a sound foundation will find themselves set for the years ahead, with a highly scalable and flexible environment that is future-proofed for longer-term trends as they emerge. Where organisations will struggle is if they don't approach application modernisation from a whole-of-business, foundational approach first.

Read the Insight whitepaper on making a business case for application modernisation.

1 https://www.afr.com/technology/ai-roadmap-forecasts-315b-industry-20191114-p53ali

2 https://www.gartner.com/en/newsroom/press-releases/2019-22-10-gartner-unveils-top-predictions-for-it-organizations-and-users-in-2020-and-beyond
