
PODCAST | How Absa uses cloud computing to improve its business and operations – Business Day

Baker says that arrangement has given the bank flexibility to quickly grow and scale new banking platforms without the need to buy new physical equipment for its facilities. Instead, the company can simply increase the amount of computing power it needs from AWS, accessed online, which helps to save time and costs for the group.

Data from market research company Allied Market Research estimates the global hybrid cloud market size was valued at $36.1bn in 2017, and is projected to reach $171.9bn by 2025, growing at a compound annual growth rate of 21.7% from 2018 to 2025.

The discussion focuses on how Absa has used cloud computing to improve its offering to clients, cut costs and develop new technology faster, while at the same time working to make sure that staff have the skills to use and develop new banking platforms.

Engage on Twitter at #BDSpotlight

Subscribe: iono.fm | Spotify | Apple Podcasts | Pocket Casts | Player.fm

Business Day Spotlight is a MultimediaLIVE production.

Link:
PODCAST | How Absa uses cloud computing to improve its business and operations - Business Day

Read More..

Silver Lake invests in Abu Dhabi's AI and cloud computing company G42 – The National

Group 42, the Abu Dhabi-based artificial intelligence and cloud computing company, secured a "substantial" investment from US-based private equity manager Silver Lake.

As part of the deal, Silver Lake's managing partner and co-chief executive Egon Durban will join G42's board, the company said. Neither the size of the stake nor the amount invested was disclosed.

"We are honoured to partner with a world-class investor like Silver Lake and proud to be among their cutting-edge portfolio of technology leaders," said G42's chief executive Peng Xiao.

"We aim to work with the best technologies and the best partners to deliver value to every market in the world. Our business verticals range from energy, to healthcare, to finance. Now is the right time to partner with Silver Lake to further expand our possibilities," he added.

G42, which owns and operates the world's 26th-most powerful supercomputer, is carrying out high-level fundamental and applied research into AI as well as developing cloud computing for the most demanding use cases. Proceeds from Silver Lake's investment, reported by the Wall Street Journal to be worth about $800 million, will be used to help it scale its operations in the UAE and in international markets, G42 said.

Last month, the company partnered with British outsourcing company Serco to promote the adoption of technology by government clients in the Middle East and drive a shift towards data-driven operations.

"G42 has quickly become a globally respected technology leader, poised to extend its leadership in AI and digital transformation," Mr Durban said.

"G42 has not only experienced tremendous growth in recent years, but has done so by partnering with large-scale clients to address the most complex technology challenges. We are excited to have this opportunity to work with them," he added.

Silver Lake, which specialises in technology investments, is a Silicon Valley-based investor with more than $79 billion of committed capital and assets under management. It has invested in some of the world's best-known technology companies including Airbnb, Ant Financial, Expedia Group and Twitter.

Its portfolio companies collectively generate more than $191bn of revenue a year and employ more than 441,000 people around the world.

In September last year, Abu Dhabi sovereign fund Mubadala Investment Company invested $2bn into Silver Lake as the companies agreed to partner on a long-term investment strategy spanning 25 years. Mubadala also took a minority stake in the company.

See original here:
Silver Lake invests in Abu Dhabi's AI and cloud computing company G42 - The National

Read More..

How Cloud Computing Can Be the Key to Ameliorating Outcomes While Mitigating Health Care Costs – Journal of Clinical Pathways

Cloud computing has been around in the health care industry for a few decades now. However, what's truly remarkable is that the adoption of this technology has increased at a frenetic pace only recently.

A 2019 research study by Technavio estimates that the global health care cloud technology market will grow by USD 25.54 billion during 2020-2024. The coronavirus pandemic has only reinforced this trend further.

This new reality, along with new payment models and changes in patients' expectations, has pushed cloud technology to the forefront. Today, the cloud is not only helping providers improve patient care, drive efficiency, and eliminate waste, but is also playing a huge role in ensuring health care data safety by averting potential cyber attacks and thefts.

Integrating cloud computing into your practice can be the key to streamlining care delivery.

In this blog post, we'll discuss a few ways this state-of-the-art tech solution can support the health care industry's efforts to improve patient outcomes and mitigate costs in doing so.

1. Making Patient Data Interoperable while Mitigating Storage Costs

According to a recent survey conducted by the Center for Connected Medicine (CCM) in partnership with HIMSS Media, close to one-third of health care organizations report that their interoperability efforts are insufficient, even within their own organizations.

In most cases, physical data centers deployed on-premises not only demand an up-front investment in hardware, but also come with the ongoing costs of maintaining servers, space, cooling solutions, etc.

Cloud technology can be the solution to this persistent problem.

With health care organizations rapidly embracing virtual care delivery models such as telemedicine, especially amid the ongoing COVID-19 pandemic, collaboration between various doctors, departments, and even institutions has become increasingly important. The cloud enables physicians to share data in a hassle-free manner.

Health care cloud vendors can aid providers in seamlessly integrating various processes within the organization and lowering their data storage costs by managing the infrastructure and ensuring the smooth functioning and maintenance of cloud storage services. This frees care providers to focus their efforts on ameliorating patient outcomes.

This, in turn, boosts interoperability across the organization and helps with faster care delivery.

2. Keeping Patient Information Secure at Each Stage of the Data Lifecycle

In 2018, health care data breaches of 500 or more medical records occurred at a rate of approximately one per day. In 2020, that frequency nearly doubled, with an average of 1.76 breaches per day.

Source: HIPAA Journal

The fact that health care organizations need to have highly robust security measures in place to safeguard sensitive patient data is universally known.

Cloud computing adds supplemental layers of security and monitoring to health care data.

One best practice for health care organizations here would be to put adequate access controls in place. For instance, particulars about a patient's medical condition and treatment can be blocked from back-office staff who don't require such details to do their work. Similarly, a patient's financial information may be blocked from frontline care providers.
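To make the idea concrete, here is a minimal sketch of that kind of role-based field filtering. The roles, record fields and helper names are illustrative assumptions, not any specific EHR system's API:

```python
# A minimal sketch of role-based access control over a patient record.
# Roles, field names and the PatientRecord shape are invented for the
# example; a real EHR would enforce this in the platform, not app code.

from dataclasses import dataclass

# Which record fields each role may read.
VISIBLE_FIELDS = {
    "clinician":   {"name", "diagnosis", "treatment_plan"},
    "back_office": {"name", "billing_account", "insurance_id"},
}

@dataclass
class PatientRecord:
    name: str
    diagnosis: str
    treatment_plan: str
    billing_account: str
    insurance_id: str

def view_for(record: PatientRecord, role: str) -> dict:
    """Return only the fields the given role is allowed to see."""
    allowed = VISIBLE_FIELDS.get(role, set())
    return {f: v for f, v in vars(record).items() if f in allowed}

record = PatientRecord("Jane Doe", "Type 2 diabetes", "Metformin 500mg",
                       "ACCT-1042", "INS-77812")
print(view_for(record, "back_office"))  # name + billing fields, no diagnosis
print(view_for(record, "clinician"))    # clinical fields, no financials
```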

Telehealth is another technology where cloud computing is proving its potential. Getting a telemedicine platform developed for your practice that stores data on a cloud server can furnish robust security features such as end-to-end encryption of data, multi-factor authentication (MFA), etc. These ensure the patient data on your platform remains safeguarded at all times.

Today, a number of health care cloud providers also offer services in compliance with the Health Insurance Portability and Accountability Act (HIPAA). Choosing a compliant provider can further ensure that all the sensitive data you store adheres to HIPAA Rules and remains protected at all times. This can significantly help providers avoid fines and penalties.

3. Furnishing Efficient and Integrated Patient Care

Today's patients are quite savvy about their well-being. Equipped with state-of-the-art digital solutions, they are willing to accept nothing less than high-quality medicine: care delivered in a patient-centric and streamlined manner. This makes an integrated model of care delivery critical for caregivers.

Cloud technology is playing a huge role in delivering patient-centric care.

The integration of cloud storage with patients' electronic health records (EHRs) has helped revolutionize collaborative patient care, making it hassle-free for authorized medical staff to retrieve vital patient information from any remote location, at any given point in time. This further promotes anytime care and augments patient outcomes.

"With EHRs, every provider can have the same accurate and up-to-date information about a patient," as explained by the Office of the National Coordinator for Health Information Technology on its website. "Better coordination can lead to better quality of care and improved patient outcomes."

The cloud-based software behind collaboration tools such as video conferencing and enterprise messaging holds the potential to have a positive influence on both health care teams and their patients.

"Moving to the cloud for our communications was the best decision we've made, as we're now connected with our patients and colleagues whether we are in the office, at home or traveling overseas," states Dr. Ravi Patel, founder of the Comprehensive Blood & Cancer Center, Bakersfield, CA, in a recent press release.

Today, with the rapid innovation happening on the cloud technology front, the data gathered from remote patient monitoring devices can also be uploaded to a specific medical cloud or the user's private centralized cloud. This helps maintain a record of all the monitored data, which can easily be retrieved at a later time by authorized medical personnel to suggest treatment.

All in all, cloud computing has transformed the health care industry in innumerable ways.

Now, this transformation may be occurring at a comparatively slower pace for some, but the growing need to make data more interoperable will eventually get many to notice the cloud and its endless benefits.

Having said that, it wouldn't be wrong to assume that the future of health care is in the cloud!

Rahul Varshneya is the co-founder and president of Arkenea, a digital health consulting firm. Rahul has been featured as a technology thought leader on Bloomberg TV, Forbes, HuffPost and Inc, among others.

The rest is here:
How Cloud Computing Can Be the Key to Ameliorating Outcomes While Mitigating Health Care Costs - Journal of Clinical Pathways

Read More..

2 Top Cloud Computing Stocks to Buy in 2021 – The Motley Fool

Cloud computing has revolutionized the business world over the last two decades. Enterprises no longer need to provision and maintain costly on-premises computing infrastructure. Instead, they can access resources like servers, storage, databases, and software remotely through the internet. Moreover, those resources can be accessed on demand, allowing enterprises to quickly and efficiently scale their operations.

According to research firm Gartner, spending on public cloud services will increase by 19% annually through 2022. That growth should be a tailwind for industry titans like Amazon (NASDAQ:AMZN) and Microsoft (NASDAQ:MSFT). Here's what investors should know about these two companies' opportunities in the cloud space in 2021.

Amazon's cloud computing business, Amazon Web Services (AWS), launched in 2006. Today, it's still the clear leader in the space, with a more extensive global infrastructure, a broader product offering, and a larger market share than any of its rivals. In fact, during the fourth quarter of 2020, AWS took 32 cents of every dollar spent on cloud infrastructure services.

Image source: Getty Images.

That dominance has attracted a diverse network of partners -- enterprises that use AWS to build solutions for their own clients. For example, consulting firm Deloitte developed its Smart Factory Fabric, a cloud-enabled manufacturing process, using Amazon IoT systems. This suite of applications brings smart manufacturing capabilities to its clients' operations. Notably, Deloitte's role as a consultant to 80% of Fortune Global 500 companies means it's well-positioned to bring new customers to AWS.

Not surprisingly, its robust product portfolio and large partner network have powered strong growth in Amazon's cloud computing business. Last year alone, AWS's revenue rose 30% to $45 billion.

Moreover, AWS's operating margin was 30% in 2020. Compare that to the combined operating margin of Amazon's other businesses -- 3%. The cloud segment's high profitability has helped the tech giant bankroll its e-commerce efforts, making it an even greater threat to traditional retailers. In other words, AWS generates enough cash that Amazon can afford to run its e-commerce business at a loss to gain market share.

AWS's lead in cloud computing should help the company grow its top and bottom lines quickly. That, in turn, should drive increased profitability for Amazon as a whole, while allowing it to fund the rapid innovation that has kept AWS ahead of its rivals.

Microsoft launched its cloud computing business, Microsoft Azure, in 2008. While it still trails AWS in terms of market share, the company is executing on a strong growth strategy, and Azure is gaining ground.

Market Share   Q4 2018   Q4 2019   Q4 2020
Amazon         33%       32%       32%
Microsoft      15%       18%       20%

Source: Canalys.

Specifically, Microsoft has focused on supporting hybrid and edge computing use cases. This strategy makes sense -- some types of data need to remain on company premises due to privacy or regulatory requirements. That can put an enterprise at a disadvantage, though, if it means it doesn't have access to cloud services to help manage, analyze, and secure that data. But Microsoft has a solution.

First, Azure Arc extends Azure's management capabilities across any environment, from private data centers to public clouds. In other words, it allows clients to manage all their digital resources in a unified way, even if some of those resources are stored on-site or in a rival cloud like AWS. For example, Azure Arc makes it possible to train and run AI models using data stored in multiple different locations. That puts Microsoft ahead of rivals like AWS and Alphabet's Google Cloud in terms of its ability to power hybrid AI.

Second, Azure Stack allows clients to run their own Azure environments using on-premises servers. This makes it possible to bring Azure services -- think artificial intelligence, analytics, monitoring, and security -- to private data centers or even disconnected environments. Azure Stack also makes it possible for developers to build and run hybrid applications across cloud and on-premises locations.

In recent years, Microsoft has also forged partnerships with companies like Datadog, SpaceX, and General Motors that have helped expand Azure's client base. In another partnership, which began in 2019, SAP started working with Microsoft to migrate its on-premises software customers to Azure. And in 2021, the two companies expanded this partnership, enabling SAP to integrate Microsoft Teams into its own software solutions.

On the whole, Microsoft's efforts have powered strong growth in its cloud computing business. In the company's fiscal 2020 (which ended June 30, 2020), Azure revenue surged 56%, and through the first two quarters of its fiscal 2021, sales were up 49% year over year.

Microsoft's size gives it an advantage over the vast majority of its rivals. And its focus on hybrid scenarios should power continued growth as more enterprises migrate to the cloud.

This article represents the opinion of the writer, who may disagree with the "official" recommendation position of a Motley Fool premium advisory service. We're motley! Questioning an investing thesis -- even one of our own -- helps us all think critically about investing and make decisions that help us become smarter, happier, and richer.

See the rest here:
2 Top Cloud Computing Stocks to Buy in 2021 - The Motley Fool

Read More..

Pensando tech slashes network management costs – but you may need 2,000 servers to benefit – Blocks and Files

Enterprises with 2,000-plus servers could save up to 84 per cent in various network monitoring and management costs over three years by using Pensando SmartNIC server offload chips.

This is the headline finding of an Enterprise Strategy Group Economic Validation report, published yesterday: "ESG's analysis found that Pensando's scale-out software-defined services approach enabled organisations to centralise management, simplify administration, and optimise performance."

"Carriers and CSPs have embraced a scale-out approach, which enables services to be run on homogeneous, industry-standard server hardware," ESG adds. "The challenge is that spinning up network or security functions on the generic servers employed by carriers and CSPs burns CPU resources and is generally less efficient and performant than specialty hardware."

Pensando, a California startup, has built the Arm-powered Naples DSC (Distributed Services Card) which connects to a host server across a PCIe interface. The card offloads and accelerates networking, storage and management tasks from its host server, freeing up the host CPU to run application workloads instead of infrastructure-focused tasks.

Pensando's DSC card replaces speciality hardware appliances. Infrastructure services such as security, encryption, flow-based packet telemetry, and fabric storage services are deployed on the DSC at every server. Pensando provides Policy and Services Manager (PSM) software for centralised management. PSM collects events, logs, and metrics from the installed DSCs to speed troubleshooting.

In its report, ESG notes: "While at first glance it might seem expensive to implement Pensando hardware and software into each data centre server, ESG's modelled scenarios demonstrate significant savings for both traditional enterprises and cloud service providers."

In other words, the performance and related savings per server are not enough at a server level to justify the Pensando card cost. But the total cost of ownership savings across large fleets of servers (2,000 and upwards in ESG's scenarios) over three years make the expense of buying Pensando cards worthwhile.

The ESG researchers modelled two scenarios: an enterprise data centre with 2,000 servers, each fitted with a DSC card, and a cloud services provider with 20,000 similarly equipped servers.

ESG calculated the costs of network adapters and monitoring appliances, east-west firewalls, load balancers, micro-segmentation nodes, and their associated license fees and operational expenditures (Opex). These were summed over three years and compared with the same servers fitted with Pensando DSCs and services software.

In the enterprise model, ESG predicts total three-year savings of $16,180,092, or 84 per cent. In the cloud services provider model, it predicts total three-year savings of $104,456,894, or 64 per cent. These are large numbers and suggest that enterprises and CSPs with a thousand-plus servers would be well-advised to look at the costs and benefits of using Pensando DSCs in their data centres.
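Working backward from those figures gives a sense of the baseline costs the model implies. This back-of-envelope arithmetic is ours, not part of the ESG report, and the rounded percentages make the results approximate:

```python
# Recover the implied three-year baseline cost (traditional setup,
# no Pensando) from the savings and percentages quoted above.

scenarios = {
    "enterprise (2,000 servers)": (16_180_092, 0.84),
    "CSP (20,000 servers)":       (104_456_894, 0.64),
}

for name, (savings, pct) in scenarios.items():
    baseline = savings / pct        # baseline * pct == savings
    with_dsc = baseline - savings   # three-year cost with DSCs fitted
    print(f"{name}: baseline ~${baseline:,.0f}, with Pensando ~${with_dsc:,.0f}")

# enterprise (2,000 servers): baseline ~$19,262,014, with Pensando ~$3,081,922
# CSP (20,000 servers): baseline ~$163,213,897, with Pensando ~$58,757,003
```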

Continued here:
Pensando tech slashes network management costs - but you may need 2,000 servers to benefit - Blocks and Files

Read More..

The Bright Future of Cloud SIEM – Security Boulevard

TL;DR: People keep questioning SIEM value, but cloud SIEM makes SIEM so much better. SIEM is now capable of delivering a lot of security value with far less effort from security teams.

The SIEM market is a US$5B market with a double-digit annual growth rate. Still, we keep seeing multiple questions and discussions around SIEM's role, future and value. Why?

There are many reasons for this.

Nothing is more important to those discussions than Cloud SIEM. Not just SIEM hosted in the cloud, but SIEM as a native cloud offering. Why? Because now SIEM vendors can have some control over deployment success. "What are you saying, Augusto? Didn't they have control over the success of their own product before?" Yes, that's true!

As a traditional SIEM vendor, it is very hard for you to ensure the customer will be able to get all the benefits your product can provide. First, they may underestimate the required capacity for their environment. They will end up with a sluggish product, overflowing with data, having to deal with adding servers, memory and storage, or even stopping the deployment to rearchitect the whole solution before getting any value from it. I've seen countless SIEM deployments die this way before generating any return on investment.

But it doesn't stop there. They may get the sizing right but underestimate the effort to keep it running. They estimate the number of people needed to use the SIEM, but they forget that a traditional SIEM requires people not only to use it but also to keep it running. That means people will spend their time keeping servers running, applying patches (to operating systems, middleware and to the SIEM software too), troubleshooting log collection and ensuring storage doesn't blow up, instead of paying attention to what the SIEM should actually be doing for them. The tool is up and running, but again, not providing any value.

We can see how much the vendor depends on the customer to provide value. And even if customers do things properly, there are other challenges too. Traditional software allows for high variation between deployments: customers running different versions, with different hardware and architectures. How can a vendor distribute SIEM content (parsers, rules, machine learning models, etc.) that works in a consistent manner for its customers in this scenario? It just can't.

Considering these factors, I risk saying that offering a traditional SIEM solution is like the myth of Sisyphus. As much as the vendor tries to deliver value, the solution will eventually fail to achieve the customer's objectives. As traditional software, SIEM was really destined to die.

First, many challenges with SIEM deployments are related to problems that are completely solved or minimized by the SaaS model. Cloud services are highly scalable and elastic, and SaaS practically eliminates the need to maintain the application and underlying components. Now you have a SIEM that finally scales and does not require an army to keep it running. You can focus on using it appropriately.

Second, a SaaS SIEM puts customers on highly standardized deployments. With most customers running on the same version, without capacity challenges, it's far easier to deliver content that works for all of them. That makes a huge difference in perceived value. And it doesn't stop there. With this scenario, it becomes easier for the vendor to finally realize the benefits of the wisdom of the crowds. Developing more complex ML models for threat detection, for example, becomes easier and more effective. The vendor now has access to more data to train and tune the models. Even simple IOC match detection content can be quickly developed and delivered to all customers, allowing the SIEM vendor to provide detection of new, in-the-wild threats.
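To illustrate just how simple such IOC-match content can be, here is a toy matcher. The indicators and event format are invented for the example; a real SIEM would run this kind of match at ingest time across every tenant:

```python
# Toy IOC (indicator of compromise) matching: indicators pushed by the
# vendor are checked against incoming events. Values here are invented
# (TEST-NET IPs and the SHA-256 of an empty file).

iocs = {
    "ip":     {"203.0.113.66", "198.51.100.23"},
    "sha256": {"e3b0c44298fc1c149afbf4c8996fb924"
               "27ae41e4649b934ca495991b7852b855"},
}

events = [
    {"src_ip": "10.0.0.5",     "file_sha256": None},
    {"src_ip": "203.0.113.66", "file_sha256": None},  # hits the IP list
]

def match_iocs(event: dict) -> list:
    """Return a list of IOC hits for one event."""
    hits = []
    if event.get("src_ip") in iocs["ip"]:
        hits.append("known-bad IP " + event["src_ip"])
    if event.get("file_sha256") in iocs["sha256"]:
        hits.append("known-bad file hash " + event["file_sha256"])
    return hits

for e in events:
    for hit in match_iocs(e):
        print("ALERT:", hit)   # ALERT: known-bad IP 203.0.113.66
```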

Finally, delivering any software solution via SaaS gives the developer the opportunity to embrace more agile development practices. Upgrading a traditional SIEM deployment is so complex that vendors would naturally rely on traditional waterfall development practices, generating big releases with long times between them. SaaS SIEM can leverage agile development and CI/CD practices, so new features can be quickly added, and defects quickly fixed.

Cloud SIEM is in its infancy, when you consider that SIEM is just past its teenage years. But there are so many opportunities to explore with this model that I believe we can now say "Next-Gen SIEM" without feeling silly about it. Be careful with "SIEM is dead" claims. That sounds to me much like "I think there is a world market for maybe five computers," by Thomas Watson in 1943.

*** This is a Security Bloggers Network syndicated blog from Security Balance - Augusto Barros authored by Unknown. Read the original post at: http://feedproxy.google.com/~r/SecurityBalance/~3/BAcr0fKDFm4/the-bright-value-of-cloud-siem.html

Read more:
The Bright Future of Cloud SIEM - Security Boulevard

Read More..

New SQL Monitor release gives organizations the opportunity to manage their on-premises and cloud databases from a single global dashboard – IT News…

RealWire, 2021-04-15

Cambridge UK, Thursday, 15 April. To help organizations explore and manage the advantages the cloud provides, the latest release of Redgate's popular database monitoring tool, SQL Monitor, now supports Amazon EC2 and RDS, and Azure SQL Database and Azure Managed Instances, as well as on-premises SQL Server. A new global dashboard allows users to check the health of their entire SQL Server estate at a glance and pinpoint issues with individual servers and instances, wherever they are, however large the estate.

The new SQL Monitor keeps the user experience consistent, allowing organizations to focus on responsiveness, improving performance and supporting business-critical areas, rather than trying to understand the complexity across database platforms. It also brings consistency and familiarity to database monitoring, and avoids the learning curve, cost and time involved in using multiple monitoring tools for different databases.

The release follows research from Redgate's 2021 State of Database DevOps report, showing that 58% of organizations now use the cloud either wholly or in combination with on-premises servers, compared to 46% in the same report a year earlier.

This accelerated move to the cloud reflects how organizations are looking to reap the benefits that cloud platforms offer, even if it means that managing and monitoring server estates becomes more complex and difficult. Different use cases and requirements make choosing a single cloud offering rare, and many server estates now feature a changing mixture of on-premises servers and platforms like Amazon and Azure.

To support these business needs and keep up with the evolution of hybrid server estates, it's critical that organizations have the ability to monitor every type of server and instance with the same monitoring tool, using a consistent approach to minimize the time and effort involved. This ensures the availability, security and performance of all the databases across different hosts can be managed far more easily and effectively.

As Phil Grayson, CEO of Managed Service Provider xTEN, comments: "We've seen a big shift in the SQL Server space over the last few years, with hybrid estates growing in size and complexity. The latest version of SQL Monitor simplifies their management because organizations can now focus on choosing the right database solution for their business need, without worrying how they're going to monitor it."

The development team behind SQL Monitor is now looking to add more estate management capabilities to the tool, such as providing security-related information on demand and automating the discovery and inventory of entire SQL Server estates.

To find out how Redgate SQL Monitor offers a complete overview of hybrid SQL Server estates with fast deep-dive analysis, organizations can download a 14-day, fully functional free trial or see a live demo online at http://www.red-gate.com/sql-monitor.

About Redgate Software
Redgate makes ingeniously simple software used by over 800,000 IT professionals around the world and is the leading Database DevOps solutions provider. Redgate's philosophy is to design highly usable, reliable tools which elegantly solve the problems developers and DBAs face every day and help them to adopt compliant database DevOps. As well as streamlining database development and preventing the database being a bottleneck, this helps organizations introduce data protection by design and by default. As a result, more than 100,000 companies use Redgate tools, including 91% of those in the Fortune 100. For more information, visit http://www.red-gate.com.

Contacts
Meghana Shendrikar
Allison+Partners for Redgate Software
Redgate@allisonpr.com

Source: RealWire

See the rest here:
New SQL Monitor release gives organizations the opportunity to manage their on-premises and cloud databases from a single global dashboard - IT News...

Read More..

A room with a view: a non-tech explanation of containers and Kubernetes – S&P Global

Introduction

Virtualization, containers and Kubernetes are big topics in tech, but the differences aren't always clear. Here, we present a simple analogy to aid understanding for a non-tech audience, and consider the role of these topics in a multicloud future.

The 451 Take

Containers are now a fundamental component of IT infrastructure: 53% of enterprises have at least some adoption today, with just 5% having no plans to implement, according to 451 Research's Voice of the Enterprise: DevOps, Organizational Dynamics 2020. Meanwhile, 43% of enterprises are using Kubernetes, at least at some level, to manage their IT estates. The crucial benefit of containers is that applications can be decomposed into self-managed components that can live for as long or as short a time as needed, depending on the demand at that time. These components can be updated independently across many physical servers without needing to rebuild the whole application, and components can be shared by multiple applications.

Orchestration platforms such as Kubernetes manage containers across many hosts, duplicating containers when needed to handle extra demand, and then scaling back down to save resources when they are not needed. Because of this, they are a logical enabler of multicloud applications. Being able to update and scale apps so users are always getting the best experience can make the difference between winning business and losing opportunities. In the post-COVID-19 era, the online experience has never been more important.

Virtualization

Imagine a building with a number of floors: this represents a server in our analogy. An owner rents it out to a single tenant, but the tenant must pay a lot of rent for the whole building, even though they don't use it all. The landlord is only able to collect rent from a single person.

The owner could break the building into a number of apartments, all sharing the same physical building. Each apartment is a fully self-contained living space. Here, the landlord gets multiple revenue streams, and each tenant pays less than if they were to rent the whole house. This is akin to 'virtualization' in cloud computing, with tenants representing applications that are all hosted within the same building (server).

Virtualization enables the most basic attribute of a cloud service: a massive amount of computing or storage resources can be applied to many users simultaneously, with each user utilizing that service without regard to what other users are doing. Similarly, those users should be able to utilize the service without having to worry about the details of hardware-level implementation. Cloud suppliers virtualize compute, storage and other services on a massive scale using hardware virtualization, where a hypervisor layer (the abstraction layer) running on the hardware enables operating systems (and their applications) to share resources such as compute, storage and memory from a single asset, be it a server or even a pool of servers.

Originally, one server meant one operating system, typically delivering one workload. Through virtualization, one server (and its 'host' operating system) can hold multiple 'guest' operating systems, each one operating a logically separated workload, deployed together and contained within so-called virtual machines (the VM is the 'unit' of work in virtualization). Before virtualization, perhaps only a tiny fraction of the asset (the server and its resources) would be used at any one time. Through virtualization, multiple applications can be multiplexed together, so that resources are shared, and the asset is fully used. VMs contain a guest operating system, application payload, and anything else that may be needed (such as a database).

Common hypervisors include VMware ESX, Microsoft Windows Server, Citrix Xen, Red Hat Enterprise Virtualization and Linux KVM. Cloud providers often have their own virtualization software. Increasingly, the hypervisors are less valuable than the software used to manage hypervisors across large numbers of servers, which allows virtual machines to rapidly be spun up and down, and configured to be resilient and performant: the essence of cloud.

Containers

Back to our apartment building. Alternatively, the landlord could just break it up into a number of bedrooms, with a bathroom and kitchen shared among all tenants. In this model, the landlord is squeezing the most from the building, and the tenants are paying the least. Furthermore, a landlord that needs to perform maintenance on the kitchen or bathrooms only has to do it once, and will still satisfy all tenants. This is akin to 'containerization,' where each application isn't just sharing the physical server (the building), it is sharing code that is common across other applications (represented by tenants sharing some rooms). This code can be swapped out quickly across all applications at once, just like the kitchen can be repainted to satisfy all tenants. And if a room is unoccupied, the other tenants can take over that space temporarily, but can also vacate it quickly when a new tenant wants to move in.

The landlord in this case likes the predictability of tenants that have signed a contract to pay rent every month for a term commitment; and the tenants like knowing they have a roof over their head for the foreseeable future. But if there is a steady stream of people who only need a room for a few nights, the landlord might decide to turn the building into a hotel. Here, guests can stay for as long and as short as they need, and can rent as many rooms as they require. Containers can be spun up and consumed for a matter of seconds, to provide immediate capability where needed, but can be rapidly turned down to free up resources for other containers.

Container technology is essentially operating system virtualization: workloads share operating system resources such as libraries and code. Containers have the same consolidation benefits as any virtualization technology, but with one major advantage: there is less need to reproduce operating system code. Hardware virtualization means each workload must have all its underlying operating system technology. If the operating system takes up 10% of a workload's footprint, then in a hardware-virtualized platform, 10% of the whole asset is spent on operating system code, regardless of the number of workloads being run. In the same environment utilizing containers, the operating system only takes up 10% divided by the number of workloads.

In this case, a server runs 10 workloads, but only one operating system in the container environment. In the virtualized environment, the server would be running 10 workloads and 10 operating systems. An application container consists of an entire runtime environment: an application plus all of its dependencies, libraries and other binaries, as well as the configuration files needed to run it, bundled into a virtual container that can run on a variety of infrastructures bare metal, traditional datacenter, virtual environment, or public, private or hybrid cloud. Application containerization represents a method of deploying software that is immune to changes in the underlying computing environment.
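The 10% arithmetic above is easy to make concrete; the sketch below simply restates the article's illustrative assumption in code:

```python
# OS footprint: 10 workloads on one server, with the OS assumed to be
# 10% of a workload's footprint (the article's illustrative figure).

workloads = 10
os_share = 0.10  # OS portion of one workload's footprint

vm_overhead = os_share                     # every VM repeats the OS
container_overhead = os_share / workloads  # one OS copy, shared by all

print(f"VMs:        {workloads} OS copies, {vm_overhead:.0%} of the asset on OS code")
print(f"Containers: 1 OS copy,  {container_overhead:.0%} of the asset on OS code")
# VMs:        10 OS copies, 10% of the asset on OS code
# Containers: 1 OS copy,  1% of the asset on OS code
```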

The software that actually runs these containers is called a container runtime; examples include Docker, containerd and CRI-O. The Open Container Initiative is an industry project to standardize container runtimes based on Docker, with partners that include AWS, CoreOS, Docker, Google, IBM, HP, Microsoft, VMware, Red Hat and Oracle. As with virtualization, these runtimes are less differentiated and valuable than the orchestration platforms that manage containers across multiple servers, and even clouds.

Kubernetes and orchestration

At our hotel is a reception desk, staffed by a receptionist. The receptionist is responsible for keeping track of who is in what room, and allocating free rooms to new guests. Without the receptionist, the hotel would be static and underutilized. In containers, Kubernetes is the receptionist that manages the lifecycle of containers and automates the deployment, scaling and management of containerized applications.

Kubernetes is the standard today for container orchestration, due to its extensibility, powerful configuration options and ability to manage workloads independent of underlying infrastructure. It is an open source container orchestration system for automating application deployment, scaling and management. Kubernetes was originally designed by Google and became the flagship project of the Cloud Native Computing Foundation (CNCF). It balances application load, just as the receptionist would allocate a block booking across many rooms, and monitors resource consumption so the hotel isn't overbooked. Kubernetes also allows new resources to be added, and can redistribute containers to different hosts should resources struggle.

Kubernetes 'clusters' are groups of machines, referred to as nodes, that are responsible for running containerised applications. A 'pod' is the smallest deployable unit in Kubernetes: one or more containers deployed together onto a node. Each node in a cluster can run multiple pods. Kubernetes can automate the management of clusters to perform at a desired state.
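Kubernetes itself is configured declaratively and does this through controllers watching the API server, but the core idea of converging actual state toward desired state can be sketched as a toy control loop. This is conceptual only, not the Kubernetes API:

```python
# Toy reconciliation loop: compare desired replica counts with actual
# running containers, and start or stop containers to close the gap.

desired = {"web": 3, "api": 2}   # replicas we asked for
actual  = {"web": 1, "api": 4}   # replicas currently running

def reconcile(desired: dict, actual: dict) -> None:
    for app in sorted(desired.keys() | actual.keys()):
        want, have = desired.get(app, 0), actual.get(app, 0)
        if have < want:
            print(f"{app}: starting {want - have} container(s)")
        elif have > want:
            print(f"{app}: stopping {have - want} container(s)")
        else:
            print(f"{app}: at desired state")
        actual[app] = want       # converge on the desired state

reconcile(desired, actual)
# api: stopping 2 container(s)
# web: starting 2 container(s)
```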

Many vendors are building Kubernetes compatibility into their own orchestration platforms, including Docker, Red Hat OpenShift, VMware Tanzu, IBM, Rancher Labs, Mirantis and Morpheus Data. Cloud providers, too, have created cloud services with support for Kubernetes, including AWS's Elastic Kubernetes Service, Google Kubernetes Engine, and Microsoft's Azure Kubernetes Services. There are also offerings from IBM, Oracle, Alibaba and OVH. But not everything is Kubernetes based: AWS's Elastic Container Service and Azure Container Instances are just two examples of container platforms that don't fit the mold.

Multicloud

If a receptionist can manage rooms in a hotel, why can't a centralized receptionist handle rooms at a range of locations, balancing capacity and availability across cities far apart? Kubernetes and containers are being touted as an enabler of multicloud deployments: they provide a standard mechanism for managing and updating applications, regardless of the underlying infrastructure platform. This can help remove the risk of lock-in, because containers can be moved between venues without needing to be rewritten, and their lightweight nature (due to the sharing of code) means they can be moved from A to B far quicker than heavy-duty virtual machines.

However, it is unlikely we will see all applications being decomposed to containers. Just like the property and leisure markets sustain houses, apartments, hotels and hostels, there will be demand for physical hosts, virtual machines and containers depending on specific requirements. Some enterprises are happy to pay for a whole physical host, knowing it is isolated and has full access to physical resources. Others might prefer to share resources but isolate code, like in a virtual machine, while others might have the appetite to abstract apps as far as possible into containers. Lots of applications are still monolithic, and slimming them down to VM size or breaking them into containers isn't an easy task. And post-pandemic many enterprises will struggle to find the money and the motivation to rebuild applications from scratch, rather than just move them to more powerful servers.

Anyway, it is often the case that physical, virtual and containerized apps run nested and hand-in-hand. Containers can be deployed on virtual machines, providing flexibility with the benefits of isolation. And no one said containers and Kubernetes would be easy. Virtual machines have reached a point of maturity where they are easy to deploy and easy to manage. Right now, containers are a hodgepodge best deployed and managed by experts. This will change over time, but for now, the container hotel is open for business: just make sure your receptionist knows what they're doing.

See the rest here:
A room with a view: a non-tech explanation of containers and Kubernetes - S&P Global

Read More..

Managed Servers Market Expected to Witness a Sustainable Growth over 2026 & Key Analysis by Capgemini, TCS, XLHost, Albatross Cloud, Sungard…

This detailed assessment of all the factors and dynamics affecting the global Managed Servers market landscape provides the client with a critical overview and detailed insights to understand the market. This document is well equipped with the resources and information essential to changing the growth course of an organization in the Managed Servers market.

Key Companies Covered in This Report: Capgemini, TCS, XLHost, Albatross Cloud, Sungard Availability Services, Hetzner, iPage, Viglan Solutions, Atos, LeaseWeb

Download Sample Copy of Managed Servers Market Report: https://www.reportsintellect.com/sample-request/1840084

NOTE: The Managed Servers report has been assessed in light of the COVID-19 pandemic and its impact on the market.

Managed Servers market segmentation:

The Managed Servers market report has been divided into segments and sub-segments to make it easier to comprehend and work with. The segmentation adds structure and ease of access to data that can otherwise be overwhelming.

By types:
Cloud-Based
On-Premises

By Applications:
BFSI
IT & Telecommunication
Education
Government
Retail
Manufacturing
Consumer Goods
Energy & Utility
Others

By Regions:
North America
Europe
Asia-Pacific
South America
The Middle East and Africa

Check discount for report @ https://www.reportsintellect.com/discount-request/1840084

About Us:

Reports Intellect is your one-stop solution for everything related to market research and market intelligence. We understand the importance of market intelligence and its need in today's competitive world.

Our professional team works hard to fetch the most authentic research reports, backed by impeccable data figures that guarantee outstanding results every time. So whether it is the latest report from the researchers or a custom requirement, our team is here to help you in the best possible way.

Contact Us:

sales@reportsintellect.com
Phone No: +1-706-996-2486
US Address: 225 Peachtree Street NE, Suite 400, Atlanta, GA 30303

Here is the original post:
Managed Servers Market Expected to Witness a Sustainable Growth over 2026 & Key Analysis by Capgemini, TCS, XLHost, Albatross Cloud, Sungard...

Read More..

Benefits of Picking HostingRaja Server – The Citizen

Chances are you chose shared hosting when you first hosted a website, since it is the most sensible and least expensive option. It is important, though, to switch to a dedicated server as your web presence grows. You can begin with shared hosting at first and then move to Dedicated Server India; otherwise, your site may crash under growing traffic. Basically, a dedicated server means that your site has its own infrastructure and you don't need to share it with any other organization. This gives the server power and stability.

What are the Benefits of picking HostingRaja Dedicated Server India?

Most domain owners already have this question at the top of their list: should they choose dedicated servers? Although organizations have many options for their sites and servers, shared hosting is the most popular and most affordable. Even so, it is important to move to dedicated servers as the organization grows. Here are a few reasons why Dedicated Server India should be chosen.

Resources are not shared:

As you have a private server, you don't have to share the machine with anyone; its resources are entirely yours. You don't have to balance your traffic against other tenants on the server. On a shared server, a heavy load on another site increases latency and eats into your bandwidth. When you pick a dedicated server, you don't need to think about any of that.

Improved security and performance:

Picking a dedicated server improves your server's uptime. Even when your pages draw a lot of traffic, a dedicated server is the smarter choice because it offers greater consistency and durability than most other hosting solutions. You are not sharing your server with any other site, which shields you from malicious domains and potential spammers when you select a dedicated host.

Flexibility in picking Dedicated Server India:

A dedicated server lets customers customize the machine, whatever the requirement for RAM, physical storage and so on. With a shared server, a customer is restricted to the applications, code, and operating systems already offered in the cloud. A dedicated server allows an organization to run an up-to-date server configuration that fits the client's requirements, and it gives greater consistency and flexibility in how the server can be changed. You can choose the platform and software to match your specifications.

No overhead for the upkeep of hardware:

A dedicated server provider handles the burden of delivering and maintaining the facilities, reducing costs for large enterprise servers and improving the return on investment.

Unique IP Address:

Each server has a unique IP address of its own. With shared hosting, you must share the IP address with all the other sites on the server, and there is a danger that your site's reputation could suffer if a neighbouring site is disreputable. No such issue arises with a dedicated server. If you run a large e-commerce business, it is important to have a dedicated server.

Who Needs Dedicated Server Hosting

HostingRaja is an incredibly effective option among dedicated server providers. If you opt for a dedicated server, the service provider dedicates a machine to handle your organization's IT workload. A dedicated server extends both protection and performance. Dedicated Server India is a good fit in the following situations:

Sites that need dedicated bandwidth for substantial traffic in order to run properly.

Organizations with complex sites that frequently need updates.

Organizations whose aim is to scale up their online presence.

Organizations with a large capital base.

Continued here:
Benefits of Picking HostingRaja Server - The Citizen

Read More..