Category Archives: Cloud Servers
Amazon, Microsoft and Google face UK probe over dominance in cloud computing – CNBC
The probe will focus on so-called "hyperscalers" like Amazon Web Services and Microsoft Azure, which let businesses access computing power and data storage from remote servers.
British media regulator Ofcom is investigating Amazon, Microsoft and Google's tight grip on the cloud computing industry.
In the coming weeks, the watchdog will launch a study to examine the position of firms offering public cloud infrastructure and whether they pose any barriers to competition.
Its probe, announced Thursday, will focus on so-called "hyperscalers" like Amazon Web Services, Microsoft Azure and Google Cloud, which let businesses access computing power and data storage from remote servers, rather than host it on their own private infrastructure.
Further action could be taken by the regulator if it finds the companies are harming competition. Selina Chadha, Ofcom's director of connectivity, said the regulator hadn't yet reached a view on whether the cloud giants are engaged in anticompetitive behavior. Ofcom said it will conclude its review and publish a final report, including any concerns and proposed recommendations, within 12 months.
Amazon, Microsoft and Google were not immediately available for comment when contacted by CNBC.
The review will form part of a broader digital strategy push by Ofcom, which regulates the broadcasting and telecommunications industries in the U.K.
It also plans to investigate other digital markets, including personal messaging and virtual assistants like Amazon's Alexa, over the next year. Ofcom said it is interested in how services including Meta's WhatsApp, Apple's Facetime and Zoom have impacted traditional calling and messaging, as well as the competitive landscape among digital assistants, connected TVs and smart speakers.
"The way we live, work, play and do business has been transformed by digital services," Ofcom's Chadha said in a statement Thursday. "But as the number of platforms, devices and networks that serve up content continues to grow, so do the technological and economic issues confronting regulators."
"That's why we're kick-starting a programme of work to scrutinise these digital markets, identify any competition concerns and make sure they're working well for people and businesses who rely on them," she added.
Ofcom has been selected as the enforcer of strict new rules policing harmful content on the internet. But the legislation, known as the Online Safety Bill, is unlikely to come into force anytime soon after Liz Truss replaced Boris Johnson as prime minister. With Truss' government grappling with a plethora of problems in the U.K., not least the cost-of-living crisis, it's expected that online safety regulation will move to the back of the queue of policy priorities for the government.
The move adds to efforts from other regulators to rein in large tech companies over the perceived stranglehold they have on various parts of the digital economy.
The Competition and Markets Authority has several active probes into Big Tech companies and wants additional powers to ensure a level playing field across digital markets. The European Commission, meanwhile, has fined Google billions of dollars over alleged antitrust offences, is investigating Apple and Amazon in separate cases, and has passed landmark digital laws that may reshape internet giants' business models.
Amazon holds a comfortable lead in the cloud infrastructure services market, with its Amazon Web Services division making billions of dollars in profits every year. In 2021, AWS raked in $62.2 billion of revenue and over $18.5 billion in operating income.
Microsoft's Azure is the runner-up, while Google is the third-largest player. Other firms, including IBM and China's Alibaba, also operate their own cloud arms.
Combined, Amazon, Microsoft and Google generate roughly 81% of revenues in the U.K.'s cloud infrastructure services market, according to Ofcom, which estimates the market to be worth £15 billion ($16.8 billion).
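As a quick back-of-the-envelope check on those figures (only the 81% share and the £15 billion market estimate quoted above are used as inputs), the combined hyperscaler revenue works out to roughly £12 billion:

```python
# Implied combined UK cloud revenue for the three hyperscalers,
# using Ofcom's figures quoted above.
market_size_gbp = 15e9    # Ofcom's UK market estimate, in pounds
combined_share = 0.81     # Amazon + Microsoft + Google combined

combined_revenue_bn = round(market_size_gbp * combined_share / 1e9, 2)
print(f"~£{combined_revenue_bn}bn")  # ~£12.15bn
```

That leaves roughly £2.85 billion of the market for all remaining providers combined.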
Microsoft recently announced a number of changes to its cloud contract terms, effectively making it easier for customers to use competing cloud platforms as well as Microsoft. The Redmond, Washington-based company had faced complaints from rivals in Europe that it was limiting choice in the market.
View post:
Amazon, Microsoft and Google face UK probe over dominance in cloud computing - CNBC
5 Benefits of Cloud Communications in Higher Education – EdTech Magazine: Focus on K-12
3. Cloud Communications Require Less IT Management
Transitioning communications offsite means no longer having members of the IT staff fully dedicated to managing phone communication and server rooms, freeing them up to focus on more pressing tasks at colleges and universities. And with higher education institutions reporting staffing shortages and struggles in hiring new employees, particularly in areas like cybersecurity, having IT experts with more time on their hands can only be a good thing.
On the infrastructure side, cloud communications require little more than a strong internet connection, and preferably a redundant connection in case of an outage, along with a quality switch to route communications correctly. As for the cloud providers themselves, most offer a 99.99 percent uptime promise for their service, so interruptions should be rare if not eliminated entirely.
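To put a "99.99 percent uptime" promise in concrete terms, a short illustrative calculation (not any specific provider's SLA) shows how little downtime that budget actually allows per year:

```python
# Downtime budget implied by an uptime percentage (illustrative only).
def downtime_minutes_per_year(uptime_percent: float) -> float:
    """Minutes of allowed downtime per year for a given uptime promise."""
    minutes_per_year = 365 * 24 * 60  # 525,600 minutes
    return minutes_per_year * (1 - uptime_percent / 100)

print(round(downtime_minutes_per_year(99.99), 1))  # 52.6 -- under an hour a year
print(round(downtime_minutes_per_year(99.9), 1))   # 525.6 -- nearly nine hours
```

In other words, "four nines" works out to under an hour of interruption per year, which is why redundancy on the customer's own internet connection is usually the bigger risk.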
Traditional phone systems involve many wires running from desk to desk and into a large phone server in a backroom somewhere. Cloud communications mean no more on-premises servers and, for those users who ask for nothing more than a headset to make their calls, no more physical phones. Even for people who still want a traditional on-desk phone, communications run through a switch, and connections are most often made to a computer via USB or Bluetooth.
That means there's not only less for IT departments to maintain, but also little upfront equipment to buy and fewer capital expenditures. As for the cloud communication service itself, most providers charge on a per-user basis, so unused desks no longer continue to rack up phone bills when they sit empty.
READ MORE: How to set up a cloud-based telephony solution in higher ed.
The flexibility that is a major selling point for cloud communication tools can also apply to future planning.
As collaboration tools evolve, there are sure to be new features that colleges and universities want to incorporate. With cloud communications, it will be as easy as updating a piece of software. That also goes for security updates as they are rolled out to keep cloud communications protected.
Cloud communications systems are also easily scalable, both up and down. So, if your communication needs expand, it's as simple as adding any number of new users, or removing them when they are no longer needed.
The future of cloud communications is already here, but the vast number of options can be overwhelming to sort through. To find out which platform might be best for your college or university, a CDW expert can conduct an assessment of your current infrastructure, break down the pros and cons of every option, and recommend a solution tailored to your needs.
This article is part of EdTech: Focus on Higher Education's UniversITy blog series.
Read the original here:
5 Benefits of Cloud Communications in Higher Education - EdTech Magazine: Focus on K-12
Tom Details What’s New in vSphere 8 (and ‘Why People Are Excited About It’) – Virtualization Review
vSphere 8 was one of the most-talked about subjects at VMware Explore this year. After reading over its documentation and watching a few sessions, I've had a chance to discover why people are excited about it, which I will discuss in this article.
Even though vCenter Server and ESXi hosts are typically the vSphere components that first come to mind, vSphere is actually made up of many other components and features such as VMware vSphere Storage DRS, VMware vSphere Fault Tolerance, vRealize Orchestrator and vSphere Lifecycle Manager.
Why vSphere 8?
As the industry continues to embrace technologies like the cloud, edge computing and Software-as-a-Service (SaaS) applications, VMware has realized that it needs to extend its marquee product (vSphere) to better align with these trends. In this regard, VMware has laid out three current needs that it sees in the industry: additional server capacity, specialized infrastructure silos and a CPU-centric security model.
Even though you may have your vCenter Server and ESXi hosts running on premises, they can still be accessed from vSphere's Cloud Console, which is available in vSphere 8. This allows you to access your vSphere environment(s), as well as other cloud services, regardless of whether you are in your datacenter. This is the same cloud console that is used by VMware Cloud on AWS, Google Cloud and Azure.
The VMware Cloud Portal connects on-premises vSphere resources to the Cloud Console via a gateway that resides on premises. Of course, the Cloud Portal can access multiple on-premises vSphere environments as well as public cloud resources.
A Look at the Cloud Console
The layout of the Cloud Console is intuitive and has a menu bar on the left side.
More details are shown in the center pane, and many of the individual components can be expanded for more specific information.
Some of the other vSphere components such as vRealize are surfaced in the Cloud Console.
Virtual Machines
When deploying VMs from the Cloud Console, you can decide which vCenter should host them.
For VM placement, vSphere now has Enhanced Memory Monitoring and Remediation (vMMR2), which factors in memory statistics, latency and miss rates when deciding where a VM should reside.
Infrastructure-as-a-Service (IaaS) is mandatory in datacenters both large and small. To support this, vSphere has a neat feature where it displays the YAML code that can be used to programmatically deploy a VM when you create one.
vSphere 8 and DPUs
I have long been interested in Data Processing Units (DPUs), and vSphere 8 has the vSphere Distributed Services Engine to support them.
DPUs reside on servers and take over functions such as networking and security from the server to free up the central processor to run applications. By having an integrated engine, you can more closely coordinate the functions on the server and the DPUs to manage the workloads on each of them.
To drive home the need for DPUs in one presentation, VMware stated that up to 30 percent of CPU capacity is being consumed by network and security services.
One of the neat features with regards to DPUs is that when an ESXi host detects a DPU on it, it will recognize it and give you the option to install ESXi on the DPU!
Quicker Updates
vSphere 8 supports a quick-update feature that greatly reduces downtime when updating a vCenter server. It accomplishes this by creating a new vCenter server and then bringing in the configuration and persistent data from an existing vCenter server. This has the added benefit of letting you roll back to the old vCenter server instance if you have issues with the upgrade.
Updating ESXi hosts is quicker and more streamlined by pre-staging the ESXi image and allowing multiple hosts to be remediated in parallel.
Conclusion
Overall, vSphere 8 is a natural progression for VMware. Obviously the most visible change is the Cloud Console, which allows administrators to access vSphere environments regardless of where they are located. Feature-wise, I like the fact that they are embracing DPUs and making them first-class citizens, as well as making it easier to deploy VMs using YAML code.
VMware has a hands-on lab with vSphere Distributed Services Engine and other vSphere 8 technologies for those who want to take a deeper look into it.
About the Author
Tom Fenton has a wealth of hands-on IT experience gained over the past 25 years in a variety of technologies, with the past 15 years focusing on virtualization and storage. He currently works as a Technical Marketing Manager for ControlUp. He previously worked at VMware as a Senior Course Developer, Solutions Engineer, and in the Competitive Marketing group. He has also worked as a Senior Validation Engineer with The Taneja Group, where he headed the Validation Service Lab and was instrumental in starting up its vSphere Virtual Volumes practice. He's on Twitter @vDoppler.
See the original post here:
Tom Details What's New in vSphere 8 (and 'Why People Are Excited About It') - Virtualization Review
Malicious OAuth applications used to control Exchange tenants in sweepstakes scam – Cybersecurity Dive
Dive Brief:
A threat actor has deployed malicious OAuth applications on compromised cloud tenants in order to take control of Exchange servers, Microsoft said in research released Thursday. The threat actor later sent spam email as part of a deceptive sweepstakes campaign.
The threat actor launched credential-stuffing attacks against high-risk accounts that didn't employ multifactor authentication, Microsoft said. The actor was able to gain initial access through unsecured administrator accounts.
After gaining access to the cloud tenant, the threat actor created malicious OAuth applications, which added an inbound connector to an email server. The inbound connector, a set of instructions about the flow of email to organizations using Microsoft 365 or Office 365, allowed the actor to create emails that appeared to originate from the target's domain.
Microsoft is monitoring an increase in OAuth application abuse, particularly consent phishing. During a consent phishing attack, users are tricked into granting permission to malicious OAuth apps in order to access legitimate cloud services, including mail servers, file storage and management APIs.
Microsoft had previously warned about the rise in consent phishing, which coincided with the switch to remote work at the start of the COVID-19 pandemic.
A number of threat actors, including those working on behalf of nation-states, have used OAuth applications for a variety of malicious aims, including command and control (C2) communication, phishing and backdoors.
In order for the attack to work, the threat actor had to compromise cloud tenant users with enough permissions to let the attacker create applications in the cloud environment and to grant admin consent, Microsoft said. The attacker launched credential-stuffing attacks, attempting to reach users with global admin level of access.
Microsoft said 86% of the compromised tenants had at least one admin with a real-time high risk score, meaning Azure AD Identity Protection flagged them as most likely compromised. None of the compromised admins had MFA enabled.
Microsoft researchers said the threat actor mainly used the cloud-based email platforms Amazon Simple Email Service and Mailchimp to achieve scale and make sure emails were successfully delivered.
While the spam attacks ultimately targeted consumer email accounts, Microsoft said, the threat actor targeted enterprise tenants to use as infrastructure for the campaign.
"This attack thus exposes security weaknesses that could be used by other threat actors in attacks that could directly impact affected enterprises," according to the blog.
Microsoft officials did not return a request for additional comment.
The rest is here:
Malicious OAuth applications used to control Exchange tenants in sweepstakes scam - Cybersecurity Dive
Over Half of UK IT Industry Pros Trust Public Cloud Providers Less Than Two Years Ago, According to New Research from Leaseweb – Business Wire
AMSTERDAM--(BUSINESS WIRE)--Leaseweb Global, a leading hosting and cloud services company, today published the results of research revealing that over half (55%) of UK IT professionals currently trust public cloud services less than they did two years ago, having run into challenges around usage costs, migration and customer service.
The research, which explores 500 UK-based IT professionals'* experiences with public cloud providers over the last two years, raises questions about whether hyperscale is the best way forward or viable as a long-term option. Transparency, customer service and the ease of migrating workloads are flagged as potential concerns, despite most respondents saying they had costs under control. Overall, the results indicate a significant trust issue when it comes to public cloud providers.
For example, the majority (57%) of respondents had found it challenging to migrate workloads out of a public cloud environment, while just under half (49%) said they had encountered difficulties in understanding their cloud usage costs. Despite this, nearly three quarters (72%) agree they have effectively controlled public cloud usage costs, with 46% stating they somewhat agree. Almost half (49%) had struggled to get hold of a public cloud provider's customer service.
In addition, while cloud is now a key component of many IT infrastructure strategies, "cloud only" and "cloud first" are not dominant, nor are they considered a panacea for every business need. While there was an increase in the adoption of cloud infrastructure during the pandemic, the study also showed a decrease in support for "cloud first" strategies during 2022.
For instance, in the January 2019 to December 2021 ("pre COVID pandemic") period, 36% of organisations described their approach to IT infrastructure as "cloud first", with only 19% stating their organisation was officially committed to a cloud-only approach. From January 2022 onwards, the ("post COVID pandemic") period, "cloud first" commitments had decreased to 31%, with "cloud only" rising to 25% of respondents.
When asked about the optimum IT infrastructure for their organisation, private cloud only (23%) and a mixture of on-premises and public cloud (20%) were the most popular selections. These were followed by public cloud only (17%) and a mixture of on-premises and private cloud (14%), with on-premises only the least popular selection at 7%.
The move away from on-premises legacy infrastructure is clear, with two-thirds (66%) of respondents agreeing that the industry will see the end of on-premises infrastructure over the next two years. The research results indicate that while on-premises is not an important part of IT strategy, it still exists within many organisations' environments.
The positive news is this does not appear to be stifling innovation: only 16% of respondents said that legacy infrastructure was either standing in the way of further cloud adoption or limiting their organisation's ability to make business decisions. Instead, the focus is on deploying applications in the right place, with a key takeaway from the study being the end of on-premises infrastructure may be approaching, but not quite here.
"The results of this study strengthen the case for hybrid combinations, thanks to the flexibility and choice they can deliver to both large and small companies," commented Terry Storrar, Managing Director UK at Leaseweb. "And much as there has been a shift towards cloud adoption, rather than highlighting the pandemic as a key driver of a shift to the cloud, it appears that businesses were investing in cloud beforehand and that investment levels have remained relatively static," continued Storrar.
"Although respondents acknowledge that the desire and need to look after on-premises infrastructure is dying, the results also indicate that businesses are still using it as an ongoing component of their IT infrastructure when adopting hybrid cloud. The key takeaway from this research is IT teams are looking for flexibility: there's no 'one size fits all' approach. Organisations are now more likely to qualify cloud out during the assessment stage, rather than the other way around, but the main focus is on choosing the right infrastructure locations for specific use cases," concluded Storrar.
To read more about the results of the study, click here to visit the Leaseweb website.
*Survey Methodology:
Study conducted in May 2022 across 500 UK based IT managers, Cloud Service Managers, Infrastructure Managers, Heads of IT, Heads of Cloud Services, Heads of IT Infrastructure, IT Directors, CIOs & CTOs working in UK based companies employing 100-1000 people.
About Leaseweb
Leaseweb is a leading Infrastructure as a Service (IaaS) provider serving a worldwide portfolio of 20,000 customers ranging from SMBs to Enterprises. Services include Public Cloud, Private Cloud, Dedicated Servers, Colocation, Content Delivery Network, and Cyber Security Services supported by exceptional customer service and technical support. With more than 80,000 servers, Leaseweb has provided infrastructure for mission-critical websites, Internet applications, email servers, security, and storage services since 1997. The company operates 25 data centres in locations across Europe, Asia, Australia, and North America, all of which are backed by a superior worldwide network with a total capacity of more than 10 Tbps.
Leaseweb offers services through its various subsidiaries, which are Leaseweb Netherlands B.V., Leaseweb USA, Inc., Leaseweb Asia Pacific PTE. LTD, Leaseweb CDN B.V., Leaseweb Deutschland GmbH, Leaseweb Australia Ltd., Leaseweb UK Ltd, Leaseweb Japan KK, Leaseweb Hong Kong LTD, and iWeb Technologies Inc.
For more information, visit: http://www.leaseweb.com.
###
See original here:
Over Half of UK IT Industry Pros Trust Public Cloud Providers Less Than Two Years Ago, According to New Research from Leaseweb - Business Wire
Defining the Modern Bare Metal Cloud – thenewstack.io
This is the first in a series of contributed articles leading up to KubeCon + CloudNativeCon in October.
Often referred to as bare metal as a service (BMaaS), bare metal cloud can be defined as a single-tenant environment with the full self-service versatility of the cloud. Its surge in popularity has motivated providers to push bare metal cloud well beyond its original capabilities to cater to the needs of modern workloads and cloud native organizations. The market is expected to grow at a compound annual growth rate of 38.5% through 2026.
Considering the expanded set of use cases it now addresses, bare metal cloud needs a new definition.
To define the bare metal cloud of today, we need to look at its key features, including:
The "bare metal" part involves the absence of a hypervisor. It gives users access to the server's physical components and lets them optimize their CPU, RAM and storage resources. Single tenancy eliminates the performance, security and resource-contention issues commonly attributed to shared virtualized environments.
The "cloud" in bare metal cloud mostly refers to hourly or monthly billing options and API-driven provisioning. This platform supports automation both during and after deployment, letting developers use APIs or CLIs to set up, scale and manage their infrastructure programmatically. We're still just scratching the surface, though.
What we have defined so far as bare metal cloud usually comes with the following drawbacks:
Modern bare metal cloud providers offer preconfigured, workload-optimized servers that can be deployed in minutes from anywhere across the globe. Some even offer workload-specific hardware accelerators such as persistent memory preconfigured on their systems. This gives organizations turnkey access to powerful technologies they can use to boost the performance and reliability of their workloads while reducing their total cost of ownership.
To further abstract away the infrastructure setup overhead, providers have embraced open source software solutions such as Canonical's MAAS, letting you deploy instances with a preinstalled OS. Usually, you can choose between Linux distros such as Ubuntu, Debian or CentOS, Windows Server, or hypervisor solutions such as VMware ESXi. To further adapt the solution to your exact requirements, some providers even give you the option to install a custom OS image.
While you cannot handpick individual stack components, choosing from dozens of servers powered by the latest hardware, software and network technologies surely helps teams optimize their IT.
CNCF's Annual Survey 2021 showed us that 90% of Kubernetes users use cloud-managed services. Since Kubernetes goes hand in hand with cloud native, its mainstream status has propelled bare metal cloud into interesting integrations.
You can now find bare metal cloud solutions with preinstalled open source K8s management platforms such as SUSE Rancher. Leveraging these, organizations can simplify the deployment and management of complex container environments at scale and gain easy access to enterprise-level Kubernetes hosted on bare metal.
Providers also invest more time and effort in delivering regularly updated GitHub pages offering a Kubernetes controller or a Docker Machine driver for their solution. The repos usually include Infrastructure-as-Code (IaC) modules that simplify infrastructure provisioning and management via popular tools such as Terraform, Ansible and Pulumi.
Such support lets DevOps teams seamlessly integrate bare metal servers into their workflows and provision resources directly from their preferred environments.
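As a sketch of what provisioning "directly from their preferred environments" can look like, the snippet below builds and validates the JSON body a bare metal provisioning API might accept. Every field and value here is hypothetical; real providers (and their Terraform, Ansible or Pulumi modules) define their own schemas.

```python
import json

def provision_request(hostname: str, os_image: str, location: str,
                      billing: str = "hourly") -> str:
    """Serialize a (hypothetical) server-provisioning request body."""
    if billing not in ("hourly", "monthly"):
        raise ValueError("billing must be 'hourly' or 'monthly'")
    return json.dumps({
        "hostname": hostname,
        "os": os_image,        # e.g. a preinstalled Ubuntu image
        "location": location,  # provider-specific region code
        "billing": billing,    # hourly or monthly, as described above
    })

# A CI pipeline or IaC wrapper would POST this body to the provider's API.
body = provision_request("k8s-node-1", "ubuntu-22.04", "ams")
print(body)
```

In practice, the IaC modules mentioned above wrap exactly this kind of request, so teams declare the server in Terraform or Pulumi code rather than calling the API by hand.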
Bare metal cloud has become a go-to solution for distributed workloads and organizations looking to accelerate their cloud adoption. Its dedicated resources and high-performance hardware make it ideal for sensitive, demanding or legacy workloads that are often incompatible with virtualized environments. Data centers that host bare metal cloud often provide direct access to cloud on-ramps or software-defined networks that let organizations interconnect their bare-metal-hosted apps with their favorite hyperscale cloud-service providers.
This enables anything from cloud resource bursting to access to petabytes of disaggregated storage, allowing teams to easily distribute their workloads across different ecosystems and optimize IT costs.
Bare metal cloud is often contrasted with the highly flexible public cloud. If we compare the two today, it's clear that the lines that set them apart get quite blurry:
If you choose the modern bare metal cloud, you get most of the public cloud features with added control, freedom, transparency and direct access to hardware.
In minutes, bare metal cloud lets you deploy anything from test environments for virtualized or containerized apps to enterprise, multinode K8s clusters managed by SUSE Rancher. The platform supports public and custom IPs and even lets you install a custom OS or select one of the available Linux, Windows or VMware systems.
Here's who can benefit from the features mentioned above:
All things considered, we can define bare metal cloud as a constantly evolving, versatile and powerful IT infrastructure solution supporting fast-paced, cloud native organizations. It gives you direct access not just to its underlying resources, but to the latest software and hardware technologies that simplify and optimize modern workloads and workflows.
See the original post here:
Defining the Modern Bare Metal Cloud - thenewstack.io
The Haziness in Microsoft’s Cloud Numbers The Information – The Information
Here's a quick question for enterprise software acolytes out there: which tech giant is bigger in cloud, Microsoft or Amazon?
It depends. Microsoft says total revenue for Microsoft Cloud was $91 billion in the year to June, which is significantly more than AWS' $72 billion for the same 12-month period. But the Microsoft Cloud revenue number includes contributions from many different parts of Microsoft, everything from its Azure cloud-services infrastructure unit to applications such as Office 365 and most of LinkedIn. Amazon Web Services, in contrast, is primarily infrastructure, selling data storage and compute power, with little in the way of applications. Microsoft doesn't disclose Azure's revenue. According to Gartner, AWS has 39% of the global cloud-infrastructure market to Microsoft's 21%.
This reporting haziness is par for the course for the cloud industry, where defining revenue is more of an art than a science. In their reported cloud revenue numbers, both Microsoft and Google include the infrastructure side of the cloud with applications that run on those services, such as word processing, human-resources tools, and email. Oracle used to be in that camp as well but last week it began breaking out cloud infrastructure from applications, giving investors a better window into its performance.
Go here to read the rest:
The Haziness in Microsoft's Cloud Numbers The Information - The Information
What is VDS, and when do you need it – Startup.info
For IT experts and gurus, there is an increased demand for cloud capacity. The dedicated server hosting market is booming with the increase of people looking for better security for their computers. Moreover, there are plenty of hosting providers which help provide solutions for your business.
Deltahost remains an ideal company that offers a plethora of server services such as VDS. This guide will talk about what VDS is and its features, factors to consider when renting a server, as well as the services offered by Deltahost.
vSphere Distributed Switch is a network management tool similar to the vSphere Standard Switch (VSS). It is a powerful network construct that separates the management and data planes. Control of the vSphere Distributed Switch is centralized at the vCenter Server, while data traffic stays local to the ESXi hosts.
With the framework of the VDS, it comes with various features which include:
When you rent VDS servers at https://deltahost.ua/vps.html, for example, it's possible to easily simplify VM networking across several networks. This would include port group management, security and VLANs, and other computing settings.
The VDS features offer several network health check capabilities, like checking the physical structure. This feature is integral to smooth running when you rent VDS servers.
When you rent a VDS server, it's possible to have access to ERSPAN, SNMPv3, NetFlow, and other network configurations. It has some great templates for restoring virtual machine network systems.
These VDS features include network I/O management, SR-IOV, and BPDU tools, which are crucial for those who want a well-designed network framework.
A VDS permits the usage of private VLANs. This is great because they offer more security options. When you use these Private VLANs, they are great for the segmentation of traffic.
When you rent VDS servers, it's possible to shape traffic policies on different port groups. This helps with peak bandwidth, burst size, and average bandwidth.
Here are some important factors to consider when getting a server:
Before you rent a VDS server, you first need to check the type of hardware and how it affects network reliability. It's very important to check the SSD and dual power features.
When you choose a managed service, having an accessible support center is important. You get much more value from talking with a qualified support agent, and this will have an effect on your business.
Here are some services provided by the Deltahost firm:
When you use Deltahost, it's easy to administer your various platforms. They can help you with mail accounts and various in-house-built hosting control solutions.
Deltahost offers quality, balanced hosting websites where you can rent VDS servers. When you have server overload issues, they can help you solve them effectively.
You will get affordable PHP script installation software. You will get an excellent PHP application which is great for setting up blogs and photo albums.
VDS is an essential tool that is great for tech experts. It comes with various features and you need to consider several factors before renting a server.
Read the original:
What is VDS, and when do you need it - Startup.info
CI/CD servers readily breached by abusing SCM webhooks, researchers find – The Daily Swig
Webhook, line, and sinker
Cloud-based source code management (SCM) platforms support integration with self-hosted CI/CD solutions through webhooks, which is great for DevOps automation.
However, the benefits can come with security trade-offs.
According to new findings from researchers at Cider, malicious actors can abuse webhooks to access internal resources, run remote code execution (RCE), and possibly obtain reverse shell access.
Software-as-a-service (SaaS) SCM systems provide an IP range for their webhooks. Organizations must open their networks to these IP ranges to enable integration between the SCM and their self-hosted CI/CD systems.
"We knew the combination of a SaaS source control management system and a self-managed CI with the webhook service IP range allowed towards the CI is a common architecture, and we wanted to check our possibilities there," Omer Gil, head of research at Cider, told The Daily Swig.
Attackers can use webhooks to get past an organization's firewalls. But SCM webhooks have strict limits, and there is very little room to modify webhook requests.
However, the researchers discovered that with the right changes, they could get beyond the limited endpoints available to SCM webhooks.
On the CI/CD side, the researchers ran their experiments on Jenkins, an open-source DevOps server.
"We chose Jenkins since it's self-hosted and commonly used, but [our findings] can be applied to any system that is accessible from the SCM, like artifact registries for example," Gil said.
On the SCM side, they tested both GitHub and GitLab. While webhooks have been designed to trigger specific CI endpoints, they could modify requests to direct them to other endpoints that return user data or the console output of pipelines. Nevertheless, limits remain.
"Webhooks are sent as a POST request, which limits the options against the target service, since endpoints used to retrieve data usually only accept the GET parameter," Gil said. "While it's not possible to fire a GET request through GitHub, in GitLab it's a different case, since if the POST request is responded with a redirection, the GitLab webhook service will follow it with sending the GET request."
Using GitLab, the researchers were able to use webhooks to combine POST and GET requests to access internal resources. Interestingly, some Jenkins resources are accessible without authentication.
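The POST-to-GET redirect behavior the researchers leveraged can be reproduced with the Python standard library alone. In this toy sketch (a local server stands in for an internal CI instance, and urllib's redirect handling stands in for GitLab's webhook sender), a POST is answered with a 302 and automatically re-issued as a GET to a different endpoint:

```python
import http.server
import threading
import urllib.request

seen = []  # HTTP methods observed by the "internal" server

class Handler(http.server.BaseHTTPRequestHandler):
    def do_POST(self):
        # The webhook-style endpoint answers the POST with a redirect...
        seen.append("POST")
        self.send_response(302)
        self.send_header("Location", "/internal-data")
        self.end_headers()

    def do_GET(self):
        # ...and the client follows it with a GET to another endpoint.
        seen.append("GET")
        self.send_response(200)
        self.send_header("Content-Length", "2")
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):  # silence per-request logging
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_port}/hook"

# urllib's redirect handler, like the behavior described for GitLab,
# drops the body and switches a 302-redirected POST to a GET.
resp = urllib.request.urlopen(url, data=b"payload")
body = resp.read()
server.shutdown()
print(seen, body)   # ['POST', 'GET'] b'ok'
```

The endpoint names here are made up for illustration; the point is that a redirect lets a write-only POST channel reach read-style GET endpoints.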
"By default, some resources can be accessed anonymously. Having said that, it's not very common for an organization to leave it as is but some do allow anonymous access," Gil said.
Where authentication was required, the researchers found that they could direct webhooks to the login endpoint and conduct brute-force password attacks against the CI/CD platform. Once authenticated, they obtained a session cookie that could be used to access other resources.
If the Jenkins instance had a vulnerable plugin, the webhook mechanism could exploit it. In the proof-of-concept video above, the researchers show that they could force a vulnerable Jenkins server to download a malicious JAR file, run it on the server, and launch a reverse shell endpoint for the attacker.
This finding is a reminder of the risks created when CI/CD servers are partially open to the internet.
"A hermetic solution is to deny inbound traffic from the SCM webhook service, but it usually comes with engineering costs," Gil said. "Some countermeasures can be taken, like setting a secure authentication mechanism in the CI, patching, and making sure all actions in the server are saved in the logs."
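One concrete form of the authentication measure Gil mentions is verifying each webhook's HMAC signature, so the CI only processes requests signed with a shared secret. A minimal sketch following GitHub's documented X-Hub-Signature-256 scheme (the secret value here is hypothetical):

```python
import hashlib
import hmac

SECRET = b"shared-webhook-secret"  # hypothetical; configured in both SCM and CI

def sign(body: bytes) -> str:
    """Compute the value GitHub sends in the X-Hub-Signature-256 header."""
    return "sha256=" + hmac.new(SECRET, body, hashlib.sha256).hexdigest()

def verify(body: bytes, header_value: str) -> bool:
    """Constant-time comparison, so attackers can't probe byte by byte."""
    return hmac.compare_digest(sign(body), header_value)

payload = b'{"ref": "refs/heads/main"}'
good = sign(payload)
print(verify(payload, good))             # True: signature matches the body
print(verify(b'{"ref": "evil"}', good))  # False: body was tampered with
```

A request that fails this check is rejected before it can reach any internal endpoint, which blunts the redirect tricks described above.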
Continue reading here:
CI/CD servers readily breached by abusing SCM webhooks, researchers find - The Daily Swig
All You Need to Know About Virtual Machines – Spiceworks News and Insights
A virtual machine (VM) is defined as a computer system emulation, where VM software replaces physical computing infrastructure/hardware with software to provide an environment for deploying applications and performing other app-related tasks. This article explains the meaning and functionality of virtual machines, along with a list of the best VM software you can use.
The term virtual machine (VM) refers to a computer that exists only in digital form. The actual computer is often referred to as the host in these situations, while other operating system(s) running on it are referred to as the guests. Using the hardware resources of the host, virtual machines let users install more than one operating system (OS) on the same computer.
Virtual machines are also used to develop and publish apps to the cloud, run software that is not compatible with the host operating system, and back up existing operating systems. Developers may also use them to test their products quickly and easily in different environments. VM technology can be used both on-premises and within the cloud. For example, public cloud services often use virtual machines to give multiple users access to low-cost virtual application resources.
See More: What Is Jenkins? Working, Uses, Pipelines, and Features
Virtualization allows for creating a software-based computer with dedicated amounts of memory, storage, and CPU from the host computer. This process is managed by hypervisor software. As needed, the hypervisor moves resources from the host to the guest. It also schedules operations in VMs to avoid conflicts and interference when using resources.
A virtual machine (VM) runs a separate operating system in its own isolated computing environment, typically in a window like any other program. Because it is isolated from the rest of the system, the virtual machine cannot make unapproved modifications to the host computer, which prevents it from interfering with the host's primary operating system.
See More: What is Root-Cause Analysis? Working, Templates, and Examples
Organizations, IT professionals, developers, and home users looking to solve problems that arise from remote operations stand to benefit from what virtual machines offer. Virtual machines give remote users the same applications, settings, and user interfaces they would find on a physical computer. Other benefits include:
See More: DevOps vs. Agile Methodology: Key Differences and Similarities
Virtual machines come in two types: system VMs and process VMs.
These kinds of VMs are completely virtualized to replace a real machine. They rely on a hypervisor such as VMware ESXi, which can run on an operating system or on bare hardware.
The hardware resources of the host can be shared and managed by more than one virtual machine. This makes it possible to create more than one environment on the host system. Even though these environments are on the same physical host, they are kept separate. This lets several single-tasking operating systems share resources concurrently.
Different VMs on a single host can share memory by applying memory overcommitment techniques. Memory pages with identical content can be shared among multiple virtual machines on the same host, which is especially useful for read-only pages.
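Content-based page sharing (the mechanism behind features such as Linux's KSM) boils down to hashing page contents and keeping a single copy of duplicates. A toy sketch with 4 KB pages; real hypervisors also verify candidate pages byte for byte and copy-on-write when a guest writes to a shared page:

```python
import hashlib

PAGE_SIZE = 4096

def dedup(pages):
    """Back each guest page with one shared copy keyed by its content hash."""
    shared = {}    # content hash -> the single retained copy
    mapping = []   # per-guest-page reference into the shared store
    for page in pages:
        digest = hashlib.sha256(page).hexdigest()
        shared.setdefault(digest, page)
        mapping.append(digest)
    return shared, mapping

zero_page = bytes(PAGE_SIZE)        # guests tend to hold many zero pages
code_page = b"\x90" * PAGE_SIZE     # identical read-only code, e.g. a shared library
pages = [zero_page, code_page, zero_page, zero_page, code_page]
shared, mapping = dedup(pages)
print(len(mapping), "guest pages backed by", len(shared), "physical pages")
```

Five guest pages collapse to two physical copies here, which is why the technique pays off most for read-only pages shared across many VMs.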
The key advantages of system VMs are:
Disadvantages of system virtual machines are:
These virtual machines are sometimes called application virtual machines or managed runtime environments (MREs). They run as standard applications inside the host's operating system and support a single process: the VM is launched when the process starts and destroyed when it exits. It offers the process a platform-independent programming environment, allowing it to execute the same way on any platform.
Process virtual machines are implemented using interpreters, and they provide high-level abstractions. They are often associated with the Java programming language, which uses the Java virtual machine to execute programs. Two further examples of process VMs are the Parrot virtual machine and the .NET Framework, which runs on the Common Language Runtime VM. They also operate as an abstraction layer for whichever computer language is being used.
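CPython is itself a process VM of this kind: source code is compiled to platform-independent bytecode that the interpreter executes, and the standard dis module lets you inspect it. The exact opcode names vary by interpreter version, which the check below allows for:

```python
import dis

def add(a, b):
    return a + b

# The function's code object holds portable bytecode, not machine code.
dis.dis(add)

# The '+' compiles to a BINARY_* instruction (BINARY_ADD on older
# interpreters, BINARY_OP on Python 3.11 and later).
has_binary_op = any(ins.opname.startswith("BINARY")
                    for ins in dis.get_instructions(add))
print(has_binary_op)  # True
```

The same bytecode runs unchanged on any platform with a compatible interpreter, which is exactly the platform-independence property described above.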
A process virtual machine may, in some circumstances, act as an abstraction layer between its users and the underlying communication mechanisms of a computer cluster. Instead of a single process, such a VM comprises one process for each physical computer in the cluster.
These special-case process VMs let programmers concentrate on the algorithm instead of the communication process provided by the interconnect and the operating system.
Such VMs are built on an existing language rather than defining their own; their systems provide bindings for several programming languages, such as Fortran and C. Unlike other process VMs, they can access all OS services and are not limited by a system model, so they cannot be categorized strictly as virtual machines.
See More: Top 10 DevOps Automation Tools in 2021
A superior VM application facilitates running many operating systems on one computer. Users should consider which features they require when choosing the VM software that suits them best. The following is a list of the top 10 virtual machine software options:
VMware Workstation Player is recognized as a virtualization solution that supports a variety of operating systems on a single machine without requiring a reboot. It allows for seamless data sharing between hosts and guests and is designed for IT professionals. The following are features of VMware Workstation Player:
Parallels Desktop provides hardware virtualization so Windows can run on a Mac without rebooting, and the vendor bills it as the most powerful, fastest, and easiest way to do so. The following are features of Parallels Desktop:
Like several other options on this list, this is an open-source hypervisor. It works on x86 computers, runs on Linux, Windows, and other hosts, and is suitable for home or enterprise use. The following are features of VirtualBox:
Oracle VM VirtualBox is an open-source x86 and AMD64 virtualization product for home and enterprise use. The following are features of Oracle VM VirtualBox:
Citrix Hypervisor simplifies operational administration to enable users to conduct intense tasks in a virtualized environment. It is best for Windows 10. The following are features of Citrix Hypervisor:
See More: DevOps Roadmap: 7-Step Complete Guide
It is an open-source platform that offers centralized management and enables its users to create new VMs. Additionally, one may utilize the method to replicate existing ones and see how everything works together. The following are features of Red Hat Virtualization.
Hyper-V is a hypervisor that enables the creation of virtual machines on x86-64 systems. Individual virtual machines can be connected to more than one network through configuration. The following are features of Hyper-V:
Kernel-based Virtual Machine (KVM) enables full virtualization for Linux. It was designed to operate on x86 hardware with virtualization extensions. KVM has two core components: the main virtualization infrastructure and a processor-specific module. The following are features of the Kernel-based Virtual Machine:
Proxmox Virtual Environment integrates networking, KVM hypervisor, and Linux (LXC) container capabilities on a single platform. The following are features of Proxmox Virtual Environment:
QEMU is a popular open-source machine emulator and virtualizer, written in C. It allows building virtual machines for many architectures and operating systems at no cost. The following are features of QEMU:
See More: What Is Serverless? Definition, Architecture, Examples, and Applications
According to a 2022 report by Market Data Forecast, the global VM market was worth $3.5 billion in 2020. This is poised to grow further as enterprises rely more on software-based technologies (like the cloud) and reduce their hardware footprint. Indeed, virtual machines can go a long way in helping to optimize IT costs and also provide a safe environment for application security testing and cybersecurity checks.
Did this article give you the information you were looking for about virtual machines? Tell us on Facebook, Twitter, and LinkedIn. We'd love to hear from you!
More here:
All You Need to Know About Virtual Machines - Spiceworks News and Insights