
Malicious OAuth applications used to control Exchange tenants in sweepstakes scam – Cybersecurity Dive

Dive Brief:

A threat actor has deployed malicious OAuth applications on compromised cloud tenants in order to take control of Exchange servers, Microsoft said in research released Thursday. The threat actor later sent spam email as part of a deceptive sweepstakes campaign.

The threat actor launched credential-stuffing attacks against high-risk accounts that didn't employ multifactor authentication, Microsoft said. The actor was able to gain initial access through unsecured administrator accounts.

After gaining access to the cloud tenant, the threat actor created malicious OAuth applications, which added an inbound connector to an email server. The inbound connector, a set of instructions governing the flow of email to organizations using Microsoft 365 or Office 365, allowed the actor to create emails that appeared to originate from the target's domain.

Microsoft is monitoring an increase in OAuth application abuse, particularly consent phishing. In a consent phishing attack, users are tricked into granting permissions to malicious OAuth apps, which then gain access to legitimate cloud services, including mail servers, file storage and management APIs.

Microsoft had previously warned about the rise in consent phishing, which coincided with the switch to remote work at the start of the COVID-19 pandemic.

A number of threat actors, including those working on behalf of nation-states, have used OAuth applications for a variety of malicious aims, including command and control (C2) communication, phishing and backdoors.
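Defenders can look for this class of abuse by reviewing which applications have been granted consent in their tenant. Below is a minimal, illustrative sketch (not taken from Microsoft's research) that lists delegated consent grants via the Microsoft Graph oauth2PermissionGrants endpoint; it assumes you already hold an access token with a suitable read permission such as Directory.Read.All, and the token value is a placeholder.

```python
# Illustrative sketch: enumerate delegated OAuth consent grants in a tenant so
# unexpected apps or permissions can be spotted. Assumes an access token with
# Directory.Read.All obtained separately (e.g. via MSAL); the token below is a
# placeholder, not a real credential.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
ACCESS_TOKEN = "<token obtained via MSAL or another OAuth client>"

resp = requests.get(
    f"{GRAPH}/oauth2PermissionGrants",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    timeout=30,
)
resp.raise_for_status()

for grant in resp.json().get("value", []):
    # clientId is the service principal that received consent;
    # scope lists the delegated permissions it holds (e.g. "Mail.Read Mail.Send").
    print(grant.get("clientId"), grant.get("consentType"), grant.get("scope"))
```

Unrecognized service principals holding mail-related scopes would be a natural starting point for the kind of investigation described in the research.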

For the attack to work, the threat actor had to compromise cloud tenant users with enough permissions to let the attacker create applications in the cloud environment and grant admin consent, Microsoft said. The attacker launched credential-stuffing attacks, attempting to reach users with global admin-level access.

Microsoft said 86% of the compromised tenants had at least one admin with a real-time high risk score, meaning Azure AD Identity Protection flagged them as most likely compromised. None of the compromised admins had MFA enabled.

Microsoft researchers said the threat actor mainly used cloud-based email platforms, such as Amazon Simple Email Service and Mailchimp, to give the campaign scale and ensure emails were successfully delivered.

While the spam attacks ultimately targeted consumer email accounts, Microsoft said, the threat actor targeted enterprise tenants to use as infrastructure for the campaign.

"This attack thus exposes security weaknesses that could be used by other threat actors in attacks that could directly impact affected enterprises," according to the blog.

Microsoft officials did not return a request for additional comment.

The rest is here:
Malicious OAuth applications used to control Exchange tenants in sweepstakes scam - Cybersecurity Dive


Tom Details What’s New in vSphere 8 (and ‘Why People Are Excited About It’) – Virtualization Review

News

vSphere 8 was one of the most talked-about subjects at VMware Explore this year. After reading over its documentation and watching a few sessions, I've had a chance to discover why people are excited about it, which I will discuss in this article.

Even though vCenter Server and ESXi hosts are typically the vSphere components that first come to mind, vSphere is actually made up of many other components and features such as VMware vSphere Storage DRS, VMware vSphere Fault Tolerance, vRealize Orchestrator and vSphere Lifecycle Manager.

Why vSphere 8?

As the industry continues to embrace technologies like the cloud, edge computing and Software-as-a-Service (SaaS) applications, VMware has realized that it needs to extend its marquee product (vSphere) to better align with these trends. In this regard, VMware has laid out three current needs it sees in the industry: additional server capacity, specialized infrastructure silos and a CPU-centric security model.

Even though you may have your vCenter Server and ESXi hosts running on premises, they can still be accessed from vSphere's Cloud Console, which is available in vSphere 8. This allows you to access your vSphere environment(s), as well as other cloud services, whether or not you are in your datacenter. This is the same cloud console that is used by VMware Cloud on AWS, Google Cloud and Azure.

The VMware Cloud Portal connects on-premises vSphere resources to the Cloud Console via a gateway that resides on premises. Of course, the Cloud Portal can access multiple on-premises vSphere environments as well as public cloud resources.

A Look at the Cloud Console

The layout of the Cloud Console is intuitive and has a menu bar on the left side.

More details are shown in the center pane, and many of the individual components can be expanded for more specific information.

Some of the other vSphere components such as vRealize are surfaced in the Cloud Console.

Virtual Machines

When deploying VMs from the Cloud Console, you can decide which vCenter should host them.

For VM placement, vSphere now has Enhanced Memory Monitoring and Remediation (vMMR2), which factors in memory statistics, latency and miss rates when deciding where a VM should reside.

Infrastructure-as-a-Service (IaaS) is mandatory in datacenters both large and small. To support this, vSphere has a neat feature where it displays the YAML code that can be used to programmatically deploy a VM when you create one.
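The article highlights the declarative, YAML-driven side of this; as a neighboring illustration of programmatic deployment, here is a rough sketch against the vSphere Automation REST API. The hostname and credentials are placeholders, and the exact paths, headers and any VM-creation payload fields should be treated as assumptions to verify against VMware's API reference for your vCenter version.

```python
# Rough sketch: authenticate to vCenter's Automation REST API and list VMs.
# Hostname, credentials and certificate handling are placeholders; creating a
# VM would be a POST to the /api/vcenter/vm endpoint family with a placement
# spec per the documentation (not shown here to avoid guessing field names).
import requests

VCENTER = "vcenter.example.com"  # placeholder
USER, PASSWORD = "administrator@vsphere.local", "changeme"  # placeholders

# Create an API session; the returned token is passed in the
# vmware-api-session-id header on subsequent calls.
session = requests.post(
    f"https://{VCENTER}/api/session",
    auth=(USER, PASSWORD),
    verify=False,  # lab-only shortcut; use proper CA verification in practice
)
session.raise_for_status()
token = session.json()

vms = requests.get(
    f"https://{VCENTER}/api/vcenter/vm",
    headers={"vmware-api-session-id": token},
    verify=False,
)
vms.raise_for_status()
for vm in vms.json():
    print(vm["name"], vm["power_state"])
```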

vSphere 8 and DPUs

I have long been interested in Data Processing Units (DPUs), and vSphere 8 has the vSphere Distributed Services Engine to support them.

DPUs reside on servers and take over functions such as networking and security from the server to free up the central processor to run applications. By having an integrated engine, you can more closely coordinate the functions on the server and the DPUs to manage the workloads on each of them.

To drive home the need for DPUs in one presentation, VMware stated that up to 30 percent of CPU capacity is being consumed by network and security services.

One of the neat features with regard to DPUs is that when an ESXi host detects a DPU, it will recognize it and give you the option to install ESXi on the DPU!

Quicker Updates

vSphere 8 supports a quick-update feature that greatly reduces downtime when updating a vCenter Server. It accomplishes this by creating a new vCenter Server and then bringing in the configuration and persistent data from the existing one. This has the added benefit that, if you run into issues with the upgrade, you can fall back by restoring the old vCenter Server instance.

Updating ESXi hosts is also quicker and more streamlined: the ESXi image is pre-staged, and multiple hosts can be remediated in parallel.

Conclusion

Overall, vSphere 8 is a natural progression for VMware. The most visible change is the Cloud Console, which allows administrators to access vSphere environments regardless of where they are located. Feature-wise, I like that VMware is embracing DPUs and making them first-class citizens, as well as making it easier to deploy VMs using YAML code.

VMware has a hands-on lab with vSphere Distributed Services Engine and other vSphere 8 technologies for those who want to take a deeper look into it.

About the Author

Tom Fenton has a wealth of hands-on IT experience gained over the past 25 years in a variety of technologies, with the past 15 years focusing on virtualization and storage. He currently works as a Technical Marketing Manager for ControlUp. He previously worked at VMware as a Senior Course Developer, Solutions Engineer, and in the Competitive Marketing group. He has also worked as a Senior Validation Engineer with The Taneja Group, where he headed the Validation Service Lab and was instrumental in starting up its vSphere Virtual Volumes practice. He's on Twitter @vDoppler.

See the original post here:
Tom Details What's New in vSphere 8 (and 'Why People Are Excited About It') - Virtualization Review


Over Half of UK IT Industry Pros Trust Public Cloud Providers Less Than Two Years Ago, According to New Research from Leaseweb – Business Wire

AMSTERDAM--(BUSINESS WIRE)--Leaseweb Global, a leading hosting and cloud services company, today published the results of research revealing that over half (55%) of UK IT professionals currently trust public cloud services less than they did two years ago, having run into challenges around usage costs, migration and customer service.

The research, which explores 500 UK-based IT professionals'* experience with public cloud providers over the last two years, raises questions about whether hyperscale is the best way forward or viable as a long-term option. Transparency, customer service and the ease of migrating workloads are flagged as potential concerns, despite most respondents saying they had costs under control. Overall, the results indicate a significant trust issue when it comes to public cloud providers.

For example, the majority (57%) of respondents had found it challenging to migrate workloads out of a public cloud environment, while just under half (49%) said they had encountered difficulties in understanding their cloud usage costs. Despite this, nearly three quarters (72%) agree they have effectively controlled public cloud usage costs, with 46% stating they somewhat agree. Almost half (49%) had struggled to get hold of a public cloud provider's customer services.

In addition, while cloud is now a key component of many IT infrastructure strategies, "cloud only" and "cloud first" are not dominant, nor are they considered a panacea for every business need. While there was an increase in the adoption of cloud infrastructure during the pandemic, the study also showed a decrease in support for cloud-first strategies during 2022.

For instance, in the January 2019-December 2021 ("pre COVID pandemic") period, 36% of organisations described their approach to IT infrastructure as cloud first, with only 19% stating their organisation was officially committed to a cloud-only approach. From January 2022 onwards (the "post COVID pandemic" period), cloud-first commitments had decreased to 31%, with cloud only rising to 25% of respondents.

When asked about the optimum IT infrastructure for their organisation, private cloud only (23%) and a mixture of on-premise and public cloud (20%) were the most popular selections. These were followed by public cloud only (17%) and a mixture of on-premises and private cloud (14%), with on-premises only the least popular selection at 7%.

The move away from on-premise legacy infrastructure is clear, with two-thirds (66%) of respondents agreeing that the industry will see the end of on-premise infrastructure over the next two years. The research results indicate that while on-premises infrastructure is no longer central to IT strategy, it still exists within many organisations' environments.

The positive news is this does not appear to be stifling innovation: only 16% of respondents said that legacy infrastructure was either standing in the way of further cloud adoption or limiting their organisation's ability to make business decisions. Instead, the focus is on deploying applications in the right place, with a key takeaway from the study being that the end of on-premises infrastructure may be approaching, but is not quite here.

"The results of this study strengthen the case for hybrid combinations thanks to the flexibility and choice it can deliver to both large and small companies," commented Terry Storrar, Managing Director UK at Leaseweb. "And much as there has been a shift towards cloud adoption, rather than highlighting the pandemic as a key driver of a shift to the cloud, it appears that businesses were investing in cloud beforehand and that investment levels have remained relatively static," continued Storrar.

"Although respondents acknowledge that the desire and need to look after on-premises infrastructure is dying, the results also indicate that businesses are still using it as an ongoing component of their IT infrastructure when adopting hybrid cloud. The key takeaway from this research is IT teams are looking for flexibility - there's no 'one size fits all' approach. Organisations are now more likely to qualify cloud out during the assessment stage, rather than the other way around, but the main focus is on choosing the right infrastructure locations for specific use cases," concluded Storrar.

To read more about the results of the study, click here to visit the Leaseweb website.

*Survey Methodology:

Study conducted in May 2022 across 500 UK based IT managers, Cloud Service Managers, Infrastructure Managers, Heads of IT, Heads of Cloud Services, Heads of IT Infrastructure, IT Directors, CIOs & CTOs working in UK based companies employing 100-1000 people.

About Leaseweb

Leaseweb is a leading Infrastructure as a Service (IaaS) provider serving a worldwide portfolio of 20,000 customers ranging from SMBs to Enterprises. Services include Public Cloud, Private Cloud, Dedicated Servers, Colocation, Content Delivery Network, and Cyber Security Services supported by exceptional customer service and technical support. With more than 80,000 servers, Leaseweb has provided infrastructure for mission-critical websites, Internet applications, email servers, security, and storage services since 1997. The company operates 25 data centres in locations across Europe, Asia, Australia, and North America, all of which are backed by a superior worldwide network with a total capacity of more than 10 Tbps.

Leaseweb offers services through its various subsidiaries, which are Leaseweb Netherlands B.V., Leaseweb USA, Inc., Leaseweb Asia Pacific PTE. LTD, Leaseweb CDN B.V., Leaseweb Deutschland GmbH, Leaseweb Australia Ltd., Leaseweb UK Ltd, Leaseweb Japan KK, Leaseweb Hong Kong LTD, and iWeb Technologies Inc.

For more information, visit: http://www.leaseweb.com.

###

See original here:
Over Half of UK IT Industry Pros Trust Public Cloud Providers Less Than Two Years Ago, According to New Research from Leaseweb - Business Wire


Defining the Modern Bare Metal Cloud – thenewstack.io

This is first in a series of contributed articles leading up to KubeCon + CloudNativeCon in October.

Often referred to as bare metal as a service (BMaaS), bare metal cloud can be defined as a single-tenant environment with the full self-service versatility of the cloud. Its surge in popularity has motivated providers to push bare metal cloud well beyond its original capabilities to cater to the needs of modern workloads and cloud native organizations. The market is expected to grow at a compound annual growth rate of 38.5% through 2026.

Considering the expanded set of use cases it now addresses, bare metal cloud needs a new definition.

To define the bare metal cloud of today, we need to look at its key features, including:

The "solid" part involves the absence of a hypervisor. It gives users access to the server's physical components and lets them optimize their CPU, RAM and storage resources. Single tenancy eliminates the performance, security and resource-contention issues commonly attributed to shared virtualized environments.

The "cloud" in bare metal cloud mostly refers to hourly or monthly billing options and API-driven provisioning. The platform supports automation both during and after deployment, letting developers use APIs or CLIs to set up, scale and manage their infrastructure programmatically. We're still just scratching the surface, though.

What we have defined so far as bare metal cloud usually comes with a number of drawbacks, which modern providers have set out to address.

Modern bare metal cloud providers offer preconfigured, workload-optimized servers that can be deployed in minutes from anywhere across the globe. Some even offer workload-specific hardware accelerators such as persistent memory preconfigured on their systems. This gives organizations turnkey access to powerful technologies they can use to boost the performance and reliability of their workloads while reducing their total cost of ownership.

To further abstract the infrastructure setup overhead, providers have embraced open source software solutions such as Canonical's MAAS, letting you deploy instances with a preinstalled OS. Usually, you can choose among Linux distros such as Ubuntu, Debian or CentOS, Windows Server, or hypervisor solutions such as VMware ESXi. To further adapt the solution to your exact requirements, some providers even give you the option to install a custom OS image.

While you cannot handpick individual stack components, choosing from dozens of servers powered by the latest hardware, software and network technologies surely helps teams optimize their IT.

CNCF's Annual Survey 2021 showed that 90% of Kubernetes users use cloud-managed services. Since Kubernetes goes hand in hand with cloud native, its mainstream status has propelled bare metal cloud into interesting integrations.

You can now find bare metal cloud solutions with preinstalled open source K8s management platforms such as SUSE Rancher. Leveraging these, organizations can simplify the deployment and management of complex container environments at scale and gain easy access to enterprise-level Kubernetes hosted on bare metal.

Providers also invest more time and effort in delivering regularly updated GitHub pages offering a Kubernetes controller or a Docker Machine driver for their solution. The repos usually include Infrastructure-as-Code (IaC) modules that simplify infrastructure provisioning and management via popular tools such as Terraform, Ansible and Pulumi.

Such support lets DevOps teams seamlessly integrate bare metal servers into their workflows and provision resources directly from their preferred environments.
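As an illustration of what that API-driven workflow tends to look like, here is a minimal sketch against a hypothetical bare metal provider's REST API. The endpoint, payload fields and token handling are invented for the example and differ per provider; the same intent is usually expressed through the provider's Terraform, Ansible or Pulumi modules instead of raw HTTP calls.

```python
# Hypothetical example only: the URL, payload fields and auth scheme are
# placeholders standing in for a real bare metal provider's API.
import requests

API = "https://api.example-baremetal.com/v1"      # hypothetical endpoint
TOKEN = "<api token from the provider's portal>"  # placeholder

server_spec = {
    "hostname": "k8s-worker-01",
    "location": "ams",            # hypothetical region code
    "plan": "s2.c2.medium",       # hypothetical workload-optimized configuration
    "os_image": "ubuntu-22.04",
    "ssh_key_ids": ["key-123"],
}

resp = requests.post(
    f"{API}/servers",
    json=server_spec,
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=60,
)
resp.raise_for_status()
print("provisioning started:", resp.json().get("id"))
```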

Bare metal cloud has become a go-to solution for distributed workloads and organizations looking to accelerate their cloud adoption. Its dedicated resources and high-performance hardware make it ideal for sensitive, demanding or legacy workloads that are often incompatible with virtualized environments. Data centers that host bare metal cloud often provide direct access to cloud on-ramps or software-defined networks that let organizations interconnect their bare-metal-hosted apps with their favorite hyperscale cloud-service providers.

This enables anything from cloud resource bursting to access to petabytes of disaggregated storage, allowing teams to easily distribute their workloads across different ecosystems and optimize IT costs.

Bare metal cloud is often contrasted with the highly flexible public cloud. If we compare the two today, it's clear that the lines that set them apart are getting quite blurry.

If you choose the modern bare metal cloud, you get most of the public cloud features with added control, freedom, transparency and direct access to hardware.

In minutes, bare metal cloud lets you deploy anything from test environments for virtualized or containerized apps to enterprise, multinode K8s clusters managed by SUSE Rancher. The platform supports public and custom IPs and even lets you install a custom OS or select one of the available Linux, Windows or VMware systems.

A wide range of teams and use cases can benefit from the features mentioned above.

All things considered, we can define bare metal cloud as a constantly evolving, versatile and powerful IT infrastructure solution supporting fast-paced, cloud native organizations. It gives you direct access not just to its underlying resources, but to the latest software and hardware technologies that simplify and optimize modern workloads and workflows.

See the original post here:
Defining the Modern Bare Metal Cloud - thenewstack.io


The Haziness in Microsoft’s Cloud Numbers The Information – The Information

Here's a quick question for enterprise software acolytes out there: which tech giant is bigger in cloud, Microsoft or Amazon?

It depends. Microsoft says total revenue for Microsoft Cloud was $91 billion in the year to June, which is significantly more than AWS' $72 billion for the same 12-month period. But the Microsoft Cloud revenue number includes contributions from many different parts of Microsoft: everything from its Azure cloud-services infrastructure unit to applications such as Office 365 and most of LinkedIn. Amazon Web Services, in contrast, is primarily infrastructure (selling data storage and compute power), with little in the way of applications. Microsoft doesn't disclose Azure's revenue. According to Gartner, AWS has 39% of the global cloud-infrastructure market to Microsoft's 21%.

This reporting haziness is par for the course for the cloud industry, where defining revenue is more of an art than a science. In their reported cloud revenue numbers, both Microsoft and Google include the infrastructure side of the cloud with applications that run on those services, such as word processing, human-resources tools, and email. Oracle used to be in that camp as well but last week it began breaking out cloud infrastructure from applications, giving investors a better window into its performance.

Go here to read the rest:
The Haziness in Microsoft's Cloud Numbers The Information - The Information


What is VDS, and when do you need it – Startup.info

For IT experts and gurus, there is an increased demand for cloud capacity. The dedicated server hosting market is booming with the increase of people looking for better security for their computers. Moreover, there are plenty of hosting providers which help provide solutions for your business.

Deltahost remains an ideal company that offers a plethora of server services, such as VDS. This guide will talk about what VDS is and its features, factors to consider when renting a server, and the services offered by the Deltahost company.

vSphere Distributed Switch is a networking management tool similar to the vSphere Standard Switch (VSS). It is a powerful network construct that separates the management and data planes: control of the vSphere Distributed Switch is centralized at the vCenter Server, while traffic is passed locally by the ESXi hosts.

The VDS framework comes with various features, which include:

When you rent VDS servers, at https://deltahost.ua/vps.html for example, it's possible to easily simplify VM networking across several networks. This includes port group management, security and VLANs, and other computing settings.

The VDS features offer several network health check capabilities, such as checking the physical structure. This feature is integral to smooth operation when you rent VDS servers in good condition.

When you rent a VDS server, it's possible to have access to ERSPAN, SNMPv3, NetFlow, and other network configurations. It has some great templates for restoring virtual machine network systems.

These VDS features include crucial server tools such as Network I/O Control, SR-IOV, and BPDU filtering, which matter to anyone who wants a well-designed network.

A VDS permits the use of private VLANs, which offer more security options and are great for segmenting traffic.

When you rent VDS servers, it's possible to shape traffic policies on different port groups. This helps with peak bandwidth, burst size, and average bandwidth.

Here are some important factors to consider when getting a server:

Before you rent a VDS server, you first need to check the type of hardware and how it affects network reliability. It's very important to check for SSD storage and dual power supplies.

When you choose a managed service, having an accessible support center is important. You get much more value from talking with a qualified support agent, and this will have an effect on your business.

Here are some services provided by the Deltahost firm:

When you use Deltahost, it's easy to administer your various platforms. They can help you with mail accounts and various in-house built hosting control solutions.

Deltahost offers quality, balanced website hosting, and you can rent VDS servers from them. When you have server overload issues, they can help you solve them effectively.

You will get affordable PHP script installation software: an excellent PHP application that is great for setting up blogs and photo albums.

VDS is an essential tool that is great for tech experts. It comes with various features and you need to consider several factors before renting a server.

Read the original:
What is VDS, and when do you need it - Startup.info


CI/CD servers readily breached by abusing SCM webhooks, researchers find – The Daily Swig

Webhook, line, and sinker

Cloud-based source code management (SCM) platforms support integration with self-hosted CI/CD solutions through webhooks, which is great for DevOps automation.

However, the benefits can come with security trade-offs.

According to new findings from researchers at Cider, malicious actors can abuse webhooks to access internal resources, achieve remote code execution (RCE), and possibly obtain reverse shell access.

Software-as-a-service (SaaS) SCM systems provide an IP range for their webhooks. Organizations must open their networks to these IP ranges to enable integration between the SCM and their self-hosted CI/CD systems.

"We knew the combination of a SaaS source control management system and a self-managed CI with the webhook service IP range allowed towards the CI is a common architecture, and we wanted to check our possibilities there," Omer Gil, head of research at Cider, told The Daily Swig.


Attackers can use webhooks to get past an organization's firewalls. But SCM webhooks have strict limits, and there is very little room to make modifications to webhook requests.

However, the researchers discovered that with the right changes, they could get beyond the limited endpoints available to SCM webhooks.

On the CI/CD side, the researchers ran their experiments on Jenkins, an open-source DevOps server.

"We chose Jenkins since it's self-hosted and commonly used, but [our findings] can be applied to any system that is accessible from the SCM, like artifact registries for example," Gil said.

On the SCM side, they tested both GitHub and GitLab. While webhooks have been designed to trigger specific CI endpoints, they could modify requests to direct them to other endpoints that return user data or the console output of pipelines. Nevertheless, limits remain.

"Webhooks are sent as a POST request, which limits the options against the target service, since endpoints used to retrieve data usually only accept the GET parameter," Gil said. "While it's not possible to fire a GET request through GitHub, in GitLab it's a different case, since if the POST request is responded to with a redirection, the GitLab webhook service will follow it by sending the GET request."

Using GitLab, the researchers were able to use webhooks to combine POST and GET requests to access internal resources. Interestingly, some Jenkins resources are accessible without authentication.

"By default, some resources can be accessed anonymously. Having said that, it's not very common for an organization to leave it as is, but some do allow anonymous access," Gil said.

When authentication was required, the researchers found that they could direct webhooks to the login endpoint and conduct brute-force password attacks against the CI/CD platform. Once authenticated, they obtained a session cookie that could be used to access other resources.

If the Jenkins instance had a vulnerable plugin, the webhook mechanism could exploit it. In a proof-of-concept video, the researchers show that they could force a vulnerable Jenkins server to download a malicious JAR file, run it on the server, and launch a reverse shell endpoint for the attacker.

This finding is a reminder of the risks created when CI/CD servers are partially open to the internet.

"A hermetic solution is to deny inbound traffic from the SCM webhook service, but it usually comes with engineering costs," Gil said. "Some countermeasures can be taken, like setting a secure authentication mechanism in the CI, patching, and making sure all actions in the server are saved in the logs."
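As one concrete form of the "secure authentication mechanism" Gil mentions, a CI endpoint can verify that each webhook really came from the SCM before acting on it. The sketch below assumes a Flask receiver and GitHub's documented X-Hub-Signature-256 header, which carries an HMAC-SHA256 of the payload computed with the shared webhook secret.

```python
# Minimal sketch of webhook signature verification, assuming a Flask app and a
# GitHub webhook configured with a shared secret. Requests whose
# X-Hub-Signature-256 header does not match the HMAC of the body are rejected.
import hashlib
import hmac
import os

from flask import Flask, abort, request

app = Flask(__name__)
WEBHOOK_SECRET = os.environ["WEBHOOK_SECRET"].encode()

@app.route("/webhook", methods=["POST"])
def webhook():
    body = request.get_data()
    expected = "sha256=" + hmac.new(WEBHOOK_SECRET, body, hashlib.sha256).hexdigest()
    received = request.headers.get("X-Hub-Signature-256", "")
    if not hmac.compare_digest(expected, received):
        abort(401)  # not signed with our secret: drop the request
    # ...only now hand the event off to the CI logic...
    return "", 204
```

GitLab's equivalent is simpler: it sends the configured secret verbatim in an X-Gitlab-Token header, which the receiver should compare against its expected value in the same constant-time way.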


Continue reading here:
CI/CD servers readily breached by abusing SCM webhooks, researchers find - The Daily Swig


All You Need to Know About Virtual Machines – Spiceworks News and Insights

A virtual machine (VM) is defined as a computer system emulation, where VM software replaces physical computing infrastructure/hardware with software to provide an environment for deploying applications and performing other app-related tasks. This article explains the meaning and functionality of virtual machines, along with a list of the best VM software you can use.


The term virtual machine (VM) refers to a computer that exists only in digital form. The actual computer is often referred to as the host in these situations, while other operating system(s) running on it are referred to as the guests. Using the hardware resources of the host, virtual machines let users install more than one operating system (OS) on the same computer.

Virtual machines are also used to develop and publish apps to the cloud, run software that is not compatible with the host operating system, and back up existing operating systems. Developers may also use them to test their products quickly and easily in different environments. VM technology can be used both on-premises and within the cloud. For example, public cloud services often use virtual machines to give multiple users access to low-cost virtual application resources.


Virtualization allows for creating a software-based computer with dedicated amounts of memory, storage, and CPU from the host computer. This process is managed by hypervisor software. As needed, the hypervisor moves resources from the host to the guest. It also schedules operations in VMs to avoid conflicts and interference when using resources.

A virtual machine allows a different operating system to run inside its own isolated computing environment, often within a window, much like any other program. Because it is separated from the rest of the system, the virtual machine cannot make unapproved modifications to the host computer or interfere with the host's central operating system.
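To make the resource-allocation point concrete, here is a minimal sketch, assuming a Linux host running KVM/QEMU with the libvirt daemon and the libvirt-python bindings installed, that asks the hypervisor what it has allocated to each guest.

```python
# Minimal sketch: query a local KVM/QEMU hypervisor via libvirt and print the
# vCPUs and memory allocated to each guest VM. Assumes libvirt-python is
# installed and the user has access to qemu:///system.
import libvirt

conn = libvirt.open("qemu:///system")
try:
    for dom in conn.listAllDomains():
        state, max_mem_kib, mem_kib, vcpus, cpu_time_ns = dom.info()
        print(f"{dom.name()}: {vcpus} vCPU(s), "
              f"{mem_kib // 1024} MiB of {max_mem_kib // 1024} MiB allocated")
finally:
    conn.close()
```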


Organizations, IT professionals, developers, and other home users looking for ways to solve problems that result from remote operations are set to benefit from what virtual machines offer. Virtual machines provide users with the same applications, settings, and user interfaces they would find in a physical computer from a remote area. Other benefits include:


Virtual machines can be of two types: system VMs and process VMs.

These kinds of VMs are fully virtualized to substitute for a real machine. They rely on a hypervisor such as VMware ESXi, which can run on top of an operating system or directly on bare hardware.

The hardware resources of the host can be shared and managed by more than one virtual machine. This makes it possible to create more than one environment on the host system. Even though these environments are on the same physical host, they are kept separate. This lets several single-tasking operating systems share resources concurrently.

Different VMs running on a single host operating system can share memory through memory overcommitment techniques. This way, memory pages with identical content can be shared among multiple virtual machines on the same host, which is especially helpful for read-only pages.

The key advantages of system VMs are:

Disadvantages of system virtual machines are:

These virtual machines are sometimes called application virtual machines or Managed Runtime Environments (MREs). They run as standard applications inside the host's operating system, supporting a single process. A process VM is launched when its process starts and destroyed when the process exits. It offers a platform-independent programming environment to the process, allowing it to execute the same way on any platform.

Process virtual machines are implemented using interpreters and provide high-level abstractions. They are often associated with the Java programming language, whose programs execute on the Java virtual machine. Two more examples of process VMs are the Parrot virtual machine and the .NET Framework, which runs on the Common Language Runtime (CLR) VM. Additionally, they operate as an abstraction layer for any computer language being used.
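A small, self-contained way to see a process VM at work (using Python rather than Java, to keep all examples here in one language): CPython compiles source to a platform-independent bytecode and then interprets it, which is the platform independence described above.

```python
# CPython as a process VM: the same function is compiled to bytecode that the
# Python interpreter executes the same way on any platform.
import dis

def add(a, b):
    return a + b

dis.dis(add)  # prints the bytecode the VM runs (exact opcodes vary by Python version)
```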

A process virtual machine may, under some circumstances, act as an abstraction layer between its users and the underlying communication mechanisms of a computer cluster. In place of a single process, such a VM consists of one process for each real computer that is part of the cluster.

Special case process VMs enable programmers to concentrate on the algorithm instead of the communication process provided by the virtual machine OS and the interconnect.

These VMs are embedded in an existing language rather than providing a specific programming language of their own. Their systems provide bindings for several programming languages, such as Fortran and C. In contrast to other process VMs, they can access all OS services and aren't limited by the system model. Therefore, they cannot be categorized strictly as virtual machines.


A superior VM application facilitates the use of many operating systems on one computer. Users should consider what features they may require when choosing the VM software that suits them best. The following is a list of the top 10 virtual machine software products to use:

VMware Workstation Player is recognized as a virtualization solution that supports a variety of operating systems on a single machine without requiring a reboot. It allows for seamless data sharing between hosts and guests and is designed for IT professionals. The following are features of VMware Workstation Player:

Parallels Desktop software provides hardware virtualization so that Windows can run on a Mac without rebooting, and its applications are among the most powerful, fastest, and easiest for doing this. The following are features of Parallels Desktop:

Like several other options on this list, this is an open-source hypervisor. It works on x86 computers, is suitable for home or enterprise use, and runs on Linux, Windows, etc. The following are features of VirtualBox:

Oracle VM VirtualBox is an open-source x86 and AMD64 virtualization product for home and enterprise use. The following are features of Oracle VM VirtualBox:

Citrix Hypervisor simplifies operational administration, enabling users to run intensive workloads in a virtualized environment. It is best for Windows 10. The following are features of Citrix Hypervisor:


Red Hat Virtualization is an open-source platform that offers centralized management and enables its users to create new VMs. Additionally, one may use the platform to replicate existing VMs and see how everything works together. The following are features of Red Hat Virtualization:

Hyper-V is a hypervisor that enables the creation of virtual machines on x86-64-based systems. Individual virtual machines can be connected to one or more networks through configuration. The following are features of Hyper-V:

Kernel-based Virtual Machine (KVM) enables full virtualization for Linux. It was designed to operate on x86 hardware with virtualization features. KVM has two core components: the main virtualization infrastructure and a processor-specific module. The following are features of the Kernel-based Virtual Machine:

Proxmox Virtual Environment integrates networking, the KVM hypervisor, and Linux container (LXC) capabilities on a single platform. The following are features of Proxmox Virtual Environment:

QEMU is a popular open-source emulator and virtualizer written in C. It allows building virtual environments for many architectures and operating systems at no cost. The following are features of QEMU:


According to a 2022 report by Market Data Forecast, the global VM market was worth $3.5 billion in 2020. This is poised to grow further as enterprises rely more on software-based technologies (like the cloud) and reduce their hardware footprint. Indeed, virtual machines can go a long way in helping to optimize IT costs and also provide a safe environment for application security testing and cybersecurity checks.

Did this article give you the information you were looking for about virtual machines? Tell us on Facebook, Twitter, and LinkedIn. We'd love to hear from you!

More here:
All You Need to Know About Virtual Machines - Spiceworks News and Insights


OCC frees Capital One from consent order tied to 2019 breach – Banking Dive

Dive Brief:

With the termination of the consent order, Capital One is no longer required to submit quarterly updates detailing its risk management and auditing practices to the OCC, which it was required to do following the discovery of the hack.

"The OCC believes that the safety and soundness of the bank and its compliance with laws and regulations does not require the continued existence of the [consent order]," the OCC wrote in its termination order, dated Aug. 31.

The consent order was handed down due to the bank's failure to establish effective risk assessment processes before Capital One migrated significant operations to the public cloud, and its failure to correct the deficiencies in a timely manner. The OCC did, however, positively consider Capital One's customer notification and remediation efforts following the breach.

Its termination indicates the bank has satisfied the OCC's risk management requirements and made good on Capital One CEO Richard Fairbank's 2019 apology.

"While I am grateful that the perpetrator has been caught, I am deeply sorry for what has happened, he said. I sincerely apologize for the understandable worry this incident must be causing those affected and I am committed to making it right."

Capital One had long positioned itself apart from other banks, embracing a public cloud-first strategy rather than using private clouds and internal firewalls. Fairbank, prior to the hack's exposure, had called the bank "one of the most cloud-forward companies in the world."

The incident didn't pull Capital One off its cloud course, with the bank closing its final data center as planned in 2020.

A bank spokesperson that year said Capital One, since the breach, had "invested significant additional resources into further strengthening our cyber defenses" and "... made substantial progress in addressing the requirements of these orders."

Capital One was also hit with a cease-and-desist order from the Federal Reserve in conjunction with the OCC's penalty, requiring the bank's board of directors to submit a written plan outlining how it would improve its risk management program and internal controls for protecting customer data.

The bank agreed in December to pay $190 million to settle a class-action lawsuit related to the breach but, along with Amazon Web Services (AWS), denied all liability in the incident.

The breach was one of the biggest to hit the financial services sector, affecting 100 million people in the U.S. and 6 million in Canada. Paige Thompson, the hacker behind the breach, accessed data including bank account numbers and credit card balances, as well as identifying information including names and birth dates. A former employee of Capital One's cloud hosting company AWS, she had developed a tool to search for misconfigured AWS accounts and used it to download data from more than 30 entities, including Capital One.

Thompson also installed cryptocurrency mining software on new servers and directed the income to her personal digital wallet.

She reportedly bragged about the hack in texts and on online forums.

"Ms. Thompson used her hacking skills to steal the personal information of more than 100 million people, and hijacked computer servers to mine cryptocurrency," U.S. Attorney Nick Brown said during Thompson's seven-day jury trial. "Far from being an ethical hacker trying to help companies with their computer security, she exploited mistakes to steal valuable data and sought to enrich herself."

"She wanted data, she wanted money, and she wanted to brag," Assistant U.S. Attorney Andrew Friedman said in closing arguments.

Capital One wasn't the only financial services company subject to a data breach in 2019. That May, First American Financial Corp. exposed 885 million financial records linked to real estate transactions due to a web design error, and member data for 4.2 million customers at Desjardins, Canada's largest credit union, was accessed by an unauthorized employee.

Capital One did not return a request for comment by press time.

Read the rest here:
OCC frees Capital One from consent order tied to 2019 breach - Banking Dive


Engineering the future in a new UC San Diego hub – KPBS

A new building officially opens on the campus of UC San Diego Friday. It houses all kinds of engineers who are designing products that have never been seen.

Franklin Antonio Hall is named after the late Qualcomm co-founder, who donated $30 million toward the $180 million total cost of the project. "We're bursting at the seams," said Albert Pisano, who was a good friend of Antonio's. Pisano is also the dean of the UC San Diego Jacobs School of Engineering, which has reached a record enrollment of almost 10,000 students.

Until now, the school has had classrooms and laboratories spread across several buildings on campus.

Antonio Hall has four floors, with more than 186,000 square feet of space.

Henrik Christensen is the director of robotics at the school. He teaches and mentors mechanical and electrical engineering students and graduate students working on degrees in computer science.

"Now I get to have them all in the same space, which makes a big difference for them to talk to each other. It allows them to really understand how they can complement each other in building products we've never seen before," he said.

Those products include devices using artificial intelligence and the development of powerful, longer-lasting batteries for electric cars.

Although the move-in and setup for experiments and research will continue for many more weeks, there are already projects underway.

Alex Chow is working on his master's degree in computer science. He is a member of a graduate student team developing a robot to support children with special needs.

Last week, as his team members worked in the new building in La Jolla, Chow was a hundred miles away at home in Riverside directing the robot.

"So, with this robot, you can turn around in your environment. Grab stuff with the arm and the gripper," Chow said, speaking through an electronic tablet attached to the top of the robot.

This would benefit a student with disabilities who could remain home and still be part of a class meeting.

"If they're unable to physically attend school, then they may be able to use the robot to actually actively participate in school as a robot," said Pratyusha Ghosh, a member of Chow's team who is working on her Ph.D. dissertation.

The learning curve and collaborative vibe at Antonio Hall are just getting started.

In his position as dean, Pisano has put out the welcome mat and an invitation to much younger students who hope to make engineering their career.

"The world is filled with issues that need to be addressed now. A workable solution now is better than a perfect solution later," he said.

See more here:

Engineering the future in a new UC San Diego hub - KPBS
