Category Archives: Cloud Servers

Kubernetes on Bare Metal vs. VMs: It’s Not Just Performance The New Stack – thenewstack.io

Too often, the debate about running Kubernetes on bare metal versus virtual machines is overly simplistic. There's more to it than a trade-off between the relative ease of management you get with VMs and the performance advantage of bare metal. (The latter, in fact, isn't huge nowadays, as I'll explain below.)

I'm going to attempt to walk through the considerations at play. As you will see, while I tend to believe that Kubernetes on bare metal is the way to go for most use cases, there's no simple answer.

Right off the bat, let's address the performance vs. ease-of-use question.

Andy Holtzmann

Andy is a site reliability engineer at Equinix and has been running Kubernetes on bare metal since v1.9.3 (2018). He has run production environments with up to 55 bare-metal clusters, orchestrated Kubernetes installs on Ubuntu, CentOS and Flatcar Linux, and recently helped accelerate the bring-up of Equinix Metal's Kubernetes platform to under one hour per new greenfield facility. Andy joined Equinix after working in senior software engineer roles at Twilio and SendGrid.

Yes, VMs are easier to provision and manage, at least in some ways. You don't need to be concerned with the details of the underlying server hardware when you can set up nodes as VMs and orchestrate them using the VM vendor's orchestration tooling. You also get to leverage things like golden images to simplify VM provisioning.

On the other hand, if you take the hypervisor out of the picture, you don't spend hardware resources running virtualization software or guest operating systems. All of your physical CPU and memory can be allocated to business workloads.

But it's important not to overstate this performance advantage. Modern hypervisors are pretty efficient. VMware reports hypervisor overhead rates of just 2 percent compared to bare metal, for example. You have to add the overhead cost of running guest operating systems on top of that number, but still, the raw performance difference between VMs and bare metal can be negligible, at least when you're not trying to squeeze every last bit of compute power from your infrastructure. (There are cases where that 2 percent difference is meaningful.)

When all is said and done, virtualization is going to reduce the total resources available to your pods by about 10 to 20 percent.
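To make that 10-to-20-percent figure concrete, here is a back-of-envelope sketch. The 2 percent hypervisor figure echoes the article; the per-VM guest-OS memory cost is an illustrative assumption, not a benchmark.

```python
# Back-of-envelope sketch of pod-allocatable memory lost to virtualization.
# Hypervisor overhead and guest-OS cost per VM are illustrative assumptions.

def allocatable_memory_gib(physical_gib, vms, hypervisor_overhead=0.02,
                           guest_os_gib_per_vm=1.5):
    """Memory left for pods after hypervisor and guest-OS overhead."""
    after_hypervisor = physical_gib * (1 - hypervisor_overhead)
    return after_hypervisor - vms * guest_os_gib_per_vm

pods_on_vms = allocatable_memory_gib(256, vms=16)
print(f"for pods on VMs: {pods_on_vms:.1f} GiB")             # 226.9 GiB
print(f"lost to virtualization: {1 - pods_on_vms / 256:.0%}")  # 11%
```

With leaner guest images the loss shrinks toward the hypervisor's 2 percent floor; with many small VMs it climbs toward the top of the quoted range.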

Now, let's get into all the other considerations for running Kubernetes on bare metal versus Kubernetes on VMs. First, the orchestration element. When you run your nodes as VMs, you need to orchestrate those VMs in addition to orchestrating your containers. As a result, a VM-based Kubernetes cluster has two independent orchestration layers to manage.

Obviously, each layer is orchestrating a different thing, so, in theory, this shouldn't cause problems. In practice, it often does. For example, imagine you have a failed node and both the VM-level orchestrator and the Kubernetes orchestrator are trying to recover from the failure at the same time. This can lead to your orchestrators working at cross purposes: the VM orchestrator tries to stand up the server that crashed, while Kubernetes tries to move pods to different nodes.

Similarly, if Kubernetes reports that a node has failed but that node is a VM, you have to figure out whether the VM actually failed or the VM orchestrator simply removed it for some reason. This adds operational complexity, as you have more variables to work through.

You don't have these issues with Kubernetes on bare-metal server nodes. Your nodes are either fully up or they're not, and there are no orchestrators competing for a node's attention.

Another key advantage of running Kubernetes on bare metal is that you always know exactly what you're getting in a node. You have full visibility into the physical state of the hardware. For example, you can use diagnostics tools like SMART to assess the health of hard disks.
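As a quick illustration of that visibility, here is a minimal sketch that flags worrying SMART attributes. The sample text below is illustrative; on a real bare-metal node you would feed in the output of `smartctl -A /dev/sda` (from the smartmontools package).

```python
# Sketch: flag worrying SMART attributes from `smartctl -A`-style output.
# SAMPLE is illustrative; on a real node, capture `smartctl -A /dev/sda`.

SAMPLE = """\
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail  Always       -       12
  9 Power_On_Hours          0x0032   095   095   000    Old_age   Always       -       43800
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       3
"""

# Non-zero raw values on these attributes are a common early-failure signal.
WATCH = {"Reallocated_Sector_Ct", "Current_Pending_Sector"}

def worrying_attributes(smartctl_output):
    alerts = {}
    for line in smartctl_output.splitlines():
        fields = line.split()
        if len(fields) >= 10 and fields[1] in WATCH:
            raw = int(fields[9])
            if raw > 0:
                alerts[fields[1]] = raw
    return alerts

print(worrying_attributes(SAMPLE))
# {'Reallocated_Sector_Ct': 12, 'Current_Pending_Sector': 3}
```

A node exporter or cron job built on this idea can surface disk health per physical node, which is exactly the insight a VM abstraction hides.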

VMs don't give you much insight into the physical infrastructure upon which your Kubernetes clusters depend. You have no idea how old the disk drives are, or even how much physical memory or how many CPU cores exist on the physical servers. You're only aware of the VMs' virtual resources. This makes it harder to troubleshoot issues, contributing again to operational complexity.

For related reasons, bare metal takes the cake when it comes to capacity planning and rightsizing.

There are a fair number of nuances to consider on this front. Bare metal and virtualized infrastructure support capacity planning differently, and there are various tools and strategies for rightsizing everything.

But at the end of the day, it's easier to get things exactly right when planning bare-metal capacity. The reason is simple enough: With bare metal, you can manage resource allocation at the pod level using cgroups in a hyper-efficient, hyper-reliable way. Using tools like the Kubernetes Vertical Pod Autoscaler, you can divvy up resources down to the millicore based on the total available resources of each physical server.
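The millicore-level divvying can be sketched as simple arithmetic over a node's allocatable CPU. The reservation sizes below are illustrative assumptions (in spirit like the kubelet's reserved-resource settings), not recommended values.

```python
# Sketch: divide a bare-metal node's CPU into millicores for pods,
# after reserving an illustrative slice for kubelet and system daemons.

def allocatable_millicores(physical_cores, system_reserved_m=500,
                           kube_reserved_m=500):
    total_m = physical_cores * 1000
    return total_m - system_reserved_m - kube_reserved_m

def fits(requests_m, physical_cores):
    """Can this set of pod CPU requests be packed onto one node?"""
    return sum(requests_m) <= allocatable_millicores(physical_cores)

node = 48  # physical cores on the bare-metal server
print(allocatable_millicores(node))              # 47000 millicores for pods
print(fits([15000, 15000, 15000], node))         # True
print(fits([15000, 15000, 15000, 3000], node))   # False: 48000 > 47000
```

Because the denominator is the known physical core count, there is no second layer of VM sizing to second-guess.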

That's a luxury you don't get with VMs. Instead, you get a much cruder level of capacity planning, because the resources that can be allocated to pods are contingent on the resource allocations you make to the VMs. You can still use cgroups, of course, but you'll be doing it within a VM that doesn't know what resources exist on the underlying server. It only knows what it has been allocated.

You end up having to oversize your VMs to account for unpredictable changes in workload demand. As a result, your pods dont use resources as efficiently, and a fair amount of the resources on your physical server will likely end up sitting idle much of the time.

Another factor that should influence your decision to run Kubernetes on bare metal versus VMs is network performance. It's a complex topic, but essentially, bare metal means less abstraction of the network, which leads to better network performance.

To dig a level deeper, consider that with virtual nodes you have two separate kernel networking stacks per node: one for the VM and another for the physical host. There are various techniques for negotiating traffic between the two stacks (packet encapsulation, NAT and so on), and some are more efficient than others (hint: NAT is not efficient at all). But at the end of the day, each exacts some kind of performance hit. They also add a great deal of complexity to network management and observability.

Running on bare metal, where you have just one networking stack to worry about, you don't waste resources moving packets between physical and virtual machines, and there are fewer variables to sort through when managing or optimizing the network.
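To put a number on the encapsulation cost, here is an illustrative per-packet calculation for a VXLAN-style tunnel between the two stacks. The header sizes are the standard ones for IPv4/UDP/VXLAN/Ethernet; the scenario itself is a simplified assumption.

```python
# Sketch: how packet encapsulation eats into usable payload per packet.
# Header sizes are standard; the VM-to-host VXLAN scenario is illustrative.

MTU = 1500           # bytes available to the outer IP packet on the wire
IP_UDP = 20 + 8      # outer IPv4 + UDP headers
VXLAN = 8            # VXLAN header
INNER_ETH = 14       # encapsulated inner Ethernet header

def goodput_ratio(mtu=MTU):
    payload = mtu - IP_UDP - VXLAN - INNER_ETH
    return payload / mtu

print(f"usable payload per packet: {goodput_ratio():.1%}")  # ~96.7%
```

A few percent per packet sounds small, but it comes on top of the extra per-packet CPU work of traversing two kernel stacks, which is where most of the real-world hit lives.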

Granted, managing the various networks that exist within Kubernetes (which partially depends on the container network interface, or CNI, you use) does add some overhead. But it's minor compared to the overhead that comes with full-on virtualization.

As I've already implied, the decision between Kubernetes on bare metal and Kubernetes on VMs affects the engineers who manage your clusters.

Put simply, bare metal makes operations, and hence your engineers' lives, simpler in most ways. Beyond the fact that there are fewer layers and moving parts to worry about, a bare-metal environment reduces the constraints under which your team works. They don't have to remember that VMs only support X, Y and Z configurations, or puzzle over whether a particular version of libvirt supports a feature they need.

Instead, they simply deploy the operating system and packages and get to work. It's easier to set up a cluster, and it's much easier to manage operations for it over the long term when you're dealing solely with bare metal.

Let me make clear that I do believe there are situations where running Kubernetes on VMs makes sense.

One scenario is when you're setting up small-scale staging environments, where performance optimization is not super important. Getting the most from every millicore is not usually a priority for this type of use case.

Another situation is when you work in an organization that is already heavily wedded to virtualized infrastructure or to particular virtualization vendors. In this case, running nodes as VMs simply poses less of a bureaucratic headache. Or maybe there are logistical challenges with acquiring and setting up bare-metal servers. If you can self-service some VMs in a few minutes, versus taking months to get physical servers, just use the VMs if it suits your timeline better. Your organization may also be wedded to a managed Kubernetes platform offered by a cloud provider that only runs containers on VMs. That said, bare-metal options are spreading: Anthos, Google Cloud's managed hybrid multicloud Kubernetes offering, supports bare-metal deployments, and so does Red Hat's OpenShift. AWS's EKS Anywhere bare-metal support is coming later this year.

In general, you should never let a dependency on VMs stop you from using Kubernetes. It's better to take advantage of cloud native technology than to be stuck in the past because you can't have the optimal infrastructure.

VMs clearly have a place in many Kubernetes clusters, and that will probably never change. But when it comes to questions like performance optimization, streamlining capacity management or reducing operational complexity, Kubernetes on bare metal comes out ahead.



Expect sales reps’ calls if IT wants to ditch Oracle – The Register

Oracle executives brief clients against plans to move away from Big Red's technology platforms, it is alleged.

A recent webinar by Palisade Compliance heard that it took "guts" for enterprise customers to make the decision to move away from Oracle technology as its senior salespeople would call customer CEOs and board members and brief against IT management proposing such a move.

Craig Guarente, an advisor in Oracle licensing and compliance issues, said: "You need the courage to do something different, because you're going to have 20 Oracle reps telling you why it's a mistake, and they're going to call your CEO. They're going to call your board and do whatever they need to make you [change course]."

Guarente, CEO of Palisade Compliance and former Oracle veep, said customers would often complain that no matter how they try to reduce their reliance on Big Red's technology, "the Oracle calculator only has a plus button."

"Sometimes companies get in distress and they're shrinking and Oracle says, 'Yeah, but you still have to pay me, I know you only have half the users and half the capacity but you still have to pay me and we're going to raise prices because of inflation.' That really frustrates companies," he said.

The webinar was also hosted by Ed Boyajian, CEO of EDB, a software company backing the open-source database PostgreSQL. He said large customers had moved away from Oracle to PostgreSQL, but that it often required top-level support.

"Our biggest customers, very large-scale enterprise-wide Postgres users, report needing a strategic drive to change. That intersected the C-suite: there is a common theme that it takes a strong commitment at that level. Because people are always afraid of the risk of the unknown."

We have asked Oracle to comment.

Big Red has argued that its approach to the cloud has offered a way of integrating with the on-prem world. In 2020, it launched an on-premises cloud product, Oracle Dedicated Region Cloud, completely managed by Oracle, using the same architecture, cloud services, APIs, and SLAs as its equivalent regional public and private clouds.

"Customers can think of it as their own private cloud running inside their data centre, or they will also see it as a hybrid cloud, given that this is the exact same thing we offer in a public cloud," said Regis Louis, exec veep of product management for Oracle Cloud Platform in EMEA.

Meanwhile, Oracle also claims to innovate with tight integration between hardware and software supporting the performance of its Exadata products. Big Red claims its beefed-up Exadata X9M, launched last year, provides online transaction processing (OLTP) with more than 70 per cent higher input/output operations per second (IOPS) than its earlier release.

But some customers have trodden the path away from the dominant application and database vendor. EDB claims to offer tools that smooth the migration to PostgreSQL, plus the option of moving applications without rewriting them.

Speaking to The Register in 2020, Ganadeva Bandyopadhay, associate vice president of IT at TransUnion CIBIL, described the migration from Oracle to Postgres EDB.

The company was looking to revamp older applications based on "rapidly outgoing concepts like heavy database servers with a lot of business logic within the database code," Bandyopadhay said.

The credit information company operating in India found its Oracle licences were being underused, but the rigidity in the rules made it difficult to move them onto different virtual instances and convert from the processor-based to the "Named User Plus" licensing.

Starting from 2015, Bandyopadhay and his team wanted to remove the business logic from the main database, improving performance and flexibility in the architecture, something he said would have been difficult to do with Oracle.

"It was nothing against Oracle, but our logic was to address Oracle features which are built within the database," he said. "There is a cost to that which we had accepted for a long time, but with the changing expectations [from the business], we had to really revamp and flatten out the databases and put the business logic into somewhere else in the middle tier," he said.

After completing the migration in 2017, Bandyopadhay's team found the Postgres EDB-based system achieved higher throughput at lower licensing costs than Oracle, but not before reskilling its internal IT team.


Threat Actors Organize: Welcome to Ransomware Inc. – Virtualization Review


"Many people still think of a ransomware actor as the proverbial 400-pound hacker in his mom's basement -- nothing could be further from the truth," says in-the-trenches security expert Allan Liska. "There are a number of cottage industries that have sprung up in support of ransomware."

The intelligence analyst at Recorded Future outlined a businesslike environment and workflow he has discerned from more than 20 years in the IT security field, most recently focused on ransomware:

"In fact, the leader of a ransomware group is often nothing more than a 'marketing' person whose sole purpose is to get more affiliates for the group," said Liska, who is known as the "Ransomware Sommelier."

He shared his thoughts with Virtualization & Cloud Review following his presentation in a recent multi-part online event titled "Modern Ransomware Defense & Remediation Summit," now available for on-demand viewing.

It's no surprise Liska started off discussing initial access brokers early on, as he has become somewhat of a specialist in that area. For example, last year he took to Twitter to lead a crowdsourcing effort to create a one-stop shop for a list of initial access vulnerabilities used by ransomware attackers, as we explained in the article "'Ransomware Sommelier' Crowdsources Initial Access Vulnerability List."

Of course, organized ransomware has been a known thing for a while now, with even nation-state actors getting in on the action, but Liska and other security experts indicate the bad guys are getting more sophisticated.

"Outsourcing the initial access to an external entity lets attackers focus on the execution phase of an attack without having to worry about how to find entry points into the victim's network," said an article last summer in Infosecurity Magazine titled "The Rise of Initial Access Brokers," which noted the flourishing market often sees compromised VPN or RDP accounts as network inroads, along with other exposed remote services like SSH.

Digital Shadows also charted "The Rise of Initial Access Brokers" a year ago, complete with a chart showing popular access types and their average prices (note that prices have likely gone up with the recent inflation spike):

Liska detailed the initial access scene in his opening presentation, titled "The Current Ransomware Threat Landscape & What IT Pros Must Know."

"So one of the things that you have to understand with ransomware is it's generally not the ransomware actor that's gaining that initial access," he explained. "There are other criminals that are called initial access brokers, and they're the ones who generally gain that access. And then they turn around and they sell it to the ransomware actors themselves, whether it's to the operator of the ransomware-as-a-service offering, or one of their affiliates, the people who sign up to deploy the ransomware.


"So when you're talking about an attack like this, you're generally talking about two different types of actors: one to get the initial access and one that turns around and sells it. Think of it like flipping houses, except you're flipping networks. You're turning that network over to a ransomware actor who's then going to deploy the ransomware. And they generally sell that initial access from anywhere from a couple thousand to 10, 15, even 100,000, depending on the type of access they're able to get -- so if you have administrator access -- and the size of the network. But you know, the thing is, if you're a ransomware actor, it's still a good investment. Because if you're confident you can deploy the ransomware you're gonna make way more than what you're paying for that initial access."

Liska explained he and other security experts are seeing four primary initial access vectors: credential stuffing/reuse; phishing; third-party; and exploitation, summarized in this graphic:

Phishing was the most popular vector throughout 2019 and 2020, Liska said, but RDP (Remote Desktop Protocol) -- "low hanging fruit" -- is gaining traction. Here are Liska's thoughts on RDP, third-party attacks and exploitation:

RDP: "Ransomware Deployment Protocol"?

"What we're starting to see in 2021 -- and we expect this to continue into 2022 -- is that credential stuffing and credential reuse attacks are becoming much more common," Liska said. "In fact, we kind of have a joke in the industry that RDP actually stands for ransomware deployment protocol, instead of what it actually means, only because RDP is one of the most common entry methods. Because it's so easy for these initial access brokers to just fire up an old laptop and start scanning, looking for open RDP connections, and then trying credential stuffing/credential reuse attacks. You have to keep in mind, there are literally billions of credentials that are being sold on underground markets.

"So while it seems like a credential reuse attack would be a challenge, it really isn't. You connect to the RDP server, you see what network it belongs to, you search on Genesis Market or one of the other markets for usernames and passwords that match it. And then you try those -- you get 100 of them -- you try them and unfortunately, most of the time, they will find a match, and they'll be able to gain access. That's why multi-factor authentication is so important for any system that's exposed to the internet."

Third-Party Attacks

"These are increasingly common," Liska said. "We really saw this take off in 2021. So a ransomware actor, or the initial access broker, gains access to a managed service provider, or a vendor of some kind. And rather than [deploy] the ransomware on that vendor, what they do is they use that access to jump to those partners. They find it's really easy, because you get to start right in the gooey center, and work your way out. So we're seeing a big increase in that. And again, that goes with the increasing sophistication of the ransomware actors."

Exploitation

"And then exploitation is also growing in popularity," Liska continued. "So, you know, in the last year, we catalogued more than 40 different exploits that were used by ransomware groups or the initial access brokers in order to gain that first access. So it's really, really important that you're patching. Again, anything that's public facing, especially anything that has proof-of-concept code released, has to be patched immediately."

RaaS: Ransomware-as-a-Service

One striking fact that speaks to the businesslike organization of ransomware is the number of RaaS operations that have sprung up around the globe, as Liska's chart below shows:

Cybersecurity specialist Rubrik, in a ransomware compendium, says of RaaS: "Criminals don't have to create their viruses anymore. Developers will create ransomware for a fee or share of the profits, creating a whole new industry that caters to ransomware." Also, the company noted a growing ecosystem of dedicated infrastructure has formed to support ransomware, including "bulletproof" hosts who will refuse to take criminal users offline, along with dedicated networks to help criminals avoid anti-virus software and move and hide virtual currency payments.


What is Edge Computing? – All You Need to Know | Techfunnel – TechFunnel

Edge computing is not just a methodology but also a philosophy of networking, one primarily focused on bringing computation closer to the sources of data. The objective is to reduce latency and bandwidth usage. To put it in layman's terms, edge computing means executing fewer processes in the cloud and migrating those processes to a more localized environment such as a user's computer, an IoT device, or an edge server. Doing so reduces the long-distance communication that otherwise occurs between the client and the server.

For any internet device, the network edge is the place where the device, or the local network that contains the device, communicates with the internet. The word edge is something of a buzzword, and its interpretation is rather loose. For instance, the computer of a user or the processor inside an IoT device can be treated as a network edge device; however, the router used by the user, or the ISP, is also counted as a network edge device. The point to note is that the edge of any network, from a proximity point of view, is very close to the device, unlike cloud servers, which may sit far away.

Historically speaking, early computers were large, bulky machines that could be accessed either via a terminal or directly. With the invention of the personal computer, which remained the dominant computing device for quite a long time, computing became more distributed: multiple applications were executed locally, and data was stored either on the local computer or in an on-premises data center.

With cloud computing, however, we are seeing a paradigm shift in the way computing is done. It brings a significant value proposition: data is stored in a vendor-managed cloud data center, or a collection of multiple data centers, and users can access that data from any part of the world through the internet.

But the flip side is that the distance between the user and the server location can introduce latency. Edge computing brings computation closer to the user, making sure that the data does not have to travel as far. In a nutshell, it moves processing closer to where data is produced and consumed.

Let us consider a situation where there is a building that has multiple high-definition IoT sensor cameras. These cameras just provide raw video footage, and they consistently stream the videos to a cloud server. On the server, the videos undergo processing through a motion-detection application that captures all movements and stores the video footage in the cloud server. Imagine the amount of stress that the building's internet infrastructure undergoes because of the high consumption of bandwidth due to heavy video footage files. On top of this, there is a heavy load on the cloud server, as it has to store all of these video files.

Now, if we move the motion-detection application to the network edge, each camera can harness the power of its internal computer to run the application and push footage to the cloud server only as needed. This brings about a considerable reduction in bandwidth usage, because a major chunk of the camera footage won't be required to travel to the cloud server.

Furthermore, the cloud server will now store only the critical video footage, unlike the entire dump in the previous case.
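To make the saving concrete, here is a rough sketch of the camera scenario above. The camera count, bitrate, and fraction of footage containing motion are all illustrative assumptions, not measurements.

```python
# Sketch of the bandwidth saving in the camera scenario above.
# Camera count, bitrate, and motion fraction are illustrative assumptions.

CAMERAS = 32
STREAM_MBPS = 8          # per high-definition camera
MOTION_FRACTION = 0.05   # share of footage that actually contains motion

always_stream = CAMERAS * STREAM_MBPS
edge_filtered = CAMERAS * STREAM_MBPS * MOTION_FRACTION

print(f"stream everything: {always_stream} Mbps uplink")  # 256 Mbps
print(f"filter at the edge: {edge_filtered:.1f} Mbps")    # 12.8 Mbps
```

Under these assumptions, running motion detection at the edge cuts the uplink requirement by a factor of twenty, and the cloud storage bill shrinks proportionally.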


The implementation and adoption of edge computing have brought about a paradigm shift in the domain of data analytics. More organizations depend upon this technology, particularly data-driven ones that require instant, lightning-fast results. There are many online platforms that provide certified courses on edge computing.

No matter what kind of edge computing is of interest to you, be it cloud edge, IoT edge, or mobile edge, it is important that the right solution can help in achieving the following organizational goals:


Google Cloud takes a gap year. It may come back with very different ideas – The Register

Opinion Taking a look at the latest financial results from Google/Alphabet made some of us do a double-take ... and not because of the $40bn+ in ad revenue.

If you read closely, you'll see that Google Cloud has lessened its habitual loss by extending the operational lifespan of its cloud servers by a year, and stretching out some of its other infrastructure for longer.

So what, you might say, wearily playing along in the office with hardware that gets refreshed less often than an octogenarian teetotaller. But this is Google Cloud, one of the headline players in the most important enterprise IT market of our time.

If it's saying that it's improving its competitive offer by not bothering to upgrade its core CPU farm, that says a lot about the cloud, the processor market, and the future of both.

You can see the cloud as it is sold to you, easing the capex/opex ratio, adding flexibility, dialled-in scale, and performance while reducing managerial overhead. In a different light, it's also a fantastic experiment in abstracting what IT actually means in business: paying other people to worry about all the boring stuff on your behalf.

Security, energy, hardware tending, meeting demand at a global scale or just giving you an instant few cores of server to run up an idea or proof of concept without you having to buy so much as a multiway plug.

So when Google says in effect it doesn't care about upgrading CPUs this time around, you can believe it. Issues like the chip shortage and global economic uncertainty will factor into the decision, but reports from the front line of the server industry indicate that if you've got the clout, you get your share. Google is not a bit player on the market; it could push ahead with its upgrade cycle with some adjustments if it wanted to, and say as much, but no. It is opting out.

This is even more significant because Google is one of the most processor-focused providers. It reveals the processors it uses for different classes of task, sometimes even letting you pick the ones you want, and sometimes they'll even be in the region and the available configuration that you fancy.

Compare that to the choices offered by Amazon AWS EC2, which are the number of cores per instance and whether you want multithreading. That's it, and that's much more typical of cloud service providers (CSPs). For most workloads, these firms don't compete on CPU. Storage tiers get the works with latency versus capacity versus cost, but compute performance? Acceptable is good enough. You will get virtual machines running on virtual CPUs, and you will like it.

This leaves the chip companies with some hard questions. They really can't shake the "performance" metric as the drug of choice, and it's still an easy sell to investors.

Headline numbers look good, HPC is always a happy place to be, and you can find plenty of other places where you need lots of performance grunt. General-purpose CPUs have to face off against GPUs and other hardware-optimised silicon there although massively parallel tasks mostly don't care about x86's legacy.

And, as Apple has proven with its M1 architecture, x86 legacy doesn't have to count for that much elsewhere these days. It's not that CSPs and data centres are gagging for M1s, which work so well because they are so highly evolved for Apple's market.

The x86 emulation overhead is perfectly bearable there while the ecosystem catches up with native versions; acceptable is good enough, and the path forward is clear.

But CSPs aren't gagging for the latest x86 magic either; they'll happily take it at the right price and at the right time, but they'll leave it for a while too. That's a gap, which is suddenly much more interesting. MacBook owners like battery life, but CSPs really don't like the new era of accelerating energy costs.

The ARM-ification of servers at scale has been predicted a few times now, although it's never been quite clear how you get there from here. The M1, however, is a great proof of concept: and the energy bills, specifically, are a great motivator to pay attention.

It is easy now to imagine what the M1's cloud-component cousin would look like. It could be a system-on-chip with a set of computing cores that are intrinsically efficient and can be even more so with the right workload, very tightly coupled to IO and integrated memory, but instead of being tuned for an Apple machine, the SoC would work very well for a particularly configured VM and work acceptably for others. There would be nothing here that would be beyond the talents of a competent design team, no matter where they work.

With a sea of these, a CSP could have a new, performant, and very competitively priced tier that rewards workloads that are optimised for the native, highly efficient modes, but one that would remain competitive for the older tasks that would otherwise be happy running on the older hardware already in the racks.

The CSPs would get enough wriggle room to price-nudge the clientele into the low-energy workload domain while still picking up a bit more margin.

The world has already moved into the sort of containerised, multi-platform, open-dev, automated regime with the necessary tools and techniques for making apps for such an architecture. That means not much novel engineering would be needed at the codeface.

The motivation is there, the methods are at hand, and the barriers to transition are much reduced. Maybe Google's gap year is an indication that business will not resume as usual.


Will the security benefits of cloud computing outweigh its risks in 2022? – Bobsguide

Stakeholders and executives of financial organisations remain on the fence about whether the advantages of cloud computing outweigh the potential risks of trusting sensitive information to remote servers. With the current demands on banks' IT infrastructure and front-, middle-, and back-office staff, and the implementation of Basel IV pushed to January 1, 2023, this year may be a good time to transition ever-growing IT infrastructure to the cloud.

Cloud computing is becoming increasingly attractive to, and indispensable for, financial organisations. The cloud has the potential to completely change the financial services landscape. Banks can take advantage of cloud technologies to improve their entire risk management systems and to access fast, high-end technologies on an as-needed basis. As a result of switching to cloud computing, many services can be delivered with reduced up-front capital outlay and IT expenses.

The current state of cloud computing allows financial organisations to access any modern core banking system offering without any loss in cost-effectiveness. This not only enables banks to save costs, but also increases data processing speed and improves the quality of the financial services they provide.

Despite possible initial hurdles in implementing cloud technologies, such as security risks, reliability issues, and problems with business continuity planning, the extra flexibility and scalability provided by the cloud far outweigh the negative aspects. If an organisation can ensure effective corporate governance and security by performing vigorous endpoint management and IT policy management, the cloud will provide many security benefits.

Some IT professionals still overlook the fact that data can be more secure in the cloud than in a physical data center. They continue to see data stored in the cloud as a vulnerable asset, raising security, privacy, and compliance concerns.

It is true that some engineers are so focused on getting to the cloud that they do not initially put the time into setting up security, governance, and auditing. In the best-case scenario, the organisation only has a permissions nightmare to deal with, even though incorporating proper governance will still be a painful and expensive process. In the worst case, neglecting security in a rush to the cloud can result in a data breach or the deletion of all IaC (Infrastructure as Code, used to automate cloud resource deployments) and backups.

The cloud is very different from a traditional data center, and banks need to approach their data management differently as a result. Otherwise, the cloud could end up being an extra expensive data center should financial firms choose to throw their legacy technology into it.

Cloud computing has the resources to ensure high levels of security and prevent data breaches, but it is imperative an organisation implement vigorous endpoint management and IT policy management to gain the maximum benefit.

Unlike traditional data centers, which typically rely on physical defenses to prevent unauthorized access to data, public clouds, such as Amazon Web Services or Microsoft's Azure, allow server-side 256-bit encryption to protect files. These files remain encrypted when they are transferred within the network or saved to cloud storage.

Data objects sent to the cloud server by the client are also deduplicated and compressed. If a third party were to gain access to the data, they would therefore have to decrypt the objects without the AES (Advanced Encryption Standard) 256-bit key, and then uncompress and reassemble them into readable files.
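The deduplicate-and-compress step described above can be illustrated with a small sketch. This is a toy model under assumed simplifications (fixed-size chunks, SHA-256 digests as chunk keys, zlib for compression); real cloud storage services use their own chunking, compression, and encryption schemes:

```python
import hashlib
import zlib

def store_chunks(data: bytes, chunk_size: int = 4096) -> dict:
    """Split data into fixed-size chunks, compress each chunk, and
    deduplicate by storing identical chunks only once, keyed by digest."""
    store = {}   # digest -> compressed chunk (each unique chunk kept once)
    order = []   # sequence of digests needed to rebuild the original data
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in store:          # dedup: skip chunks we already hold
            store[digest] = zlib.compress(chunk)
        order.append(digest)
    return {"chunks": store, "order": order}

def restore(blob: dict) -> bytes:
    """Reassemble the original bytes from the deduplicated chunk store."""
    return b"".join(zlib.decompress(blob["chunks"][d]) for d in blob["order"])

payload = b"hello cloud " * 2000        # highly repetitive, so it dedups well
blob = store_chunks(payload)
assert restore(blob) == payload          # round-trip is lossless
```

An intruder who obtained only the `chunks` dictionary would hold compressed fragments with no ordering information, which is the reassembly problem the paragraph above alludes to (on top of any encryption).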

When high-performance access to a file is required, the cloud infrastructure can be modified accordingly by deploying virtual or physical cache servers. As with traditional file servers and NAS (Network-Attached Storage) devices, these servers cache only the active files needed for local, high-speed access, thus reducing storage needs and costs.

Cloud storage data and metadata are encrypted and unavailable in their at-rest format, so a cache server is required to access them. This server, in turn, provides its own additional security, such as closed unused protocol ports, no open back-end access, additional encryption between the client and the directory server, and self-encrypting drives.

The same reliable authentication procedures and access tools as in an on-premises data center can be used for cloud deployments. For instance, access to remote data can be provided through standard file-sharing protocols such as SMB (Server Message Block) 1, 2, and 3 or NFS (Network File System) v3 and v4, in exactly the same way as if traditional file servers or NAS (Network-Attached Storage) devices were used.

Additionally, AD (Active Directory) permissions, which are controlled by the bank's system administrator, manage data access. An authenticated user can access only the data that is visible to them; the rest is protected through group- or user-specific policies. Moreover, support for Active Directory trust relationships allows the creation of logical links and the application of policies between users and domains within the system.

The cloud easily surpasses the capabilities of traditional data storage when it comes to protecting data against accidental or intentional mistakes and system failures that would otherwise lead to data corruption.

Writing data to cloud storage is done using a WORM (Write Once Read Many) model, in which new data is always appended to the existing data and never replaced or overwritten. The system creates snapshots of the data at assigned intervals so that any set of data can be instantly recovered should any server-side or related problems occur.
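The WORM model can be sketched as an append-only log with labelled snapshots. The class and method names below are illustrative, not any vendor's API:

```python
import copy

class WormStore:
    """Minimal sketch of a write-once-read-many store: records are only
    ever appended, and labelled snapshots allow point-in-time recovery."""

    def __init__(self):
        self._log = []          # append-only record log
        self._snapshots = {}    # snapshot label -> log length at that moment

    def append(self, record):
        self._log.append(record)            # never overwrite or delete

    def snapshot(self, label: str):
        self._snapshots[label] = len(self._log)

    def recover(self, label: str):
        """Return the data exactly as it was at the labelled snapshot."""
        return copy.deepcopy(self._log[: self._snapshots[label]])

store = WormStore()
store.append({"txn": 1, "amount": 100})
store.snapshot("eod-monday")
store.append({"txn": 2, "amount": -40})   # later writes never touch old data
assert store.recover("eod-monday") == [{"txn": 1, "amount": 100}]
```

Because nothing is ever overwritten, a bad write (or a ransomware-style mass overwrite) cannot destroy earlier state; recovery is simply a matter of replaying the log up to a snapshot.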

Third-party regulations and certifications ensure data is secure. All public clouds, such as AWS, Azure, or GCP, are required to go through extensive third-party certifications, e.g., HIPAA, HITECH, SOC 2, PCI, and ITAR, to ensure all data is properly protected.

Consequently, they meet important audit and compliance requirements. Should a financial institution transfer its data to the cloud, it will meet all these requirements automatically.

In the past, many data and file security solutions (such as firewalls and antivirus software) only supported traditional NAS (Network-Attached Storage) software to detect and stop cyber threats. Today, the same integration capabilities are available when using cloud-based file storage.

Cloud solutions now allow high levels of flexibility when it comes to integration. This provides banks with the ability to find and isolate sensitive data, visualise data access, adopt and manage a least privilege access model, and streamline compliance activities.

Moreover, it allows unstructured data to be securely stored by financial institutions in public or on-premises cloud storage, where the cache server, as an extra layer of protection, processes the actively used data whenever high-performance access is required.

Working with an on-premises deployment creates a false sense of security because of the perception that the network itself is protected by a physical boundary. However, only the most sensitive networks operate in an air-gap mode without any outside access. Of course, providing remote access opens systems up to certain cybersecurity risks, but in the cloud there is also less risk of misconfiguration, and those risks are more easily mitigated using standard security infrastructure, features, and audit tools.

While cybersecurity risks exist in both on-premises and cloud environments, cloud systems are better protected than on-premises or data center deployments. It is notable that many of the recent major hacks occurred in on-premises networks or hybrid environments rather than in purely cloud-based systems.

An optimally running cloud solution reduces cybersecurity risks through the use of a standard set of cloud services and technologies, which present less penetration risk than non-standard on-premises or hybrid networks.

Banking risk management functions will receive tangible benefits from cloud computing, but leaders of banks' risk departments still face significant challenges when migrating to the cloud. With the increased number of cloud adoptions in finance, the importance of day-one security, governance, and auditing should not be downplayed by a financial organisation's management. Failing to take these factors seriously will undoubtedly lead to the disruption of business operations and could damage the organisation's reputation owing to financial and legal issues.

To prevent disasters and secure a bank's data in the cloud more effectively, multiple layers of security should be set up. Large banks and other financial organisations are better served by setting up risk management functions with a private cloud provider, while small- and medium-sized businesses can take advantage of public cloud providers to grow their business and connect data securely. For highly secure operations a private cloud is preferable; if the upper layer of an organisation's operations runs in a public cloud, a hybrid cloud solution might also be a good option.

Moreover, hosting a cloud storage system in your own data center within a security perimeter can be just as efficient for your organisation. Private cloud solutions deployed in a private data center possess all the benefits of public clouds, including 256-bit encryption, compression, deduplication, and modular building blocks that can scale at a comparatively low cost.

By partnering with CompatibL, financial institutions can ensure they are always in control of their sensitive corporate and private information, and are compliant with the current and upcoming regulatory capital requirements.

Link:
Will the security benefits of cloud computing outweigh its risks in 2022? - Bobsguide

Software supply chain: the problem with outsourcing everything – Evening Standard

In the last decade, the global tech landscape has developed at lightning speed, and many of us are struggling to keep up. Most of us have got to grips with the basics (Facebook, Slack, Zoom) but terms like "the cloud" are harder to understand.

Far from a niche area of tech, the cloud is becoming a crucial tool for businesses, often with unwanted implications.

Every day, organisations looking to modernise their data practice are moving to cloud hosting and Software-as-a-Service (SaaS) based products. For those not well-versed: traditionally, a website would be hosted on a single server, usually in a data centre. Cloud hosting, on the other hand, sees a companys data distributed across different servers, usually in different places, which are all connected to form a network. This network is called the cloud.

Many companies migrate to these systems as they're easy to manage and can integrate with other complementary products across an organisation, whilst also receiving software updates in real time.

Across any given industry there are a small set of market leaders who provide the software, and the largest organisations in the world will naturally prefer to use these top solutions. They are easy to trust, have strong industry presence, and justification for procuring them is smooth for internal budget holders and external shareholders.

What we don't always think about are the macro-implications: we now have a large number of organisations heavily reliant on a single vendor for a common product.

Take a look at any accounting software used across major companies. If it's managed by a cloud-based third party on behalf of the company, then a single attack on that software provider would impact not only the accounting system of the individual company, but also those of everyone else using the same product.

Global industries putting all their software eggs in one vendors basket is a very attractive proposition for an attacker: a single attack can scale across many different companies.

In the case of ransomware, attackers have asked themselves the following economic question: why spend time attacking 100 firms individually, receiving only a small amount of capital from each, when I can target the biggest business-to-business software providers in an industry and charge a huge amount to restore their systems so they can continue to deliver services to their clients?

Their answer has been clear: weve started to see a global increase in software supply chain attacks. The impact on business, from down-time and lost revenue, is in the billions of dollars.

But what can organisations do? Look towards the space agencies, for starters.

Redundancy and failover (the ability to switch automatically and seamlessly to a reliable backup system) have always been key concepts in space missions: everything is built to a high standard, but it is assumed that what can go wrong, will go wrong.

What does that mean for the rest of us that spend most of our time below the outer atmosphere? First, organisations need to have catalogued all their software products, mapping these to their dependent business functions. We need to know which are mission critical for us to continue trading, as these create the greatest business risk. These should be prioritised, back-up providers evaluated, and a redundancy plan put in place to ensure any impact is minimal in case of a failure.

This needs to be built from the ground up. When organisations run their procurement process for these mission critical systems, a back-up provider should be identified and a failover deal should be negotiated in case the primary supplier goes down.
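A failover arrangement like this can be sketched as a simple priority list of providers. The provider functions below are hypothetical stand-ins for real vendor integrations:

```python
def call_with_failover(providers, request):
    """Try each provider in priority order; fall back to the next on failure.
    `providers` is a list of (name, callable) pairs, primary first."""
    errors = []
    for name, send in providers:
        try:
            return name, send(request)
        except Exception as exc:   # in production, catch narrower error types
            errors.append((name, exc))
    raise RuntimeError(f"all providers failed: {errors}")

# Hypothetical primary vendor that is down, and a healthy backup:
def primary(req):
    raise ConnectionError("primary SaaS vendor unreachable")

def backup(req):
    return f"processed {req}"

used, result = call_with_failover(
    [("primary", primary), ("backup", backup)], "invoice-42"
)
assert used == "backup"   # traffic fails over transparently
```

The negotiation work happens long before this code runs: the backup provider in the list only exists because procurement identified and contracted one, which is exactly the point made above.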

Lastly, but something which is often overlooked, back-ups need to be stored in a format usable by a different product, instead of being tied to a single product.

Attackers gravitate towards the greatest reward for their input, and supply chain attacks on software providers are an attractive way to scale the impact of their work. Fortunately, organisations can put in place sufficient failovers to mitigate many of the risks they face when their supply chain becomes the target.

Read more:
Software supply chain: the problem with outsourcing everything - Evening Standard

Mimecast : How Secure Is the Cloud with Cloud Security Tools? – marketscreener.com

Organizations everywhere are turning to cloud computing to reduce costs and improve mobility, flexibility and collaboration. Despite rapid adoption, however, 96% of cybersecurity professionals say they are at least moderately concerned about the security of cloud computing, according to a report from ISC2.[1]

How secure is cloud computing? And what can organizations do to fortify it? Answering these questions begins with understanding common cloud computing vulnerabilities and the cloud security policies, processes and tools to reduce them.

Cloud computing enables the delivery of computing services on demand over the internet. For businesses, these services can range from databases and storage to customer intelligence, data analytics, human resources platforms and enterprise resource planning. Cloud computing is attractive to many organizations because it can provide significant cost savings - organizations typically subscribe to and pay only for the cloud services they use, which can save them time and money otherwise spent on infrastructure and IT management.

The other benefit of cloud computing is enhanced security. In most cases, the cloud is more secure than on-premises data centers. When a company operates and manages its own on-premises data center, it's responsible for procuring the expertise and resources to appropriately secure its data from end to end. Cloud-based providers, however, offer a higher level of security than many businesses can match or could afford, particularly for growing organizations or ones with limited financial resources.

While organizations can benefit from improved security by migrating to the cloud, that doesn't mean they're free from threats. Importantly, cloud security is a shared responsibility between cloud service providers and their customers. Discussed below are some of the top risks that a cloud environment poses and what organizations can do to protect against these vulnerabilities:

Misconfiguration Creates Most Cloud Vulnerabilities

While cloud service providers often offer tools to help manage cloud configuration, the misconfiguration of cloud resources remains the most prevalent cloud vulnerability, which can be exploited to access cloud data and services, says the U.S. National Security Agency.[2] Misconfiguration can impact organizations in many ways, making them more susceptible to threats like denial of service attacks and account compromise.
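A first line of defence against misconfiguration is auditing resource settings against a known-good baseline. The setting names below are illustrative, not any provider's actual configuration keys:

```python
# Baseline policy: each setting name maps to the value it must have.
BASELINE = {
    "public_access_blocked": True,
    "encryption_at_rest": True,
    "logging_enabled": True,
}

def audit(resource_name: str, config: dict) -> list:
    """Return a finding for every setting that deviates from the baseline.
    A missing setting counts as a deviation, since defaults may be unsafe."""
    return [
        f"{resource_name}: {key} is {config.get(key)!r}, expected {wanted!r}"
        for key, wanted in BASELINE.items()
        if config.get(key) != wanted
    ]

findings = audit("storage-bucket-1", {
    "public_access_blocked": False,   # the classic misconfiguration
    "encryption_at_rest": True,       # logging_enabled is absent entirely
})
assert len(findings) == 2
```

Running a check like this continuously, rather than once at deployment, is what turns the provider's configuration tooling into an actual control.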

Poor Access Control Gives Attackers Privileges

Poor access control results when cloud resources use weak authentication methods or include vulnerabilities that bypass authentication methods. This can allow attackers to elevate privileges and compromise cloud resources.

Employees Pose Risks

Companies that have difficulty tracking how employees are using cloud computing services risk becoming vulnerable to both external attacks and insider security threats. End users can access an organization's internal data without much trouble, so they can steal valuable information or be exploited by attackers to do similar harm.

Insecure APIs Are Becoming a Major Attack Vector

Many APIs require access to sensitive business data, and some are made public to improve adoption. APIs that are implemented without adequate authentication and authorization, however, pose risks to organizations. Insecure APIs are becoming a major attack vector for malicious actors.
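Adequate authentication need not be elaborate to illustrate the point. The sketch below shows a minimal per-client token check using a constant-time comparison; the client names and tokens are hypothetical:

```python
import hmac

# Hypothetical registry of API clients and their secret tokens.
API_TOKENS = {"analytics-service": "s3cr3t-token"}

def authenticate(client_id: str, presented_token: str) -> bool:
    """Reject unknown clients, and compare tokens in constant time
    to avoid leaking information through timing differences."""
    expected = API_TOKENS.get(client_id)
    if expected is None:
        return False
    return hmac.compare_digest(expected, presented_token)

def handle_request(client_id, token, payload):
    if not authenticate(client_id, token):
        return {"status": 401, "error": "unauthorized"}
    return {"status": 200, "data": f"processed {payload}"}

assert handle_request("analytics-service", "s3cr3t-token", "q")["status"] == 200
assert handle_request("analytics-service", "wrong-token", "q")["status"] == 401
```

An API published without even this much gating, the paragraph's "inadequate authentication and authorization" case, lets any caller reach the 200 path.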

Since cloud security is a shared responsibility between the cloud provider and the customer, sharing arrangements need to be well understood. While a provider would typically be responsible for safeguarding the infrastructure and for patching and configuring the physical network, for example, the customer's responsibilities could include managing users, their access privileges, and data encryption. The following cloud security tools help organizations fortify their environment:[3]

Why Cloud Security Policies Are Important

A cloud security policy is a formal guideline developed to ensure safe and secure operations in the cloud. Without one, a company risks security breaches, financial and data loss, and other costly consequences including fines for regulatory noncompliance.

A cloud security policy should include:

Cloud computing can provide important opportunities and cost savings for organizations. While security remains a prevalent concern, understanding the most common threats and putting in place the proper policies, processes and tools can help companies protect themselves and their data.

[1] "2021 Cloud Security Report," ISC2

[2] "Mitigating Cloud Vulnerabilities," National Security Agency

[3] "What Is Cloud Security?", IBM

Continued here:
Mimecast : How Secure Is the Cloud with Cloud Security Tools? - marketscreener.com

ClearOne : AV Practitioner’s Guide to the Cloud – Part 3 of 3 – marketscreener.com

Part 3: What AV Practitioners Need to Know When Incorporating Cloud into AV/IT Infrastructure

In Part I and Part II of the Guide to the Cloud series, we discussed the benefits of AV integrators using the cloud, from enabling remote management of AV/IT infrastructures to saving companies money and, in turn, creating revenue. A cloud implementation, however, can take on many different roles, and the degree of manual assistance required depends entirely on the level of usage or assigned role, the site setup required, software and installation assistance, and the degree of customization desired.

For example, AV practitioners who want the cloud software to take on a simple read-only supervision role just need computer operation information, an email address, and details on website browser usage. An AV management/help-desk role for the cloud software, on the other hand, would only require some level of understanding of AV device setup, network connectivity, and subnets.

That said, when using cloud-based software for AV/IT administration, materials, cooperation, and help are required from the site IT staff.

The requirements from the IT staff vary based on the actions one wishes to manage via the cloud management software. IT support on the implementation of cloud management software can include:

However, ClearOne's CONVERGENCE Cloud AV Manager has features that reduce the need for extensive IT oversight and enable a more straightforward setup. ClearOne CONVERGENCE Cloud AV Manager has a simplified Local Agent setup, so no detailed IT skills are needed beyond onsite computer/server setup and software installation. Device-to-cloud registration happens through the Local Agent server, making it easy to register all ClearOne Pro Audio devices.

In addition, CONVERGENCE Cloud AV Manager handles email notifications itself by default. However, if desired, an organization may use its own email server, in which case its settings would be required from the IT staff.

When it comes to security, Local Agents should be operated behind a network firewall. They do, however, require specific ports to be open on the Local Agent server's firewall to operate properly. Through CONVERGENCE Cloud AV Manager, this task is now automated and somewhat customizable during installation.

Also, a strong understanding of network technologies such as FTP, TCP, UDP, and DHCP is typically required to set up and maintain a Cloud system. However, CONVERGENCE Cloud AV Manager does not require this understanding. Therefore, while some knowledge of these technologies can be helpful to better understand some security, network, AV device, and discovery options, it is not necessary when using CONVERGENCE software to integrate the cloud into AV infrastructure.

ClearOne's CONVERGENCE Cloud AV Manager is built for the integrator and supports a smooth and largely autonomous implementation.

The ClearOne CONVERGENCE Cloud AV Manager remote management software solution will be available for free until 2023. If you're interested in learning more about how this software can benefit your clients, click here.

See original here:
ClearOne : AV Practitioner's Guide to the Cloud - Part 3 of 3 - marketscreener.com

3 ETFs to invest in cloud computing – Marketscreener.com

If you want to know more about the cloud computing market and its major players, I suggest you take a look at Tommy Douziech's excellent article where he details how the cloud works, the market and the future prospects of this technology, all accompanied by a complete thematic list of cloud players.

In short, cloud computing is a tool that allows you to benefit from an IT infrastructure, a development platform or even a ready-to-use service, all online, without the need for hardware at home or in your company. It runs on servers managed by the provider, who takes care of the security and maintenance of the hardware.

The cloud allows companies and users to operate infrastructure and services online, without additional hardware costs, via a simple annual or monthly subscription. It is like renting a PC that sits not in your office but on a server elsewhere, whose exact location you never need to know.

First Trust Cloud Computing ETF (SKYY): This ETF is provided by the Illinois-based investment management company First Trust. With more than $5 bn in total net assets, SKYY is the largest cloud fund. It employs a modified equal-weighted index limited to 80 companies, aiming to track the performance of the ISE CTA Cloud Computing Index. Companies listed in the index must meet these three criteria:

Since its inception in May 2011, the NAV average annualized total return is +17.39% (the S&P 500's average annualized total return over the period is +15.16%), and +10.55% over one year. The expense ratio is 0.60%.
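Figures like these can be sanity-checked with the standard compound-growth formula: annualized return = (end NAV / start NAV)^(1/years) − 1. The NAV values below are purely illustrative, not the fund's actual figures:

```python
def annualized_return(start_nav: float, end_nav: float, years: float) -> float:
    """Compound annual growth rate implied by a start and end NAV."""
    return (end_nav / start_nav) ** (1 / years) - 1

# Hypothetical example: a fund growing from 20.00 to 100.00 over 10 years
# compounds at roughly +17.5% per year.
r = annualized_return(20.0, 100.0, 10.0)
assert abs(r - 0.17462) < 1e-4
```

Note that an average annualized (geometric) return smooths over drawdowns, which is why a fund can show a strong since-inception figure alongside a weak trailing one-year number.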

Here are the top holdings as of 02/07/2022:

Global X Cloud Computing ETF (CLOU): CLOU is the second-largest cloud computing ETF, with $1.2 bn in assets under management. Provided by the well-known New York-based ETF producer Global X, it is based on the Indxx Global Cloud Computing Index, which offers exposure to exchange-listed companies in developed and emerging markets that are positioned to benefit from the increased adoption of cloud computing technology. Since its launch in April 2019, the NAV average annualized total return is +23.97%, and -3.26% over one year. The ETF is composed of 35 companies and the total expense ratio is 0.68%.

Here are the top 10 holdings:

Visit link:
3 ETFs to invest in cloud computing - Marketscreener.com