Category Archives: Cloud Servers

Opinion: Actually, the new Mighty browser is the Chrome Cloud Tabs feature I've been waiting for – 9to5Google

Mighty is a new browser project that puts Google Chrome in the cloud and streams it to your PC. While I don't know if Mighty will end up making sense for my personal situation, it does perfectly resemble a product I've found myself hoping Google itself would build as a feature in Chrome. How worthwhile is Mighty itself, though? And will (should) Google copy it?

First, the "should the web be apps or documents?" debate. We all know where various Silicon Valley companies land on this. If you don't, there's basically the Google side, which is that the web is a glorious operating system built on technologies that naturally supplant the need for many native applications (hence things like Chrome OS and Instant Apps), and the Apple side, which is that the web should primarily be lightweight, static documents and native apps are the best place to build more involved use cases.

The convergence of these two lines of thinking is where we all live today. Web apps have taken over the world, but depending on the platform you use or the task, there are lots of native applications to use as well. But the reality is that a good percentage of people do use the web as an OS, whether that's good or not. Electron apps, for example, which are basically web apps in a native macOS app container, are pervasive on the Mac, and very controversial.

That world isn't without its ills. Enter Mighty, a new app that wants to make web apps feel more like highly optimized native apps and eliminate the various bottlenecks that make using lots of apps in Chrome at the same time a drag. So the first question, for those people who run into this issue occasionally (I am one of them), is: does Mighty (as pitched today) realistically or practically solve two problems, namely running lots of web apps in Chrome being 1) a RAM hog and 2) a battery hog?

(As an aside, there's actually something of a parallel with Electron apps, which basically solve a problem (needing to build it once, fast, and ship it everywhere) by putting things in a container. Mighty solves the limitations of Chrome's resource hogginess by putting it in a container. This probably makes the entire idea of Mighty a non-starter for lots of native-app purists, but not for me, really.)

From what I can tell, Mighty does what it says on the tin, which is to offload all the resource-hungry parts of Chrome to the cloud, meaning the only thing your local PC has to do is stream a video feed. Mighty obviously does all the tricks necessary to send your keyboard and mouse input to the cloud, as well as hook itself into all the normal browser connections to other areas of your desktop (default browser, links, downloads, etc.).

Mighty has its own drawbacks, though. For one, it's expensive (supposedly $30+/mo), and two, as mentioned, it sends all your keystrokes and the entirety of your browser activity through Mighty's servers. Mighty is also up against ongoing hardware innovation that makes this less of a problem for people over time (see: M1 battery efficiency and super fast RAM swaps).

Given all these various factors, my initial impression is that Mighty does indeed make sense and solves a real problem (today) for a tiny subset of Chrome users: someone running four instances of Figma, swapping between four Slack channels, and editing 15 Google Docs at once, who wants to do all that on relatively underpowered hardware. A newer Mac could handle all of that surprisingly well, but a 2015 MacBook Pro with 8GB of RAM, still a perfectly usable machine, would struggle.

But even then, Mighty is a monthly subscription mostly competing with the idea of just buying better hardware. The M2 Macs are coming later this year, and if the initial run of M1 machines is anything to go by, a lot of these "my computer gets bogged down and the battery drains because of too many tabs running web apps" complaints are on the verge of ending for many people! (And the complaint never existed for desktop users, or those already using devices at the tippy-top of the specs pyramid.)

I think it's safe to say that the kind of people who would be reading this article, or would even know what Mighty is, are those least likely to need it in the near future.

In tandem with my answer to the first question of whether Mighty actually solves a problem, which I think is a "yes, sort of, probably, for some very specific subset of web professionals in certain circumstances, and even then it's maybe not economical," the next question is: given that, does Mighty make more sense as a feature or a product?

Here, I land firmly in the "Mighty makes way more sense as a feature" camp. A Chrome feature, to be exact.

One big thing is scaling. Mighty is a startup and can only scale so fast (not fast at all)! They apparently have some kind of proprietary backend and have to be careful not to frontload tons of server hardware before the demand exists! Mighty's founder admits as much on their Product Hunt page:

We're kind of this hybrid software and hardware company. We must buy and capacity plan building lots of custom servers (unlike pure software) and must do so across the world to achieve low latency. That means it's tough to scale instantly world-wide without Google-level resources.

Suhail

Google already has the scale! I often feel like my 2018 $2,500 MacBook Pro with 16GB of RAM doesn't handle my web app multitasking very well in terms of battery and performance, and one of my very first thoughts when Stadia launched was "Why doesn't Google let me run a Chrome tab in this?"

I know it's not popular to root for Google, a tech monolith, and against Mighty, a startup that's clearly put in tons of money and effort trying to solve a real pain point, but I really can't help but do it here.

This is definitely a feature I want, but I just can't see myself (or anyone, really): 1) paying a hefty subscription for this in the long run, 2) trusting a startup with all their web keystrokes, or 3) needing this thing badly enough to choose it over simply upgrading hardware, especially when this service also needs a constant high-speed internet connection.

So if I were to guess the fate of Mighty today, that would be it. It will either be acquired or sherlocked by Google within 12 months. It'll be called Chrome Cloud Tabs, it'll run on Stadia infrastructure, and it'll be tied to your Google One subscription. It's awesome to imagine I could open a new tab in Chrome that runs on Stadia servers and streams to my desktop for those rare contexts where I'd be OK with the trade-offs.

I'm not going to get into the technicalities and possible reasons this hasn't happened already, but it's probably some combination of the privacy concerns, the economics, and various technical constraints that are keeping Google from doing it the right way. Or maybe Google has done the due diligence and come away with the conclusion that it's not a good long-term bet for one or more of the reasons I outlined.

But whenever the stars do align, if they align, it's certainly a feature I'd use. Not all the time, but I'd use it, and I'd maybe even upgrade my Google One subscription for the luxury.

The missing link securing the supply chain – TechNative

The recent SolarWinds hack marks an important milestone for the cyber landscape

Not only does it demonstrate the growing sophistication of supply chain attacks; equally, it highlights the urgent need for appropriate and comprehensive combative solutions such as network detection and response (NDR).

Physical security solutions stretch far beyond the perimeter of a building.

While access control systems and security personnel may often be deployed to control comings and goings, other technologies are regularly deployed within the confines of a property as a more holistic way of monitoring activity and ensuring occupant safety. From identification cards and surveillance systems to automated central locking systems and alarms, you don't have to look far to find a multitude of measures in place.

The question is, why should this be any different for cyber security?

Data breaches are becoming increasingly hard to spot, the recent SolarWinds supply chain incident being a case in point.

During the attack, hackers successfully embedded malicious code in an update scheduled for SolarWinds' Orion software platform, used by many high-profile organisations including Fortune 500 companies and US government agencies. When the update was released, 18,000 firms installed it, providing the hackers with the means to further infiltrate their chosen networks.

The incident has demonstrated the growing sophistication and complexity of cyber attacks. Not only was trusted software from a major software enterprise leveraged to allow attackers to hide in plain sight, but the dwell time that they achieved is alarming. It is said that Orion updates launched as early as March 2020 had been infected, yet the breach was not publicly reported until December 2020, nine months later.

Such a lengthy period shows the competency of hackers in staying incognito.

In supply chain attacks, access is typically first gained through phishing techniques, such as an email that has been crafted to look like it's from a reliable internal or external source.

If successful, the attacker gains access and, unless impeded, can move laterally throughout a network while avoiding detection by exploiting native tools in what is called a "living off the land" strategy. In the case of the SolarWinds breach, the threat actors compromised the firm's Microsoft 365 environment, for example.

Living off the land is often slow and methodical, so as not to raise suspicion. Investigations into the SolarWinds breach showed that commercial cloud servers were used to mask communications in otherwise monotonous traffic by acting as the command-and-control centres for the attack. Further, signature-based detection solutions, reliant on historical data, did not raise any alarm, as newly created malware was used.

The case for network detection and response tools

If nothing else, the Orion compromise has shown the shortcomings of signature-based detection technologies and the need for organisations to adopt a more resolute cyber protection posture.

The assumption that security breaches occur only when attackers exploit a technical vulnerability is simply outdated. Indeed, the Orion instance demonstrates how breaches may be executed through highly effective, insidious engineering techniques.

Old-school defence solutions such as signature-based anti-virus software, sandboxing, IDS and firewalls are no longer adequate, providing little to no protection beyond a breach. Likewise, security operations centres (SOCs) still focus on identifying anomalies in user activities through logs that are often too simplistic to effectively identify sophisticated lateral movement.

Given the intricacy of the SolarWinds attack, comprehensive detection solutions capable of identifying extremely subtle changes are needed in the modern environment.

So, what is the answer?

There is a strong case to be made for network detection and response (NDR) tools. Powered by cutting-edge artificial intelligence and analytics capabilities, these technologies can flag any sign of suspicious activity by offering holistic oversight of an entire organisation's IT and cloud network infrastructure.

They can be the difference between shutting down an attack before the exfiltration of data and extensive lateral movement and compromise.

NDR's use of AI is crucial. Capable of analysing vast amounts of data in a matter of moments, these tools provide real-time early warning and continuous visibility across the attack progression without any dependency on IoCs, signatures, or other model updates.
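
The article doesn't describe how such analytics work internally, but the signature-free idea can be illustrated with a toy baseline-deviation check over per-host traffic volumes. Everything below (field names, window size, and the 4-sigma threshold) is an illustrative assumption, not any vendor's actual detection logic.

```python
# Toy illustration of signature-free anomaly detection: flag hosts whose
# outbound byte volume deviates sharply from their own rolling baseline.
# Window size and threshold are illustrative assumptions only.
from collections import defaultdict, deque
from statistics import mean, stdev

WINDOW = 24           # number of past observations kept per host
THRESHOLD_SIGMAS = 4  # how far from the baseline counts as "suspicious"

history = defaultdict(lambda: deque(maxlen=WINDOW))

def observe(host: str, outbound_bytes: int) -> bool:
    """Record one traffic sample and return True if it looks anomalous."""
    past = history[host]
    anomalous = False
    if len(past) >= 8:  # need some baseline before judging
        mu, sigma = mean(past), stdev(past)
        if sigma > 0 and (outbound_bytes - mu) / sigma > THRESHOLD_SIGMAS:
            anomalous = True
    past.append(outbound_bytes)
    return anomalous

# Example: a host that normally sends ~1 MB per interval suddenly sends 500 MB.
normal_traffic = [900_000, 1_100_000, 950_000, 1_050_000, 1_000_000,
                  980_000, 1_020_000, 990_000, 1_010_000, 1_000_000]
for sample in normal_traffic:
    observe("10.0.0.5", sample)
print(observe("10.0.0.5", 500_000_000))  # True: flagged without any signature
```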

They are able to see through evasion tactics and detect the emergence of tunnels immediately, giving SOC teams the best opportunity of tracking and stopping attackers early in the kill chain.

For technology providers like SolarWinds, NDR solutions can provide an effective means of preventing source code from being tampered with. For end-users, meanwhile, they will prevent lateral movement should a Trojanised product slip through the net.

Supply chain attacks remain a challenge. As recently as February 2021, an ethical hacker managed to breach the systems of 35 firms including Microsoft, Apple, PayPal, Shopify, Netflix, Yelp, Tesla and Uber via a novel software supply chain attack.

They are highly lucrative and there is no doubt they will persist through 2021 and beyond. Being proactive with the deployment of appropriate technologies such as NDR is, therefore, critical.

About the Author

Greg Cardiet is Senior Director of Security Engineering at Vectra. Vectra enables enterprises to immediately detect and respond to cyberattacks across cloud, data center, IT and IoT networks. As the leader in network detection and response (NDR), Vectra uses AI to empower the enterprise SOC to automate threat discovery, prioritization, hunting and response. Vectra is Security that thinks. http://www.vectra.ai

Worldwide Hyper Converged Infrastructure (HCI) Industry to 2025 – Shifting Workload Towards Public Cloud is Driving Growth – ResearchAndMarkets.com -…

The "Global Hyper Converged Infrastructure (HCI): Size, Trends & Forecasts (2021-2025 Edition)" report has been added to ResearchAndMarkets.com's offering.

This report provides an analysis of the global hyper converged infrastructure market, with a detailed analysis of market size and growth of the industry. The analysis includes the market by value and regional value of the hyper converged infrastructure market.

Moreover, the report also assesses the key opportunities in the market and outlines the factors that are and will be driving the growth of the industry. Growth of the overall global hyper converged infrastructure market has also been forecasted for the years 2021-2025, taking into consideration the previous growth patterns, the growth drivers and the current and future trends.

Company Coverage

Region Coverage

North America

Europe

Asia-Pacific

MEA

Latin America

On the basis of hypervisor, the HCI market can be segmented into three categories: VMware, KVM and Hyper-V. On the basis of application, the market is categorized into four sections: virtual desktop infrastructure, server virtualization, data protection and remote office/branch office.

Usually, three steps are followed before deploying a hyper converged infrastructure (HCI): first, measure and define the workload; second, select the right infrastructure; and lastly, plan and deploy the HCI. There are many advantages associated with HCI implementation, but the top three are flexibility, predictability and simplicity, which make hyper converged infrastructure very reliable.

The global hyper converged infrastructure market increased at a significant CAGR during the years 2016-2020, and the market is projected to rise tremendously over the next four years, i.e. 2021-2025. The hyper converged infrastructure market is expected to increase due to many growth drivers such as shifting workloads towards the public cloud, a growing HCI adoption rate in emerging countries, demand from the healthcare industry, etc. Yet the market faces some challenges, such as limitations due to dual-socket servers, challenges of HCI implementation, etc. The global hyper converged infrastructure market is expected to observe some new market trends such as the shift to subscription-based contracts, the move towards edge computing, etc.

Key Topics Covered:

1. Executive Summary

2. Introduction

2.1 Hyper Convergence Infrastructure (HCI): An Overview

2.1.1 Hyper Convergence Infrastructure: Definition

2.1.2 HCI Vs. CI

2.1.3 HCI Industry Synopsis

2.1.4 HCI Industry: Based on the Hypervisor Type

2.1.5 HCI Industry: Based on the Application

2.1.6 Requisites for Deploying HCI

2.1.7 Advantages of Using Hyper Converged Infrastructure (HCI)

3. Global Market Sizing

3.1 Global Hyper-Converged Infrastructure Market: An Analysis

3.1.1 Global Hyper-Converged Infrastructure Market by Value

3.1.2 Global Hyper-Converged Infrastructure Market by Region (North America, Europe, Asia-Pacific (APAC), Middle East & Africa (MEA) and Latin America)

4. Regional Market Analysis

4.1 North America Hyper-converged Infrastructure Market: An Analysis

4.1.1 North America Hyper-converged Infrastructure Market by Value

4.2 Europe Hyper-converged Infrastructure Market: An Analysis

4.2.1 Europe Hyper-converged Infrastructure Market by Value

4.3 Asia Pacific Hyper-converged Infrastructure Market: An Analysis

4.3.1 Asia Pacific Hyper-converged Infrastructure Market by Value

4.4 MEA Hyper-converged Infrastructure Market: An Analysis

4.4.1 MEA Hyper-converged Infrastructure Market by Value

4.5 Latin America Hyper-converged Infrastructure Market: An Analysis

4.5.1 Latin America Hyper-converged Infrastructure Market by Value

5. Market Dynamics

5.1 Growth Drivers

5.1.1 Shifting Workload Towards Public Cloud

5.1.2 Growing HCI Adoption Rate by Emerging Countries

5.1.3 Demand from HealthCare Industry

5.1.4 Data Center Consolidation

5.1.5 Optimistic Organization Behavior For HCI Installation

5.1.6 Data Protection with HCI Adoption

5.2 Challenges

5.2.1 Limitations due to Dual-socket servers

5.2.2 Challenges of HCI Implementation

5.3 Market Trends

5.3.1 Shift to Subscription-Based Contracts

5.3.2 Benefits from HCI Adoption

5.3.3 Moving Towards Edge Computing

6. Competitive Landscape

6.1 Global Hyper-converged Infrastructure Market: Competitive Analysis

6.1.1 Global HCI Market Player by Share

6.1.2 Global HCI Software Market Player by Share

7. Company Profiling

For more information about this report visit https://www.researchandmarkets.com/r/erh12g

Contacts

ResearchAndMarkets.com
Laura Wood, Senior Press Manager
press@researchandmarkets.com
For E.S.T Office Hours Call 1-917-300-0470
For U.S./CAN Toll Free Call 1-800-526-8630
For GMT Office Hours Call +353-1-416-8900

Talking Servers With Inspur And Intel – The Next Platform

Any time a server maker comes into the global market and bypasses Cisco Systems, Lenovo, and IBM to become the third largest seller of machines in the world, you should pay attention. This is precisely what Inspur Information, the server unit of Chinese IT supplier Inspur Group, has accomplished, and it has done so with a combination of joint engineering with customers, high volume economics, and tight partnerships such as the one that the company has with server chip and chipset maker Intel.

Inspur has more than tripled its server revenues, to $7.7 billion in 2020 from levels it had only four years earlier, driven in part by key server supplier relationships with the big three hyperscalers in China (Alibaba, Baidu, and Tencent) as well as an expanding presence in North America, Europe, and Africa. In early 2018, according to market data from IDC, Inspur was roughly the same size as a growing Lenovo, which has since plateaued, and in a dead heat with IBM and Cisco, both of which Inspur has left in the dust. If current trends persist, in another four years Inspur will rival the server market leaders Dell Technologies and Hewlett Packard Enterprise.

One of the secrets to Inspur's success, as Rong Shen, vice president and general manager of the server product line at Inspur Information, tells The Next Platform, is its Joint Design Manufacturing, or JDM, approach, which brings together the best elements of an OEM and ODM under the same roof to serve customers. In the wake of the launch of the Ice Lake Xeon SP processors from Intel, we sat down with Shen and Anurag Handa, vice president in the Cloud and Enterprise Solutions group of Intel's Data Platforms Group, to talk about these new processors and the Inspur M6 server line that uses them.

We wanted to know more than feeds and speeds about the new processors and the systems that use them. We wanted to understand how much differentiation Inspur can truly offer compared to other server makers, and how much better bang for the buck it can deliver, too, as part of the overall value equation that it is bringing to bear to eat market share.

We also had a long talk about the challenges of being a server maker in the modern era. The days when you could build a two-socket server in a 1U and a 2U form factor, with the difference mainly being memory, storage, and I/O capacity, are over. These days, systems have to be tuned for virtualized, containerized, AI, HPC, and even edge applications, and they have very different requirements. This means server families are broader than they have been in the past, and it probably also means customers have to make tougher choices and manage a diverse mix of machines. Given all of this, what advice do Inspur and Intel have for customers? Find out by watching the interview.

Commissioned by Inspur

Excelero Launches NVMesh on Azure, Addressing Gaps in Public Cloud Storage with 25x Performance for IO-Intensive Workloads – Business Wire

SAN JOSE, Calif.--(BUSINESS WIRE)--Excelero, a disruptor in software-defined storage for IO-intensive workloads such as GPU computing for AI/ML/DL, commercial HPC and data analytics, has added public cloud storage support to its flagship NVMesh elastic NVMe software-defined storage solution. Available first for the Microsoft Azure platform, and later for other major public clouds, NVMesh expands public cloud capabilities by addressing the massive gaps experienced by thousands of organizations that face major performance challenges while attempting to transition their demanding IO-intensive workloads to public clouds at a reasonable cost.

By leveraging Excelero's field-proven scalable, elastic, low-latency software-defined storage on standard cloud compute elements, beta use has shown that NVMesh on Azure provides up to 25x more IO/s and up to 10x more bandwidth to a single compute element, while reducing latencies by 80%, from a truly protected storage layer. Using standard instances for storage on cost-effective NVMe drives, enterprises can get the most value out of their data leveraging their cloud pricing and discounts. For converged environments, with applications running on the same virtual machines that run the storage, total cost of ownership (TCO) is further improved since the storage is embedded into the compute at almost no additional cost.

"Many of our customers require low latency and high throughput storage for their IO-intensive workloads," said Aman Verma, product manager, HPC at Microsoft Azure. "Excelero's NVMesh on Azure's InfiniBand-enabled H- and N-series virtual machines provides an exciting new scalable, protected storage option for several high growth segments of the market, including HPC and AI workloads."

With Excelero NVMesh, data scientists achieve efficient and cost-effective model training through high bandwidth and ultra-low latency and rates of millions of file accesses per second. Database and analytics workloads and high performance computation can be run on CPUs and GPUs without stalling for I/O and at a reasonable cost. The same methods can be employed with the same software stack deployed on-premise and on public clouds.

With data protection becoming essential for IO-intensive applications, Excelero NVMesh on Azure protects data by mirroring across local NVMe drives. The solution allows data to be spread across availability zones for an additional level of protection. Self-healing and advance warning functionality assist in ensuring data longevity. Enterprises have no concerns over data compliance and security as data is stored on nodes within their account.

In container-native settings, Excelero's Kubernetes CSI driver and industry-leading IBM Red Hat OpenShift integration provide a second simple means of rolling out NVMesh on Azure, enabling hybrid cloud deployments, for instance for burst-oriented workloads.

"Gaps in public cloud storage capabilities prevent many demanding applications from running on public clouds, regardless of the obvious cost and scalability advantages, and force enterprises to endure a latency penalty should they go there," said Eric Burgener, research vice president in the Infrastructure Systems, Platforms and Technologies Group at IDC. "These gaps also are preventing public cloud providers from fueling their own growth. Solutions that remove these barriers are emerging, and they are an exciting development to watch."

"Too many of our customers are struggling with IO-intensive workloads that they would prefer to move to the public cloud, yet public cloud providers are grappling to deliver the cost-performance their customers need with these storage workloads," said Yaniv Romem, CEO of Excelero. "Excelero's new NVMesh on Azure bridges the gap between what the market offers and what enterprises require, helping them avoid costly overprovisioning of storage so they can embrace hybrid- and multi-cloud strategies assuring performance, agility and cost control. Look for continued innovation from us in this space across the coming months."

Excelero NVMesh on Azure is now publicly available. For more information, visit Excelero at http://www.excelero.com.

About Excelero

Excelero is the market leader in distributed block storage software. The company delivers Elastic NVMe software that powers AI training, commercial HPC and analytics workloads at any size and performance scale. With its partners, Excelero enables customers to massively improve ROI across their entire infrastructure, using standard servers in the data center and standard instances in the public clouds, maximizing GPU and NVMe utilization, minimizing overheads and reducing software license costs.

Excelero's NVMesh is distributed block storage that connects CPUs and GPUs to NVMe flash to create a significant improvement in price/performance, from entry level to any scale. NVMesh was designed as a storage layer that eradicates data bottlenecks so teams can access data at any speed in any location, in public or private clouds. NVMesh delivers up to 20x faster data processing for multi-server, multi-GPU compute nodes when working with massive datasets for machine learning, deep learning and complex analytical workloads. Follow us on Twitter @ExceleroStorage, on LinkedIn or visit http://www.excelero.com.

Red Hat, the Red Hat logo, and OpenShift are trademarks or registered trademarks of Red Hat, Inc. or its subsidiaries in the U.S. and other countries.

The next big thing in cloud computing? Shh It’s confidential – Help Net Security

The business-driven explosion of demand for cloud-based services has made the need to provide highly secure cloud computing more urgent. Many businesses that work with sensitive data view the transition to the cloud with trepidation, which is not entirely without good reason.

For some time, the public cloud has actually been able to offer more protection than traditional on-site environments. Dedicated expert teams ensure that cloud servers, for example, maintain an optimal security posture against external threats.

But that level of security comes at a price. Those same extended teams increase insider exposure to private data, which leads to a higher risk of an insider data breach and can complicate compliance efforts.

Recent developments in data security technology, in chips, software, and the cloud infrastructure, are changing that. New security capabilities transform the public cloud into a trusted data-secure environment by effectively locking data access away from insiders and external attackers.

This eliminates the last security roadblock to full cloud migration for even the most sensitive data and applications. Leveraging this confidential cloud, organizations for the first time can now exclusively own their data, workloads, and applications, wherever they work.

Even some of the most security-conscious organizations in the world are now seeing the confidential cloud as the safest option for the storage, processing, and management of their data. The attraction to the confidential cloud is based on the promise of exclusive data control and hardware-grade minimization of data risk.

Over the last year, there's been a great deal of talk about confidential computing, including secure enclaves or TEEs (Trusted Execution Environments). These are now available in servers built on technologies such as Amazon Nitro Enclaves, Intel SGX (Software Guard Extensions), and AMD SEV (Secure Encrypted Virtualization).

The confidential cloud employs these technologies to establish a secure and impenetrable cryptographic perimeter that seamlessly extends from a hardware root of trust to protect data in use, at rest, and in motion.

Unlike the traditional layered security approaches that place barriers between data and bad actors or standalone encryption for storage or communication, the confidential cloud delivers strong data protection that is inseparable from the data itself. This in turn eliminates the need for traditional perimeter security layers, while putting data owners in exclusive control wherever their data is stored, transmitted, or used.

The resulting confidential cloud is similar in concept to network micro-segmentation and resource virtualization. But instead of isolating and controlling only network communications, the confidential cloud extends data encryption and resource isolation across all of the fundamental elements of IT, compute, storage, and communications.

The confidential cloud brings together everything needed to confidentially run any workload in a trusted environment isolated from CloudOps insiders, malicious software, or would-be attackers.

This also means workloads remain secure even in the event a server is physically compromised. Even an attacker with root access to a server would be effectively prevented from seeing data or gaining access to data and applications, affording a level of security traditional micro-segmentation can't today.

A strong argument can already be made that reputable major cloud providers deliver both the resources and focus needed to secure a vast majority of internal IT infrastructure. But data-open clouds bring the risk of greater data exposure to insiders, as well as the inability to lock down a trusted environment under the total control of the CISO.

Data exposure has manifested itself in some of the most publicized breaches to date. CapitalOne became the poster child for insider data exposure in the cloud when its data was breached by an AWS employee.

Implementing a confidential cloud eliminates the potential for cloud insiders to have exposure to data, closing the data attack surface that is otherwise left exposed at the cloud provider. Data controls extend wherever data might otherwise be exposed, including in storage, over the network, and in multiple clouds.

OEM software and SaaS vendors are already building confidential clouds today to protect their applications. Redis recently announced a secure version of their high-performance software to run over multiple secure computing environments, credibly creating what may be the world's most secure commercial database.

Azure confidential computing has partnered with confidential cloud vendors to enable the secure formation and execution of any workload over existing infrastructure without any modification of the underlying application. Similarly transparent multi-cloud Kubernetes support isn't far behind.

Taking advantage of confidential computing previously required code modifications to run applications. This is because initial confidential computing technologies focused on protecting memory. Applications had to be modified to run selected sensitive code in protected memory segments. The need to rewrite and recompile applications was a heavy lift for most companies, and isn't even possible in the case of legacy or off-the-shelf packages.

A new "lift and shift" implementation path enables enterprises to create, test, and deploy sensitive data workloads within a protected confidential cloud without modifying or recompiling the application. Nearly all cloud providers, including Amazon, Azure, and Google, offer confidential cloud-enabling infrastructure today.

Confidential cloud software allows applications and even whole environments to work within a confidential cloud formation with no modification. Added layers of software abstraction and virtualization have the advantage of making the confidential cloud itself agnostic to the numerous proprietary enclave technologies and versions developed by Intel, AMD, Amazon, and ARM.

A new generation of security vendors has simplified the process to implement private test and demo environments for prospective customers of the public cloud. This speeds the process to both enclave private applications and generate full-blown confidential cloud infrastructure.

Data security is the last barrier to migrating applications to the cloud and consolidating IT resources. The resolution of earlier cloud security flaws went a long way toward enabling the migration of all but the most sensitive applications and data. Eliminating data vulnerability opens a broad new opportunity for businesses to simply deploy a new and intrinsically secure hosted IT infrastructure built upon the confidential cloud.

Washington State Law Creates a Pathway to the Cloud – Government Technology

Almost 10 years after the state constructed a $255 million data center in Olympia, Wash., in 2011, newly signed legislation will allow agencies to switch to the cloud as early as July.

According to House Bill 1274, one of the reasons for the switch is that the state's current IT infrastructure has insufficient capacity to handle increased demand due to the pandemic. The bill's sponsor, state Rep. David Hackney, D-11, said the legislation would set up a framework for the state's current information technology infrastructure to move to the cloud.

"The data center currently uses legacy servers," Hackney said. "If they break down, have to be repaired, or need to be replaced, it can be very expensive."

Another problem these servers present is a lack of scalability, limiting opportunities to expand.

"If we wanted to expand right now, we'd need more servers," Hackney said. "By switching to the cloud, it would not only provide more opportunities to expand, but it would also be more secure and cost-efficient."

In fact, it could potentially save the state $150 million over five years, according to Hackney. The catch, however, is that such a move would require shutting down the data center and solely utilizing the cloud to achieve this.

"The concern in shutting down the data center is that it would lead to job loss," Hackney said.

However, the bill stipulates that it would create a new cloud transition task force to oversee the migration process and provide job training for legacy data center workers rather than outsourcing to an outside company. As for maintaining the cloud-based system, a third party will oversee and manage it.

"Washington Technology Solutions (WaTech) will likely pick the provider," Hackney said. "The idea is that WaTech will be in charge of this process."

"WaTech did identify this as a key recommendation in our cloud assessment report, which we are working to implement," a WaTech spokesperson said. "The state Legislature is still in session, and there may be additional changes before the session adjourns."

Derek Puckett, WaTech's legislative affairs director, expanded on the issue, saying the cloud assessment report has identified key recommendations such as implementing a cloud center of excellence and working with cloud data brokers.

However, Puckett said, identifying what this process will look like needs to happen first. State agencies will decide whether to switch to the cloud or continue storing data in the state data center.

"The switch is not going to happen overnight," Puckett said. "Not all agencies and systems are going to be cloud-ready."

However, he said, it gives state agencies the opportunity to do so if it's right for them.

The bill was signed by Gov. Jay Inslee earlier this month and will take effect July 25.

Katya Maruri is a staff writer for Government Technology. She has a bachelor's degree in journalism and a master's degree in global strategic communications from Florida International University, and more than five years of experience in the print and digital news industry.

CISA experiments with cloud log aggregation to ID threats – FCW.com

The Cybersecurity and Infrastructure Security Agency has pilot programs underway with multiple departments and agencies to experiment with aggregating cloud logs to a warehouse, which in turn will feed the agency's data analysis efforts.

CISA wants to "see if it's possible to send their logs to our aggregation point and make sense of them as a community together," Brian Gattoni, CISA's chief technology officer, said on Wednesday at an event hosted by FCW. "We've run pilots through the [Continuous Diagnostics and Mitigation] program team, through our capacity building team to look at end point visibility capabilities to see if that closes the visibility gap for us."

So far what the agency has learned, Gattoni said, is that "technology is rarely the barrier. There's a lot of policy and legal and contractual and then just rote business process things to work out to make the best use of features in technology that are available to us."
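
The article doesn't detail how the pilots move data, but the general pattern of agencies shipping structured cloud audit logs to a shared aggregation point can be sketched as below. The collector URL, auth token, and log fields are hypothetical placeholders, not a description of CISA's actual pilot infrastructure.

```python
# Minimal sketch of shipping cloud audit logs to a central aggregation point.
# The collector URL, token, and log schema are hypothetical placeholders.
import gzip
import json
import urllib.request

COLLECTOR_URL = "https://logs.example.gov/ingest"  # hypothetical aggregation point
AUTH_TOKEN = "REPLACE_ME"                          # hypothetical credential

def ship_batch(events: list[dict]) -> int:
    """Compress a batch of JSON log events and POST it to the collector."""
    payload = gzip.compress(
        "\n".join(json.dumps(e) for e in events).encode("utf-8")
    )
    req = urllib.request.Request(
        COLLECTOR_URL,
        data=payload,
        headers={
            "Content-Type": "application/x-ndjson",
            "Content-Encoding": "gzip",
            "Authorization": f"Bearer {AUTH_TOKEN}",
        },
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return resp.status

# Example batch: events exported from a cloud tenant's audit log.
batch = [
    {"agency": "example-agency", "event": "SignIn", "user": "jdoe", "ip": "203.0.113.7"},
    {"agency": "example-agency", "event": "MailboxExport", "user": "svc-backup"},
]
# ship_batch(batch)  # left commented out: the collector above is a placeholder
```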

Network visibility is a hot topic among government officials and lawmakers in the wake of the intrusions involving SolarWinds and Microsoft Exchange servers. CISA officials in public settings have made clear the government's current programs were not designed to monitor the vectors that Russian intelligence agents exploited during their espionage campaign.

At the same time, top intelligence chiefs such as Gen. Paul Nakasone, the head of the National Security Agency and U.S. Cyber Command, have warned foreign operatives are exploiting the fact the U.S. intelligence community is unable to freely surveil domestic infrastructure without a warrant.

Nakasone has also signaled he will not make any request for new authorities to monitor domestic networks, despite several lawmakers inviting him to do so.

This has prompted CISA to begin seeking out new capabilities that give the cybersecurity watchdog a clearer picture on individual end points in agency networks.

"For this reason, CISA is urgently moving our detective capabilities from that perimeter layer into agency networks to focus on these end points, the servers and workstations where we're seeing adversary activity today," Eric Goldstein, a top CISA official told House lawmakers at a March hearing.

Gattoni said during his panel discussion that some cloud providers already have the infrastructure built into their service that would aid CISA in gathering the security information it wants to aggregate, but he also said the federal government can't depend on that always being the case.

"There's a lot of slips between the cup and the lip when it comes to data access rights for third party services, so we at CISA have got to explore the use of our programs like [CDM] as way to establish visibility and also look at possibly building out our own capabilities to close any visibility gaps that may still persist," he said.

About the Author

Justin Katz covers cybersecurity for FCW. Previously he covered the Navy and Marine Corps for Inside Defense, focusing on weapons, vehicle acquisition and congressional oversight of the Pentagon. Prior to reporting for Inside Defense, Katz covered community news in the Baltimore and Washington D.C. areas. Connect with him on Twitter at @JustinSKatz.

Contain yourselves: Scality object storage gets cloud-native Artesca cousin – Blocks and Files

Scality has popped the lid on ARTESCA, its new cloud-native object storage, co-designed with HPE, that is available alongside its existing RING object storage product.

Artesca configurations start with a single Linux server and then scale out, whereas the RING product requires a minimum of three servers. The Kubernetes-orchestrated ARTESCA container software runs on x86 on-premises servers, with HPE having an exclusive licence to sell them for six months.

A statement from Randy Kerns, senior strategist and analyst at the Evaluator Group, said: "Scality has figured out a way to include all the right attributes for cloud-native applications in ARTESCA: lightweight and fast object storage with enterprise-grade capabilities."

Scality chief product officer Paul Speciale told us: "We believe object storage is emerging as primary storage for Kubernetes workloads, with no need for file and block access."

ARTESCA uses the S3 interface, and storage provisioning for stateful containers is done through its API. There is no POSIX. ARTESCA has a global namespace that spans multiple clouds and can replicate its object data to S3-supporting targets and to Scality's RING storage. Speciale said Scality is working on an S3-to-tape interface, with tape-held data included in the ARTESCA namespace.
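
Because ARTESCA exposes a standard S3 interface, any S3 client should be able to address it. The sketch below uses boto3 pointed at a hypothetical on-premises endpoint; the URL, credentials, and bucket name are assumptions for illustration, not Scality-supplied values.

```python
# Sketch of reading and writing objects against an S3-compatible endpoint such
# as ARTESCA. Endpoint, credentials, and bucket name are hypothetical; only the
# S3 API calls themselves are standard.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://artesca.example.internal",  # hypothetical ARTESCA endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

bucket = "cloud-native-app-data"  # assumed bucket name

# Write an object, then read it back through the same S3 interface a
# Kubernetes workload would use.
s3.put_object(Bucket=bucket, Key="sensor/2021-05-01.json", Body=b'{"temp": 21.4}')
obj = s3.get_object(Bucket=bucket, Key="sensor/2021-05-01.json")
print(obj["Body"].read())

# List what the application has stored so far.
for item in s3.list_objects_v2(Bucket=bucket).get("Contents", []):
    print(item["Key"], item["Size"])
```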

The software can integrate with Veeam, Splunk, Vertica and WekaIO via S3, provisioning data services to them. Existing RING or cloud data can be imported into ARTESCA's namespace.

The software features multi-tenancy, and its management GUI supports multiple ARTESCA instances, both on-premises and in multiple public clouds (AWS, Azure, GCP).

ARTESCA has built-in metadata search and workflows across private and public clouds.

Scality says it has high performance with ultra-low latency and tens of GB/s of throughput per server, although actual performance numbers are still being generated in the HPE lab and in actual deployments. We can expect them to be available in a couple of months.

The product has dual-layer erasure coding, local and distributed, to protect against drive and server failure. If a disk fails, the server has enough information to self-heal the data locally, with no time-sapping network IO needed. If a full server fails, the distributed codes can self-heal the data to the remaining servers in the cluster, which work in parallel to accelerate the recovery process. Lecat said this scheme makes high-capacity disk drive object storage reliable.
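
Scality hasn't published the exact codes used, but the self-healing idea can be illustrated with the simplest possible erasure code, a single XOR parity chunk: if any one chunk is lost, it can be rebuilt from the survivors. Real deployments use wider codes that tolerate multiple simultaneous failures; this is only a conceptual sketch.

```python
# Conceptual sketch of erasure-coded self-healing using single XOR parity:
# any one lost chunk can be rebuilt from the remaining chunks. Production
# systems use wider codes that survive multiple failures.
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encode(chunks: list[bytes]) -> bytes:
    """Compute one parity chunk over equally sized data chunks."""
    return reduce(xor_bytes, chunks)

def rebuild(surviving: list[bytes], parity: bytes) -> bytes:
    """Reconstruct the single missing data chunk from survivors plus parity."""
    return reduce(xor_bytes, surviving, parity)

data = [b"AAAA", b"BBBB", b"CCCC", b"DDDD"]  # four data chunks on four drives
parity = encode(data)                        # fifth chunk on a fifth drive

lost_index = 2                               # pretend drive 2 failed
survivors = [c for i, c in enumerate(data) if i != lost_index]
print(rebuild(survivors, parity) == data[lost_index])  # True: chunk recovered
```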

ARTESCA has been developed to support many Kubernetes distributions. It should run with VMware's Tanzu system and with HPE Ezmeral, although Lecat adds that both need to be validated.

Target application areas include cloud-native IoT edge deployments, AI and machine learning and big data analytics. There is an initial supportive ecosystem including CTERA, Splunk, Veeam and Veeams Kasten business, Vertica and WekaIO.

There are six ARTESCA configurations available from HPE, suitable for core and edge data centre locations and including Apollo and ProLiant servers in all-flash and hybrid flash/disk versions.

Chris Powers, HPE VP and GM for collaborative platforms and big data, said in a statement: "Combined with a new portfolio of six purposefully configured HPE systems, ARTESCA software empowers customers with an application-centric, developer-friendly, and cloud-first platform with which to store, access, and manage data for their cloud-native apps no matter where it lives, in the data centre, at the edge, and in public cloud."

ARTESCA is available through HPE only for the first six months, with one-, three- and five-year subscriptions starting at $3,800 per year, which includes 24/7 enterprise support. HPE is also making ARTESCA available as a GreenLake service.

Scality is following MinIO in producing cloud-native object storage. Speciale said: "MinIO is very popular but doesn't have all the enterprise features needed." Being lightweight, ARTESCA fits in with edge deployment needs, and Speciale hopes that this will help propel it to enterprise popularity.

Speciale said that Scality's RING software has a 10-year roadmap and is not going away. He also said ARTESCA will support the coming COSI (Container Object Storage Interface). CSI is focused on file and block storage.

We can envisage all object storage providers converting their code to become cloud-native at some point in the future. ARTESCA, and MinIO, will surely have a heck of a lot more competition in the future.

The Coolest Big Data Systems And Platform Companies Of The 2021 Big Data 100 – CRN

A Systemic Approach

Business analytics and data visualization applications, database software, data science and data engineering tools: all are critical components of a comprehensive initiative to leverage a business' data assets for competitive gain.

But all those components run on hardware servers, operating software and cloud platforms that pull all those pieces together.

As part of the 2021 Big Data 100, CRN has compiled a list of major system and cloud platform companies that solution providers should be aware of. They include major computer system vendors like Dell Technologies, Hewlett Packard Enterprise and IBM that provide servers and operating software packaged for big data applications; cloud service providers like Amazon Web Services, Google and Snowflake that offer cloud-based big data services; and leading big data software developers including Microsoft, Oracle and SAP.

This week CRN is running the Big Data 100 list in slide shows, organized by technology category, with vendors of business analytics software, database systems, data management and integration software, data science and machine learning tools, and big data systems and platforms.

(Some vendors market big data products that span multiple technology categories. They appear in the slideshow for the technology segment in which they are most prominent.)
