Category Archives: Cloud Servers

EDA moves to the cloud – eeNews Europe

If you are an engineer of a certain age, you will remember the engineering workstation. Those high-cost, high-performance machines were in short supply and were used to run the leading-edge electronic design automation (EDA) tools.

But the workstation also drove chip and graphics architecture. Silicon Graphics drove the MIPS architecture (and was subsumed into Cray and now HP Enterprise), Sun Microsystems had its own SPARC architecture before the takeover by Oracle, and Digital Equipment (DEC, RIP) had its Alpha chip and the ARM-based StrongARM, developed by a team whose members went on to Apple and arguably helped produce the M1 chip powering the latest MacBook.

However, these mighty machines (for their time) were overtaken by the network. Client-server architectures allowed higher performance servers to run the EDA tools.

These servers were consolidated into server farms, with racks of processors on-site, and now we are seeing the next stage in the evolution into the cloud.

The big three EDA suppliers have had cloud in their strategic roadmaps for several years, and this work has come to fruition this year. Earlier this month, Arm showed the importance of the cloud.

"Arm is moving the majority of its EDA workloads to AWS as part of our effort to reduce our global datacentre footprint by at least 45% and our on-premises compute capabilities by 80% as we complete our migration to AWS," said Rene Haas, President of the IP Group at Arm. "We have already realized a 6x improvement in performance time for EDA workflows on AWS and see the potential for increasing throughput by 10x."

As part of this move, AWS used the VCS Fine-Grained Parallelism (FGP) technology from Synopsys running on Arm-based Graviton2 servers. This enables accelerated development and verification of breakthrough connectivity technology and SoCs.

Link:
EDA moves to the cloud - eeNews Europe

One Option You Shouldn't Overlook When Setting Up a Security Camera – The New York Times

If you own or plan to buy a home security camera or smart doorbell camera, you need a place to hold all of the footage it captures. That means you need to decide where video will be saved once the camera captures it, because where the video goes determines how long it lasts, how secure it is, what it costs, and how easy it is to access.

There are two types of video storage for Wi-Fi security cameras. Local storage saves all your video recordings in the camera, on a networked device, or even on network-attached storage (NAS), so all your video stays local, inside your home. Cloud storage is the other option, in which the camera transfers all your recordings over the internet to store them on servers that you can access from almost anywhere (that's what the cloud part refers to).

We suggest that most people use cloud storage for their security cameras, or that they select cameras offering both local and cloud options (such as our current top-two indoor camera picks). Although local storage is usually cheap (just the cost of the memory card), and in going local you don't have to worry about who might potentially view your footage, there are a few specific reasons we recommend only those cameras that offer some type of cloud service.

If you merely want to spot mice running across the kitchen counter or view what your dog is doing during the day, local storage should be just fine. However, if someone breaks into your home and steals the camera card, or the whole camera, you won't have a record of the incident.

Cloud storage keeps all your footage safely away from prying eyes (or hands). Of course, if the power goes out, if a child yanks the power cord, or someone simply steals your camera, you're out of luck no matter what type of storage you use. However, cloud storage at least ensures that you have a video clip, right up until your camera shuts down, that you can view in an app or a web browser.

Every type of storage option has limits. If you use local storage, you're limited by the amount of space on the memory card or NAS device. For example, the Eufy 2K Indoor Cam can support a microSD card up to 128 GB, which provides enough space to hold about 30 hours of 2K-resolution video or 36 hours' worth at 1080p resolution. Wyze recommends using a 32 GB card with the Wyze Cam v2; that translates to 48 hours of 1080p video or 168 hours of 720p video. That may sound like a lot, but it can disappear quickly depending on how often your camera gets motion triggers; you may end up having hours of clips of you mowing the lawn, say, or of kids playing in the family room. Typically when a card hits its limit, the camera automatically deletes the oldest video clips to make room for new ones, which means if you aren't checking it once or twice a week, you may miss something.
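
For context, a quick back-of-the-envelope script (a sketch that assumes decimal gigabytes and continuous recording, not any vendor's own math) shows the average video bitrate each of those capacity/duration figures implies, which is why the same card size yields very different hours on different cameras:

```python
# Average bitrate implied by the capacity/duration figures quoted above.
def implied_bitrate_mbps(card_gb: float, hours: float) -> float:
    """Return the average video bitrate (megabits/second) implied by a card size and runtime."""
    megabits = card_gb * 1000 * 8          # decimal GB -> megabits
    return megabits / (hours * 3600)

figures = [
    ("Eufy, 128 GB, 2K, ~30 hours", 128, 30),
    ("Eufy, 128 GB, 1080p, ~36 hours", 128, 36),
    ("Wyze, 32 GB, 1080p, ~48 hours", 32, 48),
    ("Wyze, 32 GB, 720p, ~168 hours", 32, 168),
]
for label, gb, hrs in figures:
    print(f"{label}: ~{implied_bitrate_mbps(gb, hrs):.1f} Mbps average")
```

Run as is, it suggests the Eufy figures assume a much higher bitrate (roughly 8 to 10 Mbps) than the Wyze figures (well under 2 Mbps), so how far a card stretches depends heavily on the camera's encoder settings.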

With a cloud storage plan, you think in terms of time instead of storage size. For instance, Wyze's Cam Plus service stores your footage for 14 days (your recordings delete automatically after that). Although most cloud storage plans don't support 24/7 recording (Google's Nest Aware being a notable exception), Wyze says you could theoretically store 14 days of 24/7 video clips if your camera is constantly being triggered to record.

It often pays to, well, pay. Companies want to get you on the hook for that recurring revenue, so they often include exclusive features and other perks to entice new subscribers and keep existing customers happy.

For instance, for $2 per month per camera (or $15 annually), Wyze's Cam Plus service includes person detection, so your camera can be more selective when it records clips and sends you alerts. Similarly, the Arlo Smart service provides people, vehicle, and package alerts, as well as activity zones. And some cameras, like those from the Google Nest line or newer Arlo models, don't provide any type of storage, person alerts, or the ability to share clips without a subscription.

Although we're never eager to subscribe to yet another paid service, we think cloud storage for security cameras is a service worth having, even if you don't use it 365 days a year. Compared with most service fees, the pricing for cloud storage tends to be relatively low. Current cloud plans cost anywhere from $2 to $6 per month for a single camera (and usually offer a discount for multiple cameras) and include several privacy and safety features.

See more here:
One Option You Shouldn't Overlook When Setting Up a Security Camera - The New York Times

This is the best Google Cloud Print alternative – Android Police

Google Cloud Print is an excellent tool for people who still own older printers that don't have a network connection of their own. It allows you to hook up your printer to your computer via USB and then use that computer as your printing server, so there's no need to throw out a perfectly fine device. However, it looks like all good things must end when it comes to Google products, and the Cloud Print shutdown is already looming: the service is turning off its servers on January 1, 2021. Google has a list of recommended replacements, but almost all of these are aimed exclusively at businesses, except for one: PaperCut and its Mobility Print service.

While PaperCut has lots of paid products in store for businesses that have to manage a plethora of printers for a multitude of different user groups, the company's free Cloud Print replacement looks promising. It currently only supports remote printing for Chrome OS and Windows, so you can only print from your phone when you're on your home network, but that's still better than being left stranded without any solution at all.

To get started, you need to download the Mobility Print server from PaperCut's website for your operating system. You're taken to a local server address once you've installed it, where you need to create a user name, password, and an organization name; write those down or save them to your password manager. The software then automatically recognizes printers connected to your computer and makes them available for everyone using Mobility Print on your network. There are instructions on how to install clients on computers and phones on your local server's web address, but we want to highlight the mobile setup here.

On Android, you download and start the Mobility Print app, check if it's set up as a print service in your system settings via a link provided in the app, and then you're already all set. You'll find printers connected to your computer in the printing dropdown, accompanied by the PaperCut icon. When you select them as a target, you'll see a warning that your documents might pass through servers, but that's just a boilerplate statement; your data doesn't actually leave your local network at all.

As mentioned, the biggest caveat with Mobility Print is the lack of iOS and Android support for remote printing when you're not connected to your home Wi-Fi. A spokesperson told us that "there are a few things we still want to add to optimise the experience on Mac, Windows, and Chromebooks before moving to mobile," so we probably can't expect this to be ready by January 2021, when Google Cloud Print dies. However, if you only ever need to print stuff from your phone while you're on your Wi-Fi anyway, this limitation shouldn't bother you too much. In any case, Mobility Print should be even faster than Google's solution when you use it locally since jobs don't have to pass through servers before arriving on your printer.

Let's preface this by emphasizing that you only need to go through the following process when you want to print remotely while you're not at home. If you only want to print from your home network, you can skip this section.

To set up Mobility Print's remote printing solution for Chromebooks and Windows computers, there's an Enable Cloud Print button on your computer's local printer server interface. A popup will inform you that your print jobs are private and secure by using WebRTC to create peer-to-peer connections. Click Enable to proceed, and you're taken to a website where you have to configure an invite link. If you want to set up remote printing permanently, tick the "no expiration" box under Printing expiration date. You can do the same for the invite link expiration date if you want to be able to keep using the same invite link for more devices, but you can always generate a new one, which might be the more secure option.

Once you've generated the invite link, you can send it to your other Chromebook or Windows computer; hit the copy to clipboard button and write yourself an email or use a service like Pushbullet. On a Chromebook, you're taken to a website from which you can install the Mobility Print Chrome extension. You should then be able to start printing remotely right away; a handshake with your server is established via the personalized link you've used to open the website.

The setup screen for Chromebooks.

You can test whether your installation was successful by disconnecting from your home network and seeing if you're able to see the printer you've set up via Mobility Print. If you have a tethering plan, you can use your phone's hotspot to do that real quick. Otherwise, you might have to go outside and look for a public Wi-Fi network. You can easily spot the Mobility Print targets in your printing list thanks to the attached green PaperCut logo.

Printing remotely works!

You don't have to worry about opening any ports in your firewall because Mobility Print uses the same ports as video conferencing apps. Its reliance on existing standards and peer-to-peer connections is also the reason why PaperCut offers Mobility Print for free, as a spokesperson shared with us: PaperCut simply doesn't have to process a lot of data, so it can cross-finance the service via its paid products. The company hopes that people who are happily using their product at home might recommend it to their work IT departments.

Since these are already the last days of Google Cloud Print, you probably don't need to deactivate Cloud Print manually. But if you'd like to clean up your printer selection before the end of the year, you can head to google.com/cloudprint#printers, click or tap your printer, and hit the delete button.

Unfortunately, Mobility Print can't replace the Save to Google Drive printer in Chrome's printing menu, which is also tied to Cloud Print. Google suggests you select Save as PDF and manually upload your documents to Google Drive in the future.

The detailed feature comparison. Source: PaperCut.

As you can see, Mobility Print isn't a 1:1 replacement for Cloud Print just yet, and it won't be able to replicate the Save to Google Drive printer. Since Mobility Print isn't a first-party solution, the setup is also a bit more tedious than the seamless Google Account integration Cloud Print provides, but PaperCut made the process as simple as possible. And once Mobility Print gets remote printing support for Android and the remaining platforms, it should be the best replacement for Google Cloud Print you could ask for.

See more here:
This is the best Google Cloud Print alternative - Android Police

Finding the balance between edge AI vs. cloud AI – TechTarget

AI at the edge allows real-time machine learning through localized processing, allowing for immediate data processing, detailed security and heightened customer experience. At the same time, many enterprises are looking to push AI into the cloud, which can reduce barriers to implementation, improve knowledge sharing and support larger models. The path forward lies in finding a balance that takes advantage of cloud and edge strengths.

Centralized cloud resources are typically used to train deep learning inferencing models because large amounts of data and compute are required to develop accurate models. The resulting models can be deployed either in a central cloud location or distributed to devices at the edge.

"Edge and cloud AI complement one another, and cloud resources are almost always involved in edge AI use cases," said Jason Shepherd, vice president of ecosystem at Zededa, an edge AI tools provider.

"In a perfect world, we'd centralize all workloads in the cloud for simplicity and scale; however, factors such as latency, bandwidth, autonomy, security and privacy are necessitating more AI models to be deployed at the edge, proximal to the data source," Shepherd said. Some training is occurring at the edge, and there's increasing focus on the concept of federated learning, which focuses processing in data zones while centralizing results to eliminate regional bias.

The rise of better networking infrastructure and new edge computing architectures is breaking down the barriers between centralized cloud AI and distributed edge AI workloads.

"The edge is a huge emerging shift in infrastructure that complements the cloud by adding on an information technology layer which is distributed out to every nook and cranny of the world," said Charles Nebolsky, intelligent cloud and infrastructure services lead at Accenture. Nebolsky believes edge AI is leading to a revolution as big as the cloud was when it gained traction.

When engineered well, edge AI opens new opportunities for autoscaling since each new user brings an entirely new machine to the collective workload. The edge also has better access to more unprocessed raw input data, whereas cloud AI solutions must work with pre-processed data to improve performance or enormous data sets, at which point bandwidth can become a serious concern.

"The reason for moving things to the edge is for better response time," said Jonas Bull, head of architecture for Atos North America's AI Lab, a digital transformation consultancy.

Speed and latency are critical for applications such as computer vision and the virtual radio access networks used for 5G. Another big benefit lies in improving privacy by limiting what data is uploaded to the cloud.

Edge AI's deployment is also full of constraints, including network latency, memory pressure, battery drain and the possibility of a process being backgrounded by the user or operating system. Developers working on edge AI need to plan for a wide range of limitations, particularly as they explore common use cases like mobile phones, said Stephen Miller, senior vice president of engineering and co-founder at Fyusion, an AI-driven 3D imaging company.

"You need to plan for every possible corner case [on the edge], whereas in the cloud, any solution can be monitored and fine-tuned," Miller said.

Most experts see edge and cloud approaches as complementary parts of a larger strategy. Nebolsky said that cloud AI is more amenable to batch learning techniques that can process large data sets to build smarter algorithms to gain maximum accuracy quickly and at scale. Edge AI can execute those models, and cloud services can learn from the performance of these models and apply that learning to the base data to create a continual learning loop.

Fyusion's Miller recommends striking the right balance -- if you commit entirely to edge AI, you've lost the ability to continuously improve your model. Without new data streams coming in, you have nothing to leverage. However, if you commit entirely to cloud AI, you risk compromising the quality of your data -- due to the tradeoffs necessary to make it uploadable, and lack of feedback to guide the user to capture better data -- or the quantity of data.

"Edge AI complements cloud AI in providing access to immediate decisions when they are needed and utilizing the cloud for deeper insights or ones that require a broader or more longitudinal data set to drive a solution," said Tracy Ring, managing director at Deloitte.

For example, in a connected vehicle, sensors on the car provide a stream of real-time data that is processed constantly and can make decisions, like applying the brakes or adjusting the steering wheel. The same sensor data can be streamed to the cloud to do longer-term pattern analysis that can alert the owner of urgently needed repairs that may prevent an accident in the future. On the flip side, cloud AI complements edge AI to drive deeper insights, tune models and continue to enhance their insights.

"Cloud and edge AI work in tandem to drive immediate need decisions that are powered by deeper insights, and those insights are constantly being informed by new edge data," Ring said.

The main challenges of making edge and cloud AI work together are procedural and architectural.

"Applications need to be designed so that they purposefully split and coordinate the workload between them," said Max Versace, CEO and co-founder of Neurala, an AI inspection platform.

For instance, edge-enabled cameras can process all information as it originates at the sensor without overloading the network with irrelevant data. However, when the object of interest is finally detected at the edge, the relevant frames can be broadcast to a larger cloud application that can store them, further analyze them (e.g., what subtype of object is in the frame and what are its attributes), and share the analysis results with a human supervisor.
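
A minimal sketch of that split might look like the following; the detector, confidence threshold, and cloud endpoint are hypothetical placeholders for illustration, not anything from Neurala or another vendor named here.

```python
import cv2        # OpenCV, assumed available on the edge device, for camera capture
import requests   # used to post selected frames to a hypothetical cloud endpoint

CLOUD_ENDPOINT = "https://example.com/api/frames"   # placeholder URL
CONFIDENCE_THRESHOLD = 0.8

def detect_object_of_interest(frame) -> float:
    """Stand-in for a lightweight on-device detector; returns a confidence in [0, 1].

    The constant below keeps the sketch runnable without shipping any frames;
    a real deployment would run a small quantized model here.
    """
    return 0.0

def run_edge_filter(camera_index: int = 0) -> None:
    cap = cv2.VideoCapture(camera_index)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # Only frames the edge model flags are sent upstream; everything else
        # is dropped locally, which is what keeps the network traffic small.
        if detect_object_of_interest(frame) >= CONFIDENCE_THRESHOLD:
            _, jpeg = cv2.imencode(".jpg", frame)
            requests.post(CLOUD_ENDPOINT, data=jpeg.tobytes(),
                          headers={"Content-Type": "image/jpeg"}, timeout=5)
    cap.release()

if __name__ == "__main__":
    run_edge_filter()
```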

One strategy lies in creating an architecture that balances the size of the model and data against the cost of data transfers, said Brian Sletten, president of Bosatsu Consulting and senior instructor for edge computing at Developintelligence.com. For large models, it makes more sense to stay put in the cloud.

"There are ways to reduce the model size to help resolve the issue, but if you are dealing with a very large model, you will probably want to run it in the cloud," Sletten said.

In other cases, when there is a lot of data generated at the edge, it may make more sense to update models locally and then feed subsets of this back to the cloud for further refinement. Developers also need to consider some of the privacy implications when doing inference on sensitive data. For example, if developers want to detect evidence of a stroke through a mobile phone camera, the application may need to process data locally to ensure HIPAA compliance.

Sletten predicts the frameworks will evolve to provide more options about where to do training and how to improve reuse. As an example, TensorFlow.js uses WebGL and WebAssembly to run in the browser (good for privacy, low-latency, leveraging desktop or mobile GPU resources, etc.) but also can load sharded, cached versions of cloud-trained models. Model exchange formats (e.g., Open Neural Network Exchange) could also increase the fluidity of models to different environments. Sletten recommends exploring tools like LLVM, an open source compiler infrastructure project, to make it easier to abstract applications away from the environments they run in.
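
As a small illustration of that model fluidity (the PyTorch-to-ONNX export below is an assumed workflow chosen for the example; the paragraph above only names TensorFlow.js, ONNX and LLVM), a cloud-trained network can be exported to an exchange format so a different runtime can load it at the edge or in the browser:

```python
import torch
import torchvision

# Export a model to ONNX so other runtimes (ONNX Runtime, TVM, browser-based
# engines, etc.) can load it without the original training framework.
# The architecture, file name, and input shape are illustrative choices.
model = torchvision.models.mobilenet_v2()   # pretrained weights omitted for brevity
model.eval()

dummy_input = torch.randn(1, 3, 224, 224)
torch.onnx.export(
    model,
    dummy_input,
    "mobilenet_v2.onnx",
    input_names=["input"],
    output_names=["logits"],
    dynamic_axes={"input": {0: "batch"}},   # allow variable batch size at inference
    opset_version=13,
)
```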

"One of the key challenges in moving more AI from the cloud to the edge is coming up with neural network architectures that are able to operate in the edge AI chips efficiently," said Bruno Fernandez-Ruiz, co-founder and CTO of Nexar, a smart dash cam vendor.

General computing platforms, like those found in cloud servers, can run any network architecture. This becomes much harder in edge AI. Architectures and trained models must be adapted to run on the AI chipsets found at the edge.

Fernandez-Ruiz and his team have been exploring some of these tradeoffs to improve the intelligence they can bring to various dash cam applications. This is a big challenge as users may drive from highly performant mobile networks to dead zones yet expect good performance regardless. The team found that during inference time, there isn't enough network bandwidth to move all the data from the edge to the cloud, yet the use case requires local inference outputs to be aggregated globally. The edge AI can run neural networks that help filter the data which must be sent to the cloud for further AI processing.

In other cases, the cloud AI training may result in neural network models which have too many layers to run efficiently on edge devices. In these cases, the edge AI can run a lighter neural network that creates an intermediate representation of the input which is more compressed and can therefore be sent to the cloud for further AI processing. During training time, edge and cloud AI can operate in hybrid mode to provide something akin to "virtual active learning," where the edge AI sifts through vast amounts of data and "teaches" the cloud AI.
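
A rough sketch of that hybrid split might look like the following; the encoder, the endpoint, and the compression choices are assumptions made for illustration, not Nexar's actual pipeline.

```python
import gzip
import numpy as np
import requests

CLOUD_INFERENCE_URL = "https://example.com/api/embeddings"   # placeholder URL

def edge_encode(frame: np.ndarray) -> np.ndarray:
    """Stand-in for a small on-device network that produces a compact embedding.

    A real deployment would run a quantized encoder on the edge AI chip; here a
    simple block average keeps the sketch self-contained and runnable.
    """
    return frame.reshape(-1, 64).mean(axis=1).astype(np.float16)

def send_to_cloud(embedding: np.ndarray) -> None:
    # Compress the intermediate representation before upload: the embedding is
    # far smaller than the raw frame, which is the point of the hybrid split.
    payload = gzip.compress(embedding.tobytes())
    requests.post(CLOUD_INFERENCE_URL, data=payload,
                  headers={"Content-Encoding": "gzip"}, timeout=5)

if __name__ == "__main__":
    fake_frame = np.random.randint(0, 255, size=(720, 1280, 3), dtype=np.uint8)
    send_to_cloud(edge_encode(fake_frame))
```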

Fernandez-Ruiz has found the types of supported neural network architectures in edge AI chipsets are limited, and usually running months behind what can be achieved in the cloud. One useful approach for addressing these limitations has been to use compiler toolchains and stacks like Apache TVM, which help in porting a model from one platform to another.
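
Under the assumption that the model has already been exported to ONNX, a hedged sketch of TVM's standard Relay flow for such a port might look like this (the file names, input name and shape, and target triple are placeholders):

```python
import onnx
import tvm
from tvm import relay

# Compile an ONNX model for a 64-bit Arm edge target using TVM's Relay flow.
# Model path, input name, and shape are assumptions for illustration.
onnx_model = onnx.load("model.onnx")
mod, params = relay.frontend.from_onnx(onnx_model, shape={"input": (1, 3, 224, 224)})

# Cross-compile for an aarch64 Linux device (e.g., an Arm-based camera SoC).
target = "llvm -mtriple=aarch64-linux-gnu"
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target=target, params=params)

# The exported library is what ships to the edge device; a real cross-build
# would also point export_library at the matching cross-compiler toolchain.
lib.export_library("model_aarch64.so")
```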

Another approach has been to use network architectures known to work well in edge AI, and train them directly for the target platform. He has found that given enough volume and variety of training data, this approach can often outperform the cross-platform compiler approaches in terms of absolute performance. However, it also requires some manual work during training, and in pre- and post-processing.

Accenture's Nebolsky said some of the most common tradeoffs developers need to consider between cloud and edge AI include the following:

Security: AI services that drive authentication and processing of sensitive information like fingerprints or medical records are generally best accomplished locally for security concerns. Even when very strong cloud security is in place, the user perception of better privacy from edge processing can be an important consideration.

See the original post here:
Finding the balance between edge AI vs. cloud AI - TechTarget

Cybercriminals to focus on remote and cloud-based systems in UAE next year – Gulf Business

Trend Micro predicts that the UAE's home networks, remote working software, and cloud systems will be at the center of a new wave of cyberattacks in 2021.

In a report titled Turning the Tide, the cybersecurity firm forecasts that cybercriminals in 2021 will particularly look to home networks as a critical launch pad to compromising corporate IT and IoT networks.

"As the UAE begins to enter a post-pandemic world, the trend for remote and hybrid working is likely going to continue for many organisations," said Majd Sinan, country manager, UAE, Trend Micro. "In 2021, we predict that cybercriminals will launch more aggressive attacks to target corporate data and networks in the UAE."

Showing the growing risk of cyberattacks, Trend Micro systems detected a combined 13,100,616 email, URL, and malware cyber-threats during the first half of 2020, according to its Midyear Security Report. Ransomware attacks in the UAE were 4.27 per cent of the world's ransomware attacks.

"In 2021, the UAE's security teams will need to double down on user training, extended detection and response, and adaptive access controls," said Majd Sinan. "This past year, many UAE organisations were focused on surviving: now it's time for the UAE's organisations to thrive, with comprehensive cloud security as their foundation."

The report warns that end-users who regularly access sensitive data (e.g. HR professionals accessing employee data, sales managers working with sensitive customer information, or senior executives managing confidential company numbers) will be at the greatest risk. Attacks will likely exploit known vulnerabilities in online collaboration and productivity software soon after they are disclosed, rather than zero-days.

Access-as-a-service business models of cybercrime will grow, targeting the home networks of high-value employees, corporate IT and IoT networks. IT security teams will need to overhaul work from home policies and protections to tackle the complexity of hybrid environments where work and personal data comingle in a single machine. Zero-trust approaches will increasingly be favored to empower and secure distributed workforces.

As third-party integrations reign, Trend Micro also warned that exposed APIs will become a new preferred attack vector for cybercriminals, providing access to sensitive customer data, source code and back-end services.

Cloud systems are another area in which threats will continue to persist in 2021, from unwitting users, misconfigurations, and attackers attempting to take over cloud servers to deploy malicious container images.

See the original post:
Cybercriminals to focus on remote and cloud-based systems in UAE next year - Gulf Business

Top 10 Hyperconverged Infrastructure (HCI) Solutions – Datamation

A hyperconverged infrastructure (HCI) solution is a primary tool for connecting, managing and operating interconnected enterprise systems in a hyperconverged environment. The technology helps organizations virtualize storage, servers, and networks. While converged infrastructure uses hardware to achieve this objective, HCI takes a software-centric approach.

To be sure, hyperconvergence has its pros and cons. Yet the advantages are clear: HCI boosts flexibility by making it easier to scale according to usage demands and adjust resources faster and more dynamically. By virtualizing components, it's possible to build more efficient databases, storage systems, server frameworks and more. HCI solutions increasingly extend from the data center to the edge. Many also incorporate artificial intelligence and machine learning to continually improve, adapt and adjust to fast-changing business conditions. Some also contain self-healing functions.

By virtualizing an IT environment, an enterprise can also simplify systems management and trim costs. This can lead to a lower total cost of ownership (TCO). Typically, HCI environments use a hypervisor, usually running on a server that uses direct-attached storage (DAS), to create a data center pool of systems and resources. Most support heterogeneous hardware and software systems. The end result is a more flexible, agile and scalable computing framework that makes it simpler to build and manage private, public and hybrid clouds.

A number of factors are important when evaluating HCI solutions. These include:

Edge-core cloud integration. Organizations have vastly different needs when it comes to connecting existing infrastructure, clouds and edge services. For instance, an organization may require only the storage layer in the cloud. Or it may want to duplicate or convert configurations when changing cloud providers. Ideally, an HCI solution allows an enterprise to change, upgrade and adjust as infrastructure needs change.

Analytics. It's crucial to understand operations within an HCI environment. A solution should provide visibility through a centralized dashboard but also offer ways to drill down into data, and obtain reports on what is taking place. This also helps with understanding trends and doing capacity planning.

Storage management. An HCI solution should provide support for setting up and configuring a diverse array of storage frameworks, managing them and adapting them as circumstances and conditions change. It should make it simple to add nodes to a cluster and support things like block, file and object storage. Some systems also offer NVMe-oF (non-volatile memory express over fabrics) support, which allows an enterprise to rearchitect storage layers using flash memory.

Hypervisor ease of use. Most solutions support multiple hypervisors. This increases flexibility and configuration options, and it's often essential in large organizations that rely on multiple cloud providers. But it's important to understand whether you're actually going to use this feature and what you plan to do with it. In many cases, ease of use and manageability are more important than the ability to use multiple hypervisors.

Data protection integration. It's important to plug in systems and services to protect data, and apply policy changes across the organization. It's necessary to understand whether this protection is scalable and adaptable as conditions change. Ideally, the HCI environment can replace disparate backup and data recovery systems. This greatly improves manageability and reduces costs.

Container support. A growing number of vendors support containers, or plan to do so soon. Not every organization requires this feature, but it's important to consider whether your organization may move in this direction.

Serverless support. Vendors are introducing serverless solutions that support code-triggered events. This has traditionally occurred in the cloud, but it's increasingly an on-premises function that can operate within an HCI framework.

Here are ten leading HCI solutions:

The Cisco HyperFlex HX data platform manages business and IT requirements across a network. The solution accommodates enterprise applications, big data, deep learning and other components that extend from the data center to remote offices and out to retail sites and IoT devices. The platform is designed to work on any system or any cloud.

DataCore SDS delivers a highly flexible approach to HCI. It offers a suite of storage solutions that accommodate mixed protocols, hardware vendors and more within converged and hyperconverged SAN environments. The software-defined storage framework, SANsymphony, features block-based storage virtualization. It is designed for high availability. The vendor focuses heavily on healthcare, education, government and cloud service providers.

VxRail delivers a fully integrated, preconfigured, and pre-tested VMware hyper-converged infrastructure appliance. It delivers virtualization, compute and storage within a single appliance. The HCI platform takes an end-to-end automated lifecycle management approach.

HP Enterprise aims to take hyperconverged architectures beyond the realm of software-defined and into the world of AI-driven with SimpliVity. The HCI platform delivers a self-managing, self-optimizing, and self-healing infrastructure that uses machine learning to continually improve. HP offers solutions specifically designed for data center consolidation, multi-GPU image processing, high-capacity mixed workloads and edge environments.

NetApp HCI consolidates mixed workloads while delivering predictable performance and granular control at the virtual machine level. The solution scales compute and storage resources independently. It is available in different compute and storage configurations, thus making it flexible and scalable across data center, cloud and web infrastructures.

Nutanix offers a fully software-defined hyperconverged infrastructure that provides a single cloud platform for tying together hybrid and multi-cloud environments. Its Xtreme Computing platform natively supports compute, storage, virtualization and networkingincluding IoTwith the ability to run any app at scale. It also supports analytics and machine learning.

StarWind offers an HCI appliance focused on both operational simplicity and performance. It bills its all-flash system as turnkey with ultra-high resiliency. The solution, designed for SMB, ROBO and enterprises, aims to trim virtualization costs through a highly streamlined and flexible approach. It connects commodity servers, disks and flash; a hypervisor of choice; and associated software within a single manageable layer.

StarWind Virtual SAN is essentially a software version of the vendor's HyperConverged Appliance. It eliminates the need for physically shared storage by mirroring internal hard disks and flash between hypervisor servers. The approach is designed to cut costs for SMB, ROBO, cloud and hosting providers. Like the vendor's appliance, StarWind Virtual SAN is a turnkey solution.

The vCenter Server delivers centralized visibility as well as robust management functionality at scale. The HCI solution is designed to manage complex IT environments that require a high level of extensibility and scalability. It includes native backup and restore functions. vCenter supports plug-ins for major vendors and solutions, including Dell EMC, IBM and Huawei Technologies.

vSAN is an enterprise-class storage virtualization solution that manages storage on a single software-based platform. When combined with VMware's vSphere, it lets an organization manage compute and storage within a single platform. The solution connects to a broad ecosystem of cloud providers, including AWS, Azure, Google Cloud, IBM Cloud, Oracle Cloud and Alibaba Cloud.

Vendor-by-vendor pros and cons:

Cisco HyperFlex HX-Series

Pros: Supports numerous configurations and use cases; highly scalable; supports GPU-based deep learning.

Cons: Requires Cisco networking equipment; pricing model can be confusing; some users find manageability difficult.

DataCore Software-Defined Storage

Pros: Supports mixed SAN, flash and disk environments; excels with load balancing and policy management; strong failover capabilities.

Cons: User interface can be daunting; licensing can become complex; customer support is inconsistent.

Dell/EMC VxRail

Pros: Delivers a true single point of management and support; handles multi-cloud clusters well; integrates well with storage devices; low TCO.

Cons: Limited support for mixing older flash clusters and hyper-clusters; some management challenges; sometimes pricey.

HPE SimpliVity

Pros: Strong storage management, backup and data replication capabilities; users like the interface; strong partner relationships; highly scalable.

Cons: Managing clusters can present challenges; pricey; users cite problems with technical and customer support.

NetApp HCI

Pros: Excellent manageability with granular controls; strong API framework; support for numerous workloads from different vendors; highly scalable.

Cons: Installation and initial cabling can be difficult; documentation sometimes lacking; users say some security features and controls are missing.

Nutanix AOS

Pros: Feature-rich platform; single user interface with strong management tools; users report excellent tech support.

Cons: Pricey; users report some complexity with using encryption and micro-segmentation; can be difficult to integrate with legacy systems.

StarWind HyperConverged Appliance

Pros: Highly scalable; supports numerous configurations and technologies.

Read the original post:
Top 10 Hyperconverged Infrastructure (HCI) Solutions - Datamation

The Diminishing Role of Operating Systems | IT Pro – ITPro Today

The role of operating systems is changing significantly. Due in part to trends like the cloud, it feels like the days when operating systems formed the foundation for application development, deployment and management are over.

So, is it time to declare the operating system dead? Read on for some thoughts on the past, present and future role of operating systems.

When I say that the operating system may be a dying breed, I don't mean that operating systems will disappear completely. You're still going to need an OS to power your server for the foreseeable future, regardless of what that server does or whether it runs locally or in the cloud.

What's changing, however, is the significance of the role of operating systems relative to other components of a modern software and hardware stack.

In the past, the OS was the foundation on which all else was built, and the central hub through which it was managed. Applications had to be compiled and packaged specifically for whichever OS they ran on. Deployment took place using tooling that was built into the OS. Logging and monitoring happened at the OS level, too.

By extension, OS-specific skills were critical for anyone who wanted to work as a developer or IT engineer. If you wanted to develop for or manage Linux environments, you had to know the ins and outs of kernel flags, init run levels, ext3 (or ext4, when that finally came along), and so on. For Windows, you had to be a master of the System Registry, Task Manager and the like.

Fast forward to the present, and much of this has changed due to several trends:

Perhaps the most obvious is the cloud. Today, knowing the architecture and tooling of a particular cloud--like AWS or Azure--is arguably more important than being an expert in a specific operating system.

To be sure, you need an OS to provision the virtual servers that you run in the cloud. But in an age when you can deploy an OS to a cloud server in seconds using prebuilt images, there is much less you need to know about the OS itself to use it in the cloud.

Likewise, many of the processes that used to happen at the OS level now take place at the cloud level. Instead of looking at logs within the operating system file tree, you manage them through log aggregators that run as part of your cloud tool set. Instead of having to partition disks and set up file systems, you build storage buckets in the cloud. In place of managing file permissions, groups and users within the operating system, you write IAM policies to govern your cloud resources.
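
As a concrete flavor of that shift, here is a sketch using AWS's boto3 SDK (the bucket and policy names are placeholders, and error handling is omitted): granting read access to a storage bucket happens through an IAM policy rather than through chmod or group membership on a server.

```python
import json
import boto3

# A cloud-level analogue of file permissions: a read-only IAM policy scoped
# to a single storage bucket. The bucket and policy names are placeholders.
iam = boto3.client("iam")

policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-app-logs",     # the bucket itself (for listing)
                "arn:aws:s3:::example-app-logs/*",   # the objects inside it (for reads)
            ],
        }
    ],
}

response = iam.create_policy(
    PolicyName="example-app-logs-readonly",
    PolicyDocument=json.dumps(policy_document),
)
print(response["Policy"]["Arn"])   # attach this ARN to a role, group, or user
```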

In short, it often feels like the cloud is the new OS.

Kubernetes, and orchestration tools in general, are perhaps taking on the roles of operating systems.

If you deploy workloads using an orchestration platform like Kubernetes, knowing how to configure and manage the Kubernetes environment is much more important than understanding how to manage the operating systems that power the nodes within your cluster. From the perspective of Kubernetes, processes like storage management, networking and logging are abstracted from the underlying operating systems.

Alongside Kubernetes, containers are doing their part to make the OS less relevant. Whether or not you orchestrate containerized applications with a platform like Kubernetes, containers allow you to take an application and deploy it on any member of an operating system family without having to worry about the specific configuration of the OS.

In other words, a container that is packaged for Linux will run on Ubuntu just as easily as it will on Red Hat Enterprise Linux or any other distribution. And the tools you use to deploy and manage the container will typically be the same, regardless of which specific OS you use.

If the role of operating systems becomes totally irrelevant at some point, it will likely be thanks to unikernels, a technology that fully removes the OS from the software stack.

In a unikernel-based environment, there is no operating system in the conventional sense. Unikernels are self-hosting machine images that can run applications with just snippets of the libraries that are present in a traditional OS.

For now, unikernels remain mostly an academic idea. But projects like Vorteil (which technically doesn't develop unikernels but instead Micro-VMs, which are very similar) are now working to commercialize them and move them into production. It may not be long before it's possible to deploy a real-world application without any kind of operating system at all.

In a similar vein, serverless functions are removing the operating system entirely, at least from the user's perspective.

Serverless functions, which can run in the cloud or on private infrastructure, don't operate without an operating system, of course. They require traditional OS environments to host them. But from the user's point of view, there is no OS to worry about because serverless functions are simply deployed without any kind of OS-level configuration or management.
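
A minimal AWS Lambda-style handler in Python illustrates the point: from the developer's perspective, the function body is the entire deployment artifact, with no OS-level setup anywhere (the event shape below assumes an API Gateway proxy integration).

```python
import json

def lambda_handler(event, context):
    """Entry point invoked by the serverless platform; no OS configuration involved.

    Assumes an API Gateway proxy event, so query parameters arrive under
    "queryStringParameters" and the response needs statusCode/headers/body keys.
    """
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```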

Indeed, the primary selling point of serverless--at least, as it is presented by vendors like AWS--is that there is zero administration. The operating system may still lurk in the background, but it may as well be absent as far as developers and IT engineers are concerned.

In short, then, the operating system as we have known and loved it for decades is simply much less significant than it once was. It's not going away entirely, but it has been superseded in relevance by other layers that comprise modern software stacks.

If you're developing or deploying an application today, you may want to think less about which operating system it will run on and more about which cloud or orchestration tool will host it. That's what really matters in modern IT.

Read the original here:
The Diminishing Role of Operating Systems | IT Pro - ITPro Today

Building a Better U.S. Approach to TikTok and Beyond – Lawfare

One of the defining technology decisions of the Trump administration was its August 2020 ban on TikTok, an executive order to which legal challenges are still playing out in the courts. The incoming Biden-Harris administration, however, has indicated its intention to pivot away from Trump's approach on several key technology policies, from the expected appointment of a national cyber director to the reinvigoration of U.S. diplomacy to build tech coalitions abroad. President Biden will need to make policy decisions about software made by companies incorporated in foreign countries, and to what extent that might pose national security risks. There may be a future TikTok policy, in other words, that isn't at all about, or at least isn't just about, TikTok.

In April 2020, Republican Rep. Jim Banks introduced legislation in the House of Representatives that sought to require developers of foreign software to provide warnings before consumers downloaded the products in question. It's highly likely that similar such proposals will enter Congress in the next few years. On the executive branch side, the Biden administration has many decisions ahead on mobile app supply chain security, including whether to keep in place Trump's executive order on TikTok. These questions are also linked to foreign policy: President Biden will need to decide how to handle India's bans of Chinese software applications, as India will be a key bilateral tech relationship for the United States. And the U.S. government will also have to make choices about cloud-based artificial intelligence (AI) applications served from other countries (that is, where an organization's AI tools are run on third-party cloud servers) in the near future.

In this context, what might a better U.S. policy on the security risks of foreign-made software look like? The Trump administration's TikTok executive order was more of a tactical move against a single tech firm than a fully developed policy. The new administration will now have the opportunity to set out a more fully realized, comprehensive vision for how to tackle this issue.

This analysis offers three important considerations for the U.S. executive branch, drawing on lessons from the Trump administration's TikTok ban. First, any policy needs to explicitly define the problem and what it sets out to achieve; simply asserting national security issues is not enough. Second, any policy needs to clearly articulate the alleged risks at play, because foreign software could be entangled with many economic and security issues depending on the specific case. And third, any policy needs to clearly articulate the degree to which a threat actor's supposed cost-benefit calculus makes those different risks likely. This is far from a comprehensive list. But failure to address these three considerations in policy design and implementation will only undermine the policy's ultimate effectiveness.

Defining the Problem

First, any policy on foreign software security needs to be explicitly clear about scope, that is, what problem the government is trying to solve. Failure to properly scope policies on this front risks confusing the public, worrying industry and obscuring the alleged risks the government is trying to communicate. This undermines the government's objectives on all three fronts, which is why scoping foreign software policies clearly and explicitly (in executive orders, policy memos and communication with the public) is critical.

Trump's approach to TikTok and WeChat provides a lesson in what not to do. Arguably, the TikTok executive order was not even a policy: It was more a tactical-level move against a single tech firm than a broader specification of the problem set and development of solutions. Trump had discussed banning TikTok in July 2020 as retaliation for the Chinese government's handling of the coronavirus, so, putting aside that this undermined the alleged national security motives behind the executive order, the order issued on TikTok wasn't completely out of the blue. That said, the order on WeChat that accompanied the so-called TikTok ban was surprising, and its signing only created public confusion. Until then, much of the congressional conversation on Chinese mobile apps had focused on TikTok, and the Trump administration had given no warning that WeChat would be the subject of its actions too. What's more, even after the executive orders were signed in August, most of the Trump administration's messaging focused just on TikTok, ignoring WeChat. The administration also wrote the WeChat executive order with troublingly and perhaps sloppily broad language that scoped the ban as impacting Tencent Holdings (which owns WeChat and many other software applications) and thus concerned gaming and other software industries, though the administration subsequently stated the ban was aimed only at WeChat.

Additionally, the Trump administration's decisions on U.S.-China tech often blurred together trade and national security issues. The Trump administration repeatedly suggested that TikTok's business presence in mainland China inherently made the app a cybersecurity threat, without elaborating on why the executive orders focused solely on TikTok and WeChat rather than other software applications from China too. Perhaps the bans were a possible warning shot at Beijing about potential collection of U.S. citizen data, but it's worth asking if that warning shot even worked given the legal invalidations of the TikTok ban and the blowback even within the United States. Again, the overarching policy behind these tactical decisions was undeveloped. It was unclear if TikTok and WeChat were one-off decisions or the beginning of a series of similar actions.

Going forward, any executive branch policy on foreign software needs to explicitly specify the scope of the cybersecurity concerns at issue. In other words, the executive needs to clearly identify the problem the U.S. government is trying to solve. This will be especially important as the incoming Biden administration contends with cybersecurity risks emanating not just from China but also from Russia, Iran and many other countries. If the White House is concerned about targeted foreign espionage through software systems, for example, those concerns might very well apply to cybersecurity software developed by a firm incorporated in Russia, which would counsel a U.S. approach not just limited to addressing popular consumer apps made by Chinese firms. If the U.S. is concerned about censorship conducted by foreign-owned platforms, then actions by governments like Tehran would certainly come into the picture. If the problem is a foreign government potentially collecting massive amounts of U.S. citizen data through software, then part of the policy conversation needs to focus on data brokers, too: the large, unregulated companies in the United States that themselves buy up and sell reams of information on U.S. persons to anyone who's buying.

Software is constantly moving and often communicating with computer systems across national borders. Any focus on a particular company or country should come with a clear explanation, even if it seems relatively intuitive, as to why that company or country poses a particularly different or elevated risk compared to other sources of technology.

Clearly Delineate Between Different Alleged Security Risks

The Trump administration's TikTok ban also failed to clearly articulate and distinguish between its alleged national security concerns. Depending on one's perspective, concerns might be raised about TikTok collecting data on U.S. government employees, TikTok collecting data on U.S. persons not employed by the government, TikTok censoring information in China at Beijing's behest, TikTok censoring information beyond China at Beijing's behest, or disinformation on the TikTok platform. Interpreting the Trump administration's exact concerns was difficult, because White House officials were not clear and explicit about which risks most concerned them. Instead, risks were blurred together, with allegations of Beijing-compelled censorship thrown around alongside claims that Beijing was using the platform to conduct espionage against U.S. persons.

If there was evidence that these practices were already occurring, the administration did not present it. If the administration's argument was merely that such actions could occur, the administration still did not lay out its exact logic. There is a real risk that the Chinese government is ordering, coercing or otherwise compelling technology companies incorporated in its borders to engage in malicious cyber behavior on its behalf worldwide, whether for the purpose of censorship or cyber operations. Beijing quite visibly already exerts that kind of pressure on technology firms in China to repress the internet domestically. Yet to convince the public, industry, allies, partners, and even those within other parts of government and the national security apparatus that a particular piece or source of foreign software is a national security risk, the executive branch cannot overlook the importance of clear messaging. That starts with clearly articulating, and not conflating, the different risks at play.

The spectrum of potential national security risks posed by foreign software is large and depends on what the software does. A mobile app platform with videos and comments, for instance, might collect intimate data on U.S. users while also making decisions about content moderation, so in that case, it's possible the U.S. government could have concerns about mass data collection, censorship and information manipulation all at once. Or, to take another example, cybersecurity software that runs on enterprise systems and scans internal company databases and files might pose an array of risks related to corporate espionage and nation-state espionage, but this could have nothing to do with concerns about disinformation and content manipulation.

Software is a general term, and the types and degrees of cybersecurity risk posed by different pieces of software can vary greatly. Just as smartphones are not the same as computing hardware in self-driving cars, a weather app is not the same as a virtualization platform used in an industrial plant. Software could be integrated with an array of hardware components but not directly connect back to all those makers: Think of how Apple, not the manufacturers of subcomponents for Apple devices, issues updates for its products. Software could also directly connect back to its maker in potentially untrusted ways, as with Huawei issuing software updates to 5G equipment. It could constantly collect information, such as with the TikTok app itself, and it could learn from the information it collects, like how TikTok uses machine learning and how many smartphone voice-control systems collect data on user speech. This varied risk landscape means policymakers must be clear, explicit and specific about the different alleged security risks posed by foreign software.

Give Cost-Benefit Context on Security Risks

Finally, the U.S. government should make clear to the public the costs and benefits that a foreign actor might weigh in using that software to spy. Just because a foreign government might hypothetically collect data via something like a mobile app (whether by directly tapping into specific devices or by turning to the app's corporate owner for data hand-overs) doesn't mean that the app is necessarily an optimal vector for espionage. It might not yield useful data beyond what the government already has, or it might be too costly relative to using other active data collection vectors. Part of the U.S. government's public messaging on cyber risk management should therefore address why that particular vector of data collection would be more attractive than some other vector, or what supplementary data it would provide. In other words, what is the supposed value-add for the foreign government? This could also include consideration of controls offered by the software's country of origin (for example, transparency rules, mandatory reporting for publicly traded companies, or laws that require cooperation with law enforcement or intelligence services), much like the list of trust criteria under development as part of Lawfare's Trusted Hardware and Software Working Group.

In the case of the Trump administration's TikTok executive order, for example, there was much discussion by Trump officials about how Beijing could potentially use the app for espionage. But administration officials spoke little about why the Chinese intelligence services would elect to use that vector over others, or what about TikTok made its data a hypothetical value-add from an intelligence perspective.

If the risk concern is about targeted espionage against specific high-value targets, then the cost-benefit conversation needs to be about what data that foreign software provides, and how easily it provides that benefit, relative to other methods of intelligence collection. If the risk concern is about bulk data collection on all the software's users, then the cost-benefit conversation needs to be about why that data is different from information that is openly available, was stolen via previous data breaches, or is purchasable from a U.S. data broker. That should include discussing what value that data adds to what has already been collected: Is the risk that the foreign government will develop microtargeted profiles on individuals, supplement existing data, or enable better data analytics on preexisting information?

The point again is not that TikTok's data couldn't add value, even if it overlapped with what Chinese intelligence services have already collected. Rather, the Trump administration did not clearly articulate Beijing's supposed cost-benefit calculus.

Whatever the specific security concern, managing the risks of foreign espionage and data collection through software applications is in part a matter of assessing the potential payoff for the adversary: not just the severity of the potential event, or the actor's capabilities, but why that actor might pursue this option at all. Policy messaging about these questions speaks to the government's broader risk calculus and whether the U.S. government is targeting the most urgent areas of concern. For instance, if the only concern about a piece of foreign software is that it collects data on U.S. persons, but it then turns out that data was already publicly available online or heavily overlaps with a foreign intelligence service's previous data theft, would limiting that foreign software's spread really mitigate the problems at hand? The answer might be yes, but these points need to be articulated to the public.

Conclusion

A key part of designing federal policies on software supply chain security is recognizing the globally interconnected and interdependent nature of software development today. Developers working in one country to make software for a firm incorporated in a second may sell their products in a third country and collect data sent to servers in a fourth. Software applications run in one geographic area may talk to many servers located throughout the world, whether for a Zoom call or Gmail, and the relatively open flow of data across borders has enabled the growth of many different industries, from mobile app gaming to a growing number of open-source machine-learning tools online.

If the U.S. government wants to draw attention to the security risks of particular pieces or kinds of foreign software, or software coming from particular foreign sources, then it needs to be specific about why that software is being targeted. Those considerations go beyond the factors identified here. The WeChat executive order, for instance, wasn't just unclear in specifying the national security concerns ostensibly motivating the Trump administration; it also failed to discuss what a ban on WeChat in the United States would mean for the app's many users. Hopefully, greater attention paid to these crucial details will help better inform software security policies in the future.

More here:
Building a Better U.S. Approach to TikTok and Beyond - Lawfare

Legacy IT: The hidden problem of digital transformation – SC Magazine

Companies may want to undertake digital transformation, but they often start with legacy servers built in the early 2000s. Today's columnist, Hemanta Swain, formerly of TiVo, offers some insights on how to secure legacy IT systems. (Credit: Jemimus, Creative Commons Attribution 2.0 Generic, CC BY 2.0)

Legacy IT has become the dirty little secret of digital transformation. These systems, which include servers, OSes, and applications, are relied on by almost every organization for business-critical activities, and many CISOs struggle to protect them from attackers.

During my time as CISO for a public company, I got a first-hand look at the depth of the legacy challenge. We had more than 1,000 servers in use that were built in 2003 but no longer supported by vendors, and more than 200 legacy servers were designated for business-critical activity that drove significant annual revenue. It's a non-starter to take these servers offline, and protecting them comes at a significant cost.

The cost and complexity of protecting legacy systems

The complexity of legacy systems lies in the IT team's inability to update and maintain them. Many of these systems and apps have been in use for years and may contain millions of lines of code. Changing the code could impact one of the revenue-generating applications that keeps the business running.

On top of this, legacy systems are nearly impossible to patch. This makes them incredibly vulnerable and a target for attack. So how can organizations protect the systems that serve as the core of their business?

Legacy security cant protect legacy systems

Companies have to absorb the cost of protecting legacy systems within current cybersecurity spending. As such, organizations try to retrofit existing solutions like firewalls and endpoint protection.

Digital transformation has made this approach obsolete. Modern infrastructure, data centers, and the move to hybrid clouds give attackers more pathways to target these vulnerable systems.

Legacy systems that were once used by a handful of on-premises applications may now be used by hundreds of applications both on-prem and in the cloud. Containers may even interact with the mainframe. These are connections that firewalls were simply not built to secure. Many firewalls are also legacy devices and don't integrate with modern applications and environments. Using them to secure legacy systems against outside intrusion simply increases the total cost of ownership without actually securing the systems against modern threats.

CISOs require a convergence of security approaches that protect legacy assets, while also minimizing threats across modern assets. The approach we evaluated and trusted was based on the core principles of Zero Trust.

Improve legacy systems with Zero Trust

Establishing Zero Trust around legacy systems and applications requires four critical components: visibility of legacy assets, micro-segmentation, identity management and continuous monitoring.

Companies find it challenging to obtain a proper view of existing legacy assets, but it's vital to the security of the organization. It's not enough to secure most assets; it only takes one overlooked server for attackers to find and breach the organization.

After an acquisition, the first step we took was to create a full view of the entire ecosystem and map everything from legacy systems to cloud environments, containers, and applications. By understanding which workloads present the most risk, we could identify the prime starting points for enforcing Zero Trust.

Starting down the path to Zero Trust with anything less than a holistic view of the entire network is a recipe for inconsistent policy and blind spots. That holistic view empowers security teams to identify the critical areas in which to begin the second step: implementing micro-segmentation.
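
To make that first step concrete, here is a minimal Python sketch, not taken from the column, of how a team might merge legacy records and cloud workload listings into a single inventory and rank assets by risk. The server names, the list of unsupported operating systems, and the scoring rule are all assumptions for illustration.

```python
# Hypothetical sketch: build one inventory from legacy records and cloud
# workload listings, then surface the riskiest assets first.
from dataclasses import dataclass

UNSUPPORTED_OS = {"windows_server_2003", "rhel_4"}  # assumed end-of-life examples


@dataclass
class Asset:
    name: str
    os: str
    environment: str        # e.g. "on-prem", "cloud", "container"
    business_critical: bool


def build_inventory(legacy_records, cloud_records):
    """Merge both sources into a single list of Asset objects."""
    return [Asset(**record) for record in legacy_records + cloud_records]


def riskiest_first(assets):
    """Unsupported OS plus business criticality means highest segmentation priority."""
    def score(asset):
        return (asset.os in UNSUPPORTED_OS) * 2 + asset.business_critical
    return sorted(assets, key=score, reverse=True)


if __name__ == "__main__":
    legacy = [{"name": "erp-db-01", "os": "windows_server_2003",
               "environment": "on-prem", "business_critical": True}]
    cloud = [{"name": "web-frontend", "os": "ubuntu_22_04",
              "environment": "cloud", "business_critical": False}]
    for asset in riskiest_first(build_inventory(legacy, cloud)):
        print(asset.name, asset.os, asset.environment)
```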

While firewalls have been the traditional choice for segmenting assets from networks, they're not built to protect legacy and unpatched assets at such a granular level. Older techniques such as firewalls and VLANs are costly to own and maintain, and they frequently place similar legacy systems in a single silo. For an attacker, it's like shooting fish in a barrel: a single intrusion can lead to multiple critical systems being exploited.

In addition, security and operations teams need to constantly update rules and policies between the firewalls and the applications and assets they're supposed to protect. This leads to overly permissive policies that may improve workflow but significantly undermine the security posture the organization is trying to build.

We used Guardicore Centra micro-segmentation technology, which let us build tight, granular security policies to prevent lateral movement. In addition, security teams can deploy micro-segmentation across the entire infrastructure and across workloads of all types, including data centers, cloud, and modern applications. This eliminates high-risk gaps in security across the infrastructure.
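
The core of such a policy is a default-deny allow list: any workload-to-workload flow that is not explicitly permitted is blocked, which is what stops lateral movement. The sketch below is a generic illustration of that idea, not Guardicore Centra's actual interface; the workload labels and port are hypothetical.

```python
# Hypothetical sketch of a default-deny segmentation policy: only flows that
# appear in the allow list may pass, so a compromised host cannot move laterally.
ALLOWED_FLOWS = {
    # (source label, destination label, destination port)
    ("billing-app", "erp-db-01", 1433),
    ("reporting-app", "erp-db-01", 1433),
}


def is_flow_allowed(src_label: str, dst_label: str, dst_port: int) -> bool:
    """Default deny: permit a connection only if it is explicitly listed."""
    return (src_label, dst_label, dst_port) in ALLOWED_FLOWS


# A compromised web server trying to reach the legacy database is denied,
# while the two applications that legitimately need it are allowed.
print(is_flow_allowed("web-frontend", "erp-db-01", 1433))   # False
print(is_flow_allowed("billing-app", "erp-db-01", 1433))    # True
```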

It's also very important to enhance the organization's identity and access management platform. Proper user identity management plays a critical role in the Zero Trust principle. Users need access to systems and applications, but security teams must grant that access based on each user's role and automate verification before granting it, both to minimize the operational burden and to scale.
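
As a rough illustration of role-based access with automated verification, here is a short sketch; the roles, system names, and verification checks (multi-factor status, account status) are assumptions rather than a description of any particular identity platform.

```python
# Hypothetical sketch: grant access to a system only if automated verification
# passes and the user's role explicitly permits that system.
ROLE_PERMISSIONS = {
    "dba": {"erp-db-01"},
    "app-support": {"billing-app", "reporting-app"},
}


def verify_user(user: dict) -> bool:
    """Automated pre-checks before any access decision (assumed criteria)."""
    return user.get("mfa_passed", False) and not user.get("account_disabled", True)


def grant_access(user: dict, system: str) -> bool:
    """Allow access only for verified users whose role covers the system."""
    if not verify_user(user):
        return False
    return system in ROLE_PERMISSIONS.get(user.get("role", ""), set())


dba = {"role": "dba", "mfa_passed": True, "account_disabled": False}
support = {"role": "app-support", "mfa_passed": True, "account_disabled": False}
print(grant_access(dba, "erp-db-01"))      # True
print(grant_access(support, "erp-db-01"))  # False
```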

Micro-segmentation technology also offers deep visualization capabilities that make policy management easier and allow segmentation to be managed based on application usage. Applying micro-segmentation across production infrastructure, with proper visualization of both modern and legacy workloads, helps minimize risk. It enables the enforcement of server-level policy that allows only specific workflows between legacy systems, and between modern environments or applications and the legacy systems.
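
One way that visibility can feed policy, sketched below under assumed data, is to turn flows observed during a baseline period into proposed server-level allow rules that an operator then reviews; the flow records and the repetition threshold are hypothetical.

```python
# Hypothetical sketch: propose allow rules only for flows seen repeatedly
# during a baseline observation window, leaving one-off flows for review.
from collections import Counter

observed_flows = [
    ("billing-app", "erp-db-01", 1433),
    ("billing-app", "erp-db-01", 1433),
    ("web-frontend", "erp-db-01", 1433),  # seen once; flagged for manual review
]


def propose_rules(flows, min_count=2):
    """Suggest an allow rule for each flow observed at least min_count times."""
    counts = Counter(flows)
    return sorted(flow for flow, seen in counts.items() if seen >= min_count)


for rule in propose_rules(observed_flows):
    print("allow", rule)
```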

Legacy systems and applications continue to present a tough challenge for organizations. They're business-critical but incredibly hard to maintain and properly secure. As organizations embark on digital transformation and introduce hybrid cloud, new applications, and new data centers, the problem only grows worse.

Securing the business starts with securing the critical assets that make the business run. Visibility of the infrastructure, combined with micro-segmentation and continuous monitoring, controls the risk of legacy systems by building tight segmentation policies that attackers can't exploit. And don't neglect basic security hygiene across the enterprise.

Hemanta Swain, senior independent consultant, former chief information security officer, TiVo

See the rest here:
Legacy IT: The hidden problem of digital transformation - SC Magazine

TGen Leverages phoenixNAP’s Hardware-as-a-Service Powered by Intel to Empower COVID-19 Research – PR Web

Empowering COVID-19 Research with Cutting-Edge Tech

PHOENIX (PRWEB) December 23, 2020

phoenixNAP, a global IT services provider offering security-focused cloud infrastructure, dedicated servers, colocation, and specialized Infrastructure-as-a-Service technology solutions, announced a case study detailing its collaboration with Intel on building an IT platform for a COVID-19 project by Translational Genomics Research Institute (TGen), an affiliate of City of Hope.

In an effort to help the global fight against COVID-19, TGen proposed the creation of a centralized platform for knowledge and information sharing among researchers from all over the world. The platform is intended to automatically pull data on sequenced COVID-19 genomes from multiple sources and provide an aggregated dataset to enable comparative research. This would help identify previously uncharacterized elements in the SARS-CoV-2 genome and observe important correlations between them, with the goal of improving diagnostics, vaccine constructs, and treatments for COVID-19.
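
As a loose illustration only, and not TGen's actual pipeline, the sketch below shows the general shape of that kind of aggregation: per-source fetchers return sequence metadata, and records are merged on a shared accession identifier so researchers can compare them side by side. The source names, fields, and example records are invented.

```python
# Hypothetical sketch: merge genome-sequence metadata from several assumed
# sources into one dataset keyed by sequence accession.
def fetch_source_records(source_name):
    """Stand-in for a per-source fetcher; real sources and schemas differ."""
    example_data = {
        "repo_a": [{"accession": "SEQ-001", "collection_date": "2020-11-02", "lineage": "B.1"}],
        "repo_b": [{"accession": "SEQ-001", "country": "US"},
                   {"accession": "SEQ-002", "country": "DE", "lineage": "B.1.1"}],
    }
    return example_data.get(source_name, [])


def aggregate(source_names):
    """Merge per-source records on accession, keeping every field seen."""
    merged = {}
    for source in source_names:
        for record in fetch_source_records(source):
            merged.setdefault(record["accession"], {}).update(record)
    return merged


print(aggregate(["repo_a", "repo_b"]))
```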

Considering the volume and complexity of biomedical data, the platform needed powerful hardware to ensure seamless processing, reliable storage, and global availability. phoenixNAP and Intel collaborated to provide a customized solution to support these needs. phoenixNAP's hardware-as-a-service (HaaS), powered by dual Intel Xeon Gold 6258R CPUs and Intel NVMe SSDs (P4610) with Intel VROC, Intel NICs, and Intel Optane persistent memory, met the needs of the project. The ultrafast network experience is enabled through a customized implementation of Intel Tofino Programmable Ethernet Switch Products, which Intel has offered since the acquisition of Barefoot Networks in June 2019.

"We needed a robust computational environment for large data volumes and sophisticated analytical tools. We have maintained compute infrastructure with phoenixNAP for years, but we needed to expand and customize it to support this project. We got a more streamlined, powerful infrastructure that will give us enough power and memory, while at the same time providing us with a great degree of flexibility as our research expands. Intel Optane PMem emerged as a logical solution to support large data sets," said Glen Otero, VP of Scientific Computing, TGen.

"Healthcare is becoming more intelligent, distributed, and personalized. Intel technologies are helping to enable a new era of smart, connected, value-based patient care, remote medicine and monitoring, individually tailored treatment plans, and more-efficient clinical operations. Intel-enabled technologies help optimize workflow to lower research and development costs, improve operational efficiency, speed time to market, and improve patient health," said Rachel Mushahwar, VP and GM, Intel US Sales, Enterprise, Government and Cloud Server Providers.

"TGen is doing an amazing job every day, and this project is one example of how they are actively working to produce life-changing results. We discussed their project and knew that Intel would be open to collaborating with us on building a proper platform for it. We are excited to have the opportunity to work with both Intel and TGen on something this relevant to the entire world," said Ian McClarty, President of phoenixNAP.

TGen has so far identified several new features in the SARS-CoV-2 genome and continues to focus on making new contributions to the cause. Its project addresses a critical need of the global biomedical community and promises to enhance further research on COVID-19. It also demonstrates the potential of using innovative technology to make a difference in the lives of millions of people.

Download full case study here: https://phoenixnap.com/company/customer-experience/tgen

About phoenixNAP

phoenixNAP is a global IT services provider with a focus on cybersecurity and compliance readiness, whose progressive Infrastructure-as-a-Service solutions are delivered from strategic edge locations worldwide. Its cloud, dedicated server, hardware leasing, and colocation options are built to meet ever-evolving IT business requirements. Providing comprehensive disaster recovery solutions, a DDoS-protected global network, and hybrid IT deployments with software- and hardware-based security, phoenixNAP fully supports its clients' business continuity planning. Offering scalable and resilient opex solutions with expert staff to assist, phoenixNAP supports growth and innovation in businesses of any size, enabling their digital transformation.

phoenixNAP is a Premier Service Provider in the VMware Cloud Provider Program and a Platinum Veeam Cloud & Service Provider partner. phoenixNAP is also a PCI DSS Validated Service Provider and its flagship facility is SOC Type 1 and SOC Type 2 audited.


Go here to see the original:
TGen Leverages phoenixNAP's Hardware-as-a-Service Powered by Intel to Empower COVID-19 Research - PR Web