Category Archives: Cloud Servers

Western Digital goes to the edge with servers and SSD storage – Blocks and Files

Disk drive and SSD maker Western Digital is entering the server business with two specialised transportable systems for edge deployments.

It has already made embedded server storage systems, in the form of home NAS boxes, but these new Ultrastar Edge servers are business products placed at the edge of the incumbent server vendors' turf, meaning Dell, HPE, Lenovo, et al.

Kurt Chan, Western Digital's VP for Data Centre Platforms, said in a statement: "The growth in data creation at the edge, the opportunities to extract value from that data, and the total available markets and customers innovating and doing work at the edge, gives us a great opportunity for our new Ultrastar Edge server family."

WD is building two server systems: the Ultrastar Edge and a ruggedised version, the Ultrastar Edge-MR.

WD organised a supporting quote from Manoj Sukumaran, Senior Analyst, Data Center Compute, at Omdia: "We expect server deployments at edge locations to double through 2024, totalling an estimated five million units, as they are an essential component in enabling new innovations and products, cloud services, remote campuses, CDNs, and virtually any vertical industry that relies on IoT, sensor or remote data."

The base Ultrastar Edge product has 2x Xeon Gold 6230T CPUs at 2.1GHz, each with 20 cores, an Nvidia Tesla T4 GPU, and 8x 7.68TB Ultrastar DC SN640 NVMe SSDs providing up to 61TB of storage. It has two 50Gbit/s or one 100Gbit/s Ethernet connections, so it can be hooked up to a data centre or public cloud when connected. These features make the product fast enough for real-time analytics, AI, deep learning, ML training and inference, and video transcoding at the edge of an IT network.
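As a quick sanity check on the quoted capacity (a back-of-envelope sketch, not from the vendor's datasheet), eight 7.68TB drives give 61.44TB raw, which matches the "up to 61TB" figure:

```python
# Raw NVMe capacity of the base Ultrastar Edge, using the
# drive count and per-drive capacity quoted in the article.
drives = 8
capacity_tb = 7.68

raw_tb = round(drives * capacity_tb, 2)
print(raw_tb)  # 61.44 TB raw, rounded down to "up to 61TB" in marketing terms
```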

The Ultrastar Edge-MR is the Edge packaged as a rugged, stackable device, designed and tested in accordance with MIL-STD-810G-CHG-1 standards for limits of shock and vibration, and to the MIL-STD-461G standard for electromagnetic interference. It has the same pair of gen-2 Xeon SP CPUs with up to 40 cores, a T4 GPU, 512GB of DRAM, and dual 10GBase-T RJ-45 ports plus a Mellanox ConnectX-5 100GbE QSFP28 port. These come inside a hardened box which is rated IP32 for protection against incoming water and debris.

All in, it weighs 71.1lbs (32.25kg), a heavy thing to lug about. WD is envisaging it being used for analysing data during oil and gas explorations, doing research in the Amazon, or in military operations far away from any server closet.

Both Ultrastar Edge products feature the Trusted Platform Module 2.0, a tamper-evident enclosure, and are built to meet the FIPS 140-2 Level 2 security standard.

Asked why WD was getting into the server market, a spokesperson said: "Western Digital provides JBODs and JBOFs to xSP, OEM and channel customers. As the centralised cloud has evolved to the edge, our xSP customers have looked to us to provide specialised servers for data transport and edge capture and compute that leverage our vertical integration capabilities and meet unique requirements that traditional, whitebox servers don't satisfy."

Will Western Digital use RISC-V processors in its servers? "The current Ultrastar Edge family is Intel-based. We are exploring other alternatives including AMD as well as RISC-V, especially for custom designs, and we'll continue to use Arm when that makes the most sense."

Both Ultrastar Edge servers are sampling now and orderable, with general availability beginning around Q4 2021.

OVHcloud US Announces the Availability of Managed Kubernetes Services – Business Wire

RESTON, Va.--(BUSINESS WIRE)--OVHcloud US, a leading global cloud provider, today announced the availability of its managed Kubernetes service. Kubernetes combined with OVHcloud's Public Cloud service provides the perfect platform on which to build and operate scalable cloud-native applications with on-demand computing and storage resources.

As a managed service, in addition to maintaining the underlying hardware, OVHcloud also manages the Kubernetes software stack, keeping it up-to-date with critical updates associated with bugs and security patches. OVHcloud's Kubernetes also delivers load-balancer and auto-scaling capabilities, giving developers the ability to start small with entry-level instances and then upgrade to more powerful instances when moving to larger-scale production.
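The auto-scaling behaviour described here follows the standard upstream Kubernetes Horizontal Pod Autoscaler rule rather than anything OVHcloud-specific: the desired replica count is the current count scaled by the ratio of observed to target metric value, rounded up. A minimal sketch:

```python
from math import ceil

def desired_replicas(current_replicas: int, current_metric: float, target_metric: float) -> int:
    """Upstream Kubernetes HPA scaling rule:
    desired = ceil(current * currentMetric / targetMetric)."""
    return ceil(current_replicas * current_metric / target_metric)

# A deployment at 4 replicas averaging 90% CPU against an 80% target
# scales out to 5; dropping to 30% CPU lets it scale back in to 2.
print(desired_replicas(4, 90, 80))  # 5
print(desired_replicas(4, 30, 80))  # 2
```

The same formula governs scale-in and scale-out, which is why starting small and growing with load works without manual intervention.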

"The ability to deploy, manage, and scale consistent and secure-by-default Kubernetes instances on demand defines what it means to be cloud-native. The power of our managed Kubernetes in our Public Cloud environment allows teams to iterate faster, automate more aggressively, and exploit modern application-lifecycle paradigms. With an overwhelming amount of production applications expected to be cloud native within the next 18-24 months, this solution addresses the major challenges developers face in delivering superior software quickly," said Jeffrey Gregor, General Manager, OVHcloud US.

Key features of OVHcloud's Free Managed Kubernetes Service

Kubernetes in combination with OVHcloud's Public Cloud on-demand resources and pay-as-you-go consumption model provides access to several important features, including:

Additional details on this new product offering can be found at us.ovhcloud.com/public-cloud/.

About OVHcloud US

OVHcloud US is a subsidiary of OVHcloud, a global cloud provider that specializes in delivering industry-leading performance and cost-effective solutions to better manage, secure, and scale data. OVHcloud US delivers bare metal servers, hosted private cloud, and hybrid and public cloud solutions, and was recognized as a "Strong performer" in Forrester's Hosted Private Cloud Services in North America (2Q2020) and as a "Contender" in the IDC Worldwide Public Cloud as a Service Vendor Assessment (2020). OVHcloud manages 32 data centers across 12 sites on four continents, manufactures its own servers, builds its own data centers, and deploys its own fiber-optic global network to achieve maximum efficiency. Through the OVHcloud spirit of challenging the status quo, the company brings freedom, security, and innovation to solve data challenges today and tomorrow. With a 21-year heritage, OVHcloud is committed to developing responsible technology and strives to be the driving force behind the next cloud evolution. https://us.ovhcloud.com.

Bringing the Cloud and Network Together – CIOReview

Prayson Pate, CTO, Edge Cloud, ADVA

The cloud has been important to enterprises for some time. But now it's everywhere, literally. Examples include hybrid cloud for distributed applications, virtualized networking applications, hosting for IoT applications, public and private wireless infrastructure, and so on.

Enterprises must now take a more holistic approach to cloud and edge compute. And this approach must include their communications network.

Let's take a look at the latest trends in cloud and communications technology, and how they impact enterprises both today and tomorrow.

The what: ubiquitous virtualization

Virtualization is now a standard approach to building out IT infrastructure. Virtual machines (VMs) are set up to handle applications, and these VMs share physical infrastructure. They can be hosted locally, or in the cloud, or moved based on changing requirements.

Now we can employ the same approach with communications infrastructure using network functions virtualization (NFV) and universal customer premises equipment (uCPE). We can remove closed and dedicated networking devices (also called appliances). Instead, we can use best-of-breed software running on a standard server.

This approach is like what we see with a smartphone. We can pick a smartphone with the physical features that we want (e.g., cameras, screen size, radios, etc.) and then load it with the apps of our choice. Developers can even create their own apps.

Likewise, with NFV and uCPE we can get similar benefits:

We can pick the software that best meets our needs, and change it later without changing the hardware.

We can pick a server or servers based on our criteria: cost, support, processor type, memory, physical hardening, resilience, number, and type of ports, etc. And we can change it later, or pick multiple suppliers. That way we can increase competition and supply chain resilience.

And because we are using an open and cloud-centric approach, we can run our own applications.

The why: edge cloud

I often say that it's all about the cloud. And not only as an end, but also as the means.

Let's assume we have decided to move forward with a cloud-centric approach to communications services. What else can we do with such an edge cloud? How can it act as a platform for innovation? Here are some of the drivers we're seeing with enterprises.

Modernization: Enterprises are making the changes required to become digital natives and to embrace the cloud. That means moving to software-centric approaches, containers, agile development, etc., all of which require ubiquitous cloud infrastructure such as uCPE.

Consolidation: We all are trying to do more with less. And that means eliminating unnecessary devices. NFV provides a perfect way to replace multiple dedicated devices with software running on a single server. Examples include software for Wi-Fi, surveillance systems, internet of things (IoT), point of sale (PoS), and custom applications.

Supply-chain resilience: Covid-19, the trade wars, and industry consolidation have forced us to re-learn a hard lesson. It's dangerous to rely on single-sourced items. That's true of both hardware and software. With NFV, we can break open networking devices and replace them with hardware and software from separate suppliers. And we can change them independently when required.

Hybrid cloud: Hybrid cloud started as a way to harness multiple cloud providers. Now enterprises need to extend that approach to on-site compute. The driver for this move could be the need to reduce latency, or to live with limited WAN bandwidth, or to meet privacy or data sovereignty requirements. Whatever the reason, enterprises need a strategy that lets them support these distributed applications.

Network improvements: Many enterprises are looking at software-defined wide-area network (SD-WAN) and secure access service edge (SASE) as ways to improve their network and to reduce costs. But they don't want to achieve these goals by introducing yet another networking device into their locations. The move to virtualization and uCPE gives them a perfect platform to make this migration.

Future-proofing: This may be the most important driver of all. Enterprises may justify the move to a virtualized system based on the requirements above. That's because they can quantify the costs and benefits. However, there's also a larger but less tangible value, and that's the ability to make rapid changes to address unforeseen requirements and problems.

A solution for today and a platform for tomorrow's innovation

With open virtualization delivered with NFV/uCPE, we can power today's communications and hybrid cloud applications. We can consolidate and modernize at the same time. We can address the need to move to services like SD-WAN or SASE. We can power our small offices with a single server. And we can do all that right now.

But more importantly, we have created a platform for innovation. The world is unpredictable. But an open and virtualized platform helps our upside with easier development and reduces our risk. And that's always a good thing.

Taming the Overprivileged Cloud – eWEEK

Enterprises are struggling with a swelling number of cloud identities, credentials sprawl and privilege creep. Gartner has sounded the alarm: by 2023, 75% of cloud security failures will result from inadequate management of identities, access and privileges.

It's time to tame the beast.

But first, we need to understand the components of an entitlement that can put security at risk. They can be broken down into: entities, identities, permissions and resources.

An entity is just what it sounds like: a person, machine, service or application that needs access.

Identities can come from cloud identity systems, on-premises identity systems, SaaS applications, etc. And they're not always human; they could be compute resources needed to complete a business function, like an application or a virtual machine using a service identity.

Identities do not necessarily have to belong to users or applications within your organization. We are seeing a sharp growth in what we call third-party identities belonging to vendors that need access to your public cloud infrastructure in order to provide some operational or business value. These can include security vendors, cost optimization vendors, etc.

These identities become more complicated depending on which public cloud infrastructure you're using. Each cloud platform manages identities differently; for example, Microsoft Azure uses Azure Active Directory.

Meanwhile, many organizations also use on-premises identity systems, which are external to the cloud service provider. In these hybrid environments, users have two (or more) identities: a cloud-platform-specific identity and a federated identity for accessing the cloud infrastructure from on-premises identity systems.

Finally, every identity has entitlements or permissions, such as the ability to read and write files, granted by policies associated with the cloud platform or custom-written by the organization based on the identity's roles and access. In addition, entitlements are linked to a specific resource or a group of resources, which could be virtual machines, containers, databases, servers, or secrets such as encryption keys.

Entitlements can also be granted to identities in a number of ways that further complicate their management. These include:

To start connecting the dots between entities, identities, permissions and resources, an organization first needs to understand the permission structure of each cloud provider, whether it's AWS, Azure, GCP or another. Each uses pre-baked permission policies that can lead to privilege creep. These policies tend to be extremely over-provisioned; we see many cases of DevOps teams or developers assigning administrative-type policies to applications and resources such as databases, storage and machine IDs.

There are very few applications that really need the ability to read, write and delete all the storage services in your environment. Usually an application would be using a specific storage service to fulfill its business function. Savvy organizations are recognizing this risk and moving to custom-managed policies or policies that allow them to granularly control permissions.

Managing permissions starts with visibility. You have to be able to discover all the human and machine identities in the environment. Then you have to be able to map all the permission structures, identities and resources to answer one basic question: Who can access sensitive data in my environment?

Answering that question requires mapping permissions and analyzing the broader security context of your resources, such as their network exposure and who can update their configurations. You have to be able to remove stale access and do it at scale. In most cases, the number of permissions that the environment really requires is around 10% to 20% of the actual number that are provisioned.
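The "who can access sensitive data" question is essentially a graph traversal over identities, their group memberships, and the permissions granted to each principal. A minimal illustrative sketch (the identities, groups, and grants below are invented for illustration, not any real cloud's data model):

```python
# Hypothetical sketch: resolving effective access, including access
# inherited through group membership, to answer the question
# "who can read the sensitive customer-db resource?"

memberships = {            # identity -> groups it belongs to
    "alice": {"devops"},
    "build-svc": {"ci"},
    "bob": set(),
}
grants = {                 # principal (identity or group) -> readable resources
    "devops": {"customer-db"},
    "ci": {"artifact-bucket"},
    "bob": {"customer-db"},
}

def can_read(identity: str, resource: str) -> bool:
    # Effective principals are the identity itself plus all its groups.
    principals = {identity} | memberships.get(identity, set())
    return any(resource in grants.get(p, set()) for p in principals)

who = sorted(i for i in memberships if can_read(i, "customer-db"))
print(who)  # ['alice', 'bob'] -- alice only via the devops group
```

A direct look at alice's own grants would miss her access entirely, which is exactly the group-chaining blind spot the article describes.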

This requires determining riskwhich users and machines have access to sensitive resources. You have to be able to identify excessive permissions, right-size them and remediate security problems such as lack of multi-factor authentication or failing to rotate access keys. This includes detecting and automatically removing inactive identities.
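Detecting inactive identities for automatic removal reduces, at its simplest, to comparing each identity's last activity against a staleness cutoff. A minimal sketch with invented data and a hypothetical 90-day threshold:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical sketch: flag identities with no activity in 90 days
# as candidates for automatic removal. The last_seen values are made up.
now = datetime.now(timezone.utc)
last_seen = {
    "alice": now - timedelta(days=12),
    "old-ci-bot": now - timedelta(days=400),
}

def inactive(identities: dict, cutoff_days: int = 90) -> list[str]:
    threshold = datetime.now(timezone.utc) - timedelta(days=cutoff_days)
    return sorted(name for name, seen in identities.items() if seen < threshold)

print(inactive(last_seen))  # ['old-ci-bot']
```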

The biggest barrier to this exercise is the effort involved. It's impossible to do it manually; we're talking about hundreds of thousands of users and resources, maybe even millions. Analytics can help you review access policies to provide a clearer picture of the organization's security posture vis-a-vis identities and their entitlements.

It's also difficult to address these challenges using tools that come with each cloud platform since they lack the granularity to capture and unravel the complexities of all the privileges attached to identities. They can look at a specific user and understand the permissions attached to that user directly, but they will miss groupings, or chaining of identities inside your organization that can introduce risk. They also miss the broader network context, like the ability to understand if an application is exposed to the internet.

If you are using more than one cloud platform, cloud provider tools will not be able to manage identities and permissions in other cloud infrastructures.

Cloud-platform-agnostic automation technology is available for managing identities and permissions at scale. By providing visibility into overprivileged user and machine accounts, it makes it possible to tame the wilderness that cloud entitlements have become for many organizations.

ABOUT THE AUTHOR

Arick Goomanovsky, Chief Business Officer of Ermetic

GitLab fixes serious SSRF flaw that exposed orgs' internal servers – The Daily Swig

John Leyden, 17 June 2021 at 15:03 UTC. Updated: 17 June 2021 at 15:06 UTC

DevSecOops

Programming code-share platform GitLab has fixed a server-side request forgery (SSRF) issue in a software library after the problem was flagged by a security researcher.

Server-side request forgery is a class of web security vulnerability that allows, for example, an attacker to force a vulnerable server to make a connection to internal services within an organization's infrastructure.

Researcher Vin01 discovered that GitLab's CI Lint API, a library related to code handling and managing developer workflows, was flawed.

After discovering the problem last December, the researcher reported it to GitLab, which responded by publishing a temporary fix in February.

GitLab followed up with a more complete patch early this month, clearing the way for Vin01 to publish a detailed technical write-up of their findings.

The affected CI Lint API is used to validate CI/CD YAML configuration for GitLab instances. A flaw in the technology, if left unaddressed, created a means for miscreants to steal sensitive info such as passwords and cloud service credentials, Vin01 told The Daily Swig.

"Installations which had a particular configuration in place to allow internal network requests from GitLab were vulnerable to server-side request forgery (SSRF), where an attacker could have sent a request to internal servers by jumping from the public-facing GitLab servers.

"These internal servers are usually not exposed to the internet as they are only meant to be used internally and may contain sensitive information like passwords, API keys, cloud service credentials, which could have been stolen as a result of this vulnerability."

Public-facing GitLab servers are quite common, and the issue at hand was exacerbated because no authentication was required in order to exploit it.
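The standard mitigation for this class of bug is to refuse to fetch any URL that resolves to an internal address. The sketch below illustrates that general idea with Python's standard `ipaddress` module; it is not GitLab's actual fix, and a real guard must also resolve hostnames and re-check on redirects:

```python
import ipaddress

# Illustrative sketch of an SSRF guard: reject requests whose target
# address falls in a private, loopback, or link-local range.
def is_internal(ip: str) -> bool:
    addr = ipaddress.ip_address(ip)
    return addr.is_private or addr.is_loopback or addr.is_link_local

print(is_internal("10.0.0.5"))       # True  -- RFC 1918 range, blocked
print(is_internal("127.0.0.1"))      # True  -- loopback, blocked
print(is_internal("93.184.216.34"))  # False -- public address, allowed
```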

The vulnerabilities are tracked as CVE-2021-22175 and CVE-2021-22214.

"In my research I saw hundreds of vulnerable GitLab servers including but not limited to many open source projects, government departments and universities which use GitLab for hosting their code and integrate it with their infrastructure," Vin01 added.

The security researcher has put together a small script to test if a GitLab server is vulnerable, available on GitHub.

Vin01 praised GitLab's handling of the disclosure process, adding that even though they have since privately warned many affected organizations about their exposure to the flaw, there are still many vulnerable instances.

Easily transfer VMs to the cloud with Microsoft Azure Migrate – Illinoisnewstoday.com

Planning a cloud migration is like going on a trip. It's important to double-check that everything you need is in your suitcase and to make sure that all travel segments are on time. When administrators move a VM to the cloud, they need to make sure they have the right tools and enough storage to move all their critical virtual infrastructure. Microsoft's Azure Migrate can help with this type of task.

Microsoft Azure Migrate provides IT teams with a centralized portal for discovering, evaluating, and migrating systems and data from their on-premises infrastructure to the Azure cloud.

Administrators can use the portal to move physical and virtual servers, VDIs, databases, web applications, and large datasets. You can use Azure Migrate to migrate your VMs to Azure in private and public clouds. This service is included in your Azure subscription at no additional cost.

Azure Migrate provides a single portal for managing the entire VM migration process. The portal guides administrators through the discovery, evaluation, and migration phases and provides end-to-end operational visibility. Administrators can start, run, track, and analyze workflows.

These tools allow administrators to perform a variety of tasks based on the system or data type they plan to migrate. Organizations need to be aware that Azure Migrate is a one-way service specifically designed to move servers, applications, and data to Azure.

The portal includes:

Azure Migrate also integrates several third-party tools such as Carbonite, Lakeside, RackWare, and UnifyCloud.

For certain operations, administrators install a lightweight appliance for infrastructure setup. Azure Migrate uses this appliance to discover and evaluate physical servers, VMware VMs, and Hyper-V VMs. A single Azure Migrate appliance can discover up to 1,000 physical servers, 10,000 VMware VMs, and 5,000 Hyper-V VMs. You can also use the appliance to perform agentless migration of your on-premises VMware VMs.

For administrators who are already using Azure, the service itself is free, so they can try Azure Migrate with minimal risk. However, charges may still apply when using integrated third-party tools or certain Azure services. The Database Migration Service tool is free for the first 180 days only.

To better understand how Azure Migrate works, administrators need to refer to specific use cases. Suppose your IT department decides to migrate your on-premises Hyper-V VMs to Azure VMs.

For this, administrators can use server evaluation and server migration tools with the Azure Migrate appliance. The entire migration process can be divided into three basic phases: discovery, evaluation, and migration.

These steps are short versions of the steps that an administrator must perform on a Hyper-V VM, but they should serve as an overview.

Before attempting to migrate a VM, please refer to the Azure Migrate documentation and pay close attention to potential limits and workload requirements.

Administrators can discover and evaluate up to 35,000 Hyper-V VMs in a single Azure Migrate project. However, a single appliance instance can only detect 5,000 Hyper-V VMs, distributed across 300 Hyper-V hosts. That said, administrators can deploy multiple appliances and create multiple projects. The project can include physical servers, VMware VMs, and Hyper-V VMs.
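Given the limits quoted above, the number of appliances a project needs is driven by whichever cap runs out first: 5,000 Hyper-V VMs per appliance or 300 Hyper-V hosts per appliance. A back-of-envelope planning sketch (the function name is ours, not an Azure tool):

```python
from math import ceil

# One Azure Migrate appliance handles at most 5,000 Hyper-V VMs
# spread across at most 300 Hyper-V hosts, so take whichever
# constraint demands more appliances.
def appliances_needed(vms: int, hosts: int) -> int:
    return max(ceil(vms / 5000), ceil(hosts / 300))

print(appliances_needed(12000, 500))  # 3 -- the VM count dominates
print(appliances_needed(2000, 650))   # 3 -- the host count dominates
```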

Hyper-V hosts can be standalone machines or deployed in clusters. A Server Core installation can be used, but administrator privileges are required on the host. In addition, PowerShell remoting must be enabled on the host, and Hyper-V Integration Services must be running on the evaluated VMs.

Administrators should consider port settings and storage limits. Azure Migrate only supports Integrated Drive Electronics and SCSI virtual controllers. The system cannot detect machine metadata or dynamic performance data for Hyper-V VMs, but it can for VMware VMs. However, even VMware VMs are limited to detection, not evaluation.

Wills and Probate: Why Your Firm Needs Cloud-Based Solutions – Today’s Wills & Probate

As of last year, 88% of UK businesses had adopted cloud-based technology. Has your wills and probate law firm done so yet for all your practice management needs?

Chances are you're already using some cloud-based technology. Most lawyers use the cloud as part of other internet-related services, whether those be professional or personal: Dropbox, Gmail, Evernote, Facebook, and Amazon all run on cloud technology. For professional usage, the cloud is here to stay for legal firms across the world, a change that has been accelerated by the events of the past 18 months.

Previously, many legal practices relied on on-premise, server-based software, but remote working has exposed many limitations to that sort of system. While firms may prefer the idea of having information stored in-office, there are a number of inherent problems:

As more of us have worked from home, the problems of relying on a server-based system, especially one that is supplemented by paper files stored in filing cabinets, have been highlighted. By contrast, cloud-based technologies offer seamless remote access, greater time savings, better security, 99.9% uptime guarantees, and significant long-term cost savings.

To read more about the benefits of a cloud-based approach for your firm, check out Clio's The Quick Guide to Cloud Computing for Law Firms.

Or, give the world's leading cloud-based legal practice management software a try. Speak with a Clio expert to ask for your free demonstration today.

This article was submitted to be published by Clio as part of their advertising agreement with Today's Wills and Probate. The views expressed in this article are those of the submitter and not those of Today's Wills and Probate.

Invest in cloud security to future-proof your organization – TechTarget

During the COVID-19 pandemic, many enterprises faced immense operational resilience challenges. As such, the pandemic accelerated the shift to the cloud. This sudden shift to an online, no-contact economy prompted what Microsoft CEO Satya Nadella said was "two years' worth of digital transformation in two months."

Cloud platforms helped companies deploy new digital customer experiences in days rather than months, supporting analytics, agility and scalability that would be uneconomical or impossible with legacy platforms.

Yet, at the same time, numerous opportunities were presented to cybercriminals who exploited the new operating environment and preyed on a remote and vulnerable workforce. Data residing on premises and in the cloud quickly became a natural target for bad actors. The seemingly overnight shift of enterprise data to the cloud increased the number of possible failure points in security systems. In fact, McAfee reported a 630% increase in attack attempts from external threat actors on its customers' cloud accounts in early 2020.

This reality has driven enterprises to build an effective cloud security architecture and strategy -- but the path to achieving this has not been an easy one.

While organizational inertia to move to the cloud might have been overcome due to the pandemic, the shift itself is not without three major complexities:

Challenge No. 1: Confusion around the shared responsibility model hasn't helped the situation.

Public cloud providers take responsibility for their clouds' security, but they don't take responsibility for their clients' applications, servers and data security. Companies must encrypt and secure their own data. Yet, many enterprises leave data unencrypted on the cloud or do not implement available encryption tools and management services. Additionally, companies need to invest in a variety of tools, including antimalware, antivirus and secure web gateways, from cloud service providers to protect their data.

Challenge No. 2: CISOs must establish a solid foundation for their cloud security architecture on a security framework that can help define and prioritize risk areas.

Begin by identifying organizational requirements and completing security risk assessments. Next, implement safeguards to ensure infrastructure can self-sustain during an attack. The framework will have to use detection systems to monitor networks and identify security-related events, which will then launch countermeasures to combat potential or active threats. Finally, the framework will need inbuilt recovery capabilities to restore system capabilities and network services in the event of a disruption.

Challenge No. 3: CISOs have to prepare for the worst and hope for the best.

Focus remediation efforts and align security policies across the digital landscape by embedding security in the enterprise architecture. When migrating workloads to the cloud, the security architecture will clearly define how an organization should identify users and manage their access, and protect applications and data, with appropriate security controls across networks, data and applications. It also helps provide visibility into security, compliance and threat posture while injecting security-based principles into the development and operation of cloud-based services.

Cybersecurity regulations are evolving rapidly with the threat landscape, so architectures should design strict security policies and governances to meet compliance standards. CISOs also have the challenge of designing systems that cater to authentication and authorization needs of both on-premises and cloud workloads, which have different protocols. Finally, the IT team should build a centralized dashboard and reporting for security metrics before cloud operations begin.

Security concerns within the cloud landscape are complex due to rapid development. This complexity requires a paradigm shift to protect applications. It can be achieved by migrating from a perimeter-based approach to one where security moves closer to dynamic workloads that are identified based on attributes and metadata. This approach identifies and secures workloads to meet the scale needs of cloud-native applications while accommodating constant flux.

The cloud paradigm requires enterprises to upgrade their legacy technologies and increase automation in the application security lifecycle and secure-by-design architectures. Cloud-native security can be modeled in distinct phases that constitute the application lifecycle -- development, distribution, deployment and operation. This ensures security is embedded throughout these phases instead of separately managed. In addition to cloud-native security controls, add-on components such as security groups and network access control lists for firewalls and distributed denial-of-service attack mitigation must be implemented. AI will also become a core component of all cybersecurity systems to address vulnerabilities and detect security issues.

Cloud security services should safeguard physical infrastructure, applications, data, networks and endpoint devices with a proven technology reference architecture for quality assurance and risk management. Adapting existing authentication methods to enable consistent access control for cloud and on-premises network resources is the route toward greater security. Use real-time security monitoring and reporting to address cloud-specific, industry and compliance standards.

Cloud architects and systems designers must incorporate network security appliances at the design stage for unified control of distributed IT resources. Access controls should combine multifactor authentication with role-based access control. Cloud security remains an interdisciplinary field that cannot be isolated from the development lifecycle or treated as a purely technical domain. In the same vein, cybersecurity is not just an IT problem; it is a business problem. To be effective, organizations must focus on people, process and technology, and ensure security is practiced and embedded as part of the company's DNA.
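The combination of multifactor authentication and role-based access control described above can be sketched in a few lines of Python. The role names, permissions and function shape here are illustrative assumptions, not any vendor's API:

```python
# Hypothetical sketch: RBAC gated by an MFA check.
# Roles and permissions are illustrative, not from any specific product.

ROLE_PERMISSIONS = {
    "admin": {"read", "write", "delete"},
    "operator": {"read", "write"},
    "auditor": {"read"},
}

def is_authorized(role: str, action: str, mfa_verified: bool) -> bool:
    """Grant access only when MFA has succeeded AND the role permits the action."""
    if not mfa_verified:
        return False
    return action in ROLE_PERMISSIONS.get(role, set())

print(is_authorized("auditor", "read", mfa_verified=True))   # True
print(is_authorized("auditor", "write", mfa_verified=True))  # False
print(is_authorized("admin", "delete", mfa_verified=False))  # False
```

Layering the two checks means a stolen password alone grants nothing, and even a fully authenticated user is confined to the permissions of their role.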

About the author

Anant R. Adya is the senior vice president of cloud, infrastructure and security (CIS) services at Infosys. Adya is responsible for growth of the CIS service line in the Americas and Asia-Pacific regions for Infosys. In his 25 years of professional experience, he has worked closely with many global clients to help define and build their cloud and infrastructure strategies and run end-to-end IT operations. He currently works with customers and the industry sales and engagement teams on the digital transformation journey. He defines digital transformation as helping customers determine the location of workloads, using new-age development tools for cloud apps, enabling DevOps and, most importantly, keeping the environment secure and enhancing customer experience.

Here is the original post:
Invest in cloud security to future-proof your organization - TechTarget

Nutanix and HPE to scale cloud adoption with new Database as a Service offering – ITP.net

Nutanix and Hewlett Packard Enterprise (HPE) have extended their collaboration to drive hybrid cloud and multi-cloud adoption by offering Nutanix Era, a multi-database operations and management solution, bundled with HPE ProLiant servers and delivered as a service through HPE GreenLake.

According to the technology leaders, the fully managed cloud service enables customers to deploy applications and databases in minutes and benefit from the agile, elastic, and pay-per-use capabilities of the cloud while gaining the governance, visibility and compliance of an on-premises environment.

"Customers want to simplify database operations and management to move away from IT siloes that can often lead to higher maintenance costs, security risks, and a lack of flexibility to deploy and run solutions," said Keith White, senior vice president and general manager, HPE GreenLake Cloud Services at Hewlett Packard Enterprise. "By building on our successful collaboration with Nutanix, HPE GreenLake together with the Nutanix Era database operations and management software will increase agility, simplify operations and cut costs by delivering a fully managed cloud offering."

By running Nutanix Era on HPE ProLiant servers and delivering the solution as a cloud service through HPE GreenLake, customers can transform database management on one cloud-ready platform. The solution lets customers modernise, consolidate and automate tasks across their databases, with multi-database operations management support for Oracle Database, Microsoft SQL Server, MySQL, PostgreSQL, and MariaDB.

"We continue to see tremendous success in our partnership, and HPE GreenLake with Nutanix Era for databases provides one more opportunity to strengthen our joint offerings and further serve customers," said Tarkan Maner, chief commercial officer at Nutanix. "As customers look for solutions to help them in their journey to hybrid and multicloud, HPE and Nutanix deliver strong, integrated offerings that provide performance, control, and security across a full breadth of portfolio, whether it's solely using the HPE ProLiant DX series of servers for private clouds or combining them with HPE GreenLake cloud services to run this environment as a managed cloud."

Until December 31, 2021, customers can turn on these new capabilities quickly with pre-sized, pre-priced packages that include Nutanix software licensing, HPE pay-as-you-go infrastructure capacity and startup services for the new solution.

HPE GreenLake with Nutanix Era for databases is currently available to customers, with metered billing capabilities arriving in July.

More here:
Nutanix and HPE to scale cloud adoption with new Database as a Service offering - ITP.net

Research: SMBs rely on a mix of internal and cloud-based servers – TechRepublic

Results from a TechRepublic Premium survey show that more respondents are using a hybrid combination of internal and cloud servers than they were in 2020.


Picking the best IT infrastructure and tech vendor for your company is taxing in normal times, but it's even more fraught with challenges during a global pandemic. That's what many small and medium businesses faced last year when COVID-19 forced them to accelerate digital transformation initiatives, software deployments and tech spending.

This is no easy task, as an organization's technology stack can mean the difference between operating a successful, innovative company and a struggling, unsustainable one. TechRepublic Premium wanted to find out how SMBs build their ideal technology infrastructure, so it conducted a survey and compared the results with those of a similar survey from last year.

SEE: Research: COVID-19 causes SMBs to increase IT deployment and spending (available free for TechRepublic Premium subscribers)

COVID-19 has impacted IT deployment and spending for 46% of respondents and affected the types of services SMBs tried, tested or experimented with over the last 12 months. In the previous year, only 27% experimented with Zoom; that number rose to 60% in 2021.


Consistent with last year's survey, SMBs continue to use Microsoft Office 365 (56%), Microsoft Azure (43%) and Amazon AWS (43%). However, the number of respondents who experimented with and tested Google Cloud Platform (28%) was down from the 33% who tried the platform last year.

It's no surprise that SMBs are turning to cloud services for solutions. However, in 2021, 46% of respondents rely on internal on-premises systems, which is a stark decrease from the previous year's response of 63%. Also in 2021, some 44% of respondents use a hybrid combination of internal and cloud servers, which is notably higher than the 2020 survey result of 39%. Furthermore, survey results show that 26% of respondents use more than one cloud service, which is up significantly from the 17% reported in 2020.

The importance of fulfilling business needs has not changed much over the years. The 2021 survey reports that 45% of respondents believe fulfilling business needs is the most important factor when making decisions on IT deployment, which was the same sentiment reported in the 2020 survey.

The infographic below contains selected details from the research. To read more findings, plus analysis, download the full report: Research: COVID-19 causes SMBs to increase IT deployment and spending (available for TechRepublic Premium subscribers).

View post:
Research: SMBs rely on a mix of internal and cloud-based servers - TechRepublic