Category Archives: Cloud Servers

The role of the data centre in the future of Data Management – Data Economy

Given today's challenges with the at-home economy, schooling and Zooming, we need to focus more than ever on cleaning up our house and our Data Center. With the ongoing trend toward multiple computing models and workloads spread across on-premises, public cloud and hybrid environments, data center managers require more visibility and operational control than ever before. Consequently, server asset management is essential when IT staff are making decisions based on the available computation and storage capacity. But with such an overwhelming number of IT assets to track and monitor, especially in large-scale data centers, the task of server asset management has gradually become an efficiency bottleneck.

Enterprises and cloud service providers (CSPs) very often manually maintain and manage server assets through a configuration management database (CMDB). Asset information includes CPU, memory, hard disk model, serial number, capacity and other information.

However, asset management solutions of this kind usually offer limited scope and cannot be easily integrated into existing systems. Moreover, this method presents a number of problems, such as low data entry efficiency, the failure to update data in real time, and the inability to track server component maintenance updates.

Additionally, many large data centers remain hamstrung by the outsourced hardware maintenance model.

With this approach, an operations and maintenance center confirms a hardware failure and then submits a work order to the onsite hardware supplier; after the field personnel complete the batch replacement of parts, they report back to the remote operations and maintenance center through the work order system.

This model has glaring efficiency problems. Feedback is slow, and a manual remote login to the server is needed to confirm whether the parts were replaced correctly, as required.

Adopting a Lean Asset Management Approach for Improved Data Center Efficiency

Lean management practices date back to the 1940s and Taiichi Ohno, a Japanese industrial engineer considered the father of the Toyota Production System. A function of lean manufacturing, these practices sought to increase productivity and efficiency by eliminating waste in the automobile industry, and were later adopted in the U.S. and worldwide.

With respect to lean asset management, Ohno advocated for a clear understanding of what inventory is required for a certain project, real-time visibility of what capacity is available and what is already committed, and a streamlined replenishment process. He also believed that inefficient processes will always cause delays, if not excess inventory (over-provisioning) and idle resources (underutilization).

Sound familiar?

Through the practice of lean asset management methodology in the data center, IT staff gain the ability to manage the server assets in a fine-grained manner, such as tracking the model, brand, capacity, serial number and other information of the main components of the server.

Lean asset management also enables IT teams to react quickly and efficiently when implementing change strategy. As any IT administrator will attest, change deployments and implementations can pose significant risk. When a deployed change affects systems in an unanticipated way, it can lead to service outages that negatively impact an organization's bottom line and its brand reputation.

Discovering changes in server components also makes it convenient for the operations and maintenance team to track component changes in a timely manner and improves the efficiency of the component replacement process. It's also easier to collect information about data center computing resources on demand.

A Trusted Source for IT Asset Discovery and Management

Data center management solutions such as Intel Data Center Manager (Intel DCM) can automatically obtain server asset information, such as CPU, memory, disk model and capacity, for various brands and models through out-of-band methods.

External applications can obtain server asset information through APIs provided by the data center management solution. External systems can then automatically compare device component information and identify changes in parts.
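
To illustrate how such an integration might look, here is a minimal Python sketch that pulls an asset inventory from a management API and compares it against a CMDB snapshot. The endpoint path, field names and credentials are illustrative assumptions, not the documented Intel DCM API.

```python
import requests

# Hypothetical endpoint -- illustrative only, not the documented Intel DCM API.
DCM_BASE = "https://dcm.example.internal/api"

def fetch_assets(session: requests.Session) -> dict:
    """Pull the current server asset inventory (CPU, memory, disks, serial numbers)."""
    resp = session.get(f"{DCM_BASE}/assets", timeout=30)
    resp.raise_for_status()
    # Assumed response shape: {"servers": [{"serial": ..., "cpu": ..., "memory_gb": ..., "disks": [...]}, ...]}
    return {item["serial"]: item for item in resp.json()["servers"]}

def diff_against_cmdb(live: dict, cmdb: dict) -> list[str]:
    """Report servers whose recorded components no longer match what was discovered out-of-band."""
    changes = []
    for serial, record in cmdb.items():
        seen = live.get(serial)
        if seen is None:
            changes.append(f"{serial}: not discovered (decommissioned or offline?)")
        elif seen != record:
            changes.append(f"{serial}: component change detected")
    return changes

if __name__ == "__main__":
    with requests.Session() as s:
        s.auth = ("dcm_user", "dcm_password")  # placeholder credentials
        live_inventory = fetch_assets(s)
        cmdb_snapshot = {}  # load your CMDB export here
        for line in diff_against_cmdb(live_inventory, cmdb_snapshot):
            print(line)
```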

The following is a typical scenario. The remote operation and maintenance center of a CSP discovers a server component is faulty, and requests that the supplier replace the parts onsite at the data center.

The operator has no need to double-confirm the component by logging into the server after the parts have been replaced. Furthermore, the real-time asset information of the entire data center can be reported to the IT staff at any time and before they make any decision.

To support lean asset management methodology, Intel DCM offers many asset management features, such as organizing systems in physical or logical groups, easily searching for systems using their asset tags or other details, and importing and exporting a data center's inventory and hierarchy.

Along with Intel DCM's real-time power and thermal monitoring, and its middleware APIs that allow the software to integrate easily with other solutions, these features help companies avoid investing in additional asset management tools.

As organizations continue to leverage multiple computing models, further dispersing their workloads, and the data center itself becomes more complex, manual processes can't keep pace with the rate of change in the IT environment.

By adopting a lean asset management approach, supported by a data center management solution with IT asset discovery and management capabilities, data center managers benefit from a trusted source of information about asset ownership, interdependencies and utilization so that they can make informed decisions regarding the deployment, operations and maintenance of their servers and systems.

So after you've cleaned out your fifth closet at home, think about clearing your data center clutter using innovative automation tools and seeing these lean asset management principles at work. There's no question that Taiichi Ohno would be proud.


We’d love to come up with a Harbor container ship pun but we’re too corona-frazzled. Version 2.0 is out – The Register

Harbor, the open-source container image registry, has reached version 2.0, becoming the first open-source registry to fully support the Open Container Initiative (OCI) specification.

There are quite a few registries that allow you to store container images (blueprints for launching containerized applications) and other cloud app artifacts carrying related metadata, such as Helm charts, OPA (Open Policy Agent) policies, and other configuration-related files.

Perhaps the best known is the open source Docker Registry and Docker Hub, a hosted implementation of that code.

There's also Portus, an authorization server and front-end for Docker Registry made by SUSE. And there are various commercial offerings from cloud providers like Alibaba Container Registry, Amazon Elastic Container Registry (ECR), Azure Container Registry, GitLab Container Registry, Google Container Registry, and Red Hat Quay.

Harbor was developed by VMware and put under the oversight of the Cloud Native Computing Foundation in 2018. It's used by thousands of enterprise organizations that want to run their own container image registries, said Michael Michael, director of product management at VMware and Harbor maintainer, in an interview with The Register.

The OCI defines an image specification, for building the application, and a runtime specification, for creating the application environment.

So Harbor's OCI-compliance means that it supports the full set of APIs for defining what can be stored in a Harbor registry.

"Now you have more ways to describe what a container image looks like and how it can be deployed," said Michael. "It enables you to store in any OCI-complaint registry all of the cloud-native artifacts that are important to you."

These cloud-native artifacts, he explained, allow policy rules to be applied to ensure that containers are handled securely.

Version 2.0 brings several noteworthy features.

For example, Harbor now supports the OCI image index specification, by which platform-specific versions of an image can be declared via an index. Also, charts for Helm, the Kubernetes package manager, can now be stored within Harbor, using Helm v3, instead of separately in a Helm-specific repository known as ChartMuseum.

Also, the Clair image scanner, which looks for vulnerabilities in Harbor-stored images, is being replaced, though it will still be supported.

"We're making Aqua's image scanner Trivy the default scanner for all your projects," said Michael.

Version 2.0 adds the ability to set expiration dates on individual robot accounts, instead of just a single system-wide setting. It also introduces the ability to configure Harbor services to use SSL, a significant security improvement.

And it adds the ability to trigger webhooks (API callbacks) individually, so users can configure them, on a per-project basis, to target HTTP endpoints or Slack.
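
As a rough illustration of the HTTP target option, the following Flask sketch receives webhook callbacks from a registry and logs the affected artifacts. The payload field names shown are assumptions for illustration; the authoritative schema is in the Harbor documentation.

```python
from flask import Flask, request

app = Flask(__name__)

# Field names below are assumptions for illustration; consult the Harbor 2.0
# webhook documentation for the exact payload schema.
@app.route("/harbor-events", methods=["POST"])
def harbor_events():
    event = request.get_json(force=True)
    event_type = event.get("type", "unknown")                     # e.g. an artifact push or scan result
    resources = event.get("event_data", {}).get("resources", [])
    for res in resources:
        print(f"{event_type}: {res.get('resource_url')}")         # log which artifact the event refers to
    return "", 200

if __name__ == "__main__":
    app.run(port=8080)
```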

Michael said the 2.0 update also lays the groundwork for future improvements to how Harbor handles container image garbage collection (containers create a lot of files that need to be removed). The Docker model of garbage collection, he said, isn't scalable because it adds downtime. Harbor now tracks all the layers in an image and in the future will be able to handle garbage collection without downtime.


Edge Intelligence: The Next Wave of AI – EE Times India

Article by: Dennis Goldenson

Edge computing provides an opportunity to turn AI data into real-time value across almost every industry. The intelligent edge is the next stage in the evolution and success of AI technology...

As adoption rates rise for artificial intelligence and machine learning (ML), the ability to process large amounts of data in the form of algorithms for computational purposes becomes increasingly important. To help make the expanding use of data applications across billions of connected devices more efficient and valuable, there is growing momentum to migrate the processing from centralized third-party cloud servers to decentralized and localized processing on-device, commonly referred to as edge computing. According to SAR Insight & Consulting's latest AI/ML embedded chips database, the global number of AI-enabled devices with edge computing will grow at a compound annual growth rate of 64.2% during the 2019-2024 period.

Edge AI takes the algorithms and processes the data as close as possible to the physical system, in this case locally on the hardware device. The advantage is that processing the data does not require a connection. The computation happens near the network edge, where the data is produced, instead of in a centralized data-processing center. Determining the right balance between how much processing can and should be done on the edge will become one of the most important decisions for device, technology, and component providers.

Given the training and inferencing engines that produce deep-learning predictive models, edge processing usually requires an x86 or Arm processor from suppliers such as Intel, Qualcomm, Nvidia, and Google; an AI accelerator; and the ability to handle speeds of up to 2.5 GHz with 10 to 14 cores.

Given the expanding markets and the growing service and application demands placed on computational data and power, several factors and benefits are driving the growth of edge computing. Because of the shifting need for reliable, adaptable, and contextual information, a majority of data processing is migrating locally to the device, resulting in improved performance and response time (less than a few milliseconds), lower latency, higher power efficiency, improved security because data is retained on the device, and cost savings because data-center transports are minimized.

One of the biggest benefits of edge computing is the ability to secure real-time results for time-sensitive needs. In many cases, sensor data can be collected, analyzed, and communicated straightaway, without having to send the data to a faraway cloud center. Scalability across various edge devices to help speed local decision-making is fundamental. The ability to provide immediate and reliable data builds confidence, increases customer engagement, and, in many cases, saves lives. Just think of all of the industries (home security, aerospace, automotive, smart cities, health care) in which the immediate interpretation of diagnostics and equipment performance is critical.

Innovative organizations such as Amazon, Google, Apple, BMW, Volkswagen, Tesla, Airbus, Fraunhofer, Vodafone, Deutsche Telekom, Ericsson, and Harting are now embracing and hedging their bets for AI at the edge. A number of these companies are forming trade associations, such as the European Edge Computing Consortium (EECC), to help educate and motivate small, medium-sized, and large enterprises to drive the adoption of edge computing within manufacturing and other industrial markets.

The goals of the EECC initiative include specification of a reference architecture model for edge computing, development of reference technology stacks (EECC edge nodes), identification of gaps and recommendation of best practices by evaluating approaches within multiple scenarios, and synchronization with related initiatives/standardization organizations.

The advancement of AI and machine learning is providing numerous opportunities to create smart devices that are contextually aware of their environment. The demands placed on smart machines will benefit from the growth in multi-sensory data that can compute with greater precision and performance. Edge computing provides an opportunity to turn AI data into real-time value across almost every industry. The intelligent edge is the next stage in the evolution and success of AI technology.


Patch by Friday or compromised by Monday: Salt exploit exposes Infrastructure-as-Code tools threat – SC Magazine UK

The disruptive attacks highlight what some cyber experts say is an overlooked or underestimated threat vector among developers: Infrastructure-as-Code (IaC). Considered a key element of DevOps practices, IaC tools such as Salt typically allow developers to use code to automate the management and provisioning of complex computer infrastructure environments, helping them avoid configuration discrepancies between machines that can hold up software deployments and might otherwise require manual intervention. But it's these helpful capabilities that can also make the exploitation of IaC tools uniquely dangerous.

"To understand the potential implications of an IaC [exploit], one must remember that IaC is designed to accomplish two fundamental objectives: consistency and speed," said Bill Santos, president and COO of Cerberus Sentinel. "IaC tools are designed to quickly deploy and update large environments in a very standardised way. The implication of an exploited IaC is significant: whereas the consistency and speed are advantageous for approved changes, an exploited change will get deployed equally quickly and equally consistently across that same environment, dramatically increasing its impact vs. other exploit approaches."

Santos added that many developers are not appreciating the importance of IaC code and are not reviewing it, testing it, etc. at the same level they would application-level code. And in so doing, they are creating or increasing a very real threat vector.

Therefore, "it's important to elevate the significance of any automation code, especially IaC code, within the context of the development lifecycle," said Santos. "It is not second-class code, but rather carries the same importance and significance as any other code supporting an application. It needs to be reviewed, tested and assured in a [manner] similar to every other element of an application architecture."

Indeed, in the recently released Spring 2020 edition of the Unit 42 Cloud Security Report, researchers with Palo Alto Networks' global threat intel team warned that developers are failing to scan IaC templates for security issues whenever they are created or updated, which raises the likelihood of encountering exploitable cloud vulnerabilities.

"We found that nearly 200,000 IaC templates contained at least one vulnerability or misconfiguration, which range in severity from exposing systems to the public to disabling encryption and logging requirements. So yes, IaC is often overlooked as a serious threat vector," said Nathaniel Quist, senior cloud threat researcher with Unit 42. "As an industry, we should encourage all organisations to employ the proper implementation of IaC templates within a vetted and secure CI/CD Development Operations using Cloud Native Security Platforms (CNSP). IaC templates greatly increase the speed at which organisations can deploy business-critical applications, but without proper security oversight, they could also increase the speed at which they open themselves up to malicious attacks."

The various attacks took place after adversaries scanned the internet looking for Salt masters (the servers used to control the minions that carry out tasks for the IaC tool) that were both exposed over the internet and vulnerable to the two bugs. Users are vulnerable to exploitation only if both conditions are met.
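
For operators wondering whether their own Salt master meets the "exposed over the internet" condition, one quick starting point is a reachability check of the master's default ZeroMQ ports, 4505 and 4506. The sketch below is illustrative only; run it solely against hosts you are authorised to test.

```python
import socket

SALT_PORTS = (4505, 4506)  # Salt master publish/request ports (ZeroMQ)

def is_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Replace with a host you are authorised to test.
    host = "salt-master.example.com"
    for port in SALT_PORTS:
        state = "OPEN" if is_reachable(host, port) else "closed/filtered"
        print(f"{host}:{port} -> {state}")
```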

Ghost on May 3 reported an outage affecting its services, later reporting that an actor had exploited vulnerabilities in its Salt server management infrastructure to install cryptojacking software. "The mining attempt spiked CPUs and quickly overloaded most of our systems, which alerted us to the issue immediately," the blogging platform stated.

In a subsequent update, Ghost said it removed the cryptominer and added multiple new firewalls and security precautions, the introduction of which ironically further disrupted customer blog sites temporarily. "At this time there is no evidence of any attempts to access any of our systems or data," Ghost asserted. "Nevertheless, all sessions, passwords and keys are being cycled and all servers are being re-provisioned."

Jeremy Rowley, VP of business development at DigiCert, reported via a May 3 Google Groups post that a CT (Certificate Transparency) Log 2 key used to sign Signed Certificate Timestamps was compromised.

"We are pulling the log into read-only mode right now," the post said. "Although we don't think the key was used to sign SCTs (the attacker doesn't seem to realise that they gained access to the keys and were running other services on the [infrastructure]), any SCTs provided from that log after 7pm MST yesterday are suspect. The log should be pulled from the trusted log list." Rowley later said in an update that the log should be distrusted for everything after 17:00:02 on May 2.

And LineageOS reported that on 2 May, a malicious actor accessed its Salt master "to gain access to our infrastructure." LineageOS's services were knocked temporarily offline, forcing the developer to restore them in piecemeal fashion. However, signing keys and builds were unaffected.

Researchers with F-Secure, who discovered the flaws, reported last Friday in a blog post and corresponding advisory that attackers could exploit the bugs to bypass the authentication and authorisation controls used to regulate access to Salt implementations and then remotely execute code with root privileges on the master, allowing for control of all its minions.

"Patch by Friday or compromised by Monday," said F-Secure principal consultant Olle Segerdahl in the blog post.

F-Secure says it conducted its own scan and found 6,000 instances of exposed Salt masters. "I was expecting the number to be a lot lower. There's not many reasons to expose infrastructure management systems, which is what a lot of companies use Salt for, to the internet," said Segerdahl.

However, Alex Peay, SVP of product at SaltStack, characterized the 6,000 instances as "a very small portion of the [Salt] install base," adding that "clients who have followed fundamental internet security guidelines and best practices are not affected by this vulnerability."

According to SaltStack's official advisory, the two bugs, designated CVE-2020-11651 and CVE-2020-11652, were discovered in the salt-master process's ClearFuncs class in Salt versions prior to 2019.2.4 and 3000.2. The former bug is due to improper validation of method calls and allows a remote user to access some methods without authentication. "These methods can be used to retrieve user tokens from the salt master and/or run arbitrary commands on salt minions," the advisory states. The other flaw allows access to methods that improperly sanitise paths. "These methods allow arbitrary directory access to authenticated users," the advisory continues.

In a patch issued at the end of April, Salt fixed the validation process. However, attackers did not waste time taking advantage of users who had not immediately updated to one of the patched, secure versions.

"Although there was no initial evidence that the CVE had been exploited, we have confirmed that some vulnerable, unpatched systems have been accessed by unauthorised users since the release of the patches," said Peay. "We must reinforce how critical it is that all Salt users patch their systems and follow the guidance we have provided outlining steps for remediation and best practices for Salt environment security. It is equally important to upgrade to the latest versions of the platform and register with support for future awareness of any possible issues and remediations."

James McQuiggan, security awareness advocate at KnowBe4, said that the Salt vulnerabilities can be abused for a lot worse than just the reported cryptomining scam.

"If organisations do not update their SaltStack, they are exposed to an attack where malware, ransomware or attack vectors can be initiated to gain control, steal intellectual property or hold an organisation's data for ransom," said McQuiggan. "Incident response for organisations needs to be swift to implement testing and patching of the servers using SaltStack. If they cannot be updated, additional steps will be required to reduce access on applications, users and systems to only those necessary and required for access."

Quist from Unit 42 offered these key takeaways for IaC users: Trust but verify all network operations. All user access events should be monitored, and only authorised users should be given access. Changes or updates to all Salt master or minion nodes need to be vetted to ensure no security risks are present. No changes should be allowed to occur to any Salt IaC template without approval, and changes need to be verified for integrity. All requests for change need to be properly authenticated and their integrity needs to be verified.
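
As a small illustration of the "changes need to be verified for integrity" point, the sketch below compares Salt state files (.sls is the Salt state extension) against digests recorded by a change-approval process. The filenames and digests are placeholders; the mechanism, not the values, is the point.

```python
import hashlib
from pathlib import Path

# Known-good digests would come from your change-approval process; values here are placeholders.
APPROVED_DIGESTS = {
    "webserver.sls": "PLACEHOLDER_SHA256",
}

def sha256_of(path: Path) -> str:
    """Hash a template file so it can be compared against the approved record."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def verify_templates(template_dir: str) -> list[str]:
    """Flag any template whose digest does not match the approved record."""
    problems = []
    for path in Path(template_dir).glob("*.sls"):
        expected = APPROVED_DIGESTS.get(path.name)
        if expected is None:
            problems.append(f"{path.name}: no approval record")
        elif sha256_of(path) != expected:
            problems.append(f"{path.name}: digest mismatch, change not approved")
    return problems

if __name__ == "__main__":
    for issue in verify_templates("salt/states"):
        print(issue)
```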

This article was first published in SC US.


Serverless Exists In The Cloud and Both Need Servers – Computer Business Review


One of the biggest advantages of serverless computing may also be hiding one of its biggest risks

For the layperson, serverless computing is another one of those annoying terms, like cloud computing, that tries to make it sound like no physical hardware is being used. Of course that's simply not true; just like cloud computing, all serverless really means is that you are outsourcing your computational needs to a third-party provider.

Serverless exists in the cloud, and both of these need servers. The difference between serverless and other cloud-based offerings is that you pay for backend services on an as-used basis, something that changes (like many cloud services) the investment model for firms from a significant initial investment and continued maintenance to a pay-as-you-go scheme.

With serverless computing, cloud vendors provide a function-as-a-service (FaaS) offering. With FaaS, code is stored and executed in the cloud, and is only spun up when it is required. Executed functions can handle batch or data processing or any number of application functions, depending on the task.

IT teams no longer have infrastructure to maintain, and this frees them up to innovate when it comes to developing functions, as all of the major providers allow you to write functions in languages like Go, Python and JavaScript. This code is then sent to the cloud provider, where it is stored and called upon when required, be that resizing photos or organising messages. Essentially the end user accesses their code/applications via URLs and a connection with their cloud provider.
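
For a sense of what such a function looks like, here is a minimal Python handler in the AWS Lambda style, one of the FaaS models described above. The event field is illustrative; the platform supplies the event and context when it invokes the function.

```python
import json

def handler(event, context):
    """Entry point invoked by the FaaS platform; spun up only when a request arrives."""
    # 'event' carries the request payload; the 'name' field below is illustrative.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```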

Thomas LaRock, head geek at SolarWinds, states that serverless computing is part of a greater trend of transitory infrastructure: "New models like containerisation and serverless functions are not only abstracting the reliance on hardware but also on operating systems. They're designed to be ephemeral, created or called on-demand, delivering their outcome before disappearing, to be recreated whenever or wherever they're needed and not remaining in our infrastructure forever."

The obvious advantage of operating a serverless infrastructure is zero server-management cost, as most of the hardware and tasks are run offsite, resulting in no servers to maintain or moan about when they crash (on your end at least).

Cybersecurity responsibilities such as configuring firewalls and data encryption are no longer dealt with by internal IT workers. Patches are automatically pushed out by your service provider, so systems are never left pining for the latest patch or security update. (To be clear, this is not an excuse to forget about cybersecurity, which should always be a concern.)

Ben Newton, director of operations analytics at Sumo Logic, told us: "A serverless back-end would scale and load-balance automatically as the application load increases or decreases, all the while keeping the application online. The user would only need to pay for the resources consumed by a running application. Theoretically at least, this has the promise of drastically reduced development cycles and low operational costs."

This type of flexible scalability is an ideal fit for organisations and businesses that see strong surges of engagement during seasonal events, such as the retail and charity sectors, which can experience tenfold increases in computational demand during the Christmas season.

Working like this can be a boon for your development team, who no longer have to worry about hardware and can simply focus on creating innovative applications or functions. Brandon, CTO at cloud-based rota planning software company RotaCloud, told us: "The advantage of serverless computing for us was reducing the barrier to entry for developers, and vastly reducing DevOps workload. Running serverless allows us to focus on code, and completely eliminate the need to configure hardware, scaling, operating systems, and HA."

Serverless computing moves the burden of responsibility from the end user to the third-party provider; unfortunately, incident after incident has demonstrated the risk of trusting others.

Serverless computing also makes the job of testing and debugging more difficult due to its ephemeral nature. Environments that are created can be very difficult to replicate and some processes and logs may no longer be visible.

Nic Wood, ERP Enterprise and Cloud Architect at Version 1, notes: "Serverless follows a microservice design pattern, meaning each function provides a single service. This multiplicity of functions can increase difficulty in troubleshooting and tracing events. Even logs can be generated in numerous places, and knowing where to investigate can be complicated."

One of the biggest advantages of serverless computing may also hold its biggest risk: the pay-as-you-go payment structure. Sure, your company no longer has to pay for hardware maintenance, but third-party providers are not doing it out of the goodness of their hearts; the reality is they make money every time you need to scale up.

Tom Weeks, technical director at Informed Solutions, told us that for many firms "it might be a complicated step to move to a cost model that involves budgeting based on (for example) the number of requests made to an API and how long it takes to process that request. These sorts of pricing models fundamentally change what organisations need to know about the inner workings of their digital services and applications, which has the potential to be very challenging, particularly in environments that are running a lot of legacy IT."
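
To see why that budgeting exercise matters, here is a back-of-the-envelope sketch of a request-based cost model in Python. The per-request and per-GB-second prices are placeholders, not any provider's published rates.

```python
# Placeholder prices -- not any provider's actual rates.
PRICE_PER_MILLION_REQUESTS = 0.20   # USD
PRICE_PER_GB_SECOND = 0.0000166667  # USD

def monthly_cost(requests_per_month: int, avg_duration_ms: float, memory_gb: float) -> float:
    """Estimate a monthly FaaS bill from request volume, average duration and memory allocation."""
    request_cost = requests_per_month / 1_000_000 * PRICE_PER_MILLION_REQUESTS
    gb_seconds = requests_per_month * (avg_duration_ms / 1000) * memory_gb
    compute_cost = gb_seconds * PRICE_PER_GB_SECOND
    return request_cost + compute_cost

if __name__ == "__main__":
    # Example: 10 million requests per month, 120 ms each, 0.5 GB of memory allocated.
    print(f"Estimated bill: ${monthly_cost(10_000_000, 120, 0.5):,.2f} per month")
```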


Industrial 5G and the Mobile Edge – ARC Viewpoints

It is just over a year since South Korea's SK Telecom launched the world's first commercial 5G service, on April 3, 2019. In a recent announcement to mark the anniversary, the company, which is the country's mobile market leader, revealed its 5G progress (a subscriber base of 2.2 million and a 45 percent market share) as well as its plans to deliver business growth with the fifth-generation cellular technology over the next few years.

While in the consumer realm those plans include Project xCloud, a step change in the mobile gaming experience enabled by 5G's high-speed, high-bandwidth connections to cloud servers, SK Telecom is placing a particularly strong focus on the enterprise segment by working with companies in diverse sectors in order to, in its words, catalyse industrial innovations in Korea.

Those collaborations include one with semiconductor giant SK Hynix to develop a smart factory based on a private 5G network deployment; with Korea Hydro & Nuclear Power (KHNP) to realize a 5G smart power plant; and with LG Electronics to develop and commercialize 5G cloud-based autonomous robots.

Significantly, SK Telecom's 5G anniversary announcement also detailed its plans to build 5G MEC (multi-access edge computing) centers in 12 different locations across the country in order to lead a cloud-driven industrial revolution. As defined by ETSI (the European Telecommunications Standards Institute), multi-access edge computing is an IT service environment provided at the edge of a mobile network and characterized by ultra-low latency, high bandwidth and real-time access to radio network information for leverage by applications.

In the context of industrial 5G, MEC allows communications between factories and cloud-server-hosted applications to meet critical latency requirements by removing the latency associated with connecting directly to cloud computing infrastructure sited at distant data centers, and hence enables the flexible and unfettered use of productivity- and quality-enhancing technologies such as augmented and virtual reality (AR/VR), machine vision and robotics. As well as the distributed type of MEC that SK Telecom is deploying at its own premises, MEC can also be implemented on-site at individual end-user facilities, which can help to ease data security and privacy concerns as well as further reduce latency.

The aforementioned SK Telecom/LG Electronics 5G cloud-based autonomous robot project intends to make use of MEC technology, while SK Telecom competitor KT plans to use MEC in a collaboration with Cognex to develop a 5G based machine vision solution and also with Hyundai Heavy Industries to integrate 5G technology in the development of autonomous robots and smart factory facilities. According to company chairman Hwang Chang-Gyu, KT's edge-cloud architecture has led to the establishment of edge-computing telecom centers in major cities across the country, lowering the 5G latency to a 5 ms level.

Of course, multi-access edge computing activities are not restricted to Korea. Mobile operators of the likes of AT&T and Verizon in the US and Deutsche Telekom and Vodafone in Europe are all at various stages of MEC deployments and are actively exploring and promoting use cases that can best leverage the technology. Verizon, for example, touts the following potential applications for 5G + MEC: autonomous vehicles; immersive experiences (AR/VR); massive IoT (MIoT); connected factories; next-level logistics; and smart communities (public safety, transit, utilities, citizen engagement).

It is not surprising that the major cellular network technology suppliers such as Ericsson, Huawei and Nokia have developed MEC solutions for their telco customers. However, the market is also seeing activity from new entrants and non-traditional telecoms industry players. SK Telecom, for example, is partnering with MobiledgeX, a company funded by Deutsche Telekom but independently run in San Francisco, for its MEC deployments. Verizon is making use of Amazon's AWS Wavelength 5G edge computing platform, which was launched in December 2019. And AT&T has an edge computing collaboration with Microsoft, building on a broader partnership between the two companies announced last year at MWC 2019 in Barcelona.


Privitar Announces New Native Integration With Google Cloud Platform – Business Wire

LONDON & BOSTON--(BUSINESS WIRE)--Privitar, the leading data privacy platform provider, today announced that the Privitar Data Privacy Platform now natively integrates with the Google Cloud Platform. The new integration adds to Privitar's native support of public cloud services, including AWS and Azure, and enables customers to seamlessly protect and extract the maximum value from the sensitive data they collect, manage and use.

"Traditional security measures such as encryption and attribute-based access control are not sufficient as we increase the exposure to sensitive data assets in the cloud. Complementary privacy controls are essential, particularly for analytics applications that need to optimise data utility," said Jason du Preez, CEO of Privitar. "Privitar's native integration with Google Cloud Platform makes it easy for customers to leverage their data to gain valuable insights and to support data-driven decisions, without jeopardizing its safety."

The Privitar Data Privacy Platform provides a unique combination of privacy techniques, governance and management features that are essential to any organization embracing data-driven insight. The native integration with Google Cloud Platform enables customers to:

Privitar is a GCP Partner Advantage partner. For more information about Privitar's new integration with Google Cloud Platform, visit: http://www.privitar.com/partners/google-cloud-partner

About Privitar

Organizations worldwide rely on Privitar to protect their customers' sensitive personal data and to deliver comprehensive data privacy that frees them to extract maximum value from the data they collect, manage and use.

Founded in 2014, Privitar is headquartered in London, with regional headquarters in Boston and Singapore, a development center in Warsaw, and sales and services locations throughout the US and Europe. For more information, please visit http://www.privitar.com.


Analysis on Impact of COVID-19- Rugged Servers Market 2020-2024 | Increased Adoption of Cloud Applications to Boost Growth | Technavio – Business Wire

LONDON--(BUSINESS WIRE)--Technavio has been monitoring the rugged servers market and it is poised to grow by USD 546.29 million during 2020-2024, progressing at a CAGR of about 5% during the forecast period. The report offers an up-to-date analysis regarding the current market scenario, latest trends and drivers, and the overall market environment.

Technavio suggests three forecast scenarios (optimistic, probable, and pessimistic) considering the impact of COVID-19. Please Request Latest Free Sample Report on COVID-19 Impact

The market is fragmented, and the degree of fragmentation will accelerate during the forecast period. Core Systems, Crystal Group Inc., Dell Technologies Inc., EMET Computing, Enoch Systems LLC, International Business Machines Corp., Mercury Systems Inc., Sparton Corp., Systel Inc., and Trenton Systems Inc. are some of the major market participants. The increased adoption of cloud applications will offer immense growth opportunities. To make the most of the opportunities, market vendors should focus more on the growth prospects in the fast-growing segments, while maintaining their positions in the slow-growing segments.

Increased adoption of cloud applications has been instrumental in driving the growth of the market.

Rugged Servers Market 2020-2024: Segmentation

Rugged Servers Market is segmented as below:

To learn more about the global trends impacting the future of market research, download a free sample: https://www.technavio.com/talk-to-us?report=IRTNTR40500

Rugged Servers Market 2020-2024: Scope

Technavio presents a detailed picture of the market by the way of study, synthesis, and summation of data from multiple sources. Our rugged servers market report covers the following areas:

This study identifies the emergence of containerized data centers as one of the prime reasons driving the rugged servers market growth during the next few years.

Rugged Servers Market 2020-2024: Vendor Analysis

We provide a detailed analysis of vendors operating in the rugged servers market, including some of the vendors such as Core Systems, Crystal Group Inc., Dell Technologies Inc., EMET Computing, Enoch Systems LLC, International Business Machines Corp., Mercury Systems Inc., Sparton Corp., Systel Inc., and Trenton Systems Inc. Backed with competitive intelligence and benchmarking, our research reports on the rugged servers market are designed to provide entry support, customer profile and M&As as well as go-to-market strategy support.

Register for a free trial today and gain instant access to 17,000+ market research reports.

Technavio's SUBSCRIPTION platform

Rugged Servers Market 2020-2024: Key Highlights

Table Of Contents:

Executive Summary

Market Landscape

Market Sizing

Five Forces Analysis

Market Segmentation by End-user

Customer landscape

Geographic Landscape

Drivers, Challenges, and Trends

Vendor Landscape

Vendor Analysis

Appendix

About Us

Technavio is a leading global technology research and advisory company. Their research and analysis focus on emerging market trends and provide actionable insights to help businesses identify market opportunities and develop effective strategies to optimize their market positions. With over 500 specialized analysts, Technavio's report library consists of more than 17,000 reports and counting, covering 800 technologies, spanning across 50 countries. Their client base consists of enterprises of all sizes, including more than 100 Fortune 500 companies. This growing client base relies on Technavio's comprehensive coverage, extensive research, and actionable market insights to identify opportunities in existing and potential markets and assess their competitive positions within changing market scenarios.


Norton 360 Deluxe review: Comprehensive security solution with built-in VPN – Business Standard

The Norton 360, the latest offering from Norton, the security solution provider best known for its antivirus and internet security products, doubles up as a service that safeguards privacy on the internet while also neutralising viruses and protecting data.

The Norton 360 suite includes device protection from viruses and internet security, access to secure virtual private network (VPN), cloud storage for backup, password manager and parental controls. Moreover, it is a single-security solution that covers computers (Windows and Mac) and mobile devices (iOS and Android).

Being a subscription-based offering, the product comes in three yearly plans: Standard, Deluxe 3 Devices and Deluxe. The Standard plan offers single-device support, Deluxe 3 Devices covers three devices, and Deluxe covers five. We used the Deluxe subscription and tested the product on an Android smartphone and a Windows-based notebook.

Device protection

The device protection tools are available in all subscription plans. They include antivirus, anti-spyware and anti-malware protection. Besides, there are internet security tools, too. The device protection tools get regular updates over the internet for protection against newly found threats, so you need internet connectivity to keep the threat database up to date.

The device protection feature works in the background and does not impact the overall system performance. Moreover, it has a built-in feature to improve system performance, which works only when the system is idle. One might notice a slight lag in the system when the performance improvement tasks are running in the background. However, the lag is temporary and the system comes back to normal operating conditions quickly.

As for protection, the Norton 360 provides comprehensive coverage against most threats. In fact, some independent benchmark portals state that it has 100 per cent protection rate against web threats. For optimal performance, however, you might need to install and set up all the add-ons that it offers, especially for internet security. These add-ons are available for most popular browsers, including Chrome, Edge and Firefox.

Data security

The Norton 360 has a built-in backup service for data security. The Norton 360 Deluxe subscription comes with 100GB cloud storage, which is good enough to store important files and documents on an encrypted online server. It is easy to set up and offers flexibility to choose what to back up and how frequently to run the backup to keep files updated on the cloud storage. Unfortunately, the cloud storage service is available for backups only. There is no provision to store anything else on the cloud server using a browser or any other means. Backup and restoration are done exclusively through the My Norton application.

Privacy

Virtual private network (VPN) is one of the easiest ways to protect one's privacy online, and it comes bundled with Norton 360. On desktop, the service is integrated within the My Norton application. On smartphones, it requires an additional Norton VPN app. Setting it up is easy: enable it, and the application automatically selects the best server to provide a safe internet experience. There is a fair list of global servers available, too, if you want to go beyond your geography virtually.

A caveat: Some websites and services do not work when VPN is enabled.

Parental controls

The Norton 360 also lets parents set up controls to monitor and manage the online activities of their children. Once set up, the Norton Parental Control lets parents see what videos their kids are watching, the websites they visit, the terms they search for, and the apps they download. Besides, it also provides GPS location service for Android and iOS devices, content filtering for PCs and more.

Verdict

At Rs 3,999 for the yearly Deluxe subscription, the Norton 360 is a comprehensive solution that goes beyond safeguarding from conventional threats. It is a complete package that protects your devices, including smartphone, while also providing additional tools to make your work smooth. Moreover, the addition of VPN and cloud storage for backups makes it particularly useful for someone looking for a security solution.


Neutrino Energy Will Power The Future’s Internet Consumption – Forbes India

As an increasing number of students and employees accomplish daily tasks from home, this additional internet usage has combined with gaming and streaming needs to bring about the most significant IT energy crisis of modern times. While it's easy to forget, keeping websites like Google, Facebook, and Netflix online requires an incredible amount of electrical energy, and the Neutrino Energy Group proposes that neutrino-derived electricity could be used to push server technology away from fossil fuels and toward true sustainability.

Does IT Electrical Demand Damn Sustainable Energy to Fringe Applications?

According to IT writer Mark Mills, the recent global lockdown clearly illustrates how our digital energy use is directly tied to future economic health. While gasoline consumption went down nearly 30 percent during the first week of lockdown in the United States, for instance, overall electrical consumption only decreased by seven percent. These figures actually indicate increased overall energy use across all sources, demonstrating how our ability to fuel online interactions will determine the viability of global technological society over the next few decades.

At present, fossil fuels are the only energy technology capable of sustaining our combined internet needs. While existing renewable energy technologies can take some of the burden away from oil and coal, making a complete switch to renewables at this stage would cause the internet to collapse around the world.

According to Mills, it is still "prohibitively expensive" to produce reliable energy from solar and wind farms since these energy technologies depend on specific environmental conditions to operate. What this seasoned tech author fails to point out, however, is the significant decrease in voltage that occurs when electricity is transported from sustainable energy farms to the server banks that operate the global cloud.

On-Site Renewable Energy Generation Is Required

In 2015, two of the world's most prominent energy physics researchers independently discovered that neutrinos have mass, making these ethereal particles serious targets of sustainable energy research. Unlike wind or solar farms, neutrino-based energy technologies can operate anywhere, anytime, regardless of environmental conditions.

The neutrinovoltaic technology developed by the Neutrino Energy Group derives electrical energy from neutrinos, which are invisible and bombard the Earth in roughly equal numbers every moment of every day, instead of drawing energy from the visible spectrum of light. The technology harvests a small amount of the kinetic energy of neutrinos as they pass through everything we see, and this kinetic energy is then transformed into electricity.

Solar cells cannot be stacked on top of each other, since they only operate when they are unblocked from the sun's rays. Neutrinovoltaic cells do not suffer from this design flaw, which means they can be stacked on top of each other, with the bottom cells generating just as much electrical power as the cells on top. Neutrino Inside devices could operate mere centimeters away from server banks, eliminating voltage-over-distance energy decreases.
In fact, Holger Thorsten Schubart, founder of the Neutrino Energy Group, even proposes that Neutrino Energy could be modified to fit inside existing technologies like cloud servers.

Support the Neutrino Energy Group to Keep the Internet Running

While the present crisis spared the internet, we can't say the same for future economic catastrophes. The best way to preserve our ability to go online is to invest in renewable energy technologies, and neutrino energy is the only electricity-generating technology that can operate close to or inside of server banks. Support the Neutrino Energy Group to defend our IT infrastructure from any crisis the future may bring.

For more information: https://neutrino-energy.com

Disclaimer: The views, suggestions and opinions expressed here are the sole responsibility of the experts. No Forbes India journalist was involved in the writing and production of this article.
