Category Archives: Cloud Servers

To Protect Consumer Data, Don’t Do Everything on the Cloud – Harvard Business Review

When collecting consumer data, there is almost always a risk to consumer privacy. Sensitive information could be leaked unintentionally or breached by bad actors. For example, the Equifax data breach of 2017 compromised the personal information of 143 million U.S. consumers. Smaller breaches, which you may or may not hear about, happen all the time. As companies collect more data and rely more heavily on its insights, the potential for data to be compromised will likely only grow.

With the appropriate data architecture and processes, however, these risks can be substantially mitigated by ensuring that private data is touched at as few points as possible. Specifically, companies should consider the potential of what is known as edge computing. Under this paradigm, computations are performed not in the cloud, but on devices that are on the edge of the network, close to where the data are generated. For example, the computations that make Apple's Face ID work happen right on your iPhone. As researchers who study privacy in the context of business, computer science, and statistics, we think this approach is sensible and should be used more widely: edge computing minimizes the transmission and retention of sensitive information to the cloud, lowering the risk that it could land in the wrong hands.

But how does this tech actually work, and how can companies that don't have Apple-sized resources deploy it?

Consider a hypothetical wine store that wants to capture the faces of consumers sampling a new wine to measure how they like it. The store's owners are picking between two competing video technologies: The first system captures hours of video, sends the data to third-party servers, saves the content to a database, processes the footage using facial analysis algorithms, and reports the insight that 80% of consumers looked happy upon tasting the new wine. The second system runs facial analysis algorithms on the camera itself, does not store or transmit any video footage, and reports the same 80% aggregated insight to the wine retailer.

The second system uses edge computing to restrict the number of points at which private data are touched by humans, servers, databases, or interfaces. Therefore, it reduces the chances of a data breach or future unauthorized use. It only gathers sufficient data to make a business decision: Should the wine retailer invest in advertising the new wine?
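The contrast between the two systems can be sketched in a few lines. Below is a minimal, hypothetical illustration (the `classify_expression` function is a stand-in for whatever on-device facial analysis model the camera runs): the edge device updates a running tally for each frame and then lets the frame go, so only the aggregate ever leaves the device.

```python
from dataclasses import dataclass

@dataclass
class EdgeSentimentCounter:
    """Runs on the camera: keeps only an aggregate tally, never raw frames."""
    happy: int = 0
    total: int = 0

    def process_frame(self, frame, classify_expression):
        # classify_expression stands in for an on-device facial analysis
        # model returning "happy" or "not_happy" for a single frame.
        label = classify_expression(frame)
        self.total += 1
        if label == "happy":
            self.happy += 1
        # The frame goes out of scope here -- nothing is stored or transmitted.

    def report(self):
        # Only this aggregate percentage ever leaves the device.
        return round(100 * self.happy / self.total) if self.total else None
```

The design choice is that raw footage never crosses the object boundary: the only state the device retains is two integers, which is exactly the 80% insight the retailer needs.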

As companies work to protect their customers' privacy, they will face situations similar to the one above. And in many cases, there will be an edge computing solution. Here's what they need to know.

In 1980, the Organization for Economic Cooperation and Development, an international forum of 38 countries, established guidelines for the protection of privacy and trans-border flows of personal data for its member countries with the goal of harmonizing national privacy legislation. These guidelines, which were based on principles such as purpose limitation and data minimization, evolved into recent data-privacy legislation such as the General Data Protection Regulation (GDPR) in Europe and the California Consumer Privacy Act (CCPA), both introduced in 2018.

The rise of edge computing helps organizations meet the privacy guidelines above by implementing three critical design choices. The design choices begin with how to think about data collection and extend to the actual data processing. They are:

A mindful data architecture should collect and retain only the must-have information. Data-collection approaches should be designed and implemented around the desired insights (in other words, their purpose should be limited), thus reducing the number of variables and people tracked and ensuring the minimum amount of data is collected.

In some ways, this is an old idea: In 1922, the groundbreaking British statistician R.A. Fisher developed the statistical theory of the sufficient statistic, which captures all the information required for the desired insight. (E.g., 80% of consumers looked happy upon tasting the new wine.) Minimal sufficiency goes a step further by capturing the sufficient information required for an insight as efficiently as possible. Translated loosely, the wine retailer may use an edge device to perform facial analysis on fewer consumers (a smaller sample) to reach the same 80% insight.
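Fisher's idea can be made concrete with the wine example. For yes/no reactions, the count of happy faces together with the sample size is a sufficient statistic: the two numbers below carry everything the raw per-person list does about the underlying happiness rate. (The data here are simulated purely for illustration.)

```python
import random

random.seed(0)
# Simulated per-consumer reactions: 1 = looked happy, 0 = did not.
reactions = [1] * 80 + [0] * 20
random.shuffle(reactions)

# For Bernoulli outcomes, the pair (k, n) is sufficient: any two datasets
# with the same count k and size n support identical inference about the
# happiness rate, so an edge device only needs to keep these two numbers.
n, k = len(reactions), sum(reactions)
print(f"{100 * k // n}% of consumers looked happy")
```

Once `(k, n)` is computed, the individual-level list can be discarded with no loss of information about the rate, which is precisely the property edge devices exploit.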

For many business decisions, we don't need insights on the individual level. Summarizing the information at a group level retains most of the necessary insights while minimizing the risk of compromising private data. Such non-personal data is often not subject to data protection legislation, such as the GDPR or the CCPA.

When it is critical to obtain insights at a personal level, the data may be altered to hide the individual's identity while minimally impacting the accuracy of insights. For instance, Apple uses a technique called local differential privacy to add statistical noise to any information that is shared by a user's device, so Apple cannot reproduce the true data. In some situations, alteration of individual data is legally mandated, such as in clinical studies. Techniques may include pseudo-anonymization and go as far as generating synthetic data.
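Randomized response is the textbook form of local differential privacy (Apple's production mechanism is more elaborate, so treat this as an illustrative sketch, with the 0.75 truth probability chosen arbitrarily): each device flips coins locally before reporting, and the aggregator debiases the noisy total without ever seeing any individual's true answer.

```python
import random

def randomized_response(truth: bool, p_truth: float = 0.75) -> bool:
    """With probability p_truth report the true answer; otherwise report
    a fair coin flip. The noise is added on the device, so the server
    never receives raw data."""
    if random.random() < p_truth:
        return truth
    return random.random() < 0.5

def debias(reported_rate: float, p_truth: float = 0.75) -> float:
    # E[reported] = p_truth * true_rate + (1 - p_truth) * 0.5, so the
    # aggregator can recover the population rate from the noisy reports
    # without learning any individual's true answer.
    return (reported_rate - (1 - p_truth) * 0.5) / p_truth
```

Any single report is deniable (it may be a coin flip), yet across many users the debiased aggregate converges to the true population rate.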

Knowing when to apply data-processing tools is as critical as using the right tools. Applying sufficiency, aggregation, and alteration during data collection maximizes protection while retaining the most useful information. This approach can also reduce the costs of cyber insurance and of compliance with data-protection regulations, and enable more scalable infrastructure.

Restricting private data collection and processing to the edge is not without its downsides. Companies will not have all their consumer data available to go back and re-run new types of analyses when business objectives change. However, this is exactly the situation we advocate against in order to protect consumer privacy.

Information and privacy operate in a tradeoff: a unit increase in privacy requires some loss of information. By prioritizing data utility with purposeful insights, edge computing reduces the quantity of information from a data lake to the sufficient data necessary to make the same business decision. This emphasis on finding the most useful data over keeping heaps of raw information increases consumer privacy.

The design choices that support this approach (sufficiency, aggregation, and alteration) apply to structured data, such as names, emails, or number of units sold, and to unstructured data, such as images, videos, audio, and text. To illustrate, let us assume the retailer in our wine-tasting example receives consumer input via video, audio, and text.

If the goal of the wine retailer is to understand consumer reactions broken down by demographic groups, there is no need to identify individual consumers via facial recognition or to maintain a biometric database. One might wonder: aren't the pictures that contain people's faces private data? Indeed, they are. And this is where edge computing allows the video feed to be analyzed locally (namely, on the camera) without ever being stored permanently or transmitted anywhere. AI models are trained to extract in real time the required information, such as positive sentiment and demographics, and discard everything else. That is an example of sufficiency and aggregation employed during data collection.

In our wine-tasting setting, an audio analysis may distinguish between when speech occurs versus silence or background music. It may also reveal the age of the person speaking, their emotions, and energy levels. Are people more excited after tasting the new wine? AI models can understand the overall energy of the speaker without knowing what was said. They analyze inflections and intonations in the voice to reveal an individual's state of mind. Sufficiency is built into the classifications (i.e., the output) of the AI technology by default. Running these models on the edge and summarizing results by demographic group also achieves data aggregation.

Our wine retailer can use consumer textual feedback about the new wine not only to understand whether consumers are satisfied but also, equally importantly, to learn the words consumers use to describe the taste and feel of the new wine. This information is invaluable input into the development of advertising. In this analysis, the data do not need to be tied to specific consumers. Instead, textual comments are aggregated across consumers, and the relative frequencies of taste and feeling keywords for each wine type are sent to the wine retailer. Alternatively, if insights are desired on the personal level, textual feedback can be altered synthetically using Natural Language Generation (NLG) models.
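Aggregating keyword frequencies without per-consumer attribution might look like the following sketch (the comments and wine descriptors are invented for illustration):

```python
from collections import Counter

# Invented consumer comments about the new wine, for illustration only.
comments = [
    "smooth and fruity, loved it",
    "a bit dry but smooth finish",
    "fruity nose, dry palate",
]
TASTE_KEYWORDS = {"smooth", "fruity", "dry", "oaky"}

# Pool all comments before counting, so keyword frequencies are computed
# across consumers and are never tied to any individual.
words = " ".join(comments).replace(",", " ").split()
counts = Counter(w for w in words if w in TASTE_KEYWORDS)
total = sum(counts.values())
frequencies = {w: round(c / total, 2) for w, c in counts.items()}
# Only `frequencies` is sent to the retailer.
```

The retailer receives the vocabulary it needs for advertising copy, while the mapping from comment to consumer is destroyed at collection time.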

In the examples above, the Sufficiency-Aggregation-Alteration design choices enhance privacy. These ideas are also relevant to applications and data types as far-ranging as unlocking your phone, evaluating your health with smart devices, and creating better experiences. Paradoxically, the mindful use of edge computing and AI, which often scares people, is critical for maximizing privacy protection. Privacy advocates also promote the idea of consumers owning and controlling their personal data via a Customer Data Platform (CDP). A data architecture that links the CDP to an edge device (think of voice-activated home assistants) can further increase consumer trust by providing consumers with complete control and transparency over their data.

This framework is only a partial solution to concerns about privacy, however, and should be deployed alongside other beneficial practices such as data encryption, minimized access privileges, and data-retention policies. Encryption should be employed both when data are stored permanently and while they are in transit. That is an essential first step to minimize unauthorized access because it converts the dataset into a black box. Without a key, the black box has no value. Likewise, limiting data access to a need-to-know basis, having clear policies for data retention, and providing opt-out mechanisms reduce the risk of data leaks. Even though these steps are standard practice, not everyone employs them, creating many more touchpoints where private data breaches can occur. Be a good manager and check with your IT team and third-party vendors.

***

Privacy is a social choice, and leadership teams should prioritize data utility. Many companies have been collecting as much data as possible and deciding later what is useful versus not. They are implicitly trading away consumer privacy for the most information. We advocate a more disciplined approach wherein the uses of the data are specified upfront to guide both the collection and retention of data. Furthermore, technology has offered us all the tools we need to safeguard privacy without impacting business intelligence. By leveraging edge computing and AI technologies, companies may apply the design choices of sufficiency, aggregation, and alteration at the data collection stage. With a carefully designed architecture, we may obtain the desired insights and secure the privacy of consumers' data at the same time. Contrary to conventional wisdom, we can have our (privacy) cake and eat it too.

You can hijack Google Cloud VMs using DHCP floods, says this guy, once the stars are aligned and… – The Register

Google Compute Engine virtual machines can be hijacked and made to hand over root shell access via a cunning DHCP attack, according to security researcher Imre Rad.

Though the weakness remains unpatched, there are some mitigating factors that diminish the potential risk. Overall, it's a pretty neat hack if a tad impractical: it's an Ocean's Eleven of exploitation that you may find interesting from a network security point of view.

In a write-up on GitHub, Rad explains that attackers can take over GCE VMs because they rely on ISC DHCP software that uses a weak random number generator.

A successful attack involves overloading a victim's VM with DHCP traffic so that it ends up using a rogue attacker-controlled metadata server, which can be on the same network or on the other side of the internet. The DHCP flood would typically come from a neighboring attacker-controlled system hosted within Google Cloud.

When the technique is pulled off just right, the VM uses the rogue metadata server for its configuration instead of an official Google one, and ultimately the miscreant can log into the VM via SSH as the root user.

ISC's implementation of the DHCP client, according to Rad, relies on three things to generate a random identifier: the Unix time when the process is started; the PID of the dhclient process; and the sum of the last four bytes of the Ethernet addresses (MAC) of the machine's network interface cards. This random number, XID, is used by the client to track its communications with Google's DHCP servers.

So the idea is to hit the victim VM with a stream of DHCP packets, with a best guess for the XID, until the dhclient accepts them over Google's legit DHCP server packets, at which point you can configure the network stack on the victim VM to use the rogue metadata server by aliasing Google server hostnames.

Two of these XID ingredients, Rad says, are predictable. The last four bytes of the MAC address are the same as the internal IP address of the box. And the PID gets assigned by the Linux kernel in a linear way.
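That predictability is what makes the flood practical. The sketch below (with an illustrative mixing formula; ISC dhclient's actual combination of the three inputs differs, so treat the arithmetic as an assumption) shows how small the candidate XID space becomes once the attacker knows the victim's internal IP and can bound the dhclient start time and PID:

```python
import time

def candidate_xids(internal_ip: str, pid_guesses, now=None, clock_skew=2):
    """Enumerate plausible XIDs, per Rad's description: the XID derives
    from the dhclient start time, its PID, and the last four bytes of
    the MAC address -- which on GCE equal the VM's internal IP. The way
    the three values are mixed here is a simplification; the point is
    how small the search space is."""
    now = int(now if now is not None else time.time())
    mac_sum = sum(int(octet) for octet in internal_ip.split("."))
    candidates = set()
    for t in range(now - clock_skew, now + clock_skew + 1):
        for pid in pid_guesses:  # PIDs are assigned linearly by the kernel
            candidates.add((t + pid + mac_sum) & 0xFFFFFFFF)
    return candidates
```

With a few seconds of clock uncertainty and a narrow PID window, the attacker needs only tens of guesses per flood, which is why spraying precomputed XIDs can outrace the legitimate DHCP server.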

"To mount this attack, the attacker needs to craft multiple DHCP packets using a set of precalculated/suspected XIDs and flood the victim's dhclient directly," explains Rad.

"If the XID is correct, the victim machine applies the network configuration. This is a race condition, but since the flood is fast and exhaustive, the metadata server has no real chance to win."

Crafting the correct XID in a flood of DHCP packets is made easier by the insufficient randomization scheme. Doing so allows the attacker to reconfigure the target's network stack at will.

According to Rad, Google relies on its metadata servers to handle the distribution of SSH keys. By impersonating a metadata server, SSH access can be granted to the attacker.

Rad's technique is based on an attack disclosed last year by security researcher Chris Moberly, but differs in that the DHCP flooding is done remotely and the XIDs are guessed.

In the three attack scenarios devised by Rad, two require the attacker to be on the same subnet as the target VM to send the flood of DHCP traffic. In one scenario, the victim VM needs to be rebooting, and in the other, it is refreshing its DHCP lease. The third allows for a remote attack over the internet but requires the firewall in front of the target VM to be fully open.

Rad concedes this third case is "probably not a common scenario" but notes that GCP Cloud console provides that option and speculates there are likely to be VMs with that configuration.

Suggested defense techniques include not referring to the metadata server using its virtual hostname (metadata.google.internal), not managing the virtual hostname via DHCP, securing metadata server communication using TLS, and blocking UDP on ports 67/68 between VMs.

Google was said to be informed of this issue back in September 2020. After nine months of inaction, Rad published his advisory. The Chocolate Factory did not immediately respond to a request for comment. We imagine Google Cloud may have some defenses in place, such as detection of weird DHCP traffic, for one.

Speaking of Google, its security expert Felix Wilhelm found a guest-to-host escape bug in the Linux KVM hypervisor code for AMD Epyc processors that was present in kernel versions 5.10 to 5.11, when it was spotted and patched.

Google has over 8 million terabytes of iCloud data on its servers, report claims – The Apple Post

Apple is reportedly storing 8 million terabytes of iCloud customer data on third-party servers owned by Google, according to a report from The Information.

In an article published on Tuesday citing a person with direct knowledge of the matter, reporters Amir Efrati and Kevin McLaughlin claim that Apple's spending on Google's storage services is on track to double this year, with Apple continuing to rely heavily on Google to keep up with the strong customer demand for its iCloud services.

The report claims Apple is Google Cloud's biggest client, with Google codenaming Apple "Bigfoot" due to the large quantities of data stored by the company across Google's servers, storage that has cost Apple around $300 million this year alone.

Despite relying heavily on third-party storage services, Apple itself owns several data centres that serve users of iMessage, Siri, the App Store and other Apple services. With external cloud providers, all servers are encrypted using keys owned exclusively by Apple, so that private data is only accessible by the user, making it fairly irrelevant from a customer's point of view which server stores their data.

Alongside Google, Apple also relies on Amazon Web Services for cloud storage.

HPE GreenLake: The HPC cloud that comes to you – The Register

Sponsored By its very nature, high performance computing is an expensive proposition compared to other kinds of computing. Scale and speed cost money, and that is never going to change. But that doesn't mean that you have to pay for HPC all at once, or even own it at all.

And it doesn't necessarily mean that you need to pay for a bunch of cluster experts to manage a complex system, either; that expertise can be a significant part of the overall cost of an HPC system, and is often a lot harder to find than getting an HPC system through the budgeting process.

Traditionally, organizations have signed leasing or financing agreements to cushion the blow of a big capital outlay required to build an HPC cluster, which includes servers (usually some with GPU acceleration these days), high speed and capacious storage, and fast networking to link it all together.

However, the same kind of pay-per-use, self-service, scalability, and simplified IT operations that comes with the cloud is, thankfully, available on-premises for HPC systems through the HPE GreenLake for HPC offering, which previewed in December 2020 and which will be in selected availability in June, with general availability coming shortly thereafter.

There is much more to HPE GreenLake than a superior cloud-like pricing scheme. But after getting a preview of HPE GreenLake for HPC, we boiled it all down to this: HPE GreenLake is like having a local cloud on your premises for running IT infrastructure that is owned by HPE, managed by HPE's experts and the substantial automation it has developed, and used by you. In this case we are focusing on traditional HPC simulation and modeling, machine learning and other forms of AI, and the data analytics that are commonly called high performance computing these days.

Ahead of the HPE Discover conference at the end of June 2021, Don Randall, worldwide marketing manager of the HPE GreenLake as-a-service offerings, gave us a preview of what the full HPE GreenLake for HPC service will look like and some hints about how it will be improved over time.

HPE has sold products under the HPE GreenLake as-a-service model for a dozen years now, and it has some substantial customers in the HPC arena using earlier versions of the service, including Italian energy company Ente Nazionale Idrocarburi (ENI) and German industrial manufacturer Siemens, which has HPE GreenLake for HPC systems in use in 20 different locations around the globe.

With the update this year, HPE is adding the GreenLake Central master console on top of its as-a-service offering, and is integrating telemetry from on-prem clusters, and usage and cost data from public clouds such as Amazon Web Services and Microsoft Azure that will allow GreenLake shops to see the totality of their on-premises GreenLake infrastructure alongside the public cloud capacity they use. This is all cloudy infrastructure, after all, and as Randall explains, HPE absolutely expects and wants for customers to use the public clouds opportunistically when it is appropriate.

The earlier versions of HPE GreenLake for HPC lacked the automated firmware and software patching capabilities that HPE is rolling out this year, and the fit and finish has improved considerably, too, according to Randall. And there are plans to add more features and functions to GreenLake for HPC in the coming months and years, some of which HPE is willing to hint about now.

HPE GreenLake has been evolving over the years, and adding features to expand support for HPC customers, for good reason. Despite a decade and a half of cloud computing, 83 percent of HPC implementations are outside of the public cloud, according to Hyperion Research. And they are staying on-premises for good, sound reasons. HPC is, by definition, not the general-purpose computing, networking, and storage that is typically deployed in an enterprise datacenter. Compute is often denser and hotter, networks are heftier, and storage is bigger and faster; the scale is generally larger than what is seen for other kinds of systems in the enterprise.

And moving to the public cloud presents its own issues, including latency between users and the closest public cloud regions, and data gravity: the size of datasets makes it very hard and expensive to move data off the public cloud once it has been placed there. And then there is the issue of application entanglement. In some cases, applications are so intertwined that they can't be moved piecemeal to the cloud, so you end up in an all-or-nothing situation. Moreover, for latency reasons, HPC applications want to be near HPC data, so you can't break things apart that way, either, with data in the cloud and apps on premises, or vice versa, without paying some latency and cost penalties.

HPE GreenLake for HPC is meant to solve all of these issues, and more.

"We have got a ton of things that are putting us way out in the lead," says Randall. "We have the expertise to design, integrate, and deliver HPC setups globally, we are number one in HPC, and we have people who are really, really sharp. HPE has invented a lot of HPC technology or acquired it, and we have a services model that we have been refining and that is well ahead of what other IT companies and public clouds are doing." That services model is a key differentiator for HPE GreenLake for HPC, according to Randall. HPE puts more iron on the floor for an HPC system than the customer is using, so this excess capacity is ready to use when it is needed.

Self-service provisioning of compute, storage, and networks is done through the HPE GreenLake Central console, and the entire HPC stack (clusters, operating software, and so on) is managed by HPE experts from one of a dozen centers around the world. Customers operate the clusters with self-service capabilities in HPE GreenLake Central to manage queues, jobs, and output. HPE GreenLake for HPC gets HPC centers out of the business of maintaining the hardware and software of those clusters, and while HPE is not offering application management services, it does have a widget with self-service capabilities that will snap into the GreenLake Central console, and it will entertain managing the HPC applications themselves under a separate contract if customers really want this.

The initial GreenLake for HPC stack was built on the Singularity HPC-specific Kubernetes platform, and over time it may evolve to use the HPE Ezmeral Kubernetes container platform. Initially, HPE GreenLake for HPC included HPE's Apollo servers and storage, plus standard storage and Aruba interconnects, but it now includes Lustre parallel file systems, a homegrown HPE cluster manager, and the industry-standard Slurm job scheduler, as well as Ethernet networks with RDMA acceleration. The HPE Slingshot variant of Ethernet tuned for HPC and IBM's Spectrum Scale (formerly known as General Parallel File System, or GPFS) parallel storage will be added in the future. HPE Cray EX compute systems will also be available under the HPE GreenLake for HPC offering, as will other parallel file systems that are up and coming in the HPC arena.

HPE started out with a focus on clusters to run computer-aided engineering applications (with a heavy emphasis on ANSYS), but is expanding its HPE GreenLake cluster designs so they are tuned for financial services, molecular dynamics, electronic design automation, and computational fluid dynamics workloads, and it has an eye on peddling GreenLake for HPC to customers doing seismic analysis, weather forecasting, and financial services risk management. The scale of the machines offered under GreenLake for HPC will be growing, too, and Randall says that HPE will absolutely sell exascale-class HPC systems to customers under the GreenLake model.

We have a feeling that the vast majority of exascale-class systems could end up being sold this way, given the benefits of the HPE GreenLake approach. Imagine if all of the firmware in the systems were updated automagically, and ditto for the entire HPC software stack. Imagine proactive maintenance and replacement of parts before they fail.

Imagine not trying to hire technical staff to design, build, and maintain a cluster, and getting the kind of cloud experience that people have come to expect without having to go all-in on one of the public clouds and make do with whatever compute, storage, and networking they have to offer which may or may not be what you need in your HPC system for your specific HPC workloads. Imagine keeping the experience of an on-premises cluster, but having variable capacity inherent in the system that you can turn on and off with the click of a mouse? This is what HPE GreenLake for HPC can do, and it is going to change the way that companies consume HPC.

Sponsored by HPE

Application Server Market Size to Touch USD 28.11 Billion by 2025 at 12.06% CAGR by 2025 – GlobeNewswire

New York, US, June 29, 2021 (GLOBE NEWSWIRE) -- Market Overview: According to a comprehensive research report by Market Research Future (MRFR), "Global Application Server Market information by Application Type, by Deployment, by Vertical and Region Forecast to 2027", the market was valued at USD 12.95 Billion in 2018 and is projected to reach USD 28.11 Billion by 2025 at a CAGR of 12.06%.

Application Server Market Scope: An application server is a form of platform middleware and is mostly used for cloud applications, mobile devices, and tablets. It is system software that lies between the operating system and external resources like a database management system (DBMS), user applications, and communications and internet services. It acts as a host for the business logic of the user while facilitating access to, and performance of, the business application. It serves the basic business needs of the application regardless of the traffic and variability of client requests, software and hardware failures, the distributed nature of larger-scale applications, and potential heterogeneity of processing resources and data. The server also supports multiple application design patterns suited to the nature of the business application and the practices of the specific industry for which the application is designed, and it supports multiple programming languages as well as deployment platforms.

Dominant Key Players on Application Server Market Covered Are:

Get Free Sample PDF Brochure: https://www.marketresearchfuture.com/sample_request/8634

Market USP Exclusively Encompassed: Market Drivers
According to the MRFR report, numerous factors are propelling the global application server market share. Some of these entail the growing use of mobile and computer-based internet applications; rapid developments in mobile device systems and wireless networks; increasing adoption of m-commerce and e-commerce applications; growing adoption of IoT technology and cloud platforms; an increasing need for portable software and high-end interfaces among various organizations; rising sophisticated applications which support data management; the development of advanced application tools; a growing need among numerous organizations to support legacy applications, database integration, and systems with advanced technologies; and alluring application server features that play a pivotal role in enterprise application integration and business-to-business integration (B2Bi). Additional factors adding to market growth include rising digitalization among multiple sectors; growing awareness of the benefits of application servers, such as integrity of data, centralized control of access and resources, improved performance of large applications on the client-server model, and centralization of business logic in a single server unit; an increasing need to manage growing data traffic on the network to optimize the overall performance of application service delivery at the network level; and the rising integration of emerging technologies like the internet of things, with wide applications for supporting and running easy, presentative, and smart graphical user interfaces in smartphones.

On the contrary, high maintenance cost related to application servers, and rise in complexity of large application integration may limit the global application server market growth over the forecast period.

Browse In-depth Market Research Report (111 Pages) on Application Server:https://www.marketresearchfuture.com/reports/application-server-market-8634

Segmentation of Market covered in the research: The MRFR report highlights an inclusive analysis of the global application server market based on application type, deployment, and vertical.

By application, the global application server market is segmented into mobile applications and web applications. Of these, the mobile applications segment will lead the market over the forecast period.

By deployment, the global application server market is segmented into on-premise and on-cloud. Of these, the on-cloud segment will dominate the market over the forecast period.

By vertical, the global application server market is segmented into retail, manufacturing, telecommunication and IT, education, healthcare, government, BFSI, and others. Of these, the manufacturing segment will spearhead the market over the forecast period.

Share your Queries:https://www.marketresearchfuture.com/enquiry/8634

Regional Analysis: North America to Have Lion's Share in Application Server Market
Geographically, the global application server market is bifurcated into Europe, North America, the Asia Pacific, and the Rest of the World (RoW). Of these, North America will have the lion's share in the market over the forecast period. The presence of a well-established network infrastructure, the presence of several key players, market players making heavy investments in R&D activities to create application servers with advanced capabilities, early adoption of technology, increasing smartphone penetration, the presence of well-established businesses, rising penetration of mobile communication devices, and increasing adoption in the US are adding to the global application server market growth in the region.

Europe to Hold Second-Largest Share in Application Server Market
In Europe, the global application server market is predicted to hold the second-largest share over the forecast period. Rising initiatives undertaken by European governments to adopt cloud-based m-commerce and e-commerce, and rising advances in different technologies for high application-services use among various business sectors, are adding to the global application server market growth in the region.

APAC to See Admirable Growth in the Application Server Market. In the APAC region, the global application server market is predicted to grow admirably over the forecast period. Rising demand for m-commerce and e-commerce applications, growing adoption of IoT technology, the availability of skilled experts, growth of the service and manufacturing sectors, rising smartphone penetration, development of high-speed wireless internet infrastructure, advances in cloud computing and networking technologies, a rise in IT service and software providers in China and India, and a growing number of service-based and technology startups are adding to the market's growth in the region.

RoW to See Sound Growth in the Application Server Market. In RoW, the application server market is predicted to grow soundly over the forecast period. Growing penetration of smartphones and internet-based services, increasing industrialization, and rising awareness of application servers across different industries are adding to the market's growth in the region.

To Buy: https://www.marketresearchfuture.com/checkout?currency=one_user-USD&report_id=8634

COVID-19 Impact on the Global Application Server Market: The ongoing COVID-19 pandemic has cast its shadow on the global application server market. The immediate and long-term effects of the crisis, fluctuations in demand, supply chain disruptions, and the economic consequences of the pandemic have all weighed on the market's growth.

About Market Research Future: Market Research Future (MRFR) is a global market research company that takes pride in its services, offering complete and accurate analysis of diverse markets and consumers worldwide. MRFR has the distinguished objective of providing optimal-quality, granular research to clients. Our market research studies, segmented by products, services, technologies, applications, end users, and market players for global, regional, and country-level markets, enable our clients to see more, know more, and do more, helping answer their most important questions.

Follow Us: LinkedIn | Twitter

The rest is here:
Application Server Market Size to Touch USD 28.11 Billion by 2025 at 12.06% CAGR by 2025 - GlobeNewswire

The global master patient index software market was valued at US$ 776.36 million in 2020 – Yahoo Finance

and it is projected to reach US$ 1,678.86 million by 2028; it is expected to grow at a CAGR of 10.12% during 2020-2028. A significant shift toward paperless data management has enabled various healthcare players to adopt cloud-based technologies.

New York, June 25, 2021 (GLOBE NEWSWIRE) -- Reportlinker.com announces the release of the report "Master Patient Index Software Market Forecast to 2028 - COVID-19 Impact and Global Analysis By Type and Deployment, and Geography": https://www.reportlinker.com/p06099482/?utm_source=GNW

Also, the decrease in the cost of cloud-based technologies, improvements in flexibility and security, and low maintenance requirements and costs are further propelling the adoption of these technologies among healthcare organizations.

These advantages help providers deliver high-quality services and personal care to patients. Cloud-based technologies have eliminated interoperability issues while enabling easy data integration within a healthcare organization. As the huge volumes of healthcare data are organized and saved on cloud servers, processing that data has become feasible for healthcare professionals.

Cloud-based technologies allow healthcare professionals to operate in tandem with different departments, institutions, healthcare service providers, and consumers. Furthermore, technological advancements are supporting the integration of artificial intelligence (AI) and machine learning into patient management, helping users manage healthcare operations and massive volumes of data.

Thus, the adoption of cloud-based technologies boosts the demand for master patient index software to streamline and simplify the patient data management process.

Based on type, the master patient index software market is segmented into software and service. In 2020, the software segment held the larger share of the market and is estimated to grow at a significant CAGR during the forecast period.

Growing advancements in new and existing master patient index software are likely to drive its adoption in healthcare systems, boosting market growth during the forecast period.

Based on deployment, the master patient index software market is segmented into cloud-based and on-premises. In 2020, the cloud-based segment held the larger share of the market and is expected to grow at a faster rate in the coming years.

Major primary and secondary sources referred to while preparing the report on the master patient index software market include the Dubai Health Authority, National Health Service, Community Health Index, and World Health Organization.

Read the full report: https://www.reportlinker.com/p06099482/?utm_source=GNW

About Reportlinker: ReportLinker is an award-winning market research solution. Reportlinker finds and organizes the latest industry data so you get all the market research you need - instantly, in one place.


Read more from the original source:
The global master patient index software market was valued at US$ 776.36 million in 2020 - Yahoo Finance

Why an industrial giant known for dishwashers sees its future in electric vehicles, hydrogen, 5G and the cloud – MarketWatch

Siemens is a 174-year-old industrial giant that, once upon a time, made dishwashers, and maybe that's the image the brand still summons.

But its new chief executive has a plan to turn the German group into a focused technology company: a blue-chip European stock that investors can buy to play growth in the likes of electric vehicles, lithium-ion batteries, hydrogen power, 5G, cybersecurity, and cloud computing.

In the first capital markets day under Roland Busch's leadership, Siemens SIE, +1.24% announced that next year it will change the business model of its fastest-growing and most profitable division, digital industries, to focus on software sold on a recurring, subscription basis.

The move toward software-as-a-service will be accompanied by a new financial reporting metric describing annual recurring revenue, giving investors and analysts new insight into the profitability of the software business.

A change like that could drive a massive rerating of Siemens stock, by 20% to 25%, according to analyst Philip Buller of investment bank Berenberg. Until now, the software division has been buried within the industrial company, obscuring the higher valuation that software groups typically fetch compared with industrial stocks.

It's a big shift, but change isn't new to Siemens. The group built Europe's first long-distance telegraph line in 1848, just a year after it was founded in a back courtyard in Berlin. By 1900, it had laid more than half of the undersea cables crossing the Atlantic Ocean.

Also: Spotify's CEO and Goldman Sachs have both invested in this high-tech Tesla rival

In the next 100 years it grew into a sprawling conglomerate, variously touching everything from household appliances to power generation, telecommunications to trains, and health technology to heavy industry.

Ahead of the capital markets day, Busch told MarketWatch how the industrial giant is pushing toward a new era of growth and acquisitions centered on connecting core technologies such as automation, 5G, and cloud computing across its disparate divisions.

Under the leadership of Buschs predecessor, Joe Kaeser, Siemens began the process of transitioning from an aging industrial behemoth to a focused technology company, which included spinning off renewable energy group Siemens Energy in 2020.

For Busch, the challenge is now to build synergies across the business divisions that remain, which cover industry, infrastructure, mobility, and health technology; synergies that analysts say don't quite exist yet.

"Technology is the backbone of our company," Busch said. "Is 5G the future? Is it automation technology? Is it cybersecurity? Is it digital twins? All of that is playing into all of our businesses."

The crown jewel in the groups portfolio is the factory business it calls digital industries. The division builds high-tech factories for the future, supplying automated manufacturing systems and, increasingly, industrial software. It is building next-generation capabilities for automotive brand Mercedes-Benz DAI, -1.37% and scaling up COVID-19 vaccine production for biotech BioNTech BNTX, -0.33%.

Plus: This technology could transform renewable energy. BP and Chevron just invested.

"We are a company with a very strong software portfolio, but we are not a software company," he told MarketWatch. "Unlike any other company in the world, we are able to combine the real and the digital worlds."

Busch is focused especially on automation, driven by advances in 5G applications in the industrial space. This is where most of the group's capital allocation will be, he said. A key technological emphasis is on edge computing: bringing the processing power available in cloud servers, including artificial intelligence, down to the shop floor.

Cybersecurity is another priority in an age when factories and power grids are targeted by hackers, Busch said, and Siemens is developing an unhackable one-way communication chip.

The end market where Busch sees the most opportunity is the automotive sector, where Siemens already has deep roots. Central to that is the explosion of electric vehicles, which are expected to penetrate 100% of the automobile market by 2040, according to analysts at Swiss bank UBS UBS, -0.13%.

"Having the next transformation ahead of us, from combustion engines to electric cars, that requires investments," Busch told MarketWatch, describing the wave of capital expenditure coming in the automotive sector. "We are not delivering the car, we are delivering the manufacturing."

Adjacent to that boom is accelerating demand for the lithium-ion batteries that power electric vehicles. UBS predicts that the required battery-cell supply to meet the increased demand for EVs will result in regional tightness this year and global shortages by 2025.

Also read: Buy these 3 battery stocks to play the electric-vehicle party, but stay away from this company, says UBS

And Siemens has tied itself into battery production. The German company counts among its partners Northvolt, the Swedish battery manufacturer founded by former Tesla TSLA, -1.31% executives and backed by the likes of Goldman Sachs GS, +0.04%, car maker Volkswagen VOW, -0.98%, and Daniel Ek, the chief executive of music-streaming service Spotify SPOT, +0.23%. It has other battery partners in the U.K. and China.

Pharmaceuticals, food, and semiconductor industry software are other sectors ripe for market-share growth, Busch added. The CEO especially noted the opportunities for Siemens to use blockchain, the decentralized ledger technology that underpins crypto assets such as bitcoin, ethereum, and dogecoin, to monitor the integrity of food supply chains.

For its infrastructure business, the focus for the future is on electrification, and installing integrated solar-energy systems in complex networks like those in hospitals and data centers, Busch said.

The company will also hold on to its train and rail network business, Busch confirmed to MarketWatch, after the division was the subject of a failed merger with France's Alstom ALO, +0.75%, blocked by regulators in 2019.

Mobility will be a key part of Siemens, which is looking into hydrogen power as a new type of train propulsion. "There is huge potential in replacing thousands of diesel locomotives with green power," Busch said.

Not bad for a company famous for dishwashers.

See the original post here:
Why an industrial giant known for dishwashers sees its future in electric vehicles, hydrogen, 5G and the cloud - MarketWatch

The present is virtual, the future should be too – The Register

Register Debate Welcome to the latest Register Debate in which writers discuss technology topics, and you, the reader, choose the winning argument. The format is simple: we propose a motion, the arguments for the motion will run this Monday and Wednesday, and the arguments against on Tuesday and Thursday.

During the week you can cast your vote on which side you support using the embedded poll, choosing whether you're in favor or against the motion. The final score will be announced on Friday, revealing whether the for or against argument was most popular. It's up to our writers to convince you to vote for their side.

This week's motion is: Containers will kill virtual machines

And now, today, arguing AGAINST the motion is CHRIS MELLOR, the editor of our enterprise storage sister publication, Blocks & Files...

The history of the data centre is a long drive to efficiency. Bare metal servers waited for I/O to finish before continuing other work, so multi-tasking operating systems were invented to give servers the power to run other tasks while they waited for I/O to complete.

Multi-tasking created demand for more servers, but all too often those machines were tightly coupled to single applications and operating systems, and if they weren't busy, the server was underutilized.

Virtualisation rescued servers from that underutilization and meant organisations could run fewer but bigger physical servers and myriad virtual machines (VMs). Hypervisors could load VMs with different operating systems so that one physical server could run Windows, Unix and Linux environments simultaneously. Each VM was given the resources it needed and everything was rosy - for a while.

Kubernetes is an application like any other. It's better off virtualized.

Then came hyperscale services running on millions of servers, a situation that made it critical to extract every last cycle of server power with as little waste or idle time as possible.

VMs didn't work well at hyperscale. Enter containers and micro-services, which have become the base execution unit for hyperscale services and, more recently, for mainstream software developed using the same techniques employed by hyperscale operations.

So now we have two kinds of data centres used by businesses and other organisations: VM-centric data centres and containerized data centres.

We also have two ways of producing applications.

Its confusing and complex.

What should we do?

One option is to have the public clouds convert to VM-centric operations, but that won't happen because hyperscale operators' resource recovery models need containers. VMs as the core execution unit are too wasteful of IT resources.

Another option is for the on-premises world to convert to microservices, containerize everything and run like the public clouds. But the complexity and expense involved are out of proportion for non-hyperscale operations.

The third choice is to go hybrid, to combine the different on-premises and public cloud worlds under an abstraction layer that presents a unified and coherent environment to run applications.

Brilliant idea. Then the on-premises world could carry on doing what it's doing, running virtual machines on virtualized servers, and the public clouds could carry on running containers.

One problem: where is this abstraction layer?

It already exists. It's called virtualization, because a virtualized server can run containers.

What strange magic is this? The tools that manage containers, like Kubernetes, are applications like any other. They're better off virtualized. Containers themselves share an operating system. Any instance of an OS is better off virtualized.

Further, we don't need containers to have on-premises-to-public cloud application mobility.

Virtual machines are already mobile. VMware, Microsoft and all the big clouds offer VM migration tools and services.

VMware, which dominates the virtual server market, has partnerships that let the VMs it creates run in AWS, Azure, Google Cloud, Oracle Cloud and Alibaba Cloud.

Hyperscale services extracting every last cycle of server power critical

Because VMs are already mobile, we don't need to containerise our applications to enjoy multi-way mobility between public clouds and on-premises data centres. And even if you do decide to develop with containers, they need the resilience, security and manageability that virtual machines afford.

Get with the virtualised server program, container purists. They're mature, reasonable, common sense and low friction.

Cast your vote below. We'll close the poll on Thursday night and publish the final result on Friday. You can track the debate's progress here.


See the rest here:
The present is virtual, the future should be too - The Register

Quantum resilience and the challenges of cloud security – DIGIT.FYI

Even before the pandemic, cloud computing had been recording major growth; in 2019, despite slowing, the biggest cloud providers still grew 31% year on year.

As the industry matured, it was expected the rate of growth would slow towards a plateau. Instead, the pandemic made the cloud an attractive alternative to storing data locally to ensure business continuity for remote workers.

According to Deloitte, the sector grew steadily despite a general economic contraction.

This cloud migration is likely to continue: research from IDC predicted that by the end of this year, 80% of enterprises will be looking to shift operations to the cloud twice as fast as before the pandemic.

Ultimately, the cloud migration has broken down the traditional boundaries of network security: personal devices, public and home Wi-Fi, and access points no longer bound to one secure location have all contributed to creating a perimeter-less security environment.

This requires a new cybersecurity paradigm, as sensitive data is potentially vulnerable when stored on public clouds.

To understand more about building cybersecurity in a perimeter-less world, DIGIT spoke with Dr David Lanc, CEO and Founder of Edinburgh-based data protection company, Ionburst.

Under the old on-premises model, the perimeter was simple: everything within an office or building was safe, and everything outside was suspect. With digital operations moving to the cloud, the result is a system with billions of endpoints.

"The cloud is designed to be open," Lanc says. "The problem is most organisations that want to go there still want all the security and still have everything locked down as they did in their organisational fiefdoms."

"So that becomes a problem, and security that was moved from the on-premises world to the cloud world needs to keep up."

With cloud migration, keeping data protected has become trickier. Lanc identified three key elements to the new cloud security paradigm.

"The first is security," he explains. "That must be non-deterministic: can I make it more unpredictable for the hacker, and turn that asymmetric advantage they have over us against them?"

The second element is privacy. Meeting data protection requirements can prove difficult on a public cloud. For example, if something fails on a cloud provider's server, they switch over to a duplicate of the data on another server.

This raises privacy issues: what happens to the copy when it is no longer needed? How many parties are involved? And where is the data now located?

In an age of data protection legislation, being able to ensure the data is stored securely on an opaque public cloud can be difficult.

"You have to think about privacy: can I make sure that wherever that data is stored, nobody else can survey it?" Lanc says.

The third point is resiliency. "Today, if you lose your data, you're almost automatically going to a backup."

However, depending on the organisation's backup culture, restoring data can be difficult. On the one hand, if data is not backed up often enough, a day, a week, or even a month's worth of data could be lost. On the other, if backups are taken too often, unwanted data such as ransomware could be saved into the backup and the data corrupted.
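The trade-off above can be made concrete: the backup interval bounds the worst-case data loss, while any backup taken after a ransomware infection may itself be contaminated, pushing the usable restore point further back. A toy Python sketch (all names and numbers hypothetical, not any vendor's API):

```python
def last_clean_backup(backup_times, infection_time):
    """Return the most recent backup taken strictly before the
    infection; anything taken afterwards may already contain
    the ransomware and cannot be trusted for a restore."""
    clean = [t for t in backup_times if t < infection_time]
    return max(clean) if clean else None

# Backups taken hourly vs. daily over two days (times in hours).
hourly = list(range(0, 48, 1))
daily = list(range(0, 48, 24))

# Infection strikes at hour 30.5: hourly backups lose about 30
# minutes of work, daily backups lose about 6.5 hours.
assert last_clean_backup(hourly, 30.5) == 30
assert last_clean_backup(daily, 30.5) == 24
```

The catch Lanc points to runs the other way too: the more frequent the backups, the more of the recent ones already include the malware, so backup frequency alone is not resilience.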

Furthermore, incidents like the OVH data centre fire remind us that the cloud is still rooted firmly on Earth. Should the servers and data centres be compromised, the data can become irretrievable.

"Cloud providers can put their hands up and say 'go to your backup systems', and of course, their customers say, 'don't you do that?' That's the challenge with the cloud shared responsibility model," Lanc says.

"The future has to have this concept of data security, data privacy and resilience, so any data can be recovered, on demand, anytime."

"With the cyber perimeter, today it's in our homes; it could also be in hospitals, or on an IoT device," Lanc says. "So how do you protect all those billion endpoints? Because when you have the cloud and everyone can access it, any weakness is then exposed."

"You have to start thinking about protecting data as an asset, rather than protecting the people that need to access it."

Quantum resilience is a method put forward by Lanc that not only protects data but also increases security and compliance with critical data protection legislation.

In essence, quantum resilience fragments data into multiple redundant shards. These fragments are then stored in multiple locations: public and private clouds, or locally across multiple devices such as phones or computers. When the data needs to be retrieved, it is re-assembled from the multiple shards.
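The article does not disclose Ionburst's actual scheme, but the fragment-and-reassemble idea can be sketched with a toy XOR-parity split (function names and parameters are illustrative, not Ionburst's API): data is cut into k shards plus a parity shard, the shards are scattered across independent stores, and the original survives the loss of any one shard.

```python
from functools import reduce

def fragment(data, k=3):
    """Cut data into k equal-size shards plus one XOR parity shard.
    Any single lost shard can be rebuilt from the other k."""
    pad = (-len(data)) % k                 # pad so length divides by k
    padded = data + b"\x00" * pad
    size = len(padded) // k
    shards = [padded[i * size:(i + 1) * size] for i in range(k)]
    parity = bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*shards))
    return shards + [parity], len(data)    # keep length to strip padding

def reassemble(shards, original_len):
    """Rebuild the original data; at most one data shard may be None."""
    *data_shards, parity = shards
    if None in data_shards:
        missing = data_shards.index(None)
        present = [s for s in data_shards if s is not None] + [parity]
        data_shards[missing] = bytes(
            reduce(lambda a, b: a ^ b, col) for col in zip(*present)
        )
    return b"".join(data_shards)[:original_len]

record = b"customer record: jane@example.com"
shards, n = fragment(record, k=3)
shards[1] = None  # simulate losing one cloud store entirely
assert reassemble(shards, n) == record
```

A real deployment would add encryption and anonymised shard naming (the "zero data" property Lanc describes) and tolerate more than one lost store, typically via proper erasure coding rather than a single parity shard.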

This helps mitigate some of the issues that arise from relying on third parties to store and protect data. Organisations remain responsible for access and identity management even when their data is stored on public clouds.

"The fragmented data is stored in different places, so even if something happens to a cloud store or your own systems internally, the data is still safe," Lanc explains.

"It can't be surveyed by anybody because the data has been anonymised and encrypted. It's had the ownership classifications taken away from it. It's what we call zero data. If a state actor says to a company like Google, Microsoft, or AWS that it wants to look at that data, they can't, because they don't even know who the data belongs to or where it comes from."

By making the fragments anonymous, quantum resilience keeps the data compliant with GDPR. Even if a data breach should occur, the data cannot be traced back to the company, and its fragmented nature means anyone accessing it cannot use it.

"And, in the instance where I do lose cloud connectivity, I can spread my data more, so I take away the concentration risk for a cloud," Lanc explains. "Instead of having to buy a private cloud service, you can actually use low-cost public cloud to store data more privately and more resiliently."

Furthermore, Lanc explains: "The data will be much more non-deterministic, much more fluid, and the data will move with you."

"Any person can move around and their data will effectively move with them, but within a security, privacy and resiliency mechanism to suit that person."


Read the original here:
Quantum resilience and the challenges of cloud security - DIGIT.FYI

Varjo’s Reality Cloud could become the foundation of the metaverse – TweakTown

Varjo today revealed Varjo Reality Cloud, an ambitious attempt at creating the foundation of the future metaverse. The company is leveraging the Lidar capabilities of its XR-3 headset to enable real-time photorealistic virtual teleportation. This is Varjo's vision for the future of collaboration.


"We believe that Varjo's vision for the metaverse will elevate humanity during the next decade more than any other technology in the world," said Timo Toikkanen, CEO of Varjo. "What we're building with our vision for the Varjo Reality Cloud will release our physical reality from the laws of physics. The programmable world that once existed only behind our screens can now merge with our surrounding reality - forever changing the choreography of everyday life."

With Varjo Reality Cloud, not only can you collaborate virtually with people around the world, you can bring others into your space, making it feel like you're sharing the same physical environment. With the Lidar scanners embedded on the Varjo XR-3 headset, users can capture a true-to-life 3D scan of their location, complete with full-color photorealistic texturing and share that with other people with Varjo headsets. Eventually, you'll be able to tap into the Varjo Reality Cloud with any device, including other VR headsets, computers, smartphones, and tablets.

Varjo is tapping into several technologies that it developed to create the Reality Cloud. "For the past five years, Varjo has been building and perfecting the foundational technologies needed to bring its Varjo Reality Cloud platform to market such as human-eye resolution, low-latency video pass-through, integrated eye-tracking and the LiDAR ability of the company's mixed reality headset," Varjo said in a prepared statement.

The Lidar system on the Varjo XR-3 headset allows Varjo to capture true-to-life 3D scans in real-time. The cameras update 200 times per second, ensuring that you will see the most up-to-date environmental information. The Varjo Reality Cloud interprets the scan data so you can capture all angles of an object or small scene, but Varjo said it does not retain the data long-term because of its client base. Varjo respects that many of the enterprise-level businesses that it works with have strict policies about data and privacy.

Varjo said that its foveated transport algorithm allows for very low-bandwidth transmission. On a connection as slow as 10 Mbit/s, you can enjoy photorealistic virtual teleportation. During a virtual teleportation session, the environment data is uploaded to the Reality Cloud servers and shared with all active users. Varjo processes the data on the server and transmits a compressed stream to the receiving headset.

As part of this new direction, Varjo has tapped into some new talent. The company recently acquired a company called Dimension10, which created a collaboration platform for architecture, engineering, and construction companies. Varjo also welcomed Lincoln Wallen, current CTO of Improbable and former CTO at Dreamworks, to its board of directors. Lincoln brings with him a wealth of experience in digital content production and large-scale cloud computing.

Varjo did not say when the Varjo Reality Cloud would debut, but the company is already working with a handful of select partners who will participate in a closed alpha of the platform later this year. Varjo wouldn't commit to a timeline for the full-scale rollout. However, it said that enterprise companies would be able to tap into the Varjo Reality Cloud soon, and within a few years, the first consumer uses should begin to materialize.

See the rest here:
Varjo's Reality Cloud could become the foundation of the metaverse - TweakTown