Category Archives: Cloud Servers

COVID-19 Impact on Healthcare Cloud Computing Market, Marked US$ 13 Bn in Forecast Year 2025 – 3rd Watch News

With the worldwide outbreak of COVID-19 and the stipulated lockdowns, the healthcare sector is witnessing an unprecedented slowdown, according to an EY-FICCI study titled "COVID-19 impact assessment for healthcare sector and key financial measures recommendations for the sector." The study is based on an assessment of healthcare players within the country to gauge the economic impact of the COVID-19 pandemic, and it provides recommendations on the fiscal stimulus measures the sector needs in the coming months.

The research report on the healthcare cloud computing market includes an analysis of the current market scenario as well as a revised forecast for a period of eight years. According to a recent market report published by Persistence Market Research titled "Healthcare Cloud Computing Market: Global Industry Analysis (2012-2016) and Forecast (2017-2025)", the global healthcare cloud computing market is anticipated to be valued at US$ 7,791.4 Mn in 2025 and is expected to register a CAGR of 18.9% from 2017 to 2025.
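As a back-of-the-envelope check using only the figures quoted above, the 2025 valuation and the 18.9% CAGR imply a 2017 base value of roughly

$\text{US\$ }7{,}791.4\ \text{Mn} \div (1.189)^{8} \approx \text{US\$ }1{,}950\ \text{Mn}$

i.e., just under US$ 2 Bn at the start of the forecast period.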

Increasing demand for better healthcare facilities and rising investments by healthcare IT players are the major factors driving the growth of the global healthcare cloud computing market.

Get Sample Copy of Report @ https://www.persistencemarketresearch.com/samples/19390


Get To Know Methodology of Report @ https://www.persistencemarketresearch.com/methodology/19390

Cloud refers to a paradigm in which data is stored permanently on servers and accessed by clients through various information systems such as computers, sensors, and laptops. Cloud computing refers to the process of delivering hosted services to those clients.

Global Healthcare Cloud Computing Market: Segmentation & Forecast

The global healthcare cloud computing market is categorized on the basis of application, deployment model, component, service model, and region. On the basis of application, the market is segmented into clinical information systems (CIS) and non-clinical information systems (NCIS). The CIS segment is anticipated to register a CAGR of 20.3% during the forecast period.

By component, the market is segmented into software, hardware, and services. On the basis of deployment model, the market is segmented into public cloud, private cloud, and hybrid cloud. The private cloud segment accounted for the highest market share and was valued at US$ 2,504 Mn in 2016.

By service model, the market is segmented into SaaS, PaaS, and IaaS. The SaaS segment is poised to be highly lucrative in the coming years: it is estimated to reach a value of about US$ 25.4 Bn by 2025 end, and it is the fastest-growing segment, registering an exponential CAGR of 19.7% throughout the 2017-2025 assessment period.

The PaaS segment is the smallest, with a low estimate of US$ 360.3 Mn in 2017, and is expected to reach US$ 1.2 Bn by 2025 end. By component, the software segment is projected to grow at a higher rate, registering a value CAGR of 19.2% throughout the forecast period, and is the largest segment in terms of value share. It is estimated to reach a valuation of more than US$ 21 Bn by the end of 2025.

Access Full Report @ https://www.persistencemarketresearch.com/checkout/19390

Global Healthcare Cloud Computing Market: Regional Forecast

This report also covers the drivers, restraints, and trends shaping each segment, and it offers analysis and insights regarding the potential of the healthcare cloud computing market in regions including North America, Latin America, Europe, Asia Pacific, and the Middle East and Africa.

Among these regions, North America accounted for the largest market share in 2016. Moreover, the North America region is expected to register a healthy CAGR of 19.6% during the forecast period.

Explore Extensive Coverage of PMR's Life Sciences & Transformational Health Landscape


Persistence Market Research (PMR) is a third-platform research firm. Our research model is a unique collaboration of data analytics and market research methodology to help businesses achieve optimal performance.

To support companies in overcoming complex business challenges, we follow a multi-disciplinary approach. At PMR, we unite various data streams from multi-dimensional sources. By deploying real-time data collection, big data, and customer experience analytics, we deliver business intelligence for organizations of all sizes.

Our client success stories feature a range of clients from Fortune 500 companies to fast-growing startups. PMR's collaborative environment is committed to building industry-specific solutions by transforming data from multiple streams into a strategic asset.

Contact us:

Ashish Kolte
Persistence Market Research
Address: 305 Broadway, 7th Floor, New York City, NY 10007, United States
U.S. Ph.: +1-646-568-7751
USA-Canada Toll-free: +1 800-961-0353
Sales: [emailprotected]
Website: https://www.persistencemarketresearch.com


Cloud computing, future trends to be followed in the industry – Optocrypto

We all know the advantages of cloud computing. Looking ahead, trends such as hybrid cloud, serverless computing, and containers will dominate the industry.

Industry experts expect the use of clouds to become more widespread in the coming years, and the global cloud market as a whole is expected to keep climbing. According to CloudTech, public spending on cloud computing is expected to increase from $229 billion in 2019 to $500 billion in 2023, with a projected compound annual growth rate (CAGR) of 22.3 percent.
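As a rough sanity check, the standard CAGR formula applied to those endpoints over four years gives

$\mathrm{CAGR} = \left(\frac{500}{229}\right)^{1/4} - 1 \approx 21.6\%$

which is in the neighborhood of the quoted 22.3 percent; the small gap likely comes down to rounding or a slightly different base year in CloudTech's model.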

These are the top five cloud computing trends for the coming year.

Serverless computing is a technology that enables functionality in the cloud on an as-needed basis. Organizations are drawn to serverless computing because it gives them room to work on core products without the pressure of running or managing servers.

Microsoft CEO Satya Nadella favors the serverless cloud. In his opinion, serverless computing not only lets teams stay responsive and focus on back-end logic but could also be the emerging future of distributed computing.
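To make the idea concrete, here is a minimal, hedged sketch of a serverless function in the AWS Lambda style; the handler name, event shape, and response format are illustrative assumptions, not something from the article. The point is what is absent: no server provisioning, patching, or capacity planning appears anywhere in the code.

```python
# Minimal serverless function sketch (AWS Lambda style).
# The platform provisions, scales, and bills per invocation;
# the developer ships only this handler.
import json

def handler(event, context):
    """Runs on demand in response to an event; no server to manage."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```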

According to Gartner, the global market for public cloud services will grow by nearly 17 percent to reach a total value of $266.4 billion in 2020, a staggering figure compared to last year's $227.8 billion.

In 2019, the share of companies using the hybrid cloud was 58%, up from 52% in 2018, and this year there is a huge increase. While everyone is discussing this new technology, research firms are also pointing to the growing need for hybrid cloud computing, highlighting its functionality and its multiple benefits, from efficiency to security.

As the world changes and technology evolves, technology is being used in almost every area of work. The workforce is also growing in response to increased demand for labor, and these digital-native users need a working knowledge of cloud computing and the other technological advances readily available to them.

Cloud computing and related technologies will bring the workforce and its tools together, which can only add to business productivity.

AI in data centers will take off in the next few years; IDC predicts that AI spending will increase to $52.2 billion by 2021, representing an average annual growth rate of 46.2 percent from 2016 to 2021.

From hardware failures to power savings and system fault detection, AI can address a wide range of enterprise-system problems. Using AI in data centers can serve a variety of purposes, including automating manual tasks and easing skill shortages. In addition, AI can help companies learn from past data and draw productive conclusions. With the introduction of AI technology, more sophisticated data security solutions will become available without human intervention.
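As a hedged illustration of the fault-detection use case, the sketch below trains a simple anomaly detector on synthetic server telemetry. The metric names, values, and choice of scikit-learn's IsolationForest are assumptions for the example, not anything the article prescribes.

```python
# Sketch: flag anomalous server telemetry with an IsolationForest.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Synthetic healthy readings: CPU temp (C), fan speed (RPM), power (W)
normal = rng.normal([65, 9000, 350], [3, 400, 20], size=(1000, 3))
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

sample = np.array([[92, 4000, 520]])  # overheating box with a dying fan
print(model.predict(sample))          # -1 marks the reading anomalous
```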


You couldn’t do this already? AWS adds size and bandwidth growth to FSx for Windows File Server – Blocks and Files

AWS has improved Amazon FSx for Windows File Server with dynamic size and throughput growth scaling capabilities.

FSx for Windows File Server provides disk- or SSD-based SMB-protocol file storage and is integrated with Active Directory for authentication purposes. The software runs in single or multiple availability zones, and shares can also be accessed from VMware Cloud on AWS and Amazon WorkSpaces. FSx supports data encryption in transit and at rest and includes backups.

Previously, users created a file system with a fixed capacity and set network throughput level. They paid for the defined storage and throughput capacity, and for any backups they created.

As announced in an AWS blog yesterday, users can now scale capacity and/or network throughput up or down with button clicks in the AWS Management Console or by using an API call. That means you can add capacity or throughput performance for one-off burst work sessions or regular, cyclical workloads.
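The blog's console clicks aside, here is a sketch of what the API route could look like using boto3; the file system ID and target values are placeholders. The new storage capacity is given in GiB and the throughput in MB/s.

```python
# Sketch: dynamically resize an FSx for Windows file system via the API.
import boto3

fsx = boto3.client("fsx", region_name="us-east-1")
fsx.update_file_system(
    FileSystemId="fs-0123456789abcdef0",             # placeholder ID
    StorageCapacity=2048,                            # grow to 2 TiB
    WindowsConfiguration={"ThroughputCapacity": 64}, # MB/s
)
```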

FSx file system users can make the changes free of charge once Amazon rolls out the update in a maintenance window. New users get them straight away.

AWS added the disk drive storage option to FSx in March this year, pricing it below the standard SSD option. Now it has added dynamic capacity and throughput scaling. And last month it increased EFS read performance from 7,000 to 35,000 IOPS at no charge.

AWS's FSx for Windows is an SMB file system, complementing its EFS (Elastic File System) service, which is an NFS-based offering. NFS is largely used by Unix and Linux applications and is also available for Windows, whereas SMB is a Windows file access protocol. There is also a separate FSx for Lustre offering.


Upstream Security Partners With Amazon Web Services to Enhance Automotive Cybersecurity – PRNewswire

HERZLIYA, Israel, June 2, 2020 /PRNewswire/ -- Upstream Security, a leader in cloud-based automotive cybersecurity, announced today that it has joined the Amazon Web Services (AWS) Partner Network as a Select Tier Technology Partner. Upstream Security pioneered automotive cloud cybersecurity and unlocks the value of automotive data, ensuring that connected vehicles and mobility services are safe, secure, and operating optimally. With its Connected Mobility Solution (CMS), AWS enables automotive manufacturers and suppliers to build applications that gather, process, analyze, and act on connected vehicle data without having to manage any infrastructure.

The Upstream and AWS partnership will empower automotive companies utilizing AWS with the cybersecurity knowledge, capabilities, and mitigation techniques required to thrive in the new cyber-threat landscape of smart mobility and connected vehicles.

"We're thrilled to have Upstream Security as part of our partner network and automotive solutions," explains Bill Foy, Director of AWS Automotive. "Upstream is known for delivering cutting-edge cybersecurity solutions that help automotive customers transform the way they secure their connected vehicles."

Upstream's enrollment in the AWS Partner Network further enhances its digital and cloud capabilities to deliver industry-leading solutions. Now that Upstream is an AWS Technology Partner, its software-based cybersecurity solution may be used by AWS automotive customers - OEMs as well as connected vehicle fleets - to detect cyber threats against connected vehicle infrastructure spanning the vehicle itself, telematics servers, and mobile applications.

"The collaboration with AWS enables us to offer a field-proven, scalable, and robust cybersecurity solution for OEMs and fleets," says Yoav Levy, Co-Founder and CEO of Upstream. "We look forward to enhancing our collaboration with AWS by offering seamless and out-of-the-box integration of Upstream and the AWS Connected Mobility Solution."

About Upstream Security

Upstream Security is the first cloud-based cybersecurity solution purpose-built for protecting connected vehicles and smart mobility services from cyber threats and misuse. Upstream's C4 platform leverages existing automotive data feeds to detect threats in real time and delivers cybersecurity insights supported by AutoThreat Intelligence, the first automotive cybersecurity threat intelligence offering in the industry. Upstream Security is privately funded by Renault Venture Capital, Volvo Group, Hyundai, Nationwide Insurance, CRV, Glilot Capital Partners, and Maniv Mobility.

For more information go to http://www.upstream.auto

Follow Upstream on LinkedIn, Twitter, Facebook, YouTube

Media Contact: [emailprotected]

SOURCE Upstream Security


Improvements on the verify domain error in Office 365 – TechGenix

If you registered a domain in Office 365/Azure Active Directory and, after a while, try to register the same domain in a different tenant, you would see the "verify you own this domain" screen and suffer a lot of headaches, because a support ticket would be required to find out where that specific domain lives in the Microsoft cloud. That was a common issue a few months back. Today, I was working on my Microsoft Teams articles here at TechGenix, and when I tried to register my domain, I got an error that the domain was already registered. The difference nowadays is that the "verify you own this domain" error message now tells you precisely where the domain is being verified.

That saves a lot of time for the IT professional. Now it is just a matter of logging on to that tenant and removing the domain from there before continuing. Make sure you allow at least a couple of hours for the deletion to flush through the first tenant before reusing the domain in a different tenant.
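For admins who prefer scripting the cleanup, one hedged sketch of the removal step uses Microsoft Graph's domains endpoint; the domain name is a placeholder, and acquiring a token with the Domain.ReadWrite.All permission is left out for brevity.

```python
# Sketch: delete a domain from the old tenant via Microsoft Graph.
import requests

DOMAIN = "example.com"                       # placeholder domain
TOKEN = "<token-with-Domain.ReadWrite.All>"  # acquisition elided

resp = requests.delete(
    f"https://graph.microsoft.com/v1.0/domains/{DOMAIN}",
    headers={"Authorization": f"Bearer {TOKEN}"},
)
resp.raise_for_status()   # expect 204 No Content on success
```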



Digital transformation held back by lack of skilled people – ComputerWeekly.com

Most organisations believe digital transformation can benefit their business's operations and customer service, but almost as many say they are held back by a lack of skilled resources.

Those are some of the findings of the Veeam 2020 Data Protection Trends report, which also found that outages of high-priority applications cost respondents $67,651 per hour.

The survey questioned 1,500 decision-makers in global enterprises on data protection and the IT challenges they face.

A key focus of the survey was digital transformation, which seeks to rebuild an organisation's activities so that its operations are software-based, can take advantage of efficiency savings, can benefit rapidly from changes in its market, and can be quickly reworked for a constantly changing environment.

Analyst house IDC has predicted digital transformation spending will be around $7.4tn between 2020 and 2023.

According to the survey, more than half (51%) of respondents believe digital transformation can help their organisation transform customer service, while nearly half said it could transform business operations (48%) and deliver cost savings (47%).

But almost half of those surveyed said they are hindered in their digital transformation journey by unreliable legacy technologies, while 44% cited lack of IT skills or expertise as a barrier to success.

Almost a quarter (23%) of organisations described their progress towards digital transformation initiatives and goals as mature or fully implemented. Meanwhile, nearly one-third (30%) are in the early stages of implementing or planning digital transformation.

According to the survey, the vast majority (95%) of organisations suffer unexpected outages and an outage lasts, on average, almost two hours (117 minutes).

The decision-makers surveyed reported 10% of their servers having unexpected outages each year that last for hours and cost hundreds of thousands of dollars.

An hour of downtime from applications deemed high priority was estimated to cost $67,651, while this number was $61,642 for a normal application. The organisations surveyed considered 51% of their data as high priority.
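Combining the survey's averages gives a rough per-incident figure for a high-priority application:

$117\ \text{min} \times \frac{\$67{,}651}{60\ \text{min}} \approx \$131{,}900$

which is consistent with the "hundreds of thousands of dollars" characterisation above.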

Nearly one-third (32%) of the organisations surveyed currently make use of on-premise backup tools, while 43% see themselves moving to cloud-based backup tools by 2022.

Currently, more than a quarter (27%) of data is backed up to the cloud by a backup-as-a-service (BaaS) provider, while 43% of firms plan to go down this route within the next two years.

More than one-third (39%) of respondents said the ability to improve the reliability of backups was the most likely reason their organisation would change its primary backup solution. A similar number (38%) said reduced software or hardware costs would be the key factor, while 33% cited improving return on investment.

The survey showed that almost a quarter (23%) of data is replicated and made disaster recovery-capable via a cloud provider. About one-fifth (21%) of data across organisations globally is not replicated or staged for business continuity/disaster recovery (BC/DR), while 14% of data is not backed up.

Lack of staff to work on new data protection initiatives (42%) was cited as the biggest current challenge. Lack of budget for new initiatives (40%) and lack of visibility on operational performance (40%) were also cited.

The survey showed that respondents see the use of cloud as important to digital transformation. For more than half (54%) that means disaster recovery via a cloud service, while 50% cited the ability to burst workloads from on-premise to the cloud as an aim, and multicloud and the ability to move workloads from one cloud to another was cited as important by 48%.


NTT Com internal cloud server hacked, information on 621 customers stolen – DatacenterDynamics

The company believes the hackers started from an NTT base in Singapore, reached a cloud server in Japan, moved from there to an NTT Com server on its internal network, and finally reached an internal Active Directory server. There they stole data and uploaded it to a remote server.

NTT Com took the servers offline when they discovered the intrusion, but by that point, it was too late. Customers that may have been affected have been notified, the company said.

The company added that it would upgrade its IT infrastructure to stop a similar attack happening again, with a poorly secured migration project thought to be at fault.

"We will promptly disclose information when new information becomes available, but we will refrain from disclosing information regarding individual customers from the viewpoint of confidentiality," the company said in a statement (translated). "Thank you for your understanding."


Where is the edge in edge computing? And who gets to decide? – ZDNet

Calling a technological domain "the edge" gives it a cool sound, like it's just pushing the boundaries of some innovative envelope. So naturally, there are multiple subdomains of the world's wireless network that operators and equipment providers have staked out as "the edge." There is a "network edge" that you'd think would extend to the furthest boundaries of its coverage areas. Actually the "network edge" can be inches away from the wireless core, if the functions being served there extend directly to the customer.

An edge-ready mini data center as envisioned by cabling solutions provider Datwyler.

Then there's the "customer edge," and if you're not confused yet, that should be the outermost frontier of a customer's own assets. There is a "cloud edge" and an "edge cloud," which some vendors say are the same thing, and others may construe as totally separate concepts.

The reason for the ambiguity is this: The future of both the communications and computing markets may depend on the shape these edges form once they are finally brought together. This will determine where the points of control, and where the points of access, will reside. And as 5G Wireless networks continue to be deployed, the eventual locations of these points will determine who gets to control them, and who gets to regulate them.

"The edge is not a technology land grab," remarked Cole Crawford, CEO of DC producer Vapor IO. "It is a physical, real estate land grab."

A Vapor Chamber, designed in collaboration with the Kinetic Edge Alliance and produced by Vapor IO.

As ZDNet Scale reported in November 2017, Vapor IO makes a 9-foot diameter circular enclosure it calls the Vapor Chamber. It's designed to provide all the electrical facilities, cooling, ventilation, and stability that a very compact set of servers may require. Its aim is to enable same-day deployment of compute capability almost anywhere in the world, including temporary venues and, in the most lucrative use case of all, alongside 5G wireless transmission towers.

Since that report, public trials have begun of Vapor Chamber deployments in real-world edge/5G scenarios. The company calls this initial, experimental deployment schematic Kinetic Edge. Through its agreements with cellular tower owners including Crown Castle -- the US' largest single wireless infrastructure provider, and an investor in Vapor IO since September 2018 -- this schematic has Vapor IO stationing shipping container-like modules, with cooling components attached, at strategic locations across a metro area.

By stationing edge modules adjacent to existing cellular transmitters, Vapor IO leverages their existing fiber optic cable links to communicate with one another at minimum latency, at distances no greater than 20 km. Each module accommodates 44 server rack units (RU) and up to 150 kilowatts of server power, so a cluster of six fiber-linked modules would host 0.9 megawatts. While that's still less than 2% of the server power of a typical metropolitan colocation facility, from a colo leader such as Equinix or Digital Realty, consider how competitive such a scheme could become if Crown Castle were to install one Kinetic Edge module beside each of its more than 40,000 cell towers in North America. Theoretically, the capacity already exists to facilitate the computing power of greater than 700 metro colos.

"As you start building out this Kinetic Edge, through the combination of our software, fiber, the real estate we have access to, and the edge modules that we're deploying, we go from the resilience profile that would exist in a Tier-1 data center, to well beyond Tier-4," said Crawford, referring to the smallest and largest classifications of data centers, respectively. "When you are deploying massive amounts of geographically disaggregated and distributed physical environments, all physically connected by fiber, you now have this highly resilient, physical world that can be treated like a highly connected, logical, single world."

Vapor IO has perhaps done more to popularize the notion of cell tower-based data centers than any other firm, particularly by spearheading the February 2019 establishment of the Kinetic Edge Alliance. But perhaps seeing a startup seize a key stronghold from its grasp, AT&T has recently backed away from characterizing its network edge as a place within sight of civilian eyes. In a 2019 demonstration at its AT&T Foundry facilities in Plano, Texas, the telco showed how 5G connectivity could be leveraged to run a real-time, unmanned drone tracking application. The customer's application in this case was not deployed in a micro data center, but instead in a data center that, at some later date, may be replaced with one of AT&T's own, existing Network Technology Centers (NTCs).

It's AT&T's latest bid to capture the edge for itself, and hold it closer to its own treasure chest. In response, Vapor IO has found itself tweaking its customer message.

A Vapor IO Kinetic Edge facility next to a Crown Castle-owned RAN tower in Chicago.

"When we first started describing our Kinetic Edge platform for edge computing, we often used the image of a data center at the base of a cell tower to make it simple to understand," stated Matt Trifiro, Vapor IO's chief marketing officer, in a note to this reporter. "This was an oversimplification."

"We evaluate dozens of attributes," Trifiro continued, "including the availability of multi-substation power, proximity to population centers, and the availability of existing network infrastructure, when selecting Kinetic Edge locations. While many of our edge data centers do, in fact, have cell towers on the same property, they mainly serve as aggregation hubs that connect to many macro towers, small cells and cable head ends."

Although cell towers are a principal factor in Vapor IO's site selection, Trifiro told ZDNet, they're not the only factor. Kinetic Edge sites are linked to one another through a dedicated software-defined network (SDN). The resulting system routes incoming traffic among multiple sites in a region, forming a cluster that Vapor IO does not call an "edge cloud."

"In this way, we enable the Kinetic Edge to span the entire middle-mile of a metropolitan area, connecting the cellular access networks to the regional data centers and the Tier-1 backbones using a modern network topology," said Trifiro.

The Kinetic Edge deployment model follows an emerging standard for enabling edge computing environments on highly distributed systems, for a plurality of simultaneous tenants. Last January, prior to the onset of the pandemic, the European standards group ETSI published two reports that jointly tackled the problem of virtualization -- giving each tenant a slice of an edge server -- in a way that could also serve as the foundation for telco-owned servers used in 5G Wireless.

Just as server and network virtualization provided the foundation for the modern data center cloud, these proposed standards could pave the way for a concept which, just last year, was being critiqued as oxymoronic: the edge cloud.

Network slicing is a deceptively difficult concept to implement in telco environments, many of which are already virtualized at one level. To pull it off, service providers would have to implement a second layer of virtualization at a deeper level -- one that allows telcos to utilize their servers for their own data services while secluding and isolating customer-facing services so that they cannot peer into telcos' namespaces. There are both technological and legal hurdles for engineers to cross (regulations in many countries, including the US, prohibit the mixing of telco and customer environments), and prior to their drone tracker demo, AT&T's engineers had gone on record saying it could not be done.

ETSI's proposed approach for what it calls multi-access edge computing (MEC) would be to refrain from specifying just how virtualization takes place.

"The ETSI MEC architectural framework. . . introduces the virtualization infrastructure of MEC host either as a generic or as a NFV [network functions virtualization] Infrastructure (NFVI)," one ETSI document [PDF] reads. "Neither the generic virtualization infrastructure nor the NFVI restricts itself to using any specific virtualization technology."

The result is a cluster of server components, each of which may be hosted by a hypervisor-driven environment such as classic VMware vSphere, or a container-driven, orchestrated environment such as Kubernetes. The system looks homogenous enough on the surface, with applications and services being hosted, for lack of a more explicit model, however they're hosted. The lower layers of the infrastructure provide whatever isolation each tenant's workload of applications and services may require. From the perspective of the orchestrator or manager, it's all one cloud -- and that is how ETSI defines "edge cloud."

The problem with this point of view, as some US-based engineers see it, is that it assumes edge systems may be contained unto themselves, entirely at the edge. If you're a manufacturer of systems and components designed to go elsewhere, you don't want to build partitions for yourself.

"If you're going to deliver real-time inferencing at the edge, typically that means you've trained a model back in your data center," explained Matt Baker, Dell Technologies' senior vice president of strategy and planning. "And this is one reason we say edge doesn't exist unto its own. Edge is a part of a broader environment: edge to core to cloud."

Last February, Baker was rolling out an extension to his company's edge systems architecture geared for high-performance AI and data analytics scenarios, called Dell EMC HPC Ready for AI and Data Analytics. In a system that enables its parts to be defined by the workloads it runs, said Baker, the separation of powers tends to evolve into silos. Case in point: machine learning. Bright Cluster Manager for ML may require one platform; if another workload runs better on Spark, that's another platform. The result is workload isolation and reinforced complexity for their own sake.

"So what we wanted to do is build a ready architecture for many AI and data analytics frameworks," said Baker, "so that it's just a whole lot easier for our customers to approach, deploy, and leverage all of these new, great technologies like Cassandra, Domino, Spark, Kubeflow."

What Dell is calling a system for edge computing is, in this case, a very dense server rack. At first glance, and even at second glance, it doesn't appear to fit the typical bill of an edge-optimized system, even one from Dell. Indeed, Dell EMC published earlier forms of its HPC Ready architecture, including one back in early 2018 [PDF], without any mention of edge computing. What is it that makes a server rack non-edge one year, and edge-certified the next?

"I think it's important to observe that this is an ecosystem, an end-to-end system," Baker explained. "And in order to develop a real-time inferencing application, it typically requires that you train it against a large set of data. This is designed to complement and be deployed not physically alongside, but logically alongside, the streaming data platform."

Dell believes an edge computing platform need not be physically deployed at any edge at all. It's an edge cloud of sorts that you don't even have to know is at the edge. In response to a question from ZDNet, Baker confirmed that this architecture was designed for an environment staffed by human beings, which already suggests its location is in the zone that Dell, at least in the past, called the "core."

The AT&T Foundry facility in Plano, Texas, ironically as seen by a drone.

For the 4G Wireless model, engineers added an ingenious type of network switch. It allowed a request to a service from a customer's device, such as a smartphone, to bypass the usual Internet routing scheme, enabling a local server to process that request more expediently. It was called the local breakout (LBO) switch, and it's the reason many major Web sites respond quickly to users, even with less-than-optimal connections.

Being able to switch an incoming data request so that a local server responds to it rather than a remote one turns out to be a handy tool in the arsenal of a telco that wants to direct traffic from the radio access network (RAN) to wherever it considers the edge to be: the place with the most value for that telco. For AT&T, as its drone demo proved, it can enable IoT traffic to be routed into its own facilities, into what Dell would have defined as the "core" but what can now be marketed as the edge. It's a technique built on top of LBO, called serving gateway local breakout (SGW-LBO).
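As a toy illustration only (nothing like a production telco implementation), the decision at the heart of local breakout can be pictured as a dispatch on the requested service; the service names and hosts below are invented.

```python
# Toy model of local breakout: selected traffic is steered to a nearby
# edge server; everything else takes the default route to the core.
EDGE_SERVICES = {"video-cache", "drone-tracking"}   # illustrative names

def route(service: str) -> str:
    """Return the next hop for a request arriving from the RAN."""
    if service in EDGE_SERVICES:
        return "edge-server.local"     # broken out locally (LBO)
    return "core-gateway.remote"       # default path into the core

print(route("drone-tracking"))   # -> edge-server.local
print(route("email"))            # -> core-gateway.remote
```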

Athonet is a communications equipment provider that has already rolled out SGW-LBO to some telco customers, following its launch in February 2018. In a statement at the time, the company said, "The benefit of this approach is that it allows specific traffic (not all traffic) to be offloaded for key applications that are implemented at the network edge without impacting the existing network or breaking network security. . . We therefore believe that it is the optimal enabler for MEC."

There's that ETSI term again. If telcos have their hands exclusively on SGW-LBO switches, then what's to prevent them from diverting all incoming traffic from their RANs directly into their NTCs, declaring those NTCs "the new edge," and reaping the jackpots?

Juniper Networks, at least theoretically, would benefit from whichever way the LBO switch is thrown. Its CTO, Raj Yavatkar, told ZDNet he sees potential value for an AT&T, a Verizon, or a T-Mobile in embracing, or at least enabling, the Kinetic Edge model of letting LBO point their direction. His argument is that it would free telcos from depending exclusively upon the largest hyperscale cloud service providers.

"We see that if telcos simply rely on hyperscalers to provide all these services, and only focus on providing connectivity," said Yavatkar, "they won't be able to take advantage of the value-added services that they can sell to their enterprise customers, and monetize them. There's a balance to be considered, with respect to what is served from hyperscalers, and what is served in a cloud-agnostic, cloud-neutral way, from the edge of the cloud."

StackPath, with whom Juniper has partnered, could conceivably provide not only the edge infrastructure for telco services but also the platform for a marketplace on which those services are sold: a kind of cloud at the edge that, neither technically nor commercially, is actually "the cloud."

It would be a mistake to presume that edge computing is a phenomenon which will eventually, entirely, absorb the space of the public cloud. Indeed, it's the very fact that the edge can be visualized as a place unto itself, separate from lower-order processes, that gives rise to both its real-world use cases and its someday/somehow, imaginary ones. It was also a mistake, in perfect hindsight, to presume the disruptive economic force of cloud dynamics could completely commoditize the computing market, such that a virtual machine from one provider is indistinguishable from any other VM from another, or that the cloud will always feel like next door regardless of where you reside on the planet.

Yet when plotting the exact specifications for what any service provider's or manufacturer's edge services, facilities, or equipment should be, it's very difficult not to get caught up in the excitement of the moment and imagine the edge as a line that spans all classes and all contingencies, from sea to shining sea. Like most technologies conceived and implemented this century, it's being delivered at the same time it's being assembled. Half of it is principle, and the other half promise.

Once you obtain a beachhead in any market, it's hard not to want to drive further inland. That's where the danger lies: where the ideal of retrofitting the Internet with quality of service can make anyone lose, to coin a phrase, its edge.

This article contains updated material that first appeared in an earlier ZDNet Executive Guide on edge computing.


Cloud-native architectures will define the vRAN future – 5Gradar

The building of virtual Radio Access Networks (vRAN) and the use of edge data centers has long been a major topic in the mobile communications sector and this development affects both the current 4G and the future 5G networks. However, technology continues to evolve away from virtualized workloads and towards containers and cloud-native architectures and applications.

Traditional radio access networks consist of antennas, base stations (baseband units, or BBUs), and controllers. This makes them some of the most expensive components in a mobile network. What's more, they also require specialized hardware and software. Virtualized RAN (vRAN) solutions overcome these disadvantages, which is why they are replacing proprietary, hardware-based radio access networks in ever-greater numbers. The vRAN is based on Network Functions Virtualization (NFV), which transforms a typical hardware-based network architecture into a software-based environment, although there still might be a need for hardware acceleration in some form. Some BBU control functions are provided on virtual machines (VMs) that run on Commercial-Off-The-Shelf (COTS) servers in an edge data center. This all results in disaggregation in two dimensions:

1. separation of hardware and software, and
2. functional split of the base station.

The trend towards edge computing affects both 4G and 5G networks. The main advantages of edge computing include zero-touch provisioning, multi-cluster management, a smaller footprint, high scalability and automated operation. vRAN or disaggregated RAN can be seen as a specific use case or workload on edge data centers.

There are a few differences between 4G LTE and 5G when it comes to edge implementation, especially in terms of how the functionalities of the base stations are divided between the antenna locations and the edge data centers.

In 4G LTE networks, the traditional status quo is a distributed RAN with baseband units on the antenna side, meaning the full functionality of the base stations is distributed across the individual antenna locations. This results in considerable costs, potential challenges of radio interference, and high energy consumption. An edge approach moves away from a distributed RAN with BBUs and towards a centralized vRAN. Some of the functions of the base stations are centralized in virtualized BBUs (vBBUs), meaning the base station is split.

In 5G networks, on the other hand, disaggregation in edge implementation is divided into three parts: Radio Units (RUs on antenna sites), Distributed Units (DUs), and Centralized Units (CUs).

The CUs are designed as a distributed cloud solution with low space requirements, while the DUs take on tasks such as real-time processing and support for the Precision Time Protocol (PTP), along with hardware acceleration in the form of field-programmable gate arrays (FPGAs), smart network interface cards (SmartNICs), and even application-specific integrated circuits (ASICs).

At the edge, mobile network operators are already using Network Functions Virtualization platforms such as Red Hat OpenStack with distributed nodes for software-defined wide area networks (SD-WAN) and mobile applications. However, introducing vRANs by using virtual machines on standard servers in an edge data center is a good first step, but it cannot be the final one. It has often been shown that current virtual network functions (VNFs), and vRANs in particular, are unable to meet expectations in terms of functionality, ease of implementation, or management. That's why the next step must be applications that are compatible with the cloud or, even better, cloud-native. This development is currently emerging in the telecommunications sector, with cloud-native applications running on Kubernetes-based container platforms such as Red Hat OpenShift for 5G Core (5GC), edge, and RANs, for example.

Cloud-native applications are designed as lightweight containers and loosely coupled microservices. As far as network operators are concerned, the main advantages of these types of applications are the lower development costs, the simpler upgrades and modifications, as well as the potential for horizontal scaling. This also avoids vendor lock-in.

In essence, cloud-native application development is characterized by service-based architecture, API-based communication, and container-based infrastructure. Service-Based Architecture (SBA) is defined in the 5G standard.

Service-based architectures such as microservices enable modular, loosely coupled services to be built. The services are provided via lightweight, technology-agnostic APIs that reduce the complexity, effort, and expense of deployment, scaling, and maintenance. In addition, cloud-native applications are based on containers that enable operation across different environments. Container technology uses the operating system's functions to divide the available computing resources across multiple applications while at the same time keeping those applications secure. Cloud-native applications also scale horizontally, meaning further application instances can be added easily, often through automation, within the container infrastructure. The lower overheads and high density enable numerous containers to be hosted within the same virtual machine or the same physical server.
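As a minimal sketch of that horizontal scaling in practice, assuming the official Kubernetes Python client and an invented deployment name, adding instances amounts to patching the replica count:

```python
# Sketch: scale a containerized workload out by raising its replicas.
from kubernetes import client, config

config.load_kube_config()        # use the local kubeconfig
apps = client.AppsV1Api()
apps.patch_namespaced_deployment_scale(
    name="upf",                  # invented name, e.g. a user-plane function
    namespace="ran",
    body={"spec": {"replicas": 5}},   # horizontal scale-out
)
```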

It is becoming increasingly apparent that the transition to 5G is a transition to containers and cloud-native applications. This means that virtualized workloads are evolving into containerized workloads. Virtualization will be there for years to come though, in one form or another.

The advantages of the cloud-native approach can be seen in particular in the main 5G use cases, and thus in network slicing, or in other words, the provision of multiple virtual networks on a common physical infrastructure.

In principle, there are three main use cases when it comes to 5G:

1. enhanced mobile broadband (eMBB),
2. ultra-reliable low-latency communications (URLLC), and
3. massive machine-type communications (mMTC).

A virtualized RAN that is both container-based and cloud-native is a key component for the 5G network transformation and in providing optimal support for these technologies and use cases. Cloud-native architecture in particular allows initial costs to be kept to a minimum per slice and the scaling up to thousands of slices of all sizes to be cost-efficient.

There is no question that 5G will power a new generation of services thanks to its higher data rates and extremely low latencies. To be able to leverage these advantages to the fullest extent possible, however, telecommunications companies need to bring their data processing and processing power closer to the end user. The end user can ultimately also be a smartphone, a connected car, or a robot in a production process. The task clearly demonstrates that both edge computing and cloud-native capabilities are the focus of mobile network operators' activities at the moment.

Some mobile providers have already set up commercial, if locally restricted, 5G environments, and numerous new projects are on the horizon. vRAN, edge computing, and cloud-native are the crucial technology drivers in this area, and open source solutions such as Red Hat OpenShift will form the basis of disaggregated 5G infrastructures.


HSBC platform uses AI to analyse trading data thousands of times faster – ComputerWeekly.com

HSBC is offering a trading service that will use artificial intelligence (AI) to gain trading insights from publicly available information to help its clients make decisions when trading shares in companies.

The bank's US business is using IBM Watson and technology from EquBot in AiPEX, which will learn from publicly available data, including company announcements and tweets.

Monitoring the 1,000 largest listed companies, the service will predict which shares are likely to grow. It uses the same methods as traditional research in this area, but it is automated and thousands of times faster.

With the volume of data available about companies and their strategies exploding, trading companies need to be able to monitor data in near-real time and make investment decisions based on it.

Dave Odenath, head of quantitative investment solutions, Americas, at HSBC Global Banking and Markets, said investors needed to be able to keep up with the growing amount of data being generated each day.

"We are now able to offer clients solutions that not only keep up, but thrive in an increasingly complex world of data. AiPEX with Watson simulates a team of thousands of analysts and traders working around the clock to learn from millions of pieces of information and identify potential investment opportunities," said Odenath.


The trading sector relies on vast amounts of data about the performance of companies as well as historic economic trends. Technology is being offered by banks and other trading service providers to help businesses in the trading sector keep up.

For example, some of the datasets demanded by customers of financial data supplier Refinitiv are so large that many requests cannot be sent over fibre networks without huge delay. This means companies that want to analyse the data often have to transport it by truck on hard disks and upload it to their servers.

Refinitiv recently announced it was using Google Cloud to remove the need to physically transport data or even transfer it over fibre networks. It moved its Tick History database into Google Cloud to enable customers to analyse the data there.

"Cloud computing and other digital technology advancements are transforming how data is analysed in the capital markets sector," Catalina Vazquez, proposition director for the Tick History database at Refinitiv, recently told Computer Weekly.

"As the cloud delivers on its promise to make AI-based analytics more readily available, the potential of data to deliver answers that drive business performance gets ever greater."

US stock exchange Nasdaq recently said it was using Amazon Web Services (AWS) application programming interfaces (APIs) to give its investment industry customers access to trading data in real time.
