Category Archives: Cloud Servers
While Micron's (MU) sales to some major customers are under pressure, its sales to some others are holding up pretty well right now.
After the bell on Wednesday, the memory giant reported February quarter (fiscal second quarter) revenue of $4.8 billion (down 18% annually) and non-GAAP EPS of $0.45, topping consensus analyst estimates of $4.69 billion and $0.37.
Micron also guided for May quarter revenue of $4.6 billion to $5.2 billion and non-GAAP EPS of $0.40 to $0.70. The midpoints of those wider-than-usual guidance ranges are slightly above consensus estimates of $4.87 billion and $0.52.
With markets having been on edge about the COVID-19 pandemic's impact on Micron's sales, the numbers are going over well. As of the time of this article, Micron's stock is up 5.3% in after-hours trading to $44.75. Shares are now up 44% from a March 18 low of $31.13, albeit still down 27% from a Feb. 12 52-week high of $61.19.
In its prepared remarks, Micron did caution that it expects smartphone, consumer electronics and automotive demand to be below prior expectations during the second half of its fiscal 2020 (ends in Aug. 2020). However, the company also noted it has seen higher notebook demand as more workers and students work and learn from home, and that demand from data center end-markets (already on the upswing in recent months thanks to a cloud capex rebound) is strong as usage of various online/cloud services grows, even leading to some shortages.
In addition, Micron reported that China's COVID-19 pandemic weighed on its sales to consumer electronics clients and caused factory shutdowns for some clients. However, it added local data center demand was strong, and that Chinese smartphone production volumes have begun to rebound.
Micron's near-term demand outlook. Source: Micron.
Micron's demand commentary has some things in common with what GPU giant and Micron graphics DRAM client Nvidia (NVDA) shared on a Tuesday conference call. Among other things, Nvidia said it's seeing strong demand for GPUs going into notebooks and cloud servers, and indicated Chinese demand has begun normalizing. Micron, for its part, disclosed that its bit shipments of GDDR6 graphics DRAM (used by some of Nvidia's GPUs) rose over 40% sequentially last quarter.
With full-year supply and demand trends quite uncertain right now, Micron chose not to provide calendar 2020 outlooks for DRAM and NAND flash memory bit supply and demand growth -- either for itself or the memory industry at-large.
In December, Micron guided for DRAM industry bit demand to grow by a mid-teens percentage this year, and for NAND bit demand to grow by a high-20s to low-30s percentage. Now, Micron is merely reiterating long-term guidance for DRAM bit demand to see a mid-to-high teens compound annual growth rate (CAGR), and for NAND bit demand to see a roughly 30% CAGR.
Micron is also for now reiterating fiscal 2020 capital spending guidance of $7 billion to $8 billion, while adding that it's evaluating its capex plans for calendar 2020. Several chip equipment makers, including Applied Materials (AMAT), have withdrawn their quarterly guidance, while noting that recent lockdown orders have impacted their manufacturing operations.
Micron received a few questions on its earnings call about inventory levels -- both its own and those of its customers.
Micron's inventory rose by $300 million sequentially to $5.2 billion, leading its days of inventory to rise by 13 days to 134. The company insisted that much of this growth was due to seasonality and the holding of additional NAND inventory ahead of a technology transition. However, it also reported building its raw materials stockpiles due to supply chain uncertainty.
Separately, Micron admitted that just as it's stockpiling raw materials, some of its customers could be stockpiling memory products, and that these efforts could be masking weakening end-market demand.
Following a sharp downturn in late 2018 and early 2019, DRAM and (especially) NAND pricing trends have seen meaningful improvement, as industry capex cuts make themselves felt. And for now at least, Micron insists memory pricing trends remain favorable.
During the February quarter, Microns DRAM average selling price (ASP) was roughly flat sequentially, after having dropped by a high-single digit percentage in its November quarter. NAND ASP rose by a high-single digit percentage, after having risen by a low-single digit percentage in the November quarter.
Most organizations have migrated security tools to the cloud, according to a survey from security information & event management (SIEM) provider Exabeam.
by Dan Kobialka Mar 25, 2020
Approximately 58% of organizations have migrated at least a quarter of their security tools to cloud-based options, according to a survey from security information and event management (SIEM) platform provider Exabeam. In addition, 33% said they have moved more than half of their security tools to cloud options.
Other findings from Exabeam's survey included:
The survey also revealed cloud-based security tools are being used to protect different types of data, including:
Cloud-based security tools may be beneficial, but organizations must still maintain visibility into their cloud services, Exabeam Security Strategist Sam Humphries stated. In doing so, these organizations can use cloud-based security tools to enjoy the functionality of traditional on-premise security solutions, along with reduced costs and maintenance issues.
Organizations that want to implement cloud-based security can leverage Exabeam SaaS Cloud, which is available for hosting in 15 locations across the following regions:
SaaS Cloud helps organizations identify cyber threats and meet compliance and policy requirements, the company indicated. It also provides data lake, behavioral analytics, case management, security orchestration and incident response automation capabilities.
Read the original here:
Cloud-Based Security Tool Adoption: Latest Research Findings - MSSP Alert
Today Supermicro announced the industry's broadest portfolio of validated NGC-Ready systems optimized to accelerate AI and deep learning applications. Supermicro is highlighting many of these systems today at the Supermicro GPU Live Forum in conjunction with NVIDIA GTC Digital.
Supermicro NGC-Ready systems allow customers to train AI models using NVIDIA V100 Tensor Core GPUs and to perform inference using NVIDIA T4 Tensor Core GPUs. NGC hosts GPU-optimized software containers for deep learning, machine learning and HPC applications, pre-trained models, and SDKs that can run anywhere the Supermicro NGC-Ready systems are deployed, whether in data centers, the cloud, edge micro-datacenters, or distributed remote locations as environment-resilient and secured NGC-Ready for Edge servers powered by the NVIDIA EGX intelligent edge platform.
"With over 26 years of experience delivering state-of-the-art computing solutions, Supermicro systems are the most power-efficient, the highest performing, and the best value," said Charles Liang, CEO and president of Supermicro. "With support for fast networking and storage, as well as NVIDIA GPUs, our Supermicro NGC-Ready systems are the most scalable and reliable servers to support AI. Customers can run their AI infrastructure with the highest ROI."
Supermicro currently leads the industry with the broadest portfolio of NGC-Ready Servers optimized for data center and cloud deployments and is continuing to expand its portfolio. In addition, the company offers five validated NGC-Ready for Edge servers (EGX) optimized for edge inferencing applications.
"NVIDIA's container registry, NGC, enables superior performance for deep learning frameworks and pre-trained AI models with state-of-the-art accuracy," said Ian Buck, vice president and general manager of Accelerated Computing at NVIDIA. "The NGC-Ready systems from Supermicro can deliver users the performance they need to train larger models and provide low latency inference to make critical, real-time business decisions."
As the leader in AI system technology, Supermicro offers multi-GPU optimized thermal designs that provide the highest performance and reliability for AI, deep learning, and HPC applications. With 1U, 2U, 4U, and 10U rackmount NVIDIA GPU systems as well as GPU blade modules for our 8U SuperBlade enclosure, Supermicro offers the industry's best and widest selection of GPU systems.
View original post here:
Supermicro Accelerates AI and Deep Learning with NGC-Ready Servers - insideHPC
While many organizations already have telecommute policies and solutions in place, they are most commonly for either fully-remote workers or for employees who typically work in the office but need flexibility for unusual situations. The current environment most companies now face may put their remote workplace capabilities to the test.
This is most pronounced when considering security controls, cyber-hygiene, and reducing the risk exposure that a more remote workforce creates. Are organizations prepared for such a distributed workforce and the potential risks that come with it?
When it comes to IT administration teams, outsourced IT, and third-party vendors who might have privileged access to systems and infrastructure, they need secure, granular access to critical infrastructure resources regardless of location and without the hassles of a virtual private network (VPN). Ideally, how privileged users access these systems shouldn't be different, regardless of whether they are in an on-premise data center or accessing remotely.
Ditch the VPN
Last year it was reported that Citrix was breached through a password spraying attack that also sought to leverage VPN access. Ars Technica also reported last year that energy companies have specifically become targets of attacks that use password spraying and VPN hacking.
Unlike a VPN that generally gives users visibility to the entire network, organizations should only grant access to resources on a per-resource basis. This gives privileged internal IT admins access to only as much infrastructure as necessary, while limiting access by an outsourced team to only the servers and network hardware their role requires.
Privileged users should authenticate through Active Directory, LDAP, or whatever the authoritative identity store is; organizations can also grant granular, federated privileged access to resources for business partners and third-party vendors.
Guard against cyber-attacks by combining risk level with role-based access controls, user context and MFA to enable intelligent, automated and real-time decisions for granting privileged access to users who are remotely accessing servers, on password checkout, or when using a shared account to log into remote systems.
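The decision logic described above can be sketched as a simple policy function. This is a hypothetical illustration only; the role names, resource names, risk thresholds, and MFA step-up rule are all assumptions, not any specific PAM product's API:

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    role: str           # role resolved from the authoritative identity store
    resource: str       # the specific server or device being requested
    risk_score: float   # 0.0 (normal context) .. 1.0 (highly anomalous)
    mfa_passed: bool    # result of a multi-factor challenge, if one was issued

# Per-resource grants by role, rather than VPN-style network-wide visibility.
ROLE_GRANTS = {
    "internal-admin": {"db-server-01", "web-server-01", "core-switch-01"},
    "outsourced-ops": {"web-server-01"},
}

def grant_access(req: AccessRequest) -> bool:
    """Combine role-based grants, contextual risk, and MFA into one decision."""
    if req.resource not in ROLE_GRANTS.get(req.role, set()):
        return False                     # this role is never allowed this resource
    if req.risk_score >= 0.8:
        return False                     # block highly anomalous sessions outright
    if req.risk_score >= 0.4 and not req.mfa_passed:
        return False                     # medium risk: require step-up MFA
    return True

print(grant_access(AccessRequest("outsourced-ops", "web-server-01", 0.5, True)))   # True
print(grant_access(AccessRequest("outsourced-ops", "core-switch-01", 0.1, True)))  # False
```

The key property is that the outsourced role can never reach the core switch regardless of risk score or MFA, which is exactly the per-resource scoping the article contrasts with broad VPN access.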
Secure Privileged Access for On-Site and Remote Administration
Here are six ways any organization can create consistency in their privileged access management (PAM) approaches to secure remote access to data center and cloud-based infrastructures through a cloud-based service or on-premises deployment.
Nate Yocom is Chief Technology Officer at Centrify
Discover aspects of the Hybrid Cloud Market as its value reaches $171,926 million with a CAGR of 21.7% – WhaTech Technology and Markets News
Growing need for higher computational power and increasing demand across various organizations to enhance their IT service management capabilities without addition of servers would boost the growth of the global hybrid cloud market.
Increasing need for more computational power among organizations and growing awareness about the benefits of hybrid cloud drive the growth of the global hybrid cloud market. However, lurking concerns about data privacy and security hamper the market growth.
On the other hand, a rapid increase in adoption of hybrid cloud among small- and large-sized companies, and augmented demand among various organizations to boost their IT service management capabilities without the addition of servers, are expected to create lucrative opportunities for market players in the near future.
The key market players analyzed in the report include Microsoft Corporation, VMware, Inc., Hewlett Packard Enterprise, Dell EMC, Google LLC, Cisco Systems, Inc., Amazon Web Services, Inc., Rackspace Inc., IBM Corporation (International Business Machines), and Verizon Enterprise. These market players have adopted various strategies such as partnerships, collaboration, mergers & acquisitions, and new product launch to maintain their leading position in the industry.
North America contributed about half of the market share in 2017, owing to the increasing number of cloud-based service providers in the region. However, the Asia-Pacific region would grow at the fastest CAGR of 25.3% during the study period, owing to a rise in usage of cloud-based services and growth in deployment of data centers in developing countries such as India and China.
In addition, the hybrid cloud market in Europe is expected to grow gradually from 2018 to 2025.
The global hybrid cloud market was pegged at $36.14 billion in 2017 and is estimated to reach $171.93 billion by 2025, registering a CAGR of 21.7% from 2018 to 2025.
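The growth rate implied by those two endpoints can be checked directly with the standard CAGR formula, using only the figures quoted above:

```python
# Check the report's growth figures: $36.14B in 2017 growing to $171.93B by 2025.
start, end = 36.14, 171.93
years = 2025 - 2017

implied_cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {implied_cagr:.1%}")
```

This works out to roughly 21.5%, consistent with the reported 21.7% once rounding of the endpoint figures is taken into account.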
Among industrial verticals, the global hybrid cloud market report is analyzed across IT & Telecom, healthcare, BFSI, retail, government, media & entertainment, transportation & logistics, manufacturing, and other sectors. In 2017, the BFSI segment was the largest contributor, holding about one-third share of the market and would continue to retain its dominant position during the forecast period.
However, the healthcare segment is expected to manifest the fastest CAGR of 27.9% during the forecast period, as concerns regarding security, cost, and complexity have considerably increased among healthcare organizations.
In 2017, the small & medium enterprises segment was the largest contributor to the global hybrid cloud market in terms of revenue, holding more than two-thirds share. Moreover, the segment is expected to portray the fastest CAGR of 22.5% during the study period.
On the other hand, the large enterprises segment is estimated to manifest gradual growth through 2025.
Download Sample Report: www.alliedmarketresearch.com/request-sample/256
Based on service model, the global hybrid cloud market report is segmented into Software as a Service (SaaS), Infrastructure as a Service (IaaS), and Platform as a Service (PaaS). The SaaS segment held the largest market share, contributing about 61% of the total revenue, owing to the increasing adoption of SaaS among organizations that seek to avoid complex software and hardware management.
However, IaaS segment is expected to register the fastest CAGR of 26.1% through 2025, owing to various benefits such as enhanced performance, improved productivity, flexible computing capabilities, and increased delivery speed. In addition, the PaaS segment is expected to grow at a steady rate during the forecast period.
The services segment is estimated to register the fastest CAGR of 23.1% during the forecast period, owing to the rise in adoption of hybrid cloud services on account of their cost-effectiveness and ease of access. However, the solutions segment held the largest market share, contributing about two-thirds of the total revenue, owing to the increasing inclination of companies toward building multi-cloud architectures.
Dell EMC announced yesterday a bunch of reference architectures and pre-defined workstation, server and Isilon scale-out filer bundles for data scientists and researchers working in artificial intelligence.
In effect these are recipes that are quicker to prepare and easier to cook than starting from scratch using raw ingredients only.
Dell's purpose for the initiative is to reduce the time that customers take in setting up workstations, servers, filers and system software, and installing cloud-native working environments. That frees up more time for analysis and AI model runs.
A Dell spokesperson said: "AI initiatives generally start small in the proof-of-concept stage, but preparation and model training are often the most time-consuming portions of the job. Combining hardware and software, Dell's new AI-ready solutions will hasten researchers' ability to stand up AI applications efficiently."
David Frattura, a senior Dell technology strategist, details the eight AI reference bundles in this blog. The architectures encompass use cases such as machine learning, deep learning, artificial intelligence, high performance computing, data analytics, Splunk, data science and modelling.
The buzzword benefits are legion; deploy faster, achieve greater model accuracy, accelerate business value and more on your AI digital transformation journey.
How is Coronavirus Affecting the Daily Lives of Architects? Our Readers Answer
A glimpse of hope emerged from the endless loop of COVID-19 news this week when China announced the closure of its last temporary hospital in Wuhan, following the stabilization there of the pandemic that has now taken the world by storm. Western countries have been enforcing more restrictive measures aiming to stop the spread of the virus, including mandating shelter-in-place orders and forcing any business deemed non-essential to close. Due to the quarantine and isolation policies imposed by authorities around the globe, we asked you, our readers, how the coronavirus is affecting your daily lives as architects and designers. These answers allowed us to compose an overall picture of the atmosphere established by the pandemic and the way we are adapting to it.
Our poll surveyed our Spanish, English and Portuguese platforms, and more than 600 readers shared their experiences. Most of the participants (39%) were between 21 and 30 years old, followed by the group ranging between 31 and 40 years old (29%). Readers between ages 41 and 50 represent 13% of the survey participants, while 9% were between 50 and 60, and readers over 60 were 7% of those who shared their experiences since the outbreak.
We also discovered that approximately 65% of the participants stated that they had already worked from home before the quarantine in some capacity, whether just for a few days or as part of their regular routine. For the others, the new reality of adapting to a home office has brought many challenges, related to the ability to focus on work and finding new means of communication with colleagues.
For many of those surveyed, one of the main challenges of having to work from home is the inability to connect with colleagues for informal conversations. The idea of remaining isolated for an undefined period of time, compounded with a general sensation of anxiety, has brought a variety of disruptions to the usual workflow, demanding an additional layer of communication. Video and phone calls, social networks, and other technology platforms have helped maintain synergy among team members.
The ability to access files and digital drawings was another frequently mentioned topic in our survey, a need that has been met with cloud servers and private company networks. The readers of our three platforms pointed out the slowness and instability of internet services as a major downside to working from home, one that has resulted in designers spending more time working than usual.
One of the main challenges for designers who have made the transition to working at home is the difficulty in maintaining their typical work pace and finding the discipline to focus on daily tasks. Distractions caused by other family members who are also facing quarantine lockdown measures, pets, neighborhood noise, and domestic activities were cited as a few of the main obstacles to working at home. The lack of spaces exclusively dedicated to work has forced some of our readers to improvise small offices in their living rooms or bedrooms, only further adding to the inefficiencies of having to work from home.
The absence of a barrier between domestic life and work also seems to concern some of the readers, who have been working more hours than usual since the quarantine began.
Among the readers' worries was the uncertainty of facing a potential economic recession. Projects that have already begun design and construction phases are being closely monitored, and some architects are seeing that clients are hesitant to sign contracts and award more work. The fear of this potential crisis and its immeasurable impact directly weighs on the concerns of architects and other design professionals around the globe.
While a home office might be a temporary solution for many architects and designers, it only works to a certain extent. Throughout this quarantine, many countries have deemed construction services essential, which means that sites are still being built even as architects are required to stay home. The on-site meetings and coordination that traditionally happen through face-to-face interactions need to find a new medium in order for projects to continue to be completed successfully.
"Workingfrom home in a third world country is a privilegenot often shared by the laborers. These skilled workers are forced tochoose between going to work and being exposed to the virus, or to stay home, depriving themselves of basic needs since they live exclusively from their work. Some of these countries have governments that lack of humanitarian initiatives to help them financially during this crisis."
Jeric Rustia, Philippines Architect
Some readers also reported that local building departments involved in project approvals are halting not only the start of new construction, but also the approval of drawings that have been completed since the quarantine period began.
On the other hand, some readers said that despite the myriad of challenges and problems imposed by the isolation, there are a few advantages to remote working. No longer having to spend time commuting into the office, which in cities like São Paulo or New York can sometimes take up to two hours, designers have gained additional time that was previously not available for leisure activities. Some survey participants noted that spending more time with their families, cooking, reading, and watching TV are activities that they now have more time for.
The greatest opportunity, though, is to use this moment of crisis to rethink the modes of work that have become commonplace in most architecture offices around the world. Improving remote communication abilities, storing project files in the cloud, and implementing the use of BIM models are just a few ways that offices have come to adapt and modernize their methods of practice.
"It is mandatory we rethink completely our role as architecture professionals. Will we all be seen as necessary in this field? I think not. In Italy, we are 153,000 strong, and architectural design is still been seen as a luxury service. The Coronavirus will change the priorities of people for better. This is a great opportunity to define how architectural projects positively affect the lives of the people who will ultimately inhabit them."
- Francesca Perani, Italian architect.
With any global crisis of this scale, there are many fears and unknowns that our readers have expressed they face in their new ways of working. As seen from our perspective, this might be the starting point for a deeply-rooted transformation in the way we work, communicate, and practice architecture. Despite the fear of a possible recession, our readers, as designers from around the world, seem to seek strength and believe that together we will not only overcome this, but we will also discover a more human future in our profession.
We invite you to check out ArchDaily's coverage related to COVID-19, read our tips and articles on Productivity When Working from Home and learn about technical recommendations for Healthy Design in your future projects. Also, remember to review the latest advice and information on COVID-19 from the World Health Organization (WHO) website.
There is a constant stream of innovation happening in storage technology, and the hyperconverged infrastructure (HCI) market is leading the way.
According to this report, the HCI market is expected to be worth $17.1 billion by 2023. This projected growth could be put down to the myriad of advantages that HCI offers, including single-pane-of-glass management, reduced rack space and power consumption (which means greener data centres), and improved disaster recovery capabilities, to name a few.
Logically, the next step for HCI in accelerating its evolution has been its move to the edge of the network. As demand grows for supercharged uses of data such as artificial intelligence (AI), it's not surprising that enterprises are looking to edge computing and HCI to capture data from the very start of their projects. By combining edge computing with HCI, businesses can enable their AI tools to make more intelligent decisions.
With the days of pen and paper behind us, digitalisation has become a necessity across industries. As a result, we are creating a tonne of data, which of course needs to be stored somewhere. More often than not, this data is stored on-site at the edge of a network, not in your traditional data centre architecture.
One key benefit of edge computing is that it takes up a lot less hardware space than traditional storage. By deploying this infrastructure at the edge of the network, it can not only handle and compile the data, but also compress the large amount of data so that it can be easily transferred into the cloud or into a centralised data centre at another site. This method allows the data to be handled and reviewed closer to where it was created, rather than being transmitted further away. This is why edge computing is often used by various distributed enterprises like fast-food restaurants, supermarkets, and petrol stations, as well as industrial environments like mines and solar energy plants.
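The compress-before-transfer step described above can be sketched in a few lines. This is a minimal illustration with simulated data; the sensor schema and batch size are assumptions, and a real edge deployment would then ship the compressed batch upstream:

```python
import gzip
import json

# Simulated sensor readings collected at an edge site (illustrative data only).
readings = [{"sensor": i % 8, "temp_c": 21.5 + (i % 10) * 0.1} for i in range(10_000)]

raw = json.dumps(readings).encode("utf-8")
compressed = gzip.compress(raw)  # compress locally before shipping upstream

print(f"raw: {len(raw):,} bytes -> compressed: {len(compressed):,} bytes "
      f"({len(compressed) / len(raw):.1%} of original)")
```

Because repetitive telemetry compresses well, the payload crossing the network to the central data centre is a small fraction of what was generated locally, which is the bandwidth saving the article is pointing at.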
Data collated at the edge of the network is not always utilised to its full capacity. AI, for example, albeit still at the beginning of its journey, requires vast quantities of resources to develop and train its models. With edge computing, however, the data is able to move freely into the cloud. From there the data can be analysed and the AI models can be trained, before being pushed back to the edge. The best way for AI to be optimally used to generate these models is to make use of the data centre or the cloud.
Take the silicon chip company Cerebras, which dedicates its work to accelerating deep learning. It recently introduced its Wafer Scale Engine, which has been purpose-built for deep learning. This new chip is incredibly fast and 56 times bigger than the largest graphics processing unit. Its grand size, however, means its power consumption is so high that most edge deployments would not be able to handle it.
That said, there is still hope, as businesses are able to consolidate edge computing tasks using hyperconverged infrastructure, enabling them to build and make the most of data lakes. By placing the data within a data lake, companies are able to analyse it across all applications. Machine learning is also able to unveil new insights through the use of this shared data across diverse applications and devices.
HCI has made edge computing much easier to use by combining servers, storage, and networking all in one box. Not to mention, it doesn't face the configuration or networking issues it previously had. To add to this, the platform can provide integrated management for a large number of edge devices located in different parts of the country, with various forms of networks and interfaces, and thereby undoubtedly decrease operational expenses.
The surge in use cases for things like smart home devices, self-driving cars, and wearable technology means that AI is already prevalent in our everyday lives. According to Gartner, AI will continue to flourish with 80% of smart devices to contain on-device AI capabilities by 2022.
However, AI's data collection does come up against a problem, because most of the technology powering it is hugely reliant on the cloud, and therefore can only come to a conclusion based on the data it has access to in the cloud. This results in a delayed response, because the data first has to travel to the cloud before heading back to the device. In the case of technologies like self-driving cars, which require instantaneous decision-making, any lag could result in huge complications.
In this scenario, edge computing has one up on the cloud and the potential to take AI to the next level. Any data required for an AI application can reside in close proximity to the device, increasing the speed at which the data can be accessed and processed. AI devices that cannot always rely on a connection to the cloud, with its demands on bandwidth and network availability, benefit the most from this approach.
Another advantage of combining edge computing with HCI for AI is that it requires a smaller amount of storage space. The best operational feature of HCI is that the technology is able to function within a smaller hardware design. It will soon be commonplace to find companies launching highly available HCI edge compute clusters comparable in size to a cup of tea.
If AI is to truly succeed, it will need to depend on HCI and edge computing to work together side by side, allowing AI to function on its own merit, and with minimal support. AI will be able to make the most of its deep learning asset, as well as improve its ability to make better decisions.
AI has the ability to be accessible to the vast majority thanks to technological advances in the cloud. However, it is the marriage of HCI and edge computing that will provide AI with the means it needs to surge into new territories, providing more intelligent and efficient methods to find a solution for all companies.
Follow this link:
Enabling AI with edge computing and HCI - Techerati
Innovative edge cloud platform with state-of-the-art Stingray SmartNIC delivers unrivaled levels of server performance and security
DALLAS, March 24, 2020 (GLOBE NEWSWIRE) -- StackPath, the emerging leader in edge computing, today announced a joint collaboration with Broadcom Inc. (Broadcom) to provide advanced edge computing capabilities using Broadcom's Stingray SmartNIC to address the growing demand for cloud-based content-delivery services at the edge for streaming video, application performance optimization, and security.
The proliferation of connected endpoints is rapidly increasing the need for highly available, low-latency and flexible edge computing resources. StackPath provides the leading platform for secure edge compute infrastructure including virtual machines (VMs), containers, and serverless, as well as managed edge services such as content delivery network (CDN), web application firewall (WAF), managed DNS, service monitoring, and DDoS protection.
Unlike centralized, traditional datacenters, which are typically outside metropolitan geographies, StackPath's edge compute platform moves the functionality and processing power of the public cloud closer to where end users are. This gives developers and enterprises the building blocks, tools and proximity essential for cloud-centric workloads requiring ultra-low network latency and exceptional security. StackPath edge compute services can connect to end users up to 2.6x faster than competing cloud computing provided by public core cloud providers, crucial to meet the high bandwidth and low latency demands of next generation applications such as online video gaming, real-time video and cloud-based security.
"Compute located at the edge is essentially beachfront property. It's always going to be finite and in high demand," said Wen Temitim, StackPath's Chief Technology Officer. "StackPath and Broadcom have developed an implementation that moves I/O network processing from the server to the Broadcom Stingray SmartNIC, providing us with optimum performance and latency within the real estate and power constraints of edge locations. The Stingray SmartNIC's compact form factor, Truflow offload engine, advanced security co-processors and outstanding compute performance make it an ideal platform to deploy highly secure new services for our customers."
"The network edge provides significant opportunities for innovation," said Dan Harding, vice president of marketing for the Compute and Connectivity Division at Broadcom. "The challenge is how to deliver this innovation in a scalable, reliable, and cost-effective manner within a physical environment that substantially constrains the available compute resources. StackPath solved this by deploying a novel server architecture that uses our Stingray SmartNIC. This enables them to deliver their core services, such as web application firewall, CDN, and object storage, to their customers, and provides them with a substantial architectural advantage versus their competitors in providing cloud computing at the edge."
About StackPath
StackPath is the world's first platform providing compute and services at the cloud's edge. StackPath offers core computing resources, including virtual machines, containers, and object storage, as well as managed edge services, including serverless scripting, CDN, WAF, managed DNS, and service monitoring, with 45 edge locations spanning the world, all connected by a secure private network backbone. StackPath is trusted by customers ranging from Fortune 50 enterprises to one-person startups that want to develop, distribute, protect, and accelerate their cloud workloads in ways not possible with central cloud services. To learn more, visit stackpath.com and follow StackPath at www.fb.com/stackpathllc and www.twitter.com/stackpath.
Media Contact
Susie McDonald
VP, Corporate Communications
StackPath
susan.email@example.com
Edge computing pushes applications, data, and services away from centralized hubs toward the logical extremes of a network. It also allows analytics and data generation to occur at the source of the data. Edge computing covers a wide range of technologies, such as wireless sensor networks, distributed data storage, augmented reality, and more.
While it's easy to find explanations of what edge computing is and how it works, most organizations really want to know how it could affect their business. Internet of Things (IoT) devices are already hitting the market in huge numbers, so organizations need to understand how new developments in edge computing practices can work to their advantage.
Read more: 5 Innovative Applications of Quantum Computing
Here are some of the innovative applications of edge computing:
Edge computing architecture makes it possible for devices controlling utilities and other public services to react to changing conditions in real time. Combined with the rising number of autonomous vehicles and the ever-growing Internet of Things, smart cities can change how people live and use services in an urban environment.
Since all edge computing applications depend on devices collecting information and performing basic processing tasks locally, the city of the coming days will have the ability to respond to changing conditions as they happen.
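A minimal sketch of that local-processing idea: an edge node aggregates raw sensor readings and forwards only a compact summary plus any anomalous values upstream, rather than streaming everything to a central cloud. The class name and threshold below are illustrative assumptions, not part of any particular platform.

```python
from statistics import mean

class EdgeAggregator:
    """Illustrative edge-side preprocessor: buffer raw readings locally,
    ship only a summary (and outliers) across the network."""

    def __init__(self, anomaly_threshold: float):
        self.anomaly_threshold = anomaly_threshold
        self.buffer: list[float] = []

    def ingest(self, reading: float) -> None:
        # Collect a raw reading at the edge; nothing leaves the node yet.
        self.buffer.append(reading)

    def summarize(self) -> dict:
        # Produce the compact payload that would actually cross the network.
        anomalies = [r for r in self.buffer if r > self.anomaly_threshold]
        summary = {
            "count": len(self.buffer),
            "mean": mean(self.buffer) if self.buffer else None,
            "anomalies": anomalies,  # only unusual readings travel upstream
        }
        self.buffer.clear()
        return summary

agg = EdgeAggregator(anomaly_threshold=90.0)
for r in [71.2, 70.8, 95.5, 69.9]:
    agg.ingest(r)
payload = agg.summarize()
print(payload)
```

Four readings become one small dictionary, which is the bandwidth saving the article alludes to.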
By building data storage and computing into industrial equipment, manufacturers can gather data that enables better predictive maintenance and energy efficiency, allowing them to reduce costs and energy usage while maintaining better reliability and productive uptime.
Smart manufacturing systems informed by real-time data collection and analysis will also help organizations adjust production runs to better satisfy consumer demand.
With IoT devices capable of generating huge amounts of patient-generated health data (PGHD), healthcare providers can access essential data about their patients in real time instead of relying on slow and fragmented databases.
Medical devices themselves could also be made to gather and process information during diagnosis or treatment. While regulatory requirements for the sharing and disclosure of medical data would make any edge solution challenging to implement, other emerging safeguards, such as blockchain technology, could provide new ways to address such concerns.
Augmented reality (AR) overlays digital elements onto a user's view of the real world. Wearable AR devices like glasses and headsets are sometimes used to create this effect, but most users have experienced AR through their smartphone displays. Anyone who has played games like Pokemon GO or used a filter on Snapchat or Instagram has used AR.
The technology behind AR requires devices to process visual information and combine it with pre-rendered visual elements in real time. Without an edge computing design, this visual data would have to travel back to centralized cloud servers, where the digital elements would be added before the result was sent back to the device. This arrangement would inevitably introduce significant latency.
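A back-of-the-envelope sketch of that latency point, using illustrative distances and an assumed fiber propagation speed of roughly 200,000 km/s (about two-thirds the speed of light), not measured figures: propagation delay alone grows with distance, before any server processing time is added.

```python
def round_trip_ms(distance_km: float, processing_ms: float) -> float:
    """Estimated round-trip time: two propagation legs plus processing.
    Assumes light in fiber travels at roughly 200,000 km/s."""
    propagation_one_way_ms = distance_km / 200_000 * 1000
    return 2 * propagation_one_way_ms + processing_ms

edge = round_trip_ms(distance_km=50, processing_ms=10)    # nearby edge node
core = round_trip_ms(distance_km=2000, processing_ms=10)  # distant cloud region
print(f"edge: {edge:.1f} ms, core: {core:.1f} ms")
```

Even with identical processing time, the distant round trip is several times longer, which is what makes per-frame AR compositing in a remote datacenter impractical.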
By incorporating edge computing architecture into their systems, organizations can improve performance significantly and reduce latency. Instead of an AI virtual assistant sending processing and data requests to a centralized server, it can distribute the load among edge data centers while performing some processing locally.
The proliferation of localized data centers for both cloud and edge computing makes it easier than ever for organizations to extend their network reach and position themselves to make the most of their data assets.
Read more: Cloud Computing Versus Edge Computing
See the original article: 5 Innovative Applications of Edge Computing - AiThority