Category Archives: Cloud Servers

Supermicro Accelerates AI and Deep Learning with NGC-Ready Servers – insideHPC

Today Supermicro announced the industry's broadest portfolio of validated NGC-Ready systems optimized to accelerate AI and deep learning applications. Supermicro is highlighting many of these systems today at the Supermicro GPU Live Forum in conjunction with NVIDIA GTC Digital.

Supermicro NGC-Ready systems allow customers to train AI models using NVIDIA V100 Tensor Core GPUs and to perform inference using NVIDIA T4 Tensor Core GPUs. NGC hosts GPU-optimized software containers for deep learning, machine learning, and HPC applications, along with pre-trained models and SDKs, that can run anywhere Supermicro NGC-Ready systems are deployed, whether in data centers, the cloud, edge micro-data centers, or distributed remote locations as environment-resilient, secured NVIDIA-Ready for Edge servers powered by the NVIDIA EGX intelligent edge platform.

"With over 26 years of experience delivering state-of-the-art computing solutions, Supermicro systems are the most power-efficient, the highest performing, and the best value," said Charles Liang, CEO and president of Supermicro. "With support for fast networking and storage, as well as NVIDIA GPUs, our Supermicro NGC-Ready systems are the most scalable and reliable servers to support AI. Customers can run their AI infrastructure with the highest ROI."

Supermicro currently leads the industry with the broadest portfolio of NGC-Ready Servers optimized for data center and cloud deployments and is continuing to expand its portfolio. In addition, the company offers five validated NGC-Ready for Edge servers (EGX) optimized for edge inferencing applications.

"NVIDIA's container registry, NGC, enables superior performance for deep learning frameworks and pre-trained AI models with state-of-the-art accuracy," said Ian Buck, vice president and general manager of Accelerated Computing at NVIDIA. "The NGC-Ready systems from Supermicro can deliver users the performance they need to train larger models and provide low-latency inference to make critical, real-time business decisions."

As the leader in AI system technology, Supermicro offers multi-GPU optimized thermal designs that provide the highest performance and reliability for AI, deep learning, and HPC applications. With 1U, 2U, 4U, and 10U rackmount NVIDIA GPU systems, as well as GPU blade modules for our 8U SuperBlade enclosure, Supermicro offers the industry's best and widest selection of GPU systems.



COVID-19 puts corporate WFH capabilities to the test – SC Magazine

While many organizations already have telecommute policies and solutions in place, they are most commonly for either fully remote workers or for employees who typically work in the office but need flexibility for unusual situations. The current environment most companies now face may put their remote workplace capabilities to the test.

This is most pronounced when considering security controls, cyber hygiene, and reducing the risk exposure that a more remote workforce creates. Are organizations prepared for such a distributed workforce and the potential risks that come with it?

IT administration teams, outsourced IT, and third-party vendors who might have privileged access to systems and infrastructure need secure, granular access to critical infrastructure resources regardless of location, and without the hassles of a virtual private network (VPN). Ideally, how privileged users access these systems shouldn't differ based on whether they are in an on-premises data center or accessing them remotely.

Ditch the VPN

Last year it was reported that Citrix was breached through a password-spraying attack that also sought to leverage VPN access. Ars Technica also reported last year that energy companies have specifically become targets of attacks that use password spraying and VPN hacking.

Unlike a VPN, which generally gives users visibility into the entire network, organizations should grant access only on a per-resource basis. This gives privileged internal IT admins access to only as much infrastructure as necessary, while limiting an outsourced team's access to only the servers and network hardware their role requires.

Privileged users should authenticate through Active Directory, LDAP, or whatever the authoritative identity store is; alternatively, grant granular, federated privileged access to resources for business partners and third-party vendors.

Guard against cyber attacks by combining risk level with role-based access controls, user context, and MFA to enable intelligent, automated, real-time decisions for granting privileged access to users who are remotely accessing servers, performing password checkout, or using a shared account to log into remote systems.
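As a rough sketch of how these signals might be combined (not any vendor's actual implementation; the role names, resource labels, and risk thresholds here are all hypothetical):

```python
# Hypothetical sketch of a risk-aware privileged-access decision.
# Role names, resource labels, thresholds, and the policy itself are
# illustrative assumptions, not any vendor's actual logic.

def grant_privileged_access(role: str, resource: str, risk_score: float,
                            mfa_verified: bool) -> bool:
    """Combine role-based access control, user risk context, and MFA."""
    # Role-based check: each role may touch only the resource types it needs.
    allowed = {
        "it_admin": {"server", "network", "storage"},
        "outsourced_ops": {"server", "network"},
        "vendor": {"network"},
    }
    if resource not in allowed.get(role, set()):
        return False
    # High-risk sessions are denied outright.
    if risk_score >= 0.8:
        return False
    # Medium-risk sessions require a verified MFA challenge.
    if risk_score >= 0.4 and not mfa_verified:
        return False
    return True
```

The key design point is that the resource check comes first: even a low-risk, MFA-verified session never sees infrastructure outside its role, which is the per-resource model described above.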

Secure Privileged Access for On-Site and Remote Administration

Here are six ways any organization can create consistency in its privileged access management (PAM) approach to secure remote access to data center and cloud-based infrastructure, whether through a cloud-based service or an on-premises deployment.

Nate Yocom is Chief Technology Officer at Centrify


Discover aspects of the Hybrid Cloud Market as its value achieves $171,926 million with CAGR 21.7% – WhaTech Technology and Markets News

Growing need for higher computational power and increasing demand across various organizations to enhance their IT service management capabilities without the addition of servers would boost the growth of the global hybrid cloud market.

Increasing need for more computational power among organizations and growing awareness about the benefits of hybrid cloud drive the growth of the global hybrid cloud market. However, lurking concerns about data privacy and security hamper the market growth.

On the other hand, rapid increase in adoption of hybrid cloud among small- and large-sized companies and augmented demand among various organizations to boost their IT service management capabilities without the addition of servers are expected to create lucrative opportunities for market players in the near future.

The key market players analyzed in the report include Microsoft Corporation, VMware, Inc., Hewlett Packard Enterprise, Dell EMC, Google LLC, Cisco Systems, Inc., Amazon Web Services, Inc., Rackspace Inc., IBM Corporation (International Business Machines), and Verizon Enterprise. These market players have adopted various strategies such as partnerships, collaboration, mergers & acquisitions, and new product launch to maintain their leading position in the industry.

North America contributed about half of the market share in 2017, owing to the increasing number of cloud-based service providers in the region. However, the Asia-Pacific region would grow at the fastest CAGR of 25.3% during the study period, owing to the rise in usage of cloud-based services and growth in deployment of data centers in developing countries such as India and China.

In addition, the hybrid cloud market in Europe is expected to grow gradually from 2018 to 2025.

The global hybrid cloud market was pegged at $36.14 billion in 2017 and is estimated to reach $171.93 billion by 2025, registering a CAGR of 21.7% from 2018 to 2025.
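As a quick sanity check, the compound annual growth rate implied by growing from $36.14 billion (2017) to $171.93 billion (2025) over eight years lands close to the reported 21.7%:

```python
# Verify the reported CAGR: end = start * (1 + cagr) ** years
start, end, years = 36.14, 171.93, 8  # values in billions of dollars

cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # about 21.5%, consistent with the cited 21.7%
```

The small gap from 21.7% is rounding in the reported dollar figures.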

By industry vertical, the global hybrid cloud market is analyzed across IT & telecom, healthcare, BFSI, retail, government, media & entertainment, transportation & logistics, manufacturing, and other sectors. In 2017, the BFSI segment was the largest contributor, holding about one-third of the market share, and would continue to retain its dominant position during the forecast period.

However, the healthcare segment is expected to manifest the fastest CAGR of 27.9% during the forecast period, as concerns regarding security, cost, and complexity have considerably increased among healthcare organizations.

In 2017, the small & medium enterprises segment was the largest contributor to the global hybrid cloud market in terms of revenue, holding more than two-thirds share. Moreover, the segment is expected to portray the fastest CAGR of 22.5% during the study period.

On the other hand, the large enterprises segment is estimated to manifest gradual growth through 2025.

Download Sample Report: www.alliedmarketresearch.com/request-sample/256

Based on service model, the global hybrid cloud market is segmented into Software as a Service (SaaS), Infrastructure as a Service (IaaS), and Platform as a Service (PaaS). The SaaS segment held the largest market share, contributing about 61% of the total revenue, owing to the increasing adoption of SaaS among organizations seeking to offload complex software and hardware management.

However, the IaaS segment is expected to register the fastest CAGR of 26.1% through 2025, owing to various benefits such as enhanced performance, improved productivity, flexible computing capabilities, and increased delivery speed. In addition, the PaaS segment is expected to grow at a steady rate during the forecast period.

The services segment is estimated to register the fastest CAGR of 23.1% during the forecast period, owing to the rise in adoption of hybrid cloud services on account of their cost-effectiveness and ease of access. However, the solutions segment held the largest market share, contributing about two-thirds of the total revenue, owing to the increasing inclination of companies toward building multi-cloud architectures.

For Inquiry: www.alliedmarketresearch.com/-enquiry/256



Dell debuts oven-ready AI platforms to ease researchers’ setup pain – Blocks and Files

Dell EMC yesterday announced a bunch of reference architectures and pre-defined workstation, server, and Isilon scale-out filer bundles for data scientists and researchers working in artificial intelligence.

In effect these are recipes that are quicker to prepare and easier to cook than starting from scratch using raw ingredients only.

Dell's purpose for the initiative is to reduce the time customers spend setting up workstations, servers, filers, and system software, and installing cloud-native working environments. That frees up more time for analysis and AI model runs.

A Dell spokesperson said: "AI initiatives generally start small in the proof-of-concept stage, but preparation and model training are often the most time-consuming portions of the job. Combining hardware and software, Dell's new AI-ready solutions will hasten researchers' ability to stand up AI applications efficiently."

David Frattura, a senior Dell technology strategist, details the eight AI reference bundles in this blog. The architectures encompass use cases such as machine learning, deep learning, artificial intelligence, high performance computing, data analytics, Splunk, data science, and modelling.

The buzzword benefits are legion: deploy faster, achieve greater model accuracy, accelerate business value, and more on your AI digital transformation journey.


How is Coronavirus Affecting the Daily Lives of Architects? Our Readers Answer – ArchDaily


A glimpse of hope emerged from the endless loop of COVID-19 news this week when China announced the closure of its last temporary hospital in Wuhan due to the stabilization of the pandemic that has now taken the world by storm. Western countries have been enforcing more restrictive measures aimed at stopping the spread of the virus, including mandating shelter-in-place orders and forcing any business deemed non-essential to close. Given the quarantine and isolation policies imposed by authorities around the globe, we asked you, our readers, how the coronavirus is affecting your daily lives as architects and designers. These answers allowed us to compose an overall picture of the atmosphere created by the pandemic and the way we are adapting to it.

Our poll surveyed our Spanish, English, and Portuguese platforms, and more than 600 readers shared their experiences. Most of the participants (39%) were between 21 and 30 years old, followed by the group ranging between 31 and 40 years old (29%). Readers between ages 41 and 50 represented 13% of the survey participants, 9% were between 50 and 60, and readers over 60 made up 7% of those who shared their experiences since the outbreak.

We also discovered that approximately 65% of the participants stated that they had already worked from home before the quarantine in some capacity, whether just for a few days or as part of their regular routine. For the others, the new reality of adapting to a home office has brought many challenges, related to the ability to focus on work and finding new means of communication with colleagues.

For many of those surveyed, one of the main challenges of having to work from home is the inability to connect with colleagues for informal conversations. The idea of remaining isolated for an undefined period of time, compounded by a general sense of anxiety, has brought a variety of disruptions to the usual workflow, demanding an additional layer of communication. Video and phone calls, social networks, and other technology platforms have helped maintain synergy among team members.

Access to files and digital drawings was another frequently mentioned topic in our survey, a need that has been met with cloud servers and private company networks. The readers of our three platforms pointed out the slowness and instability of internet services as a major downside to working from home, one that has resulted in designers spending more time working than usual.

One of the main challenges for designers who have made the transition to working at home is the difficulty of maintaining their typical work pace and finding the discipline to focus on daily tasks. Distractions caused by other family members who are also facing quarantine lockdown measures, pets, neighboring noises, and domestic activities were cited as a few of the main obstacles to working at home. The lack of spaces exclusively dedicated to work has forced some of our readers to improvise small offices in their living rooms or bedrooms, further adding to the inefficiencies of having to work from home.

The absence of a barrier between domestic life and work also seems to concern some of the readers, who have been working more hours than usual since the quarantine began.

Among the readers' worries was the uncertainty of facing a potential economic recession. Projects that have already begun design and construction phases are being closely monitored, and some architects are seeing clients hesitate to sign contracts and award more work. The fear of this potential crisis and its immeasurable impact directly shapes the concerns of architects and other design professionals around the globe.

While a home office might be a temporary solution for many architects and designers, it only works to a certain extent. Throughout this quarantine, many countries have deemed construction services essential, which means that sites are still being built even as architects are required to stay home. The on-site meetings and coordination that traditionally happen through face-to-face interaction need to find a new medium in order for projects to continue to be completed successfully.

"Working from home in a third-world country is a privilege not often shared by laborers. These skilled workers are forced to choose between going to work and being exposed to the virus, or staying home and depriving themselves of basic needs, since they live exclusively from their work. Some of these countries have governments that lack humanitarian initiatives to help them financially during this crisis."

- Jeric Rustia, architect, Philippines

Some readers also noted that local building departments involved in project approvals have halted not only the start of new construction, but also the approval of drawings completed since the quarantine period began.

On the other hand, some readers said that despite the myriad challenges and problems imposed by isolation, there are a few advantages to remote working. No longer having to spend time commuting to the office, which in cities like São Paulo or New York can sometimes take up to two hours, designers have gained additional time that was previously unavailable for leisure activities. Some survey participants noted that spending more time with their families, cooking, reading, and watching TV are activities they now have more time for.

The greatest opportunity, though, is to use this moment of crisis to rethink the modes of work that have become commonplace in most architecture offices around the world. Improving remote communication abilities, storing project files in the cloud, and implementing the use of BIM models are just a few ways that offices have come to adapt and modernize their methods of practice.

"It is mandatory that we completely rethink our role as architecture professionals. Will we all be seen as necessary in this field? I think not. In Italy, we are 153,000 strong, and architectural design is still seen as a luxury service. The coronavirus will change people's priorities for the better. This is a great opportunity to define how architectural projects positively affect the lives of the people who will ultimately inhabit them."

- Francesca Perani, Italian architect.

With any global crisis of this scale, there are many fears and unknowns that our readers face in their new ways of working. From our perspective, this might be the starting point for a deeply rooted transformation in the way we work, communicate, and practice architecture. Despite the fear of a possible recession, our readers, designers from around the world, seem to find strength in the belief that together we will not only overcome this, but also discover a more human future for our profession.

We invite you to check out ArchDaily's coverage related to COVID-19, read our tips and articles on Productivity When Working from Home and learn about technical recommendations for Healthy Design in your future projects. Also, remember to review the latest advice and information on COVID-19 from the World Health Organization (WHO) website.


Enabling AI with edge computing and HCI – Techerati

There is a constant stream of innovation happening in storage technology, and the hyperconverged infrastructure (HCI) market is leading the way

According to this report, the HCI market is expected to be worth $17.1 billion by 2023. This projected growth can be put down to the myriad advantages that HCI offers, including single-pane-of-glass management, reduced rack space and power consumption (which means greener data centres), and improved disaster recovery capabilities, to list a few.

Logically, the next step in accelerating HCI's evolution has been its move to the edge of the network. As demand grows for supercharged uses of data, such as artificial intelligence (AI), it's not surprising that enterprises are looking to edge computing and HCI to capture data from the very start of their projects. By combining edge computing with HCI, businesses can enable their AI tools to make more intelligent decisions.

With the days of pen and paper behind us, digitalisation has become a necessity across industries. As a result, we are creating a tonne of data, which of course needs to be stored somewhere. More often than not, this data is stored on-site at the edge of a network, not in a traditional data centre architecture.

One key benefit of edge computing is that it takes up far less space than traditional storage hardware. Deployed at the edge of the network, the infrastructure can not only handle and compile the data, but also compress large amounts of it so that it can be easily transferred to the cloud or to a centralised data centre at another site. This allows the data to be handled and reviewed closer to where it was created, rather than being transmitted further away. This is why edge computing is often used by distributed enterprises like fast-food restaurants, supermarkets, and petrol stations, as well as in industrial settings like mines and solar energy plants.
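The compress-at-the-edge pattern described above can be sketched in a few lines; the sensor payload here is an illustrative stand-in, not real telemetry:

```python
# Sketch of the edge pattern: aggregate readings locally, compress them,
# then ship the much smaller payload to a central site or the cloud.
import gzip
import json

# Illustrative sensor readings collected at a hypothetical edge site.
readings = [{"sensor": i % 8, "temp_c": 20 + (i % 5)} for i in range(10_000)]

raw = json.dumps(readings).encode("utf-8")
compressed = gzip.compress(raw)

print(f"raw: {len(raw)} bytes, compressed: {len(compressed)} bytes")
# Repetitive telemetry compresses very well, so far less data crosses the WAN.
```

The point of compressing before transfer is exactly the one made above: the WAN link, not local processing, is usually the bottleneck between an edge site and the central data centre.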

Data collated at the edge of the network is not always utilised to its full capacity. AI, for example, albeit still at the beginning of its journey, requires vast quantities of resources to develop and train its models. With edge computing, however, the data is able to move freely into the cloud, where it can be analysed and the AI models can be trained before being extended back to the edge. The best way to optimally generate these models is to make use of the data centre or the cloud.

Take the silicon chip company Cerebras, which dedicates its work to accelerating deep learning. It recently introduced its Wafer Scale Engine, purpose-built for deep learning. The new chip is incredibly fast and 56 times bigger than the largest graphics processing unit. Its sheer size, however, means its power consumption is so high that most edge deployments would not be able to handle it.

That said, there is still hope, as businesses can consolidate edge computing tasks using hyperconverged infrastructure, enabling them to build and make the most of data lakes. By placing data within a data lake, companies can analyse it across all their applications. Machine learning can also unveil new insights from the shared data across diverse applications and devices.

When it comes to edge computing, HCI has made deployment much easier by combining servers, storage, and networking in one box. Not to mention, it no longer faces the configuration and networking issues it once had. The platform can also provide integrated management for a large number of edge devices located in different parts of the country, with various forms of networks and interfaces, and thereby markedly decrease operational expenses.

The surge in use cases for things like smart home devices, self-driving cars, and wearable technology means that AI is already prevalent in our everyday lives. According to Gartner, AI will continue to flourish, with 80% of smart devices expected to contain on-device AI capabilities by 2022.

However, AI's data collection does come up against a problem: most of the technology powering it is hugely reliant on the cloud, and can therefore only reach conclusions based on the data it can access in the cloud. This results in a delayed response, because the data first has to travel to the cloud before heading back to the device. For technologies like self-driving cars, which require instantaneous decision-making, any lag could result in huge complications.

In this scenario, edge computing has one up on the cloud and the potential to take AI to the next level. Any data required by an AI application can reside in close proximity to the device, increasing the speed with which the data can be accessed and processed. AI devices that depend on data connectivity benefit the most from this arrangement, because they won't always be able to connect to the cloud, which requires bandwidth and network availability.
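A back-of-the-envelope latency budget illustrates the point; the round-trip figures below are illustrative assumptions, not measurements of any real deployment:

```python
# Illustrative latency budget for one inference request (milliseconds).
# All numbers are assumptions made for the sake of the comparison.
cloud_network_rtt = 80.0  # device -> distant cloud region -> device
edge_network_rtt = 5.0    # device -> nearby edge site -> device
inference_time = 10.0     # model execution time, assumed equal in both cases

cloud_total = cloud_network_rtt + inference_time
edge_total = edge_network_rtt + inference_time

print(f"cloud: {cloud_total} ms, edge: {edge_total} ms")
# For a self-driving car at highway speed, tens of milliseconds of extra
# network delay translate into meters travelled before a decision is made.
```

Under these assumptions the network, not the model, dominates the cloud path, which is why moving the data and compute to the edge pays off for latency-critical AI.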

Another advantage of combining edge computing with HCI for AI is that it requires less storage space. The best operational feature of HCI is that the technology can function within a smaller hardware design. It will soon be commonplace to find companies launching highly available HCI edge compute clusters comparable in size to a cup of tea.

If AI is to truly succeed, it will need HCI and edge computing to work together side by side, allowing AI to function on its own merit with minimal support. AI will then be able to make the most of its deep learning capabilities, as well as improve its ability to make better decisions.

AI is accessible to the vast majority thanks to technological advances in the cloud. However, it is the marriage of HCI and edge computing that will provide AI with the means to surge into new territories, providing more intelligent and efficient solutions for all companies.


StackPath and Broadcom Collaborate to Boost Cloud Compute Services at the Edge – Yahoo Finance

Innovative edge cloud platform with state-of-the-art Stingray SmartNIC delivers unrivaled levels of server performance and security

DALLAS, March 24, 2020 (GLOBE NEWSWIRE) -- StackPath, the emerging leader in edge computing, today announced a joint collaboration with Broadcom Inc. ("Broadcom") to provide advanced edge computing capabilities using Broadcom's Stingray SmartNIC, addressing the growing demand for cloud-based content-delivery services at the edge for streaming video, application performance optimization, and security.

The proliferation of connected endpoints is rapidly increasing the need for highly available, low-latency, and flexible edge computing resources. StackPath provides the leading platform for secure edge compute infrastructure, including virtual machines (VMs), containers, and serverless, as well as managed edge services such as content delivery network (CDN), web application firewall (WAF), managed DNS, service monitoring, and DDoS protection.

Unlike centralized, traditional data centers, which are typically outside metropolitan geographies, StackPath's edge compute platform moves the functionality and processing power of the public cloud closer to where end users are. This gives developers and enterprises the building blocks, tools, and proximity essential for cloud-centric workloads requiring ultra-low network latency and exceptional security. StackPath edge compute services can connect to end users up to 2.6x faster than competing cloud computing from public core cloud providers, which is crucial to meeting the high-bandwidth, low-latency demands of next-generation applications such as online video gaming, real-time video, and cloud-based security.

"Compute located at the edge is essentially beachfront property. It's always going to be finite and in high demand," said Wen Temitim, StackPath's chief technology officer. "StackPath and Broadcom have developed an implementation that moves I/O network processing from the server to the Broadcom Stingray SmartNIC, providing us with optimum performance and latency within the real estate and power constraints of edge locations. The Stingray SmartNIC's compact form factor, TruFlow offload engine, advanced security co-processors, and outstanding compute performance make it an ideal platform to deploy highly secure new services for our customers."


"The network edge provides significant opportunities for innovation," said Dan Harding, vice president of marketing for the Compute and Connectivity Division at Broadcom. "The challenge is how to deliver this innovation in a scalable, reliable, and cost-effective manner within a physical environment that substantially constrains the available compute resources. StackPath solved this by deploying a novel server architecture that uses our Stingray SmartNIC. This enables them to deliver their core services, such as web application firewall, CDN, and object storage, to their customers, and provides them with a substantial architectural advantage versus their competitors in providing cloud computing at the edge."

About StackPath

StackPath is the world's first platform providing compute and services at the cloud's edge. StackPath offers core computing resources, including virtual machines, containers, and object storage, as well as managed edge services, including serverless scripting, CDN, WAF, managed DNS, and service monitoring, with 45 edge locations spanning the world, all connected by a secure private network backbone. StackPath is trusted by customers ranging from Fortune 50 enterprises to one-person startups that want to develop, distribute, protect, and accelerate their cloud workloads in ways not possible with central cloud services. To learn more, visit stackpath.com and follow StackPath at www.fb.com/stackpathllc and www.twitter.com/stackpath

Media Contact
Susie McDonald
VP, Corporate Communications
StackPath
susan.mcdonald@stackpath.com
503-806-3841


5 Innovative Applications of Edge Computing – AiThority

Edge Computing can be used to push applications, data, and services away from centralized hubs to the logical extremes of a network. It also enables analytics and data generation to occur at the source of the data. Edge Computing covers a wide scope of technologies, such as wireless sensor networks, distributed data storage, augmented reality, and much more.

While it's easy to find explanations of what Edge Computing is and how it functions, most organizations really want to know how it could influence their business. Internet of Things (IoT) devices are now hitting the market in tremendous numbers, so organizations need to see how new developments in Edge Computing practices can work to their advantage.

Read more: 5 Innovative Applications of Quantum Computing

Here are some of the innovative applications of Edge Computing:

Edge Computing architecture makes it possible for devices controlling utilities and other public services to react to changing conditions in real time. Combined with the rising number of autonomous vehicles and the ever-growing Internet of Things, smart cities can change how individuals live and use services in an urban environment.

Since all Edge Computing applications depend upon devices that collect information and perform fundamental processing tasks, the city of the coming days will be able to respond to changing conditions as they happen.

By building data storage and computing into industrial equipment, manufacturers can amass data that enables better predictive maintenance and energy efficiency, allowing them to reduce costs and energy usage while maintaining better reliability and productive uptime.

Smart manufacturing systems informed by continuous data collection and analysis will also help organizations adjust production runs to better satisfy consumer demand.

With IoT devices capable of producing tremendous amounts of Patient-Generated Health Data (PGHD), healthcare providers can access essential data about their patients continuously, instead of relying on slow and fragmented databases.

Medical devices themselves could likewise be made to gather and process information throughout diagnosis or treatment. While regulatory requirements for the sharing and disclosure of medical data would make any edge solution challenging to execute, other emerging security measures, such as blockchain technology, could provide new approaches to address such concerns.

Wearable AR devices like glasses and headsets are sometimes used to create this effect, but most users have experienced AR through their smartphone displays. Anyone who has played games like Pokemon GO or used a filter on Snapchat or Instagram has used AR.

The technology behind AR requires devices to process visual information and overlay pre-rendered visual elements in real time. Without an edge computing design, this visual information would have to travel back to centralized cloud servers, where the digital elements would be added before the result is sent back to the device. This arrangement would inevitably introduce significant latency.
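To see why the round trip matters, here is a minimal latency-budget sketch. The network and processing figures are assumed for illustration, not measurements; the point is that offloading doubles the one-way network delay, which alone can blow a per-frame budget.

```python
# Rough AR latency-budget sketch with assumed (illustrative) numbers.
# A smooth 50 fps experience leaves roughly a 20 ms budget per frame.

def round_trip_ms(network_ms: float, processing_ms: float) -> float:
    """Total time to offload one frame: uplink + processing + downlink."""
    return 2 * network_ms + processing_ms

# Assumed one-way network latencies (not measured values):
cloud_trip = round_trip_ms(network_ms=40.0, processing_ms=10.0)  # distant cloud region
edge_trip = round_trip_ms(network_ms=5.0, processing_ms=10.0)    # nearby edge node

print(f"cloud: {cloud_trip:.0f} ms, edge: {edge_trip:.0f} ms")
# At these assumed figures, only the edge path fits a 20 ms frame budget.
```

Under these assumptions the cloud round trip takes 90 ms against 20 ms at the edge, which is the latency gap the paragraph above describes.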

By incorporating Edge Computing architecture into their systems, organizations can improve performance significantly and reduce latency. Instead of an AI virtual assistant sending training and data requests to a centralized server, it can distribute the load among edge data centers while performing some processing locally.

The proliferation of localized data servers for both Cloud and Edge Computing makes it easier than ever for organizations to extend their network reach and position themselves to make the most of their data assets.

Read more: Cloud Computing Versus Edge Computing

Go here to see the original:
5 Innovative Applications of Edge Computing - AiThority

NVIDIA, Azure And AWS Offer Free Resources To Fight Against COVID-19 – Analytics India Magazine

As COVID-19 continues to spread, we are acutely aware of the impact this is having on families, businesses, and communities. This is a global health emergency that will only be resolved by governments, businesses, academia, and individuals working together to better understand this virus and ultimately find a cure.

The AI community in particular has been at the forefront of the fight against COVID-19, offering solutions built on state-of-the-art algorithms. However, for these machine learning models to work, they need tremendous amounts of computational power. To mitigate the operational costs of finding a solution, hardware and cloud giants have stepped up to offer free services.

After bringing down the prediction time for diagnosis using AI, Alibaba now wants the whole world to come up with its own unique solutions to fight COVID-19. It is offering its Elastic High Performance Computing (E-HPC) technology to researchers worldwide to accelerate drug and vaccine discovery.

In its blog, Alibaba explains how E-HPC has supported around 20 research groups carrying out COVID-19 research, helping them find solutions for AI-Driven Drug Design (AIDDD) for COVID-19.

NVIDIA has announced that it will give free access to Parabricks for 90 days to promote research on COVID-19 drug discovery. Analysing genomic data is computationally intensive, so time and cost are significant barriers to using genomics data for precision medicine. The NVIDIA Parabricks Genomics Analysis Toolkit breaks down those barriers by providing GPU-accelerated genomic analysis.

By accelerating existing CPU-only pipelines on GPUs, the NVIDIA Parabricks Germline Pipeline delivers more than 40 times faster analysis for an individual sample. Processing FASTQ input files, the system generates sorted, duplicate-marked BAM/CRAM files and variant call files (VCF or gVCF).
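As a rough illustration of what a 40x speedup means in wall-clock terms, the arithmetic below uses an assumed 30-hour CPU-only baseline for a single whole-genome sample; that baseline is illustrative, not a published benchmark.

```python
# Illustrative speedup arithmetic for the quoted ">40x" figure.
# The 30-hour CPU baseline is an assumed value, not a benchmark result.

def gpu_runtime_minutes(cpu_hours: float, speedup: float) -> float:
    """Convert a CPU-only runtime in hours to an accelerated runtime in minutes."""
    return cpu_hours * 60 / speedup

accelerated = gpu_runtime_minutes(cpu_hours=30.0, speedup=40.0)
print(f"{accelerated:.0f} minutes")  # a 30-hour run would shrink to 45 minutes
```

At these assumed figures, a day-plus analysis collapses to under an hour, which is why GPU acceleration matters for time-critical outbreak research.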

NVIDIA Parabricks pipelines have been tested on Dell, HPE, IBM, and NVIDIA servers at Amazon Web Services, Google Cloud, and Microsoft Azure.

NVIDIA advises users to start with a small server for a small number of samples and then use full-scale GPU servers to meet their needs.

Amazon Web Services (AWS), one of the most preferred cloud partners across the world, has announced that it is committing $20 million to accelerate research with regards to COVID-19 diagnostics.

This AWS Diagnostic Development Initiative will support customers who are working to bring better diagnostics solutions to market faster.

"Funding will be provided through a combination of AWS in-kind credits and technical support to assist the customers' research teams in harnessing the full potential of the cloud to tackle this challenge," stated AWS in its blog.

The program will be open to accredited research institutions and private entities that are using AWS to support research-oriented workloads for the development of point-of-care diagnostics (testing that can be done at home or at a clinic with same-day results).

Rescale Inc, in cooperation with Google Cloud and Microsoft Azure, announced a new program on Monday that immediately offers high-performance computing (HPC) resources at no cost to teams working to develop test kits and vaccines for COVID-19.

Using Rescale's platform combined with cloud computing resources from Google Cloud Platform and Microsoft Azure, researchers can rapidly run simulations on the cloud without setup time or IT teams.

"Rescale's platform can provide access to high-performance computing resources that can help accelerate key processes and enable stronger collaboration," said Manvinder Singh, Director of Partnerships at Google Cloud.

Folding@home is a distributed computing project that lets volunteers run simulations of protein dynamics on their personal computers.

Last month, Folding@home announced that it is joining researchers from around the world in an open science effort to advance the understanding of the structures of potential drug targets for 2019-nCoV, which could aid the design of new therapies.

With all these top companies offering their best services for free, we can safely assume that research on COVID-19 diagnostics will accelerate.


More here:
NVIDIA, Azure And AWS Offer Free Resources To Fight Against COVID-19 - Analytics India Magazine

Cloud Server: Market 2020 What Factors will drive the Market in Upcoming Years Dell, HP, IBM, Oracle, Cisco, Fujitsu, Hitachi, NEC – News Times

Cloud Server Market (By Major Eminent Players, By Types, By Applications, and Leading Regions): Segments Outlook, Business Assessment, Competition Scenario, Trends and Forecast for Upcoming Years. The study in the Cloud Server report is based on a research methodology that provides an analytical inspection of the global market across the segments into which the industry is divided, along with a summary and estimate of market size under various outlook scenarios. The report also profiles the key players of the Cloud Server industry, covering company overviews, product portfolios, and revenue figures over the forecast period.

Major Players: Dell, HP, IBM, Oracle, Cisco, Fujitsu, Hitachi, NEC

Product Type Segmentation: Public Cloud, Private Cloud, Hybrid Cloud, Community Cloud

Industry Segmentation: SME, Large Enterprise

Which prime data figures are included in the Cloud Server market report?

What are the crucial aspects incorporated in the Cloud Server market report?

Who all can be benefitted out of this Cloud Server market report?

Major Players: The report provides company profiling for a decent number of leading players of the global Cloud Server market. It brings to light their current and future market growth taking into consideration their price, gross margin, revenue, production, areas served, production sites, and other factors.

Industry Overview: The first section of the research study touches on an overview of the global Cloud Server market, market status and outlook, and product scope. Additionally, it provides highlights of key segments of the global Cloud Server market, i.e. regional, type, and application segments.

Cloud Server Market Dynamics: The report shares important information on influence factors, market drivers, challenges, opportunities, and market trends as part of market dynamics.

Regional Market Analysis: This could be divided into two different sections: one for regional production analysis and the other for regional consumption analysis. Here, the analysts share gross margin, price, revenue, production, CAGR, and other factors that indicate the growth of all regional markets studied in the report, covering North America, Europe, Asia-Pacific, South America, and the Middle East and Africa.

Global Cloud Server Market Forecast: Readers are provided with production and revenue forecasts for the global Cloud Server market, production and consumption forecasts for regional markets, production, revenue, and price forecasts for the global Cloud Server market by type, and consumption forecast for the global Cloud Server market by application.

Cloud Server Market Competition: In this section, the report provides information on competitive situation and trends including merger and acquisition and expansion, market shares of top three or five players, and market concentration rate. Readers could also be provided with production, revenue, and average price shares by manufacturers.

Contact Us:
Web: http://www.qurateresearch.com
E-mail: [emailprotected]
Ph: US +13393375221, IN +919881074592

Read the rest here:
Cloud Server: Market 2020 What Factors will drive the Market in Upcoming Years Dell, HP, IBM, Oracle, Cisco, Fujitsu, Hitachi, NEC - News Times