Category Archives: Cloud Servers
IBM Cloud Satellite and Lumen Technologies Adapt Rapidly at the Edge – CDOTrends
Workplace safety can come down to milliseconds. Whether enforcing rules for hard hats or masks, a system responsible for protecting a site needs to issue a noticeable alert almost simultaneously when it detects non-compliance.
Lumen Technologies and IBM developed a solution that meets the very low latency requirements of such use cases by ingesting and analyzing all data where, and as soon as, it is generated. The solution uses video cameras to send images in real time to a video management server, on which IBM Video Analytics software quickly processes each image, triggering an alert if needed. Were the system to operate more slowly, a person at risk could already be many steps into a restricted area before being stopped.
Managing video analytics at the edge
Lumen Technologies and IBM built a safety system with a set of three video cameras and two servers. The cameras are linked to one of the servers, the video management server, which runs the analytics software. This software receives and processes video images, identifies violations of movement rules, and triggers alerts. In production, the number of video management servers increases in proportion to the number of cameras, at whatever ratio preserves low-latency performance.
Scaling up while rapidly iterating
On a separate server on-site, IBM Edge Application Manager runs in containers on Red Hat OpenShift for IBM Cloud; its role is to install the most recent version of the analytics software on all video management servers there. As the number of video management servers in a deployment increases, so would the number of containerized Edge Application Manager instances and the OpenShift worker nodes needed to support them.
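The article does not show IBM's deployment tooling, but the scaling relationship it describes maps naturally onto the standard Kubernetes API that Red Hat OpenShift exposes. The following is a minimal sketch, assuming a hypothetical deployment name, namespace, and servers-per-instance ratio; it is not IBM's actual automation.

```python
# Minimal sketch (not IBM's tooling): scale a containerized deployment in step
# with the number of video management servers, using the standard Kubernetes
# API that Red Hat OpenShift exposes. The deployment name, namespace, and
# servers-per-instance ratio are illustrative only.
from kubernetes import client, config

SERVERS_PER_INSTANCE = 4  # hypothetical ratio chosen to preserve low latency

def scale_edge_manager(video_server_count: int) -> None:
    config.load_kube_config()  # or config.load_incluster_config() inside a pod
    apps = client.AppsV1Api()

    # One containerized instance per N video management servers, rounded up.
    replicas = -(-video_server_count // SERVERS_PER_INSTANCE)

    apps.patch_namespaced_deployment_scale(
        name="edge-application-manager",   # illustrative deployment name
        namespace="edge",                  # illustrative namespace
        body={"spec": {"replicas": replicas}},
    )
    print(f"Scaled to {replicas} replicas for {video_server_count} servers")

if __name__ == "__main__":
    scale_edge_manager(video_server_count=10)
```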
Consistent deployments across locations
But what is a deployment of Red Hat OpenShift for IBM Cloud doing in an edge site?
As you have already guessed, the short answer is IBM Cloud Satellite.
In setting up the solution, with the video cameras and video management servers in place, a customer's operations team first uses IBM Cloud to select hosts at the edge site to serve as the Satellite location. Once the location is set up, the team uses the same IBM Cloud console to provision Red Hat OpenShift for IBM Cloud in that new location and deploy the Edge Application Manager in containers to pods on virtual machines serving as worker nodes.
And this is the key to scalability for this safety solution. Beyond putting video management servers in place and linking video cameras to them, rolling out the solution at new sites is easily accomplished by setting up Satellite locations, provisioning Red Hat OpenShift for IBM Cloud, and deploying the appropriate number of Edge Application Manager instances to worker nodes.
The consistency of software across all locations is ensured through the single view in IBM Cloud, from which cloud services, containerized applications, security, and network policies are monitored and can be managed across public and private environments.
Adapting to emerging needs
Since the video analytics software can be trained to identify any visual pattern and enforce different movement rules related to what is observed, the safety solution is adaptable. For example, with a thermal camera for COVID-19 monitoring, retrained video analytics can allow employers to instantaneously detect employees' temperatures. For that same use case, other camera analytics can calculate how many people are using a space and determine when the next deep cleaning is needed.
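The article does not describe how IBM Video Analytics implements people counting, so the sketch below uses OpenCV's stock HOG person detector purely to illustrate the general idea of counting occupants in a camera feed and raising an alert when a limit is exceeded; the stream URL and occupancy limit are hypothetical.

```python
# Illustrative only: not IBM Video Analytics internals. Uses OpenCV's stock
# HOG person detector to show the general idea of counting people in a
# monitored space and flagging over-occupancy.
import cv2

OCCUPANCY_LIMIT = 10  # hypothetical limit for the monitored space

def monitor(stream_url: str) -> None:
    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

    cap = cv2.VideoCapture(stream_url)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # Detect people in the frame; each returned box is one detection.
        boxes, _weights = hog.detectMultiScale(frame, winStride=(8, 8))
        if len(boxes) > OCCUPANCY_LIMIT:
            # A real deployment would raise this alert within milliseconds.
            print(f"ALERT: {len(boxes)} people detected, limit is {OCCUPANCY_LIMIT}")
    cap.release()

if __name__ == "__main__":
    monitor("rtsp://camera.example/stream")  # hypothetical camera endpoint
```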
Continuous security and observability
A single, consolidated view of Satellite locations shows deployments and services running in every location. Teams can manage the network traffic and configure the applications within all locations and provision and use services as if they are working in the public cloud. That also means client teams can even deploy the same application stack to any location from the IBM Cloud catalog.
Satellite Link establishes secure tunnels and enables control of application and service traffic to and from each location. Satellite Link works with a customer's existing network configuration and security posture. Teams in all Satellite locations use the same identity and access management (IAM). With support for a customer's own keys and certificates, consistent data encryption enables workloads to span locations securely. Endpoints across the secure tunnels are uniquely and automatically named, yielding fast DNS, predictable operations, and easy compliance audits.
Consistent and portable operations at any scale
Lumen Technologies and IBM built a solution that can perform real-time, intelligent data analysis at thousands of edge sites across high-speed fiber connections to the many Lumen Edge platform locations where IBM Cloud Satellite and the Edge Application Manager run. Through a single view in IBM Cloud Satellite, operating the solution is consistent across all hubs and locations. That repeatability is a baseline from which teams can gain velocity in rolling out deployments, quickly scaling up edge locations with new functionality, and remotely automating many operational chores.
The original article by Briana Frank, director of product management at IBM, is here.
The views and opinions expressed in this article are those of the author and do not necessarily reflect those of CDOTrends.
Image credit: iStockphoto/metamorworks
The global video analytics market size is expected to grow at a Compound Annual Growth Rate (CAGR) of 20.4% during the forecast period, to reach USD…
Key factors that are expected to drive the growth of the market are the increasing investments and focus of governing institutions on public safety, the need to utilize and examine unstructured video surveillance data in real time, a significant drop in crime rates due to surveillance cameras, the growing need among enterprises to leverage BI and actionable insights for advanced operations, the limitations of manual video analysis, government initiatives in adopting emerging technologies to enhance public safety infrastructure, the reduced cost of video surveillance equipment and long-term RoI, and demand for enhanced video surveillance.
New York, Sept. 15, 2021 (GLOBE NEWSWIRE) -- Reportlinker.com announces the release of the report "Video Analytics Market with COVID-19 Impact, by Component, Application, Deployment Model, Type, Vertical And Region - Global Forecast to 2026" - https://www.reportlinker.com/p04838914/?utm_source=GNW
The COVID-19 impact on the global video analytics market
The recent economic slowdown with the impact of COVID-19 emphasizes the need for alternate business systems. It has become important for businesses to embrace cloud computing and migrate to cloud video analytics solutions.
This will help organizations to have a stable business condition in the short term while targeting continued growth and expansion in the long run. The recent COVID-19 crisis has shifted the focus on safety and security of human lives.
Another driver is the emergence of intuitive technologies, mainly AI-based surveillance systems built on deep learning and computer vision. Organizations are utilizing video analytics solutions across end-user industries due to a variety of benefits, including dynamically attaining situational awareness, proactively driving real-time alerting, and scheduling BI dashboards.
Edge-based segment to grow at a higher CAGR during the forecast period
Based on type, the market is segmented into two categories: edge-based and server-based video analytics. The edge-based segment is expected to grow at a higher CAGR during the forecast period.
Edge-based video analytics is evolving with the emergence of new, powerful in-built chipsets in cameras that offer higher computational capabilities at the edge. Such systems inform operators of a wide range of real-time video or audio events requiring attention and provide more sophisticated analytics, such as queue management and heat maps, that offer new opportunities for business and traffic intelligence.
Advancements in deep learning and its integration with edge systems are expected to drive adoption in the coming years. Deep learning takes ML to another level, based on neural network principles that imitate the complexity of the human brain.
Earlier, this functionality was mainly available in server-side processes, which required videos to be decompressed and processed centrally. Edge-based devices need external inputs to learn from before proving useful as tools to recognize known objects and behaviors.
On-premises segment to account for a higher market share during the forecast period
The video analytics market is segmented by deployment type into on-premises and cloud segments. The on-premises segment is expected to account for a higher share of the video analytics market during the forecast period.
This approach is mostly adopted for applications that involve the processing of sensitive and confidential data volumes. These data volumes include internal and external surveillance footage and video feeds of business operations that contain confidential information and crucial insights.
In on-premises deployments, companies have to install the required infrastructure, such as operating systems, storage devices, servers, cameras, and routers, as well as the video analytics software. Several large organizations are deploying on-premises video analytics due to privacy and security concerns related to confidential data.
Transportation and logistics vertical to grow at a higher CAGR during the forecast period
Transportation and logistics is one of the fastest-growing verticals during the forecast period. Video surveillance has become an important part of this vertical. The various benefits of video analytics for transportation and logistics are the elimination of overcrowding, behavior analysis, enhanced safety measures, incident recording, and detection of blind spots. Video analytics can contribute to the enhancement and betterment of this vertical for commuters while providing improved safety benefits. The various features offered by video analytics, such as facial recognition, object tracking, unidentified object detection, cargo and train carriage recognition, and intelligent traffic monitoring, can help transportation and logistics companies prevent disasters and detect emerging threats that may lead to infrastructure destruction or vehicle crashes resulting in the loss of life.
North America to account for the highest market share during the forecast period
The video analytics market is segmented into five regions: North America, Europe, APAC, MEA, and Latin America. The video analytics report provides insights into these regional markets in terms of market size, growth rates, future trends, market drivers, and COVID-19 impact.
North America is expected to hold the highest market share in the overall video analytics market during the forecast period. North America tops the world in terms of both the presence of security vendors and the occurrence of security breaches.
Therefore, the global video analytics market is dominated by North America, which is the most advanced region with regard to technological adoption and infrastructure. The growing concerns about the protection of critical infrastructure and national borders have increased government intervention in recent years.
Specific budget allocations, such as the budget for The Department of Homeland Security, and mandated security policies are expected to make North America the most lucrative market for vendors from various verticals. The North American market covers the analysis of the US and Canada. The protection of critical infrastructure is the most serious economic and national security challenge for the governments of both countries. Many governments and law enforcement agencies in the US and Canada are taking initiatives for strengthening their security infrastructure. The US and the Canadian governments are continuously working with law enforcement agencies to prevent violent extremism and counter terrorism-related incidents.
The break-up of the profiles of primary participants in the global video analytics market is as follows: By Company: Tier 1 (20%), Tier 2 (25%), and Tier 3 (55%); By Designation: C-Level Executives (40%), Director Level (33%), and Others (27%); By Region: North America (32%), Europe (38%), APAC (18%), and RoW (12%). The video analytics market comprises major providers, such as Avigilon (Canada), Axis Communications (Sweden), Cisco (US), Honeywell (US), Agent Vi (US), Allgovision (India), Aventura Systems (US), Genetec (Canada), Intellivision (US), Intuvision (US), Puretech Systems (US), Hikvision (China), Dahua (China), Iomniscient (Australia), Huawei (China), Gorilla Technology (Taiwan), Intelligent Security Systems (US), Verint (US), Viseum (UK), Briefcam (US), Bosch Security (Germany), i2V (India), Digital Barrier (UK), Senstar (Canada), Qognify (US), Identiv (US), Ipsotek (US), Delopt (India), Drishti Technologies (US), Natix (Germany), DeepNorth (US), Cronj (India), Microtraffic (Canada), Actuate (US), Calipsa (UK), Athena Security (US), Corsight AI (Israel), Arcules (US), Cawamo (Israel), Kogniz (US), and Durac (US). The study includes an in-depth competitive analysis of key players in the video analytics market with their company profiles, recent developments, COVID-19 developments, and key market strategies.
Research Coverage
The report segments the global video analytics market by component into two categories: software and services. By deployment model, it is segmented into on-premises and cloud.
By application, the market is segmented into seven categories: incident detection, intrusion management, people/crowd counting, traffic monitoring, automatic number plate recognition, facial recognition, and others. By type, the market is segmented into two categories: server-based and edge-based.
By vertical, the video analytics market has been classified into banking and financial services, city surveillance, critical infrastructure, education, hospitality and entertainment, manufacturing, defense and border security, retail, traffic management, transport and logistics, and others. By region, the market has been segmented into North America, Europe, APAC, MEA, and Latin America.
Key benefits of the report
The report would help market leaders and new entrants with information on the closest approximations of the revenue numbers for the overall video analytics market and its subsegments. This report would help stakeholders understand the competitive landscape and gain insights to better position their businesses and plan suitable go-to-market strategies.
The report would help stakeholders understand the pulse of the market and provide them with information on the key market drivers, restraints, challenges, opportunities, and COVID-19 impact.
Read the full report: https://www.reportlinker.com/p04838914/?utm_source=GNW
About Reportlinker
ReportLinker is an award-winning market research solution. Reportlinker finds and organizes the latest industry data so you get all the market research you need - instantly, in one place.
Nasuni file sync accelerates ransomware recovery – Blocks and Files
Nasuni has invented Global File Acceleration (GFA), a way to sync shared files up to 5x faster, and enabled faster ransomware recovery as a by-product, with a demo restoring a million files in under 40 seconds.
The company provides shared access from edge appliances to files stored in the public cloud, and file changes created by one user on an edge appliance are synced to other users. Nasuni's UniFS filesystem stores files, their data and metadata, in public cloud object storage. Continuous Versioning Technology (CVT) sends changed file data fragments (snapshots) to immutable object storage in the cloud. Lost, deleted or corrupted files can be recovered to any point in time up to the last fragment stored, using CVT metadata and data. GFA speeds this process.
Nasuni's Chief Product Officer, Russ Kennedy, offered a statement: "No other storage or backup vendor can provide Rapid Ransomware Recovery for file servers the way Nasuni can. And now Nasuni's high-performance Global File Acceleration service sets us even further apart. Enterprises can solve their file protection, primary file storage and multi-site file sharing challenges all in one solution."
We are told by Nasuni that GFA dynamically performs near-real-time, intelligent analysis of file usage to orchestrate and prioritise data propagation of new files across Nasuni Edge Appliances in all locations. As a result, global users sharing files gain the very fastest access to new data that they need most.
How GFA does this is not revealed. Think of GFA as a way of souping up data movement speed when changed file data, created using CVT, is synced out to edge appliances. Stephen Held, VP and CIO at Nasuni customer LEO A DALY, said: "It was already simple to manage and collaborate on our global file shares across 27 locations, and file synchronisation has always been much faster than traditional methods. But with this latest release, the performance is dramatically faster."
The same basic process is used in a ransomware recovery. Say a user at an edge appliance creates a new file. That has to be sucked up to the cloud store by Nasuni and then blasted out to other users at the various edge appliances. This involves sending out the updated file:folder metadata and the data in the file when it is accessed.
Nasuni's UniFS software detects that a file has been created and takes care of this. It similarly responds to file deletions. We could imagine a ransomware attack as being the equivalent of a mass file deletion: the files are unobtainable. So UniFS restores them to a point in time up to a minute before the attack. It's a kind of mass sync exercise, in a way, and GFA speeds it up.
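Nasuni does not publish UniFS or CVT internals, but the point-in-time recovery idea itself is straightforward: for each file, pick the newest immutable version stored at or before the chosen restore point and ignore everything written afterwards. A minimal sketch, with entirely hypothetical data structures:

```python
# Conceptual sketch only; Nasuni's implementation is not public. Illustrates
# point-in-time recovery from immutable, versioned snapshots: choose, per file,
# the newest version recorded at or before the restore point, ignoring anything
# the ransomware wrote afterwards. All names and structures are hypothetical.
from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)
class Version:
    path: str             # file path within the share
    timestamp: datetime   # when this immutable fragment was stored
    object_key: str       # where the fragment lives in cloud object storage

def restore_point_in_time(versions: list[Version], restore_point: datetime) -> dict[str, Version]:
    """Map each file path to its newest version at or before restore_point."""
    restored: dict[str, Version] = {}
    for v in versions:
        if v.timestamp > restore_point:
            continue  # written after the restore point (e.g. by the ransomware)
        current = restored.get(v.path)
        if current is None or v.timestamp > current.timestamp:
            restored[v.path] = v
    return restored

if __name__ == "__main__":
    history = [
        Version("reports/q1.docx", datetime(2021, 9, 1, 9, 0), "frag-001"),
        Version("reports/q1.docx", datetime(2021, 9, 3, 14, 0), "frag-002"),
        Version("reports/q1.docx", datetime(2021, 9, 3, 15, 30), "frag-003"),  # encrypted by attacker
    ]
    plan = restore_point_in_time(history, datetime(2021, 9, 3, 15, 0))
    for path, version in plan.items():
        print(f"restore {path} from {version.object_key} ({version.timestamp})")
```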
Up until now, UniFS has been able to restore millions of files in minutes. LEO A DALY was hit by ransomware, and Held said: "Nasuni was a true lifesaver when we got hit by a ransomware attack. Once we contained the attack, we were able to restore files quickly. Our operations hardly missed a beat."
With GFA it is faster still, think seconds, and a demo shows 1,001,233 files being restored in 38.8 seconds.
That actually meant the file metadata was restored, as a look at the Size and Size on disk numbers in the image above shows. The demo then showed restored files being accessed once the recovered file:folder metadata was back in place. It was all smooth and simple.
Nasuni says a survey of its customers who had been hit by ransomware attacks showed none of them paid a ransom. More than one third of them stopped the attack, identified infected files and restored valid versions of them in under an hour. The others presumably took longer. GFA will help more of them break the 60-minute mark when handling a ransomware attack in the future.
WhatsApp new feature will allow you to securely back up your chats in iCloud – Thewistle
WhatsApp has introduced end-to-end encryption for chat backups on iCloud. This means that even if you back up your messages and media to Apple's cloud servers, they will be protected by end-to-end encryption, so users can rest assured their private conversations are safe no matter where they go or who gets hold of them.
Today, WhatsApp messages stored in iCloud are still not protected by end-to-end encryption. WhatsApp is aiming to introduce a new feature that will give users the option of password-protecting their chats before uploading them to Apple's cloud platform.
This way, you can ensure that no one but you has access, and thus avoid your information being compromised in case the backup gets hacked or otherwise accessed by an outsider.
Encrypted chat backups are set to become available soon and will be rolling out several weeks from now. The encryption key will make all of your backups secure in remote iCloud servers by ensuring they cannot be read without a password.
The 64-digit encryption key or password will be optional and saved in the user's account so they can recover their data if needed.
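WhatsApp has not published the implementation details of this feature, but the underlying principle of a password-protected, end-to-end encrypted backup can be sketched generically: derive a key from the user's password, encrypt the backup locally, and upload only ciphertext, so the cloud provider never holds readable chat data. A minimal illustration using the third-party cryptography package (not WhatsApp's actual scheme):

```python
# Generic sketch, not WhatsApp's implementation: derive a key from a
# user-chosen password, encrypt the backup locally, and upload only the
# ciphertext, so the cloud provider never sees readable chat data.
# Requires the third-party "cryptography" package.
import base64
import hashlib
import os
from cryptography.fernet import Fernet

def key_from_password(password: str, salt: bytes) -> bytes:
    # PBKDF2 stretches the password; Fernet expects a base64-encoded 32-byte key.
    raw = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return base64.urlsafe_b64encode(raw)

def encrypt_backup(backup: bytes, password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)
    token = Fernet(key_from_password(password, salt)).encrypt(backup)
    return salt, token  # both can safely be stored on remote servers

def decrypt_backup(salt: bytes, token: bytes, password: str) -> bytes:
    return Fernet(key_from_password(password, salt)).decrypt(token)

if __name__ == "__main__":
    salt, ciphertext = encrypt_backup(b'{"chats": [...]}', "correct horse battery staple")
    assert decrypt_backup(salt, ciphertext, "correct horse battery staple") == b'{"chats": [...]}'
```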
The encrypted chat backups feature is coming to Android (for WhatsApp users backing up their chats) and iOS in the next few weeks.
The time Animoto almost brought AWS to its knees – TechCrunch
Today, Amazon Web Services is a mainstay in the cloud infrastructure services market, a $60 billion juggernaut of a business. But in 2008, it was still new, working to keep its head above water and handle growing demand for its cloud servers. In fact, 15 years ago last week, the company launched Amazon EC2 in beta. From that point forward, AWS offered startups unlimited compute power, a primary selling point at the time.
EC2 was one of the first real attempts to sell elastic computing at scale, that is, server resources that would scale up as you needed them and go away when you didn't. As Jeff Bezos said in an early sales presentation to startups back in 2008: "You want to be prepared for lightning to strike, [...] because if you're not, that will really generate a big regret. If lightning strikes, and you weren't ready for it, that's kind of hard to live with. At the same time you don't want to prepare your physical infrastructure, to kind of hubris levels either, in case that lightning doesn't strike. So, [AWS] kind of helps with that tough situation."
An early test of that value proposition occurred when one of its startup customers, Animoto, scaled from 25,000 to 250,000 users in a four-day period in 2008, shortly after launching the company's Facebook app at South by Southwest.
At the time, Animoto was an app aimed at consumers that allowed users to upload photos and turn them into a video with a backing music track. While that product may sound tame today, it was state of the art back in those days, and it used up a fair amount of computing resources to build each video. It was an early representation of not only Web 2.0 user-generated content, but also the marriage of mobile computing with the cloud, something we take for granted today.
For Animoto, launched in 2006, choosing AWS was a risky proposition, but the company found trying to run its own infrastructure was even more of a gamble because of the dynamic nature of the demand for its service. Spinning up its own servers would have involved huge capital expenditures. Animoto initially went that route before turning its attention to AWS because it was building prior to attracting initial funding, Brad Jefferson, co-founder and CEO at the company, explained.
"We started building our own servers, thinking that we had to prove out the concept with something. And as we started to do that and got more traction from a proof-of-concept perspective and started to let certain people use the product, we took a step back, and were like, well, it's easy to prepare for failure, but what we need is to prepare for success," Jefferson told me.
Going with AWS may seem like an easy decision knowing what we know today, but in 2007 the company was really putting its fate in the hands of a mostly unproven concept.
"It's pretty interesting just to see how far AWS has gone and EC2 has come, but back then it really was a gamble. I mean, we were talking to an e-commerce company [about running our infrastructure]. And they're trying to convince us that they're going to have these servers and it's going to be fully dynamic, and so it was pretty [risky]. Now in hindsight, it seems obvious, but it was a risk for a company like us to bet on them back then," Jefferson told me.
Animoto had to not only trust that AWS could do what it claimed, but also had to spend six months rearchitecting its software to run on Amazon's cloud. But as Jefferson crunched the numbers, the choice made sense. At the time, Animoto's business model was free for a 30-second video, $5 for a longer clip, or $30 for a year. As he tried to model the level of resources his company would need to make that model work, it got really difficult, so he and his co-founders decided to bet on AWS and hope it worked when and if a surge of usage arrived.
That test came the following year at South by Southwest, when the company launched a Facebook app, which led to a surge in demand, in turn pushing the limits of AWS's capabilities at the time. A couple of weeks after the startup launched its new app, interest exploded and Amazon was left scrambling to find the appropriate resources to keep Animoto up and running.
Dave Brown, who today is Amazon's VP of EC2 and was an engineer on the team back in 2008, said that "every [Animoto] video would initiate, utilize and terminate a separate EC2 instance. For the prior month they had been using between 50 and 100 instances [per day]. On Tuesday their usage peaked at around 400, Wednesday it was 900, and then 3,400 instances as of Friday morning." Animoto was able to keep up with the surge of demand, and AWS was able to provide the necessary resources to do so. Its usage eventually peaked at 5,000 instances before it settled back down, proving in the process that elastic computing could actually work.
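Animoto's rendering code is not public, but the "one instance per video" pattern Brown describes can be sketched with the modern boto3 SDK; the AMI ID, instance type, and render step below are placeholders, and the 2008-era service would have used the original EC2 APIs rather than boto3.

```python
# Illustration only of the "one EC2 instance per video" pattern described
# above; not Animoto's actual code. The AMI ID, instance type, and render
# step are placeholders. Requires boto3 and AWS credentials.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

def render_video_on_fresh_instance(job_id: str) -> None:
    # Initiate: launch one instance dedicated to this render job.
    resp = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",   # placeholder render-worker image
        InstanceType="m1.small",           # era-appropriate placeholder size
        MinCount=1,
        MaxCount=1,
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "job", "Value": job_id}],
        }],
    )
    instance_id = resp["Instances"][0]["InstanceId"]

    try:
        # Utilize: the worker image would pick up and render the job here.
        print(f"Instance {instance_id} rendering job {job_id}")
    finally:
        # Terminate: release the capacity as soon as the video is done.
        ec2.terminate_instances(InstanceIds=[instance_id])
```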
At that point though, Jefferson said, his company wasn't merely trusting EC2's marketing. It was on the phone regularly with AWS executives making sure their service wouldn't collapse under this increasing demand. "And the biggest thing was, can you get us more servers, we need more servers. To their credit, I don't know how they did it, if they took away processing power from their own website or others, but they were able to get us where we needed to be. And then we were able to get through that spike and then sort of things naturally calmed down," he said.
The story of keeping Animoto online became a main selling point for the company, and Amazon was actually the first company to invest in the startup besides friends and family. It raised a total of $30 million along the way, with its last funding coming in 2011. Today, the company is more of a B2B operation, helping marketing departments easily create videos.
While Jefferson didn't discuss specifics concerning costs, he pointed out that the price of trying to maintain servers that would sit dormant much of the time was not a tenable approach for his company. Cloud computing turned out to be the perfect model, and Jefferson says that his company is still an AWS customer to this day.
While the goal of cloud computing has always been to provide as much computing as you need on demand whenever you need it, this particular set of circumstances put that notion to the test in a big way.
Today the idea of having trouble generating 3,400 instances seems quaint, especially when you consider that Amazon processes 60 million instances every day now, but back then it was a huge challenge and helped show startups that the idea of elastic computing was more than theory.
EXCLUSIVE Amazon considers more proactive approach to determining what belongs on its cloud service – Reuters
Attendees at Amazon.com Inc annual cloud computing conference walk past the Amazon Web Services logo in Las Vegas, Nevada, U.S., November 30, 2017. REUTERS/Salvador Rodriguez/File Photo
Sept 2 (Reuters) - Amazon.com Inc (AMZN.O) plans to take a more proactive approach to determine what types of content violate its cloud service policies, such as rules against promoting violence, and enforce its removal, according to two sources, a move likely to renew debate about how much power tech companies should have to restrict free speech.
Over the coming months, Amazon will expand the Trust & Safety team at the Amazon Web Services (AWS) division and hire a small group of people to develop expertise and work with outside researchers to monitor for future threats, one of the sources familiar with the matter said.
It could turn Amazon, the leading cloud service provider worldwide with 40% market share according to research firm Gartner, into one of the world's most powerful arbiters of content allowed on the internet, experts say.
AWS does not plan to sift through the vast amounts of content that companies host on the cloud, but will aim to get ahead of future threats, such as emerging extremist groups whose content could make it onto the AWS cloud, the source added.
A day after publication of this story, an AWS spokesperson told Reuters that the news agency's reporting "is wrong," and added "AWS Trust & Safety has no plans to change its policies or processes, and the team has always existed."
A Reuters spokesperson said the news agency stands by its reporting.
Amazon made headlines in the Washington Post on Aug. 27 for shutting down a website hosted on AWS that featured propaganda from Islamic State that celebrated the suicide bombing that killed an estimated 170 Afghans and 13 U.S. troops in Kabul last Thursday. They did so after the news organization contacted Amazon, according to the Post.
The discussions of a more proactive approach to content come after Amazon kicked social media app Parler off its cloud service shortly after the Jan. 6 Capitol riot for permitting content promoting violence. read more
Amazon did not immediately comment ahead of the publication of the story on Thursday. After publication, an AWS spokesperson said later that day, "AWS Trust & Safety works to protect AWS customers, partners, and internet users from bad actors attempting to use our services for abusive or illegal purposes. When AWS Trust & Safety is made aware of abusive or illegal behavior on AWS services, they act quickly to investigate and engage with customers to take appropriate actions."
The spokesperson added that "AWS Trust & Safety does not pre-review content hosted by our customers. As AWS continues to expand, we expect this team to continue to grow."
Activists and human rights groups are increasingly holding not just websites and apps accountable for harmful content, but also the underlying tech infrastructure that enables those sites to operate, while political conservatives decry what they consider the curtailing of free speech.
AWS already prohibits its services from being used in a variety of ways, such as illegal or fraudulent activity, to incite or threaten violence or promote child sexual exploitation and abuse, according to its acceptable use policy.
Amazon investigates requests sent to the Trust & Safety team to verify their accuracy before contacting customers to ask them to remove content violating its policies or put a system in place to moderate the content. If Amazon cannot reach an acceptable agreement with the customer, it may take down the website.
Amazon aims to develop an approach toward content issues that it and other cloud providers are more frequently confronting, such as determining when misinformation on a company's website reaches a scale that requires AWS action, the source said.
A job posting on Amazon's jobs website advertising a position as "Global Head of Policy at AWS Trust & Safety," which was last seen by Reuters ahead of publication of this story on Thursday, was no longer available on the Amazon site on Friday.
The ad, which is still available on LinkedIn, describes the new role as one who will "identify policy gaps and propose scalable solutions," "develop frameworks to assess risk and guide decision-making," and "develop efficient issue escalation mechanisms."
The LinkedIn ad also says the position will "make clear recommendations to AWS leadership."
The Amazon spokesperson said the job posting was temporarily removed from the Amazon website for editing and should not have been posted in its draft form.
AWS's offerings include cloud storage and virtual servers, and the business counts major companies like Netflix (NFLX.O), Coca-Cola (KO.N) and Capital One (COF.N) as clients, according to its website.
PROACTIVE MOVES
Better preparation against certain types of content could help Amazon avoid legal and public relations risk.
"If (Amazon) can get some of this stuff off proactively before it's discovered and becomes a big news story, there's value in avoiding that reputational damage," said Melissa Ryan, founder of CARD Strategies, a consulting firm that helps organizations understand extremism and online toxicity threats.
Cloud services such as AWS and other entities like domain registrars are considered the "backbone of the internet," but have traditionally been politically neutral services, according to a 2019 report from Joan Donovan, a Harvard researcher who studies online extremism and disinformation campaigns.
But cloud services providers have removed content before, such as in the aftermath of the 2017 alt-right rally in Charlottesville, Virginia, helping to slow the organizing ability of alt-right groups, Donovan wrote.
"Most of these companies have understandably not wanted to get into content and not wanting to be the arbiter of thought," Ryan said. "But when you're talking about hate and extremism, you have to take a stance."
Reporting by Sheila Dang in Dallas; Editing by Kenneth Li, Lisa Shumaker, Sandra Maler, William Mallard and Sonya Hepinstall
Server and virtualization business trends to watch in 2021 – TechBullion
There are many different trends in data center technology, which can make it difficult to keep up with the latest requirements. There is always something new and exciting popping up. But what are the latest trends? What's changing in 2021?
A lot of people think server virtualization is outdated, but in 2021 it will still be very much around. One of the most important trends to watch is software-defined infrastructure. It's already popular, but it will continue to grow in popularity over the next few years.
Here are some other trends to watch for when looking at server and virtualization business trends in 2021.
Throughout 2020, an increasingly large number of businesses adopted hybrid cloud technology, and many more plan to do so in 2021 and beyond. Enterprises, in particular, are embracing hybrid cloud technology to gain greater agility and mobility.
The idea of cloud providers delivering multiple services (compute, storage, network, and data services) in the form of a single package appeals to them, and this has helped solidify hybrid cloud technology as a requirement for IT operations.
A move to a fully hybrid cloud infrastructure, one in which customers are not only using public cloud providers such as Amazon Web Services (AWS) but also implementing private clouds and a mixture of public and private clouds, is the logical next step for many organizations.
Low-cost commercial bare metal servers have been steadily rising in popularity in the second half of 2021 and will find their strongest markets in the web hosting and cloud computing sectors.
Because virtualization is likely to be more complicated than running traditional servers, dedicated bare metal servers will have a strong advantage over virtualized servers in terms of ease of operation.
The advantages of bare metal cloud servers will also prove useful to some private cloud providers, which may install a single server and load it up with virtual machine workloads on demand for the end client.
Many companies have found themselves needing server virtualization throughout the pandemic, with the rise of remote work and cloud computing. Using a hybrid approach that integrates virtualization, cloud, and more traditional computing solutions has become the norm for most enterprises.
Whether it's a proprietary solution like Hyper-V or VMware, a solution based on open standards like OpenStack, or a different approach like KVM, containers, or Google's Cloud Native Application Engine, this is a space with significant momentum and growth. It's an area where each of the players, HPE, IBM, Cisco, Dell, Oracle, HP, Microsoft, Red Hat, and VMware, has a strong position.
As the number of remote workers continues to climb, so does the risk of viable cyberattacks on corporations. Modern businesses have a wide range of remote workers, and those workers, along with the devices they use, are vulnerable to security issues.
This is a key concern for server providers, and most continue to invest in security measures and products. Some of the latest cybersecurity trends to emerge in 2021 include things like:
Devices moving closer to the point of application access, processing and delivery will require new kinds of capabilities that were not needed before. Edge computing, in which a local compute node, edge gateway, or other compute element is set up to handle compute-intensive activity close to the data source, is expected to see major gains throughout 2021, and see some major liftoff in 2022.
Markets for edge computing will include verticals such as supply chain and retail, and the edge can enable new business models and revenue streams for application vendors and system integrators.
The devices that are most often seen as edge nodes in the context of edge computing tend to be low-power and low-cost IoT devices such as sensors and electronic logs. Edge computing vendors and service providers will bring services to edge networks, based on their commitment to systems integration, interoperability, standards support, and vendor enablement.
Automated ‘cloud lab’ will handle all aspects of daily lab work – E&T Magazine
Carnegie Mellon University (CMU) is working with Emerald Cloud Lab (ECL) to build a world-first cloud laboratory, which they hope will provide researchers with facilities for routine life sciences and chemistry research.
According to the partners, the remote-controlled Carnegie Mellon University (CMU) Cloud Lab will provide a universal platform for AI-driven experimentation, and revolutionise how academic laboratory research and education are done.
Emerald's 'cloud lab', which will be used as the basis for the new lab, allows scientists to conduct wet laboratory research without being in a physical laboratory. Instead, they can send their samples to a facility, design their experiments using ECL's command-based software (with the assistance of AI-based design tools), and then execute the experiment remotely. A combination of robotic instrumentation and technicians performs the experiments as specified, and the data is sent to cloud servers for access.
CMU researchers have used ECL facilities for research and teaching for several years. According to the university, cloud lab classes gave students valuable laboratory experience during the Covid-19 pandemic, even with all courses being taught remotely.
"CMU is a world leader in [AI], machine learning, data science, and the foundational sciences. There is no better place to be home to the world's first university cloud lab," said Professor Rebecca Doerge. "Bringing this technology, which I'm proud to say was created by CMU's alumni, to our researchers and students is part of our commitment to creating science for the future."
"The CMU Cloud Lab will democratise science for researchers and students. Researchers will no longer be limited by the cost, location, or availability of equipment. By removing these barriers to discovery, the opportunities are limitless."
The new cloud lab will be the first such laboratory built in an academic setting. It will be built in a university-owned building on Penn Avenue, Pittsburgh. Construction on the $40m project is expected to begin in autumn for completion in summer 2022.
The facility will house more than 100 types of scientific instruments for life sciences and chemistry experiments and will be capable of running more than 100 complex experiments simultaneously, 24 hours a day, 365 days a year. This will allow users to individually manage many experiments in parallel from anywhere in the world. The university and company will collaborate on the facility's design, construction, installation, management, and operations. Staff and students are already being trained to use the cloud lab.
While the CMU Cloud Lab will initially be available to CMU researchers and students, the university hopes to make time available to others in the research community, including high school students, researchers from smaller universities that may not have advanced research facilities, and local life sciences start-ups.
"We are truly honoured that Carnegie Mellon is giving us the chance to demonstrate the impact that access to a cloud lab can make for its faculty, students and staff," said Brian Frezza, a CMU graduate and co-CEO of ECL. "We couldn't think of a better way to give back to the university than by giving them a platform that redefines how a world-class institution conducts life sciences research."
How to Move Fast in the Cloud Without Breaking Security – insideBIGDATA
In this special guest feature, Asher Benbenisty, Director of Product Marketing at AlgoSec, looks at how organizations can solve the problems of managing and maintaining security in hybrid, multi-cloud environments. Also discussed is the common confusion over cloud ownership, and how organizations can get consistent control and take advantage of agility and scalability without compromising on security. Asher is an experienced product marketing professional with a diverse background in all aspects of the corporate marketing mix, product/project management, and technical expertise. He is passionate about bringing innovative products that solve real business problems to the market. When not thinking of innovative products, Asher enjoys outdoor running, especially by the ocean.
"Move fast and break things" is a familiar motto. Attributed to Facebook CEO Mark Zuckerberg, it helps to explain the company's stellar growth over the past decade, driven by its product innovations. However, while it's a useful philosophy for software development, moving faster than you'd planned is a risky approach in other areas, as organizations globally realized during the COVID-19 pandemic. While 2020 saw digital transformation programs advance by up to seven years, enterprises' quick moves to the cloud also meant that some things got damaged along the way, including security.
A recent survey conducted with the Cloud Security Alliance showed that over half of organizations are now running over 41% of their workloads in public clouds, compared to just one quarter in 2019, and this will increase further by the end of 2021. Enterprises are moving fast to the cloud, but they are also finding that things are getting broken during this process.
11% of organizations reported a cloud security incident in the past year, with the three most common causes being cloud provider issues (26%), security misconfigurations (22%), and attacks such as denial of service exploits (20%). In terms of the business impact of these disruptive cloud outages, 24% said it took up to 3 hours to restore operations, and for 26% it took over half a day.
As a result, it's no surprise that organizations have significant concerns about enforcing and managing security in the cloud. Their leading concerns were maintaining overall network security, a lack of cloud expertise, problems when migrating workloads to the cloud, and insufficient staff to manage their expanded cloud environments. So, what are the root causes of these cloud security concerns and challenges, and how should enterprises address them?
Confusion over cloud control
When asked which aspects of security worried them most when running applications in public clouds, respondents overwhelmingly cited getting clear visibility of topologies and policies for the entire hybrid network estate, followed by the ability to detect risks and misconfigurations.
A key reason for these concerns is that organizations are using a range of different controls to manage cloud security as part of their application orchestration. 52% use cloud-native tools, and 50% reported using orchestration and configuration management tools such as Ansible, Chef and Puppet. However, nearly a third (29%) said they use manual processes to manage cloud security.
In addition, there's competition for overall control over cloud security: 35% of respondents said their security operations team managed cloud security, followed by the cloud team (18%) and IT operations (16%). Other teams, such as network operations, DevOps and application owners, all figured too. Having different teams using multiple different controls for security limits overall visibility across the hybrid cloud environment, and also adds significant complexity and management overheads to security processes. Any time you need to make a change, you need to duplicate the work across each of these different controls and teams. This results in security holes and the types of misconfiguration-based incidents and outages mentioned earlier.
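As an illustration of the kind of automated check that reduces this duplicated, error-prone work (and not any particular vendor's product), the sketch below scans one AWS account for security groups open to the whole internet, one of the classic misconfigurations behind the incidents cited above.

```python
# A small sketch of an automated misconfiguration check (not any vendor's
# product): scan EC2 security groups for rules open to the entire internet.
# Requires boto3 and AWS credentials for the account being audited.
import boto3

def find_open_security_groups(region: str = "us-east-1") -> list[str]:
    ec2 = boto3.client("ec2", region_name=region)
    findings = []
    for sg in ec2.describe_security_groups()["SecurityGroups"]:
        for rule in sg.get("IpPermissions", []):
            for ip_range in rule.get("IpRanges", []):
                if ip_range.get("CidrIp") == "0.0.0.0/0":
                    port = rule.get("FromPort", "all")
                    findings.append(f"{sg['GroupId']} allows port {port} from anywhere")
    return findings

if __name__ == "__main__":
    for finding in find_open_security_groups():
        print("MISCONFIGURATION:", finding)
```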
How to move fast and not break things
So how can organizations address these security and management issues, and get consistent control over their cloud and on-prem environments, so they can take full advantage of cloud agility and scalability without compromising on security? Here are the four key steps:
With a network security automation solution handling these steps, organizations can get holistic, single-console security management across all of their public cloud accounts, as well as their private cloud and on-premises deployments. This helps them to solve the cloud complexity challenge and ensures faster, safer and more compliant cloud management making it possible for organizations to move fast in response to changing business needs without breaking things.
The myths behind Linux security. – The CyberWire
Executive Summary.
"Attackers do not target Linux environments because Windows is the most used operating system globally" is a belief many in the technology industry hold. Exploiting this false belief, attackers are creating havoc in companies' Linux-based environments by creating new Linux malware and porting existing Windows malware to Linux.
There is a notion in our community that added Linux operating system security features, such as Security-Enhanced Linux (SELinux), along with cloud provider offerings such as cloud-based firewall rules and access management, offer security by default, and that companies therefore do not need to focus on hardening the cloud server itself.
Myths such as these can lead companies to suffer devastating losses. When securing Linux servers, whether physical or in the cloud, the basics remain the same. Just because a server is running Linux does not mean you can be lenient about security practices on the server itself. A company's security posture frequently relies on cloud providers' security controls, and while they do provide help, if the company does not know what code is running on its servers, the effectiveness of those controls is negated.
Software development has changed drastically over the past several years to meet demands for faster time to market. To accommodate these requirements, developers are increasing the frequency of their code deployments. Capital One reports it currently deploys up to 50 times per day for a single product, with Amazon, Google, and Netflix deploying thousands of times per day. With the frequency of these code changes, it is becoming increasingly difficult for security teams to adapt their monitoring and hardening practices.
New code deployments can alter a server's expected behavior. If companies are focusing their monitoring on behavior-based detection, new code deployments can lead to false positives, which create an additional workload for teams. Security teams often report that it's challenging to address these situations because they do not have enough visibility into what code is running on their servers, so they must spend a significant amount of time investigating them. If attackers can craft their code to fit the expected behavior, no alerts are triggered, and a compromise could occur without any detection. However, companies are often wary of deploying new security solutions, as they may degrade performance by using vital resources or slowing down the development process.
Attackers focus on Windows as it's the world's most used operating system.
The number of threats that Linux servers face is downplayed due to another common myth about the popularity of Linux around the world. The belief is that Windows is the primary operating system in use, and while this might be true for desktop computers, when it comes to Linux cloud or physical servers, the numbers say it all.
Currently, all 500 of the world's fastest supercomputers run on Linux. These systems are used for everything from advancing artificial intelligence to helping save lives by potentially aiding in COVID-19 gene analysis and vaccine research.
96.3% of the world's servers run Linux.
83.1% of developers say they prefer to work on the Linux platform rather than any other operating system.
In the past decade, researchers have discovered many advanced persistent threat campaigns targeting Linux systems using adapted Windows malware, as well as unique Linux malware tools tailored for espionage operations. Once the code was modified to work in Linux environments, there was no barrier to shifting these attacks to new targets. One example is IPStorm; researchers first saw this malware in 2019 targeting Windows systems. IPStorm has now evolved to target other platforms such as Android, Linux, and Mac devices, pushing the number of compromised systems past 13,500. Detecting whether systems are compromised is not always difficult; however, many businesses are unaware they should examine their devices until they see the attack's impact. The increase in compromised systems has led some to call IPStorm one of the most dangerous malware families in existence.
Perhaps one of the leading factors for attackers deciding to morph their attack strategies is the growth of cloud technology and the increasing number of cloud providers making the transition to Linux-based environments easier than ever.
Even governments have embraced Linux in their environments. For example, in 2001, the White House transitioned to the Red Hat Linux-based distribution. The US Department of Defense migrated to Linux in 2007, and the US Navy's warships have been using Red Hat since 2013. The US is not alone in this transition. The Austrian capital of Vienna and the government of Venezuela have also adopted Linux.
Open-source software is inherently secure, due to the visibility of the code and contributions from the community.
Attackers contribute to open-source projects as well. For example, we have seen NPM packages that contained code providing access to environment variables, allowing for the collection of information about the host device.
Not everyone has the skills to understand the code, so despite the installed code being visible, the compromise can go undetected. When an issue is reported, an experienced developer reviews the code, then writes a patch. Once this work is done, we wait for the new code to be approved. Don't forget that during this time unknowing parties are still using these packages.
Once a fix is available, many companies still do not upgrade to the new code. The State of Software Security (SOS) analyzed 85,000 applications and found that over 75% shared similar code. 47% of these had flawed libraries used by multiple applications, and 74% of those libraries could have been fixed with a simple upgrade.
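To make that last point concrete, the sketch below compares installed Python packages against a hypothetical list of minimum safe versions; in practice the list would come from an advisory database, but the point stands that flagging and applying a simple upgrade is often all that is needed.

```python
# Sketch of the point above: many vulnerable libraries only need an upgrade.
# Compares installed package versions against a hypothetical list of minimum
# safe versions; a real audit would feed this from an advisory database.
from importlib.metadata import distributions

# Hypothetical minimum safe versions, for illustration only.
MINIMUM_SAFE = {
    "requests": (2, 20, 0),
    "pyyaml": (5, 4, 0),
}

def version_tuple(version: str) -> tuple[int, ...]:
    # Naive parse that ignores non-numeric parts such as "rc1"; fine for a sketch.
    return tuple(int(part) for part in version.split(".") if part.isdigit())

def audit_installed_packages() -> list[str]:
    warnings = []
    for dist in distributions():
        name = (dist.metadata["Name"] or "").lower()
        floor = MINIMUM_SAFE.get(name)
        if floor and version_tuple(dist.version) < floor:
            safe = ".".join(map(str, floor))
            warnings.append(f"{name} {dist.version} is below safe version {safe}")
    return warnings

if __name__ == "__main__":
    for warning in audit_installed_packages():
        print("UPGRADE NEEDED:", warning)
```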
Our Linux environments are just as vulnerable as any other environment. All of these myths around Linux security fall short because they do not take into account what history has taught us, that breaches do happen.
In order to have a healthy security posture, companies need to grow beyond the idea that all breaches can be prevented and address the need for visibility of the code running within their workloads.
Webinar: Is Linux Secure By Default?
Blog: 2020 Set a Record for New Linux Malware Families