Category Archives: Cloud Servers

Recognizing the Risks of the Cloud – Security Boulevard

A recent article in Forbes discussed prioritizing the risks of the cloud, and specifically called out four areas of risk that need to be addressed by organizations moving to the cloud. The article recognized that these risks come from applications, workloads, the network, and the platform.

The last two, network and platform security, are well understood and have been widely deployed in on-premises as well as cloud environments. Most security deployments and spending focus on networks and platforms. Unfortunately, as many organizations are finding out, network and platform security isn't enough to effectively secure the cloud.

In today's world of increasing attacks (especially zero-day attacks) and increasingly successful attacks, application and workload security are gaining attention and focus. Applications and workloads are the gateways to the data sought after by cyber criminals, and attacks against them are increasingly sophisticated, meaning traditional network and platform security tools fail to detect these types of attacks.

For many organizations, a Web Application Firewall (WAF) fulfills their application security requirements. But a WAF alone misses many application security needs. Although WAFs have been around in their current form since around 2002, they function as a network perimeter security solution and have failed to address many of the issues that applications and workloads face in today's threat landscape. WAFs only have visibility into the traffic coming to and from the application or workload, not what is happening inside the application or workload itself.

The second area where organizations have typically invested in security is the platform. For platform security, many organizations continue to rely on standard anti-virus/anti-malware or Endpoint Detection and Response (EDR) solutions to protect their servers. Unfortunately, these solutions are designed to protect end-user systems, and specifically the operating systems running on them, rather than application servers. They aren't designed to protect against attacks targeted specifically at applications and workloads, and typically don't understand the transactional languages or operations of those applications and workloads.

Application and workload security needs visibility into the application itself, along with the ability to understand the transactions happening between the end user and the application, and between the application and the APIs it uses to access data.

Unlike network and platform security solutions, a Runtime Application Self-Protection (RASP) solution can see what's happening inside the application and determine whether there is inappropriate use of the application itself. In addition, RASP is really the first security category to offer self-protection for applications and workloads.

A typical RASP solution has code-level visibility into applications and workloads and can analyze all of their related activity to accurately identify when an attack occurs, thereby reducing the number of false positives.

Even the latest revision of NIST SP 800-53 adds RASP (Runtime Application Self-Protection) to the catalog of controls required by the security and privacy framework. The update came in September 2020, and it is the first revision to recognize this advancement in application security by requiring RASP.

By running on the same server as the application, RASP solutions provide continuous security for the application at runtime. For example, as mentioned earlier, a RASP solution has complete visibility into the application, so a RASP solution like the one from K2 Cyber Security can analyze an application's execution to validate the code being run and understand the context of the application's interactions.

K2 Cyber Security's RASP solution offers significant application protection while at the same time using minimal resources and adding negligible latency to an application.

Here at K2 Cyber Security, we'd like to help out with your RASP and IAST requirements. K2 offers an ideal runtime protection security solution that detects true zero-day attacks while generating the fewest false positives and alerts. Rather than relying on technologies like signatures, heuristics, fuzzy logic, machine learning, or AI, we use a deterministic approach to detect true zero-day attacks, without being limited to attacks based on prior knowledge. Deterministic security uses application execution validation and verifies that API calls are functioning the way the code intended. No prior knowledge of an attack or the underlying vulnerability is used, which gives our approach the true ability to detect new zero-day attacks. Our technology has eight patents granted or pending and produces no false alerts.
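
To make the idea of execution validation concrete, here is a minimal sketch of the general pattern in Python. This is an illustration of the concept only, not K2's product or code; the handler names and the allowlist are hypothetical. The application declares which operations each piece of code is expected to perform, and anything outside that allowlist at runtime is treated as an attack, with no signatures or prior attack knowledge involved.

    # Minimal sketch of deterministic execution validation (illustration only,
    # not K2's implementation). Expected operations are declared up front;
    # anything else at runtime is rejected, no attack signatures required.
    ALLOWED_CALLS = {
        "get_user": {"db.select"},                      # this handler should only read
        "update_profile": {"db.select", "db.update"},
    }

    class ExecutionViolation(Exception):
        pass

    def validated_call(handler, operation, func, *args, **kwargs):
        """Run an operation only if the named handler is expected to make it."""
        if operation not in ALLOWED_CALLS.get(handler, set()):
            # e.g. an injection payload that turns a read-only request into a write
            raise ExecutionViolation(f"{handler} attempted unexpected {operation}")
        return func(*args, **kwargs)

    # Example: a legitimate read passes, an unexpected write is blocked.
    validated_call("get_user", "db.select", print, "SELECT * FROM users WHERE id = 7")
    try:
        validated_call("get_user", "db.update", print, "UPDATE users SET role = 'admin'")
    except ExecutionViolation as violation:
        print("blocked:", violation)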

K2's technology can also be used with DAST testing tools to provide IAST results during penetration and vulnerability testing. We've also recently published a video, The Need for Deterministic Security. The video explains why the technologies used in today's security tools, including web application firewalls (WAFs), fail to prevent zero-day attacks and how deterministic security fills the need for detecting them. The video covers why technologies like artificial intelligence, machine learning, heuristics, fuzzy logic, and pattern and signature matching fail to detect true zero-day attacks, giving very specific examples of attacks where these technologies work and where they fail.

The video also explains why deterministic security works against true zero-day attacks and how K2 uses deterministic security. Watch the video now.

Change how you protect your applications: include RASP, and check out K2's application workload security.

Find out more about K2 today by requesting a demo, or get your free trial.

Morphisec Raises $31M Funding Led by JVP to Enable Every Business to Simply and Automatically Prevent the Most Dangerous Cyberattacks – PR Web

BEER SHEVA, Israel and BOSTON (PRWEB) March 25, 2021

Morphisec, a leader in cloud-delivered endpoint and server security solutions, today announced that it has raised $31 million in funding led by JVP. Other existing investors, including Orange and Deutsche Telekom Capital Partners, also participated in the round. Morphisec, deployed on over 7 million endpoints, offers enterprises cutting-edge cyber prevention that stops the most dangerous attacks in an automated and easy-to-manage manner, without any impact on users, performance, or IT teams, while conserving costs and achieving best-in-class efficacy.

The investment will support an aggressive hiring push aimed at drastically increasing headcount across the U.S. and Israel. As Morphisec ramps up recruiting talent for every level of its organization, it is announcing today the appointment of Steve Bennett to its board of directors, effective immediately. Bennett formerly served as CEO of major software and security companies, including Symantec and Intuit. Before that, Bennett spent over 20 years at General Electric in multiple executive management roles.

Morphisec aims to protect users and workloads everywhere. The pandemic resulted in remote work at levels never seen before, making perimeter security irrelevant and forcing organizations to protect the endpoint as the last true perimeter. Moreover, accelerated migration to the cloud, whether at the application/SaaS level (e.g., Office 365, Salesforce) or the infrastructure level (e.g., AWS, Azure), requires organizations to protect endpoints and workloads in a low-cost, automated, and deterministic fashion. Morphisec comes to these organizations' defense without needing dedicated security teams to respond to and investigate attacks, automatically stopping the most dangerous attacks targeting workstations, VDIs, servers, virtual machines, and cloud workloads.

"Midsized enterprises are historically underserved by the cybersecurity market and left behind by cost-prohibitive tools and staff constraints," said Ronen Yehoshua, CEO of Morphisec. "The challenges for these organizations have only increased in the last year with work-from-home employees using unsecured devices and connecting to an endless array of cloud-based applications. Morphisec has proven to be the only cybersecurity solution capable of bringing them simple yet effective protection that also fits into their existing budget. With this new investment, we will further our commitment to bring organizations of all sizes threat prevention that stops advanced attacks in their tracks before the breach and costly damage."

Morphisec's suite of solutions for endpoints, servers, and cloud workloads uses patented zero trust runtime security powered by moving target defense technology to block threats. Rather than trying to remediate attacks after they hit, Morphisec's proprietary technology based on moving target defense stops attacks deterministically and automatically, without requiring knowledge of threat type or manual oversight, making it highly effective against advanced attacks such as zero-day and unknown threats.

The company's flagship solution, Morphisec Guard, is a complete endpoint prevention platform that combines traditional antivirus with Morphisec's advanced protection against ransomware, malware, and evasive attacks. Its latest solution, Morphisec Keep, protects servers and cloud-based applications from advanced threats. Keep ensures that mission-critical workloads running on cloud server instances, including private and public clouds hosted on AWS, Azure, and GCP, are automatically protected with zero downtime or performance impact.

"Endpoints of all types (workstations and servers, on-premises and in the cloud, physical and virtual) are the ultimate frontiers of cyber protection. Organizations today settle for low-efficacy, high-cost, non-deterministic, performance-impacting, knowledge-challenged sets of solutions like EDRs and behavioral and signature-based approaches. These result in uncertainty and high cost, and are difficult to manage in work-from-home and cloud environments," said Yoav Tzruya, General Partner at JVP. "Morphisec's unique approach provides measurable, deterministic, low-cost value with best-in-class protection, serving distributed organizations and further allowing risk-free cloud migration. Morphisec's unique ability to prevent attacks before any breach occurs, without requiring knowledge of the threat, positions it as the de facto proactive cybersecurity solution for the cloud."

"Morphisec has brought the most significant innovation to prevention the market has seen in the last 10 years," said Steve Bennett. "I've never witnessed a cybersecurity company that has delivered so much value potential for mid-sized customers. Not only does it stop the breaches that make the headlines, but it does so in a way that allows budget-constrained businesses to receive the world-class prevention and business continuity often reserved for large, deep-pocketed corporations."

For information on the growing number of open positions in development, sales, and marketing at Morphisec visit: https://www.morphisec.com/careers.

About Morphisec

Morphisec delivers an entirely new level of innovation to endpoint protection, creating a zero-trust execution environment for workstations, VDI, servers, and cloud workloads. This proactively creates a prevent-first posture against the most advanced threats to the enterprise, including APTs, file-based malware, zero-days, ransomware, fileless attacks, and web-borne exploits. This complete endpoint security solution easily deploys into a company's existing security infrastructure to form a simple, highly effective, cost-efficient technology stack that is truly disruptive to today's cybersecurity model.

IT Insight: The seamless connection between the digital and real world – Seacoastonline.com

JoAnn Hodgdon | Guest Columnist

IoT, the Internet of Things. What is it? Wikipedia defines it as "the network of physical objects - things or objects - that are embedded with sensors, software, and other technologies for the purpose of connecting and exchanging data with other devices and systems over the Internet."

The Internet of Things (IoT) is the seamless connection between the digital world and our real world. By 2025, IoT trends suggest the number of connected devices will rise to 75 billion. According to Gartner, 127 devices get connected to the internet every second! Think mobile phones, smart home and office security, heating systems, appliances, contact lenses, and even smart yoga mats.

With many people working remotely, a key trend coming out of this year's list is the need for location independence. IT providers and leaders must recommend location-independent services. Live anywhere in the world and make a living with little more than a laptop and an internet connection! IoT edge computing, for both consumers and businesses, is one example of how to get there while securing your connected devices.

Even if you are not working remotely now due to the pandemic, the workplace of the future is evolving to include mobile device implementation and remote support technology. Take, for instance, contactless interactions: accessing spaces and buildings with IoT such as smart cards, sensors, and wearables, as opposed to password-based entry.

IoT applications are used to enhance employee work environments at home. Soon corporate real estate will be a thing of the past, with a greater percentage of decision makers anticipating permanent remote workers after COVID. Those who plan future offices are using smart power, lighting, and energy. Many already use sensor-enabled space monitoring to track congested areas, like dining facilities, and to support social distancing.

Virtual queues for curbside assistance and check-in have replaced waiting or standing in line. The IoT has enabled new expectations in customer service. Precise time slots made possible by tracking customer location have raised the customer experience to a new level. With a customer's permission via an app, the consumer world is forever changed!

Within a business network, it is possible to move processing for your IoT devices away from the data center and out to the edge of your network. Via wireless connections, edge computing allows you to take advantage of IoT and new transformational business applications. Your devices will spend less time communicating with the cloud, react more quickly to local changes, and operate more reliably.
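
As a rough illustration of that pattern (a generic Python sketch with simulated sensor and cloud functions, not tied to any particular IoT platform), an edge device can react to readings immediately on site and send only a periodic summary upstream:

    # Sketch of edge-side processing: react to local readings immediately and
    # send only an aggregate upstream, instead of streaming every raw sample
    # to the cloud. Sensor read and cloud publish are simulated placeholders.
    import random
    import statistics
    import time

    def read_sensor():
        return 20.0 + random.random() * 70.0     # simulated temperature reading

    def publish_to_cloud(summary):
        print("upstream:", summary)               # placeholder for one small message

    def run_edge_loop(window_seconds=60, poll_seconds=1):
        samples, deadline = [], time.time() + window_seconds
        while True:
            value = read_sensor()
            samples.append(value)
            if value > 85.0:
                print("local alert: overheating")  # handled at the edge, no round trip
            if time.time() >= deadline:
                publish_to_cloud({"mean": round(statistics.mean(samples), 1),
                                  "max": round(max(samples), 1),
                                  "count": len(samples)})
                samples, deadline = [], time.time() + window_seconds
            time.sleep(poll_seconds)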

You can transform your business and devices with Microsoft Azure. A managed Azure Cloud Solution means less physical infrastructure to manage. Retire your local servers when you move all your server functions to the cloud: no more static servers in your building or any other hardware, let alone the floor space and electricity to keep them running.

You also pay only for what you use so your company can scale and grow without any large capital investments. You have total control over your costs each month and can easily adjust your usage and capacity on demand to suit your budget.

With all your key business applications and data in the cloud, you are automatically protected when disaster strikes. Downtime is limited to a matter of hours, not days, and your staff and remote users have permanent access no matter what.

Your Azure Cloud Solution can be built to keep your IT infrastructure at optimal performance all the time, including your IoT devices.

MS Azure IoT Edge is a fully managed service that provides your business with the uptime, communication, and security you need. You will know that your devices are operating with the correct software and are authorized to communicate with one another.

Securing connected devices is mandatory. IoT Edge integrates with Azure Defender for IoT to provide threat protection and security management. It also supports current hardware security modules to provide strong authenticated connections for confidential computing. Make the switch to a managed Azure Cloud Solution. The seamless connection between your digital and real world is here. Prepare for future innovation!

JoAnn Hodgdon is vice president and co-founder of Portsmouth Computer Group (PCGiT) with her husband David. PCG provides comprehensive managed IT services, business continuity, security, cloud computing and Virtual CIO services to their clients. You may reach her at joann@pcgit.com or at http://www.pcgit.com.

Designing Servers In Rapidly Changing Times – The Next Platform

When chip makers launch the latest additions to their datacenter processors, server OEMs have historically followed immediately or soon after with a rollout of new or enhanced systems based on those offerings. Then they'd wait until the next big unveiling by another chip maker, and the process would start again.

However, it's getting more difficult to follow that pattern, for a number of reasons. The most obvious, at least right now, is timing. AMD this month unveiled its third-generation "Milan" Epyc 7003 processors, the latest server chips based on the ever-evolving Zen microarchitecture that has enabled the company to muscle its way back into the server space and compete with larger rival Intel. For its part, Intel will soon roll out its third-generation Xeon SP CPUs, codenamed "Ice Lake," prompting another round of announcements of servers that will be armed with the new chips.

But that's only part of the story. CPUs no doubt are still crucial components of servers, but increasingly so are accelerators like GPUs and field programmable gate arrays (FPGAs), as are features like security, cooling, and management. In addition, enterprises have shifted over the past several years to look at the machines not simply for the power they offer but for how they can be used to run such advanced workloads as machine learning, data analytics, and HPC.

And now compute environments like the cloud and increasingly the edge are factors when evaluating servers.

All that plays into how system makers not only plan and develop their hardware lineups but also when they decide to roll them out. Those considerations played a role in Dell's decision this month to unveil its complete portfolio of new and enhanced servers two days after AMD unveiled its Epyc 7003 family of new processors and just weeks before Intel's expected rollout of its Ice Lake offerings.

"We debated quite a bit," Ravi Pendekanti, senior vice president of server and networking product management and marketing at Dell, tells The Next Platform. "We said, 'Should we launch a set of products with AMD? Should we then come out and do a lot of stuff with Intel?' Then we looked back and said, 'Wait a second, that is not helping our customers who are making purchasing decisions. They want to look at the entire portfolio.' That's why we changed to a portfolio update, having both Intel and AMD in the mix. It wasn't easy across the organization, because typically in the past we would bring four or five products here, maybe three or four products. This meant the entire organization had to work on a pretty broad portfolio. We have never done this, honestly: 17 platforms in one go. And as we did that, we also wanted to put the lens on workloads, which is why we said that we see AI and machine learning, with the GPU stuff that customers want. But the other thing that's happening is the advent of things like 5G that's coming to the telco space."

Dell's new portfolio includes systems with a mix of Intel and AMD chips that touch upon the workloads and environments driving the rapid changes in IT. Examples are the PowerEdge R6515 and R750 systems. The R6515 is powered by AMD Epyc chips and is designed to improve data processing in big data Hadoop databases by as much as 60 percent. Meanwhile, the R750 will come with Intel's upcoming Ice Lake processors and promises 43 percent better performance on massively parallel linear equations for compute-heavy workloads. The XE8545 server offers up to 128 Epyc cores, four Nvidia A100 GPUs, and Nvidia's vGPU software to accelerate AI and similar workloads as well as deliver security and management benefits. The 4U rack system is the foundation for Dell's HPC Ready Solution for AI and Data Analytics. The 2U dual-socket R750xa, which will be powered by Intel's Ice Lake chips, also offers deep GPU capabilities, supporting up to four double-wide or six single-wide GPUs for machine learning, inference, and AI, and supports Nvidia's AI Enterprise, a suite of AI tools and frameworks launched earlier this month.

Pendekanti says the rapidly changing CPU picture, not only with Intel and AMD but also Arm and its manufacturing partners, is something Dell and other OEMs have to take into account when developing their server roadmaps. For one, the timeline for new processors is accelerating. Where once chip makers came out with new offerings every two-plus years, the time between new generations is shrinking. AMD's first-generation Epyc chips launched in 2017, with the second generation rolling out in 2019. Secondly, the sheer number of chip makers now has to be addressed in server development, such as in the x86 space.

"Until a few years ago, we didn't have to worry about [other chip makers]. It was only Intel. There was a hiatus that AMD took, and now we have, just for the CPU, this interesting bifurcation, so we have to look at two of them," he says, adding that other technologies and concerns, such as management, security, and cooling, are coming into play. "It's not trivial, because the technology adoption [trends] are changing [and] the number of technologies that are coming out. If I fast-forward a couple of years, we'll have to think about things like smartNICs, which are coming out."

Power efficiency continued to be a focus in the latest PowerEdge systems. Dell is including ducted fans and adaptive cooling that can improve efficiency by up to 60 percent over the previous generation, as well as multi-vector cooling, which automatically directs airflow to the hottest parts of the server. Some servers also offer Direct Liquid Cooling, which includes technology that can detect leaks.

Security also was a focus. With the cloud and now the edge, data and applications increasingly are being generated and accessed outside of the datacenter, making them more vulnerable to hacking. Intel, AMD, and Arm (which Nvidia is looking to buy for $40 billion) all are putting more security features into their silicon and designs. Such hardware-based security also is key for system makers, according to David Schmidt, senior director of server product management at Dell. It's important to balance hardware- and software-based security.

"You can't have one without the other," Schmidt tells The Next Platform. "You really have to go down to the silicon-based root of trust. What we've been doing in the past couple of years around security, it starts in the hardware. It allows you to chain that root of trust all the way up into an operating system. That's exactly where embedded hardware-based security comes into play, when it is time to secure the OS all the way down to the hardware. It has been a huge focus of ours to make sure it's there. Then you start being able to do some really cool things like security on the verification, which goes up into supply-chain type use cases. You can do active software scanning, you can do greater chip security, which you'll see both on Intel and AMD offerings."

Dell offers what it calls a cyber-resilient architecture and silicon root of trust. With the new systems, the vendor includes Secured Component Verification, an extension of its Secure Supply Chain assurance process, ensuring that the systems delivered to enterprises are exactly as they were manufactured, without any interference during delivery. PowerEdge UEFI Secure Boot Customization enables boot security to be more closely managed to mitigate attacks.

Modern server planning also has to take into account the cloud and the edge. Hyperscalers and major cloud providers continue to be a driving force in the hardware space. According to Synergy Research Group, in 2020 global spending on cloud infrastructure services, including infrastructure-as-a-service (IaaS), platform-as-a-service (PaaS), and hosted private clouds, grew 35 percent, to almost $130 billion, helped in part by the COVID-19 pandemic. Meanwhile, enterprise spending on datacenter hardware and software fell 6 percent, to less than $90 billion. In addition, capital expenditures in the first three quarters of 2020 by hyperscalers hit $99 billion, up 16 percent year-over-year, with the capex aimed specifically at datacenters up 18 percent.

Dell has drawn from what the cloud providers are looking for, Pendekanti says.

"We absolutely have learned quite a bit, because we do work and provide product to some of the CSPs, and it's helped us in a few ways," he says. "One of the things that most of these cloud service providers look for [is management], because most of these guys don't have tens of servers, they literally have tens of thousands and hundreds of thousands of servers, which essentially means we need to make sure that we are able to provide the right management tools. Those lessons have played a huge role in how we have morphed our whole portfolio in terms of manageability. Number two: When you look at the CSPs, they do actually look at architectures. They're looking at faster deployment, for example, or they're looking at how to deal with security."

Dell last year also announced Project Apex, which is aimed at offering its product portfolio as a service, spanning products, consumption models and cloud strategies.

Dell and other OEMs also have to address the edge, which is expanding with the growth of the Internet of Things (IoT) and will accelerate as 5G connectivity comes to the fore. In its updated portfolio, Dell is including the PowerEdge XR11 and XR12 ruggedized systems, which will be powered by Intel's Ice Lake processors and include support for multiple accelerators. They include smaller form factors about 400 mm (roughly 16 inches) deep, hardened chassis, remote manageability, and NEBS Level 3 compliance, all key points for systems designed to be installed closer to where the data is being generated and users are located. They are certified for telco and military uses.

"The key thing has been most of these are ruggedized," Pendekanti says. "On the CPU side, we don't need as much power right now for the device sitting at the edge. We don't really need to go out and put the most platinum kind of SKU with the highest TDP. In most cases, we are looking at probably a processor SKU that is at the lower end of the TDP rating, because you are literally not getting into all kinds of analytics at the edge. But you will probably do some processing, so the CPU stack that we look at doesn't have to take the entire spectrum. Our goal would be to actually leave it at the lower end of the spectrum for these edge boxes."

eyeson is moving beyond video conferencing, running auto-moderated town hall meetings – Press Release – Digital Journal

Austria - March 26, 2021 - eyeson's patented single stream technology optimizes video conferencing streams on cloud servers and transmits them to all participants in a defined layout in real time, while a unique selection algorithm, also developed by eyeson, makes it possible to go beyond moderated video conferencing toward cloud-based auto-arrangement of videos and participants based on intelligent activity tracking.

"Virtual events are oftentimes a source of concern regarding the moderation of the event and participants," says Andreas Kropfl, CEO of eyeson. "We care about excellence and easy communication, so we conceived a technology that dynamically prioritizes activity and allows refocusing of participants as needed."

Merging multiple video/audio streams into one stream is well known by now. However, merging hundreds of video meeting participants can cause headaches for event organizers who want to keep discussions controlled and well moderated! The eyeson tech team developed an algorithm that helps you prioritize by activity and keep everyone focused on the essentials. With long discussions in mind, where leaders, event organizers, and moderators need to coordinate over 100 participants, eyeson brought its users an algorithm that automatically brings active speakers to the virtual main stage.

eyeson's custom video layout can be set to one, two, four, or nine participants, with the innovative capability of displaying active participants on the video podium automatically. There are two layout options available: filling empty slots automatically or arranging users by voice activity. The first option updates the on-screen arrangement by placing users in the layout based on their meeting entry order. The second option, newly released, provides intelligent moderation: it detects users while they are talking and places them on the visible video podium.
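
The selection logic described above can be sketched roughly as follows (a simplified Python illustration of the concept, not eyeson's actual algorithm or API): rank participants by how recently they spoke, promote the most recent speakers to the visible podium slots, and fall back to join order when slots remain empty.

    # Rough sketch of activity-based podium selection (illustration only, not
    # eyeson's algorithm): fill the visible layout with the most recently
    # active speakers, then pad any empty slots in join order.
    from dataclasses import dataclass

    @dataclass
    class Participant:
        name: str
        joined_at: float
        last_spoke_at: float = 0.0        # updated by voice-activity detection

    def select_podium(participants, slots=4):
        active = sorted((p for p in participants if p.last_spoke_at > 0),
                        key=lambda p: p.last_spoke_at, reverse=True)[:slots]
        if len(active) < slots:            # pad remaining slots by entry order
            quiet = sorted((p for p in participants if p not in active),
                           key=lambda p: p.joined_at)
            active += quiet[:slots - len(active)]
        return [p.name for p in active]

    # Example: with a 4-slot layout, recent speakers take the podium first.
    people = [Participant("host", 0.0, 12.5), Participant("guest", 1.0),
              Participant("panelist", 2.0, 30.1), Participant("viewer", 3.0)]
    print(select_podium(people, slots=4))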

"Our technology takes over all handling from the client's side and acts as a virtual moderator based on an intelligent cloud-hosted decision algorithm and our cloud-based video transcoding services," says Michael Wolfgang, CTO of eyeson. "Developers can even make their own video layout configurations via the API." Whether it is a virtual event, a team meeting with many participants, or an online classroom, the intelligence that eyeson brings to video communication facilitates management and reduces online fatigue.

eyeson is a leader in cloud-based video conferencing with managed multipoint video processing technology at scale. Based on its patented single stream technology, eyeson provides API video services to easily integrate video collaboration into business workflows for full customer engagement. eyeson manages the cloud capacity, scalable video coding performance, and data management for the customer. Based on WebRTC technology, eyeson provides browser-based video meeting integrations on all desktop and mobile devices. eyeson offers B2B-focused products used by Forbes 500 companies and was named a "cool vendor in unified communications" by Gartner.

Media Contact
Company Name: Eyeson GmbH
Contact Person: Giannenta Milio
City: Graz
State: Styria
Country: Austria
Website: http://www.eyeson.com

Benu Networks brings a cloud-native architecture to its virtual BNG – FierceTelecom

Benu Networks is bringing together the benefits of a cloud-native architecture with the company's broadband network gateway (BNG). The company released a cloud-native virtual BNG (vBNG) that is targeted at telcos and is based on Benu's software-defined edge (SD-Edge) platform. Because the vBNG is based upon disaggregated network functions, Benu said that service providers can scale their broadband service securely and use the gateway in both wired and wireless networks, or in a converged network.

According to Mike McFarland, VP of product management at Benu Networks, the vBNG sits between the access network and the core network, an area that some service providers refer to as the service edge. The vBNG sees all the traffic to the operator's network and can authenticate subscribers and enforce policies such as bandwidth restrictions. McFarland said that currently most BNGs sit further back in the network, but one of the benefits of using a cloud-native architecture is that the vBNG can be closer to the network edge, which results in lower latency and improved customer experience because operators can push their content caching closer to the edge, reducing the amount of traffic on the network.
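
Bandwidth enforcement of the kind described here is commonly implemented with a token bucket per subscriber. The sketch below is a generic Python illustration of that policy mechanism with assumed plan numbers, not Benu's vBNG implementation:

    # Generic token-bucket sketch for per-subscriber bandwidth policy
    # (illustration of the concept, not Benu's vBNG implementation).
    import time

    class TokenBucket:
        def __init__(self, rate_bytes_per_s, burst_bytes):
            self.rate = rate_bytes_per_s
            self.capacity = burst_bytes
            self.tokens = burst_bytes
            self.last = time.monotonic()

        def allow(self, packet_bytes):
            now = time.monotonic()
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if packet_bytes <= self.tokens:
                self.tokens -= packet_bytes
                return True           # forward the packet
            return False              # drop or queue: subscriber is over policy

    # One bucket per authenticated subscriber, e.g. a plan of roughly 100 Mbit/s:
    policies = {"subscriber-42": TokenBucket(rate_bytes_per_s=12_500_000,
                                             burst_bytes=1_500_000)}
    print(policies["subscriber-42"].allow(1500))   # True while within the plan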

McFarland added that Benu decided to build a vBNG in response to service provider requests to push the BNG closer to the network edge. "Currently telcos are using traditional hardware-based BNG platforms," he said. "But with our cloud-native disaggregated vBNG the hardware is separated from the software, so operators can run the software on off-the-shelf servers. This gives them a broader diversity of vendors, and they can pick and choose the right server for their needs."

Benu also integrated its secure access service edge (SASE) into its cloud-native vBNG, which means that the SASE runs inside the carrier network. McFarland said the advantage of having the SASE integrated into the vBNG is that it enables security services to sit at the network edge without having to buy appliances for every branch. "It's much more cost effective and easier to manage and roll out," McFarland said. "This way you don't have to manage thousands of endpoint devices."

Benu Networks' SASE is already being commercially deployed, and the SASE integrated with the vBNG is currently being tested with some customers.

McFarland added that Benu's integration of the SASE with the vBNG has attracted some interest from service providers that want to use it for both their wired and wireless broadband networks. "This is viewed by carriers as a great way to leverage the investment on the 5G core side and provide a path to having a more unified user experience across both the wired and wireless networks," he said.

Benu Networks was founded in 2010. Its customers include Comcast, Liberty Global and Mediacom.

Cloud Backup & Recovery Software Market: GLOBAL OPPORTUNITY ANALYSIS AND INDUSTRY FORECAST 2023 – KSU | The Sentinel Newspaper

Cloud or online backup is a process of backing up electronic data by sending a copy of the data over a proprietary or public network to a remote network server. The server is usually hosted by a third-party service provider, which charges the customer fees based on backup file size, bandwidth, number of users, and capacity. Cloud backup and recovery software securely copies the files to many servers. The data is also encrypted so that no unauthorized user can view it, protecting it from viruses and hackers. The adoption of cloud backup provides additional benefits such as cost savings, security, storage, virtualization, and fast and easy access to backed-up files.
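
In outline, a backup agent of this kind encrypts each file on the customer's machine and only then ships the ciphertext to the remote server. The sketch below is a simplified Python illustration using the cryptography library's Fernet recipe, with the upload step left as a placeholder rather than any specific provider's API:

    # Sketch of a client-side encrypted backup step: encrypt locally, then send
    # the ciphertext to the provider's server (upload left as a placeholder).
    from pathlib import Path
    from cryptography.fernet import Fernet   # pip install cryptography

    def backup_file(path, key, upload):
        token = Fernet(key).encrypt(Path(path).read_bytes())  # encrypted before transit
        upload(name=Path(path).name, data=token)              # copy sent over the network
        return len(token)

    Path("notes.txt").write_text("example data")   # stand-in for a real file to back up
    key = Fernet.generate_key()                    # kept by the customer, not the provider
    backup_file("notes.txt", key,
                upload=lambda name, data: print(f"uploading {name}: {len(data)} bytes"))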

An increasing focus on reducing IT expenditure drives the global cloud backup & recovery software market. Moreover, rising demand for cloud-based services across several industry verticals and the growing backup requirements of enterprises drive the growth of the market. However, latency in data retrieval, interruptions, and the challenges of storage management and securing backups are expected to impede market growth. Increasing adoption of these solutions among SMEs and the emergence of new trends such as Infrastructure as a Service (IaaS) and IoT are expected to provide numerous opportunities for the market.

Request a FREE sample of this market research report at https://www.reportocean.com/industry-verticals/sample-request?report_id=31119

The global cloud backup & recovery market is segmented on the basis of deployment model, user type, industry vertical, and region. Deployment models covered in this study include private, public, and hybrid. Based on user type, the market is bifurcated into large enterprises and small and medium enterprises. On the basis of industry vertical, the market is segmented into BFSI, government, healthcare, telecom & IT, retail, manufacturing, and others. Based on the regional study, the market is analyzed across North America, Europe, Asia-Pacific, and LAMEA.

The global cloud backup & recovery market is dominated by key players such as Veritas Technologies LLC, Veeam Software, Commvault, IBM Corporation, Dell EMC, CA Technologies, Symantec Corporation, Microsoft Corporation, Hewlett Packard Enterprise, and Actifio Inc.

KEY BENEFITS FOR STAKEHOLDERS

The study provides an in-depth analysis of the global cloud backup & recovery software market and current & future trends to elucidate the imminent investment pockets. Information about key drivers, restraints, and opportunities and their impact analysis on the market size is provided. Porter's Five Forces analysis illustrates the potency of buyers and suppliers operating in the industry. The quantitative analysis of the global market from 2016 to 2023 is provided to determine the market potential.

KEY MARKET SEGMENTS

BY DEPLOYMENT MODEL

Private
Public
Hybrid

BY USER TYPE

Large Enterprises
Small and Medium Enterprises

BY INDUSTRY VERTICAL

BFSI
Government
Healthcare
Telecom & IT
Retail
Manufacturing
Others

BY GEOGRAPHY

North America (U.S., Canada, Mexico)
Europe (UK, Germany, France, Rest of Europe)
Asia-Pacific (China, Japan, India, Rest of Asia-Pacific)
LAMEA (Latin America, Middle East, Africa)

Send a request to Report Ocean to understand the structure of the complete report at https://www.reportocean.com/industry-verticals/sample-request?report_id=31119

KEY MARKET PLAYERS

Veritas Technologies LLC
Veeam Software
Commvault
IBM Corporation
Dell EMC
CA Technologies
Symantec Corporation
Microsoft Corporation
Hewlett Packard Enterprise
Actifio Inc.

The World Has Changed. Why Haven't Database Designs? – The Next Platform

It seems like a question a child would ask: Why are things the way they are?

It is tempting to answer, "because that's the way things have always been." But that would be a mistake. Every tool, system, and practice we encounter was designed at some point in time. They were made in particular ways for particular reasons. And those designs often persist like relics long after the rationale behind them has disappeared. They live on, sometimes for better, sometimes for worse.

A famous example is the QWERTY keyboard, devised by inventor Christopher Latham Sholes in the 1870s. According to the common account, Sholes's intent with the QWERTY layout was not to make typists faster but to slow them down, as the levers in early typewriters were prone to jam. In a way it was an optimization: a slower typist who never jammed would produce more than a faster one who did.

New generations of typewriters soon eliminated the jamming that plagued earlier models. But the old QWERTY layout remained dominant over the years despite the efforts of countless would-be reformers.

It's a classic example of a network effect at work. Once sufficient numbers of people adopted QWERTY, their habits reinforced themselves. Typists expected QWERTY, and manufacturers made more QWERTY keyboards to fulfill the demand. The more QWERTY keyboards manufacturers created, the more people learned to type on a QWERTY keyboard, and the stronger the network effect became.

Psychology also played a role. We're primed to like familiar things. Sayings like "better the devil you know" and "if it ain't broke, don't fix it" reflect a principle called the mere exposure effect, which states that we tend to gravitate to things we've experienced before simply because we've experienced them. Researchers have found this principle extends to all aspects of life: the shapes we find attractive, the speech we find pleasant, the geography we find comfortable. The keyboard we like to type on.

To that list I would add the software designs we use to build applications. Software is flexible. It ought to evolve with the times. But it doesn't always. We are still designing infrastructure for the hardware that existed decades ago, and in some places the strain is starting to show.

Hadoop offers a good example of how this process plays out. Hadoop, you may recall, is an open-source framework for distributed computing based on white papers published by Google in the early 2000s. At the time, RAM was relatively expensive, magnetic disks were the main storage medium, network bandwidth was limited, files and datasets were large, and it was more efficient to bring compute to the data than the other way around. On top of that, Hadoop expected servers to live in a certain place, in a particular rack or data center.

A key innovation of Hadoop was the use of commodity hardware rather than specialized, enterprise-grade servers. That remains the rule today. But between the time Hadoop was designed and the time it was deployed in real-world applications, other facts on the ground changed. Spinning disks gave way to SSD flash memory. The price of RAM decreased and RAM capacity increased exponentially. Dedicated servers were replaced with virtualized instances. Network throughput expanded. Software began moving to the cloud.

To give some idea of the pace of change, in 2003 a typical server would have boasted 2 GB of RAM and a 50 GB hard drive operating at 100 MB/sec, and the network connection could transfer 1 Gb/sec. By 2013, when Hadoop came to market, the server would have 32 GB of RAM, a 2 TB hard drive transferring data at 150 MB/sec, and a network that could move 10 Gb/sec.

Hadoop was built for a world that no longer existed, and its architecture was already deprecated by the time it came to market. Developers quickly left it behind and moved to Spark (2009), Impala (2013), and Presto (2013) instead. In that short time, Hadoop spawned several public companies and received breathless press. It made a substantial, albeit brief, impact on the tech industry, even though by the time it was most famous, it was already obsolete.

Hadoop was conceived, developed, and abandoned within a decade as hardware evolved out from under it. So it might seem incredible that software could last fifty years without significant change, and that a design conceived in the era of mainframes and green-screen monitors could still be with us today. Yet that's exactly what we see with relational databases.

In particular, the persistence is with the Relational Database Management System, or RDBMS for short. By technological standards, RDBMS design is quite old, much older than Hadoop, originating in the 1970s and 1980s. The relational database predates the Internet. It comes from a time before widespread networking, before cheap storage, before the ability to spread workloads across multiple machines, before widespread use of virtual machines, and before the cloud.

To put the age of RDBMS in perspective, the popular open source Postgres is older than the CD-ROM, originally released in 1995. And Postgres is built on top of a project that started in 1986, roughly. So this design is really old. The ideas behind it made sense at the time, but many things have changed since then, including the hardware, the use cases, and the very topology of the network.

Here again, the core design of RDBMS assumes that throughput is low, RAM is expensive, and large disks are cost-prohibitive and slow.

Given those factors, RDBMS designers came to certain conclusions. They decided storage and compute should be concentrated in one place, with specialized hardware and a great deal of RAM. They also realized it would be more efficient for the client to communicate with a remote server than to store and process results locally.

RDBMS architectures today still embody these old assumptions about the underlying hardware. The trouble is those assumptions aren't true anymore. RAM is cheaper than anyone in the 1960s could have imagined. Flash SSDs are inexpensive and incredibly responsive, with latency of around 50 microseconds, compared with roughly 10 milliseconds for the old spinning disks. Network latency hasn't changed as much (still around 1 millisecond), but bandwidth is 100 times greater.
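
To make those numbers concrete, a quick back-of-the-envelope comparison using the rounded figures above shows how far apart the access times really are:

    # Back-of-the-envelope comparison using the latency figures above
    # (rounded, order-of-magnitude only).
    access_times = {
        "spinning disk seek": 10e-3,   # ~10 milliseconds
        "network round trip": 1e-3,    # ~1 millisecond
        "flash SSD read":     50e-6,   # ~50 microseconds
    }
    for medium, seconds in access_times.items():
        print(f"{medium:>20}: {seconds * 1e6:>8.0f} us, "
              f"~{1 / seconds:>10,.0f} random accesses/second")
    # A local SSD answers roughly 20x faster than a network hop and about
    # 200x faster than the spinning disks that RDBMS designs were built around.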

The result is that even now, in the age of containers, microservices, and the cloud, most RDBMS architectures treat the cloud as a virtual datacenter. And that's not just a charming reminder of the past. It has serious implications for database cost and performance. Both are much worse than they need to be because they are subject to design decisions made 50 years ago in the mainframe era.

One of the reasons relational databases are slower than their NoSQL counterparts is that they invest heavily in keeping data safe. For instance, they avoid caching on the disk layer and employ ACID semantics, writing to disk immediately and holding other requests until the current request has finished. The underlying assumption is that with these precautions in place, if problems crop up, the administrator can always take the disk to forensics and recover the missing data.
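
The cost being described is essentially that of synchronous durability: a write is not acknowledged until it has been forced all the way to stable storage. Below is a bare-bones Python illustration of that pattern, a sketch of the general technique rather than any particular database's code:

    # Bare-bones illustration of synchronous durability: the write is forced to
    # stable storage (fsync) before the caller gets an acknowledgement, which is
    # exactly the step that replicated cloud storage makes less necessary.
    import os

    def durable_append(log_path, record: bytes):
        with open(log_path, "ab") as log:
            log.write(record + b"\n")
            log.flush()
            os.fsync(log.fileno())   # block until the OS confirms it is on disk
        return "committed"           # only now is the request acknowledged

    print(durable_append("wal.log", b"UPDATE accounts SET balance=100 WHERE id=7"))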

But there's little need for that now, at least with databases operating in the cloud. Take Amazon Web Services as an example. Its standard Elastic Block Store system makes backups automatically and replicates freely. Traditional RDBMS architectures assume they are running on a single server with a single point of storage failure, so they go to great lengths to ensure data is stored correctly. But when you're running multiple servers in the cloud, as you do, if there's a problem with one you just fail over to one of the healthy servers.

RDBMSs go to great lengths to support data durability. But with the modern preference for instant failover, all that effort is wasted. These days you'll fail over to a replicated server instead of waiting a day to bring the one that crashed back online. Yet RDBMS persists in putting redundancy on top of redundancy. Business and technical requirements often demand this capability even though it's no longer needed, a good example of how practices and expectations can reinforce obsolete design patterns.

The client/server model made a lot of sense in the pre-cloud era. If your network was relatively fast (which it was) and your disk was relatively slow (which it also was), it was better to run hot data on a tricked-out, specialized server that received queries from remote clients.

For that reason, relational databases originally assumed they had reliable physical disks attached. But once this equation changed, and local SSDs could find data faster than it could be moved over the network, it made more sense for applications to read data locally. But at the moment we can't do this because it's not how databases work.

This makes it very difficult to scale an RDBMS, even with relatively small datasets, and makes performance with large datasets much worse than it would be with local drives. This in turn makes solutions more complex and expensive, for instance by requiring a caching layer to deliver the speed that could be obtained more cheaply and easily with fast local storage.

RAM used to be very expensive. Only specialized servers had lots of it, so that is what databases ran on. Much of classic RDBMS design revolved around moving data between disk and RAM.

But here again, the cloud makes that a moot point. AWS gives you tremendous amounts of RAM for a pittance. But most people running traditional databases can't actually use it. It's not uncommon to see application servers with 8 GB of RAM, while the software running on them can only access 1 GB, which means roughly 90 percent of the capacity is wasted.

That matters because there's a lot you can do with RAM. Databases don't only store data. They also do processing jobs. If you have a lot of RAM on the client, you can use it for caching, or you can use it to hold replicas, which can do a lot of the processing normally done on the server side. But you don't do any of that right now because it violates the design of RDBMS.
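
As a simple illustration of what that spare client-side RAM could do (a generic Python sketch with a hypothetical database call, not a feature of any particular RDBMS), a read-through cache keeps hot rows in local memory so repeated reads never leave the application server:

    # Sketch of a client-side read-through cache: hot rows live in the app
    # server's otherwise idle RAM, so repeated reads skip the database entirely.
    # fetch_from_database is a hypothetical stand-in for a real driver call.
    from functools import lru_cache

    def fetch_from_database(user_id):
        print(f"round trip to the database for user {user_id}")
        return {"id": user_id, "name": f"user-{user_id}"}

    @lru_cache(maxsize=100_000)        # bounded by the RAM you choose to spend
    def get_user(user_id):
        return fetch_from_database(user_id)

    get_user(7)    # first read: hits the database
    get_user(7)    # second read: served from local memory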

Saving energy takes energy. But software developers often choose not to spend it. After all, as the inventor of Perl liked to say, laziness is one of the great virtues of a programmer. We'd rather build on top of existing knowledge than invent new systems from scratch.

But there is a cost to taking design principles for granted, even if it is not a technology as foundational as RDBMS. We like to think that technology always advances. RDBMS reminds us some patterns persist because of inertia. They become so familiar we don't see them anymore. They are relics hiding in plain sight.

Once you do spot them, the question is what to do about them. Some things persist for a reason. Maturity does matter. You need to put on your accountant's hat and do a hard-headed ROI analysis. If your design is based on outdated assumptions, is it holding you back? Is it costing you more money than it would take to modernize? Could you actually achieve a positive return?

It's a real possibility. Amazon created a whole new product, the Aurora database, by rethinking the core assumptions behind the RDBMS storage abstraction.

You might not go that far. But where there's at least a prospect of positive ROI, it's a good sign that change is strategic. And that's your best sign that tearing down your own design is worth the cost of building something new in its place.

Avishai Ish-Shalom is developer advocate at ScyllaDB.

Telos moves intercom to the cloud with Infinity VIP Virtual Intercom Platform – NewscastStudio

Telos Alliance has released a new broadcast intercom system that moves communication to the cloud.

Branded as the Infinity VIP Virtual Intercom Platform, the new system makes workflows available on any device: smartphone, laptop, desktop, or tablet.

Third-party devices, such as Elgato's Stream Deck, can also be used to control the system.

VIP allows users to utilize Telos Infinity IP Intercom anywhere, including at home, on-prem, site-to-site or in the cloud.

"Telos Infinity has revolutionized comms forever by eliminating the outmoded centralized matrix. We are doing it again with the new Telos Infinity Virtual Intercom Platform, the next evolution of Infinity that, for the first time, puts fully featured broadcast intercom in the cloud. This opens up a whole new world of virtual comm workflows, responds to customer demand for remote workflows, and aligns with Telos Alliance's larger push toward virtualization across product lines," said Martin Dyster of Telos Alliance.

Meeting users where they are on the path toward virtualization, Telos Alliance offers several deployment options for VIP:

On-Prem: Use the Telos Infinity VIP hardware appliance or your own server for on-prem installations.

Integrated: For both the On-Prem and Cloud versions, the Telos Infinity VIP system can be integrated with Telos Infinity beltpacks and hardware panels, or with any third-party intercom or audio subsystem, using AES67 or SMPTE 2110-30 connectivity.

Cloud: Server software for supported cloud platform installations. A complete communications infrastructure in the cloud (including Amazon Web Services and Google Cloud) with connectivity options for integration with third-party cloud-based and on-prem audio subsystems.

Software as a Service (SaaS): Various third-party Telos Alliance partners will offer a Telos Infinity VIP SaaS option, allowing users to lease it in a virtual environment.

Cloud spending topped data centers for the first time last year – ITProPortal

Businesses around the world spent more money on cloud infrastructure than on on-premises solutions for the first time last year, according to new figures.

A report from market researchers Synergy Research Group claims enterprise spending on cloud-based solutions grew by another third (35 percent) in 2020, compared to the year before, adding that total annual spending has now come close to the $130 billion mark.

At the same time, spending on on-prem solutions dropped by six percent, year-on-year, shrinking to less than $90 billion.

Speaking to TechCrunch, chief analyst and research director at Synergy, John Dinsdale, said CIOs spend their money on servers, storage, security and software for the cloud, among other things:

"The software pieces included in this data is mainly server OS and virtualization software. Comparing SaaS with on-prem business apps software is a whole other story," Dinsdale said.

Despite significant growth in spending, sceptics out there are still saying that the majority of workloads remains on-prem. For Dinsdale, it's a tough question to answer because of the ease with which workloads move around in today's hybrid world.

"I've seen plenty of comments about only a small percentage of workloads running on public clouds. That may or may not be true (and I tend more toward the latter), but the problem I have with this is that the concept of workloads is such a fungible issue, especially when you try to quantify it," he said.
