Category Archives: Cloud Servers

ThreatLocker partners with Datto to Streamline Secure Business Operations – PR Web


MAITLAND, Fla. (PRWEB) March 02, 2021

ThreatLocker, a global cybersecurity leader providing enterprise-level cybersecurity tools for the Managed Services Provider (MSP) industry to improve the security of servers and endpoints, today announced its partnership with Datto, the leading global provider of cloud-based software and technology solutions purpose-built for delivery by MSPs. This integration streamlines secure business operations for MSPs, allowing them to access their Autotask PSA account from within the ThreatLocker portal.

Ransomware remains the most common cyber threat to SMBs, and MSPs have seen increased security risks for clients following the move to remote working and the accelerated adoption of cloud applications in 2020. It's imperative that businesses adopt solutions and best practices to protect themselves and their clients' data from a cyber attack. ThreatLocker addresses these concerns, protecting against ransomware, viruses, and other malicious software, and integrates seamlessly with Autotask PSA, which combines all the mission-critical tools MSPs need to operate a successful business.

This integration enables MSPs to validate their Autotask PSA account from within the ThreatLocker portal. Successful validation allows the MSP to map their existing Autotask PSA sites to their existing ThreatLocker organizations. Initial PSA ticket settings can also be selected through the integration, and once companies are mapped, PSA tickets can be created.

"Datto is excited to partner with ThreatLocker, whose platform and seamless integration with Autotask PSA helps partners to securely manage their business operations," said Joe Rourke, Director of Product Management at Datto. "Partnering with ThreatLocker and their leading cybersecurity solutions, this integration further protects our partners and their clients' data from malware and malicious threat actors."

To learn more about how you can protect your clients from the costly effects of downtime following a ransomware attack, visit: http://www.threatlocker.com

About ThreatLocker

ThreatLocker is a global cybersecurity leader, providing enterprise-level cybersecurity tools for the Managed Services Provider (MSP) industry to improve the security of servers and endpoints. ThreatLocker's combined Application Whitelisting, Ringfencing, Storage Control and Privileged Access Management solutions are leading the cybersecurity market towards a more secure approach of blocking all unknown application vulnerabilities. To learn more about ThreatLocker visit: http://www.threatlocker.com



"We add a small private bank every month, that is the kind of growth YONO gives us": Amit Saxena, Global Deputy CTO, SBI – Express Computer

The year 2020 was unprecedented due to the pandemic, and SBI, being a systemically important bank, couldn't have missed the digitisation bandwagon witnessed in 2020. More digitisation puts pressure on data centers, so it's important to manage this nerve centre for 24x7 availability of systems. Even when many SBI employees were given virtual access and privileges, overall data center availability was satisfactory. The servers performed up to expectations, and software and firmware were upgraded as part of the regular refresh exercise.

Fireside Chat: Amit Saxena, Global Deputy CTO, State Bank of India | DCIDS 2021

The year of the pandemic has offered opportunities to embed automation capabilities, and SBI is looking forward to adopting automated functionalities. The bank currently uses machine learning and analytics to improve server performance. Application monitoring tools are operational, and ML models are run to monitor server performance and API call performance, which informs procurement decisions on the compute, storage and other capacity to be acquired for meeting future demand. These decisions are based on historic data from the last few months; AI is yet to come into action.

SBI does not decide cloud adoption simply on which workloads can go on the cloud and which cannot; the growth potential of a workload is the determinant that qualifies it to be hosted on a cloud model. The bank runs its own private cloud, with one of its very important applications hosted there, in addition to a few important internal-facing applications. The private cloud enables quick creation of RHEL-based, VAPT-tested virtual machines in a matter of 5-10 minutes.

"We have benchmarked our cloud on how well it is doing; however, we are too big a bank to rely on just one cloud. A hybrid cloud model works for us. There is a discussion going on with big public cloud providers on the scope of standardising their cloud models to collapse the learning curve for companies like us. This will help in the adoption of multiple clouds from different companies without going through the learning curve," says Amit Saxena, Global Deputy CTO, State Bank of India.

Given the growth in digital transactions, the plan is to soon add more cooling systems in the data center. In the last year, the Government undertook many initiatives to improve network connectivity. Additionally, overall internet penetration has improved, which has reduced connectivity bottlenecks.

As a consequence, digital transactions at the branch level, and especially on the YONO platform, have surged. "We add a small private bank every month, that is the kind of growth YONO has given us," says Saxena.

Saxena has a three-point suggestion for CIOs planning to move to public cloud: data purity, API standardisation and API rationalisation. All three, coupled with a robust middleware architecture, will make an application ripe to be moved onto a public cloud. SBI is exploring public cloud opportunities because of the limitations of handling the exponential transaction surge in a bare metal environment.

This article is based on the fireside chat conducted as a part of Express Computer's Data Center and Infrastructure Digital Summit. To watch the video, click here.

If you have an interesting article / experience / case study to share, please get in touch with us at [emailprotected]


Form 8.3 – Willis Towers Watson plc – Yahoo Finance UK

Bloomberg

(Bloomberg) -- Texas Governor Greg Abbott lifted the mask mandate and other anti-pandemic restrictions, defying warnings from health officials about the perils of dropping those precautions too soon.

Effective March 10, all businesses will be allowed to open at full capacity, Abbott said during a media briefing in Lubbock on Tuesday. Although his executive order allows counties to reimpose anti-virus rules should hospitalizations surge, it forbids them from jailing or fining scofflaws.

"This will kill Texans," Texas Democratic Party Chairman Gilberto Hinojosa said in a statement. "Our country's infectious disease specialists have warned that we should not put our guard down even as we make progress towards vaccinations."

Abbott acted at what federal authorities warn is a critical juncture in the pandemic that has killed 516,000 Americans: While hospitalizations and caseloads have dropped in Texas and nationwide, U.S. vaccinations are not yet widespread enough to provide so-called herd immunity, and new, easier-to-spread variants of Covid-19 are proliferating.

Abbott's move drew immediate criticism from Democrats while giving the Republican governor an opportunity to shift attention from the weather-induced blackouts that crippled the state two weeks ago. Texas' deregulated electric market, a product of the state's GOP leadership, is at the center of blame for the failures.

Hard-Earned Ground

The announcement flies in the face of pleas by federal health officials for a continuation of masking and other anti-virus protocols. Abbott's anti-pandemic measures also have grated on his conservative electoral base, which saw them as government overreach, and may have wounded any presidential aspirations. He received 0% of the vote in a presidential straw poll at the Conservative Political Action Conference this past weekend.

"At this level of cases with variants spreading, we stand to completely lose the hard-earned ground we have gained," Rochelle Walensky, director of the Centers for Disease Control and Prevention, said during a Monday briefing. "Please stay strong in your conviction, continue wearing your well-fitted mask and taking the other public health prevention actions that we know work."

Biden's Warning

Walensky warned that a fourth wave of Covid-19 infections could be in the offing without continued vigilance. On Tuesday, President Joe Biden reinforced her admonition. "I urge all Americans to please keep washing hands, stay socially distanced, wear masks," Biden said. "Now is not the time to let our guard down."

New Covid-19 cases in Texas dropped to a five-month low of 1,637 on Monday, state health department figures showed. Virus hospitalizations slipped to the smallest tally since Oct. 28.

"Too many Texans have been sidelined from employment opportunities; too many small-business owners have struggled to pay their bills," the governor said. "It is now time to open Texas 100%." Earlier Tuesday, Abbott said in a tweet that Texas is administering more than 1 million Covid-19 vaccinations weekly. "Texans have mastered the daily habits to avoid getting Covid," Abbott said.

As of Sunday, none of the state's 22 trauma-service areas had more than 15% of hospital capacity occupied by virus patients. The pandemic has claimed almost 43,000 Texans since it emerged in early 2020.

"An irresponsible decision guided by political expedience and nothing else," Houston City Controller Chris B. Brown said in a tweet. "Not only will this set us back in the battle against #COVID19 in the region, it will likely prolong the economic pain brought on by the pandemic."

©2021 Bloomberg L.P.


The Future of Video Security – Security Today

Even before the pandemic, cloud computing was flourishing. It is nearly impossible today to find an organization that doesn't use some form of cloud service. From applications to operating systems, to web servers, storage, and virtual LANs, an almost infinite array of solutions can be found in the cloud.

No doubt, the adoption of cloud-based tools and services will be a priority for organizations for years to come. The pandemic has brought a considerable acceleration of technology development and a broad demand and acceptance of countless new use cases, all unheard of just a year ago.

Gartner recently reported that the worldwide public cloud services market is forecast to grow 6.3% in 2020 to total $257.9 billion, up from $242.7 billion in 2019. Desktop as a service (DaaS) is expected to have the most significant growth in 2020, increasing 95.4% to $1.2 billion. DaaS offers an inexpensive option for enterprises supporting the surge of remote workers and their need to securely access enterprise applications from multiple devices and locations.

Part of the attraction of a cloud-based services approach is that it offers an easy and efficient way for an enterprise to manage applications and technology with a high level of redundancy, stability, and security. Simply put, cloud computing eliminates the problems of buying and maintaining hardware and software on a user-by-user, workstation-by-workstation basis. Cloud solutions are generally more affordable, which makes them suitable for businesses of any size.

THE VSAAS BOOM

For the reasons mentioned, cloud services are particularly attractive for professional security and surveillance applications. Some of the key factors driving the popularity of Video Surveillance as a Service (VSaaS) include the low cost of investment, the increased demand for real-time surveillance data, and the flexible scalability offered by cloud-based solutions.

According to a recent Markets and Markets report, the VSaaS market is expected to grow from $2.2 billion in 2020 to $4.7 billion by 2025, a compound annual growth rate of 16%.

This article originally appeared in the March 2021 issue of Security Today.

Excerpt from:
The Future of Video Security - Security Today

The cloud without the wait: mobile edge computing and 5G – Verizon Communications

It all starts with the cloud

The cloud stores your data, all your pictures and your phone contacts, and it processes information that helps make your favorite apps work. Cloud computing can do several things at once, really well: It can compute, store data and work with the network, all in one location. Many cloud providers, for example, have storage facilities that do cloud computing in locations all over the world. When you take a photo with your phone and send it to Instagram, it goes to a cloud facility, possibly several hops and four or five states away, where all the necessary computing takes place, and then it publishes to Instagram. It's a similar process for reading your morning email or listening to a podcast. For things like that, the centralized cloud works really well, and the latency is low enough that your experience is just fine.

But certain experiences require a lot of data to move very quickly between a device and the cloud. That's where MEC comes in. It brings the cloud closer to you.

The edge refers to the part of Verizons network that is closest to you: Your device connects to the network at the edge. And edge computing means bringing the cloud to the edge of the network closest to your device.

So how do you make edge computing more mobile, and closer to the devices that need it?

MEC is an entire network architecture that brings computing power close to any device that's using it. Instead of data going back and forth to cloud servers four or five states away, it's processed just miles or meters from the device. For this purpose, Verizon has installed cloud servers in its own access points across its networks.
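A back-of-envelope estimate shows why proximity matters. The sketch below assumes signals travel through fiber at roughly 200,000 km/s (about two-thirds of the speed of light); the distances are illustrative, and real latency adds routing hops, queuing, and processing time on top of propagation delay.

```python
# Propagation-only round-trip delay over fiber.
# Assumption: ~200,000 km/s signal speed in fiber; distances are made up.
FIBER_SPEED_KM_PER_MS = 200.0  # kilometres travelled per millisecond

def round_trip_ms(distance_km: float) -> float:
    """Round-trip propagation time for one hop out and back."""
    return 2 * distance_km / FIBER_SPEED_KM_PER_MS

# A regional cloud site ~1,500 km away vs. an edge site ~15 km away:
print(f"regional: {round_trip_ms(1500):.1f} ms")  # 15.0 ms
print(f"edge:     {round_trip_ms(15):.2f} ms")    # 0.15 ms
```

Even before adding any processing time, moving the compute from a distant region to the network edge cuts the physical floor on latency by two orders of magnitude.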

Read more:
The cloud without the wait: mobile edge computing and 5G - Verizon Communications

Red Hat supports high-availability apps in AWS and Azure – Blocks and Files

Red Hat Linux running in the AWS and Azure public clouds now supports high-availability and clustered applications with its Resilient Storage Add-On (RSAO) software. This means apps like SAS, TIBCO MQ, IBM WebSphere MQ, and Red Hat AMQ can all run on Red Hat Linux in AWS and Azure for the first time.

Announcing the update in a company blog post, Red Hat Enterprise Linux product manager Bob Handlin wrote: "This moment provides new opportunities to safely run clustered applications on cloud servers that, until recently, would have needed to run in your data centre. This is a big change."

AWS and Azure did not support shared block storage devices in their clouds until recently. One and only one virtual machine instance, such as EC2 in AWS, could access an Elastic Block Storage (EBS) device at a time. That meant high-availability applications, which guard against server (node) failure by failing over to a second node which can access the same storage device, were not supported.

Typically, enterprise high-availability applications such as IBM WebSphere MQ have servers accessing a SAN to provide the shared storage. These applications could not be moved to the public cloud without having shared block storage there.

Azure announced shared block storage with an Azure shared disks feature in July 2020, and AWS announced support for clustered applications using shared (multi-attach) EBS volumes in January this year. The company said customers could now lift-and-shift their existing on-premises SAN architecture to AWS and Azure without refactoring cluster-aware file systems such as RSAO's GFS2 or Oracle Cluster File System (OCFS2).

Red Hat's Resilient Storage Add-On lets virtual machines access the same storage device from each server in a group through Global File System 2 (GFS2). This has no single point of failure and supports a shared namespace and full cluster coherency, which enables concurrent access, and cluster-wide locking to arbitrate storage access. RSAO also features a POSIX-compliant file system across 16 nodes, and Clustered Samba or Common Internet File System for Windows environments.
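The arbitration GFS2 performs cluster-wide is conceptually the same as advisory file locking on a single host: before touching shared data, each accessor must win an exclusive lock. The sketch below is only a single-host analogy using POSIX `flock` (Unix-only); GFS2's distributed lock manager extends the same idea across every node that mounts the shared block device.

```python
import fcntl

# Single-host analogy for cluster-wide lock arbitration: the second
# accessor cannot take an exclusive lock while the first one holds it.
with open("/tmp/shared.dat", "w") as holder:
    fcntl.flock(holder, fcntl.LOCK_EX)  # first accessor wins the lock
    second = open("/tmp/shared.dat", "w")
    try:
        # Non-blocking attempt fails while the lock is held elsewhere.
        fcntl.flock(second, fcntl.LOCK_EX | fcntl.LOCK_NB)
        print("lock acquired")
    except BlockingIOError:
        print("lock contended")  # second accessor must wait or back off
    finally:
        second.close()
```

In GFS2 the equivalent contention is resolved by the cluster's lock manager rather than the local kernel, which is what makes concurrent access from multiple nodes safe.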

AWS and Azure's shared block storage developments have enabled Red Hat to port RSAO software to their environments. RSAO uses the GFS2 clustered filesystem and passes Fibre Channel LUN or iSCSI SAN data IO requests to either an AWS shared EBS volume or Azure shared disk as appropriate.

Handlin said Red Hat will test RSAO on the Alibaba Cloud and likely other cloud offerings as they announce shared block devices.


What is cloud native? – BusinessCloud

Today, people are talking more and more about cloud-native apps and cloud-native development. But what does this actually mean? Many applications are hosted on public cloud resources, but this alone doesn't mean they're cloud-native.

A definition of cloud-native

A cloud-native app is one which has been designed solely for the cloud. This means cloud computing has been leveraged in every element of its design, which separates it from an application that's merely been lifted and shifted (moved onto cloud) or cloud enabled (partially built on cloud).

This more cloud-based design approach garners many benefits, incorporating the latest technologies and practices, such as DevOps, microservices and containers.

This approach is whats known as cloud-native development.

Elements of cloud-native development

DevOps

DevOps is a delivery/development philosophy which holds that the ops and dev teams should be integrated for faster, more reliable development.

In practice, this means utilizing cloud automation technologies to reduce manual work and guard against human error.

The end goal of DevOps is to reach continuous integration/continuous delivery (CI/CD), in which new features can be tested and deployed with minimal human input.
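The fail-fast gating at the heart of CI/CD can be sketched in a few lines: stages run in order, and a failing stage halts the pipeline before anything reaches deployment. The stage names and bodies below are illustrative stand-ins, not a real CI system's API.

```python
from typing import Callable, List, Tuple

# Minimal CI/CD pipeline sketch: each stage is a (name, callable) pair
# returning True on success. A failure stops everything downstream,
# which is what keeps broken builds from being deployed automatically.
def run_pipeline(stages: List[Tuple[str, Callable[[], bool]]]) -> List[str]:
    completed = []
    for name, stage in stages:
        if not stage():
            break  # fail fast: later stages (including deploy) are skipped
        completed.append(name)
    return completed

result = run_pipeline([
    ("build",  lambda: True),
    ("test",   lambda: False),  # a failing test gate...
    ("deploy", lambda: True),   # ...prevents deploy from ever running
])
print(result)  # ['build']
```

Real CI/CD systems add parallelism, artifacts, and approvals, but the ordering-and-gating logic is the same.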

Microservices

Microservices is a cloud architecture approach in which applications are made up of many smaller cloud-based services. This is in contrast to monolithic architectures, in which all elements of an application are hosted centrally and are generally inseparable.

Microservice applications are more costly to develop in the short run, but once achieved, offer far more flexibility and scaling options than their monolithic counterparts.

Serverless

Serverless applications are not, as the name suggests, wholly serverless. Instead, the application's requests are handled by servers provisioned on an on-demand, per-request basis, as opposed to running on permanent cloud servers.

This comes with a host of benefits, including reduced latency, faster time to market, and lower production costs, completely bypassing the traditional costs of infrastructure provisioning.
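In the function-as-a-service model, the developer uploads only an entry-point function; the platform provisions compute per request and invokes it with an event payload. The sketch below mimics the shape of AWS Lambda's Python handler, but the event contents and greeting logic are hypothetical placeholders.

```python
import json

# A function-as-a-service style entry point. The platform, not the
# developer, decides when and where this runs; there is no long-lived
# server process in the application code itself.
def handler(event: dict, context=None) -> dict:
    name = event.get("name", "world")
    body = {"message": f"hello, {name}"}
    # Return an HTTP-style response, as API-gateway-fronted functions do.
    return {"statusCode": 200, "body": json.dumps(body)}

# Each invocation is independent; the platform would call this per request.
print(handler({"name": "edge"}))
```

Because billing follows invocations rather than uptime, an idle serverless function costs nothing, which is where the provisioning savings come from.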

Containers

Containerised applications run in containers, a newer take on the traditional cloud server, the VM.

Containers are much lighter than VMs in terms of required compute resources and are more environment agnostic, meaning they can be deployed on a greater range of infrastructures without undergoing change.

This means that, in many situations, containers offer increased flexibility and lower total cost.

Why cloud-native applications and cloud-native deployments will be key moving forward

Although some of the above cloud-native deployment methods are mutually exclusive, in general these approaches confer many of the same advantages, which are already indispensable in today's competitive landscape.

Compared to traditionally designed applications, and even to many cloud-enabled or lifted-and-shifted applications, cloud-native is:

More scalable

Solely cloud-based applications, and particularly microservice-based applications, can scale more easily. In the case of microservices, the ability to scale one service without scaling another is especially impactful, avoiding the cost of scaling all other elements in tandem, as a monolithic application would require.

Easier to manage

Cloud-native infrastructure is more geared towards automation and reduced management costs. Serverless is the most obvious example of this trend, with applications being uploaded as functions only, and provisioning taken care of automatically.

Quicker pace of development

Cloud-native applications are better suited to DevOps, which seeks to automate testing, building and deployment. This, in turn, leads to shorter overall time to market.

More reliable

Many cloud-native technologies are able to cope with faults far better than traditional technologies. Kubernetes, one of the most widely used container orchestration tools, automatically detects and heals non-functional containers.

Cloud-native technologies also allow faults to be more easily isolated.

Trends for the future

Increasingly, cloud-native is seen as the required benchmark for competitive development. And although many may cut corners with cloud-ready or cloud-enabled, the clear advantages of true cloud-native development and applications will ensure cloud-native becomes the norm in the years to come.


Akash Network, the World’s First Decentralized Cloud Computing Marketplace and the First DeCloud for DeFi, Develops Critical IBC Relayer for…

"This ability for chains to transact and interoperate will be revolutionary for the industry"

Key features for the IBC Relayer include:

Essential to launching the IBC protocol and the only way users will be able to use IBC, the Relayer is the user interface that enables all transfers and transactions on IBC. In development for over three years, IBC is the flagship feature of the Cosmos Network. For crypto and blockchain, where interoperability and composability are essential for continued growth for decentralized sectors like DeFi, IBC is the most promising and production-ready solution.

Akash will be one of the first networks in the world to integrate with IBC and IBC Relayer, through the early March 2021 launch of Akash MAINNET 2, the first viable decentralized cloud alternative to centralized cloud providers such as Amazon Web Services, Microsoft Azure, and Google Cloud.

For media inquiries, please contact Kelsey Ruiz at (916) 412-8709 or kelsey(at)akash(dot)network.

About Akash Network:

Akash Network is developing the world's first and only decentralized cloud computing marketplace, enabling any data center and anyone with a computer to become a cloud provider by offering their unused compute cycles in a safe and frictionless marketplace. Akash DeCloud greatly accelerates scale, efficiency, and price performance for DeFi, decentralized organizations, and high-growth industries like machine learning/AI.

Through Akash's platform, developers can easily and securely access cloud compute at a cost currently 2x-3x lower than centralized cloud providers (AWS, Google Cloud, and Microsoft Azure). The platform achieves these benefits by integrating advanced containerization technology with a unique staking model to accelerate adoption. For more information, visit: https://akash.network/

SOURCE Akash Network



The complexities of moving to the cloud | Industry Trends | IBC – IBC365

Media organisations are moving at different paces and there is no one-size-fits-all technical approach. While some have already moved lock, stock and barrel to the cloud, others face investment, training and technology dilemmas about how best to proceed.

"Cloud is a one-way street and something broadcast CTOs have to embrace," says Baskar Subramanian, Co-founder, Amagi. "It's a question of what to move and how fast."

The poster child for this is Discovery which began its wholesale move to AWS Cloud in 2016.

"Previously we'd have to buy a load of servers and install them, we'd need a file transfer system, and we wouldn't know what the return on investment would be over time," explains Simon Farnsworth, CTO Broadcast Technology & Operations, at an SDVI-hosted webinar. "Now, we can very accurately cost things like major new projects. It has become a lot less emotional and more binary since we can accurately predict cost."

He estimates Discovery's cloud-based supply chain has already saved the company $100m. Cloud is also claimed to have shaved $1bn in synergies from Discovery's 2018 acquisition of Scripps Networks.

"Historically, [when Discovery entered] new territories we had siloed ops teams with siloed tech stacks and siloed workflows, but we're able to standardise that now," Farnsworth says. "All the content for Discovery+ is in the cloud and it's just a question of feeding it through to our own operated platforms or to affiliates. We need to be fast. What [cloud] has allowed us to do is generate the same amount of content while investing a truck load in new product."

Comcast Technology Solutions, perhaps the largest service provider in the world, is about to make a major acceleration in moving its own supply chain (though not yet including Sky) to the cloud.

"We want to move into the cloud for flexibility and speed," explains Bart Spriester, VP and GM of Content and Streaming. "To provide services that spin up and down, and we need it to be usage based. We need to remove the integration lead time of on-prem solutions and remove capital approval cycles and slow software deployment."

In 2020 the company syndicated 66 million minutes of content to partners like Cox and Rogers and more than 170 affiliates. "With this volume we need to take a lot of friction out of the system," Spriester says. "We think there will be a huge benefit to moving this out to public cloud infrastructure."

The benefits of moving the supply chain to the cloud are clear. This includes the ability to build up and down rapidly and only pay for resources when required. Enterprise scale operations can be run with greater efficiency and accuracy than before.

"Changing from a custom on-premises environment, where different processes are done on different vendors' kit, to using common tools in an open source environment gives a much more consistent view of the operational state of the platform," explains Tony Jones, principal technologist, MediaKind. "The way you build and configure systems is declarative, meaning that you instruct system components what state you want them to be in, and the system works out how to get there."

It is the deterministic behaviour of systems in a cloud environment that means broadcasters can predict with far greater certainty exactly what operating a service should cost.
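The declarative, deterministic model described above can be sketched as a toy reconciliation loop: you state the desired state, and a controller computes the actions needed to converge the observed state toward it. Kubernetes controllers follow this same observe-diff-act pattern at scale; the replica-count example here is a deliberately simplified illustration.

```python
# Toy declarative reconciler: given a desired and an actual replica
# count, return the actions that converge actual toward desired.
# Because the output depends only on the declared state, behaviour
# (and therefore cost) is predictable.
def reconcile(desired: int, actual: int) -> list:
    if actual < desired:
        return ["start"] * (desired - actual)
    if actual > desired:
        return ["stop"] * (actual - desired)
    return []  # already converged: nothing to do

print(reconcile(desired=3, actual=1))  # ['start', 'start']
print(reconcile(desired=3, actual=3))  # []
```

Contrast this with imperative operations, where an operator issues step-by-step commands and the end state depends on the exact sequence executed.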

Add to that a microservices approach to development and deployment and broadcasters can upgrade equipment and introduce new features far faster, more economically and more flexibly than before.

"If broadcasters want to have a healthy future in competition with SVOD vendors they have got to think along those lines," urges Jones.

One destination, many paths

These arguments may be well known, but getting there is not straightforward for the majority of broadcasters. In the negative column are cost, complexity and cultural inertia.

"People are on different pathways," says Peter Sykes, Strategic Technology Development Manager at Sony Europe. "At one end you have more traditional organisations making the move from SDI to IP as a first step, while others are now moving to combine IP with cloud. Media companies know they have to reach new audiences but can't increase resources, and in some cases are having to reduce capital outlays."

This financial squeeze is one reason for a phased migration to cloud. Many broadcasters put their toe in first by moving disaster recovery operations. This has sped up since Covid-19 underlined the necessity for business continuity.

Another step might be to take less critical workflows like media processing and VOD to cloud. For others it makes sense to move complete sub-systems into a public cloud environment rather than a component-based approach. These systems are typically operated as-a-service by external providers.

"They could move a complete broadcast chain encompassing playout, compression and multiplexing or ABR packaging as one functional unit," Jones says. "There's not really any value to the broadcaster in building that themselves, but if they choose to take it prepackaged it's an operationally easier environment and there's just one [vendor] to talk to if there's a problem."

The pace also differs depending on delivery technology. "The traditional DTH anchor of broadcast delivery is moving slower than OTT DTC service launches, which are more likely to be cloud deployments," says Richard Mansfield, MediaKind's Streaming Director. "Broadcasters not ready to migrate their entire infrastructure are making this their first step."

Arguably, the biggest issues hindering broadcasters' moves to the cloud surround skills and mindset.

"The primary issue is cultural mindset more so than technology," says Subramanian. "It's a question of being comfortable with a particular way of doing things and a reluctance to do things differently."

Broadcasters used to plugging-in individual components using SDI or its IP version SMPTE 2110 face difficulties in working out how to apply that to the cloud.

"Imagine you picked a handful of vendors, one of whom deploys into AWS virtual machines, one deploys into a Kubernetes environment and another one into a Kubernetes service in a cloud provider," posits Jones. "How you integrate that as a complete system is a nightmare and probably beyond most broadcast engineers."

"Not only do different vendor applications need to interface together, but you also need to consider whether the deployment environments they work in are compatible with each other," he adds. "A lot of legacy software apps that were built to run on premises have been adapted for the cloud but are not cloud native. There are no standards for this deployment. It's a wild west."

"We have seen some big network operators that have been able to grasp that change, but it does take quite a big investment."

IT training needed

Related to this is the need for a whole new set of IT skills required of broadcast engineers.

"Broadcasters launching OTT services in the cloud are often doing so using an IT team," says Mansfield. "In the long run, this separation is insane. They are essentially doing the same thing as the broadcast team but delivering to a different output medium. To be successful those teams need to be merged together as one operation."

For Subramanian the answer lies in better education about the total cost of ownership of cloud workloads. "The finance team, the operations and tech departments are all used to a capex model which, when suddenly taken to an opex-driven model, catches them off guard. In some senses the cloud complicates life because there are so many different pieces of the puzzle."

In one simple illustration, buying a server for on-premises use versus putting a server on the cloud cannot be compared like for like. "With the cloud model you need to consider the networking gear, the data centre, the air con power and performance," he says.

Aside from Kubernetes, which is an orchestration layer adopted by all major cloud platforms, Subramanian agrees that the internet is fracturing away from the broadcast safety net of unified standards. He doesn't think this is a problem. "There will be a plurality of standards that we all need to support, including NDI, SRT, RIST and Zixi, but this multiplicity breeds innovation."

"Fundamentally, what is missing from the whole ecosystem is better education to create business models. We have seen customers cross that bridge once they understand the significant benefits."

Somehow, Discovery seems to have done this. "We managed to flip the [internal] conversation from finance looking at the bottom line to looking at metrics," explains Farnsworth. "How much volume is flowing through? What is the reliability like? What is the cost, so we can start delivering KPIs? It's a much more straightforward conversation and means we can concentrate on creating a better consumer experience rather than how we make it work technically."


Hundreds of Thousands Immigration and COVID Records Exposed in Jamaica – Security Boulevard

Jamaica just experienced a massive data breach that exposed the immigration and COVID-19 records of hundreds of thousands of people who visited the island over the past year. Much of the information found on the exposed server was from Americans.

According to TechCrunch, the Jamaican government contractor Amber Group left a storage server on Amazon Web Services (AWS) unprotected and without a password. The server was set to public, which enabled anyone to access the data. The unprotected data consisted of 70,000 COVID lab results, 425,000 immigration records, 250,000 quarantine orders, and 440,000 images of travelers' signatures.

Anyone traveling to Jamaica had to download Amber Group's app to report COVID results before being allowed entry. Those who tested positive in Jamaica had to use the app to monitor symptoms and allow the government to track their whereabouts to ensure they were quarantining. All of that data from the app, over 1.1 million records, was exposed.

When handling sensitive and private information, as Amber Group was, there is no room for mistakes like leaving a server unprotected. Organizations need to adopt cloud security strategies to protect their data and provide automated real-time remediation to catch risks.
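The class of check such tools automate can be illustrated with a few lines of pure logic. The dict shape below is invented for this sketch; a real audit would query the provider's API (for example, S3 bucket ACLs and account-level public-access-block settings) rather than a hand-built description.

```python
# Illustrative misconfiguration check: given a simplified description of
# a storage bucket's access settings, flag anything world-readable.
# The field names here are assumptions made for the sketch, not a real API.
def publicly_readable(bucket: dict) -> bool:
    if bucket.get("block_public_access"):
        return False  # a provider-level guardrail overrides any open ACL
    return any(
        grant.get("grantee") == "AllUsers" and grant.get("permission") == "READ"
        for grant in bucket.get("acl", [])
    )

leaky = {"acl": [{"grantee": "AllUsers", "permission": "READ"}]}
safe = {"acl": [{"grantee": "AllUsers", "permission": "READ"}],
        "block_public_access": True}
print(publicly_readable(leaky), publicly_readable(safe))  # True False
```

Running checks like this continuously, and remediating automatically when one fires, is what separates a momentary misconfiguration from a year-long exposure like the one described above.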

DivvyCloud by Rapid7 protects your cloud and container environments from misconfigurations, policy violations, threats, and IAM challenges. With automated, real-time remediation, DivvyCloud by Rapid7 customers achieve continuous security and compliance, and can fully realize the benefits of cloud and container technology.

The post Hundreds of Thousands Immigration and COVID Records Exposed in Jamaica appeared first on DivvyCloud.

*** This is a Security Bloggers Network syndicated blog from DivvyCloud authored by Shelby Matthews. Read the original post at: https://divvycloud.com/blog-covid-records-exposed-in-jamaica/?utm_source=rss&utm_medium=rss&utm_campaign=blog-covid-records-exposed-in-jamaica
