
Five Cloud Startups Going After AWS’ Blind Spots – The Information

Amazon Web Services has built a commanding lead in the cloud computing market by listening to what services and features its customers want and then delivering them. But some application developers believe AWS, in its relentless pursuit of Fortune 500 customers to fuel revenue growth, has become more aligned with corporate IT departments than with the coders who initially propelled its rise starting more than 15 years ago.

This has prompted several former AWS employees, as well as those from other cloud giants like Microsoft and Google, to launch and join startups selling software that makes AWS easier to use for individual app developers. The startups provide back-end services, such as spinning up the cloud servers and databases that power websites or automating the creation of the application programming interfaces that let apps share data. Despite representing a much larger pool of corporate spending, these businesses have received less venture capital in recent years than those developing front-end tools for designing the look and feel of applications and websites.

Read more from the original source:
Five Cloud Startups Going After AWS' Blind Spots - The Information


Could Russia plug the cloud gap with abandoned Western tech? Blocks and Files – Blocks and Files

What happens to a country when it runs out of cloud? We might just be about to find out, as Russia has apparently realized it'll be out of compute capacity in less than three months and is planning a grab for resources left by Western companies that have exited the country after Vladimir Putin's invasion of Ukraine.

A report in Russian newspaper Kommersant says the Kremlin is preparing for a shortage of computing power, which in the coming months may lead to problems in the operation of state information systems. Initial translations of the report referred to a shortage of storage.

The Russian Ministry of Digital Transformation reportedly called in local operators earlier this month to discuss the possibility of buying up commercial capacity, scaling back gaming and streaming services, and taking control of the IT resources of companies that have announced their withdrawal from the Russian Federation.

Apparently, authorities are conducting an inventory of datacenter computing equipment that ensures the uninterrupted operation of systems critical to the authorities. The ministry told the paper it did not envisage critical shortages, but was looking at mechanisms aimed at improving efficiency.

The report cited a 20 percent rise in public-sector demand for computing services, adding that one major driver is the use of smart cities and surveillance systems. Its source explained that, due to the departure of foreign cloud services, which were also used by some departments, demand for server capacity instantly increased.

Meanwhile, the report continues, Russia's datacenter operators are struggling, swept up in sanctions and economic turmoil and facing the challenge of sourcing kit while the ruble is collapsing. And they are effectively left with just one key supplier: China.

It's not like Russia was awash with datacenter and cloud capacity in the first place. According to Cloudscene, there are 170 datacenters, eight network fabrics, and 267 providers in Russia, a country with a population of 144 million.

None of AWS, Google or Azure maintains datacenters in Russia, and while there may be some question as to what services they provide to existing customers, it seems unlikely they'll be offering signups to the Russian authorities. Alibaba Cloud doesn't appear to have any datacenters in Russia either.

By comparison, the UK, with 68 million citizens, has 458 datacenters, 27 network fabrics, and 906 service providers, while the US's 333 million citizens enjoy 2,762 datacenters, 80 network fabrics, and 2,534 providers.

It's also debatable how much raw tin is available in the territory. In the fourth quarter, external storage systems shipped in Russia totaled $211.5m by value, up 34.2 percent, though volumes slipped 12.3 percent on the third quarter. In the same quarter, 50,199 servers were delivered, up 4.1 percent, with total value up 28.8 percent at $530.29m.

Server sales were dominated by Dell and HP. Storage sales were dominated by Huawei at 39.5 percent, with Russian vendor YADRO on 14.5 percent, and Dell on 11.2 percent by value, though YADRO dominated on capacity.

Now, presumably, Dell and HP kit will not be available. Neither will kit from Fujitsu, Apple, Nokia or Ericsson, nor cloud services from AWS, Google or Azure.

Chinese brands might be an option, but they'll still want to be paid, and the ruble doesn't go very far these days. Chinese suppliers will have to weigh the prospect of doing business in Russia against the possibility of becoming persona non grata in far more lucrative markets like Europe and, perhaps more scarily, being cut off from US-controlled components. Kommersant reported that Chinese suppliers have put deliveries on hold, in part because of sanctions.

So there are plenty of reasons for Russia to eke out its cloud compute and storage capacity. According to Kommersant: "The idea was discussed at the meeting to take control of the server capacities of companies that announced their withdrawal from the Russian market."

Could this fill the gap? One datacenter analyst told us that, in terms of feasibility, two to three months is doable, as what normally holds up delivery of services is permits, government red tape, and construction. If they are taking over existing datacenter space with connectivity and everything in place, they could stand up services pretty fast.

But it really depends on the nature of the infrastructure being left behind. This is not a question of annexing Western hyperscalers' estates, given they are not operating there. Which presumably leaves corporate infrastructure as the most likely target.

Andrew Sinclair, head of product at UK service provider iomart, said co-opting dedicated capacity that's already within a managed service provider or cloud provider might be fairly straightforward.

Things would be far more complicated when it came to leveraging dedicated private cloud infrastructure that's been aligned to the companies that are exiting. "These are well-recognized Fortune 500 businesses we've seen exiting. These businesses have really competent IT leaders. They're not just going to be leaving these assets in a state where people are going to be able to pick them up and repurpose them."

"From the Russian authorities' point of view, they would be going out and taking those servers, and then reintegrating them into some of these larger cloud service providers, more than likely. Even from a security perspective, a supply chain perspective, from Russia's perspective, would that be a sensible idea? I don't know," Sinclair added.

The exiting companies would presumably have focused on making sure their data was safe, he said, which would have meant eradicating all the data and zeroing all the SAN infrastructure.

"Following that, there's a question about whether they just actually brick all the devices that are left, whether they do that themselves, or whether the vendors are supporting them to release patches to brick them."

"Connecting Fibre Channel storage arrays that have been left behind to a Fibre Channel network? Reasonable. But to be able to do that in two to three months, and to be able to validate that the infrastructures are free of security exploits, all the drives have been zeroed, and it's all nice and safe? I think that's an extreme challenge."

But he added: "When you're backed into a corner, and there's not many choices available..."

Of course, it's unwise to discount raw ingenuity, or the persuasive powers the Kremlin can bring to bear. It's hard not to recall the story of how NASA spent a million dollars developing a pen that could write in space, while the Soviets opted to give their cosmonauts pencils. Except that this is largely a myth: the Fisher Space Pen was developed privately, and Russia used it too.

See original here:
Could Russia plug the cloud gap with abandoned Western tech? Blocks and Files - Blocks and Files


TYAN Drives Innovation in the Data Center with 3rd Gen AMD EPYC Processors with AMD 3D V-Cache Technology – PR Newswire

"The modern data center requires a powerful foundation to balance compute, storage, memory and IO that can efficiently manage growing volumes in the digital transformation trend," said Danny Hsu, Vice President of MiTAC Computing Technology Corporation's Server Infrastructure Business Unit. "TYAN's industry-leading server platforms powered by 3rd Gen AMD EPYC processorswith AMD 3D V-Cache technology give our customers better energy efficiency and increased performance for a current and future of highly complex workloads."

"3rd Gen AMD EPYC processors with AMD 3D V-Cache technology continue to drive a new standard for the modern data center with breakthrough performance for technical computing workloads due to 768 MB of L3 cache, enabling faster time-to-results on targeted workloads. Fully socket compatible with our 3rd Gen AMD EPYC platforms, customers can adopt these processors to transform their data center operations to achieve faster product development along with exceptional energy savings," said Ram Peddibhotla, corporate vice president, EPYC product management, AMD.

Optimized for technical computing workloads to boost performance

Leveraging breakthrough performance of 3rd Gen AMD EPYC processors with AMD 3D V-Cache technology, the TYAN Transport HX product line is built to optimize workloads like EDA, CFD, and FEA software and solutions. The Transport HX FT65T-B8030 is a 4U pedestal server platform featuring a single processor, eight DDR4-3200 DIMM slots, eight 3.5-inch SATA, and two NVMe U.2 hot-swap, tool-less drive bays. The FT65T-B8030 supports four double-wide PCIe 4.0 x16 slots for professional GPUs to accelerate HPC applications.

The Transport HX TN83-B8251 is a 2U dual-socket server platform with eight 3.5-inch hot-swap SATA or NVMe U.2 tool-less drive bays. The platform supports up to four double-wide GPU cards and two additional low-profile PCIe 4.0 x16 slots that provide an optimized topology to improve HPC and deep learning performance.

Optimized for HPC and virtualization applications, the Transport HX TS75-B8252 and Transport HX TS75A-B8252 are 2U dual-socket server platforms with support for 32 DIMM slots and two double-wide, active-cooled GPU cards. The TS75-B8252 accommodates twelve hot-swap, tool-less 3.5-inch drive bays with up to four NVMe U.2 support; the TS75A-B8252 accommodates 26 hot-swap, tool-less 2.5-inch drive bays with up to eight NVMe U.2 devices.

High memory footprints, multi-node servers to power big data computing

TYAN's Transport CX lineup is designed for cloud and data analytics workloads that require large memory capacity and fast data processing. The Transport CX GC79-B8252 and Transport CX GC79A-B8252 are 1U dual-socket server platforms that are ideal for high-density data center deployment with a variety of memory-based computing applications. These systems feature 32 DDR4 DIMM slots, two standard PCIe Gen.4 x16 expansion slots, and one OCP 3.0 LAN mezzanine slot. The GC79-B8252 platform offers four 3.5-inch SATA drive bays and four 2.5-inch NVMe drive bays with tool-less carriers, while the GC79A-B8252 platform offers twelve 2.5-inch drive bays, all with NVMe U.2 support.

The Transport CX TN73-B8037-X4S is a 2U multi-node server platform with four front-serviced compute nodes. Each node supports one AMD EPYC 7003 Series processor with AMD 3D V-Cache technology, four 2.5-inch tool-less NVMe/SATA drive bays, eight DDR4 DIMM slots, three internal cooling fans, two standard PCIe Gen.4 x16 expansion slots, two internal NVMe M.2 slots and one OCP 2.0 LAN mezzanine slot. The platform is suited for high-density data center deployments and targets scale-out applications with large numbers of nodes.

Hybrid storage servers to drive outstanding performance

The TYAN Transport SX lineup is designed to deliver massive I/O and memory bandwidth for storage applications. The Transport SX TS65-B8253 is a 2U hybrid software storage server for various data center and enterprise deployments, featuring dual-socket CPUs, 16 DDR4 DIMM slots and seven standard PCIe 4.0 slots. The platform is equipped with up to two 10GbE and two GbE onboard network connections, twelve front 3.5-inch tool-less SATA drive bays with up to four NVMe U.2 support, and two rear 2.5-inch tool-less SATA drive bays for boot drive deployment.

TYAN's Transport SX TS65-B8036 and Transport SX TS65A-B8036 are 2U single-socket storage servers with support for 16 DDR4 DIMM slots, five PCIe 4.0 slots and one OCP 2.0 LAN mezzanine slot. The TS65-B8036 accommodates twelve front 3.5-inch drive bays with up to four NVMe U.2 support, and two rear 2.5-inch hot-swap, tool-less SATA drive bays for boot drive deployment. The TS65A-B8036 offers 26 front and two rear 2.5-inch hot-swap, tool-less drive bays for high-performance data streaming applications; the 26 front drive bays can support up to 24 NVMe U.2 devices depending on configuration.

AMD EPYC 7003 processors with AMD 3D V-Cache technology can run on TYAN's existing AMD EPYC 7003 platforms through a BIOS update. Customers can enjoy faster time-to-results on targeted workloads powered by new AMD EPYC 7773X, 7573X, 7473X, and 7373X processors.

SOURCE MiTAC Computing - TYAN

Go here to see the original:
TYAN Drives Innovation in the Data Center with 3rd Gen AMD EPYC Processors with AMD 3D V-Cache Technology - PR Newswire


Greg Osuri creates ripples of growth with Akash Network in cloud computing – Newsd.in

The co-founder and CEO of Akash Network Greg Osuri is determined to transform the future of cloud computing.

Isn't it incredible to learn about all those people who make sure to cross boundaries and create a unique niche for themselves in all that they choose to lay their hands on? The world is filled with success stories, but a few rare gems like Greg Osuri strive to make a prominent difference in their respective industries with their brands and businesses. Taking over the technological world and finding his footing in the digital financial industry with his token $AKT, Greg Osuri has over the years come a long way as an entrepreneur of influence in the ever-so-evolving and competitive tech world.

The kind of innovations that have happened so far in the technological and digital world can be attributed to the rigorous efforts and astute ideas of passionate beings like Greg Osuri, who have given their all to bring about a wave of great change in their respective industries. He loves to build things for people who build things. As the co-founder and CEO of Akash Network, Greg Osuri has been transforming the future of cloud computing, and how.

Cloud computing is the delivery of computing services over the internet (the cloud), including storage, servers, software, networks, analytics, intelligence, and databases, all of which paves the path for faster innovation, economies of scale, and flexible resources.

Speaking more about Akash Network, Greg Osuri says that it is infrastructure that powers Web3 and a distributed peer-to-peer marketplace for cloud compute. It offers fast and simple deployment, where people can deploy their applications in minutes without having to set up, configure or manage servers. He further explains that any cloud-native, containerized application can be deployed on Akash Network's decentralized cloud, whether decentralized projects, serverless apps, or traditional cloud-native workloads.

His clients so far have showered him with positive testimonials and have thanked him for contributing heavily to the general Cosmos community and the blockchain industry as a whole. Greg Osuri is more than we know him to be: he is also a scientist, economist, artist, and storyteller through his photography. Do follow him on Twitter: https://twitter.com/gregosuri.

Read more here:
Greg Osuri creates ripples of growth with Akash Network in cloud computing - Newsd.in


Why machine identities matter (and how to use them) – Help Net Security

The migration of everything to the cloud and corresponding rise of cyberattacks, ransomware, identity theft and digital fraud make clear that secure access to computer systems is essential. When we talk about secure access, we tend to think about humans getting access to applications and infrastructure resources. But the real security blind spot is the computing infrastructure, i.e., the machines themselves.

The modern digital economy relies on a massive network of data centers, with reportedly 100 million servers operating worldwide. These 100 million physical servers might represent nearly a billion virtual servers, each an entry point for hackers and state-sponsored bad actors. Additionally, depending on which analyst you listen to, the number of connected devices shows no signs of slowing down: the installed base for the internet of things (IoT) was reportedly around 35 billion by the end of 2021, with 127 new devices hooking up to the internet every second. That is an incredible amount of machine-to-machine communication, even more so when you factor in the 24/7 demands of the connected society.

At the same time, denial of service (DoS) attacks and most hacking attempts are also automated. Human hackers write software exploits, but they rely on large fleets of compromised computers to deploy them.

In the dangerous world of cloud computing, machines are hacking into machines.

For these reasons alone, it is not hyperbole to say that machine identities and secure access have become a priority for IT leaders and decision makers alike. In the 18 months since machine identity management made its debut on the Gartner 2020 IAM Hype Cycle, the trust that we need to have in the machines we rely on for seamless communication and access has become a critical part of business optimization.

The fundamental reason for the increase of successful hacking attempts is explained by the fact that machine-to-machine access technology is not as advanced as its human-to-machine counterpart.

It is well accepted that reliance on perimeter network security, shared accounts, or static credentials such as passwords is an anti-pattern. Instead of relying on shared accounts, modern human-to-machine access is now performed using human identities via SSO. Instead of relying on a network perimeter, a zero-trust approach is preferred.

These innovations have not yet made their way into the world of machine-to-machine communication. Machines continue to rely on static credentials, an equivalent of a password known as the API key. Machines often rely on perimeter security as well, with microservices connecting to databases without encryption, authentication, authorization, or audit.

There is an emerging consensus that password-based authentication and authorization for humans is woefully inadequate to secure our critical digital infrastructure.

As a result, organizations are increasingly implementing passwordless solutions for their employees that rely on integration with SSO providers and leverage popular, secure, and widely available hardware-based solutions like Apple Touch ID and Face ID for access.

However, while machines both outnumber humans and have the capacity to create more widespread damage due to scale and automation, they are still frequently using outdated security methods like passwords to gain access to critical systems.

These methods include, but are not limited to, static passwords, shared accounts, and long-lived API keys.

If passwords are insufficient to protect applications and infrastructure resources for humans, we need to acknowledge that they are even worse for machines. But what should we replace them with? Without fingertips or a face, Touch ID and Face ID are non-starters.

I believe the answer is short-lived, cryptographically secure certificates. Every machine and every microservice running on it must receive a certificate and use it to communicate with others.

A certificate is superior to other forms of authentication and authorization in multiple ways.

First, it contains metadata about the identity of its owner. This allows production machines to assume a different identity from the staging or testing fleet. A certificate allows for highly granular access, so the blast radius from a compromised microservice will be limited only to resources accessible to that microservice. Certificates also expire automatically, so the loss of a certificate will limit the exposure even further.
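To make this concrete, here is a minimal sketch of issuing a short-lived machine certificate with Python's widely used cryptography library. The service name, environment label, CA name, and one-hour lifetime are illustrative assumptions, not a prescribed implementation.

```python
import datetime

from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Hypothetical CA key; in practice this lives inside your certificate authority.
ca_key = ec.generate_private_key(ec.SECP256R1())
ca_name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "internal-ca")])

# Key pair for the machine or microservice requesting an identity.
svc_key = ec.generate_private_key(ec.SECP256R1())

now = datetime.datetime.now(datetime.timezone.utc)
cert = (
    x509.CertificateBuilder()
    # Identity metadata: which service this is, and which environment it runs in.
    .subject_name(x509.Name([
        x509.NameAttribute(NameOID.COMMON_NAME, "billing-service"),
        x509.NameAttribute(NameOID.ORGANIZATIONAL_UNIT_NAME, "production"),
    ]))
    .issuer_name(ca_name)
    .public_key(svc_key.public_key())
    .serial_number(x509.random_serial_number())
    # Short-lived: the certificate expires automatically after one hour,
    # limiting the exposure if it is ever stolen.
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(hours=1))
    .sign(ca_key, hashes.SHA256())
)

print(cert.subject.rfc4514_string())
```

A production deployment would have the service submit a certificate signing request to the CA rather than generating everything in one place; the sketch collapses that exchange for brevity.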

Certificates are not new. They adhere to the open standard called X.509 and are already widely used to protect you when you visit sites like this one. The little lock in the address bar of your browser is the result of a Certificate Authority confirming that the website is encrypting traffic and has a valid SSL/TLS certificate. The certificate prevents a phony website from impersonating a legitimate one. Let's Encrypt is the most popular way to generate these certificates for websites and is currently used by over 260 million websites worldwide.

We need to adopt certificates for all forms of machine-to-machine communication. Like Let's Encrypt, this system should be open source so anyone can use it regardless of ability to pay. It should be trivial to request, distribute, and renew certificates that uniquely identify a machine.

If all machines have an identity, organizations can manage access to infrastructure with one passwordless system that treats people and machines the same way. This simplicity is not only more secure, since complexity is the most common cause of insecurity, but it also dramatically simplifies implementation. For example, companies already have rules that prevent an intern from gaining root access on a production server. Now, they can have a rule that dictates that a CI/CD bot should not be able to log in to a production database. Both users can be authenticated with the same technique (short-lived certificates), authorized using the same catalog of roles, and audited with the same logging and monitoring solutions.

The joy of being a human is increasingly mediated by machines. Maybe you are singing happy birthday via Zoom to a distant relative, or opening a college savings account for a grandchild. None of this is possible without a vast fleet of servers spread across the world. We all deserve to know that the machines making up this network have an identity, and that their identity is used to explicitly authorize and audit their actions. By moving machine identity out of the shadows, the world will be a safer place.

Read the original here:
Why machine identities matter (and how to use them) - Help Net Security


Countless App Developers Have Unsecured Cloud Databases, And Its Putting Consumers at Risk – Digital Information World

The move to cloud databases has the potential to transform data storage because it frees businesses from relying on bulky, space-consuming physical servers. A prime disadvantage of cloud-based data storage, however, is that it is vulnerable to all manner of cyber attacks, which makes it essential for businesses to use encryption and other methods to keep this data secure.

There have been numerous examples of unsecured cloud databases getting hacked, resulting in the widespread loss of personal and private data, but it seems many app developers haven't gotten the memo. A recent analysis conducted by Check Point Research revealed that over 2,000 apps, or 2,113 to be precise, don't have an adequate level of security for their cloud data storage.

Some of these apps are relatively small scale, with only a few thousand downloads at most, but there are also apps with over ten million downloads that have unsecured cloud data storage. That is especially concerning given the sensitive nature of the data stored in these databases.

Chat history, photos, and a wide range of other personal data can be easily stolen by hackers due to the lack of security measures put in place by these app developers. It's clear that this is not something developers are taking all that seriously, and if things continue in this vein we might see a crisis of lost data in the near future. Such insecurity could lead to problems as severe as identity theft, and steps need to be taken to mitigate the risk consumers are currently being exposed to.
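To illustrate how little effort such exposure checks require, here is a rough sketch that probes whether a cloud database endpoint answers unauthenticated requests. The URL is a hypothetical placeholder, and this is an assumed illustration of the general idea, not the methodology Check Point Research used.

```python
import requests

# Hypothetical endpoint of a cloud-hosted database that should require auth.
DB_URL = "https://example-app.cloud-db.example.com/records.json"

def is_publicly_readable(url: str) -> bool:
    """Return True if the endpoint serves data without any credentials."""
    try:
        resp = requests.get(url, timeout=5)
    except requests.RequestException:
        return False
    # A properly secured database should answer 401/403 to anonymous reads.
    return resp.status_code == 200 and len(resp.content) > 0

if is_publicly_readable(DB_URL):
    print("WARNING: database responds to unauthenticated reads")
```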

Read the original post:
Countless App Developers Have Unsecured Cloud Databases, And Its Putting Consumers at Risk - Digital Information World


Slazzer launches the first ever on premise image background removal Platform As A Service (PaaS) – Gulf News

While Slazzer's cloud-based solution already provides a high level of security and privacy, its on premise solution goes a step further for when data simply cannot leave local servers. Clients will now have the ability to control the deep-learning visual recognition technology as if it were their own.

"Our on premise offering is the most comprehensive image background removal solution for private cloud, ecommerce, advertising, design, photography, government, medical, legal, and highly sensitive corporate content." "Slazzer's image recognition technology, when installed on a server, can process millions of images without leaving the company's premises," says Deep Sircar, CEO and co-founder. "It ensures the utmost in data security and privacy." No other image AI company in the world offers this level of control and security."

Deployment will require a minimum server configuration of 16GB of GPU memory, 16 vCPUs, and 110GB of RAM. When running, the on premise solution offers parameters identical to the existing Slazzer API to create a transparent, colored or custom background for all images, as well as additional options to position, scale, crop, set a crop_margin, or create a region of interest. It is highly scalable and delivers the same level of accuracy as the existing cloud offering, which is currently deployed by over 15,000 developers worldwide and has been used by companies such as Microsoft, ScandiSystem, and Visme.
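As a sketch of what calling such an API might look like, the snippet below submits an image for background removal with a colored background and a crop margin. The endpoint URL, header name, and exact parameter spellings are assumptions inferred from the options the announcement lists; consult Slazzer's documentation for the real values.

```python
import requests

# Hypothetical endpoint and key; the real values come from Slazzer's docs.
API_URL = "https://api.slazzer.com/v2.0/remove_image_background"
API_KEY = "your-api-key"

with open("product_photo.jpg", "rb") as image:
    response = requests.post(
        API_URL,
        headers={"API-KEY": API_KEY},
        files={"source_image_file": image},
        data={
            "bg_color": "#ffffff",   # colored background instead of transparent
            "crop": "true",          # crop the output to the foreground subject
            "crop_margin": "10px",   # margin to keep around the cropped subject
        },
        timeout=30,
    )

response.raise_for_status()
with open("product_photo_no_bg.png", "wb") as out:
    out.write(response.content)
```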

Read the original post:
Slazzer launches the first ever on premise image background removal Platform As A Service (PaaS) - Gulf News


Why Ransomware Attacks Steer Clear of the Cloud – Business Wire

FREDERICK, Md.--(BUSINESS WIRE)--In a brief video explainer and commentary, Josh Stella, chief architect at Snyk and founding CTO of Fugue, a cloud security and compliance SaaS company, talks to business and security leaders about why the cloud is generally spared from ransomware and examines the top threat to their cloud environments.

Ransomware made news headlines worldwide earlier this month after a successful attack against one of Toyota Motor Corp.'s parts suppliers forced the automaker to shut down 14 factories in Japan for a day, halting their combined output of around 13,000 vehicles.

That attack was the latest example of the threat ransomware poses to all industries. The most recent edition of SonicWall's annual threat report states that the volume of ransomware attacks in 2021 has risen 231.7% since 2019. And an advisory jointly issued by the Cybersecurity and Infrastructure Security Agency (CISA), the FBI, and the NSA reveals the latest trend: ransomware as a service, in which gangs of bad actors essentially franchise their ransomware tools and techniques to less organized or less skilled hackers.

Clearly, protecting against ransomware attacks must be part of your organization's holistic cybersecurity strategy if you're still operating data center infrastructure rather than cloud infrastructure. Hardening data centers and endpoints to protect against ransomware attacks is mandatory, but cloud infrastructure faces a different kind of threat. And if your organization is all in on cloud, ransomware is less of a worry.

What Is Ransomware?

Don't confuse a ransomware attack with a data breach, which involves stolen data. The purpose of ransomware is not to steal your data (although that can also occur during a ransomware attack) but rather to take control of the systems that house your data, encrypt it, and prevent you from accessing it until you pay the ransom. This can have a devastating impact on an organization by effectively shutting down operations until access to the data is restored.

While ransomware is a major cybersecurity threat, we're simply not seeing ransomware attacks executed against cloud environments. The reason for this involves fundamental differences between cloud infrastructure and data center infrastructure.

A New Threat Landscape

Your cloud environment is not simply a remote replica of your onsite data center and IT systems. Cloud computing is 100% software, driven by application programming interfaces (APIs), the software middlemen that allow different applications to interact with each other. The control plane is the API surface that configures and operates the cloud.

For example, you can use the control plane to build a virtual server, modify a network route, and gain access to data in databases or snapshots of databases (which are actually a more popular target among cloud hackers than live production databases). The API control plane is the rapidly growing collection of APIs your organization uses to configure and operate the cloud.

The priority for all cloud platform providers like Amazon, Google and Microsoft is to ensure your data is robust and resilient. Replicating data in the cloud is both easy and cheap, and a well-architected cloud environment ensures there are multiple backups of your data. That's the key inhibitor to an attacker's ability to use ransomware: multiple copies of your data negate their ability to lock you out. If an attacker is able to encrypt your data and demands ransom from you, you can simply revert to the latest version of the data prior to the encryption.
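As one concrete flavor of that revert-to-a-clean-copy idea, the sketch below uses AWS's boto3 client to roll an S3 object back to the version that preceded the current (encrypted) one. It assumes bucket versioning was enabled beforehand; the bucket and key names are illustrative.

```python
import boto3

# Hypothetical bucket and object; assumes S3 versioning is turned on.
BUCKET, KEY = "corp-data-bucket", "ledger/accounts.db"

s3 = boto3.client("s3")
versions = s3.list_object_versions(Bucket=BUCKET, Prefix=KEY)["Versions"]

# Versions come back newest-first; pick the latest one that is not current,
# i.e., the last copy written before the object was overwritten/encrypted.
previous = next(v for v in versions if v["Key"] == KEY and not v["IsLatest"])

# Restoring is just copying the older version back on top of the object.
s3.copy_object(
    Bucket=BUCKET,
    Key=KEY,
    CopySource={"Bucket": BUCKET, "Key": KEY, "VersionId": previous["VersionId"]},
)
```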

The redundancy and resiliency that AWS, Google and Microsoft are building for hundreds of thousands of their customers running millions of servers and networks are impossible for you to replicate in your own data center infrastructure. And if access to your on-premises systems is taken away from you and your data encrypted, it can be extremely difficult, and in some cases effectively impossible, to regain access without paying the ransom.

Security in the cloud is different because it's a function of good design and architecture, not intrusion detection and security analysis. Hackers are not trying to penetrate your network in order to lock you out of your systems; they're trying to exploit cloud misconfigurations that enable them to operate against your cloud control plane APIs and steal your data right out from under you.

What Is Cloud Misconfiguration?

A misconfiguration can vary from individual resource misconfigurations that appear simple, such as leaving a port open, to significant architectural design flaws that attackers use to turn a small misconfiguration into a massive blast radius. And I can guarantee that if your organization is operating in the cloud, your environment has both kinds of vulnerabilities. The good news is that because cloud infrastructure is software that can be programmed, these kinds of attacks can be prevented with software engineering approaches using policy as code.

Build Cloud Security on Policy as Code

When developers build applications in the cloud, they're also building the infrastructure for the applications, as opposed to buying physical infrastructure and deploying apps into it. The process of designing and building cloud infrastructure is done with code, which means developers own that process, and this fundamentally changes the security team's role.

In a completely software-defined world, security's role is that of the domain expert who imparts knowledge to the people building stuff, the developers, to ensure they're working in a secure environment. And that knowledge is delivered as automated developer tooling that leverages policy as code rather than checklists and policy documents written in a human language.

Policy as code enables your team to express security and compliance rules in a programming language that an application can use to check the correctness of configurations. It's designed to check other code and running environments for unwanted conditions or things that should not be. It empowers all cloud stakeholders to operate securely without any ambiguity or disagreement about what the rules are and how they should be applied at both ends of the software development life cycle (SDLC).
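To make the idea tangible, here is a minimal policy-as-code check written in plain Python rather than a dedicated policy language such as Rego; the configuration shape and the rule itself are simplified assumptions for the sketch.

```python
# A simplified, hypothetical snapshot of security group configurations.
security_groups = [
    {"name": "web", "port": 443, "cidr": "0.0.0.0/0"},
    {"name": "db", "port": 5432, "cidr": "0.0.0.0/0"},        # misconfigured
    {"name": "db-internal", "port": 5432, "cidr": "10.0.0.0/16"},
]

def violates_open_db_policy(group: dict) -> bool:
    """Policy: database ports must never be reachable from the whole internet."""
    return group["port"] == 5432 and group["cidr"] == "0.0.0.0/0"

violations = [g["name"] for g in security_groups if violates_open_db_policy(g)]
if violations:
    # In a CI/CD pipeline, this is where the deployment would be blocked.
    raise SystemExit(f"Policy violations found: {violations}")
```

The same rule can run at both ends of the SDLC: against infrastructure-as-code templates before deployment and against the live environment afterward.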

Cloud Security Must Be Automated

At the same time, policy as code automates the process of constantly searching for and remediating misconfigurations. No other approach succeeds at this in the long run, because the problem space keeps growing: the number of cloud services, the number of deployments you have, and the amount of resources all keep growing. So you must automate, both to relieve security professionals from spending their days manually monitoring for misconfigurations and to enable developers to write code in a way that is flexible, that can be changed over time, and that can incorporate new knowledge, such as the latest big data breach that makes news headlines.

Harden Your Cloud Security Posture

Organizations that have implemented effective cloud security programs share some characteristics that any enterprise can emulate to harden their cloud security posture:

I don't want to downplay the threat ransomware attacks pose to your organization, and I encourage you to visit http://www.StopRansomware.gov, the U.S. federal government's resource for learning how to protect yourself from becoming a ransomware victim.

But I also want to emphasize that although your cloud environments are not highly vulnerable to ransomware, the risk of a data breach due to misconfigurations is high and growing as you adopt more cloud-based platforms and services.

The best defense is prevention. Use policy as code in the development phase, in the continuous integration/continuous delivery (CI/CD) pipeline, and in the runtime to quickly identify and remediate misconfigurations. As you gain maturity, these steps can be operationalized throughout your DevOps processes so that the entire process is automated and efficient.

About Josh Stella

Josh Stella is chief architect at Snyk and a technical authority on cloud security. Josh brings 25 years of IT and security expertise as founding chief technology officer at Fugue, principal solutions architect at Amazon Web Services, and advisor to the U.S. intelligence community. Josh's personal mission is to help organizations understand how cloud configuration is the new attack surface and how companies need to move from a defensive to a preventive posture to secure their cloud infrastructure. He wrote the first book on immutable infrastructure (published by O'Reilly), holds numerous cloud security technology patents, and hosts an educational Cloud Security Masterclass series. Connect with Josh on LinkedIn and via Fugue at http://www.fugue.co.

About Fugue

Fugue (part of Snyk) is a cloud security and compliance SaaS company enabling regulated companies such as AT&T, Red Ventures, and SAP NS2 to ensure continuous cloud security and earn the confidence and trust of customers, business leaders, and regulators. Fugue empowers developer and security teams to automate cloud policy enforcement and move faster in the cloud than ever before. Since 2013, Fugue has pioneered the use of policy-based cloud security automation and earned the patent on policy as code for cloud infrastructure. For more information, connect with Fugue at http://www.fugue.co, GitHub, LinkedIn and Twitter.

All brand names and product names are trademarks or registered trademarks of their respective companies.

Tags: Fugue, Snyk, cloud security, SaaS, Josh Stella, ransomware, policy as code, cybersecurity, cloud, infrastructure as code, open source, cloud security automation, network configuration, cloud configuration, cloud misconfiguration, data breach, cloud threats, application programming interface, API

Read this article:
Why Ransomware Attacks Steer Clear of the Cloud - Business Wire


Datadog: Sorting Through The Rubble – Seeking Alpha


The last couple of years in the market have seen significant volatility. It started with the COVID crash in March 2020, which whipsawed into an insane rally through the end of that year, seemingly punishing anyone too fearful to buy the dip. Now, the investors who benefitted most from the high-growth rally at the end of 2020 have taken a beating in early 2022. A few names worth mentioning are Zoom (NYSE:ZM) (80% off highs), Roku (ROKU) (68% off highs), and Fastly (FSLY) (87% off highs). The baby has been thrown out with the metaphorical bathwater in the high-growth space, much in the same way that the rising tide lifted all boats in 2020, regardless of the validity of the investment thesis.

Datadog

One of the best-in-class cloud operators I want to take a look at today is Datadog (NASDAQ:DDOG). The best analogy I've seen to explain what the company does is that it operates a cloud version of the Windows Task Manager, but for a company's entire technology stack.

That's dumbed down, and the company continues to buy and build new features for its clients, but it's good enough to understand the business at a basic level. The company operates a monitoring and security platform for companies' cloud applications, which is used for event-tracking of enterprise services. The software is compatible with nearly every cloud service and with on-premises servers, and the company continues to expand its offering toward an all-in-one solution.

Like many other cloud companies, Datadog is looking to ease the transition for companies as they conduct their digital transformation. In this case, companies may have multiple different pieces of software or some home-grown monitoring that doesn't communicate across the enterprise when there are issues. Datadog seeks to solve that by "breaking down silos" and not only enhancing visibility but also predicting issues using commonalities across the customer base like on-premises shared hardware, third-party software, etc.
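For a flavor of what that event-tracking looks like from the developer's side, here is a brief sketch using Datadog's official Python client to submit a custom metric; the metric name, tags, and placeholder keys are illustrative assumptions.

```python
import time

from datadog import initialize, api

# Placeholder credentials; real keys come from your Datadog account.
initialize(api_key="YOUR_API_KEY", app_key="YOUR_APP_KEY")

# Report a custom application metric that Datadog can graph, alert on,
# and correlate with logs and traces from the same service.
api.Metric.send(
    metric="checkout.request.latency_ms",
    points=[(time.time(), 142.0)],
    tags=["env:prod", "service:checkout"],
)
```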

An interesting case study discussed by Datadog management was Seven.One Entertainment, a German company that was able to lower its monitoring costs by 78% after onboarding Datadog software.

Datadog Investor Presentation

Another interesting set of metrics: 78% of customers now use at least two products, up from 72% a year ago; 43% use at least four products, up from 22%; and 10% use six or more products, up from 3%.

Add that to a >130% net revenue retention rate for 18 straight quarters and those metrics set the stage for an incredibly successful SaaS business.

Datadog Investor Presentation

If you haven't been paying much attention to the market, cloud software has been a huge story, and for good reason. Between high margins, easy onboarding, and the necessity of digitization made abundantly clear by a global pandemic, you couldn't lose investing in the cloud space in 2020. I think this year has shown the importance of digging a little deeper and finding the best-in-class operators, but either way it's a rapidly growing market opportunity. Gartner Peer Insights projects just IT operations management to be a $25B opportunity in 2025.

Datadog Investor Presentation

Generally speaking, these software/tech companies tend to compete with different rivals across their business lines. Datadog is no exception: there is no perfect parallel, but Splunk (NASDAQ:SPLK) and Dynatrace (NYSE:DT) are decent proxies.

Gartner Peer Insights

Looking above, Datadog has been well reviewed within its industry on Gartner Peer Insights, and carries a similar rating to its competitors mentioned above. Datadog typically competes with Splunk in log management and Dynatrace in application performance monitoring. However, when discussing competition on the most recent earnings call, Co-Founder and CEO Olivier Pomel had this to say:

So, first of all, we don't actually see the competition all that much. So I don't wake up every morning asking myself how are we going to win or whether we're winning. We mostly compete against customers building it themselves or being under-tooled and starting in the cloud without a clear idea of what's going on. We do see a few big replacements in every quarter. We've mentioned a few on the call. When that's the case, we - I mean, the ones we mentioned are typically the ones that are upmarket. The reason why we win in those situations is we offer integrated platforms where others don't. We're cloud native where others aren't. And most importantly, we have a lot more usage and adoption from the teams on the ground around our product. So, that's the deployed everywhere, used by everyone saying that I repeat at every call, and that really is what makes us win in the end with customers. And that applies upmarket, that applies down-market, that applies everywhere.

[W]e build a product and a company that serves the whole market, like the whole gamut of potential customers. We think that developers at small companies behave, especially in the cloud, like they behave very much the way as developers in very large enterprises. They have the same tool box. They work the same way largely. And so, we build a product that serves everyone. We do expect to have very large counts of customers in the end. But to your second question, we also see right now a lot of that demand, a lot of the growth is coming from midmarket and large enterprises and also the higher end of the market. And we feel good about that part of the market, like we see it successfully standardize on Datadog. We see it successfully land and expand with us. I think we're growing faster - we're in equivalent size and growing faster than anybody else in the market for that specific part of the market. So, I think we feel good about it. That's a big part of what we do.

That's really the best-case scenario. Looking at it as a green-field opportunity, you don't have to worry as an investor about some of the pricing pain and potentially slower growth that can come with intensely competitive industries. There appears to be plenty of room and Datadog is growing into the field quickly.

Speaking of which, the company's recent results were fantastic. Q4 revenues of $326M were up 84% YoY, the company was GAAP profitable, and it generated $250.5M of free cash flow in 2021. Net revenue retention remained above 130% and the company is carrying $1.6B on its balance sheet. It's starting to make sense why the company is so expensive. Management has guided for $1.51-1.53B in FY2022 revenue, 48% growth, with gross margins in the high 70s and net income of $0.45-0.51 per share.

The company is obviously growing at a blistering pace, but the fact that management is keeping its expenses in check while continuing to maintain a best-in-class net revenue retention rate for existing customers sets it apart. Of course, keep an eye on everyone's favorite technology metric, share-based compensation. The share count rose 3% YOY, with share-based compensation up to $163M in 2021 from $74M in 2020. It's basically a fact of life at this point in the tech space, but it's still worth monitoring.

One additional takeaway from the most recent quarter is the win the company had on the federal government side, which could yield some highly lucrative and sticky contracts.

The goal is really to be able to fully go to market on the federal side. With the FedRAMP Low we had before, we were limited in the number of agencies we could target and we also were limited in the number of use cases. We basically could only target Infrastructure Monitoring use cases. We couldn't send logs, APM, things like that. With FedRAMP Moderate, what we can do is we can sell all of our products and we can pretty much sell to every single civilian agency in the federal government as well as a number of other government agencies that are - local agencies but that take the same or use FedRAMP as the same guidelines for security and compliance. So it really opens up the market. We've seen already some success to-date. We already have as of last quarter a $1 million-plus customer on the federal side on our FedRAMP on OSCAL offer. So we're optimistic, but we still have a lot of building to do on the go-to-market there, like it's not necessarily exactly the same go-to-market that we're used to.

I just wanted to highlight this as one of many growth vectors the company is pursuing.

Clouded Judgement Substack

Looking below, the market has not punished Datadog like much of the rest of the tech sector. After a significant run-up (the company IPO'ed in 2019 at $26 a share), the company is really treading water, a mere 27% off of its highs.

Finviz

I do own shares of Datadog, and I have continued to buy the company over time. I don't think it's a bargain today, but I no longer like to wait for the best companies to become bargains before owning them. Quite a bit of the froth has come out of the market, and amid the selloff, Datadog appears to me to be a best-in-class operator, already profitable, trading at a deserved premium, and one that will reward shareholders for many years to come.

Go here to see the original:
Datadog: Sorting Through The Rubble - Seeking Alpha


Digital Cleanup Day: It’s time to take out the digital trash – ComputerWeekly.com

In this guest post,Yann Lechelle, CEO of French cloud provider Scaleway, sets out the steps consumers and businesses can take to reduce the environmental impact of their digital activities, ahead of Digital Cleanup Day

Back in 2012, a survey revealed that 51% of Americans believe stormy weather affects cloud computing. This might seem amusing today, but even now there remains a disconnect among the general population between what digital is imagined to be and what it really is.

Things like the internet, the cloud, and other digital technologies are often regarded as light and fluffy, floating somewhere through the air, making it easy to forget that behind these services lies expansive infrastructure that gobbles up gargantuan amounts of resources.

Did you know that the digital sector uses 10% of all the electricity produced globally and is responsible for 4% of total global CO2 emissions? That's more than aviation.

A household internet router uses as much electricity as a refrigerator, and the energy consumed by a single bitcoin transaction (~2,250 kWh) could power an average US household for 2.5 months, or a French household for nearly half a year. More than that, every digital action has a physical impact: a single online search uses as much electricity as lighting a bulb for 1-2 minutes.
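The household comparison follows from simple arithmetic. The sketch below uses approximate consumption figures (roughly 900 kWh a month for a US household and 390 kWh for a French one); these averages are rough assumptions, not figures from the article.

```python
# Rough sanity check of the bitcoin-transaction comparison above.
BTC_TX_KWH = 2250          # energy of one bitcoin transaction, per the article
US_HOME_KWH_MONTH = 900    # approximate average US household consumption
FR_HOME_KWH_MONTH = 390    # approximate average French household consumption

print(BTC_TX_KWH / US_HOME_KWH_MONTH)  # ~2.5 months for a US household
print(BTC_TX_KWH / FR_HOME_KWH_MONTH)  # ~5.8 months, nearly half a year
```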

Even so, the bulk of the carbon impact of the digital sector comes from manufacturing. For example, 90% of the carbon impact of a smartphone comes from its manufacture and just 10% from its usage.

As things stand, the digital sector is a major contributor to the climate crisis. The bad news is that its global carbon dioxide footprint is expected to double to 8% by 2025.

The good news is that we can do something about it.

Digital Cleanup Day (March 19, 2022) is conceived to shine a light on these efforts, and to encourage us to minimize digital pollution by cleaning up digital trash, reducing e-waste, and spreading the word. And there are steps we can all take to make a difference.

When you upload something online, it's stored on a physical drive in a datacentre somewhere in the world. The emails in your inbox, the videos on your social media profile, the images in your cloud account: they all take up space on servers that are running 24/7.

The chances are, though, that you will never revisit most of those files, emails, and documents, which means they're sitting there wasting energy. It's like keeping a light on in a room that you'll never use.

Join thousands of others on Digital Cleanup Day and beyond in cleaning up your folders and deleting unnecessary files, apps, emails, photos, videos, and other digital waste.

E-waste is short for electronic waste and refers to the hardware itself. As mentioned earlier, the electronics manufacturing process is among the biggest culprits in terms of carbon emissions. In response, one of the best things you can do is to give your old electronics a second life.

Don't just throw out hardware: repair it, donate it, sell it, or repurpose it instead. Do you have an old laptop sitting in the back of the closet? Why not donate it to a local school to help support remote students?

Sometimes electronics are beyond repair. If it's not salvageable, then make sure you recycle it. E-waste can contain materials that are harmful to the environment, and disposing of them properly will help minimize the damage. Apple has even developed robots specifically trained to disassemble iPhones, so if you don't know what to do with your old smartphone, why not take it back to its maker?

The digital sector's environmental footprint is still something most people are oblivious to. You can amplify your own positive contribution toward a more sustainable future by involving others as well.

Simply having a chat with friends and family and educating them about how the internet and the cloud work (it's all run on electricity-powered, water-cooled servers) can help them understand the real-world impacts of their digital actions. In terms of concrete actions, you can, for example, invite your friends to switch to a privacy-focused messaging tool such as Signal: it doesn't process any data on you, and hence uses less energy. Privacy is green.

Spreading the word beyond your social circles also matters. You can organise events on social media, invite people to join digital cleanup initiatives, or even publicly call out organizations to consider their digital footprints and act to lessen them.

To that last point, all of your favorite online platforms and websites use hosting services (read: datacentres) to stay online. But not all datacentres are made equal: some are more efficient and environmentally friendly than others. Accordingly, you can write to companies and institutions and ask them to host their services in modern datacentres that waste fewer resources.

Deleting an email is like turning the water off while you brush your teeth: it's good to do your part, but your individual impact is small. That email takes up a minuscule fraction of just one server, whereas a datacentre will host thousands of servers that not only use up electricity, but also water for cooling.

How much electricity? Datacentres are reportedly responsible for ~1% of the world's electricity use, and this share is growing. To put it into perspective, a single datacentre can use as much power as a small city.

The numbers are jaw-dropping for water use as well. For instance, datacentres in the Netherlands use an average of 1 million cubic meters of water per year, which is roughly equivalent to the yearly water consumption of 20,000 people.
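That equivalence checks out with simple division, assuming typical per-person use of roughly 130 liters a day (an approximate figure, not from the article):

```python
# 1,000,000 cubic meters a year spread over 20,000 people:
per_person_m3_year = 1_000_000 / 20_000            # 50 m3 per person per year
per_person_liters_day = per_person_m3_year * 1000 / 365
print(per_person_liters_day)                       # ~137 liters a day
```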

Accordingly, minor datacentre efficiency improvements translate to massive resource savings. More than could ever be achieved by deleting old emails.

Lack of public awareness about the environmental impact of datacentres means inefficient ones aren't publicly scrutinized. Nor are the companies that choose to use their services. In turn, energy continues to be wasted.

That said, the tide is turning thanks to improved understanding of the role of the digital sector, which has been accelerated by events such as Digital Cleanup Day. Soon, popular pressure, hand-in-hand with legislative action, will reshape the environmental standards all datacentres must adhere to.

Scaleway stands with future generations to take on the urgent challenge of climate change, and empower them with the tools and knowledge needed to demand a more sustainable technology sector.

The climate crisis can be averted by a combination of individual, collective, legislative, and industrial measures. The only condition for success: that everyone plays their part, be it through deleting an email, donating an old laptop, or choosing an eco-conscious cloud service provider.

Read the original post:
Digital Cleanup Day: It's time to take out the digital trash - ComputerWeekly.com
