Category Archives: Cloud Servers

Ofcom concerned about Microsoft and Amazon domination of cloud market – Yahoo Finance UK

Ofcom could call in the competition regulator after finding concerns in the cloud services market, a backbone of the online world which is dominated by two companies.

The telecoms regulator proposed on Wednesday that the Competition and Markets Authority open its own probe into the sector amid concerns customers find barriers in their way when trying to switch suppliers.

The cloud space is dominated by two players, Amazon and Microsoft, which together hold an approximate 60%-70% market share.

Ofcom said it was particularly concerned about the two companies' practices because of their dominant position.

Millions of people and businesses have come to rely on cloud computing in recent years.

The cloud loosely refers to a series of massive servers around the world which users can tap into to store photographs or emails, or run software from.

Ofcom said there was still competition in the sector, with innovative products and discounts offered to new customers.

However, it was concerned for customers trying to move from one cloud supplier to another.

The massive suppliers charge significantly higher fees than smaller providers to move data out of the cloud and to another company's servers, Ofcom said.

Users might also struggle to use more than one company's services at the same time because the leading firms prevent some of their services working effectively alongside those from other suppliers.

"There is a risk that the features we have identified could lead the market to concentrate further towards the market leaders," Ofcom said.

It said the Competition and Markets Authority would be best-placed to investigate this further.

Fergal Farragher, the Ofcom director who led its study into the sector, said: "We've done a deep dive into the digital backbone of our economy and uncovered some concerning practices, including by some of the biggest tech firms in the world.


"High barriers to switching are already harming competition in what is a fast-growing market.

"We think more in-depth scrutiny is needed, to make sure it's working well for people and businesses who rely on these services."

Ofcom said it would take feedback on its findings until mid-May and would make its final decision in October.

Microsoft said: "We look forward to continuing our engagement with Ofcom on their cloud services market study.

"We remain committed to ensuring the UK cloud industry stays highly competitive, and to supporting the transformative potential of cloud technologies to help accelerate growth across the UK economy."

Amazon Web Services said: "These are interim findings and AWS will continue to work with Ofcom ahead of the publication of its final report.

"The UK has a thriving and diverse IT industry with customers able to choose between a wide variety of IT providers.

"At AWS, we design our cloud services to give customers the freedom to build the solution that is right for them, with the technology of their choice.

"This has driven increased competition across a range of sectors in the UK economy by broadening access to innovative, highly secure, and scalable IT services."

View post:
Ofcom concerned about Microsoft and Amazon domination of cloud market - Yahoo Finance UK

Cloud WAF Pricing: All You Need to Know – Security Boulevard

Choosing the right Cloud WAF pricing model is like finding the perfect pair of shoes: it's all about comfort, fit, and style for your organization's needs.

In this guide, we'll help you navigate the world of Cloud WAF pricing, exploring different options and factors so that you can find the perfect fit for your web application security requirements.

For those still evaluating Cloud vs. on-prem WAF, here's a detailed article on why cloud WAFs are better than on-premise WAFs.

WAFs provided by public clouds such as AWS and Azure are typically priced on a pay-as-you-go model.

On the other hand, specialized WAF providers such as Indusface, Akamai, and Cloudflare offer a subscription model.

Even subscription providers offer many pay-as-you-go features. The value addition that specialized WAFs provide is the availability of core rules that offer default protection against OWASP Top 10 vulnerabilities.

In public Cloud WAFs, you'll typically need to either:

That said, several pay-as-you-go features are provided even by specialized WAF providers.

In the next section, we will cover all the factors that affect WAF pricing.

The licensing model is the first parameter that affects pricing. Even within this, there are two models:

a. Domain: One license for the domain, and this includes subdomains too. This model is typically used when similar applications are on different sub-domains, for example, qa.acme.com vs. acme.com.

While you can use this model for sub-domains that host different applications, the possibility of false positives is higher, as the same rule set is applied across multiple applications.

b. Application: Since every application differs, this model helps get fine-grained protection and custom rules. Usually, the license depends on a per-website model or a Fully Qualified Domain Name (FQDN).

For example, you'll typically be charged one license for http://www.acme.com and one more for abc.acme.com.
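
To make the difference between the two licensing models concrete, here is a minimal sketch in Python. The hostnames and the simple domain-grouping logic are illustrative assumptions, not any vendor's actual billing rules.

```python
# Minimal sketch: counting WAF licenses under the two licensing models above.
# Hostnames and the naive "registrable domain" logic are illustrative assumptions,
# not any vendor's billing rules.

hostnames = ["www.acme.com", "abc.acme.com", "qa.acme.com", "acme.com"]

def registrable_domain(host: str) -> str:
    # Naive: keep the last two labels (acme.com); real billing would need a
    # public-suffix list to handle domains such as acme.co.uk correctly.
    return ".".join(host.split(".")[-2:])

domain_licenses = {registrable_domain(h) for h in hostnames}
fqdn_licenses = set(hostnames)

print(f"Domain-based model: {len(domain_licenses)} license(s)")  # 1
print(f"FQDN-based model:   {len(fqdn_licenses)} license(s)")    # 4
```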

Cloud WAFs act as filters before traffic hits your origin server. All the traffic passed on to your origin servers is billed as bandwidth cost.

Here also, there are three models:

a. Requests: The pricing plan might have a set cost for a specific number of requests each month, plus extra charges for any extra requests over the set limit. Another option is that the pricing depends only on the total number of requests, so customers pay for what they use.

b. Peak Mbps: Some WAF companies use a peak Mbps (megabits per second) pricing plan. They charge customers based on the highest bandwidth (typically the 95th percentile) used in a set time, like a month. This model looks at the most traffic the WAF handles, not the total requests or data moved. It's important for organizations with changing traffic or different bandwidth needs.

c. Bandwidth: Some WAFs use a pricing plan based on the bandwidth over the wire. This includes both the request and response data. They charge customers for data moving through the system. This pricing model is easy to understand and works well for many organizations.
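
As a rough illustration of how these three models can play out for the same month of traffic, here is a small Python sketch. All rates, quotas, and traffic figures are made-up assumptions, not published prices from any WAF provider.

```python
import math

# Illustrative monthly-cost comparison of the three traffic-based models above.
# Every rate, quota, and traffic figure is an assumption for this sketch.

requests_millions = 120                          # requests served this month, in millions
bandwidth_samples_mbps = [40, 55, 65, 80, 300]   # periodic bandwidth samples (Mbps)
data_transferred_gb = 900                        # request + response bytes, in GB

# a. Request-based: flat fee for an included quota, plus overage per extra million.
included_millions, base_fee, overage_per_million = 100, 400.0, 3.0
request_cost = base_fee + max(0, requests_millions - included_millions) * overage_per_million

# b. Peak Mbps: bill at the 95th-percentile sample (nearest-rank method).
ranked = sorted(bandwidth_samples_mbps)
p95 = ranked[math.ceil(0.95 * len(ranked)) - 1]
peak_cost = p95 * 6.0                            # assumed $ per Mbps at the 95th percentile

# c. Bandwidth: bill per GB moved through the WAF.
bandwidth_cost = data_transferred_gb * 0.55      # assumed $ per GB

print(f"Request model:   ${request_cost:,.2f}")
print(f"Peak Mbps model: ${peak_cost:,.2f} (95th percentile = {p95} Mbps)")
print(f"Bandwidth model: ${bandwidth_cost:,.2f}")
```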

As discussed earlier, depending on the WAF provider, you may get charged for the following features:

a. DDoS & Bot Mitigation:This is probably the single most expensive feature addition. As per the application, the subscription to this feature alone typically costs a couple of thousand dollars per month in the subscription. In addition, some vendors even bill you for the bandwidth in case of a DDoS attack. In the case of Indusface AppTrana,DDoS is bundled as part of the monthly subscription plans.

b. API Security: Most popular WAFs now include an API security solution. This category is now called WAAP. However, this is generally priced as an add-on, as API security needs special configuration, especially to create a positive security model. The AppTrana WAAP, by default, protects all APIs that are part of the same FQDN. See more details here.

c. Analytics: Getting analytics on the kind of attacks blocked is also a big add-on, especially if you just get one WAF license and use it to protect multiple applications such as payroll.acme.com and crm.acme.com along with acme.com. As these are all different applications, storing attack logs and analytics on those logs would be extremely expensive.

Hence, most WAF providers don't provide access on a single license. At Indusface, we often suggest taking additional licenses for critical applications requiring attack logs and analysis.

d. DAST scanners: DAST and WAF are separate, non-integrated products in most organizations. This is a lost opportunity, as vulnerabilities found by a DAST could quickly be patched on the WAF. This process is called virtual patching, and it buys developers time before they fix these vulnerabilities in code.

At Indusface, we bundle DAST scanner Indusface WAS as part of the AppTrana WAAP. You save costs on subscriptions and integrate DAST and virtual patching into CI/CD pipelines so that security is handled even in an agile development cycle.

e. CDN: Since WAAP providers have some pricing component dependent on data transfer, enabling a CDN will lead to significant cost savings. In most WAFs, this is an add-on.

f. Support: 24x7 phone, email, and chat support is yet another feature that most WAF vendors add only in enterprise contracts. At Indusface, you will get enterprise support at SMB pricing; see the WAAP pricing page here.

Managed services play a big part in application security, especially as threats evolve. For example, 200+ application-level critical/high zero-day vulnerabilities are discovered monthly. Compute power is so cheap that a one-hour DDoS attack can be bought for $5, and this will only get cheaper.

To combat all of this, any WAAP/WAF solution needs to evolve. While most Cloud WAFs keep the software updated, a key part of the defense is the rule set, and unless security teams have highly skilled security engineers, they won't be able to touch any of the rule sets.

The other problem is that even if rules are shipped as patches, the onus is on the application team to monitor for false positives and ensure 99.99% availability while preventing downtime. Often, application teams do not apply these patches; worse, most WAFs are perpetually in log mode, meaning they don't block any attacks!

Then there's the problem of DDoS, which is a major ransom threat, and sophisticated actions such as rate limits, tarpitting, CAPTCHA, and blocks need careful monitoring as there is a high possibility of false positives.

So managed services are essentially an extended SOC/IT team to help with the following:

While every vendor can promise managed services, evaluating the SLAs with which they operate is critical. We highly recommend checking the support response times and SLAs, uptime guarantee, and latency with the vendor.

At Indusface, we are proud to ensure a 24-hour SLA on virtual patches for critical vulnerabilities. You can find more details on the SLA here.

Here's a step-by-step framework to help you choose a WAF based on pricing:

1. Identify your organizations requirements:

2. Research WAF providers

3. Analyse pricing models:

4. Evaluate included features and additional services

5. Assess data center locations and regions

6. Compare technical support and SLAs

7. Calculate the total cost of ownership (TCO)

8. Rank various WAF providers

9. Run product trials

By following this framework, you can systematically evaluate and compare different WAFs based on pricing, features, support, and other factors, ultimately selecting the most suitable and cost-effective solution for your organization.
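
For step 7 in particular, it can help to roll licenses, add-ons, expected overages, and managed services into a single annual figure per provider. The Python sketch below shows one way to do that; every number and vendor profile in it is a placeholder assumption to illustrate the comparison, not a real quote.

```python
# Rough TCO sketch for step 7 of the framework above: roll up license, add-on,
# and overage costs into an annual figure per provider. All figures are
# placeholder assumptions, not real vendor pricing.

providers = {
    "Vendor A (subscription)": {
        "licenses": 3, "license_per_month": 399.0,
        "addons_per_month": 500.0,         # e.g., DDoS/bot mitigation, analytics
        "overage_per_month": 120.0,        # estimated bandwidth/request overages
        "managed_services_per_month": 0.0  # bundled in the subscription
    },
    "Vendor B (pay-as-you-go)": {
        "licenses": 0, "license_per_month": 0.0,
        "addons_per_month": 250.0,
        "overage_per_month": 950.0,        # usage-based charges dominate
        "managed_services_per_month": 400.0
    },
}

def annual_tco(p: dict) -> float:
    monthly = (p["licenses"] * p["license_per_month"]
               + p["addons_per_month"]
               + p["overage_per_month"]
               + p["managed_services_per_month"])
    return monthly * 12

for name, profile in sorted(providers.items(), key=lambda kv: annual_tco(kv[1])):
    print(f"{name}: ~${annual_tco(profile):,.0f} per year")
```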

In conclusion, selecting the right Cloud WAF is crucial for safeguarding your web applications and maintaining a strong security posture. A thorough understanding of Cloud WAF pricing, features, and service level agreements will enable your organization to make informed decisions, ensuring you invest in a solution that fits your budget and provides robust protection against ever-evolving cyber threats.



*** This is a Security Bloggers Network syndicated blog from Indusface authored by Indusface. Read the original post at: https://www.indusface.com/blog/cloud-waf-pricing-all-you-need-to-know/

See the rest here:
Cloud WAF Pricing: All You Need to Know - Security Boulevard

iExec RLC: Unlocking New Possibilities in the Cloud Computing … – The Southern Maryland Chronicle

In a world where cloud computing is becoming increasingly popular, iExec RLC offers a unique solution for businesses looking to make the most out of their resources. By providing access to distributed applications and services, iExec RLC unlocks new possibilities in the cloud computing space.

The platform allows users to securely access and deploy any application or service from anywhere in the world without having to worry about data security or reliability. It also eliminates the need for complex infrastructure setup and maintenance, as all applications and services are hosted on an Ethereum-based blockchain network. As such, businesses can benefit from reduced costs associated with the iExec RLC Price and hosting fees while also having the advantage of increased flexibility when scaling their solutions according to their needs.


Furthermore, iExec RLC gives users complete control over their data privacy settings, allowing them to decide who can access what information they store on the platform. All of these features make iExec RLC an attractive option for businesses looking for a reliable and secure way to unlock new possibilities in the cloud computing space.

iExec RLC (RLC stands for Run on Lots of Computers) is a decentralized cloud computing platform that enables users to rent out their computing resources in exchange for cryptocurrency. It was created by the French startup iExec, which has been developing blockchain-based solutions since 2016. The platform allows users to access distributed applications and services without owning or managing any hardware. Instead, they can rent out the necessary computing power from other users on the network. This makes it easier and more cost-effective for developers to create and deploy distributed applications and for businesses to access powerful computing resources without investing in expensive hardware, allowing them to tap into new digital markets like the Metaverse, accessing a new market of digital consumers. Additionally, iExec provides a marketplace where developers can list their applications and services, allowing them to monetize their work while giving users easy access to high-quality products.

To buy and sell iExec RLC tokens, you will need to use a cryptocurrency exchange. First, you will need to create an account on the exchange platform of your choice. Once your account is created, you can deposit funds using various payment methods such as bank transfer or credit card. After your funds have been deposited, you can then search for the iExec RLC token and place an order to buy or sell it at the current market price. Once your order has been filled, you will be able to withdraw your tokens from the exchange into a secure wallet that supports them.

iExec RLC uses distributed ledger technology (DLT) to ensure the integrity of its network by providing an immutable record of all transactions on the platform. This makes it an ideal solution for companies looking for a secure way to store sensitive information such as customer data or financial records. iExec also offers a range of advanced analytic capabilities which allow businesses to gain valuable insights into their operations and make better decisions based on real-time data analysis. All user data is encrypted using industry-standard encryption algorithms, and all communication between servers and the customer's device is done over a secure HTTPS connection. Two-factor authentication has also been implemented for added security, so you can be sure that only you have access to your account. Additionally, the company regularly monitors its systems for any suspicious activity or potential threats. By combining these various security measures, iExec RLC ensures that its users' data remains safe and secure at all times.


View post:
iExec RLC: Unlocking New Possibilities in the Cloud Computing ... - The Southern Maryland Chronicle

What are the sustainability benefits of using a document … – Journalism.co.uk

Press Release

Document management specialists Filestream discuss the sustainability benefits of combining document management and Cloud storage. Filestream works with partner SIRE Cloud to provide businesses with a seamless solution and also aid productivity.

The benefits of combining document management and cloud storage fall into two main areas, sustainability and productivity. They work together, one effortlessly leading to the other.

In today's world, ambitious, growing SMEs and corporates, large or small, are keen to ensure their ESG (Environmental, Social and Governance) credentials are meeting current standards. Linking their document management and cloud storage is a huge step to attaining this.

We have worked with our partners at SIRE Cloud to produce a solution using the combined advantages of File Stream document management and the UK-based SIRE Cloud platform.

How does this help any business meet sustainability goals?

Increasingly, businesses are taking sustainability seriously. Many make the leap for their own ethical reasons. However, many often realise they have little choice as their customers are insisting more and more that suppliers show evidence they are actively working to be more sustainable. Failure to do so can be very serious, and even long-standing, successful, productive, and profitable business relationships can come to an end.

Here are some examples of how a Cloud-based approach to document management and storage can help sustainability goals and improve business practices:

Why use our Cloud storage?

All backups, antivirus/malware software, firewalls, and Microsoft 365 are maintained to the highest standards. This removes a considerable burden of responsibility as well as freeing up valuable time.

Additionally, a program like File Stream, which has a zero-carbon footprint (similar to an online banking application), enables access to the Cloud where the documents are stored, from any device and from anywhere, via the internet.

The SIRE Cloud servers (and therefore the documents) remain in the UK. They are protected in different locations (data centres) that are also in the UK. This gives confidence to businesses that their important information is stored as locally as possible.

What are the sustainability advantages of the SIRE Cloud servers?

Once the data is stored it will remain on storage devices which are three times more power-efficient than a PC hard disk.

SIRE selects data centres that use 100 per cent renewable energy and have been doing so since 2011. Working with SIRE on sustainable technologies and policies has ensured a PUE of 1.14. This is lower than the global average of 1.57. (To understand what a PUE is, see the fact-file below.)
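
As a quick illustration of what that PUE figure means, the sketch below computes it from total facility energy and IT equipment energy. The kWh figures are assumptions chosen to reproduce the 1.14 ratio quoted above, not SIRE's actual measurements.

```python
# Fact-file sketch: PUE (Power Usage Effectiveness) is total facility energy
# divided by the energy that actually reaches the IT equipment. Figures below
# are illustrative assumptions that reproduce the 1.14 ratio quoted above.

it_equipment_kwh = 1_000_000       # energy consumed by servers, storage, network
facility_total_kwh = 1_140_000     # adds cooling, UPS losses, lighting, etc.

pue = facility_total_kwh / it_equipment_kwh
print(f"PUE = {pue:.2f}")          # 1.14, vs. the quoted global average of 1.57

# Overhead energy for the same IT load at this PUE and at the global average:
overhead_here = facility_total_kwh - it_equipment_kwh
overhead_average = it_equipment_kwh * 1.57 - it_equipment_kwh
print(f"Overhead at PUE 1.14: {overhead_here:,.0f} kWh")
print(f"Overhead at PUE 1.57: {overhead_average:,.0f} kWh")
```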

Cold-aisle containment:

There are many different ways for data centres to deliver cooling to the servers on their data floors. At the SIRE data centre, one way this is managed is by using cold-aisle containment, which forces cool air over the servers rather than letting it escape: the server racks make up the walls of the cold aisles, with doors and a roof sealing the corridor.

Chilled air is delivered through the floor into the aisle. Since it has nowhere else to go, the chilled air is forced through the walls of the corridor, and over the servers.

Adiabatic cooling towers:

Adiabatic cooling towers are one of the ways to generate chilled water. They use the natural process of evaporation to cool water down, so the only power used is to pump the water through the towers. These cooling towers can keep up with cooling on the data floor, even on the hottest days of the year.

Efficient UPSs:

They have invested in state-of-the-art UPSs with line-interactive / Smart Active efficiency of up to 98.5 per cent. This means only 1.5 per cent of energy is lost in the transfer, significantly less than in typical data centres. (See the fact-file below for more information on UPSs.)

LED lights on motion detectors:

Reducing energy consumption goes beyond just the data floor. Throughout the data centres there are energy-efficient LED bulbs. These are also fitted with motion detector switches, so that they turn off automatically when no one is using a room.

Want to know more?

Get in touch with us (link to enquiry form) to find out how this partnership can help your business or organisation become more efficient, productive, and sustainable. We look forward to hearing from you.

Fact-file:

Read more from the original source:
What are the sustainability benefits of using a document ... - Journalism.co.uk

The Silent Platform Revolution: How eBPF Is Fundamentally … – InfoQ.com

Key Takeaways

Kubernetes and cloud native have been around for nearly a decade. In that time, we've seen a Cambrian explosion of projects and innovation around infrastructure software. Through trial and late nights, we have also learned what works and what doesn't when running these systems at scale in production. With these fundamental projects and crucial experience, platform teams are now pushing innovation up the stack, but can the stack keep up with them?

With the change of application design to API-driven microservices and the rise of Kubernetes-based platform engineering, networking and security teams have struggled to keep up because Kubernetes breaks traditional networking and security models. With the transition to cloud, we saw a similar technology sea change at least once. The rules of data center infrastructure and developer workflow were completely rewritten as Linux boxes in the cloud began running the world's most popular services. We are in a similar spot today with a lot of churn around cloud native infrastructure pieces and not everyone knowing where it is headed; just look at the CNCF landscape. We have services communicating with each other over distributed networks atop a Linux kernel where many of its features and subsystems were never designed for cloud native in the first place.

The next decade of infrastructure software will be defined by platform engineers who can take these infrastructure building blocks and use them to create the right abstractions for higher-level platforms. Like a construction engineer uses water, electricity, and construction materials to build buildings that people can use, platform engineers take hardware and infrastructure software to build platforms that developers can safely and reliably deploy software on to make high-impact changes frequently and predictably with minimal toil at scale. For the next act in the cloud native era, platform engineering teams must be able to provision, connect, observe, and secure scalable, dynamic, available, and high-performance environments so developers can focus on coding business logic. Many of the Linux kernel building blocks supporting these workloads are decades old. They need a new abstraction to keep up with the demands of the cloud native world. Luckily, it is already here and has been production-proven at the largest scale for years.

eBPF is creating the cloud native abstractions and new building blocks required for the cloud native world by allowing us to dynamically program the kernel in a safe, performant, and scalable way. It is used to safely and efficiently extend the cloud native and other capabilities of the kernel without requiring changes to kernel source code or loading kernel modules, unlocking innovation by moving the kernel itself from a monolith to a more modular architecture enriched with cloud native context. These capabilities enable us to safely abstract the Linux kernel, iterate and innovate at this layer in a tight feedback loop, and become ready for the cloud native world. With these new superpowers for the Linux kernel, platform teams are ready for Day 2 of cloud native, and they might already be leveraging projects using eBPF without even knowing. There is a silent eBPF revolution reshaping platforms and the cloud native world in its image, and this is its story.

eBPF is a decades-old technology beginning its life as the BSD Packet Filter (BPF) in 1992. At the time, Van Jacobson wanted to troubleshoot network issues, but existing network filters were too slow. His lab designed and created libpcap, tcpdump, and BPF as a backend to provide the required functionality. BPF was designed to be fast, efficient, and easily verifiable so that it could be run inside the kernel, but its functionality was limited to read-only filtering based on simple packet header fields such as IP addresses and port numbers. Over time, as networking technology evolved, the limitations of this classic BPF (cBPF) became more apparent. In particular, it was stateless, which made it too limiting for complex packet operations and difficult to extend for developers.

Despite these constraints, the high-level concepts around cBPF of having a minimal, verifiable instruction set where it is feasible for the kernel to prove the safety of user-provided programs to then be able to run them inside the kernel have provided an inspiration and platform for future innovation. In 2014, a new technology was merged into the Linux kernel that significantly extended the BPF (hence, eBPF) instruction set to create a more flexible and powerful version. Initially, replacing the cBPF engine in the kernel was not the goal since eBPF is a generic concept and can be applied in many places outside of networking. However, at that time, it was a feasible path to merge this new technology into the mainline kernel. Here is an interesting quote from Linus Torvalds:

"So I can work with crazy people, that's not the problem. They just need to sell their crazy stuff to me using non-crazy arguments and in small and well-defined pieces. When I ask for killer features, I want them to lull me into a safe and cozy world where the stuff they are pushing is actually useful to mainline people first. In other words, every new crazy feature should be hidden in a nice solid Trojan Horse gift: something that looks obviously good at first sight."

This, in short, describes the organic nature of the Linux kernel development model and matches perfectly to how eBPF got merged into the kernel. To perform incremental improvements, the natural fit was first to replace the cBPF infrastructure in the kernel, which improved its performance, then, step by step, expose and improve the new eBPF technology on top of this foundation. From there, the early days of eBPF evolved in two directions in parallel, networking and tracing. Every new feature around eBPF merged into the kernel solved a concrete production need around these use cases; this requirement still holds true today. Projects like bcc, bpftrace, and Cilium helped to shape the core building blocks of eBPF infrastructure long before its ecosystem took off and became mainstream. Today, eBPF is a generic technology that can run sandboxed programs in a privileged context such as the kernel and has little in common with BSD, Packets, or Filters anymoreeBPF is simply a pseudo-acronym referring to a technological revolution in the operating system kernel to safely extend and tailor it to the users needs.

With the ability to run complex yet safe programs, eBPF became a much more powerful platform for enriching the Linux kernel with cloud native context from higher up the stack to execute better policy decisions, process data more efficiently, move operations closer to their source, and iterate and innovate more quickly. In short, instead of patching, rebuilding, and rolling out a new kernel change, the feedback loop with infrastructure engineers has been reduced to the extent that an eBPF program can be updated on the fly without having to restart services and without interrupting data processing. eBPF's versatility also led to its adoption in other areas outside of networking, such as security, observability, and tracing, where it can be used to detect and analyze system events in real time.

Moving from cBPF to eBPF has drastically changed what is possible, and what we will build next. By moving beyond just a packet filter to a general-purpose sandboxed runtime, eBPF opened many new use cases around networking, observability, security, tracing, and profiling. eBPF is now a general-purpose compute engine within the Linux kernel that allows you to hook into, observe, and act upon anything happening in the kernel, like a plug-in for your web browser. A few key design features have enabled eBPF to accelerate innovation and create more performant and customizable systems for the cloud native world.

First, eBPF hooks anywhere in the kernel to modify functionality and customize its behavior without changing the kernel's source. By not modifying the source code, eBPF reduces the time from a user needing a new feature to implementing it from years to days. Because of the broad adoption of the Linux kernel across billions of devices, making changes upstream is not taken lightly. For example, suppose you want a new way to observe your application and need to be able to pull that metric from the kernel. In that case, you have to first convince the entire kernel community that it is a good idea (and a good idea for everyone running Linux), then it can be implemented and finally make it to users in a few years. With eBPF, you can go from coding to observation without even having to reboot your machine and tailor the kernel to your specific workload needs without affecting others. "eBPF has been very useful, and the real power of it is how it allows people to do specialized code that isn't enabled until asked for," said Linus Torvalds.
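
To give a flavour of how short that path from coding to observation can be, here is a minimal tracing sketch using the BCC Python bindings (the bcc project mentioned earlier). It attaches a small probe to the execve() syscall at runtime, with no kernel rebuild or reboot; the probe and its output message are illustrative, not taken from the article.

```python
#!/usr/bin/env python3
# Minimal eBPF tracing sketch using the BCC Python bindings. Run as root on a
# kernel with eBPF support; the probe and message are illustrative assumptions.
from bcc import BPF

program = r"""
int trace_execve(void *ctx) {
    // Emit a line to the kernel trace pipe each time a process calls execve().
    bpf_trace_printk("new program executed\n");
    return 0;
}
"""

b = BPF(text=program)
# Attach to the execve syscall entry point at runtime; no reboot needed.
b.attach_kprobe(event=b.get_syscall_fnname("execve"), fn_name="trace_execve")

print("Tracing execve() calls... Ctrl-C to stop.")
b.trace_print()
```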

Second, because the verifier checks that programs are safe to execute, eBPF developers can continue to innovate without worrying about the kernel crashing or other instabilities. This allows them and their end users to be confident that they are shipping stable code that can be leveraged in production. For platform teams and SREs, this is also crucial for using eBPF to safely troubleshoot issues they encounter in production.

When applications are ready to go to production, eBPF programs can be added at runtime without workload disruption or node reboot. This is a huge benefit when working at a large scale because it massively decreases the toil required to keep the platform up to date and reduces the risk of workload disruption from a rollout gone wrong. eBPF programs are JIT compiled for near-native execution speed, and by shifting the context from user space to kernel space, they allow users to bypass or skip parts of the kernel that aren't needed or used, thus enhancing performance. However, unlike complete kernel bypasses in user space, eBPF can still leverage all the kernel infrastructure and building blocks it wants without reinventing the wheel. eBPF can pick and choose the best pieces of the kernel and mix them with custom business logic to solve a specific problem. Finally, being able to modify kernel behavior at run time and bypass parts of the stack creates an extremely short feedback loop for developers. It has finally allowed experimentation in areas like network congestion control and process scheduling in the kernel.

Growing out of the classic packet filter and taking a major leap beyond the traditional use case unlocked many new possibilities in the kernel, from optimizing resource usage to adding customized business logic. eBPF allows us to speed up kernel innovation, create new abstractions, and dramatically increase performance. eBPF not only reduces the time, risk, and overhead it takes to add new features to production workloads, but in some cases, it even makes it possible in the first place.

So many benefits beg the question of whether eBPF can deliver in the real world, and the answer has been a resounding yes. Meta and Google have some of the world's largest data center footprints; Netflix accounts for about 15% of the Internet's traffic. Each of these companies has been using eBPF under the hood for years in production, and the results speak for themselves.

Meta was the first company to put eBPF into production at scale with its load balancer project Katran. Since 2017, every packet going into a Meta data center has been processed with eBPF; that's a lot of cat pictures. Meta has also used eBPF for many more advanced use cases, most recently improving scheduler efficiency, which increased throughput by 15%, a massive boost and resource saving at their scale. Google also processes most of its data center traffic through eBPF, using it for runtime security and observability, and defaults its Google Cloud customers to using an eBPF-based dataplane for networking. In the Android operating system, which powers over 70% of mobile devices and has more than 2.5 billion active users spanning over 190 countries, almost every networking packet hits eBPF. Finally, Netflix relies extensively on eBPF for performance monitoring and analysis of their fleet, and Netflix engineers pioneered eBPF tooling, such as bpftrace, to make major leaps in visibility for troubleshooting production servers and built eBPF-based collectors for On-CPU and Off-CPU flame graphs.

eBPF clearly works and provides extensive benefits for Internet-scale companies and has been for the better part of a decade, but those benefits also need to be translated to the rest of us.

At the beginning of the cloud native era, GIFEE (Google Infrastructure for Everyone Else) was a popular phrase, but largely fell out of favor because not everyone is Google or needs Google infrastructure. Instead, people want simple solutions that solve their problems, which begs the question of why eBPF is different. Cloud native environments are meant to run scalable applications in modern, dynamic environments. Scalable and dynamic are key to understanding why eBPF is the evolution of the kernel that the cloud native revolution needs.

The Linux kernel, as usual, is the foundation for building cloud native platforms. Applications are now just using sockets as data sources and sinks, and the network as a communication bus. But cloud native needs newer abstractions than currently available in the Linux kernel because many of these building blocks, like cgroups (CPU, memory handling), namespaces (net, mount, pid), SELinux, seccomp, netfilter, netlink, AppArmor, auditd, and perf, are decades old, designed before cloud even had a name. They don't always talk to each other, and some are inflexible, allowing only for global policies and not per-container or per-service ones. Instead of leveraging new cloud native primitives, they lack awareness of Pods or any higher-level service abstractions and rely on iptables for networking.

As a platform team, if you want to provide developer tools for a cloud native environment, you can still be stuck in this box where cloud native environments can't be expressed efficiently. Platform teams can find themselves in a future they are not ready to handle without the right tools. eBPF now allows tools to rebuild the abstractions in the Linux kernel from the ground up. These new abstractions are unlocking the next wave of cloud native innovation and will set the course for the cloud native revolution.

For example, in traditional networking, packets are processed by the kernel, and several layers of network stack inspect each packet before reaching its destination. This can result in a high overhead and slow processing times, especially in large-scale cloud environments with many network packets to be processed. eBPF instead allows inserting custom code into the kernel that can be executed for each packet as it passes through the network stack. This allows for more efficient and targeted network traffic processing, reducing the overhead and improving performance. Benchmarks from Cilium showed that switching from iptables to eBPF increased throughput 6x, and moving from IPVS-based load balancing to eBPF based allowed Seznam.cz to double throughput while also reducing CPU usage by 72x. Instead of providing marginal improvements on an old abstraction, eBPF enables magnitudes of enhancement.

eBPF doesn't just stop at networking like its predecessor; it also extends to areas like observability and security and many more because it is a general-purpose computing environment and can hook anywhere in the kernel. "I think the future of cloud native security will be based on eBPF technology because it's a new and powerful way to get visibility into the kernel, which was very difficult before," said Chris Aniszczyk, CTO of the Cloud Native Computing Foundation. "At the intersection of application and infrastructure monitoring, and security monitoring, this can provide a holistic approach for teams to detect, mitigate, and resolve issues faster."

eBPF provides ways to connect, observe, and secure applications at cloud native speed and scale. "As applications shift toward being a collection of API-driven services driven by cloud native paradigms, the security, reliability, observability, and performance of all applications become fundamentally dependent on a new connectivity layer driven by eBPF," said Dan Wendlandt, CEO and co-founder of Isovalent. "It's going to be a critical layer in the new cloud native infrastructure stack."

The eBPF revolution is changing cloud native; the best part is that it is already here.

While the benefits of eBPF are clear, it is so low level that platform teams, without the luxury of Linux kernel development experience, need a friendlier interface. This is the magic of eBPF: it is already inside many of the tools running the cloud native platforms of today, and you may already be leveraging it without even knowing. If you spin up a Kubernetes cluster on any major cloud provider, you are leveraging eBPF through Cilium. Using Pixie for observability or Parca for continuous profiling? Also eBPF.

eBPF is a powerful force that is transforming the software industry. Marc Andreessen's famous quote that "software is eating the world" has been semi-jokingly recoined by Cloudflare as "eBPF is eating the world." However, success for eBPF is not when all developers know about it but when developers start demanding faster networking, effortless monitoring and observability, and easier-to-use security solutions. Less than 1% of developers may ever program something in eBPF, but the other 99% will benefit from it. eBPF will have completely taken over when there's a variety of projects and products providing massive developer experience improvement over upstreaming code to the Linux kernel or writing Linux kernel modules. We are already well on our way to that reality.

eBPF has revolutionized the way infrastructure platforms are and will be built and has enabled many new cloud native use cases that were previously difficult or impossible to implement. With eBPF, platform engineers can safely and efficiently extend the capabilities of the Linux kernel, allowing them to innovate quickly. This allows for creating new abstractions and building blocks tailored to the demands of the cloud native world, making it easier for developers to deploy software at scale.

eBPF has been in production for over half a decade at the largest scale and has proven to be a safe, performant, and scalable way to dynamically program the kernel. The silent eBPF revolution has taken hold and is already used in projects and products around the cloud native ecosystem and beyond. With eBPF, platform teams are now ready for the next act in the cloud native era, where they can provision, connect, observe, and secure scalable, dynamic, available, and high-performance environments so developers can focus on just coding business logic.

Read more:
The Silent Platform Revolution: How eBPF Is Fundamentally ... - InfoQ.com

Data Backup And Recovery Global Market Report 2023 – GlobeNewswire

New York, April 06, 2023 (GLOBE NEWSWIRE) -- Reportlinker.com announces the release of the report "Data Backup And Recovery Global Market Report 2023" - https://www.reportlinker.com/p06443941/?utm_source=GNW - covering major players including Cohesity, Broadcom Inc., Carbonite Inc., Actifio Technologies and Redstor Limited.

The global data backup and recovery market grew from $12.18 billion in 2022 to $14.15 billion in 2023 at a compound annual growth rate (CAGR) of 16.2%. The Russia-Ukraine war disrupted the chances of global economic recovery from the COVID-19 pandemic, at least in the short term. The war between these two countries has led to economic sanctions on multiple countries, a surge in commodity prices, and supply chain disruptions, causing inflation across goods and services and affecting many markets across the globe. The data backup and recovery market is expected to grow to $23.64 billion in 2027 at a CAGR of 13.7%.
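
The quoted growth rates can be sanity-checked with the standard compound-annual-growth formula; the short Python sketch below reproduces the 16.2% and 13.7% figures from the market sizes given above.

```python
# Quick check of the growth figures above using the standard CAGR formula:
# CAGR = (end_value / start_value) ** (1 / years) - 1

def cagr(start: float, end: float, years: int) -> float:
    return (end / start) ** (1 / years) - 1

# 2022 -> 2023: $12.18bn to $14.15bn over one year.
print(f"2022-2023: {cagr(12.18, 14.15, 1):.1%}")   # ~16.2%

# 2023 -> 2027: $14.15bn to $23.64bn over four years.
print(f"2023-2027: {cagr(14.15, 23.64, 4):.1%}")   # ~13.7%
```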

The data backup and recovery market includes revenues earned by entities by providing disk/tape backup, hybrid cloud backup, and direct-to-cloud backup, recovery from a local device, recovery from the cloud, and recovery right in the cloud. The market value includes the value of related goods sold by the service provider or included within the service offering.

Only goods and services traded between entities or sold to end consumers are included.

Data backup and recovery refers to the area of onshore and cloud-based technology solutions that allow enterprises to secure and maintain their data for legal and business requirements. Data backup and recovery is the process of making a backup copy of data, keeping it somewhere safe in case it becomes lost or damaged, and then restoring the data to the original location or a secure backup so it can be used once more in operations.

North America was the largest region in the data backup and recovery market in 2022. Asia-Pacific is expected to be the fastest-growing region in the forecast period.

The regions covered in the data backup and recovery market report are Asia-Pacific, Western Europe, Eastern Europe, North America, South America, Middle East and Africa.

The main types of data backup and recovery are service backup, media storage backup, and email backup. Service backup is used to connect systems to a private, public, or hybrid cloud managed by an outside provider, in place of doing backup with a centralized, on-premises IT department.

Service backup is a method of backing up data that entails paying an online data backup provider for backup and recovery services. The various components of data backup and recovery are software and services, which are deployed on cloud and on-premises.

The various industry verticals that use backup and recovery are IT and telecommunications, retail, banking, financial services, and insurance, government and public sector, healthcare, media and entertainment, manufacturing, education, and other industry verticals.

An increase in the adoption of cloud data backup is expected to propel the growth of the data backup and recovery market. Cloud backup is storing a copy of a physical or virtual file, database, or other data in a secondary, off-site location in case of equipment failure or other emergencies.

Cloud-based data backup stores data in the cloud, where it is accessible anywhere and anytime. This keeps the data safe and easily recoverable.

For instance, in November 2020, according to Gartner, a US-based management consulting company, following the COVID-19 crisis there will be an increase in IT investment toward the cloud, which is predicted to account for 14.2% of all worldwide enterprise IT spending in 2024, as opposed to 9.1% in 2020. Therefore, an increase in the adoption of cloud data backup is driving the growth of the data backup and recovery market.

Technological advancement is a key trend gaining popularity in the data backup and recovery market. Major data backup and recovery companies are advancing their technologies and research and development to adopt efficient alternatives such as multi-cloud data backup and recovery.

Data can be backed up across many cloud services from different providers using multi-cloud data backup and recovery systems. These systems frequently copy backups from one service to another and store them there for disaster recovery.

Ideally, these solutions ought to allow recovery from many sources. For instance, in June 2022, Backblaze, Inc., a US-based cloud storage and data backup company, partnered with Veritas Technologies LLC to offer multi-cloud data backup and recovery. Customers who use Backup Exec to synchronize their data backup and recovery procedures can use the combined solutions' simple, inexpensive, and S3-compatible object storage. The Backup Exec service from Veritas enables companies to safeguard almost any data on any storage medium, including tape, servers, and the cloud. Veritas Technologies LLC is a US-based data management company.

In September 2021, HPE, a US-based information technology company, acquired Zerto for $374 million. Through this acquisition, HPE further transforms its storage business into a cloud-native, software-defined data services company and positions the HPE GreenLake edge-to-cloud platform in the fast-growing data protection sector with a tested solution.

Zerto is a US-based company specializing in software for on-premises and cloud data migration, backup, and disaster recovery.

The countries covered in the data backup and recovery market report are Australia, Brazil, China, France, Germany, India, Indonesia, Japan, Russia, South Korea, UK and USA.

The market value is defined as the revenues that enterprises gain from the sale of goods and/or services within the specified market and geography through sales, grants, or donations in terms of the currency (in USD, unless otherwise specified).

The revenues for a specified geography are consumption values that are revenues generated by organizations in the specified geography within the market, irrespective of where they are produced. It does not include revenues from resales along the supply chain, either further along the supply chain or as part of other products.

The data backup and recovery market research report is one of a series of new reports that provides data backup and recovery market statistics, including data backup and recovery industry global market size, regional shares, competitors with a data backup and recovery market share, detailed data backup and recovery market segments, market trends and opportunities, and any further data you may need to thrive in the data backup and recovery industry. This data backup and recovery market research report delivers a complete perspective of everything you need, with an in-depth analysis of the current and future scenario of the industry. Read the full report: https://www.reportlinker.com/p06443941/?utm_source=GNW

About Reportlinker

ReportLinker is an award-winning market research solution. Reportlinker finds and organizes the latest industry data so you get all the market research you need - instantly, in one place.


Read the rest here:
Data Backup And Recovery Global Market Report 2023 - GlobeNewswire

Western Digital Network Breach Hackers Gained Access to Company Servers – GBHackers

Western Digital (WD), a renowned manufacturer of SanDisk drives, has announced a data breach on its network, resulting in unauthorized access to data on multiple systems by attackers.

WD is a company based in the United States that specializes in manufacturing computer drives and data storage devices, providing data center systems, and offering customers cloud storage services.

The incident is ongoing, so the Company has promptly deployed incident responders and collaborated with digital forensic experts to investigate the attack.

"Western Digital identified a network security incident involving Western Digital's systems. In connection with the ongoing incident, an unauthorized third party gained access to a number of the Company's systems," WD said in a press release.

"The Company is implementing proactive measures to secure its business operations, including taking systems and services offline, and will continue taking additional steps as appropriate."

Additionally, the Company has stated that they are actively working on restoring the affected systems. They suspect that the unauthorized party obtained detailed data from their systems and are striving to comprehend the nature and extent of that data.

As a result of this incident, several users reported that My Cloud, the company's cloud storage service, experienced over 12 hours of downtime.

"Our team is working urgently to resolve the issue and restore access as soon as possible. We apologize for any inconvenience this may cause and appreciate your patience."

According to their incident report, "We are experiencing a service interruption preventing customers from accessing the My Cloud, My Cloud Home, My Cloud Home Duo, My Cloud OS 5, SanDisk ibi, SanDisk Ixpand Wireless Charger service."

Following the attack, the storage manufacturer has taken further security measures to protect its systems and operations, which may affect some of Western Digital's services.

The following products are impacted by this security incident:

My Cloud
My Cloud Home
My Cloud Home Duo
My Cloud OS 5
SanDisk ibi
SanDisk Ixpand Wireless Charger

We attempted to contact Western Digital for further information on the incident but did not receive a response. We will provide updates to the article as soon as they become available.


Read more from the original source:
Western Digital Network Breach Hackers Gained Access to Company Servers - GBHackers

Park ‘N Fly Adopts Keepit for Microsoft Backup and Recovery – ITPro Today

Parking at the airport is one of life's annoyances. It's crowded, expensive, and hard to find a spot near the entrance. That's where Park 'N Fly comes in. The company shuttles customers from their cars to their terminals. Over the years, Park 'N Fly has expanded to include car washes, bag checks, and even pet boarding.


While one might assume a parking company is fairly low-tech, that's not the case with Park 'N Fly. During its more than 50 years in business, the company has increasingly invested in technology. It launched its first booking engine in 2005 and uses a multichannel approach to drive sales. It also provides kiosks for flight check-in and has a full cadre of security protections for its back-office resources and customer information.

Park 'N Fly is a Microsoft shop, dependent on Office 365, SharePoint, Exchange, Active Directory, and Azure to remain productive. While the Microsoft technologies work well, CTO Ken Schirrmacher had long worried that Microsoft's backup and recovery methods weren't fully protecting data stored in the cloud.

"With Office 365 you can do some Outlook-level archiving and, if you have the right license, a full backup of your entire inbox history, but those don't provide real full-service retention," Schirrmacher said. "When you do a full Exchange deployment locally on-premises, it just backs up the Exchange server, but when you put everything into the cloud, you're missing that backup piece."

Microsoft's backup shortcomings are common knowledge. For example, email backups in Outlook are restricted to 30 days, and the cloud server backing up that data could be lost if something happens to the servers stored in a specific area. What's more, Microsoft doesn't guarantee retrieval of stored data or content during an outage. Microsoft itself recommends customers use third-party backups.

As a result, the company had added backup technologies into its mix, including Veritas to back up SharePoint drives. Backup processes became cumbersome over time, however. Transferring data required copying it to modular removable storage devices like solid-state drives, which employees could easily misplace.

Altogether, Park 'N Fly has between one and two terabytes of data that it can't afford to lose. Still, the company was mostly relying on Microsoft for backup, and Schirrmacher knew that had to change.

"The thought just kept getting louder and louder until I finally listened to it," he said. "I knew it would eventually bite us and that we needed to install some type of safety net."

When looking for a better way to back up Microsoft data, Schirrmacher wanted a cloud-based product that would be easy to implement, have a straightforward restore process, and offer strong security.

After asking around, a Park 'N Fly partner told him about Keepit, a cloud-based service that specializes in Microsoft and Azure AD backup and recovery. Keepit also encrypts data in transit and at rest using Transport Layer Security 1.2 and 256-bit Advanced Encryption Standard. The fine-grained user access controls also appealed to Schirrmacher.

"I didn't want something with simple RSA 1024-bit encryption, because anybody with a decent security background could probably get around it," he said. "And I really liked the idea of not having to swap keypairs with my coworkers."

After a successful trial, Park 'N Fly signed on the dotted line and rolled out Keepit's service companywide. Schirrmacher noted that a 10-minute demo showed his IT staff how to use the service.

Once in use, Schirrmacher saw that the service performed fast, which he valued. "We're constantly having things thrown at us, and we need to be able to focus," he said. "If we have to tend to our backups or spend an entire day restoring files, we can't do our other tasks."

Today, Keepit is Park 'N Fly's main backup and recovery technology, along with AWS S3 buckets for storage. The company also uses a small amount of on-premises storage for specific workloads.

Keepit has proven to be easy to work with. After signing in on Keepit's web-based portal and adding an account, IT staff can log in with single sign-on via Office 365 non-interactively, which essentially means that sign-ins are done on behalf of users. The system then asks permission to access files. Once given that permission, Keepit asks staff to select areas to be backed up. Initially, Keepit backups took an entire day, but since then, Keepit takes snapshots continuously. The portal displays what has been backed up, using root-level trees that let staff navigate down to the file level.

Although Keepit has an API to enable organizations to work with data on Keepit's platform, Park 'N Fly hasn't yet taken advantage of it. "That will change eventually," Schirrmacher said. He plans to investigate building the API into the company's executive Power BI dashboard. This would allow executives to quickly see uptime and endpoint management statistics, plus data from other tools like Mailchimp, Trustpilot, and ActiveCampaign.

Schirrmacher also is looking forward to an upcoming Keepit enhancement that will provide a self-service portal for users.

"I look forward to the day when our users will be able to restore their own data," he said. "That would really free up time for our IT staff."


See the rest here:
Park 'N Fly Adopts Keepit for Microsoft Backup and Recovery - ITPro Today

The changing world of Java – InfoWorld

Vaadin recently released new research on the state of Java in the enterprise. Combined with other sources, this survey offers a good look into Java's evolution. The overall view is one of vitality, and even a resurgence of interest in Java, as it continues to provide a solid foundation for building applications of a wide range of sizes and uses.

I dug into Vaadin's 2023 State of Java in the Enterprise Report, along with a few others. This article summarizes what I think are the most significant developments in enterprise Java today.

Java has seen a long succession of incremental improvements over the last decade. We're currently on the cusp of more significant changes through the Java language refactor in Project Valhalla and Java concurrency updates in Project Loom. Those forthcoming changes, combined with security considerations, make staying up to date with Java versions especially important.

Vaadin's research indicates that developers using Java have kept up with version updates so far. Twenty-six percent of respondents report they are on version 17 or newer; 21% are in the process of upgrading; and 37% are planning to upgrade.

These results jibe with research from New Relic showing that Java 11 is becoming the current LTS (long-term support) standard, gradually supplanting Java 8. Java 17 is the newest LTS release, replacing Java 11 under the two-year release cadence, and will soon become the baseline upgrade for Java. The next LTS release will be Java 21, currently targeted for September 2023.

Survey results indicate that security is a major concern for Java developers, and for good reason. The discovery of the Log4j vulnerability shined a glaring spotlight on code vulnerabilities in Java applications and elsewhere. Cybersecurity is a slow-moving hurricane that seems only to gather strength as time goes on.

The Vaadin report indicates that 78% of Java developers see ensuring app security as a core concern; 24% describe it as a significant challenge; and 54% say it is somewhat of a challenge.

Java by itself is a very secure platform. But like any other language, it is open to third-party vulnerabilities. Writing and deploying secure Java applications requires maintaining good security practices across the entire application life cycle and technology stack. Even the federal government, through CISA, is taking securing open source software and tracking vulnerabilities seriously, and urging the adoption of zero-trust architectures.
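
As one modest example of everyday hygiene (general practice, not a fix for Log4Shell itself, which required patching the library), the sketch below assumes the SLF4J API with an up-to-date logging backend on the classpath and passes untrusted input as a log parameter rather than concatenating it into the message.

// Assumes slf4j-api plus a patched logging backend (for example, Logback or a
// current log4j-core) on the classpath.
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LoginService {
    private static final Logger log = LoggerFactory.getLogger(LoginService.class);

    public void login(String username) {
        // Parameterized logging: the untrusted value is passed as an argument,
        // not concatenated into the format string.
        log.info("Login attempt for user {}", username);
    }
}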

Because Java is a solid, evolving platform, Java developers are well-positioned to take on the very real and changing universe of threats facing web applications. We just need to be aware of security concerns and integrate cybersecurity into our daily development activities.

According to the Vaadin research, 76% of respondents see hiring and retaining developers as either a significant challenge or somewhat of a challenge. This is, of course, an industry-wide problem, with developer burnout and dissatisfaction causing major difficulty in both attracting and retaining good software developers.

Perhaps the best way to think about developer retention is in light of the developer experience (or DX). Like other coders, Java programmers want to work in an environment that supports our efforts and allows us to use our skills and creativity. A supportive environment encompasses the development tools and processes and the overall culture of the organization.

One way to improve developer experience is through a robust devops infrastructure, which streamlines and brings consistency to otherwise stressful development phases like deployment. There is an interplay between devops and developer experience: improving the tools and processes developers use makes it easier for us to maintain our systems and keep them correct as requirements change.

Deployment looms large in the Vaadin research. Cloud infrastructure and serverless platforms (cloud-native environments) are seen as an essential evolution for Java applications. Right now, 55% of Java applications are deployed to public clouds. On-prem and private hosting still account for 70% of application deployments. Kubernetes and serverless account for 56% of deployments, spread between public cloud, on-prem, and PaaS.

Of serverless providers, Amazon Web Services (AWS) leads the space, with 17% of respondents saying they deploy their Java applications using AWS Lambda. Microsoft Azure and Google Cloud Platform serverless both account for 4% of all deployments, according to survey responses.
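
For a flavor of what Java on a serverless platform looks like, here is a minimal sketch of an AWS Lambda handler. It assumes the aws-lambda-java-core library on the classpath; the class name and event shape are illustrative.

// Requires the aws-lambda-java-core dependency.
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

import java.util.Map;

public class GreetingHandler implements RequestHandler<Map<String, String>, String> {
    @Override
    public String handleRequest(Map<String, String> event, Context context) {
        // Lambda deserializes the incoming JSON event into the declared input type.
        String name = event.getOrDefault("name", "world");
        context.getLogger().log("Handling request for " + name);
        return "Hello, " + name;
    }
}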

After on-prem servers and virtual machines, on-prem Kubernetes is the most prevalent style of deployment, used by 29% of respondents.

These numbers point to a Java ecosystem that has continued to move toward cloud-native technology but still has a big chunk of functionality running on self-hosted servers. Many Java shops feel a sense of urgency to adopt cloud platforms. But some developers continue to prefer self-hosted platforms and frameworks to being locked into a cloud provider's compute-for-rent business model.

Not surprisingly, the lion's share of Java applications are web applications, with desktop applications accounting for only 18% of all products in development at the time of the survey. As for the composition of new and existing applications that use Java, it's a diverse group. The Vaadin research further distinguishes between current technology stacks and planned changes to the stack.

The continued strong focus on full-stack Java applications is particularly interesting. Fully 70% of respondents indicated that new full-stack Java applications were planned for upcoming projects.

Just behind full-stack applications is back-end development. Back-end APIs accounted for 69% of new investment plans, according to respondents.

After full-stack and back-end development, respondents' development efforts were spread between modernizing existing applications (57%); developing heterogeneous (Java with JavaScript or TypeScript) full-stack applications (48%); migrating existing applications to the cloud (36%); and building new front ends for existing Java back ends (29%).

The survey also gives a sense of which front-end frameworks Java developers currently favor. Angular (37%) and React (32%) are in the lead, followed by Vue (16%). This is in contrast to the industry in general, where React is the most popular framework. Other frameworks like Svelte didn't make a strong enough showing to appear in the survey.

Given its popularity and utility, it is unsurprising that Spring is heavily used by Java developers. Of respondents, 79% reported using Spring Boot and 76% were using the general Spring framework. Developers expect both to remain in use.
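
A reminder of why Spring Boot remains so popular: a complete HTTP service can be this small. The sketch assumes the spring-boot-starter-web dependency; the class and path names are illustrative.

// Minimal Spring Boot web application: one class, one HTTP endpoint.
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@SpringBootApplication
@RestController
public class DemoApplication {

    public static void main(String[] args) {
        SpringApplication.run(DemoApplication.class, args);
    }

    @GetMapping("/hello")
    public String hello() {
        return "Hello from Spring Boot";
    }
}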

Fifty-seven percent of respondents to the Vaadin survey indicated that modernization was a chief concern for planned investment. The highest-ranked reason given for modernization was maintainability.

Maintainability is a universal and perennial concern for developers of all stripes and stacks. With the huge volume of what we might term legacy code (that is, anything that's already been built) in Java, there is a strong sense that we need to upgrade our existing systems so that they can be worked on and brought into the future. It's a healthy impulse. Finding the will and money to refactor and strengthen what is already there is key to any long-term project.

After maintainability comes security, which we've already discussed. In this case, though, security is seen as another reason for modernization, with 20% of respondents ranking security as their top reason, 16% ranking it second, and 21% ranking it third. Security is once again a reasonable and healthy focus among developers.

Among all the challenges identified by Java developers, building an intuitive and simple UX appears to be the greatest. It is a significant challenge for 30% and somewhat of a challenge for 51% of developers.

The UI is a tricky part of any application. I get the sense that Java developers are strong at building back-end APIs and middleware and are longing for a way to use their familiar technology to build across the stack; just notice the heavy emphasis on full-stack Java applications. One respondent commented in the survey, "We want to use Java both for backend and frontend. Maybe with WASM that will be possible someday."

For the time being, Java developers are confronted with either building in a JavaScript framework like React, using a technology that allows for coding in Java and outputting in JavaScript (like JavaServer Faces or Google Web Toolkit), or using a framework that tries to encompass both Java and JavaScript under a single umbrella like Hilla or jHipster. (I've written about both here on InfoWorld.)
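
As one hedged illustration of "Java across the stack" today: with a server-driven UI framework such as Vaadin Flow (my example here, not something the survey itself prescribes), the browser UI is written in plain Java. The class name and route below are illustrative.

// Assumes the Vaadin Flow framework on the classpath; the component tree is
// written in Java and Vaadin renders it in the browser.
import com.vaadin.flow.component.button.Button;
import com.vaadin.flow.component.notification.Notification;
import com.vaadin.flow.component.orderedlayout.VerticalLayout;
import com.vaadin.flow.router.Route;

@Route("hello")
public class HelloView extends VerticalLayout {
    public HelloView() {
        add(new Button("Click me", event -> Notification.show("Hello from Java")));
    }
}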

In step with the industry as a whole, Java developers have moved toward better devops practices like CI/CD, as well as adopting third-party integrations. The Vaadin report identifies logging, observability, and single sign-on (SSO) solutions as the most popular tools in use. Kubernetes, business tools like enterprise resource planning (ERP) and customer relationship management (CRM), devops, and multi-factor authentication (MFA) solutions round out the rest of the most-used third-party tools in the Java ecosystem.

Like the State of JavaScript survey for JavaScript, Vaadin's State of Java in the Enterprise Report offers an expansive portrait of Java, both as it is and where it is headed. Overall, Java appears to be riding a wave of stability coupled with an evolving dynamism. The two together indicate a vital technology that is ready for the future.

Go here to read the rest:
The changing world of Java - InfoWorld

An intro to the IDMZ, the demilitarized zone for ICSes – TechTarget

To protect internal networks from untrusted networks, such as the internet, many organizations traditionally used a demilitarized zone. Derived from the military concept of an area that cannot be occupied or used for military means, a DMZ in networking is a physical or logical subnet that prevents external attacks from accessing confidential internal network resources and data.

Cloud adoption has largely negated the need for a DMZ, with zero trust and segmentation becoming more popular options amid the dissolving network perimeter. DMZs can still be useful, however, especially when it comes to the convergence of IT and operational technology (OT). Known as an industrial DMZ (IDMZ), it is key to keeping IT and industrial control system (ICS) environments separate.

Pascal Ackerman, author of Industrial Cybersecurity, Second Edition, was on hand to explain the IDMZ.

What is an IDMZ?

Pascal Ackerman: The name itself has been questioned, and I've had a couple people call me up and say, 'Can't you just call it a DMZ, please?' But it's different.

The concept was taken from the enterprise side. For decades, people connected enterprise environments to the internet through a DMZ. They had a shared server or web server exposed to the internet, but if they didn't want to easily allow access into their enterprise environment, they put a DMZ in place.

We took a page from that book and put a DMZ between the enterprise network and the industrial network.

Where things differ between a DMZ that separates an IT network from the internet and one that separates an IT network from an OT network is what we put in the DMZ. By design, it's supposed to be a middle ground for traffic to traverse from an insecure to a secure network -- with the insecure network being the enterprise and the secure network being the ICS. Where you typically do it on the IT side for web services, on the industrial side, you do it for industrial protocols and to make sure they don't have to traverse through the IDMZ. Rather, you have a way to broker or relay or translate industrial protocols and data into something easily available on the enterprise side -- typically a web browser.

How does this relate to IT/OT convergence?

Ackerman: Until the late 1990s, IT -- the business network where you do email, ordering and shipping -- was separate from OT -- your production environment -- via segmentation. There was no communication between the two.

As managers and folks on the business side saw the benefits of using industrial data, more and more IT and OT environments connected. While the true controls engineer inside me wants to keep IT and OT separate -- it's really the most secure way -- companies want to get data out of an ICS to do better production and business overall. In order to do this securely, an IDMZ is the way to go.

We're not just putting a firewall in place and poking a bunch of holes in it -- because, eventually, there's no firewall because you've made so many exceptions. Instead, the IDMZ means traffic from the enterprise network is not allowed to go directly to the industrial side. It has to land in the IDMZ first.

Do you have an example of when you'd do this?

Ackerman: Say you want to remote desktop into one of your production servers. That would be initiated on the enterprise side. Instead of going straight into the industrial network and connecting to a server there, you're authenticating to a broker server in the IDMZ, which brokers that into the target server or workstation on the industrial environment.

How does IoT fit in? IoT deployments can be on either the enterprise or the industrial side -- or, sometimes, both.

Ackerman: One of the design goals for implementing industrial security is that industrial protocols need to stay in the industrial environment. If you have a smart camera or an IoT barcode scanner for your MES [manufacturing execution system] or ERP system, those should go on the enterprise network because they're communicating with enterprise systems.

On the other hand, if you have a smart meter that takes the temperature of a machine in the ICS, it might use industrial protocols and send information to a cloud service, where you can look at trends and monitor it. This type of IoT deployment would live in the OT network. Then, you have to deal with the connection to the cloud -- through the IDMZ.

I recommend setting up security zones within the IDMZ. Set up a separate segment for your remote access solution, for your file transfer solution and for your IoT devices.

What threats does an IDMZ prevent or mitigate?

Ackerman: Pretty much anything that will attack the enterprise network.

The fundamental goal with an IDMZ is to have any interactions with the ICS be initiated on the enterprise side. So, if a workstation on the enterprise network is infected by malware, the enterprise client is infected or crashes. The underlying HMI [human-machine interface] sitting on the industrial network is protected by the IDMZ. If the enterprise network is compromised, the compromise stays within the IDMZ and can't travel to the industrial environment.

Who is responsible for setting up and managing an IDMZ?

Ackerman: Companies that have separate IT and OT teams often have the IT team support and maintain the IDMZ. For companies that have converged IT and OT teams, it's usually a shared responsibility. This typically works better because each team understands the other and can build upon each other's knowledge.

How do you build an IDMZ?

Ackerman: You have two separate networks: the enterprise network with physical standalone hardware and the industrial network with physical standalone hardware. Put a firewall between them -- sometimes two -- one for the enterprise side and one for the industrial side. They should be separate brands, too -- that's the most secure. Most of the time, you'll see a three-legged firewall implementation with the IDMZ sitting in the middle.

From there, deploy the IDMZ service itself. The services often run on VMware or a Microsoft hypervisor such as Hyper-V -- some dedicated software. Further components depend on what you're looking to relay. Most of the time, there's a file-sharing mechanism and a remote access solution.

Is zero trust ever implemented in an IDMZ?

Ackerman: Zero trust makes sense all the way down to Level 3 of the Purdue model. Levels 2, 1 and 0 -- which are your controls, HMIs and PLCs [programmable logic controllers] -- wouldn't make sense for zero trust. The devices on those levels don't have authentication mechanisms; they just respond to anything that tries to ping them.

Where zero trust does make sense is in Level 3 site operations, where you have servers, workstations, Windows domain, etc. Where you have authentication and authorization is where you can implement zero trust.

What are the challenges of implementing an IDMZ?

Ackerman: Support. An IDMZ is extra hardware and extra software for someone to support, and it's not always the easiest to do from the enterprise side. You have to go an extra step to log in to an industrial asset, and from there, you can support the IDMZ.

Another challenge is the services running on it. If you want to be really secure, you can't just extend your enterprise Windows domain into your industrial environment. You usually end up having a dedicated Windows domain for your industrial environment, which, again, has to be supported by someone.

It can be time-consuming and costly, but think of it another way: If something compromises your enterprise environment and can dig into your industrial environment, how much work and money are you going to spend to get everything up again?

About the author

Pascal Ackerman is a seasoned industrial security professional with a degree in electrical engineering and more than 20 years of experience in industrial network design and support, information and network security, risk assessments, pen testing, threat hunting and forensics. His passion lies in analyzing new and existing threats to ICS environments, and he fights cyber adversaries both from his home base and while traveling the world with his family as a digital nomad. Ackerman wrote the previous edition of this book and has been a reviewer and technical consultant of many security books.

Continue reading here:
An intro to the IDMZ, the demilitarized zone for ICSes - TechTarget