Category Archives: Cloud Servers

Hypercompetitive Cloud Market a Blessing for Cloud OTT Platforms – Analytics India Magazine

Cloud-native OTT platforms were born out of the explosion in the streaming audience. More viewers brought more security threats, more complex workflows and bigger infrastructures. A cloud-based infrastructure solved several of these problems: scalability became easier and the quality of experience improved by a wide margin.

Manik Bambha, the co-founder and CEO of ViewLift, was early to spot this shift, having founded the cloud-based streaming platform in 2008. "We realised we could help sports, media companies and broadcasters quickly launch their own OTT services without draining their time and resources. These companies can start making money from their digital content within weeks of joining the platform, rather than months or years," he explained.

The benefits of this push into cloud infrastructure were numerous. "Cloud-native tech is a revolution. In the past, brands had to order servers and wait for them to be ready before they could launch their digital platforms. This process could take between 6-12 months, which was a significant barrier to entry for many businesses. With cloud-native technology, brands can launch their platforms in a matter of weeks. Cloud-native platforms are built to scale quickly and efficiently, which means that they can handle millions of users in a matter of days," he stated.

During the development process, a cloud-native approach offers greater flexibility and agility. This means developers can easily make changes to the platform without disrupting the user experience. It also means brands can respond to changing market conditions and customer needs quicker than ever before. More for consumers, more for businesses.

Bambha said that this shift in content was a natural one and has been in the making for a long time. "Over the past decade, we have seen a massive shift in the media industry towards over-the-top (OTT) media services. Ten years ago, OTT was largely seen as a pilot or test project, but today it is a key growth strategy for many companies."

The rise of OTT was driven by a number of factors, including increased internet speeds, the proliferation of smart devices and changing consumer habits. While traditional TV is still the main revenue source for many big brands, it is shrinking and may vanish within the next 10-15 years. Consumers are increasingly turning to OTT services like Netflix, Hulu and Amazon Prime Video for their entertainment needs. "OTT is the future of media, and companies that do not adapt to this new reality risk being left behind," he stated.

Since these platforms are married to cloud businesses, it goes without saying that the furiously competitive segment will affect them. "We have stayed ahead in predicting the cloud wars, and we have made our OTT solutions cloud-agnostic and multi-cloud capable. Currently, we support AWS and Google Cloud," Bambha said.

Bambha also discussed how the increasingly competitive cloud computing market is opening up new opportunities for OTT content owners. "There's a wider range of cloud providers to choose from, which can help them optimise their costs and improve performance. Additionally, competition drives innovation, which is a win-win for both consumers and the industry," he added.

"One of the most significant applications of AI/ML is our content-recommendation engine. By analysing user behaviour and preferences, ViewLift can provide personalised content recommendations that are more likely to resonate with each individual user. This helps to keep users engaged and coming back for more," he said. Predictive analytics is another area where AI/ML is being used. ViewLift is also using AI/ML to personalise the user interface, providing a more intuitive and engaging experience.

Bambha's transition to the media and entertainment industry wasn't entirely unforeseen. Formerly a director of engineering at MySpace, followed by a stint as the VP of engineering at Shindig, Bambha has a deep understanding of social media, content and what goes on behind it. "As I continued to solve these technical problems, I began to see how they intersected with business problems, particularly in the domain of digital and OTT media," he signed off.

Securing Medical Devices is a Matter of Life and Death – Infosecurity Magazine

When a man arrived in the middle of the night at a North London hospital, emotionally upset, distressed, exhibiting seizure-like movements and unable to speak, Isabel Straw, an NHS emergency doctor, initially struggled to find the cause because none of the tests her team performed revealed any issues.

That is, until she realized the man had a brain stimulator implanted in his head and that its malfunctioning was probably the reason for his pain.

Straw, also the director of the non-profit bleepDigital, urged decision-makers at all levels to investigate further the cybersecurity risks of medical devices, from consumer devices through to implanted and ingested technologies.

"In the past 10 years, we've seen a lot of advances in these technologies, which has opened up new vulnerabilities," she said during a presentation at UK Cyber Week on April 4, 2023.

The Internet of Medical Things (IoMT), as these devices have collectively come to be called, is increasingly used in healthcare settings and at home, both outside and inside the body, and is ever more interconnected. As a result, the security threats the IoMT poses are becoming more concerning and can have a significant impact on patients' health.

The fear that these devices could start malfunctioning, or even get hacked, is real, and examples of cyber incidents involving IoMT devices are growing. As a result, there needs to be increased coordination between manufacturers and governments to implement more safeguards against security incidents and more digital forensics capabilities, Straw said.

She also insisted that healthcare professionals should be trained on technical issues they could encounter with IoMT devices and on as many models as possible.

"With the patient I mentioned, we had to go through his bag, where we found a remote control for the brain stimulator, which no doctor at the hospital knew about. So, I took a photo of it, did a reverse Google image search and found the manual online after a few hours. We realized the device was just desynchronized, but it took us 13 hours to find someone to reset it. If this happened again tomorrow, we would still not know how to treat him," she explained.

"To this date, we still don't know why it malfunctioned. Often, these medical devices don't have the memory space or the ability for digital forensics," she added.

These devices can process increasing amounts of data, posing security risks and data privacy concerns.

"Since 2013, the electrodes in brain stimulators have started to be able to read more data, on top of just delivering a voltage. This allowed us to get more data from the patient's brain activity and read it externally, which can be used to personalize the data you're analyzing to the patient's disease. But streaming people's brain data also brings a confidentiality issue," Straw highlighted.

In that case, not only does the brain stimulator need to be secure, but so do the communication streams with the health center, the system the health professional is using, and the cloud servers, as health professionals increasingly use cloud services to process and analyze data.

Another challenge is what to do when someone dies because of a medical device. "If this man had died, what would have happened with his device? Should we bury it with him, or dispose of it? Does it go into general waste? And what do you mention on the death certificate? These questions are still unanswered, and we don't get training on those issues," Straw noted.

Dubai, UAE Dedicated server hosting with Best Data Center … – Digital Journal

High-uptime, low-latency and low-cost dedicated server hosting plans with IPs based in Dubai, UAE

Delhi, Delhi, India, 8th Apr 2023, King NewsWire - Data centers are crucial to running your business, storing and managing vital data. They're also a great way to keep your company secure and ensure that everyone has access to information when they need it.

They're also a great way to simplify scaling when your company needs more capacity. TheServerHost Dubai Dedicated Server solutions can scale up relatively cheaply and in real time.

Dubai data centers are designed to handle demanding computing needs with the greatest efficiency, reliability and security. This means that they need to be built with the latest technologies and be able to adapt quickly to changing requirements.

Among the most important considerations are power, space and cooling capacity, with flexibility and scalability in mind. This is essential to ensuring that your data center is able to keep up with the demands of the business and grow as you do.

Dubai data centers also ensure that your business is well protected from external threats by using multiple layers of security systems, including biometric locks and video surveillance. This can prevent unauthorized people from accessing your servers and other equipment, which can lead to data breaches or malicious attacks.

Redundancy is the act of adding duplicate hardware or systems that can step in to take over the work if the original system fails. This is important in data center operations because it can prevent downtime and keep businesses running.

While redundant equipment helps reduce downtime, it also requires maintenance and care to ensure it works as expected. This is why many data centers have dedicated technicians on staff 24 hours a day.

There are several ways to build redundancy into your business. Some of the most common include having redundant rack power distribution units (PDUs), uninterruptible power supply (UPS) modules, and generators. These redundancy devices help keep your IT equipment powered up in the event of a power outage.

Another way to make sure that your equipment has backup power is by using dual feed or dual substations for utility power. These redundant components help ensure that your servers and other IT devices have plenty of power to keep them operating, even if one side of the power chain fails.

This type of redundancy can save your business money by reducing the amount of time that it takes to get your computer back up and running again. Additionally, it can minimize the impact that downtime has on your business and its customers.

The N value of redundancy is the minimum number of critical components needed for the data center to function at full capacity. It is a standard measurement for all data centers. However, it does not account for the additional redundancy that is required to keep your data center functioning at a high level of resilience.

Security is a vital part of any data center, as it protects critical information and applications from physical threats. Keeping data and applications secure can be an expensive and complicated endeavor, but it is also one that should never be ignored.

The most important thing about security in data centers is the right combination of strategy, architecture, technology and processes to mitigate the risk. By following these best practices, you can rest assured that your company's sensitive data is protected at all times.

First and foremost, you must ensure that you have a system in place that allows you to control access to the data center. This can include biometric readers, mantraps, anti-tailgating systems, and a number of other options.

Second, it must have a system in place that monitors all movement through the data center and prevents unwanted activity. This can be accomplished by using CCTV cameras to record movements in the hallways and at the data center itself.

Third, it must have a system in place to protect data and applications from environmental factors. This can be done by ensuring that the data center is built to withstand major weather events, such as floods, hurricanes, tornadoes and earthquakes.

Fourth, it must have a system in place for managing equipment that's onsite at the data center. This can be done by having a logically segmented network and by protecting the physical devices that make up that network from threats such as malware and viruses.

Finally, it must have a system in place where a firewall can be configured to block traffic based on endpoint identity and endpoint location. This will help you find attacks early before they can spread across your entire network.

A security strategy in a data center must be constantly monitored and adjusted as the threat landscape changes. This is why it's essential to conduct regular audits and testing to identify vulnerabilities and patch holes in your security infrastructure.

In addition to implementing the best security technologies and techniques, you must also make sure that your security staff are aware of the protocols they need to follow. This can be achieved by training all employees on the proper use of security measures and why they are needed.

Data centers are responsible for the storage of large amounts of data that businesses need to access. As a result, the management of data center resources becomes an important factor in ensuring that the data is available to meet business demands.

With so much data to manage, businesses are transforming their data center infrastructures into automated systems that help with monitoring, processing, and troubleshooting processes. These tools help to improve operational efficiency and reduce IT staff workloads by minimizing repetitive, time-consuming tasks so that they can focus on higher-level, strategic goals.

Besides improving productivity and operational efficiency, automation can also enhance the security of the data center. It can identify potential security threats, and it can respond to them in a timely manner.

Another benefit of data center automation is that it streamlines the network configuration process by enabling the use of common policy settings for all networks. This eliminates the need to manually implement changes that are necessary to accommodate changing IT needs.

It's also possible to integrate different automation solutions together to create a unified control center. This allows IT to configure event triggers and thresholds for compute, provisioning and deprovisioning resources across different layers of the infrastructure.

As an added bonus, many data center automation tools allow for API programmability. This ensures that applications can be easily integrated with each other and that they maintain a fast data exchange, which is critical for agile IT operations.

With these considerations in mind, the best data center infrastructure will enable businesses to take advantage of new technology while keeping costs down and avoiding unnecessary headaches. With automation in place, organizations will be able to manage their data center more effectively and deliver high-quality services to customers.

AI is the field of computer science that aims to create machines that can learn and think like humans. It encompasses machine learning and deep learning, which allow computers to mimic the neural networks in the human brain.

AI has become an increasingly important technology, and it's being applied in many different industries, including finance, healthcare and manufacturing. Companies use machine learning algorithms to understand data and uncover information about their customers, products, competitors and more.

There are also numerous AI-powered services available to organizations, many of which are provided by cloud providers. These services are aimed at speeding up data prep, model development and application deployment.

Dedicated servers are a great choice for businesses that have a lot of traffic or need enterprise applications. They offer better hardware, security, and experienced support. They also have unlimited bandwidth and dedicated IP addresses, so you can run as many websites as you want. TheServerHost offers a variety of plans and packages, so you can choose one that suits your needs.

TheServerHost Dubai servers are optimized for high-speed performance. They feature multiple high-speed network interfaces, daily security scans, redundant power and network connections, and are built with enterprise-grade hardware. The company also offers a centralized control panel, which makes managing your server easier.

TheServerHost has a team of technical support specialists that can help you with any issues you may have. They are available round the clock and can answer your questions quickly and efficiently. You can also contact them by phone or chat to get an immediate response.

Daily Backup: TheServerHost's daily backup service is free and provides cloud-to-cloud, disaster recovery, migration, deletion control, and search solutions. It can be used to back up databases, email accounts, and other important data.

Managed Services: TheServerHost's managed services can help you with your website and keep it secure and virus-free. They can also update your operating system, install security updates, and maintain your server's performance.

Memcached and Redis Caching: TheServerHost's caching technology speeds up the processing and execution of PHP-based applications, which helps your website load faster. It also stores the most requested and important databases in RAM, which reduces their retrieval time.

Unmatched Uptime: TheServerHost has a 100% uptime guarantee, so you can rest assured that your site will always be online. They also have a team of dedicated engineers that can quickly respond to any problems you may encounter.

Whether you need a dedicated server for your business or just a personal blog, TheServerHost can provide you with everything you need to make your website a success. They have a variety of packages and plans to suit your needs, including free DNS, a control panel, and live chat support.

The best way to ensure your server is working at peak efficiency is to perform maintenance checks regularly. These include checking hardware and software updates, security upgrades, and RAID alarms. Performing these maintenance tasks can save you a lot of time and money down the road, so it's worth taking the time to do them.

In addition to maintaining your server, TheServerHost also offers a host of other services that can help you stay productive and on track. These include daily backup, daily malware scans, and daily malware removal. They can also help you upgrade your hardware, install new applications, and create a customized hosting plan.

Choosing the right dedicated hosting provider can be tricky. You need to choose a company that offers quality service at a fair price. It's also important to find a company that offers a wide range of features and services, such as managed hosting and unlimited bandwidth.

For Dubai VPS Server visit https://theserverhost.com/vps/dubai

For UAE Dedicated Server visit https://theserverhost.com/dedicated/dubai

Organization: TheServerHost

Contact Person: Robin Das

Website: https://theserverhost.com/

Email: [emailprotected]

Address: 493, G.F., Sector -5, Vaishali, Ghaziabad 201010.

City: Delhi

State: Delhi

Country: India

Release Id: 0804233047

The Mastodon plugin is now available on the Steampipe Hub – InfoWorld

When Twitter changed hands last November I switched to Mastodon; ever since, I've enjoyed happier and more productive social networking. To enhance my happiness and productivity I began working on a Mastodon plugin for Steampipe. My initial goal was to study the fediverse writ large. Which people and which servers are powerful connectors? How do moderation policies work? What's it like to join a small server versus a large one?

These are important questions, and you can use the plugin to begin to answer them. But I soon realized that as a newcomer to a scene that's been evolving for six years, and has not welcomed such analysis, I should start by looking for ways to enhance the experience of reading Mastodon. So I began building a set of dashboards that augment the stock Mastodon client or (my preference) elk.zone. And I've narrated that project in a series of posts.

Last week we released the plugin to the Steampipe Hub. If you've installed Steampipe, you can now get the plugin using steampipe plugin install mastodon. The next phases of this project will explore using the plugin and dashboards in Steampipe Cloud, and speeding up the dashboards by means of persistent Postgres tables and Steampipe Cloud snapshots. Meanwhile, here's a recap of what I've learned thus far.

While the dashboards use charts and relationship graphs, they are mainly tables of query results. Because Steampipe dashboards don't (yet) render HTML, these views display plain text only: no images, no styled text. I've embraced this constraint, and I find it valuable in two ways. First, I'm able to scan many more posts at a glance than is possible in conventional clients, and more effectively choose which to engage with. When I described this effect to a friend he said: "It's a Bloomberg terminal for Mastodon!" As those of us who rode the first wave of the blogosphere will recall, RSS readers were a revelation for the same reason.

Second, I find that the absence of images and styled text has a calming effect. To maintain a healthy information diet you need to choose sources wisely but, no matter where you go, sites deploy a barrage of attention-grabbing devices. I find dialing down the noise helpful, for the same reason that I often switch my phone to monochrome mode. Attention is our scarcest resource; the fewer distractions, the better.

There's a tradeoff, of course; sometimes an image is the entire point of a post. So while I often read Mastodon using these Steampipe dashboards, I also use Elk directly. The Steampipe dashboards work alongside conventional Mastodon clients, and indeed depend on them: I click through from the dashboards to Elk in order to boost, reply, or view images. That experience is enhanced by instance-qualified URLs that translate foreign URLs to ones that work on your home server.

The ability to assign people to lists, and read in a list-oriented way, is a handy Twitter affordance that I never used much because it was easy to let the algorithms govern my information diet. Because Mastodon doesn't work like that, lists have become the primary way I read the fediverse flow. Of the 800+ people I follow so far, I've assigned more than half to lists with titles like *Climate* and *Energy* and *Software*. To help me do that, several dashboards report how many of the people I follow are assigned to lists (or not).

I want as many people on lists as possible. So I periodically review the people I follow, put unassigned people on lists, and track the ratio of people who are, or aren't, on lists. Here's the query for that.
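
The query itself is not reproduced in this excerpt, so what follows is a minimal sketch of the idea. The table and column names (mastodon_my_following, mastodon_list_account, acct) are assumptions for illustration, not the plugin's documented schema.

-- Sketch: how many of the people I follow are on at least one list?
-- Table and column names here are assumed.
with listed as (
  select distinct acct from mastodon_list_account
)
select
  count(l.acct)                              as on_lists,
  count(*) - count(l.acct)                   as not_on_lists,
  round(100.0 * count(l.acct) / count(*), 1) as pct_on_lists
from mastodon_my_following f
left join listed l using (acct);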

When you read in a list-oriented way, as is also true when you read by following hashtags, there are always people whose chattiness becomes a distraction. To control that I've implemented the following rule: show at most one original toot per person per list per day. Will I miss some things this way? Sure! But if you've said something that resonates with other people, I'm likely to hear about it from someone else. It's a tradeoff that's working well for me so far.

Here's the SQL implementation of the rule.
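
The SQL is also missing from this excerpt; a minimal sketch of the rule, built on the select distinct on (person, day) idiom mentioned below, might look like this. The table name mastodon_toot_list and its columns are assumptions rather than the plugin's exact schema.

-- Sketch: at most one original toot per person per list per day.
-- mastodon_toot_list, list_id, reblog and username are assumed names.
select distinct on (username, created_at::date)
  created_at::date as day,
  username,
  content,
  url
from mastodon_toot_list
where list_id = '42'    -- hypothetical list id
  and reblog is null    -- skip boosts, keep original toots
order by username, created_at::date, created_at desc;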

On the home timelines dashboard I've made it optional to include or hide boosts, which can be the majority of items. On the list-reading dashboard I've opted to always exclude them, but the SQL idiom for doing so, select distinct on (person, day), is simple, easy to understand, and easy to change.

I've so far found three ways in which relationship graphs can make Mastodon more legible. First, in Mastodon relationship graphs, I showed how to use SQL-defined nodes and edges to show boost relationships among people and servers. In another article I used the same tools to map relationships among people and tags. And most recently I used them to explore server-to-server moderation.

In all three cases the format conveys information not directly available from tabular views. Clusters of interesting people pop out, as do people who share tags. And when I graphed servers that block other servers I discovered an unexpected category: some servers that block others are themselves also blocked, like infosec.exchange in this example.

The Steampipe combo of SQL-oriented API access and dashboards-as-code is a uniquely productive way to build relationship graphs that can unlock insights in any domain. As we've seen with Kubernetes, they can help make cloud infrastructure more legible. The Mastodon graphs suggest that the same can happen in the social networking realm.

When you append .rss to the URL of a Mastodon account, or tag, you produce an RSS feed like https://mastodon.social/@judell.rss or https://mastodon.social/tags/steampipe.rss. These feeds provide a kind of auxiliary API that includes data not otherwise available from the primary API: related tags, which appear in the feeds as RSS category elements. Steampipe really shines here thanks to the RSS plugin, which enables joins with the primary Mastodon API. This query augments items in an account's feed with the tags that appear in each item.
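
A sketch of that query, reading the account feed through the RSS plugin. The rss_item column names used here (feed_link, link, published, categories) are assumptions; the full dashboard version joins this against the Mastodon plugin's own tables.

-- Sketch: surface the category tags (hashtags) attached to each feed item.
-- Column names on rss_item are assumed.
select
  published,
  link,
  categories as tags   -- RSS category elements, i.e. the item's hashtags
from rss_item
where feed_link = 'https://mastodon.social/@judell.rss'
order by published desc;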

A similar query drives the graph discussed in Mapping people and tags on Mastodon.

In that example, surfacing the connection between a user, @themarkup, and a pair of tags, scotus and section230, was useful in two ways. First, it helped me instantly spot the item that I most wanted to read, which was buried deep in the search results. Second, it helped me discover a source that Ill return to for guidance on similar topics. Of course I added that source to my Law list!

Everyone who comes to Mastodon appreciates not having an adversarial algorithm control what they see in their timelines. Most of us aren't opposed to algorithmic influence per se, though; we just don't like the adversarial nature of it. How can we build algorithms that work with us, not against us? We've already seen one example: the list-reading dashboard displays just one item per list per person per day. That's a policy that I was able to define, and easily implement, with Steampipe. And in fact I adjusted it after using it for a while. The original policy was hourly, and that was too chatty, so I switched to daily by making a trivial change to the SQL query.

In News in the fediverse I showed another example. The Mastodon server press.coop aggregates feeds from mainstream news sources. I was happy to have those feeds, but I didn't want to see those news items mixed in with my home timeline. Rather, I wanted to assign them to a News list and read them only when I visit that list in a news-reading mindset. The fediverse offers an opportunity to reboot the social web and gain control of our information diets. Since our diets all differ, it ought to be possible, and even easy, for anyone to turn on a rule like *news only on lists, not timelines*. Steampipe can make it so.
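
As a rough sketch of what such a rule could look like, under the same assumption about table and column names (a mastodon_toot_home table with a server column), the policy reduces to a one-line predicate:

-- Sketch: "news only on lists, not timelines" -- drop press.coop items
-- from the home timeline view. Table and column names are assumed.
select
  created_at,
  username,
  server,
  content,
  url
from mastodon_toot_home
where server <> 'press.coop'   -- news accounts stay on the News list instead
order by created_at desc
limit 100;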

When you ask people on Mastodon about these kinds of features, the response is often "Have you tried client X? It offers feature Y." But that solution doesn't scale. It would require massive duplication of effort for every client to implement every such policy; meanwhile, people don't want to switch to client X just for feature Y (which might entail losing feature Z). Could policies be encapsulated and made available to any Mastodon client? It's interesting to think about Steampipe as a component that delivers that encapsulation. A timeline built by SQL queries, and governed by SQL-defined policies, is a resource available to any app that can connect to Postgres, either locally or in Steampipe Cloud.

If you're curious about the Steampipe + Mastodon combo, install the plugin, try out the sample queries, then clone the mod and check out the dashboards. Do they usefully augment your Mastodon reader? What would improve them? Can you use these ingredients to invent your own customized Mastodon experience? Join our Slack community and let us know how it goes!

Software Architecture and Design InfoQ Trends Report – April 2023 – InfoQ.com

Key Takeaways

The InfoQ Trends Reports provide InfoQ readers a high-level overview of the topics to pay attention to, and also help the InfoQ editorial team focus on innovative technologies. In addition to this report and the trends graph, an accompanying podcast features some of the editors discussing these trends.

More details follow later in the report, but first it is helpful to summarize the changes from last year's trends graph.

Three new items were added to the graph this year. Large language models and software supply chain security are new innovator trends, and "architecture as a team sport" was added under early adopters.

Trends which gained adoption, and therefore moved to the right, included "design for portability," data-driven architecture, and serverless. eBPF was removed as it has niche applications, and is not likely to be a major driver in architectural decisions.

A few trends were renamed and/or combined. We consider Dapr as an implementation of the "design for portability" concept, so it was removed as a separate trend. Data-driven architecture is the combination of "data + architecture" and data mesh. Blockchain was replaced with the broader idea of decentralized apps, or dApps. WebAssembly now notes both server-side and client-side, as these are related but separate ideas and may evolve independently in the future.

The portability aspect of "design for portability" is not about being able to pick up your code and move it. Rather, it creates a clean abstraction from the infrastructure. As InfoQ editor Vasco Veloso says, "whoever is designing and building the system can focus on what brings value, instead of having to worry too much with the platform details that they are going to be running on."

This design philosophy is being enabled by frameworks such as Dapr. Daniel Bryant, InfoQ news manager, sees the benefit of the CNCF project as providing a clearly defined abstraction layer and API for building cloud-native services. Bryant said, "[With integration] it's all about the APIs, and [Dapr] provides abstractions without doing the lowest common denominator."

A recent article by Bilgin Ibryam described the evolution of cloud-native applications into cloud-bound applications. Instead of designing a system with logical components for application logic and compute infrastructure, cloud-bound applications focus on the integration bindings. These bindings include external APIs as well as operational needs such as workflow orchestration and observability telemetry.

Another technology that supports designing for portability is WebAssembly, specifically server-side WebAssembly. Often WebAssembly is thought of as a client-side capability, for optimizing code running in the browser. But using WebAssembly has significant benefits for server-side code. InfoQ Editor Eran Stiller described the process for creating WebAssembly-based containers.

Instead of compiling it to a Docker container and then needing to spin up an entire system inside that container on your orchestrator, you compile it to WebAssembly and that allows the container to be much more lightweight. It has security baked in because it's meant to run in the browser. And it can run anywhere: in any cloud, or on any CPU, for that matter. Eran Stiller

More information about Dapr and WebAssembly can be found by following those topics on InfoQ.

The news around AI, specifically large language models such as GPT-3 and GPT-4, has been impossible to ignore. This is not simply a tool used by software professionals as the adoption by everyday people and the coverage in all forms of media has demonstrated. But what does it mean to software architects? In some ways, it is too early to know what will happen.

With ChatGPT and Bing, we're just beginning to see what is possible with large language models like GPT-3. This is the definition of an innovator trend. I don't know what will come of it, but it will be significant, and something I look forward to seeing evolve in the next few years. Thomas Betts

While the future is uncertain, we have optimism that these AI models will generally have a positive benefit on the software we build and how we build it. The code-generation capabilities of ChatGPT, Bing chat, and GitHub Copilot are useful for writing code and tests and allowing developers to work faster. Architects are also using the chatbots to discuss design options and analyze trade-offs.

While these improvements in efficiency are useful, care must be taken to understand the limitations of AI models. They all have built-in biases which may not be obvious. They also may not understand your business domain, despite sounding confident in their responses.

This will definitely be a major trend to watch in 2023, as new products are built on large language models and companies find ways to integrate them into existing systems.

Last year, we discussed the idea of data + architecture as a way to capture how architects are considering data differently when designing systems. This year we are combining that idea with Data Mesh under the heading of data-driven architecture.

The structure, storage, and processing of data are up-front concerns, rather than details to be handled during implementation. Blanca Garcia-Gil, a member of the QCon London programming committee, said, "when designing cloud architectures there is a need to think from the start about data collection, storage, and security, so that later on we can derive value from it, including the use of AI/ML." Garcia-Gil also pointed out that data observability is still an innovator trend, at least compared to the state of observability of other portions of a system.

Data Mesh was a paradigm shift, with teams aligned around the ownership of data products. This fits the idea of data-driven architecture, as well as incorporating Conway's Law into the overall design of a system.

While there has been more adoption of designing for sustainability, we chose to leave it as an innovator trend because the industry is just starting to really embrace sustainable systems and designing for a low carbon footprint. We need to consider sustainability as a primary feature, not something we achieve secondarily when trying to reduce costs. Veloso said, "I have noticed that there is more talk about sustainability these days. Let's be honest that probably half of it is because energy is just more expensive and everybody wants to reduce OPEX."

One of the biggest challenges is the difficulty in measuring the carbon footprint of a system. Until now, cost has been used as a stand-in for environmental impact, because there is a correlation between how much compute you use and how much carbon you use. But this technique has many limitations.

The Green Software Foundation is one initiative trying to help create tools to measure the carbon consumed. At QCon London, Adrian Cockcroft gave an overview of where the three major cloud vendors (AWS, Azure, GCP) currently stand in providing carbon measurements.

As the tooling improves, developers will be able to add the carbon usage to other observability metrics of a system. Once those values are visible, the system can be designed and modified to reduce them.

This also ties into the ideas around portability and cloud-native frameworks. If our systems are more portable, we will more easily be able to adapt them to run in the most environmentally friendly ways. This could mean moving resources to data centers that use green energy, or processing workloads during times when the available energy is greener. We can no longer assume that running at night, when the servers are less busy, is the best option, as solar power could mean the middle of the day is the greenest time.

Blockchain, and the distributed ledger behind it, is the technology that underpins decentralized apps. Mostly due to changes at Twitter, Mastodon emerged as an alternative, decentralized social network. However, blockchain remains a technology that solves a problem most people do not see as a problem. Because of this niche applicability it remains classified as an innovator trend.

Architects no longer work alone, and architects can no longer think only about technical issues. The role of an architect varies greatly across the industry, and some companies have eliminated the title entirely, favoring principal engineers as the role primarily responsible for architectural decisions. This corresponds to a more collaborative approach, where architects work closely with the engineers who are building a system to continually refine the system design.

Architects have been working collaboratively with software teams to come up with and iterate designs. I continue to see different roles here (especially in larger organizations), but communication and working together through proof of concepts to try out designs if needed is key. Blanca Garcia-Gil

Architecture Decision Records (ADRs) are now commonly recognized as a way to document and communicate design decisions. They are also being used as a collaboration tool to help engineers learn to make technical decisions and consider trade-offs.

The Architecture & Design editorial team met remotely to discuss these trends, and we recorded our discussion as a podcast. You can listen to the discussion and get a feel for the thinking behind these trends.

The Future of Applied AI: Towards A Hyperpersonalised & Sustainable World – BBN Times

Business leaders are facing the challenge of addressing sustainability goals, including reducing carbon footprint and managing energy consumption costs, whilst also ensuring that they position their firms to take advantage of the rapid pace of change and the new business opportunities that advancing technology, in particular AI, is enabling across every sector of the economy.

As an Intel Ambassador, I am delighted to continue my collaboration with Intel in relation to the 4th Generation of Intel Xeon Scalable Processors and the potential to scale AI across the economy whilst also helping meet sustainability objectives.

With built-in accelerators and software optimizations, 4th Gen Intel Xeon Scalable Processors have been shown to deliver leading performance per watt on targeted real-world workloads. This results in more efficient CPU utilization, lower electricity consumption, and higher ROI, while helping businesses achieve their sustainability goals.

One may add broad AI, or Artificial Broad Intelligence (ABI), to the categories on the lower left side of the image above. We are now in the era of ABI, as multimodal, multitasking Transformers from the likes of Microsoft, Google, OpenAI, and others enable certain Deep Learning algorithms to perform both vision and natural language processing (NLP) tasks, albeit such powerful algorithms require capable Central Processing Units (CPUs) and Graphics Processing Units (GPUs) that scale to perform well.

Intel 4th Generation Xeon Scalable Processors accelerate AI workloads by 3x to 5x for Deep Learning inference on SSD-ResNet34 and up to 2x for training on ResNet50 v1.5 with Intel Advanced Matrix Extensions (Intel AMX) compared with the previous generation. Furthermore, in terms of AI performance the 4th Gen Intel Xeon Scalable Processors deliver up to 10x higher PyTorch performance for both real-time inference and training with built-in AMX (BF16) versus the prior generation (FP32).

As we enter an era of ever more powerful AI algorithms, such as Transformers with self-attention and Generative AI, and the rise of AI meets the IoT (AIoT), we'll need the kind of capability that the 4th Gen Intel Xeon Scalable Processors deliver: more efficient and powerful CPUs that allow AI to scale and process large volumes of data very rapidly in low-latency use cases, and yet at the same time to do so with energy efficiency and a reduced carbon footprint as key objectives too.

Microsoft commissioned a report from PWC entitled How AI can deliver a sustainable future in relation to the potential for AI across four sectors of the global economy:

Energy;

Agriculture;

Water;

Transportation.

The results from the report demonstrated the potential of AI to drive a reduction in emissions, whilst also increasing jobs and economic growth across the four sectors explored in the report:

Reduction of CO2 emissions by up to 4% globally;

GDP growth of 4.4%, amounting to a vast $5.2 trillion;

Employment growth amounting to a net 38 million jobs created.

The potential for the reduction in GHG emissions (up to 4% globally) is based upon assumptions applied across all four sectors (water, energy, agriculture and transportation) and the role that AI may play across those sectors, including but not limited to precision agriculture, precision monitoring, fuel efficiencies, optimising the use of inputs and higher productivity.

Furthermore, the resulting gains from Standalone 5G networks were set out by the US EPA and BCG (see the right side of the infographic above), whereby the ability of SA 5G networks to enable massive scaling of the AIoT (AI applied to IoT devices and sensors) and increased automation flows with machine-to-machine communications may result in both a jobs gain and the potential to reduce GHG emissions.

The latest Intel Accelerator Engines and software optimizations help improve power efficiency across AI, data analytics, networking and storage. Organizations can achieve a 2.9x average performance per watt efficiency improvement for targeted workloads utilizing built-in accelerators compared with the previous generation. This leads to more efficient CPU utilization, lower electricity consumption and higher return on investment, while helping businesses achieve their sustainability and carbon reduction goals.

The 4th Generation of Intel Xeon Scalable Processors provides energy efficiency improvements achieved through innovations within the design of the built-in accelerators. This allows particular workloads to consume less energy whilst running at faster speeds.

The result per watt (on average) is 2.9x over the 3rd Gen Intel Xeon Processors, whilst also allowing for the massive scaling of workloads that will be needed in the new era of the AIoT that we are entering: for example, inferencing and learning increased by 10x, improved compression by 2x, and data analytics by 3x, all achieved with 95% fewer cores. [1]

Another innovation is the Optimized Power Mode feature that, when enabled, provides 20% energy savings (up to 140 Watts on a dual socket system) while only minimally impacting performance (2-5% on select workloads).

The convergence of Standalone (SA) 5G networks, which allow for a massive increase in device connectivity and ultra-low latency environments, will enable a massive scaling of the Internet of Things (IoT), with internet-connected devices and sensors communicating with human users and with each other (machine to machine). Increasingly, these IoT devices will have AI embedded on them (at the edge of the network).

Furthermore, Statista forecasts that by 2025 there will be a staggering 75 billion internet-connected devices, or over 9 per person on the planet! And IDC and Seagate forecast that the volume of data generated will increase from 64 zettabytes in 2020 (when we talked about the era of big data) to almost three times that volume, amounting to 175 zettabytes in 2025, with a third of this data consumed in real time! Applying AI will be essential to efficiently manage networks and also to make sense of the data and provide near real-time responses to users.

Furthermore, this new era will allow us to measure, analyse (evaluate) and respond dynamically to our environment (whether that be healthcare, energy, smart cities with traffic, manufacturing, etc). AI capabilities and inference performance will be key to succeeding in the era that we are entering.

A world where machine-to-machine communication reduces risk (a broken-down car is detected by the red car, which then broadcasts to other vehicles around it; those vehicles broadcast in turn and thereby also avoid traffic jams, where emissions can increase due to slow-moving traffic), as shown in the illustration below.

Intel Xeon Scalable processors provide more networking compute at lower latency while helping preserve data integrity. Achieve up to 79% higher storage I/O per second (IOPS) with as much as 45% lower latency when using NVMe over TCP, accelerating CRC32C error checking with Intel Data Streaming Accelerator (Intel DSA), compared to software error checking without acceleration.

BCG, in an article entitled Reduce Carbon and Costs with the Power of AI, forecast that AI technology applied towards corporate sustainability goals may yield emissions reductions of 2.6 to 5.3 gigatons, or USD 1 to 3 trillion in added value.

The process for achieving this entails:

Monitoring emissions;

Predicting emissions;

Reducing emissions.

BCG believes that the sectors with the greatest potential for reductions of GHGs due to application of AI include: Industrial goods, transportation, pharmaceutical, consumer packaged goods, energy and utilities.

Intel's vision is to accelerate sustainable computing, from manufacturing to products to solutions, for a sustainable future. Organizations can help reduce their scope 3 GHG emissions by choosing 4th Gen Intel Xeon Scalable Processors, which are manufactured with 90-100% renewable energy at sites with state-of-the-art water reclamation facilities that in 2021 recycled 2.8 billion gallons of water. For the avoidance of doubt, the statistics in this paragraph concern Scope 3 emissions related to embodied carbon, which don't affect the operational emissions of carbon; however, Scope 3 also includes operational carbon, within which servers form a larger part of the equation.

Use case examples of applying the AIoT towards sustainability include the following:

Sensors that may detect that no one is present in a room and hence switch off the lights and turn the heating (or, in summer, the air conditioning) off or down to a lower level;

Sensors that may realise that a window is open whilst the heating is running and close it;

Predicting issues before they occur, such as burst water pipes, unplanned outages, and traffic congestion spots, then rerouting traffic or amending the traffic-light sequencing to reduce the jams;

In relation to agriculture, applying computer vision on a drone to determine when the crops are ripe for harvesting (so as to reduce wasted crops) and also to check for signs of drought and insect infestations;

Deforestation: near real-time analytics of illegal logging.

Renewable energy: drones applying computer vision from Deep Learning algorithms to inspect the blades of wind turbines and the solar panels on solar farms for cracks and damage, thereby improving asset life and enhancing the amount of energy generated.

Energy storage optimisation with Machine Learning algorithms applied towards maximising the operational performance and return on investment for battery storage.

Rolnick et al. (2019) published a paper entitled Tackling Climate Change with Machine Learning (co-authored by leading AI researchers including Demis Hassabis, co-founder of DeepMind, Andrew Y. Ng, and Yoshua Bengio) that set out the potential to reduce emissions by applying AI across the manufacturing operations of a firm: from the design stage, with generative design and 3D printing; through supply chain optimization with a preference for low greenhouse gas emissions options; improving factory energy consumption with renewable supplies and efficiency gains (including predictive maintenance); through to detecting emissions, with the follow-up action of abating emissions from heating and cooling and optimizing transport routes.

The 4th Generation of Intel Xeon Scalable Processors also have power management tools to enable more control and greater operational savings. For example, new Optimized Power Mode in the platform BIOS can deliver up to 20% socket power savings with a less than 5% performance impact for selected workloads.

Furthermore, the paper by Rolnick et al. sets out how firms may deal with the unsold inventory problem for retailers, with some estimates placing the annual cost to the fashion industry at $120 billion a year! This is both an economic and an environmental wastage. Targeted recommendation algorithms to match supply with demand, and the application of Machine Learning to forecasting demand and production needs, may also help reduce such wastage.

In the world of the AIoT a customer could be walking along the high street or the mall and a Machine Learning algorithm could offer them personalised product recommendations based upon the stores in close proximity to them.

Both the retail and manufacturing examples would require near real-time responses from the AI algorithms and hence a reason why accelerators within the CPU are important factors to deliver enhanced performance.

The world of the AIoT will require the ability to work within power constrained environments and respond to user needs in near-real time.

Intel enables organizations to make dynamic adjustments to save electricity as computing needs fluctuate. 4th Gen Intel Xeon Scalable Processors have built-in telemetry tools that provide vital data and AI capabilities to help intelligently monitor and manage CPU resources, build models that help predict peak loads on the data centre or network, and tune CPU frequencies to reduce electricity use when demand is lower. This opens the door to greater electricity savings, the ability to selectively increase workloads when renewable energy sources are available, and an opportunity to lower the carbon footprint of data centres.

In addition, only Intel offers processor SKUs optimized for liquid-cooled systems, with an immersion cooling warranty rider available, helping organizations further advance their sustainability goals.

AI will literally be all around us, across the devices and sensors that we use, allowing for mass hyper-personalisation at scale with near real-time responses to the customer. However, in order to seize these opportunities, business leaders will need to ensure that they have invested in the appropriate technology that can meet the needs of the business and its customers.

We are entering an era where near immediate responses (often on the fly) will be necessary to engage with customers and also to respond dynamically in a world of machine-to-machine communication.

Intel Advanced Matrix Extensions (Intel AMX) allows for efficient scaling of AI capabilities to respond to the needs of the user and the network.

Significantly accelerate AI capabilities on the CPU with Intel Advanced Matrix Extensions (Intel AMX). Intel AMX is a built-in accelerator that improves the performance of Deep Learning training and inference on 4th Gen Intel Xeon Scalable Processors, ideal for workloads like natural language processing, recommendation systems, and image recognition.

4th Gen Intel Xeon Scalable Processors have the most built-in accelerators of any CPU on the market to deliver performance and power efficiency advantages across the fastest growing workload types in AI, analytics, networking, storage, and HPC. With all-new accelerated matrix multiply operations, 4th Gen Intel Xeon Scalable Processors have exceptional AI training and inference performance.

Other seamlessly integrated accelerators speed up data movement and compression for faster networking, boost query throughput for more responsive analytics, and offload scheduling and queue management to dynamically balance loads across multiple cores. To enable new built-in accelerator features, Intel supports the ecosystem with OS-level software, libraries, and APIs.

Performance gains from the 4th Gen Intel Xeon Scalable Processors include the following (source: 4th Gen Intel Xeon Scalable Processors performance index):

Run cloud and networking workloads using fewer cores with faster cryptography. Increase client density by up to 4.35x on an open-source NGINX web server with Intel QuickAssist Technology (Intel QAT) using RSA4K compared to software running on CPU cores without acceleration.

Improve database and analytics performance with 1.91x higher throughput for data decompression in the open source RocksDB engine, using Intel In-Memory Analytics Accelerator (Intel IAA), compared to software compression on cores without acceleration. Accelerate solutions with 8.9x increased memory-to-memory transfer using Intel Data Streaming Accelerator (Intel DSA), versus previous-generation direct memory access.

For 5G vRAN deployments, increase network capacity up to 2x with new instruction set acceleration compared to the previous generation.

Security is a key issue in the era of the AIoT as SA 5G networks expand and scale.

Businesses need to protect data and remain compliant with privacy regulations whether deploying on premises, at the edge, or in the cloud. 4th Gen Intel Xeon Scalable Processors unlock new opportunities for business collaboration and insights, even with sensitive or regulated data. Confidential computing offers a solution to help protect data in use with hardware-based isolation and remote attestation of workloads. Intel Software Guard Extensions (Intel SGX) is the most researched, updated, and deployed confidential computing technology on the market today, with the smallest trust boundary of any confidential computing technology in the data centre. Developers can run sensitive data operations inside enclaves to help increase application security and protect data confidentiality.

Intel's Bosch case study provides an example of an application of security in the IoT sector.

The case study observed that access to raw data sets is ideal for the development of analytics based on Artificial Intelligence. The example sets out how Bosch's autonomous vehicles unit reduced risks associated with data or IP leakage using the open source project Gramine, running on Intel SGX. For more details, please refer to Implementing Advanced Security for AI and Analytics.

By the end of this decade, we may experience a substantial increase in the number of advanced Electric and Autonomous Vehicles (EVs and AVs) on the road, and a world where battery storage will be of greater importance as more renewable energy scales across the grid (following the Inflation Reduction Act in the US, and the continued policies of the UK and EU towards reducing carbon emission targets). Powerful CPUs with built-in accelerators can help Machine Learning techniques scale across battery storage facilities to optimise the availability of energy and battery performance. This is relevant for edge and network scenarios with power and battery constraints, such as EVs and power-optimized devices in smart homes and manufacturing facilities.

In this world, mass hyper-personalisation at scale enabled by the AIoT will allow for both near real-time engagement with the customer on the fly and greater efficiency, and hence less wastage, as Machine Learning and Data Science will enable superior prediction of customer needs from the vast amount of data that will be created.

One may imagine users engaging in retail or entertainment on their way into work and back home, with the EV/AV recognising the passengers with computer vision from Deep Learning and personalising the environment of the car (entertainment, etc.) to the user profile. The AVs/EVs will go from one journey to another, adjusting to different passengers and allowing users to use their time efficiently and as they wish (engaging with brands, working, entertainment). However, even before more advanced EVs/AVs arrive, there are many opportunities for firms to seize in the era of the AIoT for near real-time engagement with the customer whilst also reducing wastage (for example, better matching supply and demand, improved demand forecasting, and identifying and matching supply chain and manufacturing processes).

The 4th Gen Intel Xeon Scalable processors enable a more secure environment for developing IoT services and applications across the edge of the network, in turn enabling businesses to create new opportunities with greater confidence around security.

This vision of scaling and enabling a secure AIoT aligns with my own vision of applying AI and related data-analytics and digital technology to deliver on sustainability objectives whilst also delivering a world of genuine mass hyper-personalisation at scale, in which firms can truly respond to their customers' needs in real time and further tailor their offerings to the individual customer.

We're entering an exciting new era, from this year and across the rest of this decade, in which AI will scale rapidly across the devices and sensors around us as well as the remote cloud servers that will remain important for training algorithms, acting as data lakes and enabling analytics on historic data, in order to improve learning outcomes for AI, improve the personalisation of services and identify opportunities to further enhance operational efficiency across organisations.

We'll be able to measure and evaluate emissions and energy consumption around us, identify wastage and reduce inefficiencies.

AI algorithms at the Edge of the network will require energy-efficient CPUs to operate in power-constrained environments and to achieve a reduced carbon footprint. The 4th Gen Intel Xeon Scalable processors allow organisations to scale AI capabilities, provide hyper-personalisation at scale, and manage their internal operations at the Edge more efficiently, whilst also helping them meet security and sustainability goals.

Imtiaz Adam

Data Scientist

Postgraduate in Computer Science with research in AI, Sloan Fellow

More here:
The Future of Applied AI: Towards A Hyperpersonalised & Sustainable World - BBN Times

Haar Cloud Ltd. announces the launch of a new range of managed hosting services – EIN News

Haar Cloud Ltd. launches a new range of managed hosting services

Haar Cloud, a leading cloud solutions and IT infrastructure company, has just released a new range of Managed Hosting services.

Adrian Huma, co-founder Haar Cloud and Director

Understanding how complex it can be to start a new website or ecommerce store from scratch, Haar Cloud now offers a new range of cloud web hosting services, all fully managed and powered by an easy-to-use and performant cPanel control panel.

"With these new managed hosting services, whether our customers own a startup or have a large company, they can choose a Haar web hosting plan customized to their needs and goals," said Adrian Huma, co-founder of Haar Cloud and Director.

The new Managed Hosting gives customers access to industry-leading systems and highly skilled technicians, along with NVMe SSDs, 100% uptime, 10Gbit/s servers, and anti-DDoS protection.

"On top of everything, our new managed hosting range provides a website with optimal speed and security, pre-installed tools, such WooCommerce or WordPress, and 24/7 technical support with an average response time of 15 minutes," added Adrian Huma, co-founder of Haar Cloud.

Haar's new General Managed Hosting services include:

- General Hosting cPanel Small Plan provides 1 site/domain, 100GB cloud storage and 250GB bandwidth, among other features, such as DDoS protection.

- General Hosting cPanel Medium Plan provides 10 sites/domains, 125GB cloud storage and 500GB bandwidth, among other features, such as 50 email addresses.

- General Hosting cPanel Professional Plan provides unlimited sites/domains, 150GB cloud storage, unlimited bandwidth, and 24/7 monitoring.

- General Hosting cPanel Scale Plan provides unlimited sites/domains, 300GB cloud storage, unlimited bandwidth, 24/7 monitoring, and Cloudflare enabled.

This new range of managed hosting services will help customers build a high-performance, secure online store at a low cost. The Haar team of hosting specialists will pre-install everything, make sure every website is up to date, and be available 24/7 to answer any technical questions customers may have.

Haar Cloud is committed to delivering all the technology solutions businesses worldwide need, helping its customers achieve the best results using the right cloud technology, cyber security, and IT support services.

Customers can already access the new Managed Hosting range through the Haar Cloud client portal.

If you want to know more about Haar Cloud and Technology solutions, please visit http://www.hellohaar.com.

About Haar

Haar provides tailor-made cloud and IT infrastructure services, all delivered by accredited technologies and certified experts, with 24x7 support included. We're here to help you get the most from your technology with the best Cloud Infrastructure, Managed Hosting, Cyber Security and IT Consultancy solutions on the market. For more information, please visit http://www.hellohaar.com.

Ana Dumbravescu, Haar Cloud Ltd, +44 161 768 3149

We are Haar!

Follow this link:
Haar Cloud Ltd. announces the launch of a new range of managed hosting services - EIN News

The New Frontiers of Cybersecurity Exponential Increase in Complexity – Security Boulevard

Author: Itzik Kotler, CTO & Co-Founder, SafeBreach

The New Frontiers of Cybersecurity is a three-part thought-leadership series investigating the big-picture problems within the cybersecurity industry. In the first post, we explored the reasons malicious actors have been able to enhance their ability to execute and profit from attacks. In the second post, we discussed how the massive increase in endpoints and systems online has dramatically increased the attack surface. A different, but equally critical, dimension that we'll discuss in this third and final installment is that alongside this increase in attack surface comes a significant increase in complexity that is plaguing security teams.

The simple combinatorial mathematics of the sheer increase in endpoints not only means a greater number of systems to manage but also much more complex network architectures and webs of connections underlying IT and technology infrastructure and systems. The rise of cloud computing added a further layer of complexity for individuals trying to keep their applications and data secure. For example, an organization like Twitter that is composed of thousands of microservices will have a vastly more complex endpoint infrastructure than an enterprise that is guarding a handful of servers or even a few cloud instances.

Rather than linear complexity increases with each new node, we see exponential increases in complexity for every added node. Then there is the element of time. It is hard enough to guard and proactively protect an IT infrastructure that is growing quickly but steadily and constantly. It is entirely another issue to protect an IT infrastructure with a growing number of endpoints or systems attached to IP addresses that only exist for short periods and then morph into something else.
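To make the combinatorial point concrete, consider just one measure of complexity, the number of potential connections between endpoints: it grows as n(n-1)/2, so a tenfold increase in endpoints means roughly a hundredfold increase in links to reason about, and the space of possible interaction paths grows faster still. A small illustrative calculation (the node counts are arbitrary):

```python
# Illustration of why complexity outpaces node count: the number of potential
# connections between n endpoints grows as n*(n-1)/2, so each new node adds
# more links than the one before it.
def potential_links(n: int) -> int:
    return n * (n - 1) // 2

for n in (10, 100, 1_000, 10_000):
    print(f"{n:>6} endpoints -> {potential_links(n):>12,} potential connections")
# Ten times more endpoints yields roughly a hundred times more potential
# connections to reason about, before counting ephemeral containers at all.
```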

This combinatorial complexity is the new reality of Kubernetes and containers, serverless computing, and IPv6 (the newer IP numbering structure that enables billions more endpoints and systems to have their own unique IP addresses). In Kubernetes and containers, new endpoints with IP addresses may spin up and shut down every hour or even on a minute-by-minute basis. Unlike the billions of connected devices, which are far more limited in compute resources and subject to other restrictions, containers and serverless functions are general purpose and can be more easily adapted for almost any type of payload or attack.

So we are now in a world where anyone can provision hundreds or even thousands of general-purpose servers or lightweight computers with the push of a button. This means a lot more complexity to protect, but also that attackers can generate significantly more complex attacks. Remember, the nature of cloud computing is that it is open to everyone. This includes the Kubernetes engines offered by cloud providers, as well as more abstracted systems for scaling up and managing large fleets of containers, like Amazon's Fargate platform.

We already see signs of this new complexity. A scan by security researchers in mid-2022 pulled in over 900,000 exposed Kubernetes management endpoints. To be clear, these endpoints were not necessarily vulnerable or unprotected. But in security, exposed endpoints give attackers information they can use to craft more targeted attacks. Likewise, public compute clouds have unpatched security flaws that can allow rogue users to break out of a container and potentially access the management plane of the public cloud. This can then allow them to attack other tenants in the cloud, violating the core proposition of secure multi-tenancy.

In the legacy world of tightly controlled network perimeters and less secure internal networks, there was little need to harden endpoints not designed to be exposed to the world. In the datacenter era, a firewall on the edge of the center guarded against unauthorized probes and kept everything private. Even a misconfigured internal server was not accessible to the world. A firewall engineer had to explicitly change firewall rules to open that server to access from the Internet. Today, the opposite is true, with open-to-the-Internet being the default state and the burden falling on developers, DevOps teams, and security teams to set up firewalls, API gateways, and other protections to guard against probes and attacks.

Kubernetes can (and often does) expose endpoints as default behavior, providing a handy map to attackers. We are already seeing attackers exploit the complexity of containers and Kubernetes as a new attack vector, driven in part by the elimination or limitation of favorite older vectors such as macros.
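One practical starting point, sketched below purely as an illustration, is simply inventorying which Services a cluster exposes beyond its own boundary. This assumes the official kubernetes Python client and a kubeconfig with read access; it only lists NodePort and LoadBalancer Services, it does not prove anything is vulnerable.

```python
# Minimal sketch: list Kubernetes Services that are reachable from outside the
# cluster, using the official `kubernetes` Python client.
from kubernetes import client, config

config.load_kube_config()          # or config.load_incluster_config() when run inside a pod
v1 = client.CoreV1Api()

for svc in v1.list_service_for_all_namespaces().items:
    if svc.spec.type in ("NodePort", "LoadBalancer"):
        ports = ", ".join(str(p.port) for p in (svc.spec.ports or []))
        print(f"{svc.metadata.namespace}/{svc.metadata.name}: "
              f"{svc.spec.type} on ports {ports}")
```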

The big push behind the cloud and Kubernetes is to allow developer teams, DevOps, and IT to be more agile, flexible, and resilient. However, this is a paradigm shift with many implications that may be hard for IT and security teams to address. In the cloud, the default is public. In the legacy datacenter world, the default was private, and IT or security would need to grant access; in the public cloud, IT or security must instead step in to restrict it. The default premise of the cloud, going back to Jeff Bezos's policies at AWS, is to make services, APIs, storage, computing, and networking accessible to anyone with a credit card. In the cloud, therefore, the default for a service is exposed to the world. In the traditional datacenter and legacy networking world, a service must be configured to be exposed.

This paradigm shift injects a new layer of complexity into security and can lead to configuration mistakes, even for cloud-native companies. A developer may build a test application and load code onto it that communicates with other services out in the cloud or even opens an API to the public Internet. The developer may not realize that the cloud server the test application runs on shares the same namespace and security groups as other key production assets. That test server might also be left open by mistake for days, becoming a pivot or jump point for a malicious actor. Another point to consider is that in the past, storage was physically attached to networks and segregated from public access; to access the data it contained, you had to go through the server attached to it. Cloud computing broke that paradigm and allowed the easy storage of data in object stores and other online storage buckets. In the cloud, developers and even security teams often store data in public cloud storage buckets without properly configuring the buckets to secure access to them.
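A small, illustrative audit of that last failure mode might look like the following, assuming boto3 and credentials allowed to read bucket public-access settings. A missing Public Access Block does not prove a bucket is exposed, but it is exactly the kind of configuration drift worth flagging.

```python
# Sketch: flag S3 buckets with a missing or incomplete Public Access Block.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        cfg = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
        if not all(cfg.values()):
            print(f"{name}: public access block only partially enabled: {cfg}")
    except ClientError as err:
        if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
            print(f"{name}: no public access block configured")
        else:
            raise
```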

While physical data centers are somewhat obscured and blocked from public access or even scans, cloud service providers operate using well-known blocks of public IP addresses. This is true even down to individual services. For example, the IP blocks used by Amazon's S3 storage service are well documented and publicly shared on the Internet. Because malicious actors know the IP addresses, running continuous probes of those blocks in search of vulnerabilities is far less resource intensive and expensive. Attackers also know the default configurations of Kubernetes clusters and connecting APIs. They know the default security configurations of most server images deployed as part of the default public compute cloud catalogs, as well as which ports are protected and opened by default in commonly deployed public cloud Web Application Firewalls. The upshot of all this? We face opposing trends: operating and securing infrastructure is made more complicated by the shift to the cloud, while at the same time identifying attack targets is becoming simpler.
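That claim is easy to verify, since AWS publishes its address ranges at a well-known URL. The short sketch below (assuming the requests library) pulls the published list and counts the S3 prefixes, the same data an attacker could use to scope a scan.

```python
# Sketch: fetch AWS's published IP ranges and count the S3 prefixes.
import requests

ranges = requests.get("https://ip-ranges.amazonaws.com/ip-ranges.json", timeout=10).json()
s3_prefixes = [p["ip_prefix"] for p in ranges["prefixes"] if p["service"] == "S3"]
print(f"{len(s3_prefixes)} published S3 IPv4 prefixes, e.g. {s3_prefixes[:3]}")
```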

The days of firewalling the data center to guard infrastructure are long gone. Many organizations still maintain a global firewall in front of their infrastructure, but these firewalls are necessarily porous due to the growing number of APIs and services that must connect to the outside world. In the cloud, the initial approach was to create security groups: critical processes, services, and instances were placed inside more restrictive security groups, and access controls were applied on a per-group basis, associated with identity providers and authentication systems. Security groups are still necessary, but they are insufficient to handle the cloud infrastructure's complexity.
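As an illustration of that per-group model, the sketch below (assuming boto3; the VPC ID and CIDR are placeholders) creates a security group whose only ingress rule is HTTPS from an internal network, the kind of narrowly scoped default that is necessary but, as argued above, no longer sufficient on its own.

```python
# Sketch: a security group that admits only HTTPS from an internal CIDR.
import boto3

ec2 = boto3.client("ec2")

group_id = ec2.create_security_group(
    GroupName="internal-api-sg",
    Description="HTTPS from the internal network only",
    VpcId="vpc-0123456789abcdef0",          # placeholder VPC ID
)["GroupId"]

ec2.authorize_security_group_ingress(
    GroupId=group_id,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "10.0.0.0/16", "Description": "internal network"}],
    }],
)
print(f"created {group_id} with a single, narrowly scoped ingress rule")
```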

The answer is defense-in-depth. Security teams put in place more defense technologies, protecting data assets and applications in multiple ways. APIs are guarded by API gateways. Kubernetes clusters are guarded by specialized Web Application Firewalls and ingress controllers. SecDevOps teams mandate smaller, more lightweight firewalls in front of every public service or API. Application security teams require that SAST and SCA scans be run on any code iterations. Cloud providers add technology to ensure that cloud services, such as storage buckets, are properly secured. Endpoint detection and response is mandatory for all devices interacting with enterprise and cloud assets. Security is also placed in the content delivery network (CDN), extending web firewalls and denial-of-service (DoS) protection further away from core app servers to intercept attacks further upstream. These layered systems require proper configuration and management, a never-ending task.

Complexity increases the probability of mistakes. It also gives malicious actors opportunities to hide and attack; the high degree of complexity is precisely what hackers use and abuse to get their way. An enterprise may have multiple directories that maintain user permissions, and an admin may forget to update one of them. There may be five valid authentication methods, and the weakest of them is the one malicious actors will invariably choose to exploit. While 90% of development use cases and user requirements are satisfied with the standard catalog of infrastructure and devices, the remaining 10% of non-standard use cases will be the last to be updated and will likely present the best opportunities for exploits. Complexity creeps up on CISOs one exception at a time, one additional service or software or SaaS tool at a time.

So what can security teams do? According to recommendations from leading security agencies like the Cybersecurity and Infrastructure Security Agency (CISA), organizations must begin to invest in automated, continuous security validation to keep up. Rather than relying on an annual penetration test by a third party, organizations must continually evaluate their security control stack. This means performing adversary simulations to test that defensive controls are working correctly to detect, log, and stop attacks. Continuous testing also helps organizations identify temporary resources that may have been brought up and not protected correctly. Security teams should also make sure they do not limit themselves to external attack-surface validation only; any network can become an entry, exit, or pivot point for malicious actors.
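The following is not a breach-and-attack-simulation platform, only a minimal sketch of the continuous-validation idea: on a schedule, attempt a benign action that policy says must be blocked, here an outbound request to a placeholder host that egress filtering should deny, and raise an alert if it ever succeeds.

```python
# Sketch of continuous control validation: simulate a disallowed action and
# alert if the control no longer blocks it. The test URL is a placeholder.
import requests

BLOCKED_TEST_URL = "https://egress-test.invalid/"   # placeholder "disallowed" destination

def egress_control_holds(timeout: float = 5.0) -> bool:
    """Return True if the simulated exfiltration attempt was blocked."""
    try:
        requests.get(BLOCKED_TEST_URL, timeout=timeout)
    except requests.RequestException:
        return True      # request failed or was blocked: the control behaved as expected
    return False         # request succeeded: the control has drifted

# In practice this would run from a scheduler (cron, CI, an agent), not once a year.
if not egress_control_holds():
    print("ALERT: egress control did not block the simulated exfiltration attempt")
```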

Connect with a SafeBreach cybersecurity expert or request a demo of our advanced platform today to see what continuous security validation, powered by breach and attack simulation (BAS), can do for you.

Go here to see the original:
The New Frontiers of Cybersecurity Exponential Increase in Complexity - Security Boulevard

Forest Admin launches cloud version of its low-code internal tool builder – TechCrunch

Image Credits: Christoph Wagner / Getty Images

French startup Forest Admin is launching a cloud-based version of its product. The company helps you create flexible back-end admin panels for operations teams. Essentially, Forest Admin helps development teams spend less time on back office tools so they can focus on the actual product.

With the cloud version, companies just have to integrate the service with their own SQL database. After that, they can start using Forest Admin to manage their business.

"The onboarding is very similar to business intelligence tools," Forest Admin co-founder and CEO Sandro Munda told me. But BI tools mostly fetch data so that it can be transformed, analyzed and compiled into quarterly reports and reused in business planning meetings.

Forest Admin is all about interacting with your product's data. Companies can also integrate the admin panel with third-party services like Stripe, Mixpanel and Intercom. Forest Admin users can then trigger actions and create workflows with different levels of permission in the company.

Unlike other internal tool builders, such as Retool, Forest Admin is focused exclusively on admin panels. It isn't designed to be an all-in-one internal tool builder, because sophisticated services also tend to be complex.

For instance, a fintech company could use Forest Admin to review and validate documents and make sure they comply with KYC and AML (know your customer and anti-money laundering) regulations. Qonto is one of the startup's biggest customers, with 2,000 people using Forest Admin. An e-commerce company could also use Forest Admin to refund customers or reorder an item in case it's been lost.

In addition to centralizing all your data, a tool like Forest Admin also makes it easier to interact with your data. Companies can filter their user base and create segments, update and delete records and more.

Currently, Forest Admin customers install a component on their servers. This agent can read their data and make it accessible through an API. Forest Admin hosts the front-end interface on its own servers. When customers connect to their admin panels, Forest Admin fetches information from the component installed on their infrastructure.

The new cloud version greatly lowers the barrier to entry, as customers don't have to install Forest Admin's component on their servers. With the right firewall rules and tunneling software, the database should remain secure. "There's no data duplication, you make changes on your database directly," Munda said.
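The article does not describe Forest Admin's connection mechanism in detail, so the sketch below is only a generic illustration of the firewall-and-tunneling point: reaching a private PostgreSQL server through an SSH bastion using the sshtunnel package, with every hostname, user and credential a placeholder.

```python
# Generic sketch: keep the database private and reach it through an SSH bastion.
from sshtunnel import SSHTunnelForwarder
import psycopg2

with SSHTunnelForwarder(
    ("bastion.example.com", 22),                 # only the bastion is reachable from outside
    ssh_username="deploy",
    ssh_pkey="/path/to/id_ed25519",              # placeholder key path
    remote_bind_address=("db.internal", 5432),   # the database itself stays private
) as tunnel:
    conn = psycopg2.connect(
        host="127.0.0.1",
        port=tunnel.local_bind_port,
        dbname="app",
        user="readonly",
        password="...",                          # placeholder credential
    )
    with conn.cursor() as cur:
        cur.execute("SELECT count(*) FROM customers")
        print(cur.fetchone())
    conn.close()
```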

"Our goal is really to attract a new segment of customers," he added. Previously, customers needed to integrate Forest's agent into their own application, so a company using high-level cloud services exclusively couldn't use Forest Admin before the release of the cloud-based version.

Many operations-driven companies already use Forest Admin, such as fintech, marketplace, mobility and healthcare companies. "We are close to profitability but it isn't what we're aiming for right now," Munda said. And the cloud product should help when it comes to bringing in more revenue.

Read more:
Forest Admin launches cloud version of its low-code internal tool builder - TechCrunch

IBM Furthers Flexibility, Sustainability and Security within the Data … – IBM Newsroom

ARMONK, N.Y., April 4, 2023 /PRNewswire/ -- IBM (NYSE: IBM) today unveiled new single frame and rack mount configurations of IBM z16 and IBM LinuxONE 4, expanding their capabilities to a broader range of data center environments. Based on IBM's Telum processor, the new options are designed with sustainability in mind for highly efficient data centers, helping clients adapt to a digitized economy and ongoing global uncertainty.

Introduced in April 2022, the IBM z16 multi frame has helped transform industries with real-time AI inferencing at scale and quantum-safe cryptography. IBM LinuxONE Emperor 4, launched in September 2022, features capabilities that can reduce both energy consumption and data center floor space while delivering the scale, performance and security that clients need. The new single frame and rack mount configurations expand client infrastructure choices and help bring these benefits to data center environments where space, sustainability and standardization are paramount.


"IBM remains at the forefront of innovation to help clients weather storms generated by an ever-changing market," said Ross Mauri, General Manager, IBM zSystems and LinuxONE. "We're protecting clients' investments in existing infrastructure while helping them to innovate with AI and quantum-safe technologies. These new options let companies of all sizes seamlessly co-locate IBM z16 and LinuxONE Rockhopper 4 with distributed infrastructure, bringing exciting capabilities to those environments."

Designed for today's changing IT environment to enable new use cases

Organizations in every industry are balancing an increasing number of challenges to deliver integrated digital services. According to a recent IBM Transformation Index report, among those surveyed, security, managing complex environments and regulatory compliance were cited as challenges to integrating workloads in a hybrid cloud. These challenges can be compounded by more stringent environmental regulations and continuously rising costs.

"We have seen immense value from utilizing the IBM z16 platform in a hybrid cloud environment," said Bo Gebbie, president, Evolving Solutions. "Leveraging these very secure systems for high volume transactional workloads, combined with cloud-native technologies, has enabled greater levels of agility and cost optimization for both our clients' businesses and our own."

The new IBM z16 and LinuxONE 4 offerings are built for the modern data center to help optimize flexibility and sustainability, with capabilities for partition-level power monitoring and additional environmental metrics. For example, consolidating Linux workloads on an IBM LinuxONE Rockhopper 4 instead of running them on comparable x86 servers under similar conditions and in a similar location can reduce energy consumption by 75 percent and space by 67 percent.[1] These new configurations are engineered to deliver the same hallmark IBM security and transaction processing at scale.
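A back-of-the-envelope reading of the figures in footnote 1 below, purely to show how the comparison is constructed: the x86-side numbers (36 servers, roughly 0.6083 kWh per server-hour, PUE of 1.55) come from the footnote, while the LinuxONE-side power draw is not published there, so the last line only shows what the claimed 75 percent saving would imply.

```python
# Worked arithmetic using the x86-side assumptions stated in footnote 1.
x86_servers = 36
kwh_per_server_hour = 0.6083
pue = 1.55                      # extra facility power for cooling and overhead

x86_fleet_kwh_per_hour = x86_servers * kwh_per_server_hour * pue
implied_linuxone_kwh = x86_fleet_kwh_per_hour * (1 - 0.75)

print(f"x86 fleet, including PUE: {x86_fleet_kwh_per_hour:.1f} kWh per hour")
print(f"implied LinuxONE draw at a 75% saving: {implied_linuxone_kwh:.1f} kWh per hour")
```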

Designed and tested to the same internal qualifications as the IBM z16 high availability portfolio,[2] the new rack-optimized footprint is designed for use with client-owned, standard 19-inch racks and power distribution units. This new footprint opens opportunities to include systems in distributed environments with other servers, storage, SAN and switches in one rack, designed to optimize both co-location and latency for complex computing, such as training AI models.

Installing these configurations in the data center can help create a new class of use cases, including:

Securing data on the industry's most available systems[3]

For critical industries, like healthcare, financial services, government and insurance, a secure, available IT environment is key to delivering high quality service to customers. IBM z16 and LinuxONE 4 are engineered to provide the highest levels of reliability in the industry, with 99.99999% availability to support mission-critical workloads as part of a hybrid cloud strategy. These high availability levels help companies maintain consumer access to bank accounts, medical records and personal data. Emerging threats require protection, and the new configurations offer security capabilities that include confidential computing, centralized key management and quantum-safe cryptography to help thwart bad actors planning to "harvest now, decrypt later."

"IBM z16 and LinuxONE systems are known for security, resiliency and transaction processing at scale," said Matt Eastwood, SVP, WW Research, IDC. "Clients can now access the same security and resiliency standards in new environments with the single frame and rack mount configurations, giving them flexibility in the data center. Importantly, this also opens up more business opportunity for partners who will be able to reach an expanded audience by integrating IBM zSystems and LinuxONE capabilities to their existing footprints."

With the IBM Ecosystem of zSystems ISV partners, IBM is working to address compliance and cybersecurity. For clients that run data serving, core banking and digital assets workloads, an optimal compliance and security posture is key to protecting sensitive personal data and existing technology investments.

"High processing speed and artificial intelligence are key to moving organizations forward," said Adi Hazan, director ofAnalycat. "IBM zSystems and LinuxONE added the security and power that we needed to address new clients, use cases and business benefits. The native speed of our AI on this platform was amazing and we are excited to introduce the IBM LinuxONE offerings to our clients with large workloads to consolidate and achieve corporate sustainability goals."

IBM Business Partners can learn more about the skills required to install, deploy, service and resell single frame and rack mount configurations in this blog.

Complementary Technology Lifecycle Support Services

With the new IBM LinuxONE Rockhopper 4 servers, IBM will offer IBM LinuxONE Expert Care. IBM Expert Care integrates and prepackages hardware and software support services into a tiered support model, helping organizations to choose the right fit of services. This support for LinuxONE Rockhopper 4 will offer enhanced value to clients with predictable maintenance costs and reduced deployment and operating risk.

The new IBM z16 and LinuxONE 4 single frame and rack mount options, supported by LinuxONE Expert Care, will be generally available globally[4] from IBM and certified business partners beginning on May 17, 2023. To learn more:

About IBM

IBM is a leading global hybrid cloud, AI and business services provider, helping clients in more than 175 countries capitalize on insights from their data, streamline business processes, reduce costs and gain a competitive edge in their industries. Nearly 3,800 government and corporate entities in critical infrastructure areas such as financial services, telecommunications and healthcare rely on IBM's hybrid cloud platform and Red Hat OpenShift to effect their digital transformations quickly, efficiently and securely. IBM's breakthrough innovations in AI, quantum computing, industry-specific cloud solutions and business services deliver open and flexible options to our clients. All of this is backed by IBM's legendary commitment to trust, transparency, responsibility, inclusivity and service. For more information, visit www.ibm.com.

Media Contact: Ashley Peterson, ashley.peterson@ibm.com

1 DISCLAIMER: Compared IBM Machine Type 3932 Max 68 model consisting of a CPC drawer and an I/O drawer to support network and external storage with 68 IFLs and 7 TB of memory in 1 frame versus 36 compared x86 servers (2 Skylake Xeon Gold chips, 40 cores) with a total of 1,440 cores. IBM Machine Type 3932 Max 68 model power consumption was measured on systems and confirmed using the IBM Power estimator for the IBM Machine Type 3932 Max 68 model configuration. x86 power values were based on Feb. 2023 IDC QPI power values and reduced to 55% based on measurements of x86 servers by IBM and observed values in the field. The x86 server compared to uses approximately 0.6083 kWh, 55% of the IDC QPI system watts value. Savings assume the Worldwide Data Center Power Utilization Effectiveness (PUE) factor of 1.55 to calculate the additional power needed for cooling. PUE is based on the Uptime Institute 2022 Global Data Center Survey (https://uptimeinstitute.com/resources/research-and-reports/uptime-institute-global-data-center-survey-results-2022). x86 system space calculations require 3 racks. Results may vary based on client-specific usage and location.

2 DISCLAIMER: All the IBM z16 rack mount components are tested via the same process requirements as the IBM z16 traditional single frame components. Comprehensive testing includes a wide range of voltage, frequency and temperature testing.

3 Source: Information Technology Intelligence Consulting Corp. (ITIC). 2022. Global Server Hardware, Server OS Reliability Survey. https://www.ibm.com/downloads/cas/BGARGJRZ

4 Check local availability for rack mount here.

SOURCE IBM

Excerpt from:
IBM Furthers Flexibility, Sustainability and Security within the Data ... - IBM Newsroom