Category Archives: Cloud Servers
We Have a Pandemic-Driven Data Protection Gap To Close – CDOTrends
Before the pandemic, society was already inching towards complete digitalization, whether for online banking or something as simple as using an app for grocery shopping. Even then, digital transformation, and keeping data protected yet recoverable and accessible, was already a challenge.
Large organizations struggled in their digital transformation journeys, as many relied on legacy systems, which are costly to run and maintain. Decision-makers had to work within a budget to ensure that business goals, workforce issues, and innovation were all addressed.
However, the pandemic accelerated the need to digitize, forcing organizations to facilitate remote working options at an unimaginably fast pace. To respond, most organizations simply advanced the execution of their pre-planned IT modernization initiatives. While this enabled them to immediately accommodate their users, update legacy systems, and enhance functionality, in many cases it came at the long-term cost of ignoring growing risks around protecting critical data.
Data protection requires a holistic approach to addressing risks, such as the continued growth of cybersecurity threats and all potential outages, from human errors to system failures and natural disasters. Keeping data secure, backed up, readily available and recoverable is vital to a modern digital business that must operate in an always-on manner.
The challenge is only getting more demanding. According to Veeam's Data Protection Trends Report 2022, 84% of APJ organizations have a protection gap between how much data they can afford to lose after an outage and how frequently IT protects their data, while 86% have an availability gap between how quickly they need systems to be recoverable and how quickly IT can bring them back.
Containing business challenges
Business data has become more vulnerable to cyberattacks, forcing organizations to bolster data protection and security to overcome severe disruptions. IT leaders must take the initiative to plan and anticipate what to do when they get attacked rather than waiting for disaster to strike.
Continuing cyberthreats
Ransomware attacks are more frequent than ever. Among organizations in the APJ region, only 18% escaped ransomware attacks in 2022. Of those that were attacked, 18% experienced only one attack, 45% experienced two or three, and 19% suffered as many as four or more in 2022.
36% of organizations stated that ransomware (including prevention and remediation) was their most significant hindrance to digital transformation or IT modernization initiatives due to its burden on budgets and workforce.
Human error and education
While cyberthreats can put a massive strain on a business's productivity and ability to restore data quickly, there is a common, often overlooked security threat: unintentional human error. Despite significant education efforts, almost half of global and Asia Pacific businesses reported accidental deletion, overwrite of data, or data corruption as a primary cause of IT outages. Data loss due to human error is an unavoidable fact. Thus, all organizations must be on guard and educate their employees on mitigating these events.
Managing hybrid infrastructure complexity
With cloud computing evolving rapidly, the need to protect cloud workloads and maintain compliance has grown. Hybrid IT continues to be the norm, with a relatively even balance between servers within the data center and cloud-hosted servers. Within the data center, there is a good mix of both physical and virtual servers. This year, organizations in the APJ region reported 29% physical servers within data centers, 25% virtual machines within data centers, and 46% cloud-hosted server instances. As a result, 37% of organizations in the APJ region stated that being able to standardize their protection capabilities across their data center, IaaS, and SaaS workloads is a crucial driver in their 2023 strategy.
Road to Success
Business leaders should consider the following points to ensure that their organizations are set up to succeed:
Prepare your Team
Human error accounts for a significant portion of data breaches. Reducing such errors should not be a reactive exercise. Instead, proactive measures should be fully adopted so that an organization can recover its mission-critical applications in a timely manner. However, to begin recovery at this level, teams within the company must be prepared to take the necessary steps. A report by Forrester Consulting found that in APAC, 53% of businesses agree that their managers do not stress the importance of good security practices and training. Whether it's part of a holistic IT strategy or separate, organizations should be educating all staff on safe practices when online. This can significantly reduce the risks of data loss caused by ransomware or other attacks.
Prepare your Plan
To prepare a business for disaster recovery, the ability to anticipate what a zero-day attack looks like and the next steps needed at that moment is vital. Getting services and employees back online as soon as possible is another important aspect that should be prioritized. To achieve this, businesses must have a robust, well-defined plan, enabling them to choose the best course of action to counter the possibility of disasters and minimize any resulting downtime. Businesses must not only have a plan but also put it to the test before a disaster strikes.
Test your Network
Weak, misconfigured, or inadequately maintained networks are an excellent entry point for malicious actors. Investing in network security is a great way to ensure you can mitigate these threats. Penetration testing is a must when figuring out the weaknesses in your network and is often best done by a neutral third party. Sometimes we can be blinded to faults when we're used to seeing the same networks and systems.
With technology reaching new heights, systems and networks are becoming more complex. All businesses have had to contend with resolving the immediate challenges of the pandemic. Still, with cyber-attacks, hardware failures, network issues, and more creating increased complexity, business continuity must be at the top of any organization's list of IT concerns. Today, a BC/DR plan's objectives should include speed, accessibility, and remote availability. While the plan isn't something you must update often, it needs to be a solid, well-fleshed-out plan. Because when disaster strikes, all you may have to rely on is your recovery plan and your employees.
Joseph Chan, vice president for Hong Kong, Macau & Taiwan at Veeam, wrote this article.
The views and opinions expressed in this article are those of the author and do not necessarily reflect those of CDOTrends.
What will hybrid working mean 10 years from now? – TechHQ
Hybrid working, meaning a mix of working remotely and working in the office, was (surprisingly, with hindsight) a significant novelty in the years before Covid-19.
It was a work model that was seen as neither one thing nor the other, and that didn't usually fit well with strong, decisive business decision-making. Mixing remote and office-based work, where it was allowed at all, usually had to come attached to a strong and specific reason.
It was also often seen as a perk that came with a lowering of salary, to compensate companies for a worker's absence from the office (where the work would normally be office-based), and to account for the lower overheads of largely remote workers on things like travel, daily food, and so on.
Perhaps the two biggest reasons why hybrid work was not a widespread thing before Covid, though, were: 1) no one was especially certain that the technologies required to facilitate it (such as video conferencing, remote collaboration tools and the like) could handle the transfer from being useful in a pinch to being essential all the time, and 2) there was no particular economic incentive for the whole business world to test out whether they could or not.
In the words of the old adage, if it ain't broke, don't fix it. Almost nobody in business regarded the system of near-universal in-office working as "broke", so there was no incentive to see whether it could be "fixed".
Then the pandemic hit, the world of business rocked on its axis as real-world locations largely shut down, and suddenly there was a real economic incentive to innovate or die.
Many businesses were forced by government mandate to either go entirely remote or go out of business, with the understanding that the situation was temporary, and that normal service (meaning in-office work) would be resumed as soon as the world recovered from its viral apocalypse, assuming it ever did.
Remote working was the epitome of the flexible approach to work. If you legally couldn't go into work, companies had no option but to offer a new work model, and to invest where necessary in the appropriate technology to make the new normal work.
Several things became apparent in those days. First, the pandemic had hit a world of work which, as far as offices were concerned, had the technological tools to continue from a distance.
Second, those tools would only get more and more refined as time went on, spurring a vast growth in the likes of cloud servers to allow remote workers to access company systems from anywhere.
And third, a combination of fear, loss, and tested technology would bring about a shift in the work-life balance paradigm, meaning hybrid working, when it took over from purely remote working, would not be shifted from the public consciousness as a perfectly normal way to get the job done.
Attempts have been made to reinstate full in-office working, but, as we've seen in the case of Twitter under Elon Musk, staff these days have plenty of options, and will be open to the idea of moving on from even premium companies if the issue of hybrid working and a return to inflexibility is forced.
But already, three years into the era of hybrid working, and mercifully past at least the initial fatal/virulent phase of Covid-19, what hybrid working means has radically shifted.
When it began, hybrid working meant business as usual, but geographically varied. The workday would still be 9-5 (or whatever variant of that pattern the company worked). Communication would be largely asynchronous (via email, for instance).
Trips into the office would largely mean a day spent doing the same work as was done at home. Use of collaboration tools and cloud-based company programs was relatively new, and a comparatively steep learning curve.
And, certainly at the start, hybrid workers were invasively monitored by some companies, to ensure they delivered their work to the in-office standards day after day while they adjusted to the new routines and disciplines of hybrid working.
Now, hybrid working, with its mixing of home work, remote work, and in-office work, has shifted from being a new and sometimes uncomfortable discipline to a genuine new normal.
Companies are finding the freedom that comes with being able to hire from a much wider geographical talent pool, and so are frequently offering roles as hybrid working from day one.
By this point, most office workers will have done at least one role on a hybrid basis, so the discipline it requires has been inculcated into them. That also means most businesses with any ethical bones will have largely abandoned invasive monitoring.
The standard 9-5 working pattern is beginning to dissipate, as hybrid workers do what needs to be done, both to meet (and exceed) the company's requirements, and to enrich their own life experience with a healthier work-life balance.
The vast improvement of cloud services and video conferencing technology means everything from collaboration to communication with team members in the hybrid mix is already light years beyond what it was in the very early days of the first lockdowns.
Hybrid working as a work model is not going away.
But where could it be going in terms of its evolution over the next decade?
Firstly, it seems likely that more and more businesses will reduce their physical footprint, because real estate is a big business expense that will grow less and less necessary the more hybrid working becomes normalized.
Secondly, the dissipation of geographically-centralized teams or businesses is likely to continue. Hybrid work makes it unnecessary to recruit only within a certain radius surrounding a physical HQ, and that's likely to result in more geographical dissipation the better and faster connective technologies become.
The traditional 9-5 work pattern, if it's not quite dead yet, will be dead long before 2033. That's likely, partly due to that geographical broadening of teams and businesses, and partly as a result of the enhanced work-life balance.
Allowing people to work more when they are at their most productive, be that 5am or 11pm, means you can actually get more productivity out of what is technically a smaller number of active hours, so there's even significant credence given to the idea that hybrid working will lead to a 4-day week, with no loss of productivity.
Such centralized office space as there is will likely be radically redesigned; the cubicle will largely be a dead concept within the next decade, because there's no need to partition space for staff if they're working remotely most of the time. Space will likely be reallocated into larger, more collaborative areas.
That will likely follow another trend: the specialness of time spent actually in the office space. Rather than an in-office day being business as usual, firstly, such days are likely to be fewer and less regular than they have been up to now, and secondly, when they happen, they're likely to be focused on collaborative, team-based work or discussions.
And as technology gets faster, and connectivity to online company assets and programs becomes a greater norm, the scope of what most roles will entail is likely to expand too, in directions that allow for greater personal and professional training and growth.
An offshoot of that, already surfacing in Gen Z staff entering the hybrid working marketplace, will be significantly more opportunities for both upskilling and mental health support, delivered as a corporate cultural norm via either self-learning courses or remote sessions with trainers and counsellors located around the country, or even around the world.
It's even conceivable that a workplace culture more defined by a work-life balance skewed towards the home will open up significantly more than it did pre-pandemic. That would allow more women to return to work after pregnancy on a hybrid basis, for instance.
It would also help significantly broader ranges of staff (younger, older, people with physical disabilities, neurodivergent people, people across the spectrum of gender expression, and so on) to make ongoing contributions which a strictly in-office culture denied them the opportunity to make in the years before the first lockdowns turned hybrid working into a new normality.
Hybrid working is here to stay. It's an economic reality that signals a shift in the work-life balance post-Covid, and it's already had profound effects on the way businesses work.
All the evidence suggests that a decade from now, those effects will only be more profound, radically changing the normality of working life into something that would have been inconceivable in the pre-pandemic years.
Service NSW seeks containerised application hosting platform – Cloud – CRN Australia
Ahead of the incoming Labor administration, Service NSW said it is looking for a containerised application hosting platform.
"Service NSW requires a stable, rapid delivery, resilient, application-hosting container-based platform that allows product teams high levels of control and self-management of infrastructure," the government agency said.
"Therefore, a market assessment is required, followed by procurement of container platform service from a vendor that can satisfy business requirements," it added.
The DICT/17971 request for tender is for pre-qualified advanced suppliers, who are approved for contracts valued over $150,000 or for high-risk engagements.
Such vendors must meet capability requirements under Service NSW's SCM0020 prequalification scheme, sub-category R02.
This specifies services to assist agencies' provisioning of platform and utility services through public, private, and community clouds, allowing for the development, operation, and management of applications.
An as-a-service model solution is set out, which includes provision of servers, storage, networks, appliances, telecommunications, ancillaries and peripherals, and the hosting of the equipment and operating systems.
What Wasm Needs to Reach the Edge – The New Stack
Write once, run anywhere. This mantra continues to hold true for the promise of WebAssembly (WASM), but the keyword is "promise" since we are not there yet, especially for edge applications, or at least not completely. Of course, strides have been made as far as WebAssembly's ability to accommodate different languages beyond JavaScript and Rust, as vendors begin to support languages such as TypeScript, Python, or C#.
As of today, WASM is very much present in the browser. It is also rapidly being used for backend server applications. And yet, much work needs to be done as far as getting to the stage where applications can reach the edge. The developer probably does not care that much; they just want their applications to run well and securely wherever they are accessed, wondering not so much why edge is not ready yet as when it will be.
Indeed, the developer might want to design one app, deployed through a WebAssembly module, that will be distributed across a wide variety of edge devices. Unlike years past, when designing an application for a particular device could require a significant amount of time to reinvent the wheel for each device type, one of the beautiful things about WASM, once standardization is in place, is that a developer could create a voice-transcription application that can run not only on a smartphone or PC but in a minuscule edge device that can be hidden in a secret agent's clothing during a mission. In other words, the application is deployed anywhere and everywhere across different edge environments simultaneously and seamlessly.
During the WASM I/O conference held in Barcelona, a few of the talks discussed successes for reaching the edge and other things that need to be accomplished before that will happen, namely, having standardized components in place for edge devices.
Edge is one of those buzzwords that can be misused or even misunderstood. For telcos, it might mean servers or different phone devices. For industrial users, it might include IoT devices, applicable to any industry or consumer use case that requires connected devices with CPUs.
An organization might want to deploy WASM modules through a Kubernetes cluster to deploy and manage applications on edge devices. Such a WASM use case was the subject of the conference talk and demo "Connecting to devices at the edge with minimal footprint using AKS Edge Essentials and Akri on WASM," given by Francisco Cabrera Lieutier, technical program manager at Microsoft, and virtually by Yu Jin Kim, product manager at Microsoft's Edge and Platforms.
Lieutier and Kim showed how a WASM module was used to deploy and manage camera devices through a Kubernetes environment. This was accomplished with AKS Edge Essentials and Akri. One of the main benefits of WASM's low power requirements was being able to remotely manage a camera device that, like other edge devices such as thermometers or other sensor types, would lack the CPU power to run Kubernetes, which would otherwise be a requirement without WASM.
"How can we coordinate and manage these devices from the cluster?" Kim said. The solution used in the demo is Akri, a Kubernetes interface that makes connections to the IoT devices with WASM, Kim explained.
However, while different edge devices can be connected and managed with WASM via AKS Edge Essentials and Akri, the edge device network is not yet compatible with, say, an edge network running under an AWS cluster from the cloud or distributed directly from an on-premises environment.
Again, the issue is interoperability. "We know that WebAssembly already works. It does what you need to do, and the feature set of WASM has already been proven in production, both in the browser and on the server," Ralph Squillace, a principal program manager for Microsoft, Azure Core Upstream, told The New Stack on the conference sidelines.
"The thing that's missing is we don't have interoperability, which we call portability: the ability to take the same module and deploy it after rebuilding on a different cloud. But you need a common interface, common runtime experience and specialization. That's what the component model provides for interoperability."
Not that progress is not being made, so hopefully the interoperability issue will be solved and a standardized component model will be adopted for edge devices in the near future. As it stands now, WASI has emerged as the best candidate for extending the reach of Wasm beyond the browser. Described as a modular system interface for WebAssembly, it is proving apt to help solve the complexities of running Wasm runtimes anywhere there is a properly configured CPU, which has been one of the main selling points of WebAssembly since its creation. With standardization, the WASI layers should eventually be able to run all different Wasm modules as components on any and all edge devices with a CPU.
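To make the idea of running Wasm outside the browser more concrete, here is a minimal sketch (not from the article) using the wasmtime Python bindings to execute a module compiled for the wasm32-wasi target; the file name app.wasm is a placeholder.

```python
# Minimal sketch: running a WASI-targeted Wasm module outside the browser
# using the wasmtime Python bindings (pip install wasmtime).
# Assumes "app.wasm" was compiled for the wasm32-wasi target; the file name
# is a placeholder, not something taken from the article.
from wasmtime import Engine, Store, Module, Linker, WasiConfig

engine = Engine()
store = Store(engine)

# Give the module a WASI environment; inherit stdout so its prints are visible.
wasi = WasiConfig()
wasi.inherit_stdout()
store.set_wasi(wasi)

# Link the WASI imports and instantiate the module.
linker = Linker(engine)
linker.define_wasi()
module = Module.from_file(engine, "app.wasm")
instance = linker.instantiate(store, module)

# WASI "command" modules expose a _start entry point, analogous to main().
instance.exports(store)["_start"](store)
```

The same module, unchanged, could run under any other WASI-capable runtime, which is exactly the kind of portability the component model work aims to standardize.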
During the talk "wasi-cloud: The Future of Cloud Computing with WebAssembly," Bailey Hayes, director of the Bytecode Alliance Technical Standards Committee and a director at Cosmonic, and Dan Chiarlone (virtually), an open source software engineer on Microsoft's WASM Container Upstream team, showed in a working demo how wasi-cloud offers standardized interfaces for running Wasm code on the cloud.
"Our answer to the question of how do you write one application that you can run anywhere across clouds is with wasi-cloud," Hayes said. "And you can imagine that using standard APIs, one application is runnable anywhere or on any architecture, cloud or platform."
Power To The Engineering People – The Next Platform
SPONSORED: The value of great engineering is often overlooked, yet almost every object we use on a daily basis has been meticulously designed and tested by somebody somewhere to deliver the best possible performance and meet exacting cost and efficiency requirements.
Those processes have become considerably more sophisticated with the evolution of finite element analysis (FEA), which now plays a critical role in the computational simulation of physical components using mathematical techniques. FEA has become a staple feature of the modeling software that engineers from multiple verticals use to optimize their designs by running virtual experiments which help to reduce the number of physical prototypes they then have to build.
FEA is utilized by pretty much any company that does any sort of engineering, which includes everybody from aircraft or rocket manufacturers to healthcare companies that make stents used in coronary arteries. It is geared toward structural analysis, which is why FEA is one of the key tools for computer-aided engineering (CAE) applications. The simulation process involves generating a mesh that maps a 3D drawing of the overall shape of an object using a series of mathematical points that form millions of small elements, which are then analyzed with a structural physics solver.
With that said, you can imagine FEA needs a lot of compute power to solve structural physics equations over those millions of computational elements. A single large engineering simulation job can run on 1,000 CPU cores for hours, with a design process for a single product or component involving hundreds of individual simulations and thousands of jobs. FEA also has some specific requirements for computers depending on the type of problem to be solved. There are dozens and dozens of different types of problems and applications, and each of them has different needs for memory and throughput, for example.
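As a toy illustration of what those elements and solvers involve (an example of ours, not from the article), the sketch below assembles and solves a one-dimensional elastic bar with a handful of finite elements using numpy; production FEA codes perform the same assemble-and-solve pattern over millions of 3D elements.

```python
# Toy illustration of the FEA idea described above: a 1D elastic bar fixed at
# one end and pulled at the other, split into a handful of elements.
# Production solvers do the same assembly and solve over millions of 3D elements.
import numpy as np

E, A, L = 200e9, 1e-4, 1.0       # Young's modulus (Pa), cross-section (m^2), length (m)
n_elem = 10                      # number of finite elements
n_nodes = n_elem + 1
le = L / n_elem                  # element length
k = (E * A / le) * np.array([[1.0, -1.0], [-1.0, 1.0]])  # 2x2 element stiffness

# Assemble the global stiffness matrix from the element matrices.
K = np.zeros((n_nodes, n_nodes))
for e in range(n_elem):
    K[e:e + 2, e:e + 2] += k

# Load vector: 1 kN axial pull applied at the free end.
f = np.zeros(n_nodes)
f[-1] = 1e3

# Boundary condition: node 0 is fixed, so solve only for the free nodes.
u = np.zeros(n_nodes)
u[1:] = np.linalg.solve(K[1:, 1:], f[1:])

print("tip displacement:", u[-1], "m")   # analytical answer: F*L/(E*A) = 5e-5 m
```

Scale that up to millions of elements in three dimensions, and the core counts and run times quoted above quickly follow.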
Just getting access to that sort of compute resource can be a significant challenge for companies involved in FEA simulation.
For many, it makes more sense to use the flexibility and agility offered by the cloud. Cloud hosted HPC infrastructure often provides far more in the way of flexible billing, ease of access, scale and redundancy than any alternative HPC cluster that could be owned and operated in-house, thus enabling engineering firms to concentrate on doing what they do best, which is building products.
AWS has a proven history of delivering HPC infrastructure services for customers as diverse as Boeing, Volkswagen Group, Formula 1 and Western Digital. In recognition of the fact that engineers want to run ever more complex FEA workloads on cloud-hosted HPC clusters more quickly and cost efficiently, the company is also investing in scaling up the power and speed of the HPC-optimized Amazon EC2 instances it offers using the latest processors from Intel.
AWS announced general availability of Amazon Elastic Compute Cloud (Amazon EC2) Hpc6id instances at its annual re:Invent conference last November. These instances are optimized to efficiently run memory bandwidth-bound, data-intensive HPC workloads, and specifically FEA models, powered by 3rd Gen Intel Xeon Scalable processors. That raw CPU muscle is supplemented by the AWS Nitro System and Elastic Fabric Adapter (EFA) network interconnect which delivers 200 Gbit/sec of inter-node throughput between different instances, so that customers can instantly scale their HPC resources to handle even the most demanding of FEA workloads.
EC2 Hpc6id instances have 15.2 TB of local NVM-Express storage for data-intensive workloads. These instances also have very fast network interconnection bandwidth, meaning multiple computers can be put together to rapidly solve large problems. As such, EC2 Hpc6id instances deliver double the compute speed of the previous instance in the AWS line, says the company.
The performance boost added by 3rd Gen Intel Xeon Scalable processors, coupled with ample local storage, the AWS Nitro System, and EFA, means that customers can run their FEA simulations on a smaller number of instances. That in turn has a knock-on effect in terms of faster job completion and reduced infrastructure and licensing costs. FEA workloads also require the ability to read and write data very quickly. To meet that fast I/O requirement, AWS has attached dedicated NVMe local storage to EC2 Hpc6id instances, which eliminates latency associated with using networked storage components.
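As a rough sketch of how such a cluster might be requested programmatically (an illustration, not an AWS-endorsed recipe from the article), the boto3 snippet below launches two Hpc6id instances in a cluster placement group with an EFA-enabled network interface; the AMI, key pair, subnet, and security group IDs are placeholders.

```python
# Hedged sketch: requesting a pair of EC2 Hpc6id instances with boto3 in a
# cluster placement group so EFA traffic stays low-latency. The AMI ID, key
# pair, subnet and security group IDs below are placeholders, not values from
# the article.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-2")

ec2.create_placement_group(GroupName="fea-cluster", Strategy="cluster")

resp = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",              # placeholder HPC-ready AMI
    InstanceType="hpc6id.32xlarge",
    MinCount=2,
    MaxCount=2,
    KeyName="my-keypair",                          # placeholder key pair
    Placement={"GroupName": "fea-cluster"},
    NetworkInterfaces=[{
        "DeviceIndex": 0,
        "SubnetId": "subnet-0123456789abcdef0",    # placeholder subnet
        "Groups": ["sg-0123456789abcdef0"],        # placeholder security group
        "InterfaceType": "efa",                    # attach an Elastic Fabric Adapter
    }],
)
print([i["InstanceId"] for i in resp["Instances"]])
```

In practice, many customers would provision through a scheduler or cluster-management tooling rather than raw API calls, but the instance type, placement group, and EFA interface are the relevant settings.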
This is the sort of infrastructure that enables companies to solve particularly intense FEA problems such as those used for linear static analysis or vibration analysis for example. Think about safety simulations on the shell of an aircraft, where the intensity of vibration on the wing inevitably creates stresses and strains which can constrain the number of passengers and load that it can safely carry.
Those in the automotive industry too make extensive use of engineering simulation to design everything from the car chassis, engine and thermal cooling systems to the battery, electric motor and electronic sensors, as well as overall aerodynamic profile. One prominent example of FEA usage is car crash simulation. Hundreds of thousands of virtual car crash tests have been conducted with FEA simulations over the past couple of decades, and the learnings have led to safer car designs that may save lives.
High-tech companies involved in the manufacturing of CPUs, memory, batteries, antennae and other electronics components also widely use FEA. And EC2 Hpc6id instances bring new metrics to the table in terms of accessibility and cost for businesses in this sector that may not previously have had access to compute resources with this power and scale.
Irrespective of the precise CPU architecture those HPC workloads utilize, the very fact that they are hosted in the cloud delivers many advantages for enterprises compared to using their own on-prem HPC clusters for the same job. That's a realization reflected in market trends, with more organizations shifting their workloads to the cloud.
Research firm Hyperion has forecast that the HPC cloud market will grow twice as quickly as its on-prem equivalent (a 17.6 percent as opposed to a 6.9 percent CAGR, respectively) as companies in various industry verticals shift their workloads into externally hosted infrastructure, led by the manufacturing, financial services and government sectors. The company notes that small organizations, including workgroups and department segments within larger companies, are particularly keen to adopt cloud resources for their HPC jobs to better align their procurement with the budgets, timescales and skillsets that dictate their operational schedules.
The shift to a pay-as-you-go rather than a fixed cost model associated with infrastructure owned and operated in-house brings its own rewards in terms of flexible billing, and economies of scale leading to lower overall total cost of ownership in many cases. And associated productivity improvements usually follow from having more scalable compute resources available to larger numbers of concurrent users.
Customers might have a limited number of on-prem servers for example, but if they have a lot of users submitting jobs, there can be long queues for using those resources. In the cloud though, all of those users can run their jobs simultaneously simply because they have access to a vast pool of HPC-optimized instances from AWS at the same time. That gives them a lot of flexibility when it comes to optimizing CPU, memory, storage and networking utilization rates which can vary significantly at different stages of the engineering development and testing process, or during specific times of the year when companies see peaks in seasonal demand.
Even a single user can run fluctuating volumes of FEA workloads at different points in time, particularly when they are simulating multiple jobs to support the testing and release schedule of a particular product. So, if they need burst access to a very high quantity of resources and those resources are not available on prem due to fixed capacity, the job could stall. On AWS, by contrast, if demand goes up, the readily available resources can be scaled up in parallel.
The world has some tough problems to solve over the next decade, and the ability to streamline crash testing and design more energy efficient structures and components can go some way to addressing them. Clever engineers using FEA simulation are undoubtedly up to the task, but they might need the backing of instantly available, powerful compute resources like EC2 Hpc6id instances to help them complete it.
Sponsored by AWS and Intel.
What is Portainer and can it help the average computer user? – ZDNet
The average computer user doesn't fall into the neophyte classification as easily as it once did. Nearly everyone carries with them a very powerful computer, right in their pockets. Senior citizens, children, and everyone in between use a computer on a daily basis and have reached a point of comfort that would have been impossible 10 years ago.
Now, I find people are doing things they would never have previously thought of and it's exciting. I've had readers reach out to me to say things like, "I installed Linux for the first time and never thought I could!"
My mother-in-law does things with her Chromebook I never thought she'd be able to do. In fact, once upon a time, I would receive pretty regular calls asking how to do X or Y. Now? She's solving problems on her own and making Chrome OS do everything she needs.
Such evolution had me thinking: If those users are able to solve such problems on their own, why couldn't they take that a few steps further and start benefitting from the technology they would have previously called "too difficult"?
Case in point, Portainer. What is Portainer? Before we answer that, we must answer another question: What are containers?
In the realm of technology, containers are bundled applications and services that contain everything they need to run and can be run on any supporting platform. Most often, containers are used by businesses to run applications that can automatically scale to meet demand.
But containers don't have to be limited to businesses. Every home now has its own network. On that network are computers and devices. You might have Windows, MacOS, and Chromebook computers attached to your network (along with smart TVs, thermostats, phones, tablets, security devices, and much more). In fact, your network is teeming with devices, all of which give you considerable power and flexibility.
At the moment, you're probably only using a fraction of the available power and usability offered by those devices. Case in point: containers.
Imagine, if you will, that you could deploy a complete cloud service to your network, as I've demonstrated in "How to install a cloud service at home." You might be asking yourself, "Why would I, my wife, my kids, or my mother-in-law need something like this?"
Imagine you or your in-laws have a need to save and share files and there are people in the family who aren't so willing to trust the likes of Google, Apple, or Microsoft. Should that be the case, you might want to deploy a cloud service to your home network that everyone could use but the outside world couldn't access. Or maybe you have kids in school and you want them to have their own cloud service without having to worry they'll be using a third-party platform (so you have better control over things). Or maybe you want to deploy a productivity platform (such as ONLY OFFICE) that is only accessible via your family.
Take my home network, for example. I have access to a cloud service, an office suite, an invoice tool (and more) that only my wife and I can access. It's convenient, secure, and reliable.
Of course, some might read this and say, "I don't want to have to type a bunch of commands to install software on my network."
But what if I told you that you didn't have to? There's a piece of software that makes deploying containers easy enough that almost anyone can do it. That software is Portainer.
Now, before you get too excited, the installation of Portainer isn't just a matter of downloading an installer and running it. You have to first install Docker (which can be installed on Linux, MacOS, and Windows) and then install Portainer. The good news for MacOS and Windows users is that installing Docker Desktop (which installs Docker itself) can be done by simply downloading and running an installer file.
Do note, however, that if your MacOS device uses Apple Silicon, you'll want to install Rosetta first, which can typically be done from the Terminal with the command softwareupdate --install-rosetta.
Once Docker Desktop is installed, you can then download either the Portainer .dmg file (for MacOS) or the .exe file for Windows.
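For readers running a Linux box or home server rather than a desktop OS, Portainer CE can also be started as a container itself. The following sketch uses the Docker SDK for Python and follows Portainer's published defaults (image name and the 9443 HTTPS UI port), but check the current Portainer documentation before relying on it.

```python
# Hedged sketch: starting Portainer CE as a container via the Docker SDK for
# Python (pip install docker), as an alternative to the desktop installers.
# The image name and 9443 HTTPS UI port follow Portainer's published defaults.
import docker

client = docker.from_env()

# Named volume so Portainer's settings survive container restarts.
client.volumes.create("portainer_data")

client.containers.run(
    "portainer/portainer-ce:latest",
    name="portainer",
    detach=True,
    restart_policy={"Name": "always"},
    ports={"9443/tcp": 9443},                  # web UI at https://localhost:9443
    volumes={
        "/var/run/docker.sock": {"bind": "/var/run/docker.sock", "mode": "rw"},
        "portainer_data": {"bind": "/data", "mode": "rw"},
    },
)
```

Once the container is running, the Portainer web UI is reachable from a browser on the same network, which is where the App Templates and forms described below come in.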
After you've installed Portainer, the excitement begins. With the help of App Templates, you can install the likes of WordPress and other applications (without having to first install web or database servers).
Installing apps from Templates is the easiest method.
Or, by working with the easy-to-use Forms, you could deploy countless applications (such as the Nextcloud cloud server) with just a few clicks. Sure, there will be a slight learning curve involved but it's really no more challenging than getting a printer up and running on your home network.
Installing an app using the Portainer forms system.
I've been using Portainer for some time and it's made deploying the tools I need to get things done exponentially easier than installing them the old-fashioned way. And although it might be unfamiliar territory at first, once you get the hang of it, the sky's the limit to what you can do on your home network.
You do not have to be constrained by the old ways of using a computer on your private network. With just a bit of effort upfront, you can expand your understanding and usage of technology in ways you never thought possible.
And you don't need a degree in computer science to do it. On top of which, if you try it out and decide it's too challenging, the only thing you've lost is a bit of time (as you can use the community editions of both Portainer and Docker Desktop at home for free).
So, what are you waiting for? Expand your knowledge and the tools you have available to your home network with the ease of Portainer.
Radware wins two golds for application security in 2023 … – iTWire
COMPANY NEWS: Radware, a leading provider of cyber security and application delivery solutions, today announced that it is the winner of two 2023 Cybersecurity Excellence Awards.
The company's API Discovery and Protection solution received gold honours in the API Security category and the Radware SecurePath architecture won gold in the Web Application Security category.
The 2023 Cybersecurity Excellence Awards honour individuals and companies that demonstrate excellence, innovation, and leadership in information security. The 2023 awards program, which is produced by Cybersecurity Insiders and the Information Security Community on LinkedIn, included more than 800 entries.
"We are proud to be recognised among the industry's top innovators," said Radware chief marketing officer Sharon Trachtman. "At a time when the cyber security talent shortage is at an all-time high, our application security solutions offer consistent, high-grade protection across complex hybrid environments while helping companies reduce overhead and administration.
"It's a combination that's core to the value proposition we deliver for our customers every day."
Radware's API Discovery capabilities enable security teams to automatically identify and secure undocumented APIs without relying on human intervention or application and security expertise.
Using advanced machine-learning algorithms, Radware's API protection works in real time to detect and block a broad range of threats. This includes defence against access violations, data leakage, automated bot-based threats, and DDoS and embedded attacks.
Radware's Cloud Application Protection Services uniquely leverage Radware SecurePath architecture to safeguard today's multi-cloud application environments, while maintaining consistent, comprehensive protection for applications regardless of where they're deployed.
Radware's application security architecture can be deployed either as an "inline" or API-based out-of-path SaaS service, enabling coverage of any data centre and cloud platform with minimal latency, interruptions, and risks to uptime and availability.
The 2023 Cybersecurity Excellence Awards add to Radware's other industry recognitions. Industry analysts such as Aite-Novarica Group, Forrester Research, Gartner, GigaOm, KuppingerCole and Quadrant Knowledge Solutions continue to recognise Radware as a market leader in cyber security. The company has received numerous awards for its application and API protection, web application firewall, bot management, and DDoS mitigation solutions.
About Radware
Radware is a global leader of cyber security and application delivery solutions for physical, cloud, and software defined data centres. Its award-winning solutions portfolio secures the digital experience by providing infrastructure, application, and corporate IT protection, and availability services to enterprises globally. Radware's solutions empower enterprise and carrier customers worldwide to adapt to market challenges quickly, maintain business continuity, and achieve maximum productivity while keeping costs down. For more information, please visit the Radware website.
Serverless Security Market to Hit USD 15.69 Billion by 2030 due to … – GlobeNewswire
Pune, March 27, 2023 (GLOBE NEWSWIRE) -- SNS Insider reported that the Serverless Security Market was valued at USD 1.79 billion in 2022, and it is projected to reach USD 15.69 billion by 2030, with a compound annual growth rate (CAGR) of 31.12% during the forecast period of 2023 to 2030.
Market Overview
With serverless computing, your code and data are stored on third-party servers. Therefore, it is critical to ensure that these servers are secure and that your data is protected. Data breaches and other security incidents can damage an organization's reputation. By implementing serverless security measures, organizations can demonstrate that they take their security responsibilities seriously, which can help build trust with customers and partners.
Market Analysis
The key driver for the serverless security market is the rapid adoption of serverless computing technology by organizations of all sizes. Serverless computing offers several benefits, including scalability, cost efficiency, and reduced operational overhead. As a result, it has become a popular choice for developing and deploying cloud-native applications. Another key driver for the market is the increasing number of regulations and compliance standards that organizations must adhere to. These regulations often require strict security measures to be implemented, and serverless security solutions can help organizations meet these requirements.
Key Company Profiles Listed in this Report Are:
Impact of Recession on Serverless Security Market Growth
A recession is likely to have both positive and negative impacts on the serverless security market. While some businesses may delay or cancel serverless security projects, the overall demand for serverless architecture is likely to remain strong. The key for vendors will be to offer cost-effective solutions that meet the needs of businesses looking to cut costs without sacrificing base level margin.
Serverless Security Market Report Scope:
Key Regional Developments
North America is expected to dominate the serverless security market during the forecast period. The region's significant market share can be attributed to the presence of major IT businesses, including cloud service providers and technology giants, which are driving the adoption of serverless security solutions. The United States and Canada, two of the largest economies in North America, have a strong financial position and a thriving technology industry, which enable them to invest heavily in leading services of the market. This has led to the development of innovative serverless security solutions that cater to the specific needs of North American businesses.
Key Takeaway from Serverless Security Market Study
Recent Developments Related to Serverless Security Market
Table of Contents
1. Introduction
2. Research Methodology
3. Market Dynamics
4. Impact Analysis
5. Value Chain Analysis
6. Porter's 5 Forces Model
7. PEST Analysis
8. Serverless Security Market Segmentation, By Service Model
9. Serverless Security Market Segmentation, By Security Type
10. Serverless Security Market Segmentation, By Organization Size
11. Serverless Security Market Segmentation, By Verticals
12. Regional Analysis
13. Company Profiles
14. Competitive Landscape
15. Conclusion
About Us:
SNS Insider is one of the leading market research and consulting agencies that dominate the market research industry globally. Our company's aim is to give clients the knowledge they require in order to function in changing circumstances. In order to give you current, accurate market data, consumer insights, and opinions so that you can make decisions with confidence, we employ a variety of techniques, including surveys, video talks, and focus groups around the world.
Access Complete Report Details@ https://www.snsinsider.com/reports/serverless-security-market-2578
Why IT operations are evolving in a multi-cloud environment – IT Brief Australia
As the usage of cloud platforms and resources has increased in recent years, the approach being taken by organisations to their IT operations (ITOps) has needed to evolve.
Techniques and strategies that worked when IT resources were housed in on-premise datacentres are not suited to a world where workloads are often spread across multiple cloud platforms. In many cases, the focus now needs to be on how to streamline and automate management tasks.
The role of ITOps
By definition, ITOps is a discipline that covers the actions and decisions made by an organisation's operations team, which is responsible for its IT infrastructure. It refers to the process of acquiring, designing, deploying, configuring, and maintaining equipment and services that support business functions.
The main goal of ITOps is to provide a high-performing and consistent IT environment. To achieve this, different members of the team are given responsibility for specific tasks.
For example, a system administrator will be charged with configuring servers, installing applications, and monitoring the overall health of the infrastructure. At the same time, a network administrator will deploy and manage network links, create and authorise user profiles, and monitor secure access.
Some ITOps team members will focus their attention on the management of physical servers and associated equipment, while others will be responsible for cloud computing platforms and services.
Having a well-resourced ITOps team in place is critical for any organisation, especially as growing numbers of workloads are being shifted to cloud platforms. This is because complexity can quickly increase, especially when multiple platforms are linked to legacy on-premise resources.
Delivering business value
When functioning properly, ITOps can help an organisation to achieve its business goals, improve product delivery, and plan for future growth. Its value can be assessed by considering four key factors:
1. Usability: Assess how well the ITOps team is managing the performance of key applications and ensuring optimal user experiences at all times.
2. Functionality: Examine whether all core systems are functioning as they should across the organisation.
3. Reliability: Monitor the number of system failures that occur and the time that is required to restore normal function.
4. Performance: Maintain oversight of the overall performance of the entire IT infrastructure and any changes in configuration that may be required.
Differentiating between ITOps, DevOps, and DevSecOps
It's worth taking the time to understand the key differences between the three groups that exist within many organisations: ITOps, DevOps, and DevSecOps.
ITOps is responsible for all IT operations, including the end users, while DevOps is focused on agile integration and delivery practices and improving workflows. DevOps works in conjunction with IT; however, it does not have the same broad visibility of the entire technology stack as is the case for ITOps.
Many organisations are also integrating application security into their DevOps teams and renaming them DevSecOps. The goal of this shift is to improve application security during development and in runtime while also promoting greater security awareness overall.
Further evolution
Within some organisations, there are even further changes being made to the structure and function of IT departments. As workloads increasingly shift to public, private, and hybrid cloud environments, CloudOps teams are being created to help IT and DevOps manage the resulting increased complexity.
In other cases, AIOps (Artificial Intelligence for IT Operations) teams are also being created. Their task is to combine big data, AI algorithms, and machine learning to deliver actionable, real-time insights that help the ITOps team to continuously improve operations.
These teams can use data collected from both virtual and non-virtual environments in multiple feeds. This data can then be normalised, structured, and aggregated to produce alerts.
AIOps teams can also apply AI and machine learning to identify normal behavioural patterns and topologies within data, correlate relationships, and detect anomalies. Teams can also use automation tools to continuously gather high-fidelity data in context without manual configuration or scripting.
While ITOps relies on manual correlations and dashboards for analysis, AIOps uses AI and machine learning for automatic analysis and insights. AIOps also provides contextual understanding of anomalies and event correlation in relation to an organisation's IT infrastructure, systems, and applications.
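As a greatly simplified illustration of that kind of automated analysis (our sketch, not a representation of any particular AIOps product), the snippet below flags metric samples that drift well outside a rolling baseline:

```python
# Greatly simplified illustration of the anomaly detection described above:
# flag metric samples that sit far outside a rolling baseline. Real AIOps
# platforms use far richer models, but the principle is the same.
import numpy as np

def rolling_zscore_anomalies(values, window=30, threshold=3.0):
    """Return indices of samples more than `threshold` standard deviations
    away from the mean of the preceding `window` samples."""
    values = np.asarray(values, dtype=float)
    anomalies = []
    for i in range(window, len(values)):
        baseline = values[i - window:i]
        mu, sigma = baseline.mean(), baseline.std()
        if sigma > 0 and abs(values[i] - mu) > threshold * sigma:
            anomalies.append(i)
    return anomalies

# Example: steady CPU utilisation with one injected spike.
cpu = np.random.normal(40, 2, 200)
cpu[150] = 95                           # simulated incident
print(rolling_zscore_anomalies(cpu))    # expected to include index 150
```

Real AIOps platforms layer richer models, topology awareness, and event correlation on top of this basic idea.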
Having an understanding of how these different groups both function and work together is important to ensure the proper operation of any large IT department. They demonstrate that, as the nature of IT infrastructures evolves, the traditional lines of demarcation are also changing.
Critical flaw in AI testing framework MLflow can lead to server and data compromise – CSO Online
MLflow, an open-source framework that's used by many organizations to manage their machine-learning tests and record results, received a patch for a critical vulnerability that could allow attackers to extract sensitive information, such as SSH keys and AWS credentials, from servers. The attacks can be executed remotely without authentication because MLflow doesn't implement authentication by default, and an increasing number of MLflow deployments are directly exposed to the internet.
"Basically, every organization that uses this tool is at risk of losing their AI models, having an internal server compromised, and having their AWS account compromised," Dan McInerney, a senior security engineer with cybersecurity startup Protect AI, told CSO. "It's pretty brutal."
McInerney found the vulnerability and reported it to the MLflow project privately. It was fixed in version 2.2.1 of the framework that was released three weeks ago, but the release notes don't mention any security fix.
MLflow is written in Python and is designed to automate machine-learning workflows. It has multiple components that allow users to deploy models from various ML libraries; manage their lifecycle including model versioning, stage transitions and annotations; track experiments to record and compare parameters and results; and even package ML code in a reproducible form to share with other data scientists. MLflow can be controlled through a REST API and command-line interface.
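As a small example of the experiment-tracking workflow described above, the snippet below uses MLflow's standard Python tracking API; the experiment name, parameters, metrics, and artifact file are arbitrary illustrations, not values from the article.

```python
# Small example of the experiment-tracking workflow described above, using
# MLflow's standard Python tracking API. Parameter and metric names here are
# arbitrary illustrations.
import mlflow

# Point the client at a tracking server; exposing such a server publicly
# without authentication is exactly the risk discussed in this article.
mlflow.set_tracking_uri("http://localhost:5000")
mlflow.set_experiment("demo-experiment")

with mlflow.start_run():
    mlflow.log_param("learning_rate", 0.01)
    mlflow.log_param("n_estimators", 100)
    mlflow.log_metric("val_accuracy", 0.93)
    mlflow.log_artifact("model_card.txt")   # attach any local file (placeholder path)
```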
All these capabilities make the framework a valuable tool for any organization experimenting with machine learning. Scans using the Shodan search engine reinforce this, showing a steady increase of publicly exposed MLflow instances over the past two years, with the current count sitting at over 800. However, it's safe to assume that many more MLflow deployments exist inside internal networks and could be reachable by attackers who gain access to those networks.
"We reached out to our contacts at various Fortune 500's [and] they've all confirmed they're using MLflow internally for their AI engineering workflow,' McInerney tells CSO.
The vulnerability found by McInerney is tracked as CVE-2023-1177 and is rated 10 (critical) on the CVSS scale. He describes it as local and remote file inclusion (LFI/RFI) via the API, where a remote and unauthenticated attackers can send specifically crafted requests to the API endpoint that would force MLflow to expose the contents of any readable files on the server.
For example, the attacker can include JSON as part of the request where they modify the source parameter to be whatever file they want on the server and the application will return it. One such file can be the ssh keys, which are usually stored in the .ssh directory inside the local user's home directory. However, knowing the user's home directory in advance is not a prerequisite for the exploit because the attacker can first read /etc/passwd file, which is available on every Linux system and which lists all the available users and their home directories. None of the other parameters sent as part of the malicious request need to exist and can be arbitrary.
What makes the vulnerability worse is that most organizations configure their MLflow instances to use Amazon AWS S3 for storing their models and other sensitive data. According to Protect AI's review of the configuration of the publicly available MLflow instances, seven out of ten used AWS S3. This means that attackers can set the source parameter in their JSON request to be the s3:// URL of the bucket used by the instance to steal models remotely.
It also means that AWS credentials are likely stored locally on the MLflow server so the framework can access S3 buckets, and these credentials are typically stored in a folder called ~/.aws/credentials under the user's home directory. Exposure of AWS credentials can be a serious breach because depending on the IAM policy, it can give attackers lateral movement capabilities into an organization's AWS infrastructure.
Requiring authentication for accessing the API endpoint would prevent exploitation of this flaw, but MLflow does not implement any authentication mechanism. Basic authentication with a static username and password can be added by deploying a proxy server like nginx in front of the MLflow server and forcing authentication through that. Unfortunately, almost none of the publicly exposed instances use such a setup.
"I can hardly call this a safe deployment of the tool, but at the very least, the safest deployment of MLflow as it stands currently is to keep it on an internal network, in a network segment that is partitioned away from all users except those who need to use it, and put behind an nginx proxy with basic authentication," McInerney says. "This still doesn't prevent any user with access to the server from downloading other users' models and artifacts, but at the very least it limits the exposure. Exposing it on a public internet facing server assumes that absolutely nothing stored on the server or remote artifact store server contains sensitive data."