Category Archives: Cloud Servers

iOS 15.2 Makes it Easier to Replace the Screen on the iPhone 13 – iDrop News

It appears that iOS 15.2 packs in one more small but significant change that should improve the lives of do-it-yourselfers and independent repair shops.

Last month, well-known DIY repair site iFixit blew the whistle on an unfriendly new feature in the iPhone 13 lineup that would have made it far more difficult for small third-party repair providers to swap out a broken iPhone display.

The problem, iFixit pointed out, was that replacing the screen on any iPhone 13 model would break Face ID unless very specific and extremely complex steps were taken to also move a very small microchip over from the old display, and delicately microsolder it into the new one.

Since this is well beyond the skill set of most DIYers, and even many small repair shops, it basically threatened to block these kinds of repairs entirely.

To be clear, this wasn't just a problem with non-genuine displays, either. Even swapping displays between two identical, brand-new iPhone 13 models would result in Face ID being disabled on both of them.

Like the Touch ID sensor on older iPhone models, Apple pairs, or "serializes," the TrueDepth camera system with each specific iPhone to protect against potential tampering that could allow hackers to bypass the normal security protocols. However, it made no sense that the display should be serialized in this manner, since it's not connected to any of the components used by Face ID.

Even so, the display used in the iPhone 13 includes a small chip about the size of a Tic-Tac, and this is uniquely linked to the specific iPhone 13 device that it was originally installed on. Move that screen to another device, and the new iPhone will fail to recognize it, declaring it non-genuine and disabling Face ID in the process.

This isn't a problem for authorized Apple repair shops, as they have access to special tools that allow them to sync up the iPhone with a new display via Apple's cloud servers. Of course, these tools aren't available to independent repair shops unless they're willing to sign up for Apple's Independent Repair Provider (IRP) program.

However, many smaller shops consider the terms of that program far too onerous, since Apple requires them to submit to random inspections to look for prohibited repair parts, and customers have to sign special waivers acknowledging that they're not getting real Apple repairs.

Not long after iFixit broke the news, Apple promised that a fix would be coming in a future iOS update, and it looks like that's arrived with iOS 15.2.

While Apple made no mention of it in the iOS 15.2 release notes, iFixit has confirmed that the latest version fixes the "Face ID Repair Trap" on the iPhone 13. It's also upgraded its Repairability Score for the iPhone 13 to 6 out of 10, bringing it back in line with other recent iPhone models.

After iOS 15.2 landed, iFixit conducted a full parts-swap test, grabbing two iPhone 13 Pro Max devices and moving over not just the display, but also the battery and the camera system.

In doing so, iFixit discovered that even though Face ID will no longer be disabled when swapping in a new screen, Apple still provides the usual series of "Important" warnings telling users that they're not using genuine Apple parts.

To be fair, this shouldn't come as a big surprise, since Apple has been doing this with batteries for a few years now, and began flashing up the same warnings for the screen and camera with the iPhone 11 and iPhone 12, respectively.

iFixit also points out that there's an interesting discrepancy between the messages, however. Apple uses the phrase "Unable to determine" for the display and camera, and points the user to the Settings app for more information.

By comparison, the battery warning says "Unable to verify," and omits the sentence telling the user to "Go to Settings for more information," although it still includes a Settings button that takes the user to the battery health section of the Settings app.

It's probably not entirely a coincidence that iOS 15.2 also introduces a new "Parts and Service History" section in the Settings app, giving you a summary of which parts have been replaced and whether they're genuine.

This section will only appear if you've had anything replaced on your iPhone; it doesn't show up if your device still has all of its original parts. It also only shows the status of parts that would normally generate a warning on each given model if they weren't genuine. For instance, Apple only started serializing the camera with last year's iPhone 12 models, so iOS 15.2 won't be able to tell you if an iPhone 11 or older model is using a non-genuine camera.
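
The per-model rules described above amount to a simple lookup. Here is a minimal sketch of that logic; the mapping is inferred from the article (batteries for several generations, displays from the iPhone 11 onward, cameras from the iPhone 12 onward), and the iPhone XS entry is an assumption about pre-iPhone 11 models, not anything from an actual Apple API:

```python
# Illustrative sketch only: which replaced parts would trigger a
# genuineness warning (and thus appear in Parts and Service History)
# on each model, per the article's timeline.

SERIALIZED_PARTS = {
    "iPhone XS": {"battery"},                        # assumption
    "iPhone 11": {"battery", "display"},
    "iPhone 12": {"battery", "display", "camera"},
    "iPhone 13": {"battery", "display", "camera"},
}

def warns_for(model: str, part: str) -> bool:
    """Return True if a non-genuine `part` would surface a warning
    on `model`, per the serialization timeline sketched above."""
    return part in SERIALIZED_PARTS.get(model, set())
```

Under this sketch, `warns_for("iPhone 11", "camera")` is `False`, matching the article's point that iOS 15.2 cannot report a non-genuine camera on an iPhone 11 or older model.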

4-Year-Old Bug in Azure App Service Exposed Hundreds of Source Code Repositories – The Hacker News

A security flaw has been unearthed in Microsoft's Azure App Service that resulted in the exposure of source code of customer applications written in Java, Node, PHP, Python, and Ruby for at least four years since September 2017.

The vulnerability, codenamed "NotLegit," was reported to the tech giant by Wiz researchers on October 7, 2021, following which mitigations have been undertaken to fix the information disclosure bug in November. Microsoft said a "limited subset of customers" are at risk, adding "Customers who deployed code to App Service Linux via Local Git after files were already created in the application were the only impacted customers."

The Azure App Service (aka Azure Web Apps) is a cloud computing-based platform for building and hosting web applications. It allows users to deploy source code and artifacts to the service using a local Git repository, or via repositories hosted on GitHub and Bitbucket.

The insecure default behavior occurs when the Local Git method is used to deploy to Azure App Service, resulting in a scenario where the Git repository is created within a publicly accessible directory (home/site/wwwroot).

While Microsoft does add a "web.config" file to the .git folder (which contains the state and history of the repository) to restrict public access, such configuration files are only honored by C# or ASP.NET applications that rely on Microsoft's own IIS web servers, leaving out apps coded in other programming languages like PHP, Ruby, Python, or Node that are deployed with different web servers like Apache, Nginx, and Flask.
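
To illustrate the gap, here is a minimal sketch of the exposure logic (my own illustration, not code from Wiz or Microsoft): the web.config deny rule is a feature of IIS alone, so any other web server happily serves the public .git folder to whoever asks.

```python
# NotLegit in miniature: the .git folder sits inside the public
# wwwroot, and only IIS enforces the web.config access rule that
# Microsoft added. Other web servers ignore the file entirely.

WEB_CONFIG_AWARE = {"iis"}  # servers that honor web.config rules

def git_folder_exposed(web_server: str) -> bool:
    """True if fetching /.git/... from the deployed app would
    succeed, i.e. the server ignores web.config."""
    return web_server.lower() not in WEB_CONFIG_AWARE
```

With this sketch, `git_folder_exposed("nginx")` is `True` while `git_folder_exposed("IIS")` is `False`, mirroring the article's split between affected and unaffected stacks.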

"Basically, all a malicious actor had to do was to fetch the '/.git' directory from the target application, and retrieve its source code," Wiz researcher Shir Tamari said. "Malicious actors are continuously scanning the internet for exposed Git folders from which they can collect secrets and intellectual property. Besides the possibility that the source contains secrets like passwords and access tokens, leaked source code is often used for further sophisticated attacks."

"Finding vulnerabilities in software is much easier when the source code is available," Tamari added.

Top 5 Best Free Linux Cloud Servers [2020]

If you want to test your web application or service, you need a Linux server. Thanks to the advancement of cloud computing, deploying a preconfigured Linux server has become child's play.

Moreover, many cloud server providers also offer free credits to try their platform. You can take advantage of these offers to deploy Linux servers and test your web application or service.

This not only helps reduce costs; you also get the opportunity to figure out whether a certain platform suits your needs and skills.

You should keep in mind that though some cloud servers offer hefty credits, they might have time restrictions.

Please note that some links in this article are affiliate links.

Linux Handbook is an official partner of Linode. The Linux Handbook website is hosted on Linode. We also use Linode servers for testing and validating the tutorials we cover here.

You can deploy Linux servers of your choice (Ubuntu, Debian, Fedora, SUSE, Arch, Slackware, etc.) within minutes and with a few clicks.

Not only that, with the Linode Marketplace, you can deploy Linux servers preconfigured with a web service like WordPress, WireGuard VPN, Discourse, and more.

Want more? You can also deploy load balancers, object storage, and Kubernetes clusters, among other DevOps-focused tools.

You can also configure regular automatic backups for your servers.

Linode offers $60 free credit to Linux Handbook readers. Credits last for 60 days.

You can sign up for Linode to get $60 free credits here.

Digital Ocean is another good platform where you can get a free cloud Linux server.

Like Linode, Digital Ocean is also developer-focused. This means you can deploy bare Linux servers or ones preconfigured with a web service of your choice.

Kubernetes clusters, databases, load balancers, object storage, automatic backups and everything else you saw with Linode are also available in Digital Ocean.

Everything is click and deploy which makes your work much easier.

I use Digital Ocean to host a Discourse forum for It's FOSS readers.

New Digital Ocean users get $100 in free credits, and the credits last for 60 days. You can sign up for Digital Ocean here.

Vultr is another cloud server provider similar to Linode and Digital Ocean.

I use Vultr occasionally for deploying test servers for testing Linux tutorials.

They have micro-nodes with 10 GB SSD storage and 512 MB RAM for just $2.50 a month (or $0.004/hr). This is ideal for me when I want to keep costs down and don't need a high-configuration Linux server.

You can deploy a Linux server of your choice, and you can also use their One Click Apps to deploy preconfigured servers.

Vultr offers $100 free credits to try out their platform and the credits are valid for 30 days. You can sign up for free Linux cloud server with Vultr here.

My other website, It's FOSS, is hosted on UpCloud.

Unlike Linode and Digital Ocean, UpCloud doesn't have a marketplace that lets you deploy preconfigured web services on a Linux server.

However, they do have APIs available to easily integrate your app with UpCloud infrastructure.

You can deploy Linux servers of your choice within minutes and the Linux servers offered by UpCloud have superb performance thanks to their MaxIOPS block storage.

Automatic server backups are available to give you peace of mind.

You can get free Linux cloud servers on UpCloud with a credit line of $25. They are strict with free credits and free trials.

So far, all the entries in this list of free cloud Linux servers are from mid-sized players.

Bigger cloud players like Microsoft, Amazon, Alibaba and Google also offer free credits.

These big platforms might be overwhelming, and personally, I am averse to using corporate giants. I prefer to support smaller players, provided they have a good product and service.

Anyway, Google offers $300 credits to try out its Google Cloud Platform (GCP). The credits last for a year.

You see the difference here? Other, smaller players are restricted to two months with hardly $100 in free credits, while a giant like Google, with deep pockets, can afford such a hefty offer to hurt its competitors.

I shared my experience with cloud server providers here. I hope the free credits allow you to test some of these platforms.

What's your choice of cloud service? Do you know some other reliable cloud server providers that offer free credits? Why not share it with the rest of us in the comment section?

AWS outages and cloud computing, explained – Popular Science

In the first two weeks of this month, Amazon Web Services (AWS) hit some bumps that caused two outages: a bigger, more widespread one on December 7, and a smaller, more localized one on December 15. Both catalyzed disruptions across a range of websites and online applications, including Google, Slack, Disney Plus, Amazon, Venmo, Tinder, iRobot, Coinbase, and The Washington Post. These services all rely on AWS to provide cloud computing for them; in fact, AWS is the leading cloud computing provider, ahead of other big players like Microsoft Azure, Google, IBM, and Alibaba.

To understand why the impact was so big, and what steps companies can take to prevent disruptions like these in the future, it makes sense to take a step back and look at what cloud computing is, and what it's good for.

Whenever you connect to anything over the internet, your computer is essentially just talking to another computer. A server is a type of computer that can process requests and deliver data to other computers in the same network or over the internet.

But running your own server isn't cheap. You have to buy the hardware box, install it somewhere, and feed it a lot of power. In many cases, it needs internet connectivity too. Then, to ensure that data is received and sent with minimal delays, these servers need to be physically close to their users.

Additionally, you have to install software that needs to be updated regularly. And you have to build fail-safe mechanisms that will switch over operations to another server if a main server malfunctions.

[Related: Facebook has an explanation for its massive Monday outage]

"The thing that companies like Amazon noticed is that a lot of [computing infrastructure] is not really specific to the service you're running," says Justine Sherry, an assistant professor at Carnegie Mellon University.

For example, the code running Netflix does something different compared to the code running a service like Venmo. The Netflix code is serving videos to users, and the Venmo code is facilitating financial transactions. But underneath, most of the computing work is actually the same.

This is where cloud providers come in. They usually have hundreds to thousands of servers all over the country with good bandwidth. They offer to take care of the tedious tasks like security, day-to-day management of the data center operations, and scaling services when needed.

"Then you can focus on your [specialized] code. Just write the part that makes the video work, or the part that makes the financial transactions work. It's easier, it's cheaper because Amazon is doing this for lots and lots of customers," Sherry explains. "But there are also downsides, which is that everyone in the world is relying on the same couple of Costco-sized warehouses full of computers. There are dozens of them across the US. But when one of them goes down, it's catastrophic."

What caused the AWS outages appeared to be related to errors with the automated systems handling the data flow behind the scenes.

AWS explained in a post that the December 7 error was due to a problem with "an automated activity to scale capacity of one of the AWS services hosted in the main AWS network," which resulted in "a large surge of connection activity that overwhelmed the networking devices between the internal network and the main AWS network, resulting in delays for communication between these networks."

[Related: A Look Inside the Data Centers of The Cloud]

This autoscaling capability allows the whole system to adjust the number of servers it's using based on the number of users on the network. "The idea there is if I have 100 users at 7 am, and then at noon, everyone is on lunch break Amazon shopping and now I have 1,000 users, I need 10 times as many computers to interact with all those clients," explains Sherry. These frameworks automatically look at how much demand there is and can dedicate more servers to doing what's needed when it's needed.
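
Sherry's example reduces to proportional scaling: round current demand up to a whole number of servers. Here is a toy sketch of that idea; the 100-users-per-server figure comes from her example, not from any real AWS parameter:

```python
import math

def servers_needed(active_users: int, users_per_server: int = 100) -> int:
    """Proportional autoscaling sketch: dedicate enough servers to
    cover current demand, rounding up so the last partial batch of
    users still gets a machine (and keeping at least one server)."""
    return max(1, math.ceil(active_users / users_per_server))
```

With these numbers, 100 users at 7 a.m. need 1 server and 1,000 users at lunchtime need 10, the tenfold jump from the quote; real autoscalers layer smoothing, cooldowns, and capacity limits on top of this core calculation.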

Later, on December 15, a status update issued by AWS said that the outage was caused by traffic engineering incorrectly moving more traffic than expected to parts of the AWS backbone, which affected connectivity to a subset of internet destinations.

Big data centers have lots of internet connections through different internet service providers. They get to choose where online traffic gets routed, whether it's over one cable through AT&T, or another through Sprint.

Their automatic traffic engineering decides to reroute traffic based on a number of conditions. "Most providers are going to reroute traffic mostly based on load. They want to make sure things are relatively balanced," Sherry says. "It sounds like that auto-adaptation failed on the 15th, and they wound up routing too much traffic over one connection. You can literally think of it like a pipe that has had too much water and the water is coming out the seams. That data ends up getting dropped and disappears."
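
The overfilled-pipe analogy maps directly onto a link with fixed capacity: whatever exceeds it is dropped. A toy model of a single backbone link (the Gbps numbers below are illustrative, not AWS figures):

```python
def route_over_link(traffic_gbps: float, capacity_gbps: float) -> tuple:
    """Return (carried, dropped) for one backbone link. When traffic
    engineering shifts more load onto the link than it can carry,
    the excess is simply lost, as in the December 15 outage."""
    carried = min(traffic_gbps, capacity_gbps)
    dropped = traffic_gbps - carried
    return carried, dropped
```

Sending 150 Gbps over a 100 Gbps link carries 100 and drops 50; a balanced traffic engineering system exists precisely to keep every link on the left-hand side of that threshold.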

Despite some prevalent outages over the past few years, Sherry argues that AWS is quite good at managing its infrastructure. Inherently, it's very difficult to design perfect algorithms that can anticipate every problem, and bugs are an annoying but regular part of software development. The only thing that's unique about the cloud situation is the impact.

[Related: Amazons venture into the bizarre world of quantum computing has a new home base]

A growing number of independent companies are turning to third-party centralized services like AWS for cloud infrastructure, storage, and more.

"If I pay Amazon to run a data center for me, store my files, and serve my clients, they're going to do a better job than I can do as a university administrator or as an administrator to a small company," says Sherry. "But from a societal perspective, when all of these small individual actors decide to outsource to the cloud, we wind up with one really big centralized dependency."

During the time AWS went out, Sherry could not control her television. Normally, she uses her phone as a remote control. But the phone does not directly talk to the TV. Instead, both the phone and the TV talk to a server in the cloud, and that server orchestrates the in-between. The cloud is essential for some functions, like downloading automatic software updates. But for scrolling through cable offerings available from an antenna or satellite, "there's no reason that needs to happen," she says. "We're in the same room, we're on the same wireless network, all I'm trying to do is change the channel." In short, the cloud can offer convenient tech solutions in some instances, but not all.

[Related: This Is Why Microsoft Is Putting Data Servers In The Ocean]

One account of a marooned technology that struck her as an unnecessarily roundabout design was a timed cat feeder that had to go through the cloud. "Automated cat feeders have been around a long time before the cloud. They're basically paired to an alarm clock. But for some reason, someone decided that rather than building the alarm clock part into the cat feeder, they were going to put the alarm clock in the cloud, and have the cat feeder go over the internet and ask the cloud, is it time to feed the cat?" Sherry says. "There's no reason that needed to be put into the cloud."

Moving forward, she thinks that application developers should review every feature that's intended for the cloud and ask if it can work without the cloud, or at least have an offline mode that's not as completely debilitating during an internet, data center, or even power outage.

"There are other things that are probably not going to work. You're probably not going to be able to log in to your online banking if you can't get to the bank server," says Sherry. "But so many of the things that failed are things that really should not have failed."

phoenixNAP and MemVerge to Enable Memory Virtualization in Bare Metal Cloud – HPCwire

PHOENIX, Ariz., Dec. 20, 2021 — phoenixNAP, a global IT services provider offering security-focused cloud infrastructure, dedicated servers, colocation, and specialized Infrastructure-as-a-Service (IaaS) technology solutions, today announced a collaboration with MemVerge, a pioneer of Big Memory Computing and Big Memory Cloud technology. The two companies are working together to enable simplified deployments of MemVerge Memory Machine on phoenixNAP's Bare Metal Cloud and provide a robust infrastructure solution for Big Memory workloads.

With big data volumes growing at an unprecedented pace, the demand for memory-optimized compute is accelerating. Organizations increasingly face the challenge of deploying efficient memory resources to support real-time analytics and long-running data workloads. DRAM scaling requires significant investment, while server configurations are often limited in their capacity to support such deployments.

As the industry's first software to virtualize memory hardware, MemVerge Memory Machine offers an alternative way to deploy and scale memory technology. With a single Memory Machine virtualization layer, organizations can power their applications with persistent memory that provides DRAM-like capabilities and performance at an affordable price point. In addition, the software takes application snapshots in DRAM and persistent memory, enabling higher availability and mobility.

Deployed on phoenixNAP's Bare Metal Cloud platform, MemVerge Memory Machine will provide a comprehensive infrastructure solution for Big Memory data processing. Bare Metal Cloud relies on powerful hardware to provide advanced configurations for Big Memory data processing, ensuring consistent performance with cloud-like flexibility. As an automation-driven platform, Bare Metal Cloud can be deployed in minutes and managed easily using its API, CLI, and Infrastructure as Code integrations.

"Through our collaboration with MemVerge, we are able to address an emerging need for memory-optimized server solutions," said Ian McClarty, President of phoenixNAP.

"MemVerge Memory Machine is taking an innovative approach to memory technology, providing a simplified solution to efficient memory scaling and management. Coupled with the performance capabilities of our Bare Metal Cloud, their software provides optimized compute for Big Memory workloads. Organizations handling data-hungry applications that need to be processed and analyzed fast can leverage this platform to streamline their projects and applications, while simplifying infrastructure management tasks."

"Hardware limitations are a common challenge for advanced memory deployments, and this is the issue that phoenixNAP's Bare Metal Cloud successfully addresses," said Jonathan Jiang, COO at MemVerge. "It offers dozens of configurations that provide a robust foundation for MemVerge Memory Machine implementations to process Big and Fast data. We are excited to work with phoenixNAP and leverage Bare Metal Cloud to demonstrate the potential of our memory virtualization technology, as well as to address the emerging use cases for Big Memory optimization."

By providing direct access to CPU and RAM resources, Bare Metal Cloud helps organizations ensure consistent performance even for demanding workloads and applications. At the same time, the platform's DevOps integrations, flexible billing models, and integration features ensure simplified server provisioning, scaling, and management. Automation-focused organizations can leverage it to streamline their CI/CD pipelines, access burst resources easily, and support global deployments.

Bare Metal Cloud comes with 15 TB of free bandwidth (5 TB in Singapore) and flexible bandwidth packages for more advanced needs. The platform also provides easy access to S3-compatible object storage, phoenixNAP's global DDoS-protected network, and strategic global locations.

To learn more about phoenixNAP's API-driven bare metal servers, visit the Bare Metal Cloud page. For customized options, view its dedicated server configurations.

About MemVerge

MemVerge is pioneering Big Memory Computing and Big Memory Cloud technology for the memory-centric and multi-cloud future. MemVerge Memory Machine is the industry's first software to virtualize memory hardware for fine-grained provisioning of capacity, performance, availability, and mobility. On top of the transparent memory service, Memory Machine provides another industry first: ZeroIO in-memory snapshots, which can encapsulate terabytes of application state within seconds and enable data management at the speed of memory. The breakthrough capabilities of Big Memory Computing and Big Memory Cloud technology are opening the door to cloud agility and flexibility for thousands of Big Memory applications. To learn more about MemVerge, visit http://www.memverge.com.

About phoenixNAP

phoenixNAP is a global IT services provider with a focus on cyber security and compliance-readiness, whose progressive Infrastructure-as-a-Service solutions are delivered from strategic edge locations worldwide. Its cloud, dedicated servers, hardware leasing, and colocation options are built to meet always-evolving IT business requirements. Providing comprehensive disaster recovery solutions, a DDoS-protected global network, and hybrid IT deployments with software- and hardware-based security, phoenixNAP fully supports its clients' business continuity planning. Offering scalable and resilient opex solutions with expert staff to assist, phoenixNAP supports growth and innovation in businesses of any size, enabling their digital transformation.

Source: phoenixNAP, MemVerge

How the Cloud Helps With Medical Research and Remote Medicine – Business Insider

The cloud has had a major impact on data-driven medical research, enabling breakthroughs that otherwise would have taken substantially longer to happen. Such is the case with the massive, orchestrated effort that went into the development of COVID-19 vaccines.

Using cloud computing and artificial intelligence (AI), researchers developed the vaccines in less than a year, and the effort required collaboration by various entities in the private and public sectors: pharmaceutical companies, hospitals, non-profit organizations, and government agencies. The monumental undertaking involved sharing large volumes of data as new discoveries occurred.

The development of these vaccines certainly is a major achievement for the pharmaceutical field. But there are several other examples of how cloud computing supports advancements in medicine, such as wearable devices that connect doctors and patients, storage of medical records, and remote surgery.

What makes the cloud so attractive to medical researchers comes down to the same characteristics that make it valuable in other fields: elasticity, scalability, and the capacity to handle massive data volumes.

"One of the incredible powers of the cloud is that ability to scale up quickly," said Adam Glick, senior director of portfolio marketing for APEX Cloud Services at Dell Technologies. "Processing large amounts of drug discovery and trial data more quickly helps get lifesaving medications to people that need them faster. Imagine that you are in phase 2 trials for a new treatment, or you're in a much earlier stage doing drug discovery, and you want to analyze the data you're collecting. The ability to get data analysis in minutes as opposed to days can radically change the speed of drug discovery and approval, which ultimately means saving more lives."

Without access to a cloud infrastructure, Glick added, the time and financial requirements to procure and set up the environment to conduct data-driven research are much higher. And once the project is completed, many of the servers and much of the infrastructure used in the research may sit idle, since they're no longer needed.

But with the cloud, "you can scale up your resources quickly and then you can process the data much faster," Glick said. This translates to faster development of life-saving drugs and treatments.

The cloud also plays a role in connected medical devices. Currently, 10 to 15 connected devices are used at each hospital bed. The global market for connected medical devices is expected to reach $158 billion in 2022, up from $41 billion in 2017.

Remote devices such as blood pressure, glucose, and heart monitors stay connected with clinics and physician offices, maintaining a continuous flow of data that helps enhance patient care. In some cases, timely data transmission can limit damage to a patient and even prevent death. If a device detects a problem with a patient, it can send an alert to dispatch an ambulance. In stroke and heart attack situations, a quick response can help minimize the impact on a patient.

Data transmitted from medical devices increasingly leverages edge networks, which place computing and analytics close to data sources and users to enable real-time decisions. But data that isn't used for real-time responses is stored in the cloud, where it can later be useful for research leading to new treatment methods and the development of therapeutic drugs.

Whether supporting operating rooms, wearable medical devices, or lab workers involved in critical research, the cloud already has proven critical to healthcare.

COVID-19 vaccines illustrate just how important the cloud can be, but as technologies and AI evolve to work together with the cloud, the list of possibilities of what medical researchers can accomplish is growing by the day.

Find out how APEX Cloud Services can help your R&D efforts.

This post was created by Insider Studios with Dell Technologies APEX.

Contributed | The role of the Cloud in digital transformation – DIGIT.FYI

The pandemic has undoubtedly super-charged digital transformation strategies, leading many organisations to accelerate their migration to the cloud or modernise existing cloud-based applications to keep pace with their competition.

However, cloud migration itself should never be the goal; it's essential to identify your business goals, not just your technology goals, and how cloud migration and modernisation will help you to achieve them alongside broader cultural and process changes.

This integration drives greater agility, the adoption of new processes and encourages innovation. With 2022 approaching at pace, organisations can no longer ignore the cloud. Instead, they must consider it as the key to enabling digital transformation.

Skill up or lose out

Making sure your business is fit for the future, from both a personnel and a technical perspective, is crucial to success. The alternative, after two extremely challenging years and the potential of ongoing uncertainty next year, is unnerving.

We are all too aware that the IT skills gap is an ever-growing chasm, potentially widened recently by the buoyant tech job market: cloud-native skills top the hiring lists of nearly half of hiring managers.

Upskilling your existing teams means you can bring staff with a deep understanding of your business on the digital transformation journey with you. Cloud empowers your teams, providing them with a toolkit that allows them to build amazing products quickly, reliably, and cost-effectively.

Digital transformation is also a skills transformation, and ultimately this gives staff the power to innovate quickly rather than focusing on the undifferentiated heavy lifting, such as managing racks and servers or database administration, that a cloud provider can do.

Don't treat the cloud like another data centre

It's important not to see the cloud as just a new data centre, replicating old processes and silos in your new cloud environment.

In the cloud, everything is programmable: you can get new storage and compute almost instantly, automate your infrastructure and network setup, automate your release processes, and access an amazing toolbox for achieving all this, with products and services built by cloud providers for easy implementation.

You should also consider how your teams can benefit from new ways to build and operate your digital products and accelerate innovation. For example, serverless lets you develop without having to worry about the underlying infrastructure. And as technology now extends into every aspect of business, it is no longer entirely under the purview of the IT team.

This means that old silos between teams need to come down, and a more collaborative approach should be taken.

A great example of this is how development, security, and operations teams have started to work much more closely together. Proper adoption of DevOps can really help with improving time-to-market, getting feedback from your customers faster, and shipping new features faster, more often, and more reliably.

Transform your business perspective

Both of the preceding issues demonstrate that digital transformation with cloud is not just an IT project. Cloud is a new way to do business, so you also need to transform perspectives on IT and on cloud across your business, and that comes through communication and transformed operating models.

One of the clearest examples of this is the rise of FinOps: a joined-up approach from cloud operations and finance teams to overcome the frictions that can sometimes arise. Finance teams need to get used to the variable cost models of cloud, and developers need to be encouraged to take greater accountability for the spend they incur.
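The accountability loop FinOps asks for usually starts with tag-based showback: attributing variable spend back to the teams that incur it. A toy sketch in Python (team names, resources, and costs are all invented for illustration):

```python
# Toy FinOps showback: aggregate variable cloud spend per owning team
# using resource tags. Untagged spend is surfaced explicitly so teams
# are pushed to tag everything. All names and costs are invented.
from collections import defaultdict

usage = [
    {"resource": "vm-1", "team": "payments", "cost": 124.50},
    {"resource": "vm-2", "team": "payments", "cost": 80.00},
    {"resource": "bucket-a", "team": "analytics", "cost": 310.25},
    {"resource": "vm-3", "team": None, "cost": 42.00},  # untagged
]

def showback(records: list) -> dict:
    """Sum cost per team, bucketing untagged resources under UNTAGGED."""
    totals = defaultdict(float)
    for r in records:
        totals[r["team"] or "UNTAGGED"] += r["cost"]
    return dict(totals)

print(showback(usage))
```

Real FinOps tooling works from provider billing exports rather than a hand-built list, but the principle (spend visible per team, with untagged spend impossible to hide) is the same.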

Cloud also transforms how Product teams operate, enabling them to shift from quarterly or monthly release cadences to a more frequent, feature-driven release cadence. Ideas can be tested, feedback can be gathered and improvements made in hours or days, and released instantly, providing value to customers and therefore to the business as a whole.

Cloud can also count towards sustainability initiatives. According to research from 451 Research, migrating business applications to the public cloud could reduce energy consumption by 80% and carbon emissions by 96%, further evidence of the benefits of expediting cloud migration strategies as part of digital transformation programmes, particularly in the drive towards a Net Zero future.

Make the time to modernise

While it may feel that completing a migration programme marks mission accomplished, it is far from the end of the journey.

Ensuring applications are modernised once an organisation is in the cloud will keep them fit for purpose. For example, cloud-native technologies such as off-the-shelf machine learning products remove the need for organisations to procure scarce and expensive skills, letting them draw on the expertise of cloud providers instead.

Taking advantage of the cloud as transformational for both technology and the wider business requires a holistic approach to migration. Consider skills, consider the business goals, consider how you will use cloud-native services, and above all consider how you will bring the business along with you on your transformation journey.


Read more here:
Contributed | The role of the Cloud in digital transformation - DIGIT.FYI

Cloud Security Market 2021 is Expected to be on Course to Achieve Considerable Growth to 2027 mainlander.nz – mainlander.nz

Description

The cloud security market size was valued at USD 34.5 billion in 2020 and is expected to register a 14.3% CAGR during the forecast period (2021-2027). Cloud security is one of the important aspects taken into consideration by every company that has shifted its business to the cloud. Most organizations are using multiple cloud servers and looking for a unified way to secure them, which will boost the growth of the cloud security market. Many enterprises use CASB software that integrates cloud service users and cloud applications for monitoring activity and enforcing security policies.
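As a sanity check, the quoted figures follow the standard compound-growth formula, value_n = value_0 * (1 + CAGR) ** years. A quick check in Python (the 2027 number below is my own extrapolation from the quoted base and CAGR, not a figure taken from the report):

```python
# Compound annual growth: value_n = value_0 * (1 + cagr) ** years.
def project(value: float, cagr: float, years: int) -> float:
    return value * (1 + cagr) ** years

start = 34.5   # USD billion, 2020 base from the report
cagr = 0.143   # 14.3% CAGR quoted for 2021-2027

# Five years out (2025) lands near the USD 68.5 billion figure in the
# report's own URL; seven years (2027) is my extrapolation only.
print(round(project(start, cagr, 5), 1))  # prints 67.3
print(round(project(start, cagr, 7), 1))  # prints 87.9
```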

Request for Report Sample: https://www.marketstatsville.com/Cloud-Security-Market-will-reach-USD-68.5-billion-by-2025

The increasing sophistication of cyber espionage, cybercrimes, and new cyberattacks are the key factors that boost the growth of the cloud security market. Moreover, increasing data leakage and data breaches in enterprises are creating the demand for cloud security and are expected to accelerate the growth of the cloud security market.

Due to trust issues, small and large enterprises alike are hesitant to move all their data to the cloud, a factor estimated to impede the growth of the cloud security market. Additionally, the rising number of government initiatives supporting smart city projects is driving investment in cloud computing technology, which increases the demand for cloud security and fuels the growth of the global market.

The impact of the COVID-19 pandemic on the cloud security market was relatively mild compared to other industries. As governments and regulatory authorities directed both public and private organizations to work remotely and maintain social distancing, digital business increased. At the same time, internet penetration across the globe grew exponentially. This drove growth in cloud security in the later stages of lockdown, as organizations sought protection from malicious attackers and hackers.

Request to Buy the Full Report: https://www.marketstatsville.com/buy-now/Cloud-Security-Market-will-reach-USD-68.5-billion-by-2025

The report outlines the global cloud security market study based on service, security type, application, and region.

Cloud Security Market Regional Outlook

The global cloud security market has been segmented into five geographical regions: North America, Asia Pacific, South America, Europe, and the Middle East and Africa (MEA). North America, followed by Europe and the Asia Pacific, holds the largest share of the global cloud security market, owing to the high adoption of IT security services. Further, Asia Pacific is the fastest-growing region in the global market.

Request for Report Table of contents: https://www.marketstatsville.com/table-of-content/Cloud-Security-Market-will-reach-USD-68.5-billion-by-2025

The global cloud security market is fairly fragmented, with a large number of small players across the globe. The key cloud security vendors operating in the global market are

The cloud security market report thoroughly analyzes macro-economic factors and every segment's market attractiveness. The report includes an in-depth qualitative and quantitative assessment of the segmental/regional outlook, along with the market players' presence in the respective segments and regions/countries. The information in the report incorporates inputs from primary interviews.

Full Report Analysis: https://www.marketstatsville.com/Cloud-Security-Market-will-reach-USD-68.5-billion-by-2025


How Tripwire Can Be a Partner on Your Zero Trust Journey – tripwire.com

In a previous blog post, I discussed the different applications of integrity for Zero Trust and provided four use cases highlighting integrity in action. The reality is that many organizations can't realize any of this on their own. But they don't need to. They can work with a company like Tripwire as a partner on their Zero Trust journey.

Let's explore how they can do this below.

Security teams can begin their Zero Trust journeys by establishing a baseline of integrity. Infosec personnel need a trusted state of their employers' systems and information to understand the security, compliance, and operational state of those assets over time. Only by establishing a single source of truth can they monitor for low-priority, routine changes as well as events that could signify a security incident, such as the addition of unrecognized binaries or the alteration of access privileges on critical files.
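That baseline-of-integrity idea can be illustrated with content hashing: record known-good digests, then flag anything added or altered. A minimal sketch using Python's standard library (file paths and contents are invented; this is a conceptual stand-in for what a platform like Tripwire does at far larger scale, not its actual mechanism):

```python
# Minimal file-integrity baseline: hash content, store digests, and
# report anything that changed or newly appeared. Illustrative only.
import hashlib

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def baseline(files: dict) -> dict:
    """files maps path -> content; returns path -> sha256 digest."""
    return {path: digest(content) for path, content in files.items()}

def detect_changes(base: dict, files: dict) -> list:
    """Compare current files against the trusted baseline."""
    changes = []
    for path, content in files.items():
        if path not in base:
            changes.append(("added", path))
        elif base[path] != digest(content):
            changes.append(("modified", path))
    return changes

known_good = baseline({"/etc/ssh/sshd_config": b"PermitRootLogin no\n"})
later = {"/etc/ssh/sshd_config": b"PermitRootLogin yes\n",
         "/tmp/implant.bin": b"\x7fELF..."}
print(detect_changes(known_good, later))
```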

With this continuous monitoring capability, the integrity platform also becomes critical to successful prevention and detection within a Zero Trust environment. In that sense, integrity management doesn't just serve as the foundation for Zero Trust Architecture (ZTA). It also serves as the ultimate backstop should attackers get in, as these threat actors sooner or later need to make a change to perform their malicious activity.

Once they have an integrity-based Zero Trust program in place, organizations can then continuously revalidate the trustworthiness of systems and information using security tools such as those offered by Tripwire. They can turn to four solutions in particular. Those are security configuration assessment, policy compliance, vulnerability assessment, and integrity monitoring.

Security teams need to trust that their employers' information and data are configured to a secure baseline that aligns with policy. This helps ensure that the Trust Policy Engine makes appropriate risk-based decisions for connection requests to different business assets. Towards that end, Tripwire Enterprise provides a combination of platforms and policies for organizations to determine how their assets are configured. This assessment of security policy is available for integration via APIs and apps connected to Tripwire Enterprise. Simultaneously, Tripwire Configuration Manager provides assessment of cloud infrastructure such as cloud accounts, storage, and SaaS solutions, allowing Zero Trust to extend beyond on-premises assets.

Security teams don't just need to worry about protecting their employers' assets against digital threats. They also need to make sure they fulfill any relevant compliance obligations that cover some or all of their systems and data. Tripwire Enterprise can provide compliance assessment results to inform trust policy decision making, as well as satisfy auditor requirements. Where it can be difficult to assign a static asset scope to a compliance requirement, Zero Trust using compliance results from Tripwire can provide assurance that all entities involved in a particular system are compliant.

An important part of Zero Trust is evaluating risk, such as software vulnerabilities. Indeed, a Zero Trust policy might specify that assets with vulnerabilities providing remote privilege access should not be able to connect to specific data sets, for instance. It might also specify vulnerability score thresholds for access to specific sets of resources.
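The threshold policy described above can be sketched as a simple rule check. A toy example in Python (resource names and CVSS thresholds are invented; a real policy engine would consume live vulnerability data from a scanner's API rather than a hard-coded list):

```python
# Toy Zero Trust policy check: deny a connection if the requesting
# asset carries any vulnerability at or above the severity threshold
# configured for the target resource. All names and scores invented.

POLICY = {"customer-db": 7.0, "public-wiki": 9.0}  # resource -> max tolerated CVSS

def allow(resource: str, asset_vulns: list) -> bool:
    """Permit the connection only if every vulnerability score on the
    requesting asset is below the resource's tolerated threshold."""
    threshold = POLICY.get(resource, 0.0)  # unknown resource: tolerate nothing
    return all(score < threshold for score in asset_vulns)

print(allow("customer-db", [4.3, 6.1]))  # every score below 7.0 -> allowed
print(allow("customer-db", [4.3, 8.8]))  # 8.8 exceeds 7.0 -> denied
```

Note how the same asset can be allowed onto a low-sensitivity resource but denied access to a high-sensitivity one, which is exactly the per-resource risk evaluation the paragraph describes.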

These functions emphasize the need for infosec personnel to assess their employers' infrastructure for known vulnerabilities. To that end, Tripwire IP360 provides both agentless and agent-based vulnerability assessment across a variety of asset types, including servers, workstations, network devices, containers, and cloud workloads. Those tests yield visibility into vulnerabilities affecting the operating systems and applications on those devices, and they expose the results via a robust REST API that serves both access requesters and ZTA resources such as Network Access Control (NAC) and Privileged Access Management (PAM) platforms.

Finally, security teams need to close any gaps left over from their security configuration assessments, policy compliance initiatives, and vulnerability assessments. Otherwise, an attacker could find undetected or unremediated vulnerabilities and abuse them to gain access to an organization's network. That's why it's not enough for security teams to implement these and other solutions once and leave them alone after that. They need to bring in integrity monitoring to spot potential deviations. In the example of security configuration, for instance, that would mean establishing a baseline configuration and then monitoring that configuration for changes. This can help security teams identify and address risk proactively, before the Trust Policy Engine needs to make a decision about access. It can also help spot changes in the configuration of the Zero Trust policy, the Trust Policy Engine, and any of the other supporting components themselves.

Ultimately, there's no Zero Trust without integrity. Security teams need to use this realization to get Zero Trust right the first time and to keep getting it right from there.

To learn more about how Tripwire can help, download this whitepaper: https://www.tripwire.com/misc/a-tripwire-zero-trust-reference-architecture.


Top Cloud Computing Trends Shaping Our IT Landscape in 2022 – CRN – India – CRN.in

By Indrajeet Ghorpade, General Manager, Technical Services at Rahi Systems

The years 2020 and 2021 witnessed a data explosion: there was an exponential rise in the generation and storage of data as work went virtual and businesses adopted digital services. 2022 will see not only quick cloud deployments for specific applications but a complete overhaul of enterprise systems as organizations embrace cloud migration.

As per a survey conducted by Forbes, 83% of enterprise workloads will be stored on the cloud by the end of 2021. The current shared cloud model offers less direct control but provides ease of access and strong security: 94% of businesses experienced a significant improvement in security after migrating to the cloud, according to Salesforce.

Combining the best of public and private cloud environments, the hybrid cloud will continue to be a major trend carrying over from 2021. Cloud has emerged as the most efficient data storage and application management solution, enabling organizations to focus on their core services. This article highlights the innovation and ongoing advancements in cloud computing infrastructure identified by Rahi's cloud experts.

Cloud service providers like Amazon (AWS Lambda), IBM (IBM Cloud Functions), and Microsoft (Azure Functions) are offering serverless cloud computing services. A relatively new concept, the serverless cloud is gaining huge traction as it provides a true pay-as-you-go service: users don't need to pay a fixed amount for storage or lease a server.

Serverless technology does not imply that servers are absent; servers are present, but users consume them without getting into the setup and technicalities. Also referred to as functions-as-a-service, it can be used by startups and small businesses to scale rapidly on serverless architectural solutions with minimal capital investment.

Serverless infrastructure is highly scalable, and costs depend only on consumption, allowing businesses to pay for exactly the cloud services they use. According to Mordor Intelligence, the serverless computing market is expected to grow at a CAGR of over 23.17% during 2021-2026.
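The pay-for-what-you-use point can be made concrete with a back-of-the-envelope comparison between a flat monthly server lease and per-invocation serverless billing. All rates below are invented placeholders, not any provider's actual pricing:

```python
# Toy cost comparison: flat-rate server vs pay-per-use serverless.
# All rates are invented placeholders, not real provider pricing.

def server_cost(monthly_lease: float) -> float:
    return monthly_lease  # paid whether or not any traffic arrives

def serverless_cost(invocations: int, price_per_million: float) -> float:
    return invocations / 1_000_000 * price_per_million

lease = 50.0        # hypothetical flat monthly lease
per_million = 2.0   # hypothetical price per million invocations

for monthly_invocations in (100_000, 5_000_000, 50_000_000):
    sls = serverless_cost(monthly_invocations, per_million)
    cheaper = "serverless" if sls < server_cost(lease) else "server"
    print(monthly_invocations, round(sls, 2), cheaper)
```

The crossover illustrates why serverless suits spiky or low-volume workloads, while steady high-volume traffic can still favour reserved capacity.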

As per IDG, 92% of organizations host their IT infrastructure on the cloud, 55% use multiple cloud systems, and 21% use three or more. This exponential growth in cloud adoption created new openings for bad actors to compromise IT systems: cybercrime rose by a steep 630% in the January-April period alone.

The recent breach in which the email addresses of over a million GoDaddy customers were exposed shows that cloud security is critical and requires a proactive approach to close blind spots in the system. As per AllCloud's Cloud Infrastructure Report, 28% of respondents consider security one of the most important criteria when selecting a cloud service provider. Cloud offers the flexibility and operational efficiency an organization needs to grow and scale with ease, but at the same time it opens a gateway for cybercriminals via multiple entry points.

A major trend to counter cloud-based security issues is the use of Cloud Access Security Brokers (CASBs). CASBs are security policy enforcement points, on-premises or cloud-hosted, placed between cloud users and service providers, and they consolidate multiple security enforcement measures. Over 50% of organizations don't have a proper security framework for their cloud applications; as cloud adoption surges, cloud security must follow.
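The enforcement-point idea behind a CASB can be sketched as a simple allow/block decision applied to every request between users and cloud services. A toy example in Python (services, roles, and rules are invented for illustration; real CASBs inspect live traffic or use provider APIs):

```python
# Toy CASB-style enforcement point: every request to a cloud service
# is checked against policy before being forwarded. Services, roles,
# and rules are invented for illustration.

POLICY = {
    "approved_services": {"crm.example.com", "storage.example.com"},
    "block_uploads_for": {"contractor"},   # roles barred from uploads
}

def enforce(user_role: str, service: str, action: str) -> str:
    if service not in POLICY["approved_services"]:
        return "block: unsanctioned service"
    if action == "upload" and user_role in POLICY["block_uploads_for"]:
        return "block: upload not permitted for role"
    return "allow"

print(enforce("employee", "crm.example.com", "download"))      # allowed
print(enforce("contractor", "storage.example.com", "upload"))  # blocked by role rule
print(enforce("employee", "pastebin.example.net", "upload"))   # unsanctioned service
```

Because every request passes through one chokepoint, the organization gets both visibility (shadow-IT discovery) and consolidated enforcement, which is what makes the broker model attractive.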

According to CDP Global, climate change will cost global businesses $1.3 trillion by 2026. Cloud significantly reduces power consumption at the user's end, as physical IT infrastructure is handled by the colocation facility providing the cloud services. Maximum power consumption comes from always-on infrastructure, powerful computing engines, massive digital storage requirements, and the like.

Out of nine consideration factors, 80% of consumers list sustainability as the most crucial when selecting a service provider. A wider audience is emerging that wants service providers' values to align with their own brand ethics. With 44% of CEOs planning for net-zero futures, it is increasingly important to take advantage of highly efficient cloud operations. Adopting a public cloud reduces carbon dioxide emissions by 59 MT per year. Given this sustainability impact, cloud will be a very important trend in 2022 and the years to come.

Tech giants around the world will focus on becoming net zero in 2022; Amazon, the largest cloud service provider, is also the biggest buyer of renewable energy for its data centers. Going green is more an environmental necessity than just a trend to follow. Businesses globally are relying on the cloud for most of their operations as digital transformation advances. Rahi's cloud experts are always evolving to bring the best of the cloud to your IT infrastructure.

If you have an interesting article / experience / case study to share, please get in touch with us at [emailprotected]
