Category Archives: Cloud Servers

Internet of Things is the way for technological innovation – The Standard

The Internet of Things (IoT) is a system of interrelated computing devices, mechanical and digital machines, objects, animals or people that are provided with unique identifiers. They have the ability to transfer data over a network without requiring human-to-human or human-to-computer interaction.

Research firm MarketsandMarkets forecasts the global IoT market size to reach $561 billion (Sh64 trillion) by 2022 from $170.6 billion (Sh19.4 trillion) in 2017, rising at a compound annual growth rate of 26.9 per cent during the forecast period.

With digital transformation accelerating, the benefits that IoT can offer organisations, especially those reliant on data collected from edge devices, are substantial. IoT can be better integrated into existing systems while remaining scalable for growth opportunities.

Africa, a developing continent, has shown great promise in the past decade, which can be seen in the continent's eagerness to transition into the Fourth Industrial Revolution.

The industrial revolution that global markets are experiencing today, commonly referred to as Industry 4.0, is powered by technological advancements that include IoT, smart manufacturing, robotics and Artificial Intelligence (AI), to name a few.

IoT, supervisory control and data acquisition (SCADA) and industrial IoT are transforming manufacturing by allowing for a connected experience that has streamlined and simplified many manufacturing processes.

This will become a critical technology to enable the continent to embrace a digitally transformed landscape.

Industry leaders and government policymakers need to be open-minded and embrace the fast-paced change of digital technology for IoT to be implemented correctly. This includes laws and legal frameworks to support data-driven technologies and innovation-driven growth. With the right mix of policies, technology experts can reap the benefits of IoT and AI in the years to come.

An IoT environment delivers more intelligence on the assembly line or in the manufacturing process. Furthermore, it allows for data analytics to be captured and used to provide feedback, advanced notifications, state of health and other forms of reporting to improve the process throughout manufacturing operations.
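As a rough illustration of the feedback loop described above, the sketch below (a hypothetical Python class and thresholds, not any vendor's API) flags sensor readings that drift from a rolling baseline, the kind of state-of-health signal an IoT analytics layer might raise on an assembly line:

```python
from collections import deque
from statistics import mean

class TemperatureMonitor:
    """Toy example: flag readings that drift far from the recent average."""

    def __init__(self, window=10, tolerance=5.0):
        self.readings = deque(maxlen=window)   # rolling window of recent samples
        self.tolerance = tolerance             # allowed deviation from the mean

    def ingest(self, value):
        """Return an alert string if the reading is anomalous, else None."""
        if len(self.readings) == self.readings.maxlen:
            baseline = mean(self.readings)
            if abs(value - baseline) > self.tolerance:
                self.readings.append(value)
                return f"ALERT: {value} deviates from baseline {baseline:.1f}"
        self.readings.append(value)
        return None

monitor = TemperatureMonitor(window=5, tolerance=3.0)
alerts = [monitor.ingest(v) for v in [70, 71, 70, 69, 70, 71, 85, 70]]
```

The `deque(maxlen=...)` gives a fixed-size window for free, so the baseline adapts as the process itself slowly changes.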

All these elements become vital enablers in developing economies across the continent.

Technology companies must therefore take the time to integrate IoT into existing technology roadmaps. There are several challenges to the adoption of IoT, which include:

Security

One of the key concerns over the future of IoT devices is security. As more devices become interconnected, any that are not well secured could cause enormous security issues, especially for businesses. Hackers could easily gain entry into an entire network if just one of the devices isn't adequately protected. This means companies will need to spend a lot of money securing their IoT devices. The problem becomes even more concerning when you take into account that there is no concrete law that covers the numerous layers of IoT.

Speed and accessibility

The more devices are connected to a network, the slower it could become. Work will need to be done to improve speeds, and businesses may want to consider switching to a wired rather than a wireless network. Cloud architecture will also need to be upgraded. As it stands, cloud architecture can manage thousands, if not millions, of devices. In future, however, it could end up having to support billions of devices.

Having so many devices connected at one time will prove problematic for existing servers. This also raises concerns over power supply and businesses will need to ensure they have a reliable source.

Compatibility

When it comes to actually connecting IoT devices, there are also likely to be issues with compatibility. This is because IoT is currently growing in so many different directions, so additional software and hardware will be needed to connect future devices. Issues could also be caused by diversity within operating systems, not enough standardised M2M protocols and non-unified cloud servers.

One of the areas where IoT has grown in popularity over the past few years is financial technology (fintech). Companies here utilise the Internet, algorithms and software technologies to offer financial services to consumers.

Fintech organisations provide services that would typically be available within a traditional banking branch. Therefore, services such as loans, payments, investments and wealth management are now available through this industry vertical with the assistance of IoT. Other verticals that IoT will impact are agriculture and healthcare. By combining IoT with AI, the agriculture sector can optimise harvests, getting advice on the best time to sow depending on weather conditions, soil and other indicators.

Agriculture can use these technologies to identify healthy and unhealthy crops, thereby triggering when pesticides should be sprayed with accuracy.

Adopting IoT technology can enable businesses to enhance their productivity and performance.

IoT devices streamline activities, collect data, monitor activities and provide useful insights.

IoT enables businesses to streamline their operations and utilise their resources effectively while gathering real-time information, allowing companies to explore and pursue new opportunities. While IoT adoption in Africa faces many hurdles with Internet penetration rates still low, there are some countries that are leading the way.

These include Kenya, South Africa, Rwanda, Mauritius and Seychelles, which have been addressing IoT as a priority. If IoT is to be successful, especially in rural areas, four issues must be addressed: a longer range for rural access; the cost of hardware and services; limiting dependency on proprietary infrastructure; and providing local interaction models.

Looking ahead, IoT will impact the future of virtually every industry and every human being not only in Africa, but globally. IoT technology will continue to work as a technological innovator for the foreseeable future in Africa.

The writer is the chairman of BTN, an ICT provider in East Africa.


Top 5 Reasons Why Companies are Moving to the Cloud – hackernoon.com


Data Scientist | AI Practitioner | Software Developer. Giving talks, teaching, writing.

The term cloud refers to the software and services that run on the internet instead of locally on your on-site server or computer. Adopting the cloud has helped companies to find alternative plans to cut costs and ensure their data and systems are available to their customers anywhere and at any time.

Cloud technology is clearly showing its potential to different businesses and it continues to expand as well. With the availability of many cloud providers around the world, such as AWS, Microsoft Azure and Google Cloud Platform (GCP), you can find different options to migrate from on-site servers to cloud servers.

A 2021 cloud adoption survey conducted by O'Reilly shows that cloud adoption by companies has increased to 90%.

In this article, you are going to learn why some companies are moving to the cloud.

Let's get started.

There are several reasons why companies may plan to move to the cloud. You could accomplish these things without the cloud, as you have been doing for years, but the cloud has introduced new business capabilities through cost-effectiveness and accessibility.

A very common reason companies first get interested in the cloud is fault tolerance. By definition, fault tolerance is the ability of a system to continue operating without interruption when one or more of its components fail. Here, the system can be a computer, a network, or a cloud cluster.
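A minimal sketch of that definition: keep a service answering when one component fails by falling back to a redundant replica. The `fetch_primary` and `fetch_standby` callables below are hypothetical stand-ins, not a real cloud API:

```python
def with_failover(operations):
    """Try each redundant backend in order; return the first success."""
    last_error = None
    for op in operations:
        try:
            return op()
        except ConnectionError as exc:
            last_error = exc        # this component failed; try the next replica
    raise RuntimeError("all replicas failed") from last_error

def fetch_primary():
    # Simulated failure of the primary datacenter.
    raise ConnectionError("primary datacenter unreachable")

def fetch_standby():
    return "order-records"          # the standby still serves the data

result = with_failover([fetch_primary, fetch_standby])
```

The caller never sees the primary's outage; that transparency is the whole point of fault tolerance.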

If you're a company that values your existence, you'll have some disaster recovery plans. These plans usually involve some alternate place where you can store data or your systems if your primary data center has some sort of problem.

Traditionally, this was done by contracting with some provider to keep a physical second copy of your hardware, ready to go when your primary hardware has failed to operate.

The only problem is that when everything is up and running fine, that backup physical hardware is just sitting around collecting dust and becoming obsolete, yet it still costs you, because you are paying for it even when you don't use it.

But what if there was a better way than buying all your backup equipment? Why not just rent it only when you need it? Cloud providers have massive amounts of system capacity available to you in seconds, and you only pay for what you use, which is almost always a huge cost saving. Then, when the crisis is over, you can just shut those things down and stop paying.

Moving to cloud technology enables you to save both space and costs. Previously, you had to pay for on-site servers (and sometimes even off-site data centers). With the cloud, you pay cloud providers to handle the data centers and other resources rather than hosting the servers in-house on your own. For example, Oracle Cloud customers save approximately 30% to 50% by switching to the cloud.

But if you misuse your cloud resources, they can easily cost dramatically more than any sort of on-site server. And this is why it's so important to train your staff and involve experienced cloud architects.

So cost savings can be realised, but you have to be careful about the expectations you set, especially in the early days.

This is another common reason for cloud adoption. As your business grows and expands beyond your home borders, it makes sense to have resources and services close to the new markets you want to reach, whether for regulatory or performance reasons.

Cloud providers already have data centers and resources available in various geographical locations around the world, and you can use those with little more than a click of a button. This will definitely save you a lot of costs, since you don't have to create data centers on your own.

Another reason why companies move to the cloud is agility. In a simple definition, agility is the ability to respond to changing needs.

In many companies, if you wanted to run an experiment that required IT equipment, you would likely have to endure a requisition and procurement process and secure resources from the IT group to set up and maintain that equipment. These steps could take weeks or months.

But by adopting the cloud, you can get access to that equipment in a matter of minutes, try your experiment, and then shut the equipment down.

Another advantage is that you can get your results in days instead of months, and at a lower cost than the alternatives.

Another reason companies adopt the cloud is scalability. Scalability in cloud computing refers to the ability to increase or decrease IT resources as needed to meet changing business demands. Your goal is to have your capacity as close to your need as possible, but that is a pretty tricky thing to forecast.

The pay-as-you-go model of cloud providers enables you to have the flexibility and the ability to scale up or scale down depending on your business needs.
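The scale-up/scale-down decision can be sketched as a toy capacity rule. The function and its numbers below are illustrative assumptions, not any provider's autoscaling API:

```python
import math

def desired_replicas(current_load, capacity_per_replica,
                     min_replicas=1, max_replicas=20):
    """Toy autoscaling rule: enough replicas to cover load, within bounds."""
    needed = math.ceil(current_load / capacity_per_replica)
    return max(min_replicas, min(max_replicas, needed))

# Scale up during a traffic spike, back down when it subsides.
peak = desired_replicas(current_load=950, capacity_per_replica=100)   # 10 replicas
quiet = desired_replicas(current_load=40, capacity_per_replica=100)   # 1 replica
```

Real autoscalers add smoothing and cooldown periods so capacity doesn't thrash, but the core idea is exactly this: capacity tracks demand instead of being provisioned for the worst case.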

Cloud technology has a lot of benefits from both business and operations perspectives. In this article I have explained a few of them that can help you decide whether to adopt cloud technology in your company, or even for personal projects.

Despite the benefits of adopting cloud technology, there are still potential risks that you have to take into consideration. Some of these risks are:

In the next article, you will learn more about Types of clouds and Migration approaches.

If you learned something new or enjoyed reading this article, please share it so that others can see it. Until then, see you in the next post!

You can also find me on Twitter @Davis_McDavid.

And you can read more articles like this here.

Want to keep up to date with all the latest in cloud technology and data? Subscribe to our newsletter in the footer below.


Intel, AMD and ARM Agree on Universal Standard for Chiplets – HYPEBEAST

Major chip manufacturers Intel, AMD and ARM have now come together in agreement on a universal chiplet standard.

Known aptly as Universal Chiplet Interconnect Express (UCIe), the new coalition includes other tech giants such as Samsung, Microsoft, Meta, Google, Qualcomm and TSMC, and the shared design will enable these manufacturers to mix and match chiplets from each other to create the best systems-on-chip without the increased cost, time and effort of producing their own unique designs from scratch. They'll also potentially perform more consistently, benefiting everything from smartphones to cloud servers.

So far, the coalition has already ratified a UCIe 1.0 design, although Engadget points out that it could still be some time before consumers will be able to pick up a chip with that specification, given that the group still needs to define its form factor and protocols, among other intricacies. Of course, there's also the issue of global supply chain shortages.

Elsewhere in tech, Apple will be hosting a Peek Performance event on March 8.


Is Amazon Web Services Highly Used In Company? ictsd.org – ICTSD Bridges News

The leading cloud computing provider worldwide is Amazon Web Services. The cloud-computing arm of the behemoth Amazon has grown into the most profitable unit of the company, earning the trust of businesses worldwide for its offerings.

The majority of businesses (61%) were expected to migrate their workloads to the cloud by 2020. In 2020, Amazon Web Services (AWS) accounted for 76% of enterprise cloud usage.

AWS offers servers, storage, networking, remote computing, email, mobile development, and security. According to Amazon, AWS represented about 13% of its revenue as of Q2. The company controls nearly twice as much cloud computing capacity as its nearest competitor, Google.

For many enterprises, moving to cloud computing has been an easy decision. Automating fragmented processes, speeding up delivery of projects, and lowering costs are just a few of the ways AWS can benefit companies. In fact, Amazon Web Services is most commonly used by household names like Disney and Pinterest.

AWS is a cloud service provider offering on-demand services such as compute, storage, networking, security and databases that can be accessed through the internet across the globe, with those services either managed or unmanaged. Amazon Web Services, abbreviated as AWS, represents a democratisation of computing over the internet.

Which Top Companies Use AWS?

Which AWS Service Is Most Popular?

What Companies Use AWS in 2021?

Amazon Web Services is being used by more than 64% of enterprises (in some capacity) in the public cloud, according to Contino's study, The State of the Public Cloud in the Enterprise, for 2020.

AWS is used to power the enterprise applications of more than 1,300 organizations, including 120 Fortune 500 companies. One example is Workday, which, by combining finance, HR, and planning into one integrated system, has helped businesses improve performance for decades.

Nearly 94% of enterprises have already used the cloud. A report published by RightScale for 2019 reports that 91% of enterprises are using public clouds, while 72% use private clouds. Both options are actually used by organizations today, with 65% of them using a hybrid cloud solution.

What percentage of small, mid-sized and large companies use the AWS Cloud? Some 284,997 companies operating with AWS have around 1-10 employees, while 92,853 mid-sized companies with 51-200 employees also utilize it.

What AWS Service Is Most Used?

Currently, Amazon Web Services delivers a highly reliable, scalable platform in the cloud which powers hundreds of thousands of businesses around the globe with cheap, scalable computing resources from the cloud.

What Is The Most Popular AWS Product?

Amazon Web Services is an electronic content delivery platform for servers, storage, networking, remote computing, email, mobile application development, and security. As of Q2 2021, AWS accounted for about 13% of Amazon's total revenue.

Amazon Web Services (AWS) is the computing and storage platform most heavily used by Netflix, which runs more than 100,000 server instances on it for functions such as databases, analytics, recommendation engines, video processing, and more.

By using AWS, you have the ability to quickly and securely host applications, whether they are existing applications or new SaaS-based ones. The platform can be accessed via the AWS Management Console or through well-documented, application-oriented APIs.

What Are The Services Provided By Amazon?

AWS offers over 200 products and services, including cloud computing, storage, networking, database storage, analytics, application processing, deployment, management, machine learning, robotics, and analytics for smart factories.


With Systems, HPE Walks The Line Between Demand And Supply – The Next Platform

It is a tough environment for a lot of enterprises right now, and has been since the advent of the coronavirus pandemic more than two years ago. And now the war in Ukraine is going to add another layer of uncertainty over commerce. And yet, data is being collected and needs to be processed for competitive advantage; this has not changed, and it will not change.

Those IT suppliers who are flexible with their offerings, particularly those that conserve capital and move expenditures from CapEx to OpEx (offloading the capital expenses to a certain degree onto the IT vendors), or those who lease and finance gear, are going to do better in such an environment. And that, in a nutshell, is precisely why Hewlett Packard Enterprise has been turning in good numbers in the trailing twelve months under relatively new chief executive officer Antonio Neri.

Over the past six years, HPE has intentionally spun off software, outsourcing, PCs, and printers to focus itself down on the core enterprise business that is an amalgam of the server businesses of the old Hewlett Packard, Compaq, Digital Equipment, SGI, and now Cray, plus a bunch of edge networking stuff and an amalgamated services business, called PointNext. The much smaller HPE is not particularly profitable, but then again the volume enterprise server business and the HPC systems business have never been terribly profitable, so this is no surprise to us. We have often said that system buyers should send HPE as well as Dell, Lenovo, Inspur, and IBM Thank You notes because, based on their financials, they often are building systems as much for love as for money.

But, HPE is making more money and keeping more of it in recent quarters, and is also making the transition to sell everything in the catalog under its GreenLake cloud utility pricing model, and this seems to be resonating with customers at this uncertain time. While HPE is not booking most of this GreenLake as a service revenue now, since it is allocated proportionally over the length of contracts that run from 36 months to 60 months, the deferred revenue is starting to build.

In the first quarter of fiscal 2022, ended in January, HPE added another 100 GreenLake customers, bringing the total up to 1,350. While GreenLake is an important annuity-like revenue stream for HPE, it is not the only one it has, and if you add up all of the as-a-service revenues that were recognized and then show the annualized run rate (ARR) for these sales, you get charts that look like this:

Orders for as-a-service priced products grew at 136 percent year-on-year in Q1 of fiscal 2022, and the annualized run rate came in at $798 million. Nearly two-thirds of that is for systems and networking software, and the rest is for infrastructure and regular financing; think of it as money as a service, we suppose. The order backlog for these AAS offerings at HPE grew by 100 customers and $500 million, which is another way of saying, we believe, that the new customers were all GreenLake customers, and the total contract value from all AAS products sold to customers stood at $6.5 billion. (It is unclear if this is a figure that is net of revenue already recognized, but it should be if ARR is to be a meaningful figure.) The plan is for the ARR for the AAS portion of the HPE business to have a compound annual growth rate of 35 percent to 40 percent between fiscal 2021 and fiscal 2024, which puts it at around a $2.2 billion run rate as HPE exits its fiscal 2024 in October 2024. And if the backlog for AAS products scales like the run rate does, it should be somewhere around $18 billion at that time.
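As a quick sanity check on that math, the compound-growth sketch below shows the 35 to 40 percent CAGR landing near the $2.2 billion target; the ~$0.85 billion fiscal 2021 base is our illustrative assumption, worked backwards from the target, not a figure from HPE's filings:

```python
def project(base, cagr, years):
    """Compound annual growth: base * (1 + cagr) ** years."""
    return base * (1 + cagr) ** years

# Assumed ARR base of ~$0.85 billion in fiscal 2021 (illustrative only).
low = project(0.85, 0.35, 3)    # roughly $2.09 billion at 35 percent CAGR
high = project(0.85, 0.40, 3)   # roughly $2.33 billion at 40 percent CAGR
```

Three years of compounding at the stated rates brackets the $2.2 billion figure, so the plan's arithmetic hangs together.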

"That's why I said that when I think about the future of this company, the product is HPE GreenLake," Neri said on the call with Wall Street analysts going over the financial results. "Everything gets delivered through HPE GreenLake, whether it is connectivity through a subscription model, whether it is compute and storage that you can consume elastically with data services running on top of it, whether it is the services to operate in HPE GreenLake. HPE GreenLake is becoming a platform of choice for many customers because it offers that flexibility in an architecture that's edge-to-cloud. And that's inclusive, by the way, of the public cloud. And that's why when you see the innovation we're going to bring in the next two weeks, it includes the public cloud in the way we manage that."

It will be interesting to see what HPE does here, but the implication is that either GreenLake systems will be on someone's cloud, or HPE will buy capacity on someone's cloud and make the management of all of it consistent under a single GreenLake management framework. (The latter is an interesting concept. Are you allowed to resell AWS capacity? Think of the volume discounting HPE could command if it bought up giant blocks of Amazon Web Services, Microsoft Azure, and Google Cloud.)

In the quarter ended in January, HPE posted overall sales of $6.96 billion, up 1.9 percent, and net income more than doubled to $513 million. Some of that income is due to price increases it is passing along to customers, and some of it is due to intense cost controls and effective supply chain management. The company burned a little cash to help boost its parts inventory, which HPE is building up because customers are giving it more visibility into their infrastructure plans because of the shortages endemic in the IT sector right now. With parts lead times ranging from 52 weeks to 70 weeks for a lot of components, end user customers have little choice but to make a plan and tell their IT vendors as far in advance as they can.

The Compute division, which is comprised of mostly ProLiant servers based on Intel Xeon SP and AMD Epyc processors, had 1 percent growth to $3.02 billion in the quarter, but operating income rose by 21.6 percent to $416 million. Orders of X86 servers rose by more than 20 percent in the quarter, so you can see the gap between supply (revenues up 1 percent) and demand (orders up by more than 20 percent). Some orders always spill over into the next quarter, but this is probably a higher than usual rate. The Storage division had more than 15 percent order growth, so not as strong as servers, but this was the fourth quarter with such high growth. (Well, by HPE historical standards, this is high growth.) Storage revenues fell by 3.1 percent in the quarter, to $1.16 billion, and operating income fell by 28.5 percent to $168 million. In the quarter, close to 10 percent of combined compute and storage orders were for products sold AAS as opposed to being acquired straight up by customers, according to Neri.

HPE breaks out its HPC and AI business as a separate division, and this is mostly the Cray supercomputer business with a smattering of SGI big-memory NUMA machines aimed at those markets, along with the Apollo dense server and storage designs plus their respective software stacks. Two big HPC deals were pushed out from Q1 to Q2 for revenue recognition (one of them almost certainly is some of the Frontier Cray EX supercomputer at Oak Ridge National Laboratory), and that hurt sales, which fell 21 percent sequentially to $790 million, but were up 3.7 percent year on year. The HPC system business is always choppy, and the AI systems business is no different (and in fact HPE customers sometimes buy the same type of machinery to run these workloads separately, or buy one machine to run them concurrently). The HPC & AI division saw operating income swing to a $7 million loss from a $43 million gain in the year-ago quarter and a $143 million gain in Q4 of fiscal 2021, which ended in October. The good news is that HPE order growth for its HPC and AI systems rose by more than 20 percent in the first quarter of fiscal 2022, which boosted the order book for these systems to $2.7 billion, a record for HPE and Cray alike.

What we have been looking at through all of the years of changes at Hewlett-Packard and then Hewlett Packard Enterprise is the core systems business, and as you can see below, the company has done a pretty good job of building it and maintaining it since the Great Recession:

For the January quarter, this core systems business in this case, Compute, plus HPC & AI plus Storage had $4.96 billion in sales, up four-tenths of a point year on year, with operating income of $577 million, down 6.9 percent. This data includes servers, storage, and networking all the way back, but we did not have an easy way to extract tech support for systems out of the old Technology Services group, so it does not include that in the old data. It is as apples to applesauce as we can make the comparison to show you the general trend over the past decade and change.

The point is, HPE has always been able to keep rebuilding a systems business on about the same scope. Which is a feat in and of itself.


VMware inks more telco partnerships as 5G takes off – The Register

MWC VMware has detailed products and partnerships at Mobile World Congress (MWC) involving service providers and others using its tech to build next-generation networks and services covering applications, the radio access network (RAN), and the network edge.

The virtualization giant is among many firms looking to exploit the fusion of cloud-native technologies and telecoms as telcos deploy 5G networks around the globe. It already has its VMware Telco Cloud Platform, which is based on familiar technologies such as vSphere and its VMware NSX-T network technology, plus Tanzu for software container support.

As already covered by The Register, Verizon Business has added VMware to its portfolio of managed Software-Defined Wide Area Network (SD WAN) providers within Verizon's Managed WAN Service, adding another option for enterprise customers.

MetTel, another US service provider, has said it will offer its customers a managed secure access service edge (SASE) solution powered by VMware SASE. The solution will allow organisations to provide cloud-based security, networking, and edge compute services for applications running at the edge.

The move follows BT's recent announcement that it will sell its customers VMware SASE as a global managed service, combining the company's networking capabilities and security expertise with VMware technology.

IT services and consulting firm HCL Technologies is partnering with VMware to deliver integrated solutions for service providers around the world. This will see it expand its Cloud Smart portfolio of services powered by VMware technology to include support for VMware Telco Cloud 5G Core and VMware Telco Cloud RAN.

As part of this expanded partnership, HCL will establish a lab to streamline adoption of VMware tools by customers. This will provide facilities for onboarding, integration, verification and benchmarking of various 5G Core and virtual RAN configurations on VMware Telco Cloud Platform, HCL said.

Meanwhile, Dish Network in the US is building up a 5G network from scratch and has chosen VMware technology to power its RAN. According to Dish, its RAN workloads will run on the VMware Telco Cloud Platform RAN while the firm said it will also evaluate VMware's RAN Intelligent Controller (RIC).

A RIC is a new function, apparently developed by the O-RAN Alliance, that enables service providers to deploy cloud-native control and management apps in the RAN. VMware's RIC, unveiled last year, abstracts the underlying RAN infrastructure and can host near-real-time and non-real-time applications, which VMware claims will enable new capabilities for automation, optimisation and service customisation.

Dish EVP and chief network officer Marc Rouanne said the company aims to use network slicing, Open RAN, and other 5G technologies to provide customised network services.

VMware also said it is partnering with technology providers on testing and validation of third-party solutions for its Telco Cloud Platform to shorten network deployments by reducing the time needed to design, test, and integrate components from multiple partners.

Dell, for example, has introduced the Dell Telecom Multi-Cloud Foundation as a turnkey network infrastructure package to help service providers build and deploy cloud-native networks. Dell's package combines Dell hardware with the Dell Bare Metal Orchestrator management tool and the service provider's choice of software platform, including VMware.


The role of cloud in the datacentre revolution – Global Banking And Finance Review

By Mike Gallagher, Business Development Director at Claranet

The datacentre industry has been undergoing transformation for some time now, and the global pandemic has delivered a whole new wave of disruption. Organisations are under increasing pressure to cut costs and reduce their real estate footprint, bringing on-premises datacentres into the spotlight.

In addition to this, ongoing hardware supply chain issues are impacting both existing datacentres and new builds. Experts have warned that the global semiconductor shortage could stretch to 2023, and organisations currently face a 52-week lead time for many other datacentre components.

Gartner predicts that by 2025, 80% of enterprises will shut down their traditional datacentres, with 10% of organisations having already done so to increase efficiency. So, what does the future datacentre landscape look like? The shift in working patterns over the last two years has prompted organisations to embrace cloud-enabled solutions, including for data management. Cloud-based services offer a sustainable, long-term solution which mitigates hardware issues and drives operational and cost efficiencies.

Building infrastructure stability

Research from Information Technology Intelligence Corp (ITIC) reveals that 44% of organisations report that a single hour of server downtime now costs from $1 million to over $5 million, excluding any legal fees, fines, or penalties. A 73% majority cited security as the number one cause of unplanned downtime, with 64% saying that human error caused server outages. These outages not only have financial implications but can lead to reputational damage in the case of data breaches.

Businesses must consider the stability of their on-premises datacentres, but the necessary server updates including the multiple backup servers needed in case of failure can be costly, and the technology does not have a long working life. According to IDC research, server performance erodes at an annual average of 14%, meaning that after five years, performance has diminished by 40%. Therefore, it is not just the initial installation that businesses must account for, but the time and resource that will need to be dedicated to maintenance and repairs.

Moving to the cloud enables businesses to transition from extended disaster recovery times to true business continuity. Organisations will no longer have to worry about their data resiliency, with the cloud provider supporting data replication and managing the storage and security of those backups.

Embracing continual transformation

Following a period of extreme uncertainty and change, business agility is at the front of leaders' minds. To stay competitive, organisations must keep pace with change, and this involves meeting the demands of a hybrid workforce. A recent HP Wolf Security Rebellions & Rejections report reveals that over three-quarters (76%) of IT teams admit security took a backseat to business continuity during the pandemic, while 91% felt pressure to compromise security for business continuity. The flexible working model is here to stay, and businesses must work out how to prioritise security while ensuring that remote employees have access to the data they need to do their best work.

Reduced reliance on in-house datacentres can boost company-wide agility and innovation, accelerating the process of testing and deploying new software applications. With cloud solutions, businesses can transform at speed; teams are free to fail fast and fail safe, welcoming mistakes as part of the development process. Cloud-based organisations also have more freedom when it comes to scalability, automatically adjusting data storage capabilities to meet demand and avoid over provisioning at peak times.

Supercharging sustainability

Another key driver for businesses to reduce their datacentre footprint is minimising environmental impact. As it stands, datacentre power consumption could devour 20% of UK generation in the next few years, nearly the equivalent of the entire global airline industry. The real estate required to house and store individual on-premises systems has a significant impact on a company's electricity usage, and energy-saving technology is not often accessible for smaller businesses.

Although cloud technology is still a long way off being environmentally friendly, mass migration will ultimately lead to a huge reduction in emissions and e-waste, with organisations paying for the exact amount of processing power they need. And modern cloud infrastructures also offer increased transparency and tracking of applications' power and carbon consumption, which will be vital in the race to net zero.

Not only is this crucial to meeting the goals of the Paris Agreement, but demonstrating ESG credentials is becoming increasingly important in appealing to key stakeholders. With many customers choosing organisations based on their sustainability ethos, and investors using climate change markers to evaluate non-financial performance, striving to fulfil environmental commitments is not only an ethical decision, but a profitable one.

By using cloud, organisations can leverage the sustainability capabilities of the hyperscalers' carbon-neutral and carbon-negative strategies. Cloud also gives transparency of consumption, which is difficult to achieve on premises, and enables power to be consumed only when required, scaling up and down as demand dictates.

Making the move

Though some organisations are still reluctant to make the move from on-premises infrastructure, the mounting challenges for the datacentre and hardware supply chain mean that those who delay this transformation risk being left behind. Migrating to cloud puts companies' datacentre infrastructure in the hands of dedicated professionals, with the clout and purchasing power needed to manage supply chain shortages more flexibly.

For those who are ready to liberate their organisation from on-premises datacentres, there are a number of key things to consider. They must have a good handle on their desired outcomes and the data needed to formulate a strategy for a sustainable transition, and they must also ensure that they have the required cloud literacy to implement that strategy.

This can be a daunting task but, with a robust plan in place and the right partner, businesses can modernise at the pace that's right for them, ensuring that the expected benefits of cloud are fully realised.


Tips for Healthcare Organizations to Prevent and Respond to Data Breaches – HealthTech Magazine

One of the things that we've seen from traditional architectures is that most organizations have the same virtual machines. They have physical servers and databases that have grown so large that they can't protect them inside their backup window. In many cases, they have NAS architectures, which they'd traditionally protect using native NAS tools, but they don't necessarily provide the same level of recovery or separation from cyberattacks.

To protect these different workloads, traditional architecture had different parts and pieces, whether it was something like a master server or media server, and these server-based operating systems with applications installed on them send data to different storage devices. In many cases, we've seen these servers be compromised as part of a ransomware attack.

At Cohesity, we took all these different parts and pieces and consolidated them into a single hyperconverged architecture. Effectively, we run all those services inside our cluster as logical entities. That clustered approach gives us several big advantages. The first is that we distribute the workload across all the nodes. This allows us to back up and recover much more quickly than the traditional architectures.

The platform architecture itself gives us the ability to rapidly recover data, which is a key concern. Because it's a node-based architecture, it doesn't suffer from things like disruption for upgrades, forklift upgrades or outages from software upgrades. We can add or remove nodes all while it's up and running. We have a whole host of ransomware protection that's built into the platform, and we have storage efficiencies to help organizations reduce the amount of data that they have to store to drive down the cost.

READ MORE: Layered security is essential to healthcare systems' incident response planning.

HALEY: We built an architecture designed with security in mind. It starts with a hardened architecture, where we built a platform so that it leverages technologies like encryption and immutability and has capabilities for things like write once read many (WORM), and even architectures to support technologies like air gap. We've also built in a whole host of technologies to maintain and restrict access, and so we have granular role-based access control. Not everybody needs to be an administrator. We can give people the rights they need to do what they need to do without making everybody have too many rights.
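Granular role-based access control of the kind described here boils down to a mapping from roles to permitted actions, checked on every request. The sketch below is purely illustrative; the role and action names are invented and do not reflect Cohesity's actual permission model:

```python
# Minimal role-based access control sketch. Role and action names are
# invented for illustration; this is not Cohesity's actual permission model.
ROLE_PERMISSIONS = {
    "viewer":   {"view_backups"},
    "operator": {"view_backups", "run_backup", "restore"},
    "admin":    {"view_backups", "run_backup", "restore",
                 "delete_backup", "manage_users"},
}

def is_allowed(role, action):
    """Grant an action only if the role's permission set contains it."""
    return action in ROLE_PERMISSIONS.get(role, set())
```

Here an "operator" can restore data but cannot delete backups or manage users, which captures the "not everybody needs to be an administrator" principle.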

We also support technologies such as multifactor authentication. My No. 1 recommendation to everybody, professionally and personally, is to enable multifactor authentication on everything. Anything that you care about, you should turn it on. It's a huge deterrent from several of the credential compromises we've seen. Multifactor authentication is a huge defense against attack. In addition to protecting the data, we also help people detect anomalous activity.
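The most common second factor, the time-based one-time password, is defined by RFC 6238 on top of the HOTP construction from RFC 4226. A minimal standard-library sketch (real authenticator apps exchange the shared secret base32-encoded; raw bytes are used here for brevity):

```python
import hashlib
import hmac
import struct
import time

def hotp(secret, counter, digits=6):
    """HOTP (RFC 4226): HMAC-SHA1 over a big-endian counter, dynamically truncated."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # low nibble of last byte picks the truncation offset
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret, for_time=None, step=30, digits=6):
    """TOTP (RFC 6238): HOTP with the counter derived from Unix time."""
    t = time.time() if for_time is None else for_time
    return hotp(secret, int(t // step), digits)
```

Because the code depends only on the shared secret and the current 30-second window, the server can verify it without any password crossing the wire, which is why a stolen password alone is not enough to log in.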

HALEY: We have a platform built into our Helios single pane of management console. What we're doing is looking at every object that we protect and creating a trend line for each object. The trend line shows how much data is backed up every day, how much changes and which files are being added, changed or deleted. We also look further into it so that we can understand how compressible the data is, or how eligible it is for deduplication.

What we're really doing is looking for the signatures of a ransomware attack as it relates to data. The idea of creating a trend is that we understand what a normal day, a normal week or even a normal month looks like for every object in the environment. As part of the anomaly detection, whenever we see something that's out of trend, we'll alert you to it. We also show you the last clean backup. So, we'll show you where we detected the anomaly, and we'll show you the last nonanomalous protection point as well as a list of the files we discovered were affected by this.

Generally, if you see this as a challenge, you can initiate recovery right from the detection panel. If it's something that you expected, maybe you installed a service pack or updated an application on the system, you can simply ignore the anomaly. We've also set this up so that it can send an alert directly to the Cohesity mobile app. It's just another set of eyes looking at the data, and we're trending it using artificial intelligence and machine learning.
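The trend-based detection described in this exchange can be approximated with a trailing baseline and a deviation threshold: learn what a normal day looks like, then flag days that fall far outside it and remember the last clean point. This is a simplified sketch of the general idea, not Cohesity's algorithm; the window and threshold values are arbitrary:

```python
from statistics import mean, stdev

def detect_anomalies(daily_change_gb, window=14, threshold=3.0):
    """Flag days whose backup change rate deviates sharply from the trailing trend.

    Returns (anomalous_day_indices, last_clean_day_index).
    """
    anomalies = []
    for i in range(window, len(daily_change_gb)):
        baseline = daily_change_gb[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        # z-score against the trailing window; guard against a flat baseline
        if sigma > 0 and abs(daily_change_gb[i] - mu) / sigma > threshold:
            anomalies.append(i)
    last_clean = anomalies[0] - 1 if anomalies else len(daily_change_gb) - 1
    return anomalies, last_clean
```

A ransomware encryption pass tends to rewrite (and de-compress) huge swathes of data at once, so the daily change rate spikes far outside the learned trend and the day before the first flagged index becomes the candidate recovery point.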

DISCOVER: Learn how infrastructure upgrades helped an organization survive a ransomware attack.

HALEY: We index all the data that we store. We build a searchable index. We also have an index and an inventory that's globally searchable for all the objects that we protect. We have tools with an actionable methodology: we can search for something and then act right when we find it. So, we have these to help organizations understand all the data that's being protected. If you think about it, the data protection architecture becomes an aggregation point for all the data in an environment. It's like a central repository for the data. These tools provide a great deal of power.

Our architecture is a multinode cluster, but we have this idea of the Cohesity marketplace, the idea that we can run apps and services natively on the architecture, and they spin up as Kubernetes containers. We run apps and services on the architecture that you could download and install directly into the cluster.

One example is a data classification architecture. Instead of indexing the file, server and database names, it can actually index the contents of files. Imagine being able to go through all the files you're protecting and look for patterns. Understanding where that sensitive data is allows you to better understand how to secure it.


Best free password managers: Better online security doesn’t have to cost a thing – PC World

You need a password manager. Data breaches now happen regularly, and that flood of stolen info has made cracking passwords even easier. Not just the password12345 variety is at risk; it's also any that use strategies like variations on a single password or substituting numbers for letters. Even if you're using unique, random passwords, storing them in a document or spreadsheet leaves you vulnerable to prying eyes.

While paid password managers offer nice extras, a free password manager still protects you from the risks of using weak passwords (or worse, using the same one everywhere). You just have to remember one password to access a single, secure place where all your other passwords are stored.

And because free password managers come in different flavors and styles, you should be able to find one that fits your lifestyle. Down the road, you can always upgrade to a paid service if your needs grow.


Like several other services, Bitwarden offers a free tier and a paid tier, but its free tier packs in so many features that most individuals won't need more. You can access the service across an unlimited number of devices and a multitude of device types, enable basic TOTP two-factor authentication, and fill your vault with as many passwords as you'd like. The free personal plan also allows privacy-minded users to avoid the company's cloud hosting and instead self-host.

Rivals dole out far less to their free users, and it's particularly rare for them to grant unrestricted movement between multiple device types. (LastPass and Dashlane begin charging as soon as you want to leave the confines of a single device.) Most competitors are also not open-source like Bitwarden, which prevents their communities from being able to hunt for hidden backdoors or security holes.

The one thing that the free personal plan doesn't offer is real-time password sharing, but you can partially get around that by signing up for a free two-person org plan instead. It allows unlimited password sharing between the two users, thus allowing both individuals to safely access current passwords for shared accounts. However, the trade-off is that this free enterprise plan does not allow self-hosting.

Bitwarden's generous lineup of features for its free service makes it our top pick. Choose the free 2-person org plan to enable password sharing with one other account. Image: PCWorld.

Bitwarden's other advantage is that should your needs expand down the road, the transition to a paid plan won't cost much. A premium personal plan is just $10 per year (compared to $36+ per year for rivals), and a family plan is $40 per year for up to six users (compared to $48+ per year for rivals). And moving up to a paid tier does come with concrete benefits: support for more sophisticated forms of two-factor authentication, evaluations of your passwords' health (e.g., strength, public exposure, etc.), encrypted file storage, and emergency access for trusted individuals.

Finally, if you decide to move elsewhere one day, Bitwarden allows you to export your passwords, with the option to do so as an encrypted file. But with such a generous and thorough set of features, you'll likely not want to go elsewhere.


KeePass may not look like much, but under the hood this desktop-application-based password manager has all the features you could want, particularly if you're privacy- and security-minded.

Because the program and its encrypted database file(s) are stored locally on your computer by default, you retain full control over who can access it, unlike a cloud service, where you have to trust that servers are set up correctly and that the employees are trustworthy. Moreover, you don't even have to install it on your system, but can run it via a portable .exe application kept on a USB stick.

KeePass is also an open-source program, which means that the community can always vet it for any hidden backdoors or just plain old security-crippling bugs. And you can enable two-factor authentication through the use of key files (which augments your master password), plus lock the database file to the Windows account that created it, too.
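Conceptually, combining a master password with a key file yields a composite secret that is then stretched into the database encryption key, so an attacker needs both factors plus time-consuming key derivation to open the vault. The sketch below illustrates that idea with PBKDF2 from the standard library; KeePass itself uses its own composite-key scheme with AES-KDF or Argon2, so treat this strictly as an approximation:

```python
import hashlib

def derive_vault_key(master_password, key_file_bytes, salt, iterations=600_000):
    """Derive a 32-byte vault key from a password plus a key file.

    Conceptually similar to KeePass's composite key: hash each factor,
    concatenate, then stretch with PBKDF2-HMAC-SHA256. Illustrative only;
    KeePass's real KDF (AES-KDF or Argon2) differs in detail.
    """
    pw_hash = hashlib.sha256(master_password.encode("utf-8")).digest()
    kf_hash = hashlib.sha256(key_file_bytes).digest()
    return hashlib.pbkdf2_hmac("sha256", pw_hash + kf_hash, salt, iterations)
```

Changing either factor, or the per-database salt, produces an entirely different key, and the high iteration count deliberately slows down brute-force guessing.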

KeePass's numerous plugins let you approximate many of the premium features you'd get with a paid service, so long as you're willing to put in some elbow grease. This is only part of the full list! Image: PCWorld.

You're not just locked to a Windows desktop system, either. Because the program is open source, you can find community-created ports of KeePass for MacOS, Linux, Android, and iOS, as well as a boatload of plugins that let you customize it to your taste. With plugins, you can re-create most of the features you'd find in paid cloud-based services, like checking to see if any of your passwords have been found as part of a data dump.

You can also get creative with how you store your database file. For remote access, you can put it on a home server or, if you're comfortable, a cloud service of your own choosing. (Perhaps you're more comfortable with how Google safeguards its accounts than a dedicated password manager service, for example.) And should you ever decide to hang up your hat as a DIY password manager administrator, KeePass allows for easy exports of your passwords.

Password managers within mobile operating systems and major browsers have come a long way. Just a few years ago, we wouldn't have advised using them at all, but now they've shored up their security and features to become a viable (though basic) option.

But basic isn't bad; when it comes to password managers, the best service is the one that you'll use. For some people, using a dedicated password manager can be too much to keep track of. In those cases, leaning on Google, Apple, or even Firefox can help upgrade your password security with little extra effort necessary. Their built-in password management tools can do the heavy lifting of creating and remembering unique random passwords across the web, and you won't need to switch to a different app to make it work.

If you're going to choose a browser-based password manager, Firefox is one of the best options among the bunch.

Of course, you will lock yourself into those ecosystems by doing so, but if you live your whole life within those waters already, you won't be bothered by that fact. Google probably will appeal to most people, as Chrome is ubiquitous, but those who worry about data privacy can instead turn to Firefox and its pledge to not sell your data. Apple also shares Firefox's commitment to privacy, but it's the hardest platform to leave, as the company doesn't provide an easy method to export passwords. We advise choosing Google or Firefox for the widest reach across devices, and Apple if you own both MacOS and iOS devices (and don't plan to leave). Microsoft's password manager in Edge can also be worth a look for people deeply enmeshed in the Windows ecosystem.

The one primary downside to using your Google, Apple, or Firefox account to store passwords is that they're not as tightly safeguarded as with a third-party service. Even if you secure your account with two-factor authentication (and you absolutely should if you're storing passwords in it!), Google, Apple, or Firefox tend to be more lax about accessing passwords from a device that's logged in. Often they don't ask for reauthentication to use a stored password, unlike most dedicated password managers, and that can be a security hazard on a shared device.

Why bother with a paid password manager if you can use a free one? Paid services provide premium features that enable more control over your passwords and how you secure them. For example, you'll often gain access to password sharing (handy if your household members all need to know the Netflix password), support for YubiKey and other more advanced forms of 2FA authenticators, and alerts that tell you if your password turned up in a data dump. Some paid services even have a signature feature that makes them stand out from competitors. For example, 1Password has a travel vault feature that hides some passwords when you're traveling, as an extra security measure when you might encounter aggressive airport screening or simply lose access to your devices due to theft or lost baggage.

If you need these kinds of features, check out our list of the best paid password managers to see which ones offer the best bang for your buck.


What is a cloud server? Types of cloud servers and how …

What is a cloud server?

A cloud server is a compute server that has been virtualized, making its resources accessible to users remotely over a network. Cloud servers are intended to provide the same functions, support the same operating systems (OSes) and applications, and offer similar performance characteristics as traditional physical servers that run in a local data center. Cloud servers are often referred to as virtual servers, virtual private servers or virtual platforms.

Cloud servers are an important part of cloud technology. Widespread adoption of server virtualization has largely contributed to the rise and continued growth of cloud computing. Cloud servers power every type of cloud computing delivery model, from infrastructure as a service (IaaS) to platform as a service (PaaS) and software as a service (SaaS).

Cloud servers work by virtualizing physical servers to make them accessible to users from remote locations. Server virtualization is often, but not always, done through the use of a hypervisor. The compute resources of the physical servers are then used to create and power virtual servers, which are also known as cloud servers. These virtual servers can then be accessed by organizations through a working internet connection from any physical location.

In a public cloud computing model, cloud vendors provide access to these virtual servers and storage resources in exchange for fees that are typically structured as a pay-as-you-go subscription model. Cloud computing delivery models that include only virtual servers, storage and networking are called IaaS. PaaS products provide customers a cloud computing environment with software and hardware tools for application development, which are powered by cloud servers, storage and networking resources. In the SaaS model, the vendor delivers a complete, fully managed software product to paying customers through the cloud. SaaS applications rely on cloud servers for compute resources.

Although private cloud servers work similarly, these physical servers are part of a company's private, owned infrastructure.

An enterprise can choose from several types of cloud servers. Three primary models include:

Public cloud servers. The most common expression of a cloud server is a virtual machine (VM) -- or compute "instance" -- that a public cloud provider hosts on its own infrastructure and delivers to users across the internet using a web-based interface or console. This model is known as IaaS. Examples of cloud servers include Amazon Elastic Compute Cloud (EC2) instances, Microsoft Azure instances and Google Compute Engine instances.

Private cloud servers. A cloud server may also be a compute instance within an on-premises private cloud. In this case, an enterprise delivers the cloud server to internal users across a local area network (LAN) and, in some cases, also to external users across the internet. The primary difference between a hosted public cloud server and a private cloud server is that the latter exists within an organization's own infrastructure, whereas a public cloud server is owned and operated outside of the organization. Hybrid clouds may include public or private cloud servers.

Dedicated cloud servers. In addition to virtual cloud servers, cloud providers can supply physical cloud servers, also known as bare-metal servers, which essentially dedicate a cloud provider's physical server to a user. These dedicated cloud servers -- also called dedicated instances -- are typically used when an organization must deploy a custom virtualization layer or mitigate the performance and security concerns that often accompany a multi-tenant cloud server.

Cloud servers are available in a wide range of compute options, with varying processor and memory resources. This enables an organization to select an instance type that best fits the needs of a specific workload. For example, a smaller Amazon EC2 instance might offer one virtual CPU and 2 GB of memory, while a larger Amazon EC2 instance provides 96 virtual CPUs and 384 GB of memory. In addition, it is possible to find cloud server instances that are tailored to unique workload requirements, such as compute-optimized instances that include more processors relative to the amount of memory.
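Choosing an instance type is essentially a best-fit search over a catalog of CPU/memory shapes. The toy sketch below illustrates the idea using the two EC2-like shapes mentioned above plus one invented middle size; a real provider's catalog is far larger and also varies by price, region and specialization:

```python
# Hypothetical instance catalog; the small and large shapes echo the EC2
# examples in the text, the medium entry is invented for illustration.
CATALOG = [
    {"name": "large",  "vcpus": 96, "mem_gb": 384},
    {"name": "small",  "vcpus": 1,  "mem_gb": 2},
    {"name": "medium", "vcpus": 8,  "mem_gb": 32},
]

def smallest_fit(vcpus_needed, mem_gb_needed, catalog=CATALOG):
    """Return the smallest catalog entry satisfying both requirements, or None."""
    fits = [c for c in catalog
            if c["vcpus"] >= vcpus_needed and c["mem_gb"] >= mem_gb_needed]
    return min(fits, key=lambda c: (c["vcpus"], c["mem_gb"])) if fits else None
```

Picking the smallest shape that fits avoids paying for idle capacity, which is exactly the sizing decision the paragraph above describes.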

While it's common for traditional physical servers to include some storage, most public cloud servers do not include storage resources. Instead, cloud providers typically offer storage as a separate cloud service, such as Amazon Simple Storage Service (Amazon S3) and Google Cloud Storage. An organization provisions and associates storage instances with cloud servers to hold content, such as VM images and application data.

The choice to use a cloud server will depend on the needs of the organization and its specific application and workload requirements. Some potential benefits include:

Ease of use. An administrator can provision a server in a matter of minutes. With a public cloud server, an organization does not need to worry about server installation, maintenance or other tasks that come with owning a physical server.

Globalization. Public cloud servers can globalize workloads. With a traditional centralized data center, admins can still access workloads globally, but network latency and disruptions can reduce performance for geographically distant users. By hosting duplicate instances of a workload in different global regions, organizations can benefit from faster and often more reliable access.

Cost and flexibility. Public cloud servers follow a pay-as-you-go pricing model. Compared to a physical server and its maintenance costs, this can save an organization money, particularly for workloads that only need to run temporarily or are used infrequently. Cloud servers are often used for temporary workloads, such as software development and testing, as well as for workloads where resources need to be scaled up or down based on demand. However, depending on the amount of use, the long-term and full-time cost of cloud servers can become more expensive than owning the server outright. Furthermore, a full breakdown of cloud computing expenses is important to avoid hidden costs.
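The trade-off between pay-as-you-go rental and outright ownership reduces to a break-even calculation: the server's upfront cost is amortised against the monthly gap between cloud rental and ownership running costs. A back-of-the-envelope helper (all figures hypothetical, ignoring hidden costs, discounting and hardware refresh cycles):

```python
import math

def breakeven_months(server_capex, owned_monthly_cost, cloud_monthly_cost):
    """First month at which owning (capex + running costs) undercuts cloud rental.

    Returns None when cloud is no more expensive per month than running the
    owned server, in which case ownership never breaks even.
    """
    if cloud_monthly_cost <= owned_monthly_cost:
        return None
    # capex + m * owned <= m * cloud  =>  m >= capex / (cloud - owned)
    return math.ceil(server_capex / (cloud_monthly_cost - owned_monthly_cost))
```

For example, a hypothetical $12,000 server costing $200/month to run breaks even against a $700/month cloud bill after 24 months of full-time use; a workload that runs only a few months a year may never reach that point, which is why usage pattern drives the decision.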

The choice to use a cloud server may also pose some potential disadvantages for organizations.

Regulation and governance. Regulatory obligations and corporate governance standards may prohibit organizations from using cloud servers and storing data in different geographic locations.

Performance. Because cloud servers are typically multi-tenant environments, and an admin has no direct control over those servers' physical location, a VM may be adversely impacted by excessive storage or network demands of other cloud servers on the same hardware. This is often referred to as the "noisy neighbor" issue. Dedicated or bare-metal cloud servers can help an organization avoid this problem.

Outages and resilience. Cloud servers are subject to periodic and unpredictable service outages, usually due to a fault within the provider's environment or an unexpected network disruption. For this reason, and because a user has no control over a cloud provider's infrastructure, some organizations choose to keep mission-critical workloads within their local data center rather than in the public cloud. Also, there is no inherent high availability or redundancy in public clouds. Users that require greater availability for a workload must deliberately build that availability into the workload.

When organizations are evaluating the use of cloud servers to satisfy their compute needs, there are a few key considerations.

When considering any type of cloud service, organizations should examine the specific cloud servers the provider uses -- such as the type, configuration and virtualization technology. While use of cloud servers for computing tasks can offer customers many specific benefits compared to physical servers, certain use cases can favor traditional on-premises servers.

Read here for more information on cloud security best practices for customer organizations.
