Category Archives: Cloud Servers

Ampere Altra: Cloud computing ARM processor with 80 cores built on a 7 nm process, currently being tested by giants – Optocrypto

Ampere is a start-up, built on the foundations of Applied Micro Circuits, that designs data center processors. The company has now announced that it can outperform the highest-level server processors from its competitors, namely AMD Epyc and Intel Xeon Cascade Lake. Its 80-core Altra Arm processor, built using TSMC's 7 nm process, is also expected to deliver competitive performance with better efficiency. Trial versions of the 210 W Ampere Altra are already being sent to major companies such as Microsoft and Oracle, and the chip will be available in single- and dual-socket platforms. Mass production of the systems is planned for mid-2020.

The 80-core Ampere Altra Arm processor is already being tested by the largest players in the market. Will this start-up threaten Intel and AMD in the server market?

Renee James, founder and CEO of Ampere, is a former president of Intel. James says the new Ampere Altra is designed to excel in cloud applications and extreme loads. In particular, Ampere Altra is billed as the industry's first native 80-core cloud microprocessor, which the company says makes it a better choice for cloud computing than AMD Epyc or Intel Xeon.

According to the Ampere press release, the cloud differs markedly from more traditional enterprise data center environments in how it takes advantage of processing power, security, and energy efficiency. Atiq Bajwa, Ampere's CTO and lead architect, provided some information about how Altra works.

Bajwa explained that Altra's single-threaded Ampere cores and dense, energy-efficient servers can provide reliable, durable performance and a high level of isolation and security for each customer, regardless of what other tenants are running in these environments. The 64-bit Ampere Altra is based on the Arm Neoverse N1 platform.

Microsoft is evaluating Ampere systems in its laboratories for use in the Azure cloud. Meanwhile, Oracle plans to use Ampere chips in its services and is optimizing most of its software, including Oracle Linux, Oracle Java, and Oracle Database, to run on Altra. Oracle has previously invested $40 million in Ampere. Ampere's press release also lists other companies testing the new processors, including Canonical, VMware, Lenovo, Micron and Gigabyte.



Building IT Security Requires Improving Teams – Forbes

If you're looking to strengthen your business's IT security, the solution includes the human factor, and it's not just about new hires.

Digital transformation can make almost any enterprise better. It brings together collaborators from around the world, draws smarter insights with the help of machine learning, and empowers businesses to become more responsive and innovative.

This combination of ubiquitous connectivity and cloud computing is changing how people work, and the kinds of business strategies an enterprise can pursue.

As much as conversations about digital transformation can focus on finding the right kinds of programmers or data scientists, it's equally important to emphasize that digital transformation requires the right kind of security professionals.

But when technology improves, enterprises aren't the only ones to experience innovation increases. Hackers and other bad actors can be pretty innovative too. This is one reason it's hard to go more than a few weeks without seeing some new data breach, malware risk, or cybercrime in the headlines.

Successful digital transformation boils down to leveraging technology to produce business outcomes, which is a simple idea. But deploying, connecting, protecting, and maintaining those technologies can be enormously complex, making it easy to accidentally expose security vulnerabilities or to react too slowly to a sudden advance in attackers' capabilities.



The need for security professionals is not new. In fact, security is one of the fastest-growing job fields, and not just in IT. According to the 2019 (ISC)² Cybersecurity Workforce Study, going forward, there will be 10,000 cybersecurity professionals for every 100,000 U.S.-based establishments.

And yet, for all of this need, the 2019 CSIS Cybersecurity Workforce Gap survey showed that 82% of employers report a shortage of cybersecurity skills, with 314,000 additional cybersecurity professionals needed as of January 2019, despite the 716,000 such professionals already in the field.

Think about that: It's as if every single person in Denver were already working in IT security, but because the job is so big, we need everyone in St. Louis to pitch in as well. That is a huge need and a huge shortage.

So what's causing this shortfall? A large part of it is that the professionals in these roles are bogged down by manual work. Between patching servers, maintaining security infrastructure, updating security configurations, and collecting and analyzing data, there's hardly any time left to design proactive cybersecurity.

As with so many issues in the modern workplace, improvements to this people problem lie in the cloud. Most notably, cloud providers maintain and secure the underlying infrastructure, relieving you of some of the more time-consuming manual tasks of infrastructure management.

The cloud provides security by default with systems that simplify IT resource configuration, deployment, and operation throughout the organization. This frees security professionals to concentrate on tasks that are a better use of their time and skills, like designing and modifying security policies, auditing access to critical systems, classifying business-critical content, and investigating anomalous activity through a business lens.

But the cloud offers more than just time. Many cloud providers offer tools and guidance to help users secure their apps and data by letting security teams determine which data is sensitive, who should have access to what, and how to translate the organization's security and regulatory policy to controls. And since the cloud is exposed to users as software and APIs, automation becomes much simpler, resulting in more consistency at scale with fewer opportunities for human errors.
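To make the automation point concrete, here is a minimal sketch, assuming an AWS environment with the boto3 SDK installed and credentials configured; it applies one uniform baseline control (blocking public access) to every storage bucket through the provider's API. The specific control is illustrative; the point is that a policy expressed as code is applied identically everywhere, with no manual clicking.

```python
import boto3

s3 = boto3.client("s3")

# One baseline, applied uniformly: block all forms of public access.
baseline = {
    "BlockPublicAcls": True,
    "IgnorePublicAcls": True,
    "BlockPublicPolicy": True,
    "RestrictPublicBuckets": True,
}

for bucket in s3.list_buckets()["Buckets"]:
    s3.put_public_access_block(
        Bucket=bucket["Name"],
        PublicAccessBlockConfiguration=baseline,
    )
    print(f"Applied public-access block to {bucket['Name']}")
```

Run on a schedule or in a pipeline, the same few lines enforce the policy consistently across hundreds of buckets, which is exactly the kind of repetitive work that used to consume a security team's time.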


In addition to the people problem, modern security workforces also find themselves facing a skills problem. Security threats are always evolving, as are the solutions and tools, which means that many established security professionals can't keep up with the skills they need to detect and address new types of attacks. The longer it takes to find solutions, the more productivity may suffer across the organization.

On a deeper level, knowledge of the latest skills is essential to strong DevSecOps. The basic concept of DevSecOps is to build apps with security in mind from the start, rather than the traditional tactic of designing security in toward the end of development or bolting it on after systems and apps are built. Executing this requires a deep knowledge of security skills and tools that grows throughout the development process.

The cloud helps overcome these issues by providing access to the latest technological advancements and giving professionals access to the latest tools without the constant need to acquire and retrain.

Security professionals can also use the cloud to drive DevSecOps by embracing the best practices embedded into cloud-based tools. For example, Google Cloud offers vulnerability scanning, deploy-time controls, and configuration management, tools that underpin Google's own best practices for develop-and-deploy processes. With tools like these, security experts can set up strong security practices from the start that persist throughout the project's life cycle.

Learn more: This Google Cloud Next '19 session explores how enterprises can deliver software faster, without compromising security or reliability.

IT security is only going to become more essential as businesses rely more on technology for innovation and competitive advantage, and the need for professionals who are equipped for the challenge is going to grow as well.

Fortunately, with cloud-based security tools and a healthy amount of security by default, not only can security professionals continue to do their jobs effectively even as the landscape changes, but the next generation of experts will likely already be trained on cloud-based tools. That leaves the major people and skills problems in the IT landscape to those who haven't taken advantage of the cloud.



How to Choose the Right Kubernetes Distribution – ITPro Today

So, you want to use Kubernetes to orchestrate your containerized applications. Good for you. Kubernetes makes it easy to achieve enterprise-scale deployments. But before you actually go and install Kubernetes, there's one thing you need to wrap your head around: Kubernetes distributions. In most cases, you wouldn't install Kubernetes from source code. You'd instead use one of the various Kubernetes distributions offered by software companies and cloud vendors.

Here's a primer on what a Kubernetes distribution is, and what the leading Kubernetes distros are today.

What Is Kubernetes?

Before talking about Kubernetes distributions, let's briefly go over what Kubernetes is. Kubernetes is an open source platform for container orchestration. Kubernetes automates many of the tasks that are required to deploy applications using containers, including starting and stopping individual containers, as well as deciding which servers inside a cluster should host which containers.
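As a rough illustration of what that orchestration looks like, here is a sketch using the official Kubernetes Python client (it assumes the kubernetes package is installed and a working kubectl context; the deployment name and image are just examples). It declares a three-replica deployment and leaves it to Kubernetes to decide which nodes run the containers and to restart them if they fail.

```python
from kubernetes import client, config

config.load_kube_config()  # use the current kubectl context

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1DeploymentSpec(
        replicas=3,  # Kubernetes keeps three copies running, rescheduling them as needed
        selector=client.V1LabelSelector(match_labels={"app": "web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "web"}),
            spec=client.V1PodSpec(
                containers=[client.V1Container(name="web", image="nginx:1.17")]
            ),
        ),
    ),
)

# Submit the desired state; the scheduler decides where the containers actually run.
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```

The declarative pattern is the key idea: you describe the desired state, and the orchestrator continuously works to make the cluster match it.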

Kubernetes is only one of several container orchestrators available; other popular options include Docker Swarm and Mesos Marathon. But, for reasons I won't get into here, Kubernetes enjoys majority mindshare, and probably majority market share, too, when it comes to container orchestration.

What Is a Kubernetes Distribution?

As an open source project, Kubernetes makes its source code publicly and freely available on GitHub. Anyone can download, compile and install Kubernetes on the infrastructure of their choice using this source code. But most people who want to install Kubernetes would never download and compile the source code, for several reasons.

Most people turn to a Kubernetes distribution to meet their container orchestration needs. A Kubernetes distribution is a software package that provides a pre-built version of Kubernetes. Most Kubernetes distributions also offer installation tools to make the setup process simpler. Some come with additional software integrations, too, to help handle tasks like monitoring and security.

In this sense, you can think of a Kubernetes distribution as being akin to a Linux distribution. When most people want to install Linux on a PC or server, they use a distribution that provides a pre-built Linux kernel integrated with various other software packages. Almost no one goes and downloads the Linux source code from scratch.

What Are the Main Kubernetes Distributions?

Technically speaking, any software package or platform that includes a pre-built version of Kubernetes counts as a Kubernetes distribution. Just as anyone can build his or her own Linux distribution, anyone can make a Kubernetes distribution.

However, if you want a Kubernetes distribution for getting serious work done, there are several mainstream options available from software vendors and the major public cloud providers.

Conclusion

To say that Kubernetes is a complex beast is an understatement. Fortunately, Kubernetes distributions make it easy to take advantage of Kubernetes without all of the fuss and muss of setting it up yourself from scratch. For most use cases, a mainstream Kubernetes distribution is the most practical way to get up and running with Kubernetes.


‘You Cannot Break That’ HPE Says Of The iLO 5 Chip In Gen 10 Servers – CRN: Technology news for channel partners and solution providers

Allen Whipple, distributor business development channel consultant at Hewlett Packard Enterprise, had a simple question for a room full of MSPs.

"Are your customers truly protected?" he asked. "I see all of your heads shaking no. A majority of them are not, right?"

Whipple said HPE's claim that its ProLiant servers are "the world's most secure industry-standard server" is not marketing fluff.

"We threw that tagline around all over the place. We mean it," he said, speaking at a breakout session sponsored by D&H Distributing at XChange 2020. "When we first designed this server, we took it out to a third party and said, 'Hack this computer. Tear it apart and tell us what you know.' They're the ones that came back and said, 'Based on our research you have the world's most secure industry-standard server. Based on what we're seeing, you are two generations ahead of the competition.'"

He said that for the Gen10 model, HPE stopped using third-party chip makers, choosing instead to make its iLO 5 chips itself and infuse them with security at the point of manufacture.

"We took those iLO 5 chips, we brought them in house and we made them layer upon layer out of silicon," he said. "We took our HPE firmware and literally embedded it in that silicon. It's like surrounding it with concrete. What does that mean? It's beautiful. It's like a digital handshake. You cannot break that."

Whipple said that while firmware outside of the chip can still be hacked and ransomed, the iLO 5 chip will stay secure.

"In the slim chance your Gen10 server is hacked, we are going to provide you with a way to recover your server in a matter of clicks or minutes, instead of days or weeks," he said, speaking specifically about the iLO 5 Advanced chip. "You can set the server to check itself once every 24 hours. So once every 24 hours it is going to go and check all the firmware settings and make sure they are in an authentic state. Now if they're not in an authentic state, it's going to give you three options."

He said that, first, it will allow the user to restore the server to the last known good state from within the previous 24 hours, when it last checked itself.

"That is critical," Whipple said. "In less than 24 hours you not only can identify the hack, but you can absolutely be back up and running to how you were before the hack occurred."

He said the second option is that the server will also allow a user to restore it to factory settings, which, with VMware, will allow tech support to drop an image onto the system to get it back up and running. Option number three, meanwhile, will allow a user to take the server offline to preserve it for forensics in the event it needs to be used in a hacking investigation.

"They can study it, they can look at it, they can see exactly how that ransomware was operating," he said.

The insurance company Marsh & McLennan awarded the server its Cyber Catalyst designation as a top security product, calling it "arguably a close-to-perfect solution. Security that is baked in at the bare metal hardware level is the standard that security risk management professionals should strive for." Marsh will offer discounted insurance rates to those selling Gen 10 servers, Whipple said.

MSPs in the room said they were anxious to return to their shops and get this rolled out everywhere.

"We just switched to the HPE ProLiant servers. Just learning about the iLO 5 Advanced was worth coming to this whole thing," said Jeff Willems, president of CSRA Technologies, an IT service provider to the U.S. military. "To be able to roll something back 24 hours earlier if we have a problem, I'm going to go back and challenge my guys: Let's get this rolled out everywhere we have ProLiant servers."

Christopher Alghini, the principal G Suite and Google Cloud consultant at Cool Head Tech in Austin, Tex., agreed.

"I was very impressed with the iLO Advanced chip, and I can see where it would work really well for government and enterprise server installations," he said. "For a data center install, these would be perfect devices because of the security and I think also the speed. Even if the chip is breached, to have that evidence to go back to, I think is an excellent idea as well, so there's some recovery in there."


Eight Reasons to Reach for the Cloud – STN Media – School Transportation News

A technology supervisor at Sabine Parish School District in Louisiana got an alert on his phone around 4:00 a.m. on a Sunday morning letting him know there had been a surge in bandwidth on the school's server. In addition to the odd day and time, it was also summer break. Something was wrong.

After a quick investigation, the staff discovered a ransomware attack on their servers. An anonymous hacker now held years' worth of data (important documents, test schedules, and more) and was demanding money in exchange for its release.

A similar scenario played out in multiple Louisiana school districts this past summer, forcing Governor Edwards to declare a state of emergency. Louisiana's emergency response was modeled after the one Colorado took in 2018, when a ransomware attack cost that state $1.5 million to clean up. In 2019 alone, 500 schools across the country were the target of ransomware incidents.

If you search the internet, you'll find all the typical advice for managing your data to help avoid such an attack: use strong passwords, make database backups, buy anti-virus software, avoid suspicious emails. However, one of the best ways to prevent a cyberattack on your school's server is remarkably simple: don't put your data there.

Cloud-based software has become the norm for every industry across the world, and it's not by coincidence. The biggest reason is data security, but there are several other key advantages that explain why school bus transportation offices, in particular, have a lot to gain from Software as a Service (SaaS) and from adapting to this new norm.

These are just some of the key benefits of having a software subscription instead of the outdated license model. Cyberattacks are a problem that, unfortunately, will not be going away, and by most accounts they are only getting worse. SaaS can help you be better positioned to prepare for them or, hopefully, avoid them, and that starts with educating yourself on its value, including what exactly Software as a Service means.

It's not paying a vendor to handle your routing operation remotely. That's a consulting service, worth paying for when you need it, but different from SaaS. And it shouldn't just be a web interface for your existing licensed program. That will give you mobile access, yes, but it's a far cry from all the other advantages of SaaS explained above.

If any of this is important or interesting to you, you can learn more in our on-demand webinar, or you can contact Tyler Technologies and we'd be happy to consult with you about your individual needs.


Which IoT Applications will Benefit Most from Edge Computing? – IoT For All

Edge computing refers to information being processed at the edge of the network, rather than being sent to a central cloud server. The benefits of edge computing include reduced latency, reduced costs, increased security and increased business efficiency.

Transferring data from the edge of a network takes time, particularly if the data is being collected in a remote location. While the transfer may usually take less than a second, glitches in the network or an unreliable connection may increase the time required. For some IoT applications, such as self-driving cars, even a second may be too long.

Imagine a security camera that's monitoring an empty hallway. There's no need to send hours of large video files of an empty hallway to a cloud server (where you will need to pay to store them). With edge computing, the video could be sent to the cloud only if there is movement detected in the hallway.

Sending less data through a network increases security. Any time you transmit data you're opening yourself up to the possibility of it being stolen or hijacked.

Processing data at the edge of the network can reduce the amount of data that's sent to a cloud server. By storing only the most relevant information on the cloud, it will be easier to locate the information your business needs and to perform analysis on this data.

For example, if a temperature sensor shows a reading of 5 degrees every second, then this information does not need to be sent to the cloud. It only becomes important to transmit this information if the temperature moves outside a preset range.
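A minimal sketch of that idea follows; the function names, the callback, and the 2–8 degree range are invented for illustration rather than drawn from any real device API. The edge node evaluates each reading locally and forwards only the ones that fall outside the configured range.

```python
# Hypothetical edge-side filter: only forward readings that leave the preset range.
NORMAL_RANGE = (2.0, 8.0)  # degrees; illustrative threshold, not from the article

def should_transmit(reading_c: float) -> bool:
    low, high = NORMAL_RANGE
    return not (low <= reading_c <= high)

def on_sensor_reading(reading_c: float, send_to_cloud) -> None:
    if should_transmit(reading_c):
        # Only anomalous readings cross the network to the central cloud server.
        send_to_cloud({"temperature_c": reading_c})
    # In-range readings are handled (or discarded) locally at the edge.

# Example: a steady 5-degree stream generates no cloud traffic; a 12-degree spike does.
on_sensor_reading(5.0, send_to_cloud=print)
on_sensor_reading(12.0, send_to_cloud=print)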

So, which IoT applications will benefit most from lower latency and costs and increased security and efficiency?

Healthcare, manufacturing and energy are all sectors that can benefit hugely from decreased latency and increased security.

Healthcare is a growing IoT sector. According to a report from research and consulting firm Grand View Research, the global healthcare sector will invest nearly $410 billion in IoT devices, software and services in 2022, up from $58.9 billion in 2014.

For IoT devices that process such sensitive information, security and data privacy are paramount. By sending as little information as possible to a central cloud server, patients will retain greater control of their personal data and be less exposed to data breaches.

IoT healthcare devices also require decision making that is as near to instantaneous as possible. If a person's blood glucose or heart rate monitor registers dangerous readings, that information must be acted on immediately.

By utilizing edge computing, these IoT healthcare applications become less dependent on network connectivity. Patients can feel reassured that, if anything is awry, their IoT healthcare application will notify them as soon as possible, no matter where they are or how strong their internet connection is.

Low latency is crucial for industrial IoT, which is one of the reasons why this sector stands to benefit the most from edge computing.

In a factory setting, if a sensor logs a reading as being too hot, then a machine may need to be shut down immediately. By not sending that data for processing in a central cloud server, action can be taken more quickly.

We discussed why edge computing is vital for IIoT in more depth in a previous blog.

By their very nature, energy and environment IoT applications are often deployed in remote locations. Oil rigs, gas pipelines, wind turbines, and hydroelectric dams all stand to benefit from deploying connected solutions, and all tend to be located in remote areas where network connections are not always reliable.

Many energy and environment IoT applications need to be able to respond quickly to changing conditions but are too remote to be able to benefit from 5G. Edge computing will be immensely useful for IoT applications in this sector. Edge computing will also increase efficiency and reduce cloud server storage costs by transferring only the relevant information.

An IoT application on a wind farm that is collecting data on wind speeds or energy generated could process that data at the edge and transfer it to a central cloud server only if it records data outside a predetermined norm.

The healthcare, industrial IoT, energy and environment sectors will benefit from adopting edge computing networks. However, this technology will also be beneficial to the Internet of Things as a whole and we expect to see it being widely adopted in the coming years.

According to Gartner, "around 10 percent of enterprise-generated data is created and processed outside a traditional centralized data center or cloud." By 2025, Gartner predicts, this figure will reach 75 percent.

More and more data processing will move to the edge, particularly within the world of IoT as IoT applications increase in number and collect more data.


Is the cloud really safe? – IT-Online

Optimal cloud security requires a distinct way of thinking about IT infrastructure, writes Ray Pompon, principal threat evangelist at F5 Labs.

Back in the day, the theft and loss of backup tapes and laptops were a primary cause of data breaches.

That all changed when systems were redesigned and data at rest was encrypted on portable devices.

Not only did we use technology to mitigate a predictable human problem, we also increased the tolerance of failure.

A single lapse, such as leaving a laptop in a car, doesn't have to compromise an organisation's data. We need the same level of failure tolerance, with access controls and IT security, in the cloud.

In the cloud, all infrastructure is virtualised and runs as software. Services and servers are not fixed but can shrink, grow, appear, disappear, and transform in the blink of an eye. Cloud services aren't the same as those anchored on-premises. For example, AWS S3 buckets have characteristics of both file shares and web servers, but they are something else entirely.

Practices differ too. You don't patch cloud servers; they are replaced with new software versions. There is also a distinction between the credentials used by an operational instance (like a virtual computer) and those that are accessible by that instance (the services it can call).

Cloud computing requires a distinct way of thinking about IT infrastructure.

A recent study by the Cyentia Institute shows that organisations using four different cloud providers have one-quarter the security exposure rate. Organisations with eight clouds have one-eighth the exposure. Both data points could speak to cloud maturity, operational competence, and the ability to manage complexity. Compare this to "lift and shift" cloud strategies, which result in over-provisioned deployments and expensive exercises in wastefulness.

So how do you determine your optimal cloud defense strategy?

Before choosing your deployment model, it is important to note that there isn't one definitive type of cloud out there.

The National Institute of Standards and Technology's (NIST) definition of cloud computing lists three cloud service models: infrastructure-as-a-service (IaaS), platform-as-a-service (PaaS), and software-as-a-service (SaaS). It also lists four deployment models: private, community, public, and hybrid.

Here's a quick summary of how it all works through a security lens:

* Software-as-a-Service (SaaS) cloud is an application service delivered by the cloud. Most of the infrastructure is managed by the provider. Examples include Office 365, Dropbox, Gmail, Adobe Creative Cloud, Google G Suite, DocuSign, and Shopify. Here, you are only responsible for your logins and data. Primary threats include phishing, credential stuffing, and credential theft. These can be controlled via solutions such as multi-factor authentication, application configuration hardening, and data-at-rest encryption (if available).

* Platform-as-a-Service (PaaS) cloud is a platform on which to build applications before they are delivered by the cloud. The provider manages the platform infrastructure, but you build and run the applications. Examples include AWS S3 buckets, Azure SQL Database, Force.com, OpenShift, and Heroku. You are only responsible for your logins and data. In addition to the SaaS threats (access attacks), there is a need to secure the application itself against web app attacks. In this model, you are likely to have exposed APIs and service interfaces that could leak data if left unsecured. Controls include User/Role Rights Management processes, secure API gateways, Web App Security, Web App Firewalls, bot scrapers, and all the referenced SaaS controls.

* Infrastructure-as-a-Service (IaaS) cloud is a platform to build virtual machines, networks, and other computing infrastructure. The provider manages the infrastructure below the operating system, and you build and run everything from the machine and network up. Examples include AWS EC2, Linode, Rackspace, Microsoft Azure, and Google Compute Engine. You are responsible for the operating systems, networking, and servers, as well as everything in the PaaS and SaaS models. In addition to the threats targeting SaaS and PaaS models, the main security concerns are exploited software vulnerabilities in the OS and infrastructure, as well as network attacks. This calls for hardening of virtualised servers, networks, and services infrastructure. You'll need all the above-mentioned controls, plus strong patching and system hardening, and network security controls.

* On-Premises/Not Cloud is the traditional server in a rack, whether it's in a room in your building or in a colocation (colo) facility. You're responsible for pretty much everything. There are fewer worries about physical security, power, and HVAC, but there are concerns related to network connectivity and reliability, as well as resource management. In addition to threats to networks, physical location, and hardware, you'll have to secure everything else mentioned above.

If you have a hybrid cloud deployment, you'll have to mix and match these threats and defenses. In that case, an additional challenge is to unify your security strategy without having to monitor and configure different controls, in different models and in different environments. Other, specific organisational proficiencies integral to reducing the chances of a cloud breach include:

Technical skills and strategy

* A strong understanding of cloud technology, including its deployment models, advantages, and disadvantages at the IT executive/management level.

* A deep understanding of the operating modes and limitations of associated controls.

* Comprehensive service portfolio management, including tracking environment, applications, deployed platforms, and ongoing IT projects.

* Risk assessments and threat modelling, including understanding possible breach impacts and failure modes for each key service.

Access control processes

* Defined access and identity roles for users, services, servers, and networks.

* Defined processes to correct erroneous, obsolete, duplicate, or excessive user and role permissions.

* Methods for setting and changing access control rules across all data storage elements, services, and applications.

* Automated lockdown of access to all APIs, logins, interfaces, and file transfer nodes as they are provisioned.

* Centralised and standardised management of secrets for encryption and authentication.

Observability

* Defined and monitored single-path-to-production pipeline.

* Inventory of all cloud service objects, data elements, and control rules.

* Configuration drift detection/change control auditing (a minimal drift-check sketch follows this list).

* Detailed logging and anomaly detection.
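As a minimal sketch of what configuration drift detection can look like, the idea is simply to diff a live configuration snapshot against the approved baseline and flag any deviation for review or automated remediation; the keys and values below are invented for illustration, not drawn from any particular tool.

```python
# Minimal drift check: compare a live configuration snapshot against an approved baseline.
def find_drift(baseline: dict, live: dict) -> dict:
    drift = {}
    for key, expected in baseline.items():
        actual = live.get(key)
        if actual != expected:
            drift[key] = {"expected": expected, "actual": actual}
    # Settings present in the live config but absent from the baseline are also worth flagging.
    for key in live.keys() - baseline.keys():
        drift[key] = {"expected": None, "actual": live[key]}
    return drift

baseline = {"tls_min_version": "1.2", "public_access": False, "logging": "enabled"}
live     = {"tls_min_version": "1.0", "public_access": False, "logging": "enabled", "debug": True}

print(find_drift(baseline, live))
# {'tls_min_version': {'expected': '1.2', 'actual': '1.0'}, 'debug': {'expected': None, 'actual': True}}
```

In practice the "live" snapshot would come from the cloud provider's inventory or configuration APIs, and the output would feed the logging, alerting, and remediation processes listed above.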

Adherence to secure standards

* Guardrails to ensure secure standards are chosen by default, including pre-security certified libraries, frameworks, environments and configurations.

* Audit remediation and hybrid cloud governance tools.

* Automated remediation (or deletion) of non-compliant instances and accounts.

* Automated configuration of new instances that includes secure hardening to latest standard.

Strategy and priority decisions should come before technological ones. Don't go to the cloud for the sake of it. A desired goal and a robust accompanying strategy will show the way and illuminate where deeper training and tooling are needed.



When the Cloud Falls to Earth. Is It Time for Your Organization to Consider Cloud Repatriation? – Data Economy

For many of today's applications and workloads, cloud computing offers the enterprise a host of advantages over traditional data centers, including lowered operational and capital expenditures, improved time to market, and the ability to dynamically adjust provisioning to meet changing needs globally. Consequently, there has been a massive shift to cloud migration over the past decade, with cloud computing trends showing significant year-over-year growth since it was first introduced, and Cisco predicting that by 2021 cloud data centers will process 94 percent of all workloads. According to MarketsandMarkets, the global cloud computing market is projected to surge at a compound annual growth rate (CAGR) of 18 percent to reach approximately $623.3 billion by 2023, up from $272 billion in 2018.
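Those figures are internally consistent; a quick back-of-the-envelope check of the quoted CAGR:

```python
# $272B in 2018 compounding at roughly 18% per year for five years (2018 -> 2023)
base_2018 = 272.0                       # USD billions
projected_2023 = base_2018 * 1.18 ** 5
print(round(projected_2023, 1))         # ~622.3, in line with the ~$623.3B projection
```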

Today, however, we are seeing more companies bringing workloads back into their data centers or edge environments after having them run in the cloud for several years, because they didn't originally fully understand their suitability in a cloud environment. 451 Research has referred to this dynamic as "cloud repatriation," and a recent survey found that 20 percent of cloud users had already moved at least one of their workloads from the public cloud to a private cloud, and another 40 percent planned to do so in the near future.

All of this begs a deceptively simple question: How do I know when a workload would be better off running in or outside of the cloud?

When Latency, Availability and Control Are Key

As with any IT decision, an inadequately researched, planned and tested process is likely to cause setbacks for enterprise end-users when the organization at large is faced with uncertainty whether to move an application or workload out of the public cloud and return it to an on-premises data center or edge environment.

Very often, moving an application or workload from the cloud makes good business sense when critical operational benchmarks are not being met. This might mean inconsistent application performance, high network latency due to congestion, or concerns about data security. For example, we know of one Fortune 500 financial services firm that was pursuing an initiative to move its applications and data to the public cloud and only later discovered that its corporate policy prohibited placement of personally identifiable information (PII) and other sensitive data beyond its internal network/firewall. Although many security standards are supported by public cloud providers, because of its internal policy, the financial organization opted to keep its data on-premises.


Some companies, such as Dropbox, have chosen to migrate from the public cloud to benefit their bottom line. While cost is but one criterion for leaving, it is a major one. In the wake of leaving the cloud, Dropbox was able to save nearly $75 million over two years.

Generally speaking, applications that are latency sensitive or have datasets which are large and require transport between various locations for processing are prime candidates for repatriation. Consider smart cities and IoT-enabled systems, which create enormous amounts of data. While cloud computing provides a strong enabling platform for these next-gen technologies because it provides the necessary scale, storage and processing power, edge computing environments will be needed to overcome limitations in latency and the demand for more local processing.

Additionally, if your applications and databases require very high availability or redundancy, they may be best suited to private or hybrid clouds. Repatriation also provides improved control over the applications and enables IT to better plan for potential problems.

Yes, moving to the cloud means a decrease in rack space, power usage and IT requirements, which results in lower installation, hardware, and upgrade costs. Moreover, cloud computing does liberate IT staff from ongoing maintenance and support tasks, freeing them to focus on building the business in more innovative ways. And yet, while many businesses are attracted to the gains associated with public or hybrid cloud models, they often do not fully appreciate the strategy necessary to optimize their performance. Fortunately, there are tools to assist IT teams to better understand how their cloud infrastructure is performing.

Demystifying Cloud Decision-Making

No matter the shape of an organization's cloud (public, private or hybrid), data center management solutions can provide IT staff with greater visibility and real-time insight into power usage, thermal consumption, server health and utilization. Among the key benefits are better operational control, infrastructure optimization and reduced costs.

Before any organization moves its data to the public cloud, the IT staff needs to understand how its systems perform internally. The unique requirements of its applications, including memory, processing power and operating systems, should determine what it provisions in the cloud. Data center management solutions collect and normalize data to help teams understand their current on-premises implementation, empowering them to make more informed decisions as to what is necessary in a new cloud configuration.

Intel Data Center Manager is a software solution that collects and analyzes the real-time health, power, and thermals of a variety of devices in data centers. Providing the clarity needed to improve data center reliability and efficiency, including identifying underlying hardware issues before they impact uptime, these tools bring invaluable insight to increasingly cloudy enterprise IT environments, demystifying the question of on-premises, public and hybrid cloud decision-making.

Here are some factors to consider when making a decision about embarking on a course of cloud repatriation:

If you answered yes to a majority of the questions above, it might be time to consider cloud repatriation.



Germany Healthcare Cloud Computing Market analysis and forecast 2019-2025 edited by leading research firm – WhaTech Technology and Markets News

Germany Healthcare Cloud Computing Market Size, Share & Trends Analysis Report, by Application (Clinical Information Systems and Nonclinical Information Systems) and by Deployment Type (Private Cloud, Public Cloud and Hybrid Cloud), forecast period 2019-2025.

The Germany healthcare cloud computing market is anticipated to grow at a CAGR of around 21.2% during the forecast period. Germany's sophisticated economy is one of the major drivers of advancing digitalization in all areas of personal life and business.

Germany's well-established infrastructure and consumer base are driving demand for cloud services in the country. By one estimate, 26% of German companies currently do not use, or plan to use, cloud services in their operations.


This indicates the huge market potential that Germany's private sector offers for cloud adoption. Germany's cloud market is attractive to international, regional, and domestic cloud service providers.

Further, the German government has encouraged and effectively implemented an EU or German domestic data infrastructure, without the necessity of legislating it, by relying on pressure from German companies and consumers to store information on cloud servers located domestically. In addition, the rising adoption of electronic health records (EHR) in the country is augmenting the growth of the Germany healthcare cloud computing market.

The Germany healthcare cloud computing market is segmented on the basis of application and deployment type. Based on application, the market is segmented into clinical information systems and nonclinical information systems.

Based on deployment type, the market is segmented into private cloud, public cloud and hybrid cloud.

Key players active in the market include CareCloud Corp., Cisco Inc., Deutsche Telekom, Dell Inc., GE Healthcare, IBM Corp., Merge Healthcare Inc., Microsoft Corp., Oracle Corp. and Siemens Healthineers. These players are contributing considerably to market growth through various strategies, including new product launches, mergers and acquisitions, collaborations with government, funding for start-ups and technological advancements, to stay competitive in the market.






Reasons to consider hyperconverged infrastructure in the data centre – Small Business


By 2023, 70% of enterprises will be running some form of hyperconverged infrastructure



Demand for on-premises data centre equipment is shrinking as organisations move workloads to the cloud. But on-prem is far from dead, and one segment that is thriving is hyperconverged infrastructure (HCI).

HCI is a form of scale-out, software-integrated infrastructure that applies a modular approach to compute, network and storage capacity. Rather than silos with specialised hardware, HCI leverages distributed, horizontal blocks of commodity hardware and delivers a single-pane dashboard for reporting and management. Form factors vary: Enterprises can choose to deploy hardware-agnostic hyperconvergence software from vendors such as Nutanix and VMware, or an integrated HCI appliance from vendors such as HP Enterprise, Dell, Cisco, and Lenovo.

The market is growing fast. By 2023, Gartner projects 70% of enterprises will be running some form of hyperconverged infrastructure, up from less than 30% in 2019. And as HCI grows in popularity, cloud providers such as Amazon, Google and Microsoft are providing connections to on-prem HCI products for hybrid deployment and management.

So why is it so popular? Here are some of the top reasons.

A traditional data centre design comprises separate storage silos with individual tiers of servers and specialised networking spanning the compute and storage silos. This worked in the pre-cloud era, but it is too rigid for the cloud era. "It's untenable for IT teams to take weeks or months to provision new infrastructure so the dev team can produce new apps and get to market quickly," said Greg Smith, vice president of product marketing at Nutanix.

"HCI radically simplifies data centre architectures and operations, reducing the time and expense of managing data and delivering apps," he said.

HCI software, such as that from Nutanix or VMware, is deployed the same way in both a customer's data centre and cloud instances; it runs on bare metal instances in the cloud exactly the same as it does in a data centre. "HCI is the best foundation for companies that want to build a hybrid cloud. They can deploy apps in their data centre and meld it with a public cloud," Smith said.

"Because it's the same on both ends, I can have one team manage an end-to-end hybrid cloud, with confidence that whatever apps run in my private cloud will also run in that public cloud environment," he added.

"HCI allows you to consolidate compute, network, and storage into one box, and grow this solution quickly and easily without a lot of downtime," said Tom Lockhart, IT systems manager with Hastings Prince Edward Public Health in Belleville, Ontario, Canada.

In a legacy approach, multiple pieces of hardware (a server, Fibre Channel switch, host-based adapters, and a hypervisor) have to be installed and configured separately. With hyperconvergence, everything is software-defined. HCI uses the storage in the server, and the software almost entirely auto-configures and detects the hardware, setting up the connections between compute, storage, and networking.

"Once we get in on a workload, [customers] typically have a pretty good experience. A few months later, they try another workload, then another, and they start to extend it out of their data centre to remote sites," said Chad Dunn, vice president of product management for HCI at Dell.

"They can start small and grow incrementally larger but also have a consistent operating model experience, whether they have 1,000 nodes or three nodes per site across 1,000 sites, whether they have 40 terabytes of data or 40 petabytes. They have consistent software updates where they don't have to retrain their people because it's the same toolset," Dunn added.

By starting small, customers find they can reduce their hardware stack to just what they need, rather than overprovision excessive capacity. Moving away from the siloed approach also allows users to eliminate certain hardware.

Josh Goodall, automation engineer with steel fabricator USS-POSCO Industries, said his firm deployed HCI primarily for its ability to do stretched clusters, where the hardware cluster is in two physical locations but linked together. This is primarily for use as a backup, so if one site went down, the other can take over the workload. In the process, though, USS-POSCO got rid of a lot of expensive hardware and software. "We eliminated several CPU [software] licenses, we eliminated the SAN from the other site, we didn't need SRM [site recovery management] software, and we didn't need Commvault licensing. We saved between $25,000 and $30,000 on annual license renewals," Goodall said.

To run a traditional three-tiered environment, companies need specialists in compute, storage, and networking. With HCI, a company can manage its environment with general technology consultants and staff rather than the more expensive specialists.

"HCI has empowered the storage generalist," Smith said. "You don't have to hire a storage expert, a network expert. Everyone has to have infrastructure, but they made the actual maintenance of infrastructure a lot easier than under a typical scenario, where a deep level of expertise is needed to manage under those three skill sets."

Lockhart of Hastings Prince Edward Public Health said adding new compute/storage/networking is also much faster when compared with traditional infrastructure. "An upgrade to our server cluster was 20 minutes with no downtime, versus hours of downtime with an interruption in service using the traditional method," he said.

Instead of concentrating on infrastructure, you can expand the amount of time and resources you spend on workloads, which adds value to your business. "When you don't have to worry about infrastructure, you can spend more time on things that add value to your clients," Lockhart adds.

Key elements of hyperconvergence products are their backup, recovery, data protection, and data deduplication capabilities, plus analytics to examine it all. Disaster recovery components are managed from a single dashboard, and HCI monitors not only the on-premises storage but also cloud storage resources. With deduplication, compression rates can be as high as 55:1, and backups can be done in minutes.

USS-POSCO Industries is an HP Enterprise shop and uses HPE's SimpliVity HCI software, which includes dedupe, backup, and recovery. Goodall said he gets about 12-15:1 compression on mixed workloads, and that has eliminated the need for third-party backup software.

More importantly, recovery timeframes have dropped. In the best recent example, a Windows update messed up a manufacturing line, and the error wasn't realised for a few weeks. "In about 30 minutes, I rolled through four weeks of backups, updated the system, rebooted and tested a 350GB system. Restoring just one backup would have been a multi-hour process," Goodall said.

HCI products come with a considerable amount of analytics software to monitor workloads and find resource constraints. The monitoring software is consolidated into a single dashboard view of system performance, including negatively impacted performance.

Hastings recently had a problem with a Windows 7 migration, but the HCI model made it easy to get performance info. "It showed that workloads, depending on time of day, were running out of memory, and there was excessive CPU queuing and paging," Lockhart said. "We had the entire [issue] written up in an hour. It was easy to determine where problems lie. It can take a lot longer without that single-pane-of-glass view."

Goodall said he used to spend up to 50% of his time dealing with storage issues and backup matrices. Now he spends maybe 20% of his time dealing with it and most of his time tackling and addressing legacy systems. And his apps perform better under HCI. "We've had no issues with our SQL databases; if anything, we've seen huge performance gain due to the move to full SSDs [instead of hard disks] and the data dedupe, reducing reads and writes in the environment."

IDG News Service

