Category Archives: Cloud Computing

Powering the next generation of trustworthy AI in a confidential cloud using NVIDIA GPUs – Microsoft

Cloud computing is powering a new age of data and AI by democratizing access to scalable compute, storage, and networking infrastructure and services. Thanks to the cloud, organizations can now collect data at an unprecedented scale and use it to train complex models and generate insights.

While this increasing demand for data has unlocked new possibilities, it also raises concerns about privacy and security, especially in regulated industries such as government, finance, and healthcare. One area where data privacy is crucial is patient records, which are used to train models to aid clinicians in diagnosis. Another example is in banking, where models that evaluate borrower creditworthiness are built from increasingly rich datasets, such as bank statements, tax returns, and even social media profiles. This data contains very personal information, and to ensure that it's kept private, governments and regulatory bodies are implementing strong privacy laws and regulations to govern the use and sharing of data for AI, such as the General Data Protection Regulation (GDPR) and the proposed EU AI Act. You can learn more about some of the industries where it's imperative to protect sensitive data in this Microsoft Azure Blog post.

Microsoft recognizes that trustworthy AI requires a trustworthy cloud, one in which security, privacy, and transparency are built into its core. A key component of this vision is confidential computing, a set of hardware and software capabilities that give data owners technical and verifiable control over how their data is shared and used. Confidential computing relies on a new hardware abstraction called trusted execution environments (TEEs). In TEEs, data remains encrypted not just at rest or during transit, but also during use. TEEs also support remote attestation, which enables data owners to remotely verify the configuration of the hardware and firmware supporting a TEE and grant specific algorithms access to their data.

At Microsoft, we are committed to providing a confidential cloud, where confidential computing is the default for all cloud services. Today, Azure offers a rich confidential computing platform comprising different kinds of confidential computing hardware (Intel SGX, AMD SEV-SNP), core confidential computing services like Azure Attestation and Azure Key Vault managed HSM, and application-level services such as Azure SQL Always Encrypted, Azure confidential ledger, and confidential containers on Azure. However, these offerings are limited to using CPUs. This poses a challenge for AI workloads, which rely heavily on AI accelerators like GPUs to provide the performance needed to process large amounts of data and train complex models.

The Confidential Computing group at Microsoft Research identified this problem and defined a vision for confidential AI powered by confidential GPUs, proposed in two papers, Oblivious Multi-Party Machine Learning on Trusted Processors and Graviton: Trusted Execution Environments on GPUs. In this post, we share this vision. We also take a deep dive into the NVIDIA GPU technology that's helping us realize this vision, and we discuss the collaboration among NVIDIA, Microsoft Research, and Azure that enabled NVIDIA GPUs to become a part of the Azure confidential computing ecosystem.

Today, CPUs from companies like Intel and AMD allow the creation of TEEs, which can isolate a process or an entire guest virtual machine (VM), effectively eliminating the host operating system and the hypervisor from the trust boundary. Our vision is to extend this trust boundary to GPUs, allowing code running in the CPU TEE to securely offload computation and data to GPUs.

Unfortunately, extending the trust boundary is not straightforward. On the one hand, we must protect against a variety of attacks, such as man-in-the-middle attacks, where the attacker can observe or tamper with traffic on the PCIe bus or on an NVIDIA NVLink connecting multiple GPUs, as well as impersonation attacks, where the host assigns the guest VM an incorrectly configured GPU, a GPU running outdated or malicious firmware, or one without confidential computing support. At the same time, we must ensure that the Azure host operating system has enough control over the GPU to perform administrative tasks. Furthermore, the added protection must not introduce large performance overheads, increase thermal design power, or require significant changes to the GPU microarchitecture.

Our research shows that this vision can be realized by extending the GPU with the following capabilities:

NVIDIA and Azure have taken a significant step toward realizing this vision with a new feature called Ampere Protected Memory (APM) in the NVIDIA A100 Tensor Core GPUs. In this section, we describe how APM supports confidential computing within the A100 GPU to achieve end-to-end data confidentiality.

APM introduces a new confidential mode of execution in the A100 GPU. When the GPU is initialized in this mode, the GPU designates a region in high-bandwidth memory (HBM) as protected and helps prevent leaks through memory-mapped I/O (MMIO) access into this region from the host and peer GPUs. Only authenticated and encrypted traffic is permitted to and from the region.

In confidential mode, the GPU can be paired with any external entity, such as a TEE on the host CPU. To enable this pairing, the GPU includes a hardware root-of-trust (HRoT). NVIDIA provisions the HRoT with a unique identity and a corresponding certificate created during manufacturing. The HRoT also implements authenticated and measured boot by measuring the firmware of the GPU as well as that of other microcontrollers on the GPU, including a security microcontroller called SEC2. SEC2, in turn, can generate attestation reports that include these measurements and that are signed by a fresh attestation key, which is endorsed by the unique device key. These reports can be used by any external entity to verify that the GPU is in confidential mode and running last known good firmware.
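
To make the verification flow concrete, here is a minimal sketch of how a verifier might consume such a report; the report fields, measurement names, and known-good hashes are hypothetical placeholders rather than NVIDIA's actual report format, and signature validation is assumed to have been performed separately.

```python
import hmac
from dataclasses import dataclass

@dataclass
class AttestationReport:
    """Hypothetical report layout; the real GPU attestation report format differs."""
    confidential_mode: bool
    measurements: dict[str, bytes]  # firmware component name -> measurement hash

# Known-good firmware hashes the verifier trusts (illustrative placeholder values).
KNOWN_GOOD = {
    "gpu_firmware": bytes.fromhex("aa" * 32),
    "sec2_firmware": bytes.fromhex("bb" * 32),
}

def verify_report(report: AttestationReport, signature_valid: bool) -> bool:
    """Accept the GPU only if the report's signature checked out (done elsewhere),
    confidential mode is enabled, and every measured component matches a known-good hash."""
    if not signature_valid or not report.confidential_mode:
        return False
    for component, expected in KNOWN_GOOD.items():
        actual = report.measurements.get(component)
        # Constant-time comparison avoids leaking how much of a hash matched.
        if actual is None or not hmac.compare_digest(actual, expected):
            return False
    return True
```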

When the NVIDIA GPU driver in the CPU TEE loads, it checks whether the GPU is in confidential mode. If so, the driver requests an attestation report and checks that the GPU is a genuine NVIDIA GPU running known good firmware. Once confirmed, the driver establishes a secure channel with the SEC2 microcontroller on the GPU using the Security Protocol and Data Model (SPDM)-backed Diffie-Hellman-based key exchange protocol to establish a fresh session key. When that exchange completes, both the GPU driver and SEC2 hold the same symmetric session key.
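
The sketch below shows the general shape of such an exchange, an ephemeral Diffie-Hellman agreement followed by key derivation, using the Python cryptography package. It illustrates the concept only, not the actual SPDM message flow or NVIDIA's implementation.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Each side generates an ephemeral key pair (both shown here for illustration).
driver_private = X25519PrivateKey.generate()
sec2_private = X25519PrivateKey.generate()

# Public keys are exchanged over the attested channel.
driver_public = driver_private.public_key()
sec2_public = sec2_private.public_key()

# Both sides compute the same shared secret from their own private key and the peer's public key.
driver_shared = driver_private.exchange(sec2_public)
sec2_shared = sec2_private.exchange(driver_public)
assert driver_shared == sec2_shared

# Derive a fresh symmetric session key from the shared secret.
session_key = HKDF(
    algorithm=hashes.SHA256(),
    length=32,
    salt=None,
    info=b"illustrative session key derivation",  # placeholder context label
).derive(driver_shared)
```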

The GPU driver uses the shared session key to encrypt all subsequent data transfers to and from the GPU. Because pages allocated to the CPU TEE are encrypted in memory and not readable by the GPU DMA engines, the GPU driver allocates pages outside the CPU TEE and writes encrypted data to those pages. On the GPU side, the SEC2 microcontroller is responsible for decrypting the encrypted data transferred from the CPU and copying it to the protected region. Once the data is in high bandwidth memory (HBM) in cleartext, the GPU kernels can freely use it for computation.
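
As an analogy for this bounce-buffer pattern, the sketch below encrypts a payload with an authenticated cipher keyed by the session key before it is placed in GPU-readable memory, and decrypts it on the other side. The real driver and SEC2 use their own encryption scheme and hardware DMA paths, so treat the function names and structure here as illustrative assumptions.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_for_gpu(session_key: bytes, plaintext: bytes) -> tuple[bytes, bytes]:
    """Encrypt a buffer before copying it into pages the GPU DMA engines can read."""
    nonce = os.urandom(12)  # unique nonce per transfer
    ciphertext = AESGCM(session_key).encrypt(nonce, plaintext, None)
    return nonce, ciphertext

def decrypt_on_gpu(session_key: bytes, nonce: bytes, ciphertext: bytes) -> bytes:
    """Conceptually what SEC2 does before placing cleartext into protected HBM."""
    return AESGCM(session_key).decrypt(nonce, ciphertext, None)

# Example round trip with a random 256-bit session key.
key = AESGCM.generate_key(bit_length=256)
n, ct = encrypt_for_gpu(key, b"model weights and training batch")
assert decrypt_on_gpu(key, n, ct) == b"model weights and training batch"
```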

The implementation of APM is an important milestone toward achieving broader adoption of confidential AI in the cloud and beyond. APM is the foundational building block of Azure Confidential GPU VMs, now in private preview. These VMs, designed collaboratively by NVIDIA, Azure, and Microsoft Research, feature up to four A100 GPUs with 80 GB of HBM and APM technology, and they enable users to host AI workloads on Azure with a new level of security.

But this is just the beginning. We look forward to taking our collaboration with NVIDIA to the next level with NVIDIA's Hopper architecture, which will enable customers to protect both the confidentiality and integrity of data and AI models in use. We believe that confidential GPUs can enable a confidential AI platform where multiple organizations can collaborate to train and deploy AI models by pooling together sensitive datasets while remaining in full control of their data and models. Such a platform can unlock the value of large amounts of data while preserving data privacy, giving organizations the opportunity to drive innovation.

A real-world example involves Bosch Research, the research and advanced engineering division of Bosch, which is developing an AI pipeline to train models for autonomous driving. Much of the data it uses includes personal identifiable information (PII), such as license plate numbers and people's faces. At the same time, it must comply with GDPR, which requires a legal basis for processing PII, namely, consent from data subjects or legitimate interest. The former is challenging because it is practically impossible to get consent from pedestrians and drivers recorded by test cars. Relying on legitimate interest is challenging too because, among other things, it requires showing that there is no less privacy-intrusive way of achieving the same result. This is where confidential AI shines: Using confidential computing can help reduce risks for data subjects and data controllers by limiting exposure of data (for example, to specific algorithms), while enabling organizations to train more accurate models.

At Microsoft Research, we are committed to working with the confidential computing ecosystem, including collaborators like NVIDIA and Bosch Research, to further strengthen security, enable seamless training and deployment of confidential AI models, and help power the next generation of technology.

The Confidential Computing team at Microsoft Research Cambridge conducts pioneering research in system design that aims to guarantee strong security and privacy properties to cloud users. We tackle problems around secure hardware design, cryptographic and security protocols, side channel resilience, and memory safety. We are also interested in new technologies and applications that security and privacy can uncover, such as blockchains and multiparty machine learning. Please visit our careers page to learn about opportunities for both researchers and engineers. We're hiring.


The Data Center Industry Begins to Feel the Supply Chain Pinch – Data Center Frontier

Data center operators have employed a variety of strategies to navigate supply chain pressures and line up equipment for construction projects. (Image: Shutterstock)

Supply chain disruptions are tough on everyone. But the digital infrastructure sector faces a particular challenge, as it must manage the supply chain crisis during a period of dramatic growth amid a pandemic-driven shift to digital service delivery and distributed computing. Pre-ordering and inventory management kept the data center industry on schedule in 2020 and 2021, but can this continue as supply chain disruptions persist?

Our panelists include Sean Farney of Kohler Power Systems, Michael Goh from Iron Mountain Data Centers, DartPoints' Brad Alexander, Amber Caramella of Netrality Data Centers and Infrastructure Masons, and Peter Panfil of Vertiv. The conversation is moderated by Rich Miller, the founder and editor of Data Center Frontier. Here's today's discussion:

Data Center Frontier: How would you assess the state of the data center supply chain? Are the global supply chain challenges impacting the delivery of data center capacity?

Brad Alexander, DartPoints: The data center supply chain is extremely strained currently. Transportation bottlenecks, massive labor and material shortages, and the increasing cost of critical components are causing roadblocks in both new construction and expansion. For example, we are currently going through an expansion project in one of our markets. The project is 14 percent over budget due to increased labor and component costs and is five months delayed because of generator shortages. Smaller components such as servers and storage are being delayed an additional eight weeks, and we have seen delays as long as seven months for various pieces of core networking equipment. Nearly every hardware vendor has increased costs by 7-12 percent since the beginning of 2022.

I feel the industry has built considerable capacity in the cloud and major data center markets over the last three years, and a critical capacity constraint has not yet fully been felt. However, customers may soon have to rely on more regions or multiple providers to meet capacity requirements. Smaller markets where the peak capacity was only a fraction of the larger markets are feeling the delays in expansion as these edge markets are gaining more and more traction.

Amber Caramella, Infrastructure Masons and Netrality: The supply chain continues to struggle, considering extensive global shortages of labor, materials, and equipment resulting in longer lead times. Shortages and impact vary across regions and types of equipment.

Broadly, I have seen supply chain shortages and shipping delays affect delivery time frames with new data center builds and bringing new capacity online.

Capacity planning, and creating and executing strategies around it, is critical. Key strategies include ordering equipment ahead of time, not only when it is needed, and storing extra inventory. Expand your supplier list to provide optionality for the availability and delivery of equipment. Establishing a capacity-planning roadmap relative to your data center needs is paramount.

Sean Farney, Kohler Power Systems: Supply chains everywhere are feeling pressure right now. Everything from basic materials like lumber, steel, and copper to microchips is in short supply, with delivery times doubling in the last year. Some equipment providers are even selling manufacturing production slots on their assembly lines!

In the meantime, data centers have been facing a tidal wave of demand. This has impacted the priority of what we can do today. Companies need to closely evaluate their processes to ensure they are operating existing facilities in the most efficient way possible. For example, sustainability is a priority to all of us in the data center space; we all have goals to be net-zero within a few years.

With the constraints we're facing, many are having to double down on efforts to make efficiency upgrades and revisit operating procedures. Even in challenging circumstances, data center operators are finding ways to meet capacity demands, bring power usage down, and turn it all into a model that they can effectively sell.

Peter Panfil, Vertiv: The pandemic has created disruptions in the supply chain, as has the way data centers responded to it. Many are taking a bounce forward approach in which they are conducting major modernization and upgrades so they can come out of this period stronger and more resilient than they went into it. That has put further pressure on the supply chain.

Vendors across the value chain are working with their customers to get ahead of supply chain issues through proactive communication, longer term project planning, and enhanced maintenance practices that extend the lifecycle of existing equipment.

Michael Goh, Iron Mountain Data Centers: Supply chain challenges are visible in the data center industry. This is the case in other industries as well because of the shortages that occurred during COVID. We are coping well, but we see new capacity coming online at a slower pace.

In a fast growing and high-demand industry such as the data center industry, new capacity lead times are taking longer than before COVID. We also see this with any sort of equipment that needs a semiconductor to function.




Inside the Army’s distributed mission command experiments in, and over, the Pacific – Breaking Defense

U.S. Army Soldiers assigned to America's First Corps maneuver a Stryker combat vehicle off United States Naval Ship City of Bismarck while conducting roll on-roll off training at Naval Base Guam, Feb. 9, 2022. (U.S. Army/Jailene Bautista)

WASHINGTON: The US Army's I Corps is testing a new distributed mission command concept that could fundamentally change how the Army Corps functions across the vast distances of the Indo-Pacific.

Instead of the I Corps maintaining a single headquarters with several hundred staff shuffling around, the service is looking at breaking down the traditional headquarters infrastructure into functional nodes that would be distributed across the region but can remain in constant communication, Brig. Gen. Patrick Ellis, I Corps chief of staff, told Breaking Defense in a recent interview.

"If we're running the Corps and performing our command and control functions from six, three, five, six locations as opposed to just one, we're much more resilient, and honestly much more survivable, in the event that we're ever targeted," said Ellis. "The existing Corps structure, the kind of the doctrinal, the way the Corps are built, didn't necessarily make the most sense for us."

The Corps and Army formations from large to small want to move away from static command posts or headquarters marked by tents, trucks and generators, and fight in a distributed manner, meaning that a Corps can coordinate the battlefield using assets that are hundreds, if not thousands, of miles apart.

In a recent experiment in Guam using four Strykers loaded with advanced communication capabilities, the I Corps worked to prove that it can pass important battlefield data, including fires and targeting information, between platforms, even while they are in transit in the air or on a boat. For example, Ellis was able to send mission command information from a Stryker, in mid-air transit aboard a C-17 headed to Guam, back to Joint Base Lewis-McChord using the airplane's antenna. Additionally, the Stryker element succeeded in sending information while plugged into the network of the Navy's USNS City of Bismarck, a military sealift ship, though it remained portside.

U.S. Army Soldiers assigned to America's First Corps and service members assigned to Joint Communications Support Element establish communications onboard the U.S. Naval Ship City of Bismarck, Feb. 16, 2022, as part of a joint training operation to experiment and exercise distributed mission command in the Indo-Pacific. (Army/Jailene Bautista)

"Instead of losing situational awareness, like we do now either in flight or in transit when your stuff's on a vessel, we figured out that if we could use the transport that's on these vessels and perform mission command functions and stay situationally aware as we're transiting from one location to another," he said.


It's worth reiterating what happened here: from the back of the C-17, the Army conducted a video teleconference. While that may not sound impressive given the last two years of remote work, it's an important feat given the high bandwidth requirements to support live video, particularly on a military aircraft in the air. Ellis said that's not a function the Corps would always use, but it proved "that we could move large amounts of data."

Second, and perhaps a more germane function for the Corps, is that it was able to pass targeting data from the Army's Field Artillery Tactical Data System from the aircraft to the ground.

"It's a useful capability for us because it allows us to provide updates in-flight from mission commander back down to launchers," Ellis said. "Or in the event that we flipped it and we put some of our HIMARS launchers aboard the aircraft, we can actually update their targeting data from the ground while they're en route."

The ability to update targeting data while flying for several hours would prevent targeting data from going stale, he added.

The Corps experiment will also help feed into Joint All-Domain Command and Control, the Pentagon's future warfighting construct in which sensors and shooters across the battlespace are connected to provide up-to-date information.

"The opportunity to operate with the Joint Forces is key in this organization," said Col. Elizabeth Casely, I Corps G6, or the network manager. "The ability to sense from a different service component and fire from a different service component is predominant for JADC2."

Enabling technologies

Central to the whole concept is cloud computing at the edge. Using the cloud, soldiers at the different nodes can access the data they need to accomplish their mission and operate more efficiently. However, soldiers will need to bring forward some network hardware to have immediate access to data in an environment where they are disconnected from the hyper-scale cloud hub thousands of miles away.

"So instead of having to bring a separate computer to do fires, a separate computer to do common operational picture and a separate computer to do logistics, we could actually access all those systems through one standard laptop," Ellis said. "You bring your little slice of the cloud forward with you when you come forward, so in the event that you are disconnected from the hyper-scale, you can continue to perform your mission command functions."

But the challenge that the Corps has to consider is how much data and what types of mission command data are vital for soldiers to bring forward with them, versus how much can remain in the main cloud hub. The I Corps is working through what information exchanges absolutely have to occur, Casely said.

For example, Ellis said its Corps-level fires personnel are being asked questions about how much data they need to do their job: do they need all of their targeting data or just imagery, and how long before a mission? That's a question similar to one from Project Convergence, where the Army grappled with what data has to be sent, and in what format, while not eating up the limited bandwidth in a conflict zone.

"It really, really challenges not just the folks in the G6, but the entire staff and the entire whole staff process, to think about what information needs to be exchanged in order for them to perform their mission," Casely said.

Another challenge is deconflicting updates in the cloud system, Ellis said. If one group of soldiers comes out of the disconnected environment and updates the broader cloud while someone else is updating the same type of information in the larger cloud, how do the soldiers sort out whose data is the most relevant?
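
One simple policy for reconciling such updates is last-writer-wins on a per-record timestamp; the toy sketch below illustrates both the mechanism and its trade-off (concurrent edits silently lose data). It is purely a hypothetical illustration, not how the Army's cloud actually resolves conflicts.

```python
from dataclasses import dataclass

@dataclass
class Record:
    key: str
    value: str
    updated_at: float  # e.g., seconds since epoch at the node that wrote it

def merge(cloud: dict[str, Record], incoming: list[Record]) -> dict[str, Record]:
    """Last-writer-wins merge of records synced from a previously disconnected node.
    Concurrent edits to the same key keep only the newest write, so a smarter
    policy (version vectors, per-field merge, human review) may be needed."""
    for rec in incoming:
        existing = cloud.get(rec.key)
        if existing is None or rec.updated_at > existing.updated_at:
            cloud[rec.key] = rec
    return cloud
```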

"We're in the very nascent stages of determining how some of the cloud computing works to support this manner of fighting," Casely said. "But I'm feeling really, really good about where we're headed."


Financial Sector and Cloud Security Providers Complete Initiative To Enhance Cybersecurity – Business Wire

WASHINGTON--(BUSINESS WIRE)--The Cyber Risk Institute (CRI), the Cloud Security Alliance (CSA), and the Bank Policy Institute-BITS announced today the release of a cloud extension for the CRI Profile version 1.2. The Cloud Profile represents the collaboration of over 50 financial institutions and major cloud service providers (CSPs) to extend the CRI Profile, which is a widely accepted cybersecurity compliance framework for the financial sector.

"Today's release marks an historic achievement," said CRI President Joshua Magri. "This is the first time that financial institutions, the major CSPs, and trade associations have come together to develop a set of baseline expectations related to cybersecurity and roles and responsibilities for cloud deployment. We are exceedingly proud of the work done here and what it may mean for future cloud usage in the financial services sector. We are pleased to be part of a collaborative solution to a longstanding challenge."

As more financial institutions move to the cloud, financial regulators globally have become increasingly focused on ensuring firms use sound risk management practices during cloud implementation. The Cloud Profile provides guidance to financial institutions and CSPs on commonly understood responsibilities related to cloud deployment across software-as-a-service, platform-as-a-service, and infrastructure-as-a-service delivery models.

"Financial regulators need clear, consistent, and timely information on firms' relationships with their third parties. The Cloud Profile helps clarify where a firm's responsibilities end and a cloud service provider's responsibilities begin," said Chris Feeney, Executive Vice President of BPI and President of BPI-BITS. "A common understanding of cybersecurity controls for cloud implementation that has been developed, vetted, and accepted by firms and CSPs is a sound approach in ensuring our financial sector is more secure."

This guidance is designed to enable financial institutions and CSPs to come to contractual understanding more easily and should also facilitate more streamlined and secure processes for deploying cloud services.

"We are very happy to work with a like-minded organization such as CRI, and we are excited about these initial results. The Cloud Profile extension brings together the CRI Profile with the security controls and security shared responsibility model of the CSA Cloud Controls Matrix v4.0. This represents a very powerful tool to support financial institutions in building a cloud security governance and compliance program that can meet their strict sectorial requirements," said Daniele Catteddu, Chief Technology Officer, Cloud Security Alliance.

CRI, CSA, and BPI will continue working on ways to leverage this joint framework and look forward to greater collaboration.

About Cyber Risk Institute.

The Cyber Risk Institute (CRI) is a not-for-profit coalition of financial institutions and trade associations. We're working to protect the global economy by enhancing cybersecurity and resiliency through standardization. https://cyberriskinstitute.org/ *The CRI Profile is the successor to the Financial Services Sector Coordinating Council (FSSCC) Cybersecurity Profile, a NIST and IOSCO based approach to assessing cybersecurity in the financial services industry.

About Bank Policy Institute.

The Bank Policy Institute (BPI) is a nonpartisan public policy, research and advocacy group, representing the nation's leading banks and their customers. Our members include universal banks, regional banks and the major foreign banks doing business in the United States. Collectively, they employ almost 2 million Americans, make nearly half of the nation's small business loans, and are an engine for financial innovation and economic growth.

About Cloud Security Alliance.

The Cloud Security Alliance (CSA) is the world's leading organization dedicated to defining and raising awareness of best practices to help ensure a secure cloud computing environment. CSA harnesses the subject matter expertise of industry practitioners, associations, governments, and its corporate and individual members to offer cloud security-specific research, education, training, certification, events, and products. CSA's activities, knowledge, and extensive network benefit the entire community impacted by cloud, from providers and customers to governments, entrepreneurs, and the assurance industry, and provide a forum through which different parties can work together to create and maintain a trusted cloud ecosystem. For further information, visit us at http://www.cloudsecurityalliance.org, and follow us on Twitter @cloudsa.


How to Prepare Your IT team for Cloud Computing Developments? – Analytics Insight

For many organizations, adopting cloud computing is as much about fostering cultural change as it is about innovation.

Businesses are enthusiastically moving to cloud computing, adopting the latest technology in cloud computing, and implementing cloud environments in their infrastructure. However, a successful transition to cloud computing requires employees to be prepared for the shift and get familiar with the cloud solution they plan to use. Preparing employees for cloud computing is an important part of preparing an organization for cloud integration.

We've compiled four crucial steps to prepare your IT team for cloud computing developments:

You can't keep your team in the dark about moving to the latest technology in cloud computing until it's time to move. Teams need to know that their business is moving to cloud computing and what that means for their business. You need to present a cloud deployment business case, including what cloud computing does and why the business is migrating, to educate people about why the migration is occurring. This also gives the company time to respond to employee questions and concerns about the cloud solution it plans to use.

Depending on the nature of the cloud deployment, different teams within the company will use cloud solutions. The company needs to identify the employees who will use a particular solution and the tasks they will perform with it. This will help you understand who needs training on your cloud solution and the specific training they need to receive to function effectively.

As with any new technology, employees need to learn specific skills to take advantage of the cloud computing solutions they are trying to integrate. The skills required by employees can be cloud-specific, such as knowledge of cloud security, cloud migration, and the public cloud platforms of choice. However, skills such as artificial intelligence, machine learning, serverless architecture, and DevOps are also valuable to employees.

The best way to train an employee to work in cloud computing is to have them take an external training course. These courses can be created for a particular cloud solution or provide an overview of how cloud computing works. When an employee passes the course, it indicates that they are ready to work in the latest technology in cloud computing. Most cloud providers offer their own solution certifications, but there are also some third-party cloud certifications.




With The Rapid Adoption Of Cloud Computing, The Software Consulting Market Grows At A Rate Of 12% As Per The Business Research Company’s Software…

LONDON, March 16, 2022 (GLOBE NEWSWIRE) -- According to The Business Research Company's research report on the software consulting market, the increasing adoption of cloud computing by enterprises is the key driver contributing to the growth of the software consulting market. Enterprises are increasingly adopting the latest technologies such as cloud computing to increase productivity and efficiency. As of 2020, the cloud services industry was valued at about $200 billion, and most organizations are expected to increase their cloud spending budget by about 50%. Hence, the growing adoption of cloud computing is positively impacting the software consulting market scope.


The global software consulting market size is expected to grow from $209.8 billion in 2021 to $234.38 billion in 2022 at a compound annual growth rate (CAGR) of 12.18%. The growth in the market is mainly due to the companies' rearranging their operations and recovering from the COVID-19 impact, which had earlier led to restrictive containment measures involving social distancing, remote working, and the closure of commercial activities that resulted in operational challenges. The market is expected to reach $375.17 billion in 2026 at a CAGR of 12.46%.
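
For readers who want to check figures like these, compound annual growth rate is conventionally computed as (end value / start value)^(1 / years) - 1; the short snippet below applies that formula to the report's 2022 and 2026 figures.

```python
def cagr(start_value: float, end_value: float, years: float) -> float:
    """Compound annual growth rate between two values over a number of years."""
    return (end_value / start_value) ** (1 / years) - 1

# Report figures: $234.38 billion in 2022 growing to $375.17 billion in 2026 (4 years).
print(f"{cagr(234.38, 375.17, 4):.2%}")  # ~12.5%, in line with the stated 12.46% CAGR
```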

Remote consulting services are gaining popularity amongst the software consulting market trends. Software consulting companies are increasingly offering consulting services remotely in response to the COVID-19 pandemic and to improve their efficiencies. Virtual consulting is likely to become widespread going forward as well. Major IT consulting companies such as IBM, Oracle, and Accenture are offering virtual and remote consulting services.

Major players in the software consulting market are Cap Gemini, Atos SE, Oracle, Accenture, IBM Corporation, Deloitte Touche Tohmatsu Limited, CGI Group Inc., Cognizant, Ernst & Young Global Limited, and SAP SE.

The global software consulting market is segmented by type into enterprise solutions, application development, migration and maintenance services, design services; by enterprise size into large enterprise, small and medium enterprise; by end-use industry into automotive, education, government, healthcare, IT and telecom, manufacturing.

In 2021, North America was the largest region in the software consulting market. Asia-Pacific is expected to be the fastest-growing region in the global software consulting market during the forecast period. The regions covered in this report are Asia-Pacific, Western Europe, Eastern Europe, North America, South America, the Middle East, and Africa.

Software Consulting Global Market Report 2022: Market Size, Trends, And Global Forecast 2022-2026 is one of a series of new reports from The Business Research Company that provide software consulting market overviews, analyze and forecast market size and growth for the whole market and for software consulting market segments and geographies, and cover software consulting market trends, drivers, restraints, and leading competitors' revenues, profiles, and market shares, across over 1,000 industry reports covering more than 2,500 market segments and 60 geographies.

The report also gives in-depth analysis of the impact of COVID-19 on the market. The reports draw on 150,000 datasets, extensive secondary research, and exclusive insights from interviews with industry leaders. A highly experienced and expert team of analysts and modelers provides market analysis and forecasts. The reports identify top countries and segments for opportunities and strategies based on market trends and leading competitors' approaches.

Not the market you are looking for? Check out some similar market intelligence reports:

Software As A Service (SaaS) Global Market Report 2022 By Application (Customer Relationship Management (CRM), Enterprise Resource Planning (ERP), Human Resource Management (HRM), Manufacturing And Operations, Supply Chain Management (SCM)), By Deployment Model (Public Cloud, Private Cloud, Hybrid Cloud ), By Enterprise Size (Small & Medium Enterprises (SMEs), Large Enterprises), By End User (Manufacturing, Retail, Education, Healthcare, IT & Telecom, BFSI) Market Size, Trends, And Global Forecast 2022-2026

Management Consulting Market By Service Type (Operations Advisory, HR Advisory, Strategy, Financial Advisory, Technology Advisory), By End Use Industry (Financial Services, IT Services, Manufacturing, Construction, Mining And Oil & Gas), And By Region, Opportunities And Strategies Global Forecast To 2022

Cloud Services Global Market Report 2022 By Type (Software As A Service (SaaS), Platform As A Service (PaaS), Infrastructure As A Service (IaaS), Business Process As A Service (BPaaS)), By End-User Industry (BFSI, Media And Entertainment, IT And Telecommunications, Energy And Utilities, Government And Public Sector, Retail And Consumer Goods, Manufacturing), By Application (Storage, Backup, And Disaster Recovery, Application Development And Testing, Database Management, Business Analytics, Integration And Orchestration, Customer Relationship Management), By Deployment Model (Public Cloud, Private Cloud, Hybrid Cloud), By Organization Size (Large Enterprises, Small And Medium Enterprises) Market Size, Trends, And Global Forecast 2022-2026

Interested to know more about The Business Research Company?

The Business Research Company is a market intelligence firm that excels in company, market, and consumer research. Located globally, it has specialist consultants in a wide range of industries, including manufacturing, healthcare, financial services, chemicals, and technology.

The World's Most Comprehensive Database

The Business Research Company's flagship product, Global Market Model, is a market intelligence platform covering various macroeconomic indicators and metrics across 60 geographies and 27 industries. The Global Market Model covers multi-layered datasets which help its users assess supply-demand gaps.


How Cloud Automation is Changing Business World – Security Boulevard

Have you migrated to the cloud? If not, here's the complete cloud migration checklist guide for 2022. Now, let's begin with the benefits of cloud automation.

One reason organizations hesitate with their cloud adoption is security. However, with cloud automation, human intervention can be eliminated, and that implies fewer manual mistakes that compromise safety. Likewise, when you take the help of automation, there's no requirement for multiple people working on the same system. It additionally contributes to the organization's security policies, as there won't be numerous individuals accessing the framework.

Any business that wants to scale up its infrastructure frequently because of client demands should consider cloud automation services. In the initial phase, you can manage your business manually. But handling terabytes of data across a huge number of virtual machines becomes complex at a larger scale. This is where cloud automation comes in handy, as it helps address scalability concerns.

Total Cost of Ownership (TCO) is a fundamental parameter that decides the outcome of a business. A high TCO means that you're not getting much return from your business because you're spending heavily on your assets. Lowering your TCO, on the other hand, can prove highly profitable. In such a scenario, cloud automation plays an important role: it reduces your hardware expenses as well as the human resources needed to look after them.

When you handle your cloud services manually, numerous individuals are involved in the process. In this scenario, you don't have total command over the procedures, and holding any single individual responsible for an issue is troublesome. Switching to automation resolves these worries and provides you with centralized governance. Additionally, you are in more control of your infrastructure, empowering you to make informed business choices.

CI/CD (continuous integration/continuous deployment) and DevOps are the trend now. Continuous deployment is only possible when you automate your application workflow pipeline, and with cloud automation services, you get the tools and other facilities needed to get it done. For that reason, an ever-increasing number of organizations are embracing cloud automation services today.

One of the fundamental reasons to opt for automation is to guarantee higher operational productivity and efficiency. The same applies to cloud automation, which organizations adopt to get their work done quicker and more productively.

Organizations manage a ton of information; it's effectively the new currency of the 21st century. Accordingly, backup is a particularly basic part of the present business scenario. With cloud automation, you don't need to stress over backups: the system handles them automatically several times over the course of the day and saves you from losing valuable information in case of system failures.



Tennessee Board of Regents announces training and education initiative to prepare 5000 Tennesseans for cloud computing careers by 2025 – The Mountain…

NASHVILLE - Amazon Web Services Inc., the Tennessee Board of Regents, and the Tennessee Higher Education Commission announced a collaborative effort to train, upskill, and certify 5,000 Tennesseans in cloud computing by 2025. Through this statewide initiative, technical training and education mapped to in-demand skills will be available from participating public community and technical colleges across Tennessee.

"I'm delighted that AWS, Amazon's cloud computing business, is partnering with our community and technical colleges to provide this opportunity for Tennesseans: an opportunity to learn cloud computing skills for great careers in the state's growing tech sector. Thanks to AWS for providing the resources to our colleges at no cost," Tennessee Gov. Bill Lee said.

TBR will work with the AWS Academy program to provide the colleges with no-cost, ready-to-teach cloud computing curricula that prepare students for industry-recognized AWS Certifications and in-demand cloud-related jobs. Educators at participating institutions will have access to instructor training and a limited number of AWS Certification exams at no cost as they qualify to become AWS Academy accredited educators. Students can also access self-paced online training courses and labs from AWS.

"The future is now with cloud computing, and this initiative will enable Tennesseans to learn the skills they need for new careers in this field or to better perform in their existing information technology work," TBR Chancellor Flora W. Tydings said during a press conference at Nashville State Community College.

"Although AWS is providing much of the resources for this initiative, the program's graduates will be able to work anywhere cloud computing skills are in demand. We're grateful to AWS for this generous support."

This collaborative effort between an industry leader and educational institutions is critical because it ensures that students will be trained for actual industry needs and by trained instructors skilled in teaching the latest technical skills that will help learners earn industry certifications. Certifying 5,000 students by 2025 is a short-term target for the ongoing initiative.

"THEC is proud to support increased access to high-quality industry certifications that can not only help students get in-demand jobs, but also aid in their pursuits of higher education credentials," said Dr. Emily House, Executive Director, Tennessee Higher Education Commission. "This collaboration between AWS and TBR is vital to building a strong workforce in Tennessee, and many students across the state will benefit from this work."

Tennessee has a rapidly growing tech sector, creating a growing demand for employees with cloud computing skills to fill well-paying jobs.

AWS education programs will be offered initially through 12 Tennessee community colleges and 15 Tennessee Colleges of Applied Technology spanning the state. Some of the colleges will build entirely new cloud computing programs, and others will incorporate cloud-computing skills into existing Information Technology courses. Some colleges will also work with TBR's Tennessee eCampus to offer the courses online, and additional TBR colleges are expected to offer the programs at a later time.

The community and technical colleges comprising the College System of Tennessee, governed by the Tennessee Board of Regents, are committed to student success. TBR is an open-access system serving students of all backgrounds, and is committed to meeting student, workforce, and community needs for education and training.

This commitment to providing technical skills training and education across the state is designed to fill in-demand cloud computing jobs throughout Tennessee. This includes available jobs from organizations across various sectors in roles such as software development, cloud architecture, data science, cybersecurity, cloud support engineers, and more.

According to The American Upskilling Study: Empowering Workers for the Jobs of Tomorrow conducted by Gallup, 58% of workers in Tennessee are highly interested in upskilling. For individuals who are unemployed or underemployed, cloud computing skills training offers an opportunity for workers to reskill and re-enter the workforce.

"We are excited to see Tennessee's burgeoning tech sector across the state and right here in Nashville," said Kim Majerus, Vice President, US Education, State and Local Government at AWS.

"With an Amazon corporate office in Nashville serving as a Center of Operational Excellence, our collaboration with TBR will help prepare learners to pursue tech jobs at our company and with local organizations. We are committed to working with employers in the state of Tennessee to bolster their technical talent pipeline, so they can continue to innovate in the state."


Measuring the Environmental Impact of Software and Cloud Services – InfoQ.com

Software has an influence on limiting the service life of hardware and on increasing its energy consumption. It's possible to measure the environmental impacts that are caused by cloud services.

Marina Köhn spoke about the environmental impact of software and cloud services at OOP 2022.

So far, the development of computer science has always followed the same pattern, Köhn explained: new, faster technology is developed, and software exploits the faster processors and the greater memory and data transfer volume. This begins a spiral that leads to equipment becoming obsolete because it can no longer meet the increased performance requirements imposed by the software, Köhn argued.

The design of the software architecture determines how much hardware and electrical power is required. Software can be economical or wasteful with hardware resources, Köhn stated:

Depending on how intelligently it is programmed, for example, it requires less or more processor power and memory.

Köhn mentioned that the greatest challenge lies not in the technical-physical area, but primarily in the economic and organizational conditions that lead to the premature failure of software. This can affect entire product systems, for example through the discontinuation of technical support or a lack of compatibility between different systems. The quality of software therefore also increasingly determines the service life, functionality, and reliability of devices, she said.

The Federal Environment Agency (Umweltbundesamt, UBA) in Germany has created a method for measuring the environmental footprint of applications.

With our method, the environmental expenditure for the production of information technology and for the operation of data centers is recorded in the four impact categories:

The method was applied to cloud services in a first step. The environmental effort determined in this way is distributed to the individual cloud services using allocation rules. "Each service receives a percentage of the environmental impact of the data center," Köhn said.
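
As a hedged illustration of such allocation rules (the concrete UBA allocation keys and impact categories are not reproduced here), the sketch below splits a data center's totals across services in proportion to an assumed usage share; all names and numbers are placeholders.

```python
# Total annual data center footprint per impact category (illustrative placeholder values).
datacenter_totals = {
    "energy_kwh": 5_000_000,
    "ghg_kg_co2e": 1_200_000,
    "water_m3": 30_000,
    "abiotic_resources_kg": 8_000,
}

# Assumed allocation key: each cloud service's share of consumed resources (must sum to 1).
service_shares = {"storage_service": 0.25, "vm_service": 0.60, "saas_service": 0.15}

def allocate(totals: dict[str, float], shares: dict[str, float]) -> dict[str, dict[str, float]]:
    """Give each service a percentage of every impact category, according to its share."""
    return {
        service: {category: total * share for category, total in totals.items()}
        for service, share in shares.items()
    }

for service, footprint in allocate(datacenter_totals, service_shares).items():
    print(service, footprint)
```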

InfoQ interviewed Marina Köhn about the environmental impact of software development.

InfoQ: How does software-related hardware obsolescence impact the life of consumer goods?

Marina Köhn: For several years now, the number of intelligent electrical devices and networked systems in everyday life and in households has been increasing rapidly. This also increases the risk of software obsolescence, i.e., the software-related shortening of the useful life of a technically functional device.

InfoQ: How does the Blue Angel label work for software products?

Köhn: The Blue Angel has been the German government's environmental label for 41 years.

The Blue Angel environmental label for Resource and Energy-Efficient Software Products may be awarded to products that use hardware resources in a particularly efficient manner and consume a low amount of energy during their use. In addition, these software products stand out due to their high level of transparency and give users greater freedom in their use of the software.

InfoQ: What's your definition of green cloud computing?

Köhn: Unfortunately, we do not have sufficient data to make this statement. In our research project, we have developed a method that can be used to provide information about the environmental effects of cloud services. The figures calculated in our research for the environmental effects of cloud services only apply to the respective case studies and are not fundamentally transferable to all similar cloud services. In order for the results to be used comparably, it is necessary to apply the methodology to a large number of data centers or cloud services.


This Rapidly Growing Cloud Stock Is on Sale Right Now – The Motley Fool

The past six months have been terrible for Nutanix (NTNX) investors, as shares of the enterprise cloud platform provider have plunged more than 42% during this period.

However, Nutanix's severe pullback doesn't seem justified, as it has been reporting impressive growth in recent quarters thanks to the switch to a subscription business model. The trend of strong results continued when Nutanix released its fiscal 2022 second-quarter earnings report on March 2.

The company beat Wall Street's expectations handsomely and slightly raised the lower end of its full-year guidance on account of robust spending on enterprise cloud infrastructure. The increasing adoption of the software-defined hyper-converged infrastructure (HCI) that combines computing, storage, and networking onto a single platform was also a tailwind for Nutanix.


Let's take a closer look at Nutanix's quarterly numbers and see why this is one cloud stock investors may not want to miss after its recent slide.

The hyper-converged infrastructure market was reportedly worth $7.8 billion in 2020, according to third-party estimates. By 2025, the HCI market is expected to generate $27 billion in revenue, clocking a compound annual growth rate of 28%. So Nutanix has a lot of room for growth in the future, as it has generated $1.5 billion in revenue over the trailing 12 months.

More importantly, the rate of growth in the company's billings indicates that it is on track to grow at a faster pace than the market it operates in. The annual contract value (ACV) of Nutanix's billings in Q2 increased 37% year over year to $218 million. Nutanix arrives at the ACV by dividing the total value of a contract by the term of the contract; ACV billings refers to the sum of all contracts that were billed during the period.

The increase in Nutanix's ACV should translate into robust revenue growth when the company fulfills its obligations and recognizes revenue for the services provided. Meanwhile, Nutanix's annual recurring revenue increased 55% year over year to $1.04 billion. Annual recurring revenue is the sum of the ACV of all subscription contracts that were in effect at the end of the quarter, and the metric's impressive growth points toward the solid growth of Nutanix's subscription business, which is also leading to fatter margins.
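
Following the definitions given above (ACV as total contract value divided by contract term, ACV billings and ARR as sums over the relevant contracts), here is a small worked sketch; the contract values and flags are made up for illustration.

```python
from dataclasses import dataclass

@dataclass
class Contract:
    total_value: float          # total contract value in dollars
    term_years: float           # contract term
    billed_this_period: bool
    active_at_quarter_end: bool

def acv(c: Contract) -> float:
    """Annual contract value: total contract value divided by the contract term."""
    return c.total_value / c.term_years

def acv_billings(contracts: list[Contract]) -> float:
    """Sum of the ACV of all contracts billed during the period."""
    return sum(acv(c) for c in contracts if c.billed_this_period)

def arr(contracts: list[Contract]) -> float:
    """Annual recurring revenue: sum of the ACV of subscription contracts in effect at quarter end."""
    return sum(acv(c) for c in contracts if c.active_at_quarter_end)

# Example: a $300,000 three-year deal contributes $100,000 of ACV.
deals = [Contract(300_000, 3, True, True), Contract(120_000, 1, False, True)]
print(acv_billings(deals), arr(deals))
```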

Nutanix's adjusted gross margin increased 110 basis points year over year during the quarter. Thanks to the 19% year-over-year growth in revenue to $413 million and an improved margin profile, Nutanix reduced its adjusted net loss to $0.03 per share last quarter from $0.37 per share in the prior-year period. Analysts were looking for a bigger loss of $0.17 per share on $407 million in revenue.

For the full year, Nutanix is now anticipating its revenue to increase 17% to $1.63 billion, which would be an acceleration over fiscal 2021's revenue growth of 7%. Additionally, its ACV billings are expected to increase 28% year over year to $762.5 million, up from fiscal 2021's growth of 18%.

So the faster pace of growth in Nutanix's billings this year should pave the way for solid revenue growth in the long run.

Nutanix is one of the best ways to tap into the fast-growing HCI market. That's because the company controls nearly 25% of this space, second only to VMware, which has a 41.5% share of the HCI market under its control. The good part is that Nutanix's share of the HCI market more than doubled in the third quarter of 2021, compared to 11.5% in the prior-year period.

Looking ahead, it wouldn't be surprising to see Nutanix dominate the HCI market -- the company's simplified portfolio, which now consists of five offerings as compared to 15 products earlier, is leading to faster sales growth. Not surprisingly, the potential growth of Nutanix's end market and the company's solid share should lead to an improvement in its top and bottom lines going forward.

With the stock trading at 3.5 times sales right now, as compared to its five-year average price-to-sales multiple of 5, now looks like a good time to buy this cloud computing play since it could explode in the long run.

This article represents the opinion of the writer, who may disagree with the official recommendation position of a Motley Fool premium advisory service. We're motley! Questioning an investing thesis, even one of our own, helps us all think critically about investing and make decisions that help us become smarter, happier, and richer.
