Category Archives: Cloud Computing
Internet of Things (IoT) Cloud Trends 2021 – Datamation
Exploring Internet of Things (IoT) cloud trends provides valuable insight into the extent to which organizations and consumers have embraced both of these technologies.
As Mordor Intelligence reports, the IoT market was valued at $1.1 trillion in 2020 and is expected to grow to $6 trillion by 2026, a compound annual growth rate (CAGR) of more than 32% between 2021 and 2026. These numbers come as no surprise when we consider that recent Microsoft research revealed that around 94% of companies would use some variety of IoT by the end of 2021.
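As a quick sanity check on those figures, the implied compound annual growth rate can be recomputed from the start and end values. A minimal sketch in Python, assuming the 2020-to-2026 window cited above:

```python
# Rough check of the market-growth figures cited above (assumed window: 2020 -> 2026).
start_value = 1.1e12   # IoT market value in 2020 (USD), per Mordor Intelligence
end_value = 6.0e12     # projected value in 2026 (USD)
years = 6

cagr = (end_value / start_value) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")   # ~32.7%, consistent with the "more than 32%" figure
```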
Cloud adoption, in general, has been on the rise as well. Gartner forecasts end-user spending on cloud services to reach nearly $400 billion by the end of 2021 and $482 billion by the end of 2022. Further, the research firm predicts, public cloud spending will grow to more than 45% of all enterprise IT spending, up from less than 17% in 2021.
While there are many factors at play affecting the adoption of both cloud and IoT technology, the link between the two is apparent. In order to derive value from the data being collected from IoT devices, increased access to fast, reliable connection speeds is a must. The cloud offers expanded reach, reduced latency over direct remote network connections, and increasingly, IoT-based services provided by third parties.
This article will take a look at some of the key trends shaping the IoT cloud landscape:
According to a Markets & Markets study, the global anything-as-a-service (XaaS) market was forecast to grow at nearly 40% per year between 2016 and 2020. IoT as a service is a relatively new idea that builds on the framework of other as-a-service models.
IoT employs wireless microcontroller (MCU) devices to extract operational data, which is transported to the cloud, where any number of services can be managed. In other words, the rise in cloud computing makes it possible for IoT insights to be derived in real-time, from anywhere.
Companies like Samsara rely on the cloud to provide their clients with real-time visibility into production facility operations, GPS fleet tracking, equipment monitoring, and more by monitoring and reporting on IoT-derived data.
As organizations add IoT devices to their networks, bad actors are waiting to pounce on the vulnerabilities these new endpoints can expose. An emerging trend is for security teams to focus on encrypting data in the cloud, ensuring it is encoded both at rest and in transit.
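As one illustration of the at-rest half of that requirement, here is a minimal sketch (assuming AWS S3 and the boto3 SDK; the bucket name is hypothetical) of setting a default server-side encryption rule on a bucket:

```python
# A minimal sketch of enforcing encryption at rest by applying a default
# server-side encryption rule to an S3 bucket. Bucket name is hypothetical.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_encryption(
    Bucket="example-iot-telemetry-bucket",  # hypothetical bucket name
    ServerSideEncryptionConfiguration={
        "Rules": [
            {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "aws:kms"}}
        ]
    },
)

# Encryption in transit is typically enforced separately, e.g. with a bucket
# policy that denies requests made without TLS (aws:SecureTransport = false).
```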
Several vendors offer features in this space:
Tech research firm Omdia reports that 124 million 5G connections were added globally between Q1 and Q2 2021, a 41% increase. By the end of 2021, the firm predicts, there will be 692 million global 5G connections. Increased IoT usage is one factor driving this growth. As organizations adopt IoT services accessible through the cloud, fast connection speeds are a must. 5G is also helping to bridge the connectivity gap in remote locations, allowing new industries to enhance their operations with IoT.
Similar to the trend we see with 5G adoption, organizations are increasingly turning to edge computing to handle IoT data, drawn by the promise of increased connection speed and reliability, especially for workloads related to big data analytics. IoT devices collect massive volumes of data, but without big data processing capabilities, organizations miss out on the many benefits of data analytics.
Edge computing provides a distributed network model that can work alongside cloud data repositories to reduce latency in data processing. Industry watchers expect edge computing to continue expanding as organizations and consumers alike increasingly adopt IoT technologies.
Cloud computing has been a boon to the field of artificial intelligence (AI), providing public access to powerful machine learning (ML) platforms that require huge processing power and data bandwidth. For example, machine learning is being used to learn from IoT-gathered data to automate operational processes and streamline supply chains. These use cases are just the tip of the IoT cloud iceberg: as cloud resources have evolved, virtually any AI application can be accessed and used through the cloud, greatly enhancing the usability of IoT-generated data, often in real time.
Artificial Intelligence and Machine Learning, Cloud Computing, and 5G Will Be the Most Important Technologies in 2022, Says New IEEE Study – Dark…
Piscataway, N.J. - 18 November 2021 - IEEE, the world's largest technical professional organization dedicated to advancing technology for humanity, today released the results of "The Impact of Technology in 2022 and Beyond: an IEEE Global Study," a new survey of global technology leaders from the U.S., U.K., China, India, and Brazil. The study, which included 350 chief technology officers, chief information officers, and IT directors, covers the most important technologies in 2022, industries most impacted by technology in the year ahead, and technology trends through the next decade.
The most important technologies, innovation, sustainability, and the future
Which technologies will be the most important in 2022? Among total respondents, more than one in five (21%) say AI and machine learning, cloud computing (20%), and 5G (17%) will be the most important technologies next year. Because of the global pandemic, technology leaders surveyed said in 2021 they accelerated adoption of cloud computing (60%), AI and machine learning (51%), and 5G (46%), among others.
It's not surprising, therefore, that 95% agree (including 66% who strongly agree) that AI will drive the majority of innovation across nearly every industry sector in the next one to five years.
When asked which of the following areas 5G will most benefit in the next year, technology leaders surveyed said:
As for industry sectors most impacted by technology in 2022, technology leaders surveyed cited manufacturing (25%), financial services (19%), healthcare (16%), and energy (13%). As compared to the beginning of 2021, 92% of respondents agree, including 60% who strongly agree, that implementing smart building technologies that benefit sustainability, decarbonization, and energy savings has become a top priority for their organization.
Workplace technologies, human resources collaboration, and COVID-19
As the impact of COVID-19 varies globally and hybrid work continues, technology leaders nearly universally agree (97% agree, including 69% who strongly agree) that their team is working more closely than ever before with human resources leaders to implement workplace technologies and apps for office check-in, space usage data and analytics, COVID and health protocols, employee productivity, engagement, and mental health.
Among challenges technology leaders see in 2022, maintaining strong cybersecurity for a hybrid workforce of remote and in-office workers is viewed by those surveyed as challenging by 83% of respondents (40% very, 43% somewhat) while managing return-to-office health and safety protocols, software, apps, and data is seen as challenging by 73% of those surveyed (29% very, 44% somewhat). Determining what technologies are needed for their company in the post-pandemic future is anticipated to be challenging for 68% of technology leaders (29% very, 39% somewhat). Recruiting technologists and filling open tech positions in the year ahead is also seen as challenging by 73% of respondents.
Robots rise over the next decade
Looking ahead, 81% agree that in the next five years, one quarter of what they do will be enhanced by robots, and 77% agree that in the same time frame, robots will be deployed across their organization to enhance nearly every business function from sales and human resources to marketing and IT. A majority of respondents agree (78%) that in the next ten years, half or more of what they do will be enhanced by robots. As for the deployments of robots that will most benefit humanity, according to the survey, those are manufacturing and assembly (33%), hospital and patient care (26%), and earth and space exploration (13%).
Connected devices continue to proliferate
As a result of the shift to hybrid work and the pandemic, more than half (51%) of technology leaders surveyed believe the number of devices connected to their businesses that they need to track and manage (such as smartphones, tablets, sensors, robots, vehicles, drones, etc.) increased as much as 1.5 times, while for 42% of those surveyed the number of devices increased in excess of 1.5 times.
However, the perspectives of technology leaders globally diverge when asked about managing even more connected devices in 2022. When asked if the number of devices connected to their company's business will grow so significantly and rapidly in 2022 that it will be unmanageable, over half of technology leaders disagree (51%), but 49% agree. Those differences can also be seen across regions: 78% in India, 64% in Brazil, and 63% in the U.S. agree device growth will be unmanageable, while a strong majority in China (87%) and just over half (52%) in the U.K. disagree.
Cyber and physical security, preparedness, and deployment of technologies
The cybersecurity concerns most likely to rank in technology leaders' top two are issues related to the mobile and hybrid workforce, including employees using their own devices (39%) and cloud vulnerability (35%). Additional concerns include data center vulnerability (27%), a coordinated attack on their network (26%), and a ransomware attack (25%). Notably, 59% of all technology leaders surveyed currently use or plan to use drones in the next five years for security, surveillance, or threat prevention as part of their business model. There are regional disparities, though. Current drone use for security, or plans to adopt it in the next five years, is strongest in Brazil (78%), China (71%), India (60%), and the U.S. (52%), compared with only 32% in the U.K., where 48% of respondents say they have no plans to use drones in their business.
An open-source distributed database that uses cryptography through a distributed ledger, blockchain enables trust among individuals and third parties. The four uses respondents were most likely to cite among their top three most important applications of blockchain technology in the next year are:
The vast majority of those surveyed (92%) believe that compared to a year ago, their company is better prepared to respond to a potentially catastrophic interruption such as a data breach or natural disaster. Of that majority, 65% strongly agree that COVID-19 accelerated their preparedness.
About the Survey
"The Impact of Technology in 2022 and Beyond: an IEEE Global Study" surveyed 350 CIOs, CTOs, IT directors, and other technology leaders in the U.S., China, U.K., India, and Brazil at organizations with more than 1,000 employees across multiple industry sectors, including banking and financial services, consumer goods, education, electronics, engineering, energy, government, healthcare, insurance, retail, technology, and telecommunications. The surveys were conducted 8-20 October 2021.
About IEEE
IEEE is the world's largest technical professional organization dedicated to advancing technology for the benefit of humanity. Through its highly cited publications, conferences, technology standards, and professional and educational activities, IEEE is the trusted voice in a wide variety of areas ranging from aerospace systems, computers, and telecommunications to biomedical engineering, electric power, and consumer electronics.
How SOAR Helps to Hold Up Your Part of the Cloud Security Shared Responsibility Model – Security Boulevard
The allure of the cloud is indisputable. Flexibility, reliability, efficiency, scalability and cost savings are tantalizing traits for a business at any time, never mind when most have been catapulted into a colossal work-from-home experiment.
According to O'Reilly's annual cloud adoption survey, nine out of 10 businesses now use cloud computing, with nearly half planning to migrate more than 50 percent of their applications into the cloud in the upcoming year. Amazon Web Services (AWS) is leading the pack, with a recent Vectra AI study reporting that 78% of organizations are running AWS across multiple regions, including 40% in at least three.
But the benefits of the cloud make it easy to leap headfirst without adequately acknowledging and prioritizing its dangers, especially within multi-cloud and hybrid cloud environments. Indeed, as cloud adoption increases, so will the magnitude of both malicious attacks and user errors. For example, a study by Ermetic found that 90% of AWS S3 buckets are prone to identity management and configuration errors that could permit admin-level ransomware attacks.
Thankfully, public cloud services like AWS, Google Cloud Platform (GCP) and Microsoft Azure offer numerous controls for managing these threats and making compromise more difficult. However, these tools deliver their full value only when organizations accept a communal burden for security, something Amazon refers to as the Shared Responsibility Model. This is where a security orchestration, automation and response (SOAR) platform can step in, helping to bridge the gap between alert overload and analyst capacity and pave the way for successful case investigations and remediation.
At Siemplify, AWS cloud-native controls, including GuardDuty, CloudWatch, and Security Hub, conveniently integrate with the Siemplify Security Operations Platform, allowing threat responders to slash investigation times, extract valuable context-rich insights into incidents and immediately investigate and take action, such as disabling rogue instances and correcting misconfigurations.
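For a sense of the raw material such an integration works with, the sketch below (assuming boto3 and an existing GuardDuty detector; this is an illustration, not Siemplify's actual connector code) pulls GuardDuty findings of the kind a SOAR platform would ingest and triage:

```python
# A minimal sketch of retrieving GuardDuty findings, the kind of alert data a
# SOAR platform ingests before enriching it and running response playbooks.
import boto3

guardduty = boto3.client("guardduty")

for detector_id in guardduty.list_detectors()["DetectorIds"]:
    finding_ids = guardduty.list_findings(DetectorId=detector_id)["FindingIds"]
    if not finding_ids:
        continue
    # get_findings accepts up to 50 IDs per call; pagination is omitted for brevity.
    findings = guardduty.get_findings(DetectorId=detector_id,
                                      FindingIds=finding_ids[:50])
    for finding in findings["Findings"]:
        # A SOAR playbook would enrich these and decide on an action,
        # e.g. disabling a rogue instance or correcting a misconfiguration.
        print(finding["Type"], finding["Severity"], finding["Title"])
```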
The Siemplify platform combines security orchestration, automation and response with end-to-end security operations management to make analysts more productive, engineers more effective and managers more informed. The SOAR experience is brought to life inside the rich Siemplify Marketplace, where security professionals can access a vast array of integrations, including AWS, and ready-to-deploy use cases.
The Siemplify platform seamlessly connects to cloud threat detection technologies, as well as any on-premises tools, effectively delivering unified incident response at the speed of the cloud. Additionally, Siemplify leverages AWS capabilities for monitoring and securing the environment in best-in-class solutions.
Siemplify customers, as well as users of the free Siemplify Community Edition, can integrate AWS within Siemplify by downloading the marketplace connector and entering AWS credentials. For more information, visit siemplify.co/marketplace. The Siemplify platform is also available on the AWS Marketplace for existing AWS customers.
Dan Kaplan is director of content at Siemplify.
*** This is a Security Bloggers Network syndicated blog from Siemplify authored by Dan Kaplan. Read the original post at: https://www.siemplify.co/blog/how-soar-helps-to-hold-up-your-part-of-the-cloud-security-shared-responsibility-model/
Global Virtualization Security Market Expected to Generate a Revenue of $6,986.3 Million, Growing at a Healthy CAGR of 13.6% During the Forecast…
The global virtualization security market is predicted to witness striking growth during the forecast period owing to the increasing adoption of virtual applications across small and medium businesses and large corporations worldwide. Based on the component, the solution sub-segment is expected to be most lucrative. Regionally, the North America region is predicted to hold the largest share of the market throughout the forecast timeframe.
New York, USA, Nov. 23, 2021 (GLOBE NEWSWIRE) -- According to a report published by Research Dive, the global virtualization security market is anticipated to garner $6,986.3 million and a CAGR of 13.6% over the estimated period from 2021-2028.
Covid-19 Impact on the Global Virtualization Security Market
Though the outbreak of the Covid-19 pandemic has devastated several industries, it has had a positive impact on the virtualization security market. Due to stringent lockdowns and strict government guidelines, many IT companies adopted a work-from-home culture, which increased reliance on virtualized platforms and cloud-based environments. This has surged the demand for virtualization security to protect network perimeter access. Moreover, the increasing demand for cloud computing technology, especially in the healthcare industry to analyze patients' data, has further propelled the growth of the market during the period of crisis.
As per our analysts, the rapid adoption of virtual applications across small and medium businesses and large corporations globally is expected to bolster the growth of the market during the forecast period. Moreover, the utilization of cloud computing to manage a remote workforce, eliminate hardware requirements, and reduce maintenance and operational costs is further expected to drive the growth of the market throughout the estimated timeframe. Besides, the rising demand for virtualization security solutions across small and large organizations is expected to fortify the growth of the virtualization security market throughout the analysis period. However, a lack of skilled IT experts in virtualization security may impede the growth of the market during the forecast timeframe.
Segments of the Global Virtualization Security Market
The report has divided the market into segments namely, component, deployment, enterprise size, end-user, and region.
Component: Solution Sub-Segment to be Most Lucrative
The solution sub-segment is expected to garner a revenue of $4,955.9 million and is predicted to continue steady growth during the analysis period. This is mainly due to the rising threat of cyber-attacks across the globe. In addition, the rapid growth of cloud computing and the expanding use of virtualization technology are predicted to boost the growth of this virtualization security market sub-segment during the analysis period.
Deployment: Cloud Sub-Segment to be Most Profitable
The cloud sub-segment is predicted to generate a revenue of $4,332.0 million during the forecast period. This is mainly because of the improved efficiency and flexibility of using cloud computing across businesses. Moreover, the emerging practice of improving system security through cloud computing and the increasing use of cloud computing across businesses to avoid platform vulnerabilities are expected to fortify the growth of this market sub-segment over the estimated timeframe.
Enterprise Size: Large Enterprises Sub-Segment to be Most Beneficial
The large enterprises sub-segment is predicted to generate a revenue of $4,384.2 million over the analysis period. This is largely because of the increased flexibility, robustness, and network security of enterprise cloud computing. In addition, organizations can access security tools such as access management and cloud security monitoring, and can implement network-wide identity with the enterprise cloud. This factor is expected to boost the growth of the virtualization security market sub-segment over the analysis period.
End-User: IT & Telecommunication Sub-Segment to be Most Productive
The IT & telecommunication sub-segment is anticipated to generate a revenue of $1,291.5 million over the forecast period. This is due to the significant impact of cloud computing on the IT, technology, and business sectors. Furthermore, the unexpected jump in data traffic due to the global pandemic, the rise of cloud-native 5G technology, rising usage of broadband services, and increasing customer demand for security solutions are expected to fuel the growth of the virtualization security market sub-segment throughout the analysis timeframe.
Region: North America Region Expected to Have the Maximum Market Share
The North America region is expected to generate a revenue of $2,430.5 million and is predicted to dominate the market during the forecast period. This is mainly because of the strong presence of technical professionals and substantial IT firms in the region. Moreover, the growing transformation of traditional network and security workloads into virtualized computation is predicted to amplify the growth of the market sub-segment during the analysis period.
Key Players of the Global Virtualization Security Market
1. IBM
2. Fortinet Inc.
3. Cisco Systems, Inc.
4. Citrix Systems, Inc.
5. Trend Micro
6. VMware
7. Sophos Ltd
8. Juniper Networks, Inc.
9. Broadcom Corporation
10. Check Point Software Technologies, Ltd
These players are widely working on the development of new business strategies, such as mergers and acquisitions and product development, to acquire leading positions in the global industry.
For instance, in August 2020, Intel, a leading American multinational technology company, announced its collaboration with VMware, a renowned cloud computing and virtualization technology company. The collaboration centers on an integrated software platform for virtualized Radio Access Networks (RAN). With it, the companies aim to accelerate the rollout of LTE and future 5G networks.
Further, the report also presents important aspects including SWOT analysis, product portfolio, latest strategic developments, and the financial performance of the key players.
Cohere partners with Google Cloud to train large language models using dedicated hardware – VentureBeat
Google Cloud, Google's cloud computing services platform, today announced a multi-year collaboration with startup Cohere to bring natural language processing (NLP) to businesses by making it more cost-effective. Under the partnership, Google Cloud says it'll help Cohere establish computing infrastructure to power Cohere's API, enabling Cohere to train large language models on dedicated hardware.
The news comes a day after Cohere announced the general availability of its API, which lets customers access models that are fine-tuned for a range of natural language applications, in some cases at a fraction of the cost of rival offerings. "Leading companies around the world are using AI to fundamentally transform their business processes and deliver more helpful customer experiences," Google Cloud CEO Thomas Kurian said in a statement. "Our work with Cohere will make it easier and more cost-effective for any organization to realize the possibilities of AI with powerful NLP services powered by Google's custom-designed [hardware]."
Headquartered in Toronto, Canada, Cohere was founded in 2019 by a pedigreed team including Aidan Gomez, Ivan Zhang, and Nick Frosst. Gomez, a former intern at Google Brain, coauthored the academic paper "Attention Is All You Need," which introduced the world to a fundamental AI model architecture called the Transformer. (Among other high-profile systems, OpenAI's GPT-3 and Codex are based on the Transformer architecture.) Zhang, alongside Gomez, is a contributor at FOR.ai, an open AI research collective involving data scientists and engineers. As for Frosst, he, like Gomez, worked at Google Brain, publishing research on machine learning alongside Turing Award winner Geoffrey Hinton.
In a vote of confidence, even before launching its commercial service, Cohere raised $40 million from institutional venture capitalists as well as Hinton, Google Cloud AI chief scientist Fei-Fei Li, UC Berkeley AI lab co-director Pieter Abbeel, and former Uber autonomous driving head Raquel Urtasun.
Unlike some of its competitors, Cohere offers two types of English NLP models, generation and representation, in Large, Medium, and Small sizes. The generation models can complete tasks involving generating text, for example, writing product descriptions or extracting document metadata. By contrast, the representation models are about understanding language, driving apps like semantic search, chatbots, and sentiment analysis.
To keep its technology relatively affordable, Cohere charges for access on a per-character basis, based on the size of the model and the number of characters apps use (ranging from $0.0025 to $0.12 per 10,000 characters for generation and $0.019 per 10,000 characters for representation). Only the generation models charge on both input and output characters, while other models charge on output characters only. All fine-tuned models, meanwhile (i.e., models tailored to particular domains, industries, or scenarios), are charged at two times the baseline model rate.
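To see how those per-character rates translate into spend, here is a rough, illustrative calculation; the monthly character volume is a hypothetical assumption:

```python
# A rough cost estimate based on the per-character rates quoted above.
GENERATION_RATE_PER_10K = 0.0025     # low end quoted for generation models (USD per 10,000 chars)
REPRESENTATION_RATE_PER_10K = 0.019  # rate quoted for representation models
FINE_TUNED_MULTIPLIER = 2            # fine-tuned models billed at 2x the baseline rate

chars_processed = 5_000_000  # hypothetical monthly character volume

gen_cost = chars_processed / 10_000 * GENERATION_RATE_PER_10K
rep_cost = chars_processed / 10_000 * REPRESENTATION_RATE_PER_10K
print(f"Generation (baseline): ${gen_cost:.2f}")                          # $1.25
print(f"Representation:        ${rep_cost:.2f}")                          # $9.50
print(f"Fine-tuned generation: ${gen_cost * FINE_TUNED_MULTIPLIER:.2f}")  # $2.50
```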
The partnership with Google Cloud will grant Cohere access to dedicated fourth-generation tensor processing units (TPUs) running in Google Cloud instances. TPUs are custom chips developed specifically to accelerate AI training, powering products like Google Search, Google Photos, Google Translate, Google Assistant, Gmail, and Google Cloud AI APIs.
The partnership will run until the end of 2024, with options to extend into 2025 and 2026. "Google Cloud and Cohere have plans to partner on a go-to-market strategy," Gomez told VentureBeat via email. "We met with a number of cloud providers and felt that Google Cloud was best positioned to meet our needs."
Cohere's decision to partner with Google Cloud reflects the logistical challenges of developing large language models. For example, Nvidia's recently released Megatron 530B model was originally trained across 560 Nvidia DGX A100 servers, each hosting 8 Nvidia A100 80GB GPUs. Microsoft and Nvidia say that they observed between 113 and 126 teraflops per second per GPU while training Megatron 530B, which would put the training cost in the millions of dollars. (A teraflop rating measures the performance of hardware, including GPUs.)
Inference (actually running the trained model) is another challenge. On two of its costly DGX SuperPod systems, Nvidia claims that inference (e.g., autocompleting a sentence) with Megatron 530B takes only half a second. But it can take over a minute on a CPU-based on-premises server. While cloud alternatives might be cheaper, they're not dramatically so: one estimate pegs the cost of running GPT-3 on a single Amazon Web Services instance at a minimum of $87,000 per year.
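A back-of-the-envelope estimate shows why training lands in the millions. The GPU count comes from the figures above; the hourly rate and training duration below are purely illustrative assumptions:

```python
# Back-of-the-envelope training cost estimate. GPU count is taken from the
# article (560 servers x 8 GPUs); rate and duration are illustrative assumptions.
num_gpus = 560 * 8                 # 4,480 A100 GPUs
assumed_rate_per_gpu_hour = 2.0    # USD, assumed cloud-equivalent price
assumed_training_days = 30         # assumed duration

total_gpu_hours = num_gpus * assumed_training_days * 24
estimated_cost = total_gpu_hours * assumed_rate_per_gpu_hour
print(f"Estimated training cost: ${estimated_cost:,.0f}")  # ~$6.5M under these assumptions
```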
Cohere rival OpenAI trains its large language models on an AI supercomputer hosted by Microsoft, which invested over $1 billion in the company in 2020, roughly $500 million of which came in the form of Azure compute credits.
In Cohere, Google Cloud (which already offered a range of NLP services) gains a customer in a market that's growing rapidly during the pandemic. According to a 2021 survey from John Snow Labs and Gradient Flow, 60% of tech leaders indicated that their NLP budgets grew by at least 10% compared to 2020, while a third (33%) said that their spending climbed by more than 30%.
"We're dedicated to supporting companies, such as Cohere, through our advanced infrastructure offering in order to drive innovation in NLP," Google Cloud AI director of product management Craig Wiley told VentureBeat via email. "Our goal is always to provide the best pipeline tools for developers of NLP models. By bringing together the NLP expertise from both Cohere and Google Cloud, we are going to be able to provide customers with some pretty extraordinary outcomes."
The global NLP market is projected to be worth $2.53 billion by 2027, up from $703 million in 2020. And if the current trend holds, a substantial portion of that spending will be put toward cloud infrastructure benefiting Google Cloud.
5 Ways to Improve Data Management in the Cloud – ITPro Today
Managing data can be challenging in any environment. But data management in the cloud is especially difficult, given the unique security, cost and performance issues at play. With that reality in mind, here are some tips to help IT teams optimize cloud data management and strike the right balance among the various competing priorities that shape data in public, private or hybrid cloud environments.
Before delving into best practices for cloud data management, let's briefly discuss why managing data in the cloud can be particularly challenging. The main reasons include:
Those are the problems. Now, let's look at five ways to tackle them.
A basic best practice for striking the right balance between cloud storage costs and performance is to use data storage tiers. Most public cloud providers offer different storage tiers (or classes, as they are called on some clouds) for at least their object storage services.
The higher-cost tiers offer instant access to data. With lower-cost tiers, you may have to wait some amount of time--which could range from minutes to hours--to access your data. Data that doesn't require frequent or quick access, then, can be stored much more cheaply using lower-cost tiers.
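As a concrete example, here is a minimal sketch (assuming AWS S3 and boto3; bucket and key names are hypothetical) of writing infrequently accessed data directly into a cheaper storage class:

```python
# A minimal sketch of placing infrequently accessed data into a cheaper S3 tier.
import boto3

s3 = boto3.client("s3")

s3.put_object(
    Bucket="example-archive-bucket",       # hypothetical bucket name
    Key="reports/2021/q3-metrics.csv",     # hypothetical object key
    Body=b"metric,value\nuptime,99.95\n",  # sample payload
    StorageClass="STANDARD_IA",  # infrequent-access tier: cheaper storage, retrieval fees apply
)

# Lifecycle rules can automate this instead, moving objects to colder tiers
# (or deleting them) after a set number of days.
```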
For many teams, object storage services like AWS S3 or Azure Blob Storage are the default solution for storing data in the cloud. These services let you upload data in any form and retrieve it quickly. You don't have to worry about structuring the data in a particular way or configuring a database.
The downside of cloud object storage is that you usually have to pay fees to interact with the data. For instance, if you want to list the contents of your storage bucket or copy a file, you'll pay a fee for each request. The request fees are very small--fractions of a penny--but they can add up if you are constantly accessing or modifying object storage data.
You don't typically have to pay special request fees to perform data operations on other types of cloud storage services, like block storage or cloud databases. Thus, from a cost optimization perspective, it may be worth forgoing the convenience of object storage in order to save money.
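A rough illustration of how those request fees accumulate follows; the per-1,000-request prices are assumptions for the sake of the example, not quoted rates:

```python
# Illustrative request-fee arithmetic for an object storage bucket.
assumed_get_price_per_1k = 0.0004  # USD per 1,000 GET requests (assumed)
assumed_put_price_per_1k = 0.005   # USD per 1,000 PUT/LIST requests (assumed)

monthly_gets = 50_000_000  # hypothetical read volume
monthly_puts = 5_000_000   # hypothetical write/list volume

monthly_cost = (
    (monthly_gets / 1000) * assumed_get_price_per_1k
    + (monthly_puts / 1000) * assumed_put_price_per_1k
)
print(f"Request fees alone: ${monthly_cost:,.2f} per month")  # $45.00 under these assumptions
```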
One of the key security challenges that teams face when managing cloud data is the risk that they don't actually know where all of their sensitive data is within cloud environments. It can be easy to upload files containing personally identifiable information or other types of private data into the cloud and lose track of it (especially if your cloud environment is shared by a number of users within your organization, each doing their own thing with few governance policies to manage operations).
Cloud data loss prevention (DLP) tools address this problem by automatically scanning cloud storage for sensitive data. Public cloud vendors offer such tools, such as Google Cloud DLP and AWS Macie. There are also third-party DLP tools, like Open Raven, that can work within public cloud environments.
Cloud DLP won't guarantee that your cloud data is stored securely--DLP tools can overlook sensitive information--but it goes a long way toward helping you find data that is stored in an insecure way before the bad guys discover it.
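Conceptually, DLP-style scanning boils down to walking your storage and flagging objects that look like they contain sensitive data. The toy sketch below (assuming boto3; the bucket name and regex patterns are illustrative, and real tools like AWS Macie or Google Cloud DLP are far more sophisticated) shows the idea:

```python
# A toy illustration of DLP-style scanning: walk objects in a bucket and flag
# ones containing likely PII patterns. Not a substitute for a real DLP product.
import re
import boto3

PII_PATTERNS = {
    "email": re.compile(rb"[\w.+-]+@[\w-]+\.[\w.]+"),
    "us_ssn": re.compile(rb"\b\d{3}-\d{2}-\d{4}\b"),
}

s3 = boto3.client("s3")
bucket = "example-shared-bucket"  # hypothetical bucket name

for obj in s3.list_objects_v2(Bucket=bucket).get("Contents", []):
    body = s3.get_object(Bucket=bucket, Key=obj["Key"])["Body"].read()
    hits = [name for name, pattern in PII_PATTERNS.items() if pattern.search(body)]
    if hits:
        print(f"{obj['Key']}: possible sensitive data ({', '.join(hits)})")
```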
Data egress--which means the movement of data out of a public cloud environment--is the bane of cloud data cost and performance optimization. The more egress you have, the more you'll pay because cloud providers bill for every gigabyte of data that moves out of their clouds. Egress also leads to poorer performance due to the time it takes to move data out of the cloud via the Internet.
To mitigate these issues, make data egress mitigation a key priority when designing your cloud architecture. Don't treat egress costs and performance degradations as inevitable; instead, figure out how to store data as close as possible to the applications that process it or the users who consume it.
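A quick sketch of why egress adds up; the per-gigabyte price is an assumed figure for illustration, and actual rates vary by provider, region, and volume:

```python
# Illustrative egress-cost arithmetic.
assumed_egress_price_per_gb = 0.09  # USD per GB leaving the cloud (assumed)
monthly_egress_gb = 20_000          # hypothetical volume moved out each month

monthly_cost = monthly_egress_gb * assumed_egress_price_per_gb
print(f"Estimated egress charges: ${monthly_cost:,.2f} per month")  # $1,800.00
```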
In addition to allowing you to store data, all of the major clouds now also let you process it using a variety of managed data analytics services, such as AWS OpenSearch and Azure Data Lake Analytics.
If you want to analyze your data without having to move it out of the cloud (and pay those nasty egress fees), these services may come in handy. However, you'll typically have to pay for the services themselves, which can cost a lot depending on how much data you process. There may also be data privacy issues to consider when analyzing sensitive cloud data using a third-party service.
As an alternative, you can consider installing your own, self-managed data analytics platform in a public cloud, using open source tools like the ELK Stack. That way, you can avoid egress by keeping data in the cloud, without having to pay for a third-party managed service. (You'll pay for the cloud infrastructure that hosts the service, but that is likely to cost much less than a managed data analytics service.)
The bottom line here: managed cloud data analytics services may be useful, but deploy them wisely.
Like many other things, data management is just harder when you have to do it in the cloud. The good news is that, by being strategic about which cloud storage services you use, how you manage data in the cloud and how you factor data management into your cloud architecture, you can avoid the cost, performance and security pitfalls of cloud data.
Scalable Modular Data Centers and the Race to ROI – Data Center Frontier
To ensure your data center design is modular and scalable, it is essential to select scalable equipment. Switchgear, uninterruptible power supplies (UPS), power distribution units (PDU), and remote power panels (RPP) are all examples of scalable equipment. (Source: ABB)
Last week, we continued our special report series on how physical infrastructure, the data center, and the cloud are keeping up with new modular solutions delivery and streamlined operational support. In our final article in the series, we'll examine some solution architectures for scalable, modular data center designs.
With the modular market developing in the industry, some tremendous innovation and engineering design efforts have been put into solutions. The modular market is maturing, with even more large enterprises actively deploying the modular data center platform.
To that extent, there is already quite a bit of industry adoption as it relates to modular solutions:
With all of this in mind, there are still some hesitations related to modular adoption. These modular myths date back to the first generation of modular deployments. Let's examine some of these myths and where today's modular modernization and the race to data center ROI impact digital infrastructure.
MODULAR FACT
Modular solutions can be seen as intelligently applying capital to the data center in line with changing technology and IT requirements. Instead of a $50 million project on day one, ten $5 million modules can be built as they are needed. It enables the ability to add capacity to the data center incrementally.
MODULAR FACT
Here's another critical point: you don't have to worry about a lack of sub-contractors and trade professionals. Due to the nature of the design and standardized module architecture, you can have your equipment and facility up and running with minimal requirements for contractor support. The reason for this is that your equipment comes delivered as factory-built units. These modular units are pre-assembled, tested in a controlled factory environment, and delivered directly to the construction site. These efforts minimize the need for additional onsite construction and additional personnel.
As the modular data center market matures and new technologies are introduced, data center administrators will need a new way to manage their infrastructure. There will be an immediate need to transform complex data center operations into simplified plug & play delivery models. This means lights-out automation, rapid infrastructure assembly, and even further simplified management. The next iteration of DCIM aims to work more closely with modular ecosystems to remove the many challenges which face administrators when it comes to creating a road map and building around efficiencies. In working with the future of DCIM, expect the following:
MODULAR FACT
Another critical consideration is working with a modular partner that can support a healthy supply chain. When working with modular designs, make sure you have a partner that can think locally and deliver globally.
Much like anything in the technology market, solutions continue to change and evolve. Many of the legacy perspectives on modular solutions revolve around an older generation of modular design. Today, modular data centers are more efficient, denser, and a lot easier to deploy. Let's examine some solution architectures for scalable, modular data center designs.
To ensure your data center design is modular and scalable, it is essential to select scalable equipment. Switchgear, uninterruptible power supplies (UPS), power distribution units (PDU), and remote power panels (RPP) are all examples of scalable equipment. Get this right and specifying future expansions will be time and cost-efficient.
With this in mind, let's look at some emerging Gen 2 modular design considerations.
Digitalization within the modular industry is a significant design consideration for Gen 2 modular designs. Systems of this nature are much more scalable because changes to the configuration can be done remotely using software, as opposed to changing out hardware or reassembling wiring.
IEC 61850 is a well-established communications standard for substation automation. The high reliability, integrated diagnostics, fine selectivity, shorter fault reaction times, and better fault tolerance delivered by IEC 61850 make it ideal for data center power infrastructure.
IEC 61850 AND MODULAR DATA CENTERS
The world is experiencing a data explosion. Not only is the quantity of data increasing at a dizzying rate, but the extent to which society relies on that data is also growing by the day. These trends elevate the data center to the status of critical infrastructure in many countries. If a data center fails, chaos ensues, which makes a reliable power supply indispensable. Generally, data centers have well-thought-out power backup provisions such as uninterruptible power supplies (UPSs), diesel generators, etc. By employing IEC 61850-enabled devices and IEC 61850-based GOOSE (generic object-oriented substation event) communication to automate the data center power infrastructure, significant improvements can be made: better power supply reliability, greater operational control, and reduced cost, for example.
GEN 2 MODULAR CONCEPTS AND AUTOMATION
Working with the next iteration of modular data center design means eliminating wasteful processes and operations. In many cases, this means adopting new solutions around infrastructure automation.
IEC 61850 is eminently suited to data center power infrastructure automation. Using just one protocol can form the bedrock of a complete electrical design concept that includes the full protection, control and supervision system, and cybersecurity. By using optical fiber instead of copper wire, wiring costs are lowered, space requirements are substantially reduced, and safety is increased. IEC 61850 also delivers the capability to monitor and control IEDs remotely. The convenience is that devices supplied by different manufacturers can communicate with each other without custom-designed gateways or other engineering-intensive complications.
Taking a broader perspective, the IEC 61850 standard allows digitalization of the data center power system in a way that opens it to collaboration with other digital entities in the data center, such as a building management system (BMS), power management system (PMS), data center infrastructure management (DCIM) or ABB Ability Data Center Automation.
These are all essential parts of the final goal: the single pane of glass that orchestrates the entire data center. Decathlon for Data Centers, for instance, gives power and cooling visibility, and IEC 61850's open protocols allow integration of existing equipment and systems. With IEC 61850 peer-to-peer communication capabilities in components like ABB's Relion relays and Emax circuit breakers, one can go from the DCIM system controlling or supervising software to having real-time interaction with the subsystem (such as a UPS breaker) itself.
The IEC 61850 architecture is the ideal standard for data centers, as it delivers increased reliability, finer selectivity, shorter fault reaction times, and the possibility to implement fault tolerance and integrated diagnostics, as well as a host of other advantages.
Download the full report, Cloud and the Data Center: How Digital Modernization is Impacting Physical Modular Infrastructure, courtesy of ABB for two exclusive case studies and tips for getting started on the modular journey.
Stocks making the biggest moves after hours: Nordstrom, Gap, VMware, HP and more – CNBC
Shoppers leave a Nordstrom store on May 26, 2021 in Chicago, Illinois.
Scott Olson | Getty Images News | Getty Images
Check out the companies making headlines after the bell:
Nordstrom - Shares of the department store chain tumbled roughly 20% following its quarterly results. Nordstrom reported earnings of 39 cents per share, well below the 56 cents expected by analysts. Labor costs ate into profits, and sales at Nordstrom Rack, its off-price division, have struggled to return to pre-pandemic levels, the company reported.
Gap - The apparel retailer saw its shares drop more than 15% after missing profit and revenue expectations for its fiscal third quarter. It also slashed its full-year revenue outlook from a 30% increase to a 20% increase, compared with analysts' expectations of a 28.4% year-over-year gain, according to Refinitiv.
HP - The computer hardware company saw shares jump about 6% following its quarterly results. HP reported earnings of 94 cents per share on revenue of $16.68 billion, beating analysts' estimates of 88 cents per share on revenue of $15.4 billion, according to Refinitiv. It also raised its first-quarter guidance to a range of 99 cents to $1.05 per share, compared with the 94 cents per share expected by analysts.
VMware - Shares of cloud computing company VMware got a 1% lift after the company reported a quarterly beat on the top and bottom lines. VMware recorded $1.72 per share in earnings, beating expectations by 18 cents, and $3.19 billion in revenue, topping estimates of $3.12 billion.
Autodesk - The software company's shares fell more than 13% despite reporting a beat on the top and bottom lines for its most recent quarter. Autodesk issued fourth-quarter earnings and revenue guidance that were largely below estimates.
Why Dream11 wants to float in the cloud – ETCIO.com
Dream Sports and its brands such as Dream11, FanCode, Dream Capital have a collective user base of over 140 million. To cater to such a humongous user base with peak concurrency reaching up to 5.5+ million, the cloud became a very obvious choice for the organisation from the beginning. With so much traffic data coming through, every decision is data-driven and cloud adoption ensures that this objective is achieved in a seamless manner.
"With hyper-growth that we were (and are) seeing on a year-on-year basis, we needed a highly scaled solution that can be elastic as per our traffic patterns and data volumes. We were also looking to get reliable out-of-the-box software/infrastructure as a service so that we could focus on our core product. Cloud technologies fitted the bill perfectly," said Abhishek Ravi, CIO, Dream Sports.
According to Ravi, some key aspects that are considered for going with the cloud are scalability, elasticity, performance, reliability, resilience, security and cost.
"Cloud has really helped us to plan, develop and scale our product without worrying about the performance, availability and cost of ownership. We could quickly test out our features, scale tests in load/stress environments and ship them to our users to give them the best experience. With managed services available on the cloud, our teams could and continue to focus on core products and thus, derive maximum efficiency," Ravi added.
The company intends to remain cloud-native in the future too. "With our data volumes increasing day by day and newer solutions evolving in the cloud space, we want to use the cloud at the optimum."
The company's strategy is multi-cloud, as it selects the right solution for the different use cases as per the requirements.
Ravi believes that a multi-cloud strategy should be well thought through so that the best cloud technology for the right use case can be provisioned. It also helps to optimise the infrastructure and thus, the cost.
In the coming months, Dream Sports aims to upscale its tech advancements and expand tech infrastructure.
"We leverage Big Data, Analytics, Artificial Intelligence and Machine Learning to focus on every aspect that makes sports better. We are heavily experimenting with push architecture to serve information to the users in real-time. We are also very advanced on containerization, which has reduced our infrastructure requirements drastically," Ravi maintained.
The company is also working on several tech initiatives such as a concurrency prediction model to predict hourly concurrency on the Dream11 platform, and a fraud detection system to identify & mitigate users creating multiple/ duplicate accounts on the platform to abuse referral or promotional cash bonus schemes.
To ensure a quality user experience during peak traffic, Dream Sports also stress tests every feature that is released for a smooth experience at scale. "We have a testing framework that simulates any kind of traffic load with real life-like patterns. This gives a high degree of assurance that the backend would behave exactly as expected," Ravi added.
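For readers curious what such traffic simulation looks like in practice, here is a minimal sketch using the open-source Locust load-testing tool; the endpoints and traffic mix are hypothetical and unrelated to Dream Sports' actual framework:

```python
# A minimal load-test sketch with Locust. Endpoints and weights are hypothetical.
from locust import HttpUser, task, between

class FantasyUser(HttpUser):
    wait_time = between(1, 3)  # simulated think time between requests, in seconds

    @task(5)  # weighted 5x: browsing is the dominant traffic pattern
    def view_contests(self):
        self.client.get("/api/contests")  # hypothetical endpoint

    @task(1)
    def join_contest(self):
        self.client.post(
            "/api/contests/join",  # hypothetical endpoint
            json={"contest_id": 42, "team_id": 7},
        )

# Run with, for example: locust -f loadtest.py --host https://staging.example.com
```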
Beginner's Guide to Cloud Computing – An Introduction …
You might have read or heard about cloud computing a few times by now, or this could be your first time. Either way, something has drawn you toward this domain, whether out of interest or by some other means. If interest is the reason, then like every other beginner you will want to know more about cloud computing. After reading this article, you will have a good understanding of cloud computing, types of cloud computing, cloud computing services, benefits of cloud computing, top cloud service providers by market share, and careers in cloud computing. Read through each of the subsequent sections to gather as much information related to cloud computing as possible. Hence, you can treat this article as a beginner's guide to cloud computing, as the title suggests.
The cloud is one of the most popular and trending terms in the IT sector. This comes as no surprise because of the enormous potential that cloud technology offers. Someone new to the technology, or to IT in general, may find the term quite ambiguous at first. If you are an absolute beginner in the cloud and wondering where to start, I would suggest the AWS Cloud Practitioner certification; take the practice exam and analyse your current level of understanding.
The origin of this term dates back to the mid-2000s when global networks and IT infrastructure were evolving. The cloud can be thought of as the whole of the internet (or sometimes as a major chunk of the internet such as remote servers, storage, and so on) from which you can access just about anything. The cloud is a symbolic representation of the internet for starters.
With the increasing demand for products and services over the internet, many businesses are looking for ways to reduce their overhead expenses related to IT infrastructure (hardware and software). Every company small or big in one way or the other uses computers, business applications, and the services on the internet for a majority of their work. Traditionally, the companies would set up, manage and maintain their data centers in which all the business applications run and provide services to their customers.
Cloud computing is a technology that has the potential to completely revolutionize this traditional IT infrastructure design. Cloud computing can be defined as a technology that delivers IT resources such as servers, storage, networking, computing power, software, and analytics over the internet. Companies that provide these resources to other companies are called cloud providers or cloud vendors. The cloud providers have a variety of cloud services and solutions that are highly flexible, innovative, scalable, and economical. Due to these reasons, the majority of companies are moving from traditional IT infrastructure to the cloud for their business needs.
Becoming a certified cloud professional makes you stand out from the crowd. Here are the 10 Top Paying Cloud Computing Certifications in 2021 for the growth of your cloud career!
Cloud computing enables companies to digitize their tangible and resource-consuming applications. Another good thing about cloud computing is that companies only pay for the services they use on the cloud, which is termed pay-as-you-go. Because of this, companies can use resources cost-effectively and focus on their business growth without all the hassle.
Cloud computing is not particularly owned by a single entity. The cloud services, applications, and deployment models differ across companies and business requirements. The deployment model or the underlying cloud architecture decides the overall experience on the cloud. With so many improvements to cloud computing, multiple providers offer various services, models, and applications.
It is important to choose the correct type of cloud computing before implementing business applications. Apart from the deployment model, there are other important factors as well. Primarily, there are three different deployment models, public cloud, private cloud, and hybrid cloud. Many other deployment models such as multi-cloud, community cloud, and others are also available.
Public clouds are third-party-owned cloud environments that offer computing resources such as servers, storage, and other services over the internet. Cloud providers manage, maintain, and provide all the supporting infrastructure required to run all the business applications. Accessing the public cloud on the internet requires a web browser and a business or individual accounts linked to the cloud environment.
Private clouds are hosted exclusively for a company or business in which the services and IT infrastructure are maintained on a private network. For private clouds, the computing resources can be either hosted on the company's data center or in a third-party private cloud environment managed on a private network.
Hybrid clouds are a combination of public and private clouds. The hybrid cloud has an underlying technology that allows business data and applications to be shared between the two types of clouds. Hybrid clouds are very effective because of resource allocation and presence in both private and public clouds. A business may require applications and services that are secure, adaptable, and flexible. In such cases, hybrid clouds offer great value by sharing data and resources across both public and private clouds.
Multicloud is a type of cloud deployment in which multiple public clouds are designed to work together. A company may require different cloud services, resources, and applications from multiple cloud providers to increase flexibility, disaster recovery options, and scalability. In such cases, different types of clouds are used correspondingly, which we refer to as a multi-cloud.
Community clouds are either third-party-owned cloud environments or hosted on exclusive on-site data centers where the resources are shared by multiple companies. The companies that opt to implement community clouds usually would belong to a specific community of businesses that have common security, compliance, applications, and other requirements.
Cloud services are unique to each use case and business requirement. The type of cloud service is one of the crucial factors that need to be considered before implementing into any business needs.
The cloud services are broadly classified into four categories as Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), Software-as-a-Service (SaaS), and Serverless computing. Serverless computing is the newest category of cloud computing service.
IaaS is the most basic type of cloud service which allows companies to rent servers, storage, networks, virtual machines (VMs), and other infrastructure to build their business applications. The cloud provider would support, maintain, manage the services.
Some of the examples of IaaS providers are Amazon Web Services (AWS), DigitalOcean, Google Compute Engine, Microsoft Azure, Linode Alibaba, and OpenStack.
Companies that opt for the PaaS cloud service pay the cloud providers only for the resources they need to build their applications. PaaS vendors provide everything necessary for building an application, such as infrastructure, development tools, and operating systems, over the internet. With this, companies can comfortably design and build applications.
Some of the examples of PaaS providers are AWS, Google App Engine, Heroku, Microsoft Azure, Oracle Cloud Platform, and others.
When the cloud vendors deliver software applications to the companies over the internet, on-demand then it is considered as SaaS. With SaaS, cloud providers host, manage and maintain the software application and the underlying infrastructure. The providers would also perform software, security upgrades. Users can directly access the application from their web browser from anywhere on any device. Most of the online applications that we use regularly fall under the SaaS category.
Some of the examples of SaaS companies are Salesforce, AWS, Google G Suite, Adobe Creative Cloud, Microsoft Azure, ServiceNow, Slack, MailChimp, and others.
In serverless computing, companies or developers pay only for a fragment of the services that they run and use, without worrying about the server or the underlying infrastructure. The implication here is that the cloud provider takes care of everything else and gives the developers the freedom to work on code or develop a function on demand.
Interested in AWS Certifications? Let us help you decide out of 11 AWS Certifications Which One Should YOU Choose?
The serverless platform is broad and, generally, there are two major serverless offerings: Function-as-a-Service (FaaS) and Backend-as-a-Service (BaaS). In FaaS, companies or users work with certain features of an application managed completely by the vendor; they use these features on demand and pay as and when they use them. With BaaS, companies or users get everything required to deploy their code and build the application without worrying about the underlying servers, APIs, databases, storage, and so on.
Serverless applications run on servers, like the other cloud service models of cloud computing. However, they're called serverless since they don't run on dedicated machines, and the companies building the applications don't manage any servers.
Some of the examples of Serverless computing services are Back4App, AWS Lambda, Google Cloud Functions, Cloudflare Workers, and Microsoft Azure Functions.
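To make the FaaS model concrete, here is a minimal AWS Lambda handler in Python; the event shape is a hypothetical example, and the point is simply that the developer writes a function while the provider manages the servers:

```python
# A minimal AWS Lambda handler illustrating the FaaS model: the function runs
# on demand, and the cloud provider manages the underlying servers and scaling.
import json

def lambda_handler(event, context):
    # 'event' carries the request payload; 'context' exposes runtime metadata.
    name = event.get("name", "world")  # hypothetical input field
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```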
In the current times, cloud computing has proved to provide maximum value to businesses and organizations in many ways. Following are some of the key benefits of cloud computing.
Now that we have covered all the introductory information about cloud computing, what is ahead of us?
Well, it is time to ask yourself if pursuing your career in cloud computing resonates with your career aspirations. If so, then the approach has to be systematic and practical. Cloud computing jobs promise wonderful opportunities for IT professionals, entry-level aspirants, and even freshers.
Planning for Microsoft Azure Certification? Go through the New Microsoft Azure Certifications Path in 2021 now before making a decision!
Depending on your career level, you can take appropriate steps to start your career in cloud computing. Typically, you should start by understanding the basics of cloud computing by consuming information from the internet or any viable resources. Set aside a few minutes daily to strengthen your understanding. If you would like to read and learn more on cloud and other interesting topics, visit our Whizlabs Blog page.
Once you have the basic knowledge, you should aim for getting relevant cloud computing skills. You can get technical skills by taking up cloud certification exams. There are many cloud certifications in the market, but choose the certifications that add both in-demand skills and career opportunities. Research thoroughly about each cloud certification and the certification issuing agency or the company.
To prepare for certifications you would have to find learning resources such as online courses, practice tests, and hands-on labs. Finding them on the internet is very easy and quick. Enroll in a course and learn diligently and gain practical skills by signing up for hands-on labs. After you complete online courses and hands-on labs, you could test your skills in practice tests or exams. The good news is that we provide all the above-mentioned learning resources along with free tests for cloud computing and many other disciplines. Explore our courses here.
With proper preparation, you would be ready to take up the actual cloud certification exams and secure good scores. After getting certified, you can immediately start looking for job opportunities in cloud computing. Having one or more in-demand cloud certifications gives you an edge to land a job and through which you can pursue your career in this domain.
Professionally, pursuing a career in cloud computing is highly beneficial if your thoughts and interests align with this technology.
About Abhishek Maurya
Abhishek Maurya is a cloud architect possessing explicit knowledge of the Analytics services offered by AWS. He holds the following certifications:
- AWS Certified Solutions Architect - Associate with 901/1000
- AWS Certified Developer - Associate with 905/1000
- AWS Certified SysOps Administrator - Associate with 873/1000
His current role as a Cloud Product Associate in Whizlabs helps him make his customers understand the power of the cloud and steer clear of all kinds of roadblocks. Further, the following skills help him bring out his best in a data-driven organisation:
- Adept in languages like Python, C/C++, .NET, C#, and JavaScript
- Full control over version control tools like Git, GitHub and Bitbucket
- Agile in working on both Linux and Windows platforms
- Proficient in databases like Oracle, MySQL and PostgreSQL