
BingBang Shows Why Cloud Providers Need Bug Bounties – Analytics India Magazine

Earlier this week, a cloud security researcher from Wiz Research found a huge vulnerability in the Bing content management system. Termed BingBang, this bug exposed misconfigured systems, allowing third parties to access them without authorisation. While the bug was found by a white hat hacker and promptly fixed by Microsoft, the vulnerability itself points to a fatal flaw in the centralisation of modern web services.

Services offered by software companies such as Microsoft or Google are hosted on their own cloud computing infrastructure. While these tech companies have since turned that infrastructure into a product in its own right, it seems there are still ways for outside parties to get past the security controls that cloud service providers put in place.

Earlier this week, Hillai Ben-Sasson, the aforementioned security researcher, published a tweet thread and accompanying blog providing details of the vulnerability. Calling it BingBang, Hillai explained how finding it began with a toggle in the Azure app settings. This toggle allows users to switch an app's permissions from single-tenant to multi-tenant. If an app was set to multi-tenant, anyone could log in to it.
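The underlying risk is that a multi-tenant app will accept sign-ins from any Azure AD tenant unless its own code checks which tenant actually issued the token. Below is a minimal Python sketch of that kind of tenant check; the claim names follow Azure AD's standard token format, but the helper itself is hypothetical, not code from Bing Trivia or the Wiz write-up.

```python
# Illustrative sketch only: claim names ("tid", "iss") follow Azure AD's
# documented access token format, but this helper is hypothetical and assumes
# the token's signature has already been verified by a proper JWT library.
ALLOWED_TENANT_IDS = {"00000000-0000-0000-0000-000000000000"}  # your own tenant ID(s)

def is_from_allowed_tenant(claims: dict) -> bool:
    """Accept a token only if it was issued for one of the allowed tenants."""
    tenant_id = claims.get("tid")
    if tenant_id not in ALLOWED_TENANT_IDS:
        return False
    # Defence in depth: the issuer URL should reference the same tenant ID.
    return tenant_id in claims.get("iss", "")

# A token minted in an attacker-controlled tenant is rejected:
attacker_claims = {
    "tid": "11111111-1111-1111-1111-111111111111",
    "iss": "https://login.microsoftonline.com/11111111-1111-1111-1111-111111111111/v2.0",
}
print(is_from_allowed_tenant(attacker_claims))  # False
```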

Multi-tenancy is one of the secret sauces that make modern cloud service providers (CSPs) work. Using this approach, multiple tenants, or users, can access the same resources without being aware of each other. This allows CSPs to use resources effectively across multiple users, increasing the scalability of their server farms while stretching resources further.

By finding a Microsoft application configured for multi-tenancy, the researcher was able to gain access to the backend of Bing's CMS. Called Bing Trivia, this application provided backend access to a facet of Bing Search covering features such as various quizzes, the On This Day feature, spotlights, and common answers for entertainment queries. By accessing this application and abusing his privileges, Hillai was able to manipulate Bing's search results.

While this is a relatively mild abuse of the bug, the researcher also found that it was possible to create a cross-site scripting (XSS) package and serve it to other applications on the network. Using this exploit, Hillai found that it was possible for attackers to get an authentication token, which could then be used to access Outlook emails, Calendars, Teams messages, and OneDrive files from any Bing user.

Reportedly, the researcher discovered this vulnerability in mid-January and proceeded to inform Microsoft about it. To Microsoft's credit, it quickly responded to the report and fixed the vulnerable applications, awarding the researcher a $40,000 bug bounty under the Microsoft 365 Bounty Program. It also added further authorisation checks to address the issue and made additional changes to reduce the risk of future misconfigurations.

According to Wiz's blog, about 25% of the multi-tenant applications they scanned were found to be vulnerable to this bug. Bing Trivia was just one of the applications they accessed, with the blog stating that there were several other high-impact, vulnerable Microsoft applications. While Microsoft cannot be blamed directly for this vulnerability, it is important to note the risks that come with hosting sensitive applications on a publicly accessible cloud.

This isn't the first time that a vulnerability has been discovered in Azure. In the past three months alone, Microsoft's security response centre (MSRC) has discovered six exploits in Azure. While some of these are low-risk, one of them allows attackers to elevate privileges in Microsoft Outlook, leading to possible credential theft. To this end, Microsoft has also handed out $13.7 million in bounties in 2022, with the biggest reward being $200,000 for a bug found in Hyper-V.

At a glance, CSPs can be subjected to denial-of-service attacks, cloud malware injection attacks, cross-cloud attacks, and insider attacks. This means that cloud service providers need to take multiple security measures to mitigate these possible attacks. However, vulnerabilities sometimes slip through the cracks due to the sheer number of angles from which the problem can be approached.

Azure is not the only platform to suffer from such shortcomings. As part of the GCP vulnerability reward program, Google pays over $313,000 to a handful of security researchers every year. Beyond this, its broader vulnerability rewards program also covers security vulnerabilities discovered in GCP, with the company dishing out $8.7 million in rewards in 2021 alone.

AWS, on the other hand, has not disclosed how much it pays out in bounties, instead tying up with platforms like HackerOne and Bugbounter to discover and fix bugs in its platforms. However, it is clearly a priority, mainly due to the large attack surface that centralised cloud service providers present.

Instituting bug bounty programs is a good place to start, as this will not only monetarily incentivise researchers to find bugs, but also instil a sense of curiosity around the workings of CSPs' offerings. Google's Eduardo Vela, the head of GCP's security response team, said in an interview, "We don't care about vulnerabilities; we care about exploits. The whole idea is what to do beyond just patching a couple of vulnerabilities. This is why we pay $100,000. It is so much more work, and we learn a lot from these exploits."

In 2022, both Google and Microsoft increased their bug bounty payouts to reflect the larger attack surface brought about by their upgrades and new products. As CSPs continue to innovate and accelerate, it seems that security researchers have now become their secret weapon, finding and reporting bugs in platforms with possibly thousands of security flaws.

See the original post here:
BingBang Shows Why Cloud Providers Need Bug Bounties - Analytics India Magazine


IONOS Signs Partnership with AYOZAT Integrating Their Cloud … – StreetInsider.com


London, England--(Newsfile Corp. - April 3, 2023) - AYOZAT partners with IONOS to scale up its deep tech product, AYOZAT TLC, "The Layer Cake". It is the layered mechanism that powers, stores, and distributes different ecosystems and market sectors, securely and reliably.

Ayozat & IONOS Signs Partnership


AYOZAT TLC initially launched within the media industry and saw a highly successful 24-month commercial trial with leading brands. This led to 150 channels being processed and distributed, including 5 of its own across the Sky network and OTT, plus multiple streaming platforms, and premium live sporting brands. Live and pre-recorded media is captured, processed, monetized, delivered, and analysed in a seamless workflow.

Integrating IONOS's compute engine with AYOZAT TLC has made a powerful tool, enabling unlimited compute resources on demand with endless technology layers, for any sector or market, anytime, delivered anywhere with extremely low latency.

The partnership will include promotion of each company's products and services, along with advertising technology solutions and a content delivery network from Ayozat, and cloud computing and hosting from IONOS.

IONOS is the leading European digitalisation partner. The company serves six million customers and operates across 18 markets in Europe and North America, with its services being accessible worldwide. IONOS acts as a 'one-stop shop' for all hosting and cloud infrastructure needs.

"It has been an incredible journey to see AYOZAT start with IONOS. Starting on our base systems and now scaling on our Cloud Compute Engine integrating with AYOZAT TLC. Offering their clients this platform provides AYOZAT the ability to scale their workloads to keep up with their media platforms exponential growth as well as expand into new markets like finance and iGaming," said Sab Knight of IONOS.

With a strong foothold in the media sector, AYOZAT has begun expanding into the finance, banking, iGaming, and governmental sectors, which will also be supported by IONOS.

"Having IONOS' compute engine married to our deep tech mechanism, AYOZAT TLC, opens a myriad of opportunities for both our respective clients and new entries to the market looking for transparency within deep technology," added Umesh Perera, founder of AYOZAT.

Collaborating at this level allows both companies to expand their brand recognition and exposure across the sectors they operate in, which includes products and skill sets.

https://www.ionos.com/
https://ayozat.co.uk/

For further information contact: Antonio Marazzi - [emailprotected] | Gabriella Szecsi - [emailprotected]

To view the source version of this press release, please visit https://www.newsfilecorp.com/release/160925

Read the original post:
IONOS Signs Partnership with AYOZAT Integrating Their Cloud ... - StreetInsider.com


Industry Insights: Navigating the future of media asset management … – NewscastStudio


The rapid evolution of the media landscape has created an increasing demand for efficient, scalable, and secure broadcast storage and media asset management (MAM) solutions.

As part of our Industry Insights series, leading vendors gathered to discuss the current challenges and explore the potential of cloud-based MAM systems along with emerging technologies such as artificial intelligence (AI) and machine learning (ML) to address these pain points. Central to the discussion is the importance of seamless collaboration, navigating the complex storage options landscape, managing high operational costs for new formats, and prioritizing flexibility and openness in MAM systems.

The roundtable participants acknowledge that cloud adoption for MAM and storage has gained significant traction, primarily due to the coronavirus pandemic, which emphasized the need for remote access and greater flexibility. However, professionals in the industry often operate within storage silos and face challenges in unlocking the full value of stored assets for distribution and monetization. As a solution, hybrid cloud models, which combine both on-premise and cloud storage, are emerging as a practical and efficient approach for many organizations.

In addition to embracing cloud solutions, the impact of AI and ML on broadcast workflows is becoming increasingly apparent.

These technologies have the potential to streamline operations through improved metadata management, automatic transcription and translation, and intelligent indexing of content. This allows media professionals to focus on creating and delivering high-quality content in a competitive market. As the industry evolves, leveraging these cutting-edge technologies will be essential for success and maintaining a competitive edge.

Sunil Mudholkar, VP of product management, EditShare: Current pain points, I think, are focused on making collaboration easier across locations that are more dispersed than ever. Whether this is performant remote access to media or keeping NLE projects in sync across tools and creators/producers.

Jon Finegold, CMO, Signiant: The sheer variety of storage options and MAM vendors makes it a very confusing landscape. There are so many different choices between on-prem and cloud, different tiers of storage, file and object storage, etc. IT teams have a lot of flexibility to balance cost and performance, but that choice also creates complexity.

Toni Vilalta, director of product development, VSN: New formats, such as 4K or 8K, make the operational costs too high. With cloud or hybrid storage architectures, MAM systems should provide support for critical security services like encryption or cryptographic protocols. Another challenge of MAM systems is to be able to manage enormous amounts of content in storage, adding AI capabilities for automatic cataloging.

Sam Peterson, COO, Bitcentral: There is no one-size-fits-all approach because customers and the industry as a whole will have varying business requirements, and they're constantly evolving depending on their needs and the market landscape. For some in the industry, there is also a resistance to change, which is undermining successful projects. Changing these attitudes can have a positive impact going forward.

Andy Shenkler, CEO and co-founder, TMT Insights: As people have shifted their supply chains to become predominantly cloud-based, their assets continue to exist in both a legacy on-prem storage model and single or multi-cloud. Processing of content must be co-located with your assets in order to be economically viable. Large content libraries are not easily migrated, and oftentimes require clean-up before being viable for automated processing, all of which comes at a cost of both money and time.

Aaron Kroger, product marketing manager for media workflows, Dalet: Many people find themselves with aging on-premises infrastructure managed by an out-of-date monolithic MAM that is lacking the connectivity and scalability they need to achieve their business goals. Replacing this equipment comes at a high cost and leads people towards the cloud. While the cloud can alleviate many of the current pain points, it's not without creating some new ones and raising questions such as: what are the true costs, how do I migrate all my data, and is my data secure?

Savva Mueller, director of business development, Telestream: In this constantly shifting market, media companies do not want to be locked into any one vendor's solution, and they need their content to be accessible to all of their business systems instead of being stored in a proprietary format. For these reasons, they are looking for more open approaches to asset management and storage.

Stephanie Lone, director of solutions architecture in media and entertainment, AWS: While our M&E customers are in varying stages of their digital transformation journeys, common pain points include: operating in storage silos; navigating the sheer volume of assets that require storage; unlocking the value of these stored assets for distribution and monetization; and localizing content for broader distribution. Presently, many of our customers operate multiple lines of business that use different MAM and storage solutions, making it challenging to uncover and unlock the value of all the assets across their enterprise. Often, they find that their on-premises storage capacity can't accommodate the growing volume of video footage being acquired.

Melanie Ciotti, marketing manager, Studio Network Solutions: Lack of speed, collaboration, ease-of-use, and organization are repeat workflow offenders, and creative teams are looking to solve those shortcomings when they set out to find their first shared storage and MAM solution. What they don't always consider is the flexibility of that system, which becomes an issue after it's been in use for some time. Accessing the shared storage and MAM system remotely, adding users easily and cost-effectively, and scaling the system as your team grows are all pain points we see when well-established teams come to us to fix their existing storage or MAM workflow.

Geoff Stedman, CMO, SDVI: Users must select an archive format, a tape format, a tape library and drives, and a hierarchical storage management system. They also must continually keep track of milestones such as hardware and software end-of-life, and tape format or drive migrations. MAM systems were typically deployed to manage what assets were stored where, but most have significant gaps in metadata, making it difficult to find what a user is looking for.

Julián Fernández-Campón, CTO, Tedial: The physical location of files and the obsolescence of hardware, leading to hardware replacement and content migration from time to time.

Alex Grossman, VP of product management and marketing, Perifery, a division of DataCore: One of the most common pain points we hear is the overall complexity in setting up and using most MAM systems, and the ongoing difficulty in configuring for change.

Sunil Mudholkar: I think it's practically mainstream at this point. Virtually every opportunity we are involved with has some sort of cloud component, whether it be MAM or storage or both. Use cases range from simple archival to full cloud editing.

Jon Finegold: On the MAM side, it seems most deployments are still on-prem, but there are some innovative approaches to media management leveraging cloud technology. Media Engine isn't a MAM, but it does leverage the power of the Signiant Platform and cloud technology to offer lightweight media management capabilities in a disruptive way.

Roberto Pascual, head of sales, VSN: The adoption of cloud technology in terms of MAM and storage has accelerated over the last four years, especially after the Covid-19 outbreak, and it will continue, as we discussed a few months ago at the FIAT/IFTA World Conference.

Sam Peterson: MAM has generated more interest in recent times, and we are seeing more and more media companies make the transition to the cloud. This was accelerated due to the pandemic, but its evolution in a short space of time has really helped the whole value chain thrive in this new era for broadcasting.

Andy Shenkler: Cloud adoption for core MAM services has finally reached a crescendo, and most go-forward activities are now being done in the cloud. Along with that adoption is a cloud-first model for storage, but trepidation still exists around mismanaged costs and lack of control. There is still an emotional comfort that comes from the fixed-cost model that has been the predominant way on-prem storage has been thought of for so long.

Aaron Kroger: The industry is well on its way to transitioning to the cloud, but it's happening in steps. Having a cloud-native solution such as Dalet Flex that can also be deployed on-premises or hybrid is a popular option, allowing for the best of both worlds. There are still some links in the chain that have not migrated to the cloud, so a hybrid solution can create better connectivity to those today and be ready for the transition to a fully cloud-hosted business in the future.

Savva Mueller: Pre-2020, cloud adoption was still fairly low. While many customers were investigating hosting critical systems and storage in the cloud, very few had near-term plans to do so, and even fewer had already made the move. The Covid pandemic accelerated the move to cloud storage and cloud processing. This was most pronounced in North America. Other regions have seen a slower adoption.

Stephanie Lone: Challenges remain in defining the best practices for how the industry should build media supply chains for enhanced localization when it comes to MAM and storage. To this end, the International Broadcasting Convention (IBC) Accelerator Initiative Cloud Localization Blueprint is working to standardize practices and formats to ultimately empower the entire industry to save time and money.

Melanie Ciotti: The cloud is everywhere: it's on our phones; it's in our workflows; it's omnipresent. And while the cloud has made its way into a majority of broadcast and post-production workflows across the nation (and around the world), very rarely is the cloud managing 100% of that workflow. It is much more common to see a hybrid approach with both on-premise and cloud storage working together, which truly offers the best of both worlds.

Geoff Stedman: Today, the cloud has become a central location for media storage, as users have become much more comfortable with the reliability, security, and affordability of the cloud for content archives. In many cases, what started out as a secondary, or backup, location for content storage turned into the primary storage location as people discovered the ease with which they could access and collaborate on content from anywhere.

Julián Fernández-Campón: Storage in the cloud has been adopted for some specific use cases, but not widely. Often a second, low-res copy is used for redundancy, or native storage is used for workflows that are executed in the cloud, such as massive distribution or collaboration workflows.

Alex Grossman: Many organizations adopted a public cloud-first initiative in 2018 or 2019, and archive was the most often preferred usage model. News and live broadcast saw the adoption of production/editing, but there has been a retraction due to unpredictable costs.

Sunil Mudholkar: MAM and storage can become easier to access for clients in varying sites. Utilizing tiered cloud storage, for both block- and object-based data, in an intelligent manner can be very cost effective for those that like OPEX-style financial models with predictable infrastructure/software expenses.

Jon Finegold: Elasticity is probably the biggest benefit, being able to manage surges. If you have lots of projects at once or a big influx of assets at one time, the cloud gives you tremendous elasticity. There are cases where cloud can be a lot more economical, but that depends on a lot of factors and your use case.

Roberto Pascual: Firstly, the cloud allows maximizing flexibility as well as minimizing capital investment, which is significantly appreciated in times of upheaval or constant adaptation to new viewer demands. Secondly, although cost can be high, hybrid solutions continue to evolve to address it. Finally, maintenance and security might be one of the unexpected benefits of moving to the cloud.

Sam Peterson: There are many benefits for broadcasters and other media companies including greater flexibility and reliability. Cloud also enables a level of scalability that would be otherwise unaffordable through on-premise storage. Moving to the cloud provides the added capabilities regarding remote access to content and tools, which allows the industry greater opportunity to work more collaboratively.

Andy Shenkler: A clear benefit from moving to the cloud is the ability to scale dynamically without needing to invest ahead of an activity or to procure capacity for peak loads, which becomes costly and sits idle for the majority of the time. In addition, flexibility around business continuity without needing to stand up complete duplicate physical footprints certainly changes the mindset about your business and its options.

Aaron Kroger: Moving your MAM to the cloud enables you to have a highly accessible, auto-scalable, metadata-rich library that will decrease your TCO while increasing your collaboration and, ultimately, your revenue. Being able to easily access content from anywhere allows you to reuse content already captured in new and creative ways or even directly monetize it. With cloud storage, you can automatically scale your storage volume and tier as you need, allowing you to find the correct balance between storage costs vs retrieval time.

Savva Mueller: The trend toward remote work has been a major factor in the increased adoption of cloud services, since cloud services are designed to be accessible anywhere. Hosting systems and storage in the cloud also provides operational benefits, including built-in business continuity through data replication and the reduction of organizations' data center footprints and associated costs.

Stephanie Lone: Elasticity is one key benefit, as providing live coverage of tent-pole events such as the Super Bowl and March Madness to large-scale audiences requires the ability to quickly spin up resources on demand. The cloud enables content providers to deploy hundreds, or even thousands, of servers in just minutes and then promptly spin them back down as their traffic patterns return to normal. Cost savings is another cloud advantage, as it alleviates customers' need to wade through the lengthy hardware purchasing and provisioning processes required to house data centers, which typically take months to plan, acquire, install, and provision.

Melanie Ciotti: When done right, a cloud or hybrid cloud workflow can be a major catalyst for productivity and creativity. The cloud can enable better remote editing, archival, file sharing, mobile workflows, and so much more for a production team. Having no hardware to manage can also be a benefit.

Geoff Stedman: Companies of all sizes are realizing that they can become more efficient and agile when they take advantage of cloud technologies. With content in the cloud, it can be easily standardized into a common format, and the metadata can be enriched using cloud-based AI tools. Moving their archives and media processing to the cloud, even at relatively smaller scale, makes monetizing that content for the plethora of distribution platforms now available much easier and faster.

Julián Fernández-Campón: Benefits include redundancy, which is provided naturally for the storage service, scalability, and accessibility.

Alex Grossman: While most would say OpEx vs CapEx, the real benefits are derived from taking advantage of the apps and services provided by the cloud, including AI/ML functionality.

Sunil Mudholkar: AI is making it easier to add value to content through extended metadata with great accuracy and volume, reducing the need for manual resources. It's also speeding up aspects of the remote workflow with features like automatic transcription.

Jon Finegold: There are some very practical applications of machine learning and AI that are in play today. One example is Signiant's use of machine learning in its intelligent transport protocol to determine the most efficient way to move data over a network at any moment in time. There's certainly a lot of buzz about using artificial intelligence to automatically translate content, tag content, and identify images in videos; that mostly seems to be in the early experimental phase, but we're on the cusp of some of that capability being used in more widespread ways.

Toni Vilalta: Thanks to AI, human tasks can be focused on supervising the metadata generated automatically, instead of wasting time and resources in manual cataloging. The automatic transcription and translation can save a lot of time too, and the closed captions or subtitle files can be easily generated, delivering packages to traditional broadcast or new multiple non-linear platforms. With machine learning capabilities, broadcast and media professionals can train their own archiving systems and create their own term structure, without worrying about the type of content or the localization of their companies.

Sam Peterson: AI and machine learning have the potential for significant positive impacts on broadcast workflows, as they are helping broadcasters make more informed decisions. One application where broadcasters are using AI and ML technology today is intelligent indexing of content. These techniques are also improving workflow efficiencies, which is crucial in today's demanding market, allowing broadcasters time to focus on creating new products/productions.

Andy Shenkler: At the moment, machine learning activities around broadcast workflows remain heavily focused on reducing repetitive human tasks (i.e. identifying commercial breaks, credit points, augmented QC functions), which is not to say there aren't other, more sophisticated processes being deployed. As both the technology and the skillsets of the people leveraging that technology improve, we will begin to see greater adoption around compliance editing, localization, and real-time enriched consumer experiences.

Aaron Kroger: AI enables you to identify what is in your content, what was said, who is in it, what logos are shown, and more. Today, this also allows you to increase the automation of your existing workflows. With richer metadata, you can trigger automated processes to send those clips to the next step in the process and the relevant audiences.

Savva Mueller: Currently, speech-to-text is being widely used, particularly to provide closed captioning and subtitles for content. The quality of these services has improved dramatically over the past decade. Visual recognition is not being used heavily today, both because of its costs and its effectiveness. Going forward, we expect that visual recognition services will become more effective, and that systems will provide more efficient ways to implement these services to reduce their costs.

Melanie Ciotti: AI and ML continue to astound me. ChatGPT and friends are making waves in every industry with well-written, well-researched content for virtually any purpose, and AI can add relevant metadata tags to video clips in ShareBrowser MAM at the click of a button, making decades' worth of untagged media searchable within the media asset management system. AI is the present and future of broadcast workflows, and I'm waiting with bated breath to see what it does next.

Geoff Stedman: The use of AI and machine learning is still in its infancy for broadcast workflows, although it is starting to have a positive impact. One way that AI tools are being used is to analyze video and audio content to extract information about the content that can be used to enrich the metadata for those assets. AI is also being used to perform automated QC reviews, with results that then guide operators for manual reviews of only questionable points.

Julián Fernández-Campón: The impact of AI and machine learning is increasing in some use cases, such as video analysis, image recognition, and speech-to-text, and there are many features that currently exist as a result of AI models based on GPT-3 (OpenAI) and others that claim to be able to do summaries or create pictures or videos.

Alex Grossman: The impact has been minimal compared to where it will go. As applications take advantage of AI/ML, the efficiencies provided will drive faster adoption.


Read the rest here:
Industry Insights: Navigating the future of media asset management ... - NewscastStudio


The ultimate guide to finding the best Cloud Computing Course … – Udaipur Kiran

With the world becoming more digitalized, cloud computing has become extremely popular in the job market. If you are still unfamiliar with the technology, cloud computing can be understood as the delivery of on-demand computing services, including servers, storage, databases, networking, software, analytics, and intelligence, over the internet.

With this innovative technology, businesses can receive technology services through the cloud without purchasing or maintaining the supporting infrastructure. The cloud provider manages all of it, allowing companies to concentrate on their core skills.

The cloud market is expected to grow at 15.80% globally through 2028, so this is the most crucial time to gain cloud skills. To fit into the role of a cloud computing professional, you will need the right knowledge and skills.

Enrolling in a good online cloud computing course will increase your chances of kick-starting your career in the cloud computing industry correctly. So let's look at how you would choose the right path.

Cloud computing is a vast field that requires diverse skills, from basic knowledge to advanced technical skills. Therefore, consider your present level of expertise and the areas you need to grow before looking for a course.

Many platforms, such as Simplilearn, Coursera, and Udemy, provide cloud computing courses. To choose the platform that best suits your needs, it is best to investigate each one, as each has distinctive features and course offers.

When you've chosen a course that piques your interest, read testimonials from previous students. This will give you an idea of the course's quality and whether it fits you well.

The best cloud computing courses offer hands-on experience, so look for that. To improve your skills, look for courses that include real-world exercises, case studies, and projects.

When selecting a course, it's essential to consider the instructor's knowledge and experience in cloud computing. Seek teachers who have substantial industry expertise or who have held positions at respectable cloud computing organizations.

Make sure the course you choose covers the most important subjects you need to learn, and check that the course description and syllabus contain all the crucial facets of cloud computing. This will, of course, also include the tools covered by the course.

To get certified in cloud computing, you should look for courses that prepare you for the certification exam. Luckily, many platforms offer courses that provide certification upon completion, so its worth exploring these options.

When selecting an online cloud computing course, price is a crucial factor to consider. Compare the costs of various courses, and remember to account for extra expenses like exam and certification fees.

Around holidays and other occasions, several platforms provide discounts and promotions on their courses. To obtain the greatest deal, keep an eye out for these promotions and save a little extra while still getting the same knowledge.

Lastly, consider how much time you have to commit to the course. Choose a class that fits your schedule and is flexible enough to accommodate your learning style. After all, time constraints are a real consideration when choosing an online course.

While getting a certification in cloud computing will require you to enroll in a paid course that will cover all the topics and fully upskill you for the role, you can always start easy with cloud computing free courses.

Here are the best cloud computing courses you can take up for free:

Basic and advanced cloud computing ideas are covered in this free course by Simplilearn. You will discover more about cloud hosting, services, and architecture while also learning the principles of cloud computing, the cloud computing lifecycle, and the essential ideas behind Amazon, Azure, and Google Cloud Platform.

Skills you will learn in this course:

Whether open-source or employed by businesses, cloud computing systems today are created utilizing a shared set of fundamental methodologies, algorithms, and design philosophies based on distributed systems.

This Coursera training program teaches you all about these basic concepts of distributed computing for the cloud.

Skills to be gained from this course:

The AWS course is a great way to get started on the Amazon learning plan for any beginner in the cloud computing job market. Plenty of on-demand courses are available on the platform, with the only limitation being focused on the AWS cloud computing systems.

Skills you learn from AWS:

With EdX, you will learn the fundamentals of cloud computing, including cloud architecture, security, emerging technologies, and potential jobs within weeks. Additionally, after completing the course, you will earn a skill badge indicating your knowledge and expertise in cloud computing.

What you will learn during this training:

We already know there is a big need for qualified people in the fast-expanding cloud computing industry. Many free and paid online courses can assist you in gaining the abilities and information required to excel in this industry.

Choosing the right program for you is essential as it might affect what you learn and how you learn it. So use this guide and pick the right course to enroll in.

See the rest here:
The ultimate guide to finding the best Cloud Computing Course ... - Udaipur Kiran


Cloud Migration Services Market to Reach USD 71.05 Billion by … – GlobeNewswire

Pune, March 31, 2023 (GLOBE NEWSWIRE) -- As per SNS Insider, the Cloud Migration Services Market had a worth of USD 11.54 billion in 2022 and is expected to grow at a compound annual growth rate (CAGR) of 25.5% between 2023 and 2030, eventually reaching a valuation of USD 71.05 billion.
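Those headline figures are internally consistent: compounding the 2022 base of USD 11.54 billion at 25.5% a year over the eight years to 2030 lands at roughly USD 71 billion. A quick arithmetic check, included here as an illustration rather than taken from the report itself:

```python
# Sanity-check the reported market sizing: USD 11.54bn (2022) at 25.5% CAGR to 2030.
base_value_bn = 11.54
cagr = 0.255
years = 2030 - 2022  # 8 years

projected_bn = base_value_bn * (1 + cagr) ** years
print(f"Projected 2030 market size: USD {projected_bn:.2f} billion")  # ~USD 71 billion
```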

Market Overview

Cloud migration services refer to the specialized set of services that help businesses move their applications, data, and other IT assets from on-premises infrastructure to the cloud. These services are essential for companies that want to harness the benefits of cloud computing, including greater scalability, flexibility, and cost savings. Cloud migration services also help businesses achieve greater agility, enabling them to respond more quickly to changing market conditions and customer demands. By moving their IT infrastructure to the cloud, companies can scale up or down their resources as needed, without the need for significant capital investments.

Market Analysis

The rise of hybrid cloud solutions has brought significant changes in the way businesses approach their IT infrastructure. As more and more companies seek to harness the benefits of both public and private clouds, the demand for cloud migration services has surged. This is because cloud migration is a complex process that requires a specialized set of skills and expertise. In addition to the adoption of hybrid cloud solutions, the need for business agility has become more crucial than ever. By migrating their IT infrastructure to the cloud, businesses can achieve greater flexibility and scalability, enabling them to adapt to changes in the marketplace more effectively.

Get a Sample Report of Cloud Migration Services Market@ https://www.snsinsider.com/sample-request/1225

Key Company Profiles Listed in this Report Are:

The key players include Amazon Web Services (AWS) Inc., International Business Machines (IBM) Corporation, Microsoft Corporation, Google LLC, Cisco Systems Inc., NTT Data Corporation, DXC Technology Company, VMware Inc., Rackspace Hosting Inc., Informatica Inc., WSM International, Zerto Ltd., Virtustream Inc., RiverMeadow Software Inc., OpenStack Inc., and others.

Impact of Russia-Ukraine Conflict

The ongoing conflict between Russia and Ukraine has had a negative impact on the cloud migration services market. The instability and uncertainty in the region have led to delays and disruptions in service delivery, while the increase in the cost of cloud services has made cloud migration less attractive to companies. The imposition of sanctions has further reduced the options available to companies that were planning to migrate their data to the cloud.

Cloud Migration Services Market Report Scope:

Do you need any customization or Enquiry about this research report@ https://www.snsinsider.com/enquiry/1225

Key Regional Developments

North America is the leading region in the cloud migration services market and is expected to maintain its dominance throughout the forecast period. The growth of the market in this region is due to the high adoption rate of cloud migration services by businesses and organizations. This trend is driven by various factors, including the availability of advanced technological resources, innovative IT infrastructure, and a mature IT industry. The North American market is well-established and mature, owing to the region's strong technological and economic foundations.

Key Takeaway from Cloud Migration Services Market Study

Recent Developments Related to Cloud Migration Services Market

Table of Contents

1. Introduction

2. Research Methodology

3. Market Dynamics

4. Impact Analysis

5. Value Chain Analysis

6. Porter's 5 Forces Model

7. PEST Analysis

8. Cloud Migration Services Market, By Service Type

9. Cloud Migration Services Market, By Applications

10. Cloud Migration Services Market, By Organization Size

11. Cloud Migration Services Market, By Deployment Mode

12. Cloud Migration Services Market, By Industry Verticals

13. Regional Analysis

14. Company Profiles

15. Competitive Landscape

16. Conclusion

Buy Single-User PDF of Cloud Migration Services Market Report@ https://www.snsinsider.com/checkout/1225

About Us:

SNS Insider is one of the leading market research and consulting agencies that dominate the market research industry globally. Our company's aim is to give clients the knowledge they require in order to function in changing circumstances. In order to give you current, accurate market data, consumer insights, and opinions so that you can make decisions with confidence, we employ a variety of techniques, including surveys, video talks, and focus groups around the world.

Access Complete Report Details@ https://www.snsinsider.com/reports/cloud-migration-services-market-1225

See more here:
Cloud Migration Services Market to Reach USD 71.05 Billion by ... - GlobeNewswire


AlienFox malware caught in the cloud hen house – The Register

A fast-evolving toolkit that can be used to compromise email and web hosting services represents a disturbing evolution of attacks in the cloud, which for the most part have previously been confined to mining cryptocurrencies.

The AlienFox toolkit is being hawked on Telegram as a way to compromise misconfigured hosts on cloud services platforms and harvest sensitive information like API keys and other secrets, according to security shop SentinelOne.

It's a relatively fresh turn in opportunistic cloud attacks, Alex Delamotte, senior threat researcher with SentinelLabs, wrote in a report today.

"AlienFox tools facilitate attacks on minimal services that lack the resources needed for mining," she wrote. "By analyzing the tools and tool output, we found that actors use AlienFox to identify and collect service credentials from misconfigured or exposed services. For victims, compromise can lead to additional service costs, loss of customer trust, and remediation costs."

It can also open the doors to further criminal campaigns. Later versions of AlienFox include scripts that automate malicious operations using the stolen credentials, such as establishing persistence and allowing privilege escalation in AWS accounts. Another script automates spam campaigns through victim accounts and services.

Through AlienFox, attackers are able to collect lists of misconfigured hosts through scanning platforms like LeakIX and SecurityTrails, exhibiting an increasingly common trait among threat groups: the use of legitimate security products, such as the threat emulation tool Cobalt Strike, in their malicious operations.

They can then use multiple scripts in the toolkit to steal sensitive information from the misconfigured hosts on cloud platforms such as Amazon Web Services and Microsoft Office 365. While the AlienFox scripts can be used against a range of web services, they primarily target cloud-based and software-as-a-service (SaaS) email hosting services, Delamotte wrote.

Most of the misconfigurations that are exploited are tied to a number of web frameworks, including Laravel, Drupal, WordPress, and OpenCart. The AlienFox scripts check for cloud services and include a list of targets generated by a separate script, such as grabipe.py and grabsite.py. The targeting scripts use brute-force methods for IPs and subnets, and web APIs for open-source intelligence platforms like SecurityTrails and LeakIX.

When a vulnerable server is found, the miscreants move in for the sensitive information. SentinelOne found scripts targeting tokens and other secrets from more than a dozen cloud services, not only AWS and Office 365 but also Google Workspace, Nexmo, Twilio, and OneSignal.
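Since the initial foothold is typically nothing more exotic than a configuration file left web-readable (Laravel .env files being the canonical example), one practical mitigation is to check your own hosts for the same exposure. Below is a minimal defensive sketch in Python; the path list is an assumption about commonly exposed files, not something taken from the AlienFox code.

```python
"""Check whether a site you own publicly serves files that commonly leak secrets.

Defensive, illustrative sketch only: the path list is an assumption about
commonly exposed configuration files, not derived from AlienFox itself.
"""
import urllib.error
import urllib.request

COMMON_SECRET_PATHS = [".env", ".env.bak", ".env.backup", "wp-config.php.bak"]

def find_exposed_paths(base_url: str) -> list[str]:
    """Return any probed URLs that respond successfully (i.e. are exposed)."""
    exposed = []
    for path in COMMON_SECRET_PATHS:
        url = f"{base_url.rstrip('/')}/{path}"
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                if resp.status == 200:
                    exposed.append(url)
        except urllib.error.URLError:
            pass  # unreachable or not served: good
    return exposed

if __name__ == "__main__":
    print(find_exposed_paths("https://example.com"))  # only scan hosts you own
```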

AlienFox is a modular open source toolkit that is highly adaptable. While primarily available via Telegram, some modules can be found on GitHub, which can lead to constant adaptation and multiple variants being used, according to the report.

"The evolution of recurring features suggests the developers are becoming increasingly sophisticated, with performance considerations at the forefront in more recent versions," Delamotte wrote.

Given the massive amounts of sensitive data in cloud-based email and messaging systems that now are at "severe risk of exposure," the threat represented by AlienFox is a worry, according to Dan Benjamin, co-founder and CEO of cloud data security startup Dig Security.

"The emergence of toolkits like AlienFox underscores the increasing sophistication of attacker networks and their collective ability to cause harm and disruption," Benjamin told The Register. "This is a very concerning trend where the attackers behind AlienFox are adapting the tool to be effective across more targets, particularly those in use widely across enterprises."

SentinelOne has detected three versions of AlienFox dating back to February 2022, and some of the scripts found have been tagged as malware families by other researchers, such as Androxgh0st by Lacework.

"It is worth noting that each of the SES-abusing toolsets we analyzed targets servers using the Laravel PHP framework, which could indicate that Laravel is particularly susceptible to misconfigurations or exposures," she wrote.

AlienFox v4 is organized differently than the others. For example, each tool gets a numerical identifier, such as Tool1 and Tool2, and some new tools suggest the developers are looking for new users or augmenting what existing toolkits can do. One new tool checks to see if email addresses are linked to Amazon retail accounts; if not, the script will create a new Amazon account using the email address. Another automates the generation of cryptocurrency wallet seeds for Bitcoin and Ethereum.

Given its ongoing evolution, it's likely that AlienFox will be around for a while.

"Cloud services have well-documented, powerful APIs, enabling developers of all skill levels to readily write tooling for the service," Delamotte wrote. "The toolset has gradually improved through improved coding practices as well as the addition of new modules and capabilities."

View post:
AlienFox malware caught in the cloud hen house - The Register


A CEOs tactical guide to driving profitable growth – Bessemer Venture Partners

In the software world, a growth at all costs mindset has given way to profitable growth. Building a venture-backed business was easier when only growth mattered. But now CEOs need to drive both growth and profitability. In the public markets, the companies with the highest growth efficiency (which we define as ARR growth rate + free cash flow margin) command the highest multiples:
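As a quick illustration of the growth-efficiency metric defined above, here is a minimal sketch with hypothetical figures (not drawn from the public-market data behind the chart):

```python
# Growth efficiency as defined above: ARR growth rate + free cash flow margin.
# Figures below are hypothetical.
arr_growth_rate = 0.35  # 35% year-over-year ARR growth
fcf_margin = 0.10       # 10% free cash flow margin

growth_efficiency = arr_growth_rate + fcf_margin
print(f"Growth efficiency: {growth_efficiency:.0%}")  # 45%
```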

In this guide, we unpack a software profit and loss statement (P&L) into its component parts. Much has been written about how to drive growth, but here, we provide tactical steps for CEOs to follow in order to drive more efficiency and profitability.

Gross margin acts as the limit to the ultimate profitability of your business and has an enormous impact on your valuation. It's a great place to start because improving gross margin rarely comes at the expense of investment in growth.

The median gross margin for high-growth public cloud companies is 77%.

Case example: Take two identical software businesses. One is an 80% gross margin business that operates at 40% profit margins at scale. Holding all other factors constant, that same business with 60% gross margins will have only 20% profit margins. The difference between 80% gross margin and 60% gross margin cuts the value of the business by at least 50%.
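The arithmetic behind that case example, as a sketch: holding "all other factors constant" means operating expenses stay at 40% of revenue in both scenarios, so every point of gross margin lost comes straight out of profit margin.

```python
# Reproducing the case example: identical operating expenses, different gross margins.
revenue = 100.0
operating_expenses = 40.0  # held constant across both scenarios (40% of revenue)

for gross_margin in (0.80, 0.60):
    gross_profit = revenue * gross_margin
    profit_margin = (gross_profit - operating_expenses) / revenue
    print(f"{gross_margin:.0%} gross margin -> {profit_margin:.0%} profit margin")
# 80% gross margin -> 40% profit margin
# 60% gross margin -> 20% profit margin
```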

Cloud Hosting Costs

Implementation

Customer Success and Support

The median amount of ARR that an Enterprise CSM manages is $2 million to $5 million.

Customer Profitability

As a guiding principle, we suggest you use CAC Payback benchmarks to assess your GTM efficiency. CAC payback benchmarks range by scale of the business and whether you are selling into an SMB or enterprise customer base, as sales cycles differ across segments. Here are the good-better-best benchmarks we have aggregated from private cloud companies:
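Before comparing against those benchmarks, it helps to compute the metric consistently. Here is a sketch using one widely used, gross-margin-adjusted definition of CAC payback, with hypothetical inputs; confirm it matches the definition behind your benchmark set.

```python
# CAC payback in months, using a common gross-margin-adjusted definition.
# All inputs are hypothetical.
sales_marketing_spend = 2_000_000  # prior-period S&M spend ($)
new_arr_added = 3_000_000          # new ARR attributable to that spend ($)
gross_margin = 0.77                # median for high-growth public cloud companies

monthly_gross_profit = (new_arr_added * gross_margin) / 12
cac_payback_months = sales_marketing_spend / monthly_gross_profit
print(f"CAC payback: {cac_payback_months:.1f} months")  # ~10.4 months
```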

At scale, one of the most powerful drivers of S&M efficiency is improving retention. It is always cheaper to retain and upsell existing customers than to acquire new customers. Moreover, if gross retention is low, refilling a leaky bucket makes it tough to maintain profitable growth.

Sales

Marketing Efficiency

Research & Development (R&D) is the most fraught area to cut spending to drive profitability: overcutting in R&D can lead to short-term wins but degrade competitive advantage over the long-term. Management teams should apply discretion when looking at R&D benchmarks given unique factors such as product complexity and market competitiveness.

The median R&D as a % of revenue is 20% for high-growth public cloud companies.

G&A is a ripe target to drive efficiency given that it is a cost center.

The median G&A as a % of revenue is 12% for high-growth public cloud companies.

Throughout your company, people-related costs tend to be the biggest area of expense. Making sure you are staffed appropriately across the entire organization is critical.

Pricing is one of the most important drivers of revenue growth and profitability. It is one of the most efficient ways to drive margin because any price increase drops straight to the bottom line. If you're a SaaS leader looking for new levers of revenue growth, take our B2B SaaS Pricing course.
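To see why a price increase "drops straight to the bottom line", consider a minimal sketch with hypothetical figures, assuming costs are unchanged by the increase:

```python
# Hypothetical: a 5% price increase with costs held constant.
revenue = 100.0
total_costs = 90.0  # COGS + opex, assumed unchanged
price_increase = 0.05

margin_before = (revenue - total_costs) / revenue
new_revenue = revenue * (1 + price_increase)
margin_after = (new_revenue - total_costs) / new_revenue
print(f"Operating margin: {margin_before:.1%} -> {margin_after:.1%}")  # 10.0% -> 14.3%
```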

The benchmarks we leverage for this article do not include stock-based compensation. However, it is important for founders to understand the impact of stock-based compensation. Although it does not immediately impact cash flow and profitability, it will inevitably do so at a future point. Further, looking at benchmarks inclusive of stock-based compensation also mitigates any noise resulting from different stock vs cash compensation structures across companies (e.g., an otherwise identical company that pays 70% stock and 30% cash will look much more efficient than one that pays 50% stock and 50% cash). Lastly, the role of stock-based compensation is becoming an increasingly common topic of discussion for public market investors and has an increasing impact on valuation. We recommend benchmarking your business by including stock-based compensation, or benchmarking against companies with a similar equity burn, to fully understand relative cost structure and profitability.
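A small numeric sketch of that compensation-mix point (hypothetical figures): two companies with identical revenue and identical total compensation show very different cash-based margins, while an SBC-inclusive view puts them on equal footing.

```python
# Hypothetical: identical revenue and total compensation, different stock/cash mix.
revenue = 100.0
total_compensation = 60.0

for stock_share in (0.70, 0.50):
    cash_comp = total_compensation * (1 - stock_share)
    cash_margin = (revenue - cash_comp) / revenue                    # excludes SBC
    sbc_inclusive_margin = (revenue - total_compensation) / revenue  # includes SBC
    print(f"{stock_share:.0%} stock mix: cash-based margin {cash_margin:.0%}, "
          f"SBC-inclusive margin {sbc_inclusive_margin:.0%}")
# 70% stock mix: cash-based margin 82%, SBC-inclusive margin 40%
# 50% stock mix: cash-based margin 70%, SBC-inclusive margin 40%
```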

As a CEO, it is important to remember:

In this article, we tried to be as comprehensive as possible in ideating tactics CEOs can implement to drive efficient growth. As we wrote the piece, I wondered if it might be helpful to share how I applied at least some of these 40 different tactics to a real business. The case study below shows that these ideas are very actionable. They changed the trajectory and outcome for SendGrid, and they can for you, too!

Fortunately, I worked with a very talented and mature team at SendGrid. Creating alignment on our need to drive to a healthier rule of 40 was like pushing on an open door, and our whole company (not just the leadership team) got behind this goal.

Ultimately, it was a collective effort from across the organization that enabled SendGrid to drive profitable growth. A few examples of notable initiatives included a finance leader branding a company-wide "Save to Reinvest" campaign, a vice president of support helping us create and monetize new customer support tiers, and a customer success leader helping us launch new add-on services.

Focusing on profitable growth isn't just the job of the CEO. Invite the smart and hard-working teammates of your company, who know your day-to-day operations best, to be part of the solution.

Our "Save to Reinvest" campaign was one of the best examples of internal marketing I'd ever seen. We demonstrated to everyone in the company that we weren't cost-cutting for its own sake, but rather so that we could afford to reinvest in growth levers for the business. This framing is what allowed us to both slash our burn and reaccelerate growth, producing what we later called the "SendGrid smile"; in other words, the graph of our growth rate over time, which went down, then flat, then up and to the right.

Over the course of six quarters, we took action on the following tactical steps throughout the organization. Incrementally, we moved from -30% to roughly breakeven, and then reaccelerated our growth as we scaled in 2016, ramping toward our IPO in the fall of 2017.

SendGrid's gross margin was in the 60s at the time I joined, in the mid-70s by our IPO, and in the mid-80s when I left Twilio. Unquestionably, some of that improvement was simply due to economies of scale: costs held flat as we increased our output. But it also was the result of many intentional, cost-focused initiatives across a number of areas, including:

General and administrative (G&A) expenses:

a. Vendors: We found it helpful to align the whole team on being more disciplined about vendor costs and negotiations. I personally helped renegotiate our renewal for an event and data analytics platform, which had reached $1 million per year when I arrived, untenable for a $30 million ARR company like ours. This saved us a boatload of money each year. Importantly, it also showed the company that the CEO cared a lot about cost containment.

b. Real estate: When SendGrid expanded from its roots in Boulder to a larger presence in Denver, our CFO and COO championed the consolidation of our two Colorado-based locations. Economies of scale, again, saved us a lot of money as we hired more people into the same physical footprint.

The biggest takeaway for CEOs is to remember that there are opportunities to drive better margins and profitable growth in every aspect of the business. If you're a SaaS leader fundraising in the near future and looking for ways to drive profitable growth, reach out to Brian Feinstein (Brian@bvp.com), Caty Rea (crea@bvp.com), Janelle Teng (jteng@bvp.com), or Sameer Dholakia (sdholakia@bvp.com) to learn more.

Read the original here:
A CEOs tactical guide to driving profitable growth - Bessemer Venture Partners


Why Microsoft Teams has only just launched in China – IT PRO

Microsoft has officially launched Microsoft Teams in China via its local partner 21Vianet.

The tech giant launched the collaboration platform in the country on 1 April, at the same time as upgrading Office 365 to Microsoft 365, which will also be operated by 21Vianet.

Under the premise of fully satisfying the Chinese market's requirements for data security, personal information protection and other laws and regulations, Microsoft and 21Vianet Blue Cloud have joined hands to officially launch the Microsoft Teams service operated by 21Vianet for the Chinese market, the company said in a blog post.

Microsoft said that Teams has more than 280 million monthly users around the world after being launched globally in 2017.

Users in China were still able to use the service before it officially launched, although they may have experienced some latency due to the country's 'Great Firewall'.

Microsoft has advised businesses in the past on how to operate their Microsoft 365 accounts in the country if they have a branch or office there.

For enterprises with global Microsoft 365 tenants and a corporate presence in China, Microsoft 365 client performance for China-based users can be complicated by factors unique to China telco's internet architecture, the company said in a notice.

China ISPs have regulated offshore connections to the global public internet that go through perimeter devices that are prone to high levels of cross-border network congestion. This congestion creates packet loss and latency for all internet traffic going into and out of China, it said.

Users in the country who connect to global Microsoft 365 tenants from locations like their homes or hotels, without using an enterprise network, may have experienced poor network performance in the past. This is because the traffic has to go through China's congested cross-border network circuits.

The new offering of Teams may improve the service within the country, but it may not change anything for global companies who need to communicate with their branches in China. It can be assumed that internet traffic from outside of the country's borders will still need to go through the same cross-border congestion.

Microsoft said it doesn't operate the service itself; it will instead be operated by 21Vianet. This partner will provide hosting, managed network services, and cloud computing infrastructure services.

21Vianet is a Microsoft strategic partner that is in charge of operating Microsoft Azure, Microsoft 365, Dynamics 365 and Power Platform in China.

It also operates Office 365 services in the country, and claims to be the largest carrier-neutral internet data centre service provider in China.

By licensing Microsoft technologies, 21Vianet operates local Office 365 data centres to provide the ability to use Office 365 services while keeping data within China. 21Vianet also provides subscription and billing services, as well as support, said Microsoft.

"Due to the unique nature of the China services operated by a partner from data centres inside China there are some features that have not yet been enabled, the tech giant added. Customers will see the services come closer to full feature parity over time.

In October 2022, China reportedly upgraded its Great Firewall to crack down on Transport Layer Security (TLS) encryption tools, which citizens had used to evade censorship.

Users had reported that at least one of their TLS-based censorship circumvention servers had been blocked, with the block implemented by targeting the specific port the circumvention services listened on.

The year before, in October 2021, Microsoft decided to shut down LinkedIn in China, with plans to replace it with a standalone job application platform.

Social media platforms in the country are required to remove user-uploaded content deemed inappropriate. LinkedIn had already been ordered to perform a self-evaluation and suspend new sign-ups of users inside China after failing to control political content on the platform.

"While we've found success in helping Chinese members find jobs and economic opportunity, we have not found that same level of success in the more social aspects of sharing and staying informed," Mohak Shroff, senior vice-president of engineering at LinkedIn, said at the time.

"We're also facing a significantly more challenging operating environment and greater compliance requirements in China."

The tech giant had been expanding its business in the country, and in June 2021 it revealed it was set to add four new data centres in China by early 2022 to increase its service capacity.

It already had six data centres in the country, and was making the move in response to new Chinese regulations encouraging domestic and foreign companies to shift to local data management.

Continue reading here:
Why Microsoft Teams has only just launched in China - IT PRO

Read More..

5 Paths to Legacy Transformation – TechBeacon

It's common to talk about legacy-system transformation as if there's just one path available for modernizing systems. But in reality, legacy transformation is like navigating a sprawling interstate highway network; there are many routes that can potentially get your legacy systems to where you want them to be. The challenge organizations face is identifying which tactics will best help them update older technology systems to align with current business needs while improving performance.

Allow me to explain by discussing five different approaches to legacy-system transformation. As you'll learn, all of these approaches can add value to legacy systems, but they do so in different ways, and it may or may not make sense to try one particular approach or another on your transformation journey.

One of the simplest and most obvious ways to get more value from a legacy system is to update it to a newer version of the system (or to update the platform on which it depends).

For example, if your app is built on top of a legacy ERP platform, migrating to the new version of the platform may well add efficiency, flexibility, and/or scalability to the app, all without requiring you to modify the application itself.

Before taking this approach, it's important to evaluate how much value a platform upgrade would add, and then weigh that against how much time, effort, and money the upgrade requires. Depending on when your last upgrade took place, a major platform upgrade may not create enough value to justify itself. But in other cases, especially if it has been years since you last updated the legacy systems or platforms on which your applications depend, an upgrade is a comparatively fast and easy way to improve application performance and/or manageability. Although upgrading doesn't change the fundamentals of the technology you're using, it is likely to unlock new features and flexibility that help to modernize the application.

Another common transformation approach is moving legacy apps to the cloud. Here, again, moving to the cloud doesn't fundamentally change your system. But it makes it easier in many respects to operate and manage the system because you can take advantage of cloud infrastructure that you can consume on demand. It also frees you from having to acquire, deploy, and maintain your own hosting infrastructure.

In many cases, legacy-platform vendors offer both on-premises and cloud-based versions of their systems. Although both types of offerings typically provide the same core features, migrating to the cloud-based version can simplify application management and increase scalability.

Moving to a cloud platform takes time and effort, so it is important to evaluate whether it is worth it before undertaking a cloud migration. In many cases, though, you may find that it is.

Whether you move your legacy system to the cloud or not, you can, if your legacy-application platform supports it, take advantage of microservices architectures and/or container-based deployment.

A microservice implementation involves breaking complex applications into smaller pieces that operate independently from each other; these smaller pieces are called microservices. This makes applications easier to scale because you can allocate more resources to each microservice on an individual basis. It's also faster to deploy or update a microservice than it is to deploy a larger application.
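
To make this concrete, below is a minimal, purely illustrative Python sketch of a single microservice carved out of a hypothetical monolith. The endpoint, port, and data are assumptions made for the example rather than anything prescribed here; it simply shows one small piece of functionality running as its own independently deployable service.

```python
# Minimal sketch of one "microservice" extracted from a hypothetical legacy
# monolith: a standalone HTTP service that owns a single responsibility
# (here, an invoice-lookup endpoint) and can be scaled and deployed on its own.
# All names, paths, and data are illustrative, not taken from any real system.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Stand-in for data the legacy monolith would normally own.
INVOICES = {"1001": {"customer": "ACME", "total": 250.0}}

class InvoiceHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Expect requests of the form /invoices/<id>
        parts = self.path.strip("/").split("/")
        if len(parts) == 2 and parts[0] == "invoices" and parts[1] in INVOICES:
            body = json.dumps(INVOICES[parts[1]]).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    # Each microservice listens on its own port and can be deployed,
    # scaled, and updated independently of the rest of the application.
    HTTPServer(("0.0.0.0", 8080), InvoiceHandler).serve_forever()
```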

Containers are a deployment technology that organizations commonly use to host microservices. You can run a different microservice inside each container, making it easy to keep track of which microservices you have running and to deploy new microservices by deploying new containers to host them.

There is an added benefit to containers. Containers represent a form of virtualization, but they don't come with one of the big drawbacks of other virtualization technologies. Traditional virtualization requires services to run guest operating systems on top of the host operating system. The more operating systems you have running on a server, the more CPU and memory you have to provide to the operating systems, and the fewer you have available for your applications. This is not a problem in containerization because containers do not rely on traditional virtualization technology.
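
As an illustration of the container side, the hedged sketch below uses the Docker SDK for Python to launch one container per microservice. The image names, ports, and registry are hypothetical placeholders; in practice they would come from your own build pipeline and configuration.

```python
# Illustrative sketch: launching one container per microservice with the
# Docker SDK for Python (pip install docker). Image names, registry, and
# ports are hypothetical placeholders invented for this example.
import docker

client = docker.from_env()

# One lightweight container per microservice, each independently
# deployable and scalable, with no per-service guest operating system.
services = {
    "invoice-service": {"image": "registry.example.com/invoice-service:1.4", "port": 8080},
    "catalog-service": {"image": "registry.example.com/catalog-service:2.1", "port": 8081},
}

for name, spec in services.items():
    client.containers.run(
        spec["image"],
        name=name,
        detach=True,                       # run in the background
        ports={"8080/tcp": spec["port"]},  # map the container port to the host
        restart_policy={"Name": "on-failure"},
    )
    print(f"started {name} on host port {spec['port']}")
```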

Thus, by taking advantage of microservices and containers, you can deploy legacy applications in a more scalable and efficient way. You are likely in turn to improve performance and reduce hosting costs relative to operating your application as a monolith.

The catch here is that not every legacy system supports microservices and containers, so be sure to check your legacy-system vendor's documentation before assuming you can take advantage of these technologies.

In its narrow definition, DevOps represents the integration of software development and IT operations. More broadly, it refers to a wide range of modern operational techniques and practices, such as the continuous deployment of changes and user-centric application management.

You can leverage DevOps methodologies for legacy apps just as well as you can for modern, cloud-native applications. In so doing, you'll gain more operational flexibility and agility, which translates to higher application availability and an enhanced ability to make changes without disrupting functionality.

Embracing DevOps requires changing the way your organization thinks about software delivery and management; it may require adopting some new tools, too. But the effort is almost always worth it.
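
As one hedged example of what such a practice can look like in code, the sketch below automates a deploy-verify-rollback loop of the kind continuous deployment relies on. The deploy and rollback commands and the health-check URL are invented placeholders, not a prescription from this article.

```python
# Purely illustrative sketch of one DevOps-style practice: deploy a new
# version, verify a health endpoint, and roll back automatically if the
# check fails. Commands and URLs are hypothetical placeholders.
import subprocess
import time
import urllib.request

HEALTH_URL = "http://localhost:8080/health"   # assumed health endpoint

def healthy(url: str, attempts: int = 5, delay: float = 2.0) -> bool:
    """Poll the health endpoint a few times before giving up."""
    for _ in range(attempts):
        try:
            with urllib.request.urlopen(url, timeout=3) as resp:
                if resp.status == 200:
                    return True
        except OSError:
            pass
        time.sleep(delay)
    return False

def deploy(version: str) -> None:
    # Placeholder deploy/rollback scripts; substitute your own tooling.
    subprocess.run(["./deploy.sh", version], check=True)
    if healthy(HEALTH_URL):
        print(f"version {version} deployed and healthy")
    else:
        print(f"version {version} failed health check, rolling back")
        subprocess.run(["./rollback.sh"], check=True)

if __name__ == "__main__":
    deploy("2.3.1")
```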

Thanks especially to the AI revolution heralded by generative-AI tools such as ChatGPT, artificial intelligence (AI) and machine learning (ML) are transforming all sectors of the technology industry.

This technology is still maturing, and it's too soon to say exactly how it might support legacy-system transformation. But going forward, efficiency-focused organizations might use AI and ML for tasks such as parsing the configurations of legacy systems to detect opportunities for improvement. AI could also power chatbots that help to train end users in navigating new systems following a migration or transformation.

I'm being a little speculative here; again, AI tools designed for specific use cases such as these don't yet exist. But they're easy to envision, and they're likely to become another tool in the legacy-transformation arsenal for businesses going forward.
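
Since, as noted above, purpose-built AI tools for this don't yet exist, the following is only a hypothetical illustration of the kind of configuration scan such tooling might one day automate and extend; the configuration keys, values, and heuristics are all invented for the example.

```python
# Hypothetical illustration only: a simple rule-based scan of a legacy
# configuration, standing in for what AI/ML-assisted tooling might one day
# learn and automate. All keys, values, and rules are invented.
legacy_config = {
    "max_connections": 20,
    "tls_version": "1.0",
    "cache_enabled": False,
    "log_level": "DEBUG",
}

# Hand-written heuristics in place of a trained model.
RULES = [
    ("tls_version", lambda v: v in ("1.0", "1.1"),
     "Deprecated TLS version; consider upgrading to 1.2 or 1.3."),
    ("cache_enabled", lambda v: v is False,
     "Caching is disabled; enabling it may reduce load on the legacy backend."),
    ("log_level", lambda v: v == "DEBUG",
     "Debug logging in production can hurt performance and leak data."),
]

def scan(config: dict) -> list[str]:
    """Return human-readable findings for any rule that matches."""
    findings = []
    for key, is_problem, advice in RULES:
        if key in config and is_problem(config[key]):
            findings.append(f"{key}={config[key]!r}: {advice}")
    return findings

for finding in scan(legacy_config):
    print(finding)
```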

The fact that there are many viable routes toward legacy-system transformation is a great thing. Organizations can choose which approaches and methodologies best align with their needs and resources.

But this also presents challenges. If you embark on a legacy transformation without knowing how best to arrive at your destination, or, worse, without being sure what your destination even is, you'll likely become bogged down in inefficient strategies that yield lackluster results.

That's why it's critical to establish a road map that lays out your legacy-transformation strategy and helps you gain buy-in from stakeholders. Creating the road map may involve conducting a thorough assessment of the existing landscape, identifying areas for improvement and innovation, and prioritizing initiatives based on business value and impact.

To generate a realistic legacy-transformation road map, you'll likely need to evaluate the development resources you have available within your organization, and then decide on that basis (1) how many changes you can feasibly make to your applications and (2) how quickly your developers can implement the changes. You'll also want to think about what your most serious pain points are (application cost? scalability? reliability? something else?) and prioritize them accordingly.
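
As a rough illustration of that prioritization step, the sketch below scores hypothetical initiatives by business value relative to effort and fills the available developer capacity. The initiatives, scores, and capacity figures are invented; a real road map would weigh far more factors.

```python
# Hypothetical sketch of the prioritization step: rank candidate initiatives
# by business value per unit of effort, then take as many as developer
# capacity allows. All names and numbers are invented for the example.
initiatives = [
    # (name, business_value 1-10, effort in developer-weeks)
    ("Move reporting module to cloud", 8, 6),
    ("Containerize billing service",   7, 4),
    ("Upgrade ERP platform version",   5, 10),
    ("Add CI/CD pipeline",             9, 3),
]

capacity_weeks = 12  # assumed developer capacity for the next quarter

# One simple heuristic among many: value per week of effort.
ranked = sorted(initiatives, key=lambda i: i[1] / i[2], reverse=True)

plan, used = [], 0
for name, value, effort in ranked:
    if used + effort <= capacity_weeks:
        plan.append(name)
        used += effort

print(f"Planned ({used}/{capacity_weeks} weeks): {plan}")
```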

Along similar lines, it's critical to have a strong team in place to guide your legacy-transformation journey. Your team should have expertise in both the legacy platforms you use and the latest innovations in areas like cloud computing, DevOps, and AI. External partners and consultants may be helpful as well, particularly for organizations that might not have in-house expertise in all the areas needed for a successful legacy transformation.

After all, just as you wouldn't want to set off on a cross-country road trip without knowing anything about the roads you'll be traversing or the pros and cons of different routes, you don't want to start a complex legacy-system transformation without both critical knowledge and guiding insight on hand.

Go here to see the original:
5 Paths to Legacy Transformation - TechBeacon

Read More..

10 things to know about data-center outages – Network World

The severity of data-center outages appears to be falling, while the cost of outages continues to climb. Power failures are the biggest cause of significant site outages. Network failures and IT system glitches also bring down data centers, and human error often contributes.

Those are some of the problems pinpointed in the most recent Uptime Institute data-center outage report, which analyzes types of outages, their frequency, and what they cost both in money and consequences.

Uptime cautions that data relating to outages should be treated skeptically given the lack of transparency of some outage victims and the quality of reporting mechanisms. "Outage information is opaque and unreliable," said Andy Lawrence, executive director of research at Uptime, during a briefing about Uptime's Annual Outages Analysis 2023.

While some industries, such as airlines, have mandatory reporting requirements, there's limited reporting in other industries, Lawrence said. "So we have to rely on our own means and methods to get the data. And as we all know, not everybody wants to share details about outages for a whole variety of reasons. Sometimes you get a very detailed root-cause analysis, and other times you get pretty well nothing," he said.

The Uptime report culled data from three main sources: Uptime's Abnormal Incident Report (AIRs) database; its own surveys; and public reports, which include news stories, social media, outage trackers, and company statements. The accuracy of each varies. Public reports may lack details and sources might not be trustworthy, for example. Uptime rates its own surveys as producing fair/good data, since the respondents are anonymous, and their job roles vary. The quality of AIRs data is deemed very good, since it comprises detailed, facility-level data voluntarily shared by data-center owners and operators among their peers.

Theres evidence that outage rates have been gradually falling in recent years, according to Uptime.

That doesn't mean the total number of outages is shrinking; in fact, the number of outages globally increases each year as the data-center industry expands. "This can give the false impression that the rate of outages relative to IT load is growing, whereas the opposite is the case," Uptime reported. The frequency of outages is not growing as fast as the expansion of IT or the global data-center footprint.

Overall, Uptime has observed a steady decline in the outage rate per site, as tracked through four of its own surveys of data-center managers and operators conducted from 2020 to 2022. In 2022, 60% of survey respondents said they had an outage in the past three years, down from 69% in 2021 and 78% in 2020.

"There seems to be a gently, gently improving picture of the outage rate," Lawrence said.

While 60% of data-center sites have experienced an outage in the past three years, only a small proportion are rated serious or severe.

Uptime measures the severity of outages on a scale of one to five, with five being the most severe. Level 1 outages are negligible and cause no service disruptions. Level 5 mission-critical outages involve major and damaging disruption of services and/or operations and often include large financial losses, safety issues, compliance breaches, customer losses, and reputational damage.

Level 5 and Level 4 (serious) outages historically account for about 20% of all outages. In 2022, outages in the serious/severe categories fell to 14%.

A key reason is that data-center operators are better equipped to handle unexpected events, according to Chris Brown, chief technical officer at Uptime. "We've become much better at designing systems and managing operations to a point where a single fault or failure does not necessarily result in a severe or serious outage," he said.

Today's systems are built with redundancy, and operators are more disciplined about creating systems that are capable of responding to abnormal incidents and averting outages, Brown said.

When outages do occur, they are becoming more expensive, a trend that is likely to continue as dependency on digital services grows.

Looking at the last four years of Uptime's own survey data, the proportion of major outages that cost more than $100,000 in direct and indirect costs is increasing. In 2019, 60% of outages fell under $100,000 in terms of recovery costs. In 2022, just 39% of outages cost less than $100,000.

Also in 2022, 25% of respondents said their most recent outage cost more than $1 million, and 45% said their most recent outage cost between $100,000 and $1 million.

Inflation is part of the reason, Brown said; the costs of replacement equipment and labor are higher.

More significant is the degree to which companies depend on digital services to run their businesses. The loss of a critical IT service can be tied directly to disrupted business and lost revenue. "Any of these outages, especially the serious and severe outages, have the ability to impact multiple organizations, and a larger swath of people," Brown said, "and the cost of having to mitigate that is ever increasing."

As more workloads are outsourced to external service providers, the reliability of third-party digital infrastructure companies is increasingly important to enterprise customers, and these providers tend to suffer the most public outages.

Third-party commercial operators of IT and data centers (cloud providers, digital service providers, telecommunications providers) accounted for 66% of all the public outages tracked since 2016, Uptime reported. Looked at year by year, the percentage has been creeping up. In 2021 the proportion of outages caused by cloud, colocation, telecommunications, and hosting companies was 70%, and in 2022 it was up to 81%.

"The more that companies push their IT services into other people's domain, they're going to have to do their due diligence, and also continue to do their due diligence even after the deal is struck," Brown said.

While it's rarely the single or root cause of an outage, human error plays some role in 66% to 80% of all outages, according to Uptime's estimate based on 25 years of data. But it acknowledges that analyzing human error is challenging. Shortcomings such as improper training, operator fatigue, and a lack of resources can be difficult to pinpoint.

Uptime found that human error-related outages are mostly caused either by staff failing to follow procedures (cited by 47% of respondents) or by the procedures themselves being faulty (40%). Other common causes include in-service issues (27%), installation issues (20%), insufficient staff (14%), preventative maintenance-frequency issues (12%), and data-center design or omissions (12%).

On the positive side, investing in good training and management processes can go a long way toward reducing outages without costing too much.

"You don't need to go to a banker and get a bunch of capital money to solve these problems," Brown said. "People need to make the effort to create the procedures, test them, make sure they're correct, train their staff to follow them, and then have the oversight to ensure that they truly are following them."

"This is the low-hanging fruit to prevent outages, because human error is implicated in so many," Lawrence said.

Uptime said its current survey findings are consistent with previous years and show that on-site power problems remain the biggest cause of significant site outages by a large margin. This despite the fact that most outages have several causes, and that the quality of reporting about them varies.

In 2022, 44% of respondents said power was the primary cause of their most recent impactful incident or outage. Power was also the leading cause of significant outages in 2021 (cited by 43%) and 2020 (37%).

Network issues, IT system errors, and cooling failures also stand out as troubling causes, Uptime said.

Uptime used its own data, from its 2023 Uptime resiliency survey, to dig into network outage trends. Among survey respondents, 44% said their organization had experienced a major outage caused by network or connectivity issues over the past three years. Another 45% said no, and 12% didn't know.

The two most common causes of networking- and connectivity-related outages are configuration or change management failure (cited by 45% of respondents) and a third-party network provider's failure (39%).

Uptime attributed the trend to today's network complexity. "In modern, dynamically switched and software-defined environments, programs to manage and optimize networks are constantly revised or reconfigured. Errors become inevitable, and in such a complex and high-throughput environment, frequent small errors can propagate across networks, resulting in cascading failures that can be difficult to stop, diagnose, and fix," Uptime reported.
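
One way operators try to contain this class of failure is by validating changes before they roll out; the sketch below is a hypothetical illustration of such a pre-change check, with the configuration fields and rules invented for the example rather than drawn from Uptime's report.

```python
# Hypothetical illustration of a pre-change validation gate intended to catch
# configuration errors before they propagate. The schema and rules are invented.
proposed_change = {
    "device": "core-router-01",
    "mtu": 9000,
    "bgp_peers": ["10.0.0.2", "10.0.0.3"],
    "rollout": {"stage": "canary", "percent": 5},
}

def validate(change: dict) -> list[str]:
    """Return a list of problems; an empty list means the change may proceed."""
    problems = []
    if not 576 <= change.get("mtu", 1500) <= 9216:
        problems.append("MTU outside supported range")
    if not change.get("bgp_peers"):
        problems.append("change would remove all BGP peers")
    rollout = change.get("rollout", {})
    if rollout.get("stage") != "canary" or rollout.get("percent", 100) > 10:
        problems.append("changes must start as a canary rollout of at most 10%")
    return problems

issues = validate(proposed_change)
if issues:
    print("change rejected:", "; ".join(issues))
else:
    print("change accepted for staged rollout")
```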

Other common causes of major network-related outages include:

When Uptime asked respondents to its resiliency survey if their organization experienced a major outage caused by an IT system or software failure over the past three years, 36% said yes, 50% said no, and 15% didn't know. The most common causes of outages related to IT systems and software are:

Publicly recorded outages, which include outages that are reported in the media, reveal a wide range of causes. The causes can differ from what data-center operators and IT teams report, since the media sources' knowledge and understanding of outages depends on their perspective. "What's really interesting is the sheer variety of causes, and that's partly because this is how the public and the media perceive them," Lawrence said.

Fire is one cause that showed up among publicly reported outages but didn't rank highly among IT-related sources. Specifically, Uptime found that 7% of publicly reported data-center outages were caused by fires. In the web briefing, Uptime researchers related the incidence of data-center fires to increasing use of lithium-ion (Li-ion) batteries.

Li-ion batteries have a smaller footprint, simpler maintenance, and longer lifespan compared to lead-acid batteries. However, Li-ion batteries present a greater fire risk. "A Maxnod data center in France suffered a devastating fire on March 28, 2023, and we believe it's caused by lithium-ion battery fire," Lawrence said. A lithium-ion battery fire is also the reported cause of a major fire on Oct. 15, 2022, at a South Korea colocation facility owned by SK Group and operated by its C&C subsidiary.

"We find, every time we do these surveys, fire doesn't go away," Lawrence said.

Read more here:
10 things to know about data-center outages - Network World

Read More..