Category Archives: Cloud Computing

Why Banks Have Been Slow to Embrace the Cloud and What Could Soon Change That – BizTech Magazine

At this time, high-level enterprise functions such as collaboration tools, customer relationship management and IT operations are the most likely cloud workloads for financial institutions. More fundamental functions, including risk and compliance, capital markets, and consumer and commercial banking, make up 4 percent or less of cloud workloads.

However, trends in cloud computing appear to suggest a firmer embrace in the years to come. Per data from the American Bankers Association, at least 90 percent of banks maintain at least some data, applications or operations in the cloud, and 91 percent expect to increase cloud use in the coming years. This will most likely be for functions that can improve the customer experience, such as digital banking apps and CRM tools.

There are genuine reasons for banks to be cautious with cloud computing. These don't stem from technical hesitancy, but rather from a desire to be careful with issues of risk, which carry a different meaning for the financial sector than for other fields.

In 2019 testimony before a task force of the House Financial Services Committee, Paul Benda, the American Bankers Association's senior vice president for operational risk and cybersecurity, explained why the industry has traditionally been slow to embrace the cloud, citing a mix of regulatory concerns, security desires and a goal of risk management.

"Although there are compelling business and operational resilience reasons for financial institutions to consider the use of the cloud, it is critical that financial institutions first put in place strong and effective risk mitigation strategies to address the risks that are unique to the cloud," Benda told the committee.


His commentary points to Title V of the Gramm-Leach-Bliley Act, a 1999 law that requires banks to respect the privacy of their customers and to protect the security and confidentiality of those customers' nonpublic personal information.

"These standards apply equally, regardless of whether that information is stored or handled by a financial institution or its vendor on the financial institution's own system or in a third-party cloud," Benda added in his testimony. "These standards also require that financial institutions have in place incident response programs to address security incidents involving unauthorized access to customer information, including notifying customers of possible breaches when appropriate."

Despite the concerns about liability and organizational risk, the banking industry collectively sees high potential in the cloud. Benda emphasized a willingness for a more collaborative approach.

"The challenges in this space are complex, and we believe that every stakeholder wants to ensure that the security of these critical systems is maintained and at the same time innovation is not hindered," he explained.


Go here to see the original:
Why Banks Have Been Slow to Embrace the Cloud and What Could Soon Change That - BizTech Magazine

Stacklet Named a 2022 Cool Vendor in Cloud Computing by Gartner – Business Wire

ARLINGTON, Va.--(BUSINESS WIRE)--Stacklet, developers of the industry-first cloud governance as code platform based on the open source Cloud Custodian project, today announced it has been recognized in the 2022 Gartner Cool Vendors in Cloud Computing report. The company believes this recognition builds on its continued momentum, including the recently announced Stacklet SaaS Platform, which makes it easier for organizations to shift to a governance-as-code model.

Innovating efficiently and securely in the cloud requires a paradigm shift from traditional approaches to governance. Governance as code is a new paradigm that allows cloud engineering, security, and FinOps teams to quickly understand, codify, and automate cloud governance for a frictionless experience for development teams and rapid cloud adoption.

Cloud Custodian, an open source project and part of the Cloud Native Computing Foundation (CNCF), is rapidly becoming the de facto standard for cloud governance as code, with millions of downloads occurring globally each month. The Stacklet Platform extends Cloud Custodian with intelligent management capabilities like governance insights, real-time asset inventory, out-of-the-box policy packs, and advanced communications to make it easier for DevSecOps and FinOps teams to automate and enforce governance policies at scale.

"We believe being named a Cool Vendor by Gartner is a strong recognition of how governance as code and Stacklet can help organizations scale operations in the cloud," said Travis Stanfield, co-founder, and CEO, of Stacklet. "We are looking forward to continuing our momentum and helping our customers control costs and be secure across multiple cloud platforms in a way that doesn't hinder developer velocity."

Supporting Resources

You can access the full report here: Gartner Cool Vendors in Cloud Computing, Arun Chandrasekaran, Sid Nag, et al., 26 April 2022.

Gartner does not endorse any vendor, product or service depicted in our research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner's research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

GARTNER and COOL VENDORS are registered trademarks and service marks of Gartner, Inc. and/or its affiliates in the U.S. and internationally and are used herein with permission. All rights reserved.

About Stacklet

Stacklet was founded by the creator and lead maintainer of Cloud Custodian, an open source cloud native security and governance project used by thousands of well-known global brands today. Stacklet provides the commercial cloud governance platform that accelerates how organizations manage their security, asset visibility, operations, and cost optimization policies in the cloud. For more information, go to https://stacklet.io or follow @stackletio.

See the original post here:
Stacklet Named a 2022 Cool Vendor in Cloud Computing by Gartner - Business Wire

Scalability and elasticity: What you need to take your business to the cloud – VentureBeat


By 2025, 85% of enterprises will have a cloud-first principle, a more efficient way to host data than on-premises. The shift to cloud computing, amplified by COVID-19 and remote work, has meant a whole host of benefits for companies: lower IT costs, increased efficiency and reliable security.

With this trend continuing to boom, the threat of service disruptions and outages is also growing. Cloud providers are highly reliable, but they are not immune to failure. In December 2021, Amazon reported seeing multiple Amazon Web Services (AWS) APIs affected, and, within minutes, many widely used websites went down.

So, how can companies mitigate cloud risk, prepare themselves for the next AWS shortage and accommodate sudden spikes of demand?

The answer is scalability and elasticity, two essential aspects of cloud computing that greatly benefit businesses. Let's talk about the differences between scalability and elasticity and see how they can be built at the cloud infrastructure, application and database levels.

Both scalability and elasticity are related to the number of requests that can be made concurrently in a cloud system; they are not mutually exclusive, and both may have to be supported separately.

Scalability is the ability of a system to remain responsive as the number of users and traffic gradually increases over time. Therefore, it is long-term growth that is strategically planned. Most B2B and B2C applications that gain usage will require this to ensure reliability, high performance and uptime.

With a few minor configuration changes and button clicks, a company can scale its cloud system up or down with ease in a matter of minutes. In many cases, this can be automated by cloud platforms, with scale factors applied at the server, cluster and network levels, reducing engineering labor expenses.
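To make that automated scale-up/scale-down idea concrete, here is a minimal, illustrative Python sketch of a target-tracking scaling decision. The monitor and provisioner clients are hypothetical placeholders rather than any real cloud SDK; managed services such as AWS Auto Scaling, Google Cloud managed instance groups or the Kubernetes Horizontal Pod Autoscaler express the same logic as declarative policies instead of a hand-written loop.

```python
# Sketch of target-tracking autoscaling, assuming hypothetical `monitor` and
# `provisioner` clients; real platforms implement this as managed policies.
import time

TARGET_CPU = 0.60                      # desired average utilization
MIN_INSTANCES, MAX_INSTANCES = 2, 20   # guard rails set by the business

def desired_count(current_count: int, avg_cpu: float) -> int:
    """Scale the fleet in proportion to observed load, within the guard rails."""
    raw = current_count * (avg_cpu / TARGET_CPU)
    return max(MIN_INSTANCES, min(MAX_INSTANCES, round(raw)))

def autoscale_loop(monitor, provisioner, interval_s: int = 60) -> None:
    """Periodically reconcile the running instance count with the desired count."""
    while True:
        count = provisioner.instance_count()        # hypothetical API
        avg_cpu = monitor.average_cpu()             # hypothetical API
        target = desired_count(count, avg_cpu)
        if target != count:
            provisioner.set_instance_count(target)  # hypothetical API
        time.sleep(interval_s)

# Quick check of the formula: a spike to 90% CPU on 4 servers suggests 6 servers;
# a lull at 20% CPU suggests scaling back to the minimum of 2.
print(desired_count(4, 0.90), desired_count(4, 0.20))  # -> 6 2
```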

Elasticity is the ability of a system to remain responsive during short-term bursts or high instantaneous spikes in load. Some examples of systems that regularly face elasticity issues include NFL ticketing applications, auction systems and insurance companies during natural disasters. In 2020, the NFL was able to lean on AWS to livestream its virtual draft, when it needed far more cloud capacity.

A business that experiences unpredictable workloads but doesn't want a preplanned scaling strategy might seek an elastic solution in the public cloud, with lower maintenance costs. This would be managed by a third-party provider and shared with multiple organizations using the public internet.

So, does your business have predictable workloads, highly variable ones, or both?

When it comes to scalability, businesses must watch out for over-provisioning or under-provisioning. This happens when tech teams don't provide quantitative metrics around the resource requirements for applications, or when the back-end idea of scaling is not aligned with business goals. To determine a right-sized solution, ongoing performance testing is essential.

Business leaders reading this must speak to their tech teams to find out how they discover their cloud provisioning schematics. IT teams should be continually measuring response time, the number of requests, CPU load and memory usage to watch the cost of goods (COG) associated with cloud expenses.

There are various scaling techniques available to organizations based on business needs and technical constraints. So, will you scale up or out?

Vertical scaling involves scaling up or down and is used for applications that are monolithic, often built prior to 2017, and may be difficult to refactor. It involves adding more resources such as RAM or processing power (CPU) to your existing server when you have an increased workload, but this means scaling has a limit based on the capacity of the server. It requires no application architecture changes as you are moving the same application, files and database to a larger machine.

Horizontal scaling involves scaling in or out by adding more servers to the original cloud infrastructure to work as a single system. Each server needs to be independent so that servers can be added or removed separately. It entails many architectural and design considerations around load-balancing, session management, caching and communication. Legacy (or outdated) applications that are not designed for distributed computing must be refactored carefully before migration. Horizontal scaling is especially important for businesses running high-availability services that require minimal downtime and high performance, storage and memory.
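As a rough illustration of those horizontal-scaling and session-management considerations, the following Python sketch puts stateless, interchangeable servers behind a round-robin balancer and keeps session state in a shared store so any server can handle any request. All names here are hypothetical; in production the balancer would be a managed load balancer and the session store an external cache such as Redis.

```python
# Toy horizontal scale-out: interchangeable stateless servers, shared sessions.
import itertools

class SharedSessionStore:
    """Stand-in for an external cache so individual servers stay stateless."""
    def __init__(self):
        self._sessions = {}
    def get(self, session_id):
        return self._sessions.get(session_id, {})
    def put(self, session_id, data):
        self._sessions[session_id] = data

class AppServer:
    def __init__(self, name: str, sessions: SharedSessionStore):
        self.name, self.sessions = name, sessions
    def handle(self, session_id: str, request: str) -> str:
        state = self.sessions.get(session_id)     # any server can resume a session
        state["last_request"] = request
        self.sessions.put(session_id, state)
        return f"{self.name} handled {request!r}"

def round_robin_balancer(servers):
    pool = itertools.cycle(servers)               # servers can be added or removed
    def dispatch(session_id: str, request: str) -> str:
        return next(pool).handle(session_id, request)
    return dispatch

store = SharedSessionStore()
dispatch = round_robin_balancer([AppServer(f"server-{i}", store) for i in range(3)])
print(dispatch("user-1", "GET /appointments"))
print(dispatch("user-1", "GET /records"))  # a different server, same session state
```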

If you are unsure which scaling technique better suits your company, you may need to consider a third-party cloud engineering automation platform to help manage your scaling needs, goals and implementation.

Let's take a simple healthcare application, which applies to many other industries too, to see how it can be developed across different architectures and how that impacts scalability and elasticity. Healthcare services were heavily under pressure and had to scale drastically during the COVID-19 pandemic, and could have benefitted from cloud-based solutions.

At a high level, there are two types of architectures: monolithic and distributed. Monolithic (or layered, modular monolith, pipeline, and microkernel) architectures are not natively built for efficient scalability and elasticity: all the modules are contained within the main body of the application and, as a result, the entire application is deployed as a single whole. There are three types of distributed architectures: event-driven, microservices and space-based.

The simple healthcare application has a patient portal, a physician portal and an office portal.

The hospital's services are in high demand, and to support the growth, they need to scale the patient registration and appointment scheduling modules. This means they only need to scale the patient portal, not the physician or office portals. Let's break down how this application can be built on each architecture.

Tech-enabled startups, including in healthcare, often go with the traditional, unified monolithic model for software design because of the speed-to-market advantage. But it is not an optimal solution for businesses requiring scalability and elasticity, because there is a single integrated instance of the application and a centralized single database.

For application scaling, adding more instances of the application with load-balancing ends up scaling out the other two portals as well as the patient portal, even though the business doesn't need that.

Most monolithic applications use a monolithic database, one of the most expensive cloud resources. Cloud costs grow exponentially with scale, and this arrangement is expensive, especially regarding maintenance time for development and operations engineers.

Another aspect that makes monolithic architectures unsuitable for supporting elasticity and scalability is the mean-time-to-startup (MTTS), the time a new instance of the application takes to start. It usually takes several minutes because of the large scope of the application and database: Engineers must create the supporting functions, dependencies, objects, and connection pools and ensure security and connectivity to other services.

Event-driven architecture is better suited than monolithic architecture for scaling and elasticity. In this model, a service publishes an event when something noticeable happens; on an ecommerce site during a busy period, that could mean ordering an item and then receiving an email saying it is out of stock. Asynchronous messaging and queues provide back-pressure by queuing requests when the front end is scaled without scaling the back end.
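The back-pressure mechanism can be shown with a minimal sketch: the front end publishes events onto a bounded queue and sheds load when the queue is full, so an unscaled back end is never overwhelmed. This uses an in-process Python queue purely for illustration; a real system would use a broker such as Kafka, Pub/Sub or RabbitMQ, and the order-event names are hypothetical.

```python
# Minimal back-pressure demo with a bounded queue between front end and back end.
import queue
import threading
import time

events = queue.Queue(maxsize=10)   # the bounded buffer is the back-pressure point

def front_end(order_id: int) -> bool:
    """Try to enqueue an 'order placed' event; reject quickly if saturated."""
    try:
        events.put({"order_id": order_id}, timeout=0.01)
        return True
    except queue.Full:
        return False               # e.g. show "try again later" to the shopper

def back_end_worker() -> None:
    while True:
        _event = events.get()
        time.sleep(0.02)           # simulate slower downstream processing
        events.task_done()

threading.Thread(target=back_end_worker, daemon=True).start()
accepted = sum(front_end(i) for i in range(50))
print(f"accepted {accepted} of 50 orders without overwhelming the back end")
```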

In this healthcare application case study, this distributed architecture would mean each module is its own event processor; there's flexibility to distribute or share data across one or more modules. There's some flexibility at an application and database level in terms of scale, as services are no longer coupled.

Microservices architecture views each service as single-purpose, giving businesses the ability to scale each service independently and avoid consuming valuable resources unnecessarily. For database scaling, the persistence layer can be designed and set up exclusively for each service for individual scaling.

Like event-driven architecture, microservices cost more in terms of cloud resources than monolithic architectures at low levels of usage. However, with increasing loads, multitenant implementations, and in cases where there are traffic bursts, they are more economical. The MTTS is also very efficient and can be measured in seconds due to fine-grained services.

However, with the sheer number of services and the distributed nature, debugging may be harder and there may be higher maintenance costs if services aren't fully automated.

Space-based architecture is based on a principle called tuple-spaced processing: multiple parallel processors with shared memory. This architecture maximizes both scalability and elasticity at the application and database level.

All application interactions take place with the in-memory data grid. Calls to the grid are asynchronous, and event processors can scale independently. With database scaling, there is a background data writer that reads and updates the database. All insert, update or delete operations are sent to the data writer by the corresponding service and queued to be picked up.
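The background data-writer idea can be sketched in a few lines: services interact only with an in-memory grid, while writes are queued and flushed to the database asynchronously. The grid and database below are plain dictionaries and the record names are made up; a real space-based platform would use a distributed data grid and an actual DBMS.

```python
# Write-behind sketch: in-memory grid up front, queued flushes to the database.
import queue
import threading
import time

in_memory_grid = {}          # what services read from and write to
database = {}                # the durable store, updated in the background
write_queue = queue.Queue()  # operations waiting to be picked up by the writer

def service_update(patient_id: str, record: dict) -> None:
    in_memory_grid[patient_id] = record               # synchronous, in-memory
    write_queue.put(("upsert", patient_id, record))   # asynchronous persistence

def data_writer() -> None:
    """Background writer: drains the queue and applies changes to the database."""
    while True:
        op, key, value = write_queue.get()
        time.sleep(0.01)                              # simulate database latency
        if op == "upsert":
            database[key] = value
        write_queue.task_done()

threading.Thread(target=data_writer, daemon=True).start()
service_update("patient-42", {"name": "A. Patient", "appointment": "2022-06-01"})
write_queue.join()                                    # wait for the write-behind flush
print(database["patient-42"])
```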

MTTS is extremely fast, usually taking a few milliseconds, as all data interactions are with in-memory data. However, all services must connect to the broker, and the initial cache load must be created with a data reader.

In this digital age, companies want to increase or decrease IT resources as needed to meet changing demands. The first step is moving from large monolithic systems to distributed architecture to gain a competitive edge, as Netflix, Lyft, Uber and Google have done. However, the choice of architecture is subjective, and decisions must be taken based on the capability of developers, mean load, peak load, budgetary constraints and business-growth goals.

Sashank is a serial entrepreneur with a keen interest in innovation.


Originally posted here:
Scalability and elasticity: What you need to take your business to the cloud - VentureBeat

Multi-cloud: balancing the cloud concentration regulation risk with the innovation reward – Finextra

Regardless of size and business mix, most financial institutions have come to understand how cloud and multi-cloud computing services can benefit them. There are cost benefits when it comes to scale, deploying new services and innovating. There are security and resiliency benefits that can be difficult and expensive to replicate on-premises, especially for smaller institutions trying to keep pace with rapidly changing standards. And there is geographic access to new markets, from China to Canada, that require deployment of local, in-country systems under emerging sovereignty laws.

However, as the industry continues to embrace cloud services, regulators are becoming more aware of the challenges associated with cloud computing, especially those that could expose financial institutions to systemic risks potentially undermining the stability of the financial system. The Financial Stability Board (FSB) and the European Banking Authority have urged regulators worldwide to review their supervisory frameworks to ensure that different types of cloud computing activities are fully scoped into industry guidelines.

At the same time, public cloud provider outages have disproved the "never fail" paradigm, and there are growing calls for heightened diligence around cybersecurity risks. This is causing regulators to focus on cloud concentration risks as well, because of the potential peril created when the technology underpinning global financial services relies on so few large cloud service providers.

So how do financial institutions balance the risk versus the reward of the cloud?

Understanding the risk

The concern over infrastructure concentration and consolidation is twofold. First is the systemic risk of having too many of the world's banking services concentrated on so few public cloud platforms. Historically, this problem did not exist, as each bank operated its own on-premises infrastructure. Failure in a data centre was always limited to one single player in the market.

Second is the vulnerability of individual institutions, including many smaller institutions, that outsource critical banking infrastructure and services to a few solution providers. These software-as-a-service hyperscalers also tend to run on a single cloud platform, creating cascading problems across thousands of institutions in the event of an outage.

In both cases, performance, availability, and security-related concerns are motivating regulators who fear that a provider outage, caused either internally or by bad external actors, could cripple the financial systems under their authority.

For financial services companies, the stakes of a service interruption at a single cloud service provider (CSP) rise exponentially as they begin to run more of their critical functions in the public cloud.

Regulators have so far offered financial institutions warnings and guidance rather than enacting new regulations, though they are increasingly focused on ensuring that the industry is considering plans, such as cloud exit strategies, to mitigate the risk of service interruptions and their knock-on effects across the financial system.

The FSB first raised formal public concern about cloud concentration risk in an advisory published in 2019, and has since sought industry and public input to inform a policy approach. However, authorities are now exploring expanding regulations, which could mean action as early as 2022. The European Commission has published a legislative proposal on Digital Operational Resilience aimed at harmonising existing digital governance rules in financial services including testing, information sharing, and information risk management standards. The European Securities & Markets Authority warned in September 2021 of the risks of high concentration in cloud computing services providers, suggesting that requirements may need to be mandated to ensure resiliency at firms and across the system.

Likewise, the Bank of England's Financial Policy Committee said it believes additional measures are needed to mitigate the financial stability risks stemming from concentration in the provision of some third-party services. Those measures could include the designation of certain third-party service providers as critical, introducing new oversight to public cloud providers; the establishment of resilience standards; and regular resilience testing. They are also exploring controls over employment and sub-contractors, much like energy and public utility companies do today.

To get ahead of regulators, steps should be taken to address the underlying issues.

From hybrid to multi-cloud

Looking at the existing banking ecosystem, a full embrace of the cloud is extremely rare. While they would like to be able to act like challenger and neo banks, many of the largest and most technology-forward established banks and financial services firms have adopted a hybrid cloud architecture linking on-premises data centres to cloud-based services as the backbone of an overarching enterprise strategy. Smaller regional and national institutions, while not officially adopting a cloud-centric mindset, are beginning to explore the advantages of cloud services by working with cloud-based SaaS providers through their existing ISVs and systems integrators.

In these scenarios, some functions get executed in legacy, on-premises data centres and others, such as mobile banking or payment processing, are operated out of cloud environments, giving the benefits of speed and scalability.

Moving to a hybrid approach has itself been an evolution. At first, financial institutions put non-core applications in a single public cloud provider to trial its capabilities. Some pursued deployments on multiple cloud vendors to handle different tasks, while maintaining robust on-premises primary systems, both to pair with public cloud deployments and to power core services.

While a hybrid approach utilising one or two separate cloud providers works for now, the next logical step (taken by many fintech startups) is to fully embrace the cloud and, eventually, a multi-cloud approach that moves away from on-premises infrastructure entirely.

Solve for the cloud concentration risks

Recent service disruptions at the top public cloud providers remind us that no matter how many data centres they run, single cloud providers remain vulnerable to weaknesses created by their own network complexity and interconnectivity across sites. Disruptions vary in severity, but when an institution relies on a single provider for cloud services, it exposes its business to the risk of potential service shocks originating from that organisation's technical dependencies.

By distributing data across multiple clouds, financial institutions can improve high availability and application resiliency without sacrificing latency. This enables financial services firms to distribute their data in a single cluster across Azure, AWS, and Google Cloud while also distributing data across the many regions available across these CSPs.
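To illustrate the idea only, here is a toy Python sketch of cross-cloud replica placement with a quorum write: the data stays available even when one provider or region is unreachable. This is not any vendor's API; the provider and region names, store objects and quorum value are all illustrative assumptions.

```python
# Toy cross-cloud replica placement with a majority-quorum write.
REPLICA_PLACEMENT = [
    {"provider": "aws",   "region": "eu-west-1"},
    {"provider": "gcp",   "region": "europe-west3"},
    {"provider": "azure", "region": "germanywestcentral"},
]

def write_with_quorum(stores: dict, key: str, value: str, quorum: int = 2) -> bool:
    """Acknowledge a write once a majority of cross-cloud replicas accept it."""
    acks = 0
    for placement in REPLICA_PLACEMENT:
        name = f"{placement['provider']}/{placement['region']}"
        try:
            stores[name][key] = value     # stand-in for a replica write
            acks += 1
        except KeyError:                  # replica (or its whole cloud) unavailable
            continue
    return acks >= quorum

# Simulate one provider being down: only the first two replicas exist.
stores = {f"{p['provider']}/{p['region']}": {} for p in REPLICA_PLACEMENT[:2]}
print(write_with_quorum(stores, "account:123", "balance=100"))  # True: still durable
```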

This is particularly relevant for financial services firms that must comply with data sovereignty requirements, but have limited deployment options due to sparse regional coverage on their primary cloud provider. In some cases, only one in-country region is available, leaving users especially vulnerable to disruptions in cloud service.

Going beyond the regulations

Beyond the looming regulatory issues, there are a number of practical business and technology limitations of a single-cloud approach that the industry must address to truly future-proof their infrastructure.

Geographic constraints: not all cloud service providers operate in every business region and the availability of local cloud solutions grows increasingly important as more countries adopt data sovereignty and residency laws designed to govern how data is collected, stored and used locally.

Vendor lock-in: there is a commercial risk in placing all of an institution's bets on one cloud provider. The more integration with a single cloud provider, the harder it becomes to negotiate the cost of cloud services or to consider switching to another provider.

Security homogeneity: while CSPs invest heavily in security features, in the event of an infrastructure meltdown or cyberattack, a multi-cloud environment can give organisations the ability to switch providers and to back up and protect their data.

Feature limitations: cloud service providers develop new features asynchronously. Some excel in specific areas of functionality and constantly innovate, while others focus on a different set of core capabilities. By restricting deployments to one cloud services provider, institutions limit their access to best-of-breed features across the cloud.

With pressure building from regulatory bodies at the same time as consumers increasingly demanding premium product experiences from financial services institutions, harnessing multi-cloud can satisfy both. It provides redundancy, security and peace of mind as infrastructure is not solely dependent on one CSP, while also providing the features and space to innovate on the very best the industry has to offer. Now is the time to embrace multi-cloud.

Follow this link:
Multi-cloud: balancing the cloud concentration regulation risk with the innovation reward - Finextra

IBM CEO Arvind Krishna On The Future Of Big Blue – Forbes

Arvind Krishna began his career with IBM in 1990 and has been the CEO of IBM since April 2020 and chairman since January 2021. Following his IBM Think 2022 keynote in Boston, which I attended, Arvind sat down for a live round table with industry analysts to address a range of subjects in an open question format.

In this article, I will paraphrase Arvind's detailed and lengthy responses, hopefully providing clues to the future of Big Blue.

IBM CEO and Chairman Arvind Krishna conducts an analyst Q&A with VP of Analyst Relations, Harriet Fryman.

Technology is an undisputed source of competitive advantage

Today, few CEOs think technology doesn't matter. Most CEOs will say that technology is the single most protected line item even in a down market and a bad economy because technology will provide a sustainable advantage. Government leaders all want a robust technology industry to increase gross domestic product (GDP) at the country level.

I would concur based on my experience in talking with CEOs. Treating IT as a cost of doing business will ultimately lead to a loss of competitiveness in the marketplace.

Arvind is a technologist at heart, so it is not surprising that he has placed technology and the necessary ecosystem of partners at the heart of the business model.

I believe that, in the new wave of tech CEOs, technology is respected more than ever and is a benefit to IBM.

Creativity and co-creation are critical

Today's IBM is more open to partnerships across the stack, based on the reality that no single company has all the expertise and technology to meet customers' needs.

IBM wants partners to succeed but will still play a lesser but critical role in providing innovative technology. IBM plans to focus on key technologies such as artificial intelligence (AI), hybrid cloud, quantum computing, and blockchain.

The rise of the cloud drives this significant change in strategy. The days of selling software, hardware, and consulting in one package are in the rear-view mirror for IBM. Cloud computing meant disruption across every layer of the stack. We all know the acronyms now: SaaS (software-as-a-service), PaaS (platform-as-a-service), and IaaS (infrastructure-as-a-service).

In a follow-up 1:1 with Arvind, he talked about products taking a front-row seat to consulting. Since his arrival, the company has shifted from 70% consulting and 30% products to 30% consulting and 70% products. This is a massive shift, and to me, a "product guy," it says everything.

IBM's newly announced deals with AWS and SAP represent the theme of partner co-creation well. McDonald's is a good IBM client example, where IBM went so far as to acquire the tech arm of McDonald's, MCD Tech, to facilitate the new drive-through solution.

The hybrid cloud has come of age

Four years ago, there was general skepticism around the hybrid cloud, with a preference for a single public cloud provider. That has changed as customers sought to avoid vendor lock-in, address security and regulatory requirements, manage the cost of moving data, and maintain a strategic architecture that addresses reality and stays flexible for the future. In four years, we went from a preference for one public cloud to the majority embracing a hybrid cloud model. Cloud is no longer a place but an operating model.

The hybrid cloud delivers flexibility of deployment: the ability to deploy anywhere with security, scale, and ease of use, with the end goal of frictionless development. There is also incremental value (IBM believes it is two-and-a-half times more) from a hybrid cloud architecture compared with any singular architecture, whether only public or only private.

There is no debate with me. I have written that the hybrid cloud model and multiple cloud providers are the norms for enterprises. The turning point for me was in 2018 when AWS announced Outposts, and the debate stopped. The public cloud began 15 years ago, and the hybrid cloud is in its infancy, so it will take years for the two to cross.

AI will transform every company and every industry

Technology is the only way to scale the business without linearly adding costs. Given the vast amount of data being created today on public clouds, private clouds, and at the edge, artificial intelligence is the only technology we know that can begin to do something with this data.

Given the shifts in labor and demographics, AI is the only option to automate and take complexity and cost out of enterprise processes.

AI will also play a critical role in cybersecurity. With labor shortages in the cybersecurity profession, artificial intelligence is the technology that will spot suspicious activity and bad actors.

I have written several articles detailing how organizations adopt AI to bring efficiency, productivity gains, and a return on investment. In these uncertain times, AI is a powerful differentiator for companies of all sizes to transform digitally.

Unlocking the full potential of Red Hat

Arvind was the driving force behind the acquisition of Red Hat in 2018 and the decision to keep the company autonomous.

Red Hat is one of the few companies that has managed to tap into open-source innovation and make a market out of it. The value of Red Hat comes from the fact that it can run on all infrastructures and work with all partners. The open-source culture is very different from a proprietary source culture because of the commitment that anything Red Hat does will go back into upstream open source.

I think if IBM can keep Red Hat independent on most vectors, the sky is the limit for the company. There are only two on-prem container platforms that are extending to the public cloud, Red Hat and VMware, with HPE fielding a compelling alternative.

Accepting corporate responsibility for diversity and climate change

For IBM, it is a fundamental business priority. It is vital to constantly reflect the demographics of the societies we live in. If done well, IBM can attract and retain employees.

IBM has committed to being net-zero without purchasing carbon offsets by 2030, twenty years before the Paris Accords' goal.

I have long maintained that sustainability has become a fundamental business issue. I have written several articles that detailed how companies deal with the challenge. These challenges have become real business issues. It is not just a cost of doing business or an ESG (environmental, social, and governance) checkmark. I think sustainability offers a way to improve the business and lower costs.

Arvind is the first executive I had ever heard say that companies could save money through sustainability strategies.

Wrapping up

Unlike previous IBM CEOs, Arvind has prioritized communicating IBM's strategy and its value to industry analysts. His approach is very straightforward: he's open to criticism and receptive to feedback on how to improve.

As regular readers will know, I have followed IBM for several years. I believe IBM is on an improved tack, with a focus on AI and hybrid cloud. Incredibly enough, it is the leader in the next big step in the future of computing, quantum computing.

IBM is one of the few companies with the resources to tackle challenging problems that take years of persistence to make breakthroughs.

IBM has taken quantum computing from science fiction to where everything is now just science in ten years. It is hard to find another company with the same staying power. It's one of the only companies still doing research.

With Arvind at the reins and a group of brilliant people solving problems that make a big difference, I think IBM has a promising future.


Link:
IBM CEO Arvind Krishna On The Future Of Big Blue - Forbes

Microsoft CEO Satya Nadella tells employees that pay increases are on the way – CNBC

Microsoft CEO Satya Nadella speaks during the Microsoft Annual Shareholders Meeting at the Meydenbauer Center on November 28, 2018 in Bellevue, Washington. Microsoft recently surpassed Apple, Inc. to become the world's most valuable publicly traded company.


Microsoft CEO Satya Nadella told staffers on Monday that the company is raising compensation as the labor market tightens and employees contend with increasing inflation.

A spokesperson for the company confirmed the pay increase, which was reported earlier by GeekWire.

"People come to and stay at Microsoft because of our mission and culture, the meaning they find in the work they do, the people they work with, and how they are rewarded," the spokesperson told CNBC in an email. "This increased investment in our worldwide compensation reflects the ongoing commitment we have to providing a highly competitive experience for our employees."

Inflation jumped 8.3% in April, remaining close to a 40-year high. Meanwhile, the U.S. economy continues to add jobs and unemployment has steadily been falling, reaching 3.6% last month. Tech companies have been responding with salary bumps.

Google parent Alphabet is adjusting its performance system in a way that will bring higher pay to workers, while Amazon committed to more than doubling maximum base pay for corporate employees.

Nadella told employees that the company is "nearly doubling the global merit budget" and allocating more money to people early and in the middle of their careers and those in specific geographic areas. He said the company is raising annual stock ranges by at least 25% for employees at level 67 and under. That includes several tiers in the company's hierarchy of software-engineering roles.

In the first quarter, Microsoft increased research and development costs, which include payroll and stock-based compensation costs, by 21%. The company bolstered spending in cloud engineering as Microsoft tries to keep pace with Amazon Web Services. Research and development growth has accelerated for five consecutive quarters.

While the biggest tech companies have been lifting pay to try and retain talent, some smaller companies have been implementing layoffs as the war in Ukraine and supply shortages strain their businesses. Carvana and Robinhood are among those that are cutting staff.


Read more here:
Microsoft CEO Satya Nadella tells employees that pay increases are on the way - CNBC

Google Needs Another Database To Attack Oracle, DB2, And SQL Server Directly – The Next Platform

Why does Google need another database, and why in particular does it need to introduce a version of PostgreSQL highly tuned for Google's datacenter-scale disaggregated compute and storage?

It is a good question in the wake of the launch of the AlloyDB relational database last week at the Google I/O 2022 event.

The name Google is practically synonymous with large scale data storage and manipulation in myriad forms. The company created the MapReduce technique for querying unstructured data that inspired Hadoop, the BigTable NoSQL database, the Firestore NoSQL document database, and the Spanner geographically distributed relational SQL database. These tools were used internally at first, and then put on Google Cloud as the Dataproc, Cloud BigTable, and Cloud Spanner services.

Relational databases are back in vogue, due in part to Google showing, with the advent of Spanner, that a true relational database can scale. And to try to encourage adoption of Spanner on the cloud, Google last year created a PostgreSQL interface for Spanner that makes it look and feel like that increasingly popular open source database. This is important because PostgreSQL has become the database of choice in the aftermath of Oracle buying Sun Microsystems in early 2010 and taking control of the much more widely used open source MySQL relational database, which Sun itself had taken control of two years earlier.

The reason why Google needs a true version of PostgreSQL running in the cloud is that it needs to help enterprise customers who are stuck on IBM DB2, Oracle, and Microsoft SQL Server relational databases as the back-end datastores for their mission-critical systems of record to get off those databases and not only move to a suitable PostgreSQL replacement, but also make the move from on-premises applications and databases to the cloud.

That is the situation in a nutshell, says Andi Gutmans, vice president and general manager of databases at the search engine, ad serving, and cloud computing giant.

"Google has been an innovator on data, and we have had to innovate because we have had these billion-user businesses," says Gutmans. "But our strength has really been in cloud native, very transformative databases. But Google Cloud has accelerated its entrance into mainstream enterprises: we have booming businesses in financial services, manufacturing, and healthcare, and we have focused on heritage systems and on lifting and shifting applications into the cloud. Over the past two years, we have focused on supporting MySQL, PostgreSQL, SQL Server, Oracle, and Redis, but the more expensive, legacy, and proprietary relational databases like SQL Server and Oracle have unfriendly licensing models that really force them into one specific cloud. And we continue to get requests to help customers modernize off legacy and proprietary databases to open source."

The AlloyDB service is the forklift that Google created for this lift and shift, and don't expect Google to open up all of the goodies it has added to PostgreSQL, because these are highly tuned for Google's own Colossus file system and its physical infrastructure. But it could happen in the long run, just as Google took its Borg infrastructure and container controller and open sourced a variant of it as Kubernetes.

As we have pointed out before, the database, not the operating system and certainly not the server infrastructure, is arguably the stickiest thing in the datacenter, and companies make database decisions that span one or two decades and sometimes more. So having a ruggedized, scalable PostgreSQL that can span up to 64 vCPUs running on Google Cloud is important, as will be scaling it to 128 vCPUs and more in the coming years, which Gutmans says Google is working on.

But that database stickiness has to do with databases implementing different dialects of the SQL query language, and also having different ways of creating and embedding stored procedures and triggers within those databases. Stored procedures and triggers essentially embed elements of an application within the database rather than outside of it for reuse and performance, but there is no universally accepted and compatible way to implement these functions, and this has created lock-in.

That is one of the reasons why Google acquired CompilerWorks last October. CompilerWorks has created a tool called Transpiler, which can be used to convert SQL, stored procedures, and triggers from one database to another. As a case in point, Gutmans says that Transpiler, which is not yet available as a commercial service, can convert about 70 percent of Oracle's PL/SQL statements to another format, and that Google Cloud is working with one customer that has 4.5 million lines of PL/SQL code that it has to deal with. To help with database conversions, Google has tools to do data replication and schema conversion, and has provided additional funding where customers can get human help from systems integrators.

AlloyDB is not so much a distribution of PostgreSQL as it is a storage layer designed to work with Google's compute and storage infrastructure.

And while Google has vast scale for supporting multi-tenant instances of PostgreSQL, you will note that it doesn't have databases that span hundreds or even thousands of threads. IBM's DB2 on Power10 processors, which has 1,920 threads in a 240-core, 16-socket system with SMT8 simultaneous multithreading turned on, can grab any thread that is not being used by AIX or Linux and use it to scale the database, just to give you a sense of what real enterprise scale is for relational databases. But we are confident that if Google needed to create a 2,000-thread implementation of PostgreSQL, it could do it with NUMA clustering across its network and other caching techniques, or by installing eight-way X86 servers that would bring 896 threads to bear with 56-core Sapphire Rapids Xeon SPs and 1,024 threads to bear with 64-core Granite Rapids Xeon SPs. (Again, the operating system would eat a bunch of these threads, but certainly not as many as the database could.) The latter approach, using NUMA-scaled hardware, is certainly easier when it comes to scaling AlloyDB, but it also means adding specialized infrastructure that is really only suitable for databases. And that cuts against the hyperscaler credo of using cheap servers, and only a few configurations of them at that, to run everything.

So what exactly did Google do to PostgreSQL to create AlloyDB? Google took the PostgreSQL storage engine and built what Gutmans called a cloud native storage fleet that is linked to the main PostgreSQL node; database logging and point-in-time recovery for the database run on this distributed storage engine. Google also did a lot of work on the transaction engine at the heart of PostgreSQL and, as a result, is able to get complete linear scaling up to 64 virtual cores on its Google Cloud infrastructure. Google has also added an ultra-fast cache inside of PostgreSQL, and if there is a memory miss in the database, this cache can bring data into memory with microsecond latencies instead of the millisecond latencies that other caches have.

In initial tests running the TPC-C online transaction processing benchmark against AlloyDB, Gutmans says that AlloyDB was 4X faster than open source PostgreSQL and 2X faster than the Aurora relational database (which has a PostgreSQL compatible layer on top) from Amazon Web Services.

And to match the high reliability and availability of those legacy databases such as Oracle, SQL Server, and DB2, Google has a 99.99 percent uptime guarantee on the AlloyDB service, and this uptime importantly includes maintenance of the database. Gutmans says that other online databases only count unscheduled and unplanned downtime in their stats, not planned maintenance time. Finally, AlloyDB has an integrated columnar representation for datasets that is aimed at doing machine learning analysis on operational data stored in the database, and this columnar format can get up to 100X better performance on analytical queries than the open source PostgreSQL.

The PostgreSQL license is very permissive about allowing innovation in the database, and Google does not have to contribute these advances to the community. But that said, Gutmans adds that Google intends to contribute bug fixes and some enhancements it has made to the PostgreSQL community. He was not specific, but stuff that is tied directly to Google's underlying systems like Borg and Colossus is not going to be opened up.

So now Google has three different ways to get PostgreSQL functionality to customers on Google Cloud. Cloud SQL for PostgreSQL is a managed version of the open source PostgreSQL. AlloyDB is a souped-up version of PostgreSQL. And Spanner has a PostgreSQL layer thrown on top, but it doesn't have compatibility for stored procedures and triggers because Spanner is a very different animal from a traditional SQL database.

Here is another differentiator. With the AlloyDB service, Google is pricing based on the amount of compute and storage customers consume, but the IOPS underpinning access to the database are free and unmetered, unlike with many cloud database services. IOPS gives people agita because it cannot be easily predicted, and it can be upwards of 60 percent of the cost of using a cloud database.

AlloyDB has been in closed preview for six months and is now in public preview. General availability on Google Cloud is expected in the second half of this year.

Which leads us to our final thought. Just how many database management systems and formats does a company need?

"We think of ourselves as the pragmatists when it comes to databases," says Gutmans, who is also famous as the co-founder of the PHP programming language and the Zend company that underpins its support. "If you look at the purpose-built database, there is definitely a benefit, where you can actually optimize the query language and the query execution engine to deliver best-in-class price and performance for that specific workload. The challenge is, of course, that if you have too many of these, it starts to become cognitive overload for the developers and system managers. And so there's probably a sweet spot in the middle ground between monolithic and multimodal. You don't go multimodal completely because then you lose that benefit around price, performance, and use-case-specific optimization. But if you go too broad with too many databases, it becomes complicated. On the relational side, customers definitely have at least one relational database and in many cases they are also dealing with legacy databases. And with those legacy databases, we are definitely seeing more and more interest in standardizing on a great open source relational database. Document databases provide a lot of ease of use, especially on the web-facing side of applications when you want to do things like customer information and session management with very loose schemas, to basically have a bag of information about a customer or transaction or song. I am also a big fan of graph databases. Graph is really going through a renaissance because not only is it very valuable in the traditional use cases around fraud detection and recommendation engines and drug discovery and master data management, but with machine learning, people are using graph databases to extract more relationships out of the data, which can then be used to improve inferencing. Beyond that, we have some other database models that, in my opinion, have some level of diminishing returns, like time series or geospatial databases."

PostgreSQL has very good JSON support now, so it can be morphed into a document database, and it is getting its geospatial support together, too. There is a reason why Google is backing this database horse and getting it fit for the race. It seems unlikely that any relational database could have a good graph overlay, or that a graph database could have a good relational overlay, but that latter item is something to think about another day.
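As a small illustration of that JSON/document support, the sketch below uses PostgreSQL's jsonb type through the psycopg2 driver to store and query a document-style record. The connection string, table name and sample data are placeholders, and it assumes a reachable PostgreSQL (or PostgreSQL-compatible) instance.

```python
# Document-style storage and querying on stock PostgreSQL jsonb, via psycopg2.
import psycopg2
from psycopg2.extras import Json

# Placeholder DSN; point this at any PostgreSQL-compatible endpoint.
conn = psycopg2.connect("dbname=demo user=demo password=demo host=localhost")
with conn, conn.cursor() as cur:
    cur.execute(
        "CREATE TABLE IF NOT EXISTS profiles (id serial PRIMARY KEY, doc jsonb)"
    )
    cur.execute(
        "INSERT INTO profiles (doc) VALUES (%s)",
        [Json({"name": "Ada", "city": "Berlin", "plan": "premium"})],
    )
    # jsonb containment (@>) filters by a partial document; ->> extracts a field.
    cur.execute(
        "SELECT doc->>'name' FROM profiles WHERE doc @> %s",
        [Json({"city": "Berlin"})],
    )
    print(cur.fetchall())
conn.close()
```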

See the rest here:
Google Needs Another Database To Attack Oracle, DB2, And SQL Server Directly - The Next Platform

How Web3 and Cloud3 will power collaborative problem-solving and a stronger workforce – VentureBeat


The onset of the COVID-19 pandemic propelled cloud adoption at an unprecedented rate. The benefits of cloud computing combined with the promises Web3 holds for such things as blockchain-backed decentralization, scalability and increased ownership for everyday users became clearer when the world shut down in March 2020.

Now, Web3's lesser-known but important counterpart, Cloud3, is also beginning to gain traction. Executives like Salesforce CEO Marc Benioff are already mapping how their companies will adopt the new iteration of cloud computing, the core of which is built around working from anywhere, further supporting the workforce shift.

"We're in a new world. This is a huge opportunity to create and extend and complement our platform. We realized for each and every one of our clouds, it was time to transform to become a work-from-anywhere environment. We ultimately are focused on delivering the operating system for Cloud3," Benioff said in a company press release earlier this year.

Cloud3 and Web3 may sound like the latest tech buzzwords, but according to industry experts, the two are on the rise, and enterprise executives and community leaders need to pay attention or risk getting left behind.

Before there was a third iteration of anything, there had to, of course, be a first and second to lay the foundation.

The original iteration of the World Wide Web was created by Tim Berners-Lee in 1990. It focused on HTML, the specification of URLs and hypertext transfer protocol (HTTP) commands. While Web 2.0 is complex, it can be simplistically defined as what we know the internet to be today, including access to the web via Wi-Fi, smartphones and the rise of social media usage.

Web3's features ensure more democratization of the web. With blockchain-backed decentralization and scalability, there will be less oversight, which may, of course, lead to bad actors, but could also pave the way for underrepresented people, communities and companies to gain more control.

"It always seems that POC [people of color] create the culture of communities or companies, but end up benefiting last from it. And with this kind of new paradigm of power Web3 can provide, we're able to finally take ownership of our communities," said Cheryl Campos, head of venture growth and partnerships at Republic.

"We can use Web3 to more easily and equally share the wealth with others and make sure we are sharing the profits with others. What is so exciting is that Web3 allows for that through non-fungible tokens (NFTs) and decentralized autonomous organizations (DAOs), and even with new DeFi (decentralized finance) products coming out that focus on supporting communities or loaning to others. That is not just the wealth gap, but also the ownership gap, that Web3 helps bring back to the hands of communities and the people within them in a meaningful way," Campos added.

Founder and partner of the Open Web Collective, Mildred "Mimi" Idada, agrees: "The Web3 ethos can bring in more diversity, not just in terms of race, nationality or gender, but also diversity in backgrounds, skills and perspectives."

"Diverse skill sets and perspectives are also necessary for innovation in the Web3 space. We need not only technical talent, such as developers, but also creatives, lawyers, bankers and community builders," Idada said.

That said, the innovation and benefits Web3 can provide to communities, businesses and investors alike won't happen overnight.

According to Greg Isenberg, cofounder and CEO of Late Checkout, a company that designs, creates and acquires Web3 and community-based technology businesses, Web3 still has a ways to go until the full breadth of its benefits is visible, but it's important for executives and community leaders to pay attention now.

"Web3 doesn't, and can't, work unless the UX [user experience] is very simple, so much so that your grandmother could buy a digital asset like an NFT to have ownership in Web3. But to do that, we need a lot of infrastructure in place," Isenberg said.

Isenberg said he has seen several companies make great strides in UX with a proactive eye toward the rise of Web3, like Rainbow, the Ethereum wallet that allows you to manage many digital assets in one place. Isenberg said he expects other companies across industries to soon follow suit. He also echoes Campos' and Idada's excitement and predictions regarding Web3, citing the impressive outpouring of cryptocurrency donations made to Ukraine, totaling around $55 million in just days. It's the Web3 infrastructure that these platforms and currencies like crypto are beginning to build that creates the scalability of donations like this.

"What gets me excited about Web3 in general is the coordination it brings to capital to address important things. That [was possible] because of the web infrastructure that was built on top of it," Isenberg said. "I expect social causes to be a huge part of popular Web3 projects going forward. Now I'm thinking, 'What else can this help change?' It's interesting because there's a perception that Web3 is bad for the environment, for example, but I actually think that a large part of solving the world's problems will stem from coordinating people and capital, and Web3 has already proven to be really good at that."

Part of the infrastructure needed to support Web3's promise to coordinate and help solve community and world problems efficiently and at scale will come from Cloud3's advanced capabilities, which assure secure access to collaborative tools from anywhere.

The evolution of cloud technology began with large IT operations that were disrupted by the software-as-a-service boom. Next came infrastructure-as-a-service and platform-as-a-service technologies, further relieving pressures placed on IT teams and developers alike. Now, the demand for everything the prior cloud iterations provide is just as fierce as the demand, from companies and the public alike, to access these tools from wherever, whenever, with strong IT security as a backbone.

"Cloud3 will empower businesses to leverage cloud-based experience platforms as a toolkit to seamlessly compose personalized communication experiences," said Steve Forcum, Cloud3 expert and director and chief evangelist for marketing at Avaya.

A report from the health information technology and clinical research company, Iqvia, underscores that emerging Cloud3 technologies will disrupt application development in organizations across all industries. Companies in the life sciences and financial industries, in particular, are well-positioned to leverage Cloud3 to differentiate themselves by applying artificial intelligence to big data.

Cloud3's emergence will also transform how businesses are run and how tools and information are supported and accessed to match the pace and style of life that the world has shifted to post-pandemic.

"Rather than businesses focusing on moving to the cloud, [with Cloud3] they'll be forced to think of ways to transform within the cloud. With this comes innovation and new, cloud-based technologies. Disruptive technology should not require disruption to your business," Forcum said. "A converged platform approach with composability at its core is malleable in nature, adjusting to the organization's business processes, versus forcing processes to compromise around the limitations of a cloud platform or app."

Though intriguing promises and benefits stem from both the emergence of Web3 and Cloud3, there are concerns where they overlap.

"A drawback we do see with [the overlap of the] decentralized web [Web3] and Cloud3 is more the industry recognizing that while there are similarities, these are also two very different spaces with very different mechanisms and tools to achieve their goals," said Idada. "Nonetheless, hardware, computation power and cloud computing will be key pieces to the next phase of the web. Improved and enhanced capabilities will change how everyday apps operate and what is possible to meet our faster-paced, on-the-go lifestyles."

As for what the future holds as innovation increases and cloud adoption accelerates, the consensus from experts is to pay attention or risk getting left behind.

Isenberg predicts that as we move closer to fully fleshed-out iterations of both Web3 and Cloud3, more legacy companies will begin to adopt them and make moves in the space, but that along with this, particularly for Web3, many of those companies may also fail.

"We'll likely see legacy companies embrace Web3 and it's probably not going to go very well for many of them," he said. "I think you're going to see a small percentage, maybe 1% to 5%, embrace it really, really well and become category leaders among crypto data brands while others struggle to find their place."

"The future of work is remote. So, you have to make sure that there is infrastructure that will allow for this, or otherwise, you will not retain or get the best talent right for your operations. And more than ever, it has been clear that companies that embrace this Web3 space are more likely to attract younger talent and folks that are bullish on the space," Campos added.


View post:
How Web3 and Cloud3 will power collaborative problem-solving and a stronger workforce - VentureBeat

Optical Network Hardware Market Forecasted to Hit USD 8.21 Billion by 2030 with a CAGR of 4.62% – Report by Market Research Future (MRFR) -…

New York, US, May 16, 2022 (GLOBE NEWSWIRE) -- Optical Network Hardware Market Overview: According to a comprehensive research report by Market Research Future (MRFR), "Optical Network Hardware Market Information by Equipment, by Application and Region - Forecast to 2030," the market size is expected to reach USD 8.21 billion by 2030, growing at a compound annual growth rate of 4.62%.
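As a quick sanity check on how a figure like that is derived, the short Python sketch below applies the standard compound annual growth rate formula. The eight-year horizon and the implied base-year value it prints are illustrative assumptions for the calculation, not numbers taken from the MRFR report.

```python
# Minimal sketch of the CAGR arithmetic behind a market projection.
# The horizon (2022-2030) is an assumed illustration, not a figure from the report.

def project_value(base_value: float, cagr: float, years: int) -> float:
    """Compound a starting value forward at a fixed annual growth rate."""
    return base_value * (1 + cagr) ** years

def implied_base(final_value: float, cagr: float, years: int) -> float:
    """Work backwards: what base value compounds to final_value at this CAGR?"""
    return final_value / (1 + cagr) ** years

CAGR = 0.0462        # 4.62% per the report
FINAL_2030 = 8.21    # USD billion by 2030, per the report
YEARS = 8            # assumed horizon, e.g. 2022-2030

base = implied_base(FINAL_2030, CAGR, YEARS)
print(f"Implied base-year market size: ~USD {base:.2f} billion")
print(f"Check: USD {project_value(base, CAGR, YEARS):.2f} billion after {YEARS} years")
```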

Optical Network Hardware Market Scope: Optical networking is a form of communication that uses light-encoded signals to convey data across various telecommunications networks. These include long-distance national, international and transoceanic networks, as well as limited-range local-area or wide-area networks that cross metropolitan and regional areas. Optical network hardware is connected by optical fibers, which are typically very thin glass cylinders or filaments that carry information as light.

Dominant Key Players on Optical Network Hardware Market Covered are:

Get Free Sample PDF Brochure: https://www.marketresearchfuture.com/sample_request/5446

Factors compelling demand for optical network gear include the huge expansion in the number of people connected online, increased bandwidth needs for heavy network applications, and the growing adoption of data centers. Point-to-point networks establish permanent connections between two or more points, allowing any pair of nodes to communicate with each other; point-to-multipoint networks broadcast the very same signals to many different nodes at the same time; and switched networks, such as the telephone system, include switches that establish temporary connections between pairs of nodes.
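To make the distinction between those three topologies concrete, here is a toy Python sketch modeling which nodes can exchange signals under each scheme. The node names and link sets are purely illustrative and are not drawn from the report.

```python
# Toy illustration of the three network topologies described above.
# Node names and link sets are purely illustrative.

point_to_point = {("A", "B"), ("B", "C")}  # fixed, permanent links between node pairs

def p2p_can_talk(a: str, b: str) -> bool:
    """Point-to-point: only nodes joined by a permanent link can communicate."""
    return (a, b) in point_to_point or (b, a) in point_to_point

def multipoint_broadcast(head: str, leaves: list[str], signal: str) -> dict:
    """Point-to-multipoint: the head end sends the same signal to every leaf node."""
    return {leaf: signal for leaf in leaves}

def switched_connect(a: str, b: str, active_calls: set) -> set:
    """Switched: a temporary circuit between two nodes is set up on demand."""
    return active_calls | {frozenset((a, b))}

print(p2p_can_talk("A", "C"))                                  # False: no direct link
print(multipoint_broadcast("OLT", ["home1", "home2"], "video"))
print(switched_connect("Alice", "Bob", set()))                 # one temporary circuit
```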

Market USP Exclusively Encompassed: Optical Network Hardware Market Drivers

The escalating use of high-bandwidth internet and cloud computing is projected to stimulate the need for fiber optic cables, which would boost growth in the global optical network hardware market. Data centers require dependable components to meet the stringent requirements of the cloud while also sustaining the network's physical architecture. Over the projection period, the advent of mobile services, reliance on connected devices, and reliance on 3G and 4G services may increase market demand. Fiber-to-the-home (FTTH) is critical for connecting millions of houses to the internet. As citizens increasingly seek online consultations, telehealth is responsible for driving the most demand. The proliferation of virtual healthcare may have a positive impact on the market. The usage of the internet for streaming material and conducting video conferences creates new industry prospects.

Browse In-depth Market Research Report (85 Pages) on Optical Network Hardware Market: https://www.marketresearchfuture.com/reports/optical-network-hardware-market-5446

Market Restraints:

The lack of developed infrastructure in emerging nations may stymie the growth of the worldwide optical network hardware market. Regulations and shifting government policies might limit market demand. The market may face challenges due to the necessity for appropriate fiber management to take advantage of available ports and ensure data center uptime. The need for qualified employees to maintain continuous use of data services without sacrificing speed may result in high demand for hardware engineers.

COVID-19 Analysis

The COVID-19 pandemic has hampered global optical network hardware operations. The consequences of the pandemic and government-enforced lockdown limitations have lowered the demand for hardware. Homeschooling and online education necessitate continuous connectivity to ensure the training of children and adults, and serve as a viable market driver. Low tolerance for delay can be beneficial to the market. Furthermore, there has been an increase in internet buying during the pandemic, with e-commerce companies demanding fast speeds to keep people engaged. The need for appropriate network hardware might be driven by the necessity for consistently low end-to-end latency and sustained interaction with users. The development of virtualization and cloud computing is projected to threaten market demand.

Talk to Expert: https://www.marketresearchfuture.com/ask_for_schedule_call/5446

However, network design upgrades, WDM equipment spending, and the requirement for capacity for broadband services and IP video can bring market reprieve. Companies have seen the necessity for network flexibility and adaptability improvements due to the COVID-19 pandemic. To accommodate the influx of remote teleworkers and other types of online business activity flooding wide area network infrastructure, top communication service providers (CSPs) and their clients have had to accelerate connectivity across the board.

Segmentation of Market Covered in the Research:

By Equipment

Wavelength-division multiplexing (WDM) is expected to grow at a 15% CAGR during the evaluation period. Over the projected period, the utilization of 100 Gbps data rates is likely to attract additional customers and enhance segment demand.

By Application

The broadband infrastructure sector is predicted to provide significant revenue for the worldwide optical network hardware market. The growing penetration of cellphones and the internet may help accelerate the trend.

Buy this Report:

https://www.marketresearchfuture.com/checkout?currency=one_user-USD&report_id=5446

Regional Analysis

According to estimates, North America will dominate the worldwide optical network hardware market because users rely on fiber networks to complete daily tasks. The expansion of work-from-home opportunities and unified communication software may increase the need for fiber-enabled broadband networks. The requirement for seamless connectivity for online learning, content streaming, and telehealth is expected to drive the optical network hardware market demand throughout the research period. APAC is a key hub for consumer electronics, healthcare, automobile, and other industries. This gives the market a big opportunity to expand its offerings to boost its consumer base. Over the assessment period, the region is predicted to grow at a CAGR of 16%. Usage of smartphones, emphasis on network quality, demand for connection, and the advent of video streaming can all contribute to market expansion.

Grain Management, LLC, a top private investment firm focused solely on broadband technology and the global communications industry, announced the acquisition of LightRiver's Technologies & Software entities, which comprise a premier optical network integration solution for the telecommunications, utilities, datacenter, and cloud industries. The company provides full lifecycle software, hardware, services, and support solutions in multi-technology networking. It focuses on designing, acquiring, delivering, and providing continuing technical support for heterogeneous transport networks and open software tools for discovering, monitoring, provisioning, and controlling multi-vendor packet-optical networks.

Related Reports:

Time-Sensitive Networking Market Research Report: Information based on types of Standards, based on Component - Forecast till 2027

Network Slicing Market Research Report: Information by Component, End User, Application, and Region Forecast till 2027

Network Probe Market Research Report: Information by Component, Organization Size, Deployment Mode, End Users, and Region - Forecast Till 2027

About Market Research Future:

Market Research Future (MRFR) is a global market research company that takes pride in its services, offering complete and accurate analysis of diverse markets and consumers worldwide. Market Research Future has the distinguished objective of providing optimal-quality, granular research to clients. Our market research studies, by products, services, technologies, applications, end users, and market players, for global, regional, and country-level market segments enable our clients to see more, know more, and do more, which helps answer your most important questions.


Read this article:
Optical Network Hardware Market Forecasted to Hit USD 8.21 Billion by 2030 with a CAGR of 4.62% - Report by Market Research Future (MRFR) -...

Paddle, the company that wants to take on Apple in IAP, raises $200M at a $1.4B valuation to supercharge SaaS payments – TechCrunch

Software as a service has become the default for how organizations adopt and use apps these days, thanks to advances in cloud computing and networking, and the flexibility of pay-as-you-use models that adapt to the evolving needs of a business. Today, a company called Paddle, which has built a large business out of providing the billing backend for those SaaS products, is announcing a large funding round of $200 million as it gears up for its own next stage of growth.

The Series D investment, led by KKR with participation from previous backers FTV Capital, 83North, Notion Capital and Kindred Capital, along with debt from Silicon Valley Bank, values London-based Paddle at $1.4 billion. With this round, the startup has raised $293 million.

Paddle today works with more than 3,000 software customers in 200 markets, where it provides a platform for them to set up and sell their SaaS products in those regions, primarily in a B2B model. But with so many consumer services also sold these days in SaaS models, its ambitions include a significant expansion of that to areas like in-app payments.

"We've been growing a lot in the last couple of years. We thought it would tail off [after the COVID-19 peak] but it didn't," said Christian Owens, the CEO and co-founder. Indeed, that includes more videoconferencing use by everyday people arranging Zoom dinners, but also the explosion of streamed media and other virtual consumer services. "B2C software has over the years blurred with what is thought of as B2B. Suddenly everyone needed our B2B tools."

Payments has long been a complicated and fragmented business in the digital world: banking practices, preferred payment methods and regulations differ depending on the market in question, and each stage of taking and clearing payments typically involves piecing together a chain of providers. Paddle positions itself as a merchant of record that has built a set of services around the specific needs of businesses that sell software online, covering checkout, payment, subscription management, invoicing, international taxes and financial compliance processes.

Sold as a SaaS itself (basic pricing is 5% + 50 cents per transaction), Paddle's premise follows the basic principle of so many other business tools: payments is typically not a core competency of, say, a video conferencing or security company (one of its customers is BlueJeans, now owned by Verizon, which used to own TechCrunch; another is Fortinet).
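To make the merchant-of-record idea and that pricing concrete, here is a minimal, hypothetical sketch of a single checkout call that applies a local tax rate, deducts a 5% + $0.50 platform fee and reports what the seller nets. Every function name, field and tax rate here is invented for illustration; this is not Paddle's actual API.

```python
# Hypothetical merchant-of-record checkout sketch.
# All names, fields and tax rates are illustrative; this is NOT Paddle's actual API.

from dataclasses import dataclass

@dataclass
class CheckoutResult:
    gross: float          # what the buyer pays, tax included
    tax: float            # VAT/sales tax collected by the merchant of record
    net_to_seller: float  # what the software company receives

# Illustrative tax rates the merchant of record would resolve per jurisdiction.
TAX_RATES = {"GB": 0.20, "DE": 0.19, "US-CA": 0.0725}

def checkout(price: float, country: str,
             fee_pct: float = 0.05, fee_fixed: float = 0.50) -> CheckoutResult:
    """One call: apply local tax, take the platform fee, remit the rest to the seller."""
    tax = price * TAX_RATES.get(country, 0.0)
    platform_fee = price * fee_pct + fee_fixed
    return CheckoutResult(gross=round(price + tax, 2),
                          tax=round(tax, 2),
                          net_to_seller=round(price - platform_fee, 2))

print(checkout(49.00, "GB"))  # a $49 subscription sold to a UK buyer
```

The point of the design is that the selling company calls one service and never has to maintain jurisdiction-specific tax, invoicing or compliance logic itself.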

To be fair, there are dozens (maybe hundreds) of merchants of record in the market for payments services, from PayPal and Stripe through to Amazon and many more, which is no surprise since payments is complicated and just about any business selling online will turn to one of these at some point to handle that flow. However, Paddle believes (and has proven) that there is a business to be made in bringing together the many complicated parts of providing a billing and payments service into a single product specifically tailored to software businesses. It does not disclose actual revenues or specific usage numbers, but notes that revenue growth (not necessarily revenue) has doubled over the last 18 months.

Paddle as a company name doesn't have a specific meaning.

"It's not a reference to anything, just a name we liked," said Owens, who himself is a Thiel Fellow. And that impulse to make decisions on a hunch that it could be catchy is something that seems to have followed him and the company for a while.


He came to the idea of Paddle with Harrison Rose (currently chief strategy officer and credited with building its sales ethos), after the two tried their hands at a previous software business they founded when they were just 18, an experience that gave them a taste of one of the big challenges for startups of that kind.

"You make your first $1 million-$2 million in revenue with a handful of employees, but gradually those businesses become $2-20 million in sales, and then $300 million, but the basic problems of running them don't go away," he said.

Billing and payments present a particularly thorny problem because of the different regulations and compliance requirements, and practices, that scaling software companies face across different jurisdictions. Paddle itself works with some half dozen major payment companies to enable localized transactions, and many more partners, to provide that as a seamless service for its customers (which are not payment companies themselves).

You may recognize the name Paddle for having been in the news last autumn, when it took its observations on the challenges of payments to a new frontier: apps, and specifically in-app payments. It announced last October that it was building an alternative to Apple's in-app payments service.

This was arrived at through much of the observational logic that started Paddle itself, as Owens describes it. Apple, as is well known, has been locked in a protracted dispute with a number of companies that sell apps through the App Store and have wanted more control over their billing (and to give Apple less of a cut of those proceeds). Owens said Paddle felt encouraged to build an alternative in the heat of that dispute, before it had even been resolved, based on the response from the market (and specifically developers and app publishers) to that public dispute and governments' stance.

Its approach is not unlike Apple's own, ironically:

"There is one thing Apple has done right, which is to build a full set of tools around commerce for these businesses," he said. But, he added, its failing has been in not giving customers a choice of when to use it, and how much to charge for it. "There has to be an alternative to cover all that as well."

(Paddle plans to charge 10% for transactions under $10, and just 5% on transactions over $10, compared to Apple's 30%, a spokesperson later told me.)
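As a quick worked comparison of those figures, the sketch below computes the fee on a few sample in-app purchase prices under Paddle's stated tiers versus Apple's 30% cut. The sample prices and the exact handling of the $10 boundary are assumptions, since the article does not spell them out.

```python
# Worked comparison of the in-app fee schedules quoted above.
# Treating "under $10" as price < 10 (and 5% otherwise) is an assumption;
# the article does not specify the exact boundary rule.

def paddle_iap_fee(price: float) -> float:
    rate = 0.10 if price < 10 else 0.05
    return round(price * rate, 2)

def apple_iap_fee(price: float) -> float:
    return round(price * 0.30, 2)

for price in (4.99, 9.99, 49.99):
    print(f"${price:>6.2f} purchase -> Paddle fee ${paddle_iap_fee(price):.2f}, "
          f"Apple fee ${apple_iap_fee(price):.2f}")
```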

"The product is built and ready to go," Owens said, adding that there are already 2,000 developers signed up, representing $2 billion in app store volume, ready to try it out. Originally due to launch in December, Paddle has held off as Apple's case with Epic (one of the most outspoken critics of IAP) has dragged on.

And, he said, Paddle found its name included, and not in a good way, in an update to Apple's complaint.

That bold attitude may indeed keep Paddle in Apple's bad books, but has made it a hero to third-party developers.

"Paddle is solving a significant pain point for thousands of SaaS companies by reducing the friction and costs associated with managing payments infrastructure and tax compliance," said Patrick Devine, a director at KKR, in a statement. "By simplifying the payments stack, Paddle enables faster, more sustainable growth for SaaS businesses. Christian and the team have done a phenomenal job building a category-defining business in this space, and we are excited to be supporting them as they embark on the next phase of growth."

See the original post:
Paddle, the company that wants to take on Apple in IAP, raises $200M at a $1.4B valuation to supercharge SaaS payments - TechCrunch