Category Archives: Cloud Servers

Major Data Breaches, Ransomware Attacks and Cybersecurity … – GeekWire

In today's digital world, businesses constantly face cyber threats, making disaster recovery planning crucial for operational, reputational, and financial protection.

In this article, I will discuss these threats, present statistics, share security breach stories, highlight common mistakes, and outline disaster recovery planning, including the various approaches and guidance needed for comprehensive business protection.

A disaster in the context of business encompasses any event or circumstance that has a highly negative impact on a company's operations, financial stability, or reputation.

These range from natural disasters to cybersecurity breaches, economic downturns, and more. In the digital age, cybersecurity threats are a new category of disasters, posing significant risks to businesses worldwide.

Cyber threats are among the fastest-growing business threats today, and recent statistics reveal their severity. For instance, according to estimates from Statista's Cybersecurity Outlook, the global cost of cybercrime is expected to surge over the next five years, rising from $8.44 trillion in 2022 to $23.84 trillion by 2027.

To illustrate the real-world consequences of security breaches, let's recall the recent data breach at Twitter (while it was still called that) in June 2023, which exposed the emails of 253 million people. Although it appears that no additional information was exposed, the primary concern in this situation is the possibility that malicious individuals could exploit these email addresses to reveal the identities of users who prefer to post anonymously.

Consequences of a cyberattack vary based on its nature, the industry, and the organization's preparation and cybersecurity measures. Common outcomes include data breaches (resulting in financial losses, legal liabilities, and reputational damage), ransomware attacks (causing data loss, downtime, and ransom payments), reputational damage that erodes customer trust, regulatory and legal consequences, operational disruptions (leading to recovery costs), intellectual property theft, phishing scams that compromise accounts, business email compromises that prompt unauthorized transactions, and supply chain disruptions.

Common mistakes made by companies include, but are not limited to, underestimating the importance of cybersecurity, failing to regularly update security systems, and lacking employee training in recognizing and responding to threats. Additionally, some businesses neglect to back up critical data or to create an effective disaster recovery plan.

To avoid these mistakes, businesses should invest in comprehensive cybersecurity measures, employee training, and the creation and regular testing of disaster recovery and incident response plans. Prioritizing cybersecurity is essential in today's digital landscape to protect sensitive data and ensure business continuity.

A disaster recovery plan (DRP) is a comprehensive strategy that outlines how a business will respond to and recover from disasters, including cybersecurity incidents. It is designed to minimize downtime, protect data, and ensure the continuity of operations.

A well-rounded DRP covers all critical aspects of a business.

Disaster recovery planning for businesses involves various approaches, depending on the organization's size, industry, and specific needs.

Choosing a disaster recovery approach involves considering budget, system criticality, recovery time objectives (RTOs), and recovery point objectives (RPOs). Regular testing of plans is crucial, regardless of the approach, to ensure they meet organizational needs. The ultimate goal of disaster recovery planning is to minimize downtime, protect data, and ensure business continuity amid unexpected disasters or disruptions.

A successful disaster recovery plan is a dynamic document starting with a thorough risk assessment. This assessment identifies organization-specific threats and vulnerabilities, forming the foundation for the entire plan.

In the plan, clear recovery time objectives (RTOs) and recovery point objectives (RPOs) are crucial, guiding the acceptable timeframe for restoring operations and defining the maximum tolerable data loss. A dedicated, well-prepared team is vital, with assigned roles and regular training for swift crisis response.
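To make RTOs and RPOs concrete, here is a minimal TypeScript sketch, with hypothetical names and values, of the kind of check a planning team might script: under periodic backups, the worst-case data loss is roughly one backup interval, so the interval must not exceed the RPO.

```typescript
// Hypothetical illustration of an RPO check; real plans also model
// replication lag, snapshot duration, and restore-test results.
interface RecoveryObjectives {
  rtoMinutes: number; // maximum acceptable time to restore operations
  rpoMinutes: number; // maximum tolerable window of data loss
}

function backupScheduleMeetsRpo(
  backupIntervalMinutes: number,
  objectives: RecoveryObjectives
): boolean {
  // Worst case: the disaster strikes just before the next backup runs.
  return backupIntervalMinutes <= objectives.rpoMinutes;
}

// Example: hourly backups cannot satisfy a 15-minute RPO.
console.log(backupScheduleMeetsRpo(60, { rtoMinutes: 240, rpoMinutes: 15 })); // false
```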

Regular testing and updates are essential as the threat landscape evolves. The plan should adapt to emerging threats, incorporate the latest technologies, and remain a living document.

Communication is key; a plan should inform stakeholders during incidents, managing crises and maintaining trust.

Secure data backup and recovery processes are core. Regular offsite backups of critical data and efficient recovery minimize downtime and data loss.
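As one illustration of the offsite-backup step, here is a hedged TypeScript (Node.js) sketch that ships a database dump to object storage using the AWS SDK; the region, bucket, and file paths are hypothetical, and a production job would also encrypt the dump and target a bucket with object locking (immutability) enabled.

```typescript
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
import { readFile } from "node:fs/promises";

// Hypothetical region and bucket; swap in your own offsite destination.
const s3 = new S3Client({ region: "us-east-1" });

async function uploadOffsiteBackup(localPath: string): Promise<void> {
  const body = await readFile(localPath);
  await s3.send(
    new PutObjectCommand({
      Bucket: "example-offsite-backups",
      Key: `db/${new Date().toISOString()}.dump`, // timestamped object key
      Body: body,
    })
  );
}

uploadOffsiteBackup("/backups/db.dump").catch(console.error);
```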

A successful disaster recovery plan encompasses technical and human elements. Employee training is crucial, and the plan should be regularly reviewed and updated to reflect evolving needs and the changing threat landscape.

In an era where cyber threats are constantly evolving, businesses must prioritize disaster recovery planning. By understanding the types of threats, learning from real-life examples, and avoiding common mistakes, businesses can develop robust disaster recovery plans that safeguard their operations and data. Investing in cybersecurity and having a well-structured DRP is not only a smart business move, but a necessary one in today's digital landscape.


iOS 17.2: Amazing New iPhone Features And Fixes Suddenly Revealed – Forbes


Apple's iOS 17.1 has only just arrived, but iOS 17.2 is coming soon, and it includes some brilliant new features and fixes several pesky bugs. Among the new features in iOS 17.2 is a very cool update to iMessage, which will be a huge boost to iPhone security.

First unveiled in December last year as part of new iCloud security features, and finally appearing in the iOS 17.2 beta now, iMessage Contact Key Verification has been a long time coming.

The new iMessage feature in iOS 17.2 prevents attackers from listening to or reading your conversations if they've managed to breach cloud servers.

If you have Contact Key Verification enabled in iOS 17.2, you will receive a notification if someone is able to eavesdrop on your conversations. As an extra layer of security, the new iPhone feature also allows you to use a Contact Verification Code on FaceTime or in person, just to make sure the person you are speaking to is who they say they are.

Contact Key Verification in iOS 17.2 is designed for people who could be targets for attacks utilizing iPhone malware called spyware, which can allow adversaries to see everything you write and hear anything you say.

Over the last year or so, Apple has been busy releasing new iPhone features to protect users from spyware attacks, as well as patching numerous security holes that could be used in so-called zero-click attacks.

However, while it is a security feature akin to the likes of Apple's Lockdown Mode, Contact Key Verification doesn't reduce your iPhone's functionality like Lockdown Mode does, so there is no security-functionality trade-off. That makes it more accessible to all security-conscious iPhone users.

With this in mind, Jake Moore, global cybersecurity advisor at ESET, has praised the new iPhone feature. "Contact Key Verification works seamlessly without any direct action needed, making it another security-focused feature working tirelessly in the background. Having the ability to use this new iOS 17.2 feature in strict security situations gives users that vital peace of mind that they are conversing with who they think they are."

"As AI steps up, offering quality voice cloning techniques and with relatively good deep fakes in the pipeline, this sort of protection is imperative," Moore says.

Apple has described how Contact Key Verification works in iOS 17.2 in a new technical blog.

"Contact Key Verification is designed to detect sophisticated attacks against iMessage servers and allow users to verify that they're messaging only with whom they intend," Apple says.

The iPhone maker explains how iMessage Contact Key Verification uses a mechanism called Key Transparency. This builds on the ideas of Certificate Transparency, essentially a security standard for monitoring and auditing digital certificates.

Yet as Apple explains, Contact Key Verification uses "a verifiable log-backed map data structure, which can provide cryptographic proofs of inclusion and be audited for consistency over time."

"These properties allow for higher scalability and better user privacy," Apple says.

It's certainly an exciting new feature, and when it debuts in iOS 17.2, it can be found in your iPhone's Settings > your name > Contact Key Verification, where you can toggle it on. This applies to beta users as of now.

So when will iOS 17.2 arrive? It looks like the update will be released around November or December, including other new features such as the Journal app, new AirPlay settings for the Apple Vision Pro headset, collaborative Apple Music playlists, new weather widgets and enhancements to the Contacts app.

According to Apple-focused site 9to5Mac, iOS 17.2 will also fix the Wi-Fi connectivity issues that have been plaguing some iPhone users since updating to iOS 17.

It's also likely iOS 17.2 will come with a bunch of security fixes, so keep an eye on my Forbes page for updates.

Kate is an award-winning and widely recognized cybersecurity and privacy journalist with well over a decade's experience covering the issues that matter to users, businesses and governments. In addition to Forbes, her work can be found in publications including Wired, The Guardian, The Observer, The Times and The Economist.

With a focus on smartphone security including Apple iOS security and privacy, application security, cyberwarfare and data misuse by the big tech firms, Kate reports and analyzes breaking cybersecurity and privacy stories and trending topics.

She is also a recognized industry commentator and has appeared on radio shows including the WVON Morning Show with Attorney Ernest B. Fenton, BBC Radio 5 Live and podcasts such as the Guardian's Today in Focus. Kate can be reached at kate.oflaherty@techjournalist.co.uk.


Application Hosting Market to Surpass USD 178.1 Billion by 2030 on Account of Rising Demand for Mobile Applications and Cloud Computing Advancements |…

SNS Insider Pvt Ltd

Based on SNS Insider's research, the application hosting market continues to thrive, driven by technological innovations, security concerns, and the evolving needs of businesses and consumers.

Pune, Oct. 31, 2023 (GLOBE NEWSWIRE) --

The SNS Insider report states that the Application Hosting Market was valued at USD 67.0 billion in 2022 and is projected to reach USD 178.1 billion by 2030, with a compound annual growth rate (CAGR) of 13% anticipated during the forecast period from 2023 to 2030.

Market Overview

Application hosting is the practice of deploying and operating software applications on remote servers, making them accessible to users over the internet. Instead of installing and running applications locally on individual devices, users can access these applications via a web browser. The hosting service provider is responsible for maintaining the servers, ensuring uptime, and handling technical aspects, allowing businesses and individuals to focus on using the applications rather than managing the underlying infrastructure.

Market Analysis

The proliferation of smartphones and the increasing reliance on mobile apps across various industries have significantly boosted the demand for application hosting services. As businesses develop mobile applications to enhance customer engagement and expand their reach, reliable hosting solutions become crucial. Application hosting providers offer optimized infrastructure to ensure seamless performance and accessibility of these mobile apps, catering to the growing market needs.

In an era where data breaches and cyber threats are prevalent, businesses prioritize the security and compliance aspects of application hosting. Hosting providers are investing heavily in robust security measures, including encryption protocols, multi-factor authentication, and regular security audits. Moreover, adherence to industry-specific regulations and compliance standards ensures that businesses can trust their hosting partners with sensitive data, thereby fueling application hosting market growth.


Get a Sample Report of Application Hosting Market@ https://www.snsinsider.com/sample-request/3365

Key Company Profiles Listed in this Report are:

IBM

Google

Rackspace

Microsoft

Liquid Web

Sungard AS

DXC

Apprenda

Navisite

GoDaddy & Other Players

Application Hosting Market Report Scope:

Market Size in 2022: US$ 67.0 Bn

Market Size by 2030: US$ 178.1 Bn

CAGR: 13% from 2023 to 2030

Base Year: 2022

Forecast Period: 2023-2030

Report Scope & Coverage: Market Size, Segments Analysis, Competitive Landscape, Regional Analysis, DROC & SWOT Analysis, Forecast Outlook

Key Regional Coverage: North America, Europe, Asia-Pacific

Key Segments Covered: By Hosting Type (Managed, Cloud, Colocation); By Service Type (Application Monitoring, Application Programming Interface Management, Infrastructure Services, Database Administration, Backup, Application Security); By Application (Mobile Based, Web Based); By Organization Size (Large Enterprise, Small and Medium Size Enterprise); By Industry (BFSI, Retail and Ecommerce, Healthcare, Media and Entertainment, Energy and Utilities, Telecommunications and IT, Manufacturing)

Key Takeaway from Application Hosting Market Study

The application monitoring segment stands out as a crucial segment, dominating the market with its advanced technologies and innovative solutions. This segment is marked by its ability to monitor applications in real-time, ensuring seamless performance, optimal user experience, and robust security protocols.

The large enterprise segment stands as a powerhouse, driving innovation and shaping the industry's future. Large enterprises, characterized by their extensive resources, complex infrastructures, and diverse customer bases, are at the forefront of adopting advanced application hosting solutions. Their unique requirements and substantial investments fuel the growth of this segment, making it a dominant force in the application hosting market.

Recent Developments

HostPapa has recently announced its acquisition of Cloud 9 Hosting. HostPapa's reputation for reliability and Cloud 9 Hosting's expertise in advanced hosting solutions seem to complement each other seamlessly, promising customers a host of benefits.

Kinsta, a leading player in the web hosting industry, has recently unveiled its innovative Application Hosting and Database Hosting services. Kinsta's new Application Hosting services are designed to cater to the dynamic needs of modern applications.

If you have any specific queries or need customized research on the Application Hosting Market, enquire now @ https://www.snsinsider.com/enquiry/3365

Market Dynamics Analysis

As enterprises increasingly migrate their operations to cloud-based applications, the demand for reliable and scalable hosting solutions surges. Moreover, the growing trend of remote workforces amplifies this need, pushing organizations to invest in hosting services that ensure seamless access and data security. Additionally, the rapid advancements in technologies such as edge computing and IoT devices are compelling businesses to seek hosting solutions capable of handling complex and data-intensive applications.

However, amidst these opportunities, there exist significant challenges and restraints. Security concerns loom large as cyber threats become more sophisticated, posing a substantial threat to hosted applications and sensitive data. Compliance with regulations and standards also presents a challenge, especially for industries with strict data governance requirements. Furthermore, the fast-paced evolution of hosting technologies creates a challenge for businesses to keep up, requiring constant adaptation and skill development. In this dynamic application hosting market, the competition among hosting service providers intensifies, making it essential for companies to differentiate their offerings and provide unparalleled value to their clients.

Application Hosting Market Key Segmentation:

By Hosting Type

Managed

Cloud

Colocation

By Service Type

Application Monitoring

Application Programming Interface Management

Infrastructure Services

Database Administration

Backup

Application Security

By Application

Mobile Based

Web Based

By Organization Size

Large Enterprise

Small and Medium Size Enterprise

By Industry

BFSI

Retail and Ecommerce

Healthcare

Media and Entertainment

Energy and Utilities

Telecommunications and IT

Manufacturing

Key Regional Developments

North America stands tall as the hub of technological innovation, fostering a highly developed application hosting market. The presence of major tech giants and a robust IT infrastructure drive the market growth. Europe embraces a diverse landscape of hosting solutions tailored to meet the unique needs of various industries. The market in Europe is characterized by a mix of cloud, dedicated, and shared hosting services. The Asia-Pacific region is witnessing unprecedented growth in digital transformation, propelling the market to new heights. Countries like China, India, and Japan are experiencing a surge in e-commerce, mobile apps, and online services, leading to an increased demand for hosting services.

Impact of Recession on Application Hosting Market Growth

The recession has accelerated the shift from traditional in-house hosting to managed hosting services. With managed hosting, businesses can outsource their hosting needs to specialized providers, reducing the burden on their IT departments and cutting operational costs. This trend is likely to continue as companies recognize the value of expert management and support, especially when facing economic uncertainties. Despite the challenges posed by the recession, the application hosting market is witnessing a surge in innovation. Hosting providers are investing in research and development to offer more efficient, secure, and user-friendly hosting solutions.

Buy a Single-User PDF of Application Hosting Market Report 2023-2030@ https://www.snsinsider.com/checkout/3365

Table of Contents

1. Introduction

2. Research Methodology

3. Market Dynamics

4. Impact Analysis

5. Value Chain Analysis

6. Porter's Five Forces Model

7. PEST Analysis

8. Application Hosting Market Segmentation, by Hosting Type

9. Application Hosting Market Segmentation, by Service Type

10. Application Hosting Market Segmentation, by Application

11. Application Hosting Market Segmentation, by Organization Size

12. Application Hosting Market Segmentation, By Industry

13. Regional Analysis

14. Company Profile

15. Competitive Landscape

16. Use Cases and Best Practices

17. Conclusion

Access Complete Report Details@ https://www.snsinsider.com/reports/application-hosting-market-3365

[For more information or need any customization research mail us at info@snsinsider.com]

About Us:

SNS Insider is one of the leading market research and consulting agencies in the global market research industry. Our company's aim is to give clients the knowledge they require in order to function in changing circumstances. In order to give you current, accurate market data, consumer insights, and opinions so that you can make decisions with confidence, we employ a variety of techniques, including surveys, video interviews, and focus groups around the world.

Read More ICT Market Research Report


Ransomware Readiness Assessments: One Size Doesn’t Fit All – Dark Reading

Ransomware attacks can be devastating for organizations, causing significant damage to operations and reputations. Therefore, it's crucial to prepare for such an eventuality with a comprehensive ransomware response plan. However, it's also essential to understand that ransomware readiness assessments aren't a one-size-fits-all solution.

Let's explore why a tailored approach to ransomware readiness assessments is necessary and highlight some scenarios you may encounter during a ransomware attack.

The impact and severity of a ransomware attack can vary depending on the attacker's objectives, the organization's security posture, and other factors. Therefore, a comprehensive response plan must be tailored to the specific circumstances of different types of impacts from an attack.

For example, a ransomware attack may impact servers only within a particular geographic region, cloud environment, or data center. Alternatively, the attack may affect authentication of every user due to compromised Active Directory servers. Or you may not know the viability of backups, or the threat actor may provide a decryption tool.

Preparing for different scenarios requires a thorough ransomware readiness assessment to better understand the current maturity of response and to develop or improve an incident-response plan that considers each potential scenario's unique characteristics. There is definitely value in identifying and resolving what keeps the business up at night and hyperfocusing on that in the assessment's first pass. For instance, prioritizing backup immutability can be a critical step in ensuring the organization's resilience against ransomware attacks. Your assessment could focus solely on immutability or disaster-recovery strategies.

Here are a few questions that can help you think through your ransomware readiness preparations:

If you obtain a decryption tool from the threat actor, do you have a plan in place to safely and effectively decrypt servers?

To prepare for the various scenarios that can arise during a ransomware attack, you can hold workshops on topics such as emergency implementation of containment measures, backup tooling and configurations, critical application assessment, Active Directory and network architecture, coordination processes, and surge resourcing.

Workshops on emergency server, end-user, network, and backup system containment help identify the steps required to contain an attack, minimize malware spread, and isolate affected systems.

Backup tooling and configuration workshops help ensure you have backups available and accessible during a ransomware attack. Identify and address any risks, such as privileged credential misuse, and establish backup restoration times sufficient to recover critical systems.

Assessing critical applications and executive user backup capabilities is another essential workshop topic. It allows you to identify your most critical systems and institute adequate backup capabilities. Addressing any risks identified during the assessment enables you to recover critical applications in the event of an attack.

Active Directory and network architecture workshops are necessary to understand the lateral movement that may occur during a ransomware attack. This knowledge can help minimize the severity of an attack and limit the attacker's ability to move laterally within the network.

Workshops on coordination processes help organizations stay aligned while executing recovery operations. These workshops bring together key technical engineering teams, such as server admins, backup system admins, security teams, outsourced IT providers, and third-party service providers, to make recovery efforts coordinated, efficient, and effective.

Workshops on surge resourcing help you obtain access to the necessary resources to restore servers, build new servers, install and validate apps, provide help desk support, and so on. Identifying potential surge resourcing scenarios in advance can help you respond effectively during a ransomware attack.

Overall, conducting workshops on these topics is critical to help organizations prepare to respond to a ransomware attack. These workshops can help you identify your organization's strengths and weaknesses in terms of readiness and create a response plan that considers your unique circumstances.

Ransomware attacks are a significant threat to organizations, and their impact and severity can vary. Therefore, it's wise to develop a comprehensive ransomware response plan for the specific circumstances of each type of attack. By conducting tailored ransomware readiness assessments and workshops, you can develop a comprehensive response plan that minimizes damage and restores operations quickly.


How to Become a Security Engineer – Dice Insights

A security engineer ensures that an organization's software, networks, hardware, and data are safe from intrusion and theft. Let's first look at the details of what a security engineer does and the skills needed, and then what training and career paths look like.

Security engineers ensure the network and IT engineers are using best security practices, such as keeping firmware and device software up-to-date with the latest security patches and minimizing what's known as an attack surface. For instance, when dealing with software applications that require connectivity to a database server, it's essential to decrease the attack surface by limiting that database server's unnecessary exposure to the internet.

Companies that build their own software also need security engineers to ensure software is secure as it's being built; the security engineer will work with the developers on best practices to harden the software against potential attacks. This includes using proper password techniques, as well as ensuring the software doesn't inadvertently provide an intruder with access via backdoors.

Security engineers work full-time. This is not a job that an application developer does a little bit on the side. A security engineer needs to devote his or her entire work to security, becoming an expert in its impact on their company and industry. An organization shouldn't simply trust its security requirements to application developers or project managers who know a little bit of security. They need either an expert on staff full time, or a part-time consultant from a firm that hires full-time security engineers.

Security engineers help test existing systems for vulnerabilities, as well as advise the IT team as they're building such systems. Here are some skills needed for that:

Penetration testing: This refers to the task of intentionally breaking into a system from the outside. Security engineers typically have a set of software tools that assist in this. The tools will attempt to break into the software through multiple means, including using different software ports. Some tools are run manually by the security engineer; other tools the security engineer must install and configure to run automatically on a regular basis.
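As a toy illustration of the kind of check these tools automate, here is a hedged TypeScript (Node.js) sketch that probes a host for open TCP ports; the target host is hypothetical, and such probes should only ever be run against systems you are authorized to test:

```typescript
import { Socket } from "node:net";

// Attempt a TCP connection; resolve true if the port accepts it.
function isPortOpen(host: string, port: number, timeoutMs = 2000): Promise<boolean> {
  return new Promise((resolve) => {
    const socket = new Socket();
    socket.setTimeout(timeoutMs);
    socket.once("connect", () => { socket.destroy(); resolve(true); });
    socket.once("timeout", () => { socket.destroy(); resolve(false); });
    socket.once("error", () => resolve(false));
    socket.connect(port, host);
  });
}

// Hypothetical target: probe a few common service ports.
const host = "app-server.corp.internal";
for (const port of [22, 80, 443, 1433, 3306]) {
  isPortOpen(host, port).then((open) =>
    console.log(`${host}:${port} is ${open ? "open" : "closed or filtered"}`)
  );
}
```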

Vulnerability and security assessment: Once penetration testing is complete, the security engineer will put together an assessment showing where all the problems are, and what needs to be done to correct the problems.

Intrusion detection: Security engineers install software that detects an active intrusion and immediately notifies the security engineers and other people in the organization, and possibly even the regular security staff or even police. (Note that this could mean being awakened in the night!)
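A minimal sketch of the idea, in TypeScript (Node.js), with a hypothetical log path and an invented alert threshold; real intrusion detection systems correlate far richer signals than this:

```typescript
import { readFileSync } from "node:fs";

const LOG_PATH = "/var/log/auth.log"; // assumed log location
const THRESHOLD = 5;                  // invented alert threshold

// Count failed-login lines and raise an alert when the threshold is crossed.
function checkForBruteForce(): void {
  const failures = readFileSync(LOG_PATH, "utf8")
    .split("\n")
    .filter((line) => line.includes("Failed password")).length;

  if (failures >= THRESHOLD) {
    // A real system would page the security engineer (email, SMS, on-call tooling).
    console.error(`ALERT: ${failures} failed logins found in ${LOG_PATH}`);
  }
}

setInterval(checkForBruteForce, 60_000); // re-check every minute
```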

Setting up new systems securely: In addition to testing existing systems, security engineers will help the IT team build networks that are secure. The security engineer meets with the IT team before the network is built and advises them on the steps to build a secure network, the best software and devices to use, and the best way to configure everything. The security engineer will also work with them as the network is being built, doing penetration testing early on to catch problems before the system goes live, and will continue doing penetration tests on a regular basis after launch.

Security policies: Security engineers will help the IT team put together policies to be enforced, such as lists of the only software allowed to be installed on computers; lists of software that's blacklisted; rules on password management, and so on. They will also likely train employees on how to keep their passwords safe and how to not fall for phishing scams.

Compliance: Companies that work with certain organizations such as government agencies typically need to meet regulatory compliance. The security engineer is the one who needs to understand such regulations, how to implement them correctly, and how to report that the system is compliant.

Assisting application developers: For companies that build software, security engineers work in a specialized role called application security engineering, whereby they help the software developers follow best security practices. This requires additional skills beyond those listed above.

There are some steps to landing the first job:

Training. Training is vital. While you can learn some IT professions on your own, security requires as much training as you can stand to get. The reason is liability. Organizations trust the people they hire. An organization can usually survive if its software crashes and restarts. There might be some annoyed users, but if no data is lost, there's likely little financial liability.

Security engineers, on the other hand, need to ensure that intruders won't break in and steal millions of customer records; such an intrusion can result in the company getting sued for tens of millions of dollars or even more. There is a lot riding on hiring the right security engineer.

If you want a recruiter or hiring manager to be totally comfortable with the idea of you as a security engineer, it helps to have a collection of courses, certifications, and degrees on your resume and other application materials. If you already have a bachelor's degree in a related field such as computer science, another option is to go back and get a master's degree in security.

People networking. As with most jobs these days, it's important to grow your network. Large corporations will typically hire teams of security engineers. A software development firm might hire just one, compared to dozens of software developers. And security firms will typically be hiring multiple people. That means finding these companies, and ideally, meeting people who work there who can get your resume to the top of the list. You can meet people through job networking sites such as LinkedIn, as well as by attending conferences and meetups.

But remember, because competition is tight, you'll need to be ready to prove yourself, both with your certification and your skillset. Plan to be the best you can and shine above the others.

As with many tech careers, there are junior, mid-level, and senior-level security engineers. In large companies and security consulting firms that hire multiple security engineers, you would be starting out at a junior level, working under people with years of experience who can help teach you additional skills beyond your training.

Medium-sized companies that have an opening for one security engineer are likely to go with somebody with more experience than a junior level. When you reach such a position, you would have a great deal of autonomy.

Senior engineers at a large corporation or security firm might not be doing so much hands-on work and might be managing teams of security engineers. Or as you advance in your career, instead of managing, you can become more specialized. You might focus on only cloud security, network security, or the application security we already mentioned.

And as you advance, plan to keep training and learning through your entire career. With every new version of operating systems and software, you need to update your skills to know the new features and security risks of those features. You also need to learn about new methods of attacks and intrusion and how to prevent them, and what to do if an intrusion happens.

Note finally that some security engineers start out as software developers. Such people are in demand, as they can take on the job of application security engineer. But again, since security is a full-time job (with very solid pay), this is a full career change, not just a side gig for an application developer.

Security engineering is a difficult field and requires continual training and certification updates. But it can be exciting and rewarding. Plan to work hard, and soon you'll find yourself getting your first position.


Q3 2023 cloud results: AI investments drive up revenue and CapEx – DatacenterDynamics

Growth across the board, some more than others

Amazon Web Services (AWS), Microsoft, and Google have all posted their Q3 results for 2023.

AWS saw a strong quarter with double-digit year-on-year sales growth, with sales and income for the unit also jumping more than $1 billion from Q2.

Microsoft reported higher-than-expected AI consumption, driving up Azure results.

Though it posted a second consecutive quarterly profit, Google Cloud's operating income for Q3 was down on the previous quarter, despite revenue being up.


AWS segment sales increased 12 percent year-over-year to $23.1 billion. Operating income was $7 billion, compared with operating income of $5.4 billion in the third quarter of 2022.

Q2 sales were $22.1bn, while operating income was $5.4 billion.

"We had a strong third quarter as our cost to serve and speed of delivery in our Stores business took another step forward, our AWS growth continued to stabilize, our Advertising revenue grew robustly, and overall operating income and free cash flow rose significantly," said Andy Jassy, Amazon CEO. "The AWS team continues to innovate and deliver at a rapid clip, particularly in generative AI, with the combination of our custom AI chips and Amazon Bedrock being the easiest and most flexible way to build and deploy generative AI applications."

Overall, the wider company saw net sales increase 13 percent to $143.1 billion in the third quarter; operating income increased to $11.2 billion, and net income increased to $9.9 billion. AWS makes up 16 percent of the company's net sales.

During the earnings call, CEO Jassy said the customer reaction to its generative AI Bedrock service had been very positive and its launch into general availability had buoyed that further.

He said the GenAI opportunity could equate to tens of billions of dollars of revenue for the company over the next several years.

CapEx investments were $50 billion for the trailing 12-month period ended September 30, down from $60 billion in the comparable prior year period. The company said lower fulfillment and transportation CapEx would be partially offset by increased infrastructure CapEx to support AWS, including additional investments related to generative AI and large language model efforts.

CFO Brian Olsavsky said AWS margin improvements were partly down to headcount reductions in Q2 and also continued slowness in hiring.

"There's been also a lot of cost control in non-people categories, things like infrastructure costs and also discretionary costs," he said. "Natural gas prices and other energy costs have come down a bit in Q3 as well."

For Q3, Microsoft's revenue in the Intelligent Cloud unit, which includes Azure, was $24.3 billion, an increase of 19 percent year-on-year. The company said server products and cloud services revenue increased 21 percent, while Azure and other cloud services revenue grew 29 percent (28 percent in constant currency).

For Q2 2023, Microsoft's Intelligent Cloud unit posted revenue of $24 billion.

For the wider company, quarterly revenue was $56.5 billion (up 12 percent), while operating income was $26.9bn and net income was $22.3bn.

Satya Nadella, chairman and chief executive officer of Microsoft, said: "We are rapidly infusing AI across every layer of the tech stack and for every role and business process to drive productivity gains for our customers."

Microsoft's capital expenditures, including finance leases, were $11.2 billion for the quarter to support cloud demand, including investments to scale AI infrastructure.

In the earnings call, CFO Amy Hood said higher-than-expected AI consumption contributed to revenue growth in Azure, but Microsoft Cloud's slight margin gains in Azure were partially offset by the impact of scaling AI infrastructure to meet growing demand.

She said growth was ahead of expectations, driven primarily by increased GPU capacity and better-than-expected GPU utilization of AI services. On-premises server business revenue increased 2 percent, also ahead of expectations, driven primarily by demand in advance of Windows Server 2012's end of support.

Google Cloud reported Q3 2023 revenues of $8.41 billion. This was up from Q3 2022's $6.86bn, and Q2 2023's $8.03bn.

Operating income for Google Cloud this quarter was $266 million, up from the previous year's $440m loss, but down on Q2's $395m profit.

The wider company announced revenues of $76.69 billion, operating income of $21.34bn, and a net income of $19.69bn.

Sundar Pichai, CEO, said: "I'm pleased with our financial results and our product momentum this quarter, with AI-driven innovations across Search, YouTube, Cloud, our Pixel devices, and more. We're continuing to focus on making AI more helpful for everyone; there's exciting progress and lots more to come."

In the results, Google said that adjusting the estimated useful life of servers from four years to six, and of certain network equipment from five years to six, reduced depreciation expense by $977 million and $2.9 billion, and increased net income by $761 million and $2.3 billion, for the three and nine months ended September 30, 2023, respectively.
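For intuition on why a longer useful life reduces depreciation expense, here is a small TypeScript sketch of straight-line depreciation, using invented numbers rather than Google's actual asset base:

```typescript
// Straight-line depreciation: annual expense = cost / useful life in years.
function annualDepreciation(cost: number, usefulLifeYears: number): number {
  return cost / usefulLifeYears;
}

const serverFleetCost = 12_000_000; // invented fleet cost, in dollars

const fourYearCharge = annualDepreciation(serverFleetCost, 4); // $3,000,000/yr
const sixYearCharge = annualDepreciation(serverFleetCost, 6);  // $2,000,000/yr

// Extending the useful life from four to six years cuts the annual charge
// by a third, which flows directly into reported operating income.
console.log(`Annual expense falls by $${(fourYearCharge - sixYearCharge).toLocaleString()}`);
```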

Capital expenditures were $8.1 billion for the three months ended September 30, 2023. Company CFO Ruth Porat said this was driven overwhelmingly by investment in technical infrastructure, with the largest component being servers, followed by data centers, reflecting a meaningful increase in investments in AI compute.

"We do continue to expect elevated levels of investment in our technical infrastructure. It will be increasing in the fourth quarter," she said. "We will continue to grow CapEx in 2024."

Data from Synergy Research Group released this week shows that Q3 enterprise spending on cloud infrastructure services was over $68 billion worldwide, up by $10.5 billion from the third quarter of last year.

"The current economic and political climate has crimped some growth in cloud spending, but there is clear evidence that generative AI technology and services are starting to help overcome those barriers," the company said.

"While the law of large numbers continues to exert downward pressure on cloud market growth rates, AI is giving the market an added boost. Helped by AI, there are signs that many enterprises are through their period of belt-tightening and of optimizing rather than growing their cloud operations. AI is helping to open up a wide range of new cloud workloads."


Are cloud computing stocks going to take off in 2024? – The Armchair Trader

There is a whiff of recovery in the stock markets, at least in the US, and it is time to look for stocks that have the potential to outperform the market going into next year.

High on that list are tech stocks. They have been through a few waves of selling this year, but their underlying growth potential is still strong. Precisely because of their recent weakness, now may be a good time to look at them again.

At the Armchair Trader we particularly like the prospects for cloud computing and cybersecurity companies, two parts of the tech universe that will be in high demand even when companies or retail buyers try to rein in their spending.

The BVP Nasdaq Emerging Cloud Index has gained over 11% since the start of the year and increased in value by almost 50% over the last five years. Compare this with a year-to-date increase of 2.12% and a 5-year rally of 30% in the DJIA and the case speaks for itself.

The value of the cloud computing market reached $405 billion in 2022 and is expected to mushroom to $1.46 trillion over the next five years.

There are good reasons for it. Cloud computing offers significant cost advantages over using physical infrastructure such as servers and data centres, and this is even more the case in the post-COVID hybrid work environment. Cloud computing services companies maintain their high profitability and margins because their services are relatively easy to scale, very flexible, and increasingly indispensable, and providers continue to benefit from economies of scale.

This is an exceptionally dynamic field where technological innovation, such as the development of advanced AI capabilities, edge computing, or hybrid cloud solutions, can significantly impact a company's competitive advantage.

Apart from the biggest players such as Amazon [NASDAQ:AMZN], the industry's first mover and market share leader, and Google parent company Alphabet [NASDAQ:GOOGL], there is a whole host of smaller and medium-sized companies worth a look.

One of them is US-Dutch company Elastic NV [NYSE:ESTC], a search company that builds self-managed and SaaS offerings for search, logging and analytics use cases. It recently released strong Q1 earnings, which featured both stable growth rates and high net expansion rates, and forecast a significant increase in year-on-year profitability.

Elastic NV's valuation is still only moderate, even after a share price increase of 40% year-to-date to $70.87. Several forecasters expect the average Elastic NV share price to increase to at least $84 over the next 12 months, with some optimistic views reaching as high as $108.

Another prospect is cybersecurity firm Zscaler [NASDAQ:ZS]. The ever more widespread use of AI has also introduced new security issues in the form of AI-enabled cybersecurity attacks and companies are increasing their spending in order to fend off such attacks.

Zscaler has reported revenue of $1.62 billion for the full year 2023, a 48% increase year-on-year, and is expanding its product portfolio to cover the whole cybersecurity market. Year-to-date, Zscaler's share price has increased 42%, and the stock is up 287% over the last five years.

Also on the radar is UiPath [NYSE:PATH], a company which was founded in Romania but has since become a global software player with headquarters in New York. The company builds platforms that provide what in the past would have been called autopilots: technologies designed to carry out definable, repeatable and manageable workplace tasks, from basic to increasingly more complex ones. UiPath's share price is up nearly 40% on the year, trading at $17.07.


The Cloud has a serious and fragile vulnerability: Access Tokens – Security Boulevard

The October 2023 Okta support system attack, which so far has publicly involved Cloudflare, 1Password and BeyondTrust, shows just how fragile and vulnerable our cloud applications are, because they are built using access tokens to authenticate counterparties.

If a valid access token is stolen by a threat actor, most systems don't have the internal defenses built in to detect that a valid token is now being used by a threat actor, leaving them completely vulnerable. To be clear, reliance on access tokens to authenticate counterparties is not unique to Okta; it could happen with any of a myriad of cloud services providing application functionality. Email, CRM, and accounting/finance are just some examples.

What are access tokens, and why do they make our cloud infrastructure and applications so vulnerable? Simple: they are what is known as bearer tokens, which means that if you possess the token, you have the access rights granted to the original possessor of the token. By design, the recipient is only required to check whether the token is still valid and, if so, grant access.

Because of their power, access tokens must be protected at all costs from theft. There are four key aspects to protecting tokens: never persist them to storage; send them only over secure, mutually authenticated channels; keep their valid lifetimes as short as possible; and restrict which machines can connect to the servers that accept them.

Deviate from these simple rules, and applications (cloud, hybrid and classical) that rely on access tokens become extremely vulnerable.

Looping back to the recent Okta breach, it boils down to stolen access tokens, with three of the four rules breached. The only rule that wasn't fully breached was number two, although by public reporting the connection does not appear to have been mutually authenticated.

What is an access token?

Think of an access token as being similar to a ticket used to get into a sporting event, with some notable differences. Presenting a valid sporting ticket to a stadium gate agent allows you to enter the venue (authentication). Once inside the stadium gate, you go to the section, row and seat for that ticket (scope/authorization).

Access tokens, technically called OAuth tokens, work like tickets, with two notable differences: reuse and copy protection.

How are access tokens generated and used?

Without getting down into the weeds of how modern identity providers (IDPs) and protocols work, the answer is fairly simple: an Agent (human or system) presents credentials to an IDP and requests access tokens to a resource (e.g., a system, application, or database). The credentials presented to the IDP can be a username/password augmented with multi-factor authentication, a certificate, or other types of secrets. If the Agent is successfully authenticated, the IDP sends it an access token. To protect access tokens, they should only be sent over properly encrypted channels, with the valid time period set as short as possible, though the downside to short durations is that the Agent has to re-authenticate to the IDP more often.

Once an Agent has their access tokens, they present them to resources to gain access. The resource validates the tokens with the IDP to see if they have been revoked or expired and if not, grants access based on the authorization rights associated with the Agent.
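A minimal TypeScript sketch of that flow, using the standard OAuth 2.0 client-credentials grant; the IDP and resource URLs and the client identifiers are hypothetical, and this assumes Node 18+ for the global fetch:

```typescript
// Step 1: authenticate to the IDP and obtain an access token.
async function getAccessToken(): Promise<string> {
  const res = await fetch("https://idp.example.com/oauth/token", {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({
      grant_type: "client_credentials",
      client_id: "example-agent",                    // hypothetical credentials
      client_secret: process.env.CLIENT_SECRET ?? "",
    }),
  });
  const { access_token } = await res.json();
  return access_token; // hold in memory only; never write it to disk
}

// Step 2: present the bearer token to the resource.
async function callResource(): Promise<void> {
  const token = await getAccessToken();
  const res = await fetch("https://api.example.com/resource", {
    headers: { Authorization: `Bearer ${token}` },
  });
  console.log(`resource responded with HTTP ${res.status}`);
}

callResource().catch(console.error);
```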

How do you protect access tokens?

Given how powerful access tokens are, they must be protected at all costs.

Going back to the four methods of protection listed earlier, the first order of business is to never persistently store access tokens. Avoiding persistent storage greatly reduces the attack surface available to a threat actor trying to get a copy of a token. As a side note, the original credentials presented to the IDP to obtain the access tokens in the first place should not be persisted either. Instead, applications should make use of key management services or vaults.

The second task is to ensure that tokens are always sent over secure channels, which in the majority of cases means using Transport Layer Security (TLS). TLS versions older than 1.2 should be avoided, as these are no longer considered secure and have been deprecated.

TLS has two modes of operation: One-way (most common) and mutual.

With one-way TLS, the channel provides only data privacy and data integrity, and misses out on client authentication, because only the server providing the resource has a certificate. Many human-based use cases can be sufficiently secured using one-way TLS, because a human can be asked for additional authentication factors by the IDP as part of the initial IDP login. However, one-way TLS usually leaves open vulnerabilities for machine-to-machine connections, where mutual TLS should be used, as it serves as a critical second factor for authentication.

Ideally, mutual TLS (mTLS) connections should be used, whereby both sides of the connection mutually authenticate each other at the transport layer using certificates or even pre-shared keys, which act as an additional authentication factor. The use of mTLS ensures the Agent is authenticated with the server, validating that connections are coming only from valid and known sources, as opposed to a threat actor's machine armed with a stolen access token.
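As a sketch of what enforcing mTLS looks like in practice, here is a hedged TypeScript (Node.js) example of a server that requires client certificates; the certificate paths are hypothetical:

```typescript
import { createServer } from "node:https";
import { readFileSync } from "node:fs";

// requestCert + rejectUnauthorized make a client certificate mandatory, so a
// stolen bearer token alone is not enough to open a connection to this server.
const server = createServer(
  {
    key: readFileSync("/etc/pki/server.key"),   // hypothetical paths
    cert: readFileSync("/etc/pki/server.crt"),
    ca: readFileSync("/etc/pki/client-ca.crt"), // CA that issues client certs
    requestCert: true,
    rejectUnauthorized: true,
  },
  (req, res) => {
    // Only mutually authenticated clients reach this handler; bearer-token
    // validation against the IDP would happen here as the next check.
    res.writeHead(200);
    res.end("hello, authenticated client\n");
  }
);

server.listen(8443);
```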

The third task is to ensure access tokens have as short a lifetime as possible, so that if they are stolen, their useful lifetime is limited. For human Agents, this translates to having to re-authenticate with the IDP, so many of these use cases tend to have lifetimes set in hours or days. For machine-based Agents, the lifetime should be set as short as possible without putting unnecessary burden on the IDP. Often this translates to 15 to 60 minutes.
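For JWT-style access tokens, the expiry is carried in the exp claim, so a resource can reject stale tokens cheaply. A minimal sketch (assuming a standard three-part JWT; a real service must also verify the token's signature):

```typescript
// Decode the JWT payload (the second dot-separated segment) and check expiry.
function isTokenExpired(token: string): boolean {
  const [, payloadB64] = token.split(".");
  if (!payloadB64) return true; // malformed token: treat as expired

  const payload = JSON.parse(Buffer.from(payloadB64, "base64url").toString("utf8"));

  // Per RFC 7519, exp is in seconds since the Unix epoch.
  return typeof payload.exp !== "number" || payload.exp * 1000 <= Date.now();
}
```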

The fourth task is to ensure that some means of restricting who can connect to servers (defense-in-depth) is in place, as this presents a very significant hurdle to overcome should an access token be stolen. A server that requires all connections to be mTLS-based protects itself with a critical second factor, ensuring that only authenticated connections can be used to send access tokens. This second factor significantly reduces the attack surface through which access tokens can be abused. Using mTLS combined with cloud privacy or network security groups, or layer 3 segmentation solutions, goes even further with defense-in-depth attack surface reduction.

Conclusion:

Applications and infrastructure must adhere to solid coding and implementation practices, such as the ones described in this blog, when access tokens are at the center of the authentication strategy. Best practices of short expirations, never persisting tokens, and using mTLS for all connections, combined with network capabilities such as network or privacy groups or segmentation, are all required to ensure sufficient defense-in-depth protective controls are in place.


*** This is a Security Bloggers Network syndicated blog from TrustFour: TLS Compliance Monitoring, authored by Robert Levine. Read the original post at: https://trustfour.com/the-cloud-has-a-serious-and-fragile-vulnerability-access-tokens/


Touring the Intel AI Playground – Inside the Intel Developer Cloud – ServeTheHome

We had the opportunity to do something that is just, well, cool. In August, just as STH was preparing to move from Austin to Scottsdale, I had the opportunity to head up to Oregon and tour something I had been asking about for at least a year. My questions to Intel were: What is the Intel Developer Cloud? Where does it run out of? Is it only a few systems set up that you get short-term SSH access to? All those questions and more were answered when I visited the Intel Developer Cloud.

As a quick note, we are going to say this is sponsored since we had to fly up to Oregon to do this piece, and also, this is not common access. It took well over a year to go from the idea to getting the approvals to doing the tour. As with everything on STH, our team produces this content independently, but we just wanted to call this out.

As one would imagine, we also have a video for this article since it was so cool to see.

For those who have not seen this yet, the Intel Developer Cloud is the company's place to try out systems in a cloud environment with various technologies, including Intel Xeon, Xeon Max, GPU Flex Series, GPU Max Series, and (formerly Habana) Gaudi. Something that I did not know before the tour was that Intel has service tiers ranging from a more limited free test drive tier to paid plans for developers and teams. There is another option for enterprises that need larger scale deployments as a more customized program.

One can create an account, and through Intel's various developer account types and presumably some sales logic, different platforms become available to you to try.

At that point, SSH credentials are deployed alongside instances running on hardware, and access is granted to develop on platforms. Some plans have early access to hardware, support, and additional toolkits. Here is an example of starting an instance with a 4th Gen Intel Xeon Scalable (Sapphire Rapids) Platinum 8480+ and four PCIe GPUs (Max 1000):

That was only part of the equation, however. What I wanted to know, and pushed Intel to let me get access to, was the hardware that this is actually running on. After many approvals, we got access to another part of a data center suite in Oregon that I had been to previously. We are only allowed to show a smaller fraction of this suite (one of many suites in the DC), but I grabbed this photo for some scale of the floor size. Check out the lights overhead as they extend well beyond the cage we are in, and it is a fence behind me, not even the corner of the floor.

What I can show is a few photos from my December 2022 visit to the facility from perhaps the opposite corner from the above photo. There, Intel has seemingly countless systems running large scale testing for things like reliability at scale for cloud providers.

At the time, it was exciting to see how many of these systems were running Sapphire Rapids that would not be launched for about a month after this visit.

Intel also has areas here that go far beyond what looks like standard servers. There are things like these development systems where Intel can cause voltage drops on different parts of the platform and more to test what would happen if a component failed, for example.

Hello to the Lantronix Spider we reviewed in the fun photo above.

For some sense of scale, this was just one aisle in the facility with systems set up for this type of testing.

The bottom line is that Intel has its own farm of development systems just down the road from its Jones Farm campus and Oregon fabs. Whether it is for the Intel Developer Cloud today, testing at scale for reliability for cloud customers, or doing platform development work, there is a lot here. Knowing that we are going to show off just a tiny portion of this suite in this single facility should give some sense of the Intel development scale.

With that, let us get to our tour.


SPAs and React: You Don’t Always Need Server-Side Rendering – The New Stack

As you may have noticed, the Start a New React Project section of the React docs no longer recommends using CRA (Create React App). Create React App used to be the go-to approach for building React applications that only required client-side routing and page rendering. Now, however, the React docs suggest picking one of the popular React-powered frameworks that support server-side rendering (SSR).

I've built applications with everything you'll see on that list of production-grade React frameworks, but I also spent many years building SPAs (Single Page Applications) that only needed client-side functionality, and everything was fine.

Whilst there are many applications that do need server-side rendering, there are also many that don't. By opting for an SSR React framework, you might be creating problems rather than solving them.

As the acronym suggests, an SPA only has a single page. An SPA might have navigation, but when you click from page to page, what you're experiencing are routes, not pages. When you navigate to a new route, React takes over and hydrates the page with HTML and (usually) data that has been sourced using a client-side HTTP request.

SSR applications are different. Server-side rendered applications actually do have pages. The data is sourced on the server, where the page is compiled, and then the final output is sent to the browser as a complete HTML webpage.

As noted, with SSR you need a server, and usually this will involve a cloud provider. If your SSR framework only really works with one cloud provider, you might end up experiencing vendor lock-in. Thankfully, frameworks like Remix and Astro are server agnostic, so you can either bring your own server or use an adapter to enable SSR with your cloud provider of choice.

One problem that crops up again and again is spinner-geddon: each time you navigate to a new page, you're presented with a spinner animation to indicate that data is being requested, and only after a successful HTTP request completes does the page become hydrated with content.
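For illustration, here is a minimal sketch of the pattern that produces that experience, as a hypothetical React component in TypeScript (the /api/products endpoint and component name are invented for the example):

```tsx
import { useEffect, useState } from "react";

// Classic SPA data fetching: render a spinner until the client-side
// HTTP request completes, then hydrate the route with content.
export function Products() {
  const [items, setItems] = useState<string[] | null>(null);

  useEffect(() => {
    fetch("/api/products") // hypothetical endpoint
      .then((res) => res.json())
      .then(setItems)
      .catch(() => setItems([]));
  }, []);

  if (items === null) return <p>Loading…</p>; // the dreaded spinner

  return (
    <ul>
      {items.map((name) => (
        <li key={name}>{name}</li>
      ))}
    </ul>
  );
}
```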

An SPA isn't great for SEO (Search Engine Optimization) either, because as far as Google is concerned, the page is blank. When Google crawls a webpage, it doesn't wait for HTTP requests to complete; it just looks at the content/HTML in the page, and if there's no HTML, then how can Google rank the page?

Because of this (and a number of other reasons), there has been a shift in React application development towards server-side rendering. But, whilst both of the above sound like considerable problems are they, really?

The classic developer response will likely be: It depends. And it really does! I'll now tell you a short story about an SPA I built a few years ago, so you can judge for yourselves.

Rewind the clock to 2018: I'd been hired by a tech consultancy company that had been brought in to perform a digital transformation for a large financial institution based in London.

My first project was to build a browser-based solution that would replace an antiquated piece of licensed software that was no longer fulfilling its duties, not to mention costing the company money. The application was for internal use only and would only ever have three users: Margaret, Celia and Evelyn, a delightful team of people who were nearing retirement age, but who played an important role in the firm.

The application I built took approximately eight weeks to complete, only used client-side HTTP requests to fetch data from an API, had authentication, was deployed using an existing Azure DevOps pipeline, and wasn't search engine optimized.

Margaret, Celia and Evelyn absolutely loved it, and they didn't mind the occasional spinner, since the app solved a problem for them. It also solved a problem for the firm: no more expensive software licensing. I have it on good authority that it's still in use today. I also happen to know that Margaret, Celia and Evelyn have all since retired, in case you were wondering.

I think SPAs are still relevant. There are many internal applications that will never see the outside world and won't need to use any of the features that come with the more modern React-powered SSR frameworks. But since the React docs are no longer recommending CRA, what else could you use if you were building an SPA today?

Vite can be used alongside React and steps in as a more modern replacement to Webpack (the module bundler used by CRA).

As the Vite docs put it, "Vite is a build tool that aims to provide a faster and leaner development experience for modern web projects."

I thought about turning this into a tutorial, but there's really no point.

The Vite docs cover everything you'll need to know in the Scaffolding Your First Vite Project section; choosing from the CLI prompts, you'll have a React app up and running in about 20 seconds.

You'll also see that Vite isn't only a great choice for building React applications; it's also suitable for use with other frameworks.

In short, bundling.

When developing an application, code is split up into smaller modules. This makes features easier to develop and allows common code to be shared among different parts of the application. But, at some point, all those modules need to be bundled together to form one giant JavaScript file. This giant JavaScript file is required by the browser to run the application.

Bundling occurs whenever a file is saved (which happens hundreds of thousands of times during the course of development). With tools like Webpack, bundles have to be torn down and rebuilt to reflect the changes. Only after this bundling step is complete will the browser refresh, which in turn allows developers to actually see their changes.

As an application grows, and more and more JavaScript is added, the bundler has more and more work to do. Over time, this bundling step starts to take longer and can really affect developer productivity. Vite addresses this by leveraging native ES Modules and HMR (Hot Module Replacement).

With Vite, when a file is saved, only the module that changed is updated in the bundle. This results in a much faster bundling step and a much more productive and pleasant development experience.
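For module authors, Vite exposes this behavior through its HMR API. Here is a small, hedged sketch of a self-accepting module (it assumes it runs under the Vite dev server, which is what provides import.meta.hot):

```typescript
/// <reference types="vite/client" />

// render() paints this module's bit of UI; exported so the module is self-contained.
export function render(): void {
  document.body.textContent = `Rendered at ${new Date().toISOString()}`;
}

render();

// import.meta.hot only exists when running under the Vite dev server.
if (import.meta.hot) {
  import.meta.hot.accept((updated) => {
    // Re-run the updated module's render instead of reloading the whole page.
    updated?.render();
  });
}
```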

There are a number of other benefits to using Vite, which have been clearly explained in the docs: Why Vite.

So there you have it: out with the old and in with the new. But the legacy of the React SPA can live on!

Naturally, there are many cases where an SPA isn't the most suitable choice. However, when it comes to SPA or SSR, it's not either-or; it's both-and.
