Category Archives: Cloud Servers
OVHcloud, the best dedicated server provider in the market – Startup.info
Managing multiple websites or large, traffic-heavy websites on a normal hosting plan can be a huge challenge as your business grows. It's also important to upgrade when you experience extensive downtime, because downtime significantly impacts your business. A dedicated server is a type of web hosting where your business gets its own server and doesn't compete with others for traffic.
This improved service comes at a higher price. Therefore, you should consider multiple factors before upgrading to a dedicated server to ensure it's the best option for your business. Indeed, getting resources committed entirely to your site eliminates downtime and bottlenecks.
While there are many dedicated server hosting providers in the market, OVHcloud has positioned itself as the best hosting provider. This article reviews what a dedicated server is all about and the cost of upgrading to it.
A dedicated server offers advanced features, so the price is much higher than other hosting plans. In fact, a dedicated server is the most expensive hosting method. However, it's certainly worth it because it gives you complete control and is highly customizable.
Other benefits include:
Dedicated Resources: The hosting gives you exclusive server resources such as disk space, CPU usage, bandwidth, and RAM. This eliminates risks such as network congestion, occasional crashes, frequent downtime, and more.
Enhanced Website Security: A dedicated server allows you to customize your security settings. You can limit admin access, set up new firewalls, use a preferred malware protection program, install an Intrusion Detection System (IDS), and much more.
Fast Load Speed: A dedicated server is much faster than a shared one. Your website will likely rank higher in search results when it's faster. Also, speed has a huge impact on your bottom line because it affects conversion rates.
OVHcloud offers hosting, servers, and cloud computing solutions. You can take advantage of its experience and expertise in bare metal servers and choose a dedicated server from its wide range of servers.
The following are the prices of its dedicated servers.
These are the most affordable servers and suitable for most applications.
These are versatile servers for SMEs.
These servers are designed for complex, high-resilience infrastructures.
These are the most efficient servers and are optimized for critical workloads.
You can save much more with OVHcloud dedicated servers. Contact them for more information.
SK Hynix sees server memory chip demand slowing in H2 as recession fears haunt customers – CNBC
South Korea's SK Hynix, the world's no. 2 memory chipmaker, warned demand is likely to slow in the second half of the year as customers brace for recession, after booking its biggest second-quarter profit since 2018.
SK Hynix executives warned during an earnings call that customers are cutting costs noticeably and reducing investment out of recession concerns, hitting server chip demand and corporate PC demand, in addition to already slowing consumer demand for smartphones and PCs.
Server chips had been the only remaining bright spot in memory chip demand that drove SK Hynix to report a 56% jump in operating profit to 4.2 trillion won ($3.2 billion) in the April-June quarter, with large data center firms such as Amazon meeting rising cloud demand.
"As a general trend, customers are holding more (memory chip) inventory for all applications" like PC, smartphone and servers, SK Hynix said. The firm's own inventory has gone up by about a week's worth of chip sales as of end-June compared with end-March.
Insight from key customers showed long-term demand for cloud services is still expected to expand, the company said. But short-term component shortages, macroeconomic uncertainty, and the hit to consumer-sector demand is turning server clients conservative in spending for the second half, SK Hynix said.
Given the uncertain environment, SK Hynix said it may decide on 2023 business plans as soon as late August and is looking at several scenarios, including a considerable reduction to its capital expenditure plans next year.
Although SK Hynix plans to continue with infrastructure investment such as securing land and utilities for future plants, it can reduce investment in chip equipment, it said.
Rising inflation, concerns about a downturn in major markets, and repeated Covid-19 lockdowns in China have resulted in slowing smartphone sales.
U.S. chipmaker Texas Instruments on Tuesday forecast sustained demand from industrial and automotive customers, but said it was seeing weaker demand "particularly from customers in personal electronics market."
A clutch of chipmakers including Micron Technology have warned of a rising chip glut after a two-year global shortage of chips.
In the company's second-quarter results, a strong dollar also offset higher material costs. SK Hynix's chip sales are booked in U.S. dollars, and the dollar hit a 20-year high in the period, boosting the value of its operating profit reported in Korean won by about 400 billion won, the company said.
Revenue climbed by a third on the year to a quarterly record 13.8 trillion won.
Meanwhile, parent SK Group said on Tuesday it plans to invest $15 billion in the semiconductor industry in the United States through research and development programs and the creation of an advanced packaging and testing facility.
SK Hynix is expected to carry out the $15 billion investment, the company said.
Digital Marketing Software Market to Surpass US$ 265.2 Billion by 2030, Says The Brainy Insights – GlobeNewswire
Newark, July 28, 2022 (GLOBE NEWSWIRE) -- As per the report published by The Brainy Insights, the global digital marketing software market is expected to grow from USD 62.6 billion in 2021 to USD 265.2 billion by 2030, at a CAGR of 17.4% during the forecast period 2022-2030.
Digital marketing is a digital approach for promoting goods, brands, and services using electronic media and the internet. Digital marketing software depends on various channels such as social media platforms, websites, instant messaging (IM), and mobile applications, which helps improve the business's engagement with the customer. The arrangement can be generated using software that allows creating landing pages, generating analytics and reports, and performing other promotional activities. Digital marketing is an excellent way to reach the target audience and build customer loyalty toward the brand. Rapid digitization has forced businesses to focus on expanding their consumer reach globally. Digital marketing software manufacturers use digital marketing strategies to analyze customers' behaviors and know about their preferences in real-time.
Request a Sample Copy of the Research Report: https://www.thebrainyinsights.com/enquiry/sample-request/12800
Competitive Strategy
To enhance their position in the global digital marketing software market, the key players are now focusing on adopting strategies such as product innovation, mergers & acquisitions, joint ventures, collaborations, and partnerships.
In December 2021, Rovio Entertainment Corporation announced that Angry Birds would be featured on Netflix.
Market Growth & Trends
The increasing internet penetration and rising digitalization drive the market's growth. The increase in sales of smartphones and surge in usage of social networking websites drives the market's growth. The outbreak of the Covid-19 pandemic also boosted the market's growth and triggered a change in the way people use different apps. However, data security and privacy concerns are expected to restrain the market's growth during the forecast period. Furthermore, as the cost of digital marketing services decreases, it provides access to mass audiences and is gaining popularity among small and medium enterprises.
Report Scope & Segmentation
Pre Book - Digital Marketing Software Market: https://www.thebrainyinsights.com/buy-now/12800/single
Key Findings
In 2021, the CRM software segment dominated the market with the largest market share of 39.3% and market revenue of 24.6 billion.
The solution segment is divided into CRM software, marketing automation, and social media. In 2021, the CRM software segment dominated the market with the largest market share of 39.3% and market revenue of 24.6 billion. CRM Software is widely being used by businesses to communicate efficiently with customers. The increasing demand for CRM software in enterprises drives the segment's growth.
In 2021, the cloud-based deployment segment accounted for the largest share of the market, with 61.3% and market revenue of 38.3 billion.
The deployment segment is divided into on-premise and cloud. In 2021, the cloud-based deployment segment accounted for the largest share of the market, with 61.3% and market revenue of 38.3 billion. Cloud-based deployment is used to combine virtual cloud servers with dedicated hosting infrastructure. The rising need for cloud-based deployment drives the growth of the segment.
In 2021, the large enterprise segment accounted for the largest share of the market, with 58% and market revenue of 36.3 billion.
The enterprise size segment is divided into large enterprises and small and medium enterprises. In 2021, the large enterprises segment accounted for the largest share of the market, with 58% and market revenue of 36.3 billion. Large enterprises with massive databases use digital marketing software to manage the data of the consumers efficiently. Large enterprises' increasing need for digital marketing software to efficiently manage email marketing, CRM, and content management drives the segment's growth.
Request for Customization: https://www.thebrainyinsights.com/enquiry/request-customization/12800
Regional Segment Analysis of the Digital Marketing Software Market
North America (U.S., Canada, Mexico)
Europe (Germany, France, U.K., Italy, Spain, Rest of Europe)
Asia-Pacific (China, Japan, India, Rest of APAC)
South America (Brazil and Rest of South America)
The Middle East and Africa (UAE, South Africa, Rest of MEA)
Among all regions, North America emerged as the largest market for the global digital marketing software market, with a market share of around 42.2% and 26.4 billion of the market revenue in 2021. The digital marketing software market in the North American region has been rapidly growing owing to the increasing demand for digital marketing software from the entertainment and media industry. Furthermore, the development in the e-commerce industry in the region also drives the market's growth.
Key players operating in the global digital marketing software market are:
Adobe Systems Inc.
Google Corporation
Hewlett Packard Enterprise Development LP
HubSpot, Inc.
IBM Corporation
Microsoft Corporation
Oracle Corporation
Salesforce Inc.
SAP SE
SAS Institute Inc.
This study forecasts revenue at global, regional, and country levels from 2019 to 2030. Brainy Insights has segmented the global digital marketing software market based on the below-mentioned segments:
Global Digital Marketing Software Market by Solution:
CRM Software
Marketing Automation
Social Media
Global Digital Marketing Software Market by Deployment Type:
On-Premise
Cloud
Global Digital Marketing Software Market by Enterprise Size:
Large Enterprise
Small and Medium Enterprise
Have Any Query? Ask Our Experts: https://www.thebrainyinsights.com/enquiry/speak-to-analyst/12800
About the report:
The global digital marketing software market is analyzed based on value (USD Billion). All segments have been analyzed on a worldwide, regional, and country basis. The study includes the analysis of more than 30 countries for each segment. The report analyzes driving factors, opportunities, restraints, and challenges for gaining critical insight into the market. The study includes Porter's five forces model, attractiveness analysis, raw material analysis, supply and demand analysis, competitor position grid analysis, and distribution and marketing channel analysis.
About The Brainy Insights:
The Brainy Insights is a market research company aimed at providing actionable insights through data analytics to companies to improve their business acumen. We have a robust forecasting and estimation model to meet clients' objectives of high-quality output within a short span of time. We provide both customized (client-specific) and syndicate reports. Our repository of syndicate reports is diverse across all categories and sub-categories across domains. Our customized solutions are tailored to meet clients' requirements, whether they are looking to expand or planning to launch a new product in the global market.
Contact Us
Avinash D
Head of Business Development
Phone: +1-315-215-1633
Email: sales@thebrainyinsights.com
Web: http://www.thebrainyinsights.com
How Server Location Affects The Latency In Web Hosting? – Startup.info
We often talk about the impact latency can have on any network connection, as well as on the user's experience with any web-based application.
But what exactly is latency, and why is it important to take into account when planning your application's infrastructure, including cloud deployments and load balancers?
Despite the incredible advancements in computer networking technology, high-speed connectivity, and the rise of cloud computing (which allows data centers to be located farther away than ever), latency is still a major issue.
In this article, we're going to discuss the effect a server's location has on latency, or delays, in web hosting. We will also discuss some tips that can be used to lessen the impact of this problem.
In the competitive digital landscape, you have to fight for a portion of Internet traffic. One of the key factors in your efforts to get there is visibility in search engines such as Google.
Today, your website speed directly influences the way Google rates your website. Of course, there are additional elements to consider, and it's an intricate mix of many factors, but speed is the best-known one. Latency is the most significant issue affecting it.
The most recent Google updates factor in the speed with which your website serves pages to mobile devices, which is directly associated with latency.
There is a widespread confusion between latency and bandwidth. Let's discuss these terms and the meaning they convey.
Bandwidth is the amount of data that can be transferred per second. A good way to think about it is as a highway.
A six-lane highway permits more vehicles to travel past an exact point in a second than a highway with four lanes. Similarly, a 1Gbps connection will transmit more data in a second than a 100Mbps connection.
Latency is the time it takes a piece of data to travel from its place of origin to its final destination. If we keep the road analogy, it is the amount of time needed to travel from one point to another.
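To make the distinction concrete, here is a minimal sketch (all figures are illustrative): the total time to fetch a file is the latency plus the time the data spends on the wire, which is where bandwidth matters.

```python
def transfer_time_ms(size_mb: float, bandwidth_mbps: float, latency_ms: float) -> float:
    """Total time = one-way latency + transmission time (size / bandwidth)."""
    transmission_ms = (size_mb * 8 / bandwidth_mbps) * 1000  # megabits over megabits/s
    return latency_ms + transmission_ms

# A 1 MB file over a 100 Mbps link with 50 ms latency:
print(transfer_time_ms(1, 100, 50))    # 130.0 ms (50 ms latency + 80 ms on the wire)

# Ten times the bandwidth shrinks transmission time, but the latency remains:
print(transfer_time_ms(1, 1000, 50))   # 58.0 ms
```

Notice that beyond a point, adding bandwidth barely helps: latency becomes the floor on response time.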
The choice of server affects latency in web hosting. Suppose you are choosing VDS server hosting; it is then very important to buy the right server in the right location, where your target audience is, to reduce the latency of your dedicated server.
Let's take an example: a dedicated server serves a webpage with 20 objects to load, and each object takes 150 ms to load. Fetched one after another, the complete webpage takes 3 seconds to load. So reducing latency on a dedicated server is important for loading webpages faster.
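The arithmetic above can be sketched as a small helper (a rough model; real browsers fetch several objects in parallel, which the optional parameter approximates):

```python
import math

def page_load_time_ms(num_objects: int, per_object_ms: float,
                      parallel_connections: int = 1) -> float:
    """Approximate load time when objects are fetched over a number of
    simultaneous connections (1 = strictly sequential)."""
    rounds = math.ceil(num_objects / parallel_connections)
    return rounds * per_object_ms

# 20 objects at 150 ms each, fetched one at a time:
print(page_load_time_ms(20, 150))      # 3000 ms = 3 seconds

# With 6 parallel connections (a common browser default):
print(page_load_time_ms(20, 150, 6))   # 600 ms
```

Either way, shaving latency off each object multiplies across the whole page.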
When a user types the address of your website into their web browser, their computer sends and retrieves information at close to the speed of light along a path composed of gateway nodes, or, more simply, hops.
The greater the distance between the client and the server that hosts the website, the higher the delay. Latency also depends on the performance of the network as well as the quality of routing devices.
Based on this, if you have the option of hosting your website on a server in Dubai or in the USA, which region should you select? You should select the location that is closest to your target audience.
Data is transmitted over the Internet through a network of cables. However fast electronic signals and pulses are, the greater the distance data must travel, the longer it takes to get there.
Say, for instance, you operate a website that targets users mostly in Asia. If the web hosting provider you're using has data centers only in the U.S., the information that makes up the site must travel across the globe to reach every visitor in Asia.
So, if your target audience is in California, USA, then you should go with a VPS in Los Angeles to ensure your users have the best experience with minimal latency.
In the same way, when you download a file over the internet, there are lots of things that can impact the speed of your download. One of them is the location of the server hosting your file.
The farther away the server is from you, the longer it takes for data to get to your personal computer.
When a user visits a website in their browser, it sends a request to the server hosting the site and receives a response at extremely high speed, close to the speed of light over an optical fiber network.
While light-speed is extremely quick, every hop adds an inherent processing delay as the router receives the data, analyzes it, and transmits it to the next node.
Furthermore, the number of hops, or intermediary devices, between your device and the server influences the latency, also known as the lag, of your connections. The fewer hops there are, the lower the latency.
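A simple way to compare candidate server locations yourself is to time a TCP handshake, which requires one full round trip. A minimal sketch (the hostnames in the comment are placeholders, not real servers):

```python
import socket
import time

def tcp_connect_latency_ms(host: str, port: int = 443, timeout: float = 5.0) -> float:
    """Time one TCP handshake to `host` in milliseconds, a rough proxy for
    the round-trip latency between you and a candidate server."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # we only care about how long the handshake took
    return (time.perf_counter() - start) * 1000

# Compare candidate hosting regions (placeholder hostnames):
# for host in ("us-host.example.com", "asia-host.example.com"):
#     print(host, round(tcp_connect_latency_ms(host), 1), "ms")
```

Running this against providers' test endpoints in different regions makes the distance effect described above directly visible.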
One of the most effective methods to prevent this problem is to take into account the location of your server before building your website. For example, if your target audience is in North America, you could go with a VPS in Mexico to minimize latency.
If you have an approximate idea of where your users will be from, you're good to go. We recommend hosting with a trustworthy provider that offers an array of server location options, and choosing the location closest to your target audience.
If you're hesitant to move to a new server for some reason, another option is to use a Content Delivery Network (CDN).
CDNs store copies of your website's content on a variety of servers across the globe to improve the speed with which your website delivers data to your customers.
Although not a substitute for hosting the actual website close to its users, CDNs are an option that can be helpful. One example of a great CDN you could use is Cloudflare.
Response time is also contingent on the database's optimization. When you initially set up your website, the database responds rapidly to requests.
As time goes by and the database grows, it accumulates information, resulting in massive quantities of stored data.
There are methods of optimizing databases to increase the speed of your website. If you're using WordPress, the initial step would be to spot slow queries using a query checker.
Once you spot the slow queries, focus on optimizing them. Restructure the data where needed, and make use of indexes or any other solution appropriate to the issue at hand.
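As an illustration of the index approach, here is a minimal SQLite sketch (table and column names are made up for the example); the query plan shows the full-table scan disappearing once an index exists on the filtered column.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY, status TEXT, title TEXT)")
conn.executemany(
    "INSERT INTO posts (status, title) VALUES (?, ?)",
    [("publish" if i % 10 else "draft", f"post {i}") for i in range(10_000)],
)

query = "SELECT * FROM posts WHERE status = 'draft'"

# Without an index, the filter has to scan the whole table:
plan_before = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()
print(plan_before)  # the plan mentions a SCAN of posts

# An index on the filtered column lets the database touch only matching rows:
conn.execute("CREATE INDEX idx_posts_status ON posts (status)")
plan_after = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()
print(plan_after)   # the plan now shows a SEARCH using the index
```

The same idea applies to MySQL under WordPress: find the column a slow query filters on and index it.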
The performance of your website is greatly influenced by the distance between your hosting facilities and the region you are attracting visitors from. Choose a web hosting service with servers in a data center close to your target audience.
AppDynamics shares the key to cloud-native success – Gulf Business
The rapid digital transformation over the past few years, arguably only accelerated by the Covid-19 pandemic, has led to the rapid adoption of cloud-native technologies such as microservices and Kubernetes in enterprises across the region. These modern application architectures offer huge benefits for organisations in terms of improved speed to innovation, greater flexibility and improved reliability.
But IT teams in organisations across the UAE, and the world for that matter, are now finding themselves under immense pressure as they attempt to monitor and manage availability and performance across hugely complex cloud-native application architectures. In particular, they're struggling to get visibility into applications and underlying infrastructure for large, managed Kubernetes environments running on public clouds.
There is no doubt that staying on top of availability and performance is a far greater challenge in a software-defined, cloud environment, where everything is constantly changing in real-time. But with digital transformation projects and innovation initiatives continuing to run at break-neck speed, the heat is on for technologists to adapt and get the visibility and insight they need across these modern environments.
An issue of scalability
Traditional approaches to application availability and performance were often based on physical infrastructure. A decade ago, for example, IT departments operated a fixed number of servers and network wires; they were dealing with constants and fixed dashboards for every layer of the IT stack. The advent of cloud computing added a new level of complexity, and organisations found themselves continually scaling their use of IT, up and down, based on real-time business needs. While monitoring solutions have adapted to accommodate rising deployments of cloud alongside traditional on-premise environments, the reality is that most were not designed to efficiently handle the dynamic and highly volatile cloud-native environments that we increasingly see today.
Therefore, the fundamental question is one of scale: these highly distributed systems rely on thousands of containers and spawn a massive volume of metrics, events, logs and traces (MELT) telemetry every second. And currently, most technologists simply don't have a way to cut through this crippling data volume and noise when troubleshooting application availability and performance problems caused by infrastructure-related issues that span hybrid environments.
The case for cloud-native observability
As such, it is essential for technologists to implement a cloud-native observability solution that provides observability into highly dynamic and complex cloud-native applications and the entire technology stack. For technologists to thoroughly understand how their applications are behaving and where issues might lie, they need visibility at the application level, into the supporting digital services (such as Kubernetes), and into the underlying infrastructure-as-code (IaC) services (such as compute, server, database and network) they leverage from their cloud providers. But before technologists rush to implement a solution to this growing challenge, there are some important factors to consider when thinking about observability into cloud environments.
For one, technologists should be looking to implement a purpose-built solution: one that can observe distributed and dynamic cloud-native applications. Traditional monitoring solutions continue to play a vital role and will do so for years to come, but it becomes problematic when cloud functionality is bolted onto existing monitoring and APM solutions. Too often, when new use cases are added to existing solutions, data remains disconnected and siloed, forcing users to jump from tab to tab to try to identify the root causes of performance issues. Very few of these solutions provide complete visibility (for example, insight into business metrics or security performance), and many are naturally biased towards a particular layer of the IT stack depending on their legacy, whether that is the application or core infrastructure.
A new approach for new teams
Cloud-native applications are built in completely different ways, and they're managed by new teams (Site Reliability Engineers (SREs), DevOps and CloudOps) that have new and different skill sets, mindsets and ways of working compared to other functions within IT. As such, they require a completely different kind of technology to track and analyse availability and performance data. They need a solution that is truly customised to the needs of the cloud-native technology stack, able to decipher the interactions of short-lived microservices that can be long gone by the time troubleshooting is done.
DevOps and SRE teams need a solution that embraces open standards (most notably OpenTelemetry), giving a full-stack, correlated view of all telemetry data across the technology stack. Technologists need to be able to collect all telemetry across the stack and domains, and then analyse all of that telemetry data at once, since it is interconnected and interdependent. A standards-driven solution is essential to future-proof organisations for the next decade and beyond.
Technologists also need a solution that allows them to monitor the health of key business transactions distributed across their technology landscape. If an issue is detected, they need to follow the thread of the business transaction's telemetry data so they can quickly determine the root cause, with fault domain isolation, and triage the issue to the correct teams for expedited resolution.
Finally, technologists should be looking for a solution that combines observability with advanced AIOps functionality. They need to leverage the power of AIOps and business intelligence to prioritise actions for their cloud environments. In the future, organisations will utilise AI-assisted issue detection and diagnosis with insights for faster troubleshooting. Ultimately, it allows technologists to focus more quickly on what really matters, where and why it happened.
Over the last two years we have seen a seismic evolution in applications, and technologists need to ensure that their monitoring capabilities keep pace. From understanding how highly-distributed cloud-native applications work and predicting incidents, to adopting new ways to gather vast amounts of MELT telemetry data, teams across ITOps, DevOps, CloudOps, and SREs need contextual insights that provide business context deep within the tech stack.
Only with the right cloud-native observability solution in place, will IT teams and their organisations be able to optimise the benefits of modern applications, driving enhanced digital experiences for customers and improved business outcomes.
Gregg Ostrowski is the executive CTO at Cisco AppDynamics
Watch: GB Talks: In conversation with Gregg Ostrowski, Executive CTO, Cisco AppDynamics
AWS Server Chip Becomes a Not-So-Secret Weapon Against Microsoft, Google – The Information
For the past decade, Amazon Web Services has maintained its edge over Microsoft and Google in selling cloud computing services by speeding up its technology and lowering prices. Over the next 10 years, a key advantage will be its Graviton microchips, which AWS developed in-house to power apps on the internet or to help customers train machine-learning models.
Six AWS customers told The Information that cloud servers using Graviton processors consume less power and can deliver higher speeds than servers made by incumbents Intel and AMD. The Amazon customers said they saved 10% to 40% on computing costs by renting Graviton servers. Twitter, Snap, Adobe and SAP are among the customers of Graviton servers, which became a multibillion-dollar revenue business only three years after it launched, according to a person with direct knowledge of the figures. Since Amazon in May debuted a more cost-efficient third generation of Graviton chips, rivals are feeling even more pressure to catch up.
The Promise and Peril of Cloud Computing – The Hudson Reporter
By Carl Mazzanti, president of eMazzanti Technologies in Hoboken
Small, medium, and large businesses are increasingly embracing cloud computing, which offers the ability to access computing services over the internet. The benefits of moving to the Cloud are significant. These range from reduced hardware expenses to ease of administration. In more detail, businesses moving to the Cloud can generally avoid upfront and ongoing costs of purchasing and maintaining certain assets including servers, storage, databases, networking, software, analytics, and intelligence since cloud providers set up and maintain the necessary hardware and software on data centers over the internet.
When considering moving to the Cloud, it is helpful to use an experienced Cloud services provider. Cloud providers can offer fast, reliable application updates with greater flexibility, and enable businesses to pay only for the cloud services they use, with the flexibility to add new features as needed.
Cloud providers can easily scale a client's computing power or software up or down as needed. This scaling ability was dramatically illustrated during the first months of the COVID-19 pandemic, when large gatherings were banned. The NFL was able to tap its cloud computing partner to rapidly scale up its resources, which meant the league could safely and efficiently conduct a virtual draft, with more than 100 live feeds running simultaneously over the following three days.
However, not all Cloud providers are equal. Business owners should trust but verify a potential or existing cloud provider. Why? Because, as Willie Sutton famously said when asked why he robbed banks during the Depression, "That's where all the money is." Cloud services today are just like those banks: the cloud is where all the data is. Even a well-meaning cloud provider may unintentionally serve as a honeypot for cybercriminals, who can crack a single digital safe and access reams of potentially valuable passwords, personally identifiable information, and other data.
In addition to the scalability and potentially reduced upfront capital costs, there are plenty of reasons to go with an experienced cloud provider. A cloud computing environment can offer improved reliability with efficient data backup, disaster recovery, and business continuity services; data will be mirrored (or copied) in multiple sites on the cloud providers network. And reputable cloud providers can offer robust policies, technology, and controls that help protect data, apps, and infrastructure from potential threats.
Business owners should be aware, however, that not all cloud providers are equal, and should engage in a trust but verify approach to vet a potential or existing cloud provider. This includes verifying a cloud provider's claims and ensuring the provider has the ability to meet the security and other needs of the business.
A good way to begin is to scour the provider's contract and confirm exactly what the provider is promising. Will they move your information into the cloud and secure it? Or will they just move your data? A contract that limits the guarantee to a data transfer is like hiring a moving company to transport your household goods, only to find them dumped on the lawn of your new house because the agreement did not state they would place them inside.
Another important step involves understanding who is verifying the provider's claims. For example, a company that performs services should not be the one that checks them; best practice is to have a qualified independent third party review the provider's cyber-practices.
It is also important to consider whether a provider's cloud architecture, standards, and services align with your business workload and management preferences, and whether a significant amount of re-coding or customization will be necessary to prepare your business's legacy workloads to mesh with the cloud provider's platforms.
Cloud providers will say they can safeguard your sensitive data, but that claim is only valid if their cyber-defenses are robust. One way to validate this is to have an ethical hacker test the provider's defenses, but a more realistic approach involves inquiring about the provider's network of secure data centers. A provider that maintains multiple regularly upgraded data centers will likely offer more benefits, including the latest generation of fast and efficient computing hardware, reduced network latency for applications, and greater economies of scale, than a provider that operates only a single corporate data center.
There is no question that cloud computing can offer significant benefits to businesses of all sizes, but selecting the right provider and successfully migrating your data may involve some time and work. Businesses that work with a trusted IT services consultant and prepare by gaining a thorough understanding of the issues involved can make the process smoother while ensuring that their data is efficiently migrated and safely maintained.
Read more here:
The Promise and Peril of Cloud Computing - The Hudson Reporter
Chinese TikTok owner increased U.S. lobbying spending by 130% this quarter – CNBC
TikTok's Chinese parent ByteDance had its biggest lobbying quarter ever, spending more than $2.1 million in the second quarter to lobby the U.S. government, according to its disclosure filed Wednesday in a federal database.
That represents a 130% increase from ByteDance's spending the previous quarter and marks the first time it's topped $2 million in a single quarter since it first registered lobbying disclosures in 2019. The company spent about $4.7 million on lobbying in all of 2021, according to the disclosures.
The company lobbied on a variety of issues. One piece of legislation it discussed was the American Innovation and Choice Online Act, the key antitrust bill that would prohibit dominant tech platforms from favoring their own offerings over those of rivals that rely on their services. It also lobbied on the two versions of a large funding bill aimed at boosting American competitiveness against China, a handful of online privacy bills, a defense spending bill and a bill to ban TikTok from Department of Homeland Security devices.
ByteDance engaged with both chambers of Congress during the quarter as well as executive agencies including the departments of Commerce, Defense, State and the Executive Office of the President, according to the filing.
The lobbying disclosures don't elaborate on what exactly ByteDance was pushing for, and both the parent company and TikTok did not immediately respond to CNBC's requests for comment.
TikTok's Chinese ownership has complicated its relationship with Washington, as many lawmakers are skeptical that it can keep U.S. user data secure and believe that Beijing could compel ByteDance to hand over information.
TikTok has said it does not store U.S. user data in China and that it would not hand over such information to the Chinese government. But lawmaker skepticism has persisted and was recently reignited by a BuzzFeed News report that found Chinese-based ByteDance employees were able to access nonpublic U.S. user data. A TikTok spokesperson told BuzzFeed at the time it continuously works to validate its security standards including through independent third-party tests.
Shortly before that article was published last month, TikTok released a blog post announcing that through its partnership with Oracle, it's "changed the default storage location of US user data" so that "100% of US user traffic is being routed to Oracle Cloud Infrastructure."
"We still use our U.S. and Singapore data centers for backup, but as we continue our work we expect to delete U.S. users' private data from our own data centers and fully pivot to Oracle cloud servers located in the U.S.," the company added.
Excerpt from:
Chinese TikTok owner increased U.S. lobbying spending by 130% this quarter - CNBC
Three Key Challenges That Impede True Multi-Cloud Success Featured – The Fast Mode
Successfully implementing a multi-cloud strategy means overcoming the complexity of integrating and managing disparate solutions and standards across multiple clouds.
No one will deny that enterprise embrace of the cloud has been swift and sure. Cloud offers flexibility and scalability, efficient collaboration, business continuity, and much more.
Today, organizations looking to create an even more dynamic network are transforming their cloud landscapes once again. A new enterprise strategy, the multi-cloud or hybrid cloud approach, is quickly gaining a foothold in organizations of all sizes.
The fact is that organizations are increasingly mixing it up and using multiple cloud computing and storage services in a single network architecture. According to a recent Foundry (formerly IDG) survey of 850 IT decision-makers, only 16% reported that their organizations relied on a single cloud provider for their public cloud deployments.
Whether an organization is using more than one public cloud to deliver business services to its users, combining public and private cloud, or co-mingling multiple clouds with on-premises solutions, the benefits of a multi-cloud strategy are clear. Enterprises gain maximum flexibility to choose providers and cloud environments that meet a variety of organizational and customer needs while, at the same time, lessening the chance of vendor lock-in and ensuring better business continuity.
Multi-cloud ecosystems are complex, so they come with a wide range of challenges. Some of the most complex ones are related to network architecture and what happens when an organization attempts to take advantage of disparate solutions and standards across multiple clouds. Consider the following:
Despite the challenges, forward-looking organizations with a digital-first approach continue to view investment in cloud transformation as a strategic enabler for their businesses. With this mindset, they are able to develop multi-cloud capabilities that actually move the needle.
In truth, most companies reaping the benefits of a multi-cloud or hybrid cloud strategy recognize that they cannot do it internally. They understand that putting in place a multi-cloud management platform not only smooths the way for an effective digital transformation but also makes sure everything operates effectively post-transformation.
The right platform does all the heavy lifting via operational tools that bring together every segment of the cloud and simplify essential areas like connectivity, network governance, production, analytics, automation, and more.
The future of cloud is multi-cloud. Setting your organization up for success now with the right infrastructure and cloud management platform puts you a step ahead of those still just thinking about it.
See the article here:
Three Key Challenges That Impede True Multi-Cloud Success Featured - The Fast Mode
Cato aims to bust cyber myths as it extends network protections – ComputerWeekly.com
As secure access service edge (SASE) specialist Cato Networks burnishes its cyber credentials with the addition of multiple features to its platform, the company's senior director of security strategy, Etay Maor, has urged users to challenge some of their preconceptions around security, using data drawn from Cato's global network to counter some established cyber truths.
In June 2022, Cato became the first SASE supplier to add network-based ransomware protection to its platform, combining heuristic algorithms that scan server message block (SMB) protocol flows for attributes such as file properties and network or user behaviours, with the deep insights it already has into its network traffic from its day-to-day operations.
The algorithms were trained and tested against the firm's existing data lake, drawn from the Cato SASE Cloud, which holds over a trillion flows from Cato-connected edges.
The firm claims this will let it spot and stop the spread of ransomware across an organisation's network by blocking SMB traffic to and from the source device to prevent lateral movement and file encryption.
Speaking to Computer Weekly, Maor, who joined Cato from IntSights and is also an adjunct professor at the Woods College of Advancing Studies at Boston College, described a Black Basta ransomware attack he responded to, in which the victim, an unnamed US organisation, could have benefited from this capability.
When he gained access to the victim's security logs, Maor found that all the indications of an incoming ransomware attack were there; the security operations centre (SOC) team had simply not been able to see them.
"I know it's cool to get to sit in front of six screens, but what SOC analysts are trying to do is gather so much information and put it all together, so I understand why stuff is missed," he said.
"In this case, it was remote desktop [RDP] to an Exchange server. 'Yes,' they said, 'but that Exchange server doesn't exist anymore, so why attack a server that's not there?' So I had to introduce them to ransomware as a service [RaaS].
"What happened was someone who attacked them sold their network data to someone else, who wrote a script to automate the attack. They weren't there for weeks, they were there for a minute; they didn't know the victim had changed their Exchange server, but got lucky somewhere else.
"So if you can see east-west traffic, like an attempt to connect to a server that isn't there, that should be a red flag to the SOC," he explained. "We created our heuristic algorithms to look for these quirks."
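The quirk Maor describes, east-west connection attempts to a host that no longer exists, is easy to sketch: check internal destinations against the current asset inventory. The addresses and inventory below are hypothetical.

```python
# Hypothetical sketch: flag east-west (internal) connection attempts to
# destinations missing from the current asset inventory, such as a
# decommissioned Exchange server. Addresses are illustrative.

KNOWN_ASSETS = {"10.0.0.5", "10.0.0.12"}  # current inventory


def flag_stale_targets(flows):
    """Return internal destinations that no longer exist in inventory.

    `flows` is an iterable of (source, destination) address pairs.
    """
    return sorted({dst for _src, dst in flows if dst not in KNOWN_ASSETS})


flows = [
    ("10.0.0.7", "10.0.0.5"),   # live server: fine
    ("10.0.0.7", "10.0.0.99"),  # decommissioned host: red flag
]
print(flag_stale_targets(flows))  # ['10.0.0.99']
```

Any non-empty result here is the kind of signal Maor argues a SOC should surface, since an automated attack script does not know the victim's inventory has changed.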
Maor said he wanted to explode the myth, favoured by presenters at security conferences, that attackers need to get lucky only once, while defenders need to get lucky all the time.
"When you look at MITRE ATT&CK and see how attackers operate, you soon see that saying is the opposite of the truth. Attackers have to be successful at phishing, gaining an endpoint, lateral movement, privilege escalation, downloading malware payloads, et cetera.
"You actually realise that attackers need to be right all the time, but defenders need to be right only at one point to protect, defend and mitigate," he said.
Cato is now going further still, adding a data loss prevention (DLP) engine to protect data across all enterprise applications without needing to implement complex and cumbersome DLP rules. It forms part of Cato's SSE 360 architecture and is designed to address what the firm describes as the limitations of traditional DLP solutions.
For example, legacy DLP may have inaccurate rules that block legitimate activities or, worse still, allow illegitimate ones, while a focus on public cloud applications leaves sensitive data in proprietary or unsanctioned applications exposed.
Added to that, investment in legacy DLP solutions does not help provide protection from other threat vectors.
Cato believes it has these problems licked by introducing scanning across the network for sensitive files and data that is defined by the customer. It is capable of identifying more than 350 distinct data types, and once identified, customer-defined rules will block, alert or allow the transaction.
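The block/alert/allow model described above can be sketched as pattern detectors feeding a customer-defined policy. The detectors, data-type names, and actions below are simplified assumptions for illustration, not Cato's engine or its 350-plus data types.

```python
# Illustrative sketch of customer-defined DLP rules: each detected data
# type maps to an action, and the strictest triggered action wins.
# Detectors and policy entries are hypothetical examples.

import re

# Simple pattern-based detectors for two of the many data types a DLP
# engine might recognise.
DETECTORS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

# Customer-defined policy: action per data type; anything else is allowed.
POLICY = {"credit_card": "block", "us_ssn": "alert"}

SEVERITY = {"allow": 0, "alert": 1, "block": 2}


def evaluate(payload: str) -> str:
    """Return the strictest action triggered by the payload."""
    action = "allow"
    for dtype, pattern in DETECTORS.items():
        if pattern.search(payload):
            candidate = POLICY.get(dtype, "allow")
            if SEVERITY[candidate] > SEVERITY[action]:
                action = candidate
    return action


print(evaluate("card 4111 1111 1111 1111"))  # block
print(evaluate("SSN 123-45-6789"))           # alert
print(evaluate("quarterly report text"))     # allow
```

Taking the strictest matching action means a payload containing several sensitive data types is handled by its most serious finding, which is the usual convention for layered DLP rules.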
Since joining Cato, Maor has been creating quarterly threat landscape reports using data drawn from the firm's global network, and the latest edition of this report also challenges established cyber thinking in many ways.
For example, after spending a few days immersed in the security community, one might reasonably expect most cyber attacks to originate from countries such as China or Russia, but Cato's data reveals this is far from the case.
In fact, during the first three months of 2022, the most malicious activity was initiated from within the US, followed by China, Germany, the UK and Japan. Note that this data relates to malware command and control (C2) communications, so it reveals which countries host the most C2 servers.
Maor said that understanding where attacks really originate should be a crucial part of a defender's visibility into threats and trends. Attackers know full well that many organisations will add countries such as China or Russia to their deny lists, or at the very least closely inspect traffic from those jurisdictions; therefore, he said, it makes perfect sense for them to base their C2 infrastructure in countries that organisations perceive as safer.
Cato's report also pulled data on the most-abused cloud applications: Microsoft, Google, RingCentral, AWS and Facebook, in that order, with Telegram, TikTok and YouTube also in vogue, likely as a result of the Russia-Ukraine war.
The report also showed the most targeted common vulnerabilities and exposures (CVEs). Predictably, Log4Shell was the runaway winner here, with more than 24 million exploit attempts seen in Cato's telemetry, but in second place was CVE-2009-2445, a 13-year-old vulnerability in Oracle iPlanet Web Server (formerly Sun Java System Web Server or Sun ONE Web Server) that lets an attacker read arbitrary JSP files via an alternate data stream syntax.
"With such old vulnerabilities, people are completely unaware of them," said Maor. "[It shows] the way defenders look at the network is completely different from how attackers do. Defenders will send me a PDF visual file of their servers, DMZ, cloud, et cetera, [but] attackers will say, 'Hey, you have a 14-year-old server, that's interesting.'"
Follow this link:
Cato aims to bust cyber myths as it extends network protections - ComputerWeekly.com