Category Archives: Cloud Servers
HFactory and OVHcloud leverage their partnership to advance Data Science & AI skilling – PR Web
"With HFactory leveraging the capabilities of OVHcloud AI Training, we are together in a position to offer a secure, fully integrated solution for the organisation of data challenges and AI hackathons."
PARIS (PRWEB) May 03, 2022
HFactory, the EdTech startup behind the first all-in-one platform for creating hands-on learning activities in Data Science and AI, has entered a product and marketing partnership with OVHcloud. As part of the alliance, HFactory will now be included in the OVHcloud marketplace, joining an ecosystem of sales partners offering the best Cloud solutions built on OVHcloud infrastructures.
HFactory was born out of the Hi! PARIS Research Center in Artificial Intelligence, where it was most recently used for the annual hackathon gathering students from Institut Polytechnique de Paris, HEC Paris and select partner institutions. The SaaS platform combines advanced features for organising such events - from the registration and formation of diverse groups to a built-in chat service - with seamless access to OVHcloud AI Training capabilities.
"Our deep integration with OVHcloud AI Training is central to our vision of delivering an end-to-end, perfectly integrated solution. HFactory provides the sort of instant, frictionless access to Cloud object storage and GPU computing resources that hackathon participants and AI students just love," details Ghislain Mazars, Founder & CEO at HFactory. "Users' feedback has been outstanding, and we are now moving on to build advanced new features around traceability through the OVHcloud bastion."
"We share common values around digital and technology sovereignty with the HFactory team, and have been deeply impressed by the ease of use and extensive functionalities of their solution," says Alexis Gendronneau, Head of Data Products at OVHcloud. "With HFactory leveraging the capabilities of OVHcloud AI Training, we are together in a position to offer a secure, fully integrated solution for the organisation of data challenges and AI hackathons. As expectations rise on that matter, we are especially pleased to contribute to European sovereignty in Data and AI education."
In that spirit, HFactory, already a member of the OVHcloud Startup Program, is also joining the Open Trusted Cloud initiative, which aims to unite companies willing to actively defend trusted solutions and see them evolve within the same ecosystem. As a result of the newly announced partnership, a license for HFactory can now be directly subscribed to on the OVHcloud marketplace at https://marketplace.ovhcloud.com/p/plateforme-challenges-data-ia.
About HFactory
HFactory helps educators create engaging active learning experiences in Data Science & AI. With its SaaS application to run data innovation challenges, machine learning courses and AI research projects, the company is the natural partner of higher education institutions and enterprise customers willing to step up their Data & AI training and pedagogy.
About OVHcloud
OVHcloud is a global player and Europe's leading cloud provider, operating over 400,000 servers within 33 data centres across four continents. For 22 years, the Group has relied on an integrated model that provides complete control of its value chain, from the design of its servers to the construction and management of its data centres, including the orchestration of its fiber-optic network. This unique approach allows it to independently cover all the uses of its 1.6 million customers in more than 130 countries. OVHcloud now offers its customers latest-generation solutions combining performance, price predictability and total sovereignty over their data to support their growth in complete freedom.
See the original post here:
HFactory and OVHcloud leverage their partnership to advance Data Science & AI skilling - PR Web
VPN services in India to store user-data for 5 years: All you need to know – The Indian Express
The Indian IT Ministry has ordered VPN companies to collect and store user data for a period of at least five years, as per a new report published last week. CERT-In, the Computer Emergency Response Team, has also asked data centers and crypto exchanges to collect and store user data for the same period to coordinate response activities and emergency measures related to cyber security in the country.
Failing to meet the Ministry of Electronics and IT's demands could lead to imprisonment of up to a year, as per the new governing law. Companies are also required to keep track of and maintain user records even after a user has cancelled his or her subscription to the service.
Many resort to VPN services in India to maintain a layer of privacy. VPNs, or virtual private networks, allow users to stay free of website trackers that can log data like a user's location. Paid VPN services, and even some good free ones, often offer a no-logging policy. This allows users to have full privacy, as the services themselves operate on RAM-only servers, preventing any storage of user data beyond a temporary window.
If the new rules are implemented, companies will be forced to switch to storage servers, which will allow them to log user data and store it for the set term of at least five years. Switching to storage servers will also mean higher costs for the companies.
For the end user, this translates to less privacy and, perhaps, higher costs. With data being logged, it would be possible to track your browsing and download history. Meanwhile, paid VPN services may increase the cost of subscription plans to cover the expense of the new storage servers they must now use.
The new rules are expected to come into effect 60 days after being issued, which means they could kick in from July 27, 2022.
CERT-In will reportedly require companies to report a total of twenty types of cyber incidents, including unauthorised access to social media accounts and IT systems, attacks on servers, and more. The full list of the twenty categories is below.
1. Targeted scanning/probing of critical networks/systems.
2. Compromise of critical systems/information.
3. Unauthorised access of IT systems/data.
4. Defacement of website or intrusion into a website and unauthorised changes such as inserting malicious code, links to external websites etc.
5. Malicious code attacks such as spreading of virus/worm/Trojan/Bots/Spyware/Ransomware/Cryptominers.
6. Attack on servers such as Database, Mail and DNS and network devices such as Routers.
7. Identity theft, spoofing and phishing attacks.
8. Denial of Service (DoS) and Distributed Denial of Service (DDoS) attacks.
9. Attacks on Critical infrastructure, SCADA and operational technology systems and Wireless networks.
10. Attacks on Application such as E-Governance, E-Commerce etc.
11. Data Breach.
12. Data Leak.
13. Attacks on Internet of Things (IoT) devices and associated systems, networks, software, servers.
14. Attacks or incident affecting Digital Payment systems.
15. Attacks through Malicious mobile Apps.
16. Fake mobile Apps.
17. Unauthorised access to social media accounts.
18. Attacks or malicious/ suspicious activities affecting Cloud computing systems/servers/software/applications.
19. Attacks or malicious/suspicious activities affecting systems/ servers/ networks/ software/ applications related to Big Data, Block chain, virtual assets, virtual asset exchanges, custodian wallets, Robotics, 3D and 4D Printing, additive manufacturing, Drones.
20. Attacks or malicious/ suspicious activities affecting systems/ servers/software/ applications related to Artificial Intelligence and Machine Learning.
See the rest here:
VPN services in India to store user-data for 5 years: All you need to know - The Indian Express
Top 10 Python Jobs Developers Should Apply for in FAANG Companies – Analytics Insight
Explore your enthusiasm for Python with these top 10 jobs at the biggest tech giants, the FAANG companies
In finance, FAANG is an acronym that refers to the stocks of five prominent American technology companies: Meta (FB) (formerly known as Facebook), Amazon (AMZN), Apple (AAPL), Netflix (NFLX) and Alphabet (GOOG) (formerly known as Google). Tech enthusiasts around the world, Python developers in particular, are drawn to jobs at FAANG companies, which are also known for their work environments and the quality of their teams. Here are the top 10 Python jobs at FAANG companies that you can apply for in 2022.
The roles include two positions each at Apple, Meta and Netflix, plus positions at Amazon and Amazon Fuse; each listing in the original article details responsibilities, requirements and a link to apply.
Read the original:
Top 10 Python Jobs Developers Should Apply for in FAANG Companies - Analytics Insight
Centerserv – International Cloud and Web Servers
CenterServ offers a wide range of managed web server systems that provide its customers with exceptional performance. Read More
CenterServ offers CDN companies in need of a global presence servers and connectivity everywhere, including the most remote areas of the world. Read More
While it seems cloud computing is everywhere and anywhere, the fact is that it is still only in the early stages of its advance into the core of the enterprise. Read More
While maintaining its reputation for stability and performance in the field of web server solutions, CenterServ works continuously to offer its clients the latest and most advanced technologies. Read More
The newest cloud computing company to come out of Silicon Valley has a very different approach to business solutions in the cloud. Read More
See the original post:
Centerserv - International Cloud and Web Servers
Computational storage and the new direction of computing – VentureBeat
The aggravation, the unexpected delays, the lost time, the high costs: commuting regularly ranks as the worst part of the day for people worldwide, and it is one of the big drivers of work-from-home policies.
Computers feel the same way. Computational storage is part of an emerging trend to make datacenters, edge servers, IoT devices, cars and other digitally-enhanced things more productive and more efficient by moving data less. In computational storage, a full-fledged computing system complete with DRAM, I/O, application processors, dedicated storage and system software gets squeezed into the confines of an SSD to manage repetitive, preliminary, and/or data-intensive tasks locally.
Why? Because moving data can soak up inordinate amounts of money, time, energy and compute resources. "For some applications like compression in the drive, hardware engines consuming less than a watt can achieve the same throughput as over 140 traditional server cores," said JB Baker, VP of marketing and product management at ScaleFlux. "That's 1,500 watts, and we can do the same work with a watt."
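A quick back-of-envelope calculation makes the quoted comparison concrete; the per-core wattage below is derived from the article's figures, not a measured value:

```python
# Sanity-checking the quoted figures: ~140 server cores drawing
# ~1,500 W in total versus a sub-1 W in-drive compression engine.
SERVER_CORES = 140          # cores needed for equivalent throughput
TOTAL_SERVER_WATTS = 1500   # power quoted for those cores
ENGINE_WATTS = 1            # in-drive hardware engine (upper bound)

watts_per_core = TOTAL_SERVER_WATTS / SERVER_CORES  # implied per-core draw
power_ratio = TOTAL_SERVER_WATTS / ENGINE_WATTS     # host vs. drive

print(f"~{watts_per_core:.1f} W per server core")       # ~10.7 W
print(f"~{power_ratio:.0f}x less power in the drive")   # ~1500x
```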
Unnecessary data circulation is also not good for the environment. A Google-sponsored study from 2018 found that 62.7% of computing energy is consumed by shuttling data between memory, storage and the CPU across a wide range of applications. Computational storage, thus, could cut emissions while improving performance.
And then there's the looming capacity problem. Cloud workloads and internet traffic grew by 10x and 16x respectively in the past decade and will likely grow at that rate or faster in the coming years as AI-enhanced medical imaging, autonomous robots and other data-heavy applications move from concept to commercial deployment.
Unfortunately, servers, rack space and operating budgets struggle to grow at that same exponential rate. For example, Amsterdam and other cities have applied strict limits on data center size, forcing cloud providers and their customers to figure out how to do more within the same footprint.
Consider a traditional two-socket server set-up with 16 drives. An ordinary server might contain 64 computing cores (two processors with 32 cores each). With computational storage, the same server could potentially have 136: 64 server cores plus 72 application accelerator cores tucked into its drives for preliminary tasks. Multiplied over the number of servers per rack, racks per datacenter, and datacenters per cloud empire, computational drives have the power to boost the potential ROI of millions of square feet of real estate.
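The server arithmetic above can be sketched directly; the 72 accelerator-core figure comes from the article, while the 40-server rack is a hypothetical assumption added purely for illustration:

```python
# Core budget of the two-socket, 16-drive server described above.
SOCKETS, CORES_PER_SOCKET = 2, 32
DRIVE_ACCELERATOR_CORES = 72   # application cores across the 16 drives (per article)

host_cores = SOCKETS * CORES_PER_SOCKET             # 64 host cores
total_cores = host_cores + DRIVE_ACCELERATOR_CORES  # 136 with computational drives

RACK_SERVERS = 40              # assumed servers per rack, for illustration only
print(f"per server: {host_cores} -> {total_cores} cores")
print(f"per rack:   {RACK_SERVERS * host_cores} -> {RACK_SERVERS * total_cores} cores")
```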
So if computational storage is so advantageous, how come it's not pervasive already? The reason is simple: a confluence of advancements, from hardware to software to standards, must come together to make a paradigm shift in processing commercially viable. These factors are all aligning now.
For example, computational storage drives have to fit within the same power and space constraints as regular SSDs and servers. That means the computational element can consume only two to three watts of the 8 watts allotted to a drive in a server.
While some early computational SSDs relied on FPGAs, companies such as NGD Systems and ScaleFlux are adopting system-on-chips (SoCs) built around Arm processors originally developed for smartphones. (An eight-core computational drive SoC might dedicate four cores to managing the drive and the remainder to applications.) SSDs typically already have quite a bit of DRAM: 1GB for every terabyte in a drive. In some cases, the computational unit can use this as a resource. Manufacturers can also add more DRAM.
Additionally, a computational storage drive can support a standard cloud-native software stack: Linux OSes and container environments such as Kubernetes and Docker. Databases and machine learning algorithms for image recognition and other applications may also be loaded into the drive.
Standards will also need to be finalized. The Storage Networking Industry Association (SNIA) last year released its 0.8 specification covering a broad range of issues such as security and configuration, with a full specification anticipated later this year.
Other innovations you should expect to see: more ML acceleration and specialized SoCs, faster interconnects, enhanced on-chip security, better software for analyzing data in real-time, and tools for merging data from distributed networks of drives.
Over time, we could also see the emergence of computational capabilities added to traditional rotating hard drives, still the workhorse of storage in the cloud.
Some early use cases will occur at the edge, with the computational drive acting in an edge-for-the-edge manner. Microsoft Research and NGD Systems, for instance, found that computational storage drives could dramatically increase the number of image queries performed by processing the data directly on the CSDs, one of the most discussed use cases, and that throughput grows linearly with more drives.
Bandwidth-constrained devices, often with low-latency requirements, such as airplanes or autonomous vehicles, are another prime target. Over 8,000 aircraft carrying more than 1.2 million people are in the air at any given time. Machine learning for predictive maintenance can be performed efficiently during the flight with computational storage, increasing safety and reducing turnaround time.
Cloud providers are also experimenting with computational cloud drives and will soon start to shift to commercial deployment. Besides helping offload tasks from more powerful application processors, computational drives could enhance security by running scans for malware and other threats locally.
Some might argue that the solution is obvious: reduce computing workloads! Companies collect far more data than they use anyway.
That approach, however, ignores one of the unfortunate truths about the digital world: we don't know what data we need until we already have it. The only realistic choice is devising ways to process the massive data onslaught coming our way efficiently. Computational drives will be a critical linchpin in letting us filter through the data without getting bogged down by the details. Insights generated from this data can unlock capabilities and use cases that can transform entire industries.
Mohamed Awad is vice president of IoT and embedded at Arm.
Read the original post:
Computational storage and the new direction of computing - VentureBeat
Nvidia, Marvell, AMD and Broadcom may benefit from strong cloud results – Seeking Alpha
Microsoft (MSFT) and Alphabet (GOOG) (GOOGL) this week both told investors that their cloud businesses continued to see strength in their most recent quarters. And with that trend expected to continue, several chip companies may stand to benefit, according to Bank of America analyst Vivek Arya.
In a new research report, Arya noted that spending on cloud computing has been "resilient" so far, despite global worries over a resurgence in COVID cases in China, a broader economic slowdown and rising inflation. As such, stocks like Nvidia (NASDAQ:NVDA), Marvell Technology (NASDAQ:MRVL), Advanced Micro Devices (NASDAQ:AMD) and Broadcom (NASDAQ:AVGO) could show similar strength when they report quarterly results.
"[W]e believe strong employment trends and needs for secure [and] high-speed hybrid work environments globally are boosting enterprise demand, while the tight supply situation provides strong pricing support to chip vendors," Arya wrote.
Amazon (AMZN) also posted strong cloud results this week, as its Amazon Web Services [AWS] revenue grew 37% year-over-year to $18.4 billion, up from $13.5 billion in the year-ago period.
Delving further, Arya noted that AMD (AMD) was selected by Meta Platforms (FB) late last year as a new CPU vendor for its servers.
Additionally, Meta (FB) selected Nvidia to build its AI Research SuperCluster using Nvidia's DGX A100 systems for a number of tasks, including training artificial intelligence models.
Arya has per-share price targets of $153 on AMD (AMD), $780 on Broadcom (AVGO), $100 on Marvell (MRVL) and $320 on Nvidia (NVDA), respectively.
In addition to the data points provided by both Microsoft's (MSFT) and Alphabet's (GOOG) (GOOGL) cloud results, Taiwan Semiconductor (TSM) saw high-performance computing wafer sales rise 26% sequentially, accounting for 41% of demand and surpassing smartphone chips for the first time. Texas Instruments (TXN) also saw strength in the enterprise, up 35% year-over-year, building on the same trends as Marvell and Broadcom, Arya explained.
With companies needing to boost infrastructure spending as employees increasingly opt for a hybrid work week, areas such as networking, Wi-Fi [and] Bluetooth and storage "could see elevated spending," the analyst explained.
Earlier this month, investment firm New Street Research upgraded Nvidia (NVDA), noting its attractive valuation and the likelihood of a strong outlook for its datacenter business.
Originally posted here:
Nvidia, Marvell, AMD and Broadcom may benefit from strong cloud results - Seeking Alpha
Spend On Future-proofing It Puts Pressure On Banks Opex | Mint – Mint
MUMBAI: The shift towards digitization, disruptive innovation, and new technologies has forced lenders to invest substantial amounts to upgrade their information technology infrastructure, leading to higher operating expenses, analysts said.
Although this will benefit them in the long term, high operating expense (opex) amid hardening yields is impacting their operating profit. "Lower treasury gains and elevated opex amid business normalization and high tech spends should keep pre-provisioning operating profit growth in check at 4% year-on-year," Emkay Global Financial Services said in a note on 8 April.
Banks and non-banks will continue to spend on upgrading IT infrastructure, at least for the near term, as traditional business models are undergoing a massive digital transition not only for retail banking and small businesses but also for large corporates, experts said, and added that it is only fair that they spend more on IT systems.
For instance, consumer durables lender Bajaj Finance has announced the roll-out of a super app and is building a new web platform which will go live by the year-end.
The initiatives are part of its efforts to widen its omni-channel distribution network to allow customers to switch seamlessly between physical and online stores to make payments, transfer funds, borrow, and invest across channels.
In the last seven to nine months, Bajaj Finance invested in domain talent and technology to develop a large digital web platform, it said. In the March quarter, the non-bank lender's ratio of operating expenses to net interest income (NII) was 34.6%.
"The company continues to invest in teams and technology for business transformation. Given the deep investments being committed to the omni-channel strategy (geo-expansion, app and web platform), the company expects opex to NII to remain elevated for FY23," it said on 26 April.
Private sector lender ICICI Bank, too, reported 17.4% year-on-year opex growth for the March quarter. While employee expenses increased by 21% year-on-year, the lender told analysts that non-employee expenses increased 15.6% over the year-ago period in Q4, primarily due to retail business and technology-related expenses. The bank's technology expenses were about 8.5% of operating expenses for FY22, ICICI Bank said on 23 April.
"As far as technology spends are concerned, it is not a constraint at all. What we have to always be vigilant about are two things: resilience of the technology and cyber risks. We spend a lot and are very focused on it; discussions happen at the highest level, and at the board level," Anup Bagchi, executive director, ICICI Bank, said at an event on 28 April.
For large private sector lenders such as HDFC Bank, Kotak Mahindra Bank and Axis Bank, technology spends are estimated to account for 7-9% of total expenses.
Much of this investment is going into artificial intelligence (AI) and machine learning (ML), moving to cloud servers, building new platforms, improving security, and changing the back-end architecture.
"Earlier, technology used to support business, but now, technology is the business," said Deepak Sharma, president and chief digital officer, Kotak Mahindra Bank, adding that there has been an increase in investment towards modernization of applications, cloud-based solutions, new platforms, security, automation, AI and ML.
State-run lenders have also realized the need to modernize to remain competitive, and are investing substantially in technology. India's largest lender, State Bank of India, is leading the pack in technological innovation with its Yono app and its plans to build a separate digital entity. On 6 April, public sector lender Union Bank of India said that it will invest ₹1,000 crore to upgrade its IT platforms this financial year, as it looks to source half of its business from digital channels by 2025 and save costs.
Excerpt from:
Spend On Future-proofing It Puts Pressure On Banks Opex | Mint - Mint
Cloudflare Names OVH and Hetzner as Origins of DDOS Attack – Search Engine Journal
Cloudflare published a report on a massive DDoS attack, naming several well-known cloud hosting data centers as origins of the attack. The attack appeared to follow a trend of attacks increasingly being launched from data centers instead of traditional residential botnets.
The attack was described as among the largest ever seen:
Earlier this month, Cloudflare's systems automatically detected and mitigated a 15.3 million request-per-second (rps) DDoS attack, one of the largest HTTPS DDoS attacks on record.
A Distributed Denial-of-Service (DDoS) attack is when thousands of Internet-connected devices make page requests at a rapid rate, which can leave the website server unable to process legitimate requests for web pages, a condition known as denial of service.
DDoS attacks generally come from what's referred to as botnets.
A botnet is a network of Internet-connected devices like routers, IoT devices, computers, websites and web hosting servers that are infected and put under control of hackers.
The Cloudflare report noted that DDoS attacks are increasingly coming from cloud-based data centers instead of residential ISP botnets. This represents a change in tactics.
According to the Cloudflare DDOS attack report:
What's interesting is that the attack mostly came from data centers. We're seeing a big move from residential network Internet Service Providers (ISPs) to cloud compute ISPs.
Cloudflare named several cloud-based data centers as origins of the attack, two of which are already well known in the publishing community as common sources of spam and unwanted bot visitors.
The two biggest sources of this DDoS attack, according to Cloudflare's data, were OVH and Hetzner.
Cloudflare offered these details:
the attack originated from over 1,300 different networks. The top networks included the German provider Hetzner Online GmbH (Autonomous System Number 24940), Azteca Comunicaciones Colombia (ASN 262186), OVH in France (ASN 16276), as well as other cloud providers.
In addition to being origins of DDOS attacks, OVH and Hetzner are known to be sources of spam-related attacks.
According to data from the SaaS spam-protection service CleanTalk, spam bots account for 10.97% of detected activity from IP addresses associated with OVH.
For Hetzner, of the 213,621 IP addresses that CleanTalk detected as traffic sources, 14,997 (7.02%) were associated with spam attacks.
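As a quick check, the Hetzner percentage follows directly from the two counts quoted above:

```python
# Reproducing the CleanTalk-derived figure for Hetzner:
# 14,997 of 213,621 detected IP addresses associated with spam attacks.
hetzner_spam_ips = 14_997
hetzner_total_ips = 213_621

share = 100 * hetzner_spam_ips / hetzner_total_ips
print(f"{share:.2f}% of detected Hetzner IPs linked to spam")  # 7.02%
```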
While DDoS and spam attacks are two different things, these statistics show how both cloud data centers are used for a variety of malicious activity, not just DDoS attacks.
A publisher over at the WebmasterWorld forum recently observed that they were experiencing more bot traffic from OVH than legitimate human traffic from known ISPs.
The WebmasterWorld member wrote in a forum post:
Over the past 24 months, the web server logs across a dozen websites I manage have a high percentage of traffic coming from the OVH data center.
This traffic is coming in via numerous IP addresses assigned to OVH. Since the volume of traffic is dramatically larger than the traffic coming from legitimate ISPs (AT&T, Verizon, Charter, Comcast, Shaw, etc.), I have the impression that the traffic from OVH is due to bots/scrapers hosted at the OVH data center cloud servers.
Unwanted bot traffic from OVH is such a common problem that when an OVH datacenter in France burned down, a WebmasterWorld member practically applauded the event by posting:
Looking on the bright side, our websites will have less bot traffic now.
The question that perhaps needs asking is: why is there so much rogue bot traffic originating from OVH and Hetzner?
This isn't something new, either. Webmaster and publisher complaints about bot traffic from OVH go back a long time.
Forum discussions on WebmasterWorld involving OVH go back as far as 2013, with publishers and webmasters complaining about rogue bot traffic from the provider.
In a WebmasterWorld forum discussion from 2015 titled "Botnet sources," one forum member posted:
RE: botnets, I'm more concerned with those who are false-clicking my advertisers (hosted, 3rd party & AdSense.)
However, I'm sure there is a significant crossover between both categories, so those linked Spamhaus articles are a good read, thanks. Small surprise that OVH leads the pack!
Given the long history of unwanted bot traffic from OVH and Hetzner, it's not entirely surprising to see them now cited by Cloudflare as origins of a DDoS attack.
It's well documented by SaaS spam-blocking services that OVH and Hetzner are sources of spam. Now we have documentation from Cloudflare that OVH and Hetzner cloud hosting services serve as origins of DDoS attacks.
Cloudflare identified the attacks as coming from a botnet on those cloud hosts. So that may mean that various servers were compromised.
Cloudflare blocks 15M rps HTTPS DDoS attack
Read the original here:
Cloudflare Names OVH and Hetzner as Origins of DDOS Attack - Search Engine Journal
The Coolest System And Cloud Platform Companies Of The 2022 Big Data 100 – CRN
Foundational Support For Big Data
Business analytics software, databases, and data management tools are critical for managing big data and leveraging it for competitive advantage. But all of those technologies need to run on foundational systems, including hardware servers, operating systems and cloud platforms. And most of those are provided by some of the biggest names in the IT industry.
As part of the CRN 2022 Big Data 100, we've put together the following list of big data system and cloud platform companies that solution providers should be familiar with.
Many of these companies are household names, like IBM, Dell Technologies and Hewlett Packard Enterprise, that develop the underlying hardware and software powering big data analytics and operational applications. In the cloud, where many businesses are deploying big data projects, cloud platform companies like Amazon Web Services and Google Cloud provide the platforms for those initiatives.
Long-established software giants like Microsoft and Oracle provide foundational cloud systems and databases for big data initiatives, in addition to offering their own broad portfolios of data management and data analysis software. Other vendors like Cloudera, Databricks and Snowflake represent a new generation of big data platform providers.
This week CRN is running the Big Data 100 list in a series of slide shows, organized by technology category, spotlighting vendors of business analytics software, database systems, data warehouse systems, data management and integration software, data science and machine learning tools, and big data systems and cloud platforms.
Some vendors market big data products that span multiple technology categories. They appear in the slideshow for the technology segment in which they are most prominent.
Read more:
The Coolest System And Cloud Platform Companies Of The 2022 Big Data 100 - CRN
Yo-Yo DDoS Cyber Attacks; What they Are and How You Can Beat Them – Geektime
Typically, DDoS (Distributed Denial of Service) attacks use massive floods of traffic over HTTP, DNS, TCP and other protocols to allow attackers to disrupt even the most well-defended networks or servers. But the Yo-Yo DDoS attack is an entirely different animal.
They are a far more inventive way to attack public cloud infrastructure resources. In today's cloud architecture, almost every resource can scale quickly, whether nodes, Kubernetes Pods, load balancers or something else, and scaling capacity in the public cloud is effectively unlimited. Attackers turn those cloud auto-scaling capabilities against you to hurt you financially, which could destroy small organizations with limited cloud budgets. This article will shed more light on these types of attacks to help you increase your cyber readiness.
Yo-Yo DDoS attacks can be tricky to identify because they are brief and don't necessarily result in denial-of-service (DoS) conditions. When carrying out a Yo-Yo attack, hackers flood their target with so much traffic that the platform automatically scales up load balancers, front-end services and other cloud resources. They then suddenly halt the traffic; once the autoscaler decides that traffic volume has decreased, it scales the now over-provisioned resources back down. The attacker turns the DDoS traffic on anew, and the cycle repeats, hence the name Yo-Yo attack.
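The cycle described above can be sketched in a few lines of code. This is a minimal, illustrative simulation only, not any cloud provider's actual autoscaler; the thresholds, instance capacity and traffic figures are assumptions chosen to make the oscillation visible.

```python
# Toy autoscaler: add an instance when load is high, remove one when idle.
# All numbers (per-instance capacity, 80%/20% thresholds) are illustrative
# assumptions, not real provider defaults.

def autoscale(traffic, capacity, per_instance=100):
    """Return the new instance count for one time step."""
    load = traffic / (capacity * per_instance)
    if load > 0.8:                       # overloaded -> scale up
        return capacity + 1
    if load < 0.2 and capacity > 1:      # idle -> scale down
        return capacity - 1
    return capacity

def simulate(cycles=3, burst=1000, quiet=10, phase_len=4):
    """Alternate burst and quiet phases, recording instance counts."""
    capacity, history = 1, []
    for _ in range(cycles):
        for _ in range(phase_len):       # attacker floods the target
            capacity = autoscale(burst, capacity)
            history.append(capacity)
        for _ in range(phase_len):       # attacker goes silent
            capacity = autoscale(quiet, capacity)
            history.append(capacity)
    return history

print(simulate())  # capacity ramps up to 5, drains to 1, and repeats
```

The sawtooth in the output is the "yo-yo": every extra instance in the ramp-up phase is billed, even though the legitimate load never changed.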
Constantly scaling up and down can be a financial drain on the application's owners, who must pay a lot of money to the hyperscalers. In some cases, this behaviour can be difficult or impossible to distinguish from legitimate requests. Unlike other forms of DDoS attack, Yo-Yos have no centralized source; they often originate from many different machines across the Internet.
You should control your cloud scaling behaviour by setting limits on every cloud resource you scale, to avoid runaway spending. If you don't set a maximum scaling limit, you could burn through a lot of compute resources and cloud-native services. Monitor your compute autoscaling groups and use anomaly detection to recognize unusual scaling patterns automatically. You can then create alerts for unusual scaling patterns and investigate your infrastructure's scaling and spending further.
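One simple anomaly-detection heuristic for the alerting described above is to count how often the scaling direction reverses within a monitoring window: legitimate growth mostly moves one way, while a Yo-Yo attack flaps up and down. This is a hedged sketch, the `max_reversals` threshold is an illustrative assumption you would tune against your own traffic, and the input is just a list of instance counts exported from whatever monitoring stack you use.

```python
# Detect the up/down "flapping" signature of a Yo-Yo attack from a series
# of instance counts. Threshold is an assumption, not a vendor default.

def direction_changes(counts):
    """Count how often the scaling direction reverses."""
    deltas = [b - a for a, b in zip(counts, counts[1:]) if b != a]
    return sum(1 for a, b in zip(deltas, deltas[1:]) if a * b < 0)

def looks_like_yoyo(counts, max_reversals=3):
    """Flag a window whose scaling direction flips suspiciously often."""
    return direction_changes(counts) > max_reversals

# Flapping pattern (three yo-yo cycles) vs. ordinary steady growth:
print(looks_like_yoyo([2, 3, 4, 5, 4, 3, 2, 1] * 3))  # True
print(looks_like_yoyo([1, 2, 3, 4, 5]))               # False
```

In practice you would feed this from your metrics pipeline and page on a positive result rather than print it, but the reversal count captures the yo-yo signature that a plain "high CPU" alarm misses.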
Although they're difficult to detect, Yo-Yo attacks can be mitigated by hiding traffic-scaling configuration. Attackers need to know how much scaling has taken place so they can stop the DDoS traffic and turn it back on once traffic returns to a predetermined average level. If the website or service owner can hide scaling information, it undermines any preparations attackers might have made before launching the attack.
To improve the security of your cloud against such attacks, it's worth exploring the hyperscalers' native security services, such as AWS Shield and Google Cloud Armor, which can help you mitigate complex attacks. Alternatively, you can pick third-party solutions from specialized security companies such as Cloudflare or Incapsula.
Another way to mitigate Yo-Yo DDoS attacks is to avoid the default values for downscaling and upscaling in the cloud service provider's load-balancing mechanism. Doing so also disrupts whatever timing plan attackers have made for when to stop sending extra junk traffic and when to start again.
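The effect of moving away from default scaling values can be seen in a small simulation: if the scale-in cooldown (the number of quiet intervals the autoscaler waits before removing capacity) is longer than the attacker's silent phase, the capacity stops yo-yoing. This is an illustrative sketch, not a real cloud API; all thresholds and traffic numbers are assumptions.

```python
# Compare a short (default-like) scale-in cooldown with a long one under
# a yo-yo traffic pattern. Numbers are illustrative assumptions.

def simulate(traffic, scale_in_cooldown):
    """Run the toy autoscaler over a traffic series; return capacity history."""
    capacity, quiet, history = 1, 0, []
    for t in traffic:
        if t > capacity * 240:           # overloaded -> scale up at once
            capacity += 1
            quiet = 0
        elif t < capacity * 60:          # idle -> scale in, but only
            quiet += 1                   # after the cooldown elapses
            if quiet >= scale_in_cooldown and capacity > 1:
                capacity -= 1
                quiet = 0
        else:                            # normal load breaks the quiet streak
            quiet = 0
        history.append(capacity)
    return history

attack = ([1000] * 4 + [10] * 4) * 3     # yo-yo pattern: burst, then silence
print(simulate(attack, scale_in_cooldown=1))  # short cooldown: capacity flaps
print(simulate(attack, scale_in_cooldown=6))  # long cooldown: capacity settles
```

The trade-off is visible too: with the long cooldown you keep paying for peak capacity through the quiet gaps, so the right value depends on how the cost of idle capacity compares with the cost of constant churn.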
The general tips for guarding against DDoS attacks include keeping everything on the system updated: fix all security issues and bugs, and have a plan for identifying such problems quickly. It's also important to emphasize that Yo-Yo DDoS attacks are a relatively recent development, and mitigation is generally available only within the best web security platforms. For example, the native security tools included in top-tier cloud platforms are usually not adequate for defeating these attacks.
Quick Takeaways to Defend Against Yo-Yo DDoS Cyber Attacks
DDoS and Yo-Yo DDoS attacks happen all the time, and they are getting more innovative and more frequent. In general, Yo-Yo DDoS attacks are meant to hurt companies and countries financially.
In the end, the best way to beat a Yo-Yo DDoS attack is to stay vigilant. You don't want to be the next victim of such an attack. To make sure that doesn't happen, use multiple layered defences against attack, keep your systems up to date, and stay on top of threats.
Written by Ido Vapner, CTO and Chief Architect at Kyndryl
More:
Yo-Yo DDoS Cyber Attacks; What they Are and How You Can Beat Them - Geektime