Category Archives: Cloud Servers

DDoS report reveals that the complexity and volume of attacks continues to grow – Continuity Central

Published: Wednesday, 12 February 2020 09:22

Link11 has released findings from its annual DDoS Report, which revealed a rising number of multivector and cloud computing attacks during 2019.

The latest Link11 DDoS report is based on data from repelled attacks on web pages and servers protected by Link11's Security Operations Center (LSOC).

Key findings from the annual report include:

The data showed that the frequency of DDoS attacks depends on the day of the week and time of the day, with most attacks concentrated around weekends and evenings. More attacks were registered on Saturdays, and between 4pm and midnight on weekdays.

A number of new amplification vectors were also registered by the LSOC last year, including WS-Discovery, Apple Remote Management Service and TCP amplification, with registered TCP amplification attacks doubling compared to the first six months of the year. The LSOC also saw an increase in carpet bombing attacks in the latter part of 2019; these involve a flood of individual attacks that simultaneously target an entire subnet or CIDR block with thousands of hosts. This increasingly popular method spreads manipulated data traffic across multiple attacks and IPs. The data volume of each is so small that it stays under the radar, yet the combined bandwidth has the capacity of a large DDoS attack.
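
To make concrete why carpet bombing evades per-host detection, here is a minimal Python sketch; the flow records, thresholds, and subnet size are illustrative assumptions, not Link11 figures.

```python
# Minimal sketch: per-IP thresholds miss carpet bombing, subnet aggregation does not.
# Flow records and thresholds below are made-up illustrations.
import ipaddress
from collections import defaultdict

PER_IP_THRESHOLD_MBPS = 100    # assumed per-host alerting threshold
SUBNET_THRESHOLD_MBPS = 1000   # assumed aggregate threshold for a /24

# 254 hosts in one /24, each receiving a trickle of 8 Mbps of attack traffic.
flows = [("203.0.113.%d" % i, 8.0) for i in range(1, 255)]

per_subnet = defaultdict(float)
for dst_ip, mbps in flows:
    if mbps >= PER_IP_THRESHOLD_MBPS:
        print("per-IP alert:", dst_ip)            # never fires: each flow is tiny
    subnet = ipaddress.ip_network(dst_ip + "/24", strict=False)
    per_subnet[subnet] += mbps

for subnet, total in per_subnet.items():
    if total >= SUBNET_THRESHOLD_MBPS:
        # Fires: ~2 Gbps in aggregate, the bandwidth of a sizable DDoS attack.
        print("subnet alert: %s at %.0f Mbps aggregate" % (subnet, total))
```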


Continued here:
DDoS report reveals that the complexity and volume of attacks continues to grow - Continuity Central

How To Fill Your Data Lakes And Not Lose Control Of The Data – Forbes

Data lakes are everywhere now that cloud services make it so easy to launch one. Secure cloud data lakes store all the data you need to become a data-driven enterprise. And data lakes break down the canonical data structures of enterprise data warehouses, enabling users to describe their data better, gain better insights and make better decisions.

Data lake users are data-driven. They demand historical, real-time and streaming data in huge quantities. They browse data catalogs, prefer text search, and use advanced analytics, machine learning (ML) and artificial intelligence (AI) to drive digital transformation into the business. But where exactly does all the data come from?

The complexity of compliance and governance

Filling data lakes is a complex process that must be done properly to avoid costly data preparation and compliance breakdowns. Data is collected from everywhere, and ingestion involves high volumes of data from IoT, social media, file servers, and structured and unstructured databases. Such large-scale data exchange poses significant data availability and data governance challenges.

Big data governance shares the same disciplines as traditional information governance, including data integration, metadata management, data privacy and data retention. But one important challenge is how to achieve centralized compliance and control over the vast amounts of data traversing multicloud networks of distributed data lakes.

And there is a sense of urgency. As digital transformation becomes a priority, data governance, data security and compliance must always be in place. Recently passed laws, specifically GDPR and CCPA, require robust data privacy controls, including the right to be forgotten. For many organizations, such compliance is a real challenge, even when it comes to answering the seemingly simple question, "Do you know where your data is?"

Federated Data Governance

One solution is a federated data governance model. Federated data governance solves the centralized versus decentralized dilemma. By establishing compliance controls at the point of data ingestion, information life cycle management (ILM) policies can be applied to classify and govern data throughout its life cycle. As high volumes of data move from databases and file servers and are transformed into cloud-based object storage, policy-driven compliance controls are needed like never before.
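
As an illustration of what "compliance controls at the point of ingestion" can look like, here is a minimal Python sketch; the policy names, retention periods, and toy PII rule are assumptions for illustration, not a reference implementation.

```python
# Minimal sketch of policy-driven classification applied at ingestion time.
# Policy names, retention periods, and the PII rule are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class IlmPolicy:
    classification: str
    retention_days: int
    encrypt_at_rest: bool

POLICIES = {
    "pii":    IlmPolicy("pii", 365 * 7, True),   # GDPR/CCPA-style handling (assumed)
    "public": IlmPolicy("public", 90, False),
}

def classify(record: dict) -> str:
    # Toy rule: any field that looks like an email address marks the record as PII.
    return "pii" if any("@" in str(v) for v in record.values()) else "public"

def ingest(record: dict) -> dict:
    # Attach governance metadata before the record reaches the lake, so every
    # downstream consumer inherits the policy instead of re-deciding it.
    return {"data": record, "ilm": POLICIES[classify(record)]}

print(ingest({"name": "Ada", "contact": "ada@example.com"}))
```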


As a best practice when setting up federated data governance, compliance policies and procedures should be standardized across the enterprise. Proper data governance involves business rules that are followed hard and fast. "Comply or explain" systems lead to distrust by audit authorities and require rigorous follow-up to ensure proper remedies are consistently applied. Once noncompliant data is released to the network, recall may not be possible.

Enterprise Data Lakes

An enterprise data lake is the centerpiece of the interconnected data fabric. Enterprise data lakes ingest data, prepare it for processing and provide a federated data governance framework to manage the data throughout its life cycle. Centralized, policy-driven data governance controls ensure compliant data is available for decentralized data lake operations.

Enterprise data lakes also speed up data ingestion. Centralized connections to import data from structured, semi-structured, unstructured and siloed S3 object stores simplify compliance control. Whether the data arrives as a simple "copy" or more complicated "move" function (for archiving), centralized ingestion enables data to be cataloged, labeled, transformed and governed with ILM and retention plans. As data is classified during ingestion, centralized security management and access control become possible as well.

The decision to move versus copy data is important. For many organizations, data growth is reaching crisis proportions. Response times suffer when datasets are too large. Batch processes may fail to complete in time, upending schedules. Downtime windows required for system upgrades may need to be extended. Storage costs rise, and disaster recovery processes become even more challenging. A move process purges data at the source, relieving performance pressure on production systems, whereas a copy process increases infrastructure requirements by doubling the amount of data to process.
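
The trade-off is easy to see in miniature. Below is a purely illustrative Python sketch of the two ingestion modes using local files; real lakes would use object-store APIs rather than shutil.

```python
# Minimal sketch contrasting "copy" and "move" ingestion, per the trade-off above.
import os
import shutil
import tempfile

def ingest_copy(src: str, dst: str) -> None:
    shutil.copy2(src, dst)   # source kept: total data footprint doubles

def ingest_move(src: str, dst: str) -> None:
    shutil.copy2(src, dst)   # stage into the lake first...
    os.remove(src)           # ...then purge the source, relieving production

# Tiny demo with a throwaway file.
with tempfile.TemporaryDirectory() as d:
    src = os.path.join(d, "export.csv")
    with open(src, "w") as f:
        f.write("id,value\n1,42\n")
    ingest_move(src, os.path.join(d, "lake_export.csv"))
    print("source still exists?", os.path.exists(src))   # False: purged at source
```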

Conclusion

So, as data lakes roll out within your organization, remember that filling them may be the hardest part. An enterprise data lake with a federated big data governance model establishes a more reliable system of centralized compliance and enables decentralized data lakes to flourish.

Original post:
How To Fill Your Data Lakes And Not Lose Control Of The Data - Forbes

The Biometric Threat by Jayati Ghosh – Project Syndicate

As with so many other convenient technologies, the world is underestimating the risks associated with biometric identification systems. India has learned about those risks the hard way and should serve as a cautionary tale to the governments and corporations seeking to expand the use of these technologies.

NEW DELHI - Around the world, governments are succumbing to the allure of biometric identification systems. To some extent, this may be inevitable, given the burden of demands and expectations placed on modern states. But no one should underestimate the risks these technologies pose.

Biometric identification systems use individuals' unique intrinsic physical characteristics (fingerprints or handprints, facial patterns, voices, irises, vein maps, or even brain waves) to verify their identity. Governments have applied the technology to verify passports and visas, identify and track security threats, and, more recently, to ensure that public benefits are correctly distributed.

Private companies, too, have embraced biometric identification systems. Smartphones use fingerprints and facial recognition to determine when to unlock. Rather than entering different passwords for different services (including financial services), users simply place their finger on a button on their phone or gaze into its camera lens.

It is certainly convenient. And, at first glance, it might seem more secure: someone might be able to find out your password, but how could they replicate your essential biological features?

But, as with so many other convenient technologies, we tend to underestimate the risks associated with biometric identification systems. India has learned about them the hard way, as it has expanded its scheme to issue residents a unique identification number, or Aadhaar, linked to their biometrics.

Originally, the Aadhaar program's primary goal was to manage government benefits and eliminate ghost beneficiaries of public subsidies. But it has now been expanded to many spheres: everything from opening a bank account to enrolling children in school to gaining admission to a hospital now requires an Aadhaar. More than 90% of India's population has enrolled in the program.


But serious vulnerabilities have emerged. Biometric verification may seem like the ultimate tech solution, but human error creates significant risks, especially when data-collection procedures are not adequately established or implemented. In India, the government wanted to enroll a lot of people quickly in the Aadhaar program, so data collection was outsourced to small service providers with mobile machines.

If a fingerprint or iris scan is even slightly tilted or otherwise wrongly positioned, it may not match future verification scans. Moreover, bodies can change over time (daily manual labor, for example, may alter fingerprints), creating discrepancies with the recorded data. And that does not even cover the most basic of mistakes, like misspelling names or addresses.
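
The mechanics of such mismatches are easy to see in miniature: biometric verification compares a similarity score against a threshold rather than testing exact equality, so a slightly shifted capture can fall below the bar. A toy Python sketch, with made-up features and threshold:

```python
# Toy illustration of why a slightly shifted scan can fail verification.
# Feature vectors and threshold are fabricated for illustration only.
def similarity(a, b):
    return sum(1 for x, y in zip(a, b) if x == y) / len(a)

enrolled = [1, 0, 1, 1, 0, 1, 0, 1]   # stored template (toy features)
scan     = [0, 1, 0, 1, 1, 0, 1, 0]   # same finger, captured shifted one position

THRESHOLD = 0.8
print(similarity(enrolled, scan))                # 0.125: far below the bar
print(similarity(enrolled, scan) >= THRESHOLD)   # False: a "biometric mismatch"
```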

Correcting such errors can be a complicated, drawn-out process. That is a serious problem when one's ability to collect benefits or carry out financial transactions depends on it. India has had multiple cases of lost entitlements, whether food rations or wages for public-works programs, as a result of biometric mismatches.

If honest mistakes can do that much harm, imagine the damage that can be caused by outright fraud. Police in Gujarat, India, recently found more than 1,100 casts of beneficiary fingerprints made on a silicone-like material, which were used for illicit withdrawals of food rations from the public distribution system. Because we leave fingerprints on everything we touch, we are all vulnerable to such replication.

And manual replication is just the tip of the iceberg. Researchers have created synthetic "MasterPrints" that enabled them to achieve a frighteningly high number of imposter matches.

Further risks arise during the transmission and storage of biometric data. Once collected, biometric data are usually moved to a central database for storage. They have to be encrypted while in transit, but the encryption can be, and has been, hacked. Nor are the data necessarily safe once they arrive in local, foreign, or cloud servers.
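
For context, encrypting data in transit is conceptually simple; the hard, breakable part is key management, which is why "encrypted" does not mean "safe". A minimal sketch using the Python cryptography package's Fernet recipe (the payload is a placeholder, and real systems would rely on TLS plus managed keys):

```python
# Minimal sketch of symmetric encryption for data in transit.
# Key handling is deliberately naive here; in practice it is the weak point.
from cryptography.fernet import Fernet

key = Fernet.generate_key()    # in reality, negotiated and stored securely
f = Fernet(key)

template = b"fingerprint-minutiae-bytes"    # placeholder biometric payload
ciphertext = f.encrypt(template)            # what actually travels over the wire
assert f.decrypt(ciphertext) == template    # anyone holding the key recovers it
```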

In India, one of the web systems used to record government employees' work attendance was left without a password, allowing anyone access to the names, job titles, and partial phone numbers of 166,000 workers. Three official Gujarat-based websites were found to be disclosing beneficiaries' Aadhaar numbers. And the Ministry of Rural Development accidentally exposed nearly 16 million Aadhaar numbers.

Moreover, an anonymous French security researcher accused two government websites of leaking thousands of IDs, including Aadhaar cards. That leak has now reportedly been plugged. But, given how many public and private agencies have access to the Aadhaar database, such episodes underscore how risky a supposedly secure system can be.

Of course, such vulnerabilities exist with all personal data. But exposure of someone's biometric information is far more dangerous than exposure of, say, a password or credit card number, because it cannot be undone. We cannot, after all, simply get new irises.

The risk is compounded by efforts to use collected biometric data for monitoring and surveillance, as is occurring in China and elsewhere. In this sense, the large-scale collection and storage of people's biometric data pose an unprecedented threat to privacy. And few countries have anything close to adequate laws to protect their residents.

In India, revelations of the Aadhaar program's weaknesses have largely been met with official denials, rather than serious efforts to protect users. Worse, other developing countries, such as Brazil, now risk replicating these mistakes as they rush to adopt biometric technology. And, given the large-scale data breaches that have occurred in the developed world, these countries' citizens are not safe, either.

Biometric identification systems are permeating every facet of our lives. Unless and until citizens and policymakers recognize and address the complex security risks they entail, no one should feel safe.

Follow this link:
The Biometric Threat by Jayati Ghosh - Project Syndicate

Throwing Down The Gauntlet To CPU Incumbents – The Next Platform

The server processor market has gotten a lot more crowded in the past several years, which is great for customers and has made things both better and tougher for those trying to compete with industry juggernaut Intel. And it looks like it is going to get a little more crowded still, with several startups joining the potential feeding frenzy on Intel's Xeon profits.

We will be looking at a bunch of these server CPU upstarts in detail, starting with Nuvia, which uncloaked itself from stealth mode last fall and has said precious little about what it can do to differentiate in the server space with its processor designs. But Jon Carvill, vice president of marketing with long experience in the tech industry, gave The Next Platform a little more insight into the company's aspirations as it prepares to break into the glass house.

Before we even get into who is behind Nuvia and what it might be up to, its very existence raises the obvious question: Why would anyone found a company in early 2019 that thinks there is room for another player in the server CPU market?

And this is a particularly intriguing question given the increasing competition from AMD and the Arm collective (led by Ampere and Marvell) and ongoing competition from IBM against Intel, which commands roughly 99 percent of server CPU shipments and probably close to 90 percent of server revenue share. We have watched more than three decades of consolidation in this industry, from north of three dozen different architectures and almost as many suppliers of operating systems to Intel dominating almost all of the shipments with its Xeons and almost all of the server CPU revenue, with Windows Server and Linux splitting most of the operating system installations and money.

Why now, indeed.

Or even more precisely, why haven't the hyperscalers, who own their own workloads (as distinct from the big public cloud providers, who run mostly Windows Server and Linux code on X86 servers on behalf of customers with zero interest in changing the applications, much less the processor instruction set), just thrown in the towel and created their own CPUs? It always comes down to economics, and specifically performance per watt and dollars per performance and the confluence of the two. And that is why the founders of Nuvia think they have a chance when others have tried and not precisely succeeded, even if they have not failed. To be sure, AMD is getting a second good run at Intel now with the Epyc processors, after a pretty good run with the Opterons more than a decade ago. But up until this point, Intel has done more damage to itself, with manufacturing delays, unaggressive roadmaps, and premium pricing, than AMD has done to it.

Clearly the co-founders of Nuvia see an opportunity, and they are seeing it from inside the hyperscalers. Gerard Williams, who is the company's president and chief executive officer, had a brief stint after college at Intel, designed the TMS470 microcontroller at Texas Instruments back in the mid-1990s, and was the lead CPU architect for the Cortex-A8 and Cortex-A15 designs that breathed new life into the Arm processor business and landed it inside smartphones and tablets. Williams went on to be a Fellow at Arm, and in 2010, when Apple no longer wanted to buy its chips from Samsung, it tapped Williams to be the CPU chief architect for a slew of Arm-based processors used in its iPhone and iPad devices: namely, the Cyclone A7, the Typhoon A8, the Twister A9, the Hurricane and Zephyr A10 variants, the Monsoon and Mistral A11 variants, and the Vortex and Tempest A12 variants. Williams was also the SoC chief architect for unreleased products, and that can have a bunch of interesting meanings.

The two other co-founders, Manu Gulati, vice president of SoC engineering at Nuvia, and John Bruno, vice president of system engineering, both most recently hail from hyperscaler and cloud builder Google. Gulati cut his CPU teeth back in the mid-1990s at AMD, doing CPU verification and designing the floating point unit for the K7 chip and the HyperTransport and northbridge chipset for the K8 chip. Gulati then jumped to SiByte, a designer of MIPS cores, in 2000, and before the year was out Broadcom acquired the company; he spent the next nine years there working on dual-core and quad-core SoCs. Gulati then moved to Apple and was the lead SoC architect for the company's A5X, A7, A9, A9X, A11, and A12 SoCs. (Not just the CPU cores that Williams focused on, but all the stuff that wraps around them.) Between 2017 and 2019, Gulati was chief SoC architect for the processors used in Google's various consumer products.

Bruno has a similar but slightly different resume: he landed as an ASIC designer at GPU maker ATI Technologies after college and, significantly, was the lead on the design of several of ATI's mobile GPUs prior to its acquisition by AMD in 2006, and on the Trinity Fusion APUs from AMD, which combine CPU and GPU compute on the same die. Bruno then did nearly six years at Apple as the system architect on the iPhone generations 5s through X and, like Gulati, moved to Google in 2017, in this case to be a system architect.

Both Gulati and Bruno left Google in March last year to join Williams as co-founders of Nuvia, which is not a skin product or a medicine, but a server CPU upstart. Carvill joined Nuvia last November soon after it uncloaked, and so did Jon Masters, formerly chief Arm software architect for Linux distributor Red Hat.

What do these people, looking out to the datacenter from their smartphones and tablets, see as not only an opportunity in servers, but as a chance to school server CPU architects on how to create a new architecture that leads in every metric that matters to datacenters: performance, energy efficiency, compute density, scalability, and total cost of ownership?

"This is a situation where Gerard, Manu, and John obviously had a pretty substantial role to play at a certain company in Cupertino in building a series of processors that were really designed to establish a step function improvement in performance, and also either a decrease in or, at a minimum, a consistent TDP," Carvill tells The Next Platform. "And that has essentially redefined the performance level that people expect out of mobile phones. And now you have a scenario where those phones are performing very close to, if not in some cases exceeding, what you get out of a client PC and they are within striking distance of a server. If you look at the servers, by contrast, a similar problem is beginning to manifest, especially at the hyperscalers: their datacenters have thermal envelopes that are becoming more and more constrained. They have not seen any meaningful improvement in IPC in CPU performance in some time. If you look at the last five years, they have largely had the same architectures. They have had incremental improvements in basic CPU performance. There have been some new workloads on the scene and there have been a lot of improvements in areas like AI and some other corner cases, for sure. But if you look at the core CPU, can you think of the last time you have seen a big meaningful difference or change in the datacenter?"

We have seen some big instructions per clock (IPC) jumps: think of the big jump with the initial Zen cores from AMD used in the Naples Epyc chips, or in the Armv8 cores designed by Arm Holdings moving from its Cosmos to Ares reference chips. Even IBM has relatively big jumps in IPC between Power generations, but it takes more than three years for a generation to come to market. And when these big IPC jumps do happen, they are often one-off jumps because the architectures had been lagging for years. Speaking very generally, instructions per clock has been stuck at somewhere around 5 percent, sometimes 10 percent, and rarely more per generation. But here's the kicker: as the IPC goes up, the clock speed goes down, because the core count is going up, and this is the only way to avoid increasing heat dissipation more than is already happening. Over the past decade, server CPUs have been getting hotter and hotter, and top-bin parts running full bore will soon be as searing as a GPU or FPGA accelerator.
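
A back-of-the-envelope model shows how these forces net out: per-socket throughput scales roughly as IPC times clock times cores, so a generation can gain at the socket while losing at the core. All numbers in this Python sketch are illustrative assumptions, not vendor figures.

```python
# Rough throughput model for the IPC/clock/core-count trade-off described above.
# All inputs are made-up illustrations, not measured CPU specs.
def relative_throughput(ipc_gain, clock_ghz, cores):
    return ipc_gain * clock_ghz * cores

old = relative_throughput(1.00, 3.0, 16)   # baseline generation (assumed)
new = relative_throughput(1.05, 2.7, 24)   # +5% IPC, lower clock, more cores

print("per-socket gain: %.0f%%" % (100 * (new / old - 1)))          # ~ +42%
print("per-core change: %.1f%%" % (100 * ((1.05 * 2.7) / 3.0 - 1)))  # ~ -5.5%
```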

We agree this is undesirable, but we're under the impression that it was also mostly unavoidable if you wanted to maintain absolute compatibility of processors from today back through a very long range of time, which the IT industry very clearly does want to do.

The trick with Nuvia is that it is not trying to build a server CPU for the entire industry, but rather one that is focused on the specific and thankfully more limited needs of the hyperscalers.

"This is a server-class CPU, with an SoC surrounding it, and it is designed to be the clear-cut winner in each of those categories and in totality," says Carvill, throwing down the gauntlet to all of the remaining CPU players, who each have their own ideas about how to take on Intel's hegemony. "And we are not talking about the incremental performance improvements that we have come to expect over the past five years. We are talking about really meaningful, significant, double-digit performance improvements over what anyone has seen before. It will be designed for the hyperscale world; we are not going after everybody. We are not going after the entire enterprise, we are starting with the hyperscalers, and we are doing that very deliberately because that's an area where you can take a lot of the legacy that you have had to support in the past and push that aside to some degree and design a processor for modern workloads from the ground up. What we are doing is custom, and we will not be using off-the-shelf, licensed cores. We are going to use an Arm ISA, but we are doing it as a clean sheet architecture from the ground up that is built for the hyperscaler world."

So that raises the question of what you can throw out and what you can add without breaking the licensing requirements to stick to the compatibility of the Arm ISA. We don't have an answer as to what this might be, but certainly this is precisely what Applied Micro (reborn as Ampere) was trying to do with its X-Gene Arm server chips and what Broadcom, then Cavium, and then Marvell were doing with the Vulcan ThunderX2 chips; others, like Qualcomm, would claim that they did the same thing. So we are very intrigued about what portion of the Arm ISA the hyperscalers need and what parts they can throw away, as well as any other interesting bits for acceleration that Nuvia might come up with. For the moment, the Nuvia team is not saying much about what it is, except that numerous hyperscalers are privy to what the company is doing and have given input from their workloads to help the architects come up with the design.

What is also obvious is that this is for hyperscalers, not cloud builders, at least in the initial implementation of the Nuvia chip. By definition, the raw infrastructure services of public clouds run mostly X86 code on either Linux or Windows Server operating systems, and this chip certainly won't support Windows Server externally on any public cloud, although there is always a chance that Microsoft will run Nuvia Arm chips on internal workloads in its Azure cloud. Microsoft has made no secret of its desire to have half of its Azure compute capacity running on Arm architecture chips, and all of the other hyperscalers, notably Google and Facebook, are presumably watching closely. So is Apple, which is not quite a hyperscaler but is probably interested in what the Nuvia team is up to, since it probably has millions of its own servers and would no doubt love to have a single architecture spanning its entire Apple stack if that could happen. We could even see Apple get back into the server business with Nuvia chips at the end of this adventure, which would be interesting indeed, though perhaps only for its own internal consumption, working with a bunch of ODMs or perhaps through the Open Compute Project.

"John and Manu were the founders who really had the initial idea because they were working at Google with a lot of the internal teams, looking at the limitations and challenges in their datacenter architecture and infrastructure, and they thought they could build something a lot better for what Google needs to scale this thing forward. But they needed a CPU architect who came with the pedigree and legacy to be able to go build something custom that had been successful at scale. And that's when they got Gerard."

The point is this: Google, Apple, and Facebook do not have to design a hyperscale-class CPU because they can get Nuvia to do it and spread the cost across Silicon Valley venture capitalists instead of spending their own dough.

There is a precedent for this kind of tight focus on hyperscalers, and it comes from none other than Broadcom, whose Trident Ethernet switch ASICs were aimed at the enterprise and frankly did not have as good a cost, thermal, and performance profile as the hyperscalers (in this case Google and Microsoft) wanted. And so they worked with Broadcom and Mellanox Technologies to cook up the 25G Ethernet standard, whether or not the IEEE standards committee would endorse it. In the case of Broadcom, the company rolled out the Tomahawk line, with trimmed-down Ethernet protocols and more routing functions as well as better bang for the buck and better thermals per port. Innovium, another upstart switch ASIC maker, just went straight to making an Ethernet switch ASIC aimed at the hyperscalers.

There are not a lot of details about what Nuvia will do beyond the broad plans outlined above.

All of this work is being supported by an initial $53 million Series A investment from Capricorn Investment Group, Dell Technologies Capital, Mayfield, WRVI Capital, and Nepenthe.

As soon as we learn more, we will tell you.

Original post:
Throwing Down The Gauntlet To CPU Incumbents - The Next Platform

China retreats online to weather coronavirus storm – The Jakarta Post – Jakarta Post

Virus-phobia has sent hundreds of millions of Chinese flocking to online working options, with schools, businesses, government departments, medical facilities, even museums and zoos, wrapping themselves in the digital cloud for protection.

China remains in crisis mode weeks after the epidemic exploded, with much of the country shut down and the government pushing work-from-home policies to prevent people gathering together.

That has been a boon for telecommuting platforms developed by Chinese tech giants such as Alibaba, Tencent and Huawei, which have suddenly leapt to the ranks of China's most-downloaded apps, leaving them scrambling to cope with the increased demand.

Tencent said its office collaboration app WeChat Work has seen a year-on-year tenfold increase in service volume since February 10, when much of the country officially came back from a virus-extended Lunar New Year holiday.

Alibaba's DingTalk has observed the highest traffic in its five-year existence, company officials told state media, with around 200 million people using it to work from home.

Huawei said its WeLink platform is experiencing a fiftyfold increase, with more than one million new daily users coming on board.

Eric Yang, chief executive of Shanghai-based iTutorGroup, which operates a range of online courses, said his company's business has surged 215 percent.

"We just helped an art education school open online painting classes, and are also helping another music school to open virtual classes," Yang said.

"More kids in third- and fourth-tier cities are increasingly taking our online courses because of the outbreak. In the past, most users came from first-tier cities [such as Beijing and Shanghai]."

The online migration received an implicit endorsement from President Xi Jinping, who on Monday was shown on the nightly state television news broadcast, watched by tens of millions, giving a pep talk to medical staff in the contagion epicenter city of Wuhan via Huawei WeLink.

The virus, which has killed more than 1,100 people and infected nearly 45,000, has shuttered factories across the country and is forecast to cut Chinese economic growth.

But China's highly developed online sector and population of more than 850 million mobile internet consumers may soften the blow.

The similar Severe Acute Respiratory Syndrome (SARS) outbreak of 2003 is widely credited with helping to kickstart e-commerce development in China, and the coronavirus is also expected to "further the long-term structural shift" to an online economy, said S&P Global Ratings.

Hospitals, overwhelmed by people seeking a virus test at the first sign of sniffles, have pivoted to online telemedicine to help sort through the patients, with tens of millions of consultations taking place, state media said.

Countless museums and cultural sites have been closed, but many, including Beijing's Forbidden City and the terracotta warriors in Xi'an, have put exhibits online or created new virtual tours, and animal lovers can watch the Beijing Zoo's pandas on social media.

Even China's foreign ministry briefing, the government's primary daily interface with the outside world, has been converted into an online Q&A.

With schools nationwide shut until March, online learning has received a particular jolt. Institutions are scrambling to comply with an Education Ministry order to "stop classes, but don't stop learning."

Grace Wu, whose nine-year-old daughter Charlotte attends the now-shuttered Shanghai American School, had faced the prospect of a lengthy learning break with the family "self-quarantining" at home.

"It's like kind of a double worry. We worry first about the virus... the second worry is about learning," Wu said.

But the school last week re-launched lessons online until normality returns.

Charlotte and her classmates have embraced the situation, even organizing a virtual birthday party on video-conferencing platform Zoom.

"It's a birthday party in the cloud," said Wu, a 37-year-old blogger.

Alibaba said that as of Monday, schools in more than 300 cities across 30 provinces were utilizing a classroom function, with participating students totaling 50 million.

It has not all been smooth.

Users across the country complained last week that major Chinese platforms were glitch-prone or crashed frequently due to heavy traffic, sending providers scrambling to shore up their networks.

Alibaba told state media it had installed more than 10,000 new cloud servers in response.

Some providers were creating new features such as allowing users to blur their backgrounds to avoid looking "unprofessional" by logging in from their living rooms.

Chinese already are deeply connected to their mobile phones, going online to shop, order meals, find partners, pay bills and express themselves.

Wang Guanxin, an instructor with iTutorGroup, said this would only grow as a result of the virus.

Speaking after a video-conference training session he gave to a wall-length bank of 36 Chinese-language instructors on screen at the company's Shanghai offices, including one woman who lay in bed in red pajamas, Wang said the virus was a "turning point" for his industry.

"Objectively speaking, it will allow people who didn't really trust or rely on online learning to change their views," he said.

Read this article:
China retreats online to weather coronavirus storm - The Jakarta Post - Jakarta Post

Global IT Security Market Size, Share, Growth Rate and Gross Margin, Industry Chain Analysis, Development Trends & Industry Forecast Report 2025 -…

The Global IT Security Market Research Report 2020 provides in-depth analysis of the industry along with important statistics and facts. With the help of this information, investors can plan their business strategies.

The report covers the global IT Security market status, future forecast, growth opportunities, key markets and key players. The study objectives are to present the IT Security development in the United States, Europe and China.

IT security is the practice of preventing unauthorized access, use, disclosure, disruption, modification, inspection, recording or destruction of information. To standardize this discipline, academics and professionals collaborate and seek to set basic guidance, policies, and industry standards on password, antivirus software, firewall, encryption software, legal liability and user/administrator training standards. This standardization may be further driven by a wide variety of laws and regulations that affect how data is accessed, processed, stored, and transferred.

The increasing use of mobile devices and cloud servers to store sensitive data and the subsequent rise in technologically sophisticated cyber criminals threatening to steal that data have accelerated growth in the IT Security Consulting industry. This industry offers managed IT security services, such as firewalls, intrusion prevention, security threat analysis, proactive security vulnerability and penetration testing and incident preparation and response, which includes IT forensics.

In 2018, the global IT Security market size was xx million US$ and it is expected to reach xx million US$ by the end of 2025, with a CAGR of xx% during 2019-2025.
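
For readers unpacking such forecasts, CAGR is defined as follows; the worked numbers below are illustrative only, since the report's actual figures are withheld.

```latex
% Compound annual growth rate: V_start and V_end are the market sizes at the
% endpoints of an n-year window. Worked numbers below are illustrative only.
\[
\mathrm{CAGR} = \left( \frac{V_{\text{end}}}{V_{\text{start}}} \right)^{1/n} - 1
\]
% Example: growth from $100M (2019) to $150M (2025), so n = 6 years, gives
% (150/100)^{1/6} - 1 \approx 7.0\% per year.
```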

Request a sample of this report @ https://www.orbisresearch.com/contacts/request-sample/3221137

The key players covered in this study

Blue Coat

Cisco

IBM

Intel Security

Symantec

Alert Logic

Barracuda Networks

BT Global Services

CA Technologies

CenturyLink

CGI Group

CheckPoint Software Technologies

CipherCloud

Computer Sciences

CYREN

FishNet Security

Fortinet

HP

Microsoft

NTT Com Security

Panda Security

Proofpoint

Radware

Trend Micro

Trustwave

Zscaler

Market segment by Type, the product can be split into

Internet security

Endpoint security

Wireless security

Network security

Cloud security

Market segment by Application, split into

Commercial

Industrial

Military and Defense

For enquiries before buying this report @ https://www.orbisresearch.com/contacts/enquiry-before-buying/3221137

Market segment by Regions/Countries, this report covers: United States, Europe, China, Japan, Southeast Asia, India, Central & South America.

The study objectives of this report are: to analyze the global IT Security status, future forecast, growth opportunities, key markets and key players; to present the IT Security development in the United States, Europe and China; to strategically profile the key players and comprehensively analyze their development plans and strategies; and to define, describe and forecast the market by product type, market and key regions.

About Us: Orbis Research (orbisresearch.com) is a single point of aid for all your market research requirements. We have a vast database of reports from leading publishers and authors across the globe. We specialize in delivering customized reports as per the requirements of our clients. We have complete information about our publishers and hence are sure about the accuracy of the industries and verticals of their specialization. This helps our clients map their needs, and we produce the required market research study for them.

Contact Us:
Hector Costello
Senior Manager, Client Engagements
4144N Central Expressway, Suite 600, Dallas, Texas 75204, U.S.A.
Phone No.: +1 (214) 884-6817; +91 2064101019
Email ID: [emailprotected]

Original post:
Global IT Security Market Size, Share, Growth Rate and Gross Margin, Industry Chain Analysis, Development Trends & Industry Forecast Report 2025 -...

X-Force Threat Intelligence Index Reveals Top Cybersecurity Risks of 2020 – Security Intelligence

The volume of threats that security teams see on a daily basis can make it especially difficult to look at the big picture when it comes to developing an effective cybersecurity strategy. To see through the flood of data and alerts, organizations depend on actionable threat intelligence to help them understand and mitigate risks. Looking at long-term trends can also help organizations make effective decisions for allocating resources to prevent costly breaches, ransomware and destructive attacks.

IBM's annual X-Force Threat Intelligence Index presents an overview of the threat landscape and cybersecurity risk trends of the past year, based on IBM X-Force analysis of data from hundreds of millions of IBM Security-protected endpoints and servers, spam sensors, IBM Security managed services, red team engagements, and incident response engagements.

IBM X-Force research teams came together to look at the trends that shaped the information security landscape in 2019, following the data to highlight the most prominent trends that can help organizations better assess risk factors, understand relevant threats and bolster their security strategy in 2020 and beyond.

Among the findings in this year's X-Force Threat Intelligence Index, a few stand out: the most common attack vectors, the evolution of ransomware and malware, and the risks posed by accidental breaches caused by factors such as misconfigurations, inadvertent insiders, and old, continually exploited software vulnerabilities. New data from 2019 also showed a trend toward attacks on operational technology (OT), posing threats to industries such as energy and manufacturing. Finally, this year's report provides geographic insights to show how threats vary by country or region.

With access to billions of compromised records over the past decade, rampant credential reuse and an ever-growing number of unpatched vulnerabilities to prey on, attackers took the path of least resistance through a number of ways to gain access and compromise organizations security.

According to data in this years report, initial infection vectors used by attackers were fairly evenly divided between phishing attacks, unauthorized use of credentials and exploitation of vulnerabilities. Out of the top attack vectors in 2019, 31 percent of attacks relied on phishing (down from about half of attacks in 2018). The share of attacks using stolen credentials in 2019 was close behind at 29 percent. Meanwhile, attacks on known vulnerabilities increased significantly as a share of the top attack vectors, up to 30 percent in 2019 versus 8 percent in 2018.

Ransomware attacks have been an increasing issue in the past five years, and in 2019, this threat evolved into an all-out digital hostage crisis. When companies are not paying millions for a decryption key, they may see their data destroyed or published on the internet, or they may even become the victims of a destructive attack as retaliation for not paying criminals.

Our data shows a considerable rise in ransomware incidents in 2019, almost doubling between the second half of 2018 (10 percent) and the first half of 2019 (19 percent). Ransomware affected companies in a large variety of industries, in both the public and private sectors, and in 12 countries across the globe. Top targets for these attacks were retail, manufacturing and transportation, sectors where downtime is detrimental to operations, which adds to the pressure to pay. Another potential reason could be the ease of exploitation of legacy systems and lax security programs in some sectors.

Healthcare organizations also faced the wrath of ransomware in 2019, and with attacks on the industry affecting a large number of facilities, the threat to human lives compelled organizations to pay to regain operational capabilities.

One of the biggest drivers of ransomware becoming a prolific threat to organizations in 2019 was the move of organized cybercrime gangs from the banking Trojan realm into the enterprise attack arena. Banking Trojan operators are already known to be professional, sophisticated attackers who operate as a business. These capabilities, combined with access to already-compromised networks and an ability to spread to pivotal assets, have given ransomware like Ryuk, DoppelPaymer, LockerGoga, Sodinokibi and MegaCortex the ability to extort victimized organizations for millions of dollars. Those who did not pay up often faced arduous recovery processes that were no less costly, and no faster.

Law enforcement continues to discourage companies from paying ransoms as a way to reduce the profitability of high-stakes attacks and deter attackers in the long run.

Of note in 2019 was code innovation in the malware arena. Attackers in this sphere constantly evolve their code to bypass security controls. According to data from Intezer, banking Trojans and ransomware showed the most innovation in their genetic code, with an increase in new (previously unobserved) code from 2018 to 2019. Some 45 percent of banking Trojan code was new in 2019, compared to 33 percent in 2018, while 36 percent of ransomware code was new in 2019, compared to 23 percent in 2018.

With over 8.5 billion records leaked or compromised in 2019, it was a big year for lost data. But could these numbers have been lower? Our analysis finds that of the more than 8.5 billion records breached in 2019, 86 percent were compromised via misconfigured assets, including cloud servers and a variety of other systems. The same issues affected only half of the records breached in 2018. As organizations move to the cloud, security must remain a high priority, especially when it comes to proper configuration, access rights and privileged account management (PAM).
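
As one concrete example of what "misconfigured assets" can mean in practice, the sketch below flags S3 buckets whose ACLs grant access to all users. It assumes the boto3 SDK and configured AWS credentials, and it checks only one of many possible misconfigurations.

```python
# Minimal sketch: flag S3 buckets whose ACL grants access to "AllUsers".
# Assumes boto3 is installed and AWS credentials are configured; this is one
# narrow check, not a full cloud posture audit.
import boto3

PUBLIC_GROUP = "http://acs.amazonaws.com/groups/global/AllUsers"

s3 = boto3.client("s3")
for bucket in s3.list_buckets()["Buckets"]:
    acl = s3.get_bucket_acl(Bucket=bucket["Name"])
    for grant in acl["Grants"]:
        # Group grantees carry a URI; user grantees carry an ID instead.
        if grant["Grantee"].get("URI") == PUBLIC_GROUP:
            print("publicly accessible bucket:", bucket["Name"], grant["Permission"])
```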

More records exposed equals more credentials up for grabs that can be used as an initial entry point into businesses. It is high time for organizations to pay closer attention to these potential security gaps and favor automation to limit human error and misconfiguration.

OT attacks hit an all-time high. Malicious activity targeting operational technology assets, most notably industrial control systems (ICS), increased 2000 percent year-over-year in 2019, marking the largest number of attempted attacks on ICS and OT infrastructure in three years.

Tech and social media giants were the top spoofed brands in 2019, with attackers using various cybersquatting tactics to gain the trust of potential victims.

Nearly 60 percent of the top 10 spoofed brands identified were Google and YouTube domains, with Apple (15 percent) and Amazon (12 percent) coming in next. Facebook, Instagram, Netflix and Spotify were also among the top 10 spoofed brands.

With nearly 10 billion accounts combined, the top 10 spoofed brands listed in the report offer attackers a wide target pool, increasing the likelihood of credential theft and account takeover.

North America and Asia were the most targeted regions. For the first time this year, the X-Force Threat Intelligence Index included geo-centric insights on the threat trends we've seen on a regional basis. North America and Asia suffered the largest data losses, having seen 5 billion and 2 billion records compromised, respectively.

IBM X-Force research for this report has a truly global reach, based on insights and observations from monitoring over 70 billion security events per day in more than 130 countries. For more insights about the global threat landscape and the threats most relevant to your organization, download the X-Force Threat Intelligence Index and sign up for the webinar to dive deeper into the findings from this year's report.

Download the X-Force Threat Intelligence Index 2020

See the article here:
X-Force Threat Intelligence Index Reveals Top Cybersecurity Risks of 2020 - Security Intelligence

The APAC data center market is expected to grow at a CAGR of over 3% during the period 2019-2025 – GlobeNewswire

New York, Feb. 14, 2020 (GLOBE NEWSWIRE) -- Reportlinker.com announces the release of the report "Data Center Market in APAC - Industry Outlook and Forecast 2020-2025" - https://www.reportlinker.com/p05830419/?utm_source=GNW
Report highlights: renewable energy to power data centers; increased awareness to improve data center efficiency; growth in data center traffic to aid 200 GbE and 400 GbE switch procurement; increase in submarine fiber cable deployment.

The APAC data center market is witnessing steady growth with continued investments from hyperscale and cloud service providers. In 2019, China and Hong Kong led the market in terms of data center development, followed by India, Australia, Japan, and Singapore. Apart from these countries, Indonesia, Thailand, and Malaysia made a sizable contribution toward growth. The implementation of 5G has commenced in several countries, which will have a significant impact on the market as telecommunication providers partner with service providers to establish edge data centers throughout the forecast period. The market is also witnessing significant investments in submarine cable projects from telecommunication service providers and government entities, with hyperscale operators continuing to invest millions in improving submarine connectivity across regions, aiding growth.

The introduction of artificial intelligence and machine learning workloads is expected to contribute over 40% of infrastructure investment in APAC by 2025. Artificial intelligence and machine learning workloads will increase the demand for liquid-immersion and direct-to-chip cooling techniques that can support densities of up to 200 kW. Over 30% of Australian enterprises use AI-based infrastructure solutions for experimental as well as production workloads. The average size of a facility in the APAC region has increased considerably in the last two years. Several operators are involved in land acquisitions for future development, which, in turn, is propelling growth. The contribution of ODM infrastructure solutions and all-flash storage arrays to the market will continue to grow.

The report considers the present scenario of the APAC data center market and its market dynamics for the forecast period 2020-2025. It covers a detailed overview of several market growth enablers, restraints, and trends. The report profiles and examines leading companies and several other prominent companies operating in the market.

APAC Data Center Market: Segmentation

This research report includes detailed segmentation by IT infrastructure, electrical infrastructure, mechanical infrastructure, general construction, tier standards, and geography. With the adoption of IoT, artificial intelligence, and big data analytics, the demand for high-performance computing infrastructure is increasing. The demand for supercomputers is also increasing with the rise in investment in cryptocurrency mining. Moreover, the increasing development of facilities in China and Hong Kong and the implementation of high-speed 5G networks will boost the data center network market. Over 50% of the business IT budget in Australia is spent on migration to cloud-based services, with IaaS spending leading the chart, followed by SaaS. The Australian market will also witness increased demand for managed data center services. IT infrastructure spending will be dominated by cloud service providers.

The increasing adoption of IT infrastructure is a major driver of the data center market in India, with high adoption of servers, storage, and networking infrastructure. Around 70% of startups in India are adopting IoT in their business. Healthcare and manufacturing are popular verticals attracting a lot of investor interest. The IT infrastructure market in Japan is gaining traction with the increasing popularity of cloud-based services, IoT, and AI. In Japan, the majority of facilities use blade servers developed for high-density computing environments. The increasing usage of social media platforms in the region will lead to the development of new facilities to store data, which increases the demand for high-capacity storage solutions.

The construction of data centers with power capacity over 20 MW will increase the demand for high-capacity electrical infrastructure. Most facilities support rack power densities of 5-10 kW; however, it is expected that new facilities will support capacities of up to 20 kW during the forecast period. Multiple facilities with a power capacity of more than 10 MW are being implemented in Australia. Data center providers are investing in DRUPS systems with capacities of around 1,500 kVA. Service providers are featuring lithium-ion batteries, which are likely to see increased adoption in the market. The market in India has witnessed the installation of energy-efficient power infrastructure. The contribution of the power infrastructure segment is high because of the growth in the installation of 2N power infrastructure solutions in extensive facilities. Operators are also being prompted to support renewable energy sources to power their IT infrastructure.

Several APAC countries do not support the use of free cooling systems. The operators in this region are still highly dependent on traditional air-based cooling techniques in small facilities that are built as part of commercial complexes. The use of dual water feeds in data centers with on-site water treatment plants is fast gaining popularity in the region as a few countries such as India suffer from acute water shortage for cooling purposes. The increasing demand for ASHRAE and Uptime Institute certified infrastructure is likely to increase the importance of metrics such as power usage effectiveness, water-usage effectiveness, and carbon usage effectiveness during the forecast period. The use of air-based cooling will continue to co-exist in the APAC region because of the growth of small facilities.
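
For reference, the three efficiency metrics named above are ratios against IT equipment energy; the worked PUE number below is an illustrative example, not a figure from the report.

```latex
% Standard definitions of the efficiency metrics named above.
\[
\mathrm{PUE} = \frac{E_{\text{total facility}}}{E_{\text{IT equipment}}}, \qquad
\mathrm{WUE} = \frac{\text{annual water use (L)}}{E_{\text{IT equipment}}\ (\mathrm{kWh})}, \qquad
\mathrm{CUE} = \frac{\text{total CO}_2\text{ emissions (kg)}}{E_{\text{IT equipment}}\ (\mathrm{kWh})}
\]
% Illustrative example: a facility drawing 12 MW in total for 10 MW of IT load
% has PUE = 12/10 = 1.2; lower is better, with 1.0 as the theoretical floor.
```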

China is leading in greenfield construction. Brownfield developments are more likely in Hong Kong due to the space shortage during the forecast period. In terms of physical security, most service providers prefer four layers of safety, with a few implementing five layers due to the increasing demand for colocation services. Australia is witnessing an increase in the construction of greenfield data center projects. Perth, Canberra, Brisbane, and Sydney are some of the major cities in Australia where greenfield development is likely to increase. The need for DCIM software to monitor facilities will continue to grow, as the need to improve operational efficiency is growing. The general construction market will witness increased construction of data centers in Japan; however, the cost of developing new facilities in the country is high. The rest of the APAC market will witness the entry of new construction service providers as greenfield construction grows. Telecommunication service providers and government agencies are the major investors in Southeast Asian countries.

Multiple facilities are being developed in this region as part of commercial buildings in major cities. This scenario will change in future years, as more standalone data center developments will be witnessed in regions such as Southeast Asia, India, and the rest of APAC during the forecast period. However, with the increase in greenfield development, the need for skilled labor will also grow. The labor shortage is not higher in APAC countries than in European and American regions. Mega data center development will provide a major boost to revenue growth for local construction contractors and suppliers.

In the APAC region, several under-development projects fall under the Tier III category. This trend is likely to continue during the forecast period, with many operators expected to move to the Tier IV category based on the growth in rack power density and critical applications. Data centers in Japan are likely to adopt the Uptime Institute's Tier III or Tier IV design with a minimum of N+N redundancy across infrastructure. Most facilities developed in 2019 were built to Tier III and Tier IV standards.

Market Segmentation by IT Infrastructure: Servers; Storage; Network
Market Segmentation by Electrical Infrastructure: UPS Systems; Generators; Transfer Switches and Switchgears; Rack PDU; Other Electrical Infrastructure
Market Segmentation by Mechanical Infrastructure: Cooling Systems (CRAC & CRAH Units, Chiller Units, Cooling Towers, Dry Coolers & Condensers, Other Cooling Units); Racks; Other Mechanical Infrastructure
Market Segmentation by General Construction: Building Development; Installation and Commissioning Services; Building Designs; Physical Security; DCIM & BMS
Market Segmentation by Tier Standards: Tier I & II; Tier III; Tier IV

Insights by Geography

The demand for data centers in China & Hong Kong is likely to exceed supply due to the increasing demand for cloud-based services, big data analytics, and IoT. China is the world's largest IoT market, with 64% of the 1.5 billion global cellular connections. The market in Hong Kong is witnessing investments YOY, which is aiding the territory's emergence as one of the major data center hubs in the world. Submarine cable deployment in Australia will boost facility growth. In 2019, the INDIGO-Central and INDIGO-West submarine fiber cable projects, which connect Australia, Indonesia, and Singapore, contributed to the increase in network traffic.

The data center market in India is one of the fastest-growing in the APAC region. India witnessed continuous investment in cloud adoption and big data analytics from small and medium-sized industries in 2019. Government initiatives such as Digital India are the major contributor to the data center investment growth in the market. The market has also witnessed an increase in the number of new service providers offering hosting, storage, colocation services, and disaster recovery services.

Key Countries: China and Hong Kong; Australia and New Zealand; India; Japan; Rest of Asia; Southeast Asia (Singapore, Malaysia, Thailand, Indonesia, Other Southeast Asian Countries)

Key Vendor Analysis
The APAC data center market is witnessing steady growth in terms of IT infrastructure procurement; greenfield, brownfield, and modular data center development; and the high adoption of efficient, scalable, flexible, and reliable infrastructure solutions.

Moreover, the market has a strong presence of vendors in three categories: IT infrastructure, support infrastructure, and data center investors. From the IT infrastructure perspective, the contributions of APAC-based infrastructure providers and global providers are almost equal. The increasing competition will prompt vendors to reduce the prices of solutions, namely SSDs and Ethernet switches, to gain major shares. The market is witnessing the growth of data centers that are keen to reduce power and water consumption and decrease carbon dioxide emissions. This will increase the demand for energy-efficient and innovative power and cooling infrastructure solutions. Partnerships with facility operators will play a vital role in gaining market share, because the majority of providers in the region have planned to invest millions of dollars in new facility development.

Key Data Center Critical (IT) Infrastructure Providers: Hewlett Packard Enterprise (HPE), Cisco, Dell Technologies, Huawei, IBM, Inspur

Key Data Center Support Infrastructure Providers: ABB, Eaton, Rittal, Schneider Electric, STULZ, Vertiv, Caterpillar, Cummins

Key Data Center Contractors: AECOM, Arup, Aurecon, CSF Group, DSCO Group, M+W Group, Nikom Infrasolutions, NTT FACILITIES Group

Key Data Center Investors: Apple, AWS (Amazon Web Services), GDS Holdings, Google, Digital Realty, Equinix, NEXTDC, NTT Communications, ST TELEMEDIA GLOBAL DATA CENTERS (STT GDC)

Other Prominent Critical (IT) Infrastructure Providers: Arista, Atos, Broadcom, Extreme Network, Hitachi Vantara, Inventec, Juniper, Lenovo, NEC, NetApp, Oracle, Pure Storage, Quanta Cloud Technology (Quanta Computer), Super Micro Computer, and Wistron (Wiwynn)

Other Prominent Support Infrastructure Providers: Airedale Air Conditioning, Alfa Laval, Asetek, Bosch Security Systems (Robert Bosch), Cyber Power Systems, Delta Group, Euro-Diesel (KINOLT), Green Revolution Cooling (GRC), Hitech Power Protection, KOHLER (SDMO), Legrand, Nlyte Software, Mitsubishi, MTU On Site Energy (Rolls-Royce Power Systems AG), Socomec, and Trane (Ingersoll Rand)

Other Prominent Construction Contractors DPR Construction, Corgan, CSF Group, Cundall, Faithful+Gould, Flex Enclosure, Fortis Construction, Hutchinson Builders, ISG, Larsen & Turbo (L&T), Linesight, LSK Engineering, Nakano Corporation, Obayashi Corporation, and Red-Engineering

Other Prominent Data Center Investors - Bridge Data Centres, Canberra Data Centres, Chayora, China Unicom, CtrlS, FPT (Frasers Property Thailand), Global Switch, Internet Initiative Japan Inc. (IIJ), Keppel Data Centres, Neo Telemedia, Pi DATACENTERS, Reliance Communications (GLOBAL CLOUD XCHANGE), Sify Technologies, Space DC, Tenglong Holdings Group (Tamron), and Yotta Infrastructure

Key Market Insights Include

The report provides the following insights into the data center market in APAC during the forecast period 2020–2025:

1. It offers comprehensive insights into current industry trends, forecasts, and growth drivers in the APAC data center market.
2. It provides the latest analysis of market share, growth drivers, challenges, and investment opportunities.
3. It offers a complete overview of market segments and the regional outlook of the APAC data center market.
4. It offers a detailed overview of the vendor landscape, competitive analysis, and key strategies to gain competitive advantage.

Read the full report: https://www.reportlinker.com/p05830419/?utm_source=GNW

About Reportlinker

ReportLinker is an award-winning market research solution. Reportlinker finds and organizes the latest industry data so you get all the market research you need, instantly, in one place.

See the article here:
The APAC data center market is expected to grow at a CAGR of over 3% during the period 2019–2025 - GlobeNewswire

Spotting the elephant in the room: Why cloud will not burst colo’s bubble just yet – Cloud Tech

When it comes to the future demand for data centre colocation services, it would be easy to assume there's a large elephant in the room in the shape of a cloud ready to consume all before it.

From what we are seeing, however, alongside our cloud provider hosting services and in line with market forecasts, this is far from the reality. The signs are that colocation can look forward to a vibrant long-term market. CBRE, for example, recently reported that 2019 was another record year of colocation market growth in the so-called FLAP (Frankfurt, London, Amsterdam, Paris) markets. There's also a growing choice of high-quality colocation facilities thriving in regional UK locations.

Perhaps more telling, amid all the excitement and market growth statistics surrounding cloud, some analysts are already predicting that only about half of enterprise workloads will ultimately go into it: best practice and business pressures will see most of the remaining share gradually moving from on-premise to colo, with only a minority remaining on-premise in the long term.

This is because a public cloud platform, while great for scalability, flexibility and ease of access, probably won't satisfy every enterprise application and workload need. Some workloads demand extremely high performance while others just need low-cost storage. And unless your own in-house data centre or hosting provider is directly connected to the cloud provider's network infrastructure, latency is a consideration, affecting user experience and creating a potential security risk. Then of course there are the governance and security concerns around control of company data.

At the same time, there are serious engineering challenges and costs involved in running private cloud solutions on-premise. The initial set-up is one thing, but there's also the ongoing support and maintenance. For critical services, providing 24-hour technical support can be a challenge.

Sooner or later, therefore, enterprises will have to address the implications and risks of continuing to run servers in-house for storing and processing large volumes of data and applications. Faced with rising costs, complexities and security issues, many will turn to quality colocation facilities capable of supporting their considerable requirements - from housing servers for day-to-day applications, legacy IT systems and, in some cases, mission-critical systems, to hosting private or hybrid clouds.

So where's the elephant? Right now, it is most likely residing in the boardrooms of many enterprise businesses. However, the real-life issues and challenges associated with a 'cloud or nothing' approach will increasingly come to light, and the novelty of instant cloudification will wear off. CIOs will once again be able to see the wood for the trees. Many will identify numerous workloads that won't go into the cloud, and where the effort or cost of cloud is a barrier.

This journey and its eventual outcome are natural - an evolution rather than a sudden and dramatic revolution. It's a logical process that enterprise organisations and CIOs need to go through to finally achieve their optimum balance of highly effective, cost-efficient, secure, resilient, flexible and future-proofed computing.

Nevertheless, CIOs shouldn't assume that colocation will always be available immediately, exactly where they need it and at low cost. As the decade wears on, some colocation providers will probably need to close or completely upgrade smaller or power-strapped facilities. Others will build totally new ones from the ground up. Only larger facilities, especially those located in lower-cost areas where real estate is significantly cheaper, may achieve the economies of scale necessary to deliver affordable and future-proofed solutions for larger workload requirements. Time is therefore of the essence in starting to evaluate potential colocation facilities.

In summary, the cloud is not going to consume colocation's lunch. More likely, together they will evolve into the most compelling proposition for managing almost all enterprise data processing, storage and application requirements. They are complementary solutions rather than head-to-head competitors.

View post:
Spotting the elephant in the room: Why cloud will not burst colo's bubble just yet - Cloud Tech

The frequency of DDoS attacks depends on the day and time – Help Net Security

Multivector and cloud computing attacks have been rising over the last twelve months, according to Link11. The share of multivector attacks, which target and misuse several protocols, grew significantly from 46% in the first quarter to 65% in the fourth quarter.

DNS amplification was the technique most used by DDoS attackers in 2019, found in one-third of all attacks. Attackers exploited insecure DNS servers, of which there were over 2.7 million worldwide by the end of 2019, according to the Open Resolver Project.
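
DNS amplification works because a small, source-spoofed UDP query sent to an open resolver triggers a much larger response aimed at the victim. To illustrate the defensive side, below is a minimal Python sketch, not taken from the Link11 report, that checks whether a given DNS server answers recursive queries from arbitrary clients and could therefore be abused as an amplifier. The hostname example.com and the address 192.0.2.1 (a reserved documentation range) are placeholders; only probe servers you are authorized to test.

import socket
import struct

def build_query(hostname, query_id=0x1234):
    # 12-byte DNS header: ID, flags (recursion desired), QDCOUNT=1,
    # ANCOUNT=0, NSCOUNT=0, ARCOUNT=0
    header = struct.pack(">HHHHHH", query_id, 0x0100, 1, 0, 0, 0)
    # QNAME: length-prefixed labels terminated by a zero byte
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in hostname.split(".")
    ) + b"\x00"
    # QTYPE=1 (A record), QCLASS=1 (IN)
    return header + qname + struct.pack(">HH", 1, 1)

def is_open_resolver(server_ip, timeout=3.0):
    # Send a recursive query for an external name and inspect the reply.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    try:
        sock.sendto(build_query("example.com"), (server_ip, 53))
        response, _ = sock.recvfrom(4096)
        # Byte 3 of the header carries the RA (recursion available) flag
        # in its top bit; ANCOUNT sits in bytes 6-7.
        recursion_available = bool(response[3] & 0x80)
        answer_count = struct.unpack(">H", response[6:8])[0]
        return recursion_available and answer_count > 0
    except socket.timeout:
        return False
    finally:
        sock.close()

if __name__ == "__main__":
    print(is_open_resolver("192.0.2.1"))  # placeholder address

The attack itself relies on the size asymmetry: a query of roughly 60 bytes can elicit a response of several kilobytes, so a resolver that passes this check can reflect and amplify traffic towards a spoofed victim address many times over.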

The average attack bandwidth has increased by more than 150% within four years, reaching 5 Gbps in 2019, up from 2 Gbps in 2016. The maximum attack volume also nearly doubled compared to 2018, from 371 Gbps to 724 Gbps.
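
Both figures are internally consistent: growing from 2 Gbps to 5 Gbps is an increase of (5 - 2) / 2 = 1.5, i.e. 150%, and 724 Gbps is 724 / 371, or roughly 1.95 times, the 2018 peak, matching "nearly doubled".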

The proportion of DDoS attacks involving corrupted cloud servers was 45% between January and December, a 16% increase over the same period the previous year. The proportion rose to 51% over the last six months of 2019.

The number of attacks traced to cloud providers was roughly proportional to their market share, with the most cases of corrupted cloud servers registered on AWS, Microsoft Azure and Google Cloud.

The longest DDoS attack lasted 6,459 minutes, or roughly 107.7 hours.

Marc Wilczek, COO of Link11, said: "There was a noticeable surge in attack bandwidths and volumes, and in multivector attacks, in 2019, due in part to the increased malicious use of cloud resources and the popularity of IoT devices."

The rest is here:
The frequency of DDoS attacks depends on the day and time - Help Net Security