Category Archives: Cloud Servers

Is the Cloud More Secure Than On Prem? – TechDecisions

Both the cloud and on-premises systems have their advantages and disadvantages, but recent attacks against on-premises systems, coupled with the proliferation and advancement of cloud-based IT architecture, are tilting the scales in favor of the cloud.

A company that owns its own on-premises servers has more control over security, but it is responsible for all of the upgrades, maintenance and other upkeep, not to mention the large up-front costs associated with the hardware.

In the cloud, most of that upgrading and maintenance is done by the provider, and organizations can pay for those services on a fixed, monthly basis.

Although on-premises systems have historically been viewed as more secure, recent attacks say otherwise, says Aviad Hasnis, CTO of autonomous breach protection company Cynet.

"It's a trend that has really stressed the fact that companies, especially in the mid-market, that utilize these kinds of on-premises infrastructure don't usually have the capabilities or the manpower to make sure they are all up to date in terms of security updates," he said.

That's why we've seen so many successful attacks against on-premises systems of late, including the ProxyLogon and ProxyShell exploits of Microsoft Exchange Server vulnerabilities and the massive Kaseya ransomware attack, Hasnis says.

One of the main reasons there are more attacks against on-premises systems is that most cloud vulnerabilities aren't assigned a CVE number, which makes it hard for hackers to discover a flaw and successfully exploit it.

Case in point: the recently disclosed Azure Cosmos DB vulnerability. Microsoft mitigated the vulnerability shortly after it was discovered, and no customer data appears to have been impacted.

Meanwhile, known vulnerabilities in on-premises systems are exploited until IT departments can patch their systems. For example, the ProxyLogon and ProxyShell vulnerabilities in Microsoft Exchange were assigned CVEs and patched shortly after they were disclosed, but organizations that were slow to patch or implement workarounds remained vulnerable as attackers seized on the newly discovered flaws.

In the case of the Kaseya attack, the damage was limited to on-premises customers using the VSA product, and once the breach was disclosed, the company had to manually reach out to customers and urge them to take their servers down.

Attacking Kaseya's SaaS customers likely would have raised additional red flags that could have stopped the attack in its tracks, Hasnis says.

There are many different defenses for detecting this kind of threat behavior, Hasnis says.

In general, the cloud can be a much safer place to be if your organization practices SaaS Security Posture Management (SSPM), which, according to Gartner, is the continuous assessment of the security risk of your SaaS applications, including reporting the configuration of native SaaS security settings and tweaking that configuration to reduce risk.

For example, someone using Microsoft 365 without two-factor authentication should trigger a warning, Hasnis says.
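In practice, the kind of SSPM check Hasnis describes boils down to scanning account configuration and flagging risky settings. Here is a toy Python sketch of that idea; the JSON shape is a hypothetical stand-in for what an identity provider's admin export or API (Microsoft Graph, for instance) would return, not a real schema.

    import json

    # Hypothetical export of user accounts from an identity provider.
    users = json.loads("""
    [
      {"user": "alice@example.com", "mfa_enabled": true},
      {"user": "bob@example.com",   "mfa_enabled": false}
    ]
    """)

    # Flag any account that lacks two-factor authentication.
    for account in users:
        if not account["mfa_enabled"]:
            print("WARNING: %s has no two-factor authentication enabled"
                  % account["user"])

A real SSPM product would pull this data continuously via API and check dozens of settings, but the principle is the same: assess configuration, report drift, remediate.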

"The fact that someone uses cloud or SaaS infrastructure doesn't necessarily mean it's safe; they have to make sure their organization aligns with the best security protocols," Hasnis says.

Especially for smaller organizations that don't have the in-house staff and expertise to update and patch on-premises systems after an attack, migrating to the cloud can help cut down on that response time and keep the company safe by enlisting the help of the provider and other IT experts.

"If your organization is spread around the globe in more than one location and you're working on-prem, you don't necessarily have access to all of the different infrastructure within the environment," Hasnis says.

Continued here:
Is the Cloud More Secure Than On Prem? - TechDecisions

Meet the Self-Hosters, Taking Back the Internet One Server at a Time – VICE

It's no secret that a small handful of enormous companies dominate the internet as we know it. But the internet didn't always have services with a billion users and quasi-monopolistic control over search or shopping. It was once a loose collection of individuals, research labs, and small companies, each making their own home on the burgeoning world wide web.

That world hasn't entirely died out, however. Through a growing movement of dedicated hobbyists known as self-hosters, the dream of a decentralized internet lives on at a time when surveillance, censorship, and increasing scrutiny of Big Tech have created widespread mistrust of large internet platforms.

Self-hosting is a practice that pretty much describes itself: running your own internet services, typically on hardware you own and have at home. This contrasts with relying on products from large tech companies, which the user has no direct involvement in. A self-hoster controls it all, from the hardware used to the configuration of the software.

"My first real-world reason for learning WordPress and self-hosting was the startup of a podcast," KmisterK, a moderator of Reddit's r/selfhosted community, told Motherboard. "I quickly learned the limitations of fake unlimited accounts that were being advertised on most shared hosting plans. That research led to more realistic expectations for hosting content that I had more control over, and it just bloomed from there."

Edward, co-creator of an extensive list of self-hosted software, similarly became interested in self-hosting as a way to escape less-than-ideal circumstances. "I was initially drawn to self-hosting by a slow internet connection and a desire to share media and information with those I lived with," he told Motherboard. "I enjoyed the independence self-hosting provided and the fact that you owned and had control over your own data."

Once you're wrapped up in it, it's hard to deny the allure of the DIY self-hosted internet. My own self-hosting experiences include having a home server for recording TV and storing media for myself and my roommates, and more recently, leaving Dropbox for a self-hosted, free and open source alternative called Syncthing. While I had been happy with Dropbox for many years, I was paying for more than I needed and ran into issues with syncing speed. With a new Raspberry Pi as a central server, I had more control over what synced to different devices, no worries about storage caps, and of course, faster transfer speeds. All of this runs on my home network: nothing has to be stored on cloud servers run by someone else in who-knows-where.
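Part of the appeal is that everything is inspectable. A self-hosted Syncthing node, for instance, exposes a local REST API, so you can script your own health checks. Below is a minimal Python sketch; the URL and API key are placeholders for a typical Raspberry Pi setup, and the endpoint paths are as I recall them from Syncthing's REST documentation, so verify them against your own instance.

    import json
    import urllib.request

    SYNCTHING_URL = "http://raspberrypi.local:8384"  # default GUI/API port
    API_KEY = "your-api-key-here"  # shown in the Syncthing GUI under Settings

    def get(path):
        req = urllib.request.Request(SYNCTHING_URL + path,
                                     headers={"X-API-Key": API_KEY})
        with urllib.request.urlopen(req, timeout=5) as resp:
            return json.load(resp)

    status = get("/rest/system/status")            # uptime, device ID, etc.
    connections = get("/rest/system/connections")  # per-peer connection state
    print("Device ID:", status["myID"])
    print("Connected peers:",
          sum(1 for c in connections["connections"].values() if c["connected"]))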

My experience with Syncthing quickly sent me down the self-hosting rabbit hole. I looked at what else I could host myself, and found simply everything: photo collections (like Google Photos); recipe managers; chat services that you can connect with popular tools like Discord; read-it-later services for bookmarking; RSS readers; budgeting tools; and so much more. There's also the whole world of alternative social media services, like Mastodon and PixelFed, to replace Twitter, Facebook, and Instagram, which can be self-hosted as a private network or used to join others around the world.

Self-hosting is something I've found fun to learn about and tinker with, even if it is just for myself. Others, like KmisterK, find new opportunities as well. "Eventually, a career path started with it, and from there, being in the community professionally kept me personally interested as a hobby." Edward also found a connection with his career in IT infrastructure, but still continues self-hosting. "It is nice to be able to play around in a low risk/impact environment," he said.

But beyond enjoyment, self-hosters share important principles that drive the desire to self-host: namely, a distrust of large tech companies, which are known to scoop up all the data they can get their hands on and use it in the name of profit.

Despite new privacy laws like Europe's General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA), the vast majority of Americans still don't trust Big Tech with their privacy. And in recent years, countless privacy scandals like Cambridge Analytica have driven some tech-savvy folks to take matters into their own hands.

"I think that people are becoming more privacy conscious and, while neither these laws nor self-hosting can currently easily resolve these concerns, I think that they can at least alleviate them," said Edward.

Some self-hosters see the rising interest in decentralized internet tools as a direct result of Silicon Valley excess. "The growth of self-hosting does not surprise me," nodiscc, a co-creator and maintainer of the self-hosted tech list, told Motherboard. "People and companies have started realizing the importance of keeping some control over their data and tools, and I think the days of 'everything SaaS [Software as a Service]' are past."

Another strong motivator comes from large companies simply abandoning popular tools, along with their users. After all, even if you're a paying customer, tech companies offer access to services at their whim. Google, for example, is now infamous for shutting down even seemingly popular products like Reader, leaving users with no say in the matter.

KmisterK succinctly summarized the main reasons people have for self-hosting: curiosity and wanting to learn; privacy concerns; looking for cheaper alternatives; and "the betrayed," people who come from platforms like Dropbox or Google Photos or Photobucket after major outages, major policy changes, sunsetting of services, or other dramatic changes to the platform that they disagree with. This last one is probably the majority gateway to self-hosting, based on recent traffic to r/selfhosted, he says. Look no further than the subreddit's recent Google Photos megathread and recent guides from self-hosters on the internet. For me, changes in LastPass, even as a paid user, had me looking elsewhere.

nodiscc also noted the different reasons people self-host, saying, "There would be many... technical interest, security/privacy, customization, control over the software, self-reliance, challenge, economical reasons, political/Free software activism." Looking at the growth of self-hosting over the years, Edward says, "These aren't comprehensive reasons, but I expect that privacy-consciousness, hardware availability and more mainstream open-source software have contributed to the growth of self-hosting."

These are all good reasons why self-hosting is so essential. Self-hosting brings freedom and empowerment to users. You own what you use: you can change it, keep it the same, and have your data in your own hands. Much of this derives from the free (as in freedom to do what you like) nature of self-hosting software. The source code is freely available to use, modify, and share. Even if the original author or group stops supporting something, the code is out there for anyone to pick up and keep alive.

Despite the individualistic nature of self-hosting, there is a vibrant and growing community.

Much of this growth can be seen on Reddit, with r/selfhosted hitting over 136,000 members and continuing to rise, up from 84,000 just a year ago. The discussions involve self-hosting software that spans dozens of categories, from home automation, genealogy, and media streaming to document collaboration and e-commerce. The list maintained by nodiscc and the community has grown so long that its stewards say it needs more curation and better navigation.

The quality of free and easy-to-use self-hosting software has increased too, making the practice increasingly accessible to the less technically savvy. Add to that the rise of cheap, credit card-sized single-board computers like the Raspberry Pi, which lower the starting cost of a home server to as little as $5 or $10. "Between highly available hosting environments and one-click/one-command deploy options for hundreds of different softwares, the barrier for entry has dramatically been lowered over the years," said KmisterK.

Of course, even the most dedicated self-hosters admit that it isn't for everyone. Having some computing knowledge is fairly essential when it comes to running your own internet services, and "self-hosting will never truly compete with big-name services that make it exponentially easier," KmisterK said.

But while self-hosters may never number enough to put a serious dent in Big Tech's offerings, there is a clear need for, and benefit to, this alternative space. And I can't think of a better model for the kind of DIY community we can have, when left to our own devices.

Read the original:
Meet the Self-Hosters, Taking Back the Internet One Server at a Time - VICE

Google is designing its own Arm-based processors for 2023 Chromebooks report – The Register

Google is reportedly designing its own Arm-based system-on-chips for Chromebook laptops and tablets to be launched in 2023.

The internet search giant appears to be following the same path as Apple by developing its own line of processors for client devices, according to Nikkei Asia.

Google earlier said its latest Pixel 6 and Pixel 6 Pro Android smartphones will be powered by a homegrown system-on-chip named Tensor. This component will be made up of CPU and GPU cores licensed from other designers as well as Google's own AI acceleration engine to boost machine-learning-based features, such as image processing and speech recognition.

The Chocolate Factory also launched its homemade Tensor Processing Units (TPUs) in 2016, aimed at training and running machine learning workloads on its cloud servers. Google's CEO Sundar Pichai announced the fourth generation of TPUs in May at the web titan's annual I/O conference. Google also has a collection of its own Titan chips.

The rumored processors for its laptops and fondleslabs use Arm CPU cores, meaning Google will pay licensing fees to use the British chip designer's blueprints. The chips will be manufactured by outside fabrication plants, probably TSMC or Samsung. Technical specifications are hush-hush right now; The Register has asked Google for comment.

It's beneficial for tech companies to develop their own chips as they, for one thing, roll out AI algorithms in their products. Custom accelerators can be optimized to run their makers' software stacks more efficiently, enabling more real-time intelligent decision-making by devices, whether that's in facial recognition or machine-learning-powered smartphone apps.

Apple's iPhone 12 handsets, for example, contain the iGiant's 5nm 64-bit Arm-compatible A14 Bionic SoC, which is capable of accelerating computer-vision code and the processing of data from the phone's sensors. Amazon also has custom processors available for its cloud customers on AWS, such as Inferentia and Graviton.

There are reportedly other in-house chip projects underway at Facebook for its Oculus VR headsets and at Microsoft for its servers and laptops.

Continued here:
Google is designing its own Arm-based processors for 2023 Chromebooks report - The Register

US government warns of actively exploited Confluence flaw – ICT News – The Press Stories

According to US Cyber Command, a significant bug in Atlassian Confluence is being actively exploited worldwide.

The flaw, CVE-2021-26084, affects Atlassian's Confluence wiki software. The vulnerability is serious enough that US Cyber Command, the cyber arm of the US Department of Defense, has issued a warning. "Mass exploitation of Atlassian Confluence CVE-2021-26084 is ongoing and expected to accelerate," read a statement released Friday. Companies are therefore advised to close this hole as soon as possible, but the United States is in the middle of a long holiday weekend. That makes it a prime moment for cyberattacks, because it often takes a long time before an employee notices anything unusual.

Confluence is popular wiki software, often used for internal communication. Criminal gangs aiming to infiltrate corporate networks are currently scanning for vulnerable installations. Atlassian announced on August 25 that a critical bug had been found in various versions of Confluence Server and Data Center, allowing an attacker to execute arbitrary code in the software without authorization. Patched versions have been published. The flaw appears to affect only on-premises servers, not the cloud-hosted versions of Confluence.
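For administrators triaging exposure, a rough first check is to read the version string Confluence embeds in its pages and compare it against the patched releases. The Python sketch below is a hedged example: the ajs-version-number meta tag and the list of fixed releases are recalled from Atlassian's advisory rather than verified here, so confirm both against the advisory before relying on the output.

    import re
    import urllib.request

    CONFLUENCE_URL = "https://confluence.example.com"  # hypothetical server
    FIXED_RELEASES = ["6.13.23", "7.4.11", "7.11.6", "7.12.5", "7.13.0"]

    with urllib.request.urlopen(CONFLUENCE_URL + "/login.action", timeout=10) as resp:
        html = resp.read().decode("utf-8", errors="replace")

    # Confluence typically reports its version in an AJS meta tag.
    match = re.search(r'<meta name="ajs-version-number" content="([\d.]+)"', html)
    if match:
        print("Detected Confluence %s; fixed releases per the advisory: %s"
              % (match.group(1), ", ".join(FIXED_RELEASES)))
    else:
        print("Version not found; check the page footer or the admin console.")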



Read more here:
US government warns of actively exploited Confluence flaw - ICT News - The Press Stories

Emergen Research: Akamai Leads in DNS Market Among Fortune 500 Companies | Increasing Preference for Cloud DNS and Rising Need to Prevent DDoS Attacks…

VANCOUVER, BC, Aug. 31, 2021 /PRNewswire/ -- Demand for cloud-based DNS has increased substantially in the recent past and is driving up the revenue share of major players in the global Domain Name System (DNS) provider market. A few global players have risen substantially above the rest; the current leader, Akamai, accounts for a substantially larger revenue share and serves close to double the number of websites as its closest competitor. The providers in the latter part of the list are far behind the leader, and it does not seem that the leader will be dethroned anytime soon.

A major change in the DNS industry is the transition to cloud-based systems, a trend that is gaining robust traction in the market. As the cloud-computing ecosystem expands across the Internet, clients will get more options for controlling DNS performance and improving customer experience on a global scale. The majority of businesses are shifting to cloud-based DNS or adopting a hybrid DNS-cloud setup, splitting their traffic between standard DNS and cloud infrastructure.

Another factor supporting a steady increase in the revenue share of the various players in the market is the growing adoption of Domain Name System Security Extensions (DNSSEC). DNSSEC protects DNS from cyberattacks by applying digital signatures to DNS data, verifying the sources of that data, and ensuring accurate data flow over the Internet.
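Whether a given domain actually validates is easy to check from a script. The sketch below uses the dnspython 2.x package: it sends a query with the DNSSEC OK bit set and reports whether the resolver's answer carries the AD (Authenticated Data) flag, which a validating resolver sets only after verifying the signature chain. The choice of 1.1.1.1 as validating resolver and the example domains are illustrative assumptions.

    import dns.flags
    import dns.message
    import dns.query
    import dns.rdatatype

    def dnssec_validated(name, resolver_ip="1.1.1.1"):
        # want_dnssec=True sets the DO bit, asking the resolver to validate.
        query = dns.message.make_query(name, dns.rdatatype.A, want_dnssec=True)
        response = dns.query.udp(query, resolver_ip, timeout=5)
        # AD is set only when the resolver verified the RRSIG chain.
        return bool(response.flags & dns.flags.AD)

    print(dnssec_validated("cloudflare.com"))  # signed zone: expect True
    print(dnssec_validated("google.com"))      # unsigned at the time of writing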

Click Here to Access a Free Extract PDF Copy of the Report @ https://www.emergenresearch.com/request-extract

However, increasing incidents of DNS server outages are a key factor expected to reduce the revenue share of DNS providers. Businesses are rapidly hosting websites in order to engage customers, clients, and partner organizations, and DNS plays an important role in enterprise profits. A company chooses a DNS service to ensure better website performance and a better user experience, so it is important for DNS service providers to operate their servers efficiently. Web pages that fail to respond because of DNS outages can have a substantial impact on an enterprise's earnings; companies cannot afford DNS outages, especially during seasonal sales. Outages also have a major negative influence on user experience and application accessibility.
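A basic way to catch such outages early is to probe resolution continuously from the outside. Here is a minimal availability check, again with dnspython; the resolver addresses and hostname are placeholders, and a production monitor would run this on a schedule and alert on failure.

    import dns.exception
    import dns.resolver

    RESOLVERS = ["8.8.8.8", "1.1.1.1"]  # substitute your provider's servers
    HOSTNAME = "www.example.com"        # hypothetical monitored site

    def probe(server):
        resolver = dns.resolver.Resolver(configure=False)
        resolver.nameservers = [server]
        resolver.lifetime = 3  # seconds before the lookup counts as failed
        try:
            resolver.resolve(HOSTNAME, "A")
            return True
        except dns.exception.DNSException:
            return False

    for server in RESOLVERS:
        state = "OK" if probe(server) else "FAILED"
        print("%s via %s: %s" % (HOSTNAME, server, state))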

DNS Market Share of Fortune 500 - August 2021

As of 2021, the leading providers by Fortune 500 websites served and global DNS market revenue share were:

Provider                         Websites   Revenue share
Akamai [AKAM:NASDAQ]             82         16.4%
CSC DNS                          44         8.8%
Neustar UltraDNS [NYSE:NSR]      42         8.4%
Amazon Route 53 [AMZN:NASDAQ]    27         5.4%
Cloudflare [NET:NASDAQ]          25         5.0%
GoDaddy [GDDY:NASDAQ]            19         3.8%
Azure DNS [AZRE:NASDAQ]          15         3.0%
DNS Made Easy                    11         2.2%

With the introduction of Edge DNS, Akamai Technologies has managed to gain a substantially larger market share than other players in the global market. Edge DNS is an authoritative DNS service that moves DNS resolution from business premises or data centers to the Akamai Intelligent Edge. CSC DNS and Neustar UltraDNS follow Akamai most closely in terms of market revenue share. In addition, Neustar UltraDNS offers a cost-effective cloud-based recursive DNS service with sophisticated threat intelligence that enables speedy and secure online application access, which places it in a lucrative position in the market.

Have a look at our quote for detailed information on the different packages on offer [Fortune 500 Companies, Fortune 5000 Companies, Fortune 1000 Companies] @ https://www.emergenresearch.com/request-quote


Have a look at Complete Insight on Domain Name System Market Among Fortune 500 Companies @ https://www.emergenresearch.com/livedata/dns-market-share-2021-fortune-500-companies

About Emergen Research

Emergen Research is a market research and consulting company that provides syndicated research reports, customized research reports, and consulting services. Our solutions purely focus on your purpose to locate, target, and analyze consumer behavior shifts across demographics, across industries, and help clients make smarter business decisions. We offer market intelligence studies ensuring relevant and fact-based research across multiple industries, including Healthcare, Touch Points, Chemicals, Types, and Energy. We consistently update our research offerings to ensure our clients are aware of the latest trends existent in the market. Emergen Research has a strong base of experienced analysts from varied areas of expertise. Our industry experience and ability to develop a concrete solution to any research problems provides our clients with the ability to secure an edge over their respective competitors.

Contact Us:

Eric Lee
Corporate Sales Specialist
Emergen Research | Web: http://www.emergenresearch.com
Direct Line: +1 (604) 757-9756
E-mail: [emailprotected]
Explore Our Custom Intelligence services | Growth Consulting Services
Facebook | LinkedIn | Twitter | Blogs
Related Report @ Managed DNS Service Market


SOURCE Emergen Research

Continue reading here:
Emergen Research: Akamai Leads in DNS Market Among Fortune 500 Companies | Increasing Preference for Cloud DNS and Rising Need to Prevent DDoS Attacks...

More SMBs are shifting IT infrastructure as part of hybrid working plans – ITProPortal

With most workers nowadays operating in a hybrid model, small and medium-sized businesses (SMBs) are being forced to rethink their IT infrastructure. This is according to a report from data center specialists ServerChoice, which claims that SMBs are turning to the cloud to support their employees.

Polling more than 900 SME business leaders, ServerChoice found that most of them (72 percent) are considering creating a private cloud solution, while 19 percent are eyeing up colocation. The remaining nine percent will most likely go down the public cloud route.

SMB leaders are in no rush to get this done, however, as there are major roadblocks along the way. While many are worried about the cost of moving their servers and setting up cloud-based infrastructure, some are also wary of the potential downtime during the move. There are also concerns about possible breakdowns during the move, which would not only prolong the process, but also make it more expensive.

"SMEs are undergoing a rapid shift in working patterns, with four in ten of these businesses moving offices. This has become a driver for businesses to relook at their IT server estate, and our research found that SMEs are using the office move as an opportunity to shift IT servers off-premises," said Adam Bradshaw, Commercial Director at ServerChoice.

"SMEs remain unconvinced by public cloud, with colocation found to be twice as popular as public cloud. This is unsurprising, as perfectly good IT hardware does not need to be replaced with colocation. It is a solution that not only maximizes the potential of existing hardware but provides a more secure, and often more reliable, foundation for a business's core infrastructure."

The rest is here:
More SMBs are shifting IT infrastructure as part of hybrid working plans - ITProPortal

Intel's Best DPU Will Be Commercially Available Someday – The Next Platform

UPDATE: One of the reasons why Intel spent $16.7 billion to acquire FPGA maker Altera six years ago was that it was convinced its onload model, where big parts of the storage and networking stack run on CPUs, was going to fall out of favor, and that companies would want to offload this work to network interface cards with lots of their own much cheaper and much more energy efficient processing.

This is what we used to call SmartNICs, meaning network interface cards that offload and accelerate certain functions using a custom ASIC. We are now increasingly calling them DPUs, short for Data Processing Units, as these devices take a hybrid approach to compute and acceleration, mixing CPUs, GPUs, and FPGAs on the same device. Because it has to be different, Intel gives offload devices that are substantially expanded SmartNICs the name Infrastructure Processing Unit, or IPU, but to avoid confusion we are sticking with the DPU name for all of these.

In any event, Intel trotted out three of its impending DPUs at its recent Architecture Day extravaganza, and the executives in its Data Platforms Group showed that they had indeed been on the road to Damascus for the past couple of years and were going to not only stop persecuting DPUs but embrace them fully. Well, it was not so much a conversion as an injection of new people bringing new thoughts, including Guido Appenzeller, who is these days chief technology officer of what used to be called the Data Center Group. Appenzeller ran the Clean Slate Lab at Stanford University, which gave birth to the OpenFlow software-defined networking control plane standard, and was co-founder and CEO of Big Switch Networks (now part of Arista Networks). Appenzeller was chief technology strategy officer of the Networking and Security business unit at VMware for a while and was behind the OpenSwitch open source network operating system project created by Hewlett Packard Enterprise a few years ago.

Intel has not talked much about offloading work from CPUs, because that is heresy, even if it is happening and even if there are very good economic and security reasons for doing so. The metaphor for DPUs that Appenzeller came up with, and talked about at Architecture Day, is clever. It is more about resource sharing and multitenancy than about getting better price/performance across a cluster of systems, which we think is the real driver behind the DPU. (This is hairsplitting, we realize. Offloading network and storage to the DPU helps cut latency, helps improve throughput, lowers cost, and delivers secure multitenancy.)

"If you want to think about an analogy, this is a little bit like hotels versus single-family homes," explained Appenzeller. "In my home, I want it to be easy to move around from the living room to the kitchen to the dinner table. In a hotel, it is very different. The guest rooms and the dining hall and the kitchen are cleanly separated. The areas where the hotel staff works are different from the areas where the hotel guests are. And you may want to move from one to the other in some cases. And essentially this is the same trend that we're seeing in cloud infrastructure today."

In the Intel conception of the DPU, the IPU is where the control plane of the cloud service providers (what we call hyperscalers and cloud builders) runs, while the hypervisor and the tenant code run on the CPU cores inside the server chassis where the DPU is plugged in. Many would argue with this approach, and Amazon Web Services, which has perfected the art of the DPU with its Nitro intelligent NICs, would be the first to object. All network and storage virtualization code runs on the Nitro DPU for all EC2 instances and, importantly, so does the server virtualization hypervisor, excepting only the tiniest piece of paravirtualized code, which has nearly no overhead at all. The CPU cores are meant only to run operating systems and do compute tasks. No more.

In a sense, as we have been saying for some time, a CPU is really a serial compute accelerator for the DPU. And not too far into the future, the DPU will have all accelerators linking to it in a high-speed fabric that allows the whole shebang to be disaggregated and composable, with the DPU, not the CPU, at the heart of the architecture. This is going too far for Intel, we suspect. But it makes more sense, and fulfills much of the four-decade-old vision of "the network is the computer" espoused by former Sun Microsystems techie extraordinaire John Gage. There will be more and more in-network processing, in DPUs and in switches themselves, as we move forward, because this is the natural place for collective operations to run. Perhaps they never should have been put on the CPU in the first place.

To be fair, later in his talk, as you see in the chart above, Appenzeller did concede that CPU offload is happening, allowing customers to maximize revenues from CPUs. Intel surely has been doing that for the past decade, but that strategy no longer works. Which is one of the reasons why Appenzeller was brought in from outside of Intel.

And the data below, from Facebook, which Appenzeller cited, makes it clear why Intel has had a change in thinking, particularly after watching AWS and Microsoft fully embrace DPUs over the past several years and other hyperscalers and cloud builders follow suit with various levels of deployment and success.

This is perhaps a generous dataset, particularly if you are not including the overhead of a server virtualization hypervisor, as many large enterprises have to, even if the hyperscalers and cloud builders tend to run bare metal with containers on top.

At the moment, because it does not have its oneAPI software stack fully cooked and does not have an ecosystem of software running on its GPU-accelerated devices, Intel is only talking about DPUs that are based on CPUs, FPGAs, and custom ASICs. But in the fullness of time, we believe that GPUs, which excel at certain kinds of parallel processing and are faster to reprogram than FPGAs, will be part of the DPU mix at Intel, as they have come to be at Nvidia. It's only a matter of time.

For now, two of the DPUs that Intel showed off at Architecture Day were based on CPU and FPGA combos: one, called Arrow Creek, is based on an FPGA/CPU SoC, and one, called Oak Springs Canyon, pairs an FPGA with an external Xeon D processor. The third is based on a custom ASIC, code-named Mount Evans, that Intel is creating for a top cloud provider that remains unnamed.

Here are the Arrow Creek (left) and Oak Springs Canyon (right) cards, which plug into PCI-Express slots inside of servers:

And here is a drilldown on Arrow Creek's features:

The Arrow Creek DPU has two 100Gb/sec ports that use QSFP28 connectors and an Agilex FPGA compute engine. The DPU has a dual-port E810 Ethernet controller chip that hooks into eight lanes of PCI-Express 4.0 slot capacity, and the Agilex FPGA has its own eight lanes of PCI-Express as well; both run back into the CPU complex of the server through the PCI-Express bus. The Agilex FPGA has Arm cores embedded in it, which can run modest compute jobs and have five channels of memory (four plus a spare, it looks like) with a total of 1GB of capacity. The FPGA part of the Agilex device has four channels of DDR4 memory with a combined 16GB of capacity.

This Arrow Creek DPU is aimed specifically at network acceleration workloads, including customizable packet processing done on the "bump in the wire," as we have been saying about FPGA-accelerated SmartNICs for a long time. The device is programmable through the OFS and DPDK software development kits and has Open vSwitch and Juniper Contrail virtual switching, as well as SRv6 and vFW stacks, already mapped onto its FPGA logic. This is for workloads that change sometimes, but not very often, which is what we have been saying about FPGAs from the beginning.

Oak Springs Canyon is a little different, as you can see:

The feeds and speeds of the Xeon D processor have not been revealed yet, but it probably has 16 cores, as a lot of SmartNICs tend to these days. As far as we know, the Xeon D CPU and Agilex FPGA are not on the same die (Intel has been working on this for years and promised such devices as part of the Altera acquisition back in 2015), but for all we know they are integrated in a single socket using EMIB interconnects. The CPU and FPGA each have 16GB of DDR4 memory across four channels, and they link through the FPGA to a pair of 100Gb/sec QSFP28 ports.

The Oak Springs Canyon DPU is programmable through the OFS, DPDK, and SPDK toolkits and has integrated stacks for Open vSwitch virtual switching as well as the NVM-Express over Fabrics and RoCE RDMA protocols. Obviously, this DPU is aimed at accelerating network and storage traffic and offloading it from the CPU complex in the server.

The third DPU, the Mount Evans device, is perhaps the most interesting in that it was co-designed with that top cloud provider and that it has a custom Arm processor complex and a custom network subsystem integrated on the same package. Like this:

The networking subsystem has four SerDes running at 56Gb/sec, which delivers 200Gb/sec at full duplex and can be carved up and used by four host servers. (The charts say the hosts have to be Xeons, but it seems unlikely that this is a requirement. Ethernet is Ethernet.) The network interface implements the RoCE v2 protocol for accelerating networking without involving the CPU (as RDMA implementations do) and also has an NVM-Express offload engine so the CPUs in the host don't have to deal with this overhead, either. There is a custom programmable packet processing engine, which uses the P4 programming language and which we strongly suspect is based on chunks of the Tofino switch ASICs from Intel's acquisition of Barefoot Networks more than two years ago. The network subsystem has a traffic-shaping logic block to boost performance and lower latency between the network and the hosts, and there is also a logic block that does IPsec inline encryption and decryption at line rate.

The compute complex on the Mount Evans device has 16 Neoverse N1 cores licensed from Arm Holdings, which are front-ended by a cache hierarchy that was not divulged and an unusual three DDR4 memory controllers (that's not a very base-2 number). The compute complex also has a lookaside cryptography engine and a compression engine, thus offloading these two jobs from the host CPUs, and a management complex to allow outboard management of the DPU.

It is not clear what the workload is, but Intel says that, as for the programming environment, it will leverage and extend the DPDK and SPDK tools, presumably with P4. We strongly suspect that Mount Evans is being used in Facebook microservers, but that is just a guess. It could be Google, and it definitely is not AWS or Microsoft. And we also strongly suspected that it would not be available to anyone other than its intended customer, which, we said when this story first came out, would be a shame.

Update: Intel apparently will commercialize Mount Evans. At some point.

Here is the statement we got from Brian Neipoky, director of Connectivity Group Marketing at Intel, after the story ran: "Mount Evans will be commercially available, but we are not announcing product availability at this time."

So, there is a little more precision, and you are welcome.

Read more:
Intel's Best DPU Will Be Commercially Available Someday - The Next Platform

EXCLUSIVE Microsoft warns thousands of cloud customers of exposed databases – Reuters

SAN FRANCISCO, Aug 26 (Reuters) - Microsoft (MSFT.O) on Thursday warned thousands of its cloud computing customers, including some of the world's largest companies, that intruders could have the ability to read, change or even delete their main databases, according to a copy of the email and a cyber security researcher.

The vulnerability is in Microsoft Azure's flagship Cosmos DB database. A research team at security company Wiz discovered it was able to access keys that control access to databases held by thousands of companies. Wiz Chief Technology Officer Ami Luttwak is a former chief technology officer at Microsoft's Cloud Security Group.

Because Microsoft cannot change those keys by itself, it emailed the customers Thursday telling them to create new ones. Microsoft agreed to pay Wiz $40,000 for finding the flaw and reporting it, according to an email it sent to Wiz.
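Key rotation itself is something customers can script. The sketch below drives the Azure CLI from Python to regenerate both keys one at a time; the account and resource group names are placeholders, and the exact az cosmosdb keys regenerate syntax should be confirmed against current Azure CLI documentation before use.

    import subprocess

    ACCOUNT = "my-cosmos-account"         # hypothetical account name
    RESOURCE_GROUP = "my-resource-group"  # hypothetical resource group

    for kind in ("primary", "secondary"):
        # Regenerating a key invalidates the old key of that kind, so rotate
        # one at a time and update application connection strings in between.
        subprocess.run(
            ["az", "cosmosdb", "keys", "regenerate",
             "--name", ACCOUNT,
             "--resource-group", RESOURCE_GROUP,
             "--key-kind", kind],
            check=True,
        )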

"We fixed this issue immediately to keep our customers safe and protected. We thank the security researchers for working under coordinated vulnerability disclosure," Microsoft told Reuters.

Microsoft's email to customers said there was no evidence the flaw had been exploited. "We have no indication that external entities outside the researcher (Wiz) had access to the primary read-write key," the email said.

"This is the worst cloud vulnerability you can imagine. It is a long-lasting secret," Luttwak told Reuters. "This is the central database of Azure, and we were able to get access to any customer database that we wanted."

Luttwak's team found the problem, dubbed ChaosDB, on Aug. 9 and notified Microsoft Aug. 12, Luttwak said.

[Photo caption: A Microsoft logo is pictured on a store in the Manhattan borough of New York City, New York, U.S., January 25, 2021. REUTERS/Carlo Allegri]

The flaw was in a visualization tool called Jupyter Notebook, which has been available for years but was enabled by default in Cosmos beginning in February. After Reuters reported on the flaw, Wiz detailed the issue in a blog post.

Luttwak said even customers who have not been notified by Microsoft could have had their keys swiped by attackers, giving them access until those keys are changed. Microsoft only told customers whose keys were visible this month, when Wiz was working on the issue.

Microsoft told Reuters that "customers who may have been impacted received a notification from us," without elaborating.

The disclosure comes after months of bad security news for Microsoft. The company was breached by the same suspected Russian government hackers that infiltrated SolarWinds, who stole Microsoft source code. Then a wide number of hackers broke into Exchange email servers while a patch was being developed.

A recent fix for a printer flaw that allowed computer takeovers had to be redone repeatedly. Another Exchange flaw last week prompted an urgent U.S. government warning that customers need to install patches issued months ago because ransomware gangs are now exploiting it.

Problems with Azure are especially troubling, because Microsoft and outside security experts have been pushing companies to abandon most of their own infrastructure and rely on the cloud for more security.

But though cloud attacks are more rare, they can be more devastating when they occur. What's more, some are never publicized.

A federally contracted research lab tracks all known security flaws in software and rates them by severity. But there is no equivalent system for holes in cloud architecture, so many critical vulnerabilities remain undisclosed to users, Luttwak said.

Reporting by Joseph Menn; Editing by William Mallard


Read the rest here:
EXCLUSIVE Microsoft warns thousands of cloud customers of exposed databases - Reuters

Monday: Hardware & consumption boom, Bitcoin theft, cloud & T-Mobile gaps – Market Research Telecast

Processors and graphics cards continue to sell well, but in the context of the energy transition and the call for more sustainability, electric cars, new house insulation and organic shoes are also in demand. But does it all have to be new? Continuing to use your existing belongings instead of replacing them is environmentally friendly too. Here is a brief overview of the most important news.

Although the second quarter is traditionally rather weak, the three big chip manufacturers, Intel, Nvidia and AMD, further increased their sales figures. Despite the chip shortage, sales of CPUs and graphics cards continue to rise. Intel held its position as market leader, but Nvidia increased its market share in graphics cards slightly.

This consumer culture is also evident in other areas. A survey on personal contributions to protecting humanity's future habitat shows what the respondents bought: new electric cars, new house insulation, new organic shoes, new e-bikes, new bamboo straws, new zinc watering cans. What is missing is the downside, the garbage left behind. The Missing Link column is about overconsumption and false promises of green consumption: don't buy an electric car!

Two years ago, British youths allegedly stole bitcoins, not by buying them but with malware. With a civil action, an American is now trying to recover those 16 bitcoins. At the time of the theft, the two alleged perpetrators were still minors and lived with their parents, so after losing the 16 bitcoins, the victim is also suing the parents of the alleged thieves.

Microsoft's cloud service Azure was not infested with malware, but it apparently contained a security hole through which unauthorized persons could gain full access to customers' cloud databases. Microsoft says it has since closed the gap, but affected customers should take action themselves to prevent unauthorized access. After the cloud database disaster, Microsoft has therefore informed its Azure customers about the serious vulnerability.

In contrast to the Azure vulnerability, which has had no known consequences so far, the recent break-in at T-Mobile US saw data on more than 50 million customers stolen. The systems made it easy for the hacker, he explained in a message to the press: cracking the defense mechanisms of Deutsche Telekom's US subsidiary cost him little effort. The hacker exploited a devastating security hole for the data breach at T-Mobile US.

A devastating development is also emerging in the coronavirus pandemic: in the fourth corona wave, the number of Covid-19 patients treated in intensive care units nationwide has risen above 1,000 for the first time. In the DIVI register's daily report on Sunday, 1,008 Covid-19 patients were reported in intensive care, 485 of whom had to be ventilated. The low was 354 on July 22; since then, occupancy has increased again.


(fds)



Read the original:
Monday: Hardware & consumption boom, Bitcoin theft, cloud & T-Mobile gaps - Market Research Telecast

Rethinking Your Tool Chain When Moving Workloads to the Cloud – Virtual-Strategy Magazine

Software-driven IT organizations generally rely on a tool chain made up of commercial and home-grown solutions to develop, deploy, maintain and manage the applications and OSes that their business depends on. Most IT shops have preferred tools for needs like application monitoring, data protection, release management or provisioning and deprovisioning resources. But are those tools always the best options?

While tool chains do evolve over time, it's rare for IT organizations to conduct a full, top-to-bottom review of the tools they are comfortable using with an eye toward optimization or new capabilities. One motivator is when companies are considering moving workloads from the data center to the cloud. The inherent differences in how applications are developed and managed for on-premise vs. cloud environments are a strong reason to reassess whether the current tools in your arsenal are the best alternatives available or, just as important, whether they're well suited to a more cloud-centric software lifecycle.

When it comes to reevaluating your tool chain, it helps to have a process. Here's one approach:

It's important to start with a full audit of your current stack, including areas such as:

Obviously, it's important to assess how well each product meets your current needs as they stand today. (Are there capabilities you wish it had, or weaknesses you've become accustomed to working around?) Then consider how those needs will change as workloads move to the cloud. A good first question to ask is whether the tool is still supported by the vendor. Given how infrequently IT teams switch tools, there's a not insignificant likelihood that one or more of your tools has become an orphan. Second, does the license agreement for the tool accommodate or restrict its use in the cloud? For instance, some tools are licensed to a specific physical server, and some vendors require their hardware to be owned by the same entity that holds the license. Both of these scenarios are problematic for cloud-based deployments. Third, does moving to a cloud-based tool open up new possibilities that you want to take advantage of? Removing the constraints of on-premise solutions and gaining capabilities like nearly unlimited compute and storage, dynamic workloads, and multiple regions around the world can provide much-needed flexibility. But the advantage of moving to a cloud-based tool (replacing, say, an on-premise application log reporting solution with Azure Log Manager) needs to be balanced against the added management the new solution requires, as well as the need to retrain teams.
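To keep that assessment organized, it can help to record the audit as structured data. Here is a minimal Python sketch of the idea; the fields, sample tools, and flagging rule are illustrative assumptions, not anything prescribed above.

    from dataclasses import dataclass

    @dataclass
    class Tool:
        name: str
        vendor_supported: bool       # still maintained by the vendor?
        license_allows_cloud: bool   # license terms permit cloud deployment?
        cloud_alternative: str = ""  # candidate replacement, if known

    inventory = [
        Tool("LegacyLogReporter", False, False, "Azure Log Manager"),
        Tool("NightlyBackupSuite", True, False, "provider snapshot service"),
        Tool("BuildServer", True, True),
    ]

    # Surface orphaned or license-restricted tools before planning the move.
    for tool in inventory:
        if not (tool.vendor_supported and tool.license_allows_cloud):
            note = (" (candidate: %s)" % tool.cloud_alternative
                    if tool.cloud_alternative else "")
            print("Review before migration: %s%s" % (tool.name, note))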

There are also non-technical factors to consider when looking at new tools. Do teams enjoy using the tool? Does it make them more productive (or, conversely, slow them down)? Does it meet the business needs of the organization? How much work will adopting a new tool take, and will it be worth it in the long run? While these may not be the most important considerations, they shouldn't be overlooked.

There is almost always going to be an alternative to any individual tool and, potentially, one tool that can do the work of several, making it possible to consolidate. One way to get a sense of what's available is to ask other teams in your organization what they use in the destination cloud. There are often cloud-based tools (offered by cloud vendors or sold as separate SaaS products) that offer pay-as-you-go licensing, can be easier to scale up or down, can move workloads around, and can expand to other regions. Today, some legacy vendors even offer consumption-based options to better match up against cloud-based competitors, while others stick with more traditional perpetual licenses. Last, consider whether a new tool will give IT teams the opportunity and motivation to expand their skill set. Offering the chance to learn and use new products could actually increase job satisfaction and improve your organization's ability to retain engineering talent.

Before you pull the trigger on a new solution, it often pays to check in with the existing vendor. To keep your business, they may offer more generous or flexible terms. Of course, vendors that see the cloud as a threat are probably going to be less inclined to give you a break on licensing. But even if the conversation doesn't lead to new or better terms, talking to your vendors on a regular basis can provide insight into how they see their customers and the market.

Once you've completed the previous steps, you'll have a good idea of the tools you're likely to keep and those you'd like to upgrade. At that point, it's important to create a plan for adopting each new tool. Start by separating products that need to be replaced soon from those where more research is required; it also helps to compile any other useful information learned during the process so that the larger IT team can access it. You'll want to assess whether teams will need training, whether internal documentation or playbooks need to be updated, and how new tools will plug into existing authorization/authentication solutions. Finally, you will also need a migration plan for each tool that details how and when the organization will move from the old product to the new one, what scripts will need to be rewritten, and what to do with historical data, like log files, from the old product.

While cloud-based tools offer meaningful benefits in terms of flexibility, cost savings and ease of scalability, they may not be the best solution for every organization. The only way to be sure is to do the kind of analysis outlined above. For companies that have already made the decision to move workloads to the cloud, the potential long-term benefits of adopting new solutions are worth the effort.

Skytap

More:
Rethinking Your Tool Chain When Moving Workloads to the Cloud - Virtual-Strategy Magazine