
EXCLUSIVE Amazon considers more proactive approach to determining what belongs on its cloud service – Reuters

Attendees at Amazon.com Inc annual cloud computing conference walk past the Amazon Web Services logo in Las Vegas, Nevada, U.S., November 30, 2017. REUTERS/Salvador Rodriguez/File Photo

Sept 2 (Reuters) - Amazon.com Inc (AMZN.O) plans to take a more proactive approach to determine what types of content violate its cloud service policies, such as rules against promoting violence, and enforce its removal, according to two sources, a move likely to renew debate about how much power tech companies should have to restrict free speech.

Over the coming months, Amazon will expand the Trust & Safety team at the Amazon Web Services (AWS) division and hire a small group of people to develop expertise and work with outside researchers to monitor for future threats, one of the sources familiar with the matter said.

It could turn Amazon, the leading cloud service provider worldwide with 40% market share according to research firm Gartner, into one of the world's most powerful arbiters of content allowed on the internet, experts say.

AWS does not plan to sift through the vast amounts of content that companies host on the cloud, but will aim to get ahead of future threats, such as emerging extremist groups whose content could make it onto the AWS cloud, the source added.

A day after publication of this story, an AWS spokesperson told Reuters that the news agency's reporting "is wrong," and added "AWS Trust & Safety has no plans to change its policies or processes, and the team has always existed."

A Reuters spokesperson said the news agency stands by its reporting.

Amazon made headlines in the Washington Post on Aug. 27 for shutting down a website hosted on AWS that featured propaganda from Islamic State celebrating the suicide bombing that killed an estimated 170 Afghans and 13 U.S. troops in Kabul last Thursday. The company did so after the news organization contacted Amazon, according to the Post.

The discussions of a more proactive approach to content come after Amazon kicked social media app Parler off its cloud service shortly after the Jan. 6 Capitol riot for permitting content promoting violence. read more

Amazon did not immediately comment ahead of the publication of the story on Thursday. After publication, an AWS spokesperson said later that day, "AWS Trust & Safety works to protect AWS customers, partners, and internet users from bad actors attempting to use our services for abusive or illegal purposes. When AWS Trust & Safety is made aware of abusive or illegal behavior on AWS services, they act quickly to investigate and engage with customers to take appropriate actions."

The spokesperson added that "AWS Trust & Safety does not pre-review content hosted by our customers. As AWS continues to expand, we expect this team to continue to grow."

Activists and human rights groups are increasingly holding not just websites and apps accountable for harmful content, but also the underlying tech infrastructure that enables those sites to operate, while political conservatives decry what they consider the curtailing of free speech.

AWS already prohibits its services from being used in a variety of ways, such as illegal or fraudulent activity, to incite or threaten violence or promote child sexual exploitation and abuse, according to its acceptable use policy.

Amazon investigates requests sent to the Trust & Safety team to verify their accuracy before contacting customers, asking them to remove content that violates its policies or to put a system in place to moderate the content. If Amazon cannot reach an acceptable agreement with the customer, it may take down the website.

Amazon aims to develop an approach toward content issues that it and other cloud providers are more frequently confronting, such as determining when misinformation on a company's website reaches a scale that requires AWS action, the source said.

A job posting on Amazon's jobs website advertising the position of "Global Head of Policy at AWS Trust & Safety," which was last seen by Reuters ahead of publication of this story on Thursday, was no longer available on the Amazon site on Friday.

The ad, which is still available on LinkedIn, describes the new role as one that will "identify policy gaps and propose scalable solutions," "develop frameworks to assess risk and guide decision-making," and "develop efficient issue escalation mechanisms."

The LinkedIn ad also says the position will "make clear recommendations to AWS leadership."

The Amazon spokesperson said the job posting was temporarily removed from Amazon's website for editing and should not have been posted in its draft form.

AWS's offerings include cloud storage and virtual servers, and it counts major companies like Netflix (NFLX.O), Coca-Cola (KO.N) and Capital One (COF.N) as clients, according to its website.

PROACTIVE MOVES

Better preparation against certain types of content could help Amazon avoid legal and public relations risk.

"If (Amazon) can get some of this stuff off proactively before it's discovered and becomes a big news story, there's value in avoiding that reputational damage," said Melissa Ryan, founder of CARD Strategies, a consulting firm that helps organizations understand extremism and online toxicity threats.

Cloud services such as AWS and other entities like domain registrars are considered the "backbone of the internet," but have traditionally been politically neutral services, according to a 2019 report from Joan Donovan, a Harvard researcher who studies online extremism and disinformation campaigns.

But cloud services providers have removed content before, such as in the aftermath of the 2017 alt-right rally in Charlottesville, Virginia, helping to slow the organizing ability of alt-right groups, Donovan wrote.

"Most of these companies have understandably not wanted to get into content and not wanting to be the arbiter of thought," Ryan said. "But when you're talking about hate and extremism, you have to take a stance."

Reporting by Sheila Dang in Dallas; Editing by Kenneth Li, Lisa Shumaker, Sandra Maler, William Mallard and Sonya Hepinstall

Our Standards: The Thomson Reuters Trust Principles.


Server and virtualization business trends to watch in 2021 – TechBullion


There are many different trends in data center technology, which can make it difficult to keep up with the latest requirements. There is always something new and exciting popping up. But what are the latest trends? What's changing in 2021?

A lot of people think server virtualization is outdated, but in 2021 it will still be very much around. One of the most important trends to watch for is software-defined infrastructure. It's already popular, but it will continue to grow in popularity over the next few years.

Here are some other trends to watch for when looking at server and virtualization business trends in 2021.

Throughout 2020, an increasingly large number of businesses adopted hybrid cloud technology, and many more plan to do so in 2021 and beyond. Enterprises, in particular, are embracing hybrid cloud technology to gain greater agility and mobility.

The idea of cloud providers delivering multiple services (compute, storage, network, and data services) in the form of a single package appeals to them, and this has helped solidify hybrid cloud technology as a requirement for IT operations.

A move to a fully hybrid cloud infrastructure, one in which customers are not only using public cloud providers such as Amazon Web Services (AWS) but also implementing private clouds and a mixture of public and private clouds, is the logical next step for many organizations.

Low-cost commercial bare metal servers have been steadily rising in popularity in the second half of 2021 but will find their strongest markets in the web hosting and cloud computing sectors.

Because virtualization is likely to be more complicated than traditional servers, dedicated bare metal servers will have a strong advantage over virtualized servers in terms of ease of operation.

The advantages of bare metal cloud servers will also prove useful to some private cloud providers, which may install a single server and load it up with virtual machine workloads on demand for the end client.

Many companies have found themselves needing server virtualization throughout the pandemic, with the rise of remote work and cloud computing. Using a hybrid approach that integrates virtualization, cloud, and more traditional computing solutions has become the norm for most enterprises.

Whether it's a proprietary solution like Hyper-V and VMware, a solution based on open standards like OpenStack, or a different approach like KVM, containers, or Google's Cloud Native Application Engine, this is a space with significant momentum and growth. It's an area where each of the players (HPE, IBM, Cisco, Dell, Oracle, HP, Microsoft, Red Hat, and VMware) has a strong position.

As the number of remote workers continues to climb, so does the risk of viable cyberattacks on corporations. Modern businesses have a wide range of remote workers, and those workers, along with the devices they use, are vulnerable to security issues.

This is a key concern for server providers, and most continue to invest in security measures and products in response to the latest cybersecurity trends to emerge in 2021.

Devices moving closer to the point of application access, processing and delivery will require new kinds of capabilities that were not needed before. Edge computing, in which a local compute node, edge gateway, or other compute element is set up to handle compute-intensive activity close to the data source, is expected to see major gains throughout 2021, and see some major liftoff in 2022.

Markets for edge computing will include verticals such as supply chain and retail, and the edge can enable new business models and revenue streams for application vendors and system integrators.

The devices that are most often seen as edge nodes in the context of edge computing tend to be low-power and low-cost IoT devices such as sensors and electronic logs. Edge computing vendors and service providers will bring services to edge networks, based on their commitment to systems integration, interoperability, standards support, and vendor enablement.


How to Move Fast in the Cloud Without Breaking Security – insideBIGDATA

In this special guest feature, Asher Benbenisty, Director of Product Marketing at Algosec, looks at how organizations can solve the problems of managing and maintaining security in hybrid, multi-cloud environments. Also discussed is the common confusion over cloud ownership, and how organizations can get consistent control and take advantage of agility and scalability without compromising on security. Asher is an experienced product marketing professional with a diverse background in all aspects of the corporate marketing mix, product/project management as well as technical expertise. He is passionate about bringing innovative products that solve real business problems to the market. When not thinking of innovative products, Asher enjoys outdoor running especially by the ocean.

"Move fast and break things" is a familiar motto. Attributed to Facebook CEO Mark Zuckerberg, it helps to explain the company's stellar growth over the past decade, driven by its product innovations. However, while it's a useful philosophy for software development, moving faster than you'd planned is a risky approach in other areas, as organizations globally realized during the COVID-19 pandemic. While 2020 saw digital transformation programs advance by up to seven years, enterprises' quick moves to the cloud also meant that some things got damaged along the way, including security.

A recent survey conducted with the Cloud Security Alliance showed that over half of organizations are now running over 41% of their workloads in public clouds, compared to just one quarter in 2019, and this will increase further by the end of 2021. Enterprises are moving fast to the cloud, but they are also finding that things are getting broken during this process.

11% of organizations reported a cloud security incident in the past year, with the three most common causes being cloud provider issues (26%), security misconfigurations (22%), and attacks such as denial of service exploits (20%). In terms of the business impact of these disruptive cloud outages, 24% said it took up to 3 hours to restore operations, and for 26% it took over half a day.

As a result, it's no surprise that organizations have significant concerns about enforcing and managing security in the cloud. Their leading concerns were maintaining overall network security, a lack of cloud expertise, problems when migrating workloads to the cloud, and insufficient staff to manage their expanded cloud environments. So, what are the root causes of these cloud security concerns and challenges, and how should enterprises address them?

Confusion over cloud control

When asked about which aspects of security worried them most when running applications in public clouds, respondents overwhelmingly cited getting clear visibility of topologies and policies for the entire hybrid network estate, followed by the ability to detect risks and misconfigurations.

A key reason for these concerns is that organizations are using a range of different controls to manage cloud security as part of their application orchestration. 52% use cloud-native tools, and 50% reported using orchestration and configuration management tools such as Ansible, Chef and Puppet. However, nearly a third (29%) said they use manual processes to manage cloud security.

In addition, there's competition for overall control over cloud security: 35% of respondents said their security operations team managed cloud security, followed by the cloud team (18%), and IT operations (16%). Other teams such as network operations, DevOps and application owners all figured too. Having different teams using multiple different controls for security limits overall visibility across the hybrid cloud environment, and also adds significant complexity and management overheads to security processes. Any time you need to make a change, you need to duplicate the work across each of these different controls and teams. This results in security holes and the types of misconfiguration-based incidents and outages we mentioned earlier.

How to move fast and not break things

So how can organizations address these security and management issues, and get consistent control over their cloud and on-prem environments, so they can take full advantage of cloud agility and scalability without compromising on security? Here are the four key steps:

With a network security automation solution handling these steps, organizations can get holistic, single-console security management across all of their public cloud accounts, as well as their private cloud and on-premises deployments. This helps them to solve the cloud complexity challenge and ensures faster, safer and more compliant cloud management, making it possible for organizations to move fast in response to changing business needs without breaking things.
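To make the "single console" idea concrete, here is a minimal Python sketch (not AlgoSec's product; the backend adapters and rule fields are hypothetical) of how one policy definition could be pushed to several different controls from a single place instead of being duplicated by hand across teams and tools:

```python
from dataclasses import dataclass

@dataclass
class FirewallRule:
    """One policy definition, maintained in a single place."""
    name: str
    source: str
    destination: str
    port: int
    action: str  # "allow" or "deny"

class Backend:
    """Hypothetical adapter for one control plane (cloud-native, orchestration tool, on-prem firewall)."""
    def __init__(self, name: str):
        self.name = name
        self.rules: list[FirewallRule] = []

    def apply(self, rule: FirewallRule) -> None:
        # A real adapter would call the provider's or vendor's API here.
        self.rules.append(rule)
        print(f"[{self.name}] applied rule '{rule.name}': "
              f"{rule.action} {rule.source} -> {rule.destination}:{rule.port}")

def push_everywhere(rule: FirewallRule, backends: list[Backend]) -> None:
    """Apply the same rule to every environment so nothing drifts out of sync."""
    for backend in backends:
        backend.apply(rule)

if __name__ == "__main__":
    environments = [Backend("aws-account-prod"), Backend("azure-subscription"), Backend("on-prem-dc")]
    rule = FirewallRule("block-telnet", "0.0.0.0/0", "10.0.0.0/8", 23, "deny")
    push_everywhere(rule, environments)
```

The design point is simply that the rule exists once as data; each environment-specific adapter translates it, which is the opposite of maintaining the same change by hand in several consoles.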



Automated ‘cloud lab’ will handle all aspects of daily lab work – E&T Magazine

Carnegie Mellon University (CMU) is working with Emerald Cloud Lab (ECL) to build a world-first cloud laboratory, which they hope will provide researchers with facilities for routine life sciences and chemistry research.

According to the partners, the remote-controlled Carnegie Mellon University (CMU) Cloud Lab will provide a universal platform for AI-driven experimentation, and revolutionise how academic laboratory research and education are done.

Emerald's 'cloud lab', which will be used as the basis for the new lab, allows scientists to conduct wet laboratory research without being in a physical laboratory. Instead, they can send their samples to a facility, design their experiments using ECL's command-based software (with the assistance of AI-based design tools), and then execute the experiment remotely. A combination of robotic instrumentation and technicians perform the experiments as specified and the data is sent to cloud servers for access.
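For readers unfamiliar with the model, here is a small, purely hypothetical Python sketch of what "command-based" remote experimentation looks like in general: an experiment is described as data, submitted to the remote facility, and results come back from cloud storage. None of the names below are ECL's actual API, which uses its own command language; this is only an illustration of the workflow described above.

```python
from dataclasses import dataclass, field

@dataclass
class Experiment:
    """A declarative description of an experiment, sent to the remote lab as data."""
    technique: str
    samples: list[str]
    parameters: dict = field(default_factory=dict)

class RemoteLab:
    """Hypothetical client for a cloud lab: submit experiments, then fetch results."""
    def __init__(self):
        self._queue: list[Experiment] = []

    def submit(self, experiment: Experiment) -> int:
        # In a real cloud lab, this would be dispatched to robotic instruments and technicians.
        self._queue.append(experiment)
        return len(self._queue)  # job id

    def results(self, job_id: int) -> dict:
        # Data would normally be retrieved from the provider's cloud storage.
        exp = self._queue[job_id - 1]
        return {"technique": exp.technique, "samples": exp.samples, "status": "complete"}

lab = RemoteLab()
job = lab.submit(Experiment("HPLC", samples=["sample-A", "sample-B"], parameters={"gradient": "standard"}))
print(lab.results(job))
```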

CMU researchers have used ECL facilities for research and teaching for several years. According to the university, cloud lab classes gave students valuable laboratory experience during the Covid-19 pandemic, even with all courses being taught remotely.

"CMU is a world leader in [AI], machine learning, data science, and the foundational sciences. There is no better place to be home to the world's first university cloud lab," said Professor Rebecca Doerge. "Bringing this technology, which I'm proud to say was created by CMU's alumni, to our researchers and students is part of our commitment to creating science for the future."

The CMU Cloud Lab will democratise science for researchers and students. Researchers will no longer be limited by the cost, location, or availability of equipment. By removing these barriers to discovery, the opportunities are limitless.

The new cloud lab will be the first such laboratory built in an academic setting. It will be built in a university-owned building on Penn Avenue, Pittsburgh. Construction on the $40m project is expected to begin in autumn for completion in summer 2022.

The facility will house more than 100 types of scientific instruments for life sciences and chemistry experiments and will be capable of running more than 100 complex experiments simultaneously, 24 hours a day and 365 days a year. This will allow users to individually manage many experiments in parallel from anywhere in the world. The university and company will collaborate on the facility's design, construction, installation, management, and operations. Already, staff and students are being trained to use the cloud lab.

While the CMU Cloud Lab will initially be available to CMU researchers and students, the university hopes to make time available to others in the research community, including high school students, researchers from smaller universities that may not have advanced research facilities, and local life sciences start-ups.

"We are truly honoured that Carnegie Mellon is giving us the chance to demonstrate the impact that access to a cloud lab can make for its faculty, students and staff," said Brian Frezza, a CMU graduate and co-CEO of ECL. "We couldn't think of a better way to give back to the university than by giving them a platform that redefines how a world-class institution conducts life sciences research."



The myths behind Linux security. – The CyberWire

Executive Summary.

"Attackers do not target Linux environments because Windows is the most used operating system globally" is a belief many in the technology community hold. Because of this one false belief, attackers are creating havoc in companies' Linux-based environments by creating and transitioning Windows malware to Linux.

There is a notion in our community that added Linux operating system security features, such as Security-Enhanced Linux (SELinux), along with cloud provider offerings such as cloud-based firewall rules and access management, offer security by default, and that companies therefore do not need to focus on hardening the cloud server itself.

Myths such as these can lead companies to suffer devastating losses. When securing Linux servers, whether physical or in the cloud, the basics still remain the same. Just because a server is running Linux does not mean you can be lenient on security practices on the server itself. A company's security posture frequently relies on cloud providers' security controls, and while they do provide help, if the company does not know what code is running on its servers, the effectiveness of the providers' security controls is negated.

Software development has changed drastically over the past several years to meet the need for faster time to market. To accommodate these requirements, developers are increasing the frequency of their code deployments. Capital One reports they are currently deploying up to 50 times per day for a single product, with Amazon, Google, and Netflix deploying thousands of times per day. With the frequency of these code changes, it is becoming increasingly difficult for security teams to adapt their monitoring and hardening practices.

New code deployments can alter a server's expected behavior. Suppose companies are focusing their monitoring on behavior-based detection. In that case, new code deployments can lead to false positives, which create an additional workload for teams. Security teams often report that it's challenging to address these situations as they do not have enough visibility into what code is running on their servers. Thus, they must spend a significant amount of time investigating them. If attackers can craft their code to fit with the expected behavior, no alerts are triggered, and a compromise could occur without any detection. However, companies are often worried about deploying new security solutions as they may degrade performance by using vital resources or slowing down the development process.
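As a simple illustration of what behavior-based detection means in practice (a generic sketch, not any vendor's product; the baseline and process names are made up), a monitor might keep a baseline of processes a server is expected to run and alert on anything outside it. Every legitimate deployment that changes that baseline then has to be investigated or whitelisted, which is where the false-positive workload comes from:

```python
# Minimal sketch of behavior-based detection: alert on processes outside a known baseline.
# The baseline and observed process list are illustrative; a real agent would collect them from the host.
EXPECTED_PROCESSES = {"nginx", "gunicorn", "postgres", "sshd"}

def check_running_processes(running: set[str]) -> list[str]:
    """Return process names that deviate from the expected baseline."""
    return sorted(running - EXPECTED_PROCESSES)

# A new code deployment adds a worker the baseline doesn't know about...
observed = {"nginx", "gunicorn", "postgres", "sshd", "celery-worker"}
for name in check_running_processes(observed):
    # Without visibility into what the deployment actually shipped, every one of
    # these alerts has to be triaged by hand (or the baseline quietly updated).
    print(f"ALERT: unexpected process '{name}' running on host")
```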

Attackers focus on Windows as it's the world's most used operating system.

The number of threats that Linux servers face is downplayed due to another common myth about the popularity of Linux's usage around the world. The belief is that Windows is the primary operating system in use, and while this might be true for desktop computers, when it comes to Linux cloud or physical servers, the numbers say it all.

Currently, the world's 500 fastest supercomputers are running on Linux. These systems are used for everything from advancing artificial intelligence to helping save lives by potentially aiding in COVID-19 gene analysis and vaccine research.

96.3% of the world's servers are running Linux.

83.1% of developers say they prefer to work on the Linux platform rather than any other operating system.

In the past decade, researchers have discovered many advanced persistent threat campaigns targeting Linux systems using adapted Windows malware, as well as unique Linux malware tools tailored for espionage operations. Once the code was modified to work in Linux environments, there was no barrier to shifting these attacks to new targets. One example is IPStorm; researchers first saw this malware in 2019 targeting Windows systems. IPStorm has now evolved to target other platforms such as Android, Linux, and Mac devices, leading to more than 13,500 compromised systems. Detecting whether systems are compromised is not always difficult; however, many businesses are unaware they should examine their devices until they see this attack's impact. The increase in compromised systems has led some to call IPStorm one of the most dangerous malware families in existence.

Perhaps one of the leading factors for attackers deciding to morph their attack strategies is the growth of cloud technology and the increasing number of cloud providers making the transition to Linux-based environments easier than ever.

Even governments have embraced Linux's usage in their environments. For example, in 2001, the White House transitioned to the Red Hat Linux-based distribution. The US Department of Defense migrated to Linux in 2007, and the US Navy's warships have been using Red Hat since 2013. The US is not alone in this transition. The Austrian capital of Vienna and the government of Venezuela have also adopted the use of Linux.

Open-source software is inherently secure, due to the visibility of the code and contributions from the community.

Attackers contribute to open-source projects as well. For example, we have seen NPM packages that contained code providing access to environment variables, allowing for the collection of information about the host device.
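To illustrate how little code such a leak requires, and why it is easy to miss in review, here is a small Python sketch that scans a dependency's source files for environment-variable access. It is a crude heuristic and not a substitute for a real audit tool; the directory path and regex patterns are assumptions for the example, not references to any specific package.

```python
import re
from pathlib import Path

# Patterns that commonly indicate environment-variable access in Python and JavaScript code.
# Illustrative only; a real audit would use many more signals than a regex scan.
ENV_ACCESS_PATTERNS = [
    re.compile(r"os\.environ"),   # Python
    re.compile(r"process\.env"),  # Node.js / NPM packages
]

def scan_for_env_access(package_dir: str) -> list[tuple[str, int, str]]:
    """Return (file, line number, line text) for every line that touches env vars."""
    findings = []
    for path in Path(package_dir).rglob("*"):
        if not path.is_file() or path.suffix not in {".py", ".js", ".ts"}:
            continue
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), start=1):
            if any(p.search(line) for p in ENV_ACCESS_PATTERNS):
                findings.append((str(path), lineno, line.strip()))
    return findings

if __name__ == "__main__":
    # Hypothetical path to a vendored dependency you want to eyeball before trusting it.
    for file, lineno, text in scan_for_env_access("./node_modules/some-dependency"):
        print(f"{file}:{lineno}: {text}")
```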

Not everyone has the skills to understand the code, so despite seeing the installed code, the compromise can go undetected. When an issue is reported, an experienced developer reviews the code, then writes a patch. Once this work is done, we wait for the new code to be approved. Don't forget that during this time, unknowing parties are still using these packages.

Once a fix is available, many companies still do not upgrade to the new code. The State of Software Security (SOS) report analyzed 85,000 applications and found that over 75% shared similar code. 47% of these had flawed libraries used by multiple applications; 74% of these libraries could have been fixed with a simple upgrade.

Our Linux environments are just as vulnerable as any other environment. All of these myths around Linux security fall short because they do not take into account what history has taught us, that breaches do happen.

In order to have a healthy security posture, companies need to grow beyond the idea that all breaches can be prevented and address the need for visibility of the code running within their workloads.

Webinar: Is Linux Secure By Default?

Blog: 2020 Set a Record for New Linux Malware Families


Is the Cloud More Secure Than On Prem? – TechDecisions

Both the cloud and on-premises systems have their advantages and disadvantages, but recent attacks against on-premise systems coupled with the proliferation and advancement of cloud-based IT architecture are tilting the scales in favor of the cloud.

A company that owns its own on-premises servers has more control over security, but is responsible for all of the upgrades, maintenance and other upkeep, not to mention the large up-front costs associated with the hardware.

In the cloud, most of that upgrading and maintenance is done by the provider, and organizations can pay for those services on a fixed, monthly basis.

Although on-premises systems have historically been viewed as more secure, recent attacks say otherwise, says Aviad Hasnis, CTO of autonomous breach protection company Cynet.

"It's a trend that has really stressed out the fact that companies, especially in the mid market, that utilize these kinds of on-premises infrastructure don't usually have the capabilities or the manpower to make sure they are all up to date in terms of security updates," he said.

That's why we've seen so many successful attacks against on-premises systems of late, including the ProxyLogon and ProxyShell exploits of Microsoft Exchange Server vulnerabilities and the massive Kaseya ransomware attack, Hasnis says.

One of the main reasons there are more attacks against on-premises systems is the fact that most cloud vulnerabilities aren't assigned a CVE number, which makes it hard for hackers to discover a flaw and successfully exploit it.

Case in point was the recently disclosed Azure Cosmos DB vulnerability. Microsoft mitigated the vulnerability shortly after it was discovered, and no customer data appears to be impacted.

Meanwhile, known vulnerabilities in on-premises systems are exploited until the IT department can patch their systems. For example, the ProxyLogon and ProxyShell vulnerabilities in Microsoft exchange were assigned a CVE and patched shortly after they were disclosed, but organizations that were slow to patch or implement workarounds remained vulnerable as attackers seized on the newly discovered flaws.

In the case of the Kaseya attack, the damage was limited to only on-premises customers of Kaseya using the VSA product, but once the breach was disclosed, the company had to manually reach out to customers and urge them to take their servers down.

Attacking Kaseya's SaaS customers likely would have raised additional red flags that could have stopped the attack in its tracks, Hasnis says.

There are many different defenses for detecting this kind of threat behavior, Hasnis says.

In general, the cloud can be a much safer place to be if your organization practices SaaS Security Posture Management (SSPM), which, according to Gartner, is the constant assessment of the security risk of your SaaS applications, including reporting the configuration of native SaaS security settings and tweaking that configuration to reduce risk.

For example, someone using Microsoft 365 without two-factor authentication should trigger a warning, Hasnis says.
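As a rough illustration of what such an SSPM check might look like, here is a Python sketch of a posture rule that walks a user inventory and flags accounts without MFA. The data model is hypothetical; this is not the Microsoft 365 admin API or Cynet's product, just the shape of the rule Hasnis describes.

```python
from dataclasses import dataclass

@dataclass
class SaaSAccount:
    """Hypothetical record pulled from a SaaS tenant's user inventory."""
    username: str
    is_admin: bool
    mfa_enabled: bool

def mfa_posture_check(accounts: list[SaaSAccount]) -> list[str]:
    """Return warnings for accounts that violate the 'MFA required' rule."""
    warnings = []
    for account in accounts:
        if not account.mfa_enabled:
            severity = "HIGH" if account.is_admin else "MEDIUM"
            warnings.append(f"[{severity}] {account.username} has no two-factor authentication")
    return warnings

# Illustrative data; a real SSPM tool would pull this from the provider's admin API.
tenant = [
    SaaSAccount("alice@example.com", is_admin=True, mfa_enabled=True),
    SaaSAccount("bob@example.com", is_admin=False, mfa_enabled=False),
    SaaSAccount("it-admin@example.com", is_admin=True, mfa_enabled=False),
]
for warning in mfa_posture_check(tenant):
    print(warning)
```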

"The fact that someone uses cloud or SaaS infrastructure doesn't necessarily mean it's safe, but they have to make sure their organization aligns with the best security protocols," Hasnis says.

Especially for smaller organizations that don't have the in-house staff and expertise to update and patch on-premises systems after an attack, migrating to the cloud can help cut down on that response time and keep the company safe by enlisting the help of the provider and other internal IT experts.

"If your organization is spread around the globe in more than one location and you're working on-prem, you don't necessarily have access to all of the different infrastructure within the environment," Hasnis says.


Meet the Self-Hosters, Taking Back the Internet One Server at a Time – VICE

It's no secret that a small handful of enormous companies dominate the internet as we know it. But the internet didn't always have services with a billion users and quasi-monopolistic control over search or shopping. It was once a loose collection of individuals, research labs, and small companies, each making their own home on the burgeoning world wide web.

That world hasn't entirely died out, however. Through a growing movement of dedicated hobbyists known as self-hosters, the dream of a decentralized internet lives on at a time when surveillance, censorship, and increasing scrutiny of Big Tech have created widespread mistrust in large internet platforms.

Self-hosting is a practice that pretty much describes itself: running your own internet services, typically on hardware you own and have at home. This contrasts with relying on products from large tech companies, which the user has no direct involvement in. A self-hoster controls it all, from the hardware used to the configuration of the software.

"My first real-world reason for learning WordPress and self-hosting was the startup of a podcast," KmisterK, a moderator of Reddit's r/selfhosted community, told Motherboard. "I quickly learned the limitations of fake unlimited accounts that were being advertised on most shared hosting plans. That research led to more realistic expectations for hosting content that I had more control over, and it just bloomed from there."

Edward, co-creator of an extensive list of self-hosted software, similarly became interested in self-hosting as a way to escape less-than-ideal circumstances. "I was initially drawn to self-hosting by a slow internet connection and a desire to share media and information with those I lived with," he told Motherboard. "I enjoyed the independence self-hosting provided and the fact that you owned and had control over your own data."

Once you're wrapped up in it, it's hard to deny the allure of the DIY self-hosted internet. My own self-hosting experiences include having a home server for recording TV and storing media for myself and my roommates, and more recently, leaving Dropbox for a self-hosted, free and open source alternative called Syncthing. While I've been happy with Dropbox for many years, I was paying for more than I needed and ran into issues with syncing speed. With a new Raspberry Pi as a central server, I had more control over what synced to different devices, no worries about any storage caps, and of course, faster transfer speeds. All of this is running on my home network: nothing has to be stored on cloud servers run by someone else in who-knows-where.

My experience with Syncthing quickly sent me down the self-hosting rabbit hole. I looked at what else I could host myself, and found simply everything: photo collections (like Google Photos); recipe managers; chat services that you can connect with the popular tools like Discord; read-it-later services for bookmarking; RSS readers; budgeting tools; and so much more. There's also the whole world of alternative social media services, like Mastodon and PixelFed, to replace Twitter, Facebook, and Instagram, which can be self-hosted as a private network or used to join others around the world.

Self-hosting is something I've found fun to learn about and tinker with, even if it is just for myself. Others, like KmisterK, find new opportunities as well. "Eventually, a career path started with it, and from there, being in the community professionally kept me personally interested as a hobby." Edward also found a connection with his career in IT infrastructure, but still continues self-hosting. "It is nice to be able to play around in a low risk/impact environment," he said.

But beyond enjoyment, self-hosters share important principles that drive the desire to self-host: namely, a distrust of large tech companies, which are known to scoop up all the data they can get their hands on and use it in the name of profit.

Despite new privacy laws like Europe's General Data Protection Regulation (GDPR) and the California Consumer Protection Act (CCPA), the vast majority of Americans still don't trust Big Tech with their privacy. And in recent years, the countless privacy scandals like Cambridge Analytica have driven some tech-savvy folks to take matters into their own hands.

"I think that people are becoming more privacy conscious and while neither these laws, nor self-hosting can currently easily resolve these concerns, I think that they can at least alleviate them," said Edward.

Some self-hosters see the rising interest in decentralized internet tools as a direct result of Silicon Valley excess. "The growth of self-hosting does not surprise me," nodiscc, a co-creator and maintainer of the self-hosted tech list, told Motherboard. "People and companies have started realizing the importance of keeping some control over their data and tools, and I think the days of 'everything SaaS [Software as a Service]' are past."

Another strong motivator comes from large companies simply abandoning popular tools, along with their users. After all, even if you're a paying customer, tech companies offer access to services at their whim. Google, for example, is now infamous for shutting down even seemingly popular products like Reader, leaving users with no say in the matter.

KmisterK succinctly summarized the main reasons people have for self-hosting: curiosity and wanting to learn; privacy concerns; looking for cheaper alternatives; and "the betrayed," people who come from platforms like Dropbox or Google Photos or Photobucket or similar, after major outages, major policy changes, sunsetting of services, or other dramatic changes to the platform that they disagree with. "This last one is probably the majority gateway to self-hosting, based on recent traffic to r/selfhosted," he says. Look no further than their recent Google Photos megathread and recent guides from self-hosters on the internet. For me, changes in LastPass, even as a paid user, had me looking elsewhere.

nodiscc also noted the different reasons people self-host, saying, "There would be many... technical interest, security/privacy, customization, control over the software, self-reliance, challenge, economical reasons, political/Free software activism." Looking at the growth of self-hosting over the years, Edward says, "These aren't comprehensive reasons but I expect that privacy-consciousness, hardware availability and more mainstream open-source software have contributed to the growth of self-hosting."

These are all good reasons why self-hosting is so essential. Self-hosting brings freedom and empowerment to users. You own what you use: you can change it, keep it the same, and have your data in your own hands. Much of this derives from the free (as in freedom to do what you like) nature of self-hosting software. The source code is freely available to use, modify, and share. Even if the original author or group stops supporting something, the code is out there for anyone to pick up and keep alive.

Despite the individualistic nature of self-hosting, there is a vibrant and growing community.

Much of this growth can be seen on Reddit, with r/selfhosted hitting over 136,000 members and continuing to rise, up from 84,000 just a year ago. The discussions involve self-hosting software that spans dozens of categories, from home automation, genealogy, and media streaming to document collaboration and e-commerce. The list maintained by nodiscc and the community has grown so long that its stewards say it needs more curation and better navigation.

The quality of free and easy-to-use self-hosting software has increased too, making the practice increasingly accessible to the less technically savvy. Add to that the rise of cheap, credit card-sized single-board computers like the Raspberry Pi, which lower the starting costs of creating a home server to as little as $5 or $10. "Between high-available hosting environments, to one-click/one-command deploy options for hundreds of different softwares, the barrier for entry has dramatically been lowered over the years," said KmisterK.

Of course, even the most dedicated self-hosters admit that it isn't for everyone. Having some computing knowledge is fairly essential when it comes to running your own internet services, and "self-hosting will never truly compete with big-name services that make it exponentially easier," KmisterK said.

But while self-hosters may never number enough to put a serious dent in Big Tech's offerings, there is a clear need and benefit to this alternative space. And I can't think of a better model for the kind of DIY community we can have, when left to our own devices.


Google is designing its own Arm-based processors for 2023 Chromebooks report – The Register

Google is reportedly designing its own Arm-based system-on-chips for Chromebook laptops and tablets to be launched in 2023.

The internet search giant appears to be following the same path as Apple by developing its own line of processors for client devices, according to Nikkei Asia.

Google earlier said its latest Pixel 6 and Pixel 6 Pro Android smartphones will be powered by a homegrown system-on-chip named Tensor. This component will be made up of CPUs and GPU cores licensed from other designers as well as Google's own AI acceleration engine to boost machine-learning-based features, such as image processing and speech recognition.

The Chocolate Factory also launched its homemade Tensor Processing Units (TPUs) in 2016, aimed at training and running machine learning workloads on its cloud servers. Google's CEO Sundar Pichai announced the fourth generation of TPUs in May at the web titan's annual IO conference. Google also has a collection of its own Titan chips.

The rumored processors for its laptops and fondleslabs use Arm CPU cores, meaning Google will pay licensing fees to use the British chip designer's blueprints. The chips will be manufactured elsewhere by fabrication plants, probably TSMC or Samsung. Technical specifications are hush-hush right now; The Register has asked Google for comment.

It's beneficial for tech companies to develop their own chips as they, for one thing, roll out AI algorithms in their products. Custom accelerators can be optimized to run their makers' software stacks more efficiently, enabling more real-time intelligent decision-making by devices, whether that's in facial recognition or machine-learning-powered smartphone apps.

Apple's iPhone 12 handsets, for example, contain the iGiant's 5nm 64-bit Arm-compatible A14 Bionic SoC that's capable of accelerating computer-vision code and the processing of data from its sensors. Amazon also has custom processors available for its cloud customers on AWS, such as Inferentia and Graviton.

There are reportedly other in-house chip projects underway at Facebook for its Oculus VR headsets and at Microsoft for its servers and laptops.


US government warns of flaw in Confluence – ICT News – The Press Stories

This is a significant bug in Atlassian Confluence, which is being actively exploited worldwide, according to US Cyber Command.

The flaw, CVE-2021-26084, affects Atlassian's Confluence wiki software. The vulnerability is so serious that US Cyber Command, the Department of Defense's cyber warfare command, has issued a warning. "Mass exploitation of Atlassian Confluence CVE-2021-26084 is ongoing and expected to accelerate," read a statement released Friday. The agency therefore advises companies to close this breach as soon as possible, even though the United States is enjoying a long holiday weekend. That makes it a prime moment for cyberattacks, because it often takes a long time before a company employee notices something unusual.

Confluence is popular wiki software. Criminal gangs aiming to infiltrate corporate networks are currently scanning for the vulnerability. Atlassian announced on August 25 that a critical bug had been detected in various versions of Confluence Server and Data Center, allowing a user to run unauthorized code in the software. A patched version has been published. The flaw appears to affect only on-premises servers, not the cloud-hosted versions of Confluence.




Emergen Research: Akamai Leads in DNS Market Among Fortune 500 Companies | Increasing Preference for Cloud DNS and Rising Need to Prevent DDoS Attacks…

VANCOUVER, BC, Aug. 31, 2021 /PRNewswire/ -- Demand for cloud-based DNS has been increasing substantially in the recent past and is driving up the revenue share of major players in the global Domain Name System (DNS) provider market. A few global players have risen substantially above the rest, and the major player currently, Akamai, accounts for a substantially larger revenue share and has close to double the number of websites as its closest competitor. The providers in the latter part of the list are far from the leader in the market, and it does not seem that the leader will be dethroned anytime soon.

A major change in the DNS industry currently is a transition to cloud-based systems, and this trend is gaining robust traction in the market. As the cloud-computing ecosystem expands exponentially across the Internet, clients will get more options for controlling DNS performance and improving customer experience on a global scale. A majority of businesses are shifting to cloud-based DNS or adopting a hybrid DNS-cloud setup by splitting their traffic between standard DNS and cloud infrastructure.

Another factor supporting a steady increase in revenue share for the various players in the market is increasing adoption of Domain Name System Security Extensions (DNSSEC). DNSSEC is a new trend in the DNS industry. DNSSEC protects DNS from cyberattacks by applying digital signatures to DNS data, verifying sources of data, and ensuring accurate data flow over the Internet.
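As a rough illustration of what DNSSEC adds at the protocol level, the following Python sketch (using the third-party dnspython library, assuming version 2.0 or later; the domain is just an example) fetches a zone's DNSKEY records, the public keys whose signatures resolvers verify before trusting DNS answers. Full validation also requires checking RRSIG records against a chain of trust anchored in the parent zone's DS records, which is omitted here.

```python
# Sketch: inspect a zone's DNSSEC keys with dnspython (pip install dnspython).
# Fetching DNSKEY records shows whether a zone publishes keys at all; verifying
# the signatures (RRSIG) against the parent zone is a separate step not shown.
import dns.resolver

def list_dnssec_keys(zone: str) -> None:
    try:
        answer = dns.resolver.resolve(zone, "DNSKEY")
    except dns.resolver.NoAnswer:
        print(f"{zone}: no DNSKEY records published (zone is not signed)")
        return
    for key in answer:
        # Flag value 257 marks a key-signing key (KSK), 256 a zone-signing key (ZSK).
        role = "KSK" if key.flags == 257 else "ZSK"
        print(f"{zone}: {role}, algorithm {key.algorithm}")

if __name__ == "__main__":
    list_dnssec_keys("example.com")  # example.com is a signed zone, so keys should appear
```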

Click Here to Access Free Extract PDF Copy of the Report @ https://www.emergenresearch.com/request-extract

However, increasing incidents of DNS server outages is a key factor expected to result in reduced revenue share for DNS providers. Websites are rapidly being hosted by businesses in order to engage customers, clients, and partner organizations, and DNS plays an important role in enterprise profits. DNS services are chosen by a company to ensure better website performance and better user experience. It is important for DNS service providers to operate their DNS servers efficiently to achieve these objectives. No response from web pages owing to DNS outages can have a substantial impact on an enterprise's earnings. Companies cannot afford DNS outages, especially during seasonal sales. DNS outages also have a major negative influence on user experience and application accessibility for users.

DNS Market Share of Fortune 500 - August 2021

As of 2021, Akamai [AKAM:NASDAQ] has 82 websites and accounted for a majority revenue share of 16.4% in the global DNS market, CSC DNS has 44 websites and 8.8% revenue share, Neustar UltraDNS [NYSE:NSR] has 42 websites and 8.4% revenue share, and Amazon Route 53 [AMZN:NASDAQ] has 27 websites and accounts for revenue share of 5.4%. Cloudflare [NET:NASDAQ] has 25 websites and revenue share of 5.0%, GoDaddy [GDDY:NASDAQ] has 19 websites and 3.8% revenue share, Azure DNS [AZRE:NASDAQ] has 15 websites and 3.0% revenue share, and DNS Made Easy has 11 websites and accounts for revenue share of 2.2%.

With the introduction of Edge DNS, Akamai Technologies has managed to gain a substantially larger market share than other players in the global market. Edge DNS is an authoritative DNS service that sends DNS resolution from business premises or data centers to the Akamai Intelligent Edge. CSC DNS and Neustar UltraDNS follow Akamai closest in terms of market revenue share. In addition, Neustar UltraDNS offers cost-effective cloud-based recursive DNS service with sophisticated threat intelligence that enables speedy and secure online application access, which places it in a lucrative position in the market.

Have a look at our Quote for detailed information on the different packages on offer [Fortune 500 Companies, Fortune 5000 Companies, Fortune 1000 Companies] @ https://www.emergenresearch.com/request-quote

Some Key Findings:

Have a look at Complete Insight on Domain Name System Market Among Fortune 500 Companies @ https://www.emergenresearch.com/livedata/dns-market-share-2021-fortune-500-companies

About Emergen Research

Emergen Research is a market research and consulting company that provides syndicated research reports, customized research reports, and consulting services. Our solutions purely focus on your purpose to locate, target, and analyze consumer behavior shifts across demographics, across industries, and help clients make smarter business decisions. We offer market intelligence studies ensuring relevant and fact-based research across multiple industries, including Healthcare, Touch Points, Chemicals, Types, and Energy. We consistently update our research offerings to ensure our clients are aware of the latest trends existent in the market. Emergen Research has a strong base of experienced analysts from varied areas of expertise. Our industry experience and ability to develop a concrete solution to any research problems provides our clients with the ability to secure an edge over their respective competitors.

Contact Us:

Eric Lee, Corporate Sales Specialist, Emergen Research | Web: http://www.emergenresearch.com | Direct Line: +1 (604) 757-9756 | E-mail: [emailprotected] | Explore Our Custom Intelligence services | Growth Consulting Services | Facebook | LinkedIn | Twitter | Blogs | Related Report @ Managed DNS Service Market


SOURCE Emergen Research
