
Congratulations to the 2021 SWE Scholarship Recipients! – All Together – Society of Women Engineers

Congratulations to the 2021 SWE Scholarship Recipients! SWE is proud to award nearly 289 scholarships, totaling more than $1,200,000, to freshman, sophomore, junior, senior and graduate students for the 2021-2022 academic year. The recipients of SWE scholarships are a group of extremely accomplished and driven students who excel both inside and outside of the classroom.

The SWE scholarship program will assist these young individuals in accomplishing their dreams of being engineers who contribute to society. The names of all recipients are posted below and will also be published in the conference issue of SWE Magazine in October.

Alma Kuppinger Forman, P.E. Scholarship

SWE received thousands of applications this year and is very grateful for the SWE members who generously volunteered their time to help judge and award the scholarships!

We would also like to thank SWE's 2021 scholarship team:

2021 Judges:

SWE Blog

SWE Blog provides up-to-date information and news about the Society and how our members are making a difference every day. You'll find stories about SWE members, engineering, technology, and other STEM-related topics.

More here:

Congratulations to the 2021 SWE Scholarship Recipients! - All Together - Society of Women Engineers


Reviewing the Eight Fallacies of Distributed Computing – InfoQ.com

In a recent article on the Ably Blog, Alex Diaconu reviewed the thirty-year-old "eight fallacies of distributed computing" and provided a number of hints at how to handle them. InfoQ has taken the chance to talk with Diaconu to learn more about how Ably engineers deal with the fallacies.

The eight fallacies are a set of conjectures about distributed computing which can lead to failures in software development. The assumptions are: the network is reliable; latency is zero; bandwidth is infinite; the network is secure; topology doesn't change; there is one administrator; transport cost is zero; the network is homogeneous.

The fallacies can be seen as architectural requirements you have to account for when designing distributed systems.

InfoQ: Almost thirty years since the fallacies of distributed computing were initially suggested, they are still highly relevant. What's their role at Ably?

Diaconu: All of the fallacies are pointers to distributed system design pitfalls, and they are all still relevant today. They don't all have the same impact; some are more easily accommodated than others. The fallacies that have the most pervasive effect on how we structure our systems at Ably are:

InfoQ: Do you think the evolution of distributed systems in the last thirty years has revealed any additional fallacies that should be taken into account?

Diaconu: I believe the most significant transformation over the last 30 years is the maturity of our understanding of how to deal with them. That's not to say that the answers are any easier, but they are better understood. We know what approaches are good, what approaches are bad, and the limits of any given approach. There is now well-established scientific theory and engineering practice around these problem spaces. Computer science students are taught the problems and what the state of the art is.

Of course, it's important to acknowledge that the fallacies are manifestations of enduring technical challenges; they shouldn't be thought of as easily avoided pitfalls. I suppose you could say that there is, in fact, a new fallacy: "avoiding the fallacies of distributed computing is easy."

InfoQ: Some of the fallacies have meanwhile become commonplace; for example, the idea that the cloud is not secure is widely accepted. Still, there may be some subtlety to them that makes the process of dealing with them not so trivial.

Diaconu: As previously mentioned, the challenges of distributed systems, and the broad science around the techniques and mechanisms used to build them, are now well researched. The thing you learn when addressing these challenges in the real world, however, is that academic understanding only gets you so far.

Building distributed systems involves engineering pragmatism and trade-offs, and the best solutions are the ones you discover by experience and experiment.

As an example, the "network is reliable" fallacy is the most basic thing you have to address. The known solutions involve protocols with retries; or consensus formation protocols; or redundancy for fault tolerance, depending on the particular failure mode of concern.

However, the engineering reality is that multiple kinds of failures can, and will, occur at the same time. The ideal solution now depends on the statistical distribution of failures; or on analysis of error budgets, and the specific service impact of certain errors.
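To make the retry approach concrete, here is a minimal sketch of retries with exponential backoff and jitter in Python. It is purely illustrative: the callable being retried, the exception types, and the retry limits are assumptions for the example, not anything Ably has described.

```python
import random
import time

def call_with_retries(operation, max_attempts=5, base_delay=0.1, max_delay=5.0):
    """Retry a flaky network call with exponential backoff and full jitter.

    `operation` is any zero-argument callable that raises on failure;
    the attempt count and delay values are illustrative, not tuned.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except (ConnectionError, TimeoutError):
            if attempt == max_attempts:
                raise  # out of attempts: surface the failure to the caller
            delay = min(max_delay, base_delay * 2 ** (attempt - 1))
            time.sleep(random.uniform(0, delay))  # jitter spreads out retry storms

# Hypothetical usage: wrap any network operation that can raise
# ConnectionError or TimeoutError.
# result = call_with_retries(lambda: fetch_remote_state("node-42"))
```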

The recovery mechanisms can themselves fail due to system unreliability, and the probability of those failures might impact the solution. And of course, you have the dangers of complexity: solutions that are theoretically sound, but complex, might be far more complicated to manage or understand whenever an incident takes place than simpler mechanisms that are theoretically not as complete.

InfoQ: If we look at microservices, which have become quite popular in the last few years, they seem to be at odds with the "transport cost is zero" fallacy. In fact, the smaller each microservice, the larger their overall count and the ensuing transport cost. How do you explain this?

Diaconu: Maybe another fallacy is "microservices make it easier to reason about your system". Sometimes breaking things down into components with a smaller surface area makes them easier to reason about. However, sometimes creating those boundaries adds complexity; it can certainly add failure modes, and it can create new things whose behavior also needs to be reasoned about.

Much like the previous answer, the actual design choices, and when and where you deploy the known theoretical solutions, come down to engineering judgment and experience. At Ably, we operate a system with multiple roles that scale, interoperate and discover one another independently. However, splitting functionality out into a distinct role is something we rarely do, and only when there is a particular driver for that to happen. For example, if we want some specific functionality to scale independently of other functionality, that justifies the creation of an independent role, even if it brings additional complexity.

Diaconu's article not only helps you understand where the fallacies originate from, but it also attempts to provide useful hints at current techniques and approaches to address the fallacies, so do not miss it if you are interested in the subject.

Read the rest here:

Reviewing the Eight Fallacies of Distributed Computing - InfoQ.com


The time Animoto almost brought AWS to its knees – TechCrunch

Today, Amazon Web Services is a mainstay in the cloud infrastructure services market, a $60 billion juggernaut of a business. But in 2008, it was still new, working to keep its head above water and handle growing demand for its cloud servers. In fact, 15 years ago last week, the company launched Amazon EC2 in beta. From that point forward, AWS offered startups unlimited compute power, a primary selling point at the time.

EC2 was one of the first real attempts to sell elastic computing at scale, that is, server resources that would scale up as you needed them and go away when you didn't. As Jeff Bezos said in an early sales presentation to startups back in 2008, "you want to be prepared for lightning to strike, [...] because if you're not, that will really generate a big regret. If lightning strikes, and you weren't ready for it, that's kind of hard to live with. At the same time you don't want to prepare your physical infrastructure to kind of hubris levels either, in case that lightning doesn't strike. So, [AWS] kind of helps with that tough situation."

An early test of that value proposition occurred when one of their startup customers, Animoto, scaled from 25,000 to 250,000 users in a 4-day period in 2008, shortly after launching the company's Facebook app at South by Southwest.

At the time, Animoto was an app aimed at consumers that allowed users to upload photos and turn them into a video with a backing music track. While that product may sound tame today, it was state of the art back in those days, and it used up a fair amount of computing resources to build each video. It was an early representation of not only Web 2.0 user-generated content, but also the marriage of mobile computing with the cloud, something we take for granted today.

For Animoto, launched in 2006, choosing AWS was a risky proposition, but the company found trying to run its own infrastructure was even more of a gamble because of the dynamic nature of the demand for its service. To spin up its own servers would have involved huge capital expenditures. Animoto initially went that route before turning its attention to AWS, because it was building prior to attracting initial funding, Brad Jefferson, co-founder and CEO at the company, explained.

"We started building our own servers, thinking that we had to prove out the concept with something. And as we started to do that and got more traction from a proof-of-concept perspective and started to let certain people use the product, we took a step back, and were like, well, it's easy to prepare for failure, but what we need to prepare for is success," Jefferson told me.

Going with AWS may seem like an easy decision knowing what we know today, but in 2007 the company was really putting its fate in the hands of a mostly unproven concept.

"It's pretty interesting just to see how far AWS has gone and EC2 has come, but back then it really was a gamble. I mean we were talking to an e-commerce company [about running our infrastructure]. And they're trying to convince us that they're going to have these servers and it's going to be fully dynamic, and so it was pretty [risky]. Now in hindsight, it seems obvious, but it was a risk for a company like us to bet on them back then," Jefferson told me.

Animoto had to not only trust that AWS could do what it claimed, but also had to spend six months rearchitecting its software to run on Amazon's cloud. But as Jefferson crunched the numbers, the choice made sense. At the time, Animoto's business model was free for a 30-second video, $5 for a longer clip, or $30 for a year. As he tried to model the level of resources his company would need to make its model work, it got really difficult, so he and his co-founders decided to bet on AWS and hope it worked when and if a surge of usage arrived.

That test came the following year at South by Southwest when the company launched a Facebook app, which led to a surge in demand, in turn pushing the limits of AWS's capabilities at the time. A couple of weeks after the startup launched its new app, interest exploded and Amazon was left scrambling to find the appropriate resources to keep Animoto up and running.

Dave Brown, who today is Amazon's VP of EC2 and was an engineer on the team back in 2008, said that "every [Animoto] video would initiate, utilize and terminate a separate EC2 instance. For the prior month they had been using between 50 and 100 instances [per day]. On Tuesday their usage peaked at around 400, Wednesday it was 900, and then 3,400 instances as of Friday morning." Animoto was able to keep up with the surge of demand, and AWS was able to provide the necessary resources to do so. Its usage eventually peaked at 5,000 instances before it settled back down, proving in the process that elastic computing could actually work.
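As a rough illustration of the per-video instance lifecycle Brown describes, here is a minimal sketch using today's boto3 SDK, which of course did not exist in 2008 and is not Animoto's actual code; the AMI ID, instance type, and worker script path are placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

def render_video(job_id: str) -> None:
    """Launch one worker instance for a single video job, then terminate it.

    The AMI ID, instance type, and render script are placeholder values.
    """
    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",   # placeholder worker image
        InstanceType="t3.medium",          # placeholder size
        MinCount=1,
        MaxCount=1,
        UserData=f"#!/bin/bash\n/opt/worker/render.sh {job_id}\n",
    )
    instance_id = response["Instances"][0]["InstanceId"]
    ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])
    # ...in a real system, wait for the job queue to report the video as done...
    ec2.terminate_instances(InstanceIds=[instance_id])
```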

At that point though, Jefferson said his company wasn't merely trusting EC2's marketing. It was on the phone regularly with AWS executives making sure their service wouldn't collapse under this increasing demand. "And the biggest thing was, can you get us more servers, we need more servers. To their credit, I don't know how they did it, if they took away processing power from their own website or others, but they were able to get us where we needed to be. And then we were able to get through that spike and then sort of things naturally calmed down," he said.

The story of keeping Animoto online became a main selling point for the company, and Amazon was actually the first company to invest in the startup besides friends and family. It raised a total of $30 million along the way, with its last funding coming in 2011. Today, the company is more of a B2B operation, helping marketing departments easily create videos.

While Jefferson didn't discuss specifics concerning costs, he pointed out that the price of trying to maintain servers that would sit dormant much of the time was not a tenable approach for his company. Cloud computing turned out to be the perfect model, and Jefferson says that his company is still an AWS customer to this day.

While the goal of cloud computing has always been to provide as much computing as you need on demand whenever you need it, this particular set of circumstances put that notion to the test in a big way.

Today the idea of having trouble generating 3,400 instances seems quaint, especially when you consider that Amazon processes 60 million instances every day now, but back then it was a huge challenge and helped show startups that the idea of elastic computing was more than theory.

Read more from the original source:
The time Animoto almost brought AWS to its knees - TechCrunch


EXCLUSIVE Amazon considers more proactive approach to determining what belongs on its cloud service – Reuters

Attendees at Amazon.com Inc annual cloud computing conference walk past the Amazon Web Services logo in Las Vegas, Nevada, U.S., November 30, 2017. REUTERS/Salvador Rodriguez/File Photo

Sept 2 (Reuters) - Amazon.com Inc (AMZN.O) plans to take a more proactive approach to determine what types of content violate its cloud service policies, such as rules against promoting violence, and enforce its removal, according to two sources, a move likely to renew debate about how much power tech companies should have to restrict free speech.

Over the coming months, Amazon will expand the Trust & Safety team at the Amazon Web Services (AWS) division and hire a small group of people to develop expertise and work with outside researchers to monitor for future threats, one of the sources familiar with the matter said.

It could turn Amazon, the leading cloud service provider worldwide with 40% market share according to research firm Gartner, into one of the world's most powerful arbiters of content allowed on the internet, experts say.

AWS does not plan to sift through the vast amounts of content that companies host on the cloud, but will aim to get ahead of future threats, such as emerging extremist groups whose content could make it onto the AWS cloud, the source added.

A day after publication of this story, an AWS spokesperson told Reuters that the news agency's reporting "is wrong," and added "AWS Trust & Safety has no plans to change its policies or processes, and the team has always existed."

A Reuters spokesperson said the news agency stands by its reporting.

Amazon made headlines in the Washington Post on Aug. 27 for shutting down a website hosted on AWS that featured Islamic State propaganda celebrating the suicide bombing that killed an estimated 170 Afghans and 13 U.S. troops in Kabul last Thursday. It did so after the news organization contacted Amazon, according to the Post.

The discussions of a more proactive approach to content come after Amazon kicked social media app Parler off its cloud service shortly after the Jan. 6 Capitol riot for permitting content promoting violence.

Amazon did not immediately comment ahead of the publication of the story on Thursday. After publication, an AWS spokesperson said later that day, "AWS Trust & Safety works to protect AWS customers, partners, and internet users from bad actors attempting to use our services for abusive or illegal purposes. When AWS Trust & Safety is made aware of abusive or illegal behavior on AWS services, they act quickly to investigate and engage with customers to take appropriate actions."

The spokesperson added that "AWS Trust & Safety does not pre-review content hosted by our customers. As AWS continues to expand, we expect this team to continue to grow."

Activists and human rights groups are increasingly holding not just websites and apps accountable for harmful content, but also the underlying tech infrastructure that enables those sites to operate, while political conservatives decry what they consider the curtailing of free speech.

AWS already prohibits its services from being used in a variety of ways, such as for illegal or fraudulent activity, to incite or threaten violence, or to promote child sexual exploitation and abuse, according to its acceptable use policy.

Amazon investigates requests sent to the Trust & Safety team to verify their accuracy before contacting customers, asking them to remove content that violates its policies or to put a system in place to moderate the content. If Amazon cannot reach an acceptable agreement with the customer, it may take down the website.

Amazon aims to develop an approach toward content issues that it and other cloud providers are more frequently confronting, such as determining when misinformation on a company's website reaches a scale that requires AWS action, the source said.

A job posting on Amazon's jobs website advertising for a position to be the "Global Head of Policy at AWS Trust & Safety," which was last seen by Reuters ahead of publication of this story on Thursday, was no longer available on the Amazon site on Friday.

The ad, which is still available on LinkedIn, describes the new role as one who will "identify policy gaps and propose scalable solutions," "develop frameworks to assess risk and guide decision-making," and "develop efficient issue escalation mechanisms."

The LinkedIn ad also says the position will "make clear recommendations to AWS leadership."

The Amazon spokesperson said the job posting was temporarily removed from Amazon's website for editing and should not have been posted in its draft form.

AWS's offerings include cloud storage and virtual servers, and it counts major companies like Netflix (NFLX.O), Coca-Cola (KO.N) and Capital One (COF.N) as clients, according to its website.

PROACTIVE MOVES

Better preparation against certain types of content could help Amazon avoid legal and public relations risk.

"If (Amazon) can get some of this stuff off proactively before it's discovered and becomes a big news story, there's value in avoiding that reputational damage," said Melissa Ryan, founder of CARD Strategies, a consulting firm that helps organizations understand extremism and online toxicity threats.

Cloud services such as AWS and other entities like domain registrars are considered the "backbone of the internet," but have traditionally been politically neutral services, according to a 2019 report from Joan Donovan, a Harvard researcher who studies online extremism and disinformation campaigns.

But cloud services providers have removed content before, such as in the aftermath of the 2017 alt-right rally in Charlottesville, Virginia, helping to slow the organizing ability of alt-right groups, Donovan wrote.

"Most of these companies have understandably not wanted to get into content and not wanting to be the arbiter of thought," Ryan said. "But when you're talking about hate and extremism, you have to take a stance."

Reporting by Sheila Dang in Dallas; Editing by Kenneth Li, Lisa Shumaker, Sandra Maler, William Mallard and Sonya Hepinstall

Our Standards: The Thomson Reuters Trust Principles.

Read more:
EXCLUSIVE Amazon considers more proactive approach to determining what belongs on its cloud service - Reuters


Server and virtualization business trends to watch in 2021 – TechBullion


There are many different trends in data center technology, which can make it difficult to keep up with the latest requirements. There is always something new and exciting popping up. But what are the latest trends? What's changing in 2021?

A lot of people think server virtualization is outdated, but in 2021 it will still be very much around. One of the most important trends to watch for is software-defined infrastructure. It's already popular, but it will continue to grow in popularity over the next few years.

Here are some other trends to watch for when looking at server and virtualization business trends in 2021.

Throughout 2020, an increasingly large number of businesses adopted hybrid cloud technology, and many more plan to do so in 2021 and beyond. Enterprises, in particular, are embracing hybrid cloud technology to gain greater agility and mobility.

The idea of cloud providers delivering multiple services (compute, storage, network, and data services) in the form of a single package appeals to them, and this has helped solidify hybrid cloud technology as a requirement for IT operations.

A move to a fully hybrid cloud infrastructure, one in which customers are not only using public cloud providers such as Amazon Web Services (AWS) but also implementing private clouds and a mixture of public and private clouds, is the logical next step for many organizations.

Low-cost commercial bare metal servers have been steadily rising in popularity in the second half of 2021 and will find their strongest markets in the web hosting and cloud computing sectors.

Because virtualized environments are likely to be more complicated to operate than traditional servers, dedicated bare metal servers will have a strong advantage over virtualized servers in terms of ease of operation.

The advantages of bare metal cloud servers will also prove useful to some private cloud providers, which may install a single server and load it up with virtual machine workloads on demand for the end client.

Many companies have found themselves needing server virtualization throughout the pandemic, with the rise of remote work and cloud computing. Using a hybrid approach that integrates virtualization, cloud, and more traditional computing solutions has become the norm for most enterprises.

Whether it's a proprietary solution like Hyper-V and VMware, a solution based on open standards like OpenStack, or a different approach like KVM, containers, or Google's Cloud Native Application Engine, this is a space with significant momentum and growth. It's an area where each of the players, HPE, IBM, Cisco, Dell, Oracle, HP, Microsoft, Red Hat, and VMware, all have strong positions.

As the number of remote workers continues to climb, so does the risk of viable cyberattacks on corporations. Modern businesses have a wide range of remote workers, and those workers, along with the devices they use, are vulnerable to security issues.

This is a key concern for server providers, and most continue to invest in security measures and products. Some of the latest cybersecurity trends to emerge in 2021 include things like:

Devices moving closer to the point of application access, processing and delivery will require new kinds of capabilities that were not needed before. Edge computing, in which a local compute node, edge gateway, or other compute element is set up to handle compute-intensive activity close to the data source, is expected to see major gains throughout 2021, and see some major liftoff in 2022.

Markets for edge computing will include verticals such as supply chain and retail, and the edge can enable new business models and revenue streams for application vendors and system integrators.

The devices that are most often seen as edge nodes in the context of edge computing tend to be low-power and low-cost IoT devices such as sensors and electronic logs. Edge computing vendors and service providers will bring services to edge networks, based on their commitment to systems integration, interoperability, standards support, and vendor enablement.

See the original post:
Server and virtualization business trends to watch in 2021 - TechBullion

Read More..

How to Move Fast in the Cloud Without Breaking Security – insideBIGDATA

In this special guest feature, Asher Benbenisty, Director of Product Marketing at Algosec, looks at how organizations can solve the problems of managing and maintaining security in hybrid, multi-cloud environments. Also discussed is the common confusion over cloud ownership, and how organizations can get consistent control and take advantage of agility and scalability without compromising on security. Asher is an experienced product marketing professional with a diverse background in all aspects of the corporate marketing mix and product/project management, as well as technical expertise. He is passionate about bringing innovative products that solve real business problems to the market. When not thinking of innovative products, Asher enjoys outdoor running, especially by the ocean.

"Move fast and break things" is a familiar motto. Attributed to Facebook CEO Mark Zuckerberg, it helps to explain the company's stellar growth over the past decade, driven by its product innovations. However, while it's a useful philosophy for software development, moving faster than you'd planned is a risky approach in other areas, as organizations globally realized during the COVID-19 pandemic. While 2020 saw digital transformation programs advance by up to seven years, enterprises' quick moves to the cloud also meant that some things got damaged along the way, including security.

A recent survey conducted with the Cloud Security Alliance showed that over half of organizations are now running over 41% of their workloads in public clouds, compared to just one quarter in 2019, and this will increase further by the end of 2021. Enterprises are moving fast to the cloud, but they are also finding that things are getting broken during this process.

11% of organizations reported a cloud security incident in the past year, with the three most common causes being cloud provider issues (26%), security misconfigurations (22%), and attacks such as denial of service exploits (20%). In terms of the business impact of these disruptive cloud outages, 24% said it took up to 3 hours to restore operations, and for 26% it took over half a day.

As a result, it's no surprise that organizations have significant concerns about enforcing and managing security in the cloud. Their leading concerns were maintaining overall network security, a lack of cloud expertise, problems when migrating workloads to the cloud, and insufficient staff to manage their expanded cloud environments. So, what are the root causes of these cloud security concerns and challenges, and how should enterprises address them?

Confusion over cloud control

When asked about which aspects of security worried them most when running applications in public clouds, respondents overwhelmingly cited getting clear visibility of topologies and policies for the entire hybrid network estate, followed by the ability to detect risks and misconfigurations.

A key reason for these concerns is that organizations are using a range of different controls to manage cloud security as part of their application orchestration. 52% use cloud-native tools, and 50% reported using orchestration and configuration management tools such as Ansible, Chef and Puppet. However, nearly a third (29%) said they use manual processes to manage cloud security.

In addition, there's competition for overall control over cloud security: 35% of respondents said their security operations team managed cloud security, followed by the cloud team (18%), and IT operations (16%). Other teams such as network operations, DevOps and application owners all figured too. Having different teams using multiple different controls for security limits overall visibility across the hybrid cloud environment, and also adds significant complexity and management overheads to security processes. Any time you need to make a change, you need to duplicate the work across each of these different controls and teams. This results in security holes and the types of misconfiguration-based incidents and outages we mentioned earlier.

How to move fast and not break things

So how can organizations address these security and management issues, and get consistent control over their cloud and on-prem environments, so they can take full advantage of cloud agility and scalability without compromising on security? Here are the four key steps:

With a network security automation solution handling these steps, organizations can get holistic, single-console security management across all of their public cloud accounts, as well as their private cloud and on-premises deployments. This helps them to solve the cloud complexity challenge and ensures faster, safer and more compliant cloud management, making it possible for organizations to move fast in response to changing business needs without breaking things.
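As a rough sketch of the "define once, render everywhere" idea behind such automation (this is not AlgoSec's product API; the rule fields and output shapes are illustrative assumptions), a single rule definition can be translated into whatever format each control expects:

```python
# One source of truth for a network rule, rendered per platform.
RULE = {
    "name": "allow-https-from-office",
    "source_cidr": "203.0.113.0/24",
    "port": 443,
    "protocol": "tcp",
}

def to_aws_security_group_ingress(rule):
    """Render the rule as an AWS security-group ingress permission."""
    return {
        "IpProtocol": rule["protocol"],
        "FromPort": rule["port"],
        "ToPort": rule["port"],
        "IpRanges": [{"CidrIp": rule["source_cidr"], "Description": rule["name"]}],
    }

def to_onprem_firewall_line(rule):
    """Render the same rule as a text line for a hypothetical on-prem firewall."""
    return (f"permit {rule['protocol']} {rule['source_cidr']} any "
            f"eq {rule['port']}  ! {rule['name']}")

print(to_aws_security_group_ingress(RULE))
print(to_onprem_firewall_line(RULE))
```

Keeping one definition and generating the per-platform versions removes the duplicated manual work across teams and tools that the survey respondents described.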


The rest is here:
How to Move Fast in the Cloud Without Breaking Security - insideBIGDATA


Automated ‘cloud lab’ will handle all aspects of daily lab work – E&T Magazine

Carnegie Mellon University (CMU) is working with Emerald Cloud Lab (ECL) to build a world-first cloud laboratory, which they hope will provide researchers with facilities for routine life sciences and chemistry research.

According to the partners, the remote-controlled Carnegie Mellon University (CMU) Cloud Lab will provide a universal platform for AI-driven experimentation, and revolutionise how academic laboratory research and education are done.

Emerald's 'cloud lab', which will be used as the basis for the new lab, allows scientists to conduct wet laboratory research without being in a physical laboratory. Instead, they can send their samples to a facility, design their experiments using ECL's command-based software (with the assistance of AI-based design tools), and then execute the experiment remotely. A combination of robotic instrumentation and technicians perform the experiments as specified, and the data is sent to cloud servers for access.

CMU researchers have used ECL facilities for research and teaching for several years. According to the university, cloud lab classes gave students valuable laboratory experience during the Covid-19 pandemic, even with all courses being taught remotely.

"CMU is a world leader in [AI], machine learning, data science, and the foundational sciences. There is no better place to be home to the world's first university cloud lab," said Professor Rebecca Doerge. "Bringing this technology, which I'm proud to say was created by CMU's alumni, to our researchers and students is part of our commitment to creating science for the future."

"The CMU Cloud Lab will democratise science for researchers and students. Researchers will no longer be limited by the cost, location, or availability of equipment. By removing these barriers to discovery, the opportunities are limitless."

The new cloud lab will be the first such laboratory built in an academic setting. It will be built in a university-owned building on Penn Avenue, Pittsburgh. Construction on the $40m project is expected to begin in autumn for completion in summer 2022.

The facility will house more than 100 types of scientific instruments for life sciences and chemistry experiments and will be capable of running more than 100 complex experiments simultaneously, 24 hours a day and 365 days a year. This will allow users to individually manage many experiments in parallel from anywhere in the world. The university and company will collaborate on the facility's design, construction, installation, management, and operations. Already, staff and students are being trained to use the cloud lab.

While the CMU Cloud Lab will initially be available to CMU researchers and students, the university hopes to make time available to others in the research community, including high school students, researchers from smaller universities that may not have advanced research facilities, and local life sciences start-ups.

"We are truly honoured that Carnegie Mellon is giving us the chance to demonstrate the impact that access to a cloud lab can make for its faculty, students and staff," said Brian Frezza, a CMU graduate and co-CEO of ECL. "We couldn't think of a better way to give back to the university than by giving them a platform that redefines how a world-class institution conducts life sciences research."


Continued here:
Automated 'cloud lab' will handle all aspects of daily lab work - E&T Magazine


The myths behind Linux security. – The CyberWire

Executive Summary.

"Attackers do not target Linux environments because Windows is the most used operating system globally" is a belief many in the technology industry hold. With this one false belief, attackers are creating havoc on companies' Linux-based environments by creating and transitioning Windows malware to:

There is a notion in our community that added Linux operating system security features, such as Security-Enhanced Linux (SELinux), along with cloud provider offerings such as cloud-based firewall rules and access management, offer security by default, and that companies do not need to focus on the hardening of the cloud server itself.

Myths such as these can lead companies to suffer devastating losses. When securing Linux servers, whether physical or in the cloud, the basics still remain the same. Just because a server is running Linux does not mean you can be lenient about security practices on the server itself. A company's security posture frequently relies on cloud providers' security controls, and while these do provide help, if the company does not know what code is running on its servers, the effectiveness of the provider's security controls is negated.

Software development has changed drastically over the past several years to meet the need for faster time to market conditions. To accommodate these requirements, developers are increasing the frequency of their code deployments. Capital One reports they are currently deploying up to 50 times per day for a single product, with Amazon, Google, and Netflix deploying thousands of times per day. With the frequency of these code changes, it is becoming increasingly difficult for security teams to adapt their monitoring and hardening practices.

New code deployments can alter a server's expected behavior. Suppose companies are focusing their monitoring on behavior-based detection. In that case, new code deployments can lead to false positives, which create an additional workload for teams. Security teams often report that it's challenging to address these situations as they do not have enough visibility into what code is running on their servers. Thus, they must spend a significant amount of time investigating them. If attackers can craft their code to fit with the expected behavior, no alerts are triggered, and a compromise could occur without any detection. However, companies are often worried about deploying new security solutions as they may degrade performance by using vital resources or slowing down the development process.
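As a toy example of why new deployments trip behavior-based rules, the sketch below compares running processes against an allowlist using the third-party psutil library; the allowlist contents are assumptions, and a real detection agent tracks far more than process names.

```python
import psutil  # third-party: pip install psutil

# Illustrative allowlist; a new deployment that adds a worker binary
# not listed here would immediately show up as an "anomaly".
EXPECTED_PROCESSES = {"systemd", "sshd", "nginx", "gunicorn", "postgres"}

def unexpected_processes():
    """Return names of running processes not on the expected list."""
    seen = set()
    for proc in psutil.process_iter(["name"]):
        name = proc.info.get("name") or ""
        if name and name not in EXPECTED_PROCESSES:
            seen.add(name)
    return sorted(seen)

if __name__ == "__main__":
    for name in unexpected_processes():
        print(f"unexpected process: {name}")
```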

Attackers focus on Windows as it's the world's most used operating system.

The number of threats that Linux servers face is downplayed due to another common myth about the popularity of Linux's usage around the world. The belief is that Windows is the primary operating system in use, and while this might be true for desktop computers, when it comes to Linux cloud or physical servers, the numbers say it all.

Currently, the world's 500 fastest supercomputers all run on Linux. These systems are used for everything from advancing artificial intelligence to helping save lives by potentially aiding in COVID-19 gene analysis and vaccine research.

96.3% of the world's servers are running Linux.

83.1% of developers say they prefer to work on the Linux platform over any other operating system.

In the past decade, researchers have discovered many advanced persistent threat campaigns targeting Linux systems using adapted Windows malware, as well as unique Linux malware tools tailored for espionage operations. Once the code was modified to work in Linux environments, there was no barrier to shifting these attacks to new targets. One example is IPStorm; researchers first saw this malware in 2019 targeting Windows systems. IPStorm has now evolved to target other platforms such as Android, Linux, and Mac devices, leading to more than 13,500 compromised systems. Detecting whether systems are compromised is not always difficult; however, many businesses are unaware they should examine their devices until they see this attack's impact. The increase in compromised systems has led some to call IPStorm one of the most dangerous malware families in existence.

Perhaps one of the leading factors for attackers deciding to morph their attack strategies is the growth of cloud technology and the increasing number of cloud providers making the transition to Linux-based environments easier than ever.

Even governments have embraced Linux's usage in their environments. For example, in 2001, the White House transitioned to the Red Hat Linux-based distribution. The US Department of Defense migrated to Linux in 2007, and the US Navy's warships have been using Red Hat since 2013. The US is not alone in this transition. The Austrian capital of Vienna and the government of Venezuela have also adopted the use of Linux.

Open-source software is inherently secure, due to the visibility of the code and contributions from the community.

Attackers contribute to open-source projects as well. For example, we have seen NPM packages that contained code providing access to environment variables, allowing for the collection of information about the host device.

Not everyone has the skills to understand the code, so despite seeing the installed code, the compromise can go undetected. When an issue is reported, an experienced developer reviews the code, then writes a patch. Once this work is done, we wait for the new code to be approved. Don't forget that during this time, unknowing parties would still be using these packages.

Once a fix is available, many companies still do not upgrade to the new code. The State of Software Security (SOS) report analyzed 85,000 applications and found that over 75% shared similar code. 47% of these had flawed libraries used by multiple applications; 74% of these libraries could have been fixed with a simple upgrade.
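One small, concrete way to surface the "simple upgrade" opportunities the report describes is to ask the package manager itself. The sketch below assumes a Python project managed with pip; it only reports, it changes nothing.

```python
import json
import subprocess
import sys

def outdated_packages():
    """Return installed packages that have a newer release available,
    using pip's own JSON output (requires a reasonably recent pip)."""
    result = subprocess.run(
        [sys.executable, "-m", "pip", "list", "--outdated", "--format=json"],
        capture_output=True, text=True, check=True,
    )
    return json.loads(result.stdout)

if __name__ == "__main__":
    for pkg in outdated_packages():
        print(f"{pkg['name']}: {pkg['version']} -> {pkg['latest_version']}")
```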

Our Linux environments are just as vulnerable as any other environment. All of these myths around Linux security fall short because they do not take into account what history has taught us: that breaches do happen.

In order to have a healthy security posture, companies need to grow beyond the idea that all breaches can be prevented and address the need for visibility of the code running within their workloads.

Webinar: Is Linux Secure By Default?

Blog: 2020 Set a Record for New Linux Malware Families

Read this article:
The myths behind Linux security. - The CyberWire


Is the Cloud More Secure Than On Prem? – TechDecisions

Both the cloud and on-premises systems have their advantages and disadvantages, but recent attacks against on-premise systems coupled with the proliferation and advancement of cloud-based IT architecture are tilting the scales in favor of the cloud.

A company that owns its own on-premises servers has more control over security, but it is responsible for all of the upgrades, maintenance and other upkeep, not to mention the large up-front costs associated with the hardware.

In the cloud, most of that upgrading and maintenance is done by the provider, and organizations can pay for those services on a fixed, monthly basis.

Although on-premises systems have historically been viewed as more secure, recent attacks say otherwise, says Aviad Hasnis, CTO of autonomous breach protection company Cynet.

"It's a trend that has really stressed out the fact that companies, especially in the mid-market, that utilize these kinds of on-premises infrastructure don't usually have the capabilities or the manpower to make sure they are all up to date in terms of security updates," he said.

That's why we've seen so many successful attacks against on-premises systems of late, including the ProxyLogon and ProxyShell exploits of Microsoft Exchange Server vulnerabilities and the massive Kaseya ransomware attack, Hasnis says.

One of the main reasons there are more attacks against on-premises systems is the fact that most cloud vulnerabilities aren't assigned a CVE number, which makes it hard for hackers to discover the flaw and successfully exploit it.

Case in point was the recently disclosed Azure Cosmos DB vulnerability. Microsoft mitigated the vulnerability shortly after it was discovered, and no customer data appears to be impacted.

Meanwhile, known vulnerabilities in on-premises systems are exploited until the IT department can patch their systems. For example, the ProxyLogon and ProxyShell vulnerabilities in Microsoft exchange were assigned a CVE and patched shortly after they were disclosed, but organizations that were slow to patch or implement workarounds remained vulnerable as attackers seized on the newly discovered flaws.

In the case of the Kaseya attack, the damage was limited to only on-premises customers of Kaseya using the VSA product, but once the breach was disclosed, the company had to manually reach out to customers and urge them to take their servers down.

Attacking Kaseya's SaaS customers likely would have raised additional red flags that could have stopped the attack in its tracks, Hasnis says.

There are many different defenses for detecting this kind of threat behavior, Hasnis says.

In general, the cloud can be a much safer place to be if your organization practices SaaS Security Posture Management (SSPM), which, according to Gartner, is the constant assessment of the security risk of your SaaS applications, including reporting the configuration of native SaaS security settings and tweaking that configuration to reduce risk.

For example, someone using Microsoft 365 without two-factor authentication should trigger a warning, Hasnis says.
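The two-factor example lends itself to a simple posture rule. The sketch below assumes account settings have already been exported from a SaaS admin API into plain dictionaries; the `mfa_enabled` and `email` field names are illustrative, not any vendor's real schema.

```python
def check_mfa_posture(accounts):
    """Flag accounts that violate a basic posture rule: MFA must be enabled.

    `accounts` is assumed to be a list of dicts already pulled from a SaaS
    admin API; the field names used here are illustrative placeholders.
    """
    findings = []
    for account in accounts:
        if not account.get("mfa_enabled", False):
            findings.append({
                "account": account.get("email", "<unknown>"),
                "rule": "mfa_required",
                "severity": "high",
            })
    return findings

# Example usage with made-up data:
sample = [
    {"email": "alice@example.com", "mfa_enabled": True},
    {"email": "bob@example.com", "mfa_enabled": False},
]
for finding in check_mfa_posture(sample):
    print(finding)
```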

The fact that someone uses cloud or SaaS infrastructure doesn't necessarily mean it's safe, but they have to make sure their organization aligns with the best security protocols, Hasnis says.

Especially for smaller organizations that don't have the in-house staff and expertise to update and patch on-premises systems after an attack, migrating to the cloud can help cut down on that response time and keep the company safe by enlisting the help of the provider and other internal IT experts.

"If your organization is spread around the globe in more than one location and you're working on-prem, you don't necessarily have access to all of the different infrastructure within the environment," Hasnis says.

Continued here:
Is the Cloud More Secure Than On Prem? - TechDecisions


Meet the Self-Hosters, Taking Back the Internet One Server at a Time – VICE

It's no secret that a small handful of enormous companies dominate the internet as we know it. But the internet didn't always have services with a billion users and quasi-monopolistic control over search or shopping. It was once a loose collection of individuals, research labs, and small companies, each making their own home on the burgeoning world wide web.

That world hasn't entirely died out, however. Through a growing movement of dedicated hobbyists known as self-hosters, the dream of a decentralized internet lives on at a time when surveillance, censorship, and increasing scrutiny of Big Tech has created widespread mistrust in large internet platforms.

Self-hosting is a practice that pretty much describes itself: running your own internet services, typically on hardware you own and have at home. This contrasts with relying on products from large tech companies, which the user has no direct involvement in. A self-hoster controls it all, from the hardware used to the configuration of the software.

"My first real-world reason for learning WordPress and self-hosting was the startup of a podcast," KmisterK, a moderator of Reddit's r/selfhosted community, told Motherboard. "I quickly learned the limitations of fake unlimited accounts that were being advertised on most shared hosting plans. That research led to more realistic expectations for hosting content that I had more control over, and it just bloomed from there."

Edward, co-creator of an extensive list of self-hosted software, similarly became interested in self-hosting as a way to escape less-than-ideal circumstances. "I was initially drawn to self-hosting by a slow internet connection and a desire to share media and information with those I lived with," he told Motherboard. "I enjoyed the independence self-hosting provided and the fact that you owned and had control over your own data."

Once you're wrapped up in it, it's hard to deny the allure of the DIY self-hosted internet. My own self-hosting experiences include having a home server for recording TV and storing media for myself and my roommates, and more recently, leaving Dropbox for a self-hosted, free and open source alternative called Syncthing. While I've been happy with Dropbox for many years, I was paying for more than I needed and ran into issues with syncing speed. With a new Raspberry Pi as a central server, I had more control over what synced to different devices, no worries about any storage caps, and of course, faster transfer speeds. All of this is running on my home network: nothing has to be stored on cloud servers run by someone else in who-knows-where.

My experience with Syncthing quickly sent me down the self-hosting rabbit hole. I looked at what else I could host myself, and found simply everything: photo collections (like Google Photos); recipe managers; chat services that you can connect with the popular tools like Discord; read-it-later services for bookmarking; RSS readers; budgeting tools; and so much more. There's also the whole world of alternative social media services, like Mastodon and PixelFed, to replace Twitter, Facebook, and Instagram, which can be self-hosted as a private network or used to join others around the world.

Self-hosting is something I've found fun to learn about and tinker with, even if it is just for myself. Others, like KmisterK, find new opportunities as well. "Eventually, a career path started with it, and from there, being in the community professionally kept me personally interested as a hobby." Edward also found a connection with his career in IT infrastructure, but still continues self-hosting. "It is nice to be able to play around in a low risk/impact environment," he said.

But beyond enjoyment, self-hosters share important principles that drive the desire to self-host, namely a distrust of large tech companies, which are known to scoop up all the data they can get their hands on and use it in the name of profit.

Despite new privacy laws like Europe's General Data Protection Regulation (GDPR) and the California Consumer Protection Act (CCPA), the vast majority of Americans still don't trust Big Tech with their privacy. And in recent years, the countless privacy scandals like Cambridge Analytica have driven some tech-savvy folks to take matters into their own hands.

"I think that people are becoming more privacy conscious, and while neither these laws nor self-hosting can currently easily resolve these concerns, I think that they can at least alleviate them," said Edward.

Some self-hosters see the rising interest in decentralized internet tools as a direct result of Silicon Valley excess. "The growth of self-hosting does not surprise me," nodiscc, a co-creator and maintainer of the self-hosted tech list, told Motherboard. "People and companies have started realizing the importance of keeping some control over their data and tools, and I think the days of 'everything SaaS [Software as a Service]' are past."

Another strong motivator comes from large companies simply abandoning popular tools, along with their users. After all, even if you're a paying customer, tech companies offer access to services at their whim. Google, for example, is now infamous for shutting down even seemingly popular products like Reader, leaving users with no say in the matter.

KmisterK succinctly summarized the main reasons people have for self-hosting: curiosity and wanting to learn; privacy concerns; looking for cheaper alternatives; and "the betrayed," people who come from platforms like Dropbox or Google Photos or Photobucket or similar, after major outages, major policy changes, sunsetting of services, or other dramatic changes to the platform that they disagree with. "This last one is probably the majority gateway to self-hosting, based on recent traffic to r/selfhosted," he says. Look no further than their recent Google Photos megathread and recent guides from self-hosters on the internet. "For me, changes in LastPass, even as a paid user, had me looking elsewhere."

nodiscc also noted the different reasons people self-host, saying, "There would be many... technical interest, security/privacy, customization, control over the software, self-reliance, challenge, economical reasons, political/Free software activism." Looking at the growth of self-hosting over the years, Edward says, "These aren't comprehensive reasons but I expect that privacy-consciousness, hardware availability and more mainstream open-source software have contributed to the growth of self-hosting."

These are all good reasons why self-hosting is so essential. Self-hosting brings freedom and empowerment to users. You own what you use: you can change it, keep it the same, and have your data in your own hands. Much of this derives from the free (as in freedom to do what you like) nature of self-hosting software. The source code is freely available to use, modify, and share. Even if the original author or group stops supporting something, the code is out there for anyone to pick up and keep alive.

Despite the individualistic nature of self-hosting, there is a vibrant and growing community.

Much of this growth can be seen on Reddit, with r/selfhosted hitting over 136,000 members and continuing to rise, up from 84,000 just a year ago. The discussions involve self-hosting software that spans dozens of categories, from home automation, genealogy, and media streaming to document collaboration and e-commerce. The list maintained by nodiscc and the community has grown so long that its stewards say it needs more curation and better navigation.

The quality of free and easy-to-use self-hosting software has increased too, making the practice increasingly accessible to the less-technically savvy. Add to that the rise of cheap, credit card-sized single-board computers like the Raspberry Pi, which lower the starting costs of creating a home server to as little as $5 or $10. "Between high-available hosting environments, to one-click/one-command deploy options for hundreds of different softwares, the barrier for entry has dramatically been lowered over the years," said KmisterK.

Of course, even the most dedicated self-hosters admit that it isn't for everyone. Having some computing knowledge is fairly essential when it comes to running your own internet services, and self-hosting "will never truly compete with big-name services that make it exponentially easier," KmisterK said.

But while self-hosters may never number enough to put a serious dent in Big Tech's offerings, there is a clear need and benefit to this alternative space. And I can't think of a better model for the kind of DIY community we can have, when left to our own devices.

Read the original:
Meet the Self-Hosters, Taking Back the Internet One Server at a Time - VICE
