
Special Address at ISC 2022 Shows Future of HPC – Nvidia

Researchers grappling with today's grand challenges are getting traction with accelerated computing, as showcased at ISC, Europe's annual gathering of supercomputing experts.

Some are building digital twins to simulate new energy sources. Some use AI+HPC to peer deep into the human brain.

Others are taking HPC to the edge with highly sensitive instruments or accelerating simulations on hybrid quantum systems, said Ian Buck, vice president of accelerated computing at NVIDIA, at an ISC special address in Hamburg.

For example, a new supercomputer at Los Alamos National Laboratory (LANL) called Venado will deliver 10 exaflops of AI performance to advance work in areas such as materials science and renewable energy.

LANL researchers target 30x speedups in their computational multi-physics applications with NVIDIA GPUs, CPUs and DPUs in the system, named after a peak in northern New Mexico.

Venado will use NVIDIA Grace Hopper Superchips to run workloads up to 3x faster than prior GPUs. It also packs NVIDIA Grace CPU Superchips to provide twice the performance per watt of traditional CPUs on a long tail of unaccelerated applications.

The LANL system is among the latest of many around the world to embrace NVIDIA BlueField DPUs to offload and accelerate communications and storage tasks from host CPUs.

Similarly, the Texas Advanced Computing Center is adding BlueField-2 DPUs to the NVIDIA Quantum InfiniBand network on Lonestar6. It will become a development platform for cloud-native supercomputing, hosting multiple users and applications with bare-metal performance while securely isolating workloads.

"That's the architecture of choice for next-generation supercomputing and HPC clouds," said Buck.

In Europe, NVIDIA and SiPearl are collaborating to expand the ecosystem of developers building exascale computing on Arm. The work will help the region's users port applications to systems that use SiPearl's Rhea and future Arm-based CPUs together with NVIDIA accelerated computing and networking technologies.

Japan's Center for Computational Sciences, at the University of Tsukuba, is pairing NVIDIA H100 Tensor Core GPUs and x86 CPUs on an NVIDIA Quantum-2 InfiniBand platform. The new supercomputer will tackle jobs in climatology, astrophysics, big data, AI and more.

The new system will join the 71% of machines on the latest TOP500 list of supercomputers that have adopted NVIDIA technologies. In addition, 80% of new systems on the list use NVIDIA GPUs, networks, or both, and NVIDIA's networking platform is the most popular interconnect for TOP500 systems.

HPC users adopt NVIDIA technologies because they deliver the highest application performance for established supercomputing workloads such as simulation, machine learning, and real-time edge processing, as well as for emerging workloads like quantum simulations and digital twins.

Showing what these systems can do, Buck played a demo of a virtual fusion power plant that researchers at the U.K. Atomic Energy Authority and the University of Manchester are building in NVIDIA Omniverse. The digital twin aims to simulate, in real time, the entire power station, its robotic components, and even the behavior of the fusion plasma at its core.

NVIDIA Omniverse, a 3D design collaboration and world simulation platform, lets distant researchers on the project work together in real time while using different 3D applications. They aim to enhance their work with NVIDIA Modulus, a framework for creating physics-informed AI models.

"It's incredibly intricate work that's paving the way for tomorrow's clean renewable energy sources," said Buck.

Separately, Buck described how researchers created a library of 100,000 synthetic images of the human brain on NVIDIA Cambridge-1, a supercomputer dedicated to advances in healthcare with AI.

A team from King's College London used MONAI, an AI framework for medical imaging, to generate lifelike images that can help researchers see how diseases like Parkinson's develop.

"This is a great example of HPC+AI making a real contribution to the scientific and research community," said Buck.

Increasingly, HPC work extends beyond the supercomputer center. Observatories, satellites and new kinds of lab instruments need to stream and visualize data in real time.

For example, work in lightsheet microscopy at Lawrence Berkeley National Lab is using NVIDIA Clara Holoscan to see life in real time at nanometer scale, work that would require several days on CPUs.

To help bring supercomputing to the edge, NVIDIA is developing Holoscan for HPC, a highly scalable version of our imaging software to accelerate any scientific discovery. It will run across accelerated platforms from Jetson AGX modules and appliances to quad A100 servers.

"We can't wait to see what researchers will do with this software," said Buck.

In yet another vector of supercomputing, Buck reported on the rapid adoption of NVIDIA cuQuantum, a software development kit to accelerate quantum circuit simulations on GPUs.

Dozens of organizations are already using it in research across many fields. It's integrated into major quantum software frameworks, so users can access GPU acceleration without any additional coding.
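
As a rough illustration of why GPU acceleration matters here (my own back-of-the-envelope arithmetic, not a figure from the talk), the memory needed for a full state-vector simulation of a quantum circuit doubles with every added qubit:

```python
# Back-of-the-envelope arithmetic (not from NVIDIA's talk): a full state-vector
# simulation of n qubits stores 2**n complex amplitudes. At 16 bytes per
# complex128 amplitude, memory doubles with each added qubit, which is why
# circuit simulators lean on GPUs and multi-node systems.
BYTES_PER_AMPLITUDE = 16  # complex128: two 8-byte floats

for n in (20, 30, 40, 50):
    amplitudes = 2 ** n
    gib = amplitudes * BYTES_PER_AMPLITUDE / 2 ** 30
    print(f"{n} qubits -> {amplitudes:,} amplitudes, about {gib:,.0f} GiB of state")
```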

Most recently, AWS announced the availability of cuQuantum in its Braket service. And it demonstrated how cuQuantum can provide up to a 900x speedup on quantum machine learning workloads while reducing costs 3.5x.

"Quantum computing has tremendous potential, and simulating quantum computers on GPU supercomputers is essential to move us closer to valuable quantum computing," said Buck. "We're really excited to be at the forefront of this work," he added.

To learn more about accelerated computing for HPC, watch the full talk below.

Read the original:
Special Address at ISC 2022 Shows Future of HPC - Nvidia


This Week’s Awesome Tech Stories From Around the Web (Through June 4) – Singularity Hub

COMPUTING

Manipulating Photons for Microseconds Tops 9,000 Years on a Supercomputer
John Timmer | Ars Technica
"Thanks to some tweaks to the design it described a year ago, [quantum computing startup] Xanadu is now able to sometimes perform operations with more than 200 qubits. And it has shown that simulating the behavior of just one of those operations on a supercomputer would take 9,000 years, while its optical quantum computer can do them in just a few dozen milliseconds."

Researchers in Japan Just Set a Staggering New Speed Record for Data Transfers
Andrew Liszewski | Gizmodo
"Researchers from Japan's National Institute of Information and Communications Technology (NICT) successfully sent data down a custom multi-core fiber optic cable at a speed of 1.02 petabits per second over a distance of 51.7 km. That's the equivalent of sending 127,500 GB of data every second, which, according to the researchers, is also enough capacity for over 10 million channels of 8K broadcasting per second."

California Allows Driverless Taxi Service to Operate in San Francisco
Associated Press | The Guardian
"Cruise and another robotic car pioneer, Waymo, have already been charging passengers for rides in parts of San Francisco in autonomous vehicles with a backup human driver present to take control if something goes wrong with the technology. But now Cruise has been cleared to charge for rides in vehicles that will have no other people in them besides the passengers, an ambition that a wide variety of technology companies and traditional automakers have been pursuing for more than a decade."

With Glass Buried Under Ice, Microsoft Plans to Preserve Music for 10,000 Years
Mark Wilson | Fast Company
"Located in Norway, it's part of a cold-storage facility drilled into the very same mountain as the Svalbard Global Seed Vault. While the seed vault protects the earth's cache of seeds, the Global Music Vault aims to preserve the sonic arts for generations to come. Dubbed Project Silica, you could oversimplify [Microsoft's] technology as something akin to a glass hard drive that's read like a CD. It's a 3-by-3-inch platter that can hold 100GB of digital data, or roughly 20,000 songs, pretty much forever."

How Do You Decide? Cancer Treatment's CAR-T Crisis Has Patients Dying on a Waitlist
Angus Chen | Stat
"By the fall of 2021, Patel saw only one possibility left to save Goltzene's life: a newly approved CAR-T cell therapy for myeloma. It's an approach that is transforming treatment of blood cancers: CAR-T therapy labs convert the immune system's T cells into assassins of cancer cells by inserting a gene for what's known as a chimeric antigen receptor. But the process is slow and laborious, and drugmakers simply can't keep up."

How to Make the Universe Think for Us
Charlie Wood | Quanta
"Physicists are building neural networks out of vibrations, voltages and lasers, arguing that the future of computing lies in exploiting the universe's complex physical behaviors. McMahon views his devices as striking, if modest, proof that you don't need a brain or computer chip to think. 'Any physical system can be a neural network,' he said."

AstroForge Aims to Succeed Where Other Asteroid Mining Companies Have Failed
Eric Berger | Ars Technica
"The company plans to build and launch what Gialich characterized as a small spacecraft to a near-Earth asteroid to extract regolith, refine that material, and send it back toward Earth on a ballistic trajectory. It will then fly into Earth's atmosphere with a small heat shield and land beneath a parachute. Acain and Gialich, veterans of SpaceX and Virgin Orbit, respectively, readily acknowledge that what they're proposing is rather audacious. But they believe it is time for commercial companies to begin looking beyond low Earth orbit."

Eavesdropping on the Brain With 10,000 Electrodes
Barun Dutta | IEEE Spectrum
"Version 2.0 of the [Neuropixels] system, demonstrated last year, increases the sensor count by about an order of magnitude over that of the initial version produced just four years earlier. It paves the way for future brain-computer interfaces that may enable paralyzed people to communicate at speeds approaching those of normal conversation. With version 3.0 already in early development, we believe that Neuropixels is just at the beginning of a long road of exponential Moore's Law-like growth in capabilities."

This Is What Flying Car Ports Should Look Like
Nicole Kobie | Wired
"It might be years before flying cars take to the skies, but designers and engineers are already testing the infrastructure they'll need to operate. To hail an air taxi, passengers will need to make their way to a local vertiport, which could sit atop train stations, office blocks, or even float in water. Figuring out exactly what these buildings will require isn't simple. Urban-Air worked with Coventry University on a virtual reality model to test the space before spending 11 weeks assembling Air One, [Urban-Air Port's 1,700-square-meter modular pop-up building]."

Image Credit: Bryan Colosky / Unsplash

Read the original here:
This Week's Awesome Tech Stories From Around the Web (Through June 4) - Singularity Hub


US's Frontier is the world's first exascale supercomputer – Freethink

The US's Frontier system is now the fastest supercomputer in the world. It's also the first exascale computer, meaning it can process more than a quintillion calculations per second, an ability that could lead to breakthroughs in medicine, astronomy, and more.

Why it matters: Supercomputers aren't a fundamentally different kind of machine the way quantum computers are; they work in the same basic way as your laptop, but with much more powerful hardware. This makes them invaluable tools for data-intensive, computation-heavy research.


When the pandemic first started, for example, researchers used Summit, the world's fastest supercomputer at the time, to simulate how different compounds would attach to the coronavirus spike protein and potentially prevent infection.

"Summit was needed to rapidly get the simulation results we needed," said researcher Jeremy Smith in March 2020. "It took us a day or two, whereas it would have taken months on a normal computer."

Other scientists use supercomputers to analyze genomes, map the human brain, simulate the formation of stars, and more.

The rankings: Twice a year since 1993, the TOP500 project has released a list of the 500 most powerful supercomputers in the world. To compile this list, it measures each system's performance in FLOPS (floating-point operations per second).

A floating-point operation is a simple math problem (like adding two numbers). A person can typically perform at a rate of 1 FLOPS, meaning it takes us about one second to find the answer to one problem. Your PC might operate at about 150 gigaFLOPS, or 150 billion FLOPS.

In 2008, a supercomputer crossed the petaFLOPS threshold (one quadrillion FLOPS) for the first time, and since then, the goal has been an exaFLOPS system, capable of at least one quintillion FLOPS (that's a lot of zeroes: 1,000,000,000,000,000,000).
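
To put those rates side by side, here is a quick back-of-the-envelope sketch (my own arithmetic, using the figures quoted in this article, including Frontier's 1.102 exaFLOPS benchmark score reported below):

```python
# Rough arithmetic using the rates quoted in this article: a person at ~1 FLOPS,
# a PC at ~150 gigaFLOPS, and Frontier at ~1.102 exaFLOPS. How long does each
# need to perform one quintillion (1e18) floating-point operations?
OPERATIONS = 1e18
SECONDS_PER_YEAR = 3.15e7

rates = {
    "person (~1 FLOPS)": 1.0,
    "PC (~150 gigaFLOPS)": 150e9,
    "Frontier (~1.102 exaFLOPS)": 1.102e18,
}

for name, flops in rates.items():
    seconds = OPERATIONS / flops
    print(f"{name}: {seconds:.3g} seconds (about {seconds / SECONDS_PER_YEAR:.3g} years)")
```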


The fastest supercomputer: Frontier, a supercomputer at the Department of Energy's Oak Ridge National Laboratory (ORNL), has taken the top spot on the latest TOP500 list, and its score of 1.102 exaFLOPS on a benchmark test makes it the world's first exascale computer.

According to ORNL, creating a computer with that kind of power required a team of more than 100 people and millions of components. The system occupies a space of more than 4,000 square feet and includes 90 miles of cable and 74 cabinets, each weighing 8,000 pounds.

Frontier is already more than twice as powerful as the second-fastest supercomputer on the TOP500 list, Japan's Fugaku, which had a score of 442 petaFLOPS. And according to ORNL, Frontier's theoretical peak performance is nearly double its benchmark score, a full 2 exaFLOPS.

"Frontier is ushering in a new era of exascale computing to solve the world's biggest scientific challenges," ORNL Director Thomas Zacharia said. "This milestone offers just a preview of Frontier's unmatched capability as a tool for scientific discovery."

The caveat: Frontier might be the world's fastest supercomputer and the first to cross the exascale threshold according to the TOP500 list, but China is suspected of having two exascale systems; it just hasn't submitted test results to the TOP500 team.

"There are rumors China has something," Jack Dongarra, one of the project's leaders, told the New York Times. "There is nothing official."

Looking ahead: ORNL plans to continue testing and validating Frontier before granting scientists early access to it later in 2022. The system should then be fully operational by January 1, 2023.

"Scientists and engineers from around the world will put these extraordinary computing speeds to work to solve some of the most challenging questions of our era," said Jeff Nichols, ORNL Associate Lab Director for computing and computational sciences.


More:
USs Frontier is the worlds first exascale supercomputer - Freethink


Good News: Big step towards quantum internet and a village lit up by the sea – Euronews

It can be hard to find among the headlines but some news is good news.

Here is your weekly digest of what's going well in the world.

These are this week's positive news stories:

1. Scientists have identified the brain mechanism behind memory loss in old age

If you've ever forgotten where you left your keys or accidentally told the same story twice, help may soon be at hand.

Neuroscientists at Johns Hopkins have been working with rats to investigate the parts of the brain that control memory.

They have discovered a mechanism in the CA3 region of the hippocampus that appears to be responsible for a common type of memory loss and might turn out to be our greatest hope for combating Alzheimer's and other age-related neurological disorders.

The Johns Hopkins team has found that the mechanism is responsible for two basic, co-dependent memory functions: pattern separation and pattern completion.

Let's say you visit a restaurant with your family and a month later you visit the same restaurant again with your friends. You should be able to recognise that it is the same restaurant, even though some details have changed, like the people who work there, the menu, the people eating there, and so on. Your ability to recognise it as the same restaurant is the responsibility of the pattern completion function of the brain.

Now pattern separation is what allows you to remember, for example, which conversation happened when, so you do not confuse two similar experiences or patterns. Let's say you talked about love with your friends, and money with your family. Pattern separation allows you to remember who you had the conversation with.

What the Johns Hopkins team has discovered is that as the brain ages, our ability to distinguish patterns diminishes, and as a result our memory becomes impaired, causing us to become forgetful or repeat ourselves.

Concretely what happens is that the pattern separation function of the brain fades away, and the other function, the pattern completion one, takes over.

In other words, your brain is focused on the common experience of the restaurant, but leaves out the details of the separate visits, so you might remember you had a conversation about love, but be unsure who you had it with, your family or your friends.

But researchers noticed that some of the older rats they worked with performed their memory tasks perfectly, even though their neurons and pattern-recognising functions were impaired.

"It's just like people," says James J. Knierim from the Department of Neuroscience, Johns Hopkins University. "There's a lot of variability in humans in terms of their cognitive ageing and how their cognitive abilities can decline over age. So we see the same thing in our rat population."

Professor Knierim says that they want to turn all the rats, and subsequently people, into really high performers.

Something was allowing those rats to compensate for the deficit, a compensation we also see in those lucky humans who remain surprisingly sharp into their older years. If we can isolate this factor, the hope is that we can replicate it.

Is it just different strategies they use that they've learnt to compensate for deterioration in some of the brain function? Or is it the fact that their brains are not deteriorating as fast?

Identifying the memory loss mechanism could really help us understand what prevents impairment in some people and open the door to preventing or delaying cognitive decline in the elderly.

"We know that this same region that we're studying is one of the first areas that is affected in Alzheimer's," explains Professor Knierim, "so if we want to understand Alzheimer's and what it does, we need to understand how the brain ages normally."

2. The French village being lit up by the sea.

Living lamps are lighting up the small French town of Rambouillet, about 50 kilometres southwest of Paris.

It's the same natural phenomenon that allows fireflies to light up, and algae to glow at night when the water around them moves.

The lamps are the work of a French start-up called Glowee, which collects bioluminescent marine bacteria called Aliivibrio fischeri, which are then stored inside tubes filled with saltwater. This turns the tubes into fluorescent aquariums.

"The goal is to create a living bioluminescent raw material to create urban furniture and redesign the city of tomorrow, to be more respectful of biodiversity and the environment," says Sandra Rey, founder of Glowee.

Mrs Rey says they are currently developing the first pilot project of bioluminescence urban furniture, which will be installed in the city of Rambouillet in the fall.

"We are in the process of producing this urban furniture so that it can be tested in the field. And to then be able to, after this first pilot project, really deploy bioluminescence in the city of Rambouillet, but obviously in many other cities too."

The manufacturing process consumes less water than the production of LED lights and releases less CO2, while the liquid is also biodegradable.

Mrs Rey says Glowee works with almost 50 development projects today in France, with constructors, with developers and with municipalities directly.

3. We take a huge step towards a revolutionary quantum internet

Scientists are working on a groundbreaking new computer that will make the ones we use today seem like antiques.

They are using the mysterious powers of quantum mechanics, in a way Albert Einstein himself once deemed impossible.

Quantum mechanics could be revolutionary for modern life as we know it. Tasks that would take today's supercomputers thousands of years to complete could be performed in minutes.

But the thing is, quantum computing needs another technological breakthrough to reach its full potential. It needs the equivalent of a quantum internet: a network that can send quantum information between distant machines without being physically connected.

It needs what Einstein called "spooky action at a distance".

And a group of scientists at the Delft University of Technology in the Netherlands has done just that: spooky computing.

This team of physicists used a technique called quantum teleportation to send data across non-neighbouring locations in a quantum network.

Up until now, researchers have only been able to send data between neighbouring nodes, but the new study represents what they call a prime building block for the future of quantum networks and the advances in technology it will bring with it.

4. A new gel can absorb water from desert air and make it drinkable

Pulling water out of thin air just became a reality, and not just for magicians.

Scientists and engineers at the University of Texas in Austin have come up with a gel film that could offer cheap access to clean drinking water for people living in arid regions around the globe.

A third of the world's population lives in drylands, which are areas that experience significant water shortages, so this advancement could have a huge global impact.

The gel can pull water from the air in even the driest climates, and it's as cheap as it is efficient.

The material costs around 2 a kilogram, and a single kilogram can produce more than six litres of water per day in areas with less than 15 per cent relative humidity. To give you an idea, Las Vegas, a notably dry US city that sits in the middle of a desert, has an average humidity rate of a little over 30 per cent.

And although six litres doesn't sound like much, the researchers say they could drastically increase the amount of water the invention yields by simply making thicker films or absorbent beds.

Pulling water from desert air is usually energy-intensive and rarely produces much clean water, but this invention is set to change all that. It's also easy to use and simple to replicate.

"It's very simple. It doesn't require advanced equipment or something else. You just mix it. It's even easier than making a meal," jokes Nancy Guo, lead researcher of the study.

"All the materials are easy to find," she says, adding that they were inspired by stuff in the kitchen, like salt, flour and sugar.

5. An EU plan to make solar panels mandatory on all new buildings

The outlook for Europe's energy crisis might soon get a little sunnier.

A new proposal from the European Commission intends to make solar panels mandatory on all new buildings within the European Union.

The goal is to make solar energy the largest electricity source in the bloc, replacing reliance on Russian oil and gas supplies with renewable energy.

Following Russia's invasion of Ukraine, the European Commission is speeding up its original green energy transition plans, increasing the renewable energy goals to 45 per cent of electricity consumption by 2030.

In 2020, renewable energy sources already made up 37.5 per cent of the EU's electricity consumption, meaning the continent is already well on track.

"The big lessons that we have to take from this war are that renewable energies are not only fundamental to facing the climate goal, but it's the best ally for the European Union for its independence and strategic autonomy, said Pedro Snchez, the Spanish prime minister, speaking at a World Economic Panel on energy in the Swiss resort town of Davos.

There's still work to be done, however, and the Commission's REPowerEU plan and its solar rooftop initiative are introducing a phased-in legal obligation to install solar panels on new public and commercial buildings, as well as new residential buildings, by 2029.

If the plan is successful, solar energy will become the largest electricity source in the EU by 2030, with more than half of the share coming from rooftops.

As well as the obvious environmental benefits, the EU hopes the plan will help reduce energy prices over time. In its World Energy Outlook 2020 report, the International Energy Agency (IEA) confirmed that solar power schemes now offer the cheapest electricity in history and predicted that by 2050 solar power production will skyrocket to become the world's primary source of electricity.

6. The Canadian chef helping immigrants into the workplace

Jessica Rosval has worked alongside triple-Michelin-starred chef Massimo Bottura in his restaurant Osteria Francescana, in Modena, Italy, for over a decade. She's received many awards along the way, but her most recent recognitions are for her humanitarian work.

This year she opened a brand new culinary venture that helps women who immigrate to Italy to find careers and integrate into life in a new country.

Roots, the social enterprise restaurant she opened in March with her friend Caroline Caporossi, showcases the cultural diversity of Modena's immigrant women.

Rosval says that the menu is inspired by her chefs-in-training and where they come from. "You know, the story of the trip from Cameroon to Modena or from Colombia to Italy."

Rosval says the training teaches the women participating "the technical skills needed to be able to pursue a professional career in cooking. But also non-technical skills that really help in terms of better understanding Italian bureaucracy, culture, the history of Modena, the food culture that exists in Modena, which are all also very fundamental and important aspects of cooking in this new country."

Dishes inspired by Cameroon, Guinea, Nigeria, Tunisia and Ghana are all on this seasons menu.

"For example, Zaira is one of our trainees, she's from Tunisia and in Tunisia they make brik, which is a rolled fried savoury dumpling filled with a lot of different things. It can be interpreted a lot of different ways in Tunisia, but there is always fresh cheese in the original Tunisian recipe. But when Zaira moved into Modena, she started making it with Parmigiano Reggiano. And when she told us that story, we thought it was great."

Rosval says that sometimes the best ways for us to get to understand new places is by picking out these little ingredients, and tasting the food and seeing what the actual land is giving us.

And how are the Italians taking it?

"We were unsure of what people's reactions would be. But it has been miraculous. We have had so much support from our community. We have had from the florists to the electricians to the plumbers, everybody donating their time, everybody donating their energy, their services. The restaurant is full every single night that we're open," Rosval told Euronews.

Besides teaching women how to cook and run a kitchen, Roots taps into a wide network of government agencies, small businesses and volunteers who help train the women in everything from how to open a bank account and manage household finances to workers' rights and dealing with Italian bureaucracy.

During this year alone, more than 17,000 migrants have arrived in Italy via boat, according to the UNHCR. Seven per cent of these are women, who can be doubly disadvantaged, both socially and economically.

Roots is part of the Modena-based Association for the Integration of Women, and just one of the incredible examples of local commitment to bringing these women into the workforce.

And if you're still hungry for more positive news, there's more below.

Read the original here:
Good News: Big step towards quantum internet and a village lit up by the sea - Euronews


The role of encrypted traffic analysis for threat detection [Q&A] – BetaNews

Everyone is striving to make their systems more secure and in many cases that means adopting encryption in order to protect data.

But the use of encrypted traffic over networks presents a headache for security teams as malicious content can be harder to detect. We spoke to Thomas Pore, director of security products at Live Action, to find out more about the problem and how it can be addressed.

BN: How is encrypted traffic impacting network threat detection today?

TP: The increased adoption of encrypted network protocols is causing the deterioration of network visibility for security teams, and legacy tools are increasingly less effective. In Q4 of 2021 alone, 78 percent of malware delivered via encrypted connections were evasive, according to a recent report, highlighting the growing threat of advanced malware attacks. Additionally, the rising acceptance of HTTPS, rapid deployment of encrypted protocols such as DNS over HTTPS, and TLS 1.3 are greatly decreasing visibility into server identity and content inspection, making threat detection more difficult, and in many cases nearly impossible, for network defenders. Once inside an organization's network, threat actors are leveraging encrypted sessions to move laterally -- east to west. Traditional detection tools only inspect north-south traffic. This gives attackers the advantage they need to complete advanced actions, like a ransomware attack.

BN: What is encrypted traffic analysis and why is it important to threat detection and response?

TP: Encrypted traffic analysis is a type of side-channel analysis that allows network defenders to do their jobs while maintaining the privacy and network integrity provided by a fully encrypted system. Encrypted Traffic Analysis, coupled with machine learning capabilities, evaluates complex data patterns over time and differentiates normal and abnormal activities, all without requiring access to the content of the data. It allows security teams to leverage varying types of C2 activity (such as beaconing, TLS fingerprinting and sequence of packet lengths) to quickly uncover malicious behavior and network anomalies, which are vital for effective threat detection and response. Effectively, ETA enables network transaction visibility, which provides valuable insights about the encrypted traffic to aid network defenders.
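
As a concrete, simplified illustration of that side-channel idea (a sketch of the general technique, not LiveAction's implementation), the snippet below flags a flow whose connection times are suspiciously regular, one common signature of C2 beaconing, using only timestamps and never the payload:

```python
# Illustrative only: C2 "beaconing" often shows up as unusually regular
# connection intervals in flow metadata, detectable without payload inspection.
# The timestamps below are made up.
from statistics import mean, pstdev

def looks_like_beaconing(timestamps, max_jitter_ratio=0.1):
    """Flag a flow whose gaps between connections are nearly constant."""
    gaps = [later - earlier for earlier, later in zip(timestamps, timestamps[1:])]
    enough_events = len(gaps) >= 5  # need several events to judge periodicity
    return enough_events and max_jitter_ratio * mean(gaps) >= pstdev(gaps)

beacon = [0, 60, 119, 181, 240, 301, 360]   # phones home roughly every minute
browsing = [0, 4, 90, 95, 300, 310, 900]    # ordinary, irregular activity
print(looks_like_beaconing(beacon))    # True
print(looks_like_beaconing(browsing))  # False
```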

BN: What is encryption blindness and how can it impact organizational security?

TP: Encryption blindness is caused by a lack of visibility into encrypted traffic leading to missed (hidden) threats in the network. Because most modern IT network traffic is now concealed in encryption, hackers can leverage this gap in security to hide their actions inside encrypted traffic. In other words, a large amount of traffic in organizations today goes uninspected simply because it's encrypted, opening the door to attacks. As threats get more sophisticated and the attack surface grows, the effectiveness of many traditional strategies is decreasing, such as IDS, IPS, and break-and-inspect decryption. This is challenging the effectiveness of organizational security more than ever.

BN: What is the difference between Deep Packet Inspection (DPI) and Deep Packet Dynamics (DPD) for ETA?

TP: Deep Packet Dynamics (DPD) is a new approach to evaluating network packets that eliminates the need for payload inspection. By analyzing more than 150 packet traits and behaviors across multi-vendor, multi-domain, and multi-cloud network environments, it can more reliably evaluate both encrypted and unencrypted traffic.

When DPD is coupled with machine learning and ETA, it enables unique capabilities for regaining visibility into encrypted traffic and delivers some of the most advanced network detection and response capabilities available today. This includes a variety of benefits such as detecting threats and anomalies others miss; detecting threats in real-time; eliminating encryption blindness; decreasing the time a SOC needs to investigate and respond to threats; validating end-to-end encryption compliance; offering visibility from core to edge to cloud; and enabling the security team to create a coordinated and cohesive response through other security tools like SIEM, SOAR, etc.

In contrast, Deep Packet Inspection (DPI) is an older legacy approach that primarily works on unencrypted or clear text protocols such as HTTP. But encryption undermines DPI and allows malicious payloads to hide in encrypted traffic. In short, DPD offers network defenders a much clearer vision of encrypted network traffic than DPI does.

BN: What role does ETA play in broader network detection and response solutions?

TP: Encrypted traffic analysis is a way to restore network visibility for defenders while maintaining privacy for users by combining DPD and advanced behavior analysis combined with machine learning. Malicious threat actors and malware system operators communicate with infected target systems using a set of techniques called Command and Control (C2). Threat actors employ C2 techniques to mimic expected, benign traffic using common ports and standard encryption protocols to avoid detection. Despite these precautions, ETA with machine learning effectively identifies malicious C2 activity on the network so you can stop an attack. Even with zero visibility into the content of the connection, ETA can tell a great deal about the behavior of encrypted traffic and helps network defenders prioritize their network detection and response activities.

BN: What's next or on the horizon -- when it comes to ETA?

TP: Encrypted traffic analysis will further fortify the long-term security strategies of organizations, through the continued characterization of encrypted flows and behavioral pattern recognition. This extends across endpoints, assets, and end-to-end encryption, mapping benign and expected traffic against malicious anomalies. Phishing and remote access protocols (RDP/VPN) continue to be the leading infection vectors of ransomware and state-sponsored APT actors. ETA's high-fidelity detection of anomalous characterization will be the difference in stopping the attack into the future.

Photo credit: Rawpixel.com / Shutterstock

Read more here:
The role of encrypted traffic analysis for threat detection [Q&A] - BetaNews


What is SSH access? Everything you need to know – TechRadar

SSH (Secure Shell) is a network protocol that enables secure communication between two devices, often used to access remote servers as well as to transfer files or execute commands.

SSH was originally developed by Tatu Ylonen in 1995 to replace Telnet, a network protocol that allowed users to connect to remote computers, most often to test connectivity or to remotely administer a server.

Today, SSH has become the standard for remote access for many organizations, and is used by system administrators to manage servers remotely or to securely connect to their personal computers. SSH is also commonly used to tunnel traffic through untrusted networks, such as public Wi-Fi hotspots.

SSH access is used for a variety of tasks, including remotely logging into servers, transferring files, and running commands. Some popular SSH clients include PuTTY (Windows), Terminal (Mac), and Linux Shell.

SSH is a powerful tool that can be used for a variety of tasks. However, it's important to note that SSH is not intended to be used as a general-purpose file transfer protocol. If you are looking to transfer files between two computers, you should use a tool such as SFTP (which itself runs over SSH) instead.

To get SSH access, you need to have a user account on your web hosting server. Once you have a user account, you can generate an SSH key pair. The public key will be added to the server's authorized_keys file, and the private key will be kept on your local machine. Once the key pair is generated, you can use an SSH client to connect to the server.

There are many different SSH clients available, but we recommend using PuTTY for Windows users and Terminal for Mac users. If you're using Linux, you should already have a Terminal application installed.

Once you've launched your chosen SSH client, enter the hostname or IP address of the server into the connection settings.

Make sure to select "SSH" as the connection type, and then enter your username. Once you've entered all of the necessary information, you can click "Connect" to connect to the server.

If everything was entered correctly, you should see a message asking for your password. Type in your password and hit "Enter". If you're connected successfully, you should see a command prompt for the server.

From here, you can run any commands that you would normally run on the server. To disconnect from the server, simply type "exit" at the command prompt and hit "Enter".
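
The same connect, run a command, and disconnect flow can also be scripted. Below is a minimal sketch using the third-party Python library paramiko (an assumption for illustration; the article itself assumes an interactive client such as PuTTY or Terminal, and the hostname, username and key path are placeholders):

```python
# A scripted version of the connect/run/exit flow described above, using the
# third-party "paramiko" library. Hostname, username and key path are placeholders.
import os
import paramiko

client = paramiko.SSHClient()
# Accept the host key on first connection; in practice, verify the server's
# fingerprint, as this article recommends further down.
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(
    "server.example.com",
    username="alice",
    key_filename=os.path.expanduser("~/.ssh/id_ed25519"),
)

stdin, stdout, stderr = client.exec_command("uptime")  # run a remote command
print(stdout.read().decode())

client.close()  # the scripted equivalent of typing "exit"
```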

SSH encryption is a process that uses mathematical algorithms to encode data. The sender and receiver of the encoded data can then use a secret key to decode the data.

This process helps to ensure that the data remains confidential and is not tampered with during transit. SSH also provides authentication, which helps to prevent unauthorized access to systems and data.

There are two main types of SSH encryption: public-key encryption and symmetric key encryption. Public key encryption uses two different keys, one for encoding and one for decoding.

The key pair is typically generated by one party (or issued by a trusted provider); the public key is shared openly, while the private key is kept secret. Symmetric key encryption uses the same key for both encoding and decoding. This means that the sender must first share the key with the receiver before any data can be encrypted or decrypted.

While both public key and symmetric key encryption are secure, symmetric key encryption is typically faster and is therefore often used for high-speed data transfers.
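
To make the same-key property concrete, here is a small sketch using the third-party Python cryptography package (purely illustrative; SSH negotiates its symmetric session keys automatically during the handshake rather than asking you to handle them like this):

```python
# Symmetric encryption in miniature, using the third-party "cryptography"
# package. Illustrative only: SSH derives its own session keys during the
# handshake; you never manage them by hand like this.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # the single shared secret
cipher = Fernet(key)

token = cipher.encrypt(b"password: hunter2")  # the sender encodes with the key
print(cipher.decrypt(token))                  # the receiver decodes with the same key
```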

Secure Shell (SSH) is available on all major mobile platforms, including iOS, and Android. It provides a secure way to access your mobile device's command-line interface (CLI), allowing you to run commands and transfer files without having to worry about someone eavesdropping on your session.

To use SSH on your mobile device, you'll need to install a client app such as Termius or Connectbot. Once you've installed a client app, you can connect to your device by entering its IP address into the app's connection screen. You'll also need to enter your username and password (if using password authentication).

SSH is not completely free, but it is free for many purposes. For example, when using SSH to access a remote server, you will need to pay for the server.

However, if you just want to use SSH to connect to a friend's computer, there is no charge. In general, SSH is free for personal use, but some commercial applications require a fee.

The short answer is no. Not all browsers support Secure Shell or SSH. The most popular browser that does not support SSH is Google Chrome. There are, however, many ways to get around this.

One way is to use a different browser that does support SSH such as Mozilla Firefox or Microsoft Edge. Another way is to use an extension for Google Chrome that will add SSH support.

SSH encrypts all traffic between the client and server, making it much more difficult for attackers to eavesdrop on communications.

This is especially important when transmitting sensitive information, such as passwords or financial data. SSH also provides authentication capabilities, meaning that only authorized users can access the server.

This is accomplished through the use of public-private key pairs. The server has a copy of the public key, and the client has a copy of the private key. When the client attempts to connect to the server, the server uses the public key to verify that the client has the private key. If everything checks out, then the user is granted access.
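
Below is a toy sketch of that verification idea using the third-party Python cryptography package (an illustration of the principle only, not OpenSSH's actual wire protocol):

```python
# A toy version of the public/private key check described above, using the
# third-party "cryptography" package. Illustrative only; this is not OpenSSH's
# real authentication protocol.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()   # stays on the client
public_key = private_key.public_key()        # a copy lives on the server

challenge = b"random-session-challenge"
signature = private_key.sign(challenge)      # client proves it holds the private key

try:
    public_key.verify(signature, challenge)  # server checks with the public key
    print("access granted")
except InvalidSignature:
    print("access denied")
```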

There are a few different types of SSH clients available, but the most popular ones are open-source. While open-source software is generally considered to be more secure than closed-source software, there is a debate about whether or not this is true for SSH clients.

Some people argue that open-source SSH clients are less secure because their source code is available for anyone to examine, which means potential attackers can find vulnerabilities more easily. Others argue that open-source SSH clients are more secure for exactly the same reason: with the code open to scrutiny, vulnerabilities are more likely to be found and fixed quickly.

Which side is right? It's hard to say for sure. There are pros and cons to both sides of the argument. Ultimately, it's up to each individual to decide whether they want to use an open-source or closed-source SSH client.

If security is your top priority, you may want to consider using a closed-source SSH client. However, if you're more concerned about features and flexibility, an open-source SSH client may be a better choice for you.

Secure Shell is a great tool for securing data in transit, as it can be used to encrypt traffic between two computers or secure data being sent over the internet.

Secure Shell can also be used to create secure tunnels between two computers, most often to securely connect to remote servers.

Additionally, it can be used to create secure backups of files and databases, and to protect data in transit.

SSH access is a great way to manage your web server remotely. There are a few things to keep in mind when using SSH. First of all, make sure that you are connecting to the correct server.

Secondly, make sure that your connection is secure by verifying the fingerprint of the server's SSH key. Lastly, make sure to use a strong password for your SSH account.

Step 1. You will need to create an SSH key pair. To do this, use the ssh-keygen command. After that, you need to copy the public key.

Step 2. You will now install the public key on the server. To do this, you will use the ssh-copy-id command. This works on a Unix or Linux server.

Step 3. Next, you need to add your user account to the wheel or sudo admin group.

Step 4. Next, you should disable password login in the SSH server configuration, including for the root account.

Step 5. Now, you need to test your passwordless SSH key login. To do this, connect with ssh user@server-name.
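
If you prefer to script steps 1, 2 and 5, a minimal sketch using Python's standard library is shown below (an assumption for illustration; these commands are normally typed straight into a terminal, and the username and server name are placeholders):

```python
# A scripted sketch of steps 1, 2 and 5 above, shelling out to the standard
# OpenSSH tools. The username and server name are placeholder values.
import subprocess
from pathlib import Path

key_path = Path.home() / ".ssh" / "id_ed25519"

# Step 1: create the key pair (empty passphrase here, purely for illustration).
if not key_path.exists():
    subprocess.run(["ssh-keygen", "-t", "ed25519", "-N", "", "-f", str(key_path)], check=True)

# Step 2: copy the public key into the server's authorized_keys file.
subprocess.run(["ssh-copy-id", "-i", f"{key_path}.pub", "alice@server.example.com"], check=True)

# Step 5: test the passwordless login by running a harmless remote command.
subprocess.run(["ssh", "-i", str(key_path), "alice@server.example.com", "true"], check=True)
```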

Security is always a top priority when it comes to choosing a web hosting provider. When it comes to SSH access, you want to make sure that your provider uses strong encryption methods and that their servers are well protected.

Ease of use is also important. You want a hosting provider that makes it easy to set up and manage your SSH access. And finally, price is always a factor you should consider if you're on a budget. You want to find a provider that offers competitive pricing without sacrificing quality or security.

So without further ado, here are the best hosting providers that offer SSH access:

Bluehost

Bluehost is a great choice for SSH access. They offer strong encryption methods and their servers are well protected. Bluehost is also easy to set up and manage, making it a great choice for those who are new to using SSH. And finally, Bluehost offers competitive pricing without sacrificing quality or security.

HostGator

HostGator is a top provider of secure and reliable web hosting. They offer SSH access on all of their plans, including shared hosting, VPS, and dedicated servers. HostGator uses strong encryption methods to keep your data safe and their servers are well protected. They also offer an easy-to-use control panel that makes it easy to manage your SSH access.

InMotion Hosting

InMotion Hosting offers strong security and easy management of SSH access. Their prices are competitive, and they offer a wide range of plans to choose from.

Some third-party programs are more secure than others. If you're using a program that isn't as secure, you may not be getting the same level of protection as you would with SSH. That said, there are some things you can do to help keep your data safe even when using a less secure program.

Here are a few tips:

- Make sure the program you're using is up to date. Older versions may have security vulnerabilities that have since been fixed.
- Be careful about what information you share through the program. Don't share sensitive information unless you're confident it will be kept safe.

There are a few different ways to connect to a remote server without using SSH. Here is the list of SSH alternatives:

Eternal Terminal

Eternal Terminal is one way to connect to a remote server without using SSH. It's an open-source, cross-platform terminal emulator and telnet client. It can be used as a drop-in replacement for the standard Terminal app on macOS and Linux.

Features include:

- Supports SSH, telnet, and raw socket connections
- Automatic reconnection
- Scriptable with Lua
- Cross-platform support for macOS, Linux, Windows, and more.

Mosh

Mosh is a free and open-source replacement for the SSH terminal application. Mosh can be used to connect to any server that has an SSH daemon running.

Mosh has several features that make it more reliable than SSH, including:

- UDP support: This means that Mosh can reconnect if the connection is dropped, without losing any data.
- Mobile device support: Mosh works well on mobile devices with high latency or unstable connections.
- Keyboard handling: Mosh supports most of the same keyboard shortcuts as SSH, making it easy to use for anyone familiar with SSH.

SSH is a powerful tool that can be used for a variety of tasks such as remotely logging into servers, running commands, and transferring files. It's important to note that SSH is not intended to be used as a general-purpose file transfer protocol, and should only be used when security is a concern.

By using SSH, you can encrypt your traffic so that anyone who is sniffing the network will not be able to read your data.

Continue reading here:
What is SSH access? Everything you need to know - TechRadar


Explained: Social media and the Texas shooter’s messages – The Indian Express

Could technology companies have monitored ominous messages made by a gunman who Texas authorities say massacred 19 children and two teachers at an elementary school? Could they have warned the authorities? Answers to these questions remain unclear, in part because official descriptions of the shooting and the gunman's social media activity have continued to evolve. For instance, on Thursday Texas officials made significant revisions to their timeline of events for the shooting.

But if nothing else, the shooting in Uvalde, Texas, seems highly likely to focus additional attention on how social platforms monitor what users are saying to and showing each other.

A day after the Tuesday shooting, Texas Gov. Greg Abbott said this: "There was no meaningful forewarning of this crime other than what I'm about to tell you: As of this time the only information that was known in advance was posted by the gunman on Facebook approximately 30 minutes before reaching the school." Facebook posts are typically distributed to a wide audience. Shortly thereafter, Facebook stepped in to note that the gunman sent one-to-one direct messages, not public posts, and that they weren't discovered until after the terrible tragedy.

HOW DID THE GUNMAN USE SOCIAL MEDIA?

By Thursday, new questions arose as to which and how many tech platforms the gunman used in the days before the shooting. The governor's office referred questions about the gunman's online messages to the Texas Department of Public Safety, which didn't respond to emailed requests for comment.

Some reports appear to show that at least some of the gunman's communications used Apple's encrypted iPhone messaging services, which makes messages almost impossible for anyone else to read when sent to another iPhone user.

Facebook parent company Meta, which also owns Instagram, says it is working with law enforcement but declined to provide details. Apple didn't respond to requests for comment.

The latest mass shootings in the US by active social-media users may bring more pressure on technology companies to heighten their scrutiny of online communications, even though conservative politicians, Abbott among them, are also pushing social platforms to relax their restrictions on some speech.

COULD TECH COMPANIES HAVE CAUGHT THE SHOOTER'S MESSAGES?

It would depend on which services Salvador Ramos used. A series of posts appeared on his Instagram in the days leading up to the shooting, including photos of a gun magazine in hand and two AR-style semi-automatic rifles. An Instagram user who was tagged in one post shared parts of what appears to be a chilling exchange on Instagram in which Ramos asked her to share his gun pictures with her more than 10,000 followers.

Meta has said it monitors people's private messages for some kinds of harmful content, such as links to malware or images of child sexual exploitation. But copied images can be detected using unique identifiers, a kind of digital signature, which makes them relatively easy for computer systems to flag. Trying to interpret a string of threatening words, which can resemble a joke, satire or song lyrics, is a far more difficult task for artificial intelligence systems.

Facebook could, for instance, flag certain phrases such as "going to kill" or "going to shoot," but without context, something AI in general has a lot of trouble with, there would be too many false positives for the company to analyze. So Facebook and other platforms rely on user reports to catch threats, harassment and other violations of the law or their own policies.

SOCIAL PLATFORMS LOCK UP THEIR MESSAGES

Even this kind of monitoring could soon be obsolete, since Meta plans to roll out end-to-end encryption on its Facebook and Instagram messaging systems next year. Such encryption means that no one other than the sender and the recipient, not even Meta, can decipher people's messages. WhatsApp, also owned by Meta, already uses such encryption.

A recent Meta-commissioned report emphasized the benefits of such privacy but also noted some risks, including users who could abuse the encryption to sexually exploit children, facilitate human trafficking and spread hate speech.

Apple has long had end-to-end encryption on its messaging system. That has brought the iPhone maker into conflict with the Justice Department over messaging privacy. After the deadly shooting of three US sailors at a Navy installation in December 2019, the Justice Department insisted that investigators needed access to data from two locked and encrypted iPhones that belonged to the alleged gunman, a Saudi aviation student.


Security experts say this could be done if Apple were to engineer a backdoor to allow access to messages sent by alleged criminals. Such a secret key would let them decipher encrypted information with a court order.

But the same experts warned that such backdoors into encryption systems make them inherently insecure. Just knowing that a backdoor exists is enough to focus the world's spies and criminals on discovering the mathematical keys that could unlock it. And when they do, everyone's information is essentially vulnerable to anyone with the secret key.

See the article here:
Explained: Social media and the Texas shooter's messages - The Indian Express


What is quantum mechanics trying to tell us? – Big Think

Classical physics did not need any disclaimers. The kind of physics that was born with Isaac Newton and ruled until the early 1900s seemed pretty straightforward: Matter was like little billiard balls. It accelerated or decelerated when exposed to forces. None of this needed any special interpretations attached. The details could get messy, but there was nothing weird about it.

Then came quantum mechanics, and everything got weird really fast.

Quantum mechanics is the physics of atomic-scale phenomena, and it is the most successful theory we have ever developed. So why are there a thousand competing interpretations of the theory? Why does quantum mechanics need an interpretation at all?

What, fundamentally, is it trying to tell us?

There are many weirdnesses in quantum physics, many ways it differs from the classical worldview of perfectly knowable particles with perfectly describable properties. The weirdness you focus on will tend to be the one that shapes your favorite interpretation.

But the weirdness that has stood out most, the one that has shaped the most interpretations, is the nature of superpositions and of measurement in quantum mechanics.


Everything in physics comes down to the description of what we call the state. In classical physics, the state of a particle was just its position and momentum. (Momentum is related to velocity.) The position and velocity could be known with as much accuracy as your equipment allowed. Most important, the state was never connected to making a measurement: you never had to look at the particle. But quantum mechanics forces us to think about the state in a very different way.

In quantum physics, the state represents the possible outcomes of measurements. Imagine you have a particle in a box, and the box has two accessible chambers. Before a measurement is made, the quantum state is in a superposition, with one term for the particle being in the first chamber and another term for the particle being in the second chamber. Both terms exist at the same time in the quantum state. It is only after a measurement is made that the superposition is said to collapse, and the state has only one term: the one that corresponds to seeing the particle in the first or the second chamber.
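
A toy numerical version of that two-chamber picture (my own illustration, not the article's) makes the bookkeeping explicit: an amplitude for each chamber, Born-rule probabilities, and a collapse to a single term after the simulated measurement.

```python
# A toy model of the two-chamber example above (an illustration, not from the
# article): the state assigns a complex amplitude to each chamber, the Born
# rule squares those amplitudes into probabilities, and "measuring" collapses
# the state onto whichever chamber was observed.
import numpy as np

state = np.array([1, 1], dtype=complex) / np.sqrt(2)  # equal superposition of both chambers

probabilities = np.abs(state) ** 2          # Born rule: [0.5, 0.5]
outcome = np.random.choice([0, 1], p=probabilities)

collapsed = np.zeros(2, dtype=complex)
collapsed[outcome] = 1.0                    # only one term survives the measurement
print(f"found the particle in chamber {outcome + 1}; the state is now {collapsed}")
```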

So, what is going on here? How can a particle be in two places at the same time? This is also akin to asking whether particles have properties in and of themselves. Why should making a measurement change anything? And what exactly is a measurement? Do you need a person to make a measurement, or can you say that any interaction at all with the rest of the world is a measurement?

These kinds of questions have spawned a library's worth of so-called quantum interpretations. Some of them try to preserve the classical worldview by finding some way to minimize the role of measurement and preserve the reality of the quantum state. Here, "reality" means that the state describes the world by itself, without any reference to us. At the extreme end of these is the Many Worlds Interpretation, which makes each possibility in the quantum state a parallel Universe that will be realized when a quantum event, a measurement, happens.

This kind of interpretation is, to me, a mistake. My reasons for saying this are simple.

When the inventors of quantum mechanics broke with classical physics in the first few decades of the 1900s, they were doing what creative physicists do best. They were finding new ways to predict the results of experiments by creatively building off the old physics while extending it in ways that embraced new behaviors seen in the laboratory. That took them in a direction where measurement began to play a central role in the description of physics as a whole. Again and again, quantum mechanics has shown that at the heart of its many weirdnesses is the role played by someone acting on the world to gain information. That to me is the central lesson quantum mechanics has been trying to teach us: that we are involved, in some way, in the description of the science we do.

Now to be clear, I am not arguing that the observer affects the observed, or that physics needs a place for some kind of Cosmic Mind, or that consciousness reaches into the apparatus and changes things. There are much more subtle and interesting ways of hearing what quantum mechanics is trying to say to us. This is one reason I find much to like in the interpretation called QBism.

What matters is trying to see into the heart of the issue. After all, when all is said and done, what is quantum mechanics pointing to? The answer is that it points to us. It is trying to tell us what it means to be a subject embedded in the Universe, doing this amazing thing called science. To me that is just as exciting as a story about a God's-eye view of the Universe.

See the article here:

What is quantum mechanics trying to tell us? - Big Think

Read More..

How the Multiverse could break the scientific method – Big Think

Today let's take a walk on the wild side and assume, for the sake of argument, that our Universe is not the only one that exists. Let's consider that there are many other universes, possibly infinitely many. The totality of these universes, including our own, is what cosmologists call the Multiverse. It sounds more like a myth than a scientific hypothesis, and this conceptual troublemaker inspires some while it outrages others.

The controversy started in the 1980s. Two physicists, Andrei Linde at Stanford University and Alex Vilenkin at Tufts University, independently proposed that if the Universe underwent a very fast expansion early on in its existence (we call this an inflationary expansion), then our Universe would not be the only one.

This inflationary phase of growth presumably happened a trillionth of a trillionth of a trillionth of one second after the beginning of time. That is about 10^-36 seconds after the bang, when the clock that describes the expansion of our universe started ticking. You may ask, "How come these scientists feel comfortable talking about times so ridiculously small? Wasn't the Universe also ridiculously dense at those times?"

Well, the truth is we do not yet have a theory that describes physics under these conditions. What we do have are extrapolations based on what we know today. This is not ideal, but given our lack of experimental data, it is the only place we can start from. Without data, we need to push our theories as far as we consider reasonable. Of course, what is reasonable for some theorists will not be for others. And this is where things get interesting.

The supposition here is that we can apply essentially the same physics at energies that are about one thousand trillion times higher than the ones we can probe at the Large Hadron Collider, the giant accelerator housed at the European Organization for Nuclear Research in Switzerland. And even if we cannot apply quite the same physics, we can at least apply physics with similar actors.

In high energy physics, all the characters are fields. Fields, here, mean disturbances that fill space and may or may not change in time. A crude picture of a field is that of water filling a pond. The water is everywhere in the pond, with certain properties that take on values at every point: temperature, pressure, and salinity, for example. Fields have excitations that we call particles. The electron field has the electron as an excitation. The Higgs field has the Higgs boson. In this simple picture, we could visualize the particles as ripples of water propagating along the surface of the pond. This is not a perfect image, but it helps the imagination.

The most popular protagonist driving inflationary expansion is a scalar field, an entity with properties inspired by the Higgs boson, which was discovered at the Large Hadron Collider in July 2012.


We do not know if there were scalar fields in the Universe's infancy, but it is reasonable to suppose there were. Without them, we would be horribly stuck trying to picture what happened. As mentioned above, when we do not have data, the best we can do is build reasonable hypotheses that future experiments will hopefully test.

To see how we use a scalar field to model inflation, picture a ball rolling downhill. As long as the ball is at a height above the bottom of the hill, it will roll down. It has stored energy. At the bottom, we set its energy to zero. We do the same with the scalar field. As long as it is displaced from its minimum, it will fill the Universe with its energy. In large enough regions, this energy prompts the fast expansion of space that is the signature of inflation.
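In slightly more formal terms, a standard textbook sketch (not spelled out in the article) describes the rolling ball as a homogeneous scalar field \phi with potential V(\phi):

\[
\ddot{\phi} + 3H\dot{\phi} + V'(\phi) = 0,
\qquad
H^{2} = \frac{8\pi G}{3}\left(\tfrac{1}{2}\dot{\phi}^{2} + V(\phi)\right),
\]

where H is the expansion rate of space. While the field sits high on its potential and rolls slowly, so that V(\phi) dominates over the kinetic term, H stays nearly constant and distances grow as a(t) ∝ e^{Ht}: that runaway growth is inflation. When the field reaches the bottom of the hill, V drops to (nearly) zero and the accelerated expansion ends.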

Linde and Vilenkin added quantum physics to this picture. In the world of the quantum, everything is jittery; everything vibrates endlessly. This is at the root of quantum uncertainty, a notion that defies common sense. So as the field is rolling downhill, it is also experiencing these quantum jumps, which can kick it further down or further up. It's as if the waves in the pond were erratically creating crests and valleys. Choppy waters, these quantum fields.

Here comes the twist: When a sufficiently large region of space is filled with the field of a certain energy, it will expand at a rate related to that energy. Think of the temperature of the water in the pond. Different regions of space will have the field at different heights, just as different regions of the pond could have water at different temperatures. The result for cosmology is a plethora of madly inflating regions of space, each expanding at its own rate. Very quickly, the Universe would consist of myriad inflating regions that grow, unaware of their surroundings. The Universe morphs into a Multiverse. Even within each region, quantum fluctuations may drive a sub-region to inflate. The picture, then, is one of an eternally replicating cosmos, filled with bubbles within bubbles. Ours would be but one of them: a single bubble in a frothing Multiverse.
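The competition between rolling and jittering can be stated compactly; this is a common heuristic rather than something the article derives. Over one expansion time 1/H, the field rolls classically by roughly \dot{\phi}/H, while a typical quantum kick has size H/2\pi. Whenever

\[
\frac{H^{2}}{2\pi\,|\dot{\phi}|} \;\gtrsim\; 1,
\]

the kicks win often enough that some regions are pushed back uphill and keep inflating, spawning ever more inflating patches. That is the eternally replicating, bubbles-within-bubbles picture described above.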

This is wildly inspiring. But is it science? To be scientific, a hypothesis needs to be testable. Can you test the Multiverse? The answer, in a strict sense, is no. Each of these inflating regions (or contracting ones, as there could also be failed universes) is outside our cosmic horizon, the region that delimits how far light has traveled since the beginning of time. As such, we cannot see these cosmoids, nor receive any signals from them. The best that we can hope for is to find a sign that one of our neighboring universes bruised our own space in the past. If this had happened, we would see some specific patterns in the sky, more precisely in the radiation left over after hydrogen atoms formed some 400,000 years after the Big Bang. So far, no such signal has been found. The chances of finding one are, quite frankly, remote.

We are thus stuck with a plausible scientific idea that seems untestable. Even if we were to find evidence for inflation, that would not necessarily support the inflationary Multiverse. What are we to do?

The Multiverse suggests another ingredient: the possibility that physics is different in different universes. Things get pretty nebulous here, because there are two kinds of "different" to describe. The first is different values for the constants of nature (such as the electron charge or the strength of gravity), while the second raises the possibility that there are different laws of nature altogether.

In order to harbor life as we know it, our Universe has to obey a series of very strict requirements. Small deviations are not tolerated in the values of natures constants. But the Multiverse brings forth the question of naturalness, or of how common our Universe and its laws are among the myriad universes belonging to the Multiverse. Are we the exception, or do we follow the rule?

The problem is that we have no way to tell. To know whether we are common, we need to know something about the other universes and the kinds of physics they have. But we don't. Nor do we know how many universes there are, and this makes it very hard to estimate how common we are. To make things worse, if there are infinitely many cosmoids, we cannot say anything at all. Inductive thinking is useless here. Infinity gets us tangled up in knots. When everything is possible, nothing stands out, and nothing is learned.

That is why some physicists worry about the Multiverse to the point of loathing it. There is nothing more important to science than its ability to prove ideas wrong. If we lose that, we undermine the very structure of the scientific method.

Follow this link:

How the Multiverse could break the scientific method - Big Think

Read More..

No, particle physics on Earth won't ever destroy the Universe – Big Think

Anytime you reach deeper into the unknown than ever before, you should not only wonder about what you're going to find, but also worry about what sort of demons you might unearth. In the realm of particle physics, that double-edged sword arises the farther we probe into the high-energy Universe. The better we can explore the previously inaccessible energy frontier, the better we can reveal the high-energy processes that shaped the Universe in its early stages.

Many of the mysteries of how our Universe began and evolved from the earliest times can best be investigated by this exact method: colliding particles at higher and higher energies. New particles and rare processes can be revealed through accelerator physics at or beyond the current energy frontiers, but this is not without risk. If we can reach high enough energies, certain consequences, not all of which are desirable, could be in store for us all. And yet, just as was the case with the notion that "the LHC could create black holes that destroy the Earth," we know that any experiment we perform on Earth won't give rise to any dire consequences at all. The Universe is safe from any current or planned particle accelerators. This is how we know.

The idea of a linear lepton collider has been bandied about in the particle physics community as the ideal machine to explore post-LHC physics for many decades, but only if the LHC makes a beyond-the-Standard-Model discovery. Direct confirmation of what new particles could be causing CDF's observed discrepancy in the W-boson's mass might be a task best suited to a future circular collider, which can reach higher energies than a linear collider ever could.

There are a few different approaches to making particle accelerators on Earth, with the biggest differences arising from the types of particles we're choosing to collide, such as electrons with positrons, protons with antiprotons, or protons with protons, and the energies we're able to achieve when we're colliding them.


In the future, it may be possible to collide muons with anti-muons, getting the best of both the electron-positron and the proton-antiproton worlds, but that technology isn't quite there yet.

A candidate Higgs event in the ATLAS detector at the Large Hadron Collider at CERN. Note how even with the clear signatures and transverse tracks, there is a shower of other particles; this is due to the fact that protons are composite particles, and due to the fact that dozens of proton-proton collisions occur with every bunch crossing. Examining how the Higgs decays to very high precision is one of the key goals of the HL-LHC.

Regardless, the thing that poses the most danger to us is whatever's up there at the highest energy-per-particle-collision that we get. On Earth, that record is held by the Large Hadron Collider, where the overwhelming majority of proton-proton collisions actually result in the gluons inside each proton colliding. When they smash together, because the proton's total energy is split among its constituent particles, only a fraction of the total energy belongs to each gluon, so it takes a large number of collisions to find one where a large portion of that energy, say 50% or more, belongs to the relevant, colliding gluons.

When that occurs, however, that's when the most energy is available to either create new particles (via E = mc^2) or to perform other actions that energy can perform. One of the ways we measure energies in physics is in electron-volts (eV): the energy an electron gains when it moves through an electric potential difference of one volt. At the Large Hadron Collider, the current record-holder for laboratory energies on Earth, the most energetic particle-particle collision possible is 14 TeV, or 14,000,000,000,000 eV.
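For a sense of scale, here is a small back-of-the-envelope calculation (my own sketch; the constants are standard values, not figures quoted in the article) converting 14 TeV into joules and into an equivalent mass via E = mc^2:

E_eV = 14e12                 # 14 TeV expressed in electron-volts
eV_to_J = 1.602176634e-19    # joules per electron-volt
c = 299_792_458.0            # speed of light in m/s

E_J = E_eV * eV_to_J         # collision energy in joules (~2.2e-6 J)
m_kg = E_J / c**2            # equivalent mass from E = mc^2 (~2.5e-23 kg)

print(f"14 TeV ~ {E_J:.2e} J, or ~ {m_kg:.2e} kg of mass-energy")

Enormous as it is for a single subatomic collision, 14 TeV amounts to only a couple of millionths of a joule, or roughly 10^-23 kg of mass-energy, which is why, as noted below, even a collision that turned all of its energy into antimatter could only ever annihilate a vanishingly small amount of ordinary matter.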

Although no light can escape from inside a black hole's event horizon, the curved space outside of it results in a difference between the vacuum state at different points near the event horizon, leading to the emission of radiation via quantum processes. This is where Hawking radiation comes from, and for the tiniest-mass black holes, Hawking radiation will lead to their complete decay in under a fraction of a second.

There are things we can worry will happen at these highest of energies, each with its own potential consequence for either Earth or even for the Universe as a whole. A non-exhaustive list includes: creating a tiny black hole; restoring the symmetry that was in place before the Universe's matter-antimatter asymmetry arose; recreating the conditions of cosmic inflation; and kicking the quantum vacuum into a different configuration. Each of these possibilities is examined below.

If you draw out any potential, it will have a profile where at least one point corresponds to the lowest-energy, or true vacuum, state. If there is a local minimum anywhere else, higher in energy, that can be considered a false vacuum, and it will always be possible, assuming this is a quantum field, to quantum tunnel from the false vacuum to the true vacuum state. The greater the kick you apply to a false vacuum state, the more likely it is that the state will exit the false vacuum and wind up in a different, more stable, truer minimum.
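One simple way to picture a false and a true vacuum side by side is a slightly tilted double-well potential; this is an illustrative toy model of my own choosing, not one taken from the article:

\[
V(\phi) \;=\; \lambda\left(\phi^{2} - v^{2}\right)^{2} \;+\; \epsilon\,\phi,
\qquad \lambda,\, v,\, \epsilon > 0,\ \ \epsilon\ \text{small},
\]

which has two minima near \phi \approx \pm v. The small tilt \epsilon raises the minimum near +v slightly above the one near -v, so the +v well is the false vacuum and the -v well is the true vacuum. A field settled in the higher well is classically stuck behind the barrier, but quantum mechanically it can tunnel through, and the harder it gets kicked, the more likely it is to end up in the deeper, truer minimum.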

Although these scenarios are all bad in some sense, some are worse than others. The creation of a tiny black hole would lead to its immediate decay. If you didn't want it to decay, you'd have to impose some sort of new symmetry (for which there is neither evidence nor motivation) to prevent its decay, and even then, you'd just have a tiny-mass black hole that behaved similarly to a new, massive, uncharged particle. The worst it could do is begin absorbing the matter particles it collided with, and then sink to the center of whatever gravitational object it was a part of. Even if you made it on Earth, it would take trillions of years to absorb enough matter to rise to a mass of 1 kg; it's not threatening at all.

The restoration of whatever symmetry was in place before the Universe's matter-antimatter asymmetry arose is also interesting, because it could lead to the destruction of matter and the creation of antimatter in its place. As we all know, matter and antimatter annihilate upon contact, which is bad news for any matter that exists close to this point. Fortunately, however, the absolute energy of any particle-particle collision is tiny, corresponding to tiny fractions of a microgram in terms of mass. Even if we created a net amount of antimatter from such a collision, it would only be capable of destroying a small amount of matter, and the Universe would be fine overall.

The simplest model of inflation is that we started off at the top of a proverbial hill, where inflation persisted, and rolled into a valley, where inflation came to an end and resulted in the hot Big Bang. If that valley isn't at a value of zero, but instead at some positive, non-zero value, it may be possible to quantum-tunnel into a lower-energy state, which would have severe consequences for the Universe we know today. It's also possible that a kick of the right energy could restore the inflationary potential, leading to a new state of rapid, relentless, exponential expansion.

But if we instead were able to recreate the conditions under which inflation occurred, things would be far worse. If it happened out in space somewhere, we'd create, in just a tiny fraction of a second, the greatest cosmic void we could imagine. Whereas today there's only a tiny amount of energy inherent to the fabric of empty space, something on the order of the rest-mass energy of only a few protons per cubic meter, during inflation it was more like a googol protons (10^100) per cubic meter.

If we could achieve those same energy densities anywhere in space, they could potentially restore the inflationary state, and that would lead to the same Universe-emptying exponential expansion that occurred more than 13.8 billion years ago. It wouldn't destroy anything in our Universe, but it would lead to an exponential, rapid, relentless expansion of space in the region where those conditions occur again.

That expansion would push the space that our Universe occupies outward, in all three dimensions, as it expands, creating a large cosmic bubble of emptiness that would lead to unmistakable signatures that such an event had occurred. It clearly has not, at least, not yet, but in theory, this is possible.

Visualization of a quantum field theory calculation showing virtual particles in the quantum vacuum. (Specifically, for the strong interactions.) Even in empty space, this vacuum energy is non-zero, and what appears to be the ground state in one region of curved space will look different from the perspective of an observer where the spatial curvature differs. As long as quantum fields are present, this vacuum energy (or a cosmological constant) must be present, too.

And finally, the Universe today exists in a state where the quantum vacuum, the zero-point energy of empty space, is non-zero. This is inextricably linked, although we don't know how to perform the calculation that underlies it, to the fundamental physical fields and couplings and interactions that govern our Universe: the physical laws of nature. At some level, the quantum fluctuations in those fields that cannot be extricated from space itself, including the fields that govern all of the fundamental forces, dictate what the energy of empty space itself is.

But it's possible that this isn't the only configuration for the quantum vacuum; it's plausible that other energy states exist. Whether they're higher or lower doesn't matter; whether our vacuum state is the lowest possible one (i.e., the true vacuum) or whether another is lower doesn't matter either. What matters is whether there are any other minima, any other stable configurations, that the Universe could possibly exist in. If there are, then reaching high enough energies could kick the vacuum state in a particular region of space into a different configuration: one with a different zero-point energy of space, different couplings among the fundamental fields, or different properties for the particles those fields give rise to.

Any of these would, if it were a more stable configuration than the one that our Universe currently occupies, cause that new vacuum state to expand at the speed of light, destroying all of the bound states in its path, down to atomic nuclei themselves. This catastrophe would, over time, destroy billions of light-years' worth of cosmic structure; if it happened within about 18 billion light-years of Earth, that would eventually include us, too.

The size of our visible Universe (yellow), along with the amount we can reach (magenta). The limit of the visible Universe is 46.1 billion light-years, as that's the limit of how far away an object that emitted light that would just be reaching us today would be after expanding away from us for 13.8 billion years. However, beyond about 18 billion light-years, we can never access a galaxy even if we traveled towards it at the speed of light. Any catastrophe that occurred within 18 billion light-years of us would eventually reach us; ones that occur today at distances farther away never will.

There are tremendous uncertainties connected to these events. Quantum black holes could be just out of reach of our current energy frontier. It's possible that the matter-antimatter asymmetry was only generated during electroweak symmetry breaking, potentially putting it within current collider reach. Inflation must have occurred at higher energies than we've ever reached, as must the processes that determine the quantum vacuum, but we don't know how low those energies could have been. We only know, from observations, that such an event hasn't yet happened within our observable Universe.

But, despite all of this, we don't have to worry about any of our particle accelerators, past, present, or even into the far future, causing any of these catastrophes here on Earth. The reason is simple: the Universe itself is filled with natural particle accelerators that are far, far more powerful than anything we've ever built or even proposed here on Earth. From collapsed stellar objects that spin rapidly, such as white dwarfs, neutron stars, and black holes, very strong electric and magnetic fields can be generated by charged, moving matter under extreme conditions. It's suspected that these are the sources of the highest-energy particles we've ever seen: the ultra-high-energy cosmic rays, which have been observed to achieve energies many millions of times greater than any accelerator on Earth ever has.

The energy spectrum of the highest-energy cosmic rays, by the collaborations that detected them. The results are remarkably consistent from experiment to experiment, and reveal a significant drop-off at the GZK threshold of ~5 x 10^19 eV. Still, many such cosmic rays exceed this energy threshold, indicating that either this picture is not complete or that many of the highest-energy particles are heavier nuclei, rather than individual protons.

Whereas we've reached up above the ten TeV threshold for accelerators on Earth, or 10^13 eV in scientific notation, the Universe routinely creates cosmic rays that rise up above the 10^20 eV threshold, with the record set more than 30 years ago by an event known, appropriately, as the Oh-My-God particle. Even though the highest-energy cosmic rays are thought to be heavy atomic nuclei, like iron, rather than individual protons, that still means that when two of them collide with one another (a near-certainty within our Universe, given the vastness of space, the fact that galaxies were closer together in the past, and the long lifetime of the Universe), there are many events producing center-of-mass collision energies in excess of 10^18 or even 10^19 eV.
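To see why cosmic-ray collisions already probe energies far beyond the LHC, here is a rough relativistic-kinematics sketch (my own illustration; the formulas are the standard invariant-mass expressions, with particle masses neglected where they are negligible):

import math

def sqrt_s_head_on(E1_eV, E2_eV):
    # Center-of-mass energy for two ultra-relativistic particles meeting head-on:
    # with masses neglected, s ~ 4*E1*E2, so sqrt(s) ~ 2*sqrt(E1*E2).
    return 2.0 * math.sqrt(E1_eV * E2_eV)

def sqrt_s_fixed_target(E_eV, m_target_eV=0.938e9):
    # Center-of-mass energy for an ultra-relativistic particle striking a target at
    # rest (default: a proton); with the beam mass neglected, s ~ 2*E*m_target.
    return math.sqrt(2.0 * E_eV * m_target_eV)

# An Oh-My-God-class cosmic ray (~3e20 eV) hitting a proton in our atmosphere:
print(f"{sqrt_s_fixed_target(3e20):.1e} eV")   # ~7.5e14 eV, tens of times the LHC's 1.4e13 eV

# Two ultra-high-energy cosmic rays meeting head-on somewhere in the cosmos:
print(f"{sqrt_s_head_on(1e20, 1e19):.1e} eV")  # ~6.3e19 eV
print(f"{sqrt_s_head_on(1e20, 1e20):.1e} eV")  # ~2.0e20 eV

Even a single ultra-high-energy cosmic ray striking our atmosphere reaches center-of-mass energies tens of times beyond anything the LHC can produce, and head-on encounters between two such particles, rare per pair but common over cosmic volumes and times, land in the 10^18 to 10^20 eV range quoted above.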

This tells us that any catastrophic, cosmic effect that we could worry about is already tightly constrained by the physics of what has happened over the cosmic history of the Universe up until the present day.

When a high-energy particle strikes another one, it can lead to the creation of new particles or new quantum states, constrained only by how much energy is available in the center-of-mass of the collision. Although particle accelerators on Earth can reach very high energies, the natural particle accelerators of the Universe can exceed those energies by a factor of many millions.

None of the cosmic catastrophes that we can imagine have occurred, and that means two things. The first is that we can place likely lower limits on where various cosmic transitions occurred. The inflationary state hasn't been restored anywhere in our Universe, and that places a lower limit on the energy scale of inflation of no less than ~10^19 eV. This is about a factor of 100,000 lower, perhaps, than where we anticipate inflation occurred: a reassuring consistency. It also teaches us that it's very hard to kick the zero-point energy of the Universe into a different configuration, giving us confidence in the stability of the quantum vacuum and disfavoring the vacuum-decay catastrophe scenario.

But it also means we can continue to explore the Universe with confidence in our safety. Based on how safe the Universe has already shown itself to be, we can confidently conclude that no such catastrophes will arise up to the combined energy-and-collision-total threshold that has already been reached within our observable Universe. Only if we begin to collide particles at energies around 10^20 eV or greater, a factor of 10 million greater than the present energy frontier, will we need to begin to worry about such events. That would require an accelerator significantly larger than the entire planet, and therefore we can reach the conclusion promised in the article's title: no, particle physics on Earth won't ever destroy the Universe.

Link:

No, particle physics on Earth won't ever destroy the Universe - Big Think

Read More..