
United Kingdom: UK publishes report on the impact of Artificial Intelligence on product safety – GlobalComplianceNews

The Office for Product Safety and Standards (OPSS) published a report on 23 May 2022 which considered the impact of artificial intelligence (AI) on product safety. This issue is also being considered in a number of other jurisdictions (see, for example, the EU's Proposal for a Regulation laying down harmonised rules on AI).

The report provides a framework for considering the impact of AI consumer products on existing product safety and liability policy. This framework seeks to support the work of policymakers by highlighting the main considerations that should be taken into account when evaluating and developing product safety and liability policy for AI consumer products. No timeline is stated in the report for that evaluation/development to take place, but the report makes clear the view that work is needed to ensure the UK's product safety and liability regime can deal with AI developments.

The report considers the potential negative implications of AI use for the safety of consumer products.

The report also considers the ways in which the incorporation of AI systems into manufactured consumer products can be of benefit.

The report opines that the current legal framework is insufficient in many ways to deal with AI, with various shortcomings from a product safety/liability perspective.

The report notes that the hypothetical application of the UK's product liability rules to AI products is a challenge, and that "it remains unclear how product safety rules will apply to AI products."

At the moment, there are two core ways in which challenges brought by AI are being addressed:

The inevitability of future AI developments is one of the factors driving likely reform at a UK level.

Read the original post:
United Kingdom: UK publishes report on the impact of Artificial Intelligence on product safety - GlobalComplianceNews


Reply: Automation and Artificial Intelligence Are the Strategic Keys for an Effective Defense Against Growing Threats in the Digital World – Business…

TURIN, Italy--(BUSINESS WIRE)--Today, cybersecurity represents an essential priority in the implementation of new technologies, especially given the crucial role that they have come to play in our private and professional lives. Smart Homes, Connected Cars, Delivery Robots: this evolution will not stop and so, in tandem, it will be necessary to develop automated and AI-based solutions to combat the growing number of security threats. The risks from these attacks are attributable to several factors, such as increasingly complex and widespread digital networks and a growing sensitivity to data privacy issues. These are the themes that emerge from the new "Cybersecurity Automation" research conducted by Reply, thanks to the proprietary SONAR platform and the support of PAC (Teknowlogy Group) in measuring the markets and projecting their growth.

In particular, the research estimates the principal market trends in security system automation, based on analysis of studies of the sector combined with evidence from Reply's own customers. The data compares two different clusters of countries: the Europe-5 (Italy, Germany, France, the Netherlands, Belgium) and the Big-5 (USA, UK, Brazil, China, India) in order to understand how new AI solutions are implemented in the constantly evolving landscape of cybersecurity.

As cyberattacks like hacking, phishing, ransomware and malware have become more frequent and sophisticated, resulting in trillions of euros in damages to businesses' profits and brand reputation, the adoption of hyperautomation techniques has shown that artificial intelligence and machine learning are possible solutions. Furthermore, these technologies will need to be applied at every stage of protection, from software to infrastructure, and from devices to cloud computing.

Of the 300 billion euros in investments that the global cybersecurity market will make in the next five years, a large part will be directed toward automating security measures in order to improve detection and response times to threats in four different segments: Application security, Endpoint security, Data security and protection, and Internet of Things security.

Application Security. Developers who first introduced the concept of "security by design," an adaptive approach to technology design security, are now focusing on an even closer collaboration with the operations and security teams, termed "DevSecOps." This newer model emphasizes the integration of security measures throughout the entire application development lifecycle. Automating testing at every step is crucial for decreasing the number of vulnerabilities in an application, and many testing and analysis tools are further integrating AI to increase their accuracy or capabilities. Investments in application security automation in the Europe-5 market are expected to see enormous growth, around seven times the current value, reaching 669 million euros by 2026. A similar growth is forecast in the Big-5 market, with investments rising to 3.5 billion euros.

Endpoint security. Endpoints, such as desktops, laptops, smartphones and servers, are sensitive elements and therefore possible sources of entry for cyberattacks if not adequately protected. In recent years, the average number of endpoints within a company has significantly increased, so identifying and adopting efficient and comprehensive protection tools is essential for survival. Endpoint detection and response (EDR) and Extended detection and response (XDR) are both tools created to accelerate the response time to emerging security threats, delegating repetitive and monotonous tasks to software that can manage them more efficiently. Investments in these tools are expected to increase in both the Europe-5 and Big-5 markets over the next few years, reaching 757 million euros and 3.65 billion euros respectively. There are also a multitude of other tools and systems dedicated to incident management that can be integrated at the enterprise level. For example, in Security Orchestration Automation and Response (SOAR) solutions, AI can be introduced in key areas such as threat management or incident response.

Data security and protection. Data security threats, also called data breaches, can cause significant damage to a business, resulting in risky legal complications or devaluing brand reputation. Ensuring that data is well-preserved and well-stored is an increasingly important challenge. It is easy to imagine how many different security threats can come from poor data manipulation, cyberattacks, untrustworthy employees, or even just from inexperienced technology users. Artificial intelligence is a tool for simplifying these data security procedures, from discovery to classification to remediation. Security automation is expected to reduce the cost of a data breach by playing an important role in various phases of a cyberattack, such as in data loss prevention tools (DLP), encryption, and tokenization. In an effort to better protect system security and data privacy, companies in the Europe-5 cluster are expected to invest 915 million euros in data security automation by 2026. The Big-5 market will quadruple its value, reaching 4.4 billion euros in the same timeframe.

Internet of Things security. The interconnected nature of IoT allows for every device in a network to be a potential weak point, meaning even a single vulnerability could be enough to shut down an entire infrastructure. By 2026, it is estimated that there will be 80 billion IoT devices on Earth. The impressive range of abilities offered by IoT devices for different industries, though enabling smart factories, smart logistics, or smart speakers, prevents the creation of a standardized solution for IoT cybersecurity. As IoT networks reach fields ranging from healthcare to automotive, the risks only multiply. Therefore, IoT security is one of the most difficult challenges: the boundary between IT and OT (Operational Technology) must be overcome in order for IoT to unleash its full business value. As such, it is estimated that the IoT security automation market will exceed the 1-billion-euro mark in the Europe-5 cluster by 2026. In the Big-5 market, investments will reach a whopping 4.6 billion euros.

Filippo Rizzante, Reply's CTO, has stated: "The significant growth that we are witnessing in the cybersecurity sector is not driven by trend, but by necessity. Every day, cyberattacks hit public and private services, government and healthcare systems, causing enormous damage and costs; therefore, it is more urgent than ever to reconsider security strategies and reach new levels of maturity through automation, remembering that though artificial intelligence has increased the threat of the hacker, it is through taking advantage of AI's opportunities that cyberattacks can be prevented and countered."

The complete research is downloadable here. This new research is part of the Reply Market Research series, which includes the reports "From Cloud to Edge," "Industrial IoT: a reality check" and "Hybrid Work."

Reply [EXM, STAR: REY, ISIN: IT0005282865] is specialized in the design and implementation of solutions based on new communication channels and digital media. Reply is a network of highly focused companies supporting key European industrial groups operating in the telecom and media, industry and services, banking, insurance and public administration sectors in the definition and development of business models enabled for the new paradigms of AI, cloud computing, digital media and the Internet of Things. Reply services include: Consulting, System Integration and Digital Services. http://www.reply.com

Original post:
Reply: Automation and Artificial Intelligence Are the Strategic Keys for an Effective Defense Against Growing Threats in the Digital World - Business...


Mintlify Uses Artificial Intelligence To Address Software Documentation Challenges, Raises $2.8 Million – Tech Times

Mintlify, which automates software documentation tasks, announced that it raised $2.8 million in a seed round led by Bain Capital Ventures. CEO Han Wang said that the proceeds will go toward product development and doubling the startup's staff. Currently, Mintlify is a three-person team.

The New York-based software company was founded in 2021 by Han Wang and Hahnbee Lee. Both are software engineers, and their professional experience drove them to build Mintlify.

Both Wang and Lee's experiences in software development involved working with documentation that wasn't always high quality or complete.

"We've worked as software engineers at companies in all stages ranging from startups to big tech and found that they all suffer from bad documentation if it even existed at all," Wang said.

He added that documentation is crucial to engineers and those who are working on new codebases.


With that in mind, Mintlify was established to address documentation challenges by auto-generating documentation. The software reads code and creates docs to explain it, using technologies such as natural language processing (NLP) and web scraping.
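To make the general idea concrete, here is a minimal sketch of the scaffolding such a tool needs before any NLP is involved: walking a source file and flagging undocumented functions. Everything below (the `docstring_skeletons` helper and its output format) is hypothetical and is not Mintlify's actual implementation; a real product would hand the function body to a language model to write the prose.

```python
# Hypothetical sketch: find undocumented Python functions and emit
# docstring templates an NLP model could fill in. Not Mintlify's code.
import ast

def docstring_skeletons(source: str) -> dict:
    """Map each undocumented function name to a docstring template."""
    skeletons = {}
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef) and ast.get_docstring(node) is None:
            params = [a.arg for a in node.args.args]
            skeletons[node.name] = "\n".join(
                [f"{node.name}({', '.join(params)}): TODO describe purpose."]
                + [f":param {p}: TODO" for p in params]
            )
    return skeletons

print(docstring_skeletons("def add(a, b):\n    return a + b"))
# {'add': 'add(a, b): TODO describe purpose.\n:param a: TODO\n:param b: TODO'}
```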

This shows that generating documentation from code is possible with the help of artificial intelligence (AI).

However, Mintlify isn't the first one to do this. In fact, the software company already has a few competitors that are taking similar approaches.

Still, Wang maintains that their software delivers higher-quality results and that they don't force developers to host documentation on a cloud service.

"Mintlify's mission is to solve documentation rot by developing continuous documentation into a standard practice for software teams," Wang said.

Aside from generating documentation, the software also scans for stale documentation and detects how users engage with the documentation, which helps improve its readability. The software does not store code, and it ensures all user data at rest and in transit is encrypted.

The platform is free for developers and can be integrated with existing systems.

Since its launch in January, Mintlify has continued to grow and now has 6,000 active accounts. With this, the team is looking to offer a premium tier aimed at enterprise customers.

It has also received good feedback from developers and people who have tested the software. For many, it saves a lot of time and keystrokes compared with writing docstrings from scratch, and it is useful for reading and understanding undocumented code written by "ghosts," developers who have since moved on.

They also noted that the global pandemic's impact on the work environment has even made it more important to have high-quality documentation for more efficient product development. And this is exactly what Mintlify is doing as they expand into workflow automation that addresses documentation challenges.


Written by April Fowell

The rest is here:
Mintlify Uses Artificial Intelligence To Address Software Documentation Challenges, Raises $2.8 Million - Tech Times


Median Technologies Launches Imaging Lab, Spearheading the Integration of iBiopsy Artificial Intelligence Technologies Into iCRO Imaging Services for…

SOPHIA ANTIPOLIS, France--(BUSINESS WIRE)--Regulatory News:

Median Technologies (Paris:ALMDT) announces that the company is expanding its portfolio of services with Imaging Lab, a new entity whose mission is to leverage AI, data mining, and radiomics technologies to exploit imaging data from clinical trials in oncology.

The creation of Imaging Lab materializes the convergence of iCRO's activities for image management in the development of new oncologic drugs and iBiopsy's activities for the development of software as medical device targeting early diagnosis of cancers, especially lung cancer.

"We are seeing a paradigm shift of pharmaceutical companies towards new drug candidates targeting patients with early-stage cancers," said Fredrik Brag, CEO and founder of Median Technologies. "The synergy between our iCRO and iBiopsy businesses is perfect to respond to this change: iBiopsy develops software as medical device, integrating AI technologies, which allow the diagnosis of diseases at a very early stage, when patients are still asymptomatic. At the same time, iCRO has extensive knowledge of image processing and its management in clinical trials. The cross-fertilization of our two businesses will enable us to leverage imaging data in conjunction with other clinical information in an unparalleled way and provide biopharmaceutical companies with tools for Go/No-Go decisions in trials," adds Fredrik Brag.

Imaging Lab will provide new answers in four areas that determine the success of clinical trials: selection of patients included in trials, especially inclusion of patients diagnosed at early stages of disease thanks to AI technologies, prediction of response to therapy, measurement of disease progression, and evaluation of the safety of drug candidates. The goal is to optimize development plans, including facilitating Go/No-Go decisions to increase the success rate of clinical trials. This rate is especially low in oncology, generating an average development cost of $2.8 billion to take a new molecule to market, compared with an average of $1 billion per new molecule brought to market for other therapeutic areas.[1]

"Our experience of image management in clinical trials has shown that trial data is vastly underutilized. We can extract much more information from images through the widescale use of data mining, AI, and radiomics and use these technologies to better support our customers and biopharmaceutical partners in their clinical developments," says Nicolas Dano, COO iCRO of Median Technologies.

The Imaging Lab team will be present from June 4-6 (exhibition dates) at the ASCO Annual Conference in Chicago, at Median's booth #2098, Exhibit Hall A, to meet the pharmaceutical community.

About Median Technologies: Median Technologies provides innovative imaging solutions and services to advance healthcare for everyone. We harness the power of medical images by using the most advanced Artificial Intelligence technologies, to increase the accuracy of diagnosis and treatment of many cancers and other metabolic diseases at their earliest stages and provide insights into novel therapies for patients. Our iCRO solutions for medical image analysis and management in oncology trials and iBiopsy, our AI-powered software as medical device help biopharmaceutical companies and clinicians to bring new treatments and diagnose patients earlier and more accurately. This is how we are helping to create a healthier world.

Founded in 2002, based in Sophia-Antipolis, France, with a subsidiary in the US and another one in Shanghai, Median has received the "Innovative Company" label from the BPI and is listed on the Euronext Growth market (Paris). FR0011049824 ticker: ALMDT. Median is eligible for the French SME equity savings plan scheme (PEA-PME), is part of the Enternext PEA-PME 150 index and has been awarded the Euronext European Rising Tech label. For more information: http://www.mediantechnologies.com

[1] https://www.biopharmadive.com/news/new-drug-cost-research-development-market-jama-study/573381/

See original here:
Median Technologies Launches Imaging Lab, Spearheading the Integration of iBiopsy Artificial Intelligence Technologies Into iCRO Imaging Services for...


MIT Engineers Use Artificial Intelligence To Capture the Complexity of Breaking Waves – SciTechDaily

Using machine learning along with data from wave tank experiments, MIT engineers have found a way to model how waves break. "With this, you could simulate waves to help design structures better, more efficiently, and without huge safety factors," says Themis Sapsis. Credit: iStockphoto

The new model's predictions should help researchers improve ocean climate simulations and hone the design of offshore structures.

Waves break once they swell to a critical height, before cresting and crashing into a shower of droplets and bubbles. These waves can be as big as a surfer's point break and as small as a gentle ripple rolling to shore. For decades, the dynamics of how and when a wave breaks have been too complex for scientists to predict.

Now, MIT engineers have found a new method for modeling how waves break. Using machine learning and data from wave-tank tests, the researchers tweaked equations that have previously been used to predict wave behavior. Engineers frequently use such equations to help them design robust offshore platforms and structures. But until now, the equations have not been able to capture the complexity of breaking waves.

The researchers discovered that the modified model predicted how and when waves would break more accurately. The model, for example, assessed a wave's steepness shortly before breaking, as well as its energy and frequency after breaking, more accurately than traditional wave equations.

Their results, published recently in the journal Nature Communications, will help scientists understand how a breaking wave affects the water around it. Knowing precisely how these waves interact can help hone the design of offshore structures. It can also improve predictions for how the ocean interacts with the atmosphere. Having better estimates of how waves break can help scientists predict, for instance, how much carbon dioxide and other atmospheric gases the ocean can absorb.

"Wave breaking is what puts air into the ocean," says study author Themis Sapsis, an associate professor of mechanical and ocean engineering and an affiliate of the Institute for Data, Systems, and Society at MIT. "It may sound like a detail, but if you multiply its effect over the area of the entire ocean, wave breaking starts becoming fundamentally important to climate prediction."

The study's co-authors include lead author and MIT postdoc Debbie Eeltink, Hubert Branger and Christopher Luneau of Aix-Marseille University, Amin Chabchoub of Kyoto University, Jerome Kasparian of the University of Geneva, and T.S. van den Bremer of Delft University of Technology.

To predict the dynamics of a breaking wave, scientists typically take one of two approaches: They either attempt to precisely simulate the wave at the scale of individual molecules of water and air, or they run experiments to try and characterize waves with actual measurements. The first approach is computationally expensive and difficult to simulate even over a small area; the second requires a huge amount of time to run enough experiments to yield statistically significant results.

The MIT team instead borrowed pieces from both approaches to develop a more efficient and accurate model using machine learning. The researchers started with a set of equations that is considered the standard description of wave behavior. They aimed to improve it by training the model on data of breaking waves from actual experiments.

"We had a simple model that doesn't capture wave breaking, and then we had the 'truth,' meaning experiments that involve wave breaking," Eeltink explains. "Then we wanted to use machine learning to learn the difference between the two."

The researchers obtained wave breaking data by running experiments in a 40-meter-long tank. The tank was fitted at one end with a paddle, which the team used to initiate each wave. The team set the paddle to produce a breaking wave in the middle of the tank. Gauges along the length of the tank measured the water's height as waves propagated down the tank.

"It takes a lot of time to run these experiments," Eeltink says. "Between each experiment, you have to wait for the water to completely calm down before you launch the next experiment, otherwise they influence each other."

In all, the team ran about 250 experiments, the data from which they used to train a type of machine-learning algorithm known as a neural network. Specifically, the algorithm is trained to compare the real waves in experiments with the predicted waves in the simple model, and based on any differences between the two, the algorithm tunes the model to fit reality.
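As an illustration of that training loop, here is a minimal sketch of learning a correction term: a small network is fit to the difference between a stand-in "simple model" and stand-in "measurements," so that simple model plus learned correction approximates the truth. The toy functions and two-layer network below are assumptions for illustration only; the paper's actual wave equations and architecture are not reproduced here.

```python
# Hypothetical sketch: learn a data-driven correction to a simple model.
import numpy as np

rng = np.random.default_rng(0)

def simple_model(x):
    return np.sin(x)  # stand-in for the uncorrected wave prediction

x = rng.uniform(0.0, 2.0 * np.pi, (256, 1))
truth = np.sin(x) + 0.3 * np.sin(3.0 * x)  # stand-in for tank measurements

# Two-layer network trained by gradient descent to predict the residual
# truth - simple_model(x), i.e. "the difference between the two."
W1, b1 = rng.normal(size=(1, 32)) * 0.5, np.zeros(32)
W2, b2 = rng.normal(size=(32, 1)) * 0.5, np.zeros(1)
lr = 1e-2
for _ in range(3000):
    h = np.tanh(x @ W1 + b1)
    err = (h @ W2 + b2) - (truth - simple_model(x))
    gW2, gb2 = h.T @ err / len(x), err.mean(0)
    dh = (err @ W2.T) * (1.0 - h**2)
    gW1, gb1 = x.T @ dh / len(x), dh.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

corrected = simple_model(x) + np.tanh(x @ W1 + b1) @ W2 + b2
print("RMS error of corrected model:",
      float(np.sqrt(np.mean((corrected - truth) ** 2))))
```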

After training the algorithm on their experimental data, the team introduced the model to entirely new data: in this case, measurements from two independent experiments, each run at separate wave tanks with different dimensions. In these tests, they found the updated model made more accurate predictions than the simple, untrained model, for instance making better estimates of a breaking wave's steepness.

The new model also captured an essential property of breaking waves known as the downshift, in which the frequency of a wave is shifted to a lower value. The speed of a wave depends on its frequency. For ocean waves, lower frequencies move faster than higher frequencies. Therefore, after the downshift, the wave will move faster. The new model predicts the change in frequency, before and after each breaking wave, which could be especially relevant in preparing for coastal storms.
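The frequency-speed relationship invoked here is the standard deep-water dispersion relation, a textbook result rather than something derived in the paper:

```latex
\omega^{2} = g k
\qquad\Longrightarrow\qquad
c_{p} = \frac{\omega}{k} = \frac{g}{\omega} = \frac{g}{2\pi f}
```

A downshift to a lower frequency f therefore raises the phase speed c_p, which is why a post-breaking swell arrives sooner than the uncorrected frequency would suggest.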

"When you want to forecast when high waves of a swell would reach a harbor, and you want to leave the harbor before those waves arrive, then if you get the wave frequency wrong, then the speed at which the waves are approaching is wrong," Eeltink says.

The team's updated wave model is in the form of an open-source code that others could potentially use, for instance in climate simulations of the ocean's potential to absorb carbon dioxide and other atmospheric gases. The code can also be worked into simulated tests of offshore platforms and coastal structures.

"The number one purpose of this model is to predict what a wave will do," Sapsis says. "If you don't model wave breaking right, it would have tremendous implications for how structures behave. With this, you could simulate waves to help design structures better, more efficiently, and without huge safety factors."

Reference: "Nonlinear wave evolution with data-driven breaking" by D. Eeltink, H. Branger, C. Luneau, Y. He, A. Chabchoub, J. Kasparian, T. S. van den Bremer and T. P. Sapsis, 29 April 2022, Nature Communications. DOI: 10.1038/s41467-022-30025-z

This research is supported, in part, by the Swiss National Science Foundation, and by the U.S. Office of Naval Research.

Continue reading here:
MIT Engineers Use Artificial Intelligence To Capture the Complexity of Breaking Waves - SciTechDaily


Farmers Increasing Their Crop Yield with Artificial Intelligence – Farmers Review Africa

The demand for agricultural products is surging in countries such as Brazil, India, the U.S., and China due to the rapid urbanization, surging disposable income, and changing consumption patterns of the booming population. On account of the soaring demand, these countries are leveraging artificial intelligence (AI) to increase their overall agricultural productivity. Owing to this reason, the AI in agriculture market is expected to progress at a robust CAGR of 24.8% during 2020-2030. According to P&S Intelligence, at this rate, the value of the market will rise from $852.2 million in 2019 to $8,379.5 million by 2030.
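For readers checking the arithmetic, compound annual growth rate (CAGR) is defined by end = start × (1 + r)^years. A short sketch using the article's figures purely as inputs (the exact rate recovered depends on which base year the report used):

```python
# CAGR from the article's figures: $852.2M (2019) to $8,379.5M (2030).
def cagr(start: float, end: float, years: int) -> float:
    return (end / start) ** (1 / years) - 1

print(f"{cagr(852.2, 8379.5, 11):.1%}")  # ~23.1% over 2019-2030; the
# report's quoted 24.8% applies to the 2020-2030 window.
```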

In recent years, the usage of smart sensors has increased tremendously in agriculture, as they enable farmers to map their fields accurately and apply crop treatment products to the areas that need them. Moreover, the development of several operation-specific sensors, including airflow sensors, location sensors, weather sensors, and soil moisture sensors, is assisting farmers in monitoring and optimizing their yields. Additionally, technology companies are developing smart sensors that are adaptable to the altering environmental conditions.

Additionally, the agrarian community is deploying drones in large numbers to monitor the growth and health of crops. Farmers use drones to scan the soil health, estimate the yield data, draft irrigation schedules, and apply fertilizers. Besides, the increasing support from the government has led to the widescale adoption of drones for modernizing agricultural practices. For example, in January 2019, the government of Maharashtra, India, partnered with the World Economic Forum (WEF) to enhance the agricultural yield by gathering insights about the farms through drones.

How Are AI-Powered Smart Sensors Improving Agricultural Practices?

Further, AI is being used in the agriculture sector to monitor the livestock in real-time. The utilization of AI solutions, such as facial recognition and image classification integrated with feeding patterns and condition score, enables dairy farms to individually monitor all the behavioral aspects of a herd. Moreover, farmers are using machine vision to recognize facial features and hide patterns, record the behavior and body temperature, and monitor the food and water intake of the livestock.

North America witnesses large-scale deployment of the AI technology in agricultural activities owing to the early adoption of computer vision and machine learning (ML) for soil management, precision farming, greenhouse management, and livestock management. Moreover, the increasing adoption of the internet of things (IoT) technology bolstered with computer vision will promote the application of AI solutions by the farming community. Besides, the existence of numerous technology vendors and sensor manufacturers in the region promotes the usage of the AI technologies in the agricultural space.

Furthermore, the Asia-Pacific (APAC) region is expected to adopt AI-enabled agricultural solutions at the fastest pace in the coming years. The high adoption rate of AI in China, Australia, India, and Japan will contribute significantly to the APAC AI in agriculture market in the future. Moreover, the entry of the Alibaba Group in the agricultural solution business, with its AI technology, will increase the penetration of these solutions in the Chinese agricultural industry. Additionally, India is utilizing such solutions due to the escalating effort by multinational companies (MNCs) and the government to spread awareness regarding data sciences and farm analytics among farmers.

Thus, the growing need to increase the crop yield and improve livestock management will fuel the adoption of AI-enabled solutions in the agricultural space.

Source: P&S Intelligence

Read the original here:
Farmers Increasing Their Crop Yield with Artificial Intelligence - Farmers Review Africa


Education Executives Tout Artificial Intelligence Benefits for Classroom Learning – BroadbandBreakfast.com

WASHINGTON, May 24, 2022 – Experts in education technology said Monday that to close the digital divide for students, the nation must eliminate barriers at the community level, including raising awareness of programs and resources and increasing digital literacy.

"We are hearing from schools and district leaders that it's not enough to make just broadband available and affordable, although those are critical steps," said Ji Soo Song, broadband advisor at the U.S. Department of Education, at an event hosted by trade group SIIA, formerly known as the Software and Information Industry Association. "We also have to make sure that we're solving for the human barriers that often inhibit adoption."

Song highlighted four initial barriers that students are facing. First, a lack of awareness and understanding of programs and resources. Second, signing up for programs is often confusing regarding eligibility requirements, application status, and installment. Third, there may be a lack of trust between communities and services. Fourth, a lack of digital literacy among students can prevent them from succeeding.

Song said he believes that with the Infrastructure Investment and Jobs Act, states have an incredible opportunity to address adoption barriers.

Rosemary Lahasky, senior director for government affairs at Cengage, a maker of educational content, added that current data suggests that 16 million students lack access to a broadband connection. While this disparity in American homes persisted, tech job postings nearly doubled in 2021, but the average number of applicants shrank by 25 percent.

But panelists said they are hopeful that funding will address these shortages. "Almost every single agency that received funding received either direct funding for workforce training or were given the flexibility to spend some of their money on workforce training," said Lahasky of the IIJA, which carves out funding for workforce training.

This money is also, according to Lahasky, funding apprenticeship programs, which have been recommended by many as a solution to workforce shortages.

Student connectivity has been a long-held concern following the COVID-19 pandemic. Students themselves are stepping up to fight against the digital inequity in their schools as technology becomes increasingly essential for success. Texas students organized a panel to discuss internet access in education just last year.

See the article here:
Education Executives Tout Artificial Intelligence Benefits for Classroom Learning - BroadbandBreakfast.com


Tamagotchi kids: could the future of parenthood be having virtual children in the metaverse? – The Guardian

Name: Tamagotchi kids.

Age: Yet to be born, though it won't be long, says Catriona Campbell.

Is she pregnant? No. Well, I don't know, that's not the point.

What is the point? That some people might decide never to be pregnant, ever again.

That already happens, doesn't it? True, for loads of reasons, including concerns about the environment, overpopulation, the rising cost of bringing up a child, etc.

So who is this Catriona Campbell, then? One of the UK's leading authorities on artificial intelligence. She has a new book out, called AI by Design: A Plan For Living With Artificial Intelligence.

What does she say in it? That within 50 years, technology will have advanced to such an extent that babies which exist in the metaverse are indistinct from those in the real world.

Does that mean that Mark Zuckerberg is going to be everyone's dad? Or (shivers) Nick Clegg? No. It means virtual digital children will exist in the metaverse which, as you'll know, is the immersive digital future of the internet. Campbell predicts they will be commonplace and embraced by society within half a century. She has called this digital demographic the Tamagotchi generation, after those digital pet toys from Japan, remember?

So, will our new kids be egg-shaped and have three buttons? And will we soon get bored and forget about them? Technology has come on since the '90s. Campbell says virtual children will look like you, and you will be able to play with and cuddle them. They will be capable of simulated emotional responses as well as speech, which will range from "googoo gaga" to backchat, as they grow older.

I hate it when they become teenagers. Then put it off.

So we would get to decide how quickly they grow up? Or if they grow up.

And if we do get bored with them? Well, if you have them on a monthly subscription basis, which is what Campbell thinks might happen, then I suppose you can just cancel.

If you can get through! Customer services might be better in the future.

It sounds a teeny bit creepy, no? Think of the advantages: minimal cost and environmental impact. And less worry, though you might want a bit of that programmed in for a more authentic parental experience.

Any downsides? Well, you might think if you can turn it on and off it is more like a dystopian doll than a human who is your own flesh and blood. But that's just old-fashioned.

Do say: Sold. I'll take 2.4 of them, please.

Don't say: Any more of your cheek and you're deleted!

Read more:
Tamagotchi kids: could the future of parenthood be having virtual children in the metaverse? - The Guardian


Could quantum mechanics explain the Mandela effect? – Big Think

There are some questions that, if you look up the answer, might make you question the reliability of your brain.

Many other examples abound, from the color of different flavor packets of Walkers crisps to the spelling of Looney Tunes (vs. Looney Toons) and Febreze (vs. Febreeze) to whether the Monopoly Man has a monocle or not.

Perhaps the simplest explanation for all of these is simply that human memory is unreliable, and that as much as we trust our brains to remember what happened in our own lives, our own minds are at fault. But there's another possibility based on quantum physics that's worth considering: could these truly have been the outcomes that occurred for us, but in a parallel Universe? Here's what the science has to say.

Visualization of a quantum field theory calculation showing virtual particles in the quantum vacuum. (Specifically, for the strong interactions.) Even in empty space, this vacuum energy is non-zero, and what appears to be the ground state in one region of curved space will look different from the perspective of an observer where the spatial curvature differs. As long as quantum fields are present, this vacuum energy (or a cosmological constant) must be present, too.

One of the biggest differences between the classical world and the quantum world is the notion of determinism. In the classical world (which also defined all of physics, including mechanics, gravitation, and electromagnetism, prior to the late 19th century), the equations that govern the laws of nature are all completely deterministic. If you can give details about all of the particles in the Universe at any given moment in time, including their mass, charge, position, and momentum at that particular moment, then the equations that govern physics can tell you both where they were and where they will be at any moment in the past or future.

But in the quantum Universe, this simply isn't the case. No matter how accurately you measure certain properties of the Universe, there's a fundamental uncertainty that prevents you from knowing those properties arbitrarily well at the same time. In fact, the better you measure some of the properties that a particle or system of particles can have, the greater the inherent uncertainty becomes in other properties: an uncertainty that you cannot get rid of or reduce below a critical value. This fundamental relation, known as the Heisenberg uncertainty principle, cannot be worked around.
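For the canonical position-momentum pair, that relation takes the familiar textbook form (stated here for concreteness; the article itself does not spell it out):

```latex
\Delta x \, \Delta p \;\geq\; \frac{\hbar}{2}
```

Shrinking the position uncertainty Δx forces the momentum uncertainty Δp to grow, and vice versa; the product can never be pushed below ħ/2.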

This diagram illustrates the inherent uncertainty relation between position and momentum. When one is known more accurately, the other is inherently less able to be known accurately. Every time you accurately measure one, you ensure a greater uncertainty in the corresponding complementary quantity.


There are many other examples of uncertainty in quantum physics, and many of those uncertain measurements don't just have two possible outcomes, but a continuous spectrum of possibilities. It's only by measuring the Universe, or by causing an interaction of an inherently uncertain system with another quantum from the environment, that we discover which of the possible outcomes describes our reality.

The Many Worlds Interpretation of quantum mechanics holds that there are an infinite number of parallel Universes that exist, holding all possible outcomes of a quantum mechanical system, and that making an observation simply chooses one path. This interpretation is philosophically interesting, but may add nothing of value when it comes to actual physics.

One of the problems with quantum mechanics is the problem of what it means for what's really going on in our Universe. We have this notion that there is some sort of objective reality, a "really real" reality that's independent of any observer or external influence. That, in some way, the Universe exists as it does without regard for whether anyone or anything is watching or interacting with it.

This very notion is not something we're certain is valid. Although it's pretty much hard-wired into our brains and our intuitions, reality is under no obligation to conform to them.

What does that mean, then, when it comes to the question of what's truly going on when, for example, we perform the double-slit experiment? If you have two slits in a screen that are narrowly spaced, and you shine a light through it, the illuminated pattern that shows up behind the screen is an interference pattern: with multiple bright lines patterned after the shape of the slit, interspersed with dark lines between them. This is not what you'd expect if you threw a series of tiny pebbles through that double slit; you'd simply expect two piles of rocks, with each one corresponding to the rocks having gone through one slit or the other.
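A rough numerical illustration of the difference (an idealized two-slit calculation with made-up wavelength and slit spacing, not a simulation of any experiment described here): adding the two slits' complex amplitudes produces fringes, while adding their probabilities, as pebbles would require, produces none.

```python
# Idealized two-slit sketch: interference comes from adding amplitudes.
import numpy as np

wavelength = 500e-9                  # illustrative: 500 nm light
d = 50e-6                            # illustrative: 50 micron slit spacing
theta = np.linspace(-0.02, 0.02, 9)  # viewing angles (radians)

phase = 2 * np.pi * d * np.sin(theta) / wavelength
quantum = np.abs(1 + np.exp(1j * phase)) ** 2  # fringes: ranges 0 to 4
classical = 1.0 + 1.0                          # summed probabilities: flat

for t, q in zip(theta, quantum):
    print(f"theta = {t:+.3f} rad   amplitude-sum intensity = {q:.2f}")
print(f"probability-sum intensity everywhere = {classical:.2f}")
```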

Results of a double-slit experiment performed by Dr. Tonomura showing the build-up of an interference pattern of single electrons. If the path of which slit each electron passes through is measured, the interference pattern is destroyed, leading to two piles instead. The number of electrons in each panel is 11 (a), 200 (b), 6000 (c), 40000 (d), and 140000 (e).

The thing about this double slit experiment is this: as long as you don't measure which slit the light goes through, you will always get an interference pattern.

This remains true even if you send the light through one photon at a time, so that multiple photons aren't interfering with one another. Somehow, it's as though each individual photon is interfering with itself.

It's still true even if you replace the photon with an electron, or other massive quantum particles, whether fundamental or composite. Sending electrons through a double slit, even one at a time, gives you this interference pattern.

And it ceases to be true, immediately and completely, if you start measuring which slit each photon (or particle) went through.

But why? Why is this the case?

That's one of the puzzles of quantum mechanics: it seems as though it's open to interpretation. Is there an inherently uncertain distribution of possible outcomes, and does the act of measuring simply pick out which outcome it is that has occurred in this Universe?

Is it the case that everything is wave-like and uncertain, right up until the moment that a measurement is made, and is that act of measuring the critical action that causes the quantum mechanical wavefunction to collapse?

When a quantum particle approaches a barrier, it will most frequently interact with it. But there is a finite probability of not only reflecting off of the barrier, but tunneling through it. The actual evolution of the particle is only determined by measurement and observation, and the wavefunction interpretation only applies to the unmeasured system; once its trajectory has been determined, the past is entirely classical in its behavior.

Or is it the case that each and every possible outcome that could occur actually does occur, but simply not in our Universe? Is it possible that there are an infinite number of parallel Universes out there, and that all possible outcomes occur infinitely many times in a variety of them, but it takes the act of measurement to know which one occurred in ours?

Although these might all seem like radically different possibilities, they're all consistent (and not, by any means, an exhaustive list of) interpretations of quantum mechanics. At this point in time, the only differences between the Universe they describe are philosophical. From a physical point of view, they all predict the same exact results for any experiment we know how to perform at present.

However, if there are an infinite number of parallel Universes out there (not simply in a mathematical sense, but in a physically real one), there needs to be a place for them to live. We need enough Universe to hold all of these possibilities, and to allow there to be somewhere within it where every possible outcome can be real. The only way this could work is if the Universe was either born infinite in size a finite amount of time ago, or born finite in size an infinite amount of time ago.

From a pre-existing state, inflation predicts that a series of universes will be spawned as inflation continues, with each one being completely disconnected from every other one, separated by more inflating space. One of these bubbles, where inflation ended, gave birth to our Universe some 13.8 billion years ago, where our entire visible Universe is just a tiny portion of that bubbles volume. Each individual bubble is disconnected from all of the others.

The Universe needs to be born infinite because the number of possible outcomes that can occur in a Universe that starts off like ours, 13.8 billion years ago, increases more quickly than the number of independent Universes that come to exist in even an eternally inflating Universe. Unless the Universe was born infinite in size a finite amount of time ago, or it was born finite in size an infinite amount of time ago, it's simply not possible to have enough Universes to hold all possible outcomes.

But if the Universe was born infinite and cosmic inflation occurred, suddenly the Multiverse includes an infinite number of independent Universes that start with initial conditions identical to our own. In such a case, anything that could occur not only does occur, but occurs an infinite number of times. There would be an infinite number of copies of you, and me, and Earth, and the Milky Way, etc., that exist in an infinite number of independent Universes. And in some of them, reality unfolds identically to how it did here, right up until the moment when one particular quantum measurement takes place. For us in our Universe, it turned out one way; for the version of us in a parallel Universe, perhaps that outcome is the only difference in all of our cosmic histories.

The inherent width, or half the width of the peak in the above image when you're halfway to the crest of the peak, is measured to be 2.5 GeV: an inherent uncertainty of about +/- 3% of the total mass. The mass of the particle in question, the Z boson, is peaked at 91.187 GeV, but that mass is inherently uncertain by a significant amount.

But when we talk about uncertainty in quantum physics, we're generally talking about an outcome whose results haven't been measured or decided just yet. What's uncertain in our Universe isn't past events that have already been determined, but only events whose possible outcomes have not yet been constrained by measurables.

If we think about a double slit experiment that's already occurred, once we've seen the interference pattern, it's not possible to state whether a particular electron traveled through slit #1 or slit #2 in the past. That was a measurement we could have made but didn't, and the act of not making that measurement resulted in the interference pattern appearing, rather than simply two piles of electrons.

There is no Universe where the electron travels either through slit #1 or slit #2 and still makes an interference pattern by interfering with itself. Either the electron travels through both slits at once, allowing it to interfere with itself, and lands on the screen in such a way that thousands upon thousands of such electrons will expose the interference pattern, or some measurement occurs to force the electron to solely travel through slit #1 or slit #2, and no interference pattern is recovered.

Perhaps the spookiest of all quantum experiments is the double-slit experiment. When a particle passes through the double slit, it will land in a region whose probabilities are defined by an interference pattern. With many such observations plotted together, the interference pattern can be seen if the experiment is performed properly; if you retroactively ask "which slit did each particle go through?" you will find you're asking an ill-posed question.

What does this mean?

It means, as was recognized by Heisenberg himself nearly a century ago, that the wavefunction description of the Universe does not apply to the past. Right now, there are a great many things that are uncertain in the Universe, and that's because the critical measurement or interaction to determine what that thing's quantum state is has not yet been taken.

In other words, there is a boundary between the classical and the quantum (the definitive and the indeterminate), and the boundary between them is when things become real, and when the past becomes fixed. That boundary, according to physicist Lee Smolin, is what defines "now" in a physical sense: the moment where the things that we're observing at this instant fix certain observables to have definitively occurred in our past.

We can think about infinite parallel Universes as opening up before us as far as future possibilities go, in some sort of infinitely forward-branching tree of options, but this line of reasoning does not apply to the past. As far as the past goes, at least in our Universe, previously determined events have already been metaphorically written in stone.

This 1993 photo by Carol M. Highsmith shows the last president of apartheid-era South Africa, F.W. de Klerk, alongside president-elect Nelson Mandela, as both were about to receive America's Liberty Medal for effecting the transition of power away from white minority rule and towards universal majority rule. This event definitively occurred in our Universe.

In a quantum mechanical sense, this boils down to two fundamental questions: could a quantum outcome have produced a macroscopically different past, and could any information from such a parallel Universe ever reach our own?

The answer seems to be no and no. To achieve a macroscopic difference from quantum mechanical outcomes means we've already crossed into the classical realm, and that means the past history is already determined to be different. There is no way back to a present where Nelson Mandela dies in 2013 if he already died in prison in the 1980s.

Furthermore, the only places where these parallel Universes can exist is beyond the limit of our observable Universe, where they're completely causally disconnected from anything that happens here. Even if there's a quantum mechanical entanglement between the two, the only way information can be transferred between those Universes is limited by the speed of light. Any information about what occurred over there simply doesn't exist in our Universe.

We can imagine a very large number of possible outcomes that could have resulted from the conditions our Universe was born with, and a very large number of possible outcomes that could have occurred over our cosmic history as particles interact and time passes. If there were enough possible Universes out there, it would also be possible that the same set of outcomes happened in multiple places, leading to the scenario of infinite parallel Universes. Unfortunately, we only have the one Universe we inhabit to observe, and other Universes, even if they exist, are not causally connected to our own.

The truth is that there may well be parallel Universes out there in which all of these things did occur. Maybe there is a Berenstein Bears out there, along with Shazaam the movie and a Nelson Mandela who died in prison in the 1980s. But that has no bearing on our Universe; they never occurred here, and no one who remembers otherwise is correct. Although the neuroscience of human memory is not fully understood, the physical science of quantum mechanics is well-enough understood that we know what's possible and what isn't. You do have a faulty memory, and parallel Universes aren't the reason why.

Read the original:

Could quantum mechanics explain the Mandela effect? - Big Think


Ultracold gas bubbles on the space station could reveal strange new quantum physics – Space.com

While it might be a comfortable 72 degrees Fahrenheit (22 degrees Celsius) inside the International Space Station (ISS), there's a small chamber onboard where things get much, much colder colder than space itself.

In NASA's Cold Atom Lab aboard the ISS, scientists have successfully blown small, spherical gas bubbles cooled to just a millionth of a degree above absolute zero, the lowest temperature theoretically possible. (That's a few degrees colder than space!) The test was designed to study how ultracold gas behaves in microgravity, and the results may lead to experiments with Bose-Einstein condensates (BECs), the fifth state of matter.

The test demonstrated that, like liquid, gas coalesces into spheres in microgravity. On Earth, similar experiments have failed because gravity pulls the matter into asymmetrical droplets.


"These are not like your average soap bubbles," David Aveline, the study's lead author and a member of the Cold Atom Lab science team at NASA's Jet Propulsion Laboratory (JPL) in California, said in a statement (opens in new tab). "Nothing that we know of in nature gets as cold as the atomic gases produced in Cold Atom Lab.

"So we start with this very unique gas and study how it behaves when shaped into fundamentally different geometries," Aveline explained. "And, historically, when a material is manipulated in this way, very interesting physics can emerge, as well as new applications."

Now, the team plans to transition the ultracold gas bubbles into the BEC state, which can exist only in extremely cold temperatures, to perform more quantum physics research.

"Some theoretical work suggests that if we work with one of these bubbles that is in the BEC state, we might be able to form vortices basically, little whirlpools in the quantum material," Nathan Lundblad, a physics professor at Bates College in Maine and the principal investigator of the new study, said in the same statement. "That's one example of a physical configuration that could help us understand BEC properties better and gain more insight into the nature of quantum matter."

Such experiments are possible only in the microgravity of the Cold Atom Lab, which comprises a vacuum chamber about the size of a minifridge. It was installed on the ISS in 2018, and it's operated remotely by a team on the ground at JPL.

"Our primary goal with Cold Atom Lab is fundamental research we want to use the unique space environment of the space station to explore the quantum nature of matter," said Jason Williams, a project scientist for the Cold Atom Lab at JPL. "Studying ultracold atoms in new geometries is a perfect example of that."

The team's observations were published May 18 in the journal Nature.


Read more from the original source:

Ultracold gas bubbles on the space station could reveal strange new quantum physics - Space.com
