Category Archives: Machine Learning

AI Is Building Highly Effective Antibodies That Humans Can’t Even … – WIRED

James Field, founder and CEO of LabGenius.

The tests are almost fully automated, with an array of high-end equipment involved in preparing samples and running them through the various stages of the testing process: Antibodies are grown based on their genetic sequence and then put to the test on biological assays, samples of the diseased tissue that they've been designed to tackle. Humans oversee the process, but their job is largely to move samples from one machine to the next.

"When you have the experimental results from that first set of 700 molecules, that information gets fed back to the model and is used to refine the model's understanding of the space," says Field. In other words, the algorithm begins to build a picture of how different antibody designs change the effectiveness of treatment; with each subsequent round of antibody designs, it gets better, carefully balancing exploitation of potentially fruitful designs with exploration of new areas.
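
LabGenius has not published its algorithm, but the loop Field describes (propose a batch, measure it, feed the results back, trade off exploitation against exploration) is the classic shape of batch Bayesian optimization. Below is a minimal, hypothetical sketch of that loop using a Gaussian process surrogate and an upper-confidence-bound (UCB) acquisition rule; the `measure_potency` function and the candidate featurization stand in for the wet-lab assay and are assumptions, not LabGenius's actual method.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)

# Hypothetical stand-in for the wet-lab assay: scores a batch of
# candidate designs (feature vectors) and returns measured potencies.
def measure_potency(X):
    return -np.sum((X - 0.3) ** 2, axis=1) + 0.05 * rng.normal(size=len(X))

candidates = rng.uniform(0, 1, size=(5000, 8))   # featurized design space
X_seen = candidates[rng.choice(len(candidates), 16, replace=False)]
y_seen = measure_potency(X_seen)                  # first experimental round

for round_ in range(5):
    # Refit the surrogate model on everything measured so far.
    gp = GaussianProcessRegressor(normalize_y=True).fit(X_seen, y_seen)
    mu, sigma = gp.predict(candidates, return_std=True)
    ucb = mu + 2.0 * sigma        # exploitation (mu) + exploration (sigma)
    batch = candidates[np.argsort(ucb)[-16:]]     # next batch to synthesize
    X_seen = np.vstack([X_seen, batch])
    y_seen = np.concatenate([y_seen, measure_potency(batch)])

print("best potency found:", y_seen.max())
```

The UCB weight (2.0 here) controls the balance Field describes: a larger multiplier chases uncertain regions of the map, a smaller one keeps refining designs already known to work a bit.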

"A challenge with conventional protein engineering is, as soon as you find something that works a bit, you tend to make a very large number of very small tweaks to that molecule to see if you can further refine it," Field says. Those tweaks may improve one property (how easily the antibody can be made at scale, for instance) but have a disastrous effect on the many other attributes required, such as selectivity, toxicity, potency, and more. The conventional approach means you may be barking up the wrong tree, or missing the wood for the trees: endlessly optimizing something that works a little bit, when there may be far better options in a completely different part of the map.

You're also constrained by the number of tests you can run, or the number of "shots on goal," as Field puts it. This means human protein engineers tend to look for things they know will work. "As a result of that, you get all of these heuristics or rules of thumb that human protein engineers do to try and find the safe spaces," Field says. "But as a consequence of that you quickly get the accumulation of dogma."

The LabGenius approach yields unexpected solutions that humans may not have thought of, and finds them more quickly: It takes just six weeks from setting up a problem to finishing the first batch, all directed by machine learning models. LabGenius has raised $28 million from the likes of Atomico and Kindred, and is beginning to partner with pharmaceutical companies, offering its services like a consultancy. Field says the automated approach could be rolled out to other forms of drug discovery too, turning the long, artisanal process of drug discovery into something more streamlined.

Ultimately, Field says, it's a recipe for better care: antibody treatments that are more effective, or have fewer side effects, than existing ones designed by humans. "You find molecules that you would never have found using conventional methods," he says. "They're very distinct and often counterintuitive to designs that you as a human would come up with, which should enable us to find molecules with better properties, which ultimately translates into better outcomes for patients."

This article appears in the September/October 2023 edition of WIRED UK magazine.

See original here:
AI Is Building Highly Effective Antibodies That Humans Can't Even ... - WIRED

Using machine learning to tame plasma in fusion reactors – Advanced Science News

For fusion reactions to become practical, parameters such as plasma density and shape must be monitored in real time and impending disruptions responded to instantly.

Nuclear fusion is widely regarded as one of the most promising sources of clean and sustainable energy of the future. In a fusion reaction, two light atomic nuclei combine to form another whose mass is less than the total mass of the original pair; according to Einstein's famous formula E = mc², this mass difference is transformed into energy that can be utilized.

The problem with this source of energy is that for positively charged nuclei to fuse, they have to overcome the electrical repulsion between them. This means the colliding nuclei must move very fast, which is achieved by heating the substance in which the reaction takes place to an enormous temperature of at least tens of millions of kelvin.

Of course, no material can withstand contact with matter at such a temperature, so in all prototype fusion reactors a magnetic field is used to contain the hot plasma, limiting its movement and preventing it from coming into contact with the walls of the reactor. However, instabilities constantly arise in a hot plasma and can force it out of its magnetic container and into the reactor walls, damaging them. Such contact also cools the plasma and terminates the fusion reaction.

In order to prevent these violent plasma disruptions, it is necessary to monitor plasma parameters such as density and shape in real time and respond instantly to impending disruptions. To achieve this, a team of American and British scientists led by William Tang of Princeton University has developed machine learning-based software that can predict disruptions and analyze the physical conditions that produce them.

In their work, the physicists used a large amount of data from the British JET facility and the American DIII-D machine, both tokamaks: fusion reactors in which the plasma is confined in the shape of a donut. More precisely, the researchers used part of the data on the state of the plasma during the reactors' operation to train the program, teaching the software to predict when a disruption would occur. The accuracy of these predictions could then be tested on real-world data held out of the training set.
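
The team's published predictor (the deep learning model described in the Nature reference below) operates on full multichannel diagnostic time series; as a toy illustration of the train-then-test protocol described here, the sketch below fits a classifier to hypothetical per-shot plasma features and evaluates it on shots it never saw. The feature names, dataset size, and model choice are all illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)

# Hypothetical dataset: one row of summary diagnostics per plasma shot
# (stand-ins for density, current, radiated power, locked-mode amplitude),
# label 1 if the shot ended in a disruption.
n_shots = 4000
X = rng.normal(size=(n_shots, 4))
y = (X[:, 2] + 0.8 * X[:, 3] + 0.3 * rng.normal(size=n_shots) > 1.0).astype(int)

# Train on one set of shots, evaluate on shots held out of training,
# mirroring the train-and-test protocol described in the article.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = GradientBoostingClassifier().fit(X_tr, y_tr)

print("held-out AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```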

The team not only trained their software to correctly predict the disruptions, but also to analyze the physical processes occurring in the plasma that led to these events. This property of the algorithm is essential, since in the operation of a real fusion reactor it is important not only to understand that a disruption is approaching, but also to be able to prevent it by changing the parameters of the plasma in the reactor within milliseconds.

With a larger dataset and more powerful supercomputers, such as those currently being built at Oak Ridge National Laboratory, Lawrence Berkeley National Laboratory, and Argonne National Laboratory, the researchers hope they can make their algorithm even more sensitive to the processes occurring in the plasma, and hence more accurately predict and respond to impending disruptions.

They expect that the software they have developed will be implemented on the current prototype tokamaks whose data they used in their study, as well as on future, more powerful machines such as ITER, currently under construction in France. If it is, stable energy production from fusion could arrive sooner.

References: William Tang et al., Implementation of AI/deep learning disruption predictor into a plasma control system, Contributions to Plasma Physics (2023). DOI: 10.1002/ctpp.202200095

Julian Kates-Harbeck et al., Predicting disruptive instabilities in controlled fusion plasmas through deep learning, Nature (2019). DOI: 10.1038/s41586-019-1116-4

Feature image credit: TheDigitalArtist on Pixabay

See the article here:
Using machine learning to tame plasma in fusion reactors - Advanced Science News

Oxford University successful machine learning in outer space – SpaceWatch.Global

ION Satellite Carrier over Scotland. Credit D-Sense

London, 28 July 2023. A project led by the University of Oxford has trained a machine learning model in outer space, on board a satellite. The group of researchers was led by DPhil student Vít Růžička. During 2022, the team successfully pitched their idea to the Dashing through the Stars mission, which had issued a call for project proposals to be carried out on board the ION SCV004 satellite, launched in January 2022.

The researchers trained a model to detect changes in cloud cover from aerial images directly on board the satellite. The model was based on few-shot learning, a technique that enables a model to learn the most important features to look for when it has only a few samples to train from.
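
The team's exact architecture isn't described here; a common few-shot recipe, and a plausible shape for this kind of task, is the prototypical classifier: embed each labeled example, average the few embeddings per class into a "prototype," and label new tiles by their nearest prototype. The sketch below shows that recipe in miniature with a stand-in embedding function; on the real satellite the embedding would be a compact neural network run on image tiles.

```python
import numpy as np

rng = np.random.default_rng(2)

def embed(x):
    """Stand-in embedding. On board, this would be a small neural
    network mapping an image tile to a feature vector."""
    return x  # identity, for illustration only

# Few-shot support set: only 5 labeled examples per class
# (e.g., "cloud cover changed" vs. "cloud cover unchanged").
support = {
    0: rng.normal(loc=0.0, size=(5, 16)),
    1: rng.normal(loc=1.0, size=(5, 16)),
}

# One prototype per class: the mean of its few support embeddings.
prototypes = {c: embed(x).mean(axis=0) for c, x in support.items()}

def classify(tile):
    z = embed(tile)
    dists = {c: np.linalg.norm(z - p) for c, p in prototypes.items()}
    return min(dists, key=dists.get)   # nearest prototype wins

query = rng.normal(loc=1.0, size=16)   # a new, unlabeled tile
print("predicted class:", classify(query))
```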

The project was conducted in collaboration with the European Space Agency (ESA) Φ-lab via the Cognitive Cloud Computing in Space campaign and the Trillium Technologies initiative Networked Intelligence in Space, and with partners at D-Orbit and Unibap.

"Machine learning has a huge potential for improving remote sensing; the ability to push as much intelligence as possible into satellites will make space-based sensing increasingly autonomous," said Professor Andrew Markham, who supervised Vít Růžička's DPhil research. "This would help to overcome the issues with the inherent delays between acquisition and action by allowing the satellite to learn from data on board. Vít's work serves as an interesting proof-of-principle."

Machine learning in outer space could help overcome the problem of on-board satellite sensors being affected by harsh environmental conditions and requiring regular calibration. The researchers believe the model could be easily adapted to other tasks, such as differentiating between changes of interest (e.g., flooding and fires) and natural changes.

Read the original:
Oxford University successful machine learning in outer space - SpaceWatch.Global

The Role of Reinforcement Learning in Advancing … – Fagen wasanni

Exploring the Impact of Reinforcement Learning on the Evolution of Telecommunications

The role of reinforcement learning in advancing telecommunications is a topic of increasing interest and relevance in today's digital age. As the world becomes more interconnected, the demand for efficient, reliable, and advanced telecommunications systems is growing. Reinforcement learning, a type of machine learning in which an agent learns to make decisions by interacting with its environment, is playing a pivotal role in meeting this demand.

Reinforcement learning is a powerful tool that can help telecommunications companies optimize their networks, improve service quality, and reduce costs. It works by using algorithms to learn from past experiences and make better decisions in the future. This approach is particularly useful in telecommunications, where networks are complex and constantly changing.

One of the key areas where reinforcement learning is making a significant impact is in network optimization. Telecommunications networks are incredibly complex, with a multitude of variables and parameters that need to be managed and optimized. Traditional methods of network management are often manual, time-consuming, and prone to errors. Reinforcement learning, on the other hand, can automate this process, learning from past network states to make optimal decisions about how to manage the network in the future.

For instance, reinforcement learning can be used to optimize the allocation of resources in a network, such as bandwidth or power. By learning from past network states, the reinforcement learning algorithm can determine the best way to allocate these resources to maximize network performance and minimize costs. This can result in significant improvements in service quality and efficiency.
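
The article stays high level; as a concrete, hypothetical illustration, the sketch below uses tabular Q-learning to learn a bandwidth-allocation policy in a toy two-cell network. The states, actions, and reward (traffic served minus a congestion penalty) are invented for the example and do not model a real operator's network.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy problem: each step, demand in two cells is "low" or "high"
# (4 states); the agent splits 10 bandwidth units between them (11 actions).
n_states, n_actions = 4, 11
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.9, 0.1

def reward(state, action):
    d0, d1 = [(3, 3), (3, 8), (8, 3), (8, 8)][state]   # demand per cell
    b0, b1 = action, 10 - action                       # bandwidth split
    served = min(b0, d0) + min(b1, d1)                 # traffic delivered
    congestion = max(d0 - b0, 0) + max(d1 - b1, 0)     # unmet demand
    return served - 0.5 * congestion

state = int(rng.integers(n_states))
for step in range(20000):
    # epsilon-greedy: mostly exploit the best-known split, sometimes explore
    action = int(rng.integers(n_actions)) if rng.random() < eps else int(Q[state].argmax())
    r = reward(state, action)
    next_state = int(rng.integers(n_states))           # demand fluctuates
    Q[state, action] += alpha * (r + gamma * Q[next_state].max() - Q[state, action])
    state = next_state

for s, name in enumerate(["low/low", "low/high", "high/low", "high/high"]):
    print(f"demand {name}: give cell 0 -> {Q[s].argmax()} of 10 units")
```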

Another area where reinforcement learning is making a difference is in the management of network traffic. With the explosion of data traffic due to the proliferation of smartphones, IoT devices, and other connected technologies, managing network traffic has become a major challenge for telecommunications companies. Reinforcement learning can help address this challenge by learning from past traffic patterns and making intelligent decisions about how to route traffic to avoid congestion and ensure smooth service.

Moreover, reinforcement learning can also play a crucial role in the development of next-generation telecommunications technologies, such as 5G and beyond. These technologies require highly dynamic and flexible network management, which is exactly what reinforcement learning can provide. By continuously learning and adapting to changes in the network environment, reinforcement learning can help these technologies reach their full potential.

In conclusion, reinforcement learning is playing a crucial role in advancing telecommunications. By automating network management, optimizing resource allocation, managing network traffic, and supporting the development of next-generation technologies, reinforcement learning is helping telecommunications companies meet the growing demand for efficient, reliable, and advanced services. As the world becomes more interconnected, the role of reinforcement learning in telecommunications is only set to grow. It is an exciting time for both the fields of machine learning and telecommunications, as they work together to shape the future of digital communication.

The rest is here:
The Role of Reinforcement Learning in Advancing ... - Fagen wasanni

AI-Powered Government: The Role of Machine Learning in … – Fagen wasanni

Exploring the Future: AI-Powered Government and the Role of Machine Learning in Streamlining Public Services

As we stand on the precipice of a new era, the role of artificial intelligence (AI) in shaping our future cannot be overstated. One area where AI is poised to make a significant impact is in the realm of public services, where machine learning technologies are being leveraged to streamline operations and enhance efficiency. This is the dawn of the AI-powered government, a concept that is rapidly gaining traction worldwide.

Machine learning, a subset of AI, involves the use of algorithms that improve automatically through experience. It is this ability to learn and adapt that makes machine learning a powerful tool for governments. By analyzing vast amounts of data, machine learning can identify patterns and trends that would be impossible for humans to discern. This can lead to more informed decision-making and more effective policies.

One of the key areas where machine learning can be applied is in predictive analytics. For instance, by analyzing historical data, machine learning algorithms can predict future trends in areas such as crime rates, disease outbreaks, or traffic congestion. This can enable governments to allocate resources more effectively and take proactive measures to address potential issues.

Moreover, machine learning can also be used to automate routine tasks, freeing up government employees to focus on more complex issues. For example, machine learning algorithms can be used to sort through and categorize large volumes of data, such as applications for government services or public feedback. This can significantly reduce processing times and improve the efficiency of public services.
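
As a hypothetical example of the routine categorization described above, the sketch below trains a TF-IDF plus logistic-regression pipeline to route short service requests to the right department. The tiny inline dataset and the department names are invented for illustration.

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Invented examples of citizen requests and the department handling them.
requests = [
    "pothole on main street needs repair",
    "streetlight out near the park entrance",
    "question about property tax assessment",
    "how do I appeal my tax bill",
    "renew my business licence application",
    "new food vendor licence request",
]
departments = ["roads", "roads", "tax", "tax", "licensing", "licensing"]

router = make_pipeline(TfidfVectorizer(), LogisticRegression())
router.fit(requests, departments)

# Routes an unseen request to a predicted department.
print(router.predict(["broken pavement outside the library"]))
```

In a real deployment the training set would be thousands of historical, already-routed requests rather than six invented lines, but the pipeline shape is the same.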

In addition, machine learning can also play a crucial role in enhancing transparency and accountability in government operations. By analyzing data on government spending and performance, machine learning algorithms can identify areas of inefficiency or potential corruption. This can help to ensure that public funds are being used effectively and that government officials are held accountable for their actions.

However, the adoption of machine learning in government also raises important questions about privacy and security. Governments must ensure that the use of AI technologies does not infringe upon citizens' rights to privacy and that adequate measures are in place to protect sensitive data from cyber threats.

Furthermore, there is also the issue of the digital divide. While AI technologies can greatly enhance the efficiency of public services, they also require a certain level of digital literacy to use effectively. Governments must therefore also invest in digital education and infrastructure to ensure that all citizens can benefit from these technologies.

In conclusion, the advent of the AI-powered government presents both opportunities and challenges. Machine learning technologies have the potential to revolutionize public services, making them more efficient, transparent, and responsive. However, governments must also navigate the complex issues of privacy, security, and digital inequality. As we move forward into this new era, it is clear that the role of machine learning in streamlining public services will be a key area of focus.

Read the original:
AI-Powered Government: The Role of Machine Learning in ... - Fagen wasanni

Machine Learning Tools: Transformative Insights into Animal … – Fagen wasanni

Animal communication signals have always been a complex field to decipher. Researchers rely on careful observation and experimentation to understand their meaning. However, this process is time-consuming, and even experienced biologists struggle with differentiating similar signal types.

AI may offer a solution to expedite this process. Machine learning algorithms, known for their pattern detection abilities, can potentially decode the communication systems of various animals like whales, crows, and bats. These algorithms have proven their effectiveness in processing human language and can also identify and classify animal signals from audio and video recordings.
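
As a hypothetical sketch of the signal-classification step, the code below turns audio clips into spectrogram summaries and clusters them into candidate call types with k-means, a common unsupervised first pass when labels are scarce. The synthetic "calls" and the number of clusters are assumptions made for the example.

```python
import numpy as np
from scipy.signal import spectrogram
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)
fs = 16000  # sample rate in Hz

def synth_call(freq, n=8000):
    """Synthetic stand-in for a recorded animal call: a noisy tone."""
    t = np.arange(n) / fs
    return np.sin(2 * np.pi * freq * t) + 0.3 * rng.normal(size=n)

# Pretend corpus: two unknown call types at different pitches.
clips = [synth_call(f) for f in [500] * 10 + [2000] * 10]

def features(clip):
    """Mean log-power per frequency band: a crude spectral fingerprint."""
    _, _, S = spectrogram(clip, fs=fs)
    return np.log(S + 1e-10).mean(axis=1)

X = np.array([features(c) for c in clips])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("cluster assignments:", labels)   # same-pitch calls group together
```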

One of the main challenges with machine learning methods is the need for vast amounts of data. For instance, the GPT-3 language model was trained on billions of tokens, or words. This means creative solutions are necessary to collect data from wild animals.

Despite these challenges, there are ongoing research projects exploring the use of AI in animal communication. Project CETI (Cetacean Translation Initiative) focuses on the communicative behavior of sperm whales. Utilizing bioinspired whale-mounted tags, underwater robots, and other methods, researchers aim to map the full richness of these animals' communication.

Understanding who talks to whom, and under which environmental and social conditions, is essential for decoding animal conversations. By combining machine learning approaches with well-designed experiments, researchers hope to discover which signals animals use and, potentially, their meanings. This knowledge can then be applied to improve animal welfare in captivity and develop more effective conservation strategies.

In the future, machine learning could even enable the ability to listen in on entire communities of animals. Detailed comparisons of communication could be made, including historical baseline recordings of the last surviving individuals held in conservation breeding centers. This research has the potential to reintroduce lost calls and restore cultural practices among animal populations.

Moreover, the use of passive acoustic monitoring systems could help identify communication signals associated with distress or avoidance. This could provide insights into the well-being of animals at a landscape level and aid in conservation efforts.

Here is the original post:
Machine Learning Tools: Transformative Insights into Animal ... - Fagen wasanni

Tactical and Operational Benefits of Artificial Intelligence and … – Fagen wasanni

The US Department of Defense (DoD) recognizes the significant advantages that artificial intelligence (AI) and machine learning (ML) can offer to its armed forces. As a result, the department is actively seeking to deepen and accelerate the adoption of these technologies across its services and agencies.

To achieve this goal, the DoD has implemented measures to reduce bureaucracy and expedite the procurement of AI and ML capabilities. This initiative aims to simplify the process for acquiring these technologies, allowing the armed forces to benefit from their tactical and operational advantages more quickly and effectively.

In addition to streamlining procurement, the DoD has been actively involved in various projects and programs that focus on AI and ML. Through close collaboration with industry partners, the department aims to harness the potential of these technologies to enhance military capabilities.

Furthermore, the DoD recognizes the importance of integrating data from multiple sources. To this end, it has been conducting experimentation to identify the best methods for integrating data produced by various sources. This research aims to optimize the use of AI and ML in analyzing and utilizing vast amounts of data generated by the armed forces.

By harnessing the power of AI and ML, the US Department of Defense aims to enhance its operational efficiency and effectiveness. These technologies offer the potential to improve decision-making, automate routine tasks, and enhance situational awareness, among other benefits. With ongoing efforts to streamline procurement and optimize data integration, the DoD is paving the way for a future where AI and ML play integral roles in the success of its armed forces.

Continue reading here:
Tactical and Operational Benefits of Artificial Intelligence and ... - Fagen wasanni

The Scamdemic: Can Machine Learning Turn the Tide? – CDOTrends

The worldwide digital space was gripped by an unprecedented surge in online scams and phishing attacks in 2022. Cybersecurity company Group-IB unveiled an alarming analysis detailing this rising threat.

Their recently launched study showed that the number of scam resources created per brand soared by 162% globally, and even more drastically in the Asia-Pacific region, with a whopping increase of 211% from 2021. The report also disclosed a more than three-fold increase in detected phishing websites over the last year.

These findings underscore the persistent cyber threat landscape, shedding light on a cyber menace that cost more than USD 55 billion in damages last year, according to the Global Anti-Scam Alliance and ScamAdviser's 2022 Global State of Scams Report. With these alarming trends, the "scamdemic" shows no signs of slowing down.

"Scam campaigns are not just affecting more brands each year; the impact that each individual brand faces is growing larger. Scammers are using a vast number of domains and social media accounts to not only reach a greater number of potential victims but also evade counteraction," explained Afiq Sasman, head of the digital risk protection analytics team in the Asia Pacific at Group-IB.

The rise in scams was attributed to increased social media use and the growing automation of scam processes. Social media platforms often serve as the first point of contact between scammers and potential victims, with 58% of scam resources created on such platforms in the Asia-Pacific region last year. Group-IB's Digital Risk Protection analysts found that more than 80% of operations are now automated in scams like Classiscam.

Cybercriminals' use of automation and AI-driven text generators to craft convincing scam and phishing campaigns poses an escalating threat. Such advancements allow cybercriminals to scale operations and provide increased security within their illicit ecosystems.

The study also highlighted the uptick in scam resources hosted on the .tk domain, accounting for 38.8% of all scam resources examined by Group-IB in the second half of 2022. This development reveals the increasing impact of automation in the scam industry, as affiliate programs automatically generate links on this domain zone.

The research underscores the urgent need for robust and innovative cybersecurity measures. By leveraging advanced technologies such as neural networks and machine learning, organizations can monitor millions of online resources to guard against external digital risks, protecting their intellectual property and brand identity. Only through such proactive measures can we hope to turn the tide against this rising digital 'scamdemic'.
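
Group-IB does not disclose its models; as a hypothetical illustration of how machine learning can screen online resources at scale, the sketch below scores URLs with character n-gram TF-IDF features and logistic regression. The handful of training URLs and their labels are invented for the example.

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Invented training data: 1 = scam/phishing, 0 = legitimate.
urls = [
    "http://secure-login-update.example-bank.tk/verify",
    "http://free-gift-claim.promo-winner.tk/now",
    "http://account-verify.paypa1-support.tk/login",
    "https://www.example.com/about",
    "https://docs.python.org/3/library/",
    "https://en.wikipedia.org/wiki/Phishing",
]
labels = [1, 1, 1, 0, 0, 0]

# Character n-grams pick up telltale substrings ("verify", ".tk",
# digit-for-letter swaps) without hand-written rules.
model = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(3, 5)),
    LogisticRegression(),
)
model.fit(urls, labels)

new = ["http://login-verify.reward-claim.tk/update"]
print("scam probability:", model.predict_proba(new)[0, 1])
```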

Image credit: iStockphoto/Dragon Claws

Read the rest here:
The Scamdemic: Can Machine Learning Turn the Tide? - CDOTrends

Animations, and 3-D Models, and 3,000 Drawings: Inside Googles Massive Machine-Learning Masterclass on Leonardo da Vinci – artnet News


Thanks to machine learning, Leonardo's expansive codices have been broken down into different themes.

What can A.I. teach us about Italian Renaissance polymath Leonardo da Vinci? A lot, as we discovered from a new online retrospective from Google Arts and Culture that's powered by machine learning.

"It's a fascinating mini consumer PhD in Leonardo," Amit Sood, founder and director of Google Arts and Culture, told Artnet News. He added that he personally enjoyed learning that the great artist was a left-handed vegetarian: "There's a quote in one of the codices about being vegetarian and drinking wine in moderation: very practical health and well-being advice from Leonardo da Vinci!"

The expansive project, titled Inside a Genius Mind, is a collaboration with 28 institutions around the world, curated by noted Leonardo expert and art historian Martin Kemp (who recently offered an online masterclass on the artist). It features 3,000 drawings, including 1,300 pages of the Old Master's famed codices, such as the 12-volume Codex Atlanticus.

Over 500 years after Leonardo's death, these fragile manuscripts, rarely on view to the general public, offer the closest thing we have to a glimpse inside the mind of the artist, inventor, and engineer.

Inside a Genius Mind, a new online Leonardo da Vinci retrospective from Google Arts and Culture.

"Written back to front in semi-legible old Italian and covering subjects from science to anatomy to flight, the contents of [Leonardo's] codices can feel overwhelmingly vast, varied, and inaccessible," Kemp said in a statement. "Inside a Genius Mind transforms the diverse contents of the codices into an interactive visual journey, engaging audiences with a powerful tool to learn more about the complexities and connections that run throughout Leonardo's genius."

The team from Google used machine learning to sort through Leonardo's prolific writings and drawings, presenting his oeuvre in thematic sections that represent the full breadth of his varied artistic and scientific output and seemingly boundless, interdisciplinary ingenuity.

Leonardo da Vinci, Codex Atlanticus, folio 755 r. Collection of the Veneranda Biblioteca Ambrosiana, Milan, courtesy of Google Arts and Culture.

"We've always tried to use technology to build online projects that are very difficult to do in a physical realm," Sood said. "We use machine learning to uncover visual ideas and similarities that will take the human eye much longer to see, or can't see at all."
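
Google has not detailed its pipeline; a common way to surface visual similarity, and a plausible sketch of the idea, is to embed each digitized page with a pretrained vision model and look up nearest neighbors by cosine similarity. The code below assumes the embeddings have already been computed offline, one vector per page; the page identifiers are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(5)

# Assumed inputs: one embedding vector per digitized codex page,
# e.g., produced offline by a pretrained convolutional network.
page_ids = [f"codex_page_{num}" for num in range(1000)]
embeddings = rng.normal(size=(1000, 256))

def most_similar(query_idx, k=5):
    """Return the k pages whose embeddings are closest in cosine similarity."""
    E = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = E @ E[query_idx]
    sims[query_idx] = -np.inf            # exclude the query page itself
    best = np.argsort(sims)[-k:][::-1]
    return [(page_ids[i], float(sims[i])) for i in best]

for page, score in most_similar(42):
    print(f"{page}: {score:.3f}")
```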

"People know Leonardo's art, but they don't necessarily know his codices, because they are spread across different institutions. Bringing them into one platform was something that was important to us," he added.

Google Arts and Culture digitizing Leonardo da Vinci's wall paintings at the Sala delle Asse at Castello Sforzesco in Milan. Photo courtesy of Google Arts and Culture.

The project was a massive undertaking that involved working closely with museums in Poland, Italy, and France, among others. At some institutions, Google scanned all the drawings. It also digitized Leonardo's room of wall paintings at the Sala delle Asse, at the Castello Sforzesco in Milan, which has been closed for renovations since 2012.

The online exhibition also includes impressive 3-D models and animations of some of Leonardo's drawings and inventions, such as his flying machines. Google has been working with these visualizations for the last seven years, and is also offering them to museums to include in traditional exhibitions, where they can augment the irreplaceable experience of seeing Leonardo's drawings in person.

Google Arts and Culture created this 3-D animation of Leonardo da Vinci's Leocopter. Courtesy of Google Arts and Culture.

But Inside a Genius Mind aims to tell Leonardo's incredible story in a way that appeals to both art history neophytes and experts, from the comfort of their own homes.

"The diversity of what Leonardo was able to accomplish in his lifetime is something that people are going to be inspired and surprised by," Sood said. "In his sketches, he was not putting different disciplines in silos. Everything seemed to merge and converge in different ways."

Leonardo da Vinci, Ginevra de' Benci. Collection of the National Gallery of Art, Washington, D.C., courtesy of Google Arts and Culture.

The online exhibition also uses A.I. to generate playful mashups of Leonardo's sketches, dubbed "Da Vinci's Stickies." It also transports you to the artist's birthplace and final resting place courtesy of Google Street View, and offers a deep dive into the only Leonardo painting in North America, Ginevra de' Benci at the National Gallery of Art in Washington, D.C.

Read the original here:
Animations, and 3-D Models, and 3,000 Drawings: Inside Googles Massive Machine-Learning Masterclass on Leonardo da Vinci - artnet News

Energy Consumption in Machine Learning: An Unseen Cost of … – EnergyPortal.eu

Energy Consumption in Machine Learning: An Unseen Cost of Innovation

In recent years, machine learning has emerged as a driving force behind many technological advancements, from self-driving cars to facial recognition systems. As these innovations continue to transform our world, there is a growing concern about the environmental impact of the energy consumption required to power these advancements. The energy consumption in machine learning is an unseen cost of innovation that needs to be addressed in order to ensure a sustainable future.

Machine learning, a subset of artificial intelligence, involves the development of algorithms that enable computers to learn from and make predictions or decisions based on data. These algorithms require vast amounts of computational power to process and analyze the data, which in turn requires significant energy resources. As the demand for machine learning applications grows, so does the need for more powerful hardware and energy to fuel these computations.

One of the most energy-intensive aspects of machine learning is the training process, during which an algorithm is exposed to a large dataset and learns to recognize patterns and make predictions. This process can take days, weeks, or even months to complete, depending on the complexity of the task and the size of the dataset. During this time, the hardware used to run the algorithms consumes a considerable amount of electricity, contributing to greenhouse gas emissions and exacerbating climate change.

The energy consumption of machine learning is not only an environmental concern but also a financial one. As the cost of electricity continues to rise, companies and researchers may find it increasingly difficult to afford the energy required to develop and deploy machine learning applications. This could potentially slow down the pace of innovation and hinder the adoption of new technologies that could improve our lives.

Recognizing the need to address this issue, researchers and technology companies are exploring ways to reduce the energy consumption of machine learning. One approach is to develop more energy-efficient hardware, such as specialized processors designed specifically for machine learning tasks. These processors can perform computations more efficiently than traditional CPUs or GPUs, reducing the amount of energy required to run machine learning algorithms.

Another approach is to optimize the algorithms themselves, making them more efficient and requiring less computational power to achieve the same results. This can be achieved through techniques such as pruning, which involves removing unnecessary connections in a neural network, and quantization, which reduces the precision of the numerical values used in the computations. Both of these techniques can lead to significant reductions in energy consumption without sacrificing the accuracy of the machine learning model.
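
Both techniques are available off the shelf in PyTorch; the sketch below prunes 30% of the smallest-magnitude weights in a small network's linear layers and then applies dynamic int8 quantization. The tiny model is a stand-in, and the pruning fraction and layer choices are illustrative, not a recommendation.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Stand-in model; real savings matter most on much larger networks.
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))

# Pruning: zero out the 30% of weights with the smallest L1 magnitude,
# removing the connections that contribute least to the output.
for module in model:
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.3)
        prune.remove(module, "weight")   # make the pruning permanent

# Quantization: store Linear weights as int8 and compute with reduced
# precision, cutting memory traffic and energy per inference.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 128)
print(quantized(x).shape)   # same interface, cheaper arithmetic
```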

In addition to these technological solutions, there is also a growing awareness of the need for more sustainable practices in the field of machine learning. Researchers and companies are increasingly considering the environmental impact of their work and taking steps to minimize their energy consumption. This can include using renewable energy sources to power their data centers, implementing energy-efficient cooling systems, and recycling or repurposing old hardware.

As machine learning continues to advance and become more prevalent in our daily lives, it is crucial that we address the issue of energy consumption in order to ensure a sustainable future. By developing more energy-efficient hardware and algorithms, adopting sustainable practices, and raising awareness of the environmental impact of machine learning, we can continue to enjoy the benefits of these innovations while minimizing their impact on our planet. The unseen cost of innovation must be acknowledged and addressed to ensure that the progress we make does not come at the expense of our environment.

Here is the original post:
Energy Consumption in Machine Learning: An Unseen Cost of ... - EnergyPortal.eu