Category Archives: Machine Learning
ChatGPT's gamechanger- multi-modality. What this means | by … – Medium
What is multi-modal AI, and does it deserve the hype it's generating?
If you've been on LinkedIn over the last week or two, you were probably inundated by people losing their minds over GPT integrating multi-modality into its capabilities. Normally, I would take some time to tell you that this is another example of the hype machine working overtime to sell you another fundamentally useless idea.
Well, this time is different. Multi-modality is a genuinely powerful development, one that does warrant the attention it is receiving. In this article, I will give you a quick introduction to multi-modality, why it's a big deal for AI models, and some problems it can come with (remember, nothing is a silver bullet).
Overall, multi-modality is really cool. It enables all kinds of applications in compression, data annotation, labeling, etc. This might be a bit of a heretical take, but I'm personally more excited by multi-modal embeddings than I am by the multi-modal AI models themselves. I might be the only one here, but I just see more utility in developing better embeddings than I do in building better models. That being said, in the right circumstances, integrating multi-modal capabilities into your AI models can definitely be a big dub.
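To make the embeddings point concrete, here is a minimal sketch using the open-source sentence-transformers library and one of its CLIP checkpoints. The model name and image path are illustrative assumptions, not anything from this article; the point is that a single model maps images and text into one shared vector space.

```python
# Minimal multi-modal embedding sketch: a CLIP-style model embeds images
# and text into one shared space, so cross-modal similarity search,
# labeling, and deduplication all become vector math. The checkpoint
# name and file path below are illustrative assumptions.
from PIL import Image
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("clip-ViT-B-32")  # joint image/text encoder

image_embedding = model.encode(Image.open("photo.jpg"))
text_embeddings = model.encode([
    "a photo of a dog",
    "a photo of a city skyline",
])

# Cosine similarity ranks which caption best describes the image.
scores = util.cos_sim(image_embedding, text_embeddings)
print(scores)
```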
Meet the Undergraduate: Malik Francis, School of Engineering … – University of Connecticut
Malik Francis '24 (ENG) has taken full advantage of the research and professional opportunities UConn has to offer, from researching machine learning, to developing a sustainable energy project for UConn Storrs, to interning for Raytheon Technologies.
Francis, a computer engineering major, has been doing research for the past two years.
As a CAPS Research Apprentice, within the Center for Access and Postsecondary Success, Francis was paired with Farhad Imani, Assistant Professor in the Department of Mechanical Engineering, whose interests include machine learning, quality, and reliability improvement with applications in advanced manufacturing.
The selective CAPS Research Apprenticeship Program pairs first-generation college students in STEM majors with faculty researchers to gain first-hand research experience and learn the foundations of academic writing, graduate school applications, and seeking summer internships.
"Renee Trueman of the CAPS Research office pairs you with a professor and you're able to get right into the environment you requested," Francis says. "You are able to choose from a great collection of professors on the UConn campus, studying in a diverse spectrum of fields."
"Additive manufacturing is 3-D printing physical objects with metals instead of plastics," Francis explains. Francis worked with Imani to research and develop machine learning algorithms to detect defects in metal objects.
Francis was first introduced to machine learning through a friend. His conversations with his friend and later experience with Imani showed Francis the potential of machine learning to address real-world problems, further solidifying his interest in pursuing machine learning engineering.
"It was a great fit for me at the moment, and I was eager to learn more," Francis says. "And working with Dr. Imani, his expertise and passion for pushing boundaries showed me how machine learning can transform the future."
Francis was also selected as a 2022-23 CAPS Research Scholar, through which he gained invaluable hands-on experience continuing his project with Imani, along with mentorship, professional development, and financial and cultural literacy training with his CAPS Research cohort.
The CAPS Research office offers the Apprentice opportunity as well as the CAPS Research Scholar and McNair Scholar programs. As a Scholar, mentorship and research guidance continue every semester until graduation, alongside step-by-step assistance with graduate school applications and funding to present at research conferences.
"I think it was a great experience, considering all of the technical skills you develop and being able to network with students and professors within your major," Francis says.
Francis is now working on a project for the Clean Energy and Sustainability Innovation Program at UConn. Francis and his teammates developed a plan for UConn Storrs to integrate fuel cells with UConn's co-generation plant to create a more sustainable environment on the main campus. Fuel cells convert the energy from a chemical reaction into electricity with lower carbon emissions.
"We're basically trying to integrate fuel cell technologies onto the UConn main campus, in order to meet or exceed the rising energy demands while lowering our carbon emissions," Francis says.
Over the past summer, and into the fall semester, Francis has interned with Raytheon Technologies' Collins Aerospace as a data science intern. Francis has helped develop machine learning algorithms to address problems like aircraft maintenance with predictive analytics and machine learning methods.
Francis, who is originally from Jamaica, has also been a part of ScHOLA2RS House, a learning community for Black men. Being part of ScHOLA2RS House served as Francis's introduction to research, as it was through this learning community that he learned about the programs offered through the CAPS Research office and applied to be an Apprentice and then a Scholar.
"That was the main reason I even discovered research," Francis says. "I feel like without them I wouldn't have come this far."
After graduation, Francis plans to work in industry as a machine learning engineer, with a potential future continuing his research in graduate school.
Francis says the opportunities to connect with companies through UConn career fairs as well as ScHOLA2RS House events have prepared him to start a career after graduation.
"Through the research experiences and also the ability to connect with different companies in a professional setting, I feel like UConn's done a great job," Francis says. "I think UConn provides you with a great amount of preparation for an industry and research position. I feel like this is the ideal school for you, as long as you are motivated and take advantage of opportunities."
October is the Month of Discovery, when undergraduates are introduced to the wealth of research and innovation opportunities at UConn. This month, enjoy profiles of outstanding undergraduate researchers on UConn Today, attend a full slate of programming on campus and online, and register for Discovery Quest to launch your undergraduate experience to new heights.
Deep Learning Meets Trash: Amp Robotics' Revolution in Materials … – Robohub
In this episode, Abate flew to Denver, Colorado, to get a behind-the-scenes look at the future of recycling with Joe Castagneri, the head of AI at Amp Robotics. With Materials Recovery Facilities (MRFs) processing a staggering 25 tons of trash per hour, robotic sorting is the clear long-term solution.
Recycling is a for-profit industry. When the margins don't make sense, the items will not be recycled. This is why Amp's mission to use robotics and AI to bring down the cost of recycling and increase the number of items that can be sorted for recycling is so impactful.
Joe Castagneri graduated with his Master of Science in Applied Mathematics, with an undergrad degree in Physics. While still in university, he first joined the team at Amp Robotics in 2016, where he worked on machine learning models to identify recyclables in video streams of trash in Materials Recovery Facilities (MRFs). Today, he is the Head of AI at Amp Robotics, where he is changing the economics of recycling through automation.
Transcript
[00:00:00] (Edited for clarity) Abate: Welcome to Robohub. Today, we're in Denver, Colorado, speaking with Joe Castagneri, head of AI at Amp Robotics. It's staggering how much trash materials recovery facilities (MRFs) process: 25 tons per hour. And yet, much of this is done manually. Amp Robotics believes robots are the future of this industry. Joe, how did you get involved with Amp Robotics?
Joe Castagneri: At 19, while studying applied math at CU Boulder, I met Matan Horowitz, the company's founder. Amp Robotics was in its early stages, experimenting with sorting using an Xbox Kinect sensor. After seeing a presentation on robotics and recycling, I joined as an intern in 2016 and transitioned into machine learning by 2019.
Abate: Fascinating. So, the company's foundation was built on AI?
Joe Castagneri: Exactly. The goal was to merge robotics, AI, and green tech to address major societal problems. Matan saw recycling as the right challenge for our tech.
Abate: Given the advances in GPU technology, did you begin with cloud processing?
Joe Castagneri: Actually, we opted for edge computing due to poor internet in trash facilities and the need for real-time operations. But as we grew, we shifted some support functions to Google Cloud.
Abate: How did Amp Robotics evolve from its early days to its current state?
Joe Castagneri: By listening and learning from our failures. Each robot deployed taught us valuable lessons. Rapid iteration and understanding customer needs were essential. The challenge lies in the diverse and unpredictable nature of waste.
Abate: Absolutely. Recycling facilities deal with so much variety in trash items.
Joe Castagneri: Indeed. Consider a milk jug; its appearance can vary greatly. Traditional computer vision struggles in this space. But deep learning, with enough data, can tackle this complexity.
Abate: And packaging materials and designs constantly evolve. How does the AI handle these changes?
Joe Castagneri: The key is consistent retraining and adaptation. Our models need to evolve as the industry and materials change. Model maintenance is crucial in this ever-shifting environment.
Abate: It sounds like this industry experiences significant model drift.
Joe Castagneri: Yes. Good way of concisely putting it. Totally agree.
Abate: So then, here behind you, we have this: not a prototype, but an in-assembly model.
Joe Castagneri: Yes. So this is our flagship Cortex product, where we have a Delta-style robot that overhangs a belt. The belt will go from where I am through here. With this unit in particular, we're on our production floor, where we manufacture and assemble the units. The robots are Omron robots; we integrate with Omron, and then we custom design the pneumatics, the wiring, the frame, and the vision cabinet that runs that edge compute, and we bring it all together into one package. So this one is in the process of manufacturing, and it will go out into a recycling facility, over a conveyor belt.
Abate: Yeah. So this is a five- or six-year-old prototype called Claudia. So to explain, you have a suction-cup gripper here and a beefy spring, so that the variable height or condition of the material is absorbed mechanically.
Joe Castagneri: And then a pneumatic system runs through this particular gripper; the suction cup will form a vacuum seal, and we descend, suck, and then place the item off the side of the belt into a chute or into a bunker.
Abate: So then this right here would be where, say a milk jug would come and it would hold onto that milk jug.
Joe Castagneri: Yes. It's air suction, and in particular, ahead of the robot cell, a camera imaging the conveyor belt will look at the material and localize where it is and what it is. And then the robotic path-planning software will say, okay, I am configured to pick these things, so let me subset down what I've seen to what I'm configured to pick. And then, if there are more things to pick than I have time for, I want to optimize the number of things that I can pick, given how long they're going to be in my picking region. And then I will intercept to be at this location at this time, turn my vacuum on at this time, and then place it off the side of the belt.
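In rough pseudocode, the greedy scheduling idea Joe sketches here might look like the following. Belt speed, cycle time, region size, and class names are all illustrative assumptions, not Amp's actual planner or parameters.

```python
# Hypothetical sketch of the pick-scheduling logic described above:
# filter detections to configured classes, then greedily pick items in
# the order they will leave the picking region. Numbers are made up.
from dataclasses import dataclass

BELT_SPEED_M_S = 0.5   # assumed belt speed
PICK_TIME_S = 0.6      # assumed time for one pick-and-place cycle
REGION_END_M = 1.0     # downstream edge of the picking region

@dataclass
class Detection:
    label: str
    position_m: float  # distance along belt; 0 = entering pick region

def schedule_picks(detections, configured_labels, now_s=0.0):
    """Greedy earliest-deadline-first scheduler for belt picks."""
    eligible = [d for d in detections if d.label in configured_labels]
    # Deadline = when the item exits the picking region.
    eligible.sort(key=lambda d: (REGION_END_M - d.position_m) / BELT_SPEED_M_S)
    plan, t = [], now_s
    for det in eligible:
        deadline = now_s + (REGION_END_M - det.position_m) / BELT_SPEED_M_S
        if t + PICK_TIME_S <= deadline:  # can we finish before it leaves?
            plan.append((det, t))
            t += PICK_TIME_S
    return plan  # (detection, start-time) pairs for the robot to intercept

picks = schedule_picks(
    [Detection("PET_bottle", 0.1), Detection("film_plastic", 0.2),
     Detection("HDPE_jug", 0.05)],
    configured_labels={"PET_bottle", "HDPE_jug"},
)
print(picks)
```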
Abate: Yeah, so the interesting thing here is that this is a moving belt. You've got a limited amount of belt time, and you're trying to hit a certain number of items per minute that you're picking.
Joe Castagneri: Yes. Right. In particular, the value proposition of these units is as a replacement for human sorters. Human sorters will remove material at 30 to 50 picks per minute, at their peak. So a decent starting robot will remove material at 30 to 50 picks per minute to break even with a person, but really, you would like it to do better. These systems routinely hit 80-plus picks per minute. We've seen them hit over a hundred if the material stream is providing a lot of eligible options in a well-spread-out way. So, a lot faster than a person, at a higher purity, and for the whole duration of two shifts a day.
Abate: And how does that change from, say, one facility to another? Are these used in different ways by different companies?
Joe Castagneri: Dramatically, yes. There's always a conveyor belt in a facility that's the last-chance conveyor. It's the very last one: it's your last chance to get anything off that conveyor, or it's going to go to landfill. And this is a frustrating thing to consumers, because you figure, you put it in your recycling bin, it's all going to be recycled. The reality is, it'll be passed through this facility, and whatever the yield of that facility is, we're going to pull that out. The rest goes to landfill. So our early applications were to put these units on last-chance lines: hey, get whatever you can. But a different type of application might be that you have other conventional sorting equipment separating 2D paper and cardboard from 3D containers and plastics, and you have all this paper and cardboard, but because it was sorted conventionally, there is a whole bunch of other stuff in there. So you would quality-control that stream, removing stuff from it. Historically, this has been done by people. If it's not done, then the paper bales that you make might be rejected by the buyer: there's too much plastic in there, too many impurities. So it has to be done to ensure that the product you're making, paper in this case, has any value. And these units can be there to quality-control that stream.
Abate: So a mixture of everything that people put into their recycling bins is what arrives at the MRF, and now you have to separate each individual component? It would be like you're separating out the paper, the plastic, the cans, and then the random trash that people threw in there as well.
Joe Castagneri: That's exactly right. I'd go one step further. If you think about the waste stream like a miner thinks about ore, what do you have in there? You've got precious metals, hydrocarbons, paper products, wood products, but the problem is they're not refined. If you can sort them, you add value. It's trash until we can sort it, and then it becomes valuable. This is a feedstock now. It's no longer trash. It's transformed into an input to an industry. So when people throw stuff in the recycling bin, they will wish-cycle things, thinking, 'Oh, I bet they'll find a use for this.'
And it arrives at a recycling facility, dumped in a massive pile of recycling, and a front loader takes a scoop of it and puts it into the system. The first conveyor belt in the system is called the presort line. It's usually a really wide, rugged conveyor belt with hand sorters pulling off items like bicycles. This job is still done by people because it's a difficult grasping problem. They remove really odd items that shouldn't be there, like bowling balls, dog waste bags, bicycles, mattresses: things that can break machinery down the line.
Then, conventional sorting equipment sorts through it.
Abate: How does a mattress get into a recycling can?
Joe Castagneri: The recycling dumpsters in cities, typically. In my building, for example, we have a dumpster for garbage and one for single-stream recycling. People will put their old Ikea lamp in there because it has metal. They think it'll be recycled. But since waste is so abstracted away from everyday consumers, they don't realize that these facilities have to run at 25 tons an hour to be profitable. They don't have time to disassemble that lamp. It stands in the way of efficiency.
Abate: 25 tons an hour.
Joe Castagneri: That's common for municipal facilities. In Denver, for instance, they might process 25 tons an hour, or 50,000 pounds an hour, of material.
Abate: And do you know offhand how much trash a person produces in a year?
Joe Castagneri: I think a family household produces about three tons. About one ton of that is recyclable.
Abate: So this is on a massive scale.
Joe Castagneri: Absolutely. Trash is produced locally, so you need these facilities locally. They're called municipal recycling facilities because they're often funded through municipalities to support the local population. No city is the same. For a big city like Denver, having a 25-ton-per-hour recycling facility makes sense. In Colorado, if you go into the Rocky Mountains, it's rare to recycle because there isn't enough volume to make it profitable.
We're concerned about why there isn't recycling in more rural areas, or in areas that don't have the population to drive 10 to 30 tons an hour of waste. You need enough volume for the business to be profitable. It's a narrow margin, so you need scale. It would be great if we could build a smaller facility that was profitable without requiring so much throughput. That's another thing we're looking into.
Abate: So, what are those fixed costs that are preventing people?
Joe Castagneri: The fixed costs for a facility include the capital equipment, the sortation equipment, and conveyor belts. If you visit these facilities, it's a maze of conveyor belts transferring material throughout. The conveyor belts alone are a major expense. For instance, a facility processing 25 tons per hour might cost $10 to $20 million to build. In the mining industry, this might not seem like much, but in other sectors, it's substantial. Given the thin margins on recycling, justifying that $20 million can be challenging. So, the primary fixed costs are the sortation equipment and the conveyor belts. Then there are dynamic costs, like sourcing material and paying for freight, both to bring materials in and to ship sorted goods out.
Abate: With tight margins in this industry, how much are operations affected by changes in material prices or varying regional prices for certain materials?
Joe Castagneri: It's hugely impactful. For instance, in 2018, China stopped accepting low-grade plastics from the US. This was disruptive because instead of earning from these plastics, facilities had to pay to landfill them. This sparked a need for innovation, to find new uses and methods to handle these materials.
Abate: What counts as low-grade plastic? Bottles or items like plastic bags?
Joe Castagneri: Great question. The main valuable commodities in recycling are aluminum cans, cardboard, PET drinking water bottles, and HDPE milk jugs. However, there are other materials, like colored HDPE and polypropylene, which also have value. Materials like polystyrene, used in red Solo cups, are challenging to sort and don't have as much value. When China stopped importing these low-grade plastics, the industry felt pressured to find new sorting methods and uses for them. It's now leading to innovative techniques like pyrolysis and methanolysis that can process these plastics.
Abate: With these valuable materials you've mentioned, are they primarily what your algorithms are trained on?
Joe Castagneri: Of course, there's an incentive to be good at detecting and sorting the most valuable materials. However, AI robotics in recycling is also efficient at identifying materials that are typically ignored. We are part of the solution for materials that don't have an established sorting process using conventional methods.
Now, we are really adept at identifying the mainstay items of recycling, because the robots came into existence when our company began retrofitting value into existing facilities. When retrofitting, you need to accommodate the facilities as they are. They sort natural high-density polyethylene, PET bottles, cardboard, and aluminum, among others.
Abate: Okay. Because the MRF is selecting what they can sell, they're choosing what their local customers are willing to buy. Some materials might not be valuable enough for them to pick. So, could they use the software to specify which items they're interested in?
Joe Castagneri: Absolutely. They can configure what the robot will pick with just a few clicks. If halfway through the day they decide they want to pick a particular item from the conveyor because there's more of it in the load, a few adjustments and it's set to be picked. On the flip side, if they feel the machine is letting too many valuable items like PET bottles pass, they can increase its priority. These robots are highly adaptable, making them stand out in an environment where traditional sortation equipment is easy to operate but not versatile.
Using AI as the primary recognition tool in our facilities, we can change the type of material were processing and swiftly reconfigure the entire plant to adjust to the new material.
Abate: That's quite powerful. Considering a system operated by humans, there's a limit to how many items you can instruct them to recognize. Plus, switching tasks frequently can be disruptive. Has automation introduced notable benefits for your customers?
Joe Castagneri: Indeed. Hand sorting, for instance, epitomizes dull, dirty, and dangerous jobs. It's risky due to hazards like needles and harmful substances in the trash. Workers wear protective gear, and the environment isn't conducive to long hours. Automating this process proves advantageous. Our robots not only replace labor costs but also generate revenue. This leads to a return on investment in under two years for units like these. While humans might struggle with sorting a wide variety of items efficiently, AI doesn't have this limitation.
Furthermore, there are other costs that aren't immediately obvious. It's challenging for a worker to keep multiple items in mind for sorting. Some data suggests that the average duration of employment for hand sorters is three to six weeks. That turnover results in lost revenue, recruitment, training, and other associated costs. Automation proves invaluable in these contexts.
Joe Castagneri: Our biggest market is primary sortation in the United States. We've installed more than 300 units in our own facilities and in retrofit facilities operated by customers as well. Most of those are in the United States. We do have a small presence in Canada, Japan, and the EU as well, so we are international. The same problems exist in different markets. The EU has more regulatory pressure for solutions, leading to stricter purity constraints around the goods that you're sorting.
Abate: And what's that range? Is it like 95%?
Joe Castagneri: When we make bales of materials, big cubes of plastic, and sell them to a plastics reclaimer, the quality of that bale is judged by whether they hit the yield they were hoping for. If they didn't hit the yield, then the bale is considered bad. Until now, we haven't really known the exact contents of a bale. We assume it's about this pure, but that's a rough estimate. A rule of thumb has been that plastic bales should be 85% pure; aluminum cans should be more like 97% pure. The reality is that recycling has historically been about doing the best you can, providing feedstocks to downstream processes and hoping they can work with the quality of material they receive. The EU is tightening regulations by requiring more recycling, even of low-quality plastics not often recycled in America.
Abate: So its not just about recycling more cans and bottles but also recycling more types of materials?
Joe Castagneri: Exactly, yes. You want to optimize both aspects.
Abate: But how can you start recycling more materials until you have the buyer side of the equation sorted? Like, is that sorted for them already? Do they already have customers lined up to buy these materials?
Joe Castagneri: Part of it is, and since there are several links in the chain, who's the buyer for you?
Abate: From what I understand, the buyer is the entity purchasing the packed material from the MRF.
Joe Castagneri: Absolutely. The buyer side would benefit greatly from a transparent market where different commodities are priced based on their quality. Right now, the market operates on a contract-by-contract basis. Buyers in specific regions tend to buy from known partners who have historically provided good quality material. If we had a more structured marketplace, more entrants could participate, identifying valuable commodities and accessing them without needing a web of personal relationships.
Abate: Do you even have a reliable way of determining the yield of each bale?
Joe Castagneri: It depends on the process. For processes like aluminum can recycling, you can weigh the bale before and after processing to get a mass yield. We typically have decent yield numbers, but they cover the entire operation. With the addition of AI analytics, you gain deeper insights, such as the efficiency of a particular unit or piece of equipment.
Abate: That's intriguing. It seems like a significant differentiator for places without this system. One of the biggest challenges in waste management appears to be the lack of access to quality data.
Joe Castagneri: Yes. The data is invaluable to us. We can adjust the AI to keep up with changes in the waste stream. Moreover, in our facilities equipped with multiple vision systems, the key idea is using perception to drive efficiency. This approach results in better yields and the ability to recycle a wider variety of materials.
Abate: If you were to envision a smaller version of this system for a minor municipality, what would it resemble?
Joe Castagneri: Imagine a shipping container with a conveyor belt. Items are sorted using a pneumatic-based optical sorter. It's a simple setup that could be used temporarily, like at music festivals. For rural communities, you might need something between that and a full-scale recycling facility.
Abate: So, in essence, it's an operation without human intervention, other than someone loading the waste?
Joe Castagneri: Yes. Someone loads, removes, and configures.
Abate: Fantastic. Let's go take a look.
Joe Castagneri: Certainly.
Abate De Mey, Podcast Leader and Robotics Founder
Rewiring the Brain: The Neural Code of Traumatic Memories – Neuroscience News
Summary: Unveiling the neurological enigma of traumatic memory formation, researchers harnessed innovative optical and machine-learning methodologies to decode the brain's neuronal networks engaged during trauma memory creation.
The team identified a neural population encoding fear memory, revealing the synchronous activation and crucial role of the dorsal part of the medial prefrontal cortex (dmPFC) in associative fear memory retrieval in mice.
Groundbreaking analytical approaches, including the elastic net machine-learning algorithm, pinpointed specific neurons and their functional connectivity within the spatial and functional fear-memory neural network.
This pivotal study not only substantiates the principle that memories strengthen through enhanced neural connections but also pioneers the melding of optics and machine learning to elucidate the intricate dynamics of neural networks.
Source: NINS
Scientists have long speculated about the physical changes that occur in the brain when a new memory is formed. Now, research from the National Institute for Physiological Sciences (NIPS) has shed light on this intriguing neurological mystery.
In a study recently published in Nature Communications, the research team succeeded in detecting the brain's neuronal networks involved in trauma memory by using a novel method that combines optical and machine-learning-based approaches, capturing the complex changes that occur during memory formation and uncovering the mechanisms by which trauma memories are created.
Animals learn to adapt to changing environments for survival. Associative learning, which includes classical conditioning, is one of the simplest types of learning and has been studied intensively over the past century.
During the last two decades, technical developments in molecular, genetic, and optogenetic methods have made it possible to identify brain regions and specific populations of neurons that control the formation and retrieval of new associative memories. For instance, the dorsal part of the medial prefrontal cortex (dmPFC) is critical for the retrieval of associative fear memory in rodents.
However, the way in which the neurons in this region encode and retrieve associative memory is not well understood, which the research team aimed to address.
"The dmPFC shows specific neural activation and synchrony during fear-memory retrieval and evoked fear responses, such as freezing and heart rate deceleration," explains lead author Masakazu Agetsuma.
"Artificial silencing of the dmPFC in mice suppressed fear responses, indicating that this region is required to recall associative fear memory. Because it is connected with brain systems implicated in learning and associated psychiatric diseases, we wanted to explore how changes in the dmPFC specifically regulate new associative memory information."
The research team used longitudinal two-photon imaging and various computational neuroscience techniques to determine how neural activity changes in the mouse prefrontal cortex after learning in a fear-conditioning paradigm.
Prefrontal neurons behave in a highly complex manner, and each neuron responds to various sensory and motor events. To address this complexity, the research team developed a new analytical method based on the elastic net, a machine-learning algorithm, to identify which specific neurons encode fear memory.
They further analyzed the spatial arrangement and functional connectivity of the neurons using graphical modeling.
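To make the elastic-net step concrete, here is a minimal sketch, not the authors' actual pipeline: regress a behavioral readout on per-neuron activity traces, and keep the neurons assigned nonzero coefficients by the sparse fit. All data below are synthetic stand-ins.

```python
# Illustrative elastic-net sketch (synthetic data, not the study's code):
# the L1/L2-regularized fit zeroes out uninformative neurons, so the
# nonzero coefficients mark a candidate memory-encoding population.
import numpy as np
from sklearn.linear_model import ElasticNetCV

rng = np.random.default_rng(0)
n_frames, n_neurons = 500, 100
activity = rng.normal(size=(n_frames, n_neurons))  # e.g., dF/F per neuron
# Toy behavioral readout driven by the first five neurons plus noise.
freezing = activity[:, :5].sum(axis=1) + rng.normal(size=n_frames)

model = ElasticNetCV(l1_ratio=0.5, cv=5).fit(activity, freezing)
memory_neurons = np.flatnonzero(model.coef_)  # sparse, selected subset
print("candidate fear-memory neurons:", memory_neurons)
```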
"We successfully detected a neural population that encodes fear memory," says Agetsuma. "Our analyses showed us that fear conditioning induced the formation of a fear-memory neural network, with hub neurons that functionally connected the memory neurons."
Importantly, the researchers uncovered direct evidence that associative memory formation was accompanied by a novel associative connection between originally distinct networks, i.e., the conditioned stimulus (CS, e.g., tone) network and the unconditioned stimulus (US, e.g., fearful experience) network.
"We propose that this newly discovered connection might facilitate information processing by triggering a fear response (CR) to a CS (i.e., a neural network for CS-to-CR transformation)."
Memories have long been thought to be formed by the enhancement of neural connections, which are strengthened by the repeated activation of groups of neurons. The findings of the present study, which were based on both real-life observations and model-based analysis, support this.
Furthermore, the study demonstrates how combined methods (optics and machine learning) can be used to visualize the dynamics of neural networks in great detail. These techniques could be used to uncover additional information about the neurological changes associated with learning and memory.
Author: Hayao Kimura
Source: NINS
Contact: Hayao Kimura, NINS
Image: The image is credited to Neuroscience News
Original Research: Open access. "Activity-dependent organization of prefrontal hub-networks for associative learning and signal transformation" by Masakazu Agetsuma et al. in Nature Communications.
Abstract
Activity-dependent organization of prefrontal hub-networks for associative learning and signal transformation
Associative learning is crucial for adapting to environmental changes. Interactions among neuronal populations involving the dorso-medial prefrontal cortex (dmPFC) are proposed to regulate associative learning, but how these neuronal populations store and process information about the association remains unclear.
Here we developed a pipeline for longitudinal two-photon imaging and computational dissection of neural population activities in male mouse dmPFC during fear-conditioning procedures, enabling us to detect learning-dependent changes in the dmPFC network topology.
Using regularized regression methods and graphical modeling, we found that fear conditioning drove dmPFC reorganization to generate a neuronal ensemble encoding conditioned responses (CR) characterized by enhanced internal coactivity, functional connectivity, and association with conditioned stimuli (CS).
Importantly, neurons strongly responding to unconditioned stimuli during conditioning subsequently became hubs of this novel associative network for the CS-to-CR transformation.
Altogether, we demonstrate learning-dependent dynamic modulation of population coding structured on the activity-dependent formation of the hub network within the dmPFC.
Could AI communicate with aliens better than we could? – Space.com
If the search for extraterrestrial intelligence (SETI) is successful, we may require the help of artificial intelligence (AI) to understand what the aliens are saying and, perhaps, talk back to them.
In popular culture, we've gotten used to aliens speaking English, or being instantly understandable with the help of a seemingly magical universal translator. In real life, it might not be so easy.
Consider the potential problems. Number one would be that any potential aliens we encounter won't be speaking a human language. Number two would be the lack of knowledge about the aliens' culture or sociology: even if we could translate, we might not understand what relevance it has to their cultural touchstones.
Eamonn Kerins, an astrophysicist from the Jodrell Bank Centre for Astrophysics at the University of Manchester in the U.K., thinks that the aliens themselves might recognize these limitations and opt to do some of the heavy lifting for us by making their message as simple as possible.
"One might hope that aliens who want to establish contact might be attempting to make their signal as universally understandable as possible," said Kerins in a Zoom interview. "Maybe it's something as basic as a mathematical sequence, and already that conveys the one message that perhaps they hoped to send in the first place, which is that we're here, you're not alone."
Indeed, the possibility of receiving recognizable mathematical information, such as pi or a burst of prime numbers in sequence (as was the case in the novel "Contact" by Carl Sagan), has been considered in SETI for decades, but it's not the only possible message that we might receive. Other signals might be more sophisticated in their design, trying to convey more complicated concepts, and this is where we hit problem number three: that alien language could be orders of magnitude more complex than human communication.
This is where we will need AI's help, but to understand how, first we must delve into the details behind the structure of language.
When we talk about a signal or a message being complex, we don't mean that the aliens will necessarily be talking about complex matters. Rather, it refers to the complexity underlying the structure of their message, their language. Linguists call this "information theory," which was developed by the cryptographer and mathematician Claude Shannon who worked at Bell Labs in New Jersey in the late 1940s, and was expanded on by linguist George Zipf of Harvard University.
Information theory is a way of distilling the information content of any given communication. Shannon realized that any kind of conveyance of information, be it human language, the chemical exhalations of plants to attract predators to eat caterpillars on their leaves, or the transmission of data down a fiber optic cable, can be broken down into discrete units, or bits. These are like the 'quanta' of communication, such as the letters of the alphabet or a dolphin's repertoire of whistles.
In language, these bits cannot just go in any order. There is syntax, which describes the grammatical rules that dictate how the bits can be ordered. For example: In English, a 'q' at the beginning of a word is always followed by a 'u', and then the 'u' can be followed by a limited number of letters, and so on. Now suppose there is a gap: 'qu_k'. We know from the syntax that there are only a few combinations of letters that can fill the gap: 'ac' (quack), 'ar' (quark), 'ic' (quick) and 'ir' (quirk). But if the word is part of a sentence, 'The duck went qu_k', then through context we know the missing letters are 'ac'.
By knowing the rules, or syntax, we can fill in the blanks. The amount missing that still allows us to complete the word or sentence is called "Shannon entropy," and thanks to its complexity, human languages have the highest Shannon entropy of any known form of natural communication on the planet.
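To make the entropy idea concrete, here is a minimal sketch computing first-order Shannon entropy from symbol frequencies. Real analyses also condition on context, as in the 'qu_k' example; this toy version looks only at individual symbol probabilities.

```python
# Sketch: first-order Shannon entropy of a message's symbol distribution.
# Messages with repeated, predictable symbols score lower than messages
# whose symbols are closer to uniformly distributed.
import math
from collections import Counter

def shannon_entropy(message: str) -> float:
    counts = Counter(message)
    total = len(message)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

print(shannon_entropy("the duck went quack"))  # structured text
print(shannon_entropy("qzjxkvwpyg bfmhdc"))    # near-uniform symbols
```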
Meanwhile, Zipf was able to quantify these basic principles of Shannon's information theory. In any communication, some of the little units, these fundamental bits, will appear more often than others. For example, in human language, letters such as a, e, o, t and r appear far more often than q or z. When plotted on a graph with the most common units first (the units ranked along the x-axis, their rate of occurrence on the y-axis), all human languages produce a slope with a gradient of −1. At the other extreme, a baby's random babbling results in a horizontal line on the graph, with all sounds being equally likely. The more complex the communication, as the baby grows into a toddler and starts to talk, for example, the more the slope converges on a −1 gradient.
A transmission of the digits of pi, for instance, would not carry a −1 slope. So instead of searching only for technosignatures, the technologically generated signals that could mark other advanced extraterrestrial civilizations, some researchers think that SETI should specifically look for signals with a −1 slope, regardless of whether they appear artificial or not, and the machine-learning algorithms that carefully sift through every scrap of data collected by radio telescopes could be configured to analyze each potential signal to determine whether it adheres to Zipf's law.
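A hedged sketch of that screening idea: estimate the log-log slope of unit frequency versus rank and compare it to −1. The tokenization into letters and the demo text are assumptions for illustration only.

```python
# Sketch of a Zipf's-law screen: fit the slope of log(frequency) versus
# log(rank) for a candidate signal's units. Language-like structure
# lands near -1; uniform "babble" lands near 0.
import numpy as np
from collections import Counter

def zipf_slope(units) -> float:
    freqs = np.array(sorted(Counter(units).values(), reverse=True), float)
    ranks = np.arange(1, len(freqs) + 1)
    slope, _ = np.polyfit(np.log(ranks), np.log(freqs), 1)
    return slope

text = "the quick brown fox jumps over the lazy dog " * 50
print(zipf_slope(list(text.replace(" ", ""))))  # letters as toy units
```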
Beyond that, alien communication could have a higher Shannon entropy than human language, and if it is much higher, it might make their language too difficult for humans to grasp.
But perhaps not for AI. Already, AI is being put to the test trying to understand communication from a non-human species. If it can pass that test, perhaps AI will be ready to tackle any alien messages in the future.
Denise Herzing, who is the Research Director at the Wild Dolphin Project in Jupiter, Florida, is one of the world's foremost experts in trying to understand what dolphins are saying to each other. Herzing has been swimming with dolphins and studying their communication for four decades, and has now introduced AI into the mix.
"We have two ways in which we're looking at dolphin communication, and they both use AI," Herzing told Space.com.
One way is listening to recordings of the various whistles and barks that make up the dolphins' own communication. In particular, a machine-learning algorithm is able to take a snippet of dolphin chat and break that communication down into discrete units on a spectrogram (a graph of sounds organized by frequency), just as Shannon and Zipf described, and then it labels each unique unit with a letter. These become analogous to words or letters, and Herzing is looking at the different ways they combine, or in other words their degree of order and structure.
"Right now we've identified 24 small units of sound that recombine within a spectrogram," said Herzing. "So you might have up-whistle 'A' followed by down-whistle 'B,' and so on, and this creates a symbolic code for a sequence of sound."
The machine-learning algorithm is then able to deeply analyze the sound recordings, searching for instances where that symbolic code is repeated.
"We're looking for interesting sequences that are somehow repetitive," said Herzing. "The algorithms then look for substitutions and deletions in the sequences, so you might have the same symbolic code but one little whistle is different. That's a learning algorithm that is pretty important."
That little difference could be because it incorporates a dolphin's signature whistle (every dolphin has its own unique signature whistle, a kind of identifier like human names) or because the context is different.
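The substitution-and-deletion search Herzing describes is, in essence, an edit-distance comparison between symbolic codes. A toy sketch, with made-up sequences:

```python
# Sketch: classic edit distance between two symbolic whistle codes.
# Sequences within a small distance are near-repeats worth inspecting.
def edit_distance(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

print(edit_distance("ABCAB", "ABDAB"))  # 1: same code, one whistle differs
```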
This is all solidly in line with Shannon's information theory, and Herzing is also interested in Zipf's law and how closely dolphin communication replicates that −1 slope.
"We're looking for language-like structures, because every language has a structure and a grammar that follows rules," said Herzing. "We're looking specifically for what the possibilities are for recombinational data are our little units of sound only found alone, or do some recombine with another sound?"
Herzing's team has been searching for bigrams, occasions when two units frequently occur together, which might signify a specific phrase. More recently, they have also been searching for trigrams, where three units regularly occur in order, implying greater complexity.
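A minimal sketch of that n-gram search, with a made-up whistle sequence:

```python
# Sketch: count bigrams and trigrams in a symbolic whistle code to find
# combinations that recur more often than chance. Sequence is invented.
from collections import Counter

def ngram_counts(sequence: str, n: int) -> Counter:
    return Counter(sequence[i:i + n] for i in range(len(sequence) - n + 1))

whistles = "ABCABDABCAABCABD"  # hypothetical code labeled from spectrograms
print(ngram_counts(whistles, 2).most_common(3))  # frequent bigrams
print(ngram_counts(whistles, 3).most_common(3))  # frequent trigrams
```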
This is exactly the way that AI would begin analyzing a real message embedded within a SETI signal. If the alien communication is more complex in structure and syntax than human languages then that tells us something about them; perhaps that their species is older than our own, which has given them enough time for their communication to evolve.
However, we still wouldn't know the context of what they are saying to us in the message. This is currently one of the challenges in understanding dolphin communication. Herzing has video footage of dolphin pods to see what they were doing whenever the AI detects a repeated vocalization of symbolic code, which allows Herzing to try and infer context to the sounds.
"But if you're dealing with radio signals, how are you ever going to figure out what the context of the message is?" asks Herzing, who also takes an interest in SETI. "Looking at animal sounds is an analog for looking at alien signals, potentially to build up the tools to categorize and analyze [the signals]. But for the interpretation part? Oh boy, I don't know."
Once we have received a signal from aliens, we may want to say something back to them. The difficulty in understanding context rears its head again here, too. As Spock says in the film "Star Trek IV: The Voyage Home," when discussing responding to an alien probe, "we could replicate the sounds but not the meaning. We'd be responding in gibberish."
Herzing is trying to circumvent this context problem by mutually agreeing with the dolphins what to call things. This is the essence of CHAT (Cetacean Hearing and Telemetry), which is the second way in which researchers are using AI to try and communicate with dolphins.
In its first incarnation, CHAT was a large device strapped around the chest of the user, receiving sounds via hydrophone (underwater microphone) and then producing sound through a speaker. The modern version is smartphone-sized and worn around the wrist. The idea is not to converse in 'dolphinese,' but to agree with the dolphins upon pre-programmed sounds for certain toys that the dolphins want to play with. For example, if they want to play with a hoop, they make the agreed-upon whistle for 'hoop'. If a diver wearing the CHAT device wants a dolphin to bring them a hoop, the underwater speaker can play the whistle for "hoop." The AI's job is to recognize the agreed-upon whistle amongst all the other sounds a dolphin makes amidst all the various sources of audio interference underwater, such as bubbles and boat propellers.
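As a toy illustration of that recognition problem, and not CHAT's actual detector, one could slide a template whistle's spectrogram across incoming audio and flag high-correlation windows. The sample rate and threshold below are assumptions.

```python
# Illustrative whistle detector: normalized cross-correlation between a
# template spectrogram and windows of incoming audio. Values are made up.
import numpy as np
from scipy.signal import spectrogram

def detect_whistle(audio, template, fs=48_000, threshold=0.8):
    _, _, S = spectrogram(audio, fs)      # incoming audio spectrogram
    _, _, T = spectrogram(template, fs)   # agreed-upon whistle template
    T = (T - T.mean()) / (T.std() + 1e-9)
    w = T.shape[1]
    hits = []
    for i in range(S.shape[1] - w + 1):
        win = S[:, i:i + w]
        win = (win - win.mean()) / (win.std() + 1e-9)
        score = (win * T).mean()          # correlation-like score in [-1, 1]
        if score > threshold:
            hits.append((i, score))
    return hits
```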
Herzing has observed that the dolphins have used the agreed-upon whistles, but in mostly different contexts. The problem, says Herzing, is spending enough time with any one particular dolphin to allow them to fully learn the agreed-upon sounds.
With aliens, their message will have traveled many light years; any two-way communication could take decades, centuries, millennia, if it is even possible at all. So whatever information we have about the aliens will be condensed into their original transmission. If, as Kerins suspects, they send something mathematical just as a signal to us that they are there and we are not alone, then we won't have to worry about deciphering it.
However, if they do send a message that is more involved, then, as Herzing is discovering with dolphins, the size of the dataset is crucial. So let's hope the aliens pack their message with information to give us and AI the best chance of at least assessing some of it.
Computer Vision at the Edge Can Enable AI Apps – Embedded Computing Design
October 11, 2023
Computer vision refers to the technological goal of bringing human vision, an information-rich and intuitive sensor, to computers, enabling applications such as assembly line inspection, security systems, driver assistance and robotics.
Unfortunately, computers lack the ability to intuit vision and imagery like humans. Instead, we must give computers algorithms to solve domain-specific tasks.
We often take for granted our vision, and how that biological ability interprets our surroundings, from looking in the refrigerator to check food expiration dates to watching intently for a traffic light to turn green.
Computer vision dates to the 1960s and was initially used for tasks like reading text from a page (optical character recognition) and recognizing simple shapes such as circles or rectangles. Computer vision has since become one of the core domains of artificial intelligence (AI), which encompasses any computer system attempting to perceive, synthesize or infer some deeper meaning from data. There are three types of computer vision: conventional or rules-based, classical machine learning, and deep learning.
In this article, I'll consider AI from the perspective of making computers use vision to perceive the world more like humans. I'll also describe the trade-offs of each type of computer vision, especially in embedded systems that collect, process and act upon data locally, rather than relying on cloud-based resources.
Conventional computer vision refers to programmed algorithms that solve tasks such as motion estimation, panoramic image stitching or line detection.
Conventional computer vision uses standard signal processing and logic to solve tasks. Algorithms such as Canny edge detection or optical flow can find contours or vectors of motion, respectively, which is useful for isolating objects in an image or tracking motion between subsequent images. These types of algorithms rely on filters, transforms, heuristics and thresholds to extract meaningful information from an image or video. They are often a precursor to an application-specific algorithm, such as decoding the information within a 1-D barcode, where a series of rules decodes the barcode upon detection of the individual bars.
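For illustration, a minimal Canny pipeline with OpenCV might look like this; the file name and threshold values are arbitrary placeholders.

```python
# Minimal conventional-CV sketch: Canny edge detection, a filter-and-
# threshold pipeline with hand-tuned parameters. File name and
# thresholds are placeholders, not recommendations.
import cv2

image = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)
blurred = cv2.GaussianBlur(image, (5, 5), sigmaX=1.4)  # suppress noise
edges = cv2.Canny(blurred, threshold1=50, threshold2=150)

# Contours isolate candidate objects from the edge map.
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
print(f"found {len(contours)} candidate contours")
```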
Conventional computer vision is beneficial in its straightforwardness and explainability, meaning that developers can analyze the algorithm at each step and explain why the algorithm behaved as it did. This can be useful in software auditing or safety-critical applications. However, conventional computer vision often requires more expertise to implement properly.
The algorithms often have a small set of parameters that require tuning to achieve optimal performance in different environments. Implementation can be difficult, especially for optimized, high-throughput applications. Some rules, algorithmic decisions or parameter values may have unexpected effects on images that do not fit the original expectations, such that it becomes possible to trick the algorithm. Such vulnerabilities and edge cases can be difficult to fix without exposing new edge cases or increasing the algorithm's complexity.
Machine learning emerged as a class of algorithms that use data to set parameters within an algorithm, rather than direct programming or calibration. These algorithms, such as the support vector machine, the multilayer perceptron (a precursor to artificial neural networks) and k-nearest neighbors, saw use in applications that were too challenging to solve with conventional computer vision. For example, recognizing a dog is a difficult task to program with a traditional computer vision algorithm, especially where complex scenery and other objects are also present. Training a machine learning algorithm to learn parameters from hundreds or thousands of sample images is more tractable. Edge cases are solved by using a dataset that contains examples of those edge cases.
Training is computationally intensive, but running the algorithm on new data requires far fewer computing resources, making it possible to run in real time. These trained models generally have less explainability but are more resilient to small, unplanned variations in data, such as the orientation of an object or background noises. It is possible to fix variations that are not handled well by retraining with more data. Larger models with more parameters often boast higher accuracy, but have longer training times as well as more computations needed at run time, which has historically prevented very large models from use in real-time applications on embedded processors.
Classical machine learning-based approaches to computer vision still require an expert to craft the feature set on which the machine learning model is trained. Many of these features are common to conventional computer vision applications. Not all features are useful, thus requiring analysis to prune uninformative features. Implementing these algorithms effectively requires expertise in image processing as well as machine learning.
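As a rough sketch of that workflow, here is an expert-crafted feature vector feeding a support vector machine; the features, data and labels are toy stand-ins, not a vetted defect detector.

```python
# Sketch of the classical-ML pipeline: hand-crafted image features
# (crude intensity and edge statistics here) train an SVM classifier.
import numpy as np
import cv2
from sklearn.svm import SVC

def extract_features(image: np.ndarray) -> np.ndarray:
    edges = cv2.Canny(image, 50, 150)
    return np.array([
        image.mean(),        # average brightness
        image.std(),         # contrast
        (edges > 0).mean(),  # edge density
    ])

# Toy training set: random stand-ins for "defect" vs. "ok" images.
rng = np.random.default_rng(1)
images = rng.integers(0, 256, size=(40, 64, 64), dtype=np.uint8)
labels = rng.integers(0, 2, size=40)

X = np.stack([extract_features(img) for img in images])
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.predict(X[:3]))
```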
Deep learning refers to very large neural network models operating on largely unprocessed or raw data. Deep learning has made a large impact on computer vision by pulling feature extraction operations into the model itself, such that the algorithm learns the most informative features as needed. The following figure shows the data flow in each computer vision approach.
Deep learning has the most generality among the types of computer vision; neural networks are universal function approximators, meaning they have the capability of learning any relation between input and output (to the extent that the relation exists). Deep learning excels at finding both subtle and obvious patterns in data, and is the most tolerant to input variations. Applications such as object recognition, human pose estimation and pixel-level scene segmentation are common use cases.
Deep learning requires the least direct-tuning and image processing expertise. The algorithms rely on large and high-quality data sets to help the general-purpose algorithm learn patterns by gradually finding parameters that optimize a loss or error metric during training. Novice developers can make effective use of deep learning because the focus shifts from the algorithms implementation toward data-set curation. Furthermore, many deep learning models are publicly available such that they can be retrained for specific use cases. Using these publicly available models is straightforward; developing fully custom architectures does, however, require more expertise.
Compared to conventional computer vision and classical machine learning, deep learning has consistently higher accuracy and is rapidly improving due to immense popularity in research (and growingly, commercial) communities. However, deep learning typically has poor explainability since the algorithms are very large and complex; images that are completely unlike the training data set can cause unexpected, unpredictable behavior. Because of their size, deep learning models are so computationally intensive that special hardware is necessary to accelerate them for real-time operation. Training large models on large data sets can be costly, and curating a large data set is often time-consuming and tedious.
However, improvements in processing power, speeds, accelerators such as neural processing units and graphics processing units, and improved software support for matrix and vector operations have made the increase in computation requirements less consequential, even on embedded systems. Embedded microprocessors like the AM6xA portfolio leverage hardware accelerators to run deep learning algorithms at high frame rates.
So which type of computer vision is best?
That ultimately depends on its application, as shown in Figure 2.
In short, computer vision with classical machine learning sits between the other two methods on most attributes; the set of applications where it beats both alternatives is small. Conventional computer vision can be sufficiently accurate and highly efficient in straightforward, high-throughput or safety-critical applications. Deep learning is the most general, the easiest to develop for, and has the highest accuracy in complex applications and environments, such as identifying a tiny missing component during PCB assembly verification for high-density designs.
Some applications benefit from using multiple types of computer vision algorithms in tandem such that they cover each other's weak points. This approach is common in safety-critical applications dealing with highly variable environments, such as driver assistance systems. For example, you could employ optical flow using conventional computer vision methods alongside a deep learning model for tracking nearby vehicles, and use an algorithm to fuse the results to ascertain whether the two approaches agree with each other. If they do not, the system could warn the driver or start a graceful safety maneuver. Alternatively, it is possible to use multiple types of computer vision sequentially. A barcode reader can use deep learning to locate regions of interest, crop those regions, and then use a conventional computer vision algorithm to decode.
The barrier to entry for computer vision is progressively lowering. Open-source libraries like OpenCV provide efficient implementations of common functions like edge detection and color conversion. Deep learning runtimes like TensorFlow Lite and ONNX Runtime enable deep learning models to run efficiently on embedded processors. These runtimes also provide interfaces that custom hardware accelerators can implement to simplify the developer's experience when they are ready to move an algorithm from the training environment on a PC or in the cloud to inference on the embedded processor. Many deep learning architectures are also openly published such that they can be reused for a variety of tasks.
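As a hedged sketch of that deployment step, here is what inference through ONNX Runtime looks like; the model file, input name and tensor shape are placeholders for whatever your exported network actually uses.

```python
# Sketch of running an exported model with ONNX Runtime, one of the
# embedded-friendly runtimes mentioned above. Model path and input
# shape are placeholders for your own network.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("classifier.onnx",
                               providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name

frame = np.zeros((1, 3, 224, 224), dtype=np.float32)  # preprocessed image
outputs = session.run(None, {input_name: frame})
print("top class:", int(np.argmax(outputs[0])))
```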
Processors in the Texas Instruments (TI) AM6xA portfolio, such as the AM62A7, contain deep learning acceleration hardware as well as software support for a variety of conventional and deep learning computer vision tasks. Digital signal processor cores like the C66x and hardware accelerators for optical flow and stereo depth estimation also enable high performance conventional computer vision tasks.
With processors capable of both conventional and deep learning computer vision, it becomes possible to build tools that rival sci-fi dreams. Automated shopping carts will streamline shopping; surgical and medical robots will guide doctors to early signs of disease; mobile robots will mow the lawn and deliver packages. If you can envision it, you can likely build it. See TI's edge AI vision page to explore how embedded computer vision is changing the world.
Reese Grimsley is a Systems Applications Engineer with the Sitara MPU product line within TI's Processors organization. At TI, Reese works on image processing, machine learning, and analytics for a variety of camera-based end equipment in industrial markets. One of his focal areas is demystifying edge AI to help both new and experienced customers understand how they can quickly and easily bring complex deep learning algorithms to their products and improve accuracy, performance, and robustness.
More from Reese
View post:
Computer Vision at the Edge Can Enable AI Apps - Embedded Computing Design
3 Machine Learning Stocks That Should Be on Every Investor’s Radar This Fall – InvestorPlace
Machine learning has become one of the most transformative technologies of the 21st century, with the potential to revolutionize everything from transportation to healthcare. As machine learning adoption accelerates, investors have a tremendous opportunity to profit from this megatrend. However, not all machine learning stocks are created equal. Many fledgling companies boast about machine learning capabilities but have nebulous use cases and unproven business models.
With that in mind, I believe investors should focus on established machine learning stocks with concrete traction rather than pursuing immature chatbot companies with questionable paths to profitability. By contrast, the machine learning stocks on this list have moved beyond the hype and have integrated this powerful technology into their core products and services.
Thus, I believe the following three machine learning stocks should be on every tech investor's radar.
iRobot Corporation (NASDAQ:IRBT) designs and builds consumer robots and is best known for its Roomba robotic vacuums. The stock has faced extreme volatility over the past year, with shares cratering from a peak of $133 to around $35 per share, a massive 73% drawdown from the stock's highs.
While iRobot's business has faced headwinds from production and supply chain issues, much of the negativity appears priced into the stock at current levels. Revenue and earnings per share are expected to rebound solidly next year, with analysts forecasting 8% sales growth and losses being halved. Multiple Wall Street analysts see significant upside for the stock, with the average one-year price target implying over 43% in potential gains.
With the stock trading at just 1x forward sales, iRobot appears to have been excessively punished by the recent market selloff. While macroeconomic uncertainty persists, robotic vacuum demand has historically proven resilient through prior downturns. Plus, as AI and machine learning become more common and people offload more chores to automation, it is only natural to expect that Roombas or similar devices will handle indoor cleaning.
As supply constraints ease, iRobot looks poised to reaccelerate growth. Innovative products like the Roomba j7+ and operating leverage provide a promising setup for the next bull cycle. Investors looking for deep value among beaten-down growth stocks should take advantage of the massive discount on IRBT stock.
AeroVironment (NASDAQ:AVAV) is a leading developer of unmanned aircraft systems (UAS) and tactical missile systems for military applications. Its ultra-portable drones enable reconnaissance, surveillance, and communications for infantry and special forces.
AeroVironment has seen its stock soar recently amidst surging demand, with shares nearly doubling over the past year. The company posted blockbuster fiscal Q1 2024 results, with revenue up 40% and earnings per share tripling to $1, beating estimates by 70 cents. Its record $540 million funded backlog provides revenue visibility for years ahead.
The war in Ukraine has underscored the strategic importance of AeroVironment's unmanned solutions. Switchblade tactical drones and Puma reconnaissance UAS have proven highly effective for Ukrainian forces. The conflict has thus led allied nations to significantly increase investment in the cutting-edge drones and AI/autonomy capabilities where AeroVironment excels.
The company also continues to innovate, having recently acquired AI robotic control systems leader Tomahawk Robotics. This expands AeroVironment's ecosystem, with unmanned aircraft, ground robots, and sensing capabilities operating seamlessly together.
With its expertise in AI and leading technology, AeroVironment enjoys a first-mover advantage in the new paradigm for intelligent, interconnected unmanned systems. Its solutions are mission-critical for allied nations looking to modernize defense capabilities. Investors should capitalize on any pullbacks to build positions in this high-growth innovator.
Baidu (NASDAQ:BIDU) is the leading Chinese Internet search provider. It operates China's Google equivalent, along with a host of online products and services. After four years of rangebound trading, now may be the time for Baidu to finally break out.
The company posted stellar Q2 results, with revenue up 15% (8.8% converted to USD) and earnings per share surging 42% year-over-year. Online marketing continues to rebound post-pandemic, while Baidus AI cloud business turned profitable. Its earnings per share of $3.10 beat expectations by 76 cents.
However, Baidu's most exciting growth driver is its industry-leading AI capabilities. Its ERNIE AI system integrates advanced natural language processing to enhance search, push personalized recommendations, and enable intelligent chatbots. Baidu is reinventing its consumer products to be AI-native, positioning itself for sustainable growth.
Despite these positives, Baidu trades at just 13 times forward earnings, a bargain for a tech leader of its caliber. If Baidu succeeds in commercializing its extensive AI research, the stock's languishing valuation presents enormous upside potential. Bullish investors should take advantage of the negativity shrouding Chinese tech to build positions at an attractive entry point.
On the date of publication, Omor Ibne Ehsan did not have (either directly or indirectly) any positions in the securities mentioned in this article. The opinions expressed in this article are those of the writer, subject to the InvestorPlace.com Publishing Guidelines.
Omor Ibne Ehsan is a writer at InvestorPlace. He is a self-taught investor with a focus on growth and cyclical stocks that have strong fundamentals, value, and long-term potential. He also has an interest in high-risk, high-reward investments such as cryptocurrencies and penny stocks. You can follow him on LinkedIn.
Read the original post:
3 Machine Learning Stocks That Should Be on Every Investor's Radar This Fall - InvestorPlace
‘Artificial intelligence is being used in the battle against ageing’ – The Mirror
Dr Miriam Stoppard says Edinburgh University researchers discovered how to safely remove old, defective, useless cells linked to cancer, Alzheimer's, failing eyesight and declining mobility
We are so keen to find the elixir of life that I suspect we'll go on looking well into the future for drugs that stave off the effects of ageing.
Edinburgh University researchers are the latest scientists to come up with age-defying drugs, this time using artificial intelligence. They have discovered how to safely remove old, defective, useless cells, known as senescent cells, which are linked to cancer, Alzheimer's, failing eyesight and declining mobility.
A trio of chemicals that target faulty cells linked to a range of age-related conditions were found using their pioneering method, which the researchers say is hundreds of times cheaper than standard screening methods. Until now, no safe way had been found to eliminate senescent cells: candidate drugs are often highly toxic to normal, healthy cells in the body.
The team has now devised a way of pinpointing safe senolytic drugs, which can target these senescent cells, using AI. They've developed a machine learning model by training it to recognise the key features of senolytic chemicals, using data from more than 4,000 chemical structures mined from previous studies.
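As a rough illustration of what such a model might look like in code, here is a minimal sketch, not the Edinburgh team's actual pipeline: it assumes a labelled table of chemical structures as SMILES strings (the file senolytics.csv and its columns are hypothetical) and pairs RDKit molecular fingerprints with a scikit-learn random forest.

```python
import pandas as pd
from rdkit import Chem
from rdkit.Chem import AllChem
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Hypothetical data set: one row per compound, columns "smiles" and "label"
# (1 = known senolytic, 0 = not), mined from prior published studies.
df = pd.read_csv("senolytics.csv")

def fingerprint(smiles, n_bits=2048):
    """Morgan (circular) fingerprint as a bit list; None for unparseable SMILES."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return None
    return list(AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=n_bits))

features = df["smiles"].map(fingerprint)
mask = features.notna()                       # drop compounds RDKit cannot parse
X, y = list(features[mask]), df.loc[mask, "label"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_train, y_train)
print("held-out ROC AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```

In a real screen, the trained model would then score a much larger chemical library, and the top-ranked molecules, such as the 21 described next, would go on to lab testing.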
The AI model identified 21 top-scoring molecules that it deemed to have a high likelihood of being senolytics, with potential for experimental testing. "This study shows that AI can be incredibly effective in helping us identify new drug candidates, particularly at early stages of drug discovery and for diseases with complex biology or few known molecular targets," says Dr Diego Oyarzún of the School of Informatics and School of Biological Sciences at Edinburgh University.
Lab tests on human cells revealed that three of the chemicals, ginkgetin, periplocin and oleandrin, were able to remove senescent cells without damaging healthy cells. All three are natural products found in traditional herbal medicines, the team says. Oleandrin was found to be more effective than the best-performing known senolytic drug of its kind.
Dr Vanessa Smer-Barreto, from the Institute of Genetics and Cancer and School of Informatics at Edinburgh, added: "This work was borne out of intensive collaboration between data scientists, chemists and biologists. Harnessing the strengths of this inter-disciplinary mix, we were able to build robust models and save screening costs by using only published data for model training. I hope this will open new opportunities to accelerate the application of this exciting technology."
Collaboration between many specialists in multiple disciplines is yet again the key to advancing treatments. It's the only way cutting-edge research can prosper.
Read this article:
'Artificial intelligence is being used in the battle against ageing' - The Mirror
AI and machine learning can successfully diagnose polycystic ovary … – National Institutes of Health (.gov)
News Release
Monday, September 18, 2023
NIH study reviews 25 years of data and finds AI/ML can detect common hormone disorder.
Artificial intelligence (AI) and machine learning (ML) can effectively detect and diagnose polycystic ovary syndrome (PCOS), the most common hormone disorder among women, typically affecting those between ages 15 and 45, according to a new study by the National Institutes of Health. Researchers systematically reviewed published scientific studies that used AI/ML to analyze data to diagnose and classify PCOS and found that AI/ML-based programs were able to successfully detect PCOS.
"Given the large burden of under- and mis-diagnosed PCOS in the community and its potentially serious outcomes, we wanted to identify the utility of AI/ML in the identification of patients that may be at risk for PCOS," said Janet Hall, M.D., senior investigator and endocrinologist at the National Institute of Environmental Health Sciences (NIEHS), part of NIH, and a study co-author. "The effectiveness of AI and machine learning in detecting PCOS was even more impressive than we had thought."
PCOS occurs when the ovaries do not work properly, and in many cases, is accompanied by elevated levels of testosterone. The disorder can cause irregular periods, acne, extra facial hair, or hair loss from the head. Women with PCOS are often at an increased risk for developing type 2 diabetes, as well as sleep, psychological, cardiovascular, and other reproductive disorders such as uterine cancer and infertility.
"PCOS can be challenging to diagnose given its overlap with other conditions," said Skand Shekhar, M.D., senior author of the study and assistant research physician and endocrinologist at the NIEHS. "These data reflect the untapped potential of incorporating AI/ML in electronic health records and other clinical settings to improve the diagnosis and care of women with PCOS."
Study authors suggested integrating large population-based studies with electronic health datasets and analyzing common laboratory tests to identify sensitive diagnostic biomarkers that can facilitate the diagnosis of PCOS.
Diagnosis is based on widely accepted standardized criteria that have evolved over the years, but typically includes clinical features (e.g., acne, excess hair growth, and irregular periods) accompanied by laboratory (e.g., high blood testosterone) and radiological findings (e.g., multiple small cysts and increased ovarian volume on ovarian ultrasound). However, because some of the features of PCOS can co-occur with other disorders such as obesity, diabetes, and cardiometabolic disorders, it frequently goes unrecognized.
AI refers to the use of computer-based systems or tools to mimic human intelligence and to help make decisions or predictions. ML is a subdivision of AI focused on learning from previous events and applying this knowledge to future decision-making. AI can process massive amounts of distinct data, such as that derived from electronic health records, making it an ideal aid in the diagnosis of difficult-to-diagnose disorders like PCOS.
The researchers conducted a systematic review of all peer-reviewed studies published on this topic for the past 25 years (1997-2022) that used AI/ML to detect PCOS. With the help of an experienced NIH librarian, the researchers identified potentially eligible studies. In total, they screened 135 studies and included 31 in this paper. All studies were observational and assessed the use of AI/ML technologies on patient diagnosis. Ultrasound images were included in about half the studies. The average age of the participants in the studies was 29.
Among the 10 studies that used standardized diagnostic criteria to diagnose PCOS, the accuracy of detection ranged from 80-90%.
"Across a range of diagnostic and classification modalities, there was an extremely high performance of AI/ML in detecting PCOS, which is the most important takeaway of our study," said Shekhar.
The authors note that AI/ML-based programs have the potential to significantly enhance our capability to identify women with PCOS early, with associated cost savings and a reduced burden of PCOS on patients and on the health system.
Follow-up studies with robust validation and testing practices will allow for the smooth integration of AI/ML for chronic health conditions.
Several NIEHS clinical studies focus on understanding and detecting PCOS. Learn more at https://joinastudy.niehs.nih.gov.
Grants: This work was supported by the Intramural Research Program of the NIH/National Institute of Environmental Health Sciences (ZIDES102465 and ZIDES103323).
About the National Institute of Environmental Health Sciences (NIEHS): NIEHS supports research to understand the effects of the environment on human health and is part of the National Institutes of Health. For more information on NIEHS or environmental health topics, visit https://www.niehs.nih.gov or subscribe to a news list.
About the National Institutes of Health (NIH): NIH, the nation's medical research agency, includes 27 Institutes and Centers and is a component of the U.S. Department of Health and Human Services. NIH is the primary federal agency conducting and supporting basic, clinical, and translational medical research, and is investigating the causes, treatments, and cures for both common and rare diseases. For more information about NIH and its programs, visit http://www.nih.gov.
NIH…Turning Discovery Into Health
Barrera FJ, Brown EDL, Rojo A, Obeso J, Plata H, Lincango EP, Terry N, Rodríguez-Gutiérrez R, Hall JE, Shekhar S, 2023. Application of Machine Learning and Artificial Intelligence in the Diagnosis and Classification of Polycystic Ovarian Syndrome: A Systematic Review. Frontiers in Endocrinology. https://www.frontiersin.org/articles/10.3389/fendo.2023.1106625/full
###
See the original post here:
AI and machine learning can successfully diagnose polycystic ovary ... - National Institutes of Health (.gov)
Neurosnap: Revolutionizing Biology Research with Machine Learning – Yahoo Finance
Wilmington, Delaware--(Newsfile Corp. - September 22, 2023) - In a breakthrough development for the field of computational biology, a new startup named Neurosnap is making waves with its innovative approach to incorporating machine learning into the world of biology research. By providing easy access to state-of-the-art bioinformatic tools and models without requiring any coding or technical expertise, Neurosnap aims to accelerate scientific discoveries and advancements in synthetic biology, pharmaceuticals, and medical research.
The marriage of machine learning and biology has shown great promise in recent years, with tools like AlphaFold2 ushering in a new era of possibilities for biologists. However, such cutting-edge tools have often been inaccessible to many researchers, primarily due to the complexity involved in integrating them into their existing pipelines. Neurosnap seeks to address this crucial barrier by offering a fully end-to-end suite of machine learning tools that are user-friendly and seamlessly integrate with a variety of research pipelines.
"Neurosnap was born out of the belief that computational biology has the potential to transform the way we understand and approach complex biological processes," says Keaun Amani, the CEO and co-founder of Neurosnap. "Our mission is to empower researchers from diverse backgrounds to harness the power of machine learning without the burden of technical intricacies. We envision a future where groundbreaking discoveries are made possible by democratizing access to advanced bioinformatic tools."
One of the key features that sets Neurosnap apart is its user-friendly interface, allowing researchers with little to no prior experience in machine learning to leverage its capabilities effectively. By eliminating the need for coding expertise, the platform ensures that biologists can focus on their core research questions and spend less time grappling with the complexities of data analysis.
Researchers using Neurosnap can now explore intricate biological phenomena, analyze complex genomic datasets, and predict protein structures with ease. The platform leverages the latest advancements in machine learning algorithms to assist biologists in unraveling the mysteries of life more efficiently than ever before.
The potential impact of Neurosnap on the pharmaceutical and medical fields is particularly promising. By enabling researchers to identify potential drug candidates, predict protein interactions, and analyze disease-related pathways at a faster pace, the platform holds the potential to accelerate drug discovery and development timelines significantly.
With the launch of Neurosnap, the future of computational biology looks brighter than ever. As researchers from diverse backgrounds unite under a common platform, the potential for scientific advancements in various fields of biology becomes limitless. By democratizing access to cutting-edge machine learning tools, Neurosnap is poised to revolutionize the way biological research is conducted.
Name: Keaun Amani
Email: hello@neurosnap.ai
To view the source version of this press release, please visit https://www.newsfilecorp.com/release/181481
Read more here:
Neurosnap: Revolutionizing Biology Research with Machine Learning - Yahoo Finance