Category Archives: Machine Learning

Big data and machine learning can usher in a new era of policymaking – Harvard Kennedy School

Q: What are the challenges to undertaking data analytical research? And where have these modes of analysis been successful?

The challenges are many, especially when you want to make a meaningful impact in one of the most complex sectors: health care. The health care sector involves a variety of stakeholders, especially in the United States, where health care is extremely decentralized yet highly regulated, for example in the areas of data collection and data use. Analytics-based solutions that help one part of this sector might harm other parts, making globally optimal solutions extremely difficult to find. Therefore, finding data-driven approaches that can have public impact is not a walk in the park.

Then there are various challenges in implementation. In my lab, we can design advanced machine learning and AI algorithms that have outstanding performance. But if they are not implemented in practice, or if the recommendations they provide are not followed, they won't have any tangible impact.

In some of our recent experiments, for example, we found that the algorithms we had designed outperformed expert physicians in one of the leading U.S. hospitals. Interestingly, when we provided physicians with our algorithm-based recommendations, they did not put much weight on the advice they got from the algorithms, and ignored it when treating patients, even though they knew the algorithm most likely outperformed them.

We then studied ways of removing this obstacle. We found that combining human expertise with the recommendations provided by algorithms not only made it more likely for the physicians to put weight on the algorithms' advice, but also produced recommendations superior to both the best algorithms and the human experts alone.
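The interview does not say how the hybrid recommendations were actually constructed; a minimal sketch of one common approach, blending a clinician's probability estimate with a model's through a tunable weight (the `model_weight` value here is purely illustrative, not the lab's method), might look like this:

```python
def combine(human_prob, model_prob, model_weight=0.6):
    """Blend a clinician's risk estimate with a model's.

    model_weight is a hypothetical tuning knob; in practice it would be
    chosen by validating the blended predictions against outcomes.
    """
    return model_weight * model_prob + (1 - model_weight) * human_prob

# A confident model (0.9) pulls a hesitant expert (0.5) upward,
# while the expert still moderates the final recommendation.
blended = combine(human_prob=0.5, model_prob=0.9)
print(blended)
```

When the errors of the human and the model are not strongly correlated, such a blend can beat either input on its own, which is consistent with the hybrid result described in the interview.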

We have also observed similar challenges at the policy level. For example, we have developed advanced algorithms trained on large-scale data that could help the Centers for Disease Control and Prevention improve its opioid-related policies. The opioid epidemic caused more than 556,000 deaths in the United States between 2000 and 2020, and yet the authorities still do not have a complete understanding of what can be done to effectively control this deadly epidemic. Our algorithms have produced recommendations we believe are superior to the CDC's. But, again, a significant challenge is to make sure the CDC and other authorities listen to these superior recommendations.

I do not want to imply that policymakers or other authorities are always against these algorithm-driven solutions (some are more eager than others), but I believe the helpfulness of algorithms is consistently underrated and often ignored in practice.

Q: How do you think about the role of oversight and regulation in this field of new technologies and data analytical models?

Imposing appropriate regulations is important. There is, however, a fine line: while new tools and advancements should be guarded against misuses, the regulations should not block these tools from reaching their full potential.

As an example, in a paper that we published in the National Academy of Medicine in 2021, we discussed how the use of mobile health (mHealth) interventions, mainly enabled through advanced algorithms and smart devices, has been rapidly increasing worldwide as health care providers, industry, and governments seek more efficient ways of delivering health care. Despite the technological advances, increasingly widespread adoption, and endorsements from leading voices in the medical, government, financial, and technology sectors, these technologies have not reached their full potential.

Part of the reason is that there are scientific challenges that need to be addressed. For example, as we discuss in our paper, mHealth technologies need to make use of more advanced algorithms and statistical experimental designs in deciding how best to adapt the content and delivery timing of a treatment to the user's current context.

However, various regulatory challenges remain, such as how best to protect user data. The Food and Drug Administration in a 2019 statement encouraged the development of mobile medical apps (MMAs) that improve health care, but also emphasized its public health responsibility to oversee the safety and effectiveness of medical devices, including mobile medical apps. Balancing between encouraging new developments and ensuring that such developments abide by the well-known principle of "do no harm" is not an easy regulatory task.

In the end, what is needed is two-fold: (a) advancements in the underlying science, and (b) appropriately balanced regulations. If these are met, the possibilities for using advanced analytics methods to solve our lingering societal problems are endless.



Putting hydrogen on solid ground: Simulations with a machine … – Science Daily

Hydrogen, the most abundant element in the universe, is found everywhere from the dust filling most of outer space to the cores of stars to many substances here on Earth. This would be reason enough to study hydrogen, but its individual atoms are also the simplest of any element with just one proton and one electron. For David Ceperley, a professor of physics at the University of Illinois Urbana-Champaign, this makes hydrogen the natural starting point for formulating and testing theories of matter.

Ceperley, also a member of the Illinois Quantum Information Science and Technology Center, uses computer simulations to study how hydrogen atoms interact and combine to form different phases of matter like solids, liquids, and gases. However, a true understanding of these phenomena requires quantum mechanics, and quantum mechanical simulations are costly. To simplify the task, Ceperley and his collaborators developed a machine learning technique that allows quantum mechanical simulations to be performed with an unprecedented number of atoms. They reported in Physical Review Letters that their method found a new kind of high-pressure solid hydrogen that past theory and experiments missed.

"Machine learning turned out to teach us a great deal," Ceperley said. "We had been seeing signs of new behavior in our previous simulations, but we didn't trust them because we could only accommodate small numbers of atoms. With our machine learning model, we could take full advantage of the most accurate methods and see what's really going on."

Hydrogen atoms form a quantum mechanical system, but capturing their full quantum behavior is very difficult even on computers. A state-of-the-art technique like quantum Monte Carlo (QMC) can feasibly simulate hundreds of atoms, while understanding large-scale phase behaviors requires simulating thousands of atoms over long periods of time.

To make QMC more versatile, two former graduate students, Hongwei Niu and Yubo Yang, developed a machine learning model trained with QMC simulations capable of accommodating many more atoms than QMC by itself. They then used the model with postdoctoral research associate Scott Jensen to study how the solid phase of hydrogen that forms at very high pressures melts.

The three of them were surveying different temperatures and pressures to form a complete picture when they noticed something unusual in the solid phase. While the molecules in solid hydrogen are normally close-to-spherical and form a configuration called hexagonal close packed -- Ceperley compared it to stacked oranges -- the researchers observed a phase where the molecules become oblong figures -- Ceperley described them as egg-like.

"We started with the not-too-ambitious goal of refining the theory of something we know about," Jensen recalled. "Unfortunately, or perhaps fortunately, it was more interesting than that. There was this new behavior showing up. In fact, it was the dominant behavior at high temperatures and pressures, something there was no hint of in older theory."

To verify their results, the researchers trained their machine learning model with data from density functional theory, a widely used technique that is less accurate than QMC but can accommodate many more atoms. They found that the simplified machine learning model perfectly reproduced the results of standard theory. The researchers concluded that their large-scale, machine learning-assisted QMC simulations can account for effects and make predictions that standard techniques cannot.

This work has started a conversation between Ceperley's collaborators and some experimentalists. High-pressure measurements of hydrogen are difficult to perform, so experimental results are limited. The new prediction has inspired some groups to revisit the problem and more carefully explore hydrogen's behavior under extreme conditions.

Ceperley noted that understanding hydrogen under high temperatures and pressures will enhance our understanding of Jupiter and Saturn, gaseous planets primarily made of hydrogen. Jensen added that hydrogen's "simplicity" makes the substance important to study. "We want to understand everything, so we should start with systems that we can attack," he said. "Hydrogen is simple, so it's worth knowing that we can deal with it."

This work was done in collaboration with Markus Holzmann of Univ. Grenoble Alpes and Carlo Pierleoni of the University of L'Aquila. Ceperley's research group is supported by the U.S. Department of Energy, Office of Basic Energy Sciences, Computational Materials Sciences program under Award DE-SC0020177.


David Higginson of Phoenix Children’s Hospital on using machine … – Chief Healthcare Executive

Chicago - David Higginson has some advice for hospitals and health systems looking to use machine learning.

"Get started," he says.

Higginson, the chief innovation officer of Phoenix Children's Hospital, offered a presentation on machine learning at the HIMSS Global Health Conference & Exhibition. He described how machine learning models helped identify children with malnutrition and people who would be willing to donate to the hospital's foundation.

After the session, he spoke with Chief Healthcare Executive and offered some guidance for health systems looking to do more with machine learning.

"I would say get started by thinking about how you're going to use it first," Higginson says. "Don't get tricked into actually building the model."

"Think about the problem, frame it up as a prediction problem," he says, while adding that not all problems can be framed that way.

"But if you find one that is a really nice prediction problem, ask the operators, the people that will use it every day: 'Tell me how you'd use this,'" Higginson says. "And work with them on their workflow and how it's going to change the way they do their job.

"And when they can see it and say, 'OK, I'm excited about that, I can see how it's going to make a difference,' then go and build it," he says. "You'll have more motivation to do it, you'll understand what the goal is. But when you finally do get it, you'll know it's going to be used."


Machine Learning Can Help to Flag Risky Messages on Instagram … – Drexel University

As regulators and providers grapple with the dual challenges of protecting younger social media users from harassment and bullying, while also taking steps to safeguard their privacy, a team of researchers from four leading universities has proposed a way to use machine learning technology to flag risky conversations on Instagram without having to eavesdrop on them. The discovery could open opportunities for platforms and parents to protect vulnerable, younger users, while preserving their privacy.

The team, led by researchers from Drexel University, Boston University, Georgia Institute of Technology and Vanderbilt University, recently published its timely work, an investigation into what types of data input (such as metadata, text, and image features) could be most useful for machine learning models to identify risky conversations, in the Proceedings of the Association for Computing Machinery's Conference on Human-Computer Interaction. Their findings suggest that risky conversations can be detected from metadata characteristics, such as conversation length and how engaged the participants are.

Their efforts address a growing problem on the most popular social media platform among 13-to-21-year-olds in America. Recent studies have shown that harassment on Instagram is leading to a dramatic uptick in depression among its youngest users, particularly a rise in mental health and eating disorders among teenage girls.

"The popularity of a platform like Instagram among young people, precisely because of how it makes its users feel safe enough to connect with others in a very open way, is very concerning in light of what we now know about the prevalence of harassment, abuse, and bullying by malicious users," said Afsaneh Razi, PhD, an assistant professor in Drexel's College of Computing & Informatics, who was a co-author of the research.

At the same time, platforms are under increasing pressure to protect their users' privacy in the aftermath of the Cambridge Analytica scandal and the European Union's precedent-setting privacy protection laws. As a result, Meta, the company behind Facebook and Instagram, is rolling out end-to-end encryption of all messages on its platforms. This means that the content of the messages is technologically secured and can only be accessed by the people in the conversation.

But this added level of security also makes it more difficult for the platforms to employ automated technology to detect and prevent online risks, which is why the group's system could play an important role in protecting users.

"One way to address this surge in bad actors, at a scale that can protect vulnerable users, is automated risk-detection programs," Razi said. "But the challenge is designing them in an ethical way that enables them to be accurate, but also non-privacy-invasive. It is important to put younger generations' safety and privacy as a priority when implementing security features such as end-to-end encryption in communication platforms."

The system developed by Razi and her colleagues uses machine learning algorithms in a layered approach that creates a metadata profile of a risky conversation (it is likely to be short and one-sided, for example), combined with context clues, such as whether images or links are sent. In their testing, the program was 87% accurate at identifying risky conversations using just these sparse and anonymous details.

To train and test the system, the researchers collected and analyzed more than 17,000 private chats from 172 Instagram users ages 13-21 who volunteered their conversations more than 4 million messages in all to assist with the research. The participants were asked to review their conversations and label each one as safe or unsafe. About 3,300 of the conversations were flagged as unsafe and additionally categorized in one of five risk categories: harassment, sexual message/solicitation, nudity/porn, hate speech and sale or promotion of illegal activities.

Using a random sampling of conversations from each category, the team used several machine learning models to extract the set of metadata features (things like average length of conversation, number of users involved, number of messages sent, response time, number of images sent, and whether or not participants were connected or mutually connected to others on Instagram) most closely associated with risky conversations.

This data enabled the team to create a program that can operate using only metadata, some of which would be available if Instagram conversations were end-to-end encrypted.
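To illustrate why metadata alone can be enough, here is a hypothetical sketch (not the paper's actual model) that derives two of the features named above, conversation length and one-sidedness, from sender/timestamp pairs without ever reading message content; the thresholds are invented for the example, whereas the study fit its models to labeled data:

```python
def metadata_features(msgs):
    """msgs: list of (sender, timestamp) pairs; content is never inspected."""
    senders = [s for s, _ in msgs]
    n = len(msgs)
    # Share of messages from the most active participant (one-sidedness).
    dominance = max(senders.count(s) for s in set(senders)) / n
    return {"n_messages": n, "dominance": dominance}

def flag_risky(feats, max_len=10, min_dominance=0.8):
    # Short, one-sided conversations get flagged; both thresholds
    # are illustrative placeholders.
    return feats["n_messages"] <= max_len and feats["dominance"] >= min_dominance

# Nine messages from one party, one reply: short and one-sided.
short_one_sided = [("a", t) for t in range(9)] + [("b", 9)]
print(flag_risky(metadata_features(short_one_sided)))
```

Because only senders and timestamps are consumed, a detector of this shape could in principle run even when message bodies are end-to-end encrypted.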

"Overall, our findings open up interesting opportunities for future research and implications for the industry as a whole," the team reported. "First, performing risk detection based on metadata features alone allows for lightweight detection methods that do not require the expensive computation involved in analyzing text and images. Second, developing systems that do not analyze content eases some of the privacy and ethical issues that arise in this space, ensuring user protection."

To improve upon it (making a program that could be even more effective and able to identify the specific risk type, if users or parents opt into sharing additional details of the conversations for security purposes), the team performed a similar machine learning analysis of linguistic cues and image features using the same dataset.

In this instance, advanced machine learning programs combed through the text of the conversations and, knowing which conversations the users had identified as unsafe, pinpointed the words and combinations of words that are prevalent enough in risky conversations that they could be used to trigger a flag.

For analysis of the images and videos, which are central to communication on Instagram, the team used a set of programs: one that can identify and extract text on top of images and videos, and another that can look at and generate a caption for each image. Then, using a similar textual analysis, the machine learning programs again created a profile of words indicative of images and videos shared in a risky conversation.

Trained with these risky-conversation characteristics, the machine learning system was put to the test by analyzing a random sampling of conversations from the larger dataset that had not been used in the profile-generation or training process. Through a combination of analyses of metadata traits as well as linguistic cues and image features, the program was able to identify risky conversations with accuracy as high as 85%.

"Metadata can provide high-level cues about conversations that are unsafe for youth; however, the detection and response to the specific type of risk require the use of linguistic cues and image data," they report. "This finding raises important philosophical and ethical questions in light of Meta's recent push towards end-to-end encryption, as such contextual cues would be useful for well-designed risk mitigation systems that leverage AI."

The researchers acknowledge that there are limitations to their research because it only looked at messages on Instagram, though the system could be adapted to analyze messages on other platforms that are subject to end-to-end encryption. They also note that the program could become even more accurate if its training were to continue with a larger sampling of messages.

But they note that this work shows that effective automated risk detection is possible, and that while protecting privacy is a valid concern, there are ways to make progress, and these steps should be pursued in order to protect the most vulnerable users of these popular platforms.

"Our analysis provides an important first step to enable automated (machine learning based) detection of online risk behavior going forward," they write. "Our system is based on reactive characteristics of the conversation; however, our research also paves the way for more proactive approaches to risk detection, which are likely to be more translatable in the real world given their rich ecological validity."

This research was funded by the U.S. National Science Foundation and the William T. Grant Foundation.

Shiza Ali, Chen Ling and Gianlucca Stringhini, from Boston University; Seunghyun Kim and Munmun De Choudhury, from Georgia Institute of Technology; and Ashwaq Alsoubai and Pamela J. Wisniewski, from Vanderbilt University, contributed to this research.

Read the full paper here: https://dl.acm.org/doi/10.1145/3579608


What is one downside to deep learning? – Rebellion Research

What is one downside to deep learning?

Deep learning is a subset of machine learning that involves training artificial neural networks to recognize patterns in data. While deep learning has shown remarkable success in recent years, enabling breakthroughs in fields such as computer vision, natural language processing, and robotics, it is not without its flaws. One of the major challenges facing deep learning is its slow adaptability to changing environments and new data.

Deep learning algorithms typically train on large datasets to recognize patterns in the data. These patterns can then be used to make predictions or to classify new data that the model has not seen before. However, the performance of deep learning models usually deteriorates over time, as the data they were trained on becomes outdated or no longer reflects real-world conditions. This is known as the problem of concept drift: the statistical properties of the data change over time, degrading the performance of the model.
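A toy numerical illustration of concept drift (the data points and decision boundary are invented for the example): a classifier keeps the threshold it learned from old data while the true boundary moves, so its accuracy drops even though the model itself is unchanged.

```python
def classify(x, threshold=0.5):
    # Model "trained" in an era when positives sat above 0.5.
    return x > threshold

def accuracy(samples, threshold):
    return sum(classify(x, threshold) == label for x, label in samples) / len(samples)

# Training-era data: the boundary really is at 0.5.
old = [(0.2, False), (0.4, False), (0.6, True), (0.8, True)]
# After drift the boundary has shifted to ~0.7: mid-range points that
# used to be positive are now negative, but the model hasn't changed.
new = [(0.2, False), (0.6, False), (0.65, False), (0.8, True)]

print(accuracy(old, 0.5), accuracy(new, 0.5))  # 1.0 before drift, 0.5 after
```

The same mechanism, with far more parameters, is what silently degrades a deployed deep network when the input distribution shifts.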

Several techniques have been proposed to address the problem of concept drift in deep learning. One approach uses a continuous learning framework, in which the model is updated over time with new data to prevent the accumulation of errors due to concept drift. Another uses transfer learning, in which a pre-trained model is fine-tuned on new data to adapt to the changing environment.
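A minimal sketch of the continuous-learning idea, using a toy one-dimensional "model" rather than a deep network: the decision threshold is refit from a sliding window of recent labeled points, so the boundary tracks drift instead of staying frozen at training time.

```python
from collections import deque

class ContinualThreshold:
    """Toy continual learner for 1-D data: keep a sliding window of
    recent labeled points and place the threshold midway between the
    highest recent negative and the lowest recent positive."""

    def __init__(self, window=100):
        self.buf = deque(maxlen=window)  # old points fall out automatically

    def update(self, x, label):
        self.buf.append((x, label))

    def threshold(self):
        pos = [x for x, y in self.buf if y]
        neg = [x for x, y in self.buf if not y]
        if not pos or not neg:
            return 0.5  # default until both classes have been seen
        return (min(pos) + max(neg)) / 2

model = ContinualThreshold(window=4)
for x, y in [(0.2, False), (0.6, False), (0.8, True), (0.9, True)]:
    model.update(x, y)
print(model.threshold())  # boundary has moved with the recent data
```

For a real deep network the analogue is periodically fine-tuning on recent labeled batches, which is far more expensive; that cost is exactly the adaptability problem this article describes.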

Despite these approaches, deep learning models still struggle to adapt quickly to new data and changing environments. This is due in part to the fact that deep learning models are highly parameterized and require large amounts of data to learn complex representations of the input. As a result, updating a model with new data can be computationally expensive and time-consuming, making it difficult to adapt quickly to changing conditions.

In conclusion, the slow adaptability of deep learning models to changing environments and new data is a major flaw, and one that needs to be addressed to enable their wider adoption in real-world applications. While techniques such as continuous learning and transfer learning show promise, more research is needed to develop more efficient and effective approaches to this challenge. By addressing this flaw, deep learning can continue to revolutionize fields ranging from healthcare to finance to transportation, enabling new breakthroughs and transforming our world.

What is an example of concept drift?

Deep Learning 101: Introduction [Pros, Cons & Uses] (v7labs.com)

Advantages of Deep Learning | disadvantages of Deep Learning (rfwireless-world.com)

Pros and Cons of Deep Learning Pythonista Planet

Advantages and Disadvantages of Deep Learning | Analytics Steps

4 Disadvantages of Neural Networks & Deep Learning | Built In



Data Science Salon Brings the New York City AI and Machine … – GlobeNewswire

NEW YORK, April 19, 2023 (GLOBE NEWSWIRE) -- Data Science Salon, the most diverse data science community in the US, is excited to announce two all-day events in NYC, NY on June 7th and 8th, 2023 focusing on state-of-the-art AI and machine learning applications.

Only six months after the last Data Science Salon (DSS) in the Big Apple, DSS will be back in New York City with two events in the first week of June. The event on June 7th will be held at the S&P Global Ratings Headquarters in Manhattan and bring together local industry leaders from finance and technology data science fields. The second event on June 8th will focus on AI and machine learning applications in media and advertising and takes place at Blender Workspace in the heart of NoMad.

Both events include a combination of talks, panel conversations, lots of time for networking, and an optional expo in a casual environment. The two days bring together industry leaders and specialists face-to-face to share actionable insights, educate each other about innovative solutions in artificial intelligence, machine learning, and predictive analytics, and build acceptance around best practices. Data Science Salon attendees are executives, senior data science practitioners, data science managers, analysts, and engineering professionals. 150 attendees are expected to attend each event day, and over one thousand people will tune in virtually.

The event lineup features 20 speakers per day, including data leaders from Morgan Stanley, The Federal Reserve Bank of New York, S&P Global, T. Rowe Price, Freddie Mac, Barclays Investment Bank on June 7th and experts from Penguin Random House, BuzzFeed, Meta, Moet & Hennessy, and Parrot Analytics, and many more on June 8th.

Some topics covered at Data Science Salon NYC include:

"Over the years Data Science Salon has grown into an amazing community of like-minded practitioners across multiple domains. We learn from each other different applications and techniques that normally we would not have seen within our own industry. Such a strong community of smart applied data scientists within an open and collaborative setting!" said Moody Hadi, Head of Credit Analytics New Product Development, S&P Global.

Visit the Data Science Salon NYC website to view the complete conference agenda and register for one or both events.

The Data Science Salon (DSS) is a unique vertical focused conference which grew into a diverse community of senior data science, machine learning and other technical specialists. The community gathers face-to-face and virtually to educate each other, illuminate best practices and innovate new solutions in a casual atmosphere; you can also tune into the DSS webinars, Meetups, and podcast episodes. Learn more about Data Science Salon on the DSS website.

Contacts:

For media inquiries: Esther Rietmann, +1 305-215-4527, esther@formulatedby.com

For sponsorship inquiries: Anna Anisin, +1 305-215-4527, anna@formulatedby.com


Microsoft Readies AI Chip as Machine Learning Costs Surge – Slashdot

After placing an early bet on OpenAI, the creator of ChatGPT, Microsoft has another secret weapon in its arsenal: its own artificial intelligence chip for powering the large-language models responsible for understanding and generating humanlike language. The Information: The software giant has been developing the chip, internally code-named Athena, since as early as 2019, according to two people with direct knowledge of the project. The chips are already available to a small group of Microsoft and OpenAI employees, who are testing the technology, one of them said. Microsoft is hoping the chip will perform better than what it currently buys from other vendors, saving it time and money on its costly AI efforts. Other prominent tech companies, including Amazon, Google and Facebook, also make their own in-house chips for AI. The chips -- which are designed for training software such as large-language models, along with supporting inference, when the models use the intelligence they acquire in training to respond to new data -- could also relieve a shortage of the specialized computers that can handle the processing needed for AI software. That shortage, reflecting the fact that primarily just one company, Nvidia, makes such chips, is felt across tech. It has forced Microsoft to ration its computers for some internal teams, The Information has reported.


Organic chemists should place their trust in machine learning’s black … – Chemistry World

Classically, a black box is a system whose inputs are controlled or known, and whose outputs can be harvested, but whose internal workings remain a mystery. Take Google search: we may know roughly how it works, but details of the search algorithm are kept secret from the public. And when organic chemistry meets computing, we sometimes feel we want to know everything; black boxes can be seen as a frustrating and distrusted tool.

It's fair to say that, sometimes, comprehensive understanding lets us control all variables to avoid problems. As a student, I expressed concern over the results of a computational exercise, only to be dismissed with "but the computer says this is what you have to do". Three months of hard work later, we synthetic chemists were vindicated when it was found that, thanks to a computing error in a system we couldn't access directly, we had indeed been working on the wrong compounds for all that time. I have had a rigorous dose of scepticism for methods outside our control ever since!

Although caution is very well advised, we should also remember undergraduate thermodynamics, where we learn to deliberately treat chemical systems as black boxes, their complexity reduced to just a few fundamental parameters; otherwise we are unable to compute their properties. For the most complex systems, a clear understanding of the system's workings needs to be a sufficient substitute for knowing the exact pathway to our answer. I am particularly thinking of machine learning methods: systems whose contents are for practical purposes unknowable, and whose reasoning may not make sense to human users. However, making a leap of faith is highly uncomfortable for organic chemists, who are used to having authority and reasoning even over atomic structure. Although we will never hold every detail of an individual neural net in our mind's eye, we can learn how it works, how it was created, and which parameters it has been allowed to exploit. An elementary understanding of the tools and some trust in expert collaborators allow us to reduce concerns about abstracted methods.

On some level, humans abstract almost everything we use. Every time you use an LCMS as a synthetic chemist, you don't need to mentally run through a back-to-basics understanding of the relative polarities, UV absorption and ionisability of your substrates. Simply referring to your compound as a tertiary aniline provides sufficient information for an experienced user to expect a certain outcome. These abstractions might even be directly hard-coded; for example, you may have polar and apolar generic methods set up on the instrument. There are countless popular examples of more readily recognising a concept when we give a name to it (perhaps name reactions are one case), as well as negative examples, such as seeing someone who looks like a yob and falsely making a mental connection to troublemaking. The audience's capacity for abstraction is also a useful tool when presenting complex results: data storytelling allows a presenter to build individual bricks of data into conceptual structures, helping the audience to feel they have fewer individual concepts to wrap their heads around.

Abstractions leap to human non-interpretability when they involve computer-speed calculations or too many variables. Luckily, computers excel at these, but it can be a shock when the methods no longer fit inside a human brain. I visualise these superhuman helpers as being another layer on top of the brain, much like a laptop farming out calculations to a supercomputer cluster and then retrieving the results.

We organic chemists are not actually capable of understanding everything

And this is the advantage: some systems we don't understand really are better than us at what they do. Although machine learning is still an emerging tool when applied to organic chemistry, particularly due to our relatively small datasets, its power is clear from our everyday use of facial recognition on our devices to voice recognition on our home assistants. (That said, machine learning is subject to the same biases as its training: it can overuse a go-to catalyst, or, more alarmingly, struggle more with darker skin tones in human images.) Something that may not be clear to those outside large companies is how frequently machine learning proves useful within chemistry, too. At the end of the day, what matters is whether the results verifiably work, rather than how we arrived there, although that may be hard to swallow. We have to make a leap of faith and remember that we organic chemists are not actually capable of understanding everything.

The famously not-so-humble world of organic chemistry is being damaged by our egos and our lack of willingness to submit to the higher power of abstraction. We could make our field stronger and more useful, and as any computational chemist will tell you, black-box processes need not come at the cost of overall insight.

Link:
Organic chemists should place their trust in machine learning's black ... - Chemistry World

Study shows how machine learning can identify social grooming behavior from acceleration signals in wild baboons – Phys.org


Scientists from Swansea University and the University of Cape Town have tracked social grooming behavior in wild baboons using collar-mounted accelerometers.

The study, published in the journal Royal Society Open Science, is the first to successfully calculate grooming budgets using this method, opening up a whole avenue of future research directions.

Using collars containing accelerometers built at Swansea University, the team recorded the activities of baboons in Cape Town, South Africa, identifying and quantifying general activities such as resting, walking, foraging and running, and also the giving and receiving of grooming.

A supervised machine learning algorithm was trained on acceleration data matched to baboon video recordings and successfully recognized the giving and receiving of grooming with high overall accuracy.

The team then applied their machine learning model to acceleration data collected from 12 baboons to quantify grooming and other behaviors continuously throughout the day and night-time.
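The pipeline described above (windowing tri-axial acceleration, extracting summary features, and training a supervised classifier against video-labelled behaviours) can be sketched as follows. Note that the window length, feature set, classifier choice, and synthetic data here are illustrative assumptions, not the authors' exact method:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def window_features(acc, window=50):
    """Split tri-axial acceleration (n_samples, 3) into fixed-length
    windows and compute simple per-axis summary statistics."""
    n_windows = len(acc) // window
    feats = []
    for i in range(n_windows):
        w = acc[i * window:(i + 1) * window]
        feats.append(np.concatenate([
            w.mean(axis=0),                          # mean per axis
            w.std(axis=0),                           # variability per axis
            np.abs(np.diff(w, axis=0)).mean(axis=0), # jerkiness per axis
        ]))
    return np.array(feats)

# Synthetic stand-in for collar data: each behaviour gets a different
# movement intensity. In the real study, labels came from matched video.
behaviours = ["rest", "walk", "groom_give", "groom_receive"]
X, y = [], []
for label, scale in zip(behaviours, [0.1, 1.0, 0.4, 0.25]):
    acc = rng.normal(0.0, scale, size=(5000, 3))
    feats = window_features(acc)
    X.append(feats)
    y.extend([label] * len(feats))
X = np.vstack(X)

# Hold out windows for evaluation, then train the classifier.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, stratify=y)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"held-out accuracy: {accuracy_score(y_te, clf.predict(X_te)):.2f}")
```

Once trained and validated, such a model can be run over the full day-and-night acceleration stream to estimate how much time each animal spends on each behaviour, which is how a continuous grooming budget would be derived.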

Lead author Dr. Charlotte Christensen of the University of Zurich said, "We were unsure whether a sensor on a collar would be able to detect a behavior that involves such subtle movements, but it has worked. Our findings have important implications for the study of social behavior in animals, particularly in non-human primates."

Social grooming is one of the most important social behaviors in primates and, since the 1950s, has become a central focus of research in primatology.

Previously, scientists have relied on direct observation to determine how much primates groom each other. While direct observation provides systematic data, the resulting records are sparse and non-continuous, with the added limitation that researchers can only watch a few animals at a time.

Technology like that used in this study is revolutionizing the field of animal behavior research and opening up exciting new areas of investigation.

Senior author Dr. Ines Fürtbauer of Swansea University said, "This is something our team have wanted to do for years. The ability to collect and analyze continuous grooming data in wild populations will allow researchers to re-examine long-standing questions and address new ones regarding the formation and maintenance of social bonds, as well as the mechanisms underpinning the sociality-health-fitness relationship."

More information: Charlotte Christensen et al, Quantifying allo-grooming in wild chacma baboons (Papio ursinus) using tri-axial acceleration data and machine learning, Royal Society Open Science (2023). DOI: 10.1098/rsos.221103

Journal information: Royal Society Open Science

More:
Study shows how machine learning can identify social grooming behavior from acceleration signals in wild baboons - Phys.org

Using machine learning to find reliable and low-cost solar cells … – eeNews Europe


See the original post here:
Using machine learning to find reliable and low-cost solar cells ... - eeNews Europe