Category Archives: Machine Learning

AI and Machine Learning: Boosting Indian space exploration to new heights – The Financial Express

By Subhashis Kar

India's space program has the potential to achieve unparalleled heights, thanks to a stunning confluence of cutting-edge technology and visionary ambition, fuelled by the unwavering synergy between Artificial Intelligence (AI) and Machine Learning (ML). The Indian Space Research Organisation (ISRO) is embarking on a new age of space exploration, employing AI and ML to explore the universe's unknown frontiers.

From the successful launch of Chandrayaan-3, India's moon expedition, to the Mars Orbiter Mission (Mangalyaan), which made India the first Asian nation to reach Mars, India's voyage into space has been characterized by a succession of momentous milestones. The path ahead, however, is filled with even bigger challenges and opportunities, which is where AI and ML come into play. Precision and efficiency are crucial to India's space efforts. Whether it's sending satellites into orbit or researching distant celestial entities, every mission requires rigorous preparation and execution. AI and ML, with their ability to analyze data and recognize patterns, are essential tools in this sector. They enable ISRO scientists to optimize trajectories, anticipate ideal launch windows, and even simulate mission scenarios in order to increase the likelihood of success.

One of the most significant uses of artificial intelligence in space exploration is autonomous navigation. Traditionally, spacecraft require regular human involvement for course corrections and changes. With AI-guided navigation systems, spacecraft can make real-time choices based on sensor data, ensuring they stay on course even when millions of kilometers from Earth. This not only cuts mission expenses but also increases spacecraft longevity. In addition, machine learning algorithms are transforming our knowledge of the universe. Telescopes and observatories outfitted with machine learning algorithms can filter through massive volumes of data to find astronomical objects such as exoplanets and cosmic occurrences that would otherwise go unnoticed. This not only increases our understanding of the cosmos, but also assists in the discovery of potentially habitable planets outside our solar system.

In recent years, India has also experimented with reusable space technologies. AI plays a critical role in improving launch vehicle recovery and refurbishment, making it economically feasible to send spacecraft into space more frequently. By learning from each mission's performance and making improvements, AI helps make launch vehicle reusability a reality, lowering the overall cost of space exploration. Furthermore, the introduction of AI and ML in India's space program goes beyond hardware and mission planning. It has also started to change how data is examined and understood. The Indian Space Science Data Center (ISSDC) processes and categorizes massive datasets acquired from space missions using ML algorithms. This speeds up the extraction of relevant scientific insights, leading to breakthroughs in domains like astrophysics and planetary science.

International cooperation is becoming increasingly important as India expands its space capabilities. AI and machine learning have proven to be critical bridges in enabling effortless communication with space agencies and research institutes throughout the world. By standardizing data formats and analytic methodologies, these technologies guarantee a smooth interchange of information and knowledge, thrusting India even farther into the global space exploration scene.

Finally, the revolutionary potential of AI and ML is propelling India's quest for excellence in space exploration to unprecedented heights. These technologies are important assets that enable accuracy, efficiency, and creativity in many aspects of India's space program. As ISRO continues to reach for the heavens, the confluence of human brilliance and machine intelligence promises to uncover the mysteries of the universe and inspire future generations to dream beyond the sky.

The author is CEO, Techbooze

The Evolution of Machine Learning In TB Diagnostics: Unlocking Patterns and Insights – ETHealthWorld

By Raghavendra Goud Vaggu

The severity of tuberculosis (TB) makes it a troubling crisis across the globe, as it is responsible for millions of deaths worldwide. According to the World Health Organisation (WHO), TB caused 1.6 million deaths in 2021, making it the 13th leading cause of death and the second leading infectious killer after COVID-19 that year. With 10.6 million people falling ill with the disease in 2021, and patients cutting across all demographics, there is a need for vigilance.

The rise of computer-aided diagnostics has certainly added impetus to the drive for better TB diagnosis, especially because better medical imaging gives radiologists a more precise interpretation of the patient's chest, blood, spine, or brain, depending on the part of the body that is affected. One such tool is the CAD model, which offers precise diagnosis of TB cavities and clearly displays areas of interest in the chest X-ray image. This is a huge improvement on preexisting CAD systems, which could not identify TB cavities owing to anatomical structures superimposed on the lung field.

Looking ahead in TB diagnosis and treatment

Within the framework of modern TB analysis, the place of data cannot be overemphasised. Fully automated CAD systems that combine handcrafted and deep features are being actively evaluated. These systems use pre-trained CNN frameworks and supervised learning to probe the regions of the body that are of interest in acquired data, processing that data to reach a conclusive diagnosis. This also feeds the ongoing conversation about the difference between supervised and unsupervised learning, and how that choice can shape the future of TB diagnosis. More advanced CAD systems will likely emerge in the future, powered by superior AI.
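To make the supervised CNN pipeline above concrete, here is a minimal PyTorch sketch. It is purely illustrative and not from the article: the tiny inline network, input shapes, and two-class head are all assumptions; a real CAD system would fine-tune a pre-trained backbone (e.g. a ResNet) on labelled chest radiographs.

```python
import torch
import torch.nn as nn

# Tiny stand-in backbone; a real system would fine-tune a pre-trained
# network (ResNet, DenseNet, ...) on labelled chest X-rays.
class TinyTBNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(16, 2)  # 2 classes: TB-positive / TB-negative

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = TinyTBNet()
xrays = torch.randn(4, 1, 64, 64)          # batch of 4 fake grayscale X-rays
probs = torch.softmax(model(xrays), dim=1)  # per-image class probabilities
print(probs.shape)
```

Training such a model reduces to standard supervised learning: minimize cross-entropy between `probs` and radiologist-provided labels.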

Raghavendra Goud Vaggu, Global CEO, Empe Diagnostics

(DISCLAIMER: The views expressed are solely those of the author and ETHealthworld does not necessarily subscribe to them. ETHealthworld.com shall not be responsible for any damage caused to any person/organisation directly or indirectly.)

Unlocking Battery Optimization: How Machine Learning and Nanoscale X-Ray Microscopy Could Revolutionize Lithium Batteries – MarkTechPost

A groundbreaking initiative has emerged from esteemed research institutions aiming to unravel the enigmatic intricacies of lithium-based batteries. Employing an innovative approach, researchers harness machine learning to meticulously analyze X-ray videos at the pixel level, potentially revolutionizing battery research.

The challenge at the heart of this endeavor is the quest for a comprehensive understanding of lithium-based batteries, particularly those constructed with nanoparticles of the active material. These batteries are the lifeblood of modern technology, powering many devices, from smartphones to electric vehicles. Despite their ubiquity, deciphering their complex inner workings has been a persistent challenge.

The breakthrough achieved by a multidisciplinary team from MIT and Stanford lies in their ability to extract profound insights from high-resolution X-ray videos of batteries in action. Historically, these videos were a goldmine of information, but their complexity made extracting meaningful data a daunting task.

Researchers emphasize the pivotal role played by the interfaces within these batteries in controlling their behavior. This newfound understanding opens doors to engineering solutions that could enhance battery performance significantly.

Furthermore, there is a pressing need for fundamental, science-based insights to expedite advancements in battery technology. By employing image learning to dissect nanoscale X-ray movies, researchers can now access previously elusive knowledge, which is crucial for industry partners aiming to develop more efficient batteries faster.

The research methodology involved capturing detailed scanning transmission X-ray microscopy videos of lithium iron phosphate particles during the charging and discharging processes. A sophisticated computer vision model scrutinized subtle changes within these videos that lie beyond the capacity of the human eye. The ensuing results were then compared to earlier theoretical models. Among the key revelations was the discovery of a correlation between the flow of lithium ions and the thickness of the carbon coating on individual particles. This discovery provides a promising avenue for optimizing future lithium iron phosphate battery systems, ultimately enhancing battery performance.
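As a hypothetical illustration (not the authors' actual pipeline), the core idea of pixel-level video analysis can be sketched in a few lines: quantify how much each pixel changes between consecutive frames of a particle during charge/discharge, yielding a per-pixel "activity" map. All array shapes and the random stand-in video are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
video = rng.random((10, 64, 64))        # 10 fake frames of one particle

diffs = np.abs(np.diff(video, axis=0))  # per-pixel change between frames
activity_map = diffs.mean(axis=0)       # average change per pixel over the video
print(activity_map.shape)
```

In the real study, maps like this would be fed to a learned model and correlated with physical quantities such as local carbon-coating thickness.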

In summary, the collaboration between esteemed research institutions and the integration of machine learning into battery research represents a significant leap forward in our understanding of lithium-based batteries. By shining a spotlight on the interfaces and leveraging the capabilities of image learning, scientists have unearthed new possibilities for enhancing the performance and efficiency of these vital energy storage devices. This research not only propels the boundaries of battery technology but also holds the promise of ushering in more advanced and sustainable energy solutions in the not-so-distant future.

Niharika is a technical consulting intern at Marktechpost. She is a third-year undergraduate pursuing her B.Tech at the Indian Institute of Technology (IIT), Kharagpur. She is a highly enthusiastic individual with a keen interest in machine learning, data science, and AI, and an avid reader of the latest developments in these fields.

Fujitsu and the Linux Foundation Launch Fujitsu's Automated Machine Learning and AI Fairness Technologies: Pioneering Transparency, Ethics, and…

In an era marked by the rapid advancement of artificial intelligence (AI) technologies, the issues of transparency, ethics, and accessibility have taken center stage. While AI solutions have undoubtedly propelled the field forward, there remains a critical need to address issues related to fairness and accessibility. Recognizing this imperative, Fujitsu, a leading developer of AI technologies in Japan, has embarked on a groundbreaking commitment to open-source AI in collaboration with the Linux Foundation. This initiative addresses these challenges and aims to provide accessible solutions that can benefit a broader range of developers and industries.

Existing AI solutions have undoubtedly driven progress in the field, but they often fall short when addressing issues related to fairness and accessibility. Fujitsus latest endeavor, in partnership with the Linux Foundation, seeks to bridge these gaps and offer practical solutions that can empower developers and industries alike.

One of the cornerstones of this initiative is the automated machine learning project known as SapientML. This innovative project offers the capability to rapidly create highly efficient machine learning models and custom algorithms for a company's unique data. By expediting the development process and facilitating the fine-tuning of precise models, SapientML plays a pivotal role in accelerating progress in the AI field. It significantly reduces time-to-market for AI solutions, enabling companies to bring their innovations to the world more swiftly and effectively.

The second project, Intersectional Fairness, addresses a crucial aspect of AI development: mitigating biases within AI systems. This technology is designed to excel at identifying subtle biases that may emerge at the intersection of attributes like gender, age, and ethnicity. Overcoming these often overlooked biases is paramount in creating fair and ethical AI systems that serve diverse populations equitably. Intersectional Fairness technology aligns with societal values and ethical standards, ensuring that AI systems are inclusive and impartial.

The efficacy of these solutions is further underscored by their metrics, which provide tangible evidence of their capabilities. SapientML's ability to swiftly generate optimized machine learning models and tailored code has a transformative impact on AI development, offering a competitive edge in the industry. Intersectional Fairness technology, for its part, not only identifies hidden biases but also actively contributes to eliminating them, fostering the creation of AI systems that are both technologically advanced and ethically sound.

In conclusion, Fujitsu's unwavering commitment to open-source AI, in collaboration with the Linux Foundation, heralds a new era in the development of AI technologies. This initiative goes beyond simply addressing the pressing issues of transparency and fairness; it also democratizes access to cutting-edge AI technologies. As AI continues to shape our modern world, collective open-source efforts exemplify AI's immense potential to be a tool for global innovation while adhering to rigorous ethical standards. The future of AI embraces inclusivity, accessibility, and fairness for all, and Fujitsu's initiatives are leading the way toward this bright future.

These companies are hiring software engineers to work on machine … – BetaKit – Canadian Startup News

Thomson Reuters, Procom, and Ripple are looking to hire machine learning experts across the country.

To find new solutions in the real world and the crypto-world, companies are taking advantage of artificial intelligence (AI) and machine learning (ML) anywhere they can. The companies below are looking for software developers and engineers in Canada to find and implement new machine-learning applications into their platforms. Check out all the organizations recruiting across the country at Jobs.BetaKit for more opportunities.

Thomson Reuters' products include specialized software and tools for legal, tax, accounting, and compliance professionals, combined with one of the world's global news services, Reuters.

Thomson Reuters is seeking an applied machine learning scientist and a senior applied machine learning scientist to find new real-world solutions for machine learning and deliver them. The qualifications and expectations for both roles are similar, including a master's or bachelor's degree in a relevant field, but the senior role requires eight years of experience versus five for the junior role.

Hired candidates can expect a hybrid-work model out of a Toronto office, where they will formulate research and development plans in a collaborative environment.

Those interested in these positions and future ones with the company can bookmark its jobs page here.

Procom is a talent acquisition and workforce optimization firm helping clients find suitable recruits for unfilled jobs.

Currently, Procom is recruiting a senior artificial intelligence and machine learning developer for a six-month position based out of Calgary. The recruiter is looking for a candidate to support the development of AI-powered chatbots and provide technical consultation for other AI and ML initiatives if required.

Hired candidates will have a minimum of four years' experience in developing, deploying, and supporting Microsoft Azure AI and machine learning solutions such as natural language processing, classification, prediction, intelligent document processing, and advanced video analytics.

For future recruitment opportunities with Procom, pay attention to its jobs page here.

Ripple is trying to connect traditional financial entities like banks, payment providers, and corporations with emerging blockchain technologies and their users.

As part of that mission, Ripple is hiring a senior engineering manager for its data platform. The hired candidate will be managing a team that implements the data infrastructure for analytics, machine learning, and other business functions in Ripples platform. Applicants should have more than 10 years of experience in software development and more than five years of experience managing teams.

Ripple has many open job postings, including for software engineers outside of the machine learning space, here.

5 Machine Learning Algorithms Commonly Used in Python – Analytics Insight

This article gathers five machine-learning algorithms used in Python for analyzing and making predictions from data.

Machine learning algorithms are essential for deriving knowledge from data and generating predictions. There are a number of widely used machine learning algorithms in Python that offer solid tools for addressing a variety of issues. These algorithms are made to extract patterns and correlations from data, allowing computers to reason and forecast the future. This post will examine five well-known machine-learning algorithms used in Python.

1. Naive Bayes - Based on Bayes' theorem, this classification approach assumes that features are conditionally independent of one another given the class. Even when features are in fact interdependent, the algorithm treats them as unrelated. Despite this simplification, the resulting model performs admirably on enormous datasets.

2. Random Forest - An ensemble learning approach for classification, regression, and other problems that builds a collection of decision trees during the training phase. For classification, each tree votes for a class based on the object's attributes, and the class receiving the most votes is selected as the prediction.

3. Linear Regression - It predicts an outcome by taking independent variables into account. This ML technique establishes a linear relationship between independent and dependent variables, illustrating how the values of the independent variables affect the dependent variable.

4. Back-propagation - This supervised learning algorithm, used for classification and regression, adjusts the weights of the input signals so the network produces the desired output signals. Using gradient descent (the delta rule), back-propagation finds the weights that minimize the error function, reducing or eliminating prediction error.

5. KNN, or K-Nearest Neighbours - It categorizes a data point by analyzing the labels of the data points surrounding it and predicting from them. KNN is used for both classification and regression tasks, and as a supervised learning method it can also identify patterns in data and detect anomalies.
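The five algorithm families above can be tried in a few lines with scikit-learn. This sketch uses a tiny synthetic dataset (all names and sizes are illustrative); the MLP classifier stands in for back-propagation, since that is how it is trained.

```python
from sklearn.datasets import make_classification, make_regression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPClassifier   # trained via back-propagation
from sklearn.neighbors import KNeighborsClassifier

# Synthetic classification data for Naive Bayes, Random Forest, MLP, and KNN.
X, y = make_classification(n_samples=200, n_features=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

classifiers = {
    "naive_bayes": GaussianNB(),
    "random_forest": RandomForestClassifier(random_state=0),
    "backprop_mlp": MLPClassifier(max_iter=1000, random_state=0),
    "knn": KNeighborsClassifier(n_neighbors=5),
}
for name, clf in classifiers.items():
    clf.fit(X_tr, y_tr)
    print(name, round(clf.score(X_te, y_te), 2))   # test-set accuracy

# Linear regression predicts a continuous target rather than a class label.
Xr, yr = make_regression(n_samples=200, n_features=3, noise=0.1, random_state=0)
reg = LinearRegression().fit(Xr, yr)
print("linear_regression R^2:", round(reg.score(Xr, yr), 2))
```

Swapping one estimator for another changes only the constructor line, which is the main practical appeal of the shared `fit`/`predict`/`score` interface.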

Exploring Mild Cognitive Impairment to Alzheimer’s Disease … – Physician’s Weekly

The following is a summary of Neuroimaging and machine learning for studying the pathways from mild cognitive impairment to Alzheimer's disease: a systematic review, published in the August 2023 issue of Neurology by Ahmadzadeh et al.

Researchers performed a systematic review of the latest neuroimaging and machine learning methods for predicting conversion from mild cognitive impairment to Alzheimer's disease dementia.

They conducted their search in accordance with the systematic review guidelines outlined in the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA). The search encompassed PubMed, SCOPUS, and Web of Science databases.

The results showed that out of 2,572 articles, 56 fulfilled the inclusion criteria. The reviewed studies demonstrated the potential of multimodality frameworks and deep learning for predicting MCI-to-AD dementia conversion.

They concluded that neuroimaging data combined with advanced learning algorithms hold potential for predicting AD progression. Challenges faced by researchers and future research directions were outlined. The protocol was registered as CRD42019133402 and published in the journal Systematic Reviews.

Source: bmcneurol.biomedcentral.com/articles/10.1186/s12883-023-03323-2

Can AI Outperform Humans at Creative Thinking Task? This Study Provides Insights into the Relationship Between Human and Machine Learning Creativity -…

While AI has made tremendous progress and has become a valuable tool in many domains, it is not a replacement for humans' unique qualities and capabilities. The most effective approach, in many cases, involves humans working alongside AI, leveraging each other's strengths to achieve the best outcomes. There are fundamental differences between human and artificial intelligence, and there are tasks and domains where human intelligence remains superior.

Humans can think creatively, imagine new concepts, and innovate. AI systems are limited by the data and patterns they've been trained on and often struggle with truly novel and creative tasks. However, the question is, can an average human outperform the AI model?

Researchers compared the creativity of humans (n = 256) with that of three current AI chatbots, ChatGPT-3.5, ChatGPT-4, and Copy.AI, using the alternate uses task (AUT), a divergent thinking task. It is a cognitive method used in psychology and creativity research to assess an individual's ability to generate creative and novel ideas in response to a specific stimulus. Such tasks measure a person's capacity for divergent thinking: the ability to think broadly and generate multiple solutions or ideas from a single problem.

Participants were asked to generate uncommon and creative uses for everyday objects. The AUT consisted of four tasks with the objects rope, box, pencil, and candle. The human participants were instructed to focus on the quality of their ideas rather than the quantity. The chatbots were tested 11 times with the four object prompts in different sessions; each object was tested only once within a session.

They collected subjective creativity or originality ratings from six professionally trained human raters to evaluate the results. The order in which responses within an object category were presented was randomized separately for each rater. Each rater's scores were averaged across all the responses a participant (or a chatbot in a session) gave for an object, and the final subjective score for each object was formed by averaging the six raters' scores.
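The scoring scheme above reduces to two averaging steps. This minimal NumPy sketch uses assumed numbers (a 1-5 rating scale, eight responses for one object), not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(0)
n_raters, n_responses = 6, 8            # six trained raters; e.g. eight ideas for "rope"
ratings = rng.integers(1, 6, size=(n_raters, n_responses)).astype(float)

per_rater_mean = ratings.mean(axis=1)   # each rater's average over the responses
final_score = per_rater_mean.mean()     # final per-object score: average of rater means
print(round(final_score, 2))
```

Averaging within raters first, then across raters, keeps a rater with many extreme scores from dominating the final per-object score.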

On average, the AI chatbots outperformed human participants. While human responses included poor-quality ideas, the chatbots generally produced more creative responses. However, the best human ideas still matched or exceeded those of the chatbots. While this study highlights the potential of AI as a tool to enhance creativity, it also underscores the unique and complex nature of human creativity that may be difficult to replicate or surpass with AI technology fully.

However, AI technology is rapidly developing, and the results may be different after half a year. Based on the present study, the clearest weakness in human performance lies in the relatively high proportion of poor-quality ideas, which were absent in chatbot responses. This weakness may be due to normal variations in human performance, including failures in associative and executive processes and motivational factors.

Arshad is an intern at MarktechPost. He is currently pursuing his Integrated MSc in Physics at the Indian Institute of Technology Kharagpur. Understanding things at a fundamental level leads to new discoveries, which lead to advancements in technology. He is passionate about understanding nature fundamentally with the help of tools like mathematical models, ML models, and AI.

NASA reveals latest weapon to ‘search the heavens’ for UFOs, aliens – Fox News

Artificial intelligence and machine learning will be "essential" to finding and proving the existence of extraterrestrial life and UFOs, NASA said.

The space agency recently released its highly anticipated 36-page UFO report that said NASA doesn't have enough high-quality data to make a "definitive, scientific conclusion" about the origin of UFOs.

Moving forward, AI will be vital to pinpointing anomalies while combing through large datasets, according to the report compiled by NASA's independent research team on UAPs (unidentified anomalous phenomena), a fancy word for UFO.

"We will use AI and machine learning to search the skies for anomalies and will continue to search the heavens for habitable reality," NASA Administrator Bill Nelson said during a Sept. 14 briefing. "AI is just coming on the scene to be explored in all areas, so why should we limit any technological tool in analyzing, using data that we have?"

The members of NASA's UAP (unidentified anomalous phenomena) study. (NASA)

Dr. Nicola Fox, NASA's associate administrator, elaborated on Nelson's point, saying AI "is an amazing tool" to find "signatures that are sort of buried in data."

That's how NASA, and scientists around the world, are going to be able to find the metaphorical needle in a haystack, Fox said.

"So a lot of our data are just sort of wiggly line plots. We get excited about wiggly line plots, by the way, but sometimes, you see the wiggles, but you miss a signal," she said.

"By using artificial intelligence, we can often find signatures. So one example we've had is to be able to find signatures of superstorms using very old data that, you know, really is before sort of like routine scientific satellite data."

A Fox News Digital-created UFO hotspot map based off information from the Department of Defense. (Julia Bonavita/Fox News Digital based on AARO's Data)

UAP reporting trends presented during April 19, 2023, Senate hearing. (U.S. Senate Committee on Armed Services)

Using AI was a key component of the 16-member, independent UAP research team's report.

"The panel finds that sophisticated data analysis techniques, including artificial intelligence and machine learning, must be used in a comprehensive UAP detection campaign when coupled with systematic data gathering and robust curation," the report says.

The use of AI has been a controversial topic that governments around the world, including the U.S., are grappling with.

Advocates have lauded the potential capabilities of generative AI and the possibility it could catapult society to the next evolution of humankind. On the flip side, it can also create a dystopian future if guardrails aren't put in place, or if it's in the hands of ill-intended users, experts have warned.

Earlier this month, over 100 members of Congress met with big tech tycoons such as Elon Musk and Mark Zuckerberg about AI, and some senators expressed concern about unregulated AI.

The NASA panel was asked if regulating AI would impact the space agency's ability to use the budding technology to potentially find extraterrestrial life.

Nelson brushed off concerns that regulations would hamper NASA's mission.

"No, don't think that any attempts to that the Congress has underway to try to write a law that would appropriately put guardrails around AI for other reasons is anyway going to inhibit us from utilizing the tools of AI to help us in our quest on this specific issue," Nelson said in response to the question.

NASA's study of UAPs is separate from the Pentagon's investigation through the All-domain Anomaly Resolution Office (AARO), although the two investigations are running on parallel tracks that include corroborative efforts.

Much like a team of peer reviewers, NASA commissions independent study teams as a formal part of NASA's scientific process, and such teams provide the agency external counsel and an increased network of perspectives from scientific experts.

They were assigned to pinpoint the data available around UAP and produce a report that outlines a roadmap for how NASA can use its tools of science to obtain usable data to evaluate and provide suggestions moving forward.

Progress in using deep learning to treat cancer – Nature.com

Deep learning approaches have potential to substantially reduce the astronomical costs and long timescales involved in drug discovery. KarmaDock proposes a deep learning workflow for ligand docking that shows improved performance against both benchmark cases and in a real-world virtual screening experiment.

Drug discovery is a long and arduous process that is staggeringly expensive: the average estimated time needed to take a new drug from discovery to launch is 10-12 years [1], at a high cost of ~US$2.2 billion per drug [2], which is a major problem considering that this process is also plagued by low hit rates. Computer-aided drug discovery (CADD) can substantially aid this process [3], both by predicting how a range of drug-like ligands would bind to a given drug target (virtual screening) using docking algorithms, and by predicting the corresponding binding free energies of the docking-predicted poses, which measure the strength with which the ligand binds to its target. However, despite significant progress in this area, challenges remain, including (1) the quality of the predicted binding poses, which is crucial for rational drug discovery, a process complicated by the presence of error, non-linearity, and randomness [4]; (2) the precision and accuracy of the predicted binding free energies for those poses (there can be, for instance, significant variation in the pose ranking for the same ligand/target combination between docking approaches); and (3) the speed of the approach, which is a particular issue in the face of increasing library sizes. That is, computational approaches need to be efficient enough to perform ultra-large docking on libraries that can reach billions of compounds [5] without significantly compromising the quality of the binding pose and free-energy predictions. Such huge libraries are out of the scope of conventional CADD approaches, but are an ideal target for deep-learning (DL) approaches [5], which typically perform better than traditional shallow machine learning techniques (or even deep learning approaches with expert descriptors) when processing large data sets [6].
However, even DL approaches face challenges in optimizing both accuracy and computational speed, due to the inherent complexity of the problem, as well as the degree of seeming randomness involved [4]. Writing in Nature Computational Science, Xujun Zhang and colleagues [7] propose KarmaDock, a DL approach for ligand docking that shows both improved speed and accuracy on benchmark data sets and performs well in a real-world virtual screening project, where it was used to discover experimentally validated active inhibitors of LTK, a target for the treatment of non-small-cell lung cancer [8].
