Category Archives: Machine Learning

GovCon Expert Joe Paiva Finds AI at a Crossroads: Amplifying Biases or Empowering All – GovCon Wire

By Joe Paiva, Chief Operating Officer at Fearless

The digital divide of the 1990s exacerbated long-standing inequities in our society.

As broadband internet and personal computers proliferated, they reached affluent neighborhoods and households first. This left economically disadvantaged communities, disproportionately communities of color, on the wrong side of the divide. The impacts on education, job skills development and economic opportunity further widened existing disparities.

Today, we face an even more dangerous new digital divide, one fueled by the rapid rise of artificial intelligence and machine learning.

Algorithms are increasingly used to make high-stakes decisions that impact people's livelihoods and quality of life, from college admissions and job candidate screening to home mortgage approvals and the allocation of government services.

The fundamental problem is this: most of these AI and ML models are trained on historical datasets that reflect centuries of systemic bias and discrimination. There's redlining in housing, legacy admissions in higher education and underinvestment in schools and businesses in minority neighborhoods. These and countless other inequities are baked into the data from which AI-based applications learn to make predictions.

For example, a 2017 study by researchers at the University of Virginia and the University of Washington found that AI algorithms used by major online platforms to target job ads were significantly less likely to show opportunities in engineering, computing and other high-paying fields to women than to men. The algorithms had learned to optimize ad placement based on past engagement data, perpetuating long-standing gender disparities in STEM careers. Research articles have found similar issues in AI used for hiring, where models trained on historical employment records can entrench racial and gender biases in selection processes. Equally insidious but harder-to-document examples abound.

Without intentional effort to identify and mitigate these biases, AI will continue to amplify past inequities and erect new barriers to opportunity for underrepresented groups.

And because of the digital divide that began in the 90s, underserved communities and people of color have faced significant barriers to developing digital skills, pursuing education and job opportunities, and participating in the digital economy. As a result, these groups are less likely to be developing and implementing the AI tools and practices that threaten to widen the divide further.

A 2020 study by the National Skills Coalition, "Applying a racial equity lens to digital literacy," reveals stark disparities in digital skill attainment between white workers and their Black, Latino and Asian American and Pacific Islander peers.

The study found that while 41 percent of white workers have advanced digital skills, only 13 percent of Black workers, 17 percent of Latino workers and 28 percent of AAPI workers have attained this level. These gaps in advanced digital skills are the product of structural inequities deeply rooted in our society, from uneven access to quality education and training to biased hiring practices and lack of diversity in the tech sector.

As a result, rather than being the great equalizer we once hoped for, AI threatens to systematize and amplify the biases of the past, affecting access to opportunity for generations to come.


There are promising examples of AI being deployed thoughtfully to identify bias and the social factors that underlie disparities. The Veterans Administration is utilizing AI in many ways. The Social Determinants of Health Extractor, or SDOH, is an AI-powered tool that analyzes clinical notes in electronic health records to identify key social factors, such as a patient's economic status, education, housing situation and social support networks, that may influence their health outcomes.

By using natural language processing and deep learning techniques, the system can automatically surface SDOH information. The extracted SDOH variables can then be used by researchers to examine how these social factors contribute to health disparities and impact clinical outcomes for veterans from minority or underserved communities.
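The input/output shape of such an extractor can be sketched in its simplest form as a keyword lookup. To be clear, the VA's tool uses trained NLP and deep-learning models; the lexicon and category names below are invented purely for illustration:

```python
# Hypothetical SDOH cue-phrase lexicon -- illustrative only, not the VA's.
SDOH_LEXICON = {
    "housing": ["homeless", "eviction", "unstable housing"],
    "economic": ["unemployed", "financial strain", "cannot afford"],
    "social_support": ["lives alone", "no family nearby"],
}

def extract_sdoh(note):
    """Return SDOH categories whose cue phrases appear in a clinical note."""
    note_lower = note.lower()
    hits = {}
    for category, phrases in SDOH_LEXICON.items():
        found = [p for p in phrases if p in note_lower]
        if found:
            hits[category] = found
    return hits

note = "Patient is unemployed, lives alone, and reports unstable housing."
print(extract_sdoh(note))
```

A deep-learning version would replace the lexicon with a trained sequence model, but the output, structured SDOH variables per note, is the same kind of artifact researchers then correlate with outcomes.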

Understanding these relationships is a critical step toward designing more targeted interventions and equitable care delivery practices that address the root social drivers of health.

In the criminal justice system, AI is being leveraged to address racial disparities in sentencing. Researchers at the Stanford Computational Policy Lab developed a machine learning model to identify bias in risk assessment tools used by judges to inform sentencing decisions.

By analyzing data from over 100,000 criminal cases in Broward County, Florida, the team found that Black defendants were nearly twice as likely as white defendants to be misclassified as high risk of recidivism.

Armed with this insight, policymakers and judges can take steps to mitigate the bias, such as adjusting risk thresholds or supplementing the algorithms with additional contextual information.
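The kind of group-level audit described above can be sketched as follows. The scores and outcomes are fabricated, and `high_risk_misclass_rate` is an illustrative helper, not the Stanford team's code:

```python
def high_risk_misclass_rate(records, threshold):
    """Share of people who did NOT reoffend but were labeled high risk."""
    fp = sum(1 for score, reoffended in records
             if score >= threshold and not reoffended)
    negatives = sum(1 for _, reoffended in records if not reoffended)
    return fp / negatives if negatives else 0.0

# (risk_score, reoffended) pairs per group -- synthetic for illustration.
group_a = [(0.9, False), (0.8, False), (0.7, True), (0.3, False)]
group_b = [(0.6, False), (0.4, False), (0.8, True), (0.2, False)]

for name, group in [("A", group_a), ("B", group_b)]:
    print(name, high_risk_misclass_rate(group, threshold=0.7))
```

If one group's false high-risk rate is markedly higher at the same cutoff, raising that group's threshold (or adding contextual inputs) is exactly the kind of mitigation the policymakers above can consider.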

While AI alone cannot solve systemic inequities, these examples demonstrate its potential as a tool for diagnosing and beginning to address bias in high-stakes government decisions and actions.

To disrupt the cycle and close the digital divide, diversity and inclusion must become a strategic imperative, not only within government agencies but also across the contracting community that serves them and the technology sector as a whole. Only by building teams as diverse as the public we serve can we design AI and digital services that work for all.

Failing to act will allow the new digital divide to calcify, further concentrating wealth and power in the hands of the few at the expense of the many.

The call to action is clear: leaders in government and the technology ecosystem must act.

The path ahead is clear. By embracing diversity, equity and inclusion as core values in the development and deployment of AI, we have the power to create a future where technology truly serves all.

When we harness the talents and perspectives of our nation's full diversity, we can create AI systems that are more innovative, more equitable and more impactful. Realizing this vision will require sustained commitment and collaboration across government, industry, academia and communities. It will demand courageous leadership, honest introspection and a willingness to break from the status quo. But the potential rewards, a society where AI narrows opportunity gaps instead of widening them, where technology is a source of empowerment rather than exclusion, are too great to ignore.

So let us seize this moment, and work together to build a future where the power of AI lifts up the full diversity of the American people. In this future, the digital divide gives way to digital dignity and innovation drives not just prosperity, but justice. This is the future we must build, and the future we will build, together.


Ancoris named Leader for Data Analytics and Machine Learning in ISG Provider Lens Google Cloud Partner … – PR Newswire UK

LONDON, July 9, 2024 /PRNewswire/ -- Ancoris, a UK-based Google Cloud services provider, has been named a Leader for Data Analytics and Machine Learning in the ISG Provider Lens Google Cloud Partner Ecosystem 2024 report. The report, released by ISG today, provides a comprehensive independent overview of the Google partner landscape, alongside analysis of the strengths and capabilities of each individual provider.

According to the report, "The Data Analytics and Machine Learning quadrant represents the most dynamic, innovative and competitive part of the Google Ecosystem," and only 9 of the 43 companies assessed for this quadrant were awarded Leader positions. "Ancoris is making significant investments in GenAI skills and assets" as it pivots to help Enterprise and Public Sector organisations embed AI across the organisation to solve their biggest challenges.

"We are so thrilled to have been recognised as a Leader in the Data Analytics and Machine Learning quadrant this year," says Andre Azevedo, Ancoris CEO. "When we made the decision to invest in Generative AI and launch a dedicated practice in June 2023, we knew we had the capability to be successful, but how the market and our customers would respond was unknown. Ancoris has had a robust and mature data practice for a long time, but the introduction of Generative AI last year opened up customers' imaginations to how AI could transform their organisation," Azevedo continues. "As a result, the last 12 months have been transformative for our business - we've started valuable relationships with many new Enterprise and Public Sector customers, expanded existing relationships, and are doing more innovative work than ever before."

ISG recognises Ancoris' innovative AI-Native approach to helping customers overcome business challenges and its rapid prototyping capability as an accelerator in helping customers see tangible benefit from AI quickly. "The reality of Generative AI adoption is there's still a lot of hype, limited public references, and very little benchmark data to help customers build financial cases for Generative AI investment," says Matt Frank, Ancoris Chief AI and Innovation Officer.

"It's why we focus on meeting customers wherever they are on their AI adoption journey," Frank continues. "It's important that we get something tangible and actionable in front of the customer as soon as possible, whether that's developing and prioritising use cases through our Actionable AI Framework consulting services, or taking use cases from prototype to production with our Simple methodology and rapid prototyping. We find demonstrating value quickly and aligning with strategic outcomes accelerates adoption and sets customers on a more meaningful AI adoption path."

Ancoris is also recognised as a Product Challenger across the three other quadrants it responded to: Implementation and Integration, Managed Services, and Workspace. "To be the only Google-dedicated Partner to feature across these 4 key quadrants is a testament to our focus and our methodology for solving customer problems," Azevedo comments. "It's the combination of our capabilities across data and AI, software engineering, and cloud infrastructure that make us different. Data & AI capabilities are a key skill, but to bring AI-native solutions to life you need to build the user experience and integrate it across systems and processes. Our ability to embed AI into existing or new applications, business systems, or processes - and do it all on secure and robust Google infrastructure - is a true differentiator against the other pure data players in the ecosystem."

To download the full report, visit

About Ancoris
Ancoris is a leading Google Cloud Services Provider, headquartered in the UK, on a mission to become the most innovative Google Cloud partner in the ecosystem. Ancoris leverages its strong problem-solving skills and continuous improvement approach to help customers become AI Native and stay ahead of their competition. Ancoris has extensive experience in Google Cloud technologies helping enterprises integrate AI-native solutions into their business through expertise in Data & AI, Application and Infrastructure Modernisation, Workspace, and Maps. Ancoris was recognized as a Leader for Data, Analytics, and Machine Learning in the ISG Provider Lens for Google Cloud Partner Ecosystem in 2024, and a Rising Star in 2022 and 2023. Ancoris was awarded Google Cloud's 2024 EMEA Public Sector Partner of the Year award. Ancoris employs the best in the business and was named in the Top 10 Sunday Times Best Places to Work 2023, and a Top Place to Work in 2024.




Advancing transparency, fairness in AI to boost health equity – TechTarget

The use of race in clinical algorithms has increasingly come under fire as healthcare organizations have begun to pursue health equity initiatives and push back against the practice of race-based medicine. While the recognition that race is a social construct, rather than a biological one, is not new, the move toward race-conscious medicine has gained traction in recent years.

At the same time, evidence pointing to the real and potential harms of race-based algorithms has created significant concerns about how these tools -- many of which are currently in use -- will widen existing health disparities and perpetuate harm.

These worries are exacerbated by the rising use of AI and machine learning (ML), as these technologies are often black box models that remain inscrutable to human users despite the potential for bias.

At the recent "Together to Catalyze Change for Racial Equity in Clinical Algorithms" event -- hosted by the Doris Duke Foundation, the Council of Medical Specialty Societies and the National Academy of Medicine -- healthcare leaders came together to discuss how the industry can embrace the shift away from the use of race as a biological construct in clinical algorithms.

A selection of featured panelists gathered to detail race's use in clinical algorithms to date, with an eye toward addressing its harmful use and advancing health equity. To that end, multiple speakers presented ongoing work to mitigate potential harms from AI and ML tools by prioritizing transparency and fairness.

The pursuit of health equity has led many to question the transparency and fairness strategies needed to ensure that clinical algorithms reduce, rather than promote, disparities.

Rapid advances in AI technology have made these considerations critical across the industry, with public and private stakeholders rushing to catch up, as evidenced by guiding principles for ML-enabled devices recently issued by the FDA Center for Devices and Radiological Health (CDRH).

"The FDA put out a call for transparency for machine learning-enabled medical devices," explained Tina Hernandez-Boussard, MD, Ph.D., MPH, associate dean of research and associate professor of biomedical informatics at Stanford University. "They're looking at the who, the why, the what, the where and the how for machine learning practices, so when we talk about transparency: transparency for who? Why do we need it to be transparent? What needs to be transparent?"

Much of this work, she indicated, is centered on how transparency can be embedded into clinical algorithms via automated methods to produce information on a tool's training data, the metrics used to validate it and the population to which it is designed to be applied.
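The transparency items named here (training data, validation metrics, intended population) are essentially the fields of a "model card." A minimal sketch follows; the class and field names are assumptions for illustration, not an FDA or Stanford schema:

```python
from dataclasses import dataclass

# Hypothetical transparency record for a clinical model. Field names mirror
# the items described above but are invented for this sketch.
@dataclass
class ModelCard:
    name: str
    training_data: str
    validation_metrics: dict
    intended_population: str

    def summary(self) -> str:
        return (f"{self.name}: trained on {self.training_data}; "
                f"validated with {self.validation_metrics}; "
                f"intended for {self.intended_population}")

card = ModelCard(
    name="readmission-risk-v2",
    training_data="2015-2020 inpatient EHR records",
    validation_metrics={"AUC": 0.81, "sensitivity": 0.74},
    intended_population="adults admitted to general medicine services",
)
print(card.summary())
```

Automating the generation of records like this is one way the "who, why, what, where and how" can be produced consistently rather than ad hoc.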

However, Hernandez-Boussard emphasized that integrating transparency in this way requires the development of rigorous standards.

"We need standards and tools for transparency because when I say transparency, my definition might be completely different from somebody else's," she noted. "Industry has a different definition of transparency than other entities. So, we need to think about standards and tools for systematically generating this [transparency] information."

She also underscored the need for distributed accountability in order to drive responsible data and model use. Under such a framework, model developers would be responsible for reporting information about the tools they are building, while model implementers would be responsible for determining how to set up continuous monitoring for their clinical AI.

Further, Hernandez-Boussard indicated that assessing the role of patient outcomes in this accountability framework is essential. She also pointed out a need to require participation in the framework to systematically ensure that algorithms are transparent.

She explained that the recently issued final rule under Section 1557 of the Affordable Care Act (ACA) -- which "prohibits discrimination on the basis of race, color, national origin, age, disability, or sex (including pregnancy, sexual orientation, gender identity, and sex characteristics), in covered health programs or activities," per the U.S. Department of Health and Human Services (HHS) -- is key to these efforts, as its mandates require covered entities to identify and mitigate discrimination related to the use of AI or clinical decision support algorithms.

Hernandez-Boussard highlighted that the ongoing efforts to promote transparency and tackle discrimination are crucial for not only creating accountability but also spreading it across multiple stakeholders rather than just model developers.

"Broad scoping rules on discrimination set the stage for where we're going and how we think about these clinical decision support tools, how we need to evaluate them and how we think about deploying them across populations," she stated. "We need to be promoting health."

Sharing the responsibility of AI transparency also creates an environment in which industry stakeholders can collaborate, instead of compete, to advance the use of equitable clinical tools.

Currently, experts pursuing transparency and accountability efforts for clinical algorithms are challenged by a lack of consensus around what responsible AI looks like in healthcare.

The Coalition for Health AI (CHAI) is working to develop this consensus by bringing together roughly 2,500 clinical and nonclinical member organizations from across the industry, according to its president and CEO, Brian Anderson, MD.

"There's a lot of good work being done behind closed doors in individual organizations [to develop] responsible AI best practices and processes, but not at a consensus level across organizations," Anderson stated. "In a consequential space like healthcare, where people's lives are on the line, that's a real problem."

He explained that the health systems that founded CHAI saw this as an opportunity to bring collaborators from every corner of the industry to develop a definition for responsible healthcare AI. However, willingness to collaborate on a responsible AI framework does not mean that defining concepts like fairness, bias and transparency is straightforward.

While there is agreement on metrics like area under the curve, for example, it's not easy to come to full consensus. This is because the stakes are high, Anderson said. Not only do providers, payers and model developers need to come together, he said, but patients' perspectives must also be part of the conversation, adding another layer of complexity.

As part of these consensus-building efforts, CHAI is homing in on a technical framework to help inform developers about what responsible AI looks like throughout the development, deployment, maintenance and monitoring steps of a model's life cycle.

Alongside these technical standards, the coalition is pursuing a national network of AI assurance labs. These labs would serve to bridge the gap between the development of clinical AI evaluation metrics and the application of such metrics to assess current and future tools, Anderson noted. The results of these evaluations would then be added to a national registry that anyone could use to gauge the fairness and performance of a clinical AI tool.

"I am a Native American, I live in the Boston area, I go to [Massachusetts General Hospital (MGH)], and I want to be able to go to this registry and look at the models that are deployed at MGH and see how they perform on Native Americans," Anderson said. "I want to be empowered to have a conversation with my provider and say, 'Maybe you shouldn't use that model because look at its AUC score on people like me.' That's what we're trying to enable with this kind of transparency."

He indicated that being able to engage with such a national registry could help overcome the lack of education for both healthcare stakeholders and the public around the industry's use of AI.

When asked how a patient could take advantage of CHAI's registry without being aware of what specific models were being applied to them by their healthcare provider, Anderson explained that part of CHAI's work to build its assurance labs involves requiring that each model's entry in the national registry lists the health systems at which the tool is deployed.

CHAI recently sought public feedback on a draft framework presenting assurance standards to evaluate AI tools across the lifecycle in the wake of Congressional criticism regarding the coalition's relationship with the FDA.

These efforts might be further hampered by additional challenges posed by efforts to measure AI fairness.

Despite the rapid development of AI and work to build consensus around fairness in algorithms, Shyam Visweswaran, MD, Ph.D., vice chair of clinical informatics and director of the Center for Clinical Artificial Intelligence at the University of Pittsburgh, warned that it might be premature to focus on AI tools -- many of which won't be ready for clinical use for some time -- rather than existing statistical algorithms used for clinical decision-making.

He asserted that performance metrics must be developed for both current statistical algorithms and future AI tools, particularly those that utilize race variables in their calculations. Visweswaran stated that efforts like CHAI's move the needle, but the struggle to define algorithmic fairness goes beyond agreeing on a one-size-fits-all approach.

He emphasized that the main difference between a statistical algorithm and an AI tool is the number of data points and variables used to develop each. AI and ML tools typically require vast amounts of data, whereas statistical models can be developed using a significantly smaller pool of information.

Further, derivation and performance data for statistical algorithms are typically published, and the tools themselves are in extensive clinical use. With AI, information about the model might be largely unavailable.

"There are over 500 FDA-certified health AI algorithms out there, and I don't think I can get my hands on any one of them in terms of their performance metrics," Visweswaran said. "So, as a core tenet of transparency, we have to be able to fix that going forward. [AI tools] are currently not in extensive clinical use, but they will be as we go forward, and the efforts to evaluate bias in them are just beginning."

He further underscored that currently, it's unclear how many existing healthcare algorithms are racially biased, aside from the handful that have been researched recently. To address this, Visweswaran and colleagues developed an online database to catalog information about currently deployed race-based algorithms.

He noted that when looking at which of these tools might be biased, starting with those that already incorporate race or ethnicity as an input variable is a good first step, as these explicitly produce different outputs for different racial categories.

However, he indicated that continually updating the online database and evaluating algorithms that don't explicitly incorporate race is necessary to reduce disparities and improve outcomes.

"There are devices which are biased in terms of racial categories, [like] pulse oximetry; it was noticed that for darker-skinned people, the tool was not well-calibrated," Visweswaran stated. "By the time patients came to the hospital, they were actually pretty sick."

The same is true for devices like infrared thermometers and electroencephalograms (EEGs), which he noted do not work as well on patients with thick hair. This causes a disproportionate number of poor-quality readings for Black patients, which often leads to diagnostic issues down the line.

Further, poor-quality EEG readings cannot be used to develop algorithms, meaning that marginalized patient data might not be incorporated into a clinical decision support tool.

"Almost all the EEG data sets out there for research purposes don't have African-American data in them because it gets thrown out," Visweswaran explained, leading to the potential development of biased models.

This problem is exacerbated by the fact that the version history of an algorithm typically isn't available for researchers looking to assess a model's performance and fairness over time.

"When a new version of an algorithm comes, the old version disappears, [but] we need to track all these versions as we go along," he asserted. "We need a story for each of these algorithms -- which is freely available -- so that when researchers or developers go in, they don't have to start from scratch: they can go and look at versions of the algorithm, see the problems with a previous version and why the new version was developed. Sometimes, it's not quite clear that the newer version is actually better than the older version."

Alongside the need to track information about clinical algorithms, Visweswaran stated that stakeholders need to be mindful of how they conceptualize fairness. As part of the ongoing work to enhance its algorithm-tracking database, his team is developing "fairness profiles," which use fairness metrics -- like differences in sensitivity between groups -- found in the literature to assess each tool.

However, these are group fairness metrics, which evaluate measures across groups or populations.

"These are statistical measures, and they're in common use, but they don't guarantee that for a particular person, the algorithm actually is doing a good job," Visweswaran said. "All it guarantees is that for that particular group, on average, it does okay."
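A group fairness metric of the kind described here, the difference in sensitivity between groups, can be computed as in this sketch. The (predicted, actual) pairs are synthetic, not real patient data:

```python
def sensitivity(pairs):
    """True-positive rate: correct positive calls among actual positives."""
    tp = sum(1 for pred, actual in pairs if pred and actual)
    positives = sum(1 for _, actual in pairs if actual)
    return tp / positives if positives else 0.0

# Synthetic (predicted, actual) outcomes for two hypothetical groups.
group_x = [(True, True), (True, True), (False, True), (False, False)]
group_y = [(True, True), (False, True), (False, True), (False, False)]

# The fairness profile entry: the gap in sensitivity between groups.
gap = sensitivity(group_x) - sensitivity(group_y)
print(round(gap, 3))
```

A large gap flags a model that misses true cases more often in one group, which is exactly what these profiles surface; as noted next, even a zero gap only speaks to the group average, not to any individual patient.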

This knowledge has contributed to growing conversations around the role of individual fairness, which posits that similar individuals should receive similar treatments, and in turn, experience similar outcomes.

"The problem is that defining similarity between individuals is tricky, and right now, we don't have any standard measures available to measure individual fairness. The key challenge is to derive the appropriate similarity metric by which to decide who is the peer group that we are going to use for this particular person," Visweswaran noted.

A focus on coming up with one fairness approach that everyone can agree on might undercut the possibility that there is no single set of fairness metrics that will work well for each patient.

"Having this grand idea of getting to a fairer situation is great, but some of the devil is going to be in the details, and there might be math out there which says you can't do some of these things that you actually want to do," Visweswaran cautioned.

Shania Kennedy has been covering news related to health IT and analytics since 2022.

UnifyGPT Announces New Brand Name to Reflect Synergistic AI Mission – AiThority

UnifyGPT Inc, a leading innovator in artificial intelligence solutions, is thrilled to announce its rebranding to Synergetics. This strategic name change reflects the company's dedication to providing AI technologies that harmoniously integrate with the unique needs and goals of enterprise customers.


The name was chosen to encapsulate the company's mission: to utilize AI in ways that are synergistically aligned with the operational, safety, and privacy requirements of users, enterprises, and organizations. This rebranding underscores the company's commitment to developing responsible AI solutions that prioritize the safety and privacy concerns of all stakeholders.

"Our new brand name, Synergetics, perfectly aligns with our corporate mission to create AI solutions that not only enhance but also integrate seamlessly with our clients' operations," said Raghu Bala, Founder and CEO of Synergetics. "We believe in the power of AI to drive innovation and efficiency, but we are equally committed to ensuring that these technologies are used responsibly and ethically."

The rebranding includes a new logo, website, and overall visual identity that reflects the company's forward-thinking approach and its core values of integrity, responsibility, and innovation.


Synergetics stands at the forefront of agentic AI platforms, transforming enterprise operations across diverse verticals including financial services, healthcare, e-commerce, and more. By managing both AI bots and autonomous agents, Synergetics seamlessly integrates advanced machine learning and robust automation capabilities to optimize processes, enhance decision-making, and foster innovation. Its intuitive interface and scalable solutions ensure easy adoption and significant impact across industries. Trusted by leading enterprises, Synergetics redefines efficiency and productivity, setting new standards for the future of AI in business.




Machine learning-based decision support model for selecting intra-arterial therapies for unresectable hepatocellular … –


Article PubMed Google Scholar

Ma M, Liu R, Wen C, Xu W, Xu Z, Wang S, et al. Predicting the molecular subtype of breast cancer and identifying interpretable imaging features using machine learning algorithms. Eur Radiol. 2022;32:165262.

Article CAS PubMed Google Scholar

Li QJ, He MK, Chen HW, Fang WQ, Zhou WM, Liu X, et al. Hepatic arterial infusion of Oxaliplatin, Fluorouracil, and Leucovorin versus transarterial chemoembolization for large hepatocellular carcinoma: a randomized Phase III trial. J Clin Oncol. 2022;40:15060.

Article CAS PubMed Google Scholar

Jin ZC, Zhong BY, Chen JJ, Zhu HD, Sun JH, Yin GW, et al. Real-world efficacy and safety of TACE plus camrelizumab and apatinib in patients with HCC (CHANCE2211): a propensity score matching study. Eur Radiol. 2023.

Johnson PJ, Berhane S, Kagebayashi C, Shinji S, Mabel T, Helen LR, et al. Assessment of liver function in patients with hepatocellular carcinoma: a new evidence-based approach-the ALBI grade. J Clin Oncol. 2015;33:5508.

Article PubMed Google Scholar

Reig M, Forner A, Rimola J, Joana F, Marta B, ngeles G, et al. BCLC strategy for prognosis prediction and treatment recommendation: The 2022 update. J Hepatol. 2022;76:68193.

Article PubMed Google Scholar

Song S, Bai M, Li X, Guo S, Yang W, Li C, et al. Early predictive value of circulating biomarkers for sorafenib in advanced hepatocellular carcinoma. Expert Rev Mol Diagn. 2022;22:36178.

Article CAS PubMed Google Scholar

Hiraoka A, Ishimaru Y, Kawasaki H, Aibiki T, Okudaira T, Toshimori A, et al. Tumor Markers AFP, AFP-L3, and DCP in Hepatocellular Carcinoma Refractory to Transcatheter Arterial Chemoembolization. Oncology. 2015;89:16774.

Article CAS PubMed Google Scholar

Zhou H, Song T. Conversion therapy and maintenance therapy for primary hepatocellular carcinoma. Biosci Trends. 2021;15:15560.

Article CAS PubMed Google Scholar

Fan J, Tang ZY, Yu YQ, Wu ZQ, Ma ZC, Zhou XD, et al. Improved survival with resection after transcatheter arterial chemoembolization (TACE) for unresectable hepatocellular carcinoma. Dig Surg. 1998;15:6748.

Article CAS PubMed Google Scholar

Shi F, Lian S, Mai Q, Mo ZQ, Zhuang WH, Cui W, et al. Microwave ablation after downstaging of hepatocellular carcinoma: outcome was similar to tumor within Milan criteria. Eur Radiol. 2020;30:245462.

Article PubMed Google Scholar

Binnewies M, Roberts EW, Kersten K, Vincent C, Douglas FF, Miriam M, et al. Understanding the tumor immune microenvironment (TIME) for effective therapy. Nat Med. 2018;24:54150.

Article CAS PubMed PubMed Central Google Scholar

Cao J, Su B, Peng R, Tang H, Tu DY, Tang YH, et al. Bioinformatics analysis of immune infiltrates and tripartite motif (TRIM) family genes in hepatocellular carcinoma. J Gastrointest Oncol. 2022;13:194258.

Article PubMed PubMed Central Google Scholar

Liu F, Liu D, Wang K, Xie XH, Su LY, Kuang M, et al. Deep learning radiomics based on contrast-enhanced ultrasound might optimize curative treatments for very-early or early-stage hepatocellular carcinoma patients. Liver Cancer. 2020;9:397413.

Article PubMed PubMed Central Google Scholar

Ding W, Wang Z, Liu FY, Cheng ZG, Yu XL, Han ZY, et al. A hybrid machine learning model based on semantic information can optimize treatment decision for nave single 3-5-cm HCC patients. Liver Cancer. 2022;11:25667.

Article CAS PubMed PubMed Central Google Scholar


Predictive modeling of lower extremity deep vein thrombosis following radical gastrectomy for gastric cancer: based on … –

The significance of this study in addressing the risk of lower extremity DVT in postoperative GC patients is underscored by the substantial morbidity and potential mortality associated with VTE in this patient population [13]. Notably, GC surgery is linked to a heightened risk of postoperative VTE, including DVT and PE [14,15]. Compared with an air-wave pressure therapy instrument, rivaroxaban has a better preventive effect on lower extremity DVT after GC operations [16]. A systematic review and meta-analysis involving 111,936 patients indicated that the 1-month incidence of VTE after GC surgery was 1.8%, and specifically for DVT, 1.2% [11]. Among 666 Korean patients after gastrectomy, the overall incidence of VTE was 2.1% [17]. These figures highlight the critical importance of focusing on DVT in GC patients after surgery. Moreover, this study aims to fill a significant gap in the current research: while the incidence of VTE in GC patients is known, less attention has been paid to predicting lower extremity DVT specifically in the postoperative phase of GC. A retrospective cohort study revealed that age, preoperative blood glucose level, postoperative anemia, and tumor malignancy were independent risk factors for post-gastrectomy VTE in GC patients [18]. Compared with previous studies, however, our study focused on predictive modeling using a comprehensive set of clinical indicators, including age and calcium ion levels, and provided a more detailed risk assessment tool. This underscores the need for predictive models that can accurately identify patients at higher risk for DVT following GC surgery, enabling targeted prophylactic strategies.

The predictive model developed in this study demonstrated high accuracy, as reflected by the area under the curve (AUC) values in both the training and validation sets. This finding indicates the strong predictive capability of the NRS-2002, which is essential in clinical settings for risk stratification and management of DVT in postoperative GC patients. The importance of such predictive models is highlighted by the varying risk factors identified across different studies, including age and tumor-related factors. Age has been consistently identified as a significant risk factor for postoperative VTE [18], and the role of calcium in coagulation processes further substantiates its relevance as a predictive marker in the developed model. These factors provide critical insights into patient-specific risk profiles and can guide clinicians in the prophylaxis and management of DVT after GC surgery.
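
For illustration (not the authors' code), an AUC such as the one reported above can be computed directly from predicted risk scores and observed outcomes. A minimal pure-Python sketch with hypothetical scores, using the Mann-Whitney (rank) formulation of ROC-AUC:

```python
def roc_auc(labels, scores):
    """ROC-AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive case receives a higher score than a
    randomly chosen negative case (ties count as half)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical predicted DVT risk scores for 8 patients (1 = DVT occurred).
labels = [1, 0, 1, 0, 0, 1, 0, 0]
scores = [0.91, 0.40, 0.50, 0.35, 0.62, 0.80, 0.20, 0.55]
print(roc_auc(labels, scores))  # → 0.8666666666666667
```

An AUC of 1.0 would mean every DVT patient was scored above every non-DVT patient; 0.5 is chance level.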

According to our univariate analysis, age emerged as a significant independent variable influencing DVT occurrence following gastrectomy in GC patients, and multivariate analysis likewise highlighted age as a contributing factor to the development of postoperative DVT in these patients. Age is also a known risk factor for VTE in patients with GC [19]. We additionally found that calcium ions were a significant clinical factor in our model. The role of calcium ions in the coagulation process and thrombosis is complex and multifaceted; one key aspect is their involvement in platelet activation. Platelets play a critical role in maintaining hemostasis and vessel integrity under normal conditions and in thrombosis under pathological conditions. The activation of platelets strongly depends on an increase in the intracellular calcium (Ca2+) concentration, which results from the release of Ca2+ by the dense tubular system and the entry of Ca2+ from the extracellular space [20]. In the context of fibrinogen clotting, calcium ions are also necessary for the normal polymerization of fibrin monomers [21], and calcium plays a crucial role in the activation of coagulation factor XIII, an important player in the final stages of the coagulation cascade [22]. Therefore, calcium ions are integral to the coagulation process and influence various stages, from platelet activation to stabilization of the fibrin clot.

LDL plays a significant role in the pathogenesis of atherothrombotic processes. It can modify the antithrombotic properties of the vascular endothelium and influence vessel contractility, partly by reducing the availability of endothelial nitric oxide and activating proinflammatory signaling pathways. These modified intravascular LDLs promote the formation of foam cells from smooth muscle cells and macrophages, increasing the vulnerability of atherosclerotic plaques and enhancing the thrombogenicity of both plaques and blood [23].

Several research findings indicate that a reduction in hemoglobin levels may serve as an indicator of increased VTE risk and poorer prognosis in cancer patients [5]. Another study demonstrated that low hemoglobin levels at baseline correlated with an increased likelihood of symptomatic VTE, symptomatic DVT, and nonfatal PE [24]. Another study investigated the influence of anemia on the risk of bleeding in patients receiving anticoagulant therapy for VTE [25]. These findings underscore the importance of considering anemia as a factor in the management of VTE, particularly in populations at high risk, such as acutely ill patients and those with cancer.

Unlike previous studies, we collected a plentiful and comprehensive set of clinical indicators: a total of 47 baseline, preoperative, surgical, and pathological variables. To our knowledge, this is the largest number of clinical variables included in such a study to date. Most importantly, we used a variety of machine learning algorithms. Machine learning methods have been successfully applied in various fields of medicine and have shown great potential in predictive data analytics [26]. Machine learning models generally perform at least as well as conventional prediction models (logistic regression), and some machine learning methods exhibit exceptional performance [27]. One study developed machine learning models (LightGBMs) to predict VTE diagnosis and 1-year risk using electronic health record data from diverse populations; these tools outperformed existing risk assessment tools, showing robust performance across various VTE types and patient demographics [28]. In our study, we used several machine learning algorithms, including logistic regression, decision trees, random forests, SVM, XGBoost, and LightGBM. By applying these insights, we can anticipate a more robust and precise model for predicting lower extremity DVT risk in postoperative GC patients, potentially leading to better patient outcomes.
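
As a sketch of this kind of model comparison (not the authors' pipeline, and using synthetic stand-in data rather than the 47 clinical indicators), several of the named classifiers can be benchmarked by cross-validated AUC with scikit-learn; XGBoost and LightGBM expose the same estimator interface and could be added to the dictionary:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the 47 clinical indicators and the DVT label.
X, y = make_classification(n_samples=500, n_features=47, n_informative=8,
                           random_state=0)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "decision tree": DecisionTreeClassifier(random_state=0),
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "SVM": SVC(probability=True, random_state=0),
}

# Compare models by mean 10-fold cross-validated ROC-AUC.
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=10, scoring="roc_auc").mean()
    print(f"{name}: mean 10-fold AUC = {auc:.3f}")
```

In a study like this one, the best-scoring model would then be confirmed on a held-out validation set rather than selected on cross-validation alone.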

In a real-world setting, the model could be integrated into clinical decision-making processes, perhaps through electronic health record systems. By inputting patient-specific data, health care providers could receive immediate risk assessments, guiding them in choosing the most appropriate prophylactic measures. This approach aligns with the growing trend of personalized medicine, in which treatment and preventive strategies are tailored to individual patient characteristics and risk profiles.

Despite its contributions, one potential limitation of this study is its retrospective nature, which may introduce biases such as selection bias or information bias. The data used in the study might also be limited in scope or in the accuracy of the recorded information. Another limitation concerns the generalizability of the findings: the study's results are based on a specific patient population and may not be directly applicable to other populations or settings. Additionally, this study developed a population-specific predictive model, yet the selected predictors were not unique to any specific population, as they appear applicable to patients undergoing gastrointestinal, liver, and pancreatic surgeries. This raises the question of whether it is necessary to develop a postoperative lower limb thrombosis prediction model specifically for patients undergoing radical gastrectomy.

Future research should focus on validating the predictive model in diverse patient populations and clinical settings to enhance its generalizability. Future studies could also explore the integration of the model into clinical workflows and its impact on patient outcomes in a real-world setting. In addition, further research is needed to understand the biological mechanisms underlying the identified risk factors for DVT in GC patients; this could lead to more targeted therapeutic interventions. Incorporating new types of data, such as genetic or molecular marker data, could also improve the model's predictive accuracy.

In summary, the development of a predictive model for lower extremity DVT in postoperative GC patients addresses a vital clinical need. The model's accuracy and ability to identify significant predictive factors make it a valuable tool for enhancing postoperative care and patient outcomes in patients with GC.


60 Growing AI Companies & Startups (July 2024) – Exploding Topics


Artificial intelligence has the potential to transform industries ranging from medicine to sales to software development. And this potential is finally being realized.

The AI industry is poised to grow to an estimated $305.9 billion in 2024. Today, AI has become essential for an increasing number of businesses as remote work and reliance on technology are the new daily norm.

Read below for our picks for some of the most promising AI startups with a broad range of use cases across different industries.

5-year search growth: 469%

Search growth status: Exploding

Year founded: 2009

Location: Cologne, Germany

Funding: $400M (Series Unknown)

What they do: DeepL is a neural machine translation platform that uses advanced algorithms to translate text from one language to another with exceptional accuracy and fluency. With support for over 30 languages, DeepL's technology combines neural network models, deep learning techniques, and natural language processing (NLP) to provide high-quality translations for a wide range of content types, including websites, documents, and emails.

With its intuitive interface and powerful API, DeepL enables businesses and individuals to communicate and collaborate across different languages and cultures with ease. In May 2024, DeepL raised $300 million at a $2 billion valuation.

5-year search growth: 2,250%

Search growth status: Exploding

Year founded: 2016

Location: New York City, New York

Funding: $17.9M (Series B)

What they do: Frame is building one of the leading customer success platforms, providing artificial intelligence software around a robust solutions framework aimed at solving numerous customer challenges.

By building The Voice of the Customer engine, teams can use Frame to detect themes among customers, identify patterns for retention or acquisition of customers, and turn qualitative feedback into quantitative data for leadership.

5-year search growth: 2,233%

Search growth status: Regular

Year founded: 2018

Location: Copenhagen, Denmark

Funding: $18.6M (Series A)

What they do: Uizard is an AI-powered platform that helps users create professional-looking designs for websites and mobile apps with minimal coding or design experience. Uizard's proprietary technology uses machine learning algorithms to translate sketches and wireframes into functional code and designs, reducing the time and effort required to create a prototype.

Users can also create responsive and customizable designs that can be shared and tested with stakeholders.

5-year search growth: 476%

Search growth status: Exploding

Year founded: 2016

Location: Mountain View, CA

Funding: $305M (Series C)

What they do: Moveworks is an AI platform that helps employers create a better workplace. By using natural language understanding (NLU), conversational AI and probabilistic machine learning, the platform is able to support employees' issues end-to-end. Examples of AI in action include troubleshooting common questions, such as getting access to software and routing document approvals to the correct person.

5-year search growth: 614%

Search growth status: Exploding

Year founded: 2013

Location: San Francisco, California

Funding: $4B (Series Unknown)

What they do: Databricks is a data and AI company offering a unified analytics platform for integrating AI and machine learning. Customers can use the platform to analyze large-scale data, generate real-time analytics, build and deploy ML applications, and more. In September 2023, Databricks raised $500 million in Series I funding at a valuation of $43 billion.

5-year search growth: 4,400%

Search growth status: Regular

Year founded: 2017

Location: London, England

Funding: $156.6M (Series C)

What they do: Synthesia is an AI-powered platform that enables businesses to create and personalize video content at scale. The platform can generate realistic and engaging videos with human-like avatars, making it ideal for a variety of applications, from e-learning and marketing to news reporting and virtual events.

Synthesia's customization options, including language support, voiceover selection, and scene creation, enable users to create and deploy video content quickly and efficiently.

5-year search growth: 99x+

Search growth status: Exploding

Year founded: 2021

Location: San Jose, California

Funding: $93M (Series B)

What they do: Codeium is an AI-powered coding assistant that provides users with real-time code suggestions, code search, IDE integration, and more. The tool includes a Codeium Chat feature, which acts as a chatbot to write new code and answer coding-related questions. In January 2024, the startup raised $65 million in Series B funding at a $500M valuation.

5-year search growth: 1,138%

Search growth status: Exploding

Year founded:



What they do: Cohere develops advanced AI and large language models for businesses to understand and generate human-like text. The company's three primary models are Command (generative AI text generation), Embed (text embeddings for analysis), and Rerank (improving search relevance). Cohere recently raised $450 million at a $5 billion valuation.

5-year search growth: 4,600%

Search growth status: Regular

Year founded: 2019

Location: San Diego, California

Funding: $4.5M (Seed)

What they do: Soundful is an AI-powered platform that enables businesses to create and customize high-quality soundtracks for their digital content, such as videos, podcasts, and advertisements. Soundful's technology uses deep learning algorithms to analyze the emotions, tone, and context of the content and generate soundtracks that complement and enhance the viewer's experience.

The platform allows users to easily adjust the mood, tempo, and style of the soundtracks to align with their brand identity and message.

5-year search growth: 9,400%

Search growth status: Exploding

Year founded: 2011

Location: San Francisco, California

Funding: $450M (Undisclosed)

What they do: Dialpad is a customer intelligence platform that offers various AI tools for customer engagement, sales intelligence, and team collaboration. The platform provides a central communications hub for businesses that includes video meetings, contact syncing, call recording, automated speech recognition, conversational chatbots, instant call summaries, and more. Dialpad has over 30,000 customers, including big brands like WeWork and Xero.

5-year search growth: 4,200%

Search growth status: Regular

Year founded: 2021

Location: San Francisco, California

Funding: $2.6M (Seed)

What they do: Writesonic enables users to generate content (such as blog posts, ad copy, and product descriptions) in a fraction of the time it would take to do so manually. Using natural language processing (NLP) and machine learning (ML) algorithms, Writesonic's technology can analyze a user's prompts and generate human-like text that matches the desired tone, style, and structure.

The platform also allows users to fine-tune and edit the generated text to meet their specific needs, while also providing suggestions for improvements.

5-year search growth: 689%

Search growth status: Exploding

Year founded: 2020

Location: San Francisco, California

Funding: $42M (Series A)

What they do: Atomic AI operates an AI-driven drug discovery platform that leverages machine learning to facilitate the efficient development of new molecules and medicines, with a focus on RNA.

By providing users with new strategies to target RNA structure and treat previously undruggable diseases, Atomic AI is working to revolutionize the field of medicine in a unique way.

5-year search growth: 6,200%

Search growth status: Regular

Year founded: 2017

Location: Berkeley, California


iCIMS Wins AI Breakthrough Award for "Best Overall AI Solution" – PR Newswire

Prestigious international award program recognizes iCIMS Talent Cloud AI as a trusted, powerful technology to simplify and accelerate hiring while driving quantifiable business outcomes

HOLMDEL, N.J., July 9, 2024 /PRNewswire/ -- iCIMS, a leading provider of talent acquisition (TA) technology, today announced that iCIMS Talent Cloud AI was selected as the "Best Overall AI Solution" in the seventh annual AI Breakthrough Awards, conducted by AI Breakthrough, a prominent market intelligence organization that recognizes the top companies, technologies and products in the global artificial intelligence (AI) market.

iCIMS Talent Cloud AI empowers organizations to simplify recruiting and dynamically engage with talent through job matching and search experiences. The award-winning technology enables TA teams to provide better and more personalized candidate experiences at scale, find best-fit candidates, hire faster and accelerate employee growth. iCIMS customers using its AI-powered solutions fill open roles twice as fast as recruiting teams not using iCIMS Talent Cloud AI.

"iCIMS Talent Cloud AI gives customers a competitive hiring edge to build and scale winning teams, smarter and faster."

Native to the iCIMS platform, its AI is purpose-built and embedded across the entire experience, with no integration required. iCIMS' AI has been trained on billions of data points across hundreds of millions of candidate profiles and activity from thousands of organizations that receive more than 200M applications and make more than 5.5M hires annually.

The company has a longstanding journey of innovation with AI, accelerated by its acquisition of Opening.io in 2020. Earlier this year, iCIMS advanced its program with the launch of its GenAI-powered recruiting assistant to help teams hire smarter and with greater efficiency. Most recently, iCIMS announced its next-generation CRM technology, iCIMS Candidate Experience Management (CXM), to help teams find and nurture talent that converts to quality hires through a combination of advanced marketing automation, engagement scoring and artificial intelligence.

iCIMS is committed to helping organizations hire and scale their teams with reliable, responsible AI, leveraging best practices, third-party audits and global regulations to help foster ethical and responsible recruiting. Its award-winning AI is grounded in six core principles: human-led, technically robust and safe, inclusive and fair, private and secure, transparent, and accountable.

"CHROs are feeling the pressure to implement AI into business processes, yet it's one of the top priorities keeping them up at night, according to our new research," said Andreea Wade, VP of AI at iCIMS. "There's no doubt that AI provides a massive swath of opportunities, but it's so important to get right. It requires working with the right tech vendors, training and upskilling employees and level-setting on expectations. iCIMS is driving that technological innovation in TA forward, without exacerbating risk for our customers, their candidates and our own employees."

The mission of the AI Breakthrough Awards is to honor excellence and recognize the innovation, hard work and success in a range of AI and machine learning related categories, including Generative AI, Computer Vision, AIOps, Deep Learning, Robotics, Natural Language Processing, industry specific AI applications and many more. This year's program attracted more than 5,000 nominations from over 20 different countries throughout the world.

"HR and business leaders are always looking for new ways to improve the experience and create more efficiency and iCIMS does just that across the talent journey," said Steve Johansson, managing director, AI Breakthrough. "iCIMS Talent Cloud AI gives customers a competitive hiring edge to build and scale winning teams, smarter and faster, with reduced complexity and cost. After reviewing thousands of submissions across categories, we are proud to announce iCIMS as the 2024 winner of our 'Best Overall AI Solution' in our prestigious award program."

iCIMS will reveal the latest product innovations in its summer product release later this month. Request a demo today to see why leading employers like Microsoft, Target and Ford Motors use iCIMS to hire great teams. UK-based leaders and recruiters can see iCIMS in action at RecFest on 11 July in Knebworth Park.

About iCIMS, Inc. iCIMS is a leading provider of talent acquisition technology that enables organizations everywhere to build winning workforces. For over 20 years, iCIMS has been at the forefront of talent acquisition transformation. iCIMS empowers thousands of organizations worldwide with the right tools to meet their evolving needs across the talent journey and drive business success. Its AI-powered hiring platform is designed to improve efficiency, cut recruiting costs and build exceptional experiences for candidates and recruiters. For more information, visit

Contact: Carlee Capawana, Director of Corporate Communications, iCIMS, [emailprotected], 908-947-6572



A deep learning-driven discovery of berberine derivatives as novel antibacterial against multidrug-resistant … –

A deep learning training set is established for the exploration of novel anti-H. pylori agents

First, the dataset was curated from reputable sources, ensuring diversity in chemical structures and activity levels. A sizable collection of 938 compounds with known anti-H. pylori properties was established. This dataset included 801 structurally diverse anti-H. pylori compounds reported in the ChEMBL database [29], as well as 137 self-established BBR derivatives [30,31,32]. An MIC value of 16 µg/mL was set as the cutoff: compounds with MICs ≤ 16 µg/mL were defined as active (label 1) and those with MICs > 16 µg/mL as inactive (label 0). The proposed deep learning framework first represented each compound as a molecular graph and extracted its extended-connectivity fingerprint (ECFP) [33], which preserves rich functional group information, then leveraged a message-passing deep neural network to learn properties directly from the molecular structure.
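
The MIC-based labeling rule can be sketched in a few lines of Python (the compound names and MIC values here are hypothetical, not data from the paper):

```python
# Activity cutoff from the text: 16 µg/mL.
MIC_CUTOFF = 16.0

def label_activity(mic_ug_per_ml):
    """Return 1 (active) if MIC <= 16 µg/mL, else 0 (inactive)."""
    return 1 if mic_ug_per_ml <= MIC_CUTOFF else 0

# Hypothetical MIC values (µg/mL) for a few compounds.
mics = {"compound A": 4.0, "compound B": 16.0, "compound C": 64.0}
labels = {name: label_activity(m) for name, m in mics.items()}
print(labels)  # → {'compound A': 1, 'compound B': 1, 'compound C': 0}
```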

Since significant interactions between topologically distant atom pairs can also affect overall molecular properties (Fig. 1a), a deep graph neural network (Attentive FP) [34] was applied to learn embeddings of the molecular graph, covering both local and nonlocal features of the molecular structure. More specifically, every compound was represented as a molecular graph in which nodes denote atoms and edges denote bonds (Fig. 1a). Leveraging the RDKit and DGL-LifeSci packages, feature vectors of length 39 for nodes and 11 for edges were obtained to represent the chemical properties of atoms and bonds, respectively. Attentive FP translates the molecular graph with node and edge features into a continuous vector, which serves as the compound representation. It iteratively aggregates the features of atoms and bonds with a graph attention network (GAT) [35] in the message-passing phases, which allows an atom to focus on its most relevant neighborhoods, and then retains and filters information with a gated recurrent unit (GRU) [36] in the readout phases, which allows the model to capture implicit effects among distant atoms. After obtaining the molecular graph representation, an attention mechanism was introduced to self-adaptively integrate the molecular graph representation and the ECFP fingerprints.
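
The attention-weighted neighbor aggregation at the heart of GAT-style message passing can be illustrated with a toy pure-Python sketch (not the Attentive FP implementation: a simple dot-product score stands in for the learned scoring function of a real GAT, and the 2-dimensional atom features are made up):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attend(node_feat, neighbor_feats):
    """Aggregate neighbor features, weighting each neighbor by a
    softmax-normalized relevance score (here a dot product with the
    center atom; a trained GAT learns this scoring function)."""
    scores = [sum(a * b for a, b in zip(node_feat, nf))
              for nf in neighbor_feats]
    weights = softmax(scores)
    dim = len(node_feat)
    return [sum(w * nf[d] for w, nf in zip(weights, neighbor_feats))
            for d in range(dim)]

# Toy 2-dimensional atom features: one atom attending to two neighbors.
center = [1.0, 0.0]
neighbors = [[1.0, 1.0], [0.0, 2.0]]
print(attend(center, neighbors))
```

Stacking several such rounds lets information from topologically distant atoms reach each node, which is what the message-passing phases above accomplish.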

Fig. 1: Establishment of the deep learning model. (a) Deep learning-based anti-H. pylori compound discovery; SMILES, simplified molecular-input line-entry system. (b) Pie chart of data distributions, including three pre-training sets, a fine-tuning set and a test set. (c) ROC-AUC plot evaluating model performance under ten-fold cross-validation. (d) t-Distributed stochastic neighbor embedding (t-SNE) of all molecules from the pre-training, fine-tuning and test sets, revealing chemical relationships between these compounds.

Considering that the 938 compounds with known anti-H. pylori properties were insufficient for training a successful deep learning model, we utilized the pre-train-then-fine-tune paradigm [37]: the deep learning model was pre-trained on large-scale bioassays related to H. pylori from the PubChem database [38] and then fine-tuned on the collection of 938 compounds. The three pre-training sets included 8999, 892, and 2809 compounds, respectively (Fig. 1b). All of the above-mentioned training set information is provided as supplementary data sets. In the fine-tuning phase, the parameters of the nonlinear multilayer perceptron (MLP) network in the pre-trained model were re-initialized and the model was further optimized on the collection of 938 compounds to capture task-specific patterns. Finally, the molecular fingerprint features and molecular graph embeddings were self-adaptively integrated to form the compound feature vectors, and an MLP layer [39] was leveraged to predict their activity against H. pylori.

The predictive accuracy of the model was assessed through ten-fold cross-validation on the training dataset and external validation on an independent dataset, confirming its robustness and reliability.40 The final model achieved an area under the receiver operating characteristic curve (ROC-AUC) of 0.9033, indicating good discriminative capacity, and an area under the precision-recall curve (AUPR) of 0.9615, indicating a robust precision-recall balance. The F1-score, the harmonic mean of precision and recall, was 0.8797, attesting to a good equilibrium between these two measures. The model also attained an accuracy of 0.8326, the proportion of correctly classified instances. Furthermore, the recall, which reflects the model's ability to correctly identify actual positives, was 0.8454, while the precision, the proportion of predicted positives that were correct, was 0.9169. These metrics collectively corroborate the model's effectiveness for this classification task in the context of H. pylori inhibition.
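As a quick sanity check, the reported F1-score follows directly from the reported precision and recall (F1 is their harmonic mean):

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# values reported for the final model
precision, recall = 0.9169, 0.8454
f1 = f1_score(precision, recall)  # ≈ 0.8797, matching the reported F1-score
```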

Thus, the established deep learning model captured the correlation between the structural characteristics of these compounds and their antibacterial activity against H. pylori. To validate the effectiveness of this model, a series of novel BBR derivatives was strategically designed for prediction.

It has been reported that modifications on the D-ring of BBR/PMT (Fig. 2a), such as 9-position mono-substitution, provide only limited enhancement of anti-H. pylori activity.32 When modifications were conducted at the 13-position of ring C (Fig. 2a), the corresponding derivatives exhibited only moderate anti-H. pylori potencies.41 Meanwhile, there is scarce literature on the anti-H. pylori activity of A-ring-modified derivatives, making them highly attractive for novel anti-H. pylori drug discovery utilizing deep learning models. Considering synthetic accessibility, we selectively chose the 3-position of the A-ring for modification with various types of substituents, including chain alkanes, cycloalkanes and substituted phenyls. Thus, a set of 3-substituted novel BBR/PMT derivatives was virtually designed for prediction. Two of them (5 and 6) were predicted to be positive, and the remaining nine were predicted to be negative (1–4, 9–13). To verify the accuracy and reliability of the deep learning model, all designed compounds were synthesized through an easy-to-operate one-step procedure as shown in Supplementary Scheme 1, and subsequently subjected to antibacterial activity evaluation. Simultaneously, two 3,13-disubstituted derivatives (7 and 8) were serendipitously obtained and identified during the synthesis of 5 and 6, respectively, in the presence of excess electrophilic reagent. Compared to previously reported procedures involving more than three steps,42 the disubstituted derivatives could be obtained with satisfactory yields ranging from 61–67%. These two compounds were also predicted to be positive (7–8).

In vivo antibacterial evaluations for compound 8. a Chemical structures of BBR and 8. b Serum biochemical indices of liver and kidney functions for mice in different treatment groups (n=6). c Plasma and stomach concentration–time profiles of 8 following a single oral dose of 30mg/kg (n=4). d Schematic diagram of the H. pylori infection and treatment process in C57BL/6 mice. e, g Viable counts in the stomachs of mice infected with H. pylori CCPM(A)-P-3722159 in each group (n=5) after different treatments. The administration dosage of each treatment component is as follows: OPZ (200µg/kg); 8 (30mg/kg); AMX (15mg/kg); CLA (15mg/kg); CMC, carboxymethyl cellulose; AC, AMX+CLA; Bi, bismuth citrate (5mg/kg). f Hematoxylin and eosin (H&E) staining of stomach tissues

All constructed BBR/PMT derivatives were first evaluated for activity against six different H. pylori strains, including two American Type Culture Collection (ATCC) reference strains (ATCC 43504 and ATCC 700392) and four clinical isolates, taking BBR, PMT, CLA, AMX, LEV, and MTZ as positive controls. The tested strains included CLA-resistant strains (CCPM(A)-P-3716289 and CCPM(A)-P-3716370), MTZ-resistant strains (ATCC 43504 and CCPM(A)-P-3716289), LEV-resistant strains (CCPM(A)-P-3716289 and CCPM(A)-P-2316370), and an AMX-resistant strain (SS1). The chemical structures of the designed compounds, the deep learning prediction results, and their MIC values against the tested H. pylori strains are listed in Table 1. The results demonstrate a notable degree of predictive success, as evidenced by the MIC values. Specifically, the positively predicted compounds (5–8) exhibited substantially lower MIC values, ranging from 0.25–8µg/mL. In contrast, for the negatively predicted compounds (1–4, 9–13), the MIC values rose to a range of 16 to >256µg/mL. Therefore, compounds 5, 7, and 8, with the best antibacterial potencies, were selected as representative compounds for further investigation. This approach exemplifies a judicious combination of computational prediction through deep learning models and experimental validation, constituting a powerful strategy for candidate exploration in future anti-H. pylori drug development.

The effects of predicted hits 5, 7, and 8 on cell viability were evaluated using the MTT assay in gastric epithelial cells (GES-1), hepatocellular carcinoma (HepG2), human non-small cell lung cancer (H460) and human embryonic kidney (293T) cells. Cell viability was determined after exposure to varying concentrations of these compounds. As presented in Supplementary Table S1, compound 8 (Fig. 2a) exhibited lower cytotoxicity, with median toxic concentration (TC50) values ranging from 50.59 to 57.07µM, compared to those of 5 (17.68–24.96µM) and 7 (8.81–12.70µM). Compound 8 exhibited the best anti-H. pylori activity and the lowest cytotoxicity, as well as the most favorable therapeutic index. Therefore, it was selected as a potential candidate for further studies.

The acute oral toxicity of compound 8 was tested in Kunming mice. The mice were closely monitored for 14 days, and the median lethal dose (LD50) of 8 was over 500mg/kg, indicating a satisfactory safety profile for oral administration. Blood samples collected from these mice were then assessed for biochemical indices of liver and kidney function. As illustrated in Fig. 2b, 8 did not lead to obvious elevation of glutamic oxalacetic transaminase (GOT), glutamic pyruvic transaminase (GPT), blood urea nitrogen (BUN) or creatinine (CRE), indicating no detectable adverse effect of 8 on liver or kidney function.

To explore the pharmacokinetic profile of compound 8, the stomachs and plasma of C57BL/6 mice were collected and analyzed at different time points after a single oral dose of 30mg/kg. As illustrated in Fig. 2c, the gastric concentration of 8 remained above its MIC value (0.5µg/mL) even after 24h (3.25±1.51µg/g, Supplementary Table S2), indicating ideal gastric retention that could ensure its anti-H. pylori efficacy in vivo. Meanwhile, the maximum concentration (Cmax) of 8 in plasma was below 0.1µg/mL (Supplementary Table S3), and it became undetectable (below the detection limit of 0.001µg/mL) after 6h, suggesting a low likelihood of systemic side effects. In addition, the acid stability of 8 was assessed at pH 1.0 and 3.0 (to simulate the acidic environment of gastric acid) at different time points (2, 8, and 24h). As shown in Supplementary Table S4, the content of 8 remained above 90% after 24h in the acidic environment. Taken together, the favorable acid stability and pharmacokinetic properties of 8, including minimal absorption into systemic circulation and long gastrointestinal retention, make it suitable for development as an anti-H. pylori agent.
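The gastric retention readout above can be summarized as time-above-MIC, a standard PK/PD quantity. In this sketch only the 24-h gastric value (3.25 µg/g) and the MIC (0.5 µg/mL) come from the text; the earlier time points are hypothetical numbers for illustration:

```python
def fraction_time_above_mic(times_h, concs, mic):
    """Fraction of the sampled interval with concentration >= MIC
    (coarse left-endpoint estimate between sampling points)."""
    above = total = 0.0
    for i in range(len(times_h) - 1):
        dt = times_h[i + 1] - times_h[i]
        total += dt
        if concs[i] >= mic:
            above += dt
    return above / total

# hypothetical gastric profile (h, µg/g); only the 24-h value (3.25)
# and the MIC (0.5) are taken from the text above
times = [0, 1, 2, 4, 8, 24]
concs = [0.0, 20.0, 15.0, 9.0, 6.0, 3.25]
frac = fraction_time_above_mic(times, concs, 0.5)
```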

Twenty-seven clinically isolated H. pylori strains were employed for further potency evaluation of 8. As shown in Table 2, compound 8 exhibited robust activity, with an MIC of 0.5µg/mL against all tested strains (14 CLA-resistant strains, 11 MTZ-resistant strains, 10 LEV-resistant strains, 2 AMX-resistant strains, and 6 MDR strains; the resistance information is highlighted in dark color in Table 2).

Compound 8 was then challenged in a 36-day serial passage assay to determine the rate of potential resistance induction in H. pylori ATCC 43504, which is originally susceptible to CLA and AMX. As shown in Supplementary Fig. S1, repeated exposure to sub-MIC levels of 8 or AMX did not induce resistance in the tested H. pylori strain over 12 serial passages. After 12 passages under permanent selective pressure of CLA, however, the bacteria became resistant to CLA, with the MIC reaching and stabilizing at 4µg/mL (256-fold the initial MIC).

A checkerboard assay was performed to test the combined effects of 8 with AMX or CLA. As displayed in Supplementary Table S5, when combined with CLA, synergistic effects (fractional inhibitory concentration index, FICI≤0.5) were observed in 10 of 25 tested strains (5 of 9 CLA-resistant strains), with FICI values of 0.188–0.50. Meanwhile, only additive effects (0.5<FICI≤1) were observed when 8 was combined with AMX.
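The FICI used above is computed from the checkerboard MICs of each drug alone and in combination. The MIC numbers below are hypothetical, and the interpretation cutoffs follow one common convention (conventions vary between studies):

```python
def fici(mic_a_combo, mic_a_alone, mic_b_combo, mic_b_alone):
    """Fractional inhibitory concentration index from a checkerboard assay."""
    return mic_a_combo / mic_a_alone + mic_b_combo / mic_b_alone

def interpret(f):
    """One common interpretation scheme; cutoffs vary in the literature."""
    if f <= 0.5:
        return "synergy"
    if f <= 1.0:
        return "additive"
    if f <= 4.0:
        return "indifference"
    return "antagonism"

# hypothetical checkerboard values (µg/mL): drug A 0.5 alone vs 0.125 in
# combination; drug B 8 alone vs 1 in combination
f = fici(0.125, 0.5, 1.0, 8.0)   # 0.25 + 0.125 = 0.375 -> synergy
```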

The in vivo antibacterial activity of compound 8 was evaluated in a C57BL/6 mouse gastric infection model (Fig. 2d). The mice were first randomly assigned to five groups: an uninfected control group and four infected groups with different treatments, namely a vehicle carboxymethyl cellulose (CMC) control group, a dual therapy group (OPZ plus 8 [OPZ+8]), a triple therapy group (OPZ plus AMX and CLA [OPZ+AC]), and a quadruple therapy group (OPZ plus AMX, CLA, and 8 [OPZ+AC+8]). The mice in the infected groups were orally inoculated via gavage with H. pylori CCPM(A)-P-3722159, a mouse-adapted MDR strain (resistant to AMX, CLA, and LEV), every other day, four times in total. After a two-week colonization period, the treatments were administered as above for five consecutive days. The therapeutic efficacy was evaluated by comparing the viable bacterial counts in the mouse stomachs. As shown in Fig. 2e, treatment with OPZ+8 (30mg/kg) significantly decreased the gastric bacterial load of the infected mice from 1.3×10^5 to 6.5×10^2 CFU/g (a 2.2-log reduction in comparison to the CMC group), similar to the triple-therapy group (OPZ+AC, 1.8-log reduction in bacterial burden). Remarkably, the quadruple-therapy treatment (OPZ+AC+8) further decreased the bacterial load to 2.0×10^2 CFU/g (a 2.8-log reduction), representing 99.8% inhibition of stomach colonization compared with the CMC group. These results suggest that, with OPZ pretreatment, 8 exerted eradicative efficacy in vivo comparable to the combination of OPZ, AMX and CLA, and exhibited improved activity when combined with AMX and CLA, thereby increasing the clearance of the colonized multidrug-resistant H. pylori.
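The quadruple-therapy figures quoted above can be reproduced (to rounding) directly from the CFU counts:

```python
import math

def log10_reduction(cfu_control, cfu_treated):
    """Log10 drop in viable counts relative to the untreated control."""
    return math.log10(cfu_control / cfu_treated)

def percent_inhibition(cfu_control, cfu_treated):
    """Percentage of colonization suppressed relative to the control."""
    return 100.0 * (1.0 - cfu_treated / cfu_control)

# CFU/g values quoted above for the CMC control and quadruple therapy
cmc, quad = 1.3e5, 2.0e2
reduction = log10_reduction(cmc, quad)      # ≈ 2.8-log reduction
inhibition = percent_inhibition(cmc, quad)  # ≈ 99.8% inhibition
```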

Additionally, there was no significant body weight loss after the different treatments, as shown in Supplementary Fig. S2. Histopathological examination of fixed stomach sections revealed that H. pylori infection led to a more porous and bloated structure of the gastric gland, obvious inflammatory infiltration, and increased pepsinogen (high pepsinogen is usually related to H. pylori infection, peptic ulcer, and gastritis) compared with the uninfected tissue (Fig. 2f). The dual-, triple-, and quadruple-therapy treatments alleviated the gastric inflammation to some degree and decreased the level of pepsinogen, indicating eradication of the pathogens.

The long-term use of antibiotics often disturbs the intestinal flora and decreases gut microbiota diversity. To investigate whether 8 affects the gut microbiota, stool samples were collected from each group, and 16S rRNA gene sequencing was employed to analyze the gut microbiota composition. A Venn diagram was used to analyze the characteristic sequence numbers of each group. As shown in Fig. 3a, the 8-treatment group (T8: OPZ+8) shared the largest number of specific characteristic sequences with the uninfected group, compared with the other pairwise comparisons. Alpha diversity (Pielou_e) analysis (Fig. 3b) showed that microbiota diversity in the vehicle control group (CMC) and the triple therapy group (OPZ+AC) was significantly decreased compared with the uninfected group at the genus level. Notably, the box plot showed that the intestinal flora diversity of mice in group T8 was close to that of the healthy group (p>0.05) and higher than that in the CMC and OPZ+AC groups. Principal coordinate analysis (PCoA) showed that, in comparison to the CMC and OPZ+AC groups, the intestinal flora composition of the T8 group was more similar to that of the uninfected group (Fig. 3c).
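The alpha diversity metric used here, Pielou's evenness, is Shannon diversity normalized by the log of the number of observed taxa. A minimal sketch with made-up genus counts (not the study's data):

```python
import math

def shannon_index(counts):
    """Shannon diversity H from raw counts."""
    total = sum(counts)
    ps = [c / total for c in counts if c > 0]
    return -sum(p * math.log(p) for p in ps)

def pielou_evenness(counts):
    """Pielou's J' = H / ln(S), where S is the observed richness."""
    s = sum(1 for c in counts if c > 0)
    return shannon_index(counts) / math.log(s)

# hypothetical genus-level counts for two samples
balanced = [25, 25, 25, 25]  # perfectly even community -> J' = 1
skewed = [97, 1, 1, 1]       # dominated by one genus -> J' near 0
```

A community dominated by a few genera (as in the dysbiotic groups) scores low on J', while an even, healthy-like community scores near 1.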

Gut microbiome analysis in different treatment groups (n=5). Uninfect, the uninfected group; CMC, vehicle control group; T8, dual therapy group (OPZ+8); AC, triple therapy group (OPZ+AC); AC8, quadruple therapy group (OPZ+AC+8). a Venn diagram of microbial characteristic sequences of each treatment group. b Alpha diversity analysis of the microbiota of each treatment group. c Beta diversity by PCoA analysis. d Bar plot analysis at the genus level (ten bacterial genera with the highest abundance). e Heatmap analysis at the genus level (ten bacterial genera with the highest abundance). f LDA value distribution histogram revealed by LEfSe software. For species with LDA score >4 that are statistically different, the length of the bar (LDA score) represents the effect size of the differing species. g Evolutionary branching trees; from the inside out, the clades represent the levels of phylum, class, order, family, and genus

Next, a bar plot and a heat map analysis at the genus level were performed to show the ten bacterial genera with the highest abundance in each treatment group (Fig. 3d, e). Relative abundance analysis at the genus level revealed intestinal flora disorder in the AC group, with overgrowth of several genera, including Klebsiella, Escherichia-Shigella, and Bacteroides. In contrast to the AC group, the microbiota composition of the dual therapy group (T8) was sustained, and the abundance of probiotics, including Lactobacillus and Dubosiella, was partially restored. The bacterial genera with the highest abundance in each mouse are also displayed in Supplementary Fig. S3. In addition, Bifidobacterium, another well-known probiotic genus (not among the ten most abundant), was also significantly enriched in the dual therapy group compared with the AC group (Supplementary Fig. S4), confirming that 8 tends to avoid dysbiosis of the intestinal flora. To further display the observed differences in microbiome composition, linear discriminant analysis (LDA) effect size (LEfSe) analysis was performed (Fig. 3f), and a cladogram was generated based on the LEfSe analysis (Fig. 3g). Consistent with the above results, there was a significant increase in the abundance of Lactobacillus (LDA (log10)>4.0, p<0.05) in the dual therapy group. These results suggest that 8 might not impair the diversity of the intestinal flora, and may increase the abundance of some probiotics while eradicating H. pylori.

To figure out why compound 8 exhibits anti-H. pylori activity without impacting the intestinal microbiota, the antibacterial spectrum of 8 was evaluated. The antibacterial activities of 8 against common gram-positive and gram-negative bacteria are shown in Supplementary Table S7. Compound 8 exhibited only moderate antibacterial efficacy against Staphylococcus aureus ATCC 29213 (MIC value: 8µg/mL), while being ineffective against all tested gram-negative bacteria. Therefore, the antibacterial spectrum indicates a specific inhibitory effect of compound 8 against H. pylori, with minor impact on the intestinal microbiota.

Proton pump inhibitors, including OPZ, are recommended to be taken before meals to avoid the over-production of gastric acid and thereby increase the stability of antibiotics. Considering that compound 8 possessed an ideal acid stability profile, the in vivo activity of compound 8 alone was evaluated, without co-administration of OPZ. As shown in Fig. 2g, monotherapy with 8 showed potency comparable to both the triple therapy (OPZ+AMX+CLA) and the quadruple therapy (OPZ+AMX+CLA+bismuth citrate). These results indicate that mono treatment with compound 8 may serve as an alternative to traditional triple or quadruple H. pylori eradication regimens.

Bacterial cell morphological changes can provide valuable clues about the antibacterial mode of action and are often used for pilot mechanism investigation. Therefore, we performed scanning electron microscopy (SEM) and transmission electron microscopy (TEM) analysis on H. pylori ATCC 43504 after treatment with compound 8. Bacterial cells were incubated with or without a sub-MIC (1/2 MIC, 0.25µg/mL) level of 8 for 2 days. The SEM and TEM results showed that the integrity of the H. pylori outer membrane was compromised, with obvious perforations observed compared to the untreated control group (Fig. 4a, b). This suggests that the mechanism of action of 8 might be related to its impact on the integrity of the bacterial outer membrane, which warrants further investigation.

Mechanism of action and direct target exploration for compound 8. a, b Images of H. pylori morphology under the electron microscope: a SEM images of H. pylori treated without (upper) or with (lower) 8. b TEM images of H. pylori treated without (upper) or with (lower) 8. c The structure of the active photoaffinity probe 8-O. d Cy3-labeled target proteins were identified using fluorescent gel imaging. SecA (e) and BamD (f) were pulled down from H. pylori using probe 8-O in immunoblot assays; SecA and BamD pulled down by 8-O were competitively inhibited by 8. The recombinant SecA (g) and BamD (h) proteins pulled down by 8-O were competitively inhibited by 8. Surface plasmon resonance (SPR) sensorgrams obtained on SecA (i)/BamD (j)-coated chips at different concentrations of 8. The thermal stability of SecA (k)/BamD (l) proteins with or without 8 treatment (n=3)

The effectiveness of 8 against both drug-susceptible and resistant H. pylori strains suggests that it might possess a unique mechanism of action distinct from those of the first-line antibiotics used for the treatment of H. pylori infection. Hence, it is of great significance to identify the direct targets of 8 and further elucidate its specific mechanism of action.

The ABPP technique, a chemical biology tool for target protein exploration,43,44,45 was applied for target fishing and identification for 8 in this study; the workflow is described in Supplementary Fig. S5. Because 8 lacks functional groups that can form covalent bonds with its target proteins, a photoaffinity probe of 8 (8-O, Fig. 4c), containing a diazirine photo-crosslinking tag and an alkynyl functional group at position 3, was constructed. As mentioned above, mono-substitution at position 3 and di-substitution at positions 3 and 13 were beneficial for anti-H. pylori activity. Considering both structural similarity and synthetic feasibility, we opted for a probe design with mono-substitution at position 3. To ensure that probe 8-O shared a similar mechanism with compound 8 and was suitable for target exploration, we assessed the effects of 8-O on the integrity of the H. pylori membrane through SEM and TEM analysis. As shown in Supplementary Fig. S6, similar to compound 8, probe 8-O induced rupture and perforation of the H. pylori outer membrane. Subsequently, the probe's activity against H. pylori was evaluated. As expected, 8-O exhibited comparable potency against the tested strains, with MICs ranging from 0.5–2µg/mL as illustrated in Supplementary Table S8, indicating a mechanism similar to that of 8. Consequently, 8-O was deemed a viable functional probe for subsequent target exploration and verification.

Following the addition of probe 8-O (25µM) to the lysate of H. pylori ATCC 43504, the mixture was incubated for 1h (Supplementary Fig. S5). Upon exposure to 365nm light, the diazirine photo-crosslinking tag of 8-O generates highly reactive species that form covalent bonds with adjacent groups of the target proteins. Next, the alkyne reporter group of the 8-O/protein conjugate was coupled with an azide-modified fluorescent dye (Cy3) via a click reaction. The Cy3-labeled complex was separated by SDS-polyacrylamide gel electrophoresis (SDS-PAGE), with DMSO treatment serving as the blank control. Fluorescent bands with molecular weights (MW) ranging from 25–150kDa were observed, and the addition of 8 competitively weakened several of these bands, as depicted in Fig. 4d. This result demonstrated that 8-O could at least partially occupy the binding sites of 8's targets and was suitable as a chemical tool for further verification. Similarly, a biotin-labeled complex was formed by coupling the 8-O/protein conjugate with biotin-azide (Supplementary Fig. S5). After purification and enrichment, the complex was identified through liquid chromatography-tandem mass spectrometry (LC-MS/MS) analysis in three biological replicates. In total, 24 proteins were identified at least twice in the analyses (Supplementary Table S9). Among these, two proteins belonging to the bacterial general secretory pathway (Sec pathway) and the β-barrel assembly machinery (BAM), namely protein translocase subunit SecA (SecA) and outer membrane protein assembly factor BamD (BamD), were selected for further verification. Since the Sec pathway and BAM complex are known to be responsible for transporting and assembling the majority of OMPs to the outer membrane, targeting this system could affect the integrity of the bacterial outer membrane, which is consistent with the findings of the SEM and TEM analysis on 8-treated H. pylori cells.
Thus, SecA and BamD were given priority for further investigation.

First, after pre-treatment with 8 in live H. pylori, SecA and BamD were confirmed as potential direct targets of 8 through immunoblot assays using the 8-O probe in pull-down experiments (Fig. 4e, f). Obvious competitive inhibition was detected when 8 was pre-treated in situ, indicating possible specific interactions between 8 and these two proteins. Meanwhile, recombinant H. pylori SecA and BamD proteins were expressed and purified for further verification. In the presence of both UV (365nm) exposure and treatment with the active probe 8-O, the Cy3-labeled SecA/8-O conjugate was successfully pulled down (Fig. 4g). In contrast, the fluorescent band was significantly weakened when either UV exposure or the active probe was absent, indicating the necessity of covalent bond formation between SecA and 8-O for successful pull-down. The fluorescence also faded when SecA was pre-treated with 8, indicating possible competitive inhibition. Moreover, the fluorescent band of the 8-O/SecA complex almost vanished at 95°C, suggesting that labeling by 8-O occurred only with SecA in its natively folded state rather than in the heat-denatured unfolded state. Similar results were observed in the BamD treatment group (Fig. 4h).

In surface plasmon resonance (SPR) analysis, 8 bound dose-dependently to immobilized SecA and BamD, with Kd values of 3.39 and 21.21µM, respectively (Fig. 4i, j). These results further confirmed the direct interactions between 8 and SecA or BamD. In addition, the cellular thermal shift assay (CETSA) was applied for further validation of these specific interactions, as displayed in Fig. 4k, l. With DMSO as the blank control, the thermal stability of the SecA protein decreased as the temperature was serially increased from 44 to 76°C. With the addition of 8, however, the stability of SecA improved significantly, indicating the possible formation of an 8/SecA complex. The same trend was observed for BamD. These findings demonstrate that 8 might bind directly to SecA as well as BamD and enhance the thermostability of both proteins.
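For context, a Kd from SPR translates directly into fractional target occupancy under a simple 1:1 equilibrium binding model, which makes the ~6-fold affinity difference between SecA and BamD easy to interpret:

```python
def fraction_bound(ligand_conc_uM, kd_uM):
    """Equilibrium fractional occupancy for 1:1 binding: [L] / (Kd + [L])."""
    return ligand_conc_uM / (kd_uM + ligand_conc_uM)

# Kd values reported above for 8 binding SecA and BamD
kd_seca, kd_bamd = 3.39, 21.21

occ_seca = fraction_bound(kd_seca, kd_seca)  # at [L] = Kd, occupancy is 0.5
# at any shared concentration, the lower-Kd target (SecA) is more occupied
occ_at_10 = (fraction_bound(10.0, kd_seca), fraction_bound(10.0, kd_bamd))
```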

To further pinpoint the specific binding sites and the amino acid residues interacting with 8, protein mass spectrometry analysis was conducted. As shown in Fig. 5a, Escherichia coli (E. coli) strain Rosetta overexpressing H. pylori SecA or BamD was pre-treated with or without 8 before probe 8-O was added. After proteome labeling and coupling with biotin, the specific peptide differences between the probe treatment and competitive inhibition groups were analyzed through peptide fragment identification. Mass spectrometry analysis of the characteristic peaks was performed on the specific peptides of SecA/BamD that might interact with 8. These characteristic peaks revealed that three different active cavities of SecA might serve as potential binding sites of 8 (Supplementary Fig. S7). Docking analysis (Fig. 5b) was then performed in Discovery Studio 4.5 software (BIOVIA, San Diego, California, USA) to predict the dominant contribution of each amino acid residue in these three cavities, and four potential residues forming hydrogen-bond interactions were selected for single-mutation verification. After single mutation to alanine, the specific binding site was verified (KAENLFGVDNLYKIENAALSHHLDQALK), and arginine 239 inside this cavity was found to play a key role in the SecA-8 interaction (the bright red ball, Fig. 5c). The two- and three-dimensional binding modes are displayed in Fig. 5c. Similarly, two spatially adjacent peptide segments of BamD (one cavity), YRPYVEYMQIKFILGQNELNRAIANVYK and IDETLEK, might contribute together to the interaction between BamD and 8 (Supplementary Fig. S8). Guided by the docking pattern and single-mutation analysis, glutamic acid 171 and serine 209 were further confirmed to play key roles among these residues. These findings provide solid evidence for the verification of the therapeutic targets of 8 and valuable insights for the exploration of novel candidates against H. pylori.

Exploration of the active binding sites between 8 and SecA/BamD. a Experimental workflow for the investigation and validation of binding sites and interaction residues based on LC-MS/MS analysis. The predicted docking patterns between 8 and SecA (b)/BamD (d) were generated with Discovery Studio 4.5 software based on the peptide fragment difference identification results of the LC-MS/MS analysis. Specific binding pattern between 8 and SecA (c)/BamD (e)

Transcriptomic analysis was performed to gain a comprehensive understanding of the antibacterial mechanism of 8 and to verify its impact on OMPs (Fig. 6a, b). Inhibition of the Sec pathway has been reported to impair the secretion of unfolded intracellular OMPs into the periplasmic space, leading to over-accumulation of OMPs within the intracellular space.46 As depicted in the Kyoto Encyclopedia of Genes and Genomes (KEGG) analysis, ribosome synthesis-related genes were markedly down-regulated, which might be due to the excessive accumulation of intracellular proteins. Specifically, after treatment with 8, groEL and groES, which are responsible for intracellular protein folding, were significantly up-regulated, possibly to deal with the excess of unfolded proteins (Fig. 6c). Lipopolysaccharide (LPS) transport depends highly on the Lpt machinery, which consists of LptB, located in the cytoplasm, and the other components in the inner membrane (LptF, LptG), periplasmic space (LptA, LptC) or outer membrane (LptD, LptE). Impaired outer membrane transport will also result in hampered LPS transport. Notably, the transcription of LptB, a cytoplasmic protein, was significantly up-regulated after treatment with 8, compensating for the LPS deficiency in the outer membrane. In contrast, as the Sec and Bam pathways were suppressed, the proteins located outside the inner membrane (LptA, LptD, LptE) could not be transported out and accumulated in the cytoplasm, leading to negative regulation of the transcription of their coding genes (Fig. 6c). The transcription levels of the H. pylori outer membrane adhesion proteins, including BabA, SabA, and OipA, were also significantly decreased in the transcriptome study (data not shown) and in RT-qPCR validation (Fig. 6d).
Collectively, these data suggest that treatment with 8 provokes OMP aggregation in the cytoplasmic and periplasmic spaces and ineffective transport, consistent with dysfunction of the Sec pathway and Bam machinery.

Compound 8 disturbs OMP-related gene transcription and inhibits the protein functions of SecA and BamD. a, b Transcriptome analysis of H. pylori with or without treatment with 8 (n=3). a Volcano plot analysis (red dots: 239 up-regulated genes; green dots: 302 down-regulated genes), and (b) KEGG analysis. c The differentially expressed genes at the transcriptional level related to OMP secretion and transport dysfunction. d RT-qPCR verification of gene transcription of the key H. pylori OMPs after treatment with 8 (n=3). e Inhibition of the ATPase activity of SecA by 8 (n=3). f The interaction of BamA and BamD was inhibited by 8 in Co-IP analysis. g The change in the total amount of H. pylori OMPs after treatment with 8. h, i Confocal analysis of the adhesion of 8-treated H. pylori to GES-1 cells: no-treatment group (h); 8-treatment group (i). For cell nucleic acid staining: 4′,6-diamidino-2-phenylindole (DAPI); for cell membrane staining: 1,1′-dioctadecyl-3,3,3′,3′-tetramethylindodicarbocyanine, 4-chlorobenzenesulfonate salt (DiD); for bacteria staining: fluorescein isothiocyanate (FITC)

SecA plays an indispensable role in the Sec complex as an ATPase.47 Therefore, the ATPase activity of SecA in the presence of 8 was measured. As depicted in Fig. 6e, 8 dose-dependently inhibited SecA, with an IC50 value of 11.53µg/mL. Furthermore, to demonstrate the potential of SecA as an anti-H. pylori target, a previously reported SecA inhibitor, CJ-21058 (IC50=7.0µM),48 was evaluated for anti-H. pylori potency. The MIC values of CJ-21058 against the tested H. pylori strains were in the range of 4–8µg/mL (Supplementary Table S10), suggesting that SecA is an attractive anti-H. pylori target and that screening for SecA inhibitors could be an effective strategy for developing novel anti-H. pylori candidates.
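An IC50 like the one reported for SecA inhibition is usually read through a sigmoidal dose-response model. A minimal sketch, assuming a Hill slope of 1 (the actual fitted slope is not given in the text):

```python
def percent_enzyme_inhibition(conc, ic50, hill=1.0):
    """Simple sigmoidal dose-response; a Hill slope of 1 is an assumption."""
    return 100.0 * conc**hill / (ic50**hill + conc**hill)

ic50 = 11.53  # µg/mL, reported IC50 of 8 against SecA ATPase activity
at_ic50 = percent_enzyme_inhibition(ic50, ic50)      # 50% by definition
at_high = percent_enzyme_inhibition(10 * ic50, ic50)  # ≈ 91% at 10× IC50
```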

In gram-negative bacteria, the assembly of OMPs requires the BAM machinery complex, in which BamA is the central component. The β-barrel domain of BamA interacts with four lipoproteins: the essential lipoprotein BamD, which directly interacts with BamA, and the accessory lipoproteins BamB, BamC, and BamE.49 BamD facilitates the delivery of OMP substrates to the BamA β-barrel and their subsequent assembly. To investigate whether the function of BamD was affected by 8, a co-immunoprecipitation (Co-IP) assay was performed using GST-tagged BamD and His-tagged BamA. As depicted in Fig. 6f, BamD exhibited a strong interaction with BamA, and this interaction was suppressed by 8, indicating that 8 might inhibit the function of the BAM machinery by disrupting the BamA-BamD interaction.

Read more: A deep learning-driven discovery of berberine derivatives as novel antibacterial against multidrug-resistant ...

AI Is Cracking a Hard Problem: Giving Computers a Sense of Smell – The Good Men Project

Over 100 years ago, Alexander Graham Bell asked the readers of National Geographic to do something bold and fresh to found a new science. He pointed out that sciences based on the measurements of sound and light already existed. But there was no science of odor. Bell asked his readers to measure a smell.

Today, smartphones in most people's pockets provide impressive built-in capabilities based on the sciences of sound and light: voice assistants, facial recognition and photo enhancement. The science of odor does not offer anything comparable. But that situation is changing, as advances in machine olfaction, also called digitized smell, are finally answering Bell's call to action.

Research on machine olfaction faces a formidable challenge due to the complexity of the human sense of smell. Whereas human vision mainly relies on receptor cells in the retina (rods and three types of cones), smell is experienced through about 400 types of receptor cells in the nose.

Machine olfaction starts with sensors that detect and identify molecules in the air. These sensors serve the same purpose as the receptors in your nose.

But to be useful to people, machine olfaction needs to go a step further. The system needs to know what a certain molecule or a set of molecules smells like to a human. For that, machine olfaction needs machine learning.

Machine learning, and particularly a kind of machine learning called deep learning, is at the core of remarkable advances such as voice assistants and facial recognition apps.

Machine learning is also key to digitizing smells because it can learn to map the molecular structure of an odor-causing compound to textual odor descriptors. The machine learning model learns the words humans tend to use, for example "sweet" and "dessert," to describe what they experience when they encounter specific odor-causing compounds, such as vanillin.
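As a rough illustration of that structure-to-descriptor mapping, the sketch below predicts odor words for a molecule by finding the most structurally similar known molecule. Everything here (the bit-vector "fingerprints," the descriptor sets, the query) is invented for illustration; real systems use far richer molecular representations and trained models rather than a simple lookup.

```python
# Toy sketch: map a molecular "fingerprint" (a bit vector of structural
# features) to human odor descriptors via nearest-neighbor lookup.
# All fingerprints and labels below are invented for illustration.

def tanimoto(a, b):
    """Tanimoto similarity between two binary fingerprints."""
    inter = sum(x & y for x, y in zip(a, b))
    union = sum(x | y for x, y in zip(a, b))
    return inter / union if union else 0.0

# Tiny "training set": fingerprint -> odor descriptors (invented values)
training = [
    ((1, 1, 0, 1, 0, 0), {"sweet", "vanilla", "dessert"}),  # vanillin-like
    ((0, 1, 1, 0, 1, 0), {"fruity", "banana"}),
    ((0, 0, 1, 1, 0, 1), {"green", "grassy"}),
]

def predict_odor(fingerprint):
    """Return the descriptors of the most similar known molecule."""
    best = max(training, key=lambda item: tanimoto(fingerprint, item[0]))
    return best[1]

query = (1, 0, 0, 1, 0, 0)           # closest to the vanillin-like entry
print(sorted(predict_odor(query)))   # ['dessert', 'sweet', 'vanilla']
```

A learned model generalizes beyond lookup, but the core idea is the same: structurally similar molecules tend to receive similar descriptor words.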

However, machine learning needs large datasets. The web has an unimaginably huge amount of audio, image and video content that can be used to train artificial intelligence systems that recognize sounds and pictures. But machine olfaction has long faced a data shortage problem, partly because most people cannot verbally describe smells as effortlessly and recognizably as they can describe sights and sounds. Without access to web-scale datasets, researchers weren't able to train really powerful machine learning models.

However, things started to change in 2015 when researchers launched the DREAM Olfaction Prediction Challenge. The competition released data collected by Andreas Keller and Leslie Vosshall, biologists who study olfaction, and invited teams from around the world to submit their machine learning models. The models had to predict odor labels like "sweet," "flower" or "fruit" for odor-causing compounds based on their molecular structure.

The top-performing models were published in a paper in the journal Science in 2017. A classic machine learning technique called random forest, which combines the output of multiple decision-tree flow charts, turned out to be the winner.
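The random-forest idea can be sketched in a few lines: train many simple trees (here, one-feature "stumps") on bootstrap resamples of the data and combine them by majority vote. The two features and the "smells sweet" labels below are invented toys, not the DREAM challenge's molecular descriptors.

```python
import random

# Minimal random-forest sketch: many weak trees + bagging + majority vote.
# Data is invented: feature 0 high -> "sweet" (1), feature 1 high -> not (0).

random.seed(0)

data = [
    ((0.9, 0.1), 1), ((0.8, 0.3), 1), ((0.7, 0.2), 1),
    ((0.2, 0.9), 0), ((0.1, 0.8), 0), ((0.3, 0.7), 0),
]

def majority(labels):
    return 1 if sum(labels) * 2 >= len(labels) else 0

def train_stump(samples):
    """Split on a random feature at 0.5; orient by the labels above the split."""
    feat = random.randrange(2)
    above = [lbl for vec, lbl in samples if vec[feat] > 0.5]
    below = [lbl for vec, lbl in samples if vec[feat] <= 0.5]
    hi = majority(above) if above else 1 - majority(below)
    return feat, hi

def forest_predict(forest, vec):
    votes = [hi if vec[feat] > 0.5 else 1 - hi for feat, hi in forest]
    return majority(votes)

# Each tree sees its own bootstrap resample (sampling with replacement)
forest = [train_stump(random.choices(data, k=len(data))) for _ in range(25)]

print(forest_predict(forest, (0.85, 0.15)))  # 1: "sweet"
print(forest_predict(forest, (0.15, 0.85)))  # 0: not "sweet"
```

Real random forests grow full decision trees and also subsample features at every split, but the bagging-plus-voting structure is the same.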

I am a machine learning researcher with a longstanding interest in applying machine learning to chemistry and psychiatry. The DREAM challenge piqued my interest. I also felt a personal connection to olfaction. My family traces its roots to the small town of Kannauj in northern India, which is India's perfume capital. Moreover, my father is a chemist who spent most of his career analyzing geological samples. Machine olfaction thus offered an irresistible opportunity at the intersection of perfumery, culture, chemistry and machine learning.

Progress in machine olfaction started picking up steam after the DREAM challenge concluded. During the COVID-19 pandemic, many cases of smell blindness, or anosmia, were reported. The sense of smell, which usually takes a back seat, rose in public consciousness. Additionally, a research project, the Pyrfume Project, made more and larger datasets publicly available.

By 2019, the largest datasets had grown from less than 500 molecules in the DREAM challenge to about 5,000 molecules. A Google Research team led by Alexander Wiltschko was finally able to bring the deep learning revolution to machine olfaction. Their model, based on a type of deep learning called graph neural networks, established state-of-the-art results in machine olfaction. Wiltschko is now the founder and CEO of Osmo, whose mission is giving computers a sense of smell.

Recently, Wiltschko and his team used a graph neural network to create a principal odor map, where perceptually similar odors are placed closer to each other than dissimilar ones. This was not easy: Small changes in molecular structure can lead to large changes in olfactory perception. Conversely, two molecules with very different molecular structures can nonetheless smell almost the same.
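The odor-map idea can be illustrated with a toy lookup: molecules become points in an embedding space, and perceptually similar odors end up as nearest neighbors. The 2-D coordinates below are invented placeholders, not the actual learned map, which is produced by a trained graph neural network.

```python
import math

# Toy "odor map": molecules as points; nearby points smell similar.
# Coordinates are invented for illustration.

odor_map = {
    "vanillin":       (0.90, 0.10),  # sweet/vanilla region
    "ethyl_vanillin": (0.85, 0.15),
    "limonene":       (0.10, 0.90),  # citrus region
    "citral":         (0.15, 0.85),
}

def nearest(name):
    """Find the perceptually closest other molecule in the map."""
    point = odor_map[name]
    others = (n for n in odor_map if n != name)
    return min(others, key=lambda n: math.dist(point, odor_map[n]))

print(nearest("vanillin"))   # ethyl_vanillin: close in the map, similar smell
print(nearest("limonene"))   # citral
```

The hard part, as the article notes, is learning an embedding where this distance actually tracks perception, despite small structural changes sometimes causing large perceptual ones.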

Such progress in cracking the code of smell is not only intellectually exciting but also has highly promising applications, including personalized perfumes and fragrances, better insect repellents, novel chemical sensors, early detection of disease, and more realistic augmented reality experiences. The future of machine olfaction looks bright. It also promises to smell good.

Ambuj Tewari, Professor of Statistics, University of Michigan

This article is republished from The Conversation under a Creative Commons license. Read the original article.
