Category Archives: Machine Learning

Does AI and machine learning further entrench gender inequity? – Women Love Tech

Tomorrow, I will be emceeing an incredible panel of women at SXSW Sydney. They are Dr Catriona Wallace from the Responsible Metaverse Alliance and author of Checkmate Humanity, Tracey Spicer, author of Man-Made, and Shivani Gopal, CEO and Founder of Elladex.

Our topic is "Does Machine Learning and AI further entrench gender inequity for future generations of women?"

Here's a link to our panel: https://sxswsydney.com/session/does-machine-learning-and-ai-further-entrench-gender-inequity-for-future-generations-of-women/

If you have any questions you want me to ask this panel of experts, you can email us at editor@womenlovetech.com.

Our panel promises to be a lively debate. Please join us at the ICC in Sydney at 12.30pm on Tuesday, October 17.

We will also be including a video from Stela Solar, Director of the National Artificial Intelligence Centre at the CSIRO, and introducing the idea of being a trust architect for AI from Zachary Zeus, CEO of Pyx Global. You can find out more about that role here.

Originally posted here:
Does AI and machine learning further entrench gender inequity? - Women Love Tech

Unveiling the Top AI Development Technologies | by Pratik … – DataDrivenInvestor

With the help of cutting-edge technologies, artificial intelligence is drastically transforming the world today. Once confined to a distinct field of study for three decades, AI has expanded its reach to applications across a wide range of domains. According to Grand View Research, AI will continue transforming many industries, with a projected annual growth rate of 37.3% between 2023 and 2030. This rapid rise highlights the future significance of AI technology.

Today, we can see quite a range of emerging AI technologies. From small businesses to enormous corporations, there is a race to adopt artificial intelligence for data mining, operational excellence, and more. Let's talk about the most recent artificial intelligence developments.

Machine Learning is one of the most useful technologies in the Artificial Intelligence domain. It focuses on training a machine (a computer) to learn and reason independently, typically using many complex training algorithms.

During training, the machine is given a set of labeled or unlabeled data from a specific domain. The machine then analyses the data, draws inferences, and stores them for future use. When it later encounters new samples from a domain it has already learned, it uses the stored inferences to draw the necessary conclusions and respond appropriately.
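To make that train-then-infer loop concrete, here is a minimal sketch using scikit-learn and its bundled iris dataset; the library and dataset are illustrative assumptions, as the article names neither:

```python
# Minimal sketch of the train-then-infer loop described above.
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.datasets import load_iris

X, y = load_iris(return_X_y=True)          # labeled (categorized) training data
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = DecisionTreeClassifier().fit(X_train, y_train)  # "draws inferences" from the data

# When the model encounters new samples from the same domain,
# it applies the stored inferences to respond appropriately.
print(model.predict(X_test[:5]), model.score(X_test, y_test))
```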

TensorFlow: A flexible and robust open-source machine learning framework that offers a comprehensive ecosystem for constructing and deploying ML models, with a strong emphasis on deep learning and a versatile architecture.

PyTorch: A popular open-source machine learning framework that emphasizes dynamic computation graphs, making it well suited to research and prototyping, with strong support for neural networks and deep learning.

Keras: A high-level neural networks API that runs on top of TensorFlow, PyTorch, or other frameworks. It simplifies building and training deep learning models, particularly for beginners and rapid prototyping.
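As a rough illustration of the high-level API described above, here is a small Keras model; the layer sizes and toy data are arbitrary choices for the sketch, not anything prescribed by the article:

```python
# Hypothetical minimal example of the high-level Keras workflow:
# define, compile, and fit a small feed-forward network in a few lines.
import numpy as np
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Dense(32, activation="relu", input_shape=(4,)),
    keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# Train on toy data just to show the workflow.
X = np.random.rand(100, 4).astype("float32")
y = np.random.randint(0, 3, size=100)
model.fit(X, y, epochs=3, batch_size=16, verbose=0)
```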

Machines process information very differently from the human brain, and communicating results effectively and clearly can be challenging. Natural Language Generation (NLG) is crucial here: it converts structured data into natural-language text, enabling systems to convey ideas and findings. NLG finds extensive applications in customer service, report generation, and market summaries.

Prominent companies like Attivio, Automated Insights, Cambridge Semantics, Digital Reasoning, Lucidworks, Narrative Science, SAS, and Yseop offer NLG solutions. It comes as no surprise that NLG is among the top 15 cutting-edge artificial intelligence technologies.

NLTK: A comprehensive library for NLP tasks, providing tools for tokenization, stemming, tagging, parsing, and more.

spaCy: A popular NLP library offering efficient tokenization, part-of-speech tagging, named entity recognition, dependency parsing, and pre-trained word vectors.

Gensim: A library for topic modeling, document similarity analysis, and unsupervised learning of word embeddings such as Word2Vec and FastText.

Stanford CoreNLP: A Java-based NLP library from Stanford that provides many tools, including tokenization, part-of-speech tagging, parsing, and sentiment analysis.

TextBlob: A simple and user-friendly library built on NLTK, offering tools for tokenization, part-of-speech tagging, noun phrase extraction, and sentiment analysis.

Hugging Face Transformers: A library for state-of-the-art transformer models such as BERT, GPT, and XLNet, enabling tasks such as text classification, named entity recognition, and question answering.

AllenNLP: A powerful library built on PyTorch and designed specifically for NLP research, providing high-level abstractions for building and evaluating deep learning models.
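For instance, a few lines of spaCy cover several of the tasks listed above; the example sentence is invented, and the sketch assumes the small English model has been installed:

```python
# Sketch of the spaCy capabilities listed above (tokenization, POS tagging, NER).
# Assumes: python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Apple is looking at buying a U.K. startup for $1 billion.")

for token in doc[:5]:
    print(token.text, token.pos_)   # tokens with part-of-speech tags
for ent in doc.ents:
    print(ent.text, ent.label_)     # named entities, e.g. Apple / ORG
```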

The corporate landscape is experiencing a remarkable upswing in the need for artificial intelligence (AI) software. However, as the significance of such software grows, the need for compatible hardware also becomes apparent. Traditional chips cannot adequately support AI models, which has led to a new generation of AI chips designed for neural networks, deep learning, and computer vision tasks.

These AI solutions encompass a range of components, including CPUs capable of handling scalable workloads, specialized silicon chips built for neural networks, and innovative neuromorphic chips. Major technology organizations like Nvidia, Qualcomm, and AMD are actively involved in creating advanced chips that can perform complex AI calculations.

OpenCV: A popular computer vision library that provides many tools and algorithms for image and video processing, object detection, feature extraction, and more.

TensorFlow: An open-source machine learning framework with a powerful computer vision module, the TensorFlow Object Detection API, for training and deploying object detection models.

PyTorch: Another popular deep learning framework, PyTorch offers computer vision capabilities through its TorchVision library, which provides tools for image classification, object detection, and semantic segmentation.

scikit-image: A Python library focused on image processing, offering a comprehensive collection of algorithms and functions for image enhancement, filtering, segmentation, and feature extraction.

dlib: A C++ library with Python bindings specializing in facial detection and recognition, providing pre-trained models for face detection, landmark detection, and face alignment.

Caffe: A deep learning framework known for its efficiency and speed, Caffe includes a computer vision library that supports image classification, object detection, and semantic segmentation.

SimpleCV: A user-friendly computer vision library designed for beginners, SimpleCV provides easy-to-use functions for basic image processing tasks such as filtering, feature detection, and color tracking.

SciPy: A scientific computing library for Python, SciPy includes image processing modules offering functions for tasks like filtering, morphological operations, image restoration, and mathematical transformations.
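A minimal OpenCV sketch of the kind of image-processing pipeline these libraries support; the file path and edge-detection thresholds are placeholder assumptions:

```python
# Load an image, convert it to grayscale, and run Canny edge detection.
# "input.jpg" is a placeholder path, not a file from the article.
import cv2

img = cv2.imread("input.jpg")                             # read image from disk
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)              # convert to grayscale
edges = cv2.Canny(gray, threshold1=100, threshold2=200)   # detect edges
cv2.imwrite("edges.jpg", edges)                           # save the result
```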

Many businesses struggle to adopt AI, primarily due to the high costs associated with in-house development of AI products. Consequently, there is growing demand for outsourced AI solutions, which offer a more cost-effective way for small and medium-sized businesses, and budget-conscious large enterprises, to dip their toes into AI. By leveraging cloud-based AI services, organizations can access the benefits of artificial intelligence without the hefty investment typically required for in-house development.

Amazon's AI initiatives include enhancing its consumer devices like Alexa and delivering services through AWS. Interestingly, a significant portion of AWS's business cloud services is built on the foundation of these consumer products. As Alexa evolves and improves, its business equivalent will follow suit.

Amazon Lex offers a comprehensive solution for integrating conversational interfaces into any application. This technology is currently utilized in Alexa, empowering developers to design chatbots with advanced natural language capabilities.

With its unique hardware innovation called the Tensor Processing Unit (TPU), Google sets itself apart from other cloud providers. The TPU is a specialized chip specifically designed to enhance the performance of TensorFlow, Google's open-source machine learning platform.

While other major cloud providers offer TensorFlow, none can offer TPU access, giving Google a competitive edge. TPUs boast remarkable speed improvements, performing 15 to 30 times faster than traditional CPUs or GPUs, and deliver up to 180 teraflops of computing power, making complex machine learning tasks significantly faster and more efficient.

In addition to its hardware advantage, Google leverages its AI capabilities from consumer-facing products to cater to business users. The powerful AI algorithms that drive Google applications like Images, Translate, Inbox (Smart Reply), and voice search in Android are accessible through Google Compute Engine, its cloud offering.

This means businesses can harness the same cutting-edge AI technology that powers Google's popular consumer applications to improve their operations and services.

Microsoft organizes its AI solutions into three categories: AI Services, AI Tools and Frameworks, and AI Infrastructure. Unlike Amazon, Microsoft also leverages some of its consumer products to build its business AI offerings.

Under the AI Services category, there are three subgroups. The first is pre-built AI capabilities that enhance customer-facing applications, like web chatbots. The second, Cognitive Search, combines Azure Search with Cognitive Services to provide advanced search capabilities. The third, Conversational AI, utilizes Azure Bot Service to enable conversational bots with enhanced features like richer dialogs, full personality customization, and voice customization.

Artificial Intelligence (AI) encompasses computational models that replicate aspects of human intelligence.

The widespread adoption of AI across various sectors has already yielded many benefits. However, it is crucial for organizations implementing AI to conduct rigorous pre-release trials to identify and mitigate biases and errors. The design and models employed should be robust and capable of withstanding real-world challenges.

Organizations should establish and uphold standards while hiring experts from diverse disciplines to facilitate informed decision-making. AI's ultimate aim and future vision revolve around automating complex human activities while eradicating errors and biases.

Read the original here:
Unveiling the Top AI Development Technologies | by Pratik ... - DataDrivenInvestor

Kingfisher introduces Athena to boost testing and learning with AI … – Retail Technology Innovation Hub

TCS OmniStore

Kingfisher is using TCS OmniStore, an AI powered unified commerce platform from Tata Consultancy Services.

The company operates a chain of over 1,900 stores in eight countries across Europe under its retail banners, including B&Q, Castorama, Brico Dépôt, Screwfix, TradePoint and Koçtaş.

It was looking to upgrade to a multilingual commerce platform that delivers a unified brand experience. In addition, it wanted to address legal, fiscal, and operational differences across all its European banners.

TCS OmniStore has enabled Kingfisher to deliver a range of capabilities such as Click and Collect services, scan and go options, mobile apps, save the cart, and self-checkout facilities along with dynamic promotion capabilities and clienteling.

In addition, the platform supports payment options such as contactless, Apple Pay, Apple Wallet, and pay as you go.

Its API architecture is built around a centralised core base that allows localisation across different regions.

Kingfisher has implemented TCS OmniStore across two banners, B&Q in the UK and Ireland and Castorama in France, with a third coming later this year.

It says that it is benefiting from greater associate productivity, increased revenue, faster checkout, and broader sales opportunities; it was also able to execute promotions better based on data insights.

"TCS OmniStore was the strategic choice for Kingfisher's future growth, orchestrating a fast, smooth, and seamless checkout experience, which is needed for today's customers," says Peter Ash, Product Director, Operations and Fulfilment, Kingfisher.

"Our self-checkout systems have allowed us to be more efficient on the front end. It's simple and our customers love it. They're easy to use. But it's also allowed us to bring colleagues further into the store. I'm really excited about the future. And I'm really excited about what OmniStore can bring with our current systems stack."

"We are delighted to be a strategic partner to Kingfisher in its transformation journey to reimagine the end customer experience and offer a unified experience across its brands in Europe. The platform is enabling seamless omnichannel shopping experiences, enhancing their competitive differentiation, and driving growth," says Shekar Krishnan, Head, Retail & CPG UK and Europe, TCS.

Excerpt from:
Kingfisher introduces Athena to boost testing and learning with AI ... - Retail Technology Innovation Hub

Bring on AI and Machine Learning to take on escalating cybercrime threats – Gulf News

Tech breakthroughs have become prevalent in this digitised environment and are revolutionising the facets of our daily lives. The growth of AI has been a spectacular breakthrough, ushering in new possibilities and transformations.

AI applications have found their way into industries, allowing for automation, better decision-making, and overall efficiency. However, as the world embraces digitalisation and AI, it is also confronted with the darker side of this technological revolution - the new perils of cybercrime.

Particularly in the aftermath of COVID-19, the travel industry has seen tremendous expansion and appeal. The unprecedented rebound in outbound travel has increased exposure to cybercriminals. The travel and tourism industry has become a target of cyberattacks, suffering a variety of threats such as data breaches, ransomware attacks, and phishing attempts.

According to industry forecasts, the costs of cyberattacks will climb by a steep 15 per cent every year, reaching a staggering $10.5 trillion by 2025. This shows the intensity of the situation, emphasising the critical need for organisations to commit significant resources to strengthening their cybersecurity.

Recognising the importance, an astounding 85 per cent of SMEs have stated their intention to boost IT security investment by end-2023. That explains why investing in comprehensive cybersecurity measures is not only a must, but a strategic imperative for long-term success.

Following are a few technology advances that can help combat cybercrime:

AI and Machine Learning

AI and ML are revolutionising the field of cybersecurity by providing advanced capabilities to detect, analyse, and respond to cyber threats in real time. These technologies enable systems to learn automatically from vast amounts of data and identify patterns, anomalies, and potential risks that human operators might find difficult to detect.

AI and ML algorithms can analyse network traffic, user behavior, and system logs to identify malicious activities, such as malware infections, unauthorised access attempts, or abnormal data transfers. This approach allows organisations to respond swiftly and effectively, minimising the impact of cyberattacks.
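As a hedged illustration of the anomaly-detection idea, here is a toy sketch using scikit-learn's IsolationForest on made-up traffic features; a real intrusion-detection system would use far richer data and tuning:

```python
# Illustrative sketch (not a production IDS): unsupervised anomaly detection
# over simple per-connection features, in the spirit of the paragraph above.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Hypothetical features per connection: [bytes sent, duration (s), failed logins]
normal = rng.normal(loc=[500, 2.0, 0], scale=[100, 0.5, 0.2], size=(1000, 3))
suspicious = np.array([[50000, 0.1, 8]])   # abnormal transfer plus failed logins

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)
print(detector.predict(suspicious))        # -1 flags the connection as anomalous
```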

Blockchain

Blockchain technology, originally developed for cryptocurrencies like Bitcoin, offers a decentralised and tamper-resistant method of storing and verifying data. Its inherent properties, such as transparency, immutability, and consensus-based validation, make it highly suitable for enhancing cybersecurity.

In the context of cybersecurity, blockchain can be used to secure critical data, authenticate identities, and establish secure communication channels. By decentralising data storage and ensuring that information cannot be easily altered, blockchain technology adds an extra layer of protection against unauthorised access, data tampering, and insider threats.

Zero Trust architecture

Traditional network security models operate on the assumption that once a user or device gains access to the internal network, they can be trusted. However, with the increasing sophistication of cyber threats, the Zero Trust architecture has gained prominence.

Zero Trust revolves around the concept of "never trust, always verify". Under this model, all users, devices, and network traffic are treated as potentially untrusted and are continuously verified and authenticated before being granted access to critical resources.

Zero Trust employs security measures such as multi-factor authentication, strict access controls, and continuous monitoring to ensure that only authorised entities can access sensitive data and systems. By implementing Zero Trust principles, organisations can mitigate the risk of internal and external attacks, limit lateral movement, and protect their digital assets.
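A toy sketch of the "never trust, always verify" principle: every request is re-verified before access is granted. The token store and role model here are simplified assumptions; production systems would use signed tokens (e.g. JWT), MFA, and device posture checks:

```python
# Every call re-authenticates and re-authorizes, regardless of network location.
VALID_TOKENS = {"token-abc": {"user": "alice", "roles": {"analyst"}}}

def authorize(token: str, required_role: str) -> bool:
    identity = VALID_TOKENS.get(token)            # verify identity on every call
    return identity is not None and required_role in identity["roles"]

def read_sensitive_record(token: str, record_id: int) -> str:
    if not authorize(token, required_role="analyst"):
        raise PermissionError("verification failed: access denied")
    return f"record {record_id}"                  # granted only after verification

print(read_sensitive_record("token-abc", 42))
```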

Staying ahead of cyber threats requires organisations to embrace evolving technologies as integral components of their cybersecurity strategies. These technologies have helped many organisations in the industry stay safe while also lowering their security management expenditures.

As businesses continue to grapple with the growing challenges of cybersecurity, those at the forefront of technological innovation will gain a competitive advantage by safeguarding their operations, reputation, and, most importantly, their invaluable data.

Suraj Tiwari

The writer is Head - Information Security, VFS Global.

Read the original:
Bring on AI and Machine Learning to take on escalating cybercrime threats - Gulf News

Redefining education in the AI era: the rise of generalists – asianews.network

October 16, 2023

KUALA LUMPUR – Since the emergence of ChatGPT, many have been concerned about its potential negative impacts. Debates have sprung up, and tools have been developed to detect if assignments were crafted with the help of ChatGPT.

This concern is understandable. For instance, ChatGPT could easily tackle even the notoriously difficult statistical machine learning course I taught last semester. I tested it with my final exam questions, and it outperformed nearly 90% of my students.

However, I don't think we should prohibit students from using such tools.

Is AI-generated content plagiarism?

That is to be debated. But to start, implementing such a ban is almost impossible due to the nature of machine learning.

Machine learning is fundamentally different from our typical internet searches. It efficiently extracts inherent relationships in data.

When AI generates content, it randomly samples based on these relationships rather than retrieving the best-matching content. Therefore, the output can't strictly be called plagiarism, as it doesn't copy from any particular dataset, just as an impromptu speech might revolve around the same ideas but will never be identical each time.
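A toy numerical sketch of this sampling behaviour: the model draws the next word from a probability distribution (here, a softmax over made-up scores), so repeated runs yield varying output rather than a retrieved copy:

```python
# Sampling from a distribution (generation) vs. retrieving a stored answer.
import numpy as np

rng = np.random.default_rng()
words = ["learning", "models", "data", "patterns"]
logits = np.array([2.0, 1.5, 1.0, 0.5])        # hypothetical next-word scores

probs = np.exp(logits) / np.exp(logits).sum()  # softmax -> probabilities
for _ in range(3):
    print(rng.choice(words, p=probs))          # same distribution, varying output
```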

So, in my classes, I not only don't prevent students from using ChatGPT, I actively encourage them to embrace it. As my colleague put it, it's quite impolite not to.

Redefining the educational model

Should we be worried that students will stop learning?

While these tools will definitely revolutionize our teaching methods, I see it as a positive change. Just as I became reliant on Google Maps, these tools allow me to invest my time and energy in honing other skills.

Undoubtedly, traditional rote-learning methods are most at risk, especially when faced with advanced language models.

While the future of educational approaches remains uncertain, I'm convinced that the new methods will prioritize problem-solving skills over rote memorization.

Typical assignments, like multiple-choice questions, are too easy for these large language models. I prefer assigning research-based projects, which reflect students' genuine capabilities.

While current large-scale language models do have reasoning abilities, they still somewhat lack lateral thinking. If students can guide AI models based on their learning, the quality of the solutions achieved with AI assistance will significantly improve.

In this era, learning how to harness AI tools effectively is essential for everyone.

Leading universities like Tsinghua and Harvard now offer foundational courses for non-computer science majors. These courses teach them how to use tools like ChatGPT, delve into the principles behind them, and how to tweak these models for specific needs.

Even in my recent astrophysics class, most of my students weren't computer science majors, but I still dedicated a session to introduce them to the nuances of large-scale language models.

The rise of the generalists

While machine learning is all the rage and computer science enrollments are skyrocketing, I'm often asked for advice on choosing a major. Working at the intersection of astrophysics and computer science, I have some thoughts to share.

For those genuinely passionate about computing, a computer science degree is great. But understand that the curriculum encompasses much more than just machine learning, including traditional systems design and compilers. As the use of large language models becomes commonplace, interdisciplinary knowledge is crucial. For instance, without knowledge of astrophysics, it becomes challenging to use these models to extract valuable insights in that field.

Interestingly, many leaders in the machine learning community don't come from a computer science background. For instance, some researchers at Anthropic, a main competitor to OpenAI, hail from physics backgrounds. Experts from varied academic fields bring unique perspectives and often introduce innovative viewpoints.

Breaking the alienation of the individual

We are in an era where generalists are emerging as leaders. Only by understanding and integrating specific application scenarios can the true value of these tools be fully realized. And I believe this is the greatest gift machine learning offers.

Looking back at the Industrial Revolution, individuals were often seen as mere cogs in the vast machinery of society, repetitively performing the same tasks. This cog-in-the-machine approach meant anyone could easily be replaced, ensuring continuous societal functioning. However, this often led to dehumanization.

Now, as machine learning capabilities show, repetitive tasks are likely the first to be automated, whether they are low-level or high-end specialized tasks.

Although jobs will undergo major transformations, and there are genuine concerns, those who will excel in this tech-driven era will be the generalists who thrive in multiple domains, rather than isolated specialists.

In this era of rapid change and competition, the accessibility of machine learning (as I've mentioned in previous writings) has made global competition unprecedentedly open.

It's no longer just about a race between superpowers (unlike the nuclear arms race). Only nations that nurture a vast number of interdisciplinary experts will stand out.

So, is the Malaysian educational system ready?

Read more:
Redefining education in the AI era: the rise of generalists - asianews.network

How the Human-Machine Intelligence Partnership Is Evolving – AiThority

AI has enjoyed a long hype cycle, recently reignited by the introduction and rapid adoption of OpenAI's ChatGPT. Companies are now at varying stages of AI adoption given their business goals, resources, access to expertise, and the fact that AI is being embedded in more applications and services. Irrespective of industry, AI depends on a critical mass of quality data. However, the necessary quality depends on the use case. For example, as consumers, we've all been the victims of bad data, as evidenced by marketing promotions we either laugh at or delete.

In the scientific community, such as the pharmaceutical and life sciences industry, bad data can be life-threatening, so data quality must be very high.

Also, unlike many other industries, the data required to discover a novel drug molecule tends to be scarce rather than abundant.

The data scarcity prevalent in the pharmaceutical and life science industries promotes a stronger alliance between humans and machines. Over time, there has been a significant accumulation of scientific data, the understanding of which demands a high level of education. This data accrual has been quite costly, leading to a general reluctance among owners to share the information they have acquired.

The intricate nature of scientific data implies that only scientists within the same field can comprehend the deeper contexts. Therefore, the volume of data available in an appropriate context is typically limited. This scarcity makes it challenging to develop credible AI algorithms in the healthcare industry.

To counteract this data deficit, human experts play an essential role in providing context and supplementary information. This human intervention helps in the co-development of algorithms and the workflows in which these algorithms are accurately utilized.

AI hype cycles have caused fear, uncertainty, and doubt because vendors are underscoring the need for automation in white-collar settings. In the distant past, AI was firmly focused on production-line jobs impacting blue-collar workers. Back then, no one anticipated AI would impact knowledge work, especially because AI capabilities were limited to the technology of the day and in most cases, it was rule-based.

Now, we see pervasive use of AI techniques such as machine learning and deep learning that can analyze massive amounts of data at scale. Instead of following a deterministic set of rules, modern systems are probabilistic, which makes things like prediction, as opposed to just historical data analysis, possible.

In the case of pharmaceuticals and life sciences, it's possible to import tags from research papers, which is helpful, but the context tends not to be stated explicitly, so scientists need to help the AI understand the underlying hypothesis or scientific context. The system then learns by being rewarded for good outcomes and by scientists rejecting the bad ones. Without a human overseeing what AI does, it can drift in a manner that makes it less accurate.

In fact, it can take several weeks or months to create molecules and test them under experimental conditions. If animals are involved, the process could take years.

Some scientists or professionals don't want to share their work with AI, particularly when they are highly educated and experienced. These people know how long research takes and how expensive it can be, and they've grown comfortable with that over time.

Also, the pharmaceutical and life sciences industries are highly regulated, so irrespective of whether AI is used or not, there are certain processes, and certain levels of rigor required just to ensure patient safety.

Interestingly, when well-trained scientists see AI in action, it becomes abundantly clear that it can handle a million or more data points more easily and quickly than a human. Suddenly, it becomes clearer that AI is a valuable tool that can save time and money and enable greater precision.

However, that doesn't mean that scientists trust what they see, especially when it comes to deep learning, which includes large language models such as ChatGPT. The problem with deep learning is that it tends to be opaque: it can take one or more inputs and produce a result, but the AI is unable to explain, in terms understandable to humans, how it arrived at that result.

That's why there's been a loud cry for AI transparency; people want to trust the result. Scientists and auditors demand it.

One of the biggest genetic databases is 23andMe, the gene-testing service that reveals a person's ethnicity. It has also enabled individuals to discover family members they never met, such as a set of twins, one of whom was adopted. The service offers significant entertainment value.

However, from a scientific standpoint, it doesn't offer much.

Without understanding someone's medical history, understanding their genetic composition can only be somewhat helpful. For example, a brother and sister may carry the same gene that is expressed in one and dormant in the other.

The more we know about an individual, the better the chance of choosing the compounds that will work for them, and at what dosage. Today, there's still a lot of trial and error, and doses are standardized. In short, AI will help make personalized medicine possible.

The pharmaceutical and life sciences industries are both highly competitive. About two years ago, I visited Cambridge University and noticed that big pharma companies had sent a researcher or two to learn about Cambridge's experimental automation technology that utilizes AI. Big pharma companies often work with research institutions to learn about scientific discovery and to get the high-quality data they need.

Another example is Recursion Pharmaceuticals, which is automating biology-related processes. They treat cells with candidate molecules, photograph them, and then use AI algorithms to interpret the images. They produce tens of terabytes of image data every day, and the experimental conditions are decided by prediction models. As new data comes in, the system generates new models, and the cycle repeats automatically and continuously.

AI is transforming the ways organizations and industries operate. However, scientific disciplines require a rigorous approach that yields accurate results and provides the transparency scientists and auditors need. Since governance, privacy, and security are not inherently baked into AI, organizations with strict requirements need to be sure that the technology they utilize is both safe and accurate.

Continue reading here:
How the Human-Machine Intelligence Partnership Is Evolving - AiThority

Large-scale genomic analyses with machine learning uncover predictive patterns associated with fungal … – Nature.com

Read more from the original source:
Large-scale genomic analyses with machine learning uncover predictive patterns associated with fungal ... - Nature.com

Techno-plasticity in the Age of Artificial Intelligence – Psychology Today


Human neuroplasticity, the brain's dynamic capability to rewire and adapt, has been a cornerstone of what makes and keeps us human. It grants us the agility to learn new languages, empathize with others, and recover from brain injuries.

But as artificial intelligence (AI) technology advances (or evolves), are we nearing a shift in which machine intelligence mirrors human plasticity? The ground beneath us is undeniably shifting as we face a formidable contender: "techno-plasticity."


While human neuroplasticity focuses on biological adaptations, techno-plasticity describes AI systems that can undergo real-time self-modifications. These are not rigid algorithms; they evolve, adapting to new data environments and situational variables. Let's look at liquid networks and their implications for technology and humanity.

Developed by researchers at MIT, liquid networks represent a fascinating and potentially transformative step in machine learning. Unlike traditional neural networks that are trained and deployed in a relatively static state, liquid networks are designed to continuously adapt their underlying algorithms in response to new data inputs.

This is achieved by allowing the parameters in the neural network's equations to evolve based on a nested set of differential equations.

The inspiration for this technological marvel comes from nature, specifically the microscopic nematode C. elegans, which possesses a mere 302 neurons yet exhibits incredibly complex behaviors. This biological muse has inspired neural networks that adapt to changing data streams and remain highly resilient to noisy or unexpected data.
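A loose, illustrative sketch of the idea (not MIT's actual implementation): the hidden state of a liquid cell follows a differential equation whose dynamics depend on the current input, here integrated with a crude Euler step over made-up parameters:

```python
# Toy "liquid" cell: hidden-state dynamics change with the streaming input.
import numpy as np

rng = np.random.default_rng(0)
W, U, tau = rng.normal(size=(4, 1)), rng.normal(size=(4, 4)) * 0.1, 1.0

def step(h, x, dt=0.1):
    gate = np.tanh(W @ x + U @ h)        # input-dependent dynamics
    dh = -h / tau + gate * (1.0 - h)     # simplified liquid ODE right-hand side
    return h + dt * dh                   # Euler integration step

h = np.zeros((4, 1))
for t in range(5):
    x = np.array([[np.sin(0.5 * t)]])    # streaming input
    h = step(h, x)
print(h.ravel())
```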


The versatility of liquid networks is already apparent in various domains. For example, a sudden downpour that obscures camera vision is no longer an insurmountable obstacle in autonomous vehicles. These liquid networks offer a robust response to unanticipated changes.

Furthermore, they have excelled in time-series prediction tasks ranging from atmospheric chemistry to robotics, outperforming state-of-the-art algorithms and doing so with less computational overhead.

And there's another edge: the architecture of liquid networks makes them more interpretable, allowing greater insight into their decision-making processes. This addresses a longstanding issue in AI, the black-box problem, making these powerful networks more transparent and accountable.

While the advent of liquid networks might challenge human cognition, looking at the bigger picture is crucial: the symbiotic potential between neuroplasticity and techno-plasticity. Each brings to the table a unique set of capabilities and limitations. As AI systems like liquid networks become more plastic, humans, too, will find new avenues for cognitive expansion facilitated by AI's evolving capabilities.


Our cognitive landscape is undergoing a seismic shift. We are entering an era in which neither humans nor AI monopolize adaptability or learning. Instead, we find ourselves in a dynamic equilibrium in which both forms of intelligence, neuroplastic and techno-plastic, continuously evolve.

As liquid networks make their mark, raising the bar for what machine learning algorithms can achieve, we must adapt and evolve. It is not a competition but a journey of co-evolution, one in which the future of artificial and natural intelligence is continually rewritten.

The advent of techno-plasticity, particularly as manifested through liquid networks, could be a powerful catalyst for human transformation and evolution. This new frontier in AI capability may spur a symbiotic relationship in which each form of intelligence, human and artificial, compels the other to adapt, innovate, and transcend current limitations.


As AI becomes more adept at real-time learning and adaptation, it challenges us to harness this technological prowess for societal advancement and look inward, reconsidering the scope and potential of our neuroplasticity. In essence, techno-plasticity could be the stimulus that drives us to explore uncharted territories of human cognition, creativity, and problem-solving, ultimately reshaping our understanding of what it means to be human in an age of advanced artificial intelligence.


This new narrative calls for adaptability and a deepened understanding of the complexities. We stand on the cusp of a revolution that could usher in an era of unprecedented cognitive collaboration and exploration.

See original here:
Techno-plasticity in the Age of Artificial Intelligence - Psychology Today

FOXO Technologies Announces Issue Notification from USPTO for a Patent Leveraging Machine Learning Approaches to Enable the Commercialization of…

Builds on Notices of Allowance Previously Issued by the USPTO for Two Related Patents Leveraging the Same Approaches

MINNEAPOLIS, October 13, 2023--(BUSINESS WIRE)--FOXO Technologies Inc. (NYSE American: FOXO) ("FOXO" or the "Company"), a leader in the field of commercializing epigenetic biomarker technology, today announced that the United States Patent & Trademark Office (USPTO) has provided an Issue Notification for a key patent utilizing a machine learning model trained to determine a biochemical state and/or medical condition using DNA epigenetic data to enable the commercialization of epigenetic biomarkers. Previously, the USPTO had issued Notices of Allowance to the Company for two related patents and the Company awaits Issue Notification for the second allowed patent.

The first patent, for which the Company has received an Issue Notification, aids in practical applications of the technology that include generating epigenetic biomarkers. On occasion, epigenetic data may be missing or unreliable because a specific DNA site may not have been assayed or was unreliably measured. The patent allows the use of machine learning estimators to "fill in" the missing or unreliable epigenetic values at specific loci.
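To illustrate the general idea of ML-based imputation (this is a generic sketch using scikit-learn's KNNImputer with toy numbers, not FOXO's patented method), a missing methylation value can be estimated from similar individuals:

```python
# Estimate a missing assay value at one locus from the most similar rows.
import numpy as np
from sklearn.impute import KNNImputer

# Rows = individuals, columns = methylation beta values at specific loci (toy data).
beta = np.array([
    [0.80, 0.10, 0.55],
    [0.78, np.nan, 0.52],   # missing or unreliable assay at locus 2
    [0.20, 0.60, 0.90],
])
imputer = KNNImputer(n_neighbors=2)
print(imputer.fit_transform(beta))  # missing entry "filled in" from neighbors
```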

The second patent, for which the Company received a Notice of Allowance, leverages machine learning to estimate aspects of an individual's health, such as disease states, biomarker levels, drug use, health histories, and factors used to underwrite mortality risk. Commercial applications for this patent may include a potential AI platform for the delivery of health and well-being data-driven insights to individuals, healthcare professionals and third-party service providers, life insurance underwriting, clinical testing, and consumer health.

To support these patents, the Company has generated epigenetic data for over 13,000 individuals through internally sponsored research and external research collaborations. Pairing these data with broad phenotypic information is expected to help drive product development, as demonstrated in the Company's patent claims.


Mark White, Interim CEO of FOXO Technologies, stated, "As a pioneer in epigenetic biomarker discovery and commercialization, FOXO Technologies is dedicated to harnessing the power of epigenetics and artificial intelligence to provide data-driven insights that foster optimal health and longevity for individuals and organizations alike. With a strong commitment to improving the quality of life and promoting well-being, FOXO Technologies stands at the forefront of innovation in the biotechnology industry, with plans to leverage AI technology in order to expand into additional commercial markets."

"The newly granted patent underscores FOXO Technologies' position as a leader in the convergence of biotechnology and artificial intelligence. It represents a significant milestone in the Company's mission to extend and enhance human life through advanced diagnostics, therapeutic solutions, and lifestyle modifications. Moreover, by combining the fields of epigenetics and artificial intelligence, FOXO Technologies' pioneering approach sets a new standard for personalized healthcare. This patent represents a significant step forward in developing innovative tools that empower individuals and healthcare professionals to make informed decisions about health and well-being."

Nichole Rigby, Director of Bioinformatics & Data Science at FOXO Technologies, further noted, "The granting of these patents reaffirms our commitment to pushing the boundaries to bring together biotechnology and AI. We eagerly anticipate the transformative impact of this technology on health solutions, paving the way for healthier and longer lives for everyone."

About FOXO Technologies Inc. ("FOXO")

FOXO, a technology platform company, is a leader in epigenetic biomarker discovery and commercialization focused on commercializing longevity science through products and services that serve multiple industries. FOXO's epigenetic technology applies AI to DNA methylation to identify molecular biomarkers of human health and aging. For more information about FOXO, visit http://www.foxotechnologies.com. For investor information and updates, visit https://foxotechnologies.com/investors/.

Forward-Looking Statements

This press release contains certain forward-looking statements for purposes of the "safe harbor" provisions under the United States Private Securities Litigation Reform Act of 1995. Any statements other than statements of historical fact contained herein, including statements as to future results of operations and financial position, planned products and services, business strategy and plans, objectives of management for future operations of FOXO, market size and growth opportunities, competitive position and technological and market trends, are forward-looking statements. Such forward-looking statements include, but are not limited to, expectations, hopes, beliefs, intentions, plans, prospects, financial results or strategies regarding FOXO; the future financial condition and performance of FOXO and the products and markets and expected future performance and market opportunities of FOXO. These forward-looking statements generally are identified by the words "anticipate," "believe," "could," "expect," "estimate," "future," "intend," "strategy," "may," "might," "opportunity," "plan," "project," "possible," "potential," "predict," "scales," "representative of," "valuation," "should," "will," "would," "will be," "will continue," "will likely result," and similar expressions, but the absence of these words does not mean that a statement is not forward-looking. Forward-looking statements are predictions, projections and other statements about future events that are based on current expectations and assumptions and, as a result, are subject to risks and uncertainties. Many factors could cause actual future events to differ materially from the forward-looking statements in this press release, including but not limited to: (i) the risk of changes in the competitive and highly regulated industries in which FOXO operates, variations in operating performance across competitors or changes in laws and regulations affecting FOXO's business; (ii) the ability to implement FOXO's business plans, forecasts, and other expectations; (iii) the ability to obtain financing if needed; (iv) the ability to maintain its NYSE American listing; (v) the risk that FOXO has a history of losses and may not achieve or maintain profitability in the future; (vi) the potential inability of FOXO to establish or maintain relationships required to advance its goals or to achieve its commercialization and development plans; (vii) the enforceability of FOXO's intellectual property, including its patents, and the potential infringement on the intellectual property rights of others; and (viii) the risk of downturns and a changing regulatory landscape in the highly competitive biotechnology industry or in the markets or industries in which FOXO's prospective customers operate. The foregoing list of factors is not exhaustive. Readers should carefully consider the foregoing factors and the other risks and uncertainties discussed in FOXO's most recent reports on Forms 10-K and 10-Q, particularly the "Risk Factors" sections of those reports, and in other documents FOXO has filed, or will file, with the SEC. These filings identify and address other important risks and uncertainties that could cause actual events and results to differ materially from those contained in the forward-looking statements. Forward-looking statements speak only as of the date they are made. Readers are cautioned not to put undue reliance on forward-looking statements, and FOXO assumes no obligation and does not intend to update or revise these forward-looking statements, whether as a result of new information, future events, or otherwise.

View source version on businesswire.com: https://www.businesswire.com/news/home/20231013459322/en/

Contacts

Crescendo Communications, LLC, (212) 671-1020, foxo@crescendo-ir.com

Visit link:
FOXO Technologies Announces Issue Notification from USPTO for a Patent Leveraging Machine Learning Approaches to Enable the Commercialization of...

Leveraging machine learning to rapidly create clinical AI algorithms – HealthExec

They would test the algorithm further with refinements and give the dieticians 10 more patients to look at the next week. This process helped boost confidence in the algorithm to a point where it is now actually placing an order for consults in the electronic medical record (EMR).

"We're finding six to 10 patients a week who have undiagnosed malnutrition. Now, if you think about that from a family member of a child, that's a huge difference. And those things are really impactful in terms of practical AI, and that's kind of spawned other ideas, but that's been kind of one of our great use cases," Higginson explained.

Five years ago, Phoenix Children's Hospital embarked on a journey to harness the power of AI in solving clinical challenges. The traditional approach of relying on biostatisticians to develop algorithms proved to be time-consuming and often inefficient. He said the team might work on an algorithm for several months and find it does not work well in the end. So Higginson's team opted for a different path, utilizing automated machine learning. This approach involves providing a dataset to an AI system that autonomously creates the algorithm, allowing the hospital to start using it within a matter of hours, rather than weeks.

One of the key lessons learned from using AI in healthcare is that getting it right on the first attempt is a rare occurrence. Thus, an iterative approach is essential to fine-tune algorithms over time.

While there are now many vendors selling commercialized AI algorithms, Higginson said many are too generalized for the needs of his hospital, which is another reason why the team has decided to develop its own, highly customized algorithms.

"One of the things I've learned with AI over the years is it doesn't translate very well. So I'm always very skeptical of vendors that tell me, 'I've got an AI model that's going to work great,' because geographic factors are a huge influence as well. There are some clinical conditions which obviously translate, but I think we've seen some recent examples where models are trained in one state, lifted somewhere else and don't work," he said.

For example, he said they created AI models for operational matters, such as donor management and employee management, which require local, customized factors that are completely unique. "Understanding how far is too far for an employee to travel into work all depends on the road density and where they are traveling from. I think the concepts and the ideas are transferable. But I would be a little skeptical of taking that black box and just lifting it somewhere else," Higginson explained.

Pediatric healthcare presents unique challenges that often require tailored solutions. At Phoenix Children's Hospital, they've developed their own patient portal, recognizing that pediatric patients and their families have distinct relationships with healthcare providers. This patient portal addresses the complex dynamics of patient relationships within families and guardianship scenarios, including who has access in divorce or foster home situations, and the ages at which patient information must be shared directly with the patient.

Moreover, the hospital has adapted to the post-pandemic landscape by embracing telehealth services, which have been particularly well-received by pediatric patients and their caregivers. The implementation of hybrid telehealth, where patients and their caregivers join virtual consultations, has transformed the healthcare experience for families, Higginson said.

Higginson encourages a more general application of AI in healthcare, emphasizing its adaptability to a wide range of scenarios. He used the example of AI helping predict no-show rates to better staff the emergency room. Another example: AI can be used to sift through patient emails sent to doctors via the patient portal and determine the most appropriate recipient within the healthcare team. This could streamline communication and enhance efficiency, so doctors can practice at the top of their license rather than spend a large amount of time sorting basic email requests. Higginson said doctors tell him over and over that 80% of these messages are about scheduling, medications and billing, which have nothing to do with the physician.

"So how great would it be to take that message that came in and run it through a GPT prompt and ask it, which help desk should this go to?" He said.

Phoenix Children's Hospital's innovative approach to AI demonstrates the immense potential for the technology in healthcare. By adopting a strategic and iterative approach, they have successfully developed clinical algorithms that not only improve patient care, but also enhance the overall healthcare experience for pediatric patients and their families.


Link:
Leveraging machine learning to rapidly create clinical AI algorithms - HealthExec