
Machine learning based prediction for oncologic outcomes of renal … – Nature.com

Using the original KORCC database [9], two recent studies have been reported [28,29]. First, Byun et al. [28] assessed the prognosis of non-metastatic clear cell RCC using a deep learning-based survival prediction model. Harrell's C-indices of DeepSurv for recurrence and cancer-specific survival were 0.802 and 0.834, respectively. More recently, Kim et al. [29] developed an ML-based algorithm predicting the probability of recurrence at 5 and 10 years after surgery. The highest area under the receiver operating characteristic curve (AUROC) was obtained from the naïve Bayes (NB) model, with values of 0.836 and 0.784 at 5 and 10 years, respectively.

In the current study, we used the updated KORCC database, which now contains clinical data on more than 10,000 patients. To the best of our knowledge, this is the largest dataset of an Asian population with RCC. With this dataset, we could develop considerably more accurate models, with high accuracy (range 0.77-0.94) and F1-scores (range 0.77-0.97; Table 3). These accuracy values are high compared with previous models, including the Kattan nomogram, the Leibovich model, and the GRANT score, which were around 0.7 [5,6,7,8]. Among them, the Kattan nomogram was developed using a cohort of 601 patients with clinically localized RCC, and its overall C-index was 74% [5]. In a subsequent analysis of the same patient group using additional prognostic variables, including tumor necrosis, vascular invasion, and tumor grade, the C-index reached 82% [30]. Even so, their predictive accuracy did not match that of our models.

In addition, we could include short-term (3-year) recurrence and survival data, which should be helpful for developing a more sophisticated surveillance strategy. Another strength of the current study is that most of the algorithms introduced so far were applied [18,19,20,21,22,23,24,25,26], showing relatively consistent performance with high accuracy. Finally, we performed an external validation using a separate (SNUBH) cohort and achieved well-maintained high accuracy and F1-scores for both recurrence and survival (Fig. 2). External validation of prediction models is essential, especially when using a multi-institutional dataset, to identify and correct for differences between institutions.

AUROC has mostly been used as the standard metric for evaluating the performance of prediction models [5,6,7,8,29]. However, AUROC weighs changes in sensitivity and specificity equally, without considering clinically meaningful information [6]. In addition, AUROC is limited in its ability to compare the performance of different ML models [31]. We therefore adopted accuracy and F1-score instead of AUROC as evaluation metrics. The F1-score, together with SMOTE [17], serves as a better accuracy metric for handling imbalanced data [27].
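A minimal, purely illustrative sketch (invented counts, not KORCC data) of why raw accuracy can mislead on imbalanced recurrence data while the F1-score does not:

```python
def f1_score(tp, fp, fn):
    """Harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Imbalanced toy cohort: 1,000 patients, only 50 recurrences.
# A classifier that predicts "no recurrence" for everyone:
tp, fp, fn, tn = 0, 0, 50, 950
accuracy = (tp + tn) / 1000      # 0.95 -- looks excellent
# F1 collapses here (recall is 0), exposing the failure that accuracy hides.

# A classifier that actually finds 40 of the 50 recurrences:
tp, fp, fn, tn = 40, 30, 10, 920
accuracy2 = (tp + tn) / 1000     # 0.96 -- barely better than the trivial model
f1 = f1_score(tp, fp, fn)        # ~0.67 -- reflects the real gain
print(accuracy, accuracy2, round(f1, 2))
```

The trivial model's accuracy is nearly indistinguishable from the useful model's, but only the F1-score separates them, which is why it suits outcomes like 10-year recurrence where one class dominates.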

RCC is not a single disease but multiple histologically defined cancers with different genetic characteristics, clinical courses, and therapeutic responses [32]. With regard to metastatic RCC, the International Metastatic Renal Cell Carcinoma Database Consortium and the Memorial Sloan Kettering Cancer Center risk models have been extensively validated and are widely used to predict survival outcomes of patients receiving systemic therapy [33,34]. However, both risk models were developed without considering histologic subtypes, so their predictive performance was presumed to be driven largely by clear cell RCC, the predominant histologic subtype. Interestingly, in our previous study using the Korean metastatic RCC registry, we found that both risk models reliably predicted progression and survival even in non-clear cell RCC [35]. In the current study, after performing a subgroup analysis according to histologic type (clear vs. non-clear cell RCC), we again found very high accuracy and F1-scores across all tested metrics (Supplemental Tables 3 and 4). Taken together, these findings suggest that the prognostic difference between clear and non-clear cell RCC is offset in both metastatic and non-metastatic disease. Further effort is needed to develop and validate a sophisticated prediction model for the individual subtypes of non-clear cell RCC.

The current study had several limitations. First, because of the paucity of cases with 10 years of follow-up, the data imbalance problem could not be avoided; consequently, the 10-year recurrence-free rate was only 45.3%. For the majority of patients with no evidence of disease at five years, further long-term follow-up had not been performed. However, we adopted both SMOTE and the F1-score to address these imbalanced-data problems. The retrospective design of this study was another inherent limitation. In addition, the prediction model was developed using a Korean population only, so validation with data from other countries and ethnic groups is needed. With regard to non-clear cell RCC, the study cohort was still relatively small owing to the rarity of the disease, so we could not avoid pooling the subtypes and analyzing them together; further studies are needed to develop and validate a prediction model for each subtype. The lack of more rigorous validation techniques, such as cross-validation and bootstrapping, is another limitation of the current study. Finally, web-based deployment of the model should follow to improve its accessibility and transportability.


Students Use Machine Learning in Lesson Designed to Reveal … – NC State News

In a new study, North Carolina State University researchers had 28 high school students create their own machine-learning artificial intelligence (AI) models for analyzing data. The goals of the project were to help students explore the challenges, limitations and promise of AI, and to ensure a future workforce is prepared to make use of AI tools.

The study was conducted in conjunction with a high school journalism class in the Northeast. Since then, researchers have expanded the program to high school classrooms in multiple states, including North Carolina. NC State researchers are looking to partner with additional schools to bring the curriculum into more classrooms.

"We want students, from a very young age, to open up that black box so they aren't afraid of AI," said the study's lead author Shiyan Jiang, assistant professor of learning design and technology at NC State. "We want students to know the potential and challenges of AI, so that they think about how they, the next generation, can respond to the evolving role of AI in society. We want to prepare students for the future workforce."

For the study, researchers developed a computer program called StoryQ that allows students to build their own machine-learning models. The researchers then hosted a teacher workshop on the machine-learning curriculum and technology, held in one-and-a-half-hour sessions each week for a month. For teachers who signed up to participate further, the researchers gave another recap of the curriculum and worked out logistics.

"We created the StoryQ technology to allow students in high school or undergraduate classrooms to build what we call text classification models," Jiang said. "We wanted to lower the barriers so students can really know what's going on in machine learning, instead of struggling with the coding. So we created StoryQ, a tool that allows students to understand the nuances of building machine-learning and text classification models."

A teacher who decided to participate led a journalism class through a 15-day lesson in which students used StoryQ to evaluate a series of Yelp reviews of ice cream stores. Students developed models to predict whether reviews were positive or negative based on their language.

"The teacher saw the relevance of the program to journalism," Jiang said. "This was a very diverse class with many students who are under-represented in STEM and in computing. Overall, we found students enjoyed the lessons a lot and had great discussions about the use and mechanism of machine learning."

Researchers saw that students made hypotheses about specific words in the Yelp reviews that they thought would predict whether a review was positive or negative. For example, they expected reviews containing the word "like" to be positive. The teacher then guided the students to analyze whether their models correctly classified reviews. For example, a student who used the word "like" to predict reviews found that more than half of the reviews containing the word were actually negative. Researchers said students then used trial and error to improve the accuracy of their models.
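The hypothesis-testing loop described above can be sketched as a tiny word-based classifier check (the reviews below are invented for illustration; StoryQ itself is a no-code tool, not this script):

```python
# Toy test of the students' hypothesis that reviews containing "like"
# are positive (invented reviews, not the actual Yelp data).
reviews = [
    ("I like the vanilla, great service", "positive"),
    ("Would not go back, felt like a ripoff", "negative"),
    ("Tastes like freezer burn", "negative"),
    ("Nothing here is like the photos", "negative"),
    ("Best sundae in town", "positive"),
]

# Predict "positive" whenever the review contains the word "like",
# then check how often that prediction is right.
hits = [(label == "positive") for text, label in reviews if "like" in text]
accuracy = sum(hits) / len(hits)
print(f"'like' predicts positive correctly {accuracy:.0%} of the time")
```

Running a check like this on real reviews is what led the student to discover that "like" often appears in comparisons ("tastes like...") inside negative reviews, prompting the trial-and-error refinement the researchers observed.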

"Students learned how these models make decisions, the role that humans can play in creating these technologies, and the kinds of perspectives that can be brought in when they create AI technology," Jiang said.

From their discussions, researchers found that students had mixed reactions to AI technologies. Students were deeply concerned, for example, about the potential to use AI to automate processes for selecting students or candidates for opportunities like scholarships or programs.

For future classes, researchers created a shorter, five-hour program. They've launched the program in two high schools in North Carolina, as well as schools in Georgia, Maryland and Massachusetts. In the next phase of their research, they are looking to study how teachers across disciplines collaborate to launch an AI-focused program and create a community of AI learning.

"We want to expand the implementation in North Carolina," Jiang said. "If there are any schools interested, we are always ready to bring this program to a school. Since we know teachers are super busy, we're offering a shorter professional development course, and we also provide a stipend for teachers. We will go into the classroom to teach if needed, or demonstrate how we would teach the curriculum so teachers can replicate, adapt, and revise it. We will support teachers in all the ways we can."

The study, "High school students' data modeling practices and processes: From modeling unstructured data to evaluating automated decisions," was published online March 13 in the journal Learning, Media and Technology. Co-authors included Hengtao Tang, Cansu Tatar, Carolyn P. Rosé and Jie Chao. The work was supported by the National Science Foundation under grant number 1949110.

-oleniacz-

Note to Editors: The study abstract follows.

High school students' data modeling practices and processes: From modeling unstructured data to evaluating automated decisions

Authors: Shiyan Jiang, Hengtao Tang, Cansu Tatar, Carolyn P. Rosé and Jie Chao.

Published: March 13, 2023, Learning, Media and Technology

DOI: 10.1080/17439884.2023.2189735

Abstract: It's critical to foster artificial intelligence (AI) literacy for high school students, the first generation to grow up surrounded by AI, so they understand the working mechanisms of data-driven AI technologies and can critically evaluate automated decisions from predictive models. While efforts have been made to engage youth in understanding AI through developing machine learning models, few have provided in-depth insights into the nuanced learning processes. In this study, we examined high school students' data modeling practices and processes. Twenty-eight students developed machine learning models with text data for classifying negative and positive reviews of ice cream stores. We identified nine data modeling practices that describe students' processes of model exploration, development, and testing, and two themes about evaluating automated decisions from data technologies. The results provide implications for designing accessible data modeling experiences for students to understand data justice as well as the role and responsibility of data modelers in creating AI technologies.


Multimodal Deep Learning – A Fusion of Multiple Modalities – NASSCOM Community

Multimodal Deep Learning and its Applications

As humans, we perceive the world through our senses. We recognize objects and events through vision, sound, touch, and smell, and our processing of this sensory information is multimodal. Modality refers to the way in which something is recognized, experienced, and recorded. Multimodal deep learning is an extensive research branch of deep learning that works on the fusion of multimodal data.

The human brain consists of billions of interconnected neurons that process multiple modalities from the external world, whether recognizing a person's body movements, their tone of voice, or even mimicking sounds. For AI to approximate human intelligence, we need a reasonable fusion of multimodal data, and this is done through multimodal deep learning.

Multimodal machine learning is the development of computer algorithms that learn and make predictions using multimodal datasets.

Multimodal deep learning is a subset of machine learning. With this technology, AI models are trained to identify relationships between multiple modalities, such as images, videos, and text, and to provide accurate predictions. By identifying relevant links between datasets, deep learning models can infer, for example, the character of a scene or a person's emotional state.

Unimodal models, which interpret only a single type of data, have proven effective in computer vision and natural language processing. But unimodal models have limited capabilities; in certain tasks, such as recognizing humor, sarcasm, and hate speech, these models fail. Multimodal learning models, by contrast, can be viewed as a combination of unimodal models.

Multimodal deep learning typically includes modalities such as visual, audio, and textual datasets; 3D visual and LiDAR data are less commonly used modalities.

Multimodal Learning models work on the fusion of multiple unimodal neural networks.

First, unimodal neural networks process and encode each data type separately; the encoded representations are then extracted and fused. Multimodal data fusion is a key step and can be carried out using multiple fusion techniques. Finally, using the fused representation, the network recognizes and predicts the outcome for the given input.

For example, any video carries two unimodal streams: visual data and audio data. Keeping both unimodal datasets synchronized allows the two models to operate simultaneously.

Fusing multimodal datasets improves the accuracy and robustness of Deep learning models, enhancing their performance in real-time scenarios.
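As a rough sketch of the pipeline described above (the encoders, shapes, and weights here are hypothetical stand-ins, not any production architecture), late fusion by concatenation might look like this:

```python
import numpy as np

rng = np.random.default_rng(0)
W_visual = rng.normal(size=(8, 4))   # stand-in for a trained visual encoder
W_audio = rng.normal(size=(6, 4))    # stand-in for a trained audio encoder
W_head = rng.normal(size=(8, 2))     # shared classifier head over fused features

def encode_visual(frame):
    # Unimodal network 1: project and squash visual features.
    return np.tanh(frame @ W_visual)

def encode_audio(clip):
    # Unimodal network 2: project and squash audio features.
    return np.tanh(clip @ W_audio)

def fuse_and_classify(frame, clip):
    # Encode each modality separately, fuse by concatenation,
    # then classify the fused representation.
    fused = np.concatenate([encode_visual(frame), encode_audio(clip)])
    return int(np.argmax(fused @ W_head))

frame = rng.normal(size=8)           # fake video-frame features
clip = rng.normal(size=6)            # fake audio features
print("predicted class:", fuse_and_classify(frame, clip))
```

Concatenation is only the simplest fusion technique; attention-based or weighted fusion schemes follow the same encode-then-fuse pattern.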

Multimodal deep learning has potential applications across computer vision and related fields. Here are some of its applications:

Research into reducing human effort and developing machines that match human intelligence is enormous. It requires multimodal datasets that can be combined using machine learning and deep learning models, paving the way for more advanced AI tools.

The recent surge in the popularity of AI tools has brought more additional investments in Artificial Intelligence and Machine Learning technology. This is a great time to grab job opportunities by learning and upskilling yourself in Artificial Intelligence and Machine Learning.


Amazon Teams Up With UT To Establish New Science Hub – UT News – The University of Texas at Austin

AUSTIN, Texas The University of Texas at Austin and Amazon are launching a science and engineering research partnership to enhance understanding in a variety of areas, including video streaming, search and information retrieval and robotics.

The UT Austin-Amazon Science Hub is the sixth such alliance between the tech company and a leading university. It aims to advance research that prompts new discoveries and addresses significant challenges while creating solutions that benefit society. This will be achieved by fostering collaboration among faculty members and students along with the development of a diverse and sustainable pipeline of research talent.

"We are striving to establish even more collaborations with leading companies and organizations in order to bring together more talented people, produce higher-impact research, and help our students reach their greatest ambitions. The launch of the new hub with Amazon is the latest success story in this effort," said UT Austin President Jay Hartzell. "I am eager to see the discoveries that our researchers and students will create from this collaboration, and how those discoveries will change the world."

The hub will be hosted in UT Austin's Cockrell School of Engineering but will engage researchers across a variety of disciplines.

As part of the collaboration, Amazon will provide funding for research projects, doctoral graduate student fellowships, and community-building events designed to diversify and increase cross-disciplinary innovation.

"Amazon is thrilled to establish a university hub at UT Austin," said BA Winston, vice president of technology at Prime Video. "For years, our top scientists have been a resource to UT Austin graduate students, collaborating on topics such as developing objective machine learning models that predict perceptual video quality to drive smart compression, and multimodal AI models that help ensure the highest-quality media playback experience at scale."

The hub builds on an existing partnership between the two organizations via the Amazon Scholars program. Researchers from the Cockrell School, College of Natural Sciences, McCombs School of Business and College of Liberal Arts work with Amazon through the program.

"This Science Hub will strengthen the partnership between UT Austin and Amazon by leveraging our collective strengths and creating opportunities for our faculty and students and leaders at Amazon to work together to accelerate progress in the areas of computer vision, machine learning, AI and robotics," said Roger Bonnecaze, dean of the Cockrell School of Engineering.

Research into visual neuroscience, streaming and social media is one of the reasons UT was an attractive site for the new hub. Al Bovik, director of the new science hub and a professor in the Chandra Family Department of Electrical and Computer Engineering, has helped guarantee the quality and reliability of streaming video and social media worldwide through his research. Joining him in leading the hub is an advisory board that includes personnel from UT Austin and Amazon:

"UT Austin has built an impressive program in robotics with exceptional faculty and students," said Ken Washington, vice president of Amazon Consumer Robotics. "The new hub will allow us to collaborate even more closely with them in robotics and related disciplines, so I'm very optimistic about our growing partnership."


New study shows the potential of machine learning in the early … – Swansea University

A study by Swansea University has revealed how machine learning can help with the early detection of ankylosing spondylitis (AS), a form of inflammatory arthritis, and revolutionise how people are detected and diagnosed by their GPs.

Published in the open-access journal PLOS ONE, the study, funded by UCB Pharma and Health and Care Research Wales, has been carried out by data analysts and researchers from the National Centre for Population Health & Wellbeing Research (NCPHWR).

The team used machine learning methods to develop a profile of the characteristics of people likely to be diagnosed with AS, the second most common cause of inflammatory arthritis.

Machine learning, a type of artificial intelligence, is a method of data analysis that automates model building to improve performance and accuracy. Its algorithms build a model based on sample data to make predictions or decisions without being explicitly programmed to do so.

Using the Secure Anonymised Information Linkage (SAIL) Databank based at Swansea University Medical School, a national data repository allowing anonymised person-based data linkage across datasets, patients with AS were identified and matched with patients who had no record of an AS diagnosis.

The data were analysed separately for men and women, and a model was developed using feature/variable selection and principal component analysis to build decision trees.
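As an illustration only (synthetic data; the study's actual SAIL features and tree construction are not reproduced here), the dimensionality-reduction step that precedes building decision trees can be sketched with a plain-numpy principal component analysis:

```python
import numpy as np

# Synthetic stand-in for a patient-by-feature matrix.
rng = np.random.default_rng(42)
X = rng.normal(size=(200, 5))                             # 200 patients x 5 features
X[:, 1] = X[:, 0] * 2 + rng.normal(scale=0.1, size=200)   # one correlated feature

# PCA: centre the data, then take eigenvectors of the covariance matrix.
Xc = X - X.mean(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
order = np.argsort(eigvals)[::-1]        # sort components by explained variance
components = eigvecs[:, order[:2]]       # keep the top 2 principal components

X_reduced = Xc @ components              # (200, 2) inputs for a decision tree
explained = eigvals[order[:2]].sum() / eigvals.sum()
print(f"top 2 components explain {explained:.0%} of the variance")
```

Because the correlated features collapse onto a shared component, a small number of components captures most of the variance, giving the decision trees a compact, less redundant input space.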

The findings revealed:

Dr Jonathan Kennedy, Data Lab Manager at NCPHWR and study lead, said: "Our study indicates the enormous potential machine learning has to help identify people with AS and to better understand their diagnostic journeys through the health system.

"Early detection and diagnosis are crucial to secure the best outcomes for patients. Machine learning can help with this. In addition, it can empower GPs helping them detect and refer patients more effectively and efficiently.

"However, machine learning is in the early stages of implementation. To develop this, we need more detailed data to improve prediction and clinical utility."

Professor Ernest Choy, Researcher at NCPHWR and Head of Rheumatology and Translational Research at Cardiff University, added: "On average, it takes eight years for patients with AS to go from having symptoms to receiving a diagnosis and getting treatment. Machine learning may provide a useful tool to reduce this delay."

Professor Kieran Walshe, Director of Health and Care Research Wales, added: "It's fantastic to see the cutting-edge role that machine learning can play in the early identification of patients with health conditions such as AS, and the work being undertaken at the National Centre for Population Health and Wellbeing Research.

"Though it is in its early stages, machine learning clearly has the potential to transform the way that researchers and clinicians approach the diagnostic journey, bringing benefits to patients and their future health outcomes."

Read the full publication in the PLOS ONE journal.


What Is Few Shot Learning? (Definition, Applications) – Built In

Few-shot learning is a subfield of machine learning and deep learning that aims to teach AI models to learn from only a small number of labeled training examples. The goal of few-shot learning is to enable models to generalize to new, unseen data samples based on the few samples they are given during training.

In general, few-shot learning involves training a model on a set of tasks, each of which contains a small number of labeled samples. The model learns to recognize patterns in the data and to transfer that knowledge to new tasks.

One challenge of traditional machine learning is that training models requires large amounts of labeled training data. Training on a large data set allows machine learning models to generalize to new, unseen data samples. However, in many real-world scenarios, obtaining a large amount of labeled data can be difficult, expensive, time-consuming or all of the above. This is where few-shot learning comes into play: it enables machine learning models to learn from only a few labeled data samples.


One reason few-shot learning is important is because it makes developing machine learning models in real-world settings feasible. In many real-world scenarios, it can be challenging to obtain a large data set we can use to train a machine learning model. Learning on a smaller training data set can significantly reduce the cost and effort required to train machine learning models. Few-shot learning makes this possible because the technique enables models to learn from only a small amount of data.

Few-shot learning can also enable the development of more flexible and adaptive machine learning systems. Traditional machine learning algorithms are typically designed to perform well on specific tasks and are trained on huge data sets with a large number of labeled examples. This means that algorithms may not generalize well to new, unseen data or perform well on tasks that are significantly different from the ones on which they were trained.

Few-shot learning solves this challenge by enabling machine learning models to learn how to learn and adapt quickly to new tasks based on a small number of labeled examples. As a result, the models become more flexible and adaptable.

Few-shot learning has many potential applications in areas such as computer vision, natural language processing (NLP) and robotics. For example, when we use few-shot learning in robotics, robots can quickly learn new tasks based on just a few examples. In natural language processing, language models can better learn new languages or dialects with minimal training data.


Few-shot learning has become a promising approach for solving problems where data is limited. Here are three of the most promising approaches for few-shot learning.

Meta-learning, also known as learning to learn, involves training a model to learn the underlying structure (or meta-knowledge) of a task. Meta-learning has shown promising results for few-shot learning tasks, where the model is trained on a set of tasks and learns to generalize to new tasks from just a few data samples. During the meta-learning process, we can train the model using meta-learning algorithms such as model-agnostic meta-learning (MAML) or by using prototypical networks.
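To make the prototypical-network idea concrete, here is a toy numpy sketch (an identity "embedding" and synthetic clusters stand in for a trained embedding network): each class prototype is the mean of its few support examples, and a query is assigned to the nearest prototype.

```python
import numpy as np

rng = np.random.default_rng(1)

def prototypes(support):
    """support: {label: (n_shots, dim) array} -> {label: mean embedding}."""
    return {label: shots.mean(axis=0) for label, shots in support.items()}

def classify(query, protos):
    # Nearest-prototype classification by Euclidean distance.
    return min(protos, key=lambda label: np.linalg.norm(query - protos[label]))

# A 2-way, 3-shot episode: two classes, three labeled examples each.
support = {
    "cat": rng.normal(loc=0.0, size=(3, 4)),
    "dog": rng.normal(loc=3.0, size=(3, 4)),
}
protos = prototypes(support)
query = np.full(4, 3.0)              # a query lying in the "dog" cluster
print("predicted:", classify(query, protos))
```

In a real prototypical network the raw inputs would first pass through a learned embedding network, but the episode structure and nearest-prototype decision rule are exactly as sketched here.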

Data augmentation refers to a technique wherein new training data samples are created by applying various transformations to the existing training data set. One major advantage of this approach is that it can improve the generalization of machine learning models in many computer vision tasks, including few-shot learning.

For computer vision tasks, data augmentation involves techniques such as rotating, flipping, scaling and color-jittering existing images to generate additional samples for each class. We then add these additional images to the existing data set, which we can use to train a few-shot learning model.
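A minimal numpy-only sketch of this idea (real pipelines would typically use a library such as torchvision; scaling and color jittering are omitted here): flips and 90-degree rotations turn one labeled image into several training samples.

```python
import numpy as np

def augment(image):
    """Return the image plus flipped and rotated variants."""
    return [
        image,
        np.fliplr(image),        # horizontal flip
        np.flipud(image),        # vertical flip
        np.rot90(image, k=1),    # 90-degree rotation
        np.rot90(image, k=2),    # 180-degree rotation
    ]

image = np.arange(16).reshape(4, 4)  # stand-in for a tiny grayscale image
augmented = augment(image)
print(f"1 image -> {len(augmented)} training samples")
```

Each variant keeps the original label, so a few-shot class with three labeled images immediately yields fifteen training samples.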

Generative models, such as variational autoencoders (VAEs) and generative adversarial networks (GANs), have shown promising results for few-shot learning. These models are able to generate new data points that are similar to the training data.

In the context of few-shot learning, we can use generative models to augment the existing data with additional examples. The model does this by generating new examples that are similar to the few labeled examples available. We can also use generative models to generate examples for new classes that are not present in the training data. By doing so, generative models can help expand the data set for training and improve the performance of the few-shot learning algorithm.

In computer vision, we can apply few-shot learning to image classification tasks wherein our goal is to classify images into different categories. In this example, we can use few-shot learning to train a machine learning model to classify images with a limited amount of labeled data. Labeled data refers to a set of images with corresponding labels, which indicate the category or class to which each image belongs. In computer vision, obtaining a large number of labeled data is often difficult. For this reason, few-shot learning might be helpful since it allows machine learning models to learn on fewer labeled data.

Few-shot learning can be applied to various NLP tasks like text classification, sentiment analysis and language translation. For instance, in text classification, few-shot learning algorithms could learn to classify text into different categories with only a small number of labeled text examples. This approach can be particularly useful for tasks in the area of spam detection, topic classification and sentiment analysis.


In robotics, we can apply few-shot learning to tasks like object manipulation and motion planning. Few-shot learning can enable robots to learn to manipulate objects or plan their movement trajectories by using small amounts of training data. For robotics, the training data typically consists of demonstrations or sensor data.

In medical imaging, learning from only a few examples can help us train machine learning models for tasks such as tumor segmentation and disease classification. In medicine, the number of available images is usually limited by strict legal regulations and data protection laws around medical information, so there is less data on which to train machine learning models. Few-shot learning addresses this problem because it enables machine learning models to learn these tasks from a limited data set.


PathAI to Present on AI-based Models to Advance Tumor Analysis … – PR Newswire

Latest research underscores advantages of digital tools at scale to enhance tumor microenvironment and biomarker understanding in non-small cell lung cancer and renal cell carcinoma

BOSTON, Mass., April 11, 2023 /PRNewswire/ -- PathAI, a leading provider of AI-powered pathology tools to advance precision medicine, today announced that its recent research will be presented at the American Association for Cancer Research Annual Meeting 2023, to be held in Orlando, FL from April 14-19, 2023. PathAI will share three posters that highlight uses and advantages of AI-based methods to identify and examine non-small cell lung cancer (NSCLC) and renal cell carcinoma (RCC) specimens. Additionally, PathAI collaborated with Genentech, a member of the Roche Group, on two submissions: an oral presentation on H&E-based digital pathology biomarkers in metastatic NSCLC, and a poster presentation on digital PD-L1 tumor cell scoring in NSCLC. PathAI will also be exhibiting in booth 315, where it will showcase the capabilities of its newly launched PathExplore product, an AI-powered panel of histopathology features that spatially characterizes the tumor microenvironment (TME) with single-cell resolution from H&E slide images.

"Our research demonstrates forward momentum in utilizing machine learning, cell segmentation models, and image analysis at scale to better recognize and analyze tissue morphology at the cellular level, revealing new biomarkers and predictive links to targeted therapies," said Mike Montalto, Ph.D., chief scientific officer at PathAI. "With this body of research, we are another step closer to improving oncology drug development and outcomes for these difficult to treat cancers."

PathAI collaborator Genentech will give an oral presentation, "Digital pathology-based prognostic and predictive biomarkers in metastatic non-small cell lung cancer," highlighting the relationship between the tumor microenvironment (TME) and patient response to targeted cancer immunotherapy by applying machine learning algorithms to study the TME in metastatic NSCLC. By quantifying digital pathology cell and region features, and using feature variability as a discovery tool, the study identified a feature set associated with outcome to PD-L1 targeted therapy, illustrating how novel data modalities can be integrated to elucidate biomarkers of immunotherapy response.

In a poster presentation in partnership with Genentech, "Digital SP263 PD-L1 tumor cell scoring in NSCLC achieves comparable outcome prediction to manual pathology scoring," the companies will demonstrate the effectiveness of an AI-based model for PD-L1 quantification in predicting NSCLC outcomes compared to manual scoring.

In a poster on renal cell carcinoma, "Machine learning models identify key histological features of renal cell carcinoma subtypes," PathAI will explain how their machine learning model quantified the RCC environment, allowing identification of spatially specific differences that correlate with histological subtypes, mutations and vascularization.

The full list of PathAI's research submissions appears below. More information on each research abstract can be found here.

Oral Presentation: Digital pathology based prognostic and predictive biomarkers in metastatic NSCLC

Session MS.CL01.02 - Immune-based Biomarkers for Prognostic and Predictive Benefit

Abstract presentation number: 5705

Session time: April 18, 2023, 2:30 PM - 4:30 PM

Presentation time: 3:37 PM - 3:52 PM

Collaborator: Genentech

Poster Presentation: Digital SP263 PD-L1 tumor cell scoring in non-small cell lung cancer achieves comparable outcome prediction to manual pathology scoring

Session PO.BCS01.02 - Artificial Intelligence and Machine/Deep Learning 1

Abstract presentation number: 5358 / 7

Poster hours: April 18, 2023, 1:30 PM - 5:00 PM

Collaborator: Genentech

Poster Presentation: Machine learning models identify key histological features of renal cell carcinoma subtypes

Session PO.BCS02.03 - Artificial Intelligence: From Pathomics to Radiomics

Abstract presentation number: 5422 / 5

Poster hours: April 18, 2023, 1:30 PM - 5:00 PM

Poster Presentation: Artificial intelligence (AI)-based classification of stromal subtypes reveals associations between stromal composition and prognosis in NSCLC

Session PO.BCS02.03 - Artificial Intelligence: From Pathomics to Radiomics

Abstract presentation number: 5447 / 30

Poster hours: April 18, 2023, 1:30 PM - 5:00 PM

Poster Presentation: Development of a high-throughput image processing pipeline for multiplex immunofluorescence whole slide images at scale

Session PO.BCS02.02 - Integrative Spatial and Temporal Multi-omics of Cancer

Abstract presentation number: 6616 / 21

Poster hours: April 19, 2023, 9:00 AM - 12:30 PM

About PathAI

PathAI is the only AI-focused technology company to provide comprehensive precision pathology solutions from wet lab services to algorithm deployment for clinical trials and diagnostic use. Rigorously trained and validated with data from more than 15 million annotations, its AI-powered models can be leveraged to optimize the analysis of patient samples to improve diagnostic efficiency and accuracy, as well as to better gauge therapeutic efficacy and accelerate drug development for complex diseases.

PathAI, which is headquartered in Boston, MA, and operates a CAP/CLIA-certified laboratory in Memphis, TN, is proud to be a rapidly expanding organization comprised of innovative thinkers from around the globe. For more information, please visit www.pathai.com.

Media Contact

Maggie Naples, SVM Public Relations and Marketing Communications, [emailprotected], (401) 490-700

SOURCE PathAI

See the article here:
PathAI to Present on AI-based Models to Advance Tumor Analysis ... - PR Newswire


LivePerson and Cohere to deliver better business outcomes with … – PR Newswire

Leading AI companies will work together to make it easy to create and deploy enterprise-grade LLMs adapted to specific business needs, both for LivePerson and its customers

NEW YORK and TORONTO, April 11, 2023 /PRNewswire/ -- LivePerson (Nasdaq: LPSN), a global leader in Conversational AI, today announced a pilot program with Cohere, the natural language processing platform enabling broad access to cutting-edge language generation and understanding technology. This program will allow enterprise brands to easily create and deploy custom Large Language Models (LLMs) that improve both customer engagement and business outcomes.

While language AI technologies have attracted intense interest and even wonder, the realities and limitations of deploying them at the enterprise level have also led to deep concern about their readiness for customer-facing experiences and their ability to actually drive better business results.

LivePerson and Cohere intend to help enterprises overcome these barriers and put LLMs to work driving better customer engagement and business outcomes. The combination of LivePerson's industry-leading conversational platform and AI with Cohere's state-of-the-art language models will be designed to deliver on both fronts.

"With Generative AI and LLMs, the best outcomes are driven by expansive data models and precision data sets. Combining Cohere's cutting-edge language models with our unparalleled expertise and data for customer engagement will set the new standard for using AI to communicate at the enterprise level," said LivePerson founder and CEO Rob LoCascio.

LivePerson's AI is trained on a vast and rich data set derived from a billion conversational interactions every month and informed by 20+ years of experience managing brand-to-consumer interactions for the world's largest enterprises and integrating into their backend systems. Unlike other platforms, hundreds of thousands of humans participate in LivePerson's AI learning loops, keeping conversations grounded and factual, and the company's commitment to fighting bias in AI is deep and long-standing.

This mirrors Cohere's vision to help developers and businesses tap into the massive opportunity that NLP brings and give them a competitive advantage as early adopters in this evolving market. Cohere's impressive team includes some of the world's top machine learning talent alongside business experts with experience implementing exciting new technology into products at scale. Its platform allows enterprises to fine-tune their data for stronger outcomes and more impactful business decisions.

"Our mission is to make it simple for any developer and business to build powerful language AI into their products," said Aidan Gomez, Co-Founder and CEO at Cohere. "LivePerson's leadership in conversational AI helps to further that mission, and we can help them increase access to this transformational technology, even to enterprises without extensive compute resources or machine learning expertise."

To help enterprises learn more about custom LLMs, LivePerson and Cohere will host a webinar with AI leaders Joe Bradley, Chief Scientist at LivePerson, and Matthew Dunn, Machine Learning Research expert at Cohere. The session will take place on May 24, 2023 at 12 PM ET and help shed light on "Driving better business outcomes with custom large language models."

To register for this event and learn more about how LivePerson and Cohere are bringing AI to enterprises, please click here.

About LivePerson, Inc.

LivePerson (NASDAQ: LPSN) is a global leader in Conversational AI. Hundreds of the world's leading brands including HSBC, Chipotle, and Virgin Media use our Conversational Cloud platform to engage with millions of consumers as personally as they would with one. We power nearly a billion conversational interactions every month, providing a uniquely rich data set to build connections that reduce costs, increase revenue, and are anything but artificial. Fast Company named us the #1 Most Innovative AI Company in the world. To talk with us or our Conversational AI, please visit http://www.liveperson.com.

About Cohere

Cohere is making language AI accessible to all developers and businesses, even those without massive compute resources or rare machine learning knowledge. Cohere builds state-of-the-art language models and makes them available through an API, as a managed service or via cloud ML platforms, turning a complex, expensive process into an easy-to-use interface. Cohere's mission is to help every developer, enterprise leader, or startup founder benefit from the power of language models, whether through copywriting, search, conversational AI, summarization, content moderation, and more. Cohere is based in Toronto, Canada, and powers customers across the globe.

Forward-Looking Statements

Statements in this press release regarding LivePerson that are not historical facts are forward-looking statements and are subject to risks and uncertainties that could cause actual future events or results to differ materially from such statements. Any such forward-looking statements are made pursuant to the safe harbor provisions of the Private Securities Litigation Reform Act of 1995. It is routine for our internal projections and expectations to change as the quarter and year progress, and therefore it should be clearly understood that the internal projections and beliefs upon which we base our expectations may change. Although these expectations may change, we are under no obligation to inform you if they do. Some of the factors that could cause actual results to differ materially from the forward-looking statements contained herein include without limitation, our ability to execute on and deliver our current plans and goals, and the other factors described in the Risk Factors section of the Company's most recently filed Annual Report on Form 10-K, filed with the SEC on March 16, 2023 and as from time to time updated in LivePerson's Quarterly Reports on Form 10-Q. The list of Risk Factors is intended to identify only certain of the principal factors that could cause actual results to differ from those discussed in the forward-looking statements.

Media contact: Mike Tague, [emailprotected]

SOURCE LivePerson, Inc.

Read the original post:
LivePerson and Cohere to deliver better business outcomes with ... - PR Newswire


Astronomers used machine learning to mine data from South Africas MeerKAT telescope: what they found – The Conversation Indonesia

New telescopes with unprecedented sensitivity and resolution are being unveiled around the world and beyond. Among them are the Giant Magellan Telescope under construction in Chile, and the James Webb Space Telescope, which is parked a million and a half kilometres out in space.

This means there is a wealth of data available to scientists that simply wasn't there before. The raw data off just a single observation from the MeerKAT radio telescope in South Africa's Northern Cape province can measure a terabyte. That's enough to fill a laptop computer's hard drive. MeerKAT is an array of 64 large antenna dishes. It uses radio signals from space to study the evolution of the universe and everything it contains: galaxies, for example. Each dish is said to generate as much data in one second as you'd find on a DVD.
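Those volumes can be sanity-checked with back-of-the-envelope arithmetic. The figures below are assumptions drawn from the article's own analogy (a 4.7 GB single-layer DVD per dish per second), not MeerKAT specifications; the large gap between this raw rate and the roughly one terabyte per delivered observation presumably reflects on-site correlation and data reduction.

```python
DVD_GB = 4.7           # assumed single-layer DVD capacity, in gigabytes
N_DISHES = 64          # MeerKAT antenna dishes
SECONDS_PER_HOUR = 3600

# "Each dish generates a DVD's worth of data per second" implies an
# aggregate raw rate across the whole array of:
aggregate_gb_per_s = DVD_GB * N_DISHES
print(aggregate_gb_per_s)          # 300.8 GB/s

# At that raw rate, a single hour of observing would produce over a
# thousand terabytes before any reduction:
hourly_tb = aggregate_gb_per_s * SECONDS_PER_HOUR / 1000
print(round(hourly_tb))            # 1083 TB
```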

Machine learning is helping astronomers work through this data more quickly and more accurately than poring over it manually. Perhaps surprisingly, despite increasing reliance on computers, until recently the discovery of rare or new astrophysical phenomena relied entirely on human inspection of the data.

Machine learning is essentially a set of algorithms designed to automatically learn patterns and models from data. Because we astronomers aren't sure what we're going to find (we don't know what we don't know), we also design algorithms to look out for anomalies that don't fit known parameters or labels.

This approach allowed my colleagues and me to spot a previously overlooked object in data from MeerKAT. It sits some seven billion light years from Earth (a light year is a measure of how far light would travel in a year). From what we know of the object so far, it has many of the makings of what's known as an Odd Radio Circle (ORC).

Odd Radio Circles are identifiable by their strange, ring-like structure. Only a handful of these circles have been detected since the first discovery in 2019, so not much is known about them yet.

In a new paper we outline the features of our potential Odd Radio Circle, which we've named SAURON (a Steep and Uneven Ring Of Non-thermal Radiation). SAURON is, to our knowledge, the first scientific discovery made in MeerKAT data with machine learning. (There have been a handful of other discoveries assisted by machine learning in astronomy.)

Not only is discovering something new incredibly exciting; new discoveries are also critical for challenging our understanding of the cosmos. These new objects may match our theories of how galaxies form and evolve, or we may need to change how we see the universe. New discoveries of anomalous astrophysical objects help science make progress.

We spotted SAURON in data from the MeerKAT Galaxy Cluster Legacy Survey. The survey is a programme of observations conducted with South Africa's MeerKAT telescope, a precursor to the Square Kilometre Array. The array is a global project to build the world's largest and most sensitive radio telescope within the coming decade, co-located in South Africa and Australia.

The survey was conducted between June 2018 and June 2019. It zeroed in on some 115 galaxy clusters, each made up of hundreds or even thousands of galaxies.

That's a lot of data to sift through, which is where machine learning comes in.

We developed and used a coding framework which we called Astronomaly to sort through the data. Astronomaly ranked unknown objects according to an anomaly scoring system. The human team then manually evaluated the 200 anomalies that interested us most. Here, we drew on vast collective expertise to make sense of the data.

It was during this part of the process that we identified SAURON. Instead of having to look at 6,000 individual images, we only had to look through the first 60 that Astronomaly flagged as anomalous to pick up SAURON.
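That rank-then-review workflow can be sketched with a generic anomaly scorer. This is not Astronomaly's actual algorithm (the framework supports several detectors and active learning); the two-dimensional toy features and the simple z-score detector below are illustrative assumptions only.

```python
import random
import statistics

def anomaly_scores(features):
    """Score each object by its largest per-feature z-score.

    `features` is a list of equal-length feature vectors, one per
    radio source (e.g. summary statistics of an image cutout).
    """
    dims = len(features[0])
    means = [statistics.mean(f[d] for f in features) for d in range(dims)]
    stdevs = [statistics.stdev(f[d] for f in features) or 1.0 for d in range(dims)]
    return [max(abs((f[d] - means[d]) / stdevs[d]) for d in range(dims))
            for f in features]

def top_k_for_review(features, k):
    """Return indices of the k highest-scoring (most anomalous) objects."""
    scores = anomaly_scores(features)
    return sorted(range(len(scores)), key=lambda i: -scores[i])[:k]

# 6,000 ordinary sources clustered around (1.0, 0.5), plus one oddball.
random.seed(0)
catalogue = [[random.gauss(1.0, 0.1), random.gauss(0.5, 0.1)] for _ in range(5999)]
catalogue.append([5.0, 4.0])  # a hypothetical SAURON-like outlier

# Humans inspect only the top of the ranked list, not all 6,000 images.
shortlist = top_k_for_review(catalogue, 60)
print(5999 in shortlist)  # True: the outlier lands in the reviewed top 60
```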

But the question remains: what, exactly, have we found?

We know very little about Odd Radio Circles. It is currently thought that their bright, blast-like emission is the wreckage of a huge explosion in their host galaxies.

The name SAURON captures the fundamentals of the object's make-up. "Steep" refers to its spectral slope, indicating that at higher radio frequencies the source (or object) very quickly grows fainter. "Ring" refers to the shape. And the "Non-thermal Radiation" refers to the type of radiation, suggesting that there must be particles accelerating in powerful magnetic fields. SAURON is at least 1.2 million light years across, about 20 times the size of the Milky Way.
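A "steep" spectrum of this kind is conventionally modelled as a power law, S(nu) = S0 * (nu/nu0)**alpha, with a strongly negative spectral index alpha. The specific values below (alpha = -1.5, unit flux at 1 GHz) are illustrative assumptions, not SAURON's measured parameters.

```python
def flux_density(nu_ghz, s0_jy=1.0, nu0_ghz=1.0, alpha=-1.5):
    """Power-law radio spectrum S(nu) = S0 * (nu / nu0) ** alpha.

    alpha = -1.5 is a hypothetical "steep" index chosen for
    illustration; steeper (more negative) alpha means the source
    fades faster with increasing frequency.
    """
    return s0_jy * (nu_ghz / nu0_ghz) ** alpha

# Doubling the observing frequency dims this source to about 35%
# of its original brightness:
print(flux_density(2.0) / flux_density(1.0))  # 2 ** -1.5 ~ 0.354
```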

But SAURON doesn't tick all the right boxes for us to say that it's definitely an Odd Radio Circle. We detected a host galaxy but can find no evidence of radio emissions with the wavelengths and frequency that match those of host galaxies of the other known ORCs.

And even though SAURON has a number of features in common with Odd Radio Circle 1, the first Odd Radio Circle spotted, it differs in others. Its strange shape and its oddly behaving magnetic fields don't align well with the main structure.

One of the most exciting possibilities is that SAURON is a remnant of the explosive merger of two supermassive black holes. These are incredibly dense objects at the centre of galaxies such as our Milky Way that could cause a massive explosion when galaxies collide.

More investigation is required to unravel the mystery. Meanwhile, machine learning is quickly becoming an indispensable tool to find more strange objects by sorting through enormous datasets from telescopes. With this tool, we can expect to unveil more of what the universe is hiding.


Go here to read the rest:
Astronomers used machine learning to mine data from South Africas MeerKAT telescope: what they found - The Conversation Indonesia


This Startup Claims Its Models Fix A Major Problem With Generative AI – Forbes

While OpenAI is propelling toward artificial general intelligence, AI that's smarter than human beings, Writer cofounders May Habib (left) and Waseem Alshikh (right) hold a different opinion. "If you can unplug it, it's not AGI," Habib says.

In 2013, May Habib was browsing through GitHub when she came across the work of Dubai-based tech executive Waseem Alshikh, who dabbled in then-nascent machine learning techniques to summarize large blocks of information.

Habib, who grew up in a small village in Lebanon before immigrating to Canada in the 90s to flee the civil war, was immediately taken by the similarities she shared with Alshikh, who was also forced to leave his native country, Syria, but for a different reason: as a teenager, he had illegally hacked into the country's Ministry of Interior.

Ten years and two startups later, the duo is at the cutting edge of a wave of generative AI companies that have emerged into the mainstream as the rudimentary techniques Alshikh was experimenting with a decade ago have been supercharged by advancements in transformers and deep learning models. Their generative AI startup Writer uses its own large language models called Palmyra (named after an ancient Syrian city) to let enterprises and their employees write and edit content such as emails, documents, ads and summaries, all of which will adhere to a company's editorial guidelines.

Unlike the vast majority of generative AI models out there that hallucinate, or spout incorrect information (a major issue for businesses that are incorporating the technology), CEO Habib claims that the latest version of its language AI model will never create anything that's factually incorrect. That's due to the model's architecture, which is designed to prioritize accuracy over creativity.

That's technically plausible to an extent. "Model architecture definitely does have an impact on hallucination rates," says Pranav Reddy, an investor at Conviction who previously worked at generative AI search engine startup Neeva. "But there is no model structure that guarantees that they don't hallucinate," he continued. In the rare scenario that the company's technology does hallucinate, Habib says Writer highlights the portion of the text that has no source.

Customers, which include giants like Uber, Deloitte, Spotify and Accenture, seem sold so far. United Healthcare is using Writer's HIPAA-compliant models to examine the fine print in health insurance plans and then write blogs or emails to explain those plans to health providers and patients. Intuit has it writing blog posts based on financial data, and L'Oréal uses it to write product descriptions and push notifications. Writer's customers can also feed the software a podcast or a video and it can repurpose it into written content. (One of Writer's early clients was Twitter, which hasn't paid its bills, says Habib, who sued the Elon Musk-owned company in late February for defaulting on payments.)

Writer's models are trained on public information as well as a company's own data such as PDFs, editorial style guides and brand words, together forming 30 billion parameters. Each company gets its own customized version of the model, and the data is stored on the company's cloud. "The data is actually used like an index and we have this path to explainability that shows where specific facts came from," Habib says.

The company, which is featured on the 2023 AI 50 list, directly competes with other generative AI startups on the list that are increasingly targeting the enterprise sector. But it's still catching up to its competitors in terms of funding. Valued at $155 million, Writer has raised $26 million in venture funding from Insight Partners, Upfront Ventures and others. In comparison, copywriting AI tool Jasper is a unicorn with deep pockets of $125 million in venture capital. Writer's main differentiator, according to its cofounders, is that it uses its own large language models (unlike Jasper, which is built on OpenAI's GPT-3.5) and trains those models on company-specific data to get more accurate results.

"Most of the companies in the market are only playing in the application layer, and they don't even do it well. They literally use someone else's technology and put a UI on top of it," says cofounder and CTO Alshikh. "Jasper is an example. It's just a reseller. As an AI company, when you don't control your AI itself, how are you going to control the quality of output?"

The San Francisco-based startup's strategy to address generative AI's tendency to make up facts has to do with how it's built. On the backend, Writer uses machine learning, natural language processing and transformers to understand text and generate more of it. Writer combines encoders, the part of a transformer that is good at understanding text, with decoders, the components that predict and generate text.

Unlike other models, Writer's models make encoders and decoders talk to each other. Once an encoder understands the query, it grabs the information from the database uploaded by the client company and then informs the decoder how to generate a response based on it. "This is how we're able to be accurate and write in an inclusive language," Habib tells Forbes.

Reddy, the Conviction investor, agrees that while encoder-decoder models aren't perfect, they are better than ChatGPT, at least when it comes to accuracy. "Encoder-decoder models do tend to have much lower hallucination rates than decoder-only models, which is the OpenAI GPT-3 structure," Reddy says. "The tradeoff you pay for this is that these (encoder-decoder) models also tend to be less creative."
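The grounding idea described above, an "encoder" step that retrieves from client data and a "decoder" step that generates only from what was retrieved, can be illustrated with a deliberately tiny sketch. This is a cartoon of retrieval grounding in general, not Writer's Palmyra architecture; the fact store, function names and matching rule are all invented for illustration.

```python
# Hypothetical client-supplied fact store; real systems would index
# PDFs, style guides and other documents, not a two-entry dict.
FACT_STORE = {
    "refund window": "Refunds are accepted within 30 days of purchase.",
    "support hours": "Support is available 9am-5pm ET, Monday to Friday.",
}

def encode(query):
    """'Understand' the query: pick the fact key with the most word overlap."""
    words = set(query.lower().replace("?", " ").split())
    best = max(FACT_STORE, key=lambda k: len(words & set(k.split())))
    return best if words & set(best.split()) else None

def decode(fact_key):
    """Generate only from the retrieved fact; refuse when nothing matched.

    Because output is constrained to the store, the 'decoder' cannot
    invent content that has no source -- the essence of grounding.
    """
    if fact_key is None:
        return "I don't have a source for that."
    return f"Per our records: {FACT_STORE[fact_key]}"

print(decode(encode("what is your refund window?")))  # grounded answer
print(decode(encode("do you sell hats?")))            # refusal, no source
```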

Habib says her clients are more concerned about using the right wording and punctuation in their branded content than being able to generate stories, sonnets and poems. Any story included in content written by Writer's software is a real one and not fiction, she says.

True to its name, Writer's tools only produce text and do not generate visual content like images and videos, a service that newer enterprise-focused generative AI startups like Typeface do offer. It can be plugged into many writing and content creation tools including Google Docs, Microsoft Word and Figma. Writer charges customers a platform fee anywhere from $30,000 up to $1 million. Pricing is also based on the number of words generated by the software. Through this business model, Writer expects to book an estimated $20 million in 2023 revenue.

Nvidia provides Writer with the physical hardware (GPUs and CPUs) for computing and training at a relatively low price, Alshikh says. "Our goal today is to train smaller models as much as possible to keep the costs cheap for us and our customers," he tells Forbes.

The immigrant entrepreneurs say that Writer is an amalgamation of their personal experiences with the English language and their business foundations in the machine-learning realm. Habib, a Harvard graduate, was the first woman in her family to attend college and taught her family how to speak English. Alshikh taught himself how to speak English at age 20 to learn computer science. This prompted them to start their first venture in 2015: machine learning-based translation software called Qordoba that helped companies like Sephora and Visa translate their digital content into other languages and dialects, before evolving it into Writer in 2020.

"The medium of language to move ahead in the world has always been something we've thought about really deeply," Habib says.

Go here to see the original:
This Startup Claims Its Models Fix A Major Problem With Generative AI - Forbes
