Category Archives: Machine Learning

Learning and predicting the unknown class using evidential deep learning | Scientific Reports

I investigated whether m-EDL has the same performance as EDL through comparative experiments. I also investigated whether m-EDL has an advantage when including class u in the training data. The objective of this evaluation was to determine the following:

(Q1): whether the use of m-EDL reduces the prediction accuracy for a class k when the same training and test data are given to EDL and m-EDL models;

(Q2): whether a) an m-EDL model that has learned class u has the same prediction accuracy for a class k when compared with an EDL model that cannot learn class u, and b) m-EDL predicts class u with higher accuracy than EDL;

(Q3): if the ratio of class u data included in the training data affects the accuracy of predicting classes k and u in the test data;

(Q4): what happens when the properties of class u data that are blended with the training data and test data in Q2 and Q3 are exactly the same.

To answer these questions, several datasets and models were prepared. Conditions that depended on whether data from class u were included in the training and/or test data, as well as which model was used to learn the data, were used in the evaluation.

Here, I evaluate whether the performance of m-EDL is comparable to that of EDL in the situation assumed by EDL; that is, the situation where all training and test data belong to class k. In other words, both the training and test data were composed only of images from MNIST, and the following two conditions were compared: (1) the EDL model trained and tested on datasets with no class u data and (2) the m-EDL model trained and tested on datasets with no class u data.

Figure 3 compares the accuracies of EDL (thin solid red line) and m-EDL (thick solid blue line). Each line shows the mean value, and the shaded areas indicate the standard deviation. The accuracy of EDL is plotted on the vertical axis against the uncertainty threshold on the horizontal axis; the accuracy of EDL improves as the threshold decreases because only classification results the model is confident of are treated as classification results. Figure 3a shows the results when \(\widehat{p}_{k^+}\) is used for the classification results of m-EDL. An uncertainty threshold is not used for the classification result of m-EDL, so its line is parallel to the horizontal axis. In contrast, Fig. 3b shows the results when \(\widehat{p}_{k^+}\) is converted to \(\overline{p}_k\) and the uncertainty threshold used for EDL is also applied to m-EDL.

Accuracy of EDL and m-EDL when both the training and test datasets contain no class u data. (a) Results when \(\widehat{p}_{k^+}\) is used in m-EDL classification. (b) Results when \(\widehat{p}_{k^+}\) is converted to \(\overline{p}_k\) and used in m-EDL classification with the same uncertainty threshold as that of EDL.

These graphs show that the accuracy of m-EDL is lower than that of EDL, except in the region where the uncertainty threshold is 0.9 or higher. However, the decrease in accuracy is not substantial, and the performance of m-EDL would be sufficient for many applications.
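The thresholding described above can be sketched in a few lines, assuming the standard evidential formulation in which the network outputs nonnegative evidence, the Dirichlet parameters are alpha = evidence + 1, and the uncertainty is u = K / sum(alpha); the function name and toy numbers below are purely illustrative:

```python
import numpy as np

def edl_accuracy_at_threshold(evidence, labels, threshold):
    """Accuracy when only predictions whose uncertainty is at or below
    `threshold` are counted as classification results."""
    alpha = evidence + 1.0              # Dirichlet parameters
    K = alpha.shape[1]                  # number of known classes
    u = K / alpha.sum(axis=1)           # per-sample uncertainty in (0, 1]
    confident = u <= threshold          # keep only confident predictions
    if not confident.any():
        return float("nan")
    preds = alpha.argmax(axis=1)
    return float((preds[confident] == labels[confident]).mean())

# Toy run: two high-evidence samples and one maximally uncertain one.
evidence = np.array([[9.0, 0.0], [0.0, 9.0], [0.5, 0.5]])
labels = np.array([0, 1, 1])
print(edl_accuracy_at_threshold(evidence, labels, 0.5))
```

Lowering the threshold excludes the uncertain third sample, which is why accuracy rises as the threshold decreases.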

In this experiment, the properties of the class u data that are included in the training and test data are completely different; that is, they are obtained from different datasets. This makes it possible to confirm whether the learned uncertain class features are regarded as features that are not class k rather than features that are class u learned during training.

First, I consider whether an m-EDL model that has learned class u has the same prediction accuracy for class k when compared with an EDL model that cannot learn class u (Q2a). I then consider whether it can determine class u with higher prediction accuracy (Q2b).

The following two cases are considered: (1) EDL is tested on data that include Fashion MNIST data, and m-EDL is trained on data that include EMNIST data but tested on data that include Fashion MNIST data. Figure 4a–c shows the results for class u rates of 25%, 50%, and 75% in the training data, respectively. The lines of different colors indicate the results for class u rates of 25%, 50%, and 75% in the test data. These are percentages of the number of MNIST data. Additionally, Table 1 presents the mean accuracies of EDL and m-EDL for each condition. (2) EDL is tested on data that include EMNIST data, and m-EDL is trained on data that include Fashion MNIST data but tested on data that include EMNIST data. Figure 4d–f shows the results for class u rates of 25%, 50%, and 75% in the training data, respectively. The lines of different colors indicate the results for class u rates of 25%, 50%, and 75% in the test data. These are percentages of the number of MNIST data. Additionally, Table 2 presents the mean accuracies of EDL and m-EDL for each condition.

Accuracy comparison of EDL and m-EDL. Line colors indicate the proportion of class u in the test data, and top and bottom plots show the accuracy for class k data and class u data, respectively. Results when m-EDL has learned class u (EMNIST data) but is tested on Fashion MNIST data for class u mix rates in the training data of (a) 25%, (b) 50%, and (c) 75%. These are percentages of the number of MNIST data. Results when m-EDL has learned class u (Fashion MNIST data) but is tested on EMNIST data for class u mix rates in the training data of (d) 25%, (e) 50%, and (f) 75%.

Under these two conditions, the one-hot vector y_j of the data has K = 10 dimensions. Therefore, all elements of the one-hot vectors of class u (EMNIST or Fashion MNIST data) in the test data were set to 0. In each of the following cases, the same processing was applied when EDL was tested on data including class u data.
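This labeling scheme can be sketched as follows; the function name is hypothetical, and only the all-zero convention for class u comes from the text:

```python
import numpy as np

K = 10  # dimensionality of the one-hot vectors (the ten MNIST digits)

def make_label(class_index=None):
    """Return the one-hot vector y_j for a known class k, or an all-zero
    vector for class u when class_index is None."""
    y = np.zeros(K)
    if class_index is not None:
        y[class_index] = 1.0  # standard one-hot for a known class
    return y                  # for class u, every element stays 0

print(make_label(3))     # label for the digit 3
print(make_label(None))  # label for a class u image (EMNIST or Fashion MNIST)
```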

The left plots of Fig. 4a–c and Table 1 (avg. accuracy for k) show the results for class k data under the first condition. The line color indicates the ratio of class u data included in the test data; one would expect the accuracy to decrease as the mix ratio of class u in the test data increases. The results show that the accuracy of m-EDL with respect to class k is high and robust to the mix rate of class u in the training and test data: the left plots in Fig. 4a–c show that the m-EDL model that has learned class u achieves equal or higher accuracy with respect to class k than the EDL model, which cannot learn class u. Moreover, the accuracy of m-EDL is not easily affected by the ratio of class u in either the test data or the training data.

The right plots of Fig. 4a–c and Table 1 (avg. accuracy for u) show the accuracy for class u data, that is, how often data judged as "I do not know" actually differ from the data classes learned so far. The right plots of Fig. 4a–c show that the accuracy of m-EDL with respect to class u is high and robust to the mix rate of class u in the training and test data. It is natural for the class u accuracy of EDL to increase as the ratio of class u increases, because even if EDL classified class u randomly, its accuracy for class u would rise with that ratio.

Figure 4d–f and Table 2 (avg. accuracy for k) show the results for the second condition, which is exactly the same as the first except that the EMNIST and Fashion MNIST datasets switch roles. Again, the accuracy of m-EDL with respect to class k is high and robust, as in the left plots of Fig. 4a–c. The results in the left plots of Fig. 4d–f reveal that the m-EDL model that learned class u achieved equal or higher accuracy with respect to class k than EDL, and the accuracy of m-EDL was not easily affected by the ratio of class u in the test and training data.

However, the right plots of Fig. 4d–f and Table 2 (avg. accuracy for u) show that the accuracy of m-EDL with respect to class u cannot be said to be better than that of EDL.

In the comparison of the two patterns in "Performance comparison of EDL and m-EDL when class u is included in the training and test data (Q2)", if the ratio of class u in the training data affects the prediction accuracy for the class k and u data, then the ratio of class u included in the training data must be appropriately selected. To determine whether this is the case, I used the results from that comparison (Fig. 4a–c and d–f, which have training data mix ratios of 25%, 50%, and 75%, respectively) and added the following two cases: (1) Fashion MNIST is included in the test data, but neither EDL nor m-EDL is trained on class u data (a training data mix ratio of 0%; Fig. 5a), and (2) EMNIST is included in the test data, but neither EDL nor m-EDL is trained on class u data (a training data mix ratio of 0%; Fig. 5b). The lines of different colors indicate the results for class u rates of 25%, 50%, and 75% in the test data.

Accuracy comparison of EDL and m-EDL when neither EDL nor m-EDL has learned class u. Line colors indicate the mix rate of class u in the test data, and left and right plots show the accuracy for class k data and class u data, respectively. (a) Results for Fashion MNIST data. (b) Results for EMNIST data.

In the left plot of Fig. 5a, the accuracy for class k improved, as in the left plots of Fig. 4a–c, whereas in the right plot of Fig. 5a, there was no improvement in accuracy for class u. In the right plots of Fig. 4a–c, the accuracy for class u improved even when the ratio of class u in the training data was small. These results suggest that the accuracy for class u may be improved by having m-EDL learn even a small amount of class u data. Moreover, these data need not be related to the class u data in the test data.

The right plot of Fig. 5b shows that m-EDL did not improve the accuracy for class u. Moreover, in the right plots of Fig. 4d–f, the accuracy of m-EDL for class u is not better than that of EDL; however, compared with the results in the right plot of Fig. 5b, it is clear that the accuracy of m-EDL for class u improves even if the ratio of class u in the training data is small.

It can be inferred from these comparisons that the amount of accuracy improvement for class u changes depending on the characteristics of class u in the training and test data.

As shown in "Performance comparison of EDL and m-EDL when class u is included in the training and test data (Q2)" and "Effect of the ratio of the class u included in the training data on the prediction accuracy of classes k and u in the test dataset (Q3)", the amount of improvement in accuracy for class u data changes depending on the characteristics of u in the training data and test data. Hence, I evaluated whether the accuracy for class u always improves when the characteristics of u in the training and test data are exactly the same (i.e., when the class u data are from the same dataset).

The following two conditions were considered: (1) when Fashion MNIST is included in both the test and training data [Fig. 6a–c and Table 3 (avg. accuracy for k and u)] and (2) when EMNIST is included in both the test and training data [Fig. 6d–f and Table 4 (avg. accuracy for k and u)].

Accuracy comparison of EDL and m-EDL. Line colors indicate the proportion of class u in the test data, and top and bottom plots show the accuracy for class k data and class u data, respectively. Results when m-EDL has learned class u (Fashion MNIST) for class u mix rates in the training data of (a) 25%, (b) 50%, and (c) 75%. These are percentages of the number of MNIST data. Results when m-EDL has learned class u (EMNIST) for class u mix rates in the training data of (d) 25%, (e) 50%, and (f) 75%.

The differences among Fig. 6a–c and d–f are the mix rates of class u in the training data (25%, 50%, and 75%, respectively). The lines of different colors indicate the results for class u rates of 25%, 50%, and 75% in the test data. These are percentages of the number of MNIST data. In particular, the right-hand plots of Fig. 6a–f confirm that the accuracy of m-EDL is higher than in the cases considered for Q2 and Q3 and is almost 100%.

In the cases of Q2 and Q3, the class u data in the training and/or test data have different characteristics, and the accuracy of m-EDL on the class u data changed depending on the combination. Meanwhile, in the Q4 cases, class u data had the same characteristics during both training and testing, and hence, the accuracy is very high. From this, it is clear that the feature learning of class u in the training data contributes to the improvement in accuracy that m-EDL exhibits when learning class u. However, in the comparisons of Q2, particularly when m-EDL was trained using EMNIST and both EDL and m-EDL were tested on data including Fashion MNIST, examples can be found where the accuracy improved even when the unknown classes in the training and test data differ. Therefore, m-EDL has the potential to improve accuracy by excluding uncertain data as a result of learning unrelated data that do not belong to class k data, although this depends on the combination of class u data in the training and test data.

Here, I hypothesize about which combinations of class u datasets mixed in during training will increase the class u accuracy in testing. The hypothesis is as follows: if class u data whose characteristics are as close as possible to those of class k are learned during training, then class u data in the test can be discriminated as class u, as long as the characteristics of class u given during the test differ from those seen in training. That is, if m-EDL learns a boundary that delimits the range of class k more strictly, using class u data whose characteristics are close to those of class k, then class u can be easily distinguished. Conversely, if the class u data during training are far from the characteristics of class k, the decision boundary between k and u is freely determined, and class u data in the test that are close to class k may be incorrectly classified.

To test this hypothesis, I introduced another dataset (Cifar-10 [40]) and evaluated the similarity of the characteristics of the different datasets. For the similarity calculation, the Cifar-10 images were converted to 28 × 28 pixels (consistent with the other datasets) and grayscaled using a previously proposed method [41]. Table 5 presents the similarity of MNIST, EMNIST, Fashion-MNIST, and Cifar-10. Here, the structural similarity (SSIM) was determined by randomly selecting 500,000 image pairs from the datasets being compared, and the mean and variance were calculated as the similarity between the datasets.
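The sampling procedure can be sketched as follows; this uses a simplified single-window SSIM written in plain NumPy rather than the usual sliding-window implementation, and the synthetic arrays stand in for the real datasets:

```python
import numpy as np

def global_ssim(x, y, data_range=255.0):
    """Single-window SSIM between two grayscale images (a simplification
    of the usual sliding-window SSIM, sufficient for illustration)."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def dataset_similarity(a, b, n_pairs, seed=0):
    """Mean and variance of SSIM over randomly sampled image pairs,
    mirroring the procedure described in the text."""
    rng = np.random.default_rng(seed)
    i = rng.integers(0, len(a), n_pairs)
    j = rng.integers(0, len(b), n_pairs)
    scores = np.array([global_ssim(a[k], b[m]) for k, m in zip(i, j)])
    return scores.mean(), scores.var()

# Synthetic stand-ins for two 28 x 28 grayscale datasets.
rng = np.random.default_rng(1)
ds_a = rng.integers(0, 256, (100, 28, 28)).astype(float)
ds_b = rng.integers(0, 256, (100, 28, 28)).astype(float)
mean, var = dataset_similarity(ds_a, ds_b, n_pairs=1000)
print(f"mean SSIM: {mean:.3f}, variance: {var:.5f}")
```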

The distance between datasets was determined as the inverse of the SSIM, and the positional relationship of the datasets on a two-dimensional plane was estimated via multidimensional scaling (MDS) [41], as shown in Fig. 7.
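The embedding step can be sketched with scikit-learn's MDS; the dissimilarity values below are illustrative placeholders, not the paper's measured 1/SSIM distances:

```python
import numpy as np
from sklearn.manifold import MDS

# Illustrative symmetric dissimilarity matrix for the four datasets
# (M = MNIST, E = EMNIST, F = Fashion-MNIST, C = Cifar-10).
names = ["M", "E", "F", "C"]
dist = np.array([
    [0.0, 1.2, 2.5, 4.0],
    [1.2, 0.0, 2.0, 3.8],
    [2.5, 2.0, 0.0, 3.0],
    [4.0, 3.8, 3.0, 0.0],
])

# dissimilarity="precomputed" makes MDS embed the given distances directly
# instead of computing Euclidean distances from feature vectors.
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dist)
for name, (x, y) in zip(names, coords):
    print(f"{name}: ({x: .2f}, {y: .2f})")
```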

Location of each dataset estimated via MDS, where the points M, F, E, and C represent the locations of the MNIST, Fashion-MNIST, EMNIST, and Cifar-10 datasets, respectively, and the distance between points is proportional to the inverse of the similarity. The numbers on the horizontal and vertical axes are dimensionless.

As shown in Fig. 7, EMNIST is more similar to MNIST than Fashion-MNIST is. The newly introduced Cifar-10 is an image dataset whose characteristics differ more from those of MNIST than those of either EMNIST or Fashion-MNIST. The hypothesis explains the result presented in "Performance comparison of EDL and m-EDL when class u is included in the training and test data (Q2)": the accuracy for class u was higher in Case 1, where u was trained with EMNIST and classified with test data containing Fashion MNIST, than in Case 2, where u was trained with Fashion-MNIST and classified with test data containing EMNIST. The accuracy for class u was higher in Case 1 because the characteristics of EMNIST were closer than those of Fashion-MNIST to those of MNIST; m-EDL trained on EMNIST was able to identify Fashion-MNIST, which was given during testing and had more distant characteristics than EMNIST, as class u. To verify this hypothesis, I compared the accuracy for class u in Case 3, where class u was trained with Cifar-10 and classified with test data containing EMNIST, with those of Cases 1 and 2. If the hypothesis is correct, the accuracy for class u should decrease in the order Case 1 > Case 2 > Case 3.

Table 6 presents the accuracy of m-EDL for class u in each case. Indeed, the accuracy in Case 3 was the lowest, suggesting that if class u data with characteristics close to those of class k are learned during training, class u in the test can be detected as class u, as long as the characteristics of class u given during testing are farther from class k than those seen in training.


Why Consider Python for Machine Learning and AI? – Analytics Insight

Here is why you should consider Python for Machine Learning and AI

Python has emerged as the preferred programming language for machine learning and artificial intelligence (AI) applications. Its versatility, ease of use, and extensive library support make it the top choice for data scientists, researchers, and engineers working in these fields. In this article, we'll explore the key reasons why Python is the go-to language for machine learning and AI.

Python boasts a rich ecosystem of libraries and frameworks that simplify machine learning and AI development. Two of the most prominent libraries are TensorFlow and PyTorch, which provide tools and resources for building and training deep learning models. Scikit-Learn is another widely used library for various machine learning tasks. These libraries offer pre-built modules, making it easier to implement complex algorithms and neural networks.
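As a minimal illustration of how little code a Scikit-Learn workflow requires (the synthetic data below stands in for a real dataset):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Generate a synthetic binary-classification problem.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit a classifier and evaluate it in a few lines.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
acc = accuracy_score(y_test, model.predict(X_test))
print(f"test accuracy: {acc:.2f}")
```

Swapping in a different model, say a random forest, changes only one line, which is exactly the kind of pre-built modularity the libraries provide.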

Python's clean and readable syntax is beginner-friendly, making it accessible to a wide range of developers, including those new to machine learning and AI. Its code reads like pseudo-code, which is human-readable and intuitive. This readability reduces the learning curve and fosters collaboration among teams with diverse backgrounds.

Python has a vibrant and active community of developers, data scientists, and researchers. This community support translates into a wealth of resources, tutorials, and forums where individuals can seek help, share knowledge, and collaborate on projects. As a result, Python users benefit from continuous improvements, updates, and innovations.

Python is cross-platform, which means that it can be used to execute applications on Windows, macOS, and Linux, among other operating systems. This flexibility allows developers to work on their preferred environments and seamlessly transition between different platforms without worrying about compatibility issues.

Python's libraries, such as Pandas and NumPy, excel at data manipulation and analysis. These libraries facilitate tasks like data preprocessing, cleaning, and transformation, which are crucial for machine learning and AI projects. Python's ease of working with structured and unstructured data makes it a top choice for data-centric applications.
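A small sketch of typical preprocessing with Pandas and NumPy (the toy DataFrame, column names, and values are invented for illustration):

```python
import numpy as np
import pandas as pd

# A toy table with problems preprocessing must handle: inconsistent
# casing, a missing category, and a missing numeric value.
df = pd.DataFrame({
    "city": ["Boston", "boston", "Chicago", None],
    "price": [250_000, np.nan, 310_000, 180_000],
})

df["city"] = df["city"].str.title().fillna("Unknown")  # normalize and fill categories
df["price"] = df["price"].fillna(df["price"].mean())   # impute with the column mean
df["log_price"] = np.log(df["price"])                  # simple derived feature

print(df)
```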

Visualizing data is an essential aspect of data analysis and model evaluation. Python's libraries, like Matplotlib, Seaborn, and Plotly, provide versatile tools for creating informative and interactive data visualizations. Effective visualization aids in gaining insights from data and communicating findings effectively.
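A minimal Matplotlib sketch of a common model-evaluation plot (the accuracy numbers are synthetic):

```python
import matplotlib
matplotlib.use("Agg")  # off-screen backend so the script runs headless
import matplotlib.pyplot as plt
import numpy as np

# Training-curve plot: accuracy across epochs for train and validation sets.
epochs = np.arange(1, 11)
train_acc = 1 - 0.50 * np.exp(-0.30 * epochs)
val_acc = 1 - 0.55 * np.exp(-0.25 * epochs)

fig, ax = plt.subplots()
ax.plot(epochs, train_acc, label="train")
ax.plot(epochs, val_acc, label="validation")
ax.set_xlabel("epoch")
ax.set_ylabel("accuracy")
ax.set_title("Model accuracy per epoch")
ax.legend()
fig.savefig("accuracy.png")
```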

Python can seamlessly integrate with big data technologies such as Apache Hadoop and Spark. Libraries like PySpark enable data scientists to process and analyze massive datasets, making Python an ideal choice for AI applications that involve large-scale data processing.

Python has strong support for cloud services like AWS, Google Cloud, and Azure. Developers can leverage Python's libraries and SDKs to interact with cloud resources, enabling scalable and cost-effective deployment of machine learning and AI models.


Rise Of The Machine Learning: Deep Fakes Could Threaten Our Democracy – Forbes


Photos circulated on social media earlier this summer showing former U.S. President Donald Trump hugging and even kissing Dr. Anthony Fauci. The images weren't real of course, and they weren't the work of some prankster either. The images, which were generated with the aid of artificial intelligence-powered "Deep Fake" technology, were shared online by Florida Governor Ron DeSantis' rapid response team.

It was part of a campaign to criticize Trump for not firing Fauci, the former top U.S. infectious disease official who pushed for the Covid-19 restrictions at the height of the pandemic.

The use of Deep Fakes in the 2024 election is already seen as a major concern, and last month the Federal Election Commission began a process to potentially regulate such AI-generated content in political ads ahead of the election. Advocates have said this is necessary to safeguard voters from election disinformation.

For years, there have been warnings about the danger of AI, and most critics have suggested the machines could take over in a scenario similar to science fiction films such as The Terminator or The Matrix, where they literally rise up and enslave humanity.

Yet, the clear and present danger could actually be AI used to deceive voters as we head into the next primary season.

"Deep Fakes are almost certain to influence the 2024 elections," warned Dr. Craig Albert, professor of political science and graduate director of the Master of Arts in Intelligence and Security Studies at Augusta University.

"In fact, the U.S. Intelligence Community expected these types of social media influence operations to occur during the last major election cycle, 2022, but they did not occur to any substantial effect," Albert noted.

However, the international community has already witnessed sophisticated Deep Fakes in the Russia-Ukraine War. Although the most sophisticated of these came from Ukraine, it is certain that the government of Russia took notice and is planning on utilizing these in the near future, suggested Albert.

"Based on their history of social media information warfare and how they have impacted U.S. elections generally over the past near decade, it is almost assured that the U.S. can expect to see this during the 2024 election cycle," he added.

The threat from AI-generated content is magnified due to the fact that so many Americans now rely on social media as a primary news source. Videos from sources that paid to be "verified" on platforms such as X (formerly Twitter) and Facebook can go viral quickly, and even when other users question the validity of that content from otherwise unvetted sources, many will still believe it to be real.

It is made worse because there is so little trust in politicians today.

"The danger for the individuals is this practice can do a lot of damage to the image and trustworthiness of the person attacked and eventually there will be laws put in place that would more effectively penalize the practice," suggested technology industry analyst Rob Enderle of the Enderle Group.

"Identity theft laws might apply now once attorneys start looking into how to mitigate this behavior," Enderle continued. "It is one thing to accuse an opponent of doing something they didn't do, but crafting false evidence to convince others they did it should be illegal but the laws may have to be revised to more effectively deal with this bad behavior."

The political candidates, at all levels, shouldn't wait for the FEC to act. To restore election integrity, there should be calls for anyone seeking office not to employ Deep Fakes or other manipulated videos and photos as campaign tools.

"Beyond a doubt, all U.S. officials should agree to not engage in any social-media or cyber-enabled influence campaigns including Deep Fakes within the domestic sphere or for domestic consumption," said Albert. "Candidates should not endorse propaganda within the U.S. to impact voting behavior or policy construction at all. Engaging in Deep Fake creation or construction would fit within this category and ought to be severely restricted for candidates and politicians for ethical and national security reasons."

Yet, even if the candidates make such pledges, there will still be domestic and foreign operators who employ the technology. All of the political campaigns will likely be watching for such attacks, but voters will need to be vigilant as well. Much of this is actually pretty straightforward and obvious.

"One should never trust unverified, non-official sources of videos and sound bites," added Albert. "These are all easy to fake, manipulate, and distort, and for candidate pages, easy to create cyber-personas that aren't authentic. If videos, sound bites, or social media posts appear and seem to cause some form of emotional reaction in the public realm, that is a signal to be slow to judge the medium until it has been verified as authentic."

I am a Michigan-based writer who has contributed to more than four dozen magazines, newspapers and websites. I covered the Detroit bankruptcy for Reuters in 2014, and I currently cover international affairs for 19FortyFive and cybersecurity for ClearanceJobs.


What is the future of machine learning? – TechTarget

Machine learning algorithms generate predictions, recommendations and new content by analyzing and identifying patterns in their training data. These capabilities power widely used technologies such as digital assistants and recommendation algorithms, as well as popular generative AI tools including ChatGPT and Midjourney.

Although these high-profile examples of generative AI have recently captured public attention, machine learning has promising applications in contexts ranging from big data analytics to self-driving cars. And adoption is already widespread: In a recent survey by consulting firm McKinsey & Company, 55% of respondents said their organization had adopted AI in some capacity.

Many of the underlying concepts powering today's machine learning applications date back as far as the 1950s, but the 2010s saw several advances that enabled this widespread business use:

These developments moved AI and machine learning into the mainstream business realm. Popular AI use cases in today's workplaces include predictive analytics, customer service chatbots and AI-assisted quality control, among many others.

Machine learning developments are expected across a range of fields over the next five to 10 years. The following are a few examples:

Among the many possible use cases for machine learning, several areas are expected to lead adoption, including natural language processing (NLP), computer vision, machine learning in healthcare and AI-assisted software development.

With the rise in popularity of ChatGPT and other large language models (LLMs), it's no surprise that NLP is currently a major area of focus in machine learning. Potential NLP developments over the next few years include more fluent conversational AI, more versatile models and an enterprise preference for narrower, fine-tuned language models.

As recently as 2018, the machine learning field was overall more focused on computer vision than NLP, said Ivan Lee, founder and CEO of Datasaur, which builds data labeling software for NLP contexts. But over the past year, he's noticed a significant shift in the industry's focus.

"We're seeing a lot of companies that maybe haven't invested in AI in the last decade coming around to it," Lee said. "Industries like real estate, agriculture, insurance -- folks who maybe haven't spent as much time with NLP -- now they're trying to explore it."

As with other fields within machine learning, improvements in NLP will be driven by advances in algorithms, infrastructure and tooling. But NLP evaluation methods are also becoming an increasingly important area of focus.

"We're starting to see the evolution of how people approach fine-tuning and improving [LLMs]," Lee said. For example, LLMs themselves can label data for NLP model training. Although data labeling can't yet -- and likely shouldn't -- be fully automated, he said, partial automation with LLMs can expedite model training and fine-tuning.

Because language is essential to so many tasks, NLP has applications in almost every sector. For example, LLM-powered chatbots such as ChatGPT, Google Bard and Anthropic's Claude are designed to be versatile assistants for diverse tasks, from generating marketing collaterals to summarizing lengthy PDFs.

But specialized language models fine-tuned on enterprise data could provide more personalized and contextually relevant responses to user queries. For example, an enterprise HR chatbot fine-tuned on internal documentation could account for specific company policies when answering users' natural language questions.

"The beauty of [ChatGPT] is that you can try a million different queries," Lee said. "But in the business setting, you really want to narrow that scope down. ... It's OK if [a recipe generator] doesn't tell me the best travel plans for San Antonio, but it better be fully tested and really good at recipes."

Outside of LLMs, computer vision is among the top areas of machine learning seeing an uptick in enterprise interest, said Ben Lynton, founder and CEO of AI consulting firm 10ahead AI.

Like NLP, computer vision has applications across many industries. Adoption will likely be spurred by improvements in algorithms such as image classifiers and object detectors, as well as increased access to sensor data and more customized models. Possible trends in the realm of computer vision include the following:

In generative AI, image generators such as Dall-E and Midjourney are already used by consumers as well as in marketing and graphic design. Moving forward, advances in video generation could further transform creative workflows.

Lee is particularly interested in multimodal AI, such as combining advanced computer vision capabilities with NLP and audio algorithms. "Image, video, audio, text -- using transformers, you can basically boil everything down to this core language and then output whatever you'd like," he said. For example, a model could create audio based on a text prompt or a video based on an input image.

Machine learning in healthcare could accelerate medical research and improve treatment outcomes. Promising areas include early disease detection, personalized medicine and scientific breakthroughs thanks to powerful models such as the protein structure predictor AlphaFold.

Hospitals have begun adopting clinical decision support systems powered by machine learning to aid in diagnosis, treatment planning and medical imaging analysis. AI-assisted analysis of complex medical scans could help expedite diagnosis by identifying abnormalities -- for example, correcting corrupted MRI data or detecting heart defects in electrocardiograms.

A top area of focus is developing and automating patient engagement efforts with machine learning, said Hal McCard, an attorney at law firm Spencer Fane whose practice focuses on the healthcare sector. Machine learning models can analyze massive health data sets to better predict patient outcomes, enabling healthcare providers to develop more personalized, timelier interventions that improve adherence to treatment regimens.

Here, the biggest shift isn't the underlying technology, but rather the scale. "Machine learning for data-predicted solutions and population health is not a new concept," McCard said. Rather, what's changing is "how it's being applied and the effectiveness with which you can take that output and ... use it to drive better outcomes in patient care and clinical care."

NLP has also shown some promise for clinical decision-making and summarizing physician notes. But for the foreseeable future, implementation still requires close human oversight. In a recent study, ChatGPT provided inappropriate cancer treatment recommendations in a third of cases and produced hallucinations in nearly 13%.

"When it comes to clinical decision-making, there are so many subtleties for every patient's unique situation," said Dr. Danielle Bitterman, the study's corresponding author and an assistant professor of radiation oncology at Harvard Medical School, in a release announcing the findings. "A right answer can be very nuanced, and not necessarily something ChatGPT or another large language model can provide."

Machine learning is also changing technical roles by automating repetitive coding tasks and detecting potential bugs and security vulnerabilities.

Emerging generative tools such as ChatGPT, GitHub Copilot and Tabnine can produce code and technical documentation based on natural language prompts. Although human review remains essential, offloading initial writing of boilerplate code to AI can significantly speed up the development process.

Combined with NLP advances, this could mean more interactive, chat-based functionalities in future integrated development environments. "I think in the future, coding editors will have a more chat-based interface," said Jonathan Siddharth, co-founder and CEO of Turing, a company that matches developers with employers seeking technical talent. "Every software engineer [will have] an AI assistant beside them who they can talk to when they code."

In software testing and monitoring, using machine learning techniques such as anomaly detection and predictive analytics to parse log data can help IT teams predict system failures or identify bottlenecks. Similarly, AIOps tools could use machine learning to automatically scale resource allocations based on usage patterns and suggest more efficient infrastructure setups.
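As a concrete illustration, here is a minimal pure-Python sketch of the kind of anomaly detection described above, flagging log metrics that sit far from the mean. The latency figures and threshold are hypothetical, and production AIOps tools use far more sophisticated models:

```python
import statistics

def flag_anomalies(values, z_threshold=2.5):
    """Return indices of points whose z-score exceeds the threshold --
    a bare-bones stand-in for the anomaly detection described above."""
    mean = statistics.fmean(values)
    stdev = statistics.stdev(values)
    if stdev == 0:
        return []
    return [i for i, v in enumerate(values) if abs(v - mean) / stdev > z_threshold]

# Hypothetical per-minute request latencies (ms): the spike at index 5 is
# the kind of outlier that might precede a system failure.
latencies = [102, 98, 105, 99, 101, 950, 103, 97, 100, 104]
print(flag_anomalies(latencies))  # -> [5]
```

In practice, an IT team would run a check like this continuously over a sliding window of metrics and alert when any index is flagged.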

Although prompt engineering -- the practice of crafting queries for generative AI models that yield the best possible output -- has recently been a hot topic in the tech community, it's unlikely that prompt engineer will continue to be a standalone role as generative models become more adept. "I don't think 'prompt engineer' is going to be a position you're hired for," Lee said.

However, experts do expect fluency with generative AI tools to become an increasingly important skill for technical professionals. "In terms of software engineering, I think we're going to see more and more engineers who know how to prompt LLMs," Siddharth said. "I think it'll be a broadly applicable skill."

Enthusiasm and optimism abound, but implementing machine learning initiatives requires addressing practical challenges and security risks as well as potential social and environmental harms.

Adopting machine learning raises pressing ethical concerns, such as algorithmic bias and data privacy. On the technical side, integrating machine learning into legacy systems and existing IT workflows can be difficult, requiring specialized skills in machine learning operations, or MLOps, and engineering. And whether emerging generative AI tools will live up to the hype in real workplaces remains unclear.

In NLP, for example, human-level fluency remains far off, and it's unclear whether AI will ever truly replicate human performance or reasoning in open-ended scenarios. LLMs can generate convincing text, but lack common sense or reasoning abilities. Similar limitations exist for other areas, such as computer vision, where models still struggle with unfamiliar data and lack the contextual understanding that comes naturally to humans. Given these limitations, it's important to carefully choose the best machine learning approach for a given use case -- if machine learning is indeed necessary at all.

"There is a class of problems that can be solved with generative AI," Siddharth said. "There is an even bigger class of problems that can be solved with just AI. There's an even bigger class of problems that could be solved with good data science and data analytics. You have to figure out what's the right solution for the job."

Moreover, generative AI is often riskier to implement than other types of models, particularly for sectors such as healthcare that deal with highly sensitive personal data. "The generative solutions that seek to produce original content and things like that, I think, carry the most risk," McCard said.

In evaluating potential privacy risks for external products, McCard emphasized the importance of understanding a model's data sources. "It's a little bit unrealistic to think that you're going to get insight into the algorithm," he said. "So, understanding that it might not ultimately be possible to understand the algorithm, then I think the question turns to the data sources and the rights of use in the data sources."

The massive amounts of training data that machine learning models require make them costly and difficult to build. Increasing use of compute resources following the generative AI boom has strained cloud services and hardware providers, including an ongoing shortage of GPUs. Additional demands for specialized machine learning hardware could further exacerbate these supply chain issues.

This ties into another foundational challenge, Lynton said: namely, the state of a company's IT infrastructure. He gave the example of a consulting engagement with an industry-leading client whose accounting, procurement and customer data systems were all on different legacy systems that could not communicate with one another -- including two that were discontinued and unmaintainable.

"It's slightly terrifying, but this is a very common situation for many large companies," Lynton said. "The reason this is an issue for AI adoption is that most leadership teams are unaware of their IT landscape and so may budget X million [dollars] for AI, but then get little to no ROI because a great deal of it is wasted in trying to patch together their systems."

McCard raised a similar concern about readiness for implementation in healthcare settings. "I have serious questions about the ability of some of these tools, especially the generative tools, to interface or be interoperable with the electronic health record systems and other systems that these health systems are currently running," he said.

The hardware and computations required for machine learning initiatives also have environmental implications, particularly with the rise of generative AI. Training machine learning models involves high levels of carbon emissions, particularly for large models with billions of parameters.

"The main risk is that people generate more carbon by training AI models than their sustainability use cases could ever save," Lynton said. "This wasn't a huge problem with the more established fields ... but now with [generative AI], it's a real threat."

To mitigate climate impacts, Lynton suggests focusing on choosing computationally efficient models and measuring the environmental impact of an AI project from start to finish. More efficient model architectures mean shorter training times and, in turn, a smaller carbon footprint.

Enterprise interest in machine learning is on the rise, with investment in generative AI alone projected to grow four times over the next two to three years.

"AI transformation is the new digital transformation," Siddharth said. "Every large enterprise company that I meet is thinking about what their AI strategy should be." Specifically, he said, companies are interested in exploring how AI and machine learning can help them better serve users or improve operational efficiency.

But in practice, not all companies are ready for the transition. For many enterprises, AI and machine learning are "still surprisingly a box-ticking exercise or risky investment, more than an accepted necessity," Lynton said. In many cases, an order comes down to "incorporate AI into the business," without further detail on what that actually entails, he said.

Moving forward, ensuring success in enterprise machine learning initiatives will require companies to slow down, rather than rushing to keep up with the AI hype. Start small with a pilot project, get input from a wide range of teams, ensure the organization's data and tech stacks are modernized, and implement strong data governance and ethics practices.

Lynton suggests taking an automation-first strategy. Rather than going full steam ahead on a complex AI initiative, start by automating five manual, repetitive and rules-based processes, such as a daily data entry task that involves entering a report from a procurement system into a separate accounting system.

These automation use cases are typically cheaper and show ROI more quickly compared with complex machine learning applications. Thus, an automation-first strategy can quickly give leaders a picture of their organization's readiness for an AI initiative -- which, in turn, can help prevent costly missteps.

"In a lot of cases, the outcome is that they are not [ready], and it's more important to first upgrade [or] combine some legacy systems," Lynton said.

Read more from the original source:
What is the future of machine learning? - TechTarget

Machine learning-based diagnosis and risk classification of … –

The workflow of the current study is presented in Fig. 1. The following sections are dedicated to the description of data acquisition, radiomic feature extraction, and the diagnostic modeling framework, including feature selection methods, machine learning algorithms, and the process of evaluation and comparison of the models.

Workflow of the proposed radiomics models for automated diagnosis of coronary artery disease and risk classification from rest/stress myocardial perfusion imaging using single-photon emission computed tomography.

A total of 395 patients suspected of coronary artery disease who underwent 2-day stress-rest protocol MPI SPECT were enrolled in this study. All the data were anonymized and used without any intervention in patients' diagnosis, treatment, or management. The study was approved by the institutional review board (IRB) of Shahid Beheshti University of Medical Sciences (IRB code: IR.SBMU.MSP.REC.1399.368). Informed consent was waived for all subjects by the same IRB. All methods were performed in accordance with the relevant guidelines and regulations. To emulate a real clinical scenario, we did not apply any conditional inclusion/exclusion criteria to the dataset. It is noteworthy, however, that the enrolled dataset did not include patients with myocardial infarction.

SPECT imaging was performed for all patients with a 2-day stress-rest myocardial perfusion protocol. Both rest and stress (induced by exercise, dipyridamole, or dobutamine) myocardial perfusion images were included in this study. On average, 555 to 925 MBq of 99mTc-MIBI was administered intravenously based on published guidelines [37, 38]. For the exercise stress protocol, the radiopharmaceutical was injected when the patient's heart rate reached 85% of its maximum value. Exercise testing was continued for at least 1 min after injection of the radiopharmaceutical to maintain constant maximal cardiac oxygen demand. For the pharmacological stress test, dipyridamole was injected at a dose of 0.56 mg/kg over 4 min (or dobutamine at a dose of 5 to 10 μg/kg every 3 to 5 min), followed by injection of the radiopharmaceutical after three minutes [39]. Image acquisition was performed 15–20 min and 60 min post-injection for the exercise and pharmacologic stress tests, respectively [40].

The images were acquired on a single-head gamma camera (Intermedical MULTICAM 1000, Germany) using 32 projections over a 180° arc from right anterior oblique to left posterior oblique, at 30 s per projection, with a matrix size of 64 × 64 and pixel dimensions of 5.357 × 5.357 mm². Supine stress imaging began 15 to 60 min after stress.

Two nuclear medicine physicians reviewed each patient's gated MPI SPECT along with additional clinical information and history, and classified patients as normal or diagnosed with CAD. Moreover, CAD-positive patients were classified into low-, intermediate-, and high-risk groups. The ground truth was established by consensus between the two physicians, and in cases where there was no agreement, a senior nuclear medicine physician made the final decision. Patients' clinical information included prior MPI SPECT, blood pressure, echocardiography results, ECG and exercise test results, hyperlipidemia, body mass index (BMI), and diabetes mellitus status. It is noteworthy that the physicians had access to the traditional quantitative SPECT scores, such as the Summed Stress Score (SSS), Summed Rest Score (SRS), and Summed Difference Score (SDS), as well as wall motion and thickening information from the gated datasets and the raw SPECT projections.

The dataset included 78 normal and 317 CAD patients, comprising 135 low-, 127 intermediate-, and 55 high-risk patients. The patients' demographic information is summarized in Table 1.

The left ventricular myocardium, excluding the cardiac cavity, was manually segmented using the 3D Slicer software package [41] by a nuclear medicine technologist with more than ten years of experience, and was edited/verified by an experienced nuclear medicine physician.

The Image Biomarker Standardisation Initiative (IBSI) [42] suggests interpolating images to isotropic voxel sizes to obtain rotationally invariant features and to standardize the voxel size across images. However, in our dataset, all scans already had an isotropic voxel spacing of 5.357 × 5.357 × 5.357 mm³; hence, we kept them intact to avoid further manipulation of the intensities. In addition, intensity levels inside the VOI were discretized to 64 grey levels to ease the calculation of texture features. The radiomic features were calculated using the Standardized Environment for Radiomics Analysis (SERA) [43], a MATLAB-based package compliant with the IBSI guideline. For the purpose of validating reproducibility, this package has been evaluated in multi-center standardization studies [44]. A total of 118 features, including 13 intensity-based, 12 intensity-histogram (ih), 3 intensity-volume-histogram (ivh), and 90 3D textural features (25 grey-level co-occurrence matrix (GLCM), 16 grey-level run-length matrix (GLRLM), 16 grey-level size-zone matrix (GLSZM), 12 grey-level distance-zone matrix (GLDZM), 5 neighborhood grey-tone difference matrix (NGTDM), and 16 neighborhood grey-level dependence matrix (NGLDM) features), were extracted for each VOI. Absolute-value first-order statistical features (min, max, average, etc.) were considered irrelevant since MPI SPECT images are not quantitative [36]. Morphological features were also irrelevant since the VOI was always the whole left ventricular myocardium. The families, names, and abbreviations of the extracted features are listed in Supplementary Table S1.
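As an illustration of the discretization step, here is a minimal Python sketch of fixed-bin-number grey-level discretization in the style suggested by IBSI. The intensity values are hypothetical, and the SERA pipeline may use a different binning rule:

```python
def discretise_fixed_bins(intensities, n_bins=64):
    """Map each intensity inside the VOI to one of n_bins grey levels
    (fixed-bin-number discretisation), as done before computing
    texture features such as the GLCM."""
    lo, hi = min(intensities), max(intensities)
    if hi == lo:
        return [1] * len(intensities)
    levels = []
    for x in intensities:
        level = int((x - lo) / (hi - lo) * n_bins) + 1
        levels.append(min(level, n_bins))  # the top edge maps to n_bins
    return levels

# Hypothetical SPECT count values inside a segmented VOI.
voi = [0.0, 12.5, 37.1, 99.9, 100.0]
print(discretise_fixed_bins(voi, n_bins=64))  # -> [1, 9, 24, 64, 64]
```

Discretizing to a fixed number of grey levels keeps the texture matrices small and makes them comparable across patients with different count ranges.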

In this section, we introduce the different links in the chain of the proposed automated diagnostic framework, including the establishment of diagnostic tasks and feature sets, feature selection, classifiers, and the model evaluation process.

Two diagnostic tasks were defined in this study for the models.

(1) The first task is CAD diagnosis: classification of patients as CAD-negative or CAD-positive (normal/abnormal classification).

(2) The second task is risk diagnosis: classification of patients into low-risk (CAD-negative and low-risk CAD) and high-risk (intermediate- and high-risk CAD) groups. Table 2 lists the tasks and their descriptions.

Rest, stress, delta, and combined (combination of all) radiomics feature sets were each added to clinical features, including age, sex, family history, diabetes status, smoking status, and ejection fraction (calculated from the SPECT images), and fed into the different models for diagnostic tasks 1 and 2.

The data were randomly divided into 80% and 20% training and testing partitions. In all models, features extracted from the training dataset were normalized using the Z-score, and the resulting mean and standard deviation were applied to the corresponding features extracted from the test dataset. Many of the extracted features may not correlate with the investigated outcome (irrelevant features) or may correlate highly with each other (redundant features). Such features provide no new information and should therefore be excluded. We used three different FS methods, one filter-based, Maximum Relevance Minimum Redundancy (mRMR) [45], and two wrapper-based, Boruta [46] and Recursive Feature Elimination [47] with Random Forest as the core machine (RF-RFE). Since the dataset used for task 1 was imbalanced (78 normal vs. 317 abnormal patients), after feature selection we applied the Synthetic Minority Over-sampling Technique (SMOTE) to the training data with the selected features to correct for plausible biases [48].
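The train/test normalization step can be sketched in a few lines of Python. This is a minimal illustration with hypothetical feature values, not the authors' actual implementation:

```python
import statistics

def zscore_fit(column):
    """Compute mean and standard deviation on the training partition only."""
    return statistics.fmean(column), statistics.stdev(column)

def zscore_apply(column, mean, stdev):
    """Standardize a feature column using previously fitted statistics."""
    return [(x - mean) / stdev for x in column]

# Hypothetical values of a single radiomic feature.
train = [2.0, 4.0, 6.0, 8.0]
test = [5.0, 10.0]

mean, stdev = zscore_fit(train)            # fitted on training data only
train_z = zscore_apply(train, mean, stdev)
test_z = zscore_apply(test, mean, stdev)   # test data never influences the scaling
print(test_z)
```

Fitting the statistics on the training partition alone, as described above, prevents information from the test set from leaking into the model.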

Classification of the patients was performed using nine different machine learning methods, namely Decision Tree (DT), Gradient Boosting (GB), K-Nearest Neighbor (KNN), Logistic Regression (LR), Multi-Layer Perceptron (MLP), Naïve Bayes (NB), Random Forest (RF), Support Vector Machine (SVM), and eXtreme Gradient Boosting (XGB). The hyperparameters were optimized with fivefold cross-validation on the training data, using random search for models with more than 100 different parameter settings (XGB and RF) and grid search for models with fewer than 100. Subsequently, the optimum parameters were applied to the test data with 1000 bootstraps. The hyperparameters for each classifier and the ranges of their values are presented in Table 3. All FS and ML models were selected based on their public availability to increase the reproducibility of the study.
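The bootstrap evaluation on the test partition can be sketched as follows -- a minimal pure-Python illustration with hypothetical labels that reports a 95% percentile interval for accuracy; the authors' pipeline may compute intervals differently:

```python
import random

def bootstrap_accuracy(y_true, y_pred, n_boot=1000, seed=0):
    """Resample the test set with replacement and return a 95% percentile
    interval for accuracy, mirroring a 1000-bootstrap evaluation."""
    rng = random.Random(seed)
    n = len(y_true)
    scores = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]  # one bootstrap resample
        correct = sum(1 for i in idx if y_true[i] == y_pred[i])
        scores.append(correct / n)
    scores.sort()
    return scores[int(0.025 * n_boot)], scores[int(0.975 * n_boot)]

# Hypothetical held-out labels and model predictions.
y_true = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 1, 1, 0]
print(bootstrap_accuracy(y_true, y_pred))
```

Repeating the metric over many resamples of the fixed test set gives a sense of how stable each model's performance estimate is.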

The area under the ROC curve (AUC), accuracy (ACC), sensitivity (SEN), and specificity (SPE) metrics were used to evaluate the performance of the models. In addition, the performance of the best models was statistically compared using the DeLong test (significance threshold < 0.05). All analyses were performed using R 4.0 (mlr library version 2.18).
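Three of these metrics follow directly from the confusion-matrix counts. A minimal Python sketch with hypothetical labels (AUC, which requires ranked scores rather than hard labels, is omitted):

```python
def classification_metrics(y_true, y_pred):
    """ACC, SEN and SPE from binary labels (1 = CAD-positive, 0 = normal)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {
        "ACC": (tp + tn) / (tp + tn + fp + fn),
        "SEN": tp / (tp + fn),  # true-positive rate
        "SPE": tn / (tn + fp),  # true-negative rate
    }

# Hypothetical predictions on ten held-out patients.
y_true = [1, 1, 1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 1, 1, 0, 0, 0, 1, 0]
print(classification_metrics(y_true, y_pred))
```

Reporting sensitivity and specificity alongside accuracy matters here because the dataset is imbalanced: a model that labeled every patient CAD-positive would score 80% accuracy but 0% specificity.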

Read this article:
Machine learning-based diagnosis and risk classification of ... -

Machine Learning for .NET Developers Starts with ML.NET and … – Visual Studio Magazine


There's no more important topic than machine learning in the developer space right now as advanced AI constructs like ChatGPT and Microsoft's "Copilot" assistants are transforming the industry.

With software development's AI-powered future already announcing itself, putting AI to work in your apps right now is a good way to position yourself for that future.

"There are so many opportunities for developers to include it in a wide variety of applications," said Microsoft MVP (AI) Veronika Kolesnikova, a senior software engineer at Liberty Mutual.

She will help developers take advantage of those opportunities in an upcoming session titled "Machine Learning for .NET Developers" at the big Live! 360 conference set for November in Orlando.

"If you heard a lot about data collection and processing, creating and training models and ready to try it all yourself with the help of .NET, I'll show you where to start," said Kolesnikova, who has extensive experience in full-stack development using C#, .NET, Java and Typescript, Azure and AWS clouds.

For this session, of course, it's all about Azure and its built-in tooling that helps devs get started in ML regardless of their experience.

"Your first completely functional ML.NET project won't take too long to create. In this session we'll talk about the specifics of ML.NET, its applications and see how Auto ML can be a good starting point," said the sought-after public speaker. She promises attendees will learn:

We recently caught up with Kolesnikova to learn more about her session in a short Q&A.

VisualStudioMagazine: What inspired you to present a session on machine learning for .NET developers?

Kolesnikova: The goal of all my talks is to show developers how easy it is to start working with ML and AI without a lot of experience. Developers don't need to be professional data scientists to start using ML in their applications. I feel like .NET developers can feel extra intimidated by ML: they might feel they not only need special education but also have to learn data-science-specific languages like Python and R.

"With this talk I want .NET developers to see how they can use their favorite development language for their ML tasks and feel inspired to start creating and using custom ML models."

With machine learning being integrated across various tech stacks, what sets ML.NET apart from other machine learning libraries, especially for .NET developers?

ML.NET allows .NET developers to build custom models that can be used everywhere -- in the cloud, on-premises, on a device, etc. -- without the need to switch between development languages. ML.NET can also be used to work with models trained with other technologies.

For developers new to machine learning, the tech and its processes can seem daunting. How does ML.NET simplify or streamline the learning curve, and what prerequisites would you suggest for those attending the session?

I think the main benefit of ML.NET that helps simplify the process and save time is less context switching -- writing all the parts of the solution using one language and tech stack. Another very important feature is AutoML support. Although attendees don't need to know anything about ML, I would recommend taking a look at how ML works and what model types/algorithms are available.

You've mentioned AutoML as a potential starting point. Could you expand on just one of its benefits, especially in the context of .NET development, and how it integrates with ML.NET?

AutoML saves a lot of time and effort when creating custom models. By saving time on routine tasks, .NET developers can focus on other important tasks: solution architecture, model integration, etc.

The Model Builder tool is a highlight of ML.NET. In your experience, how has it changed the way developers approach building and training machine learning models?

AutoML is at the core of Model Builder, so it allows developers to jump-start an ML-based solution even without any experience. The democratization of ML and AI in general makes it easy for anyone to start using ML. Oh! Did I say Model Builder was free?!

Machine learning models are only as good as the data they're trained on. How does ML.NET facilitate the process of data collection and processing, ensuring the creation of robust models?

ML.NET supports developers in all steps of the ML lifecycle: data organization and cleanup, model training and testing, retraining and MLOps. Before training a custom model, it's important to understand the Responsible AI principles and data cleanup options. ML.NET has all the tools a developer needs for data preparation. Examples of data cleanup functions from ML.NET include ReplaceMissingValues, NormalizeMinMax and NormalizeBinning.

As the technology landscape constantly evolves, where do you see the future of ML.NET in the broader spectrum of AI and machine learning, and what advice would you give developers looking to specialize in this area using .NET?

ML.NET is constantly evolving to keep up with all the latest ML features. With the increasing popularity of AI, more and more developers will want to start training and using custom models. I'm sure ML.NET can be a great starting point in .NET developers' machine learning journey. It's great as both a learning tool and a production-ready tool. In the end, everyone will decide for themselves how far they want to go: work with data, build custom models, use pre-built models, create an MLOps setup, combine ML.NET with other languages and tools, etc.

Note: Those wishing to attend the conference can save hundreds of dollars by registering early, according to the event's pricing page. "Save up to $400 if you register by September 22!" said the organizer of the event, which is presented by the parent company of Virtualization & Cloud Review.

About the Author

David Ramel is an editor and writer for Converge360.

Machine Learning for .NET Developers Starts with ML.NET and ... - Visual Studio Magazine

New Rice Continuing Studies course to explore generative AI … – Rice News

The Glasscock School of Continuing Studies at Rice will host a course exploring the possibilities and potential perils of generative artificial intelligence (AI) starting Sept. 20.

The course, titled "Generative Artificial Intelligence and Humanity," is open to the public and will examine machine learning and related tools like ChatGPT as they affect various aspects of human life, including education, work, health, creativity, equity, justice, democracy and what it means to be human.

Taught by Rice faculty, the course aims to provide a comprehensive overview of the latest developments in AI and its potential impacts on society.

"A primary part of the Glasscock School's mission is to provide community access to Rice faculty and the incredible and transformative research that is taking place on our campus," said Robert Bruce, dean of the Glasscock School.

"Additionally, we exist to inform and equip our city with the latest knowledge and skills needed to navigate work and life. This course is a prime example of both of those principles. As the proliferation of AI applications has exponentially accelerated just this year, we are excited to give Houstonians access to some of the leading scholars on the subject to help them understand and navigate this brave new world."

Through a series of lectures, discussions and hands-on exercises, students will explore case studies from various domains to gain a deeper understanding of the potential benefits and drawbacks of these technologies. They will also learn about strategies for ensuring that AI is used in ways that promote equity and justice.

Topics and speakers will include:

AI and Democracy, Moshe Vardi

Understanding Generative AI and Machine Learning: How Machines Learn and Decide, Vicente Ordóñez-Román

A History of the Limitations and Possibilities of Artificial Intelligence, Elizabeth Petrick

How Generative AI May Reshape the Workforce, Fred Oswald

Responsible AI for Health, Kirsten Ostherr

What It Means to Be Human in an Age of AI: Philosophical and Ethical Issues, Rodrigo Ferreira

How Human Is AI Creativity? Anthony Brandt

Generative AI and Education, Richard Baraniuk

"It's remarkable how many Rice faculty across disciplines are researching and teaching about the societal impact of generative AI," said Cathy Maris, the Glasscock School's assistant dean for Community Learning and Engagement. "No one field has the solutions to these complex challenges. This course gives the public access to speakers from the fields of computer science, history, psychology, English, medical humanities and music. We hope this confluence of perspectives will offer powerful insights for and with our community."

The course will be held on campus from 7-8:30 p.m. every Wednesday night from Sept. 20 through Nov. 8.

To learn more, click here.

See more here:
New Rice Continuing Studies course to explore generative AI ... - Rice News

Scientist in Molecular Engineering by Machine Learning job with … –

The Hospital for Sick Children (SickKids) Research Institute seeks an outstanding scientist whose research is focused on the development and utilization of computational machine learning approaches for the design and engineering of biomolecules. Designed molecular biologics - including antibodies, nanobodies, miniproteins, vaccines, enzymes, toxins, peptides, and nucleic acids - are poised to revolutionize biomedical research and therapeutic discovery. This permanent position lies at the interface between computational design, structural biology, and therapeutic development, and aligns with our SickKids Precision Child Health strategic initiative.

The successful candidate will be appointed as a Scientist in the Molecular Medicine research program at the SickKids Research Institute. SickKids is a world-renowned paediatric hospital with seven fully integrated research programs. The successful applicant's laboratory will be located in the state-of-the-art Peter Gilgan Centre for Research & Learning (686 Bay Street, Toronto, Canada), in the Discovery District in the heart of downtown Toronto. This unique environment for biomedical science sits in close proximity to nine other academic hospital research centres and the University of Toronto campus.

The successful applicant will initiate and maintain an original, competitive, and independently funded research program of international caliber in the area of biomolecular design using machine learning in conjunction with experimental approaches including functional assays, structure determination, biophysical characterization and/or directed evolution. Designed biologics would be applied as probes of biological function and/or candidate therapeutic leads across the breadth of paediatric medicine. The successful candidate will benefit from the extensive research and core facilities of SickKids, the University of Toronto and its affiliated institutions for structural biology, biophysics, drug discovery, cellular imaging, functional genomics, proteomics, metabolomics, bioinformatics, computational biology, machine learning, and artificial intelligence, as well as new inter-institutional initiatives focused on biologics and therapeutic design.

The successful applicant is expected to qualify for an academic status-only appointment in an appropriate department at the University of Toronto, Canada's largest university and a world leader in machine learning. The successful candidate will also be considered by the Vector Institute for appointment as a Faculty Member or Faculty Affiliate. Vector is home to over 700 active researchers with broad expertise in artificial intelligence, including Faculty Members, Faculty Affiliates, and trainees in a world-class machine learning research environment. Vector is supported by government and private industry, in partnership with Ontario universities. Faculty who are co-recruited with Vector benefit from access to high-performance computing capacity and resources for cutting-edge artificial intelligence and machine learning research at Vector.

Applicants must have a PhD, MD, or MD/PhD or equivalent in a relevant discipline and a record of scientific accomplishments in the aforementioned research areas. Salary will be commensurate with qualifications and experience. A competitive benefits package will be offered along with support for relocation expenses.

Application Process

Interested individuals should email their application, comprised of a curriculum vitae, a description of past research (maximum 1 page), a detailed proposed research program (maximum 4 pages), and copies of main research publications in PDF format, to the Co-Chairs, Molecular Engineering by Machine Learning Search Committee at by November 7, 2023. Applicants must also arrange to have three signed letters of reference on institutional letterhead sent directly to the Search Committee Chairs at, indicating the applicant's name in the subject line, also by November 7, 2023. Late applications may be reviewed, but priority will be given to those submitted by the closing date. The search committee will interview applicants beginning in late 2023, with a potential start date in summer or fall 2024.

SickKids believes that diversity positively impacts science and is essential to sustain our vibrant world-leading research community. SickKids welcomes applications from racialized persons / persons of colour, women, Indigenous Peoples, persons with disabilities, 2SLGBTQIA+ persons, and others who contribute to the further diversification of ideas. Informed by the Accessibility for Ontarians with Disabilities Act (AODA), the Ontario Human Rights Code, and our Access and Accommodation Policy, SickKids is proud to make accommodations to support applicants during the interview and assessment process, if requested. Please advise the SickKids Research Institute Faculty Development & Diversity Office of your accessibility needs during the recruitment process. Information received relating to accommodation will be addressed confidentially. As part of the application process, you will be asked to complete a brief voluntary diversity survey. Any information directly related to you is confidential and cannot be accessed by either the search committee or human resources staff. Results will be aggregated for institutional planning purposes. The self-identification information is collected, used, disclosed, retained and disposed of in accordance with the Privacy Act and the Access to Information Act.

SickKids recognizes that scholars have varying career paths and that career interruptions can be part of an excellent academic record. Candidates are encouraged to share any personal circumstances in order to allow for a fair assessment of their application.

All qualified applicants are encouraged to apply; however, in accordance with Canadian immigration requirements, Canadians and permanent residents will be given priority. The successful candidate will hold an appropriate and valid work permit, if applicable. Only those applicants selected for the interview will be contacted. If a practicing MD, the successful candidate must hold or be eligible for licensure with the College of Physicians and Surgeons of Ontario.

Applicants may direct any informal inquiries to:

Co-Chairs, Molecular Engineering by Machine Learning Search Committee

SickKids Research Institute - The Peter Gilgan Centre for Research & Learning

686 Bay Street Toronto, Ontario Canada M5G 0A4

See the original post:
Scientist in Molecular Engineering by Machine Learning job with ... -

Machine learning helps identify metabolic biomarkers that could … – News-Medical.Net

A novel study from the University of South Australia has identified a range of metabolic biomarkers that could help predict the risk of cancer.

Image Credit: University of South Australia

Deploying machine learning to examine data from 459,169 participants in the UK Biobank, the study identified 84 features that could signal increased cancer risk.

Several markers also signaled chronic kidney or liver disease, highlighting the significance of exploring the underlying pathogenic mechanisms of these diseases for their potential connections with cancer.

The study, Hypothesis-free discovery of novel cancer predictors using machine learning, was conducted by UniSA researchers Dr Iqbal Madakkatel, Dr Amanda Lumsden, Dr Anwar Mulugeta, and Professor Elina Hyppönen, with the University of Adelaide's Professor Ian Olver.

We conducted a hypothesis-free analysis using artificial intelligence and statistical approaches to identify cancer risk factors among more than 2800 features. More than 40% of the features identified by the model were found to be biomarkers (biological molecules that can signal healthy or unhealthy conditions depending on their status), and several of these were jointly linked to cancer risk and kidney or liver disease.

Dr Iqbal Madakkatel, Researcher, UniSA
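The hypothesis-free approach the researchers describe (screening thousands of candidate features and letting the model surface the predictive ones) can be illustrated with a deliberately simplified sketch. The snippet below ranks features by absolute correlation with the outcome over synthetic data; the feature names, noise levels, and the correlation-based screening statistic are all illustrative assumptions for this toy example, not the study's actual pipeline, which applied machine-learning models to UK Biobank data.

```python
import random

def pearson(xs, ys):
    """Plain Pearson correlation coefficient, stdlib only."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

def screen_features(data, outcome, top_n=5):
    """Rank features by |correlation| with the outcome -- a crude
    stand-in for the model-based importance ranking used in the study."""
    scores = {name: abs(pearson(vals, outcome)) for name, vals in data.items()}
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

# Synthetic cohort: two informative "biomarkers" and one noise feature
# (names are purely illustrative, not real study variables).
random.seed(0)
n = 500
risk = [random.random() for _ in range(n)]
data = {
    "urinary_microalbumin": [r + random.gauss(0, 0.2) for r in risk],
    "cystatin_c":           [r + random.gauss(0, 0.4) for r in risk],
    "shoe_size":            [random.gauss(0, 1) for _ in range(n)],
}
print(screen_features(data, risk, top_n=2))
```

Running the sketch surfaces the two informative features and discards the noise variable, mirroring in miniature how a hypothesis-free screen lets the data, rather than a prior hypothesis, pick out candidate risk predictors.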

Dr Amanda Lumsden says this study provides important information on mechanisms which may contribute to cancer risk.

After age, high levels of urinary microalbumin were the strongest predictor of cancer risk. Albumin is a serum protein needed for tissue growth and healing, but when present in your urine, it's not only an indicator of kidney disease but also a marker for cancer risk.

Similarly, other indicators of poor kidney performance such as high blood levels of cystatin C, high urinary creatinine (a waste product filtered by your kidneys), and overall lower total serum protein were also linked to cancer risk.

We also identified that greater red cell distribution width (RDW), the variation in the size of your red blood cells, is associated with increased risk of cancer.

Normally, your red blood cells should be about the same size, and when there are discrepancies, it can correlate with higher inflammation, poorer renal function, and, as this study shows, a higher risk of cancer.

Additionally, the study found that high levels of C-reactive protein, an indicator of systemic inflammation, were connected to increased cancer risk, as were high levels of the enzyme gamma glutamyl transferase (GGT), a liver stress-related biomarker.

Chief investigator Professor Elina Hyppönen, Centre Director of the Australian Centre for Precision Health at UniSA, says the strength of this study lies in the machine learning.

Using artificial intelligence, our model has shown that it can incorporate and cross-reference thousands of features and identify relevant risk predictors that may otherwise remain hidden, Prof Hyppönen says.

It is interesting that while our model incorporated information on thousands of features, including clinical, behavioral, and social factors, so many were biomarkers, which reflect the metabolic state before cancer diagnoses.

While further studies are needed to confirm causality and clinical relevance, this research suggests that with relatively simple blood tests it may be possible to gain information about our future risk of cancer. This is important as it can then allow us to act early, at a stage when it may still be possible to prevent the disease.

Read the original here:
Machine learning helps identify metabolic biomarkers that could ... - News-Medical.Net

Amazons Rajeev Rastogi on AI and Machine Learning revolutionising workplace trends – People Matters

Machine Learning and AI technologies are transforming the future of work by enabling data-driven decision-making and automating routine tasks. In an exclusive conversation with People Matters, Rajeev Rastogi, Vice President of International Machine Learning at Amazon India, shared how programmes designed for the evolving landscape prepare both young and experienced workers for emerging job roles.

Over the past decade, the trajectory of machine learning has been on a continuous upswing, gaining traction across various industries. It is being aggressively adopted by the manufacturing, financial services, retail, transportation, agriculture, and healthcare sectors, among many others. Machine learning has become a significant lever in solving customer problems, and the demand for machine learning roles is expected to increase significantly among employers. A study by the World Economic Forum projects that AI, machine learning, and data segments will be among the top emerging job roles in India over the next five years, while the talent pool is expected to remain the same.

Skill shifts have accompanied the introduction of modern technologies in the workplace since at least the Industrial Revolution, and the adoption of machine learning will mark an acceleration over the shifts of even the recent past.

Training, workshops, and initiatives help talent prepare for crucible experiences and become their best selves. Machine Learning Summer School is a good example of a platform to help foster ML excellence and strive towards developing applied science skills in young talent.

Since the advent of remote work, companies have been adopting technology to improve virtual collaboration and tapping into the promise of AI and machine learning to reinvent techniques for boosting employee engagement, productivity, and well-being.

Machine learning and data science are advanced tools used to analyze data and enhance decision-making. An informed decision about pursuing a career in this field may be easier to make if you are aware of the differences between data science and machine learning.

At Amazon India, we believe in fostering a culture of growth and providing equal opportunities for all individuals to reach their full potential. Our commitment to equality extends to various communities, including women, LGBTQIA individuals, military veterans, and differently-abled individuals. We value the unique perspectives that each person brings to our workplace, recognizing the immense value they add to Amazon India.

Amazon's upskilling and reskilling initiatives play a pivotal role in ensuring the workforce is prepared for the machine learning era. While advanced skills will be in demand, Amazon also emphasizes the importance of basic digital literacy. This recognition stems from the understanding that in an age of machine learning, everyone should have the foundational skills to navigate the digital landscape effectively.

Read the rest here:
Amazons Rajeev Rastogi on AI and Machine Learning revolutionising workplace trends - People Matters