Category Archives: Machine Learning

New machine-learning algorithms can help optimize the next … – News-Medical.Net

Antibody treatments may be able to activate the immune system to fight diseases like Parkinson's, Alzheimer's and colorectal cancer, but they are less effective when they bind to themselves or to other molecules that aren't markers of disease.

Now, new machine-learning algorithms developed at the University of Michigan can highlight problem areas in antibodies that make them prone to binding non-target molecules.

"We can use the models to pinpoint the positions in antibodies that are causing trouble and change those positions to correct the problem without causing new ones."

Peter Tessier, the Albert M. Mattocks Professor of Pharmaceutical Sciences at U-M and corresponding author of the study in Nature Biomedical Engineering

The models are useful because they can be used on existing antibodies, brand new antibodies in development, and even antibodies that haven't been made yet.

Antibodies fight disease by binding specific molecules called antigens on disease-causing agents, such as the spike protein on the virus that causes COVID-19. Once bound, the antibody either directly inactivates the harmful viruses or cells or signals the body's immune cells to do so.

Unfortunately, antibodies designed to bind their specific antigens very strongly and quickly can also bind non-antigen molecules, which removes the antibodies before they target a disease. Such antibodies are also prone to binding with other antibodies of the same type and, in the process, forming thick solutions that don't flow easily through the needles that deliver antibody drugs.

"The ideal antibodies should do three things at once: bind tightly to what they're supposed to, repel each other and ignore other things in the body," Tessier said.

An antibody that doesn't check all three boxes is unlikely to become a successful drug, but many clinical-stage antibodies can't. In their new study, Tessier's team measured the activity of 80 clinical-stage antibodies in the lab and found that 75% of them bound the wrong molecules, one another, or both.

Changing the amino acids that make up an antibody, and in turn the antibody's 3D structure, could prevent antibodies from misbehaving, because an antibody's structure determines what it can bind. But some changes could cause more problems than they fix, and the average antibody has hundreds of amino acid positions that could be changed.

"Exploring all the changes for a single antibody takes about two workdays with our models, which is substantially shorter than experimentally measuring each modified antibody, which would take months at best," said Emily Makowski, a recent Ph.D. graduate in pharmaceutical sciences and the study's first author.

The team's models, which are trained on the experimental data collected from clinical-stage antibodies, can identify how to change antibodies so they check all three boxes with 78% to 88% accuracy. This narrows the number of antibody changes that chemical and biomedical engineers need to manufacture and test in the lab.

"Machine learning is key for accelerating drug development," said Tiexin Wang, a doctoral student in chemical engineering and study co-author.

Biotech companies are already beginning to recognize machine learning's potential to optimize the next generation of therapeutic antibodies.

"Some companies have developed antibodies that they are really excited about because they have a desired biological activity, but they know they are going to have problems when they try to use these antibodies as drugs," Tessier said. "That's where we come in and show them the specific spots in their antibodies that need to be fixed, and we are already helping some companies do this."

The research was funded by the Biomolecular Interaction Technology Center, National Institutes of Health, National Science Foundation and Albert M. Mattocks Chair, and it was conducted in collaboration with the Biointerfaces Institute and EpiVax Inc.

The University of Michigan and Sanofi have filed a patent application for the experimental method that provided the data used to train the algorithm.

Tessier has received honoraria for invited presentations on this research from GlaxoSmithKline, Bristol Myers Squibb, Janssen and Genentech.

Tessier is also a professor of chemical engineering and biomedical engineering.


Journal reference:

Makowski, E. K., et al. (2023). Optimization of therapeutic antibodies for reduced self-association and non-specific binding via interpretable machine learning. Nature Biomedical Engineering.

View original post here:
New machine-learning algorithms can help optimize the next ... - News-Medical.Net

Meeranda, the Human-Like AI, Welcomes Recognized Machine … – Canada NewsWire

TORONTO, Sept. 14, 2023 /CNW/ - Meeranda, a privately held Artificial Intelligence (AI) solutions provider, serving both Small and Medium Businesses (SMBs) and Global Multinational Corporations (MNCs), announced today that Francesca Lazzeri, Ph.D., has joined Meeranda's Advisory Board.

Dr. Lazzeri's expertise lies in the field of applied machine learning and AI. She has more than 15 years of experience in academic research, applied machine learning, AI innovation, and engineering team management.

Currently serving as the Senior Director of Data Science and AI, Cloud and AI at Microsoft, Dr. Lazzeri leads a team of skilled data and machine learning scientists. She spearheads the development of intelligent applications on the Cloud, leveraging a wide range of data and techniques including generative AI, time series forecasting, experimentation, causal inference, computer vision, natural language processing, and reinforcement learning.

"We are honored that Dr. Lazzeri has accepted to join Meeranda's Advisory Board," said Mr. Raji Wahidy, Co-Founder and CEO of Meeranda. "Dr. Lazzeri's contributions to the advancement of machine learning and AI technology are immense, quite well-known, and respected amongst her peers within this sector. Her addition is further validation that what we are embarking on at Meeranda is quite disruptive. We are excited and look forward to leveraging Dr. Lazzeri's experience and expertise as we work towards delivering The New Personalized Customer Experience we promise to SMBs and Global MNCs."

Academically, Dr. Lazzeri is an Adjunct Professor at New York's Columbia University, teaching Python for machine learning and AI students. She has also authored several books, including "Machine Learning Governance for Managers", "Impact of Artificial Intelligence in Business and Society", and "Machine Learning for Time Series Forecasting with Python."

"We are thrilled to welcome Dr. Lazzeri to Meeranda," said Mr. Jayson Ng, Co-Founder and Chief Research Officer of Meeranda. "Dr. Lazzeri's expertise will be instrumental in bridging the gap between cutting-edge research and real-world applications, thus pushing the technological boundaries and helping us take our product to new heights."

Dr. Lazzeri currently serves as an advisor on the Advisory Board of the European Union's AI-CUBE project and as a member of the Women in Data Science (WiDS) initiative. She is also known for having advised, mentored, and coached data scientists and machine learning engineers at the Massachusetts Institute of Technology (MIT), and she was a research fellow at Harvard University.

"I am very excited to join Meeranda's Advisory Board," said Dr. Francesca Lazzeri, Senior Director of Data Science and AI, Cloud and AI at Microsoft. "Meeranda's unique and innovative approach to tackling a very pressing problem is quite disruptive. I strongly believe in their vision, mission, and the leadership team behind Meeranda. I look forward to further contributing to Meeranda's imminent success."

Dr. Lazzeri holds a Master's Degree in Economics and Institutional Studies from Luiss Guido Carli University, a Doctor of Philosophy (Ph.D.) in Economics and Technology Innovation from Scuola Superiore Sant'Anna, and a Postdoc Research Fellowship in Economics from Harvard University.

About Meeranda

Meeranda is a privately held Artificial Intelligence (AI) solutions provider, serving Small and Medium Businesses (SMBs) and Global Multinational Corporations (MNCs). Meeranda is best known for its Real-Time Human-Like AI, which aims to offer a new personalized customer experience to combat the ongoing frustration of dealing with chatbots and half-baked AI solutions. Although in its early stages, Meeranda already has agreements across six countries and seven industries.

Follow Meeranda

Website: https://meeranda.com

SOURCE Meeranda

For further information: Meeranda Inc., Media Relations, [emailprotected]

See the original post:
Meeranda, the Human-Like AI, Welcomes Recognized Machine ... - Canada NewsWire

An Introduction To Diffusion Models For Machine Learning: What … – Dataconomy

Diffusion models owe their inspiration to the natural phenomenon of diffusion, where particles disperse from concentrated areas to less concentrated ones. In the context of artificial intelligence, diffusion models leverage this idea to generate new data samples that resemble existing data. By gradually corrupting training data with noise and learning to reverse that process step by step, diffusion models can generate diverse outputs that capture the underlying distribution of the training data.

The power of diffusion models lies in their ability to harness the natural process of diffusion to revolutionize various aspects of artificial intelligence. In image generation, diffusion models can produce high-quality images that are virtually indistinguishable from real-world examples. In text generation, diffusion models can create coherent and contextually relevant text that is often used in applications such as chatbots and language translation.

Diffusion models have other advantages that make them an attractive choice for many applications. For example, their training objective is more stable than that of adversarial models, although sampling can be computationally expensive because it requires many iterative denoising steps. Moreover, diffusion models are highly flexible and can be adapted to different problem domains by modifying the architecture or the loss function. As a result, diffusion models have become a popular tool in many fields of artificial intelligence, including computer vision, natural language processing, and audio synthesis.

Diffusion models take their inspiration from the concept of diffusion itself. Diffusion is a natural phenomenon in physics and chemistry, where particles or substances spread out from areas of high concentration to areas of low concentration over time. In the context of machine learning and artificial intelligence, diffusion models draw upon this concept to model and generate data, such as images and text.

These models simulate the gradual spread of information or features across data points, effectively blending and transforming them in a way that produces new, coherent samples. This inspiration from diffusion allows diffusion models to generate high-quality data samples with applications in image generation, text generation, and more.

The concept of diffusion and its application in machine learning has gained popularity due to its ability to generate realistic and diverse data samples, making them valuable tools in various AI applications.

Before looking at diffusion models in detail, it helps to compare them with four other families of generative models:

GANs consist of two neural networks: a generator network that generates new data samples, and a discriminator network that evaluates the generated samples and tells the generator whether they are realistic or not.

The generator and discriminator are trained simultaneously, with the generator improving its ability to produce realistic samples while the discriminator becomes better at distinguishing real samples from fake ones.
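The alternating update described above can be sketched on a 1-D toy problem. This is a minimal sketch only: the Gaussian data, the single-layer logistic discriminator, and all hyperparameters are illustrative assumptions, not anything from the article.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_gan(steps=3000, lr=0.05, batch=64, seed=0):
    rng = np.random.default_rng(seed)
    a, b = 1.0, 0.0  # generator g(z) = a*z + b
    w, c = 1.0, 0.0  # discriminator D(x) = sigmoid(w*x + c)
    for _ in range(steps):
        real = 3.0 + 0.5 * rng.standard_normal(batch)  # real data ~ N(3, 0.5)
        fake = a * rng.standard_normal(batch) + b       # generator samples
        # Discriminator step: gradient ascent on log D(real) + log(1 - D(fake)).
        d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
        w += lr * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
        c += lr * (np.mean(1 - d_real) - np.mean(d_fake))
        # Generator step: gradient ascent on log D(fake) (non-saturating loss).
        z = rng.standard_normal(batch)
        fake = a * z + b
        d_fake = sigmoid(w * fake + c)
        a += lr * np.mean((1 - d_fake) * w * z)
        b += lr * np.mean((1 - d_fake) * w)
    return a, b

a, b = train_gan()  # the generator's offset b drifts toward the real mean of 3.0
```

Even in this caricature, the two-player dynamic is visible: the discriminator's parameters chase a boundary between real and generated samples, and the generator's offset is pushed toward the region the discriminator currently labels "real".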

VAEs are a type of generative model that uses a probabilistic approach to learn a compressed representation of the input data. They consist of an encoder network that maps the input data to a latent space, and a decoder network that maps the latent space back to the input space.

During training, the VAE learns to reconstruct the input data and generate new samples by sampling from the latent space.
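The encode-sample-decode path described above can be sketched as follows. The linear "encoder" and "decoder" here are illustrative placeholders rather than trained networks; a real VAE learns both mappings.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x):
    # Placeholder encoder: map input to a latent mean and log-variance.
    mu = 0.5 * x
    log_var = np.full_like(x, -1.0)
    return mu, log_var

def decode(z):
    # Placeholder decoder: map the latent sample back to input space.
    return 2.0 * z

def sample_latent(mu, log_var):
    # Reparameterization trick: z = mu + sigma * eps keeps the sampling
    # step differentiable with respect to mu and log_var.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

x = np.array([1.0, -2.0, 0.5])
mu, log_var = encode(x)
z = sample_latent(mu, log_var)
reconstruction = decode(z)
```

Sampling from the latent space (rather than from the data directly) is what lets a trained VAE generate new examples: any z drawn near the learned latent distribution decodes to a plausible sample.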

Normalizing flows are a type of generative model that transforms the input data into a simple probability distribution, such as a Gaussian distribution, using a series of invertible transformations. The transformed data is then sampled to generate new data.

Normalizing flows have been used for image generation, music synthesis, and density estimation.
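The change-of-variables idea behind normalizing flows can be shown with a single invertible affine layer; the parameters A and B below are arbitrary illustrative choices.

```python
import math

# One-layer flow: f(z) = A*z + B pushes a standard Gaussian base density
# forward. By the change-of-variables formula,
#   log p_X(x) = log p_Z((x - B) / A) - log|A|.
A, B = 2.0, 1.0

def base_log_prob(z):
    # Standard normal log-density.
    return -0.5 * (z * z + math.log(2.0 * math.pi))

def flow_log_prob(x):
    z = (x - B) / A  # invert the flow
    return base_log_prob(z) - math.log(abs(A))

# Sanity check: the transformed density is N(1, 2) and integrates to ~1.
total = sum(math.exp(flow_log_prob(-20.0 + 0.01 * i)) * 0.01 for i in range(4001))
```

Stacking many such invertible layers, each with a tractable log-determinant, is what gives real normalizing flows their expressive power while keeping exact density evaluation.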

Autoregressive models generate new data by predicting the next value in a sequence, given the previous values. These models are typically used for time-series data, such as stock prices, weather forecasts, and language generation.
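The next-value prediction described above can be sketched with a one-lag autoregressive fit on a deliberately tiny synthetic series; real autoregressive models use many lags or neural networks.

```python
series = [1.0, 0.8, 0.64, 0.512, 0.4096]  # generated by x[t+1] = 0.8 * x[t]

# Least-squares estimate of phi in x[t+1] ~ phi * x[t].
num = sum(series[t] * series[t + 1] for t in range(len(series) - 1))
den = sum(series[t] ** 2 for t in range(len(series) - 1))
phi = num / den

next_value = phi * series[-1]  # one-step-ahead forecast
```

Because the toy series is exactly geometric, the fit recovers the true coefficient of 0.8 and the forecast continues the pattern.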

Diffusion models are based on the idea of iteratively refining a random noise vector until it matches the distribution of the training data. The diffusion process involves a series of transformations that progressively modify the noise vector, such that the final output is a realistic sample from the target distribution.

The basic architecture of a diffusion model consists of a sequence of layers, each of which applies a nonlinear transformation to the input noise vector. Each layer has a set of learnable parameters that determine the nature of the transformation applied.


The output of each layer is passed through a nonlinear activation function, such as sigmoid or tanh, to introduce non-linearity in the model. The number of layers in the model determines the complexity of the generated samples, with more layers resulting in more detailed and realistic outputs.

To train a diffusion model, we first need to define a loss function that measures the dissimilarity between the generated samples and the target data distribution. Common choices for the loss function include mean squared error (MSE), binary cross-entropy, and log-likelihood. Next, we optimize the model parameters by minimizing the loss function using an optimization algorithm, such as stochastic gradient descent (SGD) or Adam. During training, the model generates samples by iteratively applying the diffusion process to a random noise vector, and the loss function calculates the difference between the generated sample and the target data distribution.
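The iterative noising that training relies on can be sketched with the standard closed-form forward process; the linear variance schedule below is one common illustrative choice, not the article's prescription.

```python
import math
import random

# With a variance schedule beta_t, a noisy sample at any step t has the
# closed form x_t = sqrt(abar_t)*x0 + sqrt(1 - abar_t)*eps, where abar_t
# is the running product of (1 - beta_t).
T = 1000
betas = [1e-4 + (0.02 - 1e-4) * t / (T - 1) for t in range(T)]

alpha_bars = []
running = 1.0
for beta in betas:
    running *= 1.0 - beta
    alpha_bars.append(running)

def noisy_sample(x0, t, rng):
    # Jump straight to step t instead of adding noise t separate times.
    eps = rng.gauss(0.0, 1.0)
    return math.sqrt(alpha_bars[t]) * x0 + math.sqrt(1.0 - alpha_bars[t]) * eps

rng = random.Random(0)
x_early = noisy_sample(5.0, 0, rng)      # nearly the original signal
x_late = noisy_sample(5.0, T - 1, rng)   # almost pure noise
```

A model trained to predict the added noise at each step can then run this process in reverse, which is exactly the iterative refinement of a random noise vector described above.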

One advantage of diffusion models is their ability to generate diverse and coherent samples. Unlike other generative models, such as Generative Adversarial Networks (GANs), diffusion models do not suffer from mode collapse, where the generator produces limited variations of the same output. Additionally, diffusion models can be trained on complex distributions, such as multimodal or non-Gaussian distributions, which are challenging to model using traditional machine learning techniques.

Diffusion models have numerous applications in computer vision, natural language processing, and audio synthesis. For example, they can be used to generate realistic images of objects, faces, and scenes, or to create new sentences and paragraphs that are similar in style and structure to a given text corpus. In audio synthesis, diffusion models can be employed to generate realistic sounds, such as speech, music, and environmental noises.

There have been many advancements in diffusion models in recent years, and several popular diffusion models have gained attention in 2023. One of the most notable ones is Denoising Diffusion Models (DDM), which has gained significant attention due to its ability to generate high-quality images with fewer parameters compared to other models. DDM uses a denoising process to remove noise from the input image, resulting in a more accurate and detailed output.

Another notable diffusion model is Diffusion-based Generative Adversarial Networks (DGAN). This model combines the strengths of diffusion models and Generative Adversarial Networks (GANs). DGAN uses a diffusion process to generate new samples, which are then used to train a GAN. This approach allows for more diverse and coherent samples compared to traditional GANs.

Probabilistic Diffusion-based Generative Models (PDGM) is another type of generative model that combines the strengths of diffusion models and Gaussian processes. PDGM uses a probabilistic diffusion process to generate new samples, which are then used to estimate the underlying distribution of the data. This approach allows for more flexible modeling of complex distributions.

Non-local Diffusion Models (NLDM) incorporate non-local information into the generation process. NLDM uses a non-local similarity measure to capture long-range dependencies in the data, resulting in more realistic and detailed outputs.

Hierarchical Diffusion Models (HDM) incorporate hierarchical structures into the generation process. HDM uses a hierarchy of diffusion processes to generate new samples at multiple scales, resulting in more detailed and coherent outputs.

Diffusion-based Variational Autoencoders (DVAE) are a type of variational autoencoder that uses a diffusion process to model the latent space of the data. DVAE learns a probabilistic representation of the data, which can be used for tasks such as image generation, data imputation, and semi-supervised learning.

Two other notable diffusion models are Diffusion-based Text Generation (DTG) and Diffusion-based Image Synthesis (DIS).

DTG uses a diffusion process to generate new sentences or paragraphs, modeling the probability distribution over the words in a sentence and allowing for the generation of coherent and diverse texts.

DIS uses a diffusion process to generate new images, modeling the probability distribution over the pixels in an image and allowing for the generation of realistic and diverse images.

Diffusion models are a powerful tool in artificial intelligence that can be used for various applications such as image and text generation. To utilize these models effectively, you may follow this workflow:

Gather and preprocess your dataset to ensure it aligns with the problem you want to solve.

This step is crucial because the quality and relevance of your training data will directly impact the performance of your diffusion model.


Choose an appropriate diffusion model architecture based on your problem.

There are several types of diffusion models available, including VAEs (Variational Autoencoders), Denoising Diffusion Models, and Energy-Based Models. Each type has its strengths and weaknesses, so it's essential to choose the one that best fits your specific use case.


Train the diffusion model on your dataset by optimizing model parameters to capture the underlying data distribution.

Training a diffusion model involves iteratively updating the model parameters to minimize the difference between the generated samples and the real data.


Once your model is trained, use it to generate new data samples that resemble your training data.

The generation process typically involves iteratively applying the diffusion process to a noise tensor.


Depending on your application, you may need to fine-tune the generated samples to meet specific criteria or constraints.

Fine-tuning involves adjusting the generated samples to better fit your desired output or constraints. This can include cropping, rotating, or applying further transformations to the generated images.


Evaluate the quality of generated samples using appropriate metrics. If necessary, fine-tune your model or training process.

Evaluating the quality of generated samples is crucial to ensure they meet your desired standards. Common evaluation metrics include peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and human perception scores.
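The PSNR metric mentioned above can be computed directly; here is a minimal sketch for flattened 8-bit images.

```python
import math

# PSNR = 10 * log10(MAX^2 / MSE), where MAX is the largest possible
# pixel value (255 for 8-bit images) and MSE is the mean squared error
# between the reference and generated images.
def psnr(reference, generated, max_val=255.0):
    mse = sum((r - g) ** 2 for r, g in zip(reference, generated)) / len(reference)
    if mse == 0.0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)

# A uniform per-pixel error of 1 on an 8-bit scale gives about 48.13 dB.
value = psnr([0, 0, 0], [1, 1, 1])
```

Higher PSNR means the generated image is closer to the reference, though it correlates only loosely with human judgment, which is why SSIM and perceptual scores are used alongside it.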


Integrate your diffusion model into your application or pipeline for real-world use.

Once you've trained and evaluated your diffusion model, it's time to deploy it in your preferred environment.


Diffusion models hold the key to unlocking a wealth of possibilities in the realm of artificial intelligence. These powerful tools go beyond mere functionality and represent the fusion of science and art, as data metamorphoses into novel, varied, and coherent forms. By harnessing the natural process of diffusion, these models empower us to create previously unimaginable outputs, limited only by our imagination and creativity.

Featured image credit: svstudioart/Freepik.

Continue reading here:
An Introduction To Diffusion Models For Machine Learning: What ... - Dataconomy

Machine learning improves credit card fraud detection by over 94 … – Arab News

RIYADH: Machine learning algorithms could enhance credit card fraud detection by over 94 percent, according to a new study by the Arab Monetary Fund.

According to the report, artificial intelligence plays a crucial role in strengthening credit card fraud detection, and machine learning predicts fraudulent transactions to a large extent.

Global losses due to credit card fraud incurred by financial institutions and individuals hit $32.3 billion in 2021, a substantial rise of 13.8 percent from the previous year.

AMF, in its report, also urged intensified innovation and collaboration with top financial technology firms to develop ML-based fraud detection systems.

It also highlighted the importance of using AI and ML to analyze credit card fraud in Arab nations.

The situation is also getting more challenging because of the increasing credit card penetration in the region.

Saudi card payments

In May, London-based data and analytics firm GlobalData reported that Saudi Arabia's card payments market is expected to grow by 14.6 percent to reach SR532.1 billion ($141.9 billion) in 2023, driven by contactless payments and the government's push for a digitized society.

The study found that card payment value in the Kingdom registered an annual growth of 29.8 percent in 2021 and 17.3 percent in 2022 thanks to improving economic conditions and a rise in consumer spending.

"While cash has traditionally been the preferred payment method in Saudi Arabia, its usage is on the decline in line with the rising consumer preference for electronic payments," said Ravi Sharma, lead banking and payments analyst at GlobalData, in a statement released in May.

Stringent regulations

The increasing utility has also spurred a rise in government regulations to prevent financial fraud across the region.

In July, Dubai Public Prosecution announced a clampdown on those forging, counterfeiting or reproducing debit and credit cards and warned that offenders face imprisonment and fines ranging from 500,000 dirhams ($136,127) to 2 million dirhams.

"Forging or counterfeiting or reproducing a credit card or debit card or any other electronic payment method by using any information technology means or computer program shall expose to imprisonment and fine not less than 500,000 dirhams and not over 2 million dirhams or either of these two penalties," said Dubai Public Prosecution.

Earlier this year, Saudi Arabia also announced a $1.3 million fine and five years in jail for anyone who forges an electronic signature, record or digital certificate, or uses such documents while knowing they are fake.

Machine learning improves credit card fraud detection by over 94 ... - Arab News

Machine-learning model predicts CKD progression with ‘readily … – Healio

September 14, 2023


A machine-learning model developed by researchers at Sonic Healthcare USA accurately predicted the progression of chronic kidney disease using readily available laboratory data.

"CKD is a major cause of morbidity and mortality," Joseph Aoki, MD, senior vice president of population health at the Austin, Texas-based company, and colleagues wrote in the study. "While more research is needed, our results support clinical utility of the model to improve timely recognition and optimal management for patients at risk for CKD progression."

The investigators conducted a retrospective observational study that analyzed deidentified laboratory data from a large U.S. outpatient laboratory network. It involved 110,264 adults with initial eGFR values between 15 mL/min/1.73 m2 and 89 mL/min/1.73 m2.

Researchers developed a seven-variable risk classifier model using random forest survival methods to predict eGFR decline of more than 30% within 5 years.

Results showed that the risk classifier model accurately predicted eGFR decline greater than 30% and achieved an area under the receiver-operating characteristic curve of 0.85.
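For context, an area under the ROC curve like the 0.85 reported here can be computed from model scores via its Mann-Whitney interpretation: the probability that a randomly chosen positive case is scored higher than a randomly chosen negative case. This is a generic sketch, not the study's code.

```python
def auroc(pos_scores, neg_scores):
    # Count score comparisons won by positives; ties count half.
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Perfect separation gives 1.0; random scoring hovers around 0.5.
perfect = auroc([0.9, 0.8], [0.1, 0.2])
```

An AUROC of 0.85 therefore means that, for a random progressor/non-progressor pair, the model ranks the progressor higher about 85% of the time.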

"The most important predictor of progressive decline in kidney function was the eGFR slope," the authors wrote, "followed by the urine albumin-creatinine ratio and serum albumin slope." Other key contributors to the model included initial eGFR, age and sex.

"Our progressive CKD classifier accurately predicts significant eGFR decline in patients with early, mid and advanced disease using readily obtainable laboratory data."

The authors noted that the study had limitations: it did not evaluate the role of clinical variables such as blood pressure in the model's performance. "Further prospective work is warranted to validate the findings and assess the clinical utility of the model," the researchers wrote.

"Used as a complement to and in conjunction with other well-established predictive models, the progressive CKD risk classifier has the potential to significantly improve timely recognition, risk stratification and optimal management for a heterogeneous population with CKD at a much earlier stage for intervention," Aoki and colleagues wrote.



See more here:
Machine-learning model predicts CKD progression with 'readily ... - Healio

Machine Learning Operations Market Is Expected to Witness with Strong Growth rate in the forecast period – Benzinga



The latest research study released by Market Research Inc, "Machine Learning Operations Market Forecast to 2023-2031," provides accurate economic, global, and country-level predictions and analyses. It provides a comprehensive perspective of the competitive market as well as an in-depth supply chain analysis to assist businesses in identifying major changes in industry practices. The market report also examines the current state of the Machine Learning Operations industry, as well as predicted future growth, technological advancements, investment prospects, market economics, and financial data.

This study does a thorough examination of the market and offers insights based on an industry SWOT analysis. The report on the Machine Learning Operations Market provides access to critical information such as market growth drivers and restraints, current market trends, the market's economic and financial structure, and other key market details.



Furthermore, the report provides a detailed understanding of the market segments, which have been formed by combining different prospects such as types, applications, and regions. Apart from this, the key driving factors, restraints, potential growth opportunities, and market challenges are also discussed in the report.

Major Players from the Global Machine Learning Operations Market:

Porter's five forces model in the report provides insights into competitive rivalry, supplier and buyer positions in the market, and opportunities for new entrants in the global Machine Learning Operations market over the period of 2023 to 2031. Further, the growth matrix given in the report brings an insight into the investment areas that existing or new market players can consider.

Machine Learning Operations Market Segment Analysis:

The Machine Learning Operations Market Forecast report provides a holistic evaluation of the market. The report offers a comprehensive analysis of key segments, trends, drivers, restraints, competitive landscape, and factors that are playing a substantial role in the market. The Machine Learning Operations market segments and market data breakdown are illuminated.

Machine Learning Operations Market, By Type:

Machine Learning Operations Market, By Application:

Machine Learning Operations Market Regional Analysis:

The research study covers North America, Latin America, Asia-Pacific, Europe, and the Middle East and Africa on the basis of productivity, focusing on the leading countries from these regions. The report further highlights the cost structure, including the cost of raw materials and manpower, and offers a cogent analysis of the business stimulants of the Machine Learning Operations market.

Years Considered for the Machine Learning Operations Market:


Key Features of the report:



Reasons to Buy The Machine Learning Operations Market Report:


About Us

Market Research, Inc. is farsighted in its view and covers massive ground in global research, keeping a close check on both local and global markets. Trends and concurrent assessments sometimes overlap and influence each other. When we say market intelligence, we mean a deep and well-informed insight into your products, market, marketing, competitors, and customers. Market research companies are leading the way in nurturing global thought leadership. We help your product/service become the best it can be with our informed approach.


Contact Us

Market Research, Inc.

Author: Kevin

US Address: 51 Yerba Buena Lane, Ground Suite,

Inner Sunset San Francisco, CA 94103, USA

Call Us: +1 (628) 225-1818





© 2023 Benzinga. Benzinga does not provide investment advice. All rights reserved.

Read the rest here:
Machine Learning Operations Market Is Expected to Witness with Strong Growth rate in the forecast period - Benzinga

How machine learning safeguards organizations from modern cyber … – BetaNews

2024 is fast approaching, and it seems likely the new year will bring the same torrent of sophisticated malware, phishing, and ransomware attacks as 2023. Not only are these long-standing threats showing few signs of slowing down, they're increasing by as much as 40 percent, with federal agencies and public sector services being the main targets.

Meanwhile, weak points like IoT and cloud vulnerabilities are making it tougher for cybersecurity pros to secure the wide attack surface that these edge devices create.

AI/ML, however, has emerged as a compelling solution for organizations, as it promises to change the way that cybersecurity professionals create their plans of action to tackle threats. Arguably more important is the fact that AI/ML-powered cybersecurity can leverage huge volumes of data to spot suspicious activity in real time, minimizing downtime and enabling more effective defensive strategies.

In this article, we'll take a look at a few real-world examples of AI and ML-powered cybersecurity and some insights into the roles that artificial intelligence and machine learning may play in bolstering protection against malicious actors.

It's taken longer than some would have preferred, but cybersecurity leaders are slowly realizing that evolving cyber threats and risks demand an equally sophisticated solution. AI/ML-powered cybersecurity in particular hopes to mitigate the number of data breaches businesses must contend with, preventing serious consequences and shielding sensitive corporate and customer data, as well as digital assets.

At its core, an AI/ML-powered approach to cybersecurity utilizes a high number of datasets, algorithms and models to make it easier for security pros to prevent catastrophes before they occur. Keeping watch for threats that may compromise an endpoint requires a level of vigilance impossible for a team of human beings to achieve. But algorithms can -- AI/ML-powered solutions can keep constant watch over an organization's networks and systems with pattern recognition and continuous monitoring to make real-time predictions.
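
As an illustration of the continuous-monitoring idea, the sketch below flags time buckets whose event counts deviate sharply from the baseline. The login counts and the three-sigma threshold are invented for the example, not drawn from any particular product:

```python
import statistics

def detect_anomalies(counts, threshold=3.0):
    """Flag time buckets whose event count sits more than `threshold`
    standard deviations from the mean (a simple z-score test)."""
    mean = statistics.mean(counts)
    stdev = statistics.stdev(counts)
    if stdev == 0:
        return []  # perfectly flat baseline: nothing stands out
    return [i for i, c in enumerate(counts)
            if abs(c - mean) / stdev > threshold]

# Hypothetical logins per minute, with one burst at index 8
logins_per_minute = [12, 14, 11, 13, 12, 15, 14, 13, 250, 12, 13, 14]
print(detect_anomalies(logins_per_minute))  # [8]
```

Real systems replace the z-score with learned models, but the shape is the same: establish a baseline, then surface deviations in real time.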

One of the biggest pain points that has gone unaddressed for a long time is the emergence of new attack vectors that threat actors take advantage of. New endpoints created by network-connected devices, IoT devices, and even your trusty laptop and workstation become new opportunities for cybercriminals to pounce on.

Considering that 84 percent of security professionals think cyber-attacks begin with the endpoint, it stands to reason that they'll need real-time data to monitor these endpoints. As we mentioned previously, humans' observational capabilities are insufficient to keep up.

Even though AI/ML has immense benefits for cybersecurity, criminals have started using it as well. Though it began humbly as a way to automate routine security tasks, AI has ironically transformed into a defense mechanism that can become a destructive weapon in the wrong hands.

Perhaps most obvious are the ways that AI and ML can improve DNS security, making it easier to identify hard-to-spot security threats. This is spearheaded by pinpointing anomalous DNS behavior with the help of zero-day attack detection, which helps security professionals locate atypical patterns even in the absence of unusual outbound traffic or other common indicators of compromise. Unlike humans, AI models can observe aspects of DNS traffic that would otherwise demand significant time and resources to monitor manually.
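
One widely used DNS-layer signal is the character entropy of a queried domain name: algorithmically generated domains, of the kind produced by DGA malware, tend to look random. The domains and threshold below are hypothetical; this is a minimal sketch of the idea, not a production detector:

```python
import math
from collections import Counter

def entropy(label):
    """Shannon entropy, in bits per character, of a domain label."""
    counts = Counter(label)
    n = len(label)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def flag_suspicious(domains, threshold=3.5):
    """Flag domains whose first label looks random enough to suggest
    algorithmic generation, a common DNS indicator of compromise."""
    return [d for d in domains if entropy(d.split(".")[0]) > threshold]

queries = ["google.com", "betanews.com", "xj9f2kq7vw3zr8ltyb0s.net"]
print(flag_suspicious(queries))  # ['xj9f2kq7vw3zr8ltyb0s.net']
```

A deployed system would combine entropy with many other features (query volume, TTLs, resolution failures), but even this single signal separates the example domains cleanly.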

Likewise, anomaly detection systems have emerged as the response to AI-empowered cybercriminals looking for an inconspicuous way to invade networks and compromise sensitive information. AI-driven anomaly detection systems are ideal for picking up on anomalous network patterns that demand a rapid response. Over time, organizations can also start using these systems to harness the power of AI for automating attack pattern analysis.

Security leaders and their departments are often iterating upon their vulnerability and risk management programs for the sake of gaining greater insight into their current security posture and potential threats on the horizon. With AI and ML, vulnerability and risk management can become largely automated, expediting the rate at which security teams can detect, identify, and remediate security vulnerabilities. Security teams also become better able to make data-driven decisions on how to handle potential threats since AI systems collect data from literally hundreds of thousands of devices, databases, and web pages.

The potential benefits that AI/ML-powered cybersecurity can offer organizations across multiple industries are both attractive and promising, but there are challenges that security leaders must remain cognizant of.

As previously mentioned, cybercriminals are now also using artificial intelligence algorithms and machine learning models to execute sophisticated attacks. Just as cyber professionals can train ML models with data, so too can cybercriminals feed false data to their models to dodge detection. Hackers who are savvy enough may also have certain inputs they want to train their AI systems on in order to circumvent automated defenses that organizations have put up. Security leaders should apprise key stakeholders of the necessary costs to leverage machine learning to mitigate cyber threats and combat cybercriminals using AI solutions of their own.

Arguably most important is the financial barrier that organizations must overcome to implement AI/ML technologies in their approach to cybersecurity. Small and mid-sized organizations, in particular, may struggle to justify the costs that come with building and maintaining cutting-edge cybersecurity systems -- these organizations should weigh the upfront costs and near-term resource demands before deciding whether to invest in AI/ML-powered cybersecurity.

These monetary difficulties are not insurmountable, though. According to research, the AI market is expected to reach $303 billion by 2025. Although enterprise products and services will certainly take center stage, there will also be an abundance of low-cost, scalable solutions for organizations with all types of needs and infrastructures.

As a final reminder, it's important to emphasize to security leaders -- and business leaders in general who want to strengthen the security posture of their organization -- that AI and ML technologies simply aren't perfect. Adopting these innovative technologies can be time-consuming and expensive, and the solutions themselves are prone to bias and errors until molded into a self-sufficient state. Therefore, it's important to fastidiously monitor the way you implement AI into your larger approach to cybersecurity if you wish to mitigate the potential negative impacts and disruptions.

Now that we've explored the groundbreaking realm of AI-powered cybersecurity and its increasingly important role in defending organizations from sophisticated cyber threats, it's important that you decide which of its transformative capabilities are best suited to your organization and its goals for achieving a more robust cybersecurity posture.

Engage your security leaders and other relevant stakeholders in discussions about how AI algorithms and ML models can enable your cybersecurity systems to detect and neutralize sophisticated attacks such as malware, phishing, and ransomware. Likewise, you should also make them aware of any negative outcomes, as well as what blackhats are using in terms of attack solutions.


Lee Li is a project manager and B2B copywriter with a decade of experience in the Chinese fintech startup space as a PM for TaoBao, Meituan, and DouYin (now TikTok).

Go here to see the original:
How machine learning safeguards organizations from modern cyber ... - BetaNews

Yale researchers investigate the future of AI in healthcare – Yale Daily News

Michelle Foley

Picture a world where healthcare is not confined to a clinic.

The watch on your wrist ticks steadily throughout the day, collecting and transmitting information about your heart rate, oxygen saturation and the levels of sugar in your blood. Sensors scan your face and body, making inferences about your state of health.

By the time you see a doctor, algorithms have already synthesized this data and organized it in ways that fit a diagnosis, detecting health problems before symptoms arise.

We aren't there yet but, according to Harlan Krumholz, a professor of medicine at the School of Medicine, this could be the future of healthcare powered by artificial intelligence.

"This is an entirely historic juncture in the history of medicine," Krumholz said. "What we're going to be able to do in the next decades, compared to what we have been able to do, is going to be fundamentally different and much better."

Over the past months, Yale researchers have published a variety of papers on machine learning in medicine, from wearable devices that can detect heart defects to algorithms that can triage COVID-19 patients. Though much of this technology is still in development, the rapid surge of AI innovation has prompted experts to consider how it will impact healthcare in the near future.

Questions remain about the reliability of AI conclusions, the ethics of using AI to treat patients and how this technology might transform the healthcare landscape.

Synergy: human and artificial intelligence at Yale

Two recent Yale studies highlight what the future of AI-assisted health care could look like.

In August, researchers at the School of Medicine developed an algorithm to diagnose aortic stenosis, a narrowing of a valve in the body's largest blood vessel. Currently, diagnosis usually entails a preliminary screening by the patient's primary care provider and then a visit to the radiologist, where the patient must undergo a diagnostic Doppler exam.

The new Yale algorithm, however, can diagnose a patient from just an echocardiogram performed by a primary care doctor.

"We are at the cusp of doing transformative work in diagnosing a lot of conditions that otherwise we were missing in our clinical care," said Dr. Rohan Khera, senior author of the study and clinical director of the Yale Center for Outcomes Research & Evaluation (CORE). "All this work is powered by patients and their data, and how we intend to use it is to give back to the most underserved communities. That's our big focus area."

The algorithm was also designed to be compatible with cheap, accessible handheld ultrasound machines, said lead author Evangelos Oikonomou, a clinical fellow at the School of Medicine. This would bring first-stage aortic stenosis testing to the community, instead of limiting it to those who are referred to a skilled and potentially expensive radiologist. It could also allow the disease to be diagnosed before symptoms arise.

In a second study, researchers used AI to support physicians in hospitals by predicting COVID-19 outcomes for emergency room patients -- all within 12 hours.

According to first author Georgia Charkoftaki, an associate research scientist at the Yale School of Public Health, hospitals often run out of beds during COVID-19 outbreaks. AI-powered predictions could help determine which patients need inpatient care and which patients can safely recover at home.
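
The study's actual model isn't reproduced here, but the general shape of such a triage aid can be sketched as a risk score over a few clinical inputs. The features, weights and cutoffs below are invented for illustration and are not the Yale model:

```python
def triage(age, spo2, crp):
    """Toy triage rule: returns 'inpatient' or 'home'.
    Weights and cutoffs are hypothetical, for illustration only."""
    score = 0
    if age >= 65:
        score += 2   # advanced age
    if spo2 < 92:
        score += 3   # low blood-oxygen saturation (%)
    if crp > 100:
        score += 1   # high C-reactive protein (mg/L), an inflammation marker
    return "inpatient" if score >= 3 else "home"

print(triage(age=72, spo2=89, crp=140))  # inpatient
print(triage(age=30, spo2=98, crp=5))    # home
```

A trained model replaces the hand-set weights with ones learned from patient data, but the output is the same kind of actionable recommendation.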

The algorithm is also designed to be adaptable to other diseases.

"When [respiratory syncytial virus] babies come to the ICU, they are given the standard of care, but not all of them respond," Charkoftaki said. "Some are intubated, others are out in a week. The symptoms [of RSV] are similar to COVID, and so we are working on a study for clinical metabolomics there as well."

However, AI isn't always accurate, Charkoftaki admitted.

As such, Charkoftaki said that medical professionals need to use AI in a smart way.

"Don't take it blindly, but use it to benefit patients and the discovery of new drugs," Charkoftaki told the News. "You always need a brain behind it."

Machines in medicine

Though the concept of artificial intelligence has existed since mathematician Alan Turing's work in the 1950s, the release of ChatGPT in November 2022 brought AI into public conversation. The chatbot garnered widespread attention, reaching over 100 million users in two months.

According to Lawrence Staib ENG 90, a professor of radiology and biomedical engineering, AI-powered healthcare does not yet consist of asking a sentient chatbot medical questions. Staib, who regularly uses machine learning models in his research with medical imaging, says AI interfaces are more similar to a calculator: users input data, an algorithm runs and it generates an output, like a number, image, or cancer stage. The use of these algorithms is still relatively uncommon in most medical fields.

While the recent public conversation on AI has centered on large language models, programs like ChatGPT that are trained to understand text in context rather than as isolated words, these algorithms are not the focus of most AI innovation in healthcare, Staib said.

Instead, researchers are using machine learning in healthcare to recognize patterns humans would not detect. When trained on large databases, machine learning models often identify hidden signals, said David van Dijk, an assistant professor of medicine and computer science. In his research, van Dijk works to develop novel algorithms for discovering these hidden signals, which include biomarkers and disease mechanisms, to diagnose patients and determine prognosis.

"You're looking for something that's hidden in the data," van Dijk said. "You're looking for signatures that may be important for studying that disease."
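
A minimal version of this kind of hidden-signal search is a correlation screen: score every candidate feature against the outcome and surface the strongest. The gene names and expression values below are made up for illustration:

```python
import statistics

def correlation(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical expression levels for three candidate biomarkers
# across six patients; outcome: 1 = disease, 0 = healthy.
outcome = [1, 1, 1, 0, 0, 0]
markers = {
    "gene_a": [0.9, 0.8, 0.7, 0.2, 0.3, 0.1],  # tracks the outcome
    "gene_b": [0.5, 0.1, 0.9, 0.4, 0.6, 0.2],  # noise
    "gene_c": [0.4, 0.5, 0.6, 0.5, 0.4, 0.6],  # noise
}

best = max(markers, key=lambda m: abs(correlation(markers[m], outcome)))
print(best)  # gene_a
```

Real biomarker discovery uses richer models and multiple-testing corrections, but the principle is the same: let the data rank the candidate signals.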

Staib added that these hidden signals are also found in medical imaging.

In a computerized tomography or CT scan, for example, a machine learning algorithm can identify subtle elements of the image that even a trained radiologist might miss.

While these pattern recognition algorithms could be helpful in analyzing patient data, it is sometimes unclear how they arrive at conclusions and how reliable those conclusions are.

"It may be picking up something, and it may be pretty accurate, but it may not be clear what it's actually detecting," Staib cautioned.

One famous example of that ambiguity occurred at the University of Washington, where researchers designed a machine learning model to distinguish between wolves and huskies. Since all the images of wolves were taken in snowy forests and all the images of huskies were taken in Arizona, the model learned to identify the species based on their environment. When the algorithm was given an image of a husky in the snow, it was always classified as a wolf.
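
The failure mode is easy to reproduce with any learner whose training data carries a spurious feature. In the toy sketch below, a 1-nearest-neighbour classifier is trained on two hand-picked features (the feature values are invented); because the background perfectly separates the training set, a husky photographed in snow is labelled a wolf:

```python
def nearest_label(sample, training):
    """1-nearest-neighbour over (fur_darkness, snowy_background) features."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(training, key=lambda t: dist(t[0], sample))[1]

# Every wolf photo has a snowy background, every husky photo does not,
# so the background feature alone separates the training set.
training = [((0.9, 1.0), "wolf"), ((0.8, 1.0), "wolf"),
            ((0.3, 0.0), "husky"), ((0.4, 0.0), "husky")]

husky_in_snow = (0.35, 1.0)
print(nearest_label(husky_in_snow, training))  # wolf (misclassified)
```

The model is "accurate" on its training distribution yet has learned the wrong thing, which is exactly the ambiguity explainability research tries to expose.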

To address this issue, researchers are working on explainable artificial intelligence: the kind of program, Staib said, that not only makes a judgment, but also tells you how it made that judgment or how confident it is in that judgment.

Experts say that the goal of a partnership between human experts and AI is to reduce human error and clarify AIs judgment process.

"In medicine, well-intended practitioners still sometimes miss key pieces of information," Krumholz said.

Algorithms, Krumholz said, can make sure that "nothing falls through the cracks."

But, he added, the need for human oversight will not go away.

"Ultimately, medicine still requires intense human judgements," he said.

Big data and its pitfalls

The key to training a successful machine-learning model is data -- and lots of it. But where this data comes from and how it is used can raise ethical questions, said Bonnie Kaplan, a professor of biostatistics and faculty affiliate at the Solomon Center for Health Law and Policy at Yale Law School.

The Health Insurance Portability and Accountability Act, or HIPAA, regulates patient data collected in healthcare institutions such as hospitals, clinics, nursing homes and dentists' offices, Kaplan said. If this data is scrubbed of identifying details, though, health institutions can sell it without patient consent.

This kind of scrubbed patient information constitutes much of the data with which health-related machine learning models are trained.

Still, health data is collected in places beyond healthcare institutions, like period-tracking apps, genetics websites and social media. Depending on the agreements that users sign -- knowingly or not -- to access these services, related health data can be sold with identifying information and without consent, experts say. And if scrubbed patient data is combined with this unregulated health data, it becomes relatively easy to identify people, which in turn poses a serious privacy risk.
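
This kind of linkage attack is straightforward to demonstrate. The sketch below joins a hypothetical "scrubbed" clinical table to a hypothetical public record on three quasi-identifiers (ZIP code, birth year, sex), a combination that can narrow identity dramatically; all names and values are invented:

```python
# "Scrubbed" clinical records: names removed, quasi-identifiers kept.
scrubbed = [
    {"zip": "94103", "birth_year": 1961, "sex": "F", "diagnosis": "asthma"},
    {"zip": "94105", "birth_year": 1984, "sex": "M", "diagnosis": "diabetes"},
]

# Public auxiliary data (a voter roll, a social media profile, an app leak).
public = [
    {"name": "A. Example", "zip": "94103", "birth_year": 1961, "sex": "F"},
]

def reidentify(scrubbed, public):
    """Join on quasi-identifiers to attach names back to 'anonymous' rows."""
    matches = []
    for rec in scrubbed:
        for person in public:
            if all(rec[k] == person[k] for k in ("zip", "birth_year", "sex")):
                matches.append((person["name"], rec["diagnosis"]))
    return matches

print(reidentify(scrubbed, public))  # [('A. Example', 'asthma')]
```

No field in the scrubbed table contains a name, yet the join recovers one, which is why removing direct identifiers alone is not anonymization.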

"Healthcare data can be stigmatizing," Kaplan told the News. "It can be used to deny insurance or credit or employment."

For researchers, AI in healthcare raises other questions as well: who is responsible for regulating it, what privacy protections should be in place and who is liable if something goes wrong.

Kaplan said that while there's a general sense of what constitutes ethical AI usage, "how to achieve [it], or even define the words, is not clear."

While some, like Krumholz, are optimistic about the future of AI in healthcare, others like Kaplan point out that much of the current discourse remains speculative.

"We've got all these promises that AI is going to revolutionize healthcare," Kaplan said. "I think that's overblown, but still very motivating. We don't get those utopian dreams, but we do get a lot of great stuff."

Sixty million people use ChatGPT every day.

Hannah Mark covers Science and Society for the SciTech desk and occasionally writes for the WKND. Originally from Montana, she is a junior majoring in History of Science, Medicine, and Public Health.

Valentina Simon covers Astronomy, Computer Science and Engineering stories. She is a freshman in Timothy Dwight College majoring in Data Science and Statistics.

See the original post here:
Yale researchers investigate the future of AI in healthcare - Yale Daily News

Indigenous knowledges informing ‘machine learning’ could prevent stolen art and other culturally unsafe AI practices – The Conversation Indonesia

Artificial intelligence (AI) relies on its creators for training, through a process known as machine learning. Machine learning is the process by which the machine generates its intelligence from outside input.

But its behaviour is determined by the information it is provided. And at the moment, AI is a white-male-dominated field.

How can we ensure the evolution of AI doesn't further encroach on Indigenous rights and data sovereignty?

AI has the ability to generate art, and anyone can create Indigenous art using this machine. Even before AI, Aboriginal art has widely been appropriated and reproduced without attribution or acknowledgement, particularly for tourism industries.

And this could worsen with people now being able to generate art through AI. This is an issue not just experienced by Indigenous people, with many artists affected by their art styles being misappropriated.

Indigenous art is embedded with history and connects to culture and Country. AI-created Indigenous art would lack this. There are also implications for financial gain bypassing Indigenous artists and going to the producers of the technology.

Including Indigenous people in creating AI, or in deciding what AI can learn, could help minimise exploitation of Indigenous artists and their art.

Read more: AI can reinforce discrimination but used correctly it could make hiring more inclusive

In Australia there is a long history of collecting data about Aboriginal and Torres Strait Islander people. But there has been little data collected for or with Aboriginal and Torres Strait Islander people. Aboriginal scholars Maggie Walter and Jacob Prehn write of this in the context of the growing Indigenous Data Sovereignty movement.

Indigenous Data Sovereignty is concerned with the rights of Indigenous peoples to own, control, access and possess their own data, and decide who to give it to. Globally, Indigenous peoples are pushing for formal agreements on Indigenous Data Sovereignty.

Many Indigenous people are concerned with how the data involving our knowledges and cultural practices is being used. This has resulted in some Indigenous lawyers finding ways to integrate intellectual property with cultural rights.

Māori scholar Karaitiana Taiuru says:

If Indigenous peoples don't have sovereignty of their own data, they will simply be re-colonised in this information society.

Indigenous people are already collaborating on research that draws on Indigenous knowledges and involves AI.

In the wetlands of Kakadu, rangers are using AI and Indigenous knowledges to care for Country.

A weed called para grass is having a negative impact on magpie geese, which have been in decline. While the Kakadu rangers are doing their best to control the issue, the sheer size of the area (two million hectares), makes this difficult.

Collecting and analysing information about magpie geese and the impact of para grass using drones is having a positive influence on goose numbers.

Projects like these are vital given the loss of biodiversity around the globe, which is causing species extinctions and ecosystem loss at alarming rates. As a result of this collaboration, thousands of magpie geese are returning to Country to roost.

This project involves Traditional land owners (collectively known as Bininj in the north of Kakadu National Park and Mungguy in the south) working with rangers and researchers to help protect the environment and preserve biodiversity.

Working with Traditional Owners meant monitoring systems could be programmed with geographically specific knowledge, not otherwise recorded, reflecting the connection of Indigenous people with the land. This collaboration highlights the need to ensure Indigenous-led approaches.

In another example, in Sanikiluaq, an Inuit community in Nunavut, Canada, a project called PolArtic uses scientific data with Indigenous knowledges to assess the location of, and manage, fisheries.

Changing climate patterns are affecting the availability of fish, and this is another example where Indigenous knowledges are providing solutions for biodiversity issues caused by the global climate crisis.

Indigital is an Indigenous-owned profit-for-purpose company founded by Dharug, Cabrogal innovator Mikaela Jade. Jade has worked with traditional owners of Kakadu to use augmented reality to tell their stories on Country.

Indigital is also providing pathways for mob who are keen to learn more about digital technologies and combine them with their knowledges.

Read more: How should Australia capitalise on AI while reducing its risks? It's time to have your say

Although AI is a powerful tool, it is limited by the data which inform it. The success of the above projects is because AI was informed by Indigenous knowledges, provided by Indigenous knowledge holders who have a long held ancestral relationship with the land, animals and environment.

Research indicates AI is a white male-dominated industry. A global study found 12% of professionals across all levels were female, with only 4% being people of colour. Indigenous participation was not noted.

In early June, the Australian government's Safe and Responsible AI in Australia discussion paper found racial and gender biases evident in AI. Racial biases occurred, the paper found, in situations such as where AI had been used to predict criminal behaviour.

The purpose of the study was to seek feedback on how to lessen potential risks of harm from AI. Advisory groups and consultation processes were raised as possibilities to address this, but not explored in any real depth.

Indigenous knowledges have a lot to offer in the development of new technologies including AI. Art is part of our cultures, ceremonies, and identity. AI-generated art presents the risk of mass reproduction without Indigenous input or ownership, and misrepresentation of culture.

The federal government needs to consider having Indigenous knowledges inform the machine learning that underpins AI, supporting data sovereignty. There is an opportunity for Australia to become a global leader in pursuing technological advancement ethically.

Here is the original post:
Indigenous knowledges informing 'machine learning' could prevent stolen art and other culturally unsafe AI practices - The Conversation Indonesia

Microchip Launches the MPLAB Machine Learning Development Suite for 8-, 16-, 32-Bit MCUs and MPUs –

Microchip has announced the launch of a new software package designed to put machine learning workloads onto eight-, 16-, and 32-bit microcontrollers and processors: the MPLAB Machine Learning Development Suite.

"Machine learning is the new normal for embedded controllers, and utilizing it at the edge allows a product to be efficient, more secure and use less power than systems that rely on cloud communication for processing," claims Microchip's Rodger Richey of the core benefits behind on-device machine learning with resource-constrained hardware, known as "tinyML." "Microchip's unique, integrated solution is designed for embedded engineers and is the first to support not just 32-bit MCUs and MPUs [microcontroller units and microprocessor units], but also 8- and 16-bit devices to enable efficient product development."

Designed for use alongside the MPLAB X Integrated Development Environment (IDE), the machine learning toolkit allows developers to build machine learning models suitable for flashing to Microchip's various microcontroller and processor parts, taking into account their limited resources compared to desktop computers or cloud servers. Driven by AutoML, and with the option to use cloud computing resources to find the best algorithm for a given task, the package aims to cover feature extraction, training, validation, and testing in one, with an application programming interface (API) convertible to Python.
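
Microchip hasn't published the suite's internals here, but the AutoML step it describes -- searching for the best algorithm for a given task -- reduces to a loop like the one below: score candidate models on held-out data and keep whichever does best. The fan-monitoring data and the candidate models are invented for illustration; this is not the MPLAB API:

```python
def accuracy(model, data):
    """Fraction of (input, label) pairs the model predicts correctly."""
    return sum(model(x) == y for x, y in data) / len(data)

def automl_select(candidates, val):
    """Score every candidate model on a validation split and keep the
    best -- the core search loop behind any AutoML tool."""
    return max(candidates.items(), key=lambda kv: accuracy(kv[1], val))[0]

# Toy fan-state task: classify vibration amplitude as faulty (1) or ok (0).
val = [(0.1, 0), (0.2, 0), (0.8, 1), (0.9, 1), (0.3, 0), (0.7, 1)]
candidates = {
    "threshold@0.5": lambda x: int(x > 0.5),
    "threshold@0.25": lambda x: int(x > 0.25),
    "always_ok": lambda x: 0,
}
print(automl_select(candidates, val))  # threshold@0.5
```

Production AutoML searches over model families and hyperparameters rather than fixed thresholds, and for microcontroller targets it would also weigh each candidate's memory and compute footprint.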

While Microchip had already supported the use of existing deep neural network (DNN) models from TensorFlow Lite on its microcontrollers, the launch of the MPLAB Machine Learning Development Suite demonstrates a desire to provide everything a developer needs to build something from the ground up and joins MPLAB Harmony V3 and the VectorBlox accelerator Software Development Kit (SDK), the latter designed for use with Microchip's various field-programmable gate array (FPGA) parts, in the company's on-device machine learning line-up.

The software is free for trial use on up to 1GB of data and with 2,500 labels plus five hours a month of AutoML CPU time, but no rights to deploy models for purposes other than evaluation; a standard license offers 10GB of data, unlimited labels, and 10 hours a month of CPU time, plus a license to deploy models in production for $89 a month; a "pro" license increases the CPU time to 250 hours a year (20.8 hours a month) and offers the option to output source code, rather than a pre-compiled library.

More information on the MPLAB Machine Learning Development Suite is available on the Microchip website, along with a getting-started guide which walks the reader through creating models for fan state monitoring and gesture recognition and running them on SAM D21 and AVR devices.

See the article here:
Microchip Launches the MPLAB Machine Learning Development Suite for 8-, 16-, 32-Bit MCUs and MPUs -