Category Archives: Machine Learning

An M.Sc. computer science program in RUNI, focusing on machine learning – The Jerusalem Post

The M.Sc. program in Machine Learning & Data Science at the Efi Arazi School of Computer Science aims to provide a deep theoretical understanding of machine learning and data-driven methods, as well as strong proficiency in applying them. As part of this unique program, students with solid exact-science backgrounds, but not necessarily computer science backgrounds, are trained to become data scientists. Headed by Prof. Zohar Yakhini and PhD candidate Ben Galili, the program prepares students to become skilled and knowledgeable data scientists by giving them a fundamental theoretical and mathematical understanding, as well as the scientific and technical skills necessary to be creative and effective in these fields. The program offers courses in statistics and data analysis, machine-learning courses at several levels, and unique electives such as courses on recommendation systems and on DNA sequencing technologies.

M.Sc. student Guy Assa, preparing DNA for sequencing on a nanopore device in Prof. Noam Shomron's DNA sequencing class, part of the elective curriculum (Credit: private photo)

In recent years, data science methodologies have become a foundational language and a main development tool for science and industry. Machine learning and data-driven methods have developed considerably and now penetrate almost all areas of modern life. The vision of a data-driven world presents many exciting challenges to data experts in diverse fields of application, such as medical science, life science, social science, environmental science, finance, economics, and business.

Graduates of the program are successful in becoming data scientists in Israeli hi-tech companies. Lior Zeida Cohen, a graduate of the program, says: "After earning a BA degree in Aerospace Engineering from the Technion and working as an engineer and later leading a control systems development team, I sought out a graduate degree program that would allow me to delve deeply into the fields of Data Science and Machine Learning while also allowing me to continue working full-time. I chose to pursue the ML & Data Science Program at Reichman University. The program provided in-depth study in both the theoretical and practical aspects of ML and Data Science, including exposure to new research and developments in the field. It also emphasized the importance of learning the fundamental concepts necessary for working in these domains. In the course of completing the program, I began work at Elbit Systems as an algorithms developer in a leading R&D group focusing on AI and Computer Vision. The program has greatly contributed to my success in this position."

As part of the curriculum, students execute collaborative research projects with both external and internal collaborators, in Israel and around the world. One active collaboration is with the Leibniz Institute for Tropospheric Research (TROPOS) in Leipzig, Germany. In this collaboration, the students, led by Prof. Zohar Yakhini and Dr. Shay Ben-Elazar, a Principal Data Science and Engineering Manager at Microsoft Israel, as well as Dr. Johannes Bühl from TROPOS, are using data science and machine learning tools to infer properties of stratospheric layers from sensory-device data. The models developed in the project provide inference from simple devices that achieves an accuracy close to that obtained with much more expensive measurements. This improvement is enabled by the use of neural network models (deep learning).

Results from the TROPOS project: a significant improvement in inference accuracy. Left panel: actual atmospheric status as obtained from the more expensive measurements (lidar + radar). Middle panel: predicted status as inferred from lidar measurements using physical models. Right panel: status determined by the deep learning model developed in the project.

Additional collaborations include a number of projects with Israeli hospitals such as Sheba Tel Hashomer, Beilinson Hospital, and Kaplan Medical Center, as well as with the Israel Nature and Parks Authority and with several hi-tech companies.

PhD candidate Ben Galili, Academic Director of Machine Learning and Data Science Program (Credit: private photo)

Several research and thesis projects led by students in the program address data analysis questions related to spatial biology, the study of molecular biology processes in their larger spatial context. One project, led by student Guy Attia and supervised by Dr. Leon Anavy, addressed imputation methods for spatial transcriptomics data. A second, led by student Efi Herbst, aims to expand the inference scope of spatial transcriptomics data to molecular properties that are not directly measured by the technology.

According to Maya Kerem, a recent graduate, "the MA program taught me a number of skills that would enable me to easily integrate into a new company based on the knowledge I gained. I believe that this program is particularly unique because it always makes sure that the learnings are applied to industry-related problems at the end of each module. This is a hands-on program at Reichman University, which is what drew me to enroll in this MA program."


This article was written in cooperation with Reichman University


Application of Machine Learning in Cybersecurity – Read IT Quik

Cybersecurity is among the most crucial aspects of every business, as it helps ensure the security and safety of its data. Artificial intelligence and machine learning are in high demand and are changing the cybersecurity industry as a whole. Machine learning can benefit cybersecurity greatly: it can improve existing antivirus software, identify cyber threats, and help battle online crime. With the increasing sophistication of cyber threats, companies are constantly looking for innovative ways to protect their systems and data. By leveraging artificial intelligence and machine learning algorithms, cybersecurity professionals can now detect and mitigate cyber threats more effectively. This article will delve into key areas where machine learning is transforming the security landscape.

One of the biggest challenges in cybersecurity is accurately distinguishing legitimate connection requests from suspicious activities within a company's systems. With thousands of requests pouring in constantly, human analysis can fall short. This is where machine learning can play a crucial role. AI-powered cyber threat identification systems can monitor incoming and outgoing calls and requests to the system to detect suspicious activity. For instance, many companies offer cybersecurity software that utilizes AI to analyze and flag potentially harmful activities, helping security professionals stay ahead of cyber threats.

Traditional antivirus software relies on known virus and malware signatures to detect threats, requiring frequent updates to keep up with new strains. However, machine learning can revolutionize this approach. ML-integrated antivirus software can identify viruses and malware based on their abnormal behavior rather than relying solely on signatures. This enables the software to detect not only known threats but also newly created ones. For example, companies like Cylance have developed smart antivirus software that uses ML to learn how to detect viruses and malware from scratch, reducing the dependence on signature-based detection.

Cyber attackers can often infiltrate a company's network by stealing user credentials and logging in as legitimate users, which can be challenging to detect with traditional methods. However, machine learning algorithms can analyze user behavior patterns to identify anomalies. By training the algorithm to recognize each user's standard login and logout patterns, any deviation from these patterns can trigger an alert for further investigation. For instance, Darktrace offers cybersecurity software that uses ML to analyze network traffic information and identify abnormal user behavior patterns.
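The behavioral-baselining idea described above can be sketched in a few lines. The example below is a minimal illustration rather than any vendor's actual method: it models a user's historical login hour as a Gaussian and flags logins more than three standard deviations from the learned mean. The feature (login hour) and the 3-sigma threshold are illustrative assumptions; real products use far richer behavioral features.

```python
# Minimal sketch of user-behavior anomaly detection via a per-user profile.
from statistics import mean, stdev

def fit_profile(login_hours):
    """Learn a per-user profile (mean, std) from historical login hours."""
    return mean(login_hours), stdev(login_hours)

def is_anomalous(hour, profile, n_sigmas=3.0):
    """Flag a login whose hour deviates more than n_sigmas from the profile."""
    mu, sigma = profile
    return abs(hour - mu) > n_sigmas * sigma

# A user who consistently logs in around 9 a.m.
history = [8.9, 9.1, 9.0, 8.8, 9.2, 9.0, 8.7, 9.3, 9.1, 9.0]
profile = fit_profile(history)

print(is_anomalous(3.0, profile))   # True: a 3 a.m. login deviates sharply
print(is_anomalous(9.05, profile))  # False: consistent with history
```

A deviation would only raise an alert for further investigation, not block the login outright, mirroring the alert-and-investigate workflow described above.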

Machine learning offers several advantages in the field of cybersecurity. First and foremost, it enhances accuracy by analyzing vast amounts of data in real time, helping to identify potential threats promptly. ML-powered systems can also adapt and evolve as new threats emerge, making them more resilient against rapidly evolving cyber-attacks. Moreover, ML can provide valuable insights and recommendations to cybersecurity professionals, helping them make informed decisions and take proactive measures to prevent cyber threats.

As cyber threats continue to evolve, companies must embrace innovative technologies like machine learning to strengthen their cybersecurity defenses. Machine learning is transforming the cybersecurity landscape with its ability to analyze large volumes of data, adapt to new threats, and detect anomalies in user behavior. By leveraging the power of AI and ML, companies can stay ahead of cyber threats and safeguard their systems and data. Embrace the future of cybersecurity with machine learning and ensure the protection of your company's digital assets.


New Machine Learning Parameterization Tested on Atmospheric … – Eos

Editors' Highlights are summaries of recent papers by AGU's journal editors. Source: Journal of Advances in Modeling Earth Systems

Atmospheric models must represent processes on spatial scales spanning many orders of magnitude. Although small-scale processes such as thunderstorms and turbulence are critical to the atmosphere, most global models cannot explicitly resolve them due to computational expense. In conventional models, heuristic estimates of the effect of these processes, known as parameterizations, are designed by experts. A recent line of research uses machine learning to create data-driven parameterizations directly from very high-resolution simulations that require fewer assumptions.

Yuval and O'Gorman [2023] provide the first such example of a neural network parameterization of the effects of subgrid processes on the vertical transport of momentum in the atmosphere. A careful approach is taken to generate a training dataset, accounting for subtle issues in the horizontal grid of the high-resolution model. The new parameterization generally improves the simulation of winds in a coarse-resolution model, but also over-corrects and leads to larger biases in one configuration. The study serves as a complete and clear example for researchers interested in the application of machine learning for parameterization.

Citation: Yuval, J., & O'Gorman, P. A. (2023). Neural-network parameterization of subgrid momentum transport in the atmosphere. Journal of Advances in Modeling Earth Systems, 15, e2023MS003606. https://doi.org/10.1029/2023MS003606

Oliver Watt-Meyer, Associate Editor, JAMES



Activating vacation mode: Utilizing AI and machine learning in your … – TravelDailyNews International


Say the words "dream vacation" and everyone will picture something different. This presents a particular challenge to the modern travel marketer, especially in a world of personalization, where all travelers are looking for their own unique experiences. Fortunately, artificial intelligence (AI) provides a solution that allows travel marketers to draw upon a variety of sources when researching the best ways to connect with potential audiences.

By utilizing and combining data from user-generated content, transaction history and other online communications, AI and machine-learning (ML) solutions can help to give marketers a customer-centric approach, while successfully accounting for the vast diversity amongst their consumer base.

AI creates significant value for travel brands, which is why 48% of business executives are likely to invest in AI and automation in customer interactions over the next two years, according to Deloitte. Using AI and a data-driven travel marketing strategy, you can predict behaviors and proactively market to your ideal customers. There are as many AI solutions in the market as there are questions that require data, so choosing the right one is important.

For example, a limited-memory AI solution can skim a review site, such as TripAdvisor, to determine the most popular destinations around a major travel season, like summertime. Or a chatbot can speak directly with visitors to your site and aggregate their data to give brands an idea of what prospective consumers are looking for. Other solutions offer predictive segmentation, which can separate consumers based on their probability of taking action, categorize your leads, and share personalized outreach on their primary channels. Delivering personalized recommendations is a major end goal for AI solutions in the travel industry. For example, Booking.com utilizes a consumer's search history to determine whether they are traveling for business or leisure and provides recommendations accordingly.

A major boon of today's AI and machine-learning solutions is their ability to monitor and inform users of ongoing behavioral trends. For example, who could have predicted the popularity of hotel day passes for remote workers as little as three years ago? Or the growing consumer desire for sustainable toiletries? Trends change every year (or, more accurately, every waking hour), so having a tool that can stay ahead of the next biggest thing is essential.

In an industry where every element of the customer's experience (travel costs, hotels, activities) is meticulously planned, delivering personalized experiences is critical to maintaining a customer's interest. Consumers want personalization: as Google reports, 90% of leading marketers indicate that personalization significantly contributes to business profitability.

Particularly in the travel field, where there are as many consumer preferences as there are destinations on a map, personalization is essential to capture consumers' attention. AI capabilities can solve common traveler frustrations, further enhancing the consumer experience. Natural language processors can skim through review sites, gathering the generalized sentiment from prior reviews and determining common complaints that may arise. Through these analyses of sources from across a consumer's journey, you can catch problems before they start.

For travel marketers already dealing with a diverse audience, and with a need for personalization to effectively stand out amongst the competition, AI and ML solutions can effectively help you plan and execute personalized outreach, foster brand loyalty and optimize the consumer experience. With AI working behind the scenes, your customers can look forward to fun in the sun, on the slopes, or wherever their destination may be.

Janine Pollack is the Executive Director, Growth & Content, and self-appointed Storyteller in Chief at MNI Targeted Media. She leads the brand's commitment to generating content that informs and inspires. Her scope of work includes strategy and development for Fortune Knowledge Group's thought leadership programs and launching Fortune's The Most Powerful Woman podcast. She is proud to have partnered with The Hebrew University on the inaugural Nexus: Israel program, featuring worldwide luminaries. Janine has also written lifetime achievement pieces for Sports Business Journal. She earned her master's from the Northwestern University Medill School of Journalism and her B.A. from The American University in Washington, D.C.


A novel CT image de-noising and fusion based deep learning … – Nature.com

SARS-CoV-2, the coronavirus that causes COVID-19, is an infectious disease first discovered in China in December 2019 [1-3]. The World Health Organization (WHO) has declared it a pandemic. Figure 1 shows its detailed structure [3]. The new virus quickly spread throughout the world; its effect is transmitted to humans through zoonotic flora. COVID-19's main clinical features are cough, sore throat, muscle pain, fever, and shortness of breath [4,5]. Normally, RT-PCR is used for COVID-19 detection, but CT and X-ray also play vital roles in early and quick detection of COVID-19 [6]. However, RT-PCR has a low sensitivity of about 60-70%, and sometimes even negative results are obtained [7,8]. It has been observed that CT is a sensitive approach to detecting COVID-19, and it may be the best screening means [9].

Artificial intelligence and its subsets play a significant role in medicine and have recently expanded their prominence by being used as tools to assist physicians [10-12]. Deep learning techniques have also produced prominent results in many disease detection tasks, such as skin cancer detection, breast cancer detection, and lung segmentation [13,14]. However, due to limited resources and a shortage of radiologists, providing clinicians to each hospital is a difficult task. Consequently, automatic AI or machine learning methods are needed to mitigate these issues. They can also reduce waiting time and test cost by removing the need for RT-PCR kits. However, thorough pre-processing of CT images is necessary to achieve the best results. Poisson or impulse noise introduced during the acquisition of these images can seriously damage the image information [15]. To ease post-processing tasks such as object categorization and segmentation, it is essential to recover this lost information. Various filtering algorithms have been proposed in the past to de-blur and de-noise images. The Standard Median Filter (SMF) is one of the most often used non-linear filters [16].

A number of SMF modifications, including the weighted median and center weighted median (CWM) filters [17,18], have been proposed. The widely used noise adaptive soft-switching median (NASM) proposed in [19] achieved optimal results; however, if the noise density exceeds 50%, the quality of the recovered images degrades significantly. These methods are all non-adaptive and unable to distinguish between edge pixels, uncorrupted pixels, and corrupted pixels. Recent deep learning approaches presented in [20-22] perform well in recovering images degraded by fixed-value impulse noise. However, their efficiency decreases as the noise density increases, and in the reduction of Poisson noise, which normally exists in CT images. Additionally, most of these methods are non-adaptive and fail when recovering Poisson-noise-degraded images. In the first phase of this study, layer discrimination with max/min intensity elimination and an adaptive filtering window is proposed, which can handle CT images corrupted by high-density impulse and Poisson noise. The proposed method has shown superior performance both visually and statistically.
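The adaptive-window idea can be illustrated with a classic adaptive median filter for salt-and-pepper (impulse) noise. This is a generic textbook sketch, not the authors' layer-discrimination method: pixels at the extreme intensities (0 or 255) are treated as suspected-corrupt, and the window grows until uncorrupted neighbors are found.

```python
# Sketch of an adaptive median filter: only suspected-corrupt pixels
# (extreme intensities) are replaced, using the median of uncorrupted
# neighbors within a window that grows as needed.
def adaptive_median(img, max_window=7):
    """img: 2D list of grayscale values in [0, 255]; returns a filtered copy."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(h):
        for x in range(w):
            if img[y][x] not in (0, 255):      # keep uncorrupted pixels as-is
                continue
            k = 1
            while 2 * k + 1 <= max_window:
                clean = sorted(
                    img[j][i]
                    for j in range(max(0, y - k), min(h, y + k + 1))
                    for i in range(max(0, x - k), min(w, x + k + 1))
                    if img[j][i] not in (0, 255)
                )
                if clean:                       # replace with the median of
                    out[y][x] = clean[len(clean) // 2]  # uncorrupted neighbors
                    break
                k += 1                          # all neighbors corrupt: grow window
    return out

noisy = [[120, 255, 118],
         [119, 121,   0],
         [122, 120, 117]]
print(adaptive_median(noisy))  # the 255 and 0 spikes are replaced with 120
```

Unlike filters that process every pixel, this scheme leaves uncorrupted pixels untouched, which is why adaptive variants preserve edges better at high noise densities.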

Different deep learning methods are being utilized to detect COVID-19 automatically. To detect COVID-19 in CT scans, a deep learning model employing the COVIDX-Net model, which consists of seven CNN models, was developed. This model has high sensitivity and specificity and can detect COVID-19 with 91.7% accuracy [23]. Reference [24] presents a deep learning model that obtains 92.4% accuracy in detection of COVID-19. A ResNet50 model proposed in [25] achieved 98% accuracy as well. All of these trials, nevertheless, took more time to diagnose and did not produce the best outcomes because of information loss during the acquisition process. There are many studies on detection of COVID-19 that employ machine learning models with CT images [26-29]. A study presented in [30] proposes two different approaches, with two systems each, to diagnose tuberculosis from two datasets. In this study, the Principal Component Analysis (PCA) algorithm was initially employed to reduce the dimensionality of the features, aiming to extract the deep features. Then, the SVM algorithm was used for classifying the features. This hybrid approach achieved an accuracy of 99.2%, a sensitivity of 99.23%, a specificity of 99.41%, and an AUC of 99.78%. Similarly, a study presented in [31] utilizes different noise reduction techniques and compares the results by qualitative visual inspection and quantitative parameters such as Peak Signal-to-Noise Ratio (PSNR), Correlation Coefficient (Cr), and system complexity, to determine the optimal denoising algorithm to be applied universally. However, these techniques manipulate all pixels, whether they are contaminated by noise or not. An automated deep learning approach to detect COVID-19 from Computed Tomography (CT) scan images is proposed in [32]. In this method, anisotropic diffusion techniques are used to de-noise the images, and a CNN model is then employed to train the dataset. Finally, different models, including AlexNet, ResNet50, VGG16, and VGG19, were evaluated in the experiments. This method worked well and achieved higher accuracy; however, when the images were contaminated with higher noise density, its performance suffered. Similarly, the authors in [33] used four powerful pre-trained CNN models, VGG16, DenseNet121, ResNet50, and ResNet152, for the COVID-19 CT-scan binary classification task. In this method, a FastAI ResNet framework was designed to automatically find the best architecture using CT images. Additionally, transfer learning techniques were used to overcome the long training time. This method achieved a high F1 score of 96%. A deep learning method to detect COVID-19 using chest X-ray images was presented in [34]. A dataset of 10,040 samples was used in this study. This model has a detection accuracy of 96.43% and a sensitivity of 93.68%. However, its performance decreases dramatically with higher-density Poisson noise. A convolutional neural network method for binary pneumonia-based classification using VGG-19, Inception_V2, and a decision tree model was presented in [35]. In this study, an X-ray and CT scan image dataset containing 360 images was used for COVID-19 detection. According to the findings, VGG-19 illustrated higher performance, with an accuracy of 91%, than the Inception_V2 (78%) and decision tree (60%) models.

In this paper, a paradigm for automatic COVID-19 screening based on assessment fusion is proposed. The proposed model improves the effectiveness and efficiency of all baseline models by utilizing a majority-voting prediction technique to eliminate the mistakes of individual models. The proposed AFM model needs only chest X-ray images to diagnose COVID-19 accurately and quickly.
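The majority-voting fusion step can be sketched as follows. This is a minimal illustration of the general technique under assumed inputs, not the paper's actual classifiers: each base model votes on every image, and the fused label is the most common vote per image.

```python
# Sketch of majority-voting fusion across base classifiers.
from collections import Counter

def majority_vote(predictions):
    """predictions: list of per-model label lists, one label per sample.
    Returns the most common label for each sample position."""
    fused = []
    for sample_votes in zip(*predictions):
        fused.append(Counter(sample_votes).most_common(1)[0][0])
    return fused

# Three hypothetical base models labeling four scans (1 = COVID-19 positive).
votes = [
    [1, 0, 1, 0],   # model A
    [1, 1, 1, 0],   # model B
    [0, 0, 1, 1],   # model C
]
print(majority_vote(votes))  # [1, 0, 1, 0]
```

A single model's mistake (e.g. model C on the first scan) is outvoted by the other two, which is the error-cancellation effect the fusion paradigm relies on.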

The rest of the paper is organized as follows: the dataset is explained in section "Materials and methods"; section "Proposed method" explains our proposed approach; and section "Results and Discussion" presents empirical results and analysis. Section "Conclusion" describes the conclusion and the specific contributions, along with future directions for improving the efficiency of the proposed work.


Knowledge Graphs: The Dream of a Knowledge Network – SAP News Center

The eighth largest defense contractor in the U.S., SAP customer L3 Technologies is embracing innovation and making it part of its corporate culture, according to Heidi Wood, senior vice president of Strategy and Operations.

In an interview with CXO Talk's Michael Krigsman, Wood explains what it takes to become data-driven and radically transparent.


"You have to embrace innovation. You have to make that part of your corporate culture. You have to encourage risk taking because that's a necessary and frequently not enough spoken about element of innovation, which is the willingness to take risks, the willingness to be bold, put yourself out there, and be courageous," Wood tells Krigsman when asked about the driving forces responsible for transformation. "The way I like to describe it is, we took all of the different systems that we have, and we piped them together into a fused system. It helps us come back to better decisions. Together, we can move with speed because all of us are seeing it at the same time and it's based on fact, not anecdotes."

"You want to show your better parts," Wood adds. "But you kind of get to a stage where everybody gets comfortable with, look, this is the truth, this is where we're really, really at. It enables more collective contributions because people can see the areas that are ailing and say, 'Well, I've got some guys that can help with this thing that you're working on,' because now we can see that that area needs work."

"I think one of the exciting things about IT is that you actually have an angle where IT is helping change the culture of a company," she concludes.

Watch the complete interview to hear more about L3 and how the company is working toward a data-driven and radically transparent organization.


The Ethics of AI: Navigating the Future of Intelligent Machines – KDnuggets

Everybody has a different opinion on artificial intelligence and its future, depending on their experience. Some believed it was just another fad that would soon die out, whilst others believed it had huge potential to be implemented into our everyday lives.

At this point, it's clear that AI is having a big impact on our lives and is here to stay.

With the recent advancements in AI technology, such as ChatGPT, and autonomous systems, such as Baby AGI, we can count on the continued advancement of artificial intelligence in the future. It is nothing new; it's the same drastic change we saw with the arrival of computers, the internet, and smartphones.

A few years ago, a survey was conducted with 6,000 customers in six countries, in which only 36% of consumers said they were comfortable with businesses using AI, and 72% expressed some fear about its use.

Although this is very interesting, it can also be concerning. As we expect more to come in the future regarding AI, the big question is: "What are the ethics around it?"

The most rapidly developing and widely implemented area of AI is machine learning. It allows models to learn and improve from past experience by exploring data and identifying patterns with little human intervention. Machine learning is used in different sectors, from finance to healthcare. We have virtual assistants such as Alexa, and now we have large language models such as ChatGPT.

So how do we determine the ethics around these AI applications, and how it will affect the economy and society?

There are a few ethical concerns surrounding AI:

1. Bias and Discrimination

Although data is the new oil and we have a lot of it, there are still concerns about AI being biased and discriminatory with the data it has. For example, facial recognition applications have proven to be highly biased and discriminatory toward certain ethnic groups, such as people with darker skin tones.

Even though some of these facial recognition applications showed high racial and gender bias, companies such as Amazon refused to stop selling the product to governments in 2018.

2. Privacy

Another concern around the use of AI applications is privacy. These applications require a vast amount of data to produce accurate outputs and have high performance. However, there are concerns regarding data collection, storage, and use.

3. Transparency

Although AI applications are fed with data, there is high concern about the transparency of how these applications come to their decisions. This lack of transparency raises the question of who should be held accountable for the outcome.

4. Autonomous Applications

We have seen the birth of Baby AGI, an autonomous task manager. Autonomous applications have the ability to make decisions without the help of a human. This naturally opens the public's eyes to leaving decisions to be made by technology, which could be deemed ethically or morally wrong in society's eyes.

5. Job security

This concern has been an ongoing conversation since the birth of artificial intelligence. With more and more people seeing that technology can do their jobs, such as ChatGPT creating content and potentially replacing content creators, what are the social and economic consequences of implementing AI into our everyday lives?

In April 2021, the European Commission published its proposed legislation on the use of AI. The act aims to ensure that AI systems meet fundamental rights and provide users and society with trust. It contains a framework that groups AI systems into four risk areas: unacceptable risk, high risk, limited risk, and minimal or no risk. You can learn more about it here: European AI Act: The Simplified Breakdown.

Other countries such as Brazil also passed a bill in 2021 that created a legal framework around the use of AI. Therefore, we can see that countries and continents around the world are looking further into the use of AI and how it can be ethically used.

The fast advancements in AI will have to align with the proposed frameworks and standards. Companies building or implementing AI systems will have to follow ethical standards and conduct assessments of their applications to ensure transparency and privacy, and to account for bias and discrimination.

These frameworks and standards will need to focus on data governance, documentation, transparency, human oversight, and robust, accurate, cyber-safe AI systems. If companies fail to comply, they will unfortunately face fines and penalties.

The launch of ChatGPT and the development of general-purpose AI applications have prompted scientists and politicians to establish a legal and ethical framework to avoid any potential harm or impact of AI applications.

This year alone, many papers have been released on the use of AI and the ethics surrounding it, for example, "Assessing the Transatlantic Race to Govern AI-Driven Decision-Making through a Comparative Lens." We will continue to see more papers released until governments publish a clear and concise framework for companies to implement.

Nisha Arya is a Data Scientist, freelance technical writer, and Community Manager at KDnuggets. She is particularly interested in providing Data Science career advice, tutorials, and theory-based knowledge around Data Science. She also wishes to explore the different ways Artificial Intelligence can benefit the longevity of human life. A keen learner, she seeks to broaden her tech knowledge and writing skills while helping guide others.


Simulations with a machine learning model predict a new phase of solid hydrogen – Phys.org


Hydrogen, the most abundant element in the universe, is found everywhere from the dust filling most of outer space to the cores of stars to many substances here on Earth. This would be reason enough to study hydrogen, but its individual atoms are also the simplest of any element with just one proton and one electron. For David Ceperley, a professor of physics at the University of Illinois Urbana-Champaign, this makes hydrogen the natural starting point for formulating and testing theories of matter.

Ceperley, also a member of the Illinois Quantum Information Science and Technology Center, uses computer simulations to study how hydrogen atoms interact and combine to form different phases of matter like solids, liquids, and gases. However, a true understanding of these phenomena requires quantum mechanics, and quantum mechanical simulations are costly. To simplify the task, Ceperley and his collaborators developed a machine learning technique that allows quantum mechanical simulations to be performed with an unprecedented number of atoms. They reported in Physical Review Letters that their method found a new kind of high-pressure solid hydrogen that past theory and experiments missed.

"Machine learning turned out to teach us a great deal," Ceperley said. "We had been seeing signs of new behavior in our previous simulations, but we didn't trust them because we could only accommodate small numbers of atoms. With our machine learning model, we could take full advantage of the most accurate methods and see what's really going on."

Hydrogen atoms form a quantum mechanical system, but capturing their full quantum behavior is very difficult even on computers. A state-of-the-art technique like quantum Monte Carlo (QMC) can feasibly simulate hundreds of atoms, while understanding large-scale phase behaviors requires simulating thousands of atoms over long periods of time.

To make QMC more versatile, two former graduate students, Hongwei Niu and Yubo Yang, developed a machine learning model trained with QMC simulations capable of accommodating many more atoms than QMC by itself. They then used the model with postdoctoral research associate Scott Jensen to study how the solid phase of hydrogen that forms at very high pressures melts.
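The core idea of such a machine-learned potential can be sketched in miniature: fit a cheap surrogate model to energies produced by an expensive reference method (diffusion QMC in the paper), then evaluate the surrogate on far more atoms than the reference could handle. The descriptors, system sizes, and the synthetic "reference energy" below are illustrative assumptions, not the authors' actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

def descriptors(positions):
    """Permutation-invariant features: sorted inverse pairwise distances."""
    diffs = positions[:, None, :] - positions[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(axis=-1))
    iu = np.triu_indices(len(positions), k=1)
    return np.sort(1.0 / (dists[iu] + 0.5))

def reference_energy(positions):
    """Stand-in for an expensive QMC energy: a sum of pair repulsions."""
    return float(descriptors(positions).sum())

# Training set: "reference" energies of random 4-atom configurations.
X, y = [], []
for _ in range(200):
    pos = rng.uniform(0, 2, (4, 3))
    X.append(descriptors(pos))
    y.append(reference_energy(pos))
X, y = np.array(X), np.array(y)

# Ridge-regression surrogate: expensive labels in, fast model out.
lam = 1e-8
w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

# The trained surrogate predicts an unseen configuration's energy cheaply.
test_pos = rng.uniform(0, 2, (4, 3))
error = abs(descriptors(test_pos) @ w - reference_energy(test_pos))
print(error < 1e-3)  # True
```

Real machine-learned potentials use far richer descriptors and neural-network or kernel models, but the workflow is the same: train once on costly reference calculations, then simulate thousands of atoms with the cheap surrogate.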

The three of them were surveying different temperatures and pressures to form a complete picture when they noticed something unusual in the solid phase. While the molecules in solid hydrogen are normally close to spherical and form a configuration called hexagonal close packed (Ceperley compared it to stacked oranges), the researchers observed a phase where the molecules become oblong figures (Ceperley described them as egg-like).

"We started with the not-too-ambitious goal of refining the theory of something we know about," Jensen recalled. "Unfortunately, or perhaps fortunately, it was more interesting than that. There was this new behavior showing up. In fact, it was the dominant behavior at high temperatures and pressures, something there was no hint of in older theory."

To verify their results, the researchers trained their machine learning model with data from density functional theory, a widely used technique that is less accurate than QMC but can accommodate many more atoms. They found that the simplified machine learning model perfectly reproduced the results of standard theory. The researchers concluded that their large-scale, machine learning-assisted QMC simulations can account for effects and make predictions that standard techniques cannot.

This work has started a conversation between Ceperley's collaborators and some experimentalists. High-pressure measurements of hydrogen are difficult to perform, so experimental results are limited. The new prediction has inspired some groups to revisit the problem and more carefully explore hydrogen's behavior under extreme conditions.

Ceperley noted that understanding hydrogen under high temperatures and pressures will enhance our understanding of Jupiter and Saturn, gaseous planets primarily made of hydrogen. Jensen added that hydrogen's "simplicity" makes the substance important to study. "We want to understand everything, so we should start with systems that we can attack," he said. "Hydrogen is simple, so it's worth knowing that we can deal with it."

More information: Hongwei Niu et al, Stable Solid Molecular Hydrogen above 900 K from a Machine-Learned Potential Trained with Diffusion Quantum Monte Carlo, Physical Review Letters (2023). DOI: 10.1103/PhysRevLett.130.076102

Journal information: Physical Review Letters

The rest is here:
Simulations with a machine learning model predict a new phase of solid hydrogen - Phys.org

How is artificial intelligence revolutionizing financial services? – Cointelegraph

What is the role of artificial intelligence in the financial services industry?

AI is proving to be a powerful tool for financial institutions looking to improve their operations, manage risks, and optimize their portfolios more effectively.

Artificial intelligence (AI) is playing an increasingly vital role in the financial services industry. Predictive analytics, which can assist financial firms in better understanding and anticipating client demands, preferences and behaviors, is one of the most well-known uses of AI. Firms can then use this information to create goods and services that are more individually tailored.

Moreover, AI is also being utilized to enhance risk management and fraud detection in the financial services industry. AI systems can swiftly identify unusual patterns and transactions that can point to fraud by evaluating massive amounts of data in real-time. This can assist financial organizations in reducing overall financial risk and preventing fraud-related losses.
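The pattern-spotting idea behind such fraud detection can be illustrated with a deliberately simple sketch (not any institution's actual system): flag transactions whose amounts deviate strongly from typical behavior using a robust z-score based on the median and MAD. Production systems combine many features and far more sophisticated models.

```python
import numpy as np

rng = np.random.default_rng(1)

# 500 ordinary transaction amounts plus two injected anomalies at the end.
amounts = np.concatenate([rng.normal(40, 10, 500), [950.0, 1200.0]])

# Robust z-score: median and MAD resist distortion by the outliers themselves.
median = np.median(amounts)
mad = np.median(np.abs(amounts - median))
robust_z = 0.6745 * (amounts - median) / mad  # 0.6745 scales MAD toward sigma

# Flag anything more than 6 robust standard deviations from typical behavior.
flagged = np.where(np.abs(robust_z) > 6)[0]
print(flagged)  # [500 501]
```

The same scoring runs in real time as each transaction arrives, which is how unusual patterns can be surfaced for review before losses accumulate.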

In addition, AI is being used for portfolio optimization and financial forecasting. By utilizing machine learning algorithms and predictive analytics, financial institutions can optimize their portfolios and make more accurate investment decisions.
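One classical building block of the portfolio optimization mentioned above is the closed-form minimum-variance portfolio; the covariance numbers below are hypothetical, and institutional pipelines layer much more on top of this.

```python
import numpy as np

# Assumed covariance matrix for three hypothetical assets.
cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.16]])

# Minimize w' C w subject to sum(w) = 1: w = C^-1 1 / (1' C^-1 1).
inv = np.linalg.inv(cov)
ones = np.ones(len(cov))
weights = inv @ ones / (ones @ inv @ ones)

portfolio_var = float(weights @ cov @ weights)
print(np.round(weights, 3), round(portfolio_var, 4))
```

By construction the resulting portfolio's variance is no higher than that of the least-risky single asset, which is the sense in which the allocation is "optimized."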

Machine learning, deep learning and NLP are helping financial institutions improve their operations, enhance customer experiences, and make more informed decisions. These technologies are expected to play an increasingly significant role in the finance industry in the coming years.

Financial organizations may make better decisions by using machine learning to examine massive volumes of data and find trends. For instance, machine learning can be used to forecast stock prices, credit risk and loan defaulters, among other things.

Deep learning is a subset of machine learning that utilizes neural networks to model and resolve complicated issues. For instance, deep learning is being used in finance to create models for detecting fraud, pricing securities and managing portfolios.

Natural language processing (NLP) is being used in finance to enable computers to understand human language and respond appropriately. NLP is used in financial chatbots, virtual assistants and sentiment analysis tools. It enables financial institutions to improve customer service, automate customer interactions and develop better products and services.
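The simplest form of the sentiment analysis described above is lexicon-based scoring, sketched below with a tiny made-up word list; the production tools the article refers to use trained language models rather than hand-written lexicons.

```python
# Hypothetical finance-flavored sentiment lexicons for illustration only.
POSITIVE = {"gain", "growth", "beat", "strong", "up"}
NEGATIVE = {"loss", "decline", "miss", "weak", "down"}

def sentiment(text: str) -> int:
    """Crude score: count of positive words minus count of negative words."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

print(sentiment("Strong quarter with revenue growth"))  # 2
print(sentiment("Shares down after earnings miss"))     # -2
```

A chatbot or monitoring tool would feed such scores (from a real model) into routing or alerting logic, e.g. escalating strongly negative customer messages to a human agent.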

AI is proving to be a powerful tool for financial institutions looking to improve their fraud detection and risk management processes, enabling them to operate more efficiently and effectively while minimizing potential losses.

Here are the steps explaining how AI helps in fraud detection and risk management in financial services:

Chatbots and virtual assistants are proving to be valuable tools for financial institutions looking to improve the customer experience, reduce costs and operate more efficiently.

Chatbots and virtual assistants are utilized to provide individualized services and assistance, which enhances the client experience. Customers can communicate with these AI-powered tools in real-time and receive details on their accounts, transactions and other financial services. They can also answer frequently asked questions, offer financial counsel and assist clients with challenging problems.

Suppose a bank customer wanted to check their account balance or ask a question about a recent transaction, but the bank's customer service center was closed. The customer can make use of the bank's chatbot or virtual assistant to receive the information they require in real-time rather than having to wait until the following day to speak with a customer support agent.

The virtual assistant or chatbot can verify the customer's identification and give them access to their account balance or transaction details. If the customer has a more complex issue, the chatbot or virtual assistant can escalate it to a human representative for further assistance. This means that AI-powered chatbots and virtual assistants can provide immediate responses to customer inquiries, reducing wait times and improving customer satisfaction.

Because they are accessible round-the-clock, chatbots and virtual assistants are useful resources for clients who require support outside of conventional office hours. Through the automation of repetitive processes and the elimination of the need for human support, they can also assist financial organizations in cutting expenses.

The financial services industry can enjoy several benefits from AI systems, such as automating mundane tasks, improving risk management and swift decision-making. Nevertheless, the drawbacks of AI, such as security risks, potential bias and absence of a human touch, should not be ignored.

Potential advantages of AI in the financial services industry include:

The possible disadvantages of using AI in the financial services industry consist of:

The future of AI in finance is exciting, with the potential to improve efficiency, accuracy and customer experience. However, it will be essential for financial institutions to carefully manage the risks and challenges associated with the use of AI.

The use of AI in financial services has the potential to significantly improve the sector. Several facets of finance have already been transformed by AI, including fraud detection, risk management, portfolio optimization and customer service.

Automating financial decision-making is one area where AI is anticipated to have a large impact in the future. This could involve the examination of massive amounts of financial data using machine learning algorithms, followed by the formulation of investment recommendations. With AI, customized investment portfolios might be constructed for clients depending on their risk appetite and financial objectives.

In addition, AI-powered recommendation engines could also be developed to offer customers targeted products and services that meet their needs. This could improve customer experience and satisfaction while also increasing revenue for financial institutions.

However, there are also potential challenges associated with the use of AI in finance. These include data privacy concerns, regulatory compliance issues, and the potential for bias and discrimination in algorithmic decision-making. It will be important for financial institutions to ensure that AI is used in a responsible and ethical way and that appropriate safeguards, such as transparent algorithms and regular audits, are in place to mitigate these risks.

Go here to read the rest:
How is artificial intelligence revolutionizing financial services? - Cointelegraph

Machine Learning Has Value, but It’s Still Just a Tool – MedCity News

Machine learning (ML) has exciting potential for a constellation of uses in clinical trials. But hype surrounding the term may build expectations that ML is not equipped to deliver. Ultimately, ML is a tool, and like any tool, its value will depend on how well users understand and manage its strengths and weaknesses. A hammer is an effective tool for pounding nails into boards, after all, but it is not the best option if you need to wash a window.

ML has some obvious benefits as a way to quickly evaluate large, complex datasets and give users a quick initial read. In some cases, ML models can even identify subtleties that humans might struggle to notice, and a stable ML model will consistently and reproducibly generate similar results, which can be both a strength and a weakness.

ML can also be remarkably accurate, assuming the data used to train the ML model was accurate and meaningful. Image recognition ML models are being widely used in radiology with excellent results, sometimes catching things missed by even the most highly trained human eye.

This doesn't mean ML is ready to replace clinicians' judgment or take their jobs, but results so far offer compelling evidence that ML may have value as a tool to augment their clinical judgment.

A tool in the toolbox

That human factor will remain important, because even as they gain sophistication, ML models will lack the insight clinicians build up over years of experience. As a result, subtle differences in one variable may cause the model to miss something important (false negatives), or overstate something that is not important (false positives).

There is no way to program for every possible influence on the available data, and there will inevitably be a factor missing from the dataset. As a result, outside influences such as a person moving during ECG collection, suboptimal electrode connection, or ambient electrical interference may introduce variability that ML is not equipped to address. In addition, ML won't recognize an error such as an end user entering an incorrect patient identifier. But because ECG readings are unique, like fingerprints, a skilled clinician might realize that the tracing they are looking at does not match what they have previously seen from the same patient, prompting questions about who the tracing actually belongs to.

In other words, machines are not always wrong, but they are also not always right. The best results come when clinicians use ML to complement, not supplant, their own efforts.

Maximizing ML

Clinicians who understand how to effectively implement ML in clinical trials can benefit from what it does well. For example:

The value of ML will continue to grow as algorithms improve and computing power increases, but there is little reason to believe it will ever replace human clinical oversight. Ultimately, ML provides objectivity and reproducibility in clinical trials, while humans supply judgment and can contribute knowledge about factors the program does not take into account. Both are needed. And while ML's ability to flag data inconsistencies may reduce some workload, those predictions still must be verified.

There is no doubt that ML has incredible potential for clinical trials. Its power to quickly manage and analyze large quantities of complex data will save study sponsors money and improve results. However, it is unlikely to completely replace human clinicians for evaluating clinical trial data because there are too many variables and potential unknowns. Instead, savvy clinicians will continue to contribute their expertise and experience to further develop ML platforms to reduce repetitive and tedious tasks with a high degree of reliability and a low degree of variability, which will allow users to focus on more complex tasks.

Photo: Gerd Altmann, Pixabay

Go here to read the rest:
Machine Learning Has Value, but It's Still Just a Tool - MedCity News