
ADLINK Wants to Put a Powerful GPGPU in Your Pocket for On-the-Go Machine Learning Acceleration – Hackster.io

Anyone aiming to work with deep learning and artificial intelligence systems on-the-go with a notebook or small form factor PC is about to get an option to supercharge their hardware, courtesy of a pocket-sized external GPU from ADLINK: the Pocket AI, based on an NVIDIA RTX A500 add-in board.

"Pocket AI offers the ultimate in flexibility and reliability on the move," ADLINK claims of its impending hardware launch. "This portable, plug and play AI accelerator delivers a perfect power/performance balance from the NVIDIA RTX GPU. Pocket AI is the perfect device for AI developers, professional graphics users and embedded industrial applications for boosting productivity by improve the work efficiency."

The compact device takes advantage of the ability to run PCI Express lanes over a Thunderbolt USB Type-C connection, letting devices that lack room for a full-size PCIe graphics card, such as notebooks, interface with one of NVIDIA's more powerful GPUs.

In the case of the launch model, that's an NVIDIA RTX A500 graphics card, based on the Ampere GA107 architecture, which offers 2,048 CUDA cores, 64 Tensor cores, and 16 RT cores. All told, that's equivalent to a claimed 6.54 tera-floating point operations per second (TFLOPS) of FP32-precision compute, or 100 tera-operations per second (TOPS) of dense INT8 performance.

The idea behind the device: to act as a simple plug-and-play accelerator for deep learning and artificial intelligence workloads, as well as graphically demanding tasks including ray tracing. The device is powered by USB Power Delivery (USB PD) on a second Type-C port, measures just 106 × 72mm (around 4.17 × 2.83"), and weighs 250g (around 8.8oz), making it easily portable. The only catch: just 4GB of GDDR6 memory, meaning that the device may struggle to fit larger models into RAM.

More information on the device is available on the ADLINK website. The company has yet to reveal pricing, but has confirmed plans to open pre-orders this month, with shipping due to commence in June.


Continued here:
ADLINK Wants to Put a Powerful GPGPU in Your Pocket for On-the-Go Machine Learning Acceleration - Hackster.io

Read More..

Astrophysicists Show How to ‘Weigh’ Galaxy Clusters with Artificial … – University of Connecticut

Scholars from the Institute for Advanced Study and UConn's Cosmology and Astrophysics with MachinE Learning Simulations (CAMELS) project have used a machine learning algorithm known as symbolic regression to generate new equations that help solve a fundamental problem in astrophysics: inferring the mass of galaxy clusters. Their work was recently published in Proceedings of the National Academy of Sciences.

Galaxy clusters are the most massive objects in the Universe: a single cluster contains anything from a hundred to many thousands of galaxies, alongside collections of plasma, hot X-ray emitting gas, and dark matter. These components are held together by the cluster's own gravity. Understanding such galaxy clusters is crucial to pinning down the origin and continuing evolution of our universe.

"Measuring how many clusters exist, and then what their masses are, can help us understand fundamental properties like the total matter density in the universe, the nature of dark energy, and other fundamental questions," says co-author and UConn Professor of Physics Daniel Anglés-Alcázar.

Perhaps the most crucial quantity determining the properties of a galaxy cluster is its total mass. But measuring this quantity is difficult: galaxies cannot be weighed by placing them on a scale. The problem is further complicated by the fact that the dark matter making up much of a cluster's mass is invisible. Instead, scientists infer the mass of a cluster from other observable quantities.

Anglés-Alcázar notes another limitation: there are many ideas for how galaxies form and evolve and give rise to galaxy clusters, but there are still uncertainties on some of these processes.

Previously, scholars considered a cluster's mass to be roughly proportional to another, more easily measurable quantity called the integrated electron pressure (or the Sunyaev-Zeldovich flux, often abbreviated to YSZ). The theoretical foundations of the Sunyaev-Zeldovich flux were laid in the early 1970s by Rashid Sunyaev, current Distinguished Visiting Professor in the Institute for Advanced Study's School of Natural Sciences, and his collaborator Yakov B. Zeldovich.

However, the integrated electron pressure is not a reliable proxy for mass because it can behave inconsistently across different galaxy clusters. The outskirts of clusters tend to exhibit very similar YSZ, but their cores are much more variable. The YSZ/mass equivalence was problematic in that it gave equal weight to all parts of the cluster. As a result, a lot of scatter was observed, meaning that the error bars on the mass inferences were large.

Digvijay Wadekar, current Member of the Institute for Advanced Study's School of Natural Sciences, has worked with collaborators across ten different institutions to develop an AI program to improve the understanding of the relationship between the mass and the YSZ.

Wadekar and his collaborators fed their AI program with state-of-the-art cosmological simulations developed by groups at Harvard and at the Center for Computational Astrophysics in New York. Their program searched for and identified additional variables that might make inferring the mass from the YSZ more accurate.

Anglés-Alcázar explains that the CAMELS collaboration provided a large suite of simulations where the researchers could measure the properties of galaxies and clusters, and how they depend on the assumptions of the underlying galaxy formation physics.

"Because we have thousands of parallel universes simulated under different assumptions, when we train a machine learning algorithm on large amounts of simulated data, we can test whether the predictions are robust relative to those variations or not."

AI is useful for identifying new parameter combinations that could be overlooked by human analysts. While it is easy for human analysts to identify two significant parameters in a data set, AI is better able to parse through high data volumes often revealing unexpected influencing factors.

"Machine learning can be fantastic for making predictions," says Anglés-Alcázar, "but it's only as good as the data you use to train the machine learning model. For this type of application, it's important to have simulations that can represent the real universe accurately enough, and understand the range of uncertainties, so that when you train the machine learning model, hopefully, you can apply that to real galaxy clusters to improve the actual measurements in the real universe."

More specifically, the AI method the researchers employed is known as symbolic regression. "Right now, a lot of the machine learning community focuses on deep neural networks," Wadekar says. "These are very powerful, but the drawback is that they are almost like a black box. We cannot understand what goes on in them. In physics, if something is giving good results, we want to know why it is doing so. Symbolic regression is beneficial because it searches a given dataset and generates simple mathematical expressions in the form of equations that you can understand. It provides an easily interpretable model."

Their symbolic regression program handed them a new equation, which was able to better predict the mass of the galaxy cluster by augmenting YSZ with information about the cluster's gas concentration. Wadekar and his collaborators then worked backward from this AI-generated equation and tried to find a physical explanation for it. They realized that gas concentration is in fact correlated with the noisy areas of clusters where mass inferences are less reliable. Their new equation, therefore, improved mass inferences by providing a way for these noisy areas of the cluster to be down-weighted. In a sense, the galaxy cluster can be compared to a spherical doughnut. The new equation extracts the jelly at the center of the doughnut (which introduces larger errors), and concentrates on the doughy outskirts for more reliable mass inferences.
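
To illustrate the kind of workflow involved, here is a minimal, hypothetical sketch using the open-source PySR symbolic regression package on mock data. The column names, the mock mass relation, and all settings below are illustrative assumptions, not the study's actual pipeline or equation.

```python
# Minimal symbolic-regression sketch on simulated cluster data,
# assuming the open-source PySR package. The mock relation between
# Y_SZ, gas concentration, and mass is invented for illustration.
import numpy as np
from pysr import PySRRegressor

rng = np.random.default_rng(0)
n = 2000
y_sz = rng.lognormal(mean=0.0, sigma=0.5, size=n)   # integrated electron pressure (arbitrary units)
c_gas = rng.uniform(0.05, 0.4, size=n)               # gas concentration (illustrative)
# Mock "true" log-mass: a power law in Y_SZ with a concentration-dependent correction.
log_mass = 0.6 * np.log10(y_sz) - 0.3 * c_gas + rng.normal(0, 0.02, n)

model = PySRRegressor(
    niterations=40,
    binary_operators=["+", "-", "*", "/"],
    unary_operators=["log"],   # log is a natural building block for power laws
)
model.fit(np.column_stack([y_sz, c_gas]), log_mass)
print(model)  # table of candidate equations, from simplest to most accurate
```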

The new equations can provide observational astronomers engaged in upcoming galaxy cluster surveys with better insights into the mass of the objects that they observe. "There are quite a few surveys targeting galaxy clusters which are planned in the near future," Wadekar says. "Examples include the Simons Observatory (SO), the Stage 4 CMB experiment (CMB-S4), and an X-ray survey called eROSITA. The new equations can help us in maximizing the scientific return from these surveys."

He also hopes that this publication will be just the tip of the iceberg when it comes to using symbolic regression in astrophysics. "We think that symbolic regression is highly applicable to answering many astrophysical questions," Wadekar says. "In a lot of cases in astronomy, people make a linear fit between two parameters and ignore everything else. But nowadays, with these tools, you can go further. Symbolic regression and other artificial intelligence tools can help us go beyond existing two-parameter power laws in a variety of different ways, ranging from investigating small astrophysical systems like exoplanets to galaxy clusters, the biggest things in the universe."

Go here to see the original:
Astrophysicists Show How to 'Weigh' Galaxy Clusters with Artificial ... - University of Connecticut

Read More..

Using machine learning to determine the time of exposure to … – Nature.com

In this section we describe the clinical data set, data preprocessing, feature selection process, classifiers used, and design for the machine learning experiments.

The data we study is a collection of seven clinical studies available via the NCBI Gene Expression Omnibus, Series GSE73072. The details of these studies can be found in [1], but we briefly summarize here and in Fig. 1.

Each of the seven studies enrolled individuals to be infected with one of four viruses associated with a common respiratory infection. Studies DEE2-DEE5 challenged participants with H1N1 or H3N2. Studies Duke and UVA challenged participants with HRV, while DEE1 challenged individuals with RSV.

In all cases, individuals had blood samples taken at regular intervals every 4–12 h both prior to and after infection; see Fig. 1 for details. Specific time points are measured as hours since infection and vary by study. In total, 148 human subjects were involved, with approximately 20 sampled time points per person. Blood samples were run through undirected microarray assays. CEL data files available via GEO were read and processed using RMA (Robust Multi-array Average) normalization through use of several Bioconductor packages [23], producing expression values across 22,277 microarray probes.

To address the time-of-infection question, we separate the training and test samples into 9 bins in time post-inoculation, each with a categorical label; see Fig. 1. The first six categories correspond to disjoint 8-h intervals in the first 2 days after inoculation, and the last three categories are disjoint 24-h intervals from hour 48 to hour 120. In addition to this 9-class classification problem, we also studied a relaxed binary prediction problem of whether a subject belongs to the early phase of infection (time since inoculation ≤ 48 h) or the later phase (> 48 h). Results for this binary classification are inferred from the 9-class problem, i.e., if a classified label is associated with a time in the first 2 days, it is considered correctly labeled.
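
A minimal sketch (assuming 0-based bin indices, which the paper does not specify) of how hours post-inoculation could map to these nine bins and to the relaxed binary label:

```python
def time_bin(hours_since_inoculation: float) -> int:
    """Map hours post-inoculation to one of nine categorical bins:
    six disjoint 8-h bins covering 0-48 h, then three 24-h bins
    covering 48-120 h. Returns a 0-based bin index (illustrative)."""
    t = hours_since_inoculation
    if t < 0 or t > 120:
        raise ValueError("time outside the 0-120 h study window")
    if t < 48:
        return int(t // 8)                       # bins 0-5: [0,8), [8,16), ..., [40,48)
    return 6 + min(int((t - 48) // 24), 2)       # bins 6-8: [48,72), [72,96), [96,120]

def early_phase(hours: float) -> bool:
    """Relaxed binary label: early phase (<= 48 h) vs. later phase (> 48 h)."""
    return hours <= 48
```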

After the data is processed, we apply the following general pipeline for each of the 14 experiments enumerated in Fig.2 (top panel):

Partition the data into training and testing sets based on the classification experiment.

Normalize the data to correct for batch effects seen between subjects (e.g., using the linear batch normalization routine in the limma package [24]).

Identify comprehensive sets of predictive features using the Iterative Feature Removal (IFR) approach [6], which aims to extract all discriminatory features in high-dimensional data with repeated application of sparse linear classifiers such as Sparse Support Vector Machines (SSVM).

Identify network architectures and other hyperparameters for Artificial Neural Networks (ANN) and Centroid Encoder (CE) by utilizing a five-fold cross-validation experiment on the training data.

Evaluate the features identified in step 3 on the test data. This is done by training and evaluating a new model using the selected features with a leave-one-subject-out cross-validation scheme on the test study. The metric used for evaluation is BSR; throughout this study we use BSR as a balanced measure of performance that accounts for imbalanced class sizes while remaining easy to interpret.

For each of the training sets illustrated in Fig.2 (top panel; stripes), features are selected using the IFR algorithm, with an SSVM classifier. This is done separately for all pairwise combinations of categorical labels (time bins); a 9-class experiment leads to 9-choose-2 = 36 pairwise combinations. So, for each of these 36 combinations of time bins, features are selected using the following steps.

First, the input data to the IFR algorithm is partitioned into training and validation sets. Next, sets of features that produce high accuracy on the validation set are selected iteratively. In each iteration, features that have previously been selected are masked out so that they're not used again. Feature selection is halted once the predictive rate on the validation data drops below a specified threshold. This results in one feature-set for a particular training-validation split of the input data.

Next, more training-validation partitions are repeatedly sampled, and the feature selection described above is repeated for each partition, resulting in a different feature-set for each new partition. These feature-sets are then combined by applying a set union operation, and the frequency with which each individual feature is discovered across feature-sets is tracked. The feature frequency is used to rank the features; the more often a particular feature is discovered, the more important it is.

The size of this combined feature-set, although only about 5–20% of the original feature-set size, is still often too large for classification, so as a last step we reduce the size of this feature-set. This is done by performing a grid search using a linear SVM (without sparsity penalty) on the training data, taking the top n features, ranked by frequency, which maximize the average of the true positive rates on every class, or BSR. Once the features have been selected, we perform a more detailed leave-one-subject-out classification for the experiments described in the Results and visualized in Fig. 2, using the classifiers described in the Methods section.
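
To make the iterative selection loop concrete, here is a rough sketch with an l1-penalized linear SVM from scikit-learn standing in for the SSVM. The validation threshold, penalty strength, stopping rule, and variable names are illustrative assumptions, not the paper's settings.

```python
# Rough sketch of Iterative Feature Removal (IFR): select features with a
# sparse linear classifier, mask them out, and repeat until validation
# accuracy drops below a threshold.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.metrics import balanced_accuracy_score

def iterative_feature_removal(X_train, y_train, X_val, y_val,
                              acc_threshold=0.75, max_rounds=20):
    n_features = X_train.shape[1]
    available = np.ones(n_features, dtype=bool)   # features not yet selected
    selected = []
    for _ in range(max_rounds):
        idx = np.where(available)[0]
        clf = LinearSVC(penalty="l1", dual=False, C=1.0, max_iter=5000)
        clf.fit(X_train[:, idx], y_train)
        acc = balanced_accuracy_score(y_val, clf.predict(X_val[:, idx]))
        if acc < acc_threshold:       # halt once remaining features lose predictive power
            break
        nonzero = idx[np.abs(clf.coef_).ravel() > 1e-8]
        selected.append(nonzero)
        available[nonzero] = False    # mask out features already selected
    return selected                   # one feature-set per iteration
```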

Feature selection produced 36 distinct feature-sets, coming from all distinct choices of two time bins from the nine possible labels. To address the question of commonality or importance of features selected on a time bin for a specific pathway, we implemented a heuristic scoring system. For a fixed time bin (say, bin1) and a fixed feature-set (say, bin1_vs_bin2; quantities summarized in Table 2), the associated collection of features was referenced against the GSEA MSigDB. This database includes both canonical pathways and gene sets that are the result of data mining; we refer to anything in this collection generically as a gene set. A score for each MSigDB gene set was assigned for a given feature-set (bin1_vs_bin2) based on the ratio of features in the feature-set which appear in the gene set. For instance, a score of 0.5 for hypothetical GENE_SET_A for feature-set bin1_vs_bin2 represents the fact that 50% of the features in GENE_SET_A are present in bin1_vs_bin2.

A score for a pathway on a time bin by itself was defined as the sum of the scores for that pathway over all feature-sets related to it. Continuing the example, a score for GENE_SET_A on bin1 would be the sum of the scores for GENE_SET_A for feature-sets bin1_vs_bin2, bin1_vs_bin3, all the way up to bin1_vs_bin9, with equal weighting.
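
A small sketch of this scoring heuristic, using toy identifiers rather than real MSigDB gene sets:

```python
# Heuristic gene-set scoring as described above. Gene sets and feature-sets
# are plain Python sets of probe/gene identifiers; names are illustrative.
def gene_set_score(gene_set, feature_set):
    """Fraction of the gene set's members that appear in the feature-set."""
    if not gene_set:
        return 0.0
    return len(gene_set & feature_set) / len(gene_set)

def bin_score(gene_set, pairwise_feature_sets, bin_label):
    """Sum of scores over every pairwise feature-set involving the given bin."""
    return sum(
        gene_set_score(gene_set, fs)
        for (bin_a, bin_b), fs in pairwise_feature_sets.items()
        if bin_label in (bin_a, bin_b)
    )

# Toy example:
gene_set_a = {"GENE1", "GENE2", "GENE3", "GENE4"}
pairwise = {("bin1", "bin2"): {"GENE1", "GENE2", "GENE9"},
            ("bin1", "bin3"): {"GENE3"}}
print(bin_score(gene_set_a, pairwise, "bin1"))   # 0.5 + 0.25 = 0.75
```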

Certainly, there are several subtle statistical and combinatorial questions relating to this procedure. Direct comparison of pathways and gene sets is challenging due to their overlapping nature (features may belong to multiple gene sets). The number of features associated with a gene set can vary anywhere from under 10 to over 1,000, which may complicate a scoring system based on percentage overlap, such as ours. Attempting to use a mathematically or statistically rigorous procedure to account for these and other potential factors is a worthy exercise, but we believe our heuristic is sufficient for an explainable high-level summary of the composition of the feature-sets found.

In this section we describe the classifiers and how they are applied for the classification task. We also describe how the feature-sets are adapted to different classifiers.

After feature selection, we evaluate the features on test sets based on successful classification into the nine time bins. For each experiment shown in Fig. 1, we use the feature-sets extracted on its training set and evaluate the models using leave-one-subject-out cross validation on the test set. Each experiment is repeated 25 times to capture variability. For the binary classifiers (SSVM and linear SVM), we used a multiclass method, with each of its 9-choose-2 = 36 pairwise models using its respective feature-set. On the other hand, we used a single classification model for ANN and CE because these models can handle multiple classes. The feature-set for these models is created by taking a union of the 36 pairwise feature-sets.

Balanced Success Rate (BSR) Throughout the Results section, we report predictive power in terms of BSR. This is a simple average of the true positive rates for each of the categories. The BSR serves as a simple, interpretable metric, especially when working with imbalanced data sets, and gives a holistic view of classification performance that easily generalizes to multiclass problems. For example, if the true positive rates in a 3-class problem were TPR_1 = 95%, TPR_2 = 50%, and TPR_3 = 65%, the BSR for the multiclass problem would be (TPR_1 + TPR_2 + TPR_3)/3 = 70%.
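
A small helper makes the definition concrete; it mirrors scikit-learn's balanced_accuracy_score on the same labels.

```python
# BSR = mean of per-class true positive rates.
import numpy as np

def balanced_success_rate(y_true, y_pred):
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    classes = np.unique(y_true)
    tprs = [(y_pred[y_true == c] == c).mean() for c in classes]
    return float(np.mean(tprs))

# Example from the text: per-class TPRs of 95%, 50%, 65% average to 70%.
print(np.mean([0.95, 0.50, 0.65]))   # 0.70
```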

We implement a pairwise model (or one-vs-one model) for training and classification to extend the binary classifiers described below (ANN and CE do not require these). For a data set with c unique classes, c-choose-2 models are built using the relevant subsets of the data. Learned model parameters and features selected for each model are stored and later used when discriminatory features are needed in the test phase.

After training, classification is done by a simple voting scheme: a new sample is classified by all c-choose-2 classifiers and assigned the label that received the plurality of the votes. If a tie occurs, the class is decided by an unbiased coin flip between the winning labels. In a nine-class problem, this corresponds to 36 classifiers and 36 selected feature-sets.
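
A minimal sketch of this voting scheme, with hypothetical per-pair models and feature indices (the dictionary structure below is an assumption for illustration):

```python
# One-vs-one voting with per-pair feature subsets. `pair_models` maps
# (class_a, class_b) -> (fitted binary classifier, feature index array).
import random
from collections import Counter

def predict_one_vs_one(x, pair_models):
    votes = Counter()
    for (a, b), (clf, feat_idx) in pair_models.items():
        label = clf.predict(x[feat_idx].reshape(1, -1))[0]
        votes[label] += 1
    top = max(votes.values())
    winners = [c for c, v in votes.items() if v == top]
    # Unbiased coin flip (random choice) between tied winning labels.
    return winners[0] if len(winners) == 1 else random.choice(winners)
```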

Linear SVM For a plain linear SVM model, the implementation in the scikit-learn package in Python was used [25]. While scikit-learn also has built-in support to extend this binary classifier to multiclass problems, either by one-vs-one or one-vs-all approaches, we only use it for binary classification problems, or for binary sub-problems of a one-vs-one scheme for a multiclass problem. The optimization problem was introduced in [26] and requires the solution to

$$\begin{aligned} \min_{w,b} \; & \|w\|_2^2 \quad \text{subject to} \\ & y^i \left( w \cdot x^i - b \right) \ge 1, \quad \text{for all } i \end{aligned}$$

(1)

where \(y^i\) represents the class label assigned to \(\pm 1\), \(x^i\) represents a sample vector, \(w\) represents the weight vector, and \(b\) represents a bias (a scalar shift). This approach has seen widespread use and success in biological feature extraction [27,28].

Sparse SVM (SSVM) The SSVM problem replaces the 2-norm in the objective of equation (1) with a 1-norm, which is understood to promote sparsity (many zero coefficients) in the coefficient vector \(\mathbf{w}\). This allows one to ignore those features and is our primary tool for feature selection when coupled with Iterative Feature Removal [6]. Arbitrary p-norm SVMs were introduced in [29], and \(\ell_1\)-norm sparse SVMs were further developed for feature selection in [6,30,31].

After a standard one-hot encoding scheme, inherently multiclass methods (here: neural networks) do not need to be adapted to handle a multiclass problem the way the linear methods do. Nor is there a straightforward way to use the time-dependent (bin-specific) feature-sets when passing new data forward through the neural network; doing so would be begging the (time of infection) question. Instead, for these methods, we simply take the union of all pairwise features built to classify pairs of time bins, then allow the multiclass algorithm to learn any necessary relationships internally. The specifics of the neural networks are described below.

Artificial Neural Networks (ANN) We apply a standard feed-forward neural network trained to learn the labels of the training data. In all the classification tasks, we used two hidden layers with 500 ReLU units in each layer. We used the whole training set to calculate the gradient of the loss function (cross-entropy) while updating the network parameters using Scaled Conjugate Gradient descent (SCG); see [32].
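
A minimal PyTorch sketch of such a network follows. The input and output dimensions are illustrative, and Adam is used as a stand-in optimizer because Scaled Conjugate Gradient is not a stock PyTorch optimizer.

```python
# Feed-forward network with two 500-unit ReLU hidden layers and a
# cross-entropy loss, trained with full-batch gradient updates.
import torch
import torch.nn as nn

n_features, n_classes = 1000, 9   # illustrative sizes

model = nn.Sequential(
    nn.Linear(n_features, 500), nn.ReLU(),
    nn.Linear(500, 500), nn.ReLU(),
    nn.Linear(500, n_classes),     # logits; CrossEntropyLoss applies softmax internally
)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # stand-in for SCG

def train_full_batch(X, y, epochs=200):
    # Whole-training-set gradient per update, mirroring the description above.
    for _ in range(epochs):
        optimizer.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        optimizer.step()
```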

Centroid-Encoder (CE) This is a variation of an autoencoder which can be used for both visualization and classification purposes. Consider a data set with N samples and M classes. The classes are denoted \(C_j, j = 1, \dots, M\), where the indices of the data associated with class \(C_j\) are denoted \(I_j\). We define the centroid of each class as \(c_j = \frac{1}{|C_j|}\sum_{i \in I_j} x_i\), where \(|C_j|\) is the cardinality of class \(C_j\). Unlike an autoencoder, which maps each point \(x_i\) to itself, CE maps each point \(x_i\) to its class centroid \(c_j\) by minimizing the following cost function over the parameter set \(\theta\):

$$\mathscr{L}_{ce}(\theta) = \frac{1}{2N} \sum_{j=1}^{M} \sum_{i \in I_j} \Vert c_j - f(x_i; \theta) \Vert_2^2$$

(2)

The mapping f is composed of a dimension-reducing mapping g (encoder) followed by a dimension-increasing reconstruction mapping h (decoder). The output of the encoder is used as a supervised visualization tool [33], and attaching another layer to map to the one-hot encoded labels and further training by fine-tuning provides a classifier. For further details, see [34]. In all of the classification tasks, we used three hidden layers (500 → 100 → 500) with ReLU activation for the centroid mapping. After that, we attached a classification layer with one-hot encoding to the encoder (500 → 100) to learn the class labels of the samples. The model parameters were updated using SCG.
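
A rough PyTorch sketch of the centroid-mapping idea, under the stated 500 → 100 → 500 layer widths; the input dimension, the use of Adam-style training instead of SCG, and the 0..M-1 label encoding are assumptions for illustration.

```python
# Centroid-Encoder pre-training: map each sample toward its class centroid.
import torch
import torch.nn as nn

d_in = 1000                                     # illustrative input dimension
encoder = nn.Sequential(nn.Linear(d_in, 500), nn.ReLU(),
                        nn.Linear(500, 100), nn.ReLU())
decoder = nn.Sequential(nn.Linear(100, 500), nn.ReLU(),
                        nn.Linear(500, d_in))

def centroid_encoder_loss(X, y):
    """(1/2N) sum over samples of ||c_j - f(x_i)||^2, with f = decoder(encoder(.))."""
    centroids = torch.stack([X[y == c].mean(dim=0) for c in torch.unique(y)])
    targets = centroids[y]                      # assumes integer labels 0..M-1
    recon = decoder(encoder(X))
    return 0.5 * ((targets - recon) ** 2).sum(dim=1).mean()

# After pre-training, a classification layer on the 100-d encoding is
# fine-tuned against the one-hot encoded labels.
classifier = nn.Sequential(encoder, nn.Linear(100, 9))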

Continue reading here:
Using machine learning to determine the time of exposure to ... - Nature.com

Read More..

AI Creating Higher Paying Jobs; Growing Demand For Prompt … – Vibes of India

Techies and software engineers, move aside. New job avenues are being chalked out, all thanks to the rise of AI in our communication and learning. In what is still being debated as ethical at all, AI interfaces such as ChatGPT and Bard are necessitating a new tribe of workers behind the scenes. Prompt engineers, as they are now called, are people who spend their day coaxing AI to produce better results and help companies train their workforce to harness the tools.

Some media reports note that prompt engineers can draw a salary as high as $335,000, or about Rs 2 crore, annually.

Over a dozen artificial intelligence language systems called large language models, or LLMs, have been created by companies like Google parent Alphabet Inc., OpenAI, and Meta Platforms Inc.

"It is like an AI whisperer. You'll often find prompt engineers come from a history, philosophy, or English language background because it is all wordplay. In the end, it is about processing a search into a limited number of words," explains Albert Phelps, a prompt engineer at Mudano, part of consultancy firm Accenture in Leytonstone, England.

Phelps and his colleagues spend most of the day writing messages or prompts for tools like OpenAI's ChatGPT, which can be saved as presets within OpenAI's playground for clients and others to use later. A typical day in the life of a prompt engineer involves writing five different prompts, with about 50 interactions with ChatGPT, says Phelps.

Companies like Anthropic, a Google-backed startup, are advertising salaries up to $335,000 for a Prompt Engineer and Librarian in the Bay Area.

Automated document reviewer Klarity, also in California, is offering as much as $230,000 for a machine learning engineer who can prompt and understand how to produce the best output from AI tools.

Outside of the tech world, Boston Children's Hospital and London law firm Mishcon de Reya recently advertised for prompt engineer jobs.

The best-paying roles often go to people who have PhDs in machine learning or ethics, or those who have founded AI companies. Recruiters and others say these are among the critical skills needed to be successful.

Google, TikTok and Netflix Inc. have been driving salaries higher, but the role is becoming mainstream among bigger companies thanks to the excitement around the launch of OpenAI's ChatGPT-4, Google Bard, and Microsoft's Bing AI chatbot.


Original post:
AI Creating Higher Paying Jobs; Growing Demand For Prompt ... - Vibes of India

Read More..

Deep Learning Tools Can Improve Liver Cancer Prognosis – Inside Precision Medicine

Research led by Tsinghua University, Beijing, has developed a deep learning (DL) program that can improve prognostic biomarker discovery to help patients with liver cancer.

The researchers used the tool, known as PathFinder, to show the value of a biomarker that plays a key role in liver cancer outcomes. They also hope it can be useful for finding biomarkers for different types of cancer in the future.

"Tissue biomarkers are crucial for cancer diagnosis, prognosis assessment and treatment planning. However, there are few known biomarkers that are robust enough to show true analytical and clinical value," write Lingjie Kong, a senior researcher from Tsinghua University, Beijing, and colleagues in the journal Nature Machine Intelligence.

"DL-based computational pathology can be used as a strategy to predict survival, but the limited interpretability and generalizability prevent acceptance in clinical practice... Thus there is still a desperate need for identifying additional robust biomarkers to guide tumor diagnosis and prognosis, and to direct the research of tumor mechanism."

PathFinder is a DL-guided framework that is designed to be easy to interpret for pathologists and other healthcare professionals or researchers who are not computational experts. It uses a combination of whole slide images from patients with cancer and healthy controls with spatial information, as well as DL, to search for new biomarkers.

In this study, using liver cancer as an example, the tool showed that the spatial distribution of necrosis in liver cancer is strongly related to patient prognosis. This biomarker is known, but rarely used in current clinical practice.

From their findings, the research team suggested two measurements, necrosis area fraction and tumor necrosis distribution, as ways pathologists can assess spatial distribution of necrosis in liver cancer patients to improve the accuracy of prognostic predictions. They then verified these measures in the Cancer Genome Atlas Liver Hepatocellular Carcinoma dataset and the Beijing Tsinghua Changgung Hospital dataset.
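
As a rough illustration of the first measurement, a necrosis area fraction can be computed from binary segmentation masks of a slide. The mask names below are hypothetical, and the paper's exact definitions (including tumor necrosis distribution) are not reproduced here.

```python
# Necrosis area fraction from binary masks of a whole slide image.
import numpy as np

def necrosis_area_fraction(necrosis_mask: np.ndarray, tissue_mask: np.ndarray) -> float:
    """Fraction of the tissue area classified as necrotic."""
    tissue_area = tissue_mask.sum()
    if tissue_area == 0:
        return 0.0
    return float((necrosis_mask & tissue_mask).sum() / tissue_area)
```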

"By combining sparse multi-class tissue spatial distribution information of whole slide images with attribution methods, PathFinder can achieve localization, characterization and verification of potential biomarkers, while guaranteeing state-of-the-art prognostic performance," write the authors.

"In this study, we did not target AI as a substitute for pathologists, but as a tool for pathologists to mine dominant biomarkers. Just as AI guides mathematical intuition, pathologists can formulate specific hypotheses based on their clinical experience, and then use PathFinder to deeply mine the connection between hypothesis-relevant information and prognosis."

More:
Deep Learning Tools Can Improve Liver Cancer Prognosis - Inside Precision Medicine

Read More..

Bird migration forecasts get a boost from AI – EarthSky

View at EarthSky Community Photos. | Ragini Chaturvedi in Pennsylvania captured these snow geese in flight on March 14, 2021. She wrote: Went to Middle Creek area of Pennsylvania to watch the migration of the gaggle of geese. Thank you, Ragini! Researchers use machine learning to track bird migration. Through BirdCast, scientists can inform citizens of when to turn their lights out to protect birds.

By Miguel Jimenez, Colorado State University

With chatbots like ChatGPT making a splash, machine learning is playing an increasingly prominent role in our lives. For many of us, it's been a mixed bag. We rejoice when our Spotify For You playlist finds us a new jam, but groan as we scroll through a slew of targeted ads on our Instagram feeds.

Machine learning is also changing many fields that may seem surprising. One example is my discipline, ornithology, the study of birds. It isn't just solving some of the biggest challenges associated with studying bird migration; more broadly, machine learning is expanding the ways in which people engage with birds. As spring migration picks up, here's a look at how machine learning is influencing ways to research birds and, ultimately, to protect them.


Most birds in the Western Hemisphere migrate twice a year, flying over entire continents between their breeding and nonbreeding grounds. While these journeys are awe-inspiring, they expose birds to many hazards en route. These include extreme weather, food shortages and light pollution that can attract birds and cause them to collide with buildings.

Our ability to protect migratory birds is only as good as the science that tells us where they go. And that science has come a long way.

In 1920, the U.S. Geological Survey launched the Bird Banding Laboratory, spearheading an effort to put bands with unique markers on birds, then recapture the birds in new places to figure out where they traveled. Today researchers can deploy a variety of lightweight tracking tags on birds to discover their migration routes. These tools have uncovered the spatial patterns of where and when birds of many species migrate.

However, tracking birds has limitations. For one thing, over 4 billion birds migrate across the continent every year. Even with increasingly affordable equipment, the number of birds that we track is a drop in the bucket. And even within a species, migratory behavior may vary across sexes or populations.

Further, tracking data tells us where birds have been, but it doesn't necessarily tell us where they're going. Migration is dynamic, and the climates and landscapes that birds fly through are constantly changing. That means it's crucial to be able to predict their movements.

This is where machine learning comes in. Machine learning is a subfield of artificial intelligence that gives computers the ability to learn tasks or associations without explicitly being programmed. We use it to train algorithms that tackle various tasks, from forecasting weather to predicting March Madness upsets.

But applying machine learning requires data. And the more data the better. Luckily, scientists have inadvertently compiled decades of data on migrating birds through the Next Generation Weather Radar system. This network, known as NEXRAD, measures weather dynamics and helps predict future weather events. But it also picks up signals from birds as they fly through the atmosphere.

BirdCast is a collaborative project of Colorado State University, the Cornell Lab of Ornithology and the University of Massachusetts. It seeks to leverage data to quantify bird migration. Machine learning is central to its operations. Researchers have known since the 1940s that birds show up on weather radar. But to make that data useful, we need to remove nonavian clutter and identify which scans contain bird movement.

This process would be painstaking by hand. But by training algorithms to identify bird activity, we have automated this process and unlocked decades of migration data. And machine learning allows the BirdCast team to take things further. By training an algorithm to learn what atmospheric conditions are associated with migration, we can use predicted conditions to produce forecasts of migration across the continental U.S.

BirdCast began broadcasting these forecasts in 2018 and has become a popular tool in the birding community. Many users may recognize that radar data helps produce these forecasts, but fewer realize that it's a product of machine learning.

Currently these forecasts can't tell us what species are in the air, but that could be changing. Last year, researchers at the Cornell Lab of Ornithology published an automated system that uses machine learning to detect and identify nocturnal flight calls. These are species-specific calls that birds make while migrating. Integrating this approach with BirdCast could give us a more complete picture of migration.

These advancements exemplify how effective machine learning can be when guided by expertise in the field where it is being applied. As a doctoral student, I joined Colorado State University's Aeroecology Lab with a strong ornithology background but no machine learning experience. Conversely, Ali Khalighifar, a postdoctoral researcher in our lab, has a background in machine learning but has never taken an ornithology class.

Together, we are working to enhance the models that make BirdCast run, often leaning on each others insights to move the project forward. Our collaboration typifies the convergence that allows us to use machine learning effectively.

Machine learning is also helping scientists engage the public in conservation. For example, forecasts produced by the BirdCast team are often used to inform Lights Out campaigns.

These initiatives seek to reduce artificial light from cities. Manmade light attracts migrating birds and increases their chances of colliding with human-built structures, such as buildings and communication towers. Lights Out campaigns can mobilize people to help protect birds at the flip of a switch.

As another example, the Merlin bird identification app seeks to create technology that makes birding easier for everyone. In 2021, the Merlin staff released a feature that automates song and call identification, allowing users to identify what they're hearing in real time, like an ornithological version of Shazam.

This feature has opened the door for millions of people to engage with their natural spaces in a new way. Machine learning is a big part of what made it possible.

Grant Van Horn, a staff researcher at the Cornell Lab of Ornithology who helped develop the algorithm behind this feature, told me:

Sound ID is our biggest success in terms of replicating the magical experience of going birding with a skilled naturalist.

Opportunities for applying machine learning in ornithology will only increase. As billions of birds migrate over North America to their breeding grounds this spring, people will engage with these flights in new ways, thanks to projects like BirdCast and Merlin. But that engagement is reciprocal. The data that birders collect will open new opportunities for applying machine learning.

Computers can't do this work themselves. Van Horn said:

Any successful machine learning project has a huge human component to it. That is the reason these projects are succeeding.

Miguel Jimenez, Ph.D. student in Ecology, Colorado State University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Bottom line: Researchers use machine learning to track bird migration. Through BirdCast, scientists can inform citizens of when to turn their lights out to protect birds.

See original here:
Bird migration forecasts get a boost from AI - EarthSky

Read More..

Merging Artificial Intelligence and Physics Simulations To Design … – SciTechDaily

Merging physics-based simulations with artificial intelligence gains increasing importance in materials science, especially for the design of complex materials that meet technological and environmental demands. Credit: T. You, Max-Planck-Institut für Eisenforschung GmbH

Max Planck scientists explore the possibilities of artificial intelligence in materials science and publish their review in the journal Nature Computational Science.

Advanced materials become increasingly complex due to the high requirements they have to fulfil regarding sustainability and applicability. Dierk Raabe and colleagues reviewed the use of artificial intelligence in materials science and the untapped spaces it opens if combined with physics-based simulations. Compared to traditional simulation methods, AI has several advantages and will play a crucial role in materials science in the future.

Advanced materials are urgently needed for everyday life, be it in high technology, mobility, infrastructure, green energy or medicine. However, traditional ways of discovering and exploring new materials encounter limits due to the complexity of chemical compositions, structures and targeted properties. Moreover, new materials should not only enable novel applications, but also include sustainable ways of producing, using and recycling them.

Researchers from the Max-Planck-Institut für Eisenforschung (MPIE) review the status of physics-based modelling and discuss how combining these approaches with artificial intelligence can open so far untapped spaces for the design of complex materials. They published their perspective in the journal Nature Computational Science.

To meet the demands of technological and environmental challenges, ever more demanding and multifold material properties have to be considered, thus making alloys more complex in terms of composition, synthesis, processing and recycling. Changes in these parameters entail changes in their microstructure, which directly impacts materials properties. This complexity needs to be understood to enable the prediction of structures and properties of materials. Computational materials design approaches play a crucial role here.

"Our means of designing new materials rely today exclusively on physics-based simulations and experiments. This approach can experience certain limits when it comes to the quantitative prediction of high-dimensional phase equilibria and particularly to the resulting non-equilibrium microstructures and properties. Moreover, many microstructure- and property-related models use simplified approximations and rely on a large number of variables. However, the question remains if and how these degrees of freedom are still capable of covering the material's complexity," explains Professor Dierk Raabe, director at MPIE and first author of the publication.

The paper compares physics-based simulations, like molecular dynamics and ab initio simulations with descriptor-based modelling and advanced artificial intelligence approaches. While physics-based simulations are often too costly to predict materials with complex compositions, the use of artificial intelligence (AI) has several advantages.

"AI is capable of automatically extracting thermodynamic and microstructural features from large data sets obtained from electronic, atomistic and continuum simulations with high predictive power," says Professor Jörg Neugebauer, director at MPIE and co-author of the publication.

As the predictive power of artificial intelligence depends on the availability of large data sets, ways of overcoming this obstacle are needed. One possibility is to use active learning cycles, where machine learning models are trained with initially small subsets of labelled data. The model's predictions are then screened by a labelling unit that feeds high-quality data back into the pool of labelled records, and the machine learning model is run again. This step-by-step approach leads to a final high-quality data set usable for accurate predictions.
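
As a schematic illustration of such a cycle (not the authors' implementation), the sketch below trains on a small labelled pool, queries the most uncertain unlabelled points, and folds oracle-labelled examples back in; the model choice, uncertainty proxy, and all names are assumptions.

```python
# Schematic active-learning loop: train, query uncertain points, re-label, repeat.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def active_learning_cycle(X_labelled, y_labelled, X_pool, oracle, rounds=5, batch=50):
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    for _ in range(rounds):
        model.fit(X_labelled, y_labelled)
        # Spread of per-tree predictions as a simple uncertainty proxy.
        per_tree = np.stack([t.predict(X_pool) for t in model.estimators_])
        uncertainty = per_tree.std(axis=0)
        pick = np.argsort(uncertainty)[-batch:]
        # The "labelling unit" (oracle) supplies high-quality labels for those points.
        X_labelled = np.vstack([X_labelled, X_pool[pick]])
        y_labelled = np.concatenate([y_labelled, oracle(X_pool[pick])])
        X_pool = np.delete(X_pool, pick, axis=0)
    return model
```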

There are still many open questions for the use of artificial intelligence in materials science: how to handle sparse and noisy data? How to consider interesting outliers or misfits? How to implement unwanted elemental intrusion from synthesis or recycling? However, when it comes to designing compositionally complex alloys, artificial intelligence will play a more important role in the near future, especially with the development of algorithms, and the availability of high-quality material datasets and high-performance computing resources.

Reference: "Accelerating the design of compositionally complex materials via physics-informed artificial intelligence" by Dierk Raabe, Jaber Rezaei Mianroodi and Jörg Neugebauer, 31 March 2023, Nature Computational Science. DOI: 10.1038/s43588-023-00412-7

The research is supported by the BigMax network of the Max Planck Society.

Excerpt from:
Merging Artificial Intelligence and Physics Simulations To Design ... - SciTechDaily

Read More..

Avoid Cryptocurrency Downturn, Add These 5 Tech Stocks Instead – Yahoo Finance

The cryptocurrency market has been on a roller coaster ride for the past year, thanks to high volatility in the most popular digital currency, bitcoin (BTC), which holds a dominant position in terms of market capitalization.

According to CoinGecko data, the global cryptocurrency market capitalization stands at $1.22 trillion as of today, reflecting a year-over-year fall of 46.7%. Bitcoin's market cap stands at $544 billion, accounting for 44.7% of the total crypto market cap.

The sluggish performance of ProShares Bitcoin Strategy ETF (BITO) also reflects weakness in the crypto space. In the past year, BITO has lost 39.1%.

The massive decline in the major indexes of the U.S. equity market, which has been suffering from the Federal Reserve's aggressive stance to cut down the inflation rate through continued interest rate hikes, remains the prime reason behind the topsy-turvy crypto market.

The Fed raised the benchmark interest rate again by 25 basis points to 4.75-5% in the March FOMC meeting. Chairman Jerome Powell said that the higher interest regime would continue for a longer period in his testimony before the U.S. Congress. Several Fed officials estimated that the terminal interest rate should be 5.375% or more at the end of 2023.

Apart from the interest rate hikes, the crash of the crypto exchange FTX is a significant reason behind the downturn in the crypto market.

Although some signs of improvement are visible from the latest rebound in BTC, rising interest rates and tighter monetary policy continue to ignite uncertainties in the crypto space.

Against this backdrop, we advise investors looking for good investment opportunities to park their money in high-quality tech stocks.

The growing proliferation of advanced technologies like AI, Machine Learning (ML), Internet of Things (IoT), blockchain and augmented reality/virtual reality (AR/VR) is constantly boosting the prospects of the technology sector.

The rapid adoption of cloud computing platforms is another positive. Rising 5G deployment, along with solid uptake of autonomous vehicles, cyber security solutions, wearables, voice assistants and other connected devices, are encouraging for the stocks offering products powered by the above-mentioned technologies.


Given the upbeat scenario, we have selected five technology stocks (market cap greater than $20 billion) that are well-poised to grow through the rest of 2023, as these are highly reputed, fundamentally strong and financially resilient.

Apart from having solid fundamentals, the stocks have a favorable combination of a Growth Score of A or B and a Zacks Rank #1 (Strong Buy) or #2 (Buy). You can see the complete list of today's Zacks #1 Rank stocks here.

Per Zacks proprietary methodology, stocks with such a favorable combination offer solid investment opportunities.

Broadcom AVGO is benefiting from strength in networking and server storage. Substantial content increases at cloud and enterprise customers, which are aiding its storage connectivity business, are other positives. Solid growth from the deployment of Tomahawk 4 for data center switching at hyper-scale customers is noteworthy.

Upgrades of edge and core routing networks with Broadcom's next-generation Jericho portfolio at cloud and service providers are contributing well. The company's growing prospects from its strategic deals are other tailwinds. Its VMware acquisition will likely boost prospects in the long run. Its tie-up with Tencent to accelerate the adoption of high bandwidth co-packaged optics network switches for cloud infrastructure is another positive.

AVGO currently has a Zacks Rank #2 and a Growth Score of B. It has a market capitalization of $267.47 billion. The Zacks Consensus Estimate for Broadcom's fiscal 2023 earnings has improved 2.6% to $41.38 per share in the past 60 days.

Broadcom Inc. Price and Consensus

Broadcom Inc. price-consensus-chart | Broadcom Inc. Quote

Adobe ADBE is benefiting from strong demand for its creative products. Solid momentum across the company's Creative Cloud, Document Cloud and Adobe Experience Cloud is a major positive. Growth in emerging markets, robust online video creation demand, strong Acrobat adoption and improving average revenue per user are other tailwinds.

The growing adoption of Premiere Pro, solid momentum across the Adobe Express platform, and benefits from the Frame.io acquisition are contributing well. Rising demand for professional service and the solid uptake of Adobe Experience Manager are other positives.

ADBE currently has a Zacks Rank #2 and a Growth Score of B. It has a market capitalization of $176.72 billion. The Zacks Consensus Estimate for Adobe's fiscal 2023 earnings has improved 1.2% to $15.40 per share in the past 60 days.

Adobe Inc. Price and Consensus

Adobe Inc. price-consensus-chart | Adobe Inc. Quote

Analog Devices ADI is riding on a growing presence across communication, consumer, industrial and automotive end markets. Solid demand for high-performance analog and mixed-signal solutions remains a tailwind.

Strong momentum across the electric vehicle space on the back of robust Battery Management System solutions is a positive. Growing power design wins are other positives.

ADI currently has a Zacks Rank #2 and a Growth Score of B. It has a market capitalization of $99.76 billion. The Zacks Consensus Estimate for Analog Devices' fiscal 2023 earnings has improved 9.2% to $10.60 per share in the past 60 days.

Analog Devices, Inc. Price and Consensus

Analog Devices, Inc. price-consensus-chart | Analog Devices, Inc. Quote

Fortinet FTNT has been benefiting from the rising demand for security and networking products amid the growing hybrid working trend. It has also been gaining from robust growth in Fortinet Security Fabric, cloud and Software-defined Wide Area Network offerings.

Increasing IT spending on cybersecurity is expected to help Fortinet grow faster than the security market. Its focus on enhancing the unified threat management portfolio through product development and acquisitions is another tailwind. Strong deal wins remain the company's key growth drivers.

FTNT currently has a Zacks Rank #2 and a Growth Score of A. It has a market capitalization of $52.11 billion. The Zacks Consensus Estimate for Fortinet's 2023 earnings has improved 4.4% to $1.41 per share in the past 60 days.

Fortinet, Inc. Price and Consensus

Fortinet, Inc. price-consensus-chart | Fortinet, Inc. Quote

Mettler-Toledo MTD is benefiting from the solid momentum across the Laboratory, Food Retail and Industrial segments. The company's expanding presence across the pharmaceutical and life sciences markets is a major positive. MTD's robust automated chemistry solutions, which are aiding its momentum across the drug process development field, are contributing well to its top-line growth.

In addition, the company's portfolio strength, cost-cutting efforts, robust sales and marketing strategies, and margin and productivity initiatives are acting as tailwinds.

MTD currently has a Zacks Rank #2 and a Growth Score of A. It has a market capitalization of $33.77 billion. The Zacks Consensus Estimate for Mettler-Toledo's 2023 earnings has improved 3.6% to $44.00 per share in the past 60 days.

Mettler-Toledo International, Inc. Price and Consensus

Mettler-Toledo International, Inc. price-consensus-chart | Mettler-Toledo International, Inc. Quote


Read the rest here:
Avoid Cryptocurrency Downturn, Add These 5 Tech Stocks Instead - Yahoo Finance

Read More..

Bitcoins best quarter in 2 years beats all major indexes in Q1 – Fortune

After a rocky end to 2022, Bitcoin turned its luck around in the first three months of the year, skyrocketing 73% to outperform all major U.S. indexes.

The most popular cryptocurrency started the year off around $16,000 but finished the quarter around $28,500, its best quarterly performance in two years, according to CoinDesk. The cryptocurrency was trading at $28,000, down 1% on Monday morning, according to CoinMarketCap data.

Bitcoin outperformed all major indexes in a quarter that saw stocks bounce back. The S&P 500 was up 7%, and the Dow Jones rose 0.4%, while the Nasdaq led the pack with a 17% increase, its best quarter since the fourth quarter of 2020, according to CNN Business. Gold was also up 7% in the first quarter.


The second-leading cryptocurrency, Ether, jumped 50% over the same period, and it could see additional gains after the Shanghai update to its code later this month.

Part of the reason for Bitcoin's recent success is a restoration of its store-of-value properties, according to a Friday research report by crypto exchange Coinbase. Investors tend to put money into stores of value like gold when markets fall.

Its correlation to the S&P 500 has dropped from a peak of 70% in May 2022 to 25% on March 30, just before the quarter ended, according to Coinbase. Because the digital currency serves as an alternative to the traditional financial sector, it has benefited from investor uneasiness following the failure of Silicon Valley Bank and Signature Bank last month.

After holding just 43.9% of all crypto market share at the end of February, at the end of March, Bitcoin made up 47.7% of the digital asset market, according to Coinbase.

Last week, the cryptocurrency saw $8.8 million in inflows, while Ethereum saw outflows of around $2 million, according to a research note by digital asset investment and trading group CoinShares.

Another factor boosting both equities and Bitcoin is hopes from investors that the Federal Reserve will reverse course and cut interest rates. The Fed raised rates a quarter of a percentage point following its meeting in March, and some investors think a rate cut could be imminent due to recession signals.

Still, after the Fed's March meeting, Chairman Jerome Powell said a rate cut was unlikely for the rest of 2023.

More here:
Bitcoins best quarter in 2 years beats all major indexes in Q1 - Fortune

Read More..

Bitcoin Drops to $27.5K While Dogecoin Spikes After Twitter Logo Change – Yahoo Finance


Bitcoin (BTC) remained firmly within the range it has held for much of the past two weeks, trading as low as about $27,200 and as high as $28,400 Monday.

The largest cryptocurrency by market capitalization was recently trading at $27,500, down over 2% from 24 hours ago. BTC is up nearly 70% for the year after a buoyant first quarter in which investors grew more optimistic about inflation and other macroeconomic issues.

Yet, BTC's price has been unable to ride above $29,000 for more than a few fleeting minutes in recent weeks as investors mull banking failures and fresh economic indicators that have been inconclusive.

"Bitcoin needs a bullish catalyst to break above the $30,000 level, but until some significant use case argument is made prices could consolidate around the mid-$20,000s," Edward Moya, senior market analyst at foreign exchange market maker Oanda, wrote in an email.

Ether (ETH), the second-largest cryptocurrency, also slid 0.2% Monday to hover around $1,787. ETH's price jumped 48% in the first quarter. Among other altcoins, the meme-based dogecoin (DOGE), long supported by Twitter CEO Elon Musk, surged 16.5% after the social media platform changed its logo to the dogecoin symbol from the usual blue bird. Payments provider Alchemy Pay's native ACH token rose 7% after a Monday report that the company has received $10 million in investment from market maker DWF Labs at a $400 million valuation.

The CoinDesk Market Index, which measures overall crypto market performance, was up 0.1% for the day.

Meanwhile, market liquidity has continued to worsen. Crypto data firm Kaiko's Monday report noted that BTC's and ETH's 2% market depth, a metric for assessing liquidity conditions, has dropped by 50% and 41%, respectively, since the November collapse of Alameda Research, the trading arm of crypto exchange FTX (a so-called "Alameda gap"). The ongoing decline has followed exchange Binance's announcement that it was curbing its zero-fee trading program, Kaiko said.



"Both assets (bitcoin and ether) have suffered in the aftermath of the FTX collapse and banking crisis, with fewer market makers supplying liquidity to order books," the report said.

Equity markets were mixed Monday. The S&P 500 closed up 0.3%, while the Dow Jones Industrial Average (DJIA) rose by 0.9%. However, the tech-heavy Nasdaq was down 0.2%.

Traditional market movements came after OPEC+ unexpectedly announced an oil production cut of over one million barrels a day, sending oil prices higher. Meanwhile, the manufacturing purchasing managers' index (PMI) on Monday showed that U.S. manufacturing activity in March dropped to its lowest level in nearly three years.

The rest is here:
Bitcoin Drops to $27.5K While Dogecoin Spikes After Twitter Logo Change - Yahoo Finance

Read More..