
Tectum Lists $TET on BingX – Cryptocurrency News – Altcoin Buzz

Tectum is announcing the listing of its $TET token on the BingX exchange. The listing took place on March 20. This means users can now access the token on the exchange. The listing is a huge achievement for Tectum. It also reflects its goal of reaching more users.

Tectum has made remarkable moves in recent months. And all of these increase the value of its $TET token. $TET recently surpassed the $5 million trading mark following its listing on Gate.io. So, the listing on BingX will bring more exposure to the token.

The listing on BingX saw massive participation from the Tectum community. Users who participated in the event enjoyed a series of rewards, including a 10% cashback on their initial net deposit. Tectum also offered a 10% reward to those who deposited three TETs or more.

The Tectum team told us that listing on BingX was a strategic move. BingX is a crypto exchange that's easy to use and suitable for beginners, and it provides a wide range of services.

BingX's simplicity and services make it a good exchange for users. Tectum noted that listing on BingX would aid its newer users, most of whom are novices in cryptocurrency trading. BingX provides a less complicated environment for these users.

Some of our analysts place $TET as one of the tokens to watch out for in this bull run. $TET plays a huge role within the Tectum ecosystem. First, it serves as a support system for the entire ecosystem. It also gives holders priority access to their own nodes on the chain. $TET is also a secure payment system globally.

The $TET token also has some good investment options. Locking $TET on dedicated platforms provides huge returns once the lock phase is over. The token also supports yield farming. This way, holders can make extra profits from their tokens.

Thankfully, there are a couple of exchanges where users can access the $TET token. Here's a list of these platforms:

And besides more exchanges listing $TET, Tectum is also breaking speed records for the number of transactions per second.

Earlier this year, $TET entered the top 200 tokens by market cap, and we can expect more remarkable feats from it. The project has also been making several strategic partnerships, including one with GetBlock and another with Ator. All of these partnerships expand the network's influence and value.

Tectum is all about ensuring faster and more efficient transactions. Tectum's native token isn't a speculative asset; it has real utility. So, it makes sense to keep tabs on Tectum's $TET ahead of a major market run.

Follow this link:
Tectum Lists $TET on BingX - Cryptocurrency News - Altcoin Buzz

Read More..

Cryptos and stocks end the week in the red, analysts optimistic about post-halving Bitcoin rally – KITCO

(Kitco News) The cryptocurrency market ended the week on a negative note as Bitcoin (BTC) slid back below $64,000 while the broader altcoin market recorded losses amid continued profit-taking by traders looking to reposition themselves ahead of the next major uptrend.

Stocks also fell under pressure after Thursday's rally saw all three major indexes hit new record highs at the prospect of lower interest rates. At the market close, the S&P and Dow finished in the red, down 0.14% and 0.77%, respectively, while the Nasdaq managed to battle back from negative territory to finish the day up 0.16%.

While stock investors are cheering the new record highs, crypto investors used the opportunity to take a subtle dig at the accomplishment.

Although it's true Bitcoin has fallen more than 13% from its recent high while the S&P is only down roughly 0.5%, it's important to note that since 2014, Bitcoin's price has increased by more than 29,000%, while the S&P is up 195%. Gold's price increased 91.5% during that period.

BTC/USD Chart by TradingView

At the time of writing, BTC trades at $63,570, a decline of 2.3% on the 24-hour chart.

"After Bitcoin continued to bleed throughout yesterday, we saw a nice reaction from the previous week's low at $64.6k," said market analyst CryptoChiefs.

"This could be setting up for a nice inverse head and shoulders pattern; however, there is clearly resistance trying to reclaim the Monday low at $65.6k," they said. "If we see acceptance above this then we can look for a move towards the Weekly Open area ($68.4k), but if we keep rejecting, then I'll be looking for a move lower."

"Bitcoin's price might be dropping, but BlackRock's inflow in the spot Bitcoin ETF is constantly positive," Poppe added. "This means the institutions keep on buying. Big sign in there, [suggesting] that we're far from done with this cycle."

"First, there is a Breakout from the Pre-Halving Re-Accumulation Range (green-red range)," he said. "Second comes the pre-halving rally (light blue), followed by a pre-halving retrace (dark blue circle), the post-halving re-accumulation range (red box), and then parabolic upside."

"This current Pre-Halving Retrace is setting up a future Post-Halving Re-Accumulation Range so as to set up the future Parabolic Upside phase of the cycle," Rekt Capital said.

Altcoins correct amid pre-halving lull

Daily cryptocurrency market performance. Source: Coin360

A 21.9% gain from DeXe (DEXE) led the field, followed by a 16.2% increase for DAO Maker (DAO), and a gain of 11.5% for Aptos (APT). Echelon Prime (PRIME) dropped 9.3% to lead the losers, while Raydium (RAY) lost 8%, and Flux (FLUX) declined by 7.7%.

The overall cryptocurrency market cap now stands at $2.43 trillion, and Bitcoin's dominance rate is 51.7%.

Disclaimer: The views expressed in this article are those of the author and may not reflect those of Kitco Metals Inc. The author has made every effort to ensure accuracy of information provided; however, neither Kitco Metals Inc. nor the author can guarantee such accuracy. This article is strictly for informational purposes only. It is not a solicitation to make any exchange in commodities, securities or other financial instruments. Kitco Metals Inc. and the author of this article do not accept culpability for losses and/or damages arising from the use of this publication.

Read more from the original source:
Cryptos and stocks end the week in the red, analysts optimistic about post-halving Bitcoin rally - KITCO

Read More..

Solana price wavers, but increased DApp activity points to SOL recovery – Cointelegraph

Solana's native token, SOL (SOL), experienced a 45% surge over a week, hitting a high of $210 on March 18. Although SOL's price hasn't reached its November 2021 all-time high of $260, it has gained 58% over the last 30 days. This performance surpasses that of Ether (ETH) and Avalanche's AVAX (AVAX), which have increased by 12% and 30%, respectively, during the same period.

Solana remains firmly in place as the fifth-largest cryptocurrency by market capitalization and the third in terms of total value locked (TVL), making a long-term bearish outlook on SOL's price difficult to support. Nevertheless, this doesn't assure that SOL's price will stay above $165 in the short term, so investors should examine on-chain metrics to see if the bullish trend is likely to persist.

The view that SOL's 18% drop since March 18 has reversed the bullish trend is challenged by the fact that SOL's price dipped below $165 for less than an hour on March 20, showing significant support. With Bitcoin (BTC) unable to maintain above $70,000, leading to speculation of an altcoin season, both bullish and bearish arguments have their merits.

Critics highlight that the increased demand for Solana led to relatively high fees and more failed transactions. On March 16, data from Cointelegraph indicated that validators experienced delays of up to 40 seconds, causing nearly half of the transactions to fail within a 20-minute span. This rise in activity was spurred by a memecoin frenzy, notably marked by the launch of Book of Meme (BOME), which attracted a remarkable $270 million in trading volume within its first 24 hours.

After Ethereum's Dencun hard fork on March 13, which reduced fees for its layer-2 scalability solutions, competition among memecoin launches intensified. This upgrade led to a surge in activity on Ethereum's Base, with a 77% increase in decentralized application (DApp) volume in a week, as reported by DappRadar. Consequently, the Ethereum ecosystem has become more competitive for memecoin launches, potentially diminishing the focus and spending power of Solana users.

Although it's challenging to pinpoint the direct cause and effect, Solana SPL memecoins seemed to have hit their peak the day following the Ethereum network's upgrade, on March 14. Dogwifhat (WIF) and Bonk experienced drops of 38% and 40%, respectively. Despite these setbacks, the Solana network has greatly benefited from the heightened activity, with an increase in both volume and active addresses engaging with its DApps.

Notice that the Solana network's volume has surged by 55% since March 13, significantly outpacing competitors like BNB Chain and Polygon, which have only seen gains of 2% and 7%, respectively, during the same timeframe. However, the increased activity and volume from memecoins and new token launches do not necessarily guarantee sustained price increases, regardless of the projects' merits.

Related: How low can the Bitcoin price go?

This was evident with the liquid staking project Jito (JTO), which saw a 20% drop over two days after reaching a high of $3.85 on March 18. In a similar vein, the decentralized exchange Jupiter's JUP token has fallen 25.5% from its all-time high of $1.60, reached on the same day. Despite the level of adoption these projects may have, a downturn in SOL's price impacts the entire Solana ecosystem.

Analysts point out that the significant issuance of tokens to cover Solana's substantial validator costs, effectively inflating the supply of SOL, is a major concern. Additionally, the large volume of tokens held by the bankrupt FTX exchange's estate poses a sell-off risk in the near future. Despite these factors, Solana's DApp activity growth suggests no apparent weaknesses, indicating that the $165 support level should hold in the near term.

This article does not contain investment advice or recommendations. Every investment and trading move involves risk, and readers should conduct their own research when making a decision.

See original here:
Solana price wavers, but increased DApp activity points to SOL recovery - Cointelegraph

Read More..

AI reveals the complexity of a simple birdsong – The Washington Post

To a human ear the songs of all male zebra finches sound more or less the same. But faced with a chorus of this simple song, female finches can pick the performer who sings most beautifully.

Zebra finches are found in Australia, and they usually mate monogamously for life, making this a high-stakes decision for the female finches. The zebra finch is among the roughly one-third of songbirds that learn a single song from their fathers early in life and sing it over and over, raising the question of how female songbirds distinguish between males to choose a mate.

Listen to the song of a male zebra finch:

Scientists believe that most male songbirds evolved to sing a variety of songs to demonstrate their fitness. Under that theory, the fittest songbirds will have more time and energy to work on their vocal stylings and attract females with their varied vocal repertoire.

New research using machine learning shows finches may be sticking to one tune, but how they sing it makes a big difference. Published Wednesday in the journal Nature, the study reveals the complexity of a single zebra finch song and what female songbirds might be hearing in their prospective mates' seemingly simple songs.

When researchers analyze birdsongs, they're often not listening to them but rather looking at spectrograms, which are visualizations of audio files.
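For readers unfamiliar with spectrograms, they can be computed with standard signal-processing tools. A minimal Python sketch using SciPy, with a synthetic frequency-modulated tone standing in for a recorded song:

```python
import numpy as np
from scipy.signal import spectrogram

# Synthesize 2 s of a frequency-modulated tone as a stand-in for a finch song.
fs = 22050                      # sample rate in Hz
t = np.linspace(0, 2, 2 * fs, endpoint=False)
audio = np.sin(2 * np.pi * (2000 + 500 * np.sin(2 * np.pi * 3 * t)) * t)

# The spectrogram is a grid of signal power: frequency bins x time frames.
freqs, times, power = spectrogram(audio, fs=fs, nperseg=512, noverlap=256)

print(power.shape)  # (frequency bins, time frames)
```

Each column of the resulting image shows which frequencies are loud at that instant, which is why researchers can "read" a song visually.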

So I put together that, Hey, what humans are doing is looking at images of these audio files. Can we use machine learning and deep learning to do this? said Danyal Alam, the lead author on the new study and a postdoctoral researcher at the University of California at San Francisco.

Alam, along with Todd Roberts, an associate professor at UT Southwestern Medical Center and another colleague, used machine learning to analyze hundreds of thousands of zebra finch songs to figure out how they were different from each other and which variations were more attractive to female songbirds.

The researchers found one metric that seemed to get females' attention: the spread of syllables in the song. The females seemed to prefer longer paths between syllables. This isn't something humans can easily pick up by listening to the songs or looking at the spectrograms, but based on how these algorithms mapped the syllables, the researchers were able to see them in a new way.
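The path-length idea can be illustrated with a toy computation, using hypothetical 2-D coordinates to stand in for wherever the algorithm maps each syllable:

```python
import numpy as np

def song_path_length(syllables: np.ndarray) -> float:
    """Total distance travelled between consecutive syllables
    in a mapped space (here, toy 2-D coordinates)."""
    steps = np.diff(syllables, axis=0)           # vector from each syllable to the next
    return float(np.linalg.norm(steps, axis=1).sum())

# Two hypothetical four-syllable songs mapped to 2-D points.
compact_song = np.array([[0.0, 0.0], [0.1, 0.0], [0.2, 0.1], [0.3, 0.1]])
spread_song  = np.array([[0.0, 0.0], [1.0, 0.5], [0.2, 1.4], [1.5, 0.2]])

print(song_path_length(compact_song) < song_path_length(spread_song))  # True
```

Under this metric the "spread" song traverses a much longer path between syllables, which is the kind of difference the study associates with female preference.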

To check their hypothesis, the researchers brought the findings back to the birds.

They generated synthetic bird songs to see if females preferred those with a longer path, and they did, suggesting the birds' intended audience picked up on the same pattern as the researchers' computers.

Listen to see if you can tell the difference between a synthetic finch song that doesn't spread out its syllables and one that does:

Alam and his colleagues also found that baby birds had a harder time learning the long-distance song patterns than the shorter ones, which suggests fitter birds would be more able to learn them, the researchers said.

The study's finding is consistent with what's been shown in other species: the more complexity or difficulty in a song, the more appealing it is to female birds.

"A lot of signals in animal communication are meant to be an honest signal of some underlying quality," said Kate Snyder, a researcher at Vanderbilt who wasn't involved in the new paper.

For example, she said, if you look at a peacock, you see the male birds with the longer and more beautiful tails are better at attracting mates. Maintaining a tail like that is expensive for the bird, so it must be good at finding food and surviving in its environment to have the time to devote to keeping its tail nice.

"Learning takes a lot of time, energy, brain space," Snyder said. Only the fittest male birds will have the time and energy to devote to learning to sing.

Among finches, that work has just been harder to spot until now.

"We used to think of this single song repertoire as perhaps a simple behavior," said Roberts. "But what we see is that it's perhaps much more complicated than we previously appreciated."

Read more:
AI reveals the complexity of a simple birdsong - The Washington Post

Read More..

Using AI to expand global access to reliable flood forecasts – Google Research

Posted by Yossi Matias, VP Engineering & Research, and Grey Nearing, Research Scientist, Google Research

Floods are the most common natural disaster, and are responsible for roughly $50 billion in annual financial damages worldwide. The rate of flood-related disasters has more than doubled since the year 2000, partly due to climate change. Nearly 1.5 billion people, making up 19% of the world's population, are exposed to substantial risks from severe flood events. Upgrading early warning systems to make accurate and timely information accessible to these populations can save thousands of lives per year.

Driven by the potential impact of reliable flood forecasting on people's lives globally, we started our flood forecasting effort in 2017. Through this multi-year journey, we advanced research hand-in-hand with building a real-time operational flood forecasting system that provides alerts on Google Search, Maps, Android notifications and through the Flood Hub. However, in order to scale globally, especially in places where accurate local data is not available, more research advances were required.

In "Global prediction of extreme floods in ungauged watersheds", published in Nature, we demonstrate how machine learning (ML) technologies can significantly improve global-scale flood forecasting relative to the current state of the art for countries where flood-related data is scarce. With these AI-based technologies we extended the reliability of currently available global nowcasts, on average, from zero to five days, and improved forecasts across regions in Africa and Asia to be similar to what is currently available in Europe. The evaluation of the models was conducted in collaboration with the European Centre for Medium-Range Weather Forecasts (ECMWF).

These technologies also enable Flood Hub to provide real-time river forecasts up to seven days in advance, covering river reaches across over 80 countries. This information can be used by people, communities, governments and international organizations to take anticipatory action to help protect vulnerable populations.

The ML models that power the Flood Hub tool are the product of many years of research, conducted in collaboration with several partners, including academics, governments, international organizations, and NGOs.

In 2018, we launched a pilot early warning system in the Ganges-Brahmaputra river basin in India, with the hypothesis that ML could help address the challenging problem of reliable flood forecasting at scale. The pilot was further expanded the following year via the combination of an inundation model, real-time water level measurements, the creation of an elevation map and hydrologic modeling.

In collaboration with academics, and, in particular, with the JKU Institute for Machine Learning we explored ML-based hydrologic models, showing that LSTM-based models could produce more accurate simulations than traditional conceptual and physics-based hydrology models. This research led to flood forecasting improvements that enabled the expansion of our forecasting coverage to include all of India and Bangladesh. We also worked with researchers at Yale University to test technological interventions that increase the reach and impact of flood warnings.

Our hydrological models predict river floods by processing publicly available weather data like precipitation and physical watershed information. Such models must be calibrated to long data records from streamflow gauging stations in individual rivers. A low percentage of global river watersheds (basins) have streamflow gauges, which are expensive but necessary to supply relevant data, and it's challenging for hydrological simulation and forecasting to provide predictions in basins that lack this infrastructure. Lower gross domestic product (GDP) is correlated with increased vulnerability to flood risks, and there is an inverse correlation between national GDP and the amount of publicly available data in a country. ML helps to address this problem by allowing a single model to be trained on all available river data and to be applied to ungauged basins where no data are available. In this way, models can be trained globally, and can make predictions for any river location.

Our academic collaborations led to ML research that developed methods to estimate uncertainty in river forecasts and showed how ML river forecast models synthesize information from multiple data sources. They demonstrated that these models can simulate extreme events reliably, even when those events are not part of the training data. In an effort to contribute to open science, in 2023 we open-sourced a community-driven dataset for large-sample hydrology in Nature Scientific Data.

Most hydrology models used by national and international agencies for flood forecasting and river modeling are state-space models, which depend only on daily inputs (e.g., precipitation, temperature, etc.) and the current state of the system (e.g., soil moisture, snowpack, etc.). LSTMs are a variant of state-space models and work by defining a neural network that represents a single time step, where input data (such as current weather conditions) are processed to produce updated state information and output values (streamflow) for that time step. LSTMs are applied sequentially to make time-series predictions, and in this sense, behave similarly to how scientists typically conceptualize hydrologic systems. Empirically, we have found that LSTMs perform well on the task of river forecasting.
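The state-space view can be made concrete with a single LSTM time step. This is a toy NumPy sketch with random weights, not the trained model from the paper; it only shows how the day's inputs plus the current state produce an updated state and a streamflow output:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid = 4, 8                       # e.g., precipitation, temperature, ... -> hidden size

# Randomly initialised weights stand in for a trained model.
W = rng.normal(0, 0.1, (4 * n_hid, n_in + n_hid))
b = np.zeros(4 * n_hid)
w_out = rng.normal(0, 0.1, n_hid)        # head mapping hidden state to streamflow

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c):
    """One day of simulation: inputs x plus state (h, c) -> new state and output."""
    z = W @ np.concatenate([x, h]) + b
    i, f, g, o = np.split(z, 4)
    c_new = sigmoid(f) * c + sigmoid(i) * np.tanh(g)   # cell state: the model's "storage"
    h_new = sigmoid(o) * np.tanh(c_new)
    return h_new, c_new, float(w_out @ h_new)          # streamflow estimate for this step

h = c = np.zeros(n_hid)
for day in range(365):                   # roll the model forward through a year of forcing data
    x = rng.normal(size=n_in)            # stand-in for that day's weather inputs
    h, c, flow = lstm_step(x, h, c)
```

The cell state `c` plays the role of quantities like soil moisture or snowpack in a conventional state-space hydrology model: it is carried forward from day to day and updated by each day's inputs.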

Our river forecast model uses two LSTMs applied sequentially: (1) a hindcast LSTM ingests historical weather data (dynamic hindcast features) up to the present time (or rather, the issue time of a forecast), and (2) a forecast LSTM ingests states from the hindcast LSTM along with forecasted weather data (dynamic forecast features) to make future predictions. One year of historical weather data are input into the hindcast LSTM, and seven days of forecasted weather data are input into the forecast LSTM. Static features include geographical and geophysical characteristics of watersheds that are input into both the hindcast and forecast LSTMs and allow the model to learn different hydrological behaviors and responses in various types of watersheds.
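The hindcast-to-forecast handoff described above can be sketched schematically. The `run_lstm` helper below is a hypothetical stand-in for a trained LSTM; the point is only the input shapes and the passing of final states from the hindcast stage into the forecast stage:

```python
import numpy as np

rng = np.random.default_rng(1)

def run_lstm(inputs, static, h, c):
    """Hypothetical stand-in for a trained LSTM: consumes a sequence of daily
    feature vectors (alongside static watershed features) and returns one
    output per step plus the final (h, c) states."""
    outputs = []
    for x in inputs:
        h = np.tanh(0.5 * h + 0.1 * np.concatenate([x, static]).sum())
        c = 0.9 * c + 0.1 * h
        outputs.append(h.mean())
    return np.array(outputs), h, c

static = rng.normal(size=3)                   # geographical/geophysical watershed features
hindcast_weather = rng.normal(size=(365, 5))  # one year of historical weather
forecast_weather = rng.normal(size=(7, 5))    # seven days of forecasted weather

h = c = np.zeros(8)
# (1) The hindcast LSTM digests the past year up to the forecast issue time...
_, h, c = run_lstm(hindcast_weather, static, h, c)
# (2) ...and hands its states to the forecast LSTM, which adds forecasted weather.
streamflow_preds, _, _ = run_lstm(forecast_weather, static, h, c)

print(streamflow_preds.shape)  # (7,) -- one prediction per lead-time day
```

The handoff means the forecast stage starts from a state that already encodes the basin's recent history, rather than from scratch.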

Output from the forecast LSTM is fed into a head layer that uses mixture density networks to produce a probabilistic forecast (i.e., predicted parameters of a probability distribution over streamflow). Specifically, the model predicts the parameters of a mixture of heavy-tailed probability density functions, called asymmetric Laplacian distributions, at each forecast time step. The result is a mixture density function, called a Countable Mixture of Asymmetric Laplacians (CMAL) distribution, which represents a probabilistic prediction of the volumetric flow rate in a particular river at a particular time.
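As a numerical sketch of what such a head emits (illustrative parameter values, not learned outputs), the asymmetric Laplacian density and a two-component mixture can be written as:

```python
import numpy as np

def asymmetric_laplacian_pdf(x, loc, scale, kappa):
    """Density of an asymmetric Laplacian: exponential decay at rate
    scale*kappa above loc and rate scale/kappa below it."""
    norm = scale / (kappa + 1.0 / kappa)
    return np.where(
        x >= loc,
        norm * np.exp(-scale * kappa * (x - loc)),
        norm * np.exp(scale * (x - loc) / kappa),
    )

def cmal_pdf(x, weights, locs, scales, kappas):
    """Mixture of asymmetric Laplacians: a probabilistic prediction of
    streamflow at one forecast time step."""
    comps = [w * asymmetric_laplacian_pdf(x, m, s, k)
             for w, m, s, k in zip(weights, locs, scales, kappas)]
    return np.sum(comps, axis=0)

# Illustrative two-component forecast: a likely moderate flow plus a
# smaller, right-skewed component for a possible flood peak.
x = np.linspace(-50, 500, 20001)
pdf = cmal_pdf(x, weights=[0.8, 0.2], locs=[20.0, 120.0],
               scales=[0.2, 0.05], kappas=[1.0, 0.7])

print(np.trapz(pdf, x))  # integrates to ~1.0
```

The asymmetry parameter lets each component decay at different rates above and below its center, which is what gives the mixture its heavy right tail for rare, high-flow events.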

The model uses three types of publicly available data inputs, mostly from governmental sources:

Training data are daily streamflow values from the Global Runoff Data Center over the time period 1980 - 2023. A single streamflow forecast model is trained using data from 5,680 diverse watershed streamflow gauges (shown below) to improve accuracy.

We compared our river forecast model with GloFAS version 4, the current state-of-the-art global flood forecasting system. These experiments showed that ML can provide accurate warnings earlier and over larger and more impactful events.

The figure below shows the distribution of F1 scores when predicting different severity events at river locations around the world, with plus or minus 1 day accuracy. F1 scores are the harmonic mean of precision and recall, and event severity is measured by return period. For example, a 2-year return period event is a volume of streamflow that is expected to be exceeded on average once every two years. Our model achieves reliability scores at up to 4-day or 5-day lead times that are similar to or better, on average, than the reliability of GloFAS nowcasts (0-day lead time).
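As a quick illustration of the metric with made-up counts (not the paper's results):

```python
# Toy evaluation of flood-event warnings at one lead time (hypothetical counts).
true_positives = 40    # events correctly warned within +/- 1 day
false_positives = 10   # warnings issued for events that did not occur
false_negatives = 20   # events that received no warning

precision = true_positives / (true_positives + false_positives)   # 0.8
recall = true_positives / (true_positives + false_negatives)      # ~0.667
f1 = 2 * precision * recall / (precision + recall)                # harmonic mean

print(round(f1, 3))  # 0.727
```

The harmonic mean penalizes imbalance, so a forecaster cannot score well by issuing warnings indiscriminately (high recall, low precision) or by warning only on sure things (high precision, low recall).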

Additionally (not shown), our model achieves accuracies over larger and rarer extreme events, with precision and recall scores over 5-year return period events that are similar to or better than GloFAS accuracies over 1-year return period events. See the paper for more information.

The flood forecasting initiative is part of our Adaptation and Resilience efforts and reflects Google's commitment to address climate change while helping global communities become more resilient. We believe that AI and ML will continue to play a critical role in helping advance science and research towards climate action.

We actively collaborate with several international aid organizations (e.g., the Centre for Humanitarian Data and the Red Cross) to provide actionable flood forecasts. Additionally, in an ongoing collaboration with the World Meteorological Organization (WMO) to support early warning systems for climate hazards, we are conducting a study to help understand how AI can help address real-world challenges faced by national flood forecasting agencies.

While the work presented here demonstrates a significant step forward in flood forecasting, future work is needed to further expand flood forecasting coverage to more locations globally and other types of flood-related events and disasters, including flash floods and urban floods. We are looking forward to continuing collaborations with our partners in the academic and expert communities, local governments and the industry to reach these goals.

Excerpt from:
Using AI to expand global access to reliable flood forecasts - Google Research

Read More..

Unlock the potential of generative AI in industrial operations | Amazon Web Services – AWS Blog

In the evolving landscape of manufacturing, the transformative power of AI and machine learning (ML) is evident, driving a digital revolution that streamlines operations and boosts productivity. However, this progress introduces unique challenges for enterprises navigating data-driven solutions. Industrial facilities grapple with vast volumes of unstructured data, sourced from sensors, telemetry systems, and equipment dispersed across production lines. Real-time data is critical for applications like predictive maintenance and anomaly detection, yet developing custom ML models for each industrial use case with such time series data demands considerable time and resources from data scientists, hindering widespread adoption.

Generative AI using large pre-trained foundation models (FMs) such as Claude can rapidly generate a variety of content from conversational text to computer code based on simple text prompts, known as zero-shot prompting. This eliminates the need for data scientists to manually develop specific ML models for each use case, and therefore democratizes AI access, benefitting even small manufacturers. Workers gain productivity through AI-generated insights, engineers can proactively detect anomalies, supply chain managers optimize inventories, and plant leadership makes informed, data-driven decisions.

Nevertheless, standalone FMs face limitations in handling complex industrial data due to context size constraints (typically less than 200,000 tokens). To address this, you can use the FM's ability to generate code in response to natural language queries (NLQs). Agents like PandasAI come into play, running this code on high-resolution time series data and handling errors using FMs. PandasAI is a Python library that adds generative AI capabilities to pandas, the popular data analysis and manipulation tool.

However, complex NLQs, such as time series data processing, multi-level aggregation, and pivot or joint table operations, may yield inconsistent Python script accuracy with a zero-shot prompt.

To enhance code generation accuracy, we propose dynamically constructing multi-shot prompts for NLQs. Multi-shot prompting provides additional context to the FM by showing it several examples of desired outputs for similar prompts, boosting accuracy and consistency. In this post, multi-shot prompts are retrieved from an embedding containing successful Python code run on a similar data type (for example, high-resolution time series data from Internet of Things devices). The dynamically constructed multi-shot prompt provides the most relevant context to the FM, and boosts the FM's capability in advanced math calculation, time series data processing, and data acronym understanding. This improved response facilitates enterprise workers and operational teams in engaging with data, deriving insights without requiring extensive data science skills.
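A minimal sketch of the prompt-assembly step (the template and example bank below are hypothetical placeholders, not the actual prompts used in this solution):

```python
def build_multi_shot_prompt(question: str, examples: list) -> str:
    """Assemble a multi-shot prompt: retrieved NLQ/code pairs first,
    then the new question, so the FM sees desired outputs for similar asks."""
    shots = "\n\n".join(
        f"Question: {ex['nlq']}\nPython:\n{ex['code']}" for ex in examples
    )
    return (
        "You write Python with pandas to answer questions about "
        "high-resolution IoT time series data.\n\n"
        f"{shots}\n\nQuestion: {question}\nPython:"
    )

# Hypothetical previously successful examples retrieved for a similar NLQ.
bank = [
    {"nlq": "Count unique sensors per site",
     "code": "df.groupby('site')['sensor'].nunique()"},
    {"nlq": "Daily mean temperature per sensor",
     "code": "df.resample('D', on='ts')['temp'].mean()"},
]

prompt = build_multi_shot_prompt(
    "How many sensors per site showed an Alarm state last week?", bank
)
```

Because the retrieved examples operate on the same kind of data, the FM can imitate their aggregation patterns rather than improvise from a zero-shot description.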

Beyond time series data analysis, FMs prove valuable in various industrial applications. Maintenance teams assess asset health, capture images for Amazon Rekognition-based functionality summaries, and perform anomaly root cause analysis using intelligent searches with Retrieval Augmented Generation (RAG). To simplify these workflows, AWS has introduced Amazon Bedrock, enabling you to build and scale generative AI applications with state-of-the-art pre-trained FMs like Claude v2. With Knowledge Bases for Amazon Bedrock, you can simplify the RAG development process to provide more accurate anomaly root cause analysis for plant workers. Our post showcases an intelligent assistant for industrial use cases powered by Amazon Bedrock, addressing NLQ challenges, generating part summaries from images, and enhancing FM responses for equipment diagnosis through the RAG approach.

The following diagram illustrates the solution architecture.

The workflow includes three distinct use cases:

The workflow for NLQ with time series data consists of the following steps:

Our summary generation use case consists of the following steps:

Our root cause diagnosis use case consists of the following steps:

To follow along with this post, you should meet the following prerequisites:

To set up your solution resources, complete the following steps:

Next, you create the knowledge base for the documents in Amazon S3.

The next step is to deploy the app with the required library packages on either your PC or an EC2 instance (Ubuntu Server 22.04 LTS).

Provide the OpenSearch Service collection ARN you created in Amazon Bedrock from the previous step.

After you complete the end-to-end deployment, you can access the app via localhost on port 8501, which opens a browser window with the web interface. If you deployed the app on an EC2 instance, allow port 8501 access via the security group inbound rule. You can navigate to different tabs for various use cases.

To explore the first use case, choose Data Insight and Chart. Begin by uploading your time series data. If you don't have an existing time series data file to use, you can upload the following sample CSV file with anonymous Amazon Monitron project data. If you already have an Amazon Monitron project, refer to Generate actionable insights for predictive maintenance management with Amazon Monitron and Amazon Kinesis to stream your Amazon Monitron data to Amazon S3 and use your data with this application.

When the upload is complete, enter a query to initiate a conversation with your data. The left sidebar offers a range of example questions for your convenience. The following screenshots illustrate the response and Python code generated by the FM when inputting a question such as "Tell me the unique number of sensors for each site shown as Warning or Alarm respectively?" (a hard-level question) or "For sensors shown temperature signal as NOT Healthy, can you calculate the time duration in days for each sensor shown abnormal vibration signal?" (a challenge-level question). The app will answer your question, and will also show the Python script of the data analysis it performed to generate such results.

If you're satisfied with the answer, you can mark it as Helpful, saving the NLQ and Claude-generated Python code to an OpenSearch Service index.

To explore the second use case, choose the Captured Image Summary tab in the Streamlit app. You can upload an image of your industrial asset, and the application will generate a 200-word summary of its technical specification and operation condition based on the image information. The following screenshot shows the summary generated from an image of a belt motor drive. To test this feature, if you lack a suitable image, you can use the following example image.

Hydraulic elevator motor label by Clarence Risher is licensed under CC BY-SA 2.0.

To explore the third use case, choose the Root cause diagnosis tab. Input a query related to your broken industrial asset, such as "My actuator travels slow, what might be the issue?" As depicted in the following screenshot, the application delivers a response with the source document excerpt used to generate the answer.

In this section, we discuss the design details of the application workflow for the first use case.

The user's natural language query comes at different difficulty levels: easy, hard, and challenge.

Straightforward questions may include the following requests:

For these questions, PandasAI can directly interact with the FM to generate Python scripts for processing.

Hard questions require basic aggregation operation or time series analysis, such as the following:

For hard questions, a prompt template with detailed step-by-step instructions assists FMs in providing accurate responses.

Challenge-level questions require advanced math calculations and time series processing, such as the following:

For these questions, you can use multi-shot examples in a custom prompt to enhance response accuracy. Such examples demonstrate advanced time series processing and math calculations, and provide context for the FM to perform relevant inference on similar analyses. Dynamically inserting the most relevant examples from an NLQ question bank into the prompt can be a challenge. One solution is to construct embeddings from existing NLQ question samples and save these embeddings in a vector store like OpenSearch Service. When a question is sent to the Streamlit app, it is vectorized by BedrockEmbeddings. The top N most relevant embeddings for that question are retrieved using opensearch_vector_search.similarity_search and inserted into the prompt template as a multi-shot prompt.
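As an illustration of this retrieval step, the sketch below ranks a toy question bank by cosine similarity and splices the top match into a multi-shot prompt. It is a simplified stand-in: the real app uses BedrockEmbeddings and opensearch_vector_search.similarity_search, while the embeddings, bank entries, and helper names here are hypothetical.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def top_n_examples(query_vec, bank, n=2):
    """Return the n bank entries whose embeddings are most similar to query_vec."""
    scored = sorted(bank, key=lambda e: cosine_similarity(query_vec, e["embedding"]),
                    reverse=True)
    return scored[:n]

def build_multishot_prompt(question, examples):
    """Splice retrieved question/code pairs ahead of the new question."""
    shots = "\n\n".join(f"Question: {e['question']}\nPython:\n{e['code']}"
                        for e in examples)
    return f"{shots}\n\nQuestion: {question}\nPython:"

# Toy question bank standing in for the OpenSearch Service index.
bank = [
    {"question": "Average temperature per site", "embedding": [1.0, 0.1],
     "code": "df.groupby('site')['temp'].mean()"},
    {"question": "Count alarms per sensor", "embedding": [0.2, 1.0],
     "code": "df[df.status=='Alarm'].groupby('sensor').size()"},
]
prompt = build_multishot_prompt("Mean temperature by site?",
                                top_n_examples([0.9, 0.2], bank, n=1))
```

In the production workflow, the toy vectors would be replaced by BedrockEmbeddings output and the sorted scan by an OpenSearch k-NN query; only the splicing logic stays the same.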

The following diagram illustrates this workflow.

The embedding layer is constructed using three key tools:

At the outset of app development, we began with only 23 saved examples in the OpenSearch Service index as embeddings. As the app goes live in the field, users start inputting their NLQs via the app. However, due to the limited examples available in the template, some NLQs may not find similar prompts. To continuously enrich these embeddings and offer more relevant user prompts, you can use the Streamlit app for gathering human-audited examples.

Within the app, the following function serves this purpose. When end-users find the output helpful and select Helpful, the application follows these steps:

If a user selects Not Helpful, no action is taken. This iterative process ensures that the system continually improves by incorporating user-contributed examples.
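The feedback flow described above can be sketched as follows; the function and the placeholder embedder are hypothetical stand-ins for the app's actual BedrockEmbeddings-backed OpenSearch write.

```python
def record_feedback(question, generated_code, helpful, bank, embed_fn):
    """Append a human-audited example to the question bank only when the
    user marked the answer Helpful. embed_fn stands in for BedrockEmbeddings:
    any callable mapping text to a vector works here."""
    if not helpful:  # "Not Helpful" -> no action, mirroring the app's behavior
        return False
    bank.append({
        "question": question,
        "code": generated_code,
        "embedding": embed_fn(question),
    })
    return True

bank = []
fake_embed = lambda text: [float(len(text)), 0.0]  # placeholder embedder

record_feedback("How many alarms per site?", "df.groupby('site').size()",
                True, bank, fake_embed)   # saved
record_feedback("unhelpful answer", "...", False, bank, fake_embed)  # ignored
```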

By incorporating human auditing, the quantity of examples in OpenSearch Service available for prompt embedding grows as the app gains usage. This expanded embedding dataset results in enhanced search accuracy over time. Specifically, for challenging NLQs, the FMs response accuracy reaches approximately 90% when dynamically inserting similar examples to construct custom prompts for each NLQ question. This represents a notable 28% increase compared to scenarios without multi-shot prompts.

On the Streamlit app's Captured Image Summary tab, you can directly upload an image file. This invokes the Amazon Rekognition detect_text API, which extracts text from the image label detailing machine specifications. The extracted text is then sent to the Amazon Bedrock Claude model as the context of a prompt, resulting in a 200-word summary.

From a user experience perspective, enabling streaming functionality for a text summarization task is paramount, allowing users to read the FM-generated summary in smaller chunks rather than waiting for the entire output. Amazon Bedrock facilitates streaming via its API (bedrock_runtime.invoke_model_with_response_stream).
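To illustrate the consumption pattern, the sketch below drains a stand-in event stream chunk by chunk, the way a Streamlit callback would render partial output. The event shape loosely mimics what invoke_model_with_response_stream returns, but the exact payload format here is an assumption for illustration.

```python
import json

def fake_response_stream(text, chunk_size=8):
    """Stand-in for the event stream returned by
    bedrock_runtime.invoke_model_with_response_stream: yields events whose
    'chunk' payload is JSON carrying a 'completion' fragment (an assumed
    shape for illustration)."""
    for i in range(0, len(text), chunk_size):
        fragment = {"completion": text[i:i + chunk_size]}
        yield {"chunk": {"bytes": json.dumps(fragment).encode()}}

def render_stream(stream, on_chunk=print):
    """Consume the stream fragment by fragment so the UI can show partial
    output instead of waiting for the whole summary."""
    pieces = []
    for event in stream:
        fragment = json.loads(event["chunk"]["bytes"])["completion"]
        pieces.append(fragment)
        on_chunk(fragment)  # e.g. update a st.empty() placeholder in Streamlit
    return "".join(pieces)

summary = render_stream(fake_response_stream("The motor operates at 460 V..."),
                        on_chunk=lambda _: None)
```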

In this scenario, we've developed a chatbot application focused on root cause analysis, employing the RAG approach. This chatbot draws from multiple documents related to bearing equipment to facilitate root cause analysis. This RAG-based root cause analysis chatbot uses knowledge bases for generating vector text representations, or embeddings. Knowledge Bases for Amazon Bedrock is a fully managed capability that helps you implement the entire RAG workflow, from ingestion to retrieval and prompt augmentation, without having to build custom integrations to data sources or manage data flows and RAG implementation details.

When you're satisfied with the knowledge base response from Amazon Bedrock, you can integrate the root cause response from the knowledge base into the Streamlit app.

To save costs, delete the resources you created in this post:

Generative AI applications have already transformed various business processes, enhancing worker productivity and skill sets. However, the limitations of FMs in handling time series data analysis have hindered their full utilization by industrial clients. This constraint has impeded the application of generative AI to the predominant data type processed daily.

In this post, we introduced a generative AI application solution designed to alleviate this challenge for industrial users. This application uses an open source agent, PandasAI, to strengthen an FM's time series analysis capability. Rather than sending time series data directly to FMs, the app employs PandasAI to generate Python code for the analysis of unstructured time series data. To enhance the accuracy of Python code generation, a custom prompt generation workflow with human auditing has been implemented.

Empowered with insights into their asset health, industrial workers can fully harness the potential of generative AI across various use cases, including root cause diagnosis and part replacement planning. With Knowledge Bases for Amazon Bedrock, the RAG solution is straightforward for developers to build and manage.

The trajectory of enterprise data management and operations is unmistakably moving towards deeper integration with generative AI for comprehensive insights into operational health. This shift, spearheaded by Amazon Bedrock, is significantly amplified by the growing robustness and potential of LLMs like Amazon Bedrock Claude 3 to further elevate solutions. To learn more, consult the Amazon Bedrock documentation and get hands-on with the Amazon Bedrock workshop.

Julia Hu is a Sr. AI/ML Solutions Architect at Amazon Web Services. She specializes in generative AI, applied data science, and IoT architecture. Currently she is part of the Amazon Q team and an active member and mentor in the Machine Learning Technical Field Community. She works with customers, ranging from start-ups to enterprises, to develop AWSome generative AI solutions. She is particularly passionate about leveraging large language models for advanced data analytics and exploring practical applications that address real-world challenges.

Sudeesh Sasidharan is a Senior Solutions Architect at AWS, within the Energy team. Sudeesh loves experimenting with new technologies and building innovative solutions that solve complex business challenges. When he is not designing solutions or tinkering with the latest technologies, you can find him on the tennis court working on his backhand.

Neil Desai is a technology executive with over 20 years of experience in artificial intelligence (AI), data science, software engineering, and enterprise architecture. At AWS, he leads a team of Worldwide AI services specialist solutions architects who help customers build innovative generative AI-powered solutions, share best practices with customers, and drive the product roadmap. In his previous roles at Vestas, Honeywell, and Quest Diagnostics, Neil held leadership roles in developing and launching innovative products and services that have helped companies improve their operations, reduce costs, and increase revenue. He is passionate about using technology to solve real-world problems and is a strategic thinker with a proven track record of success.

Read the rest here:
Unlock the potential of generative AI in industrial operations | Amazon Web Services - AWS Blog


Undergraduate Researchers Help Unlock Lessons of Machine Learning and AI – College of Natural Sciences

Brain-Machine Interface

AI also intersects with language in other research areas. Nihita Sarma, a computer science third-year student and member of Dean's Scholars and Turing Scholars, researches the intersection of neuroscience and machine learning to understand language in the brain, working with Michael Mauk, professor of neuroscience, and Alexander Huth, an assistant professor of computer science and neuroscience.

As research subjects listen to podcasts, they lie in an MRI machine and readings track their brain activity. These customized-to-the-subject readings are then used to train machine learning models called encoding models, and Sarma then passes them through decoding models.

"My research is taking those encodings and trying to backtrack and figure out, based on this neural representation, based on the brain activity that was going on at that moment, what could the person inside the MRI machine possibly have been thinking or listening to at that moment?" Sarma said.

Along with gaining a better understanding of how language is represented in the brain, Sarma said the research has possible applications as a noninvasive communication tactic for people unable to speak or sign.

"We would be able to decode what they're thinking or what they're trying to say, and allow them to communicate with the outside world," Sarma said.

Read more from the original source:
Undergraduate Researchers Help Unlock Lessons of Machine Learning and AI - College of Natural Sciences


Safeguarding AI: A Policymaker's Primer on Adversarial Machine Learning Threats – R Street

Artificial intelligence (AI) has become increasingly integrated into the digital economy, and as we've learned from the advent of the internet and the expansion of Internet-of-Things products and services, mass adoption of novel technology comes with widespread benefits as well as security tradeoffs. For policymakers to support the resilience of AI and AI-enabled technology, it is crucial for them to understand malicious attacks associated with AI integration, such as adversarial machine learning (ML); to support responsible AI development; and to develop robust security measures against these attacks.

Adversarial Machine Learning Attacks

Adversarial ML attacks aim to undermine the integrity and performance of ML models by exploiting vulnerabilities in their design or deployment or by injecting malicious inputs to disrupt the model's intended function. ML models power a range of applications we interact with daily, including search recommendations, medical diagnosis systems, fraud detection, financial forecasting tools, and much more. Malicious manipulation of these ML models can lead to consequences like data breaches, inaccurate medical diagnoses, or manipulation of trading markets. Though adversarial ML attacks are often explored in controlled environments like academia, vulnerabilities have the potential to be translated into real-world threats as adversaries consider how to integrate these advancements into their craft. Adversarial ML attacks can be categorized into white-box and black-box attacks based on the attacker's ability to access the target model.

White-box attacks imply that the attacker has open access to the model's parameters, training data, and architecture. In black-box attacks, the adversary has limited access to the target model and can only learn additional information about it through application programming interfaces (APIs) and by reverse-engineering behavior using output generated by the model. Black-box attacks are more relevant than white-box attacks because white-box attacks assume the adversary has complete access, which isn't realistic. It can be extremely complicated for attackers to gain complete access to fully trained commercial models in the deployment environments of the companies that own them.

Types of Adversarial Machine Learning Attacks

Query-based Attacks

Query-based attacks are a type of black-box ML attack where the attacker has limited information about the model's internal workings and can only interact with the model through an API. The attacker submits various queries as inputs and analyzes the corresponding output to gain insight into the model's decision-making process. These attacks can be broadly classified into model extraction and model inversion attacks.

Figure 1 Explaining query-based ML attacks (Source: Adversarial Robustness Toolbox)

Model Extraction: The attacker's goal is to reconstruct or replicate the target model's functionality by analyzing its responses to various inputs. This stolen knowledge can be used for malicious purposes like replicating the model for personal gain, conducting intellectual property theft, or manipulating the model's behavior to reduce its prediction accuracy.
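A minimal sketch of the model-extraction idea, assuming a hypothetical one-dimensional linear victim model exposed only through a prediction API: two queries are enough to rebuild a functionally equivalent surrogate.

```python
def black_box(x):
    """Victim model reachable only through its prediction API; the attacker
    never sees these coefficients. (Hypothetical scorer: f(x) = 3x + 7.)"""
    return 3.0 * x + 7.0

def extract_linear_model(query):
    """Reconstruct a 1-D linear model from just two API queries: probe two
    inputs, then solve for slope and intercept from the observed outputs."""
    x0, x1 = 0.0, 1.0
    y0, y1 = query(x0), query(x1)
    slope = (y1 - y0) / (x1 - x0)
    intercept = y0 - slope * x0
    return slope, intercept

slope, intercept = extract_linear_model(black_box)
surrogate = lambda x: slope * x + intercept  # replicated functionality
```

Real extraction attacks against nonlinear commercial models need far more queries and train a surrogate network on the collected input/output pairs, but the principle is the same: outputs alone leak the model's behavior.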

Model Inversion: The attacker attempts to decipher characteristics of the input data used to train the model by analyzing its outputs. This can potentially expose sensitive information embedded in the training data, raising significant privacy concerns related to personally identifiable information of the users in the dataset. Even if the model's predictions are not directly revealing, the attacker can reconstruct the outputs to infer subtle patterns or characteristics about the training dataset. State-of-the-art models offer some resistance to such attacks due to their increased infrastructure complexity. New entrants, however, are more susceptible to these attacks because they possess limited resources to invest in security measures like differential privacy or complex input validation.

Data Poisoning Attacks

Data poisoning attacks occur in both white- and black-box settings, where attackers deliberately add malicious samples to manipulate data. Attackers can also use adversarial examples to deceive the model by skewing its decision boundaries. Data poisoning occurs at different stages of the ML pipeline, including data collection, data preprocessing, and model training. Generally, the attacks are most effective during the model training phase because that is when the model learns about different elements within the data. Such attacks induce biases and reduce the model's robustness.

Figure 2 Explaining data poisoning attack (Source: Adversarial Robustness Toolbox)

Adversaries face significant challenges when manipulating data in real time to affect model output, thanks to technical constraints and operational hurdles that make it impractical to alter the data stream dynamically. For example, pre-trained models like OpenAI's ChatGPT or Google's Gemini trained on large and diverse datasets may be less prone to data poisoning compared to models trained on smaller, more specific datasets. This is not to say that pre-trained models are completely immune; these models sometimes fall prey to adversarial ML techniques like prompt injection, where the chatbot either hallucinates or produces biased outputs.
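As a toy illustration of the label-flipping flavor of data poisoning described above, the sketch below shows injected mislabeled samples pulling a simple threshold classifier's decision boundary toward the attacker's region; the classifier and data are hypothetical.

```python
def train_threshold(samples):
    """Tiny stand-in classifier: place the decision boundary halfway
    between the means of the two classes."""
    pos = [x for x, y in samples if y == 1]
    neg = [x for x, y in samples if y == 0]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

# Clean training data: class 0 clusters low, class 1 clusters high.
clean = [(1.0, 0), (2.0, 0), (8.0, 1), (9.0, 1)]

# Attacker injects mislabeled samples during data collection or training.
poisoned = clean + [(2.5, 1), (3.0, 1), (3.5, 1)]

t_clean = train_threshold(clean)        # 5.0: cleanly separates the classes
t_poisoned = train_threshold(poisoned)  # dragged toward the injected points
```

After poisoning, inputs the clean model would reject (e.g. 4.0) fall on the positive side of the shifted boundary, which is exactly the induced bias the attack aims for.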

Protecting Systems Against Adversarial Machine Learning Attacks

Addressing the risk of adversarial ML attacks necessitates a balanced approach. Adversarial attacks, while posing a legitimate threat to user data protections and the integrity of predictions made by the model, should not be conflated with speculative, science fiction-esque notions like uncontrolled superintelligence or an AI doomsday. More realistic ML threats relate to poisoned and biased models, data breaches, and vulnerabilities within ML systems. It is important to prioritize the development of secure ML systems alongside efficient deployment timelines to ensure continued innovation and resilience in a highly competitive market. Following is a non-exhaustive list of approaches to secure systems against adversarial ML attacks.

Secure-by-design principles for AI development: One method to ensure the security of an ML system is to employ security throughout its design, development, and deployment processes. Resources like the U.S. Cybersecurity and Infrastructure Security Agency and U.K. National Cyber Security Centre joint guidelines on secure AI development and the National Institute of Standards and Technology (NIST) Secure Software Development Framework provide guidance on how to develop and maintain ML models properly and securely.

Incorporating principles from the AI Risk Management Framework: NIST's AI Risk Management Framework (RMF) is a flexible framework to address and assess AI risk. According to the RMF, ML models should prioritize anonymity and confidentiality of user data. The AI RMF also suggests that models should consider de-identification and aggregation techniques for model outputs and balance model accuracy with user data security. While specialized techniques for preventing adversarial ML attacks are essential, traditional cybersecurity defensive tools like red teaming and vulnerability management remain paramount to systems protection.

Supporting new entrants with tailored programs and resources: Newer players like startups and other smaller organizations seeking to integrate AI capabilities into their products are more likely to be vulnerable to these attacks due to their reliance on third-party data sources and any potential deficiencies in their technology infrastructure to secure their ML systems. It's important that these organizations receive adequate support from tailored programs or resources.

Risk and threat analysis: Organizations should conduct an initial threat analysis of their ML systems using tools like MITRE's ATLAS to identify interfaces prone to attacks. Proactive threat analysis helps organizations minimize risks by implementing safeguards and contingency plans. Developers can also incorporate adversarial ML mitigation strategies to verify the security of their systems.

Data sanitization: Detecting individual data points that hurt the model's performance and removing them from the final training dataset can defend the system from data poisoning. Data sanitization can be expensive to conduct due to its need for computational resources. Organizations can reduce the risk of data poisoning with stricter vetting standards for imported data used in the ML model. This can be accomplished through data validation, anomaly detection, and continual monitoring of data quality over time.
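One simple form of the anomaly-detection step mentioned above is a z-score filter that drops statistical outliers before training; the sensor readings and cutoff below are illustrative assumptions, not a prescribed defense.

```python
import statistics

def sanitize(values, z_cut=2.5):
    """Drop points whose z-score exceeds z_cut before training: a basic
    data-validation pass against obviously poisoned or corrupted samples."""
    mu = statistics.mean(values)
    sigma = statistics.pstdev(values)
    if sigma == 0:          # all values identical: nothing to filter
        return list(values)
    return [v for v in values if abs(v - mu) / sigma <= z_cut]

# Nine plausible readings plus one injected outlier.
readings = [9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 9.7, 10.3, 10.1, 80.0]
clean = sanitize(readings)
```

A z-score filter only catches gross outliers; subtler poisoning that stays within the normal range requires the continual data-quality monitoring the text describes.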

Because these attacks have the potential to compromise user data privacy and undermine the accuracy of results in critical sectors, it is important to stay ahead of threats. Understanding policy implications and conducting oversight is essential, but succumbing to fear and hindering innovation through excessive precaution is detrimental. Policymakers can foster environments conducive to secure ML development by providing resources and frameworks to navigate the complexities of securing ML technologies effectively. A balance between developing resilient systems and sustained innovation is key for the United States to maintain its position as a leading AI innovator.

The rest is here:
Safeguarding AI: A Policymaker's Primer on Adversarial Machine Learning Threats - R Street


Machine Learning Accelerates the Simulation of Dynamical Fields – Eos

Editors' Highlights are summaries of recent papers by AGU's journal editors. Source: Journal of Advances in Modeling Earth Systems

Accurately simulating and appropriately representing the aerosol-cloud-precipitation system poses significant challenges in weather and climate models. These challenges are particularly daunting due to knowledge gaps in crucial processes that occur at scales smaller than typical large-eddy simulation model grid sizes (e.g., 100 meters). Particle-resolved direct numerical simulation (PR-DNS) models offer a solution by resolving small-scale turbulent eddies and tracking individual particles. However, they require extensive computational resources, limiting their use to small-domain simulations and a limited number of physical processes.

Zhang et al. [2024] develop PR-DNS surrogate models using the Fourier neural operator (FNO), which affords improved computational performance and accuracy. The new solver achieves a two-orders-of-magnitude reduction in computational cost, especially for high-resolution simulations, and exhibits excellent generalization, allowing for different initial conditions and zero-shot super-resolution without retraining. These findings highlight the FNO method as a promising tool to simulate complex fluid dynamics problems with high accuracy, computational efficiency, and generalization capabilities, enhancing our ability to model the aerosol-cloud-precipitation system and develop digital twins for similarly high-resolution measurements.

Citation: Zhang, T., Li, L., López-Marrero, V., Lin, M., Liu, Y., Yang, F., et al. (2024). Emulator of PR-DNS: Accelerating dynamical fields with neural operators in particle-resolved direct numerical simulation. Journal of Advances in Modeling Earth Systems, 16, e2023MS003898. https://doi.org/10.1029/2023MS003898

Jiwen Fan, Editor, JAMES

View original post here:
Machine Learning Accelerates the Simulation of Dynamical Fields - Eos


Advancements in Pancreatic Cancer Detection: Integrating Biomarkers, Imaging Technologies, and Machine Learning … – Cureus


More here:
Advancements in Pancreatic Cancer Detection: Integrating Biomarkers, Imaging Technologies, and Machine Learning ... - Cureus
