Category Archives: Machine Learning
New machine-learning approach identifies one molecule in a billion selectively, with graphene sensors – Phys.org
by Japan Advanced Institute of Science and Technology
Graphene's 2D nature, single-molecule sensitivity, low noise, and high carrier concentration have generated considerable interest in its application to gas sensors. However, its inherent non-selectivity and heavy p-doping in atmospheric air often limit its gas-sensing applications to controlled environments such as nitrogen, dry air, or synthetic humid air.
While humidity conditions in synthetic air can be used to achieve controlled hole doping of the graphene channel, this does not adequately mirror the situation in atmospheric air. Moreover, atmospheric air contains several gases at concentrations similar to or larger than that of the analyte gas. These shortcomings of graphene-based sensors hinder selective gas detection and molecular species identification in atmospheric air, which is required for applications such as environmental monitoring and non-invasive medical diagnosis.
The research team led by Dr. Manoharan Muruganathan (formerly Senior Lecturer) and Professor Hiroshi Mizuta at the Japan Advanced Institute of Science and Technology (JAIST) employed machine learning (ML) models trained on various gas adsorption-induced doping and scattering signals to realize both highly sensitive and selective gas sensing with a single device.
The performance of ML models is often dependent on the input features. 'The conventional graphene-based ML models are limited in their input features,' says Dr. Osazuwa Gabriel Agbonlahor (formerly a post-doctoral research fellow). The existing ML models only monitor the gas adsorption-induced changes in the graphene transfer characteristics or resistance/conductivity, without modulating these characteristics by applying an external electric field.
As a result, they miss the distinctive van der Waals (vdW) interaction between gas molecules and graphene, which is unique to each gas species. Unlike conventional electronic nose (e-nose) models, we can map the graphene-gas interaction as modulated by an external electric field, which enables more selective feature extraction in complex gas environments such as atmospheric air.
Our ML models for the identification of atmospheric gases were developed using a graphene sensor functionalized with a porous activated-carbon thin film. Eight vdW complex features were used to monitor the effects of the external electric field on the graphene-gas molecule vdW interaction, mapping the evolution of the vdW bonding before, during, and after the application of the external electric field.
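To make the final step concrete, here is a minimal, hypothetical sketch of a classifier trained on an eight-feature vector per measurement, in the spirit of the approach described above. The synthetic data, labels, and model choice are illustrative stand-ins (so the printed accuracy is chance-level); the actual vdW complex features and model details are in the paper.

```python
# Hypothetical sketch: a classifier over eight per-measurement features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_features = 400, 8          # eight vdW complex features per measurement
X = rng.normal(size=(n_samples, n_features))   # placeholder feature vectors
y = rng.integers(0, 4, size=n_samples)  # four environments, e.g. NH3/air, acetone/air, ...

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))  # ~chance on random data
```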
Furthermore, although the gas-sensing experiments were performed under varied experimental conditions (gas chamber pressure, gas concentration, ambient temperature, atmospheric relative humidity, tuning time, and tuning voltage), the developed models proved robust to these variations even though the parameters themselves were never provided to the models as inputs.
Moreover, to test the models' versatility, they were trained on atmospheric environments as well as the relatively inert environments often used in gas sensing, e.g., nitrogen and dry air. The result was a high-performance atmospheric-gas "electronic nose" that distinguished between four different environments (ammonia in atmospheric air, acetone in atmospheric air, acetone in nitrogen, and ammonia in dry air) with 100% accuracy.
The research is published in the journal Sensors and Actuators B: Chemical.
More information: Osazuwa G. Agbonlahor et al, Machine learning identification of atmospheric gases by mapping the graphene-molecule van der Waals complex bonding evolution, Sensors and Actuators B: Chemical (2023). DOI: 10.1016/j.snb.2023.133383
Provided by Japan Advanced Institute of Science and Technology
Machine learning and data consortium can fight identity theft: AU10TIX – Electronic Payments International
It is true that identity fraud has become a buzzword among the public and experts alike, but not in a good way. First coined in 1964, the term identity theft has since turned prolific, covering everything from fraudsters impersonating others to open credit card accounts to gangs laundering money without linking transactions to their real-life identities.
Last year, the National Fraud Intelligence Bureau reported that fraud offences increased by 17% in the year ending March 2022, reaching 936,276, compared with the year ending March 2021.
A GlobalData survey conducted in September 2022 found that identity theft ranked among the top three concerns shared by citizens of the UK, US, France, Germany and Poland.
Nir Stern is vice president of product management at AU10TIX, a tech company providing intelligence information and the infrastructure needed to combat fraud.
In an interview with Electronic Payments International, Stern talks about how machine learning, data consortiums as well as external sources can effectively be used to combat identity fraud.
Generally speaking, in the world of financial crime and even beyond, you need to differentiate between account takeover and identity fraud.
In the case of an account takeover, as a consumer, you have an existing relationship with a financial institution, and someone is taking over that account by stealing your credentials (username, password or even one-time password).
This is usually done via social engineering or email scams. In the end, a fraudster tries to make the victim do things for them or provide them with information that they need without the victim knowing about the fraud.
With identity fraud, the purpose usually is to pretend to be another person to start a new interaction with the institution either to commit financial crimes or to accomplish other activities. One example could be money laundering to finance terrorist activity.
Because of regulations, you would need an identity with all relevant information. The methods employed would be different.
First, you need to have access to your victim's personal information, unless you want to create a synthetic ID.
Secondly, you must create a fake ID of good enough quality to bypass any security measures financial institutions have.
There are roughly three types of methods.
There is the very elementary and naive one, like trying to take a photo of the victim's ID or downloading an ID image.
Then there are the more sophisticated fraud techniques. You could go on certain websites, pay a certain amount of money and download any type of digital document. These can be of sufficient quality that even an expert examining them will not notice anything suspicious.
Then you have the top-quality fraud, where identity fraudsters steal or buy personal information from databases uploaded to the dark web. They then create high-quality, sometimes even physical, IDs that are almost impossible to detect with standard fraud checks, because the fraudsters hold all the genuine information. Everything will look perfectly normal because the fraudsters obtained that information through a data breach.
Those are the trickiest to detect because everything looks completely normal. And when we examine web traffic, we see the most sophisticated fraudsters use this.
So, because fraudsters are using the methods mentioned above, none of the standard ID verification measures employed by banks will be helpful, no matter how sophisticated they are.
The only way to detect it with systems like ours is first to use machine learning that has access to millions of legitimate transactions and can spot different indicators showing what counts as fraudulent activity.
Also, we have a unique system called Instinct, a consortium database through which we track transactions worldwide. Through our big customers, we keep an eye on a huge volume of legitimate and fraudulent transactions. We store that information securely, without storing any sensitive data.
These steps enable us to see repetitions, because when these fraudsters invest a lot of time, effort, and money in coming up with a specific tactic, they will not do it just once. They will perform mass attacks.
And then we see cases where everything will look perfectly normal if you look at a single transaction. Still, our systems can detect the same face used on multiple IDs or the same ID with different faces.
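As a rough illustration of the repetition detection Stern describes, the sketch below compares face embeddings attached to different submitted IDs and flags near-duplicates. Everything here (the embedding vectors, document IDs, and match threshold) is invented for the example; the internals of AU10TIX's Instinct system are not public.

```python
# Toy repetition check: flag the same face appearing on multiple ID documents.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(1)
base_face = rng.normal(size=128)  # one fraudster's face embedding (hypothetical)
embeddings = {
    "id_001": base_face + rng.normal(scale=0.05, size=128),  # same face, document 1
    "id_002": base_face + rng.normal(scale=0.05, size=128),  # same face, document 2
    "id_003": rng.normal(size=128),                          # unrelated applicant
}

ids = list(embeddings)
for i in range(len(ids)):
    for j in range(i + 1, len(ids)):
        sim = cosine(embeddings[ids[i]], embeddings[ids[j]])
        if sim > 0.9:  # hypothetical match threshold
            print(f"possible repeat: {ids[i]} vs {ids[j]} (cosine={sim:.2f})")
```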
The combination of sophisticated machine learning to determine indicators of fraudulent activity and using a data consortium is basically the only way to detect those attacks. Unfortunately, many organisations do not use such capabilities to prevent identity fraud, thereby risking their money.
So, the main differentiator is, first, the fact that many of our competitors' ID-proofing solutions rely on manual reviews behind the scenes. They will capture the information or images sent over to them, but most of the analysis is done by human beings. And, as I have mentioned, with all these sophisticated, super-complex forgeries, a human being cannot identify ID fraud, because the activity looks normal even when you go into detail with your examination.
That's why, first, you need a fully automated system based on machine learning/AI to detect those tiny indicators of forgery that are hard to see otherwise.
Moreover, you also need capabilities like ours with Instinct: looking through a data consortium, detecting synthetic fraud or mass attacks, and seeing whether those attacks are repetitive.
You don't bring a knife to a gunfight. That is probably the key element in fighting identity fraud. If you look at the reports that are regularly shared, the value of identity theft-related fraud is in the hundreds of millions. You see specific gangs of identity thieves making millions out of it. As a result, they invest a lot of money into acquiring new technology to commit fraud. So you need the equivalent of that technology to fight back.
Therefore, you cannot have a solution based on human beings/agents who are supposed to detect identity fraud. It is almost impossible for them.
Next, on top of having a fully automated system, you also need to have a multi-layered approach.
That means not only searching to pinpoint fraudulent activity but also having access to data consortiums, as well as looking into external data sources to determine whether the fraudulent activity is one-off or not.
Thirdly, fraudsters adapt all the time, so you need a fully adaptable system that keeps on learning, a system with access to information not limited to your own customers. If you only monitor your own customer environment, you might not be prepared when the next attack comes, unless you have enough experience.
The other side of this discussion we should have brought up is the legitimate user experience.
If you think about these banks, they want to be secure and protected from fraud, yet around 99.9% of their traffic consists of legitimate people. In that sense, you want to ensure they have the best user experience, especially when you are a digital bank: that is your only channel; you don't have a backup physical branch.
You need to ensure a smooth digital user experience while keeping it secure and comfortable for your customers.
The key for these companies is to have the back-end systems able to detect these forgeries and the front-end solutions that help you, as a legitimate user, take the best profile photos and quickly become a customer.
When using these front-end solutions, make sure the quality of customer images enables you, as a company, to detect ID theft, and prevents fraudsters from using tricks such as external cameras, deep-fake photos, uploaded images, or screenshots of people or their IDs.
So all of that, combining a sophisticated front-end solution with a sophisticated back-end solution, is the direction many digital banks and financial institutions are taking, and one that traditional banks should pursue more as well.
When doing ID or digital verification, if you are not a small local bank but a global one looking to expand into other countries, you need companies like AU10TIX to do it for you.
We support over 200 countries and over 4,000 types of IDs. It is virtually impossible for a bank to develop a system that can manage various national ID formats while also successfully detecting forgeries.
That's not their business. Their business is to give the best user experience, so I believe the best approach to combat fraud is for them to get expert help.
The riskiest thing about blockchain or decentralised financial transactions is that you don't have any indication of where the money goes, no way to reverse it, and no information about who is interacting with the system. So if an account takeover or social engineering attack happens, there is nothing you can do.
There are different transaction monitoring techniques for decentralised IDs. A lot of the time, these techniques make sure the money doesn't go to unknown wallets. But if it's something like a fraudster opening the digital wallet and stealing the money, there is very little you can do.
One of the trends we see, and we're strategically investing a lot in it, is decentralised ID. It is when you issue a digital ID, which is encrypted and tokenised on your mobile device or digital wallet. That information is not shared with anyone; the signature is kept on the blockchain.
Because it is based on these standards, if you need to use the identification system, you can just share simple information or claims: am I allowed to do this? Am I over 18? Am I a citizen of a particular country?
We see a lot of hype around this technology. Because it's all about self-sovereign identity and closely tied to data privacy, which is key to decentralised finance, the combination of the two may actually be a solution that benefits everyone. You can keep your privacy without needing to be part of the financial institution ecosystem, while still keeping it secure in a way that guarantees only you have access to it.
Outlook on the Deep Learning Drug Discovery and Diagnostics Global Market to 2035: by Therapeutic Areas and Key Geographical Regions – Yahoo Finance
Dublin, March 16, 2023 (GLOBE NEWSWIRE) -- The "Deep Learning Market in Drug Discovery and Diagnostics: Distribution by Therapeutic Areas and Key Geographical Regions: Industry Trends and Global Forecasts (2nd Edition), 2023-2035" report has been added to ResearchAndMarkets.com's offering.
This report features an extensive study of the current market landscape and the likely future potential of the deep learning solutions market within the healthcare domain. The report highlights the efforts of several stakeholders engaged in this rapidly emerging segment of the pharmaceutical industry. The report answers many key questions related to this domain.
Since the mid-twentieth century, computing devices have continually been explored for applications beyond mere calculation, with the aim of creating machines that possess intelligence. These targeted efforts contributed to the introduction of artificial intelligence: programmed machines with the ability to comprehend data and execute instructed tasks.
The progress of artificial intelligence can be attributed to machine learning, a field of study that gives computers the ability to learn without being explicitly programmed. Deep learning is a complex machine learning technique that uses a neural network of interconnected nodes / neurons in a multi-layered structure, enabling the interpretation of large volumes of unstructured data to generate valuable insights. The mechanism of this technique resembles the interpretative ability of human beings, making it a promising approach for big data analysis.
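For readers unfamiliar with the term, the sketch below shows such a multi-layered network of interconnected nodes in PyTorch. The layer sizes and data are arbitrary illustrations, not taken from the report.

```python
# Minimal multi-layered ("deep") network: nodes arranged in stacked layers.
import torch
from torch import nn

model = nn.Sequential(
    nn.Linear(32, 64), nn.ReLU(),  # hidden layer 1
    nn.Linear(64, 64), nn.ReLU(),  # hidden layer 2
    nn.Linear(64, 2),              # output layer, e.g. two diagnostic classes
)
x = torch.randn(8, 32)             # a batch of 8 input feature vectors
print(model(x).shape)              # torch.Size([8, 2])
```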
Owing to the distinct ability of deep learning algorithms to imitate the human brain, the technique is currently being deployed in the life sciences domain, primarily for drug discovery and diagnostics. Considering the challenges associated with drug discovery and development, such as high attrition rates and growing financial burden, deep learning has been found to improve overall R&D productivity and enhance diagnosis / prediction accuracy.
Recent advancements in the deep learning domain have demonstrated its potential in other healthcare-associated segments, such as medical image analysis, molecular profiling, virtual screening and sequencing data analysis. Driven by the ongoing pace of innovation and the profound impact of implementing such solutions, deep learning is anticipated to witness substantial growth in the foreseeable future.
Key Market Insights
What is the Current Market Landscape of the Deep Learning Market Focused on Drug Discovery and Diagnostics?
Currently, more than 200 industry players are focused on providing deep learning-based services / technologies for drug discovery and diagnostic purposes. The primary focus areas of these companies include big data analysis, medical imaging, medical diagnosis and genetic / molecular data analysis.
Further, these players are engaged in offering services across a wide range of therapeutic areas. It is worth highlighting that deep learning-powered diagnostic service providers offer various diagnostic solutions, such as structured analysis reports, image interpretation and biomarker identification solutions, with input data from several compatible devices.
What is the Market Size of Deep Learning in Drug Discovery?
Lately, the industry has witnessed the development of advanced deep learning technologies / software. These technologies can address the concerns associated with the conventional drug discovery process and, eventually, help reduce the financial burden associated with drug discovery.
The global deep learning market focusing on drug discovery is anticipated to grow at a CAGR of over 20% between 2023 and 2035. By 2035, the deep learning in drug discovery market for oncological disorders is expected to capture the majority share. In terms of geography, the market in North America and Europe is anticipated to grow at a relatively faster pace by 2035.
What is the Market Size of the Deep Learning in Diagnostics Market?
The adoption of deep learning-powered technologies to assist medical diagnosis, as well as prevention of diseases, has increased in the recent past. The global deep learning market focusing on diagnostics is anticipated to grow at a CAGR of over 15% between 2023 and 2035. By 2035, the deep learning in diagnostics market in North America is expected to capture the majority share. In terms of therapeutic areas, the deep learning in diagnostics market for endocrine and respiratory disorders is anticipated to grow at a relatively faster pace by 2035.
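As a quick sanity check on what these figures imply, treating the quoted rates as lower bounds over the twelve years from 2023 to 2035, a constant CAGR compounds as follows:

```python
# Back-of-envelope: growth multiple implied by a constant CAGR over 2023-2035.
years = 2035 - 2023  # 12 years
for label, cagr in [("drug discovery", 0.20), ("diagnostics", 0.15)]:
    multiple = (1 + cagr) ** years
    print(f"{label}: a {cagr:.0%} CAGR implies roughly {multiple:.1f}x growth by 2035")
# drug discovery: ~8.9x; diagnostics: ~5.4x
```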
Which Segment held the Largest Share in the Deep Learning Market?
The study covers the revenues from deep learning technology across its potential applications in the drug discovery and diagnostics domains. As of 2022, deep learning-based diagnostics held the largest share of the market, owing to the efficiency and precision of deep learning-powered diagnostic solutions.
Further, the deep learning in drug discovery market is anticipated to grow at a relatively higher growth rate during the given time period with several pharmaceutical companies actively collaborating with solution providers for drug design and development.
What are the Key Advantages offered by Deep Learning in Drug Discovery and Diagnostics?
The use of deep learning in drug discovery has the potential to reduce capital requirements and the failure-to-success ratio, as algorithms are better equipped to analyze large datasets. Similarly, in the diagnostics domain, deep learning technology can be used to assist medical professionals in medical imaging and interpretation. This enables quick and efficient diagnosis of disease indications at an early stage.
What are the Key Drivers of Deep Learning in Drug Discovery and Diagnostics Market?
In the last decade, the healthcare industry has witnessed an inclination towards the adoption of information services and digital analytical solutions.
This can be attributed to the fact that companies have recently shifted towards high-resolution medical images and electronic health and medical records, generating large and complex data, referred to as big data. In order to analyze such large, structured and unstructured datasets, efficient tools and technology, such as deep learning, are required. Thus, these massive datasets are anticipated to be a primary driver of technological advancements in the deep learning and artificial intelligence domain.
What are the Key Trends in the Deep Learning in Drug Discovery and Diagnostics Market?
Many stakeholders have been making consolidated efforts to forge alliances with other industry / non-industry players for research, software licensing and collaborative drug / solution development purposes. It is worth highlighting that over 240 clinical studies are being conducted to evaluate the potential of deep learning in diagnostics, highlighting the continuous pace of innovation in this field.
Moreover, the field is evolving continuously, as a number of start-ups have emerged with the aim of developing deep learning technologies / software. In this context, in the past seven years, over 60 companies providing deep learning-based solutions have been established. Given the inclination towards advanced deep learning technologies and their vast applications in the healthcare segment, we believe that the deep learning market is likely to evolve at a rapid pace over the coming years.
Frequently Asked Questions
Question 1: What is deep learning? What are the major factors driving the deep learning market focused on drug discovery and diagnostics?
Answer: The paradigm shift of industry players towards digitization and challenges associated with the drug discovery process have contributed to the overall adoption of deep learning technologies for drug discovery, leading to a reduced economic load. The potential of deep learning technologies in assisting medical personnel in an early-stage diagnosis of various disorders has fueled the adoption of such technologies in the diagnostics segment.
Question 2: Which companies offer deep learning technologies / services for drug discovery and diagnostics?
Answer: Presently, more than 200 players are engaged in the deep learning domain, offering technologies / services, specifically for drug discovery and diagnostics purposes.
Question 3: How much funding has taken place in the field of deep learning in drug discovery and diagnostics?
Answer: Since 2019, more than USD 15 billion has been invested in the deep learning in drug discovery and diagnostics domain across multiple funding instances. Of these, the most prominent funding types included venture capital and grants, demonstrating high start-up activity in this domain.
Question 4: How many clinical trials, based on deep learning technologies, are being conducted?
Answer: Currently, more than 420 clinical trials are being conducted to evaluate the potential of deep learning for diagnostic purposes. Of these, 63% of the trials are active.
Question 5: What is the likely cost saving potential associated with the use of deep learning-based technologies in diagnostics?
Answer: Considering the vast potential of artificial intelligence, deep learning technologies are believed to save around 45% of the overall drug diagnostic costs.
Question 6: Which therapeutic area accounts for the largest share in the deep learning for drug discovery market?
Answer: Presently, oncological disorders capture the largest share (close to 40%) of the deep learning in drug discovery market. However, therapeutic areas, such as cardiovascular and respiratory disorders are likely to witness higher annual growth rates in the upcoming years. This can be attributed to the increasing applications of deep learning technologies across drug discovery.
Question 7: Which region is expected to witness the highest growth rate in the deep learning market for diagnostics?
Answer: The deep learning market for diagnostics in North America is likely to grow at the highest CAGR during the period 2023-2035.
Key Topics Covered:
1. PREFACE
2. EXECUTIVE SUMMARY
3. INTRODUCTION
4. MARKET OVERVIEW: DEEP LEARNING IN DRUG DISCOVERY
4.1. Chapter Overview
4.2. Deep Learning in Drug Discovery: Overall Market Landscape of Service / Technology Providers
4.2.1. Analysis by Year of Establishment
4.2.2. Analysis by Company Size
4.2.3. Analysis by Location of Headquarters
4.2.4. Analysis by Application Area
4.2.5. Analysis by Focus Area
4.2.6. Analysis by Therapeutic Area
4.2.7. Analysis by Operational Model
4.2.7.1. Analysis by Service Centric Model
4.2.7.2. Analysis by Product Centric Model
5. MARKET OVERVIEW: DEEP LEARNING IN DIAGNOSTICS
5.1. Chapter Overview
5.2. Deep Learning in Diagnostics: Overall Market Landscape of Service / Technology Providers
5.2.1. Analysis by Year of Establishment
5.2.2. Analysis by Company Size
5.2.3. Analysis by Location of Headquarters
5.2.4. Analysis by Application Area
5.2.5. Analysis by Focus Area
5.2.6. Analysis by Therapeutic Area
5.2.7. Analysis by Type of Offering / Solution
5.2.8. Analysis by Compatible Device
6. COMPANY PROFILES
6.1. Chapter Overview
6.2. Aegicare
6.2.1. Company Overview
6.2.2. Service Portfolio
6.2.3. Recent Developments and Future Outlook
6.3. Aiforia Technologies
6.4. Ardigen
6.5. Berg
6.6. Google
6.7. Huawei
6.8. Merative
6.9. Nference
6.10. Nvidia
6.11. Owkin
6.12. Phenomic AI
6.13. Pixel AI
7. PORTER'S FIVE FORCES ANALYSIS
8. CLINICAL TRIAL ANALYSIS
9. FUNDING AND INVESTMENT ANALYSIS
10. START-UP HEALTH INDEXING
11. COMPANY VALUATION ANALYSIS
11.1. Chapter Overview
11.2. Company Valuation Analysis: Key Parameters
11.3. Methodology
11.4. Company Valuation Analysis: Publisher Proprietary Scores
12. MARKET SIZING AND OPPORTUNITY ANALYSIS: DEEP LEARNING IN DRUG DISCOVERY
13. MARKET SIZING AND OPPORTUNITY ANALYSIS: DEEP LEARNING IN DIAGNOSTICS
14. DEEP LEARNING IN HEALTHCARE: EXPERT INSIGHTS
15. CONCLUDING REMARKS
16. INTERVIEW TRANSCRIPTS
17. APPENDIX 1: TABULATED DATA
18. APPENDIX 2: LIST OF COMPANIES AND ORGANIZATIONS
For more information about this report visit https://www.researchandmarkets.com/r/wv94in
About ResearchAndMarkets.com
ResearchAndMarkets.com is the world's leading source for international market research reports and market data. We provide you with the latest data on international and regional markets, key industries, the top companies, new products and the latest trends.
Unlocking computer vision and machine learning on highly efficient … – Imaging and Machine Vision Europe
Computer vision (CV) has been widely adopted in many Internet of Things (IoT) devices across various use cases, ranging from smart cameras and smart home appliances to smart retail, industrial applications, access control and smart doorbells. As these devices are constrained by size and are often battery powered, they need to wield highly efficient compute platforms.
One such platform is the MCU (microcontroller unit), which has low-power and low-cost characteristics, alongside CV and machine learning (ML) compute capabilities.
However, running CV on the MCU will undoubtedly increase its design complexity due to the hardware and software resource constraints of the platform.
Therefore, IoT developers need to determine how to achieve the required performance, while keeping power consumption low. In addition, they need to integrate the image signal processor (ISP) into the MCU platform, while balancing the ISP configuration and image quality.
One processor that fulfills these requirements is Arm's Cortex-M85, Arm's most powerful Cortex-M CPU to date. With a vector extension providing 128-bit SIMD (single instruction, multiple data) processing, the Cortex-M85 accelerates CV alongside overall MCU performance. IoT developers can leverage Arm's software ecosystem, ML embedded evaluation kit and guidance on integrating the ISP with the MCU to unlock CV and ML quickly and easily on the highly efficient MCU platform.
As a first step, being able to run CV compute workloads requires improved performance on the MCU. Focusing on the CPU architecture, there are several ways to enhance the MCU's performance, including superscalar, VLIW (very long instruction word), and SIMD designs. For the Cortex-M85, Arm chose to adopt SIMD, in which a single instruction operates on multiple data elements, as the best option for balancing performance and power consumption.
Figure 1: The comparison between VLIW and SIMD
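As a conceptual illustration of the data-parallel idea only (real Cortex-M85 code would be C with Helium/MVE intrinsics, not Python), NumPy shows how one vectorised operation replaces a loop of scalar ones:

```python
# SIMD in spirit: one operation applied across many data elements at once.
import numpy as np

a = np.arange(8, dtype=np.int32)
b = np.arange(8, dtype=np.int32)

scalar = [int(x) + int(y) for x, y in zip(a, b)]  # scalar view: one add per element
vector = a + b                                    # "vector" view: all lanes at once
assert scalar == vector.tolist()
```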
Arm's Helium technology, the M-Profile Vector Extension (MVE) for the Cortex-M processor series, brings vector processing to the MCU. Helium is an extension of the Armv8.1-M architecture that significantly enhances performance for CV and ML applications on small, low-power IoT devices. It also utilises the largest software ecosystem available to IoT developers, including optimised sample code and neural networks.
Supporting the Cortex-M CPUs, Arm has published various materials to make it easier to start running CV and ML, including the Arm ML embedded evaluation kit.
The evaluation kit provides ready-to-use ML applications for the embedded stack. As a result, IoT developers can experiment with the already-developed software use cases and then create their own applications. The example applications with ML networks are listed in the table below.
The Arm ML embedded evaluation kit
The ISP is an essential technology to unlock CV, as the image stream is the input source. However, there are certain points that we must consider when integrating ISP on the MCU platform.
For IoT edge devices, the image sensor resolution will be smaller (<1-2 MP; 15-30 fps), in some cases with an even lower frame rate, and image signal processing is not always active. Therefore, a higher-quality scaler within the ISP can drop the resolution to sub-VGA (640 x 480) to, for example, minimise data ingress to the NPU. This means the ISP only uses the full resolution when needed.
ISP configurations can also affect power, area, and efficiency. Therefore, it is worth asking the following questions to save power and area.
Is it for human vision, computer vision, or both?
What is the required memory bandwidth?
How many ISP output channels will be needed?
An MCU platform is usually resource-constrained, with limited memory size. Integrating an ISP requires the MCU to run the ISP driver, including the ISP's code, data, and control LUT (look-up table). Therefore, once the ISP configuration has been decided, developers need to tailor the driver firmware accordingly, removing unused code and data to fit within the memory limitations of the MCU platform.
Figure 2: An example of concise ISP configuration
Another consideration when integrating the ISP with the MCU is lowering the frame rate and resolution. In many cases, it is best to consider the convergence speed of the 3As: auto-exposure, auto-white balance and auto-focus. These will likely require a minimum of five to ten frames before settling. If the frame rate is too slow, this might be problematic for your use case: it could mean a two-to-five-second delay before a meaningful output can be captured and, given the short power-on window, a risk of missing critical events. Moreover, if the clock frequency of the image sensor is dropped too low, it is likely to introduce nasty rolling-shutter artifacts.
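The delay figure follows directly from that settling requirement. A quick back-of-envelope calculation, with the low frame rates assumed here purely for illustration:

```python
# Rough arithmetic: frames needed for 3A convergence divided by frame rate.
for fps in (2, 3):            # assumed low frame rates
    for frames in (5, 10):    # typical 3A settling range quoted above
        print(f"{frames} frames @ {fps} fps -> {frames / fps:.1f} s before settled output")
# e.g. 5 frames @ 2 fps -> 2.5 s; 10 frames @ 2 fps -> 5.0 s
```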
Enabling CV and ML on MCU platforms is part of the next wave of the IoT evolution. However, the constraints of the MCU platform can increase the design complexity and difficulty. Enabling vector processing on the MCU through the Cortex-M85 and leveraging Arms software ecosystem can provide enough computing and reduce this design complexity. In addition, integrating a concise ISP is a sensible solution for IoT devices to speed up and unlock CV and ML tasks on low-power, highly efficient MCU platforms.
Pyramid Analytics expands AI-driven Decision Intelligence with new … – Yahoo Finance
The #1 ranked augmented analytics Decision Intelligence Platform adds deep integration of OpenAI into its 2023 release
LONDON & NEW YORK CITY & TEL AVIV, Israel, March 20, 2023--(BUSINESS WIRE)--Pyramid Analytics (Pyramid), a leading business analytics and decision intelligence provider, announced today at the Gartner Data & Analytics Summit in Orlando, Florida, that the Pyramid 2023 release extends its already category-leading, AI-driven augmented capabilities with the integration of GPT (generative pre-trained transformer) AI technology from OpenAI (the company behind ChatGPT and DALL-E 2) throughout the platform, interoperating with its deep set of current AI technologies.
The release harnesses the new GPT AI engines to drive complex logic, data science, and machine learning code generation; AI-driven storytelling capabilities; and even AI-assisted design templates and colors. The effort extends Pyramid's broader vision to enable and drive adoption across the enterprise by empowering all users to solve data-centric business problems through no-code and AI-assisted analytics and decision intelligence.
Key facts about Pyramid's OpenAI integration
OpenAI is integrated throughout the Decision Intelligence Platform, including the data preparation, data science, business analytics and spreadsheet modules, and the storyboard and publication designer modules.
In data preparation: OpenAI can be used to generate SQL, DAX, and MDX code automatically for complex data extraction queries (a generic sketch of such a call follows this list).
In data science: OpenAI can be used to generate Python and R code automatically to drive machine learning logic.
In spreadsheets: OpenAI can be used to build spreadsheet formulas for users constructing business models.
In storyboarding and publications: OpenAI can be used to generate designs for content and graphics.
Separately, OpenAI can be used with existing natural language querying (NLQ) engines to drive and enhance broader insights on enterprise-specific data, ultimately improving the existing tools for delivering automated storytelling and textual analysis.
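As a generic illustration of this kind of GPT-backed code generation (this is not Pyramid's internal integration; the prompt, model choice, and table schema are hypothetical, using the 2023-era openai-python interface):

```python
# Hedged sketch: asking a GPT model to generate SQL, as in the list above.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You generate SQL only. No commentary."},
        {"role": "user", "content": "SQL for total 2022 revenue per region "
                                    "from a sales(region, amount, year) table."},
    ],
)
print(response.choices[0].message["content"])
```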
Market recognition
Pyramid has long been recognized as an innovator in the decision intelligence and analytics space. The company's Decision Intelligence Platform was ranked #1 for "Augmented Analytics" by leading analyst firm Gartner in the 2022 Gartner Analytics and Business Intelligence (ABI) Critical Capabilities report.
Other leading analysts, such as 451 Research, Ventana Research, Dresner Advisory Services, and BARC, recognize Pyramid for its platform-based approach to no-code capabilities and AI/augmented analytics that enable all user types to drive sophisticated analytics easily. Critically, Pyramid's existing augmented capabilities (NLQ, Chatbot, Smart Insights, Smart Model, Auto Discovery, Fill in the Blanks, Explain, etc.) uniquely operate directly on enterprise data without data duplication, custom models, proprietary data layers, or specialized data treatments.
Demos
Click here to schedule a demo of the Pyramid Decision Intelligence Platform and to learn more about OpenAI on the platform.
Quotes
Avi Perez, CTO and Co-Founder, Pyramid Analytics: "By integrating OpenAI throughout the Pyramid Decision Intelligence Platform, we are extending our existing AI and augmented capabilities with the latest generative AI tech, transforming and simplifying the decision-making experience further. As a no-code/low-code platform, Pyramid is designed to extend advanced analytics (from descriptive to predictive to prescriptive) to non-technical businesspeople, allowing them to make informed decisions that drive business outcomes."
Omri Kohl, CEO and Co-Founder, Pyramid Analytics: "At Pyramid, we believe the key to widespread adoption of data analytics is an analytics experience that meets different people's needs, regardless of their technical skills or analytics aptitude. By strategically integrating generative AI technologies like OpenAI, we are taking our AI vision to the next level."
About Pyramid Analytics
Pyramid Analytics is the next generation of decision intelligence. The award-winning Pyramid Decision Intelligence Platform empowers people with augmented, automated, and collaborative insights that simplify and guide the use of data in decision-making. Critically, the Pyramid Platform operates directly on any data, enabling governed self-service for any person; and meeting analytical needs in a no-code environment without data extraction, ingestion, and duplication. It combines data prep, business analytics, and data science into one frictionless platform to empower anyone with intelligent decision-making. This enables a strategic, enterprise-wide approach to business intelligence and analytics, from the simple to the sophisticated. Schedule a demo today.
Pyramid Analytics is incorporated in Amsterdam and has regional headquarters in global innovation and business centers, including London, New York City, and Tel Aviv. Our team lives worldwide because geography should not hinder talent and opportunity. Investors include H.I.G. Growth Partners, Jerusalem Venture Partners (JVP), Sequoia Capital, and Viola Growth. Learn more at Pyramid Analytics.
View source version on businesswire.com: https://www.businesswire.com/news/home/20230320005287/en/
Contacts
Pyramid Analytics
Pete Vomocil
SVP, Global Marketing, Pyramid Analytics
pr@pyramidanalytics.com
Darwin AI Evolving the Islands of Automation – SMT 007
When Canadian artificial intelligence company Darwin AI was founded in 2017, machine learning and deep learning were still relatively new terms. In the past five years, CEO Sheldon Fernandez and his team have been working with this technology to develop some foundational IP to simplify implementation. About a year ago, Sheldon took a part-happenstance, part-deliberate opportunity to develop a vertical offering for EMS manufacturing. Here's what happened.
Sheldon, it's nice to meet you. Would you briefly introduce your company?
Sheldon Fernandez: We're based out of Waterloo, Ontario, Canada. We're organically connected to the University of Waterloo, which is kind of like Canada's MIT. Two of our co-founders are professors at the institution, including Professor Alexander Wong, Canada's Research Chair in AI and Medical Imaging.
We've been working on foundational machine learning and deep learning technology for the past five years. A couple of years ago, our large industrial and aerospace clients were telling us about their supply chain challenges during the pandemic and about reshoring sensitive electronics manufacturing work back to North America, specifically printed circuit boards (PCBs). We thought that created an opportunity for us.
When we looked at PCB manufacturing, it became apparent that while the SMT placement workflow was highly automated, there was a need to automate back-end production and final assembly. This laborious part of the process was where EMS companies and OEMs were still employing manual inspection. These inspection tasks are tough to crack from a traditional machine vision perspective, and we wondered, "Can AI bring anything to bear on this problem?" We spent about a year developing a hardware and software solution that fits into the typical assembly line for PCB manufacturing. It also does post-assembly analysis, and what's really fascinating is how quickly an operator can program our product.
We often hear that AOIs are good at what they do but are laborious to program and maintain. With our system, there's not a lot of manual work. You give the system a good (i.e., golden) board, or a couple of good boards if there's a union of different components, and our AI system creates a map of where components should be in less than a minute; away you go. You can tweak it after that, and it's striking how quickly you can configure the product.
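As a loose, generic illustration of the golden-board idea (not Darwin AI's actual system, which is proprietary), classical template matching can already locate where a known-good component should sit on a reference image. The file names and threshold below are hypothetical placeholders.

```python
# Sketch: locate a known-good component on a golden-board image.
import cv2

golden = cv2.imread("golden_board.png", cv2.IMREAD_GRAYSCALE)
component = cv2.imread("component_c17.png", cv2.IMREAD_GRAYSCALE)  # crop from golden board
if golden is None or component is None:
    raise SystemExit("placeholder image files not found; supply real captures")

result = cv2.matchTemplate(golden, component, cv2.TM_CCOEFF_NORMED)
_, score, _, location = cv2.minMaxLoc(result)  # best-match score and position
if score > 0.8:  # hypothetical confidence threshold
    print(f"component expected at {location} (match score {score:.2f})")
```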
We brought our mini system to IPC APEX EXPO, and the response was fascinating. So many companies were intrigued by finally automating back-end production, and we're really excited about becoming a part of the community.
To read this entire conversation, which appeared in the March 2023 issue of SMT007 Magazine, click here.
What is artificial intelligence and how does it affect banking? – Santander
15/03/2023
To learn, reason and solve problems are some of the skills associated with humans. Today there are computer applications capable of performing such tasks through artificial intelligence. The rapid expansion of this technology brings about a series of opportunities in different fields. In the banking sector it can be used to improve customer service, optimise loan approval processes and prevent non-performing loans, among other applications. At the same time, it is important to be aware of the many challenges that artificial intelligence poses.
A few years ago, artificial intelligence (AI) might have seemed very futuristic. Nowadays it's common to find tools and applications based on AI algorithms in different areas of our everyday life, with different purposes: content generation (known as generative AI and employed in chatbots such as OpenAI's ChatGPT and Google's Bard); computers that play chess and are capable of beating humans; digital personal assistants; navigation assistants that tell you the best route; and thousands of other examples.
In general, AI is the development of systems capable of learning, planning or resolving problems in a way similar to humans. For an electronic device or software to have artificial intelligence, it needs data and algorithms that are capable of making decisions. It can receive the former via the internet, big data applications, or by connecting directly to other devices to exchange information. The latter, meanwhile, are a series of instructions with which they are programmed in order to create behaviours or patterns based on the different data they receive.
The growth in artificial intelligence is being driven mainly by disruptive technology such as machine learning, deep learning, big data and quantum computing. What sets AI apart from a standard IT program is that it can improve its own processes autonomously. In other words, it learns from previous tasks, without the need for human intervention. This is known as machine learning.
Imagine you have a robot vacuum. It is programmed to go around your house every day. It follows a map that it generated and stored using its integrated navigation system on day one (previous tasks). However, a few weeks later you decide to move some furniture around. The robot starts bumping into it, prompting it to automatically activate its navigation system and update the map (it makes a decision using the data received). That way, having designed a new route, it won't bump into the furniture next time.
Researchers From Tsinghua University Introduce A Novel Machine Learning Algorithm Under The Meta-Learning Paradigm – MarkTechPost
Recent achievements in supervised tasks of deep learning can be attributed to the availability of large amounts of labeled training data. Yet it takes a lot of effort and money to collect accurate labels. In many practical contexts, only a small fraction of the training data have labels attached. Semi-supervised learning (SSL) aims to boost model performance using labeled and unlabeled input. Many effective SSL approaches, when applied to deep learning, undertake unsupervised consistency regularisation to use unlabeled data.
State-of-the-art consistency-based algorithms typically introduce several configurable hyper-parameters, even though they attain excellent performance. For optimal algorithm performance, it is common practice to tune these hyper-parameters to optimal values. Unfortunately, hyper-parameter searching is often unreliable in many real-world SSL scenarios, such as medical image processing, hyper-spectral image classification, network traffic recognition, and document recognition. This is because the annotated data are scarce, leading to high variance when cross-validation is adopted. Having algorithm performance sensitive to hyper-parameter values makes this issue even more pressing. Moreover, the computational cost may become unmanageable for cutting-edge deep learning algorithms as the search space grows exponentially concerning the number of hyper-parameters.
Researchers from Tsinghua University introduced a meta-learning-based SSL algorithm called Meta-Semi to leverage the labeled data more effectively. Meta-Semi achieves outstanding performance in many scenarios while adjusting just one additional hyper-parameter.
The team was inspired by the realization that the network may be trained successfully using the appropriately pseudo-labeled unannotated examples. Specifically, during the online training phase, they produce pseudo-soft labels for the unlabeled data based on the network predictions. Next, they remove the samples with unreliable or incorrect pseudo labels and use the remaining data to train the model. This work shows that the distribution of correctly pseudo-labeled data should be comparable to that of the labeled data. If the network is trained with the former, the final loss on the latter should also be minimized.
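A minimal sketch of that confidence-filtered pseudo-labeling step might look like the following. This shows only the generic mechanism described above, not the full Meta-Semi algorithm, whose distinctive meta-learned reweighting comes next.

```python
# Generic pseudo-labeling with confidence filtering (not full Meta-Semi).
import torch
import torch.nn.functional as F

def pseudo_label_loss(model, unlabeled_x, threshold=0.95):
    with torch.no_grad():
        probs = F.softmax(model(unlabeled_x), dim=1)  # pseudo-soft labels
        conf, pseudo_y = probs.max(dim=1)
        keep = conf >= threshold                      # drop unreliable pseudo labels
    if keep.sum() == 0:
        return torch.tensor(0.0)                      # nothing confident enough yet
    logits = model(unlabeled_x[keep])
    return F.cross_entropy(logits, pseudo_y[keep])    # train on the kept samples

model = torch.nn.Linear(16, 4)                        # toy stand-in classifier
print(pseudo_label_loss(model, torch.randn(32, 16)))
```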
They defined the meta-reweighting objective to minimize the final loss on the labeled data by selecting the most appropriate weights (weights throughout the paper always refer to the coefficients used to reweight each unlabeled sample rather than referring to the parameters of neural networks). The researchers encountered computing difficulties when tackling this problem using optimization algorithms.
For this reason, they suggest an approximation formulation from which a closed-form solution can be derived. Theoretically, they demonstrate that each training iteration only needs a single meta gradient step to achieve the approximate solutions.
Finally, they suggest a dynamic weighting approach that reweights the pseudo-labeled samples with 0-1 weights. The results show that this approach eventually reaches a stationary point of the supervised loss function. On popular image classification benchmarks (CIFAR-10, CIFAR-100, SVHN, and STL-10), the proposed technique has been shown to perform better than state-of-the-art deep SSL methods. On the difficult CIFAR-100 and STL-10 SSL tasks, Meta-Semi achieves much higher performance than state-of-the-art SSL algorithms like ICT and MixMatch, and it performs somewhat better than them on CIFAR-10. Moreover, Meta-Semi is a useful complement to consistency-based approaches; incorporating consistency regularisation into the algorithm further boosts performance.
According to the researchers, one drawback is that Meta-Semi requires a little more time to train. They plan to look into this issue in the future.
Check out the Paper and Reference Article. All credit for this research goes to the researchers on this project.
Tanushree Shenwai is a consulting intern at MarktechPost. She is currently pursuing her B.Tech from the Indian Institute of Technology (IIT), Bhubaneswar. She is a Data Science enthusiast and has a keen interest in the scope of application of artificial intelligence in various fields. She is passionate about exploring the new advancements in technologies and their real-life application.
Scientists Identify Thousands of New Cosmic Objects Using Machine Learning – SciTechDaily
Application of machine learning techniques to large astronomy data sets can discover thousands of cosmic objects of various classes. Credit: Shivam Kumaran
A team of scientists from the Tata Institute of Fundamental Research in Mumbai and the Indian Institute of Space Science and Technology in Thiruvananthapuram, consisting of Prof. Sudip Bhattacharyya and Mr. Shivam Kumaran, along with Prof. Samir Mandal and Prof. Deepak Mishra, have utilized machine learning techniques to identify the nature of thousands of new celestial objects in X-ray wavelengths. Machine learning is a branch of artificial intelligence.
Astronomy is undergoing a transformation as vast amounts of astronomical data from millions of celestial objects become readily accessible. This is due to large-scale surveys and meticulous observations utilizing top-notch astronomical observatories, combined with a policy of open data availability.
Needless to say, these data hold great potential for many discoveries and a new understanding of the universe. However, it is not practical to explore the data from all these objects manually, and automated machine learning techniques are essential to extract information from them. But the application of such techniques to astronomical data is still very limited and at a preliminary stage.
Against this background, the TIFR-IIST team applied machine learning techniques to hundreds of thousands of cosmic objects observed in X-rays with the USA's Chandra space observatory. This demonstrated how new, topical technological progress can help and revolutionize basic and fundamental scientific research. The team applied these techniques to about 277,000 X-ray objects, the nature of most of which was unknown.
A classification of the nature of unknown objects is equivalent to the discovery of objects of specific classes. Thus, this research led to a reliable discovery of many thousands of cosmic objects of classes, such as black holes, neutron stars, white dwarfs, stars, etc., which opened up an enormous opportunity for the astronomy community for further detailed studies of many interesting new objects.
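Schematically, the approach amounts to supervised training on sources whose class is already known, followed by prediction for the unidentified ones. The sketch below uses random placeholder features; the actual inputs are measured Chandra X-ray properties, and the model details are in the paper.

```python
# Hedged sketch: train on known-class sources, classify the unknown ones.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(42)
X_known = rng.normal(size=(1000, 12))      # placeholder features (fluxes, variability, ...)
y_known = rng.integers(0, 4, size=1000)    # e.g. black hole, neutron star, white dwarf, star
X_unknown = rng.normal(size=(277000, 12))  # stand-in for the ~277,000 unidentified sources

clf = GradientBoostingClassifier().fit(X_known, y_known)
print(clf.predict(X_unknown[:100])[:10])   # predicted classes for a first batch
```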
This collaborative research has also been important to establish a state-of-the-art capacity to apply new machine-learning techniques to fundamental research in astronomy, which will be crucial to scientifically utilize the data from current and upcoming observatories.
Reference: Automated classification of Chandra X-ray point sources using machine learning methods by Shivam Kumaran, Samir Mandal, Sudip Bhattacharyya and Deepak Mishra, 9 February 2023, Monthly Notices of the Royal Astronomical Society.DOI: 10.1093/mnras/stad414
Weights & Biases simplifies machine learning production and … – SiliconANGLE News
Machine learning development startup Weights & Biases Inc., whose software is used by the likes of OpenAI LLC and Nvidia Corp. to develop new artificial intelligence models, announced two major enhancements to its platform today.
Weights and Biases has created a platform for teams to build and collaborate on machine learning models and operations, or MLOps. The platform enables teams to keep track of their machine learning experiments. It also provides tools for evaluating the performance of different machine learning models, dataset versioning and pipeline management.
The W&B platform is designed to improve the efficiency of the trial-and-error process through which AI software is developed. According to the startup, it helps by increasing developer productivity, as it solves one of the main challenges of AI initiatives: organizing and processing project data.
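For readers who have not used the platform, a minimal experiment-tracking example with the public wandb Python library looks like this (the project name and logged metrics are made up):

```python
# Minimal W&B experiment tracking: record config and per-epoch metrics.
import random
import wandb

wandb.init(project="demo-project", config={"lr": 0.01, "epochs": 5})
for epoch in range(wandb.config.epochs):
    loss = 1.0 / (epoch + 1) + random.random() * 0.05  # stand-in training loss
    wandb.log({"epoch": epoch, "loss": loss})
wandb.finish()
```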
The new additions to its platform were announced at Fully Connected, W&Bs inaugural user conference, and include W&B Launch and W&B Models.
Available in public preview starting today, W&B Launch provides an easier way for users to package code automatically and launch a new machine learning job on any target environment. In this way, machine learning practitioners gain easier access to the compute resources they need, while simplifying infrastructure complexity. In addition, W&B Launch makes it easier for teams to reproduce runs and scale up and scale out those activities.
Orlando Avila-Garcia, AI principal researcher at ARQUIMEA Research Center, said W&B Launch will make it easier for his organization to accelerate research on a variety of deep learning techniques, including neural radiance fields and graph neural networks. "Abstracting away the complexity of using the infrastructure for our researchers is very beneficial to our overall team," he said. "Launch greatly simplifies our work optimizing, experimenting with and benchmarking ML methods, letting us focus on reliability and reproducibility of results."
As for W&B Models, this is generally available now and provides a more scalable way for teams to govern machine learning model lifecycles in a centralized repository, while allowing for cross-functional discovery and collaboration. With its reproducibility and lineage tracking functionality that makes it possible for users to track exactly when a model was moved from staging to production, W&B Models can help teams to maintain higher-quality models over time.
Andy Thurai, vice president and principal analyst of Constellation Research Inc., told SiliconANGLE that Weights & Biases is one of the older players in the MLOps segment, and it's especially strong in model creation, dataset versioning and experiment tracking. He added that W&B Models looks like a good addition to the company's solution set. "It offers ML model registry and model governance, allowing users to collaborate more easily by making models discoverable across large enterprises," he said. "This, combined with model lineage tracking, enables users to track models in the ML pipeline from inception to production."
"We're confident that W&B users will quickly see the benefits of Launch and Models in accelerating model training, utilizing compute resources efficiently, managing models with more confidence, and having a more cohesive end-to-end ML workflow," said W&B Vice President of Product Phil Gurbacki.
The platform enhancements follow what W&B says has been a very promising 12 months, during which it has enjoyed significant traction and momentum. The company, which last raised $135 million at a $1 billion valuation in October 2021, claims to have doubled its employee base in the last year, opening new offices in Berlin, London and Tokyo to help boost its international presence.