Category Archives: Machine Learning

Is the Future of Food Quality in the Hands of Machine Learning? – FoodSafetyTech

Is the future of food quality in the hands of machine learning? It's a provocative question, and one that does not have a simple answer. Truth be told, it's not for every entity that produces food, but in a resource-, finance- and time-constrained environment, machine learning will absolutely play a role in the food safety arena.

"We live in a world where efficiency, cost savings and sustainability goals are interconnected," says Berk Birand, founder and CEO of Fero Labs. "No longer do manufacturers have to juggle multiple priorities and make tough tradeoffs between quality and quantity. Rather, they can make one change that optimizes all of these variables at once with machine learning." In a Q&A with Food Safety Tech, Birand briefly discusses how machine learning can benefit food companies by streamlining manufacturing processes and improving product quality.

Food Safety Tech: How does machine learning help food manufacturers maximize production without sacrificing quality?

Berk Birand: Machine learning can help food manufacturers boost volume and yield while also reducing quality issues, waste, and cycle time. With a more efficient process powered by machine learning, they can churn out products faster without affecting quality.

Additionally, machine learning helps food producers manage raw material variation, which can cause low production volume. In the chemicals sector, a faulty batch of raw ingredients can be returned to the supplier for a refund; in food, however, the perishable nature of many food ingredients means that they must be used, regardless of any flaws. This makes it imperative to get the most out of each ingredient. A good machine learning solution will note those quality differences and recommend new parameters to deal with them.
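
To make that concrete, here is a minimal sketch of the idea, with invented data and parameter names rather than any vendor's actual product: fit a model of yield against an incoming ingredient property and a controllable setting, then search for the setting the model predicts will work best for a new batch.

```python
# Minimal sketch (invented data, not any vendor's product): learn how yield
# depends on an ingredient property and a process setting, then recommend
# the setting with the best predicted yield for a new batch.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
moisture = rng.uniform(0.10, 0.25, 500)      # measured ingredient property
temp = rng.uniform(160, 200, 500)            # controllable process setting
# Synthetic ground truth: the best temperature shifts with moisture.
yield_pct = 95 - 40 * (temp - (150 + 200 * moisture)) ** 2 / 1000 + rng.normal(0, 1, 500)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(np.column_stack([moisture, temp]), yield_pct)

# A new batch arrives wetter than usual; scan candidate settings for it.
new_moisture = 0.22
candidates = np.linspace(160, 200, 81)
features = np.column_stack([np.full_like(candidates, new_moisture), candidates])
print(f"Recommended temperature: {candidates[model.predict(features).argmax()]:.1f} C")
```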

FST: How does integrating machine learning into software predict quality violations in real time, and thus help prevent them?

Birand: Machine learning can predict quality issues hours ahead of time and recommend the optimal settings to prevent them. The machine learning software analyzes all the data produced on the factory floor and learns how each factor, such as temperature or the length of a certain process step, affects the final quality.

By leveraging these learnings, the software can then predict quality violations in real time and tell engineers and operators how to prevent them, whether the solution is increasing the temperature or adding more of a specific ingredient.
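
A hedged illustration of this predict-and-alert loop, using synthetic data in place of real factory-floor measurements:

```python
# Hedged sketch of the predict-and-alert loop: a classifier trained on
# historical process data flags batches at risk of a quality violation.
# Features and data are synthetic stand-ins for factory-floor measurements.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 3))               # e.g., temperature, step length, dose
y = ((0.8 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(0, 0.5, 2000)) > 1.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)

# In production, each live batch would be scored as measurements stream in;
# a high probability triggers an operator alert hours before the violation.
risk = clf.predict_proba(X_te[:1])[0, 1]
print(f"Predicted violation risk for current batch: {risk:.0%}")
```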

FST: How does machine learning technology reveal & uphold sustainability improvements?

Birand: As climate change intensifies, sustainability continues to become a priority for many manufacturers. Explainable machine learning software can reveal where sustainability improvements, such as reducing heat or minimizing water consumption, can be made without any effect on quality or throughput. By tapping into these recommendations, factories can produce more food with the same amount of energy.

Go here to read the rest:
Is the Future of Food Quality in the Hands of Machine Learning? - FoodSafetyTech

Aspinity Redefines Always-on Power Efficiency with First Analog Machine Learning Chip – Business Wire

PITTSBURGH--(BUSINESS WIRE)--Aspinity, the pioneer in analog machine learning chips, today launched the first member of its analogML family, the AML100, which is the industry's first and only tiny machine learning (ML) solution operating completely within the analog domain. As such, the AML100 reduces always-on system power by 95%, allowing manufacturers to dramatically extend the battery life of today's devices or migrate wall-powered always-on devices to battery, opening whole new classes of products for voice-first systems, home and commercial security, predictive and preventative maintenance, and biomedical monitoring.

Minimizing the quantity and movement of data through a system is one of the most efficient ways to reduce power consumption, but today's always-on devices don't have that capability. Instead, they continuously collect large amounts of natively analog data as they monitor their environment and digitize the data immediately, wasting tremendous system power processing data that are mostly irrelevant to the application. In contrast, the AML100 delivers substantial system-level power savings by moving the ML workload to ultra-low-power analog, where the AML100 can determine data relevancy with a high degree of accuracy and at near-zero power. This makes the AML100 the only tinyML chip that intelligently reduces data at the sensor while the data is still analog and keeps the digital components in low power mode until important data is detected, thereby eliminating the power penalty of digitization, digital processing, and transmission of irrelevant data.
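
The gating logic is easiest to see in software, even though the AML100 implements it in analog hardware. The sketch below is only a conceptual simulation of that control flow, with an invented threshold and signal:

```python
# Conceptual simulation only: the AML100 performs this gating in analog
# hardware, but the control flow is the same. Threshold and signal invented.
import numpy as np

rng = np.random.default_rng(2)
signal = rng.normal(0, 0.01, 10_000)                        # irrelevant background
signal[6000:6200] += 0.5 * np.sin(np.linspace(0, 40, 200))  # a relevant "event"

THRESHOLD = 0.1
for i, sample in enumerate(signal):
    if abs(sample) > THRESHOLD:
        # Only now would the digital chain be woken; every earlier sample
        # incurred no digitization, digital processing, or transmission cost.
        print(f"Event detected at sample {i}; waking digital processing.")
        break
```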

"We've long realized that reducing the power of each individual chip within an always-on system provides only incremental improvements to battery life," said Tom Doyle, founder and CEO, Aspinity. "That's not good enough for manufacturers who need revolutionary power improvements. The AML100 reduces always-on system power to under 100 µA, and that unlocks the potential of thousands of new kinds of applications running on battery."

Inside the AML100

The heart of the AML100 is an array of independent, configurable analog blocks (CABs) that are fully programmable within software to support a wide range of functions, including sensor interfacing and ML. This versatility delivers a tremendous advantage over other analog approaches, which are rigid and only address a single function. The AML100, however, is highly flexible, and can be reprogrammed in the field with software updates or with new algorithms targeting other always-on applications.

The precise programmability of the AML100's analog circuits also eliminates the chip-to-chip performance inconsistencies caused by standard analog CMOS process variation, which has severely limited the use of highly sophisticated analog chips, even when the inherent low power of analog makes it better suited for a specific task.

Availability

Aspinity's AML100 is currently sampling to key customers, with volume production planned for Q4 2022. Customers can evaluate the AML100's capabilities by purchasing one of Aspinity's integrated hardware-software evaluation kits: EVK1 for glass-break and T3/T4 alarm tone detection, or EVK2 for voice detection with preroll collection and delivery. Contact Aspinity about evaluation kits with software packages for other applications. For more information, download the AML100 product brief or contact Aspinity.

About Aspinity

Aspinity is the world leader in the design and development of analog processing chips that are revolutionizing the power- and data-efficiency of always-on sensing architectures. By delivering highly discriminating analog event detection, Aspinity's ultra-low-power, trainable and programmable analog machine learning (analogML) core eliminates the power penalty of moving irrelevant data through the digital processing system, dramatically extending battery life in consumer, IoT, industrial and biomedical applications.

For more information on Aspinity, stay in touch on LinkedIn and Twitter: @aspinity, email: info@aspinity.com or visit https://Aspinity.com.

Read the original post:
Aspinity Redefines Always-on Power Efficiency with First Analog Machine Learning Chip - Business Wire

Machine Learning in the Construction Industry | Pro Builder

When most people hear "machine learning" or "artificial intelligence," the last thing that comes to mind is a technology that requires human interaction. Usually, it's the opposite: more computers and more technology mean fewer humans need to be involved.

Machine learning can, however, improve the daily lives of humans in industries of all kinds, particularly construction.

While machine learning in construction may appear to be a distant concept decades away from becoming a reality, the technology's future is closer than you think. In reality, machine learning has been gaining traction in the construction business for years and, simply put, rather than removing humans from the equation, it allows individuals to accomplish their jobs more efficiently.

Because the construction industry has been slower to adopt the technological advances applied in other industries, the job of constructing buildings has become increasingly difficult for its workers.

However, finding the resources to incorporate new technology while staying on track with building projects is difficult. Machine learning has the potential to propel the construction sector forward, improving conditions and productivity for workers, contracting organizations, and clients on a daily basis.

Before we delve too far into the topic, let's make sure we cover the basics, especially if you're unfamiliar with the notion. The definition of machine learning provided by the book Machine Learning: An Artificial Intelligence Approach centers on what it means to be intelligent: "the ability to learn is one of the most fundamental aspects of intelligent behaviour."

With machine learning, machines essentially have the ability to learn without being explicitly programmed. Machines can self-learn and forecast outcomes based on the statistically significant patterns they discover in the data they receive. Instead of being programmed step by step by a human, they employ software with algorithms that enable them to make predictions based on data analysis. A machine, for example, can alert you to the need for preventative maintenance based on data it collects from the equipment it's monitoring.
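
As a toy illustration of that preventative-maintenance example (thresholds and sensor values invented for demonstration):

```python
# Toy preventative-maintenance alert: learn the normal range of a vibration
# sensor from history, then flag readings that drift outside it.
# All values and the z-score threshold are invented for illustration.
import numpy as np

rng = np.random.default_rng(3)
history = rng.normal(1.0, 0.05, 500)          # healthy vibration amplitudes
mean, std = history.mean(), history.std()

for reading in (1.02, 1.31):                  # the second reading is drifting
    z = abs(reading - mean) / std
    status = "schedule maintenance" if z > 3 else "normal"
    print(f"vibration={reading:.2f} (z={z:.1f}) -> {status}")
```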

Machine learning is now considered a subset of artificial intelligence (AI). It sounds like science fiction, but it has many technical and practical uses.

There are plenty of ways in which machine learning can be used to help humans in the construction workplace, and it covers a range of different niches and considerations. Some of the most important include:

Machine learning has the potential to improve the overall design of a building for its occupants. Workspace businesses (WeWork, for example) use the technology to better analyze and estimate the frequency of use for meeting rooms, allowing companies to optimize the design of those spaces before construction begins.

Machine learning can also assist workers in identifying potential design flaws and omissions before proceeding with construction.

Of course, safety on building jobsites is a top priority, and machine learning can help. Consider the testing of VINNIE [Very Intelligent Neural Network for Insight and Evaluation] artificial intelligence, as reported by Engineering News-Record in 2016:

VINNIE detected safety hazards, such as a person who was not wearing a hard hat, far more quickly and accurately than the human team. In comparison, a team of human specialists took over 4.5 hours to review over 1,000 entries, whereas VINNIE took less than 10 minutes. The human team correctly identified 414 photographs with persons, while VINNIE correctly identified 446.
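
VINNIE itself is proprietary, but the triage loop it performs can be sketched generically. The stub model below stands in for a real trained detector:

```python
# Generic triage loop with a stub in place of a real trained detector
# (VINNIE's internals are not public; names here are hypothetical).
class DummyHardHatModel:
    """Stand-in scorer: returns P(photo shows a person without a hard hat)."""
    def predict(self, photo_name: str) -> float:
        return 0.9 if "no_hat" in photo_name else 0.1

def review_photos(photos: list[str], model) -> list[str]:
    """Flag photos the model scores above a review threshold."""
    return [p for p in photos if model.predict(p) > 0.5]

photos = ["site1_no_hat.jpg", "site2_ok.jpg", "site3_ok.jpg"]
print(review_photos(photos, DummyHardHatModel()))  # -> ['site1_no_hat.jpg']
```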

At the end of the day, a safer work environment benefits the entire workforce.

Hand in hand with the above example is one of the most wonderful aspects of machine learning: its ability to predict dangers before they occur. For instance, using predictive analytics, machine learning can help you identify hazards, quantify their impact, and reduce or avoid them.

Machine learning requires enormous sets of data to be effective and accurate. Lack of sufficient data is what's currently preventing many small and medium-size construction firms from using this technology. Increasing the amount of data available and integrating it will help the entire sector progress to a better, more efficient, and more productive future, creating a critical mass of construction firms that can benefit from machine learning, especially if technological systems are integrated.

But lack of integration is one of the current hurdles construction faces, because even if you have a large amount of digitized data available, unless technology platforms are adequately integrated, data will remain separated. That's the case now in the construction industry overall, and also within many building companies, which use multiple unintegrated platforms within their business. This will need to be addressed over time.

However, this is a challenge many businesses in a wide range of industries face, and is therefore a problem best solved collectively.

Machine learning and AI in construction share an intriguing future, starting right now. But while machine learning is expected to have an impact on the future of construction, this doesn't mean machines and technology will take away human jobs.

Construction is and always will be a human endeavor. To win the future, we need our workers' talents, competence, and inventiveness to remain; we just need to optimize them. Machine learning can be used as yet another instrument to showcase our industry's expertise and growth.

George J. Newton is a business development and technology writer, blogger, and consultant at Write my Essay and Thesis writing service. He also writes for Nextcoursework.com. George loves exploring new ideas and seeing where the human race is heading.

Here is the original post:
Machine Learning in the Construction Industry | Pro - Pro Builder

Can Machine Learning be Used to Improve Mental Health? – Analytics Insight

We explore how leveraging machine learning helps improve mental health in the digital world

Machine learning (ML) is a type of artificial intelligence. ML algorithms are utilized in a wide range of applications, including medicine, traffic prediction, object recommendation, image recognition and speech recognition, where creating traditional algorithms to do the required tasks is difficult. ML makes such tasks easier to conduct.

Nowadays, artificial intelligence (AI) and machine learning (ML) technologies are being used to increase our understanding of mental health issues and to aid mental health clinicians in making better therapeutic decisions as data about an individual's mental health status becomes more readily available.

We use machine learning in our daily lives even without knowing it, through services such as Google Maps, Google Assistant and Alexa. ML is the study of computer algorithms that can learn automatically from large amounts of data and experience. These algorithms create a model based on training data (input data) to make predictions or judgments without having to be specifically programmed to do so. A machine can become fairly adept at executing tasks on its own, sparing developers from writing a manual algorithm for each specific task. It can also assist in identifying relevant patterns that people would not have been able to uncover as quickly on their own.

Machine learning is being used by neuroscientists and doctors all over the world to build treatment and therapeutic strategies and to identify some of the important markers for mental health issues before they arise. One of the advantages is that machine learning can assist clinicians in predicting who is at risk for a specific condition.

Assembling data for mental health specialists can now be done easily, so they can do their jobs better, since there is a massive amount of data available. What makes machine learning so useful now is that interpreting diagnoses was previously reliant on group averages and population statistics; thanks to machine learning, clinicians can customize their care.

Machine learning is assisting in the transformation of mental health in two major ways:

When people are diagnosed with a mental disorder today, they must go through a process of trial and error to find the correct pharmaceutical dosage and treatment plan. This trial and error should not have to happen, but the truth is that each patient's symptoms for a mental health illness like depression will differ; the symptoms of one patient may not match those of another.

A biomarker is a measurable indicator such as blood cholesterol, which is a biomarker for coronary heart disease. Similar to the physical biomarkers present in the human body, there are behavioral biomarkers for mental illnesses, such as the feelings of hopelessness and despair that mark depression. ML algorithms could aid mental health providers in determining whether patients are at high risk of acquiring a specific mental health illness by identifying crucial behavioral biomarkers. Additionally, the algorithms may aid in monitoring a treatment plan's effectiveness.
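
A hypothetical sketch of that idea, with invented behavioral features standing in for clinically validated biomarkers:

```python
# Hypothetical risk scorer built on behavioral biomarkers. The features
# (sleep hours, weekly social contacts, hopelessness score) and all data
# are invented; real biomarkers would need clinical validation.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
X = rng.normal([7.0, 10.0, 2.0], [1.5, 4.0, 2.0], size=(1000, 3))
y = (X[:, 2] - 0.3 * X[:, 1] + rng.normal(0, 1, 1000) > 1).astype(int)

clf = LogisticRegression().fit(X, y)
new_patient = [[5.0, 2.0, 7.0]]               # poor sleep, isolated, high score
print(f"Estimated risk: {clf.predict_proba(new_patient)[0, 1]:.0%}")
```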

It all boils down to each patients biology, triggers, and responses to stress and illnesses like depression. Many of the symptoms of mental health problems overlap, and while some of the important markers for mental health disorders are well-known, a treatment plan based on trial and error is not an option. Psychiatrists and mental health professionals can use machine learning algorithms to discover sub-types of various disorders and build better-tailored treatment strategies and medication dosages.

It's critical to note that persons with particular disorders, such as panic disorder, psychosis and manic states, are more susceptible to crises. Patients who have been diagnosed with chronic mental illnesses have their disorders monitored to help them get through their daily lives, but patients with certain illnesses, like schizophrenia and bipolar disorder, have a higher probability of experiencing a crisis. Mental health experts can reduce the likelihood of patients experiencing a crisis through the use of ML algorithms. To detect whether a patient is about to have an episode, machine learning algorithms can use a combination of self-provided data and passive data from their smartphones or social media. There are several clear indicators that a new episode is on the way, and these crises can be predicted if a pattern of stress, isolation, or exposure to triggers can be identified. Every one of us has our own set of triggers and coping methods, and treatment plans that examine a patient's tendencies and intervene before an episode occurs can be extremely beneficial.
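
One way such a pattern might be caught, sketched here with invented passive data (daily outgoing message counts) and an arbitrary alert rule:

```python
# Illustration only: spot sudden social withdrawal in passive smartphone
# data (daily outgoing messages). The alert rule is an invented heuristic,
# not a clinically validated trigger.
import numpy as np

rng = np.random.default_rng(5)
daily_messages = np.concatenate([
    rng.poisson(20, 25),                      # typical weeks
    rng.poisson(5, 5),                        # sudden withdrawal
])

baseline = daily_messages[:21].mean()
recent = daily_messages[-5:].mean()
if recent < 0.5 * baseline:
    print(f"Alert: messaging fell from ~{baseline:.0f}/day to ~{recent:.0f}/day; "
          "flag for a clinician check-in.")
```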

Between the stigma surrounding mental health care and the shortage of resources, access to mental health services is already difficult, and marginalized and minority communities face even greater barriers. This is due to a combination of financial constraints, a lack of education on the necessity of mental health care, and the topic's underlying stigma.

Data science and machine learning are fantastic tools for existing physicians, psychiatrists, and therapists to use to better assist their patients, and it's heartening that individuals are working to develop technology solutions to combat these illnesses. But it isn't quite enough. We should feel at ease discussing mental health in the same way that we would discuss physical health. Now, more than ever, we must make progress in normalizing this conversation.


Read more:
Can Machine Learning be Used to Improve Mental Health? - Analytics Insight

TurbineOne Awarded Air Force Contract to Deploy New Machine Learning Capability to Frontlines – Business Wire

SAN FRANCISCO--(BUSINESS WIRE)--TurbineOne, the frontline perception company, was awarded a Small Business Innovation Research (SBIR) contract to advance its machine learning capabilities and deploy its software with the United States Air Force. The specific offices within the Air Force that made the SBIR award to TurbineOne are the Air Force Research Laboratory (AFRL) and AFWERX.

SBIR programs are highly competitive programs that encourage domestic small businesses to engage in Federal Research and Development. Through a competitive awards-based program, SBIRs enable small businesses to explore their technological potential and provide the incentive to profit from its commercialization. By including qualified small businesses in the nation's R&D arena, high-tech innovation is stimulated, and the United States gains entrepreneurial spirit as it meets its specific research and development needs.

TurbineOne and AFSOC have partnered with AFWERX to usher in a new era of unprecedented situational awareness. The specific technology being developed is AutoML, a feature within TurbineOne's Frontline Perception System. It uniquely enables Operators to make changes to machine-learning models in the field without having to code and without an internet connection. These newly tuned, or newly created, models can be immediately deployed to cameras, sensors, autonomous vehicles, and drones at the tactical edge to strengthen situational awareness, helping to keep warfighters and civilians safe.

"U.S. warfighters do not have machine learning in their deployed kits, but it is critically valuable when configured for military missions," according to Ian Kalin, TurbineOne's CEO. "AutoML is a revolutionary software technology that will salvage years of investments in machine learning by enabling Operators to synchronize real-world data with the training data used to create the original algorithms."

TurbineOne was founded by Ian Kalin and Matt Amacker. Kalin served in the U.S. Navy as a Counter Terrorism Officer after witnessing the attack on the Pentagon on September 11th, and he later served as the first Chief Data Officer for the U.S. Department of Commerce. Amacker has been awarded over 110 patents and was Head of the Applied R&D Lab at Google, a Principal Engineer at Amazon, and Head of Car AI for the Toyota Research Institute. Together, Amacker and Kalin realized that people serving in dangerous frontline environments do not have machine learning (ML) capabilities readily available; TurbineOne was created to address that national security challenge.

AFRL and AFWERX have partnered to streamline the Small Business Innovation Research process in an attempt to speed up the experience, broaden the pool of potential applicants and decrease bureaucratic overhead. Beginning in SBIR 18.2, and now in SBIR 21.1, the Air Force has begun offering 'The Open Topic' SBIR/STTR program that is faster, leaner and open to a broader range of innovations. The Press Release authority is the Air Force Special Operations Command Public Affairs (PA) office.

TurbineOne's contract is a Phase II type, which generally authorizes awards of up to $750,000. TurbineOne plans to deliver to its customer and end users within one year of the contract award.

About TurbineOne

TurbineOne was created to help public sector heroes perform even more effectively with the right technologies. We leverage Machine Learning to provide frontline perception that empowers first-responders and warfighters with greater situational awareness. TurbineOne currently works with the Department of Defense as well as leading commercial companies like Siemens. The company is based in San Francisco. Please visit us at https://www.turbineone.com for more information.


Originally posted here:
TurbineOne Awarded Air Force Contract to Deploy New Machine Learning Capability to Frontlines - Business Wire

Apple Car Will Leverage Machine Learning to Make Driving Decisions as Fast as Possible – iDrop News

Apple's Car, when released, will make history as one of the first consumer vehicles to lack a steering wheel, and now we know about even more new features the Apple Car will bring.

The Apple Car will utilize machine learning because current automotive processors are not fast enough to make key driving decisions autonomously without ML. Machine learning in the vehicle was expected, since the fruit company's AI chief, John Giannandrea, is now in charge of the project. Apple wants decisions at the wheel to be made as fast as they possibly can for the consumer's sake: even a decision about a lane change can tax the processor in current automobiles.

In the patent, Apple proposes using the technology in certain driving states, such as when the vehicle is traveling a largely empty, straight highway with no turns possible for several kilometers or miles.

In such states, the patent notes, "the number of actions to be evaluated may be relatively small; in other states, as when the vehicle approaches a crowded intersection, the number of actions may be much larger."

If Apple chooses to use this technology, the Apple Car would have to determine the current state of the environment around the vehicle to make decisions that, in a regular car, would take much longer.

Then, to finish it off, the car has to figure out the set of feasible actions that can be taken. In this respect it works like a human brain: before you do something, a range of outcomes or options plays out in your mind. ML is much the same; just as a human learns through experience over time, a machine needs to learn in order to eventually deliver an optimal experience.
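
A rough sketch of that state-dependent action space, with driving states and actions invented for illustration (the patent does not publish its actual action lists):

```python
# Invented driving states and actions; the patent's actual lists are not
# public. The point is that simple states mean a smaller, faster search.
FEASIBLE_ACTIONS = {
    "empty_straight_highway": ["maintain_speed", "adjust_speed"],
    "crowded_intersection": [
        "stop", "creep_forward", "turn_left", "turn_right",
        "yield_to_pedestrian", "proceed_straight",
    ],
}

def actions_to_evaluate(state: str) -> list[str]:
    """Return the candidate actions worth evaluating in this driving state."""
    return FEASIBLE_ACTIONS.get(state, ["stop"])   # fail safe for unknown states

print(actions_to_evaluate("empty_straight_highway"))  # small, fast search
print(actions_to_evaluate("crowded_intersection"))    # larger search space
```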

Apple has faced some setbacks on the Apple Car project within the last few months, including the departure of employees and key executives, but the project is still ongoing, with full production expected in 2024 and a full consumer release in 2025.


Follow this link:
Apple Car Will Leverage Machine Learning to Make Driving Decisions as Fast as Possible - iDrop News

Learning to improve chemical reactions with artificial intelligence – EurekAlert

Image: INL researchers perform experiments using the Temporal Analysis of Products (TAP) reactor system.

Credit: Idaho National Laboratory

If you follow the directions in a cake recipe, you expect to end up with a nice fluffy cake. In Idaho Falls, though, the elevation can affect these results. When baked goods don't turn out as expected, the troubleshooting begins. This happens in chemistry, too. Chemists must be able to account for how subtle changes or additions may affect the outcome for better or worse.

Chemists make their version of recipes, known as reactions, to create specific materials. These materials are essential ingredients in an array of products found in healthcare, farming, vehicles and other everyday products from diapers to diesel. When chemists develop new materials, they rely on information from previous experiments and predictions based on prior knowledge of how different starting materials interact with others and behave under specific conditions. There are a lot of assumptions, guesswork and experimentation in designing reactions using traditional methods. New computational methods like machine learning can help scientists better understand complex processes like chemical reactions. While it can be challenging for humans to pick out patterns hidden within the data from many different experiments, computers excel at this task.

Machine learning is an advanced computational tool where programmers give computers lots of data and minimal instructions about how to interpret it. Instead of incorporating human bias into the analysis, the computer is only instructed to pull out what it finds to be important from the data. This could be an image of a cat (if the input is all the photos on the internet) or information about how a chemical reaction proceeds through a series of steps, as is the case for a set of machine learning experiments that are ongoing at Idaho National Laboratory.

At the lab, researchers working with the innovative Temporal Analysis of Products (TAP) reactor system are trying to improve understanding of chemical reactions by studying the role of catalysts, which are components that can be added to a mixture of chemicals to alter the reaction process. Often catalysts speed up the reaction, but they can do other things, too. In baking and brewing, enzymes act as catalysts to speed up fermentation and break down sugars in wheat (glucose) into alcohol and carbon dioxide, which creates the bubbles that make bread rise and beer foam.

In the laboratory, perfecting a new catalyst can be expensive, time-consuming and even dangerous. According to INL researcher Ross Kunz, "Understanding how and why a specific catalyst behaves in a reaction is the holy grail of reaction chemistry." To help find it, scientists are combining machine learning with a wealth of new sensor data from the TAP reactor system.

The TAP reactor system uses an array of microsensors to examine the different components of a reaction in real time. For the simplest catalytic reaction, the system captures 8 unique measurements in each of 5,000 timepoints that make up the experiment. Assembling the timepoints into a single data set provides 165,000 measurements for one experiment on a very simple catalyst. Scientists then use the data to predict what is happening in the reaction at a specific time and how different reaction steps work together in a larger chemical reaction network. Traditional analysis methods can barely scratch the surface of such a large quantity of data for a simple catalyst, let alone the many more measurements that are produced by a complex one.
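
A hedged numpy sketch of how such readings might be assembled for analysis; the array shapes are illustrative, with an invented pulse count chosen to give a dataset of roughly the scale cited above:

```python
# Illustrative shapes only: gather per-timepoint sensor readings into one
# array for downstream learning. The pulse count is invented to land near
# the dataset scale quoted above.
import numpy as np

rng = np.random.default_rng(6)
n_timepoints, n_sensors = 5_000, 8
# One pulse response: rows are timepoints, columns are measured quantities.
pulse = rng.normal(size=(n_timepoints, n_sensors))

# An experiment stacks several pulses into (pulses, timepoints, sensors).
experiment = np.stack([pulse for _ in range(4)])
print(experiment.shape, "->", experiment.size, "raw measurements")  # 160,000
```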

Machine learning methods can take the TAP data analysis further. Using a type of machine learning called explainable artificial intelligence, or AI, the team can educate the computer about known properties of the reaction's starting materials and the physics that govern these types of reactions, a process called training. The computer can apply this training and the patterns that it detects in the experimental data to better describe the conditions in a reaction across time. The team hopes that the explainable AI method will produce a description of the reaction that can be used to accurately model the processes that occur during the TAP experiment.

In most AI experiments, a computer is given almost no training on the physics and simply detects patterns in the data based upon what it can identify, similar to how a baby might react to seeing something completely new. By contrast, the value of explainable AI lies in the fact that humans can understand the assumptions and information that lead to the computer's conclusions. This human-level understanding can make it easier for scientists to verify predictions and detect flaws and biases in the reaction description produced by explainable AI.

Implementing explainable AI is not as simple or straightforward as it might sound. With support from the Department of Energy's Advanced Manufacturing Office, the INL team has spent two years preparing the TAP data for machine learning, developing and implementing the machine learning program, and validating the results for a common catalyst in a simple reaction that occurs in the car you drive every day. This reaction, the transformation of carbon monoxide into carbon dioxide, occurs in a car's catalytic converter and relies on platinum as the catalyst. Since this reaction is well studied, researchers can check how well the results of the explainable AI experiments match known observations.

In April 2021, the INL team published their results validating the explainable AI method with the platinum catalyst in the article "Data driven reaction mechanism estimation via transient kinetics and machine learning" in Chemical Engineering Journal. Now that the team has validated the approach, they are examining TAP data from more complex industrial catalysts used in the manufacture of small molecules like ethylene, propylene and ammonia. They are also working with collaborators at Georgia Institute of Technology to apply the mathematical models that result from the machine learning experiments to computer simulations called digital twins. This type of simulation allows the scientists to predict what will happen if they change an aspect of the reaction. When a digital twin is based on a very accurate model of a reaction, researchers can be confident in its predictions.

By giving the digital twin the task of simulating a modification to a reaction or a new type of catalyst, researchers can avoid doing physical experiments for modifications that are likely to lead to poor results or unsafe conditions. Instead, the digital twin simulation can save time and money by testing thousands of conditions, while researchers test only a handful of the most promising conditions in the physical laboratory.

Plus, this machine learning approach can produce newer and more accurate models for each new catalyst and reaction condition tested with the TAP reactor system. In turn, applying these models to digital twin simulations gives researchers the predictive power to pick the best catalysts and conditions to test next in the TAP reactor. As a result, each round of testing, model development and simulation produces a greater understanding of how a reaction works and how to improve it.
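
The screening loop can be sketched as follows; the reaction model here is an invented stand-in, not INL's fitted kinetics:

```python
# Invented stand-in for a model fitted to TAP data; the loop, not the
# chemistry, is the point: score thousands of simulated conditions, then
# send only the most promising few to physical experiments.
import numpy as np

rng = np.random.default_rng(7)

def predicted_conversion(temp_K: float, pressure_atm: float) -> float:
    """Toy surrogate model; a real digital twin would use fitted kinetics."""
    return -((temp_K - 600) / 100) ** 2 - (pressure_atm - 2.0) ** 2

candidates = np.column_stack([
    rng.uniform(400, 800, 10_000),            # candidate temperatures (K)
    rng.uniform(0.5, 5.0, 10_000),            # candidate pressures (atm)
])
scores = [predicted_conversion(t, p) for t, p in candidates]
best = candidates[np.argsort(scores)[-3:]]    # top 3 go to the lab
print("Conditions worth physical testing:\n", best)
```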

"These tools are the foundation of a new paradigm in catalyst science, but they also pave the way for radical new approaches in chemical manufacturing," said Rebecca Fushimi, who leads the project team.

About Idaho National Laboratory

Battelle Energy Alliance manages INL for the U.S. Department of Energy's Office of Nuclear Energy. INL is the nation's center for nuclear energy research and development, and also performs research in each of DOE's strategic goal areas: energy, national security, science and the environment. For more information, visit www.inl.gov. Follow us on social media: Twitter, Facebook, Instagram and LinkedIn.

Reference: "Data driven reaction mechanism estimation via transient kinetics and machine learning," Chemical Engineering Journal, 18 April 2021.


Read the original here:
Learning to improve chemical reactions with artificial intelligence - EurekAlert

Machine learning used to make fruits and vegetables more delicious – hortidaily.com

According to some, produce sold in the grocery store often tastes like cardboard. For those that agree, there are several reasons for this. Most of them stem from the fact that tastiness is far down on the list of what the food industry encourages plant breeders to prioritize when developing new produce varieties. Then again, when they do want to focus on taste, breeders don't have good tools for quickly sampling the fruit from thousands of cultivars.

Now, in a surprising new paper, researchers at the University of Florida describe a new method for "tasting" produce based on its chemical profile. They also stumbled on a big surprise. For more than a century, breeders have focused on sweetness and sourness when they tried to develop tastier cultivars. The new research shows that the tried-and-true approach ignores roughly half of what makes a tasty fruit or veggie so delicious.

Agricultural scientist Patricio Muñoz, one of the paper's co-authors, has stated that his team determined that in blueberries, for example, "only 40 percent [of how well people like a fruit] is explained by sugar and acid. The rest is explained by chemicals called volatile organic compounds that we perceive with receptors in our noses, not our mouths."

That finding could change the future of agriculture. The researchers behind this study focused on dozens of varieties of tomatoes and blueberries, including commercial cultivars sold in supermarkets, heirloom varieties more likely to be found at farmers markets and farm-to-table restaurants, and newly developed strains that recently graduated from breeding programs.

Source: interestingengineering.com

Photo source: Dreamstime.com

View post:
Machine learning used to make fruits and vegetables more delicious - hortidaily.com

How Telecom Companies Can Leverage Machine Learning To Boost Their Profits – Forbes


The number of smartphone users across the world has skyrocketed over the last decade and promises to keep doing so. Additionally, most business functions can now be executed on mobile devices. However, despite the mobile surge, telecom operators around the world are still not that profitable, with average net profit margins hovering around the 17% mark. The main reasons for the middling profit rates are the high number of market rivals vying for the same customer base and the high overhead expenses associated with the sector. Communication Service Providers (CSPs) need to become more data-driven to reduce such costs and, automatically, improve their profit margins. Increasing the involvement of AI in telecom operations enables telecom companies to make this switch from rigid, infrastructure-driven operations to a data-driven approach seamlessly.

The inclusion of AI in telecom functional areas positively impacts the bottom line of CSPs in several ways. Businesses can use specific capabilities, avatars or applications of machine learning and AI for this purpose.

Mobile networks are one of the prime components of the ever-expanding internet community. As stated earlier, a large number of internet users and business operations have gone mobile in recent times. Additionally, the emergence of 5G and edge applications, and the impending arrival of the metaverse, will simply increase the need for high-performance telecom networks. It is very likely that the standard automation tech and personnel will be overwhelmed by the relentless pressure of high-speed network connectivity and mobile calls.

The use of AI in telecom operations can transform an underperforming mobile network into a self-optimizing network (SON). Telecom businesses can monitor network equipment and anticipate equipment failure with AI-powered predictive analysis. Additionally, AI-based tools allow CSPs to keep network quality consistently high by monitoring key performance indicators such as traffic on a zone-to-zone basis. Apart from monitoring the performance of equipment, machine learning algorithms can also continually run pattern recognition while scanning network data to detect anomalies. Then, AI-based systems can either perform remedial actions or notify the network administrator and engineers in the region where the anomaly was detected. This enables telecom companies to fix network issues at source before they adversely impact customers.
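
A minimal sketch of that anomaly-detection step, with invented KPI features standing in for real zone-level network data:

```python
# Minimal sketch: train an anomaly detector on normal zone-level KPIs, then
# flag live readings that deviate. Features and thresholds are invented.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(8)
# Per zone-hour: traffic load, dropped-call rate, latency in ms (invented)
normal_kpis = rng.normal([0.6, 0.01, 30.0], [0.1, 0.005, 5.0], size=(2000, 3))
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_kpis)

live = np.array([[0.62, 0.012, 31.0],         # healthy zone
                 [0.95, 0.080, 90.0]])        # congested, failing zone
for kpis, verdict in zip(live, detector.predict(live)):
    if verdict == -1:                          # -1 marks an anomaly
        print(f"Anomaly in zone KPIs {kpis}: notify network engineers")
```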

Network security is another area of focus for telecom operators. Of late, the rising security issues in telecom networks have been a point of concern for CSPs globally. AI-based data security tools allow telecom companies to constantly monitor the cyber health of their networks. Machine learning algorithms perform analysis of global data networks and past security incidents to make key predictions of existing network vulnerabilities. In other words, AI-based network security tools enable telecom businesses to pre-empt future security complications and proactively take preventive measures to deal with them.

Ultimately, AI improves telecom networks in multiple ways. By improving the performance, anomaly detection and security of CSP networks, machine learning algorithms can enhance the user experience for telecom company clients. This will result in a growth of such companies' customer bases in the long term and, by extension, an increase in profits.


Europol classifies the telecom sector as particularly vulnerable to fraud. Telecom fraud involves the abuse of telecommunications systems such as mobile phones and tablets by criminals to siphon money off CSPs. As per a recent study, telecom fraud accounted for losses of US$40.1 billion, approximately 1.88% of the total revenue of telecom operators. One of the common types of telecom fraud is International Revenue Sharing Fraud (IRSF). IRSF involves criminals linking up with International Premium Rate Number (IPRN) providers to illegally acquire money from telecom companies by using bots to make an absurdly high number of long international calls. Such calls are difficult to trace. Additionally, telecom companies cannot bill clients for such premium calls as the connections are fraudulent, so telecom operators end up bearing the losses, while the IPRNs and criminals share the spoils between themselves. Apart from IRSF, vishing (a portmanteau of "voice" and "phishing") is a way in which malicious entities dupe clients of telecom companies to extract money and data. The involvement of AI in telecom operations enables CSPs to detect and eliminate these kinds of fraud.

Machine learning algorithms assist telecom network engineers with detecting instances of illegal access, fake caller profiles and cloning. To achieve this, the algorithms perform behavioral monitoring of the global telecom networks of CSPs, with network traffic along such networks closely monitored. The pattern recognition capabilities of AI algorithms come into play again, enabling network administrators to identify contentious scenarios such as several calls being made from a fraudulent number, or blank calls (a general indicator of vishing) being repeatedly made from questionable sources. One of the more prominent examples of telecom companies using data analytics for fraud detection and prevention is Vodafone's partnership with Argyle Data. The data science firm analyzes the network traffic of the telecom giant for intelligent, data-driven fraud management.
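
The IRSF signature described above lends itself to a simple screen. The thresholds and call records below are invented for illustration:

```python
# Invented call records and thresholds: flag numbers making an unusually
# high count of long international calls, the IRSF signature noted above.
from collections import defaultdict

calls = [  # (caller, destination, duration_minutes)
    ("555-0001", "intl", 45), ("555-0001", "intl", 52), ("555-0001", "intl", 61),
    ("555-0002", "domestic", 3), ("555-0001", "intl", 48), ("555-0002", "intl", 5),
]

long_intl_counts = defaultdict(int)
for caller, dest, minutes in calls:
    if dest == "intl" and minutes >= 30:
        long_intl_counts[caller] += 1

suspects = [c for c, n in long_intl_counts.items() if n >= 3]
print("Review for IRSF:", suspects)           # -> ['555-0001']
```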

Detecting and eliminating telecom fraud are major steps towards increasing the profit margins of CSPs. As you can see, the role of AI in telecom operations is significant for achieving this objective.

To reliably serve millions of clients, telecom companies need a massive workforce that can handle their backend operations efficiently on a daily basis. Dealing with such a large volume of customers creates many opportunities for human error.

Telecom companies can employ cognitive computing, a field that draws on Natural Language Processing (NLP), Robotic Process Automation (RPA) and rule engines, to automate rule-based processes such as sending marketing emails, autocompleting e-forms, recording data, and carrying out other tasks that replicate human actions. The use of AI in telecom operations brings greater accuracy to back-office operations. As per a study conducted by Deloitte, several executives in the telecom, media and tech industry felt that the use of cognitive computing for backend operations brought substantial and transformative benefits to their respective businesses.

Customer sentiment analysis involves a set of data classification and analysis tasks carried out to understand the pulse of customers. This allows telecom companies to evaluate whether their clients like or dislike their services based on raw emotions. Marketers can use NLP and AI to sense the "mood" of their customers from texts, emails or social media posts bearing a telecom company's name. Aspect-based sentiment analytics highlight the exact service areas in which customers have problems. For example, if a customer is upset about the number of calls getting dropped regularly and writes a long and incoherent email to a telco's customer service team about it, the machine learning algorithms employed for sentiment analysis can still autonomously ascertain their mood (angry) and the problem (the call drop rate).
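
Production systems use trained NLP models, but the aspect-plus-mood output can be illustrated with a toy keyword pass:

```python
# Toy keyword pass standing in for a trained NLP model: recover both the
# mood and the service aspect from a complaint. Word lists are invented.
COMPLAINT = "I am furious. My calls keep dropping every single day."

NEGATIVE_WORDS = {"furious", "angry", "upset", "terrible"}
ASPECT_KEYWORDS = {
    "call drops": {"dropping", "dropped", "drops"},
    "billing": {"bill", "charge", "overcharged"},
}

words = set(COMPLAINT.lower().replace(".", "").split())
mood = "negative" if words & NEGATIVE_WORDS else "neutral/positive"
aspects = [a for a, kws in ASPECT_KEYWORDS.items() if words & kws]
print(f"mood={mood}, aspects={aspects}")      # mood=negative, aspects=['call drops']
```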

Apart from sentiment analysis, telecom businesses can hugely benefit from the growing emergence of chatbots and virtual assistants. Service requests for network set-ups, installation, troubleshooting and maintenance-based issues can be resolved through such machine learning-based tools and applications. Virtual assistants enable CRM teams in telecom companies to manage a large number of customers with ease. In this way, CSPs can manage customer service and sentiment analysis successfully.

Across the board, users generally rate the quality of telecom customer service as below satisfactory. Telecom users are constantly infuriated by long waits to reach a service executive, unanswered complaint emails and poor grievance handling by CSPs. Poor CRM does not bode well for telecom companies, as it damages their reputation and diminishes shareholder confidence. By implementing machine learning for CRM, telecom companies can address such issues efficiently.

Like businesses in any other sector, telecom companies need to boost their profits for long-term survival and diversification. As stated at the beginning, multiple factors thwart their chances of profit generation, and going down the data science route is one of the novel ways to overcome such challenges. By involving AI in telecom operations, CSPs can manage their data wisely and channel their resources towards maximizing revenues.

Despite the positives associated with AI, only a limited percentage of telecom businesses have incorporated the technology for profit maximization. Gradually, one can expect that percentage to rise.

More:
How Telecom Companies Can Leverage Machine Learning To Boost Their Profits - Forbes

We dont need boots on the ground to track Russias moves on Ukraine – Popular Science

Craig Nazareth is an assistant professor of practice in intelligence and information operations at the University of Arizona. This story was originally published on The Conversation.

The US has been warning for weeks about the possibility of Russia invading Ukraine, and threatening retaliation if it does. Just eight years after Russia's incursion into eastern Ukraine and invasion of Crimea, Russian forces are once again mobilizing along Ukraine's borders.

As the US and other NATO member governments monitor Russias activities and determine appropriate policy responses, the timely intelligence they rely on no longer comes solely from multimillion-dollar spy satellites and spies on the ground.

Social media, big data, smartphones and low-cost satellites have taken center stage, and scraping Twitter has become as important as anything else in the intelligence analyst toolkit. These technologies have also allowed news organizations and armchair sleuths to follow the action and contribute analysis.

Governments still carry out sensitive intelligence-gathering operations with the help of extensive resources like the US intelligence budget. But massive amounts of valuable information are publicly available, and not all of it is collected by governments. Satellites and drones are much cheaper than they were even a decade ago, allowing private companies to operate them, and nearly everyone has a smartphone with advanced photo and video capabilities.

As an intelligence and information operations scholar, I study how technology is producing massive amounts of intelligence data and helping sift out the valuable information.

Through information captured by commercial companies and individuals, the realities of Russia's military posturing are accessible to anyone via internet search or news feed. Commercial imaging companies are posting up-to-the-minute, geographically precise images of Russia's military forces. Several news agencies are regularly monitoring and reporting on the situation. TikTok users are posting video of Russian military equipment on rail cars allegedly on its way to augment forces already in position around Ukraine. And internet sleuths are tracking this flow of information.

This democratization of intelligence collection in most cases is a boon for intelligence professionals. Government analysts are filling the need for intelligence assessments using information sourced from across the internet instead of primarily relying on classified systems or expensive sensors high in the sky or arrayed on the planet.

However, sifting through terabytes of publicly available data for relevant information is difficult. Knowing that much of the data could be intentionally manipulated to deceive complicates the task.

Enter the practice of open-source intelligence. The U.S. director of national intelligence defines Open-Source Intelligence, or OSINT, as the collection, evaluation and analysis of publicly available information. The information sources include news reports, social media posts, YouTube videos and satellite imagery from commercial satellite operators.

OSINT communities and government agencies have developed best practices for OSINT, and there are numerous free tools. Analysts can use the tools to develop network charts of, for example, criminal organizations by scouring publicly available financial records for criminal activity.

Private investigators are using OSINT methods to support law enforcement, corporate and government needs. Armchair sleuths have used OSINT to expose corruption and criminal activity to authorities. In short, the majority of intelligence needs can be met through OSINT.

Even with OSINT best practices and tools, OSINT contributes to the information overload intelligence analysts have to contend with. The intelligence analyst is typically in a reactive mode trying to make sense of a constant stream of ambiguous raw data and information.

Machine learning, a set of techniques that allows computers to identify patterns in large amounts of data, is proving invaluable for processing OSINT information, particularly photos and videos. Computers are much faster at sifting through large datasets, so adopting machine learning tools and techniques to optimize the OSINT process is a necessity.

Identifying patterns makes it possible for computers to evaluate information for deception and credibility and predict future trends. For example, machine learning can be used to help determine whether information was produced by a human or by a bot or other computer program and whether a piece of data is authentic or fraudulent.
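
As a hedged sketch of the bot-versus-human case, with invented posting-behavior features and synthetic data:

```python
# Synthetic sketch: classify accounts as bot or human from posting-behavior
# features (posts/day, seconds between posts, duplicate-text fraction).
# All features and data are invented for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(9)
humans = np.column_stack([rng.poisson(5, 500), rng.normal(3600, 900, 500),
                          rng.uniform(0.0, 0.2, 500)])
bots = np.column_stack([rng.poisson(200, 500), rng.normal(30, 10, 500),
                        rng.uniform(0.6, 1.0, 500)])
X = np.vstack([humans, bots])
y = np.array([0] * 500 + [1] * 500)           # 0 = human, 1 = bot

clf = RandomForestClassifier(random_state=0).fit(X, y)
print(clf.predict([[180, 25, 0.9]]))          # high-volume, repetitive -> [1]
```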

And while machine learning is by no means a crystal ball, it can be used, if it's trained with the right data and has enough current information, to assess the probabilities of certain outcomes. No one is going to be able to use the combination of OSINT and machine learning to read Russian President Vladimir Putin's mind, but the tools could help analysts assess how, for example, a Russian invasion of Ukraine might play out.

Technology has produced a flood of intelligence data, but technology is also making it easier to extract meaningful information from the data to help human intelligence analysts put together the big picture.

More:
We dont need boots on the ground to track Russias moves on Ukraine - Popular Science