Category Archives: Artificial Intelligence
Artificial Intelligence is critical to organisations, but many unprepared – Workplace Insight
The State of Intelligent Enterprises report sets out to examine the current landscape, showing the challenges and the driving factors for businesses to become truly Intelligent Enterprises. Wipro surveyed 300 respondents in the UK and US across key industry sectors such as financial services, healthcare, technology, manufacturing, retail and consumer goods. The report argues that while collecting data is critical, it is the ability to combine that data with a host of technologies to leverage insights that creates an Intelligent Enterprise. Organisations that fast-track adoption of intelligent processes and technologies stand to gain an immediate competitive advantage over their counterparts.
Some of the key findings from the report are:
While 80 percent of organisations recognise the importance of being intelligent, only 17 percent would classify themselves as an Intelligent Enterprise. 98 percent of those surveyed believe that being an Intelligent Enterprise yields benefits, the most important being improved customer experience, faster business decisions and increased organisational agility. 91 percent of organisations feel there are data barriers to becoming an Intelligent Enterprise, with security, quality and seamless integration of utmost concern. 95 percent of business leaders surveyed see Artificial Intelligence as critical to being an Intelligent Enterprise, yet only 17 percent can currently leverage AI across the entire organisation. 74 percent of organisations consider investment in technology the most likely enabler of an Intelligent Enterprise; however, 42 percent think this must be complemented with efforts to re-skill the workforce.
Jayant Prabhu, Vice President & Head Data, Analytics & AI, Wipro Limited said, "Organisations now need new capabilities to navigate the current challenges. The report amplifies the opportunity to gain a first-mover advantage in becoming Intelligent. The ability to take productive decisions depends on an organisation's ability to generate accurate, fast and actionable intelligence. Successful organisations are those that quickly adapt to the new technology landscape to transform into an Intelligent Enterprise."
Image by Gerd Altmann
Continue reading here:
Artificial Intelligence is critical to organisations, but many unprepared - Workplace Insight
US DoD to Focus C4ISR Spending on Commercial IT Advances in Artificial Intelligence and Cloud Computing – PRNewswire
"C4ISR and IT industries are converging around artificial intelligence (AI), machine learning, data analysis, self-healing networks, and cloud computing," said Brad Curran, Aerospace & Defense Research Analyst at Frost & Sullivan. "Going forward, naval, airborne, and ground tactical networks are overly complex, making the networks too difficult to establish and defend. To resolve this problem, DoD requires integration and cybersecurity services from the defense industry."
Curran added: "Procurement will overtake research, development, test and evaluation (RDT&E) to take up the largest share of spending by 2024. It will primarily focus on manpack radio, fixed surveillance systems, naval IT networks, ship self-defense systems, anti-submarine warfare sensors, and deployable tactical networks. Further, the operations and maintenance department's spending will emphasize service-wide communications, global early warning sensors and networks, cybersecurity, weather systems, and software/digital technology pilot programs."
The steady growth of the C4ISR budget presents immense growth prospects for market participants.
Assessment of the US DoD C4ISR Market, Forecast to 2025 is the latest addition to Frost & Sullivan's Aerospace & Defense research and analyses available through the Frost & Sullivan Leadership Council, which helps organizations identify a continuous flow of growth opportunities to succeed in an unpredictable future.
About Frost & Sullivan
For over five decades, Frost & Sullivan has been world-renowned for its role in helping investors, corporate leaders and governments navigate economic changes and identify disruptive technologies, Mega Trends, new business models and companies to action, resulting in a continuous flow of growth opportunities that drive future success. Contact us: Start the discussion.
Assessment of the US DoD C4ISR Market, Forecast to 2025
Contact: Francesca Valente, Global Corporate Communications. E: [emailprotected] http://ww2.frost.com
SOURCE Frost & Sullivan
UC Riverside Wins Grant to Bring Artificial Intelligence to the Colorado River Basin – AG INFORMATION NETWORK OF THE WEST – AGInfo Ag Information…
With California Ag Today, I'm Tim Hammerich.
The University of California, Riverside recently won a $10 million grant to develop artificial intelligence to improve environmental and economic stability throughout the western U.S.
Elia Scudiero is a Research Agronomist at UC Riverside.
Scudiero: So this will bring together university personnel and ag tech companies that will provide training to serve the farming communities in California, Arizona, Colorado, and the Native American communities in the Colorado River Basin. So we really hope that this is well-received by the growers and that it can be useful to improve their current practices, so that we can then continue this program beyond the duration of the project.
Partnering with UC Riverside on this are Colorado State, Duke, University of Georgia, and the University of Arizona. Included in the program is an undergraduate Digital Agricultural Fellowship.
Scudiero: So we are going to pair these undergraduate students with a faculty advisor for over a year, creating a very tight relationship there. And these students will carry out independent research in the university lab. But at the same time, we will complement this type of experience by sending the students to industry internships with our partners in the ag tech industry.
Stay tuned for more information on this exciting project to bring more artificial intelligence to agriculture. The researchers plan to release a website in the coming year.
How the VA is using artificial intelligence to improve veterans’ mental health | TheHill – The Hill
Navy veteran Lee Becker knows how hard it can be to ask for help in the military.
"I remember when I was in the military, I had to talk to leaders [who] would chastise service members for getting medical support for mental health," said Becker, who served at the Navy's Bureau of Medicine and Surgery, providing care to Marines and Sailors serving in Iraq and Afghanistan.
So when he began his career at the Department of Veterans Affairs (VA) about a decade ago, he knew things needed to change. In 2017, the suicide rate for veterans was 1.5 times the rate for nonveteran adults, according to the 2019 National Veteran Suicide Prevention Annual Report, increasing the average number of veteran suicides per day to 16.8.
"The VA historically has always been in reactive mode, always caught by surprise," he said, citing the example of the lack of health care for female veterans, who are 2.2 times more likely to die by suicide than non-veteran women.
After an explosive report by the Washington Post in 2014 detailing tens of thousands of veterans waiting for care as VA employees were allegedly directed to manipulate records, some things have changed. This April, veteran trust in the VA reached 80 percent, up 19 percent since January 2017, according to the agency. Becker, the former chief of staff for the Veterans Experience Office, now works for Medallia, a customer experience management company, as the solutions principal for public sector and health care; he helped launch the Veterans Signals program in partnership with the VA.
The program utilizes artificial intelligence systems typically used in the customer experience industry to monitor responses based on tone and language and respond immediately to at-risk veterans. About 2,800 crisis alerts have been routed to VA offices, according to Medallia, providing early intervention for more than 1,400 veterans in need within minutes of being alerted.
"If they have the ability to harness this capability so they can sell more, why can't public service agencies have the ability to serve more?" Becker asked. "It opened the aperture, making sure we really targeted the care. We were getting insights that helped anticipate future problems. We were able to identify veterans who are in crisis and route that case directly to the veterans crisis line."
Through surveys, Medallia collects customer feedback for the VA that seeks to understand veterans as customers with other identities outside of their military service. One call came from an Asian American female veteran living in Idaho who was scared to leave her house due to racist stigma blaming Asian Americans for the coronavirus pandemic.
"I think the greatest tragedy is that I see a tsunami coming around mental health, and if we don't mitigate that by truly listening and anticipating the needs of the people, we're going to have an issue," Becker said.
The coronavirus pandemic has exacerbated existing inequities for the most vulnerable communities. The VA medical system has recorded more than 53,000 cases of COVID-19 among veterans in all 50 states, the District of Columbia and Puerto Rico, AARP reported, with more than 3,000 deaths, not including veterans who were not diagnosed at VA hospitals and medical centers.
Access to care is still an issue. A report released last week by the Department of Veterans Affairs Office of Inspector General revealed deficiencies in care, care coordination and facility response in the case of a patient who died by suicide after being discharged by the Memphis, Tenn., VA Medical Center. But Becker remains optimistic that he can make change from within the system.
"It has to start on the military side. We have to make sure that it's very clear it's ok not to be ok, if someone needs mental health support it's not weakness," he said.
And that support needs to carry through veterans' transitions to civilian life, Becker added.
"[The military is] a cocoon, you get fed, you have a job, you get issued clothes, he said. When you leave, how do we make sure that all of those needs are getting met?"
While he's optimistic, Becker is also a realist, and he knows there are still very real problems with the VA. But he says it's more an issue of capability than bad intentions.
"There's a few bad apples. I've supervised those bad apples, and I've had to get rid of those bad apples," Becker said. But he's also seen new leaders step up.
"It's a tale of two cities," he said. "We're seeing a set of leadership behaviors that are not conducive to the needs of what we're looking for, but we're seeing great leaders within the federal government who are career employees, and even some politicians."
Read the rest here:
How the VA is using artificial intelligence to improve veterans' mental health | TheHill - The Hill
Catalyst of change: Bringing artificial intelligence to the forefront – The Financial Express
Artificial Intelligence (AI) has been much talked about over the last few years. Several interpretations of the potential of AI and its outcomes have been shared by technologists and futurologists. With the focus on the customer, the possibilities range from predicting trends to recommending actions to prescribing solutions.
The potential for change due to AI applications is energised by several factors. The first is the concept of AI itself, which is not a new phenomenon. Researchers, cognitive specialists and hi-tech experts working with complex data for decades in domains such as space, medicine and astrophysics have used data to derive deep insights, predict trends and build futuristic models.
AI has now moved out of the realm of research labs into the commercial world and everyday life due to three key levers. Innovation and technology advancements in hardware, telecommunications and software have been the catalysts in bringing AI to the forefront and pushing beyond the frontiers of data and analytics.
What was once seen as a big breakthrough, the ability to analyse data as if-else-then scenarios, transitioned to machine learning, capable of dealing with hundreds of variables but mostly structured data sets. Handcrafted techniques using algorithms did find ways to convert unstructured data to structured data, but there are limits to the volumes of data that machine learning can handle.
With 80% of data being unstructured, and with the realisation that the real value of data analysis is possible only when both structured and unstructured data are synthesised, came deep learning, which is capable of handling thousands of factors and can draw inferences from tens of billions of data points comprising voice, image, video and queries each day. Techniques for determining patterns in unstructured data (multi-lingual text, multi-modal speech, vision) have been maturing, making recommendation engines more effective.
Another important factor aiding the rapid adoption of AI is the evolution of hardware. CPUs (central processing units) today are versatile and designed for handling sequential code, not for addressing massive parallel problems. This is where GPUs (graphical processing units), hitherto considered primarily for applications such as gaming, are now being deployed to address the needs of commercial establishments, governments and other domains dealing with gigantic volumes of data, supporting their needs for parallel processing in areas such as smart parking, retail analytics and intelligent traffic systems. Such compute-intensive functions, requiring massive problems to be broken up into smaller ones that can be parallelised, are finding efficient hardware and hosting options in the cloud.
Therefore, the key drivers of this major transition are the evolution of hardware and hosting on the cloud; sophisticated tools and software to capture, store and analyse data; and a variety of devices that keep us always connected and support the generation of humongous volumes of data. These dimensions, along with advances in telecommunications, will continue to evolve, making it possible for commercial establishments, governments and society to arrive at solutions that deliver superior experiences for the common man. Whether it is agriculture, health, decoding crimes, transportation or maintenance of law and order, we have already started seeing digital technologies at play, and the democratisation of AI will soon become a reality.
The writer is chairperson, Global Talent Track, a corporate training solutions company
Get live Stock Prices from BSE, NSE, US Market and latest NAV, portfolio of Mutual Funds, calculate your tax by Income Tax Calculator, know markets Top Gainers, Top Losers & Best Equity Funds. Like us on Facebook and follow us on Twitter.
Financial Express is now on Telegram. Click here to join our channel and stay updated with the latest Biz news and updates.
Read more:
Catalyst of change: Bringing artificial intelligence to the forefront - The Financial Express
3 Predictions For The Role Of Artificial Intelligence In Art And Design – Forbes
Christie's made headlines in 2018 when it became the first auction house to sell a painting created by AI. The painting, named "Portrait of Edmond de Belamy," ended up selling for a cool $432,500, but more importantly, it demonstrated that intelligent machines are now perfectly capable of creating artwork.
It was only a matter of time, I suppose. Thanks to AI, machines have been able to learn more and more human functions, including the ability to see (think facial recognition technology), speak and write (chatbots being a prime example). Learning to create is a logical step on from mastering the basic human abilities. But will intelligent machines really rival humans' remarkable capacity for creativity and design? To answer that question, here are my top three predictions for the role of AI in art and design.
1. Machines will be used to enhance human creativity (enhance being the key word)
Until we can fully understand the brain's creative thought processes, it's unlikely machines will learn to replicate them. As yet, there's still much we don't understand about human creativity: the inspired ideas that pop into our brains seemingly out of nowhere, the "eureka!" moments of clarity that stop us in our tracks. Much of that thought process remains a mystery, which makes it difficult to replicate the same creative spark in machines.
Typically, then, machines have to be told what to create before they can produce the desired end result. The AI painting that sold at auction? It was created by an algorithm that had been trained on 15,000 pre-20th century portraits, and was programmed to compare its own work with those paintings.
The takeaway from this is that AI will largely be used to enhance human creativity, not replicate or replace it, a process known as "co-creativity." As an example of AI improving the creative process, IBM's Watson AI platform was used to create the first-ever AI-generated movie trailer, for the horror film Morgan. Watson analyzed visuals, sound, and composition from hundreds of other horror movie trailers before selecting appropriate scenes from Morgan for human editors to compile into a trailer. This reduced a process that usually takes weeks down to one day.
2. AI could help to overcome the limits of human creativity
Humans may excel at making sophisticated decisions and pulling ideas seemingly out of thin air, but human creativity does have its limitations. Most notably, we're not great at producing a vast number of possible options and ideas to choose from. In fact, as a species, we tend to get overwhelmed and less decisive the more options we're faced with! This is a problem for creativity because, as American chemist Linus Pauling (the only person to have won two unshared Nobel Prizes) put it, "You can't have good ideas unless you have lots of ideas." This is where AI can be of huge benefit.
Intelligent machines have no problem coming up with infinite possible solutions and permutations, and then narrowing the field down to the most suitable options, the ones that best fit the human creative's vision. In this way, machines could help us come up with new creative solutions that we couldn't possibly have arrived at on our own.
For example, award-winning choreographer Wayne McGregor has collaborated with the Google Arts & Culture Lab to come up with new, AI-driven choreography. An AI algorithm was trained on thousands of hours of McGregor's videos, spanning 25 years of his career, and as a result, the program came up with 400,000 McGregor-like sequences. In McGregor's words, the tool "gives you all of these new possibilities you couldn't have imagined."
3. Generative design is one area to watch
Much like in the creative arts, the world of design will likely shift towards greater collaboration between humans and AI. This brings us to generative design, a cutting-edge field that uses intelligent software to enhance the work of human designers and engineers.
Very simply, the human designer inputs their design goals, specifications, and other requirements, and the software takes over to explore all possible designs that meet those criteria. Generative design could be utterly transformative for many industries, including architecture, construction, engineering, manufacturing, and consumer product design.
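The workflow just described, stating goals and constraints, then letting software explore the design space, can be sketched in a few lines. This is only a toy illustration: the "strength" rule, the dimensions, and random sampling standing in for a real solver are all invented for the example.

```python
import random

def generate_designs(n, seed=0):
    """Randomly sample candidate beam designs (width, height in cm)."""
    rng = random.Random(seed)
    return [(rng.uniform(1, 10), rng.uniform(1, 10)) for _ in range(n)]

def meets_spec(width, height, min_strength=50.0):
    """Toy strength rule: section modulus w*h^2/6 must exceed a threshold."""
    return width * height ** 2 / 6 >= min_strength

def generative_design(n=10_000):
    """Explore the design space, keep feasible designs, minimise material."""
    feasible = [(w, h) for w, h in generate_designs(n) if meets_spec(w, h)]
    # "Best" here means the least material, i.e. the smallest cross-section.
    return min(feasible, key=lambda d: d[0] * d[1])

best_w, best_h = generative_design()
print(best_w, best_h)  # the lightest sampled design that still meets the spec
```

Real generative-design tools use far more sophisticated solvers and physics models, but the shape of the loop, generate, filter by spec, optimise, is the same.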
In one exciting example of generative design, renowned designer Philippe Starck collaborated with software company Autodesk to create a new chair design. Starck and his team set out the overarching vision for the chair and fed the AI system questions like, "Do you know how we can rest our bodies using the least amount of material?" From there, the software came up with multiple suitable designs to choose from. The final design, an award-winning chair named "AI," debuted at Milan Design Week in 2019.
Machine co-creativity is just one of 25 technology trends that I believe will transform our society. Read more about these key trends, including plenty of real-world examples, in my new books, Tech Trends in Practice: The 25 Technologies That Are Driving The 4th Industrial Revolution and The Intelligence Revolution: Transforming Your Business With AI.
See the rest here:
3 Predictions For The Role Of Artificial Intelligence In Art And Design - Forbes
Artificial Intelligence: How realistic is the claim that AI will change our lives? – Bangkok Post
Artificial Intelligence (AI) stakes a claim on productivity, corporate dominance, and economic prosperity with Shakespearean drama. AI will change the way you work and spend your leisure time, and it puts a claim on your identity.
First, an AI primer.
Let's define intelligence, before we get onto the artificial kind. Intelligence is the ability to learn. Our senses absorb data about the world around us. We can take a few data points and make conceptual leaps. We see light, feel heat, and infer the notion of "summer."
Our expressive abilities provide feedback, i.e., our data outputs. Intelligence is built on data. When children play, they engage in endless feedback loops through which they learn.
Computers, too, are deemed intelligent if they can compute, conceptualise, see and speak. A particularly fruitful area of AI is getting machines to enjoy the same sensory experiences that we have. Machines can do this, but they require vast amounts of data. They do it by brute force, not cleverness. For example, they determine that an image contains a cat by breaking the pixel data into little steps, repeating until done.
Key point: What we do and what machines do is not so different, but AI is more about data and repetition than it is about reasoning. Machines figure things out mathematically, not visually.
AI is a suite of technologies (machines and programs) that have predictive power, and some degree of autonomous learning.
AI consists of three building blocks:
An algorithm is a set of rules to be followed when solving a problem. The speed and volume of data that can be fed into algorithms matter more than the "smartness" of the algorithms themselves.
Let's examine these three parts of the AI process:
The raw ingredient of intelligence is data. Data is learning potential. AI is mostly about creating value through data, and data becomes a core business asset when insights can be extracted. The more you have, the more you can do. Companies with a Big Data mind-set don't mind filtering through lots of low-value data; the power is in the aggregation of data.
Building quality datasets for input is critical too, so human effort must first be spent obtaining, preparing and cleaning data. The computer does the calculations and provides the answers, or output.
Conceptually, Machine Learning (ML) is the ability to learn a task without being explicitly programmed to do so. ML encompasses algorithms and techniques that are used in classification, regression, clustering or anomaly detection.
ML relies on feedback loops. The data is used to make a model, and then test how well that model fits the data. The model is revised to make it fit the data better, and repeated until the model cannot be improved anymore. Algorithms can be trained with past data to find patterns and make predictions.
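The fit-evaluate-revise loop described above can be shown in miniature. This sketch fits a line to synthetic "past data" by repeatedly measuring the error and nudging the parameters to reduce it (gradient descent); the data and pattern are invented for illustration.

```python
# Synthetic "past data" following a hidden pattern y = 2x + 1.
data = [(x, 2.0 * x + 1.0) for x in range(10)]

w, b, lr = 0.0, 0.0, 0.01          # initial model y = w*x + b, learning rate
for _ in range(2000):              # repeat until the model stops improving
    # Measure how badly the current model fits the data (gradient of the
    # mean squared error), then revise the parameters to fit better.
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 2), round(b, 2))    # converges towards the true w=2, b=1
```

Every pass through the loop is one feedback cycle: predict, compare against the data, revise. Real ML systems run the same cycle over millions of examples and parameters.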
Key point: AI expands the set of tools that we have to gain a better grasp of finding trends or structure in data, and make predictions. Machines can scale way beyond human capacity when data is plentiful.
Prediction is the core purpose of ML. For example, banks want to predict fraudulent transactions. Telecoms want to predict churn. Retailers want to predict customer preferences. AI-enabled businesses make their data assets a strategic differentiator.
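In the spirit of the fraud example above, the simplest form of prediction from past data is flagging transactions that deviate sharply from the norm. This is a toy z-score detector; the transaction amounts and the threshold are invented for illustration, and production fraud models are far richer.

```python
import statistics

def flag_anomalies(amounts, threshold=2.5):
    """Flag amounts sitting more than `threshold` sample standard
    deviations from the batch mean (a toy fraud signal)."""
    mean = statistics.mean(amounts)
    stdev = statistics.stdev(amounts)
    return [a for a in amounts if abs(a - mean) > threshold * stdev]

# Invented transaction history: nine ordinary card payments and one outlier.
history = [12.5, 9.9, 11.2, 10.7, 13.1, 9.5, 10.0, 11.8, 10.4, 950.0]
print(flag_anomalies(history))  # only the 950.0 transaction is flagged
```

Note the design caveat: with the sample standard deviation, a single extreme outlier in a small batch caps its own z-score, which is why the threshold here is 2.5 rather than the textbook 3.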
Prediction is not just about the future; it's about filling in knowledge gaps and reducing uncertainty. Prediction lets us generalise, an essential form of intelligence. Prediction and intelligence are tied at the hip.
Let's examine the wider changes unfolding.
AI increases our productivity. The question is how we distribute the resources. If AI-enhanced production only requires a few people, what does that mean for income distribution? All the uncertainties are on how the productivity benefits will be distributed, not how large they will be.
Caution:
ML is already pervasive in the internet. Will the democratisation of access brought on by the internet continue to favour global monopolies? Unprecedented economic power rests in a few companies (you can guess which ones) with global reach. Can the power of channelling our collective intelligence continue to be held by these companies that are positioned to influence our private interests with their economic interests?
Nobody knows if AI will produce more wealth or economic precariousness. Absent various regulatory measures, it is inevitable that it will increase inequality and create new social gaps.
Let's examine the impact on everyone.
As with all technology advancements, there will be changes in employment: the number of people employed, the nature of jobs and the satisfaction we derive from them. However, with AI, all classes of labour are under threat, including management. Professions involving analysis and decision-making will become the province of machines.
New positions will be created, but nobody really knows if new jobs will sufficiently replace former ones.
We will shift more to creative or empathetic pursuits. To the extent of income shortfall, should we be rewarded for contributing in our small ways to the collective intelligence? Universal basic income is one option, though it remains theoretical.
Our consumption of data (mobile phones, web-clicks, sensors) provides a digital trail that is fed into corporate and governmental computers. For governments, AI opens new doors to perform surveillance, predictive policing, and social shaming. For corporates, it's not clear whether surveillance capitalism, the commercialisation of your personal data, will be personalised to you, or for you. Will it direct you where they want you to go, rather than where you want to go?
How will your data be a measure of you?
The interesting angle emerging is whether we will be hackable. That's when the AI knows more about you than you know yourself. At that point, you become completely influenceable, because you can be made to think and react as directed by governments and corporates.
We do need artificial forms of intelligence because our prediction abilities are limited, especially when handling big data and multiple variables. But for all its stunning accomplishments, AI remains very specific. Learning machines are circumscribed to very narrow areas of learning. The DeepMind system that wins systematically at Go can't eat soup with a spoon or predict the next financial crisis.
Filtering and personalisation engines have the potential to both accommodate and exploit our interests. The degree of change will be propelled, and restrained, by new regulatory priorities. The law always lags behind technology, so expect the slings and arrows of our outrageous fortune.
Author: Greg Beatty, J.D., Business Development Consultant. For further information please contact gregfieldbeatty@gmail.com
Series Editor: Christopher F. Bruton, Executive Director, Dataconsult Ltd, chris@dataconsult.co.th. Dataconsult's Thailand Regional Forum provides seminars and extensive documentation to update business on future trends in Thailand and in the Mekong Region.
Read the original:
Artificial Intelligence: How realistic is the claim that AI will change our lives? - Bangkok Post
3 Ways Artificial Intelligence Is Transforming The Energy Industry – OilPrice.com
Back in 2017, Bill Gates penned a poignant online essay to graduating college students around the world in which he tapped artificial intelligence (AI), clean energy, and biosciences as the three fields he would spend his energies on if he could start all over again and wanted to make a big impact in the world today.
It turns out that the Microsoft co-founder was right on the money.
Three years down the line and deep in the throes of the worst pandemic in modern history, AI and renewable energy have emerged as some of the biggest megatrends of our time. On the one hand, AI is powering the fourth industrial revolution and is increasingly being viewed as a key strategy for mastering some of the greatest challenges of our time, including climate change and pollution. On the other hand, there is a widespread recognition that carbon-free technologies like renewable energy will play a critical role in combating climate change.
Consequently, stocks in the AI, robotics, and automation sectors as well as clean energy ETFs have lately become hot property.
From utilities employing AI and machine learning to predict power fluctuations and cost optimization to companies using IoT sensors for early fault detection and wildfire powerline/gear monitoring, here are real-life cases of how AI has continued to power an energy revolution even during the pandemic.
Top uses of AI in the energy sector
Source: Intellias
#1. Innowatts: Energy monitoring and management
The Covid-19 crisis has triggered an unprecedented decline in power consumption. Not only has overall consumption suffered, but there have also been significant shifts in power usage patterns, with sharp decreases by businesses and industries while domestic use has increased as more people work from home.
Houston, Texas-based Innowatts is a startup that has developed an automated toolkit for energy monitoring and management. The company's eUtility platform ingests data from more than 34 million smart energy meters across 21 million customers, including major U.S. utility companies such as Arizona Public Service Electric, Portland General Electric, Avangrid, Gexa Energy, WGL, and Mega Energy. Innowatts says its machine learning algorithms can analyze the data to forecast several critical data points, including short- and long-term loads, variances, weather sensitivity, and more.
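Innowatts' actual models are proprietary, but the simplest possible short-term load forecast, a moving average over recent smart-meter readings, shows the shape of the problem. The readings below are invented, and real ML forecasters are judged by how far they beat exactly this kind of naive baseline.

```python
def forecast_next_load(readings, window=4):
    """Naive short-term load forecast: the average of the last `window`
    smart-meter readings. Real systems replace this with trained ML models."""
    recent = readings[-window:]
    return sum(recent) / len(recent)

# Invented hourly consumption readings for one meter, in kWh.
hourly_kwh = [3.1, 2.8, 3.0, 3.4, 3.2, 3.6, 3.5, 3.3]
print(round(forecast_next_load(hourly_kwh), 2))  # → 3.4
```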
Innowatts estimates that without its machine learning models, utilities would have seen inaccuracies of 20% or more on their projections at the peak of the crisis, thus placing enormous strain on their operations and ultimately driving up costs for end-users.
#2. Google: Boosting the value of wind energy
A while back, we reported that proponents of nuclear energy were using the pandemic to highlight its strong points vis-a-vis the shortcomings of renewable energy sources. To wit, wind and solar are the least predictable and consistent among the major power sources, while nuclear and natural gas boast the highest capacity factors.
Well, one tech giant has figured out how to employ AI to iron out those kinks.
Three years ago, Google announced that it had reached 100% renewable energy for its global operations, including its data centers and offices. Today, Google is the largest corporate buyer of renewable power, with commitments totaling 2.6 gigawatts (2,600 megawatts) of wind and solar energy.
In 2017, Google teamed up with DeepMind, its sister company under Alphabet, to search for a solution to the highly intermittent nature of wind power. Using DeepMind's AI platform, Google deployed machine learning algorithms across 700 megawatts of wind power capacity in the central United States, enough to power a medium-sized city.
DeepMind says that by using a neural network trained on widely available weather forecasts and historical turbine data, it is now able to predict wind power output 36 hours ahead of actual generation. Consequently, this has boosted the value of Google's wind energy by roughly 20 percent.
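DeepMind's production system is a neural network; the sketch below substitutes a simple polynomial power curve fitted to made-up wind-speed and turbine-output pairs, just to illustrate the idea of mapping a day-ahead weather forecast to expected generation. Every number and the cubic fit are assumptions, not DeepMind's model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical training set: hourly wind-speed forecasts (m/s) paired with
# the turbine output (MW) observed later. Output follows a rough cubic
# power curve, capped at rated capacity, plus measurement noise.
wind = rng.uniform(3, 15, 500)
power = np.clip(0.06 * wind**3 * 10, 0, 120) / 10 + rng.normal(0, 2, wind.size)

# Fit a cubic polynomial: a crude stand-in for the trained network.
coef = np.polyfit(wind, power, 3)

# "36 hours ahead": feed the forecast wind speed into the fitted curve.
forecast_wind = 9.5
predicted_mw = np.polyval(coef, forecast_wind)
print(f"predicted output at {forecast_wind} m/s: {predicted_mw:.1f} MW")
```

The value of such a forecast is commercial as much as technical: a seller who can commit delivery 36 hours ahead earns more per megawatt-hour than one bidding into the spot market at the last minute.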
A similar model can be used by other wind farm operators to make smarter, faster and more data-driven optimizations of their power output to better meet customer demand.
Google's DeepMind uses trained neural networks to predict wind power output 36 hours ahead of actual generation
Source: DeepMind
#3. Wildfire powerline and gear monitoring

In June, California's biggest utility, Pacific Gas & Electric, found itself in deep trouble. The company pleaded guilty to charges stemming from the tragic 2018 wildfire that left 84 people dead and PG&E saddled with hefty penalties: $13.5 billion in compensation to people who lost homes and businesses, and another $2 billion fine from the California Public Utilities Commission for negligence.
It will be a long climb back to the top for the fallen giant after its stock crashed nearly 90% following the disaster despite the company emerging from bankruptcy in July.
Perhaps the loss of lives and livelihood could have been averted if PG&E had invested in some AI-powered early detection system.
One such system comes from VIA, a startup based in Somerville, Massachusetts. VIA says it has developed a blockchain-based app that can predict when vulnerable power transmission gear, such as transformers, might be at risk in a disaster. VIA's app makes better use of energy data sources, including smart meters and equipment inspections.
Another comparable product comes from Korean firm Alchera, which uses AI-based image recognition in combination with thermal and standard cameras to monitor power lines and substations in real time. The AI system is trained to watch the infrastructure for abnormal events such as falling trees, smoke, fire, and even intruders.
Beyond utilities, oil and gas producers have also been integrating AI into their operations.
By Alex Kimani for Oilprice.com
3 Ways Artificial Intelligence Is Transforming The Energy Industry - OilPrice.com
Toward a machine learning model that can reason about everyday actions – MIT News
The ability to reason abstractly about events as they unfold is a defining feature of human intelligence. We know instinctively that crying and writing are means of communicating, and that a panda falling from a tree and a plane landing are variations on descending.
Organizing the world into abstract categories does not come easily to computers, but in recent years researchers have inched closer by training machine learning models on words and images infused with structural information about the world, and how objects, animals, and actions relate. In a new study at the European Conference on Computer Vision this month, researchers unveiled a hybrid language-vision model that can compare and contrast a set of dynamic events captured on video to tease out the high-level concepts connecting them.
Their model did as well as or better than humans at two types of visual reasoning tasks: picking the video that conceptually best completes the set, and picking the video that doesn't fit. Shown videos of a dog barking and a man howling beside his dog, for example, the model completed the set by picking the crying baby from a set of five videos. Researchers replicated their results on two datasets for training AI systems in action recognition: MIT's Multi-Moments in Time and DeepMind's Kinetics.
We show that you can build abstraction into an AI system to perform ordinary visual reasoning tasks close to a human level, says the study's senior author Aude Oliva, a senior research scientist at MIT, co-director of the MIT Quest for Intelligence, and MIT director of the MIT-IBM Watson AI Lab. A model that can recognize abstract events will give more accurate, logical predictions and be more useful for decision-making.
As deep neural networks become expert at recognizing objects and actions in photos and video, researchers have set their sights on the next milestone: abstraction, and training models to reason about what they see. In one approach, researchers have merged the pattern-matching power of deep nets with the logic of symbolic programs to teach a model to interpret complex object relationships in a scene. Here, in another approach, researchers capitalize on the relationships embedded in the meanings of words to give their model visual reasoning power.
Language representations allow us to integrate contextual information learned from text databases into our visual models, says study co-author Mathew Monfort, a research scientist at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL). Words like running, lifting, and boxing share some common characteristics that make them more closely related to the concept of exercising, for example, than to driving.
Using WordNet, a database of word meanings, the researchers mapped the relation of each action-class label in Moments and Kinetics to the other labels in both datasets. Words like sculpting, carving, and cutting, for example, were connected to higher-level concepts like crafting, making art, and cooking. Now when the model recognizes an activity like sculpting, it can pick out conceptually similar activities in the dataset.
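The WordNet-style mapping described above can be illustrated with a toy hypernym table. The `HYPERNYMS` dictionary below is a hand-built miniature stand-in, not the actual Moments/Kinetics label graph or the real WordNet database; the hierarchy simply mirrors the examples in the text.

```python
# Miniature stand-in for WordNet: each action label points to its
# higher-level parent concept. Illustrative labels only.
HYPERNYMS = {
    "sculpting": "crafting",
    "carving": "crafting",
    "cutting": "crafting",
    "crafting": "making",
    "running": "exercising",
    "lifting": "exercising",
    "boxing": "exercising",
}

def shared_abstraction(labels):
    """Walk each label up the hierarchy; return concepts common to all."""
    def ancestors(label):
        seen = set()
        while label in HYPERNYMS:
            label = HYPERNYMS[label]
            seen.add(label)
        return seen
    common = ancestors(labels[0])
    for label in labels[1:]:
        common &= ancestors(label)
    return common

print(shared_abstraction(["sculpting", "carving", "cutting"]))
# prints the shared parent concepts: crafting and making
```

Once a model can recognize "sculpting" in a clip, a lookup like this lets it retrieve conceptually similar activities rather than only visually similar ones.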
This relational graph of abstract classes is used to train the model to perform two basic tasks. Given a set of videos, the model creates a numerical representation for each video that aligns with the word representations of the actions shown in the video. An abstraction module then combines the representations generated for each video in the set to create a new set representation that is used to identify the abstraction shared by all the videos in the set.
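A minimal sketch of that set representation, assuming random stand-in word vectors and a hypothetical `video_embedding` function in place of the trained vision model (the real model learns these alignments; here they are simulated):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy word embeddings for three action concepts. In the paper these come
# from language representations; here they are random illustrative vectors.
concepts = {name: rng.normal(size=5)
            for name in ["covering", "exercising", "descending"]}

def video_embedding(action):
    # Stand-in for the vision model: a video's embedding lands near the
    # word embedding of the action it shows, plus a little noise.
    return concepts[action] + rng.normal(0, 0.05, 5)

# Abstraction-module sketch: average the per-video embeddings into one
# set representation, then pick the closest concept by cosine similarity.
videos = [video_embedding("covering") for _ in range(3)]
set_repr = np.mean(videos, axis=0)

def nearest(vec):
    sims = {n: vec @ v / (np.linalg.norm(vec) * np.linalg.norm(v))
            for n, v in concepts.items()}
    return max(sims, key=sims.get)

print(nearest(set_repr))
```

Mean-pooling is the simplest possible aggregation; the paper's abstraction module is learned, but the principle of mapping a set of videos to a single point in the shared language-vision space is the same.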
To see how the model would do compared to humans, the researchers asked human subjects to perform the same set of visual reasoning tasks online. To their surprise, the model performed as well as humans in many scenarios, sometimes with unexpected results. In a variation on the set completion task, after watching a video of someone wrapping a gift and covering an item in tape, the model suggested a video of someone at the beach burying someone else in the sand.
It's effectively covering, but very different from the visual features of the other clips, says Camilo Fosco, a PhD student at MIT who is co-first author of the study with PhD student Alex Andonian. Conceptually it fits, but I had to think about it.
Limitations of the model include a tendency to overemphasize some features. In one case, it suggested completing a set of sports videos with a video of a baby and a ball, apparently associating balls with exercise and competition.
A deep learning model that can be trained to think more abstractly may be capable of learning from less data, the researchers say. Abstraction also paves the way toward higher-level, more human-like reasoning.
One hallmark of human cognition is our ability to describe something in relation to something else, to compare and to contrast, says Oliva. It's a rich and efficient way to learn that could eventually lead to machine learning models that can understand analogies and are that much closer to communicating intelligently with us.
Other authors of the study are Allen Lee from MIT, Rogerio Feris from IBM, and Carl Vondrick from Columbia University.
The fourth generation of AI is here, and it's called Artificial Intuition – The Next Web
Artificial Intelligence (AI) is one of the most powerful technologies ever developed, but it's not nearly as new as you might think. In fact, it has undergone several evolutions since its inception in the 1950s. The first generation of AI was descriptive analytics, which answers the question, What happened? The second, diagnostic analytics, addresses, Why did it happen? The third and current generation is predictive analytics, which answers the question, Based on what has already happened, what could happen in the future?
While predictive analytics can be very helpful and save time for data scientists, it is still fully dependent on historical data. Data scientists are therefore left helpless when faced with new, unknown scenarios. In order to have true artificial intelligence, we need machines that can think on their own, especially when faced with an unfamiliar situation. We need AI that can not just analyze the data it is shown, but express a gut feeling when something doesn't add up. In short, we need AI that can mimic human intuition. Thankfully, we have it.
What is Artificial Intuition?
The fourth generation of AI is artificial intuition, which enables computers to identify threats and opportunities without being told what to look for, just as human intuition allows us to make decisions without specifically being instructed on how to do so. It's similar to a seasoned detective who can enter a crime scene and know right away that something doesn't seem right, or an experienced investor who can spot a coming trend before anybody else. The concept of artificial intuition is one that, just five years ago, was considered impossible. But now companies like Google, Amazon, and IBM are working to develop solutions, and a few companies have already managed to operationalize it.
How Does It Work?
So, how does artificial intuition accurately analyze unknown data without any historical context to point it in the right direction? The answer lies within the data itself. Once presented with a current dataset, the complex algorithms of artificial intuition are able to identify any correlations or anomalies between data points.
Of course, this doesn't happen automatically. First, instead of building a quantitative model to process the data, artificial intuition applies a qualitative model. It analyzes the dataset and develops a contextual language that represents the overall configuration of what it observes. This language uses a variety of mathematical models, such as matrices, Euclidean and multidimensional space, linear equations, and eigenvalues, to represent the big picture. If you envision the big picture as a giant puzzle, artificial intuition is able to see the completed puzzle right from the start, and then work backward to fill in the gaps based on the interrelationships of the eigenvectors.
In linear algebra, an eigenvector is a nonzero vector that changes by at most a scalar factor (its direction does not change) when a linear transformation is applied to it. The corresponding eigenvalue is the factor by which the eigenvector is scaled. In concept, this provides a guidepost for visualizing anomalous identifiers: any data that does not fit correctly into the big picture described by the eigenvectors is flagged as suspicious.
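One standard way to operationalize "does not fit the big picture" is to look at the smallest-eigenvalue directions of the data's covariance matrix, where normal data has almost no spread; a point that projects far along such a direction is anomalous. The sketch below applies that textbook trick to synthetic data and is not any vendor's actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical transaction features: 200 normal points lying near a
# 2-d plane inside 3-d space, plus one anomalous point off that plane.
plane = rng.normal(size=(200, 2)) @ rng.normal(size=(2, 3))
X = np.vstack([plane + rng.normal(0, 0.02, plane.shape), [5.0, 5.0, -9.0]])

# Eigendecomposition of the covariance matrix: the smallest-eigenvalue
# direction is where the normal data has almost no spread.
cov = np.cov(X, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
minor_axis = eigvecs[:, 0]

# A point projecting far along the minor axis does not fit the big picture.
centered = X - X.mean(axis=0)
scores = np.abs(centered @ minor_axis)
flagged = int(np.argmax(scores))
print(f"most suspicious row: {flagged}")
```

Note that nothing told the model what an anomaly looks like; the flag falls out of the geometry of the data itself, which is the core claim being made for artificial intuition.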
How Can It Be Used?
Artificial intuition can be applied to virtually any industry, but is currently making considerable headway in financial services. Large global banks are increasingly using it to detect sophisticated new financial cybercrime schemes, including money laundering, fraud and ATM hacking. Suspicious financial activity is usually hidden among thousands upon thousands of transactions that have their own set of connected parameters. By using extremely complicated mathematical algorithms, artificial intuition rapidly identifies the five most influential parameters and presents them to analysts.
In 99.9% of cases, when analysts see the five most important ingredients and their interconnections out of many hundreds, they can immediately identify the type of crime being presented. So artificial intuition has the ability to produce the right type of data, identify it, detect with a high level of accuracy and a low level of false positives, and present it in a way that is easily digestible for analysts.
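As a toy version of surfacing the five most influential parameters, one might rank each parameter of a transaction by how far it deviates from its baseline and show the analyst only the top of that ranking. The parameter names, baseline statistics, and z-score rule below are invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical transaction: 20 named parameters, most near their normal
# range, three pushed far outside it. Names and data are illustrative.
names = [f"param_{i}" for i in range(20)]
baseline_mean = rng.uniform(10, 100, 20)
baseline_std = rng.uniform(1, 5, 20)
txn = baseline_mean + rng.normal(0, 1, 20) * baseline_std
txn[[3, 7, 11]] += 10 * baseline_std[[3, 7, 11]]   # inject three outliers

# Rank parameters by deviation from baseline (absolute z-score),
# then surface the top five for the analyst.
z = np.abs((txn - baseline_mean) / baseline_std)
top5 = [names[i] for i in np.argsort(z)[::-1][:5]]
print("most influential parameters:", top5)
```

The point of presenting five items rather than a raw score is explainability: the analyst sees which parameters drove the alert and can write a defensible suspicious-activity report from them.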
By uncovering these hidden relationships between seemingly innocent transactions, artificial intuition is able to detect and alert banks to the unknown unknowns (previously unseen and therefore unexpected attacks). Not only that, but the data is explained in a way that is traceable and logged, enabling bank analysts to prepare enforceable suspicious activity reports for the Financial Crimes Enforcement Network (FinCEN).
How Will It Affect the Workplace?
Artificial intuition is not intended to serve as a replacement for human instinct. It is an additional tool that helps people perform their jobs more effectively. In the banking example outlined above, artificial intuition isn't making any final decisions on its own; it's simply presenting an analyst with what it believes to be criminal activity. It remains the analyst's job to review the identified transactions and confirm the machine's suspicions.
AI has certainly come a long way since Alan Turing first presented the concept back in the 1950s, and it is not showing any sign of slowing down. Previous generations were just the tip of the iceberg. Artificial intuition marks the point when AI truly became intelligent.
Published September 3, 2020 17:00 UTC