Category Archives: Artificial Intelligence

Artificial intelligence expert moves to Montreal because it’s an AI hub – Montreal Gazette

Irina Rish, now a renowned expert in the field of artificial intelligence, first became drawn to the topic as a teenager in the former Soviet republic of Uzbekistan. At 14, she was fascinated by the notion that machines might have their own thought processes.

"I was interested in math in school and I was looking at how you improve problem solving and how you come up with algorithms," Rish said in a phone interview Friday afternoon. "I didn't know the word yet (algorithm) but that's essentially what it was. How do you solve tough problems?"

She read a book introducing her to the world of artificial intelligence and that kick-started a lifelong passion.

"First of all, they sounded like just mind-boggling ideas, that you could recreate in computers something as complex as intelligence," said Rish. "It's really exciting to think about creating artificial intelligence in machines. It kind of sounds like sci-fi. But the other interesting part of that is that you hope that by doing so, you can also better understand the human mind and hopefully achieve better human intelligence. So you can say AI is not just about computer intelligence but also about our intelligence. Both goals are equally exciting."

Here is the original post:
Artificial intelligence expert moves to Montreal because it's an AI hub - Montreal Gazette

Artificial Intelligence Is Helping to Spot California Wildfires – GovTech

(TNS) As 12,000 lightning strikes pummeled the Bay Area this month, igniting hundreds of fires, fire spotters sprang into action.

Their arsenal of tools includes thermal imagery collected by space satellites; real-time feeds from hundreds of mountaintop cameras; a far-flung array of weather stations monitoring temperature, humidity and winds; and artificial intelligence to munch and crunch the vast data troves to pinpoint hot spots.

For decades, wildfires in remote regions were spotted by people in lookout towers who scanned the horizon with binoculars for smoke, a tough and tedious job. They reported potential danger by telephone, carrier pigeon or Morse code signals with a mirror.

Now, fire spotting has gone high tech. And the technology to address it is getting exponentially better and faster, trained by a growing body of data about wildfires. It's making firefighters more nimble and keeping them safer. The only question is whether silicon-powered progress can keep up with the climate change-fueled flames.

Tech has also made fire spotting more democratic. Anyone can go online to see the satellite and camera images, while interactive maps display the conflagrations' locations. Footage from some of the mountaintop cameras went viral this month as they transmitted apocalyptic images of the raging flames that ultimately burned them in the CZU Lightning Complex fires.

"It's Netflix for fire," said Graham Kent, who runs the AlertWildfire.org system, which has about 550 cameras in California, a number he hopes to double by 2022. The cameras capture a still image every second to make time-lapse videos, using near-infrared technology for nighttime viewing. "They give an intimate sense of what's going on. There's a primal sense, like we're still living in caves; everyone fears fire."

The network of cameras, backed by a consortium of the University of Nevada at Reno, UC San Diego and the University of Oregon, allows authorized personnel such as fire command teams to rotate, pan and zoom to zero in on suspicious plumes of smoke. The AlertWildfire system is adding some mobile cameras: a trailer with a 30-foot tower that can be positioned anywhere it's needed.

The images from the cameras and satellites, along with footage captured by piloted and unpiloted aircraft, and weather station data, are vital components in the rapidly advancing technology for fire spotting.

"The new technology is helping us fight more-aggressive fires more aggressively with a calculated level of safety," said Brice Bennett, a spokesman for Cal Fire. "Fire-line commanders utilize intelligence from all these different inputs. Situational awareness is paramount: fully understanding the events unfolding around you, not just what's directly in front of your face but what will occur in the next 12 hours."

The boots-on-the-ground crews use the detailed data to get information even while they're en route, he said. The digital maps can show where the hottest spots are, for instance, so they know what areas to avoid and where to construct fire lines.

"We can use this information to understand where fires are spreading, where they're most active and to get rapid alerts for wildfires," said Scott Strenfel, manager of meteorology and fire science at PG&E. "It's pretty exciting with all this technology coming together. The earlier you can spot a fire, the earlier you can take suppression action."

During fire season, PG&E staffs its new Wildfire Safety Operations Center around the clock. Analysts in the room at the company's San Francisco headquarters monitor big-screen monitors displaying data-packed maps and information flowing in from a variety of sources.

The company used to spend a couple of million dollars a year on a smoke patrol program. Every afternoon during fire season, seven pilots would fly in set patterns (similar to a lawn-mower's path) over heavily forested areas in its service territory, looking for smoke. But satellite advances meant it could get similar information for a tenth of the cost and have continuous coverage, Strenfel said.

Even in a test version last year, the satellite system detected an early-morning grass fire on Mount Diablo in July 2019 about 15 minutes before the first 911 calls came in, he said. PG&E now has systems in place to notify local fire agencies when its technology spots fires.

Technology comes into play after fires as well. "We map burn severity to see how much damage resulted from the fire, so resource management can stabilize the landscape and mitigate hazards like flash floods," said Brad Quayle, a program manager at the Forest Service's Geospatial Technology and Applications Center, which uses satellites and other technologies to detect and monitor fire activity.

Technology also helps authorities decide whether and when to evacuate locals.

"A fire is a dynamic situation, with high winds, dry fuels, proximity to populations, especially in California," said Everett Hinkley, national remote sensing program manager at the Forest Service. "We can provide rapid updates to infer the direction and speed of those wildfires to help people calling the evacuation orders."

Although satellites have been used in fire spotting for about 20 years, a new generation of satellites and onboard tools has dramatically improved their aptitude for the task.

"Weather satellites have thermal channels that can be used for fires, but they're optimized to look at cloud temperatures (which are) very cold, not for very high temperatures," said Vincent Ambrosia, associate program manager for wildfires at the NASA Ames Research Center in Mountain View. Newer satellites with spectral sensors and advanced optics technology now provide finer spatial resolution and data processing.

There are two types of satellites: Polar orbiter satellites are closer to Earth and provide higher-resolution images, but capture them only twice a day. Geosynchronous or geostationary satellites stay over a specific geographic area, providing images about every five minutes, but must fly about 22,000 miles above the Earth to keep pace with its rotation, so the images are coarser.
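That 22,000-mile figure can be checked from Kepler's third law: a satellite whose orbital period matches one sidereal day must sit at a single fixed radius. A quick back-of-the-envelope calculation in Python, using standard constants:

```python
import math

# A geostationary satellite's period must equal one sidereal day; Kepler's
# third law then fixes its orbital radius: r^3 = GM * T^2 / (4 * pi^2).
GM = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
T = 86164.1           # one sidereal day, seconds
R_EARTH = 6.371e6     # mean Earth radius, m

r = (GM * T**2 / (4 * math.pi**2)) ** (1 / 3)  # orbital radius, m
altitude_miles = (r - R_EARTH) / 1609.344      # altitude above the surface

print(f"{altitude_miles:,.0f} miles")  # about 22,200 miles, the article's "about 22,000"
```

Polar orbiters, by contrast, fly a few hundred miles up, which is why their images are sharper but their revisits rarer.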

Researchers have lengthy lists of tech improvements they hope to see in the near future.

One is unpiloted aircraft that can stay aloft for months at a time, perhaps 100,000 feet above the ground, "providing persistent surveillance of a fire event, allowing (firefighters) to make real-time decisions," Ambrosia said. "It's the same as the resources that support troops on the ground in battle scenarios."

Quayle likewise said he'd like to see "long endurance, high-altitude platforms that can serve the purpose of a satellite but fly in the atmosphere."

Several private companies are working on options such as solar-powered aircraft or high-altitude airships like dirigibles, he said, estimating that deployment is between one and five years out.

He'd also like to see satellites built specifically for fire detection, something now being developed in Canada, which is replete with remote, fire-prone forests. That satellite system is probably five years out from completion and launch, he said, noting that the rest of the world can share it.

While some have speculated that the smaller drones flown by hobbyists could be deployed, they lack the power and range to fly high enough to usefully spot fires. But their technology, too, could improve over time.

Another future upgrade is for computers to get even better at reading the data via improved artificial intelligence, to cut down on false positives. "We need better machine learning to process this data overload, because you can't put enough analysts in front of screens to handle it all," Hinkley said.

Despite all the high-tech wizardry, many fires are initially reported through a traditional system: 911 calls. Blazes increasingly occur near populated areas, so there are essentially millions of potential spotters on the ground.

"The 911 calls in many places will be the first notification," Strenfel said.

But calls to 911 can mean a deluge of information without the specifics that firefighters need, so the satellites and cameras come into play to home in on exact locations.

"In cases like we just went through, with the lightning causing 500 fires all at once, and many people calling, that information can be overwhelming," Strenfel said. "The satellite detection systems (show) where these fires are in real time."

Kent from AlertWildfire said similar things about his camera network.

"When a 911 call comes in, authorities can turn to a camera and see the ignition phase of that fire," he said. Cameras can also triangulate a fire's exact location. Under normal circumstances, they can see 20 miles in daytime and 40 miles at night if there aren't obstacles. But he's seen fires caught by cameras as far away as 100 miles in the daytime and 160 miles at night.
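The triangulation Kent mentions can be illustrated with a simple bearing-intersection calculation: two cameras at known positions each report a compass bearing toward the smoke, and the fire sits where the two sight lines cross. This is a hypothetical sketch on a flat local grid, not AlertWildfire's actual code:

```python
import math

def triangulate(cam1, brg1, cam2, brg2):
    """Intersect two camera sight lines.

    cam1, cam2: (x, y) camera positions on a flat local grid (km).
    brg1, brg2: compass bearings toward the smoke (degrees, 0 = north,
    clockwise), as read off each camera's pan angle.
    Returns the (x, y) point where the two bearings cross.
    """
    # Unit direction vectors for each bearing.
    d1 = (math.sin(math.radians(brg1)), math.cos(math.radians(brg1)))
    d2 = (math.sin(math.radians(brg2)), math.cos(math.radians(brg2)))
    # Solve cam1 + t*d1 = cam2 + s*d2 for t using Cramer's rule.
    denom = d1[0] * -d2[1] - d1[1] * -d2[0]
    if abs(denom) < 1e-12:
        raise ValueError("bearings are parallel; no unique fix")
    rx, ry = cam2[0] - cam1[0], cam2[1] - cam1[1]
    t = (rx * -d2[1] - ry * -d2[0]) / denom
    return (cam1[0] + t * d1[0], cam1[1] + t * d1[1])

# Two towers 10 km apart both sight the same smoke plume.
fix = triangulate((0.0, 0.0), 45.0, (10.0, 0.0), 315.0)
print(fix)  # roughly 5 km east and 5 km north of the first camera
```

Real systems must also account for Earth curvature and terrain over the 20-to-160-mile distances the article describes, but the geometry is the same.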

Sometimes traditional ways reemerge.

Cal Fire's Amador-El Dorado Unit recently refurbished two dilapidated lookout towers and now staffs them during fire season with community volunteers.

Armed with a two-way radio, binoculars and an Osborne Fire Finder (a topographic paper map with sighting apertures to help gauge a fire's distance and location), the volunteers have spotted 85 smokes since June 1, with seven of them being first reports, said Diana Swart, a spokeswoman for the unit.

"These human volunteers get up in that tower with their old-fashioned Fire Finders from the early 1900s," she said. "In these very rural wooded areas, fires otherwise may not be noticed until they get very large. Having a person out there who's actively looking is key."

©2020 the San Francisco Chronicle. Distributed by Tribune Content Agency, LLC.

Read more:
Artificial Intelligence Is Helping to Spot California Wildfires - GovTech

MQ-9 Reaper Flies With AI Pod That Sifts Through Huge Sums Of Data To Pick Out Targets – The Drive

General Atomics says that it has successfully integrated and flight-tested Agile Condor, a podded, artificial intelligence-driven targeting computer, on its MQ-9 Reaper drone as part of a technology demonstration effort for the U.S. Air Force. The system is designed to automatically detect, categorize, and track potential items of interest. It could be an important stepping stone to giving various types of unmanned, as well as manned aircraft, the ability to autonomously identify potential targets, and determine which ones might be higher priority threats, among other capabilities.

The California-headquartered drone maker announced the Agile Condor tests on Sept. 3, 2020, but did not say when they had taken place. The Reaper with the pod attached conducted the flight testing from General Atomics Aeronautical Systems, Inc.'s (GA-ASI) Flight Test and Training Center in Grand Forks, North Dakota.

"Computing at the edge has tremendous implications for future unmanned systems," GA-ASI President David R. Alexander said in a statement. "GA-ASI is committed to expanding artificial intelligence capabilities on unmanned systems and the Agile Condor capability is proof positive that we can accurately and effectively shorten the observe, orient, decide and act cycle to achieve information superiority. GA-ASI is excited to continue working with AFRL [Air Force Research Laboratory] to advance artificial intelligence technologies that will lead to increased autonomous mission capabilities."

Defense contractor SRC, Inc. developed the Agile Condor system for the Air Force Research Laboratory (AFRL), delivering the first pod in 2016. It's not clear whether the Air Force conducted any flight testing of the system on other platforms before hiring General Atomics to integrate it onto the Reaper in 2019. The service had previously said that it expected to take the initial pod aloft in some fashion before the end of 2016.

"Sensors have rapidly increased in fidelity, and are now able to collect vast quantities of data, which must be analyzed promptly to provide mission critical information," an SRC white paper on Agile Condor from 2018 explains. "Stored data [physically on a drone] ... creates an unacceptable latency between data collection and analysis, as operators must wait for the RPA [remotely piloted aircraft] to return to base to review time sensitive data."

"In-mission data transfers, by contrast, can provide data more quickly, but this method requires more power and available bandwidth to send data," the white paper continues. "Bandwidth limits result in slower downloads of large data files, a clogged communications link and increased latency that could allow potential changes in intel between data collection and analysis. The quantities of data being collected are also so vast, that analysts are unable to fully review the data received to ensure actionable information is obtained."

This is all particularly true for drones equipped with wide-area persistent surveillance systems, such as the Air Force's Gorgon Stare system, which you can read about in more detail here, that grab immense amounts of imagery that can be overwhelming for sensor operators and intelligence analysts to scour through. Agile Condor is designed to parse through the sensor data a drone collects first, spotting and classifying objects of interest and then highlighting them for operators back at a control center or personnel receiving information at other remote locations for further analysis. Agile Condor would simply discard "empty" imagery and other data that shows nothing it deems useful, not even bothering to forward that on.
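The "detect and notify" pipeline described above can be sketched as a simple edge-side filter: run a detector on each frame, drop frames with nothing of interest on board, and forward only compact detection records rather than raw imagery. The detector, threshold, and record format below are invented for illustration and are not SRC's design:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # e.g. "vehicle"
    confidence: float  # detector score in [0, 1]
    x: float           # image coordinates of the object
    y: float

def detect_and_notify(frames, detector, threshold=0.8):
    """Edge-side filter: forward only frames with confident detections.

    frames: iterable of raw sensor frames.
    detector: callable returning a list of Detection per frame (assumed).
    Yields (frame_id, detections) for frames worth transmitting; "empty"
    frames are discarded on board, saving downlink bandwidth.
    """
    for frame_id, frame in enumerate(frames):
        hits = [d for d in detector(frame) if d.confidence >= threshold]
        if hits:  # only notify when something clears the threshold
            yield frame_id, hits

# Toy detector: pretend only the "truck" frame contains a confident object.
def toy_detector(frame):
    return [Detection("vehicle", 0.95, 120.0, 80.0)] if frame == "truck" else []

sent = list(detect_and_notify(["sky", "trees", "truck", "sky"], toy_detector))
print(sent)  # only the one frame with a detection is forwarded
```

The bandwidth saving is the whole point: three of the four frames here never leave the aircraft, mirroring the white paper's argument that "empty" imagery need not be forwarded at all.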

"This selective 'detect and notify' process frees up bandwidth and increases transfer speeds, while reducing latency between data collection and analysis," SRC's 2018 white paper says. "Real time pre-processing of data with the Agile Condor system also ensures that all data collected is reviewed quickly, increasing the speed and effectiveness with which operators are notified of actionable information."

See more here:
MQ-9 Reaper Flies With AI Pod That Sifts Through Huge Sums Of Data To Pick Out Targets - The Drive

60% of enterprises believe AI will disrupt their business in 2-3 years: Nasscom/EY – Economic Times

Pune: Sixty percent of Indian enterprises believe that Artificial Intelligence (AI) will disrupt their business in the next two-three years, according to a study by industry body Nasscom and consultancy EY.

The study, "Can enterprise intelligence be created artificially? A survey of Indian enterprises," is based on a survey of over 500 CXOs across sectors like retail, BFSI, healthcare and agriculture on the maturity of AI adoption, along with the key challenges faced on their AI enterprise journey.

Seventy percent of Indian enterprises that deployed AI have achieved measurable results.

"As industry witnesses a rapid advancement in new technologies, Artificial Intelligence is increasingly becoming an imperative for businesses across industries. Implementing AI will not only catalyse the innovation to stay competitive but also generate long-term value for enterprises," said Debjani Ghosh, President, Nasscom.

Operational efficiency, customer experience and revenue growth are the main reasons why enterprises are turning to AI, with BFSI firms (36%) leading the way, followed by retail (25%), healthcare (20%) and agriculture (8%).

"Some of the biggest impediments to the adoption of AI include the quality of data available, the level of digitisation at the enterprise and the maturity of the partner network," said Nitin Bhatt, Partner and Technology Sector Leader, EY India.

People and cultural issues were other big challenges, with 40% citing workforce displacement and 32% citing cultural impediments to AI adoption.

However, among the firms that had gone ahead with AI adoption, 19% said workforce displacement was a challenge while 55% cited cultural factors.

"Explainability is an important factor. People are seeing AI algorithms making important decisions without knowing how and why they make those decisions," said Bhatt.

It is important to bring in a trust factor, especially around important decisions, to boost the adoption of AI. While 74% of the respondents have established either a formal strategy or obtained C-suite sponsorship to initiate or scale up their AI programs, 78% said that reskilling existing employees will help maximise value from their AI programs.

Vijay Bhaskaran, Partner, Technology Consulting, EY, said: "AI has immense capability to unlock exponential value for businesses and navigate the complexities of the ever-evolving digital economy. However, enterprises too need to equip themselves with the right AI platform that can help them rapidly adopt and scale AI solutions, resulting in faster, smarter and future-ready businesses."

More:
60% of enterprises believe AI will disrupt their business in 2-3 years: Nasscom/EY - Economic Times

3 Daunting Ways Artificial Intelligence Will Transform The World Of Work – Forbes

Each industrial revolution has brought with it new ways of working; think of the impact computers and digital technology (the third industrial revolution) have had on how we work.

But this fourth industrial revolution (what I call the intelligence revolution, because it is being driven by AI and data) feels unprecedented in terms of the sheer pace of change. The crucial difference between this and the previous industrial revolutions is we're no longer talking about generational change; we're talking about enormous transformations that are going to take place within the next five, 10 or 20 years.

Here are the three biggest ways I see AI fundamentally changing the work that humans do, within a very short space of time.

1. More tasks and roles will become automated

Increasing automation is an obvious place to start, since a common narrative surrounding AI is "robots are going to take all our jobs." In many ways, this narrative is completely understandable: in a lot of industries and jobs, the impact of automation will be keenly felt.

To understand the impact of automation, PricewaterhouseCoopers analyzed more than 200,000 jobs in 29 countries and found:

By the early 2020s, 3 percent of jobs will be at risk of automation.

That rises to almost 20 percent by the late 2020s.

By the mid-2030s, 30 percent of jobs will be at the potential risk of automation. For workers with low education, this rises to 44 percent.

These are stark figures. But there is a positive side to increasing automation. The same study found that, while automation will no doubt displace many existing jobs, it will also generate demand for new jobs. In fact, AI, robotics, and automation could provide a potential $15 trillion boost to global GDP by 2030.

This is borne out by previous industrial revolutions, which ultimately created more jobs than they displaced. Consider the rise of the internet as an example. Sure, the internet had a negative impact on some jobs (I don't know about you, but I now routinely book flights and hotels online, instead of popping to my local travel agent), but just look at how many jobs the internet has created and how it's enabled businesses to branch into new markets and reach new customers.

Automation will also lead to better jobs for humans. If we're honest with ourselves, the tasks that are most likely to be automated by AI are not the tasks best suited to humans, or the tasks that humans should even want to do. Machines are great at automating the boring, mundane, and repetitive stuff, leaving humans to focus on more creative, empathetic, and interpersonal work. Which brings me to...

2. Human jobs will change

When parts of jobs are automated by machines, that frees up humans for work that is generally more creative and people-oriented, requiring skills such as problem-solving, empathy, listening, communication, interpretation, and collaboration, all skills that humans are generally better at than machines. In other words, the jobs of the future will focus more and more on the human element and soft skills.

According to Deloitte, this will lead to new categories of work:

Standard jobs: Generally focusing on repeatable tasks and standardized processes, standard jobs use a specified and narrow skill set.

Hybrid jobs: These roles require a combination of technical and soft skills which traditionally haven't been combined in the same job.

Superjobs: These are roles that combine work and responsibilities from multiple traditional jobs, where technology is used to both augment and widen the scope of the work, involving a more complex combination of technical and human skills.

For me, this emphasizes how employees and organizations will need to develop both the technical and softer human skills to succeed in the age of AI.

3. The employee experience will change, too

Even in seemingly non-tech companies (if there is such a thing in the future), the employee experience will change dramatically. For one thing, robots and cobots will have an increasing presence in many workplaces, particularly in manufacturing and warehousing environments.

But even in office environments, workers will have to get used to AI tools as co-workers. From how people are recruited, to how they learn and develop in the job, to their everyday working activities, AI technology and smart machines will play an increasingly prominent role in the average person's working life. Just as we've all got used to tools like email, we'll also get used to routinely using tools that monitor workflows and processes and make intelligent suggestions about how things could be done more efficiently. Tools will emerge to carry out more and more repetitive admin tasks, such as arranging meetings and managing a diary. And, very likely, new tools will monitor how employees are working and flag up when someone is having trouble with a task or not following procedures correctly.

On top of this, workforces will become decentralized (a trend likely to be accelerated by the coronavirus pandemic), which means the workers of the future can choose to live anywhere, rather than going where the work is.

Preparing for the AI revolution

AI, and particularly automation, is going to transform the way we work. But rather than fear this development, we should embrace this new way of working. We should embrace the opportunities AI provides to make work better.

No doubt, this will require something of a cultural shift for organizations, just one of the many ways in which organizations will have to adapt for the intelligence revolution. Discover how to prepare your organization for an AI-driven world in my new book, The Intelligence Revolution: Transforming Your Business With AI.

Visit link:
3 Daunting Ways Artificial Intelligence Will Transform The World Of Work - Forbes

The Guardian view on artificial intelligence’s revolution: learning but not as we know it – The Guardian

Bosses don't often play down their products. Sam Altman, the CEO of artificial intelligence company OpenAI, did just that when people went gaga over his company's latest software: the Generative Pretrained Transformer 3 (GPT-3). For some, GPT-3 represented a moment in which one scientific era ends and another is born. Mr Altman rightly lowered expectations. "The GPT-3 hype is way too much," he tweeted last month. "It's impressive but it still has serious weaknesses and sometimes makes very silly mistakes."

OpenAI's software is spookily good at playing human, which explains the hoopla. Whether penning poetry, dabbling in philosophy or knocking out comedy scripts, the general agreement is that GPT-3 is probably the best non-human writer ever. Given a sentence and asked to write another like it, the software can do the task flawlessly. But this is a souped-up version of the auto-complete function that most email users are familiar with.

GPT-3 stands out because it has been trained on more information (about 45TB worth) than anything else. Because the software can remember each and every combination of words it has read, it can work out, through lightning-fast trial-and-error attempts of its 175bn settings, where thoughts are likely to go. Remarkably, it can transfer its skills: trained as a language translator, GPT-3 worked out it could convert English to Javascript as easily as it does English to French. It's learning, but not as we know it.

But this is not intelligence or creativity. GPT-3 doesn't know what it is doing; it is unable to say how or why it has decided to complete sentences; it has no grasp of human experience; and cannot tell if it is making sense or nonsense. What GPT-3 represents is a triumph of one scientific paradigm over another. Once machines were taught to think like humans. They struggled to beat chess grandmasters. Then they began to be trained with data to, as one observer pointed out, "discover like we can" rather than "contain what we have discovered." Grandmasters started getting beaten. These days they cannot win.

The reason is Moore's law, the exponentially falling cost of number-crunching. AI's bitter lesson is that the more data that can be consumed, and the more models can be scaled up, the more a machine can emulate or surpass humans in quantitative terms. If scale truly is the solution to human-like intelligence, then GPT-3 is still about 1,000 times smaller than the brain's 100 trillion-plus synapses. Human beings can learn a new task by being shown how to do it only a few times. That ability to learn complex tasks from only a few examples, or no examples at all, has so far eluded machines. GPT-3 is no exception.

All this raises big questions that seldom get answered. Training GPT-3's neural nets is costly. A $1bn investment by Microsoft last year was doubtless needed to run and cool GPT-3's massive server farms. The bill for the carbon footprint (training a large neural net produces emissions equal to the lifetime emissions of five cars) is due.

Fundamental is the regulation of a for-profit OpenAI. The company initially delayed the launch of its earlier GPT-2, with a mere 1.5bn parameters, because it fretted over its implications. It had every reason to be concerned; such AI will emulate the racist and sexist biases of the data it swallows. In an era of deepfakes and fake news, GPT-style devices could become weapons of mass destruction: engaging and swamping political opponents with divisive disinformation. Worried? If you aren't, then remember that Dominic Cummings wore an OpenAI T-shirt on his first day in Downing Street.

More here:
The Guardian view on artificial intelligence's revolution: learning but not as we know it - The Guardian

What Is The Artificial Intelligence Revolution And Why Does It Matter To Your Business? – Forbes

As a species, humanity has witnessed three previous industrial revolutions: first came steam/water power, followed by electricity, then computing. Now, we're in the midst of a fourth industrial revolution, one driven by artificial intelligence and big data.

I like to refer to this as the "Intelligence Revolution." But whatever we call it (the fourth industrial revolution, Industry 4.0 or the Intelligence Revolution), one thing is clear: this latest revolution is going to transform our world, just as the three previous industrial revolutions did.

What makes AI so impactful, and why now?

AI gives intelligent machines (be they computers, robots, drones, or whatever) the ability to think and act in a way that previously only humans could. This means they can interpret the world around them, digest and learn from information, make decisions based on what they've learned, and then take appropriate action, often without human intervention. It's this ability to learn from and act upon data that is so critical to the Intelligence Revolution, especially when you consider the sheer volume of data that surrounds us today. AI needs data, and lots of it, in order to learn and make smart decisions. This gives us a clue as to why the Intelligence Revolution is happening now.

After all, AI isn't a new concept. The idea of creating intelligent machines has been around for decades. So why is AI suddenly so transformative? The answer to that question is two-fold:

We have more data than ever before. Almost everything we do (both in the online world and the offline world) creates data. Thanks to the increasing digitization of our world, we now have access to more data than ever before, which means AI has been able to grow much smarter, faster, and more accurate in a very short space of time. In other words, the more data intelligent machines have access to, the faster they can learn, and the more accurate they become at interpreting the information. As a very simple example, think of Spotify recommendations. The more music (or podcasts) you listen to via Spotify, the better able Spotify is to recommend other content that you might enjoy. Netflix and Amazon recommendations work on the same principle, of course.

Impressive leaps in computing power make it possible to process and make sense of all that data. Thanks to advances like cloud computing and distributed computing, we now have the ability to store, process, and analyze data on an unprecedented scale. Without this, data would be worthless.

What the Intelligence Revolution means for your business

I guarantee your business is going to have to get smarter. In fact, every business is going to have to get smarter, from small startups to global corporations, from digital-native companies to more traditional businesses. Organizations of all shapes and sizes will be impacted by the Intelligence Revolution.

Take a seemingly traditional sector like farming. Agriculture is undergoing huge changes, in which technology is being used to intelligently plan what crops to plant, where and when, in order to maximize harvests and run more efficient farms. Data and AI can help farmers monitor soil and weather conditions, and the health of crops. Data is even being gathered from farming equipment, in order to improve the efficiency of machine maintenance. Intelligent machines are being developed that can identify and delicately pick soft ripe fruits, sort cucumbers, and pinpoint pests and diseases. The image of a bucolic, traditional farm is almost a thing of the past. Farms that refuse to evolve risk being left behind.

This is the impact of the Intelligence Revolution. All industries are evolving rapidly. Innovation and change is the new norm. Those who can't harness AI and data to improve their business, whatever the business, will struggle to compete.

Just as in each of the previous industrial revolutions, the Intelligence Revolution will utterly transform the way we do business. For your company, this may mean you have to rethink the way you create products and bring them to market, rethink your service offering, rethink your everyday business processes, or perhaps even rethink your entire business model.

Forget the good vs bad AI debate

In my experience, people fall into one of two camps when it comes to AI. Some are excited at the prospect of a better society, in which intelligent machines help to solve humanity's biggest challenges, make the world a better place, and generally make our everyday lives easier. Then there are those who think AI heralds the beginning of the end: the dawning of a new era in which intelligent machines supersede humans as the dominant lifeform on Earth.

Personally, I sit somewhere in the middle. I'm certainly fascinated and amazed by the incredible things that technology can achieve. But I'm also nervous about the implications, particularly the potential for AI to be used in unethical, nefarious ways.

But in a way, the debate is pointless. Whether you're a fan of AI or not, the Intelligence Revolution is coming your way. Technology is only going in one direction: forwards, into an ever more intelligent future. There's no going back.

That's not to say we shouldn't consider the implications of AI or work hard to ensure AI is used in an ethical, fair way, one that benefits society as well as the bottom line. Of course we should do that. But it's important to understand that, however you feel about it, AI cannot be ignored. Every business leader needs to come to terms with this fact and take action to prepare their company accordingly. This means working out how and where AI will make the biggest difference to your business, and developing a robust AI strategy that ensures AI delivers maximum value.

AI is going to impact businesses of all shapes and sizes, across all industries. Discover how to prepare your organization for an AI-driven world in my new book, The Intelligence Revolution: Transforming Your Business With AI.

Read the original:
What Is The Artificial Intelligence Revolution And Why Does It Matter To Your Business? - Forbes

Digitalized Discrimination: COVID-19 and the Impact of Bias in Artificial Intelligence – JD Supra

[co-author: Jordan Rhodes]

As the world grapples with the impacts of the COVID-19 pandemic, we have become increasingly reliant on artificial intelligence (AI) technology. Experts have used AI to test potential treatments, diagnose individuals, and analyze other public health impacts. Even before the pandemic, businesses were increasingly turning to AI to improve efficiency and overall profit. Between 2015 and 2019, the adoption of AI technology by businesses grew more than 270 percent.

The growing reliance on AI (and other machine learning systems) is to be expected, considering the technology's ability to help streamline business processes and tackle difficult computational problems. But as we've discussed previously, the technology is hardly the neutral and infallible resource that so many view it to be, often sharing the same biases and flaws as the humans who create it.

Recent research continues to point out these potential flaws. One particularly important flaw is algorithm bias: the discriminatory treatment of individuals by a machine learning system. This treatment can come in various forms but often leads to the discrimination of one group of people based on specific categorical distinctions. The reason for this bias is simpler than you may think. Computer scientists have to teach an AI system how to respond to data. To do this, the technology is trained on datasets, and those datasets are both created and influenced by humans. As such, it is necessary to understand and account for potential sources of bias, both explicit and inherent, in the collection and creation of a dataset. Failure to do so can result in bias seeping into a dataset and ultimately into the results and determinations made by an AI system or product that utilizes that dataset. In other words: bias in, bias out.

Examining AI-driven hiring systems exposes this flaw in action. An AI system can sift through hundreds, if not thousands, of résumés in short periods of time, evaluate candidates' answers to written questions, and even conduct video interviews. However, when these AI hiring systems are trained on biased datasets, the output reflects that exact bias. For example, imagine a résumé-screening machine learning tool that is trained on a company's historical employee data (such as résumés collected from the company's previously hired candidates). This tool will inherit both the conscious and unconscious preferences of the hiring managers who previously made all of those selections. In other words, if a company historically hired predominantly white men to fill key leadership positions, the AI system will reflect that preferential bias when selecting candidates for other, similar leadership positions. As a result, such a system discriminates against women and people of color who may otherwise be qualified for these roles. Furthermore, it can embed a tendency to discriminate within the company's systems in a manner that makes it more difficult to identify and address. And as the country's unemployment rate skyrockets in response to the pandemic, some have taken issue with companies relying on AI to make pivotal employment decisions, like reviewing employee surveys and evaluations to determine who to fire.
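The "bias in, bias out" mechanism can be made concrete with a toy model. This is a deliberately simplistic sketch, not any real vendor's screening system: the groups, the hiring history, and the frequency-based "training" are all invented for illustration. It shows how a model fit purely on biased historical outcomes reproduces that bias against new candidates, even when individual qualifications never enter the data at all.

```python
def fit_hire_rates(history):
    """'Train' by recording the historical hire rate for each group."""
    counts = {}
    for group, hired in history:
        n, k = counts.get(group, (0, 0))
        counts[group] = (n + 1, k + hired)
    return {g: k / n for g, (n, k) in counts.items()}

def screen(model, group, threshold=0.5):
    """Advance a candidate only if their group's historical rate clears the bar."""
    return model.get(group, 0.0) >= threshold

# Biased history: group A was hired 4 times out of 5, group B once out of 5,
# even though the candidates' actual qualifications were never recorded.
history = [("A", 1), ("A", 1), ("A", 1), ("A", 1), ("A", 0),
           ("B", 1), ("B", 0), ("B", 0), ("B", 0), ("B", 0)]

model = fit_hire_rates(history)
print(screen(model, "A"))  # True:  the model inherits the preference for A
print(screen(model, "B"))  # False: equally qualified B candidates are filtered out
```

Nothing in the code is malicious; the discrimination comes entirely from the training data, which is exactly why bias testing of datasets and outputs, as discussed below, matters.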

Congress has expressed specific concerns regarding the increase in AI dependency during the pandemic. In May, some members of Congress addressed a letter to House and Senate leadership, urging that the next stimulus package include protections against federal funding of biased AI technology. If the letter's recommendations are adopted, certain businesses that receive federal funding from the upcoming stimulus package will have to provide a statement certifying that bias tests were performed on any algorithms the business uses to automate or partially automate activities. Specifically, this testing requirement would apply to companies using AI to make employment and lending determinations. Although the proposal's future is uncertain, companies invested in promoting equality do not have to wait for Congress to act.

In recent months, many companies have publicly announced initiatives to address how they can strive to reduce racial inequalities and disparities. For companies considering such initiatives, one actionable step could be a strategic review of the AI technology the company utilizes. Such a review could include verifying whether that AI technology has been bias-tested, and considering the technology's overall potential for automated discriminatory effects given the context of its specific use.

Only time will reveal the large-scale impacts of AI on our society and whether we've used AI in a responsible manner. However, in many ways, the pandemic demonstrates that these concerns are only just beginning.


Visit link:
Digitalized Discrimination: COVID-19 and the Impact of Bias in Artificial Intelligence - JD Supra

Adopting IT Advances: Artificial Intelligence and Real Challenges – CIO Applications

By coming together, we are able to select and strengthen a business process supported by advanced analytics, which local teams can embrace and deploy across their business units.

In addition to the benefits of forming a cross-functional, multi-national team, it's been exciting to watch the collaborative process evolve as Baby Boomers, Gen X, Gen Y and Gen Z colleagues work to solve business-critical challenges. We've found that by bringing these generations together, we can leverage the necessary experiences and skillsets to create a balanced vision that forms the strategy as the work streams begin to develop their actions. Pairing the multi-generational workforce with our focus on inclusion and diversity also fosters internal ownership. This participation yields team unity and pride through clearly understood program goals and objectives and, ultimately, improved adoption deep across all business regions.

Build confidence

Even with a global, inter-generational team building advanced applications, there's still a question of confidence in the information delivered through AI and ML techniques. Can the information being provided actually be used to create a better, more reliable experience for our customers?

A recent article on Towards Data Science, an online publication for data scientists and ML engineers, put it best: "At the end of the day, one of the most important jobs any data scientist has is to help people trust an algorithm that they most likely don't completely understand."

To build that trust, the heavy lifting done early in the process must produce algorithms and mathematical calculations that deliver correct information while remaining agile enough to capture the changes our business experiences on a very dynamic basis. This step begins further upstream in the process, by first establishing a cross-functional group that owns, validates and organizes the datasets needed for accurate outputs. This team also holds responsibility for all modifications made post-implementation, as continuous improvement steps are added into the data-driven process. While deploying this step may delay time-to-market delivery, the benefits gained by providing a dependable output decrease the need for rework and increase user reliability.

Time matters

How flexible is your business? Successfully incorporating AI and ML into an organization takes time and dedication, because it requires the ability to respond quickly.

Business complexity has evolved over the years alongside customers' increasing expectations for excellence. Our organization continues reaching new heights by deploying AI and ML techniques that include an integration that:

- Creates a diverse pool of talented external candidates
- Leads to stronger training and development processes and programs for our employees
- Localizes a global application
- Bridges technological enhancements with business processes
- Drives business value from delivering reliable information

By putting the right processes in place now, forward-thinking businesses are better prepared for a quicker response when tackling IT challenges and on the path to finding very real solutions.

Visit link:
Adopting IT Advances: Artificial Intelligence and Real Challenges - CIO Applications

Stanford Center for Health Education Launches Online Program in Artificial Intelligence in Healthcare to Improve Patient Outcomes – PRNewswire

STANFORD, Calif., Aug. 10, 2020 /PRNewswire/ -- The Stanford Center for Health Education launched an online program in AI and Healthcare this week. The program aims to advance the delivery of patient care and improve global health outcomes through artificial intelligence and machine learning.

The online program, taught by faculty from Stanford Medicine, is designed for healthcare providers, technology professionals, and computer scientists. The goal is to foster a common understanding of the potential for AI to safely and ethically improve patient care.

Stanford University is a leader in AI research and applications in healthcare, with expertise in health economics, clinical informatics, computer science, medical practice, and ethics.

"Effective use of AI in healthcare requires knowing more than just the algorithms and how they work," said Nigam Shah, associate professor of medicine and biomedical data science, the faculty director of the new program. "Stanford's AI in Healthcare program will equip participants to design solutions that help patients and transform our healthcare system. The program will provide a multifaceted perspective on what it takes to bring AI to the clinic safely, cost-effectively, and ethically."

AI has the potential to enable personalized care and predictive analytics using patient data. Computer system analyses of large patient datasets can help providers personalize optimal care. And data-driven patient risk assessment can better enable physicians to take the right action, at the right time. Participants in the four-course program will learn about:

- the current state, trends and implications of artificial intelligence in healthcare;
- the ethics of AI in healthcare;
- how AI affects patient care safety, quality, and research;
- how AI relates to the science, practice and business of medicine;
- practical applications of AI in healthcare; and
- how to apply the building blocks of AI to innovate patient care and understand emerging technologies.

The Stanford Center for Health Education (SCHE), which created the AI in Healthcare program, develops online education programs to extend Stanford's reach to learners around the world. SCHE aims to shape the future of health and healthcare through the timely sharing of knowledge derived from medical research and advances. By facilitating interdisciplinary collaboration across medicine and technology, and introducing professionals to new disciplines, the AI in Healthcare program is intended to advance the field.

"In keeping with the mission of the Stanford Center for Health Education to expand knowledge and improve health on a global scale, we are excited to launch this online certificate program on Artificial Intelligence in Healthcare," said Dr. Charles G. Prober, founding executive director of SCHE. "This program features several of Stanford's leading thinkers in this emerging field a discipline that will have a profound effect on human health and disease in the 21st century."

The Stanford Center for Health Education is a university-wide program supported by Stanford Medicine. The AI in Healthcare program is available for enrollment through Stanford Online, and hosted on the Coursera online learning platform. The program consists of four online courses, and upon completion, participants can earn a Stanford Online specialization certificate through the Coursera platform. The four courses comprising the AI in Healthcare specialization are: Introduction to Healthcare, Introduction to Clinical Data, Fundamentals of Machine Learning for Healthcare, and Evaluations of AI Applications in Healthcare.

SOURCE Stanford Center for Health Education

Excerpt from:
Stanford Center for Health Education Launches Online Program in Artificial Intelligence in Healthcare to Improve Patient Outcomes - PRNewswire