Category Archives: Artificial Intelligence
Fake news generated by artificial intelligence can be convincing enough to trick even experts – Scroll.in
If you use social media websites such as Facebook and Twitter, you may have come across posts flagged with warnings about misinformation. So far, most misinformation, flagged and unflagged, has been aimed at the general public. Imagine the possibility of misinformation (information that is false or misleading) in scientific and technical fields like cybersecurity, public safety and medicine.
There is growing concern about misinformation spreading in these critical fields as a result of common biases and practices in publishing scientific literature, even in peer-reviewed research papers. As a graduate student and as faculty members doing research in cybersecurity, we studied a new avenue of misinformation in the scientific community. We found that it is possible for artificial intelligence systems to generate false information in critical fields like medicine and defence that is convincing enough to fool experts.
General misinformation often aims to tarnish the reputation of companies or public figures. Misinformation within communities of expertise has the potential for scary outcomes such as delivering incorrect medical advice to doctors and patients. This could put lives at risk.
To test this threat, we studied the impacts of spreading misinformation in the cybersecurity and medical communities. We used artificial intelligence models dubbed transformers to generate false cybersecurity news and Covid-19 medical studies and presented the cybersecurity misinformation to cybersecurity experts for testing. We found that transformer-generated misinformation was able to fool cybersecurity experts.
Much of the technology used to identify and manage misinformation is powered by artificial intelligence. AI allows computer scientists to fact-check large amounts of misinformation quickly, given that there is too much for people to detect without the help of technology. Although AI helps people detect misinformation, it has ironically also been used to produce misinformation in recent years.
Transformers, like BERT from Google and GPT from OpenAI, use natural language processing to understand text and produce translations, summaries and interpretations. They have been used in such tasks as storytelling and answering questions, pushing the boundaries of machines displaying human-like capabilities in generating text.
Transformers have aided Google and other technology companies by improving their search engines, and have helped the general public with such common problems as writer's block.
Transformers can also be used for malevolent purposes. Social networks like Facebook and Twitter have already faced the challenges of AI-generated fake news across platforms.
Our research shows that transformers also pose a misinformation threat in medicine and cybersecurity. To illustrate how serious this is, we fine-tuned the GPT-2 transformer model on open online sources discussing cybersecurity vulnerabilities and attack information. A cybersecurity vulnerability is a weakness of a computer system, and a cybersecurity attack is an act that exploits a vulnerability. For example, if a vulnerability is a weak Facebook password, an attack exploiting it would be a hacker figuring out your password and breaking into your account.
We then seeded the model with a sentence or phrase from an actual cyberthreat intelligence sample and had it generate the rest of the threat description. We presented the generated description to cyberthreat hunters, who sift through large amounts of information about cybersecurity threats. These professionals read threat descriptions to identify potential attacks and adjust the defences of their systems.
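To make the seeding step concrete, here is a minimal sketch of prompt-seeded text generation using the Hugging Face transformers library. The stock gpt2 checkpoint stands in for the study's fine-tuned model, and the seed sentence is invented for illustration.

```python
# Minimal sketch of seeded text generation with GPT-2 via the Hugging Face
# transformers library. The stock "gpt2" checkpoint stands in for the
# fine-tuned cybersecurity model described in the study.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Seed: the opening phrase of a (hypothetical) threat-intelligence sample.
seed = "Attackers exploited a vulnerability in the VPN appliance to"
inputs = tokenizer(seed, return_tensors="pt")

# Sample a continuation; the model completes the threat description.
outputs = model.generate(
    **inputs,
    max_length=80,
    do_sample=True,
    top_p=0.95,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```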
We were surprised by the results. The cybersecurity misinformation examples we generated were able to fool cyberthreat hunters, who are knowledgeable about all kinds of cybersecurity attacks and vulnerabilities. Imagine this scenario with a crucial piece of cyberthreat intelligence that involves the airline industry, which we generated in our study.
This misleading piece of information contains incorrect claims about cyberattacks on airlines involving sensitive real-time flight data. Such false information could keep cyber analysts from addressing legitimate vulnerabilities in their systems by shifting their attention to fake software bugs. If a cyber analyst acts on the fake information in a real-world scenario, the airline in question could face a serious attack that exploits a real, unaddressed vulnerability.
A similar transformer-based model can generate information in the medical domain and potentially fool medical experts. During the Covid-19 pandemic, preprints of research papers that have not yet undergone rigorous review have constantly been uploaded to sites such as medRxiv.
They are not only being described in the press but are being used to make public health decisions. Consider the following, which is not real but was generated by our model after minimal fine-tuning of the default GPT-2 on some Covid-19-related papers.
The model was able to generate complete sentences and form an abstract allegedly describing the side effects of Covid-19 vaccinations and the experiments that were conducted. This is troubling both for medical researchers, who consistently rely on accurate information to make informed decisions, and for members of the general public, who often rely on public news to learn about critical health information. If accepted as accurate, this kind of misinformation could put lives at risk by misdirecting the efforts of scientists conducting biomedical research.
Although examples like these from our study can be fact-checked, transformer-generated misinformation hinders such industries as health care and cybersecurity in adopting AI to help with information overload. For example, automated systems are being developed to extract data from cyberthreat intelligence that is then used to inform and train automated systems to recognise possible attacks. If these automated systems process such false cybersecurity text, they will be less effective at detecting true threats.
We believe the result could be an arms race as people spreading misinformation develop better ways to create false information in response to effective ways to recognise it.
Cybersecurity researchers continuously study ways to detect misinformation in different domains. Understanding how to automatically generate misinformation helps in understanding how to recognise it. For example, automatically generated information often has subtle grammatical mistakes that systems can be trained to detect. Systems can also cross-correlate information from multiple sources and identify claims lacking substantial support from other sources.
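As a rough illustration of the cross-correlation idea (not a system the researchers describe), a claim can be scored by how closely it matches reports from independent sources; the texts and the threshold below are invented for this sketch.

```python
# Toy sketch of cross-source corroboration: flag a claim that lacks support
# from independent sources, measured here by TF-IDF cosine similarity.
# Texts and the 0.3 threshold are illustrative, not from the study.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

claim = "A new flaw in flight-tracking software lets attackers ground planes."
corpus = [
    "Vendor patches authentication bypass in VPN appliances.",
    "Researchers disclose phishing campaign targeting airline staff.",
    "Agency warns of ransomware affecting logistics providers.",
]

vec = TfidfVectorizer().fit(corpus + [claim])
support = cosine_similarity(vec.transform([claim]), vec.transform(corpus))

if support.max() < 0.3:  # little overlap with any independent report
    print("Claim lacks corroboration; route to a human analyst.")
```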
Ultimately, everyone should be more vigilant about what information is trustworthy and be aware that hackers exploit people's credulity, especially if the information is not from reputable news sources or published scientific work.
Priyanka Ranade is a PhD Student in Computer Science and Electrical Engineering and Anupam Joshi is a Professor of Computer Science & Electrical Engineering at the University of Maryland, Baltimore County.
Tim Finin is a Professor of Computer Science and Electrical Engineering at the same institution.
This article first appeared on The Conversation.
Will Artificial Intelligence Robots Do the Majority of Our Work in the Coming Decade? – BBN Times
Artificial intelligence robots are slowly replacing blue-collar and white-collar workers.
Go to the trading floors and you will find that there are no human brokers; algorithmic trading software makes money for most investment funds.
It takes 0.2 seconds for a price quote to travel from the exchange to your software vendor's data center (DC), 0.3 seconds from the data center to reach your trading screen, 0.1 seconds for your trading software to process the received quote, 0.3 seconds for it to analyze and place a trade, 0.2 seconds for your trade order to reach your broker, and 0.3 seconds for your broker to route your order to the exchange.
Total time elapsed = 0.2 + 0.3 + 0.1 + 0.3 + 0.2 + 0.3 = 1.4 seconds. In today's dynamic trading world, the original price quote would have changed multiple times within this 1.4-second period. The job of an algorithmic trading program is to identify profitable opportunities and place trades at a speed and frequency that is impossible for a human trader to match.
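The arithmetic itself is a simple latency budget. A short sketch using the article's hop timings:

```python
# The article's 1.4-second figure is simply the sum of the per-hop delays.
# A latency budget like this shows why a manual workflow cannot keep up
# with an automated one.
hops = {
    "exchange -> vendor data center": 0.2,
    "data center -> trading screen": 0.3,
    "software processes quote": 0.1,
    "analyze and place trade": 0.3,
    "order reaches broker": 0.2,
    "broker routes to exchange": 0.3,
}

total = sum(hops.values())
print(f"Round-trip latency: {total:.1f} s")  # 1.4 s
```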
Humans have no chance of competing with machines in computing speed and efficiency, instant analysis and decision-making.
The human-AI replacement process follows the causal chain of smart work automation:
Office workers -> Distant workers -> RPA bots -> Digital workers -> AI workers
"A digital worker is software designed to model and emulate human job roles by performing end-to-end job activities using automation and AI-based skills. In contrast, an RPA bot is software that mimics human actions by performing specific tasks for which its programmed.
Digital workers can understand, decide and act to automate job roles as opposed to simply acting to automate individual tasks. In doing so, they extend RPA capabilities and can be applied to more use cases". IBM DW
The human-robot replacement has been accelerated by COVID-19. Google and Facebook let their employees continue to work from home through July 2021 due to the ongoing pandemic. Many of them may hardly return at all.
As McKinsey cautiously projected:
"The activities most susceptible to automation are physical ones in highly structured and predictable environments, as well as data collection and processing.
In the United States, these activities make up 51 percent of activities in the economy, accounting for almost $2.7 trillion in wages.
They are most prevalent in manufacturing, accommodation and food service, and retail trade. And it's not just low-skill, low-wage work that could be automated; middle-skill and high-paying, high-skill occupations, too, have a degree of automation potential".
This is Why Miners are Investing in Artificial Intelligence – Baystreet.ca
Mining has become essential. With a growing population, urbanization, demand for green energy, buildings, cars, and even more electronic gadgets, we'll see an increased need for metals. What makes mining even more valuable is the fact that we're already coming up short on necessary metals like copper, silver, platinum, palladium, nickel, cobalt, and rhodium, to name a few. That's just part of the reason investors may be pouring money into the red-hot mining sector, fueling potential upside for companies such as Windfall Geotek Inc. (TSXV:WIN) (OTC:WINKF)(FSE:L7C2), Emerita Resources Corp. (CVE:EMO)(OTC:EMOTF), Labrador Gold Corp. (TSXV:LAB)(OTC:NKOSF), Eskay Mining Corp. (TSXV:ESK)(OTC:ESKYF), and Goldshore Resources Inc. (TSXV:GSHR)(OTC:SMDXD).
In addition, the mining industry is seeing an increase in artificial intelligence (AI) investment across several key metrics, according to an analysis of GlobalData data, as noted by Mining Technology. Plus, according to PreScouter, McKinsey estimates that by 2035, the age of smart mining (achieved through autonomous mining using data analysis and digital technologies like artificial intelligence) will save mineral raw materials producers between $290 billion and $390 billion annually. That's beneficial for companies like Windfall Geotek, which is providing artificial intelligence (AI) to unearth minerals with a higher likelihood of success.
Windfall Geotek Provided AI Gold Target to Dios Exploration on K2 Project in Quebec
Windfall Geotek (TSXV:WIN; OTCQB:WINKF), a leader in the use of Artificial Intelligence (AI) with advanced knowledge-extraction techniques in the mining sector since 2005, is pleased to announce that it will provide an AI gold target on Dios Exploration's K2 project, located in the Eeyou Istchee James Bay region of the province of Quebec.
Dinesh Kandanchatha, Chairman of Windfall Geotek, commented: "We are pleased to announce this agreement with Dios Exploration, where we are able to take advantage of our history of work in that region, having done the Elmer project in the past with positive results. We are excited to be working with Dios Exploration in a win-win scenario at a strategic time, with the upcoming field work Dios is undertaking over summer 2021."
Marie-Josee Girard, President & Geologist of Dios Exploration Inc., commented: "We are thrilled to have acquired this high-probability target ahead of our summer 2021 campaign in the area. We have seen positive results with our neighbour in the same geological context, and we feel this target has good potential."
Other related developments from around the markets include:
Emerita Resources Corp. announced that it has received a resolution from the Mining Department in Huelva approving the proposed work program for the entire Iberia Belt West project, subject to the Company receiving final approval from the Environmental Department for the El Cura and La Romanera targets. Emerita has engaged FRASA Ingenieros Consultores, a highly reputable engineering firm with offices in Spain and internationally, to prepare the environmental documentation for the west side of the IBW project, including the El Cura and La Romanera targets, in order to obtain the Autorizacion Ambiental Unificada (AAU). FRASA are the environmental consultants used by major companies in the area for permitting and are well versed in the requirements to obtain work permits.
Labrador Gold Corp. announced another high-grade intercept of near-surface gold mineralization from its Kingsway project near Gander, Newfoundland. The Kingsway project is located in the highly prospective central Newfoundland gold belt. The high-grade intersection is from hole K-21-17, which contains fine particles of visible gold in a quartz vein. The hole intersected 50.38 g/t Au over 1.85 metres, including 160.42 g/t over 0.55 metres. The quartz vein containing the visible gold is typically vuggy and locally contains stylolites, and is similar to quartz veins that yielded previously reported high-grade gold intersections of 20.6 g/t Au over 3.6 metres (including 103.36 g/t over 0.3 metres) and 10.48 g/t Au over 2.4 metres.
Eskay Mining Corp. announced that it has commenced its 2021 exploration program with a property-wide SkyTEM Survey across its 100%-owned Consolidated Eskay precious metal-rich volcanogenic massive sulphide project in the Golden Triangle, British Columbia. SkyTEM is a powerful helicopter-supported electromagnetic technique that differentiates electrically resistive and conductive rocks in the subsurface. It is particularly helpful in recognizing areas of relatively conductive altered and sulphide-bearing rocks associated with stockwork VMS mineralization.
Goldshore Resources Inc. announced that it has received its exploration permit from the Ministry of Energy, Northern Development and Mines, allowing the Company to commence its drilling activities at the Moss Lake Gold Project. Brett Richards, President and Chief Executive Officer, commented: "I am pleased to have received our exploration permit, as it is one of the first milestones we have set for the Company, and it allows us to commence our planned 100,000 m drill program as part of a larger exploration strategy that will last close to 24 months."
Legal Disclaimer / Except for the historical information presented herein, matters discussed in this article contain forward-looking statements that are subject to certain risks and uncertainties that could cause actual results to differ materially from any future results, performance or achievements expressed or implied by such statements. Winning Media is not registered with any financial or securities regulatory authority and does not provide nor claims to provide investment advice or recommendations to readers of this release. For making specific investment decisions, readers should seek their own advice. Winning Media is only compensated for its services in the form of cash-based compensation. Pursuant to an agreement, Winning Media has been paid three thousand five hundred dollars by Windfall Geotek for advertising and marketing services. We own ZERO shares of Windfall Geotek. Please click here for full disclaimer.
Contact Information: 281-804-7972, [emailprotected]
Can artificial intelligence predict how sick you’ll get from COVID-19? UC San Diego scientists think so – The San Diego Union-Tribune
A team of San Diego scientists is harnessing artificial intelligence to understand why COVID-19 symptoms can vary dramatically from one person to the next, information that could prove useful in the continued fight against the coronavirus and future pandemics.
Researchers pored through publicly available data to see how other viruses alter which genes our cells turn on or off. Using that information, they found a set of genes activated across a wide range of infections, including the novel coronavirus. Those genes predicted whether someone would have a mild or a severe case of COVID-19, and whether they were likely to have a lengthy hospital stay.
A UC San Diego-led team joined by researchers at Scripps Research and the La Jolla Institute for Immunology published the findings June 11. The study's authors say their approach could help determine whether new treatments and vaccines are working.
"When the whole world faced this pandemic, it took several months for people to scramble to understand the new virus," said Dr. Pradipta Ghosh, a UCSD cell biologist and one of the study's authors. "I think we need more of this computational framework to guide us in panic states like this."
The project began in March 2020, when Ghosh teamed up with UCSD computer scientist Debashis Sahoo to better understand why the novel coronavirus was causing little to no symptoms in some people while wreaking havoc on others.
There was just one problem: The novel coronavirus was, well, novel, meaning there wasn't much data to learn from.
So Sahoo and Ghosh took a different tack. They went to public databases and downloaded 45,000 samples from a wide array of viral infections, including Ebola, Zika, influenza, HIV, and hepatitis C virus, among others.
Their hope was to find a shared response pattern to these viruses, and that's exactly what they saw: 166 genes that were consistently cranked up during infection. Among that list, 20 genes generally separated patients with mild symptoms from those who became severely ill.
The coronavirus was no exception. Sahoo and Ghosh say they identified this common viral response pattern well before testing it in samples from COVID-19 patients and infected cells, yet the results held up surprisingly well.
"It seemed to work in every data set we used," Sahoo said. "It was hard to believe."
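To illustrate the kind of analysis involved (this is not the authors' pipeline), a small gene panel can serve as features for a standard classifier; here random numbers stand in for real expression data.

```python
# Sketch of the classification idea: expression levels of a small gene
# panel predict mild vs. severe disease. Random numbers stand in for the
# real expression data; this is not the study's actual pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_patients, n_genes = 200, 20                # e.g., a 20-gene severity panel
X = rng.normal(size=(n_patients, n_genes))   # expression levels
y = rng.integers(0, 2, size=n_patients)      # 0 = mild, 1 = severe

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(f"Cross-validated AUC: {scores.mean():.2f}")
```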
They say their findings show that respiratory failure in COVID-19 patients is the result of overwhelming inflammation that damages the airways and, over time, makes immune cells less effective.
Stanford's Purvesh Khatri isn't surprised. His lab routinely uses computer algorithms and statistics to find patterns in large sets of immune response data. In 2015, Khatri's group found that respiratory viruses trigger a common response. And in April, they reported that this shared response applied to a range of other viruses, too, including the novel coronavirus.
That makes sense, Khatri says, because researchers have long known there are certain genes the immune system turns on in response to virtually any viral infection.
"Overall, the idea is pretty solid," said Khatri of the recent UCSD-led study. "The genes are all (the) usual suspects."
Sahoo and Ghosh continue to test their findings in new coronavirus data as it becomes available. They're particularly interested in COVID-19 long-haulers. Ghosh says they're already seeing that people with prolonged coronavirus symptoms have distinct gene activation patterns compared to those who've fully recovered. Think of it like a smoldering fire that won't die out.
The researchers' ultimate hope isn't just to predict and understand severe disease, but to stop it. For example, they say, a doctor could give a patient a different therapy if a blood sample suggests they're likely to get sicker on their current treatment. Ghosh adds that the gene pattern they're seeing could help identify promising new treatments and vaccines against future pandemics, based on which therapies prevent responses linked to severe disease.
"In unknown, uncharted territory, this provides guard rails for us to start looking around, understand (the virus), find solutions, build better models and, finally, find therapeutics," Ghosh said.
Making AI Sing: An Interview With Verphoria On The Use Of Artificial Intelligence Within The Music Industry – Forbes
Verphoria, Founder and CEO of Hierarchy Music
In today's music industry, the separation between digital and analogue is almost impossible to determine. At the most basic level, the majority of today's music is crafted using highly intelligent software. However, at the cutting edge of AI and the music industry, innovators are continuously pushing the boundaries of human/machine collaboration in musical creation as well as business.
One such innovator is Verónica Serjilus, known professionally as Verphoria, an American singer, record producer, songwriter, entrepreneur, and the Founder and CEO of Hierarchy Music, a global music company that connects musicians around the world with Grammy Award-winning, multi-platinum music services.
"At the crux of Hierarchy Music's operations is data AI and back-end exposure, which allow us to bring exposure to new artists, or existing artists and their brands, utilizing both Hierarchy Music and Hierarchy Media's back-end network," she says.
I spoke with Verphoria about her background as a musician as well as her perspective on the future of AI in music.
How did you get your start in the music industry?
I started singing at the age of four and started record producing at the age of 10.
At the age of 19, I was discovered by Aton Ben Horin and Ethan Curtis, the co-owners of the Grammy Award-winning, multi-platinum Plush Recording Studios. At the age of 22, I was invited to record at Paramount Recording Studio and Neighborhood Watche by renowned engineer/mixer Andrew "Drew" Chavez, where I continued to sharpen my skills in music.
My brand Verphoria gained popularity on Instagram and other social media platforms for music, which led to appearances at a number of red-carpet events, such as those hosted by Maxim Magazine and Sports Illustrated. I gained the attention of celebrity director Chris Applebaum (who directed Rihanna's "Umbrella" and has worked with Britney Spears, Kim Kardashian, Usher, Selena Gomez, Miley Cyrus, Demi Lovato, and Paris Hilton), who will be directing my music videos.
Who are your musical influences?
The biggest influences in my life are Michael Jackson, Rihanna, Britney Spears, Mariah Carey, Shakira, Wolfgang Mozart, and Beyoncé.
Do you use data and AI in your music or in your broader career?
To create my compositions, I use a digital audio workstation (DAW, an electronic device or application software used for recording, editing and producing audio files) called Ableton Live, with an AI plugin called Magenta Studio that allows me to experiment with open-source machine learning tools.
This AI grants me the ability to create learning models for musical melodies, patterns, and rhythms by using a mathematical model.
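Magenta Studio is built on Google's open-source Magenta project, whose companion note_seq library represents music as NoteSequence objects that Magenta models such as MelodyRNN consume when generating continuations. A minimal sketch, with an invented four-note melody:

```python
# Minimal sketch using note_seq, the companion library of Google's
# open-source Magenta project underlying Magenta Studio. It builds a short
# melody as a NoteSequence, the input format Magenta's generative models
# take when continuing or transforming a musical idea.
from note_seq.protobuf import music_pb2

melody = music_pb2.NoteSequence()
for i, pitch in enumerate([60, 62, 64, 65]):  # C, D, E, F (MIDI pitches)
    melody.notes.add(
        pitch=pitch,
        start_time=i * 0.5,       # half a second per note
        end_time=(i + 1) * 0.5,
        velocity=80,
    )
melody.tempos.add(qpm=120)  # quarter notes per minute

print(f"{len(melody.notes)} notes over {melody.notes[-1].end_time} seconds")
```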
What are your thoughts on the recent quote from artist Grimes in which she states, "Once there's actually AGI (Artificial General Intelligence), they're gonna be so much better at making art than us"?
I am going to disagree with that statement. AGI can be used to speed up the production of music, however, it cannot replace the emotion that comes from music produced by a human, nor can it recapitulate and evoke the emotional connection that musicians possess in the creation of their musical compositions.
Making good art is much more than following an algorithm; it's the emotional aspect that makes it touch people.
How do you think AI and data are shaping the industry as a whole?
AI will definitely become a bigger and bigger part of the music industry as it will in every other industry. It is not yet perfect, and it may not ever be perfect on its own, but the use of AI helps to streamline many of the more laborious processes in music production.
Whether this is a good or bad thing is up for debate.
In my personal opinion it is best used as a collaboration tool, not something to make a whole record without the touch of a human. This article brings up a lot of interesting questions and concerns that we will have to deal with in the near future. Now is an exciting time to be in the music industry as we grapple with these problems.
How have data and AI helped you build your career?
AI and back-end exposure have been instrumental in growing my personal brand Verphoria. Hierarchy Music and Hierarchy Media's data AI helped to grow my audience significantly by connecting my existing network to different network niches.
This helped in two main ways: it increased my exposure and helped me understand my audience's behavior.
The data gleaned from this process was invaluable in growing my brand relatively quickly compared to traditional methods.
What are issues in the music industry you think technology could help solve?
I believe a cloud-based DAW should be created, with unlimited data storage, so that the records being produced are continuously saved and not lost if the computer or hard drive is stolen.
How has technology made the business of being a musician easier? How has it made it more difficult?
The best thing about technology is that it has made becoming a musician more accessible to average people. With enough drive and the will to learn anyone can become a world-class musician. It has also made the technical aspects of making music easier. For example, we can make sure every note, melody, or rhythm is pitched and quantized correctly so there are no mistakes or flaws in the notes.
As for how it has made things more difficult? That is harder for me to answer.
Technology has been constantly evolving throughout my life, so to me it is second nature and is definitely not a problem; but for those people not as comfortable with the ever-changing nature of tech, it can pose some difficulties.
The sense and nonsense of Artificial Intelligence in the greenhouse – hortidaily.com
AI is a term that is used frequently in the horticultural industry. Because of this, one sometimes gets the impression that AI is going to solve all future problems in the industry. Think, for example, of the shortage of employees and specifically trained growers. Will AI then ensure that all work can be taken over by robots in the future? A dream for some, but also a horror scenario for many. Is AI going to replace people? "The answer to that, in our opinion, is no," says Ton van Dijk with LetsGrow.com. In this article, he explains how AI can contribute to horticulture.
LetsGrow.com believes that AI helps to make better decisions, but it is certainly not going to replace a grower or crop advisor. AI does bring the grower and crop advisor possibilities to control larger areas from a distance. Expert Knowledge combined with Artificial Intelligence is the golden combination. But more about that later.
So how exactly does AI work?
It is often thought that pooling as much data as possible from different growers is the recipe for developing an algorithm, an algorithm that might make automatic cultivation possible. However, this is not the right way. Firstly, because growers' data always remain the property of the grower and cannot be used in this way. Secondly, because using data from the past to develop algorithms and predictions for the future does not ensure optimization: in that situation, one carries any mistakes from the past into development. AI makes it possible to perform tasks faster and sometimes better than people, but only if the algorithm is built properly.
So what about horticulture?
Over the past 50 years, much scientific research has been done on plant physiology and on the physics in and around a greenhouse. As a result of this research, LetsGrow.com knows how a plant grows and how to make the plant as comfortable as possible. Optimum conditions ultimately ensure a high-quality crop and a higher yield. Growers and cultivation advisors use their own experiences and calculations for this, which is also called Expert Intelligence (EI). Systems can already be set up to issue alerts to the grower when needed. This already ensures that growers can focus on more strategic matters. LetsGrow.com contributes to this by visualizing and analyzing the grower's data. These results make it possible to cultivate in a data-driven way. But it can be even more extensive.
By combining outcomes from EI with other external data, such as weather forecasts, it is possible to create an optimal situation for the plant. For example, a well-constructed Machine Learning model can predict when plant stress may occur. A grower can use this information to adjust the cultivation strategy and prevent that stress from actually occurring. In this way, the combination of data, EI and AI provides predictive insight to the grower. This gives the grower the possibility to create an even more stable and optimal crop.
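As a hypothetical sketch of what such a prediction could look like (this is not LetsGrow.com's actual model), a classifier could be trained on past climate readings and forecasts, with observed plant stress on the following day as the label. Feature names and data below are invented.

```python
# Illustrative sketch (not LetsGrow.com's model): a classifier trained on
# greenhouse climate readings plus the weather forecast to flag likely
# plant stress a day ahead. Feature names and data are hypothetical.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

features = ["temp_c", "humidity_pct", "co2_ppm", "radiation_wm2",
            "forecast_temp_c", "forecast_radiation_wm2"]

rng = np.random.default_rng(1)
X_train = rng.normal(size=(500, len(features)))  # historical readings
y_train = rng.integers(0, 2, size=500)           # 1 = stress observed next day

model = GradientBoostingClassifier().fit(X_train, y_train)

tomorrow = rng.normal(size=(1, len(features)))   # today's readings + forecast
p_stress = model.predict_proba(tomorrow)[0, 1]
if p_stress > 0.7:
    print("High stress risk: consider adjusting screens, misting or CO2 dosing.")
```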
There are so many more possibilities with AI. Think of automatic image recognition by placing cameras in the greenhouse. Together with partner Gearbox, LetsGrow.com is already making this possible. In addition, LetsGrow.com also works together with HortiKey and their Plantalyzer. This is a robot that rides along the pipe rail and takes pictures of the crop. The advanced AI in this robot is able to recognize the number of fruits or flowers in the path and to analyze growth. This recognition makes it possible for LetsGrow.com to make more accurate yield predictions and visualize them in the MyLetsGrow dashboard. A grower can use this data to work in a more targeted way, for example by timing the sales of his product and selling at higher margins.
So is AI now taking over from humans?
No. At all times it is important that the combination between computer and human remains. A combination of Expert Intelligence and Artificial Intelligence. Humans determine the strategy at the start of the process based on the commercial requirements for that year. In addition, humans must always be able to intervene in case of calamities. AI therefore certainly does not make people superfluous, but it does enable people to manage and optimize larger areas without being too preoccupied with peripheral issues.
For more information: LetsGrow.com, info@letsgrow.com, http://www.letsgrow.com
Interested in Artificial Intelligence? Check Out This New Weekly Series – Pasadena Now
Innovate Pasadena and Artificial Intelligence Los Angeles (AILA) are partnering with Global Research Methods and Data Science (RMDS) for a free online meet-up, "AI Bias and Surveillance: Recognition, Analysis and Prediction," on Tuesday, June 1, 4 to 5 p.m.
The program is targeted to anyone interested in AI bias in detection and analysis systems (face, object, language, emotion) and surveillance in public, private and professional contexts.
No previous knowledge is required of those attending as long as the participants are willing to appreciate diversity of perspectives and think critically about power, harms and risks.
This will be the first of six one-hour per week sessions through Zoom, to be led by Merve Hickok, founder of AIethicist, which provides reference and research material for anyone interested in the current discussions on AI ethics and impact of AI on individuals and society.
The program begins with fundamental concepts in AI ethics and human rights in the first week. Weeks 2 and 3 move on to the specifics of how recognition and analysis systems based on faces or objects, NLP (natural language processing) and affective computing manifest bias.
Weeks 4 to 6 discuss how data is collected and connected both online and offline, how the recognition and analysis systems are used in different settings, for which purposes, and what the consequences are.
Attendees will be provided with short reports and/or videos for each class, which they can read and watch either ahead of or after the class. The live session will allow for discussion of concepts and cases.
Merve Hickok is an independent consultant and trainer focused on capacity building in ethical and responsible AI and governance of AI systems. She is currently an instructor on Data Ethics at the University of Michigan School of Information, senior researcher at the Center for AI and Digital Policy, founding editorial board member of Springer Nature AI and Ethics journal, and one of the 100 Brilliant Women in AI Ethics 2021.
She is a fellow at ForHumanity Center, a regional lead for Women in AI Ethics Collective, and sits in a number of IEEE (Institute of Electrical and Electronics Engineers) and IEC (International Electrotechnical Commission) work groups that set global standards for autonomous systems. Previously, Hickok was Vice President of HR in a number of different roles with Bank of America Merrill Lynch.
To register for Tuesdays virtual meetup, visit http://www.innovatepasadena.org/events/ai-bias-surveillance-recognition-analysis-prediction and click on the Attend Event button.
For more information, call (213) 245-1817.
The Not-So-Hidden FTC Guidance On Organizational Use Of Artificial Intelligence (AI), From Data Gathering Through Model Audits – Technology – United…
Our last AI post on this blog, the New (if Decidedly Not 'Final') Frontier of Artificial Intelligence Regulation, touched on both the Federal Trade Commission's (FTC) April 19, 2021, AI guidance and the European Commission's proposed AI Regulation. The FTC's 2021 guidance referenced, in large part, the FTC's April 2020 post "Using Artificial Intelligence and Algorithms." The recent FTC guidance also relied on older FTC work on AI, including a January 2016 report, "Big Data: A Tool for Inclusion or Exclusion?," which in turn followed a September 15, 2014, workshop on the same topic. The Big Data workshop addressed data modeling, data mining and analytics, and gave us a prospective look at what would become an FTC strategy on AI.
The FTC's guidance begins with the data, and the 2016 guidance on big data and subsequent AI development addresses this most directly. The 2020 guidance then highlights important principles such as transparency, explainability, fairness, accuracy and accountability for organizations to consider. And the 2021 guidance elaborates on how consent, or opt-in, mechanisms work when an organization is gathering the data used for model development.
Taken together, the three sets of FTC guidance (2021, 2020 and 2016) provide insight into the FTC's approach to organizational use of AI, which spans a vast portion of the data life cycle, including the creation, refinement, use and back-end auditing of AI. As a whole, the various pieces of FTC guidance also provide a multistep process for what the FTC appears to view as responsible AI use. In this post, we summarize our takeaways from the FTC's AI guidance across the data life cycle to provide a practical approach to responsible AI deployment.
Evaluation of a data set should assess the quality of the data (including accuracy, completeness and representativeness), and if the data set is missing certain population data, the organization must take appropriate steps to address and remedy that issue (2016).
An organization must honor promises made to consumers and provide consumers with substantive information about the organization's data practices when gathering information for AI purposes (2016). Any related opt-in mechanisms for such data gathering must operate as disclosed to consumers (2021).
An organization should recognize the data compilation step as a "descriptive activity," which the FTC defines as a process aimed at uncovering and summarizing "patterns or features that exist in data sets" - a reference to data mining scholarship (2016) (note that the FTC's referenced materials originally at mmds.org are now redirected).
Compilation efforts should be organized around a life cycle model that provides for compilation and consolidation before moving on to data mining, analytics and use (2016).
An organization must recognize that there may be uncorrected biases in underlying consumer data that will surface in a compilation; therefore, an organization should review data sets to ensure hidden biases are not creating unintended discriminatory impacts (2016).
An organization should maintain reasonable security over consumer data (2016).
If data are collected from individuals in a deceitful or otherwise inappropriate manner, the organization may need to delete the data (2021).
An organization should recognize the model and AI application selection step as a predictive activity, where an organization is using "statistical models to generate new data" - a reference to predictive analytics scholarship (2016).
An organization must determine if a proposed data model or application properly accounts for biases (2016). Where there are shortcomings in the data model, the model's use must be accordingly limited (2021).
Organizations that build AI models may "not sell their big data analytics products to customers if they know or have reason to know that those customers will use the products for fraudulent or discriminatory purposes." An organization must, therefore, evaluate potential limitations on the provision or use of AI applications to ensure there is a "permissible purpose" for the use of the application (2016).
Finally, as a general rule, the FTC asserts that under the FTC Act, a practice is patently unfair if it causes more harm than good (2021).
Organizations must design models to account for data gaps (2021).
Organizations must consider whether their reliance on particular AI models raises ethical or fairness concerns (2016).
Organizations must consider the end uses of the models and cannot create, market or sell "insights" used for fraudulent or discriminatory purposes (2016).
Organizations must test the algorithm before use (2021). This testing should include an evaluation of AI outcomes (2020).
Organizations must consider prediction accuracy when using "big data" (2016).
Model evaluation must focus on both inputs and outcomes; AI models may not discriminate against a protected class (2020).
Input evaluation should include considerations of ethnically based factors or proxies for such factors.
Outcome evaluation is critical for all models, including facially neutral models.
Model evaluation should consider alternative models, as the FTC can challenge models if a less discriminatory alternative would achieve the same results (2020).
If data are collected from individuals in a deceptive, unfair, or illegal manner, deletion of any AI models or algorithms developed from the data may also be required (2021).
Organizations must be transparent and not mislead consumers "about the nature of the interaction", and not utilize fake "engager profiles" as part of their AI services (2020).
Organizations cannot exaggerate an AI model's efficacy or misinform consumers about whether AI results are fair or unbiased. According to the FTC, deceptive AI statements are actionable (2021).
If algorithms are used to assign scores to consumers, an organization must disclose key factors that affect the score, rank-ordered according to importance (2020).
Organizations providing certain types of reports through AI services must also provide notices to the users of such reports (2016).
Organizations building AI models based on consumer data must, at least in some circumstances, allow consumers access to the information supporting the AI models (2016).
Automated decisions based on third-party data may require the organization using the third-party data to provide the consumer with an "adverse action" notice (for example, if under the Fair Credit Reporting Act, 15 U.S.C. § 1681 (Rev. Sept. 2018), such decisions deny an applicant an apartment or charge them a higher rent) (2020).
General "you don't meet our criteria" disclosures are not sufficient. The FTC expects end users to know what specific data are used in the AI model and how the data are used by the AI model to make a decision (2020).
Organizations that change specific terms of deals based on automated systems must disclose the changes and reasoning to consumers (2020).
Organizations should provide consumers with an opportunity to amend or supplement information used to make decisions about them (2020) and allow consumers to correct errors or inaccuracies in their personal information (2016).
When deploying models, organizations must confirm that the AI models have been validated to ensure they work as intended and do not illegally discriminate (2020).
Organizations must carefully evaluate and select an appropriate AI accountability mechanism, transparency framework and/or independent standard, and implement as applicable (2020).
An organization should determine the fairness of an AI model by examining whether the particular model causes, or is likely to cause, substantial harm to consumers that is not reasonably avoidable and not outweighed by countervailing benefits (2021).
Organizations must test AI models periodically to revalidate that they function as intended (2020) and to ensure a lack of discriminatory effects (2021).
Organizations must account for compliance, ethics, fairness and equality when using AI models, taking into account four key questions (2016; 2020):
How representative is the data set?
Does the AI model account for biases?
How accurate are the AI predictions?
Does the reliance on the data set raise ethical or fairness concerns?
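Outcome testing of this kind can be made concrete with a simple audit script. The sketch below, with invented data and the common four-fifths threshold, compares outcome rates across groups; it illustrates one possible check, not an FTC-mandated procedure.

```python
# Hedged sketch of one back-end audit step: compare model outcome rates
# across groups and compute a disparate-impact ratio. Column names, data
# and the 0.8 threshold (the common "four-fifths rule") are illustrative.
import pandas as pd

results = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0,   0],
})

rates = results.groupby("group")["approved"].mean()
di_ratio = rates.min() / rates.max()

print(rates)
print(f"Disparate-impact ratio: {di_ratio:.2f}")
if di_ratio < 0.8:
    print("Potential adverse impact: investigate and consider alternative models.")
```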
Organizations must embrace transparency and independence, which can be achieved in part through the following (2021):
Using independent, third-party audit processes and auditors, which are immune to the intent of the AI model.
Ensuring data sets and AI source code are open to external inspection.
Applying appropriate recognized AI transparency frameworks, accountability mechanisms and independent standards.
Publishing the results of third-party AI audits.
Organizations remain accountable throughout the AI data life cycle under the FTC's recommendations for AI transparency and independence (2021).
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
Artificial intelligence system could help counter the spread of disinformation – MIT News
Disinformation campaigns are not new; think of wartime propaganda used to sway public opinion against an enemy. What is new, however, is the use of the internet and social media to spread these campaigns. The spread of disinformation via social media has the power to change elections, strengthen conspiracy theories, and sow discord.
Steven Smith, a staff member from MIT Lincoln Laboratory's Artificial Intelligence Software Architectures and Algorithms Group, is part of a team that set out to better understand these campaigns by launching the Reconnaissance of Influence Operations (RIO) program. Their goal was to create a system that would automatically detect disinformation narratives, as well as the individuals spreading those narratives within social media networks. Earlier this year, the team published a paper on their work in the Proceedings of the National Academy of Sciences, and they received an R&D 100 award last fall.
The project originated in 2014 when Smith and colleagues were studying how malicious groups could exploit social media. They noticed increased and unusual activity in social media data from accounts that had the appearance of pushing pro-Russian narratives.
"We were kind of scratching our heads," Smith says of the data. So the team applied for internal funding through the laboratorys Technology Office and launched the program in order to study whether similar techniques would be used in the 2017 French elections.
In the 30 days leading up to the election, the RIO team collected real-time social media data to search for and analyze the spread of disinformation. In total, they compiled 28 million Twitter posts from 1 million accounts. Then, using the RIO system, they were able to detect disinformation accounts with 96 percent precision.
What makes the RIO system unique is that it combines multiple analytics techniques in order to create a comprehensive view of where and how the disinformation narratives are spreading.
"If you are trying to answer the question of who is influential on a social network, traditionally, people look at activity counts," says Edward Kao, who is another member of the research team. On Twitter, for example, analysts would consider the number of tweets and retweets. "What we found is that in many cases this is not sufficient. It doesnt actually tell you the impact of the accounts on the social network."
As part of Kao's PhD work in the laboratory's Lincoln Scholars program, a tuition fellowship program, he developed a statistical approach now used in RIO to help determine not only whether a social media account is spreading disinformation but also how much the account causes the network as a whole to change and amplify the message.
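As a loose illustration of why raw counts can mislead (this is not Kao's statistical method), compare activity counts with a network measure such as PageRank over a small, hypothetical who-retweets-whom graph:

```python
# Illustration of the point above (not RIO's actual approach): raw activity
# counts can rank accounts very differently from a network measure such as
# PageRank. Edge u -> v means account v echoed (retweeted) account u.
import networkx as nx

G = nx.DiGraph()
# loud_bot is echoed by many accounts that nobody else follows
G.add_edges_from(("loud_bot", f"z{i}") for i in range(5))
# quiet_seed is echoed once, by a hub whose posts many others echo
G.add_edge("quiet_seed", "hub")
G.add_edges_from(("hub", f"a{i}") for i in range(5))

counts = dict(G.out_degree())          # naive activity: times echoed
influence = nx.pagerank(G.reverse())   # network view: credit flows to sources

print("echo counts:", {n: counts[n] for n in ("loud_bot", "quiet_seed")})
print("network influence:",
      {n: round(influence[n], 3) for n in ("loud_bot", "quiet_seed")})
```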
Erika Mackin, another research team member, also applied a new machine learning approach that helps RIO to classify these accounts by looking into data related to behaviors such as whether the account interacts with foreign media and what languages it uses. This approach allows RIO to detect hostile accounts that are active in diverse campaigns, ranging from the 2017 French presidential elections to the spread of Covid-19 disinformation.
Another unique aspect of RIO is that it can detect and quantify the impact of accounts operated by both bots and humans, whereas most automated systems in use today detect bots only. RIO also has the ability to help those using the system to forecast how different countermeasures might halt the spread of a particular disinformation campaign.
The team envisions RIO being used by both government and industry as well as beyond social media and in the realm of traditional media such as newspapers and television. Currently, they are working with West Point student Joseph Schlessinger, who is also a graduate student at MIT and a military fellow at Lincoln Laboratory, to understand how narratives spread across European media outlets. A new follow-on program is also underway to dive into the cognitive aspects of influence operations and how individual attitudes and behaviors are affected by disinformation.
"Defending against disinformation is not only a matter of national security, but also about protecting democracy," says Kao.
AI is learning how to create itself – MIT Technology Review
But there's another crucial observation here. Intelligence was never an endpoint for evolution, something to aim for. Instead, it emerged in many different forms from countless tiny solutions to challenges that allowed living things to survive and take on future challenges. Intelligence is the current high point in an ongoing and open-ended process. In this sense, evolution is quite different from algorithms as people typically think of them: as means to an end.
It's this open-endedness, glimpsed in the apparently aimless sequence of challenges generated by POET, that Clune and others believe could lead to new kinds of AI. For decades, AI researchers have tried to build algorithms to mimic human intelligence, but the real breakthrough may come from building algorithms that try to mimic the open-ended problem-solving of evolution, and sitting back to watch what emerges.
Researchers are already using machine learning on itself, training it to find solutions to some of the field's hardest problems, such as how to make machines that can learn more than one task at a time or cope with situations they have not encountered before. Some now think that taking this approach and running with it might be the best path to artificial general intelligence. "We could start an algorithm that initially does not have much intelligence inside it, and watch it bootstrap itself all the way up potentially to AGI," Clune says.
The truth is that for now, AGI remains a fantasy. But that's largely because nobody knows how to make it. Advances in AI are piecemeal and carried out by humans, with progress typically involving tweaks to existing techniques or algorithms, yielding incremental leaps in performance or accuracy. Clune characterizes these efforts as attempts to discover the building blocks for artificial intelligence without knowing what you're looking for or how many blocks you'll need. And that's just the start. "At some point, we have to take on the Herculean task of putting them all together," he says.
Asking AI to find and assemble those building blocks for us is a paradigm shift. It's saying: we want to create an intelligent machine, but we don't care what it might look like; just give us whatever works.
Even if AGI is never achieved, the self-teaching approach may still change what sorts of AI are created. "The world needs more than a very good Go player," says Clune. For him, creating a supersmart machine means building a system that invents its own challenges, solves them, and then invents new ones. POET is a tiny glimpse of this in action. Clune imagines a machine that teaches a bot to walk, then to play hopscotch, then maybe to play Go. "Then maybe it learns math puzzles and starts inventing its own challenges," he says. The system continuously innovates, and the sky's the limit in terms of where it might go.
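A toy sketch of that open-ended loop, with deliberately simplistic stand-ins for environments and agents (nothing here comes from the POET paper itself), might look like this:

```python
# Toy sketch of the open-ended, POET-like paradigm described above: keep a
# population of (challenge, solver) pairs, spawn harder challenge variants
# over time, optimize each solver locally, and transfer better solvers
# between pairs. All quantities are deliberately tiny stand-ins.
import random

random.seed(0)
population = [{"difficulty": 1.0, "skill": 0.0}]  # one (env, agent) pair

def optimize(pair):
    """Local learning step: nudge the agent's skill toward its challenge."""
    pair["skill"] += 0.3 * (pair["difficulty"] - pair["skill"])

for step in range(50):
    # 1. Occasionally spawn a mutated, harder variant of an existing challenge.
    if step % 10 == 0 and len(population) < 5:
        parent = random.choice(population)
        population.append({"difficulty": parent["difficulty"] * 1.5,
                           "skill": parent["skill"]})
    # 2. Optimize every agent against its own challenge.
    for pair in population:
        optimize(pair)
    # 3. Transfer: let each pair adopt (most of) the best skill found anywhere.
    best = max(p["skill"] for p in population)
    for pair in population:
        pair["skill"] = max(pair["skill"], 0.9 * best)

print("challenge difficulties:", [round(p["difficulty"], 1) for p in population])
```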