Category Archives: Artificial Intelligence
Americans worry these ‘creepy’ deepfakes will manipulate people in 2024 election, ‘disturbingly false’ – Fox News
Americans in Silicon Valley are predicting advanced artificial intelligence could significantly influence and manipulate voters in the 2024 elections, with a potential for "disturbingly false" political advertising to push agendas.
"I've seen some hilarious videos and some concerning ones where it's getting too realistic," Travis, of San Jose, Cailfornia, said. "It's a little creepy."
WATCH MORE FOX NEWS DIGITAL ORIGINALS HERE
As advanced artificial intelligence applications proliferate across industries, the rapidly evolving technology has raised concerns about its ability to manipulate elections, with some 2024 presidential campaigns already utilizing the tool. Former President Donald Trump's presidential campaign, for example, triggered an uproar on X after using artificial intelligence to recreate Florida Gov. Ron DeSantis' 2024 presidential announcement with fictional guests, including billionaire Democratic donor George Soros, World Economic Forum Chair Klaus Schwab, former Vice President Dick Cheney, Adolf Hitler, the devil and the FBI.
"I think it will worsen the circumstances with fake postings," Richard said. "I think a lot of the political advertising has the potential to become disturbingly false using AI. It's gradually improving significantly, and I think there's a tremendous motivation for people trying to push a particular agenda."
Former President Donald Trump, right, and Florida Gov. Ron DeSantis' 2024 presidential campaigns have traded blows using AI-generated content. DeSantis' campaign posted an AI-generated image of Trump affectionately hugging Anthony Fauci. The Trump campaign also used AI to recreate DeSantis' 2024 presidential announcement with fictional guests, including Adolf Hitler. (AP Photo, File)
WHAT IS ARTIFICIAL INTELLIGENCE (AI)?
Claire said voters could have trouble differentiating real and AI-generated content.
"People aren't going to be able to distinguish between AI and real reporting," Claire told Fox News. "What's fake and what's real was already kind of an issue with 2020, and I think it's going to continue to get worse in 2024 because some of it is extremely convincing."
Richard says advanced artificial intelligence could increase the amount of AI-generated content used in campaign ads. (Fox News/Jon Michael Raasch)
FEAR AT 10: SENATORS' CONCERNS SPIKE ON IMPACT OF ARTIFICIAL INTELLIGENCE TO CHANGE VOTES IN 2024
DeSantis' campaign also used AI-generated audio and video to criticize Trump's policies, including one portraying a fictional image of Trump hugging Anthony Fauci posted on social media in June.
Another campaign ad, created by a PAC supporting DeSantis, used AI-generated audio to mimic Trump's voice criticizing Iowa Gov. Kim Reynolds. The AI voice appears to have been based on comments Trump wrote on Truth Social but never said aloud.
Steve fears AI will lead to voter manipulation in the 2024 election. (Fox News/Jon Michael Raasch)
"I think AI will be used to manipulate people into doing things that they're not quite sure they wanted to do," Steve said. "That's going to be a big impact that goes under the radar. I think public opinion will be shaped in a large way."
Ken said Americans will have to learn to distinguish between real and deceptively manipulated campaign ads when making important voting decisions.
CLICK HERE TO GET THE FOX NEWS APP
"I think there's going to be a period where we're going to be influenced by what AI presents," he said. "It's going to take some time for people to kind of wise up and understand that we live in a different world."
"You can't really trust what you see and hear anymore," Ken continued. "It's going to be interesting how this shapes how this shapes us."
Ramiro Vargas contributed to the accompanying video.
Continued here:
Americans worry these 'creepy' deepfakes will manipulate people in 2024 election, 'disturbingly false' - Fox News
A.I. (Artificial Intelligence) is not the Problem: We Need More Diverse and Inclusive Humans in the Tech Sector – Insight News
Just a few days after I posted a version of this column on @Medium, the Chronicle of Philanthropy ran this as its lead story: "A.I. Could Prove Disastrous for Democracy: How Can Philanthropy Prepare?" (https://www.philanthropy.com/article/a-i-could-prove-disastrous-for-democracy-how-can-philanthropy-prepare).
The author, #GordonWhitman, asserted that we need less A.I. in the world of philanthropy and more human connection. Why? Because donors can't discern the difference between an A.I.-generated voice and a real person begging for money.
What he fails to acknowledge is that humans (people) are a major part of the problem with A.I.
Less than a week ago, on #LinkedIn, I read this post about an #NPR story by @Carmen Drahl, "AI was asked to create images of Black African docs treating white kids. How'd it go?" (https://www.npr.org/sections/goatsandsoda/2023/10/06/1201840678/ai-was-asked-to-create-images-of-black-african-docs-treating-white-kids-howd-it-?). My #blackanthropology colleague, Dr. David Simmons (https://www.linkedin.com/in/david-simmons-87743a4/ ), responded to the article with this observation about the real danger behind #AI, and also an appeal:
AI still relies on humans, complete with their biases and assumptions, both implicit and explicit. Let's work towards creating AI systems that are more inclusive.
The Oxford University researcher that Drahl wrote about, Dr. Arsenii Alenichev (https://www.ethox.ox.ac.uk/team/arsenii-alenichev ), had tried an experiment. The results he and his team of scientists reached over and over again proved that our fear should not be that #artificialintelligence will take over and ruin the human world. After all, A.I. is constrained by its data parameters.
What we do need to fear is how the coding and input of data into A.I. are done by human beings, who come already socialized and filled with cultural biases! Drahl explains,
[Alenichev's] goal was to see if AI would come up with images that flip the stereotype of white saviors or the suffering Black kids. [He stated], "We wanted to invert your typical global health tropes."
They realized AI did fine at providing on-point images if asked to show either Black African doctors or white suffering children. It was the combination of those two requests that was problematic.
Racial and Gender Inequality in Silicon Valley
What Alenichev learned was this: a computerized intelligence cannot imagine or configure anything beyond its programmers' imagination. Artificial intelligence is locked into the social and cultural norms and conditioning of the people who are feeding it the information. And, while sometimes information can be neutral, more often than not, it is also accompanied by interpretations and value judgments.
Thus, if a (white) programmer cannot conceive of a Black doctor helping white suffering children, then that bias is coded into the A.I. In short, any machine (or A.I.) is only as smart (or empathetic) as the people who initially coded and input the data.
According to UC Santa Barbara sociologist and ethnographer Dr. France Winddance Twine, we probably shouldn't hold our breath for an inclusive A.I., as Simmons requested; it ain't gonna happen.
Winddance documents in her latest book, Geek Girls: Inequality and Opportunity in Silicon Valley, how implicit and explicit racial biases and gender inequality abound in Silicon Valley! She concludes the book with this statement:
The technology sector is unjust and not yet a vehicle for economic justice and social mobility for everyone.
What's an A.I. to do?
So, what's an A.I. to do?
Well, we know that artificial intelligence is not autonomous. It cannot create anything, at least not at this moment in time, outside of the existing information stored within its database.
A.I. can reconfigure and make up facts, and it can also plagiarize and create false data by linking things together and stealing online content from human researchers and writers, as Matt Novak pointed out in a May 2023 Forbes article about the new Google search engine: "Google's New AI-Powered Search Is A Beautiful Plagiarism Machine."
At this moment in time, A.I. does not have its own autonomous scarecrow brain; it simply mimics and expands upon its existing program.
It is true, if we believe Isaac Asimov in his Robot series and the Will Smith movie I, Robot, that A.I. could become a supercomputer, but it cannot, as Azmera Hammouri-Davis, M.T.S., says, #breaktheboxes of its human programmer.
Our greatest fear should not be of an autonomous A.I., like HAL the computer in 2001: A Space Odyssey, though A.I.s are destined to create massive unemployment for laborers who are unskilled in the use of technology. As the Chronicle of Philanthropy reminds us, they can be made to sound human.
Nonetheless, our greatest fear about A.I.s MUST be that they are being supercoded with #whitesupremacy ideology and #genderinequality data.
And, don't act surprised! This is not new stuff. Groups like the Critical Code Studies Working Group at the University of Southern California, now called The Humanities and Critical Code Lab (HaCCS), led by Mark Marino, have been looking at issues of inequality in coding since 2018.
Indeed, I discovered this fact some time ago, before Google began reading critiques of its coding practices. What I found was that if you typed #blackbeauty into the Google search engine, all that appeared were images of horses, like in the movie Black Beauty.
Conversely, if you typed in #beauty, only images of #whitewomen appeared. Since then, Google has become more #WOKE and updated some images in the search parameters connected to these words. But whiteness still prevails.
These are just a few of the known biases coded into #A.I. historically, and the recent experiments by Dr. Alenichev prove that racial stereotypes are still prevalent, such that in the coded minds of A.I. (and its programmers), all the suffering children are Black and nonwhite and ALL the medical doctor saviors are white.
In the case of Black doctors treating white suffering children, such biases and assumptions against this as a possibility are rooted in #whitesupremacy ideology and beliefs; disbelief in the professionalism of Black people is part of the tacit anti-Blackness knowledge around which most white people are socialized in America, and Europeans globally.
These human beliefs and biases will not change/cannot change until medical schools are more diverse, and Silicon Valley becomes equal, ungendered, diverse, equitable, and inclusive!
It is not the A.I. that needs a #DEI reboot, but the human beings that code them sure do!
But don't hold your breath for immediate change.
The current climate of anti-CRT, anti-Blackness, and attempts to whitewash American history and negate hundreds of years of human enslavement, suffering, and ongoing Black and Indigenous generational trauma, disparities, and inequality (https://www.politico.com/news/2023/07/24/florida-desantis-black-history-education-00107859) suggests little hope for change towards a more socially and racially intelligent A.I., based on the current state of biases in the tech industry in Silicon Valley and the mindset of its human coding professionals.
(c) 2023 Irma McClaurin
An earlier version published on Medium, Oct 20, 2023 (https://irmamcclaurin.medium.com/ai-is-not-the-problem-we-need-more-diverse-and-inclusive-humans-in-the-tech-sector-7cbec2ad2b77)
Dr. Irma McClaurin (https://linktr.ee/dr.irma/@mcclaurintweets) is a digital columnist on Medium, Culture and Education Editor for Insight News, and a Ms. Magazine author. She is the founder of the Irma McClaurin Black Feminist Archive at the University of Massachusetts Amherst. An activist Black Feminist anthropologist and award-winning author, she was recognized in 2015 by the Black Press of America as Best in the Nation Columnist. She is a past president of Shaw University and was recently featured in the PBS American Experience documentary on Zora Neale Hurston as an anthropologist. A collection of her Insight News columns, Justspeak: Reflections on Race, Culture & Politics in America, is forthcoming. She is also working on a book manuscript on Zora Neale Hurston and anthropology, as well as a collection of short vignettes entitled Confessions of a Barstool DIVAH.
The rest is here:
A.I. (Artificial Intelligence) is not the Problem: We Need More Diverse and Inclusive Humans in the Tech Sector - Insight News
One on One: Economics, artificial intelligence, and the nation’s wealth – The Daily Herald
The late Davidson economics professor Charles Ratliff was a great teacher who almost led me to a beginning understanding of economics.
Although not accomplishing that objective, he left me with a love of the subject and a long-standing interest in learning more. As a part of this course, Ratliff taught us the history of economic thought.
He used Paul Samuelson's text, titled simply Economics, as our guide. Samuelson, like Ratliff, was a Keynesian, which meant, I think, that when a nation's economy is struggling, it is a time for the government to pour money into the economy to stimulate activity.
It was, and still is, hard for me to understand how all that works, but I am comforted by the fact that others also have trouble dealing with economic theory.
A few years ago, I tried to get Professor Ratliff to help me understand how these things work. I asked him, "How does the government pouring money into the economy help it grow?"
"Well," Ratliff said, "that depends on what you mean by money."
I am still struggling with his response to my query. I thought of it again the other day when I read about the death of another noted Keynesian, Robert M. Solow, the winner in 1987 of the Nobel Memorial Prize in economic sciences.
According to his obituary by Robert D. Hershey Jr. and Michael M. Weinstein in the Dec. 21, 2023, edition of The New York Times, "He won the Nobel for his theory that advances in technology, rather than increases in capital and labor, have been the primary drivers of economic growth in the United States."
Before Solow set out a different approach, it was generally accepted that economic growth was determined by the growth of capital and labor. But according to his obituary, Solow could not find data to confirm that common-sense presumption.
What then does determine growth? Entrepreneurs? Geography? Legal institutions? Something else?
Solow told the writers who, years in advance, were preparing his obituary, "I discovered to my great surprise that the main source of growth was not capital investment but technological change."
What kind of technological change would lead to growth? The telephone? The steam engine? The computer?
The technological change that promises to grow the current economy is, of course, Artificial Intelligence or A.I.
Already, A.I. is taking on tasks that would be impossible or prohibitively expensive if using ordinary research tools.
Given an assignment to write a news article that would include a history of government regulation of atomic energy, for instance, A.I. could sort the text of every newspaper report ever written on the topic and select the relevant material. Then, it could instantly assemble a news article that would have taken a reporter hundreds of hours, days or even years, to research and write.
Recognizing the value of A.I.s contribution, there is still a problem. Where does A.I. get the newspaper texts and other necessary information to assemble and write its report? Who, if anyone, must it compensate for the use of these materials?
The New York Times took an important step towards finding an answer to this question last week when it sued A.I. entities, including OpenAI and Microsoft, owners of the popular A.I. program ChatGPT.
The lawsuit accuses the defendants of seeking a free ride on The Times's massive investment in its journalism and alleges that OpenAI and Microsoft are using The Times's content without payment to create products that substitute for The Times and steal audiences away from it.
However the lawsuit turns out, A.I. is here to stay.
I wish Professor Solow were here to explain how and how much it could increase the nations wealth.
D.G. Martin, a lawyer, served as the UNC System's vice president for public affairs and hosted PBS-NC's North Carolina Bookwatch.
Read more here:
One on One: Economics, artificial intelligence, and the nation's wealth - The Daily Herald
Putin to boost AI work in Russia to fight a Western monopoly he says is ‘unacceptable and dangerous’ – The Associated Press
MOSCOW (AP) - Russian President Vladimir Putin on Friday announced a plan to endorse a national strategy for the development of artificial intelligence, emphasizing that it's essential to prevent a Western monopoly.
Speaking at an AI conference in Moscow, Putin noted that it's imperative to use Russian solutions in the field of creating reliable and transparent artificial intelligence systems that are also safe for humans.
"Monopolistic dominance of such foreign technology in Russia is unacceptable, dangerous and inadmissible," Putin said.
He noted that many modern systems, trained on Western data, are intended for the Western market and reflect "that part of Western ethics, norms of behavior, public policy to which we object."
During his more than two decades in power, Putin has overseen a multi-pronged crackdown on the opposition and civil society groups, and promoted traditional values to counter purported Western influence policies that have become even more oppressive after he sent troops into Ukraine in February 2022.
Putin warned that algorithms developed by Western platforms could lead to a digital "cancellation" of Russia and its culture.
"An artificial intelligence created in line with Western standards and patterns could be xenophobic," Putin said.
"Western search engines and generative models often work in a very selective, biased manner, do not take into account, and sometimes simply ignore and cancel Russian culture," he said. "Simply put, the machine is given some kind of creative task, and it solves it using only English-language data, which is convenient and beneficial to the system developers. And so an algorithm, for example, can indicate to a machine that Russia, our culture, science, music, literature simply do not exist."
He pledged to pour additional resources into the development of supercomputers and other technologies to help intensify national AI research.
"We are talking about expanding fundamental and applied research in the field of generative artificial intelligence and large language models," Putin said.
"In the era of technological revolution, it is the cultural and spiritual heritage that is the key factor in preserving national identity, and therefore the diversity of our world, and the stability of international relations," Putin said. "Our traditional values, the richness and beauty of the Russian language and the languages of other peoples of Russia must form the basis of our developments, helping create reliable, transparent and secure AI systems."
Putin emphasized that trying to ban AI development would be impossible, but noted the importance of ensuring necessary safeguards.
"I am convinced that the future does not lie in bans on the development of technology, it is simply impossible," he said. "If we ban something, it will develop elsewhere, and we will only fall behind, that's all."
Putin added that the global community will be able to work out the security guidelines for AI once it fully realizes the risks.
"When they feel the threat of its uncontrolled spread, uncontrolled activities in this sphere, a desire to reach agreement will come immediately," he said.
Read more here:
Putin to boost AI work in Russia to fight a Western monopoly he says is 'unacceptable and dangerous' - The Associated Press
Accounting in 2024: Artificial intelligence, tech innovation and more – Accounting Today
As we approach the finish line for 2023 and your organization gets ready to start another year, take some time to celebrate and look back at what you accomplished.
This year was transformative. It saw the emergence of artificial intelligence tools, increased cloud technology adoption rates, innovation to address staffing issues and so much more.
How finance and accounting teams start 2024 will set the stage for defining the rest of the year and positioning for sustained success. Set your firm up for growth as we explore what new trends and topics will define 2024, what will stay the same for accountants, and what will change.
The topics defining 2024
Artificial intelligence
One of the emerging topics of 2023 that will see even more discussion in 2024, artificial intelligence has potential. From transforming administrative work to reducing inefficiencies, AI is here to stay.
Earlier in 2023, AI chatbot ChatGPT made headlines for passing notoriously tricky exams like the Uniform Bar Exam, the LSAT, the SAT, and other similarly challenging tests. Yet, when Accounting Today asked ChatGPT to take the four sections of the CPA test, the bot failed every part of the exam. AI currently may lack the human touch needed to succeed in accounting, but that won't stop it from being transformative in the field.
From automatically sorting and pairing transactions to replicating budgets, increasing draft figures, and suggesting AI-powered optimizations, AI will help accountants do their jobs more efficiently and effectively.
AI won't replace accountants in 2024, but the power to help the industry find efficiencies is still an open book to be explored.
Innovating to overcome staffing challenges
With the perfect storm of a workforce nearing retirement age, fewer students pursuing degrees in accounting, and accountants pursuing jobs in other fields, the field must continue to innovate to overcome staffing challenges as CPAs remain in high demand.
Nearly 300,000 U.S. accountants and auditors have left their jobs in the past few years, with both young (25 to 34) and midcareer (45 to 54) professionals departing in high numbers starting in 2019, according to Bureau of Labor Statistics data as reported by The Wall Street Journal.
While the field has already begun to work to overcome these staffing challenges, the most significant change will come through the technology accountants use to do their jobs. Investing in technology that increases efficiency and is easy to use will help firms attract and retain talent.
According to the Wolters Kluwer Annual Accounting Industry Survey, improving operational workflows, increasing employee effectiveness, and investing in new technologies that support remote work were three vital strategic goals firms targeted in 2023.
To overcome staffing challenges, firms need to meet accountants where they're at and ensure they have the best tools needed to succeed. These tools will feature built-in efficiencies and workflows and will help accountants get data faster through integrations. Examining your current technology and identifying ways to combine and consolidate is crucial to eliminate redundancies and become more efficient.
Cloud technology
Organizations will continue to pivot to cloud-based technologies in the new year. The last several years have seen a broad shift to cloud technologies to accommodate remote and hybrid work.
In 2024, the cloud will continue to improve efficiency and provide time-savings. Plus, organizations will continue to benefit from the advanced security features the cloud provides. Whereas legacy systems leave security up to the organization, cloud providers and vendors are dedicated to creating secure environments at a scale individual companies cannot replicate.
From complying with the latest security protocols to ensuring security features like multi-factor authentication are standard with their products, cloud-based software does more to keep organizations safe while boosting productivity.
What will change:
The role of the CFO
The role of the CFO has fully transformed from a one-dimensional leader into one that provides strategic insights and drives growth opportunities, and 2024 is when this transformation will take hold.
The days of CFOs strictly managing finances and delivering reports are behind us. The modern CFO is an agile, strategic and growth influencer. Fundraising, operations, grant management, board engagement and more are all on their desk.
A CFO's role will go beyond just delivering accurate reports. The CFO is a leader, a strategy optimizer, and a growth-focused individual helping their team leverage the correct tools to increase efficiency and save costs.
Seeing both the small and big picture, CFOs need to leverage insights to understand and communicate what has happened, what is happening, and what can be done to advance. They're working to influence strategic operations, staying a step ahead and plotting which next move will be in the right direction. To do this, CFOs must leverage data, analytics and reporting to drive forward-thinking changes.
It's been a long time coming, but when organizations properly leverage the combined talents of their leadership and board and expand the traditional definition of the CFO role, the growth opportunities will expand.
As we start a new year, recognize your efforts in 2023, and always remember to take time for yourself. As we get ready to take the next steps into 2024, grab your sunglasses because the future is looking brighter than ever.
See the original post here:
Accounting in 2024: Artificial intelligence, tech innovation and more - Accounting Today
Why Artificial Intelligence won over Metaverse | by Technology … – Medium
2015 was a breakthrough year for AI, with deep learning achieving its lowest error rate yet.
Two of the most fascinating technologies under development at the moment are artificial intelligence (AI) and the metaverse. The Metaverse is still in its early stages, even though AI is now being used in many different businesses, and the Metaverse has recently taken a backseat to AI in the latest flurry of AI activity at big tech companies and others. Despite renaming Facebook to Meta almost two years ago, Meta announced in March 2023 that it was shifting its R&D focus from the Metaverse to artificial intelligence (AI). This has raised questions about the readiness of AI versus the Metaverse for the consumer market. Here, I will mention some of the challenges in developing the Metaverse and try to explain why AI is easier to develop.
Since software is the primary tool for developing, implementing and testing AI models, it has also become the focus of AI development. AI can be used in software development to expedite testing, generate code more quickly and efficiently, and automate manual tasks. Artificial intelligence (AI) technologies are progressively extending into new domains and discovering new uses in well-established businesses. The concepts behind AI have been around for decades. However, the accessibility of AI development has increased due to the availability of software tools, frameworks, and datasets. Today, pre-built models are available for developers to utilize and incorporate seamlessly into their apps. On the other hand, creating the Metaverse is a very complicated and recent idea. For developers, creating a virtual world with millions of users interacting simultaneously presents a big challenge.
One of the biggest challenges in developing the Metaverse is creating a seamless user experience. An accurate replica of the real environment must be created by developers in order to produce this seamless experience. Large R&D budgets and cutting-edge technologies, such as augmented reality (AR) and virtual reality (VR) headsets, along with all the component technologies, are needed for this.
More here:
Why Artificial Intelligence won over Metaverse | by Technology ... - Medium
US agency streamlines probes related to artificial intelligence – Reuters
AI (Artificial Intelligence) letters and robot hand miniature in this illustration taken June 23, 2023. REUTERS/Dado Ruvic/Illustration/File Photo
WASHINGTON, Nov 21 (Reuters) - Investigations of cases where artificial intelligence (AI) is used to break the law will be streamlined under a new process approved by the U.S. Federal Trade Commission, the agency said on Tuesday.
The move, along with other actions, highlights the FTC's interest in pursuing cases involving AI. Critics of the technology have said that it could be used to turbo-charge fraud.
The agency, which now has three Democrats, voted unanimously to make it easier for staff to issue a demand for documents as part of an investigation if it is related to AI, the FTC said in a statement.
In a hearing in September, Commissioner Rebecca Slaughter, a Democrat who has been nominated to another term, agreed with two Republicans nominated to the agency that it should focus on issues like the use of AI to make phishing emails and robocalls more convincing.
The agency announced a competition last week aimed at identifying the best way to protect consumers against fraud and other harms related to voice cloning.
Reporting by Diane Bartz; Editing by Marguerita Choy
Continue reading here:
US agency streamlines probes related to artificial intelligence - Reuters
Artificial Intelligence: Canadas future of everything – Global News
Blink and you'll get left behind.
The AI revolution is here and it's changing the world faster than anybody could have predicted.
Whether that's for better or worse depends on whom you ask. What's undeniable is this: the only limitation on how AI will transform the world is us.
Geoffrey Hinton, the so-called grandfather of AI, issued warnings this year, sounding the alarm about the existential threat of AI.
In May 2023, he appeared in an article on the front page of The New York Times, announcing he had quit his job at Google to speak freely about the harm he believes AI will cause humanity.
"The way that we propose and strategize technologies rarely, if ever, turn out," said Isabel Pedersen, author and professor at Ontario Tech University focusing on the social implications of emergent technologies.
If Hinton is having a come-to-Jesus moment, he might be too late. Over 100 million people use ChatGPT, a form of AI using technology he invented. That's on top of the way AI is already interwoven into practically everything we do online.
And while Toronto-based Hinton is one of the Canadian minds leading this industry, one which is growing exponentially, the circle of AI innovators remains small.
So small, in fact, that while filming for this story in Toronto's Eaton Centre, our Global News team happened upon Hamayal Choudhry, founder and CEO of smartARM, the world's first bionic arm in which AI uses cameras to dictate movement.
Unknowingly, we asked him for an interview about the latest release and quickly learned this was no ordinary shopper.
Choudhry was in the store testing the glasses out for himself, curious to see if AI could be as big a disruptor in the world of glasses as he intends to make it in the world of prosthetics.
"The same way those glasses have tiny cameras in the frames, that's what we're doing for our prosthetic arms," Choudhry said in an informal interview in the mall concourse.
smartARM is just one of many examples of how AI is on the brink of revolutionizing virtually every facet of human existence. Canada is on the leading edge, utilizing what AI can do from healthcare and education to airlines and entertainment.
There are inherent risks with AI and a lack of regulation and oversight in Canada, but overall, AI is improving our daily lives in ways that may often remain undetectable.
Canada's AI pioneering dates back to the 1970s, when researchers formed the world's first national AI association.
The Canadian Artificial Intelligence Association (CAIAC), formerly known as the Canadian Society for the Computational Studies of Intelligence, held its first official meeting in 1973.
Its own mission statement says the CAIAC aims to "foster excellence and leadership in research, development and education in Canada's artificial intelligence community by facilitating the exchange of knowledge through various media and venues."
It held its first annual conference in 1976, three years before the American Association for Artificial Intelligence was founded.
"Conferences helped with collaboration and community-building, especially in those early days when remote work did not exist and even phone calls were expensive. When you look at the earlier proceedings, you will see a pool of talent that was leading the world in AI," said Denilson Barbosa, current CAIAC president.
The society has had different publications over the years, including a magazine and newsletters, first published in 1974.
Its archive is a treasure trove of AI history. Its magazine debut in 1984 talks about the potential and the influence of AI: "It is predictable that, awakening to their new place in the sun, AIers be distracted by sycophants, charlatans, and barmecides. Does the AIer grow frustrated with misconceptions about AI? You bet we do."
Barbosa is a professor of computing science at the University of Alberta. Speaking by phone from Edmonton, Barbosa acknowledged that while Canada is slow to fully integrate AI and conservative in investing in and implementing new technologies, he's looking forward to the possibilities of AI in learning, whether in school or for personal growth, including AI in classrooms.
"The way we teach is fairly passive. Instructors have a huge influence on the success of students and control the pace of the process. It would be best to turn things around and let the students drive the process: they would work on their own, and when they got stuck, they would get help from an always-available AI," Barbosa said.
Human instructors would be there to oversee the process and provide extra help to those who need it the most.
When most people think of AI, robots from movies who can think for themselves come to mind. That's known as deep learning AI, found in the software of autonomous cars, facial recognition, mass surveillance and yes, some robots. It's the AI that has the potential to think for itself, and that scares a lot of people.
Then there's the field where Canada is considered a leader: the broader technology of machine learning AI, which most people don't realize we already use every day. It curates social media feeds, translates languages, detects fraudulent bank transactions, makes song and movie recommendations, and more. It's driven by human input and our habits.
Under the umbrella of machine learning is Generative AI. It generates new content that mimics the data it was trained on.
"Think of a program that has been trained with vast amounts of information, information that exists all over the Internet. And it develops an understanding and a representation of what this information is all about," Deloitte Canada's Jas Jaaj told Global News' The New Reality.
"Generative AI allows you to tune the information in a way where now you can actually consume the information the way you want to based on your values, based on your preferences. And then you can ask all sorts of questions to be able to get things like recommendations, things like suggestions of what you may want to have for a meal."
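To make the idea concrete, here is a minimal sketch of the kind of system Jaaj is describing: a language model trained on large amounts of text that generates new content on request. The model ("gpt2") and the prompt are illustrative choices only, not anything Deloitte or Global News uses.

```python
# Minimal sketch of generative AI: a small pretrained language model produces
# new text that mimics the data it was trained on. The model and prompt here
# are illustrative, not anything referenced in the article.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "A simple meal suggestion based on my preferences would be"
result = generator(prompt, max_new_tokens=30, num_return_sequences=1)

print(result[0]["generated_text"])
```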
Most Canadians can't tell what's human or AI. A study released in October by the Canadian Journalism Foundation found that half of us are not confident in distinguishing the difference between social media content generated by AI compared with content created by humans.
This unknown is leading to a lot of fear and misinformation about what AI is and isn't doing. But whether or not we're ready, it's poised to change nearly everything we do, and Canada is already playing a leading role.
In 2017, the federal government put $125 million of funding into AI, making Canada the first country to have a national AI strategy. The funding helped to create three world-renowned institutes to guide development in AI research in Montreal, Edmonton, and Toronto, and put Canada among the top five nations in the field.
"For me, the biggest risk is not adopting AI and realizing its maximum potential," said Deval Pandya, vice-president of AI engineering at Toronto's Vector Institute, one of the institutes founded in 2017.
"AI is a transformative technology that is going to make [the] world a much better place and help us solve some of the most pressing challenges as a society that we face right now."
He's personally bet on Canada's future. Pandya, originally from India, joined Vector Institute nearly three years ago, after working in the U.S.
"I think it's very bright. And if it wasn't bright, I wouldn't move to Canada," Pandya said.
Vector Institute is an independent non-profit dedicated to AI research. It has partnerships with 20 university programs in Ontario with training focused on AI, providing startups with a steady stream of new workers.
"Last year, we graduated more than a thousand Vector-recognized master's students. And what is very impressive is that 90 per cent of the students stay in Canada, in Ontario, after they graduate," Pandya said.
You don't have to look far from the Vector Institute to see AI's evolution.
Jas Jaaj, a managing partner at Deloitte in Toronto who focuses on AI and data, believes AI will be to the 21st century what the steam engine was to the 18th century, what electricity was to the 19th century, and what personal computers were to the 20th century.
"You know what really happened in all of these major developments? Every major industry changed, societies changed," Jaaj said.
Professional services firms like Deloitte help companies find the next big thing. And these days, it doesn't get bigger than AI.
Global News got an exclusive look at one of the ways AI will change health care. It's an AI nurse, assigned to newly discharged patients to help track their recovery care plan and progress.
It was created in partnership between Deloitte and The Ottawa Hospital and received positive feedback when it was rolled out in the testing phase.
Each AI nurse is unique to the patient. It has the information it requires to converse with a patient about their care, and its skin tone, even the language and dialect it speaks, can be altered to put a patient at ease.
During the demo for Global News, the AI nurse asked: "Regarding your post-discharge medications, Tenzin and Lasix, do you have any questions or concerns about them, how to take them or any potential side effects?"
The responses given by the patient trigger another series of questions. It will ask if you've weighed yourself, how you're feeling, and more, as a way to ensure post-op instructions are being followed. And if it's interacting with a stubborn patient, it will push back.
"They could be a bit insistent in terms of saying, 'Hey, you know what? You haven't done what you were supposed to do, so get on with it,'" Jaaj said.
Most importantly, the AI nurse is smart enough to flag problems for a human nurse or doctor to intercede.
The goal of the AI nurse isn't to replace human nurses; it's to take tasks off their already full plates and allow them to perform higher-value work with more time. It can also lower readmission rates. Jaaj predicts 100 per cent of Canadian hospitals will begin to use AI in some way within the next few years.
Regardless of the goal, it doesn't stop one of the biggest concerns about integrating AI into the Canadian workforce: job losses.
"This is a hot topic in terms of the anxiety that some people have," Jaaj said. "The way to think about it is not in a way by which it will replace workers. Rather, it will reshape the workforce as we move forward."
While progress will have its casualties, Canada is very much in a position of relying on human input to use AI. AI can take on time-consuming tasks, freeing us up to solve more complicated problems.
That's what Air Canada is doing, on a mission to overhaul and streamline its entire company by taking advantage of the power and efficiency AI creates. Its biggest challenge is tackling the thing we all hate most: delays.
"In terms of the complexity of getting our passengers safely to where they want to be on time, a lot of things have to go right," said Bruce Stamm, Air Canada's managing director of enterprise data and artificial intelligence.
"There are literally 60 things which need to go right for an aircraft to land on time. Only 30 of those do we actually control as an airline."
When it works, the chaos of air travel seems like a synchronized symphony in the sky. Canadas largest air carrier is timing the takeoffs and landings of more than 1,100 daily flights in such a way that 140,000 passengers get to where they want to be unhindered by delays or lost bags.
But the truth is, frustrations are all too common. Even a 10-minute delay of a flight at the beginning of the day can cause a ripple effect.
Nearly 28 per cent of Air Canada flights, or more than 8,700, landed late in October 2023, placing the company ninth out of 10 airlines on the continent in on-time performance, according to aviation data company Cirium.
In late October, Stamm's team started using AI to optimize its scheduling and predict delays. The program uses historical data and looks at scheduling three to five years in advance. Instead of the old way of doing things, using theoretical predictions about flight times, cleaning, maintenance, and more, Air Canada is able to devise a more accurate schedule.
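Air Canada has not published the details of its system, but the general pattern it describes, learning from historical flight records to estimate which flights are at risk of running late, can be sketched with standard tools. The feature names and numbers below are invented for illustration only.

```python
# Hypothetical sketch of delay prediction from historical flight records.
# The features and data are invented; Air Canada's actual model is not public.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

# Toy historical records: scheduled departure hour, planned turnaround minutes,
# and whether the flight ultimately arrived more than 15 minutes late.
history = pd.DataFrame({
    "departure_hour":     [6, 8, 9, 13, 17, 18, 21, 7, 12, 19],
    "turnaround_minutes": [55, 40, 35, 60, 30, 25, 45, 50, 38, 28],
    "arrived_late":       [0, 0, 1, 0, 1, 1, 0, 0, 1, 1],
})

X = history[["departure_hour", "turnaround_minutes"]]
y = history["arrived_late"]

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Estimate the delay risk of a planned evening flight with a tight turnaround.
planned = pd.DataFrame({"departure_hour": [18], "turnaround_minutes": [27]})
print("Probability of a late arrival:", model.predict_proba(planned)[0, 1])
```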
Global News got a first look at the company's Montreal headquarters.
The software doesn't seem like much, just coloured boxes on a screen, but it churns through massive amounts of data faster and more efficiently than a person could.
"AI and data are going to be part of our DNA just to do a lot more effective decision-making," Stamm said.
Next year, Air Canada plans to use AI to modernize its maintenance schedule for its fleet of about 200 planes. It will take into consideration where planes and mechanics are located, ordering time for parts, and more. The best part? What currently takes people weeks to do will soon be done in about 15 minutes, allowing work a computer can't do to become the priority.
"Air Canada is embracing this, leveraging this to the better of our employees' experience and ultimately our passengers' experience," Stamm said. "It's awesome. And it's also fun."
While Canada's largest corporations are harnessing new technology, small startups are also launching very quickly. In the last year, hundreds of Canadian AI startups have hit the market.
A couple of weeks after Global News met Choudhry, he and smartARM co-founder Evan Neff invited the Global News crew to their Toronto office, a small area in a coworking building east of downtown, which also holds their first prototypes.
Choudhry started smartARM about four years ago, shortly after beating out 50,000 other inventors at a competition sponsored by Microsoft in 2018. He had learned that prosthetics were either cheap but clunky, or functional but expensive.
Neff tells Global News that smartARM's goal is to enhance autonomy and accessibility in the prosthetic space.
Choudhry and Neff aren't specifying their price point yet. Neff said current market prices range from about $30,000 to $200,000, making high-quality prosthetics inaccessible to roughly 95 per cent of the upper-limb different community.
"We aim to change this," Neff said.
"For those with insurance or funding, smartARM will remain affordable, minimizing out-of-pocket expenses. For others, we're pricing smartARM comparably to a smartphone or laptop, not a car."
smartARM set out to design something affordable and remarkable.
It mimics human tendencies, like hand-eye coordination, holding items, carrying heavy objects and lifting delicate ones. Using AI, Choudhry said, the prosthesis will inherently know how to handle something, just like the way you would look at an object and grab it. You won't necessarily think about how you wrap your fingers around it.
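smartARM's software is proprietary and not described in detail in the article, so the following is only a rough sketch of the general idea: a camera classifies the object in view, and the result is mapped to a grip pattern. The classifier, object labels and grip mapping are assumptions made for illustration.

```python
# Rough sketch of vision-driven grip selection. The off-the-shelf classifier,
# labels and grip mapping are illustrative stand-ins; smartARM's actual
# software is not public.
from transformers import pipeline

classifier = pipeline("image-classification", model="google/vit-base-patch16-224")

# Hypothetical mapping from recognized objects to grip patterns.
GRIP_FOR_OBJECT = {
    "water bottle": "cylindrical grip",
    "coffee mug": "handle grip",
    "banana": "gentle precision grip",
}

def choose_grip(image_path: str) -> str:
    """Classify the object in view and pick a grip, defaulting to a power grasp."""
    top_label = classifier(image_path)[0]["label"].lower()
    for obj, grip in GRIP_FOR_OBJECT.items():
        if obj in top_label:
            return grip
    return "power grasp"

# Example (hypothetical image of whatever is in front of the camera):
# print(choose_grip("object_in_view.jpg"))
```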
smartARM isn't for sale yet, with regulatory certifications and approvals pending. In the meantime, Choudhry and Neff are testing it on people with a limb difference, including former NFL player Shaquem Griffin.
Griffin was born with a rare condition forcing the amputation of his left hand when he was four years old. He made NFL history when he was drafted by the Seattle Seahawks in 2018 and went on to play for the Miami Dolphins before retiring in 2022 after four seasons in the league.
Choudhry sent a private message to Griffin on Instagram late last year and it wasn't long before Choudhry and Neff travelled to meet Griffin and his mother.
The first time he put on a smartARM and interacted with it, they were sitting down for dinner at a restaurant.
Choudhry recalls Griffin repeatedly picking up and putting down a glass of water, taking a drink, and passing it to different people at the table. Choudhry and Neff witnessed Griffin use his left hand for the first time as an adult.
"Shaquem has proven that having a limb difference doesn't mean that you are any less capable of greatness," Neff said.
"We want to stress that smartARM isn't about necessity; it's simply about empowering our users with more choices and independence. Watching Shaquem explore new possibilities with smartARM was a testament to our mission."
smartARM also made its New York Fashion Week debut in 2023, showcased by Griffin at the Runway of Dreams, highlighting inclusivity through fashion and beauty.
"Just looking at their expressions and looking at how they interact with it so intuitively and naturally is inexplicably rewarding for us to see," Choudhry said of the users testing smartARM. "You know, it makes coming in here and working every day not even really seem like work."
Pedersen has been studying wearable technology for over 20 years. Even she is amazed at how fast AI has become embedded in our daily lives.
"People went from never experiencing AI themselves to being able to use it on their phones, to use it on their laptops, use it at work, use it at home," she said.
"We've gone through this rapid process in a matter of weeks that in some ways other technologies took 100 years."
In addition to her role as a professor at Ontario Tech University, Pedersen is the founding director of the Digital Life Institute. It's an international research network studying the social implications of emergent digital technologies.
Pedersen said inventors are still developing and designing a future that hasnt happened yet.
"We have to be careful of techno-solutionism. For me to say that you're going to have a technology that is going to solve these very difficult problems, it won't," Pedersen said.
"I do believe that we have to continue to move forward and try to design ethical outcomes at the earliest stages so that we can presuppose some of these harms that are ongoing that we might face."
As with everything related to new technology, there are persistent concerns that AI is moving much faster than the guardrails being built for it.
The federal government has a framework for proposed legislation, but the Artificial Intelligence and Data Act won't come into force before 2025. Federal Minister of Innovation, Science and Industry François-Philippe Champagne told Global News the government wants to get it done right.
"There's an acknowledgment that we need to deal with the concerns and the risks so that we can realize the opportunities. And in order to do that, we need framework, we need guardrails, so we build trust with people," Champagne said.
But trust can be tricky. Until the act becomes law, the government released a voluntary code of conduct in late September for generative AI developers. In the absence of regulation, it is supposed to guide organizations to come up with an environment to self-regulate.
"It is incumbent on organizations and businesses themselves to not only wait for things like regulations and these types of directives coming in, but also go down the path of really understanding how they can self-regulate in the interim by educating themselves and learning about how this technology will really make a difference," Jaaj said.
Read the original:
Artificial Intelligence: Canadas future of everything - Global News
First international benchmark of artificial intelligence and machine … – Nuclear Energy Agency
Recent performance breakthroughs in artificial intelligence (AI) and machine learning (ML) have led to unprecedented interest among nuclear engineers. Despite the progress, the lack of dedicated benchmark exercises for the application of AI and ML techniques in nuclear engineering analyses limits their applicability and broader usage. In line with the NEA strategic target to contribute to building a solid scientific and technical basis for the development of future generation nuclear systems and deployment of innovations, the Task Force on Artificial Intelligence and Machine Learning for Scientific Computing in Nuclear Engineering was established within the Expert Group on Reactor Systems Multi-Physics (EGMUP) of the Nuclear Science Committee's Working Party on Scientific Issues and Uncertainty Analysis of Reactor Systems (WPRS). The Task Force will focus on designing benchmark exercises that will target important AI and ML activities, and cover various computational domains of interest, from single physics to multi-scale and multi-physics.
A significant milestone has been reached with the successful launch of the first comprehensive benchmark of AI and ML to predict the Critical Heat Flux (CHF). In a boiling system, CHF corresponds to the limit beyond which wall heat transfer decreases significantly, which is often referred to as the critical boiling transition, boiling crisis or, depending on operating conditions, departure from nucleate boiling (DNB) or dryout. In a heat transfer-controlled system, such as a nuclear reactor core, CHF can result in a significant wall temperature increase leading to accelerated wall oxidation, and potentially to fuel rod failure. While constituting an important design limit criterion for the safe operation of reactors, CHF is challenging to predict accurately due to the complexities of the local fluid flow and heat exchange dynamics.
Current CHF models are mainly based on empirical correlations developed and validated for a specific application case domain. Through this benchmark, improvements in the CHF modelling are sought using AI and ML methods directly leveraging a comprehensive experimental database provided by the US Nuclear Regulatory Commission (NRC), forming the cornerstone of this benchmark exercise. The improved modelling can lead to a better understanding of the safety margins and provide new opportunities for design or operational optimisations.
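The benchmark specification and the NRC database are not reproduced here, but the underlying machine-learning task, fitting a data-driven regressor to experimental CHF measurements rather than relying on an empirical correlation, can be sketched as follows. The operating-condition features and numbers are invented placeholders, not benchmark data; a real submission would train and validate on the NRC experimental database described above.

```python
# Illustrative sketch of the CHF benchmark's ML task: learn critical heat flux
# from experimental operating conditions instead of an empirical correlation.
# All numbers are invented placeholders, not the US NRC database.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Toy operating conditions: pressure [MPa], mass flux [kg/m^2/s],
# inlet subcooling [kJ/kg]; the target is measured CHF [kW/m^2].
X = np.array([
    [7.0, 1000.0, 200.0],
    [7.0, 2000.0, 150.0],
    [10.0, 1500.0, 250.0],
    [12.0, 3000.0, 100.0],
    [15.0, 2500.0, 300.0],
    [15.0, 3500.0,  50.0],
])
y = np.array([2500.0, 3200.0, 2900.0, 3800.0, 3300.0, 3600.0])

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Predict CHF for an unseen operating point.
print(model.predict([[11.0, 2200.0, 180.0]]))
```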
The CHF benchmark phase 1 kick-off meeting on 30 October 2023 gathered 78 participants, representing 48 institutions from 16 countries. This robust engagement underscores the profound interest and commitment within the global scientific community toward integrating AI and ML technologies into nuclear engineering. The ultimate goal of the Task Force is to leverage insights from the benchmarks and distill lessons learnt to provide guidelines for future AI and ML applications in scientific computing in nuclear engineering.
See the rest here:
First international benchmark of artificial intelligence and machine ... - Nuclear Energy Agency
Artificial Intelligence images of ‘average’ person from each US state … – UNILAD
Artificial Intelligence has been asked to create a host of things since its creation.
Another thing AI's created is what it believes your average Joe might look like, depending on which US state they live in.
And it's safe to say the results are questionable.
We should really all know by now that there's no such thing as the 'average person', but there are stereotypes, fashion trends and local traditions, and it's these factors that seem to have inspired AI when it came to creating images of the 'average' human from a specific US state.
In a post on the Reddit thread r/midjourney, a Redditor shared a series of AI-generated images from a variety of states, along with the caption: "The most stereotypical person in [state name]."
The caption presumably represented the prompt they'd given to the AI program before letting it do its thing, with the chosen states including Texas, California, Colorado, Florida, Oregon and Maine.
And the results of the prompt are interesting, to say the least. Where to begin?
Kicking things off with Texas, we have a man dressed in some 'cowboy'-style attire, including a large cowboy hat, a brown shirt tucked into blue jeans and a wide belt buckle.
It's all flower-power in California, where the AI human has long hair blowing in the breeze, big sunglasses and a floral shirt.
While in Colorado it's a different kind of plant getting all the attention, with a woman perched on what looks to be a mountaintop packed with marijuana plants.
She's wearing a green hoodie and headband, with what looks to be a smoking joint in her hand.
I'm not sure how many people hike up weed mountains to get a hit in Colorado, but okay.
Next let's head to Florida, where a man with a long white beard stands on a road with long blue shorts, a baggy pink shirt and a sunhat, before moving to Oregon, where we're greeted by a woman with short greyish-blue hair.
And things take a dramatic turn as we head to Maine, a state known for its lobster.
To represent this, our AI man stands with a hat featuring an actual lobster on his head.
Again, I'm not sure how 'average' that is, but I've never been to Maine myself.
The AI-generated images have sparked mixed responses after being shared online, with one outraged Reddit user claiming the original poster 'clearly used unflattering prompts for the red states'.
Another unimpressed viewer commented: "Hi. Maine here. Can you not put dead lobsters all over everything? K thx."
The creations have left many people intrigued, though, with a lot of comments calling for more AI-generated images from even more states.
Read more:
Artificial Intelligence images of 'average' person from each US state ... - UNILAD