Category Archives: Artificial Intelligence

Putin to boost AI work in Russia to fight a Western monopoly he says is ‘unacceptable and dangerous’ – The Associated Press

MOSCOW (AP) Russian President Vladimir Putin on Friday announced a plan to endorse a national strategy for the development of artificial intelligence, emphasizing that it's essential to prevent a Western monopoly.

Speaking at an AI conference in Moscow, Putin noted that it's imperative to use Russian solutions in the field of creating reliable and transparent artificial intelligence systems that are also safe for humans.

Monopolistic dominance of such foreign technology in Russia is unacceptable, dangerous and inadmissible, Putin said.

He noted that many modern systems, trained on Western data, are intended for the Western market and reflect that part of Western ethics, norms of behavior and public policy to which we object.

During his more than two decades in power, Putin has overseen a multi-pronged crackdown on the opposition and civil society groups, and promoted traditional values to counter purported Western influence, policies that have become even more oppressive after he sent troops into Ukraine in February 2022.

Putin warned that algorithms developed by Western platforms could lead to a digital cancellation of Russia and its culture.

An artificial intelligence created in line with Western standards and patterns could be xenophobic, Putin said.

Western search engines and generative models often work in a very selective, biased manner, do not take into account, and sometimes simply ignore and cancel Russian culture, he said. Simply put, the machine is given some kind of creative task, and it solves it using only English-language data, which is convenient and beneficial to the system developers. And so an algorithm, for example, can indicate to a machine that Russia, our culture, science, music, literature simply do not exist.

He pledged to pour additional resources into the development of supercomputers and other technologies to help intensify national AI research.

We are talking about expanding fundamental and applied research in the field of generative artificial intelligence and large language models, Putin said.

In the era of technological revolution, it is the cultural and spiritual heritage that is the key factor in preserving national identity, and therefore the diversity of our world, and the stability of international relations, Putin said. Our traditional values, the richness and beauty of the Russian language and the languages of other peoples of Russia must form the basis of our developments, helping create reliable, transparent and secure AI systems.

Putin emphasized that trying to ban AI development would be impossible, but noted the importance of ensuring necessary safeguards.

I am convinced that the future does not lie in bans on the development of technology, it is simply impossible, he said. If we ban something, it will develop elsewhere, and we will only fall behind, that's all.

Putin added that the global community will be able to work out the security guidelines for AI once it fully realizes the risks.

When they feel the threat of its uncontrolled spread, uncontrolled activities in this sphere, a desire to reach agreement will come immediately, he said.

Read more here:
Putin to boost AI work in Russia to fight a Western monopoly he says is 'unacceptable and dangerous' - The Associated Press

Accounting in 2024: Artificial intelligence, tech innovation and more – Accounting Today

As we approach the finish line for 2023 and your organization gets ready to start another year, take some time to celebrate and look back at what you accomplished.

This year was transformative. It saw the emergence of artificial intelligence tools, increased cloud technology adoption rates, innovation to address staffing issues and so much more.

How finance and accounting teams start 2024 will set the stage for defining the rest of the year and positioning for sustained success. Set your firm up for growth as we explore what new trends and topics will define 2024, what will stay the same for accountants, and what will change.

The topics defining 2024

Artificial intelligence

One of the emerging topics in 2023 that will see even more discussion in 2024, artificial intelligence has potential. From transforming administrative work to reducing inefficiencies, AI is here to stay.

Earlier in 2023, AI chatbot ChatGPT made headlines for passing notoriously tricky exams like the Uniform Bar Exam, the LSAT, the SAT, and other similarly challenging tests. Yet, when Accounting Today asked ChatGPT to take the four sections of the CPA test, the bot failed every part of the exam. AI currently may lack the human touch needed to succeed in accounting, but that won't stop it from being transformative in the field.

From automatically sorting and pairing transactions, to replicating budgets and increasing draft figures automatically, to suggesting AI-powered optimizations, AI will help accountants do their jobs more efficiently and effectively.
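
As a purely illustrative aside, the snippet below sketches what automated transaction pairing can look like in practice. The field names, amount tolerance and similarity threshold are assumptions made for this example, not the logic of any particular accounting product.

```python
# Illustrative sketch only: pair bank transactions with ledger entries by
# amount and fuzzy payee-name similarity. Field names and thresholds are
# assumptions for demonstration, not any vendor's actual logic.
from difflib import SequenceMatcher

def match_transactions(bank_rows, ledger_rows, amount_tol=0.01, name_sim=0.8):
    """Return (bank_id, ledger_id, similarity) for likely matching pairs."""
    matches = []
    for bank in bank_rows:
        for ledger in ledger_rows:
            same_amount = abs(bank["amount"] - ledger["amount"]) <= amount_tol
            similarity = SequenceMatcher(
                None, bank["payee"].lower(), ledger["payee"].lower()
            ).ratio()
            if same_amount and similarity >= name_sim:
                matches.append((bank["id"], ledger["id"], round(similarity, 2)))
    return matches

bank = [{"id": "B1", "payee": "ACME Office Supply", "amount": 142.50}]
ledger = [{"id": "L7", "payee": "Acme Office Supplies", "amount": 142.50}]
print(match_transactions(bank, ledger))  # prints one high-confidence pair
```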

AI won't replace accountants in 2024, but its power to help the industry find efficiencies is still an open book to be explored.

Innovating to overcome staffing challenges

With the perfect storm of a workforce nearing retirement age, fewer students pursuing degrees in accounting, and accountants pursuing jobs in other fields, the field must continue to innovate to overcome staffing challenges as CPAs remain in high demand.

Nearly 300,000 U.S. accountants and auditors have left their jobs in the past few years, with both young (25 to 34) and midcareer (45 to 54) professionals departing in high numbers starting in 2019, according to Bureau of Labor Statistics data as reported by The Wall Street Journal.

While the field has already begun to work to overcome these staffing challenges, the most significant change will come through the technology accountants use to do their jobs. Investing in technology that increases efficiency and is easy to use will help firms attract and retain talent.

According to the Wolters Kluwer Annual Accounting Industry Survey, improving operational workflows, increasing employee effectiveness, and investing in new technologies that support remote work were three vital strategic goals firms targeted in 2023.

To overcome staffing challenges, firms need to meet accountants where they're at and ensure they have the best tools needed to succeed. These tools will feature built-in efficiencies and workflows and will help accountants get data faster through integrations. Examining your current technology and identifying ways to combine and consolidate is crucial to eliminate redundancies and become more efficient.

Cloud technology

Organizations will continue to pivot to cloud-based technologies in the new year. The last several years have seen a broad shift to cloud technologies to accommodate remote and hybrid work.

In 2024, the cloud will continue to improve efficiency and provide time-savings. Plus, organizations will continue to benefit from the advanced security features the cloud provides. Whereas legacy systems leave security up to the organization, cloud providers and vendors are dedicated to creating secure environments at a scale individual companies cannot replicate.

From complying with the latest security protocols to ensuring security features like multi-factor authentication are standard with their products, cloud-based software does more to keep organizations safe while boosting productivity.

What will change:

The role of the CFO

The role of the CFO is transforming from a one-dimensional leader into one that provides strategic insights and drives growth opportunities, and 2024 is when this shift will fully emerge.

The days of CFOs strictly managing finances and delivering reports are behind us. The modern CFO is an agile, strategic and growth influencer. Fundraising, operations, grant management, board engagement and more are all on their desk.

A CFO's role will go beyond just delivering accurate reports. The CFO is a leader, a strategy optimizer, and a growth-focused individual helping their team leverage the correct tools to increase efficiency and save costs.

Seeing both the small and big picture, CFOs need to leverage insights to understand and communicate what has happened, what is happening, and what can be done to advance. They're working to influence strategic operations, staying a step ahead and plotting which next move will be in the right direction. To do this, CFOs must leverage data, analytics and reporting to drive forward-thinking changes.

It's been a long time coming, but when organizations properly leverage the combined talents of their leadership and board and expand the traditional definition of the CFO role, the growth opportunities will expand.

As we start a new year, recognize your efforts in 2023, and always remember to take time for yourself. As we get ready to take the next steps into 2024, grab your sunglasses because the future is looking brighter than ever.

See the original post here:
Accounting in 2024: Artificial intelligence, tech innovation and more - Accounting Today

Barriers and Facilitators of Artificial Intelligence in Family Medicine … – Cureus

Read this article:
Barriers and Facilitators of Artificial Intelligence in Family Medicine ... - Cureus

Why Artificial Intelligence won over Metaverse | by Technology … – Medium

2015 was a breakthrough year for AI, with deep learning achieving its lowest error rate yet.

Two of the most fascinating technologies under development at the moment are artificial intelligence (AI) and the metaverse. The Metaverse is still in its early stages, even though AI is now being used in many different businesses. But the Metaverse has recently taken a backseat to AI in the most recent flurry of AI activity at big tech companies and others. Despite renaming Facebook to Meta almost two years ago, in March 2023 Meta announced it was shifting its R&D focus from the Metaverse to artificial intelligence (AI). This has raised questions about the readiness of AI versus the Metaverse for the consumer market. Here, I will mention some of the challenges in developing the Metaverse and try to explain why AI is easier to develop.

Since software is the primary tool for developing, implementing and testing AI models, it has also become the focus of AI development. AI can be used in software development to expedite testing, generate code more quickly and efficiently, and automate manual tasks. Artificial intelligence (AI) technologies are progressively extending into new domains and discovering new uses in well-established businesses. The concepts behind AI have been around for decades. However, the accessibility of AI development has increased due to the availability of software tools, frameworks, and datasets. Today, pre-built models are available for developers to utilize and incorporate seamlessly into their apps. On the other hand, creating the Metaverse is a very complicated and recent idea. For developers, creating a virtual world with millions of users interacting simultaneously presents a big challenge.

One of the biggest challenges in developing the Metaverse is creating a seamless user experience. An accurate replica of the real environment must be created by developers in order to produce this seamless experience. Large R&D budgets and cutting-edge technologies, such as augmented reality (AR) and virtual reality (VR) headsets, along with all of the component technologies, are needed for this.

More here:
Why Artificial Intelligence won over Metaverse | by Technology ... - Medium

US agency streamlines probes related to artificial intelligence – Reuters

AI (Artificial Intelligence) letters and a robot hand miniature are seen in this illustration taken June 23, 2023. REUTERS/Dado Ruvic/Illustration/File Photo

WASHINGTON, Nov 21 (Reuters) - Investigations of cases where artificial intelligence (AI) is used to break the law will be streamlined under a new process approved by the U.S. Federal Trade Commission, the agency said on Tuesday.

The move, along with other actions, highlights the FTC's interest in pursuing cases involving AI. Critics of the technology have said that it could be used to turbo-charge fraud.

The agency, which now has three Democrats, voted unanimously to make it easier for staff to issue a demand for documents as part of an investigation if it is related to AI, the agency said in a statement.

In a hearing in September, Commissioner Rebecca Slaughter, a Democrat who has been nominated to another term, agreed with two Republicans nominated to the agency that the agency should focus on issues like use of AI to make phishing emails and robocalls more convincing.

The agency announced a competition last week aimed at identifying the best way to protect consumers against fraud and other harms related to voice cloning.

Reporting by Diane Bartz; editing by Marguerita Choy

Continue reading here:
US agency streamlines probes related to artificial intelligence - Reuters

Artificial Intelligence: Canada's future of everything – Global News

Blink and you'll get left behind.

The AI revolution is here and it's changing the world faster than anybody could have predicted.

Whether that's for better or worse depends on whom you ask. What's undeniable is this: the only limitation on how AI will transform the world is us.

Geoffrey Hinton, the so-called grandfather of AI, issued warnings this year, sounding the alarm about the existential threat of AI.

In May 2023, he appeared in an article on the front page of The New York Times, announcing he had quit his job at Google to speak freely about the harm he believes AI will cause humanity.

The way that we propose and strategize technologies rarely, if ever, turns out, said Isabel Pedersen, an author and professor at Ontario Tech University focusing on the social implications of emergent technologies.

If Hinton is having a come-to-Jesus moment, he might be too late. Over 100 million people use ChatGPT, a form of AI using technology he invented. That's on top of the way AI is already interwoven into practically everything we do online.

And while Toronto-based Hinton is one of the Canadian minds leading this industry, one which is growing exponentially, the circle of AI innovators remains small.

So small, in fact, that while filming for this story in Toronto's Eaton Centre, our Global News team happened upon Hamayal Choudhry, founder and CEO of smartARM, the world's first bionic arm that uses AI and cameras to dictate movement.

Unknowingly, we asked him for an interview about the latest release and quickly learned this was no ordinary shopper.

Choudhry was in the store testing the glasses out for himself, curious to see if AI could be as big a disruptor in the world of glasses as he intends to make it in the world of prosthetics.

The same way those glasses have tiny cameras in the frames, that's what we're doing for our prosthetic arms, Choudhry said, in an informal interview in the mall concourse.

smartARM is just one of many examples of how AI is on the brink of revolutionizing virtually every facet of human existence. Canada is on the leading edge, utilizing what AI can do from healthcare and education to airlines and entertainment.

There are inherent risks with AI and a lack of regulation and oversight in Canada, but overall, AI is improving our daily lives in ways that may often remain undetectable.

Canada's AI pioneering dates back to the 1970s, when researchers formed the world's first national AI association.

The Canadian Artificial Intelligence Association (CAIAC), formerly known as the Canadian Society for the Computational Studies of Intelligence, held its first official meeting in 1973.

Its own mission statement says the CAIAC aims to foster excellence and leadership in research, development and education in Canadas artificial intelligence community by facilitating the exchange of knowledge through various media and venues.

It held its first annual conference in 1976, three years before the American Association for Artificial Intelligence was founded.

Conferences helped with collaboration and community-building, especially in those early days when remote work did not exist and even phone calls were expensive. When you look at the earlier proceedings, you will see a pool of talent that was leading the world in AI, said Denilson Barbosa, current CAIAC president.

The society has had different publications over the years, including a magazine and newsletters, first published in 1974.

Its archive is a treasure trove of AI history. Its magazine debut in 1984 talks about the potential and the influence of AI: It is predictable that, awakening to their new place in the sun, AIers be distracted by sycophants, charlatans, and barmecides. Does the AIer grow frustrated with misconceptions about AI? You bet we do.

Barbosa is a professor of computing science at the University of Alberta. Speaking by phone from Edmonton, Barbosa acknowledged that while Canada is slow to fully integrate AI, conservative in investing in and implementing new technologies, he's looking forward to the possibilities of AI in learning, in school or for personal growth, including AI in classrooms.

The way we teach is fairly passive. Instructors have a huge influence on the success of students and control the pace of the process. It would be best to turn things around and let the students drive the process they would work on their own, and when they got stuck, they would get help from an always-available AI, Barbosa said.

Human instructors would be there to oversee the process and provide extra help to those who need it the most.

When most people think of AI, robots from movies that can think for themselves come to mind. That's known as deep learning AI, found in the software of autonomous cars, facial recognition, mass surveillance and, yes, some robots. It's the AI that has the potential to think for itself, and that scares a lot of people.

Then there's the field where Canada is considered a leader: the broader technology of machine learning AI, which most people don't realize we already use every day. It curates social media feeds, translates languages, detects fraudulent bank transactions, makes song and movie recommendations, and more. It's driven by human input and our habits.

Under the umbrella of machine learning is generative AI. It generates new content that mimics the data it was trained on.

Think of a program that has been trained with vast amounts of information, information that exists all over the Internet. And it develops an understanding and a representation of what this information is all about, Deloitte Canada's Jas Jaaj told Global News' The New Reality.

Generative AI allows you to tune the information in a way where now you can actually consume the information the way you want to based on your values, based on your preferences. And then you can ask all sorts of questions to be able to get things like recommendations, things like suggestions of what you may want to have for a meal.

Most Canadians can't tell what's human or AI. A study released in October by the Canadian Journalism Foundation found that half of us are not confident in distinguishing social media content generated by AI from content created by humans.

This unknown is leading to a lot of fear and misinformation about what AI is and isn't doing. But whether or not we're ready, it's poised to change nearly everything we do, and Canada is already playing a leading role.

In 2017, the federal government put $125 million of funding into AI, making Canada the first country to have a national AI strategy. The funding helped to create three world-renowned institutes to guide development in AI research in Montreal, Edmonton and Toronto, and put Canada among the top five nations in the field.

For me, the biggest risk is not adopting AI and realizing its maximum potential, said Deval Pandya, vice-president of AI engineering at Toronto's Vector Institute, one of the institutes founded in 2017.

AI is a transformative technology that is going to make [the] world a much better place and help us solve some of the most pressing challenges as a society that we face right now.

He's personally bet on Canada's future. Pandya, originally from India, joined Vector Institute nearly three years ago, after working in the U.S.

I think it's very bright. And if it wasn't bright, I wouldn't move to Canada, Pandya said.

Vector Institute is an independent non-profit dedicated to AI research. It has partnerships with 20 university programs in Ontario with training focused on AI, providing startups with a steady stream of new workers.

Last year, we graduated more than a thousand Vector-recognized master's students. And what is very impressive is that 90 per cent of the students stay in Canada, in Ontario, after they graduate, Pandya said.

You don't have to look far from the Vector Institute to see AI's evolution.

Jas Jaaj, a managing partner at Deloitte in Toronto who focuses on AI and data, believes AI will be to the 21st century what the steam engine was to the 18th century, what electricity was to the 19th century, and what personal computers were to the 20th century.

You know what really happened in all of these major developments? Every major industry changed, societies changed, Jaaj said.

Professional services firms like Deloitte help companies find the next big thing. And these days, it doesnt get bigger than AI.

Global News got an exclusive look at one of the ways AI will change health care. It's an AI nurse, assigned to newly discharged patients to help track their recovery care plan and progress.

It was created in partnership between Deloitte and The Ottawa Hospital and received positive feedback when it was rolled out in the testing phase.

Each AI nurse is unique to the patient. It has the information it requires to converse with a patient about their care, and its skin tone, even the language and dialect it speaks, can be altered to put a patient at ease.

During the demo for Global News, the AI nurse asked: Regarding your post-discharge medications, Tenzin and Lasix, do you have any questions or concerns about them, how to take them or any potential side effects?

The responses given by the patient trigger another series of questions. It will ask if you've weighed yourself, how you're feeling, and more, as a way to ensure post-op instructions are being followed. And if it's interacting with a stubborn patient, it will push back.

They could be a bit insistent in terms of saying, Hey, you know what? You haven't done what you were supposed to do, so get on with it, Jaaj said.

Most importantly, the AI nurse is smart enough to flag problems for a human nurse or doctor to intercede.

The goal of the AI nurse isn't to replace human nurses, it's to take tasks off their already full plates and allow them to perform higher-value work with more time. It can also lower readmission rates. Jaaj predicts 100 per cent of Canadian hospitals will begin to use AI in some way within the next few years.

Regardless of the goal, it doesn't quell one of the biggest concerns about integrating AI into the Canadian workforce: job losses.

This is a hot topic in terms of the anxiety that some people have, Jaaj said. The way to think about it is not in a way by which it will replace workers. Rather, it will reshape the workforce as we move forward.

While progress will have its casualties, Canada is very much in a position of relying on human input to use AI. AI can take on time-consuming tasks, freeing us up to solve more complicated problems.

That's what Air Canada is doing, on a mission to overhaul and streamline its entire company by taking advantage of the power and efficiency AI creates. Its biggest challenge is tackling the thing we all hate most: delays.

In terms of the complexity of getting our passengers safely to where they want to be on time, a lot of things have to go right, said Bruce Stamm, Air Canada's managing director of enterprise data and artificial intelligence.

There are literally 60 things which need to go right for an aircraft to land on time. Only 30 of those do we actually control as an airline.

When it works, the chaos of air travel seems like a synchronized symphony in the sky. Canada's largest air carrier is timing the takeoffs and landings of more than 1,100 daily flights in such a way that 140,000 passengers get to where they want to be unhindered by delays or lost bags.

But the truth is, frustrations are all too common. Even a 10-minute delay of a flight at the beginning of the day can cause a ripple effect.

Nearly 28 per cent of Air Canada flights, or more than 8,700, landed late in October 2023, placing the company ninth out of 10 airlines on the continent in on-time performance, according to aviation data company Cirium.

In late October, Stamm's team started using AI to optimize its scheduling and predict delays. The program uses historical data and looks at scheduling three to five years in advance. Instead of the old way of doing things, using theoretical predictions about flight times, cleaning, maintenance, and more, Air Canada is able to devise a more accurate schedule.
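
As a rough, hypothetical illustration of the kind of predictive modelling described above, the sketch below trains a tiny regression model on made-up historical flight records. The features, figures and model choice are assumptions for demonstration only and are not Air Canada's actual system.

```python
# Hypothetical sketch: predict departure delay (minutes) from a few
# historical flight features. Data and feature choices are invented for
# illustration; this is not Air Canada's scheduling software.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Columns: scheduled hour, day of week, turnaround minutes, inbound delay minutes
X = np.array([
    [7,  1, 45,  0],
    [9,  1, 35, 12],
    [17, 5, 30, 25],
    [21, 5, 40, 40],
])
y = np.array([0.0, 8.0, 22.0, 35.0])  # observed departure delays (minutes)

model = GradientBoostingRegressor(random_state=0).fit(X, y)
print(model.predict([[18, 5, 30, 20]]))  # estimated delay for a new flight
```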

Global News got a first look at the company's Montreal headquarters.

The software doesn't seem like much (coloured boxes on a screen), but it churns through massive amounts of data faster and more efficiently than a person could.

AI and data are going to be part of our DNA just to do a lot more effective decision-making, Stamm said.

Next year, Air Canada plans to use AI to modernize its maintenance schedule for its fleet of about 200 planes. It will take into consideration where planes and mechanics are located, ordering time for parts, and more. The best part? What currently takes people weeks to do will soon be done in about 15 minutes, allowing for work a computer cant do to become the priority.

Air Canada is embracing this, leveraging this to the better of our employees' experience and ultimately our passengers' experience, Stamm said. It's awesome. And it's also fun.

While Canada's largest corporations are harnessing new technology, small startups are also launching very quickly. In the last year, hundreds of Canadian AI startups have hit the market.

A couple of weeks after Global News met Choudhry, he and smartARM co-founder Evan Neff invited the Global News crew to their Toronto office, a small area in a coworking building east of downtown, which also holds their first prototypes.

Choudhry started smartARM about four years ago, shortly after beating out 50,000 other inventors at a competition sponsored by Microsoft in 2018. He had learned that prosthetics were either cheap but clunky, or functional but expensive.

Neff tells Global News smartARM's goal is to enhance autonomy and accessibility in the prosthetic space.

Choudhry and Neff aren't specifying their price point yet, but Neff said current market prices range from about $30,000 to $200,000, making high-quality prosthetics inaccessible to roughly 95 per cent of the upper-limb different community.

We aim to change this, Neff said.

For those with insurance or funding, smartARM will remain affordable, minimizing out-of-pocket expenses. For others, we're pricing smartARM comparably to a smartphone or laptop, not a car.

smartARM set out to design something affordable and remarkable.

It mimics human tendencies, like hand-eye coordination, holding items, carrying heavy objects and lifting delicate ones. Using AI, Choudhry said the prosthesis will inherently know how to handle something just like the way you would look at an object and grab it. You won't necessarily think about how you wrap your fingers around it.

smartARM isn't for sale yet, with regulatory certifications and approvals pending. In the meantime, Choudhry and Neff are testing it on people with a limb difference, including former NFL player Shaquem Griffin.

Griffin was born with a rare condition forcing the amputation of his left hand when he was four years old. He made NFL history when he was drafted by the Seattle Seahawks in 2018 and went on to play for the Miami Dolphins before retiring in 2022 after four seasons in the league.

Choudhry sent a private message to Griffin on Instagram late last year and it wasn't long before Choudhry and Neff travelled to meet Griffin and his mother.

The first time he put on a smartARM and interacted with it, they were sitting down for dinner at a restaurant.

Choudhry recalls Griffin repeatedly picking up and putting down a glass of water, taking a drink, and passing it to different people at the table. Choudhry and Neff witnessed Griffin use his left hand for the first time as an adult.

Shaquem has proven that having a limb difference doesn't mean that you are any less capable of greatness, Neff said.

We want to stress that smartARM isn't about necessity; it's simply about empowering our users with more choices and independence. Watching Shaquem explore new possibilities with smartARM was a testament to our mission.

smartARM also made its New York Fashion Week debut in 2023, showcased by Griffin at the Runway of Dreams, highlighting inclusivity through fashion and beauty.

Just looking at their expressions and looking at how they interact with it so intuitively and naturally is inexplicably rewarding for us to see, Choudhry said of the users testing smartARM. You know, it makes coming in here and working every day not even really seem like work.

Pedersen has been studying wearable technology for over 20 years. Even she is amazed at how fast AI has become embedded in our daily lives.

People went from never experiencing AI themselves to being able to use it on their phones, to use it on their laptops, use it at work, use it at home, she said.

We've gone through this rapid process in a matter of weeks that in some ways other technologies took 100 years.

In addition to her role as a professor at Ontario Tech University, Pedersen is the founding director of the Digital Life Institute. It's an international research network studying the social implications of emergent digital technologies.

Pedersen said inventors are still developing and designing a future that hasnt happened yet.

We have to be careful of techno-solutionism. For me to say that you're going to have a technology that is going to solve these very difficult problems, it won't, Pedersen said.

I do believe that we have to continue to move forward and try to design ethical outcomes at the earliest stages so that we can presuppose some of these harms that are ongoing that we might face.

As with everything related to new technology, there are persistent concerns that AI is moving much faster than the guardrails being built for it.

The federal government has a framework for proposed legislation, but The Artificial Intelligence and Data Act won't come into force before 2025. Federal Minister of Innovation, Science and Industry François-Philippe Champagne told Global News the government wants to get it done right.

There's an acknowledgment that we need to deal with the concerns and the risks so that we can realize the opportunities. And in order to do that, we need a framework, we need guardrails, so we build trust with people, Champagne said.

But trust can be tricky. Until the act becomes law, the government released a voluntary code of conduct in late September for generative AI developers. In the absence of regulation, it is supposed to guide organizations to come up with an environment to self-regulate.

It is incumbent on organizations and businesses themselves to not only wait for things like regulations and these types of directives coming in, but also go down the path of really understanding how they can self-regulate in the interim by educating themselves and learning about how this technology will really make a difference, Jaaj said.

Read the original:
Artificial Intelligence: Canada's future of everything - Global News

First international benchmark of artificial intelligence and machine … – Nuclear Energy Agency

Recent performance breakthroughs in artificial intelligence (AI) and machine learning (ML) have led to unprecedented interest among nuclear engineers. Despite the progress, the lack of dedicated benchmark exercises for the application of AI and ML techniques in nuclear engineering analyses limits their applicability and broader usage. In line with the NEA strategic target to contribute to building a solid scientific and technical basis for the development of future generation nuclear systems and deployment of innovations, the Task Force on Artificial Intelligence and Machine Learning for Scientific Computing in Nuclear Engineering was established within the Expert Group on Reactor Systems Multi-Physics (EGMUP) of the Nuclear Science Committee's Working Party on Scientific Issues and Uncertainty Analysis of Reactor Systems (WPRS). The Task Force will focus on designing benchmark exercises that will target important AI and ML activities, and cover various computational domains of interest, from single physics to multi-scale and multi-physics.

A significant milestone has been reached with the successful launch of the first comprehensive benchmark of AI and ML to predict the Critical Heat Flux (CHF). This CHF corresponds in a boiling system to the limit beyond which wall heat transfer decreases significantly, which is often referred to as critical boiling transition, boiling crisis and (depending on operating conditions) departure from nucleate boiling (DNB), or dryout. In a heat transfer-controlled system, such as a nuclear reactor core, CHF can result in a significant wall temperature increase leading to accelerated wall oxidation, and potentially to fuel rod failure. While constituting an important design limit criterion for the safe operation of reactors, CHF is challenging to predict accurately due to the complexities of the local fluid flow and heat exchange dynamics.

Current CHF models are mainly based on empirical correlations developed and validated for a specific application case domain. Through this benchmark, improvements in the CHF modelling are sought using AI and ML methods directly leveraging a comprehensive experimental database provided by the US Nuclear Regulatory Commission (NRC), forming the cornerstone of this benchmark exercise. The improved modelling can lead to a better understanding of the safety margins and provide new opportunities for design or operational optimisations.
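
For readers curious what such a data-driven CHF model might look like in code, here is a minimal sketch: a small neural-network regressor that maps local flow conditions to critical heat flux. The feature set, units and numbers are placeholders, not the NRC experimental database or the benchmark's actual specification.

```python
# Minimal, hypothetical sketch of an ML-based CHF predictor. Inputs, units
# and values are placeholders, not the NRC database used in the benchmark.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Features: pressure (MPa), mass flux (kg/m^2/s), inlet subcooling (kJ/kg)
X = np.array([
    [ 7.0, 1000.0, 200.0],
    [10.0, 1500.0, 150.0],
    [12.0, 2000.0, 100.0],
    [15.0, 3000.0,  50.0],
])
y = np.array([3.2, 2.8, 2.4, 1.9])  # critical heat flux (MW/m^2), illustrative

# Scale features before the network, since pressures and mass fluxes differ by
# orders of magnitude.
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0),
)
model.fit(X, y)
print(model.predict([[11.0, 1800.0, 120.0]]))  # predicted CHF for unseen conditions
```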

The CHF benchmark phase 1 kick-off meeting on 30 October 2023 gathered 78 participants, representing 48 institutions from 16 countries. This robust engagement underscores the profound interest and commitment within the global scientific community toward integrating AI and ML technologies into nuclear engineering. The ultimate goal of the Task Force is to leverage insights from the benchmarks and distill lessons learnt to provide guidelines for future AI and ML applications in scientific computing in nuclear engineering.

See the rest here:
First international benchmark of artificial intelligence and machine ... - Nuclear Energy Agency

Application of Artificial Intelligence in the Management of Drinking … – Cureus

Originally posted here:
Application of Artificial Intelligence in the Management of Drinking ... - Cureus

Artificial Intelligence images of ‘average’ person from each US state … – UNILAD

Artificial Intelligence has been asked to create a host of things since its creation.

Another thing AI's created is what it believes your average Joe might look like depending on which US state they live in.

And it's safe to say the results are questionable.

We should really all know by now that there's no such thing as the 'average person', but there are stereotypes, fashion trends and local traditions, and it's these factors that seem to have inspired AI when it came to creating images of the 'average' human from a specific US state.

In a post on the Reddit thread r/midjourney, a Redditor shared a series of AI-generated images from a variety of states, along with the caption: "The most stereotypical person in [state name]."

The caption presumably represented the prompt they'd given to the AI program before letting it do its thing, with the chosen states including Texas, California, Colorado, Florida, Oregon and Maine.

And the results of the prompt are interesting, to say the least. Where to begin?

Kicking things off with Texas, we have a man dressed in some 'cowboy'-style attire, including a large cowboy hat, a brown shirt tucked into blue jeans and a wide belt buckle.

It's all flower-power in California, where the AI human has long hair blowing in the breeze, big sunglasses and a floral shirt.

While in Colorado it's a different kind of plant getting all the attention, with a woman perched on what looks to be a mountaintop packed with marijuana plants.

She's wearing a green hoodie and headband, with what looks to be a smoking joint in her hand.

I'm not sure how many people hike up weed mountains to get a hit in Colorado, but okay.

Next let's head to Florida, where a man with a long white beard stands on a road with long blue shorts, a baggy pink shirt and a sunhat, before moving to Oregon, where we're greeted by a woman with short greyish-blue hair.

And things take a dramatic turn as we head to Maine, a state known for its lobster.

To represent this, our AI man stands with a hat featuring an actual lobster on his head.

Again, I'm not sure how 'average' that is, but I've never been to Maine myself.

The AI-generated images have sparked mixed responses after being shared online, with one outraged Reddit user claiming the original poster 'clearly used unflattering prompts for the red states'.

Another unimpressed viewer commented: "Hi. Maine here. Can you not put dead lobsters all over everything? K thx."

The creations have left many people intrigued, though, with a lot of comments calling for more AI-generated images from even more states.

Read more:
Artificial Intelligence images of 'average' person from each US state ... - UNILAD

Pentagon faces future with lethal AI weapons on the battlefield – NBC Chicago

Artificial intelligence employed by the U.S. military has piloted pint-sized surveillance drones in special operations forces missions and helped Ukraine in its war against Russia. It tracks soldiers' fitness, predicts when Air Force planes need maintenance and helps keep tabs on rivals in space.

Now, the Pentagon is intent on fielding multiple thousands of relatively inexpensive, expendable AI-enabled autonomous vehicles by 2026 to keep pace with China. The ambitious initiative dubbed Replicator seeks to galvanize progress in the too-slow shift of U.S. military innovation to leverage platforms that are small, smart, cheap, and many, Deputy Secretary of Defense Kathleen Hicks said in August.

While its funding is uncertain and details vague, Replicator is expected to accelerate hard decisions on what AI tech is mature and trustworthy enough to deploy - including on weaponized systems.

There is little dispute among scientists, industry experts and Pentagon officials that the U.S. will, within the next few years, have fully autonomous lethal weapons. And though officials insist humans will always be in control, experts say advances in data-processing speed and machine-to-machine communications will inevitably relegate people to supervisory roles.

That's especially true if, as expected, lethal weapons are deployed en masse in drone swarms. Many countries are working on them, and none of China, Russia, Iran, India or Pakistan has signed a U.S.-initiated pledge to use military AI responsibly.

It's unclear if the Pentagon is currently formally assessing any fully autonomous lethal weapons system for deployment, as required by a 2012 directive. A Pentagon spokeswoman would not say.

Replicator highlights immense technological and personnel challenges for Pentagon procurement and development as the AI revolution promises to transform how wars are fought.

"The Department of Defense is struggling to adopt the AI developments from the last machine-learning breakthrough, said Gregory Allen, a former top Pentagon AI official now at the Center for Strategic and International Studies think tank.

The Pentagon's portfolio boasts more than 800 AI-related unclassified projects, much still in testing. Typically, machine-learning and neural networks are helping humans gain insights and create efficiencies.

The AI that we've got in the Department of Defense right now is heavily leveraged and augments people, said Missy Cummings, director of George Mason University's robotics center and a former Navy fighter pilot. There's no AI running around on its own. People are using it to try to understand the fog of war better.

One domain where AI-assisted tools are tracking potential threats is space, the latest frontier in military competition.

China envisions using AI, including on satellites, to "make decisions on who is and isn't an adversary," U.S. Space Force chief technology and innovation officer Lisa Costa told an online conference this month.

The U.S. aims to keep pace.

An operational prototype called Machina used by Space Force keeps tabs autonomously on more than 40,000 objects in space, orchestrating thousands of data collections nightly with a global telescope network.

Machina's algorithms marshal telescope sensors. Computer vision and large language models tell them what objects to track. And AI choreographs, drawing instantly on astrodynamics and physics datasets, Col. Wallace Rhet Turnbull of Space Systems Command told a conference in August.

Another AI project at Space Force analyzes radar data to detect imminent adversary missile launches, he said.

Elsewhere, AI's predictive powers help the Air Force keep its fleet aloft, anticipating the maintenance needs of more than 2,600 aircraft including B-1 bombers and Blackhawk helicopters.

Machine-learning models identify possible failures dozens of hours before they happen, said Tom Siebel, CEO of Silicon Valley-based C3 AI, which has the contract. C3's tech also models the trajectories of missiles for the U.S. Missile Defense Agency and identifies insider threats in the federal workforce for the Defense Counterintelligence and Security Agency.
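
As a hypothetical illustration of how such predictive maintenance models work, the sketch below trains a simple classifier to flag components at risk of failing soon based on recent sensor readings. The features, data and labels are invented for this example and do not represent C3 AI's product.

```python
# Hypothetical sketch of predictive maintenance: classify whether a component
# is likely to fail within the next 48 hours from recent sensor readings.
# Features and data are invented; this is not C3 AI's system.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Columns: vibration (mm/s), oil temperature (C), hours since last overhaul
X = np.array([
    [2.1,  85,  120],
    [2.3,  88,  300],
    [6.8, 110,  900],
    [7.5, 118, 1100],
])
y = np.array([0, 0, 1, 1])  # 1 = failure observed within 48 hours

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(model.predict_proba([[6.0, 105, 850]])[0][1])  # estimated failure risk
```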

Among health-related efforts is a pilot project tracking the fitness of the Army's entire Third Infantry Division, more than 13,000 soldiers. Predictive modeling and AI help reduce injuries and increase performance, said Maj. Matt Visser.

In Ukraine, AI provided by the Pentagon and its NATO allies helps thwart Russian aggression.

NATO allies share intelligence from data gathered by satellites, drones and humans, some aggregated with software from U.S. contractor Palantir. Some data comes from Maven, the Pentagon's pathfinding AI project now mostly managed by the National Geospatial-Intelligence Agency, say officials including retired Air Force Gen. Jack Shanahan, the inaugural Pentagon AI director.

Maven began in 2017 as an effort to process video from drones in the Middle East spurred by U.S. Special Operations forces fighting ISIS and al-Qaeda and now aggregates and analyzes a wide array of sensor- and human-derived data.

AI has also helped the U.S.-created Security Assistance Group-Ukraine help organize logistics for military assistance from a coalition of 40 countries, Pentagon officials say.

To survive on the battlefield these days, military units must be small, mostly invisible and move quickly because exponentially growing networks of sensors let anyone see anywhere on the globe at any moment, then-Joint Chiefs chairman Gen. Mark Milley observed in a June speech. And what you can see, you can shoot.

To more quickly connect combatants, the Pentagon has prioritized the development of intertwined battle networks called Joint All-Domain Command and Control to automate the processing of optical, infrared, radar and other data across the armed services. But the challenge is huge and fraught with bureaucracy.

Christian Brose, a former Senate Armed Services Committee staff director now at the defense tech firm Anduril, is among military reform advocates who nevertheless believe they "may be winning here to a certain extent."

The argument may be less about whether this is the right thing to do, and increasingly more about how do we actually do it -- and on the rapid timelines required," he said. Brose's 2020 book, The Kill Chain, argues for urgent retooling to match China in the race to develop smarter and cheaper networked weapons systems.

To that end, the U.S. military is hard at work on "human-machine teaming." Dozens of uncrewed air and sea vehicles currently keep tabs on Iranian activity. U.S. Marines and Special Forces also use Anduril's autonomous Ghost mini-copter, sensor towers and counter-drone tech to protect American forces.

Industry advances in computer vision have been essential. Shield AI lets drones operate without GPS, communications or even remote pilots. It's the key to its Nova, a quadcopter, which U.S. special operations units have used in conflict areas to scout buildings.

On the horizon: The Air Force's loyal wingman program intends to pair piloted aircraft with autonomous ones. An F-16 pilot might, for instance, send out drones to scout, draw enemy fire or attack targets. Air Force leaders are aiming for a debut later this decade.

The loyal wingman timeline doesn't quite mesh with Replicator's, which many consider overly ambitious. The Pentagon's vagueness on Replicator, meantime, may partly intend to keep rivals guessing, though planners may also still be feeling their way on feature and mission goals, said Paul Scharre, a military AI expert and author of Four Battlegrounds.

Anduril and Shield AI, each backed by hundreds of millions in venture capital funding, are among companies vying for contracts.

Nathan Michael, chief technology officer at Shield AI, estimates they will have an autonomous swarm of at least three uncrewed aircraft ready in a year using its V-BAT aerial drone. The U.S. military currently uses the V-BAT -- without an AI mind -- on Navy ships, on counter-drug missions and in support of Marine Expeditionary Units, the company says.

It will take some time before larger swarms can be reliably fielded, Michael said. Everything is crawl, walk, run -- unless you're setting yourself up for failure.

The only weapons systems that Shanahan, the inaugural Pentagon AI chief, currently trusts to operate autonomously are wholly defensive, like Phalanx anti-missile systems on ships. He worries less about autonomous weapons making decisions on their own than about systems that don't work as advertised or kill noncombatants or friendly forces.

The department's current chief digital and AI officer Craig Martell is determined not to let that happen.

Regardless of the autonomy of the system, there will always be a responsible agent that understands the limitations of the system, has trained well with the system, has justified confidence of when and where it's deployable -- and will always take the responsibility, said Martell, who previously headed machine-learning at LinkedIn and Lyft. That will never not be the case.

As to when AI will be reliable enough for lethal autonomy, Martell said it makes no sense to generalize. For example, Martell trusts his car's adaptive cruise control but not the tech that's supposed to keep it from changing lanes. As the responsible agent, I would not deploy that except in very constrained situations, he said. Now extrapolate that to the military.

Martell's office is evaluating potential generative AI use cases (it has a special task force for that) but focuses more on testing and evaluating AI in development.

One urgent challenge, says Jane Pinelis, chief AI engineer at Johns Hopkins University's Applied Physics Lab and former chief of AI assurance in Martell's office, is recruiting and retaining the talent needed to test AI tech. The Pentagon can't compete on salaries. Computer science PhDs with AI-related skills can earn more than the military's top-ranking generals and admirals.

Testing and evaluation standards are also immature, a recent National Academy of Sciences report on Air Force AI highlighted.

Might that mean the U.S. one day fielding, under duress, autonomous weapons that don't fully pass muster?

We are still operating under the assumption that we have time to do this as rigorously and as diligently as possible, said Pinelis. I think if we're less than ready and it's time to take action, somebody is going to be forced to make a decision.

Read more:
Pentagon faces future with lethal AI weapons on the battlefield - NBC Chicago