Category Archives: AI

AI Is the New Industrial Revolution. How Jobs and Work Will Change. – Barron’s

The rise of generative artificial intelligence heralds a new stage of the Industrial Revolution, one where machines think, learn, self-replicate, and can master many tasks that were once reserved for humans. This phase will be just as disruptive and transformative as the previous ones.

That AI technology will come for jobs is certain. The destruction and creation of jobs is a defining characteristic of the Industrial Revolution. Less certain is what kind of new jobs, and how many, will take their place.

Some scholars divide the Industrial Revolution into three stages: steam, which started around 1770; electricity, in 1870; and information, in 1950. Think of the automobile industry replacing the horse-and-carriage trade in the first decades of the 20th century, or IT departments supplanting secretarial pools in recent decades.

In all of these cases, some people get left behind. The new jobs can be vastly different in nature, requiring novel skills and perhaps relocation, such as from farm to city in the first Industrial Revolution.

As shares of companies involved in the AI industry have soared, concerns about job security have grown. AI is finding its way into all aspects of life, from chatbots to surgery to battlefield drones. AI was at the center of this year's highest-profile labor disputes, involving industries as disparate as Detroit car makers and Hollywood screenwriters. AI was on the agenda of the recent summit between President Joe Biden and Chinese President Xi Jinping.

The advances in AI technology are coming fast, with some predicting that the singularity, the theoretical point when machines evolve beyond human control, is only a few years away. If that's true, job losses would be the least of our worries.

"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," wrote a group of industry leaders, technologists, and academics this year in an open letter.

Assuming we survive, what can the past show us about how we will work with, or for, these machines in the future?

Consider the first Industrial Revolution, when mortals fashioned their own crude machines. Run on Britain's inexpensive and abundant coal and manned by its cheap and abundant unskilled labor, steam engines powered trains, ships, and factories. The U.K. became a manufacturing powerhouse.

Not everyone welcomed the mechanical competition.

A wanted poster from January 1812, in Nottingham, England, offers a 200-pound reward for information about masked men who broke into a local workshop and "wantonly and feloniously broke and destroyed" five stocking frames (mechanical knitting machines).

The vandals were Luddites, textile artisans who waged a campaign of destruction against manufacturing between 1811 and 1817. They weren't so much opposed to the machines as they were to a factory system that no longer valued their expertise.

Machine-breaking was an early form of job action, "collective bargaining by riot," as historian Eric Hobsbawm put it. It was a precursor to many labor disputes to follow.

The second Industrial Revolution, kick-started by the completion of the transcontinental railroad in 1869, propelled the U.S. to global dominance. Breakthroughs including electricity, mass production, and the corporation transformed the world with marvels like cars, airplanes, refrigerators, and radios.

These advances also drew a backlash from people whose jobs were threatened.

"Only the lovers who flock to the dimmest nooks of the parks to hold hands and spoon found no fault with the striking lamplighters last night," the New-York Tribune wrote on April 26, 1907, after a walkout by the men who hand-lit the city's 25,000 gas streetlights each night.

The lamplighters struck over claims of union busting, but the real enemy was in plain sight: the electric lightbulb.

"In the downtown part of Manhattan, where there are electric lights in plenty, there was no inconvenience," the Tribune reported. The days of the lamplighters' centuries-old trade were numbered.

Numbered also were the days of carriage makers, icemen, and elevator operators.

The third Industrial Revolution, meanwhile, rang the death knell for switchboard operators, newspaper typesetters, and most anyone whose job could be done by a computer.

Those lost jobs were replaced, in spades. The rise of personal computing and the internet led directly to the loss of 3.5 million U.S. jobs since 1980, according to McKinsey Global Institute in 2018. At the same time, new technologies created 19 million new jobs.

Looking ahead, MGI estimates technological advances might force as many as 375 million workers globally, out of 2.7 billion total (roughly 14% of the world's workforce), to switch occupations by 2030.

A survey conducted by LinkedIn for the World Economic Forum offers hints about where job growth might come from. Of the five fastest-growing job areas between 2018 and 2022, all but one involve people skills: sales and customer engagement; human resources and talent acquisition; marketing and communications; partnerships and alliances. The other: technology and IT. Even the robots will need their human handlers.

MGI's Michael Chui suggests people won't be replaced by technology in the future so much as they will partner more deeply with it.

"Almost all of us are cyborgs nowadays, in some sense," he told Barron's, pointing to the headphones he was wearing during a Zoom discussion.

In The Iliad, 28 centuries ago, Homer describes robotic slaves crafted by the god Hephaestus. Chui doesn't expect humanoid robots, like Homer's creations, to come down and do everything we once did.

"For most of us," he says, "it's parts of our jobs that machines will actually take over."

Each wave of the Industrial Revolution brought greater prosperity (even if it wasn't equally shared), advances in science and medicine, cheaper goods, and a more connected world. The AI wave might even do more.

"I've described it as giving us superpowers, and I think it's true," Chui says.

Superpowers or extinction: starkly different visions for our brave new AI future. Best hang on.

Write to editors@barrons.com

See the original post here:

AI Is the New Industrial Revolution. How Jobs and Work Will Change. - Barron's

US closer to using AI-drones that can autonomously decide to kill humans – Business Insider

South Korea's military drones fly in formation during a joint military drill with the US at Seungjin Fire Training Field in Pocheon on May 25, 2023. (Photo: Yelim Lee)

The deployment of AI-controlled drones that can make autonomous decisions about whether to kill human targets is moving closer to reality, The New York Times reported.

Lethal autonomous weapons, which can select targets using AI, are being developed by countries including the US, China, and Israel.

The use of so-called "killer robots" would mark a disturbing development, critics say, handing life-and-death battlefield decisions to machines with no human input.

Several governments are lobbying the UN for a binding resolution restricting the use of AI killer drones, but the US is among a group of nations, also including Russia, Australia, and Israel, that is resisting any such move, favoring a non-binding resolution instead, The Times reported.

"This is really one of the most significant inflection points for humanity," Alexander Kmentt, Austria's chief negotiator on the issue, told The Times. "What's the role of human beings in the use of force it's an absolutely fundamental security issue, a legal issue and an ethical issue."

The Pentagon is working toward deploying swarms of thousands of AI-enabled drones, according to a notice published earlier this year.

In a speech in August, US Deputy Secretary of Defense Kathleen Hicks said technology like AI-controlled drone swarms would enable the US to offset the numerical advantage of China's People's Liberation Army (PLA) in weapons and people.

"We'll counter the PLA's mass with mass of our own, but ours will be harder to plan for, harder to hit, harder to beat," she said, reported Reuters.

Frank Kendall, the Air Force secretary, told The Times that AI drones will need to have the capability to make lethal decisions while under human supervision.

"Individual decisions versus not doing individual decisions is the difference between winning and losing and you're not going to lose," he said.

"I don't think people we would be up against would do that, and it would give them a huge advantage if we put that limitation on ourselves."

New Scientist reported in October that AI-controlled drones have already been deployed on the battlefield by Ukraine in its fight against the Russian invasion, though it's unclear if any have taken action resulting in human casualties.

The Pentagon did not immediately respond to a request for comment.

Go here to see the original:

US closer to using AI-drones that can autonomously decide to kill humans - Business Insider

5 Lessons I’ve Learned From Using AI (Opinion) – Education Week

Artificial intelligence is all the rage right now, and it will be for the rest of my lifetime. In this Harvard Business Review article, McAfee, Rock, and Brynjolfsson refer to AI as a general-purpose technology akin to electricity, the steam engine, and the internet. Before I go any further, and risk getting comments thrown at me on social media, I do understand that there are IP concerns, among other issues that we still have to work out. However, what we also know is that AI is here to stay, so we as educators can either get on board with it or we can once again be deemed behind the times, with our students using the technology at home and then coming to school to step back a century or two.

Over the past few months, I have used AI more and more, partly because I wanted to see what all of the fuss was about and partly because I needed to play around with it to see if it was something that I should be using in my role as an author, consultant, and owner of a company.

For full disclosure, I am not a techie. I have created a website for the Instructional Leadership Collective, designed courses through Thinkific, and use Mentimeter for all of my virtual and in-person workshops. However, I would never consider myself an expert in using technology, which is why I am trying out AI. I feel that every once in a while, we should feel uncomfortable as we learn, because being uncomfortable during learning (in a psychologically safe environment) can result in deeper, more rigorous learning experiences.

Here's What I've Learned Through AI

As with anything that is new for me, I like to start small. When I began hearing more and more about AI, I decided I would engage in a low-risk activity, which leads me to the first lesson I learned through AI. I used it for cooking. Yes, cooking.

After some major inspiration over the last six months, I began experimenting with gourmet cooking. Please keep in mind that prior to June of this year, I struggled to open a can of tuna fish, so it may surprise you that I now use a sous vide or Big Green Egg to make filet mignon, bleu cheese turkey burgers with pesto, salmon, halibut, or sesame chicken with my own sesame sauce. What does that have to do with AI? I use AI to give me recipes for special sauces, like the one I will make for the pumpkin ravioli I will be serving to guests tonight.

Secondly, I use AI to ask better questions. In a previous blog, I wrote about using AI as a leadership coaching assistant. After sessions are done, I go back and reflect on the questions I asked, and I have been reading books to help me learn what questions I could be asking. Additionally, when the answers AI provides are not on point, I often find that it was my question that encouraged that answer. I needed to go back and rephrase the question so the AI understood me better. That's something we can always do in conversations with humans, too.
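
To make that lesson concrete, here is a minimal sketch, assuming the OpenAI Python client and an API key in the environment: the same request goes out once as a vague question and once rephrased with context and constraints, so you can compare how much the wording shapes the answer. The model name and both prompts are illustrative assumptions, not anything from the original article.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

vague = "Tell me about coaching questions."
refined = (
    "List five open-ended questions a leadership coach could ask a school "
    "principal who struggles to delegate, with one sentence on what each "
    "question is meant to surface."
)

# Send both versions and compare the answers side by side.
for prompt in (vague, refined):
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice
        messages=[{"role": "user", "content": prompt}],
    )
    print(reply.choices[0].message.content[:300])
    print("---")
```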

Along with using the AI personal assistant to ask better questions, I have also learned to use AI to see how much I talk during sessions compared with the people I coach. Fortunately, I have seen that in most cases, the person being coached does talk more than I do. However, there have been times when I cut it close, and that matters to me. I strive to listen more than I talk.
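
Talk-time reports like this usually start from a diarized transcript. A rough sketch of the underlying arithmetic, using an invented two-person transcript:

```python
from collections import Counter

# Hypothetical diarized transcript, e.g. exported from a meeting
# transcription tool as (speaker, utterance) pairs.
transcript = [
    ("coach", "What outcome would make this semester feel like a win?"),
    ("client", "Getting my leadership team to truly own their projects, "
               "instead of me checking every detail."),
    ("coach", "What is one project you could hand off this week?"),
    ("client", "Probably the schedule redesign. I've been holding onto it."),
]

# Count words per speaker, then report each speaker's share.
words = Counter()
for speaker, utterance in transcript:
    words[speaker] += len(utterance.split())

total = sum(words.values())
for speaker, count in words.items():
    print(f"{speaker}: {count / total:.0%} of words spoken")
```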

The fourth lesson I have learned when using AI is that it helps keep me inspired during moments when I lack inspiration. I have used it to give me a boost when considering keynotes, workshop activities, or topics to cover in this blog. Is it perfect? No. However, I noticed that although it may not give me the exact information I need, it inspires me to read what it gives and think, "I wonder if I could...," and new ideas come to me during those moments.

Lastly, in reading this outstanding ISTE article, I learned that there are several types of AI, which are:

Reactive - Tools that respond to specific inputs or situations without learning (e.g., Alexa).

Predictive - Tools that analyze historical data and experiences to predict future events or behaviors (e.g., Netflix).

Generative - Tools that generate new content or outputs, often creating something novel from learned patterns (e.g., ChatGPT).

In the End

Those who fear AI have probably been using it already when they ask Alexa to play a song or when they get on Netflix and click on the movie that Netflix said they might want to watch. It seems that generative AI is the one that makes most people nervous, because of IP rules or because it may not provide the most accurate information, which is kind of ironic given how many people spread misinformation through gossip.

Read the original here:

5 Lessons I've Learned From Using AI (Opinion) - Education Week

Pentagon’s AI initiatives accelerate hard decisions on lethal autonomous weapons. – ABC News

NATIONAL HARBOR, Md. -- Artificial intelligence employed by the U.S. military has piloted pint-sized surveillance drones in special operations forces missions and helped Ukraine in its war against Russia. It tracks soldiers' fitness, predicts when Air Force planes need maintenance and helps keep tabs on rivals in space.

Now, the Pentagon is intent on fielding multiple thousands of relatively inexpensive, expendable AI-enabled autonomous vehicles by 2026 to keep pace with China. The ambitious initiative, dubbed Replicator, seeks to "galvanize progress in the too-slow shift of U.S. military innovation to leverage platforms that are small, smart, cheap, and many," Deputy Secretary of Defense Kathleen Hicks said in August.

While its funding is uncertain and details vague, Replicator is expected to accelerate hard decisions on what AI tech is mature and trustworthy enough to deploy, including on weaponized systems.

There is little dispute among scientists, industry experts and Pentagon officials that the U.S. will have fully autonomous lethal weapons within the next few years. And though officials insist humans will always be in control, experts say advances in data-processing speed and machine-to-machine communications will inevitably relegate people to supervisory roles.

That's especially true if, as expected, lethal weapons are deployed en masse in drone swarms. Many countries are working on them, and none of China, Russia, Iran, India, or Pakistan has signed a U.S.-initiated pledge to use military AI responsibly.

It's unclear if the Pentagon is currently formally assessing any fully autonomous lethal weapons system for deployment, as required by a 2012 directive. A Pentagon spokeswoman would not say.

Paradigm shifts

Replicator highlights immense technological and personnel challenges for Pentagon procurement and development as the AI revolution promises to transform how wars are fought.

"The Department of Defense is struggling to adopt the AI developments from the last machine-learning breakthrough, said Gregory Allen, a former top Pentagon AI official now at the Center for Strategic and International Studies think tank.

The Pentagon's portfolio boasts more than 800 AI-related unclassified projects, many still in testing. Typically, machine learning and neural networks are helping humans gain insights and create efficiencies.

"The AI that we've got in the Department of Defense right now is heavily leveraged and augments people," said Missy Cummings, director of George Mason University's robotics center and a former Navy fighter pilot. "There's no AI running around on its own. People are using it to try to understand the fog of war better."

Space, war's new frontier

One domain where AI-assisted tools are tracking potential threats is space, the latest frontier in military competition.

China envisions using AI, including on satellites, to "make decisions on who is and isn't an adversary," U.S. Space Force chief technology and innovation officer Lisa Costa told an online conference this month.

The U.S. aims to keep pace.

An operational prototype called Machina, used by Space Force, keeps tabs autonomously on more than 40,000 objects in space, orchestrating thousands of data collections nightly with a global telescope network.

Machina's algorithms marshal telescope sensors. Computer vision and large language models tell them what objects to track. And AI choreographs it all, drawing instantly on astrodynamics and physics datasets, Col. Wallace Rhet Turnbull of Space Systems Command told a conference in August.
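
The article does not describe Machina's internals, but the tasking problem it sketches (deciding which telescope looks at which object next) can be illustrated with a toy greedy scheduler. Every name, score, and object below is a hypothetical illustration, not the real system:

```python
from dataclasses import dataclass

@dataclass
class SpaceObject:
    object_id: str
    priority: float        # mission importance, 0..1 (invented scale)
    hours_since_seen: float

@dataclass
class Telescope:
    site: str
    visible: set           # object ids currently above this site's horizon

def score(obj: SpaceObject) -> float:
    # Stale, high-priority objects float to the top of the queue.
    return obj.priority * (1.0 + obj.hours_since_seen / 24.0)

def assign(objects, telescopes):
    # Greedily give each telescope the best-scoring object it can see.
    queue = sorted(objects, key=score, reverse=True)
    tasking, taken = {}, set()
    for scope in telescopes:
        for obj in queue:
            if obj.object_id in scope.visible and obj.object_id not in taken:
                tasking[scope.site] = obj.object_id
                taken.add(obj.object_id)
                break
    return tasking

objects = [
    SpaceObject("sat-001", 0.9, 30.0),
    SpaceObject("deb-417", 0.3, 2.0),
    SpaceObject("sat-208", 0.7, 12.0),
]
telescopes = [
    Telescope("maui", {"sat-001", "deb-417"}),
    Telescope("socorro", {"sat-208", "deb-417"}),
]
print(assign(objects, telescopes))  # {'maui': 'sat-001', 'socorro': 'sat-208'}
```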

Another AI project at Space Force analyzes radar data to detect imminent adversary missile launches, he said.

Maintaining planes and soldiers

Elsewhere, AI's predictive powers help the Air Force keep its fleet aloft, anticipating the maintenance needs of more than 2,600 aircraft including B-1 bombers and Blackhawk helicopters.

Machine-learning models identify possible failures dozens of hours before they happen, said Tom Siebel, CEO of Silicon Valley-based C3 AI, which has the contract. C3's tech also models the trajectories of missiles for the U.S. Missile Defense Agency and identifies insider threats in the federal workforce for the Defense Counterintelligence and Security Agency.
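
As a rough sketch of the predictive-maintenance idea, one can train a classifier on past sensor snapshots labeled by whether a failure followed within some window. The features, thresholds, and synthetic data below are invented for illustration and are not C3 AI's model:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
X = np.column_stack([
    rng.normal(600, 40, n),   # engine temperature (hypothetical units)
    rng.normal(35, 5, n),     # vibration level
    rng.uniform(0, 4000, n),  # hours since last overhaul
])
# Toy ground truth: hot, vibrating, high-hour engines fail more often.
risk = 0.002 * (X[:, 0] - 560) + 0.05 * (X[:, 1] - 30) + 0.0004 * X[:, 2]
y = (risk + rng.normal(0, 0.5, n) > 1.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_tr, y_tr)
print("holdout accuracy:", model.score(X_te, y_te))
# In service, aircraft above a probability threshold would be flagged
# for inspection well before the predicted failure.
```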

Among health-related efforts is a pilot project tracking the fitness of the Army's entire Third Infantry Division, more than 13,000 soldiers. Predictive modeling and AI help reduce injuries and increase performance, said Maj. Matt Visser.

Aiding Ukraine

In Ukraine, AI provided by the Pentagon and its NATO allies helps thwart Russian aggression.

NATO allies share intelligence from data gathered by satellites, drones and humans, some aggregated with software from U.S. contractor Palantir. Some data comes from Maven, the Pentagon's pathfinding AI project now mostly managed by the National Geospatial-Intelligence Agency, say officials including retired Air Force Gen. Jack Shanahan, the inaugural Pentagon AI director.

Maven began in 2017 as an effort to process video from drones in the Middle East, spurred by U.S. Special Operations forces fighting ISIS and al-Qaeda, and now aggregates and analyzes a wide array of sensor- and human-derived data.

AI has also helped the U.S.-created Security Assistance Group-Ukraine help organize logistics for military assistance from a coalition of 40 countries, Pentagon officials say.

All-Domain Command and Control

To survive on the battlefield these days, military units must be small, mostly invisible and move quickly, because exponentially growing networks of sensors let anyone see anywhere on the globe at any moment, then-Joint Chiefs chairman Gen. Mark Milley observed in a June speech. "And what you can see, you can shoot."

To more quickly connect combatants, the Pentagon has prioritized the development of intertwined battle networks called Joint All-Domain Command and Control to automate the processing of optical, infrared, radar and other data across the armed services. But the challenge is huge and fraught with bureaucracy.

Christian Brose, a former Senate Armed Services Committee staff director now at the defense tech firm Anduril, is among military reform advocates who nevertheless believe they "may be winning here to a certain extent."

"The argument may be less about whether this is the right thing to do, and increasingly more about how do we actually do it -- and on the rapid timelines required," he said. Brose's 2020 book, "The Kill Chain," argues for urgent retooling to match China in the race to develop smarter and cheaper networked weapons systems.

To that end, the U.S. military is hard at work on "human-machine teaming." Dozens of uncrewed air and sea vehicles currently keep tabs on Iranian activity. U.S. Marines and Special Forces also use Anduril's autonomous Ghost mini-copter, sensor towers and counter-drone tech to protect American forces.

Industry advances in computer vision have been essential. Shield AI lets drones operate without GPS, communications or even remote pilots. That capability is the key to its Nova, a quadcopter that U.S. special operations units have used in conflict areas to scout buildings.

On the horizon: The Air Force's "loyal wingman" program intends to pair piloted aircraft with autonomous ones. An F-16 pilot might, for instance, send out drones to scout, draw enemy fire or attack targets. Air Force leaders are aiming for a debut later this decade.

The race to full autonomy

The loyal wingman timeline doesn't quite mesh with Replicator's, which many consider overly ambitious. The Pentagon's vagueness on Replicator, meantime, may be partly intended to keep rivals guessing, though planners may also still be feeling their way on feature and mission goals, said Paul Scharre, a military AI expert and author of "Four Battlegrounds."

Anduril and Shield AI, each backed by hundreds of millions in venture capital funding, are among companies vying for contracts.

Nathan Michael, chief technology officer at Shield AI, estimates the company will have an autonomous swarm of at least three uncrewed aircraft ready in a year, using its V-BAT aerial drone. The U.S. military currently uses the V-BAT -- without an AI mind -- on Navy ships, on counter-drug missions and in support of Marine Expeditionary Units, the company says.

It will take some time before larger swarms can be reliably fielded, Michael said. "Everything is crawl, walk, run -- unless you're setting yourself up for failure."

The only weapons systems that Shanahan, the inaugural Pentagon AI chief, currently trusts to operate autonomously are wholly defensive, like Phalanx anti-missile systems on ships. He worries less about autonomous weapons making decisions on their own than about systems that don't work as advertised or kill noncombatants or friendly forces.

The department's current chief digital and AI officer Craig Martell is determined not to let that happen.

"Regardless of the autonomy of the system, there will always be a responsible agent that understands the limitations of the system, has trained well with the system, has justified confidence of when and where it's deployable -- and will always take the responsibility," said Martell, who previously headed machine-learning at LinkedIn and Lyft. "That will never not be the case."

As to when AI will be reliable enough for lethal autonomy, Martell said it makes no sense to generalize. For example, Martell trusts his car's adaptive cruise control but not the tech that's supposed to keep it from changing lanes. "As the responsible agent, I would not deploy that except in very constrained situations," he said. "Now extrapolate that to the military."

Martell's office is evaluating potential generative AI use cases (it has a special task force for that) but focuses more on testing and evaluating AI in development.

One urgent challenge, says Jane Pinelis, chief AI engineer at Johns Hopkins University's Applied Physics Lab and former chief of AI assurance in Martell's office, is recruiting and retaining the talent needed to test AI tech. The Pentagon can't compete on salaries. Computer science PhDs with AI-related skills can earn more than the military's top-ranking generals and admirals.

Testing and evaluation standards are also immature, a recent National Academy of Sciences report on Air Force AI highlighted.

Might that mean the U.S. one day fielding, under duress, autonomous weapons that don't fully pass muster?

"We are still operating under the assumption that we have time to do this as rigorously and as diligently as possible," said Pinelis. "I think if we're less than ready and it's time to take action, somebody is going to be forced to make a decision."

Read the rest here:

Pentagon's AI initiatives accelerate hard decisions on lethal autonomous weapons. - ABC News

unveil new AI-enabled imaging innovations at #RSNA – News | Philips – Philips

Philips HealthSuite Imaging is a cloud-based next generation of Philips Vue PACS, enabling radiologists and clinicians to adopt new capabilities faster, increase operational efficiency and improve patient care. HealthSuite Imaging on Amazon Web Services (AWS) offers new capabilities such as high-speed remote access for diagnostic reading, integrated reporting and AI-enabled workflow orchestration, all delivered securely via the cloud to ease the IT management burden. Also unveiled at RSNA is Philips AI Manager, an end-to-end AI enablement solution that integrates with a customer's IT infrastructure, allowing radiologists to leverage more than 100 AI applications for a more comprehensive assessment and deeper clinical insights in the radiology workflow.

Speed and efficiency are critical to diagnosis and treatment. At RSNA, Philips will also spotlight its newest innovations in digital X-ray, including the Philips Radiography 7000 M, a premium mobile radiography solution designed to offer enhanced care and higher operational efficiency for faster, more efficient patient care, and the Philips Radiography 7300 C, a premium digital radiography system designed to deliver high efficiency and clinical versatility. Also featured is the next-generation Image Guided Therapy System Azurion 7 B20/15 biplane configuration, providing superb positioning capability for easier patient access during minimally invasive procedures, faster system movement, and full tableside control of all components.

View post:

unveil new AI-enabled imaging innovations at #RSNA - News | Philips - Philips

Commentary: Biden’s executive order on AI is ambitious and … – The Spokesman Review

Last month President Joe Biden issued an executive order on artificial intelligence, the government's most ambitious attempt to set ground rules for this technology. The order focuses on establishing best practices and standards for AI models, seeking to constrain Silicon Valley's propensity to release products before they've been fully tested, to "move fast and break things."

But despite the order's scope (it's 111 pages and covers a range of issues, including industry standards and civil rights), two glaring omissions may undermine its promise.

The first is that the order fails to address the loophole provided by Section 230 of the Communications Decency Act. Much of the consternation surrounding AI has to do with the potential for deep fakes (convincing video, audio and image hoaxes) and misinformation. The order does include provisions for watermarking and labeling AI content so people at least know how it's been generated. But what happens if the content is not labeled?
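
For a sense of what machine-readable labeling can look like at its simplest, here is a toy sketch that stamps an "AI-generated" tag into a PNG's metadata with the Pillow library. Real provenance schemes (e.g., C2PA) are cryptographically signed; this plain-text tag only illustrates the concept and is trivially stripped, which is exactly the enforcement gap the column describes:

```python
from PIL import Image, PngImagePlugin

img = Image.new("RGB", (256, 256), "gray")  # stand-in for generated output

# Write provenance tags into the PNG's text chunks.
meta = PngImagePlugin.PngInfo()
meta.add_text("ai-generated", "true")
meta.add_text("generator", "example-model-v1")  # hypothetical model name
img.save("labeled.png", pnginfo=meta)

# A platform could check the tag before deciding how to display the file.
print(Image.open("labeled.png").text.get("ai-generated"))  # -> "true"
```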

Much of the AI-generated content will be distributed on social media sites such as Instagram and X (formerly Twitter). The potential harm is frightening: Already there's been a boom of deep fake nudes, including of teenage girls. Yet Section 230 protects platforms from liability for most content posted by third parties. If the platform has no liability for distributing AI-generated content, what incentive does it have to remove it, watermarked or not?

Imposing liability only on the producer of the AI content, rather than on the distributor, will be ineffective at curbing deep fakes and misinformation because the content producer may be hard to identify, out of jurisdictional bounds or unable to pay if found liable. Shielded by Section 230, the platform may continue to spread harmful content and may even receive revenue for it if it's in the form of an ad.

A bipartisan bill sponsored by Sens. Richard Blumenthal, D-Conn., and Josh Hawley, R-Mo., seeks to address this liability loophole by removing 230 immunity for claims and charges related to generative artificial intelligence. The bill does not, however, seem to resolve the question of how to apportion responsibility between the AI companies that generate the content and the platforms that host it.

The second worrisome omission from the AI order involves terms of service, the annoying fine print that plagues the internet and pops up with every download. Although most people hit "accept" without reading these terms, courts have held that they can be binding contracts. This is another liability loophole for companies that make AI products and services: They can unilaterally impose long and complex one-sided terms allowing illegal or unethical practices and then claim we have consented to them.

In this way, companies can bypass the standards and best practices set by advisory panels. Consider what happened with Web 2.0 (the explosion of user-generated content dominated by social media sites). Web tracking and data collection were ethically and legally dubious practices that contravened social and business norms. Facebook, Google and others, however, could defend themselves by claiming that users consented to these intrusive practices when they clicked to accept the terms of service.

In the meantime, companies are releasing AI products to the public, some without adequate testing and encouraging consumers to try out their products for free. Consumers may not realize that their free use helps train these models and so their efforts are essentially unpaid labor. They also may not realize that they are giving up valuable rights and taking on legal liability.

For example, OpenAI's terms of service state that the services are provided "as is," with no warranty, and that the user will "defend, indemnify, and hold harmless" OpenAI from any claims, losses, and expenses (including attorneys' fees) arising from use of the services. The terms also require the user to waive the right to a jury trial and class action lawsuit. Bad as such restrictions may seem, they are standard across the industry. Some companies even claim a broad license to user-generated AI content.

Biden's AI order has largely been applauded for trying to strike a balance between protecting the public interest and innovation. But to give the provisions teeth, there must be enforcement mechanisms and the threat of lawsuits. The rules to be established under the order should limit Section 230 immunity and include standards of compliance for platforms. These might include procedures for reviewing and taking down content, mechanisms to report issues within the company and externally, and minimum response times from companies to external concerns. Furthermore, companies should not be allowed to use terms of service (or other forms of consent) to bypass industry standards and rules.

We should heed the hard lessons from the last two decades to avoid repeating the same mistakes. Self-regulation for Big Tech simply does not work, and broad immunity for profit-seeking corporations creates socially harmful incentives to grow at all costs. In the race to dominate the fiercely competitive AI space, companies are almost certain to prioritize growth and discount safety. Industry leaders have expressed support for guardrails, testing and standardization, but getting them to comply will require more than their good intentions; it will require legal liability.

Nancy Kim is a law professor at Chicago-Kent College of Law, Illinois Institute of Technology.

Read the rest here:

Commentary: Biden's executive order on AI is ambitious and ... - The Spokesman Review

Implementing quality management systems to close the AI … – Nature.com

In HCOs, AI/ML technologies are often initiated as siloed research or quality improvement initiatives. However, when these AI technologies demonstrate potential for implementation in patient care, development teams may encounter substantial challenges and backtracking to meet the rigorous quality and regulatory requirements12,13. Similarly, HCO governance and leadership may possess a strong foundation in scientific rigor and clinical studies; however, without targeted qualifications and training, they may find themselves unprepared to offer institutional support, regulatory oversight, or mobilize teams toward the interdisciplinary scientific validation of AI/ML-enabled technologies required for regulatory submissions and deployment of SaMD. Consequently, the unpreparedness of HCOs exacerbates the translation gap between research activities and the practical implementation of clinical solutions14. The absence of a systematic approach to ensuring the effectiveness of practices and perpetuating them throughout the organization can lead to operational inefficiencies or harm. Thus, HCOs must first contend with a culture shift when faced with the quality-control rigor inherent to industry-aligned software development and deployment: specifically, design controls, version control, and installation, operational, and performance qualification, which focus primarily on end-user acceptance testing, on the product meeting its intended purpose (improving clinical outcomes or processes compared to the standard of care or the current state), and on the traceability and auditability of proof records (Table 1).

Consider that even in cases where a regulatory submission is not within the scope, it remains imperative to adhere to practices encompassing ethical and quality principles. Examples of such principles identified by the Coalition for Health AI and the National Institute of Standards and Technology (NIST) include effectiveness, safety, fairness, equity, accountability, transparency, privacy, and security3,7,15,16,17,18,19,20. It is also feasible that the AI/ML technology could transition from a non-regulated state to a regulated one due to updated regulations or an expanded scope. In that case, a proactive approach to streamlining the conversion from a non-regulatory to a regulatory standard should address the delicate balance of meeting baseline requirements while maintaining a least-burdensome transition to regulatory compliance.

As utilized by the FDA for regulating SaMD, a proactive culture of quality recognizes the same practices familiar to research scientists well-versed in informatics, translational science, and AI/ML framework development. For example, the FDA has published good machine learning practices (GMLP)21 that enumerate its expectations across the entire AI/ML life cycle, grounded in emerging AI/ML science. The FDA's regulatory framework allows for a stepwise product-realization approach that HCOs can follow to augment this culture shift. This stepwise approach implements ethical and quality principles by design into the AI product lifecycle, fostering downstream compliance while allowing development teams to innovate and continuously improve and refine their products. Using this approach allows for freedom to iterate at early research stages. As the product evolves, the team is prepared for the next stage, where prospectively planned development, risk management, and industry-standard design controls are initiated. At this stage, the model becomes a product, incorporating all the software and functionality needed for the model to work as intended in its clinical setting. QMS procedures outline practices, and the records generated during this stage create the level of evidence expected by industry and regulators22,23. HCOs may either maintain dedicated quality teams responsible for conducting testing or employ alternative structures designed to carry out independent reviews and audits.

Upon deployment, QMS rigor increases again to account for standardized post-deployment monitoring and change management practices embedded in QMS procedures (Fig. 2). By increasing formal QMS consistency as the AI/ML gets closer to clinical deployment, the QMS can minimize disruption to current research practices and embolden HCO scientists with a clear pathway as they continue to prove their software safe, effective, and ethical for clinical deployment.

Fig. 2: Staged process for applying increasing regulatory rigor throughout product realization.
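
One way to picture the staged rigor described above is as a set of gates, where each lifecycle stage names the quality records that must exist before work advances. A minimal sketch follows; the stage names and artifacts are illustrative assumptions, not the FDA's or any particular HCO's checklist:

```python
# Each stage lists the proof records required before leaving that stage.
STAGE_GATES = {
    "research": ["model card", "data provenance log"],
    "development": ["design controls", "risk management file",
                    "version-controlled code and data"],
    "deployment": ["installation qualification (IQ)",
                   "operational qualification (OQ)",
                   "performance qualification (PQ)",
                   "user acceptance test report"],
    "post-deployment": ["monitoring plan", "change-management records"],
}

def missing_artifacts(stage: str, records: set) -> list:
    """Return the artifacts still missing before advancing past `stage`."""
    return [a for a in STAGE_GATES[stage] if a not in records]

print(missing_artifacts("development", {"design controls"}))
# ['risk management file', 'version-controlled code and data']
```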

See the rest here:

Implementing quality management systems to close the AI ... - Nature.com

AI And Generation Z: Pioneering A New Era Of Philanthropy – Forbes

Fabio Richter, founder of Laulau. During the UK's 2023 cost-of-living crisis, he launched the Hot Meal Challenge to raise hot meals for Londoners facing food poverty. Under the patronage of Lord Woolley of Woodford and in partnership with Sufra, this social fundraiser supported hundreds of families across London. (Photo: Fabio Richter)

In an era marked by economic challenges, the resilience of human generosity is more evident than ever. The charity and non-profit organizations (NGOs) market has seen incredible growth, reaching US$305.2 billion in 2023 from US$288.97 billion in 2022. And the projections are even more promising, with estimates of the market reaching US$369.21 billion by 2027. Notably, Generation Z, born between 1997 and 2012, has contributed increasingly to this philanthropic surge. Despite their relatively lesser financial resources, studies show that Gen Z's charitable contributions are growing, reflecting their commitment to social and environmental causes. This growth, amidst a weakened global economy, rising prices, and geopolitical volatility, underscores a collective commitment to positive impact.

Concurrently, there is a surge in the utility and popularity of AI adoption worldwide. The convergence of these two trends, the expanding NGO market and AI technologies, holds immense potential as philanthropy proactively leverages AI's innovative capabilities. Experts in the field, such as James Hodson, CEO of the AI for Good Foundation, suggest that AI can revolutionize fundraising and operational efficiency in philanthropy, creating more impact per dollar donated, as evident in Lifeforce, a Humanitarian Aid 2.0 initiative.

While much has been discussed regarding AI's economic impact, with estimates from PwC of up to US$15.7 trillion added to the global economy by 2030, its potential influence on humanitarian action remains underexplored.

While AI's economic impact is well-documented, its potential in humanitarian sectors is just starting to be realized. Emerging technologies can drive further innovation in the philanthropic sector, benefiting charities through personalized donor outreach strategies, optimized resource allocation, and streamlined decision-making processes. For example, the American Red Cross has implemented AI algorithms to predict donation trends, enabling them to allocate resources more effectively during crises.

For instance, Save The Children Australia enhanced donor outreach with AI-powered predictions, using data segmentation and their CRM system for effective donor targeting. They ranked donation profiles to target specific donors effectively. Similarly, Greenpeace Australia Pacific leveraged machine learning techniques to improve donor retention through a churn propensity model. By assigning scores to previous donation histories, the charity identified donors to re-engage successfully. Furthermore, SwissFoundations highlights the unexplored potential of AI in donor matching, reporting, impact evaluation, and increasing transparency and accountability within philanthropic organizations. These case studies illustrate how AI can provide actionable insights into donor behavior, leading to more targeted and successful fundraising strategies.
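
A donor-churn propensity model of the kind described can be sketched in a few lines: score each donor's likelihood of lapsing from simple giving-history features, then rank donors for re-engagement. The features and synthetic data below are invented for illustration and are not Greenpeace's model:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000
X = np.column_stack([
    rng.integers(1, 60, n),   # months since last gift
    rng.integers(1, 40, n),   # lifetime number of gifts
    rng.uniform(5, 500, n),   # average gift size
])
# Toy ground truth: long-lapsed, infrequent donors are likelier to churn.
p = 1 / (1 + np.exp(-(0.08 * X[:, 0] - 0.10 * X[:, 1] - 2)))
y = (rng.random(n) < p).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)
scores = model.predict_proba(X)[:, 1]      # churn propensity per donor
reengage = np.argsort(scores)[::-1][:100]  # top 100 at-risk donors
print("highest churn score:", round(float(scores[reengage[0]]), 2))
```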

Gen Z is uniquely positioned to advance charitable work with their digital fluency and social media savviness. As the world's first generation of true digital natives, their comfort with technology and social media paves the way for innovative approaches to charitable giving.

The intersection of AI and Gen Z presents a unique opportunity to shape the future of philanthropy. As these two forces continue to converge, the possibilities for innovation and positive impact are boundless. Research indicates that Gen Z donors prefer digital platforms for charitable engagement and are more likely to support causes that align with their values, emphasizing the need for NGOs to adapt to these preferences. This generation is more than just digitally competent; they are socially conscious. According to a study by McKinsey, 70% of Gen Z prioritize social impact in their spending and charitable giving, indicating a shift towards more conscientious consumerism.

Dubious stereotypes often dismiss Gen Z as a generation of self-absorbed and distractible youth, seemingly trapped by the addictive allure of social media and limited in their ability to appreciate the world beyond their personal experiences. However, according to a Forbes article in 2022, members of Gen Z are defying these expectations and emerging as the next generation of charitable donors, potentially surpassing their older counterparts in their willingness to support philanthropic causes. Their motivations stem from a deep sense of conviction.

Gen Z distinguishes itself as a charitable demographic and takes the lead in its chosen advocacies, spearheading digitally driven efforts to address philanthropic causes. The Hot Meal Challenge is a prime example of how Gen Z's digital savviness can be harnessed for philanthropy: a viral fundraising campaign aimed at tackling the United Kingdom's pervasive cost-of-living crisis by providing hot meals to food-insecure households. Collaborating with Sufra, a prominent London-based food poverty charity, Gen Z members nominate each other via an app to donate hot meals.

Fabio Richter, the founder of the Hot Meal Challenge, firmly believes that philanthropy can drive meaningful global change, especially when harnessed with the power of technology. He states, "Through strategic giving and thoughtful investments, philanthropy can catalyze positive transformations in society. To fully unlock its potential, it is crucial to leverage technology, invest in local capacity-building, and collaborate with policymakers to enact long-term structural change."

Fabio emphasizes that Gen Z represents a highly promising donor market with which charities and non-governmental organizations (NGOs) should actively engage. "Surprisingly, non-profits often overlook generational cohorts like Gen Z and Millennials. While they may have less purchasing power compared to older generations, they outperform them in terms of annualized giving rates as a percentage of disposable income," he explains. "To effectively connect with younger generations, nonprofits must understand them from multiple dimensions: demographically, behaviorally, and psychographically." This viral campaign brought widespread attention to food insecurity issues in the UK, demonstrating the power of social media in driving social change.

Gen Z challenges the prevailing stereotypes by actively contributing to philanthropic endeavors. Their digital savvy and deep commitment to causes make them a force to be reckoned with. Nonprofits and organizations should recognize the untapped potential of this emerging market and develop comprehensive strategies to engage and collaborate with Gen Z effectively. Fabio Richter, the founder, stated, "The success of this initiative is a testament to Gen Z's commitment to social change, harnessed through technology." Richter's approach in the Hot Meal Challenge exemplifies how combining technology with a deep understanding of Gen Z's communication styles can lead to successful philanthropic campaigns.

While AI offers numerous benefits, it also presents new challenges in terms of ethics and privacy. Balancing these aspects is crucial for sustainable growth in philanthropy. As AI technology holds immense promise in revolutionizing humanitarian work, advocates and supporters must remain vigilant about the inherent risks associated with emerging technologies. Managing the potential systemic risks posed by training AI systems, addressing the challenges of predictive decision-making, and ensuring transparency are all paramount. Fabio, an expert in the field, highlights the importance of preventing AI systems from perpetuating and exacerbating structural biases inherent in data.

When deployed carefully and strategically, AI can be an extraordinary catalyst for transforming humanitarian efforts, regardless of their humble origins. Pioneering digital philanthropic initiatives such as the Hot Meal Challenge are already reshaping the industry landscape and paving the way for tangible real-world impact. Speaking at the initiative's launch, Lord Woolley of Woodford emphasized its transformative potential in fighting poverty and restoring human dignity.

It's vital for organizations to establish ethical guidelines for AI use, ensuring that these technologies are used responsibly and transparently, with a focus on enhancing rather than replacing human decision-making in philanthropy. Ethical considerations, like data privacy and bias in AI algorithms, are crucial. Measures such as transparent AI development processes and regular ethical audits are essential to ensure these tools serve the greater good without unintended consequences. It is critical to remember that integrating AI and philanthropy must be rooted in fundamental human values underpinning charitable endeavors: service, compassion, and a steadfast commitment to envisioning a better future. As Lord Woolley aptly stated, this endeavor is not just about individual gain but collective action and collaboration in serving others.

The collaboration between AI and philanthropy has the power to drive significant change. Still, it must always be guided by a deep understanding of human values and a shared vision for a brighter tomorrow.

The convergence of AI, Gen Z values, and philanthropy is a powerful combination with the potential to reshape humanitarian efforts.

As these complementary forces gain momentum, upholding ethical integrity and managing risks becomes paramount. By embracing these trends, NGOs can unlock new potential for impact and efficiency.

This approach promises technological advancement and a more compassionate and efficient philanthropic sector.

Moving forward, it is essential for philanthropic organizations to stay attuned to these technological advancements and generational shifts, ensuring that their strategies remain relevant and effective in the evolving landscape of philanthropy. The synergistic potential of this unique convergence remains to be explored. With courage, care, and conviction, AI and Gen Z have the opportunity to shape the course of philanthropy for the future and the lasting ascent of humanity.

Original post:

AI And Generation Z: Pioneering A New Era Of Philanthropy - Forbes

How Popular Are Generative AI Apps? – Government Technology

The Internet has been abuzz with stories surrounding the future of OpenAI (the company that created ChatGPT), CEO Sam Altman's future (as of this writing, he had been reinstated to the position), Microsoft's hiring plans, and related developments.

But regardless of how those important questions get answered, it is important to understand just how big this generative artificial intelligence (GenAI) market has become as we head into 2024.

The Verge recently highlighted the growth of ChatGPT: "In less than a year, it's hit 100 million weekly users, and over 2 million developers are currently building on the company's API, including the majority of Fortune 500 companies."

A newly released study by Writerbuddy examined this question in detail. Here's what they found about the top 10 apps. (Note that the full report looks at 50 apps, but I am just listing the top 10.)

1. ChatGPT: Launched in November 2022, it quickly dominated with 14.6 billion visits over 10 months, averaging 1.5 billion monthly.

2. Character.AI: Introduced in September 2022, it captivated users, accumulating 3.8 billion visits and surging by 463.4 million within a year.

4. Janitor AI: A unique chatbot from May 2023, it experienced a quick rise with 192.4 million visits in four months.

5. Perplexity AI: Established by ex-Google staff in August 2022, it progressed rapidly, drawing 134.3 million users in nine months.

6. Civitai: An AI art hub since November 2022, it climbed to 177.2 million visits within 10 months.

7. Leonardo AI: From December 2022, this visual asset tool became a creator's choice, gathering 101.6 million visits over nine months.

8. ElevenLabs: With advanced voice AI since October 2022, it attracted 88.6 million users in 11 months.

9. CapCut: An established video tool from April 2020, it consistently pulled 203.8 million visits in the past year.

10. Cutout.Pro: An AI content tool from 2018, it maintained its grip with 133.5 million users over the year.

The study also tracked the AI apps that lost the most traffic over the same period; a quick arithmetic check of the quoted figures follows the list.

1. Craiyon: Between September 2022 and August 2023, Craiyon.com, an AI image generator, lost 15 million visits, possibly due to growing competition. They averaged 10.7 million monthly, with a monthly drop of 1.4 million.

2. Midjourney: Another AI image tool, Midjourney, launched in July 2022, faced an 8.66 million visitor dip over the year. Despite a strong 41.7 million monthly average, they saw a monthly decline of 787,700.

3. QuillBot: An established AI writing tool, QuillBot lost 5 million visits in 12 months, perhaps due to rising chatbot rivals. Still, they held a robust 94.6 million monthly traffic, with a 461,400 monthly drop.

4. Jasper: An early AI writing platform, Jasper's traffic decreased by 1.27 million visits over the year, possibly affected by giants like ChatGPT. They averaged 7.9 million monthly and faced a 115,100 monthly loss.

5. Zyro: An AI-enhanced website builder, Zyro saw a 1.09 million-visitor decline in 12 months, potentially due to growing AI feature competitors. They recorded a 5 million monthly average and a decline of 99,400 each month.
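
As promised above, a quick check shows the quoted monthly figures are internally consistent with roughly 10- and 11-month observation windows:

```python
# Recompute the report's monthly figures from the totals quoted above.
chatgpt_monthly = 14.6e9 / 10          # ~1.46B visits/month (quoted: 1.5B)
midjourney_monthly_drop = 8.66e6 / 11  # ~787,000 lost/month (quoted: 787,700)
print(f"ChatGPT monthly average: {chatgpt_monthly / 1e9:.2f}B")
print(f"Midjourney monthly decline: {midjourney_monthly_drop:,.0f}")
```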

A video accompanying the original article shows some of the ways that GenAI tools are changing enterprises' business plans.

The rest is here:

How Popular Are Generative AI Apps? - Government Technology

Fovia Ai to Showcase Optimized AI Visualization at IAIP Exhibit … – PR Newswire

Fovia Ai Provides Efficient AI Results in One Unified Viewer

CHICAGO, Nov. 26, 2023 /PRNewswire/ -- Fovia Ai, Inc., a subsidiary of Fovia, Inc., a world leader in advanced visualization for over two decades and a preeminent provider of zero-footprint, cloud-based imaging SDKs, today announced that it will be showcasing AI interaction and visualization of AI results from multiple vendors and algorithms, displayed efficiently in one consistent user interface, in the Imaging Artificial Intelligence in Practice (IAIP) demonstration November 26-29 at the 109th Scientific Assembly and Annual Meeting of the Radiological Society of North America (RSNA 2023) at McCormick Place in Chicago.

RSNA attendees visiting the IAIP exhibit will be able to explore AI integrations from 20 exhibitors with 28 products that are based on real-world clinical scenarios as well as see live demonstrations of Fovia Ai software integrated with vendors including GE Healthcare, Hyperfine, Laurel Bridge, Milvue, Nuance, Nvidia, PaxeraHealth, Qure.ai, Qvera, Siemens Healthineers, Smart Reporting, Telerad Tech and Visage Imaging. The interactive exhibit provides attendees access to emerging AI technologies, demonstrates the interoperability standards needed to integrate AI into the workflow of diagnostic radiology, and highlights AI-driven products that remove barriers to clinical adoption.

"For the fourth year in a row, Fovia Ai is thrilled to participate in the IAIP demo and to provide optimized AI visualization for various industry partners. Acting as a gatekeeper for AI findings, our technology works conjointly with the latest interoperability standards to make physician-validated findings readily available to AI orchestrators, PACS, and reporting systems. At this year's RSNA we are also demonstrating the efficiency of visualizing and interacting with AI results in one consistent user interface regardless of the algorithm origin, thereby streamlining AI-supported image interpretation," stated Fovia Ai's Chief Technology Officer, Kevin Kreeger, Ph.D. "We are pleased with the incredible forward-thinking environment the collaborative IAIP team provides every year."

Attendees can also visit the Fovia Ai booth (#4161), conveniently located adjacent to the IAIP demonstration.

To learn more about Fovia and Fovia Ai's complete product suites or arrange a demonstration at the 109th Scientific Assembly and Annual Meeting of the Radiological Society of North America, November 26-29, contact us.

About Fovia Ai

Fovia Ai, Inc. is a subsidiary of Fovia, Inc., a world leader in advanced visualization, a preeminent provider of cloud-based, zero-footprint imaging SDKs, and the developer of High Definition Volume Rendering, XStream HDVR, F.A.S.T. RapidPrint and TruRender. Fovia Ai's flagship products, F.A.S.T. aiCockpit and F.A.S.T. AI SDK, enable radiologists and clinicians to efficiently access AI results directly within their existing workflows from any PACS, worklist, dictation software or hospital system. Complementary products in Fovia Ai's product suite include F.A.S.T. AI Annotation, F.A.S.T. AI Validation, F.A.S.T. AI Workflows, F.A.S.T. Interactive AI and F.A.S.T. Interactive Segmentation, collectively providing tools to annotate, validate, modify, accept/reject, interact with and segment data. The flexible architecture of Fovia Ai's product suite and Fovia's 20+ years of radiology integration experience facilitate seamless integrations with a variety of partners, platforms, processors and operating systems.

For additional information and to learn more about commercial, academic or research licensing, visit fovia.ai or fovia.com.

IMPORTANT REGULATORY NOTICE: The applications mentioned herein are for investigational use only at this time.

SOURCE Fovia Ai, Inc.

Excerpt from:

Fovia Ai to Showcase Optimized AI Visualization at IAIP Exhibit ... - PR Newswire