Category Archives: Artificial Intelligence

Researchers Using Artificial Intelligence to Assist With Early Detection of Autism Spectrum Disorder – University of Arkansas Newswire

Photo by University Relations

Khoa Luu and Han-Seok Seo

Could artificial intelligence be used to assist with the early detection of autism spectrum disorder? That's a question researchers at the University of Arkansas are trying to answer. But they're taking an unusual tack.

Han-Seok Seo, an associate professor with a joint appointment in food science and the UA System Division of Agriculture, and Khoa Luu, an assistant professor in computer science and computer engineering, will identify sensory cues from various foods in both neurotypical children and those known to be on the spectrum. Machine learning technology will then be used to analyze biometric data and behavioral responses to those smells and tastes as a way of detecting indicators of autism.

There are a number of behaviors associated with ASD, including difficulties with communication, social interaction or repetitive behaviors. People with ASD are also known to exhibit some abnormal eating behaviors, such as avoidance of some if not many foods, specific mealtime requirements and non-social eating. Food avoidance is particularly concerning, because it can lead to poor nutrition, including vitamin and mineral deficiencies. With that in mind, the duo intend to identify sensory cues from food items that trigger atypical perceptions or behaviors during ingestion. For instance, odors like peppermint, lemons and cloves are known to evoke stronger reactions from those with ASD than those without, possibly triggering increased levels of anger, surprise or disgust.

Seo is an expert in the areas of sensory science, behavioral neuroscience, biometric data and eating behavior. He is organizing and leading this project, including screening and identifying specific sensory cues that can differentiate autistic children from non-autistic children with respect to perception and behavior. Luu is an expert in artificial intelligence with specialties in biometric signal processing, machine learning, deep learning and computer vision. He will develop machine learning algorithms for detecting ASD in children based on unique patterns of perception and behavior in response to specific test samples.
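The article does not describe the team's actual pipeline, but the general shape of such a system is a supervised classifier trained on per-child response features. Below is a minimal sketch of that idea on synthetic data; the feature names, labels, and model choice are assumptions for illustration, not the researchers' method.

```python
# Illustrative sketch only: a generic supervised classifier of the kind described above,
# trained on synthetic "biometric response" features. Feature names, data, and model
# choice are assumptions, not the Arkansas team's actual pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_children = 200

# Hypothetical per-child features summarizing responses to odor/taste stimuli,
# e.g., skin-conductance change, heart-rate change, and a coded facial-expression score.
X = rng.normal(size=(n_children, 3))
# Synthetic labels (1 = ASD, 0 = neurotypical); in a real study these come from clinical diagnosis.
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.8, size=n_children) > 0.5).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"Cross-validated AUC on synthetic data: {scores.mean():.2f}")
```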

The duo are in the second year of a three-year, $150,000 grant from the Arkansas Biosciences Institute.

Their ultimate goal is to create an algorithm that exhibits equal or better performance in the early detection of autism in children when compared to traditional diagnostic methods, which require evaluations by trained healthcare and psychological professionals, longer assessment durations, caregiver-submitted questionnaires and additional medical costs. Ideally, they will be able to validate a lower-cost mechanism to assist with the diagnosis of autism. While their system would not likely be the final word in a diagnosis, it could provide parents with an initial screening tool, ideally ruling out children who are unlikely to have ASD while ensuring the most likely candidates pursue a more comprehensive screening process.

Seo said that he became interested in the possibility of using multi-sensory processing to evaluate ASD when two things happened: he began working with a graduate student, Asmita Singh, who had a background in working with autistic students, and his daughter was born. Like many first-time parents, Seo paid close attention to his newborn baby, anxious that she be healthy. When he noticed she wouldn't make eye contact, he did what most nervous parents do: turned to the internet for an explanation. He learned that avoidance of eye contact was a known characteristic of ASD.

While his child did not end up having ASD, his curiosity was piqued, particularly about the role sensitivities to smell and taste play in ASD. Further conversations with Singh led him to believe fellow anxious parents might benefit from an early detection tool, perhaps inexpensively alleviating concerns at the outset. Later conversations with Luu led the pair to believe that if machine learning, developed by his graduate student Xuan-Bac Nguyen, could be used to identify normal reactions to food, it could be taught to recognize atypical responses as well.

Seo is seeking volunteers aged 5-14 to participate in the study. Both neurotypical children and children already diagnosed with ASD are needed for the study. Participants receive a $150 eGift card for participating and are encouraged to contact Seo at hanseok@uark.edu.

About the University of Arkansas: As Arkansas' flagship institution, the UofA provides an internationally competitive education in more than 200 academic programs. Founded in 1871, the UofA contributes more than $2.2 billion to the Arkansas economy through the teaching of new knowledge and skills, entrepreneurship and job development, discovery through research and creative activity while also providing training for professional disciplines. The Carnegie Foundation classifies the UofA among the few U.S. colleges and universities with the highest level of research activity. U.S. News & World Report ranks the UofA among the top public universities in the nation. See how the UofA works to build a better world at Arkansas Research News.

See the rest here:
Researchers Using Artificial Intelligence to Assist With Early Detection of Autism Spectrum Disorder - University of Arkansas Newswire

Kura Technologies Artificial Intelligence Generated Optics Deliver the Highest Performance for the Future of the Metaverse – Yahoo Finance

Kura's custom, one-of-a-kind development kit optics are on track for production later this year to deliver the best-performing augmented reality glasses and drive the next wave of innovation for the Metaverse

Kura's AI-Generated Custom Development Kit Optics Deliver the Highest Performance of any AR Glasses

Kura's development kit optics (pictured at right) were generated by its proprietary artificial intelligence-powered design tool, which is 1,000 times faster than existing design tools. Kura's optics outperform competitors by a wide margin, with up to a 150-degree field of view, 8K resolution, 95% transparency, 30% efficiency, zero light leakage and a wide range of depth.

SAN FRANCISCO, CA, Aug. 24, 2022 (GLOBE NEWSWIRE) -- Kura Technologies, an award-winning developer of the best-in-class augmented reality (AR) smart glasses and platform, today announced that its fully custom and highly-anticipated development kit optics are up and running in the lab with production to start later this year.

"An AR headset is nothing without its display and optics," said Kura Founder, CEO and CTO Kelly Peng. "Kura's groundbreaking performance is made possible by our ground-up architecture, including custom display silicon we've fabricated with TSMC, the world's largest semiconductor foundry, a custom micro-LED display, and custom AI-generated optics."

Kura's optics are uniquely capable of:

Up to 150-degree field-of-view, a world record for waveguide type displays and 9 times the viewable area of current market leaders.

8K resolution, 16 times the resolution of existing AR, enabling text readability critical for training, remote collaboration, and design.

Up to 95% transparency, significantly higher than the ~25% typical of other products; Kura's performance enables natural eye contact and safe operation, and lets users perform other tasks while using AR.

Compact eyeglass-style waveguide displays enabling attractive, ergonomic glasses suited for extended usage.

Wide range of depth, allowing for natural interactions from arm's length all the way to infinity.

Zero front light leakage, so that others cannot see the content the wearer is viewing, critical not only for defense and security use cases but also for privacy in day-to-day usage. Kura has the only waveguide display that can achieve true zero outward light leakage from any angle.

To build its ultra-high performance AR glasses, Kura initially tried to use existing optics and multiphysics design software, but found that executing the design in existing tools was completely impossible, since the simulations didn't converge. In response, Kura developed a proprietary AI optical design tool, Istara, which executes over 1,000 times faster than existing software, finding solutions in a matter of hours rather than years. This tool leverages a state-of-the-art parallel, vectorized compute kernel which implements accelerated ray tracing algorithms and proprietary AI algorithms that Kura developed from the ground up and are completely different from the algorithms in popular toolkits such as TensorFlow, PyTorch, and Caffe. Istara is based on both mathematical models and real-world feedback from designs manufactured with the tool. Kura has already received licensing requests from some of the largest companies in the world for this software.
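Kura has not published how Istara works, so the following does not reflect its internals. Purely as a generic illustration of the "parallel, vectorized" ray-tracing idea described above, here is a NumPy sketch that tests a whole batch of rays against a sphere at once instead of looping ray by ray; the geometry and numbers are invented.

```python
# Generic illustration of vectorized ray tracing: intersect a batch of rays with a
# sphere using array operations instead of a per-ray loop. Not Kura's Istara tool.
import numpy as np

def ray_sphere_hits(origins, directions, center, radius):
    """Return a boolean mask of which unit-direction rays hit the sphere (vectorized)."""
    oc = origins - center                      # (N, 3) vectors from sphere center to ray origins
    b = np.einsum("ij,ij->i", oc, directions)  # dot(oc, d) for every ray at once
    c = np.einsum("ij,ij->i", oc, oc) - radius ** 2
    disc = b ** 2 - c                          # quarter-discriminant of the per-ray quadratic
    return disc >= 0                           # non-negative discriminant means an intersection

rng = np.random.default_rng(1)
origins = np.zeros((100_000, 3))
directions = rng.normal(size=(100_000, 3))
directions /= np.linalg.norm(directions, axis=1, keepdims=True)

hits = ray_sphere_hits(origins, directions, center=np.array([0.0, 0.0, 5.0]), radius=1.0)
print(f"{hits.mean():.1%} of rays hit the sphere")
```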

"We have our amazing team to thank for this groundbreaking work. Our team of world-class experts includes pioneers in machine learning, MIT-educated engineers and PhDs in program synthesis, AI, advanced algorithms, signal processing, machine learning, optics, and high-performance computing, and award-winning math and computer science experts," said Kura Co-Founder and Chief Science Officer Bayley Wang. "It's this sort of experience which gives us the boldness to break free of traditional paradigms and craft something completely novel, something that very few other groups can pull off."

"It's been thought impossible to build a headset which is transparent enough to look someone in the eye and opaque enough to see images in daylight, but Kura's AI optimizer and simulator assemble the optics in a truly mind-boggling way that makes this possible," said Dr. James Koppel, Kura's software advisor, who holds a Ph.D. from MIT on AI for generating programming tools.

Kura's product has been widely regarded as the highest-performing AR headset for years, with orders from industry-leading companies and institutions including General Motors, Caterpillar, Trimble, and multiple government agencies, which approached Kura after being dissatisfied with the performance of other AR glasses.

About Kura Technologies

Kura is building the world's best-performing augmented reality glasses and global telepresence and remote collaboration platform. The company has over 350 customers, predominantly Fortune 100 and 500 companies in diverse sectors including automotive, smartphone, telecom, entertainment, medical, and enterprise software, 100% of which are inbound.

Founded in 2016, Kura is headquartered in Silicon Valley, California, and is led by a team of industry veterans, with over 50 percent of founding leadership from the Massachusetts Institute of Technology (MIT). Three of its lead engineers hold over 400 patents.

For more information, visit www.kura.tech.

View original post here:
Kura Technologies Artificial Intelligence Generated Optics Deliver the Highest Performance for the Future of the Metaverse - Yahoo Finance

Qloo, the Leading Artificial Intelligence Platform for Culture and Taste Preferences, Raises $15M in Series B – Business Wire

NEW YORK--(BUSINESS WIRE)--Qloo, the leading artificial intelligence platform for culture and taste preferences, announced today that it has raised $15M in Series B funding from Eldridge and AXA Venture Partners. This latest round of funding brings Qloo's total capital raised to $30M, and will enable the privacy-centric AI leader to expand its team of world-class data scientists, enrich its technology, and build on its sales channels in order to continue to offer premier insights into global consumer taste for Fortune 500 companies across the globe.

Founded in 2012, Qloo pioneered the predictive algorithm-as-a-service model, using AI technology to help brands securely analyze anonymized and encrypted consumer taste data and provide recommendations based on a consumer's preferences. Demand for Qloo has been accelerating as companies look for privacy-centric solutions; in fact, API request volumes across endpoints grew more than 273% year-over-year in Q2.

"Before Qloo, consumer taste was really only examined within the silo of a certain app or service, which made it impossible to model a fuller picture of people's preferences," said Alex Elias, Founder and CEO of Qloo. "Qloo is the first AI platform that takes into account all the cross-sections of our preferences, like how our music tastes correlate to our favorite restaurants, or how our favorite clothing brands may lend themselves to a great movie recommendation."

Qloo's flagship API works across multiple layers to process and correlate over 575 million primary entities (such as a movie, book, restaurant, song, etc.) across entertainment, culture, and consumer products, giving the most accurate and expansive predictions of consumer taste based on demographics, preferences, cultural entities, metadata, and geolocational factors. Qloo's API can be plugged directly into leading data platforms such as Snowflake and Tableau, with results populated in only a matter of seconds, making it easy for companies to improve product development, media buying, and consumer experiences in real time.

Qloo currently delivers cultural AI that powers inferences for clients serving over 550 million customers globally in 2022, including industry leaders across media and publishing, entertainment, technology, e-commerce, consumer brands, travel, hospitality, automakers, fashion, financial services, and more.

About Qloo:

Qloo is the leading artificial intelligence platform on culture and taste preferences, providing completely anonymized and encrypted consumer taste data and recommendations for leading companies in the tech, entertainment, publishing, retail, travel, hospitality and CPG sectors. Qloo's proprietary API can predict consumers' preferences and connect how their tastes correlate across over a dozen major categories, including music, film, television, podcasts, dining, nightlife, fashion, consumer products, books and travel. Launched in 2012, Qloo combines the latest in machine learning, theoretical research in neuroaesthetics and one of the largest pipelines of detailed taste data to better inform its customers, and makes all of this intelligence available through an API. By allowing companies to speak more effectively with their target consumers, Qloo helps its customers solve real-world problems such as driving sales, saving money on media buys, choosing locations and building brands. Qloo is the parent company of TasteDive, a cultural recommendation engine and social community that allows users to discover what to watch, read, listen to, and play based on their existing unique preferences.

Learn more at qloo.com and http://www.tastedive.com.

See the original post here:
Qloo, the Leading Artificial Intelligence Platform for Culture and Taste Preferences, Raises $15M in Series B - Business Wire

KSA To Host 2nd Edition of Artificial Intelligence Summit – About Her

The second Global Artificial Intelligence Summit will take place in Riyadh from September 13 to 15. The Saudi Press Agency reported that the Saudi Authority for Data and Artificial Intelligence (SDAIA) is hosting the summit, with the theme "Artificial Intelligence for the Good of Humanity," under the sponsorship of Crown Prince Mohammed bin Salman, who chairs the organization's board of directors.

The King Abdulaziz International Conference Center will host the gathering. According to Arab News, Dr. Abdullah Al-Ghamdi, the president of SDAIA, said that the crown prince's patronage raises the summit's status and importance locally, regionally and internationally. "It has become clear that artificial intelligence techniques have begun to be used in our daily lives and will have a significant impact in all aspects of life, so the Kingdom of Saudi Arabia has set the visions of this modern technical field within the objectives of its Vision 2030," he said.

"This global summit will seek to become a major world forum in the field of artificial intelligence, after the success achieved in the first summit held in 2020," he added.

Al-Ghamdi also stated that the summit will cover all topics relating to AI technologies and that experts, specialists, senior government officials, and the biggest global technology corporations will all be in attendance. Various presentations will be conducted emphasizing the newest research and technology, while attendees will also share skills and explore investment opportunities.

With "more than 100 speakers from around the world under one roof in Riyadh, specialized in artificial intelligence," he claimed that the Global Artificial Intelligence Summit is also a chance for specialists and interested parties to profit from the meeting.

The summit will cover several subjects that demonstrate the effects of AI on the most significant industries, including smart cities, human capacity development, health care, transportation, energy, culture and heritage, environment, and more. The first Global Artificial Intelligence Summit, which lasted two days and featured more than 200 experts and decision-makers, drew more than 13,000 attendees. More than 5 million people viewed the conference on social media.

View post:
KSA To Host 2nd Edition of Artificial Intelligence Summit - About Her

This Is The Artificial Intelligence System With Which Tesla Wants To Impose Itself On The Autonomous Car – Nation World News

Currently, Tesla is the world leader in electric cars. However, the American company is not satisfied with this and wants to lead one of the main battles in the automotive industry: autonomous vehicles.

Tesla has made public for the first time the artificial intelligence technology it has developed for use in its self-driving cars. With it, the company hopes to revolutionize the transportation industry and be the first to offer these vehicle models.

Tesla's AI system is called Dojo, and the company detailed it at the Hot Chips conference. Dojo assembles hundreds of Tesla's D1 chips into giant ExaPODs. The system analyzes video from the fleet of Tesla cars currently on the roads to learn how driving works in the real world.

As CNET details, this artificial intelligence is the basis of Tesla's Full Self-Driving (FSD) system, with which Tesla hopes its cars will navigate highway intersections, parking lots and traffic signals.

Even so, Tesla has had trouble delivering FSD to its customers, many of whom paid for it years ago. Developing this AI has proved much more complicated than Tesla expected, which has slowed its timeline. Dojo's vast computing power is meant to finally make self-driving Teslas a reality.

FSD is currently in limited beta testing and requires ongoing human monitoring, which disqualifies Teslas as true self-driving cars. But that doesn't deter Elon Musk from insisting that this technology will pave the way toward autonomous vehicles that don't need a human.

That said, FSD is giving Musk a headache. Just a few days ago, the company announced that the price of the service would rise from $12,000 to $15,000. "Self-driving pervasively is a difficult problem that requires a great deal of real-world AI to solve," Musk tweeted in 2021. "I didn't expect it to be so hard, but the difficulty is apparent in retrospect."

Even so, Dojo is not new: the American brand started talking about this technology in 2021. But until now, Tesla hadn't detailed how these D1 chips work and how they are interconnected by the dozens or hundreds into a vast computing fabric.

"We need to speed up these AI processors," explained Ganesh Venkataraman, who leads Tesla's autonomous vehicle team. To achieve this, the manufacturer started by designing chips based on its needs, primarily the processing of video data that captures the car's changing environment.

D1 chips are packaged in groups of 25 on a single square training tile about the width of a platter. These tiles join at their edges to form a grid with their neighbors. Data moves from tile to tile, like cars zipping through city streets or, in the case of longer journeys, over a network that works more like a train.

To speed up the process and tailor it to its needs, Tesla plans to move its AI workloads from Nvidia processors to Dojo, but it is not yet known how far along this change is. "We've had the hardware for a long time, and we've been running it in the lab," says hardware engineer Emile Talpes.

As complicated as Dojo may be, deep down it is the biggest asset Tesla has against traditional brands. The rest of the manufacturers rely on an extensive network of component suppliers to develop their technologies, which slows down the process. In other words, Musk's company applies vertical integration.

This integration is on the rise because it allows companies to have tighter control over their products and services so that they work together for customers. In addition, it means that these companies are not required to share profits with anyone else. In Tesla's case, it allows the company to regularly update its software, a service for which it charges $10 a month.

Read more here:
This Is The Artificial Intelligence System With Which Tesla Wants To Impose Itself On The Autonomous Car - Nation World News

At Artificial General Intelligence (AGI) Conference, DRLearner is Released as Open-Source Code — Democratizing Public Access to State-of-the-Art…

SEATTLE, Aug. 19, 2022 /PRNewswire/ -- The 15th annual Artificial General Intelligence (AGI) Conference opens today at Seattle's Crocodile Venue. Running from August 19-22, the AGI conference event includes in-person events, live streaming, and fee-based video access, and features a diverse set of presentations from accomplished leaders in AI research.

As the AGI community convenes, it continues to promote efforts to democratize AI access and benefits. To that end, several AGI-22 presentations will officially launch DRLearner, an open-source project to broaden AI access and innovation by distributing AI/machine learning code that rivals or exceeds human performance across a diverse set of widely acknowledged benchmarks. (Within the AI research community, these Arcade Learning Environment [ALE] benchmark tests are widely accepted as a proxy for situational intelligence.)
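DRLearner's own repository (see drlearner.org) is the authoritative reference; the sketch below is only a minimal, self-contained illustration of the reinforcement-learning loop that systems like it scale up with deep networks and Atari (ALE) environments. The toy corridor environment and hyperparameters are assumptions chosen for brevity, not DRLearner's code.

```python
# Minimal tabular Q-learning on a toy corridor, illustrating the core RL loop that
# deep RL systems such as DRLearner scale up. The environment and hyperparameters
# are invented for illustration.
import numpy as np

N_STATES, GOAL = 10, 9          # a corridor of 10 cells; reaching the last cell ends the episode
ACTIONS = (-1, +1)              # move left or move right
alpha, gamma, epsilon = 0.1, 0.95, 0.1
q = np.zeros((N_STATES, len(ACTIONS)))
rng = np.random.default_rng(0)

def pick_action(state):
    """Epsilon-greedy selection, breaking ties randomly so an untrained agent still explores."""
    if rng.random() < epsilon:
        return int(rng.integers(len(ACTIONS)))
    best = np.flatnonzero(q[state] == q[state].max())
    return int(rng.choice(best))

for episode in range(500):
    state, steps = 0, 0
    while state != GOAL and steps < 500:
        action = pick_action(state)
        next_state = int(np.clip(state + ACTIONS[action], 0, N_STATES - 1))
        reward = 1.0 if next_state == GOAL else 0.0
        # Q-learning update: nudge the estimate toward reward plus discounted best future value.
        q[state, action] += alpha * (reward + gamma * q[next_state].max() - q[state, action])
        state, steps = next_state, steps + 1

print("Greedy policy:", ["right" if row.argmax() == 1 else "left" for row in q[:GOAL]])
```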

"Until now, tools at this level in 'Deep Reinforcement Learning' have been available only to the largest corporations and R&D labs," said project lead Chris Poulin. "But with the open-source release of the DRLearner code, we are helping democratize access to state-of-the-art machine learning tools of high-performance reinforcement learning," continued Poulin.

Ben Goertzel, Chairman of the AGI Society and AGI Conference Series, contextualized DRLearner as well-aligned with the goals of the AGI conference. "Democratizing AI has long been a central mission, both for me and for many colleagues. With AGI-22 we push this mission forward by fostering diversity in AGI architectures and approaches, beyond the narrower scope currently getting most of the focus in the Big Tech world," Goertzel said.

DRLearner project presentations include:

"Open Source Deep Reinforcement Learning" General Interest Keynote presented by Chris Poulin, Project Lead. (Journalists Note: Poulin's initial keynote is scheduled for Sunday, August 21. On this day the AGI-22 Conference is open to the general public.)

"Open Source Deep Reinforcement Learning: Deep Dive" Technical Keynote by Chris Poulin and co-principal author Phil Tabor. (Monday, August 22)

"Demo of Open Source DRLearner Tool" Code Demo by co-author Dzvinka Yarish (Monday, August 22)

Poulin also noted the importance of managing expectations about what DRLearner will, and will not, provide in its initial beta release: "Fully implementing this state-of-the-art ML capability requires considerable computational power on the cloud, so we advise implementors to maintain realistic expectations regarding any deployment." DRLearner's benefits could be substantial, however, for the numerous organizations that have substantial computing budgets: analytical insights, expanded research capability, and perhaps a competitive advantage. "And for those whose professional lives are focused on AGI, this is an exciting time, as DRLearner can enhance their neural network training efforts," Poulin said.

Drawing on his working experience with both US and Ukrainian computer scientists and software developers, Poulin assembled an international team of expert developers to complete the open-source project. (See more about 'DRLearner's International Dev Team' below.)

A final noteworthy addition is that the work of Poulin et al. was advised by Adria Puigdomenech Badia of DeepMind. "DRLearner provides a great implementation of reinforcement learning algorithms, specifically including the curiosity approach that we had pioneered at DeepMind," said Puigdomenech Badia. Poulin likewise had high praise for DeepMind's prior "Agent 57" achievement: "Agent 57 was one of a limited number of implementations (at DeepMind) that consistently beat human benchmarks. And due to the elegant simplicity of its particular design, and the help of Adria, it was the best candidate to inspire our software implementation," Poulin said.

ON ARTIFICIAL GENERAL INTELLIGENCE & THE AGI CONFERENCE GOALS

The original goal of the AI field was the construction of "thinking machines": computer systems with human-like general intelligence. Given the difficulty of that challenge, however, AI researchers in recent decades have focused instead on "narrow AI": systems displaying intelligence regarding specific, highly constrained tasks. But the AGI conference series never gave up on this field's ambitious vision; throughout its fifteen-year existence it has promoted the resurgence of broader research on "artificial intelligence" in the original sense of that term.

And in recent years more and more researchers have recognized the necessity and feasibility of returning to the original goals of the field. Increasingly, there is a call for a transition back to confronting the more difficult issues of "human level intelligence" and "artificial general intelligence (AGI)." AGI leaders are committed to continuing the organization's longstanding leadership role by encouraging and exploring interdisciplinary research based on different understandings of intelligence.

Today, the AGI conference remains the only major conference series devoted wholly and specifically to the creation of AI systems possessing general intelligence at the human level, and ultimately beyond. By convening AI/ML researchers for presentations and discussions, AGI conferences accelerate progress toward our common general intelligence goal.

About the AGI-22 Conference: visit https://agi-conf.org/2022/

About the DRLearner Project: visit http://www.drlearner.org

About Chris Poulin: Poulin specializes in real-time prediction frameworks at Patterns and Predictions, a leading firm in predictive analytics and scalable machine learning. Poulin is also an Advisor at SingularityNET & SingularityDAO. Previously at Microsoft, Poulin was a subject-matter expert (senior director) in machine learning and data science. He also served as Director & Principal Investigator of the Durkheim Project, a DARPA-sponsored nonprofit collaboration with the U.S. Veterans Administration. At Dartmouth College, Poulin was co-director of the Dartmouth Meta-learning Working Group, an IARPA-sponsored project focused on large-scale machine learning. He also has lectured on artificial intelligence and big data at the U.S. Naval War College. Poulin is co-author of the book Artificial Intelligence in Behavioral and Mental Health (Elsevier, 2015). Chris Poulin's LinkedIn Profile

About Ben Goertzel: Chairman of the AGI Society and AGI Conference Series, Goertzel is CEO of SingularityNET, which brings AI and blockchain together to create a decentralized open market for AIs. SingularityNET is a medium for AGI creation and emergence, a way to roll out superior AI-as-a-service to vertical markets, and a vehicle for enabling public contributions to, and benefits from, artificial intelligence. In addition to AGI, Goertzel's passions include life extension biology, philosophy of mind, psi, consciousness, complex systems, improvisational music, experimental fiction, theoretical physics, and metaphysics. For general links to various of his pursuits present and past, see the Goertzel.org website. Ben Goertzel's LinkedIn Profile

About Adria Puigdomenech Badia: For the past seven years Badia has been at DeepMind, where he has specialized in the development of deep reinforcement learning algorithms. Examples of this include 'Asynchronous Methods for Deep Reinforcement Learning,' in which he and Vlad Mnih (DeepMind) proposed A3C, and 'Neural Episodic Control.' Badia's recent projects include the 'Never Give Up' and 'Agent57' algorithms, addressing one of the most challenging problems of RL: the exploration problem.

DRLearner's International Dev Team:

Chris Poulin (Project Lead, US), Phil Tabor (Co-Lead, US), Dzvinka Yarish (Ukraine), Ostap Viniavskyi (Ukraine), Oleksandr Buiko (Ukraine), Yuriy Pryyma (Ukraine), Mariana Temnyk (Ukraine), Volodymyr Karpiv (Ukraine), Mykola Maksymenko (Advisor, Ukraine), Iurii Milovanov (Advisor, Ukraine)

For media inquiries about the DRLearner project, please contact:

Gregory Peterson, Archetype Communications, gpeterson@archetypecommunications.com

For general inquiries about the AGI-22 Conference, please contact:

Jenny Corlett, April Six, singularitynet@aprilsix.com

SOURCE drlearner.org

Continued here:
At Artificial General Intelligence (AGI) Conference, DRLearner is Released as Open-Source Code -- Democratizing Public Access to State-of-the-Art...

Is the future of artificial intelligence internet-free? These researchers hope so – WQAD Moline

Today, AI learning requires a connection to a remote server to perform heavy computing calculations. These researchers say changing that could transform health care.

ORLANDO, Fla. Our computers, devices, smart watches, video monitoring systems and more: we rely on connectivity to the internet and don't think twice about it. Now, scientists are developing technology for artificial intelligence that will allow it to work even in remote areas.

Self-driving cars, drone helicopters and medical monitoring equipment: it's all cutting-edge technology that requires a connection to the cloud. Now, researchers at the University of Central Florida are developing devices that won't rely on an internet connection.

"What we are trying to do is make small devices, which will mimic the neurons and synapses of the brain," explains Tania Roy, PhD, a researcher at the University of Central Florida.

Right now, artificial intelligence learning requires a connection to a remote server to perform heavy computing calculations. Scientists are making the AI circuits microscopically small.

Roy emphasizes, "Each device that we have is the size of 1/100th of a human hair."

The AI can fit on a small microchip less than an inch wide, eliminating the need for an internet connection, meaning life-saving devices could work in remote areas. For example, it could help emergency responders find missing hikers.

"We would send a drone which has a camera eye, and it can just go and locate those people and rescue them," Roy says.

The scientists say with no need for an internet connection, the AI would also work in space, where no AI technology has gone before.

The same UCF team is expanding on their work with artificial brain devices, and they are developing artificial intelligence that mimics the retina in the human eye, meaning someday, AI could instantly recognize the images in front of it. The researchers say this technology is about five years away from commercial use.

If this story has impacted your life or prompted you or someone you know to seek or change treatments, please let us know by contacting Shelby Kluver at shelby.kluver@wqad.com or Marjorie Bekaert Thomas at mthomas@ivanhoe.com.

Watch more 'Your Health' segments on News 8's YouTube channel

See more here:
Is the future of artificial intelligence internet-free? These researchers hope so - WQAD Moline

Artificial Intelligence Is All Around Us. So This District Designed Its Own AI Curriculum – Education Week

The description of artificial intelligence in high school may conjure up a science fiction novel where robots stand around chatting at their lockers.

The reality, at Seckinger High School in Gwinnett County, Ga., looks more like this: A social studies teacher pauses a lesson on the spread of cholera in the 19th century to discuss how data scientists use AI tools today to track diseases. A math class full of English-language learners uses machine learning to identify linear and non-linear shapes.

The simplest explanation of this technology is that it trains a machine to do tasks that simulate some of what the human brain can do. That means it can learn to do things like recognize faces and voices (helpful for radiology, security, and more), understand natural language, and even make recommendations. (Think of the algorithm Netflix uses to suggest your next binge-worthy TV show.)
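For readers who want a concrete picture of the "recommendation" part of that definition, here is a toy sketch that scores shows by the similarity between a viewer's taste vector and each show's feature vector. It is a teaching example with invented titles and numbers, not Netflix's or any company's actual system.

```python
# Toy recommendation example: rank items by cosine similarity to a user's taste vector.
# All shows, features, and numbers are invented for illustration.
import numpy as np

# Each show described by made-up features: (comedy, drama, sci-fi)
shows = {
    "Space Saga":    np.array([0.1, 0.3, 0.9]),
    "Courtroom Co.": np.array([0.2, 0.9, 0.1]),
    "Sitcom Nights": np.array([0.9, 0.2, 0.0]),
}
user_taste = np.array([0.2, 0.4, 0.8])  # this viewer leans toward sci-fi drama

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

ranked = sorted(shows, key=lambda name: cosine(user_taste, shows[name]), reverse=True)
print("Recommended order:", ranked)
```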

While the Gwinnett County school district, which with more than 177,000 students is among the largest in the country, opened Seckinger high school this month to relieve overcrowding elsewhere, the focus of the school is unique. Seckinger is apparently the only high school in the country dedicated to teaching AI as part of its curriculum, not just as an elective class, according to CSforAll, a nonprofit group dedicated to expanding computer science education in schools across the country.

The district has also expanded the focus on artificial intelligence to three nearby elementary schools and a feeder middle school, creating an AI cluster. Ultimately, Gwinnett aims to expose kids to AI in every subject, as they move from kindergarten to 12th grade. Students who find themselves particularly drawn to the topic will get opportunities to delve even deeper into how artificial intelligence works and the ethical implications of using it.

Through the cluster, Gwinnett plans to do more than just prepare kids for success in a hot corner of the job market: It wants to give them a critical window into how AI is reshaping nearly every aspect of the economy.

"Our students need to understand the implications of the technology that they are consuming, and how it's being used on them so that they can make informed decisions," said Sallie Holloway, the district's director of artificial intelligence and computer science. (Holloway said she's never spoken to another district leader who had AI in a job title.)

"Gwinnett is taking a bold step to help students prepare for the present and the future," said Joseph South, the chief learning officer for the International Society for Technology in Education, a nonprofit group that runs the largest educational technology conference in the country.

"We talk like AI is coming," South said. "But it's actually already here. It's all around us. There's no part of our society that isn't going to be touched by [AI]. To the extent that it's invisible to us, we don't have any power over it. It has power over us. To the extent that we understand it, and even better know how to design it, then we can start to partner with AI, instead of being controlled by AI."

Gwinnett officials didn't have to look far for examples of longstanding industries whose work had evolved to include an AI twist.

An agricultural machinery company headquartered in the county now calls itself a technology company, and utilizes self-driving tractors. An assistant superintendent stopped in at a nearby café where robots mixed the drinks, and the man behind the counter was an engineer, not a barista.

That drove home to district officials that "the kids who graduated our high school who might have gone with traditional trades [in the past] are going to need some more technical AI-driven skills," Holloway said.

What's more, they see embracing AI as particularly important for a district as diverse as Gwinnett. It's been well-documented that intelligent machines reflect the same biases as the humans programming them. Facial recognition software powered by AI has had trouble picking up darker complexions. AI-driven risk-assessment algorithms used to figure out criminal sentences tend to make harsher predictions about Black defendants than white ones.

Those problems might not be so prevalent, experts say, if more of the engineers behind the technology came from the demographic groups that make up much of Gwinnetts student population.

"We serve the students who are most underrepresented in the technology industry," Holloway said. Gwinnett's students come from more than 180 countries, about a third are Black, and another third are Hispanic or Latino. About a third come from economically disadvantaged families.

"We want them to be represented and have a voice in how AI develops over the next few decades, when it's expected to take on an even more central role in daily life," Holloway said.

One of the biggest challenges, which Holloway expects to be ongoing: There are few, if any, curricular materials out there for teaching AI to K-12 students, particularly for educators hoping to spotlight the technology in a range of subjects and grade levels.

"When the district began considering its approach, no one else was thinking about this holistic idea of AI readiness, where it's embedded in the classes," Holloway said. "Experts were talking about specific technical AI courses, like computer science courses."

"The problem is that not every kid can take those elective classes. So, every kid doesn't get access to AI, if you only address it through an elective," Holloway said. "But if I embed it into all of the classes a student takes now, every single kid is going to get access to that critical learning that they need for future readiness. We just needed to create it ourselves."

To inform that work, Gwinnett school district officials reached out to higher education institutions, such as the nearby Gwinnett College, Georgia Institute of Technology, and University of Georgia in addition to other schools outside the state like Harvard University and the Massachusetts Institute of Technology. The district also turned to tech companies such as Apple, Google, IBM, and Microsoft as well as nonprofit groups AI4K12, CSForAll, and aiEDU for help.

"Even though we are doing the heavy lifting, we were lucky to have a ton of people who were interested," Holloway said.

Seckinger offers a series of three progressively sophisticated elective classes focused on AI. The first will provide a broad overview of the technology, including its history and evolution, impact on society, plus an introduction to more technical aspects. The second class will go deeper, and the third will have a significant project-based component, allowing students to put their knowledge of AI towards solving real world problems.

Teacher Jason Hurd is not only leading the courses, he's playing a big part in writing them.

"That's meant developing something that doesn't exist anywhere in the country, and potentially, the world," Hurd said.

Memorie Reesman, Seckinger's principal, expects a significant chunk of students will take at least one AI course. But she doesn't anticipate every Seckinger graduate will wind up in a Silicon Valley programming gig.

School and district officials think of Seckinger's students in three different buckets: swimmers, who will get broad exposure to a range of AI-related topics across the curriculum; snorkelers, who might take a couple of the AI electives or delve deeper into the topic as part of another class; and scuba-divers, who will spend much of high school immersed in AI.

In all classes, teachers will be explicit about how their content, whether social studies or even physical education, touches on a range of topics key to helping students become AI ready, including data science, mathematical reasoning, creative problem solving, and ethics.

"What I love about it is it allows us as teachers that don't teach just AI to be able to recognize that there's so much that we do already that touches on the concepts behind the technology," said Cheri Nations, who teaches environmental engineering at Seckinger. "It's [about] being more intentional and authentic with it and tying it and making connections for the kids. Then, as we become more comfortable, we can start doing more of that deep diving."

Reesman has previewed how all this can work in her previous job as the principal of Glenn C. Jones Middle School, the feeder middle school in the AI cluster. The school started piloting the AI program about two years ago.

At first, Jones middle school students and teachers just played around with a few AI-related challenges during the 20-minute homeroom slot in their schedule, Reesman said, including a program from Amazon that allowed students to practice coding robots to do work in a warehouse.

Later, teachers in all subjects began mixing a bit of AI-related content into their classes. One of Reesman's favorite examples: seventh grade science students took a concept that's long been part of their curriculum, genetics, and used coding to figure out the probability of inheriting certain genetic traits.
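As a rough picture of what such an exercise can look like, the sketch below enumerates a Punnett square for a standard Aa x Aa cross and computes the genotype probabilities; it is an illustrative example, not the district's actual lesson code.

```python
# Enumerate a Punnett square for two heterozygous (Aa x Aa) parents and compute
# the probability of each offspring genotype. Standard textbook cross, shown for illustration.
from itertools import product
from collections import Counter

def offspring_genotypes(parent1, parent2):
    """Enumerate all equally likely allele combinations from two parents."""
    combos = ["".join(sorted(pair)) for pair in product(parent1, parent2)]
    counts = Counter(combos)
    return {genotype: count / len(combos) for genotype, count in counts.items()}

probs = offspring_genotypes("Aa", "Aa")
print(probs)                                                                 # {'AA': 0.25, 'Aa': 0.5, 'aa': 0.25}
print("P(shows dominant trait) =", probs.get("AA", 0) + probs.get("Aa", 0))  # 0.75
```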

"There are going to be some days where you're gonna see [AI] really heavily in the cluster schools," Holloway said. "But it may not always be like a very obvious, hit-you-in-the-face [realization], like, 'Oh, we're doing this in AI today.' A lot of it's going to show up in the culture of the school."

That culture extends even to Seckinger's furniture, which isn't your typical desks in rows. Instead, most classrooms use a more flexible seating model, Holloway said.

"They're in circles, they're in groups. Their work is all over their wall," she said. "They're having discussions and conversations and you might not know where the teacher is in the room because they may just be mixed in and talking with the kids." The goal is to make collaborative leadership skills and creative problem solving a central piece of every class.

Helping teachers make the cultural pivot will require time and experimentation, Holloway added.

"Professional development doesn't fix everything [and] there's just a lot of priorities right now in the world of education," Holloway said. She's explained to teachers, "we're going to try something different, and if we fail, that's OK because we're going to pause and learn and try to improve next time."

Eventually, Gwinnett would like to see the curriculum model used throughout the district. And it could be poised to spread even further. The Georgia Department of Education worked with Gwinnett to write academic standards for the new material so that schools anywhere in the Peach State can launch their own AI courses.

South, of ISTE, expects to see more schools around the country adopt AI as a curricular focus.

"There are entire universities devoted to AI in China," he said. "This is already a central part of our society, and we need to prepare citizens to understand it and design it. There's no doubt in my mind this is going to grow."

Read the original post:
Artificial Intelligence Is All Around Us. So This District Designed Its Own AI Curriculum - Education Week

Artificial intelligence was supposed to transform health care. It hasn’t. – POLITICO

"Companies come in promising the world and often don't deliver," said Bob Wachter, head of the department of medicine at the University of California, San Francisco. "When I look for examples of true AI and machine learning that's really making a difference, they're pretty few and far between. It's pretty underwhelming."

Administrators say algorithms (the software that processes data) from outside companies don't always work as advertised because each health system has its own technological framework. So hospitals are building out engineering teams and developing artificial intelligence and other technology tailored to their own needs.

But it's slow going. Research based on job postings shows health care behind every industry except construction in adopting AI.

The Food and Drug Administration has taken steps to develop a model for evaluating AI, but it is still in its early days. There are questions about how regulators can monitor algorithms as they evolve and rein in the technology's detrimental aspects, such as bias, that threaten to exacerbate health care inequities.

"Sometimes there's an assumption that AI is working, and it's just a matter of adopting it, which is not necessarily true," said Florenta Teodoridis, a professor at the University of Southern California's business school whose research focuses on AI. She added that being unable to understand why an algorithm came to a certain result is fine for things like predicting the weather. But in health care, its impact is potentially life-changing.

Despite the obstacles, the tech industry is still enthusiastic about AI's potential to transform health care.

"The transition is slightly slower than I hoped but well on track for AI to be better than most radiologists at interpreting many different types of medical images by 2026," Hinton told POLITICO via email. He said he never suggested that we should get rid of radiologists, but that we should let AI read scans for them.

If he's right, artificial intelligence will start taking on more of the rote tasks in medicine, giving doctors more time to spend with patients to reach the right diagnosis or develop a comprehensive treatment plan.

"I see us moving as a medical community to a better understanding of what it can and cannot do," said Lara Jehi, chief research information officer for the Cleveland Clinic. "It is not going to replace radiologists, and it shouldn't replace radiologists."

Radiology is one of the most promising use cases for AI. The Mayo Clinic has a clinical trial evaluating an algorithm that aims to reduce the hours-long process oncologists and physicists undertake to map out a surgical plan for removing complicated head and neck tumors.

"An algorithm can do the job in an hour," said John D. Halamka, president of Mayo Clinic Platform: "We've taken 80 percent of the human effort out of it." The technology gives doctors a blueprint they can review and tweak without having to do the basic physics themselves, he said.

NYU Langone Health has also experimented with using AI in radiology. The health system has collaborated with Facebook's Artificial Intelligence Research group to reduce the time it takes to get an MRI from one hour to 15 minutes. Daniel Sodickson, a radiological imaging expert at NYU Langone who worked on the research, sees opportunity in AI's ability to downsize the amount of data doctors need to review.

Covid has accelerated AI's development. Throughout the pandemic, health providers and researchers shared data on the disease and anonymized patient data to crowdsource treatments.

Microsoft and Adaptive Biotechnologies, which partner on machine learning to better understand the immune system, put their technology to work on patient data to see how the virus affected the immune system.

"The amount of knowledge that's been obtained and the amount of progress has just been really exciting," said Peter Lee, corporate vice president of research and incubations at Microsoft.

There are other success stories. For example, Ochsner Health in Louisiana built an AI model for detecting early signs of sepsis, a life-threatening response to infection. To convince nurses to adopt it, the health system created a response team to monitor the technology for alerts and take action when needed.

"I'm calling it our care traffic control," said Denise Basow, chief digital officer at Ochsner Health. Since implementation, she said, death from sepsis is declining.
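Ochsner has not published its model, so the sketch below is only a generic illustration of an early-warning classifier of that kind, trained on synthetic vital-sign data; the features, coefficients, and labels are all invented.

```python
# Illustrative sketch only: a simple early-warning classifier on synthetic vitals.
# Not Ochsner's (or any vendor's) sepsis model; all data and coefficients are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 5000
heart_rate = rng.normal(85, 15, n)
temp_c = rng.normal(37.0, 0.7, n)
resp_rate = rng.normal(18, 4, n)
wbc = rng.normal(8, 3, n)          # white blood cell count, 10^9/L

# Synthetic "sepsis" label loosely driven by abnormal vitals plus noise.
risk = 0.04 * (heart_rate - 85) + 0.9 * (temp_c - 37) + 0.1 * (resp_rate - 18) + 0.15 * (wbc - 8)
y = (risk + rng.normal(0, 1, n) > 1.0).astype(int)
X = np.column_stack([heart_rate, temp_c, resp_rate, wbc])

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("AUC on held-out synthetic data:", round(roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]), 2))
```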

The biggest barrier to the use of artificial intelligence in health care has to do with infrastructure.

Health systems need to enable algorithms to access patient data. Over the last several years, large, well-funded systems have invested in moving their data into the cloud, creating vast data lakes ready to be consumed by artificial intelligence. But thats not as easy for smaller players.

Another problem is that every health system is unique in its technology and the way it treats patients. That means an algorithm may not work as well everywhere.

Over the last year, an independent study on a widely used sepsis detection algorithm from EHR giant Epic showed poor results in real-world settings, suggesting where and how hospitals used the AI mattered.

This quandary has led top health systems to build out their own engineering teams and develop AI in-house.

That could create complications down the road. Unless health systems sell their technology, it's unlikely to undergo the type of vetting that commercial software would. That could allow flaws to go unfixed for longer than they might otherwise. It's not just that the health systems are implementing AI while no one's looking. It's also that the stakeholders in artificial intelligence, in health care, technology and government, haven't agreed upon standards.

A lack of quality data, which gives algorithms material to work with, is another significant barrier to rolling out the technology in health care settings.

Much data comes from electronic health records but is often siloed among health care systems, making it more difficult to gather sizable data sets. For example, a hospital may have complete data on one visit, but the rest of a patient's medical history is kept elsewhere, making it harder to draw inferences about how to proceed in caring for the patient.

"We have pieces and parts, but not the whole," said Aneesh Chopra, who served as the government's chief technology officer under former President Barack Obama and is now president of data company CareJourney.

While some health systems have invested in pulling data from a variety of sources into a single repository, not all hospitals have the resources to do that.

Health care also has strong privacy protections that limit the amount and type of data tech companies can collect, leaving the sector behind others in terms of algorithmic horsepower.

Importantly, not enough strong data on health outcomes is available, making it more difficult for providers to use AI to improve how they treat patients.

That may be changing. A recent series of studies on a sepsis algorithm included copious details on how to use the technology in practice and documented physician adoption rates. Experts have hailed the studies as a good template for how future AI studies should be conducted.

But working with health care data is also more difficult than in other sectors because it is highly individualized.

"We found that even internally across our different locations and sites, these models don't have a uniform performance," said Jehi of the Cleveland Clinic.

And the stakes are high if things go wrong. "The number of paths that patients can take are very different than the number of paths that I can take when I'm on Amazon trying to order a product," Wachter said.

Health experts also worry that algorithms could amplify bias and health care disparities.

For example, a 2019 study found that a hospital algorithm more often pushed white patients toward programs aiming to provide better care than Black patients, even while controlling for the level of sickness.

Last year, the FDA published a set of guidelines for using AI as a medical device, calling for the establishment of good machine learning practices, oversight of how algorithms behave in real-world scenarios and development of research methods for rooting out bias.

The agency subsequently published more specific guidelines on machine learning in radiological devices, requiring companies to outline how the technology is supposed to perform and provide evidence that it works as intended. The FDA has cleared more than 300 AI-enabled devices, largely in radiology, since 1997.

Regulating algorithms is a challenge, particularly given how quickly the technology advances. The FDA is attempting to head that off by requiring companies to institute real-time monitoring and submit plans on future changes.

But in-house AI isn't subject to FDA oversight. Bakul Patel, former head of the FDA's Center for Devices and Radiological Health and now Google's senior director for global digital health strategy and regulatory affairs, said that the FDA is thinking about how it might regulate noncommercial artificial intelligence inside of health systems, but he adds, "there's no easy answer."

The FDA has to thread the needle between taking enough action to mitigate flaws in algorithms and not stifling AI's potential, he said.

Some argue that public-private standards for AI would help advance the technology. Groups, including the Coalition for Health AI, whose members include major health systems and universities as well as Google and Microsoft, are working on this approach.

But the standards they envision would be voluntary, which could blunt their impact if not widely adopted.

See original here:
Artificial intelligence was supposed to transform health care. It hasn't. - POLITICO

Parker to Lead Artificial Intelligence Research and Education Initiative at UT – Tennessee Today

Lynne Parker is returning to the University of Tennessee, Knoxville, after completing a four-year post as deputy United States chief technology officer and director of the National Artificial Intelligence Initiative Office within the White House. In that role, Parker oversaw the development and implementation of the national artificial intelligence strategy.

Starting September 6, Parker will serve as associate vice chancellor and director of the new AI Tennessee Initiative at UT, where she will lead the university's strategic vision and strategy for multidisciplinary artificial intelligence education and research. The initiative is designed to increase UT's funded research, expand the number of students developing interdisciplinary skills and competencies related to AI, and position the university and the state of Tennessee as national and global leaders in the data-intensive knowledge economy.

"My goal has always been to advance AI initiatives and policy to the benefit of the American people, and indeed our world. I am proud of the accomplishments we have made over three administrations, together with colleagues from across government, academia, and industry," said Parker. "In my new role, I look forward to advancing Tennessee's engagement in this work by bringing together the broad perspectives and expertise of faculty and students from across many disciplines, not only on the UT Knoxville campus but also with partner institutions and organizations across the state."

Before joining the White House Office of Science and Technology Policy in 2018, Parker served as interim dean of UT's Tickle College of Engineering. She has also served as the National Science Foundation's division director of information and intelligent systems.

"I am thrilled Lynne is returning to UT to deepen and give focus to the research we are doing in AI, both here at UT and throughout Tennessee," said Vice Chancellor for Research Deborah Crawford. "AI is increasingly important to our economy and our society, and Lynne's depth of knowledge and unique expertise will ensure our work in this space is meaningful, thoughtful, and supportive of people's lives and livelihoods."

"Lynne's involvement in shaping AI policy for the country over the last four years will be of enormous benefit to our students as we endeavor to provide new and exciting opportunities for students who want to major, minor, or take courses in the field," said Provost and Senior Vice Chancellor John Zomchick.

A Knoxville native, Parker completed her master's degree in UT's Min H. Kao Department of Electrical Engineering and Computer Science and her PhD at the Massachusetts Institute of Technology. She joined UT's faculty in 2002 when she founded the Distributed Intelligence Laboratory, where she broadened research and knowledge into multirobot systems, sensor networks, machine learning, and human-robot interaction.

Parker was named a Fellow of the Association for the Advancement of Artificial Intelligence in 2022. She is also a Fellow of both the American Association for the Advancement of Science and the Institute of Electrical and Electronics Engineers, and a Distinguished Member of the Association for Computing Machinery. In addition to contributing to several conferences over the years, Parker chaired the 2015 IEEE International Conference on Robotics and Automation and served as editor-in-chief of the IEEE Robotics and Automation Society Conference Editorial Board and editor of IEEE Transactions on Robotics.

CONTACT:

Tyra Haag (865-974-5460, ttucker@utk.edu)

David Goddard (865-974-0683, david.goddard@utk.edu)

Read more here:
Parker to Lead Artificial Intelligence Research and Education Initiative at UT - Tennessee Today