
How Ambient.ai Is Using Artificial Intelligence to Turn Video Security On Its Head – Inc.

Shikhar Shrestha has been building security systems since he was a teenager. It began as part obsession, part coping mechanism. He'd been traumatized when he and his mother were robbed at gunpoint when he was 12. The area of his hometown in eastern India seemed to have lots of security cameras--but what was the use? Help did not come while he was being threatened, and while his mother's jewelry was being stolen. He thought about that a lot.

As a child, Shrestha tinkered with technology, including building homemade security systems for neighbors. Years later he enrolled at Stanford, doing graduate work in electrical and mechanical engineering. There he met computer science grad student Vikesh Khanna--and the pair had a lightbulb moment in conceptualizing the future of video innovation.

"We had an idea that artificial intelligence and video technology were getting so good that in five years video tech and A.I. could look at a video more exactly than humans can," Shrestha, now 30, says. "If any camera out there can tell you right away when it sees something suspicious, that would make for a great security system."

The pair earned master's degrees, and in 2017 founded Ambient.ai, iterating on their idea with funding and support from the Silicon Valley startup incubator Y Combinator. They had a clear goal: to prevent every physical security incident possible. They developed a technology that combines A.I. and a computer-vision breakthrough, called computer vision intelligence, to understand situational context. It could, in real time, identify elements in a video--from a human walking, to a car tailing another car, to a weapon being brandished, to a perimeter breach.

The founders thought they had a straightforward problem to fix. With conventional enterprise security systems, video cameras capture an endless stream of video--which is rarely, if ever, watched in real time to actually stop, prevent, or quickly respond to an incident. During his time in Y Combinator, Shrestha sent 100 emails a week to security chiefs at large companies, hospitals, hotels, and governments to learn more about his market and its needs. He quickly learned that no one wanted a new security system--they already had cameras. But the meetings confirmed what he knew: "Everyone does security the same way: They spend millions of dollars on their programs. The expectation is that if something bad happens you rewind the video." In other words, video wasn't delivering the kind of crime-stopping utility Shrestha envisioned.

At the same time, he was gaining confidence in his teachable video-scanning tool. It could identify when a human fell and got hurt, or when a weapon appeared. The software also could gauge how certain it was that a security incident had occurred. Low confidence means it pings a member of Ambient.ai's small team of humans to verify what is happening in the video. In cases of high confidence, it alerts a designated authority, such as a security chief on duty or local law enforcement.
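The routing logic described here--low-confidence detections go to a human verifier, high-confidence ones straight to an authority--can be sketched in a few lines. This is an illustrative reconstruction; the threshold and every name below are assumptions, not Ambient.ai's actual code.

```python
# Illustrative sketch of confidence-based alert routing as described above.
# The 0.9 threshold and all names are hypothetical, not Ambient.ai's real system.

HIGH_CONFIDENCE = 0.9

def route_detection(event: str, confidence: float) -> str:
    """Decide who handles a detected security incident."""
    if confidence >= HIGH_CONFIDENCE:
        # High confidence: alert a designated authority directly.
        return f"alert authority: {event}"
    # Low confidence: ping a human operator to verify the footage first.
    return f"verify with human: {event}"
```

Under this sketch, a 0.95-confidence "weapon brandished" event goes straight to the security chief, while a 0.4-confidence detection is first checked by a person.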

Just because Shrestha trusted his technology didn't mean investors saw the point. "At that time the venture community did not believe that physical security was an interesting space where you could build a venture-scale business," he says. There were dominant players already. Companies' budgets were allocated. But Ambient.ai's solution was complementary to existing security: It could be integrated into almost any camera-feed system, and customized based on the security needs of nearly any business to detect threats in real time. Still, Shrestha says raising the first $2 million for Ambient.ai required approximately 50 meetings over the course of two months.

The company pitched its product where it saw immediate need. When a private school in San Jose, California, the Harker School, experienced a nighttime perimeter breach (caught on video that no one was watching) followed by an assault the next morning, Shrestha proposed that his system could have prevented it by alerting the authorities immediately. Getting a paying customer seemed to set more deals in motion. While still in beta, the company slowly amassed a client roster. Investor confidence soared, too. When Ambient.ai raised a Series A round of funding, it took 13 days of meetings; the Series B took just three.

After five years of signing up customers and building up its A.I. intelligence in stealth mode, Ambient.ai formally launched to the public in January 2022. It also announced it had raised $52 million in a round led by Andreessen Horowitz. The startup works with seven of the top 10 U.S. technology companies by market capitalization, and its client list includes Adobe, VMWare, and Impossible Foods. Most of the company's 100 employees are based around its headquarters in the San Francisco Bay Area.

Shrestha is hoping his company flips the surveillance model of security to be proactive, rather than reactive. He's also addressing concerns about the use of machine learning in security, which evokes worries over baked-in or learned prejudices and profiling. The Ambient.ai system identifies forms of objects and people, not their colors or traits. Unlike other video-monitoring systems, it does not use facial recognition. Nor does its system have the ability to recognize bias-inducing traits, such as gender, age, or skin color.

"It's not looking for classes that can include bias," Shrestha says. "There's a huge responsibility of people who build these systems to build systems from the ground up to maximize privacy and to eliminate bias."


The New Artificial Intelligence Of Car Audio Might Improve More Than Just Tunes – Forbes

As artificial intelligence is applied to car audio, the system can start to sense competing noise and adjust the experience dynamically.

Hollywood has perennially portrayed Artificial Intelligence (AI) as the operating layer of dystopian robots who replace unsuspecting humans and create the escalating, central conflict. In a best-case reference, you might imagine a young Haley Joel Osment playing David, the self-aware, artificial kid in Spielberg's polar-caps-thawed-and-flooded-coastal-cities world (sound familiar?) of AI: Artificial Intelligence who (spoiler alert) only kills himself. Or maybe you recall Robin Williams's voice as Bicentennial Man who, once again, is a self-aware robot attempting to thrive who (once again on the spoiler alert) ends up being his only victim. And, of course, there's the nearly cliché reference to Terminator and its post-apocalyptic world with machines attempting to destroy humans and, well, (not-so-spoiler alert) lots of victims over a couple of decades. In none of these scenarios, however, do humans coexist with an improved life, let alone enhanced entertainment and safety.

That, however, is the new reality. Artificial Intelligence algorithms can be built into audio designs and continuously improved via over-the-air updates to improve the driving experience. And in direct contradiction to these Hollywood examples, such AI might actually improve a human's likelihood to survive.

How the car audio performs can now become an innovative, self-tuned system that enhances the experience for the user.

Until recently, all user interface (UI) development, including audio, has required complex programming by expert coders over the standard thirty-six (36) months of a vehicle program. Sheet metal styling and electronic boxes are specified, sourced, and developed in parallel, only to have individual elements calibrated late in development. Branded sounds. Acoustic signatures. All separate initiatives within the same anemic system design that has cost manufacturers billions.

But Artificial Intelligence has allowed a far more flexible and efficient way of approaching audio experience design. "What we're seeing is the convergence of trends," states Josh Morris, machine learning engineering manager at DSP Concepts. "Audio is becoming a more dominant feature within automotive, but at the same time you're seeing modern processors become stronger with more memory and capabilities."

And, therein, a systems-focused development platform, Artificial Intelligence, and these stronger processors provide drivers and passengers with a new level of adaptive, real-time responsiveness. "Instead of the historical need to write reams of code for every conceivable scenario, AI guides system responsiveness based on a learned awareness of environmental conditions and events," states Steve Ernst, head of automotive business development at DSP Concepts.

The very obvious way to use such a learning system is de-noising the vehicle so that premium audio can be tailored and improved despite ambient changes such as a swap to winter tires. But LG Electronics has developed algorithms running on the DSP Concepts Audio Weaver platform that enhance a movie's dialogue during rear-seat entertainment, accentuating it against in-movie explosions and thereby allowing the passenger to better hear the critical content.

Another, less obvious aspect is how branded audio sounds are orchestrated in the midst of other noises. Does this specific vehicle require the escalating boot-up sequence to play while other sounds, like the radio and chimes, are automatically turned down? Each experience can be adjusted.

How to deal with ongoing, internal, external, and ever-changing audio alerts will be a development challenge for autonomous and electric vehicles alike.

As the world races into both electric vehicles and autonomous driving, the frequency and needs of audible warnings will likely change drastically. For instance, an autonomous taxi's safety engineer cannot assume the passengers are anywhere near a visual display when a timely alert is required. And how audible is that alert for the nearly 25 million Americans with disabilities for whom autonomous vehicles should open new mobility possibilities? "Audio now isn't just for listening to your favorite song," states Ernst. "With autonomous driving, there are all sorts of alerts that are required to keep the driver engaged or to alert the non-engaged driver about things going on around them."

"And what makes it more challenging," injects Adam Levenson, DSP Concepts's head of marketing, "are all of the things being handled simultaneously within the car: telephony, immersive or spatial sound, engine noise, road noise, acoustic vehicle alert systems, voice systems, etc. We like to say the most complex audio product is the car."

For instance, imagine the scenario where a driver has enabled autonomous drive mode on the highway, has turned up his tunes, and is pleasantly ignorant of an approaching emergency vehicle. At what accuracy (and distance) of siren detection using the vehicle's microphone(s) does the car alert its quasi-distracted driver? How must that alert be presented to overcome ambient noise and command sufficient attention without needlessly startling the driver? All of this can be tuned via pre-developed models, upfront training with different sirens, and subsequent cloud-based tuning. "This is where the overall orchestration becomes really important," explains Morris. "We can take the output of the [AI's detection] model and direct that to different places in the car. Maybe you turn the audio down, trigger some audible warning signal and flash something on the dashboard for the driver to pay attention."
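The orchestration Morris describes--taking one detection output and fanning it out to several in-cabin responses--can be sketched as a simple dispatch function. Thresholds, action names, and the function itself are illustrative assumptions, not DSP Concepts' actual Audio Weaver API.

```python
# Hypothetical sketch of fanning out a siren-detection score to in-cabin actions.
# The thresholds and action names are assumptions for illustration only.

def orchestrate_siren_alert(siren_score: float) -> list[str]:
    """Map a detection model's confidence score to coordinated cabin responses."""
    actions = []
    if siren_score > 0.8:
        # Confident detection: duck the music, warn audibly, and flash the dash.
        actions += ["turn audio down", "play warning chime", "flash dashboard"]
    elif siren_score > 0.5:
        # Uncertain detection: just attenuate the music so a real siren is audible.
        actions.append("turn audio down")
    return actions
```

The design point is that the model emits only a score; a separate orchestration layer decides which combination of audio, visual, and haptic responses that score should trigger.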

The same holds true for external alerts. For instance, a quiet electric vehicle may have tuned alarms for pedestrians. New calibrations can then be created offline and downloaded to vehicles as software updates.

Innovation everywhere. And Artificial Intelligence feeding the utopian experience rather than creating Hollywood's dystopian world.

Here's my prediction of the week (and it's only Tuesday, folks): the next evolution of audio will include a full, instantaneous feedback loop incorporating the user's subtle, real-time delight. Yes, much of the current design likely improves the experience, but an ongoing calibration of User-Centered Design (UCD) might be further enhanced based upon the passenger's expressions, body language, and comments, thereby tuning satisfaction individually and in real time. All of the enablers are there: cameras, AI, processors, and an adaptive platform.

Yes, we've previously heard of adaptive mood lighting and remote detection of boredom, stress, etc. to improve safety, but nothing that enhances the combined experience based upon real-time learning algorithms fed by all user-pointed sensors.

Maybe I'm extrapolating too much. But just like Robin Williams's character, I've spanned two centuries, so maybe I'm also just sensitive to what humans might want.


Artificial intelligence thinks the Aspen area looks like this – The Aspen Times

Aspen is known for its world-class skiing, sky-high real-estate prices, and breathtaking mountain views. The town has been known to conjure artistic inspiration as well; it's the town where Stevie Nicks reportedly wrote the hit "Landslide," and a place John Denver called home for many years.

According to Swift Luxe, there are approximately 1.5 million visitors to Aspen each year who come to take in the beauty of the area.

While it's practically impossible to capture the beauty of Aspen and the surrounding area in an image, an AI program tried. The images below were created using a program called Dream Studio beta, a more rapid and accessible version of Stable Diffusion, a text-to-image model that was released to the public last month.

When this artificial intelligence text-to-image application thinks of Aspen, it thinks of vast mountain ranges.

[Gallery: six AI-generated images captioned "Aspen, Colorado," created with Stable Diffusion]

This is pretty close if you ask us.

[Gallery: five AI-generated images captioned "Maroon Bells," created with Stable Diffusion]

[Gallery: five AI-generated images captioned "Aspen Real Estate," created with Stable Diffusion]

[Gallery: four AI-generated images captioned "Snowmass Village," created with Stable Diffusion]

Close, very close.


Will Artificial Intelligence Kill College Writing? – The Chronicle of Higher Education

When I was a kid, my favorite poem was Shel Silverstein's "The Homework Machine," which summed up my childhood fantasy: a machine that could do my homework at the press of a button. Decades later that technology, the innocuously titled GPT-3, has arrived. It threatens many aspects of university education--above all, college writing.

The web-based GPT-3 software program, which was developed by an Elon Musk-backed nonprofit called OpenAI, is a kind of omniscient Siri or Alexa that can turn any prompt into prose. You type in a query--say, a list of ingredients ("what can I make with eggs, garlic, mushrooms, butter, and feta cheese?") or a genre and prompt ("write an inspiring TED Talk on the ways in which authentic leaders can change the world")--and GPT-3 spits out a written response. These outputs can be astonishingly specific and tailored. When asked to write a song protesting inhumane treatment of animals in the style of Bob Dylan, the program clearly draws on themes from Dylan's "Blowin' in the Wind":

How many more creatures must suffer?
How many more must die?
Before we open up our eyes
And see the harm we're causing?

When asked to treat the same issue in the style of Shakespeare, it produces stanzas of iambic tetrameter in appropriately archaic English:

By all the gods that guide this Earth
By all the stars that fill the sky
I swear to end this wretched dearth
This blight of blood and butchery.

GPT-3 can write essays, op-eds, tweets, jokes (admittedly just dad jokes for now), dialogue, advertisements, text messages, and restaurant reviews, to give just a few examples. Each time you click the "submit" button, the machine-learning algorithm pulls from the wisdom of the entire internet and generates a unique output, so that no two end products are the same.
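The reason no two end products are the same is that models like GPT-3 typically sample each next word from a probability distribution rather than always taking the single most likely choice. A toy sketch of temperature-controlled sampling illustrates the mechanism (this is the general technique, not OpenAI's actual implementation):

```python
import math
import random

def sample_token(logits: list[float], temperature: float = 0.7, rng=random) -> int:
    """Sample one token index from model scores via a temperature-scaled softmax."""
    scaled = [score / temperature for score in logits]
    peak = max(scaled)
    exps = [math.exp(s - peak) for s in scaled]   # numerically stable softmax
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw a random number and walk the cumulative distribution.
    r = rng.random()
    cumulative = 0.0
    for index, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return index
    return len(probs) - 1
```

Because each call draws a fresh random number, repeated calls can pick different tokens, which is why repeated identical prompts yield different prose.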

The quality of GPT-3's writing is often striking. I asked the AI to discuss how free speech threatens a dictatorship, by drawing on free-speech battles in China and Russia and how these relate to the First Amendment of the U.S. Constitution. The resulting text begins, "Free speech is vital to the success of any democracy, but it can also be a thorn in the side of autocrats who seek to control the flow of information and quash dissent." Impressive.

From an essay written by the GPT-3 software program

The current iteration of GPT-3 has its quirks and limitations, to be sure. Most notably, it will write absolutely anything. It will generate a full essay on how George Washington invented the internet, or an eerily informed response to "10 steps a serial killer can take to get away with murder." In addition, it stumbles over complex writing tasks. It cannot craft a novel or even a decent short story. Its attempts at scholarly writing (I asked it to generate an article on social-role theory and negotiation outcomes) are laughable. But how long before the capability is there? Six months ago, GPT-3 struggled with rudimentary queries, and today it can write a reasonable blog post discussing ways an employee can get a promotion from a reluctant boss.

Since the output of every inquiry is original, GPT-3's products cannot be detected by anti-plagiarism software. Anyone can create an account for GPT-3. Each inquiry comes at a cost, but it's usually less than a penny, and the turnaround is instantaneous. Hiring someone to write a college-level essay, in contrast, currently costs $15 to $35 per page. The near-free price point of GPT-3 is likely to entice many students who would otherwise be priced out of essay-writing services.

It won't be long before GPT-3, and the inevitable copycats, infiltrate the university. The technology is just too good and too cheap not to make its way into the hands of students who would prefer not to spend an evening perfecting the essay I routinely assign on the leadership style of Elon Musk. Ironic that he has bankrolled the technology that makes this evasion possible.

To help me think through what the collision of AI and higher ed might entail, I naturally asked GPT-3 to write an op-ed exploring the ramifications of GPT-3 threatening the integrity of college essays. GPT-3 noted, with mechanical unself-consciousness, that it threatened to undermine the value of a college education. "If anyone can produce a high-quality essay using an AI system," it continued, "then what's the point of spending four years (and often a lot of money) getting a degree?" College degrees "would become little more than pieces of paper" if they can be easily replicated by machines.

The effects on college students themselves, the algorithm wrote, would be mixed: On the positive side, students would be able to focus on other aspects of their studies and would not have to spend time worrying about writing essays. On the negative side, however, they will not be able to communicate effectively and will have trouble in their future careers. Here GPT-3 may actually be understating the threat to writing: Given the rapid development of AI, what percent of college freshmen today will have jobs that require writing at all by the time they graduate? Some who would once have pursued writing-focused careers will find themselves instead managing the inputs and outputs of AI. And once AI can automate that, even those employees may become redundant. In this new world, the argument for writing as a practical necessity looks decidedly weaker. Even business schools may soon take a liberal-arts approach, framing writing not as career prep but as the foundation of a rich and meaningful life.

So what is a college professor to do? I put the question to GPT-3, which acknowledged that "there is no easy answer to this question." Still, I think we can take some sensible measures to reduce the use of GPT-3, or at least push back the clock on its adoption by students. Professors can require students to draw on in-class material in their essays and to revise their work in response to instructor feedback. We can insist that students cite their sources fully and accurately (something that GPT-3 currently can't do well). We can ask students to produce work in forms that AI cannot (yet) effectively create, such as podcasts, PowerPoints, and verbal presentations. And we can design writing prompts that GPT-3 won't be able to effectively address, such as those that focus on local or university-specific challenges that are not widely discussed online. If necessary, we could even require students to write assignments in an offline, proctored computer lab.

Eventually, we might enter the "if you can't beat 'em, join 'em" phase, in which professors ask students to use AI as a tool and assess their ability to analyze and improve the output. (I am currently experimenting with a minor assignment along these lines.) A recent project on Beethoven's 10th symphony suggests how such projects might work. When he died, Beethoven had composed only 5 percent of his 10th symphony. A handful of Beethoven scholars fed the short completed section into an AI that generated thousands of potential versions of the rest of the symphony. The scholars then sifted through the AI-generated material, identified the best parts, and pieced them together to create a complete symphony. To my somewhat limited ear, it sounds just like Beethoven.


New artificial intelligence recycling technology can sort plastics on its own – H2 News – Hydrogen News – Green Hydrogen Report

New recycling technology has been developed using artificial intelligence to help programs sort plastics effectively and affordably, in order to stop recyclable materials from being sent to landfills.

Even though many people in municipal programs carefully sort their waste, much of the plastic they think is being recycled still finds its way to the landfill. Among the biggest problems is that once the trash has been collected, the individual plastics must still be sorted. At massive scale and with cost as a concern, recycling technology has not reached the point where most plastics end up anywhere but in a landfill.

Without quick and easy sorting, processing all the recycled materials becomes difficult, slow, and expensive. It becomes impossible to keep up with the incoming waste, and very costly when much of the sorting must be done by hand. Mixing the wrong plastics means that the remade plastics will be flawed and will not perform as needed, wasting the entire batch as well as the energy and resources required to produce it.

"The recycling process is quite complicated. If you go to the supermarket or for the daily recycling, you need to know how to properly place all the recyclable (items), like bottles or others, into the right bins. You need to know the labels, know the icons," explained Dr. Xu Wang of the University of Technology Sydney's School of Electrical and Data Engineering.

This being the case, Dr. Wang led a team of the university's researchers from the Global Big Data Technologies Centre (GBDTC) in the development of a smart bin capable of automatically sorting the plastics it receives.

The bin uses a spectrum of recycling technologies, including robotics, machine vision, and artificial intelligence.

"This machine can classify different (types) of waste, including glasses, metal cans and plastics," explained Wang. This includes different forms of plastics, such as PET and HDPE.
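Once the vision model emits a label, the sorting step reduces to a label-to-bin dispatch. The labels below follow Wang's examples; the bin names and the fallback are assumptions for illustration, not the GBDTC team's actual software.

```python
# Hypothetical label-to-bin dispatch for the smart bin described above.
# Labels follow Wang's examples; bin names and the fallback are assumptions.

BIN_FOR_LABEL = {
    "glass": "glass bin",
    "metal can": "metal bin",
    "PET": "PET plastics bin",
    "HDPE": "HDPE plastics bin",
}

def route_item(label: str) -> str:
    """Send a classified item to its bin; unknown items go to manual review."""
    return BIN_FOR_LABEL.get(label, "manual review")
```

Keeping PET and HDPE in separate bins matters because, as noted above, mixing the wrong plastics ruins the remade batch.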


University of Washington graduates use artificial intelligence to create new proteins – NBC Right Now

SEATTLE, Wash.-

For over two years, machine learning has been changing protein structure prediction. On Sept. 15, two related research papers described a similar revolution in protein design.

The findings show how machine learning can create protein molecules more accurately and quickly than before.

"With these new software tools, we should be able to find solutions to long-standing challenges in medicine, energy, and technology," said senior author David Baker, professor of biochemistry at the University of Washington School of Medicine.

Machine-learning algorithms, including RoseTTAFold, have been trained to predict the detailed shapes of natural proteins based on their amino acid sequences.

Machine learning is a type of artificial intelligence that allows computers to learn from data without having to be programmed.
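"Learning from data without having to be programmed" can be made concrete with a tiny example: instead of hard-coding the rule y = 2x + 1, we estimate it from example pairs with ordinary least squares. This is a minimal sketch in plain Python to illustrate the idea, unrelated to the protein software itself.

```python
def fit_line(xs: list[float], ys: list[float]) -> tuple[float, float]:
    """Ordinary least squares for a single feature: returns (slope, intercept)."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

# The data below were generated by y = 2x + 1; the "learned" parameters
# recover that rule from the examples alone.
slope, intercept = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
```

The program never contains the rule explicitly; it recovers slope 2 and intercept 1 from the data, which is the essence of the machine-learning approach described above, scaled down.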

A.I. can generate proteins in two ways. One is akin to DALL-E or other A.I. tools that produce an output from a simple prompt. The second is akin to the autocomplete feature found in a search bar.

To speed things up, the team created a new algorithm that designs amino acid sequences. This tool, called ProteinMPNN, produces a sequence in about one second--more than 200 times faster than the previous best software.

The Baker Lab also says that combining the new machine-learning tools could reliably generate new proteins that functioned in the laboratory. Among them were nanoscale rings that could make up parts of custom nanomachines.


Precision health perspectives – UCI News

In February, UCI launched the Institute for Precision Health, a campus-wide, interdisciplinary endeavor that merges UCI's powerhouse health sciences, engineering, machine learning, artificial intelligence, clinical genomics, and data science capabilities. The objective is to identify, create, and deliver the most effective health and wellness strategy for each individual person and, in doing so, confront the linked challenges of health equity and the high cost of care.

IPH will bring a multifaceted, integrated approach to what many call the next great advancement in healthcare. The institute is an ecosystem for collaboration across disciplines.

Dr. Daniel Chow is an assistant professor of radiological sciences and a co-director of UCI's Center for Artificial Intelligence in Diagnostic Medicine. He's been awarded teacher of the year by the Department of Radiology and was recognized by UCI Chancellor Howard Gillman as a 2018 Big Idea Winner for his team's proposal centering on precision health and artificial intelligence. Chow is the A3 (applied analytics and artificial intelligence) lead for UCI's Institute for Precision Health. His team brings solutions to inpatient, ambulatory, and community settings and supports pilot applications. Here, Chow shares why he's an Institute for Precision Health believer, and how data is great but humans working together are still at the crux of advancements in health care.

What most interests you about launching the Institute for Precision Health?

I'm really excited, because I think we are now in a position where, within this generation, we can actualize some big ideas to improve patient care. I'm still a clinician, and I want to figure out how we can deploy AI tools to benefit patients. To me, that should always be the goal.

Explain a little more. I've heard precision medicine described as "the giant leap" for health care. Is that how you see it?

I think we're on the precipice of that. And I feel a lot of the pieces are there. You look at, say, clinical omics, you look at AI, big data--all these terms have been around for a while. I don't think any one of these things is what's going to advance healthcare, but when you integrate all these technologies, and when you integrate with cohesive goals, then I think things can advance. That's exactly what we're doing with IPH.

What do you envision your primary contribution at IPH will be?

All of the groups within IPH have specific focuses. The group I lead focuses on deploying tools and strategies and quantifying the benefits.

Do you mean that you'll be taking tools into clinical settings and figuring out how to get them to work within the hospital or clinic?

That's exactly what it is. And some of the solutions we use will be developed within IPH, and some may be already developed within industry. So, we'll work with a little of both.

You wear a number of hats right now. How quickly do you think IPH will be the thing that really takes over your life?

I feel like, for myself, right now that's kind of the goal. I want to be able to move in that direction, where I dedicate much of my time to IPH.

Is it because you think this is the most important place to put your energy?

Yes. I think growing up this is what I always wanted to do. This is what I dreamed of doing.

What fueled that dream?

What interests me is looking at operational benefits--kind of looking at the downstream effects of tools and strategies. When you start to impact those, then it's not about just touching the life of a person or even a group of people. It's about advancing an entire field and touching the lives of countless people. So that's what excites me.

Do you know what your first project within IPH will be?

We have a few things that we've already been working on. One is an AI tool that will automatically detect strokes. From the initial analysis that we've done, we have shown that it can drastically improve turnaround time--that is, the time between admission and when radiology reports the stroke finding to neurology. However, one thing we're still measuring is whether this actually results in better patient outcomes. The answer right now is that it's kind of mixed.

Do you know why?

The analogy I use is that AI is kind of a cog, and a cog is meant to turn other cogs. In this early generation, people are still treating AIs like wheels, though, and we're trying to fit them into our usual and customary workflows. So that will have to adjust. One of the goals must be not just developing cool technology but also developing ways to leverage it. We have to figure out how to actually move these tools into our workflow. So, with the stroke work, we have the new tool for faster stroke detection, but we have to figure out the degree to which the bottleneck with patient care is detection or perhaps something else.

And you were also part of the team that developed the COVID Vulnerability Index, a tool that doctors use to quickly determine how best to treat each patient?

Yes. With the COVID Vulnerability Index, we went from just the idea to actually having the tool in place and deployed within four months of the pandemic hitting UCI. And part of the COVID tool is that we use the knowledge gained from each patient to best treat the next. This represents a very big shift. In medicine, knowledge has traditionally been generational. Now it's becoming more real time.

An axiom that medical students are taught is that half of what you learn at school is wrong--but you just don't know which half yet. Why? Because historically, if we had a new challenge, medical colleagues would share experiences by maybe writing them up for publication in a peer-reviewed journal and learning from each other that way. But that process takes a really long time. And, of course, that's not what we did with COVID. We pulled all the data in real time, and we learned from patients in real time. So that was our testing ground of sorts. It showed that translational medicine--the bench-to-bedside process--can move much faster in a precision health paradigm.

And do you now feel that tool has legs?

Exactly. There are other issues that we might use the model to build on. A few ideas: hospital readmission, sepsis; there are so many other challenges where we might apply the same formula that we used for the COVID tool. But it wasn't just me or my group that developed it. We collaborated with laboratory medicine, radiology, computer sciences, public health, nursing and many others. To bring it back full circle: just as no single technology is going to solve the huge modern health issues, no one field or specialty will do it alone either. You really have to combine all the different expertise and insights to actually get there. And that's something IPH is doing.

Sometimes when people talk about precision health, they laser focus on the idea of merely getting more data. It sounds like you're acknowledging that the success of precision health and UCI's Institute for Precision Health will also be because it pools so many specialties and so much human expertise?

Precision health really is a team effort; it's bigger than any one person. But, yes, I think historically there's been a lot of focus on specific types of data. Sometimes the problem is that there's almost too much data, though. So I just like to emphasize that we also need to know how to combine all the different types of tools we develop. And we need to know how to best integrate that knowledge within the healthcare setting.

How long before you can confidently say that IPH has improved patient health?

Well, we can already say that because of our work with COVID and stroke detection. Now the task is to find more applications and more uses. What I'm most interested in is frequent, incremental successes. I'm a firm believer that little successes add up to major advancements.

If you want to learn more about supporting this or other activities at UCI, please visit the Brilliant Future website at https://brilliantfuture.uci.edu. Publicly launched on October 4, 2019, the Brilliant Future campaign aims to raise awareness and support for UCI. By engaging 75,000 alumni and garnering $2 billion in philanthropic investment, UCI seeks to reach new heights of excellence in student success, health and wellness, research and more. UCI Health Affairs plays a vital role in the success of the campaign. Learn more by visiting https://brilliantfuture.uci.edu/uci-health-affairs/.

About UCI Institute for Precision Health: Founded in February 2022, the Institute for Precision Health (IPH) is a multifaceted, integrated ecosystem for collaboration that maximizes the collective knowledge of patient data sets and the power of computer algorithms, predictive modeling and AI. IPH marries UCI's powerhouse health sciences, engineering, machine learning, artificial intelligence, clinical genomics and data science capabilities to deliver the most effective health and wellness strategy for each individual person and, in doing so, confronts the linked challenges of health equity and the high cost of care. IPH is part of UCI Health Affairs, and is co-directed by Tom Andriola, vice chancellor for information, technology and data, and Leslie Thompson, Donald Bren Professor of psychiatry & human behavior and neurobiology & behavior. IPH comprises seven areas: SMART (statistics, machine learning-artificial intelligence), A2IR (applied artificial intelligence research), A3 (applied analytics and artificial intelligence), Precision Omics (fosters translation of genomic, proteomic, and metabolomic research findings into clinical applications), Collaboratory for Health & Wellness (provides the ecosystem that fosters collaboration across disciplines through the integration of health-related data sources), Deployable Equity (engages community stakeholders and health-equity groups to create solutions that narrow the disparities gap in the health and wellbeing of underserved and at-risk populations) and Education and Training (brings data-centric education to students and healthcare practitioners so they can practice at the top of their licenses).

Visit link:
Precision health perspectives - UCI News


The Increased Use Of Machine Learning And Artificial Intelligence Is Expected To Fuel The Digital Transformation Market As Per The Business Research…

LONDON, Sept. 14, 2022 (GLOBE NEWSWIRE) -- According to The Business Research Company's research report on the digital transformation market, the increasing adoption of machine learning and artificial intelligence is expected to drive the growth of the digital transformation market going forward. Digital transformation provides traditional businesses with solutions like cloud computing, big data & analytics, data management, and other advanced features such as artificial intelligence and machine learning, which help optimize business operations, reducing operational effort and increasing efficiency. Thus, their usage has increased in various sectors such as healthcare, banking, transportation, and manufacturing, increasing demand in the digital transformation market.

For instance, according to the report published by Cloudmantra, an India-based technology services company, the usage of machine learning in the Indian manufacturing industry has increased manufacturing capacity by up to 20% while reducing material usage by 4% in 2021. It also gives manufacturers the ability to control Overall Equipment Effectiveness (OEE) at the plant level, increasing OEE performance from 65% to 85%. Furthermore, according to the MIT Technology Review Insights report in 2022, approximately 60% of manufacturers are using artificial intelligence to improve daily operations, design products, and plan their future operations. Therefore, the rising adoption of machine learning and AI drives the digital transformation market.

Request for a sample of the global digital transformation market report

The global digital transformation market size is expected to grow from $0.94 trillion in 2021 to $1.17 trillion in 2022 at a compound annual growth rate (CAGR) of 24.7%. The global digital transformation market size is expected to reach $2.64 trillion in 2026 at a CAGR of 22.4%.
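As a sanity check, the report's figures follow from the standard compound annual growth rate formula, future value = present value x (1 + rate)^years. A minimal sketch (the function name is illustrative, not from the report):

```python
def project(value: float, cagr: float, years: int) -> float:
    """Compound a starting value forward at a constant annual growth rate."""
    return value * (1 + cagr) ** years

# Report figures, in trillions of dollars:
# $0.94T (2021) at 24.7% for one year lands near the stated $1.17T (2022),
# and $1.17T (2022) at 22.4% for four years lands near the stated $2.64T (2026).
size_2022 = project(0.94, 0.247, 1)
size_2026 = project(1.17, 0.224, 4)
```

The small gap between the projected 2026 figure and the report's $2.64 trillion is consistent with rounding in the published numbers.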

Technological advancement in digital solutions is gaining popularity among the digital transformation market trends. Major companies operating in the digital transformation market are focused on developing technologically advanced products to strengthen their market position. For instance, in April 2020, Oracle Corporation, a US-based computer technology corporation and software solutions provider, built a new cloud data service called GoldenGate, an Oracle Cloud Infrastructure offering that uses real-time data analytics. Real-time data analysis applies logical and mathematical operations to data as it arrives, which helps in understanding business requirements and implementing decisions instantly. GoldenGate provides clients with highly automated, fully managed cloud services such as database replication, real-time data analysis, and real-time data ingestion to the cloud, making daily business operations easier to run and analyze.

Major players in the digital transformation market are Microsoft Corporation, IBM Corporation, Oracle Corporation, Google Inc., Cognizant, Accenture PLC, Dell EMC, Siemens AG, Hewlett-Packard Company, Adobe Systems Inc., Capgemini, Cognex Corporation, Deloitte, Marlabs Inc., Equinix Inc., PricewaterhouseCoopers, Apple Inc., Broadcom, CA Technologies, KELLTON TECH, International Business Machines Corporation, Hakuna Matata Solutions, ScienceSoft Inc., SumatoSoft, Space-O Technologies, HCL Technologies, and Tibco Software Inc.

The global digital transformation market analysis is segmented by technology into cloud computing, big data and analytics, artificial intelligence (AI), internet of things (IoT), blockchain; by deployment mode into cloud, on-premises; by organization size into large enterprises, small and medium-sized enterprises (SMEs); by end-user into BFSI, healthcare, telecom and IT, automotive, education, retail and consumer goods, media and entertainment, manufacturing, government, others.

North America was the largest region in the digital transformation market in 2021. Asia-Pacific is expected to be the fastest-growing region in the global digital transformation market during the forecast period. The regions covered in the global digital transformation industry outlook are Asia-Pacific, Western Europe, Eastern Europe, North America, South America, the Middle East, and Africa.

Digital Transformation Global Market Report 2022 - Market Size, Trends, And Global Forecast 2022-2026 is one of a series of new reports from The Business Research Company that provide digital transformation market overviews, analyze and forecast market size and growth for the whole market, digital transformation market segments and geographies, digital transformation market trends, digital transformation market drivers, digital transformation market restraints, and digital transformation market leading competitors' revenues, profiles and market shares in over 1,000 industry reports, covering over 2,500 market segments and 60 geographies.

The report also gives an in-depth analysis of the impact of COVID-19 on the market. The reports draw on 150,000 datasets, extensive secondary research, and exclusive insights from interviews with industry leaders. A highly experienced and expert team of analysts and modelers provides market analysis and forecasts. The reports identify top countries and segments for opportunities and strategies based on market trends and leading competitors' approaches.

Not the market you are looking for? Check out some similar market intelligence reports:

Artificial Intelligence Global Market Report 2022 By Offering (Hardware, Software, Services), By Technology (Machine Learning, Natural Language Processing, Context-Aware Computing, Computer Vision, Others (Image Processing, Speech Recognition)), By End-User Industry (Healthcare, Automotive, Agriculture, Retail, Marketing, Telecommunication, Defense, Aerospace, Media & Entertainment) - Market Size, Trends, And Global Forecast 2022-2026

Cloud Orchestration Global Market Report 2022 By Service Type (Cloud Service Automation, Training, Consulting, And Integration, Support And Maintenance), By Deployment Mode (Private, Public, Hybrid), By Organization Size (Small And Medium Enterprises (SMEs), Large Enterprises), By End-User (Healthcare And Life Sciences, Transportation And Logistics, Government And Defense, IT And Telecom, Retail, Manufacturing, Other End-Users) - Market Size, Trends, And Global Forecast 2022-2026

Internet Of Things (IoT) Global Market Report 2022 By Platform (Device Management, Application Management, Network Management), By End Use Industry (BFSI, Retail, Government, Healthcare, Manufacturing, Transportation, IT & Telecom), By Application (Building And Home Automation, Smart Energy And Utilities, Smart Manufacturing, Connected Logistics, Smart Retail, Smart Mobility And Transportation) - Market Size, Trends, And Global Forecast 2022-2026

Interested in knowing more about The Business Research Company?

The Business Research Company is a market intelligence firm that excels in company, market, and consumer research. Located globally, it has specialist consultants in a wide range of industries including manufacturing, healthcare, financial services, chemicals, and technology.

The World's Most Comprehensive Database

The Business Research Company's flagship product, Global Market Model, is a market intelligence platform covering various macroeconomic indicators and metrics across 60 geographies and 27 industries. The Global Market Model covers multi-layered datasets which help its users assess supply-demand gaps.

Original post:
The Increased Use Of Machine Learning And Artificial Intelligence Is Expected To Fuel The Digital Transformation Market As Per The Business Research...


International Students Conclude Their Participation in Global Summit on Artificial Intelligence – Markets Herald

A group of AI graduate students from several prestigious international universities concluded their participation in the second edition of the Global Summit on Artificial Intelligence, which ended yesterday in Riyadh, and visited Masmak Palace in the center of Riyadh to learn about the history of the capital.

The students represented six countries and were joined by several Saudi scholarship students in the same specialization. Their participation came within the knowledge exchange initiative launched by SDAIA, which hosted 19 male and female students of different nationalities, including the US, the UK, India, Jordan, Algeria, South Korea, and Nigeria, who study at prestigious international universities and institutes, including Sorbonne University in Paris; Oxford University, University College London, Durham University, Nottingham University, and Sussex University in the UK; the Massachusetts Institute of Technology in the US; and King's College London.

Through this initiative, SDAIA aimed to attract global capabilities in artificial intelligence and to enhance the role of distinguished youth, based on the Kingdom's Vision 2030 and its aspirations, enabling them to lead the future of artificial intelligence in the Kingdom, the region, and the world.

On this occasion, SDAIA President Dr. Abdullah Al-Ghamdi explained that the knowledge exchange initiative was designed to achieve several benefits, including engaging visiting students in knowledge exchange dialogues to explore opportunities for future cooperation, and introducing them to the Kingdom's efforts in pioneering data and artificial intelligence and the future of the sector, during a journey that Saudi students will lead with their peers from international universities.

He stressed that SDAIA aims through the initiative to build qualitative partnerships that support its efforts in data and artificial intelligence and help attract global capabilities that bring qualitative additions to the Kingdom, adding that the initiative contributes to activating the distinguished role of Saudi youth and engaging them in a real dialogue that develops their leadership spirit and shows their knowledge capabilities. Al-Ghamdi said that this would enhance the Kingdom's position in data and artificial intelligence, noting that the initiative provides the opportunity to exchange knowledge and explore opportunities for future cooperation through a constructive dialogue that brings together Saudi youth and foreign graduate students.

Here is the original post:
International Students Conclude Their Participation in Global Summit on Artificial Intelligence - Markets Herald


IonQ to Participate in IEEE International Conference on Quantum Computing and Engineering – HPCwire

COLLEGE PARK, Md., Sept. 19, 2022 IonQ, an industry leader in quantum computing, today announced its participation in the IEEE International Conference on Quantum Computing and Engineering (QCE22). The weeklong event will take place in Broomfield, Colorado, on September 18-23, 2022, and brings together some of the world's leading quantum researchers, scientists, entrepreneurs, and academics to discuss and explore the latest advancements in the field of quantum computing.

IonQ co-founder and Chief Scientist Chris Monroe will keynote the event on September 19, where he will summarize the distinct advantages of trapped ion quantum computers in both academic and industrial settings, along with their uses in scientific and commercial applications. Fellow co-founder and Chief Technology Officer Jungsang Kim will also be participating in a workshop program on September 20, focused on constructing control systems for trapped ion quantum computers.

Additional IonQ team members will also be joining a number of workshops and panel discussions throughout the week, exploring topics like working with the Microsoft Azure Quantum Platform, the need for low-level programming to deliver quantum advantage, and the key challenges when scaling towards practical quantum computing. Fellow panelists and workshop participants include researchers and executives from Microsoft, IBM, Lawrence Berkeley National Laboratory, and more.

Visit the conference page here to learn more about QCE22, or click here to learn more about IonQ's latest updates to its IonQ Aria system.

About IonQ

IonQ is a leader in quantum computing, with a proven track record of innovation and deployment. IonQ's current generation quantum computer, IonQ Forte, is the latest in a line of cutting-edge systems, including IonQ Aria, a system that boasts industry-leading 23 algorithmic qubits. Along with record performance, IonQ has defined what it believes is the best path forward to scale. IonQ is the only company with its quantum systems available through the cloud on Amazon Braket, Microsoft Azure, and Google Cloud, as well as through direct API access. IonQ was founded in 2015 by Christopher Monroe and Jungsang Kim based on 25 years of pioneering research.

Source: IonQ

Follow this link:
IonQ to Participate in IEEE International Conference on Quantum Computing and Engineering - HPCwire
