Category Archives: Artificial Intelligence

Artificial intelligence is bringing the dead back to 'life' – but should it? – UC Riverside

What if you could talk to a digital facsimile of a deceased loved one? Would you really be talking to them? Would you even want to?

In recent years, technology has been employed to resurrect the dead, mostly in the form of departed celebrities. Carrie Fisher was digitally rendered in order to reprise her role as Princess Leia in the latest "Star Wars" film. Kanye West famously gifted Kim Kardashian a hologram of her late father for her birthday last year. Most recently, and most controversially, artificial intelligence was used to deepfake chef Anthony Bourdain's voice to provide narration in the documentary film "Roadrunner."

In what seems eerily like a "Black Mirror" episode, Microsoft announced earlier this year it had secured a patent for software that could reincarnate people as a chatbot, opening the door to even wider use of AI to bring the dead back to life.

We asked our faculty experts about AI technology in use today, the future of digital reincarnation, and the ethical implications of artificial immortality.

"When we learn about some very sophisticated use of AI ... we tend to extrapolate from that situation that AI is much better than it really is." – Roy-Chowdhury

A: All artificial intelligence uses algorithms that need to be trained on large datasets. If you have lots of text or voice recordings from a person to train the algorithms, it's very doable to create a chatbot that responds similarly to the real person. The challenges arise in unstructured environments, where the program has to respond to situations it hasn't encountered before.

For example, we've probably all had interactions with a customer service chatbot that didn't go as planned. Asking a chatbot to help you change an airline ticket requires the AI to make decisions around several unique conditions. This is usually easy for a person, but a computer may find it difficult. Many of these AI systems are essentially just memorizing routines. They are not gaining a semantic understanding that would allow them to generate entirely novel, yet reasonable, responses.
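The "memorizing routines" point can be seen in a toy sketch (hypothetical patterns and canned replies, not any real product): the bot only handles requests that match phrases it has stored, and falls back on anything novel.

```python
# Toy "memorized routines" chatbot: it matches stored keyword patterns and
# has no semantic understanding of anything outside them.
RESPONSES = {
    "change ticket": "Sure - I can help rebook your flight. What date works?",
    "baggage fee": "Checked bags cost $30 for the first bag.",
}

def reply(user_message: str) -> str:
    text = user_message.lower()
    for pattern, answer in RESPONSES.items():
        if pattern in text:
            return answer
    # Anything outside the memorized routines falls through to a canned reply.
    return "Sorry, I don't understand. Let me connect you to an agent."

print(reply("I need to change ticket to Friday"))          # a seen routine: handled
print(reply("My flight was cancelled and I'm stranded"))   # a novel situation: fails
```

A human agent would handle the second message easily; the routine-matching bot cannot, which is the gap between memorization and semantic understanding described above.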

When we learn about some very sophisticated use of AI to copy a real person, such as in the documentary about Anthony Bourdain, we tend to extrapolate from that situation that AI is much better than it really is. They were only able to do that with Bourdain because there are so many recordings of him in a variety of situations. If you can record data, you can use it to train an AI, and it will behave along the parameters it has learned. But it can't respond to more occasional or unique occurrences. Humans have an understanding of the broader semantics and are able to produce entirely new responses and reactions. We know the semantic machinery is messy.

In the future, we will probably be able to design AI that responds in a human-like way to new situations, but we don't know how long this will take. These debates are happening now in the AI community. There are some who think it will take 50-plus years, and others think we are closer.

"AI is still ineffective at building chatbots that can respond in a meaningful way to open-domain conversations." – Hristidis

A: AI has been very successful in the last few years in problems relating to changing the tone or style of images, videos, or text. For example, we are able to replace the face of a person in a picture or a video, or change the words of a person in a video, or change the voice of an audio recording.

AI has also been somewhat successful in modifying the words in a sentence to change the tone or style of a sentence, for example, to make it more serious or funnier, or to use the vocabulary of a specific person, alive or dead.

A: AI is still ineffective at building chatbots that can respond in a meaningful way to open-domain conversations. For example, voice assistants like Alexa or Siri can only help in a very limited set of tasks, such as playing music or navigating, but fail for unexpected tasks such as "find a free two-hour slot before sunset this weekend in my calendar."

A key challenge is that language is very complex, as there are countless ways to express the same meaning. Further, when a chatbot generates a response, even a single inappropriate word can completely mess up the meaning of a sentence, which is not the case with, say, images, where changing the color of a few pixels may go unnoticed by viewers.

In the future, we will see more progress in modeling the available knowledge and also in language understanding and generation. This will be facilitated by the huge training data generated by voice assistants, which offer a great research advantage to large tech companies over academic institutions.

"We might come to not care very much whether grandma is human or deepfake." – Schwitzgebel

A: I am struck by the possibility of a future in which we might be able to feel more and more like our departed loved ones are really still here, through voice and video generated to sound and look like them. Programs might be designed so that artificial reconstructions of them even say the kinds of things that they, based on past records, would have tended to say. If an artificial intelligence program gains access to large amounts of text and voice and video of the deceased, we might even be able to have conversations with them in which they feel almost like our familiar old friends, with the same quirks and inflections and favorite phrases.

At the same time, the pandemic has launched us into a world in which more and more we are interacting with people by remote video, or at least this is true for white-collar workers. Thus, the gap between the real interactions we have with living people by remote video and interactions with reconstructed versions of the deceased could shrink until the difference is subtle.

If we want, we can draw on text and image and video databases to create simulacra of the deceased: simulacra that speak similarly to how they actually spoke, employing characteristic ideas and turns of phrase, with voice and video to match. With sufficient technological advances, it might become challenging to reliably distinguish simulacra from the originals based on text, audio, and video alone.

Now combine this thought with the first development, a future in which we mostly interact by remote video. Grandma lives in Seattle. You live in Dallas. If she were surreptitiously replaced by Deepfake Grandma, you might hardly know, especially if your interactions are short and any slips can be attributed to the confusions of age.

This is spooky enough, but I want to consider a more radical possibility: the possibility that we might come to not care very much whether grandma is human or deepfake.

"I firmly believe in the empowerment of individual choice." – Maguire

A: As academics, we can only speculate as to the potential risks/benefits since no one has direct clinical experience with AI and we lack any empirical evidence. I firmly believe in the empowerment of individual choice. If a patient of mine were to ever ask my guidance on such, I would outline the above, cite the hypothesized possibilities, and allow my patient to make their informed decision.

Read more:
Artificial intelligence is bringing the dead back to 'life' but should it? - UC Riverside

Combining creative minds with artificial intelligence is in the works – TRT World

Creative industry leaders say while AI can help humans perform their jobs better, it is still far from replacing them.

Artificial intelligence (AI) has been used for decades now to augment human intelligence and prowess. In a recent article, the BBC questions whether it can replace humans in creative endeavours such as copywriting, and comes up with a collaborative solution.

A carphone company was seeking a catchphrase for a Black Friday sale, and all the options human copywriters were coming up with contained the words "Black Friday." Stepping in to save the day was a software company whose technology ran through thousands of options and came up with "The Time is Now."

Phrasee is the brainchild of Parry Malm, a Canadian living in the UK. Malm, who works in marketing, was frustrated that technology to boost human creativity didn't already exist, and set out to create the software in 2015.

Saul Lopes, head of customer marketing at Dixons Carphone, assures the BBC that copywriters will not be replaced by AI any time soon, but that we will see more of the human-AI collaboration in the future. "Combining creative people with AI is the next step for the agencies. It's not AI versus the human, it generates creative thought," he says.

On the other hand, there are people such as Larry Collins, a toll booth operator in San Francisco who lost his job during the pandemic to an automated vehicle pass system established to minimise human contact.

According to Time, which reported on Collins, a Black low-wage worker, in August 2020, jobs like his are disappearing as AI replaces the human touch. "Even before the pandemic, the global consulting company [McKinsey] estimated that automation could displace 132,000 Black workers in the US by 2030," Alana Semuels writes in Time.

Yet white-collar jobs and jobs that require human input are less at risk of disappearing, according to Iain Brown writing for Engineering & Technology. Noting that we're already letting the machines take over, Brown says workers need not fear losing their jobs to AI: "even in an AI-driven future, humans will remain a valuable commodity worth investing in. They will continue to deliver value that machines do not."

Brown talks of the limitations of AI, noting that "Despite the hype, most AIs are designed to be very good at solving a specific problem under very particular parameters. Introduce a variable and the system breaks down or a new model needs to be created."

An article published in the July-August 2018 issue of the Harvard Business Review also seems to back this view. Authors H. James Wilson and Paul R. Daugherty write that "While AI will radically alter how work gets done and who does it, the technology's larger impact will be in complementing and augmenting human capabilities, not replacing them."

Wilson and Daugherty point out that smart machines are helping humans expand their abilities in three ways: they can amplify our cognitive strengths; interact with customers and employees to free us for higher-level tasks; and embody human skills to extend our physical capabilities.

The BBC's example of AI helping write marketing copy illustrates the amplification of our cognitive strengths. For the article, the BBC's Michael Dempsey interviews Rory Sutherland, a veteran copywriter and agency vice-chairman who runs its behavioural science practice.

"AI can't hurt if it generates interesting suggestions," Mr Sutherland admits, "but it's like satnav [satellite navigation] in a car. Great for directions but you don't allow it to drive the car!"

Sutherland tells Dempsey that he doesn't see AI taking over in creative industries, and that if it were to do so, some vital human element would be lost. "As a stimulus, suggesting ideas, it has a great future. As a source of judgement it's dubious."

Source: TRTWorld and agencies

Original post:
Combining creative minds with artificial intelligence is in the works - TRT World

Tesla teases future products using artificial intelligence not related to its electric vehicle fleet – Electrek

In the invite to its AI Day, Tesla is teasing the use of artificial intelligence beyond its electric vehicle fleet.

What do you think they're talking about?

As we have reported over the last week, Tesla is preparing for its upcoming AI Day on August 19.

Over the last few years, Tesla has held events, not really to unveil new products, but to present new technologies it has been working on in certain fields.

For example, it held a Tesla Autonomy Day in 2019 and a Tesla Battery Day last year.

Tesla AI Day is expected to be similar, and CEO Elon Musk said that they will discuss advancements in both AI hardware and software, specifically with the automaker's new Dojo supercomputer and its neural nets.

Now Tesla has sent out invites to the event and confirmed presentations regarding those technologies, but the most interesting part is that Tesla teased an inside look at the company's use of AI in things other than vehicles:

"This invite-only event will feature a keynote by Elon, hardware and software demos from Tesla engineers, test rides in Model S Plaid, and more. Attendees will be among the first to see our latest developments in supercomputing and neural network training. They will also get an inside look at what's next for AI at Tesla beyond our vehicle fleet."

Musk has often hyped Tesla's AI team as one of the best in the world, but it has always been mostly about the automaker's self-driving effort.

The automaker has been known to also use AI in non-consumer products, like its energy software Autobidder, and it has also been using AI extensively in manufacturing.

Gavin Hall, Tesla staff machine learning and controls engineer, describes his work integrating AI in Tesla Gigafactories:

At the Tesla Gigafactory, I develop the supervisory machine learning algorithms used in automated computer vision tasks and the real-time control optimization strategies of factory systems with respect to energy costs, setpoint errors, and equipment downtime.

Our machine vision solutions use deep learning via convolutional neural network variants for classification, detection and segmentation, while our control system cloud architecture combines AI, big data, control theory, and industrial control by leveraging Python to use continuous deep reinforcement learning, model predictive control, state estimation, and recurrent supervised learning models to forecast loads and plan the optimal sequence of control actions sent to PLCs.

That may or may not be what Tesla is hinting at, but we won't know until August 19.

That's interesting. It could be some backend applications, but for some reason, I think it could also be related to new consumer products.

When you think about it, if Tesla can truly solve self-driving, it is safe to assume that some of these computer vision developments would apply to other products.

It's interesting that Tesla is apparently partnering with roboticist Dennis Hong, which could be part of what the automaker is talking about here.

Hong is known for working on humanoid robots, so it would be quite surprising for Tesla to get into that space at this point.

But it wouldn't be shocking to see Tesla working on some kinds of new robots. Maybe we'll finally see the Tesla robot snake charger?

What do you think? Let us know in the comment section below.


Link:
Tesla teases future products using artificial intelligence not related to its electric vehicle fleet - Electrek

Artificial Intelligence in India: 5 Reasons to Make AI More Accessible – Analytics Insight

In the past few years, digital initiatives like making the internet more accessible and boosting IoT, cybersecurity, machine learning, and artificial intelligence in India have been goals of the government. You have probably heard that AI is the future: it is a technology that will dramatically alter human life in very real ways. AI helps people rethink how they integrate information, analyze data, and use the resulting insights to improve decision-making. With AI significantly changing the tech landscape across the globe, deploying it successfully, and ensuring that its advantages reach down to the grassroots level, becomes particularly essential.

Here are 5 reasons to make artificial intelligence more accessible in India:

According to reports, the number of smartphone users in India will reach over 760 million in 2021. And have you ever wondered what makes your smartphone actually smart? Artificial intelligence: from face unlock to digital voice assistants, everything in your mobile phone relies on this technology. Voice assistants such as Siri, Alexa, Google Home, and Cortana use natural language processing and generation driven by AI to return answers to you.

From manufacturing and retail to banking and agriculture, the impact of artificial intelligence is being felt in a wide range of industries. It is now nearly impossible for professionals to provide services using conventional methods alone, especially in sectors like healthcare where human life is at risk. AI is being used to recognize illness faster, which saves time and helps health professionals provide the best treatment.

As machines become more intelligent, legitimate concerns about the effect on human jobs increase. However, while there's no question that automation will displace many jobs, it is expected to create more jobs that value human capabilities like creativity and empathy.

AI will likewise improve our working lives. Journalism is one industry going through an AI transformation, and there are numerous AI tools that help media professionals identify and compose stories.

It used to be that working with AI required costly technology and a large team of in-house data scientists. That is no longer true. Like many other technology solutions, AI is now readily available through a rapidly growing range of as-a-service offerings aimed at organizations of all sizes.

For instance, in 2019, Amazon launched Personalize, an AI-based service that helps organizations provide tailored customer recommendations and search results. Remarkably, Amazon says no AI experience is required to train and deploy the technology.

Without artificial intelligence, it would be impossible to achieve the amazing recent advances seen in areas such as augmented reality, chatbots, cloud computing, facial recognition, autonomous vehicles, and robotics (and that is just to name a few). Consider practically any new groundbreaking innovation or scientific breakthrough, and, invariably, AI has played a part. For instance, thanks to AI, scientists are now able to read and sequence genes rapidly, and this information can be used to determine which drug treatments will be more effective for individual patients.

In conclusion, people should get access to advanced technologies in order to grow and create opportunities, rather than just using them to unlock smartphones or talk to assistants.


Read the original here:
Artificial Intelligence in India: 5 Reasons to Make AI More Accessible - Analytics Insight

Vita Mobile Systems Acquires Artificial Intelligence Company with Location-based Health and Safety Software Platform – GlobeNewswire

IRVINE, CA, Aug. 05, 2021 (GLOBE NEWSWIRE) -- via NewMediaWire -- Vita Mobile Systems, Inc. (OTC PINK: VMSI), a technology company focused on digital imaging in mobile devices, collection and management of big data and development of artificial intelligence, today announced the acquisition of My2Tum through a share exchange agreement. My2Tum brings proprietary Artificial Intelligence (AI) technology that allows users to find up-to-date health and safety information by location, complementing VMSI's own geolocation-based social application platforms.

"The significance of this acquisition is immense, and bringing on My2Tum is a huge step forward for VMSI," stated Sean Guerrero, CEO of Vita Mobile Systems. "The pandemic has made it clear that accurate and timely health and safety information for locations around the world is crucial for both business and personal needs. For example, businesses could leverage this data to better support their employees and deliver to their customers, and individuals would be able to travel anywhere in the world with the confidence and the assurance that is required to make their trip a successful one."

"Pursuing a similar goal of providing the most up-to-date geolocation-based information has made My2Tum a perfect synergistic acquisition for VMSI. We are excited to launch our first My2Tum product, as part of our VMSI portfolio, which is slated to be introduced to the public this fall," continued Guerrero. "This acquisition strengthens both the future of VMSI and its management team. As we incorporate My2Tum into VMSI, we anticipate strategic management changes and plan to announce those changes once the expected appointments take place."

To learn more about VITA please visit the Vita Mobile Systems website at http://www.vitamobilesystems.com.

About Vita Mobile Systems, Inc.(www.vitamobilesystems.com)

Vita Mobile Systems, Inc. (OTC PINK: VMSI) is a technology company focusing on digital imaging in mobile devices, collection of big data and development of artificial intelligence. Vita Mobile Systems is currently finalizing its first geolocation-based social media app, "VITA," for release on both iOS and Android. Comprised of a strong foundation of successful entrepreneurs, Vita Mobile Systems has developed proprietary algorithms and tools which gather, categorize, analyze and augment digital content. Over the years, Vita Mobile Systems has used these proprietary marketing, social media, and data collection tools to generate significant amounts of internet traffic for advertising networks. Vita Mobile Systems aims to create a monumental library of crowdsourced content, a massive catalogue of predictive big data, and a platform for ultra-targeted advertising. The company expects to establish a strong foundation within the multibillion-dollar industry of driving big data and targeted advertising through its process of cataloging, meta-tagging, analyzing, and predicting trends of everyday people in everyday life.

Cautionary Language Concerning Forward-Looking Statements:

This press release contains forward-looking statements. The words or phrases "would be," "will allow," "intends to," "will likely result," "are expected to," "will continue," "is anticipated," "estimate," "project," or similar expressions are intended to identify "forward-looking statements." Actual results could differ materially from those projected by VITA. The public filings, if any, of Vita Mobile Systems, Inc. (OTC Pink: VMSI) may be accessed at http://www.otcmarkets.com. Statements made herein are as of the date of this press release and should not be relied upon as of any subsequent date. VITA cautions readers not to place reliance on such statements. Unless otherwise required by applicable law, VITA does not undertake, and VITA specifically disclaims any obligation, to update any forward-looking statements to reflect occurrences, developments, unanticipated events or circumstances after the date of such statement.

Contact:

Vita Mobile Systems, Inc.

2640 Main St.

Irvine, CA 92614

949-864-6902

info@vitamobilesystems.com

Investor Relations:

949-864-6902

ir@vitamobilesystems.com

Read the original:
Vita Mobile Systems Acquires Artificial Intelligence Company with Location-based Health and Safety Software Platform - GlobeNewswire

Artificial Intelligence May Have Cracked the Code to Creating Low-Priced Works on Canvas – artnet News

What better time for a next-generation version of art to come crashing into the art world than 2021? After all, this is the unprecedented year that saw an explosion of demand for and sales of NFTs, or non-fungible tokens, which are inextricably tied to cryptocurrency and blockchain technology. Specifically, we're now talking about art created by artificial intelligence. Yes, the machines are taking over art too.

In 2018, Christie's sold Portrait of Edmond de Belamy (2018), the first-ever original work of art created using artificial intelligence to come to auction (it sold for $432,500 against a high estimate of $10,000). Inspired by reports of the sale, Ben Kovalis and two like-minded childhood friends from Israel, Eyal Fisher and Guy Haimovitz, launched the Art AI Gallery one year later, in late 2019. It involves collections of curated work made using an algorithm that was created over the course of six months and then refined over the next year and a half.

"The Christie's auction was amazing to us because we are enthusiastic about art but also tech savvy," Kovalis told Artnet News in a phone interview from London. He and his friends were stunned, first by the fact that AI could create art and then by the fact that it could garner the type of price it did, he said.

"We were enthusiastic. We were already speaking about opening a startup company together, which is the dream of every Israeli guy between the ages of 20 and 30," he said with a laugh.

Last month, the group introduced Artifly, which takes its algorithm development work a step further. Customers scroll through a selection of artwork and click the designs they like, in order to show Artifly their style. Then, when the user clicks a button reading "Make My Art," Artifly (the name of which is meant to evoke the phrase "Art on the Fly") becomes familiar with their selections and, near instantly, in about a minute, creates a brand-new personalized artwork. The user then has the option, though not the obligation, to buy a bespoke piece of AI art.

Obvious Art's min_G max_D E_x[log D(x)] + E_z[log(1 − D(G(z)))], Portrait of Edmond de Belamy, Generative Adversarial Network print on canvas (2018). (The collective signs its works with the GAN objective function.)

At the time of the 2018 auction of the Obvious portrait, Fisher was working on image processing and analysis for his PhD in mathematical genomics at Cambridge University. He was inspired by the headlines about the Christie's sale to start working on algorithms.

Bold (2021). Image courtesy of Art AI Gallery and Artifly.

He thought that he could create an algorithm that would make "way more beautiful and very engaging art," says Kovalis. "The idea is not to create a single one and sell it for $100,000 but to create thousands and tens of thousands of them and still keep them one of a kind, so anyone can enjoy them."

Kovalis has an extensive background in e-commerce, having formerly been a VP in the high-tech sector managing large international operations.

The company founders say that the most common question they face is, "So you want to replace human artists with robots?" Kovalis has a readymade response: "Definitely not. First of all because we don't think that it is possible to replace artists. This is simply something that enhances art." He also emphasizes just how much human effort is still required for the process: "You need a lot of sweat, and tears and human involvement to make an AI algorithm that creates something beautiful."

Selection of four potential artworks generated by Artifly.

He compares the use of AI to musicians starting to use synthesizers in the 1980s to create a new type of music. "Not everyone liked the synthesizer. Many did, many didn't, but it developed into something that helped create pop music. Today, music is the same music that we know and love, it just has some technology in it. This is the same thing that we're doing with art."

As for cost, the prices of the works hover around a few hundred dollars at most. It remains to be seen whether a secondary or resale market develops based on the individual certificates of authenticity that can accompany such transactions. And it also remains to be seen whether or not values begin to creep up as they do over time for works sold by galleries and auction houses.

Screenshot of a custom artwork created by Artifly.

For many people, Kovalis says, there is a learning curve when it comes to becoming comfortable with AI-created art. "People can be a bit scared at first. They know about self-driving cars, but seeing an AI that creates art is the wild frontier."

More here:
Artificial Intelligence May Have Cracked the Code to Creating Low-Priced Works on Canvas - artnet News

Artificial intelligence and the structure of the universe – Las Cruces Sun-News

Bryson Stemock| Star News

We've all seen artificial intelligence in sci-fi movies and television. It seems the machines are getting smarter every day! But how can we harness artificial intelligence to help us learn about our universe, and possibly even extraterrestrial life?

Machine learning is a subfield of artificial intelligence that describes how a computer program can take information, identify patterns, and learn to identify the important aspects lurking in the data. One example is a neural network ("neural" because its learning is patterned after the neurons in the human brain). If we give a neural network millions of images of animals and an answer key of which animal is which, the neural network will soon learn to identify animals in pictures. Now give it millions of different pictures of these same animals and it will identify them all for you in a second. Imagine if we unleash that lightning-fast precision learning on the study of the universe.

So how do we teach a machine if we don't have millions of animal pictures on hand? Well, how might we teach a child? If you didn't have a picture book of animals to teach your child, you might draw your own. This is one of the most commonly used solutions in machine learning: if the training data don't exist, we simulate them. In order to properly teach a machine about a topic, we'll need the most detailed, accurate simulations possible, which means that we need to pool our complete knowledge on the subject. Otherwise, it can be easily tricked by lookalikes, such as in the famous "dog or food?" internet meme. Imagine blueberry muffins masquerading as chihuahuas!
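The simulate-then-train idea can be sketched in miniature (a toy perceptron on made-up data, not an actual astronomy pipeline): we generate labeled examples from a rule we know, train on them, and then evaluate on fresh simulated examples.

```python
import random

random.seed(0)

def simulate(n):
    """Simulated training data: two features per example; label is 1 when
    their sum exceeds 1 (the 'rule' standing in for real-world knowledge)."""
    data = []
    for _ in range(n):
        x1, x2 = random.random(), random.random()
        data.append(((x1, x2), 1 if x1 + x2 > 1.0 else 0))
    return data

# Tiny perceptron: weights are nudged toward every mislabeled example.
w1 = w2 = b = 0.0
for (x1, x2), label in simulate(2000) * 5:  # several passes over the data
    pred = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
    err = label - pred
    w1 += 0.1 * err * x1
    w2 += 0.1 * err * x2
    b += 0.1 * err

# Evaluate on fresh simulated examples the model has never seen.
test = simulate(500)
accuracy = sum(
    (1 if w1 * x1 + w2 * x2 + b > 0 else 0) == y for (x1, x2), y in test
) / len(test)
print(f"accuracy on fresh simulated data: {accuracy:.2f}")
```

The better the simulation reflects reality, the better the trained model transfers; a simulation that misses lookalike cases produces exactly the muffin-versus-chihuahua failures described above.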

Now consider space: full of stuff like stars and galaxies. Galaxies, including the Milky Way, reside in large halos of gas that astronomers call the circumgalactic medium. Beyond the circumgalactic medium, filaments of gas stretch between galaxies. This gas comprises what we call the intergalactic medium. Together, circumgalactic and intergalactic gas form a large overall structure that astronomers call the cosmic web. The cosmic web, massive filaments of galaxies separated by giant voids, is the structure of the universe.

A powerful technique to reveal the structure of the cosmic web is to observe the light from very distant and very bright objects. As this light travels billions of light years through the universe on the way to our telescopes, it passes through the gas that makes up the cosmic web; the gas absorbs some of the light, leaving a distinct signature that we can measure. The signature contains information about the chemical composition of the gas in the cosmic web, how much gas there is, its temperature, and more. By analyzing many of these systems, we can start to piece together information about the structure of the universe, and how it has changed throughout its lifetime.
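One standard way to quantify such an absorption signature is the line's "equivalent width," which integrates how much light the gas removed from the background source. A minimal sketch (assumed toy numbers, not real quasar data):

```python
import math

def flux(wavelength, center=5000.0, depth=0.6, width=2.0):
    """Normalized spectrum: a flat continuum of 1.0 minus a Gaussian
    absorption line left by intervening gas (toy parameters)."""
    return 1.0 - depth * math.exp(-((wavelength - center) ** 2) / (2 * width ** 2))

# Integrate the missing light across the line to get the equivalent width,
# a proxy for how much absorbing gas lies along the line of sight.
step = 0.1
grid = [4980 + i * step for i in range(401)]  # 4980-5020 Angstroms
equivalent_width = sum((1.0 - flux(w)) * step for w in grid)
print(f"equivalent width: {equivalent_width:.2f} Angstroms")
```

Real analyses fit many overlapping lines at once and extract temperatures and chemical abundances as well, which is why each system takes an expert so long to do by hand.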

As powerful as this technique is, it is equally complex and time-consuming. In fact, a trained expert might analyze only one or two systems per week. For reference, the Astronomy Department at NMSU has around 3,500 systems on file, which means that it will take 35 to 70 years to analyze all of this data. Furthermore, our data archives will only grow as the next generation of telescopes comes online. How can we possibly handle all of this data? Enter machine learning.
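The arithmetic behind that estimate is straightforward to check:

```python
# Back-of-envelope check of the analysis-time estimate in the text:
# 3,500 systems, analyzed at a rate of one to two systems per week.
systems = 3500
weeks_per_year = 52

years_at_two_per_week = systems / 2 / weeks_per_year  # faster expert pace
years_at_one_per_week = systems / 1 / weeks_per_year  # slower expert pace

print(f"{years_at_two_per_week:.0f} to {years_at_one_per_week:.0f} years")
```

That works out to roughly 34 to 67 years, consistent with the 35-to-70-year figure quoted above.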

Using machine learning, it may be possible to leapfrog the system analysis entirely. Instead of feeding a machine pictures of animals to identify, we can feed it absorption systems and receive our scientific answers directly from the machine. This work is still in the beginning stages of development and will take many years to pursue, but the potential of applying machine learning to quasar absorption line spectroscopy is incredible. With our current models, we can train a machine on nearly one million simulated systems in just over a day, at which point the machine can analyze 100,000 systems in just a few hours!

As our telescope technology improves and we collect more data in greater detail, astronomers face the daunting task of finding new ways to analyze these data en masse. Machine learning holds significant promise in this effort and is justifiably being explored in astronomy as a new tool available in our endeavor to understand the cosmos.

Bryson Stemock is a PhD student in astronomy at New Mexico State University. He can be reached at bstemock@nmsu.edu.

More here:
Artificial intelligence and the structure of the universe - Las Cruces Sun-News

Recent Developments In Artificial Intelligence And IP Law: South Africa Grants World’s First Patent For AI-Created Invention – Intellectual Property -…

05 August 2021

Winstead PC

On July 28, the Companies and Intellectual Property Commission of South Africa granted the world's first patent on an invention created by an artificial intelligence (AI) inventor. This development marks an important milestone in what will certainly be a significant battle for legal recognition of such inventions in the United States and other countries.

Device for Autonomous Bootstrapping of Unified Sentience, aka DABUS, is an AI developed by Missouri physicist Stephen Thaler. The recently issued patent is directed to a food container based on fractal geometry. The patent application was filed on September 17, 2019 under the Patent Cooperation Treaty.[1] Under the heading of inventor, the application identifies DABUS and states, "The invention was autonomously generated by an artificial intelligence."[2]

It is important to note that patent applications in South Africa are not subject to a formalized patent-examination procedure of the kind found in the U.S., Canada, Europe, and many other jurisdictions. Indeed, this aspect of the South African patent system appears to have been a motivating factor for Thaler to seek patent protection in the country. Thus, it should not be surprising that this patent was granted, and the ultimate legal significance of the grant remains to be seen. That said, Thaler and his legal team have so far attempted in vain to have AI-invented technologies recognized in other countries, including the United States.

In Europe, the Board of Appeal (BOA) of the European Patent Office (EPO) handed down a pair of preliminary communications stating that an inventor on a patent application must have legal capacity. The BOA's communications were responsive to appeals of the EPO's rejection of the DABUS patent applications. An oral hearing before the BOA is scheduled for December 2021.

In the United States, Thaler filed U.S. Patent Application Nos. 16/524,350 and 16/524,532 on July 29, 2019.[3] Along with the patent applications, an Application Data Sheet (ADS) was filed in each case identifying a single inventor with the given name "DABUS" and the family name "Invention generated by artificial intelligence."[4] The ADSs also identify Stephen Thaler as the Applicant and Assignee. In both cases, the United States Patent and Trademark Office (USPTO) responded by issuing a Notice to File Missing Parts of Nonprovisional Patent Application (the "Notice"), asserting that the ADS did not identify each inventor by his legal name.[5] A subsequent petition requesting supervisory review of the Notice and asking that it be vacated was filed by Thaler and dismissed by the USPTO.[6] Thaler then appealed to the U.S. District Court for the Eastern District of Virginia seeking, among other things, a reversal of the USPTO's decision on the petition.

In his complaint to the district court, Thaler argues that no natural person meets the criteria for inventorship under the current statutory and regulatory scheme.[7] Thus, if no corrective action is taken, Thaler asserts that future AI-generated patents would enter the public domain once disclosed.[8] Additionally, Thaler argues that allowing patents on AI-generated inventions would be consistent with the Constitution and the Patent Act, would incentivize the further development of inventive machines, and that failure to do so allows individuals to take credit for work they have not done.[9] Finally, Thaler argues that the notion of conception does not necessarily exclude artificial inventors.[10] Thaler seeks an order compelling the USPTO to reinstate the DABUS U.S. patent applications, a declaration that a patent application should not be rejected on the grounds that no natural person is identified as an inventor, and a declaration that a patent application should properly identify an AI in cases where the AI has met the inventorship criteria.[11]

Oral arguments were heard in the spring of 2021 and, so far, no order has been issued. The outcome of this case will not only impact the DABUS U.S. patent applications but could also have drastic implications for other areas of patent law, such as our understanding of conception and obviousness.

Footnotes

1. Patent Application No. PCT/IB2019/057809 (filed Sept. 17, 2019).

2. Id. at [72].

3. Complaint for Declaratory and Injunctive Relief at 3, Stephen Thaler v. Iancu, No. 1:20-cv-00903 (E.D. Va. Aug. 6, 2020).

4. Id. at 4.

5. Id. at 5.

6. Id. at 5.

7. Id. at 7.

8. Id.

9. Id. at 8-9.

10. Id. at 12.

11. Id. at 16-17.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.

Read the original:
Recent Developments In Artificial Intelligence And IP Law: South Africa Grants World's First Patent For AI-Created Invention - Intellectual Property -...

Rapid Exclusion of COVID Infection With the Artificial Intelligence Electrocardiogram – DocWire News

Mayo Clin Proc. 2021 Aug;96(8):2081-2094. doi: 10.1016/j.mayocp.2021.05.027.

ABSTRACT

OBJECTIVE: To rapidly exclude severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection using artificial intelligence applied to the electrocardiogram (ECG).

METHODS: A global, volunteer consortium from 4 continents identified patients with ECGs obtained around the time of polymerase chain reaction-confirmed COVID-19 diagnosis and age- and sex-matched controls from the same sites. Clinical characteristics, polymerase chain reaction results, and raw electrocardiographic data were collected. A convolutional neural network was trained using 26,153 ECGs (33.2% COVID positive), validated with 3826 ECGs (33.3% positive), and tested on 7870 ECGs not included in other sets (32.7% positive). Performance under different prevalence values was tested by adding control ECGs from a single high-volume site.

RESULTS: The area under the curve for detection of acute COVID-19 infection in the test group was 0.767 (95% CI, 0.756 to 0.778; sensitivity, 98%; specificity, 10%; positive predictive value, 37%; negative predictive value, 91%). To more accurately reflect a real-world population, 50,905 normal controls were added to adjust the COVID prevalence to approximately 5% (2657/58,555), resulting in an area under the curve of 0.780 (95% CI, 0.771 to 0.790) with a specificity of 12.1% and a negative predictive value of 99.2%.
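The reported negative predictive value follows arithmetically from the sensitivity, specificity, and adjusted prevalence given in the abstract (NPV is prevalence-dependent, which is why the authors added normal controls before quoting it):

```python
# Reproducing the reported NPV from figures in the abstract above.
sens = 0.98          # sensitivity
spec = 0.121         # specificity after adding normal controls
prev = 2657 / 58555  # adjusted COVID prevalence, roughly 4.5%

tn = spec * (1 - prev)   # true-negative fraction of the population
fn = (1 - sens) * prev   # false-negative fraction of the population
npv = tn / (tn + fn)

print(f"NPV = {npv:.1%}")  # -> NPV = 99.2%, matching the reported value
```

This illustrates the screening-test logic: even with low specificity, a highly sensitive test at low prevalence yields a high NPV, so a negative result makes infection very unlikely.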

CONCLUSION: Infection with SARS-CoV-2 results in electrocardiographic changes that permit the artificial intelligence-enhanced ECG to be used as a rapid screening test with a high negative predictive value (99.2%). This may permit the development of electrocardiography-based tools to rapidly screen individuals for pandemic control.

PMID:34353468 | DOI:10.1016/j.mayocp.2021.05.027

Here is the original post:
Rapid Exclusion of COVID Infection With the Artificial Intelligence Electrocardiogram - DocWire News

AI Special: Artificial Intelligence Will Affect Everyone. Here’s What You Need To Know – Forbes India

Illustration: Chaitanya Dinesh Surpur

Artificial intelligence (AI) is fast becoming a topic that is relevant to everyone today and, therefore, a subject that everyone ought to learn at least the rudiments of, say experts. From the humble milkman delivering packets of milk to households in the morning to the highest lawmakers and biggest industrialists, AI will increasingly touch everyone.

"A lot of people look at AI as a vertical that calls for experts to develop," says Amit Anand, founding partner at Jungle Ventures, a VC firm in Singapore that has invested in several tech startups in India. However, both in his own mind and as an advisor to the Singapore government on the ethical use of AI, "we have taken a view that AI is going to affect everybody, and hence everyone should be knowledgeable and have a certain level of understanding of AI."

The government has also complemented this with education at the grassroots level, Anand says, with centres of excellence and so on. The common man should know what happens when an AI programme takes over his loan processing, for example. "How do you get the consumers ready for that wave, because it's coming," he says.

"There has been an explosion of use cases that take advantage of AI across industries," says Sumit Sarawgi, managing director and senior partner at Boston Consulting Group. In parallel, there has been an explosion of data that large companies and their end-consumers are generating, he adds.

This now makes it even more urgent that organisations around the world proactively embrace ways of using AI in a responsible manner. While AI can make for better quality of services, improve customer experience, and boost the financial performance of companies, the need to ensure its responsible use has also increased.

In Europe, for example, the European Union has published proposals for rules on AI, which include banning certain uses of AI, heavily regulating high-risk uses and lightly regulating less risky AI systems. In India, Niti Aayog, a government-backed think tank, released a discussion paper towards a national strategy for AI three years ago. Subsequently, in January 2020, a follow-on paper was also released on developing AI-specific technology infrastructure.

In the past, India didn't capture its share of benefits from technological advancements, such as semiconductor manufacturing. Today, there is recognition that the country can't afford to miss the AI bus.

(This story appears in the 13 August, 2021 issue of Forbes India. You can buy our tablet version from Magzter.com. To visit our Archives, click here.)

Originally posted here:
AI Special: Artificial Intelligence Will Affect Everyone. Here's What You Need To Know - Forbes India