
Making AI Sing: An Interview With Verphoria On The Use Of Artificial Intelligence Within The Music Industry – Forbes

Verphoria, Founder and CEO of Hierarchy Music

In today's music industry, the line between digital and analogue is almost impossible to draw. At the most basic level, the majority of today's music is crafted using highly intelligent software. At the cutting edge of AI and the music industry, however, innovators are continuously pushing the boundaries of human/machine collaboration in musical creation as well as in business.

One such innovator is Veronica Serjilus, known professionally as Verphoria, an American singer, record producer, songwriter, entrepreneur, and the Founder and CEO of Hierarchy Music. Hierarchy Music is a global music company that connects musicians with Grammy Award-winning, multi-platinum music services.

"At the crux of Hierarchy Music's operations are data, AI, and back-end exposure, which allow us to bring exposure to new artists, or existing artists and their brands, utilizing both Hierarchy Music's and Hierarchy Media's back-end network."

I spoke with Verphoria about her background as a musician as well as her perspective on the future of AI in music.

How did you get your start in the music industry?

I started singing at the age of four and started record producing at the age of 10.

At the age of 19 I was discovered by Aton Ben Horin and Ethan Curtis, the co-owners of the Grammy Award-winning, multi-platinum Plush Recording Studios. At the age of 22, I was invited to record at Paramount Recording Studios and Neighborhood Watche by renowned engineer/mixer Andrew "Drew" Chavez, where I continued to sharpen my skills in music.

My brand Verphoria gained popularity for music on Instagram and other social media platforms, which led to appearances at a number of red-carpet events, such as those hosted by Maxim Magazine and Sports Illustrated. I gained the attention of celebrity director Chris Applebaum (who directed Rihanna's "Umbrella" and has worked with Britney Spears, Kim Kardashian, Usher, Selena Gomez, Miley Cyrus, Demi Lovato, and Paris Hilton), who will be directing my music videos.

Who are your musical influences?

The biggest influences in my life are Michael Jackson, Rihanna, Britney Spears, Mariah Carey, Shakira, Wolfgang Mozart, and Beyoncé.

Do you use data and AI in your music or in your broader career?

To create my compositions, I use a digital audio workstation (DAW; an electronic device or application used for recording, editing and producing audio files) called Ableton Live, together with an AI plugin called Magenta Studio that lets me experiment with open-source machine learning tools.

This AI grants me the ability to create learning models for musical melodies, patterns, and rhythms by using a mathematical model.
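Magenta Studio's tools are built on neural networks, but the basic idea of a mathematical model of melodic patterns can be sketched with something far simpler. The following is a hypothetical, minimal stand-in (not Magenta's actual method): a first-order Markov chain that learns note-to-note transition counts from an example melody and samples a new phrase.

```python
# Hypothetical stand-in for a learned melody model (NOT Magenta's approach):
# a first-order Markov chain over MIDI note numbers.
import random
from collections import defaultdict

def learn_transitions(melody):
    """Count how often each note is followed by each other note."""
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(melody, melody[1:]):
        counts[a][b] += 1
    return {note: dict(nexts) for note, nexts in counts.items()}

def sample_melody(transitions, start, length, rng=None):
    """Sample a new melody by walking the learned transition table."""
    rng = rng or random.Random(0)
    melody = [start]
    for _ in range(length - 1):
        nexts = transitions.get(melody[-1])
        if not nexts:
            break  # dead end: this note never had a successor in training
        notes, weights = zip(*nexts.items())
        melody.append(rng.choices(notes, weights=weights)[0])
    return melody

# A short C-major phrase as training data (MIDI note numbers, invented example)
phrase = [60, 62, 64, 62, 60, 64, 65, 67, 65, 64, 62, 60]
model = learn_transitions(phrase)
print(sample_melody(model, start=60, length=8))
```

Real systems such as Magenta replace the transition table with a neural network, but the workflow (learn patterns from example music, then sample new material) is the same.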


What are your thoughts on the recent quote from artist Grimes in which she states, "Once there's actually AGI (Artificial General Intelligence), they're gonna be so much better at making art than us"?

I am going to disagree with that statement. AGI can be used to speed up the production of music, however, it cannot replace the emotion that comes from music produced by a human, nor can it recapitulate and evoke the emotional connection that musicians possess in the creation of their musical compositions.

Making good art is much more than following an algorithm; it's the emotional aspect that makes it touch people.

How do you think AI and data are shaping the industry as a whole?

AI will definitely become a bigger and bigger part of the music industry as it will in every other industry. It is not yet perfect, and it may not ever be perfect on its own, but the use of AI helps to streamline many of the more laborious processes in music production.

Whether this is a good or bad thing is up for debate.

In my personal opinion it is best used as a collaboration tool, not something to make a whole record without the touch of a human. This article brings up a lot of interesting questions and concerns that we will have to deal with in the near future. Now is an exciting time to be in the music industry as we grapple with these problems.

How has data and AI helped you build your career?

AI and back-end exposure have been instrumental in growing my personal brand, Verphoria. Hierarchy Music's and Hierarchy Media's data AI helped to grow my audience significantly by connecting my existing network to different network niches.

This helped in two main ways: it increased my exposure and helped me understand my audience's behavior.

The data gleaned from this process was invaluable in growing my brand relatively quickly compared to traditional methods.

What are issues in the music industry you think technology could help solve?

I believe a cloud-based DAW should be created, with unlimited storage, so that the music being produced is continuously saved and not lost if the computer or hard drive is stolen.

How has technology made the business of being a musician easier? How has it made it more difficult?

The best thing about technology is that it has made becoming a musician more accessible to average people. With enough drive and the will to learn anyone can become a world-class musician. It has also made the technical aspects of making music easier. For example, we can make sure every note, melody, or rhythm is pitched and quantized correctly so there are no mistakes or flaws in the notes.
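The quantization step described above can be sketched in a few lines: each recorded note-start time (in beats) is snapped to the nearest position on a rhythmic grid. A 16th-note grid of 0.25 beats is assumed here.

```python
# Snap recorded note start times (in beats) to the nearest grid position,
# as a DAW's quantize function does. A grid of 0.25 beats = 16th notes in 4/4.
def quantize(times, grid=0.25):
    return [round(t / grid) * grid for t in times]

recorded = [0.02, 0.98, 1.51, 2.26, 3.74]  # a slightly loose performance
print(quantize(recorded))  # -> [0.0, 1.0, 1.5, 2.25, 3.75]
```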

As for how it has made things more difficult? That is harder for me to answer.

Technology has been constantly evolving throughout my life, so to me it is second nature and is definitely not a problem; but for those not as comfortable with the ever-changing nature of tech, that can pose some difficulties.


The sense and nonsense of Artificial Intelligence in the greenhouse

AI is a term that is used frequently in the horticultural industry. Because of this, one sometimes gets the impression that AI is going to solve all future problems in the industry. Think, for example, of the shortage of employees and specifically trained growers. Will AI then ensure that all work can be taken over by robots in the future? A dream for some, but a horror scenario for many. Is AI going to replace people? "The answer to that, in our opinion, is no," says Ton van Dijk. In this article, he explains how AI can contribute to horticulture. In his view, AI helps to make better decisions, but it is certainly not going to replace a grower or crop advisor. AI does give the grower and crop advisor the possibility to control larger areas from a distance. Expert knowledge combined with artificial intelligence is the golden combination. But more about that later.

So how exactly does AI work? It is often thought that pooling as much data as possible from different growers is the recipe for developing an algorithm, one that might make automatic cultivation possible. However, this is not the right way. Firstly, because a grower's data always remains the property of the grower and cannot be used in this way. Secondly, because using data from the past to develop algorithms and predictions for the future does not ensure optimization: in that situation, one carries any mistakes from the past into the development. AI makes it possible to perform tasks faster and sometimes better than people, but only if the algorithm is built properly.

So what about horticulture? Over the past 50 years, much scientific research has been done on plant physiology and on the physics in and around a greenhouse. As a result of this research, it is well understood how a plant grows and how to make the plant as comfortable as possible. Optimal conditions ultimately ensure a high-quality crop and a higher yield. Growers and cultivation advisors use their own experiences and calculations for this, which is also called Expert Intelligence (EI). Systems can already be set up to issue alerts to the grower when needed, which lets growers focus on more strategic matters. Visualizing and analyzing the grower's data contributes to this; these results make it possible to cultivate in a data-driven way. But it can be even more extensive.

Combining outcomes from EI with other external data, such as weather forecasts, makes it possible to create an optimal situation for the plant. For example, a well-constructed machine learning model can predict when plant stress may occur. A grower can use this information to adjust the cultivation strategy and prevent that stress from actually occurring. In this way, the combination of data, EI, and AI provides predictive insight to the grower, giving the grower the possibility to create an even more stable and optimal crop.
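As a toy illustration of the kind of model described here (the readings, features, and labels below are invented for illustration, not taken from the article), a small logistic-regression classifier can be fit to greenhouse measurements and used to flag conditions under which plant stress is likely:

```python
# Illustrative sketch only: a toy logistic-regression model that flags likely
# plant stress from greenhouse readings. The synthetic training data and the
# two features (degrees above optimum, hours of humidity deficit) are assumptions.
import math

def train_logistic(data, labels, lr=0.1, epochs=500):
    """Fit weights and bias by stochastic gradient descent on cross-entropy."""
    w = [0.0] * len(data[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(data, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # predicted stress probability
            err = p - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict_stress(w, b, x):
    """Return the model's estimated probability of plant stress."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

# Synthetic examples: (temperature above optimum in degrees C, hours of humidity deficit)
readings = [(0.5, 0.0), (1.0, 0.5), (0.8, 0.2), (4.0, 3.0), (5.5, 2.5), (4.5, 4.0)]
stressed = [0, 0, 0, 1, 1, 1]

w, b = train_logistic(readings, stressed)
print(predict_stress(w, b, (0.7, 0.1)))  # mild conditions: low stress risk
print(predict_stress(w, b, (5.0, 3.5)))  # harsh conditions: high stress risk
```

A production system would of course use richer sensor data and validated models, but the principle is the same: learn from labeled past conditions, then warn the grower before stress occurs.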

There are many more possibilities with AI. Think of automatic image recognition using cameras placed in the greenhouse, which partner Gearbox is already making possible. Another collaboration is with HortiKey and their Plantalyzer, a robot that rides along the pipe rail and takes pictures of the crop. The advanced AI in this robot is able to recognize the number of fruits or flowers in the path and to analyze growth. This recognition makes it possible to produce more accurate yield predictions and visualize them in the MyLetsGrow dashboard. A grower can use this data to work in a more targeted way, for example by timing the sales of his product correctly and selling at higher margins.

So is AI now taking over from humans? No. At all times, it is important that the combination of computer and human remains: a combination of Expert Intelligence and Artificial Intelligence. Humans determine the strategy at the start of the process based on the commercial requirements for that year, and humans must always be able to intervene in case of calamities. AI therefore certainly does not make people superfluous, but it does enable them to manage and optimize larger areas without being too preoccupied with peripheral issues.



The Not-So-Hidden FTC Guidance On Organizational Use Of Artificial Intelligence (AI), From Data Gathering Through Model Audits – Technology – United…

Our last AI post on this blog, the New (if Decidedly Not 'Final') Frontier of Artificial Intelligence Regulation, touched on both the Federal Trade Commission's (FTC) April 19, 2021, AI guidance and the European Commission's proposed AI Regulation. The FTC's 2021 guidance referenced, in large part, the FTC's April 2020 post "Using Artificial Intelligence and Algorithms." The recent FTC guidance also relied on older FTC work on AI, including a January 2016 report, "Big Data: A Tool for Inclusion or Exclusion?," which in turn followed a September 15, 2014, workshop on the same topic. The Big Data workshop addressed data modeling, data mining and analytics, and gave us a prospective look at what would become an FTC strategy on AI.

The FTC's guidance begins with the data, and the 2016 guidance on big data and subsequent AI development addresses this most directly. The 2020 guidance then highlights important principles such as transparency, explainability, fairness, accuracy and accountability for organizations to consider. And the 2021 guidance elaborates on how consent, or opt-in, mechanisms work when an organization is gathering the data used for model development.

Taken together, the three sets of FTC guidance – the 2021, 2020, and 2016 guidance – provide insight into the FTC's approach to organizational use of AI, which spans a vast portion of the data life cycle, including the creation, refinement, use and back-end auditing of AI. As a whole, the various pieces of FTC guidance also provide a multistep process for what the FTC appears to view as responsible AI use. In this post, we summarize our takeaways from the FTC's AI guidance across the data life cycle to provide a practical approach to responsible AI deployment.

Evaluation of a data set should assess the quality of the data (including accuracy, completeness and representativeness) – and if the data set is missing certain population data, the organization must take appropriate steps to address and remedy that issue (2016).

An organization must honor promises made to consumers and provide consumers with substantive information about the organization's data practices when gathering information for AI purposes (2016). Any related opt-in mechanisms for such data gathering must operate as disclosed to consumers (2021).

An organization should recognize the data compilation step as a "descriptive activity," which the FTC defines as a process aimed at uncovering and summarizing "patterns or features that exist in data sets" – a reference to data mining scholarship (2016) (note that the FTC's referenced materials have since been redirected).

Compilation efforts should be organized around a life cycle model that provides for compilation and consolidation before moving on to data mining, analytics and use (2016).

An organization must recognize that there may be uncorrected biases in underlying consumer data that will surface in a compilation; therefore, an organization should review data sets to ensure hidden biases are not creating unintended discriminatory impacts (2016).

An organization should maintain reasonable security over consumer data (2016).

If data are collected from individuals in a deceitful or otherwise inappropriate manner, the organization may need to delete the data (2021).

An organization should recognize the model and AI application selection step as a predictive activity, where an organization is using "statistical models to generate new data" – a reference to predictive analytics scholarship (2016).

An organization must determine if a proposed data model or application properly accounts for biases (2016). Where there are shortcomings in the data model, the model's use must be limited accordingly (2021).

Organizations that build AI models may "not sell their big data analytics products to customers if they know or have reason to know that those customers will use the products for fraudulent or discriminatory purposes." An organization must, therefore, evaluate potential limitations on the provision or use of AI applications to ensure there is a "permissible purpose" for the use of the application (2016).

Finally, as a general rule, the FTC asserts that under the FTC Act, a practice is patently unfair if it causes more harm than good (2021).

Organizations must design models to account for data gaps (2021).

Organizations must consider whether their reliance on particular AI models raises ethical or fairness concerns (2016).

Organizations must consider the end uses of the models and cannot create, market or sell "insights" used for fraudulent or discriminatory purposes (2016).

Organizations must test the algorithm before use (2021). This testing should include an evaluation of AI outcomes (2020).

Organizations must consider prediction accuracy when using "big data" (2016).

Model evaluation must focus on both inputs and outcomes, and AI models may not discriminate against a protected class (2020).

Input evaluation should include considerations of ethnically based factors or proxies for such factors.

Outcome evaluation is critical for all models, including facially neutral models.

Model evaluation should consider alternative models, as the FTC can challenge models if a less discriminatory alternative would achieve the same results (2020).

If data are collected from individuals in a deceptive, unfair, or illegal manner, deletion of any AI models or algorithms developed from the data may also be required (2021).

Organizations must be transparent and not mislead consumers "about the nature of the interaction" – and not utilize fake "engager profiles" as part of their AI services (2020).

Organizations cannot exaggerate an AI model's efficacy or misinform consumers about whether AI results are fair or unbiased. According to the FTC, deceptive AI statements are actionable (2021).

If algorithms are used to assign scores to consumers, an organization must disclose key factors that affect the score, rank-ordered according to importance (2020).
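As a hypothetical sketch of that disclosure requirement (the factor names and weights below are invented for illustration), rank-ordering a score's key factors by importance might look like:

```python
# Rank-order the factors behind an automated consumer score by importance
# (here, absolute weight). Factor names and weights are hypothetical.
score_factors = {
    "payment_history": 0.45,
    "credit_utilization": -0.30,
    "account_age": 0.15,
    "recent_inquiries": -0.10,
}

ranked = sorted(score_factors.items(), key=lambda kv: abs(kv[1]), reverse=True)
for factor, weight in ranked:
    print(f"{factor}: {weight:+.2f}")
```

A real disclosure would, per the guidance, present these ranked factors to the consumer rather than a generic "you don't meet our criteria" notice.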

Organizations providing certain types of reports through AI services must also provide notices to the users of such reports (2016).

Organizations building AI models based on consumer data must, at least in some circumstances, allow consumers access to the information supporting the AI models (2016).

Automated decisions based on third-party data may require the organization using the third-party data to provide the consumer with an "adverse action" notice (for example, if under the Fair Credit Reporting Act, 15 U.S.C. § 1681 (Rev. Sept. 2018), such decisions deny an applicant an apartment or charge them a higher rent) (2020).

General "you don't meet our criteria" disclosures are not sufficient. The FTC expects end users to know what specific data are used in the AI model and how the data are used by the AI model to make a decision (2020).

Organizations that change specific terms of deals based on automated systems must disclose the changes and reasoning to consumers (2020).

Organizations should provide consumers with an opportunity to amend or supplement information used to make decisions about them (2020) and allow consumers to correct errors or inaccuracies in their personal information (2016).

When deploying models, organizations must confirm that the AI models have been validated to ensure they work as intended and do not illegally discriminate (2020).

Organizations must carefully evaluate and select an appropriate AI accountability mechanism, transparency framework and/or independent standard, and implement as applicable (2020).

An organization should determine the fairness of an AI model by examining whether the particular model causes, or is likely to cause, substantial harm to consumers that is not reasonably avoidable and not outweighed by countervailing benefits (2021).

Organizations must test AI models periodically to revalidate that they function as intended (2020) and to ensure a lack of discriminatory effects (2021).

Organizations must account for compliance, ethics, fairness and equality when using AI models, taking into account four key questions (2016; 2020):

How representative is the data set? Does the AI model account for biases? How accurate are the AI predictions? Does the reliance on the data set raise ethical or fairness concerns?

Organizations must embrace transparency and independence, which can be achieved in part through the following (2021):

Using independent, third-party audit processes and auditors, which are immune to the intent of the AI model. Ensuring data sets and AI source code are open to external inspection. Applying appropriate recognized AI transparency frameworks, accountability mechanisms and independent standards. Publishing the results of third-party AI audits.

Organizations remain accountable throughout the AI data life cycle under the FTC's recommendations for AI transparency and independence (2021).

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.


Interested in Artificial Intelligence? Check Out This New Weekly Series – Pasadena Now

Innovate Pasadena and Artificial Intelligence Los Angeles (AILA) are partnering with Global Research Methods and Data Science (RMDS) for a free online meet-up, "AI Bias and Surveillance: Recognition, Analysis and Prediction," on Tuesday, June 1, from 4 to 5 p.m.

The program is targeted to anyone interested in AI bias in detection and analysis systems (face, object, language, emotion) and surveillance in public, private and professional contexts.

No previous knowledge is required of those attending, as long as participants are willing to appreciate a diversity of perspectives and think critically about power, harms and risks.

This will be the first of six weekly one-hour sessions held over Zoom, led by Merve Hickok, founder of AIethicist, which provides reference and research material for anyone interested in the current discussions on AI ethics and the impact of AI on individuals and society.

The program begins with fundamental concepts in AI ethics and human rights in the first week. Weeks 2 and 3 move on to the specifics of how recognition and analysis systems (facial and object recognition, natural language processing, and affective computing) manifest bias.

Weeks 4 to 6 discuss how data is collected and connected both online and offline, how the recognition and analysis systems are used in different settings, for which purposes, and what the consequences are.

Attendees will be provided with short reports and/or videos for each class, which they can read and watch either ahead of the class or afterward. The live session will allow for discussion of concepts and cases.

Merve Hickok is an independent consultant and trainer focused on capacity building in ethical and responsible AI and governance of AI systems. She is currently an instructor on Data Ethics at the University of Michigan School of Information, senior researcher at the Center for AI and Digital Policy, founding editorial board member of Springer Nature AI and Ethics journal, and one of the 100 Brilliant Women in AI Ethics 2021.

She is a fellow at ForHumanity Center, a regional lead for the Women in AI Ethics Collective, and sits on a number of IEEE (Institute of Electrical and Electronics Engineers) and IEC (International Electrotechnical Commission) working groups that set global standards for autonomous systems. Previously, Hickok was a Vice President of HR in a number of different roles with Bank of America Merrill Lynch.

To register for Tuesday's virtual meetup, visit and click on the "Attend Event" button.

For more information, call (213) 245-1817.



Artificial intelligence system could help counter the spread of disinformation – MIT News

Disinformation campaigns are not new – think of wartime propaganda used to sway public opinion against an enemy. What is new, however, is the use of the internet and social media to spread these campaigns. The spread of disinformation via social media has the power to change elections, strengthen conspiracy theories, and sow discord.

Steven Smith, a staff member from MIT Lincoln Laboratory's Artificial Intelligence Software Architectures and Algorithms Group, is part of a team that set out to better understand these campaigns by launching the Reconnaissance of Influence Operations (RIO) program. Their goal was to create a system that would automatically detect disinformation narratives as well as the individuals spreading those narratives within social media networks. Earlier this year, the team published a paper on their work in the Proceedings of the National Academy of Sciences, and they received an R&D 100 award last fall.

The project originated in 2014 when Smith and colleagues were studying how malicious groups could exploit social media. They noticed increased and unusual activity in social media data from accounts that had the appearance of pushing pro-Russian narratives.

"We were kind of scratching our heads," Smith says of the data. So the team applied for internal funding through the laboratory's Technology Office and launched the program in order to study whether similar techniques would be used in the 2017 French elections.

In the 30 days leading up to the election, the RIO team collected real-time social media data to search for and analyze the spread of disinformation. In total, they compiled 28 million Twitter posts from 1 million accounts. Then, using the RIO system, they were able to detect disinformation accounts with 96 percent precision.
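As a quick worked example of that 96 percent figure (the counts below are hypothetical, chosen only to make the arithmetic concrete): precision is the fraction of accounts the system flags that really are disinformation accounts.

```python
# Precision = true positives / (true positives + false positives).
# The counts are hypothetical, not from the RIO study.
def precision(true_positives, false_positives):
    return true_positives / (true_positives + false_positives)

flagged_correctly = 960  # flagged accounts that truly spread disinformation
flagged_wrongly = 40     # legitimate accounts flagged by mistake
print(precision(flagged_correctly, flagged_wrongly))  # -> 0.96
```

Note that precision alone says nothing about recall, that is, how many disinformation accounts the system missed.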

What makes the RIO system unique is that it combines multiple analytics techniques in order to create a comprehensive view of where and how the disinformation narratives are spreading.

"If you are trying to answer the question of who is influential on a social network, traditionally, people look at activity counts," says Edward Kao, another member of the research team. On Twitter, for example, analysts would consider the number of tweets and retweets. "What we found is that in many cases this is not sufficient. It doesn't actually tell you the impact of the accounts on the social network."

As part of Kao's PhD work in the laboratory's Lincoln Scholars program, a tuition fellowship program, he developed a statistical approach, now used in RIO, to help determine not only whether a social media account is spreading disinformation but also how much the account causes the network as a whole to change and amplify the message.

Erika Mackin, another research team member, also applied a new machine learning approach that helps RIO to classify these accounts by looking into data related to behaviors such as whether the account interacts with foreign media and what languages it uses. This approach allows RIO to detect hostile accounts that are active in diverse campaigns, ranging from the 2017 French presidential elections to the spread of Covid-19 disinformation.

Another unique aspect of RIO is that it can detect and quantify the impact of accounts operated by both bots and humans, whereas most automated systems in use today detect bots only. RIO also has the ability to help those using the system to forecast how different countermeasures might halt the spread of a particular disinformation campaign.

The team envisions RIO being used by both government and industry, and beyond social media in the realm of traditional media such as newspapers and television. Currently, they are working with West Point student Joseph Schlessinger, who is also a graduate student at MIT and a military fellow at Lincoln Laboratory, to understand how narratives spread across European media outlets. A new follow-on program is also underway to dive into the cognitive aspects of influence operations and how individual attitudes and behaviors are affected by disinformation.

"Defending against disinformation is not only a matter of national security, but also about protecting democracy," says Kao.


AI is learning how to create itself – MIT Technology Review

But there's another crucial observation here. Intelligence was never an endpoint for evolution, something to aim for. Instead, it emerged in many different forms from countless tiny solutions to challenges that allowed living things to survive and take on future challenges. Intelligence is the current high point in an ongoing and open-ended process. In this sense, evolution is quite different from algorithms as people typically think of them – as means to an end.

It's this open-endedness, glimpsed in the apparently aimless sequence of challenges generated by POET, that Clune and others believe could lead to new kinds of AI. For decades AI researchers have tried to build algorithms to mimic human intelligence, but the real breakthrough may come from building algorithms that try to mimic the open-ended problem-solving of evolution – and sitting back to watch what emerges.

Researchers are already using machine learning on itself, training it to find solutions to some of the field's hardest problems, such as how to make machines that can learn more than one task at a time or cope with situations they have not encountered before. Some now think that taking this approach and running with it might be the best path to artificial general intelligence. "We could start an algorithm that initially does not have much intelligence inside it, and watch it bootstrap itself all the way up potentially to AGI," Clune says.

The truth is that for now, AGI remains a fantasy. But that's largely because nobody knows how to make it. Advances in AI are piecemeal and carried out by humans, with progress typically involving tweaks to existing techniques or algorithms, yielding incremental leaps in performance or accuracy. Clune characterizes these efforts as attempts to discover the building blocks for artificial intelligence without knowing what you're looking for or how many blocks you'll need. And that's just the start. "At some point, we have to take on the Herculean task of putting them all together," he says.

Asking AI to find and assemble those building blocks for us is a paradigm shift. It's saying: we want to create an intelligent machine, but we don't care what it might look like – just give us whatever works.

Even if AGI is never achieved, the self-teaching approach may still change what sorts of AI are created. "The world needs more than a very good Go player," says Clune. For him, creating a supersmart machine means building a system that invents its own challenges, solves them, and then invents new ones. POET is a tiny glimpse of this in action. Clune imagines a machine that teaches a bot to walk, then to play hopscotch, then maybe to play Go. "Then maybe it learns math puzzles and starts inventing its own challenges," he says. The system continuously innovates, and the sky's the limit in terms of where it might go.


The United Nations needs to start regulating the ‘Wild West’ of artificial intelligence – The Conversation CA

The European Commission recently published a proposal for a regulation on artificial intelligence (AI). This is the first document of its kind to attempt to tame the multi-tentacled beast that is artificial intelligence.

"The sun is starting to set on the Wild West days of artificial intelligence," writes Jeremy Kahn. He may have a point.

When this regulation comes into effect, it will change the way we conduct AI research and development. Until now, there were few rules or regulations in AI: if you could think it, you could build it. That is no longer the case, at least in the European Union.

There is, however, a notable exception in the regulation: it does not apply to international organizations like the United Nations.

Naturally, the European Union does not have jurisdiction over the United Nations, which is governed by international law. The exclusion therefore does not come as a surprise, but does point to a gap in AI regulation. The United Nations therefore needs its own regulation for artificial intelligence, and urgently so.

Artificial intelligence technologies have been used increasingly by the United Nations. Several research and development labs, including the Global Pulse Lab, the Jetson initiative by the UN High Commissioner for Refugees, UNICEF's Innovation Labs and the Centre for Humanitarian Data, have focused their work on developing artificial intelligence solutions that would support the UN's mission, notably in terms of anticipating and responding to humanitarian crises.

United Nations agencies have also used biometric identification to manage humanitarian logistics and refugee claims. The UNHCR developed a biometrics database which contained the information of 7.1 million refugees. The World Food Program has also used biometric identification in aid distribution to refugees, coming under some criticism in 2019 for its use of this technology in Yemen.

In parallel, the United Nations has partnered with private companies that provide analytical services. A notable example is the World Food Programme, which in 2019 signed a contract worth US$45 million with Palantir, an American firm specializing in data collection and artificial intelligence modelling.

In 2014, the United States Bureau of Immigration and Customs Enforcement (ICE) awarded a US$20 billion contract to Palantir to track undocumented immigrants in the U.S., especially family members of children who had crossed the border alone. Several human rights watchdogs, including Amnesty International, have raised concerns about Palantir's involvement in human rights violations.

Like most AI initiatives developed in recent years, this work has happened largely without regulatory oversight. There have been many attempts to set up ethical modes of operation, such as the Office for the Coordination of Humanitarian Affairs' Peer Review Framework, which sets out a method for overseeing the technical development and implementation of AI models.

In the absence of regulation, however, tools such as these, without legal backing, are merely best practices with no means of enforcement.

In the European Commission's AI regulation proposal, developers of high-risk systems must go through an authorization process before going to market, just as with a new drug or car. They are required to put together a detailed package before the AI is available for use, involving a description of the models and data used, along with an explanation of how accuracy, privacy and discriminatory impacts will be addressed.

The AI applications in question include biometric identification, categorization, and the evaluation of people's eligibility for public assistance benefits and services. They also include the dispatch of emergency first-response services; all of these are current uses of AI by the United Nations.

Conversely, the lack of regulation at the United Nations can be considered a challenge for agencies seeking to adopt more effective and novel technologies. As such, many systems seem to have been developed and later abandoned without being integrated into actual decision-making systems.

An example of this is the Jetson tool, which was developed by UNHCR to predict the arrival of internally displaced persons to refugee camps in Somalia. The tool does not appear to have been updated since 2019, and seems unlikely to transition into the humanitarian organization's operations. Unless, that is, it can be properly certified by a new regulatory system.

Trust in AI is difficult to obtain, particularly in United Nations work, which is highly political and affects very vulnerable populations. The onus has largely been on data scientists to develop the credibility of their tools.

A regulatory framework like the one proposed by the European Commission would take the pressure off data scientists in the humanitarian sector to individually justify their activities. Instead, agencies or research labs who wanted to develop an AI solution would work within a regulated system with built-in accountability. This would produce more effective, safer and more just applications and uses of AI technology.

See the original post here:
The United Nations needs to start regulating the 'Wild West' of artificial intelligence - The Conversation CA

How Artificial Intelligence Is Cutting Wait Time at Red Lights – Motor Trend

Who hasn't been stuck seething at an interminable red light with zero cross traffic? When this happened one time too many to Uriel Katz, he co-founded Israel-based, Palo Alto, California-headquartered tech startup NoTraffic in 2017. The company claims its cloud- and artificial-intelligence-based traffic control system can halve rush-hour times in dense urban areas, reduce annual CO2 emissions by a half-billion tons in places like Phoenix/Maricopa County, and slash transportation budgets by 70 percent. That sounded mighty free-lunchy, so I got NoTraffic's VP of strategic partnerships, Tom Cooper, on the phone.

Here's how it works: Sensors perceive, identify, and analyze all traffic approaching each intersection, sharing data with the cloud, where light timing and traffic flow are adjusted continuously, prioritizing commuting patterns, emergency and evacuation traffic, a temporary parade of bicycles, whatever. Judicious allocation of "green time" means no green or walk-signal time gets wasted.
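The "green time" allocation Cooper describes can be illustrated with a simple demand-proportional scheme. This is a sketch only; NoTraffic's actual algorithms are proprietary, and the function, cycle length, and minimum-green values below are assumptions for illustration.

```python
# Illustrative sketch of demand-proportional "green time" allocation.
# This is NOT NoTraffic's algorithm; it only shows the general idea of
# redistributing a fixed signal cycle according to observed approach demand.

def allocate_green_time(demand, cycle_s=90, min_green_s=7):
    """Split a signal cycle among approaches in proportion to demand.

    demand: dict mapping approach name -> vehicles/pedestrians detected
    cycle_s: total cycle length in seconds
    min_green_s: safety floor so no approach is ever starved
    """
    n = len(demand)
    total = sum(demand.values())
    spare = cycle_s - n * min_green_s  # time left after guaranteeing minimums
    if total == 0:
        # Nothing detected: split the cycle evenly.
        return {a: cycle_s / n for a in demand}
    return {a: min_green_s + spare * d / total for a, d in demand.items()}

plan = allocate_green_time({"north": 18, "south": 12, "east": 0, "west": 2})
# The empty eastbound approach gets only the minimum green, freeing the
# rest of the cycle for the busy north-south movement.
```

The point of the sketch is the contrast with fixed hourly timing plans: when an approach has zero detected demand, its share of the cycle collapses to the safety minimum instead of idling a green light at an empty road.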

I assumed such features had long since evolved from the tape-drive traffic control system Michael Caine's team sabotaged in Rome to pull off The Italian Job in 1969. Turns out that while most such systems' electronics have evolved, their central intelligence and situational adaptability have not.

Intersections that employ traffic-sensing pavement loops, video cameras, or devices that enable emergency vehicle prioritization still typically rely on hourly traffic-flow predictions for timing. When legacy system suppliers like Siemens offer similar technology with centralized control, it typically requires costly installation of fiber-optic or other wired-network connections, as the latency inherent in cellular communications can't meet stringent standards set by the Advanced Transportation Controller (ATC), National Electrical Manufacturers Association (NEMA), CalTrans, and others for safety and conflict resolution.

By contrast, NoTraffic localizes all the safety-critical decision-making at the intersection, with a camera/radar sensor that can identify vehicles, pedestrians, and bikers observing each approach. These sensors are wired to a box inside the existing control cabinet that can also accept input signals from pressure loops or other existing infrastructure. The controller only requires AC power. It connects to the cloud via 4G/5G/LTE, but this connection merely allows for sharing of data that constantly tailors the signal timing of nearby intersections. This is not nanosecond, fiber-optic-speed critical info. NoTraffic promises to instantly leapfrog legacy intersections to state-of-the-art intelligence, safety sensing, and connectivity.

Installation cost per intersection roughly equals the cost budgeted for maintaining and repairing today's inductive loops and camera intersections every five years, but the NoTraffic gear allegedly lasts longer and is upgradable over the air. This accounts for that 70 percent cost savings.

NoTraffic's congestion-reduction claims don't require vehicle-to-infrastructure communications or Waze/Google/Apple Maps integration, but adding such features via over-the-air upgrades promises to further improve future traffic flow.

Hardening the system against Italian Job-like traffic system hacks is essential, so each control box is electrically isolated and firewalled. All input signals from the local sensors are fully encrypted. Ditto all cloud communications.

NoTraffic gear is up and running in California, Arizona, and on the East Coast, and the company plans to be in 41 markets by the end of 2021. Maricopa County has the greatest number of NoTraffic intersections, and projections indicate equipping all 4,000 signals in the area would save 7.8 centuries of wasted commuting time per year, valued at $1.2 billion in economic impact. Reducing that much idling time would save 531,929 tons of CO2 emissions, akin to taking 115,647 combustion-engine vehicles off the road. The company targets jurisdictions covering 80 percent of the nation's 320,000 traffic signals, noting that converting the entire U.S. traffic system could reduce CO2 by as much as removing 20 million combustion vehicles each year.
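The CO2-to-vehicles equivalence in those projections is easy to sanity-check: dividing the claimed tonnage by the claimed vehicle count gives roughly 4.6 tons per vehicle per year, which matches the EPA's widely used rule-of-thumb figure for a typical passenger car.

```python
# Sanity-check the article's Maricopa County CO2 equivalence claim.
tons_co2_saved = 531_929
vehicles_equivalent = 115_647

tons_per_vehicle = tons_co2_saved / vehicles_equivalent
print(round(tons_per_vehicle, 2))  # ~4.6 metric tons per vehicle per year
```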

I fret that despite its obvious advantages, greedy municipalities might push to leverage NoTraffic cameras for red light enforcement, but Cooper noted the company's clients are traffic operations departments, which are not tasked with revenue generation. NoTraffic is neither conceived nor enabled to be an enforcement tool. Let's hope the system proves equally hackproof to government "revenuers" and gold thieves alike.

Originally posted here:
How Artificial Intelligence Is Cutting Wait Time at Red Lights - Motor Trend

How Artificial Intelligence has played a major role in fighting Covid – The National

From the personal to the professional and the micro to the macro-economic, the pandemic has highlighted just how crucial the state of global health and the policies that underpin it are to our collective survival and prosperity. Perhaps lesser appreciated, but certainly no less significant, is just how big a part Artificial Intelligence has to play, says a leading expert in the field.

"We've had an unprecedented amount of sharing of data globally, of live daily updates on data across the board, whether it has to do with death rates or infection rates. In the UK, we had our live tracker; we have track-and-trace, which also collected data. All of this is underpinning the work that was being done to fight Covid. It is also what is ultimately the foundation for artificial intelligence," says Aldo Faisal, Professor of AI and Neuroscience at the Departments of Computing and Bioengineering at Imperial College London.

Prof Faisal leads the Brain and Behaviour Lab, which uses and develops statistical AI techniques to analyse data and predict behaviour, as well as producing medical-related robotics. Last year he was awarded a five-year UK Research and Innovation Turing AI Fellowship to develop an "AI Clinician" that will help doctors make complex decisions and relieve pressure on the NHS.

Having spent years harnessing the power of AI to develop better health care, Prof Faisal treated Covid-19 as no exception, redirecting a large portion of his lab's resources to the national effort at the outset of the pandemic.

Just last month he and a team of researchers revealed their work in using machine learning to predict which Covid-19 patients in intensive care units might get worse and not respond positively to being turned onto their stomachs, a technique commonly used to improve oxygenation of the lungs.
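The team's published method is not reproduced here, but the general shape of such a problem, predicting a binary outcome (responds to proning or not) from daily patient measurements, can be sketched as a plain logistic regression. Everything below is synthetic and hypothetical: the features, data, and model are illustrations, not the Imperial team's approach.

```python
# Illustrative sketch only: framing "will this ICU patient respond to prone
# positioning?" as binary classification on synthetic data. This does NOT
# reproduce the Imperial College team's published method.
import numpy as np

rng = np.random.default_rng(0)
n = 400
# Hypothetical daily features, e.g. oxygenation index, FiO2, respiratory rate.
X = rng.normal(size=(n, 3))
# Synthetic ground truth: response driven mostly by the first two features.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(float)

# Plain logistic regression trained by gradient descent on the log loss.
w, b = np.zeros(3), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted response probability
    w -= 0.5 * (X.T @ (p - y) / n)          # gradient step on weights
    b -= 0.5 * (p - y).mean()               # gradient step on bias

accuracy = ((1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5) == y).mean()
print(round(accuracy, 2))  # well above chance on this synthetic data
```

In practice, as the article notes, such a model is only possible because patient trajectories are recorded daily; the features would be time-series summaries rather than the static numbers used here.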

"This only happened because we look at the trajectories of patients on a daily basis," says Prof Faisal, who first studied in Germany, where he received a number of awards and distinctions, before continuing his education as a Junior Fellow at the University of Cambridge.

In collaboration with a digital healthcare company his lab ran a survey of Covid-19 symptoms worldwide with one million respondents which, though not yet peer-reviewed, has shown that standard Covid-19 symptoms, such as loss of taste and smell, are not consistent across countries.

"Suddenly symptoms in Africa or India present themselves very differently from symptoms in Europe. Why is that important? Because we're always talking about asymptomatic transmission, and the challenges [involved]," the German-born professor tells The National.

From lung scan imaging for preliminary detection to the rapid review of research and, of course, the worldwide dissemination of mortality figures, algorithms have been deployed far and wide to help better understand and combat the virus.

"I've seen things advance in weeks that would have taken probably a decade to happen. And the question is, how much of that legacy experience from a citizen's viewpoint is going to transform in the long term? What is acceptable?" asks Prof Faisal, who is also the Founding Director of the £20 million ($28.3m) UKRI Centre for Doctoral Training in AI for Healthcare.

Privacy, data and bias remain the omnipresent issues trailing behind the proliferation of AI across sectors, but a public health emergency like Covid-19 tends, for better or worse, to quieten such resistance.

Nevertheless, ardent proponents of AI welcome the legislative safeguards and frameworks they say would help foster greater trust among the public, as well as increased collaboration among institutions.

Addressing an online forum of AI healthcare experts earlier this year, the Conservative MP and former Minister of State, George Freeman, said governments had a difficult but important role to play in instilling excitement instead of fear in the public. "The big challenge in this space is to create a trust framework where people out on the streets can have confidence that this big system for using massive computer power to find value in the healthcare system is working for them, not on them," said the founder of Reform for Resilience, an international initiative aimed at promoting strategic reform of health care.

Mr Freeman said the steady rise of wellness and wearable technology in the healthcare industry suggests people are increasingly willing to take responsibility for their health but need better architecture to do so.

"We need to set some global international protocols and standards for what is and is not legitimate good-practice use of AI," he said at the online forum.

"I think we need to frame AI within a UK system approach in which the public would have real trust that we're going to embed that properly in a system that will make the sacrifices of this last year mean that the next generation don't have to experience it."

Regardless of where the legislation is going, the increased integration of health care with personal digital technologies is unlikely to turn back. Utilising AI does not, however, mean dispensing with doctors and medical professionals, says Amr Nimer, a neurosurgeon at Imperial College NHS Trust and a colleague of Prof Faisal.

"There is a massive shortage of doctors worldwide. What AI can do is address some of the unmet personnel needs. The idea behind the deployment of AI agents is not to replace doctors or healthcare professionals, but to help automate some of the tasks that can be done much more efficiently by machines, so that we as healthcare professionals can concentrate on actual patient care. AI will augment, rather than replace, healthcare professionals," Mr Nimer told The National.

Over the past year the Dubai-born neurosurgeon has been working with Prof Faisal in the Brain Behaviour Lab on a project to train surgeons using AI.

"It's based on the principles of economy of movement and surgical efficacy. We use state-of-the-art motion sensors to collect movement data from expert surgeons, and then utilise AI algorithms to answer the questions: what defines manual surgical expertise, or what makes an expert an expert? What does behavioural data show us about the manual skills of surgical experts [versus] novices? Once we have an entirely data-driven, objective definition of expertise in a particular procedure, we can use AI algorithms to help junior surgeons perform that procedure much more efficiently on models, rather than practising on patients first," Mr Nimer said.
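One of the principles Mr Nimer mentions, economy of movement, has a natural quantitative form: the total path length traced by a tracked instrument, which tends to be shorter and smoother for experts performing the same task. The sketch below is an illustration under assumed conventions (a simple array of 3D sensor samples), not the lab's actual feature set.

```python
# Illustrative economy-of-movement metric: total path length of a tracked
# instrument tip. The data format is an assumption; the Brain and Behaviour
# Lab's actual features are not described in the article.
import numpy as np

def path_length(positions):
    """Total distance travelled by a tracked instrument tip.

    positions: (n_samples, 3) array of x, y, z coordinates from a motion
    sensor. For the same task, experts typically trace shorter paths.
    """
    steps = np.diff(positions, axis=0)            # displacement per sample
    return float(np.linalg.norm(steps, axis=1).sum())

# Two hypothetical trajectories for the same gesture:
direct = np.array([[0, 0, 0], [1, 0, 0], [2, 0, 0]], dtype=float)
wandering = np.array([[0, 0, 0], [1, 1, 0], [1, -1, 0], [2, 0, 0]], dtype=float)

print(path_length(direct))     # 2.0
print(path_length(wandering))  # longer path between the same endpoints
```

A data-driven definition of expertise would combine many such features (path length, jerk, idle time) and let a model separate expert from novice recordings.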

Showing the wide applicability of AI, this research project shares similar research principles with work undertaken by Prof Faisal's team last year with Formula E World Champion Lucas di Grassi. Wearing a wireless electroencephalogram helmet to track his brain activity, the racing driver had his eye and body movements monitored under real-time extreme conditions. The first-of-its-kind experiment aimed to better understand how an expert driver performs, so that more targeted and useful information can be given to self-driving cars.

After more than a year responding to the severities of Covid-19, the healthcare system is overwhelmingly strained. The long-term direct and indirect health effects of the virus are still revealing themselves, but initial assessments suggest a long road of continued care ahead, and waiting times to treat other illnesses are now several years long. Healthcare facilities will need a huge injection of both human and financial capital, as well as the latest that technology has to offer, in order to cope.

The crisis precipitated a hastening of AI's foray into the medical sphere, with an unprecedented sharing of data and collaboration across institutions. With medics facing ominous healthcare challenges for years to come, former sceptics may now be more willing to embrace tech that can lessen the burden. It remains to be seen, however, whether the government can provide the necessary regulatory framework to protect the interests of both the patient and the professional.

Read the rest here:
How Artificial Intelligence has played a major role in fighting Covid - The National

Hesse launched the first nationwide artificial intelligence pilot project – TheMayor.EU

First nationwide pilot project for artificial intelligence in Hesse

The German state will participate in a testing hub for AI

Hesse authorities signed a joint declaration on the establishment of an AI Quality & Testing Hub with the President of the VDE (Association of Electrical, Electronic and Information Technologies). Last week the German state and the association outlined the goal of the initiative to put AI systems to the test.

Research and development, standardisation and certification are combined under one roof in the hub. In this way, the hub makes an important contribution to developing and applying AI responsibly.

"AI is developing into the key technology of the 21st century, as it can offer solutions to many societal challenges. The Hessian state government wants to promote the quality of AI together with the VDE and make it verifiable. We are convinced that the high quality of AI systems is the basis for trust in and use of this technology," emphasised Hesse's digital minister Prof. Dr. Kristina Sinemus.

In the AI Quality & Testing Hub, data aspects should also play a role. In addition to certification, opportunities for dialogue, discourse, experimentation and knowledge are also expected to be created.

The initiative is to be supported by the State-funded Centre for Responsible Digitisation and the Hessian Centre for Artificial Intelligence (hessian.AI). In the dialogue process that is now starting, the concept of the hub is to be refined in order to enter a foundation phase of the hub by mid-2022.

With the update of the digital strategy, the Hesse government is relying in particular on the strong brand "KI made in Hessen", which stands for responsible innovations and AI applications in the digital sector.

"The pandemic once again clearly demonstrated the necessary innovation boost through digitization, but also made clear the need for action. With the new project, we are actively participating in the design of a European legal framework for AI and thus making a practical contribution to making AI trustworthy so that AI is developed and used for the benefit of people," concluded Sinemus.

Read more from the original source:
Hesse launched the first nationwide artificial intelligence pilot project - TheMayor.EU