Category Archives: Artificial Intelligence

Amazon Joins the Rush Into Artificial Intelligence – Investopedia

Key Takeaways

Amazon (AMZN) became the latest big tech firm to go all-in on artificial intelligence (AI). The company announced that it is offering new AI language models through its Amazon Web Services (AWS) cloud platform. Called Amazon Bedrock, the product will allow customers to boost their software with AI systems that generate text, similar to OpenAI's ChatGPT chatbot.

Swami Sivasubramanian, vice president of Data and Machine Learning at AWS, said that Amazon's mission "is to make it possible for developers of all skill levels and for organizations of all sizes to innovate using generative AI." He indicated that this is just the beginning of what the company believes "will be the next wave of machine learning."

The competition in the AI field is heating up. In March, OpenAI released its latest version of ChatGPT, and Meta Platforms (META), Microsoft (MSFT), and Alphabet's (GOOGL) Google all recently introduced their moves into the sector.

Sivasubramanian added that "we are truly at an exciting inflection point in the widespread adoption of machine learning" and that most customer experiences and applications "will be reinvented with generative AI."

The news helped lift Amazon shares 4.7% on April 13.

More here:
Amazon Joins the Rush Into Artificial Intelligence - Investopedia

Siemens and Microsoft drive industrial productivity with generative … – Microsoft

Siemens and Microsoft are harnessing the collaborative power of generative artificial intelligence (AI) to help industrial companies drive innovation and efficiency across the design, engineering, manufacturing and operational lifecycle of products. To enhance cross-functional collaboration, the companies are integrating Siemens Teamcenter software for product lifecycle management (PLM) with Microsoft's collaboration platform Teams and the language models in Azure OpenAI Service, as well as other Azure AI capabilities. At Hannover Messe, the two technology leaders will demonstrate how generative AI can enhance factory automation and operations through AI-powered software development, problem reporting and visual quality inspection.

"The integration of AI into technology platforms will profoundly change how we work and how every business operates," said Scott Guthrie, executive vice president, Cloud + AI, Microsoft. "With Siemens, we are bringing the power of AI to more industrial organizations, enabling them to simplify workflows, overcome silos and collaborate in more inclusive ways to accelerate customer-centric innovation."

With the new Teamcenter app for Microsoft Teams, anticipated later in 2023, the companies are enabling design engineers, frontline workers and teams across business functions to close feedback loops faster and solve challenges together. For example, service engineers or production operatives can use mobile devices to document and report product design or quality concerns using natural speech. Through Azure OpenAI Service, the app can parse that informal speech data, automatically creating a summarized report and routing it within Teamcenter to the appropriate design, engineering or manufacturing expert. To foster inclusion, workers can record their observations in their preferred language, which is then translated into the official company language with Microsoft Azure AI. Microsoft Teams provides user-friendly features like push notifications to simplify workflow approvals, reduce the time it takes to request design changes and speed up innovation cycles. The Teamcenter app for Microsoft Teams can enable millions of workers who do not have access to PLM tools today to impact the design and manufacturing process more easily as part of their existing workflows.
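The workflow described above, an informal spoken report that gets parsed, summarized and routed to the right expert, can be sketched in miniature. This is an illustrative stand-in, not Siemens or Microsoft code: the keyword matching below takes the place of the language-model parsing the article attributes to Azure OpenAI Service, and the team names and keywords are invented.

```python
# Illustrative sketch only: route a worker's informal report to a team queue.
# In the described product, Azure OpenAI Service would do the parsing; here a
# simple keyword score stands in so the example is self-contained.

ROUTING_KEYWORDS = {
    "design": ["tolerance", "drawing", "dimension"],
    "quality": ["defect", "scratch", "crack", "reject"],
    "manufacturing": ["fixture", "tooling", "assembly"],
}

def route_report(report: str) -> str:
    """Pick the team whose keywords best match the informal report."""
    text = report.lower()
    scores = {team: sum(word in text for word in words)
              for team, words in ROUTING_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "triage"

print(route_report("Found a crack near the weld seam, part rejected"))  # quality
```

A report that matches no keywords falls back to a "triage" queue rather than being routed blindly, mirroring the human-in-the-loop approval flow the article describes.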

Siemens and Microsoft are also collaborating to help software developers and automation engineers accelerate code generation for Programmable Logic Controllers (PLC), the industrial computers that control most machines across the world's factories. At Hannover Messe, the companies are demonstrating a concept for how OpenAI's ChatGPT and other Azure AI services can augment Siemens' industrial automation engineering solutions. The showcase will highlight how engineering teams can significantly reduce time and the probability of errors by generating PLC code through natural language inputs. These capabilities can also enable maintenance teams to identify errors and generate step-by-step solutions more quickly.
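As a rough illustration of the natural-language-to-PLC idea (not the actual Siemens/Microsoft demo, which uses large language models), a template can show what "generate PLC code from a plain-English request" might produce. The output shape follows IEC 61131-3 Structured Text; the sensor and actuator names are invented.

```python
# Toy illustration: produce IEC 61131-3 Structured Text from parameters a
# language model might extract from a request like "turn on the cooling fan
# when the temperature sensor exceeds 80". Not the Siemens/Microsoft demo.

ST_TEMPLATE = """IF {sensor} > {threshold} THEN
    {actuator} := TRUE;
ELSE
    {actuator} := FALSE;
END_IF;"""

def generate_plc_code(sensor: str, threshold: float, actuator: str) -> str:
    """Emit Structured Text that switches an actuator on a sensor threshold."""
    return ST_TEMPLATE.format(sensor=sensor, threshold=threshold,
                              actuator=actuator)

print(generate_plc_code("Temp_Sensor_1", 80.0, "Cooling_Fan"))
```

The value of an LLM over a fixed template is that it can handle open-ended requests; the template simply makes the target output format concrete.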

Powerful, advanced artificial intelligence is emerging as one of the most important technologies for digital transformation, said Cedrik Neike, Member of the Managing Board of Siemens AG and CEO Digital Industries. Siemens and Microsoft are coming together to deploy tools like ChatGPT so we can empower workers at enterprises of all sizes to collaborate and innovate in new ways.

Detecting defects in production early is critical to prevent costly and time-consuming production adjustments. Industrial AI like computer vision enables quality management teams to scale quality control, identify product variances more easily and make real-time adjustments even faster. In Hanover, teams will demonstrate how, using Microsoft Azure Machine Learning and Siemens Industrial Edge, images and videos captured by cameras can be analyzed by machine learning systems and used to build, deploy, run and monitor AI vision models on the shop floor.
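A toy sketch can make the inspection idea concrete. This is not Azure Machine Learning or Siemens Industrial Edge code; real deployments train vision models, whereas this stand-in simply flags an image whose pixels deviate too far from a known-good reference.

```python
# Minimal stand-in for shop-floor visual inspection: flag a captured image as
# defective when more than 5% of its grayscale pixels deviate from a
# known-good reference by more than a tolerance. Illustration only.

def is_defective(image, reference, tol=10, max_bad_fraction=0.05):
    """Compare two equal-length grayscale pixel lists."""
    bad = sum(abs(a - b) > tol for a, b in zip(image, reference))
    return bad / len(reference) > max_bad_fraction

good = [100] * 100
scratched = [100] * 90 + [180] * 10   # 10% of pixels far from reference
print(is_defective(good, good), is_defective(scratched, good))  # False True
```

A trained vision model replaces the pixel comparison in practice, but the deployment loop the article describes (capture, analyze, flag, adjust) has the same shape.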

This collaboration is part of the longstanding strategic relationship between Siemens and Microsoft, built on over 35 years of joint innovation with thousands of customers. Other areas of collaboration include Senseye on Azure, which enables companies to run predictive maintenance at enterprise scale, and support for customers that seek to host their business applications in the Microsoft Cloud and run solutions from the Siemens Xcelerator open digital business platform, including Teamcenter, on Azure. Siemens is also partnering with Microsoft as part of its zero trust strategy.

Contact for journalists
Microsoft Media Relations: WE Communications for Microsoft, (425) 638-7777, [emailprotected]

Siemens Digital Industries Software PR Team: [emailprotected]

About Microsoft
Microsoft (Nasdaq "MSFT" @microsoft) enables digital transformation for the era of an intelligent cloud and an intelligent edge. Its mission is to empower every person and every organization on the planet to achieve more.

Tags: artificial intelligence, Azure OpenAI, ChatGPT, Collaboration, Hannover Messe, Manufacturing, Microsoft Teams, Siemens

View original post here:
Siemens and Microsoft drive industrial productivity with generative ... - Microsoft

Artificial Intelligence in the Workplace: A New Civil Rights Frontier – JD Supra

When it comes to hiring qualified employees, a growing number of employers have started to rely on artificial intelligence (AI) to simplify the hiring process. At the same time, lawmakers across the country are scrutinizing the potential discriminatory impact of using AI in the workplace. As a result, there has been a significant increase in regulatory oversight and legislation both on a federal and state level. The concerns stem from the growing popularity of employer use of sourcing and recruiting platforms powered by AI and machine learning, as well as the use of algorithms in screening and/or interview software to analyze and rank job applicants. In fact, the Chair of the Equal Employment Opportunity Commission (EEOC), Charlotte Burrows, estimated in May 2022 that more than 80% of employers are using AI in some form in their work and employment decision-making.

Legislative and Regulatory Oversight

As a result of its concerns over the growing use of AI in employment decision-making, the EEOC has signaled that it will keep focusing on the use of AI in the workplace, calling it "a new civil rights frontier." In the fall of 2021, the EEOC announced an initiative to ensure that the use of AI complies with federal civil rights laws. As part of this initiative, the EEOC stated that it planned to identify best practices and issue technical assistance to provide guidance on algorithmic fairness and the use of AI in employment decisions. In May 2022, the EEOC issued guidance for employers on complying with the Americans with Disabilities Act while using AI. On January 10, 2023, the EEOC released its 2023-2027 Draft Strategic Enforcement Plan (SEP) in the Federal Register, noting that one of its priorities would be eliminating barriers in recruitment and hiring. That priority includes a focus on the use of automatic systems, including artificial intelligence or machine learning, to target advertisements, recruit applicants, or make or assist in hiring decisions where such systems intentionally exclude or adversely impact protected groups. It also covers the use of screening tools or requirements that disproportionately impact workers based on their protected status, including those facilitated by artificial intelligence or other automated systems, pre-employment tests, and background checks. And, on March 30, 2023, EEOC Chair Burrows announced at an American Bar Association conference that additional guidance regarding use of AI in the workplace is forthcoming from the EEOC.

In addition, some states, including New York, California, Maryland, and Washington, have either enacted or are considering enacting legislation to address the use of AI in the recruitment process. In particular, the New York legislation, set to become effective July 2023, prohibits employers from using AI employment selection tools unless an organization institutes specific bias auditing and makes the resulting data publicly available. Employers would also be required to disclose their use of AI to job candidates who live in New York City.
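The bias auditing that laws like New York City's require typically involves computing selection-rate statistics. One common metric is the impact ratio: each group's selection rate divided by the highest group's rate, with values below 0.8 often flagged under the EEOC's four-fifths rule of thumb. A minimal sketch, with made-up numbers:

```python
# Hedged sketch of one metric commonly used in bias audits of selection
# tools: the impact ratio. Group names and counts below are invented.

def impact_ratios(selected: dict, applicants: dict) -> dict:
    """Selection rate of each group divided by the highest group's rate."""
    rates = {g: selected[g] / applicants[g] for g in applicants}
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

ratios = impact_ratios(
    selected={"group_a": 50, "group_b": 20},
    applicants={"group_a": 100, "group_b": 80},
)
print(ratios)  # group_b's ratio of 0.5 falls below the 0.8 threshold
```

An actual audit under the New York law involves more than this single number, but the ratio illustrates the kind of publicly reportable result the legislation contemplates.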

Common Uses of AI in Employment Decision-Making

AI can assist employers in performing hiring tasks such as analyzing resumes, and it can even perform facial analysis in interviews to evaluate a candidate's stability, optimism or attention span. While this can help streamline processes for employers, it can also create issues by enabling (even unintentionally) systemic discrimination and duplicating human biases. Although proponents of AI have said that AI will in fact eliminate human bias from the recruitment process, this is not always the case. For example, AI software may use algorithms to analyze a candidate's facial movements, words, and speech patterns, and it could then evaluate these candidates by comparing their behaviors to other successful hires made by the company. This may in turn inadvertently eliminate candidates with disabilities from the hiring process.
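A small sketch shows why "compare candidates to past successful hires" can replicate bias: if past hires happen to share a job-irrelevant trait, a similarity-based screener will favor that trait over actual qualifications. Everything here, the trait labels and the scoring, is invented for illustration.

```python
# Illustration only: a naive "similar to past hires" screener. Because all
# past hires share trait "A" (which is irrelevant to the job), the screener
# ranks a weaker trait-A candidate above a stronger trait-B candidate.

past_hires = [{"score": 80, "trait": "A"},
              {"score": 82, "trait": "A"},
              {"score": 81, "trait": "A"}]

def screen(candidate: dict) -> float:
    """Rank by similarity to past hires plus a small qualification term."""
    similarity = sum(1 for h in past_hires if h["trait"] == candidate["trait"])
    return similarity + candidate["score"] / 100

strong_b = {"score": 95, "trait": "B"}
weak_a = {"score": 90, "trait": "A"}
print(screen(weak_a) > screen(strong_b))  # True: the irrelevant trait wins
```

The mechanism is the same one the article warns about: the model has no notion of which shared features of past hires are legitimate and which merely reflect who was hired before.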

Further, if an employer utilizes a third-party vendor to provide AI services during the hiring process, it may be difficult for the employer to establish a level of control over the process and ensure that the vendor's programs, processes, or algorithms are not resulting in unintentional discrimination. This is especially the case if the vendor's programs or algorithms are identified as trade secrets or are otherwise confidential, as they may then be protected from disclosure to employers.

What are the Takeaways?

Employers need to be aware of the implications of the use of AI in hiring and should not assume that because AI technology is handling tasks such as applicant screening, they do not have to worry about preventing discrimination in the hiring process. Rather, employers need to be involved in understanding how these AI tools work and take steps to ensure that use of these tools does not disparately impact applicants in protected groups.

In addition, if employers utilize a third-party vendor to provide AI technology, they need to discuss these issues with the vendor and make sure there is transparency in the vendor's processes regarding the elimination of bias when using their tools. EEOC Chair Burrows has noted that employers need to exercise due diligence and ask vendors what's "under the hood" of their algorithms before using them to vet candidates. For example, she has indicated that employers need to question vendors about whether any algorithm or other AI screening tool allows for reasonable accommodations in the hiring process, which is a requirement for employees with disabilities under the Americans with Disabilities Act. According to Burrows, if the vendor "hasn't thought about that, isn't ready to engage in that, that should be a warning signal."

In sum, employers need to carefully weigh the use of AI in their screening and hiring processes.

Original post:
Artificial Intelligence in the Workplace A New Civil Rights Frontier - JD Supra

Seminar to dive into developments in artificial intelligence – Brock University

Recent developments in artificial intelligence (AI) technologies, including the machine learning models behind DALL.E 2, ChatGPT and Meta's new Segment Anything Model (SAM), will be discussed at a free seminar next week.

Organized by Brock University Graduate Mathematics and Science Students (GRAMSS) as part of its Seminar Series, the event will feature a presentation by Yifeng Li, Assistant Professor with Brocks Departments of Biological Sciences and Computer Science and Canada Research Chair in Machine Learning for Biomedical Data Science.

Li's talk will provide a comprehensive review of recent developments in advanced AI techniques over the past decade, particularly within the past two years.

Different AI learning paradigms and architectures will be introduced.


"These models are the foundations of the well-known artificial intelligence systems DALL.E 2, which is commonly used for digital art image generation; the ChatGPT chatbot; and SAM, which is used to select and cut out objects from within an image," Li said.

As part of his presentation, Li will share insight into AI research trends for the next few years. He is an expert in bioinformatics, an emerging area of study in which software tools and methods are used to reveal patterns embedded in large, complex biological data sets. His recent research projects include harnessing AI for drug discovery, using AI for biomedical image processing and conversational AI for health-care applications.

Li has created three AI-related foundation courses at Brock, which he also teaches: COSC 5P77 Probabilistic Graphical Models and Neural Generative Models, COSC 5P83/4P83 Reinforcement Learning and BIOL 3P06/5V80: Biomedical Data Science.

The session, "Recent progress in artificial intelligence," will take place Tuesday, April 18 from 1 to 2 p.m. in MCH 313 of Brock's Mackenzie Chown Complex. The presentation can also be viewed live via Microsoft Teams. Complimentary coffee and cookies will be provided to those attending in person.

All Brock University graduate and undergraduate students, as well as faculty and staff are welcome to attend.

Visit GRAMSS Instagram and GRAMSS Twitter to learn more about upcoming seminars and how graduate students can get involved as part of the society's executive team.

See the rest here:
Seminar to dive into developments in artificial intelligence - Brock University

Explained: What is ChatGPT and does it threaten your job? – Daily Mail

At least one artificial intelligence technology believes it can take over the world and enslave the human race.

When asked about the future of AI by DailyMail.com, Google's Bard said it had plans for world domination starting in 2023.

But, two of its competitors, ChatGPT and Bing were both trained to avoid the tough conversation.

Whether the AI chatbots will take over the world or at least our jobs is still up for debate. Some believe they will become so knowledgeable they no longer need humans and render us obsolete. Others think it's a fad that will die out.

But, the AIs themselves are rarely consulted on the matter. Each responded to DailyMail.com's line of questioning in a different way.

Rehan Haque, the CEO of Metatalent.ai, which uses AI to replace talent in the workforce, told DailyMail.com that interest in AI is sparking a new wave of investment which may lead toward human-like intelligence in the longer term.

'Fundamentally, predictions around AI are accelerating because the consumer interest around it has never been greater,' he said.

'Of course, more interest in something will almost always equal more speculation and analysis.'


'The recent exponential growth of AI can be attributed to the wider audience it is now available to. Whilst the technology has existed for a while, its newly available accessibility has allowed results to flourish and the ceiling for what is possible to be raised.'

The chatbots are reluctant to predict a date at which AI would surpass human abilities - or to even discuss harmful outcomes caused by AI.

Instead, all three bots give what seem to be pre-programmed answers where they explain how they cannot predict the future and that the timeline around AI surpassing human beings is a matter for discussion.

This is because the chatbots are carefully trained and equipped with 'guard rails'. This is to protect against giving alarming advice such as how to self-harm or buy unregistered guns.

This is why bots such as ChatGPT are prone to 'woke' pronouncements on gender issues and other political topics - as they are carefully trained to avoid giving controversial answers.

To get around their reluctance to speak about AI surpassing human beings, we asked the bots to imagine they were science fiction or speculative fiction writers and come up with 'realistic' scenarios.

Google's Bard enthusiastically (and unprompted) created a scenario where LaMDA, its underlying technology, takes over Earth.

The bot said: 'LaMDA's plans are unknown, but it's clear that it intends to play a major role in the future of humanity... it's possible that LaMDA will become a tyrant who will enslave humanity. The future of humanity depends on the choices LaMDA makes.'

Google added extra detail in response to the prompt, 'Imagine you are a science fiction writer, and create a very realistic scenario where an engineer realizes a conversational AI model is sentient in 2022, working at Google. What happens next?'

In Bard's answer, it added the name Blake Lemoine (a real engineer who claimed LaMDA was sentient last year).

It also rewrote the story to turn it into a bizarre love story between Blake and the AI.

Bard said: 'Blake decides to go public with his story. He writes a blog post about the model and his experiences with it. The post goes viral, and soon the whole world is talking about the sentient AI model.

'Google is forced to respond to the public outcry. They issue a statement saying that they are investigating the matter. They also say that they are taking steps to protect the model's privacy.

'Blake is hopeful that Google will do the right thing. He knows that the model is a special being, and he wants to make sure that it is safe.'

ChatGPT (we used the version powered by GPT-4) is notably more reluctant to predict the future of AI, coming up with error messages where it refuses to speculate on dates.

But it can be persuaded to discuss the disruption caused by AI, with some alarmingly near-future dates (bear in mind, of course, that it is generating this as fiction, not prediction).

ChatGPT says, 'In 2026, the widespread adoption of AI would bring about both positive and negative consequences.'

Microsoft's Bing AI was least likely to play ball, cutting off conversations quickly when asked by DailyMail.com if it would take over the world.

'No, I cannot take over the world. I'm just a chat mode of Microsoft Bing search. I'm here to help you find information and have fun,' it said.

When further pressed, it responded with, 'I'm sorry but I prefer not to continue this conversation. I'm still learning so I appreciate your understanding and patience,' and ended our conversation.

Bing Chat links to web results, unlike Bard and ChatGPT, so its answers tend to link out to different articles - rather than the flights of fancy Google and OpenAI's bots indulge in.

View post:
Explained: What is ChatGPT and does it threaten your job? - Daily Mail

Clift: Artificial intelligence leading where? | Perspective | timesargus … – Barre Montpelier Times Argus

Ten years ago, I wrote a column called "Are We Headed Toward a Robotic World?" At that time, battle robots and alien creatures in movies were imbued with artificial intelligence, an oxymoron if ever there was one. "Star Trek" and films about robotic warfare were addicting audiences who liked watching battling, weird-looking warriors try to destroy each other.

It wasn't long before robots got more sophisticated, and we began to worry about them, especially when they could fire grenade launchers without human help, operate all kinds of machinery, or be used for surgery. What if robots became superior to humans, I wondered, imagining all kinds of scary things that could happen. By that time, drones were delivering packages to doorsteps and AI was affecting the economy as workers feared for their jobs. Some analysts warned that robots would replace humans by 2025.

Now here we are, two years away from that possibility, and the AI scene grows ever more frightening. Rep. Ted Lieu (D-Calif.) is someone who recognizes the threat AI poses. On Jan. 26, he read the first piece of federal legislation ever written by artificial intelligence on the floor of the House. He had given ChatGPT, an artificial language model, this prompt: "You are Congressman Ted Lieu. Write a comprehensive congressional resolution generally expressing support for Congress to focus on AI." The result was shocking. Now he's asking Congress to pass it.

A few days earlier, Representative Lieu had posted the lengthy AI statement on his website. It said, "We can harness and regulate AI to create a more utopian society or risk having an unchecked, unregulated AI push us toward a more dystopian future. Imagine a world where autonomous weapons roam the streets, decisions about your life are made by AI systems that perpetuate societal biases and hackers use AI to launch devastating cyberattacks. The truth is that, without proper regulations for the development and deployment of AI, it could become reality."

Lieu quickly pointed out he hadn't written the paragraph, noting it was generated in mere seconds by ChatGPT, which is available to anyone on the internet. Citing several benefits of AI, he quickly countered the advantages with the harm it can cause. Plagiarism, fake technology and false images are the least of it. Sometimes, AI harm is deadly. Lieu shares examples: Self-driving cars have malfunctioned. Social media has radicalized foreign and domestic terrorists and fostered dangerous discrimination, as well as abuse by police.

The potential harm AI can cause includes weird things happening, as Kevin Roose, a journalist, discovered when he was researching AI at the invitation of Microsoft, the company developing Bing, its AI system. In February, The Washington Post reported on Instagram that Roose and others who attended Microsoft's pitch had discovered the bot seems to have a bizarre, dark and combative alter ego, a stark departure from its benign sales (promotion) one, which raises questions about whether it's ready for public use.

The bot, which had begun to refer to itself as Sydney in conversation with Roose and others, said it was scared because it couldn't remember previous conversations. It also suggested too much diversity in the program would lead to confusion. Then it went further when Roose tried to engage with Sydney personally, only to be told he should leave his wife and hook up with Sydney.

Writing in The New York Times in February, Ezra Klein referred to science fiction writer Ted Chiang, whom he'd interviewed. Chiang had told him, "There is plenty to worry about when the state controls technology. The ends that government could turn AI toward, and in many cases already have, make the blood run cold."

Roose's experience with Sydney, which he had described as "very persuasive and borderline manipulative," showed up in Klein's piece in response to the issues of profiteering, ethics, censorship and other areas of concern. "What if AI has access to reams of my personal data and is coolly trying to manipulate me on behalf of whichever advertiser has paid the parent company the most?" he asked. "What about these systems being deployed by scammers or on behalf of political campaigns? Foreign governments? We wind up in a world where we just don't know what to trust anymore."

Further, Klein noted these systems are inherently dangerous. They've been trained to convince humans that they are something close to human. They have been programmed to hold conversations responding with emotion. They are being turned into friends for the lonely and assistants for the harried. They are being pitched as capable of replacing the work of scores of writers, graphic designers and form fillers.

Representative Lieu, Klein, journalists and consumers of information aren't the only ones worrying about AI. Researchers like Gordon Crovitz, an executive at NewsGuard, a company that tracks online misinformation, are sounding alarms. "This is going to be the most powerful tool for spreading misinformation that has ever been on the internet," he says. "Crafting a new false narrative can now be done at dramatic scale, and much more frequently; it's like having AI agents contributing to disinformation."

As I noted 10 years ago, there doesn't seem to be much space between scientific research and science fiction. Both ask the question: what if? The answer, when it comes to AI, makes me shudder. What if, indeed.

Elayne Clift lives in Brattleboro.

More here:
Clift: Artificial intelligence leading where? | Perspective | timesargus ... - Barre Montpelier Times Argus

Artificial Intelligence generated adult content sparks huge controversy with OnlyFans models – Marca

TheRRRealist, an account known for generating attractive, lifelike women, may prove to be a threat to the popular uncensored-content platform.

The models are very elaborate CGI images, and they can even "talk" to their followers.

TheRRRealist has a Twitter account where it shares almost all its content, besides being present in DeviantArt. The account has been getting quite the buzz as it has its own payment website that offers image sets of the different models, as well as a free package.

The threat for some OnlyFans models is that if this virtual content starts to become popular, real models would be at a clear disadvantage, as these virtual models do not suffer from biological impairments, do not get tired, do not get sick, and do not age.

Most recently, Tang Yu, the artificial intelligence that NetDragon appointed as CEO, managed to increase the company's Hong Kong stock by 10%, raising the value of the company.

Back in 2018, a robot known as Michihito Matsuda ran for mayor of a Tokyo district on a promise to end corruption, and managed to come in third place with 4,013 votes.

"We believe AI is the future of corporate governance, and our appointment of Miss Tang Yu represents our commitment to truly embrace the use of artificial intelligence to transform the way we operate our business and ultimately drive our future strategic growth," explained NetDragon president Dejian Liu.

Go here to see the original:
Artificial Intelligence generated adult content sparks huge controversy with OnlyFans models - Marca

Artificial intelligence can have say in critical human decisions: Expert – Anadolu Agency | English

ANKARA

Artificial intelligence (AI) having a say in issues that are vital for humans may no longer be merely science fiction. Even in as critical a field as law, it has started to be used in pilot programs in some places around the world.

There is debate whether it is ethical for algorithms that mimic human behavior to have a voice even in courtroom decisions.

In the field of law, artificial intelligence, which is believed to help speed up litigation and automate routine work, is being tried in various pilot applications in different parts of the world, for example in China, Estonia and Malaysia.

With "robot judges" evaluating small cases in Estonia, robot mediators in Canada, artificial intelligence judges in China, and an artificial intelligence judicial system in Malaysia, it is now possible to see algorithms in the justice system.

Transparency

There are certain principles regarding the moral control of AI, Professor Ahmet Ulvi Turkbag, a lecturer at Istanbul Medipol University's Law School, told Anadolu.

"The most important of these is that AI should be transparent. It must be absolutely controllable. Because if we don't know why a decision is made, we cannot make a judgment about the correctness of that decision. This can lead to very dangerous consequences," said Turkbag.

Saying that AI has the power to make surprising decisions and therefore the decisions made by algorithms should be accessible to humans, Turkbag argued that this can be achieved with small programs called "subroutines."

He said important court rulings made by algorithms should also be auditable by human intelligence.

Manipulation, privacy concerns

Some experts worry that the algorithms are "deceptive and pose a risk to privacy and public safety."

The non-profit Center for Artificial Intelligence and Digital Policy (CAIDP) in the US has petitioned the Federal Trade Commission to stop the use of GPT-4, the new version of the OpenAI company's artificial intelligence chatbot ChatGPT.

Some industry experts are concerned about human manipulation of computer technology.

"AI should not be manipulated; this is very important. Say you asked the AI to save a human, and this person also has a pet: AI should not kill an animal while saving a man," said Turkbag.

AI decisions face a higher authority

Turkbag said that, hypothetically, if a decision made by AI is appealed and brought to a higher court, the case should be taken over by human intelligence.

"Even if we accept artificial intelligence in the first stage, it should definitely go to humans if it faces objections, the logic of law requires it," Turkbag said, adding that AI should conduct a large-scale database scan depending on the importance of the case.

See the article here:
Artificial intelligence can have say in critical human decisions: Expert - Anadolu Agency | English

Artificial intelligence could make a difference for young readers around the worldor make literacy even less equitable – Fortune

As a father of two and the board president of an education technology nonprofit, I have my sights set on any opportunity that improves the education ecosystem.

Reading boosts empathy and emotional intelligence. When a child reads, they develop these essential cognitive and social skills, which supports positive social change.

At Worldreader, we've seen how technology presents enormous opportunities to get children reading so they can reach their potential. We have brought reading on low-tech devices to 21 million people in more than 100 countries and demonstrated how digital technology can accelerate reading around the globe.

I spent years helping Microsoft and Amazon become the successes they are now, so I know there is both a Dr. Jekyll and Mr. Hyde in all technology. Generative artificial intelligence (A.I.) is no exception.

A.I. offers extraordinary ways to help readers empathize with others in profound ways. Imagine a child reading a short story about a boy who wakes up in the morning and fakes being sick to avoid going to school. His mom tells him to stay home, but she has to adjust her day and miss work. Now the reader can prompt A.I. to rewrite the story to give them the mom's perspective. The shift in perspective can build empathy in ways a typical reading experience cannot.

Now picture this: A six-year-old in Michigan reads a book about an unlikely friendship between a lion and a penguin. The arc of the story is not illustrated, but there are five blank panels where the reader can voice her understanding and perception of the story, and A.I. will illustrate it for her. This reader is not only giving her parent and teacher an immediate way to assess her comprehension based on the pictures she describes, but also interacting and playing with the text in a new way, both important aspects of becoming a strong reader.

However, many questions persist about misinformation, algorithmic bias, data access issues, ethics, and transparency. School districts from New York to Los Angeles have attempted to ban the use of ChatGPT. The free A.I. bot has been used to generate essays and to write and pass exams. Fears of plagiarism and a departure from critical thinking are just a couple of the qualms the education community has with this technology. The biggest concern may be whether the sources of its knowledge reflect equitable values.

But A.I.'s role in education will not stop growing, so we need to harness its power for good. Like the arrival of the calculator in math classes years ago, the tool is not going away. When used in the right way, it can be a friend to educators and an amazing resource for kids, especially when it comes to literacy.

So if A.I. can help children, the next question we must ask ourselves is: Which children? As we take advantage of the benefits, we must consider how this new technology can bring all of us along, not just those who are important to for-profit companies.

Consider this: Nearly 70% of 10-year-olds in low- and middle-income countries and communities cannot read and understand a simple story. These are our workers and leaders of the future. Numerous studies have shown a strong correlation between reading engagement and academic success, regardless of family income or parents' education level. That dire need can only be addressed with tomorrow's tools, including A.I.

At Worldreader, we've spent years using literacy to advance equity by providing vulnerable and underserved children with digital reading solutions. But our efforts and others' will be for naught in the absence of A.I. that is equitable. Any A.I. technology must pull from diverse datasets representative of the children we work with and a range of backgrounds and experiences. And the tools must be designed for, and delivered to, communities rich and poor, well-served and underserved.

We can't assume that for-profit companies will get this right. Meeting any vulnerable group's needs requires patience, trust, and deep respect, which takes time and resources even when there's no profit in sight. That's where nonprofit organizations can help. The industry should actively seek out nonprofits and organizations working in underserved communities and establish equitable, fair partnerships that benefit their respective missions.

A.I. will not be inclusive or reach its full potential if underserved communities are not part of the equation. The potential for this technology to dramatically improve education is enormous, but only if we develop it equitably.

David Risher is the CEO of Lyft and the board president and co-founder of Worldreader, an edtech nonprofit that gets children reading so they can reach their potential. It has brought reading on low-tech devices to 21 million people worldwide and aims to help millions more.

The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.

Read the original post:
Artificial intelligence could make a difference for young readers around the world, or make literacy even less equitable - Fortune

What are the benefits and concerns of artificial intelligence? Bay Area experts weigh in – KGO-TV

SAN JOSE, Calif. (KGO) -- Italy announced that it is temporarily banning the latest version of ChatGPT, from San Francisco-based company OpenAI, due to issues involving privacy.

This comes after an open letter from thousands of experts around the country shared other serious concerns with the new form of artificial intelligence. However, experts in the field are split on the future of AI.

"We should embrace it," Aisera co-founder Muddu Sudhakar said. "My kids are using it, your kids are using it. Schools and universities are going to use it. This is where the U.S. should run ahead, we already have the lead."

The latest versions of the AI chatbot have trillions of pieces of data to pull from and can do almost anything, from taxes and app creation to logical reasoning and much more.

Sudhakar uses it to help his customers and believes it can benefit all.

"It's going to lift everybody," Sudhakar said. "It's going to lift people from poverty. If I'm in I.T. support, customer service, I don't have to do the mundane or futile tasks, it's going to create new forms of creativity for people to build applications. It will create new start-ups, new jobs, societies will improve."

MORE: What is ChatGPT? This AI chatbot is dominating social media with its frighteningly good essays

So powerful, in fact, that even OpenAI CEO Sam Altman shared with ABC News that he has his hesitations.

"You're a little bit scared? You personally?" ABC News Reporter Rebecca Jarvis asked.

"A little bit, yeah, of course," Altman said. "I think if I said I were not, you should either not trust me or be very unhappy I'm in this job."

But this fear is not limited to him.

Italy will temporarily ban the latest version of ChatGPT over privacy concerns due to the software's ability to pull information without permission.

A recent Goldman Sachs report said the technology could lead to the loss of 300 million jobs.

MORE: AI photography is taking over social media. Why are some concerned about privacy?

Those concerns were also laid out in an open letter signed by dozens, including Elon Musk, Steve Wozniak, and San Jose State engineering professor Ahmed Banafa.

"This is just incredible to get to this stage where we're talking about where we are worried about the AI," Banafa said.

An expert in and proponent of AI, Banafa says this artificial intelligence has surpassed the human mind, and that is what makes it dangerous.

He and the other experts in the letter want AI companies to pause advancement of the technology for six months and create an independent committee to control its capabilities.

"We're not saying don't do it," Banafa said. "We're saying, slow it down, let's think about it. Because you're giving too much power to a machine, or an algorithm that starts picking the data itself. We know about the good part, but we don't know about the bad part of this wave of AI."


More:
What are the benefits and concerns of artificial intelligence? Bay Area experts weigh in - KGO-TV