Category Archives: AI
Putin: Russia will develop new AI technology to counter West – Business Insider
Russian President Vladimir Putin unveiled Russia's new plan to take on the West in developing AI. Vladimir Smirnov, Sputnik, Kremlin Pool Photo via AP
Russia is staking its claim in the AI arms race.
At the Artificial Intelligence Journey conference in Moscow on Friday, Russian President Vladimir Putin announced plans for a new national strategy for AI development to counter Western influence over the powerful technology.
"Our domestic models of artificial intelligence must reflect the entire wealth and diversity of world culture, the heritage, knowledge, and wisdom of all civilizations," Putin said.
Putin, who outlined the goals of the new strategy in broad terms, said that Russia will intensify its research into generative AI and large language models.
To achieve that, the country would scale up its supercomputing power and improve top-level AI education. Russia would also work to change laws and boost international cooperation to achieve its goals, Putin said.
Putin lamented that existing generative AI models work in "selective" or "biased" ways potentially ignoring Russian culture because they're often trained to solve tasks using English-only data sets, or data sets that are "convenient" or "favorable" to the developers of these models.
As a result, an algorithm can "tell the machine that Russia, our culture, science, music, literature simply does not exist," he said, leading to "a kind of abolition in the digital space."
English-speaking countries right now dominate AI research. The United States and the United Kingdom claim the top spots in a ranking of the highest number of significant machine learning systems, according to Stanford's Institute for Human-Centered Artificial Intelligence (HAI).
The United States has 16 significant systems. The United Kingdom has eight. Russia has just one, by HAI's account. Similarly, close to 300 authors of these systems come from the United States. Another 140 are from the United Kingdom. Only three come from Russia.
Concerns about the potential dangers AI could pose to humanity have divided even the most dedicated AI researchers, and some have publicly said the technology in the wrong hands would be problematic.
Geoffrey Hinton, the British-Canadian AI researcher named a "godfather of AI," for instance, has said he's worried about "bad actors" like Putin using the AI tools he's creating.
Putin, however, says that the Western monopoly over AI is "unacceptable, dangerous and inadmissible."
Read more:
Putin: Russia will develop new AI technology to counter West - Business Insider
Nicolas Cage on Memes, Myths, and Why He Thinks AI Is a … – WIRED
Nicolas Cage knows he's a meme. He's not happy about it. After making the mistake of googling himself a few years back, the charismatic actor discovered that his big on-screen performances had been translated into single-frame quips and supercuts, taken (like all memes, really) out of context, played for lolz, and in a manner that, frankly, makes Cage seem like a graduate of the Jim Carrey school of rubber-faced acting.
"Something like 'Nick Cage loses his shit,' where they cherry-pick meltdowns from different movies I'd made over the years," he says. "I get that it's all done for laughs, and in that context it is funny, but at the same time, there's no regard for how the character got there. There's no Act One, there's no Act Two."
This, Cage says, is not why he got into making movies. Back in the 1980s, when he was showing up in Fast Times at Ridgemont High and as the romantic lead in Valley Girl, there was no internet, no one turning him into a TikTok template. "So, as I've watched these memes grow exponentially and get turned into T-shirts and 'You don't say?' and all that stuff," Cage says, "I've just thought, Wow, I don't know how I should feel about this, because it's made me kind of frustrated and confused."
That's part of the reason Cage signed on to do his latest movie, the A24 drama Dream Scenario, in which he plays Paul Matthews, a downtrodden university professor who suddenly starts to appear in the dreams of millions of people around the world. Directed by Sick of Myself's Kristoffer Borgli, the film is a clever look at the trappings of instantaneous fame and at what it looks like when someone's fame becomes bigger than they might be themselves, something Cage, who actually changed his name and leaned into a more bombastic persona early in his career, knows a little something about.
To mark Dream Scenario's release, WIRED talked to Cage about where he's at with his meme-ification these days, his dislike of social media, and why he's going to make damn sure that no one can make an AI-generated Nick Cage after he shuffles off this mortal coil.
WIRED: Over the course of the movie, Paul struggles with who he thinks he is and who the world thinks he is, and how that's constantly shifting around him. Is that something you've had to deal with over the course of your career in terms of Nick Cage, Hollywood actor, versus Nicolas Coppola, father and human being?
View post:
Nicolas Cage on Memes, Myths, and Why He Thinks AI Is a ... - WIRED
Wikipedia founder Jimmy Wales says AI is a mess now but can … – Euronews
Wikipedia's founder Jimmy Wales tells Euronews Next about the "terrible" early stage of ChatGPT, the lesson for OpenAI, and his open-source social media platform.
ChatGPT, the wildly popular generative artificial intelligence (AI) tool from OpenAI, is currently "a mess" when it's used to write articles on Wikipedia, the platform's founder Jimmy Wales tells Euronews Next.
A Wikipedia article written today with ChatGPT-4 "is terrible and doesn't work at all," he says, because "it really misses out on a lot and it gets things wrong, and it gets things wrong in a plausible way, and it makes up sources. It's a mess."
He even goes as far as to predict that superhuman AI could take at least 50 years to achieve.
But while he believes it is possible for AI to surpass humans in the distant future, it is more likely that AI tools will continue to support intellectual activities, despite being in an early phase at the moment, he says.
The most valuable start-up in the United States, OpenAI catapulted onto the scene with its chatbot ChatGPT last year.
The technology takes instructions and questions and answers them with an eerily human-like response based on sources it gathers online. It can be used for writing essays, song lyrics, or even health advice, though it can often get the information wrong, a failure known as "hallucinating."
But even the most powerful chatbot AI start-up was thrown into chaos with the ousting of its CEO and co-founder Sam Altman last week and then his rehiring just days later after employees threatened the board they would quit en masse.
Wales said it is worrisome that this occurred at such an influential company, but that it will probably pass as if nothing happened.
If anything, he said, the company will likely get its house in order; it is also a good lesson to start-ups of all kinds "that you really do have to think, even at a very early stage, about governance, about the stability of decision making."
Despite his criticism of current generative AI models, Wales has not ruled out AI being used for Wikipedia.
He said if a tool were built to find mistakes in a Wikipedia article by comparing it to the sources it uses, it could help to iron out inaccuracies.
He even told Euronews Next that he would consider a Wikipedia venture with an open-source AI company that is freely usable to match Wikipedia's principles, but clarified there is nothing specific in the works.
However, he says that this would be a decision that would not be taken lightly.
"Most businesses, not just charities like us, would say you have to be really, really careful if you're going to put at the heart of your business a technology that's controlled by someone else, because if they go in a different direction, your whole business may be at risk," he said.
He would therefore think carefully about any partnerships but added that he was open to pilot programmes and testing models.
Wikipedia is still essential to generative AI as it sources information published online to produce content. Therefore, the online encyclopedia must be accurate and not produce bias, something that both AI and Wikipedia have been accused of.
To combat disinformation and work toward gender balance, Wikipedia relies on its own army of "Wikipedians", volunteer editors who are mostly male. Wales said that Wikipedians can spot fake websites and can easily tell whether a text was written by a human.
But bias is much harder to tackle because it can be historical; for instance, there were fewer female scientists in the 19th century and little was written about them at the time, so Wikipedians cannot write much about them now. It can also be unconscious bias, whereby a 28-year-old tech-nerd Wikipedian may have different interests from a 55-year-old mother.
Diversity is key to combating bias, something the organisation is striving to achieve.
"It's a real problem, and obviously we feel a heavy responsibility to the extent that the world depends on Wikipedia and AI models depend on Wikipedia," he said.
"We don't want to teach robots to be biased, so we want to get it right at the sort of human heart of the whole thing."
Disinformation and online hate have been a long-standing grievance for Wales, and one that has led to blows with X (formerly Twitter) boss Elon Musk, who offered $1 billion (€915 million) for Wikipedia to change its name to "Dickipedia".
Wales never responded to Musk's comment because, he said, it did not need an answer. "Everybody looks at that and says, 'Are you 12 years old, Elon?'" he told Euronews Next.
The $1 billion offer came after Wales criticised Musk for laying off moderation staff at X, which the Wikipedia chief said had increased "all kinds of serious racism and toxic behaviour" on the platform and was likely to affect advertising revenue.
"You can't both run a toxic platform and expect advertisers to give you money, so that might change things," Wales said, adding that he and Musk are friendly and do text, and that the exchanges are pleasant.
He said he still uses X but has deleted the app from his phone, which has made his life "much better" as he can do other things that are less toxic.
Wales has launched his own social network platform which he says has a completely different approach to X.
Last week at Web Summit, Wales announced the beta version of his project called Trust Cafe, a new online community he says will give power to its most trusted members.
First revealed in September, he describes it as his experiment in a friendly and open-source social media platform that he is not taking too seriously as a business venture.
He called it a cross between X and Reddit, where you can discuss certain topics but are not limited to a certain number of characters and there is not one sole owner of a discussion.
"Reddit is both fantastic and horrible. Whereas we're really pursuing a model that's much more... the governance is across everything," said Wales.
While he admits online hate and toxic behaviour will always occur within some users, he is optimistic.
"If you've got basically sensible people who have enough power, you'll get a basically sensible platform, and there's always going to be somebody crazy. There's always going to be some debate that turns a little ugly. That's just human nature," said Wales.
"But as long as you can keep the main thrust of it in a healthy channel, then you can have like a really interesting kind of open platform where people can really genuinely engage with ideas."
Original post:
Wikipedia founder Jimmy Wales says AI is a mess now but can ... - Euronews
Fortifying Bonds and Exploring AI: Highlights from Kazakh-Kyrgyz … – Astana Times
BISHKEK - A delegation from Kazakhstan, led by Maulen Ashimbayev, the Chairperson of the Senate, the upper chamber of the Kazakh Parliament, participated in the Kazakh-Kyrgyz Youth Forum in Bishkek on Nov. 17-19. The forum focused on the youth's role in strengthening relations between the two countries, artificial intelligence (AI), and the creative industry.
A delegation from Kazakhstan, led by Maulen Ashimbayev, the Chairperson of the Senate, an upper chamber of the Kazakh Parliament, participated in the Kazakh-Kyrgyz Youth Forum in Bishkek on Nov. 17-19. Photo credit: The Astana Times.
The forum aimed to unite young people from both countries, fostering the exchange of ideas, support, and recognition of young citizens' achievements across various fields. It also sought to identify and address urgent problems faced by the youth.
Addressing the forum participants, Ashimbayev outlined the historical aspects of the friendly relations between the two countries, emphasizing the importance of preserving the connection across generations.
"The relations between the two countries' leaders, based on brotherhood and trust, greatly strengthen cooperation in all areas. Our common task is to fortify the friendship of the new generation, our youth," he said.
The National Volunteer Network of Kazakhstan presented a collection of the best volunteer practices in the country to colleagues from the Kyrgyz Republic during the Kazakh-Kyrgyz Youth Forum. Photo credit: The Astana Times.
According to Ashimbayev, the ability to unite and collectively address common challenges amid today's geopolitical instability is crucial. Understanding the common centuries-old history and learning from it is essential to achieving this.
In turn, Nurlanbek Shakiyev, the Chairman of the Jogorku Kenesh (Parliament) of the Kyrgyz Republic, highlighted the significant role of youth in implementing social, political, and economic reforms, noting that many young individuals are contributing to various fields and achieving success.
Ashimbayev outlined the historical aspects of the friendly relations between the two countries, emphasizing the importance of preserving the connection across generations. Photo credit: The Astana Times.
He asserted that young people are the future, emphasizing the importance of preserving ancestral values, particularly their native language, alongside mastering modern technologies.
Commencing the first panel session, Almas Ishimbayev, founder of an IT company in Bishkek, delved into the topic of AI, highlighting the emergence of ChatGPT and distinguishing between generative and non-generative artificial intelligence.
"Non-generative means when AI generates information utilizing existing data without creating new content, extracting information from the internet and delivering it in a millisecond. However, generative AI is an intelligence that creates something new from existing information. It processes information from various sources, recognizes it from the internet, and presents us with something innovative," he said.
Adilkhan Kopabay, Chairman of the Board of the Situational Analytical Center of the Fuel and Energy Complex of Kazakhstan, showcased their AI project, MoonAI. Photo credit: The Astana Times.
Despite the notion that AI has the potential to replace humans entirely, Ishimbayev emphasized that its dependence on humans will persist for the next decade. This is because AI still learns based on existing information and needs to understand concepts like marketing, morality, and the intricate patterns of human lives.
I believe that AI will not replace people in their positions, but individuals who know how to use AI will replace those who do not. Nevertheless, we must acknowledge that AI is a part of our lives, and everyone should learn how to use it for the greater good and adapt to its integration, he said.
Adilkhan Kopabay, Chairman of the Board of the Situational Analytical Center of the Fuel and Energy Complex of Kazakhstan, showcased their AI project, MoonAI, during which he demonstrated its practical applications in the countrys energy industry.
Kopabay illustrated how the tool can forecast actual energy use over 24 hours and predict oil prices three years out. Significantly, the tool addresses the critical challenge of predicting emergencies in Kazakhstan.
"Our product incorporates comprehensive data on oil, gas, energy, and uranium residues. We have designed key dashboards tailored to our leadership's needs, focusing on efficiency and integration with the Ministry of Emergency Situations. The tool offers real-time insights, and the data is dynamic, providing a visual representation of the region's plan and deviations," he said.
Discussing the role of AI, Kopabay emphasized that it is more about the accuracy percentage and proposals. While acknowledging that AI can occasionally make incorrect predictions, he stressed that its reliability depends on many data points. He likened it to predicting snowfall, where different metrics may not align but should be correlated for accuracy.
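MoonAI's internals are not described in the article, so purely as a hedged sketch: a minimal "seasonal naive" baseline for the kind of 24-hour load forecast mentioned above could look like the following. The function name and every parameter here are invented for illustration and have no connection to the actual product.

```python
def seasonal_naive_forecast(hourly_load, horizon=24, season=24, lookback=3):
    """Predict each future hour as the average of the same hour on the
    previous `lookback` days -- a common baseline for hourly load data.

    hourly_load: historical hourly readings, oldest first.
    Returns a list of `horizon` predicted values.
    """
    n = len(hourly_load)
    forecast = []
    for step in range(horizon):
        # Collect the reading at the same hour 1, 2, ... `lookback` days back.
        samples = [hourly_load[n - k * season + step]
                   for k in range(1, lookback + 1)
                   if 0 <= n - k * season + step < n]
        forecast.append(sum(samples) / len(samples))
    return forecast
```

Real systems like the one described would layer far richer models and data feeds on top, but a baseline of this shape is a standard yardstick against which any fancier forecast's "accuracy percentage" is judged.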
The event also featured discussions of engaging young individuals in creative pursuits, supporting youth-led initiatives and social projects, exchanging insights into entrepreneurship and youth volunteering, and endorsing startups and projects that utilize AI innovatively.
Go here to see the original:
Fortifying Bonds and Exploring AI: Highlights from Kazakh-Kyrgyz ... - Astana Times
Rad AI Unveils Omni Unchanged, a Revolutionary Addition to Omni … – PR Newswire
Omni Unchanged exemplifies the 'Speak Less, Say More' philosophy in Rad AI Omni Reporting, empowering radiologists to tackle complex exams more efficiently, decreasing stress and repetitive workload.
SAN FRANCISCO, Nov. 26, 2023 /PRNewswire/ -- Rad AI, the fastest-growing radiologist-led AI company and winner of AuntMinnie's Best New Radiology Software award for 2023, has announced the launch of its latest groundbreaking feature in Omni Reporting: Omni Unchanged. This remarkable feature is set to revolutionize radiology reporting by enabling radiologists to dictate complex follow-up exams up to 50% faster, using up to 90% fewer words. Rad AI previously announced several initial features of Omni Reporting at its Launch Day event in late September, which has already sparked significant interest within the radiology community.
Omni Unchanged extracts stable and unchanged findings from prior reports and inserts them into the proper location within the radiologist's existing preferred report template. Hence, radiologists only need to dictate new or updated findings. For exams that haven't changed, radiologists only have to dictate "Omni Unchanged" even if the exam has multiple complex findings. These features mark a significant step in Rad AI's mission to streamline radiology reporting processes, enabling radiologists to operate at the top of their licenses with greater efficiency.
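The carry-forward mechanism described above can be sketched in miniature. This is not Rad AI's actual implementation; the function, section names, and data shapes below are invented solely to illustrate the general idea of filling non-dictated template sections from the prior report.

```python
def build_report(template_sections, prior_findings, new_dictation):
    """Toy sketch of carrying unchanged findings forward.

    template_sections: ordered section names in the radiologist's template.
    prior_findings: section -> text from the prior report.
    new_dictation: section -> newly dictated text (only changed sections).
    Sections the radiologist does not re-dictate are filled from the prior report.
    """
    report = {}
    for section in template_sections:
        if section in new_dictation:
            report[section] = new_dictation[section]      # newly dictated
        elif section in prior_findings:
            report[section] = prior_findings[section]     # unchanged, carried forward
        else:
            report[section] = ""                          # nothing known yet
    return report
```

In this toy model, dictating nothing new for a section plays the role of saying "Omni Unchanged" for it; the real product additionally has to extract and locate findings from free-text prior reports, which is where the generative AI does the heavy lifting.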
Built for speed, performance, and accuracy, Omni Reporting helps radiologists create more of the report automatically.
"Having been a practicing radiologist and user of speech recognition for radiology reporting for over two decades, I am excited to lead this transformative project. With Omni Reporting, we aim to help radiologists refocus on their crucial task of analyzing imaging exams and providing expert interpretation. Omni Unchanged takes Rad AI's commitment to automating repetitive tasks, reducing cognitive stress, and enhancing radiologist efficiency to a whole new level," said William Boonn MD, Chief Medical Officer of Rad AI.
Omni Reporting builds on Rad AI's leadership in generative AI and track record of performant cloud-native modern radiology software to enable a "Speak Less, Say More" experience for radiologists, saving radiologists time and reducing cognitive load. Omni Reporting was designed to enable a seamless transition from other reporting products. Personal and system templates can be easily imported from your existing reporting system, with full support for your current microphone setup and hardware, lowering the barrier to entry with minimal need for training.
Built from the ground up for speed, performance, and accuracy, Omni Reporting helps radiologists create more of the report automatically in the radiologist's style and language while generating individually customized impressions using Omni Impressions. Omni Reporting's open architecture enables seamless integration with imaging AI vendors and PACS companies and allows radiology practices to develop and manage their own real-time analytics.
John Paulett, Director of Engineering at Rad AI, and the chief architect of Omni Reporting added: "The development of Omni Reporting from the ground up with modern architecture and the latest in generative AI, sets Omni Reporting as the future-proof platform for radiology. We focus on speed, ease of use, and interoperability, enabling seamless adoption into existing IT frameworks and future AI applications."
Rad AI's introduction of Omni Unchanged at RSNA 2023 continues its legacy of innovation in radiology, following the success of its previous solutions, including Rad AI Omni Impressions, Continuity, and Nexus. Omni Unchanged is expected to redefine the standard for radiology reporting software, reinforcing Rad AI's position as a leader in radiology AI solutions.
"Rad AI is uniquely positioned to transform the radiology reporting landscape," said CEO Doktor Gurson. "Having been the pioneer in introducing large language models (LLMs) to radiology in 2018 with our Rad AI Omni Impression solution, we're now expanding the use of our technology across various facets of the radiology reporting workflow and are harnessing the power of generative AI in truly novel and transformative ways to empower radiologists. By ushering in the next generation of radiology reporting and taking radiology reporting from stagnation to innovation, we will improve the daily work and lives of radiologists and patients across the globe."
Those interested in partnering with Rad AI for Omni Reporting should book a demo for a remaining slot at radai.com/rsna2023. For any questions, contact [emailprotected].
About Rad AI
Rad AI is the fastest-growing radiologist-led AI company and the pioneer in generative AI in radiology since 2018, having now saved radiologists nearly one billion dictated words. In addition to being named AuntMinnie's "Best New Radiology Software" in 2023 for Omni Reporting and "Best New Radiology Vendor" in 2021, CB Insights listed Rad AI as one of the most innovative digital health startups and one of the world's most promising private AI companies in its rankings.
Founded by the youngest radiologist in U.S. history, Rad AI is known for the first and most successful generative AI solution in radiology. Since then, Rad AI has seen rapid adoption of its AI platform, which is already in use at 8 of the 10 largest private radiology practices in the US and trusted by thousands of radiologists. Rad AI uses state-of-the-art machine learning to streamline repetitive tasks for radiologists and automate workflow for health systems, which yields substantial time savings, alleviates burnout, and creates more time to focus on patient care.
Learn more about Rad AI at http://www.radai.com or on Twitter @radai.
SOURCE Rad AI
See the original post here:
Rad AI Unveils Omni Unchanged, a Revolutionary Addition to Omni ... - PR Newswire
Chinese EV maker Nio sees AI, robots replacing 30% of workforce by 2027 – South China Morning Post
Nio, one of China's top three builders of premium electric vehicles (EVs), aims to reduce its workforce by a third by 2027 as it rapidly replaces workers with robots.
Earlier this month, the company said it had cut 10 per cent of its workforce to boost efficiency and stay competitive.
"We want to resort to AI technologies to largely reduce reliance on skilled workers and technicians, and hence save more labour costs," Ji said on Friday. "If 80 per cent of our decisions [in manufacturing] can be made by AI, it will enable us to reduce 50 per cent of our managerial positions in 2025."
Industrial robots could help the company cut the use of workers on the production lines by 30 per cent between 2025 and 2027, he added.
Nio had a workforce of about 7,000 at the end of 2022, according to data from corporate registry website Qichacha.
Nio envisions full automation, or a "labour-free" system, at its manufacturing sites in the future, banking on advanced AI and robotic technologies, Ji said. He admitted, however, that it was difficult to give a time frame.
[Video: China World Robot Expo 2023: Your next bartender could be a humanoid]
Nio delivered 126,067 vehicles in the first 10 months of 2023, up 36.3 per cent year on year. Its president, Qin Lihong, said in a speech at the Guangzhou Auto Show on November 17 that 40 per cent year-on-year sales growth was not fast enough to reflect the company's design and manufacturing strength.
The carmaker operates two plants, both in Hefei, capital of eastern Anhui province. The first factory has an annual production capacity of 150,000 units on one shift, while the second is capable of building 300,000 vehicles annually, also on a single shift. A single shift normally requires 1,000 workers.
"Nio already has a big production capacity and its manufacturing technique is advanced enough to support high growth," said Chen Jinzhu, CEO of consultancy Shanghai Mingliang Auto Service. "The company needs to design and produce more vehicles that can appeal to more Chinese drivers to bolster sales."
[Video: Japan start-up develops Gundam-like robot with US$2.7 million price tag]
At the second plant near Hefei Xinqiao airport, 756 robots are used to achieve 100 per cent automation in one of the manufacturing processes.
Nio aims to turn the factory into the world's smartest with advanced equipment, flexible processes and efficient supply-chain management, Ji said.
Nio's rival, Guangzhou-based Xpeng, said in April that it would fine-tune its designs and improve efficiency next year, hoping to slash costs by 25 per cent to stay ahead of the competition.
The efficiency drive and cost-cutting programme would put unprofitable Xpeng on the road to generating positive cash flow by 2025, the carmaker's president Brian Gu said.
Read the rest here:
Chinese EV maker Nio sees AI, robots replacing 30% of workforce by 2027 - South China Morning Post
AI can help to flag students struggling with mental health – University World News
UNITED KINGDOM
The long-term adjustments have been challenging too. As a package, the traditional university experience was one teeming with physical and social interactions, whether through study groups, seminars or peer activities. Now, the shift from on-campus to virtual and hybrid learning environments has firmly disrupted that experience.
The implications are concerning. In a 2022 Student Minds survey in the United Kingdom, more than half (57%) of respondents self-reported a mental health issue.
As the pressure to meet deadlines and achieve grades contributes to increased stress, the consequences of mental health issues can also lead to poor academic performance, dropping out of university and even self-harm.
It is clear that more needs to be done to address the growing mental health crisis in universities, and ensure that students have access to the right support as soon as it is needed.
Tell-tale signs of ill health
Even pre-pandemic, mental health issues were plaguing higher education. Perhaps it's unsurprising that only 12% of students think their university handles the issue of mental health well.
With the situation still ongoing, easier, timely access to effective mental health services has never been more important. Now, with the assistance of artificial intelligence and data intelligence, this is possible.
Location services, powered by network automation, can offer important student data and insights to pre-emptively flag when an individual might be experiencing mental distress.
With the help of AI-driven technology, universities can quickly identify withdrawn behaviour, often a tell-tale sign of mental unwellness. If a student is spending most of their time confined to their accommodation, or continuously missing lessons, location services will pick it up. By leveraging this data, universities can then offer early intervention, whether from counsellors or mental health support teams.
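The absence-pattern flagging described above can be sketched as a toy attendance check. The function name, window, and threshold below are illustrative assumptions, not any real university or vendor system, and a production version would of course operate only on opted-in data.

```python
from datetime import date, timedelta

def flag_withdrawn_students(attendance, window_days=14, threshold=0.5, today=None):
    """Flag students whose attendance rate over a recent window falls
    below a threshold -- a simple proxy for withdrawn behaviour.

    attendance: dict mapping student id -> set of dates on which the
    student was seen on campus (e.g. from opted-in location services).
    Returns a sorted list of student ids to refer for early support.
    """
    today = today or date.today()
    window = {today - timedelta(days=i) for i in range(window_days)}
    # Only count weekdays as expected contact days.
    expected = {d for d in window if d.weekday() < 5}
    flagged = []
    for student, days_seen in attendance.items():
        rate = len(days_seen & expected) / len(expected)
        if rate < threshold:
            flagged.append(student)
    return sorted(flagged)
```

The point of keeping the rule this simple is transparency: a student (or parent) who opts in can be told exactly what triggers a referral, which matters for the privacy concerns discussed later in the piece.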
Personalised responses
Very much a hot topic the world over, advanced AI is streaming its way into personal, social and enterprise-level activities. Higher education facilities should now be looking at their own digital transformation progress and assessing how AI can enhance their IT infrastructures for the benefit of their students. By using location services, universities can ensure the appropriate help is offered at a greater speed once a pattern of absence is identified.
At the same time, AI can personalise recommended resources and activities based on the interests and preferences of individual students.
It can also communicate in ways the student is likely to respond to, such as via chatbot, email or phone call. Bespoke services can be a more effective way for education facilities to improve student engagement and how well students respond to the support on offer.
Not only can AI spot when a student may be withdrawn, but it can also help students flourish by providing more flexible ways of working for different learning styles, whether that's interactive, audio, visual or just more collaborative. Each of these, if effective, can reduce the stress of studying and improve learning opportunities and outcomes.
Data privacy
Of course, any technology service provided through the use of data can bring about privacy concerns. That's why offering an opt-in approach should be the way forward. Students (and potentially their parents or carers) will have a clear choice over how their data is used, and those who do opt in will not feel monitored during their university experience.
Parents or carers of students leaving for university will also have greater peace of mind knowing additional support and welfare oversight is offered. By embracing AI within their IT services, universities can enhance the level of support provided and students still have the freedom to accept or decline.
Clear and transparent communication from universities over how a student's data will be used is also crucial to effectiveness. When students are aware that their data will be used to help them on their education journey, rather than to penalise them, there's a good chance they'll be on board. At the same time, ensuring clear policies and safeguarding practices is crucial to preventing unauthorised access to sensitive information.
A proactive approach
The current mental health crisis sweeping the country's higher education sector cannot be ignored or denied. Education facilities have a duty of care to ensure their students are given access to the support they need, both to handle the pressures of university and to flourish in their studies. And with the intelligence of today's technology, it makes perfect sense to leverage it.
Location services through network automation can transform IT services for the better. Universities can not only identify, respond to and deal with issues as they arise, but they can foresee patterns in behaviour to prevent situations from developing and worsening.
While a proactive rather than reactive approach won't necessarily prevent mental health issues from surfacing, it at least puts practices in place to help if or when they're needed.
Jamie Pitchforth is head of UK strategic business at networking hardware company Juniper Networks.
Original post:
AI can help to flag students struggling with mental health - University World News
Former Google engineer who was pardoned by Donald Trump revives AI church – Business Insider
Anthony Levandowski was cleared by Donald Trump of charges relating to the theft of technology secrets from Google. Justin Sullivan / Getty
Anthony Levandowski, a pioneer of self-driving cars and controversial Silicon Valley figure, announced the return of his AI-dedicated church in an episode of Bloomberg's AI IRL podcast.
Levandowski started his "Way of the Future" church in 2015 while he was working as an engineer on Google's self-driving project Waymo.
While the original church was shut down a few years later, Levandowski's new venture already has "a couple thousand people" who are trying to build a "spiritual connection" with AI, he said, per Bloomberg.
"Here we're actually creating things that can see everything, be everywhere, know everything, and maybe help us and guide us in a way that normally you would call God," Levandowski said, adding that his aim was to help people gain a deeper understanding of AI and allow more people to have a say in how the technology is used.
"How does a person in rural America relate to this? What does this mean for their job?" he said. "Way of the Future is a mechanism for them to understand and participate and shape the public discourse as to how we think technology should be built to improve you."
Levandowski's church first came under the spotlight in 2017 when he became embroiled in a high-profile court case after he was accused of stealing trade secrets.
Levandowski later pleaded guilty and was sentenced to 18 months in prison. The engineer was pardoned in 2021 by the outgoing president at the time, Donald Trump.
Levandowski's official pardon said he had "paid a significant price for his actions and plans to devote his talents to advance the public good."
The former Googler is now the CEO of Pollen Mobile, a decentralized mobile network he founded in 2021.
Originally posted here:
Former Google engineer who was pardoned by Donald Trump revives AI church - Business Insider
AI Technology Threatens Educational Equity for Marginalized Students – Progressive.org
The fall semester is well underway, and schools across the United States are rushing to implement artificial intelligence (AI) in ways that bring about equity, access, and efficiency for all members of the school community. Take, for instance, Los Angeles Unified School District's (LAUSD) recent decision to implement Ed.
Ed is an AI chatbot meant to replace school advisors for students with Individual Education Plans (IEPs), who are disproportionately Black. Announced on the heels of a national uproar about teachers being unable to read IEPs due to lack of time, energy, and structural support, Ed might seem to many like a sliver of hope: the silver bullet needed to address the chronic mismanagement of IEPs and ongoing disenfranchisement of Black students in the district. But for Black students with IEPs, AI technologies like Ed might be more akin to a nightmare.
Since the pandemic, public schools have seen a proliferation of AI technologies that promise to remediate educational inequality for historically marginalized students. These technologies claim to predict behavior and academic performance, manage classroom engagement, detect and deter cheating, and proactively stop campus-based crimes before they happen. Unfortunately, because anti-Blackness is often baked into the design and implementation of these technologies, they often do more harm than good.
Proctorio, for example, is a popular remote proctoring platform that uses AI to detect perceived behavior abnormalities by test takers in real time. Because the platform employs facial detection systems that fail to recognize Black faces more than half of the time, Black students have an exceedingly hard time completing their exams without triggering the faulty detection systems, which results in locked exams, failing grades, and disciplinary action.
While being falsely flagged by Proctorio might induce test-taking anxiety or result in failed courses, the consequences for inadvertently triggering school safety technologies are much more devastating. Some of the most popular school safety platforms, like Gaggle and GoGuardian, have been known to falsely identify discussions about LGBTQ+ identity, race-related content, and language used by Black youth as dangerous or in violation of school disciplinary policies. Because many of these platforms are directly connected to law enforcement, students who are falsely identified are contacted by police both on campus and in their homes. Considering that Black youth endure the highest rates of discipline, assault, and carceral contact on school grounds and are six times more likely than their white peers to have fatal encounters with police, the risk of experiencing algorithmic bias can be life threatening.
These examples speak to the dangers of educational technologies designed specifically for safety, conduct, and discipline. But what about education technology (EdTech) intended for learning? Are the threats to student safety, privacy, and academic wellbeing the same?
Unfortunately, the use of educational technologies for purposes other than discipline seems to be the exception, not the rule. A national study examining the use of EdTech found an overall decrease in the use of the tools for teaching and learning, with over 60 percent of teachers reporting that the software is used to identify disciplinary infractions.
What's more, Black students and students with IEPs endure significantly higher rates of discipline not only from being disproportionately surveilled by educational technologies, but also from using tools like ChatGPT to make their learning experience more accommodating and accessible. This could include using AI technologies to support executive functioning, access translated or simplified language, or provide alternative learning strategies.
To be sure, the stated goals and intentions of educational technologies are laudable, and speak to our collective hopes and dreams for the future of schoolsplaces that are safe, engaging, and equitable for all students regardless of their background. But many of these technologies are more likely to exacerbate educational inequities like racialized gaps in opportunity, school punishment, and surveillance, dashing many of these idealistic hopes.
To confront the disparities wrought by racially biased AI, schools need a comprehensive approach to EdTech that addresses the harms of algorithmic racism for vulnerable groups. There are several ways to do this.
One possibility is recognizing that EdTech is not neutral. Despite popular belief, educational technologies are not unbiased, objective, or race-neutral, and they do not inherently support the educational success of all students. Oftentimes, racism becomes encoded from the onset of the design process, and can manifest in the data set, the code, the decision making algorithms, and the system outputs.
Another option is fostering critical algorithmic literacy. Incorporating critical AI curriculum into K-12 coursework, offering professional development opportunities for educators, or hosting community events to raise awareness of algorithmic bias are just a few of the ways schools can support bringing students and staff up to speed.
A third avenue is conducting algorithmic equity audits. Each year, the United States spends nearly $13 billion on educational technologies, with the LAUSD spending upwards of $227 million on EdTech in the 2020-2021 academic year alone. To avoid a costly mistake, educational stakeholders can work with third-party auditors to identify biases in EdTech programs before launching them.
Regardless of the imagined future that Big Tech companies try to sell us, the current reality of EdTech for marginalized students is troubling and must be reckoned with. For LAUSD, the second largest district in the country and home of the fourteenth largest school police force in California, the time to tackle the potential harms of AI systems like Ed the IEP Chatbot is now.
The rest is here:
AI Technology Threatens Educational Equity for Marginalized Students - Progressive.org
Exclusive: Germany, France and Italy reach agreement on future AI … – Reuters
AI (Artificial Intelligence) letters are placed on a computer motherboard in this illustration taken June 23, 2023. REUTERS/Dado Ruvic/Illustration/File Photo
BERLIN, Nov 18 (Reuters) - France, Germany and Italy have reached an agreement on how artificial intelligence should be regulated, according to a joint paper seen by Reuters, which is expected to accelerate negotiations at the European level.
The three governments support "mandatory self-regulation through codes of conduct" for so-called foundation models of AI, which are designed to produce a broad range of outputs. But they oppose "un-tested norms."
"Together we underline that the AI Act regulates the application of AI and not the technology as such," the joint paper said. "The inherent risks lie in the application of AI systems rather than in the technology itself."
The European Commission, the European Parliament and the EU Council are negotiating how the bloc should position itself on this topic.
The paper explains that developers of foundation models would have to define model cards, which are used to provide information about a machine learning model.
"The model cards shall include the relevant information to understand the functioning of the model, its capabilities and its limits and will be based on best practices within the developer community," the paper said.
"An AI governance body could help to develop guidelines and could check the application of model cards," the joint paper said.
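For illustration only, a minimal model card along the lines the paper describes might look like the following sketch. The joint paper does not prescribe a schema; every field name below is hypothetical, loosely modeled on common model-card practice in the machine learning community.

```python
# A minimal, hypothetical model card for a foundation model, expressed
# as a plain Python dictionary. Field names are illustrative only and
# are not taken from the Franco-German-Italian joint paper.
model_card = {
    "model_name": "example-foundation-model",  # hypothetical model
    "developer": "Example Lab",                # hypothetical developer
    "intended_uses": ["text summarization", "question answering"],
    "out_of_scope_uses": ["medical or legal advice"],
    "training_data_summary": "Public web text up to 2023 (illustrative).",
    "capabilities": ["multilingual text generation"],
    "known_limitations": [
        "may produce factually incorrect output",
        "performance degrades on low-resource languages",
    ],
}

def describe(card: dict) -> str:
    """Render a short human-readable summary of a model card."""
    return (
        f"{card['model_name']} by {card['developer']}: "
        f"{len(card['known_limitations'])} documented limitations"
    )

print(describe(model_card))
```

The point of such a card, as the paper frames it, is that a governance body could check whether the stated capabilities and limits are actually documented, without regulating the underlying model technology itself.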
Initially, no sanctions should be imposed, the paper said.
If violations of the code of conduct are identified after a certain period of time, however, a system of sanctions could be set up.
Germany's Economy Ministry, which is in charge of the topic together with the Ministry of Digital Affairs, said laws and state control should not regulate AI itself, but rather its application.
Digital Affairs Minister Volker Wissing told Reuters he was very pleased an agreement had been reached with France and Italy to limit only the use of AI.
"We need to regulate the applications and not the technology if we want to play in the top AI league worldwide," Wissing said.
State Secretary for Economic Affairs Franziska Brantner told Reuters it was crucial to harness the opportunities and limit the risks.
"We have developed a proposal that can ensure a balance between both objectives in a technological and legal terrain that has not yet been defined," Brantner said.
As governments around the world seek to capture the economic benefits of AI, Britain hosted its first AI safety summit in November.
The German government is hosting a digital summit in Jena, in the state of Thuringia, on Monday and Tuesday that will bring together representatives from politics, business and science.
Issues surrounding AI will also be on the agenda when the German and Italian governments hold talks in Berlin on Wednesday.
Reporting by Andreas Rinke; Writing by Maria Martinez; Editing by Mike Harrison, Barbara Lewis and Diane Craft
Our Standards: The Thomson Reuters Trust Principles.
The rest is here:
Exclusive: Germany, France and Italy reach agreement on future AI ... - Reuters