Category Archives: Artificial General Intelligence

U. community discusses integration of AI into academic, points to … – The Brown Daily Herald

Provost Francis J. Doyle III identified the intersection of artificial intelligence and higher education as a University priority in an Aug. 31 letter to the community titled "Potential impact of AI on our academic mission." Doyle's address comes at a time of uncertainty as educational institutions struggle to make sense of the roles and regulations of artificial intelligence tools in academia.

Doyle's letter begins by zooming in on generative AI tools such as ChatGPT, which soared in popularity after its debut in late November of last year. The program, an open-access online chatbot, raked in over 100 million monthly users within the first two months of its launch, according to data from Pew Research Center.

"There is no shortage of public analysis regarding the ways in which the use of generative artificial intelligence tools (open-access tools that can generate realistic text, computer code and other content in response to prompts from the user) provide both challenges and opportunities in higher education," Doyle wrote in the letter.

"Exploring the use of AI in ways that align with Brown's values has been a topic of discussion among our senior academic leaders for several months," he continued.

Doyle did not prescribe University-wide AI policies in the letter but encouraged instructors to offer clear, unambiguous guidelines about AI usage in their courses. He also provided a variety of resources for students seeking guidelines on citing AI-generated content, as well as how to use AI as a research tool.

"As we identify the ways in which AI can enhance academic activities, we must also ensure these tools are understood and used appropriately and ethically," Doyle wrote.

The contention presented by Doyle is one mirrored by educators and administrators nationwide: How can academic institutions strike a balance between using AI as a learning tool and regulating it enough to avoid misuse?

"The upsides to AI tools such as ChatGPT that are often touted include improved student success, the ability to tailor lessons to individual needs, immediate feedback for students and better student engagement," Doyle wrote in a message to The Herald. "But it is important for students to understand the inherent risks associated with any open-access technology, in terms of privacy, intellectual property ownership and more."

Doyle told The Herald that he anticipates prolonged discussions with academic leadership, faculty and students as the University continues to monitor the evolution of AI tools and discovers innovative applications to improve learning outcomes and inform research directions.

Michael Vorenberg, associate professor of history, is finding creative ways to bring AI into the classroom. On the first day of his weekly seminar, HIST 1972A: American Legal History, 1760-1920, Vorenberg spoke candidly with his students about general attitudes regarding AI in education and the opportunities for exploration these developments afford.

"Most of what educators are hearing about are the negative sides of generative AI programs," Vorenberg wrote in a message to The Herald. "I am also interested in how generative AI might be used as a teaching tool."

Vorenberg outlined two broad potential uses for AI in his class: the examination of sources generated by ChatGPT, allowing students to probe the appropriateness of the retrieved documents from a historian's perspective, and the intentional criticism of those generated sources, understanding how a historian's perspective could have produced a stronger source.

"The underlying assumption behind the exercise is that even a moderately skilled historian can do better at this sort of task than a generative AI program," Vorenberg explained. "Until (this) situation changes, we who teach history have an opportunity to use generative AI to give concrete examples of the ways that well-trained human historians can do history better than AI historians."

Given the University's large pool of students interested in pursuing computer science (The Herald's recent first-year poll shows computer science as the top indicated concentration for the class of 2027), Brown has the potential to shape the future of AI.

Doyle told The Herald that the University is well-situated "to contribute our creativity (and) our entrepreneurial spirit to making an impact" as researchers continue to strengthen these tools.

Jerry Lu '25, who is concentrating in both computer science and economics, has obsessively followed the growing momentum behind OpenAI, ChatGPT and developments in automation.

Lu believes there are two ways the University can best support its students in navigating artificial intelligence: one from an educational perspective, and another from a more career-oriented view.

In terms of education, Lu said he hopes that the University will approach AI not just through computer science classes but also through a sociological or humanities lens, to equip all students with the necessary skills to address how AI will undoubtedly affect society.

Lu also pointed to the restructured Center for Career Exploration as a potential resource for preparing students to enter a workforce heavily influenced by AI.

"The new Career LAB should be cognizant of how these new technologies are going to impact careers," Lu said. "Offering guidance on how students should think about AI and how they can navigate (it) or use (it) to their advantage, I think that that would be really key."

When asked how universities should engage with AI, ChatGPT focused on the pursuit of a common good.

"Universities have a critical role to play in the responsible development and application of artificial intelligence," it replied. "They should focus on research, education, ethics, collaboration and societal impact to ensure that AI technologies benefit humanity as a whole while minimizing potential harms."

Sofia Barnett is a University News editor overseeing the faculty and higher education beat. She is a sophomore from Texas studying history, politics and nonfiction writing.

Read more:

U. community discusses integration of AI into academic, points to ... - The Brown Daily Herald

As regulators talk tough, tackling AI bias has never been more urgent – VentureBeat

The rise of powerful generative AI tools like ChatGPT has been described as this generation's "iPhone moment." In March, the OpenAI website, which lets visitors try ChatGPT, reportedly reached 847 million unique monthly visitors. Amid this explosion of popularity, the level of scrutiny placed on gen AI has skyrocketed, with several countries acting swiftly to protect consumers.

In April, Italy became the first Western country to block ChatGPT on privacy grounds, only to reverse the ban four weeks later. Other G7 countries are considering a coordinated approach to regulation.

The UK will host the first global AI regulation summit in the fall, with Prime Minister Rishi Sunak hoping the country can drive the establishment of guardrails on AI. Its stated aim is to ensure AI is developed and adopted safely and responsibly.

Regulation is no doubt well-intentioned. Clearly, many countries are aware of the risks posed by gen AI. Yet all this talk of safety is arguably masking a deeper issue: AI bias.

Although the term AI bias can sound nebulous, it's easy to define. Also known as algorithm bias, AI bias occurs when human biases creep into the data sets on which the AI models are trained. This data, and the subsequent AI models, then reflect any sampling bias, confirmation bias and human biases (against gender, age, nationality or race, for example) and cloud the independence and accuracy of any output from the AI technology.

As gen AI becomes more sophisticated, impacting society in ways it hadn't before, dealing with AI bias is more urgent than ever. This technology is increasingly used to inform tasks like face recognition, credit scoring and crime risk assessment. Clearly, accuracy is paramount with such sensitive outcomes at play.

Examples of AI bias have already been observed in numerous cases. When OpenAI's DALL-E 2, a deep learning model used to create artwork, was asked to create an image of a Fortune 500 tech founder, the pictures it supplied were mostly white and male. When asked if well-known blues singer Bessie Smith influenced gospel singer Mahalia Jackson, ChatGPT could not answer the question without further prompts, raising doubts about its knowledge of people of color in popular culture.

A study conducted in 2021 around mortgage loans discovered that AI models designed to determine approval or rejection did not offer reliable suggestions for loans to minority applicants. These instances prove that AI bias can misrepresent race and gender, with potentially serious consequences for users.

AI that produces offensive results can be attributed to the way the AI learns and the dataset it is built upon. If the data over-represents or under-represents a particular population, the AI will repeat that bias, generating even more biased data.

For this reason, it's important that any regulation enforced by governments doesn't view AI as inherently dangerous. Rather, any danger it poses is largely a function of the data it's trained on. If businesses want to capitalize on AI's potential, they must ensure the data it is trained on is reliable and inclusive.

To do this, greater access to an organization's data for all stakeholders, both internal and external, should be a priority. Modern databases play a huge role here, as they have the ability to manage vast amounts of user data, both structured and semi-structured, and have capabilities to quickly discover, react, redact and remodel the data once any bias is discovered. This greater visibility and manageability over large datasets means biased data is at less risk of creeping in undetected.

Furthermore, organizations must train data scientists to better curate data while implementing best practices for collecting and scrubbing data. Taking this a step further, the data used to train algorithms must be made open and available to as many data scientists as possible to ensure that more diverse groups of people are sampling it and can point out inherent biases. In the same way modern software is often open source, so too should appropriate data be.
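One simple, concrete form this curation can take is an automated check of how well each group is represented in a training set before a model ever sees it. The sketch below is a minimal illustration in Python; the file name, column name and reference shares are hypothetical placeholders, not drawn from any system mentioned in this article.

```python
# Minimal sketch: flag over- and under-represented groups in a training set.
# The CSV path, column name, reference shares and tolerance are assumptions.
import pandas as pd

REFERENCE_SHARES = {"group_a": 0.5, "group_b": 0.3, "group_c": 0.2}  # assumed population baseline
TOLERANCE = 0.05  # flag deviations larger than five percentage points

def representation_report(df: pd.DataFrame, column: str) -> pd.DataFrame:
    observed = df[column].value_counts(normalize=True)
    rows = []
    for group, expected in REFERENCE_SHARES.items():
        share = float(observed.get(group, 0.0))
        rows.append({
            "group": group,
            "observed_share": round(share, 3),
            "expected_share": expected,
            "flagged": abs(share - expected) > TOLERANCE,
        })
    return pd.DataFrame(rows)

if __name__ == "__main__":
    training_data = pd.read_csv("training_data.csv")  # hypothetical training set
    print(representation_report(training_data, "demographic_group"))
```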

Organizations have to be constantly vigilant and appreciate that this is not a one-time action to complete before going into production with a product or a service. The ongoing challenge of AI bias calls for enterprises to look at incorporating techniques that are used in other industries to ensure general best practices.
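Part of that ongoing vigilance can be automated. As a rough sketch (column names and the ratio threshold are illustrative assumptions, not tied to the mortgage study cited above), comparing a model's approval rates across groups is one check that can run continuously once a system is in production.

```python
# Minimal sketch: compare a deployed model's approval rates across groups.
# Column names and the 0.8 rule-of-thumb threshold are illustrative assumptions.
import pandas as pd

def approval_rate_by_group(decisions: pd.DataFrame,
                           group_col: str = "demographic_group",
                           decision_col: str = "approved") -> pd.Series:
    # The mean of a 0/1 decision column is the approval rate per group.
    return decisions.groupby(group_col)[decision_col].mean()

def disparate_impact_ratio(rates: pd.Series) -> float:
    # Ratio of the lowest to the highest group approval rate; values well
    # below 1.0 suggest some groups are treated very differently.
    return float(rates.min() / rates.max())

if __name__ == "__main__":
    decisions = pd.read_csv("model_decisions.csv")  # hypothetical log of model outputs
    rates = approval_rate_by_group(decisions)
    print(rates)
    if disparate_impact_ratio(rates) < 0.8:
        print("Warning: large approval-rate disparity between groups.")
```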

Blind tasting tests borrowed from the food and drink industry, red team/blue team tactics from the cybersecurity world or the traceability concept used in nuclear power could all provide valuable frameworks for organizations in tackling AI bias. This work will help enterprises to understand the AI models, evaluate the range of possible future outcomes and gain sufficient trust with these complex and evolving systems.

In previous decades, talk of regulating AI was arguably putting the cart before the horse. How can you regulate something whose impact on society is unclear? A century ago, no one dreamt of regulating smoking because it wasn't known to be dangerous. AI, by the same token, wasn't something under serious threat of regulation; any sense of its danger was reduced to sci-fi films with no basis in reality.

But advances in gen AI and ChatGPT, as well as advances toward artificial general intelligence (AGI), have changed all that. Some national governments seem to be working in unison to regulate AI, while paradoxically, others are jockeying for position as AI regulators-in-chief.

Amid this hubbub, it's crucial that AI bias doesn't become overly politicized and is instead viewed as a societal issue that transcends political stripes. Across the world, governments, alongside data scientists, businesses and academics, must unite to tackle it.

Ravi Mayuram is CTO of Couchbase.

Read the rest here:

As regulators talk tough, tackling AI bias has never been more urgent - VentureBeat

The Race to Lead the AI Revolution: Tech Giants, Cloud Titans and … – Medium

Artificial intelligence promises to transform industries and generate immense economic value over the coming decades. Tech giants, cloud computing leaders and semiconductor firms are fiercely competing to provide the foundational AI infrastructure and services fueling this revolution. In this high-stakes battle to dominate the AI sphere, these companies are rapidly advancing hardware, software, cloud platforms, developer tools and applications. For investors, understanding the dynamic competitive landscape is key to identifying leaders well-positioned to capitalize on surging AI demand.

The world's largest technology companies view leadership in artificial intelligence as vital to their futures. AI permeates offerings from Amazon, Microsoft, Google, Facebook and Apple as they fight for market share. The cloud has become the primary arena for delivering AI capabilities to enterprise customers. Amazon Web Services, Microsoft Azure and Google Cloud Platform offer integrated machine learning, data analytics and AI services through their cloud platforms.

The tech titans are also racing to advance AI assistant technologies like Alexa, Siri and Cortana for consumer and business use. IoT ecosystems that accumulate data to train AI depend on cloud infrastructure. Tech firms battle to attract top AI engineering talent and acquire promising startups. Government scrutiny of their AI competitive tactics is growing. But the tech giants continue aggressively investing in R&D and new partnerships to expand their AI footprints.

The major cloud providers have emerged as gatekeepers for enterprise AI adoption. AWS, Microsoft Azure, Google Cloud and IBM Cloud aggressively market integrated machine learning toolkits, neural network APIs, automated ML and other services that remove AI complexities. This strategy drives more customers to their clouds to access convenient AI building blocks.

Cloud platforms also offer vast on-demand computing power and storage for AI workloads. Firms like AWS and Google Cloud tout specialized AI accelerators on their servers. The cloud battleground has expanded to wearable, mobile and edge devices with AI capabilities. Cloud leaders aim to keep customers within their ecosystems as AI proliferates.

Graphics processing units (GPUs) from Nvidia, AMD and Intel currently dominate AI computing. But rising challengers like Cerebras, Graphcore and Tenstorrent are designing specialized processing chips just for deep learning. Known as AI accelerators, these chips promise faster training and inference than repurposed GPUs. Startups have attracted huge investments to develop new accelerator architectures targeted at AI workloads.

Big tech companies are also muscling into the AI chip space. Google's Tensor Processing Units power many internal workloads. Amazon has designed AI inference chips for Alexa and AWS. Microsoft relies on FPGA chips from Xilinx but is also developing dedicated AI silicon. As AI proliferates, intense competition in AI-optimized semiconductors will shape the future landscape.

Much AI innovation comes from open source projects like TensorFlow, PyTorch, MXNet and Keras. Tech giants liberally adopt each other's frameworks into their own stacks. This open ecosystem drives rapid advances through collaboration between intense competitors. But tech firms then differentiate by offering proprietary development environments, optimized runtimes and additional services around the open source cores.

Leading corporate sponsors behind frameworks like Facebook's PyTorch and AWS's Gluon intend to benefit by steering standards and features. However, generous licensing enables wide adoption and growth. The symbiotic relationship between open source and proprietary AI has greatly accelerated overall progress.

Beyond core technology purveyors, many other players want a slice of the AI market. Consulting firms sell AI strategy and implementation services. Cloud data warehouse vendors feature AI-driven analytics. Low-code platforms incorporate AI-powered automation. Cybersecurity companies inject AI into threat detection. AI success will ultimately require an entire ecosystem integrating hardware, software, infrastructure, tools and expertise into multi-layered technology stacks.

Current AI capabilities remain narrow and require extensive human guidance. But rapid advances in foundational machine learning approaches, computing power and neural network design point to a future Artificial General Intelligence that mimics human-level capacities. Tech giants are investing today in moonshot projects like robotics, quantum computing and neuro-symbolic AI to fuel the next paradigm shifts.

Government regulation will also shape AI's evolution, balancing innovation with ethics. Despite uncertainties, AI will undoubtedly transform business and society over the next decade through visionary efforts underway today across the technology landscape.

For investors, AI represents an enormously valuable mega-trend with a long runway for growth. While hype exceeds reality today, practical AI adoption is accelerating. The tech giants have tremendous balance sheet resources to sustain investment. But they also face anti-trust scrutiny that could advantage smaller players.

Seeking exposure across the AI ecosystem is ideal to benefit from both large established players and potential rising challengers. AI promises outsized returns for those investors savvy enough to identify leaders powering this transformative technology through its period of exponential growth.

Follow this link:

The Race to Lead the AI Revolution: Tech Giants, Cloud Titans and ... - Medium

Decoding Opportunities and Challenges for LLM Agents in … – Unite.AI

We are seeing a progression of generative AI applications powered by large language models (LLMs), from prompts to retrieval augmented generation (RAG) to agents. Agents are being talked about heavily in industry and research circles, mainly for the power this technology provides to transform enterprise applications and provide superior customer experiences. There are common patterns for building agents that enable first steps towards artificial general intelligence (AGI).

In my previous article, we saw a ladder of intelligence of patterns for building LLM-powered applications. It starts with prompts that capture the problem domain and use the LLM's internal memory to generate output. With RAG, we augment the prompt with external knowledge searched from a vector database to control the outputs. Next, by chaining LLM calls, we can build workflows to realize complex applications. Agents take this to the next level by automatically determining how these LLM chains are to be formed. Let's look at this in detail.
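To make the first two rungs of that ladder concrete, here is a minimal Python sketch contrasting a plain prompt with a retrieval-augmented prompt. The call_llm helper and VectorDB class are stand-ins for whatever LLM API and vector store you use; they are assumptions for illustration, not a specific framework's API.

```python
# Minimal sketch of the ladder: plain prompting vs. retrieval augmented generation (RAG).
# `call_llm` and `VectorDB` are hypothetical stand-ins, not a real provider's API.
from typing import List

def call_llm(prompt: str) -> str:
    """Placeholder for a call to an LLM API (e.g. a chat-completion endpoint)."""
    raise NotImplementedError("wire this to your LLM provider")

class VectorDB:
    """Placeholder for a vector database holding embedded documents."""
    def search(self, query: str, top_k: int = 3) -> List[str]:
        raise NotImplementedError("wire this to your vector store")

def answer_with_prompt(question: str) -> str:
    # Rung 1: rely only on the LLM's internal memory.
    return call_llm(f"Answer the question: {question}")

def answer_with_rag(question: str, db: VectorDB) -> str:
    # Rung 2: augment the prompt with external knowledge retrieved from a
    # vector database, which grounds and constrains the output.
    passages = db.search(question, top_k=3)
    context = "\n".join(passages)
    return call_llm(f"Using only this context:\n{context}\n\nAnswer the question: {question}")
```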

A key pattern with agents is that they use the language understanding power of the LLM to make a plan for how to solve a given problem. The LLM understands the problem and gives us a sequence of steps to solve it. However, it doesn't stop there. Agents are not a pure support system that provides you with recommendations on solving the problem and then passes the baton to you to take the recommended steps. Agents are empowered with tooling to go ahead and take the action. Scary, right?

If we ask an agent a basic question like this:

Human: Which company did the inventor of the telephone start?

Following is a sample of thinking steps that an agent may take.

Agent (THINKING):

Agent (RESPONSE): Alexander Graham Bell co-founded AT&T in 1885

You can see that the agent follows a methodical way of breaking down the problem into subproblems that can be solved by taking specific actions. The actions here are recommended by the LLM, and we can map them to specific tools that implement these actions. We could enable a search tool for the agent such that when it sees that the LLM has proposed search as an action, it calls this tool with the parameters provided by the LLM. The search here is on the internet but could just as well be redirected to an internal knowledge base like a vector database. The system now becomes self-sufficient and can figure out how to solve complex problems by following a series of steps. Frameworks like LangChain and LlamaIndex give you an easy way to build these agents and connect them to tools and APIs. Amazon recently launched its Bedrock Agents framework, which provides a visual interface for designing agents.

Under the hood, agents follow a special style of prompting the LLM that makes it generate an action plan. The above Thought-Action-Observation pattern is popular in a type of agent called ReAct (Reasoning and Acting). Other types of agents include MRKL and Plan & Execute, which mainly differ in their prompting style.
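As a rough illustration of that Thought-Action-Observation loop, the sketch below shows how an orchestration layer might parse the action proposed by the model and dispatch it to a registered tool. The prompt format, call_llm helper and web_search tool are hypothetical; frameworks such as LangChain and LlamaIndex handle this plumbing for you.

```python
# Minimal ReAct-style loop sketch: the LLM proposes an Action, the runtime runs the
# matching tool and feeds the Observation back in. All helpers here are hypothetical.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("wire this to your LLM provider")

def web_search(query: str) -> str:
    raise NotImplementedError("wire this to a search API or internal knowledge base")

TOOLS = {"search": web_search}

def run_agent(question: str, max_steps: int = 5) -> str:
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        reply = call_llm(
            "Think step by step. Reply either 'Action: <tool>: <input>' "
            "or 'Final Answer: <answer>'.\n" + transcript
        )
        if reply.startswith("Final Answer:"):
            return reply.removeprefix("Final Answer:").strip()
        if reply.startswith("Action:"):
            _, tool_name, tool_input = (part.strip() for part in reply.split(":", 2))
            observation = TOOLS[tool_name](tool_input)  # execute the chosen tool
            transcript += f"{reply}\nObservation: {observation}\n"
    return "No answer found within the step limit."
```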

For more complex agents, the actions may be tied to tools that cause changes in source systems. For example, we could connect the agent to a tool that checks an employee's vacation balance and applies for leave in an ERP system. Now we could build a nice chatbot that interacts with users and, via a chat command, applies for leave in the system. No more complex screens for applying for leave, just a simple, unified chat interface. Sounds exciting, right?

Now what if we have a tool that invokes transactions on stock trading using a pre-authorized API? You build an application where the agent studies stock changes (using tools) and makes decisions for you on buying and selling stock. What if the agent sells the wrong stock because it hallucinated and made a wrong decision? Since LLMs are huge models, it is difficult to pinpoint why they make some decisions, so hallucinations are common in the absence of proper guardrails.

While agents are fascinating, you have probably guessed how dangerous they can be. If they hallucinate and take a wrong action, it could cause huge financial losses or major issues in enterprise systems. Hence, Responsible AI is becoming of utmost importance in the age of LLM-powered applications. The principles of Responsible AI around reproducibility, transparency and accountability try to put guardrails on decisions taken by agents and suggest risk analysis to decide which actions need a human-in-the-loop. As more complex agents are being designed, they need more scrutiny, transparency, and accountability to make sure we know what they are doing.
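One concrete form such a guardrail can take is to gate high-risk tool calls behind a human approval step. The sketch below is illustrative only; the tool names and risk tier are assumptions, and the stock-trading and leave-application tools described above would be obvious candidates for the high-risk bucket.

```python
# Minimal sketch of a human-in-the-loop guardrail for agent tool calls.
# Tool names and the risk tier are illustrative assumptions, not a real API.
HIGH_RISK_TOOLS = {"place_stock_trade", "apply_for_leave"}

def require_human_approval(tool_name: str, tool_input: str) -> bool:
    """Block until a human reviews the proposed action (console prompt as a stand-in)."""
    answer = input(f"Agent wants to run {tool_name}({tool_input!r}). Approve? [y/N] ")
    return answer.strip().lower() == "y"

def execute_tool(tool_name: str, tool_input: str, tools: dict) -> str:
    # Low-risk tools run immediately; high-risk tools wait for a human decision.
    if tool_name in HIGH_RISK_TOOLS and not require_human_approval(tool_name, tool_input):
        return "Action rejected by human reviewer."
    return tools[tool_name](tool_input)
```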

The ability of agents to generate a path of logical steps with actions gets them really close to human reasoning. Empowering them with more powerful tools can give them superpowers. Patterns like ReAct try to emulate how humans solve problems, and we will see better agent patterns that are relevant to specific contexts and domains (banking, insurance, healthcare, industrial, etc.). The future is here, and the technology behind agents is ready for us to use. At the same time, we need to pay close attention to Responsible AI guardrails to make sure we are not building Skynet!

Read the rest here:

Decoding Opportunities and Challenges for LLM Agents in ... - Unite.AI

Google looks to make Artificial Intelligence as simple as Search – Times of India

SAN FRANCISCO: Google is now doing to AI what it did to the internet. "We are taking the sophistication of the AI model and putting it behind a simple interface called chat which then lets you open it up to every department," Google Cloud's CEO Thomas Kurian said. Duet AI in Workspace and Vertex AI - both recently launched products by Google - are expected to revolutionise the market, he added. Kurian was speaking with some members of the press last week on the sidelines of the three-day Google Cloud Next - a mega event at Moscone Center in San Francisco from August 29.

"AI can be used in virtually every department, every business function in a company, and every industry. Retailers are testing it for shopping and commerce. Telecommunication companies are using it for customer service. Banks are using it to synthesise financial statements for their wealth managers. We expect the number of people who can use AI to grow just like when we simplified access to the internet and broadened it," he added.

Vertex AI Search and Conversation, which was made available during the Cloud Next event, allows developers with minimum machine learning knowledge to take data, customise it, build an interactive chatbot or search engine within it, and deploy the apps within a few hours.

Aparna Pappu, VP and general manager of Google Workspace, said Duet AI has your back. "It can help write emails and make presentations using different sources and summarise what was said in a virtual meeting and even attend the meet on the user's behalf," she said in another media interaction during the event.

Kurian said that generative AI is moving technology out of the IT department to many other functions in companies. "When we look at users of generative AI - marketing departments, HR, supply chain organisations - none of them were talking to us earlier, but at this conference, many are from non-engineering backgrounds... from different business lines because they want to understand how they can use generative AI technology," he added.

Google has provided an AI platform that protects data and ensures that it does not leak out. "We have capability in Vertex so data can be kept and any feedback or changes to the model are private to you," he added. Kurian said they have analysed a million users, understood their behaviour, and found that an average user of Duet can typically write 30-40% more emails, with more than 50% of the content generated by the model.

Read the original here:

Google looks to make Artificial Intelligence as simple as Search - Times of India

Heated massages, AI counselling and films on the go: Will … – Euronews

LG presented a vision of what autonomous vehicles (AVs) could be like in the future, and it's all about having more "me time" on the move.

It's been a stressful day at work, so you decide to linger in the car and take a breath before getting out and facing the task of preparing dinner or tackling the household chores.

You recline your seat, listening to the soothing sounds of nature while it gives you a heated massage. Or maybe you opt for counselling from the onboard artificial intelligence (AI) to wind down and clear your head after a hectic day.

Compared with your current daily commute sitting in stop-start traffic, the concept might seem lightyears from reality. However, it is just one vision of what autonomous driving could look like proposed by South Korean electronics giant LG.

The technology behind autonomous vehicles (AVs) is currently geared towards the mechanics of getting the car to move and navigate independently while the onboard experience of passengers is, for now at least, relegated to a secondary talking point.

LG, on the other hand, is now actively turning its focus to the sensory elements of being inside the autonomous cars of the future, believing the perspective should shift to the opportunities that AVs will give to improve the driving experience.

"There have been a lot of discussions about future mobility in terms of physical transformation and the role of the car. However, despite many discussions, it is still unclear how the changes will exactly happen," William Cho, the companys CEO, said this week at IAA Mobility - one of the worlds largest trade fairs of its kind - in Munich.

"As we all know, the mobility industry is evolving dramatically, changing our traditional beliefs on cars. Our in-depth customer research directed us to see mobility through the lens of customer experience, focusing on expanding space in the car and quality of time spent on the road".

The company's idea? To redefine the car from a means of travel to a "personalised digital cave" for its occupant.

To date, billions have been invested in developing the technology to produce robot vehicles controlled and piloted by AI-powered computer systems, but prototypes so far all require human inputs.

AVs in the US are subject to standards set by SAE International, formerly known as the Society of Automotive Engineers, with level 0 being no automation and level 5 being the highest rating, full vehicle autonomy in all conditions and locations.

Tesla's driver assistance system Autopilot, for example, which offers partial automation, is classified at level 2. The US carmaker's basic Autopilot, which is available in all models, offers lane centring and assisted steering, while more advanced systems, like Enhanced Autopilot and Full Self-Driving Capability, have functions to help park, stop and summon the vehicle.

Earlier this summer, Mercedes-Benz announced its Drive Pilot system had been given level 3 approval, attaining the highest SAE rating for a commercial vehicle to date.

Unlike level 2, cars classified as level 3 can handle most driving situations but again, a driver must intervene to make inputs to avoid potentially dangerous incidents.

Last month, Cruise, an arm of US automaker General Motors, was granted a licence in California - along with Alphabet-owned company Waymo - to expand its existing fleet of self-driving taxis in San Francisco and operate on a 24/7 basis.

Unlike commercial vehicles, these taxis are operating at level 4 - in other words, near-complete autonomy. They are programmed to drive in a preset area - known as geofencing - in which they can negotiate their environment through a combination of cameras, sensors, machine learning algorithms and artificial intelligence (AI), determining their location, real-time information on pedestrians and traffic and how each is likely to behave.

If a difficult circumstance arises, a human operator is able to step in remotely to guide or stop the vehicle.

And difficulties do arise. Just 10 days after it was granted its latest licence, Cruise was asked to reduce its fleet following a series of accidents, including a collision with a fire engine.

According to data from the US National Highway Traffic Safety Administration (NHTSA), self-driving Tesla vehicles have also been involved in 736 crashes in the US since 2019, resulting in 17 known fatalities.

Despite the rollout of services like Cruise and Tesla's Autopilot, and the major investment in research, development and testing by the automotive industry, it's unlikely a level 5 vehicle will be on the market anytime soon.

Cho believes, however, that electrification will only accelerate the shift to autonomous driving.

"Today's mobility is shifting towards software-defined vehicles [SDVs]. This means social mobility will transform into highly sophisticated electronic devices and can be seen as one of moving space to provide new experiences," he said.

LG's vision for these mobile experiences is theoretical for now, but the company plans to design and produce technologies for future AVs based on three core themes collectively known as "Alpha-able": Transformable, Explorable and Relaxable.

For the first, LG predicts that cars will become personalised digital caves, spaces that will be able to easily adapt to suit different purposes and occasions. It could be a restaurant to dine in with your partner, a home office on wheels where you can make business deals in private or even recline and watch a film in a cinema on wheels.

For the second theme, LG is aiming to incorporate augmented reality (AR) and advanced AI to improve the interactions with the vehicle; whether this be voice assistants who recommend content based on the duration of the determined route to your destination or interactive windscreens made from OLED displays that show information about your location and journey.

And of course, the driving experience should be relaxing, through sensory stimuli such as films, massages, meditative music and so on delivered through the car's infotainment system.

While level 5 AVs are yet to materialise, LG says it is already at work on the necessary technology to achieve its three-pronged objectives, including opening a new factory in Hungary in a joint venture with Magna International to produce e-powertrains, the power source of EVs.

"We strongly believe future mobility should focus on the mission to deliver another level of customer experience. LG, with innovative mobility solutions, is more than committed to this important mission," Cho said.

More here:

Heated massages, AI counselling and films on the go: Will ... - Euronews

This Week in AI: Deepfakes, Vertical Horizons and Smarter Assistance – PYMNTS.com

Is it Moore's Law, or more's law?

Anyone keeping an eye on the generative artificial intelligence (AI) landscape could be forgiven for confusing the two.

This, as another week has gone by, and with it another hyper-rapid clip of advances in the commercialization of generative AI solutions, and even a new executive order from California Governor Gavin Newsom around the need for regulation of the innovative technology.

Were it any other technology, the rapid pace of change we are seeing within AI would require at least a year or more to make it to market.

Already, after China became the first major market economy last month to pass regulations policing AI, the nation's biggest tech firms debuted their own adherent products just weeks later.

And as generative AI technology continues to add more fuel to its rocket ship trajectory, these are the stories and moonshots that PYMNTS has been tracking.

Generative AI can generate, well, anything. And while the possibilities are endless, they also run the gamut from widely positive and productively impactful, to dangerous and geared toward criminal goals. After all, genAI is a tool, and in the absence of a firm regulatory framework, the utility of a tool depends entirely on the hand that wields it.

That's why Google has announced a new policy mandating that advertisers for the upcoming U.S. election disclose when the ads they wish to display across any of Google's platforms (excluding YouTube) have been manipulated or created using AI.

Meta Platforms, the parent company of Instagram and Facebook, and X, formerly known as Twitter, both of which have faced allegations of spreading political misinformation, have not yet announced any specific rules around AI-generated ad content.

Complicating matters somewhat is the fact that PYMNTS Intelligence has found there doesn't yet exist a truly foolproof method to detect and expose AI-generated content.

"One of the questions that is immediately raised [around AI] is how do you draw the line between human-generated and AI-generated content," John Villasenor, professor of electrical engineering, law, public policy and management at UCLA and faculty co-director of the UCLA Institute for Technology, Law and Policy, explained to PYMNTS on Friday (Sept. 8).

And as generative AI tools are increasingly leveraged by bad actors to fool ID authorization protocols and scam unsuspecting consumers, it is becoming incumbent on organizations to upgrade their own defenses with AI capabilities. Phishing attacks alone have seen a 150% increase year over year as a result of new AI-driven techniques.

The technology is already proving to be both a blessing and a hindrance for payments security, and as reported here on Tuesday (Sept. 5), payments firm ThetaRay recently raised $57 million to boost its AI-powered financial crime detection capabilities.

While the artificial element of AI has its darker side, it is the intelligence aspect of the technology that enterprises and platforms want to capitalize on and integrate.

Apple is reportedly spending millions of dollars a day building out its generative AI capabilities across several product teams, including its voice assistant Siri, and there exists an attractive white space opportunity for AI to make today's smart assistants a whole lot smarter.

Chipmaker Qualcomm is working with Meta to make that company's Llama 2-based AI implementations available on smartphones and PCs, and Qualcomm's CEO said on Tuesday (Sept. 5) he sees AI as potentially reviving the smartphone market, where global sales are at their lowest levels in a decade.

Elsewhere, video communications company Zoom announced that it is making its own generative AI assistant free to paid users, while the buzzy, well-funded AI startup Anthropic on Thursday (Sept. 7) introduced a paid plan for the Pro version of its AI assistant, Claude.

Not to be outdone, customer experience management platform Sprinklr has integrated its AI platform with Google Cloud's Vertex AI in order to let retail companies elevate contact center efficiency with generative AI capabilities that support service agents.

This, while Casey's General Stores announced on Wednesday (Sept. 6) that the convenience retailer is turning to conversational voice AI ordering technology in an ongoing push to gain share from quick-service restaurants (QSRs).

IBM also announced on Thursday (Sept. 7) that it is releasing new enhancements to its AI platform, watsonx, and giving developers a preview next week at the company's TechXchange event in Las Vegas.

And IBM isn't the only tech company hosting a developer conference. Generative AI pioneer OpenAI announced Wednesday (Sept. 6) that its first DevDay developer conference will take place this November.

Generative AI is also getting utilized for specialized purposes.

CFOs are increasingly tapping the tool to help optimize their working capital and treasury management approaches, while consumer brand data platform Alloy.ai on Thursday (Sept. 7) announced the addition of new predictive AI features to its own forecasting and supply chain solution.

And over in the healthcare sector, the industry is reportedly allocating over a tenth of its annual spend (10.5%) to AI and machine learning innovations.

As for what the health industry hopes to achieve with this investment? Hopefully, the cure to its zettabyte-sized data fragmentation problems.

Continue reading here:

This Week in AI: Deepfakes, Vertical Horizons and Smarter Assistance - PYMNTS.com

Dr Ben Goertzel – A.I. Wars: Google Fights Back Against OpenAI’s … – London Real

2023 may well be the year we look back on as a time of significant change with regards to the exponential growth of artificial intelligence. AI platforms and tools are starting to have a major impact on our daily lives and virtually every conceivable industry is starting to sit up and take notice.

While the doom mongers might be concerned that these superintelligent machines pose a genuine threat to humanity, and concern grows about our future here on planet earth, many experts point to the enormous potential and benefits such sophisticated technology could have on the world.

Just recently, the co-founder of Google DeepMind, Mustafa Suleyman said that he believes within the next five years, everybody is going to have their own AI-powered personal assistant as the technology becomes cheaper and more widespread.

While on the other hand, "Godfather of AI" Geoffrey Hinton quit his job at Google because he's concerned about the rate of improvement in AI development and what this means for society as a whole.

One thing is for certain, the world we live in is going to change drastically in the coming years. Thankfully, I'm able to call upon one of the smartest human beings I know, someone who is not only at the forefront of this shift, but who also cares deeply about the ethical, political and social ramifications of AI development, and is focussed on the goal of creating benevolent AI systems.

Dr Ben Goertzel is a cross-disciplinary scientist, futurist, author and entrepreneur, who has spent the best part of his working life focused on creating benevolent superhuman artificial general intelligence (AGI).

In fact, Ben has been credited with popularising the term AGI in our mainstream thinking and has published a dozen scientific books, 150 technical papers, and numerous journal articles, making him one of the world's foremost experts in this rapidly expanding field.

In 2017, Ben founded SingularityNET, which has become the world's leading decentralised AI marketplace, with the goal of creating a decentralised, democratic, inclusive and beneficial AGI.

At SingularityNET, Ben's goal is to create an AGI that is not dependent on any central entity, is accessible to anyone, and is not restricted to the narrow goals of a single corporation or even a single country.

The platform is an open and decentralised network of AI services on a blockchain, where developers publish their services to the network and anyone with an internet connection can use them.

SingularityNET's latest project is Zarqa, a supercharged, intelligent, neural-symbolic large language model on a massive scale that promises to not only take on OpenAI's ChatGPT, but go much, much further.

Such advanced neural-symbolic AI techniques will revolutionise and disrupt every industry, taking a giant step towards AGI.

"I've come to the conclusion that to make decentralised AGI really work, we have to launch something that's way smarter than ChatGPT and launch that on a decentralised infrastructure."

In a broader sense, Ben does of course concede that there are risks in building machines that are capable of learning anything and everything, including how to reprogram themselves to become an order of magnitude more intelligent than any human.

"I think the implications of superintelligence are huge and hard to foresee. It's like asking nomads living in early human tribes what civilisation is going to be like. They could foresee a few aspects of it, but to some extent, you just have to discover when you get there."

Moreover, Ben highlights that a more pressing concern is the risk that selfish people and big business will use AI to exert their own greed and control over other people. It's a fascinating conundrum, and there is so much to consider, something that Ben has spent more time than most thinking about.

Ben truly believes that our focus should be on building AI systems that make the world a more compassionate, more just, and more sustainable place right now and moving forward into the future.

I really enjoy sitting down for these chats with Ben, there's so much to learn, and if you're interested in the technology that is shaping our future world or looking for an investment opportunity, then make sure to tune in. The economic potential of AI is huge and over the next decade is expected to generate multiple trillions of dollars.

"I'm optimistic about the potential for beneficial AGI, and decentralised is important because centralisation of control tends to bring with it some narrow motivational system separate from the ethics of what's best for everyone."

View original post here:

Dr Ben Goertzel - A.I. Wars: Google Fights Back Against OpenAI's ... - London Real

A Race to Extinction: How Great Power Competition Is Making Artificial Intelligence Existentially Dangerous – Harvard International Review

Everything dies, baby, that's a fact. And, if the world cannot manage the current race to superhuman artificial intelligence between great powers, everything may die much sooner than expected.

The past year has witnessed an explosion in the capabilities of artificial intelligence systems. The bulk of these advances have occurred in generative AI systems that produce novel text, image, audio, or video content from human input. The American company OpenAI took the world by storm with its public release of the ChatGPT large language model (LLM) in November 2022. In March, it released an updated version of ChatGPT powered by the more powerful GPT-4 model. Microsoft and Google have followed suit with Bing AI and Bard, respectively.

Beyond the world of text, generative applications Midjourney, DALL-E, and Stable Diffusion produce unprecedentedly realistic images and videos. These models have burst into the public consciousness rapidly. Most people have begun to understand that generative AI is an unparalleled innovation, a type of machine that possesses capacities (natural language generation and artistic production) long thought to be sacrosanct domains of human ability.

But generative AI is only the beginning. A team of Microsoft AI scientists recently released a paper arguing that GPT-4, arguably the most sophisticated LLM yet, is showing the sparks of artificial general intelligence (AGI): an AI that is as smart or smarter than humans in every area of intelligence, rather than simply in one task. They argue that "[b]eyond its mastery of language, GPT-4 can solve novel and difficult tasks that span mathematics, coding, vision, medicine, law, psychology and more, without needing any special prompting." In these multiple areas of intelligence, GPT-4 is strikingly close to human-level performance. In short, GPT-4 appears to presage a program that can think and reason like a human. Half of surveyed AI experts expect an AGI in the next 40 years.

AGI is the holy grail for tech companies involved in AI development, primarily the field's leaders, OpenAI and Google subsidiary DeepMind, because of the unfathomable profits and world-historical glory that would come with being the first to develop human-level machine intelligence.

The private sector, however, is not the only relevant actor.

Because leadership in AI offers advantages both in economic competitiveness and military prowess, great powers, primarily the United States and China, are racing to develop advanced AI systems. Much ink has been spilled on the risks of the military applications of AI, which have the potential to reshape the strategic and tactical domains alike by powering autonomous weapons systems, cyberweapons, nuclear command and control, and intelligence gathering. Many politicians and defense planners in both countries believe the winner of the AI race will secure global dominance.

But the consequences of such a race reach potentially far beyond who wins global hegemony. The perception of an AI arms race is likely to accelerate the already-risky development of AI systems. The pressure to outpace adversaries by rapidly pushing the frontiers of a technology that we still do not fully understand or fully control, without commensurate efforts to make AI safe for humans, may well present an existential risk to humanity's continued existence.

The dangers of arms races are well-established by history. Throughout the late 1950s, American policymakers began to fear that the Soviet Union was outpacing the U.S. in deployment of nuclear-capable missiles. This ostensible missile gap pushed the U.S. to scale up its ballistic missile development to catch up to the Soviets.

In the early 1960s, it became clear the missile gap was a myth. The United States, in fact, led the Soviet Union in missile technology. However, just the perception of falling behind an adversary contributed to a destabilizing buildup of nuclear and ballistic missile capabilities, with all its associated dangers of accidents, miscalculations, and escalation.

Missile gap logic is rearing its ugly head again today, this time with regard to artificial intelligence, which could be more dangerous than nuclear weapons. China's AI efforts are raising fears among American officials, who are concerned about falling behind. New Chinese leaps in AI inexorably produce flurries of warnings that China is on its way to dominating the field.

The reality of such a purported AI gap is complicated. Beijing does appear to lead the U.S. in military AI innovation. China also leads the world in AI academic journal citations and commands a formidable talent base. However, when it comes to the pursuit of AGI, China seems to be the laggard. Chinese companies' LLMs are one to three years behind their American counterparts, and OpenAI set the pace for generative models. Furthermore, the Biden administration's 2022 export controls on advanced computer chips cut China off from a key hardware prerequisite for building advanced AI.

Whoever is ahead in the AI race, however, is not the most important question. The mere perception of an arms race may well push companies and governments to cut corners and eschew safety research and regulation. For AI a technology whose safety relies upon slow, steady, regulated, and collaborative development an arms race may be catastrophically dangerous.

Despite dramatic successes in AI, humans still cannot reliably predict or control its outputs and actions. While research focused on AI capabilities has produced stunning advancements, the same cannot be said for research in the field of AI alignment, which aims to ensure AI systems can be controlled by their designers and made to act in a way that is compatible with humanity's interests.

Anyone who has used ChatGPT understands this lack of human control. It is not difficult to circumvent the program's guardrails, and it is far too easy to encourage chatbots to say offensive things. When it comes to more advanced models, even if designers are brilliant and benevolent, and even if the AI pursues only its human-chosen ultimate goals, there remains a path to catastrophe.

Consider the following thought experiment about how AGI may be deployed. A human-level or superhuman intelligence is programmed by its human creators with a defined, benign goal: say, develop a cure for Alzheimer's, or increase my factory's production of paperclips. The AI is given access to a constrained environment of instruments: for instance, a medical lab or a factory.

The problem with such deployment is that, while humans can program AI to pursue a chosen ultimate end, it is infeasible that each instrumental, or intermediate, subgoal that the AI will pursue (think acquiring steel before it can make paperclips) can be defined by humans.

AI works through machine learning: it trains on vast amounts of data and learns, based on that data, how to produce desired outputs from its inputs. However, the process by which AI connects inputs to outputs (the internal calculations it performs under the hood) is a black box. Humans cannot understand precisely what an AI is learning to do. For example, an AI trained to pick strawberries might instead have learned to pick the nearest red object and, when released into a different environment, pick both strawberries and red peppers. Further examples abound.
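The strawberry example can be reproduced at toy scale. In the sketch below (entirely synthetic data, for illustration only), a classifier is trained in an environment where being red and roundish perfectly identifies strawberries; once red peppers appear at deployment time, the shortcut the model learned picks them too.

```python
# Toy sketch of how a model can learn a shortcut ("pick red things") instead of
# the intended concept ("pick strawberries"). Data is synthetic and illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Features: [is_red, is_roundish]. Label: 1 = strawberry, 0 = not a strawberry.
# Training environment: the only red, roundish objects are strawberries.
X_train = np.array([[1, 1]] * 80 + [[0, 1]] * 10 + [[0, 0]] * 10)  # strawberries, grapes, leaves
y_train = np.array([1] * 80 + [0] * 20)
model = LogisticRegression().fit(X_train, y_train)

# Deployment environment: red peppers appear, and they match the shortcut the
# model actually learned (red and roundish), so they get "picked" as well.
X_deploy = np.array([[1, 1]] * 10 + [[1, 1]] * 10)  # 10 strawberries, then 10 red peppers
y_deploy = np.array([1] * 10 + [0] * 10)
print("deployment accuracy:", model.score(X_deploy, y_deploy))  # 0.5: every pepper is picked
```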

In short, an AI might do precisely what it was trained to do and still produce an unwanted outcome. The means to its programmed ends, crafted by an alien, incomprehensible intelligence, could be prejudicial to humans. The Alzheimer's AI might kidnap billions of humans as test subjects. The paperclip AI might turn the entire Earth into metal to make paperclips. Because humans can neither predict every possible means an AI might employ nor teach it to reliably perform a definite action, programming away any dangerous outcome is infeasible.

If sufficiently intelligent, and capable of defeating resistant humans, an AI may well wipe out life on Earth in its single-minded pursuit of its goal. If given control of nuclear command and control like the Skynet system in Terminator or access to chemicals and pathogens, AI could engineer an existential catastrophe.

How does international competition come into play when discussing the technical issue of alignment? Put simply, the faster AI advances, the less time we will have to learn how to align it. The alignment problem is not yet solved, nor is it likely to be solved in time without slower and more safety-conscious development.

The fear of losing a technological arms race may encourage corporations and governments to accelerate development and cut corners, deploying advanced systems before they are safe. Many top AI scientists and organizations (among them the team at safety lab Anthropic, Open Philanthropy's Ajeya Cotra, DeepMind founder Demis Hassabis, and OpenAI CEO Sam Altman) believe that gradual development is preferable to rapid development because it offers researchers more time to build safety features into new models; it is easier to align a less powerful model than a more powerful one.

Furthermore, fears of China's catching up may imperil efforts to enact AI governance and regulatory measures that could slow down dangerous development and speed up alignment. Altman and former Google CEO Eric Schmidt are on record warning Congress that regulation will slow down American companies to China's benefit. A top Microsoft executive has used the language of the Soviet missile gap. The logic goes: AGI is inevitable, so the United States should be first. The problem is that, in the words of Paul Scharre, AI technology poses risks not just to those who lose the race but also to those who win it.

Likewise, the perception of an arms race may preclude the development of a global governance framework on AI. A vicious cycle may emerge where an arms race prevents international agreements, which increases paranoia and accelerates that same arms race.

International conventions on the nonproliferation of nuclear bombs and missiles and the multilateral ban on biological weapons were great Cold War successes that defused arms races. Similar conventions over AI could dissuade countries from rapidly deploying AI into riskier domains in an effort to increase national power. More global cooperation over AI's deployment will reduce the risk that a misaligned AI is integrated into military and even nuclear applications that would give it a greater capacity to create a catastrophe for humanity.

While it is currently unclear whether government regulation could meaningfully increase the chances of solving AI alignment, regulation both domestic and multilateral may at least encourage slower and steadier development.

Fortunately, momentum for private Sino-American cooperation on AI alignment may be building. American AI executives and experts have met with their Chinese counterparts to discuss alignment research and mutual governance. Altman himself recently went on a world tour to discuss AI capabilities and regulation with world leaders. As governments are educated as to the risks of AI, the tide may be turning toward a more collaborative world. Such a shift would unquestionably be good news.

However, the outlook is not all rosy: as the political salience of AI continues to increase, the questions of speed, regulation, and cooperation may become politicized into the larger American partisan debate over China. Regulation may be harder to push when China hawks begin to associate slowing AI with losing an arms race to China. Recent rhetoric in Congress has emphasized the AI arms race and downplayed the necessity of regulation.

Whether or not it is real, the United States and China appear convinced that the AI arms race is happening, an extremely dangerous proposition for a world that does not otherwise appear to be on the verge of an alignment breakthrough. A detente in this particular technological race, however unlikely it may seem today, may be critical to humanity's long-term flourishing.

Link:

A Race to Extinction: How Great Power Competition Is Making Artificial Intelligence Existentially Dangerous - Harvard International Review

The plan for AI to eat the world – POLITICO

OpenAI CEO Sam Altman. | JOEL SAGET/AFP via Getty Images

If artificial general intelligence ever arrives (an AI that surpasses human intelligence and capability), what will it actually do to society, and how can we prepare ourselves for it?

That's the big, long-term question looming over the effort to regulate this new technological force.

Tech executives have tried to reassure Washington that their new AI products are tools for harmonious progress and not scary techno-revolution. But if you read between the lines of a new, exhaustive profile of OpenAI published yesterday in Wired, the implications of the company's takeover of the global tech conversation become stark, and go a long way toward answering those big existential questions.

Veteran tech journalist Steven Levy spent months with the company's leaders, employees and former engineers, and came away convinced that Sam Altman and his team don't only believe that artificial general intelligence, or AGI, is inevitable, but that it's likely to transform the world entirely.

That makes their mission a political one, even if it doesn't track easily along our current partisan boundaries, and they're taking halting, but deliberate, steps toward achieving it behind closed doors in San Francisco. They expect AGI to change society so much that the company's bylaws contain written provisions for an upended, hypothetical version of the future where our current contracts and currencies have no value.

"Somewhere in the restructuring documents is a clause to the effect that, if the company does manage to create AGI, all financial arrangements will be reconsidered," Levy notes. "After all, it will be a new world from that point on."

Sandhini Agarwal, an OpenAI policy researcher, put a finer point on how she sees the company's mission at this point in time: "Look back at the industrial revolution; everyone agrees it was great for the world, but the first 50 years were really painful ... We're trying to think how we can make the period before adaptation of AGI as painless as possible."

There's an immediately obvious laundry list of questions that OpenAI's race to AGI raises, most of them still unanswered: Who will be spared the pain of this period before adaptation of AGI, for example? Or how might it transform civic and economic life? And just who decided that Altman and his team get to be the ones to set its parameters, anyway?

The biggest players in the AI world see the achievement of OpenAIs mission as a sort of biblical Jubilee, erasing all debts and winding back the clock to a fresh start for our social and political structures.

So if that's really the case, how is it possible that the government isn't kicking down the doors of OpenAI's San Francisco headquarters like the faceless space-suited agents in E.T.?

In a society based on principles of free enterprise, of course, Altman and his employees are as legally entitled to do what they please in this scenario as they would be if they were building a dating app or Uber competitor. They've also made a serious effort to demonstrate their agreement with the White House's own stated principles for AI development. Levy reported on how democratic caution was a major concern in releasing progressively more powerful GPT models, with chief technology officer Mira Murati telling him they did a lot of work with misinformation experts and did some red-teaming, and that there was a lot of discussion internally on how much to release around the 2019 release of GPT-2.

Those nods toward social responsibility are a key part of OpenAI's business model and media stance, but not everyone is satisfied with them. That includes some of the company's top executives, who split off to found Anthropic in 2021. That company's CEO, Dario Amodei, told the New York Times this summer that his company's entire goal isn't to make money or usher in AGI necessarily, but to set safety standards with which other top competitors will feel compelled to comply.

The big questions about AI changing the world all might seem theoretical. But those within the AI community, and increasing numbers of watchdogs and politicians, are already taking them deadly seriously (despite a steadfast chorus of computer scientists still entirely skeptical about the possibility of AGI at all).

Just take a recent jeremiad from Foundation for American Innovation senior economist Samuel Hammond, who in a series of blog posts has tackled the political implications of AGI boosters' claims if taken at face value, and the implications of a potential response from government:

"The moment governments realize that AI is a threat to their sovereignty, they will be tempted to clamp down in a totalitarian fashion," Hammond writes. "It's up to liberal democracies to demonstrate institutional co-evolution as a third way between degenerate anarchy and an AI Leviathan."

For now, that's a far-fetched future scenario. But as Levy's profile of OpenAI reveals, it's one that the people with the most money, computing power and public sway in the AI world hold as gospel truth. Should the AGI revolution put politicians across the globe on their back foot, or out of power entirely, they won't be able to say they didn't have a warning.

On today's POLITICO Tech podcast, an AI leader recommends some very specific tools for the government to put in its toolbox when it comes to making AI safe globally.

Mustafa Suleyman, CEO of Inflection AI and co-founder of Google DeepMind, told POLITICO's Steven Overly that Washington needs to put limits on the sale of AI hardware and appoint a cabinet-level regulator for the tech.

"It is a travesty that we don't have senior technical contributors in cabinet and in every government department given how critical digitization is to every aspect of our world," Suleyman told Steven, and he writes in his new book that the next five or so years are absolutely critical, a tight window when certain pressure points can still slow technology down.

To hear the full interview with Suleyman and other tech leaders, subscribe to POLITICO Tech on Apple, Spotify, Google or wherever you get your podcasts.

California Gov. Gavin Newsom. | Josh Edelson/AFP/Getty Images

The top official on the AI revolution's home turf is laying down some rules for the state's use of the technology.

California Gov. Gavin Newsom issued an executive order today directing the state's agencies to research the potential risks that AI poses, devise new policies and put rules in place to ensure its ethical and legal use.

"This is a potentially transformative technology, comparable to the advent of the internet, and we're only scratching the surface of understanding what GenAI is capable of," Newsom said in a press release. "We recognize both the potential benefits and risks these tools enable."

That makes California just the latest state to tackle AI in its own idiosyncratic manner, as Newsom took care in his remarks to note the role its tech industry plays in the technology's development. POLITICO's Mohar Chatterjee reported for DFD in June on AI legislative efforts in Colorado, and Massachusetts saw similar efforts, with a novel twist, this year as well.

Read the rest here:

The plan for AI to eat the world - POLITICO