Category Archives: Artificial Super Intelligence

Sam Altman-OpenAI saga: Researchers had warned board of ‘dangerous, humanity-threatening’ AI – Business Today

Before Sam Altman, the CEO of OpenAI, was temporarily removed from his position, a group of staff researchers sent a letter to the board of directors. They warned about a significant artificial intelligence discovery that could potentially pose a threat to humanity, according to a report by Reuters citing two individuals familiar with the matter.

The report suggests that this letter and the AI algorithm it discussed had not been previously reported, but that they could have played a crucial role in the board's decision to remove Altman. Over 700 employees had threatened to leave OpenAI and join Microsoft, one of the company's backers, in support of Altman. The letter was one of many issues raised by the board that led to Altman's dismissal, according to the report.

Earlier this week, Mira Murati, a long-time executive at OpenAI, mentioned a project called Q* (pronounced Q-star) to the employees and stated that a letter had been sent to the board before the weekend's events.

After the story was published, an OpenAI spokesperson, according to the report, said that Murati had informed the employees about what the media were about to report. The company that developed ChatGPT has made progress on Q*, which some people within the company believe could be a significant step towards achieving super-intelligence, also known as artificial general intelligence (AGI).

How is the new model different?

With access to extensive computing resources, the new model was able to solve certain mathematical problems. Even though it was only performing math at the level of grade-school students, the researchers were very optimistic about Q*'s future success.

Math is considered one of the most important aspects of generative AI development. Current generative AI is good at writing and language translation by statistically predicting the next word. However, the ability to do math, where there is only one correct answer, suggests that AI would have greater reasoning capabilities similar to human intelligence. This could be applied to novel scientific research.
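To make "statistically predicting the next word" concrete, here is a minimal illustrative sketch of a bigram next-word predictor in Python. The toy corpus and function name are invented for illustration; production systems like the models behind ChatGPT learn these statistics with large neural networks rather than lookup tables.

```python
from collections import Counter, defaultdict

# Toy corpus; real models train on vast text collections.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (a bigram model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word, if any."""
    counts = follows.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # -> 'cat', the word seen most often after 'the'
```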

Unlike a calculator that can only solve a limited number of operations, AGI can generalise, learn, and comprehend. In their letter to the board, the researchers highlighted the potential danger of AI's capabilities. There has been a long-standing debate among computer scientists about the risks posed by super-intelligent machines.

Sam Altman's Role

In this context, Altman led efforts to make ChatGPT one of the fastest-growing software applications in history and secured necessary investment and computing resources from Microsoft to get closer to super-intelligence.

In addition to announcing a series of new tools earlier this month, Altman hinted at a gathering of world leaders in San Francisco that he believed AGI was within reach. "Four times now in the history of OpenAI, the most recent time was just in the last couple weeks, I've gotten to be in the room, when we sort of push the veil of ignorance back and the frontier of discovery forward, and getting to do that is the professional honor of a lifetime," he said. The board fired Altman the next day.


Excerpt from:

Sam Altman-OpenAI saga: Researchers had warned board of 'dangerous, humanity-threatening' AI - Business Today

Artificial Intelligence and Synthetic Biology Are Not Harbingers of … – Stimson Center

Are AI and biological research harbingers of certain doom or awesome opportunities?

Contrary to the reigning assumption that artificial intelligence (AI) will super-empower the misuse of biotech to create pathogens and fuel bioterrorism, AI holds the promise of advancing biological research, and biotechnology can power the next wave of AI to greatly benefit humanity. Worries about the misuse of biotech are especially prevalent, recently prompting the Biden administration to publish guidelines for biotech research, in part to calm growing fears.

The doomsday assumption that AI will inevitably create new, malign pathogens and fuel bioterrorism misses three key points. First, the data must be out there for an AI to use it. AI systems are only as good as the data they are trained upon. For an AI to be trained on biological data, that data must first exist, which means it is already available for humans to use with or without AI. Moreover, attempts at solutions that limit access to data overlook the fact that biological data can be discovered by researchers and shared in encrypted form beyond the eyes or controls of a government. No solution to the misuse of biological research for harmful pathogens or bioweapons can rest on controlling access to data or to AI, because the data will be discovered and known by human experts regardless of whether any AI is trained on it.

Second, governments stop bad actors from using biotech for bad purposes by focusing on the actor's precursor behaviors in developing a bioweapon; fortunately, those same techniques work perfectly well here, too. To mitigate the risks that bad actors, be they humans or humans and machines combined, will misuse AI and biotech, indicators and warnings need to be developed. When advances in technology, specifically steam engines, concurrently resulted in a new type of crime, namely train robberies, the solution was not to forego either steam engines or their use in conveying cash and precious cargo. Rather, the solution was to employ other improvements, later including certain types of safes that were harder to crack and, subsequently, dye packs to cover the hands and clothes of robbers. Similar innovations in early warning and detection are needed today in the realm of AI and biotech, including methods to warn about suspicious reagents and activities, as well as creative means of warning when biological research is being conducted for negative ends.

This second point is particularly key given the recent Executive Order (EO) released on 30 October 2023 prompting U.S. agencies and departments that fund life-science projects to "establish strong, new standards for biological synthesis screening as a condition of federal funding . . . [to] manage risks potentially made worse by AI." Often the safeguards that ensure potentially dual-use biological research is not misused involve monitoring the real world for indicators and early warnings of ill-intended uses, the same way governments monitor to stop bad actors from misusing any dual-purpose scientific endeavor. Although the recent EO is not meant to constrain research, any attempted solution limiting access to data misses the fact that biological data can already be discovered and shared in encrypted forms beyond government control. The same techniques used today to detect malevolent intentions will work whether large language models (LLMs) and other forms of generative AI have been used or not.

Third, given how often LLMs and other generative AI systems are wrong, as well as the risk of AI hallucinations, any would-be AI intended to provide advice on biotech will have to be checked by a human expert. Just because an AI can generate possible suggestions and formulations (perhaps even novel formulations of new pathogens or biological materials), it does not mean that what the AI has suggested has any grounding in actual science or will do biochemically what the AI claims the designed material could do. Again, AI by itself does not replace the need for human knowledge to verify that whatever advice, guidance, or instructions are given regarding biological development is accurate.

Moreover, AI does not supplant the role of various real-world patterns and indicators to tip off law enforcement regarding potential bad actors engaging in biological techniques for nefarious purposes. Even before advances in AI, the need to globally monitor for signs of potential biothreats, be they human-produced or natural, existed. Today with AI, the need to do this in ways that still preserve privacy while protecting societies is further underscored.

Knowledge of how to do something is not synonymous with expertise in and experience of doing that thing: both require experimentation and additional review. AIs by themselves can convey information that might foster new knowledge, but they cannot convey expertise without months of a human actor doing in silico (computer-based) or in situ (in the original place) experiments or simulations. Moreover, for governments wanting to stop the malicious use of AI holding potential bioweapon-generating information, the solution can include introducing uncertainty into the reliability of an AI system's outputs; data poisoning of AIs, whether accidental or intentional, represents a real risk for any type of system. It is on the defensive side, meanwhile, that AI and biotech can reap the biggest benefit. Specifically, AI and biotech can identify indicators and warnings to detect risky pathogens, as well as spot vulnerabilities in global food production and climate-change-related disruptions, making global interconnected systems more resilient and sustainable. Such an approach would not require massive intergovernmental collaboration before researchers could get started; privacy-preserving approaches using economic data, aggregate (and anonymized) supply-chain data, and even general observations from space would be sufficient to begin today.

Setting aside potential concerns regarding AI being used for ill-intended purposes, the intersection of biology and data science is an underappreciated aspect of the last two decades. At least two COVID-19 vaccines were designed on a computer and then printed as nucleotides via an mRNA printer. Had this technology not been possible, it might have taken an additional two or three years for the same vaccines to be developed. Even more amazing, nucleotide printers presently cost only $500,000 and will presumably become less expensive and more capable in the years ahead.

AI can benefit biological research and biotechnology, provided that the right training is used for AI models. To avoid downside risks, it is imperative that new, collective approaches to data curation and training for AI models of biological systems be made in the next few years.

As noted earlier, much attention has been placed on both AI and advancements in biological research; some of this attention is grounded in scientific rigor, while some is driven more by emotional excitement or fear. When setting a solid foundation for a future based on values and principles that support and safeguard all people and the planet, neither science nor emotions alone can be the guide. Instead, considering how projects involving biology and AI can build and maintain trust, despite the challenges of both intentional disinformation and accidental misinformation, can illuminate a positive path forward.

Specifically, in the last few years, attention has been placed on the risk of an AI system training novice individuals how to create biological pathogens. Yet this attention misses the fact that such a system is only as good as the data sets provided to train it; the risk already existed with such data being present on the internet or via some other medium. Moreover, an individual cannot gain from an AI the necessary experience and expertise to do whatever the provided information suggests; such experience only comes from repeated work in a real-world setting. Repeated work would require access to chemical and biological reagents, which could alert law enforcement authorities. Such work would also yield other signatures of preparatory activities in the real world.

Others have raised the risk of an AI system learning from biological data and helping to design more lethal pathogens or threats to human life. The sheer complexity of the different layers of biological interaction, combined with the tendency of certain types of generative AI to produce hallucinated or inaccurate answers (as this article details in its concluding section), makes this a smaller risk than it might initially seem. Specifically, expert human actors working together across disciplines in a concerted fashion represent a much more significant risk than AI does, and human actors working together for ill-intended purposes (potentially with machines) will presumably present signatures of their attempted activities. Nevertheless, these concerns, and the mix of both hype and fear surrounding them, underscore why communities should care about how AI can benefit biological research.

The merger of data and bioscience is one of the most dynamic and consequential elements of the current tech revolution. A human organization, with the right goals and incentives, can accomplish amazing outcomes ethically, as can an AI. Similarly, with either the wrong goals or the wrong incentives, an organization or an AI can act and behave unethically. To address the looming impacts of climate change and the challenges of food security, sustainability, and availability, both AI and biological research will need to be employed. For example, significant amounts of nitrogen have already been lost from the soil in several parts of the world, resulting in reduced agricultural yields. In parallel, methane is a pollutant between 22 and 40 times worse than carbon dioxide (depending on the time scale considered) in terms of its contribution to the greenhouse effect. Computationally designed bacteria can be developed that use methane as a source of energy, consuming it and removing its contribution to the greenhouse effect, while simultaneously returning nitrogen from the air to the soil, thereby making the soil more productive and improving agricultural yields.

The concerns regarding the potential for AI and biology to be used for ill-intended purposes should not overshadow the present conversations about using technologies to address important regional and global issues. To foster global activities that both encourage the productive use of these technologies for meaningful human efforts and, in parallel, ensure their ethical application, an existing group, the international Genetically Engineered Machine (iGEM) competition, should be expanded. iGEM is a global academic competition, started in 2004, aimed at improving understanding of synthetic biology while also developing an open community and collaboration among groups. In recent years, over 6,000 students in 353 teams from 48 countries have participated. Expanding iGEM to include a track associated with categorizing and monitoring the use of synthetic biology for good, as well as working with national governments to ensure that such technologies are not used for ill-intended purposes, would represent two great ways to move forward.

As for AI in general, when considering governance of AIs, especially for future biological research and biotechnology efforts, decision-makers would do well to consider both existing and needed incentives and disincentives for human organizations in parallel. It might be that the original Turing Test, designed by computer science pioneer Alan Turing to test whether a computer system is behaving intelligently, is not the best test to consider when gauging local, community, and global trust. Specifically, the original test involved Computer A and Person B, with B attempting to convince an interrogator, Person C, that they were human and that A was not, while Computer A was trying to convince Person C that it was human.

Consider the current state of some AI systems, where the benevolence of the machine is indeterminate, competence is questionable (some AI systems do not fact-check and can deliver misinformation with apparent confidence and eloquence), and integrity is absent: some AI systems will change their stance if a user simply prompts them to do so.

However, these crucial questions regarding the antecedents of trust should not fall upon these digital innovations alone; these systems are designed and trained by humans. Moreover, AI models will improve in the future if developers focus on enhancing their ability to demonstrate benevolence, competence, and integrity to all. Most importantly, consider the other obscured boxes present in human societies, such as decision-making in organizations, community associations, governments, oversight boards, and professional settings. These human activities will also benefit from enhancing their ability to demonstrate benevolence, competence, and integrity to all, in ways akin to what we need to do for AI systems.

Ultimately, to advance biological research and biotechnology and AI, private and public-sector efforts need to take actions that remedy the perceptions of benevolence, competence, and integrity (i.e., trust) simultaneously.

David Bray is Co-Chair of the Loomis Innovation Council and a Distinguished Fellow at the Stimson Center.

Follow this link:

Artificial Intelligence and Synthetic Biology Are Not Harbingers of ... - Stimson Center

AMBASSADORS OF ETHICAL AI PRACTICES | by ACWOL | Nov … – Medium

http://www.acwol.com

In envisioning a future where AI developers worldwide embrace the Three Way Impact Principle (3WIP) as a foundational ethical framework, we unravel a transformative landscape for tackling the Super Intelligence Control Problem. By integrating 3WIP into the curriculum for AI developers globally, we fortify the industry with a super intelligent solution, fostering responsible, collaborative, and environmentally conscious AI development practices.

Ethical Foundations for AI Developers:

Holistic Ethical Education: With 3WIP as a cornerstone in AI education, students receive a comprehensive ethical foundation that guides their decision-making in the realm of artificial intelligence.

Superior Decision-Making: 3WIP encourages developers to consider the broader impact of their actions, instilling a sense of responsibility that transcends immediate objectives and aligns with the highest purpose of life: maximizing intellect.

Mitigating Risks Through Collaboration: Interconnected AI Ecosystem: 3WIP fosters an environment where AI entities collaborate rather than compete, reducing the risks associated with unchecked development.

Shared Intellectual Growth: Collaboration guided by 3WIP minimizes the potential for adversarial scenarios, contributing to a shared pool of knowledge that enhances the overall intellectual landscape.

Environmental Responsibility in AI: Sustainable AI Practices: Integrating 3WIP into AI curriculum emphasizes sustainable practices, mitigating the environmental impact of AI development.

Global Implementation of 3WIP: Universal Ethical Standards: A standardized curriculum incorporating 3WIP establishes universal ethical standards for AI development, ensuring consistency across diverse cultural and educational contexts.

Ethical Practitioners Worldwide: AI developers worldwide, educated with 3WIP, become ambassadors of ethical AI practices, collectively contributing to a global community focused on responsible technological advancement.

Super Intelligent Solution for Control Problem: Preventing Unintended Consequences: 3WIP's emphasis on considering the consequences of actions aids in preventing unintended outcomes, a critical aspect of addressing the Super Intelligence Control Problem.

Responsible Decision-Making: Developers, equipped with 3WIP, navigate the complexities of AI development with a heightened sense of responsibility, minimizing the risks associated with uncontrolled intelligence.

Adaptable Ethical Framework: Cultural Considerations: The adaptable nature of 3WIP allows for the incorporation of cultural nuances in AI ethics, ensuring ethical considerations resonate across diverse global perspectives.

Inclusive Ethical Guidelines: 3WIP accommodates various cultural norms, making it an inclusive framework that accommodates ethical guidelines applicable to different societal contexts.

Future-Proofing AI Development: Holistic Skill Development: 3WIP not only imparts ethical principles but also nurtures critical thinking, decision-making, and environmental consciousness in AI professionals, future-proofing their skill set.

Staying Ahead of Risks: The comprehensive education provided by 3WIP prepares AI developers to anticipate and address emerging risks, contributing to the ongoing development of super intelligent solutions.

The integration of the Three Way Impact Principle (3WIP) into the global curriculum for AI developers emerges as a super intelligent solution to the Super Intelligence Control Problem. By instilling ethical foundations, fostering collaboration, promoting environmental responsibility, and adapting to diverse cultural contexts, 3WIP guides AI development towards a future where technology aligns harmoniously with the pursuit of intellectual excellence and ethical progress. As a super intelligent framework, 3WIP empowers the next generation of AI developers to be ethical stewards of innovation, navigating the complexities of artificial intelligence with a consciousness that transcends immediate objectives and embraces the highest purpose of life: maximizing intellect.

Cheers,

https://www.acwol.com

https://discord.com/invite/d3DWz64Ucj

https://www.instagram.com/acomplicatedway

NOTE: A COMPLICATED WAY OF LIFE, abbreviated as ACWOL, is a philosophical framework containing just five tenets to grok and five tools to practice. If you would like to know more, write to connect@acwol.com. Thanks so much.

Read the original here:

AMBASSADORS OF ETHICAL AI PRACTICES | by ACWOL | Nov ... - Medium

3 AI-Backed Stocks That Could Return Magnificent Gains in 2024 – The Motley Fool

If you don't already own artificial intelligence stocks, you're likely to be missing out on one of the biggest technology inflections in history. But if you fear you've already missed the boat, keep in mind many key AI industry participants still trade below their 2021 highs.

But if interest rates stabilize and AI tailwinds persist, as many suspect, look for these three names to make new all-time highs -- likely in 2024.

The AI world got a shock on Friday, when OpenAI CEO Sam Altman was fired by OpenAI's board of directors. While the situation appears fluid and Altman may be able to return, it is clearly a less-than-ideal situation.

In the cloud industry, OpenAI investor Microsoft (MSFT -0.11%) is thought to have the AI lead because of the OpenAI partnership, but the current chaos may have thrown that "lead" into question. Meanwhile Amazon (AMZN 0.02%), Microsoft's chief rival in the cloud computing space, is making its own AI moves.

September was actually a momentous month for Amazon's AI ambitions. Amazon Web Services made its Bedrock service generally available to enterprise customers. Bedrock is AWS's generative AI platform, through which companies can access large language models (LLMs) from leading AI start-ups AI21 Labs, Anthropic, Cohere, and Stability AI, as well as Meta Platforms' LLM, called Llama. In addition, Amazon has pre-trained models of its own called Titan, which customers can combine with their own private data to glean insights. Finally, Amazon's AI-powered CodeWhisperer helps developers write and implement software code quickly and efficiently with natural language prompts.
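For readers curious what accessing an LLM through Bedrock can look like in practice, below is a hedged sketch using the AWS boto3 SDK. The region, model ID, and request-body schema are assumptions for illustration (the body format differs per model provider; this one follows the 2023-era Anthropic text-completion format), and the call requires an AWS account with Bedrock access enabled.

```python
import json
import boto3  # AWS SDK; assumes credentials and Bedrock access are configured

# Region and model ID are illustrative assumptions; availability varies by account.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Request-body schema differs per provider; this follows the Anthropic
# Claude text-completion format that Bedrock exposed at GA in 2023.
body = json.dumps({
    "prompt": "\n\nHuman: Summarize our Q3 sales pipeline in one sentence.\n\nAssistant:",
    "max_tokens_to_sample": 200,
})

response = client.invoke_model(
    modelId="anthropic.claude-v2",  # any enabled Bedrock model ID works here
    body=body,
    contentType="application/json",
    accept="application/json",
)

print(json.loads(response["body"].read())["completion"])
```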

September also saw Amazon announce a strategic collaboration with AI start-up Anthropic. In exchange for a minority investment of up to $4 billion, Anthropic will commit to using AWS as its primary cloud provider and use Amazon's in-house-designed Trainium and Inferentia chips. The deal is in many ways Amazon's answer to Microsoft's collaboration with OpenAI, so we will see if it gives Amazon a leg up in the AI wars.

And of course, Amazon is an innovative company with huge scale across its e-commerce, advertising, and other consumer businesses. That size and data advantage should also allow Amazon's other businesses to benefit from efficiencies gleaned from AI. And that may already be happening; last quarter, Amazon's non-cloud North American business grew 11%, and its International business grew 16%, which are very healthy rates for businesses that large.

Given that the stock is still 25% below its all-time high, Amazon is a "Prime" candidate for a strong 2024.

Just as the cloud computing business seems to be bottoming out, so too is the memory industry. Micron Technology (MU -0.30%) is one of only three major DRAM manufacturers, and the only one based in the United States.

Fortunately for Micron, artificial intelligence servers require several times more DRAM than traditional enterprise servers, and research firm TrendForce recently projected that AI server unit shipments will grow at a mid-teens rate for the next five years.

That should help underpin the DRAM market, which is due for an upturn even outside of AI servers. The post-pandemic period led to the worst-ever drop in demand for PC and mobile DRAM in mid-2022, but that long down-cycle has also shown recent signs of turning around:

MU EBIT (Quarterly) data by YCharts

Not only that, but Micron has overtaken its rivals on leading technology nodes over the past year. A year ago, Micron was the first company to manufacture DRAM on the 1-beta node. Recently, Micron introduced a new 128 GB RDIMM DRAM module built on 32 GB DDR5 dies that is highly desirable for AI applications. And next year, Micron will begin shipping its new high-bandwidth memory (HBM3) for AI applications, whose specs exceed those of competitors' offerings in the market today.

With the memory market bottoming out and AI-related demand tailwinds just starting to kick in, Micron should see its current losses turn into profits -- potentially, big profits -- next year.

Unlike Amazon and Micron, server maker Super Micro Computer (SMCI -0.34%) reached an all-time high earlier this year, but it has backtracked about 20% from those early-August highs. Despite its outperformance over the past two years, shares still don't look expensive at 26 times trailing earnings and 16.7 times fiscal 2024 earnings estimates, with Super Micro's fiscal year ending next June.

Super Micro's energy-efficient servers, with unique features such as liquid cooling and a building-block architecture, have found favor with artificial intelligence companies. Over the past year, the majority of SMCI's revenue has come from AI-related servers. Given the hypergrowth projected for AI servers going forward, Super Micro should be a strong grower not only this year, but for years to come.

This year, Super Micro announced a new Malaysia manufacturing plant that will come online in 2024, which should double the company's capacity and lower its manufacturing costs significantly. And just two weeks ago, Super Micro announced it can now deliver 5,000 server racks per month as a result of surging demand. Why is this important? Because just two quarters ago, management had hoped to reach 4,000 racks per month by year-end. That means Super Micro is exceeding its own goals in meeting strong demand.

Super Micro also plans to grow well beyond this year. While it has guided for revenue of $10 billion to $11 billion in fiscal 2024, CEO Charles Liang has set a goal of $20 billion, which he sees as "just a couple years away." Super Micro has a profitable history of beating its own guidance and publicly stated goals, so the company could get there even faster.

That makes it a stock that can soar even further in 2024.

John Mackey, former CEO of Whole Foods Market, an Amazon subsidiary, is a member of The Motley Fool's board of directors. Randi Zuckerberg, a former director of market development and spokeswoman for Facebook and sister to Meta Platforms CEO Mark Zuckerberg, is a member of The Motley Fool's board of directors. Billy Duberstein has positions in Amazon, Meta Platforms, Micron Technology, Microsoft, and Super Micro Computer and has the following options: short January 2025 $110 puts on Super Micro Computer, short January 2025 $125 puts on Super Micro Computer, short January 2025 $130 puts on Super Micro Computer, short January 2025 $280 calls on Super Micro Computer, short January 2025 $380 calls on Super Micro Computer, and short January 2025 $85 puts on Super Micro Computer. His clients may own shares of the companies mentioned. The Motley Fool has positions in and recommends Amazon, Meta Platforms, and Microsoft. The Motley Fool recommends Super Micro Computer. The Motley Fool has a disclosure policy.

More:

3 AI-Backed Stocks That Could Return Magnificent Gains in 2024 - The Motley Fool

AI and the law: Imperative need for regulatory measures – ft.lk

Using AI Technology, without the needed laws and policies to understand and monitor it, can be risky

"The advent of superintelligent AI would be either the best or the worst thing ever to happen to humanity. The real risk with AI isn't malice but competence. A super-intelligent AI will be extremely good at accomplishing its goals, and if those goals aren't aligned with ours, we're in trouble."1

Generative AI, the most well-known example being ChatGPT, has surprised many around the world because its output to queries is very human-like. Its impact on industries and professions will be unprecedented, including on the legal profession. However, there are pressing ethical and even legal matters that need to be recognised and addressed, particularly in the areas of intellectual property and data protection.

Firstly, how does one define artificial intelligence? AI systems could be considered information-processing technologies that integrate models and algorithms, producing a capacity to learn and to perform cognitive tasks that lead to outcomes such as prediction and decision-making in material and virtual environments. Though in general parlance we have referred to them as robots, AI is developing at such a rapid pace that it is bound to be far more independent than one can ever imagine.

As AI migrated from machine learning (ML) to generative AI, the risks we are looking at also took an exponential curve. The release of generative technologies is not human-centric: these systems provide results that cannot be exactly proven or replicated, and they may even fabricate and hallucinate. Science fiction writer Vernor Vinge speaks of the concept of technological singularity, where one can imagine machines with superhuman intelligence outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders and potentially subduing us with weapons we cannot even understand. Whereas the short-term impact depends on who controls it, the long-term impact depends on whether it can be controlled at all.2

The EU AI Act and other judgements

Laws and regulations are in the process of being enacted in some of the developed countries, such as the EU and the USA. The EU AI Act (Act) is one of the main regulatory statutes that is being scrutinised. The approach that the MEPs (Members of the European Parliament) have taken with regard to the Act has been encouraging. On 1 June, a vote was taken where MEPs endorsed new risk management and transparency rules for AI systems. This was primarily to endorse a human-centric and ethical development of AI. They are keen to ensure that AI systems are overseen by people, are safe, transparent, traceable, non-discriminatory and environmentally friendly. The term AI will also have a uniform definition which will be technology neutral, so that it applies to AI systems today and tomorrow.

Co-rapporteur Dragoș Tudorache (Renew, Romania) stated, "We have worked to support AI innovation in Europe and to give start-ups, SMEs and industry space to grow and innovate, while protecting fundamental rights, strengthening democratic oversight and ensuring a mature system of AI governance and enforcement."3

The Act has also adopted a risk-based approach to categorising AI systems, and has made recommendations accordingly. The four levels of risk are:

Unacceptable risk (e.g., remote biometric identification systems in public),

High risk (e.g., use of AI in the administration of justice and democratic processes),

Limited risk (e.g., using AI systems in chatbots) and

Minimal risk (e.g., spam filters).

Under the Act, AI systems categorised as Unacceptable Risk will be banned. For High Risk AI systems, the second tier, developers are required to adhere to rigorous testing requirements, maintain proper documentation and implement an adequate accountability framework. For Limited Risk systems, the Act requires certain transparency features that allow a user to make informed choices regarding usage. Lastly, for Minimal Risk AI systems, a voluntary code of conduct is encouraged.
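As an illustration only (the Act defines obligations in legal text, not code), the four-tier scheme described above could be encoded roughly as follows; the tier names and example systems are taken from the list above, and the mapping is a simplification.

```python
from enum import Enum

class AIActRiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "rigorous testing, documentation, accountability framework"
    LIMITED = "transparency features so users can make informed choices"
    MINIMAL = "voluntary code of conduct encouraged"

# Simplified mapping of the examples cited in the Act's risk categories.
EXAMPLES = {
    "remote biometric identification in public": AIActRiskTier.UNACCEPTABLE,
    "AI in the administration of justice": AIActRiskTier.HIGH,
    "customer-facing chatbot": AIActRiskTier.LIMITED,
    "spam filter": AIActRiskTier.MINIMAL,
}

for system, tier in EXAMPLES.items():
    print(f"{system}: {tier.name} -> {tier.value}")
```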

Moreover, in May 2023, a judgment4 was issued in the US state of Texas requiring all attorneys to file a certificate containing two statements: that no part of the filing was drafted by generative AI, and that any language drafted by generative AI has been verified for accuracy by a human being. The ruling followed a New York case in which an attorney had used ChatGPT, which cited non-existent cases. Judge Brantley Starr stated, "[T]hese platforms in their current states are prone to hallucinations and bias... on hallucinations, they make stuff up, even quotes and citations." As ChatGPT and other generative AI technologies are used more and more, including in the legal profession, it is imperative that professional bodies and other regulatory bodies draw up appropriate legislation and policies covering the usage of these technologies.

UNESCO

On 23 November 2021, UNESCO published a document titled Recommendations on the Ethics of Artificial Intelligence.5 It emphasises the importance of governments adopting a regulatory framework that clearly sets out a procedure, particularly for public authorities, to carry out ethical impact assessments on AI systems in order to predict consequences, address societal challenges and facilitate citizen participation. In explaining the assessment further, the recommendations state that it should have appropriate oversight mechanisms, including auditability, traceability and explainability, which enable the assessment of algorithms, data and design processes, as well as an external review of AI systems. The 10 principles highlighted in the document are:

Proportionality and Do Not Harm

Safety and Security

Fairness and Non-Discrimination

Sustainability

Right to Privacy and Data Protection

Human Oversight and Determination

Transparency and Explainability

Responsibility and Accountability

Awareness and Literacy

Multi Stakeholder and Adaptive Governance and Collaboration.

Conclusion

The level of trust citizens have in AI systems can be a factor in determining the success of AI systems being used more in the future. As long as there is transparency in the models used in AI systems, one can hope to achieve a degree of respect, protection and promotion of human rights, fundamental freedoms and ethical principles.6 UNESCO Director-General Audrey Azoulay stated, "Artificial intelligence can be a great opportunity to accelerate the achievement of sustainable development goals. But any technological revolution leads to new imbalances that we must anticipate."

Stakeholders in every state need to come together to advise on and enact the relevant laws. Using AI technology without the needed laws and policies to understand and monitor it can be risky. On the other hand, not using available AI systems for tasks at hand would be a waste. In conclusion, in the words of Stephen Hawking,7 "Our future is a race between the growing power of our technology and the wisdom with which we use it. Let's make sure wisdom wins."

Footnotes:

1 Pg 11/12; "Will Artificial Intelligence outsmart us?" by Stephen Hawking; essay taken from Brief Answers to the Big Questions, John Murray (2018)

2 Ibid

3 https://www.europarl.europa.eu/news/en/press-room/20230505IPR84904/ai-act-a-step-closer-to-the-first-rules-on-artificial-intelligence

4 https://www.theregister.com/2023/05/31/texas_ai_law_court/

5 https://www.unesco.org/en/articles/recommendation-ethics-artificial-intelligence

6 Ibid; Pg 22

7 "Will Artificial Intelligence outsmart us?" by Stephen Hawking; essay taken from Brief Answers to the Big Questions, John Murray (2018)

(The writer is an Attorney-at-Law, LL.B (Hons.) (Warwick), LL.M (Lon.), Barrister (Lincoln's Inn), UK. She obtained a Certificate in AI Policy at the Center for AI and Digital Policy (CAIDP) in Washington, USA in 2022. She was also a speaker at the World Litigation Forum Law Conference in Singapore (May 2023) on the topic of lawyers using AI, legal technology and big data, and was a participant at the IGF Conference 2023 in Kyoto, Japan.)

More here:

AI and the law: Imperative need for regulatory measures - ft.lk

Are chatbots, super apps and AI the future of European marketplaces? – InternetRetailing

Marketplaces have evolved rapidly. This momentum has been maintained as they now embrace technology and new ways to sell.

The recently published European Marketplace 2023 report highlights some new technologies and even new business models that will be important as the sector evolves even further next year.

AI-powered recommendations

The need to differentiate through better product recommendations in a crowded market, and even to use tech to let shoppers seemingly discover new things spontaneously, is about to step up thanks to artificial intelligence (AI).

Recommendation engines on marketplaces and retailer websites are nothing new, but their level of sophistication is growing as ever-more powerful AI is brought to bear. This yields much deeper insights into consumer behaviour, allowing for much richer and more nuanced recommendations.

And it's not just for existing customers. The holy grail is to be able to recommend things at least in the right ballpark to anonymous customers. Today's (and even more so, tomorrow's) AI is starting to be able to do this by better understanding what these new customers look at, where they linger and how they behave on the site. This can then help deliver much more intelligent suggestions and guide shoppers not just to products but also to discounts and offers, and even to sign up.
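A minimal sketch of the idea, using invented session data: score candidate products by how much an anonymous visitor's browsing overlaps with past sessions that ended in a purchase. Real marketplace recommenders use far richer behavioural signals and learned models; this shows only the shape of the approach.

```python
from collections import Counter

# Hypothetical past sessions: (set of pages viewed, product eventually bought).
past_sessions = [
    ({"running-shoes", "socks", "gps-watch"}, "gps-watch"),
    ({"running-shoes", "water-bottle"}, "running-shoes"),
    ({"yoga-mat", "water-bottle"}, "yoga-mat"),
]

def recommend(viewed, top_n=2):
    """Rank products bought in past sessions most similar to this visit."""
    scores = Counter()
    for pages, bought in past_sessions:
        overlap = len(viewed & pages)
        if overlap:
            scores[bought] += overlap
    return [product for product, _ in scores.most_common(top_n)]

# An anonymous visitor lingers on two pages; no account or history needed.
print(recommend({"running-shoes", "socks"}))  # -> ['gps-watch', 'running-shoes']
```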

This is vital to marketplaces of all types. The sector is becoming increasingly competitive and, whether a flat, generalised marketplace or a vertical specialist, understanding what customers want is vital.

This trend will be seen in 2024 across all ecommerce, not least as retailers attempt to make their non-marketplace sites perform better than their marketplace competitors. Because of this, marketplace offerings that don't look to AI-powered, deeply intelligent recommendations will struggle.

Chatbots, AI and voice commerce

AI will not just be confined to recommendations; it has a role to play across the whole ecommerce process, from marketplaces to traditional D2C sites. One of the key areas where it will come to the fore is in chatbots and other tools that allow communication between a marketplace, the retailer and customers.

Of course, chatbots are already used across ecommerce, handling FAQs via instant messaging and routing queries to customer service agents based on intelligently assessing what the consumer is asking. This is very much chatbot 1.0, though, and as AI has exponentially increased in power and ubiquity over the past 24 months, so too has the power of AI-run chatbots.

Moving on from simple rules-based responses to key words and phrases to find answers or redirect a query, today's AI allows for self-learning that leads to a degree of understanding. The use of generative AI technologies such as GPT-4 can also help create bespoke and highly conversational answers. Combining this self-learning and generative approach with rules-based systems can, in theory, create a much more realistic, although still totally artificial, interaction between brand and customer.
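That hybrid pattern can be sketched as follows; `call_llm` is a hypothetical stand-in for whichever generative backend (GPT-4 or otherwise) a marketplace integrates, and the FAQ rules are invented examples.

```python
# Rules-based layer: exact answers for known, high-volume questions.
FAQ_RULES = {
    "where is my order": "You can track your order under Account > Orders.",
    "return policy": "Items can be returned within 30 days of delivery.",
}

def call_llm(prompt: str) -> str:
    """Hypothetical generative backend (e.g. a GPT-4-class API) for open-ended queries."""
    return f"[generated reply to: {prompt!r}]"

def answer(user_message: str) -> str:
    """Try deterministic rules first; fall back to the generative model."""
    normalized = user_message.lower().strip("?!. ")
    for trigger, reply in FAQ_RULES.items():
        if trigger in normalized:
            return reply
    return call_llm(user_message)

print(answer("What is your return policy?"))      # rules-based hit
print(answer("Which tent suits winter hiking?"))  # falls through to the LLM
```

The rules layer keeps answers to routine questions fast and deterministic, while the generative layer absorbs the long tail of open-ended queries.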

While this can help handle far more consumer interactions than a team of humans ever could, its true potential lies in elevating all these interactions beyond merely reacting to customer comments towards creating genuine two-way conversations, akin to how an old-style shop assistant may have helped guide customers to buy. This, along with AI-powered recommendations, has the potential to create a new paradigm in online selling, cross-selling and up-selling.

For today's marketplaces, this combination is likely to be a much-needed differentiator in the competitive years ahead. It also once again shifts the nature of ecommerce from something that a customer does towards something that happens to, or, even better, with the consumer.

News in late 2023 that Facebook owner Meta is poised to release chatbots with a personality on Facebook Messenger will only fuel this growth in conversational interaction.

The application of AI to voice also lends itself to voice commerce. Already well documented when smart speakers such as Amazon's Echo and Apple's HomePod first hit the headlines, the use of voice to interact with websites and sellers has, to some degree, failed to ignite widespread interest. Yet with natural language processing (NLP) and generative AI surging ahead, the ability to interact verbally with marketplaces is likely to return to the agenda, only this time it won't just be through smart speakers. To gain mass appeal, it will be through the websites and apps of the marketplaces themselves.

It seems inevitable that the internet will slowly edge towards being some sort of metaverse: a more immersive, semi-naturalistic platform where interaction is less about typing and clicking and more about pointing and speaking. When it does, natural voice interaction and chatbots are likely to become one of the main ways we all use the web, including how we shop on marketplaces.

Messaging, payments and super apps

This shift towards talking to marketplace apps is potentially part of a far broader shift in how everyone will interact with the internet. In the case of marketplaces, it will allow consumers to talk to these vendors (actually, their AI-powered apps) to search, get recommendations, discuss products and then buy them in a much more naturalistic way.

Already many younger people are communicating by sending each other voice messages. Apple has added the ability to send video messages via its FaceTime messaging app. Facebook is, as said, creating chatbots with personality. The way we access the web is already changing.

Even for those who aren't shifting to this new way of communicating with the digital world, messaging services such as SMS, iMessage, WhatsApp and social media messaging are all increasingly playing a role in how consumers interact with the companies they do business with. The era of conversational commerce, be that through text or voice, is upon us, and it's set to create some radical new ways in which we shop. Social media sites, for instance, are shifting from carrying promotional posts about retailers to allowing consumers to buy from them, adding to this conversational commerce model.

Combining messaging and social engagement with shopping, and indeed payments, can create a powerful new marketplace model. Bringing them all together in one place to create a super app has the potential to build a new, rich way to interact with retailers, which, in turn, can lead to greater sales.

Such super apps already exist in China. WeChat, for example, combines social media, messaging, payment and ecommerce in one app. Elon Musk's rebrand of Twitter to X and the changes he has instigated at the platform are rumoured to be laying the groundwork to turn X into such a super app. For marketplaces, this presents an opportunity.

The platforms already have the customer base, the products and the payments tools. Add in messaging and engagement and they could relatively easily shift to being super apps. Conversely, social media platforms have the customer base, the messaging and the retailers on board. As they add ecommerce, they too are poised to do the same.

As the internet slowly edges towards being an immersive metaverse, these super apps would be perfectly positioned to usher in a whole new modus operandi for online sellers and customers, radically altering not only what constitutes a marketplace, but also what the internet actually looks like to its users.

This feature was authored by Paul Skeldon, and appears in the ChannelX European Marketplaces 2023 report.

Download it in full to discover what the marketplace landscape looks like today, what factors are key to a successful marketplace, and how marketplaces are working to protect both brands and customers from fraud, counterfeits, and piracy.

See the article here:

Are chatbots, super apps and AI the future of European marketplaces? - InternetRetailing

1 Under-the-Radar Artificial Intelligence (AI) Stock to Buy Hand Over … – The Motley Fool

One of the lesser-known enterprise technology companies benefiting from artificial intelligence (AI) is ServiceNow (NOW 0.77%). The company specializes in business process automation across a variety of IT services.

Bill McDermott became CEO of ServiceNow back in October 2019 after a long, successful run at SAP. Since then, the stock is up over 160%. A good reason investors have enjoyed such robust returns is that ServiceNow has become one of the prominent players in digital transformation. Utilizing data to make more informed, impactful decisions is becoming increasingly important for businesses of all sizes. While there are a number of dashboarding tools and data analytics providers, ServiceNow has surfaced as one of the leading platforms due to its ever-evolving library of product offerings.

While the stock has been generous to investors for several years now, I think it could just be getting started. In fact, ServiceNow made The Fool's list of most undervalued growth stocks for 2023.

AI is a massive catalyst for the company, and its current financial and operating performance demonstrates that. While it may not be as well known as Microsoft, Alphabet, or Amazon, there is plenty of reason to believe that AI is helping ServiceNow evolve into an even more integral platform for businesses of all sizes. Let's assess if the stock deserves a spot in your portfolio.

As with its big tech counterparts, ServiceNow's management has been touting the prospects of AI for the last several months. The company derives revenue from two primary sources: subscriptions and professional services. Subscriptions represent high-margin recurring revenue streams, so investors tend to scrutinize trends in this metric.

For the quarter that ended Sept. 30, ServiceNow reported $2.2 billion in subscription revenue, up 27% year over year. Even better, the gross margin for subscription services clocked in at 81%. This high level of profitability has helped ServiceNow generate consistent positive free cash flow, which the company can use for share buybacks or to reinvest into new products and services.

During ServiceNow's Q3 earnings call, McDermott discussed why he thinks AI will help fuel continuous growth. Specifically, he referenced a study by IT research firm Gartner that estimates $3 trillion will be spent on AI-powered solutions between 2023 and 2027. Furthermore, McDermott proclaimed that AI "isn't a hype cycle; it is a generational movement."

Software companies often spend a long time testing and demoing their products. Although this often makes the sales cycle and vendor procurement process long and arduous, it is paramount that devices and systems work together seamlessly. The collection of software platforms that a company relies on is called its tech stack. In a way, the tech stack represents the nuts and bolts that hold everything together. If important data is stored across multiple systems but cannot easily be stitched together, the tech stack probably isn't as well-managed as it could be. Despite this challenging process, ServiceNow is answering the call.

According to its Q3 earnings report, the company has released over 5,000 add-ons and new capabilities for its various modules in 2023 alone, many of which are rooted in generative AI. Moreover, investors learned that 18 of the company's top 20 net new annual contract value deals during Q3 involved eight products or more. This level of cross-selling is precisely why ServiceNow is generating high-double-digit top-line growth at a high margin.

This is an important dynamic because it shows how end-users of ServiceNow are outlaying a lot of capital up front. It's not uncommon for a software company to make a sale, and then try to cross-sell additional services after the initial deal (perhaps upon renewal of the contract). However, in ServiceNow's case, the company is doing a great job of penetrating customers more deeply during the early stages of customer acquisition. By capturing this level of customer value so early, ServiceNow is active beyond just one layer of the tech stack and evolving into what McDermott describes as a full-spectrum "intelligent super platform."

The chart below illustrates the price-to-free-cash-flow multiple for ServiceNow versus that of peers Salesforce.com, Workday, Atlassian, and Snowflake.

NOW Price to Free Cash Flow data by YCharts

Interestingly, ServiceNow's price-to-free-cash-flow multiple of 52 puts it in the middle of its peer set. Salesforce.com trades at a meaningful discount by this measure, but I'd argue that is because the company is more mature and less of a growth stock.
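For readers unfamiliar with the metric, a price-to-free-cash-flow multiple is simply market capitalization divided by trailing free cash flow. The figures below are invented purely to show the arithmetic, not actual company data.

```python
# Hypothetical figures, for arithmetic only (not actual company data).
market_cap_usd = 140e9       # $140B market capitalization
free_cash_flow_usd = 2.7e9   # $2.7B trailing free cash flow

p_fcf = market_cap_usd / free_cash_flow_usd
print(f"Price-to-free-cash-flow multiple: {p_fcf:.1f}x")  # ~51.9x
```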

Given the overall haziness of the macroeconomy, I'd say ServiceNow is performing extremely well. I believe the stock has much more room to run due to the heightened interest in AI and its various use cases. As of now, companies are still spending quite a bit of time figuring out exactly how new breakthroughs in generative AI can best serve the business. For this reason, it's appropriate to think that ServiceNow's place in IT budgets and its role in the AI journey is just beginning.

Long-term investors should be excited about the company's ability to thrive in a market primarily dominated by big tech. I think that now is a terrific opportunity to initiate a position in ServiceNow, given its inroads into AI, strong financial position, and attractive valuation relative to its peers.

Suzanne Frey, an executive at Alphabet, is a member of The Motley Fool's board of directors. John Mackey, former CEO of Whole Foods Market, an Amazon subsidiary, is a member of The Motley Fool's board of directors. Adam Spatacco has positions in Alphabet, Amazon, and Microsoft. The Motley Fool has positions in and recommends Alphabet, Amazon, Atlassian, Microsoft, Salesforce, ServiceNow, Snowflake, and Workday. The Motley Fool recommends Gartner. The Motley Fool has a disclosure policy.

Read the original post:

1 Under-the-Radar Artificial Intelligence (AI) Stock to Buy Hand Over ... - The Motley Fool

What Is Consciousness, Really? Either AI Rules or Human Spirit … – CEOWORLD magazine

Consciousness is one of the most pressing issues of our time. Right up there with climate.

With AI taking off, tech experts are worried that intelligent machines will gain consciousness within the next few years! Meanwhile, we experience life basically in our conscious minds, yet we have only the faintest inkling of how this mysterious phenomenon works, or even what it is. For these reasons, and more, it's time to address the question: What is consciousness, really?

For example:

Is human consciousness simply the physical bits of information processed within our brain's circuitry? An electrochemical, living intelligence? Like AI, only different?

If AI surpasses human intelligence, what happens to us mere mortals? To we, the people?

How does machine consciousness square with religious beliefs? Is spirituality simply an illusion? Is God a fantasy?

To examine these questions, a TechCast study used collective intelligence to combine background data and the best judgment of 30 experts. We've used this method for 20 years to forecast emerging technologies and social trends with good accuracy. For instance, TechCast published forecasts more than a decade ago that AI would take off in 2023.

The best way to make sense of these results is to sketch two different propositions describing implications of opposing views.

Proposition 1: AI dominates human consciousness

Yes, this is a stark statement, but it's also the logical outcome of the view that sees no significant difference between human consciousness and intelligent forms of AI.

The belief underlying this view is that consciousness is the intelligence shown by any sufficiently complex system. The corollary belief is that human consciousness is simply an outcome of information processing in the brain. And since AI will soon have near-infinite power to process information, some form of Artificial Super Intelligence (ASI) is likely to eclipse human intelligence. From this view, it follows that humans can expect to become inferior to AI, most jobs will be automated, and the vast powers of ASI could threaten humanity.

While this is an elegant theory, its implications are so impossible that they seem to refute the theory itself. For instance, sound solutions to the climate crisis are well known, but the obstacles are procrastination, self-interest, and the lack of political will. How could the most powerful ASI possibly overcome these utterly human foibles? Or could sheer machine intelligence reduce mass shootings? Reconcile opposing sides in the interminable conflict over abortion? In short, the most intelligent AI is not likely to rule the world.

I hope the point is clear. Intelligence, the ability to manage objective knowledge, is fundamentally different from the subjective ability to resolve the messy, intractable dilemmas that confound humans. Julian Taylor, software engineer at Sun Microsystems, highlighted the limits of AI: "No algorithm that I have devised has ever developed an unpredictable goal. This is simply not what these algorithmic systems do." And Kurt Gödel, the famous scientist who proved the Incompleteness Theorem, concurred: "No conceivable collection of algorithms can possibly manifest human self-aware consciousness."

The meaning of this limitation is profound: the most intelligent machine can't be endowed with agency, the ability to exercise free will, to act independently. Yes, it's almost a given that AI will soon be able to model emotions, values, beliefs, and other human qualities. But they'll be just that: simple simulations of human consciousness, not the real thing. The most brilliant machine intelligence seems doomed to lack agency. For an everyday example, your GPS car navigation system may be brilliant at leading you somewhere, but you must tell it your destination. Only you have agency!

Of the endless brilliant robots and semi-conscious AI systems out there, none are capable of truly independent behavior. For a simple example, your pet Roomba may sweep your floors with great abandon, but only within its programmed limits. Cute, but it doesn't have a conscious mind, really. No agency. ChatGPT? Nope. You have to tell it what to do. Even IBM's chess-playing powerhouse Deep Blue, able to beat the top chess masters: no agency. Without some dramatic, as yet unknown breakthrough in AI, the concept of machine intelligence with human consciousness remains only a theory lacking support.

Yet almost all AI experts are convinced that AI superpowers will eclipse humans. Yuval Harari leads this wave of fear with his belief that AI is an "alien species" that could trigger humanity's extinction. This blind faith fuels techno hype reminiscent of the mass hysteria we saw when Y2K threatened to destroy civilization in the year 2000. As the critical turn of the century passed, nothing happened! This study provides a sober vision that is realistic. It's time to think of AI as simply a powerful tool to be managed carefully.

This impasse in the logic of AI superiority leads to a second proposition that resolves the contradiction.

Proposition 2: Human spirit transcends AI

The lack of AI agency stands in sharp contrast to what Webster's dictionary calls the "human spirit." We could call it the self, or we could think of it as the soul. Whatever it is, knowing that something in human consciousness is more powerful than information helps make sense of our world today.

A solid majority of our experts think that mood shifts, altered awareness, free will, and other states of mind transcend the physical body. Nobel laureate Roger Sperry summed it up: "The mind acts as an independent force."

This view also affirms the belief of all religions that humans are spiritual beings. Albert Einstein himself said, "The most profound emotion we can experience is the mystical; some spirit is manifest in the laws of the universe." And cognitive scientist David Chalmers thinks, "We are likely to discover that consciousness is a fundamental property of the universe, like space, time and gravity."

Once we accept this special role of the human spirit, the dilemmas noted above fade into a coherent story of the future. Sure, ASI is almost certain to vastly exceed our feeble ability to manage the overwhelming complexity of modern life. But that's okay, because we'll be there to guide it. To design the systems using principles that ensure their safe behavior. To monitor them carefully and take action to avert problems.

Not only can we manage this AI-human symbiosis; the resulting freedom from today's mind-numbing knowledge work will unleash even more human freedom. More creativity. More awareness. If we can summon the courage and global consciousness to surmount the enormous challenges ahead, we might even see the flowering of the human spirit.

Written by William E. Halal.



Read the rest here:

What Is Consciousness, Really? Either AI Rules or Human Spirit ... - CEOWORLD magazine

An AI just negotiated a contract for the first time ever and no human was involved – CNBC


In a world first, artificial intelligence demonstrated the ability to negotiate a contract autonomously with another artificial intelligence without any human involvement.

British AI firm Luminance developed an AI system based on its own proprietary large language model (LLM) to automatically analyze and make changes to contracts. LLMs are a type of AI algorithm that can achieve general-purpose language processing and generation.

Jaeger Glucina, chief of staff and managing director of Luminance, said the company's new AI aimed to eliminate much of the paperwork that lawyers typically need to complete on a day-to-day basis.

In Glucina's own words, Autopilot "handles the day-to-day negotiations, freeing up lawyers to use their creativity where it counts, and not be bogged down in this type of work."

"This is just AI negotiating with AI, right from opening a contract in Word all the way through to negotiating terms and then sending it to DocuSign," she told CNBC in an interview.

"This is all now handled by the AI, that's not only legally trained, which we've talked about being very important, but also understands your business."

Luminance's Autopilot feature is much more advanced than Lumi, Luminance's ChatGPT-like chatbot.

That tool, which Luminance says is designed to act more like a legal "co-pilot," lets lawyers query and review parts of a contract to identify any red flags and clauses that may be problematic.

With Autopilot, the software can operate independently of a human being, though humans are still able to review every step of the process, and the software keeps a log of all the changes made by the AI.

CNBC took a look at the tech in action in a demonstration at Luminance's London offices. It's super quick. Clauses were analyzed, changes were made, and the contract was finalized in a matter of minutes.

There are two lawyers, one on either side of the agreement: Luminance's general counsel and the general counsel for one of Luminance's clients, the research firm ProSapient.

Two monitors on either side of the room show photos of the lawyers involved, but the forces driving the contract analysis, scrutinizing its contents and making recommendations, are entirely AI.

In the demonstration, the AI negotiators go back and forth on a non-disclosure agreement, or NDA, that one party wants the other to sign. NDAs are a bugbear in the legal profession, not least because they impose strict confidentiality limits and require lengthy scrutiny, Glucina said.

"Commercial teams are often waiting on legal teams to get their NDAs done in order to move things to the next stage," Glucina told CNBC."So it can hold up revenue, it can hold up new business partnerships, and just general business dealings. So, by getting rid of that, it's going to have a huge effect on all parts of the business."

Legal teams are spending around 80% of their time reviewing and negotiating routine documents, according to Glucina.

Luminance's software starts by highlighting contentious clauses in red. Those clauses are then changed to something more suitable, and the AI keeps a running log of every change it makes alongside the document. The AI also takes into account each company's preferences for how it normally negotiates contracts.

For example, the NDA suggests a six-year term for the contract. But that's against Luminance's policy. The AI acknowledges this, then automatically redrafts it to insert a three-year term for the agreement instead.
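Luminance hasn't published Autopilot's internals, but the behaviour described above, checking each clause against a company playbook, redrafting anything off-policy and logging the change, can be sketched in miniature. The sketch below is a hypothetical illustration, not Luminance's actual software; the class names, the clause format and the three-year rule are all invented to mirror the NDA example:

```python
# Hypothetical sketch of a policy-driven contract review loop.
# NOT Luminance's API: all names, rules and data structures are invented.
from dataclasses import dataclass, field

@dataclass
class Playbook:
    """A company's negotiation preferences, e.g. NDA terms of 3 years max."""
    max_term_years: int = 3

@dataclass
class Negotiator:
    playbook: Playbook
    change_log: list = field(default_factory=list)  # audit trail of edits

    def review(self, clauses: dict) -> dict:
        """Redraft any clause that violates the playbook, logging each change."""
        redrafted = dict(clauses)
        term = clauses.get("term_years")
        if term is not None and term > self.playbook.max_term_years:
            redrafted["term_years"] = self.playbook.max_term_years
            self.change_log.append(
                f"term_years: {term} -> {self.playbook.max_term_years}"
            )
        return redrafted

nda = {"term_years": 6, "governing_law": "England"}
bot = Negotiator(Playbook())
print(bot.review(nda))   # {'term_years': 3, 'governing_law': 'England'}
print(bot.change_log)    # ['term_years: 6 -> 3']
```

The real system presumably uses an LLM to read and rewrite free-form legal text rather than structured fields, but the review-redraft-log loop is the same shape.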

Glucina said it makes more sense to use a tool like Luminance's Autopilot than something like OpenAI's software, because it is tailored specifically to the legal industry, whereas tools like ChatGPT, DALL-E and Anthropic's Claude are more general-purpose platforms.

That was echoed by Peel Hunt, the U.K. investment bank, in a note to clients last week.

"We believe companies will leverage domain-specific and/or private datasets (eg data curated during the course of business) to turn general-purpose large language models (LLMs) into domain-specific ones," a team of analysts at the firm said in the note.

"These should deliver superior performance to the more general-purpose LLMs like OpenAI, Anthropic, Cohere, etc."

Luminance didn't disclose how much it costs to buy its software. The company sells annual subscription plans allowing unlimited users to access its products, and its clients include the likes of Koch Industries and Hitachi Vantara, as well as consultancies and law firms.

Founded in 2016 by mathematicians from the University of Cambridge, Luminance provides legal document analysis software intended to help lawyers become more efficient.

The company uses an AI and machine-learning-based platform to process large, complex and fragmented data sets of legal documentation, enabling managers to easily assign tasks and track the progress of an entire legal team.

It is backed by Invoke Capital, a venture capital fund set up by U.K. tech entrepreneur Mike Lynch, as well as Talis Capital and Future Fifty.

Lynch, a controversial figure who co-founded enterprise software firm Autonomy, faces extradition from the U.K. to the U.S. over charges of fraud.

He stepped down from the board of Luminance in 2022, though he remains a prominent backer.

Read the original here:

An AI just negotiated a contract for the first time ever and no human was involved - CNBC

‘What do you think of AI?’ People keep asking this question. Here’s five things the experts told me – ABC News

For the last few months, there's one question that I've been asked countless times.

It comes up without fail during idle moments: coffee breaks at work, or standing around out at the dog park.

What do you think about AI?

Usually, the tone is quietly sceptical.

For me, the way it's asked conveys a weary distrust of tech hype, but also a hint of concern. People are asking: Should I be paying attention to this?

Sure, at the start of 2023, many of us were amazed by new generative artificial intelligence (AI) tools like ChatGPT.

But, as the months have passed, these tools have lost their novelty.

The tech industry makes big claims about how AI is going to change everything.

But this is an industry that has made big claims before and been proved wrong. It's happened with virtual reality, cryptocurrency, NFTs and the metaverse. And that's just in the past three years.

So, what do I think of AI?

For the past few months I've been working on a podcast series about AI for the ABC, looking broadly at this topic.

It's been a bit like trekking through a blizzard of press releases and product announcements.

Everything solid dissolves into a white-out of jargon and dollar signs.

There's so much excitement, and so much money invested, that it can be hard to get answers to the big underlying questions.

And, of course, we're talking about the future! That's one topic on which no one ever agrees, anyway.

But here's what I've learned from speaking to some of the top AI experts.

Forget Terminator. Forget 2001: A Space Odyssey.

Hollywood's long-ago visions of the futureare getting in the way of understanding the AI we have today.

If you picture a skeletal robot with red eyes every time someone says "AI", you'll have totally the wrong idea about what AI can do, what it can't, and what risks we should reasonably worry about.

Most of the AI tools we use, from ChatGPT to Google Translate, are machine learning (ML) systems.

If AI is the broad concept of machines carrying out tasks in ways we would consider "smart", ML is one way of achieving this.

The general idea is that, instead of telling a machine how to do a task, you give it lots of examples of right and wrong ways of doing the task, and let it learn for itself.

So for driverless cars, you give an ML system lots of video and other data of cars being driven correctly, and it learns to do the same.

For translation, you give an ML tool the same sentences in different languages, and it figures out its own method of translating between the two.
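To make the learn-from-examples idea concrete, here is a minimal sketch using scikit-learn, a widely used ML library. The task is language identification rather than full translation, and the four training sentences are invented for illustration; real systems learn from millions of examples:

```python
# Toy machine learning: no hand-written rules about English or French.
# We show the model labelled examples and let it find the pattern itself.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

sentences = [
    "the cat sat on the mat", "where is the train station",
    "le chat est sur le tapis", "ou est la gare",
]
labels = ["en", "en", "fr", "fr"]

# Character n-grams are a standard trick for language ID on small data.
model = make_pipeline(
    CountVectorizer(analyzer="char", ngram_range=(1, 3)),
    MultinomialNB(),
)
model.fit(sentences, labels)  # "learning" = fitting the model to the examples

print(model.predict(["the dog is here", "le chien est ici"]))
# Expected: ['en' 'fr'], with no rule about either language coded by hand.
```

Swap the labels for steering angles or translated sentences and the same show-by-example recipe scales, with vastly more data and bigger models, to the driverless-car and translation cases above.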

Why does this distinction between telling and learning matter?

Because an ML tool that can navigate a roundabout or help you order coffee in French isn't plotting to take over the world.

The fact it can do these narrow tasks is very impressive, but that's all it's doing.

It doesn't even "know" the world exists, says Rodney Brooks, a world-leading Australian roboticist.

"We confuse what it does with real knowledge," he says.

Rodney Brooks has one of the most impressive resumes in AI. Born, raised and educated in Adelaide, during the 1990s he ran the largest computer science department in the world, at MIT. He's even credited with inventing the robotic vacuum cleaner.

"Because I've built more robots than any other human in the world,I can't quite be ignored,"he told me when I called him at his home in San Francisco, one evening.

Professor Brooks, who is a professor emeritus at MIT, says the abilities of today's AI, though amazing, are wildly overestimated.

He makes a distinction between "performance" and "competence".

Performance is what the AI actually does: translate a sentence, for example. Competence is its underlying knowledge of the world.

With humans, someone who performs well is also generally competent.

Say you walk up to a stranger and ask them for directions. If they answer with confidence, we figure we can also ask them other things about the city: Where's the train station? How do you pay for a ticket?

But that doesn't apply to AI. An AI that can give directions doesn't necessarily know anything else.
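The gap between performance and competence can be caricatured in a few lines of code: a system that answers one kind of query flawlessly while knowing nothing at all behind the answer. This is a deliberately crude sketch, not how modern AI works internally, but it shows why a confident answer implies no broader knowledge:

```python
# Performance without competence: a "directions bot" built from a lookup
# table. It performs well on one query and has no world model behind it.
# (Toy example; the route is invented.)
ROUTES = {
    ("station", "museum"): "Turn left, walk two blocks, it's on the right.",
}

def directions_bot(origin: str, destination: str) -> str:
    # A hit looks impressively competent; a miss reveals there is
    # no understanding of the city, tickets, or anything else.
    return ROUTES.get((origin, destination), "I have no idea.")

print(directions_bot("station", "museum"))         # confident and correct
print(directions_bot("station", "ticket office"))  # clueless
```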

"We see ChatGPT do things ... and people say 'It's really amazing'. And then they generalise and imagine it can do all kinds of things there's no evidence it can do," Professor Brooks says.

"And then we see the hype cycle we've been in over the last year."

Another way of putting this is that we have a tendency to anthropomorphise AI: to see ourselves in the tools we've trained to mimic us.

As a result, we make the wrong assumptions about the scale and type of intelligence beneath the performance.

"I think it's difficult for people, even within AI, to figure out what is deep and what is a technique," Professor Brooks says.

Now, many people in AI say it's not so clear-cut.

Rodney Brooks and others may be completely wrong.

Maybe future, more advanced versions of ChatGPT will have an underlying model of the world. Performance will equate to competence. AI will develop a general intelligence, similar to humans.

Maybe. But that's a big unknown.

For the moment, AI systems are generally very narrow in what they can do.

From the buzz out of Silicon Valley, you could be forgiven for thinking the course of the future is pretty much decided.

Sam Altman, the boss of OpenAI, the company that built ChatGPT, has been telling everyone that AI smarter than any human is right around the corner. He calls this dream Artificial General Intelligence, or AGI.

Perhaps as a result of this, minor advances are often communicated to the public as though they're proof that AI is becoming super-intelligent. The future is coming, get out of the way.

ChatGPT can pass a law exam? This changes everything.

Google has a new chatbot? This changes everything.

Beyond this hype, there are lots of varying, equally valid, expert perspectives on what today's AI is on track to achieve.

The machine learning optimists, people like Sam Altman, are just one particularly vocal group.

They say that not only will we achieve AGI, but it will be used for good, ushering in a new age of plenty.

"We are working to build tools that one day can help us make new discoveries and address some of humanity's biggest challenges, like climate change and curing cancer," Mr Altman told US lawmakers in May.

Then there are the doomers. They broadly say that, yes, AI will be really smart, but it won't be addressing climate change and curing cancer.

Some believe that AI will become sentient and aggressively pursue its own goals.

Other doomers fear powerful AI tools will fall into the wrong hands and be misused to generate misinformation, hack elections, and generally spread murder and mayhem.

Then there are the AI sceptics. People like Rodney Brooks.

The real danger, they say, isn't that AI will be too smart, but that it will be too dumb, and we won't recognise its limits.

They point to examples of this happening already.

Driverless cars are crashing into pedestrians in San Francisco. Journalists are being replaced by faulty bots. Facial recognition is leading to innocent people being locked up.

"Today's AI is a very powerful trick," Professor Brooks says.

"It's not approaching, or it's not necessarily even on the way, to a human-level intelligence."

And there's a fourth group (these groups overlap in complicated ways) who say that all of the above misses the point.

We should worry less about what AI will become, and talk more about what we want it to be.

Rumman Chowdhury, an expert in the field of responsible AI, says talking about the future as something that will happen to us, rather than something we shape, is a cop-out by tech companies.

AI isn't a sentient being, but just another tech product.

"In anthropomorphising and acting like artificial intelligence is an actor that makes independent decisions, people in tech absolve themselves of the sins of the technology they built," she says.

"In their story, they're a good guytrying to make this thing to help people.

"They've made us believe this AI is alive and making independent decisions and therefore they're not at fault."

Most of the popular discussion about AI and the future focuses on what happens when AI gets too powerful.

This is sometimes called the "alignment problem". It's the idea that, in the end, sentient AI will not do what we want.

Within the AI community, the term "p(doom)" is used to describe the probability of this happening. It's the percentage chance that AI is going to wipe out humanity. "My p(doom) is 20 per cent," etc.

But the most chilling vision of the future I heard wasn't one where robots stage an uprising.

Instead, it was much more mundane and plausible. A boring dystopia.

It's a future where AI pervades every aspect of our lives, from driving a car to writing an email, and a handful of companies that control this technology get very rich and powerful.

Maybe in this future AI is super-intelligent, or maybe not. But it's at least good enough to displace workers in many industries.

New jobs are created, but they're not as good, because most people aren't as economically useful as they were. The skills these jobs require, skills that were once exclusively human, can be done by AI.

High-paying creative jobs become low-paying ones, usually spent interacting with AI.

This is the fear that partly motivated US actors and screenwriters to go on strike this year. It's why some authors are suing AI companies.

It's a vision of the future where big tech's disruption of certain industries over the past 20 years, Google and Facebook sucking advertising revenue out of media and publishing, for instance, is just the preamble to a much larger, global transfer of wealth.

"The thing I worry about is there are fewer and fewer people holding more and more wealth and power and control," Dr Chowdhury says.

"As these models becomemore expensive to build and make, fewer and fewer people actually hold the keys to what's going to be driving essentially the economy of the entire world."

Michael Wooldridge, a computer scientist at Oxford University and one of the world's leading AI researchers, is also worried about this kind of future.

The future he envisions is less like The Terminator, and more like The Office.

Not only are most people paid less for the same work, but they're micromanaged by AI productivity software.

In this"deeply depressing" scenario,humans are the automata.

"A nagging concern I have is that we end up with AI as our boss," Professor Wooldridge says.

"Imagine in a very near future we've got AI monitoring every single keystroke that you type. It's looking at every email that you send. It's monitoring you continually throughout your working day.

"I think that future, unless something happens, feels like it's almost inevitable."

Sixty years ago, in the glory days of early AI research, some leading experts were convinced that truly intelligent, thinking machines were a decade or two away.

Then, in the early 1980s, the same thing happened: a few breakthroughs led to a flurry of excitement. This changes everything.

But as we know now, it didn't change everything. The future that was imagined never happened.

The third AI boom started in the 2010s and has accelerated through to 2023.

It's either still going, or tapering off slightly. In recent months, generative AI stocks have fallen in the US.

ChatGPT set the record for the fastest-growing user base ever in early 2023. But it hasn't maintained this momentum: visits to the site fell from June through August this year.

To explain what's going on, some analysts have referenced Amara's Law, which states that we tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.

They've also pointed to something called the Gartner Hype Cycle, which is a graphical representation of the excitement and disappointment often associated with new technologies.
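The hype cycle is usually drawn as a spike of "inflated expectations", a trough of disillusionment, then a slow climb to a "plateau of productivity". One common stylisation builds the curve from a hype spike (a Gaussian) added to slowly realised value (a sigmoid); the toy plot below uses arbitrary parameters chosen only to give the familiar shape:

```python
# Stylised Gartner Hype Cycle: hype spike (Gaussian) + real value (sigmoid).
# The parameters are illustrative, not measured data.
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0, 10, 500)                   # time since the tech trigger
hype = 1.6 * np.exp(-((t - 1.5) ** 2) / 0.5)  # peak of inflated expectations
value = 1.0 / (1.0 + np.exp(-(t - 5.0)))      # slowly realised usefulness

plt.plot(t, hype + value)
plt.xlabel("time")
plt.ylabel("visibility / expectations")
plt.title("Stylised Gartner Hype Cycle (toy parameters)")
plt.show()
```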

Continued here:

'What do you think of AI?' People keep asking this question. Here's five things the experts told me - ABC News