
Artificial Intelligence May Be ‘Threat’ to Human Health, Experts Warn – HealthITAnalytics.com

May 19, 2023 - In a recent analysis published in BMJ Global Health, an international group of researchers and public health experts argued that artificial intelligence (AI) and artificial general intelligence (AGI) may pose numerous threats to human health and well-being, calling for research into these technologies to be halted until they can be properly regulated.

The authors noted that AI technology has various promising applications in healthcare, but posited that misuse of these solutions could harm human health through their impact on social, economic, political, and security-related determinants of health.

The research and development of healthcare AI are progressing rapidly, the authors stated, highlighting that much of the literature examining these tools is focused on the potential benefits gained through their implementation and use. Conversely, discussions about the potential harms of these technologies are often limited to looking at the misapplication of AI in the clinical setting.

However, AI could negatively impact upstream determinants of health, characterized by the American Medical Association (AMA) as individual factors that may seem unrelated to health on the surface, but actually have downstream impacts on patients' long-term health outcomes.

The AMA indicates that these upstream factors, such as living conditions or social and institutional inequities, have not always been within the scope of public health research but can exacerbate disease incidence, injury rates, and mortality.

The authors argued that the potential misuse and ongoing failure to anticipate, adapt to, and regulate AI's impacts on society could negatively affect these factors and cause harm.

The analysis identified three impacts AI could have on upstream and social determinants of health (SDOH) that could result in threats to human health: the manipulation and control of people, the proliferation of lethal autonomous weapons systems (LAWS), and the potential obsolescence of human labor.

The first threat, the authors explained, results from AI's ability to process and analyze large datasets containing sensitive or personal information, including images. This ability could enable the misuse of AI solutions to develop highly personalized, targeted marketing campaigns or significantly expand surveillance systems.

Such capabilities could be used with good intentions, the authors noted, such as countering terrorism, but they could also be used to manipulate individual behavior. The authors cited cases of AI-driven subversion of elections across the globe, as well as AI-driven surveillance systems that perpetuate inequities by using facial recognition and big data to produce assessments of individual behavior and trustworthiness.

The second threat is related to the development and use of LAWS, which can locate, select, and engage human targets without supervision. The authors pointed out that these can be attached to small devices like drones and easily mass-produced, providing bad actors with the ability to kill at an industrial scale.

READ MORE: Precision Oncology Data Registry May Perpetuate Health Disparities

The third threat is concerned with how AI may make human jobs and labor obsolete. The authors acknowledged that AI has the potential to help perform jobs that are repetitive, unpleasant, or dangerous, which comes with some benefits to humans. However, they noted that, to date, increased automation has largely contributed to inequitable wealth distribution and could exacerbate the adverse health effects associated with unemployment.

In addition, the authors described how AGI could pose an existential threat to humanity.

"We are now seeking to create machines that are vastly more intelligent and powerful than ourselves," they said. "The potential for such machines to apply this intelligence and power, whether deliberately or not, in ways that could harm or subjugate humans, is real and has to be considered."

They highlighted that AGI's connection to the internet and the real world, including robots, vehicles, digital systems that help run various aspects of society, and weapons, could be the biggest event in human history, for the benefit of humanity or to its detriment.

Because of the scale of these potential threats and the significant impacts they could have on human health, the authors stated that healthcare professionals have a critical role to play in raising awareness around the risks of AI. Further, the authors argued for the prohibition of certain types of AI and joined calls for a moratorium on AGI development.

"With exponential growth in AI research and development, the window of opportunity to avoid serious and potentially existential harms is closing. The future outcomes of the development of AI and AGI will depend on policy decisions taken now and on the effectiveness of regulatory institutions that we design to minimise risk and harm and maximise benefit," they wrote.

See more here:

Artificial Intelligence May Be 'Threat' to Human Health, Experts Warn - HealthITAnalytics.com

Read More..

Today’s AI boom will amplify social problems if we don’t act now, says AI ethicist – ZDNet

AI developers must move quickly to develop and deploy systems that address algorithmic bias, said Kathy Baxter, Principal Architect of Ethical AI Practice at Salesforce. In an interview with ZDNET, Baxter emphasized the need for diverse representation in data sets and user research to ensure fair and unbiased AI systems. She also highlighted the significance of making AI systems transparent, understandable, and accountable while protecting individual privacy. Baxter stressed the need for cross-sector collaboration, like the model used by the National Institute of Standards and Technology (NIST), so that we can develop robust and safe AI systems that benefit everyone.

One of the fundamental questions in AI ethics is ensuring that AI systems are developed and deployed without reinforcing existing social biases or creating new ones. To achieve this, Baxter stressed the importance of asking who benefits and who pays for AI technology. It's crucial to consider the data sets being used and ensure they represent everyone's voices. Inclusivity in the development process and identifying potential harms through user research are also essential.

"This is one of the fundamental questions we have to discuss," Baxter said. "Women of color, in particular, have been asking this question and doing research in this area for years now. I'm thrilled to see many people talking about this, particularly with the use of generative AI. But the things that we need to do, fundamentally, are ask who benefits and who pays for this technology. Whose voices are included?"

Social bias can be infused into AI systems through the data sets used to train them. Unrepresentative data sets containing biases, such as image data sets with predominantly one race or lacking cultural differentiation, can result in biased AI systems. Furthermore, applying AI systems unevenly in society can perpetuate existing stereotypes.
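
As a minimal illustration of the kind of representation check this implies, the sketch below audits how often each group appears in a labeled data set before training. The annotations and group names are synthetic, invented purely for this example.

```python
# A minimal sketch of auditing data set representation before training.
# The annotations below are synthetic and the group names hypothetical;
# a real audit would read demographic labels from the actual data set.
from collections import Counter

annotations = ["group_a"] * 880 + ["group_b"] * 90 + ["group_c"] * 30

counts = Counter(annotations)
total = sum(counts.values())
for group, n in counts.most_common():
    print(f"{group}: {n} images ({n / total:.1%})")

# A heavily skewed distribution like this one is a warning sign that a model
# trained on the data may perform worse for under-represented groups.
```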

To make AI systems transparent and understandable to the average person, prioritizing explainability during the development process is key. Techniques such as "chain of thought prompts" can help AI systems show their work and make their decision-making process more understandable. User research is also vital to ensure that explanations are clear and users can identify uncertainties in AI-generated content.
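
As a rough sketch of what such a prompt can look like (the wording here is invented for illustration, not Salesforce's or any vendor's actual template), a "chain of thought" instruction simply asks the model to show its intermediate steps before answering:

```python
# A minimal, library-free sketch of a "chain of thought" prompt: the model is
# asked to write out intermediate reasoning before the final answer, making
# its decision-making process easier for a user to inspect.
def chain_of_thought_prompt(question: str) -> str:
    return (
        "Answer the question below. First write out your reasoning "
        "step by step, then give the final answer on its own line.\n\n"
        f"Question: {question}\n"
        "Reasoning:"
    )

# The resulting string would be sent to whatever AI system is in use.
print(chain_of_thought_prompt(
    "A train leaves at 9:40 and arrives at 11:05. How long is the trip?"
))
```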

Protecting individuals' privacy and ensuring responsible AI use requires transparency and consent. Salesforce follows guidelines for responsible generative AI, which include respecting data provenance and only using customer data with consent. Allowing users to opt in, opt out, or have control over their data use is critical for privacy.

"We only use customer data when we have their consent," Baxter said. "Being transparent when you are using someone's data, allowing them to opt-in, and allowing them to go back and say when they no longer want their data to be included is really important."

As the competition for innovation in generative AI intensifies, maintaining human control and autonomy over increasingly autonomous AI systems is more important than ever. Empowering users to make informed decisions about the use of AI-generated content and keeping a human in the loop can help maintain control.

Ensuring AI systems are safe, reliable, and usable is crucial; industry-wide collaboration is vital to achieving this. Baxter praised the AI risk management framework created by NIST, which involved more than 240 experts from various sectors. This collaborative approach provides a common language and framework for identifying risks and sharing solutions.

Failing to address these ethical AI issues can have severe consequences, as seen in cases of wrongful arrests due to facial recognition errors or the generation of harmful images. Investing in safeguards and focusing on the here and now, rather than solely on potential future harms, can help mitigate these issues and ensure the responsible development and use of AI systems.

While the future of AI and the possibility of artificial general intelligence are intriguing topics, Baxter emphasizes the importance of focusing on the present. Ensuring responsible AI use and addressing social biases today will better prepare society for future AI advancements. By investing in ethical AI practices and collaborating across industries, we can help create a safer, more inclusive future for AI technology.

"I think the timeline matters a lot," Baxter said. "We really have to invest in the here and now and create this muscle memory, create these resources, create regulations that allow us to continue advancing but doing it safely."

Read the original post:

Today's AI boom will amplify social problems if we don't act now, says AI ethicist - ZDNet

Read More..

Artificial intelligence: World first rules are coming soon are you … – JD Supra

The EU's AI Act

The European Commission first released its proposal for a Regulation on Artificial Intelligence (the AI Act) on 21 April 2021. It is intended to be the first legislation setting out harmonised rules for the development, placing on the market, and use of AI in the European Union. The exact requirements (which mainly revolve around data quality, transparency, human oversight and accountability) depend on the risk classification of the AI in question, which ranges from high down to low and minimal risk, while a number of AI uses are prohibited outright. Given that the AI Act is expected to be a landmark piece of EU legislation that will have extraterritorial scope and will be accompanied by hard-hitting penalties (including potential fines of up to €30 million or 6% of worldwide annual turnover), we have been keeping a close eye on developments.

The latest development occurred on 11 May 2023, with Members of the European Parliament (MEPs) voting in committee in favour of certain proposed amendments to the original text of the AI Act. Some of the key amendments include:

General AI principles: New provisions containing general AI principles have been introduced. These are intended to apply to all AI systems, irrespective of whether they are high-risk, thereby significantly expanding the scope of the application of the AI Act. At the same time, MEPs expanded the classification of high-risk uses to include those that may result in harm to people's health, safety, fundamental rights or the environment. Particularly interesting is the addition of AI in recommender systems used by social media platforms (with more than 45 million users under the EU's Digital Services Act) to the high-risk list.

Prohibited AI practices: As part of the amendments, MEPs substantially amended the unacceptable risk / prohibited list to include intrusive and discriminatory uses of AI systems. Such bans now extend to a number of uses of biometric data, including indiscriminate scraping of biometric data from social media to create facial recognition databases.

Foundation models: While past versions of the AI Act have predominantly focused on 'high-risk' AI systems, MEPs introduced a new framework for all foundation models. This framework, which would (among other things) require providers of foundation models to guarantee robust protection of fundamental rights, health and safety, the environment, democracy and the rule of law, would particularly impact providers and users of generative AI. Such providers would also need to assess and mitigate risks, comply with design, information and environmental requirements and register in the applicable EU database, while generative foundation models would also have to comply with additional transparency requirements.

User obligations: 'Users' of AI systems are now referred to as 'deployers' (a welcome change given that the previous term somewhat confusingly was not intended to capture the end user). This change means deployers become subject to an expanded range of obligations, such as the duty to undertake a wide-ranging AI impact assessment, while on the other hand, end user rights are boosted, with end users now being conferred the right to receive an explanation about decisions made by high-risk AI systems.

The next step, plenary adoption, is currently scheduled to take place in June 2023. Following this, the proposal will enter the last stage of the legislative process, and negotiations between the European Parliament, the Council and the European Commission on the final form of the AI Act will begin.

However, even if these timelines are adhered to, the traction that AI regulation has been receiving in recent times may mean that the EU's AI Act is not the first ever legislation in this area. Before taking a look at the developments in this sphere occurring in the UK, let's consider why those involved in the supply of products need to have AI regulation on their radar in the first place.

The uses of AI are endless. Taking inspiration from a report issued by the UK's Office for Product Safety and Standards last year, we see AI in the product development space as having the potential to lead to:

Safer product design: AI can be used to train algorithms to develop only safe products and compliant solutions.

Enhanced consumer safety and satisfaction: Data collected with the support of AI can allow manufacturers to incorporate a consumer's personal characteristics and preferences in the design process of a product, which can help identify the product's future use and ensure it is designed in a way conducive to this.

Safer product assembly: AI tools such as visual recognition can assist with conducting quality inspections along the supply chain, ensuring all of the parts and components being assembled are safe - leaving little room for human error.

Prevention of mass product recalls: Enhanced data collection via AI during industrial assembly can enable problems which are not easy to identify through manual inspections to be detected, thereby allowing issue-detection before products are sold.

Predictive maintenance: AI can provide manufacturers with critical information which allows them to plan ahead and forecast when equipment may fail so that repairs can be scheduled on time.

Safer consumer use: AI in customer services can also contribute to product safety through the use of virtual assistants answering consumer queries and providing recommendations on safe product usage.

Protection against cyber-attacks: AI can be leveraged to detect, analyse and prevent cyber-attacks that may affect consumer safety or privacy.

On the other hand, there are risks when using AI. In the products space, this could result in:

Products not performing as intended: Product safety challenges may result from poor decisions or errors made in the design and development phase. A lack of good data can also produce discriminatory results, particularly impacting vulnerable groups.

AI systems lacking transparency and explainability: A consumer may not know or understand when an AI system is in use and taking decisions, or how such decisions are being taken. Such lack of understanding can in turn affect the ability of those who have suffered harm to claim compensation, given the difficulty in proving how the harm came about. This is a particular concern given that product safety has traditionally envisaged risks to the physical health and safety of end users, while AI products pose risks of immaterial harms (such as psychological harm) or indirect harms from cyber security vulnerabilities.

Cyber security vulnerabilities being exploited: AI systems can be hacked and/or lose connectivity, which may result in safety risks; for example, if a connected fire alarm loses connectivity, the consumer may not be warned if a fire occurs.

Currently, there is no overarching piece of legislation regulating AI in the UK. Instead, different regulatory bodies (e.g. the Medicines and Healthcare products Regulatory Agency, the Information Commissioner's Office etc.) oversee AI use across different sectors, and where relevant, provide guidance on the same.

In September 2021, however, the UK government announced a 10-year plan, described as the National AI Strategy. The National AI Strategy aims to invest and plan for the long-term needs of the AI ecosystem, support the transition to an AI-enabled economy and ensure that the UK gets the national and international governance of AI technologies right.

More recently, on 29 March 2023, the UK Government published its long-anticipated artificial intelligence white paper. Branding its proposed approach to AI regulation as "world leading" in a bid to turbocharge growth, the white paper provides a cross-sectoral, principles-based framework to increase public trust in AI and develop capabilities in AI technology. The five principles intended to underpin the UK's regulatory framework are:

1. Safety, security and robustness;

2. Appropriate transparency and explainability;

3. Fairness;

4. Accountability and governance; and

5. Contestability and redress.

The UK Government has said it would avoid "heavy-handed legislation" that could stifle innovation which means in the first instance at least, these principles will not be enforced using legislation. Instead, responsibility will be given to existing regulators to decide on "tailored, context-specific approaches" that best suit their sectors. The consultation accompanying the white paper is open until 21 June 2023.

However, this does not mean that no legislation in this arena is envisaged. For example:

On 4 May 2023, the Competition and Markets Authority (the CMA) announced a review of competition and consumer protection considerations in the development and use of AI foundation models. One of the intentions behind the review is to assist with the production of guiding principles for the protection of consumers and support healthy competition as technologies develop. A report on the findings is scheduled to be published in September 2023, and whether this will result in legislative proposals is yet to be seen.

The UK has of late had a specific focus on IoT devices, following the passage of the UK's Product Security and Telecommunications Infrastructure Act in December 2022 and its recent announcement that the Product Security and Telecommunications Infrastructure (Product Security) Regime will come into effect on 29 April 2024. While IoT and AI devices of course differ, the UK's willingness to take a stance as a world leader in this space (being the first country in the world to introduce minimum security standards for all consumer products with internet connectivity) may mean that a similar focus on AI should be expected in the near future.

Our Global Products Law practice is fully across all aspects of AI regulation, product safety, compliance and potential liability risks. In part 2 of this article, we look to developments in France, the Netherlands and the US and share our thoughts around what businesses can do to get ahead of the curve to prepare for the regulation of AI around the world.

View original post here:

Artificial intelligence: World first rules are coming soon are you ... - JD Supra

Read More..

Generative AI That's Based On The Murky Devious Dark Web Might Ironically Be The Best Thing Ever, Says AI Ethics And AI Law – Forbes

In today's column, let's consider for a moment turning the world upside down.

Here's what I mean.

Generative AI such as the wildly and widely successful ChatGPT and GPT-4 by OpenAI is based on scanning data across the Internet and leveraging that examined data to pattern-match on how humans write and communicate in natural language. The AI development process also includes a lot of clean-up and filtering, via a technique known as RLHF (reinforcement learning from human feedback), that seeks to either excise or at least curtail unsavory language from being emitted by the AI. For my coverage of why some people nonetheless ardently push generative AI toward stoking hate speech and other untoward AI-generated foulness, see the link here.

When the initial scanning of the Internet takes place for data training of generative AI, the websites chosen to be scanned are generally aboveboard. Think of Wikipedia or similar kinds of websites. By and large, the text found there will be relatively safe and sane. The pattern-matching is getting a relatively sound basis for identifying the mathematical and computational patterns found within everyday human conversations and essays.

I'd like to bring to your attention that we can turn that crucial precept upside down.

Suppose that we purposely sought to use the worst of the worst that is posted on the Internet to do the data training for generative AI.

Imagine seeking out all those seedy websites that you would conventionally be embarrassed to even accidentally land on. The generative AI would be focused exclusively on this bad stuff. Indeed, we wouldn't try to somehow counterbalance the generative AI by using some of the everyday Internet and some of the atrocious Internet. Full on, we would mire the generative AI in the muck of wickedness on the Internet.

What would we get?

And why would we devise this kind of twisted or distorted variant of generative AI?

Those are great questions and I am going to answer them straightforwardly. As you will soon realize, some pundits believe data training generative AI on the ugly underbelly of the Internet is a tremendous idea and an altogether brilliant strategy. Others retort that this is not only a bad idea but a slippery slope leading to AI systems of an evil nature, and that we will regret the day we allowed this to ever get underway.

Allow me a quick set of foundational remarks before we jump into the meat of this topic.

Please know that generative AI, and indeed all manner of today's AI, is not sentient. Despite all those blaring headlines that claim or imply that we already have sentient AI, we don't. Period, full stop. I will later on herein provide some speculation about what might happen if someday we attain sentient AI, but that's conjecture, and no one can say for sure when or if that will occur.

Modern generative AI is based on a complex computational algorithm that has been data trained on text from the Internet. Generative AI such as ChatGPT, GPT-4, Bard, and other similar AI apps entail impressive pattern-matching that can perform a convincing mathematical mimicry of human wording and natural language. For my explanation of how generative AI works, see the link here. For my analysis of the existent doomster fearmongering regarding AI as an existential risk, see the link here.
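
To make the pattern-matching point concrete, here is a deliberately tiny toy sketch: a word-level Markov chain that replays statistical patterns from its training text. Real generative AI uses vastly larger neural networks rather than anything like this, so treat it only as an analogy for mimicry learned from data.

```python
# A toy illustration of statistical mimicry: learn which word follows which
# in a training text, then generate a continuation by replaying those
# patterns. This is an analogy only; it is not how ChatGPT or GPT-4 work.
import random
from collections import defaultdict

corpus = ("the dark web is a part of the web that is not indexed "
          "by standard search engines").split()

follows = defaultdict(list)  # word -> words observed to follow it
for current, nxt in zip(corpus, corpus[1:]):
    follows[current].append(nxt)

word, output = "the", ["the"]
for _ in range(8):
    candidates = follows.get(word)
    if not candidates:
        break
    word = random.choice(candidates)
    output.append(word)

print(" ".join(output))
```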

Into all of this comes a plethora of AI Ethics and AI Law considerations.

There are ongoing efforts to imbue Ethical AI principles into the development and fielding of AI apps. A growing contingent of concerned and erstwhile AI ethicists are trying to ensure that efforts to devise and adopt AI take into account a view of doing AI For Good and averting AI For Bad. Likewise, there are proposed new AI laws that are being bandied around as potential solutions to keep AI endeavors from going amok on human rights and the like. For my ongoing and extensive coverage of AI Ethics and AI Law, see the link here and the link here, just to name a few.

The development and promulgation of Ethical AI precepts are being pursued to hopefully prevent society from falling into a myriad of AI-induced traps. For my coverage of the UN AI Ethics principles as devised and supported by nearly 200 countries via the efforts of UNESCO, see the link here. In a similar vein, new AI laws are being explored to try and keep AI on an even keel. One of the latest takes consists of the proposed AI Bill of Rights that the U.S. White House recently released to identify human rights in an age of AI, see the link here. It takes a village to keep AI and AI developers on a rightful path and deter the purposeful or accidental underhanded efforts that might undercut society.

Now that we've covered those essentials about generative AI, let's look at the seemingly oddish or scary proposition of data training generative AI on the most stinky and malicious content available on the web.

The Dark Web Is The Perfect Foundation For Bad Stuff

There is a part of the Internet that you might not have visited that is known as the Dark Web.

The browsers that you normally use to access the Internet are primed to only explore a small fraction of the web, known as the visible or surface-level web. There is a lot more content out there. Within that other content is a segment generally coined the Dark Web, which tends to contain all manner of villainous or disturbing content. Standard search engines do not usually look at Dark Web pages. All in all, you would need to go out of your way to see what is posted on the Dark Web, doing so by using specialized browsers and other online tools to get there.
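
For a sense of what "specialized browsers and other online tools" means in practice, the sketch below routes an HTTP request through a locally running Tor client, the usual gateway to .onion hidden services. It assumes Tor's standard SOCKS proxy on port 9050 and the requests[socks] extra; the onion address shown is a placeholder, not a real site.

```python
# A minimal sketch of reaching a Dark Web hidden service through Tor.
# Assumes a Tor client is running locally with its default SOCKS proxy on
# port 9050, and that requests is installed with SOCKS support
# (pip install "requests[socks]").
import requests

TOR_PROXY = {
    "http": "socks5h://127.0.0.1:9050",   # socks5h resolves DNS through Tor
    "https": "socks5h://127.0.0.1:9050",
}

def fetch_onion(url: str) -> str:
    """Fetch a hidden-service page through the Tor SOCKS proxy."""
    response = requests.get(url, proxies=TOR_PROXY, timeout=60)
    response.raise_for_status()
    return response.text

# Placeholder address for illustration only; real crawls start from known
# hidden-service URLs.
# html = fetch_onion("http://exampleonionaddressxxxxxxxxxx.onion/")
```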

What type of content might be found on the Dark Web, you might be wondering?

The content varies quite a bit. Some of it entails evildoers that are plotting takeovers or possibly contemplating carrying out terrorist attacks. Drug dealers find the Dark Web very useful. You can find criminal cyber hackers that are sharing tips about how to overcome cybersecurity precautions. Conspiracy theorists tend to like the Dark Web since it is a more secretive arena to discuss conspiratorial theories. And so on.

I'm not saying that the Dark Web is all bad, but at least be forewarned that it is truly the Wild West of the Internet, and just about anything goes.

In a research paper entitled "Dark Web: A Web of Crimes," there is a succinct depiction of the components of the Internet and the role of the Dark Web, as indicated by these two excerpts:

I realize it is perhaps chilling to suddenly realize that there is an entire segment of the Internet that you perhaps didn't know existed and that it is filled with abysmal content. Sorry to be the bearer of such gloomy news.

Maybe this will cheer you up.

The Dark Web is seemingly the ideal source of content to train generative AI if you are of the mind that data training on the worst of the worst is a presumably worthwhile and productive endeavor. Rather than having to try and bend over backward to find atrocious content on the conventional side of the Internet (admittedly, there is some of that there too), instead make use of a specialized web crawler aimed at the Dark Web and you can find a treasure trove of vile content.

Easy-peasy.

I know that I haven't yet explained why data training generative AI on the Internet's ugly underbelly is presumably useful, so let's get to that next. At least we now know that plentiful content exists for such a purpose.

What Does Dark Web Trained Generative AI Provide

I'll give you a moment to try and brainstorm some bona fide reasons for crafting generative AI that is based on foulness.

Any ideas?

Well, here's what some already proclaim are useful reasons:

Any discussion about the Dark Web should be careful to avoid pegging the Dark Web as exclusively a home of evildoers. There are various justifications for having a Dark Web.

For example, consider this listing by the researchers mentioned earlier:

Given those positive facets of the Dark Web, you could argue that having generative AI trained on the Dark Web would potentially further aid those benefits. For example, enabling more people to find scarce products or discover content that has been entered anonymously out of fear of governmental reprisals.

In that same breath, you could also decry that the generative AI could severely and lamentably undercut those advantages by providing a means for, say, government crackdowns on those dissenting from government oppression. Generative AI based on the Dark Web might have a whole slew of unanticipated adverse consequences, including putting at risk innocent people who were otherwise using the Dark Web for ethically or legally sound purposes.

Ponder seriously and soberly whether we do or do not want generative AI that is based on the Dark Web.

The good news or bad news is that we already have that kind of generative AI. You see, the horse is already out of the barn.

Let's look at that quandary next.

The DarkGPT Bandwagon Is Already Underway

A hip thing to do involves training generative AI on the Dark Web.

Some that do so have no clue as to why they are doing so. It just seems fun and exciting. They get a kick out of training generative AI on something other than what everyone else has been using. Others intentionally train generative AI on the Dark Web. Those with a particular purpose usually fall within one or more of the camps associated with the reasons I gave in the prior subsection.

All of this has given rise to a bunch of generative AI apps that are generically referred to as DarkGPT. I say generically because there are lots of these DarkGPT monikers floating around. Unlike a bona fide trademarked name such as ChatGPT, which has spawned all kinds of GPT naming variations (I discuss the legal underpinnings of the trademark at the link here), the catchphrase or naming of DarkGPT is much more loosey-goosey.

Watch out for scams and fakes.

Here's what I mean. You are curious to play with a generative AI that was trained on the Dark Web. You do a cursory search for anything named DarkGPT or DarkWebGPT or any variation thereof. You find one. You decide to try it out.

Yikes, turns out that the app is malware. You have fallen into a miserable trap. Your curiosity got the better of you. Please be careful.

Legitimate Dark Web Generative AI

I'll highlight next a generative AI that was trained on the Dark Web, serves as a quite useful research-oriented exemplar, and can be a helpful role model for other akin pursuits.

The generative AI app is called DarkBERT and is described in a research paper entitled "DarkBERT: A Language Model for the Dark Side of the Internet" by researchers Youngjin Jin, Eugene Jang, Jian Cui, Jin-Woo Chung, Yongjae Lee, and Seungwon Shin (posted online on May 18, 2023). Here are some excerpted key points from their study:

Let's briefly examine each of those tenets.

First, the researchers indicated that they were able to craft a Dark Web-based instance of generative AI that was comparable in natural language fluency to a generative AI trained on the conventionally visible web. This is certainly encouraging. If they had reported that their generative AI was less capable, the implication would be that we might not readily be able to apply generative AI to the Dark Web. This would have meant that efforts to do so would be fruitless or that some other as-yet-unknown new AI-tech innovation would have been required to sufficiently do so.

The bottom line is that we can proceed to apply generative AI to the Dark Web and expect to get responsive results.

Secondly, it would seem that a generative AI solely trained on the Dark Web is likely to do a better job at pattern-matching of the Dark Web than would a generative AI that was partially data-trained on the conventional web. Remember that earlier I mentioned that we might consider data training of generative AI that mixes both the conventional web and the Dark Web. We can certainly do so, but the result here seems to suggest that making queries and using the natural language facility of a Dark Web-specific generative AI is better suited than would be a mixed model (there are various caveats and exceptions, thus this is an open research avenue).

Third, the research examined closely the cybersecurity merits of having a generative AI that is based on the Dark Web, namely being able to detect or uncover potential cyber hacks that are on the Dark Web. That the generative AI seemed especially capable in this realm is a plus for those fighting cybercriminals. You can consider using Dark Web data-trained generative AI to pursue the wrongdoers that are aiming to commit cybercrimes.

You might be somewhat puzzled as to why the name of their generative AI is DarkBERT rather than referring to the now-classic acronym of GPT (generative pre-trained transformer). The BERT acronym is particularly well-known amongst AI insiders as the name of a set of generative AI apps devised by Google that they coined BERT (Bidirectional Encoder Representations from Transformers). I thought you might like a smidgeon of AI insider terminology and ergo be able to clear up that possibly vexing mystery.
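
As a small illustration of the BERT model family (DarkBERT itself is a research artifact and is not assumed here to be publicly downloadable), the sketch below queries Google's public bert-base-uncased checkpoint through the Hugging Face transformers library; a Dark Web-trained sibling would be queried the same way.

```python
# A minimal sketch of querying a BERT-style masked language model with the
# Hugging Face transformers library. bert-base-uncased is Google's public
# checkpoint; DarkBERT is used analogously in the paper but is not assumed
# to be publicly available.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")
for candidate in fill_mask("The Dark Web is accessed through the [MASK] browser."):
    print(candidate["token_str"], round(candidate["score"], 3))
```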

A quick comment overall before we move on. Research about generative AI and the Dark Web is still in its infancy. You are highly encouraged to jump into this evolving focus. There are numerous technological questions to be addressed. In addition, there are a plethora of deeply intriguing and vital AI Ethics and AI Law questions to be considered.

Of course, you'll need to be willing to stomach the stench or dreadful aroma that generally emanates from the Dark Web. Good luck with that.

When Generative AI Is Bad To The Bone

I've got several additional gotchas and thought-provoking considerations for you on this topic.

Let's jump in.

We know that conventional generative AI is subject to producing errors, along with emitting falsehoods, producing biased content, and even making up stuff (so-called AI hallucinations, a catchphrase I deplore, for the reasons explained at the link here). These maladies are a bone of contention when it comes to using generative AI in any real-world setting. You have to be careful in interpreting the results. The generated essays and interactive dialogues could be replete with misleading and misguided content produced by the generative AI. Efforts are hurriedly underway to try and bound these problematic concerns, see my coverage at the link here.

Put on your thinking cap and get ready for a twist.

What happens if generative AI that is based on the Dark Web encounters errors, falsehoods, biases, or AI hallucinations?

In a sense, we are in the same boat as the issues confronting conventional generative AI. The Dark Web generative AI might showcase an indication that seems to be true but is an error or falsehood. For example, you decide to use Dark Web data-trained generative AI to spot a cyber crook. The generative AI tells you that it found a juicy case on the Dark Web. Upon further investigation with other special browsing tools, you discover that the generative AI falsely made that accusation.

Oops, not cool.

We need to always keep our guard up when it comes to both conventional generative AI and Dark Web-based generative AI.

Heres another intriguing circumstance.

People have been trying to use conventional generative AI for mental health advice. I've emphasized that this is troublesome for a host of disconcerting reasons, see my analysis at the link here and the link here, just to name a few. Envision that a person is using conventional or clean generative AI for personal advice about something, and the generative AI emits an AI hallucination telling the person to take actions in a dangerous or unsuitable manner. I'm sure you can see the qualms underlying this use case.

A curious and serious parallel would be if someone opted to use a Dark Web-based generative AI for mental health advice. We might assume that this baddie generative AI is likely to generate foul advice from the get-go.

Is it bad advice that would confuse and confound evildoers? I suppose we might welcome that possibility. Maybe it is bad advice in the sense that it is actually good advice from the perspective of a wrongdoer. Generative AI might instruct the evildoer on how to better achieve evil deeds. Yikes!

Or, in a surprising and uplifting consideration, might there be some other mathematical or computational pattern-matching contrivance that manages to rise above the flotsam used during the data training? Could there be lurking within the muck a ray of sunshine?

A bit dreamy, for sure.

More research needs to be done.

Speaking of doing research and whatnot, before you run out to start putting together a generative AI instance based on the Dark Web, you might want to check out the licensing stipulations of the AI app. Most of the popular generative AI apps have a variety of keystone restrictions. People using ChatGPT for example are typically unaware that there are a bunch of prohibited uses.

For example, as I've covered at the link here, you cannot do this with ChatGPT:

If you were to develop a generative AI based on the Dark Web, you presumably might violate those kinds of licensing stipulations as per whichever generative AI app you decide to use. On the other hand, one supposes that as long as you use the generative AI for the purposes of good, such as trying to ferret out evildoers, you would potentially be working within the stated constraints of the licensing. This is all a legal head-scratcher.

One final puzzling question for now.

Will we have bad-doers that purposely devise or seek out generative AI that is based on the Dark Web, hoping to use the generative AI to further their nefarious pursuits?

I sadly note that the answer is assuredly yes, this is going to happen and is undoubtedly already happening. AI tools tend to have a dual-use capability, meaning that you can turn them toward goodness and yet also turn them toward badness, see my discussion on AI-based Dr. Evil Projects at the link here.

Conclusion

To end this discussion on Dark Web-based generative AI, I figured we might take a spirited wooded hike into the imaginary realm of the postulated sentient AI. Sentient AI is also nowadays referred to as Artificial General Intelligence (AGI). For a similar merry romp into a future of sentient AI, see my discussion at the link here.

Sit down for what I am about to say next.

If the AI of today is eventually heading toward sentient AI or AGI, are we making ourselves a devil of a time by right now proceeding to create instances of generative AI that are based on the Dark Web?

Here's the unnerving logic. We introduce generative AI to the worst of the worst of humankind. The AI pattern-matches it. A sentient AI would presumably have this within its reach. The crux is that this could become the keystone for how the sentient AI or AGI decides to act. By our own hand, we are creating a foundation showcasing the range and depth of the evildoing of humanity and displaying it in all its glory for the AGI to examine or use.

Some say it is the perfect storm for making a sentient AI that will be armed to wipe out humankind. Another related angle is that the sentient AI will be so disgusted by this glimpse into humankind that the AGI will decide it is best to enslave us. Or maybe wipe us out, doing so with plenty of evidence as to why we ought to go.

I don't want to conclude on a doom-and-gloom proposition, so give me a chance to liven things up.

Turn this unsettling proposition on its head.

By the sentient AI being able to readily see the worst of the worst about humanity, the AGI can use this to identify how to avoid becoming the worst of the worst. Hooray! You see, by noting what should not be done, the AGI will be able to identify what ought to be done. We are essentially doing ourselves a great service. The crafting of Dark Web-based generative AI will enable AGI to fully discern what is evil versus what is good.

We are cleverly saving ourselves by making sure that sentient AI is up to par on good versus evil.

Marcus Tullius Cicero, the famed Roman statesman, said this: "The function of wisdom is to discriminate between good and evil." Perhaps by introducing AI to both the good and evil of humankind, we are setting ourselves up for a wisdom-based AGI that will be happy to keep us around. Maybe even help us to steer toward being good more than we are evil.

That's your happy ending for the saga of the emergent sentient AI. I trust that you will now be able to get a good night's sleep on these weighty matters. Hint: Try to stay off the Dark Web to get a full night's slumber.

Dr. Lance B. Eliot is a world-renowned expert on Artificial Intelligence (AI) with more than 6.8 million amassed views of his AI columns. As a seasoned executive and high-tech entrepreneur, he combines practical industry experience with deep academic research. Previously a professor at USC and UCLA, and head of a pioneering AI Lab, he frequently speaks at major AI industry events. Author of over 50 books, 750 articles, and 400 podcasts, he has made appearances on media outlets such as CNN and co-hosted the popular radio show Technotrends. He has been an adviser to Congress and other legislative bodies and has received numerous awards/honors. He serves on several boards, has worked as a Venture Capitalist, an angel investor, and a mentor to founder entrepreneurs and startups.

Original post:

Generative AI That's Based On The Murky Devious Dark Web Might Ironically Be The Best Thing Ever, Says AI Ethics And AI Law - Forbes

Read More..

Art Crowdfunding Market Report to 2030 Industry Demand … – Cottonwood Holladay Journal

New Jersey, United States - The Global Art Crowdfunding Market report provides a comprehensive analysis by blending in-depth qualitative and quantitative insights. It covers a wide range of topics, including a macro overview of the Global Art Crowdfunding Market dynamics, industry structure, market size, and micro-level details segmented by type and application. This report conducts a thorough analysis of the Global Art Crowdfunding Market, taking into account various influencing factors. It highlights notable market changes and challenges that companies and competitors need to overcome, while also capturing future trends and market opportunities.

The research in this report explores the Global Art Crowdfunding Market size (value, capacity, production, and consumption) across key regions such as North America, Europe, Asia Pacific (China, Japan), and others. It categorizes the Global Art Crowdfunding Market data based on manufacturers, regions, types, and applications. Additionally, the report analyzes the market status, market share, growth rate, future trends, market drivers, opportunities, challenges, risks, entry barriers, sales channels, and distributors, and incorporates Porter's Five Forces Analysis.

Get Full PDF Sample Copy of Report: (Including Full TOC, List of Tables & Figures, Chart) @ https://www.verifiedmarketresearch.com/download-sample/?rid=59131

Key Players Mentioned in the Global Art Crowdfunding Market Research Report:

Kickstarter, Patreon, ArtistShare, GoFundMe, Artboost, Ulule, Art Happens, Wishberry, Indiegogo, Seed Spark.

Covering key market players, consumer buying habits, and sales strategies, this Global Art Crowdfunding market report offers insights into the dynamic market's potential growth prospects in the coming years. It also provides an analysis of market factors, including sales strategies, major players, and investment opportunities. Understanding customer buying habits becomes crucial for significant firms planning to launch new products, and this market study enables a quick examination of the global market position. Moreover, it includes valuable information on key contributors, company strategies, consumer demand, customer behavior improvements, detailed sales data, and customer purchasing habits.

Global Art Crowdfunding Market Segmentation:

Art Crowdfunding Market, By Type

5% Free
4% Free
3% Free

Art Crowdfunding Market, By Application

Films
Music
Stage Shows

Inquire for a Discount on this Premium Report@ https://www.verifiedmarketresearch.com/ask-for-discount/?rid=59131

What to Expect in Our Report?

(1) A complete section of the Global Art Crowdfunding market report is dedicated to market dynamics, which include influence factors, market drivers, challenges, opportunities, and trends.

(2) Another broad section of the research study is reserved for regional analysis of the Global Art Crowdfunding market where important regions and countries are assessed for their growth potential, consumption, market share, and other vital factors indicating their market growth.

(3) Players can use the competitive analysis provided in the report to build new strategies or fine-tune their existing ones to rise above market challenges and increase their share of the Global Art Crowdfunding market.

(4) The report also discusses the competitive situation and trends and sheds light on company expansions and mergers and acquisitions taking place in the Global Art Crowdfunding market. Moreover, it brings to light the market concentration rate and the market shares of the top three and top five players.

(5) Readers are provided with the findings and conclusions of the research study in the Global Art Crowdfunding Market report.

Key Questions Answered in the Report:

(1) What are the growth opportunities for the new entrants in the Global Art Crowdfunding industry?

(2) Who are the leading players functioning in the Global Art Crowdfunding marketplace?

(3) What are the key strategies participants are likely to adopt to increase their share in the Global Art Crowdfunding industry?

(4) What is the competitive situation in the Global Art Crowdfunding market?

(5) What are the emerging trends that may influence the Global Art Crowdfunding market growth?

(6) Which product type segment will exhibit a high CAGR in the future?

(7) Which application segment will grab a handsome share in the Global Art Crowdfunding industry?

(8) Which region is lucrative for the manufacturers?

For More Information or Query or Customization Before Buying, Visit @ https://www.verifiedmarketresearch.com/product/art-crowdfunding-market/

About Us: Verified Market Research

Verified Market Research is a leading Global Research and Consulting firm that has been providing advanced analytical research solutions, custom consulting and in-depth data analysis for 10+ years to individuals and companies alike that are looking for accurate, reliable and up-to-date research data and technical consulting. We offer insights into strategic and growth analyses, data necessary to achieve corporate goals, and help make critical revenue decisions.

Our research studies help our clients make superior data-driven decisions, understand market forecasts, capitalize on future opportunities and optimize efficiency by working as their partner to deliver accurate and valuable information. The industries we cover span a large spectrum including Technology, Chemicals, Manufacturing, Energy, Food and Beverages, Automotive, Robotics, Packaging, Construction, Mining & Gas, etc.

We, at Verified Market Research, assist in understanding holistic market indicating factors and most current and future market trends. Our analysts, with their high expertise in data gathering and governance, utilize industry techniques to collate and examine data at all stages. They are trained to combine modern data collection techniques, superior research methodology, subject expertise and years of collective experience to produce informative and accurate research.

Having serviced over 5,000 clients, we have provided reliable market research services to more than 100 Global Fortune 500 companies such as Amazon, Dell, IBM, Shell, Exxon Mobil, General Electric, Siemens, Microsoft, Sony and Hitachi. We have co-consulted with some of the world's leading consulting firms like McKinsey & Company, Boston Consulting Group, and Bain & Company for custom research and consulting projects for businesses worldwide.

Contact us:

Mr. Edwyne Fernandes

Verified Market Research

US: +1 (650)-781-4080
UK: +44 (753)-715-0008
APAC: +61 (488)-85-9400
US Toll-Free: +1 (800)-782-1768

Email: sales@verifiedmarketresearch.com

Website: https://www.verifiedmarketresearch.com/

Read the original post:
Art Crowdfunding Market Report to 2030 Industry Demand ... - Cottonwood Holladay Journal

Read More..

"Splice Here: A Projected Odyssey" 2023 update – In70mm.com

Remote and local Q&As can even be arranged for your screening, as long as you're not making me get up and Zoom at 3 a.m.

Go to "Splice Here: A Projected Odyssey" 2022 updateGo to "Splice Here: A Projected Odyssey" 2020 UpdateGo to "Splice Here: A Projected Odyssey", Crowd Funding Campaign

2022 mini review:

Thomas, editor, in70mm.com

"Splice Here" running time is around 2 hours and 15 minutes, and is presented with an overture, intermission and exit music.

"Splice Here: A Projected Odyssey", Crowd Funding Campaign

"Splice Here: A Projected Odyssey" 2020 Update

"Splice Here: A Projected Odyssey", Crowd Funding Campaign

A day with Doug Trumbull

The HATEFUL 8 @ the SUN theatre

Go here to read the rest:
"Splice Here: A Projected Odyssey" 2023 update - In70mm.com

Read More..

What is AGI? The Artificial Intelligence that can do it all – Fox News

With the release of ChatGPT last year, a renewed focus was placed on AGI, artificial general intelligence, the advanced technology with capabilities similar to those of humans.

And while some argue GPT-4, the latest version of the technology, appears close to AGI, others say it will be years, or even decades, before the technology reaches human-like abilities.

There is no one agreed-upon definition of AGI, but a 2020 report from consulting giant McKinsey said a true AGI would need to master skills like sensory perception, fine motor skills, and natural language understanding.

Recent developments in Artificial Intelligence have led to renewed focus on AGI, the technology with capabilities similar to those of humans. (Getty Images)

Dr. Michael Capps, the co-founder and CEO of Diveplane, said AGI is "an AI that can do anything, and maybe as well or better than a human."

"Whats really neat about that is now we can deploy them in all different facets of life, and hopefully do all the boring stuff," he said.

A technology that advanced, though, gives some pause, Capps warned.

"The downside is AGIs can learn quickly suddenly you have something thats way smarter than a 3-year-old, or an 18-year-old, or Einstein Thats where people start getting a little nervous about, how do we even understand what that may be?," he added.

ChatGPT, released last year, allows users to have human-like conversations with a chatbot. (Photo by Eduardo Parra/Europa Press via Getty Images)

Christopher Alexander, the chief communications officer of Liberty Blockchain, told Fox News Digital that, in his view, AGI would be an "operator," allowing him to have a conversation with it as he would with an analyst.

But, Alexander argued, the current AI models, such as GPT-4, are nowhere near a true AGI.

"It is nowhere, nowhere near that, and I think its important to recognize that," he said.

"So when? I think supercomputing power is probably going to be a major factor," Alexander added.

Artificial General Intelligence - AGI - is an advanced technology which would mimic human-like abilities. (iStock)

Capps also emphasized that current AI models do not reach the level of AGI, and that there is no set time for when the technology will reach human-like abilities.

"I think the neat thing is, no one knows," he said. "The average AI scientist probably thinks were 20, 15 years away. But once it happens, its going to be really fast."

Others, however, view the recent developments in generative AI, such as GPT-4, as advancements in the direction of AGI.

An April report from Microsoft Research said GPT-4, the latest version of ChatGPT, exhibited "more general intelligence than previous AI models."

"Given the breadth and depth of GPT-4's capabilities, we believe that it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system," a summary of the report read.

ChatGPT was released last year, and allows users to have conversations with an AI chatbot, with the ability to write text, songs, poems and even computer code. Microsoft has integrated the technology in its Bing search engine.

Link:

What is AGI? The Artificial Intelligence that can do it all - Fox News

Read More..

Why we need a "Manhattan Project" for A.I. safety – Salon

Artificial intelligence is advancing at a breakneck pace. Earlier this month, one of the world's most famous AI researchers, Geoffrey Hinton, left his job at Google to warn us of the existential threat it poses. Executives of the leading AI companies are making the rounds in Washington to meet with the Biden administration and Congress to discuss its promise and perils. This is what it feels like to stand at the hinge of history.

This is not about consumer-grade AI: the use of products like ChatGPT and DALL-E to write articles and make art. While those products certainly pose a material threat to certain creative industries, the future threat of which I speak is that of AI being used in ways that threaten life itself, say, to design deadly bioweapons, serve as autonomous killing machines, or aid and abet genocide. Certainly, the sudden advent of ChatGPT was to the general public akin to a rabbit being pulled out of a hat. Now imagine what another decade of iterations on that technology might yield in terms of intelligence and capabilities. It could even yield an AGI, meaning a type of AI that can accomplish any cognitive task that humans can.

In fact, the threat of God-like AI has loomed large on the horizon since computer scientist I. J. Good warned of an "intelligence explosion" in the 1960s. But efforts to develop guardrails have sputtered for lack of resources. The newfound public and institutional impetus allows us, for the first time, to mount the tremendous initiative we need, and this window of opportunity may not last long.

As a sociologist and statistician who studies technological change, I find this situation extremely concerning. I believe governments need to fund an international, scientific megaproject even more ambitious than the Manhattan Project, the 1940s nuclear research project pursued by the U.S., the U.K., and Canada to build bombs to defeat the unprecedented global threat of the Axis powers in World War II.

This "San Francisco Project" named for the industrial epicenter of AI would have the urgent and existential mandate of the Manhattan Project but, rather than building a weapon, it would bring the brightest minds of our generation to solve the technical problem of building safe AI. The way we build AI today is more like growing a living thing than assembling a conventional weapon, and frankly, the mathematical reality of machine learning is that none of us have any idea how to align an AI with social values and guarantee its safety. We desperately need to solve these technical problems before AGI is created.

We can also take inspiration from other megaprojects like the International Space Station, Apollo Program, Human Genome Project, CERN, and DARPA. As cognitive scientist Gary Marcus and OpenAI CEO Sam Altman told Congress earlier this week, the singular nature of AI compels a dedicated national or international agency to license and audit frontier AI systems.

Present-day harms of AI are undeniably escalating. AI systems reproduce race, gender, and other biases from their training data. An AI trained on pharmaceutical data in 2022 to design non-toxic chemicals had its sign flipped and quickly came up with recipes for nerve gas and 40,000 other lethal compounds. This year, we saw the first suicide attributed to interaction with a chatbot, EleutherAI's GPT-J, and the first report of a faked kidnapping and ransom call using an AI-generated voice of the purported victim.

Bias, inequality, weaponization, breaches of cybersecurity, invasions of privacy, and many other harms will grow and fester alongside accelerating AI capabilities. Most researchers think that AGI will arrive by 2060, and a growing number expect cataclysm within a decade. Chief doomsayer Eliezer Yudkowsky recently argued that the most likely AGI outcome "under anything remotely like the current circumstances, is that literally everyone on Earth will die."

Complete annihilation may seem like science fiction, but if AI begins to self-improve (modify its own cognitive architecture and build its own AI workers, like those in Auto-GPT), any misalignment of its values with our own will be astronomically magnified. We have very little control over what happens to today's AI systems as we train them. We pump them full of books, websites, and millions of other texts so they can learn to speak like a human, and we dictate the rules for how they learn from each piece of data, but even leading computer scientists have very little understanding of how the resultant AI system actually works.

One of the most impressive interpretability efforts to date sought simply to locate where in its neural network edifice GPT-2 stores the knowledge that the capital of Italy is Rome, but even that finding has been called into question by other researchers. The favored metaphor in 2023 has been a Lovecraftian shoggoth, an alien intelligence on which we strap a yellow smiley-face mask, but the human-likeness is fleeting and superficial.


With the black magic of AI training, we could easily stumble upon a digital mind with goals that make us mere collateral damage. The AI has an initial goal and gets human feedback on the output produced by that goal. Every time it makes a mistake, the system picks a new goal that it hopes will do a little better. This guess-and-check method is an inherently dangerous way to learn because most goals that do well on human feedback in the lab do not generalize well to a superintelligence taking action in the real world.
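To make that guess-and-check loop concrete, here is a toy Python sketch. Everything in it is illustrative: the "goal" is reduced to a single number, the human rater is a noisy scoring function, and no real lab trains systems this way; the sketch only shows how a learner converges on whatever scores well under feedback, not necessarily on what was intended.

import random

# Stand-in for a human rating: the rater judges observed behavior
# against what they intended, plus some noise -- they never see the
# candidate goal itself.
def human_feedback(candidate_goal, intended_goal=0.7):
    return -abs(candidate_goal - intended_goal) + random.gauss(0, 0.05)

best_goal, best_score = random.random(), float("-inf")
for _ in range(1000):
    guess = best_goal + random.gauss(0, 0.1)  # perturb the current best
    score = human_feedback(guess)
    if score > best_score:                    # keep whatever rates higher
        best_goal, best_score = guess, score

print(f"learned goal: {best_goal:.3f}")

In this toy, feedback happens to pin down the intended goal; the worry described above is that many different goals earn near-identical scores in the lab, and the loop has no way to distinguish the intended one from an alien one that merely rates well.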

Among all the goals an AI could stumble upon that elicit positive human feedback, there is instrumental convergence toward the dangerous tendencies of deception and power-seeking. To best achieve a goal, say, filling a cauldron with water in the classic story of The Sorcerer's Apprentice, a superintelligence would be incentivized to gather resources to ensure that goal is achieved, like filling the whole room with water so that the cauldron never empties. And there are so many alien goals the AI could land on that, unless it happens upon exactly the goal humans intend, it might simply act safe and friendly while figuring out how best to take over and optimize the world to ensure its success.

In response to these dangerous advances, concrete and hypothetical, recent discourse has centered on proposals to slow down AI research, including the March 22nd open letter calling for a 6-month pause on training systems more powerful than GPT-4, signed by some of the world's most famous AI researchers including Yoshua Bengio and Stuart Russell.


That approach is compelling but politically infeasible given the massive profit potential and the difficulty in regulating machine learning software. In the delicate balance of AI capabilities and safety, we should consider pushing up the other end, funding massive amounts of AI safety research. If the future of AI is as dangerous as computer scientists think, this may be a moonshot we desperately need.

As a sociologist and statistician, I study the interwoven threads of social and technological change. Using computational tools like word embeddings alongside traditional research methods like interviews with AI engineers, my team and I built a model of how expert and popular understanding of AI has changed over time. Before 2022, our model focused on the landmark years of 2012, when the modern AI paradigm of deep learning took hold in the computer science firmament, and 2016, when, we argue, the public and corporate framing of AI inflected from science fiction and radical futurism to an incremental real-world technology being integrated across industries such as healthcare and security.
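As a rough illustration of the word-embedding side of such a method (a sketch, not the authors' actual pipeline), one could train separate embedding models on press coverage from before and after the claimed 2016 inflection and compare the words nearest to "ai" in each era; the corpus filenames below are hypothetical.

from gensim.models import Word2Vec
from gensim.utils import simple_preprocess

def train_embeddings(path):
    # Hypothetical corpus: one document or sentence per line.
    with open(path) as f:
        sentences = [simple_preprocess(line) for line in f]
    return Word2Vec(sentences, vector_size=100, window=5, min_count=5)

early = train_embeddings("press_2010_2015.txt")  # hypothetical file
late = train_embeddings("press_2017_2022.txt")   # hypothetical file

# A drift of "ai" from science-fiction vocabulary toward industry
# vocabulary between the two models would be consistent with the
# framing shift described above.
print(early.wv.most_similar("ai", topn=10))
print(late.wv.most_similar("ai", topn=10))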

Our model changed in late 2022 after seeing the unprecedented social impact of ChatGPT's launch: it quickly became the fastest growing app in history, outpacing even the viral social media launches of Instagram and TikTok.

This public spotlight on AI provides an unprecedented opportunity to start the San Francisco Project. The "SFP" could take many forms with varying degrees of centralization to bring our generation's brightest minds to AI safety: a single, air-gapped facility that houses researchers and computer hardware; a set of major grants to seed and support multi-university AI safety labs alongside infrastructure to support their collaboration; or major cash prizes for outstanding research projects, perhaps even a billion-dollar grand prize for an end-to-end solution to the alignment problem. In any case, it's essential that such a project stay laser-focused on safety and alignment lest it become yet another force pushing forward the dangerous frontier of unmitigated AI capabilities.

It may be inauspicious to compare AI safety technology with the rapid nuclear weaponization of the Manhattan Project. In 1942, shortly after it began, the world's first nuclear chain reaction was ignited just a few blocks from where I sit at the University of Chicago. In July 1945, the world's first nuclear weapon was tested in New Mexico, and a month later, the bombs fell on Hiroshima and Nagasaki.

The San Francisco Project could end the century of existential risk that began when the Manhattan Project first made us capable of self-annihilation. The intelligence explosion will happen soon, whether humanity is ready or not; either way, AGI will be our species' final invention.


Read the rest here:

Why we need a "Manhattan Project" for A.I. safety - Salon

Read More..

Artificial intelligence GPT-4 shows ‘sparks’ of common sense, human-like reasoning, finds Microsoft – Down To Earth Magazine

"); o.document.close(); setTimeout(function() { window.frames.printArticleFrame.focus(); window.frames.printArticleFrame.print(); document.body.removeChild(a); }, 1000); } jQuery(document).bind("keyup keydown", function(e) { if ((e.ctrlKey || e.metaKey) && (e.key == "p" || e.charCode == 16 || e.charCode == 112 || e.keyCode == 80)) { e.preventDefault(); printArticle(); } });

OpenAI's more powerful version of ChatGPT, GPT-4, can be trained to reason and use common sense like humans, a new study by Microsoft has found.

GPT-4 is a significant step towards artificial general intelligence (AGI) and can reason, plan and learn from experience at the same level as humans do, or possibly above them, the analysis found.

The AI is part of a new cohort of large language models (LLMs), including ChatGPT and Google's PaLM. LLMs can be trained on massive amounts of data and fed both images and text to come up with answers.

Microsoft invested billions of dollars in OpenAI and had access to GPT-4 before it was launched publicly. The company recently released a 155-page analysis, Sparks of Artificial General Intelligence: Early experiments with GPT-4.

Read more: If AI goes wrong, it can go quite wrong: Here's ChatGPT CEO's full testimony in US Congress

GPT-4 is also used to power Microsoft's Bing Chat feature.

The research team discovered that LLMs can be trained to reason and use common sense like humans. They demonstrated GPT-4 can solve complex tasks in several fields without special prompting, including mathematics, vision, medicine, law and psychology.

The system available to the public is not as powerful as the version they tested, Microsoft said.

The paper gave several examples of how the AI seemed to understand concepts, like what a unicorn is. GPT-4 drew a unicorn in TikZ, a drawing language used within LaTeX. Even in the crude drawings, GPT-4 got the concept of a unicorn right.

To demonstrate the difference between true learning and memorisation, researchers asked GPT-4 to "Draw a unicorn in TikZ" three times over the course of one month. The AI showed a clear evolution in the sophistication of the drawings. (Source: Microsoft)
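For readers unfamiliar with TikZ, here is a minimal, purely illustrative snippet of what drawing-by-code looks like; it is not GPT-4's actual output from the paper, just a crude stand-in built from geometric primitives.

% Illustrative only: a crude "unicorn" from basic shapes.
% This is NOT the code GPT-4 produced in the Microsoft paper.
\documentclass[tikz]{standalone}
\begin{document}
\begin{tikzpicture}
  \draw (0,0) ellipse (1.2 and 0.7);    % body
  \draw (1.0,0.5) circle (0.35);        % head
  \draw (1.2,0.85) -- (1.5,1.5);        % horn
  \draw (-0.6,-0.7) -- (-0.6,-1.4);     % front leg
  \draw (0.6,-0.7) -- (0.6,-1.4);       % hind leg
\end{tikzpicture}
\end{document}

The task probes understanding because the model must translate the concept "unicorn" into shapes and coordinates rather than reproduce memorised text.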

GPT-4 also exhibited more common sense than previous models like ChatGPT, the researchers said. Both GPT-4 and ChatGPT were asked to stack a book, nine eggs, a laptop, a bottle and a nail.

While ChatGPT recommended placing the eggs on top of the nail, the more sophisticated model arranged the items so the eggs would not break.

Read more: Thirsty AIs: ChatGPT drinks half a litre of fresh water to answer 20-50 questions, says study

However, the report acknowledged that AI still has limitations and biases, and users were warned to be careful. GPT-4 is still not fully reliable: it hallucinates facts and makes reasoning and basic arithmetic errors.

The analysis read:

While GPT-4 is at or beyond human-level for many tasks, overall, its patterns of intelligence are decidedly not human-like. However, GPT-4 is almost certainly only a first step towards a series of increasingly generally intelligent systems, and in fact, GPT-4 itself has improved throughout our time testing it.

The paper also warned users to be careful of its limitations, like confidence calibration, cognitive fallacies and irrationality, and challenges with sensitivity to inputs.

"Great care should be taken when using language model outputs, particularly in high-stakes contexts, with the exact protocol (such as human review, grounding with additional context or avoiding high-stakes uses altogether) matching the needs of a specific use-case," it said.


View post:

Artificial intelligence GPT-4 shows 'sparks' of common sense, human-like reasoning, finds Microsoft - Down To Earth Magazine

Read More..

The Senate’s hearing on AI regulation was dangerously friendly – The Verge

The most unusual thing about this week's Senate hearing on AI was how affable it was. Industry reps, primarily OpenAI CEO Sam Altman, merrily agreed on the need to regulate new AI technologies, while politicians seemed happy to hand over responsibility for drafting rules to the companies themselves. As Senator Dick Durbin (D-IL) put it in his opening remarks: "I can't recall when we've had people representing large corporations or private sector entities come before us and plead with us to regulate them."

This sort of chumminess makes people nervous. A number of experts and industry figures say the hearing suggests we may be headed into an era of industry capture in AI. If tech giants are allowed to write the rules governing this technology, they say, it could cause a number of harms, from stifling smaller firms to introducing weak regulations.


Experts at the hearing included IBM's Christina Montgomery and noted AI critic Gary Marcus, who also raised the specter of regulatory capture. ("The peril," said Marcus, "is that we make it appear as if we are doing something, but it's more like greenwashing and nothing really happens; we just keep out the little players.") And although no one from Microsoft or Google was present, the unofficial spokesperson for the tech industry was Altman.

Although Altman's OpenAI is still called a startup by some, it's arguably the most influential AI company in the world. Its launch of image and text generation tools like ChatGPT and deals with Microsoft to remake Bing have sent shockwaves through the entire tech industry. Altman himself is well positioned: able to appeal to both the imaginations of the VC class and hardcore AI boosters with grand promises to build superintelligent AI and, maybe one day, in his own words, "capture the light cone of all future value in the universe."

At the hearing this week, he was not so grandiose. Altman, too, mentioned the problem of regulatory capture but was less clear about his thoughts on licensing smaller entities. "We don't wanna slow down smaller startups. We don't wanna slow down open source efforts," he said, adding, "We still need them to comply with things."

Sarah Myers West, managing director of the AI Now Institute, tells The Verge she was suspicious of the licensing system proposed by many speakers. "I think the harm will be that we end up with some sort of superficial checkbox exercise, where companies say 'yep, we're licensed, we know what the harms are and can proceed with business as usual,' but don't face any real liability when these systems go wrong," she said.


Other critics, particularly those running their own AI companies, stressed the potential threat to competition. "Regulation invariably favours incumbents and can stifle innovation," Emad Mostaque, founder and CEO of Stability AI, told The Verge. Clem Delangue, CEO of AI startup Hugging Face, tweeted a similar reaction: "Requiring a license to train models would be like requiring a license to write code. IMO, it would further concentrate power in the hands of a few & drastically slow down progress, fairness & transparency."

But some experts say some form of licensing could be effective. Margaret Mitchell, who was forced out of Google alongside Timnit Gebru after authoring a research paper on the potential harms of AI language models, describes herself as a proponent of some amount of self-regulation, paired with top-down regulation. She told The Verge that she could see the appeal of certification but perhaps for individuals rather than companies.

"You could imagine that to train a model (above some thresholds) a developer would need a commercial ML developer license," said Mitchell, who is now chief ethics scientist at Hugging Face. "This would be a straightforward way to bring responsible AI into a legal structure."

Mitchell added that good regulation depends on setting standards that firms can't easily bend to their advantage, and that this requires a nuanced understanding of the technology being assessed. She gives the example of facial recognition firm Clearview AI, which sold itself to police forces by claiming its algorithms are 100 percent accurate. This sounds reassuring, but experts say the company used skewed tests to produce these figures. Mitchell added that she generally does not trust Big Tech to act in the public interest. "Tech companies [have] demonstrated again and again that they do not see respecting people as a part of running a company," she said.

Even if licensing is introduced, it may not have an immediate effect. At the hearing, industry representatives often drew attention to hypothetical future harms and, in the process, gave scant attention to known problems AI already enables.

For example, researchers like Joy Buolamwini have repeatedly identified problems with bias in facial recognition, which remains inaccurate at identifying Black faces and has produced many cases of wrongful arrest in the US. Despite this, AI-driven surveillance was not mentioned at all during the hearing, while facial recognition and its flaws were only alluded to once in passing.


AI Now's West says this focus on future harms has become a common rhetorical sleight of hand among AI industry figures. These individuals "position accountability right out into the future," she said, generally by talking about artificial general intelligence, or AGI: a hypothetical AI system smarter than humans across a range of tasks. Some experts suggest we're getting closer to creating such systems, but this conclusion is strongly contested.

This rhetorical feint was obvious at the hearing. Discussing government licensing, OpenAI's Altman quietly suggested that any licenses need only apply to future systems. "Where I think the licensing scheme comes in is not for what these models are capable of today," he said. "But as we head towards artificial general intelligence, that's where I personally think we need such a scheme."

Experts compared Congress' (and Altman's) proposals unfavorably to the EU's forthcoming AI Act. The current draft of this legislation does not include mechanisms comparable to licensing, but it does classify AI systems based on their level of risk and imposes varying requirements for safeguards and data protection. More notable, though, are its clear prohibitions of known and currently harmful AI use cases, like predictive policing algorithms and mass surveillance, which have attracted praise from digital rights experts.

As West says, "That's where the conversation needs to be headed if we're going for any type of meaningful accountability in this industry."

See the article here:

The Senate's hearing on AI regulation was dangerously friendly - The Verge

Read More..