
Managing the agricultural supply chain: Contract negotiation and … – Lexology

According to the Australian Bureau of Agricultural and Resource Economics and Sciences (ABARES), Australia achieved a record $90 billion in agricultural production in 2022/23.

While this is tipped to fall in the short term (to $81 billion in 2023/24), it should not be considered an aberration but a new benchmark of the value the agri-sector can create.

While the COVID-19 pandemic and much of the disruption it caused is now history, like the Four Horsemen of the Apocalypse, new disruptors are on the horizon. As a result, supply chain contract best practices are also evolving, with a focus on responsible and sustainable practices.

This article explores four key areas undergoing generational change and the contractual developments in post-COVID agribusiness supply chains.

Why is the Agricultural Supply Chain (ASC) special?

When we speak of ASCs, we speak of primary production (including the inputs necessary to achieve that production), the transport and logistics (including storage and export) of that production, and the processing of that production (including packaging), most, though perhaps not all, of which takes place in Australia.

Supply chain contracts

A challenge inherent in any ASC is that it consists of a matrix (or patchwork) of separate contracts. Many participants will be both a user of goods/services, and a supplier of the same or similar goods and services to someone else in the chain. Attempting to achieve consistency of product and service levels (including ESG reporting, traceability, and time-slotting) along the chain can be challenging.

That matrix can also have a pyramid shape; accumulation is a common feature of ASCs. This means that greater numbers of lower value contracts will contribute towards higher value contracts, with the value and sophistication of contracts increasing toward the apex.

The challenge is that lower value contracts are likely to be poorly administered (with inconsistent or non-existent terms and conditions), may not be documented, and if documented, may not be signed.

While there is an understandable tendency by some participants to press the use of proprietary contract forms with terms and conditions that favour the interests of that participant, the result can be counterproductive (and has in some cases resulted in or contributed to mandatory codes of conduct). Proprietary forms may not be industry standard; they may be overly complex and so more readily misunderstood, and they may provoke disputes.

Industry associations can play an important role in developing and promoting the use of standard form contracts, contracting processes and dispute resolution mechanisms that reflect industry standard best practices. These standard forms provide a contracting platform that can then be supplemented by special or bespoke terms and conditions to more accurately reflect the particular needs of a contracting party.

In the absence of industry standard forms, there is scope for the development of voluntary industry codes of practice, or in cases of need, mandatory industry codes under the Competition and Consumer Act 2010.

Key areas undergoing generational change

1. Regional roots

Most of our production comes from regional areas. While there is a traditional image of the man on the land, production is increasingly corporatised, often involving venture capital or foreign direct investment and sophisticated technologies.

ASCs routinely feature the largest international corporates (Viterra or Cargill) and the smallest service providers (truck owners or drivers). This can create contractual and regulatory challenges. For example, industry codes of conduct in horticulture, sugar (cane) and dairy have addressed market power imbalances in sectors where producers have historically been perceived to be vulnerable and unsophisticated.

The same applies to the unfair contract terms provisions of the Australian Consumer Law.

Codes may, however, be less relevant, or unwarranted, where the producer is a corporation with sufficient sophistication and scale to bargain competitively. This process of corporatisation may also have been accelerated by the codes, which impose compliance obligations on farmer producers and processors.

Infrastructure (and sometimes the lack of it), and the sophistication of that infrastructure, are also transforming the regional production landscape. ASCs continue to rely on local and country road networks, which can suffer from poor maintenance (particularly after heavy rains and flooding) and may not be certified for use by some classes of heavy vehicles.

Rail is the obvious alternative to heavy vehicles, and its use should become more attractive once greenhouse gas (GHG) emissions are taken into account.
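To put rough numbers on that claim, here is a minimal back-of-envelope sketch in Python. The emission factors are illustrative assumptions chosen to sit within commonly published ranges for road and rail freight (roughly 60-110 and 15-25 gCO2e per tonne-km respectively), not figures from any particular study, and the consignment is invented.

```python
# A rough sketch of why rail looks better on GHG grounds.
# Emission factors are illustrative assumptions, not audited figures.

ROAD_G_PER_TKM = 80.0   # assumed road freight factor, gCO2e per tonne-km
RAIL_G_PER_TKM = 20.0   # assumed rail freight factor, gCO2e per tonne-km

def freight_emissions_tonnes(tonnes: float, km: float, g_per_tkm: float) -> float:
    """Emissions in tonnes of CO2e for moving `tonnes` of product over `km`."""
    return tonnes * km * g_per_tkm / 1_000_000

haul = dict(tonnes=3_000, km=400)  # e.g. a hypothetical grain consignment to port
road = freight_emissions_tonnes(**haul, g_per_tkm=ROAD_G_PER_TKM)
rail = freight_emissions_tonnes(**haul, g_per_tkm=RAIL_G_PER_TKM)
print(f"Road: {road:.0f} t CO2e, Rail: {rail:.0f} t CO2e")  # Road: 96, Rail: 24
```

On these assumptions, shifting the same haul from road to rail cuts emissions by roughly three-quarters.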

2. Local roads to global markets

We regularly produce commodities surplus to domestic needs, so export markets are important; demand from, and access to, those markets can drive investment and production.

However, the era of free global trade is over. Export markets will increasingly be subject to geopolitics, resulting in regulation that often carries a political overlay. Politics and policy will also direct production towards new, friendly markets. Politically motivated tariffs and sanctions will be commonplace and must factor into any exporter's risk assessment tools.

How we approach existing markets and cultivate new ones will have a significant impact on ASCs, which should increasingly look to tailor production to demand from particular end-users.

While this may have historically been the concern of exporters and government, increasingly, entire ASCs are export-focused, starting with the farmer.

Global trends will also impact local supply chains. The German Supply Chain Due Diligence Act is one such example. The law requires large German companies to conduct supply chain due diligence to identify, prevent and address human rights and environmental abuses within their own operations and those of their direct suppliers.

3. Sustenance and sustainability

Nothing is more sensitive than food. The health of the planet runs a close second, and increasingly, domestic and export markets want to know that food production is both green and sustainable.

Concepts such as food security have been given fresh relevance, first by the pandemic and then by the war in Ukraine. This is happening at the same time that concerns over the climate and decarbonisation have reached the front of consumers' minds.

ESG is no longer an emerging issue; it is now shaping supply chains. Both transport and agriculture are key sectors for carbon reduction targets and potentially for offsets.

4. Technology and innovation

Despite some perceptions, ASCs are highly innovative and rely on the latest technologies to remain locally and globally competitive.

In addition, we can say that artificial intelligence (AI) will change everything, but we don't yet know how.

One likely effect of AI is that it will both accelerate new developments and reduce their cost, because smaller groups of people will be able to do more, and do it more quickly.

We can expect to see new varieties of plants and animals coming to market, with particular health and environmental claims, more often and more quickly than in the past.

Contractual developments in post-COVID agribusiness supply chains

Flexible but resilient

Supply chains must be flexible but also resilient. Above all, they must be profitable, reliable and enforceable.

Due to the pandemic, supply chain contracts now incorporate provisions for contingencies such as disruptions in transportation, production, or distribution. Additionally, contracts should include provisions for emerging issues and legislative changes, such as ESG.

Force majeure (FM) clauses are key. Contracting parties must re-evaluate the events of FM stipulated in contracts. They must outline the parties' current obligations and allocate current and foreseeable risks of disruptive events.

Dispute resolution clauses should also be tailored towards the best outcomes. In particular, they must be designed to:

Contract planning and management in the procurement process

Effective contract planning and management is critical to the success of any supply chain. This includes identifying the key performance indicators (KPIs) that will be used to measure supplier performance and incorporating them into the contract.

Additionally, contracts and standard terms and conditions should be regularly reviewed and updated to ensure that they remain relevant and aligned with business goals.

Lawyers should consider whether provisions should be drafted as:

E-platform contracts

It has become common for freight and commodity procurement contracts to be negotiated and (perhaps unwittingly) concluded over text messages and platforms like WhatsApp.

As this is probably now unavoidable, companies should put policies in place that clarify whether the practice is acceptable and, if it is, how it should be regulated, so that appropriate terms and conditions are incorporated into those contracts and/or such exchanges are made expressly subject to contract.

Training should be provided to traders so that they fully understand the risks inherent in less formal contracting processes.

Model responsible supply chain standards to inform contract negotiation

Various organisations have developed responsible supply chain standards that can be used as a model for contract negotiation. These standards typically cover issues such as:

As mentioned above, these standards should align with a company's value statements. They can be recorded in policies and standard terms and conditions and, more broadly, in a voluntary code for an industry sector.

Smart contracts for management of sustainable supply chains

Smart contracts are self-executing contracts with the terms of the agreement between buyer and seller being directly written into lines of code. They can be used to manage sustainable supply chains by automating processes such as verifying sustainability criteria and tracking products throughout the supply chain, particularly where a product will be commingled and/or used in other production.
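To make the verify-then-pay idea concrete, here is a minimal hypothetical sketch, written in Python purely for illustration (real smart contracts run on-chain, for example in Solidity). The consignment fields, certificate registry and criteria are all invented for the example.

```python
# Hypothetical sketch of smart-contract logic for a sustainable supply chain.
# This only illustrates the self-executing verify-then-pay pattern.
from dataclasses import dataclass

@dataclass
class Consignment:
    batch_id: str
    certificate_id: str | None   # sustainability certificate, if any
    chain_of_custody: list[str]  # parties that handled the batch

VALID_CERTIFICATES = {"CERT-2023-0481"}  # stand-in for a registry lookup

def release_payment(c: Consignment) -> str:
    # Criterion 1: a recognised sustainability certificate must be attached.
    if c.certificate_id not in VALID_CERTIFICATES:
        return f"{c.batch_id}: payment withheld (no valid certificate)"
    # Criterion 2: the batch must be traceable from farm to buyer.
    if len(c.chain_of_custody) < 2 or c.chain_of_custody[0] != "farm":
        return f"{c.batch_id}: payment withheld (traceability gap)"
    return f"{c.batch_id}: payment released"

batch = Consignment("BATCH-42", "CERT-2023-0481", ["farm", "storage", "port"])
print(release_payment(batch))  # BATCH-42: payment released
```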

While blockchain is probably the best-known platform for smart contracting, it seems inevitable that generative AI will provide alternative and smarter solutions.

Particularly in the case of export contracts, thought should be given as to whether smart contracts are enforceable against your counterparty in its home jurisdiction.

Implementing ESG plans through supply chain contracts

Environmental and social due diligence involves evaluating the potential environmental and social impacts of a project or business activity.

ESG plans can be incorporated into supply chain contracts to require suppliers to meet environmental and social standards. However, the drafting of appropriate provisions and their enforcement will present challenges.

The transition to lower carbon production will be uneven. Some producers can invest more readily in carbon reduction technologies than others. Similarly, certain producers will be able to take advantage of lower carbon rail transport compared to others.

Capturing any value associated with lower carbon production, including through segregation, will prove challenging. Greenwashing, or the tendency to exaggerate ESG compliance, will also feature in ASCs.

Agriculture and transport are two of Australia's largest emitters, meaning there is significant scope for emissions reduction. This will be driven both by government regulation and consumer preference for green products.

More ambitious ASCs will recognise and seek to incorporate references to the UN Sustainable Development Goals in broad-based sustainability plans.

To defend against claims of greenwashing, companies will need to be able to verify their claims and demonstrate that they are accurate, made in good faith, and backed by thorough due diligence, so that GHG claims are fairly and accurately monitored and reported.

Risk-based due diligence along responsible supply chains

Risk-based due diligence involves identifying and assessing risks along the supply chain and taking steps to mitigate them. This can include conducting supplier audits, monitoring compliance with environmental and social standards, and implementing risk management strategies.

Supplier audits can be contentious. They require both the will and the means to actively audit suppliers, and the supplier's agreement to submit to ad hoc audits.

The better path is to adopt an industry standard, administered and audited by a third party or third parties. Suppliers who wish to achieve that standard may do so voluntarily and display a certificate of compliance. Customers who prefer to contract with certified suppliers may then do so.

Incorporating supply chain standards and risk-based due diligence provisions into a contract

Risk-based due diligence provisions already feature in most workplace safety legislation and in the Heavy Vehicle National Law, so most participants in ASCs will be at least familiar with the concepts.

Supply chain contracts should incorporate provisions for responsible and sustainable practices, including the use of sustainable materials, compliance with environmental and social standards, and adherence to labour laws.

Perhaps the most current example under Australian laws is the incorporation of modern slavery provisions into supply chain contracts.

Not all Australian companies are currently required to comply with the Modern Slavery Act 2018. However, as noted above, aggregation of contracts is common in ASCs. As part of their compliance, companies with over AU$100 million annual turnover (who bear the primary compliance obligations) may require compliance assurances from smaller suppliers (a secondary obligation).

This raises the following issues:

Looking into the future, we should expect to see risk-based due diligence laws applying to ESG and particularly, carbon emissions and GHGs.

As mentioned above, Germany's Supply Chain Due Diligence Act (Lieferkettengesetz) may provide an example of developments we might expect to see in Australia and elsewhere. Under this legislation, companies with more than 3,000 employees are required to monitor their suppliers and subcontractors and to take action to prevent or mitigate harm. The law applies to a wide range of human rights and environmental issues.

Key takeaway

In summary, the latest developments in supply chain contract best practices in the agribusiness sector reflect a growing emphasis on responsible and sustainable practices. To effectively manage supply chains, it is important to plan and manage contracts, incorporate responsible supply chain standards and risk-based due diligence, and use smart contracts and ESG plans. By doing so, companies can ensure that their supply chains are resilient, sustainable, and meet the needs of all stakeholders involved.


New Religious Cryptos Pump – Are They Scams? God, Mary, Baby … – Cryptonews

Three new meme coins with a religious theme were listed on Uniswap today, May 22nd - $GOD, $MARY and $BABYJESUS - making the top crypto gainers list on DEXTools.

Each has a low DEXTscore and some warnings under the contract details section of their DEXTools links - exercise caution when buying new DEX coins.

Jesus Coin (JESUS), launched last month, is also among the top trending cryptocurrency assets today on CoinMarketCap.

Often a large percentage gain on a decentralized exchange is a sign of low liquidity more than of real interest from buyers - the pool is too shallow to absorb purchases without the price exploding, so even a low buying volume can cause a pump in the thousands of percent.
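A minimal sketch of those mechanics, assuming the constant-product (x * y = k) pool model popularised by Uniswap v2; the reserve figures and trade sizes are invented for illustration.

```python
# Illustrative price impact of a buy in a shallow constant-product pool.
# Numbers are invented for the example, not taken from any real pool.

def buy_price_impact(token_reserve: float, usd_reserve: float, usd_in: float) -> float:
    """Return the % increase in the token's spot price after a buy."""
    k = token_reserve * usd_reserve
    price_before = usd_reserve / token_reserve
    # After the buy, the USD reserve grows; tokens shrink to keep k constant.
    new_usd = usd_reserve + usd_in
    new_tokens = k / new_usd
    price_after = new_usd / new_tokens
    return (price_after / price_before - 1) * 100

# A pool with only $50,000 of liquidity on the dollar side:
print(f"$5,000 buy:  +{buy_price_impact(1_000_000, 50_000, 5_000):.0f}%")   # ~+21%
print(f"$50,000 buy: +{buy_price_impact(1_000_000, 50_000, 50_000):.0f}%")  # ~+300%
```

On these toy numbers, a buy equal to the pool's dollar reserves quadruples the spot price; pumps in the thousands of percent come from pools far shallower still.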

At the time of writing, all three new meme coins have liquidity under $100,000 and a fully diluted market capitalization under $1 million.

Experienced traders can still profit from new tokens, but beginners often lose out: as soon as significant selling pressure comes in from early holders, the price can crash just as fast as it rose - the classic 'pump and dump' chart pattern.

DEXTools provides an automatic audit of all tokens' smart contracts. It can be a useful resource when reviewing new Uniswap listings, which are attracting increasing numbers of investors - something analysts say could signal that Bitcoin has topped.

$GOD token has been flagged as having a blacklist function in its contracts, as has $BABYJESUS.

$MARY has been flagged as having the function to modify the maximum amount of transactions or the maximum token position.

These warnings - while not always 100% accurate - and low liquidity issues don't apply to other popular meme coins such as $PEPE, $COPIUM, $RFD, $SPONGE and others among the most trending cryptocurrency assets on DEXTools.

When those more established meme tokens make the top crypto gainers or 'hot pairs' list, the percentage gain tends to be in the two or three figures, with four or five figure gains being much rarer when measured over a 24 hour period.

To avoid crypto scams it's also worth checking to see if any major crypto influencers have mentioned a new coin listing.

Most of the cashtag mentions for $MARY and $BABYJESUS on Twitter appear to be smaller accounts that may have botted engagement.

We were only able to find one large, well-known account mentioning the $GOD token - Wizard of Soho, who has 60k followers.


Yesterday RefundCoin (RFD) spiked over 1,000%. It has a liquidity of $14 million and no DEXTools warnings.

Jesus Coin (JESUS) is up approximately 25% today, 2,300% this week and 4,800% in the past month.

Also making the list of most trending cryptocurrencies on CoinMarketCap are Pepe, Wagmi Coin, and SAUDI PEPE - the last up 85% in the past day. Those meme coins dominate the top three spots.

Other meme coins in the top ten are Shiba Inu and Love Hate Inu - recently listed on OKX.

This week Copium token has also ranked among the top trending cryptos on DEXTools.



Parrots, paper clips and safety vs. ethics: Why the artificial intelligence debate sounds like a foreign language – CNBC

Sam Altman, chief executive officer and co-founder of OpenAI, speaks during a Senate Judiciary Subcommittee hearing in Washington, DC, US, on Tuesday, May 16, 2023. Congress is debating the potential and pitfalls of artificial intelligence as products like ChatGPT raise questions about the future of creative industries and the ability to tell fact from fiction.


This past week, OpenAI CEO Sam Altman charmed a room full of politicians in Washington, D.C., over dinner, then testified for nearly three hours about the potential risks of artificial intelligence at a Senate hearing.

After the hearing, he summed up his stance on AI regulation, using terms that are not widely known among the general public.

"AGI safety is really important, and frontier models should be regulated," Altman tweeted. "Regulatory capture is bad, and we shouldn't mess with models below the threshold."

In this case, "AGI" refers to "artificial general intelligence." As a concept, it's used to mean a significantly more advanced AI than is currently possible, one that can do most things as well or better than most humans, including improving itself.

"Frontier models" is a way to talk about the AI systems that are the most expensive to produce and which analyze the most data. Large language models, like OpenAI's GPT-4, are frontier models, as compared to smaller AI models that perform specific tasks like identifying cats in photos.

Most people agree that there need to be laws governing AI as the pace of development accelerates.

"Machine learning, deep learning, for the past 10 years or so, it developed very rapidly. When ChatGPT came out, it developed in a way we never imagined, that it could go this fast," said My Thai, a computer science professor at the University of Florida. "We're afraid that we're racing into a more powerful system that we don't fully comprehend and anticipate what what it is it can do."

But the language around this debate reveals two major camps among academics, politicians, and the technology industry. Some are more concerned about what they call "AI safety." The other camp is worried about what they call "AI ethics."

When Altman spoke to Congress, he mostly avoided jargon, but his tweet suggested he's mostly concerned about AI safety - a stance shared by many industry leaders at companies like Altman-run OpenAI, Google DeepMind and well-capitalized startups. They worry about the possibility of building an unfriendly AGI with unimaginable powers. This camp believes we need urgent attention from governments to regulate development and prevent an untimely end to humanity - an effort similar to nuclear nonproliferation.

"It's good to hear so many people starting to get serious about AGI safety," DeepMind founder and current Inflection AI CEO Mustafa Suleyman tweeted on Friday. "We need to be very ambitious. The Manhattan Project cost 0.4% of U.S. GDP. Imagine what an equivalent programme for safety could achieve today."

But much of the discussion in Congress and at the White House about regulation is through an AI ethics lens, which focuses on current harms.

From this perspective, governments should enforce transparency around how AI systems collect and use data, restrict its use in areas that are subject to anti-discrimination law like housing or employment, and explain how current AI technology falls short. The White House's AI Bill of Rights proposal from late last year included many of these concerns.

This camp was represented at the congressional hearing by IBM Chief Privacy Officer Christina Montgomery, who told lawmakers she believes each company working on these technologies should have an "AI ethics" point of contact.

"There must be clear guidance on AI end uses or categories of AI-supported activity that are inherently high-risk," Montgomery told Congress.


It's not surprising the debate around AI has developed its own lingo. It started as a technical academic field.

Much of the software being discussed today is based on so-called large language models (LLMs), which use graphics processing units (GPUs) to predict statistically likely sentences, images, or music, a process called "inference." Of course, AI models need to be built first, in a data analysis process called "training."
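A toy sketch of that train-then-infer split, assuming nothing about any real model: "training" here is just counting which word follows which in a tiny invented corpus, and "inference" is emitting the statistically most likely next word. Real LLMs learn billions of parameters on GPUs, but the division of labour is the same.

```python
# Toy illustration of "training" then "inference" - not any production system.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# "Training": build a table of which word tends to follow which.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

# "Inference": given a prompt word, emit the most likely continuation.
def predict(word: str) -> str:
    return follows[word].most_common(1)[0][0]

print(predict("the"))  # -> "cat" (the word that most often follows "the")
```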

But other terms, especially from AI safety proponents, are more cultural in nature, and often refer to shared references and in-jokes.

For example, AI safety people might say that they're worried about turning into a paper clip. That refers to a thought experiment popularized by philosopher Nick Bostrom, which posits that a super-powerful AI - a "superintelligence" - could be given a mission to make as many paper clips as possible, and logically decide to kill humans and make paper clips out of their remains.

OpenAI's logo is inspired by this tale, and the company has even made paper clips in the shape of its logo.

Another concept in AI safety is the "hard takeoff" or "fast takeoff," which is a phrase that suggests if someone succeeds at building an AGI that it will already be too late to save humanity.

Sometimes, this idea is described in terms of an onomatopoeia - "foom" - especially among critics of the concept.

"It's like you believe in the ridiculous hard take-off 'foom' scenario, which makes it sound like you have zero understanding of how everything works," tweeted Meta AI chief Yann LeCun, who is skeptical of AGI claims, in a recent debate on social media.

AI ethics has its own lingo, too.

When describing the limitations of the current LLM systems, which cannot understand meaning but merely produce human-seeming language, AI ethics people often compare them to "Stochastic Parrots."

The analogy, coined by Emily Bender, Timnit Gebru, Angelina McMillan-Major, and Margaret Mitchell in a paper written while some of the authors were at Google, emphasizes that while sophisticated AI models can produce realistic-seeming text, the software doesn't understand the concepts behind the language, any more than a parrot does.

When these LLMs invent incorrect facts in responses, they're "hallucinating."

One topic IBM's Montgomery pressed during the hearing was "explainability" in AI results. When researchers and practitioners cannot point to the exact numbers and paths of operations that larger AI models use to derive their output, that opacity can hide inherent biases in the LLMs.

"You have to have explainability around the algorithm," said Adnan Masood, AI architect at UST-Global. "Previously, if you look at the classical algorithms, it tells you, 'Why am I making that decision?' Now with a larger model, they're becoming this huge model, they're a black box."

Another important term is "guardrails," which encompasses software and policies that Big Tech companies are currently building around AI models to ensure that they don't leak data or produce disturbing content, which is often called "going off the rails."

It can also refer to specific applications that protect AI software from going off topic, like Nvidia's "NeMo Guardrails" product.

"Our AI ethics board plays a critical role in overseeing internal AI governance processes, creating reasonable guardrails to ensure we introduce technology into the world in a responsible and safe manner," Montgomery said this week.

Sometimes these terms can have multiple meanings, as in the case of "emergent behavior."

A recent paper from Microsoft Research called "sparks of artificial general intelligence" claimed to identify several "emergent behaviors" in OpenAI's GPT-4, such as the ability to draw animals using a programming language for graphs.

But it can also describe what happens when simple changes are made at a very big scale - like the patterns birds make when flying in flocks - or, in AI's case, what happens when ChatGPT and similar products are being used by millions of people, such as widespread spam or disinformation.


Where AI evolves from here – Axios


Microsoft researchers say the latest model of OpenAI's GPT "is a significant step towards AGI" - artificial general intelligence, the longtime grail for AI developers.

The big picture: If you think of AI as a technology ascending (or being pushed up) a ladder, Microsoft's paper claims that GPT-4 has climbed several rungs higher than anyone thought.

Driving the news: Microsoft released the "Sparks of Artificial General Intelligence" study in March, and it resurfaced in a provocative New York Times story Tuesday.

Catch up quick: Three key terms to understand in this realm are generative AI, artificial general intelligence (AGI), and sentient AI.

GPT-4, ChatGPT, Dall-E and the other AI programs that have led the current industry wave are all forms of generative AI.

AGI has a variety of definitions, all centering on the notion of human-level intelligence that can evaluate complex situations, apply common sense, and learn and adapt.

Many experts, like Microsoft's authors, see a clear path from the context-awareness of today's generative AI to building a full AGI.

Beyond the goal of AGI lies the more speculative notion of "sentient AI," the idea that these programs might cross some boundary to become aware of their own existence and even develop their own wishes and feelings.

Virtually no one else is arguing that ChatGPT or any other AI today has come anywhere near sentience. But plenty of experts and tech leaders think that might happen someday, and that there's a slim chance such a sentient AI could go off the rails and wreck the planet or destroy the human species.

Our thought bubble: The questions these categories raise divide people into two camps.

The bottom line: For help navigating this landscape, you're likely to find as much value in the science fiction novels of Philip K. Dick as in the day's news.


Amid job losses and fears of AI take-over, more tech majors are joining Artificial Intelligence race – The Tribune India

Tribune Web Desk

Vibha Sharma

Chandigarh, May 22

Amid warnings regarding its disastrous effect on the job market and catastrophic effects on the human race, the buzz is that more companies are joining the ongoing global Artificial Intelligence race.

They include tech major Apple Inc, which is said to be launching its own version of the popular chatbot ChatGPT. According to reports, Apple, which recently banned employees from using OpenAI's ChatGPT, is hiring for positions across machine learning and AI.

Though an AI takeover (a scenario in which AI becomes the dominant form of intelligence, controlling the planet as well as the human species) remains hypothetical, experts are warning about data privacy, centralisation of power in a handful of companies and AGI (artificial general intelligence) surpassing human cognitive ability.

After all, AI is also a popular theme in science fiction movies, highlighting benefits and dangers, including the possibility of machines taking over the world and the human race.

According to the Twitter bio of OpenAI - the AI research and deployment company which developed ChatGPT - their mission is "to ensure that artificial general intelligence benefits all of humanity".

While those in favour of AI endorse the feeling, an equal number also advise caution, saying that those in power are not prepared for what may be coming.

Is 'humanity sleepwalking into a catastrophe'?

While there has been an increase in AI products for general consumer use, including from tech giants like Google and Microsoft, the job scenario in tech companies is not so encouraging.

According to Layoffs.fyi, 696 tech companies have laid off as many as 197,985 employees so far this year. Data published by the website, which tracks layoffs in the tech industry, showed around March that 454 tech companies had laid off 123,882 employees since the beginning of 2023.

The emergence and subsequent popularity of ChatGPT-type AI shows that the day is not far off when thousands of jobs related to research, coding, writing, human resources, etc, may become redundant - but there are other fears as well, including among job-seekers.

According to the job advice platform Resumebuilder.com: "Many employers now use applicant tracking system (ATS) software to automate the initial stage of the hiring process. If the formatting of your resume isn't optimised for such software, it might get filtered out before it even reaches the person who decides whether or not you get an interview."
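As a toy illustration of why formatting matters, here is a hypothetical sketch of the kind of naive keyword screen an ATS might apply; the keywords and cut-off are invented for the example. Text trapped in images, tables or unusual layouts never reaches this step, so the resume scores zero regardless of the candidate's actual skills.

```python
# Hypothetical sketch of a naive ATS-style keyword screen. Real systems are
# more elaborate, but the failure mode is the same: only extractable plain
# text is ever scored.
REQUIRED_KEYWORDS = {"python", "sql", "machine learning"}  # invented example

def ats_score(resume_text: str) -> float:
    """Fraction of required keywords found in the extracted resume text."""
    text = resume_text.lower()
    hits = sum(1 for kw in REQUIRED_KEYWORDS if kw in text)
    return hits / len(REQUIRED_KEYWORDS)

plain = "Data analyst with Python, SQL and machine learning experience."
garbled = ""  # what a parser may extract from a heavily designed PDF

for candidate in (plain, garbled):
    passed = ats_score(candidate) >= 0.67  # invented cut-off
    print(passed)  # True, then False
```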

ChatGPT is not the only AI in market

ChatGPT, which crossed the one million user mark in just five days after it was made public in November 2022, currently has over 100 million users. Its website currently receives 1.8 billion visits per month, and it is not the only one in the market.

Currently, there are several alternatives with different features and benefits to rival the tool owned and developed by OpenAI - founded in December 2015 by a group which included multi-billionaire Elon Musk, who is now warning against the evils of AI.

Recently, an open letter signed by the who's who of the tech world, including Musk, called for a six-month pause in the "out of control" race for AI development, warning of its profound risks to society and humanity.

The letter came after the public release of GPT-4.

Though Musk is said to be involved in several tech/AI companies, he warned that AI could lead to civilisation destruction.

"AI is more dangerous than, say, mismanaged aircraft design or production maintenance or bad car production, in the sense that it has the potential - however small one may regard that probability, but it is non-trivial - it has the potential of civilisation destruction," Musk was quoted as saying in an interview with Tucker Carlson, an American political commentator.



Artificial Intelligence May Be ‘Threat’ to Human Health, Experts Warn – HealthITAnalytics.com

May 19, 2023 - In a recent analysis published in BMJ Global Health, an international group of researchers and public health experts has argued that artificial intelligence (AI) and artificial general intelligence (AGI) may pose numerous threats to human health and well-being, calling for research into these technologies to be halted until they can be properly regulated.

The authors noted that AI technology has various promising applications in healthcare, but posit that misuse of these solutions could harm human health through their impact on social, economic, political, and security-related determinants of health.

The research and development of healthcare AI are progressing rapidly, the authors stated, highlighting that much of the literature examining these tools is focused on the potential benefits gained through their implementation and use. Conversely, discussions about the potential harms of these technologies are often limited to looking at the misapplication of AI in the clinical setting.

However, AI could negatively impact upstream determinants of health, characterized by the American Medical Association (AMA) as individual factors that may seem unrelated to health on the surface but actually have downstream impacts on patients' long-term health outcomes.

The AMA indicates that these upstream factors, such as living conditions or social and institutional inequities, have not always been within the scope of public health research but can exacerbate disease incidence, injury rates, and mortality.


The authors argued that the potential misuse and ongoing failure to anticipate, adapt to, and regulate AI's impacts on society could negatively affect these factors and cause harm.

The analysis identified three impacts AI could have on upstream and social determinants of health (SDOH) that could result in threats to human health: the manipulation and control of people, the proliferation of lethal autonomous weapons systems (LAWS), and the potential obsolescence of human labor.

The first threat, the authors explained, results from AIs ability to process and analyze large datasets containing sensitive or personal information, including images. This ability could enable the misuse of AI solutions in order to develop highly personalized, targeted marketing campaigns or significantly expand surveillance systems.

These could be used with good intentions, the authors noted, such as countering terrorism, but could also be used to manipulate individual behavior, citing cases of AI-driven subversion of elections across the globe and AI-driven surveillance systems that perpetuate inequities by using facial recognition and big data to produce assessments of individual behavior and trustworthiness.

The second threat is related to the development and use of LAWS, which can locate, select, and engage human targets without supervision. The authors pointed out that these can be attached to small devices like drones and easily mass-produced, providing bad actors with the ability to kill at an industrial scale.


The third threat is concerned with how AI may make human jobs and labor obsolete. The authors acknowledged that AI has the potential to help perform jobs that are repetitive, unpleasant, or dangerous, which comes with some benefits to humans. However, they noted that currently, increased automation has largely served to contribute to inequitable wealth distribution and could exacerbate the adverse health effects associated with unemployment.

In addition, the authors described how AGI could pose an existential threat to humanity.

"We are now seeking to create machines that are vastly more intelligent and powerful than ourselves," they said. "The potential for such machines to apply this intelligence and power - whether deliberately or not - in ways that could harm or subjugate humans is real and has to be considered."

They highlighted that AGI's connection to the internet and the real world, including robots, vehicles, digital systems that help run various aspects of society, and weapons, could be the biggest event in human history, for the benefit of humanity or to its detriment.

Because of the scale of these potential threats and the significant impacts they could have on human health, the authors stated that healthcare professionals have a critical role to play in raising awareness around the risks of AI. Further, the authors argued for the prohibition of certain types of AI and joined calls for a moratorium on AGI development.

With exponential growth in AI research and development, the window of opportunity to avoid serious and potentially existential harms is closing. The future outcomes of the development of AI and AGI will depend on policy decisions taken now and on the effectiveness of regulatory institutions that we design to minimise risk and harm and maximise benefit, they wrote.


Today’s AI boom will amplify social problems if we don’t act now, says AI ethicist – ZDNet

AI developers must move quickly to develop and deploy systems that address algorithmic bias, said Kathy Baxter, Principal Architect of Ethical AI Practice at Salesforce. In an interview with ZDNET, Baxter emphasized the need for diverse representation in data sets and user research to ensure fair and unbiased AI systems. She also highlighted the significance of making AI systems transparent, understandable, and accountable while protecting individual privacy. Baxter stressed the need for cross-sector collaboration, like the model used by the National Institute of Standards and Technology (NIST), so that we can develop robust and safe AI systems that benefit everyone.

One of the fundamental questions in AI ethics is ensuring that AI systems are developed and deployed without reinforcing existing social biases or creating new ones. To achieve this, Baxter stressed the importance of asking who benefits and who pays for AI technology. It's crucial to consider the data sets being used and ensure they represent everyone's voices. Inclusivity in the development process and identifying potential harms through user research is also essential.


"This is one of the fundamental questions we have to discuss," Baxter said. "Women of color, in particular, have been asking this question and doing research in this area for years now. I'm thrilled to see many people talking about this, particularly with the use of generative AI. But the things that we need to do, fundamentally, are ask who benefits and who pays for this technology. Whose voices are included?"

Social bias can be infused into AI systems through the data sets used to train them. Unrepresentative data sets containing biases, such as image data sets with predominantly one race or lacking cultural differentiation, can result in biased AI systems. Furthermore, applying AI systems unevenly in society can perpetuate existing stereotypes.
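A toy sketch of that mechanism, with an entirely invented dataset: a naive model trained on data where one group dominates simply learns the majority pattern and is wrong for every member of the under-represented group, even though nothing in the code is deliberately "biased".

```python
# Hypothetical illustration of dataset bias: a majority-class rule learned
# from skewed training data. The data is invented; no real system is shown.
from collections import Counter

# Training data: (group, label). Group "A" is heavily over-represented.
train = [("A", "approve")] * 90 + [("B", "deny")] * 10

# "Training": learn the most common label overall - what a naive model
# optimising average accuracy converges toward on skewed data.
majority_label = Counter(label for _, label in train).most_common(1)[0][0]

def predict(group: str) -> str:
    return majority_label  # ignores the group entirely

# The model is 90% "accurate" overall, yet wrong for every group-B case.
errors_b = sum(1 for g, label in train if g == "B" and predict(g) != label)
print(majority_label, errors_b)  # approve 10
```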

To make AI systems transparent and understandable to the average person, prioritizing explainability during the development process is key. Techniques such as "chain of thought prompts" can help AI systems show their work and make their decision-making process more understandable. User research is also vital to ensure that explanations are clear and users can identify uncertainties in AI-generated content.


Protecting individuals' privacy and ensuring responsible AI use requires transparency and consent. Salesforce follows guidelines for responsible generative AI, which include respecting data provenance and only using customer data with consent. Allowing users to opt in, opt-out, or have control over their data use is critical for privacy.

"We only use customer data when we have their consent," Baxter said. "Being transparent when you are using someone's data, allowing them to opt-in, and allowing them to go back and say when they no longer want their data to be included is really important."

As the competition for innovation in generative AI intensifies, maintaining human control and autonomy over increasingly autonomous AI systems is more important than ever. Empowering users to make informed decisions about the use of AI-generated content and keeping a human in the loop can help maintain control.

Ensuring AI systems are safe, reliable, and usable is crucial; industry-wide collaboration is vital to achieving this. Baxter praised the AI risk management framework created by NIST, which involved more than 240 experts from various sectors. This collaborative approach provides a common language and framework for identifying risks and sharing solutions.

Failing to address these ethical AI issues can have severe consequences, as seen in cases of wrongful arrests due to facial recognition errors or the generation of harmful images. Investing in safeguards and focusing on the here and now, rather than solely on potential future harms, can help mitigate these issues and ensure the responsible development and use of AI systems.


While the future of AI and the possibility of artificial general intelligence are intriguing topics, Baxter emphasizes the importance of focusing on the present. Ensuring responsible AI use and addressing social biases today will better prepare society for future AI advancements. By investing in ethical AI practices and collaborating across industries, we can help create a safer, more inclusive future for AI technology.

"I think the timeline matters a lot," Baxter said. "We really have to invest in the here and now and create this muscle memory, create these resources, create regulations that allow us to continue advancing but doing it safely."


Artificial intelligence: World first rules are coming soon are you … – JD Supra

The EU's AI Act

The European Commission first released its proposal for a Regulation on Artificial Intelligence (the AI Act) on 21 April 2021. It is intended to be the first legislation setting out harmonised rules for the development, placing on the market, and use of AI in the European Union. The exact requirements (which mainly revolve around data quality, transparency, human oversight and accountability) depend on the risk classification of the AI in question, which ranges from high to low and minimal risk, while a number of AI uses are prohibited outright. Given that the AI Act is expected to be a landmark piece of EU legislation that will have extraterritorial scope and will be accompanied by hard-hitting penalties (including potential fines of up to €30 million or 6% of worldwide annual turnover), we have been keeping a close eye on developments.

The latest development occurred on 11 May 2023, with Members of the European Parliament (MEPs) committees voting in favour of certain proposed amendments to the original text of the AI Act. Some of the key amendments include:

General AI principles: New provisions containing general AI principles have been introduced. These are intended to apply to all AI systems, irrespective of whether they are high-risk, thereby significantly expanding the scope of the application of the AI Act. At the same time, MEPs expanded the classification of high-risk uses to include those that may result in harm to people's health, safety, fundamental rights or the environment. Particularly interesting is the addition to the high-risk list of AI in recommender systems used by social media platforms (those with more than 45 million users under the EU's Digital Services Act).

Prohibited AI practices: As part of the amendments, MEPs substantially amended the unacceptable risk / prohibited list to include intrusive and discriminatory uses of AI systems. Such bans now extend to a number of uses of biometric data, including indiscriminate scraping of biometric data from social media to create facial recognition databases.

Foundation models: While past versions of the AI Act have predominantly focused on 'high-risk' AI systems, MEPs introduced a new framework for all foundation models. Such a framework (which would, among other things, require providers of foundation models to guarantee robust protection of fundamental rights, health and safety, the environment, democracy and the rule of law) would particularly impact providers and users of generative AI. Such providers would also need to assess and mitigate risks, comply with design, information and environmental requirements and register in the applicable EU database, while generative foundation models would also have to comply with additional transparency requirements.

User obligations: 'Users' of AI systems are now referred to as 'deployers' (a welcome change, given that the previous term somewhat confusingly was not intended to capture the end user). This change means deployers become subject to an expanded range of obligations, such as the duty to undertake a wide-ranging AI impact assessment. End user rights, on the other hand, are boosted, with end users now conferred the right to receive an explanation about decisions made by high-risk AI systems.

The next step, plenary adoption, is currently scheduled to take place in June 2023. Following this, the proposal will enter the last stage of the legislative process, and negotiations between the European Council and the European Commission on the final form of the AI Act will begin.

However, even if these timelines are adhered to, the traction that AI regulation has been receiving in recent times may mean that the EU's AI Act is not the first ever legislation in this area. Before taking a look at the developments occurring in this sphere in the UK, let's consider why those involved in the supply of products need to have AI regulation on their radar in the first place.

The uses of AI are endless. Taking inspiration from a report issued by the UK's Office for Product Safety and Standards last year, we see AI in the product development space as having the potential to lead to:

Safer product design: AI can be used to train algorithms to develop only safe products and compliant solutions.

Enhanced consumer safety and satisfaction: Data collected with the support of AI can allow manufacturers to incorporate a consumer's personal characteristics and preferences in the design process of a product, which can help identify the product's future use and ensure it is designed in a way conducive to this.

Safer product assembly: AI tools such as visual recognition can assist with conducting quality inspections along the supply chain, ensuring all of the parts and components being assembled are safe - leaving little room for human error.

Prevention of mass product recalls: Enhanced data collection via AI during industrial assembly can enable problems which are not easy to identify through manual inspections to be detected, thereby allowing issue-detection before products are sold.

Predictive maintenance: AI can provide manufacturers with critical information which allows them to plan ahead and forecast when equipment may fail so that repairs can be scheduled on time.

Safer consumer use: AI in customer services can also contribute to product safety through the use of virtual assistants answering consumer queries and providing recommendations on safe product usage.

Protection against cyber-attacks: AI can be leveraged to detect, analyse and prevent cyber-attacks that may affect consumer safety or privacy.

On the other hand, there are risks when using AI. In the products space, this could result in:

Products not performing as intended: Product safety challenges may result from poor decisions or errors made in the design and development phase. A lack of good data can also produce discriminatory results, particularly impacting vulnerable groups.

AI systems lacking transparency and explainability: A consumer may not know or understand when an AI system is in use and taking decisions, or how such decisions are being taken. Such a lack of understanding can in turn affect the ability of those who have suffered harm to claim compensation, given the difficulty of proving how the harm came about. This is a particular concern given that product safety has traditionally envisaged risks to the physical health and safety of end users, while AI products pose risks of immaterial harms (such as psychological harm) or indirect harms from cyber security vulnerabilities.

Cyber security vulnerabilities being exploited: AI systems can be hacked and/or lose connectivity, which may result in safety risks, e.g. if a connected fire alarm loses connectivity, the consumer may not be warned if a fire occurs.

Currently, there is no overarching piece of legislation regulating AI in the UK. Instead, different regulatory bodies (e.g. the Medicines and Healthcare products Regulatory Agency, the Information Commissioner's Office etc.) oversee AI use across different sectors and, where relevant, provide guidance on the same.

In September 2021, however, the UK government announced a 10-year plan, described as the National AI Strategy. The National AI Strategy aims to invest in and plan for the long-term needs of the AI ecosystem, support the transition to an AI-enabled economy, and ensure that the UK gets the national and international governance of AI technologies right.

More recently, on 29 March 2023, the UK Government published its long-anticipated artificial intelligence white paper. Branding its proposed approach to AI regulation as 'world leading' in a bid to turbocharge growth, the white paper provides a cross-sectoral, principles-based framework to increase public trust in AI and develop capabilities in AI technology. The five principles intended to underpin the UK's regulatory framework are:

1. Safety, security and robustness;

2. Appropriate transparency and explainability;

3. Fairness;

4. Accountability and governance; and

5. Contestability and redress.

The UK Government has said it would avoid "heavy-handed legislation" that could stifle innovation which means in the first instance at least, these principles will not be enforced using legislation. Instead, responsibility will be given to existing regulators to decide on "tailored, context-specific approaches" that best suit their sectors. The consultation accompanying the white paper is open until 21 June 2023.

However, this does not mean that no legislation in this arena is envisaged. For example:

On 4 May 2023, the Competition and Markets Authority (the CMA) announced a review of competition and consumer protection considerations in the development and use of AI foundation models. One of the intentions behind the review is to assist with the production of guiding principles for the protection of consumers and to support healthy competition as technologies develop. A report on the findings is scheduled to be published in September 2023; whether this will result in legislative proposals is yet to be seen.

The UK has, of late, had a specific focus on IoT devices, following the passage of the UK's Product Security and Telecommunications Infrastructure Act in December 2022 and its recent announcement that the Product Security and Telecommunications Infrastructure (Product Security) Regime will come into effect on 29 April 2024. While IoT and AI devices of course differ, the UK's willingness to take a stance as a world leader in this space (being the first country in the world to introduce minimum security standards for all consumer products with internet connectivity) may mean that a similar focus on AI should be expected in the near future.

Our Global Products Law practice is fully across all aspects of AI regulation, product safety, compliance and potential liability risks. In part 2 of this article, we look to developments in France, the Netherlands and the US and share our thoughts around what businesses can do to get ahead of the curve to prepare for the regulation of AI around the world.


Generative AI That's Based On The Murky Devious Dark Web Might Ironically Be The Best Thing Ever, Says AI Ethics And AI Law – Forbes


In today's column, let's consider for a moment turning the world upside down.

Here's what I mean.

Generative AI such as the wildly and widely successful ChatGPT and GPT-4 by OpenAI is based on scanning data across the Internet and leveraging that examined data to pattern-match on how humans write and communicate in natural language. The AI development process also includes a lot of clean-up and filtering via a technique known as RLHF (reinforcement learning from human feedback), which seeks to either excise or at least curtail unsavory language from being emitted by the AI. For my coverage of why some people nonetheless ardently push generative AI and relish stoking hate speech and other untoward AI-generated foulness, see the link here.

When the initial scanning of the Internet takes place for data training of generative AI, the websites chosen to be scanned are generally aboveboard. Think of Wikipedia or similar kinds of websites. By and large, the text found there will be relatively safe and sane. The pattern-matching is getting a relatively sound basis for identifying the mathematical and computational patterns found within everyday human conversations and essays.

I'd like to bring to your attention that we can turn that crucial precept upside down.

Suppose that we purposely sought to use the worst of the worst that is posted on the Internet to do the data training for generative AI.

Imagine seeking out all those seedy websites that you would conventionally be embarrassed to even accidentally land on. The generative AI would be focused exclusively on this bad stuff. Indeed, we wouldn't try to somehow counterbalance the generative AI by using some of the everyday Internet and some of the atrocious Internet. Full on, we would mire the generative AI in the muck and mire of wickedness on the Internet.

What would we get?

And why would we devise this kind of twisted or distorted variant of generative AI?

Those are great questions and I am going to answer them straightforwardly. As you will soon realize, some pundits believe data training generative AI on the ugly underbelly of the Internet is a tremendous idea and an altogether brilliant strategy. Others retort that this is not only a bad idea, it could be a slippery slope that leads to AI systems that are of an evil nature and we will regret the day that we allowed this to ever get underway.

Allow me a quick set of foundational remarks before we jump into the meat of this topic.

Please know that generative AI and indeed all manner of today's AI is not sentient. Despite all those blaring headlines that claim or imply that we already have sentient AI, we don't. Period, full stop. I will later on herein provide some speculation about what might happen if someday we attain sentient AI, but that's conjecture, and no one can say for sure when or if that will occur.

Modern generative AI is based on a complex computational algorithm that has been data trained on text from the Internet. Generative AI such as ChatGPT, GPT-4, Bard, and other similar AI apps entail impressive pattern-matching that can perform a convincing mathematical mimicry of human wording and natural language. For my explanation of how generative AI works, see the link here. For my analysis of the existent doomster fearmongering regarding AI as an existential risk, see the link here.

Into all of this comes a plethora of AI Ethics and AI Law considerations.

There are ongoing efforts to imbue Ethical AI principles into the development and fielding of AI apps. A growing contingent of concerned AI ethicists are trying to ensure that efforts to devise and adopt AI take into account a view of doing AI For Good and averting AI For Bad. Likewise, there are proposed new AI laws being bandied around as potential solutions to keep AI endeavors from running amok on human rights and the like. For my ongoing and extensive coverage of AI Ethics and AI Law, see the link here and the link here, just to name a few.

The development and promulgation of Ethical AI precepts are being pursued to hopefully prevent society from falling into a myriad of AI-induced traps. For my coverage of the UN AI Ethics principles as devised and supported by nearly 200 countries via the efforts of UNESCO, see the link here. In a similar vein, new AI laws are being explored to try and keep AI on an even keel. One of the latest takes consists of a proposed AI Bill of Rights that the U.S. White House recently released to identify human rights in an age of AI; see the link here. It takes a village to keep AI and AI developers on a rightful path and deter the purposeful or accidental underhanded efforts that might undercut society.

Now that we've covered those essentials about generative AI, let's look at the seemingly oddish or scary proposition of data training generative AI on the stinkiest and most malicious content available on the web.

The Dark Web Is The Perfect Foundation For Bad Stuff

There is a part of the Internet that you might not have visited that is known as the Dark Web.

The browsers that you normally use to access the Internet are primed to explore only a small fraction of the web, known as the visible or surface web. There is a lot more content out there. Within that other content is a segment generally referred to as the Dark Web, which tends to contain all manner of villainous or disturbing content. Standard search engines do not usually index Dark Web pages. All in all, you would need to go out of your way to see what is posted on the Dark Web, doing so by using specialized browsers and other online tools to get there.
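For the technically curious, here is a minimal sketch of what such specialized access looks like in practice, assuming a local Tor client is running and exposing its default SOCKS proxy on port 9050. The .onion address shown is a made-up placeholder, and you would need the requests library installed with SOCKS support (pip install requests[socks]).

# Fetch a Dark Web page through a local Tor SOCKS proxy.
import requests

TOR_PROXY = {
    "http": "socks5h://127.0.0.1:9050",   # socks5h lets Tor resolve .onion names
    "https": "socks5h://127.0.0.1:9050",
}

# Hypothetical onion address, used purely for illustration.
url = "http://exampleonionservicexyz.onion/"

response = requests.get(url, proxies=TOR_PROXY, timeout=60)
print(response.status_code, len(response.text))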

What type of content might be found on the Dark Web, you might be wondering?

The content varies quite a bit. Some of it entails evildoers plotting takeovers or contemplating carrying out terrorist attacks. Drug dealers find the Dark Web very useful. You can find criminal cyber hackers sharing tips about how to overcome cybersecurity precautions. Conspiracy theorists tend to like the Dark Web since it is a more secretive arena in which to discuss their theories. And so on.

I'm not saying that the Dark Web is all bad, but at least be forewarned that it is truly the Wild West of the Internet, and just about anything goes.

In a research paper entitled "Dark Web: A Web of Crimes," there is a succinct depiction of the components of the Internet and the role of the Dark Web, as indicated by these two excerpts:

I realize it is perhaps chilling to suddenly discover that there is an entire segment of the Internet that you didn't know existed and that it is filled with abysmal content. Sorry to be the bearer of such gloomy news.

Maybe this will cheer you up.

The Dark Web is seemingly the ideal source of content to train generative AI if you are of the mind that data training on the worst of the worst is a worthwhile and productive endeavor. Rather than having to bend over backward to find atrocious content on the conventional side of the Internet (admittedly, there is some of that there too), make use of a specialized web crawler aimed at the Dark Web and you can find a treasure trove of vile content.
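As a rough sketch of what such a specialized crawler might look like, consider the following. It reuses the hypothetical Tor proxy setup from the earlier snippet, the seed address is again a placeholder, and it needs beautifulsoup4 installed alongside requests.

# Breadth-first crawl of onion pages, accumulating raw text for a corpus.
from collections import deque
from urllib.parse import urljoin
import requests
from bs4 import BeautifulSoup

PROXIES = {"http": "socks5h://127.0.0.1:9050", "https": "socks5h://127.0.0.1:9050"}

def crawl(seed, max_pages=25):
    seen, queue, corpus = {seed}, deque([seed]), []
    while queue and len(corpus) < max_pages:
        url = queue.popleft()
        try:
            page = requests.get(url, proxies=PROXIES, timeout=60)
        except requests.RequestException:
            continue  # dead onion services are common; skip and move on
        soup = BeautifulSoup(page.text, "html.parser")
        corpus.append(soup.get_text(separator=" ", strip=True))
        for link in soup.find_all("a", href=True):
            nxt = urljoin(url, link["href"])
            if ".onion" in nxt and nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return corpus

# corpus = crawl("http://exampleonionservicexyz.onion/")  # hypothetical seed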

Easy-peasy.

I know that I haven't yet explained why data training generative AI on the Internet's ugly underbelly is presumably useful, so let's get to that next. At least we now know that plentiful content exists for such a purpose.

What Does Dark Web-Trained Generative AI Provide?

I'll give you a moment to try and brainstorm some bona fide reasons for crafting generative AI that is based on foulness.

Any ideas?

Well, here's what some already proclaim are useful reasons:

Any discussion about the Dark Web should be careful to avoid pegging the Dark Web as exclusively a home of evildoers. There are various justifications for having a Dark Web.

For example, consider this listing by the researchers mentioned earlier:

Given those positive facets of the Dark Web, you could argue that having generative AI trained on the Dark Web would potentially further aid those benefits. For example, enabling more people to find scarce products or discover content that has been entered anonymously out of fear of governmental reprisals.

In that same breath, you could also decry that the generative AI could severely and lamentably undercut those advantages by providing a means for, say, government crackdowns on those dissenting from government oppression. Generative AI based on the Dark Web might have a whole slew of unanticipated adverse consequences, including putting at risk innocent people who were otherwise using the Dark Web for ethically or legally sound purposes.

Ponder seriously and soberly whether we do or do not want generative AI that is based on the Dark Web.

The good news or bad news is that we already have that kind of generative AI. You see, the horse is already out of the barn.

Let's look at that quandary next.

The DarkGPT Bandwagon Is Already Underway

A hip thing to do involves training generative AI on the Dark Web.

Some who do so have no clue as to why they are doing so. It just seems fun and exciting. They get a kick out of training generative AI on something other than what everyone else has been using. Others intentionally train generative AI on the Dark Web with a particular purpose, usually falling within one or more of the camps of reasons I gave in the prior subsection.

All of this has given rise to a bunch of generative AI apps generically referred to as DarkGPT. I say generically because there are lots of these DarkGPT monikers floating around. Unlike a bona fide trademarked name such as ChatGPT, which has spawned all kinds of GPT naming variations (I discuss the legal underpinnings of the trademark at the link here), the catchphrase or naming of DarkGPT is much more loosey-goosey.

Watch out for scams and fakes.

Here's what I mean. You are curious to play with a generative AI that was trained on the Dark Web. You do a cursory search for anything named DarkGPT or DarkWebGPT or any variation thereof. You find one. You decide to try it out.

Yikes, it turns out that the app is malware. You have fallen into a miserable trap. Your curiosity got the better of you. Please be careful.

Legitimate Dark Web Generative AI

Next, I'll highlight a generative AI that was trained on the Dark Web, serves as a quite useful research-oriented exemplar, and can be a helpful model for other akin pursuits.

The generative AI app is called DarkBERT and is described in a research paper entitled "DarkBERT: A Language Model for the Dark Side of the Internet" by researchers Youngjin Jin, Eugene Jang, Jian Cui, Jin-Woo Chung, Yongjae Lee, and Seungwon Shin (posted online on May 18, 2023). Here are some excerpted key points from their study:

Let's briefly examine each of those tenets.

First, the researchers indicated that they were able to craft a Dark Web-based instance of generative AI with natural-language fluency comparable to that of a generative AI trained on the conventionally visible web. This is certainly encouraging. If they had reported that their generative AI was less capable, the implication would be that we might not readily be able to apply generative AI to the Dark Web. That would have meant that efforts to do so would be fruitless, or that some as-yet-unknown AI-tech innovation would be required to sufficiently do so.

The bottom line is that we can proceed to apply generative AI to the Dark Web and expect to get responsive results.

Second, it would seem that a generative AI solely trained on the Dark Web is likely to do a better job of pattern-matching the Dark Web than would a generative AI that was partially data-trained on the conventional web. Remember that earlier I mentioned we might consider data training generative AI on a mix of both the conventional web and the Dark Web. We can certainly do so, but this result seems to suggest that making queries and using the natural-language facility of a Dark Web-specific generative AI is better suited to the task than a mixed model would be (there are various caveats and exceptions, thus this is an open research avenue).

Third, the research closely examined the cybersecurity merits of having a generative AI that is based on the Dark Web, namely being able to detect or uncover potential cyber hacks brewing on the Dark Web. That the generative AI seemed especially capable in this realm is a plus for those fighting cybercriminals. You can consider using Dark Web data-trained generative AI to pursue wrongdoers aiming to commit cybercrimes.
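To give a flavor of that cybersecurity use case, here is a toy stand-in: a tiny text classifier trained on a few hand-labeled snippets to flag hacking-related content. DarkBERT itself is a pretrained transformer, not a TF-IDF model, so treat this purely as an illustrative sketch using scikit-learn.

# Toy classifier flagging threat-related text (illustrative only).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

snippets = [
    "selling fresh credit card dumps, bulk discounts",   # threat-related
    "zero-day exploit kit for sale, contact via PGP",    # threat-related
    "book club meeting notes and reading list",          # benign
    "gardening tips and seasonal recipes",               # benign
]
labels = [1, 1, 0, 0]  # 1 = threat-related, 0 = benign

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(snippets, labels)

# Any flagged hit should be verified by a human analyst before acting on it.
print(model.predict(["ransomware affiliate program now open"]))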

You might be somewhat puzzled as to why the name of their generative AI is DarkBERT rather than invoking the now-classic acronym GPT (generative pre-trained transformer). The BERT acronym is particularly well-known amongst AI insiders as the name of a language model devised by Google, coined BERT (bidirectional encoder representations from transformers). I thought you might like a smidgeon of AI insider terminology to clear up that possibly vexing mystery.
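If you want to see the BERT family in action, the publicly available bert-base-uncased model can fill in a masked word using context from both sides, which is exactly the bidirectional trick in the name. A quick demonstration using the Hugging Face transformers library (pip install transformers torch):

# Masked-word prediction with a standard BERT model.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-uncased")
for candidate in unmasker("The dark web is a hidden part of the [MASK]."):
    print(candidate["token_str"], round(candidate["score"], 3))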

A quick comment overall before we move on. Research about generative AI and the Dark Web is still in its infancy. You are highly encouraged to jump into this evolving focus. There are numerous technological questions to be addressed. In addition, there is a plethora of deeply intriguing and vital AI Ethics and AI Law questions to be considered.

Of course, you'll need to be willing to stomach the stench or dreadful aroma that generally emanates from the Dark Web. Good luck with that.

When Generative AI Is Bad To The Bone

I've got several additional gotchas and thought-provoking considerations for you on this topic.

Let's jump in.

We know that conventional generative AI is subject to producing errors, along with emitting falsehoods, producing biased content, and even making up stuff (so-called AI hallucinations, a catchphrase I deplore for the reasons explained at the link here). These maladies are a bone of contention when it comes to using generative AI in any real-world setting. You have to be careful in interpreting the results. The generated essays and interactive dialogues could be replete with misleading and misguided content produced by the generative AI. Efforts are hurriedly underway to try and bound these problematic concerns; see my coverage at the link here.

Put on your thinking cap and get ready for a twist.

What happens if generative AI that is based on the Dark Web encounters errors, falsehoods, biases, or AI hallucinations?

In a sense, we are in the same boat as with the issues confronting conventional generative AI. The Dark Web generative AI might showcase an indication that seems to be true but is an error or falsehood. For example, you decide to use Dark Web data-trained generative AI to spot a cyber crook. The generative AI tells you that it found a juicy case on the Dark Web. Upon further investigation with other specialized browsing tools, you discover that the generative AI made that accusation falsely.

Oops, not cool.

We need to always keep our guard up when it comes to both conventional generative AI and Dark Web-based generative AI.

Here's another intriguing circumstance.

People have been trying to use conventional generative AI for mental health advice. I've emphasized that this is troublesome for a host of disconcerting reasons; see my analysis at the link here and the link here, just to name a few. Envision that a person is using conventional or clean generative AI for personal advice about something, and the generative AI emits an AI hallucination telling the person to act in a dangerous or unsuitable manner. I'm sure you can see the qualms underlying this use case.

A curious and serious parallel would be if someone opted to use a Dark Web-based generative AI for mental health advice. We might assume that this baddie generative AI is likely to generate foul advice from the get-go.

Is it bad advice that would confuse and confound evildoers? I suppose we might welcome that possibility. Maybe it is bad advice in the sense that it is actually good advice from the perspective of a wrongdoer. Generative AI might instruct the evildoer on how to better achieve evil deeds. Yikes!

Or, in a surprising and uplifting consideration, might there be some other mathematical or computational pattern-matching contrivance that manages to rise above the flotsam used during the data training? Could there be lurking within the muck a ray of sunshine?

A bit dreamy, for sure.

More research needs to be done.

Speaking of doing research and whatnot, before you run out and start putting together a generative AI instance based on the Dark Web, you might want to check out the licensing stipulations of the AI app. Most of the popular generative AI apps carry a variety of keystone restrictions. People using ChatGPT, for example, are typically unaware of a bunch of prohibited uses.

For example, as I've covered at the link here, you cannot do this with ChatGPT:

If you were to develop a generative AI based on the Dark Web, you presumably might violate those kinds of licensing stipulations, depending on whichever generative AI app you decide to use. On the other hand, one supposes that as long as you use the generative AI for purposes of good, such as trying to ferret out evildoers, you would potentially be working within the stated constraints of the licensing. This is all a legal head-scratcher.

One final puzzling question for now.

Will we have bad-doers who purposely devise or seek out generative AI that is based on the Dark Web, hoping to use the generative AI to further their nefarious pursuits?

I sadly note that the answer is assuredly yes, this is going to happen and is undoubtedly already happening. AI tools tend to have a dual-use capability, meaning that you can turn them toward goodness and yet also turn them toward badness; see my discussion on AI-based Dr. Evil projects at the link here.

Conclusion

To end this discussion of Dark Web-based generative AI, I figured we might take a spirited wooded hike into the imaginary realm of postulated sentient AI. Sentient AI is also nowadays referred to as Artificial General Intelligence (AGI). For a similar merry romp into a future of sentient AI, see my discussion at the link here.

Sit down for what I am about to say next.

If the AI of today is eventually heading toward sentient AI or AGI, are we setting ourselves up for a devil of a time by right now proceeding to create instances of generative AI that are based on the Dark Web?

Here's the unnerving logic. We introduce generative AI to the worst of the worst of humankind. The AI pattern-matches on it. A sentient AI would presumably have this within its reach. The crux is that this could become the keystone for how the sentient AI or AGI decides to act. By our own hand, we are creating a foundation showcasing the range and depth of humanity's evildoing and displaying it in all its glory for the AGI to examine or use.

Some say it is the perfect storm for making a sentient AI that will be armed to wipe out humankind. Another related angle is that the sentient AI will be so disgusted by this glimpse into humankind that the AGI will decide it is best to enslave us. Or maybe wipe us out, doing so with plenty of evidence as to why we ought to go.

I don't want to conclude on a doom-and-gloom proposition, so give me a chance to liven things up.

Turn this unsettling proposition on its head.

Because the sentient AI is able to readily see the worst of the worst about humanity, the AGI can use this to identify how to avoid becoming the worst of the worst itself. Hooray! You see, by noting what should not be done, the AGI will be able to identify what ought to be done. We are essentially doing ourselves a great service. The crafting of Dark Web-based generative AI will enable AGI to fully discern what is evil versus what is good.

We are cleverly saving ourselves by making sure that sentient AI is up to par on good versus evil.

Marcus Tullius Cicero, the famed Roman statesman, said this: "The function of wisdom is to discriminate between good and evil." Perhaps by introducing AI to both the good and evil of humankind, we are setting ourselves up for a wisdom-based AGI that will be happy to keep us around. Maybe it will even help us steer toward being good more than we are evil.

That's your happy ending for the saga of the emergent sentient AI. I trust that you will now be able to get a good night's sleep on these weighty matters. Hint: Try to stay off the Dark Web to get a full night's slumber.

Dr. Lance B. Eliot is a world-renowned expert on Artificial Intelligence (AI) with more than 6.8 million amassed views of his AI columns. As a seasoned executive and high-tech entrepreneur, he combines practical industry experience with deep academic research. Previously a professor at USC and UCLA, and head of a pioneering AI lab, he frequently speaks at major AI industry events. Author of over 50 books, 750 articles, and 400 podcasts, he has appeared on media outlets such as CNN and co-hosted the popular radio show Technotrends. He has been an adviser to Congress and other legislative bodies and has received numerous awards and honors. He serves on several boards and has worked as a venture capitalist, an angel investor, and a mentor to founder entrepreneurs and startups.

Original post:

Generative AI That's Based On The Murky Devious Dark Web Might Ironically Be The Best Thing Ever, Says AI Ethics And AI Law - Forbes

Read More..

Art Crowdfunding Market Report to 2030 Industry Demand … – Cottonwood Holladay Journal

New Jersey, United States – The Global Art Crowdfunding Market report provides a comprehensive analysis by blending in-depth qualitative and quantitative insights. It covers a wide range of topics, including a macro overview of Global Art Crowdfunding Market dynamics, industry structure, market size, and micro-level details segmented by type and application. This report conducts a thorough analysis of the Global Art Crowdfunding Market, taking into account various influencing factors. It highlights notable market changes and challenges that companies and competitors need to overcome, while also capturing future trends and market opportunities.

The research in this report explores the Global Art Crowdfunding Market size (value, capacity, production, and consumption) across key regions such as North America, Europe, Asia Pacific (China, Japan), and others. It categorizes the Global Art Crowdfunding Market data by manufacturer, region, type, and application. Additionally, the report analyzes the market status, market share, growth rate, future trends, market drivers, opportunities, challenges, risks, entry barriers, sales channels, and distributors, and incorporates Porter's Five Forces Analysis.

Get a Full PDF Sample Copy of the Report (Including Full TOC, List of Tables & Figures, and Charts) @ https://www.verifiedmarketresearch.com/download-sample/?rid=59131

Key Players Mentioned in the Global Art Crowdfunding Market Research Report:

Kickstarter, Patreon, ArtistShare, GoFundMe, Artboost, Ulule, Art Happens, Wishberry, Indiegogo, Seed Spark.

Covering key market players, consumer buying habits, and sales strategies, this Global Art Crowdfunding market report offers insights into the dynamic market's potential growth prospects in the coming years. It also provides an analysis of market factors, including sales strategies, major players, and investment opportunities. Understanding customer buying habits becomes crucial for significant firms planning to launch new products, and this market study enables a quick examination of the global market position. Moreover, it includes valuable information on key contributors, company strategies, consumer demand, customer behavior improvements, detailed sales data, and customer purchasing habits.

Global Art Crowdfunding Market Segmentation:

Art Crowdfunding Market, By Type

5% Free, 4% Free, 3% Free

Art Crowdfunding Market, By Application

Films, Music, Stage Shows

Inquire for a Discount on this Premium Report @ https://www.verifiedmarketresearch.com/ask-for-discount/?rid=59131

What to Expect in Our Report?

(1) A complete section of the Global Art Crowdfunding market report is dedicated to market dynamics, including influencing factors, market drivers, challenges, opportunities, and trends.

(2) Another broad section of the research study is reserved for regional analysis of the Global Art Crowdfunding market, where important regions and countries are assessed for their growth potential, consumption, market share, and other vital factors indicating their market growth.

(3) Players can use the competitive analysis provided in the report to build new strategies or fine-tune their existing ones to rise above market challenges and increase their share of the Global Art Crowdfunding market.

(4) The report also discusses the competitive situation and trends and sheds light on company expansions and mergers and acquisitions taking place in the Global Art Crowdfunding market. Moreover, it brings to light the market concentration rate and the market shares of the top three and top five players.

(5) Readers are provided with the findings and conclusions of the research study in the Global Art Crowdfunding Market report.

Key Questions Answered in the Report:

(1) What are the growth opportunities for the new entrants in the Global Art Crowdfunding industry?

(2) Who are the leading players functioning in the Global Art Crowdfunding marketplace?

(3) What are the key strategies participants are likely to adopt to increase their share in the Global Art Crowdfunding industry?

(4) What is the competitive situation in the Global Art Crowdfunding market?

(5) What are the emerging trends that may influence the Global Art Crowdfunding market growth?

(6) Which product type segment will exhibit a high CAGR in the future?

(7) Which application segment will grab a handsome share in the Global Art Crowdfunding industry?

(8) Which region is lucrative for the manufacturers?

For More Information or Query or Customization Before Buying, Visit @ https://www.verifiedmarketresearch.com/product/art-crowdfunding-market/

About Us: Verified Market Research

Verified Market Research is a leading global research and consulting firm that has been providing advanced analytical research solutions, custom consulting, and in-depth data analysis for 10+ years to individuals and companies alike that are looking for accurate, reliable, and up-to-date research data and technical consulting. We offer insights into strategic and growth analyses, the data necessary to achieve corporate goals, and help in making critical revenue decisions.

Our research studies help our clients make superior data-driven decisions, understand market forecasts, capitalize on future opportunities, and optimize efficiency by working as their partner to deliver accurate and valuable information. The industries we cover span a large spectrum, including Technology, Chemicals, Manufacturing, Energy, Food and Beverages, Automotive, Robotics, Packaging, Construction, and Mining & Gas.

We, at Verified Market Research, assist in understanding holistic market indicating factors and most current and future market trends. Our analysts, with their high expertise in data gathering and governance, utilize industry techniques to collate and examine data at all stages. They are trained to combine modern data collection techniques, superior research methodology, subject expertise and years of collective experience to produce informative and accurate research.

Having served more than 5,000 clients, we have provided reliable market research services to more than 100 Global Fortune 500 companies such as Amazon, Dell, IBM, Shell, Exxon Mobil, General Electric, Siemens, Microsoft, Sony, and Hitachi. We have co-consulted with some of the world's leading consulting firms, such as McKinsey & Company, Boston Consulting Group, and Bain & Company, on custom research and consulting projects for businesses worldwide.

Contact us:

Mr. Edwyne Fernandes

Verified Market Research

US: +1 (650)-781-4080
UK: +44 (753)-715-0008
APAC: +61 (488)-85-9400
US Toll-Free: +1 (800)-782-1768

Email: sales@verifiedmarketresearch.com

Website: https://www.verifiedmarketresearch.com/

Read the original post:
Art Crowdfunding Market Report to 2030 Industry Demand ... - Cottonwood Holladay Journal

Read More..