
Bitcoin Cash (BCH) is now home to an innovative leverage trading app – Cointelegraph

Decentralized finance (DeFi) is one of the largest sectors in crypto. The applications within the DeFi space consist of decentralized trading, borrowing and lending, and many more financial services. However, even with its rising popularity, the sector has many obstacles to overcome before mass adoption is possible.

The growth of DeFi's popularity was quick and volatile, in keeping with the overall development of the crypto industry. At the peaks of 2021, the sector saw almost $180 billion in total value locked across the many protocols that still exist today. Of course, the DeFi space was also hit by the crypto winter.

Recently, DeFi stood out in the market upturn during the first quarter of 2023. The DeFi space rose by $29.6 billion in value, outpacing the performances of major asset classes like gold and oil.

The popularity of DeFi can be attributed to the increasing amount of decentralized applications (DApps) and the flow of users from centralized to decentralized exchanges, among other reasons. However, while the increasing popularity is a positive development, there are some caveats.

One of the caveats is the rising transaction costs once more users start using a network. Investors experienced this during the bull market in 2021; the Ethereum (ETH) network saw a large influx of transactions, mainly happening within the DeFi space. The result was steeply rising transaction costs on the network. Sending crypto became so expensive that users faced a significant barrier to interacting with DeFi DApps.

Another risk is that of vulnerabilities such as smart contract backdoor keys, massive centralization in single contracts, and counterparty risks of custodial stablecoins. With numerous bridge hacks plaguing the space and failures of algorithmic protocols such as Terra, using DeFi DApps is certainly not without risk. However, the industry keeps evolving, with many new platforms aiming to address the issues and challenges current DApps deal with.

Thanks to continuous development since its foundation in 2017, Bitcoin Cash (BCH) can also serve as a vibrant environment for smart contract deployment and the creation of DeFi DApps on its UTXO mainchain. One new project that has recently launched is BCH Bull.

With the help of the AnyHedge protocol, BCH Bull lets users create long or hedge positions on several assets, such as the United States dollar, Bitcoin (BTC) and gold. Users can even add leverage to their on-chain trades. Roger Ver, a well-known Bitcoin Cash supporter and early investor in AnyHedge, said about this use case: "Allowing people to permissionlessly lock the value of their Bitcoin Cash to the price of external legacy currencies is an incredibly useful tool for people who don't want to deal with cryptocurrency price volatility. That's why I chose to invest in AnyHedge."
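The idea of locking BCH value to an external price can be illustrated with a simplified settlement calculation. This is only a sketch under assumed numbers; the real AnyHedge contracts involve oracle price messages, fees, and liquidation rules that are omitted here, and the function below is purely hypothetical:

```python
def settle(nominal_usd, entry_price, settle_price, long_bch):
    """Split a contract's locked BCH between the hedge and long sides.

    The hedge side funds nominal_usd worth of BCH at the entry price and
    always redeems that same USD value at settlement; the long side funds
    long_bch as collateral and receives whatever remains.
    """
    total_bch = nominal_usd / entry_price + long_bch  # BCH locked at entry
    hedge_bch = nominal_usd / settle_price            # fixed USD value, in BCH
    long_bch_out = max(total_bch - hedge_bch, 0.0)    # long absorbs the move
    return hedge_bch, long_bch_out

# Price rises from 200 to 250 USD/BCH: the hedge side still redeems exactly
# 1,000 USD worth of BCH, and the long side keeps the upside.
hedge, long_side = settle(nominal_usd=1000, entry_price=200,
                          settle_price=250, long_bch=5)
print(hedge, long_side)  # 4.0 BCH and 6.0 BCH
```

A price drop works symmetrically: the hedge side's BCH payout grows until it exhausts the long side's collateral, which is why leveraged positions on such platforms are typically bounded by a liquidation price.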

Source: BCH Bull

The main difference from comparable apps on Ethereum is that each trade has its own independent smart contract. Once two traders agree on the terms, the smart contract is initiated. This eliminates the centralized security risks that come with shared smart contracts.

Furthermore, the UTXO-based protocol of Bitcoin Cash keeps transaction fees low and makes the chain scalable, meaning fees don't increase even as transaction volume on the network grows.

Since October 2022, BCH Bull has been in beta, during which it has already created and redeemed over 3,000 smart contracts. The project was released into full production mode this month. Its growing user base can now initiate contracts of up to 90 days at two to three times the previous contract size, enjoying the security and scalability that Bitcoin Cash offers.

Disclaimer. Cointelegraph does not endorse any content or product on this page. While we aim to provide you with all the important information we could obtain in this sponsored article, readers should do their own research before taking any action related to the company and carry full responsibility for their decisions; nor can this article be considered investment advice.


Best AI Tokens in 2023 – Invest to be a billionaire? – CryptoTicker.io – Bitcoin Price, Ethereum Price & Crypto News

Artificial intelligence (AI) has emerged as a revolutionary technology with the power to reshape industries across the globe. From healthcare and education to finance and entertainment, AI is transforming the way we live and work. As the AI revolution continues to gain momentum, one way to participate and potentially benefit from its opportunities is through investing in AI tokens, which are digital assets that represent ownership in AI projects or platforms. In this article, we will introduce you to some of the best AI tokens to consider for investment in 2023, providing insights into their value propositions and potential for growth.

The AGI token is the native cryptocurrency of SingularityNET, a decentralized platform for artificial intelligence (AI) services. It is used to pay for the AI services offered by the platform and to reward the AI agents that provide those services. AGI is also used for governance, allowing token holders to vote on the development and direction of the platform. The project aims to create a global network of AI agents that can collaborate and exchange value with each other, fostering the emergence of artificial general intelligence (AGI).

AGIX price chart. Source: GoCharting

The FET token is the native cryptocurrency of FETCH.AI, a decentralized platform that aims to connect various agents and devices in a network of autonomous economic agents (AEAs). It is used to power the transactions and computations on the platform and to incentivize participants to contribute their data and resources. FET is also designed to enable interoperability and scalability among different blockchains and smart contracts. The token has a fixed supply of 1.152 billion, of which 20% is reserved for the team and advisors, 17.6% for the foundation, 12.4% for ecosystem development, and 50% for the public sale.
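As a quick sanity check, those percentage shares can be converted into absolute token counts; the figures below simply restate the allocation quoted above:

```python
TOTAL_SUPPLY = 1_152_000_000  # fixed supply of 1.152 billion FET

allocation = {
    "team and advisors": 0.20,
    "foundation": 0.176,
    "ecosystem development": 0.124,
    "public sale": 0.50,
}

# Convert each percentage share into an absolute number of tokens.
tokens = {name: share * TOTAL_SUPPLY for name, share in allocation.items()}
for name, amount in tokens.items():
    print(f"{name}: {amount:,.0f} FET")

# The four shares account for the entire fixed supply.
assert abs(sum(allocation.values()) - 1.0) < 1e-9
```

The public sale alone accounts for 576 million FET, with the team and advisor share at roughly 230 million.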

FET price chart. Source: GoCharting

OCEAN is the native token of Ocean Protocol, a decentralized platform that aims to unlock the value of data by facilitating data sharing and monetization. OCEAN is used to buy and sell data services on the Ocean Protocol network and to govern the protocol's parameters. It is an ERC-20 token that can be traded on various cryptocurrency exchanges. Ocean Protocol leverages blockchain technology, smart contracts, and data tokens to enable a new data economy where data owners and consumers can interact in a secure and transparent way.

OCEAN price chart. Source: GoCharting

NMR is the native token of Numerai, a decentralized hedge fund that crowdsources machine learning models from data scientists worldwide. Numerai employs encryption and blockchain to create a trustless and collaborative environment for data analysis and prediction. NMR holders can use the token to participate in Numerai's tournaments, submit their models, and earn rewards based on their performance, potentially benefiting from the success of the hedge fund.

NMR price chart. Source: GoCharting

ROSE is the native token of Oasis Network, a privacy-enabled blockchain platform that supports scalable and confidential computation. Oasis Network enables developers to build applications that leverage secure data while preserving user privacy. ROSE holders can use the token to pay for transactions and computation on the network, stake it to secure the network and earn rewards, and potentially benefit from the adoption of the platform.

ROSE price chart. Source: GoCharting

It's important to note that investing in AI tokens, like any cryptocurrency investment, comes with risks and uncertainties. The cryptocurrency market is known for its volatility, and it's crucial to do your own research, understand the vision, goals, features, risks, and challenges of each AI project or platform, and only invest what you can afford to lose. Seek professional financial advice if needed.

In conclusion, investing in AI tokens can be a potentially lucrative opportunity for those looking to participate in the AI revolution. The best AI tokens in 2023, such as AGI, FET, OCEAN, NMR, and ROSE, offer unique value propositions and have the potential for growth. However, it's crucial to exercise caution, do thorough research, and understand the risks before making any investment decisions. With careful consideration and strategic investment, you may unlock the potential of AI investments and pave the way towards financial success in the ever-evolving world of AI technology.



Innovation Bootcamp Unveiled by BNB Chain – BSC NEWS

The initiative is looking to appeal to a broad range of developers, including student developers pursuing computer science or related fields, Web2 developers with traditional development experience, and Web3 developers already familiar with the blockchain ecosystem.

BNB Chain has recently launched a new initiative to promote the development of Web3 applications in different parts of the world.

The BNB Chain Bootcamp, announced on April 24th via the BNBChain blog, is a program designed to equip developers with the necessary skills to build innovative projects on the BNB Chain platform.

The program is open to different categories of developers, including student developers pursuing computer science or related fields, Web2 developers with traditional development experience, and Web3 developers already familiar with the blockchain ecosystem.

The initiative is described as a global open program aimed at bringing the best industry talent to different regions to train and mentor developers on the latest Web3 development technologies. The program seeks to create a vibrant community of developers who can collaborate on innovative projects and share their experiences. Participants in the BNB Chain Bootcamp will have the opportunity to work on real-world projects, learn from industry experts, and gain access to resources and tools to help them succeed.

The Bootcamp is set to take place over a six-week period starting on May 14th, with weekly sessions scheduled every Friday. In week one, the bootcamp will kick off with an introduction to blockchain and its use-cases, followed by a deep dive into the BNB Chain ecosystem. In week two, participants will learn about BNB Chain architecture, and the dev tools landscape.

Week three will cover smart contract development on BNB Chain, while week four will focus on interacting with deployed smart contracts and tokenization. Week five will explore bridging Web2 to Web3, and the final week will focus on the future of BNB Chain and provide guidelines for a smart contract competition. The bootcamp will feature mentorship sessions in several languages to ensure that participants can fully engage with the curriculum.

Overall, this six-week program offers an excellent opportunity for developers to gain a comprehensive understanding of BNB Chain and develop their skills in Web3 development.

The development is an interesting opportunity for developers to accelerate their Web3 development skills and work on innovative projects with a global community of like-minded individuals. The program's comprehensive curriculum covers a range of topics related to Web3 development and includes mentorship sessions in several languages to ensure that information is properly delivered.

The BNB Chain Bootcamp is an exciting development for the blockchain ecosystem, as it aims to empower developers with the necessary skills to build the next generation of Web3 applications. By providing developers with the opportunity to network with other developers and industry leaders, the program can help to create a vibrant and collaborative community of developers working towards a common goal.

Previously known as Binance Smart Chain (BSC), BNB Chain is a community-driven, decentralized, and censorship-resistant blockchain powered by Binance. It consists of the BNB Beacon Chain and the EVM-compatible BNB Smart Chain, facilitating a multi-chain ecosystem. Through the concept of MetaFi, BNB Chain aims to build the infrastructure to power the world's parallel virtual ecosystem.



Neither good nor bad: Wyoming higher ed weighs rise of artificial … – Casper Star-Tribune

When OpenAI's ChatGPT burst into public view last fall, it sent ripples through higher education in Wyoming.

The state's community colleges and the University of Wyoming quickly had to reckon with a technology that could write essays and answer assignments.

Action from UW was swift. President Ed Seidel set up an Artificial Intelligence Chatbots Working Group that weeks later delivered a set of recommendations, including an update to the school's cheating policies.

The university also left the question of artificial intelligence open-ended, allowing teachers to decide if and how they want to use the technology.

Casper College and Central Wyoming College have so far refrained from taking schoolwide steps, instead relying on teachers to dictate the technology's use in their classrooms while building broader conversations around artificial intelligence.


For those in higher education, the issue of artificial intelligence is nuanced. It is neither good nor bad. It is not the end of education, nor is it a lasting replacement for learning.

But as Wyomings university and community colleges begin to grapple with artificial intelligence, common sources of optimism and worries are beginning to emerge.

Reacting to new technology

Of the three schools, UW has fielded the strongest institutional response.

In January, Seidel announced the Artificial Intelligence Chatbots Working Group just a few months after OpenAI released ChatGPT. He asked the group to consider any policy changes and other measures the university might need to take in light of ChatGPT and other chatbots, which can answer complex questions and simulate humanlike conversations using computer algorithms trained to recognize, summarize and predict words and text.

Students walk on the quad between classes on Wednesday at the University of Wyoming.

A team of faculty led by Anne Alexander, UW's vice provost for strategic planning and initiatives, and Rene Laegreid, the chair of the UW Faculty Senate and a professor of history, produced a report just three weeks later.

Among its recommendations, the group suggested the school update its student academic dishonesty and cheating policies to ban the unpermitted use of artificial intelligence.

While an acknowledgement of the potential risks that the technology poses, it also left the decision to teachers. They would be the ones to decide if ChatGPT and artificial intelligence would be permitted in their classrooms.

That decision stemmed in part from the recognition from those in the working group that artificial intelligence has benefits alongside drawbacks.

"It's like saying that a calculator is bad. It's like saying that a browser is bad. It's like saying that a search engine is bad. They're not. They're just tools," Alexander, who is also an economics professor, said.

"We wanted to make it clear at the very outset that there's not going to be a right answer for UW and probably not for higher ed."

In the place of a blanket policy, UW has leaned on the Ellbogen Center for Teaching and Learning and faculty like Rick Fisher, who directs Communication across the Curriculum, a branch of the university that provides guidance and support for teachers, to hold workshops and discussions that educate faculty about the technology. In turn, those teachers can decide how they want to approach artificial intelligence in their courses, sanctioning its use or banning it.

Casper College and Central Wyoming College have taken similar approaches.

They have not convened working groups or instituted policy changes, but their faculty have begun to hold discussions about the technology.

"We as a college looked to them to help guide us through the next step based on what they wanted," said Kathy Wells, vice president for academic affairs at Central Wyoming College. "They are the frontline."

For both colleges, teachers have been the decision makers. They have decided how the technology will be used on a case-by-case basis. Their response has been mixed, as it has at UW.

In areas like the visual arts and technical education, concerns among faculty have been minimal, Wells said. The counterpoint has been online courses and those heavy on writing, where language-based artificial intelligence has the power to upend learning.

Casper College's administration and its faculty have only begun to work through some of the pressing questions that will influence how artificial intelligence will or will not be used in the classroom. It's not just a question of whether the technology will be used, but also the extent to which it will be, said Brandon Kosine, Casper College's vice president of academic affairs.

"We're going to have to work with the speed of industry to make sure that we're finding that balance with them, so that we're not overstepping or understepping our industry partners," Kosine said. "That goes for transfer students, too."

Optimism and worries

If there's one thing that keeps higher education teachers and curriculum leaders up at night, it's student learning. Suffice it to say, artificial intelligence will have an impact. But what that will be in Wyoming higher education remains to be seen.

A classroom in the Arts and Sciences building is pictured on Wednesday at the University of Wyoming.

The first few months of national media coverage following the release of ChatGPT have focused on its detriment to education. An associate dean at Oregon State University wrote an opinion piece in the digital publication Inside Higher Ed that said ChatGPT was a crisis for education. Others have compared artificial intelligence to a plague or releasing a genie from a bottle.

Humans have consistently responded to new technological developments in language with fear, Fisher, who also teaches English and writing at UW, said. Pencils with erasers were a scourge in Henry David Thoreau's time.

"When writing was invented there were the same critiques in some ways that we're having now," Fisher said. "Writing is its own form of technology, and I think that we've always been afraid of what we set outside our own brains."

Though it removes some of the thinking, artificial intelligence also has the potential to improve higher education, most notably by advancing equity in learning. Think about dyslexia, a learning disability that makes it difficult for a student to read and often adds challenges with writing.

"If you are a student with dyslexia or any other kind of learning challenge – if you're unable to take a thought and turn it into something cogent that's very compelling to read – oh my gosh, what a great starting point?" Alexander said.

ChatGPT and artificial intelligence could boost STEM education, helping students to code and visualize their research projects. And for teachers, it could mean greater efficiency in administrative tasks and could move instruction toward more challenging subjects.

Google CEO Sundar Pichai shared plans to integrate conversational artificial intelligence features into its flagship search engine. Advances in AI would supercharge Google's ability to answer an array of search queries, Pichai said in an interview with the Wall Street Journal.

Yet, those leading the way at the states university and community colleges find more concerns than clear benefits.

Preparing for a workshop, Fisher asked ChatGPT to pull up citations about best practices in education design. It came up with citations that looked right with real authors and journals, but fake titles.

"It also generated a one-sentence summary of the article that did not exist in the world," Fisher said. "That's a level of fabrication that seems problematic."

For Fisher, it goes deeper. Language is more than communication, he said. It's how we process information, how we work through ideas and difficult subjects; learning and language go hand in hand. A technology that could change how we use and understand language has the potential to change how we learn.

"The role of language in education is really, really important beyond just as a way for students to demonstrate what they know," Fisher said. "That's the exciting opportunity and threat of this moment: having to be confronted with the sort of rethinking and reworking of some of the things that we've maybe implicitly believed or understood about language."

There are other issues besides cheating, such as intellectual property rights and information literacy since current chatbots have a habit of being inaccurate.

As a licensed counselor and the former dean of Casper College's School of Social and Behavioral Sciences, Kosine worries about the ethical implications, the unintended consequences and the responsible use of artificial intelligence in higher education.

Kosine said that the rise of ChatGPT only makes the colleges teaching of critical thinking and other essential skills more important.

"At the end of the day, I think that we have to keep doing what we do: try to teach students to love learning, so that they don't want to rely on technology to do the learning or the output for them," he said.



Director Chopra's Prepared Remarks on the Interagency … – Consumer Financial Protection Bureau

In recent years, we have seen a rapid acceleration of automated decision-making across our daily lives. Throughout the digital world and throughout sectors of the economy, so-called artificial intelligence is automating activities in ways previously thought to be unimaginable.

Generative AI, which can produce voices, images, and videos designed to simulate real-life human interactions, is raising the question of whether we are ready to deal with the wide range of potential harms, from consumer fraud to privacy to fair competition.

Today, several federal agencies are coming together to make one clear point: there is no exemption in our nation's civil rights laws for new technologies that engage in unlawful discrimination. Companies must take responsibility for their use of these tools.

The Interagency Statement we are releasing today seeks to take an important step forward to affirm existing law and rein in unlawful discriminatory practices perpetrated by those who deploy these technologies.

The statement highlights the all-of-government approach to enforce existing laws and work collaboratively on AI risks.

Unchecked AI poses threats to fairness and to our civil rights in ways that are already being felt.

Technology companies and financial institutions are amassing massive amounts of data and using it to make more and more decisions about our lives, including whether we get a loan or what advertisements we see.

While machines crunching numbers might seem capable of taking human bias out of the equation, that's not what is happening. Findings from academic studies and news reporting raise serious questions about algorithmic bias. For example, a statistical analysis of 2 million mortgage applications found that Black families were 80 percent more likely to be denied by an algorithm when compared to white families with similar financial and credit backgrounds. The response of mortgage companies has been that researchers do not have all the data that feeds into their algorithms or full knowledge of the algorithms. But their defense illuminates the problem: artificial intelligence often feels like black boxes behind brick walls.

When consumers and regulators do not know how decisions are made by artificial intelligence, consumers are unable to participate in a fair and competitive market free from bias.

That's why the CFPB and other agencies are prioritizing and confronting digital redlining, which is redlining caused through bias present in lending or home-valuation algorithms and other technology marketed as artificial intelligence. These practices are disguised through so-called neutral algorithms, but they are built like any other AI system, by scraping data that may reinforce the biases that have long existed.

We are working hard to reduce bias and discrimination when it comes to home valuations, including algorithmic appraisals. We will be proposing rules to make sure artificial intelligence and automated valuation models have basic safeguards when it comes to discrimination.

We are also scrutinizing algorithmic advertising, which, once again, is often marketed as AI advertising. We published guidance to affirm how lenders and other financial providers need to take responsibility for certain advertising practices. Specifically, advertising and marketing that uses sophisticated analytic techniques, depending on how these practices are designed and implemented, could subject firms to legal liability.

We've also taken action to protect the public from black box credit models, in some cases so complex that the financial firms that rely on them can't even explain the results. Companies are required to tell you why you were denied credit, and using a complex algorithm is not a defense against providing specific and accurate explanations.

Developing methods to improve home valuation, lending, and marketing is not inherently bad. But when done in irresponsible ways, such as creating black box models or not carefully studying the data inputs for bias, these products and services pose real threats to consumers' civil rights. They also threaten law-abiding nascent firms and entrepreneurs trying to compete with those who violate the law.

I am pleased that the CFPB will continue to contribute to the all-of-government mission to ensure that the collective laws we enforce are followed, regardless of the technology used.

Thank you.


WEIRD AI: Understanding what nations include in their artificial intelligence plans – Brookings Institution

In 2021 and 2022, the authors published a series of articles on how different countries are implementing their national artificial intelligence (AI) strategies. In these articles, we examined how different countries view AI and looked at their plans for evidence to support their goals. In the later series of papers, we examined who was winning and who was losing in the race to national AI governance, as well as the importance of people skills versus technology skills, and concluded with what the U.S. needs to do to become competitive in this domain.

Since these publications, several key developments have occurred in national AI governance and international collaborations. First, one of our key recommendations was that the U.S. and India create a partnership to work together on a joint national AI initiative. Our argument was as follows: India produces far more STEM graduates than the U.S., and the U.S. invests far more in technology infrastructure than India does. A U.S.-India partnership eclipses China in both dimensions, and a successful partnership could allow the U.S. to quickly leapfrog China in all meaningful aspects of AI. In early 2023, U.S. President Biden announced a formal partnership with India to do exactly what we recommended, to counter the growing threat of China and its AI supremacy.

Second, as we observed in our prior paper, the U.S. federal government has invested in AI, but largely in a decentralized approach. We warned that this approach, while it may ultimately develop the best AI solution, requires a long ramp up and hence may not achieve all its priorities.

Finally, we warned that China is already in the lead on the achievement of its national AI goals and predicted that it would continue to surpass the U.S. and other countries. News has now come that China is planning on doubling its investment in AI by 2026, and that the majority of the investment will be in new hardware solutions. The U.S. State Department also is now reporting that China leads the U.S. in 37 out of 44 key areas of AI. In short, China has expanded its lead in most AI areas, while the U.S. is falling further and further behind.

Considering these developments, our current blog shifts focus away from national AI plan achievement to a more micro view: understanding the elements of the particular plans of the countries included in our research, and what drove their strategies. At a macro level, we also seek to understand whether groups of like-minded countries, which we have grouped by cultural orientation, are taking the same or different approaches to AI policies. This builds upon our previous posts by seeking and identifying consistent themes across national AI plans from the perspective of underlying national characteristics.

In this blog, the countries that are part of our study include 34 nations that have produced public AI policies, as identified in our previous blog posts: Australia, Austria, Belgium, Canada, China, Czechia, Denmark, Estonia, Finland, France, Germany, India, Italy, Japan, South Korea, Lithuania, Luxembourg, Malta, Mexico, Netherlands, New Zealand, Norway, Poland, Portugal, Qatar, Russia, Serbia, Singapore, Spain, Sweden, UAE, UK, Uruguay, and USA.

For each, we examine six key elements in these national AI plans – data management, algorithmic management, AI governance, research and development (R&D) capacity development, education capacity development, and public service reform capacity development – as they provide insight into how individual countries approach AI deployment. In doing so, we examine commonalities between culturally similar nations which can lead to both higher and lower levels of investment in each area.

We do this by exploring similarities and differences through what is commonly referred to as the WEIRD framework, a typology of countries based on how Western, Educated, Industrialized, Rich, and Democratic they are. In 2010, the concept of WEIRD-ness originated with Joseph Henrich, a professor of human evolutionary biology at Harvard University. The framework describes a set of countries with a particular psychology, motivation, and behavior that can be differentiated from other countries. WEIRD is, therefore, one framework by which countries can be grouped and differentiated to determine if there are commonalities in their approaches to various issues based on similar decision-making processes developed through common national assumptions and biases.

Below are our definitions of each element of national AI plans, followed by where they fall along the WEIRD continuum.

Data management refers to how the country envisages capturing and using the data derived from AI. For example, the Singapore plan states: "[A]s the nation's custodian of personal and administrative data, the Government holds a data resource that many companies find valuable. The Government can help drive cross-sectoral data sharing and innovation by curating, cleaning, and providing the private sector with access to Government datasets."

Algorithmic management addresses the country's awareness of algorithmic issues. For example, the German plan states that "[t]he Federal Government will assess how AI systems can be made transparent, predictable and verifiable so as to effectively prevent distortion, discrimination, manipulation and other forms of improper use, particularly when it comes to using algorithm-based prognosis and decision-making applications."

AI governance refers to the inclusivity, transparency and public trust in AI and the need for appropriate oversight. The language in the French plan asserts: "[i]n a world marked by inequality, artificial intelligence should not end up reinforcing the problems of exclusion and the concentration of wealth and resources. With regards to AI, a policy of inclusion should thus fulfill a dual objective: ensuring that the development of this technology does not contribute to an increase in social and economic inequality; and using AI to help genuinely reduce these problems."

Overall, capacity development is the process of acquiring, updating and reskilling human, organizational and policy resources to adapt to technological innovation. We examine three types of capacity development: R&D, Education, and Public Service Reform.

R&D capacity development focuses on government incentive programs for encouraging private sector investment in AI. For example, the Luxembourg plan states: "[t]he Ministry of the Economy has allocated approximately €62M in 2018 for AI-related projects through R&D grants, while granting a total of approximately €27M in 2017 for projects based on this type of technology. The Luxembourg National Research Fund (FNR), for example, has increasingly invested in research projects that cover big data and AI-related topics in fields ranging from Parkinson's disease to autonomous and intelligent systems: approximately €200M over the past five years."

Education capacity development focuses on learning in AI at the post-secondary, vocational and secondary levels. For example, the Belgian plan states: "Overall, while growing, the AI offering in Belgium is limited and insufficiently visible. [W]hile university-college PXL is developing an AI bachelor programme, to date, no full AI Master or Bachelor programmes exist."

Public service reform capacity development focuses on applying AI to citizen-facing or supporting services. For example, the Finnish plan states: "Finland's strengths in piloting [AI projects] include a limited and harmonised market, neutrality, abundant technology resources and support for legislation. Promoting an experimentation culture in public administration has brought added agility to the sector's development activities."

In the next step of our analysis, we identify the level of each country and then group countries by their WEIRD-ness. Western uses the World Population Review's definition of the Latin West, and is defined by being in or out of this group: a set of countries sharing a common linguistic and cultural background, centered on Western Europe and its post-colonial footprint. Educated is based on the mean years of schooling in the UN Human Development Index, where 12 years (high school graduate) is the dividing point between high and low education. Industrialized adopts the World Bank's industry value added of GDP, where a median value of $3,500 USD per capita of value added separates high from low industrialization. Rich uses the Credit Suisse Global Wealth Databook's mean wealth per adult measure, where $125k USD wealth is the median amongst countries. Democratic applies the Democracy Index of the Economist Intelligence Unit, which differentiates between shades of democratic and authoritarian regimes, and where the midpoint of hybrid regimes (5.0 out of 10) is the dividing point between democratic and non-democratic.

For example, Australia, Austria, and Canada are considered Western, while China, India and Korea are not. Germany, the U.S., and Estonia are seen as Educated, while Mexico, Uruguay and Spain are not. Canada, Denmark, and Luxembourg are considered Industrialized, while Uruguay, India and Serbia are not. Australia, France, and Luxembourg are determined to be Rich, while China, Czechia and India are not. Finally, Sweden, the UK and Finland are found to be Democratic, while China, Qatar and Russia are not.
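The thresholding logic above can be sketched in a few lines of code. This is a minimal, illustrative version: the thresholds follow the definitions in the text, but the numeric inputs below are hypothetical, not the study's actual dataset.

```python
# Thresholds from the blog's definitions of E, I, R, and D.
THRESHOLDS = {
    "E": 12.0,       # mean years of schooling (UN HDI)
    "I": 3500.0,     # industry value added per capita, USD (World Bank)
    "R": 125_000.0,  # mean wealth per adult, USD (Credit Suisse)
    "D": 5.0,        # EIU Democracy Index midpoint for hybrid regimes
}

def weird_code(western, schooling, industry, wealth, democracy):
    """Return a five-letter code such as 'wEIRD'; capitals mean 'high'."""
    flags = [
        ("W", western),
        ("E", schooling >= THRESHOLDS["E"]),
        ("I", industry >= THRESHOLDS["I"]),
        ("R", wealth >= THRESHOLDS["R"]),
        ("D", democracy >= THRESHOLDS["D"]),
    ]
    return "".join(letter if high else letter.lower() for letter, high in flags)

# Hypothetical inputs for a non-Western but otherwise "high" country:
print(weird_code(False, 13.4, 9000.0, 250_000.0, 8.1))  # prints wEIRD
```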

Figure 1 maps the 34 countries in our sample. Results ranged from the pure WEIRD countries, including many Western European nations and some close trading partners and allies such as the United States, Canada, Australia, and New Zealand.

Figure 1: Countries classified by WEIRD framework[1]

By comparing each grouping of countries with the presence or absence of our six plan elements, we can understand how each country views AI alone and within its particular grouping. For example, wEIRD Japan and Korea are high in all areas except Western; both invest highly in R&D capacity development but not education capacity development.

The methodology used for this blog was Qualitative Comparative Analysis (QCA), which seeks to identify causal "recipes" of conditions related to the occurrence of an outcome in a set of cases. In QCA, each case is viewed as a configuration of conditions (such as the five elements of WEIRD-ness) where each condition does not have a unique impact on the outcome (an element of AI strategy), but rather acts in combination with all other conditions. Application of QCA can provide several configurations for each outcome, including identifying core conditions that are vital for the outcome and peripheral conditions that are less important. The analysis for each plan element is described below.
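The configurational idea behind QCA can be illustrated with a toy crisp-set example: tabulate how consistently each combination of binary conditions co-occurs with the outcome. This is a simplified sketch with made-up cases, not the calibration or minimization a full QCA study would perform.

```python
from collections import defaultdict

# Hypothetical cases: (W, E, I, R, D) condition values -> outcome observed.
cases = [
    ((1, 0, 1, 1, 1), 1),  # a WeIRD-type case showing the outcome
    ((1, 0, 1, 1, 1), 1),
    ((0, 1, 1, 0, 0), 0),  # a wEIrd-type case without it
    ((0, 1, 1, 0, 0), 0),
    ((1, 1, 1, 1, 1), 1),  # a fully WEIRD case showing the outcome
]

def truth_table(cases):
    """Map each configuration to its consistency: the share of its
    cases in which the outcome is present (1.0 = always present)."""
    tally = defaultdict(lambda: [0, 0])  # config -> [outcome_sum, n_cases]
    for config, outcome in cases:
        tally[config][0] += outcome
        tally[config][1] += 1
    return {config: s / n for config, (s, n) in tally.items()}

for config, consistency in truth_table(cases).items():
    print(config, consistency)
```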

Data management has three different configurations of countries that have highly developed plans. In the first configuration, for WeIRD countries, those that are Western, Industrialized, Rich, and Democratic but not Educated (e.g., France, Italy, Portugal, and Spain), being Western was the best predictor of having data management as part of their AI plan, and the other components were of much less importance. Of interest, not being Educated was also core, making it more likely that these countries would have data management as part of their plan. This would suggest that these countries recognize that they need to catch up on data management and have put plans in place that exploit their Western ties to do so.

In the second configuration, which features WEIrD Czechia, Estonia, Lithuania, and Poland, being Democratic was the core and hence most important predictor, while Western, Educated, and Industrialized were peripheral and hence less important. Interestingly, not being Rich made it more likely that data management would be included. This would suggest that these countries have developed data management plans efficiently, again leveraging their democratic allies to do so.

In the third and final configuration, which includes the WeirD countries of Mexico, Serbia, and Uruguay, plus weirD India, the only element whose presence mattered was the level of Democracy. That these countries developed data management plans in low-wealth, low-education, and low-industrialization contexts demonstrates the importance of investment in AI data management as a low-cost intervention in building AI policy.

Taken together, there are many commonalities, but a country being Western and/or Democratic was the best predictor of having a data governance strategy in its plan. In countries that are Western or Democratic, there is often a great deal of public pressure (and worry) about data governance, and we suspect these countries included data governance to satisfy the demands of their populace.

We also examined what conditions led to the absence of a highly developed data management plan. There were two configurations that had consistently low development of data management. In the first configuration, which features wEIrd Russia and the UAE and weIrd China, being neither Rich nor Democratic was core. In the second configuration, which includes wEIRD Japan and Korea, the core conditions were being not Western but highly Educated. Common across both configurations was that all countries were Industrialized but not Western. This would suggest that data management is more a concern of Western countries than non-Western countries, whether they are democratic or not.

However, we also found that the largest grouping of countries, the 15 WEIRD countries in the sample, were not represented, falling in neither the high nor the low configurations. We believe that this is due to there being multiple different paths for AI policy development; hence, these countries do not all stress data governance and management. For example, Australia, the UK, and the US have strong data governance, while Canada, Germany and Sweden do not. Future investigation is needed to differentiate between the WEIRDest countries.

For algorithmic management, except for WeirD Mexico, Serbia, and Uruguay, there was no discernible pattern in terms of which countries included an acknowledgment of the need for and value of algorithmic management. We had suspected that more WEIRD countries would be sensitive to this, but our data did not support this belief.

We examined the low outcomes for algorithmic management and found two configurations. The first was wEIRD Japan and Korea and weIRD Singapore, where the core conditions were being not Western but Rich and Democratic. The second was wEIrd Russia and the UAE and weIrd China, where the core elements were being not Rich and not Democratic. Common across the two configurations, spanning six countries, was being not Western but Industrialized. Again, this suggests that algorithmic management is more a concern of Western nations than non-Western ones.

For AI governance, we again found that, except for WeirD Mexico, Serbia, and Uruguay, there was no discernible pattern for which countries included this in their plans and which did not. We had believed AI governance and algorithmic management to be more advanced in WEIRD nations, so this was an unexpected result.

We examined the low outcomes for AI governance and found three different configurations. The first was wEIRD Japan and Korea and weIRD Singapore, where the core conditions were being not Western but Rich and Democratic. The second was wEIrd Russia and the UAE, where the core elements were being not Western but Educated. The third was weirD India, where the core elements were being not Western but Democratic. Common across the three configurations, spanning six countries, was not being Western. Again, this suggests that AI governance is more a concern of Western nations than non-Western ones.

There was a much clearer picture for high R&D development, where we found four configurations. The first configuration was the 15 WEIRD countries plus the WEIrD ones: Czechia, Estonia, Lithuania, and Poland. While the latter are not among the richer countries, they still manage to invest heavily in developing their R&D.

The second configuration included WeirD Mexico, Serbia, Uruguay, and weirD India. Like data governance, these countries were joined by their generally democratic nature but lower levels of education, industrialization, and wealth.

Conversely, the third configuration included the non-western, non-democratic nations such as weIRd Qatar and weIrd China. This would indicate that capability development is of primary importance for such nations at the expense of other policy elements. The implication is that investment in application of AI is much more important to these nations than its governance.

Finally, the fourth configuration included the non-Western but democratic nations such as wEIRD Japan and Korea and weIRD Singapore. This would indicate that the East, whether democratic or not, is equally as focused on capability development and R&D investment as the West.

We did not find any consistent configurations for low R&D development across the 34 nations.

For high education capacity development, we found two configurations, both with Western but not Rich as core conditions. The first includes WEIrD Czechia, Estonia, Lithuania, and Poland, while the second includes WeirD Mexico, Serbia, and Uruguay. Common conditions for these seven nations were being Western and Democratic but not Rich; the former countries were Educated and Industrialized, while the latter were not. These former eastern-bloc and colonial nations appear to be focusing on creating educational opportunities to catch up with other nations in the AI sphere.

Conversely, we found four configurations of low education capacity development. The first includes wEIRD Japan and Korea and weIRD Singapore, representing the non-Western but Industrialized, Rich, and Democratic nations. The second was weIRd Qatar, not Western or Democratic but Rich and Industrialized; the third was wEIrd Russia and the UAE; and the fourth was weirD India, being Democratic but low in all other areas. The common factor across these countries was being non-Western, demonstrating that educational investment to improve AI outcomes is a primarily Western phenomenon, irrespective of other plan elements.

We did not find any consistent configurations for high public service reform capacity development, but we did find three configurations for low investment in such plans. The first includes wEIRD Japan and Korea, the second was weIRd Qatar, and the last was weirD India. The common core factor across these three configurations was that they are not Western countries, further highlighting the different approaches taken by Western and non-Western countries.

Overall, we expected more commonality in which countries included certain elements, and the fragmented nature of our results likely reflects a very early stage of AI adoption and countries simply trying to figure out what to do. We believe that, over time, WEIRD countries will start to converge on what is important and those insights will be reflected in their national plans.

There is one other message in our results: the West and the East are taking very different approaches to AI development in their plans. The East is almost exclusively focused on building up its R&D capacity and is largely ignoring the traditional guardrails of technology management (e.g., data governance, data management, education, public service reform). By contrast, the West is almost exclusively focused on ensuring that these guardrails are in place and is spending relatively less effort on building the R&D capacity that is essential to AI development. This is perhaps why many Western technology leaders are calling for a six-month pause on AI development, as that pause could allow suitable guardrails to be put in place. However, we are extremely doubtful that countries like China will see the wisdom in taking a six-month pause; they would more likely use such a pause to create even more space between their R&D capacity and the rest of the world's. This "all gas, no brakes" Eastern philosophy has the potential to cause great global harm but will undeniably increase the East's dominance in this area. We have little doubt about the need for suitable guardrails in AI development but are equally convinced that a six-month pause is unlikely to be honored by China. Because of China's lead, the only prudent strategy is to build the guardrails while continuing to engage in AI development. Otherwise, the West will continue to fall further behind, resulting in a great set of guardrails but with nothing of value to guard.

[1] A capital letter denotes being high in an element of WEIRD-ness, while a lowercase letter denotes being low in that element. For example, "W" means Western while "w" means not Western.

More:
WEIRD AI: Understanding what nations include in their artificial intelligence plans - Brookings Institution

Read More..

Artificial intelligence poised to hinder, not help, access to justice – Reuters

April 25 (Reuters) - The advent of ChatGPT, the fastest-growing consumer application in history, has sparked enthusiasm and concern about the potential for artificial intelligence to transform the legal system.

From chatbots that conduct client intake, to tools that assist with legal research, document management, even writing legal briefs, AI has been touted for its potential to increase efficiency in the legal industry. It's also been recognized for its ability to help close the access-to-justice gap by making legal help and services more broadly accessible to marginalized groups.

Most low-income U.S. households deal with at least one civil legal problem a year, concerning matters like housing, healthcare, child custody and protection from abuse, according to the Legal Services Corp. They don't receive legal help for 92% of those problems.

Moreover, our poorly-funded public defense system for criminal matters has been a broken process for decades.

AI and similar technologies show promise in their ability to democratize legal services, including applications such as online dispute resolution and automated document preparation.

For example, A2J Author uses decision trees, a simplistic kind of AI, to build document preparation tools for complex filings in housing law, public benefits law and more. The non-profit JustFix provides online tools that help with a variety of landlord-tenant issues. And apps have been developed to help people with criminal expungement, to prepare for unemployment hearings, and even to get divorced.
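As a rough illustration of how such decision-tree tools work, here is a hand-rolled guided interview in the spirit of these applications. The questions and outcomes are hypothetical, not A2J Author's or JustFix's actual content.

```python
# A tiny decision tree for routing a user to a document-preparation flow.
# Dict nodes ask a yes/no question; string nodes are final recommendations.
TREE = {
    "question": "Is this a housing issue?",
    "yes": {
        "question": "Have you received an eviction notice?",
        "yes": "Prepare: answer to eviction complaint",
        "no": "Prepare: repair-request letter to landlord",
    },
    "no": "Refer: general legal-aid intake",
}

def walk(tree, answers):
    """Follow a sequence of 'yes'/'no' answers down to a recommendation."""
    node = tree
    for answer in answers:
        if isinstance(node, str):  # already at a leaf; ignore extra answers
            break
        node = node[answer]
    return node

print(walk(TREE, ["yes", "no"]))  # prints Prepare: repair-request letter to landlord
```

The appeal of this "simplistic kind of AI" is that every path is authored and auditable, which matters in a legal setting where a wrong answer has real consequences.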

Still, there's more reason to be wary than optimistic about AI's potential effects on access to justice.

Much of the existing technology and breakneck momentum in the industry is simply not geared toward the interests of underserved populations, according to several legal industry analysts and experts on the intersection of law and technology. Despite the technology's potential, some warned that the current trajectory actually runs the risk of exacerbating existing disparities.

Rashida Richardson, an assistant professor at Northeastern University School of Law, told me that AI has "lots of potential," while stressing that "there hasn't been enough public discussion of the many limitations of AI and of data itself." Richardson has served as a technology adviser to the White House and the Federal Trade Commission.

"Fundamentally, problems of access to justice are about deeper structural inequities, not access to technology," Richardson said.

It's critical to recognize that the development of AI technology is overwhelmingly unregulated and is driven by market forces, which categorically favor powerful, wealthy actors. After all, tech companies are not developing AI for free, and their interest is in creating a product attractive to those who can pay for it.

"Your ability to enjoy the benefits of any new technology corresponds directly to your ability to access that technology," said Jordan Furlong, a legal industry analyst and consultant, noting that ChatGPT Plus costs $20 a month, for example.

Generative AI has fueled a new tech gold rush in "big law" and other industries, and those projects can sometimes cost millions, Reuters reported on April 4.

Big law firms and legal service providers are integrating AI search tools into their workflows and some have partnered with tech companies to develop applications in-house.

Global law firm Allen & Overy announced in February that its lawyers are now using chatbot-based AI technology from a startup called Harvey to automate some legal document drafting and research, for example. Harvey received a $5 million investment last year in a funding round, Reuters reported in February. Last month, PricewaterhouseCoopers said 4,000 of its legal professionals will also begin using the generative AI tool.

Representatives of PricewaterhouseCoopers and Allen & Overy did not respond to requests for comment.

But legal aid organizations, public defenders and civil rights lawyers who serve minority and low-income groups simply don't have the funds to develop or co-develop AI technology, nor to contract for AI applications at scale.

The resources problem is reflected in the contours of the legal market itself, which is essentially two distinct sectors: one that represents wealthy organizational clients, and another that works for consumers and individuals, said William Henderson, a professor at the Indiana University Maurer School of Law.

Americans spent about $84 billion on legal services in 2021, according to Henderson's research and U.S. Census Bureau data. By contrast, businesses spent $221 billion, generating nearly 70% of legal services industry revenue.

Those disparities seem to be reflected in the development of legal AI thus far.

A 2019 study of digital legal technologies in the U.S. by Rebecca Sandefur, a sociologist at Arizona State University, identified more than 320 digital technologies that assist non-lawyers with justice problems. But Sandefur's research also determined that the applications don't make a significant difference in terms of improving access to legal help for low-income and minority communities. Those groups were less likely to be able to use the tools due to fees charged, limited internet access, language or literacy barriers, and poor technology design.

Sandefur's report identified other hurdles to innovation, including the challenges of coordination among innumerable county, state and federal court systems, and "the legal profession's robust monopoly on the provision of legal advice," referring to laws and rules restricting non-lawyer ownership of businesses that engage in the practice of law.

Drew Simshaw, a Gonzaga University School of Law professor, told me that many non-lawyers are "highly-motivated" to develop in this area but are concerned about crossing the line into unauthorized practice of law. And there isn't a uniform definition of what constitutes unauthorized practice across jurisdictions, Simshaw said.

On balance, it's clear that AI certainly has great potential to disrupt and improve access-to-justice. But it's much less clear that we have the infrastructure or political will to make that happen.


Opinions expressed are those of the author. They do not reflect the views of Reuters News, which, under the Trust Principles, is committed to integrity, independence, and freedom from bias.


Hassan Kanu writes about access to justice, race, and equality under law. Kanu, who was born in Sierra Leone and grew up in Silver Spring, Maryland, worked in public interest law after graduating from Duke University School of Law. After that, he spent five years reporting on mostly employment law. He lives in Washington, D.C. Reach Kanu at hassan.kanu@thomsonreuters.com

Follow this link:
Artificial intelligence poised to hinder, not help, access to justice - Reuters

Read More..

Artificial Intelligence: Use and misuse – WSPA 7News

(WSPA) Artificial Intelligence is now at your fingertips like never before.

From ChatGPT to Microsoft's Bing, to Midjourney or Snapchat's new My AI, the number of interactive AI programs is growing.

With that, the average user is starting to understand the endless possibilities of what AI can do.

Still, along with that comes some serious words of warning.

7NEWS looked into how the technology is already being misused and how it could affect everything from safety online to the job market.

Darren Hick, a professor at Furman University has seen firsthand, how the technology comes with some cautionary tales.

"We always thought it was just over the horizon, and always just over the horizon, and last year it arrived, and we weren't ready for it," he said.

As an Assistant Professor of Philosophy, Hick was one of the first to catch a new type of plagiarism using AI two weeks after ChatGPT was launched to the public when one of his students turned in a final paper.

"Normally when a student plagiarizes a paper this is a last-minute panic and so it sort of screams that it has been thrown together in the last second," according to Hick. "This wasn't that. This was clean, this was really nicely put together but had all these other factors. And I had heard about ChatGPT, and it finally dawned on me that maybe this was that."

Thats when Hick started testing out ChatGPT.

The freely accessible program interacts a lot like a confident, well-spoken human.

You can ask it any question, like "Describe AI so a 5-year-old can understand," and it spits out appropriate answers like: "It's like having a smart robot that can learn and think like a human."

No matter how many times you ask the question, the answer is always slightly different, again, just like a human.
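One reason the answers vary is that language models typically sample each next word from a probability distribution rather than always picking the single most likely word. The sketch below shows temperature-based sampling over a toy vocabulary; the words and scores are invented for illustration, not ChatGPT's actual model.

```python
import math
import random

def sample_next_word(logits, temperature=0.8, rng=random):
    """Softmax over raw scores, then sample one word.

    Higher temperature flattens the distribution (more variety);
    a temperature near zero almost always picks the top-scoring word.
    """
    scaled = [score / temperature for score in logits.values()]
    peak = max(scaled)
    exps = [math.exp(s - peak) for s in scaled]  # subtract peak for stability
    total = sum(exps)
    weights = [e / total for e in exps]
    return rng.choices(list(logits), weights=weights, k=1)[0]

# Toy next-word scores after "AI is like having a smart ..."
logits = {"robot": 2.5, "helper": 2.0, "friend": 1.2}
print([sample_next_word(logits) for _ in range(5)])  # varies between runs
```

Run the last line twice and the lists will usually differ, which is the behavior the article describes: same question, slightly different answer.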

You can even ask it to write in different styles from Shakespeare to poetry.

However, with it still in its infancy, ChatGPT is full of inaccuracies.

When we asked the AI to tell us about Diane Lee from WSPA, it said she is a former journalist who may have left the station.

Kylan Cleveland, with the IT firm Cyber Solutions in Anderson, is quick to point out that right now programs like ChatGPT don't scour the web for information, which is why they are not up to date.

Programs like ChatGPT are only working with what is fed into them and have limited knowledge past 2021, as is stated on ChatGPT's home page.

Cleveland embraces the technology but also is leery of the day these programs gain access to up-to-date information.

"When we can get to the point where AI has current data, I think that's when we should really take a step back and see what type of security measures we can put in place to prevent it from being almost predictive," he said.

AI could also pose a major shakeup to the job market, with some experts who study the technology, like Thomas Fellows, predicting many white-collar jobs from accounting to marketing will be displaced.

"If you don't have a job that has true human judgment and vault, it could be taken away, plain and simple," Fellows said.

Fellows has worked in software jobs where the main goal was to automate tasks. He believes AI will be akin to what machines have been to some factory jobs.

The jobs he said are most at risk are:

Still, no matter the warnings, educators and businesses alike said not embracing the many benefits of the technology would be like rejecting the internet in the 90s.

AI is a huge time saver, making tasks that used to take hours take only minutes or even seconds.

"If you're scared of something then you're likely more dangerous with it than someone who is educated on it," according to Cleveland.

Fellows added that those who don't embrace the technology, from educators to companies, will lose out on a tool that is changing virtually every industry.

Fortunately, with the development of AI come AI detectors, which are one way Hick was able to confirm the plagiarism.

Despite a warning in his syllabus this spring semester, the first case wasn't the last.

"I went through exactly the same process. My first thought was not 'oh well, this is ChatGPT.' My first thought was 'well, that's a weird way to put this,' and eventually it clicked: 'Oh, it's AI again.'"

Two students, two semesters, two Fs for the final grade.

Hick has a warning for all educators, no matter the school, no matter the grade level.

"If we don't get used to the way plagiarism looks now, then more and more of it is going to sneak by."

Read the original:
Artificial Intelligence: Use and misuse - WSPA 7News

Read More..

Lawmakers push for more transparency in the use of artificial intelligence – FOX61 Hartford

CONNECTICUT, USA - Connecticut lawmakers are looking to regulate the use of artificial intelligence in state government. They're calling for more transparency and tests to ensure there isn't any discrimination at the hands of this technology.

"Machine learning just picks up what we're doing now and it amplifies and perpetuates that, so we don't want to see those biases that we've known about being continued," said state Sen. James Maroney (D-Milford).

The main concern is the possible civil rights implications of AI algorithms and trying to prevent any spread of biases by these programs.

Data science experts explain these algorithms can mimic historical patterns of discrimination and with this technology gaining popularity, state lawmakers said now theyre trying to be proactive.

"Their promise is great, but so are the harms, and we've seen these harms come up again and again," said Suresh Venkatasubramanian, a data and computer science professor at Brown University. "The time to protect our civil rights in the age of AI is now."

Tuesday, the Connecticut Advisory Committee to the U.S. Commission on Civil Rights briefed lawmakers on a report recommending transparency in the use of AI algorithms by state government.

"The lack of knowledge around this is really the problem," David McGuire, chair of the committee, said. "We really don't have a clear sense of what algorithms the state is using and for what reasons."

McGuire said right now it's difficult to tell which state agencies are using this technology and how. He said three state agencies are currently utilizing the algorithms, and the state legislature is now pushing for a bill to require state offices to report what programs are in use.

"We fully support transparency," continued state Rep. David Rutigliano (R-Trumbull). "We don't want to see anybody discriminated against, and we also think that the citizens' data should be protected in the same way with the government as it is with private citizens."

To address civil rights concerns, the legislation would also implement routine tests and assessments of these AI programs to prevent discrimination.
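Audits of the kind the bill envisions often begin with simple aggregate checks. Below is a minimal sketch of one such check, demographic parity, applied to hypothetical decision data; real assessments are considerably more involved and this is not the state's actual methodology.

```python
def approval_rates(decisions):
    """decisions: list of (group, approved) pairs -> approval rate per group."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + (1 if ok else 0)
    return {g: approved[g] / totals[g] for g in totals}

def parity_gap(decisions):
    """Largest difference in approval rates across groups (0.0 = parity).

    A large gap flags the system for closer review; it is a screening
    signal, not proof of discrimination on its own.
    """
    rates = approval_rates(decisions)
    return max(rates.values()) - min(rates.values())

# Hypothetical audit sample from an automated benefits-screening tool:
sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
print(parity_gap(sample))  # A approves 2/3, B approves 1/3 -> gap of 1/3
```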


"As we're seeing AI and the changes come so quickly now, it's important that we get some guardrails in the ground, because if we don't do something now it'll get out and then it'll be harder to regulate it," Maroney added.

This bill has bipartisan support and was advanced by the General Law Committee last month.

Maroney said now they're debating the final language of the bill and trying to reduce costs; the Office of Fiscal Analysis estimates this proposal will cost more than $3.6 million over the next two years.

Emma Wulfhorst is a political reporter for FOX61 News. She can be reached at ewulfhorst@fox61.com. Follow her on Facebook, Twitter and Instagram.


Read this article:
Lawmakers push for more transparency in the use of artificial intelligence - FOX61 Hartford

Read More..

How Businesses Are Using Artificial Intelligence In 2023 – Forbes

Businesses are turning to AI to a greater degree to improve and perfect their operations. According to the Forbes Advisor survey, businesses are using AI across a wide range of areas. The most popular applications include customer service, with 56% of respondents using AI for this purpose, and cybersecurity and fraud management, adopted by 51% of businesses.

Other notable uses of AI are customer relationship management (46%), digital personal assistants (47%), inventory management (40%) and content production (35%). Businesses also leverage AI for product recommendations (33%), accounting (30%), supply chain operations (30%), recruitment and talent sourcing (26%) and audience segmentation (24%).

AI is playing a significant role in enhancing customer experiences across touchpoints. According to the Forbes Advisor survey, 73% of businesses use or plan to use AI-powered chatbots for instant messaging. Moreover, 61% of companies use AI to optimize emails, while 55% deploy AI for personalized services, such as product recommendations.

Businesses also leverage AI for long-form written content, such as website copy (42%) and personalized advertising (46%). AI has made inroads into phone-call handling, as 36% of respondents use or plan to use AI in this domain, and 49% utilize AI for text message optimization. With AI increasingly integrated into diverse customer interaction channels, the overall customer experience is becoming more efficient and personalized.

AI is allowing companies to become more nimble and productive. According to the Forbes Advisor survey, AI is used or planned for use in various aspects of business management. A significant number of businesses (53%) apply AI to improve production processes, while 51% adopt AI for process automation and 52% utilize it for search engine optimization tasks such as keyword research.

Companies are also leveraging AI for data aggregation (40%), idea generation (38%) and minimizing safety risks (38%). In addition, AI is being used to streamline internal communications, plans, presentations and reports (46%). Businesses employ AI for writing code (31%) and website copy (29%) as well.

See more here:
How Businesses Are Using Artificial Intelligence In 2023 - Forbes

Read More..