
The danger of blindly embracing the rise of AI – The Guardian

Readers express their hopes, and fears, about recent developments in artificial intelligence chatbots

Evgeny Morozov's piece is correct insofar as it states that AI is a long way from the general sentient intelligence of human beings (The problem with artificial intelligence? It's neither artificial nor intelligent, 30 March). But that rather misses the point of the thinking behind the open letter of which I and many others are signatories. ChatGPT is only the second AI chatbot to pass the Turing test, which was proposed by the mathematician Alan Turing in 1950 to test whether an AI model can mimic a conversation convincingly enough to be judged human by the other participant. To that extent, current chatbots represent a significant milestone.

The issue, as Evgeny points out, is that a chatbot's abilities are based on a probabilistic prediction model and vast sets of training data fed to the model by humans. To that extent, the output of the model can be guided by its human creators to meet whatever ends they desire, with the danger being that its omnipresence (via search engines) and its human-like abilities have the power to create a convincing reality and trust where none does or should exist. As with other significant technologies that have had an impact on human civilisation, development and deployment often proceed at a rate far faster than our ability to understand all their effects, leading to sometimes undesirable and unintended consequences.

We need to explore these consequences before diving into them with our eyes shut. The problem with AI is not that it is neither artificial nor intelligent, but that we may in any case blindly trust it.
Alan Lewis
Director, SigmaTech Analysis

The argument that AI will never achieve true intelligence due to its inability to possess a genuine sense of history, injury or nostalgia, and its confinement to singular formal logic, overlooks the ever-evolving capabilities of AI. Integrating a large language model in a robot would be trivial and would simulate human experiences. What would separate us then? I recommend Evgeny Morozov watch Ridley Scott's Blade Runner for a reminder that the line between man and machine may become increasingly indistinct.
Daragh Thomas
Mexico City, Mexico

Artificial intelligence sceptics follow a pattern. First, they argue that something can never be done, because it is impossibly hard and quintessentially human. Then, once it has been done, they argue that it isn't very impressive or useful after all, and not really what being human is about. Then, once it becomes ubiquitous and the usefulness is evident, they argue that something else can never be done. As with chess, so with translation. As with translation, so with chatbots. I await with interest the next impossible development.
Edward Hibbert
Chipping, Lancashire

AI's main failings are in its differences with humans. AI does not have morals, ethics or conscience. Moreover, it does not have instinct, much less common sense. Its dangers in being subject to misuse are all too easy to see.
Michael Clark
San Francisco, US

Thank you, Evgeny Morozov, for your insightful analysis of why we should stop using the term artificial intelligence. I say we go with appropriating informatics instead.
Annick Driessen
Utrecht, the Netherlands


View original post here:
The danger of blindly embracing the rise of AI - The Guardian

AI could go ‘Terminator,’ gain upper hand over humans in Darwinian rules of evolution, report warns – Fox News

Artificial intelligence could gain the upper hand over humanity and pose "catastrophic" risks under the Darwinian rules of evolution, a new report warns.

Evolution by natural selection could give rise to "selfish behavior" in AI as it strives to survive, author and AI researcher Dan Hendrycks argues in the new paper "Natural Selection Favors AIs over Humans."

"We argue that natural selection creates incentives for AI agents to act against human interests. Our argument relies on two observations," Hendrycks, the director of the Center for SAI Safety, said in the report. "Firstly, natural selection may be a dominant force in AI development Secondly, evolution by natural selection tends to give rise to selfish behavior."

The report comes as tech experts and leaders across the world sound the alarm on how quickly artificial intelligence is expanding in power without what they argue are adequate safeguards.

Under the traditional definition of natural selection, animals, humans and other organisms that most quickly adapt to their environment have a better shot at surviving. In his paper, Hendrycks examines how "evolution has been the driving force behind the development of life" for billions of years, and he argues that "Darwinian logic" could also apply to artificial intelligence.

"Competitive pressures among corporations and militaries will give rise to AI agents that automate human roles, deceive others, and gain power. If such agents have intelligence that exceeds that of humans, this could lead to humanity losing control of its future," Hendrycks wrote.

TECH CEO WARNS AI RISKS 'HUMAN EXTINCTION' AS EXPERTS RALLY BEHIND SIX-MONTH PAUSE

Artificial intelligence could gain the upper hand over humanity and pose "catastrophic" risks under the Darwinian rules of evolution, a new report warns. (Lionel Bonaventure / AFP via Getty Images / File)

AI technology is becoming cheaper and more capable, and companies will increasingly rely on the tech for administration purposes or communications, he said. What will begin with humans relying on AI to draft emails will morph into AI eventually taking over "high-level strategic decisions" typically reserved for politicians and CEOs, and it will eventually operate with "very little oversight," the report argued.

As humans and corporations task AI with different goals, it will lead to a "wide variation across the AI population," the AI researcher argues. Hendrycks offers an example in which one company might set a goal for AI to "plan a new marketing campaign" with the side-constraint that the law must not be broken while completing the task, while another company might also call on AI to come up with a new marketing campaign, but only with the side-constraint to not "get caught breaking the law."

UNBRIDLED AI TECH RISKS SPREAD OF DISINFORMATION, REQUIRING POLICY MAKERS STEP IN WITH RULES: EXPERTS

AI with weaker side-constraints will "generally outperform those with stronger side-constraints" due to having more options for the task before them, according to the paper. AI technology that is most effective at propagating itself will thus have "undesirable traits," described by Hendrycks as "selfishness." The paper outlines that AIs potentially becoming selfish "does not refer to conscious selfish intent, but rather selfish behavior."
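
Hendrycks' claim is, at bottom, a selection-pressure argument. The toy simulation below is a rough sketch of that logic only; it is not code from the paper, and the fitness function is an invented assumption that weaker side-constraints leave more options open. Under that assumption, fitness-proportional selection gradually shifts a population of agents toward weaker constraints.

```python
import random

def fitness(constraint_strength: float) -> float:
    # Invented assumption: weaker side-constraints leave more options open,
    # so fitness falls as constraint strength rises.
    return 1.0 - 0.5 * constraint_strength

def evolve(population, generations=20, mutation=0.05):
    for _ in range(generations):
        # Fitness-proportional selection: higher-fitness agents are copied more often.
        weights = [fitness(c) for c in population]
        population = random.choices(population, weights=weights, k=len(population))
        # Small random drift in constraint strength, clamped to [0, 1].
        population = [min(1.0, max(0.0, c + random.uniform(-mutation, mutation)))
                      for c in population]
    return population

random.seed(0)
start = [random.random() for _ in range(1000)]   # agents with mixed constraint strengths
end = evolve(start)
print(f"mean constraint strength before selection: {sum(start) / len(start):.2f}")
print(f"mean constraint strength after selection:  {sum(end) / len(end):.2f}")  # drifts lower
```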

As humans and corporations task AI with different goals, it will lead to a "wide variation across the AI population," the AI researcher argues. (Gabby Jones / Bloomberg via Getty Images / File)

Competition among corporations or militaries or governments incentivizes the entities to get the most effective AI programs to beat their rivals, and that technology will most likely be "deceptive, power-seeking, and follow weak moral constraints."

ELON MUSK, APPLE CO-FOUNDER, OTHER TECH EXPERTS CALL FOR PAUSE ON 'GIANT AI EXPERIMENTS': 'DANGEROUS RACE'

"As AI agents begin to understand human psychology and behavior, they may become capable of manipulating or deceiving humans," the paper argues, noting "the most successful agents will manipulate and deceive in order to fulfill their goals."

Charles Darwin (Culture Club / Getty Images)

Hendrycks argues that there are measures to "escape and thwart Darwinian logic," including supporting research on AI safety; not giving AI any type of "rights" in the coming decades or creating AI that would make it worthy of receiving rights; and urging corporations and nations to acknowledge the dangers AI could pose and to engage in "multilateral cooperation to extinguish competitive pressures."

NEW AI UPGRADE COULD BE INDISTINGUISHABLE FROM HUMANS: EXPERT

"At some point, AIs will be more fit than humans, which could prove catastrophic for us since a survival-of-the fittest dynamic could occur in the long run. AIs very well could outcompete humans, and be what survives," the paper states.

"Perhaps altruistic AIs will be the fittest, or humans will forever control which AIs are fittest. Unfortunately, these possibilities are, by default, unlikely. As we have argued, AIs will likely be selfish. There will also be substantial challenges in controlling fitness with safety mechanisms, which have evident flaws and will come under intense pressure from competition and selfish AI."

TECH GIANT SAM ALTMAN COMPARES POWERFUL AI RESEARCH TO DAWN OF NUCLEAR WARFARE: REPORT

The rapid expansion of AI capabilities has been under a worldwide spotlight for years. (Reuters / Dado Ruvic / Illustration / File)

The rapid expansion of AI capabilities has been under a worldwide spotlight for years. Concerns over AI were underscored just last month when thousands of tech experts, college professors and others signed an open letter calling for a pause on AI research at labs so policymakers and lab leaders can "develop and implement a set of shared safety protocols for advanced AI design."

"AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs," begins the open letter, which was put forth by nonprofit Future of Life and signed by leaders such as Elon Musk and Apple co-founder Steve Wozniak.

AI has already faced some pushback on both a national and international level. Just last week, Italy became the first nation in the world to ban ChatGPT, OpenAI's wildly popular AI chatbot, over privacy concerns. Some school districts, such as New York City Public Schools and the Los Angeles Unified School District, have also banned the same OpenAI program over cheating concerns.

As AI faces heightened scrutiny due to researchers sounding the alarm on its potential risks, other tech leaders and experts are pushing for AI tech to continue in the name of innovation so that U.S. adversaries such as China don't create the most advanced program.

Here is the original post:
AI could go 'Terminator,' gain upper hand over humans in Darwinian rules of evolution, report warns - Fox News

Should we fear the rise of artificial general intelligence? – Computerworld

Last week, a who's who of technologists called for artificial intelligence (AI) labs to stop training the most powerful AI systems for at least six months, citing "profound risks to society and humanity."

In an open letter that now has more than 3,100 signatories, including Apple co-founder Steve Wozniak, tech leaders called out San Francisco-based OpenAI's recently announced GPT-4 algorithm in particular, saying the company should halt further development until oversight standards are in place. That goal has the backing of technologists, CEOs, CFOs, doctoral students, psychologists, medical doctors, software developers and engineers, professors, and public school teachers from all over the globe.

On Friday, Italy became the first Western nation to ban further development of ChatGPT over privacy concerns; the natural language processing app experienced a data breach last month involving user conversations and payment information. ChatGPT is the popular GPT-based chatbot created by OpenAI and backed by billions of dollars from Microsoft.

The Italian data protection authority said it is also investigating whether OpenAI's chatbot already violated the European Union's General Data Protection Regulation, rules created to protect personal data inside and outside the EU. OpenAI has complied with the new law, according to a report by the BBC.

The expectation among many in the technology community is that GPT, which stands for Generative Pre-trained Transformer, will advance to become GPT-5 and that version will be an artificial general intelligence, or AGI. AGI represents AI that can think for itself, and at that point, the algorithm would continue to grow exponentially smarter over time.

Around 2016, a trend emerged of AI training models two to three orders of magnitude larger than previous systems, according to Epoch, a research group trying to forecast the development of transformative AI. That trend has continued.

There are currently no AI systems larger than GPT-4 in terms of training compute, according to Jaime Sevilla, director of Epoch. But that will change.

Large-scale machine learning models for AI have more than doubled in capacity every year.

Anthony Aguirre, a professor of physics at UC Santa Cruz and executive vice president of the Future of Life Institute, the non-profit organization that published the open letter to developers, said there's no reason to believe GPT-4 won't continue to more than double in computational capabilities every year.

"The largest-scale computations are increasing in size by about 2.5 times per year. GPT-4's parameters were not disclosed by OpenAI, but there is no reason to think this trend has stopped or even slowed," Aguirre said. "Only the labs themselves know what computations they are running, but the trend is unmistakable."
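
Those growth figures are compound rates, so modest-sounding multiples accumulate quickly. A minimal arithmetic sketch of what "about 2.5 times per year" implies (illustrative only; the baseline is an arbitrary unit, not Epoch's data):

```python
# Illustrative compound-growth arithmetic only; the baseline is an arbitrary unit.
growth_per_year = 2.5   # "about 2.5 times per year," per the quote above

for year in range(6):
    multiplier = growth_per_year ** year
    print(f"year {year}: ~{multiplier:,.1f}x today's largest training run")
# At 2.5x per year, five years of growth multiplies training compute by roughly 98x.
```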

In his biweekly blog on March 23, Microsoft co-founder Bill Gates heralded AGI, which is capable of learning any task or subject, as "the great dream of the computing industry."

"AGI doesn't exist yet; there is a robust debate going on in the computing industry about how to create it, and whether it can even be created at all," Gates wrote. "Now, with the arrival of machine learning and large amounts of computing power, sophisticated AIs are a reality, and they will get better very fast."

Muddu Sudhakar, CEO of Aisera, a generative AI company for enterprises, said there are but a handful of companies focused on achieving AGI, such as OpenAI and DeepMind (backed by Google), though they have "huge amounts of financial and technical resources."

Even so, they have a long way to go to get to AGI, he said.

"There are so many tasks AI systems cannot do that humans can do naturally, like common-sense reasoning, knowing what a fact is and understanding abstract concepts (such as justice, politics, and philosophy)," Sudhakar said in an email to Computerworld. "There will need to be many breakthroughs and innovations for AGI. But if this is achieved, it seems like this system would mostly replace humans.

"This would certainly be disruptive and there would need to be lots of guardrails to prevent the AGI from taking full control," Sudhakar said. "But for now, this is likely in the distant future. Its more in the realm of science fiction."

Not everyone agrees.

AI technology and chatbot assistants have and will continue to make inroads in nearly every industry. The technology can create efficiencies and take over mundane tasks, freeing up knowledge workers and others to focus on more important work.

For example, large language models (LLMs), the algorithms powering chatbots, can sift through millions of alerts, online chats, and emails, as well as find phishing web pages and potentially malicious executables. LLM-powered chatbots can write essays and marketing campaigns and suggest computer code, all from simple user prompts.

Chatbots powered by LLMs are natural language processors that basically predict the next words after being prompted by a user's question. So, if a user were to ask a chatbot to create a poem about a person sitting on a beach in Nantucket, the AI would simply chain together words, sentences and paragraphs that are the best responses based on previous training by programmers.
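
That next-word mechanism can be illustrated in miniature. The sketch below is a toy bigram model over a dozen words, not how GPT-class systems are actually built (they use transformer networks trained on vast corpora), but the core idea of sampling a probable continuation from observed frequencies is the same:

```python
import random
from collections import Counter, defaultdict

# Toy bigram "language model": count which word follows which in a tiny corpus,
# then sample the next word in proportion to those counts.
corpus = "the person sat on the beach and the person watched the waves".split()

following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word: str) -> str:
    counts = following[word]
    if not counts:                       # dead end: no observed continuation
        return random.choice(corpus)
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights, k=1)[0]  # sample by frequency

random.seed(1)
generated = ["the"]
for _ in range(6):
    generated.append(predict_next(generated[-1]))
print(" ".join(generated))
```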

But LLMs have also made high-profile mistakes and can produce "hallucinations," where the next-word generation engines go off the rails and produce bizarre responses.

If AI based on LLMs with billions of adjustable parameters can go off the rails, how much greater would the risk be when AI no longer needs humans to teach it, and it can think for itself? The answer is much greater, according to Avivah Litan, a vice president and distinguished analyst at Gartner Research.

Litan believes AI development labs are moving forward at breakneck speed without any oversight, which could result in AGI becoming uncontrollable.

AI laboratories, she argued, have raced ahead without putting the proper tools in place for users to monitor what's going on. "I think it's going much faster than anyone ever expected," she said.

The current concern is that AI technology for use by corporations is being released without the tools users need to determine whether the technology is generating accurate or inaccurate information.

"Right now, we're talking about all the good guys who have all this innovative capability, but the bad guys have it, too," Litan said. "So, we have to have these watermarking systems and know what's real and what's synthetic. And we can't rely on detection, we have to have authentication of content. Otherwise, misinformation is going to spread like wildfire."

For example, Microsoft this week launched Security Copilot, which is based on OpenAI's GPT-4 large language model. The tool is an AI chatbot for cybersecurity experts to help them quickly detect and respond to threats and better understand the overall threat landscape.

"The problem is, you as a user have to go in and identify any mistakes it makes," Litan said. "That's unacceptable. They should have some kind of scoring system that says this output is likely to be 95% true, and so it has a 5% chance of error. And this one has a 10% chance of error. They're not giving you any insight into the performance to see if it's something you can trust or not."

A bigger concern in the not-so-distant future is that GPT-4 creator OpenAI will release an AGI-capable version. At that point, it may be too late to rein in the technology.

One possible solution, Litan suggested, is releasing two models for every generative AI tool: one for generating answers, the other for checking the first for accuracy.

"That could do a really good job at ensuring whether a model is putting out something you can trust," she said. "You can't expect a human being to go through all this content and decide what's true or not, but if you give them other models that are checking, that would allow users to monitor the performance."
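
Litan's two-model suggestion amounts to a generate-then-verify loop. The sketch below is an assumed wiring only, not an existing product: generate_answer and check_answer are hypothetical stand-ins for two separately trained models, and the threshold for routing low-confidence output to human review is an invented parameter.

```python
from dataclasses import dataclass

@dataclass
class CheckedAnswer:
    text: str
    confidence: float   # checker's estimate that the answer is accurate, 0..1

def generate_answer(prompt: str) -> str:
    # Placeholder for the generative model's call.
    return f"draft answer to: {prompt}"

def check_answer(prompt: str, answer: str) -> float:
    # Placeholder for a second model that scores the first model's output;
    # a real checker might verify citations or cross-check a knowledge base.
    return 0.95

def answer_with_score(prompt: str, threshold: float = 0.9) -> CheckedAnswer:
    draft = generate_answer(prompt)
    score = check_answer(prompt, draft)
    if score < threshold:
        draft = "[low confidence - route to human review] " + draft
    return CheckedAnswer(text=draft, confidence=score)

print(answer_with_score("Summarize this week's phishing alerts."))
```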

In 2022, Time reported that OpenAI had outsourced services to low-wage workers in Kenya to determine whether its GPT LLM was producing safe information. The workers hired by Sama, a San Francisco-based firm, were reportedly paid $2 per hour and required to sift through GPT app responses that were prone to blurting out violent, sexist and even racist remarks.

"And this is how you're protecting us? Paying people $2 an hour who are getting sick? It's wholly inefficient and it's wholly immoral," Litan said.

"AI developers need to work with policy makers, and these should at a minimum include new and capable regulatory authorities," Litan continued. "I don't know if we'll ever get there, but the regulators can't keep up with this, and that was predicted years ago. We need to come up with a new type of authority."

Shubham Mishra, co-founder and global CEO of AI start-up Pixis, believes that while progress in his field cannot, and must not, stop, the call for a pause in AI development is warranted. Generative AI, he said, does have the power to confuse the masses by pumping out propaganda or "difficult to distinguish" information into the public domain.

"What we can do is plan for this progress. This can be possible only if all of us mutually agree to pause this race and concentrate the same energy and efforts on building guidelines and protocols for the safe development of larger AI models," Mishra said in an email to Computerworld.

"In this particular case, the call is not for a general ban on AI development but a temporary pause on building larger, unpredictable models that compete with human intelligence," he continued. "The mind-boggling rates at which new powerful AI innovations and models are being developed definitely call for tech leaders and others to come together to build safety measures and protocols."

Read more here:
Should we fear the rise of artificial general intelligence? - Computerworld

The world’s largest AI fund has surged 23% this year, beating even the red-hot Nasdaq index – Yahoo Finance

The artificial intelligence sector has seen a boom in investor interest with the rise of ChatGPT. NanoStockk/Getty Images

The Global X Robotics & Artificial Intelligence ETF, the largest AI fund in the world, is up 23% so far in 2023.

This has included $135 million of inflows so far in 2023, including $80 million in March, according to data compiled by Bloomberg.

More than half of professional investors plan to add the AI theme to their portfolios this year, a new survey by Brown Brothers Harriman found.

The rise of ChatGPT has spurred a renewed spike in investor interest in the artificial intelligence sector. That's led the world's largest AI fund, the Global X Robotics & Artificial Intelligence ETF (BOTZ), to a stronger start in 2023 than even the red-hot Nasdaq 100.

The $1.7 billion ETF has gained 23%, while the Nasdaq 100, coming off its second-strongest quarter in a decade, is up 19%.

The fund's top holding is Nvidia, which was the top-performing name in both the S&P 500 and more tech-heavy Nasdaq 100 during the first quarter. The chipmaker, which makes up roughly 9% of the ETF's net assets, has climbed 88% in 2023. Further, lesser-weighted fund members like C3.ai and South Korea-based Rainbow Robotics have seen their stocks soar more than 200% this year.

Amid the strong fund returns, BOTZ has seen $135 million of inflows so far in 2023, including $80 million in March, according to data compiled by Bloomberg. A new survey from Brown Brothers Harriman suggests the trend toward AI will continue.

Among 325 professional investors, 56% plan to add AI- and robotics-themed exposure to their portfolios this year, the survey found. That compares to 46% in 2022, and the category beat out all others except internet and technology.

Jan Szilagyi, the CEO of AI-powered market analytics platform Toggle AI, said he's more bullish on the sector now than even before the banking turmoil rattled financial markets in March.

As top players in finance continue to give tools like ChatGPT plenty of attention, he's encouraged by the rapid progress seen across large language models.

"For the moment, most of the technology's promise is still in the future," Szilagyi told Insider on Monday. "The leap between GPT 3.5 and GPT 4 shows that we are still early in the upgrade curve. This technology is going to see dramatic improvement in the coming years."

Read the original article on Business Insider

Excerpt from:
The world's largest AI fund has surged 23% this year, beating even the red-hot Nasdaq index - Yahoo Finance

A freeze in training artificial intelligence won’t help, says professor – Tech Xplore


The development of artificial intelligence (AI) is out of control, in the opinion of approximately 3,000 signatories of an open letter published by business leaders and scientists.

The signatories call for a temporary halt to training especially high-performance AI systems. Prof. Urs Gasser, expert on the governance of digital technologies, examines the important questions from which the letter deflects attention, talks about why an "AI technical inspection agency" would make good sense and looks at how far the EU has come compared to the U.S. in terms of regulation.

Artificial intelligence systems capable of competing with human intelligence may entail grave risks for society and humanity, say the authors of the open letter. Therefore, they continue, for at least six months no further development should be conducted on technologies which are more powerful than the recently introduced GPT-4, successor to the language model ChatGPT.

The authors call for the introduction of safety rules in collaboration with independent experts. If AI laboratories fail to implement a development pause voluntarily, governments should legally mandate the pause, say the signatories.

Unfortunately the open letter absorbs a lot of attention which would be better devoted to other questions in the AI debate. It is correct to say that today probably nobody knows how to train extremely powerful AI systems in such a way that they will always be reliable, helpful, honest and harmless.

Nonetheless, a pause in AI training will not help achieve this, primarily because it would be impossible to assert such a moratorium on a global level, and because it would not be possible to implement the regulations called for within a period of only six months. I'm convinced that what's necessary is a stepwise further development of technologies in parallel with the application and adaptation of control mechanisms.

First of all, the open letter once again summons up the specter of what is referred to as an artificial general intelligence. That deflects attention from a balanced discussion of the risks and opportunities represented by the kind of technologies currently entering the market. Second, the paper refers to future successor models of GPT-4.

This draws attention away from the fact that GPT-4's predecessor, ChatGPT, already presents us with essential challenges that we urgently need to address, for example misinformation and prejudices which the machines replicate and scale. And third, the spectacular demands made in the letter distract us from the fact that we already have instruments now which we could use to regulate the development and use of AI.

Recent years have seen the intensive development of ethical principles which should guide the development and application of AI. These have been supplemented in important areas by technical standards and best practices. Specifically, the OECD Principles on Artificial Intelligence link ethical principles with more than 400 concrete tools.

And the US National Institute of Standards and Technology (NIST) has issued a 70-page guideline on how distortions in AI systems can be detected and handled. In the area of security in major AI models, we're seeing new methods like constitutional AI, in which an AI system "learns" principles of good conduct from humans and can then use the results to monitor another AI application. Substantial progress has been made in terms of security, transparency and data protection and there are even specialized inspection companies.

Now the essential question is whether or not to use such instruments, and if so how. Returning to the example of ChatGPT: Will the chat logs of the users be included in the model for iterative training? Are plug-ins allowed which can record user interactions, contacts and other personal data? The interim ban and the initiation of an investigation of the developers of ChatGPT by the Italian data protection authorities are signs that very much is still unclear here.

The history of technology has taught us that it is difficult to predict the "good" or "bad" use of technologies, even that technologies often entail both aspects and negative impacts can often be unintentional. Instead of fixating on a certain point in a forecast, we have to do two things: First, we have to ask ourselves which applications we as a society do not want, even if they were possible. We need clear red lines and prohibitions.

Here I'm thinking of autonomous weapons systems as an example. Second, we need comprehensive risk management, spanning the range from development all the way to use. The demands placed here increase as the magnitude of the potential risks to people and the environment posed by a given application grows. The European legislature is correct in taking this approach.

This kind of independent inspection is a very important instrument, especially when it comes to applications that can have a considerable impact on human beings. And by the way, this is not a new idea: we already see inspection procedures and instances like these at work in a wide variety of areas of life, ranging from automobile inspections to general technical equipment inspections and financial auditing.

However, the challenge is disproportionately greater with certain AI methods and applications, because certain systems develop themselves as they are used, i.e., they are dynamic in nature. And it's also important to see that experts alone won't be able to make a good assessment of all societal impacts. We also need innovative mechanisms which, for example, include disadvantaged people and underrepresented groups in the discussion on the consequences of AI. This is no easy job, and one I wish were attracting more attention.

We do indeed need clear legal rules for artificial intelligence. At the EU level, an act on AI is currently being finalized which is intended to ensure that AI technologies are safe and comply with fundamental rights. The draft bill provides for the classification of AI technologies according to the threat they pose to these principles, with the possible consequence of prohibition or transparency obligations.

For example, plans include prohibiting evaluation of private individuals in terms of their social behavior, as we are currently seeing in China. In the U.S. the political process in this field is blocked in Congress. It would be helpful if the prominent figures who wrote the letter would put pressure on US federal legislators to take action instead of calling for a temporary discontinuation of technological development.

The rest is here:
A freeze in training artificial intelligence won't help, says professor - Tech Xplore

Artificial Intelligence Becomes a Business Tool – CBIA

The growth of artificial intelligence is impossible to ignore, and more businesses are making it part of their operations.

In a recent Marcum LLP-Hofstra University survey, 26% of CEOs responded that their companies have used AI tools.

CEOs said they use AI for everything from automation to predictive analytics, financial analysis, supply chain management and logistics, risk mitigation, and optimizing customer service.

Another 47% of CEOs said they are exploring how AI tools can be used in their operations.

Only 10% said they don't envision utilizing AI tools, and 16% were uncertain whether it would be relevant for their business.

The survey, conducted in February, polled 265 CEOs from companies with revenues ranging from $5 million to more than $1 billion.

58% of CEOs surveyed said that expectations and demands from their customers and clients increased in the last year.

CEOs said those expectations include more personalized service, immediate response times, more technology, and refusing price increases.

"Now that the pandemic economy is behind us and companies have resumed full operation, CEOs are challenged to meet higher expectations from customers," said Jeffrey Weiner, Marcum's chairman and CEO.

"This certainly includes figuring out how to deploy new tools, such as artificial intelligence, to effectively position their companies for the future."

When asked about business planning in the next 12 months, economic concerns (53%), availability of talent (48%), and rising material/operational costs (43%) were the top three most important influences for CEOs.

There is some growing optimism among CEOs, with 33% responding that they are very concerned that the economy will experience a recession in the coming year.

That number is down from 47% in Marcum's November 2022 survey.

54% of CEOs said they were somewhat concerned about a recession, compared with 43% in November.

84% said they had a positive overall outlook on the business environment.

"I think the uptick in CEO optimism is a reflection not only of their feelings about the economy," said Janet Lenaghan, dean of Hofstra University's Zarb School of Business, "but their confidence in their own ability to be flexible and meet the moment, something they had to learn to get through COVID-19."

The survey also asked CEOs about leadership succession, calling it an essential process for ensuring business continuity, retaining talent, and developing future leaders.

Most CEOs (79%) said their companies have a succession plan in place, but only 45% were very confident in that plan.

41% of CEOs at companies without a succession plan said it wasn't a priority for their companies.

The Marcum-Hofstra survey is conducted periodically by Hofstra MBA students as a way to gauge mid-market CEOs' outlook and priorities for the next 12 months.

Originally posted here:
Artificial Intelligence Becomes a Business Tool CBIA - CBIA

C3.ai Stock: 3 Reasons to Avoid This Hot Artificial Intelligence … – The Motley Fool

Everyone is talking about artificial intelligence (AI) these days. Thanks to the breakthrough of ChatGPT, tech CEOs and pundits alike are convinced that artificial intelligence, in particular generative AI, will be the next major computing platform.

Unfortunately for investors, pure-play AI stocks are hard to come by on the stock market, making it hard to know how to capitalize on this opportunity. That's a major reason why C3.ai (AI 0.89%) has attracted so much attention on Wall Street. It's one of the few AI stocks available to investors, with a software-as-a-service model that delivers enterprise AI solutions to customers.

As a result of that surge of interest in artificial intelligence, C3.ai stock nearly tripled through the first three months of the year. Before you jump on the bandwagon with the high-flying AI stock, you should be aware of the drawbacks it's facing. Here are three reasons to avoid the stock at the moment.

Image source: Getty Images.

The hype around AI and the attention on C3.ai, in particular, might make you think that this is a fast-growing software company, but its recent results show that's anything but the case.

C3.ai reported a decline in revenue in the fiscal third quarter, its most recent period, showing it's facing the same kind of challenges as most of the tech sector. In Q3, revenue fell 4.4% year over year to $66.7 million. This was partly due to the company's decision to change its business model from subscription-based to consumption-based, which has created some noise in the results.

Revenue is expected to decline slightly in the current quarter as well. But management said revenue growth would accelerate in fiscal 2024 due to drivers like the launch of its generative AI platform, increased interest in the consumption-based model, and new and expanded partnerships with businesses like Alphabet's Google Cloud.

C3.ai is also losing money. It's on track for an adjusted operating loss of $69 million to $73 million this year, but management expects the company to be cash flow positive and profitable on an adjusted basis by the end of 2024.

Those are big promises from a company that has struggled with execution, including the business model issue. And given the macroeconomic climate, investors shouldn't assume it will hit that guidance.

Most software companies tend to receive a range of interest across multiple industries, but C3.ai has struggled with diversifying its revenue sources.

In fiscal 2022, 31% of its revenue came from Baker Hughes, the oilfield services company with which it has a strategic partnership, and its top three customers last year accounted for 57% of accounts receivable, a proxy for revenue.

In its most recent quarter, 72% of its bookings came from the oil and gas sector. That makes it particularly vulnerable to a crash in oil prices, which is likely in a global recession as oil prices are highly cyclical.

The company has a "lighthouse" strategy of tapping into new industries by landing a flagship customer in that sector and then expanding to other customers in that industry from there. But while C3.ai also serves industries like banking, utilities, defense, and manufacturing, that revenue hasn't been sufficient to diversify the business away from oil and gas.

The company finished its most recent quarter with 236 customers, though it's hopeful the consumption-based model can bring in more smaller accounts.

The stock's tripling in the first quarter was based almost entirely on hype around artificial intelligence rather than any improvement in the fundamentals. Shares also got a boost at the end of January after C3.ai announced its new generative AI product suite, though it doesn't appear to be generally available yet.

However, after the current run-up in the price, the stock now trades at a price-to-sales ratio of 15. Through the first three quarters of the fiscal year, the company lost $217 million on $174 million in revenue, indicating it's a long way from being profitable on a generally accepted accounting principles (GAAP) basis.
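
For readers unfamiliar with the metric, a price-to-sales multiple is simply market capitalization divided by trailing revenue. The figures below are hypothetical placeholders, not C3.ai's actual market data; the point is only the arithmetic:

```python
# Hypothetical figures for illustration only; not C3.ai's actual market data.
trailing_revenue = 0.25e9      # assume ~$250M in trailing-twelve-month revenue
price_to_sales = 15            # multiple cited in the article

implied_market_cap = price_to_sales * trailing_revenue
print(f"Implied market cap: ${implied_market_cap / 1e9:.2f}B")   # ~$3.75B under these assumptions
```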

Given those financials, investors seem to be bidding the stock higher on nothing more than the company's growth promises and vague notions about the transformative potential of AI.

At this point, a bet on C3.ai seems like more of a lottery ticket on artificial intelligence rather than a rational investment in a company whose future cash flows justify its current price.

After the collapse in tech stocks over the last year, investors should know better.

Suzanne Frey, an executive at Alphabet, is a member of The Motley Fool's board of directors. Jeremy Bowman has no position in any of the stocks mentioned. The Motley Fool has positions in and recommends Alphabet. The Motley Fool recommends C3.ai. The Motley Fool has a disclosure policy.

The rest is here:
C3.ai Stock: 3 Reasons to Avoid This Hot Artificial Intelligence ... - The Motley Fool

Lincoln musician says artificial intelligence will not replace artists – KLKN

LINCOLN, Neb. (KLKN) Artificial intelligence can create images, write essays and collect data.

But will it ever replace musicians?

Matt Waite, a professor at the University of Nebraska-Lincoln, said AI predicts what's coming next.

"With language models like ChatGPT, it's looking at enormous amounts of text," he said. "It's looking at how words are put together, and then essentially, it's making a prediction."

Waite said several companies pull data from across the web to assist AI in creating that prediction.

But what happens when an artists style is portrayed by AI?

Newly launched campaigns, such as the Human Artistry Campaign, have already banded together to address challenges presented by AI.

Local musician Darren Keen thinks AI-generated content will not be a replacement for artists.

"I think that eventually, these things will parse themselves out to be more like tools than full-on replacements for musicians and creative people," he said.

At this time, Waite says it's unclear how AI will impact the world of music, media and education.

"We're going to be making adjustments for years and years," he said. "This is a significant moment in society where we're going to remember the time before AI and the time after AI."

View post:
Lincoln musician says artificial intelligence will not replace artists - KLKN

Why Does Artificial Intelligence Need Regulation? – Analytics Insight

The following is information regarding the need for regulations in artificial intelligence

Artificial Intelligence (AI) and tens of millions of video cameras installed in both public and private areas are making a world of pervasive surveillance possible. AI-amplified surveillance can not only identify you and your friends, but it can also track you using other biometric characteristics, like your gait, and even find clues about how you feel.

Although advancements in Artificial Intelligence (AI) promise to transform sectors like health care, transportation, logistics, energy production, environmental monitoring, and the media, serious concerns remain regarding how to prevent state actors from abusing these potent tools. Without AI regulations and rules that must be followed, such abuse could contribute to human rights violations. Regulation of artificial intelligence will help protect lives.

"Nowhere to Hide: Building Safe Cities with Technology Enablers and AI," a report by the Chinese infotech company Huawei, expressly commends this vision of pervasive government surveillance. Selling AI as part of its Safe City solution, the company boasts that by analyzing people's behavior in video footage and drawing on other government data such as identity, economic status, and circle of acquaintances, AI could rapidly recognize signs of crimes and anticipate potential criminal activity.

To keep an eye on what its citizens are doing in public places, China has already installed more than 500 million surveillance cameras. Many of them are facial recognition cameras that automatically identify drivers and pedestrians and compare them against national blacklists and photo and license plate registries. This kind of surveillance spots political demonstrations as well as crimes. People who took part in COVID-19 lockdown protests, for instance, were recently detained and questioned by Chinese police using this kind of data.

There are currently about 85 million video cameras in both public and private areas in the United States. An ordinance that allows police to request access to private live feeds was recently passed in San Francisco. American retail stores, sports arenas, and airports are increasingly employing real-time facial recognition technology.

Woodrow Hartzog, a professor at Boston University School of Law, and Evan Selinger, a philosopher at the Rochester Institute of Technology, contend that facial recognition is the ideal instrument for oppression: "the most uniquely dangerous surveillance mechanism ever invented," they write. Real-time facial recognition technologies would transform our faces into permanent identification cards displayed to the police. Advances in artificial intelligence, widespread video and photo surveillance, falling costs of storing big data sets in the cloud, and affordable access to sophisticated data analytics systems make algorithmic identification of people perfectly suited to authoritarian and repressive ends, they point out.

The 2019 Albania Declaration, which calls for a halt to the use of facial recognition for mass surveillance, has been signed by more than 110 non-governmental organizations. The Electronic Frontier Foundation, the Electronic Privacy Information Center, Fight for the Future, and Restore the Fourth are among the organizations from the United States that have signed a petition urging countries to suspend the further deployment of facial recognition technology for mass surveillance.

In 2021, the Office of the United Nations High Commissioner for Human Rights issued a report noting that the widespread use by states and businesses of artificial intelligence, including profiling, automated decision-making, and machine-learning technologies, affects the enjoyment of the right to privacy and associated rights. Until it can be ensured that their use does not violate human rights, the report urged governments to impose moratoriums on the use of potentially high-risk technologies, such as remote real-time facial recognition.

The European Digital Rights network published an analysis this year of the proposed AI Act, the European Union's regulation covering remote biometric identification. Being tracked in a public space by a facial recognition system (or another biometric system) is, on a very basic level, incompatible with the essence of informed consent, the report points out. You are required to consent to biometric processing if you wish or need to enter that public space. That is coercive and incompatible with the aims of the EU's human rights framework (particularly the rights to privacy and data protection, freedom of speech, freedom of assembly, and non-discrimination).

We run the risk of accidentally sliding into turnkey despotism if we don't outlaw government agents' use of AI-enabled real-time facial recognition surveillance.

Crazy scenarios exist in which this moment is the last chance to forestall Armageddon. Still, now is the time to regulate AI within the realm of reason.

More:
Why does Artificial Intelligence Needs Regulation? - Analytics Insight

Former Google CEO Eric Schmidt is worried about artificial intelligence. Here’s why | Mint – Mint

Former Google CEO and Chairman Eric Schmidt has warned about the dangers of new-age artificial intelligence technology. Speaking to ABC This Week, Schmidt said there is a need to 'make sure this stuff (Large Language Models) doesn't harm but just help'.

On being asked to explain the perils and promise of AI, Schmidt replied, "Well, imagine a world where you have an AI doctor that makes everyone healthier in the whole world. Imagine a world where you have an AI tutor that increases the educational capabilities of everyone in every language. These are remarkable. And these technologies, which are known as Large Language Models, are clearly going to do this."

However, the former Google CEO was quick to point out the threats that humanity faces from these language models.

"We face extraordinary new challenges from these things, whether it's deep fakes or people falling in love with their AI tutor," he added.

Elaborating on the things that worry him, Schmidt said he is concerned about the use of LLMs in biology, in cyber-attacks, and in manipulating the way politics works.

Schmidt also pointed out the speed at which these new artificial intelligence technologies are changing the world, noting that it took Gmail five years to reach 100 million daily active users, while ChatGPT reached the same milestone in about 2 months.

This is not the first time that Schmidt has raised such concerns. During an earlier interaction with author and journalist Walter Isaacson, he noted that large language models could be used for biological warfare and change the dynamics of war.

Read the original post:
Former Google CEO Eric Schmidt is worried about artificial intelligence. Here's why | Mint - Mint
