Should we fear the rise of artificial general intelligence? – Computerworld
Last week, a who's who of technologists called for artificial intelligence (AI) labs to stop training the most powerful AI systems for at least six months, citing "profound risks to society and humanity."
In an open letter that now has more than 3,100 signatories, including Apple co-founder Steve Wozniak, tech leaders called out San Francisco-based OpenAI's recently announced GPT-4 algorithm in particular, saying the company should halt further development until oversight standards are in place. That goal has the backing of technologists, CEOs, CFOs, doctoral students, psychologists, medical doctors, software developers and engineers, professors, and public school teachers from all over the globe.
On Friday, Italy became the first Western nation to temporarily ban ChatGPT over privacy concerns; the natural language processing app experienced a data breach last month involving user conversations and payment information. ChatGPT is the popular GPT-based chatbot created by OpenAI and backed by billions of dollars from Microsoft.
The Italian data protection authority said it is also investigating whether OpenAI's chatbot has already violated the European Union's General Data Protection Regulation, rules created to protect personal data inside and outside the EU. OpenAI has complied with the ban, according to a report by the BBC.
The expectation among many in the technology community is that GPT, which stands for Generative Pre-trained Transformer, will advance to become GPT-5 and that version will be an artificial general intelligence, or AGI. AGI represents AI that can think for itself, and at that point, the algorithm would continue to grow exponentially smarter over time.
Around 2016, a trend emerged in AI training models that were two-to-three orders of magnitude larger than previous systems, according to Epoch, a research group trying to forecast the development of transformative AI. That trend has continued.
There are currently no AI systems larger than GPT-4 in terms of training compute, according to Jaime Sevilla, director of Epoch. But that will change.
Large-scale machine learning models for AI have more than doubled in capacity every year.
Anthony Aguirre, a professor of physics at UC Santa Cruz and executive vice president of the Future of Life Institute, the non-profit organization that published the open letter to developers, said there's no reason to believe GPT-4 won't continue to more than double in computational capabilities every year.
"The largest-scale computations are increasing in size by about 2.5 times per year. GPT-4's parameters were not disclosed by OpenAI, but there is no reason to think this trend has stopped or even slowed," Aguirre said. "Only the labs themselves know what computations they are running, but the trend is unmistakable."
In his biweekly blog on March 23, Microsoft co-founder Bill Gates heralded AGI, which is capable of learning any task or subject, as "the great dream of the computing industry."
"AGI doesn't exist yet; there is a robust debate going on in the computing industry about how to create it, and whether it can even be created at all," Gates wrote. "Now, with the arrival of machine learning and large amounts of computing power, sophisticated AIs are a reality, and they will get better very fast."
Muddu Sudhakar, CEO of Aisera, a generative AI company for enterprises, said only a handful of companies, such as OpenAI and Google-backed DeepMind, are focused on achieving AGI, though they have "huge amounts of financial and technical resources."
Even so, they have a long way to go to get to AGI, he said.
"There are so many tasks AI systems cannot do that humans can do naturally, like common-sense reasoning, knowing what a fact is and understanding abstract concepts (such as justice, politics, and philosophy)," Sudhakar said in an email to Computerworld. "There will need to be many breakthroughs and innovations for AGI. But if this is achieved, it seems like this system would mostly replace humans.
"This would certainly be disruptive and there would need to be lots of guardrails to prevent the AGI from taking full control," Sudhakar said. "But for now, this is likely in the distant future. Its more in the realm of science fiction."
Not everyone agrees.
AI technology and chatbot assistants have and will continue to make inroads in nearly every industry. The technology can create efficiencies and take over mundane tasks, freeing up knowledge workers and others to focus on more important work.
For example, large language models (LLMs), the algorithms powering chatbots, can sift through millions of alerts, online chats, and emails, as well as find phishing web pages and potentially malicious executables. LLM-powered chatbots can write essays and marketing campaigns and suggest computer code, all from simple user prompts.
Chatbots powered by LLMs are natural language processors that basically predict the next words after being prompted by a user's question. So, if a user were to ask a chatbot to create a poem about a person sitting on a beach in Nantucket, the AI would simply chain together the words, sentences and paragraphs that are the best responses based on patterns in its training data.
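A toy model makes this next-word mechanism concrete. The sketch below is a drastic simplification (a bigram frequency table rather than a neural network, built over a made-up corpus), but the generation loop is the same basic idea: predict the most likely next word, append it, repeat.

```python
from collections import Counter, defaultdict

# A tiny, made-up "training corpus" standing in for the web-scale text an LLM sees.
corpus = "the cat sat on the mat the cat sat on the beach".split()

# Count how often each word follows each other word (a bigram model).
next_words = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_words[current][following] += 1

def generate(start, length=5):
    """Repeatedly pick the most likely next word: the core loop of
    text generation, minus the neural network."""
    words = [start]
    for _ in range(length):
        candidates = next_words[words[-1]]
        if not candidates:
            break  # no known continuation
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(generate("the"))  # -> "the cat sat on the cat"
```

A real LLM replaces the frequency table with a deep network trained on vast amounts of text, but at bottom it is still scoring candidate next words.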
But LLMs have also made high-profile mistakes and can produce "hallucinations," where the next-word generation engine goes off the rails and produces bizarre responses.
If AI based on LLMs with billions of adjustable parameters can go off the rails, how much greater would the risk be when AI no longer needs humans to teach it, and it can think for itself? The answer is much greater, according to Avivah Litan, a vice president and distinguished analyst at Gartner Research.
Litan believes AI development labs are moving forward at breakneck speed without any oversight, which could result in AGI becoming uncontrollable.
AI laboratories, she argued, have raced ahead without putting the proper tools in place for users to monitor what's going on. "I think it's going much faster than anyone ever expected," she said.
The current concern is that AI technology for use by corporations is being released without the tools users need to determine whether the technology is generating accurate or inaccurate information.
"Right now, we're talking about all the good guys who have all this innovative capability, but the bad guys have it, too," Litan said. "So, we have to have these watermarking systems and know what's real and what's synthetic. And we can't rely on detection; we have to have authentication of content. Otherwise, misinformation is going to spread like wildfire."
For example, Microsoft this week launched Security Copilot, which is based on OpenAI's GPT-4 large language model. The tool is an AI chatbot for cybersecurity experts to help them quickly detect and respond to threats and better understand the overall threat landscape.
"The problem is, you as a user have to go in and identify any mistakes it makes," Litan said. "That's unacceptable. They should have some kind of scoring system that says this output is likely to be 95% true, and so it has a 5% chance of error, and this one has a 10% chance of error. They're not giving you any insight into the performance to see if it's something you can trust or not."
A bigger concern in the not-so-distant future is that GPT-4 creator OpenAI will release an AGI-capable version. At that point, it may be too late to rein in the technology.
One possible solution, Litan suggested, is to release two models for every generative AI tool: one for generating answers, the other for checking the first for accuracy.
"That could do a really good job at ensuring whether a model is putting out something you can trust," she said. "You can't expect a human being to go through all this content and decide what's true or not, but if you give them other models that are checking, that would allow users to monitor the performance."
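Litan's two-model setup can be sketched as plain control flow. The `generate` and `verify` callables below are hypothetical stand-ins, not any vendor's actual API; the point is only how a checking model's confidence score could accompany every answer so users can decide what to trust.

```python
def answer_with_check(question, generate, verify, threshold=0.9):
    """Pair a generating model with a checking model: return the answer
    together with the checker's confidence score and a trust flag."""
    answer = generate(question)
    confidence = verify(question, answer)  # 0.0-1.0 score from the second model
    return {
        "answer": answer,
        "confidence": confidence,
        "trusted": confidence >= threshold,
    }

# Stand-in "models" for illustration; a real system would call two LLMs here.
mock_generate = lambda q: "Paris" if "France" in q else "unknown"
mock_verify = lambda q, a: 0.95 if a == "Paris" else 0.4

result = answer_with_check("What is the capital of France?", mock_generate, mock_verify)
print(result)  # the answer arrives with an explicit confidence attached
```

The design choice is that the checker is a separate model with its own view of the question, so a single model's blind spots are less likely to pass unnoticed.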
In 2022, Time reported that OpenAI had outsourced services to low-wage workers in Kenya to determine whether its GPT LLM was producing safe information. The workers hired by Sama, a San Francisco-based firm, were reportedly paid $2 per hour and required to sift through GPT app responses that were prone to blurting out violent, sexist and even racist remarks.
"And this is how you're protecting us? Paying people $2 an hour, people who are getting sick? It's wholly inefficient and it's wholly immoral," Litan said.
AI developers need to work with policy makers, and these should at a minimum include new and capable regulatory authorities, Litan continued. "I don't know if we'll ever get there, but the regulators can't keep up with this, and that was predicted years ago. We need to come up with a new type of authority."
Shubham Mishra, co-founder and global CEO of AI start-up Pixis, believes that while progress in his field cannot, and must not, stop, the call for a pause in AI development is warranted. Generative AI, he said, does have the power to confuse the masses by pumping out propaganda or "difficult to distinguish" information into the public domain.
"What we can do is plan for this progress. This can be possible only if all of us mutually agree to pause this race and concentrate the same energy and efforts on building guidelines and protocols for the safe development of larger AI models," Mishra said in an email to Computerworld.
"In this particular case, the call is not for a general ban on AI development but a temporary pause on building larger, unpredictable models that compete with human intelligence," he continued. "The mind-boggling rates at which new powerful AI innovations and models are being developed definitely call for tech leaders and others to come together to build safety measures and protocols."
A freeze in training artificial intelligence won’t help, says professor – Tech Xplore
The development of artificial intelligence (AI) is out of control, in the opinion of approximately 3,000 signatories of an open letter published by business leaders and scientists.
The signatories call for a temporary halt to training especially high-performance AI systems. Prof. Urs Gasser, expert on the governance of digital technologies, examines the important questions from which the letter deflects attention, talks about why an "AI technical inspection agency" would make good sense and looks at how far the EU has come compared to the U.S. in terms of regulation.
Artificial intelligence systems capable of competing with human intelligence may entail grave risks for society and humanity, say the authors of the open letter. Therefore, they continue, for at least six months no further development should be conducted on technologies which are more powerful than the recently introduced GPT-4, successor to the language model ChatGPT.
The authors call for the introduction of safety rules in collaboration with independent experts. If AI laboratories fail to implement a development pause voluntarily, governments should legally mandate the pause, say the signatories.
Unfortunately the open letter absorbs a lot of attention which would be better devoted to other questions in the AI debate. It is correct to say that today probably nobody knows how to train extremely powerful AI systems in such a way that they will always be reliable, helpful, honest and harmless.
Nonetheless, a pause in AI training will not help achieve this, primarily because it would be impossible to assert such a moratorium on a global level, and because it would not be possible to implement the regulations called for within a period of only six months. I'm convinced that what's necessary is a stepwise further development of technologies in parallel to the application and adaptation of control mechanisms.
First of all, the open letter once again summons up the specter of what is referred to as an artificial general intelligence. That deflects attention from a balanced discussion of the risks and opportunities represented by the kind of technologies currently entering the market. Second, the paper refers to future successor models of GPT-4.
This draws attention away from the fact that GPT-4's predecessor, ChatGPT, already presents us with essential challenges that we urgently need to address, for example the misinformation and prejudices that the machines replicate and scale. And third, the spectacular demands made in the letter distract us from the fact that we already have instruments we could use to regulate the development and use of AI.
Recent years have seen the intensive development of ethical principles which should guide the development and application of AI. These have been supplemented in important areas by technical standards and best practices. Specifically, the OECD Principles on Artificial Intelligence link ethical principles with more than 400 concrete tools.
And the US National Institute of Standards and Technology (NIST) has issued a 70-page guideline on how distortions in AI systems can be detected and handled. In the area of security in major AI models, we're seeing new methods like constitutional AI, in which an AI system "learns" principles of good conduct from humans and can then use the results to monitor another AI application. Substantial progress has been made in terms of security, transparency and data protection and there are even specialized inspection companies.
Now the essential question is whether or not to use such instruments, and if so how. Returning to the example of ChatGPT: Will the chat logs of the users be included in the model for iterative training? Are plug-ins allowed which can record user interactions, contacts and other personal data? The interim ban and the initiation of an investigation of the developers of ChatGPT by the Italian data protection authorities are signs that very much is still unclear here.
The history of technology has taught us that it is difficult to predict the "good" or "bad" use of technologies, even that technologies often entail both aspects and negative impacts can often be unintentional. Instead of fixating on a certain point in a forecast, we have to do two things: First, we have to ask ourselves which applications we as a society do not want, even if they were possible. We need clear red lines and prohibitions.
Here I'm thinking of autonomous weapons systems as an example. Second, we need comprehensive risk management, spanning the range from development all the way to use. The demands placed here increase as the magnitude of the potential risks to people and the environment posed by a given application grow. European legislature is correct in taking this approach.
This kind of independent inspection is a very important instrument, especially when it comes to applications that can have a considerable impact on human beings. And by the way, this is not a new idea: we already see inspection procedures and instances like these at work in the wide variety of aspects of life, ranging from automobile inspections to general technical equipment inspections and financial auditing.
However, the challenge is disproportionately greater with certain AI methods and applications, because certain systems develop themselves as they are used, i.e., they are dynamic in nature. And it's also important to see that experts alone won't be able to make a good assessment of all societal impacts. We also need innovative mechanisms that, for example, include disadvantaged people and underrepresented groups in the discussion on the consequences of AI. This is no easy job, and one I wish were attracting more attention.
We do indeed need clear legal rules for artificial intelligence. At the EU level, an act on AI is currently being finalized which is intended to ensure that AI technologies are safe and comply with fundamental rights. The draft bill provides for the classification of AI technologies according to the threat they pose to these principles, with the possible consequence of prohibition or transparency obligations.
For example, plans include prohibiting evaluation of private individuals in terms of their social behavior, as we are currently seeing in China. In the U.S. the political process in this field is blocked in Congress. It would be helpful if the prominent figures who wrote the letter would put pressure on US federal legislators to take action instead of calling for a temporary discontinuation of technological development.
Artificial Intelligence Becomes a Business Tool – CBIA
The growth of artificial intelligence is impossible to ignore, and more businesses are making it part of their operations.
In a recent Marcum LLP-Hofstra University survey, 26% of CEOs responded that their companies have used AI tools.
CEOs said they use AI for everything from automation, to predictive analytics, financial analysis, supply chain management and logistics, risk mitigation, and optimizing customer service.
Another 47% of CEOs said they are exploring how AI tools can be used in their operations.
Only 10% said they don't envision utilizing AI tools, and 16% were uncertain whether it would be relevant for their business.
The survey, conducted in February, polled 265 CEOs from companies with revenues ranging from $5 million to more than $1 billion.
58% of CEOs surveyed said that expectations and demands from their customers and clients increased in the last year.
CEOs said those expectations include more personalized service, immediate response times, more technology, and refusing price increases.
"Now that the pandemic economy is behind us and companies have resumed full operation, CEOs are challenged to meet higher expectations from customers," said Jeffrey Weiner, Marcum's chair and CEO.
"This certainly includes figuring out how to deploy new tools such as artificial intelligence to effectively position their companies for the future."
When asked about business planning in the next 12 months, economic concerns (53%), availability of talent (48%), and rising material/operational costs (43%) were the top three most important influences for CEOs.
There is some growing optimism among CEOs, with only 33% responding that they are very concerned that the economy will experience a recession in the coming year.
That number is down from 47% in Marcums November 2022 survey.
54% of CEOs said they were somewhat concerned about a recession, compared with 43% in November.
84% said they had a positive overall outlook on the business environment.
"I think the uptick in CEO optimism is a reflection not only of their feelings about the economy," said Janet Lenaghan, dean of Hofstra University's Zarb School of Business, "but their confidence in their own ability to be flexible and meet the moment, something they had to learn to get through COVID-19."
The survey also asked CEOs about leadership succession, calling it an essential process for ensuring business continuity, retaining talent, and developing future leaders.
Most CEOs (79%) said their companies have a succession plan in place, but only 45% were very confident in that plan.
41% of CEOs at companies without a succession plan said it wasn't a priority for their companies.
The Marcum-Hofstra survey is conducted periodically by Hofstra MBA students as a way to gauge mid-market CEOs' outlook and priorities for the next 12 months.
Why Does Artificial Intelligence Need Regulation? – Analytics Insight
The following examines the need for regulation of artificial intelligence.
This is the world that Artificial Intelligence (AI) and tens of millions of video cameras installed in both public and private areas are making possible. AI-amplified surveillance can not only identify you and your friends, but it can also track you using other biometric characteristics, like your gait, and even find clues about how you feel.
Although advancements in Artificial Intelligence (AI) promise to transform sectors like health care, transportation, logistics, energy production, environmental monitoring, and the media, serious concerns remain regarding how to prevent state actors from abusing these potent tools. Without enforceable rules, AI could contribute to human rights violations; well-crafted regulation of artificial intelligence would help protect lives.
"Nowhere to hide: Building safe cities with technology enablers and AI," a report by the Chinese infotech company Huawei, expressly celebrates this vision of pervasive government surveillance. Marketing AI as its Safe City solution, the company boasts that by analyzing people's behavior in video footage and drawing on other government data such as identity, economic status, and circle of acquaintances, AI could rapidly detect signs of crimes and predict potential criminal activity.
To keep an eye on what its citizens are doing in public places, more than 500 million surveillance cameras have already been installed in China. A lot of them are facial recognition cameras that automatically identify drivers and pedestrians and compare them to national blacklists and photo and license tag ID registries. This kind of surveillance finds political demonstrations as well as crimes. People who took part in COVID-19 lockdown protests, for instance, were recently detained and questioned by Chinese police using this kind of data.
There are currently about 85 million video cameras in both public and private areas in the United States. An ordinance that allows police to request access to private live feeds was recently passed in San Francisco. American retail stores, sports arenas, and airports are increasingly employing real-time facial recognition technology.
Woodrow Hartzog, a professor at Boston University School of Law, and Evan Selinger, a philosopher at the Rochester Institute of Technology, contend that facial recognition is the ideal instrument for oppression, calling it "the most uniquely dangerous surveillance mechanism ever invented." Real-time facial recognition technologies would transform our faces into permanent identification cards, displayed to the police. Advances in artificial intelligence, widespread video and photo surveillance, falling costs of storing big data sets in the cloud, and affordable access to sophisticated data analytics systems make possible the use of algorithms to identify people, they point out, a capability perfectly suited to authoritarian and repressive ends.
The 2019 Albania Declaration, which calls for a halt to the use of facial recognition for mass surveillance, has been signed by more than 110 non-governmental organizations. The Electronic Frontier Foundation, the Electronic Privacy Information Center, Fight for the Future, and Restore the Fourth are among the organizations from the United States that have signed a petition urging countries to suspend the further deployment of facial recognition technology for mass surveillance.
In 2021, the Office of the United Nations High Commissioner for Human Rights issued a report noting that the widespread use by states and organizations of artificial intelligence, including profiling, automated decision-making, and machine learning technologies, affects the enjoyment of the right to privacy and related rights. Until it can be ensured that their use will not violate human rights, the report urged governments to impose moratoriums on the use of potentially high-risk technology, such as remote real-time facial recognition.
The European Digital Rights network published an analysis of the proposed AI Act, the European Union's regulation covering remote biometric identification. Being tracked in a public space by a facial recognition system (or another biometric system) is, at a very basic level, incompatible with the essence of informed consent, the report points out: you are required to consent to biometric processing if you wish or need to enter that public space. That is coercive and incompatible with the aims of the EU's human rights framework (particularly the rights to privacy and data protection, freedom of speech, freedom of assembly, and freedom from discrimination).
We run the risk of accidentally sliding into turnkey authoritarianism if we don't outlaw government agencies' use of AI-enabled real-time facial recognition surveillance.
There are extreme scenarios in which this moment is the last chance to forestall catastrophe. Even setting those aside, now is the time to regulate AI within the realm of reason.
C3.ai Stock: 3 Reasons to Avoid This Hot Artificial Intelligence … – The Motley Fool
Everyone is talking about artificial intelligence (AI) these days. Thanks to the breakthrough of ChatGPT, tech CEOs and pundits alike are convinced that artificial intelligence, in particular generative AI, will be the next major computing platform.
Unfortunately for investors, pure-play AI stocks are hard to come by on the stock market, making it hard to know how to capitalize on this opportunity. That's a major reason why C3.ai (NYSE: AI) has attracted so much attention on Wall Street. It's one of the few AI stocks available to investors, with a software-as-a-service platform that delivers enterprise AI solutions for customers.
As a result of that surge of interest in artificial intelligence, C3.ai stock nearly tripled through the first three months of the year. Before you jump on the bandwagon with the high-flying AI stock, you should be aware of the drawbacks it's facing. Here are three reasons to avoid the stock at the moment.
Image source: Getty Images.
The hype around AI and the attention on C3.ai, in particular, might make you think that this is a fast-growing software company, but its recent results show that's anything but the case.
C3.ai reported a decline in revenue in the fiscal third quarter, its most recent period, showing it's facing the same kind of challenges as most of the tech sector. In Q3, revenue fell 4.4% year over year to $66.7 million. This was partly due to the company's decision to change its business model from subscription-based to consumption-based, which has created some noise in the results.
Revenue is expected to decline slightly in the current quarter as well. But management said revenue growth would accelerate in fiscal 2024 due to drivers like the launch of its generative AI platform, increased interest in the consumption-based model, and new and expanded partnerships with businesses like Alphabet's Google Cloud.
C3.ai is also losing money. It's on track for an adjusted operating loss of $69 million to $73 million this year, but management expects the company to be cash flow positive and profitable on an adjusted basis by the end of 2024.
Those are big promises from a company that has struggled with execution, including the business model issue. And given the macroeconomic climate, investors shouldn't assume it will hit that guidance.
Most software companies tend to receive a range of interest across multiple industries, but C3.ai has struggled with diversifying its revenue sources.
In fiscal 2022, 31% of its revenue came from Baker Hughes, the oilfield services company with which it has a strategic partnership, and its top three customers last year accounted for 57% of accounts receivable, a proxy for revenue.
In its most recent quarter, 72% of its bookings came from the oil and gas sector. That makes it particularly vulnerable to a crash in oil prices, which is likely in a global recession as oil prices are highly cyclical.
The company has a "lighthouse" strategy of tapping into new industries by landing a flagship customer in that sector and then expanding to other customers in that industry from there. But while C3.ai also serves industries like banking, utilities, defense, and manufacturing, that revenue hasn't been sufficient to diversify the business away from oil and gas.
The company finished its most recent quarter with 236 customers, though it's hopeful the consumption-based model can bring in more smaller accounts.
The stock's tripling in the first quarter was based almost entirely on hype around artificial intelligence rather than any improvement in the fundamentals. Shares also got a boost at the end of January after C3.ai announced its new generative AI product suite, though it doesn't appear to be generally available yet.
However, after the current run-up in the price, the stock trades at a price-to-sales ratio of 15. Through the first three quarters of the fiscal year, the company lost $217 million on $174 million in revenue, indicating it's a long way from being profitable on a generally accepted accounting principles (GAAP) basis.
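For readers unfamiliar with the metric, price-to-sales is simple arithmetic: market capitalization divided by trailing annual revenue. The figures below are illustrative of the scale involved, not C3.ai's exact numbers:

```python
def price_to_sales(market_cap, trailing_revenue):
    """Price-to-sales ratio: what investors pay per dollar of annual revenue."""
    return market_cap / trailing_revenue

# Illustrative numbers only, roughly the scale described in the article.
market_cap = 3.5e9          # ~$3.5 billion market capitalization
trailing_revenue = 233e6    # ~$233 million trailing-twelve-month revenue

print(round(price_to_sales(market_cap, trailing_revenue), 1))  # -> 15.0
```

For comparison, mature software companies often trade at mid-single-digit multiples, which is why a ratio near 15 for a shrinking, unprofitable business signals that hype, not fundamentals, is setting the price.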
Given those financials, investors seem to be bidding the stock higher on nothing more than the company's growth promises and vague notions about the transformative potential of AI.
At this point, a bet on C3.ai seems like more of a lottery ticket on artificial intelligence rather than a rational investment in a company whose future cash flows justify its current price.
After the collapse in tech stocks over the last year, investors should know better.
Suzanne Frey, an executive at Alphabet, is a member of The Motley Fool's board of directors. Jeremy Bowman has no position in any of the stocks mentioned. The Motley Fool has positions in and recommends Alphabet. The Motley Fool recommends C3.ai. The Motley Fool has a disclosure policy.
Lincoln musician says artificial intelligence will not replace artists – KLKN
LINCOLN, Neb. (KLKN) Artificial intelligence can create images, write essays and collect data.
But will it ever replace musicians?
Matt Waite, a professor at the University of Nebraska-Lincoln, said AI predicts what's coming next.
"With language models like ChatGPT, it's looking at enormous amounts of text," he said. "It's looking at how words are put together, and then, essentially, it's making a prediction."
Waite said several companies pull data from across the web to assist AI in creating that prediction.
But what happens when an artist's style is imitated by AI?
Newly launched campaigns, such as the Human Artistry Campaign, have already banded together to address challenges presented by AI.
Local musician Darren Keen thinks AI-generated content will not be a replacement for artists.
"I think that eventually, these things will parse themselves out to be more like tools than full-on replacements for musicians and creative people," he said.
At this time, Waite says its unclear how AI will impact the world of music, media and education.
"We're going to be making adjustments for years and years," he said. "This is a significant moment in society, where we're going to remember the time before AI and the time after AI."
Former Google CEO Eric Schmidt is worried about artificial intelligence. Here's why – Mint
Former Google CEO and Chairman Eric Schmidt has warned about the dangers of new-age artificial intelligence technology. Speaking to ABC This Week, Schmidt said there is a need to 'make sure this stuff (Large Language Models) doesn't harm but just help'.
On being asked to explain the perils and promise of AI, Schmidt replied, "Well, imagine a world where you have an AI doctor that makes everyone healthier in the whole world. Imagine a world where you have an AI tutor that increases the educational capabilities of everyone in every language. These are remarkable, and these technologies, which are known as Large Language Models, are clearly going to do this."
However, the former Google CEO was quick to point out the threats that humanity faces from these language models.
"We face extraordinary new challenges from these things, whether it's deepfakes or people falling in love with their AI tutor," he added.
Elaborating on what worries him, Schmidt said he is concerned about the use of LLMs in biology, in cyberattacks and in manipulating the way politics works.
Schmidt also pointed out the speed at which these new artificial intelligence technologies are changing the world, noting that it took Gmail five years to reach 100 million daily active users, while ChatGPT reached the same milestone in about 2 months.
This is not the first time Schmidt has raised such concerns. During an earlier interaction with author and journalist Walter Isaacson, he noted that large language models could be used for biological warfare and change the dynamics of war.
Unrestricted Artificial Intelligence Growth Might Lead to Extinction of … – Transcontinental Times
UNITED STATES: The unchecked and rapid development of artificial intelligence (AI) is highly irresponsible and could result in a superhumanly intelligent AI wiping out all sentient life on Earth.
This is the warning issued by Machine Intelligence Research Institute decision theorist Eliezer Yudkowsky, who recently penned an alarming article for Time Magazine about the potentially catastrophic consequences of the current AI race among major tech players.
Yudkowsky is a prominent figure in the field of AI and is known for popularising the concept of friendly AI. However, his current outlook on the future of artificial intelligence is dystopian, echoing the worlds of science-fiction films.
In a recent article, Yudkowsky highlighted the need to curb the development of artificial intelligence and ensure that it does not exceed human intelligence. He also emphasised the importance of ensuring that AI systems care for biological life and do not pose a threat to it.
The Centre for Artificial Intelligence and Digital Policy also recently issued a letter urging regulators to halt further commercial deployment of new generations of the GPT language model created by OpenAI.
The letter carried 1,000 signatures from technology experts and prominent figures, including Elon Musk. It called for a six-month pause on GPT-4's commercial activities and plans to ask the United States Federal Trade Commission (FTC) to investigate whether the commercial release of GPT-4 violated US and global regulations.
Yudkowsky applauded the letter's request for a moratorium and expressed respect for individuals who had signed it, but he thinks it downplayed the gravity of the problem.
He emphasised that the key issue is not human-competitive intelligence but what happens after AI surpasses human intelligence.
Yudkowsky pointed out that humanity is not prepared for AIs capabilities and is not on course to be prepared for them within any reasonable time window.
Progress in AI capabilities is far ahead of progress in AI alignment or even understanding what is going on inside these systems.
He cautioned that if we continue along this path, the most likely outcome of building a superhumanly intelligent AI under conditions even somewhat similar to the current ones is that virtually everyone on Earth will perish.
Surviving the arrival of such an AI, he argues, would require precision, preparation, fresh scientific understanding, and avoiding AI systems made up of huge, incomprehensible arrays of fractional numbers.
According to Yudkowsky, AI could potentially be built to care for humans or sentient life in general, but it is currently not understood how this could be achieved.
Without this caring factor, AI would not love or hate humans but would rather see them as consisting of atoms that could be used for something else.
The likely result of humanity facing down a superhuman intelligence would be a total loss.
The concerns raised by Yudkowsky and the Centre for Artificial Intelligence and Digital Policy are significant and should be taken seriously.
While AI has the potential to bring about many benefits, it is essential to ensure that its development is carefully monitored to avoid catastrophic consequences.
Can an Artificial Intelligence Model Be Built to Closely Mimic the … – NYU Langone Health
Collaboration and innovation are at the heart of research endeavors at NYU Langone. Biyu J. He, PhD, and Eric K. Oermann, MD, are exemplifying these qualities, merging their expertise to build an artificial intelligence (AI) model that imitates the human brain.
Richard Feynman, a Nobel Prize-winning theoretical physicist, once said, "What I can't make, I don't understand." The quote is one that aptly captures the spirit of the collaborative effort by Dr. He and Dr. Oermann to build an AI model that more closely mimics the human brain. They hope to use that computer algorithm as a more nimble and practical proxy for exploring the brain and plumbing the depths of its mysteries. "Building a computational model can help us understand what's going on in the brain in a more quantitative, detailed way than traditional neuroscience allows us to do," says Dr. He.
To fund their research, Dr. He and Dr. Oermann have been awarded $1.2 million from the W.M. Keck Foundation, a nonprofit that supports pioneering discoveries in science, engineering, and medical research.
The researchers will start by mapping the brains of volunteers as they complete a very specific, simple task, a process that neuroscientists call one-shot learning. For example, imagine seeing an abstract black-and-white drawing, and then seeing a photo of a recognizable object, say, a helicopter, that loosely resembles it. Once your brain recognizes the helicopter in the photo, it will forever see it in the abstract image as well. "Once you have that photograph in your head, it's imprinted in your mind and forever alters the way you process the abstract image," explains Dr. He.
The pair will use neuroimaging and electrodes to map the brain activity involved in one-shot learning, and then leverage advanced AI techniques to create a computer model. "One-shot learning is something the human brain does well, but algorithms do not," explains Dr. Oermann. "We plan to use our analysis of the process to unpack the differences and make AI models more brain-like."
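As a loose illustration of what one-shot behaviour means computationally (a toy sketch of my own, not the team's actual model), a nearest-neighbour rule can "recognise" a new category after seeing just a single stored example of it:

```python
import math

# Toy one-shot classification: each known category is represented by a
# single example vector (invented numbers standing in for a learned
# embedding). A new item gets the label of whichever lone example it
# lies closest to, i.e. it was "learned" from one instance.
examples = {
    "helicopter": (0.9, 0.8),
    "tree": (0.1, 0.2),
}

def classify(item):
    """Label `item` by its nearest stored example (Euclidean distance)."""
    return min(examples, key=lambda label: math.dist(examples[label], item))

print(classify((0.85, 0.75)))  # nearest to the single "helicopter" example
```

The sketch trivialises the hard part, which is producing representations in which one example is enough; that is what the researchers hope to learn from the brain.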
Dr. He and Dr. Oermann are uniquely well suited to this venture. She is a neuroscientist who researches how the brain creates conscious awareness, while he is a neurosurgeon and machine learning expert. The seeds of their collaboration were planted in 2019 when Dr. He received an email from John G. Golfinos, MD, chair of the Department of Neurosurgery, announcing that Dr. Oermann was a candidate for a faculty role. In addition to his training in neurosurgery, Dr. Oermann had done a postdoctoral research fellowship in machine learning at the life sciences arm of Alphabet, Google's parent company. "I looked at his website and thought, 'It would be amazing if we recruited him,'" recalls Dr. He.
Likewise, Dr. Oermann was aware of Dr. He's research before his arrival at NYU Langone. "Biyu asks some of the biggest questions about how our brains make us human," he says. "As an engineer and a neurosurgeon, I was used to focusing on very specific problems. But understanding how the brain works is what drew me to AI."
Once here, Dr. Oermann wasted no time in reaching out to Dr. He. "We immediately sensed there was a long-term research agenda that could benefit from combining our expertise," he says. "We see this as the first step in a really ambitious collaborative process."
Vinitaly pits classic art against artificial intelligence – The Drinks Business
This year's Vinitaly has shown that human emotion can still trump tech innovation in the wine world. Louis Thomas reports from the fair.
The Veronafiere has been a home away from home this week for two artworks from Florence's Uffizi gallery, both depicting the Roman wine god Bacchus: one by Guido Reni (c. 1620), and the other by Michelangelo Merisi da Caravaggio (c. 1598).
The inclusion of fine art at Vinitaly has not gone uncriticised, with some academics suggesting that it is unacceptable to have such works in a commercial, rather than an intellectual, setting.
However, there is something praiseworthy about an event that puts cultural heritage at the centre, and given the queue to view the paintings, there is clearly an appetite for art as well as wine.
Shifting from the Baroque to something rather more futuristic, a welcome dinner held by the Comitato Grandi Cru d'Italia at the Teatro Ristori was hailed by committee president Valentina Argiolas as a celebration of a renaissance after difficult years.
"Renaissance" was an interesting word to choose, as the evening was themed around whether artificial intelligence (AI) could end up displacing wine professionals and mark the death of wine writing and criticism as we know it.
The topic, which has become increasingly dominant in the news, was introduced by having a recording of an AI simulation compère the event.
The organisers noted that they were fortunate to have prepared the answers from "Mr. AI" earlier last week, before Italy became the first Western nation to ban chatbot sensation ChatGPT over privacy concerns.
A video of Monica Larner, Italy reviewer for Robert Parker Wine Advocate, was shown in which both she and Mr. AI offered advice in a duel of expertise.
While Mr. AI's answers to questions such as "What was the 2022 vintage like in Italy?" sounded accurate, if clearly an amalgamation of different resources, Larner's, crucially, had the colour of experience.
Gabriele Gorelli MW then took to the stage to share his thoughts, remarking that while Skynet from The Terminator films is a fantasy, there is still an element of risk.
As for whether it could have been of assistance during his Master of Wine examinations, as ChatGPT recently proved to be for the Master Sommelier theory papers, Gorelli said: "I would have been glad to be helped by a reliable AI... But [in the MW course] we're not tested on knowing things, it's more holistic: why is it happening, not what is happening."
Appearing over video call, New York-based wine critic Antonio Galloni remarked: "Ready or not, AI's already here." Possibly not a shock to an audience that was by that point familiar with the unsettling robotic tones of Mr. AI.
But he then reassured the audience of wine trade and media members that there was no way AI could become a substitute for wine writers, or winemakers: "AI may be brilliant if you want to make orange juice for a supermarket... but there are no shortcuts to making great wine."
Precisely how AI can taste wine, surely a requirement for winemaking, is a more complex issue.
It can predict how a wine might turn out based on weather and cellar factors, or be used in conjunction with chemical analysis (as was the case in a recent video from Konstantin Baum MW).
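The kind of prediction described, relating weather or cellar data to an eventual wine attribute, can be sketched as a toy linear fit. All of the numbers below are invented for illustration; a real system would use far richer data and models:

```python
# Invented seasonal data: (average growing-season temperature in C,
# an eventual quality rating on some arbitrary scale).
seasons = [(16.0, 3.1), (17.5, 3.4), (18.2, 3.6), (19.0, 3.9)]

# Ordinary least-squares fit of rating against temperature.
n = len(seasons)
mean_x = sum(t for t, _ in seasons) / n
mean_y = sum(r for _, r in seasons) / n
slope = (sum((t - mean_x) * (r - mean_y) for t, r in seasons)
         / sum((t - mean_x) ** 2 for t, _ in seasons))
intercept = mean_y - slope * mean_x

def predict_rating(avg_temp):
    """Extrapolate a rating for a new season's average temperature."""
    return slope * avg_temp + intercept

print(round(predict_rating(18.5), 2))
```

The point of the sketch is the limitation the article goes on to describe: the model only interpolates patterns already present in its data, which is exactly the "sterile smoothie of information" problem.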
What AI offers is a sterile smoothie of information blended together.
Ask an AI to write about wine, and it can competently regurgitate what is already on the internet, but it cannot offer insight from lived experience.
Ask it to create an image of Bacchus in the style of Caravaggio, and, though it may be less temperamental than the artist himself, it will pale in comparison to Caravaggio every single time.
Both wine writing and art come from a context that AI cannot replicate. Simply put: it lacks that human touch.
There's nothing to worry about, at least for now.