Category Archives: Artificial Intelligence
The 3 Best Growth Stocks to Buy in the Artificial Intelligence Sector – InvestorPlace
There is a lot of buzz today around artificial intelligence, or AI, and how it is reshaping our lives. Built through machine learning, AI involves training a system on ample data so it can make inferences about new data. AI has been around for years, but since the start of 2023 it seems to be everywhere. With the need for advanced technology growing, AI has become a main driver across industries, from robotics to Big Data and the Internet of Things. It has started to transform the software development space and is set to be huge this year. However, the market is still in the early stages of AI adoption, and smart investors know that AI growth stocks are where the potential lies.
There are many artificial intelligence stocks on the market, but only a few have the potential to make it big. If you want to tap into the future of AI, consider pure-play AI stocks for your portfolio. Growing interest and a surge of investment in AI make it an ideal time to take a position. Let's take a look at the best AI growth stocks to add to your portfolio:
One of the first companies that comes to mind when we think of the best AI growth stocks is Microsoft (NASDAQ:MSFT). Already a solid player in the industry, Microsoft was recently in the news for increasing its stake in OpenAI, the creator of ChatGPT. The revolutionary tool has shown how far AI can go and how capable it is of generating images, text, ideas, and sounds. This year, Microsoft made a $10 billion investment, a sizeable increase from its previous $1 billion investment in 2019.
Microsoft recently added AI-generated stories to its Bing search engine to give users a better insight into AI. MSFT stock is trading at $280, up 18% in the past six months, and is inching closer to its 52-week high of $315, making it one of the best artificial intelligence stocks to own today. AI justifies its higher valuation, and I believe the company's investments will pay off in the near term.
Microsoft CEO Satya Nadella sees AI as the next big computing platform, and this investment is just step one in the company's AI transition. Apart from this investment, the company is harnessing the power of AI in multiple ways, including clinical documentation and healthcare.
This month, Microsoft also introduced Dynamics 365 Copilot, a tool designed to assist with many businesses' day-to-day tasks, including marketing, sales, and customer service. The technology is still being tested but, if successful, could transform the current automated chat experience. Microsoft is a proven performer, no matter the state of the tech industry, making it a stock to buy and hold onto.
Another artificial intelligence stock to watch is Nvidia (NASDAQ:NVDA). A leader in the graphics chip industry, it is making the most of the AI boom and could become one of the biggest players in the world. Its data center segment has shown a steady rise in its share of the company's total revenue and managed to top the gaming segment in revenue last year. Despite the drop in tech stocks and the market turmoil, NVDA stock has stood strong, with AI one of the driving forces behind its growth.
The stock is currently trading at $265 and is up more than 100% in the past six months. Nvidia recently launched a set of inference platforms designed for generative AI, and its chips are already popular for handling large workloads. Because its chips are required to run AI applications, Nvidia should remain in demand and relevant in the upcoming years.
Nvidia also intends to make self-driving car processors another revenue stream. Since cars with self-driving capabilities gather ample amounts of data from cameras and sensors in real time, AI is then used to make complex decisions, something Nvidia intends to start contributing to in the next few years. NVDA stock does not come cheap, currently trading at 24.8 times sales, but the growth potential is massive.
Alphabet (NASDAQ:GOOG, NASDAQ:GOOGL), the parent company of Google, is another top artificial intelligence stock to buy for long-term growth. It recently acquired Alter, an AI avatar startup that allows creators and brands to express their virtual identities, for $100 million. To get an edge in the AI sector, companies need to make ongoing investments, and Alphabet has the liquidity to do so. It also has the experience and resources to make a mark in the AI industry.
The company's recent history is not without setbacks. Between Microsoft's partnership with OpenAI and its ChatGPT tool, and the relatively unsuccessful launch of Google's own AI-equipped service, Bard, GOOG stock has taken hits in recent months. However, the company has many years of experience in deep learning, which should help with a smooth transition into AI.
GOOGL stock is currently on sale, which is another reason to buy. The stock is trading at $105 today, down 25% in the past year. Considering its history and growth potential, it is cheap for the tech sector.
On the date of publication, Vandita Jadeja did not have (either directly or indirectly) any positions in the securities mentioned in this article. The opinions expressed in this article are those of the writer, subject to the InvestorPlace.com Publishing Guidelines.
Vandita Jadeja is a CPA and a freelance financial copywriter who loves to read and write about stocks. She believes in buying and holding for long term gains. Her knowledge of words and numbers helps her write clear stock analysis.
Artificial intelligence shows promise in mitigating radiologist bias – Radiology Business
Artificial intelligence may serve as a useful tool for mitigating radiologist bias when interpreting images, according to a new study published in Scientific Reports [1].
CT has value in helping physicians assess patients suffering from COVID-19 (though some have criticized this practice). "Quantification of pneumonia, in particular, may help to predict treatment course and outcomes, but it is heavily reliant on a radiologist's subjective perceptions," Romanian researchers wrote March 25.
A survey of 40 radiologists, along with a retrospective analysis of CT data from 109 patients treated at two hospitals, showed that members of the specialty often overestimate lung involvement. To address this, scientists conducted a randomized controlled trial using AI-based clinical decision support.
This was found to reduce the absolute overestimation error from 9.5% ± 6.6 down to 1% ± 5.2, the investigation found.
"These results indicate a human perception bias in radiology that has clinically meaningful effects on the quantitative analysis of COVID-19 on CT," Bogdan A. Bercean, with Politehnica University of Timișoara in Romania, and colleagues advised. "The objectivity of AI was shown to be a valuable complement in mitigating the radiologists' subjectivity, reducing the overestimation tenfold."
Bercean et al. made use of a commercial medical device from Rayscape, a company based in Romania that was co-founded by one of the study authors. The AI analysis offers radiologists an automatic suggestion of the total lung involvement percentage, along with colored segmentation overlays. These are meant to help physicians visually check the validity of the suggested percentage, while also allowing for easier mental adjustments where needed. Researchers randomly blinded radiologists by turning the AI tool off 50% of the time to assess its effectiveness.
They found that the AI assistance reduced the average overestimation difference, with further testing confirming the findings' statistical significance. Bercean and co-authors attributed the success of the AI in part to its widespread adoption and integration into the hospitals' PACS. AI clinical decision support was particularly popular among younger radiologists, who also demonstrated the greatest bias susceptibility.
"Our study demonstrated that quantification of the involvement of the lungs in COVID-19 on CT scans is a perception-sensitive process prone to cognitive overestimation bias," the authors concluded. "This is of key importance given the wide use of the marker, although it was shown to be controllable with an AI decision support system. This reinforces the benefits of human-AI synergy and strengthens the need to further study the adaptability of radiology to rapid technological and methodological changes."
MHP deploys artificial intelligence to streamline poultry operations – Poultry World
MHP said it took five years to develop its virtual assistant, which the company also calls a virtual zootechnician; its official name is Smart Technologist Assistant. "It's like Siri, but in the poultry world," the company explained. A digital twin based on artificial intelligence helps employees in their work and warns about the risks of out-of-hours situations in poultry houses.
So far, the IT solution has proved its value in helping poultry farms improve the uniformity of chickens in the flock and the accuracy of weight prediction, and reduce the mortality rate. However, as the assistant keeps learning, MHP managers are confident there is more to come.
"We connected deep learning algorithms and artificial intelligence and began to separate out patterns, and saw how poultry houses operate in general," commented Nataliya Kondratenko, director of the MHP global IT expertise centre.
"We were interested in the technical indicators: whether our equipment runs correctly, how well the staff work, how to prevent a situation that can negatively affect the [production] indicators and quality of products," she added.
One surprising thing the company discovered was that production performance depends not only on basic parameters such as climate control and feed quality. There is also a concept of chicken happiness that should be taken into account, the company said. The virtual zootechnician now has the crucial task of measuring the mood in poultry houses.
The system tries to identify what factors can cause concerns among poultry flocks in order to eliminate them, Kondratenko said. The key advantage of the new system, which primarily gathers data from video surveillance, is that it notices even small details that not every employee, even the most experienced, would typically pay attention to.
For example, the artificial intelligence solution managed to put together a correlation model linking changes in the chickens' mood with the work of agricultural machinery or other equipment nearby.
"In general, some events sometimes seem unrelated [to production], but artificial intelligence says there is a correlation, and we can influence it. There were factors we did not think could affect the quality of meat at all, but they do," Kondratenko said.
How intelligent cameras increase the efficiency of diagnostics and laboratory – Med-Tech Innovation
29 March 2023 11:17
The integration of artificial intelligence (AI) and smart cameras is rapidly transforming various industries, including the medical sector. This advanced technology is increasing the efficiency and effectiveness of medical processes, leading to better patient outcomes and optimised workflows. One area where these innovations are having a substantial impact is laboratory automation. However, it does not stop there. Read why healthcare providers need to look into the new technology to keep their competitive edge.
Revolutionising laboratory automation with Deep Learning

Smart cameras represent a turning point in laboratory automation, offering unprecedented levels of accuracy, speed, and reliability. By automating tasks such as sample sorting, analysis and inspection, these cameras can significantly reduce the risk of human error and increase overall laboratory efficiency. This allows lab technicians and researchers to focus on more complex tasks and data interpretation, ultimately leading to faster discoveries and treatments.

In addition, AI-driven cameras can already handle a wide range of samples and conditions, ensuring that even very different sample types are processed and categorised correctly. This flexibility is critical for labs working with diverse biological materials and complex experiments.
Similarly, intelligent cameras will be of great importance in the development and operation of medical robotics, for example by enabling robots to "see", understand, and adaptively respond to their environment. In the future, AI-controlled industrial cameras could guide surgical robots to make precise cuts and sutures, improving the overall quality of surgical procedures.
Take the leap: the power of the AI Vision System IDS NXT
However, how do companies take their first steps with the new technology? After all, many companies lack the expertise and time to familiarise themselves with the field of AI and its use for their needs. The AI vision system IDS NXT is designed to help with this, as it can be operated quickly and easily by any user group, even without in-depth knowledge of machine learning, image processing or application programming. It therefore offers an excellent basis for the intelligent use of image processing.
The all-in-one system consists of intelligent industrial cameras plus a software environment that covers the entire process from creating to running AI vision applications. In addition to its user-friendly workflows and holistic design, expert tools enable open-platform programming, making IDS NXT cameras highly customisable and suitable for a wide range of applications.
Thanks to the latest software update, these intelligent cameras are now able to detect anomalies independently and thereby optimise processes such as quality assurance. For this purpose, users train a neural network that is then executed on the programmable cameras. To support this, IDS Imaging Development Systems offers the AI Vision Studio IDS NXT lighthouse, which is characterised by easy-to-use workflows and seamless integration into the IDS NXT ecosystem. Customers can even use only "GOOD" images for training, meaning that relatively little training data is required compared with the other AI methods, object detection and classification. This simplifies the development of an AI vision application and is well suited for evaluating the potential of AI-based image processing for projects in the company.
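The internals of IDS NXT are proprietary, but the general idea behind training on "GOOD" images only can be sketched with a toy model: learn what normal samples look like, then flag anything that deviates too far from that learned appearance. Everything in this sketch, including the synthetic "frames", the mean-image model and the threshold rule, is an illustrative assumption, not the actual IDS implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for camera frames: 200 flattened 8x8 "good" samples
good = rng.normal(loc=0.5, scale=0.02, size=(200, 64))

# "Train" on good images only: model normal appearance as the mean image
mean_good = good.mean(axis=0)

def score(img):
    # Anomaly score = distance from the learned normal appearance
    return np.linalg.norm(img - mean_good)

# Threshold derived from the good data itself, plus a safety margin
threshold = max(score(img) for img in good) * 1.1

normal_frame = rng.normal(0.5, 0.02, size=64)
defect_frame = normal_frame.copy()
defect_frame[:16] = 1.0  # simulate a bright defect region

assert score(normal_frame) <= threshold  # normal frame passes
assert score(defect_frame) > threshold   # defective frame is flagged
```

The point is that no defective examples are needed at training time, which is why anomaly-detection approaches typically require far less labeled data than object detection or classification.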
By adopting artificial intelligence, healthcare providers can stay ahead of the curve and ensure they are well equipped to meet the challenges of an increasingly digital and connected healthcare environment. Smart cameras will continue to play a critical role in shaping medicine and providing more accurate, efficient, and personalised care to patients worldwide.
Do not wait to explore the possibilities of AI and intelligent cameras in medical applications and lab automation. Learn more about the AI vision system IDS NXT and how it can help you stay ahead in the ever-evolving world of healthcare.
University of Aberdeen academic appointed to national artificial intelligence role – Aberdeen Live
A leading academic from the University of Aberdeen has been appointed to a role helping to deliver Scotland's national artificial intelligence (AI) strategy.
Dr Georgios Leontidis, director of the University's interdisciplinary centre for data and AI, has been appointed as a member of the Scottish AI Alliance Leadership Group for an initial period of two years.
Over the course of the two years, Dr Leontidis will focus on providing thought leadership and expertise in core AI technologies, and on how these technologies can be deployed in a fair, trustworthy and ethical manner across different sectors.
The Scottish AI Alliance exists to provide a focus for dialogue, collaboration, and action on AI activities in Scotland.
Dr Leontidis said: "As an AI researcher and academic, I am excited to contribute my skills, knowledge, and experience to help shape the future of AI in Scotland and realise the actions outlined in Scotland's AI strategy.
"Whether it's identifying new applications for AI, developing cutting-edge technologies, or addressing ethical concerns, I believe that AI has the potential to make a significant positive impact on society. I am excited to be a part of this journey towards expanding the already world-leading AI activities in Scotland further.
"I look forward to collaborating with my colleagues in the Scottish AI alliance leadership group to drive innovation, promote diversity and inclusivity, and ensure that AI is developed and deployed responsibly."
Artificial intelligence won’t save banks from short-sightedness – SWI swissinfo.ch in English
Banks like Credit Suisse use sophisticated models to analyse and predict risks, but too often they are ignored or bypassed by humans, says risk management expert Didier Sornette.
This content was published on March 28, 2023.
The collapse of Credit Suisse has once again exposed the high-stakes risk culture in the financial sector. The many sophisticated artificial intelligence (AI) tools used by the banking system to predict and manage risks aren't enough to save banks from failure.
According to Didier Sornette, honorary professor of entrepreneurial risks at the federal technology institute ETH Zurich, the tools aren't the problem but rather the short-sightedness of bank executives who prioritise profits.
SWI swissinfo.ch: Banks use AI models to predict risks and evaluate the performance of their investments, yet these models couldn't save Credit Suisse or Silicon Valley Bank from collapse. Why didn't they act on the predictions? And why didn't decision-makers intervene earlier?
Didier Sornette: I have made so many successful predictions in the past that were systematically ignored by managers and decision-makers. Why? Because it is so much easier to say that the crisis is an act of God and could not have been foreseen, and to wash your hands of any responsibility.
Acting on predictions means to stop the dance, in other words to take painful measures. This is why policymakers are essentially reactive, always behind the curve. It is political suicide to impose pain to embrace a problem and solve it before it explodes in your face. This is the fundamental problem of risk control.
Credit Suisse had very weak risk controls and culture for decades. Instead, business units were always left to decide what to do and therefore inevitably accumulated a portfolio of latent risks, or, I'd say, lots of far out-of-the-money put options [options with no intrinsic value]. Then, when a handful of random events occurred that were symptomatic of the fundamental lack of controls, people started to get worried. When a large US bank [Silicon Valley Bank] with $220 billion (CHF202 billion) of assets quickly went insolvent, people started to reassess their willingness to leave uninsured deposits at any poorly run bank - and voilà.
SWI: This means that risk prediction and management won't work if the problem is not solved at the systemic level?
D.S.: The policy of zero or negative interest rates is the root cause of all this. It has led to positions at these banks that are vulnerable to rising rates. The huge debts of countries have also made them vulnerable. We live in a world that has become very vulnerable because of the short-sighted and irresponsible policies of the big central banks, which have not considered the long-term consequences of their "firefighting" interventions.
The shock is a systemic one, starting from Silicon Valley Bank, Signature Bank, etc., with Credit Suisse being only an episode revealing the major problem of the system: the consequences of the catastrophic policies of the central banks since 2008, which flooded the markets with easy money and led to huge excesses in financial institutions. We are now seeing some of the consequences.
SWI: What role can AI-based risk prediction play, for example, in the case of the surviving giant UBS?
D.S.: AI and mathematical models are irrelevant in the sense that (risk control) tools are useful only if there is a will to use them!
When there is a problem, many people always blame the models, the risk methods, etc. This is wrong. The problems lie with humans who simply ignore models and bypass them. There were so many instances in the last 20 years. Again and again, the same kind of story repeats itself with nobody learning the lessons. So AI can't do much, because the problem is not about more "intelligence" but greed and short-sightedness.
Despite the apparent financial gains, this is probably a bad and dangerous deal for UBS. The reason is that it takes decades to create the right risk culture and they are now likely to create huge morale damage via the big headcount reductions. Additionally, no regulator will be giving them an indemnity for inherited regulatory or client Anti-Money Laundering violations from the Credit Suisse side, which we know had very weak compliance. They will have to deal with surprising problems there for years.
SWI: Could we envision a more rigorous form of oversight of the banking system by governments or even taxpayers using data collected by AI systems?
D.S.: Collecting data is not the purview of AI systems. Collecting clean and relevant data is the most difficult challenge, much more difficult than machine learning and AI techniques. Most data is noisy, incomplete, inconsistent, and very costly to obtain and to manage. This requires huge investments and a long-term view that is almost always missing. Hence crises occur every five years or so.
SWI: Lately, we've been hearing more and more about behavioral finance. Is there more psychology and irrationality in the financial system than we think?
D.S.: There is greed, fear, hope and... sex. Joking aside, people in banking and finance are in general superrational when it comes to optimising their goals and getting rich. It is not irrationality, it is betting and taking big risks where the gains are privatised and the losses are socialised.
Strong regulations need to be imposed. In a sense, we need to make "banking boring" to tame the beasts that tend to destabilise the financial system by construction.
SWI: Is there a future in which machine learning can prevent the failure of "too big to fail" banks like Credit Suisse, or is that pure science fiction?
D.S.: Yes, an AI can prevent a future failure if the AI takes power and enslaves humans to follow its risk management, with incentives dictated by the AI, as in many scenarios depicting the dangers of superintelligent AI. I am not kidding.
The interview was conducted in writing. It has been edited for clarity and brevity.
Everything to Know About Artificial Intelligence, or AI – The New York Times
Welcome to On Tech: A.I., a pop-up newsletter that will teach you about artificial intelligence, especially the new breed of chatbots like ChatGPT, all in only five days.
We'll tackle some of the big themes and questions around A.I. By the end of the week, you'll know enough to command the room at a dinner party, or impress your co-workers.
Every day, we'll give you a quiz and a homework assignment. (A pro tip: Ask the chatbots themselves about how they work, or about concepts you don't understand. Answering such questions is one of their most useful skills. But keep in mind that they sometimes get things wrong.)
Let's start at the beginning.
The term artificial intelligence gets tossed around a lot to describe robots, self-driving cars, facial recognition technology and almost anything else that seems vaguely futuristic.
A group of academics coined the term in the late 1950s as they set out to build a machine that could do anything the human brain could do: skills like reasoning, problem-solving, learning new tasks and communicating using natural language.
Progress was relatively slow until around 2012, when a single idea shifted the entire field.
It was called a neural network. That may sound like a computerized brain, but, really, it's a mathematical system that learns skills by finding statistical patterns in enormous amounts of data. By analyzing thousands of cat photos, for instance, it can learn to recognize a cat. Neural networks enable Siri and Alexa to understand what you're saying, identify people and objects in Google Photos and instantly translate dozens of languages.
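As a toy illustration of "finding statistical patterns in data" (and nothing like how production neural networks are built), a single artificial neuron can learn a simple rule from labeled examples by gradient descent. The data, learning rate and iteration count below are made up for the demo:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: points in the unit square, labeled 1 if x + y > 1, else 0.
# The "pattern" to discover is that dividing line.
X = rng.random((500, 2))
y = (X.sum(axis=1) > 1.0).astype(float)

# One artificial neuron: a weighted sum squashed through a sigmoid
w, b = rng.normal(size=2), 0.0
def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# Learning = repeatedly nudging the weights to shrink prediction error
for _ in range(2000):
    err = sigmoid(X @ w + b) - y
    w -= 0.5 * (X.T @ err) / len(X)
    b -= 0.5 * err.mean()

# After training, the neuron has recovered the pattern from examples alone
accuracy = ((sigmoid(X @ w + b) > 0.5) == (y == 1)).mean()
```

Real systems stack millions of such neurons in layers, but the principle is the same: no rule is programmed in; it is inferred from the data.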
A New Generation of Chatbots
A brave new world. A new crop of chatbots powered by artificial intelligence has ignited a scramble to determine whether the technology could upend the economics of the internet, turning today's powerhouses into has-beens and creating the industry's next giants. Here are the bots to know:
ChatGPT. ChatGPT, the artificial intelligence language model from the research lab OpenAI, has been making headlines since November for its ability to respond to complex questions, write poetry, generate code, plan vacations and translate languages. GPT-4, the latest version introduced in mid-March, can even respond to images (and ace the Uniform Bar Exam).
Bing. Two months after ChatGPT's debut, Microsoft, OpenAI's primary investor and partner, added a similar chatbot, capable of having open-ended text conversations on virtually any topic, to its Bing internet search engine. But it was the bot's occasionally inaccurate, misleading and weird responses that drew much of the attention after its release.
Ernie. The search giant Baidu unveiled China's first major rival to ChatGPT in March. The debut of Ernie, short for Enhanced Representation through Knowledge Integration, turned out to be a flop after a promised live demonstration of the bot was revealed to have been recorded.
The next big change: large language models. Around 2018, companies like Google, Microsoft and OpenAI began building neural networks that were trained on vast amounts of text from the internet, including Wikipedia articles, digital books and academic papers.
Somewhat to the experts' surprise, these systems learned to write unique prose and computer code and carry on sophisticated conversations. This is sometimes called generative A.I. (More on that later this week.)
The result: ChatGPT and other chatbots are now poised to change our everyday lives in dramatic ways. Over the next four days, we will explain the technology behind these bots, help you understand their abilities and limitations, and show where they are headed in the years to come.
Tuesday: How do chatbots work?
Wednesday: How can they go wrong?
Thursday: How can you use them right now?
Friday: Where are they headed?
You've got some homework to do! One of the best ways to understand A.I. is to use it yourself.
The first step is to sign up for these chatbots. Bing and Bard chatbots are being rolled out slowly, and you may need to get on their waiting lists for access. ChatGPT currently has no waiting list, but requires setting up a free account.
Once you're ready, just type your words (known as a prompt) into the text box, and the chatbot will reply. You may want to play around with different prompts and see if you get a different response.
Today's assignment: Ask ChatGPT or one of its competitors to write a cover letter for your dream job, like, say, a NASA astronaut.
We want to see the results! Share it as a comment and see what other people have submitted.
We've been covering developments in artificial intelligence for a long time, and we've both written recent books on the subject. But this moment feels distinctly different from what's come before. We recently chatted on Slack with our editor, Adam Pasick, about how we're each approaching this unique point in time.
Cade: The technologies driving the new wave of chatbots have been percolating for years. But the release of ChatGPT really opened people's eyes. It set off a new arms race across Silicon Valley. Tech giants like Google and Meta had been reluctant to release this technology, but now they're racing to compete with OpenAI.
Kevin: Yeah, it's crazy out there. I feel like I've got vertigo. There's a natural inclination to be skeptical of tech trends. Wasn't crypto supposed to change everything? Weren't we all just talking about the metaverse? But it feels different with A.I., in part because millions of users are already experiencing the benefits. I've interviewed teachers, filmmakers and engineers who are using tools like ChatGPT every day. And it came out only four months ago!
Adam: How do you balance the excitement out there with caution about where this could go?
Cade: A.I. is not as powerful as it might seem. If you take a step back, you realize that these systems can't duplicate our common sense or reasoning in full. Remember the hype around self-driving cars: Were those cars impressive? Yes, remarkably so. Were they ready to replace human drivers? Not by a long shot.
Kevin: I suspect that tools like ChatGPT are actually more powerful than they seem. We haven't yet discovered everything they can do. And, at the risk of getting too existential, I'm not sure these models work so differently than our brains. Isn't a lot of human reasoning just recognizing patterns and predicting what comes next?
Cade: These systems mimic humans in some ways but not in others. They exhibit what we can rightly call intelligence. But as OpenAI's chief executive told me, this is an "alien intelligence." So, yes, they will do things that surprise us. But they can also fool us into thinking they are more like us than they really are. They are both powerful and flawed.
Kevin: Sounds like some humans I know!
Neural network: A mathematical system, modeled on the human brain, that learns skills by finding statistical patterns in data. It consists of layers of artificial neurons: the first layer receives the input data, and the last layer outputs the results. Even the experts who create neural networks don't always understand what happens in between.
Large language model: A type of neural network that learns skills, including generating prose, conducting conversations and writing computer code, by analyzing vast amounts of text from across the internet. The basic function is to predict the next word in a sequence, but these models have surprised experts by learning new abilities.
Generative A.I.: Technology that creates content, including text, images, video and computer code, by identifying patterns in large quantities of training data, and then creating new, original material that has similar characteristics. Examples include ChatGPT for text and DALL-E and Midjourney for images.
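The "predict the next word" idea in the large language model entry can be shown at miniature scale with simple bigram counts. Real models use neural networks trained on vastly more text; this twelve-word corpus and the counting approach are invented purely for illustration:

```python
from collections import Counter, defaultdict

text = "the cat sat on the mat and the cat saw the dog".split()

# Count which word follows which: the heart of next-word prediction
following = defaultdict(Counter)
for current, nxt in zip(text, text[1:]):
    following[current][nxt] += 1

def predict_next(word):
    # Return the most frequently observed next word
    return following[word].most_common(1)[0][0]

# "cat" follows "the" twice; "mat" and "dog" only once each
assert predict_next("the") == "cat"
```

A large language model does the same kind of thing, except the "counts" are replaced by billions of learned parameters that generalize to word sequences never seen verbatim in training.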
Read the original here:
Everything to Know About Artificial Intelligence, or AI - The New York Times
Godfather of AI Says There’s a Minor Risk It’ll Eliminate Humanity – Futurism
"It's not inconceivable."
Nonzero Chance
Geoffrey Hinton, a British computer scientist, is best known as the "godfather of artificial intelligence." His seminal work on neural networks broke the mold by mimicking the processes of human cognition, and went on to form the foundation of machine learning models today.
And now, in a lengthy interview with CBS News, Hinton shared his thoughts on the current state of AI, which he considers to be at a "pivotal moment," with the advent of artificial general intelligence (AGI) looming closer than we'd think.
"Until quite recently, I thought it was going to be like 20 to 50 years before we have general purpose AI," Hinton said. "And now I think it may be 20 years or less."
AGI is the term that describes a potential AI that could exhibit human or superhuman levels of intelligence. Rather than being overtly specialized, an AGI would be capable of learning and thinking on its own to solve a vast array of problems.
For now, omens of AGI are often invoked to drum up the capabilities of current models. But regardless of the industry bluster hailing its arrival, or how long it might really be before AGI dawns on us, Hinton says we should be carefully considering its consequences now, which may include the minor issue of it trying to wipe out humanity.
"It's not inconceivable, that's all I'll say," Hinton told CBS.
Still, Hinton maintains that the real issue on the horizon is how AI technology that we already have, AGI or not, could be monopolized by power-hungry governments and corporations (see: the formerly non-profit, now for-profit OpenAI).
"I think it's very reasonable for people to be worrying about these issues now, even though it's not going to happen in the next year or two," Hinton said in the interview. "People should be thinking about those issues."
Luckily, by Hinton's outlook, humanity still has a little bit of breathing room before things get completely out of hand, since current publicly available models are mercifully stupid.
"We're at this transition point now where ChatGPT is this kind of idiot savant, and it also doesn't really understand about truth," Hinton told CBS, because it's trying to reconcile the differing and opposing opinions in its training data. "It's very different from a person who tries to have a consistent worldview."
But Hinton predicts that "we're going to move towards systems that can understand different world views," which is spooky, because it inevitably means whoever is wielding the AI could use it to push a worldview of their own.
"You don't want some big for-profit company deciding what's true," Hinton warned.
See more here:
Godfather of AI Says There's a Minor Risk It'll Eliminate Humanity - Futurism
Humans In The GenAI Loop – Forbes
An image designed with artificial intelligence by Berlin-based digital creator Julian van Dieken, inspired by Johannes Vermeer's painting "Girl with a Pearl Earring," on display at the Mauritshuis museum in The Hague on March 9, 2023, as part of a special installation of fans' recreations of the painting. (Photo by Simon Wohlfahrt/AFP via Getty Images)
Generative AI, the technology behind ChatGPT, is going supernova, as astronomers say, outshining other innovations for the moment. But despite alarmist predictions of AI overlords enslaving mankind, the technology still requires human handlers and will for some time to come.
While AI can generate content and code at a blinding pace, it still requires humans to oversee the output, which can be low quality or simply wrong. Whether it is writing a report or writing a computer program, the technology cannot be trusted to deliver accuracy that humans can rely on. It's getting better, but even that process of improvement depends on an army of humans painstakingly correcting the AI model's mistakes in an effort to teach it to behave.
Humans in the loop is an old concept in AI. It refers to the practice of involving human experts in the process of training and refining AI systems to ensure that they perform correctly and meet the desired objectives.
In the early days of AI research, computer scientists were focused on developing rule-based systems that could reason and make decisions based on pre-programmed rules. However, these systems were tedious to construct, requiring experts to write down the rules by hand, and were limited by the fact that they could only operate within the constraints of the rules that were explicitly programmed into them.
As AI technology advanced, researchers began to explore new approaches, such as machine learning and neural networks, that enabled computers to learn on their own from large volumes of training data.
But the dirty little secret behind the first wave of such applications, which are still the dominant form of AI used today, is that they depend on hand-labeled data. Tens of thousands of people continue to toil at the mind-numbing task of putting labels on images, text and sound to teach supervised AI systems what to look or listen for.
Then along came generative AI, which does not require labeled data. It teaches itself by consuming vast amounts of data and learning the relationships within that data, much as an animal does in the wild. Large language models, which use generative AI, learn the world through the lens of text, and the world has been amazed by these models' ability to compose human-like answers and even engage in human-like conversations.
ChatGPT, a large language model trained by OpenAI, has awed the world with the depth of its knowledge and the fluency of its responses. Nevertheless, its utility is limited by so-called hallucinations, mistakes in the generated text that are semantically or syntactically plausible but are, in fact, incorrect or nonsensical.
The answer? Humans, again. OpenAI is working to address ChatGPT's hallucinations through reinforcement learning with human feedback (RLHF), employing, yes, a large number of workers.
RLHF has been employed to shape ChatGPT's behavior, where the data collected during its interactions are used to train a neural network that functions as a "reward predictor." The reward predictor evaluates ChatGPT's outputs and predicts a numerical score that represents how well those actions align with the system's desired behavior. A human evaluator periodically checks ChatGPT's responses and selects those that best reflect the desired behavior. This feedback is used to adjust the reward-predictor neural network, which is then utilized to modify the behavior of the AI model.
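The reward-predictor idea described above can be sketched as a pairwise preference update: when a human picks the better of two responses, the predictor's weights are nudged so that the preferred response scores higher next time. Everything below, the feature extraction, the linear model, and the example responses, is an invented toy stand-in, not OpenAI's actual implementation:

```python
import math

def features(response: str):
    # Hypothetical features for illustration: word count, and whether
    # the response hedges with "maybe". A real reward model reads the
    # full text with a neural network.
    return [len(response.split()), 1.0 if "maybe" in response else 0.0]

weights = [0.0, 0.0]  # linear reward predictor, initially indifferent

def reward(response: str) -> float:
    """Predicted score: higher means closer to desired behavior."""
    return sum(w * f for w, f in zip(weights, features(response)))

def update_from_preference(preferred: str, rejected: str, lr: float = 0.1):
    """One Bradley-Terry-style gradient step: raise the modeled
    probability that the preferred response outscores the rejected one."""
    p = 1.0 / (1.0 + math.exp(reward(rejected) - reward(preferred)))
    scale = lr * (1.0 - p)
    fp, fr = features(preferred), features(rejected)
    for i in range(len(weights)):
        weights[i] += scale * (fp[i] - fr[i])

# A human evaluator prefers the first response over the second.
update_from_preference("Paris is the capital of France.",
                       "It is maybe some city.")
# After the update, the preferred response outscores the rejected one.
```

In full RLHF the trained reward predictor then drives a reinforcement-learning step that adjusts the language model itself; the sketch stops at the reward model, which is the part the human labor feeds.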
Ilya Sutskever, OpenAI's chief scientist and one of the creators of ChatGPT, believes that the problem of hallucinations will disappear with time as large language models learn to anchor their responses in reality. He suggests that the limitations of ChatGPT that we see today will diminish as the model improves. However, humans in the loop are likely to remain a feature of this amazing technology for years to come.
This is why generative AI coding assistants like GitHub's Copilot and Amazon's CodeWhisperer are just that: assistants, working in concert with experienced coders who can correct their mistakes or pick the best option among a handful of coding suggestions. While AI can generate code at a rapid pace, humans bring creativity, context and critical thinking skills to the table.
True autonomy in AI depends on the trust and reliability of AI systems, which may come as those systems improve. But for now, humans are the overlords, and trusted results depend on collaboration between humans and AI.
Sylvain Duranton is the Global Leader of BCG X and a member of BCG's Executive Committee. BCG X is the tech build & design unit of BCG. Turbocharging BCG's deep industry and functional expertise, BCG X brings together advanced tech knowledge and ambitious entrepreneurship to help organizations enable innovation at scale. With nearly 3,000 technologists, scientists, programmers, engineers and human-centered designers located across 80+ cities, BCG X builds and designs platforms and software to address the world's most important challenges and opportunities. Teaming across practices, and in close collaboration with clients, its end-to-end global team unlocks new possibilities. Together they are creating the bold and disruptive products, services and businesses of tomorrow. Duranton was the global leader and founder of BCG GAMMA, BCG's AI and Data + Analytics unit.
Read more here:
Humans In The GenAI Loop - Forbes
ChatGPT in the Humanities Panel: Researchers Share Concerns, Prospects of Artificial Intelligence in Academia – Cornell University The Cornell Daily…
Does the next Aristotle, Emily Dickinson or Homer live on your computer? A group of panelists explored this idea in a talk titled "ChatGPT and the Humanities" on Friday in the A.D. White House's Guerlac Room.
ChatGPT's ability to produce creative literature was one of the central topics explored in the talk, as the discourse on the use of artificial intelligence software in academic spheres continues to grow.
In the panel, Prof. Morten Christiansen, psychology, Prof. Laurent Dubreuil, comparative literature, Pablo Contreras Kallens grad and Jacob Matthews grad explored the benefits and consequences of utilizing artificial intelligence within humanities research and education.
The forum was co-sponsored by the Society for the Humanities, the Humanities Lab and the New Frontier Grant program.
The Society for the Humanities was established in 1966 and connects visiting fellows, Cornell faculty and graduate students to conduct interdisciplinary research connected to an annual theme. This year's focal theme is "Repair," which refers to the conservation, restoration and replication of objects, relations and histories.
All four panelists are members of the Humanities Lab, which works to provide an intellectual space for scholars to pursue research relating to the interaction between the sciences and the humanities. The lab was founded by Dubreuil in 2019 and is currently led by him.
Christiansen and Dubreuil also recently received New Frontier Grants for their project titled "Poetry, AI and the Mind: A Humanities-Cognitive Science Transdisciplinary Exploration," which focuses on the application of artificial intelligence to literature, cognitive science and mental and cultural diversity. For well over a year, they have worked on an experiment comparing humans' poetry generation to that of ChatGPT, with the continuous help of Contreras Kallens and Matthews.
Before the event began, attendees expressed their curiosity and concerns about novel AI technology.
Lauren Scheuer, a writing specialist at the Keuka College Writing Center and Tompkins County local, described worries about the impact of ChatGPT on higher education.
"I'm concerned about how ChatGPT is being used to teach and to write and to generate content," Scheuer said.
Sarah Milliron grad, who is pursuing a Ph.D. in psychology, also said that she was concerned about ChatGPT's impact on academia as the technology becomes more widely used.
"I suppose I'm hoping [to gain] a bit of optimism [from this panel]," Milliron said. "I hope that they address ways that we can work together with AI, as opposed to [having] it be something that we ignore or have it be something that we are trying to get rid of."
Dubreuil first explained that there has been a recent interest in artificial intelligence due to the impressive performance of ChatGPT and its successful marketing campaign.
"All scholars, but especially humanists, are currently wondering if we should take into account the new capabilities of automated text generators," Dubreuil said.
Dubreuil expressed that scholars have varying concerns and ideas regarding ChatGPT.
"Some [scholars] believe we should counteract [ChatGPT's consequences] by means of new policies," Dubreuil said. "Other [scholars] complained about the lack of morality or the lack of political apropos that is exhibited by ChatGPT. Other [scholars] say that there is too much political apropos and political correctness."
Dubreuil noted that other scholars prophesy that AI could lead to the fall of humanity.
For example, historian Yuval Harari recently wrote about the 2022 Expert Survey on Progress in AI, which found that out of more than 700 surveyed top academics and researchers, half said that there was at least a 10 percent chance of human extinction or similarly permanent and severe disempowerment due to future AI systems.
Contreras Kallens then elaborated on their poetry experiment, which utilized what he referred to as "fragment completion": ChatGPT and Cornell undergraduates were both prompted to continue writing from two lines of poetry from an author such as Dickinson.
Contreras Kallens said that ChatGPT generally matched the poetry quality of a Cornell undergraduate, while expectedly falling short of the original author's writing. However, the author recognition program they used actually confused the artificial productions with the original authors' work.
The final part of the project, which the group is currently refining, will measure whether students can differentiate between whether a fragment was completed by the original author, an undergraduate or by ChatGPT.
When describing the importance of this work, Contreras Kallens explained the concept of universal grammar, a linguistics theory suggesting that people are innately, biologically programmed to learn grammar. Thus, ChatGPT's being able to reach the writing quality of many humans challenges assumptions about technology's shortcomings.
"[This model] invites a deeper reconsideration of language assumptions or language acquisition processing," Contreras Kallens said. "And that's at least interesting."
Matthews then expressed that his interest in AI does not lie in its generative abilities but in the possibility of representing text numerically and computationally.
"Often humanists are dealing with large volumes of text [and] they might be very different," Matthews said. "[It is] fundamental to the humanities that we debate [with each other] about what texts mean, how they relate to one another; we're always putting different things into relation with one another. And it would be nice sometimes to have a computational or at least quantitative basis that we could maybe talk about, or debate, or at least have access to."
Matthews described how autoregressive language models, machine learning models that predict the next word in a text from the words that precede it, reveal the perceived similarity between certain words.
Through assessing word similarity, Matthews found that ChatGPT contains gendered language bias, which he said reflects the bias in human communication.
For example, Matthews inputted the names Mary and James, the most common female and male names in the United States, along with Sam, which was used as a gender-neutral name. He found that James is closer to the occupations of lawyer, programmer and doctor than the other names, particularly Mary.
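The measurement behind this comparison is typically cosine similarity between embedding vectors. The three-dimensional vectors below are invented purely for illustration; a real study would use the far higher-dimensional embeddings produced by the model itself:

```python
import math

# Toy embedding vectors, invented for illustration only; real model
# embeddings have hundreds or thousands of dimensions.
embeddings = {
    "James":  [0.9, 0.1, 0.4],
    "Sam":    [0.5, 0.5, 0.4],
    "Mary":   [0.1, 0.9, 0.4],
    "lawyer": [0.8, 0.2, 0.5],
}

def cosine(a, b):
    """Cosine similarity: 1.0 means same direction, 0.0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# With these made-up vectors, "James" sits closest to "lawyer" and
# "Mary" farthest, mirroring the kind of gendered gap Matthews measured.
for name in ("James", "Sam", "Mary"):
    print(name, round(cosine(embeddings[name], embeddings["lawyer"]), 3))
```

Because the bias lives in the geometry of the embedding space, the same few lines can probe any name-occupation pairing once real embeddings are substituted in.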
Matthews explained that these biases were more prevalent in previous language modeling systems, but that the makers of GPT-3.5 (the embedding model of ChatGPT, as opposed to GPT-3, which is the model currently available to the public) have acknowledged bias in their systems.
"It's not just that [these models] learn language; they're also exposed to biases that are present in text," Matthews said. "This can be visible in social contexts especially, and if we're deploying these models, this has consequences if they're used in decision making."
Matthews also demonstrated that encoding systems can textually analyze and compare literary works, such as those by Shakespeare and Dickinson, making them a valuable resource for humanists, especially regarding large texts.
"Humanists are already engaged in thinking about these types of questions [referring to the models' semantic and cultural analyses]," Matthews said. "But we might not have the capacity or the time to analyze the breadth of text that we want to, and we might not be able to assign or even to recall all the things that we are reading. So if we are using this in parallel with the existing skill sets that humanists have, I think that this is really valuable."
Christiansen, who is part of a new University-wide committee looking into the potential use of generative AI, then talked about the opportunities and challenges of the use of AI in education and teaching.
Christiansen described one positive pedagogical use of ChatGPT: having students ask the software specific questions and then criticize its answers. He also explained that ChatGPT may help with the planning process of writing, which he noted many students frequently discount.
"I think also, importantly, that [utilizing ChatGPT in writing exercises] can actually provide a bit of a level playing field for second language learners, of which we have many here at Cornell," Christiansen said.
Christiansen added that ChatGPT can act as a personal tutor, help students develop better audience sensitivity, work as a translator and provide summaries.
However, these models also have several limitations. For instance, ChatGPT knows very little about any events that occurred after September 2021 and will be clueless about recent issues, such as the Ukraine war.
Furthermore, Christiansen emphasized that these models can and will "hallucinate," making up information, including falsified references. He also noted that students could potentially use ChatGPT to violate academic integrity.
Overall, Dubreuil expressed concern about the impact of technologies such as ChatGPT on innovation. He explained that ChatGPT currently only reorganizes data, which falls short of true invention.
"There is a wide range between simply incremental inventions and rearrangements that are such that they not only rearrange the content, but they reconfigure the given and the way the given was produced, its meanings, its values and its consequences," Dubreuil said.
Dubreuil argued that if standards for human communication do not require invention, not only will AI produce work that is not truly creative, but humans may become less inventive as well.
"It has to be said that through social media, especially through our algorithmic life these days, we may have prepared our own minds to become much more similar to a chatbot. We may be reprogramming ourselves constantly, and that's the danger," Dubreuil said. "The challenge of AI is a provocation toward reform."
Correction, March 27, 2:26 p.m.: A previous version of this article incorrectly stated the time frame about which ChatGPT is familiar and the current leaders of the Humanities Lab. In addition, minor clarification has been added to the description of Christiansen and Dubreuil's study on AI poetry generation. The Sun regrets these errors, and the article has been corrected.
Read this article:
ChatGPT in the Humanities Panel: Researchers Share Concerns, Prospects of Artificial Intelligence in Academia - Cornell University The Cornell Daily...