Category Archives: Artificial Intelligence

Senators on Using Artificial Intelligence in Agriculture It’s already here. – AG INFORMATION NETWORK OF THE … – AGInfo Ag Information Network

The Senate Agriculture Committee hosted a hearing on AI and innovation in American agriculture. In her opening statement, Michigan Senator and Chair Debbie Stabenow pointed to agriculture's role in technology.

"American agriculture has always been at the forefront of innovation. It's imperative we strike a balance between harnessing the benefits A.I. offers and addressing the concerns it raises."

Concerns about A.I. include data privacy, workforce implications, and equitable access to the technology. Stabenow says the reality is A.I. is already being integrated into our daily lives.

"In fact, I'm going to pause. My entire statement up to this point was generated by A.I., and it's something I would have said. So it's incredible."

Panelist Dr. Mason Earles with the University of California Davis defines AI.

"Put simply, an A.I. is a computer program … physical action."

Follow this link:
Senators on Using Artificial Intelligence in Agriculture It's already here. - AG INFORMATION NETWORK OF THE ... - AGInfo Ag Information Network

AI is the buzz, the big opportunity and the risk to watch among the Davos glitterati – The Associated Press

DAVOS, Switzerland (AP) - Artificial intelligence is easily the biggest buzzword for world leaders and corporate bosses diving into big ideas at the World Economic Forum's glitzy annual meeting in Davos. Breathtaking advances in generative AI stunned the world last year, and the elite crowd is angling to take advantage of its promise and minimize its risks.

In a sign of ChatGPT maker OpenAI's skyrocketing profile, CEO Sam Altman made his Davos debut to rock-star crowds, with his benefactor, Microsoft CEO Satya Nadella, hot on his heels.

Illustrating AI's geopolitical importance like few other technologies before it, the word was on the lips of world leaders from China to France. It was visible across the Swiss Alpine town and percolated through afterparties.

Here's a look at the buzz:

The leadership drama at the AI world's much-ballyhooed chatbot maker followed Altman and Nadella to the swanky Swiss snows.

Altman's sudden firing and swift rehiring last year cemented his position as the face of the generative AI revolution, but questions about the boardroom bustup and OpenAI's governance lingered. He told a Bloomberg interviewer that he's focused on getting a great full board in place and deflected further questions.

At a Davos panel on technology and humanity Thursday, a question about what Altman learned from the upheaval came at the end.

"We had known that our board had gotten too small, and we knew that we didn't have a level of experience we needed," Altman said. "But last year was such a wild year for us in so many ways that we sort of just neglected it."

Altman added that "for every one step we take closer to very powerful AI, everybody's character gets, like, plus 10 crazy points. It's a very stressful thing. And it should be, because we're trying to be responsible about very high stakes."

From China to Europe, top officials staked their positions on AI as the world grapples with regulating the rapidly developing technology that has big implications for workplaces, elections and privacy.

The European Union has devised the world's first comprehensive AI rules ahead of a busy election year, with AI-powered misinformation and disinformation the biggest risk to the global economy as it threatens to erode democracy and polarize society, according to a World Economic Forum report released last week.

Chinese Premier Li Qiang called AI a "double-edged sword."

"Human beings must control the machines instead of having the machines control us," he said in a speech Tuesday.

AI must be guided in a direction that is conducive to the progress of humanity, so there should be a red line in AI development, a red line that must not be crossed, Li said, without elaborating.

China, one of the world's centers of AI development, wants to step up communication and cooperation with all parties on improving global AI governance, Li said.

China has released interim regulations for managing generative AI, but the EU broke ground with its AI Act, which won a hard-fought political deal last month and awaits final sign-off.

European Commission President Ursula von der Leyen said AI is "a very significant opportunity, if used in a responsible way."

She said the global race is already on to develop and adopt AI, and touted the 27-nation EU's efforts, including the AI Act and a program pairing supercomputers with small and midsized businesses to train large AI models.

French President Emmanuel Macron said he's a strong believer in AI and that his country is an attractive and competitive country for the industry. He played up France's role in helping coordinate regulation on deepfake images and videos created with AI, as well as plans to host a follow-up summit on AI safety after an inaugural gathering in Britain in November.

The letters AI were omnipresent along the Davos Promenade, where consulting firms and tech giants are among the groups that swoop onto the main drag each year, renting out shops and revamping them into showcase pavilions.

Inside the main conference center, a giant digital wall emanated rolling images of AI art and computer-generated conceptions of wildlife and nature like exotic birds or tropical streams.

Davos-goers who wanted to delve more deeply into the technical ins and outs of artificial intelligence could drop in to sessions at the AI House.

Generative AI systems like ChatGPT and Google's Bard captivated the world by rapidly spewing out new poems, images and computer code, and are expected to have a sweeping impact on life and work.

The technology could help give a boost to the stagnating global economy, said Nadella, whose company is rolling out the technology in its products.

The Microsoft chief said he's very optimistic about AI being "that general purpose technology that drives economic growth."

Business leaders predicted AI will help automate mundane work tasks or make it easier for people to do advanced jobs, but they also warned that it would threaten workers who can't keep up.

A survey of 4,700 CEOs in more than 100 countries by PwC, released at the start of the Davos meetings, said 14% think they'll have to lay off staff because of the rise of generative AI.

"There isn't an area, there isn't an industry that's not going to be impacted by AI," said Julie Sweet, CEO of consulting firm Accenture.

For those who can move with the change, AI promises to transform tasks like computer coding and customer relations and streamline business functions like invoicing, IBM CEO Arvind Krishna said.

"If you embrace AI, you're going to make yourself a lot more productive," he said. "If you do not ... you're going to find that you do not have a job."

During a session featuring Meta chief AI scientist Yann LeCun, talk about risks and regulation led to the moderator's hypothetical example of infinitely conversant sexbots that could be built by anyone using open-source technology.

Taking the high road, LeCun replied that AI can't be dominated by a handful of Silicon Valley tech giants if it's going to serve people around the world with different languages, cultures and values.

"You do not want this to be under the control of a small number of private companies," he said.

Chan reported from London. AP Technology Writer Matt O'Brien contributed from Providence, Rhode Island.

This story has been corrected to show the U.K. AI safety summit was in November, not October.

The rest is here:
AI is the buzz, the big opportunity and the risk to watch among the Davos glitterati - The Associated Press

MotoGP, China close to Ducati: Lenovo uses artificial intelligence in the ‘Remote Garage’ – GPOne.com

During each race weekend, the team collects a total of 100 GB of data from the eight Desmosedicis in action, thanks to the approximately 50 sensors present on each. The Chinese company helps the MotoGP team with platforms that exploit the potential of AI

Submitted by Chiara Rainis on Mon, 22/01/2024 - 16:07

On the occasion of the "Campioni in Pista" event organized by Ducati in Madonna di Campiglio, Lenovo unveiled the range of technological solutions with which it will help the MotoGP team in the hunt for a new title.

"The competition will be tough, but we are excited to continue working with the team and making our innovative services available to raise the level of performance. The goal is not only to strive for maximum results, but also to make accessible to all the technological advances developed on the track, as for road bikes," were the words of Luca Rossi, president of the Intelligent Devices Group.

Linked to the Emilian brand since 2018, Lenovo's task is to develop programs to transform data into information, run complex simulations and make strategic decisions in a few seconds. During each race weekend, the team collects a total of 100 GB of data from the eight Desmosedicis in action, thanks to the approximately 50 sensors present on each.

To make analysis even more precise, rapid and detailed, this year a hyperconverged infrastructure (HCI), the Lenovo ThinkAgile, will be introduced, while promoting mobility and reliability, even in difficult environments, thanks to the Lenovo ThinkSystem SE350 edge servers. This infrastructure, optimized for artificial intelligence, will power the team's deep learning and machine learning tools with the aim of implementing the data comparison with the riders' sensations.

The data is not only analysed on the circuit, but also in the Remote Garage. This allows the technicians present in the factory to work on the information in real time, perform complex analyses and collaborate with the group present on the track to optimize the configuration of the bikes before they return to action. With this in mind, the quantity of Lenovo hardware has been increased, including monitors, workstations and accessories.

The departments active at Ducati headquarters are also responsible for the aerodynamic and fluid dynamic simulations, processed with High Performance Computing (HPC) technology based on the ThinkSystem SD530, SR630 and SR650 servers. Furthermore, to meet the needs of a fast-paced sport, a Cloud Solution Provider (CSP) agreement was signed which makes power and additional services available on demand through a public cloud service to quickly adapt to work peaks. ThinkPad P1 mobile workstations will then follow the electronics engineers to the starting grid to finalize the motorcycle setup.

Also among the new features is the ThinkStation P360 Ultra platform. A self-driving robot, equipped with a wide range of inertial and optical sensors, will travel around the circuit at the start of the GP weekend, allowing the team to obtain a digital copy of it as faithful as possible to reality. Through it, a total of 200 GB of information will be collected and processed, at 2.6 million data points per second (255 MB/s), through LiDAR (Light Detection And Ranging) sensors.
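The quoted figures can be sanity-checked with a quick back-of-envelope calculation. The per-point size and total scan time below are derived from the stated numbers, not quoted from Lenovo:

```python
# Back-of-envelope check of the LiDAR figures quoted above:
# 2.6 million data points per second at a throughput of 255 MB/s,
# with 200 GB of information collected in total.

POINTS_PER_SEC = 2.6e6
THROUGHPUT_BPS = 255e6   # 255 MB/s, in bytes per second
TOTAL_BYTES = 200e9      # 200 GB

bytes_per_point = THROUGHPUT_BPS / POINTS_PER_SEC
scan_seconds = TOTAL_BYTES / THROUGHPUT_BPS

print(f"~{bytes_per_point:.0f} bytes per data point")   # ~98 bytes
print(f"~{scan_seconds / 60:.0f} minutes of scanning")  # ~13 minutes
```

In other words, each LiDAR point carries roughly 98 bytes of sensor payload, and capturing the full 200 GB digital copy of the circuit takes on the order of a quarter of an hour of scanning at the stated rate.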

See the rest here:
MotoGP, China close to Ducati: Lenovo uses artificial intelligence in the 'Remote Garage' - GPOne.com

A Technologist Spent Years Building an AI Chatbot Tutor. He Decided It Can't Be Done. – EdSurge

When Satya Nitta worked at IBM, he and a team of colleagues took on a bold assignment: Use the latest in artificial intelligence to build a new kind of personal digital tutor.

This was before ChatGPT existed, and fewer people were talking about the wonders of AI. But Nitta was working with what was perhaps the highest-profile AI system at the time, IBM's Watson. That AI tool had pulled off some big wins, including beating humans on the Jeopardy quiz show in 2011.

Nitta says he was optimistic that Watson could power a generalized tutor, but he knew the task would be extremely difficult. "I remember telling IBM top brass that this is going to be a 25-year journey," he recently told EdSurge.

He says his team spent about five years trying, and along the way they helped build some small-scale attempts into learning products, such as a pilot chatbot assistant that was part of a Pearson online psychology courseware system in 2018.

But in the end, Nitta decided that even though the generative AI technology driving excitement these days brings new capabilities that will change education and other fields, the tech just isn't up to delivering on becoming a generalized personal tutor, and won't be for decades at least, if ever.

"We'll have flying cars before we will have AI tutors," he says. "It is a deeply human process that AI is hopelessly incapable of meeting in a meaningful way. It's like being a therapist or like being a nurse."

Instead, he co-founded a new AI company, called Merlyn Mind, that is building other types of AI-powered tools for educators.

Meanwhile, plenty of companies and education leaders these days are hard at work chasing that dream of building AI tutors. Even a recent White House executive order seeks to help the cause.

Earlier this month, Sal Khan, leader of the nonprofit Khan Academy, told the New York Times: "We're at the cusp of using A.I. for probably the biggest positive transformation that education has ever seen. And the way we're going to do that is by giving every student on the planet an artificially intelligent but amazing personal tutor."

Khan Academy has been one of the first organizations to use ChatGPT to try to develop such a tutor, which it calls Khanmigo, that is currently in a pilot phase in a series of schools.

Khan's system does come with an off-putting warning, though, noting that it makes mistakes sometimes. The warning is necessary because all of the latest AI chatbots suffer from what are known as "hallucinations," the word used to describe situations when the chatbot simply fabricates details when it doesn't know the answer to a question asked by a user.

AI experts are busy trying to offset the hallucination problem, and one of the most promising approaches so far is to bring in a separate AI chatbot to check the results of a system like ChatGPT to see if it has likely made up details. That's what researchers at Georgia Tech have been trying, for instance, hoping that their multi-chatbot system can get to the point where any false information is scrubbed from an answer before it is shown to a student. But it's not yet clear that the approach can get to a level of accuracy that educators will accept.
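The checker pattern described above can be sketched abstractly. In the sketch below, `generate()` and `verify()` are hypothetical stand-ins for two separate model calls, not any real LLM API or the Georgia Tech system itself:

```python
# Illustrative sketch of a "checker chatbot" pattern: one model drafts an
# answer, a second model flags claims it cannot confirm, and flagged
# drafts are withheld. generate() and verify() are toy stubs.

CANNED_DRAFTS = {
    "Why is the sky blue?": "Shorter wavelengths scatter more in air.",
    "Do tutors help?": "Tutors always guarantee a perfect grade.",  # suspect
}

def generate(question):
    # Stand-in for a first chatbot producing a draft answer.
    return CANNED_DRAFTS.get(question, "I don't know.")

def verify(draft):
    # Stand-in for a second model listing claims it could not confirm.
    # Here, absolute claims ("always", "guarantee") are treated as suspect.
    return [w for w in ("always", "guarantee") if w in draft]

def answer_with_check(question):
    draft = generate(question)
    if verify(draft):
        # A real system would feed the flags back into a revision prompt;
        # this sketch simply withholds the unverified answer.
        return "I'm not confident enough to answer that."
    return draft

print(answer_with_check("Why is the sky blue?"))
print(answer_with_check("Do tutors help?"))
```

The design question the researchers face is exactly the one the sketch dodges: how reliably the second model can catch fabrications without also rejecting correct answers.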

At this critical point in the development of new AI tools, though, it's useful to ask whether a chatbot tutor is the right goal for developers to head toward. Or is there a better metaphor than "tutor" for what generative AI can do to help students and teachers?

Michael Feldstein spends a lot of time experimenting with chatbots these days. He's a longtime edtech consultant and blogger, and in the past he wasn't shy about calling out what he saw as excessive hype by companies selling edtech tools.

In 2015, he famously criticized promises about what was then the latest in AI for education, a tool from a company called Knewton. The CEO of Knewton, Jose Ferreira, said his product would be "like a robot tutor in the sky that can semi-read your mind and figure out what your strengths and weaknesses are, down to the percentile." Which led Feldstein to respond that the CEO was selling "snake oil" because, Feldstein argued, the tool was nowhere near to living up to that promise. (The assets of Knewton were quietly sold off a few years later.)

So what does Feldstein think of the latest promises by AI experts that effective tutors could be on the near horizon?

"ChatGPT is definitely not snake oil, far from it," he tells EdSurge. "It is also not a robot tutor in the sky that can semi-read your mind. It has new capabilities, and we need to think about what kinds of tutoring functions today's tech can deliver that would be useful to students."

He does think tutoring is a useful way to view what ChatGPT and other new chatbots can do, though. And he says that comes from personal experience.

Feldstein has a relative who is battling a brain hemorrhage, and so Feldstein has been turning to ChatGPT to give him personal lessons in understanding the medical condition and his loved one's prognosis. As Feldstein gets updates from friends and family on Facebook, he says, he asks questions in an ongoing thread in ChatGPT to try to better understand what's happening.

"When I ask it in the right way, it can give me the right amount of detail about, 'What do we know today about her chances of being OK again?'" Feldstein says. "It's not the same as talking to a doctor, but it has tutored me in meaningful ways about a serious subject and helped me become more educated on my relative's condition."

While Feldstein says he would call that a tutor, he argues that it's still important that companies not oversell their AI tools. "We've done a disservice to say they're these all-knowing boxes, or they will be in a few months," he says. "They're tools. They're strange tools. They misbehave in strange ways, as do people."

He points out that even human tutors can make mistakes, but most students have a sense of what they're getting into when they make an appointment with a human tutor.

"When you go into a tutoring center in your college, they don't know everything. You don't know how trained they are. There's a chance they may tell you something that's wrong. But you go in and get the help that you can."

Whatever you call these new AI tools, he says, it will be useful to have an always-on helper that you can ask questions to, even if their results are just a starting point for more learning.

What are new ways that generative AI tools can be used in education, if tutoring ends up not being the right fit?

To Nitta, the stronger role is to serve as an assistant to experts rather than a replacement for an expert tutor. In other words, instead of replacing, say, a therapist, he imagines that chatbots can help a human therapist summarize and organize notes from a session with a patient.

"That's a very helpful tool rather than an AI pretending to be a therapist," he says. Even though that may be seen as boring by some, he argues that the technology's superpower is to automate things that humans don't like to do.

In the educational context, his company is building AI tools designed to help teachers, or to help human tutors, do their jobs better. To that end, Merlyn Mind has taken the unusual step of building its own so-called large language model from scratch designed for education.

Even then, he argues that the best results come when the model is tuned to support specific education domains, by being trained with vetted datasets rather than relying on ChatGPT and other mainstream tools that draw from vast amounts of information from the internet.

"What does a human tutor do well? They know the student, and they provide human motivation," he adds. "We're all about the AI augmenting the tutor."

Read more:
A Technologist Spent Years Building an AI Chatbot Tutor. He Decided It Can't Be Done. - EdSurge

Demystifying AI: The Probability Theory Behind LLMs Like OpenAI’s ChatGPT – PYMNTS.com

When a paradigm shift occurs, it is not always obvious to those affected by it.

But there is no eye of the storm equivalent when it comes to generative artificial intelligence (AI).

The technology is here. There are already various commercial products available for deployment, and organizations that can effectively leverage it in support of their business goals are likely to outperform their peers that fail to adopt the innovation.

Still, as with many innovations, uncertainty and institutional inertia reign supreme, which is why understanding how the large language models (LLMs) powering AI work is critical, not just to piercing the black box of the technology's supposed inscrutability, but also to applying AI tools correctly within an enterprise setting.

The most important thing to understand about the foundational models powering today's AI interfaces, and giving them their ability to generate responses, is the simple fact that LLMs, like Google's Bard, Anthropic's Claude, OpenAI's ChatGPT and others, are just adding one word at a time.

Underneath the layers of sophisticated algorithmic calculations, that's all there is to it.

That's because, at a fundamental level, generative AI models are built to generate reasonable continuations of text by drawing from a ranked list of words, each given different weighted probabilities based on the data set the model was trained on.
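The "ranked list with weighted probabilities" idea can be made concrete with a toy model. The hard-coded word weights below are purely illustrative stand-ins for what a real LLM learns from billions of documents:

```python
import random

# Toy next-word model: for each context word, a ranked list of possible
# next words with weights. A real LLM derives these probabilities from
# its training data; these values are invented for illustration only.
NEXT_WORD_WEIGHTS = {
    "the": {"cat": 0.5, "dog": 0.3, "market": 0.2},
    "cat": {"sat": 0.6, "ran": 0.4},
    "dog": {"barked": 0.7, "sat": 0.3},
    "sat": {"down": 1.0},
}

def generate(start, max_words=5, seed=0):
    """Repeatedly sample the next word from its weighted ranked list."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(max_words):
        choices = NEXT_WORD_WEIGHTS.get(words[-1])
        if not choices:  # no known continuation: stop generating
            break
        next_word = rng.choices(list(choices), weights=list(choices.values()))[0]
        words.append(next_word)
    return " ".join(words)

print(generate("the"))
```

Each call appends exactly one word chosen in proportion to its weight, which is the whole generation loop in miniature: everything else in a production LLM goes into computing better weights for a vastly larger vocabulary.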

Read more: There Are a Lot of Generative AI Acronyms, Here's What They All Mean

While news of AI that can surpass human intelligence helps fuel the hype of the technology, the reality is far more driven by math than it is by myth.

"It is important for everyone to understand that AI learns from data ... at the end of the day [AI] is merely probabilistics and statistics," Akli Adjaoute, AI pioneer and founder and general partner at venture capital fund Exponion, told PYMNTS in November.

But where do the probabilities that determine an AI system's output originate?

The answer lies within the AI model's training data. Peeking into the inner workings of an AI model reveals that not only is the next reasonable word being identified, weighted, then generated, but that this process occurs token by token, as AI models break apart words into more manageable sub-word units called tokens.
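A toy tokenizer shows what breaking words into sub-word units looks like. Real LLM tokenizers learn their vocabularies from data (byte-pair encoding is a common approach); the tiny hard-coded vocabulary and greedy longest-match rule below are assumptions made for illustration:

```python
# Toy sub-word tokenizer: greedy longest-match against a small vocabulary.
# Real tokenizers (e.g. byte-pair encoding) learn their vocabulary from
# the training corpus; this hand-picked vocab is purely illustrative.
VOCAB = {"un", "break", "able"}

def tokenize(word):
    tokens, i = [], 0
    while i < len(word):
        # Try the longest matching piece first.
        for j in range(len(word), i, -1):
            if word[i:j] in VOCAB:
                tokens.append(word[i:j])
                i = j
                break
        else:
            # No vocabulary piece matches: fall back to a single character.
            tokens.append(word[i])
            i += 1
    return tokens

print(tokenize("unbreakable"))  # -> ['un', 'break', 'able']
```

The payoff is that a model never has to know the word "unbreakable" as a whole: it predicts probabilities over reusable pieces, which is why unfamiliar or invented words can still be generated and scored.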

That is a big part of why prompt engineering for AI models is an emerging skill set. After all, different prompts produce different outputs based on the probabilities inherent to each reasonable continuation, meaning that to get the best output, you need to have a clear idea of where to point the provided input or query.

It also means that the data informing the weight given to each probabilistic outcome must be relevant to the query. The more relevant, the better.

See also: Tailoring AI Solutions by Industry Key to Scalability

While PYMNTS Intelligence has found that more than eight in 10 business leaders (84%) believe generative AI will positively impact the workforce, generative AI systems are only as good as the data they're trained on. That's why the largest AI players are in an arms race to acquire the best training data sets.

"There's a long way to go before there's a futuristic version of AI where machines think and make decisions. Humans will be around for quite a while," Tony Wimmer, head of data and analytics at J.P. Morgan Payments, told PYMNTS in March. "And the more that we can write software that has payments data at the heart of it to help humans, the better payments will get."

That's why, to train an AI model to perform to the necessary standard, many enterprises are relying on their own internal data to avoid compromising model outputs. By creating vertically specialized LLMs trained for industry use cases, organizations can deploy AI systems that are able to find the signal within the noise, as well as be further fine-tuned to business-specific goals with real-time data.

As Akli Adjaoute told PYMNTS back in November, "if you go into a field where the data is real, particularly in the payments industry, whether it's credit risk, whether it's delinquency, whether it's AML [anti-money laundering], whether it's fraud prevention, anything that touches payments, AI can bring a lot of benefit."

For all PYMNTS AI coverage, subscribe to the daily AI Newsletter.

The rest is here:
Demystifying AI: The Probability Theory Behind LLMs Like OpenAI's ChatGPT - PYMNTS.com

3 Artificial Intelligence (AI) Stocks to Buy Today, Still Below Their 2021 Highs – The Motley Fool

2023 has come and gone, leaving the stock market's tech sector buzzing with the promise of artificial intelligence (AI) technology. Some stocks have skyrocketed in response to the AI-based sea change, but a few were left behind -- and not always for good reason.

Three of The Motley Fool's top tech experts got together to share their most affordable AI plays in this market. Read on for the straight dope on their clear-eyed picks: Taiwan Semiconductor Manufacturing (TSM -1.25%), Amazon (AMZN -0.45%), and Applied Materials (AMAT 0.39%). All three are trading significantly below their all-time highs of 2021, and they are hungry for a comeback in 2024 and beyond.

Anders Bylund (Taiwan Semiconductor): Semiconductor-making giant Taiwan Semiconductor Manufacturing (often called TSMC) has been through a lot in the last couple of years, and most of the changes have been helpful.

Granted, the sailing wasn't all smooth. Intel's unexpected entrance on the third-party manufacturing stage added new headwinds to TSMC's business. Ongoing political tensions between China and America don't help either, though the company is working around the problem by positioning new facilities far away from the Chinese sphere of influence, such as Arizona.

But I'm still talking about a dominant player in an important industry, with tremendous growth prospects as the world economy wriggles out of the inflation-tinted straitjacket it donned in 2021. Taiwan Semiconductor's stock trades at merely 15 times forward earnings and 8.4 times sales. Share prices stand approximately 20% below their all-time highs, recorded in February of 2021 and again in January 2022.

I've been a fan of Taiwan Semi and its stock for decades. The company never ceases to surprise me with its iron-fisted grip on the chip-making market and terrific financial results. The stock has gained roughly 1,000% since I first looked into it in 2006, quadrupling the returns of the S&P 500 market index over that span -- and after all that, TSMC's stock still looks affordable and poised for further growth right now.

The chip industry should experience a glut of orders as companies of every stripe search for a foothold in the explosive AI industry, keeping TSMC's production lines more than busy for years to come. So I highly recommend grabbing a few TSMC shares before the stock price takes off again.

Nicholas Rossolillo (Amazon): Many investors found solace after the bear market with a "flight to safety" to the so-called "Magnificent Seven" stocks: Big tech platforms that kept growing and outperformed the market overall in 2023.

However, not all the Mag7 have been all that magnificent in recent years. Take Amazon, for example, which remains nearly 20% down from the all-time highs last set in late 2021.

I believe 2024 could be the year Amazon finally achieves those peaks again. The e-commerce and cloud computing leader is in the midst of a multiyear process of right-sizing its operations to boost profitability. In e-commerce, it's been filling its distribution centers with robotics for years. And in a further push to monetize its marketplace, Amazon has been rolling out advertising features for third-party merchants. Amazon already optimizes ads using AI, and late in 2023, it introduced AI-generated images for its marketers to use for promoting products.

And on the cloud computing side (where Amazon Web Services is still the cloud market leader), Amazon has reported that its customer spending seems to be solidifying after a year of trying to cut costs and conserve cash. And though it was late to the generative AI party, Amazon Web Services has been installing Nvidia GPUs into its data centers as well to keep pace with the times.

Indeed, even outside research indicates that the cloud market is poised for a monster year in 2024. Tech researcher Gartner thinks global cloud spending will rise 20% this year to around $680 billion. That could be a portent of good things to come for Amazon stock.

Amazon currently trades for about 28 times Wall Street analysts' expectations for 2024 free cash flow -- which implies this profit metric could skyrocket about 50% this year as Amazon's optimization work starts to pay off. I remain a buyer of Amazon stock at these levels.

Billy Duberstein (Applied Materials): Applied Materials is only about 10% below its late 2021 highs, but look for this all-star semiconductor leader to break that resistance level and eventually move higher.

Applied's business has a terrific combination of growth, profitability, and shareholder returns that should allow it to compound earnings well into the future. And compound earnings is the recipe for eventual new highs in the stock market.

Applied's great financial characteristics come from it being the most diversified semiconductor equipment company in the world, with leadership in several key technologies spanning leading-edge chips, lagging-edge specialty chips, and memory.

That diversification was on full display over the past year, when Applied's leading-edge and memory equipment sales went into a downturn. However, sales of lagging-edge specialty equipment usually used for producing auto and industrial chips remained strong. So while front-end wafer fab equipment is projected to decline about 15% in 2023 according to industry group SEMI, Applied actually managed to grow its semiconductor equipment sales 4.8% in its last fiscal year.

While impressive, some investors believe the previously strong industrial and auto sectors are now going into their downturn, so Applied's stock has plateaued a bit in recent months. But Applied's leading-edge tools, especially for AI chips and high-bandwidth memory, should get a boost in the near future.

In an analyst note last week, analysts at KeyBanc Capital Markets boosted their outlook for several AI-related stocks based on current channel checks. While Applied wasn't one of the stocks upgraded, its leading-edge tools do help produce the chips from each of the three stocks highlighted. So leading-edge tool growth should offset any weakness in the specialty sector. That's especially true as IDC projects the overall semiconductor market to bounce back with 20% growth in 2024.

Moreover, Applied isn't resting on its laurels. It's a forward-thinking company perpetually looking for new growth avenues and the next major technology breakthrough. For instance, the company is currently looking to apply its atomic-level manufacturing talents to augmented reality. This past week, Applied and Alphabet (GOOG -0.10%) (GOOGL -0.20%) announced a collaboration for multiple generations of Google's new lightweight augmented reality glasses platform. And last year, Applied announced it would be investing $4 billion in its groundbreaking EPIC R&D center. The EPIC center will be a nexus of collaboration between university researchers, Applied, and the company's chipmaking customers to speed up the pace of innovation.

Applied's profitability allows it to invest in new ventures like these, somewhat future-proofing its business, all while returning capital to shareholders via buybacks and a rising dividend. It shouldn't stay below its all-time high much longer.

John Mackey, former CEO of Whole Foods Market, an Amazon subsidiary, is a member of The Motley Fool's board of directors. Suzanne Frey, an executive at Alphabet, is a member of The Motley Fool's board of directors. Anders Bylund has positions in Alphabet, Amazon, Intel, and Nvidia. Billy Duberstein has positions in Alphabet, Amazon, Applied Materials, and Taiwan Semiconductor Manufacturing. His clients may own shares of the companies mentioned. Nicholas Rossolillo has positions in Alphabet, Amazon, Applied Materials, and Nvidia. The Motley Fool has positions in and recommends Alphabet, Amazon, Applied Materials, Nvidia, and Taiwan Semiconductor Manufacturing. The Motley Fool recommends Gartner and Intel and recommends the following options: long January 2023 $57.50 calls on Intel, long January 2025 $45 calls on Intel, and short February 2024 $47 calls on Intel. The Motley Fool has a disclosure policy.

Read more:
3 Artificial Intelligence (AI) Stocks to Buy Today, Still Below Their 2021 Highs - The Motley Fool

Comparing Student Reactions To Lectures In Artificial Intelligence And Physics – Science 2.0

In the past two weeks I visited two schools in Veneto to engage students with the topic of Artificial Intelligence, which is something everybody seems happy to hear about these days: on the 10th of January I visited a school in Vicenza, and on the 17th a school in Venice. In both cases there were about 50-60 students, but there was a crucial difference: while the school in Venice (the "Liceo Marco Foscarini", where I have given lectures in the past within the project called "Art and Science") was a classical lyceum and the high-schoolers who came to listen to my presentation were between 16 and 18 years old, the one in Vicenza was a middle school, and its students were between 11 and 13 years old.

Since the contents of the lecture could withstand virtually no change - I was too busy during these first few post-Christmas weeks - the two events made an effective testing ground to spot differences in the reactions of the two audiences. To be honest, I approached the first event worried that the content I was presenting would be a bit overwhelming for those young kids, so in hindsight the impression I got may have been biased by this "low expectations" attitude.

To make matters worse, because my lecture was the first in a series organized by a local academy, with the participation of the Comune of Vicenza, it had to follow speeches from the school director, the mayor of Vicenza, and a couple of other introductions - something I was sure would further drain the young audience's stamina and willingness to sit through a frontal lecture. In fact, I was completely flabbergasted.

Not only did the middle schoolers in Vicenza follow the 80-minute talk I had prepared attentively and in full silence; they also interrupted a few times with witty questions (as I had begged them to do, in fact). At the end of the presentation, I was hit by a rapid succession of questions ranging over the full contents of the lecture - from artificial intelligence to particle physics, to details about the SWGO experiment, astrophysics, and what not. I counted about 20 questions and then lost track. The questions continued after the end of the event, when some of the students, not yet completely satisfied, came to meet me and ask for more detail.

Above, a moment during the lecture in Vicenza

When I gave the same lecture in Venice, I must say I again received several interesting questions. But in comparison, the Foscarini teenagers were clearly a bit less enthusiastic about the topic of the lecture as a whole. Maybe my assessment comes from the bias I mentioned earlier; and in part, I have to say I have much more experience with high-schoolers than with younger students, so I knew better what to expect and was not surprised by the outcome.

This comparison seems to align with something once observed by none other than Carl Sagan. I have to thank Phil Warnell here, who, commenting on a Facebook post I wrote about my experience with middle schoolers, cited a piece from Sagan that is quite relevant:

I cannot but concur with what Sagan says in these two quotes. I also believe that part of the unwillingness of high-schoolers to ask questions is due to the judgment of their peers. Until we are 12 or 13, most of us have not yet experienced the negative feedback that being participative in school events can bring, and we do not yet fear the reaction of our friends and not-so-friendly schoolmates. That kind of experience seems to grow a shell around students, making them a bit less willing to expose themselves and speak up to discuss what they did not understand, or to express enthusiasm. I think that is a bit sad, but it is of course part of our early trajectory amid experiences that form us and equip us with the vaccines we are going to need for the rest of our lives.

See original here:
Comparing Student Reactions To Lectures In Artificial Intelligence And Physics - Science 2.0

Musk: Reports of xAI’s $20B Valuation Target Not Accurate – PYMNTS.com

Elon Musk is dismissing reports that his AI company has raised $500 million.

"This is simply not accurate," Musk wrote on his social media platform X Friday (Jan. 19), following a Bloomberg News story saying that his artificial intelligence (AI) startup was halfway to its goal of $1 billion in funding.

Musk also deemed the report, which said xAI was discussing a valuation of $15 billion to $20 billion, "fake news."

The billionaire Tesla CEO announced the launch of xAI in June, saying it would bring together a collection of AI industry veterans and endeavor to "understand reality."

The company debuted its AI chatbot Grok in November, saying the tool has capabilities that rival Meta's LLaMA 2 AI model and can handle math problems and reasoning at a level approaching that of OpenAI's GPT-3.5.

The Bloomberg report said Musk and investors are expected to finalize terms in the next couple of weeks, according to sources familiar with the matter.

One source said some of the parties want to see whether they can get computing power in addition to, or in some cases instead of, equity shares in xAI. This would help venture capital firms' portfolio companies, which need intensive data processing capabilities to build AI products of their own.

News of xAI's $1 billion funding goal emerged last month in a company filing with the U.S. Securities and Exchange Commission (SEC).

As PYMNTS wrote at the time, Musk has been a vocal critic of OpenAI, the highest-profile AI startup and developer of ChatGPT. Musk had been involved with that company at the beginning, but has been critical of its establishment of a for-profit arm and Microsoft's ties to the company.

Musk was one of the first signatories to an open letter published by AI watchdog group Future of Life Institute last year warning of the potential dangers of AI.

"Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be managed," the letter said, while also calling for all AI labs to "immediately pause for at least six months the training of AI systems more powerful than GPT-4."

See more here:
Musk: Reports of xAI's $20B Valuation Target Not Accurate - PYMNTS.com

The Urgent but Difficult Task of Regulating Artificial Intelligence – Amnesty International

By David Nolan, Hajira Maryam & Michael Kleinman, Amnesty Tech

The year 2023 marked a new era of AI hype, rapidly steering policymakers towards discussions on the safety and regulation of new artificial intelligence (AI) technologies. The feverish year in tech started with the launch of ChatGPT in late 2022 and ended with a landmark agreement being reached on the EU AI Act. Whilst the final text is still being ironed out in technical meetings over the coming weeks, early signs indicate that the Western world's first AI rulebook goes some way to protecting people from the harms of AI but still falls short in a number of crucial areas, failing to ensure human rights protections, especially for the most marginalised. This came soon after the UK Government hosted an inaugural AI Safety Summit in November 2023, where global leaders, key industry players, and select civil society groups gathered to discuss the risks of AI. Although the growing momentum and debate on AI governance is welcome and urgently needed, the key question for 2024 is whether these discussions will generate concrete commitments focused on the most important present-day AI risks, and, critically, whether they will translate into further substantive action in other jurisdictions.

Whilst AI developments do present new opportunities and benefits, we must not ignore the documented dangers posed by AI tools when they are used as a means of societal control, mass surveillance and discrimination. All too often, AI systems are trained on massive amounts of private and public data, data which reflects societal injustices and often leads to biased outcomes that exacerbate inequalities. From predictive policing tools, to automated systems used in public sector decision-making to determine who can access healthcare and social assistance, to monitoring the movement of migrants and refugees, AI has flagrantly and consistently undermined the human rights of the most marginalised in society. Other forms of AI, such as fraud detection algorithms, have also disproportionately impacted ethnic minorities, who have endured devastating financial problems as Amnesty International has already documented, while facial recognition technology has been used by the police and security forces to target racialised communities and entrench Israel's system of apartheid.

So, what makes regulation of AI complex and challenging? First, there is the vague nature of the term AI itself, which makes efforts to regulate this technology more cumbersome. There is no widespread consensus on the definition of AI, because the term does not refer to a singular technology but rather encapsulates a myriad of technological applications and methods. Because AI systems are used in many different domains across the public and private sector, a large number of varied stakeholders are involved in their development and deployment; such systems are a product of labour, data, software and financial inputs, and any regulation must grapple with both upstream and downstream harms. Further, these systems cannot be strictly considered hardware or software: their impact comes down to the context in which they are developed and implemented, and regulation must take this into account.

As we enter 2024, now is the time not only to ensure that AI systems are rights-respecting by design, but also to guarantee that those impacted by these technologies are meaningfully involved in decision-making on how AI should be regulated, and that their experiences are continually surfaced and centred within these discussions.

Alongside the EU legislative process, the UK, US, and others have set out their own roadmaps for identifying the key risks AI technologies present and how they intend to mitigate them. Whatever the complexities of these legislative processes, they should not delay efforts to protect people from the present and future harms of AI, and there are crucial elements that we, at Amnesty, know any proposed regulatory approach must contain. Regulation must be legally binding and centre the already documented harms to people subject to these systems. Commitments and principles on the responsible development and use of AI, which form the core of the current pro-innovation regulatory framework being pursued by the UK, do not offer adequate protection against the risks of emerging technology and must be put on a statutory footing.

Similarly, any regulation must include broader accountability mechanisms over and above the technical evaluations being pushed by industry. Whilst these may be a useful string in any regulatory toolkit's bow, particularly in testing for algorithmic bias, bans and prohibitions cannot be off the table for systems fundamentally incompatible with human rights, no matter how accurate or technically efficacious they purport to be.

Others must learn from the EU process and ensure there are no loopholes allowing public and private sector players to circumvent regulatory obligations; removing any exemptions for AI used within national security or law enforcement is critical to achieving this. It is also important that, where future regulation limits or prohibits the use of certain AI systems in one jurisdiction, no loopholes or regulatory gaps allow the same systems to be exported to other countries where they could be used to harm the human rights of marginalised groups. This remains a glaring gap in the UK, US, and EU approaches, as they fail to take into account the global power imbalances of these technologies, especially their impact on communities in the Global Majority whose voices are not represented in these discussions. There have already been documented cases of outsourced workers being exploited in Kenya and Pakistan by companies developing AI tools.

More than lip service from lawmakers, we need binding regulation that holds companies and other key industry players to account and ensures that profits do not come at the expense of human rights protections. International, regional and national governance efforts must complement and catalyse each other, and global discussions must not come at the expense of meaningful national regulation or binding regulatory standards; these are not mutually exclusive. This is the level at which accountability is served, and we must learn from past attempts to regulate tech, which means ensuring robust mechanisms are introduced to allow victims of AI-inflicted rights violations to seek justice.

View original post here:
The Urgent but Difficult Task of Regulating Artificial Intelligence - Amnesty International

Artificial Intelligence: inevitable integration in enterprises | Top Stories | theweeklyjournal.com – The Weekly Journal

Given the accelerated pace at which Artificial Intelligence (AI) is being adopted to enhance the way private and public agencies carry out their work, more and more people must be trained to understand the new technology's impact on their lives.

According to CRANT's chief executive, Álvaro Meléndez, AI represents a transformation for marketing, in which brands will have to appeal to credibility at a time when consumers are made vulnerable by the constant generation of content that is mostly not real.


"Artificial intelligence, beyond the superficial level - there has been a lot of talk about it helping you generate images, video or text, and it obviously helps a lot because it makes the work easier and creates new opportunities - brings a much deeper transformation, which is what interests us: now that all this is possible, much of what is generated can be misleading ... it may be a lie," said Meléndez.

Because the situation involves a new way of consuming information, the executive considered that it is an opportunity for brands to use the tool responsibly to generate a positive impact through marketing.

"It's a different way of thinking about marketing. It's no longer about communicating a product or a service, but now it's about how you are showing reality," the executive commented.

To address the problem, Melndez said that companies must educate themselves in the use of AI in an ethical manner, and become a source of confidence for the consumer.

Although marketing is still in the early stages with AI, he believes that by the end of this year all companies will have incorporated it, giving them a competitive edge over those that have not.

That is why Meléndez designed and carried out the "AI for Marketers" workshop, in collaboration with the agency de la Cruz, to give a group of marketers an explanatory framework of the basic principles, ethics, tools, advantages and opportunities that AI provides, so that they are not left behind by the incursion of the technology.

"The goal is to facilitate a much deeper understanding of what artificial intelligence is and how it can be applied, both to enhance their work with their companies and their brands, but also to enhance their career. Artificial intelligence (AI) is not for tech people, it's not for data scientists. We all have to understand and master artificial intelligence," said the founder of the company dedicated to the creative application of artificial intelligence.

Results of AI in companies

Among the companies that incorporate Artificial Intelligence as an efficiency strategy to generate higher-value content, Meléndez cited Tomorrow AI, which generates around 60,000 marketing materials monthly with a team of only four people.

"Another example is Duolingo, the company that teaches languages. They had to lay off - which is the downside - about a thousand people, because a lot of the content that Duolingo produces, and the way they educate people, can now be done through artificial intelligence," said the CRANT executive.

When asked by The Weekly Journal about the repercussions of AI on employment, Meléndez pointed out that some people losing their jobs is inevitable, because companies will realise that they can carry out those tasks through technology.

Although many jobs will disappear, he asserted that a creative explosion will emerge that will give way to entrepreneurship.


"We will start to see companies doing things that we would never have imagined possible, and that is interesting because it will break the market, and large established companies will disappear because others have solved things in a better way," said Meléndez.

"If you are a person who has an idea and wants to execute it, but can't because you don't have the resources or because you don't know how to program, let's say you want to make an application, with artificial intelligence you will be able to make that application without knowing how to program and launch your company with almost no employees and without hiring anyone," he added.

At present, estimates by investment banking group Goldman Sachs on the rise of AI-powered platforms suggest that 300 million jobs around the world could be automated and that, in the case of the United States, 25% to 50% of workloads could be replaced.

Read the original:
Artificial Intelligence: inevitable integration enterprises | Top Stories | theweeklyjournal.com - The Weekly Journal