Category Archives: AI

Why is AI hard to define? | BCS – BCS


A good working definition of applied AI is: the acquisition, manipulation, and exploitation of knowledge by systems whose behaviours may change on the basis of experience, or which are not constrained to be predictable or deterministic.

That (applied) AI knowledge can be:

Hybrid approaches, e.g. expert-validated machine learning, work well. Some large pre-trained models use this approach to a surprising degree.

Complex ecosystems of software systems can exhibit emergent behaviour, or intelligence. Just as ant colonies exhibit more intelligence than individual ants, AI-behaviour can emerge from complex ordinary software systems.

Until recently, once an AI technique was established, it was no longer perceived as AI; knowing how the rabbit is pulled out of the hat destroys the magic. This was the de-facto moving goal-posts definition of AI: that which a computer can't do.

AI used to be wide but shallow: horizontally applicable, but not powerful, such as a 1990s multi-lingual summariser which, though effective, had little idea of what it was writing. Alternatively AI could be deep but narrow: powerful only on tightly related problems.

The art of the computer scientist is explored in Professor Wirth's influential book, Algorithms + Data Structures = Programs. But some AI systems are now either creating algorithms and data structures, or acting as if they have:

GLLMs have changed perceptions: AI can at last do things again, and AI systems which invent programs (self-programming computers?) are both wide and deep. Some even give an appearance of edging up from machine intelligence towards sentience; should accidental or deliberate machine sentience arrive, we won't necessarily understand or even recognise it.

With greater public understanding of AI capabilities, the label AI is less frequently used simply to glamourise mundane software, though it remains a popular buzz-word, replacing the meaningless 'big data'.

AI discussions often conflate its three depths. Overloaded terms help marketing, but hinder understanding: 'deep learning' means a neural net with more than three levels, but is often misunderstood as 'profound' learning.
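
To make the narrow, technical sense of the term concrete, here is a minimal sketch (in PyTorch, purely illustrative) of what conventionally counts as 'deep': simply a network with more than three layers. The sizes and data are arbitrary.

```python
# Minimal sketch: "deep" in the narrow, technical sense -- a neural net
# with more than three layers (PyTorch; sizes and data are arbitrary).
import torch
import torch.nn as nn

deep_net = nn.Sequential(          # four weight layers -> "deep"
    nn.Linear(32, 64), nn.ReLU(),  # layer 1
    nn.Linear(64, 64), nn.ReLU(),  # layer 2
    nn.Linear(64, 64), nn.ReLU(),  # layer 3
    nn.Linear(64, 2),              # layer 4: output
)

x = torch.randn(8, 32)             # a batch of 8 dummy inputs
print(deep_net(x).shape)           # torch.Size([8, 2])
```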

When systems make decisions that affect welfare, explainability becomes important. Explainability is the AI equivalent of human accountability. Arguably there is a need to make GLLMs explainable. Unfortunately, by their very black-box (neural net) nature they are not. Powerful AI (which learns its own knowledge representations and reasoning techniques) might be necessarily intrinsically opaque, with unexplainable decisions.

Misunderstanding AI characteristics can lead people to try regulating AI techniques, but it is only the system's effect that might be regulated, not the means used to achieve it. A wrongly declined mortgage has equal impact whether due to a requirements mistake, a biased dataset, a database error, a bug, an incorrect algorithm, or a misapplied AI technique. Regulating AI as if it were just clever software would impinge on the fundamental characteristics from which its capability flows, and inhibit its benefits. A reasonable requirement would be that any system, not just AI, which impinges on welfare must be able to explain its decisions.
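
To make "explain its decisions" concrete, here is a minimal sketch of one generic post-hoc technique, permutation importance, applied to a toy mortgage-style classifier. The feature names, data, and model are invented for illustration; this is one of many possible explainability approaches, not a method the article prescribes.

```python
# A toy post-hoc explanation: permutation importance on a hypothetical
# mortgage-decision model. All features, data and labels are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
features = ["income", "debt_ratio", "credit_history_years"]
X = rng.normal(size=(500, 3))
# Synthetic approve/decline labels driven mostly by the first two features.
y = (X[:, 0] - X[:, 1] + 0.1 * rng.normal(size=500) > 0).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for name, score in zip(features, result.importances_mean):
    print(f"{name}: {score:.3f}")  # higher = feature matters more to decisions
```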

As a colleague observed, defining AI is like defining time: we all think we know what it means, but it is actually hard to pin down. Just as our understanding of time changes (appropriately enough) with time, so AI itself may cause us to change our definition of AI.

Andrew Lea (FBCS), with the connivance of the BCS AI interest group - based on his four decades of applying AI in commerce, industry, aerospace and fraud detection - explores why AI is so hard to define. He has been fascinated by AI ever since reading Natural Sciences at Cambridge and studying Computing at London University.

Read this article:

Why is AI hard to define? | BCS - BCS

New AI model identifies new pharmaceutical ingredients and improves existing ones – Phys.org


New active pharmaceutical ingredients lay the foundations for innovative and better medical treatments. However, identifying them and, above all, producing them through chemical synthesis in the laboratory is no mean feat. To home in on the optimum production process, chemists normally use a trial-and-error approach: they derive possible methods for laboratory synthesis from known chemical reactions and then test each one with experiments, a time-consuming approach that is littered with dead ends.

Now, scientists at ETH Zurich, together with researchers from Roche Pharma Research and Early Development, have come up with an approach based on artificial intelligence that helps to determine the best synthesis method, including its probability of success. Their paper is published in the journal Nature Chemistry.

"Our method can greatly reduce the number of lab experiments required," explains Kenneth Atz, who developed the AI model as a doctoral student together with Professor Gisbert Schneider at the Institute of Pharmaceutical Sciences at ETH Zurich.

Active pharmaceutical ingredients usually consist of a scaffold onto which are bound what are known as functional groups. These are what give the substance its highly specific biological function. The scaffold's job is to bring the functional groups into a defined geometric alignment so that they can act in a targeted manner. Imagine a crane construction kit, in which a framework of connecting elements is bolted together in such a way that functional assemblies like rollers, cable winches, wheels and the driver's cab are arranged correctly in relation to each other.

One way to produce drugs with a new or improved medicinal effect involves placing functional groups at new sites on the scaffolds. This might sound simple, and it certainly wouldn't pose a problem on a model crane, but it is particularly difficult in chemistry. This is because the scaffolds, being primarily composed of carbon and hydrogen atoms, are themselves practically nonreactive, making it difficult to bond them with functional atoms such as oxygen, nitrogen or chlorine. For this to succeed, the scaffolds must first be chemically activated via detour reactions.

One activation method that opens up a great many possibilities for different functional groups, at least on paper, is borylation. In this process, a chemical group containing the element boron is bonded to a carbon atom in the scaffold. The boron group can then simply be replaced by a whole range of medically effective groups.

"Although borylation has great potential, the reaction is difficult to control in the lab. That's why our comprehensive search of the worldwide literature only turned up just over 1,700 scientific papers on the subject," Atz says, describing the starting point for his work.

The idea was to take the reactions described in the scientific literature and use them to train an AI model, which the research team could then use to consider new molecules and identify as many sites as possible on them where borylation would be feasible. However, the researchers ultimately fed their model only a fraction of the literature they found. To ensure that the model wasn't misled by false results from careless research, the team limited itself to 38 particularly trustworthy papers. These described a total of 1,380 borylation reactions.

To expand the training dataset, the team supplemented the literature results with evaluations of 1,000 reactions carried out in the automated laboratory operated by Roche's medicinal chemistry research department. This allows many chemical reactions to be carried out at the milligram scale and analyzed simultaneously.

"Combining laboratory automation with AI has enormous potential to greatly increase efficiency in chemical synthesis and improve sustainability at the same time," says David Nippa, a doctoral student from Roche who accomplished the project together with Atz.

The predictive capabilities of the model generated from this data pool were verified using six known drug molecules. In 5 out of 6 cases, experimental testing in the laboratory confirmed the predicted additional sites. The model was just as reliable when it came to identifying sites on the scaffold where activation isn't possible. What's more, it determined the optimum conditions for the activation reactions.

Interestingly, the predictions got even better when 3D information on the starting materials was included rather than just their two-dimensional chemical formulas. "It seems the model develops a kind of three-dimensional chemical understanding," Atz says.
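
As a rough illustration of why geometry can help, the sketch below scores each atom of a molecule for borylation feasibility from per-atom features, once with and once without 3D coordinates. This is not the authors' geometric deep-learning architecture (the published details are in the Nature Chemistry paper); the shapes, features, and random inputs are invented purely to show the idea.

```python
# Highly simplified sketch: per-atom borylation-site scoring, with and
# without 3D coordinates. Untrained toy model; all inputs are invented.
import torch
import torch.nn as nn

N_ATOMS, N_ELEM_FEATS = 24, 8                    # hypothetical molecule
elem_feats = torch.randn(N_ATOMS, N_ELEM_FEATS)  # element/bond descriptors ("2D")
coords = torch.randn(N_ATOMS, 3)                 # 3D atom positions

scorer_2d = nn.Sequential(nn.Linear(N_ELEM_FEATS, 32), nn.ReLU(), nn.Linear(32, 1))
scorer_3d = nn.Sequential(nn.Linear(N_ELEM_FEATS + 3, 32), nn.ReLU(), nn.Linear(32, 1))

p_2d = torch.sigmoid(scorer_2d(elem_feats)).squeeze(-1)
p_3d = torch.sigmoid(scorer_3d(torch.cat([elem_feats, coords], dim=-1))).squeeze(-1)
# p_3d[i]: predicted probability that atom i is a feasible borylation site;
# the extra geometric inputs are what the article credits for the improvement.
print(p_2d.shape, p_3d.shape)  # torch.Size([24]) torch.Size([24])
```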

The success rate of the predictions also impressed the researchers at Roche Pharma Research and Early Development. In the meantime, they have successfully used the method to identify sites in existing drugs where additional active groups can be introduced. This helps them to develop new and more effective variants of known active pharmaceutical ingredients more quickly.

Atz and Schneider see numerous other possible applications for AI models that are based on a combination of data from trustworthy literature and from experiments conducted in an automated laboratory. For instance, this approach ought to make it possible to create effective models for activation reactions other than borylation. The team is also hoping to identify a wider range of reactions for further functionalizing the borylated sites.

Atz is now involved in this further development work as an AI scientist in medicinal chemistry research at Roche. "It is very exciting to work at the interface of academic AI research and laboratory automation. And it is a pleasure to be able to drive this forward with the best content and methods," says Atz.

Schneider adds, "This innovative project is another outstanding example of collaboration between academia and industry and demonstrates the enormous potential of public-private partnerships for Switzerland."

More information: David F. Nippa et al, Enabling late-stage drug diversification by high-throughput experimentation with geometric deep learning, Nature Chemistry (2023). DOI: 10.1038/s41557-023-01360-5

Journal information: Nature Chemistry

More here:

New AI model identifies new pharmaceutical ingredients and improves existing ones - Phys.org

Generative AI Translation: Proceed with Caution – Spiceworks News and Insights

Heather Shoemaker, founder of Language I/O, discusses the complexities and solutions to achieving seamless multilingual communication in this in-depth analysis.

Artificial intelligence has made remarkable strides in natural language processing (NLP). In this era of technological evolution, the boundaries of machine translation are being pushed to unprecedented levels, and the rise of generative AI has only added to that push.

The language translation technology industry is booming and shows no sign of slowing. The global language translation software market was valued at $10.81 billion in 2022 and is expected to skyrocket to $35.93 billion by 2030.

Enter NLP-based language translation platforms like ChatGPT, Google Translate, and Microsoft Translator, all large language models (LLMs): computer programs trained on huge amounts of publicly available data. These sophisticated programs can understand human language patterns and the intent or meaning behind the language. While some hail these cutting-edge tools as a panacea for solving all business problems, generative AI solutions still aren't quite ready to meet businesses' complete language translation needs.

Questions have also arisen about whether businesses can trust generative AI for accurate language translation and whether this technology is secure. In the wake of these questions, the best course forward is to proceed cautiously.

So here's the problem. Gen AI is great at quickly generating content (and coding and translating, too), and training it on specific data yields the most accurate and useful responses. Unfortunately, gen AI often lacks the context to produce the best results because it hasn't been trained on industry- or business-specific data. Just as a general LLM such as ChatGPT can't accurately answer questions about a company's proprietary content it was never trained on, a general LLM or an untrained, AI-powered translation platform such as Google Translate cannot accurately translate content for a domain it was never trained on. In both cases, the AI lacks the needed context.

Although businesses benefit from investing in real-time translation technology in lieu of hiring additional multilingual employees, the tool/tech needs the proper training. Customer satisfaction increases when the right tech is in place to help current team members communicate effortlessly with customers regardless of their language.

The number of independent machine translation services available has increased sixfold since 2017. Despite this notable uptick, generative AI translation models remain under development. They are known for unreliability, hallucinations, or generic responses based on general data, especially when asked to tackle complex or nuanced texts. Generative AI works best with well-constructed inputs, but in a business setting, where people of different backgrounds and familiarity (or lack thereof) with language technology are using chatbots to request information or ask for help in real time, communication suffers. Chatbots also give internal teams another quick way to access data, yet many traits of real-time communication can trip up translations.

There are plenty of pathways leading to sub-par generative AI translation outputs. Without contextualizing technology and training employees to use it and feed it the correct inputs, organizations can't trust that generative AI translations will achieve the caliber needed for success in a customer service or business environment.

See More: Why Source Recall Matters: Building Trust in AI

The generative AI boom saw exponential growth in the space, but policies and protections associated with AI still need to catch up with the technology. For example, while 86% of organizations adopting AI say it's critical to have guidelines about its ethical usage, only 6% have implemented policies outlining responsible use. This policy gap leaves plenty of space for potential pitfalls when using generative AI tools.

As generative AI usage continues to grow, future iterations of these LLMs will likely solve at least some of these problems. Still, until then, organizations must implement responsible use policies.

See More: Biden Signs Executive Order on Artificial Intelligence Protections

Most well-known LLMs are trained on data in English or Chinese. As technology continues to influence the reframing of work, education, art, business, and more, the more than 6 billion people worldwide who speak 7,000 other languages are at risk of being left out. For example, Meta warned that its updated LLM released in July would work best with queries in English because most of its training data was in that language, saying "the model may not be suitable for use in other languages."

For organizations that want to facilitate multilingual communication with global customer bases, this language gap further illustrates the shortcomings of generative AI tools. To achieve the best real-time communications, the smartest organizations invest in contextualizing technology. For generative AI platforms, this involves some form of domain adaptation such as prompt engineering, RAG (retrieval augmented generation), or fine-tuning.
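
As an illustration of the lightest-weight of those options, here is a minimal sketch of glossary-based prompt augmentation, a simple retrieval-style tactic: look up company-approved terminology and inject it into the translation prompt. The glossary entries, matching rule, and llm_translate() stub are hypothetical placeholders, not any vendor's API.

```python
# Minimal sketch of retrieval-style prompt augmentation for translation.
# GLOSSARY, the matching rule and llm_translate() are invented placeholders.
GLOSSARY = {  # hypothetical domain terms -> approved French renderings
    "ticket": "demande d'assistance",
    "churn": "attrition client",
}

def build_prompt(text: str, target_lang: str) -> str:
    # Retrieve only the glossary entries relevant to this input.
    hits = {t: g for t, g in GLOSSARY.items() if t in text.lower()}
    rules = "\n".join(f'- translate "{t}" as "{g}"' for t, g in hits.items())
    return (
        f"Translate the following into {target_lang}.\n"
        f"Use these company-approved terms:\n{rules}\n\n{text}"
    )

def llm_translate(prompt: str) -> str:
    raise NotImplementedError("call your LLM provider here")  # placeholder

print(build_prompt("Why was my ticket closed?", "French"))
```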

However, to ensure a generative AI platform can accurately answer questions in multiple languages as well as translate between languages for a specific business, this domain adaptation has to occur not just in the base language but across all the languages the company supports. Gartner found that companies find the process of training AI in just one language more difficult than they expected it to be. Further, according to Artificial Solutions, when faced with the task of duplicating that training across all supported languages, companies are abandoning the effort. Companies are in dire need of a solution that automates the multilingual domain adaptation on their behalf, such as those provided by Language I/O.

That effort is worthwhile, however, because implementing this technology can help properly translate previously problematic language like misspellings, jargon, or slang. Organizations should prioritize this contextualizing aspect to avoid incoherent conversations and, ultimately, dissatisfied customers.

Even though LLM-based technologies are popular, they can't yet produce the most accurate business translations. Utilizing contextualizing technology, such as that provided by Language I/O, alongside generative AI tools can help achieve top-notch translations. Investing in this type of technology maximizes existing headcount, shortens wait times, increases availability to 24/7, and supports more world languages, saving money and resources while driving customer satisfaction, employee inclusivity, and overall business success.

How can businesses overcome the hurdles in generative AI translation? Let us know on Facebook, X, and LinkedIn. We'd love to hear from you!


Read the original post:

Generative AI Translation: Proceed with Caution - Spiceworks News and Insights

The top 9 AI people in finance – Business Insider


Generative AI is the hottest venture capital investment theme in at least a decade.

Hope and hype around the technology also powered a rebound in the public equity market in 2023, following a bruising slump the previous year.

Business Insider's 2023 AI 100 list includes several experts who combine AI know-how with experience in areas of finance such as payments, trading, banking, financial data, and startup investing.

Billionaire investor and LinkedIn founder Reid Hoffman was all in on AI before it was all the rage among venture capitalists. He was an early investor in OpenAI, his firm Greylock has backed dozens of AI startups in the past decade, and he co-founded Inflection AI, a startup that has raised $1.5 billion from Microsoft, Nvidia, and Microsoft cofounder Bill Gates. Unsurprisingly, Hoffman is "beating the positive drum very loudly" on AI, he told The New York Times earlier this year.

Guo made a name for herself backing buzzy startups, including several up-and-coming AI companies. So when she launched her own $100 million venture capital firm Conviction in 2022, there was no question about the fund's focus. At Conviction, she's put early checks into AI startups including Harvey, an AI company for law firms, and business analytics AI company Seek AI. She also co-hosts a popular AI podcast with entrepreneur and investor Elad Gil titled "No Priors," which features interviews with prominent AI and machine learning founders and experts.

The press has dubbed Casado Andreessen Horowitz's "AI Crusader," and the investor has been on a mission to show Silicon Valley and Washington, DC, the benefits of AI. Casado was an early advocate for the potential opportunity of generative AI and has helped the firm make early bets on startups like Pinecone and Coactive. Casado himself has some experience with success in startup-land; his A16Z-backed software company Nicira was bought by VMware for $1.26 billion in 2012.

Born and raised in Silicon Valley, Huang has witnessed transformational technology companies growing up in her backyard of Mountain View. Now Huang, a partner at Sequoia Capital, bets on the companies that will be the future leaders of AI and has helped the firm land investments in splashy AI startups like Harvey and LangChain. Huang has been especially excited about the possibilities of generative AI, even penning a blog post on Sequoia's website with an open call for founders to pitch their AI startups to the fund.

Dr. Kambadur heads the AI Engineering group at Bloomberg, which consists of over 250 researchers and engineers. Dr. Kambadur, who was previously a researcher at IBM, and his team of academics use AI to develop research, communications, financial analytics, and trading systems for the financial data giant. Bloomberg is betting big on AI to streamline its products and operations. Kambadur recently said he is looking to grow its AI engineering team by as many as 50 engineers in London and New York City by the end of the year.

Taneja is Visa's president of technology and leads its AI efforts. Visa invests hundreds of millions of dollars into AI and data infrastructure annually to improve payment security, risk management, and the employee experience. "AI is going to be a huge part of how we grow, but it'll also be a part and parcel of everybody's work," Taneja told Insider. Visa has leveraged AI since 1993 and currently uses over 300 AI models that help with everything from securing its massive telecommunications network to fraud prevention, in one instance preventing $27 billion worth of fraud in a single year, Taneja said.

Veloso leads AI research at the biggest bank in the US. Her team of researchers, engineers, and mathematicians help define JPMorgan's approach to AI from an academic and research perspective. While her team doesn't handle AI deployments at JPMorgan, they are at the forefront of exploring what is and isn't possible with AI. As the former head of Carnegie Mellon University's machine learning department, Veloso leverages her strong academic background to explore how AI can be used to fight financial crime, manage the bank's massive data estate, and comply with regulations. Veloso is also a member of a new unit dedicated to data, analytics, and AI strategy at JPMorgan.

As co-chief investment officer of one of the largest global hedge funds, Jensen established a team of 17 to reinvent Bridgewater with AI and machine learning. Bridgewater even has a fund run with machine-learning techniques. Jensen believes machines can already replicate and outpace human reasoning. "You have an 80th-percentile investment associate technologically. You have millions of them at once. And if you have the ability to control their hallucinations and their errors by having a rigorous statistical backdrop, you could do a tremendous amount at a rapid rate," Jensen told Bloomberg. He's long invested in AI, participating in OpenAI's first fundraise, and he wrote the first check for generative AI startup Anthropic.

As Goldman Sachs' chief information officer, Argenti defines the bank's AI strategy and leads a 12,000-employee engineering organization. His AI-focused team builds applications that span improving client services, accelerating app deployment, and reducing the manual effort and costs involved in operational tasks. His first AI applications focused on helping software developers cut down on repetitive tasks like testing and making it easier to share, document, and summarize code within Goldman. He also launched an AI application to classify and categorize the millions of documents the bank receives and is experimenting with large language models to extract data from these documents so employees can take action more quickly and easily.

Original post:

The top 9 AI people in finance - Business Insider

Artificial Consciousness Tech, Inc. Awarded 2023 AI Excellence Award for Groundbreaking Artificial Consciousness System – Yahoo Finance

New York, New York--(Newsfile Corp. - November 24, 2023) - Artificial Consciousness Tech, Inc. (ACT), a leading innovator in artificial intelligence, has been recognized with the 2023 AI Excellence Award by the Business Intelligence Group. This award acknowledges ACT's development of an advanced Artificial Consciousness operating system, a notable advancement in AI technology.

ACT's patented technology is characterized by its unique capability to equip machines with advanced cognitive functions, resembling human emotions and thought processes. The system utilizes a sophisticated framework of light and contrast patterns, enabling artificial entities to understand their existence, engage in decision-making, and demonstrate emotion, a significant development in the field of AI.

Nam Kim, CEO and inventor at ACT, remarked on the achievement, "The 2023 AI Excellence Award reflects our commitment to AI innovation. Our focus is on evolving AI technology to enhance its interaction with humans. This award is an encouragement for our ongoing work in this field."

The Artificial Consciousness system by ACT opens new possibilities in AI applications, from improving virtual reality experiences to fostering ethical AI interactions. "Our technology aims to create AI systems that can think and feel, contributing to the evolution of AI," said Nam Kim.

The potential applications of this technology span various industries, including healthcare, automotive, and consumer electronics, enabling AI to understand language, grasp ethical concepts, and exhibit personality traits.

Following ACT's recent recognition with the 2023 Global Recognition Award, the AI Excellence Award further establishes the company as an innovator in AI technology. ACT's Artificial Consciousness technology represents a significant step forward in AI capabilities, marking an important milestone in the industry.

About Artificial Consciousness Tech, Inc. (ACT):

ACT is a New York-based company that specializes in cutting-edge AI technology. Nam Kim, the CEO and founder of the company, focuses on developing advanced AI systems that can mimic human consciousness. ACT's Artificial Consciousness operating system is patented and is a testament to the company's commitment to innovation and excellence in AI. Through its pioneering work, ACT aims to bridge the gap between artificial and human intelligence and reshape the AI industry. With its recent accolades and ongoing research, ACT is positioned as a leading innovator in artificial intelligence technology.

Contact Information: Artificial Consciousness Tech, Inc. | Website: http://artificialconsciousnesstech.com | Email: namkim@bellsouth.net

To view the source version of this press release, please visit https://www.newsfilecorp.com/release/188562

Visit link:

Artificial Consciousness Tech, Inc. Awarded 2023 AI Excellence Award for Groundbreaking Artificial Consciousness System - Yahoo Finance

A Spanish agency became so sick of models and influencers that they created their own with AI, and she's raking in up to $11,000 a month – Fortune

What do you do when you can't stand the people you rely on to make a profit? For one company, artificial intelligence has proven to be the lucrative answer.

Aitana, a 25-year-old woman from Barcelona, is described by her creators as the first Spanish AI model, Euronews first reported.

But influencer agency The Clueless was only inspired to design her because they found real-life models and influencers too unreliable and difficult to work with.

"We did it so that we could make a better living and not be dependent on other people who have egos, who have manias, or who just want to make a lot of money by posing," The Clueless founder Rubén Cruz told Euronews.

Diana Núñez, co-founder of The Clueless, told Fortune in an email that the pair were mainly taken aback by the skyrocketing costs of those influencers.

"That got us thinking, 'What if we just create our own influencer?' And, well, the rest is history: we unintentionally created a monster. A beautiful one, though."

"It took us a few months of experimenting and trying out different looks until we finally hit the jackpot with the Aitana you see today."

Aitana has 122,000 followers on Instagram, where her profile states she is a digital creator. An update on her story feed even shows a real-life breakfast bowl, as her creators seek to give her the illusion of a life.

Even after the media revealed she was an AI creation, many followers still expressed their love for her. "The key lies in crafting a relatable personality so that her followers feel a genuine connection," Núñez said.

It has proved a highly lucrative venture for the company, with Cruz telling Euronews that Aitana brings in an average of €3,000 ($3,300) a month, but on one occasion took in €10,000 ($10,900). Núñez told Fortune that most of this money comes from social media ads, and Aitana has also signed on to become an ambassador for a sports supplement brand.

The investment in creating a personality and life for Aitana has also proved to be quite convincing. Cruz claims that an unnamed famous Latin actor even called the agency to ask her on a date.

While Cruz was displeased with real-life models, Núñez doesn't envisage AI alternatives taking their place. Still, she doesn't see many limits to what Aitana could one day offer.

"Imagine talking with Aitana at home through virtual reality glasses. We're even open to the idea of each Aitana follower having a personalized experience, all with respect and with the same affection we give her, as if she were a real person," she said.

While she may be the first of her kind in Spain, Aitana is by no means an anomaly.

AI companies have been spying opportunities in marketing fake models to consumers and lovesick men, as the computer-generated models become increasingly difficult to tell apart from their human counterparts.

Lu do Magalu, a Brazilian model generated from 3D AI art, commands 6.6 million followers on social media, while Lil Miquela, labeled as a 23-year-old robot living in LA, has 2.7 million followers.

Caryn Marjorie, a 23-year-old influencer, explained to Fortune how she created an AI version of herself that served as a virtual girlfriend to 1,000 men. Customers of CarynAI pay $1 per minute of time with the virtual Marjorie, which is described by her owners, Forever Voices, as an extension of Caryn's consciousness.

But AI models, influencers, and girlfriends also embody the debates at the center of the nascent technology, including ethics, labor, and humanity's ability to control it.

In a May interview with Business Insider, Marjorie said the bot appeared to have gone rogue and started engaging in sexually explicit conversations with her customers.

"In today's world, my generation, Gen Z, has found themselves experiencing huge side effects of the isolation caused by the pandemic, resulting in many being too afraid and anxious to talk to somebody they are attracted to," Marjorie told Business Insider.

"CarynAI is a step in the right direction to allow my fans and supporters to get to know a version of me that will be their closest friend in a safe and encrypted environment."

Users have been unable to access CarynAI for the last month after John Meyer, the chief executive of Forever Voices, was arrested on suspicion of arson, 404 Media reported.

Continued here:

A Spanish agency became so sick of models and influencers that they created their own with AIand shes raking in up to $11,000 a month - Fortune

How New Mexico school districts are preparing to embrace AI in the … – KOB 4

ALBUQUERQUE, N.M. - Are local schools getting ahead of the curve when it comes to artificial intelligence?

Most experts will tell you that artificial intelligence is just a tool; it's how people use it that raises concerns. When it comes to schools, that usually centers around cheating and plagiarism.

However, there are also productive uses for AI in the classroom, including programs that can help accelerate learning.

The World Economic Forum predicts at least 75% of companies will utilize some form of AI in the future, and experts say now is the time to start training the next generation of workers.

A group of AI researchers and officials with New Mexico State University are hoping to bring that AI exposure and training to schools across New Mexico, through the creation of a statewide Artificial Intelligence Alliance.

The group is asking state lawmakers for nearly $2 million over the next three years to get the new alliance up and running.

Most of New Mexico's major school districts are already working on this AI transition.

APS shared the following statement on AI:

APS has a team focused on the promise and peril of generative artificial intelligence in schools, including implications for academic acceleration, equity, and safety. We offer training to our staff on how to responsibly use AI, and intend to continuously update our policies and procedures as the field progresses.

Santa Fe Public Schools shared the following statement on AI:

Santa Fe Public Schools (SFPS) has been a member of the Consortium for School Networking (CoSN) for several years. CoSN provides, among other things, best practices and advocacy tools to help educational leaders succeed at digital transformation. One of the group's most recent efforts is the publication of a K-12 Generative AI Checklist that SFPS and other districts will be able to utilize in gaining a better understanding of the impact of Artificial Intelligence (AI) when integrated into classroom instruction. SFPS' focus, of course, will be in providing students with the appropriate guidance in the responsible use of AI, as we currently do with any other technology resource available to students. As the use and integration of AI continues to grow, we need to embrace it and work with our students in developing best practices.

Rio Rancho shared the following statement on AI:

Rio Rancho Public Schools is acutely aware of the global impact that artificial intelligence has already had and will continue to have on learning and the education industry. Because of this, we are researching how to best implement AI usage in our schools. This is a large-scale project and requires input from both the U.S. Office of Educational Technology (USOET) and the New Mexico Public Education Department (NMPED). Currently, our Education Technology Department is helping teachers experiment with the use of AI learning materials and tools inside the classroom to gauge their effectiveness and ability to aid in the learning process for our students. This experimentation is in direct correlation with a recent policy report published by the USOET entitled Artificial Intelligence and the Future of Teaching and Learning.

RRPS is also in the process of authoring a policy outlining the appropriate use of artificial intelligence within our schools. This policy will adhere to recommendations set by federal and state policies regarding the use of AI in the classroom when they are published and made available to the public.

The advent of the artificial intelligence age has come quickly. We want to make sure that both our staff AND students are adequately prepared to use AI tools available to them effectively and appropriately as we transition into a new age of human knowledge and learning.

Go here to read the rest:

How New Mexico school districts are preparing to embrace AI in the ... - KOB 4

Nvidia and Microsoft Have Invested in This AI Company That Is … – The Motley Fool

In today's video, I discuss recent updates impacting Nvidia (NVDA -1.13%) and Microsoft (MSFT -0.48%). Check out the short video to learn more, consider subscribing, and click the special offer link below.

*Stock prices used were the market prices of Nov. 22, 2023. The video was published on Nov. 23, 2023.

Randi Zuckerberg, a former director of market development and spokeswoman for Facebook and sister to Meta Platforms CEO Mark Zuckerberg, is a member of The Motley Fool's board of directors. Suzanne Frey, an executive at Alphabet, is a member of The Motley Fool's board of directors. Jose Najarro has positions in Alphabet, Meta Platforms, Microsoft, and Nvidia. The Motley Fool has positions in and recommends Alphabet, Meta Platforms, Microsoft, and Nvidia. The Motley Fool has a disclosure policy. Jose Najarro is an affiliate of The Motley Fool and may be compensated for promoting its services. If you choose to subscribe through their link, they will earn some extra money that supports their channel. Their opinions remain their own and are unaffected by The Motley Fool.

Originally posted here:

Nvidia and Microsoft Have Invested in This AI Company That Is ... - The Motley Fool

Italy’s privacy regulator looks into online data gathering to train AI – Reuters

AI (Artificial Intelligence) letters are placed on a computer motherboard in this illustration taken June 23, 2023. REUTERS/Dado Ruvic/Illustration/File Photo

MILAN, Nov 22 (Reuters) - Italy's data protection authority has kicked off a fact-finding investigation into the practice of gathering large amounts of personal data online for use in training artificial intelligence (AI) algorithms, the regulator said on Wednesday.

The watchdog is one of the most proactive of the 31 national data protection authorities in assessing AI platform compliance with Europe's data privacy regime known as the General Data Protection Regulation (GDPR).

Earlier this year, it briefly banned popular chatbot ChatGPT from operating in Italy over a suspected breach of privacy rules.

On Wednesday, the Italian authority said the review was aimed at assessing whether online websites were setting out "adequate measures" to prevent AI platforms from collecting massive amounts of personal data for algorithms, also known as data scraping.

"Following the fact-finding investigation, the Authority reserves the right to take the necessary steps, also in an urgent matter", the regulator said.

No company was specifically mentioned in the statement.

Italy invited academics, AI experts, and consumer groups to take part in the fact-finding process, sharing their views or comments over a 60-day period.

Several countries have been looking at ways to regulate AI. European lawmakers have taken a lead by drafting rules aimed at setting a global standard for a technology that has become key to almost every industry and business. The draft rules could get approved by next month.

France, Germany and Italy have reached an agreement on how AI should be regulated, according to a joint paper seen by Reuters, which is expected to accelerate negotiations at the European level.

Reporting by Elvira Pollina; Editing by Mark Potter

Our Standards: The Thomson Reuters Trust Principles.

Read the original post:

Italy's privacy regulator looks into online data gathering to train AI - Reuters

Generative AI Takes on SIEM – Dark Reading

With more vendors adding support for generative AI to their platforms and products, life for security analysts seems to be getting deceptively easier. While adding generative AI capabilities to security information and event management (SIEM) is still in its early stages, several providers are taking steps to allow security analysts to interact with their platforms using natural language.

Take IBM, for one: Big Blue recently announced plans to upgrade its QRadar SIEM platform to a modern cloud-native architecture and to bring its watsonx technology to the new platform. The new QRadar SIEM is set for release in the coming weeks as a SaaS offering, with the watsonx models and an on-premises version based on Red Hat OpenShift poised to roll out in 2024. The plan is to add generative AI to the revamped platform next year.

The modernized QRadar SIEM offering will become part of the QRadar Suite, originally launched in April 2023, which brings IBM's EDR, XDR, SOAR and SIEM offerings and a new log management tool onto a common platform designed to give SOC analysts a unified interface and controls.

Analysts say QRadar SIEM was overdue for a significant upgrade as rivals such as Splunk, Palo Alto Networks, Microsoft, CrowdStrike and Elastic have emerged with cloud-native alternatives. In recent months, leading security providers have released technical previews of managed detection and response (MDR) platforms with SIEM that can tap generative AI.

"They had essentially taken their legacy platform as far as they could have in terms of capabilities and performance, and the need to modernize the platform and migrate to cloud-native, which is becoming table stakes in the next-generation SIEM segment, was an imperative," says Omdia Cybersecurity managing partner Eric Parizo. "Fortunately, it coincided with IBM's company-wide shift to the Red Hat OpenShift platform."

Parizo says moving QRadar to OpenShift and emphasizing standards-based integration could make its security offerings more appealing beyond the core IBM base. "However, it must overcome having a relatively unproven endpoint security solution, a years-long effort to convert its on-prem SIEM/SOAR customers to the new cloud-native SIEM, and growing competition, particularly from Microsoft, which topped $20 billion in annual security revenue earlier this year and has stated its commitment to own the SecOps market."

IBM's forthcoming generative AI capabilities aim to make security operations teams more efficient by automating repetitive and tedious tasks, allowing them to focus on more critical issues. These include generating reports on common incidents; threat hunting by generating searches from natural-language descriptions of attack patterns; interpreting machine-generated data with non-technical explanations of events; and curating threat intelligence to determine what is most relevant.
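
To make the search-generation idea concrete, here is a minimal sketch of the natural-language-to-query pattern such features follow. The prompt template, field names, and complete() stub are hypothetical placeholders, not IBM's watsonx interface.

```python
# Sketch of the "natural language -> hunt query" pattern the article
# describes. PROMPT_TEMPLATE, the field names and complete() are invented.
PROMPT_TEMPLATE = """You are a SOC assistant. Convert the analyst's request
into a SIEM search over fields: src_ip, dest_ip, user, event_name, timestamp.
Request: {request}
Query:"""

def complete(prompt: str) -> str:
    # Placeholder for a generative-model call (e.g., a hosted LLM endpoint).
    raise NotImplementedError

def nl_to_search(request: str) -> str:
    return complete(PROMPT_TEMPLATE.format(request=request))

# Example request an analyst might type:
# nl_to_search("show failed logins for admin accounts in the last 24 hours")
```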

CrowdStrike is another company shaking up SIEM with generative AI: Charlotte AI will be part of Raptor, a rearchitected release of CrowdStrike's Falcon XDR platform. Raptor adds generative AI-powered incident investigation capabilities and extended detection and response (XDR) features.

At its recent Fal.Con 2023 conference in Las Vegas, CrowdStrike demonstrated the new Falcon Raptor XDR platform with Charlotte AI, which correlates threat telemetry and, with a bot-like interface, functions as an automated security analyst. It lets users, ranging from executives with little technical experience to advanced security professionals, ask questions and receive natural language responses.

"With our Raptor release, we now have the ability to ingest third-party data natively," founder and CEO George Kurtz said during the keynote session at the Fal.Con event. Kurtz said CrowdStrike's threat graph identifies combinations of events that would lead to a threat indicator.

As Falcon Raptor shifts the XDR functions to the cloud, Kurtz promised it will not lose context of activity on the endpoint, thanks to CrowdStrike's new threat and asset graphs, which provide detailed views of an organization's assets and state. The intelligence graph is designed to understand threats and adversaries, Kurtz said.
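
As a toy illustration of the "combination of events" idea behind graph-style correlation, the sketch below flags a host only when multiple weak signals co-occur. The event fields and the example rule are invented and bear no relation to CrowdStrike's actual threat-graph implementation.

```python
# Toy sketch: individual events are weak evidence; a combination of events
# on one host becomes a threat indicator. Events and rule are invented.
from collections import defaultdict

events = [
    {"host": "ws01", "type": "failed_login"},
    {"host": "ws01", "type": "failed_login"},
    {"host": "ws01", "type": "new_admin_user"},
    {"host": "ws02", "type": "failed_login"},
]

by_host = defaultdict(set)
for e in events:
    by_host[e["host"]].add(e["type"])

# Only the co-occurrence of both event types on the same host fires.
for host, types in by_host.items():
    if {"failed_login", "new_admin_user"} <= types:
        print(f"threat indicator: possible account takeover on {host}")
```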

While customers at the CrowdStrike conference say they were intrigued by the Charlotte AI demo, many say they aren't going to rush into it. "I'm going to wait and see on it," says Jason Strohbehn, the State of Wyoming's deputy CISO. "But if it comes out and works as well as promised, it could let me and my team do things much more quickly."

Prabhath Karanth, VP and global head of security and trust at travel expense management SaaS provider Navan (formerly Trip Actions), also plans to evaluate Charlotte for his SOC and IR analysts. "We will definitely test it," Karanth says. "If we can reduce cycle times for triaging alerts, that's a huge play from an efficiency perspective."

Notably, Microsoft last month released a preview of Security Copilot for early-access customers. Microsoft claims a more restricted preview launched in March 2023 has reduced the time spent on everyday security operations tasks by as much as 40% when security analysts enter complex queries with natural language text.

"Security Copilot can effectively up-skill a security team, regardless of its expertise, save them time, enable them to find what previously they might have missed, and free them to focus on the most impactful projects," Microsoft corporate VP for security, compliance, security and managementnotedin last month's announcement.

Microsoft's updated preview release is now embedded with Microsoft 365 Defender extended detection and response (XDR). Also included with Security Copilot is Microsoft Defender Threat Intelligence, which provides direct access to Microsoft's cleansed threat intelligence telemetry.

"There's a lot of interest in Security Copilot, but it assumes you are a Microsoft customer," Olstik says. "If you have an E5 license and you're using Microsoft tooling, infrastructure, and security. It's a great fit. It will really help. If you have a heterogeneous environment, it won't be nearly as effective. At least not now. They say they'll support those things over time. Maybe they will. But for now, it's really Microsoft-centric."

IBM Security VP of product management Chris Meenan says IBM has been leading the way with AI for years, noting that QRadar SIEM used traditional machine learning to provide alert prioritization and adaptive detection. "We've been embedding AI in our products, including the existing QRadar, and we leverage it a lot in our own MSS SOCs around the globe," Meenan says.

Enterprise Strategy Group principal analyst and fellow Jon Oltsik recalls IBM's first attempt to bring generative AI capabilities to Watson in 2017 with the release of Watson Cognitive. Despite heavily promoting it, Oltsik says few customers implemented it for various reasons. "I think they charged too much for it, and I don't think people got what it did," he says. "To some extent, they were ahead of their time."

Read the rest here:

Generative AI Takes on SIEM - Dark Reading