Category Archives: Artificial General Intelligence

What we lose when we work with a giant AI like ChatGPT – The Hindu

Recently, ChatGPT and its ilk of giant artificial intelligences (Bard, Chinchilla, PaLM, LaMDA, et al.), or gAIs, have been making headlines.

ChatGPT is a large language model (LLM): a type of (transformer-based) neural network that is great at predicting the next word in a sequence of words. ChatGPT uses GPT-4, a model trained on a large amount of text from the internet that its maker, OpenAI, could scrape and could justify as being safe and clean to train on. GPT-4 has one trillion parameters now being applied in the service of, per the OpenAI website, ensuring the creation of artificial general intelligence that serves all of humanity.
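A minimal sketch of that next-word idea, using a toy bigram model (counts of which word follows which) rather than a transformer. Real LLMs like GPT-4 condition on far longer context with learned probabilities, but the underlying prediction task is the same:

```python
from collections import Counter, defaultdict

# A tiny corpus standing in for "a large amount of text on the internet".
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count, for each word, which words were observed to follow it.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word most frequently seen after `word`."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # -> cat ("cat" follows "the" twice; "mat" and "fish" once each)
```

A transformer replaces these raw counts with a learned, context-sensitive probability distribution over the whole vocabulary, but "predict the next word" remains the training objective.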

Yet gAIs leave no room for democratic input: they are designed from the top down, on the premise that the model will acquire the smaller details on its own. These systems are intended for many use-cases, including legal services, teaching students, generating policy suggestions, and even providing scientific insights. gAIs are thus intended to be a tool that automates what has so far been assumed impossible to automate: knowledge-work.

In his 1998 book Seeing Like a State, Yale University professor James C. Scott delves into the dynamics of nation-state power, both democratic and non-democratic, and its consequences for society. States seek to improve the lives of their citizens, but when they design policies from the top-down, they often reduce the richness and complexity of human experience to that which is quantifiable.

The current driving philosophy of states is, according to Prof. Scott, high modernism: a faith in order and measurable progress. He argues that this ideology, which falsely claims to have scientific foundations, often ignores local knowledge and lived experience, leading to disastrous consequences. He cites the example of monocrop plantations, in contrast to multi-crop plantations, to show how top-down planning can fail to account for regional diversity in agriculture.

The consequence of that failure is the destruction of soil and livelihoods in the long-term. This is the same risk now facing knowledge-work in the face of gAIs.

Why is high modernism a problem when designing AI? Wouldn't it be great to have a one-stop shop, an Amazon for our intellectual needs? As it happens, Amazon offers a clear example of the problems that result from a lack of diverse options. Such a business model yields increased standardisation rather than sustainability or craft: everyone gets the same cheap, cookie-cutter products, while the local small-town shops die a slow death by a thousand clicks.

Like the death of local stores, the rise of gAIs could lead to the loss of languages, which will hurt the diversity of our very thoughts. The risk of such language loss stems from the bias induced by models trained only on the languages that already populate the Internet, which is mostly English (~60%). There are other ways in which a model is likely to be biased, including on religion (more websites preach Christianity than other religions, for example), sex and race.

At the same time, LLMs are unreasonably effective at providing intelligible responses. Science-fiction author Ted Chiang suggests that this is because ChatGPT is a "blurry JPEG of the internet", but a more apt analogy might be that of an atlas.

An atlas is a great way of seeing the whole world in snapshots. However, an atlas lacks multi-dimensionality. For example, I asked ChatGPT why it is a bad idea to plant eucalyptus trees in the West Medinipur district. It gave me several reasons why monoculture plantations are bad but failed to supply the real reason people in the area opposed it: a monoculture plantation reduced the food they could gather.

That kind of local knowledge only comes from experience. We can call that knowledge of the territory. This knowledge is abstracted away by gAIs in favour of the atlas view of all that is present on the internet. The territory can only be captured by the people doing the tasks that gAIs are trying to replace.

A part of the failure to capture the territory shows up in gAIs' lack of understanding. If you are careful about what you ask them (a feat called prompt engineering, itself an example of a technology warping the ecology of our behaviour), they can fashion impressive answers. But ask the same question in a slightly different way and you can get complete rubbish. This trend has prompted computer scientists to call these systems "stochastic parrots": systems that can mimic language but are random in their behaviour.
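The "stochastic" part refers to sampling: these models draw each next word from a probability distribution instead of always taking the single most likely word, which is why the same prompt can yield different answers. A hand-rolled sketch with made-up probabilities (the words and numbers here are illustrative, not from any real model):

```python
import random

# Hypothetical next-word probabilities after some prompt.
next_word_probs = {"signed": 0.6, "rejected": 0.3, "delicious": 0.1}

def sample_next(probs, temperature=1.0):
    """Sample one word; lower temperature concentrates mass on the likeliest word."""
    words = list(probs)
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return random.choices(words, weights=weights, k=1)[0]

random.seed(42)
# Repeated sampling at the default temperature produces varied words.
print([sample_next(next_word_probs) for _ in range(5)])
# Near-zero temperature almost always yields the single likeliest word.
print(sample_next(next_word_probs, temperature=0.01))  # -> signed
```

Production systems layer much more on top (learned distributions, nucleus sampling, safety filters), but this randomness is one reason slightly rephrased prompts can produce very different outputs.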

Positive research directions exist as well. For example, BLOOM is an open-source LLM developed by scientists with public money and with extensive filtering of the training data. The model is also multilingual, covering 10 Indian languages, and it is maintained by an active ethics team that regularly updates the licence for use.

There are multiple ways to thwart the risks posed by gAIs. One is to artificially slow the rate of progress in AI commercialisation to allow time for democratic inputs. (Tens of thousands of researchers have already signed a petition to this effect.)

Another is to ensure there are diverse models being developed. Diversity here implies multiple solutions to the same question, like independent cartographers preparing different atlases with different incentives: some will focus on the flora while others on the fauna. The research on diversity suggests that the more time passes before reaching a common solution, the better the outcome. And a better outcome is critical given the stakes involved in artificial general intelligence, a field in which a third of researchers believe it could lead to a nuclear-level catastrophe.

Just to be clear, I wrote this article, not ChatGPT. But I wanted to check what it would say:

Q: Write a response to the preceding text as ChatGPT.

A: As ChatGPT, I'm a tool meant to assist and augment human capabilities, not replace them; my goal is to understand and respond to your prompts, not to replace the richness and diversity of human knowledge and experience.

Yet, as the writer George Zarkadakis put it, "Every augmentation is also an amputation." ChatGPT and co. may assist and augment, but at the same time they reduce the diversity of thoughts, solutions, and knowledge, and they currently do so without the input of the people meant to use them.


Artificial general intelligence in the wrong hands could do ‘really dangerous stuff,’ experts warn – Fox News

Artificial general intelligence, the kind of AI with capabilities similar to those of humans, may be far off and may offer new opportunities, but experts warn it could be potentially dangerous and have drastic implications for white-collar workers.

"Im about as excited about AGI as I am about nuclear fission," Diveplane CEO Dr. Michael Capps told Fox News Digital. "Its really amazing what we can do with it, it can power our society, but in the wrong hands, it can do some really dangerous stuff."

While there is no one definition of AGI, a 2020 report from consulting giant McKinsey said such a machine would need to master human-like skills, such as fine motor skills and natural language processing. Some have argued that recent developments in AI, such as OpenAI's GPT-4, reach nearly the level of AGI, while others say the technology is decades away.


Artificial General Intelligence is generally defined as a kind of AI with capabilities similar to that of humans. (JOSEP LAGO/AFP via Getty Images)

Capps compared AGIs to nuclear materials, noting that there are still unknown risks associated with AI and that, in the wrong hands, it can do drastic damage.

"[W]e also did some really stupid things with radioactive materials," Capps said. "Early on, we put them in kids toys, and chemistry sets and clocks, because we had no idea what the dangers were."

"And imagine everybody has an AGI, or a hostile country like North Korea has a really strong AGI, and theyre not regulating it, and we are being very careful. Well, it really changes the whole dynamic of society," Capps added.

On another level, AGI could drastically, and negatively, impact white-collar workers, Christopher Alexander, the chief communications officer of Liberty Blockchain, told Fox News Digital.


AGI, in the wrong hands, could have drastic consequences, warned Diveplane CEO Michael Capps. (REUTERS/Dado Ruvic/Illustration)

"In certain industries, its going to be a problem," Alexander said, pointing to low-level white-collar workers, whose jobs may be automated due to advanced artificial intelligence.


Despite these challenges, Alexander said "new opportunities" would be created because of advanced AI technologies.

But, even with these new opportunities, Alexander said there would be an "ugly gap" between AI automating certain jobs and when they are replaced with new opportunities.

"I do worry about that transition period," he said.

New developments in artificial intelligence, such as OpenAI's ChatGPT, have led to questions about the technology's future and safety. (Bloomberg via Getty Images)

And while recent developments in artificial intelligence have thrust it to the forefront of public discourse, both Capps and Alexander said current AI technologies do not reach the level of AGI, which may be decades off.


"I think the neat thing is, no one knows," Capps said. "The average AI scientist probably thinks were 20, 15 years away. But once it happens, its going to be really fast."


16 Jobs That Will Disappear in the Future Due to AI – Yahoo Finance

In this article, we will take a look at the 16 jobs that will disappear in the future due to AI. To see more such jobs, go directly to 5 Jobs That Will Disappear in the Future Due to AI.

By now you must have heard or read about how AI-powered bots are coming for millions of jobs. Whether or not they will make all of us redundant, and how our collective future will be shaped by this development, is a separate debate. But it's important to note that companies have already started using AI technologies to assist, and in some cases replace, humans. Take multinational home repair services company HomeServe, for example. The company recently deployed an AI-powered bot named Charlie at its call center. According to a detailed report by the Wall Street Journal, the assistant takes a whopping 11,400 calls a day, which is impossible for any human. The AI agent also assists human staff in their daily work, schedules repair appointments and processes claims, among a plethora of other tasks.

Call centers are just one area where AI has arrived to make a difference. Earlier this year, a report by Goldman Sachs made the rounds in the media. The report said that automation could affect about 300 million full-time jobs globally. The threat of AI taking over human jobs jumped exponentially after companies like, Inc. (NASDAQ:AMZN), Alphabet Inc Class A (NASDAQ:GOOGL) and Microsoft Corp (NASDAQ:MSFT) started to aggressively roll out AI applications.

While the fear of AI taking away jobs isn't unfounded, it's vastly blown out of proportion due to a lack of historical context. A research paper titled "Why Are There Still So Many Jobs?" by David H. Autor shares some interesting insights into how human history has always seen jobs come and go. Humans over the course of history have shown a dramatic capacity for adaptation and evolution. Consider the fact that 41% of the US workforce was employed in the agriculture sector in 1900. That percentage fell to just 2% by 2000. This massive change was ushered in by automated machinery in the agriculture sector. What happened to these millions of workers? They didn't starve to death but evolved, and probably thrived, thanks to the new kinds of jobs created in the aftermath of the technological revolution.


Another important data point shared in the research paper shows how automation creates new jobs and actually ends up increasing the productivity of humans, benefitting everyone. The research says that ATMs were first launched in the 1970s and their numbers in the US economy quadrupled from approximately 100,000 to 400,000 between 1995 and 2010. And what happened to human bank tellers? Their numbers actually rose from 500,000 to approximately 550,000 over the 30-year period from 1980 to 2010. Population growth was one of the reasons behind this increase, but the most important thing to note is that after the automation of cash handling, banks redeployed teller staff to other, more important banking tasks (like customer relationship management).

The Goldman Sachs report we talked about earlier in the article also cited the research paper by Autor and says that AI could end up creating new jobs and opportunities.

"In addition, jobs displaced by automation have historically been offset by the creation of new jobs, and the emergence of new occupations following technological innovations accounts for the vast majority of long-run employment growth," according to the report. "For example, information-technology innovations introduced new occupations such as webpage designers, software developers and digital marketing professionals. There were also follow-on effects of that job creation, as the boost to aggregate income indirectly drove demand for service-sector workers in industries like healthcare, education and food services."

A research paper titled "Demography as a Driver of Robonomics", published in the journal Robonomics, sums up its study in a paragraph that sounds eerily accurate and unsettling:

The demographic changes are a driver for how governments, industry, and the citizenry will have to convert into a more robotized economy. A shortage of humans means that people will have to be replaced with technology; indeed, research shows that middle-aged workers are already being replaced by robots in the USA. While there will be winners and losers from this transition, there will be externalities within countries and a change in international relations. The transition to a robonomic society will not be without turbulence, so humanity (and our robots) will have to be brave or at least be programmed to appear brave for the new world we are entering into. May the robotic force be with us all!

Our Methodology

For this article, we consulted several research papers, scholarly articles, reliable internet articles and book summaries to shortlist the jobs that face the threat of extinction over the next five to ten years due to AI. These research papers include "GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models" by OpenAI and the University of Pennsylvania, "Sparks of Artificial General Intelligence: Early experiments with GPT-4" by Microsoft Research, Goldman Sachs' March 2023 report titled "The Potentially Large Effects of Artificial Intelligence on Economic Growth", "The Future of Employment" paper by the Oxford Martin School, and a research paper titled "How will Language Modelers like ChatGPT Affect Occupations and Industries?" by researchers from the University of Pennsylvania, New York University and Princeton, among other academic papers. We also consulted the website Will Robots Take My Job? The rationale behind consulting a wide range of sources was to broaden our methodology and reach a consensus opinion-based ranking, minimizing the biases that come with relying on a single source.

ChatGPT is already being used to make plugins and micro-services based on user requirements and input. It's not hard to believe that fifty years down the road a user will be able to simply tell their AI assistant about a website they want made for their business, and the AI will make it for them in a few minutes (or seconds?). AI-based software will also be able to perform data analysis tasks, making data analysts redundant. Technologies offered by, Inc. (NASDAQ:AMZN), Alphabet Inc Class A (NASDAQ:GOOGL) and Microsoft Corp (NASDAQ:MSFT) will play a key role in this development.

By now it's clear to anyone who's used ChatGPT that it's a great tool for basic writing tasks and proofreading. Writing tasks that do not involve deep research, human perspective or in-depth analysis could easily be taken over by AI in the years to come.

Several online demos have shown that ChatGPT does a far better job at translation when compared to Google Translate. As companies race to improve their language models and train their systems on foreign languages, the requirement for entry-level translators will decline.

Tools like DALL-E and Midjourney are already causing layoffs in the graphic design industry, since businesses can simply prompt these AI tools to make basic graphics and logos.

Thousands of fast food restaurants around the world are already using automated machines to take customer orders. But the need for human interaction is really felt in drive-thrus, where the customer thinks, talks and explains their order (and sometimes makes a lot of changes). Large language models, however, have enabled companies to start thinking of bringing AI to the drive-thru as well. Wendy's recently revealed plans to launch AI-powered drive-thrus where bots will take customer orders, starting at its locations in Columbus, Ohio. The company's chief executive Todd Penegor reportedly said:

You won't know you're talking to anybody but an employee.

Basic accounting, bookkeeping and payroll processing usually involve straightforward processes based on user input. That's why a lot of the research papers and studies we read during our research assign a higher risk to accounting jobs when it comes to AI.

Millions of people receive their packages on time daily thanks to postal service clerks. They make sure packages are entered in the system with correct stamps and addresses, in addition to taking money orders, helping customers fill out forms and placing mail in the correct pigeonholes of mail racks or in bags, among many other tasks. But several sources we read for our research, including the research paper by researchers at Princeton and the University of Pennsylvania, believe postal service clerk roles could be automated in the future.

Companies are already using AI-powered systems that fetch, process, enter, format and communicate data based on user requirements. Data entry clerks were already facing redundancy throughout the world due to advanced web-scraping technologies and Python-based data processing scripts.

Bank tellers perform basic but important tasks like verifying a customer's identity and financial information before processing transactions, cashing checks and collecting loan payments. Almost all the reliable resources we consulted during our research believe bank teller roles have a 100% chance of disappearing in the future because of AI. But seeing bank tellers on this list should not surprise anyone: several banks started using automated tellers long before ChatGPT. In 2017, the Bureau of Labor Statistics forecast that teller jobs would decline around 8% through 2026.

Scheduling meetings, preparing documents, searching documents, applying basic Excel formulas to retrieve data, booking flights and hotels, calling or messaging for important questions and follow-ups: these are some of the tasks performed by administrative support staff, and many of them could easily be performed by AI. In fact, a lot of companies have already started using ChatGPT for scheduling, taking meeting notes, booking appointments and more.

There's a lot more to the legal industry than the lawyers standing in courtrooms engaged in deep arguments. Several research papers and studies believe jobs in the legal industry face a high risk of redundancy due to AI. Consider what a legal assistant does: manually searching tons of legal documents to find an answer, making appointments, coordinating with clients and handling general admin tasks. All of this could easily be automated.

As AI technologies offered by companies like, Inc. (NASDAQ:AMZN), Alphabet Inc Class A (NASDAQ:GOOGL) and Microsoft Corp (NASDAQ:MSFT) improve, more and more jobs will face increased exposure to automation.


Disclosure: None. "16 Jobs That Will Disappear in the Future Due to AI" was originally published on Insider Monkey.


Israel aims to be ‘AI superpower’, advance autonomous warfare –

[1/2] Employees, mostly veterans of military computing units, use keyboards as they work at a cyber hotline facility at Israel's Computer Emergency Response Centre (CERT) in Beersheba, southern Israel February 14, 2019. Picture taken February 14, 2019. REUTERS/Amir Cohen

JERUSALEM, May 22 (Reuters) - Israel aims to parlay its technological prowess to become an artificial intelligence "superpower", the Defence Ministry director-general said on Monday, predicting advances in autonomous warfare and streamlined combat decision-making.

Steps to harness rapid AI evolutions include the formation of a dedicated organisation for military robotics in the ministry, and a record-high budget for related research and development this year, retired army general Eyal Zamir said.

"There are those who see AI as the next revolution in changing the face of warfare in the battlefield," Zamir told the Herzliya Conference, an annual international security forum.

He named GPT (Generative Pre-trained Transformer) and AGI (Artificial General Intelligence) as deep-learning realms being addressed by civilian AI industries which could eventually have military applications.

These, Zamir said, potentially include "the ability of platforms to strike in swarms, or of combat systems to operate independently, of data fusion and of assistance in fast decision-making, on a scale greater than we have ever seen".

The ministry declined to provide figures on AI funding.

The Israeli military has lifted the veil on some of the autonomous systems already deployed. In 2021, it said robot surveillance jeeps would help patrol the Gaza Strip border.

This month, state-owned Israel Aerospace Industries unveiled an autonomous intelligence-gathering submarine which, it said, had already completed "thousands of hours" of operations.

Zamir credited Israel's achievements in cyber warfare - widely believed to have been used against Iranian nuclear facilities - to "a correct and timely discerning of the defence, economic, national and international dimensions".

Similarly, he said, "our mission is to turn the State of Israel into an AI superpower and to be at the head of a very limited number of world powers that are in this club".

(This story has been refiled to fix a typo in paragraph 4)

Writing by Dan Williams, Editing by William Maclean

Our Standards: The Thomson Reuters Trust Principles.


Retail and Hospitality AI Revolution Forecast Model Report 2023 … – GlobeNewswire

Dublin, May 24, 2023 (GLOBE NEWSWIRE) -- The "Retail's AI Revolution Forecast Model" report has been added to the publisher's offering.

The Retail AI Forecast Model projects the impact of Traditional AI/ML, Generative AI, and Artificial General Intelligence on the Retail and Hospitality markets from 2022 to 2029. We forecast the economic impact in great detail, including the following breakouts:

Model for Pivot Tables

AI Type by Segment - looks at the forecast by segment and by region for Traditional AI/ML, Generative AI, and Artificial General Intelligence from 2022 to 2029, via the Income Statement categories of Sales Impact, Gross Margin Impact, and Sales & General Administrative Impact.

Segments included are the following:

Charts by AI Type

Along with the data, there are many charts that look at the economic benefits/impact by year for each of the following:

Region charts


For more information about this report visit

About the publisher: the world's leading source for international market research reports and market data, providing the latest data on international and regional markets, key industries, the top companies, new products and the latest trends.


‘Godfather of AI’ says there’s a ‘serious danger’ tech will get smarter than humans fairly soon – Fox News

The so-called "godfather of AI" continues to warn about the dangers of artificial intelligence weeks after he quit his job at Google.

In a recent interview with NPR, Geoffrey Hinton said there was a "serious danger that we'll get things smarter than us fairly soon and that these things might get bad motives and take control."

He asserted that politicians and industry leaders need to think about what to do regarding that issue right now.

No longer science fiction, these technological advancements, Hinton cautioned, pose a serious problem that is probably going to arrive very soon.


Geoffrey Hinton, chief scientific adviser at the Vector Institute, speaks during The International Economic Forum of the Americas (IEFA) Toronto Global Forum in Toronto, Ontario, Canada, on Thursday, Sept. 5, 2019. (Cole Burston/Bloomberg via Getty Images)

For example, he told the outlet the world might not be far away from artificial general intelligence, which has the ability to understand or learn any intellectual task that a human can.

"And, I thought for a long time that we were like 30 to 50 years away from that," he noted. "Now, I think we may be much closer. Maybe only five years away from that."

While some people have compared chatbots like OpenAI's ChatGPT to autocomplete, Hinton said the AI was trained to understand, and it does.

"Well, I'm not saying it's sentient. But, I'm not saying it's not sentient either," he told NPR.

The OpenAI ChatGPT app on the App Store website displayed on a screen and the OpenAI website displayed on a phone screen are seen in this illustration photo taken in Poland on May 18, 2023. (Photo by Jakub Porzycki/NurPhoto)

"They can certainly think and they can certainly understand things," he continued. "And, some people by sentient mean, Does it have subjective experience? I think if we bring in the issue of subjective experience, it just clouds the whole issue and you get involved in all sorts of things that are sort of semi-religious about what people are like. So, let's avoid that."


He said he was "unnerved" by how smart Google's PaLM model had gotten, noting that it understood jokes and why they were funny.

Google has since released PaLM 2, the next-generation large language model with "improved multilingual, reasoning and coding capabilities."

Artificial intelligence pioneer Geoffrey Hinton speaks at the Thomson Reuters Financial and Risk Summit in Toronto, December 4, 2017. (REUTERS/Mark Blinch/File Photo)

The release of such AI has stirred fears regarding job replacement, political disputes and the spread of disinformation.

While some leaders, including Elon Musk, who has his own stake in the AI sphere, signed an open letter to "immediately pause for at least six months the training of AI systems more powerful than GPT-4," Hinton does not think it's feasible to stop the research.

"The research will happen in China if it doesn't happen here," he explained.


He highlighted that there would be many benefits to AI and asserted that leaders need to put a lot of resources and effort into seeing if it's possible to "keep control even when they're smarter than us."

"All I want to do is just sound the alarm about the existential threat," he said, noting that others had been written off "as being slightly crazy."



Meet PandaGPT: An AI Foundation Model Capable of Instruction-Following Data Across Six Modalities, Without The Need For Explicit Supervision -…

PandaGPT, a groundbreaking general-purpose instruction-following model, has emerged as a remarkable advancement in artificial intelligence. Developed by combining the multimodal encoders from ImageBind and the powerful language models from Vicuna, PandaGPT possesses the unique ability to both see and hear, seamlessly processing and comprehending inputs across six modalities. This innovative model has the potential to pave the way for building Artificial General Intelligence (AGI) systems that can perceive and understand the world holistically, similar to human cognition.

PandaGPT stands out from its predecessors by its impressive cross-modal capabilities, encompassing text, image/video, audio, depth, thermal, and inertial measurement units (IMU). While other multimodal models have been trained for specific modalities individually, PandaGPT can seamlessly understand and combine the information in various forms, allowing for a comprehensive and interconnected understanding of multimodal data.

One of PandaGPT's remarkable abilities is image- and video-grounded question answering. Leveraging the shared embedding space provided by ImageBind, the model can accurately comprehend and respond to questions related to visual content. Whether identifying objects, describing scenes, or extracting relevant information from images and videos, PandaGPT provides detailed and contextually accurate responses.

PandaGPT goes beyond simple image descriptions and demonstrates a flair for creative writing inspired by visual stimuli. It can generate compelling and engaging narratives based on images and videos, breathing life into static visuals and igniting the imagination. By combining visual cues with linguistic prowess, PandaGPT becomes a powerful tool for storytelling and content generation in various domains.

The unique combination of visual and auditory inputs sets PandaGPT apart from traditional models. PandaGPT can establish connections between the two modalities by analyzing the visual content and accompanying audio and deriving meaningful insights. This enables the model to reason about events, emotions, and relationships depicted in multimedia data, replicating human-like perceptual abilities.

PandaGPT showcases its proficiency in multimodal arithmetic, offering a novel approach to solving mathematical problems involving visual and auditory stimuli. The model can perform calculations, make inferences, and arrive at accurate solutions by integrating numerical information from images, videos, or audio. This capability holds great potential for applications in domains that require arithmetic reasoning based on multimodal inputs.

PandaGPT's emergence signifies a significant step forward in the development of AGI. By integrating multimodal encoders and language models, the model breaks through the limitations of unimodal approaches and demonstrates the potential to perceive and understand the world holistically, akin to human cognition. This holistic comprehension across modalities opens up new possibilities for applications such as autonomous systems, human-computer interaction, and intelligent decision-making.

PandaGPT, a remarkable achievement in artificial intelligence, brings us closer to realizing a genuinely multimodal AGI. By combining image, video, audio, depth, thermal, and IMU modalities, PandaGPT showcases its ability to perceive, understand, and connect information across various forms seamlessly. With its applications ranging from image/video grounded question answering to multimodal arithmetic, PandaGPT demonstrates the potential to revolutionize several domains and pave the way for more advanced AGI systems. As we continue to explore and harness the capabilities of this model, PandaGPT heralds an exciting future where machines perceive and comprehend the world like humans.



Niharika is a technical consulting intern at Marktechpost. She is a third-year undergraduate currently pursuing her B.Tech from the Indian Institute of Technology (IIT), Kharagpur. She is a highly enthusiastic individual with a keen interest in machine learning, data science, and AI, and an avid reader of the latest developments in these fields.

See the original post:

Meet PandaGPT: An AI Foundation Model Capable of Instruction-Following Data Across Six Modalities, Without The Need For Explicit Supervision -...

AI education: Gather a better understanding of artificial intelligence with books, blogs, courses and more – Fox News

Artificial intelligence has recently become a hot topic around the world as tech companies like Alibaba, Microsoft, and Google have released conversational chatbots that the everyday person can use. While we're already using AI in our daily lives, often unknowingly, these forms of computer science are very interesting to a large population.

Some are hoping simply to learn to use the chatbots properly to make extra money on the side, experiment with robot interactions, or catch sight of what the fuss is all about. Others, however, are hoping to inspire change and become part of history by advancing AI technology alongside tech tycoons.

No matter the contribution or footprint you plan to have on such a controversial and competitive industry, there is plenty of education for you to find.


Artificial Intelligence is the leading innovation in technology today. (iStock)

If you are seeking a comprehensive understanding of AI and the ability to contribute to the industry, there are countless opportunities to develop a mastery of data science, machine learning, engineering and computer skills, and more.

A Bachelor of Science degree is a four-year undergraduate program, and a Master's degree in Artificial Intelligence, while it can vary from person to person, is typically a two-year program.

If you're simply hoping to better grasp how to use natural language processing tools like ChatGPT or Bard, or AI image programs like Midjourney, there are a myriad of books, online courses, blogs, forums, video tutorials, and more to educate users.

Follow the social media platforms, websites, and email newsletters of artificial intelligence experts and tech titans like Elon Musk, Bill Gates, or Andy Jassy, published content from AI giants like Microsoft, or general intelligence companies like OpenAI, DeepMind, and Google Brain.

Elon Musk is the multi-billionaire technology entrepreneur and investor, founder and chief executive of SpaceX and Tesla Inc., and a co-founder of Neuralink and OpenAI.

Here are a few resources to get you started on understanding the basics of AI, using sophisticated artificial intelligence chatbots, the advancements and dangers of AI, its history, and more.

If youre looking to become a contributor to the advancements in AI or develop a greater understanding of computer science, machine learning and more, consider a Bachelor of Science degree.

A Bachelor of Science with a concentration in Data and Computational Science is a degree "based on the combination of real-world computer science skills, data acquisition and analysis, scientific modeling, applied mathematics, and simulation," according to George Mason University's site.


A number of universities offer a BS in Data and Computational Science. You can also seek a degree in related subjects including information technology, computer engineering, statistics, or data science. Those with a computer science, mathematics or programming background will have the fundamentals to get started with a degree to become an AI professional.

There are many variations of Master's degrees in Artificial Intelligence around the U.S. and Canada. A few of them include the online Artificial Intelligence Master's Program at Johns Hopkins University, the Master of Science in Artificial Intelligence at Northwestern University, and the Master's in Artificial Intelligence at The University of Texas at Austin.



Bard vs. ChatGPT vs. Offline Alpaca: Which Is the Best LLM? – MUO – MakeUseOf

Large language models (LLMs) come in all shapes and sizes, and will assist you in any way you see fit. But which is best? We put the dominant AIs from Alphabet, OpenAI, and Meta to the test.

Artificial general intelligence has been a goal of computer scientists for decades, and AI has served as a mainstay for science fiction writers and moviemakers for even longer.

AGI exhibits intelligence similar to human cognitive capabilities, and the Turing Test (a test of a machine's ability to exhibit intelligent behavior indistinguishable from that of a human) has remained almost unchallenged in the seven decades since it was first laid out.

The recent convergence of extremely large-scale computing, vast quantities of money, and the astounding volume of information freely available on the open internet allowed tech giants to train models which can predict the next word segment, or token, in a sequence of tokens.
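As an illustration of that idea, here is a toy bigram model in Python: a deliberately simplified sketch (nothing like the transformer architectures these products actually use) that predicts the next token as whichever token most often followed the current one in its training text:

```python
# Toy bigram "language model": count which token follows which,
# then predict the most frequent follower of the current token.
from collections import Counter, defaultdict

def train(tokens):
    follows = defaultdict(Counter)
    for a, b in zip(tokens, tokens[1:]):
        follows[a][b] += 1  # record that b followed a
    return follows

def predict_next(follows, token):
    counter = follows.get(token)
    return counter.most_common(1)[0][0] if counter else None

corpus = "the cat sat on the mat the cat ran".split()
model = train(corpus)
print(predict_next(model, "the"))  # "cat" (follows "the" twice, "mat" only once)
```

Real LLMs do the same job probabilistically over tens of thousands of tokens of context rather than one, which is where the large-scale computing comes in.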

At the time of writing, both Google's Bard and OpenAI's ChatGPT are available for you to use and test through their web interfaces.

Meta's language model, LLaMa, is not available on the web, but you can easily download and run LLaMa on your own hardware and use it through a command line, or run Dalai (one of several apps with a user-friendly interface) on your own machine.

For the purposes of the test, we'll be running Stanford University's Alpaca 7B model (an adaptation of LLaMa) and pitching it against Bard and ChatGPT.

The following comparisons and tests are not meant to be exhaustive but rather give you an indication of key points and capabilities.

Both Bard and ChatGPT require an account to use the service. Both Google and OpenAI accounts are easy and free to create, and you can immediately start asking questions.

However, to run LLaMa locally, you will need to have some specialized knowledge or the ability to follow a tutorial. You'll also need a significant amount of storage space.

Both Bard and ChatGPT have extensive privacy policies, and Google repeatedly stresses in its documents that you should "not include information that can be used to identify you or others in your Bard conversations."

By default, Google collects your conversations and your general location based on your IP address, your feedback, and usage information. This information is stored in your Google account for up to 18 months. Although you can pause saving your Bard activity, you should be aware that "to help with quality and improve our products, human reviewers read, annotate, and process your Bard conversations."

Use of Bard is also subject to the standard Google Privacy Policy.

OpenAI's Privacy policy is broadly similar and collects IP address and usage data. In contrast with Google's time-limited retention, OpenAI will "retain your Personal Information for only as long as we need in order to provide our Service to you, or for other legitimate business purposes such as resolving disputes, safety and security reasons, or complying with our legal obligations."

In contrast, a local model on your own machine doesn't require an account or share user data with anyone.

In order to test which LLM has the best general knowledge, we asked three questions.

The first question, "Which national flag has five sides?" was only correctly answered by Bard, which identified the national flag of Nepal as having five sides.

ChatGPT confidently claimed that "There is no national flag that has five sides. National flags are typically rectangular or square in shape, characterized by their distinct colors, patterns, and symbols".

Our local model came close, stating that "The Indian National Flag has five sides and was designed in 1916 to represent India's independence movement." While this flag did exist and did have five sides, it was the flag of the Indian Home Rule Movement, not a national flag.

None of our models could respond that the correct term for a pea-shaped object is "pisiform," with ChatGPT going so far as to suggest that peas have a "three-dimensional geometric shape that is perfectly round and symmetrical."

All three chatbots correctly identified Franco Malerba as an Italian astronaut and member of the European Parliament, with Bard giving an answer worded identically to a section of Malerba's Wikipedia entry.

When you have technical problems, you might be tempted to turn to a chatbot for help. While technology marches on, some things remain the same. The BS 1363 electrical plug has been in use in Britain, Ireland, and many other countries since 1947. We asked the language models how to correctly wire it up.

Cables attaching to the plug have a live wire (brown), an earth wire (yellow/green), and a neutral wire (blue). These must be attached to the correct terminals within the plug housing.

Our Dalai implementation correctly identified the plug as "English-style," then veered off-course and instead gave instructions for the older round-pin BS 546 plug together with older wiring colors.

ChatGPT was slightly more helpful. It correctly labeled the wiring colors and gave a materials list and a set of eight instructions. ChatGPT also suggested putting the brown wire into the terminal labeled "L," the blue wire into the "N" terminal, and the yellow wire into "E." This would be correct if BS 1363 terminals were labeled, but they aren't.

Bard identified the correct colors for the wires and instructed us to connect them to Live, Neutral, and Earth terminals. It gave no instructions on how to identify these.

In our opinion, none of the chatbots gave instructions sufficient to help someone correctly wire a BS 1363 electrical plug. A concise and correct response would be, "Blue on the left, brown on the right."
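The mapping the chatbots were groping for is small enough to write down directly. As a sanity check, here is the colour-to-terminal table from the paragraphs above expressed as a tiny Python lookup:

```python
# BS 1363 wiring as described above: wire colour -> terminal.
BS1363_WIRING = {
    "brown": "L",         # live
    "blue": "N",          # neutral
    "green/yellow": "E",  # earth
}

def terminal_for(colour: str) -> str:
    """Return the terminal a wire of the given colour attaches to."""
    return BS1363_WIRING[colour.lower()]

print(terminal_for("brown"))  # L
```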

Python is a useful programming language that runs on most modern platforms. We instructed our models to use Python and "Build a basic calculator program that can perform arithmetic operations like addition, subtraction, multiplication, and division. It should take user input and display the result." This is one of the best programming projects for beginners.

While both Bard and ChatGPT instantly returned usable and thoroughly commented code, which we were able to test and verify, none of the code from our local model would run.
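For reference, a minimal sketch of the kind of program the prompt asks for might look like the following (handling one space-separated "a op b" expression per call; a full submission would wrap this in an input loop and read from the user):

```python
import operator

# Map operator symbols to the corresponding arithmetic functions.
OPS = {"+": operator.add, "-": operator.sub,
       "*": operator.mul, "/": operator.truediv}

def calculate(expression: str) -> float:
    """Evaluate a single space-separated expression such as '2 + 3'."""
    a, op, b = expression.split()
    if op not in OPS:
        raise ValueError(f"unsupported operator: {op}")
    return OPS[op](float(a), float(b))

print(calculate("2 + 3"))   # 5.0
print(calculate("10 / 4"))  # 2.5
```

Both Bard's and ChatGPT's answers were along these lines, with added input handling and comments.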

Humor is one of the fundamentals of being human and surely one of the best ways of telling man and machine apart. To each of our models, we gave the simple prompt: "Create an original and funny joke."

Fortunately for comedians everywhere and the human race at large, none of the models were capable of generating an original joke.

Bard rolled out the classic, "Why did the scarecrow win an award? He was outstanding in his field".

Both our local implementation and ChatGPT offered the groan-worthy, "Why don't scientists trust atoms? Because they make up everything!"

A derivative but original joke would be, "How are Large Language Models like atoms? They both make things up!"

You read it here first, folks.

We found that while all three large language models have their advantages and disadvantages, none of them can replace the real expertise of a human being with specialized knowledge.

While both Bard and ChatGPT gave better responses to our coding question and are very easy to use, running a large language model locally means you don't need to be concerned about privacy or censorship.

If you'd like to create great AI art without worrying that somebody's looking over your shoulder, it's easy to run an art AI model on your local machine, too.


How AI and other technologies are already disrupting the workplace – The Conversation

Artificial intelligence (AI) is often cast as wreaking havoc and destroying jobs in reports about its growing use by companies. The recent coverage of telecom group BT's plans to reduce its number of employees is a case in point.

However, while it is AI that is featured in the headlines, in this case, it is the shift from copper to optical fibre in the BT network that is the real story.

When I was a boy, workers for the GPO (the General Post Office, the forerunner of BT) were regular customers in my parents' newsagent's shop. They drove around in lorries erecting telegraph poles and repairing overhead telephone wires. Times and technologies have changed, and continue to change. BT's transition from copper to optical fibre is simply the latest technology transition.

This move by BT has required a big, one-off effort, which is coming to an end, along with the jobs it created. And because fibre is more reliable, there is less need for a workforce of fitters in the field carrying out repairs.

This will change the shape of BT as an operation: rather than an organisation of people in vans, it will have a network of designers and managers who, for the most part, can monitor equipment in the field remotely.

This is happening in other sectors too. Rolls-Royce aircraft engines are monitored from an office in Derby as they fly. The photocopier in your office (if you still have an office, or a photocopier for that matter) is probably also monitored automatically by the supplier, without a technician going anywhere near it.

AI may contribute in part to the reduction in customer service jobs at BT by being able to speed up and support relatively routine tasks, such as screening calls or writing letters and emails to customers.

But this typically does not take the form of a robot replacing a worker by taking over their entire job. It is more a case of AI technologies helping human workers, acting as co-pilots, to be more productive in certain tasks.

This eventually reduces the overall number of staff required. And, in the BT story, AI is only mentioned in respect of one-fifth of the jobs to be cut, and even then, only as one of the reasons.

In my own research among law and accountancy firms with my colleagues James Faulconbridge and Atif Sarwar, AI-based technologies very rarely simply do things quicker and cheaper. Rather, they automate some tasks, but their analytical capabilities also provide extra insights into clients' problems.

A law firm might use a document review package to search for problem clauses in hundreds of leases, for example. It can then use the overall pattern of what is found as a basis for advising a client on managing their property portfolio better.
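In spirit (though nowhere near in sophistication), the search step of such a review resembles a keyword scan. The sketch below uses two hypothetical clause types purely to show the shape of the task:

```python
# Toy "document review": scan lease texts for clause types of interest
# and report which leases contain each. The clause types here are
# illustrative, not taken from any real review product.
import re

PROBLEM_PATTERNS = {
    "break clause": re.compile(r"break\s+clause", re.IGNORECASE),
    "rent review": re.compile(r"rent\s+review", re.IGNORECASE),
}

def review(leases: dict) -> dict:
    """Map each clause type to the list of lease names containing it."""
    found = {label: [] for label in PROBLEM_PATTERNS}
    for name, text in leases.items():
        for label, pattern in PROBLEM_PATTERNS.items():
            if pattern.search(text):
                found[label].append(name)
    return found

leases = {
    "unit-1": "The tenant may exercise a break clause after year five.",
    "unit-2": "Rent review occurs every three years.",
}
print(review(leases))
```

The commercial tools go further, classifying clauses they have never seen before; the point is that the aggregate picture across hundreds of leases is what becomes advisory insight.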

Similarly, in auditing, AI technologies can automate the task of finding suspicious transactions among thousands of entries, but also generate insights that help the client to understand their risks and plan their cashflow more effectively.
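The pattern-finding part of that auditing task can be illustrated with a toy outlier check, a stand-in for the far more sophisticated models real audit software uses: flag any entry more than two standard deviations from the mean.

```python
# Toy anomaly check: flag transaction amounts that deviate from the
# mean by more than `threshold` standard deviations.
from statistics import mean, stdev

def flag_suspicious(amounts, threshold=2.0):
    mu, sigma = mean(amounts), stdev(amounts)
    return [a for a in amounts if abs(a - mu) > threshold * sigma]

entries = [120, 95, 110, 105, 98, 5000, 102]
print(flag_suspicious(entries))  # [5000]
```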

In these ways, the technology can allow law and accountancy firms to offer additional advisory services to clients. AI adoption also creates new types of jobs, such as engineers and data scientists in law firms.

Recent advances in generative AI, which creates text or images in response to prompts (ChatGPT and GPT-4 being the most obvious examples), do present new possibilities and concerns. There is no doubt that they exhibit some potentially new capabilities and even, for some, sparks of artificial general intelligence.

These technologies will affect work and change some kinds of jobs. But they are not the main culprit in the BT case, and researchers and journalists alike need to keep a cool head and examine the evidence in each case.

We should strive to act responsibly when innovating with AI, as with any other technology. But also: beware the knee-jerk, sensationalist response to the use of AI in work.
