Category Archives: Artificial Intelligence

How a Rockford dad is predicting snow days with artificial intelligence – WZZM13.com

ROCKFORD, Mich. - It's the age-old question for families when winter strikes: are we going to have a snow day? Usually, it's a wait-and-see situation, with multiple factors in the decision from individual school districts.

While this winter is waning, along with the chances of a snow day, one Rockford father is utilizing the latest wave of technology to make this more than a guess.

"The product that I'm working on is a snow day prediction application where it's going to use artificial intelligence to predict the likelihood of a snow day," said Steven Wangler.

Artificial intelligence? To predict a snow day?

This particular application builds on the hype of ChatGPT, the OpenAI product. "ChatGPT is a linguistic model, which means it's like a language model," Wangler explained. "So if I'm feeding in a bunch of data and asking it for something, it's doing its best to reason, or reasonably predict what I want out of what I'm telling it," he added.

Credit: 13 ON YOUR SIDE

Steven, a software engineer, has programmed the predictor specifically for the Rockford area.

Wangler, of Rockford, is hoping his prediction application can accurately indicate when a snow day will take place.

"This is ultimately tailored towards stuff I found in the Rockford snow day policy, and stuff the superintendent has said about his decision and why he does this," Wangler explained.

The model analyzes numerous inputs of weather data.

"Some of the inputs are something like the minimum temperature, the maximum temperatures for the day, the percentage chance of rain or snow, the wind chill, current weather conditions," said Wangler. "We pull in that data, and then we build a response or a message to OpenAI that we can send, and then we give it some rules to follow," he added.

One specific rule is abiding by the snow day policy of Rockford Public Schools. Wangler described digging up past snow days and emails from the superintendent about his decision-making.

"And then from there, it makes its prediction," said Wangler.
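The flow Wangler describes, gathering forecast fields, attaching the district's rules, and sending a structured request to a chat model, can be sketched roughly as follows. Everything here (the field names, the rules, the function) is invented for illustration and is not Wangler's actual code:

```python
import json

# Illustrative sketch of the pipeline described above; field names,
# rules, and the function itself are invented, not Wangler's code.
def build_snow_day_prompt(weather: dict, policy_rules: list) -> list:
    """Package forecast data and district rules as chat-style messages."""
    rules = "\n".join("- " + r for r in policy_rules)
    system = ("You predict the likelihood of a school snow day "
              "for Rockford, MI.\nFollow these rules:\n" + rules)
    user = ("Given this forecast, reply with a 0-100% likelihood "
            "of a snow day:\n" + json.dumps(weather, indent=2))
    return [{"role": "system", "content": system},
            {"role": "user", "content": user}]

messages = build_snow_day_prompt(
    {"min_temp_f": 5, "max_temp_f": 18, "precip_chance_pct": 85,
     "wind_chill_f": -12, "conditions": "heavy snow"},
    ["Weigh wind chill heavily when it is severe",
     "Consider overnight snow accumulation"],
)
# The messages list would then be sent to a chat-completion endpoint,
# e.g. client.chat.completions.create(model=..., messages=messages)
```

The model's reply would then be parsed into the likelihood score the application reports.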

Like any product, it needs a name.

"We call it Blizzard right now. We thought about naming it Yeti after our dog, but it's a little cliché."

Steven has high hopes for its use in the future when Old Man Winter knocks.

"I think I'm going to throw up a website for it for people to sign up for alerts. Then hopefully give options for text messages one day, and then emails. Different mediums that we could find."

"There are snow day calculators out there. But you know, the difference with this is I'm fairly sure there are no snow day calculators currently that use artificial intelligence to make their predictions," Wangler said. "People can just use it for fun, and, you know, can get an accurate, good product without having to pay for it."

AMD CEO, Dr. Lisa Su, believes that artificial intelligence (AI) will … – guru3d.com

Dr. Lisa Su, the CEO of AMD, recently spoke at the Adobe Summit 2023 event, emphasizing the crucial role that artificial intelligence (AI) will play in the industry over the next decade.

She highlighted how this technology can significantly increase business productivity and bring new benefits to everyone. Her visionary perspective on the future of AI underscores its importance in technological innovation and its impact on the business world.

According to Su, it's crucial to consider the progress being made with the vast amounts of data generated by AI. "When I think about artificial intelligence, I think about the AI that's hiding under all the data analytics, and how we can get smarter with this ton of information it generates, and how we can make more use of it," she said.

Su believes that the potential of AI can be unlocked by developing it as a personal assistant or co-pilot that helps individuals solve problems. "That's incredibly powerful," she explained. To support this vision, the industry must prepare itself with the right hardware that can handle the massive amounts of data generated by AI. Su added, "It takes an incredible amount of computing power, tens of hundreds of GPUs and CPUs."

Furthermore, Su and her team are working on creating chips with over 100 billion transistors, which they believe will have tremendous business value. She also acknowledged that both AMD and NVIDIA are in the discovery phase and have a long way to go.

While AI can enhance productivity, Su believes that it won't replace human intelligence. "I don't see this as a replacement for the world. Even the greatest genius can be more agile, better, and capable. What we really want to do is increase productivity, and I think that's going to be exciting in the next 10 years."

Artificial Intelligence In Indian Healthcare System: ICMR Releases First Ethical Guidelines – TheHealthSite

The fresh guidelines are designed for all parties interested in researching AI in healthcare.

In a major development for India's healthcare system, the Department of Health Research and ICMR's Artificial Intelligence Cell have released the first-ever ethical guidelines for applying artificial intelligence in biomedical research and healthcare in the country. India has just over 64 doctors per 100,000 people, compared with a global average of approximately 150 per 100,000. According to officials, these fresh guidelines for AI in the Indian healthcare system will help establish an ethical framework for the development of AI-based tools that can benefit all stakeholders. How does AI work in the healthcare system? Let's understand.

Currently, AI in the health sector relies heavily on data provided by human participants. This leaves room for loopholes, as there can be gaps in data handling, interpretation, autonomy, risk minimization, professional competence, data sharing and confidentiality, according to a document drafted by the two organizations. In the new ethical guidelines, the ICMR has made it compulsory to have an ethical framework that addresses issues specific to AI in biomedical research and healthcare.

AI in the health sector in India is booming; however, there are many potential ethical challenges, including algorithmic transparency and explainability, clarity on liability, accountability and oversight, and bias and discrimination. The current guidelines address these issues. Speaking to the media about the freshly released guidelines, Dr N K Arora, head of NTAGI, said, "The DHR-ICMR AI Cell has identified the need to develop these guiding ethical principles concerning artificial intelligence and machine learning-based tools."

Artificial Intelligence automation may impact two-thirds of jobs, says Goldman Sachs – CNBCTV18

Given the speed at which artificial intelligence (AI) is advancing, it has the potential to significantly disrupt labour markets globally. And this is confirmed by research from Goldman Sachs.

As per that research, roughly two-thirds of current jobs in the US and the European Union are exposed to some degree of AI automation.

Administrative and legal are the sectors that could see the maximum impact. Goldman Sachs says 46 percent of administrative jobs and 44 percent of legal jobs could be substituted by AI. The ones with low exposure are physically intensive professions such as construction, at six percent, and maintenance, at four percent.

While AI and automation can augment the productivity of some workers, they can replace the work done by others and will likely transform almost all occupations at least to some degree. Rising automation is happening in a period of growing economic inequality, raising fears of mass technological unemployment and renewing calls for policy efforts to address the consequences of technological change. But at the same time, it's being seen as one of the tools to enhance economic growth.

As per Goldman Sachs research, AI could eventually increase annual global GDP by seven percent over a 10-year period. A combination of significant labour cost savings, new job creation and higher productivity for non-displaced workers is seen as what will drive that growth.

For the US, generative AI is seen raising annual US labour productivity growth by just under 1.5 percentage points over a 10-year period.

McKinsey research, too, has surveyed more than 2,000 work activities across more than 800 occupations. It shows that certain categories of activities are more easily automatable than others. They include physical activities in highly predictable and structured environments, as well as data collection and data processing.

These account for roughly half of the activities that people do across all sectors. And, it believes, nearly all occupations will be affected by automation, but only about five percent of occupations could be fully automated by currently demonstrated technologies.

Although the size of AI's impact will ultimately depend on its capability and adoption timeline, both remain uncertain at this point.

Robot recruiters: can bias be banished from AI hiring? – The Guardian

A third of Australian companies rely on artificial intelligence to help them hire the right person. But studies show it's not always a benign intermediary.

Sun 26 Mar 2023 10.00 EDT

Michael Scott, the protagonist from the US version of The Office, is using an AI recruiter to hire a receptionist.

Guardian Australia applies.

The text-based system asks applicants five questions that delve into how they responded to past work situations, including dealing with difficult colleagues and juggling competing work demands.

Potential employees type their answers into a chat-style program that resembles a responsive help desk. The real and unnerving power of AI then kicks in, sending a score and traits profile to the employer, and a personality report to the applicant. (More on our results later.)

This demonstration, by the Melbourne-based startup Sapia.ai, resembles the initial structured interview process used by their clients, who include some of Australia's biggest companies, such as Qantas, Medibank, Suncorp and Woolworths.

The process would typically create a shortlist an employer can follow up on, with insights on personality markers including humility, extraversion and conscientiousness.

For customer service roles, it is designed to help an employer know whether someone is amiable. For a manual role, an employer might want to know whether an applicant will turn up on time.

"You basically interview the world; everybody gets an interview," says Sapia's founder and chief executive, Barb Hyman.

The selling points of AI hiring are clear: it can automate costly and time-consuming processes for businesses and government agencies, especially in large recruitment drives for non-managerial roles.

Sapia's biggest claim, however, might be that it is the only way to give someone a fair interview.

"The only way to remove bias in hiring is to not use people right at the first gate," Hyman says. "That's where our technology comes in: it's blind; it's untimed; it doesn't use résumé data or your social media data or demographic data. All it is using is the text results."

Sapia is not the only AI company claiming its technology will reduce bias in the hiring process. A host of companies around Australia are offering AI-augmented recruitment tools, including not just chat-based models but also one-way video interviews, automated reference checks, social media analysers and more.

In 2022 a survey of Australian public sector agencies found at least a quarter had used AI-assisted tech in recruitment that year. Separate research from the Diversity Council of Australia and Monash University suggests that a third of Australian organisations are using it at some point in the hiring process.

Applicants, though, are often not aware that they will be subjected to an automated process, or on what basis they will be assessed within that.

The office of the Merit Protection Commissioner advises public service agencies that when they use AI tools for recruitment, there should be a clear, demonstrated connection between the candidates' qualities being assessed and the qualities required to perform the duties of the job.

The commissioner's office also cautions that AI may assess candidates on something other than merit, raise ethical and legal concerns about transparency and data bias, produce biased results or cause statistical bias by erroneously interpreting socioeconomic markers as indicative of success.

There's good reason for that warning. AI's track record on bias has been worrying.

In 2017 Amazon quietly scrapped an experimental candidate-ranking tool that had been trained on CVs from the mostly male tech industry, effectively teaching itself that male candidates were preferable. The tool systematically downgraded women's CVs, penalising those that included phrases such as "women's chess club captain", and elevating those that used verbs more commonly found on male engineers' CVs, such as "executed" and "captured".
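The failure mode is easy to reproduce in miniature. In the toy sketch below (invented CV snippets, not Amazon's system), a naive word-frequency score trained on a male-skewed "hired" set learns to penalise phrases concentrated in the under-represented group:

```python
from collections import Counter

# Toy demonstration of the bias mechanism described above; the CV
# snippets are invented and this is not Amazon's actual system.
hired = ["executed project roadmap", "captured market share",
         "executed data migration"]          # skewed historical "successes"
rejected = ["women's chess club captain", "led women's coding society"]

def word_scores(pos, neg):
    p, n = Counter(" ".join(pos).split()), Counter(" ".join(neg).split())
    # A word scores high if it appears mostly in the "hired" corpus.
    return {w: p[w] - n[w] for w in set(p) | set(n)}

SCORES = word_scores(hired, rejected)

def cv_score(text):
    return sum(SCORES.get(w, 0) for w in text.split())

print(cv_score("executed plan"))            # positive: majority wording
print(cv_score("women's debate captain"))   # negative: penalised phrase
```

The model never sees a gender label; the skew in the training data alone is enough to encode the bias.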

Research out of the US in 2020 demonstrated that facial-analysis technology created by Microsoft and IBM, among others, performed better on lighter-skinned subjects and men, with darker-skinned females most often misgendered by the programs.

Last year a study out of Cambridge University showed that AI is not a benign intermediary, but that by constructing associations between words and people's bodies it helps to produce the "ideal candidate" rather than merely observing or identifying it.

Natalie Sheard, a lawyer and PhD candidate at La Trobe University whose doctorate examines the regulation of and discrimination in AI-based hiring systems, says this lack of transparency is a huge problem for equity.

"Messenger-style apps are based on natural language processing, similar to ChatGPT, so the training data for those systems tends to be the words or vocal sounds of people who speak standard English," Sheard says.

"So if you're a non-native speaker, how does it deal with you? It might say you don't have good communication skills if you don't use standard English grammar, or you might have different cultural traits that the system might not recognise because it was trained on native speakers."

Another concern is how physical disability is accounted for in something like a chat or video interview. And with the lack of transparency around whether assessments are being made with AI, and on what basis, it's often impossible for candidates to know that they may need reasonable adjustments to which they are legally entitled.

"There are legal requirements for organisations to adjust for disability in the hiring process," Sheard says. "But that requires people to disclose their disability straight up, when they have no trust with this employer. And these systems change traditional recruitment practices, so you don't know what the assessment is all about, you don't know an algorithm is going to assess you, or how. You might not know that you need a reasonable adjustment."

Australia has no laws specifically governing AI recruitment tools. While the department of industry has developed an AI ethics framework, which includes principles of transparency, explainability, accountability and privacy, the code is voluntary.

"There are low levels of understanding in the community about AI systems, and because employers are very reliant on these vendors, they deploy [the tools] without any governance systems," Sheard says.

"Employers don't have any bad intent; they want to do the right thing, but they have no idea what they should be doing. There are no internal oversight mechanisms set up, no independent auditing systems to ensure there is no bias."

Hyman says client feedback and independent research shows that the broader community is comfortable with recruiters using AI.

"They need to have an experience that is inviting, inclusive and attracts more diversity," Hyman says. She says Sapia's untimed, low-stress, text-based system fits these criteria.

"You are twice as likely to get women and keep women in the hiring process when you're using AI. It's a complete fiction that people don't want it and don't trust it. We see the complete opposite in our data."

Research from the Diversity Council of Australia and Monash University is not quite so enthusiastic, showing there is a clear divide between employers and candidates who were converted or cautious about AI recruitment tools, with 50% of employers converted to the technology but only a third of job applicants. First Nations job applicants were among those most likely to be worried.

DCA recommends recruiters be transparent about the due diligence protocols they have in place to ensure AI-supported recruitment tools are bias-free, inclusive and accessible.

In the Sapia demonstration, the AI quickly generates brief notes of personality feedback at the end of the application for the interviewee.

This is based on how someone rates on various markers, including conscientiousness and agreeableness, which the AI matches with pre-written phrases that resemble something a life coach might say.
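The pre-written-phrase step described above amounts to a lookup from trait-score bands to canned feedback. A hypothetical sketch, with trait names, thresholds and phrases all invented (this is not Sapia's actual mapping):

```python
# Hypothetical mapping of trait scores (0.0-1.0) to coach-style phrases.
# Traits, bands and wording are invented for illustration only.
PHRASES = {
    "conscientiousness": {
        "high": "You set a high bar for yourself and follow through.",
        "low": "Consider building small routines to stay on track.",
    },
    "agreeableness": {
        "high": "You bring warmth to the teams you work with.",
        "low": "You are comfortable holding firm opinions.",
    },
}

def feedback(trait_scores):
    """Match each trait score to a pre-written phrase band."""
    return [PHRASES[trait]["high" if score >= 0.5 else "low"]
            for trait, score in trait_scores.items()]

report = feedback({"conscientiousness": 0.8, "agreeableness": 0.3})
for line in report:
    print(line)
```

The "AI" in this step is only in producing the scores; the life-coach tone comes from the fixed phrase table.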

A more thorough assessment not visible to the applicant would be sent to the recruiter.

Sapia says its chat-interview software analyses language proficiency and includes a profanity detector, which the company says are important considerations for customer-facing roles.

Hyman says the language analysis is based on the billion words of data collected from responses in the years since the tech company was founded in 2013. The data itself is proprietary.

So, could Guardian Australia work for Michael Scott at the fictional paper company Dunder Mifflin?

"You are self-assured but not overly confident," the personality feedback says in response to Guardian Australia's application in the AI demonstration.

It follows with a subtle suggestion that this applicant might not be a good fit for the receptionist role, which requires repetition, routine and following a defined process.

But it has some helpful advice: "Potentially balance that with variety outside of work."

Looks like we're not a good fit for this job.

UJ, TUT named hubs of Artificial Intelligence – SABC News

The University of Johannesburg (UJ) and the Tshwane University of Technology (TUT) have been named as hubs of Artificial Intelligence.

Artificial intelligence enables people to use smart devices to communicate with others and to explain their surroundings.

Some of the most popular digital smart assistant tools are Apple's Siri, Amazon's Alexa and Google Assistant.

Tools like these can be beneficial to people with disabilities, helping them conquer daily challenges.

Vice-Chancellor of the Tshwane University of Technology Tinyiko Maluleke says, "It is of course a project that has been regarded as controversial by some, and somewhat as art by others. When it comes to 4IR, the appropriate slogan is: feel it, it is here. It is in your office, your home, your car, the train that you use, the airplane, your wrist watch. It is in your fingers, the screens of the pupils of your eyes, it is in your brain."

The Four Developments Propelling AI Forward: A Conversation With … – Crunchbase News

"There are four distinct developments that propel artificial intelligence forward today," said Deep Nishar, a managing director at venture capital firm General Catalyst.

And Nishar would know.

The investor co-led the firm's recent $350 million funding round in Adept AI, a year-old company with a founding team that hails from OpenAI and Google Brain. The firm has also led investments in AI talent platform Eightfold AI and AI health care company Aidoc.

While the promise of AI has percolated for decades, the sector has only heated up fairly recently, with the launch of ChatGPT in November and the fast follow-up of GPT-4 last week.

"ChatGPT is a meaningful step forward," said Nishar. "More important than the technology, it has fired up the imaginations of nontechnical people. It's probably the fastest thing that ever got 100 million users using it all at once."

Nishar has been investing for the better part of eight years; he began his investing career at the SoftBank Vision Fund in 2015, where he led funding in AI hardware company SambaNova and AI therapies company Deep Genomics. His career has spanned major technology companies: he joined Google in 2003, before it reached 200 employees, and worked on advertising infrastructure, then the early mobile research that became Android. When storied investor Reid Hoffman reached out, he jumped to lead product and user experience at LinkedIn.

Nishar sees four distinct developments that propel artificial intelligence forward today.

The first is the proliferation of really good algorithms. The seminal paper on the transformer models that underlie GPT came out of Google in 2017. Today, Nishar said, Google and OpenAI account for close to 50% of these algorithms.

The second: vast amounts of data are needed to train these models, which the internet provides, from text to speech to images and video.

The third part is computational power, which is getting better and better.

And finally, previously there were only a few hundred individuals from companies like DeepMind in the U.K., Google Brain, Facebook AI research and Apple who understood these models.

Now there are thousands of people who understand the math in these algorithms from both companies and universities.

"One view is that any area that requires a lot of compute, and as a result a lot of capital, becomes a harder venture investment," said Nishar.

Investors have thrown hundreds of millions at the leading companies building AI models which include OpenAI, Anthropic, Cohere, Adept AI, Inflection AI and Character.ai. These teams have come out of Google Brain and OpenAI, and have either launched or will be launching products in months to come, he said.

The technology stack that artificial intelligence developments impact is broad and mimics that of non-AI companies, said Nishar.

It starts with hardware chip companies like Nvidia, Cerebras, Graphcore and SambaNova, to name a few. In the infrastructure software sector, companies like Anyscale compile the content and orchestrate the AI pipeline. This is followed by tools that help with training the data, from companies including Snorkel and Weights & Biases. And then come the algorithms and apps at the top of the stack.

And there are a whole host of investment opportunities up and down the entire stack.

"We are trying to predict a surface area that has not been traversed before," Nishar said.

The Age of AI has begun | Bill Gates – Gates Notes

In my lifetime, I've seen two demonstrations of technology that struck me as revolutionary.

The first time was in 1980, when I was introduced to a graphical user interface, the forerunner of every modern operating system, including Windows. I sat with the person who had shown me the demo, a brilliant programmer named Charles Simonyi, and we immediately started brainstorming about all the things we could do with such a user-friendly approach to computing. Charles eventually joined Microsoft, Windows became the backbone of Microsoft, and the thinking we did after that demo helped set the company's agenda for the next 15 years.

The second big surprise came just last year. I'd been meeting with the team from OpenAI since 2016 and was impressed by their steady progress. In mid-2022, I was so excited about their work that I gave them a challenge: train an artificial intelligence to pass an Advanced Placement biology exam. Make it capable of answering questions that it hasn't been specifically trained for. (I picked AP Bio because the test is more than a simple regurgitation of scientific facts: it asks you to think critically about biology.) If you can do that, I said, then you'll have made a true breakthrough.

I thought the challenge would keep them busy for two or three years. They finished it in just a few months.

In September, when I met with them again, I watched in awe as they asked GPT, their AI model, 60 multiple-choice questions from the AP Bio exam, and it got 59 of them right. Then it wrote outstanding answers to six open-ended questions from the exam. We had an outside expert score the test, and GPT got a 5, the highest possible score, and the equivalent of getting an A or A+ in a college-level biology course.

Once it had aced the test, we asked it a non-scientific question: What do you say to a father with a sick child? It wrote a thoughtful answer that was probably better than most of us in the room would have given. The whole experience was stunning.

I knew I had just seen the most important advance in technology since the graphical user interface.

This inspired me to think about all the things that AI can achieve in the next five to 10 years.

The development of AI is as fundamental as the creation of the microprocessor, the personal computer, the Internet, and the mobile phone. It will change the way people work, learn, travel, get health care, and communicate with each other. Entire industries will reorient around it. Businesses will distinguish themselves by how well they use it.

Philanthropy is my full-time job these days, and I've been thinking a lot about how, in addition to helping people be more productive, AI can reduce some of the world's worst inequities. Globally, the worst inequity is in health: 5 million children under the age of 5 die every year. That's down from 10 million two decades ago, but it's still a shockingly high number. Nearly all of these children were born in poor countries and die of preventable causes like diarrhea or malaria. It's hard to imagine a better use of AI than saving the lives of children.

In the United States, the best opportunity for reducing inequity is to improve education, particularly making sure that students succeed at math. The evidence shows that having basic math skills sets students up for success, no matter what career they choose. But achievement in math is going down across the country, especially for Black, Latino, and low-income students. AI can help turn that trend around.

Climate change is another issue where I'm convinced AI can make the world more equitable. The injustice of climate change is that the people who are suffering the most, the world's poorest, are also the ones who did the least to contribute to the problem. I'm still thinking and learning about how AI can help, but later in this post I'll suggest a few areas with a lot of potential.

In short, I'm excited about the impact that AI will have on issues that the Gates Foundation works on, and the foundation will have much more to say about AI in the coming months. The world needs to make sure that everyone, and not just people who are well-off, benefits from artificial intelligence. Governments and philanthropy will need to play a major role in ensuring that it reduces inequity and doesn't contribute to it. This is the priority for my own work related to AI.

Any new technology that's so disruptive is bound to make people uneasy, and that's certainly true with artificial intelligence. I understand why: it raises hard questions about the workforce, the legal system, privacy, bias, and more. AIs also make factual mistakes and experience hallucinations. Before I suggest some ways to mitigate the risks, I'll define what I mean by AI, and I'll go into more detail about some of the ways in which it will help empower people at work, save lives, and improve education.

" + (mainIIG + 1) + "of" + listOfObjects.length + "

" + (mainIIG + 1) + "of" + listOfObjects.length + "

" + (mainIIG + 1) + "of" + listOfObjects.length + "

" + (mainIIG + 1) + "of" + listOfObjects.length + "

" + (mainIIG + 1) + "of" + listOfObjects.length + "

"+listOfRelated[0].outerHTML+"

" + listOfObjects[0].outerHTML + "

1of" + listOfObjects.length + "

"+listOfObjects[0].outerHTML+"

1of"+listOfObjects.length+"

Read the rest here:
The Age of AI has begun | Bill Gates - Gates Notes

Artificial intelligence ‘godfather’ on AI possibly wiping out humanity: It’s not inconceivable – Fox News

Geoffrey Hinton, a computer scientist who has been called "the godfather of artificial intelligence", says it is "not inconceivable" that AI may develop to the point where it poses a threat to humanity.

The computer scientist sat down with CBS News this week to discuss his predictions for the advancement of AI. He compared the invention of AI to that of electricity or the wheel.

Hinton, who works at Google and the University of Toronto, said that the development of general purpose AI is progressing faster than people may imagine. General purpose AI is artificial intelligence with several intended and unintended purposes, including speech recognition, answering questions and translation.

"Until quite recently, I thought it was going to be like 20 to 50 years before we have general purpose AI. And now I think it may be 20 years or less," Hinton predicted. Asked specifically the chances of AI "wiping out humanity," Hinton said, "I think it's not inconceivable. That's all I'll say."

Geoffrey Hinton, chief scientific adviser at the Vector Institute, speaks during The International Economic Forum of the Americas (IEFA) Toronto Global Forum in Toronto, Ontario, Canada, on Thursday, Sept. 5, 2019. (Cole Burston/Bloomberg via Getty Images)

Artificial general intelligence refers to the potential ability of an intelligent agent to learn any mental task that a human can do. It has not been developed yet, and computer scientists are still figuring out if it is possible.

Hinton said it was plausible for computers to eventually gain the ability to create ideas to improve themselves.

"That's an issue, right. We have to think hard about how you control that," Hinton said.

A ChatGPT prompt is shown on a device near a public school in Brooklyn, New York, Thursday, Jan. 5, 2023. New York City school officials started blocking this week the impressive but controversial writing tool that can generate paragraphs of human-like text. (AP Photo/Peter Morgan)

But the computer scientist warned that many of the most serious consequences of artificial intelligence won't come to fruition in the near future.

"I think it's very reasonable for people to be worrying about these issues now, even though it's not going to happen in the next year or two," Hinton said. "People should be thinking about those issues."

Hinton's comments come as artificial intelligence software continues to grow in popularity. OpenAI's ChatGPT is a recently released artificial intelligence chatbot that has shocked users by being able to compose songs, create content and even write code.

In this photo illustration, a Google Bard AI logo is displayed on a smartphone with a Chat GPT logo in the background. (Photo Illustration by Avishek Das/SOPA Images/LightRocket via Getty Images)


"We've got to be careful here," OpenAI CEO Sam Altman said about his company's creation earlier this month. "I think people should be happy that we are a little bit scared of this."

Follow this link:
Artificial intelligence 'godfather' on AI possibly wiping out humanity: It's not inconceivable - Fox News

Artificial intelligence could help hunt for life on Mars and other alien worlds – Space.com

A newly developed machine-learning tool could help scientists search for signs of life on Mars and other alien worlds.

With the ability to collect samples from other planets severely limited, scientists currently have to rely on remote sensing methods to hunt for signs of alien life. That means any method that could help direct or refine this search would be incredibly useful.

With this in mind, a multidisciplinary team of scientists led by Kim Warren-Rhodes of the SETI (Search for Extraterrestrial Intelligence) Institute in California mapped the sparse lifeforms that dwell in salt domes, rocks and crystals in the Salar de Pajonales, a salt flat on the boundary of the Chilean Atacama Desert and Altiplano, or high plateau.

Related: The search for alien life

Warren-Rhodes then teamed up with Michael Phillips from the Johns Hopkins University Applied Physics Laboratory and University of Oxford researcher Freddie Kalaitzis to train a machine learning model to recognize the patterns and rules associated with the distribution of life across the harsh region. Such training taught the model to spot the same patterns and rules for a wide range of landscapes including those that may lie on other planets.

The team discovered that their system, by combining statistical ecology with AI, could locate and detect biosignatures up to 87.5% of the time, compared with a success rate of no more than 10% for random searches. Additionally, the program could shrink the area needed for a search by as much as 97%, helping scientists significantly narrow their hunt for potential chemical traces of life, or biosignatures.

"Our framework allows us to combine the power of statistical ecology with machine learning to discover and predict the patterns and rules by which nature survives and distributes itself in the harshest landscapes on Earth," Warren-Rhodes said in a statement. "We hope other astrobiology teams adapt our approach to mapping other habitable environments and biosignatures."

Such machine learning tools, the researchers say, could be applied to robotic planetary missions like that of NASA's Perseverance rover, which is currently hunting for traces of life on the floor of Mars' Jezero Crater.

"With these models, we can design tailor-made roadmaps and algorithms to guide rovers to places with the highest probability of harboring past or present life no matter how hidden or rare," Warren-Rhodes explained.

The team chose Salar de Pajonales as a testing ground for their machine learning model because it is a suitable analog for the dry, arid landscape of modern-day Mars. The region is a high-altitude dry salt lakebed blasted with high levels of ultraviolet radiation. Despite being considered highly inhospitable, however, Salar de Pajonales still harbors some living things.

The team collected almost 8,000 images and over 1,000 samples from Salar de Pajonales to detect photosynthetic microbes living within the region's salt domes, rocks and alabaster crystals. The pigments that these microbes secrete represent a possible biosignature on NASA's "ladder of life detection," which is designed to guide scientists to look for life beyond Earth within the practical constraints of robotic space missions.

The team also examined Salar de Pajonales using drone imagery that is analogous to images of Martian terrain captured by the High Resolution Imaging Science Experiment (HiRISE) camera aboard NASA's Mars Reconnaissance Orbiter. This data allowed them to determine that microbial life at Salar de Pajonales is not randomly distributed but rather is concentrated in biological hotspots strongly linked to the availability of water.

Warren-Rhodes' team then trained convolutional neural networks (CNNs) to recognize and predict large geologic features at Salar de Pajonales. Some of these features, such as patterned ground or polygonal networks, are also found on Mars. The CNN was also trained to spot and predict smaller microhabitats most likely to contain biosignatures.
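The study does not spell out its CNN architecture here, but the building block such a network relies on, the convolution filter, is simple to sketch. The NumPy snippet below is illustrative only: the `conv2d` helper, the kernel and the toy terrain images are invented for this example, not taken from the research. It shows how an edge-style filter responds strongly to ridged, patterned terrain while staying silent on featureless ground, the kind of local signal a trained CNN aggregates to flag features like polygonal networks.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2-D cross-correlation: the core operation of a CNN layer."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A Laplacian-style edge kernel: it responds to abrupt brightness changes,
# such as the ridges of polygonal patterned ground, and is zero on flat terrain.
edge_kernel = np.array([[0, -1, 0],
                        [-1, 4, -1],
                        [0, -1, 0]], dtype=float)

flat = np.ones((8, 8))      # featureless terrain
ridged = np.ones((8, 8))
ridged[::2, :] = 2.0        # alternating bright rows: a crude "patterned ground"

flat_response = np.abs(conv2d(flat, edge_kernel)).mean()
ridge_response = np.abs(conv2d(ridged, edge_kernel)).mean()
print(flat_response, ridge_response)  # the ridged terrain excites the filter far more
```

In a real CNN the filter weights are not hand-picked like this but learned from labeled examples, and hundreds of such filters are stacked in layers; the principle of turning local spatial patterns into detection scores is the same.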

For the time being, the researchers will continue to train their AI at Salar de Pajonales, next aiming to test the CNN's ability to predict the location and distribution of ancient stromatolite fossils and salt-tolerant microbiomes. This should help reveal whether the rules the model uses in this search also apply to the hunt for biosignatures in other, similar natural systems.

After this, the team aims to begin mapping hot springs, frozen permafrost-covered soils and the rocks in dry valleys, hopefully teaching the AI to home in on potential habitats in other extreme environments here on Earth before potentially exploring those of other planets.

The team's research was published this month in the journal Nature Astronomy.


Read more from the original source:
Artificial intelligence could help hunt for life on Mars and other alien worlds - Space.com