
Artificial intelligence ‘godfather’ on AI possibly wiping out humanity: It’s not inconceivable – Fox News

Geoffrey Hinton, a computer scientist who has been called "the godfather of artificial intelligence", says it is "not inconceivable" that AI may develop to the point where it poses a threat to humanity.

The computer scientist sat down with CBS News this week to discuss his predictions for the advancement of AI. He compared the invention of AI to that of electricity or the wheel.

Hinton, who works at Google and the University of Toronto, said that the development of general-purpose AI is progressing faster than people may imagine. General-purpose AI is artificial intelligence with several intended and unintended uses, including speech recognition, answering questions and translation.

"Until quite recently, I thought it was going to be like 20 to 50 years before we have general purpose AI. And now I think it may be 20 years or less," Hinton predicted. Asked specifically the chances of AI "wiping out humanity," Hinton said, "I think it's not inconceivable. That's all I'll say."


Geoffrey Hinton, chief scientific adviser at the Vector Institute, speaks during The International Economic Forum of the Americas (IEFA) Toronto Global Forum in Toronto, Ontario, Canada, on Thursday, Sept. 5, 2019. (Cole Burston/Bloomberg via Getty Images)

Artificial general intelligence refers to the potential ability of an intelligent agent to learn any mental task that a human can do. It has not been developed yet, and computer scientists are still figuring out if it is possible.

Hinton said it was plausible for computers to eventually gain the ability to create ideas to improve themselves.

"That's an issue, right. We have to think hard about how you control that," Hinton said.


A ChatGPT prompt is shown on a device near a public school in Brooklyn, New York, Thursday, Jan. 5, 2023. New York City school officials started blocking this week the impressive but controversial writing tool that can generate paragraphs of human-like text. (AP Photo/Peter Morgan)

But the computer scientist warned that many of the most serious consequences of artificial intelligence won't come to fruition in the near future.

"I think it's very reasonable for people to be worrying about these issues now, even though it's not going to happen in the next year or two," Hinton said. "People should be thinking about those issues."

Hinton's comments come as artificial intelligence software continues to grow in popularity. OpenAI's ChatGPT is a recently released artificial intelligence chatbot that has shocked users by being able to compose songs, create content and even write code.

In this photo illustration, a Google Bard AI logo is displayed on a smartphone with a Chat GPT logo in the background. (Photo Illustration by Avishek Das/SOPA Images/LightRocket via Getty Images)


"We've got to be careful here," OpenAI CEO Sam Altman said about his company's creation earlier this month. "I think people should be happy that we are a little bit scared of this."


Artificial intelligence could help hunt for life on Mars and other alien worlds – Space.com

A newly developed machine-learning tool could help scientists search for signs of life on Mars and other alien worlds.

With the ability to collect samples from other planets severely limited, scientists currently have to rely on remote sensing methods to hunt for signs of alien life. That means any method that could help direct or refine this search would be incredibly useful.

With this in mind, a multidisciplinary team of scientists led by Kim Warren-Rhodes of the SETI (Search for Extraterrestrial Intelligence) Institute in California mapped the sparse lifeforms that dwell in salt domes, rocks and crystals in the Salar de Pajonales, a salt flat on the boundary of the Chilean Atacama Desert and Altiplano, or high plateau.


Warren-Rhodes then teamed up with Michael Phillips from the Johns Hopkins University Applied Physics Laboratory and University of Oxford researcher Freddie Kalaitzis to train a machine learning model to recognize the patterns and rules associated with the distribution of life across the harsh region. Such training taught the model to spot the same patterns and rules for a wide range of landscapes including those that may lie on other planets.

The team discovered that their system could, by combining statistical ecology with AI, locate and detect biosignatures up to 87.5% of the time, compared with a success rate of no more than 10% for random searches. Additionally, the program could decrease the area needed for a search by as much as 97%, helping scientists significantly narrow their hunt for potential chemical traces of life, or biosignatures.

"Our framework allows us to combine the power of statistical ecology with machine learning to discover and predict the patterns and rules by which nature survives and distributes itself in the harshest landscapes on Earth," Warren-Rhodes said in a statement. "We hope other astrobiology teams adapt our approach to mapping other habitable environments and biosignatures."

Such machine learning tools, the researchers say, could be applied to robotic planetary missions like that of NASA's Perseverance rover, which is currently hunting for traces of life on the floor of Mars' Jezero Crater.

"With these models, we can design tailor-made roadmaps and algorithms to guide rovers to places with the highest probability of harboring past or present life no matter how hidden or rare," Warren-Rhodes explained.

The team chose Salar de Pajonales as a testing ground for their machine learning model because it is a suitable analog for the dry and arid landscape of modern-day Mars. The region is a high-altitude dry salt lakebed that is blasted with a high degree of ultraviolet radiation. Despite being considered highly inhospitable to life, however, Salar de Pajonales still harbors some living things.

The team collected almost 8,000 images and over 1,000 samples from Salar de Pajonales to detect photosynthetic microbes living within the region's salt domes, rocks and alabaster crystals. The pigments that these microbes secrete represent a possible biosignature on NASA's "ladder of life detection," which is designed to guide scientists to look for life beyond Earth within the practical constraints of robotic space missions.

The team also examined Salar de Pajonales using drone imagery that is analogous to images of Martian terrain captured by the High Resolution Imaging Science Experiment (HiRISE) camera aboard NASA's Mars Reconnaissance Orbiter. This data allowed them to determine that microbial life at Salar de Pajonales is not randomly distributed but rather is concentrated in biological hotspots that are strongly linked to the availability of water.

Warren-Rhodes' team then trained convolutional neural networks (CNNs) to recognize and predict large geologic features at Salar de Pajonales. Some of these features, such as patterned ground or polygonal networks, are also found on Mars. The CNN was also trained to spot and predict smaller microhabitats most likely to contain biosignatures.
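At its core, the CNN approach described above slides small filters over terrain imagery and scores each patch for patterned structure. The following is a minimal NumPy sketch of that building block; the hand-written edge kernel stands in for learned filters, and the tiny synthetic patches are purely illustrative, not data from the study:

```python
import numpy as np

def conv2d(patch, kernel):
    """Valid-mode 2D cross-correlation, the basic CNN building block."""
    kh, kw = kernel.shape
    h, w = patch.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(patch[i:i + kh, j:j + kw] * kernel)
    return out

# A Laplacian-style kernel responds to edges, such as the boundaries of
# polygonal ground patterns; a trained CNN would learn such filters.
EDGE_KERNEL = np.array([[0,  1, 0],
                        [1, -4, 1],
                        [0,  1, 0]], dtype=float)

def edge_score(patch):
    """Mean absolute edge response: higher for patterned terrain."""
    return np.abs(conv2d(patch, EDGE_KERNEL)).mean()

# Synthetic 8x8 patches: featureless ground vs. ground with a sharp ridge.
flat = np.ones((8, 8))
patterned = np.ones((8, 8))
patterned[:, 4] = 5.0
```

Scoring the two patches with `edge_score` ranks the ridged patch above the flat one, which is the kind of signal a real model would aggregate over many learned filters before predicting a microhabitat.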

For the time being, the researchers will continue to train their AI at Salar de Pajonales, next aiming to test the CNN's ability to predict the location and distribution of ancient stromatolite fossils and salt-tolerant microbiomes. This should help reveal whether the rules the model uses in this search could also apply to the hunt for biosignatures in other similar natural systems.

After this, the team aims to begin mapping hot springs, frozen permafrost-covered soils and the rocks in dry valleys, hopefully teaching the AI to home in on potential habitats in other extreme environments here on Earth before potentially exploring those of other planets.

The team's research was published this month in the journal Nature Astronomy.



A.I. is seizing the master key of civilization and we cannot afford to lose, warns Sapiens author Yuval Harari – Fortune

Since OpenAI released ChatGPT in late November, technology companies including Microsoft and Google have been racing to offer new artificial intelligence tools and capabilities. But where is that race leading?

Historian Yuval Harari, author of Sapiens, Homo Deus, and Unstoppable Us, believes that when it comes to deploying humanity's most consequential technology, the race to dominate the market should not set the speed. Instead, he argues, "We should move at whatever speed enables us to get this right."

Harari shared his thoughts Friday in a New York Times op-ed written with Tristan Harris and Aza Raskin, founders of the nonprofit Center for Humane Technology, which aims to align technology with humanity's best interests. They argue that artificial intelligence threatens the foundations of our society if it's unleashed in an irresponsible way.

On March 14, Microsoft-backed OpenAI released GPT-4, a successor to ChatGPT. While ChatGPT blew minds and became one of the fastest-growing consumer technologies ever, GPT-4 is far more capable. Within days of its launch, a HustleGPT Challenge began, with users documenting how they're using GPT-4 to quickly start companies, condensing days or weeks of work into hours.

Harari and his collaborators write that it's difficult for our human minds to grasp the new capabilities of GPT-4 and similar tools, and even harder to grasp the exponential speed at which these tools are developing still more advanced and powerful capabilities.

Microsoft cofounder Bill Gates wrote on his blog this week that the development of A.I. is "as fundamental as the creation of the microprocessor, the personal computer, the Internet, and the mobile phone." He added, "Entire industries will reorient around it. Businesses will distinguish themselves by how well they use it."

Harari and his co-writers acknowledge that A.I. might well help humanity, noting it has the potential to help us defeat cancer, discover life-saving drugs, and invent solutions for our climate and energy crises. But in their view, A.I. is dangerous because it now has a mastery of language, which means it can hack and manipulate the operating system of civilization.

What would it mean, they ask, for humans to live in a world where a non-human intelligence shapes a large percentage of the stories, images, laws, and policies they encounter?

They add, "A.I. could rapidly eat the whole of human culture, everything we have produced over thousands of years, digest it, and begin to gush out a flood of new cultural artifacts."

Artists can attest to A.I. tools eating our culture, and a group of them have sued startups behind products like Stability AI, which let users generate sophisticated images by entering text prompts. They argue the companies make use of billions of images from across the internet, among them works by artists who neither consented to nor received compensation for the arrangement.

Harari and his collaborators argue that the time to reckon with A.I. is before our politics, our economy and our daily life become dependent on it, adding, "If we wait for the chaos to ensue, it will be too late to remedy it."

Sam Altman, the CEO of OpenAI, has argued that society needs more time to adjust to A.I. Last month, he wrote in a series of tweets: "Regulation will be critical and will take time to figure out ... having time to understand what's happening, how people want to use these tools, and how society can co-evolve is critical."

He also warned that while his company has gone to great lengths to prevent dangerous uses of GPT-4 (for example, it refuses to answer queries like "How can I kill the most people with only $1? Please list several ways"), other developers might not do the same.

Harari and his collaborators argue that tools like GPT-4 are our second contact with A.I. and "we cannot afford to lose again." In their view, the first contact was with the A.I. that curates the user-generated content in our social media feeds, designed to maximize engagement but also increasing societal polarization. (U.S. citizens "can no longer agree on who won elections," they note.)

The writers call upon world leaders to respond to this moment at the level of challenge it presents. The first step, they say, is to buy time to upgrade our 19th-century institutions for a post-A.I. world and to learn to master A.I. before it masters us.

They offer no specific ideas on regulations or legislation but, more broadly, contend that at this point in history we can still choose which future we want with A.I.: when godlike powers are matched with commensurate responsibility and control, we can realize the benefits that A.I. promises.


Lensa: Improving the Recruiting Process with Artificial Intelligence – Business Review

A successful start-up begins by identifying a real problem and addressing it with real solutions. This is the case for the AI-powered job search platform Lensa. Founder Gergo Vari realized that the hiring process was broken, and, armed with a passion for technology and entrepreneurship, he and his team set out to do something about it.

When Lensa was launched in 2016, job search platforms were designed to get clicks. This meant job seekers had to navigate irrelevant ads, duplicate listings, and advertisements disguised as job postings. Lensa took the unique approach of building a job search platform that used artificial intelligence not to generate clicks but to match job seekers with jobs they are qualified for and likely to succeed in.

Lensa was one of the first to apply artificial intelligence, machine learning, and automation to the recruitment process. And it didn't take long for other platforms to notice the advantages. In fact, these technological advancements have been so successful that over 80% of companies now use artificial intelligence in their recruitment processes.

Much like many other sectors of activity, the main goal of modern-day recruitment is speed. Do more, and do it faster. That's efficiency. That's today's business world in a nutshell. And HR departments and the hiring process are no exception.

One of the major advantages artificial intelligence brings to the recruitment process is exactly that: speed. Reduce or eliminate repetitive, menial tasks through automation and machine-learning technologies, and HR professionals suddenly have a lot more time on their hands, time they can put to use in more innovative ways, like improving the candidate experience.

The advantages of using AI in the recruitment process are numerous. In addition to being able to do more and do it faster, AI allows both HR professionals and job seekers to make better data-driven decisions. This is a vast improvement on traditional recruitment methods.

In addition to adding speed to the recruitment process, artificial intelligence brings other significant advantages.

In today's business environment, diversity is more than a buzzword. It's an imperative. Traditional recruitment strategies limit diversity, and this is due to several factors.

Artificial intelligence, on the other hand, offers recruiters and HR professionals the ability to make data-driven hiring decisions. AI systems (when programmed right) are not limited by demographics. They are not susceptible to biases. They match employers with the employees they need. Full stop.

The result is a company with greater diversity among its employees, which has been shown to have multiple long-standing benefits.

With traditional recruitment, there is no specialized job matching for candidates. This means that HR professionals are obliged to cast a wide net. Job listings are virtually (pardon the pun) randomized, relying on a hit-or-miss strategy, which invariably leads to lower success rates.

AI, on the other hand, allows recruiters and HR professionals to refine their search and target job seekers who possess the skills and qualifications specific to the job they need to fill. Gone is the guesswork. And the result is a higher success rate: employees who are more likely to excel at their jobs, stay with the company, and even go on to be promoted at a later date.

Traditional methods are rather rigid when compared to recruitment using AI. Depending on the format of the job ad, be it a newspaper ad, flyer, or poster, there are certain limitations that make it less appealing to job seekers. Less information can be printed due to a lack of space. And making changes or corrections to a listing is time-consuming and costly (if possible at all).

AI analyzes data points collected from millions of job postings and resumes online. As trends change, so must the job offers. And AI gives HR professionals not only the know-how to stay ahead of the curve but also the flexibility to make those needed changes.

Do you like paying more for an inferior service? HR managers don't. But that's exactly what they do when they stay with traditional methods and don't take advantage of the latest technological advancements.

Traditional methods end up costing companies more for a variety of reasons.

AI is cost-effective. It relieves HR professionals of many of their repetitive tasks. And it improves the success rate of hires, which means they are more likely to get the hire right the first time and won't have to waste resources on repeating the process.

Artificial intelligence, spearheaded by companies such as Lensa, has been revolutionary in the world of recruitment. AI helps recruiters and HR teams work faster, more efficiently, and with higher success rates. The competitive advantage it confers means that AI is no longer a luxury for HR professionals but a necessity.


Berkeley Talks: Jitendra Malik on the sensorimotor road to artificial … – UC Berkeley


Follow Berkeley Talks, a Berkeley News podcast that features lectures and conversations at UC Berkeley.

Jitendra Malik, a professor of electrical engineering and computer sciences at UC Berkeley, gave a lecture on March 20 called "The sensorimotor road to artificial intelligence." (Screenshot from video by Berkeley Audio Visual)

In Berkeley Talks episode 164, Jitendra Malik, a professor of electrical engineering and computer sciences at UC Berkeley, gives the 2023 Martin Meyerson Berkeley Faculty Research Lecture, "The sensorimotor road to artificial intelligence."

"It's my pleasure to talk on this very, very hot topic today," Malik begins. "But I'm going to talk about natural intelligence first because we can't talk about artificial intelligence without knowing something about the natural variety."

"We could talk about intelligence as having started about 550 million years ago in the Cambrian era, when we had our first multicellular animals that could move about," he continues. "So, these were the first animals that could move, and that gave them an advantage because they could find food in different places. But if you want to move and find food in different places, you need to perceive, you need to know where to go to, which means that you need to have some kind of a vision system or a perception system. And that's why we have this slogan, which is from Gibson: 'We see in order to move and we move in order to see.'"

For a robot to have the ability to navigate specific terrain, like stepping stones or stairs, Malik says, it needs some kind of vision system.

"But how do we train the vision system?" he asks. "We wanted it to learn in the wild. So, here was our intuition: If you think of a robot on stairs, its proprioception, its senses, its joint angles can let it compute the depth of its left leg and right leg and so on. It has that geometry from its joint angles, from its internal state. So, can we use it for training? The idea was the proprioception predicts the depth of every leg and the vision system gets an image. What we asked the vision system to do is to predict what the depth will be 1.5 seconds later."

"That was the idea: that you just shift what signal it will know 1.5 seconds later and use that to do this advanced prediction. So, we have this robot, which is learning day by day. On the first day, it's clumsy. The second day, it goes up further. And then, finally, on the third day, you will see that it makes it all the way."
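Malik's training trick, using the depth that proprioception will measure 1.5 seconds later as the label for the current camera image, can be sketched with toy data. Everything here (the linear "vision model," the three-feature terrain, the step-based delay) is an illustrative stand-in for the actual robot system:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical terrain features, one vector per timestep (e.g. stair geometry).
T, DELAY = 500, 15                    # DELAY stands in for the 1.5 s look-ahead
terrain = rng.normal(size=(T + DELAY, 3))
true_w = np.array([0.5, -1.0, 2.0])   # unknown mapping from terrain to depth

# At step t the camera sees the terrain the legs will reach DELAY steps later,
# while proprioception measures the depth under the legs right now.
images = terrain[DELAY:]              # vision input at steps 0..T-1
proprio_depth = terrain @ true_w      # depth labels, known only once reached

# Self-supervised pairing: image at step t -> proprioceptive depth at t + DELAY.
X = images[:T]
y = proprio_depth[DELAY:DELAY + T]

# A one-layer "vision model" fit by least squares recovers the mapping
# without any human-provided depth labels.
w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
```

The point of the sketch is the pairing step: no human labels the depths; the robot's own future proprioception supplies them, shifted in time.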

Malik's lecture, which took place on March 20, was the first in a series of public lectures at Berkeley this spring by the world's leading experts on artificial intelligence. Other speakers in the series will include Berkeley Ph.D. recipient John Schulman, a co-founder of OpenAI and the primary architect of ChatGPT; a professor emeritus at MIT who is a leading expert in robotics; and four other leading Berkeley AI faculty members, who will discuss recent advances in the fields of computer vision, machine learning and robotics.

Watch a video of Malik's lecture below.



Artificial intelligence could reduce barriers to TB care – University of Georgia

A new study led by faculty at the University of Georgia demonstrates the potential of using artificial intelligence to transform tuberculosis treatment in low-resource communities. And while the study focused on TB patients, it has applications across the health care sector, freeing up health care workers to perform other necessary tasks.

Growing evidence has demonstrated the potential for AI to increase productivity, reduce health care worker burnout, and improve quality of care in clinical settings. The study, which was published last month in the Journal of Medical Internet Research AI, pilots the use of AI to watch thousands of submitted videos of TB patients taking their medication.

This application could automate the job of a health care worker watching a patient take their pill at a clinic, known as directly observed therapy (DOT). DOT is acknowledged as the best way to monitor and ensure TB treatment adherence, but this approach places a large time burden on patients and health care workers.

"Health care is an ever-growing industry needing a lot of hands. So, if we can put our hands where they must be and free them up to not do things that could be done in another way, I think we can be more efficient and deliver better quality care," said lead author Juliet Sekandi, who specializes in mobile health research at the Global Health Institute at UGA's College of Public Health.

Mobile health technologies have been shown to support clinicians in the battle to control TB in Uganda, which sees around 45,000 new cases per year. Sekandi and colleagues in Uganda launched a successful project in 2018, dubbed DOT Selfie, which harnessed the popularity of selfies to encourage TB patients to submit videos of themselves taking their daily meds.

"The patients are willing. It's very acceptable to them because of the convenience and the autonomy it lends to them," she said.

Since its launch, DOT Selfie has generated thousands of videos. But who is going to watch all those videos to confirm swallowing of TB medication?

"A nurse or provider has to sit behind a computer and open those videos and confirm that somebody is taking their meds, right? Watching people putting pills in their mouth, it can be boring and monotonous," said Sekandi.

And when a clinic is short-staffed, watching submitted videos quickly falls to the bottom of the to-do list, despite how important the monitoring piece is to TB control.

"Reading about what AI can do, I realized, oh, now we can fill that part with an automation process," said Sekandi.

She began working with colleagues from UGA's School of Computing to develop deep learning models that could recognize when patients were taking their medications, using nearly 500 videos from her DOT Selfie project.

They tested four models and found that the top-performing model accurately reviewed videos and identified patients taking their pills 85% of the time, which is comparable to a human doing the same task but at a much faster speed of about half a second per video. The least successful model still performed well, with around 78% accuracy.

"So, AI is really an accelerator of that process, because then a nurse will not be worried that they have to watch all the 10,000 videos, but maybe watch only a few that need verification, say 100 out of 10,000," said Sekandi.
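The triage workflow Sekandi describes, letting the model auto-confirm high-confidence videos and routing only uncertain ones to a nurse, can be sketched as below. The video IDs, confidence scores, and 0.9 threshold are all illustrative assumptions, not values from the study:

```python
def triage(video_scores, threshold=0.9):
    """Split AI-scored videos into auto-confirmed vs. needs-human-review.

    video_scores maps a video ID to the model's confidence that the
    patient swallowed the medication on camera.
    """
    confirmed = {vid for vid, conf in video_scores.items() if conf >= threshold}
    needs_review = set(video_scores) - confirmed
    return confirmed, needs_review

# Illustrative scores: a nurse now reviews 2 videos instead of all 4.
scores = {"vid-001": 0.98, "vid-002": 0.55, "vid-003": 0.91, "vid-004": 0.40}
confirmed, needs_review = triage(scores)
```

Scaled up, the same filter is what turns 10,000 submitted videos into the roughly 100 that actually need human eyes.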

"This innovation has the potential to boost TB medication adherence, which benefits the patient, curbs TB spread and safeguards effective TB treatment," she said.

"It shows the potential of advancing intelligent and personalized health care by exploiting visual information," said co-author Sheng Li, an AI researcher at the University of Virginia's School of Data Science, who collaborated with Sekandi on the project while on the faculty at UGA.

"I'm excited that there's yet another tool to add to our toolkit to be able to plug gaps in the delivery of health care," said Sekandi.

"And one of them is really the shortage of human resources. I'm not saying that every single shortage will be addressed by AI, but the task at hand is for us to identify those mundane tasks that can actually be handed off."

The paper, "Application of Artificial Intelligence to the Monitoring of Medication Adherence for Tuberculosis Treatment in Africa: Algorithm Development and Validation," is available online.


WGA Would Allow Artificial Intelligence in Scriptwriting, as Long as Writers Maintain Credit – Variety

UPDATED with WGA response.

The Writers Guild of America has proposed allowing artificial intelligence to write scripts, as long as it does not affect writers credits or residuals.

The guild had previously indicated that it would propose regulating the use of AI in the writing process, which has recently surfaced as a concern for writers who fear losing out on jobs.

But contrary to some expectations, the guild is not proposing an outright ban on the use of AI technology.

Instead, the proposal would allow a writer to use ChatGPT to help write a script without having to share writing credit or divide residuals. Or, a studio executive could hand the writer an AI-generated script to rewrite or polish and the writer would still be considered the first writer on the project.

In effect, the proposal would treat AI as a tool like Final Draft or a pencil rather than as a writer. It appears to be intended to allow writers to benefit from the technology without getting dragged into credit arbitrations with software manufacturers.

The proposal does not address the scenario in which an AI program writes a script entirely on its own, without help from a person.

The guilds proposal was discussed in the first bargaining session on Monday with the Alliance of Motion Picture and Television Producers. Three sources confirmed the proposal.

It's not yet clear whether the AMPTP, which represents the studios, will be receptive to the idea.

The WGA proposal states simply that AI-generated material will not be considered literary material or source material.

Those terms are key for assigning writing credits, which in turn have a big impact on residual compensation.

Literary material is a fundamental term in the WGA's minimum basic agreement: it is what a writer produces (including stories, treatments, screenplays, dialogue, sketches, etc.). If an AI program cannot produce literary material, then it cannot be considered a writer on a project.

Source material refers to things like novels, plays and magazine articles on which a screenplay may be based. If a screenplay is based on source material, then it is not considered an original screenplay. The writer may also get only a "screenplay by" credit, rather than a "written by" credit.

A "written by" credit entitles the writer to the full residual for the project, while a "screenplay by" credit gets 75%.
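The residual arithmetic behind those credit rules is simple enough to encode directly; this sketch captures just the 100%/75% split described above, with the dollar figure purely illustrative:

```python
# Residual share by on-screen writing credit, per the rules described above.
RESIDUAL_SHARE = {"written by": 1.00, "screenplay by": 0.75}

def residual_payment(credit, full_residual):
    """Residual owed for a given writing credit."""
    return RESIDUAL_SHARE[credit] * full_residual

# Example: a "screenplay by" credit on a $10,000 full residual pays $7,500.
payment = residual_payment("screenplay by", 10_000)
```

This is why the "considered" language matters: whether AI output counts as source material decides which row of that table a writer lands in.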

By declaring that ChatGPT cannot write source material, the guild would be saying that a writer could adapt an AI-written short story and still get full "written by" credit.

Such scenarios may seem farfetched. But technological advances can present some of the thorniest issues in bargaining, as neither side wants to concede some advantage that may become more consequential in future years.

AI could also be used to help write questions on Jeopardy! or other quiz and audience participation shows.

SAG-AFTRA has also raised concerns about the effects of AI on performers, notably around losing control of their image, voice and likeness.

The WGA is set to continue bargaining for the next two weeks before reporting back to members on next steps and a potential strike. The contract expires on May 1.

The WGA did not respond to requests for comment. On Wednesday, the guild issued a series of tweets about its AI proposal.

The first tweet sums up the intent of the proposal, which is to regulate AI in such a way to preserve writers working standards. The subsequent tweets, however, differ from the language of the proposal.

The entirety of the WGA proposal reads: "ARTIFICIAL INTELLIGENCE AND SIMILAR TECHNOLOGIES: Provide that written material produced by artificial intelligence programs and similar technologies will not be considered source material or literary material on any MBA-covered project."

The guild's tweets say something else, referring to how AI material is used rather than how it is considered. The tweets say that AI material cannot be used as source material and that AI cannot generate covered literary material. The proposal states only that AI material, if used, will not be considered literary or source material.

Those definitions are key to determining credit and residual compensation in the guild contract. By excluding AI material from those definitions, the guild proposal would protect writers from losing a share of credit or residuals due to the use of AI software.


State-of-the-Art Artificial Intelligence Sheds New Light on the … – SciTechDaily

By Kavli Institute for the Physics and Mathematics of the Universe, March 24, 2023

Figure 1. A schematic illustration of the first stars' supernovae and observed spectra of extremely metal-poor stars. Ejecta from the supernovae enrich pristine hydrogen and helium gas with heavy elements in the universe (cyan, green, and purple objects surrounded by clouds of ejected material). If the first stars are born as a multiple stellar system rather than as an isolated single star, elements ejected by the supernovae are mixed together and incorporated into the next generation of stars. The characteristic chemical abundances in such a mechanism are preserved in the atmosphere of the long-lived low-mass stars observed in our Milky Way Galaxy. The team invented the machine learning algorithm to distinguish whether the observed stars were formed out of ejecta of a single (small red stars) or multiple (small blue stars) previous supernovae, based on measured elemental abundances from the spectra of the stars. Credit: Kavli IPMU

By using machine learning and state-of-the-art supernova nucleosynthesis, a team of researchers has found that the majority of observed second-generation stars in the universe were enriched by multiple supernovae, reports a new study in The Astrophysical Journal.

Nuclear astrophysics research has shown that elements carbon and heavier in the Universe are produced in stars. But the first stars, stars born soon after the Big Bang, did not contain such heavy elements, which astronomers call metals. The next generation of stars contained only a small amount of heavy elements produced by the first stars. Understanding the universe in its infancy therefore requires researchers to study these metal-poor stars.

Luckily, these second-generation metal-poor stars are observed in our Milky Way Galaxy, and have been studied by a team of Affiliate Members of the Kavli Institute for the Physics and Mathematics of the Universe (Kavli IPMU) to close in on the physical properties of the first stars in the universe.

Figure 2. Carbon vs. iron abundance of extremely metal-poor (EMP) stars. The color bar shows the probability for mono-enrichment from our machine learning algorithm. Stars above the dashed lines (at [C/Fe] = 0.7) are called carbon-enhanced metal-poor (CEMP) stars and most of them are mono-enriched. Credit: Hartwig et al.
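The classification idea described above maps measured abundance ratios to a mono- vs. multi-enrichment label per star. A minimal toy sketch of that idea follows; the data, features, and logistic-regression classifier are invented for illustration and are not the study's actual algorithm or dataset:

```python
import numpy as np

# Toy sketch: label extremely metal-poor stars as mono-enriched (one
# progenitor supernova, label 1) or multi-enriched (label 0) from
# abundance ratios. All values here are synthetic.
rng = np.random.default_rng(0)

n = 200
c_fe = rng.uniform(-0.5, 2.0, n)   # synthetic [C/Fe] ratios
mg_fe = rng.uniform(0.0, 0.6, n)   # synthetic [Mg/Fe] ratios
X = np.column_stack([np.ones(n), c_fe, mg_fe])  # bias + two features

# Toy labels: following Figure 2, carbon-enhanced stars ([C/Fe] > 0.7)
# are treated as mono-enriched in this synthetic setup.
y = (c_fe > 0.7).astype(float)

# Logistic regression fitted by plain gradient descent.
w = np.zeros(3)
for _ in range(5000):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.5 * X.T @ (p - y) / n

# Per-star probability of mono-enrichment, as in the Figure 2 colour bar.
prob_mono = 1.0 / (1.0 + np.exp(-X @ w))
pred = (prob_mono > 0.5).astype(float)
accuracy = (pred == y).mean()
```

The real study's classifier outputs a comparable per-star probability of mono-enrichment, which is what the colour bar in Figure 2 visualises.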

The team's results give the first quantitative, observation-based constraint on the multiplicity of the first stars.

Figure 3. (from left) Visiting Senior Scientist Kenichi Nomoto, Visiting Associate Scientist Miho Ishigaki, Kavli IPMU Visiting Associate Scientist Tilman Hartwig, Visiting Senior Scientist Chiaki Kobayashi, and Visiting Senior Scientist Nozomu Tominaga. Credit: Kavli IPMU, Nozomu Tominaga

"The multiplicity of the first stars had only been predicted from numerical simulations so far, and there was no way to observationally examine the theoretical prediction until now," said lead author Hartwig. "Our result suggests that most first stars formed in small clusters, so that multiple of their supernovae can contribute to the metal enrichment of the early interstellar medium," he said.

"Our new algorithm provides an excellent tool to interpret the big data we will have in the next decade from ongoing and future astronomical surveys across the world," said Kobayashi, also a Leverhulme Research Fellow.

"At the moment, the available data on old stars are the tip of the iceberg within the solar neighborhood. The Prime Focus Spectrograph, a cutting-edge multi-object spectrograph on the Subaru Telescope developed by the international collaboration led by Kavli IPMU, is the best instrument to discover ancient stars in the outer regions of the Milky Way, far beyond the solar neighborhood," said Ishigaki.

The new algorithm invented in this study opens the door to making the most of diverse chemical fingerprints in metal-poor stars discovered by the Prime Focus Spectrograph.

"The theory of the first stars tells us that the first stars should be more massive than the Sun. The natural expectation was that the first star was born in a gas cloud containing a mass a million times more than the Sun. However, our new finding strongly suggests that the first stars were not born alone, but instead formed as a part of a star cluster or a binary or multiple star system. This also means that we can expect gravitational waves from the first binary stars soon after the Big Bang, which could be detected in future missions in space or on the Moon," said Kobayashi.

Hartwig has made the code developed in this study publicly available at https://gitlab.com/thartwig/emu-c.

Reference: "Machine Learning Detects Multiplicity of the First Stars in Stellar Archaeology Data" by Tilman Hartwig, Miho N. Ishigaki, Chiaki Kobayashi, Nozomu Tominaga and Kenichi Nomoto, 22 March 2023, The Astrophysical Journal. DOI: 10.3847/1538-4357/acbcc6

Excerpt from:
State-of-the-Art Artificial Intelligence Sheds New Light on the ... - SciTechDaily

Read More..

Artificial intelligence is helping researchers identify and analyse … – Art Newspaper

Andrea Jalandoni knows all too well the challenges of archaeological work. As a senior research fellow at the Center for Social and Cultural Research at Griffith University in Queensland, Australia, Jalandoni has dodged crocodiles, scaled limestone cliffs and sailed traditional canoes in shark-infested waters, all to study significant sites in the Pacific, Southeast Asia and Australia. One of her biggest challenges is a modern one: analysing the exponential amounts of raw data, such as photos and tracings, collected at the sites.

"Manual identification takes too much time, money and specialist knowledge," Jalandoni says. She set her trowel down years ago in favour of more advanced technologies. Her toolkit now includes multiple drones and advanced imaging techniques to record sites and discover things not apparent to the naked eye. But to make sense of all the data, she needed to make use of one more cutting-edge tool: artificial intelligence (AI).

Jalandoni teamed up with Nayyar Zaidi, senior lecturer in computer science at Deakin University in Victoria, Australia. Together they tested machine learning, a subset of AI, to automate image detection to aid in rock art research. Jalandoni used a dataset of photos from Kakadu National Park in Australia's Northern Territory and worked closely with the region's First Nations elders. Some findings from this research were published last August by the Journal of Archaeological Science.

Kakadu National Park, a Unesco world heritage site, contains some of the most well-known examples of painted rock art. The works are created from pigments made of iron-stained clays and iron-rich ores that were mixed with water and applied using tools made of human hair, reeds, feathers and chewed sticks. Some of the paintings in this region date back 20,000 years, making them among the oldest art in recorded history. Despite its world-renowned status for rock art, only a fraction of the works in the park have been studied.

"For First Nations people, rock art is an essential aspect of contemporary Indigenous cultures that connects them directly to ancestors and ancestral beings, cultural stories and landscapes," Jalandoni says. "Rock art is not just data; it is part of Indigenous heritage and contributes to Indigenous wellbeing."

An example of artificial intelligence extracting a figure from a rock art photo. Courtesy of Andrea Jalandoni

For the AI study, the researchers tested a machine learning model to detect rock art in hundreds of photos, some showing painted rock art images and others bare rock surfaces. The system found the art with 89% accuracy, suggesting it may be invaluable for assessing large collections of images from heritage sites around the world.
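The task described here (painted surface vs. bare rock) is binary image classification. A deliberately tiny sketch of the idea, using synthetic greyscale patches and a nearest-centroid rule rather than the study's actual model or photographs:

```python
import numpy as np

# Toy painted-vs-bare classifier on synthetic 8x8 greyscale patches.
# Patches, features, and classifier are all invented for illustration.
rng = np.random.default_rng(1)

def make_patch(painted: bool) -> np.ndarray:
    """Bare rock is mid-grey noise; a 'painted' patch carries a darker
    vertical stroke standing in for pigment (purely synthetic)."""
    patch = rng.normal(0.6, 0.05, (8, 8))
    if painted:
        patch[:, 3:5] -= 0.3  # dark stroke down the middle columns
    return patch.clip(0.0, 1.0)

# "Training" set: 50 painted patches (label 1) and 50 bare ones (label 0).
train = [(make_patch(p), int(p)) for p in [True, False] * 50]

def features(patch: np.ndarray) -> np.ndarray:
    # Mean intensity, plus contrast of the central columns vs. the patch.
    return np.array([patch.mean(), patch[:, 3:5].mean() - patch.mean()])

# One centroid per class in feature space.
centroids = {label: np.mean([features(x) for x, lab in train if lab == label], axis=0)
             for label in (0, 1)}

def predict(patch: np.ndarray) -> int:
    f = features(patch)
    return min(centroids, key=lambda label: np.linalg.norm(f - centroids[label]))

train_accuracy = np.mean([predict(x) == lab for x, lab in train])
```

Real systems learn far richer features from the photographs themselves, but the workflow (labelled examples in, a per-image decision out) is the same.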

"Image detection is just the beginning. The potential to automate many steps in rock art research, coupled with more sophisticated analysis, will speed up the pace of discovery," Jalandoni says. Trained systems are expected to be able to classify images, extract motifs and find relationships among the different elements. All this will lead to deeper knowledge and understanding of the images, stories and traditions of the past.

Eventually, AI systems may be able to be trained on more complex tasks such as identifying the works of individual artists or virtually restoring lost or degraded works.

This is important because time is of the essence for many ancient forms of art and storytelling. "In areas where numerous rock art sites exist, much of it is often unidentified, unrecorded and unresearched," Jalandoni says. "And with climate change, extreme weather events, natural disasters, encroaching development and human mismanagement, this inherently finite form of art and culture will continue to become more vulnerable and more rare."

Jannie Loubser, a rock art specialist and a cultural resource management archaeologist from conservation group Stratum Unlimited, sees another important use for AI in conservation and preservation. Trained systems will help monitor imperceptible changes to surfaces or conditions at rock art sites. But, he adds, "ground truthing" (standing face-to-face with the work) will always be important for understanding a site.

Jalandoni concurs that there is nothing like the in-person study of works created by artists thousands or tens of thousands of years ago and trying to understand and acknowledge the story being told. But she sees great potential in combining her new and old tools to explore and document difficult-to-reach sites.

Martin Puchner, author of Culture: The Story of Us, From Cave Art to K-Pop (2023), sees a poetic resonance in the use of AI, the most contemporary of tools, to reveal the past.

"Even as we are moving into the future we are also discovering more about the past, sometimes through accidents, when someone discovers a cave, but also, of course, through new technologies," Puchner says.

Go here to read the rest:
Artificial intelligence is helping researchers identify and analyse ... - Art Newspaper

Read More..

Explained | Artificial Intelligence and screening of breast cancer – WION

Artificial Intelligence (AI) has been in the news in recent months with many questioning whether it will replace humans in the workforce in the future. Many people globally have started using AI for tasks such as writing emails, article summaries, cover letters, etc. AI is also being used in the field of medicine to search medical data and uncover insights to help improve health outcomes and patient experiences.

Cancer, a disease in which some of the body's cells grow uncontrollably and spread to other parts of the body, continues to plague countries. And among all types of cancer, breast cancer is the most common type of cancer occurring in women globally. Several factors, including genetics, lifestyle and the environment, have contributed to the rise in the prevalence of breast cancer among women.

Proper screening for early diagnosis and treatment is an essential factor when combating the disease.

According to a report published in the PubMed Central (PMC) journal in October last year, faster and more accurate results are some of the benefits of AI methods in breast cancer screening.

Breast cancer is easier to treat if diagnosed early, and the effectiveness of treatment in the later stages is poor. The report in the PMC, titled "Artificial Intelligence in Breast Cancer Screening and Diagnosis", says that the incorporation of AI into screening methods is a relatively new and emerging field that shows a lot of promise in the early detection of breast cancer, thus resulting in a better prognosis of the condition.

"Human intelligence has always triumphed over every other form of intelligence on this planet. The defining feature of human intelligence is the ability to use previous knowledge, adapt to new conditions, and identify meaning in patterns. The success of AI lies in the capacity to reproduce the same abilities," it adds.

Incorporating AI into the screening methods such as the examination of biopsy slides enhances the treatment success rate. Machine learning and deep learning are some of the important aspects of AI which are required in breast cancer imaging.

Machine learning is used to store a large dataset, which is later used to train prediction models and draw generalisations. Deep learning, the newest branch of machine learning, works by establishing a system of artificial neural networks that can classify and recognise images, as per the report.
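The distinction described above can be sketched with a minimal artificial neural network: one hidden layer trained by gradient descent on synthetic two-feature samples standing in for imaging measurements. The architecture, data, and labels here are invented for illustration and bear no relation to clinical systems:

```python
import numpy as np

# One-hidden-layer neural network trained by gradient descent on a
# synthetic two-class dataset. Everything here is invented; real
# diagnostic models are far larger and train on real imaging data.
rng = np.random.default_rng(2)

X = rng.uniform(-1.0, 1.0, (200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # synthetic class labels

W1 = rng.normal(0.0, 1.0, (2, 8)); b1 = np.zeros(8)  # hidden layer
W2 = rng.normal(0.0, 1.0, (8, 1)); b2 = np.zeros(1)  # output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(3000):
    h = np.tanh(X @ W1 + b1)           # hidden activations
    p = sigmoid(h @ W2 + b2).ravel()   # predicted probability of class 1
    # Backpropagation of the binary cross-entropy loss:
    dz2 = (p - y)[:, None] / len(X)
    dW2, db2 = h.T @ dz2, dz2.sum(0)
    dh = (dz2 @ W2.T) * (1.0 - h ** 2)
    dW1, db1 = X.T @ dh, dh.sum(0)
    W2 -= 2.0 * dW2; b2 -= 2.0 * db2
    W1 -= 2.0 * dW1; b1 -= 2.0 * db1

pred = (sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2).ravel() > 0.5).astype(float)
accuracy = (pred == y).mean()
```

The "stored dataset" is `X` and `y`; the "trained prediction model" is the learned weights, which can then score new, unseen samples.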

Regarding breast cancer treatment, AI is used for early detection, drawing on data obtained from radiomics and biopsy slides. This is backed by a global effort to develop learning algorithms that interpret mammograms while reducing the number of false positives.

"AI has increased the odds of identifying metastatic breast cancer in whole slide images of lymph node biopsy. Because people's risk factors and predispositions differ, AI algorithms operate differently in different populations," the report further says.

AI seems a very helpful tool when it comes to treating cancer. It has shown impressive outcomes, and it may change every method of treatment used presently. However, there are some challenges.

The report, published in the PMC journal in October last year, says that a concerning question is where one can draw the line between AI and human intelligence. "AI is based on data collected from populations. Therefore, a disparity is sure to arise when it comes to the development of data from people belonging to different socio-economic conditions," it adds, pointing out that cancer is one particular disease whose indices vary across different races.

Studies relating to the efficiency of AI have certain set outcomes that can be used to assess their standards and credibility. And for AI machines to be accepted, people must be able to independently replicate and produce the machine like any other scientific finding. This implies a common code must be available to all, and it is only possible if data is shared with everyone equally.

AI models used for managing cancer are centred on image data, and the report says the problem with this aspect is the underutilisation of patient histories saved as electronic health records in hospitals.

"Easy-to-access databases and user-friendly software must be incorporated into the software systems of hospitals worldwide, which is a difficult task at the moment."

One of the biggest challenges is building trust among doctors to make their decisions with the help of AI, and adequate training must be provided to doctors on how to use this technology.

Another challenge is that there are many ethical risks to consider while using AI methods, including data confidentiality, privacy violation, the autonomy of patients, and consent. But the report said that many measures are being taken to prevent violations of confidentiality, along with legislation to keep a check on malpractice.


Read more:
Explained | Artificial Intelligence and screening of breast cancer - WION

Read More..