
Playbook Deep Dive: What Trump’s indictment means – POLITICO

Well, I mean, in terms of the characters, yes, you're right that this is all sort of a throwback to the 2016-2018 period.

But, you know, one of the people who's testified twice, I believe, in front of this grand jury and who is central to this whole episode and who I believe has never spoken publicly about it is David Pecker. And so if there's any chance that he ends up testifying at a trial or ends up speaking about his side of the story, I would be very intrigued to hear that.

As you know, he was someone who was extremely close to Donald Trump, and that's how he got involved in this hush money payment to begin with. That's someone I would really like to hear from at some point if there's an opportunity to do that.

But in terms of the legal questions that are going to come up here, there are quite a number. But I think the biggest one is, you know, I mentioned that the indictment is sealed. We don't know what the counts are yet, but there are a lot of questions about how the district attorney, Alvin Bragg, constructed these charges and whether they will survive in court, because if they are what we think they're going to be, they rest on a largely untested legal theory.

And Trump's lawyers, of course, will try their hardest to fight them, and given that they're untested, there are just a lot of questions about how they'll survive. So that's probably the biggest issue here. But then, of course, we will run into all sorts of questions about the scheduling of legal proceedings and a potential trial for someone who is a presidential candidate. And that is likely to be very, very complicated.

See the rest here:
Playbook Deep Dive: What Trump's indictment means - POLITICO


With ChatGPT hype swirling, UK government urges regulators to come up with rules for A.I. – CNBC

[Image: The ChatGPT and OpenAI emblem and website. Credit: Nurphoto | Getty Images]

The U.K. government on Wednesday published recommendations for the artificial intelligence industry, outlining an all-encompassing approach for regulating the technology at a time when it has reached frenzied levels of hype.

In a white paper to be put forward to Parliament, the Department for Science, Innovation and Technology (DSIT) will outline five principles it wants companies to follow. They are: safety, security and robustness; transparency and explainability; fairness; accountability and governance; and contestability and redress.

Rather than establishing new regulations, the government is calling on regulators to apply existing regulations and inform companies about their obligations under the white paper.

It has tasked the Health and Safety Executive, the Equality and Human Rights Commission, and the Competition and Markets Authority with coming up with "tailored, context-specific approaches that suit the way AI is actually being used in their sectors."

"Over the next twelve months, regulators will issue practical guidance to organisations, as well as other tools and resources like risk assessment templates, to set out how to implement these principles in their sectors," the government said.

"When parliamentary time allows, legislation could be introduced to ensure regulators consider the principles consistently."

Maya Pindeus, CEO and co-founder of AI startup Humanising Autonomy, said the government's move marked a "first step" toward regulating AI.

"There does need to be a bit of a stronger narrative," she said. "I hope to see that. This is kind of planting the seeds for this."

However, she added, "Regulating technology as technology is incredibly difficult. You want it to advance; you don't want to hinder any advancements when it impacts us in certain ways."

The arrival of the recommendations is timely. ChatGPT, the popular AI chatbot developed by the Microsoft-backed company OpenAI, has driven a wave of demand for the technology, and people are using the tool for everything from penning school essays to drafting legal opinions.

ChatGPT has already become one of the fastest-growing consumer applications of all time, attracting 100 million monthly active users as of February. But experts have raised concerns about the negative implications of the technology, including the potential for plagiarism and discrimination against women and ethnic minorities.

AI ethicists are worried about biases in the data that trains AI models. Algorithms have been shown to have a tendency to be skewed in favor of men, especially white men, putting women and minorities at a disadvantage.

Fears have also been raised about the possibility of jobs being lost to automation. On Tuesday, Goldman Sachs warned that as many as 300 million jobs could be at risk of being wiped out by generative AI products.

The government wants companies that incorporate AI into their businesses to ensure they provide an ample level of transparency about how their algorithms are developed and used. Organizations "should be able to communicate when and how it is used and explain a system's decision-making process in an appropriate level of detail that matches the risks posed by the use of AI," the DSIT said.

Companies should also offer users a way to contest rulings taken by AI-based tools, the DSIT said. User-generated platforms like Facebook, TikTok and YouTube often use automated systems to remove content flagged up as being against their guidelines.

AI, which is believed to contribute £3.7 billion ($4.6 billion) to the U.K. economy each year, should also "be used in a way which complies with the UK's existing laws, for example the Equality Act 2010 or UK GDPR, and must not discriminate against individuals or create unfair commercial outcomes," the DSIT added.

On Monday, Secretary of State Michelle Donelan visited the offices of AI startup DeepMind in London, a government spokesperson said.

"Artificial intelligence is no longer the stuff of science fiction, and the pace of AI development is staggering, so we need to have rules to make sure it is developed safely," Donelan said in a statement Wednesday.

"Our new approach is based on strong principles so that people can trust businesses to unleash this technology of tomorrow."

Lila Ibrahim, chief operating officer of DeepMind and a member of the U.K.'s AI Council, said AI is a "transformational technology," but that it "can only reach its full potential if it is trusted, which requires public and private partnership in the spirit of pioneering responsibly."

"The UK's proposed context-driven approach will help regulation keep pace with the development of AI, support innovation and mitigate future risks," Ibrahim said.

It comes after other countries have come up with their own respective regimes for regulating AI. In China, the government has required tech companies to hand over details on their prized recommendation algorithms, while the European Union has proposed regulations of its own for the industry.

Not everyone is convinced by the U.K. government's approach to regulating AI. John Buyers, head of AI at the law firm Osborne Clarke, said the move to delegate responsibility for supervising the technology among regulators risks creating a "complicated regulatory patchwork full of holes."

"The risk with the current approach is that a problematic AI system will need to present itself in the right format to trigger a regulator's jurisdiction, and moreover the regulator in question will need to have the right enforcement powers in place to take decisive and effective action to remedy the harm caused and generate a sufficient deterrent effect to incentivise compliance in the industry," Buyers told CNBC via email.

By contrast, the EU has proposed a "top down regulatory framework" when it comes to AI, he added.


Original post:
With ChatGPT hype swirling, UK government urges regulators to come up with rules for A.I. - CNBC


8 mind-blowing space documentaries to watch now on NOVA – PBS

For almost 50 years, NOVA has explored the cosmos, taking viewers across our solar system, into distant galaxies, and right up to the edge of a black hole. From a Mars rover swooping down to the red planet, to a probe's daring encounter with an asteroid, we've followed NASA and other space missions as they've revealed the universe to humanity. Now we present a curated selection of space documentaries from the past five years so you can explore the universe alongside the scientists who make the journey possible.

In July 2022, NASA's James Webb Space Telescope released its first images, looking further back in time than ever before to show our universe in stunningly beautiful detail. But that was just the beginning: With tons of new data and spectacular images flooding in, Webb is allowing scientists to peer deep in time to try to answer some of astronomy's biggest questions. When, and how, did the first stars and galaxies form? And can we see the fingerprints of life in the atmospheres of distant worlds, or even within our own solar system?

A NASA spacecraft named Lucy blasts off from Cape Canaveral on a mission to the Trojans, a group of asteroids over 400 million miles from Earth thought to hold important clues about the origins of our solar system. Just hours before, in Senegal, West Africa, a team of scientists sets out to capture extraordinarily precise observations vital to the success of the Lucy mission: crucial data needed to help NASA navigate Lucy to its asteroid targets across millions of miles of space. The team's leader, Senegalese astronomer Maram Kaire, takes viewers on a journey to investigate his nation's rich and deep history of astronomy, reaching back thousands of years, and the promising future ahead.

How did NASA engineers build and launch the most ambitious telescope of all time? Follow the dramatic story of the James Webb Space Telescope, the most complex machine ever launched into space. If it works, scientists believe that this new eye on the universe will peer deeper back in time and space than ever before to the birth of galaxies, and may even be able to sniff the atmospheres of exoplanets as we search for signs of life beyond Earth. But getting it to work is no easy task. The telescope is far bigger than its predecessor, the famous Hubble Space Telescope, and it needs to make its observations a million miles away from Earth, so there will be no chance to go out and fix it. That means there's no room for error; the most ambitious telescope ever built needs to work perfectly. Meet the engineers making it happen and join them on their high-stakes journey to uncover new secrets of the universe.

In the five-part series NOVA Universe Revealed, we delve into the vastness of space to capture moments of high drama when the universe changed forever. In this episode, we tackle an age-old question: Are we alone? Or do other lifeforms and intelligences thrive on worlds far beyond our own? Ultra-sensitive telescopes and dogged detective work are transforming alien planet-hunting from science fiction into hard fact. Join NOVA on a visit to exotic worlds orbiting distant suns, from puffy planets with the density of Styrofoam to thousand-degree, broiling gas giants. Most tantalizing of all are the Super-Earths in the Goldilocks zone, just the right distance from their sun to support life, and with one of them signaling life's essential ingredient, water, in its atmosphere. Are we on the brink of answering that haunting question?

Follow along as NASA launches the Mars 2020 Mission, perhaps the most ambitious hunt yet for signs of ancient life on Mars. In February 2021, the spacecraft blazes into the Martian atmosphere at some 12,000 miles per hour and lowers the Perseverance Rover into the rocky Jezero Crater, home to a dried-up river delta scientists think could have harbored life. Perseverance will comb the area for signs of life and collect samples for possible return to Earth. Traveling onboard is a four-pound helicopter that will conduct a series of test flights, the first on another planet. During its journey, Perseverance will also test technology designed to produce oxygen from the Martian atmosphere, in hopes that the gas could be used for fuel, or for humans to breathe, on future missions.

In October 2020, a NASA spacecraft called OSIRIS-REx attempts to reach out and grab a piece of an asteroid named Bennu to bring it back to Earth. The OSIRIS-REx team has just three chances to extend its spacecraft's specialized arm, touch down for five seconds, and collect material from the surface of Bennu. But if they can pull it off, scientists could gain great insight into Earth's own origins, and even learn to defend against rogue asteroids that may one day threaten our planet.

On the 50th anniversary of the historic Apollo 11 Moon landing, NOVA looks ahead to the hoped-for dawn of a new age in lunar exploration. This time, governments and private industry are working together to reach our nearest celestial neighbor. But why go back? The Moon can serve as a platform for basic astronomical research; as an abundant source of rare metals and hydrogen fuel; and ultimately as a stepping stone for human missions to Mars and beyond. Join the next generation of engineers that aim to take us to the Moon, and discover how our legacy of lunar exploration won't be confined to the history books for long.

Black holes are the most enigmatic and exotic objects in the universe. They're also the most powerful, with gravity so strong it can trap light. And they're destructive, swallowing entire planets, even giant stars. Anything that falls into them vanishes, gone forever. Now, astrophysicists are realizing that black holes may be essential to how our universe evolved, their influence possibly leading to life on Earth and, ultimately, us. In this two-hour special, astrophysicist and author Janna Levin takes viewers on a journey to the frontiers of black hole science. Along the way, we meet leading astronomers and physicists on the verge of finding new answers to provocative questions about these shadowy monsters: Where do they come from? What's inside? What happens if you fall into one? And what can they tell us about the nature of space, time, and gravity?


See the original post here:
8 mind-blowing space documentaries to watch now on NOVA - PBS


Machine Learning Executive Talks Rise, Future of Generative AI – Georgetown University The Hoya

Keegan Hines, a former Georgetown adjunct professor and the current vice president of machine learning at Arthur AI, discussed the rapid rise in generative Artificial Intelligence (AI) programs and Georgetown's potential in adapting to software like ChatGPT.

The Master of Science in Data Science and Analytics program in the Graduate School of Arts & Sciences hosted the talk on March 17. The discussion centered on the rapid development of generative AI over the past six months.

Hines said generative AI has the capacity to radically change people's daily lives, including how students are taught and how entertainment is consumed.

"I definitely think we're going to see a lot of personal tutoring technologies coming up for both little kids and college students," Hines said at the event. "I have a feeling that in the next year, someone will try to make an entirely AI-generated TV show. It's not that hard to imagine an AI-generated script, animation and voice actors."

"Imagine what Netflix becomes. Netflix is no longer 'recommend Keegan the best content'; Netflix is now 'create something from scratch which is the perfect show Keegan's ever wanted to see,'" Hines added.

Hines then discussed algorithms that generate text. He said the principal goal of these algorithms is to create deep learning systems that can understand complex patterns over longer time scales.
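At its core, the kind of text generation Hines describes comes down to predicting the next token from the tokens that came before it; large models simply do this over much longer contexts with far richer statistics. As a rough illustration of the idea only (this is not code from the talk, and the tiny corpus below is invented), a character-level bigram model in Python looks like this:

```python
import numpy as np

# Minimal character-level bigram model: predict the next character from the
# current one. A toy stand-in for the "predict the next token" idea behind
# large language models; the corpus is invented for illustration.
corpus = "machine learning models learn patterns in text and predict the next token "

chars = sorted(set(corpus))
idx = {c: i for i, c in enumerate(chars)}

# Count how often each character follows each other character (with smoothing).
counts = np.ones((len(chars), len(chars)))
for a, b in zip(corpus, corpus[1:]):
    counts[idx[a], idx[b]] += 1

# Normalize each row into a next-character probability distribution.
probs = counts / counts.sum(axis=1, keepdims=True)

# Generate text by repeatedly sampling the next character.
rng = np.random.default_rng(0)
c, out = "m", ["m"]
for _ in range(60):
    c = chars[rng.choice(len(chars), p=probs[idx[c]])]
    out.append(c)
print("".join(out))
```

Modern language models replace the single-character context with thousands of tokens and the count table with billions of learned parameters, which is what "understanding complex patterns over longer time scales" points at.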

Hines said one challenge AI faces is that it can provide users with incorrect information.

"These models say things and sometimes they're just flatly wrong," Hines said. "Google got really panned when they made a product announcement about Bard and then people pointed out Bard had made a mistake."

Bard, Google's AI chatbot, incorrectly answered a question about the James Webb Space Telescope in a video from the program's launch Feb. 6, raising concerns about Google's rushed rollout of Bard and the possibility for generative AIs to spread misinformation.

Hines said the potential for bias and toxicity in AI is present, as seen with Microsoft's ChatGPT-powered Bing search engine, which manufactured a conspiracy theory relating Tom Hanks to the Watergate scandal.

"There's been a lot of research in AI alignment," Hines said. "How do we make these systems communicate the values we have?"

Teaching and learning in all levels of education will need to adapt to changes in technology, according to Hines.

"One example is a high school history teacher who told students to have ChatGPT write a paper and then correct it themselves," Hines said. "I think this is just the next iteration of open book, internet, ChatGPT. How do you get creative testing someone's critical thinking on the material?"

Hines said OpenAI, the company behind ChatGPT, noticed larger, more complex language models were more accurate than smaller models due to lower levels of test loss or errors made during training.

"A small model has a high test loss whereas a really big model has a much more impressive test loss," Hines said. "The big model also requires less data to reach an equivalent amount of test loss."

OpenAI's hypothesis was that the secret to unlocking rapid advancement in artificial intelligence lies in creating the largest model possible, according to Hines.

"There didn't seem to be an end to this trend," Hines said. "Their big hypothesis was, let's just go crazy and train the biggest model we can think of and keep going. Their big bet paid off and these strange, emergent, semi-intelligent behaviors are happening along the way."
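The trend Hines summarizes is usually written as a power law: test loss falls smoothly as model parameters and training tokens grow. The sketch below uses a generic scaling-law functional form with made-up constants, purely to show the shape of that curve; it is not the actual relationship OpenAI measured.

```python
import numpy as np

# Toy scaling-law curve: loss falls as a power law in parameter count (n) and
# training tokens (d). The form mirrors published scaling-law papers, but the
# constants here are invented for illustration only.
def toy_loss(n, d, a=400.0, b=4000.0, alpha=0.35, beta=0.35, floor=1.7):
    return floor + a / n**alpha + b / d**beta

for n in [1e7, 1e9, 1e11]:        # "small" through "really big" models
    for d in [1e9, 1e11]:         # two fixed data budgets
        print(f"params={n:.0e} tokens={d:.0e} loss={toy_loss(n, d):.3f}")
```

Running it reproduces the pattern from the talk in miniature: at a fixed data budget, the larger model reaches a noticeably lower loss.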

Hines said he is optimistic about the field's future, and he predicted AI will be able to produce even more complex results, such as creating a TV show. "It was really only about ten years ago that deep learning was proven to be viable," Hines said. "If we're going to avoid the dystopian path and go down the optimistic path, generative AI will be an assistant. It will get you 80% of the way and you do the next 20%."

See more here:
Machine Learning Executive Talks Rise, Future of Generative AI - Georgetown University The Hoya


Machine learning identifies ‘heart roundness’ as a new tool for diagnosing cardiovascular conditions – Medical Xpress


Deep learning-enabled analysis of medical images identifies cardiac sphericity as an early marker of cardiomyopathy and related outcomes. Credit: Med/Vukadinovic et al.

Physicians currently use assessments like heart chamber size and systolic function to diagnose and monitor cardiomyopathy and other related heart conditions. A paper published in the journal Med on March 29 suggests that another measurement (cardiac sphericity, or roundness of the heart) may one day be a useful implement to add to the diagnostic toolkit.

"Roundness of the heart isn't necessarily the problem per se; it's a marker of the problem," says co-corresponding author Shoa L. Clarke, a preventive cardiologist and an instructor at Stanford University School of Medicine. "People with rounder hearts may have underlying cardiomyopathy or underlying dysfunction with the molecular and cellular functions of the heart muscle. It could be reasonable to ask whether there is any utility in incorporating measurements of sphericity into clinical decision-making."

This proof-of-concept study used big data and machine learning to look at whether other anatomical changes in the heart could improve the understanding of cardiovascular risk and pathophysiology. The investigators chose to focus on sphericity because clinical experience had suggested it is associated with heart problems. Prior research had primarily focused on sphericity after the onset of heart disease, and they hypothesized that sphericity may increase even before the onset of clinical heart disease.

"We have established traditional ways of evaluating the heart, which have been important for how we diagnose and treat heart disease," Clarke says. "Now with the ability to use deep-learning techniques to look at medical images at scale, we have the opportunity to identify new ways of evaluating the heart that maybe we haven't considered much in the past."

"They say a picture is worth a thousand words, and we show that this is very true for medical imaging," says co-corresponding author David Ouyang, a cardiologist and researcher at the Smidt Heart Institute of Cedars-Sinai. "There's a lot more information available than what physicians are currently using. And just as we've previously known that a bigger heart isn't always better, we're learning that a rounder heart is also not better."

This research employed data from the UK Biobank, which includes genetic and clinical information on 500,000 people. As part of that study, a subset of volunteers had MRI imaging of their hearts performed. The California-based team used data from a subset of about 38,000 UK Biobank study participants who had MRIs that were considered normal at the time of the scans. Subsequent medical records from the volunteers indicated which of them later went on to develop diseases like cardiomyopathy, atrial fibrillation, or heart failure and which did not.

The researchers then used deep-learning techniques to automate the measurement of sphericity. Increased cardiac sphericity appeared to be linked to future heart troubles.
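The paper's exact measurement pipeline is not detailed in this article, so the following is only a schematic of what automating such a measurement can look like: given a binary segmentation mask of a heart chamber (assumed to come from a deep-learning segmentation model), estimate the long and short axes of the segmented region and report their ratio as a crude sphericity index. The definition, function name, and example mask are illustrative assumptions, not the authors' code.

```python
import numpy as np

def sphericity_index(mask):
    """Toy sphericity estimate for a 2D binary mask of a heart chamber.

    Fits the principal axes of the segmented pixels and returns the
    short-axis / long-axis ratio: 1.0 is perfectly round, smaller values
    are more elongated. An illustrative definition, not the measurement
    used in the Med paper.
    """
    ys, xs = np.nonzero(mask)
    coords = np.stack([xs, ys], axis=1).astype(float)
    coords -= coords.mean(axis=0)
    eigvals = np.sort(np.linalg.eigvalsh(np.cov(coords, rowvar=False)))
    return float(np.sqrt(eigvals[0] / eigvals[1]))  # short/long axis ratio

# Example: an elongated ellipse-like mask scores well below 1.0.
yy, xx = np.mgrid[0:200, 0:200]
ellipse = ((xx - 100) / 80) ** 2 + ((yy - 100) / 40) ** 2 <= 1
print(round(sphericity_index(ellipse), 2))  # about 0.5
```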

The investigators also looked at genetic drivers for cardiac sphericity and found overlap with the genetic drivers for cardiomyopathy. Using Mendelian randomization, they were able to infer that intrinsic disease of the heart muscle (meaning defects not caused by heart attacks) caused cardiac sphericity.

"There are two ways that these findings could add value," Ouyang says. "First, they might allow physicians to gain greater clinical intuition on how patients are likely to do at a very rapid glance. In the broader picture, this research suggests there are probably many useful measurements that clinicians still don't understand or haven't discovered. We hope to identify other ways to use imaging to help us predict what will happen next."

The researchers emphasize that much more research is needed before the findings from this study can be translated to clinical practice. For one thing, the connection is still speculative and would need to be confirmed with additional data. If the link is confirmed, a threshold would need to be established to indicate what degree of sphericity might suggest that clinical interventions are needed. The team is sharing all the data from this work and making them available to other investigators to begin answering some of these questions.

Additionally, ultrasound is more commonly used than MRI to image the heart. To further advance this research, replicating these findings using ultrasound images will be useful, they note.

More information: Shoa L. Clarke, Deep learning-enabled analysis of medical images identifies cardiac sphericity as an early marker of cardiomyopathy and related outcomes, Med (2023). DOI: 10.1016/j.medj.2023.02.009. http://www.cell.com/med/fulltext/S2666-6340(23)00069-7

Journal information: Med

See the original post:
Machine learning identifies 'heart roundness' as a new tool for diagnosing cardiovascular conditions - Medical Xpress


What the AI-generated image of Pope Francis means for the imagination – Vox.com

A recent viral image of Pope Francis wearing an unusually hip white puffer jacket was both a fake created by generative AI and an omen that marked the accelerating collapse of a clearly distinguishable boundary between imagination and reality.

Photorealistic images of fictions like Donald Trump getting arrested while stumbling in a sea of cops can now be generated on demand by AI programs like Midjourney, DALL-E 2, and Stable Diffusion. This sets off alarm bells around how misinformation may thrive. But along with risks, AI-generated imagery also offers a great leap forward for the human imagination.

"Seeing is believing" goes both ways. Image-generating AI will allow us to see realistic depictions of what does not yet exist, expanding the kinds of futures we can imagine as visual realities. The human imagination doesn't build ideas from scratch. It's combinatorial: The mind cobbles together new ideas from accumulated bits and pieces it has been exposed to. AI-generated images will greatly increase the raw material of plausible worlds the mind can imagine inhabiting and, through them, the kinds of futures we perceive as possible.

For example, it's one thing to read a description or see an illustration of a futuristic city with inspiring architecture, public transportation woven through greenery, and spaces designed for human interaction, not cars. It's another to see a spread of photorealistic images of what that could actually look like. By creating realistic representations of imagined realities, text-to-image-generating AI can make it easier for the mind to include new possibilities in how it imagines the world, reducing the barriers to believing that they could become a lived reality.

Last Friday, Reddit user u/trippy_art_special posted the image of the pope to the Midjourney subreddit, the generative AI platform used to produce it. The post contained four variations (a hallmark of Midjourney) of the pope ensconced in an on-trend long, puffy, white coat. One even had him in dark sunglasses, which looked especially smooth, even mysterious, in contrast to the radiant white of the coat and the deep chain.

The image was widely mistaken as real, and the pope's outfit was big news over the weekend. Once people caught on that the image was fake, it became even bigger news. "No way am I surviving the future of technology," the American model Chrissy Teigen tweeted.

Debates over why this particular image went viral or why so many people believed it to be real will soon be moot. For something that appears so convincing, why wouldn't we believe it? Neither was this the first media brush between Pope Francis and high fashion. In 2008, the Vatican daily newspaper quashed rumors of designer loafers, stating, "The pope, therefore, does not wear Prada, but Christ."

For those who scrutinized the image, you could still find clues of falsehood. A few inconspicuous smudges and blurs. But Midjourney's pace of improvement suggests correcting these remaining signs will happen swiftly. What then?

At The Verge, senior reporter James Vincent likened AI-generated imagery to the dawn of hyperreality, a concept developed by the French philosopher Jean Baudrillard. "Sooner or later," Vincent wrote, "AI fakes are going to become hyperreal, masking the distinction entirely between the imaginary and the real."

It's easy to imagine the nightmare that could follow. Hyperreality is usually invoked as a concern over simulations displacing reality, posing real and looming threats. AI fakes will offer fertile grounds for a new and potentially harrowing era of misinformation, rabbit holes unmoored from reality, and all manners of harassment. Adapting media literacy habits and protective regulations will be crucial.

But there is an upside: While AI fakes threaten to displace what the mind perceives as reality, they can also expand it.

In 1998, two leading philosophers, Andy Clark and David Chalmers, published a paper on their idea of the extended mind. They argued that cognitive processes are not confined within the boundaries of the skull, but extend out through the tools we use to interact with the world. These aids (a notebook, for example) are tangled up in how we think and are part of our extended minds. In this view, tools can become something like cognitive limbs: not separate from our capacities, but part of them.

You can flip this around: Building new tools is a way of building new mental capabilities. Until last weekend, most people could have imagined some image of what the pope might look like in a fashion-week puffer jacket (unless you have aphantasia, in which mental imagery is not part of your internal experience). But those mental images can be slippery. The more artistic among us could have drawn a few ideas, prompting a richer image. But soon, anyone will be able to imagine anything and render it into photorealistic quality, seeing it as though it were real. Making the visual concrete gives the mind something solid to grab hold of. That is a new trick for the extended mind.

"You should understand these tools as aids to your imagination," says Tony Chemero, a professor of philosophy and psychology at the University of Cincinnati and member of the Center for Cognition, Action, and Perception. But imagining "isn't something that just happens in your brain," he added. "It's interacting skillfully with the world around you. The imagination is in the activity, like an architect doing sketches."

There is disagreement among cognitive scientists on which kinds of tools merge with our extended minds, and which retain separate identities as tools we think with rather than through. Chemero distinguished between tools of the extended mind, like spoons or bicycles, and computers that run generative AI software like Midjourney. When riding a bicycle and suddenly wobbling through an inconveniently placed crater in the concrete, people tend to say, "I hit a pothole," instead of, "The bicycle wheel hit the pothole." The tool is conceived as a part of you. You'd be less likely to say, "I fell on the floor," after dropping your laptop.

Still, he told me that any tool that changes how we interact with the world also changes how we understand ourselves. "Especially what we understand ourselves as being capable of," he added.

Clark and Chalmers end their paper with an unusually fun line for academic philosophy: "once the hegemony of skin and skull is usurped, we may be able to see ourselves more truly as creatures of the world." Thinking with AI image generators, we may be able to see ourselves in picture-perfect quality as creatures of many different potential worlds, flush with imaginative possibilities that blend fact and fiction.

"It might be that you can use this to see different possible futures," Chemero told me, "to build them as a kind of image that a young person can imagine themselves as moving toward." G20 summits where all the world leaders are women; factories with warm lighting, jovial atmospheres, and flyers on how to form unions. These are now fictional realities we can see, rather than dimly imagine through flickers in the mind.

Of course, reality is real, as the world was reminded earlier this week when 86-year-old Pope Francis was taken into medical care for what the Vatican is calling a respiratory infection, though by Thursday he was reportedly improving and tweeting from the hospital. But if seeing is believing, these tools will make it easier for us to believe that an incredible diversity of worlds is possible, and to hold on to their solid images in our minds so that we can formulate goals around them. Turning imagination into reality starts with clear pictures. Now that we can generate them, we can get to work.


Read this article:
What the AI-generated image of Pope Francis means for the imagination - Vox.com


Where nature meets technology: Machine learning as a tool for … – McGill Tribune

With the dangers of continued fossil fuel use and environmental mismanagement unfolding before our eyes in the form of intense heat waves, droughts, and wildfires, it's obvious that dramatic, transformative action must be taken.

Throughout the pessimistic debate about the effectiveness of climate change policy and methods of pollution mitigation, almost every solution under the sun has been proposed. Some have suggested the widespread use of carbon capture technology, while others, like Boyan Slat, have developed ways to remove garbage from our oceans. But one technology has the potential to revolutionize climate action: Artificial intelligence (AI).

In a recent paper spearheaded by professor David Rolnick of the Department of Computer Science, researchers studied the application of machine learning to climate science in great detail. Each section of the article explored a specific sector (including electricity, industry, or infrastructure) and explained the ways machine learning could be used to reduce the sector's impact on the climate.

Machine learning is an offshoot of AI. While the aim of AI is to develop computers that can think like a human, machine learning is more about training computers on experiences and data to recognize patterns and make decisions.

"Machine learning is looking at large amounts of data, finding the patterns that are common across that data and linking those to what the algorithm is asked to do," Rolnick said in an interview with The McGill Tribune.

Uses for machine learning fall into a few categories, according to Rolnick: Monitoring, optimization, simulation, and forecasting. Take, for example, how forecasting can be applied to the study of electricity.

"Machine learning is used to predict the amount of electricity that will be in demand at a given point in time so there is enough supply to meet that but not more than there needs to be," Rolnick explained. "Understanding how much power is needed and how much power is available is important to make sure the grid is running effectively and without waste."
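At its simplest, the grid forecasting Rolnick describes is a regression problem: predict the next period's demand from recent demand and the time of day. The sketch below is a minimal illustration with synthetic data and assumed features, not the model any grid operator actually runs.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Toy electricity-demand forecaster: synthetic hourly demand with a daily
# cycle, predicted from the previous hour's demand plus a time-of-day
# encoding. Real grid forecasters use far richer features (weather,
# holidays, outages, and so on).
rng = np.random.default_rng(0)
hours = np.arange(24 * 60)                                  # 60 days of hourly data
demand = 30 + 10 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 1, hours.size)

X = np.column_stack([
    demand[:-1],                                            # lag-1 demand
    np.sin(2 * np.pi * hours[1:] / 24),                     # time-of-day encoding
    np.cos(2 * np.pi * hours[1:] / 24),
])
y = demand[1:]

split = 24 * 50                                             # train on the first 50 days
model = LinearRegression().fit(X[:split], y[:split])
pred = model.predict(X[split:])
print("mean absolute error:", round(float(np.mean(np.abs(pred - y[split:]))), 2))
```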

Since AI cannot plant trees or pass legislation, its practical application may seem abstract. However, its effects are tangible: AI has been used to increase crop yield in India, improve electricity efficiency on wind farms by planning for weather, and improve data centres' efficiency.

"Most of the technologies that I am talking about are at some level of deployment. For example, the U.K.'s national grid has already integrated deep learning models into forecasting supply and demand of electricity and has greatly increased efficiency as a result," Rolnick said. "The UN uses AI to guide interventions in flooded areas [...] These are not just research projects and it's fundamentally important."

Although AI is an incredibly promising technology, there are a couple of drawbacks to be addressed. One of these drawbacks is human bias: since humans write the algorithms and supply the human-collected data to train machine learning, these tools can replicate human biases. To prevent these biases, then, human bias needs to be corrected; there is no software fix.

"We cannot technology our way out of most biases," Rolnick said. "The solutions to biases in technology are the same as solutions to biases in any other part of human endeavour. That means they are hard, but they are solvable via human choices."

This technology also requires enormous quantities of energy for algorithms to be trained and maintained, but the energy can be minimized by designing efficient algorithms and planning applications carefully.

"It's also worth noting that most of the negative climate impacts of AI globally come from how it is used, not the direct energy consumption," Rolnick wrote in a follow-up email.

Although machine learning models can be quite energy hungry, the models Rolnick uses are not exceedingly energy-intensive. With careful planning, scientists hope that the emissions benefits from these models outweigh their energy consumption.

Read the original here:
Where nature meets technology: Machine learning as a tool for ... - McGill Tribune


Machine-learning-powered extraction of molecular diffusivity from … – Nature.com


Go here to see the original:
Machine-learning-powered extraction of molecular diffusivity from ... - Nature.com


By the Book: Sarah Bakewell Is No Fan of Thrillers and Mysteries – The New York Times

How do you organize your books?

Most of them are organized with sinister precision by genre and author, except that biographies are by subject and history is roughly chronological. I can't help it; I'm a librarian. Not only that, but I tend to spot anomalies. If someone has moved a book out of order, I fix it with my gimlet eye almost as soon as I walk in the room. Of course, this leads to people moving my books around for fun, to see if I'll notice. (And sometimes I don't.)

Every year, I receive a book of stories, memoirs, drawings or clerihews, as well as a wall-calendar of splendid literary caricatures, all created by my generous and gifted friend in Seattle, Brad Craft. Nothing can ever beat that.

As a child I read books manically, greedily and repeatedly, and loved anything with an animal in it. My two favorite series were Willard Price's gung-ho stories about two brothers collecting wild creatures for their father's zoo, and the Adventure series by Enid Blyton, which sent four children and a parrot into dangerous situations up a river, out to sea, inside a hollow mountain and away with a traveling circus.

By my early teens, I was grabbing any book for adults that came within my reach, and making whatever skewed, half-baked sense of it I could. Woolf's The Waves, Nabokov's Lolita, Ginsberg's Howl, Luke Rhinehart's The Dice Man, David Niven's The Moon's a Balloon, a bit of Shakespeare: it all went into the ravenous maw. I do remember being more perplexed than usual by The Sex-Life Letters: Fascinating Correspondence From Today's Men and Women About the Variety of Their Sexual Attitudes and Experiences, edited by Harold and Ruth Greenwald. I think that had animals in it too.

I've long liked both philosophy and biography, but the balance keeps shifting toward the biography end. In my 20s, a night in with Heidegger was my idea of fun. Now, given a choice between contemplating the being of beings and finding out, for example, that Vita Sackville-West's mother once papered an entire room with used postage stamps, well, it's the stamps every time.

More here:
By the Book: Sarah Bakewell Is No Fan of Thrillers and Mysteries - The New York Times


Top 9 Ways Ethical Hackers Will Use Machine Learning to Launch … – Analytics Insight

The top 9 ways ethical hackers will use machine learning to launch attacks are listed here.

Several threat detection and response platforms use machine learning and artificial intelligence (AI) as essential technologies. Security teams benefit from tools that learn on the go and automatically adjust to evolving cyber threats.

Yet certain ethical hackers are also using machine learning and AI to evade security measures, find new vulnerabilities, and scale up their cyberattacks at an unprecedented rate, with fatal outcomes. Below are the top 9 ways ethical hackers will use machine learning to launch attacks.

Machine learning has been used by defenders for decades to identify spam. If the spam filter offers explanations for why an email message was rejected, or produces a score of some sort, attackers can alter their behavior accordingly, using lawful technology to boost the effectiveness of their attacks.

Ethical hackers will use machine learning to creatively alter phishing emails so that they don't show up in bulk email lists and are designed to encourage interaction and clicks. They go beyond simply reading the email's text. AI can produce realistic-looking images, social media profiles, and other content to give the communication the best possible legitimacy.

Machine learning is also being used by criminals to improve their password-guessing skills. Moreover, they use machine learning to recognize security measures so they can guess better passwords with fewer attempts, increasing the likelihood that they will succeed in gaining access to a system.

The most ominous use of artificial intelligence is the creation of deep fake technologies that can produce audio or video that is difficult to differentiate from actual human speech. To make their messages seem more credible, fraudsters are now leveraging AI to create realistic-looking user profiles, photographs, and phishing emails. It's a huge industry.

Nowadays, a lot of widely used security technologies come equipped with artificial intelligence or machine learning. For instance, anti-virus tools increasingly search for suspicious behavior beyond basic signatures. Attackers might use these tools to modify their malware so that it avoids detection, rather than to defend against attacks.

Attackers can employ machine learning for reconnaissance to examine the traffic patterns, defenses, and possible weaknesses of their target. It's unlikely that the typical cybercriminal would take on anything like this because it's difficult to do. It may, however, become more widely available if, at some point, the technology is marketed and offered as a service through the criminal underworld.

If a business recognizes that it is under assault and disables internet connectivity for affected computers, malware may not be able to link back to its command-and-control servers for instructions.

An attacker can deceive a machine learning model by feeding it fresh data. For instance, a compromised user account may log into a system every day at 2 a.m. to perform unimportant tasks, fooling the system into thinking that working at that hour is normal and reducing the number of security checks the user must complete.
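The 2 a.m. example amounts to dragging a model's notion of "normal" by feeding it innocuous-looking records. The toy sketch below makes the mechanism concrete with a deliberately simple mean-and-deviation baseline (an assumption for illustration, not any real product's detector): the 2 a.m. login is flagged before poisoning but not after, because the injected records have shifted the baseline.

```python
import numpy as np

# Toy anomaly detector: flag login hours far from the mean of recent history.
# Poisoning: an attacker logs in harmlessly at 2 a.m. every night, so the
# baseline drifts until 2 a.m. activity no longer looks anomalous.
def is_anomalous(hour, history, z_threshold=2.0):
    mu, sigma = np.mean(history), np.std(history) + 1e-6
    return abs(hour - mu) / sigma > z_threshold

normal_logins = list(np.random.default_rng(0).normal(14, 2, 200))  # mostly 9-to-5 activity
print("before poisoning, 2 a.m. flagged:", is_anomalous(2, normal_logins))

poisoned = normal_logins + [2.0] * 200   # months of benign-looking 2 a.m. logins
print("after poisoning, 2 a.m. flagged:", is_anomalous(2, poisoned))
```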

Fuzzing software is used by reputable software engineers and penetration testers to generate random sample inputs in an attempt to crash a program or discover a vulnerability. The most advanced versions of this software use machine learning to prioritize inputs, such as the text strings most likely to cause problems, making test cases more targeted and ordered. Because of this, fuzzing technologies are not only more effective for businesses but also more lethal in the hands of attackers.
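To make the machine-learning angle concrete, here is a hedged toy sketch (the target program, features, and crash condition are all invented) in which a simple classifier learns from earlier fuzzing rounds which inputs tend to crash a parser and then prioritizes similar candidates in later rounds:

```python
import random
import string
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy target: "crashes" on inputs containing a '%' together with a digit.
def target(s):
    if "%" in s and any(c.isdigit() for c in s):
        raise ValueError("parser crash")

def features(s):
    # Crude features: length plus counts of a few "interesting" characters.
    return [len(s)] + [s.count(c) for c in "%{}<>0"]

rng = random.Random(0)
alphabet = string.ascii_letters + string.digits + "%{}<>"
history_X, history_y = [], []

for rnd in range(5):
    candidates = ["".join(rng.choices(alphabet, k=12)) for _ in range(200)]
    if len(set(history_y)) > 1:
        # Once both crashing and non-crashing examples exist, rank candidates
        # by predicted crash probability and fuzz only the most promising ones.
        model = LogisticRegression(max_iter=1000).fit(history_X, history_y)
        scores = model.predict_proba([features(c) for c in candidates])[:, 1]
        candidates = [c for _, c in sorted(zip(scores, candidates), reverse=True)][:50]
    crashes = 0
    for c in candidates:
        crashed = 0
        try:
            target(c)
        except ValueError:
            crashed = 1
            crashes += 1
        history_X.append(features(c))
        history_y.append(crashed)
    print(f"round {rnd}: {crashes} crashes out of {len(candidates)} inputs tried")
```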

Read more:
Top 9 Ways Ethical Hackers Will Use Machine Learning to Launch ... - Analytics Insight
