Category Archives: AI
AI algorithm discovers ‘potentially hazardous’ asteroid 600 feet wide … – Space.com
A new artificial intelligence algorithm programmed to hunt for potentially dangerous near-Earth asteroids has discovered its first space rock.
The roughly 600-foot-wide (180 meters) asteroid has received the designation 2022 SF289, and is expected to approach Earth to within 140,000 miles (225,000 kilometers). That distance is shorter than that between our planet and the moon, which are, on average, 238,855 miles (384,400 km) apart. This is close enough to define the rock as a Potentially Hazardous Asteroid (PHA), but that doesn't mean it will impact Earth in the foreseeable future.
The HelioLinc3D program, which found the asteroid, was developed to help the Vera C. Rubin Observatory, currently under construction in northern Chile, conduct its upcoming 10-year survey of the night sky by searching for space rocks in Earth's near vicinity. As such, the algorithm could be vital in giving scientists a heads-up about space rocks on a collision course with Earth.
"By demonstrating the real-world effectiveness of the software that Rubin will use to look for thousands of yet-unknown potentially hazardous asteroids, the discovery of 2022 SF289 makes us all safer," Vera C. Rubin researcher Ari Heinze said in a statement.
Tens of millions of space rocks roam the solar system, ranging from asteroids just a few feet across to dwarf planets around the size of the moon. These space rocks are the remains of the material that formed the planets around 4.5 billion years ago.
While most of these objects are located far from Earth, with the majority of asteroids residing in the main asteroid belt between Mars and Jupiter, some have orbits that bring them close to Earth. Sometimes worryingly close.
Space rocks that come close to Earth are defined as near-Earth objects (NEOs), and asteroids that venture to within around 5 million miles of the planet get Potentially Hazardous Asteroid (PHA) status. This doesn't mean that they will impact the planet, though. Just as is the case with 2022 SF289, no currently known PHA poses an impact risk for at least the next 100 years. Astronomers search for potentially hazardous asteroids and monitor their orbits just to make sure they are not heading for a collision with the planet.
This new PHA was found when the asteroid-hunting algorithm was paired with data from the ATLAS survey in Hawaii, as a test of its efficiency before Rubin is completed.
The discovery of 2022 SF289 has shown that HelioLinc3D can spot asteroids with fewer observations than current space rock hunting techniques allow.
Searching for potentially hazardous asteroids involves taking images of parts of the sky at least four times a night. When astronomers spot a moving point of light traveling in an unambiguous straight line across the series of images, they can be quite certain they have found an asteroid. Further observations are then made to better constrain the orbit of these space rocks around the sun.
The new algorithm, however, can make a detection from just two images, speeding up the whole process.
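The difference is easy to picture with a toy model. HelioLinc3D itself is far more sophisticated (it links detections in heliocentric space), but the core trick, testing whether a handful of detections from different nights are consistent with a single object in steady motion, can be sketched in a few lines of Python. All positions, times, and tolerances below are made-up illustration values:

```python
import numpy as np

def fits_linear_motion(detections, tol=1e-3):
    """Check whether (time_hours, ra_deg, dec_deg) detections are
    consistent with one object moving at constant angular velocity.

    Fits ra(t) and dec(t) as straight lines by least squares and accepts
    the candidate if the worst residual is below `tol` (degrees).
    """
    t = np.array([d[0] for d in detections])
    max_resid = 0.0
    for coord in (1, 2):  # 1 = RA column, 2 = Dec column
        y = np.array([d[coord] for d in detections])
        slope, intercept = np.polyfit(t, y, 1)
        resid = np.abs(y - (slope * t + intercept))
        max_resid = max(max_resid, resid.max())
    return max_resid < tol

# Two detections per night over two nights (hypothetical numbers):
# an object drifting 0.05 deg/hour in RA and 0.01 deg/hour in Dec.
obs = [(0.0, 150.00, 12.000), (1.0, 150.05, 12.010),
       (24.0, 151.20, 12.240), (25.0, 151.25, 12.250)]
print(fits_linear_motion(obs))    # True: consistent with one mover

# A spurious pairing of unrelated flashes fails the same test.
noise = [(0.0, 150.00, 12.00), (1.0, 150.40, 12.30),
         (24.0, 149.70, 12.05), (25.0, 151.90, 11.80)]
print(fits_linear_motion(noise))  # False
```

The point of the sketch is only that linking detections across nights, rather than demanding four in one night, lets a survey recover objects each night's data alone cannot confirm.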
Around 2,350 PHAs have been discovered thus far, and though none poses a threat of hitting Earth in the near future, astronomers aren't quite ready to relax just yet as they know that many more potentially dangerous space rocks are out there yet to be uncovered.
It is estimated that the Vera Rubin Observatory could uncover as many as 3,000 hitherto undiscovered potentially hazardous asteroids.
Rubin's 27-foot-wide (8.4 meters) mirror and massive 3,200-megapixel camera will revisit locations in the night sky twice per night rather than the four-times-a-night observations conducted by current telescopes. Hence the creation of HelioLinc3D, a code that could find asteroids in Rubin's dataset even with fewer available observations.
But, the algorithm's creators wanted to give the software a trial run before the construction of Rubin is completed. This meant testing if it could find an asteroid in data that had already been collected, data that has too few observations for currently employed algorithms to scour.
With ATLAS data offered as such a test subject, HelioLinc3D set about looking for PHAs, and on July 18, 2023, it hit paydirt, uncovering 2022 SF289. This PHA was spotted by ATLAS on September 19, 2022, while it was 3 million miles from Earth. ATLAS had actually spotted this new PHA three times on each of four separate nights, but never four times in a single night, meaning current surveys missed it. By piecing together the data from all four nights, HelioLinc3D was able to identify the PHA.
"Any survey will have difficulty discovering objects like 2022 SF289 that are near its sensitivity limit, but HelioLinc3D shows that it is possible to recover these faint objects as long as they are visible over several nights," lead ATLAS astronomer Larry Denneau said. "This in effect gives us a 'bigger, better' telescope."
With the position of 2022 SF289 pinpointed, astronomers could then follow up on the discovery with other telescopes to confirm the PHA's existence.
"This is just a small taste of what to expect with the Rubin Observatory in less than two years when HelioLinc3D will be discovering an object like this every night," Rubin scientist and HelioLinc3D team leader Mario Juri said. "But more broadly, it's a preview of the coming era of data-intensive astronomy. From HelioLinc3D to AI-assisted codes, the next decade of discovery will be a story of advancement in algorithms as much as in new, large, telescopes."
The discovery of 2022 SF289 was announced in the International Astronomical Union's Minor Planet Electronic Circular MPEC 2023-O26.
Go here to see the original:
AI algorithm discovers 'potentially hazardous' asteroid 600 feet wide ... - Space.com
UW researchers develop AI tool for therapy: HealthLink – KING5.com
SEATTLE – From ChatGPT to AI-generated images, artificial intelligence is in the limelight.
But what about using AI as a therapist?
Tim Althoff, an associate professor of computer science at the University of Washington, believes the technology is there to get things started.
"I believe that that technology is now at a point where it can start to actually be useful to people in a mental health context, especially focused on a kind of collaboration between people and AI," Althoff said.
Althoff and a team of researchers have been developing AI programs as a form of behavioral health therapy, but they are by no means the typical chatbots.
One of Althoff's projects uses AI to help professional, human therapists be more empathetic in their interactions with clients.
"Essentially what we called empathic rewriting," said Dr. Dave Atkins, a research professor at the University of Washington School of Medicine's Department of Psychiatry and Behavioral Sciences. Atkins has also worked with the Behavioral Research in Technology and Engineering (BRiTE) Center at the University of Washington and is CEO of Lyssn.
Atkins worked with Althoff on how AI can be used in behavioral health conversations. Using language models, the AI suggests ways to make human-to-human chat interactions more empathetic.
"So that when someone is ready to send a message, you can get feedback from the AI and how to update, edit that message so that it's more empathic," Atkins said.
But one of Althoff's recent projects is an AI platform that interacts directly with the user to reframe negative thinking.
"We co-developed this tool that essentially walks you through a process where you learn how to challenge negative thoughts," Althoff said.
Theresa Nguyen is chief research officer at Mental Health America, the nonprofit advocacy group that collaborated with Althoff's research project. She considers it another self-help tool.
"The idea is we can change our feelings, or we can change our behaviors by first focusing on our thoughts. And so once you start with that negative thought, you learn your patterns of how your thinking causes trouble for you," Nguyen said.
Althoff points out the technology is not intended to replace professional therapists.
"I think one important point is that it actually does not try to do that," Althoff said.
Althoff emphasized it is an online tool that gives the user suggestions to get away from negative thinking. The tool is currently available for anyone to try out.
The interaction is similar to a Q&A that provides guidance for ways to get out of negative thinking.
Althoff said such tools could be helpful in filling a gap.
"Even if an AI system could replace a therapist, that therapist most of the time doesn't even exist because that person didn't have access to somebody in the first place," Althoff said.
Nguyen added tools like these can provide a mental health resource for those who lack access or money to receive professional mental health treatment.
"It's dearly needed with kind of AI technology being one part, kind of piece, of that puzzle," Nguyen said.
Read the original post:
UW researchers develop AI tool for therapy: HealthLink - KING5.com
Apps Are Rushing to Add AI. Is Any of It Useful? – WIRED
Ever since the ChatGPT API opened up, all sorts of apps have been strapping on AI functionality. I've personally noticed this a lot in email clients: Apps like Spark and Canary are prominently bragging about their built-in AI functionality.
The most common features will write replies for you, or even generate an entire email using only a prompt. Some will summarize a long email in your inbox or even a thread. It's a great idea in the abstract, but I think integrations like these conspire to make communication less efficient instead of more efficient. You should feel free to try such features (they're fun!) but don't expect them to change your life. Here's why.
The Ouroboros of Communication
We are all overwhelmed with email and communication in general. It's easy to look at this as a tech problem because it's happening on screens. It's not a tech problem, thoughat least, it's not only a tech problem. It's a social problem.
You could say that you get too many emails, and that might be accurate. Another way of saying the same thing is that more people are trying to contact you than you feel mentally capable of responding to. Trying to solve a social problem with tech often only creates new social problems.
For example, instead of writing an email myself inviting you to come over and have some beers, suppose I asked ChatGPT to write that email. The result is 220 words long, including an introduction ("I hope this email finds you well!"), an explanation of the reasons people might want to have beers together ("It's the perfect opportunity to catch up, share stories, and simply have a good time"), and a few oddly worded details made up out of thin air ("I'll make sure to create a comfortable and welcoming atmosphere, complete with some snacks to complement our beer tasting experience.")
Most people, seeing an email this long, are going to feel too overwhelmed to read it. Maybe they'll use AI on their end to summarize the message. I asked ChatGPT to summarize the long email into a single sentence, and it essentially gave me back my initial prompt: "Would you like to come over for beers?"
The American philosopher Homer Simpson once called alcohol "the cause of, and solution to, all of life's problems." AI, in this context, serves a similar function: It creates a problem (the emails are too long) and then solves it (summarizing the emails). It's an ouroboros, a snake eating its own tail, a technology that exists in part to solve the problems it is creating.
It's better, in my opinion, to look at the cultural assumptions instead of reaching for unnecessarily complicated technological ones. What cultural forces are making me think I can't just write a one-sentence email? Can I ignore that, if it makes communication better?
Cultural problems, of course, are harder to grasp than technological ones. You could start sending one-sentence emails right now, but some people might interpret that as rude, or at the very least odd. But any individual or organization looking to become more efficient should think about these things. Unless, of course, you want a bot pretending to know that you have beers "ranging from local brews to classic favorites" in your fridge right now.
We Don't Know the Contexts in Which AI Will Work Best
My friend Kay-Kay and I, for months, had an in-joke that became a ritual: tapping LinkedIn's conversational auto-recommendations. This social network, for some reason, offers suggested replies to messages. It was never not hilarious.
Read the original:
Apps Are Rushing to Add AI. Is Any of It Useful? - WIRED
Apple seeks to bolster expertise in generative AI on mobile devices – Financial Times
See the rest here:
Apple seeks to bolster expertise in generative AI on mobile devices - Financial Times
AI is going 4-dimensional – TechCrunch
Welcome to Startups Weekly. Sign up here to get it in your inbox every Friday morning.
State-sponsored bad actors have long been able to make deepfake videos that are good enough to trick unsophisticated viewers and probably some more clued-in folks, too. That sort of work takes significant processing power and technical know-how to pull off. Now AI is stepping in, handing over an unlabeled glass bottle, muttering, "Hold my beer," and cracking its proverbial knuckles. Things that we could barely dream of at the beginning of 2023 are beginning to be possible when it comes to generated video AI.
Of course, with great power comes great responsibility, but tell that to the memelords doin' it for the lolz. Personally (and, perhaps, perversely), I think it's a great thing that these technologies are making their way into everyone's hands. Special effects have been a strange, mythical "other" that Hollywood does. Generated AI selfies were all the rage for a hot minute (is anyone still using Lensa?) and did wonders in educating people on what is possible. It's not that I'm excited about this tech being universally available, but (contrary to what these curmudgeonly pieces would indicate), I'm an optimist at heart. Perhaps exposing people to what's possible will help give even non-tech-savvy folks a fighting chance at spotting fake videos.
I suppose it is only optimism if it's from the Optimisme region of France. Maybe what I'm experiencing is just sparkling hope.
Over the past few weeks, I've done a lot of writing about fundraising for startup founders. In a conversation with a VC this week, I told them that I had a flags-based checklist for evaluating pitch decks (e.g., a red flag means you haven't a whelk's chance in a supernova of raising funding; I haven't figured out if there should be a mauve flag, nor what it would mean). It inspired me to share where founders go wrong when fundraising (TC+). Yes, it means I'm showing the world everything I care about in a pitch deck, but, I mean, 100+ articles about pitching and fundraising later, I think that cat was well and truly out of the bag anyway.
You know what early-stage founders really hate? Putting together their traction slide. What do you put when you're straddling that pre-product/pre-revenue line? I had a bit of an epiphany when I was working with one of my pitch clients: Your traction slide, abstractly, is a measure of how much risk you have designed out of the business. Tell that story, and you end up with a reasonable traction narrative, even if it isn't directly tied to revenue.
Apropos fundraising, there's been a fair bit of activity on that front:
It's a nice Jobs if you can get it: Apple founder Steve Jobs met his demise from cancer. Now his son, Reed Jobs, takes the wraps off a $200 million venture fund that will back new cancer treatments.
iForgot: Backed by a16z, Rewind launches an iPhone app to help you remember everything.
Dude, where's my cell tower?: eSIMs are great and all, but you know what's really cool? Being able to pop a local SIM card into your phone and be chillin' like a villain, local style. Airalo just raised $60 million to make that a tiny bit easier, even with eSIMs.
This week, the federal government isn't just laying down the law about certain ex-commanders-in-chief. I spent an hour reading the most recent indictment; it's surprisingly readable, and fascinating AF. The NYT has a great annotated edition. Also, if that's a thing you're interested in, I definitely recommend the Prosecuting Donald Trump podcast. Two extremely experienced lawyers talk about the cavalcade of cluster-copulation that's happening in the legal system. Rather compelling.
Closer to home, in startup land, the Federal Communications Commission (FCC) has been accused of being toothless, but it truly has had enough of one company's BS, fining a robocaller a record $300 million after blocking billions of their scam calls.
Insert inappropriate "how long can you go?" joke here: It turns out that Tesla has allegedly been a little floppy with the truth about the range estimates for its cars for a hot minute. Suing Tesla is practically a national sport at this point, and, indeed, the first Tesla range inflation lawsuit has been filed.
Let's get to the meat of things: YouTube star MrBeast has a charming, likable, aw-shucks persona, but it turns out he does have finite patience, suing the ghost kitchen behind the MrBeast Burger. Amanda's report doesn't include whether you should like and subscribe to the court case.
A HIPPO-sized HIPAA breach: Close to 2.5% of the U.S. population had their health data accessed by MOVEit hackers, a government contractor says.
The social media world continues to be a Muppet wrapped in googly-eye duct tape, or some similarly confusing simile. People truly hate the Twitter-to-X rebrand. How much? Well, Amanda's guide for how to make the blue bird come back as your app icon on iOS is right up there with our most-read stories. On top of that, App Store users are decimating Twitter's review rating with one-star reviews after the rebrand. That's ... a lot of steps to not have to stare at an X. Pretty wild: Apple doesn't usually allow one-letter app names, but it made an exception for Tw . . . I mean...X. I avoided throwing myself into the chaos mid-pandemic by deleting the Twitter app off my phone altogether, which is faster and better for your mental health, but I'll leave you to make the best choices for you.
Mammoth > Bird: Famous for its nature programming, U.K. broadcaster BBC is taking a stroll through the digital ecosystem, and it seems it has had enough of Musk's shenanigans. Natasha L reports that the BBC is testing being on Mastodon, saying that the fediverse is a better fit for public purposes than Twitter or Instagram's Threads.
Robot says you're looking fiiiiine, 0x58 / 0x59: AI really gets its grubby little mitts everywhere, and it seems that Tinder is joining the fray as it tests an AI photo selection feature to help users build profiles. But, as a non-AI, lemme just say: You look great, fam. I'd swipe on you. Raaawr.
Uncrop! Enhance: It was a CSI meme, but we are one step closer to being able to uncrop images, revealing what's beyond the edges. Not for real, but based on Photoshop's new generative AI feature taking its best guess. And you know what? It's really, really, really good. No wonder every other TikTok video I get served these days seems to feature uncrop shenanigans.
Are you still reading? Your tenacity and persistence are heartwarming. Now, make yourself a cup of tea and pat yourself on the back; you've truly mastered the art of the "no, this is work, honestly!" type of procrastination. I see you. I'm proud of you. You're doing great.
Here's what everyone else has been ogling this past week:
Hacking your way to horsepower: You know what the problem is with selling people $10,000 software upgrades to their cars? At some point, someone is going to change the $GOFAST=0 flag to $GOFAST=1 and get free heated rear seats. Personally, I think it's truly ridiculous to turn off the ability to heat rear seats if you've gone through all the trouble of, I dunno, adding the hardware to heat the rear seats, but that's why I'm a lowly TechCrunch hack and not the CEO of a car company, a tunnel company, a space company, and whatever X is.
Rolling electric: Fun fact: "volvo" means "to roll." Presumably they mean the wheels and not some sort of sordid MDMA binge, but in any case, the all-electric Volvo EX30 is a huge deal.
Conducting, in your office: "What if room-temperature superconductors were real?" Tim wondered, and got (1) a really interesting article and (2) a buttload of traffic for his efforts. Well done, Tim. Keep it up; I love reading your stuff.
Get your TechCrunch fix IRL. Join us at Disrupt 2023 in San Francisco this September to immerse yourself in all things startup. From headline interviews to intimate roundtables to a jam-packed startup expo floor, there's something for everyone at Disrupt. Save up to $600 when you buy your pass now through August 11, and save 15% on top of that with promo code STARTUPS. Learn more.
See more here:
Mommy Musings: The who part is the hard part of AI considerations – Longmont Times-Call
During a weeklong mini family reunion with my husband's kin, my father-in-law reached out to my youngest son, Ray, 13, who needed a nap after days of playing with his brothers and cousins in Diamond Lake in Tustin, Mich., on Tuesday. (Photo by Pam Mellskog)
This summer, I paid more attention to the prospect of artificial intelligence when it learned yet another new and wonderful thing: how to lift John Lennon's voice from a demo song he recorded on a cassette shortly before his death in 1980 for former fellow Beatle Paul McCartney.
Gone was the inexplicable electrical buzzing in Lennon's New York City apartment the day he pushed "record" on his boombox. Gone was his piano accompaniment, too.
AI's ability to recognize the distinctive human voice (Lennon's, yours or mine) allowed it to work like invisible magical tweezers. It pulled the voice from the static and piano instrumental to mix Lennon's pure voice into a final McCartney-led classic Beatles project.
Anyone could appreciate AI's handiwork in retrieving the famous voice from the aging cassette time capsule.
But the news gave me fresh pause to consider this powerful tool.
Like all tools, this one's impact depends on who uses it, from scientists to scammers, and for what purpose.
But that "who" part is the hard part of considering AI. It is not just another object, like a shovel or a gun, in total service to its handler.
It is technology designed to learn in increasingly sophisticated ways that benefit many of us in our daily lives, from the grammar autocorrect feature in word processing to smartphone navigation maps that can report traffic jams and give us estimated delay times and alternate routes in real time.
Eventually, AI might learn enough to develop into some semblance of a "who": a sentient entity. That is, a technological creation with self-awareness.
We already see it learning, like a genius, to synthesize information with predictive abilities that can be used on the dark side to create imposters to terrify and extort.
When an Arizona mom, Jennifer DeStefano, answered a call from an unknown number on her cell phone earlier this year, she listened to a convincing, suspected AI-engineered recording of her 15-year-old daughter screaming that she had been kidnapped.
A gruff man on the line told DeStefano through his profanities that he would drug, rape and kill the girl if the family didn't pay a $1 million ransom.
The police got involved in the hoax, and the girl was confirmed to be safe at a skiing competition upstate.
After the incident, experts said sophisticated AI ventriloquy likely created the girls voice well enough to fool the girls mother with maybe just a minute-long audio clip lifted from the girls social media presence.
Both of these AI application stories, one wonderful, the other wicked, explain why international gatekeepers of this technology continue scrambling to encourage its responsible uses and police its abuses.
But since most of us are not keeping a close eye on this genie coming out of her bottle, we can use the time between now and when we discover AI's future roles to practice old-fashioned ways of understanding the "who" in ourselves and others.
Sometimes, we only get a glimpse of who someone may be.
This was the case for us last week. During our annual mini family reunion with my husband's side of the family at a cottage on Diamond Lake in Tustin, Mich., we heard the clippety-clop of a horse and black buggy coming around the bend.
Our youngest son, Ray, 13, loves horses.
So, my mother-in-law snatched his hand and they ran past our parked cars to the road.
Ray also noticed the Amish people inside the buggy. One woman held the reins and slowed the horse from a trot to a walk as they passed by. The other woman held a baby.
Both of them peeked out from under their wide bonnet brims to smile and wave, a greeting Ray and Grandma Vanden Berg returned.
And just like that, strangers living in very different worlds at the same time and place in America connected.
Other times, we may get lots of time to get to know someone not easily reached. For instance, Ray's special needs related to Down syndrome make it tough for him to speak clearly, although he understands us well.
So, our family has learned to support his speech therapy goals and pay much closer attention to his nonverbal communication.
Grandpa Vanden Berg noticed on Tuesday that Ray, after days of playing in the lake with his older brothers and his cousins, had cuddled up on the outdoor sofa with Woody, his favorite action figure from Disney's Toy Story movie series.
Anyone could see that all the fresh air and exercise had tired out Ray. But understanding someone is not the same as responding to someone from that understanding.
Grandpa Vanden Berg did both. He touched Ray's shoulder and asked him how he felt. He comforted our boy by offering to drape a beach towel over him.
Then, he encouraged Ray to rest.
Whenever I manage to do a 2023 family photo album, a photo of those two in that moment will make the cut with the help of AI face recognition. In a snap, AI can do the otherwise tedious and time-consuming work of finding photos of just Ray and this grandpa in our huge archive.
But there will always be a world of difference between recognizing someones face and cherishing it.
Pam Mellskog can be reached at p.mellskog@gmail.com or 303-746-0942. For more stories and photos, please visit timescall.com/tag/mommy-musings.
Go here to read the rest:
Mommy Musings: The who part is the hard part of AI considerations - Longmont Times-Call
AI May Be Able to Warn Us Before The Next Pandemic Strikes – ScienceAlert
The global COVID-19 pandemic has shown us just how devastating these outbreaks can be, and it could have been much worse. Now, scientists have developed an AI application that promises to warn us about dangerous variants in future pandemics.
It's called the early warning anomaly detection (EWAD) system, and when tested against actual data from the spread of SARS-CoV-2, it was accurate in predicting which new variants of concern (VOCs) would emerge as the virus mutated.
Scientists from Scripps Research and Northwestern University in the US used a machine learning method to produce EWAD. In machine learning, vast amounts of training data are analyzed by computers to spot patterns, develop algorithms, and then make predictions about how those patterns may play out in future, unknown scenarios.
In this case, the AI was fed information about the genetic sequences of SARS-CoV-2 variants as infections spread, how frequent those variants were, and the reported global mortality rate from COVID-19. The software could then spot genetic shifts as the virus adapted, usually shown in increasing infection rates and falling mortality rates.
"We could see key gene variants appearing and becoming more prevalent, as the mortality rate also changed, and all this was happening weeks before the VOCs containing these variants were officially designated by the WHO," says William Balch, a microbiologist at Scripps Research.
The specific technique used here by the team is called Gaussian process-based spatial covariance, which essentially crunches the numbers on a set of existing data to predict new data using not just the averages of the data points but also the relationships between them.
By testing their model on something that's already happened and finding close matches between the real and the predicted data, the scientists could prove EWAD's effectiveness at predicting how measures such as vaccines and mask-wearing could cause a virus to continue evolving.
"One of the big lessons of this work is that it is important to take into account not just a few prominent variants, but also the tens of thousands of other undesignated variants, which we call the 'variant dark matter,'" says Balch.
The researchers say their AI algorithms were able to spot "rules" of virus evolution that would otherwise have gone undetected, and that could prove vital in combating future pandemics as they emerge.
Not only that, but the system developed here could also enable scientists to understand more about the very basics of virus biology. That could then be used to improve treatments and other public health measures.
"This system and its underlying technical methods have many possible future applications," says mathemologist Ben Calverley from Scripps Research.
The research has been published in Cell Patterns.
View original post here:
AI May Be Able to Warn Us Before The Next Pandemic Strikes - ScienceAlert
1 Highly Profitable Cloud AI Stock Investors Need to Know About Now – The Motley Fool
2023 has been a big year for many tech stocks as they have rallied back toward their previous highs. Cloud and AI-powered software company Dynatrace (DT) was no exception -- at least, not until after its latest earnings update. The company just started a new fiscal year, and its Q1 financial results were impressive. Yet the stock plunged after the report was published. Is now a buying opportunity?
Dynatrace is an infrastructure software provider, specifically geared toward large multinational companies that are migrating to and using complex cloud-based applications. Basically, Dynatrace helps these organizations monitor their apps and data, and uses AI to find performance issues, and recommend and automate fixes.
Sound familiar? This branch of the infrastructure software industry includes top names like Datadog and Splunk. It's notable, though, that market research firm Gartner recently named Dynatrace as the leader among its peer group for application performance monitoring and observability.
I've whittled my investments in this realm of cloud software down to just Dynatrace over the years because of its balance between generating growth and profitability -- in contrast to many other smaller software outfits in this space that have struggled to operate in the black.
DT Revenue (TTM) data by YCharts.
Dynatrace just got its fiscal 2024 off on the right foot, too. For its fiscal Q1, which ended June 30, revenue increased 25% year over year to $333 million. Its GAAP (generally accepted accounting principles) operating profit margin increased to 10% versus 7% in the prior-year period, resulting in earnings per share going from $0.01 in Q1 fiscal 2023 to $0.13 this last quarter. Free cash flow was down 9% to $124 million, but was still a healthy 37% of revenue.
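As a sanity check on those figures, a quick back-of-the-envelope calculation (a Python sketch using only the dollar amounts quoted above) reproduces the reported ratios:

```python
# Sketch: checking the ratios reported for Dynatrace's fiscal Q1 2024.
# Figures are taken from the quarter described above; the math is illustrative.

revenue = 333.0                  # Q1 revenue, $ millions
prior_revenue = revenue / 1.25   # implied prior-year Q1 revenue, given 25% growth
free_cash_flow = 124.0           # Q1 free cash flow, $ millions

fcf_margin = free_cash_flow / revenue

print(f"Implied prior-year revenue: ${prior_revenue:.0f}M")  # → $266M
print(f"FCF margin: {fcf_margin:.0%}")                       # → 37%, matching the report
```

The 37% free-cash-flow margin is the figure the article cites as "healthy"; the implied prior-year revenue simply follows from the stated 25% growth rate.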
Shares of Dynatrace fell despite the solid quarter due to management's outlook -- specifically, that the outlook remained (mostly) unchanged from three months ago in spite of the surge in cloud-based AI activity this year. Management said revenue should increase by 21% to 22% in fiscal 2024 (versus guidance for growth of 20% to 21% before), and reiterated its forecast for the full-year free-cash-flow margin to be about 22%.
CFO James Benson said Dynatrace's customers have been cutting costs due to the recession they expected in 2023, and remain a bit cautious on the economy. Benson struck a balanced tone on the earnings call and said Dynatrace "had a solid start to the year, but it is still early in our fiscal year, and we do not want to get ahead of ourselves." Prudence is OK by me.
The market likely wasn't happy with that view, given Dynatrace's rich valuation. Shares trade for about 45 times expected fiscal 2024 free cash flow. A more dramatic upgrade to the outlook seems to have been the consensus expectation.
Dynatrace did announce new AI tools in the last quarter to help automate cloud monitoring, as well as application development debugging. And though macro conditions may be weighing on cash flows a bit in the near term, the company's earnings are ramping up quickly as revenue continues to grow at a fairly consistent pace. The stock certainly isn't "cheap," but its valuation isn't unreasonable either if Dynatrace can keep performing as it has in recent years.
In the wake of the market's post-earnings selloff of Dynatrace, I'm a buyer of this top dog in the cloud app monitoring and cloud observability space, which keeps profitably capitalizing on secular growth trends in computing.
Nicholas Rossolillo has positions in Dynatrace. The Motley Fool has positions in and recommends Datadog and Splunk. The Motley Fool recommends Gartner. The Motley Fool has a disclosure policy.
See the rest here:
1 Highly Profitable Cloud AI Stock Investors Need to Know About Now - The Motley Fool
Is artificial intelligence a threat to journalism or will the technology destroy itself? – The Guardian
Opinion
Hitching a struggling media industry to the wagon of AI won't serve our interests in the long run
Before we start, I want to let you know that a human wrote this article. The same can't be said for many articles from News Corp, which is reportedly using generative AI to produce 3,000 Australian news stories per week. It isn't alone. Media corporations around the world are increasingly using AI to generate content.
By now, I hope it's common knowledge that large language models such as GPT-4 do not produce facts; rather, they predict language. We can think of ChatGPT as an automated mansplaining machine: often wrong, but always confident. Even with assurances of human oversight, we should be concerned when material generated this way is repackaged as journalism. Aside from the issues of inaccuracy and misinformation, it also makes for truly awful reading.
Content farms are nothing new; media outlets were publishing trash long before the arrival of ChatGPT. What has changed is the speed, scale and spread of this chaff. For better or worse, News Corp has huge reach across Australia, so its use of AI warrants attention. The generation of this material appears to be limited to local service information churned out en masse, such as stories about where to find the cheapest fuel or traffic updates. Yet we shouldn't be too reassured, because it does signal where things might be headed.
In January, tech news outlet CNET was caught publishing articles generated by AI that were riddled with errors. Since then, many readers have been bracing themselves for an onslaught of AI-generated reporting. Meanwhile, CNET workers and Hollywood writers alike are unionising and striking in protest of (among other things) AI-generated writing, and they are calling for better protections and accountability regarding the use of AI. So, is it time for Australian journalists to join the call for AI regulation?
The use of generative AI is part of a broader shift of mainstream media organisations towards acting like digital platforms that are data-hungry, algorithmically optimised, and desperate to monetise our attention. Media corporations' opposition to crucial reforms to the Privacy Act, which would help impede this behaviour and better protect us online, makes this strategy abundantly clear. The longstanding problem of dwindling profits in traditional media in the digital economy has led some outlets to adopt digital platforms' surveillance capitalism business model. After all, if you can't beat 'em, join 'em. Adding AI-generated content into the mix will make things worse, not better.
What happens when the web becomes dominated by so much AI-generated content that new models are trained not on human-made material, but on AI outputs? Will we be left with some kind of cursed digital ouroboros eating its own tail?
It's what Jathan Sadowski has dubbed "Habsburg AI", referring to an infamously inbred European royal dynasty. Habsburg AI is a system that is so heavily trained on the outputs of other generative AIs that it becomes an inbred mutant, replete with exaggerated, grotesque features.
As it turns out, research suggests that large language models, like the one that powers ChatGPT, quickly collapse when the data they are trained on is created by other AIs instead of original material from humans. Other research found that without fresh data, an "autophagous" loop is created, doomed to a progressive decline in the quality of content. One researcher said we're about to fill the internet with "blah". Media organisations using AI to generate huge amounts of content are accelerating the problem. But maybe this is cause for a dark optimism: rampant AI-generated content could seed its own destruction.
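The feedback loop those researchers describe can be illustrated with a toy simulation: repeatedly refit a simple statistical model on samples drawn only from its own previous generation. This sketch is purely illustrative and is not the method used in the cited papers; the point is that with no fresh "human" data entering the loop, the fitted distribution drifts away from the original instead of staying anchored to it.

```python
import random
import statistics

# Toy "model collapse" loop: generation 0 is the original human data
# distribution (a standard Gaussian). Each subsequent generation is fitted
# only to a finite sample drawn from the previous generation's model, so
# estimation error compounds and the fitted spread random-walks away from
# the true value of 1.0 rather than being corrected by fresh data.

random.seed(0)
mu, sigma = 0.0, 1.0   # the "human" data distribution
n_samples = 20         # small samples make the drift visible quickly

spreads = [sigma]
for generation in range(10):
    samples = [random.gauss(mu, sigma) for _ in range(n_samples)]
    mu = statistics.fmean(samples)    # refit on purely synthetic output
    sigma = statistics.pstdev(samples)
    spreads.append(sigma)

print(f"original spread: {spreads[0]}, after 10 generations: {spreads[-1]:.3f}")
```

Running this repeatedly with different seeds shows the fitted spread wandering (and, on average, shrinking) across generations; with more data or a periodic injection of samples from the original distribution, the estimate stays anchored.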
AI in the media doesn't have to be bad news. There are other AI applications that could benefit the public. For example, it can improve accessibility by helping with tasks such as transcribing audio content, generating image descriptions, or facilitating text-to-speech delivery. These are genuinely exciting applications.
Hitching a struggling media industry to the wagon of generative AI and surveillance capitalism won't serve Australia's interests in the long run. People in regional areas deserve better, genuine, local reporting, and Australian journalists deserve protection from the encroachment of AI on their jobs. Australia needs a strong, sustainable and diverse media to hold those in power to account and keep people informed rather than a system that replicates the woes exported from Silicon Valley.
Samantha Floreani is a digital rights activist and writer based in Naarm
See the rest here:
AI Model May Define Margins and Help Identify Risk in Prostate … – AJMC.com Managed Markets Network
A version of this article originally appeared on Cancer Network. This version has been lightly edited.
An artificial intelligence (AI) deep learning model was better able to predict focal treatment margins and negative margin probabilities in resected prostate cancer specimens than conventional models, according to findings from a retrospective study published in European Urology Open Science.
The mean sensitivity for cancer-bearing voxels was 96.9% (interquartile range [IQR], 99.5%-100%) using AI-derived margins compared with 37.4% (IQR, 24.4%-48.3%) using conventional Prostate Imaging Reporting and Data System (PI-RADS) regions of interest (ROI; P < .001), 93.2% (IQR, 98.8%-100%) using 10-mm margins around conventional ROIs (P = .24), and 94.1% (IQR, 93.9%-100%) using hemigland margins (P < .001).
AI-derived margins also yielded a smaller extent of missed clinically significant prostate cancer, at 1.6 mm vs 3.8 mm with hemigland margins (P < .001). AI produced a negative margin rate of 80% for clinically significant prostate cancer and 90% for index lesions compared with rates of 56% (P = .01) and 66% (P < .001) produced with hemigland margins, respectively.
AI-derived margins were negative for all specimens with negative hemigland margins. Moreover, AI margins were negative in 12 of 22 specimens with positive hemigland margins due to successful coverage of index lesion midline extensions in 10 of 12 cases.
Compared with the application of 10-mm margins to ROIs, AI margins had a smaller extent of missed clinically significant prostate cancer, with means of 1.6 mm vs 3.2 mm, respectively (P < .001). Additionally, AI margins demonstrated a mean specificity of 51.2% (IQR, 40.0%-65.7%) compared with 63.4% (IQR, 53.1%-76%) with 10-mm margins (P < .001). The negative margin rates for clinically significant prostate cancer were 80% and 74% with AI-derived and 10-mm margins, respectively (P = .48), and they were 90% vs 82%, respectively, for index lesions (P = .24).
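For readers unfamiliar with voxel-level sensitivity and specificity, the metrics above reduce to a confusion-matrix calculation over voxels: sensitivity is the share of cancer-bearing voxels a proposed margin covers, and specificity is the share of benign voxels it correctly leaves outside. The tiny one-dimensional "prostate" in this sketch is invented purely for illustration and has no relation to the study's data.

```python
# Illustrative voxel-level sensitivity/specificity calculation.
# 1 = cancer-bearing voxel (ground truth) / voxel inside a candidate margin.

cancer = [0, 0, 1, 1, 1, 1, 0, 0, 0, 0]   # ground-truth cancer voxels
margin = [0, 1, 1, 1, 1, 0, 0, 0, 0, 0]   # voxels a candidate margin would treat

tp = sum(c and m for c, m in zip(cancer, margin))       # cancer covered by margin
fn = sum(c and not m for c, m in zip(cancer, margin))   # cancer missed by margin
tn = sum(not c and not m for c, m in zip(cancer, margin))  # benign tissue spared
fp = sum(not c and m for c, m in zip(cancer, margin))      # benign tissue treated

sensitivity = tp / (tp + fn)   # how much cancer the margin captures
specificity = tn / (tn + fp)   # how much healthy tissue it spares

print(f"sensitivity={sensitivity:.0%}, specificity={specificity:.0%}")
# → sensitivity=75%, specificity=83%
```

The trade-off the study quantifies follows directly from this arithmetic: widening a margin raises sensitivity (fewer missed cancer voxels) at the cost of specificity (more healthy voxels treated), which is why the AI-derived margins' higher sensitivity came with lower specificity than the 10-mm margins.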
"Precision management of [prostate cancer] has the potential to optimize therapy while preserving quality of life, but targeted treatment first requires accurate tumor localization," the investigators wrote. "PI-RADS ROI are known to underestimate tumor extent, and in our study, treatment of the original ROI would have resulted in positive margins for every patient.
"It is clear that current multiparametric MRI contouring protocols, which were developed for diagnosis, are not suitable for targeted treatment. We developed a novel AI-driven approach and platform to address this shortcoming, combining multimodal information (MRI, tracked biopsy, and prostate-specific antigen [PSA]) to produce [cancer estimation maps] and define optimal margins."
Investigators of this retrospective study conducted testing in an independent dataset of 50 consecutive patients with intermediate-risk prostate cancer who underwent radical prostatectomy.
The median patient age was 65.5 years (IQR, 60.0-69.0), and the median prostate volume was 33.4 cc (IQR, 27.0-40.4). The median PSA level was 6.9 ng/ml (IQR, 5.6-9.0) and the median PSA density was 0.198 (ng/ml)/ml (IQR, 0.144-0.278). Most patients (56.3%) had a PI-RADS v2 score of grade 4. Of the 966 biopsy cores assessed, 62.9% had an International Society of Urological Pathology grade of benign; a further 19.4% were grade 1, and 13.1% were grade 2.
Among the limitations to this study identified by investigators was the fact that the assessed population exclusively included recipients of radical prostatectomy and therefore likely had larger and more advanced disease than the average focal therapy patient. Moreover, the study population was drawn from a single institution only, and the AI model was not compared against physician readers.
"This approach could help improve and standardize focal treatment margins, potentially reducing cancer recurrence rates," the investigators concluded. "Prospective studies are warranted, as AI-enabled cancer mapping shows considerable promise for patient-specific treatment planning and personalized medicine."
Reference: Priester A, Fan RE, Shubert J, et al. Prediction and mapping of intraprostatic tumor extent with artificial intelligence. Eur Urol Open Sci. 2023;54:20-27. doi:10.1016/j.euros.2023.05.018
Follow this link: