Category Archives: AI
SoundHound AI Stock: Bull vs. Bear – The Motley Fool
In the dynamic world of tech investments, differing viewpoints are as common and healthy as software updates. When it comes to artificial intelligence (AI) company SoundHound AI (SOUN), two Fool.com tech aficionados have two very different takes.
Read on for an analysis of this under-the-radar tech company. It may just change your mind.
Image source: Getty Images.
Anders Bylund (Bull case): Remember the days when SoundHound was the app to use for identifying a catchy tune? I sure do. It was a two-dog race between SoundHound and Shazam back in the early smartphone era, long before digital assistants like Apple's Siri and the Google Assistant started doing the same job. This company has been developing leading-edge audio analysis tools since 2005.
Now, SoundHound AI has evolved from a personal music detective to a maestro of voice AI, orchestrating conversations between businesses and people with the help of artificial intelligence. The company may not dominate song-naming services anymore, but you and I have probably interacted with SoundHound's Houndify technology more recently without knowing it. The company's impressive client roster includes social media giant Snap, video-streaming veteran Netflix, and giant carmaker Stellantis, just to name a few.
This isn't just a trip down memory lane; it's a testament to SoundHound AI's journey from a cool app on smartphones to a powerful force behind the voice-controlled AI revolution. The company's vision of creating a conversational AI platform that exceeds human capabilities isn't just a lofty goal; it's a reality unfolding before us. Each interaction with clients like restaurant management expert Toast, or in your favorite Stellantis vehicle, is a brush with SoundHound's advanced AI platform.
Each one of the household names listed above could have selected another voice recognition system from a larger, better-known tech giant. But they all selected Houndify, making the client list a mighty selling point in future deal negotiations. It's happening now, with two new partnerships announced in just the last two months. Houndify powers Netflix's reference system for set-top boxes, guiding consumer electronics partners to build products that connect to the streaming service. In Jeep and Dodge cars, SoundHound's software helps you control the infotainment system, navigation, and more. And yep, it's Houndify's generative AI voice you hear on the phone with restaurants using Toast's automated ordering system. This little company is going places.
And for investors, this is more than just betting on a company. SoundHound shareholders are part of a story that many of us have experienced passively for years. SoundHound AI, with its blend of nostalgia-worthy experience and cutting-edge technology, presents a unique investment narrative -- one where the past and future of AI innovation converge.
Jeremy Bowman (Bear case): 2023 has been the year of artificial intelligence on the stock market, and SoundHound AI is among the winners. That makes sense as AI is at the core of the company's speech recognition and voice-to-text capabilities.
Shares of SoundHound AI have nearly doubled this year on enthusiasm for AI stocks, and there are legitimate reasons to like the stock. The company reported 52% sequential revenue growth in its most recent quarter, and while it's still unprofitable, its losses are narrowing, showing it's taking steps to profitability. Its valuation is high, but not unreasonable at a price-to-sales ratio of 12.
The reason I'm taking a bearish position against the stock is that I don't think SoundHound can defend its turf against big tech companies over the long term. The company claims to have best-in-class voice AI technology and says it has 15-plus years of Voice AI data accumulation, but it's still a small company with just $13 million in revenue in the recent quarter, and growth has been uneven.
SoundHound also says it's operating in a $160 billion addressable market, but that seems to exaggerate the company's opportunity. On that large of a scale, the company would be competing against Apple's Siri, Google Assistant, and Amazon Alexa, which have all been developing voice recognition technology for at least a decade. Suppose SoundHound proves that there is a significant market opportunity and a profitable one. In that case, the company will likely face increasing competition from those deep-pocketed tech giants and specialists in individual sectors.
Looking at SoundHound from that perspective, its upside potential seems more limited. Investors can likely find better AI growth stocks elsewhere.
Suzanne Frey, an executive at Alphabet, is a member of The Motley Fool's board of directors. Anders Bylund has positions in Alphabet and Netflix. Jeremy Bowman has positions in Netflix and Snap. The Motley Fool has positions in and recommends Alphabet, Apple, and Netflix. The Motley Fool recommends Stellantis and Toast. The Motley Fool has a disclosure policy.
New AI tech allows UCF researchers to monitor the health of buildings – WMFE
Well-made buildings are said to have "good bones." But if a building or a bridge had broken bones, how would an inspector know?
Doctors use X-rays for patients, and soon local scientists are hoping to put similar monitoring technology in the hands of engineers. Researchers at the University of Central Florida are developing virtual reality and artificial intelligence tools to better monitor the health of buildings and bridges.
In 2019, the U.S. ranked 13th in the World Economic Forum's global infrastructure rankings, reflecting its aging infrastructure. In 2021, the American Society of Civil Engineers gave America's infrastructure a C- and called out a need for more innovative technologies to better monitor and repair the country's buildings, bridges, and roads.
UCF professor Necati Catbas is hoping to address that need with the creation of four different technologies. He and his team of UCF students and postdocs are hoping their tools will allow engineers to check up on buildings the same way a doctor would check on a patient.
"In a way, you're looking at a patient versus you're looking at a patient, and also you're using MRI or X-ray to really understand what's going on," said Catbas, a Lockheed Martin St. Laurent professor.
The University of Central Florida
"Computer vision" is one such technology UCF researchers have developed to see cracks in infrastructure that in-person inspectors might miss. Using a headset connected to sensors built into a structure, users can see the vibration deformation, and movement of support beams inside a structure. Using mixed reality, users can interact with cracks they spot and use predictive tools to see how they could develop.
Computer vision is made for the visual inspection of structural health, which is practical for inspectors since it doesn't require access to the structures in question.
"The state of inspection right now is based on visual inspections," Catbas said. "The expertise and know-how of the engineer or inspector is very critical. And that accumulates over the years, but they also need complementary technologies."
Another tool, the generative adversarial network, would allow users to predict how newer structures may crack or shift over a set period of time based on archival data from an older, yet similar structure.
"We are generating new data from the existing data like we are creating synthetic data, and based on the algorithm and methods we can create, and see how the structure is going to look when it has some damage," Catbas said.
The UCF team has also developed an immersive visualization system that uses virtual reality and augmented reality to conduct virtual visits of a building or bridge from afar. A computer-simulated environment of the real world is generated and overlaid with AR details giving users the structure's status in real time.
"It's almost like you're having a virtual tour on the bridge," Catbas said. "These are tools to provide more flexibility to have access to the bridge and to have access to the data."
Lastly, the collective intelligence framework technology uses AI to speed up the inspection processes. An inspector uses a headset or a handheld device to scan a damaged area and analyze it in real time. The inspector is spared from performing manual measurements and has access to the building's condition.
"The ultimate goal here is to effectively manage the data that we are collecting and understand the complex data domains," Catbas said.
Catbas also said these smart structure technologies are ready to be adopted into engineering standards but must be reviewed first by many committees throughout the country before they can be applied to everyday engineering and inspection.
Catbas sees the tech as becoming a vital component of America's infrastructure.
"We can utilize these technologies, not only for a particular bridge or bridge assessment, but also for extreme events like hurricanes, floods, earthquakes, and really help people recover from these damaging events," he said. "We can find the critical links in our communities, on the roads, and in buildings. We can find the ones that we need to pay more attention to, work, prepare, and make them more resilient."
Osium AI uses artificial intelligence to speed up materials innovation – TechCrunch
Image Credits: Osium AI
While everybody is trying to figure out how artificial intelligence can be leveraged across various industries, French startup Osium AI has found an interesting use case for AI research and development in materials science.
Founded by Sarah Najmark and Luisa Bouneder, the startup raised a $2.6 million seed round from Y Combinator, Singular, Kima Ventures, Collaborative Fund, Raise Phiture and several business angels (Julien Chaumond, Thomas Clozel, Isaac Oates, Liz Wessel, Ebert Hera Group, Patrick Joubert, Sequoia Scout and Atomico Angel).
"During my undergrad, I had done research on materials, particularly in the field of cosmetics. And I had seen that materials development methodologies were still very manual, with a lot of trial and error and many methods relying mainly on intuition," Najmark told me.
After graduating, she joined Google X, the moonshot division of the giant tech company, and spent three years working on robotics and deep tech technologies. She also co-authored some patents.
"I was tech lead, so I really had ownership over end-to-end artificial intelligence pipelines on robotics and system engineering subjects," she said.
Her co-founder, Luisa Bouneder, spent three years working on data products for industrial companies, and in the materials field in particular. She also noticed firsthand that there was a lot of trial and error that was slowing down the development process.
"In discussions with many industrial companies, we also realized that there were really new challenges linked to sustainability, with the development of new materials: lighter materials (materials for aeronautics, for example), but also more durable, environment-friendly materials, with optimized and greener manufacturing processes," Najmark said.
"It's a subject that really affects all types of industries, including construction, packaging, aeronautics, aerospace, textiles and smartphones," she added later in the conversation.
So how does Osium AI actually work? It's all about optimizing the feedback loop between materials formulation and testing using a data-driven approach. With the startup's proprietary tech, industrial companies can predict the physical properties of new materials based on a list of criteria. After that, Osium AI can also help refine and optimize those new materials while avoiding common mistakes involved with trial and error.
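The data-driven loop described above (fit a model on tested formulations, then query it for an untested candidate) can be illustrated with ordinary least squares. This is a stand-in sketch, not Osium AI's proprietary method; the features, units, and numbers below are invented for the example.

```python
import numpy as np

# Hypothetical dataset: each row is a tested formulation
# (filler fraction, cure temperature in C, fiber length in mm),
# and y is the measured tensile strength in MPa. Values are invented.
X = np.array([
    [0.10, 120.0, 1.0],
    [0.20, 120.0, 2.0],
    [0.10, 160.0, 2.0],
    [0.30, 140.0, 1.0],
    [0.20, 150.0, 1.5],
])
y = np.array([50.0, 56.0, 60.0, 62.0, 61.0])

# Fit a linear model y ~ X.w + b via least squares (bias column appended).
A = np.hstack([X, np.ones((len(X), 1))])
w, *_ = np.linalg.lstsq(A, y, rcond=None)

# Predict the property of an untested candidate formulation.
candidate = np.array([0.25, 145.0, 1.8, 1.0])  # trailing 1.0 is the bias term
print(round(float(candidate @ w), 1))  # -> 62.6
```

Real materials models are nonlinear and trained on far richer data, but the workflow is the same: each new lab measurement feeds back into the model, tightening the loop between formulation and testing.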
Several industrial companies are already trying out Osium AI's solution, and they see the potential. "Our users saw that our solution could enable them to accelerate both the development and analysis of materials by a factor of 10. So right from the start of our testing, we saw that we were bringing value," Najmark said.
In many ways, Osium AI is just getting started. There are only two people working for the company (the two co-founders), so the startup will soon ramp up its team and start turning these first contracts into real business. The company is already talking with 30 industrial companies that could potentially become clients.
Rival pranked OpenAI with thousands of paper clips to warn about AI apocalypse – Business Insider
Microsoft's much-maligned Clippy was one of the first "intelligent office assistants" but never tried to wipe out humanity. SOPA Images/Getty Images
One of OpenAI's biggest rivals played an elaborate prank on the AI startup by sending thousands of paper clips to its offices.
The paper clips in the shape of OpenAI's distinctive spiral logo were sent to the AI startup's San Francisco offices last year by an employee at rival Anthropic, in a subtle jibe suggesting that the company's approach to AI safety could lead to the extinction of humanity, according to a report from The Wall Street Journal.
They were a reference to the famous "paper clip maximizer" scenario, a thought experiment from philosopher Nick Bostrom, which hypothesized that an AI given the sole task of making as many paper clips as possible might unintentionally wipe out the human race in order to achieve its goal.
"We need to be careful about what we wish for from a superintelligence, because we might get it," Bostrom wrote.
Anthropic was founded by former OpenAI employees who left the company in 2021 over disagreements on developing AI safely.
Since then, OpenAI has rapidly accelerated its commercial offerings, launching ChatGPT last year to record-breaking success and striking a multibillion-dollar investment deal with Microsoft in January.
AI safety concerns have come back to haunt the company in recent weeks, however, with the chaotic firing and subsequent reinstatement of CEO Sam Altman.
Reports have suggested that concerns over the speed of AI development within the company, and fears that this could hasten the arrival of superintelligent AI that could threaten humanity, were reasons why OpenAI's non-profit board chose to fire Altman in the first place.
OpenAI's chief scientist Ilya Sutskever, who took part in the board coup against Altman before dramatically joining calls for him to be reinstated, has been outspoken about the existential risks artificial general intelligence could pose to humanity, and reportedly clashed with Altman on the issue.
According to The Atlantic, Sutskever commissioned and set fire to a wooden effigy representing "unaligned" AI at a recent company retreat, and he reportedly also led OpenAI's employees in a chant of "feel the AGI" at the company's holiday party, after saying: "Our goal is to make a mankind-loving AGI."
OpenAI and Anthropic did not immediately respond to a request for comment from Business Insider, made outside normal working hours.
Nashville music industry wants federal legislation to rein in AI – WKRN News 2
NASHVILLE, Tenn. (WKRN) -- Artificial intelligence is rocking the music business. Industry leaders have been meeting with lawmakers about national legislation so AI and musicians can have a healthy working relationship.
In a tough business, it's safe to say that songwriter Jamie Moore has made it.
"My job is to make you dance, to make you cry, to make you laugh," said Moore.
The four-time Grammy-nominated artist has written songs performed by Florida Georgia Line, Morgan Wallen and Carrie Underwood, to name a few.
"The ones that are instant are special," said Moore, talking about songwriting.
But Moore isn't resting on his laurels. He's working harder than ever, concerned that AI could one day take his job.
"For what I do, it's picking up on phrasing, it's picking up on rhythm, melodic sensibilities. It's really getting smart," said Moore. "Will any of us have a job in the next two years?"
"AI is here. And we have to find a balance," said Bart Herbison.
Bart Herbison is executive director of the Nashville Songwriters Association International.
"I think songwriters are smart, and they think it can be a useful tool. But I also don't think it's hyperbole to say they are freaking out over it. We've got to get some regulations put around this," said Herbison.
And that effort is underway. Herbison recently traveled to Washington, D.C., with songwriters to share their concerns with lawmakers, and progress is being made. Federal legislation called the NO FAKES Act is in its early stages. If passed, it could give artists property rights over their name, image, likeness and voice. In the Senate, Republican Marsha Blackburn and Democrat Chris Coons are behind the effort.
"It's a smart approach, because we need to find the areas where we can get consensus. We've got to start with something. And the NO FAKES Act, both on the House and Senate side, seems to be very well received."
"So, it gives us some tools to tell you to take it down. And if not, we can sue you."
"If we have rules, regulations, laws, and government concerning people, we have to have rules, laws, government, some type of guardrails set for AI, because it has none right now," said Moore.
As the new world order between music and AI takes shape in the coming years, songwriters would like to see a focus on what they call the four Ps: permission, payment, proof, and penalty.
"Those are our four principles," said Herbison. "Right now, ChatGPT is using a songwriter's song to learn how to write songs like that songwriter. And what's the goal? To replace them. So there have to be protections around this."
For Moore, music is all he's ever wanted to do. And though he's leery of AI, he also knows the human condition is more complex than an algorithm.
"Putting the human experience in song. And I don't know if a robot can quite feel, or can cry, or stir up emotion," said Moore.
"It's a question mark. We know it can make music, but can it feel?"
The NO FAKES Act is currently in a discussion draft stage. But Herbison hopes to see legislation next year.
Facebook-parent Meta breaks up its Responsible AI team – CNBC
Mark Zuckerberg, CEO of Meta, attends a U.S. Senate bipartisan Artificial Intelligence Insight Forum at the U.S. Capitol in Washington, D.C., Sept. 13, 2023.
Stefani Reynolds | AFP | Getty Images
Meta has disbanded its Responsible AI division, the team dedicated to regulating the safety of its artificial intelligence ventures as they get developed and deployed, according to a Meta spokesperson.
Most members of the RAI team have been reassigned to the company's Generative AI product division, while some others will now work on the AI Infrastructure team, the spokesperson said. The news was first reported by The Information.
The Generative AI team, born in February, focuses on developing products that generate language and images to mimic the equivalent human-made version. It came as companies across the tech industry poured money into machine learning development so as not to get left behind in the AI race. Meta is among the Big Tech companies that have been playing catch-up since the AI boom took hold.
The RAI restructuring comes as the Facebook parent nears the end of its "year of efficiency," as CEO Mark Zuckerberg called it during a February earnings call. So far, that has played out as a flurry of layoffs, team-mergers and redistributions at the company.
Ensuring the safety of AI has become a stated priority of top players in the space, especially as regulators and other officials pay closer attention to the nascent technology's potential harms. In July, Anthropic, Google, Microsoft and OpenAI formed an industry group focused specifically on setting safety standards as AI advances.
Though RAI employees have now been dispersed throughout the organization, the spokesperson noted that they will continue to support "responsible AI development and use."
"We continue to prioritize and invest in safe and responsible AI development," the spokesperson said.
AI is here. Ypsilanti schools weigh integrity, ethics of new technology – MLive.com
YPSILANTI, MI -- As the use of artificial intelligence becomes more and more common, Ypsilanti Community Schools is working to keep up with the technology.
With so much still unclear about the full capabilities of AI, Superintendent Alena Zachery-Ross said she believes it's critical for schools to balance the usefulness of the new technology with maintaining academic integrity.
"We've really taken the stance that artificial intelligence is here, and so we need to teach integrity and the ethical considerations that teachers must think about," Zachery-Ross said. "We understand that it's going to be artificial intelligence and human intelligence interacting together from here on out."
YCS has been slowly rolling out the implementation of AI-powered tools since last summer. One way Zachery-Ross sees AI being used is to assist students in developing writing skills.
By using chatbots like ChatGPT -- an artificial intelligence developed by OpenAI that serves as a language model generating human-like text in a conversational style -- YCS can develop prompts and help students brainstorm ideas for writing exercises, Zachery-Ross said.
One way teachers can stem potential misuse of AI is requiring students to complete written assignments in the classroom -- either by writing on paper or typing in a monitored Google document -- so potential cheating would be easier to catch, Zachery-Ross said.
"(Students can) use it for analysis, synthesis and improving their work -- not to generate the work for them," Zachery-Ross said.
In addition to potentially offering new opportunities to personalize student learning, AI could ease some classroom management burdens, such as large-scale data analysis and quickly organizing lesson plans, Zachery-Ross said.
The YCS English Learner Department has been on the front line of AI implementation in the district. The technology can be used to quickly generate instructional materials in several different languages, said teacher Connor Laporte.
"We primarily use AI tools to create materials for students," Laporte said. "We've done a little bit of having students use it as well, but we're trying to be a little bit slower in talking about how we are rolling that out. You have to be pretty discerning to use (AI)."
Serving the roughly 30% of YCS students who can speak a language other than English, the English Learner Department has found multiple ways to bring AI into the classroom, including helping teachers develop multilingual explanations of core concepts discussed in the curriculum -- and save time doing it.
"A lot of that time saving allows us to focus more on giving that important feedback that allows students to grow and be aware of their progress and their learning," Laporte said.
Laporte cites the example of a Spanish-speaking intern who improved a vocabulary test by double-checking the translations and using ChatGPT to add more vocabulary words and exercises. Another intern then used ChatGPT to make a French version of the same worksheet.
While convenient, artificial intelligence is not infallible, and native-speaking staff members are careful to double-check the work produced through AI tools, Laporte said.
The future is now
AI engines like Google Bard can be used to create bespoke materials for individual students, effectively tailoring classwork for students based on their language proficiency.
AI-generated voice programs also give more options for students to hear multiple dialects of a chosen language. Students will get a chance to differentiate Tanzanian and Ugandan Swahili -- something the monotone, robot-like voice of Google Translate doesn't offer, Laporte said.
"We are planning on rolling it out a little more widely," Laporte said. "We're still cautious -- last year I feel like everyone was terrified of AI, so we don't want to just jump right into it."
Since the beginning of 2023, fifth-grade teacher Melanie Eccles has been implementing the Roadmaps digital education platform to digitally organize her lesson plans.
Developed by the University of Michigan College of Engineering's Center for Digital Curricula, Roadmaps allows Eccles to monitor students as they complete work in the same program. The platform uses AI technology to automate the process of sharing information among students and other teachers.
"(Roadmaps) has helped me both incorporate digital learning into the students' curriculum and train them on how to use the curriculum in a way that isn't just browsing the internet," Eccles said.
Sydney Fortson, an 11-year-old student in Eccles' social studies class, likes that the collaboration-based Roadmaps allows her to edit her own work and not just rely on a teacher.
"I like how everything is in one place (with Roadmaps)," Sydney said. "I wish there were a few less tabs, but I like how it gives me choices on how I can learn."
Balance is critical
Whether students will use AI in their education is not a question of if, but when, Zachery-Ross said. Because of this, YCS is changing how teachers approach crafting their assignments in the first place.
"Teachers are asking students to do more rigorous tasks -- things that do require more critical thinking and analysis," Zachery-Ross said. "When we get to that level, that's something that a bot can't contribute to."
YCS staff are preparing for a future in which methods like group projects, hands-on assignments and asking students to explain concepts verbally are the norm in lieu of relying on written assignments to showcase student aptitude.
"(Students) are having formative instruction where they're growing and not just getting a final paper or final, simple assignment that can be put into an AI bot," Zachery-Ross said. "We have to move away from that, because that's not higher-level thinking anyway. We really want to get students to analyze and be critical workers."
Though her district is open to the AI-powered future, Zachery-Ross said it will be important to stay careful and cautious when dealing with the technology, and for school districts to learn and grow from each other in order to balance utility with integrity.
"Students need to understand that there have to be ethical considerations," Zachery-Ross said. "That balance is critical for any district or educator thinking about adopting generative AI into their work."
Digitizing Healthcare: Can AI Augment Empathy and Compassion in … – MedCity News
With the advent of the latest technologies and software, including generative artificial intelligence (AI), virtual reality, ChatGPT, and others, organizations are racing to find purpose and use for these new tools for fear of losing relevance in the marketplace. In most industries, incorporating new and emerging technologies is seen as innovative, impressive, and ambitious. In healthcare, an industry that holds the lives and touchpoints of care of many populations across the nation in its hands, moving toward digitization demands greater discernment around true impact, quality, and cost.
In the case of healthcare AI, we have seen its arrival signal developments in interactive and customized patient experiences, facilitate or eliminate administrative tasks in hospital and provider workflows, and improve access to healthcare. Yet there is still much work to be done. Some AI tools aren't yet equipped to source from up-to-date and relevant materials, require human editors or handlers to double-check the results, and have yet to be optimized to recognize and appropriately address the human emotion that consumers need. When it comes to addressing evolving patient and healthcare gaps, we are left to question: can AI help augment empathy and compassion in healthcare, or will it eventually crack under the pressure?
What are empathy and compassion?
Empathy and compassion go hand-in-hand. Empathy is simply defined as feeling for someone, or being aware of others' emotions and attempting to understand how they feel. Compassion is defined as feeling for someone and having the desire to help, an emotional response to empathy that evokes a desire to act. Within healthcare, compassion and empathy can play a critical part in improving patient outcomes and furthering patient care quality, yet the industry still struggles to find ways to foster and support compassionate and empathetic care across the board.
Evidence increasingly shows that empathy from healthcare providers, professionals, and social care workers leads to higher satisfaction levels and better health outcomes for patients. Compassionate care is also highly regarded by patients and can help providers determine appropriate care plans that focus on the unique patient's needs based on their care story. Compassionate care can also strengthen physician-patient relationships as trust is established throughout care. Patients value compassionate and empathic concern as much as, if not more than, technical competence when choosing a physician, yet empathy and compassion among healthcare professionals is sometimes seen to decrease over time, especially during training and clinical practice.
The current state of empathy and compassion in healthcare
As we continue to move towards care models that express or emphasize the attractiveness of value-based care, some argue physicians are unable to empathize with every patient genuinely and effectively without feeling emotionally drained. Compassion fatigue, highlighted as burnout and emotional exhaustion among healthcare professionals, is another deterrent to improving care as this phenomenon can lead to reduction in empathy, decreased patient and employee satisfaction, poorer clinical judgment, and other emotional turmoil. Overall, healthcare professionals today are finding it difficult to properly provide compassionate care under modern time and labor constraints, affecting both provider and patient satisfaction and outcomes, and leaving both feeling unsupported within the care continuum.
In combating time and labor constraints, AI has proven to simplify workloads, maximize time, and offload repetitive or organizational tasks from an already over-burdened workforce. With regard to emotion, empathy, or compassion, AI has also progressed to be able to recognize and respond to emotional distress. Experts contend AI cannot replace human empathy, specifically in a healthcare setting where empathy is key to the successful treatment of patients; yet a recent JAMA Internal Medicine report found ChatGPT's patient-provider communication skills were rated higher than those of physician counterparts, including on the empathy scale. While machines currently cannot feel a need or desire to help, as compassion requires, AI can replicate questions and responses that mimic an empathetic interaction. If we question how AI could provide these interactions better than providers can, we must recognize that AI chatbots are not better at empathy; AI is simply not under the same time pressures as human clinicians.
How AI can help augment empathy and compassion
There is an inherent opportunity for AI to help physicians provide better, more compassionate, and more empathetic care. Whether we use AI in training or to free up time and space for healthcare workers to provide better care, we need to continue exploring effective AI use across the care continuum to help every member, patient and provider included. The healthcare industry should prioritize a patient's experience of compassion and empathy within healthcare rather than looking only at outcomes. That way, when using AI tools to augment and improve compassionate and empathetic care, we can ensure high standards are met in each interaction. The measurement of experienced or perceived empathy and compassion can easily be deprioritized in favor of return on investment measured in hard dollars. Yet there is most assuredly a return on investment when an individual stays engaged in their health journey because of compassionate interaction.
Our human impact on the consumer experience needs to be at the forefront of our care as we look to improve performance. Health teams feel a sense of responsibility for their impact on a person's lived experience. This is a foundational element of a better company culture in which healthcare systems and organizations can better affect patient outcomes and ensure the intersection of AI and empathy benefits everyone.
In its current state, AI can be relied on to improve efficiency and free up time and emotional labor, letting healthcare professionals focus more fully on the human side of care: fostering trust and relationships and properly engaging with patients. Yet to conclude that AI and artificial empathy will evolve enough to completely replace physicians and healthcare workers, or the human side of healthcare, is to misrepresent both the issues at hand and the possible solutions. To digitize healthcare and lean on emerging technologies is to find, within the relationship between machine and human, the opportunity to augment our ability to be human: to empathize and provide the compassionate touch in care.
Photo: ipopba, Getty Images
See the rest here:
Digitizing Healthcare: Can AI Augment Empathy and Compassion in ... - MedCity News
Icebergs are melting fast. This AI can track them 10000 times faster … – Space.com
Scientists are turning to artificial intelligence to quickly spot giant icebergs in satellite images with the goal of monitoring their shrinkage over time. And unlike the conventional iceberg-tracking approach, which takes a human a few minutes to outline just one of these structures in an image, AI accomplished the same task in less than 0.01 seconds. That's 10,000 times faster.
"It is crucial to locate icebergs and monitor their extent, to quantify how much meltwater they release into the ocean," Anne Braakmann-Folgmann, lead author of a study on the results and a scientist at the University of Leeds in the U.K., said in a statement.
In late October, the British Antarctic Survey reported that the massive ice sheets covering Antarctica will melt at an accelerated rate for the rest of the century and contribute inevitably to sea level rise around the globe in the coming decades. Last year, one of the biggest icebergs known to scientists, A68a, which was more than 100 miles long and 30 miles wide, thawed in the South Atlantic Ocean after drifting for five years from its home in the Antarctic Peninsula, where it had broken away in 2017.
Related:
Along with dumping 1 trillion tons of fresh water into the ocean, the melting iceberg also pumped nutrients into its environment, which will radically alter the local ecosystem for years to come, scientists have said. It's still unclear whether this change will have a positive or negative effect on the marine food chain.
Scientists monitored A68a's travels and shrinkage using images from satellites. Accurately identifying the iceberg, crucial to monitor changes to its size and shape over the years, is not an easy task, as the icebergs, sea ice and clouds are all white. Plus, although analyzing one satellite image for icebergs takes only a few minutes to complete, the time quickly adds up when thousands of images are waiting for their turn.
"In addition, the Antarctic coastline may resemble icebergs in the satellite images, so standard segmentation algorithms often select the coast too instead of just the actual iceberg," said Braakmann-Folgmann.
So to speed up this time-consuming and laborious process, researchers have, for the first time, trained a neural network to do the job.
The study team trained the AI to spot large icebergs by using images from the European Space Agency's Sentinel-1 satellite, whose radar eyes can capture Earth's surface regardless of cloud cover or lack of light.
Except for missing a few parts of icebergs bigger than the examples the AI was trained on (a solvable problem), the scientists found the system detected icebergs in satellite images with 99 percent accuracy. This included correctly identifying seven icebergs ranging in size from 54 square kilometers (approximately the size of the city of Bern in Switzerland) to 1,052 square kilometers (as large as Hong Kong).
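The 99 percent figure refers to agreement between the network's predicted iceberg outline and a manually drawn one. As an illustrative sketch, not the study's actual code, here is how pixel accuracy and intersection-over-union, two standard segmentation scores, can be computed for binary masks in plain Python (the toy masks and values below are made up):

```python
def segmentation_scores(pred, truth):
    """Pixel accuracy and intersection-over-union (IoU) for two
    equal-sized binary masks, given as 2D lists of 0s and 1s."""
    flat_pred = [bool(v) for row in pred for v in row]
    flat_truth = [bool(v) for row in truth for v in row]
    accuracy = sum(p == t for p, t in zip(flat_pred, flat_truth)) / len(flat_truth)
    intersection = sum(p and t for p, t in zip(flat_pred, flat_truth))
    union = sum(p or t for p, t in zip(flat_pred, flat_truth))
    iou = intersection / union if union else 1.0
    return accuracy, iou

# Toy 4x4 "satellite scene": the prediction misses one iceberg pixel.
truth = [[0, 0, 0, 0],
         [0, 1, 1, 0],
         [0, 1, 1, 0],
         [0, 0, 0, 0]]
pred  = [[0, 0, 0, 0],
         [0, 1, 1, 0],
         [0, 1, 0, 0],
         [0, 0, 0, 0]]
accuracy, iou = segmentation_scores(pred, truth)  # 0.9375, 0.75
```

On real scenes, where iceberg pixels are a small fraction of the image, IoU is the more informative of the two scores, since a model that predicts "no iceberg" everywhere can still post high pixel accuracy.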
"This study shows that machine learning will enable scientists to monitor remote and inaccessible parts of the world in almost real-time," study co-author Andrew Shepherd, a professor at Northumbria University in England, said in the statement.
The AI tool also didn't make the same mistakes as other more conventional automated approaches, such as the error of misconstruing individual bits of ice as one collective iceberg, the researchers say.
"Being able to map iceberg extent automatically with enhanced speed and accuracy will enable us to observe changes in iceberg area for several giant icebergs more easily and paves the way for an operational application," said Braakmann-Folgmann.
This research is described in a paper published Thursday (Nov. 9) in the journal The Cryosphere.
Generative AI Companies Love This Stock. Could It Be a Winner For … – The Motley Fool
Investors have been falling over themselves to get exposure to artificial intelligence (AI) stocks this year.
Since ChatGPT's launch nearly a year ago, investors have been convinced that AI, and generative AI specifically, will be the next transformative technology, and excitement about the possibilities is a major reason the Nasdaq Composite has soared this year, led by AI stocks such as Nvidia and Microsoft.
However, buying companies that sell generative AI capabilities, like Microsoft, or that sell the building blocks needed to run the advanced computing those capabilities require, like Nvidia, isn't the only way to get exposure to the fast-growing field.
There's another picks-and-shovels approach to getting exposure to generative AI: buying the stocks of companies that provide the technology these rapidly growing AI companies need. One company that is already serving generative AI customers and well positioned to benefit from their growth is Amplitude (AMPL 0.20%), a cloud software company that helps businesses learn how customers use their digital products and how those products can improve. For example, Amplitude helped Peloton figure out that social interaction was key to earning loyalty from its members.
Image source: Getty Images.
Because Amplitude has focused on digital products, it has long been popular with tech start-ups that are eager to see how customers experience their products and improve their user interface, and Amplitude is now seeing a boom in demand from generative AI start-ups.
Two new generative AI companies just became Amplitude customers in its recently reported third quarter. Those are Midjourney, an AI image generation company similar to Stable Diffusion or OpenAI's DALL-E, and Character.ai, a large language model chatbot similar to ChatGPT.
Midjourney is using Amplitude to understand free-to-paid conversions, see how demographics relate to usage, and A/B test changes to its user interface. Character.ai, meanwhile, is using Amplitude's analytics and experimentation products to better understand the user experience and improve it.
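A free-to-paid A/B test of the kind described boils down to comparing conversion rates between two user groups. As a minimal, hypothetical sketch (the numbers and function below are illustrative, not Amplitude's API), a two-proportion z-test can decide whether a variant's lift is statistically significant:

```python
import math

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion rates between
    a control arm (a) and a variant arm (b)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal tail
    return z, p_value

# Hypothetical experiment: 10,000 users per arm,
# 4.8% convert on the control UI vs. 5.6% on the new UI.
z, p = two_proportion_ztest(480, 10_000, 560, 10_000)  # z ≈ 2.55, p ≈ 0.011
```

With p below 0.05, the new interface's lift would be unlikely to be noise; product analytics tools typically layer guardrails such as minimum sample sizes on top of this basic test.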
On the earnings call, Amplitude CEO Spenser Skates said, "Amplitude is the platform of choice for some of the biggest, brightest, and best names in generative AI, helping them guide their businesses in ways that our competitors cannot match."
Skates also saw Amplitude playing a key role for AI companies because they are competing, in large part, on user experience, which makes the product data that Amplitude provides so valuable. He also said the demand from AI companies was part of a larger trend, adding, "I would compare AI to actually previous waves of technological innovation. We've seen stuff like VR, crypto, mobile, SaaS, all of the new companies that those categories created ended up becoming day one Amplitude customers from the very start, starting out with a small -- growing with us -- you know, us growing with them over time as they continue to scale."
Amplitude's ability to grow with its customers is key here.
Amplitude stock sold off following its third-quarter earnings report on Tuesday night, even as the company topped estimates in the report and raised its guidance.
Like a lot of cloud software companies, Amplitude is still seeing some macro-related challenges as many of its customers have grown more cautious, and it's seen an uptick in churn. That's not surprising, given the recent layoffs in the tech sector and broader fears of a recession.
In an interview I had with Skates, he expressed optimism about faster growth returning by the second half of next year, but the company is also making progress on the bottom line. It's on track for a profit on an adjusted operating income basis in the second half of this year, and it posted $7.5 million in free cash flow in the third quarter, a free cash flow margin of more than 10%.
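As a quick back-of-the-envelope check on those figures (a sketch from the numbers above, not reported company data), a free cash flow margin is simply free cash flow divided by revenue, so a margin above 10% on $7.5 million of free cash flow implies quarterly revenue below $75 million:

```python
fcf_millions = 7.5   # Q3 free cash flow, per the report
min_margin = 0.10    # "more than 10%"

# margin = fcf / revenue  =>  revenue = fcf / margin,
# so the stated margin floor caps the implied quarterly revenue.
implied_max_revenue = fcf_millions / min_margin  # 75.0 ($ millions)
```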
Still, the company's traction with tech start-ups like Midjourney and Character.ai could be its biggest strength as the artificial intelligence industry is expected to explode. Digital product usage should only increase with the proliferation of generative AI tools, and Amplitude is well positioned as a leader in product analytics.
With a market cap just north of $1 billion, the company has a lot of upside potential if it can take advantage of the generative AI wave.