Category Archives: Artificial Intelligence
Revealing speckle obscured living human retinal cells with artificial intelligence assisted adaptive optics optical … – Nature.com
P-GAN enables visualization of cellular structure from a single speckled image
The overall goal was to learn a mapping between the single speckled and averaged images (Fig. 1b) using a paired training dataset. Inspired by the ability of traditional GAN networks to recover aspects of the cellular structure (Supplementary Fig. 4), we sought to further improve upon these networks with P-GAN. In our network architecture (Supplementary Fig. 2), the twin and the CNN discriminators were designed to ensure that the generator faithfully recovered both the local structural details of the individual cells and the overall global mosaic of the RPE cells. In addition, we incorporated a WFF strategy into the twin discriminator that concatenated features from different layers of the twin CNN with appropriate weights, facilitating effective comparisons and learning of the complex cellular structures and global patterns of the images.
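For readers who think in code, the following is a minimal PyTorch sketch of how such a parallel-discriminator layout could be wired. The layer counts, channel widths, and pooling choices are illustrative assumptions (the actual design is specified in Supplementary Fig. 2), but the sketch shows the three roles described above: a generator, a CNN discriminator judging the global mosaic, and a shared-weight twin discriminator whose WFF combines pooled features from intermediate and final layers.

```python
# Minimal sketch of a parallel-discriminator GAN layout; sizes are assumptions.
import torch
import torch.nn as nn

def conv_block(c_in, c_out, stride=1):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, stride=stride, padding=1),
        nn.BatchNorm2d(c_out),
        nn.LeakyReLU(0.2, inplace=True),
    )

class Generator(nn.Module):
    """G: maps a single speckled image to a despeckled estimate."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            conv_block(1, 32), conv_block(32, 64), conv_block(64, 32),
            nn.Conv2d(32, 1, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)

class CNNDiscriminator(nn.Module):
    """D2: judges whether an image looks drawn from the averaged-image distribution."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            conv_block(1, 32, stride=2), conv_block(32, 64, stride=2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1),
        )

    def forward(self, x):
        return self.net(x)

class TwinDiscriminator(nn.Module):
    """D1: a shared-weight (Siamese) CNN that compares a recovered image
    against its paired averaged image using multi-layer features."""
    def __init__(self):
        super().__init__()
        self.stage1 = conv_block(1, 32, stride=2)   # intermediate layer: local texture
        self.stage2 = conv_block(32, 64, stride=2)  # last conv layer: global pattern
        self.head = nn.Linear(32 + 64, 1)

    def embed(self, x):
        f1 = self.stage1(x)
        f2 = self.stage2(f1)
        # Weighted feature fusion: pooled intermediate features, scaled by 0.2
        # (the weighting reported in the ablation below), concatenated with the
        # pooled features of the last convolutional layer.
        return torch.cat([0.2 * f1.mean(dim=(2, 3)), f2.mean(dim=(2, 3))], dim=1)

    def forward(self, a, b):
        # Identical weights are applied to both inputs before comparison.
        return self.head(torch.abs(self.embed(a) - self.embed(b)))
```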
P-GAN was successful in recovering the retinal cellular structure from the speckled images (Fig. 1d and Supplementary Movie 1). Toggling between the averaged RPE images (obtained by averaging 120 acquired AO-OCT volumes) and the P-GAN recovered images showed similarity in the cellular structure (Supplementary Movie 2). Qualitatively, P-GAN showed better cell recovery capability than other competitive deep learning networks (U-Net41, GAN25, Pix2Pix30, CycleGAN31, medical image translation using GAN (MedGAN)42, and uncertainty guided progressive GAN (UP-GAN)43) (additional details about network architectures and training are shown in the Other network architectures section in Supplementary Methods and Supplementary Table 4, respectively), with clearer visualization of the dark cell centers and bright cell surroundings of the RPE cells (e.g., magenta arrows in Supplementary Fig. 4 and Supplementary Movie 3), possibly due to the twin discriminator's similarity assessment. Notably, CycleGAN was able to generate some cells that were perceptually similar to the averaged images, but in certain areas, undesirable artifacts were introduced (e.g., the yellow circle in Supplementary Fig. 4).
Quantitative comparison between P-GAN and the off-the-shelf networks (U-Net41, GAN25, Pix2Pix30, CycleGAN31, MedGAN42, and UP-GAN43) using objective performance metrics (PieAPP34, LPIPS35, DISTS36, and FID37) further corroborated our findings on the performance of P-GAN (Supplementary Table 5). There was an average reduction of at least 16.8% in PieAPP and 7.3% in LPIPS for P-GAN compared to the other networks, indicating improved perceptual similarity of P-GAN recovered images with the averaged images. Likewise, P-GAN also achieved the best DISTS and FID scores among all networks, demonstrating better structural and textural correlations between the recovered and the ground truth averaged images. Overall, these results indicated that P-GAN outperformed existing AI-based methods and could be used to successfully recover cellular structure from speckled images.
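As a concrete illustration of how such perceptual scores are computed, here is a short sketch using the open-source lpips package for the LPIPS metric; DISTS, PieAPP, and FID have analogous third-party implementations. The random tensors are stand-ins for paired recovered and averaged images, not the study's data.

```python
# Sketch: scoring perceptual similarity with LPIPS (lower = more similar).
import torch
import lpips  # pip install lpips

metric = lpips.LPIPS(net='alex')

def score(recovered: torch.Tensor, averaged: torch.Tensor) -> float:
    """Inputs: (N, 1, H, W) grayscale tensors scaled to [-1, 1]."""
    # LPIPS expects 3-channel images; replicate the grayscale channel.
    r, a = recovered.repeat(1, 3, 1, 1), averaged.repeat(1, 3, 1, 1)
    with torch.no_grad():
        return metric(r, a).mean().item()

# Stand-in data; in practice these would be paired AO-OCT images.
recovered = torch.rand(4, 1, 128, 128) * 2 - 1
averaged = torch.rand(4, 1, 128, 128) * 2 - 1
print(score(recovered, averaged))
```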
Our preliminary explorations of the off-the-shelf GAN frameworks showed that these methods have the potential to recover cellular structure and contrast but alone are insufficient to recover the fine local cellular details under extremely noisy conditions (Supplementary Fig. 4). To further reveal and validate the contribution of the twin discriminator, we trained a series of intermediate models and observed the cell recovery outcomes. We began by training a conventional GAN, comprising the generator, G, and the CNN discriminator, D2. Although GAN (G+D2) showed promising RPE visualization (Fig. 2c) relative to the speckled images (Fig. 2a), the individual cells were hard to discern in certain areas (yellow and orange arrows in Fig. 2c). To improve the cellular visualization, we replaced D2 with the twin discriminator, D1. Indeed, a 7.7% reduction in DISTS was observed, with clear improvements in the visualization of some of the cells (orange arrows in Fig. 2c, d).
a Single speckled image compared to images of the RPE obtained via b average of 120 volumes (ground truth), c generator with the convolutional neural network (CNN) discriminator (G+D2), d generator with the twin discriminator (G+D1), e generator with CNN and twin discriminators without the weighted feature fusion (WFF) module (G+D2+D1-WFF), and f P-GAN. The yellow and orange arrows indicate cells that are better visualized using P-GAN compared to the intermediate models. g–i Comparison of the recovery performance using deep image structure and texture similarity (DISTS), perceptual image error assessment through pairwise preference (PieAPP), and learned perceptual image patch similarity (LPIPS) metrics. The bar graphs indicate the average values of the metrics across the sample size, n = 5 healthy participants (shown as circles), for different methods. The error bars denote the standard deviation. Scale bar: 50 µm.
Having shown the outcomes of training D1 and D2 independently with G, we showed that combining both D1 and D2 with G (P-GAN) boosted the performance even further, evident in the improved values (lower scores implying better perceptual similarity) of the perceptual measures (Fig. 2g–i). For this combination of D1 and D2, we replaced the WFF block, which concatenated features from different layers of the twin CNN with appropriate weights, with global average pooling of the last convolutional layer (G+D2+D1-WFF). Without the WFF, the model did not adequately extract powerful discriminative features for similarity assessment and hence resulted in poor cell recovery performance. This was observed both qualitatively (yellow and orange arrows in Fig. 2e, f) and quantitatively, with higher objective scores (indicating low perceptual similarity with ground truth averaged images) for G+D2+D1-WFF compared to P-GAN (Fig. 2g–i).
Taken together, this established that the CNN discriminator (D2) helped to ensure that recovered images were closer to the statistical distribution of the averaged images, while the twin discriminator (D1), working in conjunction with D2, ensured structural similarity of local cellular details between the recovered and the averaged images. The adversarial learning of G with D1 and D2 ensured that the recovered images not only had global similarity to the averaged images but also shared nearly identical local features.
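A rough sketch of how these two adversarial signals might be combined in a single generator update follows, reusing the Generator, CNNDiscriminator, and TwinDiscriminator classes sketched earlier. The equal loss weighting, the binary cross-entropy objective, and the omission of the discriminator updates and of any pixel-wise loss are simplifying assumptions, not the authors' training recipe.

```python
# Sketch of one generator step against both discriminators (D1 and D2).
import torch
import torch.nn.functional as F

G, D1, D2 = Generator(), TwinDiscriminator(), CNNDiscriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)

def generator_step(speckled, averaged):
    recovered = G(speckled)
    # D2: push recovered images toward the averaged-image distribution (global).
    logits_global = D2(recovered)
    # D1: push each recovered image toward its own paired averaged image (local).
    logits_local = D1(recovered, averaged)
    real = torch.ones_like(logits_global)
    loss = (F.binary_cross_entropy_with_logits(logits_global, real) +
            F.binary_cross_entropy_with_logits(logits_local, real))
    opt_g.zero_grad()
    loss.backward()
    opt_g.step()
    return loss.item()

# Stand-in batch of paired images in [-1, 1].
print(generator_step(torch.rand(2, 1, 64, 64) * 2 - 1,
                     torch.rand(2, 1, 64, 64) * 2 - 1))
```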
Finally, experimentation with different weighting configurations in WFF revealed that fusing the intermediate layers, each weighted by 0.2, with the last convolutional layer proved complementary in extracting shape and texture information for improved performance (Supplementary Tables 2, 3). These ablation experiments indicated that the global perceptual closeness (offered by D2) and the local feature similarity (offered by D1 and WFF) were both important for faithful cell recovery.
Given the relatively recent demonstration of RPE imaging using AO-OCT in 2016 (ref. 12), and the long durations needed to generate these images, there are currently no publicly available datasets for image analysis. Therefore, we acquired a small dataset using our custom-built AO-OCT imager13, consisting of seventeen retinal locations obtained by imaging up to four different retinal locations for each of the five participants (Supplementary Table 1). To obtain this dataset, a total of 84 h was needed (~2 h for image acquisition followed by 82 h of data processing, which included conversion of raw data to 3D volumes and correction of eye motion-induced artifacts). After performing traditional augmentation (horizontal flipping), this resulted in an initial dataset of only 136 speckled and averaged image pairs. However, considering that this and all other existing AO-OCT datasets that we are aware of are insufficient in size compared to the training datasets available for other imaging modalities44,45, it was not surprising that P-GAN trained on this initial dataset yielded very low objective perceptual similarity (indicated by the high DISTS, PieAPP, LPIPS, and FID scores in Supplementary Table 6) between the recovered and the averaged images.
To overcome this limitation, we leveraged the natural eye motion of the participants to augment the initial training dataset. The involuntary fixational eye movements, which are typically faster than the imaging speed of our AO-OCT system (1.6 volumes/s), resulted in two types of motion-induced artifacts. First, due to bulk tissue motion, a displacement of up to hundreds of cells between acquired volumes could be observed. This enabled us to create averaged images of different retinal locations containing slightly different cells within each image. Second, due to the point-scanning nature of the AO-OCT system compounded by the presence of continually occurring eye motion, each volume contained unique intra-frame distortions. The unique pattern of the shifts in the volumes was desirable for creating slightly different averaged images, without losing the fidelity of the cellular information (Supplementary Fig. 3). By selecting a large number of distinct reference volumes onto which the remaining volumes were registered, we were able to create a dataset containing 2984 image pairs (22-fold augmentation compared to the initial limited dataset), which was further augmented by an additional factor of two using horizontal flipping, resulting in a final training dataset of 5996 image pairs for P-GAN (also described in Data for training and validating AI models in Methods). Using the augmented dataset for training P-GAN yielded high perceptual similarity of the recovered and the ground truth averaged images, which was further corroborated by improved quantitative metrics (Supplementary Table 6). By leveraging eye motion for data augmentation, we were able to obtain a sufficiently large training dataset from a recently introduced imaging technology to enable P-GAN to generalize well for never-seen experimental data (Supplementary Table 1 and Experimental data for RPE assessment from the recovered images in Methods).
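In outline, this augmentation amounts to re-running registration and averaging once per chosen reference volume, then flipping. The sketch below captures that bookkeeping; the register function here is a do-nothing placeholder standing in for the actual eye-motion correction pipeline, which is far more involved.

```python
# Sketch of reference-volume augmentation: each distinct reference yields a
# slightly different registered average, multiplying the training pairs.
import numpy as np

def register(moving, reference):
    # Placeholder: the real pipeline corrects intra- and inter-volume eye motion.
    return moving

def build_pairs(images, reference_indices):
    """images: list of 2D en-face images from repeated AO-OCT acquisitions."""
    pairs = []
    for ref_idx in reference_indices:
        reference = images[ref_idx]
        registered = [register(img, reference) for img in images]
        averaged = np.mean(registered, axis=0)  # speckle-suppressed target
        pairs.append((reference, averaged))     # (single speckled, averaged)
        # Horizontal flipping doubles the dataset, as described above.
        pairs.append((np.fliplr(reference), np.fliplr(averaged)))
    return pairs

rng = np.random.default_rng(0)
images = [rng.random((64, 64)) for _ in range(8)]
print(len(build_pairs(images, range(len(images)))))  # 8 references -> 16 pairs
```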
In addition to the structural and perceptual similarity that we demonstrated between P-GAN recovered and averaged images, here we objectively assessed the degree to which cellular contrast was enhanced by P-GAN compared to averaged images and other AI methods. As expected, examination of the 2D power spectra of the images revealed a bright ring (indicative of the fundamental spatial frequency present within the healthy RPE mosaic, arising from the regularly repeating pattern of individual RPE cells) for the recovered and averaged images (insets in Fig. 3b–i).
a Example speckled image acquired from participant S1. Recovered images using b U-Net, c generative adversarial network (GAN), d Pix2Pix, e CycleGAN, f medical image translation using GAN (MedGAN), g uncertainty guided progressive GAN (UP-GAN), and h parallel discriminator GAN (P-GAN). i Ground truth averaged image (obtained by averaging 120 adaptive optics optical coherence tomography (AO-OCT) volumes). Insets in (a–i) show the corresponding 2D power spectra of the images. A bright ring, whose radius corresponds to the cell spacing and which represents the fundamental spatial frequency of the retinal pigment epithelial (RPE) cells, can be observed in the power spectra of the U-Net, GAN, Pix2Pix, CycleGAN, MedGAN, UP-GAN, P-GAN, and averaged images. j Circumferentially averaged power spectral density (PSD) for each of the images. A visible peak corresponding to the RPE cell spacing was observed for the U-Net, GAN, Pix2Pix, CycleGAN, MedGAN, UP-GAN, P-GAN, and averaged images. The vertical line indicates the approximate location of the fundamental spatial frequency associated with the RPE cell spacing. The height of the peak (defined as peak distinctiveness (PD)) indicates the RPE cellular contrast, measured as the difference in the log PSD between the peak and the local minimum to the left of the peak (inset in (j)). Scale bar: 50 µm.
Interestingly, although this ring was not readily apparent in the single speckled image (inset in Fig. 3a), it was present in all the recovered images, reinforcing our observation of the potential of AI to decipher the true pattern of the RPE mosaic from the speckled images. Furthermore, the radius of the ring, representative of the approximate cell spacing (computed from the peak frequency of the circumferentially averaged PSD) (Quantification of cell spacing and contrast in Methods), showed consistency among the different methods (shown by the black vertical line along the peak of the circumferentially averaged PSD in Fig. 3j and Table 1), indicating high fidelity of the recovered cells in comparison to the averaged images.
The height of the local peak of the circumferentially averaged power spectra (which we defined as peak distinctiveness) provided an opportunity to objectively quantify the degree to which cellular contrast was enhanced. Among the different AI methods, the peak distinctiveness achieved by P-GAN was closest to that of the averaged images, with a minimal absolute error of 0.08 compared to ~0.16 for the other methods (Table 1), which agrees with our earlier results indicating the improved performance of P-GAN. In particular, P-GAN achieved a 3.54-fold contrast enhancement over the speckled images (0.46 for P-GAN compared with 0.13 for the speckled images). These observations demonstrate P-GAN's effectiveness in boosting cellular contrast in addition to structural and perceptual similarity.
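Both quantities are straightforward to compute. The numpy sketch below shows one way to obtain a circumferentially averaged PSD and the peak distinctiveness as defined in the caption (the height of the peak above the nearest local minimum to its left, in log PSD); the peak-search band and the conversion from peak radius to physical cell spacing (which requires the pixel scale) are left as assumptions. Note that the reported 3.54-fold enhancement is simply the ratio of the two peak-distinctiveness values, 0.46 / 0.13.

```python
# Sketch: circumferentially averaged PSD and peak distinctiveness.
import numpy as np

def radial_psd(image):
    """Circumferentially (radially) averaged power spectral density."""
    f = np.fft.fftshift(np.fft.fft2(image - image.mean()))
    psd = np.abs(f) ** 2
    h, w = psd.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - h // 2, x - w // 2).astype(int)
    # Average the PSD over all pixels at each integer radius.
    return np.bincount(r.ravel(), psd.ravel()) / np.bincount(r.ravel())

def peak_distinctiveness(log_psd, peak_idx):
    """Height of the peak above the nearest local minimum to its left."""
    i = peak_idx
    while i > 0 and log_psd[i - 1] < log_psd[i]:
        i -= 1
    return log_psd[peak_idx] - log_psd[i]

rng = np.random.default_rng(0)
log_psd = np.log10(radial_psd(rng.random((256, 256))) + 1e-12)
# In practice the peak is searched near the expected RPE spatial frequency;
# this band is an arbitrary stand-in for the random demo data.
peak_idx = 5 + int(np.argmax(log_psd[5:60]))
print(peak_distinctiveness(log_psd, peak_idx))
```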
Having demonstrated the efficacy and reliability of P-GAN on test data, we evaluated its performance on experimental data from never-seen human eyes (Supplementary Table 1), which, to the best of our knowledge, covered the largest extent of AO-OCT imaged RPE cells reported (63 overlapping locations per eye). This feat was made possible by the AI-enhanced AO-OCT approach developed and validated in this paper. Using the P-GAN approach, in our hands, it took 30 min (including time needed for rest breaks) to acquire single volumes from 63 separate retinal locations, compared to only 4 non-overlapping locations imaged over nearly the same duration using the repeated averaging process (a 15.8-fold increase in the number of locations). Scaling up the averaging approach from 4 to 63 locations would have required nearly 6 h to acquire the same amount of RPE data (note that this does not include any data processing time), which is not readily achievable in clinical practice. This fundamental limitation explains why AO-OCT RPE imaging is currently performed only on a small number of retinal locations12,13.
Leveraging P-GAN's ability to successfully recover cellular structures from never-seen experimental data, we stitched together overlapping recovered RPE images to construct montages of the RPE mosaic (Fig. 4 and Supplementary Fig. 5). To further validate the accuracy of the recovered RPE images, we also created ground truth averaged images by acquiring 120 volumes from four of these locations per eye (12 locations total) (Experimental data for RPE assessment from the recovered images in Methods). The AI-enhanced and averaged images for the experimental data at the 12 locations were similar in appearance (Supplementary Fig. 6). Objective assessment using PieAPP, DISTS, LPIPS, and FID also showed good agreement with the averaged images (shown by comparable objective scores for experimental data in Supplementary Table 7 and test data in Supplementary Table 5) at these locations, confirming our previous results and illustrating the reliability of performing RPE recovery at other never-seen locations as well (P-GAN was trained using images obtained from up to 4 retinal locations across all participants). The cell spacing estimated using the circumferentially averaged PSD between the recovered and the averaged images (Supplementary Fig. 7 and Supplementary Table 8) at the 12 locations showed an error of 0.6 ± 1.1 µm (mean ± SD). We further compared the RPE cell spacing from the montages of the recovered RPE from the three participants (S2, S6, and S7) with previously published in vivo studies (obtained using different imaging modalities) and histological values (Fig. 5)12,46,47,48,49,50,51. Considering the range of values in Fig. 5, the metric exhibited inter-participant variability, with cell spacing varying up to 0.5 µm across participants at any given retinal location. Nevertheless, our measurements were overall within the expected range of the published normative data12,46,47,48,49,50,51. Finally, peak distinctiveness computed at 12 retinal locations of the montages demonstrated similar or better performance of P-GAN compared to the averaged images in improving cellular contrast (Supplementary Table 8).
The image shows the visualization of the RPE mosaic using the P-GAN recovered images (this montage was manually constructed from up to 63 overlapping recovered RPE images from the left eye of participant S2). The white squares (a–e) indicate regions that are further magnified for better visualization at retinal locations a 0.3 mm, b 0.8 mm, c 1.3 mm, d 1.7 mm, and e 2.4 mm temporal to the fovea, respectively. Additional examples of montages from two additional participants are shown in Supplementary Fig. 5.
Symbols in black indicate cell spacing estimated from P-GAN recovered images for three participants (S2, S6, and S7) at different retinal locations. For comparison, data in gray denote the mean and standard deviation values from previously published studies (adaptive optics infrared autofluorescence (AO-IRAF)48, adaptive optics optical coherence tomography (AO-OCT)12, adaptive optics with short-wavelength autofluorescence (AO-SWAF)49, and histology46,51).
Voronoi analysis performed on P-GAN and averaged images at 12 locations (Supplementary Fig. 8) resulted in similar shapes and sizes of the Voronoi neighborhoods. Cell spacing computed from the Voronoi analysis (Supplementary Table 9) fell within the expected ranges and showed an average error of 0.5 ± 0.9 µm. These experimental results demonstrate the possibility of using AI to transform the way in which AO-OCT is used to visualize and quantitatively assess the contiguous RPE mosaic across different retinal locations directly in the living human eye.
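For orientation, a Voronoi-based spacing estimate can be sketched in a few lines with scipy: build the diagram from detected cell centers and average the distance between pairs of centers whose Voronoi cells share an edge. Cell-center detection itself, and the usual exclusion of cells touching the image border, are assumed to have been handled elsewhere.

```python
# Sketch: cell spacing from a Voronoi diagram of detected RPE cell centers.
import numpy as np
from scipy.spatial import Voronoi

def voronoi_cell_spacing(centers):
    """centers: (N, 2) array of cell-center coordinates, e.g. in micrometers."""
    vor = Voronoi(centers)
    # ridge_points pairs the input points whose Voronoi cells share an edge.
    pairs = vor.ridge_points
    d = np.linalg.norm(centers[pairs[:, 0]] - centers[pairs[:, 1]], axis=1)
    return d.mean(), d.std()

# Synthetic demo: a jittered grid with ~14 µm spacing stands in for real centers.
rng = np.random.default_rng(0)
grid = np.stack(np.meshgrid(np.arange(20), np.arange(20)), -1).reshape(-1, 2) * 14.0
centers = grid + rng.normal(scale=1.0, size=grid.shape)
print(voronoi_cell_spacing(centers))
```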
The rest is here:
Revealing speckle obscured living human retinal cells with artificial intelligence assisted adaptive optics optical ... - Nature.com
The Hidden Peril of Over-Reliance on Artificial Intelligence – yTech
Summary: As artificial intelligence (AI) becomes more ingrained in everyday life, it's important to consider the ramifications of allowing AI to make decisions on our behalf. While the technology promises numerous advantages, especially in data-intensive fields, the potential for AI to erode our decision-making abilities shouldn't be overlooked.
The steady infiltration of artificial intelligence into various aspects of our lives has raised a multitude of concerns, from privacy violations to biased algorithms. However, one significant danger that might be overshadowed is that AI could weaken our capabilities in making thoughtful and disciplined decisions.
Thoughtful decision-making traditionally comprises a few vital steps that promote understanding and exploring multiple options while considering their trade-offs. This process usually culminates with a well-informed choice, resisting the urge to swiftly conclude without adequate reflection. Regrettably, the convenience of AI shortcuts this regimen, providing solutions devoid of transparent reasoning and stifling our crucial cognitive exercises.
Harnessing the potential of AI without becoming subservient to it involves recognizing the technology's limitations. AI-generated advice, while helpful in certain contexts, is no substitute for personal scrutiny and critical thinking. Societal over-reliance on AI threatens to entrench existing biases and group conformity, neglecting the individual's analytical growth.
Going forward, a balanced approach is essential. While AI can revolutionize sectors such as healthcare and cybersecurity, it's vital to retain the human element in everyday decision-making. Embracing AI's benefits while ensuring that society does not forfeit the fundamental human capacity for thoughtful choice may help prevent an over-dependence that could ultimately render humans less autonomous and wise. Maintaining this delicate balance honors the intrinsic human privilege and duty of discernment and choice, fostering personal and societal advancement.
Artificial Intelligence: The Implications of Outsourcing Decisions
As the prominence of artificial intelligence (AI) in modern life continues to escalate, we are beginning to face the complex consequences of integrating this advanced technology into our daily routines. The AI industry's spread reaches far and wide, touching upon areas like healthcare, where it assists in diagnosing diseases; finance, where it aids in fraud detection; and transportation, through the advent of self-driving vehicles. These applications are just the tip of the iceberg, revealing the immense potential that AI possesses to transform our world.
However, as we marvel at AI's capabilities, we must also ponder the market forecasts that predict its growth. Experts anticipate that the AI sector will witness exponential growth in the coming years. Industries worldwide are slated to spend billions on AI technologies, with healthcare, automotive, and finance sectors leading the charge. The global AI market size is expected to reach new heights, suggesting a future where AI applications become even more omnipresent.
But this anticipated boon comes with a caveat: the issues related to the AI industry and its products. As AI's decision-making capabilities surpass human speed and accuracy, we must confront the ethical and practical challenges it creates. These range from job displacement in the workforce to significant privacy and security concerns. Furthermore, the potential for ingrained biases within AI algorithms poses a severe risk, potentially amplifying existing societal inequalities.
Ethically, there is a burgeoning debate on how much control should be relinquished to AI. Reliance on machine-generated choices could lead to atrophy in human cognitive abilities, specifically in critical thinking and problem-solving skills. As humans become mere supervisors of AI-driven processes, the fear of eroding our decision-making faculties looms large.
To approach these challenges effectively, broad discussions across various sectors are essential. Stakeholders, including technologists, ethicists, policymakers, and the public, must work together to establish guidelines that balance AI advancements with human oversight. Creating transparent AI systems that can explain their reasoning is crucial for fostering trust and understanding.
In fostering this balance, we are reminded of the importance of nurturing human competencies that AI cannot replicate: empathy, moral reasoning, and deep contextual understanding. It is in these uniquely human traits that our dominion over artificial intelligence must be maintained.
To explore the latest AI news and insights, you can visit IBM Watson and DeepMind, leaders in the AI space.
Maintaining a measured perspective is key to leveraging AI effectively while safeguarding our inherent ability to make thoughtful decisions. The challenge lies in embracing the convenience of AI without becoming dependent on it, thereby preserving the crucial human element in an increasingly automated world.
View post:
The Hidden Peril of Over-Reliance on Artificial Intelligence - yTech
Artificial intelligence in NY’s courts? Panel will study benefits and potential risks. – Gothamist
The New York state court system has established an advisory panel to study the potential benefits and risks of how artificial intelligence is utilized in court.
The Advisory Committee on Artificial Intelligence and the Courts will be made up of judges, court administrators, attorneys, academics and other experts from around New York. They will examine the use of AI tools in the courts by judges, court staff, attorneys and litigants, and identify how it could be used to improve the administration of justice while minimizing risks. The group will also be charged with developing appropriate guardrails to ensure AI is used safely.
Chief Administrative Judge Joseph A. Zayas, who is tasked with overseeing the day-to-day operation of the statewide court system, announced the formation of the committee in a press release on Thursday.
"While these are incredibly exciting times, with AI showing tremendous promise for transforming court operations, improving court experiences for all users, and greatly expanding access to justice, we have to move cautiously in considering the adoption of the use of AI tools," Zayas said in a statement. "The New York State Courts must aspire to the effective, responsible, and impartial use of AI, taking every step possible to guard against bias and the lack of human input, and to ensure that all security and privacy concerns are protected."
Initially, the 39-member panel will focus on studying and then recommending AI training, determining how to ensure AI use is equitable and assessing the ethical implications of using AI tools, according to the press release.
Among those appointed to the panel are NYU Law School Director and professor Jason Schultz, who will serve as one of three co-chairs, and Manhattan District Attorney Alvin Bragg.
The creation of the committee comes as elected officials in the state and across the country grapple with how to handle the growing use of AI. Earlier this year, Gov. Kathy Hochul announced her commitment to putting New York State at the cutting-edge of AI research, which included a proposal to create a consortium to create an AI computing center in Upstate New York.
More:
Artificial intelligence in NY's courts? Panel will study benefits and potential risks. - Gothamist
1 Stealthy Artificial Intelligence (AI) Stock That Could Be Huge – sharewise
There is something special about seeing events live, whether you like sports, concerts, or shows. I recently visited T-Mobile Arena in Las Vegas to watch the Vegas Golden Knights take on the Detroit Red Wings, and there was something different about getting into the arena. I'll explain below.
A large part of any event is security. Sadly, mass shootings have skyrocketed over the past 20 years. New York City has also seen a surge in violence on the subways, so it is turning to this company to help keep people safe.
Whatever our personal experience, we are all at least indirectly affected by unfortunate incidents when we go through tight security and long lines. However, traditional metal detector security has problems -- problems that Evolv Technologies (NASDAQ: EVLV) is trying to solve using artificial intelligence (AI) technology.
See the original post:
1 Stealthy Artificial Intelligence (AI) Stock That Could Be Huge - sharewise
AI makes retinal imaging 100 times faster, compared to manual method – National Institutes of Health (NIH) (.gov)
News Release
Wednesday, April 10, 2024
NIH scientists use artificial intelligence called P-GAN to improve next-generation imaging of cells in the back of the eye.
Researchers at the National Institutes of Health applied artificial intelligence (AI) to a technique that produces high-resolution images of cells in the eye. They report that with AI, imaging is 100 times faster and improves image contrast 3.5-fold. The advance, they say, will provide researchers with a better tool to evaluate age-related macular degeneration (AMD) and other retinal diseases.
"Artificial intelligence helps overcome a key limitation of imaging cells in the retina, which is time," said Johnny Tam, Ph.D., who leads the Clinical and Translational Imaging Section at NIH's National Eye Institute.
Tam is developing a technology called adaptive optics (AO) to improve imaging devices based on optical coherence tomography (OCT). Like ultrasound, OCT is noninvasive, quick, painless, and standard equipment in most eye clinics.
"Adaptive optics takes OCT-based imaging to the next level," said Tam. "It's like moving from a balcony seat to a front-row seat to image the retina. With AO, we can reveal 3D retinal structures at cellular-scale resolution, enabling us to zoom in on very early signs of disease."
While adding AO to OCT provides a much better view of cells, processing AO-OCT images after they've been captured takes much longer than OCT without AO.
Tam's latest work targets the retinal pigment epithelium (RPE), a layer of tissue behind the light-sensing retina that supports the metabolically active retinal neurons, including the photoreceptors. The retina lines the back of the eye and captures, processes, and converts the light that enters the front of the eye into signals that it then transmits through the optic nerve to the brain. Scientists are interested in the RPE because many diseases of the retina occur when the RPE breaks down.
Imaging RPE cells with AO-OCT comes with new challenges, including a phenomenon called speckle. Speckle interferes with AO-OCT the way clouds interfere with aerial photography. At any given moment, parts of the image may be obscured. Managing speckle is somewhat similar to managing cloud cover. Researchers repeatedly image cells over a long period of time. As time passes, the speckle shifts, which allows different parts of the cells to become visible. The scientists then undertake the laborious and time-consuming task of piecing together many images to create an image of the RPE cells that's speckle-free.
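The effect of averaging can be illustrated with a toy numerical model: if speckle is treated, to first order, as multiplicative noise that decorrelates between acquisitions, averaging N registered frames pulls the result toward the underlying structure roughly as 1/sqrt(N). The sketch below is purely illustrative and is not the NIH pipeline.

```python
# Toy model of speckle suppression by averaging independent realizations.
import numpy as np

rng = np.random.default_rng(0)
truth = 1.0 + 0.5 * np.sin(np.linspace(0, 8 * np.pi, 256))  # stand-in cell pattern

def averaged_frames(n_frames):
    # First-order speckle model: multiplicative, unit-mean exponential noise.
    frames = truth * rng.exponential(1.0, size=(n_frames, truth.size))
    return frames.mean(axis=0)

for n in (1, 10, 120):  # 120 matches the number of volumes averaged in the study
    err = np.abs(averaged_frames(n) - truth).mean()
    print(f"{n:3d} frames -> mean abs error {err:.3f}")
```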
Tam and his team developed a novel AI-based method called parallel discriminator generative adversarial network (P-GAN), a deep learning algorithm. By feeding the P-GAN network nearly 6,000 manually analyzed AO-OCT-acquired images of human RPE, each paired with its corresponding speckled original, the team trained the network to identify and recover speckle-obscured cellular features.
When tested on new images, P-GAN successfully de-speckled the RPE images, recovering cellular details. With one image capture, it generated results comparable to the manual method, which required the acquisition and averaging of 120 images. On a variety of objective performance metrics that assess things like cell shape and structure, P-GAN outperformed other AI techniques. Vineeta Das, Ph.D., a postdoctoral fellow in the Clinical and Translational Imaging Section at NEI, estimates that P-GAN reduced imaging acquisition and processing time by about 100-fold. P-GAN also yielded greater contrast, about 3.5-fold greater than before.
By integrating AI with AO-OCT, Tam believes that a major obstacle for routine clinical imaging using AO-OCT has been overcome, especially for diseases that affect the RPE, which has traditionally been difficult to image.
"Our results suggest that AI can fundamentally change how images are captured," said Tam. "Our P-GAN artificial intelligence will make AO imaging more accessible for routine clinical applications and for studies aimed at understanding the structure, function, and pathophysiology of blinding retinal diseases. Thinking about AI as a part of the overall imaging system, as opposed to a tool that is only applied after images have been captured, is a paradigm shift for the field of AI."
More news from the NEI Clinical and Translational Imaging Section.
This press release describes a basic research finding. Basic research increases our understanding of human behavior and biology, which is foundational to advancing new and better ways to prevent, diagnose, and treat disease. Science is an unpredictable and incremental process each research advance builds on past discoveries, often in unexpected ways. Most clinical advances would not be possible without the knowledge of fundamental basic research. To learn more about basic research, visit https://www.nih.gov/news-events/basic-research-digital-media-kit.
NEI leads the federal government's efforts to eliminate vision loss and improve quality of life through vision research, driving innovation, fostering collaboration, expanding the vision workforce, and educating the public and key stakeholders. NEI supports basic and clinical science programs to develop sight-saving treatments and to broaden opportunities for people with vision impairment. For more information, visit https://www.nei.nih.gov.
About the National Institutes of Health (NIH): NIH, the nation's medical research agency, includes 27 Institutes and Centers and is a component of the U.S. Department of Health and Human Services. NIH is the primary federal agency conducting and supporting basic, clinical, and translational medical research, and is investigating the causes, treatments, and cures for both common and rare diseases. For more information about NIH and its programs, visit http://www.nih.gov.
NIH…Turning Discovery Into Health®
Vineeta Das, Furu Zhang, Andrew Bower, et al. Revealing speckle obscured living human retinal cells with artificial intelligence assisted adaptive optics optical coherence tomography. Communications Medicine. April 10, 2024, https://doi.org/10.1038/s43856-024-00483-1.
###
Report: Sam Altman Seeks $1 Billion to Fund AI Hardware Device – PYMNTS.com
How much will it cost to build Sam Altman's planned artificial intelligence-powered personal device?
At least $1 billion, according to a report by The Information, which listed that amount as what Altman, CEO of OpenAI, and his partner, Jony Ive, Apple's former design guru, are seeking from investors.
While the precise nature of the device is unclear, it will not resemble a smartphone, the report said. The effort was first reported last fall, but the latest story indicated that discussions with Thrive Capital and venture capital group Emerson Collective are proceeding to move the company forward.
Writing about the effort last year, PYMNTS likened the project to the way Apple's business model has always revolved around a close integration between hardware and software.
"This approach allowed Apple to control both the device and the operating system, ensuring a consistent and user-friendly experience, as well as facilitating the rise of subscription-based pricing models for apps and content, allowing users to pay on a recurring basis for access to premium services or content," the October report said. "Similar pricing and operating model evolutions are already taking place in the AI ecosystem."
The steep cost of AI, fueled mainly by the computing power AI models need, is a reality that businesses need to face to stay competitive. Estimates from analysts showed that Microsoft's Bing AI chatbot, powered by OpenAI, requires at least $4 billion of infrastructure to perform its tasks.
"Managing these costs could lead to the development of new business models, or the transformation of existing ones, as businesses look to pass on both costs and cost-savings to end-users through dynamic pricing strategies," PYMNTS wrote.
In a separate report, PYMNTS noted that the AI landscape is growing more crowded and competitive, making it tougher for companies to get their own AI products into the hands of consumers.
Building an AI hardware device may give OpenAI an edge, the September report said. The firm currently relies on Apple and Android phones to run its apps, and browsers housed in a variety of computer manufacturers' casings to run its software.
Continued here:
Report: Sam Altman Seeks $1 Billion to Fund AI Hardware Device - PYMNTS.com
How Artificial Intelligence Is Fueling Incel Communities – The Daily Beast
In late January 2024, X was flooded with graphic, deepfaked images of Taylor Swift. While celebrities have long been the victims of photo leaks and cyber-attacks, this time it was different because these were generated using artificial intelligence.
The images were quickly reported by the "Shake It Off" singer's fanbase and taken down after being live on the poster's profile for less than a day. However, it was enough time for them to go viral despite the platform having policies against non-consensual nudity. A report from disinformation research firm Graphika later found that the images had been created on 4chan, where users encouraged each other to generate sexually charged deepfakes of famous female celebrities in an attempt to skirt content policies surrounding nudity.
Unfortunately, Swift's experience isn't a one-off. Marvel actress Xochitl Gomez, who is only 17 years old at the time of reporting, said on the podcast "The Squeeze" that she struggled to get deepfakes of her taken down from X and shared the mental impact that had on her. Gomez and Swift are just two of the countless women who've recently become victims of deepfakes depicting them in sexual ways.
"People have always used media to try and defame people, that hasn't changed. What's changed is how accessible it's now gotten," Siwei Lyu, professor of Computer Science at the University of Buffalo, told The Daily Beast.
Late last year, AI image generation platform CivitAI became popular for its Bounties feature, which encouraged users to create deepfakes in exchange for virtual rewards. Almost all the bounties created were of women, according to reporting from 404 Media. Some included women who weren't celebrities or public figures either, but rather private citizens.
Experts expect it to only get worse, especially as more and more incel communities online use these technologies. Henry Ajder, an AI and deepfake adviser and expert, told The Daily Beast that this has been a growing problem for years now and that CivitAI is an example of a platform heavily linked to that kind of evolution.
He said that CivitAI has become a hotbed not just for artistically created content, but also for content that's erotic: "It's a specific place to find specific knowledge, and people have started using it for pornographic content."
Ajder also describes the technology on the platform as "agnostic" or "dual use," saying "once it's there it can be used in any way," while others "are explicitly designed for creating pornographic content without consent." The tools have only gotten popular within incel culture via platforms like Reddit and 4chan.
"There's such a low threshold," Hera Husain, founder of Chayn, a nonprofit supporting victims of gender-based violence and trauma, told The Daily Beast. "It's an easy-to-access method which allows people to fulfill the darkest fantasies they may have. [...] They may feel it is victimless, but it has huge consequences for those people."
It's not just deepfakes that have penetrated incel culture either. There's even research that shows that AI girlfriends will be making incels even more dangerous. With this tech allowing them to form and control their perceptions of a so-called ideal woman, there's a danger that they may push those perceptions on real women. When they find themselves unable to do so, or when a woman seems unattainable like in the case of Swift or Gomez, incels begin deepfake campaigns. At least, then, incels can make these women do what they like.
"Governments are simply trying to play catch-up; the technology has gone faster than their ability to regulate," Belinda Barnet, senior lecturer in media at Swinburne University, told The Daily Beast.
This gets even more dangerous in global contexts. Patriarchal norms in different nations often further endanger women who become victims of such campaigns. In many more conservative countries, even a deepfake of a woman can be enough for her family to ostracize her or, in extreme cases, use violence against her. For example, in late 2023, an 18-year-old was killed by her father over an image of her with a man, which police suspect was doctored.
"It doesn't matter that the image is fake. The fact that their image is associated with such a depiction is enough for society to ostracize them. It's not so much about people believing the images are real as it is about pure spite. It's a different kind of trauma to revenge porn," Ajder explained.
With AI generation becoming more accessible, the barrier to entry is also lower for global incels who may have struggled with language barriers. In South Asia, where Husain focuses much of her work, it also becomes harder to counter incel radicalization, both socially and on a policy level. "They don't have as strong a counter to the radicalization they're seeing in the incel community," she explained.
Lyu says that policies regarding free speech and tech access vary across the world, so there can be different impacts. "In the U.S., using AI generation tools to create content... is freedom of speech, but people can take advantage of that as well. Drawing that line becomes very hard. Whereas in China, there's very strong limitations on the use of this technology, so that is possible but prevents positive uses of the same line of technology."
Incel culture existed long before AI generation tools became popular. Now that they're mainstream, these communities will be quick to adopt them to further cause harm and trauma. The issue is sure to get worse before it gets better.
"In terms of incel culture, this is another weapon in their twisted arsenal to abuse women, perpetuate stereotypes, and further make visceral the twisted ideas they have about women," Ajder said.
Read the original post:
How Artificial Intelligence Is Fueling Incel Communities - The Daily Beast
Video: Where Bitcoin and Artificial Intelligence Meet – Bloomberg
The halving, a preordained event in the code of Bitcoin that happens every four years, is upon us again. Once it occurs, perhaps as soon as this month, the reward every miner receives for mining the digital asset is immediately cut in half.
"There will be a day when miners come to work and they mine roughly half the number of Bitcoin they mined the day before," says Tyler Page, chief executive of Cipher Mining Technologies Inc. "The halving is a natural phenomenon in Bitcoin that disciplines the entire market and forces it to become more efficient." As it turns out, each time it's happened in the past, Bitcoin prices eventually hit a new record. Still, the event comes as some miners are looking for a hedge, specifically by branching out into artificial intelligence. In the mini-documentary "Where Bitcoin and AI Meet," Bloomberg Originals explains how the two hottest technologies of the 21st century are coming together.
Read more:
Video: Where Bitcoin and Artificial Intelligence Meet - Bloomberg
Experts discuss misinformation, artificial intelligence, grassroots solutions at panel – The Brown Daily Herald
Misinformation experts discussed social media, algorithms and artificial intelligence at a Tuesday panel hosted by The Information Futures Lab.
Titled "Everything We Know (And Don't Know) About Tackling Rumors and Conspiracies," the panel was moderated by Claire Wardle, a co-director of the IFL and a professor of the practice of health services, policy and practice.
Despite its societal impact, research on media misinformation remains a young field, according to Stefanie Friedhoff, another co-director of the IFL and an associate professor of the practice of health services, policy and practice.
Having worked as a senior policy advisor on the White House COVID-19 Response Team, she later contributed to a literature review on pandemic misinformation interventions: a topic she discussed at the panel.
"We're significantly understudying this," Friedhoff said, citing a lack of longitudinal research on non-American and video-based misinformation. "We don't have a lot of useful evidence to apply in the field, and we need to work on that."
Evelyn Pérez-Verdia, founder of We Are Más, a strategic consulting firm, spoke about her work to combat misinformation at the panel. She aims to empower Spanish-speaking diasporas in South Florida through community-based trust-building: recently, she has worked with the IFL as a fellow to conduct a survey of information needs in Florida.
According to Pérez-Verdia, non-English-speaking and immigrant communities are prone to misinformation because of language and cultural barriers. When people are offered accessible resources, she argues, communities become empowered and less susceptible to misinformation. "People are hungry for information," she said.
Abbie Richards, another panelist and senior video producer at Media Matters for America, a watchdog journalism organization, identified social media algorithms as an exacerbating factor. In a video shown during the panel, Richards highlighted the proliferation of misleading or inaccurate content on platforms like TikTok. As a video producer, she looks to distill research and discourse on this topic for audiences who "wouldn't necessarily read research papers," she said.
She researched AI-generated content on social media, which is often designed to take advantage of the various platforms' monetization policies. "There's a monetization aspect behind this content," Richards elaborated.
"Algorithms are designed to show (users) what they want to see and what they'll engage with," she said. "When viewers feel disempowered it makes it really easy to gravitate towards misinformation."
When discussing AI-generated misinformation that is designed to be entertaining, Friedhoff noted that "only some of us have the luxury to laugh at misinformation."
"But from the perspective of somebody behind the paywall, who doesn't necessarily speak English, factual information becomes increasingly difficult to access," she added. She describes this as "misinformation inequities," which all speakers acknowledged existed in their projects.
In an interview with The Herald, Friedhoff and Wardle emphasized how the online information ecosystem connects different types of misinformation. Vaccine skepticism, Wardle said, is a slippery slope towards climate change denial: "We have to understand as researchers and practitioners that we can't think in silos."
Many of the speakers agreed that misinformation spreads in part because people tend to prioritize relationships, both real-life and parasocial, over fact. "There's nothing more powerful than someone you trust and close to you," Pérez-Verdia said.
Richards said emotional literacy is the backbone to navigating both AI and misinformation. This includes teaching people how to recognize (confirmation bias) within themselves and understanding common misinformation techniques.
When asked to offer potential solutions, the speakers offered a range of responses. Richards suggested a marketing campaign for federal agencies to facilitate increased governmental literacy, allowing all citizens to understand how the government functions. Pérez-Verdia also identified diverse and culturally conscientious government messaging as key, while Friedhoff recommended creating community conversations to explore perspectives rather than further polarizing them.
Audience member Benjy Renton, a research associate at the School of Public Health, was inspired by community-based approaches like Pérez-Verdia's work: "It was great to see the diverse range of perspectives on misinformation."
The speakers told The Herald that they found each other's perspectives enlightening. "I'm somebody that people feel like they can go to because I've spent years talking about (misinformation)," Richards said in an interview with The Herald after the event. "But the idea of how you measure (trust) is fully beyond me."
Pérez-Verdia ended the discussion by reiterating that the fight against misinformation is founded on teamwork: "When you look at all of these pieces, the women here, a collaboration where we all have our individual gifts, that's exactly what needs to be done on a larger spectrum."
The rest is here:
Experts discuss misinformation, artificial intelligence, grassroots solutions at panel - The Brown Daily Herald
The potential for artificial intelligence to transform healthcare: perspectives from international health leaders – Margolis Center for Health Policy
Artificial intelligence (AI) has the potential to transform care delivery by improving health outcomes, patient safety, and the affordability and accessibility of high-quality care. AI will be critical to building an infrastructure capable of caring for an increasingly aging population, utilizing an ever-increasing knowledge of disease and options for precision treatments, and combatting workforce shortages and burnout of medical professionals. However, we are not currently on track to create this future. This is in part because the health data needed to train, test, use, and surveil these tools are generally neither standardized nor accessible. There is also universal concern about the ability to monitor health AI tools for changes in performance as they are implemented in new places, used with diverse populations, and over time as health data may change. The Future of Health (FOH), an international community of senior health care leaders, collaborated with the Duke-Margolis Institute for Health Policy to conduct a literature review, expert convening, and consensus-building exercise around this topic. This commentary summarizes the four priority action areas and recommendations for health care organizations and policymakers across the globe that FOH members identified as important for fully realizing AI's potential in health care: improving data quality to power AI, building infrastructure to encourage efficient and trustworthy development and evaluations, sharing data for better AI, and providing incentives to accelerate the progress and impact of AI.
Artificial intelligence (AI) has the potential to transform care delivery by improving health outcomes, patient safety, and the affordability and accessibility of high-quality care. AI will be critical to building an infrastructure capable of caring for an increasingly aging population, utilizing an ever-increasing knowledge of disease and options for precision treatments, and combatting workforce shortages and burnout of medical professionals. However, we are not currently on track to create this future. This is in part because the health data needed to train, test, use, and surveil these tools are generally neither standardized nor accessible. There is also universal concern about the ability to monitor health AI tools for changes in performance as they are implemented in new places, used with diverse populations, and over time as health data may change. The Future of Health (FOH), an international community of senior health care leaders, collaborated with the Duke-Margolis Institute for Health Policy to conduct a literature review, expert convening, and consensus-building exercise around this topic. This commentary summarizes the four priority action areas and recommendations for health care organizations and policymakers across the globe that FOH members identified as important for fully realizing AIs potential in health care: improving data quality to power AI, building infrastructure to encourage efficient and trustworthy development and evaluations, sharing data for better AI, and providing incentives to accelerate the progress and impact of AI.