
Sliding Out of My DMs: Young Social Media Users Help Train … – Drexel University

In a first-of-its-kind effort, social media researchers from Drexel University, Vanderbilt University, Georgia Institute of Technology and Boston University are turning to young social media users to help build a machine learning program that can spot unwanted sexual advances on Instagram. Trained on data from more than 5 million direct messages annotated and contributed by 150 adolescents who had experienced conversations that made them feel sexually uncomfortable or unsafe, the technology can quickly and accurately flag risky DMs.

The project, which was recently published by the Association for Computing Machinery in its Proceedings of the ACM on Human-Computer Interaction, is intended to address concerns that an increase in teens using social media, particularly during the pandemic, is contributing to rising trends of child sexual exploitation.

In 2020 alone, the National Center for Missing and Exploited Children received more than 21.7 million reports of child sexual exploitation, a 97% increase over the year prior. "This is a very real and terrifying problem," said Afsaneh Razi, PhD, an assistant professor in Drexel's College of Computing & Informatics, who was a leader of the research.

Social media companies are rolling out new technology that can flag and remove sexually exploitative images and help users more quickly report these illegal posts. But advocates are calling for greater protection for young users that could identify and curtail these risky interactions sooner.

The group's efforts are part of a growing field of research looking at how machine learning and artificial intelligence can be integrated into platforms to help keep young people safe on social media, while also ensuring their privacy. Its most recent project stands apart for its collection of a trove of private direct messages from young users, which the team used to train a machine learning-based program that is 89% accurate at detecting sexually unsafe conversations among teens on Instagram.

"Most of the research in this area uses public datasets, which are not representative of real-world interactions that happen in private," Razi said. "Research has shown that machine learning models based on the perspectives of those who experienced the risks, such as cyberbullying, provide higher performance in terms of recall. So, it is important to include the experiences of victims when trying to detect the risks."

Each of the 150 participants, who ranged in age from 13 to 21, had used Instagram for at least three months between the ages of 13 and 17, exchanged direct messages with at least 15 people during that time, and had at least two direct messages that made them or someone else feel uncomfortable or unsafe. They contributed their Instagram data (more than 15,000 private conversations) through a secure online portal designed by the team, and were then asked to review their messages and label each conversation as safe or unsafe, according to how it made them feel.

"Collecting this dataset was very challenging due to the sensitivity of the topic and because the data is being contributed by minors in some cases," Razi said. "Because of this, we drastically increased the precautions we took to preserve the confidentiality and privacy of the participants and to ensure that the data collection met high legal and ethical standards, including reporting child abuse and the possibility of uploads of potentially illegal artifacts, such as child abuse material."

The participants flagged 326 conversations as unsafe and, in each case, they were asked to identify what type of risk it presented (nudity/porn, sexual messages, harassment, hate speech, violence/threat, sale or promotion of illegal activities, or self-injury) and the level of risk they felt (high, medium or low).

This level of user-generated assessment provided valuable guidance when it came to preparing the machine learning programs. Razi noted that most social media interaction datasets are collected from publicly available conversations, which are much different than those held in private. And they are typically labeled by people who were not involved with the conversation, so it can be difficult for them to accurately assess the level of risk the participants felt.

"With self-reported labels from participants, we not only detect sexual predators but also assessed the survivors' perspectives of the sexual risk experience," the authors wrote. "This is a significantly different goal than attempting to identify sexual predators. Built upon this real-user dataset and labels, this paper also incorporates human-centered features in developing an automated sexual risk detection system."

Specific combinations of conversation and message features were used as the input of the machine learning models. These included contextual features, like age, gender and relationship of the participants; linguistic features, such as word count, the focus of questions, or topics of the conversation; sentiment (whether it was positive, negative or neutral); how often certain terms were used; and whether or not a set of 98 pre-identified sexual-related words were used.
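As a rough illustration of how such features might be combined, the sketch below assembles a per-conversation feature vector from contextual, linguistic, sentiment and lexicon-based signals. The field names and the tiny word lists are placeholders, not the study's actual feature set or lexicon.

```python
# Minimal sketch: assembling per-conversation features for a risk classifier.
# Field names and the tiny lexicons below are illustrative placeholders only.
from collections import Counter

SEXUAL_TERMS = {"nudes", "sexy", "pic"}          # stand-in for the 98-word lexicon
POSITIVE = {"thanks", "lol", "great"}
NEGATIVE = {"stop", "gross", "leave"}

def conversation_features(messages, age, gender, relationship):
    """messages: list of lowercase message strings from one conversation."""
    tokens = [t for m in messages for t in m.split()]
    counts = Counter(tokens)
    n_tokens = max(len(tokens), 1)
    pos = sum(counts[w] for w in POSITIVE)
    neg = sum(counts[w] for w in NEGATIVE)
    return {
        # contextual features
        "age": age,
        "gender_female": int(gender == "female"),
        "relationship_stranger": int(relationship == "stranger"),
        # linguistic features
        "word_count": n_tokens,
        "question_ratio": sum(m.count("?") for m in messages) / len(messages),
        "sentiment": (pos - neg) / n_tokens,
        # lexicon feature
        "sexual_term_ratio": sum(counts[w] for w in SEXUAL_TERMS) / n_tokens,
    }

example = conversation_features(
    ["hey", "send me a pic", "stop asking"],
    age=16, gender="female", relationship="stranger",
)
print(example)
```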

This allowed the machine learning programs to designate a set of attributes of risky conversations, and thanks to the participants' assessments of their own conversations, the program could also rank the relative level of risk.

The team put its model to the test against a large set of public sample conversations created specifically for sexual predation risk-detection research. The best performance came from its Random Forest classifier program, which can rapidly assign features to sample conversations and compare them to known sets that have reached a risk threshold. The classifier accurately identified 92% of unsafe sexual conversations from the set. It was also 84% accurate at flagging individual risky messages.
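A minimal sketch of how a Random Forest classifier could be trained and evaluated on such feature vectors is shown below, using scikit-learn on synthetic data. It illustrates the general workflow only, not the paper's actual pipeline or the accuracy figures reported above.

```python
# Minimal sketch: training and evaluating a Random Forest on conversation features.
# The data here are synthetic stand-ins for the feature vectors sketched earlier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
X = rng.random((1000, 7))                              # 7 features per conversation
y = (X[:, 6] + 0.3 * X[:, 5] > 0.8).astype(int)        # toy "unsafe" label

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)

clf = RandomForestClassifier(n_estimators=300, class_weight="balanced", random_state=0)
clf.fit(X_tr, y_tr)

print(classification_report(y_te, clf.predict(X_te), target_names=["safe", "unsafe"]))
# Feature importances hint at which signals drive "unsafe" predictions.
print(clf.feature_importances_)
```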

By incorporating its user-labeled risk assessment training, the models were also able to tease out the most relevant characteristics for identifying an unsafe conversation. "Contextual features, such as age, gender and relationship type, as well as linguistic inquiry and word count, contributed the most to identifying conversations that made young users feel unsafe," they wrote.

This means that a program like this could be used to automatically warn users, in real-time, when a conversation has become problematic, as well as to collect data after the fact. Both of these applications could be tremendously helpful in risk prevention and the prosecution of crimes, but the authors caution that their integration into social media platforms must preserve the trust and privacy of the users.

"Social service providers find value in the potential use of AI as an early detection system for risks, because they currently rely heavily on youth self-reports after a formal investigation had occurred," Razi said. "But these methods must be implemented in a privacy-preserving manner to not harm the trust and relationship of the teens with adults. Many parental monitoring apps are privacy invasive since they share most of the teen's information with parents, and these machine learning detection systems can help with minimal sharing of information and guidelines to resources when it is needed."

They suggest that if the program is deployed as a real-time intervention, then young users should be provided with a suggestion rather than an alert or automatic report and they should be able to provide feedback to the model and make the final decision.

While the groundbreaking nature of its training data makes this work a valuable contribution to the field of computational risk detection and adolescent online safety research, the team notes that it could be improved by expanding the size of the sample and looking at users of different social media platforms. The training annotations for the machine learning models could also be revised to allow outside experts to rate the risk of each conversation.

The group plans to continue its work and to further refine its risk detection models. It has also created an open-source community to safely share the data with other researchers in the field, recognizing how important it could be for the protection of this vulnerable population of social media users.

"The core contribution of this work is that our findings are grounded in the voices of youth who experienced online sexual risks and were brave enough to share these experiences with us," they wrote. "To the best of our knowledge, this is the first work that analyzes machine learning approaches on private social media conversations of youth to detect unsafe sexual conversations."

This research was supported by the U.S. National Science Foundation and the William T. Grant Foundation.

In addition to Razi, Ashwaq Alsoubai and Pamela J. Wisniewski, from Vanderbilt University; Seunghyun Kim and Munmun De Choudhury, from Georgia Institute of Technology; and Shiza Ali and Gianluca Stringhini, from Boston University, contributed to the research.

Read the full paper here: https://dl.acm.org/doi/10.1145/3579522

Read the original:
Sliding Out of My DMs: Young Social Media Users Help Train ... - Drexel University


Levi’s and JCPenney Bolster Leadership Team, Tapping Kenny … – Retail Info Systems News

Levi Strauss & Co. and JCPenney are looking to bolster their executive leadership, naming a new SVP and CMO and a chief customer officer, respectively.

At Levi's, Kenny Mitchell is taking on the role of senior vice president and chief marketing officer, overseeing the company's consumer marketing strategies and focusing on growing the brand's market share.

Mitchell, who has more than 20 years of brand-building and digital experience across global markets, will take on the role beginning June 5, reporting to Levi's president, Michelle Gass. He is coming to Levi's from Snap, the parent company of social media platform Snapchat, where he has been chief marketing officer since 2019, leading the company's global community, advertising, and developer partner growth.

Previously, Mitchell worked with McDonald's USA as its vice president of brand content and engagement, managing the company's brand and consumer marketing strategy. He has also worked with PepsiCo's Gatorade as head of consumer engagement.

"I am thrilled to join a values-led company like LS&Co. and grateful for the opportunity to work alongside their enormously talented teams to help expand the reach and strength of the Levi's brand," Mitchell said in a statement. "I have long admired the enduring global relevance of Levi's as both a quintessential apparel brand and cultural icon. It is an honor to be part of shaping the future of the greatest story ever worn."

According to Gass, Mitchell has been a widely recognized innovation leader and talent builder across the marketing space, with an impressive track record of growing global brands and pioneering digital marketing strategies to accelerate value creation.

"It is especially fitting to have someone of his exceptional caliber join our Levi's team in this milestone year, further positioning us for long-term growth and operational success as we celebrate the 150th anniversary of the 501 jean and the 170th year of the company's founding," Gass added. "With Kenny onboard, I have full confidence in our ability to continue earning our place at the center of culture and building our global community of Levi's fans."

Originally posted here:

Levi's and JCPenney Bolster Leadership Team, Tapping Kenny ... - Retail Info Systems News


How AI, automation, and machine learning are upgrading clinical trials – Clinical Trials Arena

Artificial intelligence (AI) is set to be the most disruptive emerging technology in drug development in 2023, unlocking advanced analytics, enabling automation, and increasing speed across the clinical trial value chain.

Today's clinical trials landscape is being shaped by macro trends that include the Covid-19 pandemic, geopolitical uncertainty, and climate pressures. Meanwhile, advancements in adaptive design, personalisation and novel treatments mean that clinical trials are more complex than ever. Sponsors seek greater agility and faster time to commercialisation while maintaining quality and safety in an evolving global market. Across every stage of clinical research, AI offers optimisation opportunities.

A new whitepaper from digital technology solutions provider Taimei examines the transformative impact of AI on the clinical trials of today and explores how it will shape the future.

"The big delay areas are always patient recruitment, site start-up, querying, data review, and data cleaning," explains Scott Clark, chief commercial officer at Taimei.

Patient recruitment is typically the most time-consuming stage of a clinical trial. Sponsors must find and identify a set of subjects, gather information, and use inclusion/exclusion criteria to filter and select participants. And high-quality patient recruitment is vital to a trial's success.

Once patients are recruited, they must be managed effectively. Patient retention has a direct impact on the quality of the trial's results, so their management is crucial. In today's clinical trials, these patients can be distributed over more than a hundred sites and across multiple geographies, presenting huge data management challenges for sponsors.

AI can be leveraged across patient recruitment and management to boost efficiency, quality, and retention. Algorithms can gather subject information and screen and filter potential participants. They can analyse data sources such as medical records and even social media content to detect subgroups and geographies that may be relevant to the trial. AI can also alert medical staff and patients to clinical trial opportunities.
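As a simple illustration of the screening step described above, the sketch below filters a candidate table against hypothetical inclusion/exclusion criteria with pandas. The column names and thresholds are made up for the example, not drawn from any specific protocol.

```python
# Minimal sketch: screening trial candidates against inclusion/exclusion criteria.
# Column names and thresholds are hypothetical, not from any specific protocol.
import pandas as pd

candidates = pd.DataFrame({
    "subject_id":  [101, 102, 103, 104],
    "age":         [54, 71, 38, 66],
    "egfr":        [82, 45, 90, 61],           # kidney function measure
    "prior_chemo": [False, True, False, False],
})

inclusion = candidates["age"].between(40, 75) & (candidates["egfr"] >= 60)
exclusion = candidates["prior_chemo"]

eligible = candidates[inclusion & ~exclusion]
print(eligible[["subject_id", "age", "egfr"]])
```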

The result? Faster, more efficient patient recruitment, with the ability to reach more diverse populations and more relevant participants, as well as increase quality and retention. "[Using AI], you can develop the correct cohort," explains Clark. "It's about accuracy, efficiency, and safety."

Study build can be a laborious and repetitive process. Typically, data managers must read the study protocol and generate as many as 50-60 case report forms (CRFs). Each trial has different CRF requirements. CRF design and database building can take weeks and has a direct impact on the quality and accuracy of the clinical trial.

Enter AI. Automated text reading can parse, categorise, and stratify corpora of words to automatically generate eCRFs and the data capture matrix. "In study building, AI is able to read the protocols and pull the best CRF forms for the best outcomes," adds Clark.

It can then use the data points from the CRFs to build the study base, creating the whole database in a matter of minutes rather than weeks. The database is structured for export to the biostatisticians' programming. AI can then facilitate the analysis of data and develop all of the required tables, listings and figures (TLFs). It can even come to a conclusion on the outcomes, pending review.

Optical character recognition (OCR) can address structured and unstructured native documents. Using built-in edit checks, AI can reduce the timeframe for study build from ten weeks to just one, freeing up data managers' time. "We are able to do up to 168% more edit checks than are done currently in the human manual process," says Clark. AI can also automate remote monitoring to identify outliers and suggest the best route of action, to be taken with approval from the project manager.
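The kind of automated edit check described here can be approximated with simple rule-based validation. The sketch below flags out-of-range and malformed values in a toy eCRF table; the field names, ranges, and query messages are illustrative assumptions, not Taimei's implementation.

```python
# Minimal sketch: rule-based edit checks over captured eCRF data.
# Field names, ranges, and query messages are illustrative assumptions.
import pandas as pd

crf = pd.DataFrame({
    "subject_id":  [1, 2, 3],
    "systolic_bp": [118, 260, None],                     # mmHg
    "visit_date":  ["2023-03-01", "2023-02-30", "2023-03-05"],
})

queries = []
for _, row in crf.iterrows():
    if pd.isna(row["systolic_bp"]):
        queries.append((row["subject_id"], "systolic_bp missing"))
    elif not 60 <= row["systolic_bp"] <= 220:
        queries.append((row["subject_id"], f"systolic_bp out of range: {row['systolic_bp']}"))
    if pd.isna(pd.to_datetime(row["visit_date"], errors="coerce")):
        queries.append((row["subject_id"], f"invalid visit_date: {row['visit_date']}"))

# In practice, queries would be routed to a data manager for review and resolution.
for subject, message in queries:
    print(subject, message)
```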

AI data management is flexible, agile, and robust. Using electronic data capture (EDC) removes the need to manage paper-based documentation. This is essential for modern clinical trials, which can present huge amounts of unstructured data thanks to the rise of advances such as decentralisation, wearables, telemedicine, and self-reporting.

"Once the trial is launched, you can use AI to do automatic querying and medical coding," says Clark. When there's a piece of data that doesn't make sense or is not coded, AI can flag it and provide suggestions automatically. "The data manager just reviews what it's corrected," adds Clark. "That's a big time-saver." By leveraging AI throughout data input, sponsors also cut out the lengthy process of data cleaning at the end of a trial.

Implementing AI means establishing the proof of concept, building a customised knowledge base, and training the model to solve the problem on a large scale. Algorithms must be trained on large amounts of data to remove bias and ensure accuracy. Today, APIs enable best-in-class advances to be integrated into clinical trial applications.

By taking repetitive tasks away from human personnel, AI accelerates the time to market for life-saving drugs and frees up man-hours for more specialist tasks. By analysing past and present trial data, AI can be used to inform future research, with machine learning able to suggest better study design. In the long term, AI has the potential to shift the focus away from trial implementation and towards drug discovery, enabling improved treatments for patients who need them.

To find out more, download the whitepaper below.

Read the original post:
How AI, automation, and machine learning are upgrading clinical trials - Clinical Trials Arena


Application of Machine Learning in Cybersecurity – Read IT Quik

The most crucial aspect of every business is its cybersecurity, which helps ensure the security and safety of its data. Artificial intelligence and machine learning are in high demand and are changing the cybersecurity industry as a whole. Cybersecurity may benefit greatly from machine learning, which can be used to improve available antivirus software, identify cyber dangers, and battle online crime. With the increasing sophistication of cyber threats, companies are constantly looking for innovative ways to protect their systems and data. Machine learning is one emerging technology that is making waves in cybersecurity. Cybersecurity professionals can now detect and mitigate cyber threats more effectively by leveraging artificial intelligence and machine learning algorithms. This article will delve into key areas where machine learning is transforming the security landscape.

One of the biggest challenges in cybersecurity is accurately identifying legitimate connection requests and suspicious activities within a company's systems. With thousands of requests pouring in constantly, human analysis can fall short. This is where machine learning can play a crucial role. AI-powered cyber threat identification systems can monitor incoming and outgoing calls and requests to the system to detect suspicious activity. For instance, there are many companies that offer cybersecurity software that utilizes AI to analyze and flag potentially harmful activities, helping security professionals stay ahead of cyber threats.

Traditional antivirus software relies on known virus and malware signatures to detect threats, requiring frequent updates to keep up with new strains. However, machine learning can revolutionize this approach. ML-integrated antivirus software can identify viruses and malware based on their abnormal behavior rather than relying solely on signatures. This enables the software to detect not only known threats but also newly created ones. For example, companies like Cylance have developed smart antivirus software that uses ML to learn how to detect viruses and malware from scratch, reducing the dependence on signature-based detection.

Cyber threats can often infiltrate a company's network by stealing user credentials and logging in with legitimate credentials, which can be challenging to detect with traditional methods. However, machine learning algorithms can analyze user behavior patterns to identify anomalies. By training the algorithm to recognize each user's standard login and logout patterns, any deviation from these patterns can trigger an alert for further investigation. For instance, Darktrace offers cybersecurity software that uses ML to analyze network traffic information and identify abnormal user behavior patterns.
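One common way to implement this kind of behavioral anomaly detection is an unsupervised model trained on a user's historical login features. The sketch below uses scikit-learn's IsolationForest on made-up login-hour, device, and location data; it is a schematic of the technique, not any vendor's product.

```python
# Minimal sketch: flagging anomalous logins from a user's historical behavior.
# Features (login hour, new-device flag, distance from usual location) are illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
# Historical logins: mostly daytime hours, known device, near the usual location.
history = np.column_stack([
    rng.normal(13, 2.5, 500),          # login hour of day
    rng.integers(0, 2, 500) * 0.1,     # new-device flag (rare, lightly weighted)
    rng.normal(5, 3, 500),             # km from usual location
])

model = IsolationForest(contamination=0.01, random_state=1).fit(history)

new_logins = np.array([
    [14.0, 0.0, 4.0],        # ordinary afternoon login
    [3.5, 1.0, 4200.0],      # 3:30am, new device, far away
])
print(model.predict(new_logins))   # 1 = looks normal, -1 = anomalous, worth an alert
```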

Machine learning offers several advantages in the field of cyber security. First and foremost, it enhances accuracy by analyzing vast amounts of data in real time, helping to identify potential threats promptly. ML-powered systems can also adapt and evolve as new threats emerge, making them more resilient against rapidly growing cyber-attacks. Moreover, ML can provide valuable insights and recommendations to cybersecurity professionals, helping them make informed decisions and take proactive measures to prevent cyber threats.

As cyber threats continue to evolve, companies must embrace innovative technologies like machine learning to strengthen their cybersecurity defenses. Machine learning is transforming the cybersecurity landscape with its ability to analyze large volumes of data, adapt to new threats, and detect anomalies in user behavior. By leveraging the power of AI and ML, companies can stay ahead of cyber threats and safeguard their systems and data. Embrace the future of cybersecurity with machine learning and ensure the protection of your company's digital assets.

Go here to see the original:
Application od Machine Learning in Cybersecurity - Read IT Quik


An M.Sc. computer science program in RUNI, focusing on machine learning – The Jerusalem Post

The M.Sc. program in Machine Learning & Data Science at the Efi Arazi School of Computer Science aims to provide a deep theoretical understanding of machine learning and data-driven methods as well as a strong proficiency in using these methods. As part of this unique program, students with solid exact science backgrounds, but not necessarily computer science backgrounds, are trained to become data scientists. Headed by Prof. Zohar Yakhini and PhD candidate Ben Galili, the program gives students the opportunity to become skilled and knowledgeable data scientists by preparing them with fundamental theoretical and mathematical understanding, as well as endowing them with the scientific and technical skills necessary to be creative and effective in these fields. The program offers courses in statistics and data analysis, machine-learning courses at different levels, as well as unique electives such as a course in recommendation systems and one on DNA and sequencing technologies.

M.Sc. student Guy Assa, preparing DNA for sequencing on a nanopore device, in Prof. Noam Shomron's DNA sequencing class, part of the elective curriculum (Credit: private photo)

In recent years, data science methodologies have become a foundational language and a main development tool for science and industry. Machine learning and data-driven methods have developed considerably and now penetrate almost all areas of modern life. The vision of a data-driven world presents many exciting challenges to data experts in diverse fields of application, such as medical science, life science, social science, environmental science, finance, economics, and business.

Graduates of the program are successful in becoming data scientists in Israeli hi-tech companies. Lior Zeida Cohen, a graduate of the program, says: "After earning a BA degree in Aerospace Engineering from the Technion and working as an engineer and later leading a control systems development team, I sought out a graduate degree program that would allow me to delve deeply into the fields of Data Science and Machine Learning while also allowing me to continue working full-time. I chose to pursue the ML & Data Science Program at Reichman University. The program provided in-depth study in both the theoretical and practical aspects of ML and Data Science, including exposure to new research and developments in the field. It also emphasized the importance of learning the fundamental concepts necessary for working in these domains. In the course of completing the program, I began work at Elbit Systems as an algorithms developer in a leading R&D group focusing on AI and Computer Vision. The program has greatly contributed to my success in this position."

As a part of the curriculum, the students carry out collaborative research projects with both external and internal collaborators, in Israel and around the world. One active collaboration is with the Leibniz Institute for Tropospheric Research (TROPOS) in Leipzig, Germany. In this collaboration, the students, led by Prof. Zohar Yakhini and Dr. Shay Ben-Elazar, a Principal Data Science and Engineering Manager at Microsoft Israel, as well as Dr. Johannes Bühl from TROPOS, are using data science and machine learning tools to infer properties of stratospheric layers from sensory device data. The models developed in the project provide inference from simple devices that achieves an accuracy close to that obtained through much more expensive measurements. This improvement is enabled through the use of neural network models (deep learning).

Results from the TROPOS project: a significant improvement in inference accuracy. Left panel: actual atmospheric status as obtained from the more expensive measurements (Lidar + Radar). Middle panel: predicted status as inferred from Lidar measurements using physical models. Right panel: status determined by the deep learning model developed in the project.

Additional collaborations include a number of projects with Israeli hospitals such as Sheba Tel Hashomer, Beilinson Hospital, and Kaplan Medical Center, as well as with the Israel Nature and Parks Authority and with several hi-tech companies.

PhD candidate Ben Galili, Academic Director of Machine Learning and Data Science Program (Credit: private photo)

Several research and thesis projects led by students in the program address data analysis questions related to spatial biology, the study of molecular biology processes in their larger spatial context. One project, led by student Guy Attia and supervised by Dr. Leon Anavy, addressed imputation methods for spatial transcriptomics data. A second, led by student Efi Herbst, aims to expand the inference scope of spatial transcriptomics data into molecular properties that are not directly measured by the technology.

According to Maya Kerem, a recent graduate, "the MA program taught me a number of skills that would enable me to easily integrate into a new company based on the knowledge I gained. I believe that this program is particularly unique because it always makes sure that the learnings are applied to industry-related problems at the end of each module. This is a hands-on program at Reichman University, which is what drew me to enroll in this MA program."


This article was written in cooperation with Reichman University

See the rest here:
An M.Sc. computer science program in RUNI, focusing on machine learning - The Jerusalem Post


New Machine Learning Parameterization Tested on Atmospheric … – Eos

Editors' Highlights are summaries of recent papers by AGU's journal editors. Source: Journal of Advances in Modeling Earth Systems

Atmospheric models must represent processes on spatial scales spanning many orders of magnitude. Although small-scale processes such as thunderstorms and turbulence are critical to the atmosphere, most global models cannot explicitly resolve them due to computational expense. In conventional models, heuristic estimates of the effect of these processes, known as parameterizations, are designed by experts. A recent line of research uses machine learning to create data-driven parameterizations directly from very high-resolution simulations that require fewer assumptions.

Yuval and O'Gorman [2023] provide the first such example of a neural network parameterization of the effects of subgrid processes on the vertical transport of momentum in the atmosphere. A careful approach is taken to generate a training dataset, accounting for subtle issues in the horizontal grid of the high-resolution model. The new parameterization generally improves the simulation of winds in a coarse-resolution model, but also over-corrects and leads to larger biases in one configuration. The study serves as a complete and clear example for researchers interested in the application of machine learning for parameterization.
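For readers unfamiliar with the approach, the sketch below shows the general shape of such a data-driven parameterization: a small neural network regressor mapping a coarse-grid column state to a subgrid momentum tendency, trained on pairs that would normally be coarse-grained from a storm-resolving simulation. The data here are synthetic stand-ins; this is a schematic of the idea, not the architecture or dataset used by Yuval and O'Gorman.

```python
# Schematic sketch of a machine-learning subgrid parameterization:
# learn coarse-grid column state -> subgrid momentum tendency from high-res training data.
# Synthetic stand-in data; real training pairs come from coarse-graining a high-res run.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_samples, n_levels = 5000, 30

# Inputs: vertical profiles of zonal wind and temperature on the coarse grid.
u = rng.normal(0, 10, (n_samples, n_levels))
t = rng.normal(250, 15, (n_samples, n_levels))
X = np.hstack([u, t])

# Target: per-level subgrid momentum tendency (a made-up nonlinear function of shear).
shear = np.diff(u, axis=1, prepend=u[:, :1])
y = -0.05 * shear * np.abs(shear) + rng.normal(0, 0.01, shear.shape)

net = MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=300, random_state=0)
net.fit(X, y)

# In an online test, the trained network would be called every time step of the coarse model.
print(net.predict(X[:2]).shape)   # (2, 30): one tendency profile per column
```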

Citation: Yuval, J., & O'Gorman, P. A. (2023). Neural-network parameterization of subgrid momentum transport in the atmosphere. Journal of Advances in Modeling Earth Systems, 15, e2023MS003606. https://doi.org/10.1029/2023MS003606

Oliver Watt-Meyer, Associate Editor, JAMES


Original post:
New Machine Learning Parameterization Tested on Atmospheric ... - Eos


Activating vacation mode: Utilizing AI and machine learning in your … – TravelDailyNews International


Say the words "dream vacation" and everyone will picture something different. This brings a particular challenge to the modern travel marketer, especially in a world of personalization, when all travelers are looking for their own unique experiences. Fortunately, artificial intelligence (AI) provides a solution that allows travel marketers to draw upon a variety of sources when researching the best ways to connect with potential audiences.

By utilizing and combining data from user-generated content, transaction history and other online communications, AI and machine-learning (ML) solutions can help to give marketers a customer-centric approach, while successfully accounting for the vast diversity amongst their consumer base.

AI creates significant value for travel brands, which is why 48% of business executives are likely to invest in AI and automation in customer interactions over the next two years, according to Deloitte. Using AI and a data-driven travel marketing strategy, you can predict behaviors and proactively market to your ideal customers. There are as many AI solutions in the market as there are questions that require data, so choosing the right one is important.

For example, a limited-memory AI solution can skim a review site, such as TripAdvisor, to determine the most popular destinations around a major travel season, like summertime. Or, a chatbot can speak directly with visitors to your site, and aggregate their data to give brands an idea on what prospective consumers are looking for. Other solutions offer predictive segmentation, which can separate consumers based on their probability of taking action, categorize your leads and share personalized outreach on their primary channels. Delivering personalized recommendations are a major end goal for AI solutions in the travel industry. For example, Booking.com utilizes a consumers search history to determine whether they are traveling for business or leisure and provide recommendations accordingly.
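Predictive segmentation of the kind described above often amounts to propensity scoring: a classifier estimates each lead's probability of taking an action and the marketer buckets leads by that score. Below is a minimal sketch with logistic regression on invented features and thresholds; it illustrates the idea only, not any specific vendor's product.

```python
# Minimal sketch: predictive segmentation as propensity scoring.
# Features, labels, and segment thresholds are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
# Features per lead: past bookings, email opens in last 30 days, days since last visit.
X = np.column_stack([
    rng.poisson(1.0, 2000),
    rng.poisson(3.0, 2000),
    rng.integers(0, 365, 2000),
])
# Toy label: whether the lead booked after the last campaign.
logit = 0.8 * X[:, 0] + 0.3 * X[:, 1] - 0.01 * X[:, 2] - 1.5
y = (rng.random(2000) < 1 / (1 + np.exp(-logit))).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)
scores = model.predict_proba(X)[:, 1]

# Bucket leads into segments that marketing can target differently.
segments = np.select([scores >= 0.7, scores >= 0.3], ["hot", "warm"], default="cold")
print(dict(zip(*np.unique(segments, return_counts=True))))
```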

A major boon of today's AI and machine-learning solutions is their ability to monitor and inform users of ongoing behavioral trends. For example, who could have predicted the popularity of hotel day passes for remote workers as little as three years ago? Or the growing consumer desire for sustainable toiletries? Trends change every year (or, more accurately, every waking hour), so having a tool that can stay ahead of the next big thing is essential.

In an industry where every element of the customer's experience (travel costs, hotels, activities) is meticulously planned, delivering personalized experiences is critical to maintaining a customer's interest. Consumers want personalization. As Google reports, 90% of leading marketers indicate that personalization significantly contributes to business profitability.

Particularly in the travel field, where there are as many consumer preferences as there are destinations on a map, personalization is essential in order to gain travelers' attention. AI capabilities can solve common traveler frustrations, further enhancing the consumer experience. Natural language processors can skim through review sites, gathering the generalized sentiment from prior reviews and determining common complaints that may arise. Through these analyses of a range of sources from across a consumer's journey, you can catch problems before they start.
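A heavily simplified stand-in for the review mining described here is a lexicon-based pass over review text that tallies sentiment and surfaces recurring complaint terms. A production system would use a trained NLP model; the word lists below are placeholders that only show the general flow.

```python
# Toy sketch: lexicon-based sentiment and complaint mining over review snippets.
# The word lists are placeholders; a real system would use a trained NLP model.
from collections import Counter

POSITIVE = {"clean", "friendly", "great", "comfortable"}
NEGATIVE = {"dirty", "noisy", "rude", "broken", "slow"}

reviews = [
    "Great pool but the room was noisy and the wifi was slow",
    "Friendly staff, clean lobby, comfortable bed",
    "Check-in was slow and the shower was broken",
]

complaints = Counter()
score = 0
for review in reviews:
    words = set(review.lower().split())
    score += len(words & POSITIVE) - len(words & NEGATIVE)
    complaints.update(words & NEGATIVE)

print("overall sentiment score:", score)
print("most common complaints:", complaints.most_common(3))
```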

For travel marketers already dealing with a diverse audience, and with a need for personalization to effectively stand out amongst the competition, AI and ML solutions can effectively help you plan and execute personalized outreach, foster brand loyalty and optimize the consumer experience. With AI working behind the scenes, your customers can look forward to fun in the sun, on the slopes, or wherever their destination may be.

Janine Pollack is the Executive Director, Growth & Content, and self-appointed Storyteller in Chief at MNI Targeted Media. She leads the brand's commitment to generating content that informs and inspires. Her scope of work includes strategy and development for Fortune Knowledge Group's thought leadership programs and launching Fortune's The Most Powerful Woman podcast. She is proud to have partnered with The Hebrew University on the inaugural Nexus: Israel program, featuring worldwide luminaries. Janine has also written lifetime achievements for Sports Business Journal. She earned her master's from the Northwestern University Medill School of Journalism and B.A. from The American University in Washington, D.C.

Read the original post:
Activating vacation mode: Utilizing AI and machine learning in your ... - TravelDailyNews International


A novel CT image de-noising and fusion based deep learning … – Nature.com

SARS-CoV-2, known as coronavirus, causes COVID-19. It is an infectious disease first discovered in China in December 2019 [1,2,3]. The World Health Organization (WHO) has declared it a pandemic. Figure 1 shows its detailed structure [3]. This new virus quickly spread throughout the world, and it is believed to have been transmitted to humans from a zoonotic source. COVID-19's main clinical features are cough, sore throat, muscle pain, fever, and shortness of breath [4,5]. Normally, RT-PCR is used for COVID-19 detection. CT and X-ray also have vital roles in early and quick detection of COVID-19 [6]. However, RT-PCR has a low sensitivity of about 60%-70%, and sometimes even negative results are obtained [7,8]. It has been observed that CT is a sensitive approach to detecting COVID-19, and it may be the best screening means [9].

Artificial intelligence and its subsets play a significant role in medicine and have recently expanded their prominence by being used as tools to assist physicians [10,11,12]. Deep learning techniques have also produced prominent results in many detection tasks, such as skin cancer detection, breast cancer detection, and lung segmentation [13,14]. However, due to limited resources and radiologists, providing clinicians to every hospital is a difficult task. Consequently, automatic AI or machine learning methods are required to mitigate these issues. They can also reduce waiting time and test cost by removing the need for RT-PCR kits. However, thorough pre-processing of CT images is necessary to achieve the best results. Poisson or Impulse noise introduced during the acquisition of these images could have seriously damaged the image information [15]. To make post-processing tasks like object categorization and segmentation easier, it is essential to recover this lost information. Various filtering algorithms have been proposed in the past to de-blur and de-noise images. The Standard Median Filter (SMF) is one of the most often used non-linear filters [16].
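For context, the standard median filter mentioned above can be applied in a few lines. The sketch below corrupts a synthetic image with salt-and-pepper (Impulse) noise and de-noises it with SciPy's median filter; it illustrates the classical SMF baseline, not the adaptive method proposed in the paper.

```python
# Minimal sketch: classical 3x3 median filtering of an impulse-noise-corrupted image.
# This is the standard SMF baseline, not the adaptive method proposed in the paper.
import numpy as np
from scipy.ndimage import median_filter

rng = np.random.default_rng(0)
image = np.full((128, 128), 0.5)
image[32:96, 32:96] = 0.9                      # a bright square as toy structure

# Add salt-and-pepper (Impulse) noise to 20% of the pixels.
noisy = image.copy()
mask = rng.random(image.shape) < 0.20
noisy[mask] = rng.choice([0.0, 1.0], size=mask.sum())

denoised = median_filter(noisy, size=3)

mse_noisy = np.mean((noisy - image) ** 2)
mse_denoised = np.mean((denoised - image) ** 2)
print(f"MSE noisy: {mse_noisy:.4f}, MSE after 3x3 median filter: {mse_denoised:.4f}")
```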

A number of SMF modifications have been proposed, including the Weighted Median and Center Weighted Median (CWM) filters [17,18]. The widely used noise adaptive soft-switching median (NASM) was proposed in [19], which achieved optimal results. However, if the noise density exceeds 50%, the quality of the recovered images degrades significantly. These methods are all non-adaptive and unable to distinguish between edge pixels, uncorrupted pixels, and corrupted pixels. Recent deep learning approaches presented in [20,21,22] perform well in recovering images degraded by fixed-value Impulse noise. However, their efficiency decreases as the noise density increases, and in the reduction of Poisson noise, which normally exists in CT images. Additionally, most of these methods are non-adaptive and fail when recovering Poisson-noise-degraded images. In the first phase of this study, layer discrimination with max/min intensity elimination and an adaptive filtering window is proposed, which can handle CT images corrupted by high-density Impulse and Poisson noise. The proposed method has shown superior performance both visually and statistically.

Different deep learning methods are being utilized to detect COVID-19 automatically. To detect COVID-19 in CT scans, a deep learning model employing the COVIDX-Net model, which consists of seven CNN models, was developed. This model has higher sensitivity and specificity and can detect COVID-19 with 91.7% accuracy [23]. Reference [24] shows a deep learning model which obtains 92.4% accuracy in detection of COVID-19. A ResNet50 model was proposed in [25], which achieved 98% accuracy. All of these trials, nevertheless, took more time to diagnose and did not produce the best outcomes because of information loss during the acquisition process. There are many studies on detection of COVID-19 that employ machine learning models with CT images [26,27,28,29]. A study presented in [30] proposes two different approaches, with two systems each, to diagnose tuberculosis from two datasets. In this study, the PCA algorithm was initially employed to reduce the feature dimensionality, aiming to extract the deep features. Then, an SVM algorithm was used for classifying the features. This hybrid approach achieved an accuracy of 99.2%, a sensitivity of 99.23%, a specificity of 99.41%, and an AUC of 99.78%. Similarly, a study presented in [31] utilizes different noise reduction techniques and compares the results by qualitative visual inspection and quantitative parameters like Peak Signal-to-Noise Ratio (PSNR), Correlation Coefficient (Cr), and system complexity to determine the optimal denoising algorithm to be applied universally. However, these techniques manipulate all pixels whether they are contaminated by noise or not. An automated deep learning approach to detect COVID-19 from Computed Tomography (CT) scan images is proposed in [32]. In this method, anisotropic diffusion techniques are used to de-noise the images and then a CNN model is employed to train the dataset. Finally, different models including AlexNet, ResNet50, VGG16 and VGG19 were evaluated in the experiments. This method worked well and achieved higher accuracy; however, when the images were contaminated with higher noise density, its performance suffered. Similarly, the authors in [33] used four powerful pre-trained CNN models, VGG16, DenseNet121, ResNet50, and ResNet152, for the COVID-19 CT-scan binary classification task. In this method, a FastAI ResNet framework was designed to automatically find the best architecture using CT images. Additionally, transfer learning techniques were used to overcome the long training time. This method achieved a higher F1 score of 96%. A deep learning method to detect COVID-19 using chest X-ray images was presented in [34]. A dataset of 10,040 samples was used in this study. This model has a detection accuracy of 96.43% and a sensitivity of 93.68%; however, its performance dramatically decreases with higher-density Poisson noise. A convolutional neural network approach for pneumonia-based binary classification using VGG-19, Inception_V2, and a decision tree model was presented in [35]. In this study, an X-ray and CT scan image dataset containing 360 images was used for COVID-19 detection. According to the findings, VGG-19 illustrates higher performance, with an accuracy of 91%, than Inception_V2 (78%) and decision tree (60%) models.

In this paper, a paradigm for automatic COVID-19 screening that is based on assessment fusion is proposed. The effectiveness and efficiency of all baseline models were improved by our proposed model, which utilized the majority voting prediction technique to eliminate the mistakes of individual models. The proposed AFM model only needs chest X-ray images to diagnose COVID-19 accurately and rapidly.
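Majority-vote fusion of several classifiers is straightforward to express. The sketch below combines the hard predictions of a few stand-in models and shows how the ensemble can cancel out individual mistakes; it is only a schematic of the voting step, not the paper's AFM pipeline.

```python
# Schematic sketch: majority-vote fusion of several base classifiers' predictions.
# The three "model" prediction arrays are stand-ins for outputs of trained CNNs.
import numpy as np

# Binary predictions (1 = COVID-19, 0 = normal) for 8 images from three base models.
preds = np.array([
    [1, 0, 1, 1, 0, 0, 1, 0],   # model A
    [1, 0, 0, 1, 0, 1, 1, 0],   # model B
    [1, 1, 1, 1, 0, 0, 0, 0],   # model C
])

# Majority vote: the fused label is 1 when at least two of the three models agree on 1.
fused = (preds.sum(axis=0) >= 2).astype(int)
print(fused)   # errors made by only one model are voted away
```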

The rest of the paper is organized as follows: the dataset is explained in section "Material and methods"; section "Proposed method" explains our proposed approach; section "Results and discussion" presents empirical results and analysis; and section "Conclusion" describes the conclusion and the specific contributions, along with future directions for improving the efficiency of the proposed work.

The rest is here:
A novel CT image de-noising and fusion based deep learning ... - Nature.com


Knowledge Graphs: The Dream of a Knowledge Network – SAP News Center

The eighth largest defense contractor in the U.S., SAP customer L3 Technologies is embracing innovation and making it part of its corporate culture, according to Heidi Wood, senior vice president of Strategy and Operations.

In an interview with CXO Talk's Michael Krigsman, Wood explains what it takes to become data-driven and radically transparent.


"You have to embrace innovation. You have to make that part of your corporate culture. You have to encourage risk taking because that's a necessary and frequently not enough spoken about element of innovation, which is the willingness to take risks, the willingness to be bold, put yourself out there, and be courageous," Wood tells Krigsman when asked about the driving forces responsible for transformation. "The way I like to describe it is, we took all of the different systems that we have, and we piped them together into a fused system. It helps us come back to better decisions. Together, we can move with speed because all of us are seeing it at the same time and it's based on fact, not anecdotes."

"You want to show your better parts," Wood adds. "But you kind of get to a stage where everybody gets comfortable with, look, this is the truth, this is where we're really, really at. It enables more collective contributions because people can see the areas that are ailing and say, 'Well, I've got some guys that can help with this thing that you're working on,' because now we can see that that area needs work."

"I think one of the exciting things about IT is that you actually have an angle where IT is helping change the culture of a company," she concludes.

Watch the complete interview to hear more about L3 and how the company is working toward a data-driven and radically transparent organization.

The rest is here:
Knowledge Graphs: The Dream of a Knowledge Network - SAP News Center


The Ethics of AI: Navigating the Future of Intelligent Machines – KDnuggets

Depending on your background, everybody has a different opinion on artificial intelligence and its future. Some believed that it was just another fad that would die out soon, whilst others believed there was huge potential to implement it into our everyday lives.

At this point, it's clear that AI is having a big impact on our lives and is here to stay.

With recent advancements in AI technology, such as ChatGPT, and autonomous systems, such as Baby AGI, we can count on the continuous advancement of artificial intelligence in the future. It is nothing new. It's the same drastic change we saw with the arrival of computers, the internet, and smartphones.

A few years ago, a survey was conducted with 6,000 customers in six countries, in which only 36% of consumers were comfortable with businesses using AI and 72% expressed that they had some fear about the use of AI.

While this is very interesting, it can also be concerning. Although we expect more to come in the future regarding AI, the big question is: "What are the ethics around it?"

The most rapidly developing and most widely implemented area of AI is machine learning. It allows models to learn and improve from past experience by exploring data and identifying patterns with little human intervention. Machine learning is used in many sectors, from finance to healthcare. We have virtual assistants such as Alexa, and now we have large language models such as ChatGPT.

So how do we determine the ethics around these AI applications, and how it will affect the economy and society?

There are a few ethical concerns surrounding AI:

1. Bias and Discrimination

Although data is the new oil and we have a lot of it, there are still concerns about AI being biased and discriminatory with the data it has. For example, the use of facial recognition applications has proven to be highly biased and discriminatory toward certain ethnic groups, such as people with darker skin tones.

Although some of these facial recognition applications had high racial and gender bias, companies such as Amazon refused to stop selling the product to the government in 2018.

2. Privacy

Another concern around the use of AI applications is privacy. These applications require a vast amount of data to produce accurate outputs and have high performance. However, there are concerns regarding data collection, storage, and use.

3. Transparency

Although AI applications are supplied with data, there is high concern about the transparency of how these AI applications come to their decisions. The creators of these AI applications deal with a lack of transparency, raising the question of who should be held accountable for the outcome.

4. Autonomous Applications

We have seen the birth of Baby AGI, an autonomous task manager. Autonomous applications have the ability to make decisions without the help of a human. This naturally opens the public's eyes to decisions being left to technology, which could be deemed ethically or morally wrong in society's eyes.

5. Job security

This concern has been an ongoing conversation since the birth of artificial intelligence. With more and more people seeing that technology can do their jobs, such as ChatGPT creating content and potentially replacing content creators, what are the social and economic consequences of implementing AI into our everyday lives?

In April 2021, the European Commission published its proposed legislation on the use of AI, the AI Act. The act aimed to ensure that AI systems met fundamental rights and provided users and society with trust. It contained a framework that grouped AI systems into four risk areas: unacceptable risk, high risk, limited risk, and minimal or no risk. You can learn more about it here: European AI Act: The Simplified Breakdown.

Other countries such as Brazil also passed a bill in 2021 that created a legal framework around the use of AI. Therefore, we can see that countries and continents around the world are looking further into the use of AI and how it can be ethically used.

The fast advancements in AI will have to align with the proposed frameworks and standards. Companies that are building or implementing AI systems will have to follow ethical standards and conduct an assessment of the application to ensure transparency and privacy and to account for bias and discrimination.

These frameworks and standards will need to focus on data governance, documentation, transparency, human oversight, and robust, accurate, cyber-safe AI systems. If companies fail to comply, they will, unfortunately, have to deal with fines and penalties.

The launch of ChatGPT and the development of general-purpose AI applications have prompted scientists and politicians to establish a legal and ethical framework to avoid any potential harm or impact of AI applications.

This year alone, many papers have been released on the use of AI and the ethics surrounding it, for example, Assessing the Transatlantic Race to Govern AI-Driven Decision-Making through a Comparative Lens. We will continue to see more and more papers released until governments publish a clear and concise framework for companies to implement.

Nisha Arya is a Data Scientist, Freelance Technical Writer and Community Manager at KDnuggets. She is particularly interested in providing Data Science career advice or tutorials and theory based knowledge around Data Science. She also wishes to explore the different ways Artificial Intelligence is/can benefit the longevity of human life. A keen learner, seeking to broaden her tech knowledge and writing skills, whilst helping guide others.

The rest is here:
The Ethics of AI: Navigating the Future of Intelligent Machines - KDnuggets

Read More..