
Beyond the Unknown: Applications of Artificial intelligence In Space – Analytics Insight

Artificial intelligence (AI) is rapidly being explored and adopted by many industries for a wide array of applications. Today, it is creating a string of opportunities in space industry use cases as well. As artificial intelligence emerges as a popular theme in space exploration, it is also being deployed for many critical tasks.

For instance, scientists have leveraged artificial intelligence for charting unmarked galaxies, supernovas, stars, and black holes, and for studying cosmic events that would otherwise go unnoticed. A recent illustration of this application came when the CHIRP (Continuous High-Resolution Image Reconstruction using Patch Priors) algorithm helped create the first-ever image of a black hole. CHIRP is a Bayesian algorithm used to perform deconvolution on images created in radio astronomy. It used image data from the Event Horizon Telescope for further image processing. Even images from the Hubble Space Telescope are used to simulate galaxy formation and classification using deep learning algorithms.
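To give a flavor of what deconvolution means here, the sketch below applies a simple frequency-domain Wiener filter to recover a point source from a blurred image. This is a much simpler relative of CHIRP's Bayesian patch-prior reconstruction, not the actual algorithm; the image, point-spread function, and regularization constant are all invented for illustration.

```python
import numpy as np

def wiener_deconvolve(blurred, psf, k=0.01):
    """Deconvolve an image given its point-spread function (PSF).

    A frequency-domain Wiener filter: damp frequencies where the PSF
    response is weak instead of dividing by values near zero.
    """
    H = np.fft.fft2(psf, s=blurred.shape)   # PSF transfer function
    G = np.fft.fft2(blurred)
    F = np.conj(H) / (np.abs(H) ** 2 + k) * G
    return np.real(np.fft.ifft2(F))

# Toy example: blur a point source with a 3x3 box kernel, then recover it
img = np.zeros((32, 32)); img[16, 16] = 1.0
psf = np.zeros((32, 32)); psf[:3, :3] = 1.0 / 9
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf)))
restored = wiener_deconvolve(blurred, psf)
```

The restored image peaks sharply back at the original source position, which is the essence of what interferometric imaging pipelines do at vastly larger scale.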

Artificial intelligence also proves resourceful in classifying heavenly bodies, especially exoplanets. A couple of years ago, a research team developed an artificial neural network to classify planets based on whether they resemble present-day Earth, early Earth, Mars, Venus, or Saturn's largest moon, Titan. These five bodies are among the most potentially habitable objects in our solar system and are therefore associated with a certain probability of life.
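The study used a neural network trained on atmospheric spectra; the sketch below illustrates only the shape of that classification task with a far simpler nearest-centroid rule. The class list matches the article, but the "reference spectra" feature vectors are entirely made up for illustration.

```python
import numpy as np

CLASSES = ["present-day Earth", "early Earth", "Mars", "Venus", "Titan"]

# Hypothetical reference "spectra" (absorption strength in three bands)
CENTROIDS = np.array([
    [0.9, 0.7, 0.1],   # present-day Earth: oxygen-rich, wet
    [0.2, 0.8, 0.3],   # early Earth
    [0.1, 0.1, 0.6],   # Mars: thin CO2 atmosphere
    [0.0, 0.2, 0.9],   # Venus: thick CO2
    [0.1, 0.5, 0.2],   # Titan: N2/CH4 haze
])

def classify(spectrum):
    """Return the class whose reference spectrum is closest (L2 distance)."""
    d = np.linalg.norm(CENTROIDS - np.asarray(spectrum), axis=1)
    return CLASSES[int(d.argmin())]

print(classify([0.85, 0.65, 0.15]))  # closest to present-day Earth
```

A real network learns these decision boundaries from data rather than from hand-picked centroids, but the input-to-label mapping is the same idea.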

Regarding life in outer space, researchers at NASA's Frontier Development Lab (FDL) employed generative adversarial networks, or GANs, to create 3.5 million possible permutations of alien life based on signals from Kepler and the European Space Agency's Gaia telescope.

Besides, NASA has teamed up with Google to train its artificial intelligence algorithms to sift through data from the Kepler mission and look for signals from an exoplanet crossing in front of its parent star. With the help of Google's trained model, NASA managed to discover two obscure planets, Kepler-90i and Kepler-80g. In 2019, astronomers from the University of Texas at Austin teamed with Google to use AI to uncover two more hidden planets in the Kepler space telescope archive (Kepler's extended mission, called K2). They used an AI algorithm that sifts through Kepler's data to ferret out signals that were missed by traditional planet-hunting methods. This helped them discover the planets K2-293b and K2-294b.
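The signal these systems hunt for is a periodic dip in a star's brightness as a planet crosses its disk. The toy sketch below flags such dips in a synthetic light curve using a simple robust threshold; the real pipelines and the trained neural networks are far more sophisticated, and every number here is invented.

```python
import numpy as np

rng = np.random.default_rng(0)
flux = 1.0 + 0.001 * rng.standard_normal(1000)   # normalized stellar flux
flux[200:210] -= 0.01                            # injected transit dip
flux[700:710] -= 0.01                            # second transit

baseline = np.median(flux)
# robust noise estimate via the median absolute deviation
sigma = 1.4826 * np.median(np.abs(flux - baseline))
in_transit = flux < baseline - 5 * sigma         # flag 5-sigma dips
```

Both injected transits stand out cleanly against the noise; the appeal of machine learning is catching the shallow, noisy signals that a fixed threshold like this one misses.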

Under the Artificial Intelligence Data Analysis (AIDA) project, funded under the European Horizon 2020 framework, an intelligent system is being developed that can read and process data from space. The key objective of this project is to enable the discovery of new celestial objects using data from NASA.

AI applications can also be found in the field of satellite imagery. Data based on satellite imagery offers insights into several global-scale economic, social, and industrial processes that were previously not observable. Some examples include the Earth Observing-1 (EO-1) satellite, SKICAT, and ENVISAT. These satellites leverage artificial intelligence to provide actionable insights for agencies, governments, and businesses, and help them make accurate decisions.

While humans are capable of interpreting, understanding, and analyzing images collected by satellites, doing so costs time and resources, particularly while waiting for a satellite to move back around to the same position to refine image analysis. Artificial intelligence eliminates the need for large amounts of communication to and from Earth to analyze photos, and helps determine whether a new photo needs to be taken. Moreover, it saves processing power, reduces battery usage, and fast-tracks the image-gathering process.

In the case of space mining, artificial intelligence will augment mining machinery with the intelligence to extract minerals, identify hazards, and solve minor issues at hand without the need for immediate support from humans on Earth. Meanwhile, NASA is also developing a companion for astronauts aboard the ISS, called Robonaut, which will work alongside the astronauts or take on tasks that are too risky for them. According to NASA's blog, Robonaut 2 is slowly approaching human dexterity, implying that tasks like changing out an air filter can be performed without modifications to the existing design.

Artificial intelligence has also helped us develop space humanoids like Kirobo from the Japan Aerospace Exploration Agency, Dextre from the Canadian Space Agency, and AILA from the German Research Center for Artificial Intelligence to help astronauts in space missions. NASA's free-flying robotic system, Astrobee, uses AI to help astronauts reduce their time on routine duties, leaving them to focus more on the things that only humans can do. We also have CIMON (Crew Interactive Mobile Companion), an AI-powered robot that floats through the zero-gravity environment of the space station to research a database of information about the ISS. In addition to its mechanical tasks, CIMON assesses the moods of its human crewmates on the ISS and interacts with them accordingly.


Artificial Intelligence can predict whether you will die from COVID-19 – Free Press Journal

Copenhagen: Using patient data, artificial intelligence can make a 90 per cent accurate assessment of whether a person will die from COVID-19 or not, according to new research at the University of Copenhagen.

Body mass index (BMI), gender, and high blood pressure are among the most heavily weighted factors. The research can be used to predict the number of patients in hospitals who will need a respirator and to determine who ought to be first in line for a vaccination. The results of the study were published in the Nature journal Scientific Reports.

Artificial intelligence is able to predict who is most likely to die from the coronavirus. In doing so, it can also help decide who should be at the front of the line for the precious vaccines now being administered across Denmark.

The result is from a newly published study by researchers at the University of Copenhagen's Department of Computer Science. Since the COVID pandemic's first wave, researchers have been working to develop computer models that can predict, based on disease history and health data, how badly people will be affected by COVID-19.

Based on patient data from the Capital Region of Denmark and Region Zealand, the results of the study demonstrate that artificial intelligence can, with up to 90 percent certainty, determine whether a person who is not yet infected will die of COVID-19 if they are unfortunate enough to become infected. Once a patient is admitted to the hospital with COVID-19, the computer can predict with 80 percent accuracy whether they will need a respirator.

"We began working on the models to assist hospitals, as, during the first wave, they feared that they did not have enough respirators for intensive care patients. Our new findings could also be used to carefully identify who needs a vaccine," explains Professor Mads Nielsen of the University of Copenhagen's Department of Computer Science.

Older men with high blood pressure are at highest risk

The researchers fed a computer program with health data from 3,944 Danish COVID-19 patients. This trained the computer to recognise patterns and correlations in both patients' prior illnesses and in their bouts with COVID-19.

"Our results demonstrate, unsurprisingly, that age and BMI are the most decisive parameters for how severely a person will be affected by COVID-19. But the likelihood of dying or ending up on a respirator is also heightened if you are male, have high blood pressure or neurological disease," explains Mads Nielsen.

The diseases and health factors that, according to the study, have the most influence on whether a patient ends up on a respirator after being infected with COVID-19 are, in order of priority: BMI, age, high blood pressure, being male, neurological diseases, COPD, asthma, diabetes, and heart disease.
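To illustrate how such risk factors can combine into a single score, here is a logistic-style scorer over the factors the article lists. This is not the Copenhagen model, which was trained on real patient records; the weights and intercept below are entirely made up, and the output is only a probability-like number for demonstration.

```python
import numpy as np

FEATURES = ["bmi", "age", "hypertension", "male", "neurological",
            "copd", "asthma", "diabetes", "heart_disease"]
WEIGHTS = np.array([0.08, 0.05, 0.9, 0.6, 0.7, 0.5, 0.3, 0.4, 0.4])
BIAS = -9.0  # hypothetical intercept

def respirator_risk(patient):
    """Return a probability-like score in (0, 1) via the logistic function."""
    x = np.array([patient[f] for f in FEATURES], dtype=float)
    return 1.0 / (1.0 + np.exp(-(x @ WEIGHTS + BIAS)))

low = respirator_risk(dict(bmi=22, age=30, hypertension=0, male=0,
                           neurological=0, copd=0, asthma=0,
                           diabetes=0, heart_disease=0))
high = respirator_risk(dict(bmi=33, age=75, hypertension=1, male=1,
                            neurological=0, copd=1, asthma=0,
                            diabetes=1, heart_disease=1))
```

A trained model learns its weights from thousands of patient records; the point here is only that the listed factors enter a single monotone score that can rank patients for triage or vaccination priority.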

"For those affected by one or more of these parameters, we have found that it may make sense to move them up in the vaccine queue, to avoid any risk of them becoming infected and eventually ending up on a respirator," says Nielsen.

Predicting respirator needs is a must

Researchers are currently working with the Capital Region of Denmark to take advantage of this fresh batch of results in practice. They hope that artificial intelligence will soon be able to help the country's hospitals by continuously predicting the need for respirators.

"We are working towards a goal that we should be able to predict the need for respirators five days ahead by giving the computer access to health data on all COVID positives in the region," says Mads Nielsen, adding: "The computer will never be able to replace a doctor's assessment, but it can help doctors and hospitals see many COVID-19 infected patients at once and set ongoing priorities."

However, technical work is still pending to make health data from the region available to the computer and thereafter to calculate the risk for infected patients. The research was carried out in collaboration with Rigshospitalet and Bispebjerg and Frederiksberg Hospital.


How to Build a Modern Workplace with Artificial Intelligence and Internet of Things – BBN Times

1. Automating Tasks

Workplaces have several tasks that are routine and mundane such as scheduling meetings. Usually, employees may send emails back-and-forth to several other employees and enquire about an open slot on their calendar. This process can be increasingly tedious and time-consuming for employees.

Business leaders can adopt AI in the workplace to enhance employee productivity. Organizations can deploy AI-powered personal assistants for scheduling, cancelling, and rescheduling meetings. AI-enabled assistants can analyze an employee's schedule and suggest time slots to other employees based on their availability. When a time slot is decided, the AI assistant notifies all participants of the meeting. AI can also be used to automatically transcribe meetings into a text file. Furthermore, the introduction of AI in the workplace can automate other tasks such as sorting and categorizing emails.
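The slot-finding logic at the heart of such a scheduling assistant can be sketched in a few lines: intersect every attendee's free intervals and return the first gap long enough for the meeting. The calendars, working hours, and half-hour scan step below are all hypothetical.

```python
def free_slots(busy, day_start=9, day_end=17):
    """Turn a list of busy (start, end) hours into free gaps."""
    slots, cursor = [], day_start
    for start, end in sorted(busy):
        if start > cursor:
            slots.append((cursor, start))
        cursor = max(cursor, end)
    if cursor < day_end:
        slots.append((cursor, day_end))
    return slots

def first_common_slot(calendars, hours=1):
    """Scan the day in half-hour steps for a gap every attendee shares."""
    t = 9
    while t + hours <= 17:
        if all(any(s <= t and t + hours <= e for s, e in free_slots(b))
               for b in calendars):
            return t
        t += 0.5
    return None

calendars = [[(9, 10), (13, 14)],    # attendee A's busy hours
             [(9, 11), (15, 16)]]    # attendee B's busy hours
print(first_common_slot(calendars))  # 11.0: first shared free hour
```

A production assistant would layer preferences, time zones, and notifications on top, but the interval intersection above is the core of the suggestion step.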

HR executives usually resolve employee queries related to various workplace policies, alongside core tasks such as managing payroll, recruiting talent, and onboarding new employees. Similarly, IT professionals can get caught up in employee queries along with their core tasks. Hence, the productivity of the HR and IT departments can be severely affected.

The deployment of AI in the workplace can enable organizations to resolve employee queries without interrupting their HR or IT departments. Several organizations are deploying AI-powered chatbots in the workplace, and every organization can deploy AI-enabled chatbots that answer different employee queries accurately. Employees can ask queries using emails, text messages, and online messengers, and the AI will respond accordingly. If the chatbot is unable to answer a query, it assigns a request to the personnel who can resolve it. With this approach, AI-enabled chatbots can also learn how to respond to various queries. Additionally, the deployment of AI in the workplace allows employees to communicate in various languages, as AI can translate their queries to English.

The adoption of AI in the workplace can streamline the onboarding process. AI systems can automate various tasks such as generating offer letters, sending documents, and walking new employees through various company-wide policies. Additionally, AI can coach new employees by observing and analyzing how they conduct various tasks. Then, AI tools can suggest ways to improve their efficiency. For instance, AI systems can analyze the sales calls of multiple employees and offer tips to improve their performance. For this purpose, AI systems can record sales calls and generate statistics for each employee, then offer suggestions based on every employee's data. Likewise, AI can also train customer service executives to help them deliver better services. With this approach, AI can provide personalized training for each employee.

In the digital age, running a competitive business without data is almost impossible. Businesses collect different types of data, such as social media data, customer data, and operational data, for various applications. However, the obtained data is of little use if it is not used to generate analytics. Hence, deploying AI in the workplace can enable business leaders to generate valuable insights from the acquired data. For this purpose, AI systems will consolidate data collected from various sources, such as social media and customers' personal information, and store it in a centralized location. Then, AI systems will analyze the collected data to offer profound insights that can help business leaders predict industry trends, identify anomalies, and generate detailed reports.

The introduction of IoT in the workplace can benefit organizations in the following manner:

Every employee prefers a different temperature on the thermostat, and this disagreement can be a topic of conflict in the workplace. Business leaders can install smart thermostats and temperature sensors in the workplace to automate thermostat settings. Smart thermostats learn from employee temperature preferences and set the temperature accordingly. Similarly, business leaders can install several IoT-powered appliances, such as smart lights, smart air conditioners, and coffee machines, that can be operated using a smartphone.

Organizations can install IoT sensors in the workplace to notify employees about empty conference rooms. These sensors will monitor all conference rooms and display their status as available or busy in a centralized location. With this approach, employees can effortlessly find empty conference rooms.

Business leaders can introduce effective security measures and access control in the workplace with the help of IoT. Conventional keys, badges, and passes can be easily forgotten or duplicated. Hence, organizations can deploy smart locks that can be effortlessly unlocked using a smartphone. Such locks can also enable access control for certain rooms. For instance, only a few employees will have access to rooms that contain crucial paperwork and confidential data. With the help of IoT, business leaders can offer a granular approach to access control in the workplace. Also, smart locks can integrate with existing security systems in an organization.

The US consumes around 23% of the world's energy. Such statistics are worrying in light of depleting energy reserves, overpopulation, and climate change. Energy in the form of electricity is extensively utilized in the workplace for several business procedures, and one cause of excessive energy consumption may be the inability to track energy utilization in the workplace. Hence, business leaders can deploy IoT sensors that monitor energy consumption in real time and present the data to concerned personnel, who can analyze it and take the necessary steps to reduce consumption. IoT sensors and smart appliances can also help control energy usage. For instance, smart lights have IoT sensors that detect people in a room: when a room is empty, the lights shut off, and they turn back on when someone enters. With this approach, organizations can monitor and control energy consumption and conserve energy.
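The occupancy-driven lighting rule described above can be sketched as a simple simulation: lights draw power only in intervals where a motion sensor reports the room occupied, and the tally is compared against leaving them on all day. The sensor readings, fixture wattage, and interval length are all invented for illustration.

```python
LIGHT_POWER_W = 40  # assumed draw of the fixture when on

def simulate(occupancy, interval_h=0.25):
    """occupancy: per-interval booleans from a motion sensor.
    Returns (energy with smart control, energy with lights always on),
    both in watt-hours."""
    smart = sum(LIGHT_POWER_W * interval_h for occupied in occupancy
                if occupied)
    always_on = LIGHT_POWER_W * interval_h * len(occupancy)
    return smart, always_on

# One simulated 8-hour workday in 15-minute intervals, with a 2-hour gap
day = [True] * 12 + [False] * 8 + [True] * 12
smart, naive = simulate(day)
```

In this toy day the smart control saves a quarter of the lighting energy; real deployments feed the same per-interval data into dashboards so facilities teams can spot waste across a whole building.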

The introduction of IoT and AI in the workplace will help businesses deliver more efficient operations and workflow, leading to a better ROI. Also, IoT and AI can significantly improve employee experience, which can help organizations in attracting and retaining the best talent. Additionally, AI and IoT can work together to make the existing applications more advanced. Hence, business leaders must invest in these modern technologies to reap their benefits and gain a competitive edge.


Morality Poses the Biggest Risk to Military Integration of Artificial Intelligence – The National Interest

Finding an effective balance between humans and artificial intelligence (AI) in defense systems will be the sticking point for any policy that distances humans from the loop. Within this balance, we must accept some deviations when considering concepts such as the kill chain. How would a progression of policy look within a defense application? Addressing the political, technological, and legal boundaries of AI integration would allow the benefits of AI, notably speed, to be incorporated into the kill chain. Recently, former Secretary of Defense Ash Carter stated, "We all accept that bad things can happen with machinery. What we don't accept is when it happens amorally." Certainly, humans will retain override ability and accountability without exception. Leaders will be forever bound by the actions of AI-guided weapon systems, perhaps no differently than they would be responsible for the actions of a service member in combat, upholding ethical standards that AI has yet to grasp.

The future of weapon systems will include AI guiding the selection of targets, gathering and processing information, and, ultimately, delivering force as necessary. Domination on the battlefield will not be achieved by traditional means; rather, conflicts will be dominated by AI with competing algorithms. The normalcy of a human-dominated decisionmaking process does provide allowances for AI within the process, though not in a meaningful way. At no point does artificial intelligence play a significant role in making actual decisions about lethal actions. Clearly, the capability and technology supporting integration have far surpassed the tolerance of our elected officials. We must build confidence with them and the general public through a couple of fundamental steps.

First, information gathering and processing can be controlled primarily by the AI with little to no friction from officials. This integration, although not significant by way of added capability from a research and development (R&D) perspective, will aid in building confidence and can be completed quickly. Developing elementary protocols for the AI to follow for individual systems such as turrets, easy at first and slowly increasing in difficulty, would allow the progression of technology from an R&D standpoint while incrementally building confidence and trust. The inclusion of recognition software in the weapon system would allow specific target selections, distinguishing civilians from terrorists, which could be presented, prioritized, and then given to the commander for action. Once a system is functioning confidently within a set of defined parameters, the number of systems can be increased for overlapping coverage. A human can sit at the intersection of all the data via a command center, supervising these systems with a battle management system, effectively being a human on the loop with the ability to stop any engagements as required or to limit AI roles based on individual or mission tolerance.
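The "human on the loop" pattern described above can be sketched as a queue: the AI filters and prioritizes proposals, and everything remains vetoable by the supervising operator before any action is cleared. All names, scores, and thresholds below are invented for illustration; this is a sketch of the supervision pattern, not of any real system.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    target_id: str
    threat_score: float    # AI-assigned priority in [0, 1]
    approved: bool = True  # the human operator can flip this to veto

def prioritize(detections, threshold=0.7):
    """Return vetoable engagement proposals, highest threat first."""
    hot = [d for d in detections if d.threat_score >= threshold]
    return sorted(hot, key=lambda d: d.threat_score, reverse=True)

queue = prioritize([Detection("t1", 0.95), Detection("t2", 0.4),
                    Detection("t3", 0.8)])
queue[1].approved = False                  # operator vetoes the second item
cleared = [d.target_id for d in queue if d.approved]
```

The key design choice is the default: proposals sit in a reviewable queue, and the human's veto, not the AI's score, is what ultimately gates action.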

This process must not be encapsulated solely within an R&D environment. Rather, there must be transparency: the public and elected officials alike must know and accept it. Yes, these steps seem elementary; however, they are not being done. Focus has been concentrated on capability development without a similar concern for associated policy development, when both must progress together. Small concrete steps with sound policy and oversight are crucial. Without such an understanding, decisionmakers cannot in good conscience approve, instead defaulting to the safe and easy answer: no. Waiting to act on AI integration into our weapons systems puts us behind the technological curve required to effectively compete with our foes. It would be foolish to believe our adversaries and their R&D programs are being held up on AI integration by moral and public-support requirements; the Chinese call it "intelligentized war" and have invested heavily. Having humans on the loop during successful testing and fielding will be the bridge to additional AI authorities and the public support necessary for the United States to continue to develop these technologies as future warfare will dictate.

John Austerman is an experienced advisor to senior military and civilian leaders, focusing on armaments policy primarily within research and development. He has experience with more than 50 countries, including the Levant, hostile-fire areas, and war zones.

Image: Reuters.


How Artificial intelligence is Transforming the Apparel Industry – BBN Times

Trend Spotting

Taking into account fast-changing fashion trends, it goes without saying that anticipating them is not only tricky but also time-consuming. By manually researching previously popular styles, social media fashion trends, and customer preferences, analysts were expected to spot the upcoming trends. The guesswork done by these professionals may or may not be accurate. Besides the hassles of manual work, spotting fashion trends can also pose cost issues for fashion brands if not forecasted rightly. Instead, if brands invest in leveraging AI, they can cut down on all these problems quickly.

The AI tool, trained with data of sufficient quality and quantity, will analyze past fashion data, check customer demand and preferences, gauge competitors' moves, and identify market trends. After processing the data, the AI tool will give accurate details on trendy styles and designs within minutes. With AI, fashion brands can bolster their apparel business by tracking the latest fashion trends in just minutes, a task that would otherwise take days or even months.

Realizing the potential of AI in design, many tech giants are already making big moves by integrating the technology for their benefit. For instance, a group of professionals in Amazon developed an AI tool that is capable of analyzing and learning from the images it is given, and then generating an altogether new fashion design by itself. Besides, Amazon has developed another AI application that can analyze and process supplied pictures and conclude whether a particular style will look trendy or not. Not only Amazon, but dozens of other tech giants have already embarked on their AI journey, streamlining their design-creation process completely. IBM, in collaboration with Tommy Hilfiger and the Fashion Institute of Technology (FIT), is using AI to empower designers in boosting the pace of the product development lifecycle.

With customers becoming restless, irritated, and grumpy when they do not receive quick assistance or service, fashion retailers face constant pressure to offer what customers want almost instantaneously. Several industry giants have already come up with the newest technology-powered applications that promote an enhanced customer experience, one that goes beyond personalized ads, notification alerts on price drops, or chatbot assistance. Using this sophisticated technology, fashion brands strive to put customization at the forefront of the customer's buying journey. There are AI-powered personal stylist apps on the market that allow users to browse clothes online or to click pictures of their own clothes. Given these images as inputs, the app will recommend the best style according to the user's body type, complexion, and preferences while keeping fashion trends in mind. From providing customers with personalized advertisement notifications, to alerting them on price drops, to clearing their doubts or queries with chatbots, to now being a personal stylist providing instant outfit suggestions, fashion brands can meet their aim of elevating customer experience with the help of AI. With AI able to act both as a design assistant for designers and as a personal stylist for consumers, it is clear that the impact of the technology is greater than we ever imagined.

The emergence of the trend-setting technology AI has changed the way businesses carry out their processes. And the discussion we've had is proof that the apparel industry is no exception. With a majority of big fashion brands already tapping into the benefits and applications of AI, it is undeniable that the technology will soon become mainstream among medium-sized companies and startups as well. So, for garment companies who haven't planned to adopt AI yet, the right time to plan and kick-start their digital transformation journey is today. After all, no one wants to be left behind in the digital race.


How Airbus And Boeing Are Using Artificial Intelligence To Advance Autonomous Flight – Simple Flying

Pilot-less jetliners may still be far off in the future for several reasons, public trust in automated systems not being the least of them. However, this does not mean the software technology to support such operations has not developed in leaps and bounds. While there are several start-ups in tech-driven unmanned airborne vehicles, let's take a look at how the two main aircraft manufacturers use artificial intelligence in the quest for safe autonomous flight.

Artificial Intelligence (AI) is a divisive subject. Some herald it as the key solution to everything from Alzheimer's and cancer to food shortages and climate change. Others, more pessimistically or dystopically inclined, say it will be the end of humanity or, at the very least, take most of our jobs.

One thing is for certain, though: AI is here to stay, and it will have a massive impact on our everyday lives in the future. Aviation is often critiqued for having been slow on the uptake when it comes to AI. However, things have begun to change, and its various applications will transform the industry in the decades to come.

Sophisticated data-driven algorithms will revolutionize everything from ticket pricing, air traffic control, and crew and maintenance schedules to aircraft assembly and natural language processing in the cockpit. And, of course, AI will have an enormous impact on more advanced technology such as autonomous vision-based navigation, or pilot-less planes, if you will.


A little over a year ago, on January 16th, 2020, Airbus completed the first fully automatic vision-based take-off and landing within the framework of its Autonomous Taxi, Take-Off and Landing (ATTOL) project. Rather than relying on an Instrument Landing System (ILS), the AI-controlled take-off was governed by image-recognition software installed on the aircraft.

Image recognition is software's ability to identify people, places, objects, and so on in images. You are involved in it every time you respond to a prompt to identify yourself as a human online by clicking on all the images containing a crosswalk, traffic light, or motorcycle. In footage of the ATTOL tests, one can clearly see how the software reads the visual input of the aircraft's surroundings to perform the take-off procedure.
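Image recognition at its simplest can be shown with template matching: slide a small template over an image and score each position by correlation. Systems like ATTOL's use deep networks rather than anything this crude, but the underlying question is the same: where in the scene does the input look like the thing you are searching for? The scene and template below are invented.

```python
import numpy as np

def match(image, template):
    """Return the (row, col) where the template correlates best."""
    th, tw = template.shape
    t = template - template.mean()           # zero-mean template
    best_score, best_pos = -np.inf, None
    for i in range(image.shape[0] - th + 1):
        for j in range(image.shape[1] - tw + 1):
            patch = image[i:i + th, j:j + tw]
            score = np.sum((patch - patch.mean()) * t)
            if score > best_score:
                best_score, best_pos = score, (i, j)
    return best_pos

# A synthetic "scene" containing one cross-shaped marking
pattern = np.array([[0, 1, 0],
                    [1, 1, 1],
                    [0, 1, 0]], dtype=float)
scene = np.zeros((20, 20))
scene[5:8, 12:15] = pattern
print(match(scene, pattern))  # (5, 12)
```

Deep networks replace the hand-made template with learned feature detectors, which is what lets them generalize across lighting, angle, and scale.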

The ATTOL project was completed in June last year. However, Airbus has stated that its goal is for autonomous technologies to improve flight operations and overall performance, not to reach autonomous flight as a target in itself. Pilots, the planemaker says, will remain at the heart of operations.

Over in the other corner, in December 2020, Boeing completed a series of test flights exploring how high-performance uncrewed aircraft can operate together under AI control, using onboard command and data sharing. Aircraft were added one by one over a period of ten days until five operated as an autonomous unit, reaching speeds of up to 167 miles per hour.

"The tests demonstrated our success in applying artificial intelligence algorithms to teach the aircraft's brain to understand what is required of it," Emily Hughes, director of Phantom Works, Boeing's prototyping arm for its defense branch, said in a statement shared with Vision Systems Design at the time.

"With the size, number and speed of aircraft used in the test, this is a very significant step for Boeing and the industry in the progress of autonomous mission systems technology," Hughes continued.

While December's test flights took place on the defense side of the business, Boeing stated that the technologies developed from the program would not only inform its developmental Airpower Teaming System (ATS) but apply to all future autonomous aircraft.

Meanwhile, Boeing's subsidiary Aurora Flight Sciences, part of Boeing NeXt, is building smaller autonomous flight vehicles. This includes the Centaur, configured for autonomous flight and featuring detect-and-avoid technology supported by radar.

How soon would you get on a crewless aircraft? Are you excited about the prospects of autonomous flight? What do you consider to be the main issues? Let us know in the comments.


Robust artificial intelligence tools to predict future cancer – MIT News

To catch cancer earlier, we need to predict who is going to get it in the future. The complex task of forecasting risk has been bolstered by artificial intelligence (AI) tools, but the adoption of AI in medicine has been limited by poor performance on new patient populations and by neglect of racial minorities.

Two years ago, a team of scientists from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) and Jameel Clinic (J-Clinic) demonstrated a deep learning system to predict cancer risk using just a patient's mammogram. The model showed significant promise and even improved inclusivity: it was equally accurate for both white and Black women, which is especially important given that Black women are 43 percent more likely to die from breast cancer.

But to integrate image-based risk models into clinical care and make them widely available, the researchers say the models need both algorithmic improvements and large-scale validation across several hospitals to prove their robustness.

To that end, they tailored their new Mirai algorithm to capture the unique requirements of risk modeling. Mirai jointly models a patient's risk across multiple future time points, and can optionally benefit from clinical risk factors such as age or family history, if they are available. The algorithm is also designed to produce predictions that are consistent across minor variances in clinical environments, such as the choice of mammography machine.

The team trained Mirai on the same dataset of over 200,000 exams from Massachusetts General Hospital (MGH) as their prior work, and validated it on test sets from MGH, the Karolinska Institute in Sweden, and Chang Gung Memorial Hospital in Taiwan. Mirai is now installed at MGH, and the team's collaborators are actively working on integrating the model into care.

Mirai was significantly more accurate than prior methods in predicting cancer risk and identifying high-risk groups across all three datasets. When comparing high-risk cohorts on the MGH test set, the team found that their model identified nearly two times more future cancer diagnoses than the current clinical standard, the Tyrer-Cuzick model. Mirai was similarly accurate across patients of different races, age groups, and breast density categories in the MGH test set, and across different cancer subtypes in the Karolinska test set.

"Improved breast cancer risk models enable targeted screening strategies that achieve earlier detection and less screening harm than existing guidelines," says Adam Yala, a CSAIL PhD student and lead author on a paper about Mirai that was published this week in Science Translational Medicine. "Our goal is to make these advances part of the standard of care. We are partnering with clinicians from Novant Health in North Carolina, Emory in Georgia, Maccabi in Israel, TecSalud in Mexico, Apollo in India, and Barretos in Brazil to further validate the model on diverse populations and study how best to clinically implement it."

How it works

Despite the wide adoption of breast cancer screening, the researchers say the practice is riddled with controversy: More-aggressive screening strategies aim to maximize the benefits of early detection, whereas less-frequent screenings aim to reduce false positives, anxiety, and costs for those who will never even develop breast cancer.

Current clinical guidelines use risk models to determine which patients should be recommended for supplemental imaging and MRI. Some guidelines use risk models with just age to determine if, and how often, a woman should get screened; others combine multiple factors related to age, hormones, genetics, and breast density to determine further testing. Despite decades of effort, the accuracy of risk models used in clinical practice remains modest.

Recently, deep learning mammography-based risk models have shown promising performance. To bring this technology to the clinic, the team identified three innovations they believe are critical for risk modeling: jointly modeling time, the optional use of non-image risk factors, and methods to ensure consistent performance across clinical settings.

1. Time

Inherent to risk modeling is learning from patients with different amounts of follow-up, and assessing risk at different time points: this can determine how often patients get screened, whether they should have supplemental imaging, or even whether they should consider preventive treatments.

Although it's possible to train separate models to assess risk for each time point, this approach can result in risk assessments that don't make sense, like predicting that a patient has a higher risk of developing cancer within two years than they do within five years. To address this, the team designed their model to predict risk at all time points simultaneously, by using a tool called an additive-hazard layer.

The additive-hazard layer works as follows: Their network predicts a patient's risk at a time point, such as five years, as an extension of their risk at the previous time point, such as four years. In doing so, their model can learn from data with variable amounts of follow-up, and then produce self-consistent risk assessments.
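This self-consistency property can be made concrete with a small numeric sketch. This is illustrative only: a softplus-plus-cumulative-sum construction is a common way to build an additive-hazard layer, not necessarily Mirai's exact formulation, and the function name is invented here.

```python
import numpy as np

def additive_hazard(logits):
    """Toy additive-hazard layer (an illustrative sketch, not Mirai's code).

    One logit per future time point. Softplus keeps each per-year hazard
    increment nonnegative, and the cumulative sum extends each year's risk
    from the previous year's, so five-year risk can never dip below four-year.
    """
    increments = np.log1p(np.exp(logits))  # softplus: every increment >= 0
    cumulative = np.cumsum(increments)     # risk accumulates over the horizon
    return 1.0 - np.exp(-cumulative)       # map cumulative hazard into (0, 1)

# Risk scores for years 1 through 5 from arbitrary network outputs:
risks = additive_hazard(np.array([-1.0, 0.5, -2.0, 0.0, 1.0]))
assert np.all(np.diff(risks) >= 0)  # self-consistent: risk never decreases
```

Because each year's risk is the previous year's plus a nonnegative increment, the inconsistent case described above (two-year risk above five-year risk) cannot occur by construction.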

2. Non-image risk factors

While this method primarily focuses on mammograms, the team wanted to also use non-image risk factors such as age and hormonal factors if they were available, but not require them at the time of the test. One approach would be to add these factors as an input to the model with the image, but this design would prevent the majority of hospitals (such as Karolinska and CGMH), which don't have this infrastructure, from using the model.

For Mirai to benefit from risk factors without requiring them, the network learns to predict that information at training time, and if the factors are missing at test time, it falls back on its own predictions. Mammograms are rich sources of health information, and many traditional risk factors, such as age and menopausal status, can be easily predicted from the imaging. As a result of this design, the same model can be used by any clinic globally, and clinics that do record the additional information can simply supply it.
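The fall-back behavior amounts to merging recorded factors over image-based predictions. The function and field names below are hypothetical, chosen only to illustrate the design choice, not taken from the Mirai codebase:

```python
def resolve_risk_factors(predicted, provided=None):
    """Merge clinic-recorded risk factors with image-based predictions.

    Hypothetical sketch of the optional-input design: recorded values take
    precedence, and any field the clinic did not capture falls back to the
    value the network predicted from the mammogram itself.
    """
    provided = provided or {}
    return {key: provided.get(key, value) for key, value in predicted.items()}

image_predicted = {"age": 52.0, "menopausal_status": 1}

# A clinic with no risk-factor infrastructure runs on predictions alone:
assert resolve_risk_factors(image_predicted) == image_predicted

# A clinic that records age overrides just that one field:
merged = resolve_risk_factors(image_predicted, {"age": 49.0})
assert merged == {"age": 49.0, "menopausal_status": 1}
```

The same trained model serves both kinds of clinic; only the optional `provided` input differs.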

3. Consistent performance across clinical environments

To incorporate deep-learning risk models into clinical guidelines, the models must perform consistently across diverse clinical environments, and their predictions must not be affected by minor variations like which machine the mammogram was taken on. Even across a single hospital, the scientists found that standard training did not produce consistent predictions before and after a change in mammography machines, as the algorithm could learn to rely on cues specific to the environment. To de-bias the model, the team used an adversarial scheme in which the model specifically learns mammogram representations that are invariant to the source clinical environment, so that it produces consistent predictions.

To further test these updates across diverse clinical settings, the scientists evaluated Mirai on new test sets from Karolinska in Sweden and Chang Gung Memorial Hospital in Taiwan, and found it obtained consistent performance. The team also analyzed the model's performance across races, ages, and breast density categories in the MGH test set, and across cancer subtypes on the Karolinska dataset, and found it performed similarly across all subgroups.

"African-American women continue to present with breast cancer at younger ages, and often at later stages," says Salewai Oseni, a breast surgeon at Massachusetts General Hospital who was not involved with the work. "This, coupled with the higher incidence of triple-negative breast cancer in this group, has resulted in increased breast cancer mortality. This study demonstrates the development of a risk model whose prediction has notable accuracy across race. The opportunity for its use clinically is high."

Here's how Mirai works:

1. The mammogram image is put through something called an "image encoder."

2. Each image representation, as well as which view it came from, is aggregated with other images from other views to obtain a representation of the entire mammogram.

3. Alongside the mammogram, the traditional risk factors used by the Tyrer-Cuzick model (age, weight, hormonal factors) are incorporated when available; if they are unavailable, the model's own predicted values are used instead.

4. With this information, the additive-hazard layer predicts a patient's risk for each year over the next five years.

Improving Mirai

Although the current model doesn't look at any of the patient's previous imaging results, changes in imaging over time contain a wealth of information. In the future, the team aims to create methods that can effectively utilize a patient's full imaging history.

In a similar fashion, the team notes that the model could be further improved by utilizing tomosynthesis, a 3D X-ray technique used to screen asymptomatic patients for cancer. Beyond improving accuracy, additional research is required to determine how to adapt image-based risk models to different mammography devices with limited data.

"We know MRI can catch cancers earlier than mammography, and that earlier detection improves patient outcomes," says Yala. "But for patients at low risk of cancer, the risk of false positives can outweigh the benefits. With improved risk models, we can design more nuanced risk-screening guidelines that offer more sensitive screening, like MRI, to patients who will develop cancer, to get better outcomes while reducing unnecessary screening and over-treatment for the rest."

"We're both excited and humbled to ask the question of whether this AI system will work for African-American populations," says Judy Gichoya, MD, MS, an assistant professor of interventional radiology and informatics at Emory University, who was not involved with the work. "We're extensively studying this question, and how to detect failure."

Yala wrote the paper on Mirai alongside MIT research specialist Peter G. Mikhael, radiologist Fredrik Strand of Karolinska University Hospital, Gigin Lin of Chang Gung Memorial Hospital, Associate Professor Kevin Smith of KTH Royal Institute of Technology, Professor Yung-Liang Wan of Chang Gung University, Leslie Lamb of MGH, Kevin Hughes of MGH, senior author and Harvard Medical School Professor Constance Lehman of MGH, and senior author and MIT Professor Regina Barzilay.

The work was supported by grants from Susan G. Komen, the Breast Cancer Research Foundation, Quanta Computing, and the MIT Jameel Clinic. It was also supported by a Chang Gung Medical Foundation grant and a Stockholms Läns Landsting HMT grant.

More here:
Robust artificial intelligence tools to predict future cancer - MIT News

Artificial Intelligence in Cybersecurity Market Research Report by Function, by Type, by Technology, by Industry, by Deployment – Global Forecast to…

New York, Jan. 29, 2021 (GLOBE NEWSWIRE) -- Reportlinker.com announces the release of the report "Artificial Intelligence in Cybersecurity Market Research Report by Function, by Type, by Technology, by Industry, by Deployment - Global Forecast to 2025 - Cumulative Impact of COVID-19" - https://www.reportlinker.com/p06015709/?utm_source=GNW

Market Statistics: The report provides market sizing and forecast across five major currencies - USD, EUR, GBP, JPY, and AUD. This helps organization leaders make better decisions when currency exchange data is readily available.

1. The Global Artificial Intelligence in Cybersecurity Market is expected to grow from USD 9,246.79 Million in 2020 to USD 25,354.64 Million by the end of 2025.
2. The Global Artificial Intelligence in Cybersecurity Market is expected to grow from EUR 8,107.76 Million in 2020 to EUR 22,231.43 Million by the end of 2025.
3. The Global Artificial Intelligence in Cybersecurity Market is expected to grow from GBP 7,207.81 Million in 2020 to GBP 19,763.78 Million by the end of 2025.
4. The Global Artificial Intelligence in Cybersecurity Market is expected to grow from JPY 986,866.80 Million in 2020 to JPY 2,705,982.57 Million by the end of 2025.
5. The Global Artificial Intelligence in Cybersecurity Market is expected to grow from AUD 13,427.56 Million in 2020 to AUD 36,818.30 Million by the end of 2025.

Market Segmentation & Coverage: This research report categorizes the Artificial Intelligence in Cybersecurity Market to forecast the revenues and analyze the trends in each of the following sub-markets:

Based on Function, the Artificial Intelligence in Cybersecurity Market is studied across Advanced Threat Detection, Data Loss Prevention, Encryption, Identity and Access Management, Intrusion Detection/Prevention Systems, Proactive Defense and Threat Mitigation, and Risk and Compliance Management.

Based on Type, the Artificial Intelligence in Cybersecurity Market is studied across Application Security, Cloud Security, Endpoint Security, and Network Security.

Based on Technology, the Artificial Intelligence in Cybersecurity Market is studied across Context Awareness Computing, Machine Learning, and Natural Language Processing. Machine Learning is further studied across Deep Learning, Reinforcement Learning, Supervised Learning, and Unsupervised Learning.

Based on Industry, the Artificial Intelligence in Cybersecurity Market is studied across Aerospace & Defense, Automotive & Transportation, Banking, Financial Services & Insurance, Building, Construction & Real Estate, Consumer Goods & Retail, Education, Energy & Utilities, Government & Public Sector, Healthcare & Life Sciences, Information Technology, Manufacturing, Media & Entertainment, Telecommunication, and Travel & Hospitality.

Based on Deployment, the Artificial Intelligence in Cybersecurity Market is studied across On-Cloud and On-Premises.

Based on Geography, the Artificial Intelligence in Cybersecurity Market is studied across the Americas, Asia-Pacific, and Europe, Middle East & Africa. The Americas region is surveyed across Argentina, Brazil, Canada, Mexico, and the United States. The Asia-Pacific region is surveyed across Australia, China, India, Indonesia, Japan, Malaysia, Philippines, South Korea, and Thailand. The Europe, Middle East & Africa region is surveyed across France, Germany, Italy, Netherlands, Qatar, Russia, Saudi Arabia, South Africa, Spain, United Arab Emirates, and United Kingdom.

Company Usability Profiles: The report deeply explores the recent significant developments by the leading vendors and innovation profiles in the Global Artificial Intelligence in Cybersecurity Market, including Acalvio Technologies, Inc., Amazon.com, Inc., Argus Cyber Security, Bitsight Technologies, Cylance, Inc., Darktrace Limited, Deep Instinct, Feedzai S.A., Fortscale Security, Inc., High-Tech Bridge, Indegy Ltd., Intel Corporation, International Business Machines Corp., Micron Technology, Inc., Nozomi Networks, NVIDIA Corporation, Samsung Electronics Co., Ltd., Securonix, Inc., Sentinelone Inc., Sift Science Inc., Skycure Ltd., SparkCognition Inc., Threatmetrix, Inc., Vectra Networks, Xilinx, Inc., and Zimperium, Inc.

Cumulative Impact of COVID-19: COVID-19 is an incomparable global public health emergency that has affected almost every industry, and its long-term effects are projected to impact industry growth during the forecast period. Our ongoing research amplifies our research framework to ensure the inclusion of underlying COVID-19 issues and potential paths forward. The report delivers insights on COVID-19 considering the changes in consumer behavior and demand, purchasing patterns, re-routing of the supply chain, dynamics of current market forces, and the significant interventions of governments. The updated study provides insights, analysis, estimations, and forecasts, considering the COVID-19 impact on the market.

360iResearch FPNV Positioning Matrix: The 360iResearch FPNV Positioning Matrix evaluates and categorizes the vendors in the Artificial Intelligence in Cybersecurity Market on the basis of Business Strategy (Business Growth, Industry Coverage, Financial Viability, and Channel Support) and Product Satisfaction (Value for Money, Ease of Use, Product Features, and Customer Support), which aids businesses in better decision making and understanding the competitive landscape.

360iResearch Competitive Strategic Window: The 360iResearch Competitive Strategic Window analyses the competitive landscape in terms of markets, applications, and geographies. The 360iResearch Competitive Strategic Window helps the vendor define an alignment or fit between their capabilities and opportunities for future growth prospects. During a forecast period, it defines the optimal or favorable fit for the vendors to adopt successive merger and acquisition strategies, geography expansion, research & development, and new product introduction strategies to execute further business expansion and growth.

The report provides insights on the following pointers:

1. Market Penetration: Provides comprehensive information on the market offered by the key players
2. Market Development: Provides in-depth information about lucrative emerging markets and analyzes the markets
3. Market Diversification: Provides detailed information about new product launches, untapped geographies, recent developments, and investments
4. Competitive Assessment & Intelligence: Provides an exhaustive assessment of market shares, strategies, products, and manufacturing capabilities of the leading players
5. Product Development & Innovation: Provides intelligent insights on future technologies, R&D activities, and new product developments

The report answers questions such as:

1. What is the market size and forecast of the Global Artificial Intelligence in Cybersecurity Market?
2. What are the inhibiting factors and impact of COVID-19 shaping the Global Artificial Intelligence in Cybersecurity Market during the forecast period?
3. Which are the products/segments/applications/areas to invest in over the forecast period in the Global Artificial Intelligence in Cybersecurity Market?
4. What is the competitive strategic window for opportunities in the Global Artificial Intelligence in Cybersecurity Market?
5. What are the technology trends and regulatory frameworks in the Global Artificial Intelligence in Cybersecurity Market?
6. What are the modes and strategic moves considered suitable for entering the Global Artificial Intelligence in Cybersecurity Market?

Read the full report: https://www.reportlinker.com/p06015709/?utm_source=GNW

About Reportlinker

ReportLinker is an award-winning market research solution. Reportlinker finds and organizes the latest industry data so you get all the market research you need - instantly, in one place.

__________________________

Read more:
Artificial Intelligence in Cybersecurity Market Research Report by Function, by Type, by Technology, by Industry, by Deployment - Global Forecast to...

Artificial Intelligence in Epidemiology Market by AI Type, Infrastructure, Deployment Model, and Services – Global Forecast to 2026 -…

DUBLIN--(BUSINESS WIRE)--The "Artificial Intelligence in Epidemiology Market by AI Type, Infrastructure, Deployment Model, and Services 2021 - 2026" report has been added to ResearchAndMarkets.com's offering.

This global AI epidemiology and public health market report provides a comprehensive evaluation of the positive impact that AI technology will produce with respect to healthcare informatics, public healthcare management, and epidemiology analysis and response. The report assesses the macro factors affecting the market and the resulting need for hardware and software technology used in public healthcare and epidemiology informatics.

The macro factors include the growth drivers and challenges of the market along with the potential application and usage areas in public health industry verticals. The report also provides the anticipated market value of AI in the public health and epidemiology informatics market globally and regionally. This includes core technology and AI-specific technologies. Market forecasts cover the period of 2021 - 2026.

The Centers for Disease Control and Prevention defines epidemiology as the study and analysis of the distribution, patterns, and determinants of health and disease conditions in defined populations. It is a cornerstone of public health and shapes policy decisions and evidence-based practice by identifying risk factors for disease and targets for preventive healthcare.

This includes identification of the factors involved with diseases transmitted by food and water, acquired during travel or recreational activities, bloodborne and sexually transmitted diseases, and nosocomial infections such as hospital-acquired illnesses. Epidemiology is also concerned with the identification of trends and predictive capabilities to prevent diseases.

Sources of disease data include medical claims data (commercial claims, Medicare), electronic healthcare records (EHR) including medical treatment facilities and pharmacies, death registries and socioeconomic data. It is important to note that some data is highly structured whereas other data elements are highly unstructured, such as data gathered from social media and Web scraping.

Artificial Intelligence (AI) will increasingly be relied upon to improve the efficiency and effectiveness of transforming data correlation to meaningful insights and information. For example, machine learning has been used to gather Web search and location data as a means of identifying potential unsafe areas, such as restaurants involved in food-borne illnesses.

The combination of data aggregation from multiple sources with machine learning and advanced analytics will greatly improve the efficacy of epidemiology predictive models. For example, machine learning allows epidemiologists to evaluate as many variables as desired without inflating statistical error, avoiding the multiple-testing bias that arises when each additional test run on the data increases the chance of a spurious result.
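The inflation that multiple testing causes is easy to quantify for the idealized case of independent tests: the family-wise error rate grows as 1 - (1 - α)^k. A quick back-of-the-envelope check (a textbook illustration, not drawn from the report itself):

```python
# Family-wise error rate for k independent tests at significance alpha:
# P(at least one false positive) = 1 - (1 - alpha) ** k
alpha = 0.05
for k in (1, 10, 50):
    fwer = 1 - (1 - alpha) ** k
    print(f"{k:>2} independent tests -> {fwer:.1%} chance of a spurious hit")
# With 50 tests, a spurious "finding" is almost guaranteed (about 92%),
# which is why unconstrained variable hunting needs correction procedures
# or methods that do not accumulate error with each additional test.
```

At 10 tests the chance of at least one false positive already exceeds 40 percent, which motivates the machine-learning approaches the report describes.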

Another example of AI in epidemiology is the use of natural language processing to capture clinical notes for preservation in EHR databases. As part of data capture and identification of the most important information, AI will also be used to validate key terms to identify conditions, diagnoses, and exposures that are otherwise difficult to capture or identify through traditional data source mining. This will be used for data discovery and validation as well as knowledge representation.

An extremely important and high growth area for AI in epidemiology is drug discovery, safety, and risk analysis, which we anticipate will be a $699 million global market by 2026. Other high opportunity areas for AI are disease and syndromic surveillance, infection prediction and forecasting, monitoring population and incidence of disease, and use of AI in Immunization Information Systems (IIS). In addition to mapping vaccinations to disease incidence, the IIS will leverage AI to identify the impact of public sentiment analysis and for public safety services such as mass notification.

Select Report Findings:

Report Benefits:

Key Topics Covered:

1.0 Executive Summary

2.0 Introduction

3.0 Technology and Application Analysis

4.0 Company Analysis

5.0 Market Analysis and Forecasts 2021 - 2026

6.0 Conclusions and Recommendations

Companies Mentioned

For more information about this report visit https://www.researchandmarkets.com/r/bwvnfe

Link:
Artificial Intelligence in Epidemiology Market by AI Type, Infrastructure, Deployment Model, and Services - Global Forecast to 2026 -...

Data Analytics and Artificial Intelligence to Propel Smart Water and Wastewater Leak Detection Solutions Market – PR Newswire India

Increasing adoption of new technologies is transforming the industry's business model from product-based solutions to leak management as a service (LMaaS), finds Frost & Sullivan

SANTA CLARA, Calif., Jan. 28, 2021 /PRNewswire/ -- Frost & Sullivan's recent analysis, Data Analytics and AI Boost Accuracy to Drive Global Smart Water and Wastewater Leak Detection Solutions Market, finds that the wastewater leak detection market has witnessed a significant rate of innovation and digital transformation. Internet of Things (IoT) sensors, machine learning (ML), artificial intelligence (AI), and cloud- or edge-based data analytics platforms are boosting the market. By 2026, the market is estimated to garner a revenue of $1.99 billion from $1.23 billion in 2020, up at a compound annual growth rate (CAGR) of 8.4%.
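The quoted growth figures can be sanity-checked with the standard CAGR formula (the 6-year span from 2020 to the 2026 estimate is assumed from the forecast period):

```python
# CAGR = (ending value / starting value) ** (1 / years) - 1
start_revenue = 1.23   # USD billion, 2020
end_revenue = 1.99     # USD billion, 2026 estimate
years = 6

cagr = (end_revenue / start_revenue) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")
# Close to the reported 8.4%; the small gap comes from the revenue
# figures being rounded to three significant digits.
```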

Photo - https://mma.prnewswire.com/media/1428835/smart_water.jpg

For further information on this analysis, please visit: http://frost.ly/54b

"The high rate of urbanization in most developing countries has increased the pressure on existing water and wastewater infrastructure, which has pushed the demand for leak detection solutions, partly to improve asset efficiency and partly to meet water conservation goals," said Paul Hudson, Energy & Environment Research Analyst at Frost & Sullivan. "To tap into this growth prospect, leak detection solution providers should integrate their technologies and customize services to meet customers' demands and exploit investments made for the development of Smart Cities and resilient infrastructure."

Hudson added: "The increasing adoption of cloud-based data analytics, ML and AI is transforming the industry's business model from product-based solutions to leak detection services. Further, utilities' emphasis on a 'one-stop solution provider' for leak detection in both their water and wastewater networks is encouraging solution providers to embrace new business models such as technology-as-a-service (TaaS) and leak (or non-revenue water (NRW)) management-as-a-service (LMaaS). TaaS enables service providers to fully control and strategically expand and enhance their technology offerings, whereas LMaaS could help focus on the growth and market penetration of smart solutions such as continual leak monitoring and proactive prevention."

The move toward a circular economy and holistic sustainability will present immense growth opportunities for market participants, varying considerably depending on the region:

Data Analytics and AI Boost Accuracy to Drive Global Smart Water and Wastewater Leak Detection Solutions Market is part of Frost & Sullivan's Global Energy and Environment Growth Partnership Service program.

About Frost & Sullivan

For six decades, Frost & Sullivan has been world-renowned for its role in helping investors, corporate leaders and governments navigate economic changes and identify disruptive technologies, Mega Trends, new business models, and companies to action, resulting in a continuous flow of growth opportunities to drive future success.

Contact us: Start the discussion

Data Analytics and AI Boost Accuracy to Drive Global Smart Water and Wastewater Leak Detection Solutions Market

MF9F-15

Contact:

Srihari Daivanayagam, Corporate Communications
M: +91 9742676194; P: +91 44 6681 4412
E: [emailprotected]
http://ww2.frost.com

http://www.frost.com

SOURCE Frost & Sullivan

Visit link:
Data Analytics and Artificial Intelligence to Propel Smart Water and Wastewater Leak Detection Solutions Market - PR Newswire India