Category Archives: Artificial Intelligence

How to Invest in Robotics and Artificial Intelligence – Analytics Insight

We frequently put robotics and artificial intelligence together, but they are two separate fields. The robotics and artificial intelligence industries are some of the largest markets in the tech space today. Almost every industry in the world is adopting these technologies to boost growth and increase customer engagement.

According to reports, the global robotics market is expected to reach US$158.21 billion by 2025, growing at a CAGR of 19.11% from 2018. This growth is tied to the increasing adoption of artificial intelligence and robotics technology. Between 2020 and 2025, the market is projected to grow at a CAGR of 25.38%.

During the pandemic, demand for robotics technology has increased drastically. The medical field is deploying surgical robots in the fight against Covid-19. Robots are helping healthcare professionals and patients by delivering food and medications, measuring vitals, and aiding social distancing.

The automation industry is also using robotics technology to drive growth and transformation. Other industries like food, defense, manufacturing, retail, and others are also deploying robotics.

According to reports, the global AI market is expected to grow from US$58.3 billion in 2021 to US$309.6 billion by 2026. Among the many factors driving growth in the artificial intelligence market, the Covid-19 pandemic is the chief one.
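
For readers unfamiliar with the CAGR figures these reports quote, the relationship between a starting value, an ending value, and a compound annual growth rate is a one-line formula. A minimal sketch in Python, using the AI market figures above (the rounding in the comments is ours, not the report's):

```python
# CAGR (compound annual growth rate) relates a start value, an end value,
# and a number of years: end = start * (1 + cagr) ** years.
# Figures below are the AI market numbers quoted above (US$ billion).
start, end, years = 58.3, 309.6, 5  # 2021 -> 2026

implied_cagr = (end / start) ** (1 / years) - 1
print(f"implied CAGR: {implied_cagr:.1%}")  # roughly 40% per year

# Running the same formula forward reproduces the end value:
projected = start * (1 + implied_cagr) ** years
print(f"projected 2026 market: US${projected:.1f} billion")
```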

The pandemic has encouraged new applications and technological advancements in the market. Industries like healthcare, food, and manufacturing are increasingly adopting AI technologies to promote efficiency in business operations. Big tech companies like Microsoft, IBM, and Google are deploying AI to facilitate drug development, remote communication between patients and healthcare providers, and other services. AI-powered machines are also helping educators track students' performances, bridge gaps in teaching techniques, and automate laborious administrative tasks.


View original post here:
How to Invest in Robotics and Artificial Intelligence - Analytics Insight

Artificial Intelligence Restores Mutilated Rembrandt Painting The Night Watch – ARTnews

One of Rembrandt's finest works, Militia Company of District II under the Command of Captain Frans Banninck Cocq (better known as The Night Watch) from 1642, is a prime representation of Dutch Golden Age painting. But the painting was greatly disfigured after the artist's death, when it was moved from its original location at the Arquebusiers Guild Hall to Amsterdam's City Hall in 1715. City officials wanted to place it in a gallery between two doors, but the painting was too big to fit. Instead of finding another location, they cut large panels from the sides as well as some sections from the top and bottom. The fragments were lost after removal.

Now, centuries later, the painting has been made complete through the use of artificial intelligence. The Rijksmuseum in the Netherlands has owned The Night Watch since it opened in 1885 and considers it one of the best-known paintings in its collection. In 2019, the museum embarked on a multi-year, multi-million-dollar restoration project, referred to as Operation Night Watch, to recover the painting. The effort marks the 26th restoration of the work over the span of its history.

Restoring The Night Watch to its original size hadn't been considered until the eminent Rembrandt scholar Ernst van de Wetering suggested it in a letter to the museum, noting that the composition would change dramatically. The museum tapped its senior scientist, Rob Erdmann, to head the effort using three primary tools: the remaining preserved section of the original painting, a 17th-century copy of the original attributed to Gerrit Lundens that had been made before the cuts, and AI technology.

About the decision to use AI to reconstruct the missing pieces instead of commissioning an artist to repaint the work, Erdmann told ARTnews, "There's nothing wrong with having an artist recreate [the missing pieces] by looking at the small copy, but then we'd see the hand of the artist there. Instead, we wanted to see if we could do this without the hand of an artist. That meant turning to artificial intelligence."

AI was used to solve a set of specific problems, the first of which was that the copy made by Lundens is one-fifth the size of the original, which measures almost 12 feet in length. The other issue was that Lundens painted in a different style than Rembrandt, which raised the question of how the missing pieces could be restored to an approximation of how Rembrandt would have painted them. To address these problems, Erdmann created three separate neural networks, a type of machine learning technology that trains computers to perform specific tasks.

The first [neural network] was responsible for identifying shared details. It found more than 10,000 details in common between The Night Watch and Lundens's copy. For the second, Erdmann said, once you have all of these details, everything had to be warped into place, essentially by tinkering with the pieces: scoot[ing one part] a little bit to the left, making another section of the painting 2 percent bigger, and rotat[ing another] by four degrees. This way all the details would be perfectly aligned to serve as inputs to the third and final stage. "That's when we sent the third neural network to art school."

Erdmann made a test for the neural network, similar to flashcards, by splitting up the painting into thousands of tiles and placing matching tiles from both the original and the copy side by side. The AI then had to create an approximation of those tiles in the style of Rembrandt. Erdmann graded the approximations, and if it painted in the style of Lundens, it failed. After the program ran millions of times, the AI was ready to reproduce tiles from the Lundens copy in the style of Rembrandt.
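
The museum has not published the code behind this process, but the flashcard description maps onto a standard paired tile-to-tile training loop. Below is a minimal, hypothetical sketch in PyTorch: random tensors stand in for the aligned Lundens/Rembrandt tile pairs, and a small convolutional network plays the role of the third neural network; the real model, losses, and data pipeline would be far more elaborate.

```python
# Hypothetical sketch of the third stage: training a network to repaint
# tiles from the Lundens copy in the style of Rembrandt's original.
# Random tensors stand in for the aligned tile pairs; the museum's actual
# pipeline, model, and loss are not public.
import torch
import torch.nn as nn

# Aligned tile pairs: input = tile from the Lundens copy,
# target = the corresponding tile from the surviving Rembrandt original.
copy_tiles = torch.rand(256, 3, 64, 64)       # stand-in data
rembrandt_tiles = torch.rand(256, 3, 64, 64)  # stand-in data

# A small convolutional "translator" network (far simpler than anything used in practice).
model = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.L1Loss()  # the "grade": how far the output is from Rembrandt's tile

for epoch in range(5):
    for i in range(0, len(copy_tiles), 32):
        inputs = copy_tiles[i:i + 32]
        targets = rembrandt_tiles[i:i + 32]
        predictions = model(inputs)
        loss = loss_fn(predictions, targets)  # fails the "test" if it still looks like Lundens
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")

# Once trained, the model can repaint tiles that exist only in the copy:
with torch.no_grad():
    missing_tile = torch.rand(1, 3, 64, 64)  # a tile from a lost section
    reconstruction = model(missing_tile)
```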

The AI's reproduction was printed onto canvas and lightly varnished, and then the reproduced panels were attached to the frame of The Night Watch over top of the fragmented original. The reconstructed panels do not touch Rembrandt's original painting and will be taken down in three months out of respect for the Old Master. "It already felt to me like it was quite bold to put these computer reconstructions next to Rembrandt," Erdmann said.

As for the original painting by Rembrandt, it may receive conservation treatment depending on the conclusions of the research being conducted as part of Operation Night Watch. The painting has sustained damage that may warrant additional interventions. In 1975, the painting was slashed several times, and, in 1990, it was splashed with acid.

The reconstructed painting went on view at the Rijksmuseum on Wednesday and will remain there into September.

Continue reading here:
Artificial Intelligence Restores Mutilated Rembrandt Painting The Night Watch - ARTnews

Banking on AI: The Opportunities and Limitations of Artificial Intelligence in the Fight Against Financial Crime and Money Laundering – International…

By Justin Bercich, Head of AI, Lucinity

Financial crime has thrived during the pandemic. It seems obvious that the increase in digital banking, as people were forced to stay inside for months on end, would correlate with a sharp rise in money laundering (ML) and other nefarious activity, as criminals exploited new attack surfaces and the global uncertainty caused by the pandemic.

But when you consider that fines for money-laundering violations have catapulted by 80% since 2019, you begin to realise just how serious and widespread the situation is. Consequently, the US Government is making strides to re-write its anti-money laundering (AML) rulebook, having enacted its first major piece of AML legislation since 2004 earlier this year. New Secretary of the Treasury Janet Yellen, with her decades of financial regulation experience, adds further credence to the view that the AML sector is primed for more significant reform in the coming months and years.

Yet, despite the positives and promises of technological innovation in the AML space, there still remains great debate and scepticism about the ethics and viability of incorporating artificial intelligence (AI) and machine learning deeply into banks and the broader financial ecosystem. What are the opportunities and limitations of AI, and how can we ensure its application remains ethical for all?

Human AI: A bank's newest investigator

While AI isn't a new asset in the fight against financial crime, Human AI is a ground-breaking application that has the potential to drastically improve compliance programs among forward-thinking banks. Human AI is all about bringing together the best tools and capabilities of people and machines. Together, human and machine help one another unearth important insights and intelligence at the exact point when key decisions need to be made, forming the perfect money-laundering front-line investigator and drastically improving productivity in AML.

The most powerful aspect of Human AI is that it's a self-fulfilling cycle. Insights are fed back into the machine learning model so that both human and technology improve. After all, the more the technology improves, the more the human trusts it. As we gain trust in the technology, we feed more relevant human-led insights back into the machine, ultimately resulting in a flowing stream of synergies that strengthens the Human-AI nexus, empowering users and improving our collective defenses against financial crime. That is Human AI.

An example of this in action is Graph Data Science (GDS), an approach that is capable of finding hidden relationships in financial transaction networks. The objective of money launderers is to hide in plain sight, while AML systems are trying to uncover the hidden connections between a seemingly normal person or entity and a nefarious criminal network. GDS helps uncover these links, instead of relying on a human to manually trawl through a jungle of isolated spreadsheets with thousands of fields.
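
Lucinity has not published its GDS implementation, but the core idea, representing accounts as nodes and transactions as edges and then searching for indirect paths to flagged entities, can be sketched in a few lines. This toy example uses the networkx library with invented accounts and amounts:

```python
# Illustrative sketch only: how a graph view can surface hidden links between
# a seemingly ordinary account and a known bad actor. The accounts and
# transactions are invented; real AML graphs are built from institutional data.
import networkx as nx

G = nx.DiGraph()
transactions = [
    ("customer_a", "shell_co_1", 9500),
    ("shell_co_1", "shell_co_2", 9400),
    ("shell_co_2", "known_launderer", 9300),
    ("customer_b", "grocery_store", 54),
]
for sender, receiver, amount in transactions:
    G.add_edge(sender, receiver, amount=amount)

# A human scrolling spreadsheets sees customer_a paying only shell_co_1;
# the graph exposes the indirect route to the flagged entity.
if nx.has_path(G, "customer_a", "known_launderer"):
    path = nx.shortest_path(G, "customer_a", "known_launderer")
    print("Hidden link:", " -> ".join(path))
```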

Human AI brings us all together

What's more, a better understanding of AI doesn't just benefit the banks and financial institutions wielding its power on the frontline; it also strengthens the relationship between bank and regulator. Regulators need to understand why a decision has been made by AI in order to determine its efficacy, and with Human AI becoming more accessible and transparent (and, therefore, human), banks can ensure machine-powered decisions are repeatable, understandable, and explainable.

This is otherwise known as Explainable AI, meaning investigators, customers, or any user of an AI system have the ability to see and interact with data that is logical, explainable and human. Not only does this help build a bridge of trust between humans and machines, but also between banks and regulators, ultimately leading to better systems of learning that help improve one another over time.

This collaborative attitude should also be extended to the regulatory sandbox, a virtual playground where fintechs and banks can test innovative AML solutions in a realistic and controlled environment overseen by the regulators. This prevents brands from rushing new products into the market without the proper due diligence and regulatory frameworks in place.

Known as Sandbox 2.0, this approach represents the future of policy making, giving fintechs the autonomy to trial cutting-edge Human AI solutions that tick all the regulatory boxes, and ultimately result in more sophisticated and effective weapons in the fight against financial crime and money laundering.

Overhyped or underused? The limitations of AI

Anti-money laundering technology has, in many ways, been our last line of defence against financial crime in recent years: a dam that is ready to burst at any moment. Banks and regulators are desperately trying to keep pace with the increasing sophistication of financial criminals and money launderers. New methods for concealing illicit activity come to the surface every month, and technological innovation is struggling to keep up.

This is compounded by our need to react quicker than ever before to new threats. This leaves almost no room for error, and often not enough time to exercise due diligence and ethical considerations. Too often, new AI and machine learning technologies are prematurely hurried out into the market, almost like rushing soldiers to the front line without proper training.

Increasing scepticism around AI is understandable, given the marketing bonanza of AI as a panacea for growth. Banks that respect the opportunities and limitations of AI will use the technology to focus more on efficiency gains and optimization, allowing AI algorithms to learn and grow organically, before looking to extract deeper intelligence used to drive revenue growth. It is a wider business lesson that can easily be applied to AI adoption: banks must learn their environment, capabilities, and limitations before mastering a task.

What banks must also remember is that AI experimentation comes with diminishing returns. They should focus on executing strategic, production-ready AI micro-projects in parallel with human teams to deliver actionable insights and value. At the same time, this technology can be trained to learn from interactions with its human colleagues.

But technology can't triumph alone

AI and machine learning are now being applied across most major aspects of the financial ecosystem, in areas that have traditionally been people-focussed, such as issuing new products, performing compliance functions, and customer service. This requires an augmentation of thinking, where human and AI work alongside one another to achieve a common goal, rather than just throwing an algorithm at the problem.

But of course, we must recognise that this technology can't win the fight in isolation. This isn't the time to keep our cards close to our chests: the benefits of AI against financial crime and ML must be made accessible to everyone affected.

Data must be tracked across all vendors and along the entire supply chain, from payments processors to direct integrations. And the AI technology being used to enable near-real-time information sharing must go both ways: from bank to regulator and back again. Only then can suspicious activity be analysed effectively, meaning everyone can trust the success of AI.

Over the next few years, the potential of Human AI will be brought to life. Building trust between one another is crucial to addressing black-box concerns, along with consistent training of AI and machines to become more human in their output, which will ultimately make all our lives more fulfilling.

Continued here:
Banking on AI: The Opportunities and Limitations of Artificial Intelligence in the Fight Against Financial Crime and Money Laundering - International...

Acceleration of Artificial Intelligence in the Healthcare Industry – Analytics Insight

Healthcare Industry Leverages Artificial Intelligence

With the continuous evolution of artificial intelligence, the world is benefiting enormously, as the applications of artificial intelligence are unremitting. The technology can be applied in any sector of industry, including healthcare. The advancement of technology, with AI as a part of it, has resulted in the formation of a digital macrocosm. Artificial intelligence, to be precise, is programming in which human intelligence is replicated in machines so that they work and act like humans.

Artificial intelligence is transforming the systems and methods of the healthcare industry. Artificial intelligence and healthcare have been working together for over half a century. The healthcare industry uses Natural Language Processing (NLP) to categorise certain data patterns. NLP is the process of giving a computer the ability to understand text and spoken words in much the same way human beings can. In the healthcare sector, it powers clinical decision support. NLP uses algorithms that can mimic human responses to conversation and queries. Just like a human, it can take the form of a simulated mediator, using algorithms to connect with health plan members.

Artificial intelligence can be used in clinical trials to hasten the search for and validation of medical coding. This can help reduce the time needed to start, improve, and complete clinical trials. In simple words, medical coding is translating medical data about a patient into alphanumeric code.

Clinical Decisions: All healthcare sectors are overwhelmed with gigantic volumes of growing responsibilities and health data. Machine learning technologies, as a part of artificial intelligence, can be applied to electronic health records, with the help of which clinical professionals can hunt for proper, error-free, confirmation-based statistics that have been curated by medical professionals. Further, Natural Language Processing, just like chatbots, can be used for everyday conversation, where it allows users to type questions as if they were questioning a medical professional and receive fast and unfailing answers.

Health Equity: Artificial intelligence and machine learning algorithms can be used to reduce bias in this sector by promoting diversity and transparency in data, helping to improve health equity.

Medication Detection: Artificial intelligence can be used by pharma companies to aid drug discovery, thus helping reduce the time it takes to discover a drug and take it all the way to market. Machine learning and big data, as parts of artificial intelligence, have great potential to cut down the cost of new medications.

Pain Management: With the help of artificial intelligence and by creating simulated realities, patients can be distracted from their existing source of pain. Not only that, AI can also be used to help address the narcotic crisis.

System-Networked Infirmaries: Unlike now, where one big hospital treats all kinds of diseases, care can be divided into smaller hubs and spokes, with all these small and big clinics connected to a single digital framework. With the help of AI, it becomes easier to spot patients who are at risk of deterioration.

Medical Images and Diagnosis: Artificial intelligence, alongside medical coding, can go through images and X-rays of the body to identify signs of the disease that is to be treated. Further, artificial intelligence technology, with the help of electronic health records, is used in the healthcare industry to allow cardiologists to recognize critical cases first and give diagnoses with accuracy, potentially avoiding errors.

Health Record Analysis: With the advance of artificial intelligence, it is now easy for patients as well as doctors to collect everyday health data. Smartwatches that help measure heart rate are the best example of this technology.

This is just the beginning of artificial intelligence in the healthcare industry. Starting from Natural Language Processing, algorithms, and medical coding, through imaging and diagnosis, there is a long way to go before artificial intelligence is capable of innumerable activities and can help medical professionals make superior decisions. The healthcare industry is now focusing on technological innovation in serving its patients. Artificial intelligence has greatly transformed the healthcare industry, resulting in improvements in patient care.


Continued here:
Acceleration of Artificial Intelligence in the Healthcare Industry - Analytics Insight

Data Privacy Is Key to Enabling the Medical Community to Leverage Artificial Intelligence to Its Full Potential – Bio-IT World

Contributed Commentary by Mona G. Flores, MD

June 24, 2021 | If there's anything the global pandemic has taught healthcare providers, it is the importance of timely and accurate data analysis and being ready to act on it. Yet these same organizations must move within the bounds of patient rights regulations, both existing and emerging, making it harder to access the data needed for building relevant artificial intelligence (AI) models.

One way to get around this constraint is to de-identify the data before curating it into one centralized location where it can be used for AI model training.

An alternative option would be to keep the data where it originated and learn from this data in a distributed fashion without the need for de-identification. New companies are being created to do this, such as US startup Rhino Health. It recently raised $5 million (US) to connect hospitals with large databases from diverse patient populations to train and validate AI models using Federated Learning while ensuring privacy.

Other companies are following suit. This is hardly surprising considering that the global market for big data analytics in health care was valued at $16.87 billion in 2017 and is projected to reach $67.82 billion by 2025, according to a report from Allied Market Research.

Federated Learning Entering the Mainstream

AI already has led to disruptive innovations in radiology, pathology, genomics, and other fields. To expand upon these innovations and meet the challenge of providing robust AI models while ensuring patient privacy, more healthcare organizations are turning to federated learning.

With Federated Learning, institutions hide their data and seek the knowledge. Federated Learning brings the AI model to the local data, trains the model in a distributed fashion, and aggregates all the learnings along the way. In this way, no data is exchanged whatsoever. The only things exchanged are model gradients.

Federated Learning comes in many flavors. In the client-server model employed by Clara FL today, the server aggregates the model gradients it receives from all of the participating local training sites (Client-sites) after each iteration of training. The aggregation methodology can vary from a simple weighted average to more complex methods chosen by the administrator of the FL training.

The end result is a more generalizable AI model trained on all the data from each one of the participating institutions while maintaining data privacy and sovereignty.
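
As a concrete illustration of the aggregation step described above, here is a toy federated-averaging sketch in Python with NumPy. It is not Clara FL; the linear model, the learning rate, and the three simulated "hospitals" are invented for illustration, but the pattern (local updates on private data, then a size-weighted average on the server) is the one described here.

```python
# Toy federated averaging: clients never share data, only model parameters,
# and the server combines them with a weighted average.
import numpy as np

def local_update(weights, local_data, lr=0.1):
    """One round of local training on a client's private data (toy linear model)."""
    X, y = local_data
    grad = X.T @ (X @ weights - y) / len(y)  # gradient of mean squared error
    return weights - lr * grad

def federated_average(client_weights, client_sizes):
    """Server step: weighted average of client models by dataset size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
# Each simulated "hospital" holds its own private dataset of a different size.
clients = []
for n in (200, 50, 120):
    X = rng.normal(size=(n, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    clients.append((X, y))

global_w = np.zeros(2)
for round_ in range(50):
    local_ws = [local_update(global_w.copy(), data) for data in clients]
    global_w = federated_average(local_ws, [len(d[1]) for d in clients])

print("federated estimate:", global_w)  # converges toward [2, -1] without pooling any raw data
```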

Early Federated Learning Work Shows Promise

New York-based Mount Sinai Health System recently used federated learning to analyze electronic health records to better predict how COVID-19 patients will progress, using an AI model and data from five separate hospitals. The federated learning process allowed the model to learn from multiple sources without exposing patient data.

The federated model outperformed local models built using data from each hospital separately, showing better predictive capability.

In a larger collaboration among NVIDIA and 20 hospitals, including Mass General Brigham, the National Institutes of Health in Bethesda, and others in Asia and Europe, the work focused on creating a triage model for COVID-19. The FL model predicted, on initial presentation, whether a patient with symptoms suspicious for COVID-19 would end up needing supplemental oxygen within a certain time window.

Considerations and Coordination

While Federated Learning addresses the issue of data privacy and data access, it is not without its challenges. Coordination between the client sites needs to happen to ensure that the data used for training is cohesive in terms of format, pre-processing steps, labels, and other factors that can affect training. Data that is not identically distributed across the various client sites can also pose problems for training, and this is an area of active research. And there is also the question of how the US Food and Drug Administration, the European Union, and other regulatory bodies around the world will certify models trained using Federated Learning. Will they require some way of examining the data that went into training to be able to reproduce the results of Federated Learning, or will they certify a model based on its performance on external data sets?

In January, the U.S. Food and Drug Administration updated its action plan for AI and machine learning in software as a medical device, underscoring the importance of inclusivity across dimensions like sex, gender, age, race, and ethnicity when compiling datasets for training and testing. The European Union also includes a right to explanation from AI systems in GDPR.

It remains to be seen how they will rule on Federated Learning.

AI in the Medical Mainstream

As Federated Learning approaches enter the mainstream, hospital groups are banking on Artificial Intelligence to improve patient care, improve the patient experience, increase access to care, and lower healthcare costs. But AI needs data, and data is money. Those who own these AI models can license them around the world or can share in commercial rollouts. Healthcare organizations are sitting on a gold mine of data. Leveraging this data securely for AI applications is a golden goose, and those organizations that learn to do this will emerge the victors.

Dr. Mona Flores is NVIDIA's Global Head of Medical AI. She brings a unique perspective with her varied experience in clinical medicine, medical applications, and business. She is a board-certified cardiac surgeon and the previous Chief Medical Officer of a digital health company. She holds an MBA in Management Information Systems and has worked on Wall Street. Her ultimate goal is the betterment of medicine through AI. She can be reached at mflores@nvidia.com.

More here:
Data Privacy Is Key to Enabling the Medical Community to Leverage Artificial Intelligence to Its Full Potential - Bio-IT World

AI@EIF 2021: Artificial Intelligence and Machine Learning – ARC Viewpoints

The ARC Industry Forum Europe 2021, "Accelerating Digital Transformation in a Post-COVID World," was held as a virtual event due to the ongoing pandemic. The digital event attracted participants from all sectors of industrial production. In this series of blogs, we are presenting the highlights from our forum.

Would you like to watch a session again, or did you miss one? The presentation and panel discussion videos are now available on ARC Industry Forum Europe 2021 (vfairs.com) until August 19 by clicking on Presentations. Furthermore, you can still visit sponsor booths by clicking on Exhibit Hall. ARC and the sponsors have also uploaded valuable videos and reports on this platform for you to add to your vfairs briefcase. If you attended the ARC Industry Forum in May, you can still register on the platform.

Advanced companies have already left the conceptual phase far behind and are applying AI in various steps of manufacturing. This session showed various examples of use cases for AI and how it is used today.

For AI, the usual suspects are quality control and predictive maintenance, which provide an easy and quick return on investment, but the session clearly showed that the range of applications today is already much broader, including supply chain, root cause analysis, time series analysis, and much more. Microsoft shared how they provide the backbone for many operations. IBM shared use cases of a cognitive supply chain, which adapts to disruptive events as well as changes in company processes and who made them. Siemens shared a detailed use case of quality inspection in electronics manufacturing. Phoenix Contact illustrated how they were able to detect anomalies and do root cause analysis using only 3-5 percent of all IO signals. Pfizer provided insight into their current transformation process, featuring examples with Natural Language Processing for maintenance.

AI is not the problem. In fact, it is the 80 to 90 percent of work needed around the actual AI part that determines the success or failure of an AI project. This was the basis of the AI workshop.

Compared to past technology adoptions, we now have a much better understanding of these challenges, such as the human factor during introduction or maintenance to lower lifecycle costs. The experts emphasized how important collaboration between different teams is, and that many of them now also work more frequently with psychologists to enable successful change management. Change management is needed for AI, as it cuts across functions, brings together different stakeholders, and impacts business processes and structure. A clear learning was that it is important for end users to reduce overall complexity with AI and not just shift complexity from blue-collar workers to white-collar workers.

We worked intensely to create a guideline to solve three of the top AI challenges and were able to share them with the audience. Our experts from Voith, Philip Morris, Siemens, and Nnaisense combined not only deep AI know-how, but also decades of plant floor experience.

We would like to thank all experts, panelists, and presenters of the "Artificial Intelligence and Machine Learning" session at the ARC Advisory Group European Industry Forum 2021. And a special thanks to all our sponsors. Global sponsor: Siemens; Gold sponsors: ANDRITZ and OPC Foundation; and Silver sponsors: Capgemini, CC-Link, Optimistik, Orange Cyberdefense, PHOENIX CONTACT, and RapidMiner.

Presentations and panel discussion videos are available until August 19 for you to watch at your convenience. We encourage you and your colleagues to log in or register and watch the recordings on our platform: ARC Industry Forum Europe 2021 (vfairs.com).

Continued here:
AI@EIF 2021: Artificial Intelligence and Machine Learning - ARC Viewpoints

Hicks Announces New Artificial Intelligence Initiative > US DEPARTMENT OF DEFENSE > Defense Department News – Department of Defense

The integration of artificial intelligence technology is about trust, and a responsible AI ecosystem is the foundation for that trust, Deputy Defense Secretary Kathleen H. Hicks said today.

Speaking virtually to the opening of the Defense Department's Artificial Intelligence Symposium and Tech Exchange, Hicks said DOD's operators must come to trust the outputs of AI systems; its commanders must come to trust the legal, ethical and moral foundations of explainable AI; and the American people must come to trust the values DOD has integrated into each of its applications.

"A key part of an AI-ready department is a strong data foundation," Hicks said. "Data enables the creation of algorithmic models, and, with the right data, we are able to take concepts and ideas and turn them into reality."

The deputy secretary said she recently set forth a series of data decrees for DOD that will help the U.S. achieve the AI superiority it needs.

"We will ensure that DOD data is visible, accessible, understandable, linked, trustworthy, interoperable and secure. To do so, I have directed key initial steps to ensure the department treats data as a strategic asset," she said, adding these steps set DOD on a solid foundation both ethically and organizationally.

"Today, I am proud to announce the DOD AI and Data Acceleration initiative, or ADA initiative. Its goal is to rapidly advance data and AI dependent concepts, like joint all-domain command and control, to the ADA initiative [to] generate foundational capabilities through a series of implementation experiments or exercises, each one purposefully building understanding through successive and incremental learning."

Hicks said each exercise pushes the boundaries of the one before, building on the knowledge gained. She said this represents a software engineering approach that will iteratively gain and expand capabilities across different lines of effort.

"Importantly, these events will be conducted in alignment with the busy combatant command experimentation and exercise cycle," Hicks said. "Through successive experiments, we seek to understand the obstacles and challenges that impair our current ability to rapidly scale AI across the department and the Joint Force."

As DOD completes these episodic exercises and experiments, it intends to leave behind capability, Hicks said. "True to our software engineering mindset, we aim to interactively gain capability and rapidly scale to other combatant command environments with similar challenges. This will ultimately produce data and operational platforms designed for real-time sensor data fusion, automated command-and-control tasking and autonomous system integration. It will allow data to flow across both geographic and functional commands."

Hicks said DOD's fourth line of effort will set the stage for advanced data management platforms consistent with the data decrees. These platforms will enable open data standard architecture and the production of scalable, testable and repeatable data workflows. This will facilitate cross-domain and cross-component experimentation and development. By generating centralized and scalable data, DOD will be accelerating the gains from leveraging AI, she explained.

The ADA initiative recognizes the challenges that DOD is facing and provides a systematized method to harness data and AI. It also creates a path forward for a mission space that has often appeared to be more rhetoric than action, Hicks said.

"You represent the department and its many partners who are rising to the competitive challenge of our future. [Secretary of Defense Lloyd J. Austin III] and I need your help to harness our innovation, build trust, modernize our processes, and serve our great nation," Hicks said, thanking the group for its efforts."

See the rest here:
Hicks Announces New Artificial Intelligence Initiative > US DEPARTMENT OF DEFENSE > Defense Department News - Department of Defense

Artificial Intelligence In Healthcare And How It’s Transforming The Industry – BioSpace

We have enjoyed the power of technology in the past few decades, and we have seen it progress. From the gadgets we use daily to make our lives more convenient, to the medical field and healthcare, we have been using artificial intelligence to make things easier.

AI in healthcare benefits both medical practitioners and patients alike. Let's dive in on how we're using this and how we can use it in the future.

RELATED: AI Applications for Clinical Trials Increase, Refining Endpoints, Quantifying Pain, & More

The future of healthcare is here, as we are using artificial intelligence in diagnostics and treatment. This can only mean that we can expect advancements in this field to come further and faster.

Here are some examples of how we apply artificial intelligence in healthcare:

Medical practitioners use AI in healthcare at every scale, from the smallest tasks to the biggest and most crucial ones, such as dealing with high-risk diseases. On a small scale, patients use telehealth through computers and mobile devices.

There are telehealth tools used for documentation, recording metrics, and processing information. These are commonly used from home.

Especially in times like these, going out of your home can be a threat because of the pandemic. Telemedicine is one of the best options, especially for those who need immediate care.

Doctors and physicians use AI on patients to detect early signs of stroke, cancer, and neurological or cardiovascular disorders by applying algorithms to recorded data. This way, the computer can see the trends and activity of a person's organs to catch and cure a potential disease before it can pose a threat.

IBM recently partnered with Pfizer to develop an AI machine that can detect the early onset of Alzheimer's disease in a person. The test evaluates cognitive impairment in various neurological disorders, including stroke and Alzheimer's disease.

In addition to helping with diagnosis and prevention, AI can also be used by physicians as their assistants when dealing with patients.

A study revealed that physicians spend almost half of their work time dealing with data in Electronic Health Records (EHRs). Primary care physicians can focus more on dealing with patients, since computers can now take notes for them, analyze discussions with the patients, and enter the necessary information into the EHRs.

In addition to this, science now uses voice recognition and speech dictation to make clinical tasks possible through natural language processing, a process in which the computer captures the commands given by a person and converts them into data.

In relation to the use of EHRs, AI can help treat patients through personalized medicine. With all the records stored in the computer, it can analyze large quantities of data to identify treatment options instantly based on a patient's background.

Precise and quick drug development and clinical trials are now possible because of AI.

Computers and artificial intelligence can help clinicians work efficiently and lead to more precise diagnoses at the clinical level.

Valence Discovery recently brought machine learning and artificial intelligence for molecular property prediction and multiparameter optimization in preclinical drug discovery to Charles River's clients.

RELATED: AI in Biopharma: Deep Genomics and BioMarin Forge Pact; InterVenn Raises $34 Million

Wearable technology like smartwatches or even smartphones can detect oxygen levels, heart rate, and even violent falls.

These devices can even directly call emergency services if readings reach a critical level, making these smart devices a reliable way to prevent serious conditions.

Most useful to dermatologists and ophthalmologists, using a smartphone to take selfies for diagnostics is now being used to examine and treat patients, especially in this day and age.

With the popularity of phone-in check-ups during this pandemic, using this technology for clinical improvements and diagnosis can be considered a step up for healthcare.

Adding smart device capabilities to hospital machines and devices can help doctors detect an early sign of a patient's critical condition through algorithms and patterns.

"When we're talking about integrating disparate data from across the healthcare system, integrating it, and generating an alert that would alert an ICU doctor to intervene early, the aggregation of that data is not something that a human can do very well," Mark Michalski, MD, Executive Director of the MGH & BWH Center for Clinical Data Science, said in an interview.

How can technology help a person deal with pain, you ask? Artificial intelligence combined with virtual reality is being used as a pain management tool by some companies.

Clinics and hospitals can create simulated realities to distract patients from their pain, and even to help address the opioid crisis.

Johnson & Johnson's Reality Program is the first to do this and is expected to become a trend, with other clinics and hospitals following suit.

Medical experts predict that obtaining tissue samples and other radiology tools will be improved through AI.

Non-invasive tools like X-rays, MRI machines, and CT scans provide internal visibility of the body, while biopsies collect tissue samples from organs. In the future, with the development of AI technology, these things may be done without being invasive or causing any harm to patients.

"We want to bring together the diagnostic imaging team with the surgeon or interventional radiologist and the pathologist," Alexandra Golby, MD, Director of Image-Guided Neurosurgery at Brigham & Women's Hospital, said in an interview. "That coming together of different teams and aligning goals is a big challenge."

"If we want the imaging to give us information that we presently get from tissue samples, then we're going to have to be able to achieve very close registration so that the ground truth for any given pixel is known."

Technology and artificial intelligence have been vital to the progress of healthcare. They have been a major help with drug discovery and the recognition of diseases.

As researchers continue to discover new technology, the medical field, doctors, and patients will benefit from its advancements.

Go here to see the original:
Artificial Intelligence In Healthcare And How It's Transforming The Industry - BioSpace

5 things we learned about artificial intelligence at Discovery Place – Qcity metro

Sponsored by:

What comes to mind when you think of artificial intelligence?

Maybe you think of an evil cyborg created to kill and destroy? Or maybe a network of vast computers that threatens to steal your job.

Whatever your idea, the truth about artificial intelligence, or AI for short, is that it's far more commonplace than you might imagine, says HP Newquist, a computer historian who developed the Artificial Intelligence: Your Mind & The Machine exhibit currently at Discovery Place Science.

"The reality is, we already have AI all around us," he said. "We have it in our cell phones with Siri and Alexa. We have it in our homes and Google GPS maps."

Every time you buy a product on Amazon or watch a show on Netflix, AI is watching.

The exhibit, Newquist said, is meant to make artificial intelligence real and relevant to more people.

Here are five things we learned while touring the exhibit:

In an industry dominated by men, women have played a central role in the development of AI.

From Mary Shelley, the 19th-century author who wrote Frankenstein, right up to the current day, women have envisioned machines that could think like humans, or at least appear to, anyway.

You'll learn about Fei-Fei Li, a Chinese-born scientist who was an early pioneer of artificial intelligence.

Fun fact: A woman named Ada Lovelace is considered by some to be the world's first computer programmer, way back in the 1800s.

You know those cat pictures we love to hate on the internet? This exhibition tells you how they helped in the development of artificial intelligence.

That technology has now given way to facial-recognition software, which allows you to unlock your phone simply by looking into the screen.

Check out the photo above.

Can you tell which squares contain a Chihuahua and which show a blueberry muffin? Can artificial intelligence tell the difference? The Discovery Place exhibit provides some surprising answers.

Cameras loaded with facial-recognition software are everywhere in China, and video footage in the Discovery Place exhibit takes you there.

But here in the U.S., a growing number of police departments are using that same technology to locate suspects.

Because so much is at stake, Newquist said, the tech companies that are developing new generations of AI must do more to recruit and train people of color, lest (even more) racial bias gets built into the systems.

The exhibit will help you see other reasons why that's important.

Imagine the amount of artificial intelligence it takes to produce a self-driving car.

Someday that same technology may replace hundreds of thousands of long-haul truckers and other commercial drivers. And when that day arrives, Newquist said, countless other jobs that depend on human drivers also will be lost.

Is your job vulnerable to artificial intelligence?

The exhibit offers some clues.

When asked whether society is better off with or without artificial intelligence, Newquist was quick with his response.

With it, he said.

Despite the challenges, artificial intelligence has made our lives better and safer, eliminating mundane tasks and freeing up more of our time for productive (and fun) activities, he said.

It ultimately only takes a wrong turn when the wrong people use it, he said.

Visit Discovery Place Science in Uptown to see for yourself.

The AI exhibit will be there through August 22. To keep visitors safe and socially distanced, reservations are required.

Read the original:
5 things we learned about artificial intelligence at Discovery Place - Qcity metro

Code^Shift Lab Aims To Confront Bias In AI, Machine Learning – Texas A&M Today – Texas A&M University Today

As machines increasingly make high-risk decisions, a new lab at Texas A&M aims to reduce bias in artificial intelligence and machine learning.


The algorithms underpinning artificial intelligence and machine learning increasingly influence our daily lives. They can decide everything from which video we're recommended to watch next on YouTube to who should be arrested based on facial recognition software.

But the data used to train these systems often replicate the harmful social biases of the engineers who build them. Eliminating this bias from technology is the focus of Code^Shift, a new data science lab at Texas A&M University that brings together faculty members and researchers from a variety of disciplines across campus.

It's an increasingly critical initiative, said Lab Director Srividya Ramasubramanian, as more of the world becomes automated. Machines, rather than humans, are making many of the decisions around us, including some that are high-risk.

"Code^Shift tries to shift our thinking about the world of code or coding in terms of how we can be thinking of data more broadly in terms of equity, social healing, inclusive futures and transformation," said Ramasubramanian, professor of communication in the College of Liberal Arts. "A lot of trauma and a lot of violence has been caused, including by media and technologies, and first we need to acknowledge that, and then work toward reparations and a space of healing individually and collectively."

Bias in artificial intelligence can have major impacts. In just one recent example, a man has sued the Detroit Police Department after he was arrested and jailed for shoplifting after being falsely identified by the department's facial recognition technology. The American Civil Liberties Union calls it the first case of its kind in the United States.

Code^Shift will attempt to confront this issue using a collaborative research model that includes Texas A&M experts in social science, data science, engineering and several other disciplines. Ramasubramanian said eight different colleges are represented, and more than 100 people attended the lab's virtual launch last month.

Experts will work together on research, grant proposals and raising awareness in the broader public of the issue of bias in machine learning and artificial intelligence. Curriculum may also be developed to educate professionals in the tech industry, such as workshops and short courses on anti-racism literacy, gender studies and other topics that are sometimes not covered in STEM fields.

The lab's name references coding, which is foundational to today's digital world. It's also a play on code-switching, the way people change the languages they use or how they express themselves in conversation depending on the context.

As an immigrant, Ramasubramanian says she's familiar with living in two worlds. She offers several examples of computer-based biases she's encountered in everyday life, including an experience attempting to wash her hands in an airport bathroom.

Standing at the sink, Ramasubramanian recalls, she held her hands under the faucet. As she moved them back and forth and the taps stayed dry, she realized that the sensors used to turn the water on could not recognize her hands. It was the same case with the soap dispenser.

"It was something I never thought much about, but later on I was reading an article about this topic that said many people with darker skin tones were not recognized by many systems," she said.

Similarly, when Ramasubramanian began to work remotely during the COVID-19 pandemic, she noticed that her skin and hair color made her disappear against the virtual Zoom backgrounds. Voice recognition software she attempted to use for dictation could not understand her accent.

"The system is treating me as the other and different in many, many ways," she said. "And in return, there are serious consequences of who feels excluded, and that's not being captured."

Co-director Lu Tang, an assistant professor in the College of Liberal Arts who examines health disparity in underserved populations, says her research shows that Black patients, for example, must have much more severe symptoms than non-Black patients in order to be assigned certain diagnoses by computer software used in hospitals.

She said this is just one instance of the disparities embedded in technology. Tang's research also focuses on how machine learning algorithms used on social media platforms are more likely to expose people to misinformation about health.

"If I inhabit a social media space where a lot of my friends hold certain erroneous attitudes about things like vaccines or COVID-19, I will repeatedly be exposed to the same information without being exposed to different information," she said.

Tang is also interested in what she calls the filter bubble: the phenomenon where an algorithm leads a user on TikTok, YouTube or other platforms based on content they've watched in the past or what other people with similar viewing behaviors are watching at that moment. Watching just one video containing vaccine misinformation could prompt the algorithm to continue recommending similar videos. Tang said the filter bubble is another added layer that influences the content people are exposed to.
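
The filter bubble Tang describes is essentially co-viewing-based recommendation. The toy sketch below (invented users and video titles, not any platform's actual algorithm) shows how recommending whatever similar users watched keeps pointing a viewer back toward more of what they have already seen:

```python
# Toy co-viewing recommender: suggest the unseen videos most often watched
# by users who share at least one video with the target user.
from collections import Counter

watch_history = {
    "user_1": ["vaccine_myth_a", "vaccine_myth_b", "cooking_1"],
    "user_2": ["vaccine_myth_a", "vaccine_myth_c"],
    "user_3": ["cooking_1", "cooking_2", "gardening_1"],
    "user_4": ["vaccine_myth_b", "vaccine_myth_c"],
}

def recommend(user, history, k=2):
    """Rank unseen videos by how often they co-occur with the user's history."""
    seen = set(history[user])
    counts = Counter()
    for other, videos in history.items():
        if other != user and seen & set(videos):
            counts.update(v for v in videos if v not in seen)
    return [video for video, _ in counts.most_common(k)]

# After a couple of misinformation videos, the top suggestion is more of the same.
print(recommend("user_1", watch_history))
```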

"I think to really understand this society and how we are living today, we as social scientists and humanities scholars need to acknowledge and understand the way computers are influencing the way society is run today," Tang said. "I feel like working with computer science engineers is a way for us to combine our strengths to understand a lot of the problems we have in this society."

Computer Science and Engineering Assistant Professor Theodora Chaspari, another co-director of Code^Shift, agrees that minds from different disciplines are needed to design better systems.

To build an inclusive system, she said, engineers need to include representative data from all populations and social groups. This could help facial recognition algorithms better recognize faces of all races, she said, because a system cannot really identify a face until it has seen many, many faces. But engineers may not understand more subtle sources of bias, she said, which is why social and life sciences experts are needed to help with the thoughtful design of more equitable algorithms.
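
One simple, concrete version of the kind of check Chaspari describes is to break a model's accuracy down by demographic group and look for gaps. The records below are invented for illustration; in practice the groups, labels and predictions would come from a real evaluation set:

```python
# Per-group accuracy check: a large gap between groups signals that the
# training data or the model under-serves one population.
from collections import defaultdict

records = [
    # (group, true_label, predicted_label) -- invented evaluation data
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 0),
    ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 0), ("group_b", 1, 1),
]

correct = defaultdict(int)
total = defaultdict(int)
for group, truth, pred in records:
    total[group] += 1
    correct[group] += int(truth == pred)

for group in total:
    print(f"{group}: accuracy {correct[group] / total[group]:.2f}")
# Here the gap (1.00 vs 0.50) would flag the model for rebalancing or redesign.
```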

The goal of Code^Shift is to help bridge the gap between systems and people, Chaspari said. The lab will do this by raising awareness through not only research, but education.

"We're trying to teach our students about fairness and bias in engineering and artificial intelligence," Chaspari said. "They're pretty new concepts, but are very important for the new, young engineers who will come in the next years."

So far, Code^Shift has held small group discussions on topics like climate justice, patient justice, gender equity and LGBTQ issues. A recent workshop focused on health equity and the ways in which big data and machine learning can be used to take into account social structures and inequalities.

Ramasubramanian said a full grant proposal to the Texas A&M Institute of Data Science Thematic Data Science Labs Program is also being developed. The lab's directors hope to connect with more colleges and make information accessible to more people.

They say collaboration is critical to the initiative. The people who create algorithms often come from small groups, Ramasubramanian said, and are not necessarily collaborating with social scientists. Code^Shift asks for more accountability in how systems are created: who has access to the data, who's deciding how to use it, and how is it being shared?

Texas A&M is home to some of the world's top data scientists, Ramasubramanian said, making it an important place to have conversations about difficult topics like data equity.

"To me, we should also be leaders in thinking about the ethical, social, health and other impacts of data," she said.

To join the Code^Shift mailing list or learn more about collaborating with the lab, contact Ramasubramanian at srivi@tamu.edu.

Read this article:
Code^Shift Lab Aims To Confront Bias In AI, Machine Learning - Texas A&M Today - Texas A&M University Today