Category Archives: Machine Learning

Top 5 data quality & accuracy challenges and how to overcome them – VentureBeat

Every company today is data-driven, or at least claims to be. Business decisions are no longer made based on hunches or anecdotal trends as they were in the past. Concrete data and analytics now power businesses' most critical decisions.

As more companies leverage the power of machine learning and artificial intelligence to make critical choices, there must be a conversation around the quality (the completeness, consistency, validity, timeliness and uniqueness) of the data used by these tools. The insights companies expect to be delivered by machine learning (ML) or AI-based technologies are only as good as the data used to power them. The old adage "garbage in, garbage out" comes to mind when it comes to data-based decisions.
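
To make those dimensions concrete, here is a minimal pandas sketch that scores a toy customer table for completeness, uniqueness and validity; the column names and the email rule are assumptions made purely for illustration:

```python
import pandas as pd

# toy customer records with typical quality problems
df = pd.DataFrame({
    "customer_id": [1, 2, 2, 4, 5],                           # one duplicate ID
    "email": ["a@x.com", None, "b@x", "c@x.com", "d@x.com"],  # one missing, one malformed
})

completeness = df["email"].notna().mean()                 # share of non-missing values
uniqueness = 1 - df["customer_id"].duplicated().mean()    # share of non-duplicate IDs
validity = df["email"].str.match(r"[^@]+@[^@]+\.[^@]+$", na=False).mean()  # crude email check

print(f"completeness={completeness:.2f} uniqueness={uniqueness:.2f} validity={validity:.2f}")
```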

Statistically, poor data quality leads to increased complexity of data ecosystems and poor decision-making over the long term. In fact, organizations lose roughly $12.9 million every year to poor data quality. As data volumes continue to increase, so will the challenges that businesses face in validating their data. To overcome issues related to data quality and accuracy, it's critical to first know the context in which the data elements will be used, as well as the best practices to guide the initiatives along.

Data initiatives are not specific to a single business driver. In other words, determining data quality will always depend on what a business is trying to achieve with that data. The same data can impact more than one business unit, function or project in very different ways. Furthermore, the list of data elements that require strict governance may vary according to different data users. For example, marketing teams are going to need a highly accurate and validated email list while R&D would be invested in quality user feedback data.

The best team to discern a data element's quality, then, would be the one closest to the data. Only they will be able to recognize data as it supports business processes and ultimately assess accuracy based on what the data is used for and how.

Data is an enterprise asset. However, actions speak louder than words. Not everyone within an enterprise is doing all they can to make sure data is accurate. If users do not recognize the importance of data quality and governance (or simply don't prioritize them as they should), they are not going to make the effort either to anticipate data issues arising from mediocre data entry or to raise their hand when they find a data issue that needs to be remediated.

This might be addressed practically by tracking data quality metrics as a performance goal, to foster more accountability for those directly involved with data. In addition, business leaders must champion the importance of their data quality program. They should align with key team members on the practical impact of poor data quality: for instance, misleading insights shared in inaccurate reports for stakeholders, which can potentially lead to fines or penalties. Investing in better data literacy can help organizations create a culture of data quality and avoid making careless or ill-informed mistakes that damage the bottom line.

It is not practical to fix a large laundry list of data quality problems, and it's not an efficient use of resources either. The number of data elements active within any given organization is huge and growing exponentially. It's best to start by defining an organization's Critical Data Elements (CDEs), which are the data elements integral to the main function of a specific business. CDEs are unique to each business, though Net Revenue is a common CDE for most, as it is important for reporting to investors and other shareholders.

Since every company has different business goals, operating models and organizational structures, every company's CDEs will be different. In retail, for example, CDEs might relate to design or sales. On the other hand, healthcare companies will be more interested in ensuring the quality of regulatory compliance data. Although this is not an exhaustive list, business leaders might consider asking the following questions to help define their unique CDEs: What are your critical business processes? What data is used within those processes? Are these data elements involved in regulatory reporting? Will these reports be audited? Will these data elements guide initiatives in other departments within the organization?

Validating and remediating only the most critical elements will help organizations scale their data quality efforts in a sustainable and resourceful way. Eventually, an organization's data quality program will reach a level of maturity where frameworks (often with some level of automation) categorize data assets based on predefined elements to remove disparity across the enterprise.

Businesses drive value by knowing where their CDEs are, who is accessing them and how they're being used. In essence, there is no way for a company to identify its CDEs without proper data governance in place at the start. However, many companies struggle with unclear or non-existent ownership of their data stores. Defining ownership before onboarding more data stores or sources promotes commitment to quality and usefulness. It's also wise for organizations to set up a data governance program in which data ownership is clearly defined and people can be held accountable. This can be as simple as a shared spreadsheet dictating ownership of the set of data elements, or it can be managed by a sophisticated data governance platform.

Just as organizations should model their business processes to improve accountability, they must also model their data, in terms of data structure, data pipelines and how data is transformed. Data architecture attempts to model the structure of an organization's logical and physical data assets and data management resources. Creating this type of visibility gets at the heart of the data quality issue: without visibility into the lifecycle of data (when it's created, how it's used and transformed, and how it's outputted), it's impossible to ensure true data quality.

Even when data and analytics teams have established frameworks to categorize and prioritize CDEs, they are still left with thousands of data elements that need to be validated or remediated. Each of these data elements can require one or more business rules that are specific to the context in which it will be used, and those rules can only be assigned by the business users working with those unique datasets. Data quality teams therefore need to work closely with subject matter experts to identify rules for each and every unique data element, a dense task even when the elements are prioritized. This often leads to burnout and overload within data quality teams, because they are responsible for manually writing a large number of rules for a variety of data elements. Organizations must set realistic expectations for the workload of their data quality team members, and may consider expanding the team and/or investing in tools that leverage ML to reduce the amount of manual work in data quality tasks.

Data isn't just the new oil of the world; it's the new water. Organizations can have the most intricate infrastructure, but if the water (or data) running through those pipelines isn't drinkable, it's useless. People who need this water must have easy access to it, they must know that it's usable and not tainted, they must know when supply is low and, lastly, the suppliers and gatekeepers must know who is accessing it. Just as access to clean drinking water helps communities in a variety of ways, improved access to data, mature data quality frameworks and a deeper data quality culture can protect data-reliant programs and insights, helping spur innovation and efficiency within organizations around the world.

JP Romero is Technical Manager at Kalypso

See the original post here:
Top 5 data quality & accuracy challenges and how to overcome them - VentureBeat

Researchers Work to Make Artificial Intelligence – Maryland Today

Out of 11 proposals that were accepted this year by the NSF Program on Fairness in Artificial Intelligence in Collaboration with Amazon, two are led by UMD faculty.

The program's goals are to increase accountability and transparency in AI algorithms and make them more accessible, so that the benefits of AI are available to everyone. This includes machine learning algorithms, a subset of AI in which computerized systems are trained on large datasets to allow them to make proper decisions. Machine learning is used by some colleges around the country to rank applications for admittance to graduate school or allocate resources for faculty mentoring, teaching assistantships or coveted graduate fellowships.

"As these AI-based systems are increasingly used in higher education, we want to make sure they render representations that are accurate and fair, which will require developing models that are free of both human and machine biases," said Furong Huang, an assistant professor of computer science who is leading one of the UMD teams.

That project, "Toward Fair Decision Making and Resource Allocation with Application to AI-Assisted Graduate Admission and Degree Completion," received $625,000 from NSF with an additional $375,000 from Amazon.

A key part of the research, Huang said, is to develop dynamic fairness classifiers that allow the system to train on constantly evolving data and then make multiple decisions over an extended period. This requires feeding the AI system historical admissions data, as is normally done now, and consistently adding student-performance data, something that is not currently done on a regular basis.

The researchers are also developing algorithms that can differentiate notions of fairness as they relate to resource allocation. This is important for quickly identifying resources (additional mentoring, interventions or increased financial aid) for at-risk students who may already be underrepresented in the STEM disciplines.

Collaborating with Huang are Min Wu and Dana Dachman-Soled, a professor and an associate professor, respectively, in the Department of Electrical and Computer Engineering.

A second UMD team, led by Marine Carpuat, an associate professor of computer science, is focused on improving the machine learning models used in language translation systems, with a particular focus on platforms that can function accurately in high-stakes situations like an emergency hospital visit or a legal proceeding.

That project, "A Human-Centered Approach to Developing Accessible and Reliable Machine Translation," is funded with $393,000 from NSF and $235,000 from Amazon.

"Immigrants and others who don't speak the dominant language can be hurt by poor translation," said Carpuat. "This is a fairness issue, because these are people who may not have any other choice but to use machine translation to make important decisions in their daily lives," she said. "Yet they don't have any way to assess whether the translations are correct or the risks that errors might pose."

To address this, Carpuat's team will design systems that are more intuitive and interactive, helping users recognize and recover from the translation errors that are common in many systems today.

Central to this approach is a machine translation bot that will quickly recognize when a user is having difficulty. The bot will flag imperfect translations and then help the user craft alternate inputs (phrasing their query in a different way, for example), resulting in better outcomes.

Carpuat's team includes Ge Gao, an assistant professor in the iSchool, and Niloufar Salehi, an assistant professor in the School of Information at UC Berkeley.

Of the six researchers involved in the Fairness in AI projects, five have appointments in the University of Maryland Institute for Advanced Computer Studies (UMIACS).

"We're tremendously encouraged that our faculty are active in advocating for fairness in AI and are developing new technologies to reduce biases on many levels," said UMIACS Director Mihai Pop. "I'm particularly proud that the teams represent four different schools and colleges at two universities. This is interdisciplinary research at its best."

See the rest here:
Researchers Work to Make Artificial Intelligence - Maryland Today

Chief Officer Awards Finalist Anthony Iasso: ‘Never Stop Learning, and Never Stop Teaching’ – WashingtonExec

Anthony Iasso, Xator Corp.

The finalists for WashingtonExec's Chief Officer Awards were announced March 25, and we'll be highlighting some of them until the event takes place live, in person, May 11 at The Ritz-Carlton in McLean, Virginia.

Next is Chief Technology Officer (Private & Public) finalist Anthony Iasso, who's CTO at Xator Corp. Here, he talks about primary focus areas going forward, taking professional risks, proud career moments and more.

What has made you successful in your current role?

The incredibly talented people who work at Xator, our partner companies and our customer organizations make me successful in my current role. My focus is developing and leading the Xator technology strategy and vision. We need to be leading edge, though not always bleeding edge, because our customers need proven solutions that balance innovation with risk.

Securing embassies or equipping Marines can't be a science experiment. I keep us focused on key performance measures for technical systems to be sure what we deliver works as intended and meets the customer's requirements. I do that by marshalling the tremendous talent we have in a whole-of-Xator approach, bringing together people from across the entire organization to focus on immediate and future challenges through solutioneering.

What energizes me is to learn and understand our customer challenges, and then bring to bear our technologists, Xator core technologies and partner technologies and talent to deliver solutions better, faster and more cost effectively than any of our competitors.

I'm successful when the customer's mission is properly supported, and Xator, our partners and customers are proud of the work we've done.

What are your primary focus areas going forward, and why are those so important to the future of the nation?

One of my primary focuses is on the balance between security technology and privacy. We are bringing amazing technologies together in the areas of biometrics, identity understanding, machine learning, low-cost ubiquitous sensors and cameras, data collection and data analytics that are changing the way we secure our country.

But balancing what we can do, with what we should do with this technology, will be the defining question for our nations future. Technologists like me must support the transparent application of these technologies in a way that accomplishes our security objectives while at the same time safeguards privacy and protections of a free society.

How do you help shape the next generation of government leaders/industry leaders?

Leading by example is always a great start. When I graduated from West Point, I remember thinking, "Wow, I'm in the same spot where Eisenhower, Grant, MacArthur and countless other great leaders once stood."

Since then, I have frequently looked back and thought about the process that transformed those who came before from young, eager kids into great national leaders. It is a process of pulling up the next generation of leaders while being pulled up by the previous generation.

I am still learning from my mentors and developing new mentors that are worthy of emulation, and I try to fill that role for those who have worked for and with me over the years. In that, I feel the responsibility of being a link in this multigenerational process. The military has an amazing ability to transform second lieutenants into four-star generals, by a process of gradually increasing the scope of responsibilities and letting leaders lead at each step of the way. I think that same approach applies to success in civil service and the civilian world. Never stop learning, and never stop teaching.

Which rules do you think you should break more as a government/industry leader?

This is an interesting question, and I stared at it for a while before selecting it to answer for this interview, but I should be bold and go for it. I am not a rule breaker by nature, and one of my core tenets is to never burn bridges. In this business, politics and bureaucracy are intertwined with the ability to break through, win business and deliver solutions. An unwritten rule is "don't rock the boat." You never know who may be making decisions in the future that can affect your core business, and a bad relationship can one day block you out.

We can't go right to our end users in government and get them to buy our solutions, even if we have the best thing since sliced bread. I am becoming more inclined to call out situations where biases and obstructions, especially political or bureaucratic ones, prevent progress and innovation, because I've seen good businesses suffer and I've seen end users suffer.

Maybe I can't break through, but maybe I can. Maybe I make an anti-sponsor, but maybe I make a sponsor from an anti-sponsor. Over the years, I've become more inclined to try, and to use the credibility I have built in my experience and career to that purpose.

Whats the biggest professional risk youve ever taken?

Starting and growing my own company was certainly the biggest professional risk, but it was well worth it. Prior to my time at Xator, I left my job with a series of solid defense contractors and joined two partners to build and grow InCadence. For 10 years, we built InCadence, and as president of the company I saw first-hand the highly competitive environment of launching and growing a startup.

A big key to our success was our focus on technology-differentiated solutions, especially in the field of biometrics and identity, which is one of my major technical competencies. To be able to build a successful company, and to see it continue to thrive as a part of Xator Corp., has been a great reward for all the risks of being responsible for every aspect of maintaining and growing a business and for keeping key technical talent constantly innovating and delivering for our customers.

Looking back at your career, what are you most proud of?

I am most proud of having designed and coded, from the first line of code, the Biometrics Automated Toolset system, which I started writing when I was just out of the Army and just 29 years old. I had transitioned as an Army captain to a contractor working at the Battle Lab at the Army Intelligence Center, and I had a fantastic boss, Lt. Col. Kathy De Bolt, who asked me to build a biometrics system from the ground up.

That work took on a life of its own, being used in Kosovo, Iraq and Afghanistan. It is still an Army program of record system today and is the first digital biometrics system ever deployed on the battlefield.

From that, I built an exciting career and team of colleagues that led to where I am today, to include the success with the newest generation of biometric technologies at Xator. I know that the BAT system was indispensable to operations in support of our national security, and I still regularly have soldiers and Marines come up to me today and tell me stories of how they used BAT overseas.

More:
Chief Officer Awards Finalist Anthony Iasso: 'Never Stop Learning, and Never Stop Teaching' - WashingtonExec

Machine Learning Tools for Clinical Researchers: A Pragmatic Approach Event Series | Newsroom – UNC Health and UNC School of Medicine

This virtual seminar series will provide a background in the use of machine learning tools to answer clinical questions, understand the strengths and limitations of these methods, and examine real-world examples of machine learning methodology in clinical research. The series is co-sponsored by the UNC Core Center for Clinical Research and the UNC Program for Precision Medicine in Health Care.

The UNC Core Center for Clinical Research and the UNC Program for Precision Medicine in Health Care are co-sponsoring a series of virtual, free events on machine learning tools for clinical researchers. Anyone interested in using machine learning as part of their own research is encouraged to attend, regardless of research background or experience with machine learning. The goal of the seminar series is to bring together researchers and clinicians across the UNC campus and catalyze new clinical research using machine learning.

Machine learning analysis methods offer the opportunity to integrate and learn from large amounts of biological, clinical and environmental data, and there is a growing interest in how these tools can be used to inform and individualize clinical decision making in a variety of disease areas.

Machine learning can offer different, yet often complementary, insights compared to traditional statistical analyses to better understand heterogeneity in patient presentation, prognoses, and treatment response, generating critical data for precision medicine research. These methods can allow integration across diverse data types and large feature sets, overcoming some limitations of traditional tools to answer clinical questions. However, many clinical researchers have little exposure to machine learning methods, presenting a barrier to utilization of these tools themselves and/or to effective collaboration with methodologists in their own research.

The event series will provide a background and foundation of knowledge regarding the use of machine learning tools in clinical questions, help attendees understand the strengths and limitations of these methods, help attendees recognize real-world examples of applied machine learning methodology in clinical research, and elucidate how machine learning can be used to advance precision medicine research.

On May 11, 2022, clinicians and researchers will discuss examples of how machine learning tools have been applied in arthritis and autoimmune disease. This session will feature an overview of machine learning and its application to identify clinical phenotypes of osteoarthritis and type 1 diabetes. Register online to attend.

On May 18, 2022, clinicians and researchers will explore the use of machine learning tools and precision medicine techniques in clinical research. This session will feature an overview of machine learning tools in the field of precision medicine and address how they may be used to inform decision support for peripheral artery disease and rare genetic diseases. Register online to attend.

On May 25, 2022, 1-3 p.m., a panel discussion will focus on how researchers and clinicians at UNC-Chapel Hill can integrate machine learning techniques into their own clinical research. Register online to attend.

Clinicians with ideas for how patient care could be improved with computational decision support tools can pitch their idea (5-10 minute overview) to assembled machine learning experts during the May 25 session. Attendees also will receive expert guidance and can compete for funding from the UNC Program for Precision Medicine in Health Care for analytical support to develop their projects. Participants can email precisionmedicine@med.unc.edu for more information about the pitch opportunity.

This series is jointly sponsored by the UNC Core Center for Clinical Research and the UNC Program for Precision Medicine in Health Care. All events will be virtual on Zoom and free of charge.

Excerpt from:
Machine Learning Tools for Clinical Researchers: A Pragmatic Approach Event Series | Newsroom - UNC Health and UNC School of Medicine

What is Hybrid Machine Learning and How to Use it? – Analytics Insight

Most of us have probably used hybrid machine learning (HML) algorithms in some form without recognizing it. We may have used methods that are blends of existing ones, or combined techniques imported from other fields. We often apply data transformation methods such as principal component analysis (PCA) or simple linear correlation analysis to our data before passing it to an ML method. Some practitioners use special algorithms to automate the optimization of the parameters of existing ML methods. HML algorithms rely on an ML design that differs from the standard workflow. We tend to take ML algorithms for granted, using them off the shelf and largely ignoring the details of how the pieces fit together.

HML is an evolution of the ML workflow that combines different algorithms, processes or techniques from the same or different domains of knowledge or areas of application, with the aim of having them complement one another. As no single hat fits all heads, no single ML method is appropriate for all problems. Some methods are good at handling noisy data but may not be capable of handling high-dimensional input spaces. Others may scale well on high-dimensional input spaces but may not be good at handling sparse data. These conditions are a fair motivation to apply HML, pairing candidate methods so that one overcomes the deficiencies of the others.

The opportunities for hybridizing standard ML methods are endless, and it should be possible for anyone to build new hybrid models in different ways.

This kind of HML combines the architecture of two or more conventional algorithms, wholly or partly, in a complementary way to build a more robust standalone algorithm. The most commonly cited example is the Adaptive Neuro-Fuzzy Inference System (ANFIS). ANFIS has been used for some time and is often treated as a standalone conventional ML method, but it is really a blend of the principles of fuzzy logic and artificial neural networks (ANNs). The architecture of ANFIS is composed of five layers: the first three are taken from fuzzy logic, while the other two are from the ANN.
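
To make that five-layer flow concrete, here is a minimal, untrained ANFIS forward pass in Python with NumPy; the two inputs, Gaussian membership parameters and rule consequents are toy values that real ANFIS training would fit from data:

```python
import numpy as np

def gauss(x, c, s):
    """Gaussian membership grade of x in each fuzzy set (layer 1: fuzzification)."""
    return np.exp(-((x - c) ** 2) / (2 * s ** 2))

def anfis_forward(x1, x2, centers, sigmas, consequents):
    m1 = gauss(x1, centers[0], sigmas[0])      # layer 1: two fuzzy sets per input
    m2 = gauss(x2, centers[1], sigmas[1])
    w = np.outer(m1, m2).ravel()               # layer 2: firing strengths of the 4 rules
    wn = w / w.sum()                           # layer 3: normalized firing strengths
    f = consequents @ np.array([x1, x2, 1.0])  # layer 4 (ANN side): linear consequents
    return float(wn @ f)                       # layer 5: weighted sum -> crisp output

rng = np.random.default_rng(0)
centers = np.array([[0.0, 1.0], [0.0, 1.0]])   # toy membership centers
sigmas = np.full((2, 2), 0.5)                  # toy membership widths
consequents = rng.normal(size=(4, 3))          # toy p, q, r per rule; training would fit these
print(anfis_forward(0.3, 0.8, centers, sigmas, consequents))
```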

This kind of hybrid learning combines data manipulation processes or systems with conventional ML techniques, with the goal of supplementing the latter with the output of the former. The following are valid examples of this kind of hybrid learning technique (a code sketch of the second follows the examples):

If a feature ranking (FR) algorithm is used to rank and preselect optimal features before applying a support vector machine (SVM) to the data, the result can be called an FR-SVM hybrid model.

If a PCA module is used to extract a submatrix of the data that is sufficient to explain the original data before a neural network is applied, we can call it a PCA-ANN hybrid model.

If a singular value decomposition (SVD) algorithm is used to reduce the dimensionality of a dataset before an extreme learning machine (ELM) model is applied, we can call it an SVD-ELM hybrid model.
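
A minimal sketch of the PCA-ANN pattern using scikit-learn follows; the dataset is synthetic, and the component count and layer size are arbitrary illustrative choices. Swapping the first stage for a feature-ranking selector or truncated SVD would give the FR-SVM or SVD-ELM variants of the same idea.

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

# synthetic stand-in for a wide, partly redundant dataset
X, y = make_classification(n_samples=500, n_features=50, n_informative=10, random_state=0)

# PCA-ANN hybrid: compress 50 features into 10 components, then classify with an ANN
pca_ann = make_pipeline(
    PCA(n_components=10),
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
)
print("PCA-ANN cv accuracy:", cross_val_score(pca_ann, X, y, cv=5).mean().round(3))
```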

Hybrid techniques based on feature selection, a kind of data manipulation process that seeks to supplement the built-in model selection process of conventional ML methods, have become common. Every ML algorithm has a way of choosing the best model based on an optimal set of input features.

Each conventional ML technique uses a specific optimization or search algorithm, such as gradient descent or grid search, to determine its optimal tuning parameters. This kind of hybrid learning seeks to supplement or replace the built-in parameter optimization method with advanced techniques based on evolutionary algorithms. The possibilities are also vast here. Examples include the following (a sketch of the first follows the list):

1. If the particle swarm optimization (PSO) algorithm is used to optimize the training parameters of an ANN model, the latter becomes a PSO-ANN hybrid model.

2. When a genetic algorithm (GA) is used to optimize the training parameters of the ANFIS method, the latter becomes a GANFIS hybrid model.

3. The same goes for other evolutionary optimization algorithms, such as Bee Colony, Ant Colony, Bat and Fish School algorithms, which are combined with conventional ML techniques to form their corresponding hybrid models.
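
As a sketch of the PSO-ANN idea from item 1, the loop below lets a small particle swarm search over two ANN hyperparameters (log learning rate and hidden-layer width), scoring each particle by cross-validation; the swarm size, coefficients and bounds are illustrative assumptions:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=300, n_features=20, random_state=0)

def fitness(particle):
    """Cross-validated accuracy of an ANN built from a particle's position."""
    lr = 10 ** particle[0]                      # dimension 0: log10 learning rate
    hidden = int(round(particle[1]))            # dimension 1: hidden-layer width
    ann = MLPClassifier(hidden_layer_sizes=(hidden,), learning_rate_init=lr,
                        max_iter=200, random_state=0)
    return cross_val_score(ann, X, y, cv=3).mean()

rng = np.random.default_rng(0)
lo, hi = np.array([-4.0, 4.0]), np.array([-1.0, 64.0])   # search bounds per dimension
pos = rng.uniform(lo, hi, size=(6, 2))                   # 6 particles
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_val.argmax()].copy()

for _ in range(5):
    r1, r2 = rng.random((6, 1)), rng.random((6, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    vals = np.array([fitness(p) for p in pos])
    better = vals > pbest_val                            # update personal bests
    pbest[better], pbest_val[better] = pos[better], vals[better]
    gbest = pbest[pbest_val.argmax()].copy()             # update global best

print("best [log10(lr), hidden]:", gbest.round(2), "accuracy:", pbest_val.max().round(3))
```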

A typical illustration of feature-selection-based HML is the estimation of a particular reservoir property, such as porosity, using integrated rock physics, geological, drilling and petrophysical datasets. There could be more than 30 input features from the combined datasets. Producing a ranking to determine the relative importance of the features is a good learning exercise and a contribution to the body of knowledge. Using only the top 5 or 10, for instance, may produce comparable results while reducing the computational complexity of the proposed model. It may also help domain experts focus on the few selected features rather than the full set of logs, most of which may be redundant.

Link:
What is Hybrid Machine Learning and How to Use it? - Analytics Insight

Machine learning hiring levels in the mining industry rose to a year-high in March 2022 – Mining Technology

The proportion of mining industry operations and technologies companies hiring for machine learning related positions rose in March 2022 compared with the equivalent month last year, with 25.8% of the companies included in our analysis recruiting for at least one such position.

This latest figure was higher than the 16.9% of companies that were hiring for machine learning related jobs a year ago and an increase compared to the figure of 18.8% in February 2022.

When it came to the rate of all job openings linked to machine learning, such postings accounted for 0.7% of newly posted job advertisements in March 2022. This latest figure was a decrease compared to the 1% of newly advertised jobs that were linked to machine learning in the equivalent month a year ago.

Machine learning is one of the topics that GlobalData, from which our data for this article is taken, has identified as being a key disruptive force facing companies in the coming years. Companies that excel and invest in these areas now are thought to be better prepared for the future business landscape and better equipped to survive unforeseen challenges.

Our analysis of the data shows that mining industry operations and technologies companies are currently hiring for machine learning jobs at a rate lower than the average for all companies within GlobalData's job analytics database. The average among all companies stood at 1.3% in March 2022.

GlobalData's job analytics database tracks the daily hiring patterns of thousands of companies across the world, drawing in jobs as they're posted and tagging them with additional layers of data on everything from the seniority of each position to whether a job is linked to wider industry trends.

You can keep track of the latest data from this database as it emerges by visiting our live dashboard here.

Read the original here:
Machine learning hiring levels in the mining industry rose to a year-high in March 2022 - Mining Technology

The extensive use of AI and machine learning can do wonders in predictive analysis | eHealth Magazine - Elets

Devanand K T, Regional Chief Executive Officer, Aster DM Healthcare shares his thoughts on strengthening the healthcare infrastructure in the country and the future of digital health. Edited excerpts:

Has India's health care system embarked on a purely digital journey?

Yes, our country has taken the strategic decision of moving ahead with the digital health ecosystem, as is evident from the recent budget. The allocation for the National Digital Health Mission is raised to Rs 200 crore from the existing Rs 30 crore. The mental health sector received an allocation of Rs 40 crore for the first time. The evolution of the Unified Health Interface, which focuses on integrating the various stakeholders of the ecosystem, will rapidly accelerate the journey. Factors like the rising number of smartphone users, rapidly expanding healthcare IT infrastructure, greater public consciousness of the value of fitness and wellbeing, a rising volume of patients with chronic conditions, growing venture capital investments, and economies of scale that bring down the cost of digital health solutions will provide the necessary impetus to this drive.

How do you see the role of technology in pre- and post-Covid healthcare?

Even pre-Covid, patients were becoming more digitally savvy, and doctors were getting increasingly open to using digital platforms and tools. Covid has accelerated this trend, removing many behavioural barriers to digital health adoption. There is a behavioural shift in regulatory and reimbursement trends that provides further opportunities for a rise in digital health adoption. Telemedicine witnessed a spike in usage during Covid that made inroads, or at least pushed companies and users to place other components of digital health (AI, data mining, machine learning, biosensors, etc.) into highly accelerated development mode. The effective combination of these technological tools will provide a digital ecosystem of comfort, improved quality, continuum of care, better outcomes and lower healthcare costs.

What are the various technologies and tools you see making a difference in the healthcare segment in the coming times?

Tele-health has already made a huge difference in providing convenience to patients in accessing healthcare. The extensive use of AI and machine learning can do wonders in predictive analysis, especially in areas like oncology. Remote bio-sensing devices can provide ease of access to information and vital real-time data for better treatment outcomes through effective interventions.

How can technology be used to strengthen the health infrastructure in a country like ours?

The health infrastructure in our country is lagging, especially in rural areas. There are issues related to lack of accessibility, affordability and awareness in these areas. Technology can provide the necessary impetus to overcome these challenges by providing services like Tele-ICU, Tele-Emergency and Tele-Radiology. The CoWIN app was one such classic example of technology reaching the mass population in driving vaccination services. Such PPP models involving technological interventions can really help in addressing many concerns related to health infrastructure in a country like ours. Telehealth-enabled Mobile Medical Services, launched by Aster DM Healthcare, is one such step taken by our organization to provide accessibility to the rural masses.

What are the policy initiatives you foresee in the Health IT space that will create a healthcare system of the future?

There should be clear policy guidelines on patient data privacy and security. Cyber fraud should be mitigated by enforcing clear protocols. A Universal Health Card unifying all government schemes can really help in mobilizing the entire population towards health coverage. This can help in reducing budget allocation to various government schemes too. The steps taken by the Government of India in creating the Unified Health Interface and allocating a higher budget to tele-health are clear indications of this transformation. All healthcare providers should be incentivized to enable healthcare digitalization. Linking various analytical and predictive tools using machine learning to this huge database can provide a plethora of information for Population Health Management.

Read the original here:
The extensive use of AI and machine learning can do wonders in predictive analysis | eHealth Magazine - Elets

Machine Learning Applications in the Manufacturing Industry – IoT For All

To keep up with the latest changes in technology, manufacturers need to explore one of the most critical elements driving factories into the future: machine learning. Let's talk about the most important applications and innovations that ML technology is providing in 2022.

Machine learning is a subfield of artificial intelligence, but not all AI technologies count as machine learning. Various other types of AI play a role in many industries, such as robotics, natural language processing, and computer vision. If you're curious about how these technologies affect the manufacturing industry, check out our review below.

Basically, machine learning algorithms use training data to fit a model that allows the software to solve a problem. This data may come from real-time IoT sensors on a factory floor, or it may come from other sources. Machine learning encompasses a variety of methods, such as neural networks and deep learning. Neural networks imitate biological neurons to discover patterns in a dataset and solve problems. Deep learning stacks several layers of neural networks, where the first layer takes the raw data as input and each layer passes processed information on to the next.
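
As a toy illustration of that layer-to-layer flow, the NumPy snippet below pushes four raw sensor readings through one hidden layer and an output layer; the weights are random stand-ins for what training would learn:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random(4)                            # layer 0: four raw sensor readings

W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)
W2, b2 = rng.normal(size=(1, 8)), np.zeros(1)

h = np.maximum(0.0, W1 @ x + b1)             # hidden layer: weight, sum, rectify
p = 1.0 / (1.0 + np.exp(-(W2 @ h + b2)))     # output layer: squash to a 0-1 score
print("defect score:", float(p[0]))
```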

Let's start by imagining a box with assembly robots, IoT sensors, and other automated machinery. At one end, you supply the materials necessary to complete the product; at the other end, the product rolls off the assembly line. The only intervention needed is routine maintenance of the equipment inside. This is the ideal future of manufacturing, and machine learning can help us understand the full picture of how to achieve it.

Aside from the advanced robotics necessary for automated assembly, machine learning can help with quality assurance, non-destructive testing (NDT) analysis, and localizing the causes of defects, among other things.

You can think of this "factory in a box" example as a way of simplifying a larger factory, but in some cases it's quite literal. Nokia is utilizing portable manufacturing sites in the form of retrofitted shipping containers with advanced automated assembly equipment. These portable containers can be used in any location necessary, allowing manufacturers to assemble products on site instead of transporting them longer distances.

Using neural networks, high-optical-resolution cameras, and powerful GPUs, real-time video processing combined with machine learning and computer vision can complete visual inspection tasks better than humans can. This technology ensures that the factory in a box is working correctly and that unusable products are eliminated from the system.

In the past, machine learning's use in video analysis was criticized because of the quality of the video used: images can be blurry from frame to frame, making the inspection algorithm subject to more errors. With high-quality cameras and greater graphical processing power, however, neural networks can search for defects in real time far more efficiently, without human intervention.

Using various IoT sensors, machine learning can help test the created products without damaging them. An algorithm can search for patterns in the real-time data that correlate with a defective version of the unit, enabling the system to flag potentially unwanted products.

Another way to detect defects in materials is through non-destructive testing, which measures a material's stability and integrity without causing damage. For example, an ultrasound machine can detect anomalies like cracks in a material, producing data that humans can analyze by hand to look for these outliers.

However, outlier detection algorithms, object detection algorithms, and segmentation algorithms can automate this process with much greater efficiency by analyzing the data for recognizable patterns that humans may not be able to see. Machine learning is also not subject to the same kinds of errors that humans are prone to make.
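
Here is a small sketch of the outlier-detection idea on synthetic ultrasound-style features; the feature distributions and contamination rate are invented for illustration:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
sound = rng.normal(0.0, 1.0, size=(500, 3))       # ultrasound features from intact material
cracked = rng.normal(4.0, 1.0, size=(10, 3))      # invented anomalous readings
readings = np.vstack([sound, cracked])

detector = IsolationForest(contamination=0.02, random_state=0).fit(sound)
flags = detector.predict(readings)                # -1 marks suspected defects
print(int((flags == -1).sum()), "readings flagged for manual inspection")
```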

One of the core tenets of machine learning's role in manufacturing is predictive maintenance. PwC reported that predictive maintenance will be one of the fastest-growing machine learning technologies in manufacturing, with a projected 38 percent increase in market value from 2020 to 2025.

With unscheduled maintenance having the potential to deeply cut into a business's bottom line, predictive maintenance enables factories to make appropriate adjustments and corrections before machinery experiences more costly failures. We want our factory in a box to have as much uptime and as few delays as possible, and predictive maintenance can make that happen.

Extensive IoT sensors that record vital information about the operating conditions and status of a machine make predictive maintenance possible. This may include humidity, temperature, and more.

A machine learning algorithm can analyze patterns in data collected over time and reasonably predict when the machine may need maintenance. Common approaches include regression models that estimate remaining useful life, classification models that predict failure within a given window, and anomaly detection that flags abnormal machine behavior.

Thanks to the IoT sensors powering predictive maintenance, machine learning can analyze the patterns in the data to see which parts of the machine need attention to prevent a failure. If certain patterns lead to a trend of defects, it's possible to identify hardware or software behaviors as causes of those defects. From there, engineers can come up with corrections to avoid those defects in the future, reducing the margin of error in our factory in a box scenario.
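
Here is a minimal sketch of the supervised flavor of predictive maintenance on synthetic sensor logs; the sensor names and the thresholds that generate the failure label are made up purely to give the model a pattern to find:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 2000
temperature = rng.normal(60, 8, n)      # degrees C
vibration = rng.normal(2.0, 0.5, n)     # mm/s RMS
humidity = rng.normal(40, 10, n)        # percent

# invented rule: machines that run hot AND vibrate hard tend to fail
failure = ((temperature > 65) & (vibration > 2.3)).astype(int)

X = np.column_stack([temperature, vibration, humidity])
X_tr, X_te, y_tr, y_te = train_test_split(X, failure, random_state=0, stratify=failure)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, model.predict(X_te)))
```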

Digital twins are a virtual recreation of the production process based on data from IoT sensors and real-time data. They can be created as a hypothetical representation of a system that doesn't yet exist, or they can be a recreation of an existing system.

The digital twin is a sandbox for experimentation in which machine learning can be used to analyze patterns in a simulation to optimize the environment. This helps support quality assurance and predictive maintenance efforts as well. We can also use machine learning alongside digital twins for layout optimization. This works when planning the layout of a factory or for optimizing the existing layout.

If we want to optimize every part of the factory, we also need to pay attention to the energy that it requires. The most common way to do this is to use sequential data measurements, which can be analyzed by data scientists with machine learning algorithms powered by autoregressive models and deep neural networks.
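
A brief sketch of the autoregressive approach, using statsmodels on a synthetic hourly load series with a daily cycle:

```python
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

rng = np.random.default_rng(0)
hours = np.arange(24 * 30)                                  # ~30 days of hourly readings
load = 100 + 20 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 3, hours.size)

model = AutoReg(load, lags=24).fit()                        # one day of lagged terms
next_day = model.predict(start=hours.size, end=hours.size + 23)
print(next_day.round(1))                                    # forecast for the next 24 hours
```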

We've used machine learning to optimize the factory's production processes, but what about the product itself? BMW introduced the BMW iX Flow at CES 2022 with a special e-ink wrap that allows it to change the color (or, more accurately, the shade) of the car between black and white. BMW explained that "generative design processes are implemented to ensure the segments reflect the characteristic contours of the vehicle and the resulting variations in light and shadow."

Generative design is where machine learning is used to optimize the design of a product, whether it be an automobile, electronic device, toy, or other items. With data and a desired goal, machine learning can cycle through all possible arrangements to find the best design.

ML algorithms can be trained to optimize a design for weight, shape, durability, cost, strength, and even aesthetic parameters.

A generative design process can be based on search algorithms, such as evolutionary or genetic algorithms, that iterate through candidate designs and keep the fittest ones.
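
The toy evolutionary loop below captures the spirit of such a search: it evolves a five-segment design to minimize weight under a minimum-strength constraint, where the objective and bounds are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(widths):
    """Invented objective: lighter is better, but weak designs are penalized."""
    weight = widths.sum()
    strength = (widths ** 2).sum()
    penalty = max(0.0, 100.0 - strength) * 10.0   # enforce a minimum strength of 100
    return -(weight + penalty)

pop = rng.uniform(1.0, 10.0, size=(40, 5))        # 40 candidate designs, 5 segment widths
for _ in range(100):
    scores = np.array([fitness(d) for d in pop])
    parents = pop[np.argsort(scores)[-20:]]       # keep the fittest half
    children = parents[rng.integers(0, 20, size=40)] + rng.normal(0, 0.2, size=(40, 5))
    pop = np.clip(children, 1.0, 10.0)            # mutate and respect bounds

best = max(pop, key=fitness)
print("best segment widths:", best.round(2))
```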

Let's step away from the factory in a box example for a bit and look at the broader picture of needs in manufacturing. Production is only one element. The supply chain operations around a manufacturing center are also being improved with machine learning technologies, such as logistics route optimization and warehouse inventory control. These make up a cognitive supply chain that continues to evolve in the manufacturing industry.

AI-powered logistics solutions use object detection models instead of barcode detection, thus replacing manual scanning. Computer vision systems can detect shortages and overstock. By identifying these patterns, managers can be made aware of actionable situations. Computers can even be left to take action automatically to optimize inventory storage.

At MobiDev, we have researched a use case of creating a system capable of detecting objects for logistics. Read more about object detection using small datasets for automated item counting in logistics.

How much should a factory produce and ship out? This is a question that can be difficult to answer. However, with access to appropriate data, machine learning algorithms can help factories understand how much they should be making without overproducing. The future of machine learning in manufacturing depends on innovative decisions.

Read more:
Machine Learning Applications in the Manufacturing Industry - IoT For All

Autonomous Drones use AI for infrastructure inspection with technologies from Auterion and Spleenlab – sUAS News

Drones run AI on the edge and use machine learning to autonomously conduct infrastructure inspections and collision avoidance

MOORPARK, Calif. and ZURICH, April 20, 2022. Auterion, the company building an open and software-defined future for enterprise drone fleets, is excited today to welcome Spleenlab to the growing Auterion ecosystem. Spleenlab's VISIONAIRY technology delivers AI-based safe perception software for drones to complete fully autonomous infrastructure detection, inspection, and collision avoidance.

"Together with Spleenlab, we're demonstrating that the future of robotics is now, with our combined AI, machine learning, and onboard edge technologies," says Markus Achtelik, vice president of Engineering at Auterion. "This new partnership brings together the best technologies (Auterion's Skynode and AI Node with Spleenlab's VISIONAIRY Perception Software) to deliver the best possible autonomous solutions for enterprise drone users."

Spleenlab's VISIONAIRY AI, built to be safe from the ground up, together with Auterion's AI Node, which is equipped with the world's smallest AI supercomputer for embedded and edge systems, transforms Skynode itself into a supercomputer. The combination runs AI right at the edge, onboard any drone. Software running on the Auterion ecosystem enables autonomous inspection of critical infrastructure, empowering enterprise users to scale beyond single-pilot, single-drone operations.

"We're excited to collaborate with Auterion to take this next step into an autonomous future," says Stefan Milz, Founder at Spleenlab. "Advanced ML algorithms are computation-heavy and require appropriate horsepower to be deployed onboard mobile robots. With Auterion's Skynode and AI Node, the VISIONAIRY AI software can easily be deployed and run onboard drones at a high safety level."

When Spleenlab's VISIONAIRY software and ML algorithms are installed on Auterion's AI Node, the combined technologies enable a new set of drone capabilities.

Enterprise drones are able to understand their environment and predict safe landing spots in real time for package delivery, emergencies, and other situations. Risk estimation also includes detecting cooperative and non-cooperative air traffic, with up to a 360-degree field of view and a range of several kilometers. These unique, combined capabilities move industries toward fully realized beyond-visual-line-of-sight (BVLOS) autonomous flight.

Learn more about the VISIONAIRY integration with AI Node here.

About Spleenlab GmbH

Spleenlab GmbH is a highly specialized AI software company founded with the idea of redefining safety and AI. Since April 2018, the company has been primarily engaged in the development and distribution of safe machine learning algorithms for semi- and fully autonomous mobility, especially the flight of unmanned aerial vehicles (UAVs), helicopters, air taxis, driving vehicles, and beyond. The groundbreaking fusion of different sensors, such as cameras, lasers, and radars, by means of machine learning is the core business of the company. The generated SLAM (Simultaneous Localization and Mapping) enables completely new applications and products for any kind of autonomous mobility. The company is based in Jena (Saalburg), Germany.

Learn more at https://spleenlab.com/

About Auterion

Auterion is building the world's leading autonomous mobility platform for enterprise and government users to better capture data, carry out high-risk work remotely, and deliver goods with drones. Auterion's open-source-based platform was nominated by the U.S. government as the standard for its future drone program. With 70+ employees across offices in California, Switzerland, and Germany, Auterion's global customer base includes GE Aviation, Quantum-Systems, Freefly Systems, Avy, Watts Innovations, and the U.S. government.

Learn more at https://auterion.com/

Read more from the original source:
Autonomous Drones use AI for infrastructure inspection with technologies from Auterion and Spleenlab - sUAS News

New SEE Shell Mobile Application Uses Machine Learning to Help Tackle the Illegal Tortoiseshell Trade – PR Newswire

PORTLAND, Ore., April 18, 2022 /PRNewswire/ -- Conservation nonprofit SEE Turtles has launched an innovative mobile application that will address the illegal trade of hawksbill sea turtle shells. The beautiful shells of this critically endangered species, commonly referred to as "tortoiseshell," are used to create jewelry and ornamental souvenirs in many countries. The SEE Shell App employs machine learning to differentiate real and faux tortoiseshell products; it is the first mobile application to use artificial intelligence to combat the illegal wildlife trade. This novel technology will enable tourists, law enforcement, and wildlife officials to quickly identify products made of authentic tortoiseshell.

Despite international laws against the sale of tortoiseshell, this trade is active in at least 40 countries, according to SEE Turtles' 2020 "Global Tortoiseshell Trade" report, and it remains the primary threat to hawksbill turtles. With an estimated 15,000 to 25,000 adult female hawksbills remaining in the wild, this groundbreaking mobile application will play a key role in bringing these animals back from the brink of extinction.

SEE Shell will help eliminate the confusion between real and faux tortoiseshell. This highly accurate application can discern whether an item is made of real hawksbill shell or of faux tortoiseshell materials such as resin, horn, bone, seashells, or coconut shells, with at least 94% accuracy, simply from a photo. The mobile application utilizes deep learning technology that compares product photos taken by app users to a data library of more than 4,000 real and artificial tortoiseshell products. As images from locations around the globe are stored in the catalog, a clearer understanding of the size and location of the illegal tortoiseshell trade will emerge.
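
SEE Turtles has not published the app's model, but a first cut at this kind of real-versus-faux image classifier might look like the following transfer-learning sketch in Keras; the directory layout, image size, and training settings are all hypothetical:

```python
import tensorflow as tf

# hypothetical folder with two subdirectories of labeled photos: real/ and faux/
train = tf.keras.utils.image_dataset_from_directory(
    "tortoiseshell_photos", image_size=(224, 224), batch_size=32)

base = tf.keras.applications.MobileNetV2(
    include_top=False, pooling="avg", input_shape=(224, 224, 3))
base.trainable = False                                   # reuse ImageNet features as-is

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),   # MobileNetV2 expects [-1, 1]
    base,
    tf.keras.layers.Dense(1, activation="sigmoid"),      # real vs. faux
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train, epochs=5)
```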

"Thanks to our conservation partners around the world who have contributed tortoiseshell photos, we have created a first in the wildlife trafficking field; an app that can help individual consumers identify and avoid endangered animal products," said Alexander Robillard, Computer Vision Engineer with SEE Turtles.

SEE Turtles has also partnered with the World Wildlife Fund for Nature, which is providing financial and technical support. As part of SEE Turtles' "Too Rare to Wear" campaign, partnering organizations in Indonesia and Latin America have helped to test the app in the field and will train local law enforcement officials on how to use the application to document the presence of tortoiseshell trade in their regions. Participating organizations include the Turtle Foundation (Indonesia), Fundación Tortugas del Mar (Colombia), Latin American Sea Turtles (Costa Rica), The Leatherback Project (Panama), and SOS Nicaragua.

Media Contact: Brad Nahill, 800-215-0378

SOURCE SEE Turtles

Read this article:
New SEE Shell Mobile Application Uses Machine Learning to Help Tackle the Illegal Tortoiseshell Trade - PR Newswire