Category Archives: Machine Learning
Ensuring compliance with data governance regulations in the healthcare machine learning (ML) space – BSA Bureau
"Establishing decentralized Machine learning (ML) framework optimises and accelerates clinical decision-making for evidence-based medicine" says Krishna Prasad Shastry, Chief Technologist (AI Strategy and Solutions) at Hewlett-Packard Enterprise
The healthcare industry is becoming increasingly information-driven. Smart machines are making a positive impact, enhancing capabilities in healthcare and R&D. Promising technologies are aiding healthcare staff in areas with limited resources, helping to achieve a more efficient healthcare system. Yet, with all its benefits, using data to deliver more value-based care is not without risks. Krishna Prasad Shastry, Chief Technologist (AI Strategy and Solutions) at Hewlett Packard Enterprise, Singapore, shares further details on the establishment of a decentralized machine learning framework while ensuring compliance with data governance regulations.
Technology will be indispensable in the future of healthcare, with advancements in areas such as artificial intelligence (AI), robotics, and nanotechnology. Machine learning (ML), a subset of AI, now plays a key role in many health-related realms, such as disease diagnosis. For example, ML models can assist radiologists in diagnosing diseases like leukaemia or tuberculosis more accurately and more rapidly. By applying ML algorithms to medical imaging such as chest X-rays, MRI, or CT scans, radiologists can better prioritise which potential positive cases to investigate. Similarly, ML models can be developed to recommend personalised patient care by drawing on various vital parameters, sensor readings, or electronic health records (EHRs). The efficiency gains that ML offers stand to take pressure off the healthcare system, which is especially valuable when resources are stretched and access to hospitals and clinics is disrupted.
Data underpins these digital healthcare advancements. Healthcare organisations globally are embracing digital transformation and using data to enhance operations. Yet, with all its benefits, using data to deliver more value-based care is not without risks. For example, using ML for diagnostic purposes requires a diverse set of data in order to avoid bias. But access to diverse data sets is often limited by privacy regulations in the health sector. Healthcare leaders face the challenge of how to use data to fuel innovation in a secure and compliant manner.
For instance, HPE's Swarm Learning, a decentralized machine learning framework, allows insights generated from data to be shared without having to share the raw data itself. The insights generated by each owner in a group are shared, allowing all participants to benefit from the collaborative insights of the network. In the case of a hospital that's building an ML model for diagnostics, Swarm Learning enables decentralized model training that benefits from access to the insights of a larger data set, while respecting privacy regulations.
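To make the idea concrete, here is a rough, illustrative sketch of decentralized training in which only model parameters move between sites: each participant trains on its own private data and a merged model is formed by averaging. This is a generic federated-averaging-style loop with made-up data, not HPE's actual Swarm Learning implementation or API.

```python
# Illustrative sketch of decentralized learning: each hospital trains on its
# own data and only model parameters are shared and averaged. This is NOT
# HPE's Swarm Learning implementation, just a minimal example with toy data.
import torch
import torch.nn as nn

def local_update(model, features, labels, epochs=1):
    """Train a copy of the shared model on one site's private data."""
    local = nn.Linear(model.in_features, model.out_features)
    local.load_state_dict(model.state_dict())
    opt = torch.optim.SGD(local.parameters(), lr=0.1)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(local(features), labels).backward()
        opt.step()
    return local.state_dict()  # only parameters leave the site, never raw data

# Three hospitals, each holding private (features, labels) that never move.
sites = [(torch.randn(64, 10), torch.randint(0, 2, (64, 1)).float()) for _ in range(3)]
global_model = nn.Linear(10, 1)

for round_ in range(5):
    local_states = [local_update(global_model, x, y) for x, y in sites]
    # Merge by averaging each parameter across the participating sites.
    merged = {k: torch.stack([s[k] for s in local_states]).mean(0) for k in local_states[0]}
    global_model.load_state_dict(merged)
```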
Partnering with stakeholders across the public and private sectors will enable us to better provide patients access to new digital healthcare solutions that can reform the management of challenging diseases such as cancer. Our recent partnership with AstraZeneca, under their A. Catalyst Network, aims to drive healthcare improvement across Singapore's healthcare ecosystem. Further, Swarm Learning can reduce the risk of breaching data governance regulations and can accelerate medical research.
The future of healthcare lies in working in tandem with technology; innovations in the AI and ML space are already being implemented across the treatment chain in the healthcare industry, with successful case studies that we can learn from. From diagnosis to patient management, AI and ML can be used to perform tasks such as predicting diseases, identifying high-risk patients, and automating hospital operations. As ML models are increasingly used in the diagnosis of diseases, there is a growing need for data sets covering a diverse set of patients. This is a challenging demand to fulfill due to privacy and regulatory restrictions. Approaches such as Swarm Learning, which gain insights from a diverse set of data without compromising on privacy, can help.
AI models are used in precision medicine to improve diagnostic outcomes through integration and by modeling multiple data points, including genetic, biochemical, and clinical data. They are also used to optimise and accelerate clinical decision-making for evidence-based medicine. In the sphere of life sciences, AI models are used in areas such as drug discovery, drug toxicity prediction, clinical trials, and adverse event management. For all these cases, Swarm Learning can help build better models by collaborating across siloed data sets.
As we progress towards a technology-driven future, the question of how humans and technology can work hand in hand for the greater good remains to be answered. But I believe that we will be able to maximise the benefits of digital healthcare, as long as we continue to facilitate collaboration between healthcare and IT professionals to bridge the existing gaps in the industry.
Privacy And Cybersecurity Risks In Transactions Impacts From Artificial Intelligence And Machine Learning, Addressing Security Incidents And Other…
Cyberattacks. Data breaches. Regulatory investigations. Emerging technology. Privacy rights. Data rights. Compliance challenges. The rapidly evolving privacy and cybersecurity landscape has created a plethora of new considerations and risks for almost every transaction. Companies that engage in corporate transactions and M&A counsel alike should ensure that they are aware of and appropriately manage the impact of privacy and cybersecurity risks on their transactions. To that point, in this article we provide an overview of privacy and cybersecurity diligence, discuss the global spread of privacy and cybersecurity requirements, provide insights related to the emerging issues of artificial intelligence and machine learning, and discuss the impact of cybersecurity incidents on transactions before, during and after a transaction.
There is a common misunderstanding that privacy matters only for companies that are steeped in personal information and that cybersecurity matters only for companies with a business model grounded in tech or data. While privacy issues may not be the most critical issues facing a company, all companies must address privacy issues because all companies have, at the very least, personal information about employees. And as recent publicized cybersecurity incidents have demonstrated, no company, regardless of industry, is immune from cybersecurity risks.
Privacy and cybersecurity are a Venn diagram of legal concepts: each has its own considerations, and for certain topics they overlap. This construct translates into how privacy and cybersecurity need to be addressed in M&A: each stands alone, and they often intermingle. Accordingly, they must both be addressed and considered together.
Privacy requirements in the U.S. are a patchwork of federal and state laws, with several comprehensive privacy laws now in effect or soon to be in effect at the state level. Notably, while it doesn't presently apply in full to personnel and business-to-business personal data, the California Consumer Privacy Act covers all residents of the state of California, not just consumers (despite confusingly calling residents "consumers" in the law). Further, there are specific laws, such as the Illinois Biometric Information Privacy Act and the Telephone Consumer Protection Act, that add further, more specific privacy considerations for certain business activities. And while there is an assortment of laws with a wide variety of enforcement mechanisms, from private rights of action to regulatory civil penalties or even disgorgement of IP, one consistent trend is the increasing potential for financial liability that can befall a non-compliant entity.
Laws in the U.S. related to cybersecurity compliance are not as common as laws related to responding to and notifying of a data breach. In recent years, specific laws and regulations have largely focused on the healthcare and financial services industries. However, legislative and regulatory activity is expanding in this space, requiring increasingly specific technological, administrative and governance safeguards for cybersecurity programs well beyond these two industries. Additionally, while breach response and notification where sensitive personal data is impacted has been a well-established legal requirement for several years now, increasingly complex cyber-attacks on private and public entities have expanded the focus of cybersecurity incident reporting requirements and enterprise cybersecurity risk considerations.
What Does This All Mean for Diligence?
For the buy side, identifying the specifics of what data, data uses and applicable laws are relevant to the target company is pivotal to appropriately understanding the array of risks that may be present in the transaction. Equally, at least basic technological cybersecurity diligence is important to understand the risks of the transaction and potential future integration. For the sell side, entities should be prepared to address their data, data uses and privacy and cybersecurity obligations in diligence requests.
Separately, privacy and cybersecurity diligence should not focus solely on the risks created by past business activity but also consider future intentions for the data, systems and company's business model. If an entity is looking to make an acquisition because it will be able to capitalize on the data that the acquired entity has, then diligence should ensure that those intended uses won't be legally or contractually problematic. This issue is best known earlier than later in the transaction, as it may impact the value of the target or even the desire to move ahead.
In the event that diligence uncovers concerns, some privacy and cybersecurity risks will warrant closing conditions and/or special indemnities to meet the risk tolerance of the acquiring entity. In intense situations, such as where a data breach happens or is identified during a transaction, there may even be a price renegotiation. Understanding the depth and presence of these risks should be front of mind for any entity considering a sale to allow for timely identification and remediation and in some instances to understand how persistent risks may impact the transaction if it moves ahead. For all of these situations, privacy and cybersecurity specialists are critical to the process.
The prevalence of global business, even for small entities that may have overseas vendors or IT support, creates additional layers of considerations for privacy and cybersecurity diligence.
Privacy and cybersecurity laws have existed in certain jurisdictions for years or even decades. In others, the expanded creation of, access to and use of digital data, along with exemplars like the European Union (EU) General Data Protection Regulation, have caused a profound uptick in comprehensive privacy and cybersecurity laws. Depending on how you count, there are close to or over 100 countries with such laws currently or soon to be in place. This proliferation and dispersion of legal requirements means a compounding of risk considerations for diligence.
Common themes in recently enacted and proposed global privacy and cybersecurity laws include data localization, appointed company representatives, restrictions on use and retention, enumerated rights for individuals and significant penalties. Moreover, aside from comprehensive laws that address privacy and cybersecurity, other laws are emerging that are topic-specific. For example, the EU has a rather complex proposed law related to the use of artificial intelligence. It is critical to ensure that the appropriate team is in place to diligence privacy and cybersecurity for global entities and to help companies take appropriate risk-based approaches to understanding the global compliance posture. It can be difficult to strike a balance in diligence priorities due to both the growing number of new global laws and the lack of many (or any) historical examples of enforcement for these jurisdictions. But robust fact-finding paired with continued discussions on risk tolerance and business objectives, and careful consideration of commercial terms, will help.
As mentioned, artificial intelligence is a hot topic for privacy and cybersecurity laws. One of the biggest diligence risks related to artificial intelligence and machine learning (AI/ML) is not identifying that it's being used. AI/ML is a technically advanced concept, but its use is far more prevalent than may be immediately understood when looking at the nature of an entity. Anything from assessing weather impacts on crop production to determining who is approved for certain medical benefits can involve AI/ML. The unlimited potential for AI/ML application creates a variety of diligence considerations.
Where AI/ML is trained or used on personal data, there can be significant legal risks. The origin of training data needs to be understood, and diligence should ensure that the legal support for using that data is sound. In fact, the legal ability to use all involved data should be assessed. Companies commonly treat all data as traditional proprietary information. But privacy laws complicate the traditional property-law concepts, and even if laws permit the use of data, contracts may prohibit it. Recent legal actions have shown the magnitude of penalties a company can face for wrongly using data when developing AI/ML. Notably, in 2021 the FTC determined that a company had wrongly used photos and videos for training facial recognition AI. As part of the settlement, the U.S. Federal Trade Commission ordered that all models and algorithms developed with the use of the photos and videos be deleted. If a company's primary offering is an AI/ML tool, such an order could have a material impact on the company.
Additionally, the use of AI/ML may not result in the intended output. Despite efforts to use properly sourced data and avoid negative outcomes, studies have shown that bias or other integrity issues can arise from AI/ML. This is not to say the technology cannot be accurate, but it does demonstrate that when performing diligence it is crucial to understand the risks that may be present for the purposes and uses of AI/ML.
Security incidents have been the topic of many a headline over the past few years. Some of these incidents are the result of the growing trend of ransomware or other cyber extortions, including data theft extortion or even denial-of-service extortion. The identification of a data security incident may well have a serious impact on a transaction. Moreover, transactions can be impacted by data security incidents occurring before, during and after a transaction. Below we outline some key considerations for each.
An Incident Happened BEFORE a Transaction Started
An Incident Happens DURING a Transaction
An Incident Happens AFTER a Transaction
While far from the totality of privacy and cybersecurity considerations for transactions, these topics should help establish a baseline understanding of what to look for and how to approach privacy and cybersecurity in the current legal environment.
Leverage machine learning on your iPhone to translate Braille with this free app – 9to5Mac
If you ever thought about learning Braille or just wanted to quickly translate something written in UEB (Unified English Braille) using your iPhone, there's a new app that can help you with that.
Software engineer Aaron Stephenson started learning Braille a few years ago. To put his knowledge into practice, he built an app using CoreML and Vision to find Braille. Now, he has just released an app that can translate Braille (and more) using just your iPhone.
Braille Scanner allows users to take a photo of a piece of paper with Braille on it using their iPhones, and within seconds it's translated to text.
The developer explains his intention behind the project and also the limitations so far:
Braille Scanner was created to help transcribe from Braille to text. It uses a combination of machine learning and vision to do this. The current transcribing model uses Unified English Braille, grade 1, and I'm planning on adding more in the coming app updates.
Here are the top features of Braille Scanner for iPhone users:
Since the app just launched, the developer is asking users to report any incorrectly translated Braille they find, so he can build a more accurate machine learning model.
Braille Scanner requires iOS 14.7 or later. It's free to download and you can find it here on the App Store.
The Federal Executive Forum’s Machine Learning and AI in Government 2022 – Federal News Network
Date: April 12, 2022 | Time: 1 p.m. ET | Duration: 1 hour | Cost: No Fee
Description: Machine learning and artificial intelligence technology is very important in helping agencies with their people, processes and technology. But how are agencies utilizing this technology, and what benefits do they see?
During this webinar, you will learn how federal IT practitioners from the Department of Veterans Affairs and Defense Intelligence Agency are implementing strategies and initiatives around machine learning and artificial intelligence.
The following experts will explore what the future of machine learning and AI in government means to you:
Panelists also will share lessons learned, challenges and solutions and a vision for the future.
Registration is complimentary. Please register using the form on this page or call (202) 895-5023.
AI and machine learning are the future of retail: Survey – ITP.net
Artificial intelligence and machine learning are changing the way retail works, as they create knowledge out of data that retailers can turn into action.
Sixty-five percent of decision makers at retail companies and organisations said AI and ML are mission-critical technologies, according to a survey sponsored by Rackspace Technology.
The technologies provide an opportunity to enhance customer experiences, improve revenue growth potential, undertake rapid innovation and create smart operations, all of which can help businesses to stand out from the competition.
Fifty-eight percent of respondents in the retail space said AI and ML technologies are a high priority for their industry.
Sixty-nine percent reported AI and ML had a positive impact on brand awareness and on brand reputation (67 percent), as well as on revenue generation (72 percent) and on expense reduction (72 percent).
Meanwhile, 75 percent of respondents in retail say they are employing AI and ML as part of their business strategy, IT strategy or both.
Some 68 percent of retail respondents are allocating between 6 percent and 10 percent of their budget to AI and ML projects.
The technology is being used by retailers in an increasingly wide variety of contexts, including improving the speed and efficiency of processes (47 percent), personalising content and understanding customers (43 percent), increasing revenue (41 percent), gaining competitive edge (42 percent) and predicting performance (32 percent), and understanding marketing effectiveness (42 percent).
In an indication of the increasing maturity of the technologies, 66 percent of retail respondents said their AI/ML projects have gone past the experimentation stage and are now either in the optimising/innovating or formalising stages of implementation.
There are however challenges when it comes to AI and ML adoption. Thirty-four percent of retail respondents cite difficulties aligning AI and ML strategies to the business.
From a talent perspective, more than half (61 percent) of retail respondents said they have the necessary AI and ML skills within their organisation.
At the same time, more than half of all respondents say that bolstering internal skills, hiring talent and improving both internal and external training are on their agenda.
Comparing departments, 69 percent of retail respondents say IT staff grasp AI and ML benefits while 46 percent in sales, 45 percent in R&D, 44 percent in senior management and boards, 41 percent in customer service and operations and only 34 percent in marketing departments understand the benefits of these technologies.
Praisidio Uses Machine Learning to Identify At-Risk Employees and Build Tailored Retention Plans with Procaire 3.0 – PR Newswire
New machine learning-driven retention path technology identifies urgently needed actions and enables HR executives to take immediate steps to retain at-risk employees
SAN FRANCISCO, April 5, 2022 /PRNewswire/ -- Praisidio, the leader in talent retention management, today announced the general availability of Procaire 3.0, which includes new patent-pending retention path functionality. Retention paths, auto-generated by machine learning technology, feature curated groups of employees with similar risk factors and include specific retention recommendations. Support for user-defined retention paths is also provided.
Procaire 3.0's retention recommendation engine presents contextually effective recommendations which HR professionals may choose and track. Retention paths enable HR leaders to take immediate actions to significantly reduce voluntary employee attrition.
Additionally, Procaire 3.0 includes retention impact dashboards that reflect in real-time the cumulative business impact of implemented retention actions. Metrics shown include retention improvement, maker time increases, management one-on-one improvement, time in role decreases, etc.
"Procaire provides us early visibility into the causes of attrition, recommends retention activities, and measures the impact of our HR organization's proactive actions. With Procaire retention paths, we were able to identify the main causes of attrition with employees grouped into risk and cause cohorts, allowing us to target retention activities across the company," said Gail Jacobs, Head of Talent and HR Operations, Guardant Health.
"With Procaire retention paths, I was able to identify the main problems in my organization and help our employees. In one example, I helped my organization increase their weekly maker time significantly to reduce the risk of Zoom burnout" said Iga Opanowicz, Sr. People Generalist, Guardant Health.
Customers can use Retention Paths to address groups of employees with similar risk factors such as bias, burnout, stagnation, and disconnection. Moreover, critical employees are surfaced in high-risk cohorts or groups who report to high-attrition managers.
Ben Eubanks, Chief Research Officer of Lighthouse Research & Advisory, remarked: "Our research shows that employers struggle with retention because it's hard to know what specific steps to take. With Procaire retention paths, HR professionals now have the power of machine learning at their fingertips and can easily see the exact retention drivers for their best employees."
After retention actions are taken, Procaire helps ensure follow-up and follow-through via retention workflows and optimizes future recommendations by gauging action efficacy over time.
Procaire 3.0 is immediately available.
About Praisidio
Praisidio is a talent retention management company solving employee attrition. Praisidio's Procaire unifies enterprise and HCM data, applies advanced machine learning, reveals talent risks early in real-time, provides actionable insights, root cause explanations, comparisons, recommendations, and enables employee care at scale to improve employee engagement and retention materially. For more information, visit http://www.praisidio.com.
For media contact, please reach out at [email protected].
SOURCE Praisidio, Inc.
California FEHC Proposes Sweeping Regulations Regarding Use of Artificial Intelligence and Machine Learning in Connection With Employment Decision…
The California Fair Employment and Housing Council (FEHC) recently took a major step towards regulating the use of artificial intelligence (AI) and machine learning (ML) in connection with employment decision-making. On March 15, 2022, the FEHC published Draft Modifications to Employment Regulations Regarding Automated-Decision Systems, which specifically incorporate the use of "automated-decision systems" in existing rules regulating employment and hiring practices in California.
The draft regulations seek to make unlawful the use of automated-decision systems that "screen out or tend to screen out" applicants or employees (or classes of applicants or employees) on the basis of a protected characteristic, unless shown to be job-related and consistent with business necessity. The draft regulations also contain significant and burdensome recordkeeping requirements.
Before the proposed regulations take effect, they will be subject to a 45-day public comment period (which has not yet commenced) before FEHC can move toward a final rulemaking.
"Automated-Decision Systems" are defined broadly
The draft regulations define "Automated-Decision Systems" broadly as "[a] computational process, including one derived from machine-learning, statistics, or other data processing or artificial intelligence techniques, that screens, evaluates, categorizes, recommends, or otherwise makes a decision or facilitates human decision making that impacts employees or applicants."
The draft regulations provide the following examples of Automated-Decision Systems:
Similarly, "algorithm" is broadly defined as "[a] process or set of rules or instructions, typically used by a computer, to make a calculation, solve a problem, or render a decision."
Notably, the scope of this definition is quite broad and will likely cover certain applications or systems that may only be tangentially related to employment decisions. For example, the term "or facilitates human decision making" is ambiguous. A broad reading of that term could potentially allow for the regulation of technologies designed to aid human decision-making in small or subtle ways.
The draft regulations would make it unlawful for any covered entity to use Automated-Decision Systems that "screen out or tend to screen out" applicants or employees on the basis of a protected characteristic, unless shown to be job-related and consistent with business necessity
The draft regulations would apply to employer (and covered third-party) decision-making throughout the employment lifecycle, from pre-employment recruitment and screening, through employment decisions including pay, advancement, discipline, and separation of employment. The draft regulations would incorporate the limitations on Automated-Decision Systems to apply to characteristics already protected under California law.
The precise scope and reach of the draft regulations are ambiguous in that key definitions define Automated-Decision Systems as those systems that screen out "or tend to screen out" applicants or employees on the basis of a protected characteristic. No clear explanation of the scope of the phrase "tend to screen out" is offered in the proposed regulations, and the inherent ambiguity of the language itself presents a real risk that these regulations will extend to certain systems or processes that are not involved in screening applicants or employees on the basis of a protected characteristic.
The draft regulations apply not just to employers, but also to "employment agencies," which could include vendors that provide AI/ML technologies to employers in connection with making employment decisions
The draft regulations apply not just to employers, but also to "covered entities," which include any "employment agency, labor organization[,] or apprenticeship training program." Notably, "employment agency" is defined to include, but is not limited to, "any person that provides automated-decision-making systems or services involving the administration or use of those systems on an employer's behalf."
Therefore, any third-party vendors that develop AI/ML technologies and sell those systems to third-parties using the technology for employment decisions are potentially liable if their automated-decision system screens out or tends to screen out an applicant or employee based on a protected characteristic.
The draft regulations require significant recordkeeping
Covered entities are required to maintain certain personnel or other employment records affecting any employment benefit or any applicant or employee. Under FEHC's draft regulations, those recordkeeping requirements would increase from two to four years. And, as relevant here, those records would include "machine-learning data."
Machine-learning data includes "all data used in the process of developing and/or applying machine-learning algorithms that are used as part of an automated-decision system." That definition expressly includes datasets used to train an algorithm. It also includes data provided by individual applicants or employees. And it includes the data produced from the application of an automated-decision system operation (i.e., the output from the algorithm).
Given the nature of algorithms and machine learning, that definition of machine-learning data could require an employer or vendor to preserve data provided to an algorithm not just four years looking backward, but to preserve all data (including training datasets) ever provided to an algorithm and extending for a period of four years after that algorithm's last use.
The regulations add that any person who engages in the advertisement, sale, provision, or use of a selection tool, including but not limited to an automated-decision system to an employer or other covered entity, must maintain records of "the assessment criteria used by the automated-decision system for each such employer or covered entity to whom the automated-decision system is provided."
Additionally, the draft regulations would add causes of action for aiding and abetting when a third party provides unlawful assistance, unlawful solicitation or encouragement, or unlawful advertising when that third party advertises, sells, provides, or uses an automated-decision system that limits, screens out, or otherwise unlawfully discriminates against applicants or employees based on protected characteristics.
Conclusion
The draft rulemaking is still in a public workshop phase, after which it will be subject to a 45-day public comment period, and it may undergo changes prior to its final implementation. Although the formal comment period has not yet opened, interested parties may submit comments now if desired.
Considering what we know about the potential for unintended bias in AI/ML, employers cannot simply assume that an automated-decision system produces objective or bias-free outcomes. Therefore, California employers are advised to:
Meet the Canadian Researcher Helping Solidify Edmonton as a Global Hub for Artificial Intelligence – Skift
Edmonton, Canada has become a leading global hub in technology, artificial intelligence, and machine learning. Alona Fyshe, a professor at the University of Alberta and a researcher in the artificial intelligence field, shares what makes the city a world-class destination for tech innovation and events that bring the industry together.
Destination Canada
When Alona Fyshe was growing up in Edmonton, the capital city of the Canadian province of Alberta, and attending the University of Alberta, she never expected the city would turn into an epicenter of artificial intelligence work. However, that is exactly what it has become.
In 2018, after some time away, Fyshe was lured back to Edmonton for a job as a professor at her alma mater, and she's thrilled to have settled in such a vibrant city that's home to a thriving and collaborative network of artificial intelligence and machine learning researchers.
"It's a remarkable place to be. We have young up-and-coming researchers, but we also have established researchers who are leaders in the field," Fyshe said. "Everyone's interested in each other's ideas, and there are many opportunities for smart people to connect with each other."
Fyshe is now an assistant professor in the Department of Computing Science and Department of Psychology, and uses artificial intelligence and machine learning to understand how the human brain processes language.
As Fyshe explained, "Computer models of language are exciting, and they're becoming more and more accurate. I'm interested in using those computer models to understand how we comprehend language, but also how this can help further improve those models."
Fyshe is not alone in her enthusiasm for the technology and Edmonton's unique position as a leader in artificial intelligence research. Edmonton is gaining a worldwide reputation as a hub for businesses, and organizations from across the globe are flocking to the city to learn, collaborate, and advance their work in the field.
An organization at the core of the region's artificial intelligence innovation is the Alberta Machine Intelligence Institute (Amii), one of Canada's preeminent centers of artificial intelligence. Amii plays a prominent role in building connections between researchers of different stripes in the tech community and, in doing so, developing a thriving artificial intelligence ecosystem in the province.
Amii also represents Edmonton as a member of the Pan-Canadian Artificial Intelligence Strategy, the world's first national strategy committed to building local and regional AI ecosystems, supporting talent and training in the field, fostering collaboration, and understanding the societal implications of the technology.
The region is also home to the Alberta Artificial Intelligence Association (AlbertataAI), a non-profit organization aimed at cultivating the Alberta AI ecosystem and collaborating with like-minded communities. The group organized the first annual Alberta AI conference in March 2019, which attracted 350 participants, including AI experts, IT professionals, university professors, computer science graduate students, and enthusiasts from various industries across the artificial intelligence field.
When Fyshe isn't engaging in the ground-breaking research that drew her back to Edmonton, she is deeply engaged in teaching at the University of Alberta or, as she puts it, "training the next generation of scholars to do machine learning in the real world."
Located at the edge of Edmonton's North Saskatchewan River valley, the University of Alberta is one of the top five AI research institutions globally, with a world-renowned robotics lab and brain imaging center that are helping push the field in new directions.
The use of artificial intelligence in language processing has progressed rapidly over the last decade, and the University of Alberta's expertise in this area has played a pivotal role in this. The university continues to provide exciting opportunities to advance this innovation, further contributing to Edmonton's position as a leader in artificial intelligence.
In addition, the Kule Institute for Advanced Study (KIAS) at the University of Alberta is the official host of the International Center for Information Ethics (ICIE), a leader in the field of digital ethics and publisher of the International Review of Information Ethics. When Canadian Prime Minister Justin Trudeau and French President Emmanuel Macron committed to creating an international study group on artificial intelligence in 2018, the institute set out to organize its first annual AI, Ethics and Society Conference in 2019, an interdisciplinary gathering which explored the history, ethics, policy, business, and science of AI.
Also at the university, the AI4Society Reverse EXPO 2022 brings together the University of Alberta AI research community and representatives from industry, government, and civil society to learn about what students are doing to advance AI-related research and applications, as well as explore collaboration options.
Describing the university's community, Fyshe said, "There really is an excellent set of people who are working and learning here. The work of introducing new learners, both students and members of the larger community, to artificial intelligence and machine learning is a collaborative and inspiring effort. There's an exciting network of researchers fueling the energy in Edmonton."
Edmonton's energy and expertise are attracting events and organizations from around the world, tapping into the city's ecosystem of innovation.
There is ample opportunity to learn from and collaborate with Edmonton's AI experts like Fyshe. Conference attendees can engage in meaningful tours of state-of-the-art facilities, hear from outstanding keynote speakers, and connect with leading institutions in artificial intelligence, and it's these collaborations that are driving innovation to help shape the future of this increasingly important industry.
Are you an event decision maker in the artificial intelligence sector? Click here to learn how you can tap into Canada's innovative technology ecosystem to bring your event or conference to the next level.
For more information about Destination Canada's work within priority economic sectors and how they can benefit business events, visit Destination Canada Business Events.
This content was created collaboratively by Destination Canada Business Events and Skift's branded content studio, SkiftX.
Ambitions to become GitHub for machine learning? Hugging Face adds Decision Transformer to its library – Analytics India Magazine
Hugging Face is one of the most promising companies in the world. It has set out to achieve a unique feat: to become the GitHub for machine learning. Over the last few years, the company has open-sourced a number of libraries and tools, especially in the NLP space. Now, the company has integrated Decision Transformer, an offline reinforcement learning method, into the transformers library and the Hugging Face Hub.
Decision Transformers were first introduced by Chen et al. in the paper "Decision Transformer: Reinforcement Learning via Sequence Modelling". The paper introduces a framework that abstracts reinforcement learning as a sequence modelling problem. Unlike previous approaches, Decision Transformers output optimal actions by leveraging a causally masked Transformer. A Decision Transformer can generate future actions that achieve a desired return by conditioning an autoregressive model on the desired reward, past states, and actions. The authors concluded that despite the simple design of this transformer, it matches, and even exceeds, the performance of state-of-the-art model-free offline reinforcement learning baselines on Atari, OpenAI Gym, and Key-to-Door tasks.
Decision Transformer architecture
The idea of using a sequence modelling algorithm is that instead of training a policy with reinforcement learning methods that suggest which action maximises the return, Decision Transformers generate future actions conditioned on a desired return. It is a shift in the reinforcement learning paradigm, since generative trajectory modelling replaces conventional reinforcement learning algorithms. The important steps are: feeding the last K timesteps into the Decision Transformer as three inputs (return-to-go, state, action); embedding the tokens with a linear layer (if the state is a vector) or a CNN encoder (if it is a frame); and processing the inputs with a GPT-2 model that predicts future actions through autoregressive modelling.
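The flow described above can be sketched roughly as follows. This is an illustrative toy example in PyTorch, assuming vector states embedded with linear layers and a small GPT-2 backbone; the dimensions, layer sizes, and random inputs are made up, and it is not the exact implementation Hugging Face ships in the transformers library.

```python
# Minimal sketch of the Decision Transformer forward flow described above.
import torch
import torch.nn as nn
from transformers import GPT2Config, GPT2Model

state_dim, act_dim, hidden = 11, 3, 128
K = 20  # context length: last K timesteps

backbone = GPT2Model(GPT2Config(n_embd=hidden, n_layer=3, n_head=4))

# One linear embedding per input modality: return-to-go, state, action.
embed_rtg = nn.Linear(1, hidden)
embed_state = nn.Linear(state_dim, hidden)
embed_action = nn.Linear(act_dim, hidden)
predict_action = nn.Linear(hidden, act_dim)

# Toy batch: (batch, K, feature) tensors for returns-to-go, states, actions.
rtg = torch.randn(1, K, 1)
states = torch.randn(1, K, state_dim)
actions = torch.randn(1, K, act_dim)

# Interleave tokens as (R_1, s_1, a_1, ..., R_K, s_K, a_K) and run GPT-2
# (causally masked by default) over the resulting sequence of 3*K embeddings.
tokens = torch.stack(
    (embed_rtg(rtg), embed_state(states), embed_action(actions)), dim=2
).reshape(1, 3 * K, hidden)

hidden_states = backbone(inputs_embeds=tokens).last_hidden_state

# Action predictions are read from the hidden state above each state token.
action_preds = predict_action(hidden_states[:, 1::3])
print(action_preds.shape)  # torch.Size([1, 20, 3])
```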
Reinforcement learning is a framework for building decision-making agents that learn optimal behaviour by interacting with the environment through trial and error. The ultimate goal of an agent is to maximise the cumulative reward, called the return. One can say that reinforcement learning is based on the reward hypothesis: all goals can be described as the maximisation of the expected cumulative reward. Most reinforcement learning techniques are geared toward the online learning setting, where agents interact with the environment and gather information using the current policy and exploration schemes to find higher-reward areas. The drawback of this method is that the agent has to be trained directly in the real world or have a simulator. If a simulator is not available, one has to build it, which is a very complex process. Simulators may even have flaws that can be exploited by agents to gain a competitive advantage.
This problem is avoided in offline reinforcement learning. In this case, the agent only uses data collected from other agents or human demonstrations, without interacting with the environment. Offline reinforcement learning learns skills solely from previously collected datasets, without active environment interaction, and provides a way to utilise data from sources like human demonstrations, prior experiments, and domain-specific solutions.
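As a small aside, the return-to-go signal that conditions the Decision Transformer can be computed directly from the rewards stored in such a logged trajectory: at each timestep it is simply the sum of the rewards still to come. A minimal illustration (not from the article):

```python
# Compute return-to-go targets from the reward sequence of one offline trajectory.
def returns_to_go(rewards):
    rtg, running = [], 0.0
    for r in reversed(rewards):
        running += r
        rtg.append(running)
    return list(reversed(rtg))

print(returns_to_go([1.0, 0.0, 2.0, 1.0]))  # [4.0, 3.0, 3.0, 1.0]
```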
Hugging Face's startup journey has been nothing short of phenomenal. The company, which started as a chatbot, has gained massive attention from the industry in a very short period; big companies like Apple, Monzo, and Bing use its libraries in production. Hugging Face's transformers library supports both PyTorch and TensorFlow, and it offers thousands of pretrained models for tasks like text classification, summarisation, and information retrieval.
In September last year, the company released Datasets, a community library for contemporary NLP, which contains 650 unique datasets and has more than 250 contributors. With Datasets, the company aims to standardise the end-user interface, versioning, and documentation. This sits well with the company's larger vision of democratising AI, extending the benefits of emerging technologies to smaller companies, benefits that are otherwise concentrated in a few powerful hands.
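For readers unfamiliar with the library, loading one of these community datasets takes only a couple of lines. The snippet below is a generic illustration; the dataset name used ("imdb") is just an example choice, not one singled out in the article.

```python
# Load a dataset from the Datasets library and peek at its structure.
from datasets import load_dataset

dataset = load_dataset("imdb", split="train")
print(dataset.column_names)        # e.g. ['text', 'label']
print(dataset[0]["text"][:80])     # first characters of the first example
```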
The ROI of Emotional Intelligence Technology – eWeek
The COVID-19 pandemic has ushered in a world where remote work, virtual communications, and on-demand service offerings are here to stay. For some, it hasn't been all positive. People have become increasingly stressed, isolated, and emotionally drained.
With this growing complexity in people's emotional well-being, what can organizations do to increase their awareness of people's mental states? Could technology in the form of AI actually help in some way?
Certainly the terms "machine learning" and "emotional intelligence" are not synonymous. The term machine learning often describes automating monotonous work that is otherwise tedious and emotionally unrewarding for humans to do. The term emotional intelligence refers to someone's ability to recognize and manage their own emotions and the emotions of others. People with strong emotional intelligence (or EQ) typically demonstrate a high degree of personal accountability and empathy.
These traits are highly desirable, not only in interpersonal communication but also in business interactions. Accountability and empathy form the foundation of consumer trust, and the more trust consumers have in a brand, the more likely they are to want to do business with that brand.
What does this have to do with artificial intelligence? Let's take a look.
Could a machine actually help people to increase their emotional intelligence skills?
Even people with a high EQ interacting in a face-to-face environment have significant limitations when it comes to reading the emotional states of others. Most of us can only pay attention to a limited amount of information at a time, and we often miss or misinterpret emotional cues that other people are sending out. Our ability to read the room only worsens when we move to digital channels.
But machines don't have this limitation. With today's advanced computer vision, automated speech recognition (ASR), and natural language processing (NLP), the emerging technology of emotion AI software can monitor everyone at the same time and integrate multiple sources of information, ranging from nonverbal gestures and facial expressions to spoken words and tonal inflections.
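The text-analysis slice of this can be illustrated with an off-the-shelf sentiment classifier. The short sketch below is a generic example using the open-source transformers library, not the specific emotion AI products discussed in this article, and the sample utterances are invented.

```python
# Score the sentiment of customer utterances with a stock text classifier.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")
utterances = [
    "I've been waiting forty minutes and nobody has called me back.",
    "Thanks so much, that fixed my issue right away!",
]
for text, result in zip(utterances, classifier(utterances)):
    print(f"{result['label']:>8}  {result['score']:.2f}  {text}")
```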
Businesses looking to better understand and connect with their customers can use emotion AI to get a comprehensive look at how people are reacting to what they see and hear, whether during a meeting or a sales presentation, and collect deeper insights into what motivates people to action.
Emotion AI technology has a variety of uses across the enterprise. Some of the most exciting, and most practical, emotion AI applications are in customer service.
Customers today are starved for empathy, and companies that have taken notice are investing in the deployment of emotion AI within their CX (customer experience) platforms to create a more emotionally positive experience. Contact centers in particular have benefited from this technology, reporting higher results in customer satisfaction, loyalty, and engagement.
However, customer service isn't the only department that benefits from the rise of emotion AI. Sales and even Marketing departments are successfully leveraging emotional intelligence to optimize remote engagements.
For example, sellers armed with AI tools can tailor their sales presentations based on real-time customer EQ data. If customers seem distracted or uninterested, the emotionally-aware seller can change course to re-engage them, increasing their likelihood to close.
Another equally important, if somewhat overlooked, benefit of emotion AI is in employee engagement. While much attention has been paid to the consumer side of AI investment, there is considerable ROI on the employee side as well.
Research shows that empowering employees with better tools and technology fuels higher satisfaction and retention rates. For customer service, an industry with notoriously high attrition, the benefit of emotion AI is twofold: Businesses can better engage customers in an increasingly digital world while simultaneously minimizing the rising cost of turnover. Emotion AI, clearly, will see greater adoption by businesses of many types.
About the Author:
Patrick Ehlen, VP of artificial intelligence, Uniphore