Category Archives: Data Science

Tiger Analytics: Generative AI has improved productivity – Storyboard18

Tiger Analytics, a data sciences and advanced analytics company, has undergone a rebranding exercise. Storyboard18 caught up with Mahesh Kumar, founder and chief executive officer, and Pradeep Gulipalli, co-founder of Tiger Analytics, who spoke about the rebranding initiative, the combination of AI and analytics, and the benefits of generative AI.

Rebranded logo

Could you touch upon the rebranding initiative of Tiger Analytics?

Mahesh Kumar: The last time we may have done something similar on a much smaller scale was when we were a team of 100 people, which was almost eight to 10 years ago. Today, our team has grown to 4,000 people, and we have multiple large offices in India and in seven to eight other countries.

We provide services in areas including data science, data engineering, ML (machine learning) engineering, MLOps (machine learning operations), quality engineering and application engineering. We are doing quite a lot of work in generative AI and large language models.

Today, we have 100+ clients. These are large companies spread globally, and the nature of the work spans across retail, consumer product goods, pharmaceutical, insurance, banking, transportation, logistics and manufacturing.

We have been constantly evolving our messaging when we interact with the external world, our clients and agencies, as well as internally. But we felt that this is the time to take a holistic look at what we are trying to do here, and to create a consistent, simple story to convey to everyone in this complex world.

What are the goals you aim to accomplish through this exercise?

Pradeep Gulipalli: What does building the world's best AI and analytics firm mean? What do we do? This is what the whole branding exercise was about.

The whole business ecosystem has gotten pretty complex. You can't be sure what outcome any particular action a business takes will lead to. Customer behaviours are changing with all the technology and the revolution that is happening. The competitive landscape is also quite dynamic. ChatGPT didn't exist a year ago, and look how dramatically things have changed since. So it's a pretty complex situation that our clients deal with.

And how do they make decisions? Sometimes they have a lot of data. Sometimes they have no data. Signals are hidden. It's ambiguous. It's not easy. So many times, they end up just sitting on it. This is where we come in and we say, 'We will make sense of all of this for you.'

Generative AI is a big buzzword. How are you making use of this tool in your daily practices to solve the toughest of problems?

Gulipalli: If you take a look at our work, we are working on some business problems. It could be trying to predict something, some forecasting or trying to optimise something. Now, generative AI is doing two things for us.

One is that the process of developing these models is becoming easier. There is talk that generative AI can now replace software engineers, or that it can write code. Parts of that are definitely happening. Not the replacement, but it has certainly improved productivity substantially.

But that's one part. The second part is solving actual business problems. Things which were unsolvable earlier, now some solutions are coming up. There is a live project we are working on with a very large company. I consider it to be among the top three BPO companies in the world. They run call centres across multiple industries globally.

We are working with the group that works with airlines. With airlines, buying a ticket is the easy part. But once you ring the call centre, it can be a frustrating experience.

So the solution we are building for them, we're doing it in two phases. In phase one, when a customer calls, the conversation is listened to live with the help of AI.

And then, there are generative AI models which understand it, and they come up with a solution to the query being raised by the customer. Now, in phase one, there is still a human who answers.

We are giving that solution to the call centre agent, saying, 'Here is the answer to that question.'

Now, in the next phase, we are saying, 'Can we experiment where the generative AI or the bot directly answers the question, without a human in the loop?'

And there will be some pockets which are complex enough that only humans can handle them, and people can spend more time there. So that's phase two. This is one example, and you can draw parallels to how others would use it: there is a massive amount of information available, but we don't know what is relevant. That's where AI comes in handy.
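To make the phase-one flow concrete, below is a minimal sketch of an agent-assist loop of the kind Gulipalli describes: transcript text comes in, and a generative model drafts a suggested reply for the human agent to review. It assumes an OpenAI-style chat API; the model name, prompt wording, and stubbed transcript feed are illustrative assumptions, not details confirmed by Tiger Analytics.

```python
# Hypothetical sketch of a phase-one agent-assist loop. Transcript chunks
# would arrive from a speech-to-text service (stubbed here as a list), and
# a generative model drafts a suggested reply for the human agent.
# Model name and prompt are placeholder assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def suggest_reply(transcript: str) -> str:
    """Draft a reply the call-centre agent can review before answering."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "You assist an airline call-centre agent. Given the "
                        "conversation so far, draft one concise suggested reply."},
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content

transcript = ""
for chunk in ["Customer: Hi, my flight was cancelled.",
              "Customer: Can I rebook onto tomorrow's 9am departure?"]:
    transcript += chunk + "\n"  # stand-in for a live speech-to-text feed
    print("Suggested reply:", suggest_reply(transcript))
```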

Kumar: About 1.4 billion people in India can register their complaints with the ministry of consumer affairs. For any product or service you buy, you are a consumer in this country, and hence you can register a complaint.

And earlier there used to be fewer complaints, just because the channels were cumbersome. Now they have opened up mobile channels, email, phone and so on, and the current government is encouraging more registration of these complaints, suggestions and feedback.

The problem for them now is that the volume is so high. There are only 25-30 people manually processing those complaints. So, they are launching an RFP for the use of gen AI to process these complaints, and the purpose is categorising them.

First, many complaints are duplicates, arriving through the multiple channels people use. So how do you deduplicate them? Second, some could be bogus complaints. How do you classify them into relevant and non-relevant, depending on the intensity of the complaint?

Then there's the question of how important a complaint is. Everyone says their complaint is important, but the ministry does its own internal categorisation, including which category a complaint falls into.

So all of that can be done by gen AI, which was earlier done manually. Eventually, what happens is if the complaints are more serious, one can approach the court. What they are trying to do is, can AI provide an intermediate solution, almost like negotiation between the service provider and the consumer? So there are a lot of interesting applications coming up with gen AI, which is going to touch the day-to-day life of every individual.
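As a rough illustration of the triage Kumar describes (deduplication across channels, filtering bogus complaints, and categorisation), the sketch below pairs a crude near-duplicate check with an LLM classification call. It assumes an OpenAI-style chat API; the model name, prompt, and output fields are placeholders, not the ministry's or Tiger Analytics' actual design.

```python
# Hypothetical complaint-triage sketch: dedupe near-identical complaints
# filed via multiple channels, then ask a generative model to label the
# rest. Model name, prompt, and output fields are illustrative assumptions.
from difflib import SequenceMatcher
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def is_duplicate(a: str, b: str, threshold: float = 0.9) -> bool:
    """Crude check for the same complaint submitted through several channels."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

def triage(complaint: str) -> str:
    """Ask the model whether a complaint is relevant, plus its category and severity."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "Classify this consumer complaint. Reply as JSON with "
                        "fields: relevant (true/false), category, severity (1-5)."},
            {"role": "user", "content": complaint},
        ],
    )
    return response.choices[0].message.content

complaints = ["Refund not issued for cancelled order #123.",
              "refund NOT issued for my cancelled order #123!!"]
unique = [c for i, c in enumerate(complaints)
          if not any(is_duplicate(c, earlier) for earlier in complaints[:i])]
for c in unique:
    print(triage(c))
```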

What trends are you expecting for 2023 in the space you cover? And could you explain the disruption that you will witness in this space?

Kumar: Tiger Analytics was started in 2011 and since then, we have witnessed a lot of changes in the work of traditional data science and data engineering. In the post-Covid era, the demand for and adoption of these technologies has increased significantly with the advent of the cloud.

The demand for gen AI has increased, and with it the demand for our services. Gen AI is in that category where it has the promise of high value.

Earlier, if one went to large Fortune 500 companies, there would be 100 people talking about artificial intelligence (AI), data science and data engineering.

But gen AI has everyone talking about it. So, it has expanded the total addressable market of the services we provide. That's a big change we're seeing. And it will translate into much wider adoption of what we do.

Through AI and analytics, what are the challenges and opportunities Tiger Analytics has helped companies overcome and embrace?

Gulipalli: AI and analytics mainly help with decision-making. Coming up with decisions based on data is a much more scientific way of making a decision. Sometimes the decision can be manual or it can be automated; AI can make the decision for you. But at the end of the day, we are making a decision.

Now think of looking across the organisation. You have a product which you are trying to sell. So it starts with sales. How can I sell better? Who is the right audience? How do I sell? Who do I market it to? How much do I spend on marketing? Today, there are a number of ways a product can be marketed.

There are hundreds of television channels. Digital has exploded. Where do I market? AI can give you answers for all of these. For example, you have acquired a customer. How do you provide a good customer experience? How do you retain the customer? How do you address their pain points? Identifying issues manually is a huge problem.

AI can help identify these hotspots. Then, look at the operations of a company. Are resources being used optimally? Is there wastage? Through AI and analytics, resources can be used more optimally.

When you try to do planning as a business, you want to know how much to manufacture. For that, you would want to know what demand is going to look like. How do you predict demand? There could be so many things that impact demand. Decisions spanning sales, marketing, customer, operations, finance, risk, manufacturing and supply chain can all be solved using analytics and AI.

What makes AI and analytics a very deadly combination? Is there any other tool you would combine with analytics or AI?

Gulipalli: AI is like an engine. Think of it as if you're building a Formula One car. AI is the engine. But then just that engine will not get you to your destination. You need to build the overall chassis, have the steering, the seats, the body, etc.

There are different technologies which will come in to complete the picture. For example, AI is great but let's say for you to understand what AI is saying and for you to act on it, maybe you need a certain technology which is an interpreter between you and AI. So there are a variety of adjacent technologies which come in and help. And there'll be more that will keep coming in.

What AI was 10 years ago is not what it is today. A lot of advancements have taken place, and a lot of embellishments and enablement will come in terms of other technologies. We have long heard about software development, and the two are very much tied together.

This is because, at the end of the day, what you'll use is a software product but at the heart of the software product is AI. For you to understand something much better, maybe you might need certain visualisations so that you can grasp the complexity of a situation better. Now, this is where AI and the visualisation technology will work together.

Currently, how many clients do you have?

Kumar: Right now, we are working with 130-140 companies. Seventy percent of our clients are from Fortune 1,000 companies.

In the consumer product goods category, among the top 10 companies, we work with six. The majority of our business is in the US right now. We do have some business in Australia, Asia, Singapore, the Philippines, the UK and Malaysia. We recently started working with some government entities.

In India, we work with Tata Steel and with Star Network at Disney India. We are just starting our first project with the government of Bihar, which revolves around AI. In the US, we have clients like PepsiCo, The Hartford (The Hartford Financial Services Group) and Nestlé. We are about to start work with Citibank.


New York Life Hires Industry Veteran Don Vu as Chief Data and Analytics Officer – Yahoo Finance

NEW YORK, October 12, 2023--(BUSINESS WIRE)--New York Life today announced the hiring of industry veteran Don Vu as senior vice president and chief data and analytics officer. Mr. Vu will lead a newly formed artificial intelligence (AI) and data team with responsibility for AI, data, and insights capabilities and for aligning data architecture with business architecture in support of the company's business strategy and objectives. Mr. Vu will report to Alex Cook, senior vice president and head of Strategic Capabilities.

"Considering Dons impressive experience and track record of success, we are delighted to welcome him to the team at this exciting point in our innovation journey," said Cook. "We look forward to the role Don will play in furthering our digitization efforts and leveraging AI to provide industry-leading experiences for our customers, agents, advisors, and employees."

Mr. Vu joins New York Life from Northwestern Mutual, where he served as chief data officer since early 2020. In leading the company's data and analytics function, he drove organizational transformation and enterprise data and AI strategy across the company. He led a consolidated team across various disciplines, including data product and strategy; data science, AI, and analytics; data engineering; and data governance. He also served on the executive steering committee of the company's Data Science Institute.

Previously, Mr. Vu served as vice president of data and analytics at WeWork, where he led its central data and analytics organization. He also spent 13 years at Major League Baseball (MLB), where he led its consolidated analytics organization as vice president of data and analytics. While at MLB, Mr. Vu led data and advanced analytics efforts at BAMTech, a streaming media technology spin-off of MLB Advanced Media.

Mr. Vu received a B.S. in Information Systems and Commerce from the University of Virginia's McIntire School of Commerce. He currently serves on the advisory board for McIntire's Business Analytics program.


ABOUT NEW YORK LIFE

New York Life Insurance Company (www.newyorklife.com), a Fortune 100 company founded in 1845, is the largest[1] mutual life insurance company in the United States and one of the largest life insurers in the world. Headquartered in New York City, New York Life's family of companies offers life insurance, retirement income, investments, and long-term care insurance. New York Life has the highest financial strength ratings currently awarded to any U.S. life insurer from all four of the major credit rating agencies.[2]

[1] Based on revenue as reported by "Fortune 500 ranked within Industries, Insurance: Life, Health (Mutual)," Fortune magazine, 6/5/2023. For methodology, please see https://fortune.com/franchise-list-page/fortune-500-methodology-2023/.

[2] Individual independent rating agency commentary as of 10/18/2022: A.M. Best (A++), Fitch (AAA), Moody's Investors Service (Aaa), Standard & Poor's (AA+).


Contacts

Kevin Maher, New York Life, (212) 576-6955, kevin_b_maher@newyorklife.com


The role of data skills in the modern labour market – CEPR

The increasing use of data and advanced analytics across countries has driven demand for new types of jobs (Acemoglu and Restrepo 2017). Individuals who master those skills are in high demand in the labour market and typically earn a wage premium (Sostero and Tolan 2022). With rapid technology changes, it is less clear which jobs are becoming more digital, and which occupations and industries hire people with these skills. Traditional labour market statistics provide only limited possibilities to explore these questions, as they often lack detailed information on skill requirements.

In recent years, online job advertisement data have gained popularity as an alternative data source relating to labour markets, as they provide timely and granular information. In many cases, they have advantages over existing occupation classifications and are often complementary to official employment or vacancy statistics (Atalay et al. 2022). Recent studies using online job advertisements showed that digital skillsets have evolved over the past decade and can be found at the core of some traditionally non-digital domains (Sostero and Tolan 2022). Online job advertisements have also contributed to an improved understanding of the impact of crises on labour markets, such as the COVID-19 recession in the US and Canada (Soh et al. 2022, Bellatin and Galassi 2022) and the war in Ukraine (Pham et al. 2023).

In a recent paper (Schmidt et al. 2023), we aim to estimate the data intensity of occupations and sectors (i.e. the share of data-related jobs involved in the production of data). First, we put forward a novel methodology applying natural language processing (NLP) to online job advertisements from Lightcast to generate occupation- and industry-level estimates of data intensity. Second, the methodology can be used to advance cross-country comparable results on measuring the value of data assets in the data economy and the evolution of digital skills in the labour market. Third, the NLP algorithm is flexible and can be applied to concepts that are difficult to capture in traditional labour market statistics, such as green and AI-related jobs. The algorithm can also be adapted to over 66 languages, meaning the scope of the analysis could be broadened.

The NLP techniques enable the extraction of relevant skills and tasks from the raw text of the online job advertisements. Due to the high granularity of the data, the algorithm can classify the extracted information into data entry, database, and data analytics related activities. Finally, the methodology computes occupation- and industry-level aggregates of the share of jobs involved in data production activities. The methodology relies on an open-source NLP pipeline provided by the spaCy Python library (spaCy 2022), which allows for efficient treatment of large amounts of text data and flexible deployment of NLP models.
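The paper's full pipeline is not reproduced in this column, but the general approach can be sketched with spaCy's PhraseMatcher: match skill phrases in the raw job-ad text and bucket them into data entry, database, and data analytics activities. The keyword lists below are illustrative stand-ins for the authors' far richer taxonomy.

```python
# Minimal sketch of the general approach, with invented keyword lists:
# match skill phrases in job-ad text and bucket them into the three
# data-activity categories used in the paper.
import spacy
from spacy.matcher import PhraseMatcher

nlp = spacy.load("en_core_web_sm")  # assumes the small English model is installed
matcher = PhraseMatcher(nlp.vocab, attr="LOWER")  # case-insensitive matching

SKILL_BUCKETS = {
    "DATA_ENTRY": ["data entry", "typing", "record keeping"],
    "DATABASE": ["sql", "database administration", "data warehouse"],
    "DATA_ANALYTICS": ["machine learning", "statistics", "data visualisation"],
}
for label, terms in SKILL_BUCKETS.items():
    matcher.add(label, [nlp.make_doc(term) for term in terms])

def classify_ad(text: str) -> set:
    """Return the data-activity buckets mentioned in one job advertisement."""
    doc = nlp(text)
    return {nlp.vocab.strings[match_id] for match_id, _, _ in matcher(doc)}

print(classify_ad("Seeking an analyst with SQL and machine learning experience."))
# -> {'DATABASE', 'DATA_ANALYTICS'}
```

An occupation's data intensity then follows from the share of its advertisements flagged as data-related in this way.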

Results illustrate that the share of data analytics skills is higher than that of other data-related skills in all three countries. At the sectoral level, the picture is more heterogeneous across countries. While the information and communication industry as well as the finance industry are highly data-intensive everywhere, larger differences in data intensity exist for agriculture, mining and quarrying, and electricity, gas, steam and air conditioning supply. Differences in labour demand mostly explain these variations, with low data-intensity professions contributing most to economy-wide data intensity. Preliminary estimates show that the results remain stable for pre-COVID years, too.

In the UK, the occupation with the highest data intensity is data scientist, at 92.3% in 2020, followed by data engineer at 69% and data entry clerk at 68% (Figure 1). Most of these occupations revolve primarily around data analytics skills, with a few exceptions. For instance, data entry clerks and database administrators exhibit data intensities largely linked to data-entry and database-related capabilities. In general, the highly data-intensive occupations tend to be specialised, technology-oriented professions, with occupations such as biostatistician and clinical data manager showing connections to fields such as biology and medicine. Similar trends are observed in Canada and the US.

Figure 1 Data intensity at occupation level in the UK is linked to data analytics skills

Per cent of labour demand, 2020

Financial and insurance, and information and communication activities are the two most data-intensive industries in all three countries, with shares close to or above 10% in 2020 (Figure 2). Shares are similar in most sectors with low data intensity, in particular accommodation and food service activities, construction, and transportation and storage. This is consistent with Calvino et al. (2018), who use a different methodology.

Figure 2 Data intensity in the UK, Canada, and the US by industry

Per cent of labour demand, 2020

However, these numbers can mask some structural differences across countries. For instance, in the finance and insurance sector, the UK's share is almost on par with those of the US and Canada, with data mining analysts making the largest contribution to the data intensity of the sector in all three countries. The high demand for data mining analysts in the sector more than compensates for the lower average data intensity of the profession in the UK (30% in 2020, as compared to 70% in the US and Canada). Overall, the contribution of the profession to the data intensity of the sector is about twice as high in the UK (0.8 percentage points) as in Canada (0.3 percentage points) and the US (0.4 percentage points).

Differences in the data intensity of individual industries are noticeable in all three countries in professional, scientific and technical activities, with data intensity much higher in the US, and to a lesser extent Canada, than in the UK. Similarly, the data intensity in agriculture and forestry and electricity, gas, steam and air conditioning supply differs across countries, with labour demand being more data-intensive in the UK than in Canada or the US. In a few sectors, such as mining and quarrying, arts, entertainment and recreation, and public administration and defence, the US and Canada exhibit similar data intensity rates, which are much lower than in the UK.

At the economy-wide level, the UK and Canada appear to be less data-intensive than the US (Figure 3). The overall share of data-intensive jobs in the UK was estimated at 3.4% in 2020, obtained by weighting data intensity at the occupation level by the number of job advertisements posted. This compares to 3.9% in Canada and 4.6% in the US.
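In other words, aggregate data intensity is a posting-weighted average of occupation-level intensity. A toy illustration with invented numbers:

```python
# Toy illustration of the posting-weighted aggregation (all numbers
# invented): aggregate = sum(intensity_i * ads_i) / sum(ads_i).
occupations = {
    # occupation: (occupation-level data intensity, job ads posted)
    "data scientist": (0.923, 1_000),
    "software developer": (0.30, 9_000),
    "office assistant": (0.02, 60_000),
}
total_ads = sum(ads for _, ads in occupations.values())
aggregate = sum(intensity * ads
                for intensity, ads in occupations.values()) / total_ads
print(f"{aggregate:.1%}")  # posting-weighted share of data-intensive labour demand
```

This weighting is why a very common, low-intensity occupation can contribute as much to the aggregate as a rare, high-intensity one.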

Figure 3 Low data-intensity occupations contribute most to data intensity in the UK

Aggregate data intensity, per cent, 2020

In the UK, low data-intensity occupations contribute more to the overall data intensity of the economy than medium and high data-intensity jobs (Figure 4, Panel A). For instance, office assistants, who represent 6% of labour demand in the low data-intensity occupation class, contribute as much to the overall data intensity of the economy as data scientists.

Figure 4 Data intensity across occupations

Data intensity of an occupation in per cent, contribution to aggregate data intensity in percentage points, 2020

A) United Kingdom

B) Canada

C) United States

In Canada and the US, medium data-intensity occupation classes contribute the largest proportion to overall data intensity, and the level of data intensity across professions is generally higher (Figure 4, Panels B and C). A data scientist has a data intensity score of 94.5% in Canada and 95.1% in the US, compared to 92.3% in the UK. Among the high data-intensity professions in Canada, data entry clerks, database administrators, and data mining analysts contribute most to aggregate data intensity, next to professions at the medium level such as business management analyst and software developer. The US has the widest range of professions contributing at the medium and high data-intensity levels, among them network system analysts and computer system engineers.

Acemoglu, D and P Restrepo (2017), "Robots and Jobs: Evidence from US Labor Markets", NBER Working Paper No. 23285.

Atalay, E, S Sotelo and D Tannenbaum (2022), "The geography of job tasks", VoxEU.org, 12 November.

Bellatin, A and G Galassi (2022), "What COVID-19 May Leave Behind: Technology-Related Job Postings in Canada", Bank of Canada Staff Working Paper 2022/17.

Calvino, F, C Criscuolo, L Marcolin and M Squicciarini (2018), "A taxonomy of digital intensive sectors", OECD Science, Technology and Industry Working Paper 2018/14.

Pham, T, O Talavera and Z Wu (2023), "Labour markets during wartime: Evidence from online job advertisements", VoxEU.org, 22 July.

Schmidt, J, G Pilgrim and A Mourougane (2023), "What is the role of data in jobs in the United Kingdom, Canada, and the United States? A natural language processing approach", OECD Statistics Working Paper 2023/05.

Soh, J, M Oikonomou, C Pizzinelli, I Shibata and M Mendes Tavares (2022), "Did the COVID-19 Recession Increase the Demand for Digital Occupations in the United States? Evidence from Employment and Vacancies Data", IMF Working Paper 2022/195.

Sostero, M and S Tolan (2022), "Digital skills for all? From computer literacy to AI skills in online job advertisements", JRC Working Papers Series on Labour, Education and Technology.

spaCy (2022), "Language Processing Pipelines".


Better Data Decisions on the Journey to a Single Source of Truth – Spiceworks News and Insights

The enterprise journey to the single source of truth (SSOT) has been long and winding. The benefits of this direction are many, from a common data set to enterprise governance to unlocking valuable use cases and transformation across the enterprise, says Paula Hansen of Alteryx.

We can all relate to the desire for any employee to arrive at the same answer, once and for all putting to rest who has the better, more accurate data. But while organizations are on this journey, should they wait to pursue analytic insights and automation?

The exponential growth of data within the enterprise and the accelerating pace of business both suggest that pausing analytic automation until the single source of truth is complete will separate analytic leaders from laggards. Data is often scattered across various databases and applications, and different departments will migrate to a single source in a phased approach. Many business use cases, competitive decisions and transformation opportunities can be lost in this wait.

Simply put, driving business value through data and becoming analytically mature can advance in parallel with data centralization.

The reality is that most enterprises today are pulling data from six input sources, from legacy databases and applications to modern cloud data warehouses and cloud platforms. A CFO pursuing a reduction in the time to close the quarter, or a head of supply chain wanting to optimize complex logistics, is certainly pulling data from multiple sources and in multiple formats. In fact, the triangulation of different data sets and scenarios increases the quality of insights, creates better models and promotes enterprise collaboration.

For example, the development of a global go-to-market strategy likely pulls data from multiple CRMs, ERPs, legacy spreadsheets, third-party data sets and SaaS applications across a global enterprise. Different stakeholders within the business also want different insights from the data, ranging from partner insights to product insights to geographic insights to margin insights. The list goes on and on.

To reach the highest levels of analytic maturity (what the International Institute of Analytics refers to as "analytical nirvana") and accelerate time-to-insight, an organization must democratize data and analytics for all of its employees. Organizations across every industry are increasing data literacy through analytics, using governance to scale responsibly, and evaluating the latest technologies like cloud, machine learning and generative AI to drive business outcomes faster than ever before.


Centralized data science teams, while very valuable, lack the specific business expertise needed to effectively solve business challenges in each department. Data scientists are not trained as accountants, HR professionals, marketing experts or supply chain managers. Business context is required to ask relevant questions about your data and to reduce the time from insight to action.

The first step to increasing analytic maturity is to increase organizational data literacy. In this scenario, all employees are empowered to marry their domain expertise with the ability to ask more precise questions of the data, accurately analyze the data, and draw out valuable insights, all through self-service technology that meets them where they are, regardless of analytic skill set or analytic language preference.

Once you start your analytic program, there are many ways to democratize analytics and encourage greater data literacy. One approach is through gamification to increase employee engagement and upskilling. Jones Lang LaSalle (JLL), for example, established a gamification program that incorporates training to learn the functions of their analytics platform, provides users with challenges to work on their problem-solving skills, and issues certifications for awards and recognition of capability.


Often, disaggregated analytics tools lack interoperability and create silos that result in complexity, duplication, and inefficiency for users and IT. Hybrid architectures that span on-premise and cloud-based data and applications are becoming the norm. Therefore, companies must implement the right tools and processes to pull disparate data types and place them into analytics processes wherever their data resides.

Further, the cloud can play a pivotal role in accelerating democratization through its flexibility, scalability, speed, and self-service. Using cloud-based analytic platforms streamlines IT management, removes overhead costs, and makes it easier for users to collaborate on their analytics solutions.

In today's fast-paced world, the ability to communicate insights quickly and effectively to stakeholders is critical. Generative AI will supercharge time-to-insight through its core value proposition: the ability to rapidly accelerate content creation. Additionally, generative AI will accelerate analytic best practices and ML models across the enterprise.

For example, generative AI can automate communications that synthesize and deliver trusted analytical insights to stakeholders. The tone and language of the communications can be selected based on the intended audience. Instead of spending hours drafting reports or presentations for stakeholders, you can instead focus on absorbing insights and planning the best course of action based on results. This benefits users in several ways, including improved time-to-value, operational efficiency, and decision-making.

Governance is also a key consideration when scaling analytics across the enterprise. Governance doesn't mean democratization will come to a standstill. Proper governance helps organizations strike the balance of democratization at the speed the business demands with the controls that IT requires.

Establishing data governance means creating frameworks that define who can take what actions, with what data, in what situations, and what methods they can use. These principles guide data analytics at every stage of the collection of data and will be unique to each organization depending on the type of data, data systems, and regulatory requirements.
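As a loose illustration of such a framework (not Alteryx's or any specific product's policy model), a governance rule can be thought of as keying permissions on actor, action, data class, and context; the roles and data classes below are invented placeholders.

```python
# Loose illustration of a governance rule check: who can take what action,
# on what data, in what situation. Roles, actions, data classes, and
# contexts are invented placeholders, not a real product's policy model.
RULES = {
    # (role, action, data_class): set of contexts in which it is allowed
    ("marketing_analyst", "read", "customer_pii"): {"aggregated_only"},
    ("data_scientist", "read", "customer_pii"): {"aggregated_only", "consented"},
    ("data_engineer", "write", "customer_pii"): {"consented"},
}

def is_allowed(role: str, action: str, data_class: str, context: str) -> bool:
    """Return True if this (role, action, data class) tuple permits the context."""
    return context in RULES.get((role, action, data_class), set())

print(is_allowed("marketing_analyst", "read", "customer_pii", "consented"))  # False
print(is_allowed("data_scientist", "read", "customer_pii", "consented"))     # True
```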

Data governance also helps to ensure that data is usable and accessible for analytics. This becomes even more important as companies adopt generative AI, where large language models rely on the quality of data inputs.

Digital transformation agendas have put data analytics at the center of every organization's focus in every industry. Whether your data is in a single location or not, empowering employees across the enterprise to make data-driven decisions through the use of analytics will directly impact company performance. Plus, employees stay longer in organizations that invest in their development and help them to automate the mundane and focus on value-add. Improving organizational analytic maturity can be done while maintaining proper governance. Analytic leaders will leverage their insights to uncover new revenue streams, improve operational efficiency, and stay hyper-competitive in today's evolving world.



Data and Privacy Challenges Most Marketers Are Still Faced With – MarTech Series

From public to private, the customer experience has evolved over the years, and with the demise of third-party cookies, the privacy experience is leading the way ahead for customers as well as for organizations.

In the era of heightened data privacy concerns, building an efficient privacy experience that aligns equally with your business objectives, customer expectations, and legal regulations is challenging. It requires businesses to consider several factors, because the entire digital ecosystem now revolves around consent and privacy.

Let us walk you through the emerging factors that pose a challenge when dealing with privacy concerns, but that also offer opportunities for enterprise-scale companies to craft a privacy strategy that sustains even during volatile times.

Companies have, over the years, spent a considerable amount of time, money, and other resources on becoming privacy-compliant. Nevertheless, the ownership of such initiatives keeps being passed from one department to another. What started as a task for legal teams gradually passed to the data and IT teams, and now the baton is in the hands of marketing and sales teams, as they are the primary generators and users of customer data.

Data breaches make up only a small part of possible customer data privacy violations; organizations also risk breaking data privacy laws while collecting, managing, processing, storing, using, and disposing of customer data.

All the internal stakeholders in an organization should work together to build a privacy-first organization. Making data privacy an independent function can drastically improve the chances of alignment. The scope of such a privacy function could be:

Privacy is a complex and ever-evolving issue. While trying to operationalize privacy protocols, organizations must consider multiple stakeholders that use data to accomplish their tasks, for example, UX and product design, data science, marketing, sales, legal, HR, procurement, finance teams, and more.

Cross-functional collaboration works, but it takes an approach that goes beyond just partnering with all the departments. There needs to be operational alignment with existing workflows and technical integration with multiple systems, along with legal compliance with a myriad of ever-changing laws.

A robust privacy roadmap must contain a design, plan, and implementation strategy that complies with current laws and is agile enough to evolve with the changing needs of the business.


As third-party cookies come to an end, collecting data from different channels has become all the more challenging. At the same time, data keeps arriving at increasing frequency, volume, and diversity from a spiralling set of channels, sources, platforms, and devices, leaving marketers struggling to control and streamline data management workflows across all touchpoints.

Marketing leaders should address the problem of data deluge with innovative strategies. For generation and collection, the focus must be on premium zero- and first-party data, and on reconciling customer data from several digital and physical sources for a seamless customer experience.

Blockchain, generative AI, AR, and VR have made their way through all industries and sectors. Unfortunately, companies still do not have clarity on how to process the data collected or shared through these technologies while following compliance requirements and protocols.

Generating and utilizing data with AI has become a double-edged sword for marketing and sales teams. AI has offered intelligent automation, but it can create look-alike identities that may violate anyone's privacy and security. Marketers must own responsibility for how their models use data and control how it is shared in the public domain.

How do you create a privacy tech stack? An ideal tech stack is built around tools and technologies that run and grow privacy-related initiatives like consent, compliance, and preference management. While the martech and fintech tools in your ecosystem have certain privacy features, privacy-first organizations should further invest in building an independent privacy tech stack to facilitate a more advanced range of privacy use cases.

Coping with privacy concerns is becoming more challenging by the day. The field is interconnected and evolving. Modern enterprises must strive to take privacy from a mere compliance task to a strategic initiative, as part of their commitment to positive privacy experiences.

Addressing the challenges mentioned above can go a long way in creating a sustainable privacy strategy that can withstand the volatility in the market and offer a competitive edge to the brand over a long period.



11 most in-demand gen AI jobs companies are hiring for – CIO

Generative AI is quickly changing the landscape of the business world, with rapid adoption rates across nearly every industry. Businesses are turning to gen AI to streamline business processes, develop proprietary AI technology, and reduce manual efforts in order to free up employees to take on more intensive tasks. A recent survey of senior IT professionals from Foundry found that 57% of IT organizations have identified several areas for gen AI use cases, 25% have started pilot programs, and 41% are engaged in training and upskilling employees on gen AI.

In the next six to 12 months, some of the most popular anticipated uses for gen AI include content creation (42%), data analytics (53%), software development (41%), business insight (51%), internal customer support (45%), product development (40%), security (42%), and process automation (51%). Organizations are also optimistic that gen AI will boost productivity and improve business outcomes, with 58% saying that they believe gen AI will make employees more productive, 55% saying that gen AI-infused products lead to better business outcomes, and 55% saying that gen AI enables employees to focus more on value-adding tasks.

As this technology becomes more popular, it has increased the demand for relevant roles to help design, develop, implement, and maintain gen AI technology in the enterprise. Foundry's AI survey also identified several roles that companies are looking to hire for to help with the integration of gen AI in the workplace. Here are the top 11 roles companies are currently hiring for, or have plans to hire for, to directly address their emerging gen AI strategies.

As companies embrace gen AI, they need data scientists to help drive better insights from customer and business data using analytics and AI. For most companies, AI systems rely on large datasets, which require the expertise of data scientists to navigate. Responsibilities include building predictive modeling solutions that address both client and business needs, implementing analytical models alongside other relevant teams, and helping the organization make the transition from traditional software to AI-infused software. It's a role that requires experience with natural language processing, coding languages, statistical models, and large language and generative AI models. According to the survey, 28% of respondents said they have hired data scientists to support generative AI, while 30% said they have plans to hire candidates.

Machine learning engineers are tasked with transforming business needs into clearly scoped machine learning projects, along with guiding the design and implementation of machine learning solutions. This role is responsible for training, developing, deploying, scheduling, monitoring, and improving scalable machine learning solutions in the enterprise. It's a role that requires a wide range of skills, including model architecture, data and ML pipeline creation, software development skills, experience with popular MLOps tools, and experience with tools such as BERT, GPT, and RoBERTa, among others. The goal of a machine learning engineer is to ultimately make machine learning more accessible across the organization so that everyone can benefit from the technology. According to the survey, 22% of respondents say they have already hired machine learning engineers to support generative AI, while 28% say they have plans to hire for the role.

AI is new territory for businesses, and there's still a lot to discover about the technology, which is why they're looking to hire AI researchers to help identify the best possible applications of AI within the business. AI researchers help develop new models and algorithms that will improve the efficiency of generative AI tools and systems, improve current AI tools, and identify opportunities for how AI can be used to improve processes or achieve business needs. AI researchers need to understand data and automation infrastructure, machine learning models, AI tools and algorithms, data science, programming, and how to build AI models from scratch. According to the survey, 31% of respondents say they have already hired AI researchers to support generative AI, while 19% say they have plans to hire for the role.

Algorithm engineers, sometimes referred to as algorithm developers, are tasked with building, creating, and implementing algorithms for software and computer systems to achieve specific tasks and business needs. The role of algorithm engineer requires knowledge of programming languages, testing and debugging, documentation, and of course algorithm design. These engineers are responsible for solving complex computational problems in the organization, often working with large data sets to design intricate algorithms that address and solve business needs. Businesses rely on algorithm engineers to help navigate gen AI technology, relying on these experts to scale and deploy gen AI solutions, consider all the ethical and bias implications, and ensure they're aligned with all compliance and regulatory requirements. According to the survey, 16% of respondents say they have already hired algorithm engineers to support generative AI, while 31% say they have plans to hire for the role.

Deep learning engineers are responsible for heading up the research, development, and maintenance of the algorithms that inform AI and machine learning systems, tools, and applications. Deep learning is a subset of AI, and vital to the development of gen AI tools and resources in the enterprise. This role is responsible for building and maintaining powerful AI algorithms, identifying data requirements, and finding better ways to automate processes in the business to improve performance. Technologies such as chatbots, virtual assistants, facial recognition, medical devices, and automated cars rely on deep learning to create effective products. As companies continue to embrace gen AI, deep learning engineers are critical for businesses that want to capitalize on AI and integrate it into business processes, services, and products. According to the survey, 16% of respondents say they have already hired deep learning engineers to support generative AI, while 28% say they have plans to hire for the role.

Natural language processing (NLP) engineer is a vital role for embracing gen AI in any organization. Gen AI relies heavily on NLP to improve communication and to create chatbots and other AI services that need to communicate effectively with users, no matter the query. This role is responsible for training NLP systems, developing models, running experiments, identifying proper tools and algorithms, and performing regular maintenance and analysis of the models. Candidates typically have experience in big data, coding, model selection and customization, language modeling, language translation, and text summarization using NLP tools. NLP plays a big role in technologies such as text-to-speech (TTS) and speech-to-text (STT), chatbots and virtual assistants, and other gen AI tools that are designed to interact in real-time with users. According to the survey, 15% of respondents say they have already hired NLP engineers to support generative AI, while 27% say they have plans to hire for the role.

Chatbots are one of the earliest and most common uses of gen AI in a business setting; it's very likely that you have interacted with an AI chatbot at some point in the past several years. They help direct customers to the right associates, connect users with important documentation, and can alleviate some of the load on customer service representatives. With gen AI, chatbots are becoming even more sophisticated, with the rise of services such as ChatGPT, Bard, Replika, Cleverbot, and others, which have shown to be powerful tools that are useful to businesses. Chatbot technology is in demand across every industry, and businesses are eager to develop their own chatbot tools to help streamline customer service, appointment scheduling, social media engagement, user support, and even marketing and promotions. According to the survey, 15% of respondents say they have already hired AI chatbot developers to support generative AI, while 27% say they have plans to hire for the role.

Prompt engineers are responsible for ensuring that tools using gen AI, especially text-to-text and text-to-image AI models, can accurately assess user prompts and deliver the correct information. It's a role that requires extensive knowledge of natural language processing, coding, natural language queries, and artificial neural networks. Examples of prompt engineering can be seen in tools such as ChatGPT, which takes user queries and generates a unique response, and AI image tools such as Midjourney, which produces unique art and imagery based on user requests. For businesses interested in leveraging AI, especially with chatbots, automated assistants, and image generators, prompt engineering is a vital role to ensure those tools are effective and useful. According to the survey, 11% of respondents say they have already hired prompt engineers to support generative AI, while 26% say they have plans to hire for the role.

Chief AI officer is a relatively new senior executive position that helps organizations tackle the rapid progress of and demand for AI in the workplace. There are so many considerations when integrating AI into the workplace, especially around security, bias, compliance, and privacy. A chief AI officer is responsible for overseeing AI strategy development by navigating and overseeing the development and implementation of AI in the business. Other responsibilities include overseeing data management and governance, business unit collaboration, ethics and compliance, risk management, talent acquisition and team building for AI, and monitoring overall performance and analytics reporting on AI tools. According to the survey, 11% of respondents say they have already hired a chief AI officer to support generative AI, while 21% say they have plans to hire for the role.

More companies are turning to AI for content creation, including writing blog posts, product descriptions, and more. But the results aren't always perfect and often need a human eye to edit and rework gen AI output into something that sounds more human and relatable to readers. Companies are looking for experienced writers and editors who can help turn around content quickly, using generative AI, while ensuring that the content is well-written and easy for the intended audience to understand. According to the survey, 10% of respondents say they have already hired AI writers to support generative AI, while 21% say they have plans to hire for the role.

AI art is one of the newer applications of gen AI, with tools such as Midjourney and Stable Diffusion taking off in the past year. These tools can take a prompt, or an image, and either create entirely unique content or make specific edits to already-existing imagery. There is a lot of potential for organizations looking to create marketing materials, product images, stock images, and other art-related content. Organizations are seeking experienced artists and graphic designers who can use that expertise and knowledge to get the most out of image generation tools. Artists have the right knowledge and expertise to create prompts that will garner better results from generative AI. They know the lingo, terminology, and nuances of various artistic areas, be it film, artwork, or visual graphics, which can help ensure that businesses get the results they want from these services. According to the survey, 7% of respondents say they have already hired AI artists to support generative AI, while 15% say they have plans to hire for the role.


SCIT to offer its coveted MBA programmes through SNAP 2023 – Hindustan Times

India, 11th October 2023: Symbiosis Centre for Information Technology (SCIT), a well-known constituent of Symbiosis International (Deemed University), is known as one of the leading institutes for technical and business education in India. The institute has invited applications for its two tech-centric management programmes, MBA in IT Business Management (ITBM) and MBA in Data Sciences and Data Analytics (DS & DA), via SNAP (Symbiosis National Aptitude Test) 2023. The registration process for SNAP 2023 closes on November 23, 2023, and the registration for SCIT closes on December 21, 2023. Candidates can begin this journey by completing SNAP registration followed by successful SCIT registration at the official SCIT registration website.

Welcoming the new cohort, Dr. Dhanya Pramod, Director, SCIT, said, "SCIT's MBA programmes prepare candidates to be tech-savvy business leaders in today's technology-driven business landscape, spanning e-commerce and finance. The curriculum focuses on honing analytical skills to tackle industry challenges. Alongside technical knowledge, students benefit from mentorship by industry experts and a broad professional network. SCIT invites aspiring individuals to join their journey towards becoming the next generation of business leaders, combining technology expertise with a deep understanding of the business world."

In 2023, the SNAP Computer-Based Test (CBT) will be conducted on three separate dates, providing candidates with the flexibility to select the date that best suits their schedule: December 10, 2023 (Sunday), December 17, 2023 (Sunday), and December 22, 2023 (Friday). The much-anticipated SNAP 2023 examination results will be revealed on January 10, 2024 (Wednesday). To access the admit cards for the SNAP Test, candidates can log in to the official SNAP website, http://www.snaptest.org, on December 04, 2023 (Monday) for SNAP Test 1, and on December 09, 2023 (Saturday) for SNAP Test 2 and SNAP Test 3. It's worth noting that the payment deadline aligns with the registration closing date of the examination.

SCIT is at the forefront of grooming future leaders for the technology-centric landscape of Industry 4.0. These programmes seamlessly blend core management principles with cutting-edge technical knowledge from disciplines such as Computer Science, Cybersecurity, AI & ML, Data Science, and Analytics, to name a few. Moreover, the curriculum is tailored to serve applications across diverse sectors, including Finance, IT, Operations, Supply Chain Management, and General Management.

To ensure excellence, SCIT has world-class facilities and robust industry connections that help candidates in the learning process. The cohort also gets the sought-after opportunity to be groomed by pioneering faculty members and industry leaders, graduating as industry-ready professionals with hands-on experience in essential tools like R, Python, Hadoop, and MapReduce, among others.

The MBA in ITBM programme is designed for aspiring technocrats, specializing in areas such as Systems, Information Security, Digital Transformation, and Data Science. Graduates are equipped for roles in Functional Consultancy, Product Management, Cybersecurity Consulting, Business Development, ERP Implementation, and more. The SCIT programme prepares students to excel in a tech-driven business world, where technology is the key to solving problems and optimizing operations. Similarly, the MBA (DS & DA) programme is tailored to equip management professionals possessing a strong scientific and analytical inclination with the skills to address the data-driven, real-time challenges prevalent in today's dynamic landscape. It aims to prepare candidates for various significant roles within the burgeoning Data Sciences and Analytics sectors, aligning them with the demands of the industry.

In conclusion, SCIT, an integral part of Symbiosis International (Deemed University), stands as a beacon of excellence in grooming future leaders for a technology-centric landscape. The institution's MBA programmes adeptly blend technical acumen with management expertise, positioning graduates as formidable contenders in the ever-evolving world of IT, Data Science, Systems, and beyond. With robust industry connections and a dedicated faculty, SCIT is the launchpad for aspiring professionals poised to make a profound impact.

For more information visit: SCIT Pune

Disclaimer: This article is a paid publication and does not have journalistic/editorial involvement of Hindustan Times. Hindustan Times does not endorse/subscribe to the content(s) of the article/advertisement and/or view(s) expressed herein. Hindustan Times shall not in any manner, be responsible and/or liable in any manner whatsoever for all that is stated in the article and/or also with regard to the view(s), opinion(s), announcement(s), declaration(s), affirmation(s) etc., stated/featured in the same.


DU introduces MA in Hindu Studies along with Data Analytics … – Education Times

The University of Delhi (DU), in its centenary year, has set up a Centre for Hindu Studies offering an MA in the subject in the academic year 2023, along with a few more new centres for studies. Registration for the course is on, and the University has received applications from a wide range of graduates from across the country. The programme aims to combine traditional knowledge of Hindu philosophy with contemporary subjects such as computer science, data analytics, commerce, and political science.

While Nalanda University, Banaras Hindu University (BHU), Indira Gandhi National Open University (IGNOU) and 20 other institutes in India have been offering Hindu Studies for several years, DU initiated this course much later. Several courses on religious subjects have been running successfully at DU for the last several decades. The University has programmes in subjects such as Buddhist Studies, Islamic Studies and Christian Studies; however, Hindu Studies has been launched only after 66 years.

Talking to Education Times, Prerna Malhotra, joint director, Centre for Hindu Studies, says, "Several prominent Western institutions, including Oxford University, the University of Cambridge, Massachusetts Institute of Technology (MIT) and Harvard University, have already been offering courses on Hindu Studies. India is relatively late in introducing the subject in educational institutes. India has a Hindu majority, but the subject is taught in only 20 universities so far."

The course is in line with the National Education Policy (NEP) 2020's progressive provisions. "Besides religious philosophy, the students will study contemporary subjects such as computer science, data analytics, commerce, and political science as a minor. The course has a major focus on the theoretical curriculum, which will include the essence of Hindu dharma, the Upanishads and the Vedas. It will also include a mandatory paper on Western methods such as critical and analytical theory, papers on the Ramayana and Mahabharata, and a paper on the English or Sanskrit language," says Malhotra.

The curriculum has been designed in a manner that is open to students from varied fields, so that students from any discipline can apply for the course.

"Several new jobs are being created in the realm of religious studies; thus, graduating students will have multiple opportunities. After finishing the course, students can take the National Eligibility Test (NET) to work as assistant professors or research fellows. Being a Hindu life coach is yet another job that is gaining popularity. These coaches are involved in religious counselling at various corporates. They can also be hired as columnists or writers for various media houses and TV channels. Apart from this, students will be able to continue with their minor degree as a career option as offered by the university," says Malhotra.

Originally posted here:

DU introduces MA in Hindu Studies along with Data Analytics ... - Education Times

Newark Board of Education Hosts 2nd Installment of Microchip … – PR Newswire

NEWARK, N.J., Oct. 13, 2023 /PRNewswire/ -- On Saturday, September 30, 2023, pre-kindergarten students from Mount Vernon Elementary School were greeted at the Newark School of Data Science & Information Technology for the Newark Board of Education's second installment of its Microchip Robotics program. This is part of the Before Kindergarten (B4K) Initiative, which connects pre-kindergarten students from elementary schools to their aligned high schools in order to begin building lasting relationships and provide them with the opportunity to participate in innovative activities.

As part of this collaboration, students will have many opportunities to develop writing, mathematics, and fine motor skills. Microchip Robotics also allows pre-kindergarten students to start seeing how STEM exists around them, in this case using energy as the theme. Energy permeates all STEM fields and is vital in the real world.

Data and AP Computer Science Teacher and Robotics Coach Tim Salerno states, "Energy is all around them! I want them to start thinking about their relationship to energy and how energy moves through a system (in our case, food -> student -> Lego Machine), as this is a relationship their generation will be thinking about as climate change progresses. We all think about our relationship with energy production and consumption."

Upcoming sessions of the B4K Microchip Robotics program will focus on Lego and mechanical energy, but will also expand to electrical energy and simple circuits so students can see how energy comes in many forms.

"We are thrilled with the success of the Microchip Robotics program in inspiring our youngest learners to engage with STEM concepts," Superintendent Len shared. "By introducing them to the importance of energy in our world, we empower them to think critically about their impact on the environment and make informed decisions for a brighter future."

For media inquiries, please contact: Nancy J. Deering, Acting Communications Director.

About Newark Public Schools

The Newark Public Schools is the largest school district in New Jersey and dates back to 1676. The District currently enrolls over 39,000 students in 63 schools. After more than two decades of state operation and upon return to local control in 2018, the District has opened 9 new schools under Superintendent León's leadership, with an additional portfolio of new options to be announced in the coming months and years. The Newark Board of Education serves as a beacon of educational excellence, dedicated to nurturing the potential of every student. With a commitment to innovation, inclusivity, and fostering a love for learning, the District continues to shape future generations and make a positive impact within the community.

SOURCE Newark Board of Education

View post:

Newark Board of Education Hosts 2nd Installment of Microchip ... - PR Newswire

4 Steps To Face The Challenges Of Integrating AI In Your Medtech – Med Device Online

By Théophile Mohr Durdez, chief executive officer and cofounder, Volta Medical

In Season 2, Episode 18 of The Big Bang Theory, the guys try helping Penny expand the market for her hand-crafted barrettes by developing a product line for men. When Penny questions why men would be interested in sparkly, floral hair clips, Howard suggests making the barrettes Bluetooth enabled, and Sheldon agrees, noting that everything is better with Bluetooth. As silly as that might seem, society seems to have arrived at a similar juncture with respect to its obsession with anything and everything that uses artificial intelligence-based technology. Seemingly overnight, AI has become a personal assistant, menu planner, and fitness coach for individuals. The Screen Actors Guild and the Writers Guild went on strike this summer, in part, over concerns about the use of AI instead of human talent in Hollywood's creative processes. While the latter ended in September, the former is ongoing. Companies across diverse sectors are using AI-based technologies for enhanced data analytics, and knowledge workers are exploring how generative AI tools, such as ChatGPT, can streamline research tasks and the development of written materials.

In the medical technology arena, the safe and effective integration of AI into medical devices demands a more deliberate, measured, and purpose-driven approach. Rather than looking for any and every opportunity to deploy AI, device innovators need to identify unmet needs in which AI can be used to analyze past outcomes and incorporate new disease knowledge to predict a clear path toward better health outcomes. Let's take a look at the four steps to follow.

Medical device innovators seeking to deploy AI in meaningful ways need to focus on disease indications and medical procedures in which data analytics can provide improved outcomes compared with what the human mind, eye, and hand can achieve. For example, AI is being used in screening and diagnostic settings to improve detection of polyps and adenomas during colonoscopy and identification of breast tumors in mammography images. In these settings, AI systems enable rates of detection that are superior to what the human eye can achieve, even when that eye resides in the head of an experienced radiologist or gastroenterologist.

While there are many applications of AI in the diagnostic arena, its use in the therapeutic arena has only just started emerging. Logically, the first applications of AI in the medical device industry have been developed in highly digitalized specialties and intervention types. For instance, it is being used to improve radiation therapy contouring: the precise mapping of where and how much radiation is delivered to a target area. This increases the safety and efficacy of radiation therapy by enabling delivery of higher radiation doses to the entire tumor while sparing adjacent healthy tissue. On top of this, AI-enabled radiation delivery devices allow radiation to be used in the treatment of additional tumor types and in patients who would not previously have been candidates for treatment.

Cardiac electrophysiology is yet another specialty in which AI-enabled systems are being used to improve outcomes for patients undergoing catheter ablation. In traditional ablation procedures, target tissue is identified based on data provided from multiple diagnostic outputs including fluoroscopy, ultrasound, electrical conduction maps, and electrograms (electrical measurements made directly on the heart rather than through the skin). Here again, the development and dissemination of AI technologies are greatly facilitated by the huge amount of digital data produced during each cardiac ablation intervention. As of today, extracting the various data generated during a single procedure would require 20 to 50 GB, depending on the length of the procedure. While physicians are highly trained to interpret electrograms to help guide them in the selection of target tissue, the interpretation increases in complexity with more advanced disease states. AI-based systems can perform calculations more rapidly and take a more comprehensive approach, learning from a greater variety and number of cases than any single electrophysiologist has experienced. In these examples, physicians are overwhelmed by a large amount of data coming from multiple widespread acquisition systems. In both cases, the highly digitalized environment is well suited to AI development and dissemination. As a result, AI improves radiation therapy and cardiac ablation outcomes by aiding physicians in making more informed decisions about which tissue to target.

Importantly, use of these systems can help to reduce the time and associated resources needed for each patient without sacrificing quality, by potentially shortening intervention time and reducing the number of disease recurrences. This is an important advantage at a time when health professionals are facing high levels of burnout, hospitals and care centers are facing unprecedented staffing shortages, and there is growing recognition of the need to address entrenched health inequities by making quality care accessible regardless of where a patient is treated.

Medical device innovators seeking to ensure effective deployment of AI to address unmet clinical need should work collaboratively with physicians to develop AI-enabled devices that meet specific clinical needs and eliminate obstacles that simply can't be overcome using the human brain alone. Obviously, as for other medical device technologies, end users are best suited to identify tasks and procedures for which they need additional support and thereafter to specify the user requirements. But what about analytical tasks that they have learned to perform over the course of their career but are unable to describe? This is where AI may play a role, due to its ability to learn from human expertise. As an example, while we can all recognize the faces of our friends, we are often not able to provide a reliable description that would enable someone else to recognize the individual we know so well. In this context, clinicians also can identify procedures in which AI has the potential to enable improved clinical decision-making. The AI-enabled cardiac ablation system noted above was developed in collaboration with three electrophysiologists to address their real unmet needs. Designing devices based on user need is the best way to both make meaningful improvements in health outcomes and ensure a market for your new AI-enabled device.

After identifying a setting in which the use of AI can enable improved outcomes, device innovators need to focus on curating, annotating, and sometimes extracting the machine learning training and testing data that enable optimal device performance. This is why collaboration with expert physicians is essential. The systems in the examples above are trained using hundreds of thousands of data points collected from previously treated patients. The ability of these systems to learn is entirely dependent on the quality of the training data. For example, using training data that inaccurately categorizes a mammography finding as malignant when it is in fact benign will train your system to make the same mistake. Consequently, training data must be gathered from reliable sources for which there is high confidence that annotations are correct and the data sets are complete. Device innovators seeking to develop AI-enabled systems will need to invest significant resources in engaging with physicians and designing the tools needed to collect, curate, and annotate data. Making sure that physician experts will be able to dedicate their time to the project despite their overloaded agendas is a critical challenge for success in such collaborations. When curating training or testing data, device innovators also need to be conscious of inherent inequities and biases that may exist within the data and associated annotations, and take steps to ensure that the training and validation sets accurately represent the intended population and use for which the device is designed. Two of these routine checks are sketched below.
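
To make the curation point concrete, here is a minimal sketch, in Python, of two routine data-quality checks: flagging annotation disagreements between expert readers for adjudication, and checking subgroup representation. The table, column names, and counts are hypothetical illustrations, not Volta Medical's actual pipeline.

```python
# Minimal sketch of two routine training-data checks (hypothetical data).
import pandas as pd

# Hypothetical annotation table: one row per mammography finding,
# labeled independently by two expert readers.
df = pd.DataFrame({
    "finding_id": [1, 2, 3, 4, 5, 6],
    "reader_a": ["malignant", "benign", "benign", "malignant", "benign", "benign"],
    "reader_b": ["malignant", "benign", "malignant", "malignant", "benign", "benign"],
    "age_group": ["40-49", "50-59", "60-69", "40-49", "50-59", "60-69"],
})

# 1. Flag findings where the expert readers disagree; these need
#    adjudication before they are admitted into the training set.
disagreements = df[df["reader_a"] != df["reader_b"]]
print(f"{len(disagreements)} finding(s) need adjudication")
print(disagreements)

# 2. Check that each demographic subgroup is represented, so the model
#    is not trained on a skewed slice of the intended population.
print(df["age_group"].value_counts(normalize=True))
```

In practice, outputs like these feed back into the collection plan: systematic disagreement signals ambiguous labeling guidelines, and an underrepresented subgroup signals the need for targeted data gathering.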

While AI algorithms can be designed for high performance, they also often are viewed as black boxes by those outside the realm of data science. This may make it difficult for people to understand and thus trust how they work. Robust validation processes can address this concern. Consequently, once an AI-enabled device has been designed and trained, medical technology companies should conduct a rigorous and extensive validation on the bench and/or in a clinical setting.

The validation process should be designed to demonstrate that the levels of safety and efficacy are in accordance with regulatory claims. Whether it is to demonstrate superiority or non-inferiority, this validation process often requires randomized controlled clinical trials to provide robust clinical validation of the novel technology. These trials should be designed to clearly show that the AI-enabled device provides tangible benefit to users and patients. This may be achieved by demonstrating superiority on a clinically relevant endpoint, such as improved diagnostic sensitivity and specificity, reduction in disease symptoms/recurrence, or improved overall/disease-free survival. It may also be achieved by demonstrating that the AI-enhanced system can reduce per-procedure or overall healthcare costs, ease patients' treatment burdens (shorter/fewer treatments), or increase access to care compared with standard approaches.
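
As an illustration of the kind of endpoint reporting such a trial might produce, the sketch below computes diagnostic sensitivity and specificity with 95% Wilson score confidence intervals. The confusion-matrix counts are invented, and the interval routine is a generic statistical calculation, not any vendor's tooling.

```python
# Minimal sketch: sensitivity/specificity with Wilson score intervals.
import math

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    margin = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - margin, center + margin

# Hypothetical counts from a validation study.
tp, fn = 92, 8    # diseased cases: detected vs. missed
tn, fp = 180, 20  # healthy cases: correctly cleared vs. false alarms

sens = tp / (tp + fn)
spec = tn / (tn + fp)
s_lo, s_hi = wilson_ci(tp, tp + fn)
p_lo, p_hi = wilson_ci(tn, tn + fp)
print(f"Sensitivity: {sens:.2f} (95% CI {s_lo:.2f}-{s_hi:.2f})")
print(f"Specificity: {spec:.2f} (95% CI {p_lo:.2f}-{p_hi:.2f})")
```

Whether such point estimates clear a superiority or non-inferiority margin is then a matter of the trial's pre-specified statistical plan.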

Finally, companies developing AI-enabled medical technologies must also be able to articulate a business value proposition that resonates with traditional medtech investors (who may be less familiar with AI-based technologies) and traditional tech investors (who may be less familiar with the longer development timelines and evolving regulatory requirements for AI-based medical devices). They must also be able to address the real concerns that patients may have about the use of AI in their medical care as well as the potential use of their own health data to train future medtech innovation. Both audiences deserve clear and transparent communication regarding the potential risks and benefits of both the AI and medtech components of new products and services.

Combining artificial intelligence with human compassion brings together the best that technology and humanity have to offer: improving health outcomes. While focusing on the machines, medtech innovators can never lose sight of the parents, children, friends, and co-workers who look to them for hope of a longer and healthier life.

About The Author:

Théophile Mohr Durdez is co-founder and CEO of Volta Medical, a data-inspired electrophysiology company using artificial intelligence, which was founded in 2016 with three cardiac electrophysiologists in Marseille, France. Mohr Durdez and his team work to improve cardiac arrhythmia management by developing data-driven medical devices trained on large databases of procedural data.

Follow this link:

4 Steps To Face The Challenges Of Integrating AI In Your Medtech - Med Device Online