Category Archives: Data Science
Unlocking the Investment Potential of S in ESG – AllianceBernstein
The most commonly available social data concern workforce gender diversity, health and safety, product recalls, and human rights policies. But data quality remains sketchy. For example, the fact that a company has a human rights policy doesn't mean it is a good policy or well implemented.
It's also difficult to compare social metrics across companies and industries. Unlike carbon footprints and governance standards, which can be easily compared across companies or sectors, social issues differ across industries.
In the apparel industry, for example, key issues include forced or child labor, the proportion of employees covered by trade unions or bargaining agreements, grievance reporting mechanisms, and supplier codes of conduct.
In retail banking, predatory lending is a big social concern, along with access to services for customers from lower-income backgrounds, privacy and data security, and fines levied for regulatory breaches. Food and beverage companies should be judged on product quality and recalls, investment in safety and quality systems, and production time lost because of workplace injury or safety incidents.
These issues will gain more prominence because official scrutiny of S factors is increasing steadily.
Our research shows that, between 2011 and 2022, key Western governments and quasi-governments took 23 significant actions, such as introducing legislation or guiding principles and holding parliamentary inquiries, to curb forced labor and human rights abuses. Most actions (17) took place in the second half of that period.
This slow-moving tsunami of legislation will force companies to carry out and report due diligence in their operations and supply chains. Government moves to ban products made with forced labor are already underway in the US and will soon be followed by the European Union.
Grassroots awareness of social issues is becoming more vocal. COVID-19 highlighted the inequality of vaccine distribution and the strain on healthcare, while supply chain disruptions caused by the pandemic and by the war in Ukraine have shed light on challenging conditions in some export-producing countries. Rising inflation and the cost-of-living crisis are increasing public awareness of social issues.
Investors, in our view, should take two steps to accommodate the growing importance of S factors in their portfolios.
The first is to address issues of data quality and availability. Where data are available, their materiality for various industries should be mapped appropriately. Then, data science and qualitative analysis can help drive better insights.
For example, specialized third-party providers may have more in-depth knowledge of S factors than in-house securities analysts, but they typically cover fewer companies. Using data science, investors can access new data sources with the help of artificial intelligence.
Understanding the data is important to avoid drawing false conclusions. S controversies are more common in some industries, such as autos. But don't assume that industries with less data have a correspondingly low level of controversies. Similarly, fundamental research can verify that a company's human rights policy is effective and appropriately implemented.
The second step is to develop a research framework that can identify key S-related risks and opportunities.
We have identified three broad themes to help investors make sense of the evolving S investment environment: a changing world, a just world and a healthy world (Display).
5 unique jobs at Protective with openings now – Bham Now
Sponsored
Dreaming of working in a culture you love? We spoke with five people working in unique roles within Birmingham-based Protective and were surprised to see clear themes emerging from what they love about their jobs. Keep reading for all the details.
Blanton DePalma assists Protective's claims, asset protection division and mail center business areas in addressing their training needs.
The job requires intentional organization, advanced planning and a healthy mix of structure and flexibility to provide the level of support Protective's business colleagues require.
What does he wish he'd known before joining Protective? How enthusiastic everyone on his team is. He finds their outlook contagious and energizing.
Looking for a new role? Protective is hiring.
Louise Ritter manages a team of Product Owners and Scrum Masters who work with Information Technology (IT) teams to deliver quality solutions as efficiently as possible.
As a people person, her favorite aspect of the job is getting to meet and work with so many different personalities. She likes problem-solving and helping to enable others' success.
"I remember when I first joined six years ago, being really impressed by the amount of pride our employees associate with being part of Protective. We have a very strong culture, and our employees are at the center of that. Protective is invested in giving back in so many ways, which inspires a level of commitment you don't see everywhere."
Aquilla Jackson leads a solution team that supports those who sell Protective's products directly to the customer.
Case Managers are responsible for the initial review of life insurance applications, as well as establishing and maintaining internal and external distribution team relationships. She loves that each day is different and gives her the opportunity to learn something new. She enjoys leading people and building relationships.
"I love, love, love that the culture of Protective is family. In New Business, we pride ourselves on having a sense of family. Aside from our real families, we spend the most time with our coworkers. It feels amazing to show up to a place every day knowing that you have the same level of support here as you do at home."
Raja Chakarvorty is the Chief Data Scientist at Protective, leading the Enterprise Data Science team and function. His team analyzes large amounts of data using advanced analytics techniques to create a best-in-class customer experience and a targeted sales process with superior risk selection.
His expertise is in building data science teams and capabilities from the ground up. Protective offered him that opportunity, which aligned closely with his passion.
"Data is our lifeline, and it comes to us in broadly two forms: structured (rows and columns) and unstructured (PDFs, text, voice, images). Without data I cannot do any science! I have a motto: 'No data left behind!'"
He finds that recruiting, training and growing the team has been unexpectedly rewarding.
Because they're a fast-growing team, his role challenges him in many ways.
"The innovation and business value creation invigorates me and drives me to get more business for my team. I love that!"
Before applying and relocating from Hartford, Connecticut, he wishes he'd known what incredible people he would get to work with each day and how cosmopolitan the city of Birmingham is.
Trenaye Bailey's role enables a portion of the marketing team to achieve their goals by effectively utilizing print, tradeshows, multi-touch campaigns and digital media to increase sales productivity and generate leads for financial professionals.
She also specializes in ensuring they deliver tools, collateral and digital content that makes it easier for financial professionals to do business with and for Protective.
This role challenges her because no day is the same. This is mainly due to the growth of the company and the changing needs of the audiences.
"I wish before applying, I would have known about the great culture and great people that work at Protective. I honestly wish I would have applied to Protective fresh from college. Protective cultivates their employees and treats them like family."
Protective is hiring. Apply today.
Sponsored by:
Protective refers to Protective Life Corporation and its insurance company subsidiaries, including Protective Life Insurance Company (Nashville, TN) and Protective Life and Annuity Insurance Company (Birmingham, AL). Protective is a registered trademark of Protective Life Insurance Company.
Mathematics student accepted into National Science Foundation … – Clarksville Now
CLARKSVILLE, TN - Christine Jator, a student in Austin Peay State University's Department of Mathematics and Statistics, has been accepted into the prestigious National Science Foundation Research Experiences for Undergraduates (REU) program at Southern Methodist University in Dallas, Texas. The program is designed to provide students with first-hand research and coding experience in the field of data science.
Jator will take part in the programs research on solving environmental problems facing urban neighborhoods. This opportunity will give her a chance to see what working in the data science industry looks like and to gain valuable knowledge and skills that will benefit her future career aspirations.
"I am thrilled to have been accepted into this program," Jator said. "I have always been passionate about finding solutions to environmental issues, and this program will allow me to work towards that goal while gaining valuable experience in the field of data science."
Jator's professors, Dr. Ramanjit Sahi and Dr. Matt Jones, recommended the program to her and provided links to summer REUs and internships. With a recommendation letter from Jones, a professor in the Department of Mathematics and Statistics, Jator secured her spot in the program.
The NSF REU program at Southern Methodist University is renowned for its data science program, making it a prime destination for students who are passionate about the field. Jator is excited to be a part of this program and looks forward to contributing to its research efforts.
We have joined the Turing University Network – Mirage News
We have further strengthened our ties to The Alan Turing Institute and our connections with other top-ranking universities by becoming a member of the Institute's newly launched Turing University Network.
The Alan Turing Institute is the national institute for data science and artificial intelligence and the Turing University Network forms part of its new strategy aimed at using data science and Artificial Intelligence (AI) for social good.
Our research in data science and AI is recognised as world leading and our researchers are tackling real-world challenges.
Last year, we were named among the first-ever successful applicants to The Alan Turing Institute's Network Development Awards, in recognition of our proven research excellence in data science and AI.
We have a significant research portfolio in data science and AI that supports pioneering multi-disciplinary data science research.
Notable research carried out at our globally outstanding University includes innovative sensing technology for self-driving cars, next generation airport scanners, automated visual surveillance and using the ExaHyPE simulation engine to power the oneAPI programming model.
Additionally, we are expanding the Turing portfolio in areas such as AI in education, digital theology and digital humanities.
Over the years, our Computer Science Department has expanded with more AI-facing research groups, such as the Artificial Intelligence and Human Systems group (AIH); Scientific Computing (SciComp); and Vision, Imaging and Visualisation (VIViD); alongside the original Algorithms and Complexity group (ACiD).
Recently, our strength in the AI in Education area was recognised, with Durham being selected to host two major international conferences, EDM'22 and the CORE-A AIED'22, as well as co-hosting the Turing Artificial Intelligence in Education Event'22.
As a research-intensive university, we are heavily involved in important regional initiatives such as the N8 Research Partnership, representing the eight most research-intensive universities in the North of England.
Within the N8 Centre of Excellence in Computationally Intensive Research (N8CIR), we visibly support the high-performance computing community, by hosting the BEDE supercomputer and key research areas of Digital Humanities, Digital Health, and recently Machine Learning.
Being a member of the Turing University Network will enable us to engage and collaborate with other UK universities with an interest in data science and AI, both within The Alan Turing Institute and across its broader networks.
Perfecting the Data Balancing Act – CDOTrends
What do Hong Kong's business giants have in common? A diverse portfolio of businesses, long history, and large operations. These characteristics also make them less agile to change. But many remain successful because of their ability to disrupt and innovate.
Against the backdrop of the rise of AI, many of them are going through the next phase of transformation. Enterprises are rethinking their core data architectures with data lakehouse platforms as they prepare to ride the oncoming AI wave.
A data lakehouse is the latest data management architecture: it combines the flexibility and scalability of data lakes in storing structured and unstructured data with the data management and transactions of data warehouses, enabling BI and machine learning (ML) on all data.
Shaking from the data core
One of them is Li & Fung, a leading global supply chain player. The 117-year-old company is rich with history and talent but also complex. Leo Liu, the chief digital officer of LFX, the digital arm of Li & Fung, recently shared the company's data transformation journey at the Data + AI World Tour conference in Hong Kong, organized by Databricks.
"With over 100 years of supply chain knowledge and experience, LFX's mission is to create and invest in digital ventures that will transform the supply chain and retail industries," said Liu. "One key initiative is to modernize our legacy data platform."
He explained the company once acquired 50 different businesses within a year, creating an extremely complicated environment with multiple technology stacks. On top of that, the organization relied on a 10-year-old on-premises data warehouse, slowing down its big data and AI strategy for innovation. "AI cannot work without [a modernized] data platform," he said.
Shaking down a data platform from its core at a business giant is not easy. But Liu took the challenge to a new level by aiming to complete the data architecture transformation within five months. This included setting up a new cloud data platform and data pipelines, migrating the data and dashboards, training, and launching.
"I'm still here, so you know we did it," Liu said. The success included producing three quick wins to demonstrate the value of the new platform, all within the year.
On top of his dedicated and motivated team, Liu attributed the success to three primary criteria for the new data platform: open standards, multi-cloud support, and full integration for data engineers, scientists, and business users. By working with Databricks to build a data lakehouse platform, Liu said the company could now meet all three criteria and focus on its AI innovation.
"We can now develop dashboards and reports within 24 hours; before, it took weeks and months," he said. "This year, we are moving forward with our AI strategy."
Beyond legacy
Dealing with legacy is even more challenging for highly regulated payment players like HSBC and Octopus Card. They are achieving better data governance and predictive modeling by riding on a data lakehouse platform.
"As business needs evolve, there is a growing need for better data analytics and robust data governance to ensure that data provides value and supports our business strategy," said Thomas Qian, wholesale chief data science architect & analytical platform lead at HSBC.
Qian noted one example is the tracking and analysis of users' behavior at PayMe, HSBC's mobile payment service. He said the insights on customers' usage patterns contributed to the launch of PayMe for Business, a service for merchants to collect payment.
"By working with Databricks, we can scale data analytics and machine learning to enable customer-centric use cases, including personalization, recommendations, and fraud detection," he added.
Data governance struggle
Meanwhile, at Octopus Card, data privacy and governance have been top priorities.
"We have a very tight data governance policy to protect customer data because we literally hold the data of all Hong Kong people," said Tony Chan, senior data science manager at Octopus Card.
Chan shared the challenge of battling through the stringent governance process to access data. But his team is exploring the use of a data lakehouse platform for easier data governance and more scalable prediction analysis.
He added that they wanted to move from rule-based analysis to AI modeling to detect merchant churn rates. The data lakehouse platform allows his team to scale the analysis on tens of thousands of merchants and predict their churn rate, helping the sales team to prioritize the renewal process.
"We hope to slowly transform users' and senior management's expectations of AI and promote more AI applications," Chan added.
Speedy rebound as borders reopen
The data lakehouse platform is also transforming customer experiences. Swire Properties, a real-estate unit of another Hong Kong-based conglomerate, has recently taken advantage of a data lakehouse platform to drive precision marketing.
"I first joined the company in customer relationship management, nothing technical. But I quickly realized that I could do nothing about quality customer engagement without quality data. So, I took the initiative to formulate a data strategy," said Veronica Ho, head of data analytics & insights at Swire Properties.
Part of the strategy was consolidating more than one million data points from 30 different data sources across four business pillars: shopping malls, offices, residential, and hotels. This consolidated data platform became the foundation for developing a predictive model supporting the company's speedy rebound following the post-COVID-19 border reopening.
Ho said by understanding the customers with multi-faceted segmentations, the company can develop hyper-personalized and precision marketing campaigns, like tailored birthday surprises. These precision marketing campaigns allowed the company to reach and engage seven times more members.
For members across the border, data-driven marketing also allowed the team to identify and develop personalized offers to 60% of high-potential members who went dormant during the pandemic.
For Swire Properties, Octopus Card, HSBC, and Li & Fung alike, unifying the data platform to drive data integrity and governance is only the first step towards realizing their AI strategies. More data-forward business giants are harnessing the value of their data and applying AI through the lakehouse platform to transform into business legends.
"As pioneers of the data lakehouse, we are passionate about making data and AI accessible to everyone," said Jia Woei Ling, managing director for North Asia at Databricks.
Sheila Lam is the contributing editor of CDOTrends. Covering IT for 20 years as a journalist, she has witnessed the emergence, hype, and maturity of different technologies but is always excited about what's next. You can reach her at [emailprotected].
Image credit: iStockphoto/Orla
Ultimate Guide to Kickstarting Your Career as an AI/ML Engineer – Dignited
The field of artificial intelligence and machine learning is growing rapidly, and with it comes a high demand for skilled professionals. If you are interested in becoming an AI/ML engineer, this guide will provide you with the necessary steps to get started.
AI/ML engineers are responsible for developing and implementing machine learning models that can make predictions or decisions based on data. These models are used in a wide range of applications, from self-driving cars to medical diagnosis.
Becoming an AI/ML engineer requires a combination of technical skills, practical experience, and a deep understanding of the field. This guide will help you navigate the path to becoming an AI/ML engineer.
Before diving into AI/ML engineering, it is essential to understand the basics of data science. Data science is the foundation of machine learning, and it involves collecting, cleaning, analyzing, and visualizing data.
There are many resources available to learn data science, including online courses, books, and tutorials. Some popular tools for data science include Python, R, and SQL.
To get started with data science, it is essential to understand statistics, linear algebra, and calculus. These mathematical concepts are the building blocks of machine learning models.
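These building blocks are easiest to see in action. The sketch below is a hedged illustration, not production code: it fits a straight line to toy data by gradient descent, where the update rule is literally the calculus (partial derivatives of the mean squared error) applied to a statistical quantity (the average prediction error). The function name `fit_line` and the data are our own inventions for illustration.

```python
# Minimal illustration: fit y = w*x + b by gradient descent.
# The derivatives of the mean squared error drive each parameter update.

def fit_line(xs, ys, lr=0.01, steps=2000):
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        # Partial derivatives of mean squared error with respect to w and b
        dw = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        db = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * dw  # step downhill along the loss surface
        b -= lr * db
    return w, b

if __name__ == "__main__":
    xs = [0, 1, 2, 3, 4]
    ys = [1, 3, 5, 7, 9]  # generated from y = 2x + 1
    w, b = fit_line(xs, ys)
    print(round(w, 2), round(b, 2))
```

Running this recovers a slope close to 2 and an intercept close to 1, showing how the math you study translates directly into a learning algorithm.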
Related: Artificial Intelligence (AI) Vs Artificial General Intelligence (AGI)
There are many programming languages used in AI/ML engineering, but some of the most popular ones include Python, R, Java, and C++. Python is the most commonly used language in the field of data science because it has a wide range of libraries and tools for machine learning.
Learning a programming language takes time and practice, but there are many online courses and tutorials available to help you get started. It is also essential to practice coding on your own and work on personal projects to build your skills.
Once you have learned a programming language, it is essential to become familiar with popular machine-learning libraries, such as TensorFlow, Keras, and Scikit-learn. These libraries provide pre-built models and tools for developing machine-learning applications.
Machine learning is a complex field, and it is essential to understand the underlying concepts before diving into AI/ML engineering. Some of the most important concepts to learn include supervised learning, unsupervised learning, and reinforcement learning.
Supervised learning involves training a model on labeled data to make predictions or decisions on new data. Unsupervised learning involves training a model on unlabeled data to identify patterns or clusters. Reinforcement learning involves training a model to make decisions based on rewards or punishments.
It is also essential to understand the different types of machine learning algorithms, such as decision trees, neural networks, and support vector machines.
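To make the decision-tree idea concrete, here is a hedged sketch of its simplest form: a depth-1 tree, often called a decision stump, trained on labeled data, which also illustrates supervised learning. The function names and toy data are our own, not taken from any library.

```python
# Illustrative sketch: a depth-1 decision tree ("stump") for one feature.
# Supervised learning: the model is fit to labeled training examples.

def train_stump(xs, ys):
    """Pick the threshold on a single feature that minimizes training errors."""
    best = None
    for t in sorted(set(xs)):
        # Predict class 1 when x >= t; count misclassified examples
        errors = sum(1 for x, y in zip(xs, ys) if (1 if x >= t else 0) != y)
        if best is None or errors < best[1]:
            best = (t, errors)
    return best[0]

def predict(threshold, x):
    return 1 if x >= threshold else 0

if __name__ == "__main__":
    heights = [150, 155, 160, 180, 185, 190]  # toy labeled data
    labels = [0, 0, 0, 1, 1, 1]
    t = train_stump(heights, labels)
    print(t, predict(t, 200), predict(t, 100))  # prints: 180 1 0
```

Real decision-tree implementations, such as those in Scikit-learn, grow many such splits recursively and use impurity measures rather than raw error counts, but the core idea of learning a split from labeled data is the same.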
Related: In the Age of AI: A Beginners Introduction to Artificial Intelligence
One of the best ways to develop your skills as an AI/ML engineer is to work on personal projects. These projects can be anything from predicting stock prices to developing a chatbot.
Data science competitions, such as Kaggle, provide an excellent opportunity to test your skills and learn from other professionals in the field. These competitions involve developing machine learning models to solve real-world problems.
Building your own projects allows you to apply the concepts you have learned and gain practical experience. It also provides you with a portfolio of work to showcase to potential employers.
When building your projects, it is essential to keep in mind the best practices for developing machine learning models, such as data preprocessing, model selection, and evaluation.
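As a hedged sketch of those best practices, the plain-Python snippet below implements a bare-bones version of each step: standardizing features (preprocessing), holding out a test set (so evaluation uses unseen data), and scoring accuracy. In real projects a library such as scikit-learn provides battle-tested versions of all three; the function names and toy data here are our own.

```python
import random

def standardize(values):
    """Preprocessing: rescale a feature to zero mean and unit variance."""
    mean = sum(values) / len(values)
    std = (sum((v - mean) ** 2 for v in values) / len(values)) ** 0.5 or 1.0
    return [(v - mean) / std for v in values]

def train_test_split(data, test_ratio=0.25, seed=0):
    """Evaluation hygiene: hold out a shuffled slice as unseen test data."""
    rng = random.Random(seed)
    shuffled = data[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_ratio))
    return shuffled[:cut], shuffled[cut:]

def accuracy(y_true, y_pred):
    """Evaluation: fraction of predictions that match the true labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

if __name__ == "__main__":
    # Toy dataset: label is 1 when the feature exceeds 5
    data = [(x, int(x > 5)) for x in range(10)]
    train, test = train_test_split(data)
    preds = [int(x > 5) for x, _ in test]  # trivial stand-in "model"
    print(accuracy([y for _, y in test], preds))  # prints 1.0
```

The point is the workflow, not the toy model: preprocess first, fit only on the training slice, and report metrics only on the held-out slice.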
Participating in data science competitions allows you to work on challenging problems and gain exposure to the latest techniques and tools used in the field. It also provides you with an opportunity to network with other professionals in the field.
Winning a data science competition can also be a valuable addition to your portfolio and resume.
Attending industry conferences and meetups is an excellent way to stay up-to-date with the latest trends and techniques in AI/ML engineering. These events provide an opportunity to network with other professionals in the field and learn from experts.
Some popular conferences and meetups for AI/ML engineering include the annual Conference on Neural Information Processing Systems (NeurIPS), the International Conference on Machine Learning (ICML), and the TensorFlow Meetup.
Attending these events can also be a valuable addition to your resume and demonstrate your commitment to the field.
Related: AI in Law: Top Software for Legal Firms in 2023
Internships and entry-level positions are excellent ways to gain practical experience in AI/ML engineering. These positions provide an opportunity to work on real-world projects and learn from experienced professionals.
Some companies that offer internships and entry-level positions in AI/ML engineering include Google, Microsoft, and Amazon. It is also essential to check with local startups and companies in your area that may be hiring.
When applying for these positions, it is essential to showcase your skills and experience through a well-crafted resume and portfolio.
The field of AI/ML engineering is constantly evolving, and it is essential to keep learning and staying up-to-date with the latest trends and techniques.
Some ways to continue learning include reading research papers, taking online courses, and attending industry conferences and meetups. It is also essential to keep practicing your coding skills and working on personal projects.
Staying up-to-date with the latest tools and techniques in the field can also provide you with a competitive edge in the job market.
Becoming an AI/ML engineer requires a combination of technical skills, practical experience, and a deep understanding of the field. By following the steps outlined in this guide, you can get started on the path to becoming an AI/ML engineer.
Remember to continue learning and staying up-to-date with the latest trends and techniques in the field. With dedication and hard work, you can build a successful career in AI/ML engineering.
ASU graduate has 4 majors, 2 minors, 3 certificates and long list of … – ASU News Now
April 24, 2023
Editor's note: This story is part of a series of profiles of notable spring 2023 graduates.
Anusha Natarajan has made a splash at Arizona State University as a leader, diligent student and involved community member.
She has been featured in ASU News previously for being a Killam Fellow, being selected for the Henry Clay Center College Student Congress and for winning the 2022 John Lewis Youth Leadership Award from the Arizona Secretary of State's Office. However, this barely scratches the surface of what Natarajan accomplished while at ASU.
Natarajan is graduating this semester with four concurrent bachelor's degrees in history, political science, sociology and applied quantitative science, along with minors in Spanish and geography and certificates in international studies, political economy and social science research methods.
"I initially started off as a business major, but I realized there was not as much flexibility in the school with all of the interests that I have, so I decided to major in sociology and history," Natarajan said. "Eventually, I became interested in wanting to strengthen my quantitative background, so I added the applied quantitative science degree and social science research methods certificate to get more proficient in that."
On top of her studies, she was a journalist for the State Press, a research fellow for the Center on the Future of War, and a student representative on the Civic Engagement Coalition. She also worked within Changemaker Central at ASU and was elected to the Barrett Honors College Council.
Additionally, as a student she started an organization called Culture Talk, which seeks to educate the larger community about culture, and she was the editor-in-chief and co-founder of the School of Historical, Philosophical and Religious Studies Digital Humanities Journal, an online journal for ASU students to publish their research in history, philosophy and religious studies.
Outside of campus, Natarajan serves on the student advisory board at Campus Vote Project and is involved with Girl Up, a leadership development initiative focusing on equity for girls and women in spaces where they are unheard or underrepresented.
She is also the winner of 14 scholarships and awards, including the Spirit of Service Scholarship, the Lily K. Sell Global Experience Scholarship and a PULSE Scholarship.
"My time at ASU has been a great way for me to learn how I can combine different fields together, whether it be through research or my academic experience," said Natarajan.
We caught up with her to discuss her time at ASU, her advice for current students and her plans for the future.
Question: What was your "aha" moment when you realized you wanted to study the fields you majored in?
Answer: I would say close to junior year when I decided that I wanted to add on the other degrees to become more proficient in data analysis and other data collection methods. I took some statistics classes in my junior and senior year that made me realize the importance of having data in our lives and how to make that relatable to social issues, like economic inequality or misinformation. Data is needed now more than ever in the social sciences, especially in our ever-changing world.
Q: What's something you learned while at ASU in the classroom or otherwise that surprised you or changed your perspective?
A: During my time at the State Press, I was able to learn about how ASU has been harnessing its charter to build partnerships with the State Department and other big companies to make education more accessible and open to the world. I was able to learn how ASU values the importance of universal learning through my reporting work on ASU's partnership with Crash Course and creating the ASU for You platform during the pandemic. Universal learning is a process where we continue to learn, and I like how ASU provides opportunities for academic enrichment regardless of where one might be in life.
Q: Why did you choose ASU?
A: I chose ASU because of the academic opportunities, specifically Barrett, The Honors College, and extracurricular activities, like the State Press. I also liked how my college experience has gotten me ready for the professional and academic world, especially when it came to getting involved with the various research opportunities during my time here. I like the focus that ASU has with research, and I have been able to get a lot from that in my academic and extracurricular experiences.
Q: Which professor taught you the most important lesson while at ASU?
A: This is kind of general, but going to my professors' office hours has been really great towards my planning for the future because I have the opportunity to get to know them one on one. All of my professors have taught me about the importance of office hours, and they are important because you take a lot of information away, especially when it comes to an assignment or your next step after undergrad.
Q: What's the best piece of advice you would give to students?
A: I would say get involved on campus. You will be able to find a lot of opportunities for growth. You meet people from outside of your major, and you also gain a lot of professional skills that the classroom might not give you, especially leadership skills. It is also a great way to start building your networking skills because that will be important after graduation.
Q: What was your favorite spot on campus, whether for studying, meeting friends or just thinking about life?
A: I really like the Hayden Library, especially the reading room on the first floor. I like it because it is quiet and also it's nice to see people moving around and about throughout the day. I also like the wide-ranging genres for books that are available for students to continue learning.
Q: What are your plans after graduation?
A: I plan to enroll in a data science program either at Columbia or Vanderbilt to strengthen my quantitative background en route to a PhD program to further research about comparative election misinformation.
Q: If someone gave you $40 million to solve one problem on our planet, what would you tackle?
A: I would love to tackle education equality, as many people around the world still don't have access to it. I would invest resources in building scholarships for underrepresented women globally so they can get funding for their higher education, and also invest in textbooks, paper and appropriate technologies to ensure that schools are properly equipped to teach their students and that teachers feel confident and prepared in teaching. The COVID-19 pandemic has caused serious gaps in education proficiency, and I want to ensure that future generations don't suffer from those setbacks.
Read more:
ASU graduate has 4 majors, 2 minors, 3 certificates and long list of ... - ASU News Now
Is your Data Governance program unlocking the true potential of … – IQVIA
Changing Commercial Models
The past five years have ushered in a new era for Life Science commercial models. The explosion in the volume and variety of available data sources, coupled with advancements in Machine Learning- and Artificial Intelligence-based platforms, has unlocked unforeseen opportunities. However, post-pandemic economic instability has also left its mark, making it all the more important that companies choose the right opportunities. Although their respective business strategies and tactics may vary, one thing remains constant across the entire industry: the need for a strong data governance program to maximize Commercial ROI while simultaneously protecting its most critical input - data.
A holistic Data Governance & Stewardship (DG&S) program is predicated on 5 core elements.
In the remainder of this blog, we will focus on how a data governance program directly supports maximizing the value of the Life Sciences commercial operating model.
To support competitive differentiation, Commercial Life Science is increasingly focusing on deploying next-generation use cases such as AI/ML-based Next Best Action, Dynamic Segmentation, and Advanced Digital Channel Optimization to increase customer centricity and, ultimately, revenue. These use cases require the integration of multiple data feeds from a plethora of internal and external sources. Robust data governance ensures that data quality checks exist both at the source level and downstream, so that a steady stream of high-quality data feeds these use cases. This in turn increases the reliability, validity and resulting financial value of the insights generated.
Most organizations have suboptimal policies and processes in place for managing data. These often involve convoluted data quality maintenance processes to support data cleaning, where business teams interspersed throughout the organization work with stewards on an ad-hoc, as-needed basis to respond reactively to quality concerns. This model can result in duplicated process and learning efforts when onboarding new data and managing existing data. A data governance program addresses these inefficiencies by creating a stewardship model with clearly defined and allocated roles and responsibilities. Such a model should be as streamlined as possible, freeing up resources for higher-value work.
Data compliance is of particular importance for pharma, in large part due to the elevated level of regulatory scrutiny and the constant evolution of regulations across different markets. Data governance plays a critical role in ensuring that, through applicable policies, data is classified in accordance with its sensitivity, and that it is managed and restricted accordingly. It ensures that data is stored, archived, retained, and disposed of in a manner that complies with all relevant regulations. Failure to do so can result in crippling penalties, loss of reputation and disruption to business continuity.
Optimizing and operationalizing Data Governance programs can feel like a daunting endeavor, especially when it comes to identifying where to begin.
We recommend Life Science companies begin by assessing their current data governance maturity across the 5 core elements to better understand their challenges and, more importantly, diagnose the root causes. The next step is to look at the activities leveraged by other Life Science peers to solve the same problems, and then to contextualize those activities to their specific situation. Lastly, we recommend defining an overall roadmap for the data governance program, with a focus on launching quick-strike pilots that can be scaled based on priorities.
IQVIA supports more than 40 global pharma, medical device and healthcare companies in their data governance and stewardship needs. If you are interested in learning firsthand about how Life Science companies have leveraged Data Governance to address their challenges, please connect with us here; we would be happy to speak with you. You can learn more about IQVIA's Data Governance and Stewardship capabilities at our website.
Here is the original post:
Is your Data Governance program unlocking the true potential of ... - IQVIA
Chicago Department of Public Health Wins Smart 50 Award for its … – chicago.gov
CHICAGO - The Chicago Department of Public Health (CDPH) has been named a Smart 50 Award winner, an honor given to innovative urban projects from around the world, for the Chicago Health Atlas data platform, developed in partnership with the University of Illinois-Chicago and software developer Metopio. The Chicago Health Atlas is a free community health data resource that residents, community organizations, the media and public health stakeholders can use to search for, analyze and download neighborhood-level health data for all of Chicago's 77 community areas.
"The Chicago Health Atlas is designed so that anyone can review, explore and compare health-related data over time and across communities," said Nikhil Prachand, Director of Epidemiology at CDPH. "Our hope is that people will use this data to both better understand health in Chicago and identify opportunities to improve health and well-being."
The Smart 50 Awards are given by Smart Cities Connect and the Smart Cities Connect Foundation, which annually honor the most innovative and smart municipal and regional-scale projects in the world.
Users of the Chicago Health Atlas, which is co-managed by the Population Health Analytics Metrics Evaluation (PHAME) Center at the UIC School of Public Health, can explore data on more than 160 public health indicators from more than 30 participating healthcare, community and research partners.
We are humbled to share the Smart 50 awards platform with 49 other incredibly transformative awardees from around the world," said Sanjib Basu, PhD, Paul Levy and Virginia F. Tomasek Professor of Epidemiology and Biostatistics at the UIC School of Public Health. "This global award is a distinct recognition of the role of the Chicago Health Atlas in advancing the health of communities and its residents.
The Chicago Health Atlas is also a place for users to gauge progress of the implementation of Healthy Chicago, the citywide plan to improve health equity and close the racial life expectancy gap.
"This is an exciting partnership between CDPH, UIC and Metopio," said Will Snyder, Co-founder and CEO of Metopio. "Our software is designed to break down data siloes and make powerful analytics available to a variety of stakeholders, regardless of their data science background, so they can uncover insights about populations and places they care about."
At the 7th annual edition of the Smart 50 Awards in Denver on May 15, Smart Cities Connect will announce three overall winning projects out of the 50 total awardees. The Chicago Health Atlas is also supported by the Otho S. A. Sprague Memorial Institute.
###
Read the original:
Chicago Department of Public Health Wins Smart 50 Award for its ... - chicago.gov
Dealing With Noisy Labels in Text Data – KDnuggets
With the rising interest in natural language processing, more and more practitioners are hitting the wall not because they can't build or fine-tune LLMs, but because their data is messy!
We will show simple, yet very effective coding procedures for fixing noisy labels in text data. We will deal with two common scenarios in real-world text data: a catch-all category whose tickets actually belong in other, more specific categories, and two overlapping categories that describe the same problem and should be merged.
We will use an ITSM (IT Service Management) dataset created for this tutorial (CC0 license). It's available on Kaggle at the link below:
https://www.kaggle.com/datasets/nikolagreb/small-itsm-dataset
It's time to start with the import of all libraries needed and basic data examination. Brace yourself, code is coming!
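A minimal sketch of this setup, with a small in-memory frame standing in for the Kaggle CSV (the file name and the Text/Category column names are assumptions about the dataset):

```python
import pandas as pd

# With the real dataset you would load the Kaggle CSV, e.g.:
# df = pd.read_csv("small_itsm_dataset.csv")  # file name is an assumption
# Here a tiny in-memory frame stands in for it.
df = pd.DataFrame({
    "Text": [
        "Asana board not loading after login",
        "Asana Projekt laesst sich nicht oeffnen",  # a German ticket
        "Outlook does not connect, mails do not arrive",
        "Discord notifications broken on desktop",
    ],
    "Category": ["Asana", "Help Needed", "Outlook", "Help Needed"],
})

print(df.shape)         # number of tickets and columns
print(df.head())        # first rows for a quick look
print(df.isna().sum())  # missing values per column
```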
Each row represents one entry in the ITSM database. We will try to predict the category of a ticket based on the text written by the user. Let's take a deeper look at the fields most important for the described business use case.
If we take a look at the first two tickets, although one ticket is in German, we can see that the described problems refer to the same software, Asana, yet they carry different labels. This is the starting distribution of our categories:
The Help Needed category looks suspicious, like a catch-all that can contain tickets from multiple other categories. Also, the Outlook and Mail categories sound similar; maybe they should be merged into one. Before diving deeper into these categories, we will get rid of missing values in the columns of interest.
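Dropping incomplete rows and checking the category distribution could look like this (column names are assumptions; a toy frame stands in for the real data):

```python
import pandas as pd

# Toy stand-in for the ITSM data, including some missing values.
df = pd.DataFrame({
    "Text": ["Asana board down", None, "Outlook will not sync", "CRM export fails"],
    "Category": ["Asana", "Asana", "Outlook", None],
})

# Keep only tickets that have both a text and a label.
df = df.dropna(subset=["Text", "Category"]).reset_index(drop=True)
print(df["Category"].value_counts())
```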
There isn't a valid substitute for examining the data with the naked eye. The handy pandas function for this is .sample(), so we will do exactly that once more, now for the suspicious category:
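Sampling the suspicious category for manual inspection might look like this (a toy frame stands in for the cleaned data):

```python
import pandas as pd

# Toy stand-in for the cleaned ITSM DataFrame.
df = pd.DataFrame({
    "Text": [
        "Bundled problems with Office since restart",
        "Discord stuck on connecting screen",
        "Asana tasks disappeared from my board",
    ],
    "Category": ["Help Needed", "Help Needed", "Help Needed"],
})

# Eyeball a few random tickets from the suspicious catch-all category.
suspicious = df[df["Category"] == "Help Needed"]
print(suspicious.sample(2, random_state=42)["Text"].tolist())
```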
Bundled problems with Office since restart:
Messages not sent
Outlook does not connect, mails do not arrive
Error 0x8004deb0 appears when Connection attempt, see attachment
The company account is affected: AB123
Access via Office.com seems to be possible.
Obviously, we have tickets talking about Discord, Asana, and CRM, so these tickets should be moved from Help Needed to existing, more specific categories. As the first step of the reassignment process, we will create a new column, Keywords, that indicates whether the Text column contains a word from the list of categories.
Also, note that using "if word in str(words_categories)" instead of "if word in words_categories" would catch words from categories with more than 1 word (Internet Browser in our case), but would also require more data preprocessing. To keep things simple and straight to the point, we will go with the code for categories made of just one word. This is how our dataset looks now:
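A sketch of the keyword-extraction step under those simplifications (the category list and column names are assumptions; toy data stands in for the real frame):

```python
import pandas as pd

# Toy tickets; in the tutorial the category list would come from the data.
df = pd.DataFrame({
    "Text": [
        "Asana board not loading",
        "Outlook does not connect",
        "Printer out of toner",
    ],
    "Category": ["Help Needed", "Help Needed", "Help Needed"],
})
categories = ["Asana", "Outlook", "Discord", "CRM"]

def find_keyword(text, cats):
    """Return the first single-word category mentioned in the text, else None."""
    for cat in cats:
        if cat.lower() in text.lower():
            return cat
    return None

df["Keywords"] = df["Text"].apply(lambda t: find_keyword(t, categories))
print(df)
```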
After extracting the Keywords column, we will assess the quality of the tickets. Our hypothesis: tickets whose extracted keyword does not match their assigned category are potentially mislabeled.
We made our new distribution, and now it is time to examine the tickets flagged as potential problems. In practice, the following step would require much more sampling and looking at larger chunks of data with the naked eye, but the rationale is the same: find problematic tickets and decide whether you can improve their quality or should drop them from the dataset. When you are facing a large dataset, stay calm, and don't forget that data examination and data preparation usually take much more time than building ML algorithms!
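Flagging potentially mislabeled tickets can be as simple as comparing the two columns (toy data; the Potential_problem column name is our own):

```python
import pandas as pd

# Toy frame already carrying the Keywords column from the previous step.
df = pd.DataFrame({
    "Text": ["Asana board not loading", "Outlook will not sync", "Laptop fan noise"],
    "Category": ["Help Needed", "Outlook", "Hardware"],
    "Keywords": ["Asana", "Outlook", None],
})

# A ticket is a potential problem when it mentions one category's keyword
# but carries a different label.
df["Potential_problem"] = df["Keywords"].notna() & (df["Keywords"] != df["Category"])
print(df["Potential_problem"].value_counts())
```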
outlook issue , I did an update Windows and I have no more outlook on my notebook ? Please help !
We understand that tickets from the Outlook and Mail categories relate to the same problem, so we will merge these two categories and improve the results of our future ML classification algorithm.
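The merge itself is a one-line relabel (toy data standing in for the real frame):

```python
import pandas as pd

df = pd.DataFrame({
    "Text": ["mails do not arrive", "outlook issue after Windows update"],
    "Category": ["Mail", "Outlook"],
})

# Fold Mail tickets into the Outlook category, since both describe
# the same underlying problem.
df["Category"] = df["Category"].replace("Mail", "Outlook")
print(df["Category"].value_counts())
```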
Last, but not least, we want to relabel some tickets from the meta category Help Needed to the proper category.
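A sketch of that relabeling, reusing the Keywords column (toy data; tickets without a keyword keep their original label):

```python
import pandas as pd

df = pd.DataFrame({
    "Text": ["Asana board not loading", "Discord call drops", "Need help, unclear"],
    "Category": ["Help Needed", "Help Needed", "Help Needed"],
    "Keywords": ["Asana", "Discord", None],
})

# Where a Help Needed ticket clearly mentions a specific product,
# move it to that product's category.
mask = (df["Category"] == "Help Needed") & df["Keywords"].notna()
df.loc[mask, "Category"] = df.loc[mask, "Keywords"]
print(df["Category"].tolist())
```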
We did our data relabeling and cleaning, but we shouldn't call ourselves data scientists if we don't run at least one scientific experiment and test the impact of our work on the final classification. We will do so by implementing the Complement Naive Bayes classifier in sklearn. Feel free to try other, more complex algorithms. Also, be aware that further data cleaning could be done - for example, we could also drop all tickets left in the "Help Needed" category.
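A self-contained sketch of such an experiment with scikit-learn's ComplementNB (the tiny repeated corpus below stands in for the cleaned tickets, so the score here only illustrates the workflow, not the tutorial's results):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import ComplementNB
from sklearn.pipeline import make_pipeline

# Toy corpus standing in for the cleaned ITSM tickets.
texts = [
    "asana board not loading", "asana tasks disappeared",
    "outlook does not connect", "mails do not arrive in outlook",
    "discord call keeps dropping", "discord notifications broken",
] * 5
labels = ["Asana", "Asana", "Outlook", "Outlook", "Discord", "Discord"] * 5

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.3, random_state=42, stratify=labels
)

# TF-IDF features feeding a Complement Naive Bayes classifier,
# which is well suited to imbalanced text classification.
model = make_pipeline(TfidfVectorizer(), ComplementNB())
model.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```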
Pretty impressive, right? The dataset we used is small (on purpose, so you can easily see what happens in each step), so different random seeds might produce different results, but in the vast majority of cases the model will perform significantly better on the cleaned dataset than on the original one. We did a good job!

Nikola Greb has been coding for more than four years, and for the past two years he has specialized in NLP. Before turning to data science, he was successful in sales, HR, writing and chess.
More: