Category Archives: Data Science

NVIDIA Invites Dataiku to the DGX-Ready Software Program to … – Database Trends and Applications

Dataiku, the platform for Everyday AI, is joining NVIDIA's DGX-Ready Software program, simplifying the deployment and management of AI for customers.

Dataiku has been selected for the exclusive, invite-only program because of its tested and certified solutions that pair with NVIDIA DGX systems, allowing NVIDIA customers and partners to easily implement advanced analytics and AI.

"Enterprises are seeking integrated solutions to power successful AI deployments," said John Barco, senior director of DGX Product Management, NVIDIA. "Pairing NVIDIA DGX systems with Dataiku software can help customers seamlessly and securely access and manage their data to simplify the deployment of enterprise AI."

Already a member of the NVIDIA AI Accelerated program, Dataiku has also recently become a Premier member of NVIDIA Inception, a program that offers resources and support to cutting-edge startups transforming industries with advancements in AI and data science.

Through Dataiku's collaboration with NVIDIA, customers will be able to overcome challenges through the following benefits:

"This collaboration gives our customers a clear advantage with unrivaled access to market-leading NVIDIA technology," said Abhi Madhugiri, vice president, global technology alliances at Dataiku. "The combination of Dataiku's platform with NVIDIA accelerated computing solutions like DGX will accelerate and simplify data projects, bringing the power of AI to organizations regardless of size or industry. Joining the exclusive NVIDIA DGX-Ready Software program and becoming a Premier member of NVIDIA Inception is an exciting opportunity for us to better serve our customers and continue our mission to democratize AI."

For more information about this news, visit http://www.dataiku.com.

Link:

NVIDIA Invites Dataiku to the DGX-Ready Software Program to ... - Database Trends and Applications

Ethical Use of AI in Insurance Modeling and Decision-Making – FTI Consulting

With increased availability of next-generation technology and data mining tools, insurance company use of external consumer data sets and artificial intelligence (AI) and machine learning (ML)-enabled analytical models is rapidly expanding and accelerating. Insurers have initially targeted key business areas such as underwriting, pricing, fraud detection, marketing distribution and claims management to leverage technical innovations to realize enhanced risk management, revenue growth and improved profitability. At the same time, regulators worldwide are intensifying their focus on the governance and fairness challenges presented by these complex, highly innovative tools, specifically the potential for unintended bias against protected classes of people.

In the United States, the Colorado Division of Insurance recently issued a first-in-the-nation draft regulation to support the implementation of a 2021 law passed by the state's legislature.[1] This law (SB21-169) prohibits life insurers from using external consumer data and information sources (ECDIS), or employing algorithms and models that use ECDIS, where the resulting impact of such use is unfair discrimination against consumers on the basis of race, color, national or ethnic origin, religion, sex, sexual orientation, disability, gender identity or gender expression.[2] In pre-release public meetings with industry stakeholders, the Colorado Division of Insurance also offered guidance that similar rules should be expected in the not-too-distant future for property & casualty insurers. In the same vein, UK and EU regulators are now penning new policies and legal frameworks to prevent AI model-driven consumer bias, ensure transparency and explainability of model-based decisions for customers and other stakeholders, and impose accountability for insurers who leverage these capabilities.[3]

Clearly, regulators around the globe believe that well-defined guard rails are needed to ensure the ethical use of external data and AI-powered analytics in insurance decision-making. Moreover, in some jurisdictions, public oversight and enablement bodies such as the U.S. Department of Commerce's National Institute of Standards and Technology (NIST) are also actively working to define cross-industry guidelines and rules for the acceptable use of external data to train AI/ML-powered decision support models without resulting in discrimination against protected classes of consumers.[4] Examples of potentially disfavored data may include:

Based on the Colorado draft regulation that was recently published, the expected breadth of pending new AI and external data set rules could mean potentially onerous execution challenges for insurers, who seek to balance the need for proactive risk management, market penetration and profitability objectives with principles of consumer fairness. For many insurers, internal data science and technology resources that are already swamped with their day jobs will be insufficient to meet expected reporting and model testing obligations across the multiple jurisdictions in which their companies do business. In other situations, insurers may lack appropriate test data and skill sets to assess potential model bias. In either instance or both, model testing and disclosure obligations will continue to mount and support will be needed to satisfy regulator demands and avoid the significant business ramifications of non-compliance.

So, how can insurance companies and their data science/technology teams best address the operational challenges that evolving data privacy and model ethics regulations will certainly present? Leading companies that want to get ahead of the curve may opt to partner with skilled experts, who understand the data and processing complexities of non-linear AI/ML-enabled models. The best of these external operators will also bring to the table deep insurance domain knowledge to assure context for testing and offer reliable, independent and market-proven test data and testing methodologies that can be easily demonstrated and explained to insurers and regulators alike.
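To make the kind of model testing discussed above more concrete, the sketch below shows one simple disparity screen a testing team might run on a model's outcomes. It is a hypothetical illustration only, not FTI Consulting's methodology or any regulator's prescribed test; the column names, toy data and the 0.8 screening threshold are assumptions made for the example.

```python
# Hypothetical illustration: a simple outcome-disparity check across groups.
# Column names, data and thresholds are invented; real regulatory testing
# involves far more than this single metric.
import pandas as pd

def adverse_impact_ratio(df: pd.DataFrame, group_col: str, outcome_col: str,
                         reference_group: str) -> pd.Series:
    """Favorable-outcome rate of each group divided by the reference group's rate.

    Values well below 1.0 flag a disparity worth investigating (the informal
    "four-fifths rule" uses 0.8 as a screening threshold).
    """
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates / rates[reference_group]

# Toy data: model-approved applications by (synthetic) group.
applications = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0,   1,   0],
})
print(adverse_impact_ratio(applications, "group", "approved", reference_group="A"))
```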

The burden of regulatory compliance in the insurance industry cannot be diminished, and it can challenge a company's ability to attain target business benefits if these two seemingly opposing objectives (compliance and profitability) are not managed in a proactively strategic and supportive way. With appropriate guidance and execution, insurers who comply with new and emerging regulations for the use of AI-powered decision-support models and external data sets may actually realize a number of tangible benefits beyond compliance, including more stable analytic insight models, improved new business profitability and operational scalability, and a better customer experience that enhances brand loyalty and drives customer retention and enhanced lifetime value.

More here:

Ethical Use of AI in Insurance Modeling and Decision-Making - FTI Consulting

Accenture to acquire Bengaluru-based AI firm Flutura – The Indian Express

IT services and consulting firm Accenture on Tuesday said it will acquire Bengaluru-based industrial artificial intelligence company Flutura.


The deal size was not disclosed.

Flutura has approximately 110 professionals who specialize in industrial data science services for manufacturers and other asset-intensive companies.

Flutura will strengthen Accenture's industrial AI services to increase the performance of plants, refineries, and supply chains while also enabling clients to accomplish their net-zero goals faster, Accenture said in a statement.

Ireland-based Accenture plans to bring Fluturas capabilities to clients in the energy, chemicals, metals, mining, and pharmaceutical industries.

"Flutura democratizes AI for engineers. This acquisition will power industrial AI-led transformation for our clients globally and particularly in Australia, South-East Asia, Japan, Africa, India, Latin America and the Middle East," Senthil Ramani, senior managing director and Accenture Applied Intelligence lead for Growth Markets, said.

Last year, Accenture acquired data science company ALBERT in Japan.

Other recent AI acquisitions by Accenture include Analytics8 in Australia, Sentelis in France, Bridgei2i and Byte Prophecy in India, Pragsis Bidoop in Spain, Mudano in the UK and Clarity Insights in the US.

First published on: 21-03-2023 at 13:25 IST

See original here:

Accenture to acquire Bengaluru-based AI firm Flutura - The Indian Express

Cloud Company Vultr Announces Availability of NVIDIA H100s and … – insideHPC

The NVIDIA HGX H100 joins Vultr's other cloud-based NVIDIA GPU offerings, including the A100, A40, and A16, rounding out Vultr's extensive infrastructure-as-a-service (IaaS) support for accelerated computing workloads. From generative AI, deep learning, HPC, video rendering and graphics-intensive applications, to virtual and augmented reality (VR/AR) applications and more, Vultr's robust GPU lineup extends its full range of affordable cloud computing solutions that make Vultr the ideal cloud infrastructure provider for AI and machine learning intensive businesses.

"The expansion of our NVIDIA GPU portfolio combined with our partnerships with Domino and Anaconda demonstrate our commitment to supporting innovation in AI and machine learning," said J.J. Kardwell, CEO of Vultr's parent company Constant. "We have aligned with the best providers of solutions for data scientists and MLOps to enable easy and affordable access to the highest performance accelerated computing infrastructure."

The seamless integration of the products and services from Vultr, Domino, and Anaconda delivers a platform engineering approach to MLOps to accelerate data science and reduce time-to-value. Within 60 seconds data scientists can spin up a complete and secure Anaconda development environment on the Domino MLOps platform, running on Vultr infrastructure, to immediately begin developing and testing new machine learning models. This unique alliance eliminates the complexity of configuring infrastructure and IDEs, so data science and MLOps teams can focus on innovation instead of operations.

"The most innovative data scientists can solve the world's greatest challenges when they have easier access to the tools they know and love, and better collaboration resulting in faster model iterations and deployment," said Thomas Robinson, COO of Domino Data Lab. "By teaming up with Anaconda and Vultr, we're paving the way to breakthrough innovation with end-to-end support for AI/ML lifecycles that accelerates time-to-value for data science teams."

"Together, Domino and Vultr make near-instantaneous access to our IDE available on demand from anywhere serious data science needs to happen," said Anaconda SVP of Worldwide Revenue Al Gashi. "Customers can confidently access the open-source tools their team needs to drive innovation forward and scale their machine learning projects. Through this partnership, data science teams can focus on what they do best: moving the world forward with AI-based innovation."

"Vultr's introduction of the NVIDIA HGX H100 on its platform is unleashing the power of next-generation accelerated computing for customers," said Dave Salvator, director of accelerated computing products at NVIDIA. "AI and machine learning developers can easily access the NVIDIA HGX H100 through Vultr, along with the broad NVIDIA AI platform, to supercharge their AI solutions."

Vultr's mission is to make high-performance cloud computing easy to use, affordable, and locally accessible for businesses and developers around the world. From the largest supercomputing clusters to virtual machines with fractional GPUs, Vultr makes access to the industry's best GPUs affordable by freeing users from the expensive overprovisioning embedded in the offerings of the Big Tech clouds. Customers can access Cloud GPUs by the hour or month just as needed, or on a long-term reserved capacity basis.

Vultr now offers the most powerful accelerated computing resources for unprecedented performance. NVIDIA GPUs are also integrated with Vultr's broad array of virtualized cloud compute and bare metal offerings, as well as Kubernetes, managed databases, block and object storage, and more. This seamless product and services array makes Vultr the preferred all-in-one provider for businesses of all sizes with critical AI and machine learning initiatives.

View original post here:

Cloud Company Vultr Announces Availability of NVIDIA H100s and ... - insideHPC

Focus where the value is with data science – The Australian Financial Review

Tong says this is a challenge for data science teams across every sector of the Australian economy.

Ada Tong, Product Director at Domain Insight. Photo: Actuaries Institute

"Data sources are vast and complex and, in some cases, haven't been used before. We're at the cusp of an explosion in machine-generated data, egged on by widespread adoption of connected apps and devices, so it's an incredibly exciting time, but that means nothing unless the right data is directed to solve the most important problems."

"As actuaries working in data science, our job is to turn data into something an organisation can interpret, so it can rapidly and confidently make decisions that deliver better experiences for the people it serves," she says.

Tong's team harnesses a massive volume of data, including buyer behaviour on its app and portal, illuminating leading indicators of property market dynamics at a micro-market level. That's critical information for all parties involved on both the buying and financing sides of the equation.

It may just be the perfect use case for an actuary: helping disparate parties engage with more confidence in Australia's largest, most scrutinised and most emotionally charged asset class.

"Understanding the whole ecosystem and being able to navigate its complexities has been crucial to serving up the most valuable insights possible from our data. The actuarial training teaches us to start with a clear understanding of the market, the customer and the business problem and then work back to a focused solution," she says.

More broadly, she says Domain Insight's data provides a unique perspective on how people approach refinancing, renovation or borrowing decisions, and how lenders value the funding of those decisions.

As well as planning, local and state governments use the data to scope future infrastructure needs and ensure property-based levies are collected on a fair and accurate basis.

"It's incredibly empowering to help so many different parties make better decisions in a complex and dynamic property market," says Tong.

"They say information is power, and I tend to agree with that. We can't eliminate uncertainty altogether, but the right insights can certainly reduce it, and I'm proud of the role we play in democratising access to high-quality property data."

Tong's Domain Insight experience is just one example of actuaries helping businesses right across the economy break new ground with data in sectors ranging from health to retail, financial services to agriculture, and, perhaps most pertinently, solutions to climate change.

Elayne Grace, CEO of the Actuaries Institute of Australia, says almost every business process can be improved through the intelligent deployment of data.

Elayne Grace, CEO of the Actuaries Institute. Photo: Actuaries Institute

"Actuaries are at the forefront of innovation in Australia's most exciting start-ups, disruptors, and established businesses," says Grace.

"They help businesses understand their customers as individuals, leveraging data to personalise products, pricing, and experiences."

"They also help supply chain managers move things from farm, mine, factory, or warehouse to sale as quickly and efficiently as possible."

"Actuaries are also modelling climate risk, determining where to invest, base agriculture and build communities over coming decades," she says.

One of the areas where data science is having a huge impact is public health, both in terms of improving services and enhancing efficiency.

"Actuaries were prominent advisors to government during the pandemic, projecting infection rates and modelling the potential impact of the available policy settings," she says.

In insurance, meanwhile, they use techniques including natural language processing to get claims to the right assessor as quickly as possible and scrutinise remediation quotes from repairers to make sure they're fair.

Grace says that when businesses hire an actuary, they hire someone who is rigorously trained in the technical, commercial, and innovation skills needed to succeed in data science.

"Humans, and increasingly machines, are producing more data than ever before. But more data only means more in the right hands," she says.

"Great data scientists can come from a wide range of backgrounds, but when you hire an actuary you know they are rigorously trained in the technical, commercial and innovation skills needed to succeed in data science."

"We are trained to start with the commercial problem and work back, and that's absolutely key. There's no point creating a theoretically perfect solution if it doesn't work in the real world."

"Many leading businesses across every sector of the Australian economy have actuaries making a significant contribution," says Grace.

"Actuaries are making a difference in respected tech businesses like Canva and Domain, and in leading data science consultancies like Quantium, Deloitte, and Taylor Fry."

"CBA, Telstra, and Woolworths are forming long-term partnerships with consultancies because they want to access data science talent that would be hard to hire direct," says Grace.

She says the key to actuarial advice is not recommending a solution that a business doesn't have the capacity to execute.

"That's why commerciality is so important," she says.

"You can distract a business and destroy a lot of value by creating a solution a company doesn't have the technology, people or culture to deliver. The role of the actuary is to define a solution that delivers the commercial outcome in the real world."

And the rewards? Real-world data science solutions to your most urgent commercial challenges, always delivered with the highest ethical standards.

Grace says "our actuaries are only at the tip of the iceberg" in terms of the problems data science and AI can solve.

"In addition to helping established businesses, our members are starting and scaling new businesses from the ground up, applying their skills to brand new problems," says Grace.

"It's only going to accelerate from here."

"With the revolution in IoT and sensors, we're at the early stages of an explosion of machine-generated data that promises to transform supply chains across the economy, making them highly responsive to the needs of consumers, with enormous environmental and social dividends."

"It's an incredibly exciting time."

To learn more, visit http://www.dodatabetter.com.au.

Read the original post:

Focus where the value is with data science - The Australian Financial Review

Rancho Biosciences Welcomes Regeneron as the Newest Member … – AccessWire

SAN DIEGO, CA / ACCESSWIRE / March 21, 2023 / Rancho Biosciences, a leading data sciences services company, is pleased to announce that Regeneron has become the latest Member to join its Single Cell Data Science (SCDS) pre-competitive consortium, along with existing Members: BenevolentAI, Bristol Myers Squibb, Janssen Research & Development, LLC, part of the Janssen Pharmaceutical Companies of Johnson & Johnson, Novartis and Vesalius Therapeutics.

The mission of the SCDS consortium is to find a common industry standard for how single cell datasets are created and formatted, through a systematic effort to develop data models and ensure that public data are curated in a consistent way. Due to the undeniable impact of single cell transcriptomics technology on drug discovery, there continues to be an exponential growth in the use of single cell sequencing methods by pharmaceutical companies. The availability of ever-increasing amounts of single cell datasets in the public domain allows pharmaceutical companies to dramatically expand their universe of single cell experiments over those generated internally. Leveraging this vast public data lake, by finding, downloading and curating single cell data, is highly laborious and time consuming relative to the effort scientists then spend collectively analyzing the data to gain value for biomedical research.

In Year 1, the SCDS consortium delivered a Data Tracker portal, and a 4-entity, deep data model around which 115 analysis-ready, harmonized datasets were released to Members. Now in Year 2, Members will continue to prioritize the deliverables and receive an ongoing stream of harmonized single cell datasets. In addition, there will be increased emphasis on cell type annotation, the creation of an SCDS Reference Atlas and evaluating multi-modal datasets. Under this shared cost model, these can be delivered at much higher throughput than one single company could achieve and much more cost-effectively. New Members are welcome.

About Rancho

Founded in 2012, Rancho Biosciences is a privately held company offering services for data curation, management and analysis for companies engaged in pharmaceutical research and development. Its clients include top 20 pharma and biotech companies, research foundations, government labs and academic groups.

For more information about Rancho and the SCDS consortium, contact:

Andy Hope, PhD

[emailprotected]

Contact Information

Andy Hope, Business Development, [emailprotected], 630-240-7809

SOURCE: Rancho BioSciences, LLC

Read more here:

Rancho Biosciences Welcomes Regeneron as the Newest Member ... - AccessWire

Future of Winning: How Data Science is Reshaping Politics and … – Columbia University

Molly Murphy, President of Impact Research

Navin Nayak, Counselor and President, Center for American Progress Action Fund

Patrick Ruffini, Founding Partner, Echelon Insights

Gregory J. Wawro, Ph.D., Professor of Political Science; Program Director, Political Analytics

Analytics is a driving force in politics and advocacy today. Whether it's developing content and ads across media, fundraising for causes or campaigns, building a social platform, getting people to register or vote, or helping to define and defend a political strategy, if it's not data-driven, it's dead on arrival. In this robust conversation with right- and left-leaning leaders in the use of data in politics and advocacy, you will learn about how the numbers will determine tomorrow's victories and defeats.

Event details:

This event is hosted by Columbia School of Professional Studies.

This event is open to the public and the Columbia community including alumni, prospective and current students, faculty, and their guests.

An admissions counselor will be available to speak with interested prospective students at this event. For additional information about program offerings at Columbia University's School of Professional Studies, please contact an admissions counselor at 212-854-9666 or [emailprotected].

For further information, please contact [emailprotected].

Molly Murphy, President of Impact Research: Molly Murphy is a top pollster and campaign strategist who has worked on some of the most consequential campaigns in recent history. Murphy has experience with hundreds of statewide, congressional, and local races all over the country, helping elect Democrats in Republican and swing districts on top of making it possible for Democrats to prevail in competitive primaries. In addition to her work with candidates, Murphy serves as a lead advisor for the DCCC, as well as closely supporting the DSCC and DGA to elect Democrats across the country.

Navin Nayak, Counselor and President, Center for American Progress Action Fund: Navin Nayak serves as counselor at the Center for American Progress and president of the Center for American Progress Action Fund. He oversees day-to-day management of the Action Fund and helps create new programs and projects that will assist the organization in fulfilling its mission.

Nayak has more than 15 years of experience in advocacy and elections with several organizations. Most recently, Nayak served as the director of opinion research for Hillary Clinton's presidential campaign, where he oversaw all the campaign's message research. Prior to that, he worked for eight years at the League of Conservation Voters, where, among several roles, he served as senior vice president for campaigns, overseeing all the organization's electoral work. He also served as the deputy director for the Clean Energy Works campaign, a national multimillion-dollar effort to pass comprehensive climate legislation in 2009-2010.

In addition to his advocacy experience, Nayak worked as a donor advisor at Corridor Partners, where he provided strategic guidance to donors on their advocacy and electoral investments.

Patrick Ruffini, Founding Partner, Echelon Insights: Over the past 15 years, Patrick Ruffini has advanced the digital and data-driven transformation of politics in numerous roles, most recently as a cofounder of Echelon Insights, a next-generation polling, analytics, and intelligence firm.

Ruffini began his career as one of the country's first political-digital practitioners, starting at the Republican National Committee in 2002. He managed grassroots technology and outreach for President George W. Bush's 2004 reelection campaign and returned to the RNC to run digital strategy in 2006. As the founder of Engage, a leading right-of-center digital agency, Ruffini would apply these lessons learned at the presidential level to political campaigns nationally and internationally, the advocacy and nonprofit worlds, the Fortune 50, and beyond.

In 2014 he cofounded Echelon Insights to evolve the traditional ways that organizations collect information to drive strategy. Ruffini leads the firm's analytics and technology practices, helping a wide array of clients craft more persuasive messages, manage crises, and reach audiences more cost-effectively.

As a writer and public speaker, Ruffini offers insights on political, demographic, and technology trends that are often highlighted by national media. He has contributed to publications including The Washington Post, FiveThirtyEight, Politico, and National Review; has been featured in The New York Times, Time, and Newsweek; and has appeared as a political analyst for NPR.

Gregory J. Wawro, Ph.D., Professor of Political Science; Program Director, Political Analytics: Professor Gregory Wawro is the director and founder of the M.S. in Political Analytics program. He previously served as the chair of the Department of Political Science at Columbia. He holds his Ph.D. from Cornell University and specializes in American politics, including Congress, elections, campaign finance, judicial politics, political economy, and political methodology. He is the author of Legislative Entrepreneurship in the U.S. House of Representatives and coauthor (with Eric Schickler) of Filibuster: Obstruction and Lawmaking in the United States Senate, which is a historical analysis of the causes and consequences of filibusters. His most recent book, Time Counts: Quantitative Analysis for Historical Social Science (with Ira Katznelson), seeks to advance historical research in the social sciences by bridging the divide between qualitative and quantitative analysis through an expansion of the standard quantitative methodological toolkit with a set of innovative approaches that capture nuances missed by more commonly used statistical methods. Professor Wawro is currently working on projects that explore the role that social media is playing in congressional elections and democratic participation.

Visit link:

Future of Winning: How Data Science is Reshaping Politics and ... - Columbia University

3 Key Ingredients for Making the Most of State Data Work – Government Technology

Despite the boom in data science, government projects that involve large data sets and powerful data tools still have a surprisingly high failure rate. Government agencies are increasingly seeking ways to use the data they already collect to achieve their measurement, evaluation and learning goals, but they often do not have the capacity or the right mix of staff to carry out data projects effectively.

The Center for Data Insights at MDRC, a nonprofit, nonpartisan research organization, recently partnered with state agencies to develop and execute a variety of data projects. We learned that barriers to success are not primarily about technical issues or analytic methods. Rather, data projects need three essential ingredients to be successful: people, perseverance and project scoping.

For example, MDRC worked with the New York State Office of Temporary and Disability Assistance to explore factors associated with long-term cash assistance receipt. The agency gathered an 11-person cross-functional team that included researchers, programmers, employment experts and operational staff members who worked regularly with local offices. Team members who did not have technical expertise provided content expertise and contextual information that were instrumental for both data quality assurance and interpretation of the analysis. The collaborative process prompted the technical staff to ask questions such as "How can different local offices use this analytical information in a practical way?" as they conducted their data analysis.

Perseverance is essential to success in any data project. Teams using new data techniques often go through a hype cycle in which high expectations for exciting results from a planned data analysis are frustrated by an analytic challenge. Successful teams persevere and adjust their original plans as needed.

The Colorado Department of Human Services was exploring the use of supportive payments, which are additional cash payments that can be used for the basic needs of the most vulnerable families who participate in the Temporary Assistance for Needy Families (TANF) program. They first wanted to know how the timeliness of certain types of supportive payments was related to employment outcomes, but the way the data had been recorded and tracked did not allow them to analyze data by payment type. Once they adjusted their research question to investigate the relationship between payment receipt and employment, they found selection bias issues that led to misleading findings about supportive payments. The team then tried several different ways to reduce the bias before identifying the approach that more accurately estimated the positive contribution of supportive payments to employment outcomes.
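The article does not say which adjustment the team ultimately settled on, but the sketch below illustrates one common way to reduce the kind of selection bias it describes: inverse probability of treatment weighting, applied to simulated data. The variable names, the simulated relationships and the choice of method are all assumptions made for illustration, not details from the Colorado project.

```python
# Illustrative only: inverse probability of treatment weighting (IPTW) on
# simulated data. This is NOT the Colorado/MDRC analysis; names and
# relationships are invented to show the idea.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000
df = pd.DataFrame({
    "prior_earnings": rng.normal(1_000, 400, n),
    "months_on_tanf": rng.integers(1, 36, n),
})
# Families with lower prior earnings are more likely to receive a payment...
p_treat = 1 / (1 + np.exp((df["prior_earnings"] - 1_000) / 300))
df["got_payment"] = rng.binomial(1, p_treat)
# ...and prior earnings also predict later employment, which creates the bias.
p_emp = 1 / (1 + np.exp(-(df["prior_earnings"] - 1_000) / 400 - 0.3 * df["got_payment"]))
df["employed"] = rng.binomial(1, p_emp)

# Naive comparison of recipients vs. non-recipients is confounded.
naive = df.groupby("got_payment")["employed"].mean().diff().iloc[-1]

# IPTW: model the probability of receiving a payment, then reweight.
features = df[["prior_earnings", "months_on_tanf"]]
ps = LogisticRegression().fit(features, df["got_payment"]).predict_proba(features)[:, 1]
w = np.where(df["got_payment"] == 1, 1 / ps, 1 / (1 - ps))
treated = df["got_payment"] == 1
weighted_diff = (np.average(df.loc[treated, "employed"], weights=w[treated])
                 - np.average(df.loc[~treated, "employed"], weights=w[~treated]))
print(f"naive difference: {naive:.3f}, IPTW-adjusted difference: {weighted_diff:.3f}")
```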

Project scoping is a way to set boundaries on your project by defining specific goals, deliverables and timelines. Designers should make room to be agile as they determine the scope of their data projects. The idea is to start small and then use what you learn to build more complex and nuanced analyses.

For example, the Washington Student Achievement Council (WSAC), the agency that oversees higher education in the state of Washington, wanted to learn whether the United Way of King County's Bridge to Finish campaign, which provides services to college students who may be at risk of food, housing or other basic-needs insecurity, could help students persist and earn a degree. The project scope began with a simple task: specify demographic and service use characteristics of students that may be associated with academic persistence and determine if these characteristics are measurable with the available data. This allowed the team to focus on the questions that were answerable based on data quality and completeness: Did the program recruit and serve students from historically marginalized groups? Was the program model flexible enough to address students' most pressing needs?

If instead the project had been scoped to begin with more complexity, like building a predictive risk model to identify students who might not persist or complete college, the project would have been stymied because of insufficient data and an incomplete analytical tool. For the Bridge to Finish campaign, the simpler approach at the outset, with a project scope that was flexible enough to change as data challenges emerged, ended up leading to findings that were much more useful and actionable.

Setting up data projects for success is not primarily about the data itself. Instead, it is about people who are planning, designing, and pushing through challenges together. Projects that are scoped effectively and that encourage project teams to persevere through challenges yield better results and richer findings, and ultimately help government agencies fulfill their missions.

Edith Yang is a senior associate with the Center for Data Insights at MDRC, a nonprofit, nonpartisan research organization.

More here:

3 Key Ingredients for Making the Most of State Data Work - Government Technology

How has national wellbeing evolved over time? – Economics Observatory

National wellbeing is normally measured by surveying individuals and collating their responses. There are many well-known national and international sources, including the World Values Survey, the World Happiness Report and Eurobarometer.

Unfortunately, these measures only give us between ten and 50 years of data, which is not ideal for forming a long-run understanding of how national wellbeing has changed over time. We can supplement these measures with modern techniques from data science including text analysis, which make it possible to infer mood from the language that people use.

This technique allows us to roll back measures of national wellbeing to around 1800 and gives us considerable insight into how national wellbeing has evolved over time. In particular, we can see that income matters for wellbeing but perhaps not by as much as we might have thought. Aspirations matter too.

Health correlates well with wellbeing as we might expect, but perhaps the most important factor in keeping wellbeing levels high has been avoiding major conflicts. This analysis provides us with some understanding of the most striking peaks and troughs of human happiness over time.

National wellbeing is far from a new concept, but it has become increasingly normalised as a potential policy objective for governments as data have become more readily available.

Watershed moments include when the United Nations (UN) asked member countries to measure happiness and use the data to guide policy in 2011, publication of the first World Happiness Report in 2012 and the UN International Day of Happiness. This annual occasion was first celebrated in 2013 and has since become a global focus for all things related to happiness.

The World Values Survey has contained a question on happiness since 1981. This initially covered 11 countries, but the number had risen to 100 in the 2017-22 wave. Other regional or national surveys provide slightly longer duration data.

Eurobarometer, a public opinion survey in the European Union, is probably the most well-known of these surveys. It has data on life satisfaction going back to 1972 for a selection of European countries. The World Happiness Report also includes global data on wellbeing that amounts to around ten years' worth of data.

What this means is that we have a maximum of around 50 years of data for a small number of countries, and perhaps ten years for most others. This is not enough to enable us to understand fully how an important socio-economic variable changes over time. Neither does it allow us to analyse how wellbeing responds to major social or economic shifts, wars, famines, pandemics and many other big events that tend to occur relatively rarely.

To go back further, we have to move beyond traditional methods of data collection and rely on non-survey methods.

Our work explores how we can measure national wellbeing before the 1970s using text data from newspapers and books. The principle is that people's mood can be extracted from the words that they use (Hills et al, 2019). This allows us to supplement traditional methods by constructing a long-run measure of national wellbeing going back 200 years.

Many national and international surveys measure reported wellbeing. For example, the World Values Survey includes a typical question: "Taking all things together, would you say you were very happy/quite happy/not very happy/not at all happy?"

This use of a short ordered set of answers, or a Likert scale, is also used in the World Happiness Report, though with an expanded range of zero to ten rather than just four possible responses. Respondents are asked to place their current level of wellbeing somewhere in this scale. This has led to the idea of the Cantril ladder, since respondents are asked to think of each number as a rung on a ladder.

Other national surveys also follow the Likert approach. For example, Eurobarometer's life satisfaction measure asks respondents: "On the whole, how satisfied are you with the life you lead: very unsatisfied/not very satisfied/fairly satisfied/very satisfied?"

There is debate about how concepts such as happiness and life satisfaction differ. But most accept that life satisfaction is a longer-term measure, while current happiness is more vulnerable to short-term fluctuations.

Nevertheless, averaged across large numbers of respondents and across long periods, most measures that use words like wellbeing, satisfaction or happiness tend to be correlated. What all of these surveys have in common is the need to interview a large number of people, which is costly in terms of time and organisation. That explains why data tend to be annual. Unfortunately, this provides a limit on the speed with which we can build up a good supply.
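As a deliberately simplified illustration of how such responses become a single national figure, the toy calculation below averages a handful of made-up 0-10 Cantril-ladder answers; published indices are essentially this mean taken over thousands of respondents, usually with survey weights applied.

```python
# Toy example: individual ladder responses (invented) averaged into one
# national wellbeing score. Real surveys apply sampling weights as well.
responses = [7, 8, 6, 9, 5, 7, 8, 4, 6, 7]          # one respondent each, 0-10 scale
national_average = sum(responses) / len(responses)
print(f"national average wellbeing: {national_average:.2f}")   # 6.70
```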

To generate more data, especially from the past, we need to use non-survey methods. One approach is to make use of well-known results from psychology indicating that mood can be inferred from language. These insights have been used successfully at the individual level to pick up sentiment from social media posts and other sources (Dodds and Danforth, 2009).

To scale this to the national level, we need two things: a large body of text data (a corpus) and a way to translate text into numerical data (a norm). We use several examples of each, but to give a feel for how this works, Google have digitised millions of books published between 1500 and the present, allowing us access to billions of words. This is one of the core sources for our work.

The main norm used is Affective Norms for English Words known as ANEW (Bradley and Lang, 1999). This converts words into numbers that measure happiness (text valence) on a scale of one to nine. For example, the word joy scores 8.21, while stress scores only 1.79.

We then shrink the set of words down to a common 1,000 that appear widely across time and different languages. Finally, we construct a weighted average of implied happiness in text for a number of different languages and periods. For example, we take the weighted average text valence for each year in books and newspapers published in the UK from 1800 to 2009, and we call this the National Valence Index (NVI).

To see how this works, imagine two years in which the number of words is the same but there is a shift from words like joy to words like stress. In this case, the weighted average text valence score would fall significantly.
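A minimal sketch of that calculation is shown below. The two ANEW scores quoted above ("joy" 8.21, "stress" 1.79) are real; the other lexicon values and the toy yearly texts are made up, and a real NVI calculation would use the full norm and the restricted list of stable words described in this article.

```python
# Minimal sketch of a frequency-weighted text valence score. The tiny lexicon
# reuses the two ANEW values quoted in the text; the rest is invented.
from collections import Counter

VALENCE = {"joy": 8.21, "stress": 1.79, "happy": 8.0, "war": 2.0, "home": 7.0}

def text_valence(tokens: list[str]) -> float:
    """Frequency-weighted average valence over words that appear in the norm."""
    counts = Counter(t for t in tokens if t in VALENCE)
    total = sum(counts.values())
    return sum(VALENCE[w] * c for w, c in counts.items()) / total

year_a = "joy and happy news at home joy".split()
year_b = "stress and war news at home stress".split()
print(round(text_valence(year_a), 2), round(text_valence(year_b), 2))
# The shift from "joy" toward "stress" pulls the yearly average down sharply.
```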

Validation is crucial: we need to be sure that our measure corresponds with survey measures. It is also necessary to recognise and control for changes in language over time and, of course, variations in literacy and the purpose of literature.

First, this measure is highly correlated with survey results. Further, the correlation is positive: when the nation is happy (according to survey data), the text we read and write tends to be happy (high valence). The reverse is true when the nation is sad.

Second, the measure needs to control for language evolving over time. We do this by looking at the neighbourhood around words. Specifically, if we see that a word is surrounded by different words over time, this tends to mean that the word has changed meaning. In this case, it is removed from the 1,000, and we go down to the 500 most stable words, those that have the same words in a neighbourhood around them. This study also includes controls for literacy. It is limited to the period post-1800, when literacy levels were high in the UK and when text data come mainly from novels (as opposed to a large share being religious texts or legal documents, as in the 1600s).
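The sketch below illustrates the idea behind that stability check: compare a word's co-occurrence neighbourhood in two periods and drop the word if the overlap is too small. The window size, overlap threshold, toy corpora and example word are all assumptions made for illustration; the study's actual procedure may differ in detail.

```python
# Illustrative neighbourhood-stability check for semantic drift.
from collections import Counter

def neighbourhood(tokens: list[str], target: str, window: int = 2, top_k: int = 20) -> set[str]:
    """Most frequent words appearing within `window` positions of `target`."""
    near = Counter()
    for i, tok in enumerate(tokens):
        if tok == target:
            near.update(tokens[max(0, i - window): i] + tokens[i + 1: i + 1 + window])
    return {w for w, _ in near.most_common(top_k)}

def is_stable(tokens_then: list[str], tokens_now: list[str], target: str,
              min_jaccard: float = 0.3) -> bool:
    then, now = neighbourhood(tokens_then, target), neighbourhood(tokens_now, target)
    if not then or not now:
        return False
    jaccard = len(then & now) / len(then | now)
    return jaccard >= min_jaccard

corpus_1850 = "the gay party was a merry and happy gathering".split()
corpus_2000 = "the gay rights movement campaigned for equality".split()
print(is_stable(corpus_1850, corpus_2000, "gay"))   # False: the neighbourhood has shifted
```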

Using this text measure, we can document longitudinal shifts in happiness over time. But we need to be careful when interpreting graphical data. First, comparisons are best made over short durations. In other words, rates of change are always more valid than looking at long-run levels. Second, the quantity of data has risen over time, which makes more distant history more prone to error.

Figure 1 shows a book-based NVI measure for the UK. It highlights huge falls and rises surrounding the two world wars in the 20th century.

This provides a clue as to the major force that has driven wellbeing in the past: avoiding major conflicts. Analysis that looks at how our measure changes alongside variations in other major socio-economic variables also sheds light on other key drivers.

National income does correlate with national wellbeing, but the effect sizes are small. In other words, it takes a very large rise in national income to produce a small increase in wellbeing. National health, traced using proxy measures such as life expectancy or child mortality, unsurprisingly correlates with national wellbeing.

The data also show how powerful aspirations seem to be. To highlight this, we can look at the later 20th century. We see a sharp rise from 1945 up to 1957 (when Harold Macmillan famously said that the country had never had it so good), but then there is a slow decline through to 1978-79 (the aptly named Winter of Discontent).

In line with current thinking on what influences wellbeing, this seems to reflect expectations. In the period following the Second World War, hopes were high. But it seems that they were not fully realised, pushing wellbeing down. This occurred even though there were significant increases in productivity and national income, and improvements in technology between the 1950s and the 1970s.

Crucially, people seem to be largely thinking about their wellbeing relative to where they thought they might be. As a result, the 1950s seemed good relative to the 1940s, but the 1970s did not satisfy hopes relative to the 1960s.

Previous research has also argued that aspirations play a role in determining reported wellbeing (Blanchflower and Oswald, 2004). It has even been stated that more realistic aspirations are part of the reason why happiness rises after middle age for many people (Blanchflower and Oswald, 2008).

To use language to measure happiness, we need books and newspapers to have been digitised and norms to be available, which restricts the number of countries we can analyse. One way around this is to use audio data.

Music is sometimes called a universal language and a language of the emotions. It can be sad, happy, exciting, dull, terrifying or calming and these emotions can span different cultures and time periods.

Working with a group of computer scientists, we have developed a machine-learning algorithm that can recognise 190 different characteristics of sound and use these to estimate the happiness embodied in music (Benetos et al, 2022).

The algorithm first needed to be trained on sound samples where we already know the embodied happiness; this is the equivalent of using a norm for text. The equivalent of the corpus of text is the music itself, and to maximise the chances of measuring national mood, we focus on top-selling music.
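As a rough illustration of this supervised setup, the sketch below fits a regressor to predict a happiness label from per-track feature vectors. The feature extraction step is not shown (the study cites roughly 190 sound characteristics per track), the data are randomly generated, and the choice of a gradient-boosted model is an assumption for the example, not the authors' actual algorithm.

```python
# Illustrative sketch: predict track-level happiness from audio feature vectors.
# Features and labels are simulated; real work would extract ~190 sound
# characteristics per track and use human-labelled valence.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(42)
n_tracks, n_features = 500, 190
X = rng.normal(size=(n_tracks, n_features))            # one feature vector per track
valence = 5 + X[:, :5].sum(axis=1) + rng.normal(scale=0.5, size=n_tracks)  # labelled happiness

X_train, X_test, y_train, y_test = train_test_split(X, valence, random_state=0)
model = GradientBoostingRegressor().fit(X_train, y_train)
print("MAE on held-out tracks:", round(mean_absolute_error(y_test, model.predict(X_test)), 2))

# Once trained, such a model could score each year's top-selling tracks, and the
# yearly average would be an audio-based counterpart to the text-based NVI.
```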

This study finds that the mood embodied in a single popular song seems to be better at predicting survey-based mood than the vast amount of text data that we use. This seems remarkable until you remember that language contains a mixture of emotional content and information. This might explain why using music, which has a greater emotional content, could be a better way to capture wellbeing, especially for nations where text data are sparse.

Putting this all together, our hope is that as data science, computational power and behavioural science advance, our understanding of national wellbeing will continue to improve. This can only help policy-makers to develop a better understanding of how government policy or major shocks are likely to affect the wellbeing of the nation.

Visit link:

How has national wellbeing evolved over time? - Economics Observatory

LabWare Announces Foundational Integration to Software and … – PR Newswire

LabWare shares how they will use data science and machine learning to allow their customers to save time, money and resources, giving them a competitive advantage.

PHILADELPHIA, March 21, 2023 /PRNewswire/ -- LabWare announced today at Pittcon 2023, the premier annual conference on laboratory science, that it is making data science and machine learning foundational to its software. This concept is unique to the industry, and will enable labs of the future.

"This integration will revolutionize the way that laboratories handle data, enabling them to uncover insights that were previously hidden," said Patrick Callahan, Director of Advanced Analytics, LabWare. "As a global leader of laboratory information management systems, we need to stay one step ahead in the industry, and we have a responsibility to our clients."

As the pandemic changed the way of the world, LabWare played a key role in making sure labs around the world kept operating, making life-saving discoveries and producing results.

LabWare actively works with public and private sector organizations worldwide to apply their considerable know-how and advanced technology to enhance workflow and operational efficiency in the lab. These efforts to increase laboratory testing capacity have met the unprecedented public health testing demands. Data serves as the fabric inside LabWare's application and maximizing its potential is foundational to their platform development.

"In today's day and age, there's a huge need to not only acquire data, but also understand it and apply it to scientists' and lab manager's tasks without taking them outside their normal work streams," Callahan said. "That's where LabWare analytics comes in, to help our customers explore and leverage the data they've acquired. This will be critical as we move into new methods of Automation and discovery."

Through client conversations, LabWare has found that making data science and machine learning foundational to what it does enables its clients to succeed in the lab and beyond.

"We intend to ensure our customers have the competitive advantage they need by leveraging our solutions," Callahan said.

To learn more about LabWare, visit http://www.labware.com. Visit LabWare at Pittcon, March 19-22, at Booth #2442.

LabWare is recognized as the global leader of Laboratory Information Management Systems (LIMS) and instrument integration software products. The company's Enterprise Laboratory Platform combines the award-winning LabWare LIMS and LabWare ELN, which enables its clients to optimize compliance, improve quality, increase productivity, and reduce costs. LabWare is a full-service informatics provider offering software, professional implementation and validation services, training, and world-class technical support to ensure customers get the maximum value from their LabWare products.

Founded in 1978, the company is headquartered in Wilmington, Delaware with offices throughout the world to support customer installations in over 125 countries.

Contact: Katie Zamarra, 917.379.5422, [emailprotected]

SOURCE LabWare

Read the rest here:

LabWare Announces Foundational Integration to Software and ... - PR Newswire