Category Archives: Data Science

5. About this canvassing of experts – Pew Research Center

This report is the second of two reports issued in 2021 that share results from the 12th Future of the Internet canvassing by the Pew Research Center and Elon University's Imagining the Internet Center. The first report examined the new normal for digital life that could exist in 2025 in the wake of the outbreak of the global pandemic and other crises in 2020.

For this report, experts were asked to respond to several questions about the future of ethical artificial intelligence via a web-based instrument that was open to them from June 30 to July 27. In all, 602 people responded after invitations were emailed to more than 10,000 experts and members of the interested public. The results published here come from a nonscientific, nonrandom, opt-in sample and are not projectable to any population other than the individuals expressing their points of view in this sample.

Respondent answers were solicited through the following prompts:

Application of AI Ethics by 2030: In recent years, there have been scores of convenings and even more papers generated proposing ethical frameworks for the application of artificial intelligence (AI). They cover a host of issues, including transparency, justice and fairness, privacy, freedom and human autonomy, beneficence and non-maleficence, freedom, trust, sustainability and dignity. Our questions here seek your predictions about the possibilities for such efforts.

The question on the future of ethical AI: By 2030, will most of the AI systems being used by organizations of all sorts employ ethical principles focused primarily on the public good?

- YES, ethical principles focused primarily on the public good WILL be employed in most AI systems by 2030

- NO, ethical principles focused primarily on the public good WILL NOT be employed in most AI systems by 2030

Follow-up question on ethical AI, seeking a written elaboration on the previous question: Will AI mostly be used in ethical or questionable ways in the next decade? Why? What gives you the most hope? What worries you the most? How do you see AI applications making a difference in the lives of most people? As you look at the global competition over AI systems, what issues concern you or excite you?

Results for the quantitative question regarding how widely deployed ethical AI systems will be in 2030:

The respondents were also asked to consider the possible role that quantum computing might play in creating ethical AI systems. The prompting question was:

Quantum computing? How likely is it that quantum computing will evolve over the next decade to assist in creating ethical artificial intelligence systems?

In all, 551 respondents answered this question: 17% said very likely; 32% said somewhat likely; 27% said somewhat unlikely; and 24% said very unlikely.
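For readers who want absolute numbers, the published shares can be converted back into approximate respondent counts. A minimal sketch; the counts below are our rounding of the report's percentages, not figures stated in the report:

```python
# Convert the reported percentage shares of 551 respondents into
# approximate counts (approximate because the shares were rounded).
N = 551
shares = {
    "very likely": 0.17,
    "somewhat likely": 0.32,
    "somewhat unlikely": 0.27,
    "very unlikely": 0.24,
}

counts = {label: round(N * p) for label, p in shares.items()}
print(counts)  # {'very likely': 94, 'somewhat likely': 176, ...}
```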

The follow-up prompt to elicit their open-ended written answers was:

Follow-up on quantum computing (written elaboration). If you do not think it likely that quantum computing will evolve to assist in building ethical AI, why not? If you think that will be likely, why do you think so? How will that evolution unfold and when? Will humans still be in the loop as AI systems are created and implemented?

The web-based instrument was first sent directly to an international set of experts (primarily U.S.-based) identified and accumulated by Pew Research and Elon University during previous studies, as well as those identified in a 2003 study of people who made predictions about the likely future of the internet between 1990 and 1995. Additional experts with proven interest in digital health, artificial intelligence ethics and other aspects of these particular research topics were also added to the list. We invited a large number of professionals and policy people from government bodies and technology businesses, think tanks and interest networks (for instance, those that include professionals and academics in law, ethics, medicine, political science, economics, social and civic innovation, sociology, psychology and communications); globally located people working with communications technologies in government positions; technologists and innovators; top universities' engineering/computer science, political science, sociology/anthropology and business/entrepreneurship faculty, graduate students and postgraduate researchers; plus some who are active in civil society organizations that focus on digital life; and those affiliated with newly emerging nonprofits and other research units examining the impacts of digital life.

Among those invited were researchers, developers and business leaders from leading global organizations, including Oxford, Cambridge, MIT, Stanford and Carnegie Mellon universities; Google, Microsoft, Akamai, IBM and Cloudflare; leaders active in the advancement of and innovation in global communications networks and technology policy, such as the Internet Engineering Task Force (IETF), Internet Corporation for Assigned Names and Numbers (ICANN), Internet Society (ISOC), International Telecommunication Union (ITU), Association of Internet Researchers (AoIR), and the Organization for Economic Cooperation and Development (OECD). Invitees were encouraged to share the survey link with others they believed would have an interest in participating, so there may have been something of a snowball effect as some invitees invited others to weigh in.

The respondents' remarks reflect their personal positions and are not the positions of their employers; the descriptions of their leadership roles help identify their background and the locus of their expertise. Some responses are lightly edited for style and readability.

A large number of the expert respondents elected to remain anonymous. Because people's level of expertise is an important element of their participation in the conversation, anonymous respondents were given the opportunity to share a description of their internet expertise or background, and this was noted, when available, in this report.

In this canvassing, 65% of respondents answered at least one of the demographic questions. Seventy percent of these 591 people identified as male and 30% as female. Some 77% identified themselves as being based in North America, while 23% are located in other parts of the world. When asked about their primary area of interest, 37% identified themselves as professor/teacher; 14% as research scientists; 13% as futurists or consultants; 9% as technology developers or administrators; 7% as advocates or activist users; 8% as entrepreneurs or business leaders; 3% as pioneers or originators; and 10% specified their primary area of interest as other.

Following is a list noting a selection of key respondents who took credit for their responses on at least one of the overall topics in this canvassing. Workplaces are included to show expertise; they reflect the respondents' job titles and locations at the time of this canvassing.

Sam Adams, 24-year veteran of IBM, now senior research scientist in artificial intelligence for RTI International; Micah Altman, a social and information scientist at MIT; Robert D. Atkinson, president of the Information Technology and Innovation Foundation; David Barnhizer, professor of law emeritus and co-author of The Artificial Intelligence Contagion: Can Democracy Withstand the Imminent Transformation of Work, Wealth and the Social Order?; Marjory S. Blumenthal, director of the science, technology and policy program at RAND Corporation; Gary A. Bolles, chair for the future of work at Singularity University; danah boyd, principal researcher, Microsoft Research, and founder of Data & Society; Stowe Boyd, consulting futurist expert in technological evolution and the future of work; Henry E. Brady, dean of the Goldman School of Public Policy at the University of California, Berkeley; Tim Bray, technology leader who has worked for Amazon, Google and Sun Microsystems; David Brin, physicist, futures thinker and author of the science fiction novels Earth and Existence; Nigel Cameron, president emeritus, Center for Policy on Emerging Technologies; Kathleen M. Carley, director, Center for Computational Analysis of Social and Organizational Systems, Carnegie Mellon University; Jamais Cascio, distinguished fellow at the Institute for the Future; Vint Cerf, Internet Hall of Fame member and vice president and chief internet evangelist at Google; Barry Chudakov, founder and principal at Sertain Research; Adam Clayton Powell III, senior fellow, USC Annenberg Center on Communication Leadership and Policy; Christina J. Colclough, an expert on the future of work and the politics of technology and ethics in AI; Susan Crawford, a professor at Harvard Law School and former special assistant in the Obama White House for science, technology and innovation policy; Kenneth Cukier, senior editor at The Economist and co-author of Big Data; Neil Davies, co-founder of Predictable Network Solutions and a pioneer of the committee that oversaw the UK's initial networking developments; Rosalie Day, policy leader and consultancy owner specializing in system approaches to data ethics, compliance and trust; Abigail De Kosnik, director of the Center for New Media, University of California, Berkeley; Amali De Silva-Mitchell, futurist and consultant participating in global internet governance processes; Jeanne Dietsch, New Hampshire senator and former CEO of MobileRobots Inc.; Stephen Downes, senior research officer for digital technologies, National Research Council of Canada; Bill Dutton, professor of media and information policy at Michigan State University, former director of the Oxford Internet Institute; Esther Dyson, internet pioneer, journalist, entrepreneur and executive founder of Way to Wellville; Glenn Edens, professor at Thunderbird School of Global Management, Arizona State University, previously a vice president at PARC; June Anne English-Lueck, professor of anthropology at San Jose State University and a distinguished fellow at the Institute for the Future; Susan Etlinger, industry analyst for Altimeter Group; Daniel Farber, author, historian and professor of law at the University of California, Berkeley; Marcel Fafchamps, professor of economics and senior fellow at the Center on Democracy, Development and the Rule of Law at Stanford University; Seth Finkelstein, consulting programmer and Electronic Frontier Foundation Pioneer Award winner; Rob Frieden, professor of telecommunications law at Penn State, previously worked with Motorola and held senior U.S. policy positions at the FCC and National Telecommunications and Information Administration; Edward A. Friedman, professor emeritus of technology management at Stevens Institute of Technology; Jerome C. Glenn, co-founder and CEO of the futures-research organization The Millennium Project; Mike Godwin, former general counsel for the Wikimedia Foundation and author of Godwin's Law; Kenneth Grady, futurist, founding author of The Algorithmic Society blog; Erhardt Graeff, researcher expert in the design and use of technology for civic and political engagement, Olin College of Engineering; Benjamin Grosof, chief scientist at Kyndi, a Silicon Valley AI startup; Glenn Grossman, a consultant of banking analytics at FICO; Wendy M. Grossman, a UK-based science writer, author of net.wars and founder of the magazine The Skeptic; Jonathan Grudin, principal researcher, Microsoft; John Harlow, smart-city research specialist at the Engagement Lab at Emerson College; Brian Harvey, emeritus professor of computer science at the University of California, Berkeley; Su Sonia Herring, a Turkish-American internet policy researcher with Global Internet Policy Digital Watch; Mireille Hildebrandt, expert in cultural anthropology and the law and editor of Law, Human Agency and Autonomic Computing; Gus Hosein, executive director of Privacy International; Stephan G. Humer, lecturer expert in digital life at Hochschule Fresenius University of Applied Sciences in Berlin; Alan Inouye, senior director for public policy and government, American Library Association; Shel Israel, Forbes columnist and author of many books on disruptive technologies; Maggie Jackson, former Boston Globe columnist and author of Distracted: Reclaiming Our Focus in a World of Lost Attention; Jeff Jarvis, director, Tow-Knight Center, City University of New York; Jeff Johnson, professor of computer science, University of San Francisco, previously worked at Xerox, HP Labs and Sun Microsystems; Paul Jones, professor emeritus of information science at the University of North Carolina, Chapel Hill; Anthony Judge, editor of the Encyclopedia of World Problems and Human Potential; David Karger, professor at MIT's Computer Science and Artificial Intelligence Laboratory; Frank Kaufmann, president of the Twelve Gates Foundation; Eric Knorr, pioneering technology journalist and editor in chief of IDG; Jonathan Kolber, a member of the TechCast Global panel of forecasters and author of a book about the threats of automation; Gary L. Kreps, director of the Center for Health and Risk Communication at George Mason University; David Krieger, director of the Institute for Communication and Leadership, based in Switzerland; Benjamin Kuipers, professor of computer science and engineering at the University of Michigan; Patrick Larvie, global lead for the workplace user-experience team at one of the world's largest technology companies; Jon Lebkowsky, CEO, founder and digital strategist, Polycot Associates; Sam Lehman-Wilzig, professor and former chair of communication at Bar-Ilan University, Israel; Mark Lemley, director of Stanford University's Program in Law, Science and Technology; Peter Levine, professor of citizenship and public affairs at Tufts University; Rich Ling, professor at Nanyang Technological University, Singapore; J. Scott Marcus, an economist, political scientist and engineer who works as a telecommunications consultant; Nathalie Marchal, senior research analyst at Ranking Digital Rights; Alice E. Marwick, assistant professor of communication at the University of North Carolina, Chapel Hill, and adviser for the Media Manipulation project at the Data & Society Research Institute; Katie McAuliffe, executive director for Digital Liberty; Pamela McCorduck, writer, consultant and author of several books, including Machines Who Think; Melissa Michelson, professor of political science, Menlo College; Steven Miller, vice provost and professor of information systems, Singapore Management University; James Morris, professor of computer science at Carnegie Mellon; David Mussington, senior fellow at CIGI and director at the Center for Public Policy and Private Enterprise at the University of Maryland; Alan Mutter, consultant and former Silicon Valley CEO; Beth Noveck, director, New York University Governance Lab; Concepcion Olavarrieta, foresight and economic consultant and president of the Mexico node of The Millennium Project; Fabrice Popineau, an expert on AI, computer intelligence and knowledge engineering based in France; Oksana Prykhodko, director of the European Media Platform, an international NGO; Calton Pu, professor and chair in the School of Computer Science at Georgia Tech; Irina Raicu, a member of the Partnership on AI's working group on Fair, Transparent and Accountable AI; Glynn Rogers, retired, previously senior principal engineer and a founding member at the CSIRO Centre for Complex Systems Science; Douglas Rushkoff, writer, documentarian and professor of media, City University of New York; Jean Seaton, director of the Orwell Foundation and professor of media history at the University of Westminster; Greg Sherwin, vice president for engineering and information technology at Singularity University; Henning Schulzrinne, Internet Hall of Fame member, co-chair of the Internet Technical Committee of the IEEE and professor at Columbia University; Ben Shneiderman, distinguished professor of computer science and founder of Human Computer Interaction Lab, University of Maryland; John Smart, foresight educator, scholar, author, consultant and speaker; Jim Spohrer, director of cognitive open technologies and the AI developer ecosystem at IBM; Sharon Sputz, executive director, strategic programs, Columbia University Data Science Institute; Jon Stine, executive director of the Open Voice Network, setting standards for AI-enabled vocal assistance; Jonathan Taplin, author of Move Fast and Break Things: How Google, Facebook and Amazon Cornered Culture and Undermined Democracy; Brad Templeton, internet pioneer, futurist and activist, a former president of the Electronic Frontier Foundation; Ed Terpening, consultant and industry analyst with the Altimeter Group; Ian Thomson, a pioneer developer of the Pacific Knowledge Hub; Joseph Turow, professor of communication, University of Pennsylvania; Dan S. Wallach, a professor in the systems group at Rice University's Department of Computer Science; Wendell Wallach, ethicist and scholar at Yale University's Interdisciplinary Center for Bioethics; Amy Webb, founder, Future Today Institute, and professor of strategic foresight, New York University; Jim Witte, director of the Center for Social Science Research at George Mason University; Simeon Yates, a professor expert in digital culture and personal interaction at the University of Liverpool and the research lead for the UK government's Digital Culture team; Warren Yoder, longtime director at Public Policy Center of Mississippi, now an executive coach; Jillian York, director of international freedom of expression for the Electronic Frontier Foundation; and Ethan Zuckerman, director, MIT's Center for Civic Media, and co-founder, Global Voices.

A selection of institutions at which some of the respondents work or have affiliations:

AAI Foresight; AI Now Research Institute of New York University; AI Impact Alliance; Access Now; Akamai Technologies; Altimeter Group; American Enterprise Institute; American Institute for Behavioral Research and Technology; American Library Association; American University; American University of Afghanistan; Anticipatory Futures Group; APNIC; Arizona State University; Aspen Institute; AT&T; Atlantic Council; Australian National University; Bar-Ilan University; Benton Institute; Bloomberg Businessweek; Brookings Institution; BT Group; Canada Without Poverty; Carleton University; Carnegie Endowment for International Peace; Carnegie Mellon University; Center for a New American Security; Center for Data Innovation; Center for Global Enterprise; Center for Health and Risk Communication at George Mason University; Center for Strategic and International Studies; Centre for International Governance Innovation; Centre National de la Recherche Scientifique, France; Chinese University of Hong Kong; Cisco Systems; Citizens and Technology Lab; City University of New York; Cloudflare; Columbia University; Constellation Research; Convo Research and Strategy; Cornell University; Council of Europe; Data Across Sectors for Health at the Illinois Public Health Institute; Data & Society Research Institute; Data Science Institute at Columbia; Davis Wright Tremaine LLP; Dell EMC; Deloitte; Digital Grassroots; Digital Value Institute; Disney; DotConnectAfrica; The Economist; Electronic Frontier Foundation; Electronic Privacy Information Center; Enterprise Roundtable Accelerator; Emerson College; Fight for the Future; European Broadcasting Union; Foresight Alliance; Future Today Institute; Futuremade; Futurous; FuturePath; Futureproof Strategies; General Electric; Georgetown University; Georgia Tech; Global Business Network; Global Internet Policy Digital Watch; Global Voices; Google; Hague Centre for Strategic Studies; Harvard University; Hochschule Fresenius University of Applied Sciences; Hokkaido University; IBM; Indiana University; Internet Corporation for Assigned Names and Numbers (ICANN); IDG; Ignite Social Media; Information Technology and Innovation Foundation; Institute for the Future; Instituto Superior Técnico, Portugal; Institute for Ethics and Emerging Technologies; Institute for Prediction Technology; International Centre for Free and Open Source Software; International Telecommunication Union; Internet Engineering Task Force (IETF); Internet Society; Internet Systems Consortium; Johns Hopkins University; Institute of Electrical and Electronics Engineers (IEEE); Ithaka; Juniper Networks; Kyndi; Le Havre University; Leading Futurists; Lifeboat Foundation; MacArthur Research Network on Open Governance; Macquarie University, Sydney, Australia; Massachusetts Institute of Technology; Menlo College; Mercator XXI; Michigan State University; Microsoft Research; Millennium Project; Mimecast; Missions Publiques; Moses & Singer LLC; Nanyang Technological University, Singapore; Nautilus Magazine; New York University; Namibia University of Science and Technology; National Distance University of Spain; National Research Council of Canada; Nonprofit Technology Network; Northeastern University; North Carolina State University; Olin College of Engineering; Pinterest; Policy Horizons Canada; Predictable Network Solutions; R Street Institute; RAND; Ranking Digital Rights; Rice University; Rose-Hulman Institute of Technology; RTI International; San Jose State University; Santa Clara University; Sharism Lab; Singularity University; Singapore Management University; Södertörn University, Sweden; Social Science Research Council; Sorbonne University; South China University of Technology; Spacetel Consultancy LLC; Stanford University; Stevens Institute of Technology; Syracuse University; Tallinn University of Technology; TechCast Global; Tech Policy Tank; Telecommunities Canada; Tufts University; The Representation Project; Twelve Gates Foundation; United Nations; University of California, Berkeley; University of California, Los Angeles; University of California, San Diego; University College London; University of Hawaii, Manoa; University of Texas, Austin; the Universities of Alabama, Arizona, Dallas, Delaware, Florida, Maryland, Massachusetts, Miami, Michigan, Minnesota, Oklahoma, Pennsylvania, Rochester, San Francisco and Southern California; the Universities of Amsterdam, British Columbia, Cambridge, Cyprus, Edinburgh, Groningen, Liverpool, Naples, Oslo, Otago, Queensland, Toronto, West Indies; UNESCO; U.S. Geological Survey; U.S. National Science Foundation; U.S. Naval Postgraduate School; Venture Philanthropy Partners; Verizon; Virginia Tech; Vision2Lead; Volta Networks; World Wide Web Foundation; Wellville; Whitehouse Writers Group; Wikimedia Foundation; Witness; Work Futures; World Economic Forum; XponentialEQ; and Yale University Center for Bioethics.

Complete sets of credited and anonymous responses can be found here:

Credited Responses: The Future of Ethical AI Design

Anonymous Responses: The Future of Ethical AI Design


Mysterious flashes of radio light come in two ‘flavors,’ new survey finds – Livescience.com

Every two minutes, a mysterious flash of radio light explodes somewhere in the sky and fades back into darkness within a matter of milliseconds. Astronomers first noticed the bursts in data archived from 2007 and have spent the decade or so since carefully stockpiling examples of the fast radio bursts, or FRBs, looking for patterns that might reveal their origins. Now, they have a whopping 500 new bursts to study.

On June 9, an international research collaboration released the first FRB catalog from the Canadian Hydrogen Intensity Mapping Experiment (CHIME) in British Columbia, more than tripling the number of known FRBs in a single day. The new dataset lends strong support to the notion that two distinct types of FRBs dot the radio sky, and it foreshadows a future where astronomers leverage FRBs to illuminate the most distant reaches of the universe.

"This represents a new phase in FRB science," Kiyoshi Masui, a Massachusetts Institute of Technology astrophysicist and representative of the CHIME collaboration, said at a news briefing.

Related: The 12 strangest objects in the universe

CHIME was not initially designed to become the world's leading FRB hunter. Astronomers originally planned the machine to use the jitters of dim hydrogen atoms to chart the cosmos's matter out to unprecedented distances. But after the Canadian government funded the $9 million machine, researchers realized it was perfectly suited to solving the emerging mystery of FRBs.

The sky flashes with FRBs all the time: about 880 times a day, according to the CHIME collaboration's new results. But unless astronomers happen to have a large radio dish trained on exactly the right random point in the sky at exactly the right moment, a burst will go unseen.
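Those odds can be sketched with a back-of-envelope calculation. Only the ~880-bursts-per-day rate comes from the article; the field-of-view and sensitivity numbers below are illustrative assumptions, not any telescope's actual specifications:

```python
# Rough estimate of how many of the ~880 daily FRBs a survey telescope
# can expect to catch, as a function of its instantaneous field of view.

FULL_SKY_SQ_DEG = 41_253   # total solid angle of the sky, in square degrees
DAILY_FRB_RATE = 880       # bursts per day over the whole sky (CHIME estimate)

def expected_detections(fov_sq_deg: float, sensitivity_fraction: float) -> float:
    """Bursts per day landing in the field of view and bright enough to see."""
    sky_fraction = fov_sq_deg / FULL_SKY_SQ_DEG
    return DAILY_FRB_RATE * sky_fraction * sensitivity_fraction

# A ~1 sq deg dish catching every burst in view: roughly one burst a month.
print(round(expected_detections(1, 1.0), 3))     # 0.021 per day
# A ~200 sq deg wide-field survey seeing the brightest third: a few per day.
print(round(expected_detections(200, 0.33), 2))  # 1.41 per day
```

This is why a wide-field instrument like CHIME, rather than a single pointed dish, tripled the catalog in one release.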

CHIME, however, has a cosmic perspective. The telescope's broad receivers (more half-pipes than dishes) pick up radio waves from much of the sky overhead at once, and Earth's rotation points it in different directions. A $4.5 million supercomputing cluster dedicated to FRB hunting, added part-way through the design process, digitally focuses the telescope on thousands of points at once.

Previously, researchers tended to analyze FRBs on a case-by-case basis. The catalog now opens the door to studying bunches of FRBs at once, "transforming this whole field into big data science," Mohit Bhardwaj, a CHIME collaboration member from McGill University in Montreal, said at the news briefing.

Most astrophysicists think FRBs emanate from magnetars, which are one of the weirdest things a star can become when it dies. Magnetars are highly magnetized versions of the stellar corpses known as neutron stars, making them some of the densest and most magnetic objects in the universe. Only a body packing so much mass and magnetic intensity into such a small package could be powerful and nimble enough to beam out the brief bursts, theorists have reasoned. Then in 2020, CHIME caught a magnetar mid-burst in our own galaxy. Still, exactly how magnetars are churning out radio waves is anyone's guess.

Related: The 15 weirdest galaxies in our universe

"There are a plethora of theories, but nothing that tells us which ones could be right and which could be wrong," Masui said.

The CHIME catalog all but confirms a long-held suspicion: Not all FRBs are alike. Astronomers have identified a small minority of FRBs that occur repeatedly from the same spot in the sky, dubbed "repeaters." Of the 535 newly revealed bursts, 61 flashes came from 18 repeat offenders.
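The bookkeeping behind that tally is simple: group detections by their source and flag any source seen more than once. A toy sketch with made-up source IDs (the real catalog matches bursts by sky position and dispersion measure):

```python
# Identify repeating FRB sources by counting detections per source.
from collections import Counter

# Hypothetical detection log: one entry per burst, labeled by source.
burst_sources = ["FRB_A", "FRB_B", "FRB_A", "FRB_C", "FRB_B", "FRB_A", "FRB_D"]

counts = Counter(burst_sources)
repeaters = {src: n for src, n in counts.items() if n > 1}
one_offs = [src for src, n in counts.items() if n == 1]

print(repeaters)  # {'FRB_A': 3, 'FRB_B': 2}
print(one_offs)   # ['FRB_C', 'FRB_D']

# The analogous tally in the CHIME catalog: 61 of 535 bursts (about 11%)
# came from 18 repeating sources.
print(round(61 / 535, 3))  # 0.114
```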

The astronomers also found that repeaters look intrinsically different from one-off bursts. One-time FRBs are brief and tend to shine with a rainbow of radio waves, while repeat bursts linger and tend to show up as a single radio hue. The distinction hints that magnetars could have at least two different ways of spitting out radio waves.

Regardless of what's causing FRBs or how, researchers are already thinking about how to put the flashes in the dark to work. The hundreds of bursts seem to be coming from all directions, as opposed to, say, aligning with the Milky Way. That's a sign that the cosmic lighthouses emitting them are scattered across the cosmos, with many coming from hundreds of millions to billions of light-years away.

CHIME also picks up a quality of FRBs called dispersion, a measure of how the radio frequencies of a burst have spread out as its photons travel between galaxies. This separation grows as FRB photons plow through the thin plasma that fills space (like white light separates into a rainbow as it passes through a prism). In this dispersion, each FRB records how much matter it encountered on its journey, much as a car's tires carry a history of the roads they have traveled.
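The delay that dispersion imposes follows the standard cold-plasma relation: the extra arrival time scales with the dispersion measure (DM) and with the inverse square of the observing frequency. A brief sketch; the DM value below is illustrative, not a figure from the catalog:

```python
# Frequency-dependent dispersion delay of an FRB.
# Standard relation: delay_ms ≈ 4.149 * DM / nu**2,
# with DM in pc cm^-3 and nu in GHz.

K_DM_MS = 4.149  # dispersion constant, ms GHz^2 per (pc cm^-3)

def dispersion_delay_ms(dm_pc_cm3: float, freq_ghz: float) -> float:
    """Arrival-time delay (ms) relative to an infinitely high frequency."""
    return K_DM_MS * dm_pc_cm3 / freq_ghz**2

dm = 500.0  # an illustrative extragalactic dispersion measure

# Across CHIME's 400-800 MHz band, the same burst sweeps down in frequency
# over nearly ten seconds:
low = dispersion_delay_ms(dm, 0.4)   # delay at 400 MHz
high = dispersion_delay_ms(dm, 0.8)  # delay at 800 MHz
print(round(low - high, 1), "ms of sweep across the band")  # 9724.2 ms
```

Because the delay grows with the electron column along the line of sight, measuring it for each burst is exactly how the "tire history" in the paragraph above gets read off.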

As CHIME's FRB catalog grows, astronomers hope they will be able to use it to create a map of the cosmos's matter on the largest scales.

"We think that [FRBs] are going to be the ultimate tool for studying the universe," Masui said.

Originally published on Live Science.


EY Announces Merav Yuravlivker of Data Society as an Entrepreneur Of The Year 2021 Mid-Atlantic Award Finalist – Business Wire

WASHINGTON--(BUSINESS WIRE)--Data Society, the leading provider of custom Data Science Training and cutting edge AI Solutions to Government Agencies and Fortune 500 companies, announced today that CEO and co-founder Merav Yuravlivker had been named an Ernst & Young LLP (EY US) Entrepreneur Of The Year 2021 Mid-Atlantic Award finalist. The program honors unstoppable business leaders whose ambition, ingenuity, and courage in the face of adversity help catapult from the now to the next and beyond.

"Being recognized as an EY Entrepreneur of the Year finalist is a true testament to the talent, commitment, and passion of the Data Society team," said Merav Yuravlivker, CEO and co-founder, Data Society. "Our goal is to transform how organizations and Government Agencies operate, solve critical challenges, and value their workforce through custom training academies."

A panel of independent judges selected Merav Yuravlivker as a finalist. Award winners will be announced during a special virtual celebration on August 3rd and will become lifetime members of an esteemed community of Entrepreneur Of The Year alumni from around the world.

A leader in Data Science Training for the past seven years, Data Society has worked with organizations to build their data capacity and open up new opportunities for expansion by developing customized Data Academies. These programs have led to the up-skilling of thousands of professionals, saving their employers hundreds of millions of dollars. As part of its response to COVID-19, Data Society provided critical support to Government initiatives and doubled its team during this time.

Regional award winners are eligible for consideration for the Entrepreneur Of The Year National Awards, announced in November at the Strategic Growth Forum, one of the nation's most prestigious gatherings of high-growth, market-leading companies. The Entrepreneur Of The Year National Overall Award winner will then compete for the EY World Entrepreneur Of The Year Award in June 2022.

Sponsors

Founded and produced by Ernst & Young LLP, the Entrepreneur Of The Year Awards are nationally sponsored by SAP America and the Kauffman Foundation.

About Data Society

Data Society specializes in providing customized signature training programs and AI/ML Solutions to executives, analysts, scientists and engineers on a variety of programming, data literacy, data analysis, data visualization, data science, machine learning and artificial intelligence topics. Since its inception in 2014, Data Society has demonstrated success in training government teams across federal, state and local agencies, as well as Fortune 500 clients. datasociety.com

In Mid-Atlantic, sponsors also include

PNC, DLA Piper LLP, the Washington Business Journal, the Baltimore Business Journal and Cooley LLP.

About Entrepreneur Of The Year

Entrepreneur Of The Year is the world's most prestigious business awards program for unstoppable entrepreneurs. These visionary leaders deliver innovation, growth and prosperity that transform our world. The program engages entrepreneurs with insights and experiences that foster growth. It connects them with their peers to strengthen entrepreneurship around the world. Entrepreneur Of The Year is the first and only truly global awards program of its kind. It celebrates entrepreneurs through regional and national awards programs in more than 145 cities in over 60 countries. National Overall winners go on to compete for the EY World Entrepreneur Of The Year title. ey.com/us/eoy

About EY

EY exists to build a better working world, helping create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today. EY refers to the global organization, and may refer to one or more of the member firms, of Ernst & Young Global Limited, each of which is a separate legal entity. Ernst & Young Global Limited, a UK company limited by guarantee, does not provide services to clients. Information about how EY collects and uses personal data and a description of the rights individuals have under data protection legislation are available via ey.com/privacy. EY member firms do not practice law where prohibited by local laws. For more information about our organization, please visit ey.com.

View original post here:

EY Announces Merav Yuravlivker of Data Society as an Entrepreneur Of The Year 2021 Mid-Atlantic Award Finalist - Business Wire

Heap Unveils Heap Illuminate, A Suite of Tools That Proactively Surface Unknown Unknowns in User Behavior – Business Wire

SAN FRANCISCO--(BUSINESS WIRE)--Heap, the leading digital insights platform, today announced the release of Heap Illuminate, a suite of capabilities that automatically surface high-impact insights about user behavior on customers' websites and digital products. By leveraging a data science layer built to natively integrate with Heap's complete, automatically captured behavioral dataset, Heap Illuminate slashes the time it takes to locate high-value information, and can uncover large portions of the user journey that remain invisible to digital experience teams using other analytics platforms.

"Anonymized data from thousands of digital experiences shows that 38% of funnels leave out a key user action," says Dan Robinson, CTO at Heap. "For teams trying to build great digital experiences, this is critical information about user behavior. Because we automatically collect all behavioral data from customers' sites, we can leverage data science to quickly identify the events and correlations teams are missing, eliminate blind spots in their data, and automatically highlight the insights that will most impact the business."

In releasing these features, Heap is advancing its mission of empowering teams to make better, more data-driven decisions. A recent Heap-sponsored survey found that only 24% of product teams say they have full insight into the user journey on their site, and only 16% of teams say they know why most customers drop off their site.

"Without tools that can actively surface events or behaviors teams are not tracking, it's just too easy to miss out on major opportunities," says Rachel Obstler, VP Product at Heap. "Heap Illuminate was built to empower teams to move quickly by rapidly anticipating trends in the market and identifying key areas for prioritization. This keeps teams aligned and focused, and eliminates the worry that they'll work for weeks or months only to realize they've missed something essential."

"Thanks to the insights surfaced by Heap about our nomination flow, we saw double-digit increases in nomination starts and submissions," says Jack Canning, Senior Director of Digital Analytics & Optimization at Workhuman, a Heap customer. "Heap helped us look at our award nomination funnel much more granularly by surfacing the time and effort that users spent on each step of the flow, which helped us prioritize where to focus. Now our first question doesn't have to be 'can we track that?' and is instead 'what does this data mean?'"

This new announcement comes on the heels of six straight quarters of record-breaking growth at Heap, as well as a number of recent announcements, including the appointment of Ken Fine as CEO, and of David Fullerton and Rachel Obstler as VP Engineering and VP Product, respectively. The past year also saw Heap expand its customer base to more than 7,600 and increase employee count from 150 to over 250 as the company prepares for its next stage of growth.

About Heap

Heap's mission is to power business decisions with truth. We empower fast-moving digital teams to focus on what matters: building the best digital experiences, not wrestling with their analytics platform. Heap automatically collects and organizes all customer behavioral data, allowing product managers to improve their products with maximum agility. Over 7,600 businesses use Heap to drive business impact by delivering better experiences and better products. Visit heap.io to learn more.

Continue reading here:

Heap Unveils Heap Illuminate, A Suite of Tools That Proactively Surface Unknown Unknowns in User Behavior - Business Wire

Before the next pandemic: Lessons learned, and those still to be absorbed – Stories – Microsoft

Medical student James E.K. Hildreth was on his first clinical rotation when he saw the patient, a Black woman in her early 20s who had just given birth. It was the early 1980s and AIDS was spreading, with no treatment for the virus. Neither the mother nor her baby survived.

"There was nothing we could do except treat their symptoms and watch them die," Dr. Hildreth says quietly. The experience so affected him that he changed course, abandoning his training as a transplant surgeon to become an HIV investigator. He became one of the world's top HIV/AIDS researchers, with much of his work focusing on blocking HIV infection by learning how it gets into cells.

Now, as president and CEO of Meharry Medical College in Nashville, one of the nation's oldest and largest historically Black academic health science centers, he sees similarities between the AIDS era and the initial COVID-19 pandemic response: a reluctance by some government officials to acknowledge the gravity of the virus and its impact on people of color. These mistakes, he says, must not be repeated.

"There's a lot that we learned," Dr. Hildreth says. "But one thing we learned for sure is that we need to do a better job of diversifying our health care work force. We need to spend more money on public health and preventive medicine to prevent this from happening again. And it also illustrates the importance of improving the basic health status of all of us, so that the next time this happens, we won't be having the same conversation again. This is not the last pandemic for sure."

Meharry is among the academic institutions and nonprofit organizations that are grantees of Microsoft AI for Health, which uses artificial intelligence (AI) and Azure high-performance computing to help improve the health of people and communities worldwide. AI for Health was launched a few months before COVID-19, and once the pandemic struck, more than 180 AI for Health grants went to those on the front lines of COVID-19 research, data and insights.

John Kahan, Microsoft vice president, Chief Data Analytics Officer and global lead for the AI for Health program, says when the pandemic started, so little information was known, and there was a massive race for data and insights.

"I think that our learning is in better shape now," he says. "The science is in better shape. But it is still unclear that the governments of the world have gotten together on a common set of standards around exactly what data must be gathered immediately after a pandemic has been declared."

Other AI for Health grantees, including Brown University School of Public Health, the Institute for Health Metrics and Morehouse School of Medicine, agree about the work that remains to be done.

Dr. Ashish Jha, dean of the Brown University School of Public Health, is a pandemic expert who now is also a familiar face to many television viewers in the U.S. for his perspectives. Using Azure and Power BI, Brown and Microsoft AI for Health developed a comprehensive COVID-19 dashboard that includes whether states in the U.S. are meeting COVID-19 testing target numbers, risk levels for each county in the country, and vaccine distribution and administration data. The dashboard also includes worldwide figures, and has become a helpful tool for the public, as well as for policymakers and leaders, to gauge the progress of the vaccine rollout.

What's important about it and other COVID-19-related dashboards that have since been created is that they represent the first time such important tools have become widely available.

"Fundamentally the public health system's data infrastructure (during COVID-19) sort of worked a little, but not nearly enough," Dr. Jha says.

"Every public health department in the U.S. has its own data infrastructure for collecting information on infections, tests, hospitalizations and deaths, and they're really, really old, clunky systems," Dr. Jha says. "What that means is that there are places across the country where, when somebody has a positive COVID-19 test, they might send that information to their local health department by literally printing out the test result and faxing it to the local health department, who will then hand-enter it into a computer system. This is how data is still largely being collected."

It hindered the COVID-19 response in the U.S., Dr. Jha says. "At a national level, until very recently, we had no government-driven data on infections and cases and deaths. In fact, national data was being aggregated by a group of journalists who were pulling together data and cleaning it up across every state and putting it together. Even the previous White House (administration) was largely using this data as opposed to using federal data."

Dr. Jha says a collection of anonymized, non-health-related data, such as restaurant reservations made through an app, is also incredibly helpful in providing information about people's behaviors during the pandemic, such as their willingness to go out for dinner. "It's really a way of measuring people's sense of safety in their community," he says. "For example, we saw reservation numbers fall as infection numbers began to rise, well before any policy was made on shutting down restaurants."

Read more from the original source:

Before the next pandemic: Lessons learned, and those still to be absorbed - Stories - Microsoft

Boston Limited and Iguazio Partner to Operationalize AI for the Enterprise – Business Wire

TEL AVIV, Israel & LONDON--(BUSINESS WIRE)--Iguazio, the data science and MLOps platform built for production, today announced a strategic partnership with Boston Limited, an NVIDIA Elite Partner and leading provider of high-performance, mission-critical server and storage solutions. The partnership enables both companies to extend their offerings to enterprises across industries looking to bring data science into real-life applications and accelerate their path to production.

Data science is becoming a critical element of business strategy in enterprises across industries. Companies need better ways to implement their AI solutions in real-world environments, to help them cut costs, work more efficiently, and accelerate the rollout of new AI services and products for customers.

Yet as enterprises navigate the journey from data science to live AI applications, they often find the move to production challenging and complex. They need new tools and technologies to help manage this, especially as they scale.

This radical shift to transforming business models with AI typically requires highly customizable infrastructure, as well as a streamlined data science workflow to navigate the transformation effectively and efficiently. With the new partnership, Boston Limited will offer high-performance data center hardware and technical services, while Iguazio provides its data science platform which saves time and cost on getting AI to production.

"Our primary focus is to provide our customers with the ability to customize their solutions based on their toughest business requirements. The partnership with Iguazio allows us to bring greater enterprise AI capabilities to our existing storage and server solutions; combined with our strong technical skills, this lets us create systems that deliver the breakthrough performance required in the industry today," said Manoj Nayee, Managing Director, Boston Limited.

"As enterprises across industries are weaving AI into their products and services, they're discovering just how complex it is to deploy AI efficiently and see business impact. According to industry research, 85% of AI projects never make it to production," said Asaf Somekh, CEO and co-founder of Iguazio. "AI solutions at scale require enterprises to embrace MLOps best practices and invest in the right AI infrastructure solutions. Through our partnership with Boston, we're excited to be able to empower more enterprises to take advantage of the promise of AI."

Both companies have an extensive network of high-profile partners, including NVIDIA, NetApp, Microsoft, AWS, Supermicro, AMD, IBM and Intel.

Boston Limited is the only NVIDIA (NASDAQ: NVDA) Elite Partner in Northern Europe to hold Deep Learning, GPU Virtualization, HPC and Professional Virtualization competencies, and it offers services globally. Iguazio is one of the first partners in the NVIDIA DGX-Ready Software partner program, allowing enterprises to industrialize the AI development workflow and realize the benefits of MLOps on NVIDIA DGX systems.

About Boston Limited

Boston Limited has been providing cutting edge technology since 1992 using Supermicro building blocks. Our high performance, mission-critical server and storage solutions can be tailored for each specific client, helping you to create your ideal solution. From the initial specification, solution design and even full custom branding we can help you solve your toughest business challenges simply and effectively.

About Iguazio

The Iguazio Data Science Platform enables enterprises to develop, deploy and manage AI applications at scale. With Iguazio, enterprises can run AI models in real time, deploy them anywhere (multi-cloud, on-prem or edge), and bring to life their most ambitious AI-driven strategies. Enterprises spanning a wide range of verticals, including financial services, manufacturing, smart mobility and telecoms use Iguazio to automate MLOps and create business impact through a multitude of real-time use cases such as fraud prevention, self-healing networks and location-based recommendations. Iguazio brings data science to life. Find out more on http://www.iguazio.com

More:

Boston Limited and Iguazio Partner to Operationalize AI for the Enterprise - Business Wire

Improve Your Insight by Mixing Qualitative Research With Data Science – Built In

As a data science practitioner, think about all the data you have at your disposal. Then think about the latest data science methods you know about. Now, try answering these two questions: Why do your users adopt or reject your products? And how do they use these products?

Both may sound simple, but they're trick questions. You can much more easily figure out which features of the products are being adopted, when and where they're adopted, and perhaps even by whom.

Answering why and how questions with analytics, however, is much more complicated because you need to better understand your users, the contexts in which they operate, their considerations, and their motivations. Though answers to these questions are critical, data science teams cannot always rely only on assumptions, models, and numbers to understand the choices users make and the decisions that lead them to use a product in the ways they do.

The purpose of this little thought exercise is to suggest that, while analytics has many advantages, it also has its limits. Recognizing these limits will help you broaden your insight and become more innovative.

But to achieve this, you'll need to engage in exploration where parameters and variables are less known, assumptions are mostly absent, and curiosity abounds. You'll need to think in a way that diverges from how you were trained, and you'll need to use data and research methods fundamentally different from those you usually rely on.

In short, think about incorporating qualitative research into your analytics process.

Typical analyses involve getting data from users' devices, their logged activities, or through user experiments such as A/B testing. But to answer why and how, you'll need to learn the perspectives, meanings, and considerations of users from them directly.

Instead of top-down analytics, gather data by getting out into the field: the contexts in which your users operate. Rather than rely on known hypotheses and existing variables to carry out deductive work (by far the more common form of analysis), immerse yourself in an inductive process of qualitative research.

The data collected in qualitative research is different from what you're used to in data science. This is how it works:

Note that, unlike traditional data used in data science, qualitative data are multilayered and complex. Field observations are tied to notes, the notes are then connected to interviews, both are then connected to the documents.

It's not a linear process: by going back and forth between these interconnected layers of data, patterns emerge, research questions get refined, new behaviors and characteristics are identified, and insights are gained. And because the data is intentionally collected mostly in an unstructured fashion (meaning not as answers to specific, close-ended questions), the input reflects different perspectives.

All of this can lead to refined questions and hypotheses to further pursue using data science tools.


In business contexts, qualitative research is mainly reserved for studies of user experience (UX), product and UX design, and innovation. This intensive research work is largely disconnected from the work done by data science teams.

However, if your data context involves people, you should consider bridging this disconnect.

Take, for instance, an engineering team at Indeed that realized they needed to create a new measure for lead quality but didn't have enough background about leads and how to assess their properties. So they spent some time observing, interviewing, and analyzing documents from their account executives. By analyzing these data, they identified features that they'd not considered before and developed the measure they were after.

Being able to collect data on new features and designing machine learning models added significant value. But they realized that as the market, user needs, and their platform continued to evolve, it was important to return to collecting qualitative data from time to time to further inform their analytics pipeline. This ongoing integration of qualitative data and big data resulted in millions of dollars in added revenue.

Or consider the results of having qualitative research and data science teams work together at Spotify. Despite having a wealth of user data at the online streaming service, the company still needed to make sense of users behavior when receiving ads. The data science team followed the standard approach and performed an A/B test (with the intervention being skippable ads and the control being the standard ad experience). The results led the data science team to identify distinct behavior profiles.
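A two-arm comparison like the Spotify test described above can be sketched with a simple two-proportion z-test. The metric and all counts below are hypothetical (the company's actual data and analysis are not public); the point is only to show how a behavioral difference between the skippable-ad and standard-ad arms would be checked for statistical significance:

```python
import math

def two_proportion_ztest(success_a, n_a, success_b, n_b):
    """Two-sided z-test for a difference in proportions between two arms."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical counts: users who completed a listening session after the ad
z, p = two_proportion_ztest(success_a=480, n_a=1000,   # skippable-ad arm
                            success_b=440, n_b=1000)   # standard-ad arm
print(f"z = {z:.2f}, p = {p:.4f}")
```

A test like this says whether the arms differ, but, as the article notes, not why; that is exactly the gap the qualitative researchers filled.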

Interestingly, the company also asked qualitative researchers to directly study users. Their findings were fundamentally different. For instance, they found that some of these profiles had nothing to do with inherent choices but were actually more an outcome of confusion about features and presented information.

Learning from this experience, the company started embracing a mixed methods approach (where qualitative data is integrated with more structured big data) to leverage the benefits of both approaches. They established a common research query, devised a process where researchers continuously communicated, and then triangulated their insight with both qualitative and quantitative data.

The result was more comprehensive research data, where business and design decisions, such as notifying users explicitly about ad skip limits, were based upon insight gained from users and data about users.

There are several ways for a data science team to engage with qualitative data:

You should also familiarize yourself with the three data collections methods (observations, interviews, and document analysis) that are at the core of qualitative research:

Finally, connect with researchers who have experience with immersive research, inside and outside your company. They can help you as you are thinking about ways to collect and analyze qualitative data. And perhaps you can offer to help them with parts of the tedious analysis of the more quantitative bit of their data too.

Go here to see the original:

Improve Your Insight by Mixing Qualitative Research With Data Science - Built In

ExcelR has completed the training for 5000+ Students in Pune on Data Science Since their Inception. | YorkPedia – York Pedia

(YorkPedia Editorial): Pune, Maharashtra, Jun 4, 2021 (Issuewire.com) ExcelR Data Science course in Pune:

ExcelR, in partnership with Steinbeis University, Germany, provides you with the best data science course or data scientist course to help you reach greater heights in your data science career. The course follows a practical approach where everything is taught with real-life examples and case studies, and at a later stage, you apply your learnings on live projects. Our course material is designed by industry experts with more than 17 years of industry experience who are currently the best in the business. On top of it, you get alumni status from Steinbeis University, Germany.


This course makes you ready to get recruited by top multinational companies around the world. After you are done with the course, you can appear for the exams. You have to score approximately 60% in the exam to obtain a certificate. We have a proper system to help our participants with assignments, projects, resume preparation, and interviews as well. You also get lifetime access to self-paced learning. Students learn everything from Hadoop, Spark, MySQL, Python, R, and Tableau to artificial intelligence technologies. ExcelR trains you to be the best in the market and then places you in a reputable company. 5000+ students from Pune have been placed already. You also get access to attend unlimited batches for 365 days with our JUMBO PASS.

Why get ExcelR training for data science?

ExcelR is one of the best training grounds when it comes to data science courses. Our training is curated for everyone regardless of the knowledge they possess. We help you get placed in top MNCs with our 100% placement assistance. Our placement cell has over 150 corporate partners ready to hire you after the successful completion of the course and exam.

All the data science concepts are practically and theoretically explained with the proper use of case studies. Then we try to give you hands-on experience with the help of different projects where you learn the real-life implementation of theoretical concepts.

Our faculty consists of alumni of IITs, IIMs, and ISBs and possess industry experience of over 15 years. Some of them are Ph.D.s as well.

All thanks to expert trainers, ExcelR has gained a top position among the data science training institutes. Students enrolled with ExcelR can choose either classroom sessions or self-learning. However, the lectures are always recorded and made available to the students by the instructors. Students have access to re-learn from these lectures as many times as they want for one year. They don't need to pay any charges for this.

The benefits of receiving a data science certification:

Digitalization and the Internet have had a massive impact on the amount of data we consume and produce. Both the consumption and the production of data have increased many times over. And this data needs to be explored and properly evaluated to yield valuable insights. This quest to explore data gave rise to data science and has generated various opportunities in the market.

The demand for data scientists is ever-increasing as we will continue to consume and produce data. Plus, the salaries are also very lucrative in the field of data science. Undoubtedly, a career in data science will be rewarding in the long term.

Even right now, there are almost 1.4 lakh jobs available for data scientists. Businesses have realized the significance of analyzing data in growing their business. In India, even the entry-level professionals in data science are offered salaries up to 5 LPA.

So, what are you waiting for? Contact us right away and start your data science career with a BANG!

Go here to see the original:

ExcelR has completed the training for 5000+ Students in Pune on Data Science Since their Inception. | YorkPedia - York Pedia

How data science gives new insight into air pollution in the US – MIT News

"To do really important research in environmental policy," said Francesca Dominici, "the first thing we need is data."

Dominici, a professor of biostatistics at the Harvard T.H. Chan School of Public Health and co-director of the Harvard Data Science Initiative, recently presented the Henry W. Kendall Memorial Lecture at MIT. She described how, by leveraging massive amounts of data, Dominici and a consortium of her colleagues across the nation are revealing, on a grand scale, the effects air pollution levels have on human health in the United States. Their efforts are critical for providing a data-driven foundation on which to build environmental regulations and human health policy. "When we use data and evidence to inform policy, we can get very excellent results," Dominici said.

Overall, air pollution has dropped dramatically nationwide in the past 20 years, thanks to regulations dating back to the Clean Air Act of 1970. "On average, we are all breathing cleaner air," said Dominici. But the research efforts of Dominici and her colleagues show that even relatively low air pollution levels, like those currently present in much of the country, can fall well within national regulations and still be harmful to health. Moreover, recent patterns of decreasing air pollution have left certain geographic areas worse off than others, and exacerbated environmental injustice in the process. "We are not cleaning the air equally for all of the racial groups," Dominici said.

Speaking over Zoom to audience members tuning in from around the world, Dominici shared these findings and discussed the underlying methodologies at the 18th Henry W. Kendall Memorial Lecture on April 21. This annual lecture series, which is co-sponsored by the MIT Center for Global Change Science (CGCS) and MIT's Department of Earth, Atmospheric and Planetary Sciences (EAPS), honors the memory of the late MIT professor of physics Henry W. Kendall. Kendall was instrumental in bringing awareness of global environmental threats to the world stage through the World Scientists' Warning to Humanity in 1992 and the Call for Action at the Kyoto Climate Summit in 1997. The Kendall Lecture "spotlights leading global change science by outstanding researchers," according to Ron Prinn, TEPCO Professor of Atmospheric Science in EAPS and director of CGCS.

How Much Evidence Do You Need? 18th Henry W. Kendall Lecture

In the various studies Dominici discussed, she and her colleagues homed in on a specific kind of harmful air pollution called fine particulate matter, or PM2.5. These tiny particles, less than 2.5 microns in diameter, come from a variety of sources, including vehicle emissions and industrial facilities that burn fossil fuel. "Particulate matter can penetrate very deep into the lungs [and] it can get into our blood," said Dominici, noting that this can lead to systemic inflammation, cardiovascular disease, and a compromised immune system.

To analyze how much of a risk PM2.5 poses to human health, Dominici and her colleagues turned to the data, specifically to large datasets about people and the environment they experience. One dataset provided fine-grained information on the more than 60 million Americans enrolled in Medicare, including not only their health history, but also factors like socioeconomic status and Zip code. Meanwhile, a team led by Joel Schwartz, a professor of environmental epidemiology at the Harvard T.H. Chan School of Public Health, amassed satellite data on air pollution, weather, land use, and other variables, combined it with air quality data from the EPA's national network, and created a model that provides daily levels of PM2.5 for every square kilometer in the continental United States over the last 20 years. "In this way we could assign, to every single person enrolled in the Medicare system, their daily exposure to PM2.5," said Dominici.
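At its core, the exposure-assignment step described above is a join between individual records and modeled pollution values on a spatial grid. A toy sketch with made-up identifiers and values (the real pipeline operates on 20 years of daily, kilometer-scale model output linked to Medicare records):

```python
# Hypothetical modeled PM2.5 (micrograms per cubic meter) keyed by
# (date, grid cell); in the actual study each cell is ~1 square km.
pm25_by_cell = {
    ("2016-06-01", "cell_17"): 9.8,
    ("2016-06-01", "cell_42"): 12.3,
}

# Hypothetical individuals, each located in a grid cell (via Zip code)
people = [{"id": 1, "cell": "cell_17"},
          {"id": 2, "cell": "cell_42"}]

# Assign each person the modeled exposure for their cell on a given day
date = "2016-06-01"
exposures = {p["id"]: pm25_by_cell[(date, p["cell"])] for p in people}
print(exposures)
```

Repeating this lookup over every person and every day yields the per-person daily exposure history that the downstream health analyses rely on.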

Combining and analyzing these datasets provided a holistic look at how PM2.5 affects the population enrolled in Medicare, and yielded several important findings. Based on the current national ambient air quality standards (NAAQS) for PM2.5, levels below 12 micrograms per cubic meter are considered safe. However, Dominici's team pointed out that even levels below that standard are associated with a higher risk of death. They further showed that making air quality regulations more stringent by lowering the standard to 10 micrograms per cubic meter would save an estimated 140,000 lives over the course of a decade.
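The arithmetic behind a lives-saved estimate like the one above can be sketched with the standard population attributable fraction, PAF = (RR - 1) / RR, where RR is the relative risk associated with the exposure. The inputs below are entirely made up and the study's actual model is far richer, but the sketch shows the shape of the calculation:

```python
# Back-of-envelope attributable-mortality sketch (hypothetical inputs,
# not the study's actual numbers or methodology).
relative_risk = 1.07          # assumed mortality risk ratio for the exposure increment
deaths_per_year = 2_000_000   # assumed annual deaths in the exposed population

# Population attributable fraction: the share of deaths attributable
# to the exposure, i.e. avoidable if the exposure were removed.
paf = (relative_risk - 1) / relative_risk
avoidable_per_decade = paf * deaths_per_year * 10
print(f"PAF = {paf:.3f}, avoidable deaths over a decade ~ {avoidable_per_decade:,.0f}")
```

Even small relative risks translate into large absolute numbers when the exposed population is tens of millions of people, which is why modest tightening of a standard can yield six-figure estimates of lives saved.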

The scope of the datasets enabled Dominici and her colleagues to use not only traditional statistical approaches, but also a method called matching. They compared pairs of individuals who had the same occupations, health conditions, and racial and socioeconomic profiles, but who differed in PM2.5 exposure. In this way, the researchers could eliminate potential confounding factors and lend further support to their findings.
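The matching idea described above can be illustrated with a minimal exact-matching sketch: group individuals by identical covariate profiles, then compare outcomes between exposed and unexposed individuals only within groups that contain both. The records, covariates, and outcomes below are all invented for illustration:

```python
from collections import defaultdict

# Hypothetical records: (occupation, health condition, demographic group,
# high PM2.5 exposure?, died within follow-up?)
people = [
    ("retail", "diabetes", "A", True,  1),
    ("retail", "diabetes", "A", False, 0),
    ("office", "none",     "B", True,  0),
    ("office", "none",     "B", False, 0),
    ("retail", "diabetes", "A", True,  1),
    ("retail", "diabetes", "A", False, 1),
]

# Stratify by exact covariate profile
strata = defaultdict(lambda: {"exposed": [], "unexposed": []})
for occ, health, group, exposed, died in people:
    strata[(occ, health, group)]["exposed" if exposed else "unexposed"].append(died)

# Average the exposed-minus-unexposed outcome difference across strata
# that contain both arms, so each comparison holds the covariates fixed.
diffs = []
for arms in strata.values():
    if arms["exposed"] and arms["unexposed"]:
        diffs.append(sum(arms["exposed"]) / len(arms["exposed"])
                     - sum(arms["unexposed"]) / len(arms["unexposed"]))
effect = sum(diffs) / len(diffs)
print(f"Matched risk difference: {effect:.2f}")
```

Because each within-stratum comparison holds occupation, health, and demographic profile fixed, any remaining outcome difference is harder to attribute to those confounders, which is the logic behind the study's matched analyses.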

Their research also illuminated issues of environmental injustice. "We started to see some drastic environmental differences in risk across socioeconomic and racial groups," said Dominici. Black Americans have a risk of death from exposure to PM2.5 that is three times higher than the national average. Asian and Hispanic populations, as well as people with low socioeconomic status, are also more at risk than the national population as a whole.

One factor behind these discrepancies is that air pollution has been decreasing at different rates in different parts of the country over the past 20 years. In 2000, nearly the entire eastern half of the United States had relatively high levels of PM2.5, at 8 micrograms per cubic meter or higher. By 2016, those pollution levels had dropped dramatically across much of the map but remained high in areas with the highest proportions of Black residents. "Racial inequalities in air pollution exposure are actually increasing over time," said Dominici. She noted that one thing to consider is whether future regulations can tackle such inequities while also lowering air pollution for the entire nation on average.

Issues of both air pollution and environmental injustice have been thrown into stark relief during the Covid-19 pandemic. An early study on Covid-19 and air pollution led by Dominici showed that long-term exposure to higher levels of air pollution increased the risk of dying from Covid-19, and that areas with more Black Americans are even more at risk. Additional research showed that during last year's wildfire season in California, up to 50 percent of Covid-19 deaths in some areas were attributable to the spikes in PM2.5 that result from wildfires.

Due to a lack of data on individual Covid-19 patients, some of these analyses were based on county-level data, which Dominici noted was a major limitation. "Fortunately, in some geographical areas, we've started getting access to individual-level records," said Dominici. Access to more and better data has sparked additional research around the world on the link between air pollution and Covid-19. Dominici was also part of an international collaboration that estimated, for example, that 13 percent of Covid-19 deaths in Europe were attributable to fossil-fuel related emissions.

For Dominici, a data scientist at heart, findings like these highlight the role of data science in influencing critical environmental policy decisions. "Our all being devastated by this pandemic could provide an additional source of evidence of the importance of controlling fossil fuel-related emissions."

See the article here:

How data science gives new insight into air pollution in the US - MIT News

Data Science Job Vacancies: Top Openings to Apply for This Week – Analytics Insight

Analytics Insight has listed the top data science job aspirants should apply for this week.

Data science professionals earn an average salary of US$100,560, according to the US Bureau of Labor Statistics, and as their experience in the data science field grows, data scientists can command considerably higher pay packages. The evolution and importance of data science jobs spiraled upward in the past decade: the eruption of cloud computing accelerated the adoption of artificial intelligence, which in turn brought in the concept of data science. Organizations across the globe realized the importance of the technology and opened the door for data science vacancies. However, technology is an evolving field that demands continual learning. Fortunately, every year there are more updates to the knowledge and practice of data science, and data science job aspirants can keep up via online data science communities and courses. With demand for data science professionals surging, Analytics Insight has listed the top data science jobs to apply for this week.

Location: Bengaluru, Karnataka, India

About the company: LinkedIn is a social network platform that focuses on professional networking and career development. It is a simple job source platform based on principles like connecting to friends or LinkedIn connections, posting updates, sharing and liking content, and instant messaging other users. Today, over 756 million members in more than 200 countries and territories worldwide use the platform.

Roles and responsibilities: As an Insights Analyst at LinkedIn, the candidate is expected to leverage one of the richest datasets in the world to discover insights that help the company's sales team serve its clients. One should be highly analytical, be comfortable structuring and scoping ambiguous requests, have a passion for querying databases, and enjoy presenting findings in a visually compelling way. They should interrogate databases to extract the necessary data and visualize findings in Excel, PowerPoint, Tableau, or other data visualization software. The candidate should act in a consultative capacity for their sales partners and the wider business, delivering data-driven insights.

Qualifications:

Apply here for the job.

Location: Bengaluru, Karnataka, India

About the company: KPMG is one of the leading providers of risk, financial, and business advisory, tax and regulatory services, internal audit, and corporate governance. The company values integrity, excellence, courage, and other qualities in a person and believes in providing an inclusive culture. The leadership team is the principal governing body of KPMG's operations in India.

Roles and responsibilities: According to KPMG, the candidate is expected to have a strong grounding in statistics, along with exploratory data analysis, machine learning, and predictive modeling skills. One should know different algorithms and should be able to ask why and suggest alternatives, instead of simply doing as directed. They should have strong hands-on skills in either R or Python programming. Besides, other skills such as exposure to NLP, neural networks, Databricks, Spark, etc. are preferred.

Qualifications:

Apply here for the job.

Location (s): Bengaluru, Chennai, Pune, Gurgaon- India

About the company: Wipro Ltd is one of India's leading tech companies, providing IT services including Business Process Outsourcing (BPO) services globally. Wipro harnesses the power of cognitive computing, hyper-automation, robotics, cloud, analytics, and emerging technologies to help its clients adapt to the digital world and make them successful.

Roles and responsibilities: As a data science specialist, the candidate will be part of Wipro Digital-Intelligence Enterprise Practice. The role entails leveraging data science and machine learning to solve typical problems within an organization and create IP that could be deployed in a productized mode. One is expected to have hands-on experience with Python or R. They should be well-versed in using industry-leading BI tools and technologies for analytic purposes. The candidate should have prior experience in using big data frameworks like Hadoop. Other responsibilities include data preparation for modeling like missing data imputation, outlier treatment, scaling, normalization, standardization, etc.
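The data-preparation steps the Wipro role mentions (missing-data imputation, outlier treatment, scaling and standardization) can be sketched in a few lines of pandas. The column names, sample values, and percentile cut-offs below are illustrative assumptions, not part of the listing.

```python
import numpy as np
import pandas as pd

# Toy dataset with one missing value and one extreme outlier (hypothetical).
df = pd.DataFrame({
    "income": [42_000, 55_000, np.nan, 61_000, 950_000],
    "age": [25, 32, 41, np.nan, 38],
})

# 1. Missing-data imputation: fill gaps with each column's median.
df = df.fillna(df.median(numeric_only=True))

# 2. Outlier treatment: clip each column to its 1st-99th percentile range.
low, high = df.quantile(0.01), df.quantile(0.99)
df = df.clip(lower=low, upper=high, axis=1)

# 3. Standardization: rescale each column to zero mean and unit variance.
df = (df - df.mean()) / df.std()

print(df.round(2))
```

In practice the imputation strategy (median vs. model-based) and the clipping thresholds would be chosen per feature after exploratory analysis, rather than applied uniformly as in this sketch.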

Qualifications:

Apply here for the job.

Location: Chicago, Illinois

About the company: Google LLC is an American search engine company that offers more than 50 internet services and products, from email and online document creation to software for mobile phones and tablet computers. More than 70% of the world's online search requests are handled by Google, placing it at the heart of most internet users' experience.

Roles and responsibilities: Working as a Data and Analytics Analyst in the Google Procurement Organization (GPO), the candidate will support the center-of-excellence team that is responsible for developing, deploying, and ingraining best practices throughout the organization in areas such as the procurement process, systems, metrics, reporting, and training, including building infrastructure for the team. One should research new ways to model data to unlock actionable procurement insights, monitor performance indicators, highlight trends, and analyze the causes of unexpected variance. They should make business recommendations, presenting findings effectively to stakeholders at multiple levels through visual displays of quantitative information.

Qualifications:

Apply here for the job.

Location: Folsom, CA

About the company: Intel Corporation is an American manufacturer of semiconductor computer circuits. Founded in 1968, Intel's technology has been at the heart of computing breakthroughs. The company stands at the brink of several technology inflections, including artificial intelligence, 5G network transformation, and the rise of the intelligent edge.

Roles and responsibilities: The data analyst vacancy is a global role, with accountability to stakeholders across a variety of regions and functional groups. The candidate will be responsible for managing and further defining the processes by which additions and modifications are made to Intel's customer information systems, and for facilitating adherence to data governance standards. One should execute customer data synchronization across various systems to create and maintain a seamless customer experience. They should identify and troubleshoot discrepancies in customer profile information using a combination of defined processes, research, and critical thinking. The candidate is expected to have strong knowledge of best practices in data management. They should also possess strong data mining, analytics, and problem-solving skills.

Qualifications:

Apply here for the job.
