Category Archives: Data Science

Researchers link heavy wildfire smoke in Reno to increased risk of contracting COVID-19 – Reno Gazette-Journal

Cathie Anderson | The Sacramento Bee via Associated Press

SACRAMENTO, Calif. - Cases of COVID-19 rose sharply last year in Reno, Nevada, when a heavy layer of wildfire smoke settled over the city, according to scientists at the Desert Research Institute. They and other scientists are postulating that there is a link between air pollution and increased susceptibility to the new coronavirus.

"Our results showed a substantial increase in the COVID-19 positivity rate in Reno during a time when we were affected by heavy wildfire smoke from California wildfires," said Daniel Kiser, a co-lead author of the study published in the Journal of Exposure Science and Environmental Epidemiology. "This is important to be aware of as we are already confronting heavy wildfire smoke ... with COVID-19 cases again rising in Nevada and other parts of the western U.S."

Kiser, an assistant research scientist of data science at the institute, said he became interested in studying the effect of microscopic particulate matter from wildfires after reading a Canadian scientist's article on the dual effect of confronting both issues at the same time.

In the preface to her work, senior scientist Sarah Henderson of the British Columbia Centre for Disease Control wrote: "As we enter the wildfire season in the northern hemisphere, the potential for a dangerous interaction between SARS-CoV-2 and smoke pollution should be recognized and acknowledged. This is challenging because the public health threat of COVID-19 is immediate and clear, whereas the public health threat of wildfire smoke seems distant and uncertain in comparison. However, we must start preparing now to effectively manage the combination of public health threats."

Kiser is hoping that his research results will motivate people to get vaccinated and to wear masks to reduce their exposure to the virus and to tiny wildfire particulate matter that measures 2.5 micrometers or less.

That's about 1/30th the diameter of a human hair at its largest (a human hair is roughly 70 micrometers across). Scientists refer to it as PM 2.5 for short.

To analyze the relationship between this fine wildfire ash and COVID-19 positivity rates, Kiser and his team collected data from the Washoe County Health District and the region's big hospital system, Renown Health.

He said they discovered that the PM 2.5 was responsible for a 17.7% increase in the number of COVID-19 cases that occurred during a period of prolonged smoke that took place between Aug. 16, 2020, and Oct. 10, 2020.
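The study's core move is relating day-by-day smoke exposure to test positivity. A minimal sketch of that style of analysis, with simulated stand-in data and hypothetical column names (the published paper used a more careful lagged time-series design), might look like this:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated stand-in for daily Washoe County data: PM 2.5 (ug/m3)
# and test positivity rate. Column names are hypothetical.
rng = np.random.default_rng(0)
days = 56
pm25 = rng.gamma(shape=2.0, scale=20.0, size=days)            # smoke-season PM 2.5
positivity = 0.05 + 0.0004 * pm25 + rng.normal(0, 0.01, days)

df = pd.DataFrame({"pm25": pm25, "positivity": positivity})

# Regress positivity on PM 2.5; the slope estimates how much the
# positivity rate shifts per unit increase in PM 2.5.
X = sm.add_constant(df["pm25"])
model = sm.OLS(df["positivity"], X).fit()
print(model.params["pm25"], model.pvalues["pm25"])
```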

Washoe County's 450,000 residents, many of whom live in Reno, experienced 43 days of elevated PM 2.5 during that period, researchers said, compared with 26 days for residents of the San Francisco Bay Area.

"We had a unique situation here in Reno last year where we were exposed to wildfire smoke more often than many other areas, including the Bay Area," said Dr. Gai Elhanan, co-lead author of the study and an associate research scientist of computer science at the institute. "We are located in an intermountain valley that restricts the dispersion of pollutants and possibly increases the magnitude of exposure, which makes it even more important for us to understand smoke impacts on human health."

The relationship between COVID-19 positivity rates and air pollution in general has gained interest among scientists around the world, and Kiser and Elhanan cite research papers from Europe and Asia that explore the phenomenon as well.

Kent Pinkerton, an expert on air pollution on the faculty at the University of California, Davis, said there's concern among physicians and scientists about the impact of climate change on cardiopulmonary health, a topic he's currently addressing in an article he's submitting to a medical journal.

"Hotter temperatures, climate change, wildfires, air pollution, all seem to have some association with a greater risk of COVID-19 cases," Pinkerton said. "If you're susceptible to air pollution, such as particulate matter, it could be that you just have a situation where you'll be also much more susceptible to viral particles that might be in the air that you're breathing. It's not that the air pollution makes the COVID-19 cases more likely to happen, but it may simply be a reflection of just the fact that, where areas of high pollution are, ... the risk for COVID-19 cases may be greater."

Pinkerton said he read a paper on a study out of Turkey, which was submitted to a medical journal, and researchers there also found a terrible upswing in COVID-19 cases linked to increased air pollution.

No one has yet found the mechanism that increases the risk, Kiser and Pinkerton said, but there have been some hypotheses.

Could the new coronavirus be hitching rides on the PM 2.5 and managing to remain virulent as it is breathed into people's lungs? Certainly, PM 2.5 has been found in the smallest air sacs of people's lungs.

Kiser's team cites a study out of Northern Italy where researchers found the new coronavirus on particulate matter, and Pinkerton noted that the pathogen has been detected in water supplies and in sewage.

"We know that dust from the Mongolian desert, that comes across the Pacific Ocean, can carry at least biological material, whether it be viral or bacterial," Pinkerton said. "What people have argued about is that the dust can be a carrier for microorganisms."

It raises questions, Pinkerton added, of how long a virus can survive.

Kiser and Pinkerton said researchers also have postulated that the PM 2.5 irritates nasal, throat and lung passages, creating inflammation that makes those areas ripe for infection. Some research has even suggested that the PM 2.5 increases the presence of a histamine receptor to which the COVID-19 virus attaches, Kiser said.

Elhanan said: "We believe that our study greatly strengthens the evidence that wildfire smoke can enhance the spread of SARS-CoV-2. We would love public health officials across the U.S. to be a lot more aware of this because there are things we can do in terms of public preparedness in the community to allow people to escape smoke during wildfire events."

In fact, the U.S. Centers for Disease Control and Prevention has a website about wildfire smoke and COVID-19 that provides tips on how to prepare for wildfire season, including identifying high-efficiency air filters and maintaining a supply of N95 respirators, which filter out particulates.


Decode Your Future in Software Development With This Discounted Bundle – PCMag.com

Has your life been turned upside down by the chaos of the last year? Why not take the opportunity to switch to a career in software development?

The 2021 Google Software Engineering Manager Prep Bundle lets you train at your own pace, with more than 90 hours of content covering Java, C#, Python, data science, and more.

If you're in website development, check out two courses on user interface design: JavaFX: Build Beautiful User Interfaces and UI Design. Need extra certifications to boost your resumé? Get some important test prep in ISACA CISA (Certified Information Systems Auditor) 2021 and Certified Information Security Manager (CISM).

Software Architecture: Functional Programming in C# can certainly be helpful for coders, and there are multiple Python courses, including Machine Learning with Python and Python Engineering Animations: Bring Math & Data to Life.

Take a deep dive into the practical applications of natural language processing, such as spam detection, with Data Science: Natural Language Processing (NLP) in Python. Then follow that up with the Advanced NLP & Sequence Models with Deep Learning course, which covers neural machine translation, text classification, and more. If you want to focus on big data, you'll love the Big Data Code Optimization in Python NumPy: Sound Processing class.
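As a taste of what that NLP material covers, a spam detector can be surprisingly small; here is a sketch using scikit-learn with toy data (an illustration, not material from the course itself):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy labeled corpus standing in for real training data.
texts = ["win a free prize now", "meeting moved to 3pm",
         "claim your reward", "lunch tomorrow?"]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = ham

# Bag-of-words features feeding a naive Bayes classifier.
clf = make_pipeline(CountVectorizer(), MultinomialNB())
clf.fit(texts, labels)
print(clf.predict(["free reward inside"]))  # likely [1], i.e. spam
```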

Plus, this bundle has you covered with practical business applications: Learn to use the free business intelligence and data analytics tool Google Data Studio, and how to completely transform your marketing campaigns with artificial intelligence to achieve significantly improved results.

Don't miss this opportunity to gain advanced training in a variety of tech specializations. PCMag readers can get The 2021 Google Software Engineering Manager Prep Bundle on sale for $47.76, 98% off the $2,388 MSRP.

Prices subject to change.



NIH expands biomedical research in the cloud with Microsoft Azure – National Institutes of Health

News Release

Tuesday, July 20, 2021

Microsoft Azure has joined the National Institutes of Health's Science and Technology Research Infrastructure for Discovery, Experimentation, and Sustainability (STRIDES) Initiative as the newest cloud service provider to support biomedical research. The addition of this latest industry partner will further the STRIDES Initiative's aim to accelerate biomedical research in the cloud by reducing economic and process barriers as well as providing cost-effective access to cloud platforms, training, cloud experts, and best practices for optimizing research in the cloud.

In just a few years, the STRIDES Initiative has expanded access to critical infrastructure and cutting-edge cloud resources for NIH researchers, as well as NIH-funded researchers at more than 2,500 academic institutions across the nation. To date, NIH has helped more than 425 research programs and projects leverage cloud resources through the STRIDES Initiative. Collectively, researchers have used more than 83 million hours of computational resources to access and analyze more than 115 petabytes of high-value biomedical data in the cloud. This is equivalent to 2.3 million four-drawer filing cabinets full of text.

By leveraging the STRIDES Initiative, the National Library of Medicine's Sequence Read Archive (SRA), one of the world's largest publicly available genome sequence repositories, migrated over 43 petabytes of next-generation sequencing data to the cloud, easing access for millions of researchers. Using the cloud, researchers can now search the entire catalog of genomic data and take advantage of the computational tools for analysis.
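Mechanically, "searching the entire catalog" means querying the SRA's metadata through a cloud query service; a hypothetical example with Google BigQuery follows (the dataset path and column names here are assumptions and may differ in practice):

```python
from google.cloud import bigquery

client = bigquery.Client()

# Hypothetical table path for SRA run metadata in BigQuery.
query = """
    SELECT acc, organism, bioproject
    FROM `nih-sra-datastore.sra.metadata`
    WHERE organism = 'Severe acute respiratory syndrome coronavirus 2'
    LIMIT 10
"""
for row in client.query(query).result():
    print(row.acc, row.bioproject)
```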

"The cloud can help democratize access to high-value research data and the most advanced analytical technologies for all researchers. Expanding our network of providers and access to the most advanced computational infrastructure, tools, and services provides the agility and flexibility that researchers need to accelerate research discoveries," said Andrea T. Norris, Director of NIH's Center for Information Technology and NIH Chief Information Officer. "Partnering with Microsoft Azure as a cloud service provider furthers our goals to enhance discovery and improve efficiency in biomedical research."

"We often risk losing the value of biomedical data because of the sheer volumes being generated and digitized around the world. By leveraging cloud and artificial intelligence capabilities, biomedical researchers are able to quickly identify and extract critical, lifesaving insights from this sea of information," said Toni Townes-Whitley, President, U.S. Regulated Industries, Microsoft. "We are honored to collaborate with the NIH to help researchers solve some of today's biggest medical challenges, in support of a healthier and more sustainable global population."

A central tenet of the STRIDES Initiative is that data made available through these partnerships will incorporate standards endorsed by the biomedical research community to make data Findable, Accessible, Interoperable, and Reusable (FAIR).

"NIH has an ambitious vision of a modernized, FAIR biomedical data landscape," said Susan K. Gregurick, Ph.D., Associate Director for Data Science and Director of the Office of Data Science Strategy at NIH. "By partnering with Microsoft Azure, which has over three decades of experience in the cloud space, we can strengthen NIH's data ecosystem and accelerate data-driven research and discovery."

Microsoft Azure joins Google Cloud and Amazon Web Services in supporting the STRIDES Initiative.

About the NIH Office of Data Science Strategy: The Office of Data Science Strategy (ODSS) leads implementation of the NIH Strategic Plan for Data Science through scientific, technical, and operational collaboration with the institutes, centers, and offices that comprise NIH. The office was formed in 2018 within the Division of Program Coordination, Planning, and Strategic Initiatives, which plans and coordinates trans-NIH initiatives and research supported by the NIH Common Fund. More information is available at the Office of Data Science Strategy website: datascience.nih.gov.

About the National Institutes of Health (NIH):NIH, the nation's medical research agency, includes 27 Institutes and Centers and is a component of the U.S. Department of Health and Human Services. NIH is the primary federal agency conducting and supporting basic, clinical, and translational medical research, and is investigating the causes, treatments, and cures for both common and rare diseases. For more information about NIH and its programs, visit http://www.nih.gov.

NIH...Turning Discovery Into Health

###


Deadline 2024: Why you only have 3 years left to adopt AI – VentureBeat


If your company has yet to embrace AI, you're in a race against the clock. And by my calculations, you have just three years left.

How did I arrive at 2024 as the deadline for AI adoption? My prediction, formulated with KUNGFU.AI advisor Paco Nathan, is rooted in our noticing that many futurists' J curves show innovations typically have a 12-to-15-year window of opportunity, a period between when a technology emerges and when it reaches the point of widespread adoption.

While AI can be traced to the mid-1950s and machine learning dates back to the late 1970s, the concept of deep learning was popularized by the AlexNet paper published in 2012. Of course, it's not just machine learning that started the clock ticking.

Though cloud computing was initially introduced in 2006, it didn't take off until 2010 or so. The rise of data engineering can also be traced to the same year. The original paper for Apache Spark was published in 2010, and it became foundational for so much of today's distributed data infrastructure.

Additionally, the concept of data science has a widely reported inception date of 2009. That's when Jeff Hammerbacher, DJ Patil and others began getting recognized for leading data science teams and helping define the practice.

If you do the math, those 2009-2012 dates put us within that 12-to-15-year window: 2009 plus 15 years and 2012 plus 12 years both land on 2024. And that makes 2024 the cutoff for companies hoping to gain a competitive advantage from AI.

If you look at the graph below from Everett Rogers' Diffusion of Innovations, you'll get a sense of how those who wait to put AI into production will miss out on cornering the market. Here the red line shows successive groups adopting new technology while the purple line shows how market share eventually reaches a saturation level.

Source: Everett Rogers, Diffusion of Innovations
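For readers who want the curve itself: Rogers models adopters as normally distributed in time, so cumulative market share follows the normal CDF, which produces the S-shape in the figure. A small sketch of that geometry:

```python
import numpy as np
from scipy.stats import norm

# Adoption over time (the red line) is the normal PDF; cumulative
# market share (the purple saturation line) is the normal CDF.
t = np.linspace(-3, 3, 7)  # time, in standard deviations from the mean
adoption_rate = norm.pdf(t)
market_share = norm.cdf(t)

# Rogers' category boundaries fall at whole standard deviations.
for boundary, name in [(-2, "innovators"), (-1, "early adopters"),
                       (0, "early majority"), (1, "late majority")]:
    print(f"cumulative share at {boundary:+d} sd ({name} edge): "
          f"{norm.cdf(boundary):.1%}")
```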

A 2019 survey conducted by the MIT Sloan Management Review and Boston Consulting Group explicitly shows how the Diffusion of Innovations theory applies to AI. Their research was based on a global survey of more than 3,000 executives, managers, and analysts across various industries.

Once the responses to questions around AI understanding and adoption were analyzed, survey respondents were assigned to one of four distinct categories:

Pioneers (20%): These organizations possess a deep knowledge of AI and incorporate it into their offerings and internal processes. They're the trailblazers.

Investigators (30%): These organizations understand AI but aren't deploying it beyond the pilot stage. They're taking more of a "look before you leap" approach.

Experimenters (18%): These organizations are piloting AI without truly understanding it. Their strategy is "fake it until you make it."

Passives (32%): These organizations have little-to-no understanding of AI and will likely miss out on the opportunity to profit from it.

The 2020 survey, which uses the same questions and methodology, gives even greater insight into how executives embrace AI. Of those surveyed, 87% believe AI will offer their companies an advantage over others. Just 59% of companies, however, have an AI strategy.

Comparing the MIT and BCG 2020 survey responses to those since the survey's inception in 2017, a growing number of execs recognize that competitors are using AI. Yet only one in 10 companies is using AI to generate significant financial benefits.

I anticipate this gap between leaders and laggards will continue widening, making this your company's last chance to take action before 2024 (if it hasn't already).

MIT and BCG's 2020 data reveals that companies focused on the initial steps of AI adoption (ensuring data, talent, and a strategy are in place) will have a 21% chance of becoming a market leader. When companies begin to iterate on AI solutions with their organizational users (effectively adopting AI and applying it across multiple use cases), that chance rises to 39%. And those that can orchestrate the macro and micro interactions between humans and machines (sharing knowledge amongst both and smartly structuring those interactions) will have a 73% chance of market leadership.

Building upon MIT and BCG's success predictions, McKinsey & Company has specifically broken down how AI integration impacts revenue in this 2020 chart.

Source: McKinsey & Company Global Survey, 2020

While the ROI for AI integration can be immediate, that's not typically the case. According to MIT and BCG's 2019 data, only two out of three companies that have made some investment in AI (Investigators and Experimenters) report gains within three years. This stat improves to three out of five when companies that have made significant investments in AI (Pioneers) are included.

The 2020 MIT/BCG data builds upon this, claiming companies that use AI to make extensive changes to many business processes are 5X more likely to realize a major financial benefit vs. those making small or no changes to a few business processes.

So where will you be in 2024? On your way to reaping the rewards of AI, or lamenting that you missed an opportunity for market advantage?

Steve Meier is a co-founder and Head of Growth at AI services firm KUNGFU.AI.


How the National Science Foundation is taking on fairness in AI – Brookings Institution

Most of the public discourse around artificial intelligence (AI) policy focuses on one of two perspectives: how the government can support AI innovation, and how the government can deter its harmful or negligent use. Yet there can also be a role for government in making it easier to use AI beneficially; in this niche, the National Science Foundation (NSF) has found a way to contribute. Through a grant-making program called Fairness in Artificial Intelligence (FAI), the NSF is providing $20 million in funding to researchers working on difficult ethical problems in AI. The program, a collaboration with Amazon, has now funded 21 projects in its first two years, with an open call for applications in its third and final year. This is an important endeavor, furthering a trend of federal support for the responsible advancement of technology, and the NSF should continue this important line of funding for ethical AI.

The FAI program is an investment in what the NSF calls "use-inspired research," where scientists attempt to address fundamental questions inspired by real-world challenges and pressing scientific limitations. Use-inspired research is an alternative to traditional basic research, which attempts to make fundamental advances in scientific understanding without necessarily having a specific practical goal. NSF is better known for basic research in computer science, where the NSF provides 87% of all federal basic research funding. Consequently, the FAI program is a relatively small portion of the NSF's total investment in AI: around $3.3 million per year, considering that Amazon covers half of the cost. In total, the NSF requested $868 million in AI spending, about 10% of its entire budget for 2021, and Congress approved every penny. Notably, this is a broad definition of AI spending that includes many applications of AI to other fields, rather than fundamental advances in AI itself, which is likely closer to $100 or $150 million, by rough estimation.

The FAI program is specifically oriented towards the ethical principle of fairness (more on this choice in a moment). While this may seem unusual, the program is a continuation of prior government-funded research into the moral implications and consequences of technology. Starting in the 1970s, the federal government started actively shaping bioethics research in response to public outcry following the AP's reporting on the Tuskegee Syphilis Study. While the original efforts may have been reactionary, they precipitated decades of work towards improving the biomedical sciences. Launched alongside the Human Genome Project in 1990, there was an extensive line of research oriented towards the ethical, legal, and social implications of genomics. Starting in 2018, the NSF funded 21 exploratory grants on the impact of AI on society, a precursor to the current FAI program. Today, it's possible to draw a rough trend line through these endeavors, in which the government is becoming more concerned with first pure science, then the ethics of the scientific process, and now the ethical outcomes of the science itself. This is a positive development, and one worth encouraging.

NSF made a conscious decision to focus on fairness rather than other prevalent themes like trustworthiness or human-centered design. Dr. Erwin Gianchandani, an NSF deputy assistant director, has described four categories of problems in FAI's domain, and these can each easily be tied to present and ongoing challenges facing AI. The first category is focused on the many conflicting mathematical definitions of fairness and the lack of clarity around which are appropriate in what contexts. One funded project studied human perceptions of which fairness metrics are most appropriate for an algorithm in the context of bail decisions, the same application in which the infamous COMPAS algorithm was used. The study found that survey respondents slightly preferred an algorithm that had a consistent rate of false positives (how many people were unnecessarily kept in jail pending trial) between two racial groups, rather than an algorithm which was equally accurate for both racial groups. Notably, this is the opposite quality of the COMPAS algorithm, which was fair in its total accuracy, but resulted in more false positives for Black defendants.
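That tradeoff is easy to make concrete. With hypothetical confusion-matrix counts for two groups, an algorithm can be equally accurate for both while producing very different false positive rates:

```python
def rates(tp, fp, tn, fn):
    """Return (accuracy, false positive rate) from confusion counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    fpr = fp / (fp + tn)
    return accuracy, fpr

# Hypothetical counts illustrating the COMPAS-style tension: both
# groups see the same accuracy, but group B sees far more people
# wrongly flagged as high risk (false positives).
acc_a, fpr_a = rates(tp=40, fp=10, tn=40, fn=10)
acc_b, fpr_b = rates(tp=45, fp=15, tn=35, fn=5)

print(f"group A: accuracy={acc_a:.2f}, FPR={fpr_a:.2f}")  # 0.80, 0.20
print(f"group B: accuracy={acc_b:.2f}, FPR={fpr_b:.2f}")  # 0.80, 0.30
```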

The second category, Gianchandani writes, is to understand how an AI system produces a given result. The NSF sees this as directly related to fairness because giving an end-user more information about an AI's decision empowers them to challenge that decision. This is an important point: by default, AI systems disguise the nature of a decision-making process and make it harder for an individual to interrogate the process. Maybe the most novel project funded by NSF FAI attempts to test the viability of crowdsourcing audits of AI systems. In a crowdsourced audit, many individuals might sign up for a tool (e.g., a website or web browser extension) that pools data about how those individuals were treated by an online AI system. By aggregating this data, the crowd can determine if the algorithm is being discriminatory, which would be functionally impossible for any individual user.
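In statistical terms, a crowdsourced audit pools per-user outcomes and asks whether they differ by group; a minimal sketch with hypothetical pooled counts:

```python
from scipy.stats import chi2_contingency

# Hypothetical pooled counts reported by audit participants:
# rows = demographic group, columns = (favorable, unfavorable) outcome.
table = [[480, 120],   # group A
         [390, 210]]   # group B

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.1f}, p={p_value:.2g}")
# A small p-value suggests the system treats the groups differently,
# evidence no single participant could produce alone.
```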

The third category seeks to use AI to make existing systems fairer, an especially important task as governments around the world continue to consider if and how to incorporate AI systems into public services. One project from researchers at New York University seeks, in part, to tackle the challenge of fairness when an algorithm is used in support of a human decision-maker. This is perhaps inspired by a recent evaluation of judges using algorithmic risk assessments in Virginia, which concluded that the algorithm failed to improve public safety and had the unintended effect of increasing incarceration of young defendants. The NYU researchers have a similar challenge in mind: developing a tool to identify and reduce systemic biases in prosecutorial decisions made by district attorneys.

The fourth category is perhaps the most intuitive, as it aims to remove bias from AI systems, or alternatively, make sure AI systems work equivalently well for everyone. One project looks to create common evaluation metrics for natural language processing AI, so that their effectiveness can be compared across many different languages, helping to overcome a myopic focus on English. Other projects look at fairness in less-studied methods, like network algorithms, and still more look to improve specific applications, such as medical software and algorithmic hiring. These last two are especially noteworthy, since the prevailing public evidence suggests that algorithmic bias in health-care provisioning and hiring is widespread.

Critics may lament that Big Tech, which plays a prominent role in AI research, is present even in this federal program: Amazon is matching the support of the NSF, so each organization is paying around $10 million. Yet there is no reason to believe the NSF's independence has been compromised. Amazon is not playing any role in the selection of the grant applications, and none of the grantees contacted had any concerns about the grant-selection process. NSF officials also noted that any working collaboration with Amazon (such as receiving engineering support) is entirely optional. Of course, it is worth considering what Amazon has to gain from this partnership. Reading the FAI announcement, it sticks out that the program seeks to contribute to "trustworthy AI systems that are readily accepted" and that projects will enable "broadened acceptance of AI systems." It is not a secret that the current generation of large technology companies would benefit enormously from increased public trust in AI. Still, corporate funding towards genuinely independent research is good and unobjectionable, especially relative to other options like companies directly funding academic research.

Beyond the funding contribution, there may be other societal benefits from the partnership. For one, Amazon and other technology companies may pay more attention to the results of the research. For a company like Amazon, this might mean incorporating the results into its own algorithms, or into the AI systems that it sells through Amazon Web Services (AWS). Adoption into AWS cloud services may be especially impactful, since many thousands of data scientists and companies use those services for AI. As just an example, Professor Sandra Wachter of the Oxford Internet Institute was elated to learn that a metric of fairness she and co-authors had advocated for had been incorporated into an AWS cloud service, making it far more accessible for data science practitioners. Generally speaking, having an expanded set of easy-to-use features for AI fairness makes it more likely that data scientists will explore and use these tools.

In its totality, FAI is a small but mighty research endeavor. The myriad challenges posed by AI are all improved with more knowledge and more responsible methods driven by this independent research. While there is an enormous amount of corporate funding going into AI research, it is neither independent nor primarily aimed at fairness, and may entirely exclude some FAI topics (e.g., fairness in the government use of AI). While this is the final year of the FAI program, one of NSF FAI's program directors, Dr. Todd Leen, stressed when contacted for this piece that the NSF is not walking away from these important research issues, and that FAI's mission will be absorbed into the general computer science directorate. This absorption may come with minor downsides, for instance, the lack of a clearly specified budget line and no consolidated reporting on the funded research projects. The NSF should consider tracking these investments and clearly communicating to the research community that AI fairness is an ongoing priority of the NSF.

The Biden administration could also specifically request additional NSF funding for fairness and AI. For once, this funding would not be a difficult sell to policymakers. Congress funded the totality of the NSF's $868 million budget request for AI in 2021, and President Biden has signaled clear interest in expanding science funding; his proposed budget calls for a 20% increase in NSF funding for fiscal year 2022, and the administration has launched a National AI Research Taskforce co-chaired by none other than Dr. Erwin Gianchandani. With all this interest, earmarking $5 to $10 million per year explicitly for the advancement of fairness in AI is clearly possible, and certainly worthwhile.

The National Science Foundation and Amazon are donors to The Brookings Institution. Any findings, interpretations, conclusions, or recommendations expressed in this piece are those of the author and are not influenced by any donation.


How Hackathons Are Changing The Way Data Scientists Are Hired – Analytics India Magazine

Today, hackathons are one of the primary recruitment tools for tech companies. Organisations invest in new-age tools to hire employees by assessing their problem-solving approach and skills to manage time & people. In machine learning hackathons, the participants are given a problem statement and need to work with a dataset to create an accurate model to top the leaderboard. The gamifying experience makes the hiring process more interactive and less stressful for both candidates and recruiters.
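Under the hood, a leaderboard is just a fixed error metric computed against organizer-held ground truth; here is a small sketch of that scoring step (RMSE is an illustrative choice of metric, and the data is made up):

```python
import numpy as np
from sklearn.metrics import mean_squared_error

# Hypothetical ground truth held by organizers, and two submissions.
y_true = np.array([3.0, 5.0, 2.5, 7.0])
submissions = {
    "team_alpha": np.array([2.8, 5.2, 2.7, 6.5]),
    "team_beta":  np.array([3.5, 4.0, 2.0, 8.0]),
}

# Rank teams by RMSE, lowest first, as a leaderboard would.
scores = {team: np.sqrt(mean_squared_error(y_true, pred))
          for team, pred in submissions.items()}
for rank, (team, rmse) in enumerate(sorted(scores.items(),
                                           key=lambda kv: kv[1]), 1):
    print(f"{rank}. {team}: RMSE={rmse:.3f}")
```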

A recent report showed universities that leveraged technologies like hackathons in their hiring process achieved a 70 percent on-boarding rate.

"For a better perspective, think of a way of solving a problem as quickly as possible. To crack the code, you will certainly need multiple minds to work and innovate together. That's the simplest definition of a hackathon," said Nikhil Barshika, founder of Imarticus Learning.

ML hackathons allow employers to take a closer look at how potential hires deal with real-world situations. Let's take a deep dive into why hackathons are changing the way data scientists are hired.

In a bid to spotlight virtual hackathons as a non-traditional channel of recruitment, MachineHack hosted The Great Indian Hiring Hackathon (2020) in collaboration with 12 prominent companies including Aditya Birla Group, Bridgei2i, Concentrix and Fractal.

"Since its inception, MachineHack has aimed to empower data scientists to innovate. Data scientists, despite having tremendous talent and innovation to offer, are facing unprecedented challenges during this pandemic, and we at MachineHack want to tap into that pool of talent," said Bhasker Gupta, CEO & Founder, AIM.

MachineHack has an ongoing fortnight-long hiring hackathon, Mathco.Thon, for data scientists and machine learning practitioners. TheMathCompany will interview the candidates who make it to the top leaderboard positions. The participants also stand to win a cash prize.

Organisations can also leverage hackathons for training and upskilling employees, thus preparing them for senior and more relevant roles within the company. The hackathon approach helps companies achieve two goals: to promote the work culture among existing employees and build a substantial brand recall value. For example, Karan Juneja, a regular participant in MachineHack hackathons and its grandmaster, has said hackathons have helped him pick up new data science skills.

While employers get the opportunity to shortlist the best talent for their organisation, candidates get the hang of the organisation's work culture. The hackathons offer the ideal setting for both candidates and recruiters to understand if they are the right fit for each other.



Data Science Platform Market 2020: Potential Growth, Challenges, and Know the Companies List Could Potentially Benefit or Lose out From the Impact of…

Data Science Platform Market is the latest research study released by Adroit Market Research, evaluating market risk-side analysis and highlighting opportunities, leveraged with strategic and tactical decision-making support (2021-2028). The market study is segmented by the key regions that are accelerating marketization. The report provides information on market trends and development, growth drivers, and the changing investment structure of the Data Science Platform Market. Some of the key players profiled in the study are Microsoft, IBM, Google, MathWorks, Cloudera, Altair Engineering, SAS, Wolfram, Alteryx, and SAP. Moreover, the other potential players in the data science platform market are RapidMiner, Dataiku, Civis Analytics, Databricks, and Anaconda.

By end user/application, the market is sub-segmented as: Application (Logistics, Marketing, Sales, Customer Support, Human Resource, and Others) and Industry Vertical (IT & Telecom, BFSI, Retail, Healthcare, Government & Defense, and Others).

By type, the market is categorized as: Platform (Solutions and Services, with Services comprising Managed Services and Professional Services).

Regional Analysis for Data Science Platform Market includes: North America, US, Canada, Mexico, Europe, Germany, France, U.K., Italy, Russia, Nordic Countries, Benelux, Rest of Europe, Asia, China, Japan, South Korea, Southeast Asia, India, Rest of Asia, South America, Brazil, Argentina, Rest of South America, Middle East & Africa, Turkey, Israel, Saudi Arabia, UAE & Rest of Middle East & Africa

The Data Science Platform Market study covers on-going status, % share, upcoming growth patterns, development cycle, SWOT analysis, and sales channels & distributions to anticipate trending scenarios for years to come. It aims to provide analysis of the market through trend analysis, segment breakdown, and players' contributions to the Data Science Platform market's upliftment. The market is sized across 5 major regions, i.e., North America, Europe, Asia Pacific (including Asia & Oceania separately), Middle East and Africa (MEA), and Latin America, and further broken down by 18+ jurisdictions or countries such as China, the UK, Germany, the United States, France, Japan, India, and groups of Southeast Asian & Nordic countries.

Players profiled in the report: Microsoft, IBM, Google, MathWorks, Cloudera, Altair Engineering, SAS, Wolfram, Alteryx, and SAP. Moreover, the other potential players in the data science platform market are RapidMiner, Dataiku, Civis Analytics, Databricks, and Anaconda.

Consumer Traits Include the Following Patterns:

Consumer Buying patterns (e.g., comfort & convenience, economical, pride)

Customer Lifestyle (e.g., health conscious, family orientated, community active)

Expectations (e.g., service, quality, risk, influence)

Major Highlights of the Data Science Platform Market Factored into the Analysis

Data Science Platform Market Measures & Parameters Addressed in the Study: The report highlights Data Science Platform market features such as segment revenue, weighted average selling price by region, capacity utilization rate, production & production value, % gross margin by company, consumption, import & export, demand & supply, cost benchmarking of the finished product in the Data Science Platform industry, market share, annualized growth rate (Y-o-Y), and % CAGR.
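For reference, the Y-o-Y and CAGR figures such reports quote follow standard formulas; a short sketch with illustrative numbers only:

```python
def yoy_growth(current, previous):
    """Year-over-year growth rate."""
    return current / previous - 1

def cagr(end_value, start_value, years):
    """Compound annual growth rate over `years` periods."""
    return (end_value / start_value) ** (1 / years) - 1

print(f"Y-o-Y: {yoy_growth(120, 100):.1%}")      # 20.0%
print(f"CAGR:  {cagr(200, 100, years=5):.1%}")   # ~14.9%
```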

Major Strategic Data Science Platform Market Developments: Activities such as Research & Development (R&D) by phase, ongoing and completed Mergers & Acquisitions (M&A) [deal value, purpose, effective year], joint ventures (JVs), technological tie-ups, supplier partnerships & collaborations, agreements, new launches, etc., undertaken by Data Science Platform industry players during the projected timeframe of the study.

The Data Science Platform Market report provides rigorously studied and evaluated data on the top industry players and their scope in the market by means of various analytical tools. To provide a deep-dive analysis, qualitative commentary on changing market dynamics {drivers, restraints & opportunities}, PESTLE, 5-Forces, feasibility study, BCG matrix, SWOT by players, heat map analysis, etc. have been provided to better correlate key players' product offerings in the market.

1. Data Science Platform Market Overview

Market Snapshot

Definition

Product Classification

2. Data Science Platform Market Dynamics

Drivers, Trends, Restraints

Market Factors Analysis

3. New Entrants and Entry-barriers

4. Standardization, Regulatory and collaborative initiatives

Manufacturing Process Analysis

Industrial/Supply Chain Analysis, Sourcing Strategy and Downstream Buyers

5. Data Science Platform Market Competition by Manufacturers

6. Data Science Platform Market Value [USD], Capacity, Supply (Production), Consumption, Price, Export-Import (EXIM), by Region (2016-2020)


7. Data Science Platform Revenue (Value), Production, Sales Volume, by Region (2021-2028)

8. Data Science Platform Market Trend by Type

9. Data Science Platform Market Analysis by Application

10. Data Science Platform Market Manufacturers Profiles/Analysis

Market Share Analysis by Manufacturers (2020)

Manufacturers Profiles (Overview, Financials, SWOT etc)

Connected Distributors/Traders

Marketing Strategy by Key Manufacturers/Players

Thanks for reading the Data Science Platform industry research publication; you can also get individual chapter-wise sections or region-wise report versions, such as the Americas, LATAM, Europe, Nordic nations, Oceania, Southeast Asia, or just Eastern Asia.

About Us

Adroit Market Research is an India-based business analytics and consulting company incorporated in 2018. Our target audience is a wide range of corporations, manufacturing companies, product/technology development institutions, and industry associations that require an understanding of a market's size, key trends, participants, and the future outlook of an industry. We intend to become our clients' knowledge partner and provide them with valuable market insights to help create opportunities that increase their revenues. We follow a code: Explore, Learn and Transform. At our core, we are curious people who love to identify and understand industry patterns, create an insightful study around our findings, and churn out money-making roadmaps.

Contact Us:

Ryan Johnson

Account Manager Global

3131 McKinney Ave Ste 600, Dallas,

TX 75204, U.S.A.

https://neighborwebsj.com/


Why Data Scientists and ML Engineers Shouldn’t Worry About the Rise of AutoML – Datanami


Low-code and no-code development tools are becoming increasingly popular, and the pandemic only accelerated this trend. When we think of low-code or no-code development, we're usually referring to tools that allow a non-software-engineer to create a digital app (or workflow) in a plug-and-play manner that doesn't require extensive technical knowledge.

But the idea of low-code or no-code engineering also extends to tools for machine learning and data science, and today we're seeing a proliferation of options in this category, too, sometimes referred to as AutoML. As with low-code dev tools, the allure of these offerings is that they enable businesses to implement data science and ML workflows without needing the resources or expertise to build them from scratch. AutoML tools allow a user to input a dataset and then, with minimal data science knowledge needed, deploy a model to run over the data and generate results. It's tempting to think that AutoML fully breaks down the barriers to AI, allowing anyone to do this type of work, but for reasons I'll explain later, that's not really the case; quite the opposite, in fact.
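That point-a-tool-at-a-dataset workflow looks roughly like the following sketch, here using the open-source TPOT library as a stand-in (commercial AutoML products differ in detail, and the hyperparameters shown are arbitrary):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from tpot import TPOTClassifier

# The user supplies only features and labels.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# TPOT searches over pipelines (preprocessing + model + hyperparameters)
# automatically, the "minimal data science knowledge" step.
automl = TPOTClassifier(generations=5, population_size=20, random_state=42)
automl.fit(X_train, y_train)

print(automl.score(X_test, y_test))   # held-out accuracy
automl.export("best_pipeline.py")     # an inspectable sklearn pipeline
```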

AutoML does have some potential benefits. This article from Deloitte notes two advantages in particular. The first is increased productivity for data scientists, who can speed up specific steps of the ML lifecycle through automation. This will ultimately enable data scientists to increase their value contribution to the business and to focus on more complex problems.

A second benefit is enabling non-technical business leaders to gain some access to ML, which makes particular sense in the context of the well-documented demand for data scientists. Some have speculated that AutoML might ease the talent crunch for data scientists, if it does in fact allow existing employees to do the same type of ML work without specialized training. Amid COVID-19 cost-cutting, questions have been raised about whether demand for data scientists would begin to cool, especially since it's a field that can struggle to show clear ROI in some business settings. How will the rise of AutoML fit into the mix?

Just A Rather Very Intelligent System (J.A.R.V.I.S.) was originally Tony Stark's natural-language user interface computer system (Image courtesy Marvel Cinematic Universe)

I do think that AutoML will impact the data science field. As AutoML tools become more widespread, we'll see a corresponding increase in ML adoption among businesses. For a long time, enterprise ML was the province of the few: tech giants, innovative startups, and traditional businesses that were large enough to fund in-house AI centers. Tools like AutoML will make basic ML models and outputs more accessible to other types of companies. This doesn't mean that the neighborhood florist is going to suddenly have a system like J.A.R.V.I.S. running the place; as an article from McKinsey rightly notes, at present, the technology is best suited to streamlining the development of common forecasting tasks.

As AutoML increases enterprise ML adoption by lowering the barriers, enterprises will in fact find that they have a greater need for expert data scientists, not a reduced one. As organizations adopt more and more ML technologies and their use cases become more specific, they'll outgrow the one-size-fits-all approach. At that point, they'll need qualified data scientists and ML engineers to help continue on a growth trajectory. This is true not only because of the limitations of AutoML, but also because of the need for human oversight to account for ethical concerns like bias as ML usage becomes more prevalent.

Additionally, ML workflows are not typically a "set it and forget it" process: as the dynamic forces of business change over time, data drift or concept drift may cause ML models to become less accurate. A skilled data scientist can detect and correct for these types of problems; they can also improve overall model function by adjusting the training data as needed, to avoid the classic "garbage in, garbage out" scenario. While AutoML can improve access to basic ML workflows that a business can build on, experienced data scientists are needed to enable peak performance of those workflows and to provide the most nuanced, useful interpretation of results.
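Drift detection of the kind described here often starts with comparing the live feature distribution against the training one; a minimal sketch using a two-sample Kolmogorov-Smirnov test:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)

# A feature as seen at training time vs. in production after drift.
train_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)
live_feature = rng.normal(loc=0.4, scale=1.2, size=5_000)  # drifted

# The KS test flags a significant distribution shift; in practice a
# data scientist would monitor this per feature, per time window.
stat, p_value = ks_2samp(train_feature, live_feature)
if p_value < 0.01:
    print(f"drift detected (KS={stat:.3f}, p={p_value:.2g}); consider retraining")
```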

The reality is that we'll never automate away the need for data scientists, even if we do automate some of their tasks or improve accessibility to basic ML workflows for non-technical business people. If anything, growing adoption of AutoML will drive increased need for real, live data science expertise. Putting companies on a more equal footing in terms of their ability to incorporate ML into their businesses is a good thing, as are efforts to further democratize data science and AI. But we'll always need expert data scientists to guide implementations, especially as they become more use-case-specific or begin to more directly impact the public.

About the author: Kevin Goldsmith serves as the Chief Technology Officer for Anaconda, the data science platform that has more than 25 million users. In his role, he brings more than 29 years of experience in software development and engineering management to the team, where he oversees innovation for Anaconda's current open-source and commercial offerings. Goldsmith also works to develop new solutions to bring data science practitioners together with innovators, vendors, and thought leaders in the industry. Prior to Anaconda, Kevin served as CTO of AI-powered identity management company Onfido. Other roles have included CTO at Avvo, vice president of engineering, consumer at Spotify, and nine years at Adobe Systems as a director of engineering. He has also held software engineering roles at Microsoft and IBM.

Related Items:

Hiring, Pay for Data Science and Analytics Pros Picks Up Steam

AutoML Tools Emerge as Data Science Difference Makers

What's the Difference Between AI, ML, Deep Learning, and Active Learning?


Businesses need to show data science isn't dull, it can be fun and rewarding – ComputerWeekly.com

This is a guest blogpost by Libby Duane Adams, chief advocacy officer at Alteryx.

In today's business environment, data is key to success. With over 2.5 quintillion bytes of data created each day, data-driven insights are the main driver in every major business decision and are essential to discovering more efficient processes, reductions in risk, or new sources of revenue.

However, harnessing the power of data continues to be a challenge, due to the on-going shortage of data science skills in the labour market, as demand for digital skills still far outstrips the supply. A recent UK government report found that nearly half of businesses (46%) have struggled to recruit for roles requiring hard data and analytics skills.

IDC estimates that by 2025 we'll have created more than 175 zettabytes globally. As the world of business continues evolving, companies are moving fast and need fast solutions; they can no longer tolerate knowledge workers delivering low strategic output from legacy enterprise tools. The sheer abundance of data and its growing complexity mean that data-skilled workers able to harness it for fast and sound decisions will be at the forefront of the job market throughout the next decade.

While not every worker needs to become a data scientist, many businesses are turning to upskilling their employees to overcome this shortage, building their own internal pool of talented data workers with the skills, desire, knowledge, and analytical expertise to be successful and thrive in an increasingly data-rich environment.

Organisations have already started to recognize data literacy as an important skill for their workforce. A recent McKinsey study found that 84% of executive leaders, when increasing their talent pool of data specialists, experienced more success from upskilling their existing workforce, compared to just 16% who succeeded when hiring externally. By providing analytics solutions that upskill information workers into data-literate knowledge workers, these knowledge workers individually and collectively can help drive organisational transformation. Employees have the context of the business questions to solve as well as the knowledge of the data assets available that can drive answers through analytics.

Creating a culture of upskilling is by no means an easy feat. Getting employees engaged can be half the battle. It requires building a new culture where data is accessible to workers throughout the organisation, as well as significant investment in new tools and platforms that do not require users to know complex coding languages. Low-code and no-code solutions provide space for employees who want to upskill, learn and practice to become skilled data workers themselves.

By implementing formal upskilling programmes that focus on key skills and technologies, in addition to providing a learning curriculum that can result in valuable and credible certifications, companies can set themselves and their employees up for success. However, these programmes should not be dry and academic. In fact, the upskilling journey can be a social experience.

For instance, businesses can host lunch-and-learn activities and company-wide data challenges that bring people together from across the organisation, introduce staff to data science, and make it appealing and accessible. Gamification strategies can also encourage staff to use online learning resources and develop their data skills by using leaderboards, points scoring, and personal challenges and achievements.

The aim is to create an open culture of learning where staff communicate and work together to solve data problems. A company's existing data scientists should act as coaches to colleagues, encouraging them to think analytically and ask the right questions of datasets. This will help build data skills into every team, so that data analytics becomes an enterprise-wide initiative, rather than being siloed into one team of analytics professionals.

The other benefit of this more social approach to data science is how it can impact diversity. Simply put, data science has a diversity problem: as few as 15% of data scientists are women. This lack of diversity is a huge concern, because with a diverse range of approaches and points of view to tackle data challenges and ensure data models and algorithms are free from biases, businesses will see improvements in results. It's no secret that the more diverse the workforce, the richer the business outcomes will be; research by McKinsey has shown that organisations with more ethnic and gender diversity are more likely to outperform. When we value our varied experiences, they impact how we solve problems to get to better answers.

The evolving landscape of the data science and analytics market creates an inherent need for organisations to foster and grow data analytics cultures fuelled by collaboration and diversity, presenting an opportunity for all demographics traditionally underrepresented in the technology workforce to accelerate their careers by embracing analytic roles. For business leaders, this represents an opportunity to look within for specialists with the right attitude to problem solving, not just technical aptitude, to support and upskill in both data literacy and analytics.

By investing in upskilling, people of any age, gender, and background can learn vital data skills and progress their careers. It also enables companies to recruit new individuals who don't necessarily have an academic background or specific coding skills, which may encourage a more diverse range of applicants. This was the experience of the sports and fitness apparel company Gymshark, which uses Alteryx to empower and upskill its employees.

"We've been able to expand faster because we are able to find these individuals easier, rather than having to find people with very specific skillsets," says Gemma Hulbert, CDO at Gymshark. "New hires are now able to come in and hit the ground running right away with Alteryx, even though they aren't data analysts. We are able to create apps that empower our employees to be able to learn new skills using the platform."

Data science doesn't have to be the preserve of the elite few. Anyone in the workforce with a passion for solving data puzzles is now able to do it, not just a handful of specialists. In the past, employees with vast expertise in their own fields were locked out of data analytics due to the technical knowledge it required.

With the right tools and investment, anyone can learn data skills, and when people are encouraged to be creative and think critically, they are able to ask the right questions and solve all sorts of problems. Thanks to self-service platforms and automation, the power of analytics is no longer restricted to a few gatekeepers, but rather is available to all. By enabling employees to scale their passion for data science, businesses will accelerate the knowledge worker's journey to becoming data-driven, be better able to unlock data-driven insights, and tackle the world's biggest problems with a successful digital transformation journey.


Top Data Science Fundings and Investments to Watch Out For in Q2 2021 – Analytics Insight

Data and analytics are being used every day in businesses to drive transformation and efficiency and to generate accurate insights for greater revenue. The impact of data science reaches far beyond the IT industry, and it is solving some of the most pressing issues in other industries. In healthcare, defense, and education, data science technologies have begun to revolutionize traditional business operations.

This article provides a list of the top data science company fundings and investments to look out for in Q2 of 2021.

Amount Raised: US$15M

Transaction Type: Series A

Key Investor(s): Menlo Ventures, Amity Ventures, and others

Edge Delta is a stream processing platform for observability, predicting, and detecting anomalies in operational and security data. The company allows enterprises to use a network of analytics to identify and remediate potential DevOps, IT, operational, and security incidents more accurately.
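As a toy illustration of the general technique (not Edge Delta's actual method), streaming anomaly detection can be as simple as flagging points far from a rolling baseline:

```python
import numpy as np

def zscore_anomalies(stream, window=50, threshold=3.0):
    """Flag points more than `threshold` sigmas from a rolling mean,
    a toy version of the kind of anomaly detection described above."""
    stream = np.asarray(stream, dtype=float)
    flags = []
    for i in range(window, len(stream)):
        hist = stream[i - window:i]
        mu, sigma = hist.mean(), hist.std()
        if sigma > 0 and abs(stream[i] - mu) / sigma > threshold:
            flags.append(i)
    return flags

# Simulated operational metric with one injected spike.
rng = np.random.default_rng(2)
metric = rng.normal(100, 5, size=300)
metric[200] = 160
print(zscore_anomalies(metric))  # expect the spike at index 200
```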

Amount Raised: US$140M

Transaction Type: Series B

Key Investor(s): Softbank Vision Fund, 5square, and others

Vianai provides artificial intelligence solutions to its clients. The company focuses on defining, maintaining, and delivering software for industry leaders. It envisions empowering millions of its clients to build machine learning applications and solutions to reach new heights.

Amount Raised: US$11M

Transaction Type: Series A

Key Investor(s): ATX Venture Partners, Circle K Ventures

Pensa Systems is a provider of autonomous perception systems for retail inventory visibility. The company has created a platform that allows drones to monitor the shelves and alert the retailers in real-time when the product is out of stock or reloaded.

Amount Raised: US$3.4M

Transaction Type: Seed

Key Investor(s): Seraphim Capital, Creative Ventures, and others

PlanetWatchers provides SaaS solutions for enterprises, governments, and NGOs to monitor their natural assets across the world. Its advanced geospatial technology combines machine learning algorithms, cloud infrastructure, and multi-source satellite sensors to provide critical information for efficient management.
