Category Archives: Data Science

OpsRamp Introduces The Future of Incident Response: Harnessing Machine Learning and Data Science to Predict and Prevent IT Outages – Yahoo Finance

The latest release allows operators to deliver stellar customer experiences, drive proactive incident response, and gain powerful capabilities for hybrid monitoring

OpsRamp Dark Mode reduces blue-light fatigue for ops teams that troubleshoot during late hours or are accustomed to darkened UIs.

SAN JOSE, Calif., Sept. 21, 2021 (GLOBE NEWSWIRE) -- OpsRamp, a modern digital operations management platform for hybrid monitoring and AI-driven event management, today announced its Summer Release, which includes alert predictions for preventing outages and incidents, alert enrichment policies for faster incident troubleshooting, and auto monitoring enhancements for Alibaba Cloud and Prometheus metrics ingestion.

Machine learning and data science continue to transform the discipline of IT operations, with 75% of Global 2000 enterprises planning to adopt AIOps by 2023. As CIOs ramp up on intelligent automation for driving proactive operations, OpsRamp's latest release helps IT teams avoid outages and prevent reputational damage with predictive alerting, alert enrichment, and dynamic workflows. The OpsRamp Summer 2021 Release also introduces new monitoring integrations for Alibaba Cloud, Prometheus metrics ingestion, Hitachi, VMware, Dell EMC, and Poly.

Highlights of the OpsRamp Summer 2021 Release include:

Predictive Alerting. Alert prediction policies help IT teams anticipate which alerts repeat regularly and turn into performance-impacting incidents. With AIOps, operators can reduce service degradations by identifying seasonal alert patterns as well as lower incident volumes by forecasting repetitive alerts.

Alert Enrichment. Organizations can accelerate incident troubleshooting by enriching the problem area field in the alert description subject. IT operators can use regular expressions to populate alert context details so that they can identify problems faster with relevant information.

Auto Monitoring. IT operators can now rapidly onboard and monitor their Windows infrastructure, including Windows Server, Active Directory, Exchange, IIS, and SQL Server through auto monitoring. Cloud engineers can ensure centralized data storage and retention of Prometheus metrics with support for Prometheus instances running on bare metal and virtualized infrastructure.

Alibaba Cloud Monitoring. CloudOps engineers can now onboard and monitor their services running in Alibaba Cloud. They can visualize, alert, and perform root cause analysis on ECS instances, Auto Scaling, RDS, Load Balancer, EMR, and VPC services within Alibaba Cloud and accelerate troubleshooting for multicloud infrastructure within a single platform.

Datacenter Monitoring. System administrators can now monitor the performance and health of popular datacenter infrastructure such as Hitachi VSP OpsCenter, NAS and HCI, VMware vSAN, NSX-T and NSX-V, Dell EMC PowerScale, PowerStore and PowerMax, and Poly Trio, VVX/CCX and Group.

Dynamic Workflows (Beta). Instead of building a number of different automation workflows, IT operators can maintain a single decision table to address specific operational scenarios at scale. Dynamic workflows ensure faster incident response by invoking diagnostic actions for distinct scenarios.

Mobile Application. IT teams can now respond to alerts and incidents through the OpsRamp mobile application with support for both Android and iOS devices. Operators can view, sort, search, filter, comment, and take action on alerts while also being able to access, edit, sort, filter, and reassign incidents from anywhere.

Powerful Visualizations. Operators can now clearly visualize metric values that can arbitrarily increase or decrease within a fixed range using Gauge charts. For network operations teams that work in 24/7 shifts, dark mode reduces eye strain, improves readability, and offers ergonomic comfort.

"Modern IT teams have to deal with escalating customer expectations, constant toil, technical debt, and an overwhelming amount of operational data to process and analyze," said Sheen Khoury, Chief Revenue Officer at OpsRamp. "OpsRamp's digital operations management platform transforms reactive incident workflows into proactive and preventive operations for faster incident prediction, recognition, and remediation."


Learn about the OpsRamp Summer 2021 Release at http://www.OpsRamp.com/whatsnew or try OpsRamp free for 14 days at try.opsramp.com.

About OpsRamp

OpsRamp is a digital operations management software company whose SaaS platform is used by enterprise IT teams to monitor and manage their cloud and on-premises infrastructure. Key capabilities of the OpsRamp platform include hybrid infrastructure discovery and monitoring, event and incident management, and remediation and automation, all of which are powered by artificial intelligence. OpsRamp investors include Sapphire Ventures, Morgan Stanley Expansion Capital and HPE. For more information, visit http://www.opsramp.com.

Media contact: Kevin Wolf, TGPR, kevin@tgprllc.com

A photo accompanying this announcement is available at https://www.globenewswire.com/NewsRoom/AttachmentNg/ea9f332b-3f25-4a11-8d6a-320323cb8d68


Twitter round-up: KDnuggets’ tweet on the importance of extract transform load (ETL) in data science the most popular tweet in Q2 2021 – Verdict

Verdict lists five of the most popular tweets on big data in Q2 2021 based on data from GlobalData's Influencer Platform.

The top tweets were chosen from influencers as tracked by GlobalData's Influencer Platform, which is based on a scientific process that works on pre-defined parameters. Influencers are selected after a deep analysis of each influencer's relevance, network strength, engagement, and leadership of discussions on new and emerging trends.

KDnuggets, a website focused on artificial intelligence (AI), analytics, big data, data science, and machine learning (ML) founded by data scientist Gregory Piatetsky-Shapiro, shared an article on the importance of ETL in data science. ETL involves extracting data from different sources, transforming it, and loading it into a single destination.

In most organisations, data is stored in different file formats and in different locations, and it can be inaccurate and inconsistent, making it difficult to gain insights from the data or use it for data science, according to the article. ETL can help address these issues by extracting the data and loading it into a central data warehouse. ETL can also help in running AI and ML applications by providing accurate data for the algorithms.

Username: KDnuggets

Twitter handle: @kdnuggets

Retweets: 60

Likes: 202

Kirk Borne, chief science officer at DataPrime, a provider of data science, analytics, ML and AI services and products, shared an article on how data analytics and ML can be used to predict stroke, the second leading cause of death in the world. An estimated 15 million people worldwide suffer a stroke each year, according to the World Health Organization (WHO).

Strokes can be prevented by identifying high-risk patients and motivating them to adopt a healthy lifestyle. High-risk patients can be identified using data science, data analytics and ML, according to the article. Several data analytics and ML models have been applied to evaluate stroke risk factors, including a mixed-effect linear model used to forecast the risk of cognitive decline in patients after a stroke. Another model created by researchers predicted stroke outcomes with high accuracy, the article stated.

Username: Kirk Borne

Twitter handle: @KirkDBorne

Retweets: 48

Likes: 105

Antonio Grasso, CEO of Digital Business Innovation, a digital business transformation consulting firm, shared an article on the key features that should be considered when choosing big data analytics tools. The tools should have certain features to meet users' needs and improve the user experience for successful analytics projects, the article highlighted.

Data breaches and safety issues, for example, can be avoided using big data analytics tools that have well-equipped security features. Some of the important big data analytics features highlighted by the article include data integration, data wrangling and preparation, data exploration, scalability, and data governance.

Username: Antonio Grasso

Twitter handle: @antgrasso

Retweets: 49

Likes: 92

Dr. Marcell Vollmer, partner and director at Boston Consulting Group (BCG), a management consulting firm, shared an infographic on the big data crisis. The infographic detailed how a strategy is needed to analyse the massive amounts of data collected by companies, of which just 0.5% is currently being analysed.

The infographic highlighted how companies such as streaming service Netflix, professional network LinkedIn and insurance company United Services Automobile Association (USAA) are utilising big data to their advantage by combining it with customer engagement. Netflix, for example, collects passive data and engages with users by identifying microgenres of their interest to help stream better content.

Username: Dr. Marcell Vollmer

Twitter handle: @mvollmer1

Retweets: 90

Likes: 88

Ronald van Loon, CEO of Intelligent World, an influencer network that connects companies and experts to new audiences, shared a video on the impact of big data and AI on business practices and developments. He highlighted the role big data plays in the development of smart cars, including research and development, and supply chain management.

Van Loon elaborated on how smart technology is becoming a major part of people's lives in the form of smart homes, smart vehicles and smart cities. Big data can play a key role in integrating smart vehicles with these smart technologies and can help in smart city planning and development. He detailed the comments made by Eric Xu, CEO of technology company Huawei, during the Huawei Analyst Summit 2021 on how the company plans to use big data to ensure driving safety in smart cities. Huawei is developing the HI dual-motor electric driving system, which will use AI and big data analysis to alert users to battery exceptions and prevent loss of power during driving.

Username: Ronald Van Loon

Twitter handle: @Ronald_vanLoon

Retweets: 50

Likes: 79


RwHealth: supporting the NHS with AI and data science – Healthcare Global – Healthcare News, Magazine and Website

Software firm RwHealth is a leading provider of AI solutions to the UK's National Health Service (NHS). Formerly called Draper & Dash, the company combines data science, technology and predictive analytics to provide insights to clinicians, particularly to support patients who might be suitable for clinical trials aimed at treating rare diseases, such as sickle cell anaemia.

We caught up with RwHealth's founder Orlando Agrippa to find out more about their work and how they are supporting the NHS.

What led you to create RwHealth?

RwHealth was founded to support health systems, and ultimately patients, by accelerating access to data-driven solutions for clinical care and clinical research. Improving outcomes and access to care for patients through clinical care technology and clinical research is our mission.

What key challenges does the NHS face that RwHealth, and more broadly artificial intelligence, is able to help with?

The NHS has a 5.3 million patient challenge, which means extreme pressure on treatment spending and on timely access to care for patients. This is compounded by a challenging clinical delivery resource base - we don't have enough clinicians, and many of the great frontline workers are now feeling tired, with some experiencing burnout.

Artificial intelligence, data science and machine learning are now key parts of how care is delivered, from helping clinicians to process millions of records, to redesigning and optimising patient pathways, to accelerating time to diagnosis for patients with rare and orphan diseases.

At a more basic level, a branch of AI can help with predicting demand and modelling capacity to accelerate time to care for cancer patients and others.

How did the pandemic impact what RwHealth does?

Vaccines have been tested and deployed to billions in under a year, and this enhanced our belief that things in clinical care and research don't need to take forever to be done. The impact for us has been being able to accelerate our own technology and data strategy to support clinician and research teams at pace and scale.

You have worked in the US and Australia. What differences are there between the health systems in these countries and the UK's?

Patients are patients in all countries. All patients want better outcomes and better access, irrespective of whether care is self-pay or government-funded. I spent time in the Hangzhou health system in China; one might think this is very different, however it left me with the same understanding: it has and should always be about the patients.


Trialbee and Castor Partner to Democratize Access and Simplify Enrollment to Clinical Trials Globally – Northeast Mississippi Daily Journal


Data Scientist vs Data Engineers: All you need to know before choosing the right career path – India Today

Workplace job titles are often far from accurate or precise. It might seem that anyone who works in technology is a programmer, or at least has some programming skills, but with big data on the rise, two jobs are in high demand: data engineers and data scientists. The positions may sound the same, but they are very different, with less overlap than the names imply.

Imagine a NASCAR racing team. There is a "Pit Crew" responsible for making sure the "race vehicle" is in peak form, ensuring all the different parts of the vehicle are working correctly so that it can perform under the heavy stress that will be put on it during the race.

In addition, another very important role is the "racing driver," who is responsible for making sure the vehicle is used in an optimized way, employing strategies such as when to speed, what type of "banking" to use when turning, and other techniques during the race. Both the driver and the pit crew have to work very closely for a successful outcome of the race.

In a similar manner, Data Engineers and Data Scientists, whose functions were very blurry earlier, are becoming essential for a successful outcome of a data science implementation.

"Data engineers" transform data into a format that is ready for analysis. These professionals are usually software engineers by trade. Their job involves cleaning the data, compilation and installation of database systems, scaling to multiple machines, writing complex queries, and strategizing disaster recovery systems.

"Data scientists" usually start with data preprocessing, which is cleaning, understanding, and trying to fill gaps in the data with the help of domain experts. Once this is done, they will build models which are truly valuable in extrapolating, analysing, and finding patterns in existing data.

We can see from the above that both Data Scientist and Data Engineer responsibilities are very critical for a favorable outcome of any Data Science implementation.

Data Engineers are the less famous cousins of data scientists, but no less important. Data engineers focus on collecting the data and validating the information that data scientists use to answer questions.

Data Engineers need to have a solid knowledge of the Hadoop ecosystem, streaming, and computation at scale. In addition, they should be very familiar with common databases, query languages and data-processing tools, such as PostgreSQL, MySQL, MapReduce, Hive and Pig.

Nowadays, since very large data-intensive projects such as autonomous cars, e-commerce shopping, large financial networks, etc., use Artificial Intelligence, the role of data engineers has been deemed very critical and on the rise.

The role of Data Scientist has been projected as a must-have for all disruptive technology projects. The Data Scientist mainly focuses on understanding core human abilities such as vision, speech, language, and decision making, along with other complex tasks, and on designing machines and software to emulate these processes.

Data Scientist responsibilities are focused on finding the right model to solve tasks such as "to augment or replace complex time-consuming decision-making processes" or "to automate customer interactions to be more natural and human-like" or "to uncover subtle patterns and make decisions that involve complicated new types of streaming data."

Data scientists should have a very good understanding of statistics, Machine Learning, Artificial Intelligence concepts and model building techniques. Knowledge of Data Visualization and Design Thinking approaches to problem solving is also critical; without these, a Data Scientist would be unable to add value to organisations. On the tools side, a good working knowledge of the R and Python data science stacks (e.g., NumPy, SciPy, pandas, scikit-learn), one or more deep learning frameworks (e.g., TensorFlow, Torch), and distributed data tools (e.g., Hadoop, Spark) is typically required.

Both Data Engineers and Data Scientists are in very high demand. According to a recent survey by Indeed, India will need 200,000 Data Scientists and Data Engineers over the next 5 years. From a salary perspective, both positions are equally paid: a recent poll conducted by LinkedIn suggests that the average salary for either a Data Scientist or a Data Engineer is around 18 lakhs per annum in India and around USD 100,000 per year in the USA.

Since there is so much demand for both Data Science and Data Engineering skills, a new field called "Computational Data Science," in which data engineering concepts and AI concepts are equally emphasised, has become one of the most sought-after degree programmes in the Ivy League and other top universities across the world.

In conclusion, we can say that data scientists dig into the research and visualization of data, whereas data engineers ensure data flows correctly through the pipeline. Both are essential and in tremendous demand with limited supply. It all depends on individual interests and strengths. You will not go wrong choosing either one of these professions.


Taylor & Francis Group Partners with Robert Bosch Centre for Data Science and AI to Amplify Research – IT News Online

Taylor & Francis Group is pleased to announce its publishing partnership with the Robert Bosch Centre for Data Science and Artificial Intelligence (IIT Madras). The Robert Bosch Centre for Data Science and AI (RBCDSAI) is one of India's preeminent interdisciplinary research centers for Data Science and AI, with twenty-eight faculty spanning ten departments.

Professor Balaraman Ravindran, Mindtree Faculty Fellow and the Head of RBCDSAI, IIT Madras, says, "The current pandemic has highlighted the importance of openness and collaboration in the scholarly publishing industry to share research more efficiently. The collaboration will give the center more visibility for our research, as well as support services and guidance for our authors on publishing. Researchers at the center will also be able to avail onboarding assistance on publishing processes, open practices, research promotion, and ongoing support from the knowledgeable Taylor & Francis team."

Nitasha Devasar, Managing Director, Taylor & Francis India and Vice President & Commercial Lead, India, South Asia & Africa, says an article shared by Professor Balaraman Ravindran of RBCDSAI set the ball rolling for this partnership: "We're excited to collaborate with RBCDSAI. As part of this partnership, we will be able to provide our society and association partners with information, advice, and exclusive benefits to serve their authors and members."

"India has the highest relative AI skill penetration rate in the world according to the Stanford University Artificial Intelligence Index Report 2021, and it's also ranked as one of the top five countries for growth in AI hiring. This is an incredible opportunity for Taylor & Francis to partner with an organization that is helping India to become a global leader. We anticipate that this co-branded organic commissioning program from this world-leading institute will result in an exchange of ideas, concerns, and best practices in scholarly publishing that will be mutually beneficial," says Dr. Gagandeep Singh, Senior Publisher (Engineering), CRC Press.

Taylor & Francis' growing list of outstanding titles in Artificial Intelligence and Machine Learning ranges from fundamental and theoretical concepts to advanced applications. The collection explores safety, security, and ethical concerns in AI and Machine Learning, as well as cutting-edge topics such as deep learning, autonomous vehicles, autonomous networks, and robotics.

About the Robert Bosch Centre for Data Science and AI

The Robert Bosch Centre for Data Science and AI (RBCDSAI) aims to leverage data science to generate insights that support actionable, reliable and impactful decisions for adoption in the engineering, finance and healthcare domains. It is one of the pre-eminent interdisciplinary research centres for Data Science and AI in India, with the largest network analytics and deep reinforcement learning groups and the most active natural language processing and deep learning groups.

About Taylor & Francis Group

Taylor & Francis Group partners with researchers, scholarly societies, universities, and libraries worldwide to bring knowledge to life. We are one of the world's leading publishers of scholarly content spanning all areas of the Humanities, Social Sciences, Behavioral Sciences, Science, Technology and Medicine. From our network of offices around the world, Taylor & Francis Group professionals provide expertise and support for Taylor & Francis, Routledge, Dovepress, and F1000 Research products and services.


Modern Hire Reveals New Research on the Effectiveness of Social Media in Hiring – PRNewswire

CLEVELAND and DELAFIELD, Wis., Sept. 21, 2021 /PRNewswire/ -- Modern Hire, the leading enterprise hiring platform for predicting job performance and fit, today released new research revealing that social media is not a valid or predictive hiring tool, cautioning recruiters on the risks of incorporating it into their hiring practices and platforms.

A recent whitepaper, What Does the Science Say: Social Media in Hiring, features a study conducted by Modern Hire's team of advanced-degree industrial-organizational psychologists and data scientists, who set out to understand the validity of social media as a hiring tool by investigating whether any relevant information from a candidate's LinkedIn profile is related to on-the-job performance.

Specifically, Modern Hire's study focused on job candidates in sales positions, measuring success on the job with employees' sales performance metrics. With few exceptions, Modern Hire's research results suggest that an employee's LinkedIn profile elements are not strongly correlated with their sales performance metrics, meaning using LinkedIn profiles for candidate selection and vetting is not shown to be predictive of candidates' on-the-job performance.

"Social media is increasingly being leveraged in the hiring process without much policy or guidance around it," said Eric Sydell, Ph.D. and EVP of Innovation at Modern Hire. "Our latest research demonstrates that, at least at this time, using social media in the hiring process offers little to no scientific value, and can even have an adverse impact on candidates during the recruiting and hiring process."

While using social media as a hiring tool can be an innovative way to engage with candidates, it can also introduce bias into the hiring process. Many social media platforms contain protected class information, and as a result, using social media for anything beyond identifying prospective candidates, especially in the evaluation and selection stages, increases the risk of unconscious bias and adverse impact in the hiring process.

Additionally, many candidates are not aware that their social media posts will be used by recruiters and hiring managers as part of the hiring evaluation process. With the exception of LinkedIn, prominent social media platforms like Facebook, Twitter and Instagram were built for personal, not professional, use, and it is not clear whether candidates intend for potential employers to use this information in hiring situations. As an alternative to leveraging social media in the hiring process, Modern Hire's research suggests that recruiters should focus on unbiased hiring practices that start with quality data, as well as predictive hiring tools that are validated and fair.

"It's difficult to predict what the future may hold for the use of social media in hiring," said Mike Hudy, Ph.D. and Chief Science Officer at Modern Hire. "With the rapid, constant evolution in social media functionality and user preferences, practices that may be fair and legally defensible today could become outdated virtually overnight. It's important to choose hiring strategies and technologies that are scientifically proven to improve hiring experiences for candidates and results for companies."

Modern Hire's research is powered by CognitIOn by Modern Hire, the company's industry-leading science that represents its cutting-edge capability and expertise in data science, predictive analytics, AI and industrial-organizational (I/O) psychology. Modern Hire's research has been widely published in several academic journals, including the Journal of Applied Psychology and the International Journal of Selection and Assessment, and has been presented at the annual conference of the Society for Industrial and Organizational Psychology (SIOP).

For more information and to download Modern Hire's new research report, What Does the Science Say: Social Media in Hiring, visit: https://modernhire.com/what-does-the-science-say-social-media-in-hiring/. To learn more about Modern Hire's award-winning, science-based enterprise hiring platform, please visit https://modernhire.com/platform/.

About Modern Hire

Modern Hire's intelligent hiring platform transforms each step of the process with screening, assessment, interview and workflow automation tools that make hiring more effective, efficient, ethical and engaging. Modern Hire is differentiated by its advanced selection science and is trusted by more than 700 leading global enterprises and nearly half the Fortune 100. To learn more about the company's commitment to seriously better hiring, visit www.modernhire.com.

Contact: Allison Zullo, Walker Sands, for Modern Hire, [email protected], 330-554-5965

SOURCE Modern Hire


How is data science changing the way we get insured for the better? – BOSS Magazine


Data is everywhere. While we consume a lot of data every day, we also give away loads of information to the internet with every interaction, no matter how brief. And while we see active use of data science and related technology in fields such as targeted marketing, personalization of technology, customer support, and more, it is now slowly reaching the insurance industry.

The insurance industry runs on statistics and logic as it involves risk management, competition, and event prediction daily. With the integration of big data, IoT, and data science, things are taking a turn for the better, for the customers and the companies.

The power of AI and data science is now used to predict risks and outcomes in the industry to cut gross losses as much as possible. The prediction, or assessment, of risk really just means identifying the types and causes of risk so the industry can avoid them by as big a margin as possible. This is all done with the help of data analytics.

Information about the customer (which could be a person, a group of people, or an entire company) is acquired and fed into a model. The model, designed using algorithms that combine and understand data, then assesses the risk's nature, type, and result for a given objective statement. For evaluation, the risk is presented in a form fit for the audience's consumption, such as a visually descriptive graph or table, thus helping ensure the customer's profitability.

While an increase in profitability sounds like the ultimate requirement for data science in the insurance industry, something a bit more concerning requires expert attention.

Insurance fraud is quite the talk of the town everywhere, all the time. Not only does it bring great losses to individuals, it also brings financial loss to insurance companies. The leading vectors for fraudulent activity are suspicious links, malware, phishing, etc. Thankfully, data science platforms have made it easier to detect and fight fraud using various techniques, often AI-driven.

Insurance companies constantly run their historical data through statistical models with algorithms designed to detect fraud. These models are trained on previous cases of fraudulent activity. Using this intelligence, the algorithm analyzes the ongoing stream of data to filter out even the most subtle instances of fraud that might otherwise go unnoticed.

But the integration of data science into the insurance industry isn't an isolated event of cross-industry advancement. With data analysis techniques, we often see IoT tunneling into the health and finance sectors.

We live in an age and a world that connect us to a vast network through multiple channels. Health insurance companies know that best. They seek out and collect data from wearable body sensors, such as smartwatches and phones; transactions, such as payments made at fast-food joints; data from exercise monitoring systems at gyms; and social media content posted by individuals, to evaluate their mental health. This endless sea of data we all provide falls into algorithms that study us, our behavior, and our history.

The companies then use this acquired intelligence to provide us with personalized, structured healthcare plans, services, and more.

If you can use data science to learn more about better healthcare schemes, can you do the same for automobiles? Yes, we can.

Risk assessment is as big a part of the automobile industry as any other. Data about various automobiles runs through a centralized database constantly to help customers learn what they need from their insurance company to cover them with the least amount of risk and maximum benefits.

Thankfully, platforms like Salty do the trick. They seamlessly simplify AI for you, understand you and your needs, and provide you with a customized plan to help you stay insured always.


Business Analytics Lecture Series Kicks Off with Janssen Pharmaceutical’s Jeffery Headd – Seton Hall University News & Events

Jeffery Headd

Presented by the Stillman School of Business, the second annual Executive Lessons Learned in Analytics and Advanced Technologies lecture series will return with virtual sessions. This year's focus is on leveraging analytics and associated applications to improve business impact and financial outcomes.

The first lecture in the second annual Executive Lessons Learned in Analytics and Advanced Technologies series will feature Jeffrey J Headd, Ph.D., senior director of Commercial Data Sciences at The Janssen Pharmaceutical Companies of Johnson & Johnson. Headd's lecture, titled "Driving Measurable Business Impact Through Data Science at Scale," will be presented virtually on Thursday, September 23 at 6:30 p.m.

During the presentation, attendees will learn how Janssen Pharmaceuticals has matured its data science capabilities from early proof of concept projects to a cornerstone capability enabling critical business initiatives.

As the leader of the Janssen Business Technology Commercial Data Sciences and Data Management team, Headd partners with commercial leadership across business functions to identify and tackle critical challenges with novel data-driven solutions. His multidisciplinary team applies expertise in artificial intelligence, machine learning, data integration, and other modern analytical methods to address these challenges and generate measurable business value.

Headd holds a Ph.D. in computational biology and bioinformatics from Duke University. In addition to his current position in Janssen, he has also worked in J&J's Medical Devices sector as a member of the R&D Technology Data Sciences team, following his initial J&J role as a data scientist in the Janssen Business Technology Data Sciences team. Prior to joining J&J, he worked in methods development for macromolecular crystallography as a member of the Phenix team at both Duke University and Lawrence Berkeley National Lab.

Jay Liebowitz, D.Sc., M.S. in Business Analytics (MSBA) program co-director in the Stillman School of Business, will moderate the series. Liebowitz is one of the world's leading knowledge management researchers and practitioners. Moreover, a Stanford University study recognized Liebowitz as being among the world's top 2 percent most-cited scientists in the field of AI, with a secondary discipline in Business.

To learn more about the series and to register for one or more of the lectures, click here.


The Poisson Process and Poisson Distribution, Explained (With Meteors!) – Built In

Do you know the real tragedy of statistics education in most schools? It's boring! Teachers spend hours wading through derivations, equations, and theorems. Then, when you finally get to the best part, applying concepts to actual numbers, it's with irrelevant, unimaginative examples like rolling dice. It's a shame, because stats can be engaging if you skip the derivations (which you'll likely never need) and focus on using the concepts to solve interesting problems.

So let's look at Poisson processes and the Poisson distribution, two important probability concepts. After highlighting the relevant theory, we'll work through a real-world example.

A Poisson process is a model for a series of discrete events where the average time between events is known, but the exact timing of events is random. The arrival of an event is independent of the event before (the waiting time between events is memoryless). For example, suppose we own a website that our content delivery network (CDN) tells us goes down on average once per 60 days, but one failure doesn't affect the probability of the next. All we know is the average time between failures; the failures themselves form a Poisson process.

We know the average time between events, but the events are randomly spaced in time (stochastic). We might have back-to-back failures, but we could also go years between failures because the process is stochastic.

A Poisson process meets the following criteria (in reality, many phenomena modeled as Poisson processes don't precisely match these but can be approximated as such):

1. Events are independent of each other.
2. The average rate (events per time period) is constant.
3. Two events cannot occur at the same time.

The last point, that events are not simultaneous, means we can think of each sub-interval in a Poisson process as a Bernoulli trial, that is, either a success or a failure. With our website, the entire interval in consideration is 60 days, but within each sub-interval (one day), our website either goes down or it doesn't.

Common examples of Poisson processes are customers calling a help center, visitors to a website, radioactive decay in atoms, photons arriving at a space telescope, and movements in a stock price. Poisson processes are generally associated with time, but they don't have to be. In the case of stock prices, we might know the average movements per day (events per time), but we could also have a Poisson process for the number of trees in an acre (events per area).

One example of a Poisson process we often see is bus arrivals (or trains). However, this isn't a proper Poisson process because the arrivals aren't independent of one another. Even for bus systems that run on time, a late arrival from one bus can impact the next bus's arrival time. Jake VanderPlas has a great article on applying a Poisson process to bus arrival times, which works better with made-up data than real-world data.


The Poisson process is the model we use for describing randomly occurring events and, by itself, isn't that useful. We need the Poisson distribution to do interesting things like find the probability of a given number of events in a time period or find the probability of waiting some time until the next event.

The Poisson distribution probability mass function (pmf) gives the probability of observing k events in a time period given the length of the period and the average events per time:
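$$P(k \text{ events in interval}) = e^{-\frac{\text{events}}{\text{time}} \cdot \text{time period}} \cdot \frac{\left(\frac{\text{events}}{\text{time}} \cdot \text{time period}\right)^k}{k!}$$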

The pmf is a little convoluted, and we can simplify events/time * time period into a single parameter, lambda (λ), the rate parameter. With this substitution, the Poisson distribution probability function has just one parameter:
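$$P(k \text{ events in interval}) = e^{-\lambda} \cdot \frac{\lambda^k}{k!}$$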

We can think of lambda as the expected number of events in the interval. (We'll switch to calling this an interval because, remember, the Poisson process doesn't always use a time period.) I like to write out lambda to remind myself the rate parameter is a function of both the average events per time and the length of the time period, but you'll most commonly see it as above. (The discrete nature of the Poisson distribution is also why this is a probability mass function and not a density function.)

As we change the rate parameter, λ, we change the probability of seeing different numbers of events in one interval. The graph below is the probability mass function of the Poisson distribution and shows the probability (y-axis) of a number of events (x-axis) occurring in one interval with different rate parameters.

The most likely number of events in one interval for each curve is the curve's rate parameter. This makes sense because the rate parameter is the expected number of events in one interval. Therefore, the rate parameter represents the number of events with the greatest probability when the rate parameter is an integer. When the rate parameter is not an integer, the highest-probability number of events will be the nearest integer to the rate parameter. (The rate parameter is also the mean and variance of the distribution, which don't need to be integers.)

We can use the Poisson distribution pmf to find the probability of observing a number of events over an interval generated by a Poisson process. Another use of the mass function equation (as we'll see later) is to find the probability of waiting a given amount of time between events.


We could continue with website failures to illustrate a problem solvable with a Poisson distribution, but I propose something grander. When I was a child, my father would sometimes take me into our yard to observe (or try to observe) meteor showers. We weren't space geeks, but watching objects from outer space burn up in the sky was enough to get us outside, even though meteor showers always seemed to occur in the coldest months.

We can model the number of meteors seen as a Poisson distribution because the meteors are independent, the average number of meteors per hour is constant (in the short term), and, as an approximation, meteors don't occur at the same time.

All we need to characterize the Poisson distribution is the rate parameter: the average events per time multiplied by the interval length. In a typical meteor shower, we can expect five meteors per hour on average, or one every 12 minutes. Due to the limited patience of a young child (especially on a freezing night), we never stayed out more than 60 minutes, so we'll use that as the time period. From these values, we get:
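$$\lambda = \frac{5 \text{ meteors}}{\text{hour}} \times 1 \text{ hour} = 5 \text{ meteors expected}$$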

Five meteors expected means that five is the most likely number of meteors we'd observe in an hour. According to my pessimistic dad, that meant we'd see three meteors in an hour, tops. To test his prediction against the model, we can use the Poisson pmf to find the probability of seeing exactly three meteors in one hour:
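$$P(3 \text{ meteors in 1 hour}) = e^{-5} \cdot \frac{5^3}{3!} \approx 0.14$$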

We get 14 percent, or about 1/7. If we went outside and observed for one hour every night for a week, then we could expect my dad to be right once! We can use other values in the equation to get the probability of different numbers of events and construct the pmf distribution. Doing this by hand is tedious, so we'll use Python for calculation and visualization (which you can see in this Jupyter Notebook).
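For reference, here's a minimal sketch of that single calculation using SciPy (the linked notebook has the full plotting code; this stripped-down version just reproduces the one probability):

```python
from scipy.stats import poisson

# Rate parameter: 5 meteors/hour * 1 hour of observation
rate = 5

# Probability of seeing exactly 3 meteors in one hour
print(poisson.pmf(3, mu=rate))  # ~0.140, or about 14 percent
```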

The graph below shows the probability mass function for the number of meteors in an hour, with an average of 12 minutes between meteors setting the rate parameter (which is the same as saying five meteors are expected in an hour).

The most likely number of meteors is five, the rate parameter of the distribution. (Due to a quirk of the numbers, four and five have the same probability, 18 percent.) There is one most likely value, as with any distribution, but there is also a wide range of possible values. For example, we could see zero meteors or more than 10 in one hour. To find the probabilities of these events, we use the same equation but, this time, calculate sums of probabilities (see the notebook for details).

We already calculated the chance of seeing precisely three meteors as about 14 percent. The chance of seeing three or fewer meteors in one hour is 27 percent, which means the probability of seeing more than three is 73 percent. Likewise, the probability of more than five meteors is 38.4 percent, while we could expect to see five or fewer meteors in 61.6 percent of hours. Although it's small, there is a 1.4 percent chance of observing more than ten meteors in an hour!
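These sums come straight from the cumulative distribution function; a minimal sketch in Python, again using SciPy rather than hand calculation:

```python
from scipy.stats import poisson

rate = 5  # five meteors expected per hour

print(poisson.cdf(3, mu=rate))       # P(3 or fewer)   ~0.265
print(1 - poisson.cdf(3, mu=rate))   # P(more than 3)  ~0.735
print(1 - poisson.cdf(5, mu=rate))   # P(more than 5)  ~0.384
print(poisson.cdf(5, mu=rate))       # P(5 or fewer)   ~0.616
print(1 - poisson.cdf(10, mu=rate))  # P(more than 10) ~0.014
```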

To visualize these possible scenarios, we can run an experiment by having our sister record the number of meteors she sees every hour for 10,000 hours. The results are in the histogram below:

(This is just a simulation. No sisters were harmed in the making of this article.)

On a few lucky nights, we'd see 10 or more meteors in an hour, although more often we'd see four or five meteors.

The rate parameter, λ, is the only number we need to define the Poisson distribution. However, since it's a product of two parts (events/time * interval length), there are two ways to change it: we can increase or decrease the events per time, and we can increase or decrease the interval length.

First, let's change the rate parameter by increasing or decreasing the number of meteors per hour to see how those shifts affect the distribution. For this graph, we're keeping the time period constant at 60 minutes.

In each case, the most likely number of meteors in one hour is the expected number of meteors, the rate parameter. For example, at 12 meteors per hour (MPH), our rate parameter is 12, and there's an 11 percent chance of observing exactly 12 meteors in one hour. If our rate parameter increases, we should expect to see more meteors per hour.

Another option is to increase or decrease the interval length. Here's the same plot, but this time we're keeping the number of meteors per hour constant at five and changing the length of time we observe.

It's no surprise that we expect to see more meteors the longer we stay out.


An intriguing part of a Poisson process involves figuring out how long we have to wait until the next event (sometimes called the interarrival time). Consider the situation: meteors appear once every 12 minutes on average. How long can we expect to wait to see the next meteor if we arrive at a random time? My dad always (this time optimistically) claimed we only had to wait six minutes for the first meteor, which agrees with our intuition. Let's use statistics to see if our intuition is correct.

I won't go into the derivation (it comes from the probability mass function equation), but the time we can expect to wait between events is a decaying exponential. The probability of waiting a given amount of time between successive events decreases exponentially as time increases. The following equation shows the probability of waiting more than a specified time:
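$$P(T > t) = e^{-\lambda t}$$

Here λ is the rate in events per unit of time, which for our meteors is 1/12 per minute.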

With our example, we have one event per 12 minutes, and if we plug in the numbers, we get a 60.65 percent chance of waiting more than six minutes. So much for my dad's guess! We can expect to wait more than 30 minutes about 8.2 percent of the time. (Note this is the time between each successive pair of events. The waiting times between events are memoryless, so the time between two events has no effect on the time between any other events. This memorylessness is also known as the Markov property.)

A graph helps us to visualize the exponentially decaying probability of waiting time:

There is a 100 percent chance of waiting more than zero minutes, which drops off to a near-zero chance of waiting more than 80 minutes. Again, as this is a distribution, there's a wide range of possible interarrival times.

Rearranging the equation, we can use it to find the probability of waiting less than or equal to a time:
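$$P(T \le t) = 1 - e^{-\lambda t}$$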

We can expect to wait six minutes or less to see a meteor 39.4 percent of the time. We can also find the probability of waiting between two lengths of time: there's a 57.72 percent probability of waiting between 5 and 30 minutes to see the next meteor.
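A quick sketch of those three numbers in Python (just NumPy arithmetic, nothing notebook-specific):

```python
import numpy as np

rate = 1 / 12  # one meteor per 12 minutes, in events per minute

print(np.exp(-rate * 6))                       # P(wait > 6 min)      ~0.607
print(1 - np.exp(-rate * 6))                   # P(wait <= 6 min)     ~0.394
print(np.exp(-rate * 5) - np.exp(-rate * 30))  # P(5 min < wait < 30) ~0.577
```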

To visualize the distribution of waiting times, we can once again run a (simulated) experiment. We simulate watching for 100,000 minutes with an average rate of one meteor per 12 minutes. Then we find the waiting time between each meteor we see and plot the distribution.
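One way to set up such a simulation (an illustrative sketch; the notebook may do it differently) is to treat each minute as a Bernoulli trial with a 1/12 chance of a meteor, then measure the gaps between successes:

```python
import numpy as np

rng = np.random.default_rng(0)

minutes = 100_000
meteor_seen = rng.random(minutes) < 1 / 12     # one meteor per 12 minutes on average
arrival_minutes = np.flatnonzero(meteor_seen)  # minutes at which a meteor appeared
waiting_times = np.diff(arrival_minutes)       # gaps between successive meteors

print(waiting_times.mean())  # close to 12 minutes
```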

The most likely waiting time is one minute, but that's distinct from the average waiting time. Let's try to answer the question: On average, how long can we expect to wait between meteor observations?

To answer the average waiting time question, we'll run 10,000 separate trials, each time watching the sky for 100,000 minutes, and record the time between each meteor. The graph below shows the distribution of the average waiting time between meteors from these trials:

The average of the 10,000 runs is 12.003 minutes. Surprisingly, this average is also the average waiting time to see the first meteor if we arrive at a random time. At first, this may seem counterintuitive: if events occur on average every 12 minutes, then why do we have to wait the entire 12 minutes before seeing one event? The answer is that we are calculating an average waiting time, taking into account all possible situations.

If the meteors came precisely every 12 minutes with no randomness in arrivals, then the average time we'd have to wait to see the first one would be six minutes. However, because waiting time is an exponential distribution, sometimes we show up and have to wait an hour, which outweighs the more frequent times when we wait fewer than 12 minutes. The average time to see the first meteor, averaged over all occurrences, is the same as the average time between events. This property of the first-event waiting time in a Poisson process is known as the Waiting Time Paradox.

As a final visualization, let's do a random simulation of one hour of observation.

Well, this time we got precisely the result we expected: five meteors. We had to wait 15 minutes for the first one, then 12 minutes for the next. In this case, it'd be worth going out of the house for celestial observation!

The next time you find yourself losing focus in statistics, you have my permission to stop paying attention to the teacher. Instead, find an interesting problem and solve it using the statistics you're trying to learn. Applying technical concepts helps you learn the material and better appreciate how stats help us understand the world. Above all, stay curious: There are many amazing phenomena in the world, and data science is an excellent tool for exploring them.

This article was originally published on Towards Data Science.
