Category Archives: Machine Learning

Post-doctoral Research Fellow (A/B) in Machine Learning job with UNIVERSITY OF ADELAIDE | 293182 – Times Higher Education

We are seeking to appoint a Post-doctoral Research Fellow (Level A/B) in Machine Learning. The salary range is $71,401 to $119,391 per annum; an employer superannuation contribution of up to 17% may apply.

A 1.5-year fixed-term position, with the possibility of extension to 3 years, is available to work on a research project developing machine learning methods for network protocol evaluation.

This is a fantastic opportunity for a high-achieving postdoctoral researcher to join a world-leading research group in Computer Security and Machine Learning, within a Computer Science department ranked 48th in the world, at The University of Adelaide, which is ranked in the top 1% of universities worldwide.

You will work on a research program addressing problems in the software-based emulation and assessment of networking protocols, in support of their automated dynamic analysis.

The project aims to develop and implement methods to automatically find vulnerabilities and attack strategies in common Internet routing protocols. You will be involved in the development of theory, techniques (such as fuzzing and machine learning methods) and tools for discovering bugs and vulnerabilities in protocol implementations.

You will work with a team of researchers from the University of Adelaide's School of Computer Science and the Australian Institute for Machine Learning, the University of New South Wales, CSIRO's Data61, and the Defence Science and Technology Group (DSTG).

In this role you will have the option to pursue one or more of the following:

This is an outstanding opportunity to advance your career in cyber security, network security, computer security, software engineering and machine learning, whilst exploring large-scale, automated, dynamic analysis of networking software with three world-class institutions in a world-leading environment.

The University of Adelaide is a member of Australia's prestigious Group of Eight research-intensive universities and ranks inside the world's top 100. In the Australian Government's 2018 Excellence in Research for Australia (ERA) assessment, 100% of University of Adelaide research was rated world-class or above, with work in 41 distinct fields achieving the highest possible rating of "well above world-standard". This included Artificial Intelligence and Image Processing, and Electrical and Electronic Engineering.

Our world-renowned researchers have established a culture of innovation and a strong track record of publication in the top venues, particularly in the areas of machine learning, computer vision and security. We're committed to delivering fundamental and commercially oriented research that's highly valued by our local and global communities. Here you'll work in one of the world's most talented and creative machine learning teams, with constant research-engineering collaboration. You'll use state-of-the-art technology and you'll be based in the heart of one of the world's top 10 most liveable cities.

To be successful you will need

Level A

Level B (in addition to the above)

Enjoy an outstanding career environment

The University of Adelaide is a uniquely rewarding workplace. The size, breadth and quality of our education and research programs - including significant industry, government and community collaborations - offer you vast scope and opportunity for a long, fulfilling career.

It also enables us to attract high-calibre people in all facets of our operations, ensuring you will be surrounded by talented colleagues, many of them world-leading. The cutting-edge nature of our work - not just in your own area, but across virtually the full spectrum of human endeavour - provides a constant source of inspiration.

Our culture is one that welcomes all and embraces diversity, consistent with our Staff Values and Behaviour Framework and our values of integrity, respect, collegiality, excellence and discovery. We firmly believe that our people are our most valuable asset, so we work to grow and diversify the skills, knowledge and capability of all our staff.

We embrace flexibility as a key principle to allow our people to manage the changing demands of work, personal and family life. Flexible working arrangements are on offer for all roles at the University.

In addition, we offer a wide range of attractive staff benefits. These include: salary packaging; flexible work arrangements; high-quality professional development programs and activities; and an on-campus health clinic, gym and other fitness facilities.

Learn more at: adelaide.edu.au/jobs

Your faculty's broader role

The Faculty of Sciences, Engineering and Technology is a multidisciplinary hub of cutting-edge teaching and research. Many of its academic staff are world leaders in their fields and graduates are highly regarded by employers. The Faculty actively partners with innovative industries to solve problems of global significance.

Learn more at: set.adelaide.edu.au

If you have the talent, we'll give you the opportunity. Together, let's make history.

Click on the Apply Now button to be taken through to the online application form. Please ensure you submit a cover letter, resume, and upload a document that includes your responses to all of the selection criteria for the position as contained in the position description or selection criteria document.

Applications close 11:55 pm, 12 June 2022.

For further information

For a confidential discussion regarding this position, contact:

Damith Ranasinghe, Associate Professor, School of Computer Science
P: +61 (8) 8313-0066
E: damith.ranasinghe@adelaide.edu.au

You'll find the full selection criteria below:

The University of Adelaide is an Equal Employment Opportunity employer. Women and Aboriginal and Torres Strait Islander people who meet the requirements of this position are strongly encouraged to apply.

See the original post here:
Post-doctoral Research Fellow (A/B) in Machine Learning job with UNIVERSITY OF ADELAIDE | 293182 - Times Higher Education

Tech Visionaries to Address Accelerating Machine Learning, Unifying AI Platforms and Taking Intelligence to the Edge, at the Fifth Annual AI Hardware…

SANTA CLARA, Calif.--(BUSINESS WIRE)--Meta's VP of Infrastructure Hardware, Alexis Black Bjorlin, will open the flagship AI Hardware Summit with a keynote, while her colleague Vikas Chandra, Meta's Director of AI Research, will open Edge AI Summit. Other notable keynotes include Microsoft Azure's CTO, Mark Russinovich, plus Wells Fargo's EVP of Model Risk, Agus Sudjianto; Synopsys President & COO, Sassine Ghazi; Cadence's Executive Chairman, Lip-Bu Tan; and Siemens EVP, IC EDA, Joseph Sawicki, among many others.

Machine learning and deep learning are fast becoming major line items on boardroom agendas in organizations across the globe. The technology stack needed to support these workloads, and to execute them quickly, efficiently, and affordably, is developing fast both in the datacenter and in client systems at the edge.

In 2018, a new Silicon Valley event called the AI Hardware Summit launched to provide a platform to discuss innovations in hardware necessary for supporting machine learning both at the very large scale and in small, resource-constrained environments. The event attracted enormous interest from the semiconductor and systems sectors, welcomed Habana Labs into the industry in its inaugural year, and subsequently hosted Alphabet Inc.'s Chairman and Turing Award winner, John L. Hennessy, as a keynote speaker in 2019. Shortly after, the Edge AI Summit was launched to focus specifically on deploying machine learning in commercial use cases in client systems.

Hennessy said of the AI Hardware Summit: "It's a great place where lots of people interested in AI hardware are coming together and exchanging ideas, and together we make the technology better. There's a synergistic effect at these summits which is really amazing and powers the entire industry."

Fast forward a few years of virtual shows and the events are back in person with a fresh angle. An all-star cast of tech visionary speakers will address optimizing and accelerating machine learning hardware and software, focusing on the intersection between systems design and ML development. Developer workshops with HuggingFace are a new feature this year, focused on helping bring new hardware innovation into leading enterprises.

The co-location of the two industry-leading summits combines their propositions into a single focus: building, optimizing and unifying software-defined ML platforms across the cloud-edge continuum. Attendees of the AI Hardware Summit can expect content spanning from hardware and infrastructure up to models and applications, whereas the Edge AI Summit has a much tighter focus on case studies of ML in the enterprise.

This year's audience will consist of machine learning practitioners and technology builders from various engineering disciplines, discussing topics such as systems-first ML, AI acceleration as a full-stack endeavour, software-defined systems co-design, boosting developer efficiency, optimizing applications across diverse ML platforms, and bringing state-of-the-art production performance into the enterprise.

While the AI Hardware Summit has broadened its scope beyond focusing purely on hardware, there will still be plenty for hardware-focused attendees to explore. The event website, http://www.aihardwaresummit.com, gives accessible information on why a software-focused or hardware-focused attendee should register.

The Edge AI Summit features more end user use cases than any other event of its kind, and is a must attend for anyone moving ML workloads to the edge. The event website, http://www.edgeaisummit.com, gives more information.

Read the original:
Tech Visionaries to Address Accelerating Machine Learning, Unifying AI Platforms and Taking Intelligence to the Edge, at the Fifth Annual AI Hardware...

Using machine learning to predict COVID-19 infection and severity risk among 4510 aged adults: a UK Biobank cohort study | Scientific Reports -…

Study design and participants

This retrospective study involved the UK Biobank cohort12. UK Biobank consists of approximately 500,000 people now aged 50 to 84 years (mean age = 69.4 years). Baseline data were collected in 2006-2010 at 22 centers across the United Kingdom13,14. Summary data are listed in Table 1. This research involved deidentified epidemiological data. All UK Biobank participants gave written, informed consent. Ethics approval for the UK Biobank study was obtained from the National Health Service Health Research Authority North West-Haydock Research Ethics Committee (16/NW/0274), in accordance with relevant guidelines and regulations from the Declaration of Helsinki. All analyses were conducted in line with UK Biobank requirements.

The following categories of predictors were downloaded: (1) demographics; (2) health behaviors and long-term disability or illness status; (3) anthropometric and bioimpedance measures of fat, muscle, or water content; (4) pulse and blood pressure; (5) a serum panel of thirty biochemistry markers commonly collected in a clinic or hospital setting; and (6) a complete blood count with a manual differential.

These factors included participant age in years at baseline, sex, education qualifications, ethnicity, and Townsend Deprivation Index. Sex was coded as 0 for female and 1 for male. For education, higher scores roughly correspond to progressively more skilled trade/vocational or academic training. Ethnicity was coded as UK citizens who identified as White, Black/Black British, or Asian/Asian British. The Townsend index15 is a standardized score indicating relative degree of deprivation or poverty based on permanent address.

This category consisted of self-reported alcohol status, smoking status, a subjective health rating on a 1-4 Likert scale (Excellent to Poor), and whether the participant had a self-described long-term medical condition. As noted in Table 1, 48.4% of participants indicated having such an ailment. We independently confirmed self-reported data against ICD-10 codes recorded at hospital. These conditions included all-cause dementia and other neurological disorders, various cancers, major depressive disorder, cardiovascular or cerebrovascular diseases and events, cardiometabolic diseases (e.g., type 2 diabetes), renal and pulmonary diseases, and other so-called pre-existing conditions.

The first automated readings of pulse and of diastolic and systolic blood pressure at the baseline visit were used.

Anthropometric measures of adiposity (Body Mass Index, waist circumference) were derived as described16. Data also included bioelectrical impedance metrics that estimate central body cavity (i.e., trunk) and whole body fat mass, fat-free muscle mass, or water content17.

Serum biomarkers were assayed from baseline samples as described18. Briefly, using immunoassay or clinical chemistry devices, spectrophotometry was used to initially quantify values for 34 biochemistry analytes. UK Biobank deemed 30 of these markers to be suitably robust. We rejected a further 4 markers due to data missingness >70% (estradiol, rheumatoid factor), or because they strongly overlapped with multicollinear variables that had more stable distributions or trait-like qualities (glucose rejected vs. glycated hemoglobin/HbA1c; direct bilirubin rejected vs. total bilirubin). A complete blood count with a manual differential was separately processed for red and white blood cell counts, as well as white cell sub-types.

As described (http://biobank.ctsu.ox.ac.uk/crystal/crystal/docs/infdisease.pdf), among 9695 UK Biobank participants randomly selected from the full 500,000-participant cohort, baseline serum was thawed and pathogen-specific assays were run in parallel using flow cytometry on a Luminex bead platform19.

Here, the goal of the multiplex serology panel was to measure multiple antibodies against several antigens for different pathogens, reducing noise and estimating the prevalence of prior infection and seroconversion, at least within the UK Biobank cohort. All measures were initially confirmed in serum samples using gold-standard assays with median sensitivity and specificity of 97.0% and 93.7%, respectively. Antibody load for each pathogen-specific antigen was quantified using median fluorescence intensity (MFI). Because seropositivity is difficult to assess for several pathogens, we did not use pathogen prevalence as a predictor in models.

Table 2 shows the selected pathogens, their respective antigens, estimated prevalence of each pathogen based roughly on antibody titers, and assay values. This array ranges from delta-type retroviruses like human T-cell lymphotropic virus 1 that are rare (<1%) to human herpesviruses 6 and 7 that have an estimated prevalence of more than 90%.

Our study was based on COVID-19 PCR test data available from March 16th to May 19th, 2020. Specifically, we used the May 26th, 2020 tranche of COVID-19 polymerase chain reaction (PCR) data from Public Health England. There were 4510 unique participants who had 7539 individual tests administered, hereafter called "test cases". To characterize each test case, UK Biobank had a binary variable for test positivity (result) and a separate binary variable for test location (origin). For the positivity variable, a COVID-19 test was coded as negative (0) or positive (1). The second binary variable represented whether the COVID-19 test occurred in a setting that was out-patient (0) or in-patient at hospital (1). As a proxy for COVID-19 severity, later verified by electronic health records and death certificates20, and as done in other UK Biobank reports21, a test case first needed to be positive for COVID-19 (i.e., the test had a 1 value for the positivity variable). Next, if the positive test case occurred in an out-patient setting the infection was considered mild (i.e., 0), whereas for in-patient hospitalization it was considered severe (i.e., 1). Thus, two separate sets of analyses were run to predict: (1) COVID-19 positivity; and (2) COVID-19 severity.

For a more technical description of the specific machine learning algorithm used to predict test case outcomes, see Supplementary Text 1. Supplementary Text 2 has an in-depth description and analysis of within-subject variation for outcome measures and the number of test cases per participant. Briefly, this variability was modest and had no significant impact on classifier model performance. SPSS 27 was used for all analyses, with alpha set at 0.05. Preliminary findings suggested that baseline serology data performed well in classifier models, despite a limited number of participants with serology. To determine if this serology sub-group was noticeably different from the full sample, Mann-Whitney U and Kruskal-Wallis tests were done (alpha = 0.05). Hereafter, separate sets of classification analyses were performed for: (1) the full cohort; and (2) the sub-group of participants that had serology data. In other words, due to the imbalance of sample sizes and, by definition, the absence or presence of serology data, classifier performance in the serology sub-group was never statistically compared to the full cohort.

Next, linear discriminant analysis (LDA) was used in two separate sets of analyses to predict either: (1) COVID-19 diagnosis (negative vs. positive); or (2) COVID-19 infection severity (mild vs. severe). Again, for a given test case, COVID-19 severity would be examined only among participants who tested positive for COVID-19. LDA is a regression-like classification technique that finds the best linear combination of predictors that can maximally distinguish between groups of interest. To determine how useful a given predictor or related group of predictors (e.g., demographics) was for classification, simple forced-entry models were first done. Subsequently, to derive best-fit, robust models of the data, stepwise entry (Wilks' Lambda, F-value entry = 3.84) was used to exclude predictors that did not significantly account for unique variance in the classification model. This data reduction step is critical because LDA can lead to model overfitting when there are too many predictors relative to observations22,23, which are COVID-19 test cases for our purposes. Finally, because multiple test cases could occur for the same participant, the assumption of independence could be violated. To guard against this problem, we used Mundry and Sommer's permutation LDA approach. Specifically, for each LDA model, permutation testing (1000 iterations, P < 0.05) was done by randomizing participants across groupings of test cases to confirm robustness of the original model24.

LDA model overfitting can also occur when there is a sample size imbalance. Because there were many more negative than positive COVID-19 test cases in the full sample (5329 vs. 2210), the negative test group was undersampled. Specifically, a random number generator was used to discard 2500 negative test cases at random, such that the proportion of negative to positive tests was 55% to 45% instead of 70.6% to 29.4%. Results without undersampling were similar (data not shown). No such imbalance was seen for COVID-19 severity in the full sample or for the serology sub-group. A typical holdout split of 70% for classifier training and 30% for testing was used25. Finally, a two-layer non-parametric approach was used to determine model significance and the estimated fit of one or more predictors. First, bootstrapping26 (95% confidence interval, 1000 iterations) was done to derive estimates robust against any violations of parametric assumptions. Next, leave-one-out cross-validation22 was done with bootstrap-derived estimates to ensure that the models themselves were robust. Collectively, the stepwise LDA models ensured that estimation bias of coefficients would be low, because most predictors are discarded before models are generated using the remaining predictors.
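The authors implemented these analyses in SPSS 27; purely as a hedged illustration, the undersampling-plus-holdout workflow can be sketched in Python with scikit-learn on synthetic data. All names, sizes, and features below are hypothetical stand-ins, not the UK Biobank variables:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Imbalanced toy data standing in for 5329 negative vs. 2210 positive test cases.
X, y = make_classification(n_samples=7539, n_features=20, weights=[0.706],
                           random_state=0)

# Undersample the majority (negative) class at random, as described in the paper.
neg_idx = np.flatnonzero(y == 0)
drop = rng.choice(neg_idx, size=2500, replace=False)
keep = np.setdiff1d(np.arange(len(y)), drop)
X, y = X[keep], y[keep]

# Typical 70%/30% holdout for classifier training and testing.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y,
                                          random_state=0)
lda = LinearDiscriminantAnalysis().fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, lda.decision_function(X_te)))
```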

For each LDA classification model, outcome threshold metrics included: specificity (i.e., true negatives correctly identified), sensitivity (i.e., true positives correctly identified), and the geometric mean (i.e., how well the model predicted both true negatives and positives). The area under the curve (AUC) with a 95% confidence interval (CI) was reported to show how well a given model could distinguish between a COVID-19 negative or positive test result, and separately, for COVID-19-positive test cases, whether the disease was mild or severe. Receiver operating characteristic (ROC) curves plotted sensitivity against 1 - specificity to better visualize results for sets of predictors and a final stepwise model. For stepwise models, the Wilks' Lambda statistic and standardized coefficients are reported to show how important a given predictor was for the model. A lower Wilks' Lambda corresponds to a stronger influence on the canonical classifier.
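For concreteness, a minimal sketch (scikit-learn assumed; the label and score arrays are toy placeholders) of how these threshold metrics derive from a confusion matrix:

```python
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

y_true  = np.array([0, 0, 1, 1, 0, 1, 0, 1])               # toy outcomes
y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.9, 0.6, 0.7])
y_pred  = (y_score >= 0.5).astype(int)                      # 0.5 threshold

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
specificity = tn / (tn + fp)          # true negatives correctly identified
sensitivity = tp / (tp + fn)          # true positives correctly identified
g_mean = np.sqrt(specificity * sensitivity)                 # geometric mean
print(specificity, sensitivity, g_mean, roc_auc_score(y_true, y_score))
```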

Ethics approval for the UK Biobank study was obtained from the National Health Service Health Research Authority North WestHaydock Research Ethics Committee (16/NW/0274). All analyses were conducted in line with UK Biobank requirements.

See the rest here:
Using machine learning to predict COVID-19 infection and severity risk among 4510 aged adults: a UK Biobank cohort study | Scientific Reports -...

Research Assistant (Bigdata, Machine Learning, and IoT) job with UNITED ARAB EMIRATES UNIVERSITY | 292997 – Times Higher Education

Job Description

The Electrical Engineering Department at the United Arab Emirates University is seeking a research assistant in Bigdata analytics with good exposure to machine learning and the Internet of Things (IoT). The main task is programming and integrating software components in our newly designed Bigdata platform. Basic knowledge of wireless communication networks is a plus. The candidate should have a good command of Python programming and be familiar with integrating Bigdata packages and machine learning platforms. The preferred candidate should also have demonstrated experience with the Linux operating system and IoT connectivity.

Minimum Qualification

An appropriate educational degree supplemented with documented evidence to support the following:

Preferred Qualification

Division: College of Engineering (COE)
Department: As. Dean for Research & Grad. Std. - COE
Job Close Date: 31-08-2022
Job Category: Academic - Research Assistant

Visit link:
Research Assistant (Bigdata, Machine Learning, and IoT) job with UNITED ARAB EMIRATES UNIVERSITY | 292997 - Times Higher Education

Beacon Biosignals announces partnership with Stratus to advance at-home brain monitoring and machine learning-enabled neurodiagnostics – BioSpace

Collaboration will enable AI-powered decentralized clinical trials

BOSTON, May 10, 2022 /PRNewswire/ -- Beacon Biosignals, which applies AI to EEG to unlock precision medicine for brain conditions, today announced a partnership with Stratus Research Labs, the nation's leading provider of EEG services, to enable expanded clinical trial service capabilities by leveraging Beacon's machine learning neuroanalytics platform.

EEG is standard of care in the clinical diagnosis and management of many neurologic diseases and sleep disorders, yet features of clinical significance often are difficult to extract from EEG data. Broader adoption of EEG technology has been further limited by labor-intensive workflows and variability in clinician expert interpretation. By linking their platforms, Beacon and Stratus will unlock AI-powered at-home clinical trials, addressing these challenges head-on.

"The benefits of widely incorporating EEG data into pharmaceutical trials has been desired for years, but the challenge of uniformly capturing and interpreting the data has been an issue," said Charlie Alvarez, chief executive officer for Stratus. "Stratus helps solve data capture issues by providing accessible, nationwide testing services that reduce the variability in data collection and help ensure high-quality data across all sites. Stratus is proud to partner with Beacon and its ability to complete the equation by providing algorithms to ensure the quality of EEG interpretations."

Stratus offers a wide variety of EEG services, including monitored long-term video studies and routine EEGs conducted in the hospital, clinic, and in patients' homes. Stratus has a strong track record of high-quality data acquisition, enabled by an industry-leading pool of registered EEG technologists and a national footprint for EEG deployment logistics. The announced agreement establishes Stratus as a preferred data acquisition partner for Beacon's clinical trial and neurobiomarker discovery efforts using Beacon's analytics platform.

"Reliable and replicable quantitative endpoints help drive faster, better-powered trials," said Jacob Donoghue, MD, PhD, co-founder of Beacon Biosignals. "A barrier to their development, along with performing the necessary analysis, can often be the acquisition of quality EEG at scale. Partnering with Stratus and benefiting from its infrastructure and platform eliminates that hurdle and paves the way toward addressing the unmet need for endpoints, safety tools and computational diagnostics."

Beacon's platform provides an architectural foundation for discovery of robust quantitative neurobiomarkers that subsequently can be deployed for patient stratification or automated safety or efficacy monitoring in clinical trials. The powerful and validated algorithms developed by Beacon's machine learning teams can replicate the consensus interpretation of multiple trained epileptologists while exceeding human capabilities over many hours or days of recording. These algorithms can be focused on therapeutic areas such as neurodegenerative disorders, epilepsy, sleep disorders and mental illness. For example, Beacon is currently assessing novel EEG signatures in Alzheimer's disease patients to identify which patients may or may not benefit from a specific type of therapy.

"This collaboration will enable at-home studies for diseases like Alzheimer's," Donoghue said. "It has traditionally been difficult to obtain clinical-grade EEG for these patients at the scale required for phase 3 and phase 4 clinical trials. Stratus' extensive expertise in scaling EEG operations in at-home settings unlocks real opportunities to harness brain data to evaluate treatment efficacy."

About Beacon Biosignals
Beacon's machine learning platform for EEG enables and accelerates new treatments that transform the lives of patients with neurological, psychiatric or sleep disorders. Through novel machine learning algorithms, large clinical datasets, and advances in software engineering, Beacon Biosignals empowers biopharma companies with unparalleled tools for efficacy monitoring, patient stratification, and clinical trial endpoints from brain data. For more information, visit https://beacon.bio/. For careers, visit https://beacon.bio/careers; for partnership inquiries, visit https://beacon.bio/contact. Follow us on Twitter (@Biosignals) or LinkedIn (https://www.linkedin.com/company/beacon-biosignals).

About Stratus
Stratus is the nation's leading provider of EEG solutions, including ambulatory in-home video EEG. The company has served more than 80,000 patients across the U.S. Stratus offers technology, services, and proprietary software solutions to help neurologists accurately and quickly diagnose their patients with epilepsy and other seizure-like disorders. Stratus also provides mobile cardiac telemetry to support the diagnostic testing needs of the neurology community. To learn more, visit http://www.stratusneuro.com.

MEDIA CONTACT
Megan Moriarty
Amendola Communications for Beacon Biosignals
913.515.7530
mmoriarty@acmarketingpr.com

View original content to download multimedia: https://www.prnewswire.com/news-releases/beacon-biosignals-announces-partnership-with-stratus-to-advance-at-home-brain-monitoring-and-machine-learning-enabled-neurodiagnostics-301543440.html

SOURCE Beacon Biosignals

Excerpt from:
Beacon Biosignals announces partnership with Stratus to advance at-home brain monitoring and machine learning-enabled neurodiagnostics - BioSpace

Researchers From University Of California Irvine Publish Research In Machine Learning (Machine Learning In Ratemaking, An Application In Commercial…

2022 MAY 09 (NewsRx) -- By a News Reporter-Staff News Editor at Insurance Daily News -- Research findings on artificial intelligence are discussed in a new report. According to news reporting out of the University of California Irvine by NewsRx editors, research stated, "This paper explores the tuning and results of two-part models on rich datasets provided through the Casualty Actuarial Society (CAS)."

Financial supporters for this research include Casualty Actuarial Society Award: NA.

Our news correspondents obtained a quote from the research from University of California Irvine: "These datasets include bodily injury (BI), property damage (PD) and collision (COLL) coverage, each documenting policy characteristics and claims across a four-year period. The datasets are explored, including summaries of all variables, then the methods for modeling are set forth. Models are tuned and the tuning results are displayed, after which we train the final models and seek to explain select predictions. Data were provided by a private insurance carrier to the CAS after anonymizing the dataset. These data are available to actuarial researchers for well-defined research projects that have universal benefit to the insurance industry and the public."

According to the news reporters, the research concluded: "Our hope is that the methods demonstrated here can be a good foundation for future ratemaking models to be developed and tested more efficiently."

For more information on this research see: "Machine Learning in Ratemaking, an Application in Commercial Auto Insurance." Risks, 2022, 10(4): 80. (Risks - http://www.mdpi.com/journal/risks). The publisher of Risks is MDPI AG.

A free version of this journal article is available at https://doi.org/10.3390/risks10040080.

Our news editors report that more information may be obtained by contacting Spencer Matthews, Department of Statistics, Donald Bren School of Information and Computer Science, University of California Irvine, Irvine, CA 92697, USA. Additional authors for this research include Brian Hartman.

(Our reports deliver fact-based news of research and discoveries from around the world.)

Read the original post:
Researchers From University Of California Irvine Publish Research In Machine Learning (Machine Learning In Ratemaking, An Application In Commercial...

Beacon Biosignals announces partnership with Stratus to advance at-home brain monitoring and machine learning-enabled neurodiagnostics – PR Newswire


See the rest here:
Beacon Biosignals announces partnership with Stratus to advance at-home brain monitoring and machine learning-enabled neurodiagnostics - PR Newswire

Presentation – David Bolder – Statistics and machine-learning: variations on a theme – Central Banking


Read the original here:
Presentation - David Bolder - Statistics and machine-learning: variations on a theme - Central Banking

Could Machine Learning Reduce Healthcare Costs and Improve Care? – Design News

Today's patients want more control of their own healthcare and health data, but the current healthcare system may be limiting. "Unfortunately, current systems available in the market do not actively take into account patient needs; they are designed to serve physicians and hospitals but are not focused on meeting end-user needs," claims Ajay Panwar, CEO and founder of Pulse, a digital health startup. "When we did our customer analysis, it was one of the popular asks that they need to have better control of their health and also define their risk level for various conditions. The current market doesn't have anything similar to this need."

Panwar hopes to address such patient needs by building simple, interactive systems that utilize machine learning (ML) and predict patient behaviors. "In the first phase of Pulse's development, machine learning models would help predict the patient behaviors based on their care plan and patient compliance, and then predict what is needed to ensure the successful outcomes of their healthcare needs," he told Design News. Later phases would include how to ensure patients are following through with the pattern for the best possible outcomes. "It would also include genetic, family-related health conditions and how those could have an impact on quality of life. We are using unsupervised machine learning models in order to predict these outcomes."

The ML algorithms are specifically designed for each task, he explained further. "The ML models will have the capabilities to take into consideration existing assessments and calculate the user needs," he said. The backend system development is in multiple languages.

The stand-alone software could also enable patients to schedule surgeries, manage chronic conditions and appointments, arrange rehabilitation, and reach dedicated care teams if needed, he explained. "These are just basic examples; however, the end goal is to improve patient outcomes by connecting various pieces of independent healthcare systems."

The interactive systems would give users the ability to control their healthcare. "If they need [to be] hands-off, then the Pulse healthcare team would help them navigate through these challenges," he said.

In a later phase of development, Pulse plans to build APIs to link to medical devices so the patient interface would be able to interact with the devices. "This is a bigger challenge to achieve; currently nothing like this exists," Panwar said.

With multiple phases of platform development planned, Pulse will initially target high-risk populations that have an imminent need for care coordination due to their health conditions and limitations, he said. "We would also build a risk meter that is entirely based on the prediction model with a very limited beta error."

Panwar shared that Pulse reached the semi-finalist round during the prestigious University of California, Irvine New Venture Competition (NVC) last year. The annual competition offers participants the opportunity to launch a startup and fund a business idea in just several months, he said.

As development continues, Pulse continues to engage with angel investors, venture capitalists, and healthcare partners looking to reinvent healthcare and overcome the obstacles causing coordination difficulties and low-value care, the company shared in a news release.

Panwar is a senior engineering manager at Medtronic and has authored articles for Design News's sister publication MD+DI and other publications.

Excerpt from:
Could Machine Learning Reduce Healthcare Costs and Improve Care? - Design News

Steps to perform when your machine learning model overfits in training – Analytics India Magazine

Overfitting is a basic problem in supervised machine learning: the model performs well on seen (training) data but poorly on unseen data, failing to generalise. Overfitting occurs as a result of noise, the small size of the training set, and the complexity of algorithms. In this article, we discuss different strategies for overcoming overfitting during the training stage. Following are the topics to be covered.

Let's start with an overview of overfitting in machine learning models.

A model is overfitting when it memorises all the specific details of the training data and fails to generalise. It is a statistical error caused by poor statistical judgement. Because the model is too closely tied to its data set, bias is introduced: overfitting limits the model's relevance to its own data set and renders it irrelevant to other data sets.

Definition according to statistics

Given a hypothesis space, a hypothesis h is said to overfit the training data if there exists some alternative hypothesis h' such that h has a smaller error than h' over the training examples, but h' has a smaller overall error than h over the entire distribution of instances.


Detecting overfitting is almost impossible before you test on held-out data. During training there are two errors to monitor: the training error and the validation error. When the training error is constantly decreasing while the validation error decreases for a period and then starts to increase, the model is overfitting.
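A minimal sketch of this diagnostic, assuming scikit-learn (the model, data, and epoch count are illustrative): track training and validation log-loss per epoch and watch for the validation curve turning upward:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier
from sklearn.metrics import log_loss
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=50, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

# loss="log_loss" requires scikit-learn >= 1.1 (earlier versions use loss="log").
clf = SGDClassifier(loss="log_loss", random_state=0)
for epoch in range(50):
    clf.partial_fit(X_tr, y_tr, classes=[0, 1])   # one pass over the data
    tr = log_loss(y_tr, clf.predict_proba(X_tr))
    val = log_loss(y_val, clf.predict_proba(X_val))
    print(f"epoch {epoch:2d}  train={tr:.3f}  val={val:.3f}")
    # Overfitting: `val` bottoms out and rises while `tr` keeps falling.
```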

Let's understand the mitigation strategies for this statistical problem.

Different mitigation techniques can be applied at different stages of a machine learning project to reduce overfitting.

High-dimensional data lead to model overfitting because the number of observations is much smaller than the number of features, which leaves the fitting problem under-determined.

Ways to mitigate
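One common mitigation, sketched below under the assumption of scikit-learn (an illustrative choice, not an exhaustive list), is to reduce the feature count with PCA before fitting, so the model sees far fewer dimensions than observations:

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# 100 observations, 2000 features: far more features than samples.
X, y = make_classification(n_samples=100, n_features=2000, n_informative=10,
                           random_state=0)

# Project onto 20 principal components, then fit a simple classifier.
model = make_pipeline(PCA(n_components=20), LogisticRegression(max_iter=1000))
print(cross_val_score(model, X, y, cv=5).mean())
```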

During the process of data wrangling, one can face the problem of outliers in the data. Outliers increase the variance in the dataset, so the model trains itself to these outliers and produces output with high variance and low bias, disturbing the bias-variance tradeoff.

Ways to mitigate

They either require particular attention or should be utterly ignored, depending on the circumstances. If the data set contains a significant number of outliers, it is critical to use a modelling approach that is robust to outliers or to filter the outliers out.
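A minimal sketch of the filtering option using the common 1.5 x IQR rule (NumPy assumed; the data and fences are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 500), [8.0, -9.5, 12.0]])  # 3 planted outliers

q1, q3 = np.percentile(x, [25, 75])
iqr = q3 - q1
lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr   # classic 1.5*IQR fences
mask = (x >= lo) & (x <= hi)              # True for points to keep
print(f"kept {mask.sum()} of {x.size} points")
x_clean = x[mask]
```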

Cross-validation is a resampling technique used to assess machine learning models on a limited sample of data. Cross-validation is primarily used in applied machine learning to estimate a machine learning model's skill on unseen data. That is, to use a limited sample to assess how the model will perform in general when used to generate predictions on data that was not utilised during the model's training.

Evaluation Procedure using K-fold cross-validation

With k set to 5, this procedure is known as 5-fold cross-validation.
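A short sketch of 5-fold cross-validation, assuming scikit-learn (the dataset and estimator are illustrative):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import KFold, cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

# Each of the 5 folds serves once as the held-out evaluation set;
# the mean score estimates performance on unseen data.
cv = KFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(DecisionTreeClassifier(random_state=0), X, y, cv=cv)
print(scores, scores.mean())
```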

This method is used to stop training before the model starts learning noise: because of noise learning, the accuracy of algorithms stops improving beyond a certain point or even worsens.

Picture a plot where the horizontal axis is the epoch and the vertical axis is the error, with a green line representing the training error and a red line representing the validation error. If the model continues to learn beyond a certain point, the validation error will rise while the training error keeps falling. So the goal is to pinpoint the precise epoch at which to discontinue training; there we achieve an ideal fit between under-fitting and overfitting.

Way to achieve the ideal fit

Compute the accuracy on held-out data after each epoch and stop training when that accuracy stops improving. Then use the validation set to pick a good set of values for the hyper-parameters, and use the test set only for the final accuracy evaluation. Compared with using test data directly to determine hyper-parameter values, this method ensures a better level of generality. It also reflects the dynamics of iterative algorithms: at each successive stage of training, bias is reduced while variance increases, and early stopping picks the balance point.
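A hedged sketch of this procedure with a patience counter (scikit-learn assumed; the epoch budget, patience, and model are illustrative choices):

```python
import copy

from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=3000, n_features=40, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

clf = SGDClassifier(loss="log_loss", random_state=0)
best_acc, best_model, patience, stale = 0.0, None, 5, 0
for epoch in range(200):
    clf.partial_fit(X_tr, y_tr, classes=[0, 1])
    acc = clf.score(X_val, y_val)            # validation accuracy this epoch
    if acc > best_acc:
        best_acc, best_model, stale = acc, copy.deepcopy(clf), 0
    else:
        stale += 1
        if stale >= patience:                # no improvement for 5 epochs
            print(f"stopping at epoch {epoch}, best val acc {best_acc:.3f}")
            break
model = best_model                           # keep the best epoch's model
```

For neural networks, libraries such as Keras expose the same idea as an EarlyStopping callback, and scikit-learn's SGD estimators accept early_stopping=True with an internal validation fraction.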

Noise reduction therefore becomes one line of research for inhibiting overfitting. Pruning is recommended to reduce the size of final classifiers in relational learning, particularly in decision tree learning. Pruning is an important principle used to minimise classification complexity by removing less useful or irrelevant data, thereby preventing overfitting and increasing classification accuracy. There are two types of pruning: pre-pruning, which halts tree growth early, and post-pruning, which trims a fully grown tree.
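A sketch of both flavours on a decision tree (scikit-learn assumed; the parameter values are illustrative): pre-pruning caps growth up front via max_depth and min_samples_leaf, while cost-complexity post-pruning (ccp_alpha) trims a fully grown tree:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Pre-pruning: stop growing the tree early.
pre = DecisionTreeClassifier(max_depth=4, min_samples_leaf=10, random_state=0)
# Post-pruning: grow fully, then prune by cost-complexity.
post = DecisionTreeClassifier(ccp_alpha=0.01, random_state=0)

for name, tree in [("pre-pruned", pre), ("post-pruned", post)]:
    tree.fit(X_tr, y_tr)
    print(name, "test accuracy:", round(tree.score(X_te, y_te), 3))
```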

In many circumstances, the size and quality of the training dataset have a considerable impact on machine learning performance, particularly in supervised learning. The model requires enough data to learn its parameters: the number of samples needed grows with the number of parameters.

In other words, an extended dataset can significantly enhance prediction accuracy, particularly in complex models. Existing data can be changed to produce new data. In summary, there are four basic techniques for increasing the training set.
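As a minimal illustration of transforming existing data into new data (NumPy assumed; the noise scale and flip are arbitrary choices, and real pipelines would use domain-appropriate transforms):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 32, 32))            # 100 toy "images", 32x32

noisy   = X + rng.normal(0, 0.05, X.shape)    # add small Gaussian noise
flipped = X[:, :, ::-1]                       # horizontal flip
X_aug = np.concatenate([X, noisy, flipped])   # 3x the original training set
print(X_aug.shape)                            # (300, 32, 32)
```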

When creating a predictive model, feature selection is the process of minimising the number of input variables. It is preferable to limit the number of input variables to lower the computational cost of modelling and, in some situations, to increase the model's performance.

The following are some prominent feature selection strategies in machine learning:
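Two widely used strategies, shown here as an illustrative sketch with scikit-learn (the choice of methods and parameters is an assumption, not the article's original list), are univariate selection with SelectKBest and recursive feature elimination (RFE):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import RFE, SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression

X, y = load_breast_cancer(return_X_y=True)

# Filter method: keep the 10 features with the strongest univariate F-score.
kbest = SelectKBest(f_classif, k=10).fit(X, y)
# Wrapper method: recursively drop the weakest features of a fitted model.
rfe = RFE(LogisticRegression(max_iter=5000), n_features_to_select=10).fit(X, y)

print("SelectKBest picked:", kbest.get_support().nonzero()[0])
print("RFE picked:        ", rfe.get_support().nonzero()[0])
```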

Regularisation is a strategy for preventing our network from learning an overly complicated model and hence overfitting. The model grows more sophisticated as the number of features rises.

An overfitting model takes all characteristics into account, even if some of them have a negligible influence on the final result. Worse, some of them are simply noise that has no bearing on the output. There are two types of strategies to restrict these cases:

In other words, the impact of such ineffective features must be restricted. However, it is uncertain which features are unnecessary, so all of them are shrunk together by minimising an augmented cost function: a penalty term called a regularizer is added to it. There are three popular regularisation techniques.

Instead of discarding less valuable features, this approach assigns lower weights to them, so it can retain as much information as possible. Large weights can only be assigned to features that improve the baseline cost function significantly.
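This weight-shrinking behaviour can be seen directly by comparing L2 (Ridge) and L1 (Lasso) penalties; a sketch assuming scikit-learn, with illustrative alpha values:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso, Ridge

# Only 5 of 20 features carry signal; the rest are uninformative.
X, y = make_regression(n_samples=100, n_features=20, n_informative=5,
                       noise=10.0, random_state=0)

ridge = Ridge(alpha=1.0).fit(X, y)   # L2: shrinks weak weights toward zero
lasso = Lasso(alpha=1.0).fit(X, y)   # L1: drives many weights exactly to zero

print("Ridge zero coefficients:", np.sum(ridge.coef_ == 0))   # typically 0
print("Lasso zero coefficients:", np.sum(lasso.coef_ == 0))   # typically many
```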

Hyperparameters are selection or configuration points that allow a machine learning model to be tailored to a given task or dataset; optimising them is known as hyperparameter tuning. These settings cannot be learnt directly from the standard training procedure.

They are generally fixed before the start of the training procedure. These parameters capture crucial model aspects such as the model's complexity or how quickly it should learn. Models can contain a large number of hyperparameters, and determining the optimal combination can be treated as a search problem.

GridSearchCV and RandomizedSearchCV are two of the most widely used hyperparameter tuning strategies, both implemented in scikit-learn.

GridSearchCV

In the GridSearchCV technique, a search space is defined as a grid of hyperparameter values, and each point in the grid is evaluated.

GridSearchCV has the disadvantage of evaluating every combination of hyperparameters, which makes grid search computationally very costly.
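A minimal GridSearchCV sketch (the estimator, grid values, and dataset are illustrative choices):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)

# 3 x 3 = 9 grid points, each cross-validated 5 times (45 fits in total),
# so cost grows multiplicatively with every added hyperparameter.
grid = {"C": [0.1, 1, 10], "gamma": ["scale", 0.01, 0.001]}
search = GridSearchCV(SVC(), grid, cv=5).fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```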

RandomizedSearchCV

The RandomizedSearchCV technique defines the search space as a bounded domain of hyperparameter values and samples points from it at random. This method eliminates needless computation.
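The equivalent randomized search, sampling a fixed budget of 20 points from log-uniform distributions (SciPy and scikit-learn assumed; the bounds and budget are illustrative):

```python
from scipy.stats import loguniform
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import RandomizedSearchCV
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)

# Sample n_iter points from continuous distributions instead of a fixed grid;
# the cost is controlled by the budget, not the number of hyperparameters.
space = {"C": loguniform(1e-2, 1e2), "gamma": loguniform(1e-4, 1e-1)}
search = RandomizedSearchCV(SVC(), space, n_iter=20, cv=5,
                            random_state=0).fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```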


Overfitting is a general problem in supervised machine learning that cannot be avoided entirely. It occurs as a result of either the limitations of the training data, which might be restricted in size or contain a large amount of noise, or the limitations of algorithms that are too sophisticated and require an excessive number of parameters. With this article, we have covered the concept of overfitting in machine learning and the ways it can be mitigated at different stages of a machine learning project.

Read more:
Steps to perform when your machine learning model overfits in training - Analytics India Magazine