Category Archives: Machine Learning


TUI adds machine learning to optimize its shared-transfer platform – PhocusWire

Global tourism company TUI Group is partnering with Boston-based Mobi Systems to improve the transportation services it provides to customers around the world.

TUI Group says it sold more than 31 million transfers in 2019, moving customers between airports, hotels and points of interest.

Starting this month in Mallorca and then rolling out worldwide, TUI is using a new platform for managing shared transportation such as large and small buses, shuttles and cars that is integrated with Mobi Systems' machine-learning technology.

The system uses TUI's customer booking data, such as flights, hotels and number of customers, along with data about flight delays, traffic, weather and vehicle inventory, to calculate the most efficient transfer plan, updating it in real time and automatically communicating the current route and timing to bus companies, drivers and travelers through the TUI app.
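The article does not describe Mobi's actual algorithms, but the basic idea of recomputing a transfer plan whenever new delay data arrives can be pictured with a deliberately simplified sketch. The flights, hotels, capacities and wait window below are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Booking:
    guest: str
    flight: str
    hotel: str
    scheduled_arrival: int  # minutes after midnight

def plan_transfers(bookings, delays, bus_capacity=4, wait_window=30):
    """Toy heuristic: group guests whose (possibly delayed) arrivals fall
    within a short window onto shared vehicles, up to the bus capacity."""
    adjusted = sorted(
        ((b.scheduled_arrival + delays.get(b.flight, 0), b) for b in bookings),
        key=lambda pair: pair[0],
    )
    plans, current = [], []
    for arrival, booking in adjusted:
        if current and (arrival - current[0][0] > wait_window
                        or len(current) >= bus_capacity):
            plans.append(current)
            current = []
        current.append((arrival, booking))
    if current:
        plans.append(current)
    return plans

bookings = [
    Booking("A", "X1", "Hotel Mar", 600),
    Booking("B", "X1", "Hotel Mar", 600),
    Booking("C", "X2", "Hotel Sol", 615),
]
# Re-plan whenever fresh delay data arrives, as the TUI/Mobi platform does continuously.
for vehicle in plan_transfers(bookings, delays={"X2": 45}):
    print([(b.guest, t) for t, b in vehicle])
```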


"One of the key areas that has always been a source of tension for our guests ... is the airport and the transfer," says Peter Ulwahn, chief digital officer of TUI Musement, the tours and activities division of TUI Group.

"We now have for the first time a technology that can showcase the time to the first hotel, the number of hotels they are stopping at, if their bus is delayed. What we were aiming for was an Uber-style information kind of service that our customers have been getting used to with all the ride-sharing services."

In addition to reducing stress for travelers, Ulwahn says Mobi's machine-learning technology automatically recalculates routes as needed, eliminating time-consuming manual processes and reducing operating costs and CO2 emissions through better vehicle optimization and routing.

"Integrating new technologies, such as machine learning, helps ensure we deliver the best customer experience through having a faster, more stable and more accurate platform," Ulwahn says.

"Our transfer scheduling is already automated, but with Mobi it will be faster: what previously took hours can be done in seconds, and it will continue to become even more efficient. The huge advantage of this system is that it can scale to schedule the millions of transfers we manage, while also enabling us to deliver a personalized customer experience."

The platform is being launched for airport transfers, but Ulwahn says it will eventually be used also for transportation for excursions, multi-day tours and cruise passengers.


Silicon Labs Brings AI and Machine Learning to the Edge with Matter-Ready Platform – inForney.com

AUSTIN, Texas, Jan. 24, 2022 /PRNewswire/ -- Silicon Labs, a leader in secure, intelligent wireless technology for a more connected world, today announced the BG24 and MG24 families of 2.4 GHz wireless SoCs for Bluetooth and Multiple-protocol operations, respectively, and a new software toolkit. This new co-optimized hardware and software platform will help bring AI/ML applications and wireless high performance to battery-powered edge devices. Matter-ready, the ultra-low-power BG24 and MG24 families support multiple wireless protocols and incorporate PSA Level 3 Secure Vault protection, ideal for diverse smart home, medical and industrial applications. The SoC and software solution for the Internet of Things (IoT) announced today includes:

"The BG24 and MG24 wireless SoCs represent an awesome combination of industry capabilities including broad wireless multiprotocol support, battery life, machine learning, and security for IoT Edge applications," said Matt Johnson, CEO of Silicon Labs.

First Integrated AI/ML Acceleration Improves Performance and Energy Efficiency

IoT product designers see the tremendous potential of AI and machine learning to bring even greater intelligence to edge applications like home security systems, wearable medical monitors, sensors monitoring commercial facilities and industrial equipment, and more. But today, those considering deploying AI or machine learning at the edge are faced with steep penalties in performance and energy use that may outweigh the benefits.

The BG24 and MG24 alleviate those penalties as the first ultra-low powered devices with dedicated AI/ML accelerators built-in. This specialized hardware is designed to handle complex calculations quickly and efficiently, with internal testing showing up to a 4x improvement in performance along with up to a 6x improvement in energy efficiency. Because the ML calculations are happening on the local device rather than in the cloud, network latency is eliminated for faster decision-making and actions.

The BG24 and MG24 families also have the largest Flash and random access memory (RAM) capacities in the Silicon Labs portfolio. This means that the device can evolve for multi-protocol support, Matter, and trained ML algorithms for large datasets. PSA Level 3-Certified Secure Vault™, the highest level of security certification for IoT devices, provides the security needed in products like door locks, medical equipment, and other sensitive deployments where hardening the device from external threats is paramount.

To learn more about the capabilities of the BG24 and MG24 SoCs and view a demo on how to get started, register for the instructional Tech Talk "Unboxing the new BG24 and MG24 SoCs" here: https://www.silabs.com/tech-talks.

AI/ML Software and Matter-Support Help Designers Create for New Innovative Applications

In addition to natively supporting TensorFlow, Silicon Labs has partnered with some of the leading AI and ML tools providers, like SensiML and Edge Impulse, to ensure that developers have an end-to-end toolchain that simplifies the development of machine learning models optimized for embedded deployments of wireless applications. Using this new AI/ML toolchain with Silicon Labs's Simplicity Studio and the BG24 and MG24 families of SoCs, developers can create applications that draw information from various connected devices, all communicating with each other using Matter to then make intelligent machine learning-driven decisions.
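The press release contains no code, but since the parts natively support TensorFlow, the general shape of the workflow is: train a small Keras model, then convert it to a TensorFlow Lite flatbuffer that embedded toolchains can flash to a microcontroller. The tiny model and random data below are placeholders; Silicon Labs' own tooling and data formats are not shown.

```python
import numpy as np
import tensorflow as tf

# Placeholder training data: 128-sample windows of sensor readings with binary labels.
x = np.random.rand(256, 128).astype("float32")
y = np.random.randint(0, 2, size=(256, 1)).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(128,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x, y, epochs=3, verbose=0)

# Convert to a TensorFlow Lite flatbuffer suitable for microcontroller runtimes.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # enable default weight quantization
tflite_model = converter.convert()
with open("sensor_model.tflite", "wb") as f:
    f.write(tflite_model)
```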

For example, in a commercial office building, many lights are controlled by motion detectors that monitor occupancy to determine if the lights should be on or off. However, when typing at a desk with motion limited to hands and fingers, workers may be left in the dark when motion sensors alone cannot recognize their presence. By connecting audio sensors with motion detectors through the Matter application layer, the additional audio data, such as the sound of typing, can be run through a machine-learning algorithm to allow the lighting system to make a more informed decision about whether the lights should be on or off.
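The release stays at the level of the scenario, but the fusion idea it describes, combining a motion flag with a simple audio feature and letting a trained classifier decide whether the room is occupied, can be sketched as follows. The features, labels and threshold behaviour are invented for illustration, not Silicon Labs code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy training set: [motion_detected (0/1), audio_energy_in_typing_band (normalized)]
X = np.array([
    [1, 0.1], [1, 0.7], [0, 0.8], [0, 0.6], [1, 0.0],  # occupied examples
    [0, 0.05], [0, 0.0], [0, 0.1],                      # empty-room examples
])
y = np.array([1, 1, 1, 1, 1, 0, 0, 0])  # 1 = occupied, keep lights on

clf = LogisticRegression().fit(X, y)

# At runtime, fuse the two sensor readings into one decision.
motion, typing_energy = 0, 0.65   # typing at a desk, no gross motion detected
occupied = clf.predict([[motion, typing_energy]])[0]
print("lights on" if occupied else "lights off")
```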

ML computing at the edge enables other intelligent industrial and home applications, including sensor-data processing for anomaly detection, predictive maintenance, audio pattern recognition for improved glass-break detection, simple-command word recognition, and vision use cases like presence detection or people counting with low-resolution cameras.

Alpha Program Highlights Variety of Deployment Options

More than 40 companies representing various industries and applications have already begun developing and testing this new platform solution in a closed Alpha program. These companies have been drawn to the BG24 and MG24 platforms by their ultra-low power, advanced features, including AI/ML capabilities and support for Matter. Global retailers are looking to improve the in-store shopping experience with more accurate asset tracking, real-time price updating, and other uses. Participants from the commercial building management sector are exploring how to make their building systems, including lighting and HVAC, more intelligent to lower owners' costs and reduce their environmental footprint. Finally, consumer and smart home solution providers are working to make it easier to connect various devices and expand the way they interact to bring innovative new features and services to consumers.

Silicon Labs' Most Capable Family of SoCs

The single-die BG24 and MG24 SoCs combine a 78 MHz ARM Cortex-M33 processor, high-performance 2.4 GHz radio, industry-leading 20-bit ADC, an optimized combination of Flash (up to 1536 kB) and RAM (up to 256 kB), and an AI/ML hardware accelerator for processing machine learning algorithms while offloading the ARM Cortex-M33, so applications have more cycles to do other work. Supporting a broad range of 2.4 GHz wireless IoT protocols, these SoCs incorporate the highest security with the best RF performance/energy-efficiency ratio in the market.

Availability

EFR32BG24 and EFR32MG24 SoCs in 5 mm x 5 mm QFN40 and 6 mm x 6 mm QFN48 packages are shipping today to Alpha customers and will be available for mass deployment in April 2022. Multiple evaluation boards are available to designers developing applications. Modules based on the BG24 and MG24 SoCs will be available in the second half of 2022.

To learn more about the new BG24 family, go to: http://silabs.com/bg24.

To learn more about the new MG24 family, go to: http://silabs.com/mg24.

To learn more about how Silicon Labs supports AI and ML, go to: http://silabs.com/ai-ml.

About Silicon Labs

Silicon Labs (NASDAQ: SLAB) is a leader in secure, intelligent wireless technology for a more connected world. Our integrated hardware and software platform, intuitive development tools, unmatched ecosystem, and robust support make us an ideal long-term partner in building advanced industrial, commercial, home, and life applications. We make it easy for developers to solve complex wireless challenges throughout the product lifecycle and get to market quickly with innovative solutions that transform industries, grow economies, and improve lives. Silabs.com


SOURCE Silicon Labs


How digital twins expand the scope of deep learning applications – Analytics India Magazine

Ajinkya Bhave, Country Head (India) at Siemens Engineering Services, discussed the rising significance of simulated data in his talk at the MLDS conference, titled "Simulation-driven Machine Learning". He discussed the application of simulated data to train machine learning models in situations impossible with physical data. "At Siemens, the tool connects simulation models and data to ML training frameworks, to train the model at scale using the digital twin," he explained.

He outlined the challenge of the generation and labelling of real-world data and how industries can overcome the hurdles using a digital twin and simulation data. He referred to the Reduced Order Model (ROM), which simplifies a high-fidelity static or dynamical model, preserving essential behaviour and dominant effects to reduce solution time or storage capacity required for the more complex model.

ROM, simulation and digital twins

The reduced-order model helps organisations convert data to models, extend their scope and compute faster. ROM can run your digital twin on embedded devices, in the cloud and on-site. "The basic idea is that the ROM is the catalyst of the digital twin, enabling more applications that weren't possible in the past," he explained.

There are multiple ways to create a ROM, depending on your application area, data, and the model's system. The model can be anywhere from data-driven with machine learning and deep learning, hybrid with statistical models and physics, to a complete physics-based model. "You cannot create a model without domain knowledge that you encapsulate in it. But, equally important, the data matters. All the models require some amount of data," he said.
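The talk does not specify Siemens' ROM tooling, but one common data-driven way to build a reduced-order model is proper orthogonal decomposition: take snapshots of the full simulation, keep the dominant modes from an SVD, and work with the state in that much smaller basis. The snapshot matrix below is synthetic and purely illustrative.

```python
import numpy as np

# Synthetic snapshot matrix: 1000 state variables observed at 50 time steps.
rng = np.random.default_rng(0)
modes = rng.normal(size=(1000, 3))                 # hidden low-dimensional structure
coeffs = rng.normal(size=(3, 50))
snapshots = modes @ coeffs + 0.01 * rng.normal(size=(1000, 50))

# Proper orthogonal decomposition: keep the r dominant left singular vectors.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
r = 3
basis = U[:, :r]                                   # reduced basis (1000 x r)

# Project a full state into the reduced space and reconstruct it.
full_state = snapshots[:, 0]
reduced_state = basis.T @ full_state               # r numbers instead of 1000
reconstructed = basis @ reduced_state
print("relative reconstruction error:",
      np.linalg.norm(full_state - reconstructed) / np.linalg.norm(full_state))
```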

To create ROMs based on a neural network approach, the data can become either a stopping point or an advantage. At Siemens, the team either augments existing physical data, creates synthetic data or cleans/labels existing data.

Simulation plays a huge role in connecting machine learning to the digital twin model. Ajinkya explored this ability through interesting real-world case studies.

Case study 1: Applying synthetic data to deploy the machine in real-world scenarios

Ajinkya walked the audience through a case study of a Siemens client that creates gearboxes for wind turbines. The wind turbines break down due to failures in gearboxes and ball bearings, so the company turned to predictive monitoring to minimise the downtime. While the customer had tons of data, it did not have the distribution needed: most of the data reflected healthy operation, with only one-off events of fault anomalies. To balance the distribution, Siemens used 1D and 3D tools to model the gearbox and the ball bearings around the gears in the company's multiphysics tool for 1D modelling. The model and its parts were simulated as a nonlinear spring-mass-damper system, with some parameters based on real data and others tuned. Fault injections matching the faults the customer was looking for were then applied to the model, producing synthetic time series. Next, statistical noise injection was done to bring the output closer to real-world measurements. Siemens combined the noise-injected signals into time-series data sets and ran them through a neural network to identify faults.

"The idea was that we created synthetic training data, which was then used to train a neural network on a digital twin of the model. Then we tested that on the real faults which occur in the ball bearings of the gearboxes with the physical data. The graph showed us the prediction was pretty accurate," he said. A well-tuned simulation model was thus able to create good training data for the machine learning algorithm, enabling it to predict those faults in real-world data in a real-world deployment.
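Siemens' actual 1D/3D multiphysics models are not shown in the talk, so the sketch below compresses the pipeline into a toy version: simulate a healthy and a faulty vibration signal, inject statistical noise, and train a small classifier to separate them. The signal shapes and fault signature are invented.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 200)

def simulate(fault: bool) -> np.ndarray:
    """Toy stand-in for the spring-mass-damper simulation: a base vibration,
    plus a periodic impulse train when a bearing fault is injected."""
    signal = np.sin(2 * np.pi * 25 * t)
    if fault:
        signal += 0.6 * (np.sin(2 * np.pi * 5 * t) > 0.95)  # injected fault impulses
    return signal + 0.2 * rng.normal(size=t.size)            # statistical noise injection

X = np.array([simulate(fault=i % 2 == 1) for i in range(400)])
y = np.array([i % 2 for i in range(400)])

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0).fit(X, y)
print("training accuracy on synthetic data:", clf.score(X, y))
# In the case study, a network trained on such synthetic data was then
# evaluated against physical measurements of real gearbox faults.
```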

Case study 2: Model predictive control

MPC algorithms need accurate, high-fidelity plant models, but that is not always possible. To that end, a virtual model of the plant is created through a black-box or grey-box modelling approach. The model either represents the complete plant or a sub-system, in the form of a virtual sensor for the parts of the plant that are not measurable. The neural network-based sensor infers, from the physical measurements and a model, the subsystem states the controller needs, which are then given to the MPC. "You have augmented the physical plant along with the unobservable data using a simulation-based approach to help the controller do better than what it would have with only the physical model of the plant," he said.
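The talk stays at the architecture level; a minimal way to picture the virtual-sensor idea is a small regression model that maps measurable plant signals to an unmeasured state, whose estimate is then handed to the controller. The plant dynamics, signals and control rule below are made up, and the proportional action only stands in for a real MPC solve.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)

# Pretend plant: two measurable signals and one unobservable internal state
# that (unknown to the controller) is a nonlinear function of them.
measured = rng.uniform(-1, 1, size=(500, 2))
hidden_state = np.tanh(1.5 * measured[:, 0]) - 0.5 * measured[:, 1] ** 2

# "Virtual sensor": learn the mapping from measurements to the hidden state.
sensor = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                      random_state=0).fit(measured, hidden_state)

def controller_step(measurement):
    """Toy controller step: estimate the hidden state, then pick an action
    that drives the estimate toward zero (placeholder for the MPC optimizer)."""
    estimate = sensor.predict([measurement])[0]
    return -0.8 * estimate

print("control action:", controller_step([0.4, -0.2]))
```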

The ROM and synthetic data can be additionally applied to the neural network of the plant in the MPC for model-based reinforcement learning, autonomous driving and factory robots for a fast but reduced-order model of the plant for the controller to optimise.

Case study 3: Predictive maintenance of pole-mounted transformers

The last case study was about pole-mounted transformers that take high-tension wires and reduce the voltage to 230 V for the safe operation of household appliances. However, given India's diverse temperature conditions, such transformers are a fire risk. An identified cause is the oil level between the coils of the transformer going down, causing it to overheat or spark. To monitor the oil levels of different transformers, the normalised twin concept is used. Siemens retrofitted the transformer infrastructure with a Siemens box containing four temperature sensors and a cloud-based router to send the measurements periodically to the cloud.

This allowed Siemens to infer the oil level, specialise the normalised digital twin for that model and use the live twin to virtually estimate the oil levels. Although this is still an ongoing project, the digital twin built with simulated data has been parameterised and fine-tuned with real parameters from the field.

Lastly, Ajinkya discussed a generative design case study focusing on CFD simulations. ML can be used to adaptively learn the success certainty of simulation runs and reduce the hours of the process to mere minutes.


Machine learning reduced workload for the Cochrane COVID-19 Study Register: development and evaluation of the Cochrane COVID-19 Study Classifier -…


Syst Rev. 2022 Jan 22;11(1):15. doi: 10.1186/s13643-021-01880-6.

ABSTRACT

BACKGROUND: This study developed, calibrated and evaluated a machine learning (ML) classifier designed to reduce study identification workload in maintaining the Cochrane COVID-19 Study Register (CCSR), a continuously updated register of COVID-19 research studies.

METHODS: An ML classifier for retrieving COVID-19 research studies (the Cochrane COVID-19 Study Classifier) was developed using a data set of title-abstract records included in, or excluded from, the CCSR up to 18th October 2020, manually labelled by information and data curation specialists or the Cochrane Crowd. The classifier was then calibrated using a second data set of similar records included in, or excluded from, the CCSR between October 19 and December 2, 2020, aiming for 99% recall. Finally, the calibrated classifier was evaluated using a third data set of similar records included in, or excluded from, the CCSR between the 4th and 19th of January 2021.

RESULTS: The Cochrane COVID-19 Study Classifier was trained using 59,513 records (20,878 of which were included in the CCSR). A classification threshold was set using 16,123 calibration records (6005 of which were included in the CCSR) and the classifier had a precision of 0.52 in this data set at the target threshold recall >0.99. The final, calibrated COVID-19 classifier correctly retrieved 2285 (98.9%) of 2310 eligible records but missed 25 (1%), with a precision of 0.638 and a net screening workload reduction of 24.1% (1113 records correctly excluded).
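The paper's pipeline is not reproduced here, but the calibration step it describes, choosing a score threshold on a held-out set so that recall stays at or above the target and then measuring the precision and workload reduction that threshold implies, looks roughly like the sketch below. The scores and labels are simulated, not Cochrane data.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated calibration set: classifier scores for included (1) and excluded (0) records.
labels = rng.binomial(1, 0.35, size=16000)
scores = np.clip(rng.normal(0.7 * labels + 0.2, 0.18), 0, 1)

def calibrate_threshold(scores, labels, target_recall=0.99):
    """Highest score threshold whose recall on the calibration set meets the target."""
    positives = scores[labels == 1]
    for threshold in np.sort(scores)[::-1]:          # scan thresholds from high to low
        if np.mean(positives >= threshold) >= target_recall:
            return threshold
    return 0.0

t = calibrate_threshold(scores, labels)
flagged = scores >= t
precision = labels[flagged].mean()
workload_reduction = 1 - flagged.mean()   # fraction of records the team need not screen
print(f"threshold={t:.3f} precision={precision:.2f} "
      f"workload reduction={workload_reduction:.1%}")
```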

CONCLUSIONS: The Cochrane COVID-19 Study Classifier reduces manual screening workload for identifying COVID-19 research studies, with a very low and acceptable risk of missing eligible studies. It is now deployed in the live study identification workflow for the Cochrane COVID-19 Study Register.

PMID:35065679 | DOI:10.1186/s13643-021-01880-6


Getting a Read on Responsible AI | The UCSB Current – The UCSB Current

There is great promise and potential in artificial intelligence (AI), but if such technologies are built and trained by humans, are they capable of bias?

"Absolutely," says William Wang, the Duncan and Suzanne Mellichamp Chair in Artificial Intelligence and Designs at UC Santa Barbara, who will give the virtual talk "What is Responsible AI?" at 4 p.m. Tuesday, Jan. 25, as part of the UCSB Library's Pacific Views speaker series (register here).

"The key challenge for building AI and machine learning systems is that when such a system is trained on datasets with limited samples from history, they may gain knowledge from the protected variables (e.g., gender, race, income, etc.), and they are prone to produce biased outputs," said Wang, also director of UC Santa Barbara's Center for Responsible Machine Learning.

"Sometimes these biases could lead to the 'rich getting richer' phenomenon after the AI systems are deployed," he added. "That's why in addition to accuracy, it is important to conduct research in fair and responsible AI systems, including the definition of fairness, measurement, detection and mitigation of biases in AI systems."
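Wang's talk is a lecture announcement rather than a tutorial, but one of the measurements he mentions, checking whether a model's positive decisions are distributed evenly across a protected group, can be illustrated with a simple demographic-parity check. The decisions and group labels here are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(4)

# Placeholder model outputs: binary decisions plus a protected attribute (group A or B).
decisions = rng.binomial(1, 0.5, size=1000)
group = rng.choice(["A", "B"], size=1000)

rate_a = decisions[group == "A"].mean()
rate_b = decisions[group == "B"].mean()
print(f"positive rate A={rate_a:.2f}, B={rate_b:.2f}, "
      f"demographic parity gap={abs(rate_a - rate_b):.2f}")
# A large gap is one signal (not proof) that the model treats the groups differently,
# and would prompt the kind of bias detection and mitigation work Wang describes.
```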

Wang's examination of the topic serves as the kickoff event for UCSB Reads 2022, the campus and community-wide reading program run by UCSB Library. Their new season is centered on Ted Chiang's Exhalation, a short story collection that addresses essential questions about human and computer interaction, including the use of artificial intelligence.

Copies of Exhalation will be distributed free to students (while supplies last) Tuesday, Feb. 1 outside the Library's West Paseo entrance. Additional events announced so far include on-air readings from the book on KCSB, a faculty book discussion moderated by physicist and professor David Weld and a sci-fi writing workshop. It all culminates May 10 with a free lecture by Ted Chiang in Campbell Hall.

First though: William Wang, an associate professor of computer science and co-director of the Natural Language Processing Group.

"In this talk, my hope is to summarize the key advances of artificial intelligence technologies in the last decade, and share how AI can bring us an exciting future," he noted. "I will also describe the key challenges of AI: how we should consider the research and development of responsible AI systems, which not only optimize their accuracy performance, but also provide a human-centric view to consider fairness, bias, transparency and energy efficiency of AI systems."

"How do we build AI models that are transparent? How do we write AI system descriptions that meet disclosive transparency guidelines? How do we consider energy efficiency when building AI models?" he asked. "The future of AI is bright, but all of these are key aspects of responsible AI that we need to address."


How quantum computing is helping businesses to meet objectives – Information Age

Johannes Oberreuter, Quantum Computing practice lead and data scientist at Reply, spoke to Information Age about how quantum computing is helping businesses to meet objectives

Quantum is emerging as a new vehicle for business problem solving.

Quantum computing is an evolving technology that promises to enhance an array of business operations. Based on quantum mechanics, which focuses on the smallest dimensions of nature (molecules, atoms and subatomic particles), quantum computers are set to provide faster solutions to complex business problems, through testing multiple possible solutions for a problem simultaneously.

The basis for quantum computing is a unit of information known as a qubit. Unlike bits, which can only take the values zero or one, a qubit can exist in anything in between, a state called a superposition, and this is what makes the new approach possible. Combined, multiple qubits can produce many outcomes at the same time. Every extra qubit doubles the search space, which therefore grows exponentially.
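A quick way to see the "every extra qubit doubles the search space" point: the state of an n-qubit register is described by 2^n complex amplitudes, so the vector a simulator has to track doubles in length with each added qubit. The snippet below just counts amplitudes; it is not a model of real quantum hardware.

```python
import numpy as np

for n_qubits in range(1, 11):
    # An n-qubit state is a normalised vector of 2**n complex amplitudes.
    amplitudes = np.zeros(2 ** n_qubits, dtype=complex)
    amplitudes[0] = 1.0  # the |00...0> basis state
    print(f"{n_qubits:2d} qubits -> {amplitudes.size:5d} amplitudes")
```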

Many companies are looking into how quantum can bolster industries and provide new use cases for businesses. One organisation that's exploring this space is Reply, which has been developing solutions for optimisation in logistics, portfolio management and fault detection, among other areas.

Discussing how Reply is helping to provide possible use cases to its clients, quantum computing expert Johannes Oberreuter said: "We work on a level which translates the problem into a quantum language that is as universal as possible, and doesn't go too deep into the hardware."

"The first thing we've found that's delivering value now is the domain of optimisation problems. An example is the travelling salesman problem, which has lots of applications in logistics, where complexities and constraints also need to be accounted for, like during the pandemic."

"Very often, problems, which are found too complex to be optimised on common hardware, are tackled by some heuristics. Usually, there's a team or a person with experience in the domain, who can help with this, but they don't know yet that there are better solutions out there now. Quantum computing allows for problems being presented in a structured way similar to a wish list, containing all business complexities. They are all encoded into a so-called objective function, which can then be solved in a structured way."

Companies have used all sorts of algorithms and brain power to try to solve optimisation problems. Finding the optimum with an objective function is still a difficult problem to solve, but here a quantum computer can come to the rescue.
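Reply's client problems are not disclosed, but the "wish list encoded into an objective function" idea usually takes the form of a QUBO (quadratic unconstrained binary optimisation): costs and penalty terms become entries of a matrix Q, and the solver, whether a quantum annealer or a classical routine, searches for the bitstring minimising x^T Q x. The toy problem below (pick exactly one of three options with different costs) is invented for illustration.

```python
import itertools
import numpy as np

# Toy "wish list": choose exactly one of three options with costs 3, 1, 2.
costs = np.array([3.0, 1.0, 2.0])
penalty = 10.0  # weight of the "exactly one option" business constraint

# QUBO matrix: x^T Q x = sum(cost_i * x_i) + penalty * (sum(x_i) - 1)^2 (constant dropped)
n = len(costs)
Q = np.diag(costs)
Q += penalty * (np.ones((n, n)) - 2 * np.eye(n))  # expansion of the squared constraint

def objective(x):
    return x @ Q @ x

# Brute-force search stands in for the annealer on this tiny instance.
best = min(itertools.product([0, 1], repeat=n), key=lambda x: objective(np.array(x)))
print("best assignment:", best)  # expected: (0, 1, 0), the cheapest single option
```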

Pushing parameters

According to Oberreuter, once a quantum computer becomes involved in the problem-solving process, the optimal solution can really be found, allowing businesses to find the best arrangement for the problem. While the current quantum computers suitable for this kind of problem, called quantum annealers, now have over 5,000 qubits, many companies that enlist Reply's services find that their problems require more than 16,000-20,000 variables, which calls for more progress to be made in the space.

"You can solve this by making approximations," commented the Reply data scientist. "We've been writing a program that is determining an approximate solution of this objective function, and we have tested it beyond the usual number of qubits needed."

"The system is set up in a way that prevents running time from increasing exponentially, which results in a business-friendly running time of a couple of seconds. This reduces the quality of the solution, but we get a 10-15% better result than what business heuristics are typically providing."

Through proofs-of-concepts, Reply has been able to help clients to overcome the challenge of a lack of expertise in quantum. By utilising and building up experience in the field, a shoulder-to-shoulder approach helps to clarify how solutions can be developed more efficiently.

Machine learning has risen in prominence over the last few years to aid automation of business processes with data, and help organisations meet goals faster. However, machine learning projects can sometimes suffer from lack of data and computational expense. To combat this, Reply has been looking to the problem solving capabilities brought by quantum computing.

Oberreuter explained: "What we've discovered with quantum machine learning is you can find better solutions, even with the limited hardware that's accessible currently. While there will probably never be an end-to-end quantum machine learning workflow, integration of quantum computing into the current machine learning workflow is useful."

"Some cloud vendors now offer quantum processing units (QPUs). In a deep learning setup for complex tasks, you could easily rent it from the cloud providers by individual calls to experiment, if it improves your current model."

"What we've found interesting from our contribution towards the quantum challenge undertaken by BMW and AWS is the marriage of classical machine learning models with quantum models. The former is really good at extracting attributes from unstructured data such as images, which are then joined by a quantum representation which provides an advantage for classification."
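Neither Reply's models nor the BMW/AWS challenge entry are published here, so the sketch below only illustrates the general hybrid pattern: a classical feature extractor produces a few numbers, which are fed into a small variational quantum circuit that outputs a classification score. PennyLane is used as one example of such a library; the circuit, feature extractor and data are all illustrative assumptions.

```python
import numpy as np
import pennylane as qml

n_qubits = 2
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def quantum_head(features, weights):
    # Encode the classically extracted features as rotation angles, entangle, measure.
    qml.AngleEmbedding(features, wires=range(n_qubits))
    qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    return qml.expval(qml.PauliZ(0))

def classical_extractor(image):
    # Stand-in for a CNN backbone: two crude summary statistics of the image.
    return np.array([image.mean(), image.std()])

rng = np.random.default_rng(5)
image = rng.random((8, 8))
weights = rng.random(qml.StronglyEntanglingLayers.shape(n_layers=1, n_wires=n_qubits))
score = quantum_head(classical_extractor(image), weights)
print("quantum classification score:", score)  # would be thresholded into a class label
```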


Additionally, quantum technologies are being explored for cyber security, with the view that quantum computers will soon be able to solve problems that are currently insurmountable for today's technologies. A particular algorithm cited by Reply that could be broken by quantum computing is RSA key cryptography: while trusted to be secure now, it is estimated that 6,000 error-free qubits could crack it in the space of two weeks.

"Quantum technology for cyber security is now on the shelf, and we're offering this to our clients to defend against this threat," said Oberreuter. "Quantum mechanics have a so-called no-cloning theorem, which prevents users from copying messages sent across a communication channel. The crux is that in order for this to work, you need a specialised quantum channel."

"We have experts who specialise in cyber security, that have been leading the effort to craft an offering for this."

Reply is a network of highly specialised industry companies that helps clients across an array of sectors to optimise and integrate processes, applications and devices using the latest technologies. Established in 1996, the organisation offers services for capabilities including quantum, artificial intelligence (AI), big data, cloud and the Internet of Things (IoT). More information on the services that Reply provides can be found here.

This article was written as part of a paid-for content campaign with Reply


A machine learning model based on tumor and immune biomarkers to predict undetectable MRD and survival outcomes in multiple myeloma – DocWire News


Clin Cancer Res. 2022 Jan 21:clincanres.3430.2021. doi: 10.1158/1078-0432.CCR-21-3430. Online ahead of print.

ABSTRACT

PURPOSE: Undetectable measurable residual disease (MRD) is a surrogate of prolonged survival in multiple myeloma (MM). Thus, treatment individualization based on the probability of a patient to achieve undetectable MRD with a singular regimen, could represent a new concept towards personalized treatment with fast assessment of its success. This has never been investigated; therefore, we sought to define a machine learning model to predict undetectable MRD at the onset of MM.

EXPERIMENTAL DESIGN: This study included 487 newly-diagnosed MM patients. The training (n=152) and internal validation cohort (n=149) consisted of 301 transplant-eligible active MM patients enrolled in the GEM2012MENOS65 trial. Two external validation cohorts were defined by 76 high-risk transplant-eligible smoldering MM patients enrolled in the GEM-CESAR trial, and 110 transplant-ineligible elderly patients enrolled in the GEM-CLARIDEX trial.

RESULTS: The most effective model to predict MRD status resulted from integrating cytogenetic [t(4;14) and/or del(17p13)], tumor burden (bone marrow plasma cell clonality and circulating tumor cells) and immune-related biomarkers. Accurate predictions of MRD outcomes were achieved in 71% of cases in the GEM2012MENOS65 trial (n=214/301), and 72% in the external validation cohorts (n=134/186). The model also predicted sustained MRD negativity from consolidation onto 2-years maintenance (GEM2014MAIN). High-confidence prediction of undetectable MRD at diagnosis identified a subgroup of active MM patients with 80% and 93% progression-free and overall survival rates at five years.
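The published model's exact algorithm and weights are not reproduced here; the snippet below only illustrates the general pattern of an integrative classifier that combines cytogenetic flags, tumor-burden measurements and immune biomarkers into one MRD prediction. All feature values and the outcome are synthetic.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)
n = 300

# Synthetic stand-ins for the three biomarker groups described in the paper.
high_risk_cytogenetics = rng.binomial(1, 0.25, n)        # t(4;14) and/or del(17p13)
tumor_burden = rng.normal(0, 1, (n, 2))                  # plasma cell clonality, CTCs
immune_profile = rng.normal(0, 1, (n, 3))                # immune-related biomarkers
X = np.column_stack([high_risk_cytogenetics, tumor_burden, immune_profile])

# Synthetic outcome loosely tied to the features (1 = undetectable MRD achieved).
logit = -1.2 * high_risk_cytogenetics - 0.8 * tumor_burden[:, 0] + 0.5 * immune_profile[:, 0]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("held-out accuracy on synthetic data:", model.score(X_te, y_te))
```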

CONCLUSION: It is possible to accurately predict MRD outcomes using an integrative, weighted model defined by machine learning algorithms. This is a new concept towards individualized treatment in MM.

PMID:35063966 | DOI:10.1158/1078-0432.CCR-21-3430


Associate / Full Professor of Theoretical Biophysics and Machine Learning job with RADBOUD UNIVERSITY NIJMEGEN | 278686 – Times Higher Education (THE)

Associate / Full Professor of Theoretical Biophysics and Machine Learning

A world from which we demand more and more requires people who can make a contribution. Critical thinkers who will take a closer look at what is really important. As a Professor, you will perform leading research and teach students in the area of theoretical biophysics and physics-based machine learning, to strengthen the role and visibility of the international Theoretical Biophysics landscape.

As a successful candidate you will join the Department of Biophysics at the Donders Center for Neuroscience (DCN) and perform internationally leading theoretical research in an area of theoretical biophysics or physics-based machine learning. You are interested in applications of theoretical biophysics methods to neuroscience problems studied in the DCN, and you will engage actively in interdisciplinary research collaborations with other physicists in the Faculty of Science and with external partners. You will contribute to the teaching and the innovation of Radboud's popular theoretical machine learning and biophysics courses, and possibly contribute to other core undergraduate physics subjects taught at the Faculty of Science. You will supervise students' research projects at the Bachelor's, Master's and PhD levels. Finally, you will contribute to the effective administration of Radboud University and the acquisition of research funding, and will strengthen the role and visibility of Radboud University in the international Theoretical Biophysics landscape.

Profile

We are

The Donders Institute for Brain, Cognition and Behaviour of Radboud University seeks to appoint a Professor of Theoretical Biophysics and Machine Learning. The Donders Institute is a world-class research institute, housing more than 700 researchers devoted to understanding the mechanistic underpinnings of the human mind/brain. Research at the Donders Institute focuses on four themes:

Language and Communication

Perception, Action, and Decision-making

Development and Lifelong Plasticity

Natural Computing and Neurotechnology.

We have excellent and state-of-the-art research facilities available for a broad range of neuroscience research. The Donders Institute fosters a collaborative, multidisciplinary, supportive research environment with a diverse international staff. English is the lingua franca at the Institute.

You will join the academic staff of the Donders Center for Neuroscience (DCN) - one of the four Donders Centers at Radboud University's Faculty of Science. The Biophysics Department is part of the DCN. Neurophysicists at DCN mainly conduct experimental, theoretical and computational research into the principles of information processing by the brain, with particular focus on the mammalian auditory and visual systems. The Physics of Machine Learning and Complex Systems Group studies a broad range of theoretical topics, ranging from physics-based machine learning paradigms and quantum machine learning, via Bayesian inference and applications of statistical mechanics techniques in medical statistics, to network theory and the modelling of heterogeneous many-variable processes in physics and biology. The group engages in multiple national and international research collaborations, and participates in several multidisciplinary initiatives that support theoretical biophysics and machine learning research and teaching at Radboud University.

Radboud University actively supports equality, diversity and inclusion, and encourages applications from all sections of society. The university offers customised facilities to better align work and private life. Parents are entitled to partly paid parental leave and Radboud University employees enjoy flexibility in the way they structure their work. The university highly values the career development of its staff, which is facilitated by a variety of programmes. The Faculty of Science is an equal opportunity employer, committed to building a culturally diverse intellectual community, and as such encourages applications from women and minorities.

Radboud University

We want to get the best out of science, others and ourselves. Why? Because this is what the world around us desperately needs. Leading research and education make an indispensable contribution to a healthy, free world with equal opportunities for all. This is what unites the more than 24,000 students and 5,600 employees at Radboud University. And this requires even more talent, collaboration and lifelong learning. You have a part to play!

We offer

Additional employment conditions

Work and science require good employment practices. This is reflected in Radboud University's primary and secondary employment conditions. You can make arrangements for the best possible work-life balance with flexible working hours, various leave arrangements and working from home. You are also able to compose part of your employment conditions yourself, for example, exchange income for extra leave days and receive a reimbursement for your sports subscription. And of course, we offer a good pension plan. You are given plenty of room and responsibility to develop your talents and realise your ambitions. Therefore, we provide various training and development schemes.

Would you like more information?

For questions about the position, please contact Ton Coolen, Professor at +31 24 361 42 45 or ton.coolen@donders.ru.nl.

Practical information and applications

You can apply until 25 February 2022, exclusively using the button below. Kindly address your application to Ton Coolen. Please fill in the application form and attach the following documents:

The first round of interviews will take place around the end of March. You would preferably begin employment on 1 September 2022.

This vacancy was also published in a slightly modified form in 2021. Applicants who were rejected at that time are kindly requested not to apply again.

We can imagine you're curious about our application procedure. It offers a rough outline of what you can expect during the application process, how we handle your personal data and how we deal with internal and external candidates.

We drafted this vacancy to find and hire our new colleague ourselves. Recruitment agencies are kindly requested to refrain from responding.


Heard on the Street 1/24/2022 – insideBIGDATA

Welcome to insideBIGDATA's Heard on the Street round-up column! In this regular feature, we highlight thought-leadership commentaries from members of the big data ecosystem. Each edition covers the trends of the day with compelling perspectives that can provide important insights to give you a competitive advantage in the marketplace. We invite submissions with a focus on our favored technology topic areas: big data, data science, machine learning, AI and deep learning. Enjoy!

COVID-19: A Data Tsunami That Ushered in Unprecedented Opportunities for Businesses and Data Scientists. Commentary by Thomas Hazel, founder & CTO at ChaosSearch

From creating volatile data resources to negatively impacting forecasting models, there have been countless challenges the pandemic has caused for organizations that rely on data to inform business decisions. However, there is also an upside to the data tsunami that COVID-19 created. The movement to all-things-digital translated into a tsunami of log data streaming from these digital systems. All this data presented an incredible opportunity for companies to deeply understand their customers and then tailor customer and product experiences. However, they'd need the right tools and processes in place to avoid being overwhelmed by the volume of data. The impact spans all industries, from retail to insurance to education. Blackboard is a perfect example. The world-leading EdTech provider was initially challenged at the start of the pandemic with the surge of daily log volumes from students and school systems that moved online seemingly overnight. The company quickly realized they needed a way to efficiently analyze log data for real-time alerts and troubleshooting, as well as a method to access long-term data for compliance purposes. To accomplish this, Blackboard leverages its data lake to monitor cloud deployments, troubleshoot application issues, maximize uptime, and deliver on data integrity and governance for highly sensitive education data. This use case demonstrates just how important data has become to organizations that rely on digital infrastructure and how a strong data platform is a must to reduce the time, cost, and complexity of extracting insights from data. While the pandemic created this initial data tsunami, tech-driven organizations that have evolved to capitalize on its benefits, like Blackboard, have accepted that this wave of data is now a constant force that they will have to manage more effectively for the foreseeable future.

Cloud Tagging Best Practices. Commentary by Keith Neilson, Technical Evangelist at CloudSphere

While digital transformation has been on many organizations' priority lists for years, the Covid-19 pandemic applied more pressure and urgency to move this forward. Through their modernization efforts, companies have unfortunately wasted time and resources on unsuccessful data deployments, ultimately jeopardizing company security. For optimal cyber asset management, consider the following cloud tagging best practices: Take an algorithmic approach to tagging. While tags can represent simple attributes of an asset (like region, department, or owner), they can also assign policies to the asset. This way, assets can be effectively governed, even on a dynamic and elastic platform. Next, optimize tagging for automation and scalability. Proper tagging will allow for vigorous infrastructure provisioning for IT financial management, greater scalability and automated reporting for better security. Finally, be sure to implement consistent cloud tagging processes and parameters within your organization. Designate a representative to enforce certain tagging formulas, retroactively tag when IT personnel may have added assets or functions that they didn't think to tag and reevaluate business outputs to ensure tags are effective. While many underestimate just how powerful cloud tagging can be, the companies embracing this practice will ultimately experience better data organization, security, governance and system performance.

Using AI to improve the supply chain. Commentary by Melisa Tokmak, GM of Document AI, Scale AI

As supply chain delays continue to threaten businesses at the beginning of 2022, AI can be a crucial tool for logistics companies to speed up their supply chain as the pandemic persists. Logistics and freight forwarding companies are required to process dozens of documents such as bills of lading, commercial invoices and arrival notices fast, and with the utmost accuracy, in order to report data to Customs, understand changing delivery timelines, and collect and analyze data about moving goods to paint a picture of global trade. For already overtaxed and paperwork-heavy systems, manual processing and human error are some of the most common points of failure, which exacerbate shipping delays and result in late cargo, delayed cash flow and hefty fines. As logistics companies have a wealth of information buried in the documents they process, updating databases with this information is necessary to make supply chains more predictable globally. Most companies spend valuable time analyzing inconsistent data or navigating OCR and template-based solutions, which aren't effective due to the high variability of data in these documents. Machine learning-based, end-to-end document processing solutions, such as Scale AI's Document AI, don't rely on templates and can automate this process; AI solutions allow logistics companies to leverage the latest industry research without changing their developer environment. This way, companies can focus on using their data to cater to customers and serve the entire logistics industry, rather than spending valuable time and resources on data-mining. ML-based solutions can extract the most valuable information accurately in seconds, accelerating internal operations and reducing the number of times containers are opened for checks, decreasing costs and shipping delays significantly. Using Scale's Document AI, freight forwarding leader Flexport achieved significant cost savings in operations and decreased the processing time of each document. Flexport's documents were formerly processed in over two days, but with Document AI, were processed in less than 60 seconds with 95%+ accuracy, all without having to build and maintain a team of machine learning engineers and data scientists. As COVID has led to a breakdown of internal processes, AI-powered document processing solutions are helping build systems back up: optimizing operations to handle any logistic needs that come their way at such a crucial time.

IBM to Sell Watson Health. Commentary by Paddy Padmanabhan, Founder and CEO of Damo Consulting

IBM's decision to sell the Watson Health assets is not an indictment of the promise of AI in healthcare. Our research indicates AI was one of the top technology investments for health systems in 2021. Sure, there are challenges such as data quality and bias in the application of AI in the healthcare context, but by and large there has been progress with AI in healthcare. The emergence of other players, notably Google with its Mayo partnership, or Microsoft with its partnership with healthcare industry consortium Truveta, are strong indicators of progress.

Data Privacy Day 2022 Commentary. Commentary by Lewis Carr, Senior Director, Product Marketing at Actian

In 2022, expect to see all personal information and data sharing options get more granular as to how we control them, both on our devices and in the cloud, specific to each company, school or government agency. We'll also start to get some visibility into and control over how our data is shared between organizations without us involved. Companies and public sector organizations will begin to pivot away from the binary options (opt-in or opt-out) tied to a lengthy legal letter that no one will read and will instead provide the data management and cybersecurity platforms with granular permission to parts of your personal data, such as where it's stored, for how long, and under what circumstances it can be used. You can also expect new service companies to sprout up that will offer intermediary support to monitor and manage your data privacy across.

Data Privacy Day 2022 Commentary. Commentary by Rob Price, Principal Expert Solution Consultant at Snow Software

The adoption of cloud technology has been a critical component to how we approach privacy and data protection today. A common misconception is that if your data is offsite or cloud-based it's not your problem, but that is not true because the cloud is not a data management system. Two fundamental factors for data protection and security are the recovery point objective (how old can data be when you recover it) and the recovery time objective (how quickly can you recover the data). Every company's needs are different, but these two factors are important when planning for data loss.

