
Mydecine Innovations kicks off machine learning-based drug discovery program with the University of Alberta – Proactive Investors USA & Canada

The program will enable the company to more rapidly screen hundreds of thousands of new molecules without the need to produce them, allowing Mydecine to focus on the strongest potential therapeutics

Mydecine Innovations has launched its in-silico drug discovery program in conjunction with researchers at the University of Alberta (UofA), the company announced.

Led by Khaled Barakat, a computer-assisted drug development expert and UofA assistant professor at the Li Ka Shing Institute of Virology, the program is focused on utilizing artificial intelligence/machine learning (AI/ML) to support drug screenings, including both the ability to build drugs from the receptor up and to assess drugs against receptors of Mydecine's choosing.

The in-silico (read: computer simulated) program will enable the company to more rapidly screen hundreds of thousands of new molecules without the need to produce them, allowing Mydecine to focus on the strongest potential therapeutics for its chemical and natural development programs, the company said.

Mydecine will also be able to more efficiently screen its own proprietary library of novel compounds designed by Chief Science Officer Rob Roscow and advisory board member Denton Hoyer.

"Years of research have shown that the chemical components of psychoactive and non-psychoactive mushrooms can be extremely powerful in a therapeutic setting and yet, there is still so much that we don't understand about how these molecules can affect biological systems," CEO Josh Bartch said in a statement.

"As the next evolution of drug discovery progresses forward, we strongly believe that this new age will be fully led by artificial intelligence and machine learning. Expanding our R&D efforts with the addition of our cutting-edge AI/ML drug screening program will allow our research teams to take a leading role within the psychedelic community to more efficiently expand our knowledge of these components and their pharmacological value."

At UofA, Barakat and his team specialize in understanding the nature and biophysical processes underlying protein-drug interaction, protein-protein interactions, protein-DNA interactions, drug off-target interactions and predicting drug-mediated toxicity.

"Dr. Barakat and his team have built an impressive reputation as leaders at the intersection of technology and pharmacological science," Bartch said. "Adding their specialization in developing innovative computer models and novel technologies to predict protein-protein and protein-drug interactions will bring tremendous value to Mydecine's research and enable us to more quickly bring to market effective drugs that can produce better outcomes for patients."

Contact Andrew Kessel at andrew.kessel@proactiveinvestors.com

Follow him on Twitter @andrew_kessel

Originally posted here:
Mydecine Innovations kicks off machine learning-based drug discovery program with the University of Alberta - Proactive Investors USA & Canada


2 supervised learning techniques that aid value predictions – TechTarget

This article is excerpted from the course "Fundamental Machine Learning," part of the Machine Learning Specialist certification program from Arcitura Education. It is the ninth part of the 13-part series, "Using machine learning algorithms, practices and patterns."

This article explores the numerical prediction and category prediction supervised learning techniques. These machine learning techniques are applied when the target whose value needs to be predicted is known in advance and some sample data is available to train a model. As explained in Part 4, these techniques are documented in a standard pattern profile format.

A data set may contain a number of historical observations (rows) amassed over a period of time where the target value is numerical in nature and is known for those observations. An example is the number of ice creams sold and the temperature readings, where the number of ice creams sold is the target variable. To obtain value from this data, a business use case might require a prediction of how much ice cream will be sold if the temperature reading is known in advance from the weather forecast. As the target is numerical in nature, supervised learning techniques that work with categorical targets cannot be applied (Figure 1).

The historical data is capitalized upon by first finding independent variables that influence the target dependent variable and then quantifying this influence in a mathematical equation. Once the mathematical equation is complete, the value of the target variable is predicted by inputting the values of the independent variables.

The data set is first scanned to find the best independent variables by applying the associativity computation pattern to find the relationship between the independent variables and the dependent variable. Only the independent variables that are highly correlated with the dependent variable are kept. Next, linear regression is applied.
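
A minimal sketch of that screening step in Python; the column names, toy values and the 0.7 correlation threshold are illustrative assumptions, not part of the pattern profile:

```python
import numpy as np
import pandas as pd

# Hypothetical historical observations: temperature is the candidate
# independent variable, ice_creams_sold is the numerical target.
df = pd.DataFrame({
    "temperature": [18, 21, 24, 27, 30, 33],
    "ice_creams_sold": [120, 150, 200, 240, 310, 360],
})

# Associativity computation: keep only independent variables that are
# highly correlated with the dependent variable (threshold is illustrative).
correlations = df.corr()["ice_creams_sold"].drop("ice_creams_sold")
selected = correlations[correlations.abs() > 0.7].index.tolist()

# Linear regression (least squares) on the selected variable.
slope, intercept = np.polyfit(df[selected[0]], df["ice_creams_sold"], deg=1)
forecast_temperature = 29
print(intercept + slope * forecast_temperature)  # predicted ice cream sales
```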

Linear regression, also known as least squares regression, is a statistical technique for predicting the values of a continuous dependent variable based on the values of an independent variable. The dependent and independent variables are also known as response and explanatory variables, respectively. As a mathematical model of the relationship between the response and explanatory variables, linear regression assumes that a linear correlation exists between them. This linear correlation is represented through the line of best fit, also called a regression line: a straight line that passes as closely as possible through all points on the scatter plot (Figure 2).

Linear regression model development starts by expressing the linear relationship. Once the mathematical form has been established, the next step is to estimate the parameters of the model via model fitting. This determines the line of best fit, achieved via least squares estimation, which aims to minimize the sum of squared error (SSE). The last stage is to evaluate the model using either R squared or mean squared error (MSE).
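
For a single explanatory variable, the fitted line and its least squares estimates take the standard form:

\[
\hat{y} = b_0 + b_1 x, \qquad
\mathrm{SSE} = \sum_{i=1}^{n} (y_i - \hat{y}_i)^2, \qquad
b_1 = \frac{\sum_i (x_i - \bar{x})(y_i - \bar{y})}{\sum_i (x_i - \bar{x})^2}, \qquad
b_0 = \bar{y} - b_1 \bar{x}.
\]

Model fitting chooses b0 and b1 so that the SSE term is as small as possible.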

MSE is a measure that determines how close the line of best fit is to the actual values of the response variable. Being a straight line, the regression line cannot pass through every point; it provides an approximation of the actual values of the response variable in the form of estimated values. The distance between the actual and the estimated value of the response variable is the error of estimation. For the best possible estimate of the response variable, the errors across all points, as represented by the sum of squared error, must be minimized. The line of best fit is the line that results in the minimum possible sum of squared errors. In other words, MSE quantifies the variation between the actual value and the estimated value of the response variable as provided by the regression line (Figure 3).

The coefficient of determination, called R squared, is the percentage of variation in the response variable that is predicted or explained by the explanatory variable, with values that vary between 0 and 1. A value equal to 0 means that the response variable cannot be predicted from the explanatory variable, while a value equal to 1 means the response variable can be predicted without any errors. A value between 0 and 1 provides the percentage of successful prediction.
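
Both evaluation measures follow directly from the SSE above:

\[
\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n} (y_i - \hat{y}_i)^2, \qquad
R^2 = 1 - \frac{\mathrm{SSE}}{\mathrm{SST}}, \qquad
\mathrm{SST} = \sum_{i=1}^{n} (y_i - \bar{y})^2.
\]

An R squared of 0.8, for example, means that 80 percent of the variation in the response variable is explained by the explanatory variable.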

In regression, more than one explanatory variable can be used simultaneously for predicting the response variable, in which case it is called multiple linear regression.

The numerical prediction pattern can benefit from the application of the graphical summaries computation pattern by drawing a scatter plot to graphically validate if a linear relationship exists between the response and explanatory variables (Figure 4).

There are cases where a business problem involves predicting a category -- such as whether a customer will default on their loan or whether an image is a cat or a dog -- based on historical examples of defaulters and cats and dogs, respectively. In this case, the categories (default/not default and cat/dog) are known in advance. However, as the target class is categorical in nature, numerical predictive algorithms cannot be applied to train and predict a model for classification purposes (Figure 5).

Supervised machine learning techniques are applied by selecting a problem-specific machine learning algorithm and developing a classification model. This involves first using the known example data to train a model. The model is then fed new unseen data to find out the most appropriate category to which the new data instance belongs.

Different machine learning algorithms exist for developing classification models. For example, naive Bayes is probabilistic while K-nearest neighbors (KNN), support vector machine (SVM), logistic regression and decision trees are deterministic in nature. Generally, in the case of a binary problem -- cat or dog -- logistic regression is applied. If the feature space is n-dimensional (a large number of features) with complex interactions between the features, KNN is applied. Naive Bayes is applied when there is not enough training data or fast predictions are required, while decision trees are a good choice when the model needs to be explainable.
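
A hedged sketch of how that algorithm selection might look in practice with scikit-learn; the synthetic dataset and default hyperparameters below are illustrative assumptions, not recommendations:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

# Synthetic binary problem standing in for cat/dog or default/not default.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

candidates = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "k-nearest neighbors": KNeighborsClassifier(n_neighbors=5),
    "naive Bayes": GaussianNB(),
    "decision tree": DecisionTreeClassifier(max_depth=4),
}

for name, model in candidates.items():
    model.fit(X_train, y_train)               # train on known examples
    print(name, model.score(X_test, y_test))  # accuracy on unseen data
```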

Logistic regression is based on linear regression and is also considered a class probability estimation technique, since its objective is to estimate the probability of an instance belonging to a particular class.

KNN, also known as lazy learning and instance-based learning, is a black-box classification technique where instances are classified based on their similarity, with a user-defined (K) number of examples (nearest neighbors). No model is explicitly generated. Instead, the examples are stored as-is and an instance is classified by first finding the closest K examples in terms of distance, then assigning the class based on the class of the majority of the closest examples (Figure 6).
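
Because KNN stores the examples as-is and defers all work to prediction time, the whole technique fits in a short sketch; the toy data and the choice of K=3 are illustrative:

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x_new, k=3):
    # Lazy learning: no model is built; distances are computed at prediction time.
    distances = np.linalg.norm(X_train - x_new, axis=1)  # distance to every example
    nearest = np.argsort(distances)[:k]                  # indices of the K closest
    votes = Counter(y_train[nearest])                    # classes of those neighbors
    return votes.most_common(1)[0][0]                    # majority class wins

X_train = np.array([[1.0, 1.1], [0.9, 1.0], [5.0, 5.2], [5.1, 4.9]])
y_train = np.array(["cat", "cat", "dog", "dog"])
print(knn_predict(X_train, y_train, np.array([4.8, 5.0])))  # -> "dog"
```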

Naive Bayes is a probability-based classification technique that predicts class membership based on the previously observed probability of all potential features. This technique is used when a combination of a number of features, called evidence, affects the determination of the target class. Due to this characteristic, naive Bayes can take into account features that may be insignificant when considered on their own but when considered accumulatively can significantly impact the probability of an instance belonging to a certain class.

All features are assumed to carry equal significance, and the value of one feature is not dependent on the value of any other feature. In other words, the features are independent. It serves as a baseline classifier for comparing more complex algorithms and can also be used for incremental learning, where the model is updated based on new example data without the need for regenerating the whole model from scratch.
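
Under that independence assumption, the class probability naive Bayes works with factorizes in the standard way:

\[
P(c \mid x_1, \dots, x_n) \;\propto\; P(c) \prod_{i=1}^{n} P(x_i \mid c),
\]

and the predicted class is the one that maximizes this product.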

A decision tree is a classification algorithm that represents a concept in the form of a hierarchical set of logical decisions with a tree-like structure that is used to determine the target value of an instance. [See discussion of decision trees in part 2 of this series.] Logical decisions are made by performing tests on the feature values of the instances in such a way that each test further filters the instance until its target value or class membership is known. A decision tree resembles a flowchart consisting of decision nodes, which perform a test on the feature value of an instance, and leaf nodes, also known as terminal nodes, where the target value of the instance is determined as a result of traversal through the decision nodes.

The category prediction pattern normally requires the application of a few other patterns. In the case of logistic regression and KNN, applying the feature encoding pattern ensures that all features are numerical as these two algorithms only work with numerical features. The application of the feature standardization pattern in the case of KNN ensures that none of the large magnitude features overshadow smaller magnitude features in the context of distance measurement. Naive Bayes requires the application of the feature discretization pattern as naive Bayes only works with nominal features. KNN can also benefit from the application of feature discretization pattern via a reduction in feature dimensionality, which contributes to faster execution and increased generalizability of the model.
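
A sketch of how those supporting patterns map onto common scikit-learn preprocessing steps; the transformer settings and toy values are illustrative assumptions, not part of the pattern profiles:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import KBinsDiscretizer, OneHotEncoder, StandardScaler

# Feature encoding pattern: logistic regression and KNN need numerical
# inputs, so categorical columns are one-hot encoded first.
encoder = OneHotEncoder(handle_unknown="ignore")

# Feature standardization pattern: keeps large-magnitude features from
# dominating KNN's distance calculations.
knn_model = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))

# Feature discretization pattern: bins continuous values into nominal
# ranges, as required before applying naive Bayes.
discretizer = KBinsDiscretizer(n_bins=5, encode="ordinal", strategy="quantile")

ages = np.array([[23.0], [31.0], [37.0], [45.0], [52.0], [60.0]])
print(discretizer.fit_transform(ages).ravel())  # one nominal bin index per value
```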

The next article covers the category discovery and pattern discovery unsupervised learning patterns.

Read the original post:
2 supervised learning techniques that aid value predictions - TechTarget


AWS leader talks about technologies needed to take precision medicine to the next level – Healthcare IT News

One of the most significant challenges to the advancement of precision medicine has been the lack of an infrastructure to support translational bioinformatics, supporting organizations as they work to uncover unique datasets to find novel associations and signals.

By supporting greater interoperability and collaboration, data scientists, developers, clinicians and pharmaceutical partners have the opportunity to leverage machine learning to reduce the time it takes to move from insight to discovery, ultimately leading to the right patients receiving the right care, with the right therapeutic at the right time.

To get a better understanding of challenges surrounding precision medicine and its future, Healthcare IT News sat down with Dr. Taha Kass-Hout, director of machine learning at AWS.

Q: You've said that one of the most significant challenges to the advancement of precision medicine has been the lack of an infrastructure to support translational bioinformatics. Please explain this challenge in detail.

A: One of the challenges in developing and utilizing storage, analytics and interpretive methods is the sheer volume of biomedical data that needs to be transformed that often resides on multiple systems and in multiple formats. The future of healthcare is so vibrant and dynamic and there is an opportunity for cloud and big data to take on a larger role to help the industry address these areas.

For example, datasets used to perform tasks such as computational chemistry and molecular simulations, which help de-risk and advance molecules into development, contain millions of data points and require billions of calculations to produce an experimental output. In order to bring new therapeutics to market faster, scientists need to move targets through development faster and find more efficient ways to collaborate both inside and outside of their organizations.

Another challenge is that large volumes of data acquired by legacy research equipment, such as microscopes and spectrometers, is usually stored locally. This creates a barrier for securely archiving, processing and sharing with collaborating researchers globally. Improving access to data, securely and compliantly, while increasing usability is critical to maximizing the opportunities to leverage analytics and machine learning.

For instance, Dotmatics' cloud-based software provides simple, unified, real-time access to all research data in Dotmatics and third-party databases, coupled with integrated, scientifically aware informatics solutions for small molecule and biologics discovery that expedite laboratory workflows and capture experiments, entities, samples and test data so that in-house or multi-organizational research teams become more efficient.

Today we are seeing a rising wave of healthcare organizations moving to the cloud, which is enabling researchers to unite R&D data with information from across the value chain, while benefiting from compute and storage options that are more cost-effective than on-premises infrastructure.

For large datasets in the R&D phase, large-scale, cloud-based data transfer services can transfer hundreds of terabytes and millions of files at speeds up to 10 times faster than open-source tools. Storage gateways ensure experimental data is securely stored, archived and available to other permissioned collaborators. Uniting data in a data lake improves access and helps to eliminate silos.

Cloud-based hyperscale computing and machine learning enable organizations to collaborate across datasets, create and leverage global infrastructures to maintain data integrity, and more easily perform machine learning-based analyses to accelerate discoveries and de-risk candidates faster.

For example, six years ago Moderna started building databases and information-based activities to support all of their programs. Today, they are fully cloud-based, and their scientists don't go to the lab to pipette their messenger RNA and proteins. They go to their web portal, the Drug Design Studio, which runs on the cloud.

Through the portal, scientists can access public and private libraries that contain all the messenger RNA that exists and the thousands of proteins they can produce. Then, they only need to press a button and the sequence goes to a fully automated, central lab where data is collected at every step.

Over the years, data from the portal and lab has helped Moderna improve their sequence design and production processes and improve the way their scientists gather feedback. In terms of research, all of Moderna's algorithms rely on computational power from the cloud to further their science.

Q: You contend that by supporting greater interoperability and collaboration, data scientists, developers, clinicians and pharmaceutical partners have the opportunity to leverage machine learning to reduce the time it takes to move from insight to discovery. Please elaborate on machine learning's role here in precision medicine.

A: For the last decade, organizations have focused on digitizing healthcare. In the next decade, making sense of all this data will provide the biggest opportunity to transform care. However, this transformation will primarily depend on data flowing where it needs to, at the right time, and supporting this process in a way that is secure and protects patients' health data.

It comes down to interoperability. It may not be the most exciting topic, but it's by far one of the most important, and one the industry needs to prioritize. By focusing on interoperability of information and systems today, we can ensure that we end up in a better place in 10 years than where we are now. And so, everything around interoperability, including security, identity management and differential privacy, is likely to be part of this future.

Machine learning models trained to support healthcare and life sciences organizations can help automatically normalize, index and structure data. This approach has the potential to bring data together in a way that creates a more complete view of a patient's medical history, making it easier for providers to understand relationships in the data and compare this to the rest of the population, drive increased operational efficiency, and have the ability to use data to support better patient health outcomes.

For example, AstraZeneca has been experimenting with machine learning across all stages of research and development, and most recently in pathology to speed up the review of tissue samples. Labeling the data is a time-consuming step, especially in this case, where it can take many thousands of tissue-sample images to train an accurate model.

AstraZeneca uses a machine learning-powered, human-in-the-loop data-labeling and annotation service to automate some of the most tedious portions of this work, resulting in at least 50% less time spent cataloging samples.

It also helps analysts spot trends and anomalies in the health data and derive actionable insights to improve the quality of patient care, make predictions for medical events such as stroke or congestive heart failure, modernize care infrastructure, increase operational efficiency and scale specialist expertise.

Numerate, a discovery-stage pharmaceutical company, uses machine learning technologies to more quickly and cost-effectively identify novel molecules that are most likely to progress through the research pipeline and become good candidates for new drug development.

The company recently used its cloud-based platform to rapidly discover and optimize ryanodine receptor 2 (RYR2) modulators, which are being advanced as new drugs to treat life-threatening cardiovascular diseases.

Ryanodine 2 is a difficult protein to target, but the cloud made that process easier for the company. Traditional methods could not have attacked the problem, as the complexity of the biology makes the testing laborious and slow, independent of the industry's low 0.1% screening hit rate for much simpler biology.

In Numerate's case, using the cloud enabled the company to effectively decouple the trial-and-error process from the laboratory and discover and optimize candidate drugs five times faster than the industry average.

Machine learning also is helping power the entire clinical development process. Biopharma researchers use machine learning to design the most productive trial protocols, study locations, recruitment and patient cohorts to enroll. Researchers not trained as programmers can use cloud-based machine learning services to build, train and deploy machine learning algorithms to help with pre-clinical studies, complex simulations and predictive workflow optimization.

Machine learning can also help accelerate the regulatory submission process, as the massive amounts of data generated during clinical trials can be captured and effectively shared to collaborate between investigators, contract research organizations (CROs) and sponsor organizations.

For example, the Intelligent Trial Planner (ITP) from Knowledgent, now part of Accenture, uses machine learning services to determine the feasibility of trial studies and forecast recruitment timelines. The ITP platform enables study design teams at pharma organizations to run prediction analysis in minutes, not weeks, allowing them to iterate faster and more frequently.

Powered by machine learning, real-time scenario planning helps to facilitate smarter trial planning by enabling researchers to determine the most optimal sites, countries and/or protocol combinations.

By eliminating poor performing sites, trial teams have the potential to reduce their trial cost by 20%. And by making data-driven decisions that are significantly more accurate, they can plan and execute clinical trials faster, leading to hundreds of thousands in cost savings for every month saved in a trial.

Additionally, purpose-built machine learning is supported by cost-effective cloud-based compute options. For example, high-performance computing (HPC) can quickly scale to accommodate large R&D datasets, orchestrating services and simplifying the use and management of HPC environments.

Data transformation tools can also help to simplify and accelerate data profiling, preparation and feature engineering, as well as enable reusable algorithms both for new model discovery and inference.

The healthcare and life sciences industry has come a long way in the last year. However, for progress and transformation to continue, interoperability needs to be prioritized.

Q: The ultimate goal of precision medicine is the right patients receiving the right care, with the right therapeutic, at the right time. What do healthcare provider organization CIOs and other health IT leaders need to be doing with machine learning and other technologies today to be moving toward this goal?

A: The first things IT leaders need to ask themselves are: 1) If they are not investing yet in machine learning, do they plan to this year? And 2) What are the largest blockers to machine learning in their teams?

Our philosophy is to make machine learning available to every data scientist and developer without the need to have a specific background in machine learning, and then have the ability to use machine learning at scale and with cost efficiencies.

Designing a personalized care pathway using therapeutics tuned for particular biomarkers relies on a combination of different data sources such as health records and genomics to deliver a more complete assessment of a patient's condition. By sequencing the genomes of entire populations, researchers can unlock answers to genetic diseases that historically haven't been possible in smaller studies and pave the way for a baseline understanding of wellness.

Population genomics can improve the prevention, diagnosis and treatment of a range of illnesses, including cancer and genetic diseases, and produce the information doctors and researchers need to arrive at a more complete picture of how an individual's genes influence their health.

Advanced analytics and machine learning capabilities can use an individual or entire population's medical history to better understand relationships in data and in turn deliver more personalized and curated treatment.

Second, healthcare and life sciences organizations need to be open to experimenting with, learning about and embracing both cloud and technology, and many organizations across the industry are already doing this.

Leaders in precision medicine research such as UK Biobank, DNAnexus, Genomics England, Lifebit, Munich Leukemia Lab, Illumina, Fabric Genomics, CoFactor Genomics and Emedgene all leverage cloud and technology to speed genomic interpretation.

Third, supporting open collaboration and data sharing needs to be a business priority. The COVID-19 Open Research Dataset (CORD-19) created last year by a coalition of research groups provided open access to the full body of available global COVID-19 research and data.

This was one of the primary factors that enabled the discovery, clinical trial and delivery of the mRNA-based COVID-19 vaccines in an unprecedented timeframe. Additionally, our Open Data Program makes more than 40 openly available genomics datasets accessible, providing the research community with a single documented source of truth.

Commercial solutions that have leveraged machine learning to enable large-scale genomic sequencing include organizations such as Munich Leukemia Lab, which has been able to use Field Programmable Gate Array-based compute instances to greatly speed up the process of whole genome sequencing.

As a result, what used to take 20 hours of compute time can now be achieved in only three hours. Another example is Illumina, which is using cloud solutions to offer its customers a lower-cost, high-performance genomic analysis platform, which can help them speed their time to insights as well as discoveries.

Twitter: @SiwickiHealthIT. Email the writer: bsiwicki@himss.org. Healthcare IT News is a HIMSS Media publication.

Read more here:
AWS leader talks about technologies needed to take precision medicine to the next level - Healthcare IT News


The CIO’s Guide to Building a Rockstar Data Science and AI Team | eWEEK – eWeek

Just about everyone agrees that data scientists and AI developers are the new superstars of the tech industry. But ask a group of CIOs to define the precise area of expertise for data science-related job titles, and discord becomes the word of the day.

As businesses seek actionable insights by hiring teams that include data analysts, data engineers, data scientists, machine learning engineers and deep learning engineers, a key to success is understanding what each role can and can't do for the business.

Read on to learn what your data science and AI experts can be expected to contribute as companies grapple with ever-increasing amounts of data that must be mined to create new paths to innovation.

In a perfect world, every company employee and executive works under a well-defined set of duties and responsibilities.

Data science isn't that world. Companies often will structure their data science organization based on project need: Is the main problem maintaining good data hygiene? Or is there a need to work with data in a relational model? Perhaps the team requires someone to be an expert in deep learning, and to understand infrastructure as well as data?

Depending on a company's size and budget, any one job title might be expected to own one or more of these problem-solving skills. Of course, roles and responsibilities will change with time, just as they've done as the era of big data evolves into the age of AI.

That said, it's good for a CIO and the data science team she or he is managing today to remove as much of the ambiguity as possible regarding roles and responsibilities for some of the most common roles: those of the data analyst, data engineer, data scientist, machine learning engineer and deep learning engineer.

Teams that have the best understanding of how each fits into the company's goals are best positioned to deliver a successful outcome. No matter the role, accelerated computing infrastructure is also key to powering success throughout the pipeline as data moves from analytics to advanced AI.

It's important to recognize the work of a data analyst, as these experts have been helping companies extract information from their data long before the emergence of the modern data science and AI pipeline.

Data analysts use standard business intelligence tools like Microsoft Power BI, Tableau, Qlik, Yellowfin, Spark, SQL and other data analytics applications. Broad-scale data analytics can involve the integration of many different data sources, which increases the complexity of the work of both data engineers and data scientists, another example of how the work of these various specialists tends to overlap and complement each other.

Data analysts still play an important role in the business, as their work helps the business assess its success. A data engineer might also support a data analyst who needs to evaluate data from different sources.

Data scientists take things a step further so that companies can start to capitalize on new opportunities with recommender systems, conversational AI, and computer vision, to name a few examples.

A data engineer makes sense of messy data, and there's usually a lot of it. People in this role tend to be junior teammates who make data as nice and neat as possible for data scientists to use. This role involves a lot of data prep and data hygiene work, including lots of ETL (extract, transform, load) to ingest and clean data.

The data engineer must be good with data jigsaw puzzles. Formats change, standards change, even the fields a team is using on a webpage can change frequently. Datasets can have transmission errors, such as when data from one field is incorrectly entered into another.

When datasets need to be joined together, data engineers need to fix the data hygiene problems that occur when labeling is inconsistent. For example, if the day of the week is included in the source data, the data engineer needs to make sure that the same format is used to indicate the day, as Monday could also be written as Mon., or even represented by a number that could be one or zero depending on how the days of the week are counted.
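
A tiny sketch of that kind of rule-based cleanup; the label variants and the canonical form below are purely illustrative:

```python
# Map the many ways "Monday" can show up in source systems to one canonical
# label before datasets are joined (the variants here are illustrative).
DAY_ALIASES = {
    "monday": "Mon", "mon": "Mon", "mon.": "Mon",
    "0": "Mon",  # a source that counts days starting from zero
    "1": "Mon",  # a source that counts days starting from one
}

def normalize_day(raw_value):
    key = str(raw_value).strip().lower()
    if key not in DAY_ALIASES:
        raise ValueError(f"Unrecognized day label: {raw_value!r}")
    return DAY_ALIASES[key]

print([normalize_day(v) for v in ["Monday", "Mon.", 1]])  # ['Mon', 'Mon', 'Mon']
```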

Expect your data engineers to be able to work freely with scripting languages like Python, and in SQL and Spark. They'll need programming language skills to find problems and clean them up. Given that they'll be working with raw data, their work is important to ensuring your pipeline is robust.

If enterprises are pulling data from their data lake for AI training, this rule-based work can be done by a data engineer. More extensive feature engineering is the work of a data scientist. Depending on their experience and the project, some data engineers may support data scientists with initial data visualization graphs and charts.

Depending on how strict your company has been with data management, or if you work with data from a variety of partners, you might need a number of data engineers on the team. At many companies, the work of a data engineer often ends up being done by a data scientist, who preps her or his own data before putting it to work.

Data scientists experiment with data to find the secrets hidden inside. It's a broad field of expertise that can include the work of data analytics and data processing, but the core work of a data scientist is done by applying predictive techniques to data using statistical machine learning or deep learning.

For years, the IT industry has talked about big data and data lakes. Data scientists are people who finally turn these oceans of raw data into information. These experts use a broad range of tools to conduct analytics, experiment, build and test models to find patterns. To be great at their work, data scientists also need to understand the needs of the business they're supporting.

These experts use many applications, including NumPy, SciKit-Learn, RAPIDS, CUDA, SciPy, Matplotlib, Pandas, Plotly, NetworkX, XGBoost, domain-specific libraries and many more. They need to have domain expertise in statistical machine learning, random forests, gradient boosting, packages, feature engineering, training, model evaluation and refinement, data normalization and cross-validation. The depth and breadth of these skills make it readily apparent why these experts are so highly valued at todays data-driven companies.

Data scientists often solve mysteries to get to the deeper truth. Their work involves finding the simplest explanations for complex phenomena and building models that are simple enough to be flexible yet faithful enough to provide useful insight. They must also avoid some perils of model training, including overfitting their data sets (that is, producing models that do not effectively generalize from example data) and accidentally encoding hidden biases into their models.
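
One common safeguard against the overfitting described above is cross-validation; a minimal scikit-learn sketch, where the synthetic data and model settings are illustrative assumptions:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)

# Training accuracy alone can hide overfitting; held-out folds reveal how
# well the model generalizes beyond the examples it has memorized.
cv_scores = cross_val_score(model, X, y, cv=5)
train_score = model.fit(X, y).score(X, y)

print(f"training accuracy: {train_score:.3f}")
print(f"cross-validated accuracy: {cv_scores.mean():.3f} (+/- {cv_scores.std():.3f})")
```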

A machine learning engineer is the jack of all trades. This expert architects the entire process of machine and deep learning. They take AI models developed by data scientists and deep learning engineers and move them into production.

These unicorns are among the most sought-after and highly paid in the industry, and companies work hard to make sure they don't get poached. One way to keep them happy is to provide the right accelerated computing resources to help fuel their best work. A machine learning engineer has to understand the end-to-end pipeline, and they want to ensure that pipeline is optimized to deliver great results, fast.

It's not always intuitive, as machine learning engineers must know the apps, understand the downstream data architecture, and key in on system issues that may arise as projects scale. A person in this role must understand all the applications used in the AI pipeline, and usually needs to be skilled in infrastructure optimization, cloud computing, containers, databases and more.

To stay current, AI models need to be reevaluated to avoid what's called model drift, as new data impacts the accuracy of the predictions. For this reason, machine learning engineers need to work closely with their data science and deep learning colleagues, who will need to reassess models to maintain their accuracy.

A critical specialization for the machine learning engineer is deep learning engineer. This person is a data scientist who is an expert in deep learning techniques. In deep learning, AI models are able to learn and improve their own results through neural networks that imitate how human beings think and learn.

These computer scientists specialize in advanced AI workloads. Their work is part science and part art to develop what happens in the black box of deep learning models. They do less feature engineering and far more math and experimentation. The push for explainable AI (XAI), that is, model interpretability and explainability, can be especially challenging in this domain.

Deep learning engineers will need to process large datasets to train their models before they can be used for inference, where they apply what they've learned to evaluate new information. They use libraries like PyTorch, TensorFlow and MXNet, and need to be able to build neural networks and have strong skills in statistics, calculus and linear algebra.
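
A minimal PyTorch sketch of the train-then-infer loop these engineers work with; the layer sizes, random stand-in data and training settings are illustrative assumptions:

```python
import torch
from torch import nn

# Tiny feed-forward network: 20 input features, one hidden layer, 2 classes.
model = nn.Sequential(
    nn.Linear(20, 64),
    nn.ReLU(),
    nn.Linear(64, 2),
)

# Random stand-in data; real projects stream batches from a labeled dataset.
X = torch.randn(256, 20)
y = torch.randint(0, 2, (256,))

loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(5):            # training: forward pass, loss, backward, step
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()

with torch.no_grad():             # inference: apply what the model has learned
    predictions = model(X).argmax(dim=1)
```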

Given all of the broad expertise in these key roles, it's clear that enterprises need a strategy to help them grow their teams' success in data science and AI. Many new applications need to be supported, with the right resources in place to help this work get done as quickly as possible to solve business challenges.

Those new to data science and AI often choose to get started with accelerated computing in the cloud, and then move to a hybrid solution to balance the need for speed with operational costs. In-house teams tend to look like an inverted pyramid, with more analysts and data engineers funneling data into actionable tasks for data scientists, up to the machine learning and deep learning engineers.

Your IT paradigm will depend on your industry and its governance, but a great rule of thumb is to ensure your vendors and the skills of your team are well aligned. With a better understanding of the roles of a modern data team, and the resources they need to be successful, you'll be well on your way to building an organization that can transform data into business value.

ABOUT THE AUTHOR

By Scott McClellan, Head of Data Science, NVIDIA

Read more here:
The CIO's Guide to Building a Rockstar Data Science and AI Team | eWEEK - eWeek


Discover the theory of human decision-making using extensive experimentation and machine learning – Illinoisnewstoday.com

Discover a better theory

In recent years, theories of human decision-making have multiplied. However, these theories are often difficult to distinguish from each other and offer only modest improvements over earlier theories in explaining patterns of decision-making. Peterson et al. leveraged machine learning to evaluate classical decision theories, improve their predictive power, and generate new theories of decision-making (see the Perspective by Bhatia and He). This method has implications for theory generation in other fields as well.

Science, abe2629, this issue p. 1209; see also abi7668, p. 1150.

Predicting and understanding how people make decisions is a long-standing goal in many areas, and quantitative models of human decision-making inform both social science and engineering research. The authors show how large datasets can be used to accelerate progress towards this goal by enhancing machine learning algorithms that are constrained to generate interpretable psychological theories. By conducting the largest experiment on risky choice to date and analyzing the results using gradient-based optimization of differentiable decision theories implemented via artificial neural networks, they arrive at a new, more accurate model of human decision-making that summarizes historical discoveries, confirms that there is room for improvement of existing theories, and preserves insights from centuries of research.


See the rest here:
Discover the theory of human decision-making using extensive experimentation and machine learning - Illinoisnewstoday.com


Clearing the way toward robust quantum computing – MIT News

MIT researchers have made a significant advance on the road toward the full realization of quantum computation, demonstrating a technique that eliminates common errors in the most essential operation of quantum algorithms, the two-qubit operation or gate.

"Despite tremendous progress toward being able to perform computations with low error rates with superconducting quantum bits (qubits), errors in two-qubit gates, one of the building blocks of quantum computation, persist," says Youngkyu Sung, an MIT graduate student in electrical engineering and computer science who is the lead author of a paper on this topic published today in Physical Review X. "We have demonstrated a way to sharply reduce those errors."

In quantum computers, the processing of information is an extremely delicate process performed by the fragile qubits, which are highly susceptible to decoherence, the loss of their quantum mechanical behavior. In previous research conducted by Sung and the research group he works with, MIT Engineering Quantum Systems, tunable couplers were proposed, allowing researchers to turn two-qubit interactions on and off to control their operations while preserving the fragile qubits. The tunable coupler idea represented a significant advance and was cited, for example, by Google as being key to their recent demonstration of the advantage that quantum computing holds over classical computing.

Still, addressing error mechanisms is like peeling an onion: Peeling one layer reveals the next. In this case, even when using tunable couplers, the two-qubit gates were still prone to errors that resulted from residual unwanted interactions between the two qubits and between the qubits and the coupler. Such unwanted interactions were generally ignored prior to tunable couplers, as they did not stand out, but now they do. And, because such residual errors increase with the number of qubits and gates, they stand in the way of building larger-scale quantum processors. The Physical Review X paper provides a new approach to reduce such errors.

"We have now taken the tunable coupler concept further and demonstrated near 99.9 percent fidelity for the two major types of two-qubit gates, known as Controlled-Z gates and iSWAP gates," says William D. Oliver, an associate professor of electrical engineering and computer science, MIT Lincoln Laboratory fellow, director of the Center for Quantum Engineering, and associate director of the Research Laboratory of Electronics, home of the Engineering Quantum Systems group. "Higher-fidelity gates increase the number of operations one can perform, and more operations translates to implementing more sophisticated algorithms at larger scales."

To eliminate the error-provoking qubit-qubit interactions, the researchers harnessed higher energy levels of the coupler to cancel out the problematic interactions. In previous work, such energy levels of the coupler were ignored, although they induced non-negligible two-qubit interactions.

"Better control and design of the coupler is a key to tailoring the qubit-qubit interaction as we desire. This can be realized by engineering the multilevel dynamics that exist," Sung says.

The next generation of quantum computers will be error-corrected, meaning that additional qubits will be added to improve the robustness of quantum computation.

"Qubit errors can be actively addressed by adding redundancy," says Oliver, pointing out, however, that such a process only works if the gates are sufficiently good, above a certain fidelity threshold that depends on the error correction protocol. "The most lenient thresholds today are around 99 percent. However, in practice, one seeks gate fidelities that are much higher than this threshold to live with reasonable levels of hardware redundancy."

The devices used in the research, made at MIT's Lincoln Laboratory, were fundamental to achieving the demonstrated gains in fidelity in the two-qubit operations, Oliver says.

"Fabricating high-coherence devices is step one to implementing high-fidelity control," he says.

Sung says high rates of error in two-qubit gates significantly limit the capability of quantum hardware to run quantum applications that are typically hard to solve with classical computers, such as quantum chemistry simulation and solving optimization problems.

Up to this point, only small molecules have been simulated on quantum computers, simulations that can easily be performed on classical computers.

"In this sense, our new approach to reduce the two-qubit gate errors is timely in the field of quantum computation and helps address one of the most critical quantum hardware issues today," he says.

See the article here:
Clearing the way toward robust quantum computing - MIT News


Heres How Quantum Computers Will Really Affect Cryptocurrencies – Forbes

Cryptocurrency

There's been a lot of focus recently on encryption within the context of cryptocurrencies. Taproot being implemented in bitcoin has led to more cryptographic primitives that make the bitcoin network more secure and private. Its major upgrade from a privacy standpoint is to make it impossible to distinguish between multi-signature and single-signature transactions. This will, for example, make it impossible to tell which transactions involve the opening of Lightning Network channels versus regular base layer transactions. The shift from ECDSA signatures to Schnorr signatures involves changes and upgrades in cryptography.

Yet these cryptographic primitives might need to shift or transition in the face of new computers such as quantum computers. If you go all the way back down to how these technologies work, they are built from unsolved mathematical problems: problems humans haven't found a way to reduce down to our brain's capacity for creativity yet limited memory retrieval, or a computer's way of programmed memory retrieval. Solving those problems can create dramatic breaks in current technologies.

I sat down with Dr. Joël Alwen, the chief cryptographer of Wickr, the encrypted chat app, to talk about post-quantum encryption and how evolving encryption standards will affect cryptocurrencies. Here's a summary of the insights:

Despite all of the marketing hype around quantum computing and quantum supremacy, the world isn't quite at the stage where the largest (publicly disclosed) quantum computer can meaningfully break current encryption standards. That may happen in the future, but commercially available quantum computers now cannot meaningfully dent the encryption standards cryptocurrencies are built on.

Quantum computing and encryption experts are not communicating with one another as much as they should. This means that discrete advances in quantum computing may happen with a slight lag in how encryption would respond. It's been the case that nation-states, such as China, have been going dark on research related to quantum computing. This has the effect of clouding whether or not serious attempts can be made on the encryption standards of today, and of disguising the sudden or eventual erosion of encryption, a sudden break that might mean devastation for cryptocurrencies and other industries that rely on cryptography.

It's been known that many encryption schemes that defeat classical computers may not be able to withstand a sufficiently powerful quantum computer. Grover's algorithm is an example. This is a known problem and, with the continued development of quantum computers, will likely become a significant problem in a matter of time.
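
As a rough illustration of why Grover's algorithm matters (a standard complexity result, not specific to any one cryptocurrency): it searches an unstructured space of N possibilities in roughly the square root of N quantum queries instead of the roughly N classical queries brute force needs, so the effective strength of an n-bit symmetric key drops accordingly:

\[
O(N) \;\rightarrow\; O\!\left(\sqrt{N}\right), \qquad 2^{n} \;\rightarrow\; 2^{n/2} \text{ effective brute-force cost.}
\]

This is why doubling symmetric key lengths is the mitigation usually discussed for Grover-style attacks.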

Encryption standards being diluted now is not only a risk for the future, but also an attack on the conversations and transactions people want to remain private in the past as well. Past forms of encryption that people relied upon would be broken; the privacy they assumed in the past would be lost as well.

Cryptographic primitives are baked into cryptocurrencies regardless of their consensus algorithm. A sudden shift in encryption standards would damage the ability of proof-of-work miners, or, in the case of proof-of-stake designs such as the one proposed by Ethereum, of those looking to demonstrate the cryptographic proof that they've won the right to broadcast transactions. Digital signatures are the common point of vulnerability here, as is the elliptic curve cryptography used to protect private keys.

"Everything here breaks if the digital signatures are no longer valid; anybody with access to public keys could then spend amounts on other people's behalf. Wallet ownership would be up for grabs," says Dr. Alwen. "Proof-of-work or proof-of-stake as a consensus algorithm would be threatened as well. In all cases, the proof would no longer be valid and be authenticated with digital signatures; anybody could take anybody else's blocks."

While proof-of-work blocks would have some protection due to the increasingly specialized hardware (ASICs) being manufactured specifically for block mining, both systems would have vulnerabilities if their underlying encryption scheme were weakened. Hashing might be less threatened but quantum compute threatens key ownership and the authenticity of the system itself.

Post-quantum encryption is certainly possible, and a shift towards it can and should be proactive. "There's real stuff we can do," Dr. Alwen says here. Bitcoin and other cryptocurrencies may take some time to move on this issue, so any preparatory work should be regarded as important; from looking at benefits and costs, "you can get a lot of mileage out of careful analysis."

It's helped here by the fact that there is a good bottleneck, in a sense: there are only really two or three types of cryptographic techniques that need replacement. Digital signatures and key agreement are the two areas that need the focus. Patching these two areas will address the vast majority of vulnerabilities that might come from quantum computation.

It's important to note that a sudden and critical break in encryption would affect other industries as well, and each might have different reasons why an attack would be more productive, or might be slower to react. Yet if there were a revolution tomorrow, this would pose a clear and direct threat to the decentralization and security promises inherent in cryptocurrencies. Because of how important encryption and signatures are to cryptocurrencies, it's probable that cryptocurrency communities will have many more debates before or after a sudden break, but time would be of the essence in this scenario. Yet, since encryption is such a critical part of cryptocurrencies, there is hope that the community will be more agile than traditional industries on this point.

If a gap of a few years is identified before this break happens, a soft fork or hard fork that the community rallies around can mitigate this threat along with new clients. But it requires proactive changes and in-built resistance, as well as keeping a close eye on post-quantum encryption.

It is likely that instead of thinking of how to upgrade the number of keys used or making a gradual change, post-quantum encryption will require dabbling in categories of problems that haven't been used in classical encryption. Dr. Alwen has written about lattice-based cryptography as a potential solution. NIST, the National Institute of Standards and Technology, which is currently responsible for encryption standards, has also announced a process to test and standardize post-quantum public-key encryption.

"Hardware wallets are in principle the way to go now for security in a classical environment," Dr. Alwen points out, having done research in the space. "The fact that they're hard to upgrade is a problem, but it's much better than complex devices like laptops and cell phones in terms of the security and focus accorded to the private key."

In order to keep up with cryptography and its challenges, MIT and Stanford open courses are a good place to start to get the basic terminology. There is for example, an MIT Cryptography and Cryptanalysis course on MIT OpenCourseWare and similar free Stanford Online courses.

There are two areas of focus: applied cryptography or theory of cryptography. Applied cryptography is a field that is more adjacent to software engineering, rather than math-heavy cryptography theory. An important area is to realize what role suits you best when it comes to learning: making headway on breaking cryptography theory or understanding from an engineering perspective how to implement solid cryptography.

When you're a bit more advanced and focused on cryptography theory, Eprint is a server that allows for an open forum for cryptographers to do pre-prints. Many of the most important developments in the field have been posted there.

Forums around common cryptography tools help with applied cryptography as well as some of the cryptography theory out there: the Signal forums, or the Wickr blog are examples.

Cryptocurrencies are co-evolving with other technologies. As computers develop into different forms, there are grand opportunities, from space-based cryptocurrency exchange to distributed devices that make running nodes accessible to everybody.

Yet, in this era, there will also be new technologies that force cryptocurrencies to adapt to changing realities. Quantum computing and the possibility that it might eventually break the cryptographic primitives cryptocurrencies are built on is one such technology. Yet it's the new governance principles cryptocurrencies embody that might help them adapt.

Visit link:
Heres How Quantum Computers Will Really Affect Cryptocurrencies - Forbes


New quantum computing company will set the pace – Cambridge Network

Cambridge Quantum Computing, a quantum computing and algorithm company founded by Ilyas Khan, Leader in Residence and a Fellow in Management Practice at Cambridge Judge Business School, announced it will combine with Honeywell Quantum Solutions, a unit of US-based Honeywell, which has been an investor in Cambridge Quantum since 2019.

Ilyas was also the inaugural Chairman of the Stephen Hawking Foundation, is a fellow commoner of St Edmund's College, and was closely involved in the foundation of the Accelerate Cambridge programme run by the Business School's Entrepreneurship Centre.

"The new company is extremely well-positioned to lead the quantum computing industry by offering advanced, fully integrated hardware and software solutions at an unprecedented pace, scale and level of performance to large high-growth markets worldwide," Cambridge Quantum said in an announcement.

"The combination will form the largest, most advanced standalone quantum computing company in the world, setting the pace for what is projected to become a $1 trillion quantum computing industry over the next three decades," Honeywell said in a companion announcement.

The new company, which will be formally named at a later date, will be led by Cambridge Quantum founder Ilyas Khan as Chief Executive, with Tony Uttley of Honeywell Quantum Solutions as President. Honeywell Chairman and CEO Darius Adamczyk will serve on the board of directors as Chairman. Honeywell will have a 54% share of the merged entity, which was dubbed by the publication Barron's as "the Apple of Quantum Computing," and CQC's shareholders will have a 46% share.

In addition, Honeywell will invest between $270 million to $300 million in the new company. Cambridge Quantum was founded in 2014, and has offices in Cambridge, London and Oxford, and abroad in the US, Germany and Japan.

Originally posted here:
New quantum computing company will set the pace - Cambridge Network


Honeywell joins hands with Cambridge Quantum Computing to form a new company – The Hindu


Multinational conglomerate Honeywell said it will combine with Cambridge Quantum Computing in a bid to form the largest standalone quantum computing company in the world.

According to Honeywell, the merger will be completed in the third quarter of 2021 and will set the pace for what is projected to become a $1 trillion quantum computing industry over the next three decades.

In the yet to be named company, Honeywell will invest between $270 million and $300 million, and will own a major stake. It will also engage in an agreement for manufacturing critical ion traps needed to power quantum hardware.

The new company will be led by Ilyas Khan, the CEO and founder of CQC, a company that focuses on building software for quantum computing. Honeywell Chairman and Chief Executive Officer Darius Adamczyk will serve as chairman of the new company while Tony Uttley, currently the president of HQS, will serve as the new company's president.

"Joining together into an exciting newly combined enterprise, HQS and CQC will become a global powerhouse that will develop and commercialize quantum solutions that address some of humanity's greatest challenges, while driving the development of what will become a $1 trillion industry," Khan said in a statement.

With this new company, both firms plan to use Honeywell's hardware expertise and Cambridge's software platforms to build the world's highest-performing computer.

See the article here:
Honeywell joins hands with Cambridge Quantum Computing to form a new company - The Hindu


NSWCDD Focuses on Quantum Computing with its First-Ever Hackathon – Naval Sea Systems Command

DAHLGREN, Va.

The Innovation Lab at Naval Surface Warfare Center Dahlgren Division (NSWCDD) hosted its first-ever hackathon in partnership with Microsoft June 2-4.

While the term hackathon may conjure up familiar depictions in media of a raucous semi-sporting event where audiences look on as hackers write line by line of code to break into a borderline impenetrable system, the event does not always quite look like that. This hackathon looked a lot like a room full of smart, creative people working together to develop rapid solutions to difficult problems.

Participants in NSWCDD's first hackathon were challenged to utilize Microsoft's quantum computing toolkit to generate solutions to assigned problems.

"The Navy is at the forefront of quantum [computing] efforts and Microsoft is very excited to collaborate with the Navy and excited to do this hackathon with the Innovation Lab here at Dahlgren," said Microsoft Technology Strategist Dr. Monica DeZulueta. "The caliber of people participating here is phenomenal."

The event kicked off with a quantum computing bootcamp led by Microsoft quantum computing professionals. Participants in the hackathon, along with approximately 25 more eager quantum students who joined the event via Microsoft Teams, were introduced to quantum computing basics and the Q# programming language.

Quantum computing is a fundamentally different mode of computing from what has traditionally been in use. While classical computing relies on bits that are either 1s or 0s, quantum computing qubits can exist as 1s and 0s simultaneously.
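
As a small illustration of that difference, here is a plain numpy sketch of the underlying linear algebra (not the Microsoft Q# toolkit used at the event): a Hadamard gate puts a qubit that starts in the 0 state into an equal superposition of 0 and 1.

```python
import numpy as np

# A classical bit is definitely 0 or 1. A qubit's state is a unit vector of
# complex amplitudes over |0> and |1>; measuring yields 0 or 1 with
# probability equal to the squared magnitude of each amplitude.
ket0 = np.array([1, 0], dtype=complex)                        # |0>
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)   # Hadamard gate

superposition = H @ ket0                  # (|0> + |1>) / sqrt(2)
probabilities = np.abs(superposition) ** 2
print(probabilities)                      # [0.5 0.5]: equal chance of 0 or 1
```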

Although still an emerging field of application, quantum computing holds incredible implications for generating answers to previously intractable problems. From logistics solutions such as flight path optimization to more rapid, higher-fidelity modeling and simulation, quantum computing may play a key role in giving the warfighter the technological advantage over adversaries.

"The goal of this hackathon is to get the workforce thinking about quantum computing," said Innovation Lab Director Dr. John Rigsby.

Innovation Lab Deputy Director Tamara Stuart added, "We're already seeing how quantum communication and quantum sensors are enhancing our technologies and how we are thinking about these applications in the future. Everybody is expecting a quantum computing revolution to come, so we are gearing up."

Rigsby and Stuart said an enthusiastic response followed the call for hackathon participants. Each department across NSWCDD sent its best and brightest minds to compete and vie for the first place title in the base's first-ever hackathon.

When the hacking began in earnest on day two of the event, the spirit of the anticipated battle of the departments shifted from competitive to collaborative as rival teams began to combine brainpower to attack the puzzling set of problems created by Microsoft quantum computing professionals.

Each team presented their solutions on the third and final day of the event. Along with the solutions to the problem set, participants were asked by the event's judges to consider potential applications for quantum computing in their everyday work.

Following presentations, judges declared a three-way tie between Dahlgren's Electromagnetic and Sensor System Department, Gun and Electric Weapon Systems Department and the Integrated Combat Systems Department.

Chief Technology Officer Jennifer Clift highlighted the importance of events like this hackathon.

"The Innovation Lab is a place for our workforce to explore new technologies and solve complex naval challenges. Our goal is to tap into the entrepreneurial spirit of our talented workforce and provide the resources and environment necessary to discover, innovate and deliver cutting edge capabilities to the warfighter. Events like this hackathon allow our scientists and engineers to learn new skills, collaborate to solve complex challenges, and prepare for future naval technology needs," said Clift.

Stefano Coronado, a scientist from the Electromagnetic and Sensor System Department, said the in-person collaboration was exciting.

"This hackathon was a great experience for me," said Coronado.

NSWCDD's Innovation Lab leadership said this is the first of many similar events to come, with hackathons hopefully occurring multiple times a year. Plans for the warfare center's second hackathon are already in the works.

Continue reading here:
NSWCDD Focuses on Quantum Computing with its First-Ever Hackathon - Naval Sea Systems Command
