Category Archives: Machine Learning

Accurate and rapid prediction of tuberculosis drug resistance from genome sequence data using traditional machine learning algorithms and CNN |…

Data collection

To prepare the training data and labels, we downloaded the whole-genome sequencing (WGS) data for 10,575 MTB isolates from the Sequence Read Archive (SRA) database17 and obtained the corresponding lineage and phenotypic drug susceptibility test (DST) data from the CRyPTIC Consortium and the 100,000 Genomes Project as an Excel file, which is also available in the supplementary material of their publication15. The phenotypic DST results for the drugs were used as labels when training and evaluating our ML models. All the data were collected and shared by the CRyPTIC Consortium and the 100,000 Genomes Project15. Like the datasets used in previous studies, this dataset is imbalanced: for all four first-line drugs (Fig. 1) and all four second-line drugs, most isolates are susceptible and only a minority are resistant. The numbers of isolates with phenotypic DST results available are 7138, 7137, 6347 and 7081 for EMB, INH, PZA and RIF, respectively, with 6291 isolates shared among the four sample sets. In addition, 6820 of the 10,575 isolates have phenotypic DST results available for each of the four second-line drugs.

Phenotypic overview of the MTB isolates. This bar chart shows numbers of susceptible and resistant isolates with DST results available for each of the four first-line drugs.

To detect potential genetic features that could contribute to MTB drug resistance classification, we used a command-line tool called ARIBA18. ARIBA is a rapid, flexible and accurate AMR genotyping tool that generates detailed and customizable outputs, from which we extracted genetic features. First, we downloaded all reference data from CARD, which included references not only from different MTB strains but also from other bacteria (e.g., Staphylococcus aureus). Second, we clustered the reference sequences by similarity. We then used this collection of reference clusters as our pan-genome reference and aligned the read pairs of each isolate to it. For each cluster with mapped reads, we ran local assemblies, found the closest reference, and identified variants. After these steps, ARIBA generated several files, including a summary file of alignment quality, a report file with information on the detected variants and AMR-associated genes, and a read depth file. For each cluster, the read depth file provides counts of the four DNA bases at each locus of the closest reference where reads were mapped.

Next, we filtered out low-quality mappings that did not pass the match criteria defined in ARIBA's GitHub wiki18. From these high-quality mappings, we collected novel variants in coding regions, well-studied resistance-causing variants, and AMR-associated gene presence calls detected in at least one of the 10,575 isolates, yielding 263 genetic features. In addition, we included indicator variables for each of the 19 lineages in our feature vector, for a total of 282 features.
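To make the shape of this input concrete, here is a minimal sketch of how such a 282-dimensional feature vector might be assembled for one isolate. The feature indices, lineage names, and detected features below are placeholders, not the paper's actual encoding.

```python
import numpy as np

# Hypothetical assembly of one isolate's feature vector (placeholder values).
LINEAGES = [f"lineage{i}" for i in range(1, 20)]      # 19 lineage labels (illustrative names)

genetic = np.zeros(263)                               # 263 binary variant/gene-presence features
genetic[[5, 42, 107]] = 1                             # features detected for this isolate (made up)

isolate_lineage = "lineage4"                          # assumed lineage call for this isolate
lineage = np.array([l == isolate_lineage for l in LINEAGES], dtype=float)  # indicator variables

features = np.concatenate([genetic, lineage])         # full 282-dimensional vector
assert features.shape == (282,)
```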

We applied two traditional ML algorithms, RF and LR, to the sample sets labeled with phenotypic DST results (see the Data collection section) to train MTB AMR classifiers for the eight drugs (first-line and second-line), where the feature vector for each sample consists of the 282 features described in the Genetic feature extraction section.

RF is an ensemble method made up of tens to hundreds of estimators (decision trees), which suppresses overfitting19,20. The final prediction is the average or majority vote of the predictions of all trees. RF is often used when there are large training datasets and a large number of input features. Moreover, RF is good at dealing with imbalanced data through class weighting. Here we trained each RF classifier with 1000 estimators.
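A minimal scikit-learn sketch of such a classifier, assuming a feature matrix of 282 features per isolate and binary DST labels; the placeholder data and any hyperparameters beyond the 1000 estimators are assumptions, since the paper does not state them.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((2000, 282))                 # placeholder for the 282-feature vectors
y = rng.integers(0, 2, 2000)                # placeholder DST labels (1 = resistant)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)

rf = RandomForestClassifier(
    n_estimators=1000,                      # 1000 estimators, as in the paper
    class_weight="balanced",                # class weighting for the imbalanced labels
    n_jobs=-1,
    random_state=0,
)
rf.fit(X_tr, y_tr)
print(rf.score(X_te, y_te))
```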

LR is a popular regression technique for modeling a binary dependent variable21. By applying a sigmoid (logistic) function, linear regression is transformed into logistic regression, so that predictions fall in [0, 1] and can be output as probabilities. The LR model is then fitted using maximum likelihood estimation. During training, we applied L1 regularization to the LR models for feature selection and to prevent overfitting22.
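A corresponding sketch for the L1-regularized LR model; the regularization strength C is a placeholder, as the paper does not state it. L1 drives many coefficients to exactly zero, which is what performs the feature selection.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.random((2000, 282)); y = rng.integers(0, 2, 2000)     # placeholder data

# liblinear supports the L1 penalty, which zeroes out irrelevant coefficients
lr = LogisticRegression(penalty="l1", solver="liblinear", C=1.0, max_iter=1000)
lr.fit(X, y)
print("non-zero coefficients:", int((lr.coef_ != 0).sum()), "of", lr.coef_.size)
```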

CNN is a class of deep neural networks that takes multi-dimensional data as input23. When we say CNN, we generally mean a 2-dimensional CNN, which is often used for image classification. However, two other types of CNN are used in practice: 1-dimensional and 3-dimensional CNNs. Conv1D is generally used for time-series data: the kernel moves along one dimension, and the input and output data are 2-dimensional. Conv2D and Conv3D kernels move along two and three dimensions, respectively.
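A small shape demonstration of the Conv1D case described above: each example is 2-dimensional (steps × channels) and the kernel slides along the single length axis. The sizes here are illustrative, chosen to match the 21-locus, 4-base windows used later.

```python
import numpy as np
import tensorflow as tf

x = np.random.rand(8, 21, 4).astype("float32")   # 8 examples, 21 steps, 4 channels
conv = tf.keras.layers.Conv1D(filters=16, kernel_size=3, activation="relu")
y = conv(x)
print(y.shape)   # (8, 19, 16): the kernel moved along the length axis only
```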

Because deep learning algorithms require substantial computational power, we performed feature selection to keep only relevant features as input for the deep learning algorithms. First, we randomly selected 80 percent of the samples to calculate the importance of each feature using the scikit-learn RF feature importance function, which averages the impurity decrease attributable to each feature across the trees24. Then, we tuned the feature importance cutoff to find the value that maximizes the F1-score of an RF model evaluated on the remaining 20 percent of samples. For each of the eight drugs, features were selected when their importance scores were greater than the optimal cutoff. The tuning processes for the first-line drugs are visualized in Fig. 2.
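A sketch of that cutoff-tuning loop, under stated assumptions: the importances come from an RF fitted on the 80% split, and each candidate cutoff is scored by the F1 of an RF restricted to the surviving features and evaluated on the held-out 20%. The paper does not detail this step exactly, so treat the protocol below as illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((2000, 282)); y = rng.integers(0, 2, 2000)    # placeholder data

X80, X20, y80, y20 = train_test_split(X, y, test_size=0.2, random_state=0)

# Impurity-based importances from an RF fitted on the 80% split
imp = RandomForestClassifier(n_estimators=200, random_state=0).fit(X80, y80).feature_importances_

best_cutoff, best_f1 = None, -1.0
for cutoff in np.linspace(1e-4, 2e-3, 20):
    keep = imp > cutoff
    if not keep.any():
        break                                    # no features survive this cutoff
    rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X80[:, keep], y80)
    f1 = f1_score(y20, rf.predict(X20[:, keep]))
    if f1 > best_f1:
        best_cutoff, best_f1 = cutoff, f1
print(f"optimal cutoff: {best_cutoff:.4f} (F1 = {best_f1:.3f})")
```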

Feature importance cutoff tuning. For the four first-line drugs, as the cutoff increases, the F1-score quickly rises to its maximum and then decreases. The cutoffs that maximize the F1-scores are 0.0004 (EMB), 0.0006 (INH), 0.0008 (PZA) and 0.0016 (RIF).

After the relevant features were selected, we designed and built a multi-input CNN architecture with TensorFlow Keras25 that takes N inputs of 4×21 matrices, representing the N selected SNP features, into the first layer. Each 4×21 matrix consists of normalized DNA base counts for each locus within a 21-base reference sequence window centered on the focal SNP (Fig. 3). We generated the normalized counts from the raw base counts extracted from the read depth file mentioned in the Genetic feature extraction section. Our convolutional architecture starts with two 1D convolutional layers followed by a flattening layer for each SNP input. It then concatenates the N flattened outputs with the inputs for the AMR-associated gene presence and lineage features. Finally, we added three fully connected layers to complete the deep neural network architecture (Fig. 4), which smoothly integrates sequential and non-sequential features.
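A Keras sketch of that multi-input architecture as described; the filter counts, kernel sizes, and dense-layer widths are assumptions, and each 4×21 matrix is fed transposed as (21 loci, 4 base channels) because Keras Conv1D expects (steps, channels).

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

N_SNPS = 30          # number of selected SNP features (varies per drug; illustrative)
N_FLAT = 40          # gene-presence + lineage indicator features (illustrative)

snp_inputs, branches = [], []
for i in range(N_SNPS):
    inp = layers.Input(shape=(21, 4), name=f"snp_{i}")      # one 4x21 matrix, transposed
    x = layers.Conv1D(16, 3, activation="relu")(inp)        # first 1D convolutional layer
    x = layers.Conv1D(32, 3, activation="relu")(x)          # second 1D convolutional layer
    branches.append(layers.Flatten()(x))
    snp_inputs.append(inp)

flat_in = layers.Input(shape=(N_FLAT,), name="gene_lineage")
x = layers.Concatenate()(branches + [flat_in])              # merge sequential + flat features
x = layers.Dense(128, activation="relu")(x)
x = layers.Dense(64, activation="relu")(x)
out = layers.Dense(1, activation="sigmoid")(x)              # resistant vs. susceptible

model = Model(inputs=snp_inputs + [flat_in], outputs=out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```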

Conversion of raw base counts at each locus of a 21-base reference window into normalized base counts, used as the Conv1D input for each selected SNP feature. The raw base counts were derived from the reference–read alignment, as shown on the left of the figure. The center of the window is the locus of the selected SNP feature. The normalized base counts at each locus are the percentages of the four DNA bases (ACGT), respectively.
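A minimal sketch of the per-locus normalization described in the caption above, assuming the raw counts arrive as a 21×4 array of A/C/G/T read depths; the values are placeholders.

```python
import numpy as np

raw = np.random.default_rng(0).integers(0, 50, size=(21, 4)).astype(float)  # placeholder depths

# Each locus (row) is normalized so the four base counts become fractions of the
# total read depth at that locus; loci with zero coverage are left as zeros.
totals = raw.sum(axis=1, keepdims=True)
normalized = np.divide(raw, totals, out=np.zeros_like(raw), where=totals > 0)
assert np.allclose(normalized.sum(axis=1)[totals.ravel() > 0], 1.0)
```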

Flowchart of our 1D CNN architecture.

Read more:
Accurate and rapid prediction of tuberculosis drug resistance from genome sequence data using traditional machine learning algorithms and CNN |...

Benefits of and Best Practices for Protecting Artificial Intelligence and Machine Learning Inventions as Trade Secrets – JD Supra

We previously discussed which portions of an artificial intelligence/machine-learning (AI/ML) platform can be patented. Under what circumstances, however, is it best to keep at least a portion of the platform a trade secret? And what are some best practices for protecting trade secrets? In this post, we explore important considerations and essential business practices to keep in mind when working to protect the value of trade secrets specific to AI/ML platforms, as well as the pros and cons of trade secret versus patent protection.

Protecting AI/ML Platforms via Trade Secrets

What qualifies as a trade secret can be extraordinarily broad, depending on the relevant jurisdiction; generally speaking, a trade secret is information that is kept confidential and derives value from being kept confidential. This can potentially include anything from customer lists to algorithms. In order to remain a trade secret, however, the owner of the information must follow specific business practices to ensure the information remains secret. If businesses do not follow the prescribed practices, then the ability to protect the trade secret is waived and its associated value is irretrievably lost. The required business practices are not onerous or complex, and we discuss them below, but many businesses are unaware of what is required for their specific type of IP and only discover their error when attempting to monetize their inventions or sell their business. To avoid this devastating outcome, we work to arm our clients with the requisite practices and procedures tailored to their specific inventions and relevant markets.

In the context of AI/ML platforms, trade secrets can include the structure of the AI/ML model, formulas used in the model, proprietary training data, a particular method of using the AI/ML model, any output calculated by the AI/ML model that is subsequently converted into an end product for a customer, and similar aspects of the platform. There are myriad ways in which the value of the trade secret may be compromised.

For example, if an AI/ML model is sold as a platform and the platform provides the raw output of the model and a set of training data to the customer, then the raw output and the set of training data would no longer qualify for trade secret protection. Businesses can easily avoid this pitfall by having legally binding agreements in place between the parties to protect the confidentiality and ownership interests involved. Another area in which we frequently see companies waive trade secret protection is where the confidential information can be independently discovered (such as through reverse-engineering a product). Again, there are practices that businesses can follow to avoid waiving trade secret protection due to reverse-engineering. Owners must therefore ensure that the information they seek to protect cannot be discovered through use or examination of the product itself; where that cannot be avoided, they should ensure that such access is governed by agreements that prohibit such activities, thereby maintaining the right to assert trade secret misappropriation and recover the value of the invention.

To determine if an invention may be protected as a trade secret, courts will typically examine whether the business has followed best practices or reasonable efforts for the type of IP and relevant industries. See, e.g., Intertek Testing Services, N.A., Inc. v. Frank Pennisi et al., 443 F. Supp. 3d 303, 323 n.19 (E.D.N.Y. Mar. 9, 2020). What constitutes best practices for a particular type of IP can vary greatly. For example, a court may examine whether those trade secrets were adequately protected. The court may also look to whether the owner created adequate data policies to prevent employees from mishandling trade secrets. See Yellowfin Yachts, Inc. v. Barker Boatworks, LLC, 898 F.3d 1279 (11th Cir. Aug. 7, 2018) (where the court held that requiring password protection to access trade secrets was insufficient without adequate measures to protect information stored on employee devices). If the court decides that the business has not employed best practices, the owner can lose trade secret protection entirely.

Most often, a failure to ensure all parties who may be exposed to trade secrets are bound by a legally-sufficient confidentiality or non-disclosure agreement forces the owner to forfeit their right to trade secret protection for that exposed information. Owners should have experienced legal counsel draft these agreements to ensure that the agreements are sufficient to protect the trade secret and withstand judicial scrutiny; many plaintiffs have learned the hard way that improperly-drafted agreements can affect the trade secret protection afforded to their inventions. See, e.g., BladeRoom Group Ltd. v. Emerson Electric Co., 11 F.4th 1010, 1021 (9th Cir. Aug. 30, 2021) (holding that NDAs with expiration dates also created expiration dates for trade secret protection); Foster Cable Servs., Inc. v. Deville, 368 F. Supp. 3d 1265 (W.D. Ark. 2019) (holding that an overbroad confidentiality agreement was unenforceable); Temurian v. Piccolo, No. 18-cv-62737, 2019 WL 1763022 (S.D. Fla. Apr. 22, 2019) (holding that efforts to protect data through password protection and other means were negated by not requiring employees to sign a confidentiality agreement).

There are many precautions owners can take to protect their trade secrets, which we discuss below:

Confidentiality and Non-Disclosure Agreements: One of the most common methods of protecting trade secrets is to execute robust confidentiality and non-disclosure agreements with everyone who may be exposed to trade secrets, to ensure they have a legal obligation to keep those secrets confidential. Experienced legal counsel who can ensure the agreements are enforceable and fully protect the owner and their trade secrets is essential, as there are significant pitfalls in these types of agreements and many jurisdictions have conflicting requirements.

Marketing and Product Development: The AI/ML platform itself should also be constructed and marketed in such a way as to prevent customers from easily discovering the trade secrets, whether through viewing marketing materials, through ordinary use of the platform, or through reverse-engineering of the platform. For example, if an AI/ML platform uses a neural network to classify medical images, and the number of layers used and the weights used by the neural network to calculate output are commercially valuable, the owner should be careful to exclude any details about the layers of the AI/ML model in marketing materials. Further, the owner may want to consider developing the platform in such a way that the neural network is housed internally (protected by various security measures) and therefore not directly accessible by a customer seeking to reverse-engineer the product.

Employee Training: Additionally, owners should also ensure that employees or contractors who may be exposed to trade secrets are trained in how to handle those trade secrets, including how to securely work on or discuss trade secrets, how to handle data on their personal devices (or whether trade secret information may be used on personal devices at all), and other such policies.

Data Security: Owners should implement security precautions (including limiting who can access trade secrets, requiring passwords and other security procedures to access trade secrets, restricting where data can be downloaded and stored, implementing mechanisms to protect against hacking attempts, and similar precautions) to reduce the risk of unintended disclosure of trade secrets. Legal counsel can help assess existing measures to determine whether they are sufficient to protect confidential information under various trade secret laws.

Pros and Cons of Trade Secret Protection over Patent Protection

Trade secret protection and patent protection are obtained and maintained in different ways. There are many reasons why trade secret protection may be preferable to patent protection for various aspects of an AI/ML platform, or vice versa. Below we discuss some criteria to consider before deciding how to protect one's platform.

Protection Eligibility: As noted in our previous blog post, patent protection may be sought for many components of an AI/ML platform. There are, however, some aspects of an AI/ML platform that may not be patent-eligible. For example, while the architecture of a ML model may be patentable, specific mathematical components of the model, such as the weight values, mathematical formulas used to calculate weight values in an AI/ML algorithm, or curated training data, may not, on their own, be eligible for patent protection. If the novelty of a particular AI/ML platform is not in how an AI/ML model is structured or utilized, but rather in non-patentable features of the model, trade secret protection can be used to protect this information.

Cost: There are filing fees, prosecution costs, issue fees, and maintenance fees required to obtain and keep patent protection on AI/ML models. Even for an entity that qualifies as a micro-entity under the USPTO's fee schedule, the lifetime cost of a patent could be several thousand dollars in fees, and several thousand dollars more in attorneys' fees to draft and prosecute the patent. Conversely, the costs of trade secret protection are the costs of implementing any of the above methods of keeping critical portions of the AI/ML model secret from others. In many instances, it may be less expensive to rely on trade secret protection than to obtain patent protection.

Development Timeline: AI/ML models, or software that implements them, may undergo several iterations through the course of developing a product. As it may be difficult to determine which, if any, iterations are worth long-term protection until development is complete, it may be ideal to protect each iteration until the value of each has been determined. However, obtaining patent protection on each iteration may, in some circumstances, be infeasible. For example, once a patent application has been filed, the specification and drawings cannot be amended to cover new, unanticipated iterations of the AI/ML model; a new application that includes the new material would need to be filed, incurring further costs. Additionally, not all iterations will necessarily include changes that can be patented, or it may be unknown until after development how valuable a particular modification is to technology in the industry, making it difficult to obtain patent protection for all iterations of a model or software using the model. In these circumstances, it may be best to use a blend of trade secret and patent protection. For example, iterations of a model or software can be protected via trade secret; the final product, and any critical iterations in between, can subsequently be protected by one or more patents. This allows for a platform to be protected without added costs per iteration, and regardless of the nature of the changes made in each iteration.

Duration of Protection: Patent owners can enjoy protection of their claimed invention for approximately twenty years from the date of filing a patent application. Trade secret protection, on the other hand, lasts as long as an entity keeps the protected features a secret from others. For many entities, the twenty-year lifetime of a patent is sufficient to protect an AI/ML platform, especially if the patent owner anticipates substantially modifying the platform (e.g., to adapt to future needs or technological advances) by the end of the patent term. To the extent any components of the AI/ML platform are unlikely to change within twenty years (for example, if methods used to curate training data are unlikely to change even with future technological advances), it may be more prudent to protect these features as trade secrets.

Risk of Reverse-Engineering: As noted above, trade secrets do not protect inventions that competitors have been able to discover by reverse-engineering an AI/ML product. While an entity may be able to prevent reverse-engineering of some aspects of the invention through agreements between parties with permission to access the AI/ML product or through creative packaging of the product, there are some aspects of the invention (such as the training data that needs to be provided to the platform, end product of the platform, and other features) that may need to remain transparent to a customer, depending on the intended use of the platform. Such features, when patent-eligible, may benefit more from patent protection than from trade secret protection, as a patent will protect the claimed invention even if the invention can be reverse-engineered.

Exclusivity: A patent gives the patent owners the exclusive right to practice or sell their claimed inventions, in exchange for disclosing how their inventions operate. Trade secrets provide no such benefit; to the extent competitors are able to independently construct an AI/ML platform, they are allowed to do so even if an entity has already sold a similar platform protected by trade secret. Thus, to the extent an exclusive right to the AI/ML model or platform is necessary for the commercial viability of the platform or its use, patent protection may be more desirable than trade secret protection.

Conclusion

Trade secret law allows broad protection of information that can be kept secret from others, provided certain criteria are met to ensure the information is adequately protected from disclosure to others. Many aspects of an AI/ML platform can be protected under either trade secret law or patent law, and many aspects of an AI/ML platform may only be protected under trade secret law. It is therefore vital to consider trade secret protection alongside patent protection, to ensure that each component of the platform is being efficiently and effectively protected.

[View source.]

See original here:
Benefits of and Best Practices for Protecting Artificial Intelligence and Machine Learning Inventions as Trade Secrets - JD Supra

Raster plots machine learning to predict the seizure liability of drugs and to identify drugs | Scientific Reports – Nature.com

Human iPSC-derived neural network drug response

Culturing of a human iPSC-derived neural network seeded on an MEA was possible without cell aggregation, even in the 12th week of culturing. Network burst firing was observed from the 6th week of culture onward. Figure 1A(a) shows a phase-contrast image at 81 days in vitro (DIV), and Fig. 1A(b) shows a typical network burst signal. Concentration-dependent data were obtained for 13 seizure-causing compounds and two seizure-free compounds after the 14th week of culturing, when the neural networks were considered mature28. Whenever the acquired signal crossed a threshold, the detected spikes were used to create a raster plot. Figure 1A(c) shows the threshold used to detect spikes in the single electrode signal (top portion) and a raster plot of the detected spikes (bottom portion). Figure 1B shows raster plots of compounds with different mechanisms of action: (a) 4-aminopyridine (4-AP), (b) pentylenetetrazol (PTZ), (c) carbamazepine, (d) N-methyl-D-aspartic acid (NMDA), (e) acetaminophen, and (f) dimethyl sulfoxide (DMSO). Seizure-causing compounds caused different changes depending on their mechanism of action (Fig. 1B). Figure 1C shows a schematic of five analytic parameters calculated from raster plots: total spikes (TS), number of network bursts (NoB), inter-network-burst interval (IBI), duration of a network burst (DoB), and spikes in a network burst (SiB). Figure 2 shows the drug response of each parameter when the vehicle response is set to 100%. The numerical data are listed in supplementary Tables S1–S5. The maximum increases in the NoB for 4-AP and PTZ were 321.0% ± 15.4% (30 μM) and 147.3% ± 2.7% (10 μM), respectively. The IBI, DoB, and SiB decreased starting at a concentration of 1 μM for 4-AP and PTZ (Fig. 2a,b). The DoB decreased starting at 0.3 μM of picrotoxin (Fig. 2c). For carbamazepine, the TS and NoB decreased at 30 μM, and the DoB decreased and the IBI increased at 100 μM (Fig. 2d). For pilocarpine, the IBI increased starting at 10 μM, the DoB decreased starting at 30 μM, and the TS decreased at 100 μM (Fig. 2e). For kainic acid, the TS decreased at 0.3 μM and the NoB went to 0 starting at 1 μM (Fig. 2f). For NMDA, the TS increased at 1 μM, whereas the TS, DoB, and SiB decreased and the NoB increased at 10 μM (Fig. 2g). For tramadol, the NoB decreased and the SiB increased starting at 3 μM, the TS, DoB, and SiB decreased at 30 μM, and the IBI increased at 100 μM (Fig. 2h). For theophylline, the IBI increased starting at 10 μM, and the SiB increased whereas the NoB decreased starting at 30 μM (Fig. 2i). For paroxetine, the DoB decreased starting at 0.3 μM, and the TS decreased starting at 1 μM (Fig. 2j). For varenicline, the IBI increased and the DoB decreased at 30 μM (Fig. 2k). For venlafaxine, the DoB decreased at 10 μM, and the TS and SiB decreased at 30 μM (Fig. 2l). For acetaminophen, the DoB decreased starting at 3 μM (Fig. 2m). For DMSO and amoxapine, no changes in any parameters were observed (Fig. 2n,o).

MEA data from a cultured human iPSC-derived neural network. (A) (a) Phase-contrast image of neurons on an MEA chip at 81 days in vitro (DIV). (b) Typical action potential waveform in a spontaneous recording. (c) The upper graph shows the action potential waveform acquired with a single electrode and the voltage threshold for spike detection (red line). Raster plots of detected spikes (black circles) are shown under the graph. (B) Concentration-dependent raster plot images for typical mechanisms of action: (a) 4-AP, (b) carbamazepine, (c) NMDA, (d) PTZ, (e) acetaminophen, (f) DMSO. (C) Schematic diagram of analysis parameters.

Concentration-dependent changes of 15 compounds in five parameters: TS (pink), NoB (black), IBI (green), DoB (blue), SiB (cyan). Parameters are depicted as the average % change from control (vehicle control set to 100%) ± SEM from n = 3–4 wells. Data were analyzed using one-way ANOVA followed by post hoc Dunnett's test (*p < 0.05, **p < 0.01 vs. vehicle).

Based on the preceding results, we found that the changes in the parameters studied were not similar among all seizure-causing compounds; changes differed based on the mechanism of action of the drug. At the same time, a significant difference in the DoB was detected for acetaminophen, which is a seizure-free compound. Changes in DoB may be observed for certain seizure-free compounds. Consequently, we found that there are difficulties in using a single parameter to distinguish between seizure-causing compounds with different mechanisms of action and seizure-free compounds.

We created an artificial intelligence (AI) model trained on raster plots so that it could classify the responses of seizure-causing compounds with different mechanisms of action as well as the responses of seizure-free compounds. Raster plots were created from the time-series data of the detected spikes, and images were then created by segmenting the data into time windows four times the inter-maximum frequency of network burst interval (IMFI) measured before drug administration. The network burst frequency differed between wells, so the number of segmented raster plot images also differed between wells. Four times the IMFI was chosen because it is suitable for capturing both the regularity of network burst activity and fine firing patterns, and it reduces variability between wells. Next, the segmented raster plot images were input into AlexNet36, an object recognition model, and the 4096-dimensional output of the fully connected layer (the 21st layer) was extracted as the image feature quantities. Lastly, we corrected for differences between wells due to differing initial states by normalizing the feature quantities of each drug around the mean feature quantities obtained when the vehicle was administered to each well. Datasets for the 13 seizure-causing compounds and two seizure-free compounds (Table 1 lists the number of segmented raster plots per concentration) were created. We used a pattern recognition neural network composed of an input layer of 4096 neurons, a hidden layer of nine sigmoid neurons, and an output layer with two classes, which made up a toxicity prediction model to predict whether a compound was seizure-causing or seizure-free (Fig. 3A). We used four seizure-causing compounds with different mechanisms and burst frequency responses (4-AP [30 and 60 μM, n = 3 wells each], carbamazepine [100 μM, n = 3 wells], NMDA [3 and 10 μM, n = 3 wells], and PTZ [1000 μM, n = 3 wells]) and two seizure-free compounds (all concentrations of acetaminophen [n = 3 wells] and all concentrations of DMSO [n = 3 wells]) to train and validate the model; 75% of the dataset was used for training, and the remaining 25% was used for validation after training (Table 1). These four seizure-causing compounds were selected to cover the firing patterns of seizure-causing compounds: compounds with different mechanisms of action were chosen as training data, including ones in which firing increased and ones in which firing decreased. Accuracy was evaluated using the raster plots of unlearned wells after training, i.e., using the holdout scheme. The training data contained 330 4-AP plots, 822 carbamazepine plots, 1323 NMDA plots, 198 PTZ plots, 3546 acetaminophen plots, and 2286 DMSO plots. The test data contained 111 4-AP plots, 294 carbamazepine plots, 441 NMDA plots, 54 PTZ plots, 1182 acetaminophen plots, and 702 DMSO plots. We created a confusion matrix of the seizure-causing and seizure-free classification results from the training data and test data (Fig. 3B). Next, a receiver operating characteristic curve and the area under the curve (AUC) were calculated for all training data and all test data, and the optimal operating point was determined (Fig. 3C(a)). The accuracy, positive predictive value, sensitivity, specificity, and F-measure of the model's predictions at the optimal operating point were calculated (Table 2).
The model trained on raster plot feature quantities had an AUC of 0.9998 on the training data and an AUC of 0.9967 on the unlearned data; the optimal operating point was 0.158. The classification precision on the training data for each drug at the optimal operating point was as follows: 100% for 4-AP, 97.8% for carbamazepine, 99.6% for NMDA, and 96.0% for PTZ. The classification precision on the unlearned data was 100% for 4-AP, 91.5% for carbamazepine, 100% for NMDA, and 94.4% for PTZ. The prediction accuracy across all compounds was 98.4%.
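The described pipeline (AlexNet fully-connected-layer features followed by a small two-class network) can be approximated in PyTorch as below. The layer slicing, preprocessing, and sizes are assumptions based on the text, not the authors' code, which likely used a different toolchain.

```python
import torch
import torch.nn as nn
import torchvision.models as models

# Feature extractor: AlexNet up to a 4096-unit fully connected layer
alexnet = models.alexnet(weights="IMAGENET1K_V1")
feature_extractor = nn.Sequential(
    alexnet.features, alexnet.avgpool, nn.Flatten(),
    *list(alexnet.classifier.children())[:3],      # Dropout, Linear(9216->4096), ReLU
)
feature_extractor.eval()

imgs = torch.rand(16, 3, 224, 224)                 # segmented raster-plot images (placeholder)
with torch.no_grad():
    feats = feature_extractor(imgs)                # (16, 4096) image feature quantities

# Two-class pattern recognition network: 4096 inputs, 9 sigmoid hidden neurons
classifier = nn.Sequential(nn.Linear(4096, 9), nn.Sigmoid(), nn.Linear(9, 2))
logits = classifier(feats)                         # seizure-causing vs. seizure-free
```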

Creation of seizure risk prediction AI using raster plot images and evaluation of the prediction model. (A) Data flow and architecture of the seizure risk prediction model. w1 is the weight between the input layer and the hidden layer; w2 is the weight between the hidden layer and the output layer. (B) (a) Confusion matrix for each compound used for training, (b) confusion matrix for the entire training dataset, (c) confusion matrix for each compound used for the test, (d) confusion matrix for the entire test dataset. The test dataset used data from wells not used for the training dataset. Vehicle in the confusion matrix indicates vehicle data for the four seizure-causing compounds. (C) (a) Receiver operating characteristic (ROC) curve after classification of training and testing data in the neural network model (black line: training data; red line: testing data; red dot: optimal operating point). (b) Comparison of ROC curves after classification of the same testing data with the NN and SVM models (black line: SVM model; red line: NN model).

Figure 3C(b) shows the ROC curve for a support vector machine (SVM) model trained with the same 4096-dimensional feature dataset as the neural network (NN) model. Comparing the AUCs on the test data revealed that the NN model had an AUC of 0.9967 and the SVM model an AUC of 0.9841; thus, the NN model was superior to the SVM model (Fig. 3C(b)). Therefore, in this study, we used the NN model.
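A sketch of that comparison: an SVM and a small NN are trained on the same feature vectors and their test AUCs compared. The data below are random placeholders, so the reported numbers will not be reproduced.

```python
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.random((400, 4096)); y = rng.integers(0, 2, 400)    # placeholder 4096-dim features
Xtr, Xte, ytr, yte = X[:300], X[300:], y[:300], y[300:]

nn = MLPClassifier(hidden_layer_sizes=(9,), activation="logistic",
                   max_iter=500, random_state=0).fit(Xtr, ytr)
svm = SVC(probability=True, random_state=0).fit(Xtr, ytr)   # Platt scaling for probabilities

print("NN  AUC:", roc_auc_score(yte, nn.predict_proba(Xte)[:, 1]))
print("SVM AUC:", roc_auc_score(yte, svm.predict_proba(Xte)[:, 1]))
```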

The seizure-causing/seizure-free classification AI trained on the raster plots that we created accurately classified the responses of seizure-causing compounds with differing mechanisms and seizure-free compounds.

If we can establish a ranked development priority for compounds based on their seizure liability, drug discovery and development will become more efficient. Determining concentration dependence is necessary in order to assign priority to drugs. Thus, using the AI we created, we investigated the concentration dependence of seizure-causing/seizure-free judgments. The toxicity probabilities predicted by the AI for each concentration are shown in Fig. 4. The proportions of images classified as seizure-causing and as seizure-free were computed from the time-series data of each well, and the mean probability for each well was then calculated and used to represent the toxicity risk at each concentration. For the unlearned samples, which include data from wells that were not used for the training dataset, the following concentrations were determined to have a seizure liability probability of 50% or higher: 4-AP, 1 μM (62.2%), 10 μM (94.6%), 30 μM (100%), and 60 μM (100%); carbamazepine, 30 μM (76.9%) and 100 μM (85.0%); NMDA, 1 μM (63.3%), 3 μM (100%), and 10 μM (100%); and PTZ, 1 μM (51.9%), 10 μM (81.5%), 100 μM (88.9%), and 1000 μM (88.9%) (Fig. 4a,b,d,e). Seizure liability was thus indicated even at concentrations lower than those the AI was trained on, and the concentration dependence was determined. Acetaminophen, a seizure-free compound, was determined to be seizure-free with a probability of 97.9% or higher regardless of the concentration. DMSO was also determined to be seizure-free, with a probability of 99.1% or higher regardless of the concentration (Fig. 4c,f). The seizure liability prediction AI we created determined the concentration dependence of seizure-causing compounds and identified seizure-free compounds as seizure-free regardless of the concentration.
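The aggregation described (image-level probabilities averaged per well, then per concentration) might look like the following; the column names and values are purely illustrative.

```python
import pandas as pd

# Image-level seizure probabilities predicted by the AI (illustrative values)
df = pd.DataFrame({
    "well":             ["w1", "w1", "w2", "w2", "w1", "w2"],
    "concentration_uM": [1, 1, 1, 1, 10, 10],
    "p_seizure":        [0.62, 0.70, 0.48, 0.55, 0.92, 0.96],
})

# Mean probability per well, then the well means summarize risk at each concentration
per_well = df.groupby(["concentration_uM", "well"])["p_seizure"].mean()
risk_per_conc = per_well.groupby("concentration_uM").mean()
print(risk_per_conc)
```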

Concentration-dependent prediction of seizure risk in learning drugs by AI. AI predicted the negative probabilities (blue bar) and seizure risk (red bar) at each concentration of training data (left) and test data (right). (a) 4-AP, (b) NMDA, (c) acetaminophen, (d) carbamazepine, (e) PTZ, (f) DMSO.

It is important for the AI to be able to detect the toxicity of drugs it has not been trained on. Thus, we used the AI to determine the toxicity of nine unlearned seizure-causing compounds based on data collected for them. To verify the AI, the nine unlearned seizure-causing compounds were treated as unknown compounds and were not used in training. Figure 5 shows the seizure toxicity determination results for each concentration of the unlearned drugs. The concentrations that showed a 50% or higher probability of seizure liability were as follows: kainic acid, 1 μM (81.8%), 3 μM (100%), and 10 μM (100%); paroxetine, 3 μM (73.7%), 10 μM (100%), and 30 μM (100%); picrotoxin, 0.1 μM (91.4%), 0.3 μM (93.7%), 1 μM (91.8%), 3 μM (97.8%), and 10 μM (91.5%); varenicline, 10 μM (52.6%) and 30 μM (77.1%); pilocarpine, 1 μM (62.3%), 3 μM (75.8%), 10 μM (86.8%), 30 μM (89.4%), and 100 μM (97.0%); tramadol, 3 μM (61.9%), 10 μM (88.6%), 30 μM (98.9%), and 100 μM (100%); and venlafaxine, 10 μM (90.5%), 30 μM (100%), and 100 μM (100%). Seven of the unlearned drugs were determined to have concentration-dependent seizure liability (Fig. 5a–d,f–h). On the other hand, amoxapine and theophylline were determined to be seizure-free at all concentrations (Fig. 5e,i). This showed that the AI was able to detect seizure toxicity in a concentration-dependent manner, even for unlearned drugs.

Concentration-dependent prediction of seizure risk in non-training drugs by AI. AI predicted the negative probabilities (blue bar) and seizure risk (red bar) at each concentration. (a) Kainic acid, (b) paroxetine, (c) picrotoxin, (d) varenicline, (e) amoxapine, (f) pilocarpine, (g) tramadol, (h) venlafaxine, (i) theophylline.

To verify whether the AI can determine the safety of unlearned negative compounds, data for the negative compounds aspirin (1, 3, 10, 30, 100 μM), amoxicillin (1, 3, 10, 30, 100 μM), and felbinac (1, 3, 10, 30, 100 μM) were judged (Fig. 6). The negative probabilities for aspirin were 76.3% (1 μM), 82.0% (3 μM), 79.0% (10 μM), 80.8% (30 μM), and 81.7% (100 μM); for amoxicillin, 91.3% (1 μM), 86.3% (3 μM), 86.4% (10 μM), 81.1% (30 μM), and 77.6% (100 μM); and for felbinac, 83.8% (1 μM), 80.9% (3 μM), 76.1% (10 μM), 71.8% (30 μM), and 77.7% (100 μM) (Fig. 6b). Although there were some significant differences in the conventional analysis parameters (Fig. 6a), the AI judged all three compounds negative at all concentrations. These results confirmed that the AI can correctly judge unlearned negative compounds as negative.

Prediction of seizure risk in non-training negative compounds by AI. (A) Concentration-dependent changes of 3 negative compounds in five parameters: TS (pink), NoB (black), IBI (green), DoB (blue), SiB (cyan). (a) Aspirin, (b) amoxicillin, (c) felbinac. (B) AI-predicted negative probabilities (blue bar) and seizure risk (red bar) at each concentration.

Because seizure-causing compounds with differing mechanisms elicit different responses, if the AI is able to classify the compounds despite this, it can also predict the mechanism of seizure liability of unlearned drugs. Thus, we trained the AI on drug names and raster plots in order to classify compounds as seizure-causing compounds with differing mechanisms or seizure-free compounds.

We used a pattern recognition neural network composed of an input layer of 4096 neurons, a hidden layer of 120 sigmoid neurons, and an output layer with 14 classes (Fig. 7), which made up a drug identification model to predict the names of seizure-causing and seizure-free compounds. The model was trained on a dataset composed of 4-AP (30 and 60 μM), amoxapine (100 μM), carbamazepine (30 and 100 μM), kainic acid (1, 3, and 10 μM), NMDA (3 and 10 μM), PTZ (1000 μM), paroxetine (3, 10, and 30 μM), picrotoxin (1, 3, and 10 μM), pilocarpine (10, 30, and 100 μM), theophylline (100 μM), tramadol (30 and 100 μM), varenicline (30 μM), and venlafaxine (10, 30, and 100 μM), as well as all concentrations of acetaminophen and all concentrations of DMSO as seizure-free compounds (Table 3). The full dataset comprised 56 wells. Training was conducted by excluding one of the 56 wells and training the AI on the names of the drugs in the other 55 wells; 75% of the 55-well dataset was used for training, and 25% was used for validation after training. The excluded well provided the test data. The prediction accuracy was calculated using the leave-one-sample (well)-out scheme. We created five AIs for each excluded well, i.e., 56 × 5 = 280 AIs in total. For the data from each held-out well (data not used to train the AI), the name of the drug was identified with each of the five models, and the mean value was calculated. The deviation of the five models' prediction accuracy was 0.11% at the trained concentrations of all drugs. The deviation of the prediction accuracy at all concentrations of all drugs was 1.6%. The predictive probabilities at different drug concentrations are shown in Table 4. DMSO and acetaminophen, the seizure-free compounds, were judged seizure-free at all concentrations for every drug vehicle, with a mean probability of 99.9% ± 0.3%. 4-AP (1 μM), amoxapine (3 μM), NMDA (1 μM), picrotoxin (0.1 μM), pilocarpine (1 μM), PTZ (10 μM), theophylline (3 μM), varenicline (10 μM), venlafaxine (3 μM), and tramadol (10 μM) were correctly identified at concentrations lower than those in the training data. Carbamazepine (30 μM), kainic acid (1 μM), and paroxetine (3 μM) were correctly identified at the concentrations used to train the AI. The drugs that could not be identified at certain concentrations were all judged to be seizure-free compounds, and no drug was misidentified as a different drug. The mean predictive accuracy for all drugs at the concentrations used to train the AI was 99.9% ± 0.1%. The drug identification AI correctly identified the responses of the 13 seizure-causing compounds and two seizure-free compounds.
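A sketch of the leave-one-well-out protocol with five models per held-out well (56 × 5 = 280 fits). The feature dimensionality and per-well image counts are shrunk here to keep the example light (4096 dimensions in the paper), and scikit-learn's MLPClassifier stands in for the authors' pattern recognition network.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
N_WELLS, N_IMG, DIM, N_CLASSES = 56, 10, 64, 14            # DIM is 4096 in the paper

feats = [rng.random((N_IMG, DIM)) for _ in range(N_WELLS)]               # per-well features
labels = [np.full(N_IMG, rng.integers(0, N_CLASSES)) for _ in range(N_WELLS)]

well_probs = []
for held_out in range(N_WELLS):
    X = np.vstack([feats[w] for w in range(N_WELLS) if w != held_out])
    y = np.concatenate([labels[w] for w in range(N_WELLS) if w != held_out])
    # Five independently seeded models per held-out well, predictions averaged
    probs = np.mean([
        MLPClassifier(hidden_layer_sizes=(120,), activation="logistic",
                      max_iter=100, random_state=seed).fit(X, y).predict_proba(feats[held_out])
        for seed in range(5)
    ], axis=0)
    well_probs.append(probs.mean(axis=0))    # mean class probabilities for the held-out well
```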

Creation of drug name prediction AI using raster plot images. Data flow and architecture of drug name prediction model. w1 is the weight between the input layer and the hidden layer; w2 is the weight between the hidden layer and the output layer.

View original post here:
Raster plots machine learning to predict the seizure liability of drugs and to identify drugs | Scientific Reports - Nature.com

Stanford to offer Free Machine Learning with Graphs course online from fall – Analytics India Magazine

Stanford University's Machine Learning with Graphs course will be available online for free from the fall of 2022.

Complex data can be represented as a graph of relationships between objects. Such networks are a fundamental tool for modelling social, technological, and biological systems. The course focuses on the computational, algorithmic, and modelling challenges specific to the analysis of massive graphs. By means of studying the underlying graph structure and its features, students are introduced to machine learning techniques and data mining tools apt to reveal insights on a variety of networks.

The topics covered in the course include representation learning and Graph Neural Networks; algorithms for the World Wide Web; reasoning over Knowledge Graphs; influence maximisation; disease outbreak detection and social network analysis.

The pre-requisites for the course include:

1. Knowledge of basic computer science principles, sufficient to write a reasonably non-trivial computer program (e.g., CS107 or CS145 or equivalent are recommended)

2. Familiarity with the basic probability theory (CS109 or Stat116 are sufficient but not necessary)

3. Familiarity with the basic linear algebra

The recitation sessions in the first weeks of the class will give an overview of the expected background. Stanford University recommended Graph Representation Learning; Networks, Crowds, and Markets: Reasoning About a Highly Connected World; and Network Science as optional reading.


Read the rest here:
Stanford to offer Free Machine Learning with Graphs course online from fall - Analytics India Magazine

MLOps: What Is It and Why Do We Need It? – CIO Insight

Recently, machine learning (ML) has become an increasingly essential component of big data analytics, business intelligence, predictive analytics, fraud detection, and more. Because there is a plethora of methods and tools businesses can use to analyze their data, companies must select an ML approach that minimizes cost and maximizes efficiency. The concept of machine learning operations (MLOps) has emerged from big data analytics as that solution.

Read more: AI vs Machine Learning: What Are Their Differences & Impacts?

Machine learning operations is a way to scale large ML projects. The job of any data scientist is to figure out what data can teach them about their business and help improve it, but MLOps takes that idea one step further by applying deep learning on top of large-scale datasets. It involves the use of methods, systems, algorithms, and processes for improving data-driven decision-making and value generation through machine learning.

This area of study combines data mining, AI, analytics, and big data with automation to create a self-managing system capable of handling incredibly complex tasks.

ML is being used for a wide range of processes and can benefit those involving predictions or simulations. Companies are employing machine learning to optimize their operations, gain a competitive edge, and drive revenue. Here are some use cases of machine learning in business.

Read more: AI & Machine Learning: Substance Behind the Hype?

Alteryx is a California-based computer software company with a development facility in Broomfield, Colorado. The products of the company are used in data science and analytics.

Dataiku is an AI and ML company founded in 2013, which has offices based in New York City and Paris, France. It provides Data Science Studio (DSS) with a focus on cross-discipline collaboration and usability.

DataRobot is a Boston, Massachusetts-based platform for augmented data science and machine learning. The platform automates critical tasks, allowing data scientists to work more effectively and citizen data scientists to more easily develop models.

RapidMiner is headquartered in Boston, Massachusetts. Data preparation, machine learning, deep learning, text mining, and predictive analytics are all offered through the company's integrated ecosystem.

RapidMiner products include RapidMiner Studio, RapidMiner Auto Model, RapidMiner Turbo Prep, RapidMiner Go, RapidMiner Server, and RapidMiner Radoop.

MathWorks is headquartered in Natick, Massachusetts. The company's two flagship products are MATLAB, which offers an environment for scientists, engineers, and programmers to analyze and display data and build algorithms, and Simulink, a graphical and simulation environment for model-based design of dynamic systems.

MATLAB and Simulink are widely used in the aerospace, automotive, software, and other industries. Polyspace, SimEvents, and Stateflow are some of the company's other products.

There are numerous risks involved when it comes to implementing new, cutting-edge technology like machine learning in business operations, including:

Read more: What Is Adversarial Machine Learning?

The data required to train ML algorithms can be quite large. Training models often require hundreds of thousands or even millions of instances to identify meaningful patterns.

Training a deep neural network for object recognition, for example, requires images of tens of thousands of labeled objects, and training a natural language processing system means downloading gigabytes worth of data.

For most organizations, it's unfeasible to simply push all that data into production and let a model run until it finishes, and many business processes don't allow for taking things offline in order to retrain.

By combining operations and machine learning, developers can build applications that can continuously learn from new data as they're being created. Not only does an MLOps approach enable faster time-to-market with improved accuracy, it also has big implications for forecasting, anomaly detection, predictive maintenance, and more.

More:
MLOps: What Is It and Why Do We Need It? - CIO Insight

Machine Learning as a Service (MLaaS) Market 2022, Industry Size, Trends, Share, Growth, Analysis and Forecast to 2028 Business – Inter Press Service

Machine Learning as a Service (MLaaS) Market 2022-2028

A New Market Study, Titled Machine Learning as a Service (MLaaS) Market Upcoming Trends, Growth Drivers and Challenges has been featured on fusionmarketresearch.

Description

This global study of the Machine Learning as a Service (MLaaS) Market offers an overview of the existing market trends, drivers, restrictions, and metrics, and also offers a viewpoint for important segments. The report also tracks product and services demand growth forecasts for the market. The study approach also includes a detailed segmental review. A regional study of the global Machine Learning as a Service (MLaaS) industry is also carried out for North America, Latin America, Asia-Pacific, Europe, and the Near East & Africa. The report mentions growth parameters in the regional markets along with major players dominating regional growth.

Request Free Sample Report @ https://www.fusionmarketresearch.com/sample_request/Machine-Learning-as-a-Service-(MLaaS)-Market-Global-Outlook-and-Forecast-2022-2028/83160

Machine learning is a multidisciplinary subject involving probability theory, statistics, approximation theory, convex analysis, algorithm complexity theory, and other disciplines. This report contains market size figures and forecasts for Machine Learning as a Service (MLaaS) globally, including the following market information: global Machine Learning as a Service (MLaaS) market revenue, 2017-2022 and 2023-2028 ($ millions); global top five companies in 2021 (%).

The global Machine Learning as a Service (MLaaS) market was valued at US$ 2103.3 million in 2021 and is projected to reach US$ 7923.8 million by 2028, at a CAGR of 20.9% during the forecast period. The U.S. market is estimated at $ million in 2021, while China is forecast to reach $ million by 2028. The Private Clouds segment is projected to reach $ million by 2028, with a % CAGR over the next six years.

The global key manufacturers of Machine Learning as a Service (MLaaS) include Amazon, Oracle, IBM, Microsoft, Google, Salesforce, Tencent, Alibaba, and UCloud, etc. In 2021, the global top five players held a combined share of approximately % in terms of revenue.

Fusion Market Research (FMR) has surveyed the Machine Learning as a Service (MLaaS) manufacturers, suppliers, distributors and industry experts on this industry, involving the sales, revenue, demand, price change, product type, recent development and plan, industry trends, drivers, challenges, obstacles, and potential risks.

Competitor Analysis

The report also provides analysis of leading market participants, including:

Key companies' Machine Learning as a Service (MLaaS) revenues in the global market, 2017-2022 (estimated) ($ millions)

Key companies' Machine Learning as a Service (MLaaS) revenue share in the global market, 2021 (%)

Key companies' Machine Learning as a Service (MLaaS) sales in the global market, 2017-2022 (estimated) (K units)

Key companies' Machine Learning as a Service (MLaaS) sales share in the global market, 2021 (%)

Further, the report presents profiles of competitors in the market; key players include:

Total Market by Segment:

Global Machine Learning as a Service (MLaaS) Market, by Type, 2017-2022, 2023-2028 ($ Millions) & (K Units)

Global Machine Learning as a Service (MLaaS) Market, by Application, 2017-2022, 2023-2028 ($ Millions) & (K Units)

Market segment by Region, regional analysis covers

Ask Queries @ https://www.fusionmarketresearch.com/enquiry.php/Machine-Learning-as-a-Service-(MLaaS)-Market-Global-Outlook-and-Forecast-2022-2028/83160

Table of Contents

1 Introduction to Research & Analysis Reports

2 Global Machine Learning as a Service (MLaaS) Overall Market Size

3 Company Landscape

4 Market Sights by Product

5 Sights by Application

6 Sights by Region

7 Players Profiles

8 Conclusion

9 Appendix

What our report offers:

Free Customization Offerings:


ABOUT US:

Fusion Market Research is one of the largest collections of market research reports from numerous publishers. We have a team of industry specialists providing unbiased insights on reports to best meet the requirements of our clients. We offer a comprehensive collection of competitive market research reports from a number of global leaders across industry segments.

CONTACT US

sales@fusionmarketresearch.com

Phone: + (210) 775-2636 (USA); + (91) 853 060 7487

The post Machine Learning as a Service (MLaaS) Market 2022, Industry Size, Trends, Share, Growth, Analysis and Forecast to 2028 appeared first on 360PRWire.

Go here to see the original:
Machine Learning as a Service (MLaaS) Market 2022, Industry Size, Trends, Share, Growth, Analysis and Forecast to 2028 Business - Inter Press Service

Research Fellow in Machine Learning, Natural Language Processing and Speech Processing job with UNIVERSITY OF SOUTHAMPTON | 281156 – Times Higher…

Agents, Interactions & Complexity

Location: Highfield Campus
Salary: £31,406 to £38,587 per annum
Full Time Fixed Term (12 months)
Closing Date: Wednesday 09 March 2022
Interview Date: To be confirmed
Reference: 1694122FP

The roles will be part of the UKRI Trustworthy Autonomous Systems Hub (TAS Hub). The Hub is led by the University of Southampton with partners from the University of Nottingham and King's College London. The TAS Hub is the focal point of the £33m UKRI Trustworthy Autonomous Systems programme (for more details see http://www.tas.ac.uk).

You will undertake independent research as well as working as part of a team - this will include using approaches or methodologies and techniques appropriate to the type of research, and being responsible for writing up your work in order to contribute to published outcomes. There will be the opportunity to use creativity to identify areas for research, develop research methods and extend your research portfolio.

As the research will need to generalise across more than one application domain (e.g., healthcare, autonomous vehicles, IoT), this offers the opportunity to collaborate with partners from across the TAS Hub and the wider TAS programme (i.e., the 60+ TAS Hub industrial partners and the TAS Nodes) and to undertake industry placements.

To take advantage of these opportunities you will have a PhD or equivalent professional qualifications and experience in one of the following areas: Machine Learning, Natural Language Processing, or Speech Processing. As the work will need to be carried out in a multi-disciplinary setting involving experts and researchers from fields such as healthcare, law, engineering, business, and policy, it is important that you are able to communicate research outputs in a way that is understandable and useful to researchers from diverse disciplines.

The candidates will have experience in Machine Learning (with applications to computer vision, speech or signal processing), Reinforcement Learning, and Natural Language Processing to develop and evaluate a range of autonomous systems for challenging real-world applications. Experience in evaluative methods such as user studies and surveys, and a demonstrable interest in autonomous systems would be a welcome addition.

A strong track record of good publications at international venues (IJCAI, AIJ, JAIR, ICML, ICLR, AAMAS, NeurIPS) is desirable.

Equality, diversity and inclusion are central to the ethos in the School of Electronics and Computer Science. We particularly encourage women, Black, Asian and minority ethnic, LGBT and disabled applicants to apply for this position. We are committed to improving equality for women in science and were successful in achieving an Athena SWAN bronze award in April 2020. We give full consideration to applicants who wish to work flexibly, including part-time, and due consideration will be given to applicants who have taken a career break. The University has a generous maternity policy* and onsite childcare facilities.

The University of Southampton is in the top 1% of world universities and in the top 10 of the UK's research-intensive universities. The University of Southampton is committed to sustainability and being a globally responsible university and has recently been awarded the Platinum EcoAward. Our vision is to embed the principles of sustainability into all aspects of our individual and collective work, integrating sustainable development into our business planning, policy-making, and professional activities. This commits all of our staff and students to take responsibility for managing their activities to minimise harm to the environment, whether through switching off non-essential electrical equipment or using the recycling facilities.

*subject to qualifying criteria

The posts are full time fixed term for 1 year initially. The posts are due to start 1 March 2022.

Applications for Research Fellow positions will be considered from candidates who are within six months of a relevant PhD qualification. The title of Research Fellow will be applied upon successful completion of the PhD. Prior to the qualification being awarded the title of Senior Research Assistant will be given.

Application Procedure

You should submit your completed online application form at https://jobs.soton.ac.uk. The application deadline will be midnight on the closing date 09/03/2022. If you need any assistance, please contact Sian Gale (Recruitment Team) on 02380 592750 or at Recruitment@soton.ac.uk. Please quote reference 1694122FP on all correspondence.

Read the rest here:
Research Fellow in Machine Learning, Natural Language Processing and Speech Processing job with UNIVERSITY OF SOUTHAMPTON | 281156 - Times Higher...

Hybrid Machine-Learning Approach Gives a Hand to Prosthetic-Limb Gesture Accuracy – Neuroscience News

Summary: Researchers have developed a novel hybrid machine learning approach to muscle gesture recognition in prosthetic arms.

Source: Beijing Institute of Technology Press

Engineering researchers have developed a hybrid machine-learning approach to muscle gesture recognition in prosthetic hands that combines an AI technique normally used for image recognition with another approach specialized for handwriting and speech recognition. The technique achieves far superior performance to traditional machine learning efforts.

A paper describing the hybrid approach was published in the journal Cyborg and Bionic Systems on November 8, 2021.

Motor neurons are those parts of the central nervous system that directly control our muscles. They transmit electrical signals that cause muscles to contract. Electromyography (EMG) is a method of measuring muscle response by recording this electrical activity through the insertion of electrode needles through the skin and into the muscle. Surface EMG (sEMG) performs this same recording process in a non-invasive fashion with the electrodes placed on the skin above the muscle, and is used for non-medical procedures such as sports and physiotherapy research.

Over the last decade, researchers have been investigating the potential use of surface EMG signals to control prostheses for amputees, especially for the complex movements and gestures required of prosthetic hands, in order to deliver smoother, more responsive, and more intuitive operation of the devices than is currently possible.

Unfortunately, unexpected environmental interference such as a shift of the electrodes introduces a great deal of noise to the process of any device attempting to recognize the surface EMG signals. Such shifts regularly occur in daily wear and use of such systems. To try to overcome this problem, users must engage in a lengthy and tiring sEMG signal training period prior to use of their prostheses. Users are required to laboriously collect and classify their own surface EMG signals in order to be able to control the prosthetic hand.

In order to reduce or eliminate the challenges of such training, researchers have explored various machine learning approaches, in particular deep learning pattern recognition, to distinguish between different, complex hand gestures and movements despite the presence of environmental signal interference.

The amount of training required can in turn be reduced by optimizing the structure of the deep learning network. One improvement that has been trialed is the convolutional neural network (CNN), whose connection structure is analogous to that of the human visual cortex. This type of neural network offers strong performance on images and speech, and as such is at the heart of computer vision.

Researchers have achieved some success with CNNs, significantly improving the recognition (extraction) of the spatial dimensions of sEMG signals related to hand gestures. But while CNNs deal well with space, they struggle with time: gestures are not static phenomena but take place over time, and CNNs ignore the temporal information in the continuous contraction of muscles.

Recently, some researchers have begun to apply the long short-term memory (LSTM) artificial neural network architecture to the problem. An LSTM contains feedback connections, giving it superior performance in processing, classifying, and making predictions based on sequences of data over time, especially where there are lulls, gaps, or interferences of unexpected duration between the important events. LSTM is a form of deep learning that has been applied most successfully to tasks involving unsegmented, connected activity, such as handwriting and speech recognition.

The challenge is that while researchers have achieved better gesture classification of sEMG signals this way, the size of the required computational model is a serious problem. The microprocessor that can be fitted to a prosthesis is limited, and using something more powerful would be too costly. And while such deep learning models work on the computers in the lab, they are difficult to run on the sort of embedded hardware found in a prosthetic device.

"Convolutional neural networks were, after all, conceived with image recognition in mind, not control of prostheses," said Dianchun Bai, one of the authors of the paper and professor of electrical engineering at Shenyang University of Technology. "We needed to couple CNN with a technique that could deal with the dimension of time, while also ensuring feasibility in the physical device that the user must wear."

So the researchers developed a hybrid CNN and LSTM model that combines the spatial and temporal advantages of the two approaches. This reduced the size of the deep learning model while achieving high accuracy, with more robust resistance to interference.

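The article does not reproduce the authors' code, but a minimal sketch of such a hybrid network, written here in PyTorch with assumed dimensions (8 electrode channels, 200-sample windows, 16 gesture classes, none of which are taken from the paper), might look like this:

```python
import torch
import torch.nn as nn

class CNNLSTMGestureNet(nn.Module):
    """Hypothetical light CNN+LSTM for sEMG gesture classification.

    Input: (batch, channels=8, samples=200) windows of sEMG signal.
    The CNN extracts spatial features at each time step; the LSTM then
    models how those features evolve across the window.
    """
    def __init__(self, n_channels=8, n_classes=16, hidden=64):
        super().__init__()
        # 1-D convolutions over time, mixing the electrode channels
        self.cnn = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        # The LSTM consumes the CNN feature sequence
        self.lstm = nn.LSTM(input_size=64, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                 # x: (batch, 8, 200)
        feats = self.cnn(x)               # (batch, 64, 50)
        feats = feats.transpose(1, 2)     # (batch, 50, 64) for the LSTM
        _, (h_n, _) = self.lstm(feats)    # final hidden state summarizes the window
        return self.head(h_n[-1])         # (batch, 16) class logits

model = CNNLSTMGestureNet()
logits = model(torch.randn(4, 8, 200))    # toy batch of 4 windows
```

Keeping the convolutional stack shallow and the LSTM hidden size small is what makes a model like this compact enough to be a plausible candidate for embedded prosthesis hardware.
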
After developing their system, they tested the hybrid approach on ten non-amputee subjects engaging in a series of 16 different gestures such as gripping a phone, holding a pen, pointing, pinching and grasping a cup of water. The results demonstrated far superior performance compared to CNN alone or other traditional machine learning methods, achieving a recognition accuracy of over 80 percent.

The hybrid approach did however struggle to accurately recognize two pinching gestures: a pinch using the middle finger and one using the index finger. In future efforts, the researchers want to optimize the algorithm and improve its accuracy still further, while keeping the training model small so it can be used in prosthesis hardware. They also want to figure out what is prompting the difficulty in recognizing pinching gestures and expand their experiments to a much larger number of subjects.

Ultimately, the researchers want to develop a prosthetic hand that is as flexible and reliable as a users original limb.

Author: Ning Xu
Source: Beijing Institute of Technology Press
Contact: Ning Xu, Beijing Institute of Technology Press
Image: The image is credited to Dr. Tie Liu, School of Electrical Engineering, Shenyang University of Technology

Original Research: Open access. "Application Research on Optimization Algorithm of sEMG Gesture Recognition Based on Light CNN+LSTM Model" by Tie Liu et al., Cyborg and Bionic Systems.

Abstract

Application Research on Optimization Algorithm of sEMG Gesture Recognition Based on Light CNN+LSTM Model

The deep learning gesture recognition based on surface electromyography plays an increasingly important role in human-computer interaction. In order to ensure the high accuracy of deep learning in multistate muscle action recognition, and to ensure that the training model can be applied in an embedded chip with small storage space, this paper presents a feature model construction and optimization method based on a multichannel sEMG amplification unit.

The feature model is established using multidimensional sequential sEMG images, combining a convolutional neural network and a long short-term memory network to solve the problem of multistate sEMG signal recognition.

The experimental results show that, under the same network structure, sEMG signals processed with the fast Fourier transform and root mean square as feature data achieve a good recognition rate; the recognition accuracy for complex gestures is 91.40%, with a model size of 1 MB.

Even at this small size, the model retains high precision and can still control the artificial hand accurately.

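The abstract does not spell out the preprocessing in detail, but a plausible NumPy sketch of the FFT and RMS feature extraction it names, with assumed window and channel sizes (not taken from the paper), is:

```python
import numpy as np

def semg_features(window):
    """Compute RMS and FFT-magnitude features for one sEMG window.

    window: (n_channels, n_samples) array, e.g. (8, 200).
    The shapes are illustrative assumptions, not the paper's values.
    """
    rms = np.sqrt(np.mean(window ** 2, axis=1))      # (n_channels,)
    spectrum = np.abs(np.fft.rfft(window, axis=1))   # (n_channels, n_samples//2 + 1)
    return np.concatenate([rms, spectrum.ravel()])   # flat feature vector

window = np.random.randn(8, 200)   # toy sEMG window
features = semg_features(window)   # 8 + 8 * 101 = 816 values
```
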
Read more here:
Hybrid Machine-Learning Approach Gives a Hand to Prosthetic-Limb Gesture Accuracy - Neuroscience News

Symbolic AI: The key to the thinking machine – VentureBeat

Even as many enterprises are just starting to dip their toes into the AI pool with rudimentary machine learning (ML) and deep learning (DL) models, a new form of the technology known as symbolic AI is emerging from the lab that has the potential to upend both the way AI functions and how it relates to its human overseers.

Symbolic AI's adherents say it more closely follows the logic of biological intelligence because it analyzes symbols, not just data, to arrive at more intuitive, knowledge-based conclusions. It's most commonly used in linguistic models such as natural language processing (NLP) and natural language understanding (NLU), but it is quickly finding its way into ML and other types of AI, where it can bring much-needed visibility into algorithmic processes.

The technology actually dates back to the 1950s, says expert.ai's Luca Scagliarini, but was considered old-fashioned by the 1990s, when demand for procedural knowledge of sensory and motor processes was all the rage. Now that AI is tasked with higher-order systems and data management, the capability to engage in logical thinking and knowledge representation is cool again.

One of the keys to symbolic AI's success is the way it functions within a rules-based environment. Typical AI models tend to drift from their original intent as new data influences changes in the algorithm. Scagliarini says the rules of symbolic AI resist drift, so models can be created much faster and with far less data to begin with, and then require less retraining once they enter production environments.

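As a toy illustration of this rules-based behavior (a sketch invented for this article, not drawn from expert.ai's products), a minimal forward-chaining rule engine in Python could look like this:

```python
# Facts are strings; each rule maps a set of premises to a conclusion.
# The domain (fraud screening) and all rule names are hypothetical.
RULES = [
    ({"invoice_amount_unusual", "new_payee"}, "flag_for_fraud_review"),
    ({"flag_for_fraud_review", "payee_blacklisted"}, "block_payment"),
]

def infer(facts):
    """Forward chaining: apply rules until no new conclusions appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer({"invoice_amount_unusual", "new_payee", "payee_blacklisted"}))
# derives flag_for_fraud_review, then block_payment
```

Because every conclusion follows deterministically from explicit rules, the system's behavior does not drift as new data arrives, which is the resistance Scagliarini describes.
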
Because they are bound by rules, however, symbolic algorithms cannot improve themselves over time, which is, after all, one of the key value propositions that AI brings to the table, says Jans Aasman, CEO of knowledge graph solutions provider Franz Inc. This is why symbolic AI is being integrated into ML, DL, and other forms of rules-free AI to create hybrid environments that provide the best of both worlds: full machine intelligence with logic-based brains that improve with each application.

This, in turn, enables AI to be trained using multiple techniques, including semantic inferencing and both supervised and unsupervised learning, which will ultimately create AI systems that can reason, learn, and engage in natural language question-and-answer interactions with humans. Already, this technology is finding its way into such complex tasks as fraud analysis, supply chain optimization, and sociological research.

This creates a crucial turning point for the enterprise, says Analytics Week's Jelani Harper. Data fabric developers like Stardog are working to combine both logical and statistical AI to analyze categorical data; that is, data that has been categorized in order of importance to the enterprise. Symbolic AI plays the crucial role of interpreting the rules governing this data and making a reasoned determination of its accuracy. Ultimately this will allow organizations to apply multiple forms of AI to solve virtually any and all situations they face in the digital realm, essentially using one AI to overcome the deficiencies of another.

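A schematic sketch of one AI covering for another, with all names and thresholds invented for illustration (this is not Stardog's or Franz's actual design), might pass a statistical score through symbolic rules that attach an explainable reason:

```python
# Hypothetical hybrid: a statistical model proposes, symbolic rules dispose.
APPROVED_PAYEES = {"ACME Corp"}

def model_score(transaction):
    # Stand-in for any trained statistical model's fraud probability.
    return 0.93 if transaction["amount"] > 10_000 else 0.05

def decide(transaction):
    """Symbolic rules interpret the score and attach a readable reason."""
    score = model_score(transaction)
    if transaction["payee"] in APPROVED_PAYEES:
        return "allow", "payee is on the approved list"
    if score > 0.9:
        return "block", f"model score {score:.2f} with no overriding rule"
    return "allow", f"model score {score:.2f} below threshold"

print(decide({"amount": 25_000, "payee": "Unknown Ltd"}))
# ('block', 'model score 0.93 with no overriding rule')
```
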
For organizations looking forward to the day they can interact with AI just like a person, symbolic AI is how it will happen, says tech journalist Surya Maddula. After all, we humans developed reason by first learning the rules of how things interrelate, then applying those rules to other situations, which is pretty much the way symbolic AI is trained. Integrating this form of cognitive reasoning within deep neural networks creates what researchers are calling neuro-symbolic AI, which will learn and mature using the same basic rules-oriented framework that we do.

While this may be unnerving to some, it must be remembered that symbolic AI still only works with numbers, just in a different way. By creating a more human-like thinking machine, organizations will be able to democratize the technology across the workforce so it can be applied to the real-world situations we face every day.

It certainly won't be able to solve all our problems, but it will relieve us of the most annoying ones.

Read more:
Symbolic AI: The key to the thinking machine - VentureBeat

Machine Learning in Communication Market to Witness Astonishing Growth by 2030 | Amazon, IBM, Microsoft and more – Talking Democrat

The global Machine Learning in Communication market report contains a detailed analysis of the current state and future scope of the market, along with sales patterns, market size, share, price structure, and market progressions. The study discusses the underlying trends and the impact of various factors that drive the market, along with their influence on the evolution of the Machine Learning in Communication market. The report briefly covers the product life cycle, comparing it to relevant products across industries, and then evaluates the snapshot given by Porter's five forces analysis to identify new opportunities in this industry. A thorough evaluation of the restraints included in the report contrasts them with the drivers, which makes strategic planning easier.

Get a Sample PDF of the report at https://www.datalabforecast.com/request-sample/372172-machine-learning-in-communication-market

Asia-Pacific region is expected to dominate the market over the forecast period owing to the increasing focus on the research, development, and manufacturing of Machine Learning in Communication in countries including China, Japan, India, and South Korea.

The report also comprises the study of current issues with end users and opportunities for the Machine Learning in Communication market. It also contains value chain analysis along with key market participants. To provide users of this report with a comprehensive view of the Machine Learning in Communication market, we have included a detailed competitive analysis of market key players. Furthermore, the report also comprehends business opportunities and scope for expansion.

The top key players covered in the Machine Learning in Communication market report are:

Amazon, IBM, Microsoft, Google, Nextiva, Nexmo, Twilio, Dialpad, Cisco, RingCentral.

The qualitative data gathered by extensive primary and secondary research presented in the report aims to provide crucial information regarding market dynamics, market trends, key developments and innovations, and product developments in the market. It also provides data about vendors, including their profile details which include product specifications, applications and industry performance, annual sales, revenue, relevant mergers, financial timelines, investments, growth strategies and future developments.

The biggest highlight of the report is its strategic analysis of the impact of COVID-19, which will help market players in this field evaluate their business approaches. The report also analyzes the markets of the leading 20 countries and introduces their market potential.

We are currently offering a quarter-end discount to all our high-potential clients, and we encourage you to avail yourself of its benefits and leverage your analysis based on our report.

To know how the COVID-19 pandemic will impact the Machine Learning in Communication market/industry, request a sample copy of the report: https://www.datalabforecast.com/request-sample/372172-machine-learning-in-communication-market

Market Analysis and Insights: Global Machine Learning in Communication Market

In 2020, the global Machine Learning in Communication market size was USD million, and it is expected to reach USD million by the end of 2030, with a high CAGR between 2022 and 2030.

Global Machine Learning in Communication Scope and Market Size

The global Machine Learning in Communication market is segmented by region (country), company, by Type, and by Application. Players, stakeholders, and other participants in the global Machine Learning in Communication market will be able to gain the upper hand as they use the report as a powerful resource. The segmental analysis focuses on sales, revenue and forecast by region (country), by Type, and by Application for the period 2017-2030.

Enquire before purchasing this report https://www.datalabforecast.com/request-enquiry/372172-machine-learning-in-communication-market

Global Machine Learning in Communication Market Segment Analysis:

This report focuses on the Machine Learning in Communication market by volume and value at the global level, regional level, and company level. From a global perspective, this report represents the overall Machine Learning in Communication market size by analyzing historical data. Additionally, type-wise and application-wise consumption tables and figures of the Machine Learning in Communication market are also given. It also distinguishes the market based on geographical regions like North America, Europe, Asia-Pacific, Latin America, and Middle East and Africa.

By product type, the market is primarily split into Cloud-Based and On-Premise.

By end-users/application, this report covers the following segments: Network Optimization, Predictive Maintenance, Virtual Assistants, and Robotic Process Automation (RPA).

Customization of the Report:

This report can be customized to meet the client's requirements. Please connect with our sales team ([emailprotected]), who will ensure that you get a report that suits your needs. You can also get in touch with our executives on +1 917-725-5253 to share your research requirements.

Contact: Henry K
Data Lab Forecast
86 Van Wagenen Avenue, Jersey City, New Jersey 07306, United States

Phone: +1 917-725-5253
Email: [emailprotected]

Website: https://www.datalabforecast.com/
Explore News Releases: https://newsbiz.datalabforecast.com/

Follow Us on: LinkedIn | Twitter

Original post:
Machine Learning in Communication Market to Witness Astonishing Growth by 2030 | Amazon, IBM, Microsoft and more – Talking Democrat