
Agnostiq Announces Partnership With Mila to Bridge the Quantum Computing and Machine Learning Communities – PRNewswire

TORONTO, Jan. 18, 2022 /PRNewswire/ -- Agnostiq, Inc., the first-of-its-kind quantum computing SaaS startup, is pleased to announce that it has formed a strategic partnership with Montreal-based Mila, the world's largest machine learning research institute, in an effort to bridge the gap between the quantum computing and machine learning communities.

"Quantum computing will have a tremendous impact on many fields and machine learning is no exception," says Oktay Goktas, CEO of Agnostiq. "A partnership with Mila brings us access to a world-class research community that comes with decades of experience in machine learning, which will in turn help us design better tools for emergent quantum machine learning use cases."

The new partnership gives Mila access to Agnostiq's quantum researchers, who are working on classes of machine learning problems that are specific to quantum computing, and Agnostiq access to Mila's AI/ML researchers and partner network. Partnering with Mila will help Agnostiq remain at the forefront and among the first to discover compelling new use cases for quantum machine learning.

"Agnostiq offers an exciting opportunity to explore ML challenges specific to quantum computing, as our strategic alliance with this promising startup will allow us to combine our expertise," says Stéphane Létourneau, Executive Vice President of Mila. "Mila's research community works daily toward improving the democratization of machine learning, developing new algorithms, and advancing deep learning capabilities. We are thrilled to work closely with Agnostiq to continue these important missions."

The partnership will also support Agnostiq's talent attraction and retention efforts, encouraging potential candidates to apply, as they will have the opportunity to collaborate with Mila's world-renowned researchers. Finally, the collaboration further validates Canada's position as a global leader in quantum computing and machine learning research.

ABOUT AGNOSTIQ INC.:
Agnostiq develops software tools and applications for advanced computing resources, with the aim of making these technologies more accessible to developers and enterprise customers. Agnostiq is an interdisciplinary team of physicists, computer scientists, and mathematicians backed by leading investors in New York and Silicon Valley. Learn more at http://www.agnostiq.ai.

ABOUT MILA:
Founded by Professor Yoshua Bengio of the Université de Montréal, Mila is a research institute in artificial intelligence which rallies about 700 researchers specializing in the field of deep learning. Based in Montreal, Mila's mission is to be a global pole for scientific advances that inspires innovation and the development of AI for the benefit of all. Mila is a non-profit organization recognized globally for its significant contributions to the field of deep learning, particularly in the areas of language modeling, machine translation, object recognition and generative models. Learn more at mila.quebec.

MEDIA CONTACT: Nina Pfister, MAG PR via email at [emailprotected] or by phone: 781-929-5620

SOURCE Agnostiq


Machine learning assisted real-time deformability cytometry of CD34+ cells allows to identify patients with myelodysplastic syndromes | Scientific…

This paper provides a proof of concept for using RT-DC to detect MDS. As RT-DC captures the morphology of cells, its information content is similar to that of morphology analyses of BM smears, which are currently the gold standard for MDS diagnosis. In addition, the mechanical readout of RT-DC is a promising feature, as earlier studies showed alterations of the actin cytoskeleton in association with MDS [8,9,10].

Current MDS diagnosis routines are under reconsideration due to reproducibility issues, high labor intensity, and the need for expert staff [2,4,19]. These issues could be addressed by combining imaging flow cytometry (IFC) for high-throughput acquisition with machine learning for automated data analysis [20,21]. In IFC, fluorescence images are captured, which allows labelling of different cell types and intracellular structures. However, it has already been shown that with deep learning, brightfield images are sufficient, for example, to predict lineages of differentiation or to distinguish cell types in blood [22,23]. Hence, the label-free approach of RT-DC could be advantageous, as the staining process can be omitted.

In the present work we employ RT-DC for the first time for the detection of MDS. From each captured cell, seven features were computed in real time and then used to train a random forest model, reaching an accuracy of 82.9% for the classification of healthy and MDS samples. As RT-DC performs image analysis in real time, the MDS classification result could be provided immediately during the measurement. Both the label-free aspect of RT-DC and the real-time analysis could shorten the time needed for diagnosis.

By employing a model interpretation technique, we found that the width of the cell-size distribution is one of the most important criteria used by the random forest classification model. While employing only this single feature for classification lowers the accuracy (78%), it may be more suitable for use in clinical practice. Interestingly, our finding is in accordance with the WHO guidelines, which suggest considering cell sizes during morphology evaluation. Our measurements consistently show that a subpopulation of cells in the size range 25 μm² ≤ A ≤ 45 μm² is underrepresented in MDS samples (see Fig. 1D and Supplementary Fig. S3). This effect could be explained by the reduced number of B lymphocyte precursor cells in MDS [24], which are CD34+ and could be present in the sample after CD34-based sorting [25]. Moreover, the histogram of cell sizes in Fig. 1D shows a narrow peak at 50 μm² for MDS, while the healthy counterpart presents a wider distribution. Hence, it is especially the width of the distribution that plays a role, rather than the mean or median, which are similar for both samples. However, since only 41 samples were used to train and validate the random forest model, extrapolation of this study to the highly heterogeneous MDS population is limited, as the model could be overfitted to this small dataset. Moreover, random forest models do not perform well in extrapolation tasks. Hence, a larger prospective clinical study is required to reach more decisive conclusions.
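The classification-plus-interpretation workflow described above can be sketched with scikit-learn. Everything in this snippet is illustrative rather than the study's actual pipeline: the synthetic data, the two summary features (median cell area and the width of the cell-size distribution), and all numeric values are assumptions.

```python
# Hypothetical sketch: per-sample summary features feed a random forest,
# whose built-in feature importances are then inspected.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_samples = 200  # the study itself had only 41 samples

# Simulate: healthy samples show a wider cell-size distribution than MDS,
# while the median cell area is similar for both groups.
width_healthy = rng.normal(20.0, 3.0, n_samples // 2)
width_mds = rng.normal(12.0, 3.0, n_samples // 2)
median_area = rng.normal(50.0, 5.0, n_samples)

X = np.column_stack([
    median_area,
    np.concatenate([width_healthy, width_mds]),
])
y = np.array([0] * (n_samples // 2) + [1] * (n_samples // 2))  # 0=healthy, 1=MDS

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
scores = cross_val_score(model, X, y, cv=5)
print("CV accuracy:", scores.mean().round(3))
print("importances (median area, distribution width):",
      model.feature_importances_.round(3))
```

On this synthetic data the distribution-width feature dominates the importance ranking, mirroring the paper's observation that the width, not the central tendency, carries the signal.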

Our work considered seven features obtained using RT-DC, which can be grouped into three categories: features describing cell size (A, Lx, Ly), mechanical properties (, D, I), and porosity (). However, updated versions of the RT-DC technology are capable of saving the brightfield image and computing transparency features in real time, which has been shown to allow discrimination between different blood cell types [26]. Moreover, images can be evaluated by a deep neural network, which makes it possible to exploit fine-grained details of the image for accurate classification [22,23]. Future research should incorporate these new modalities to improve label-free detection of MDS using RT-DC.

MDS is caused by the accumulation of genetic mutations, which can be identified by whole genome sequencing. While the cost of whole genome sequencing dropped from a hundred million dollars to a thousand over the last 20 years, currently only targeted sequencing plays a role in clinical practice [27]. Here, only selected genes that are frequently affected in MDS are checked, which is problematic due to the large genomic heterogeneity present in the various types of MDS [28,29]. Therefore, the standard diagnosis relies on an assessment of cell morphology as an indirect readout of genetic properties. Morphological alterations are accompanied by changes in the F-actin distribution and structural changes of the cytoskeleton [8,9,10]. RT-DC makes it possible to measure mechanical properties of cells that are determined by the cytoskeleton [5,30,31]. It has already been shown that diseases like malaria, leukemia, or spherocytosis lead to measurable differences in mechanical properties [26,32]. To link mechanical and genetic changes, we measured HSCs from MDS patients using RT-DC and performed molecular analysis in parallel. Figure 2B indicates that larger numbers of genetic mutations correspond to lower median deformation. Therefore, RT-DC could provide an additional indirect readout of acquired mutations with low cost per measurement, short measurement time, and real-time analysis results. However, despite the high correlation, we regard this finding as hypothesis-generating due to the small sample size (n = 10). Additionally, we could identify neither an association between mutation type and deformation, nor a significant mechanical difference between the low- and high-risk groups (data not shown); rather, biological features of the blast cells, such as the number of mutations, correlated with the mechanical properties. The importance of Dmedian resulting from the random forest model is low (see Fig. 1B), suggesting that Dmedian is similar for healthy and MDS samples. Hence, the approach of correlating Dmedian to infer the number of mutations is only valid for samples in which MDS has already been diagnosed.
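The correlation analysis behind Figure 2B (median deformation versus mutation count) can be sketched as follows. The per-patient values below are invented for illustration and do not come from the study; a rank correlation such as Spearman's is one reasonable choice for such a small, possibly nonlinear dataset.

```python
# Hypothetical sketch: rank correlation between number of acquired
# mutations and median deformation across patients (study had n = 10).
import numpy as np
from scipy.stats import spearmanr

n_mutations = np.array([0, 1, 1, 2, 3, 3, 4, 5, 6, 7])  # per patient (invented)
median_deformation = np.array([0.031, 0.030, 0.029, 0.027, 0.026,
                               0.025, 0.024, 0.022, 0.021, 0.020])  # invented

rho, p_value = spearmanr(n_mutations, median_deformation)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.4f}")
# With only 10 samples, such a result is hypothesis-generating, not confirmatory.
```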

HSCs make up only approximately 1% of the cells in the bone marrow [33,34]. To focus our study on this small subpopulation, we used MACS for CD34 enrichment of HSCs prior to the measurement. However, as the cells produced by mutated HSCs are presumably morphologically different from their healthy counterparts, a future endeavor should assess unsorted bone marrow with RT-DC using an approach similar to that shown in the present work. Moreover, the efficiency of CD34 isolation is low, which results in small total numbers of cells for the measurement. As a result, our measurements could not fully exploit the available throughput capacity of RT-DC. Samples shown in this manuscript were subjected to cryopreservation and thawing, which could potentially alter cell morphology and the MDS prediction outcome. A follow-up project should therefore ideally use fresh BM.

Taken together, our study shows that RT-DC has the potential to expand the status quo of MDS diagnostics. Both the morphological and the mechanical readout from RT-DC are promising parameters for the identification of MDS. Whether this method can complement standard diagnostic procedures in borderline cases or serve as a rapid, reliable test in initial diagnostics remains to be demonstrated in prospective clinical studies.


6 Ways Machine Learning Can Improve Customer Satisfaction – TechSpective

Machine learning algorithms are assisting companies in improving and advancing many different aspects of their internal and external technological tools. One of the most common areas where machine learning is particularly useful is in customer service. Here are six ways machine learning can improve customer satisfaction.

These algorithms can more easily coordinate and handle workloads related to tools as varied as neural network chips, keyword searching algorithms, and data analytics programs. In general, machine learning can be leveraged to improve the efficiency and speed of these tools and all types of other processes and programs. With these algorithms, you're more likely to be able to quickly figure out what customers need and want and then direct them where they want to go.

The ability of machine learning algorithms to learn as they work allows them to more accurately and effectively pinpoint the needs of each customer and provide the necessary personalization and customization options. The algorithm can learn about each customer using your website and utilize the information it gleans to ensure the customer gets the experience or service that will most effectively benefit him or her. To take it a step further, as the algorithm learns more about customers and how to do its job efficiently, it may be capable of determining specific products that the customer is most likely to be interested in buying or using. It can effectively match users with the right products for them and their needs.

Not only can these algorithms improve your customers' general experiences with your brand and with specific areas of your company, such as your website or your customer service department, but they can also help improve the customer experience in ways that are less immediately obvious. One such way is in improving your ability to identify fraud. These algorithms are capable of scanning and reviewing exponentially more transactions faster and more accurately than human beings can. Over time, your machine learning algorithm can learn to better identify signs of potential or definite fraud or identity theft.

One of the key uses of machine learning algorithms so far has been to perfect data collection, analytics and the generation of insights and predictions based on that data. You can use a machine learning algorithm to collect more cohesive data sets based on things your customers provide through data input, clicks or navigation around your website. The algorithm can then develop more complete insights and predictions based on that data in order to determine the best future marketing campaigns or customer service innovations and to identify valuable potential customers.

Another less obvious benefit of machine learning algorithms for your customers' experiences is the enablement of continuous improvement these algorithms can provide. You can use the data collected by your algorithms to determine where your customer service is lacking and what you need to invest or do in order to make improvements. Powerful machine learning algorithms may even be capable of making certain kinds of adjustments and improvements automatically as they learn more about your customer service practices and your customers' experiences with those practices and tools.

A powerful machine learning algorithm can even learn to understand a customer's intent when he or she interacts with your company on your website, via a phone call to your customer service department, or over your social media. It can do so by gathering past user data related to the customer or by gathering information about the customer's location that is relevant to his or her situation or to the product causing an issue. Energy companies, for example, commonly use this capability when customers experience power outages.

Machine learning can help you improve your customer satisfaction results in various ways. These algorithms can provide you with the means to gather more data, interpret that data better, provide more personalization and continuously improve your offerings.


Analyzing Twisted Graphene with Machine Learning and Raman Spectroscopy – AZoNano

In a recent study published in the journal ACS Applied Nano Materials, researchers developed a novel technique for automatically classifying the Raman spectra of twisted bilayer graphene (tBLG) into a variety of twist orientations using machine learning (ML).

Study: Machine Learning Determination of the Twist Angle of Bilayer Graphene by Raman Spectroscopy: Implications for van der Waals Heterostructures. Image Credit: ktsdesign/Shutterstock.com

With the recent surge in demand for tBLG, rapid, efficient, and non-destructive techniques for accurately determining the twist angle are needed.

Given the vast quantity of information on all aspects of the graphene recorded in its Raman spectrum, Raman spectroscopy might offer such a methodology. However, determining the angle can be difficult because the stacking sequence causes only small variations in the spectral data.

In recent decades, the discovery of highly conductive phases at tiny twist orientations has prompted interest in twisted bilayer graphene (tBLG) research. However, estimating the twist angle of tBLG layers is difficult.

The most precise angle measurements are obtained using high-resolution imaging methods, such as transmission electron microscopy (TEM) or scanning probe microscopy (SPM).

The disadvantage is that such observations take time and require either a free-standing material or one that is mounted on a current collector. Furthermore, they probe highly localized, sub-micron-sized locations, while the twist angle might vary over several micrometers.

As a result, these approaches are inappropriate for practical applications that need large-area categorization of unpredictable materials in a short time.

Raman spectroscopy is a non-invasive examination method that allows for a variety of substrates and environments to be used for measurements, as well as the ability to analyze relatively vast regions in a small amount of time.

It has also been extensively employed in graphene characterization, offering a wealth of data about the material's properties, purity, and electrical configuration.

In the case of tBLG, the twist orientation may also be determined using spectroscopic methods, which can provide sub-degree accuracy for specific angle regions.

In particular, calculating the twist angle necessitates an evaluation of numerous Raman spectra components at the same time. However, the growing complexity of the spectrum might make this process much more challenging.

This intricacy is most noticeable at low twist orientations, where the tBLG undergoes a structural reconstruction.

Although the Raman spectrum includes data on the angular position of tBLG, the differences between various angles can be very minute, including small alterations in the locations, widths, and intensity ratios of the various peaks.

These distinctions are sometimes undetectable at first sight and may be easily ignored, necessitating a thorough examination of the spectrum.

Machine learning (ML) is a set of approaches that use mathematics either to classify new data based on what a model has learned from labeled examples (supervised ML) or to recognize patterns in uncategorized data (unsupervised ML).

ML-based approaches are being developed and implemented in many areas of 2D material research and processing. ML has lately been shown to be useful in identifying the twist angle from parts of simulated Raman spectra. ML has also been utilized to detect certain twist angles of BLG created from stacks of synthetic single-layer graphene.

To estimate the stacking order of tBLG from its Raman spectra, the researchers created a simple, rapid, and computationally light ML-based analytical technique in this study. The approach entails gathering enough data from tBLG Raman spectra to build an ML model capable of inferring the twist angle within prescribed ranges.
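A workflow of this kind can be sketched generically: extract a few per-spectrum features (peak position, width, intensity ratio) and train a classifier to assign each spectrum to a twist-angle range. The feature set, the angle bins, the random forest model, and the synthetic spectra below are all assumptions for illustration; the paper's actual model and features may differ.

```python
# Illustrative sketch: classify Raman spectra into twist-angle bins
# from simple hand-crafted features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

def synthetic_features(angle_bin, n):
    """Fake per-spectrum features whose statistics shift with twist angle."""
    peak_pos = rng.normal(2700 + 5 * angle_bin, 2.0, n)   # 2D-peak position, cm^-1
    peak_width = rng.normal(30 + 3 * angle_bin, 2.0, n)   # peak FWHM, cm^-1
    ratio = rng.normal(1.0 + 0.4 * angle_bin, 0.15, n)    # intensity ratio
    return np.column_stack([peak_pos, peak_width, ratio])

bins = [0, 1, 2]  # e.g. low / intermediate / high twist-angle ranges
X = np.vstack([synthetic_features(b, 300) for b in bins])
y = np.repeat(bins, 300)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te).round(3))
```

Because prediction only requires evaluating a small tree ensemble on a handful of features per spectrum, classifying an entire Raman map in seconds on a desktop machine is plausible with this kind of model.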

Compared to manual twist-angle labeling, the suggested approach achieves a precision of over 99 percent. Furthermore, the approach is computationally light, delivering predictions for whole Raman maps comprising dozens of wavelengths in a matter of seconds, even on a basic desktop machine.

Finally, the suggested method's versatility allows it to be expanded to measure the amount of strain and loading in graphene and the adjustment factor of other thin films and heterostacks.

The suggested approach might also be used to investigate the twist orientations of various vdW nanostructured materials, making it a valuable and straightforward analytical tool with real-world applications for the current level of understanding of twistronics.


Solís-Fernández, P. et al. (2022). Machine Learning Determination of the Twist Angle of Bilayer Graphene by Raman Spectroscopy: Implications for van der Waals Heterostructures. ACS Applied Nano Materials. Available at: https://pubs.acs.org/doi/10.1021/acsanm.1c03928



Cloudian Partners with WEKA to Deliver High-Performance, Exabyte-Scalable Storage for AI, Machine Learning and Other Advanced Analytics -…

SAN MATEO, Calif., Jan. 20, 2022 (GLOBE NEWSWIRE) -- Cloudian today announced the integration of its HyperStore object storage with the WEKA Data Platform for AI, providing high-performance, exabyte-scalable private cloud storage for processing iterative analytical workloads. The combined solution unifies and simplifies the data pipeline for performance-intensive workloads and accelerated DataOps, all easily managed under a single namespace. In addition, the new solution reduces the storage TCO associated with data analytics by a third, compared to traditional storage systems.

Advanced Analytics Workloads Create Data Storage Challenges
Organizations are consuming and creating more data than ever before, and many are applying AI, machine learning (ML) and other advanced analytics to these large data sets to make better decisions in real time and unlock new revenue streams. These analytics workloads create and use massive data sets that pose significant storage challenges, most importantly the ability to manage the data growth and enable users to extract timely insights from that data. Traditional storage systems simply can't handle the processing needs or the scalability required for iterative analytics workloads and introduce bottlenecks to productivity and data-driven decision making.

Cloudian-WEKA Next-Generation Storage Platform
Together, Cloudian and WEKA enable organizations to overcome the challenges of accelerating and scaling their data pipelines while lowering data analytics storage costs. WEKA's data platform, built on WekaFS, addresses the storage challenges posed by today's enterprise AI workloads and other high-performance applications running on-premises, in the cloud or bursting between platforms. The joint solution offers the simplicity of NAS, the performance of SAN or DAS and the scale of object storage, while accelerating every stage of the data pipeline, from data ingestion to cleansing to modeled results.

Integrated through WEKA's tiering function, Cloudian's enterprise-grade, software-defined object storage provides the following key benefits:

"As organizations increasingly employ AI, ML and other advanced analytics to extract greater value from their data, they need a modern storage platform that enables fast, easy data processing and management," said Jonathan Martin, president, WEKA. "The combination of the WEKA Data Platform and Cloudian object storage provides an ideal solution that can seamlessly and cost-effectively scale to meet growing demands."

"When it comes to supporting advanced analytics applications, users shouldn't have to make tradeoffs between storage performance and capacity," said Jon Toor, chief marketing officer, Cloudian. "By eliminating any need to compromise, the integration of our HyperStore software with the WEKA Data Platform gives customers a storage foundation that enables them to fully leverage these applications so they can gain new insights from their data and drive greater business and operational success."

The new solution is available today. For more information, visit cloudian.com/weka/.

About Cloudian
Cloudian is the most widely deployed independent provider of object storage. With a native S3 API, it brings the scalability and flexibility of public cloud storage into the data center while providing ransomware protection and reducing TCO by up to 70% compared to traditional SAN/NAS and public cloud. The geo-distributed architecture enables users to manage and protect object and file data across sites, on-premises and in the cloud, from a single platform. Available as software or appliances, Cloudian supports conventional and containerized applications. More at cloudian.com.

U.S. Media Contact
Jordan Tewell, 10Fold Communications
cloudian@10fold.com
+1 415-666-6066

EMEA Media Contact
Jacob Greenwood, Red Lorry Yellow Lorry
cloudian@rlyl.com
+44 (0) 20 7403 8878


Klika Tech Joins tinyML Foundation to Accelerate Development of Machine Learning at the Edge – PR Web

To learn more about tinyML, reserve a seat for a Klika Tech LinkedIn Live event on January 27 at 1:00 pm EST. See https://bit.ly/klikatech_tinyml for details

MIAMI (PRWEB) January 19, 2022

Klika Tech, an award-winning IoT and Cloud-native product and solutions development company, has joined the tinyML Foundation as a Strategic Partner to provide technical, cross-industry expertise as the organization advances Machine Learning for on-device data analytics and decision making at the edge.

The tinyML Foundation and its over 60 sponsor companies are the primary force behind the rapidly expanding field of ML technologies and applications that help businesses shift data analytics from the cloud to the edge using hardware, software, and ML algorithms to perform decision making. Klika Tech joins the organization as a Platinum Sponsor as demand emerges for dynamic tinyML use cases focused on memory-constrained devices for equipment-level decision making. Among the primary drivers for deploying tinyML are enabling businesses to overcome decision latency, strengthen data privacy, and reduce data transfer costs at the edge.

"We are excited about Klika Tech, with its multidisciplinary experience in solving real business problems with integrations of hardware, software, and custom ML models, joining us as a Strategic Partner. Small machine learning models and hardware-software co-design and integration are at the heart of tinyML. We are looking forward to collaborating with the Klika Tech team on accelerating tinyML ecosystem growth and the ongoing popularization of tinyML," said Evgeni Gousev, chair of the tinyML Foundation Board of Directors.

The tinyML Foundation, since its inaugural tinyML Summit in March 2019, has rapidly grown as a nonprofit organization dedicated to accelerating the growth of tinyML technologies and innovations. The organization now counts 7,000+ tinyML-oriented professionals worldwide and a roster of 25 Strategic Partners.

"We are only at the beginning of realizing the full potential of tinyML and the full business value of facilitating ML inference close to where data originates," said Klika Tech Co-CEO and President Gennadiy M Borisov. "We are excited to join the tinyML Foundation at the forefront of the AI/ML revolution, where tinyML can empower resource-constrained devices to be better, smarter, and more responsive elements of operations."

To learn more about the emerging impact of tinyML for business, join Klika Tech on January 27th at 1:00 pm EST for a LinkedIn Live discussion on the shift of compute-intensive ML functions out of the cloud as tinyML pulls them closer to where data originates. Reserve a seat today at: https://bit.ly/klikatech_tinyml.

ABOUT KLIKA TECH
Klika Tech is a global IoT and Cloud-native product and solutions development company headquartered in the U.S. with operations and offices across North America, Europe and Asia. Founded in 2013, the company co-creates end-to-end hardware, embedded, and cloud solutions for a range of industries including smart home / building / city platforms, connected healthcare, smart retail, connected agriculture, asset tracking and logistics, automotive / smart mobility, and edge to cloud integrations. Klika Tech is an Advanced Consulting and IoT Competency Partner in the Amazon Web Services (AWS) Partner Network (APN) with AWS Service Delivery designations for AWS IoT Core Services, AWS IoT Greengrass, Amazon API Gateway, AWS CloudFormation, and AWS Lambda. For more information visit http://www.klika-tech.com

ABOUT tinyML FOUNDATION
Headquartered in Silicon Valley (Los Altos, California), the tinyML Foundation (http://www.tinyml.org) is a non-profit organization whose mission is to accelerate the growth of a prosperous and integrated global community of hardware, software, and machine learning engineers, data scientists, systems engineers, designers, and product and business people developing leading-edge, energy-efficient machine learning computing. The goal is to connect technologies and innovations in the domain of machine intelligence to enormous product and business opportunities and to drive value creation across the whole ecosystem.



From Coffee Cart to Educational Computing Platform – UC San Diego Health

In classic UC San Diego fashion, an overheard conversation at a campus coffee cart has turned into an interdisciplinary project that's making computing-intensive coursework more exciting while saving well over $1 million so far. The effort gives UC San Diego graduate and undergraduate students, and their professors, better hardware and software ecosystems for exploring real-world, data-intensive and computing-intensive projects and problems in their courses.

Larry Smarr, Distinguished Professor Emeritus, Department of Computer Science and Engineering at the UC San Diego Jacobs School of Engineering.

It all started while UC San Diego computer science and engineering professor Larry Smarr was waiting for coffee in the "Bear" courtyard at the Jacobs School of Engineering a little more than three years ago. While standing in line, Smarr overheard a student say, "I can't get a job interview if I haven't run TensorFlow on a GPU on a real problem."

While this one student's conundrum may sound extremely technical and highly specific, Smarr heard a general need; and he saw an opportunity. In particular, Smarr realized that innovations coming out of a U.S. National Science Foundation (NSF) funded research project he leads, the Pacific Research Platform (PRP), could be leveraged to create better computing infrastructure for university courses that rely heavily on machine learning, data visualizations, and other topics that require significant computer resources. This infrastructure would make it easier for professors to offer courses that challenge students to solve real-world data- and computation-intensive problems, including things like what he heard at the coffee cart: running TensorFlow on a GPU on a real problem.

Fast forward to 2022, and Smarr's spark of an idea has grown into a cross-campus collaboration called the UC San Diego Data Science/Machine Learning Platform or the UC San Diego JupyterHub. Through this platform, the inexpensive, high-performance computational building blocks combining hardware and software that Smarr and his PRP collaborators designed for use in computation-intensive research across the country are now also the backbone of dynamic computing ecosystems for UC San Diego students and professors who use machine learning, data visualization, and other computing- and data-intensive tools in their courses. The platform has been widely used in every division on campus, including with courses taught in biological sciences, cognitive science, computer science, data science, engineering, health sciences, marine sciences, medicine, music, physical sciences, public health and more.

Xuan Zhang (UC San Diego Chemistry PhD, '21) is one of the tens of thousands of UC San Diego students and young researchers who has used the UC San Diego Data Science/Machine Learning Platform extensively in courses.

It's a unique, collaborative project that leverages federally funded computing research innovations for classroom use. To make the jump from research to classroom applications, a creative and hardworking interdisciplinary team at UC San Diego came together. UC San Diego's IT Services / Academic Technology Services stepped up in a big way. Senior architect Adam Tilghman and chief programmer David Andersen led the implementation effort, with leadership and funding support from UC San Diego CIO Vince Kellen and Academic Technology Senior Director Valerie Polichar. The project has already helped the campus avoid well over $1 million in cloud-computing spending, according to Kellen.

At the same time, the project gives the UC San Diego community tools to encourage the back-and-forth flow of students and ideas between classroom projects and follow-on research projects.

"Our students are getting access to the same level of computing capacity that normally only a researcher using an advanced system like a supercomputer would get. The students are exploring much more complex data problems because they can," said Smarr, who was also the founding director of the California Institute for Telecommunications and Information Technology (Calit2), a UC San Diego / UC Irvine partnership. Calit2 is now expanding to also include UC Riverside.

One of the many professors from all across campus using the UC San Diego Data Science / Machine Learning Platform for courses is Melissa Gymrek, who is a professor in both the Department of Computer Science and Engineering and the Department of Medicine's Division of Genetics.

Her students write and run code in a software environment called Jupyter Notebooks that runs on the UC San Diego platform. "They can write code in the notebook and press execute and see the results. They can build figures to visualize data. We focus a lot more now on data visualizations," said Gymrek.

UC San Diego ITS senior architect Adam Tilghman poses with some of the innovative computing hardware that has opened the door to more data-intensive and computing-intensive coursework for UC San Diego students.

Through the data- and visualization-intensive coursework in CSE 284, Zhang realized that the higher-order genetic structures at the center of her chemistry Ph.D. dissertation, R-loops, could be regulated by the short tandem repeats (STRs) that are at the center of much of the research in Gymrek's lab. Without the computing infrastructure for real-world coursework problems, Zhang believes she would not have made the research connection.

After taking Gymrek's course, Zhang also realized that she could apply to obtain her own independent research profile on the UC San Diego Data Science / Machine Learning Platform in order to retain access to all her coursework and to keep building on it. (When Jupyter Notebooks are hosted on the commercial cloud, students generally lose access to their data-intensive coursework when the class ends, unless they download the data themselves.)

"I thought it was just for the course, but then I realized that Jupyter Notebooks are available for research, without losing access, through the UC San Diego JupyterHub," said Zhang.

This educational infrastructure has added benefits for professors as well.

"With these Jupyter Notebooks, you can automatically embed the grading system. It saves a lot of work," said Gymrek. You can designate how many points a student gets if they get the code right, she explained. Before using this system, students sent PDFs of their problem sets which made grading more time intensive. "It was hard to go past a dozen students. Now, you can scale," said Gymrek. In fact, she has been able to expand access to her personal genomics graduate class to more than 50 students, up from a dozen before she had access to these new tools.
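The embedded-grading idea Gymrek describes can be sketched with plain Python. This is a hedged illustration, not the actual campus grading system (which builds on notebook tooling): point values are attached to hidden test cases that run against a student's function.

```python
# Minimal autograding sketch (illustrative; not the real campus grading system):
# each hidden test case that passes earns a fixed number of points.
def grade(student_fn, test_cases, points_per_case=2):
    """Run hidden test cases against a student's function and total the points."""
    earned = 0
    for args, expected in test_cases:
        try:
            if student_fn(*args) == expected:
                earned += points_per_case
        except Exception:
            pass  # a crashing answer simply earns no points for that case
    return earned

# Example: grading a simple "mean of a list" exercise from a notebook cell.
def student_mean(values):
    return sum(values) / len(values)

cases = [(([1, 2, 3],), 2.0), (([10.0],), 10.0), (([2, 4],), 3.0)]
print(grade(student_mean, cases))  # 6: all three cases pass at 2 points each
```

Because the checks run automatically against every submission, the marginal cost of grading one more student is near zero, which is what lets a class scale from a dozen students to fifty or more.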

Direct uploading of assignments and grades to the campus learning management system, Canvas, is also now available.

"The platform is truly transforming education. Unlike many learning technology innovations, classes in every division at UC San Diego have used the Data Science/Machine Learning Platform. Many thousands of students use it every year. It's innovation with real impact, preparing our students in many, sometimes unexpected, fields to be leaders and innovators when they graduate," said Polichar.

"If you build your distributed supercomputer, like the PRP, on commodity hardware then you can ride Moore's Law," explained Smarr.

Following this commodity hardware strategy, Smarr and his PRP collaborators developed hardware designs whose performance goes up while prices go down over time. The computational building blocks developed by the PRP, which were repurposed by UC San Diego's ITS, are rack-mounted PCs containing multi-core CPUs and eight Graphics Processing Units (GPUs), optimized for data-intensive projects, including accelerating machine learning on the GPUs. These PCs run a wide range of leading-edge software to help students program the system, record their results in Jupyter Notebooks, and execute a variety of data analytic and machine learning algorithms on their problems.

Building on this commodity hardware approach to high-performance computing has allowed UC San Diego to build a dynamic and innovative "on premises" ecosystem for data- and computing-intensive coursework, rather than relying solely on commercial cloud computing services.

The computing platform is made of rack-mounted PCs optimized for data-intensive projects, including accelerating machine learning on Graphics Processing Units (GPUs). The system now contains 126 GPUs, whose usage by students typically ramps up to 100% as each academic quarter progresses.

"The commercial cloud doesn't provide an ecosystem that gives students the same platform from course to course, or the same platform they have in their courses as they have in their research," said Tilghman. "This is especially true in the graduate area where students are starting work in a course context and then they continue that work in their research. It's that continuity, even starting as a lower division undergraduate, all the way up. I think that's one of the innovative advantages that we give at UC San Diego."

UC San Diego professors and students interested in learning more about the Data Science / Machine Learning Platform can find additional details and contact information on its website.

"I've been at this for 50 years," said Smarr. "I don't know of many examples where I've seen such a close linking of research and education all the way around, in a circle."

This alignment of research and education feeds into UC San Diego's culture of innovation and relevance.

"It's essential for the nation that students all across campus learn and work on computing infrastructure that is relevant for their future, whether it's in industry, academia or the public sector," said Albert P. Pisano, dean of the UC San Diego Jacobs School of Engineering. "These information technology ecosystems being created and deployed on campus are critical for empowering our students to leverage innovations to serve society."

To view a video providing an overview of the Pacific Research Platform (PRP) and a sampling of research projects the platform has enabled, visit the Pacific Research Platform website.

Larry Smarr serves as Principal Investigator on the PRP and allied grants (NSF Awards OAC-1541349, OAC-1826967, CNS-1730158, CNS-2100237) which are administered through the Qualcomm Institute, which is the UC San Diego Division of Calit2.

Read more from the original source:
From Coffee Cart to Educational Computing Platform - UC San Diego Health

Read More..

Five Machine Learning Applications in Healthcare – CIO Applications

Being healthy and capable of performing basic tasks is one of the top priorities for people worldwide. When the health of a loved one is in jeopardy, humans tend to go far beyond their limits.

Fremont, CA: In today's age of abundant information, machine learning can undoubtedly make a mark in the effort to improve human health, as big data in healthcare strives to improve current healthcare systems. Here are five exciting machine learning applications in healthcare.

Imaging and Diagnosis

Machine learning is based on algorithms, and healthcare professionals are looking forward to using it in their field by actively developing algorithms and feeding machines the information needed to assist them in imaging and analyzing human bodies for abnormalities.
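As a toy illustration of the idea (synthetic numbers and hypothetical feature names, nothing clinical), many imaging classifiers boil down to comparing features extracted from a new scan against labeled examples:

```python
# Illustrative sketch only (synthetic data, hypothetical features): classify a
# new scan's feature vector by its nearest labeled example. Real diagnostic
# systems use deep networks trained on actual images, not hand-picked features.
import math

def euclidean(a, b):
    """Distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def nearest_neighbor_label(sample, labeled_examples):
    """Return the label of the closest labeled feature vector."""
    return min(labeled_examples, key=lambda ex: euclidean(sample, ex[0]))[1]

# Hypothetical 2-D feature vectors (e.g. lesion size, texture score).
examples = [((1.0, 0.2), "normal"), ((0.9, 0.3), "normal"),
            ((3.1, 2.0), "abnormal"), ((2.8, 2.4), "abnormal")]
print(nearest_neighbor_label((3.0, 2.1), examples))  # abnormal
```

The point of the sketch is only the workflow: extract measurable features, compare against labeled cases, flag the likely abnormality for a clinician to review.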

Data Gathering and Follow-Up

Patients prefer personalized care throughout their healthcare journey. Because big data has many applications and collects information from various sources, leveraging it can help doctors provide better patient services. Doctors can personalize treatment options once ML systems have enough data about a patient.

Radiology and Radiotherapy are Third-level Specialties

Machine learning has previously demonstrated its worth in detecting cancer, and it is one of the most viable options for leading healthcare pioneers to identify abnormalities. With such results, ML is proving to be a viable option for radiology and radiotherapy. Doctors can use this technology to explore how a patient might respond to a specific radiation treatment.

Drug Discovery and Experiments

Scientists strive to discover new ways to cure deadly diseases. With rigorous efforts to improve healthcare, they look for different drugs that can act as advanced medicines and conduct experiments on how these medications can help. Machine learning algorithms benefit scientists because they help predict drug performance and behavior in test subjects. The behavioral details observed in test subjects given a candidate drug and a placebo can be recorded, and machine learning algorithms can be used to determine how those medications will perform in humans.

Surgical Procedures

Machines can assist doctors in the operating room in the form of surgical robots. These surgical robots are incredibly beneficial because they provide high-definition imagery and the ability to reach areas that are difficult for a surgeon to access. In addition, machine learning has numerous other applications in various fields that aim to improve people's lives.

Excerpt from:
Five Machine Learning Applications in Healthcare - CIO Applications

Read More..

Global Machine Learning in Automobile Market Size 2021-2029 Trend and Opportunities Discovery Sports Media – Discovery Sports Media

The report provides a comprehensive landscape analysis of the global Machine Learning in Automobile market. Market estimates are the result of in-depth secondary research, primary interviews and reviews by internal experts. In addition, these market valuations have been reviewed by examining the impact of various social, political and economic factors, as well as current market dynamics affecting the growth of the global market.

The Machine Learning in Automobile Market report highlights product development and growth strategies such as mergers and acquisitions adopted by market players, along with SWOT and PEST analyses of the market, highlighting the strengths, weaknesses, opportunities and threats of key companies. In addition, the report provides two market forecasts, one from the manufacturer's perspective and the other from the consumer's perspective.

Get a Sample Copy of this report at https://www.qyreports.com/request-sample/?report-id=323068

Machine Learning in Automobile Market Leading Companies:

Allerin

Intellias Ltd

NVIDIA Corporation

Xevo

Kopernikus Automotive

Blippar

Alphabet Inc

Intel

IBM

Microsoft

Along with an overview of the global Machine Learning in Automobile market, the report covers market dynamics, including a Porter's five forces analysis. The five forces are the bargaining power of buyers, the bargaining power of suppliers, the threat of new entrants, the threat of substitutes, and the degree of competition in the market. In addition, the report explains the various actors in the market ecosystem, such as system integrators, resellers, and end users.

The study discusses sustainable market position, segmented by breadth of application, geographic area, product form, and competitive hierarchy. It explains how COVID-19 will impact revenue share, revenue volume, and projected growth rates for each category. The Machine Learning in Automobile market research provides industry analysis based on a detailed assessment of market dynamics and leading vendors. Internal research offers more accurate data and reduces margins of error based on the information received.

Ask for a discount on this premium report https://www.qyreports.com/ask-for-discount/?report-id=323068

Type Analysis of the Machine Learning in Automobile Market:

Supervised Learning

Unsupervised Learning

Semi-Supervised Learning

Reinforcement Learning

Application Analysis of the Machine Learning in Automobile Market:

AI Cloud Services

Automotive Insurance

Car Manufacturing

Driver Monitoring

Market Players:

The report covers significant developments in the Machine Learning in Automobile market. Inorganic growth strategies seen in the market have included acquisitions, partnerships and collaborations. These events have paved the way for the expansion of the market participants business and customer base.

As a result, market players in the Machine Learning in Automobile market are anticipated to have lucrative growth opportunities in the future with rising demand in the market.

Inquiry before buying this premium Report https://www.qyreports.com/enquiry-before-buying/?report-id=323068

Key Highlights of the Report:

Statistically validated analysis of historical, current and forecasted Machine Learning in Automobile market trends, with reliable market size data in terms of value and volume where applicable

Direct and indirect factors affecting the industry, as well as predictable rationales that are expected to affect the industry in the future

Historical and current demand (consumption) and supply (production) scenarios, including predictive analysis of supply and demand scenarios

Machine Learning in Automobile market Detailed List of Key Buyers and End-Users (Consumers) analyzed as per Regions and Applications

About Us

Market Research Inc is farsighted in its view and covers massive ground in global research. Local or global, we keep a close check on both markets. Trends and concurrent assessments sometimes overlap and influence each other. When we say market intelligence, we mean a deep and well-informed insight into your products, market, marketing, competitors, and customers. Market research companies are leading the way in nurturing global thought leadership. We help your product/service become the best it can be with our informed approach.

Contact Us

Market Research Inc

Author: Kevin

US Address: 51 Yerba Buena Lane, Ground Suite,

Inner Sunset San Francisco, CA 94103, USA

Call Us: +1 (628) 225-1818

Write Us: sales@marketresearchinc.com

https://www.marketresearchinc.com

Read more:
Global Machine Learning in Automobile Market Size 2021-2029 Trend and Opportunities Discovery Sports Media - Discovery Sports Media

Read More..

Centuries-old ‘impossible’ math problem cracked using the strange physics of Schrödinger’s cat – Livescience.com

A math problem developed 243 years ago can be solved only by using quantum entanglement, new research finds.

The mathematics problem is a bit like Sudoku on steroids. It's called Euler's officer problem, after Leonhard Euler, the mathematician who first proposed it in 1779. Here's the puzzle: You're commanding an army with six regiments. Each regiment contains six different officers of six different ranks. Can you arrange them in a 6-by-6 square without repeating a rank or regiment in any given row or column?

Euler couldn't find such an arrangement, and later computations proved that there was no solution. In fact, a paper published in 1960 in the Canadian Journal of Mathematics used the newfound power of computers to show that 6 is the only number greater than 2 for which no such arrangement exists.
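In modern terms, the puzzle asks for an order-6 "Graeco-Latin square": a grid of (rank, regiment) pairs in which each rank and each regiment appears exactly once per row and column, and no pair repeats. The sketch below (hypothetical helper names) shows the standard modular construction, which succeeds for odd orders like 5; the same construction visibly fails at 6, although the exhaustive checks mentioned above are what actually prove no order-6 solution exists.

```python
# Sketch: build a candidate square with ranks r = i + j (mod n) and
# regiments g = i + 2j (mod n), then check the Graeco-Latin conditions.
def graeco_latin(n):
    return [[((i + j) % n, (i + 2 * j) % n) for j in range(n)] for i in range(n)]

def is_valid(square):
    """True if every row/column has all n ranks and regiments, with no pair repeated."""
    n = len(square)
    cols = [[square[i][j] for i in range(n)] for j in range(n)]
    lines_ok = all(len({c[0] for c in line}) == n and len({c[1] for c in line}) == n
                   for line in list(square) + cols)
    pairs_ok = len({cell for row in square for cell in row}) == n * n
    return lines_ok and pairs_ok

print(is_valid(graeco_latin(5)))  # True: 5 regiments of 5 officers work fine
print(is_valid(graeco_latin(6)))  # False: this simple construction breaks at 6
```

The construction works for odd n because both 1 and 2 are invertible modulo n; at n = 6, the regiment pattern 2j mod 6 repeats within a row, which is a small taste of why 6 is so stubborn.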

Now, though, researchers have found a new solution to Euler's problem. As Quanta Magazine's Daniel Garisto reported, a new study posted to the preprint database arXiv finds that you can arrange six regiments of six officers of six different ranks in a grid without repeating any rank or regiment more than once in any row or column if the officers are in a state of quantum entanglement.

The paper, which has been submitted for peer review at the journal Physical Review Letters, takes advantage of the fact that quantum objects can be in a superposition of multiple possible states until they are measured. (Quantum superposition was famously illustrated by the Schrödinger's cat thought experiment, in which a cat is trapped in a box with a radioactive poison; the cat is both dead and alive until you open the box.)

In Euler's classic problem, each officer has a static regiment and rank. They might be a first lieutenant in the Red Regiment, for example, or a captain in the Blue Regiment. (Colors are sometimes used in visualizing the grids, to make it easier to identify the regiments.)

But a quantum officer might occupy more than one regiment or rank at once. A single officer could be either a Red Regiment first lieutenant or a Blue Regiment captain; a Green Regiment major or Purple Regiment colonel. (Or, theoretically, any other combination.)

The key to solving Euler's problem with this identity switcheroo is that the officers on the grid can be in a state of quantum entanglement. In entanglement, the state of one object informs the state of another. If Officer No. 1 is, in fact, a Red Regiment first lieutenant, Officer No. 2 must be a major in the Green Regiment, and vice versa.

Using brute-force computer power, the authors of the new paper, led by Adam Burchardt, a postdoctoral researcher at Jagiellonian University in Poland, proved that filling the grid with quantum officers made the solution possible. Surprisingly, the entanglement has its own pattern, study co-author Suhail Rather, a physicist at the Indian Institute of Technology Madras, told Quanta Magazine. Officers are only entangled with officers of ranks one step below or above them, while regiments are also only entangled with adjacent regiments.

The results could have real impacts on quantum data storage, according to Quanta Magazine. Entangled states can be used in quantum computing to ensure that data is safe even in the case of an error, a process called quantum error correction. By entangling 36 quantum officers in a state of interdependent relationships, the researchers found what is called an absolutely maximally entangled state. Such states can be important for resilient data storage in quantum computing.
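The protective role of redundancy has a simple classical analogue (a hedged illustration only; real quantum error correction operates on entangled qubits, not copied bits, since quantum states cannot be cloned): a three-fold repetition code lets a stored bit survive a single flip.

```python
# Classical repetition-code sketch (illustrative analogy, not the quantum scheme):
# store a bit three times, then recover it by majority vote after one error.
def encode(bit):
    return [bit, bit, bit]

def corrupt(codeword, index):
    flipped = list(codeword)
    flipped[index] ^= 1  # flip one stored bit, simulating an error
    return flipped

def decode(codeword):
    return 1 if sum(codeword) >= 2 else 0  # majority vote recovers the bit

print(decode(corrupt(encode(1), 0)))  # 1: the original bit survives the error
```

Quantum error correction achieves an analogous resilience by spreading one logical qubit's information across many entangled physical qubits, which is why highly entangled states like the one found here are of practical interest.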

You can read all about the impossible problem's solution in Quanta Magazine.

Originally published on Live Science.

Go here to read the rest:

Centuries-old 'impossible' math problem cracked using the strange physics of Schrödinger's cat - Livescience.com

Read More..