Category Archives: Machine Learning

Making an Impact: IoT and Machine Learning in Business – Finextra

Two is better than one, isn't it? This is undoubtedly true in the case of IoT and machine learning. These two popular, trending technologies offer companies a solid engine for growth when implemented together correctly. When combined, they help you unlock the true power of data and boost business efficiency, sales, and customer relationships.

Therefore, IoT and machine learning are being incorporated into business on a wide scale. We are going to discuss some of the popular areas where these technologies are used, but before that, let's look at some statistics around them.

Statistics Showing the Trend of IoT and ML

According to IoT Analytics, the world will have 14.4 billion IoT-connected devices by the end of 2022, 10% more than the previous year.

By 2025, this number will reach approximately 27 billion, clearly indicating that businesses are adopting IoT quickly. The machine learning market, meanwhile, is expected to cross the $200 billion mark by 2025. These figures are enough to say with confidence that the markets for IoT and machine learning are not going to slow down anytime soon, but rather will keep growing over time.

Now, a question pops up: what are the benefits of using IoT and machine learning in business? First things first, knowing how they work together will help you understand the true value they add to your business.

How Do IoT and Machine Learning Work Together?

As the name suggests, the Internet of Things is a network of sensor-equipped devices connected through the internet. This connection gives them the ability to communicate with any other device on the network.

What happens after these devices collect and share data? How do you put that data to use? Machine learning is the answer. A subset of AI, machine learning is the process of using data to develop mathematical models or algorithms that train a computer with little human interference.

With that training, the system can anticipate the most likely outcome based on the data. The prediction can be right or wrong, and depending on the result, the algorithm updates itself to deliver a better prediction next time.
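As a rough illustration of that predict-and-update loop, the sketch below trains a simple online model on a simulated stream of sensor readings, predicting each new point and then correcting itself from the observed outcome. The data and model are hypothetical stand-ins, not any specific vendor's algorithm.

```python
# Minimal sketch of the predict-then-update loop described above,
# using scikit-learn's incremental SGDClassifier on toy sensor data.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
model = SGDClassifier()

# Simulated IoT readings: two sensor features, binary outcome.
X_stream = rng.normal(size=(100, 2))
y_stream = (X_stream[:, 0] + X_stream[:, 1] > 0).astype(int)

# Seed the model, then predict each new reading and learn from the result.
model.partial_fit(X_stream[:10], y_stream[:10], classes=[0, 1])
for x, y in zip(X_stream[10:], y_stream[10:]):
    prediction = model.predict(x.reshape(1, -1))   # anticipate the outcome
    model.partial_fit(x.reshape(1, -1), [y])       # right or wrong, update
```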

Thus, the two technologies complement each other, giving businesses a competitive advantage through data accumulation and analysis so that they can decide what's best for their growth. This is true for every sector, be it healthcare, finance, automotive, agriculture, or manufacturing.

But there's more than one reason to use IoT and machine learning in business processes. Let's take a closer look at their role in different businesses and the advantages they offer.

Benefits of IoT and Machine Learning for Businesses

It Automates Business Processes

For any organization, whether small or large, there is a certain set of business processes, and each one should be efficient for the organization to achieve its goals. However, monotonous tasks like scheduling emails or record keeping can cause unnecessary delays and hamper overall productivity.

Machine learning and IoT can automate those boring, repetitive tasks to streamline business processes. Beyond that, they reduce the chances of human error and inefficiency, and improve lead follow-up and the scheduling of marketing campaigns, events, and more.

Adds an Extra Layer of Security

No place is protected from accidents, fraud, and cyberattacks. They are common across industries and, if not addressed immediately, can cause major losses to a business, its employees, and its customers.

But it is hard to keep an eye on every single area or device. Using IoT and machine learning in business not only helps monitor each aspect to identify loopholes and threats but also lets you take the necessary preventive measures beforehand.

Helps Identify the Most Productive Resources

Whether your business's resources are financial, human, physical, or technological, it is essential to single out the most productive ones and eliminate those that are rarely used. IoT and machine learning can assist you in analyzing this and prevent unnecessary expenses on unused and non-productive resources. They can also suggest where your company should deploy those resources.

Helps You Understand Your Customers

Customers are an important asset of any company, so keeping them satisfied is essential to success and revenue growth. Machine learning and IoT can help companies deliver what their customers want without guesswork: they can learn how customers interact with the brand and what they like and dislike the most.

With all these valuable insights in hand, you can create the products and services customers expect most, or analyze which ones are doing well in the market. This way, brands benefit twice over: they deliver a better customer experience and increase revenue by bringing the right products to the audience. For e-commerce platforms, machine learning and IoT are the go-to technologies for achieving this.

Use Cases of IoT and Machine Learning in Various Businesses

Retail Industry: Supply Chain Management

The supply chain industry is data-reliant, which means wrong or incomplete data can cause several problems in the process: cost inefficiency, technical downtime, difficulty determining prices and transportation costs, inventory theft and loss, and so on.

Implementing IoT sensors on the devices involved to extract vital data, and then feeding that data to machine learning models, can help in the following ways (a brief sketch of such a pipeline follows the list):

- Improve the quality of products
- Reduce operational costs
- Check the status of deliveries
- Prevent inventory theft and fraud
- Maintain the balance between demand and supply
- Improve supply chain visibility to boost customer satisfaction
- Boost the transportation of goods across borders
- Increase operational efficiency and revenue opportunities
- Check for any defects in products or industrial equipment
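For instance, one common pattern is to train an unsupervised anomaly detector on historical sensor readings from shipments and flag outliers such as temperature spikes. The sketch below is illustrative only; the feature names, values, and contamination rate are hypothetical.

```python
# Illustrative sketch: flagging anomalous shipment sensor readings
# (e.g., a temperature spike suggesting spoilage or tampering).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
# Columns: temperature (deg C), humidity (%), hours in transit
normal_readings = rng.normal([4.0, 60.0, 48.0], [0.5, 5.0, 6.0], size=(500, 3))
detector = IsolationForest(contamination=0.01, random_state=1).fit(normal_readings)

new_reading = np.array([[9.5, 58.0, 47.0]])   # unusually warm container
print(detector.predict(new_reading))           # -1 flags an anomaly
```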

Automotive Industry: Self-Driving Cars

IoT sensors are enhancing the capabilities of vehicles, making them smarter and more independent. We call them smart cars or self-driving cars; in the most advanced, a human driver is not even required. Together with artificial intelligence and machine learning, these vehicles can evaluate the situation on the road and make better decisions in real time.

They now have reliable cameras that provide a clear view of the road, and radar detectors that allow autonomous vehicles to see even at night, improving their visibility.

Healthcare Industry: Smart Healthcare Solutions

Patient monitoring has become easy with machine learning and IoT. Doctors can now get real-time data on patients' health conditions from connected devices and suggest tailored treatments.

Remote glucose monitoring is one such use case: doctors can monitor a patient's glucose level through a continuous glucose monitoring (CGM) system. If there is an anomaly in the glucose level, a warning notification is issued so that the patient can immediately contact the doctor and get the necessary treatment.
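At its simplest, the alerting step amounts to checking each reading against a safe range. The toy sketch below illustrates this; the thresholds and messages are hypothetical placeholders, not medical guidance.

```python
# Toy sketch of CGM-style alerting: thresholds are illustrative only.
LOW_MG_DL, HIGH_MG_DL = 70, 180

def check_glucose(reading_mg_dl):
    """Return a warning message when a reading leaves the safe range."""
    if reading_mg_dl < LOW_MG_DL:
        return f"LOW glucose alert: {reading_mg_dl} mg/dL - contact your doctor"
    if reading_mg_dl > HIGH_MG_DL:
        return f"HIGH glucose alert: {reading_mg_dl} mg/dL - contact your doctor"
    return None

for reading in (95, 190, 62):
    alert = check_glucose(reading)
    if alert:
        print(alert)
```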

The AI-equipped Apple Watch is another good use case of machine learning and IoT. The smartwatch is very useful for monitoring heart rate. According to a study by Cardiogram, the Apple Watch is 97 percent accurate at heart rate monitoring and can detect paroxysmal atrial fibrillation, an intermittent irregularity in heart rhythm.

Manufacturing Industry: Condition-Based Monitoring

Machines are not going to last forever; they continuously undergo wear and tear and ultimately reach a point where they need to be repaired or discarded. As the manufacturing industry is one of the sectors that depend most heavily on machines, it needs to keep a strict eye on machine health.

Condition-based monitoring (CBM) is one of the most important predictive maintenance strategies in this case. Using machine learning techniques combined with information gathered from IoT sensors, the status of equipment can be monitored and conclusions drawn about its condition.

For example, mechanical misalignment, short circuits, and wear-out conditions can be detected through this technique. This helps identify the root problem and how soon a machine will need maintenance.
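A minimal version of such a CBM classifier might look like the sketch below, which trains on simulated vibration and temperature features. Real systems would use labeled sensor history from the actual equipment; the feature names and values here are invented.

```python
# Hedged sketch of condition-based monitoring: classify machine state
# from simulated vibration/temperature features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
# Features: RMS vibration, dominant frequency (Hz), bearing temp (deg C)
healthy = rng.normal([1.0, 50.0, 60.0], [0.1, 2.0, 3.0], size=(300, 3))
misaligned = rng.normal([2.5, 100.0, 75.0], [0.3, 5.0, 4.0], size=(300, 3))

X = np.vstack([healthy, misaligned])
y = np.array([0] * 300 + [1] * 300)   # 0 = healthy, 1 = misaligned

clf = RandomForestClassifier(random_state=2).fit(X, y)
print(clf.predict([[2.3, 96.0, 74.0]]))   # likely flags misalignment
```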

Furthermore, this type of automated machine learning assistance decreases human engineering effort by 50%, reduces the maintenance budget, and boosts machine availability. False alarms, one of the main issues in condition monitoring, are also reduced by 90% with the help of machine learning models in CBM.

Conclusion

No single technology can bring massive success to a business on its own, so businesses should be flexible enough to incorporate several technologies together. The Internet of Things (IoT) and machine learning are one such powerful combination that, when used correctly, can scale up the growth of a business.

They are reshaping almost every industry, from agriculture to IT, making them more efficient, scalable, and productive.

Read more here:
Making an Impact: IoT and Machine Learning in Business - Finextra

AWS and NVIDIA Collaborate on Next-Generation Infrastructure for … – NVIDIA Blog

New Amazon EC2 P5 Instances Deployed in EC2 UltraClusters Are Fully Optimized to Harness NVIDIA Hopper GPUs for Accelerating Generative AI Training and Inference at Massive Scale

GTC – Amazon Web Services, Inc. (AWS), an Amazon.com, Inc. company (NASDAQ: AMZN), and NVIDIA (NASDAQ: NVDA) today announced a multi-part collaboration focused on building out the world's most scalable, on-demand artificial intelligence (AI) infrastructure optimized for training increasingly complex large language models (LLMs) and developing generative AI applications.

The joint work features next-generation Amazon Elastic Compute Cloud (Amazon EC2) P5 instances powered by NVIDIA H100 Tensor Core GPUs and AWS's state-of-the-art networking and scalability, which will deliver up to 20 exaFLOPS of compute performance for building and training the largest deep learning models. P5 instances will be the first GPU-based instances to take advantage of AWS's second-generation Elastic Fabric Adapter (EFA) networking, which provides 3,200 Gbps of low-latency, high-bandwidth networking throughput, enabling customers to scale up to 20,000 H100 GPUs in EC2 UltraClusters for on-demand access to supercomputer-class performance for AI.

"AWS and NVIDIA have collaborated for more than 12 years to deliver large-scale, cost-effective GPU-based solutions on demand for various applications such as AI/ML, graphics, gaming, and HPC," said Adam Selipsky, CEO at AWS. "AWS has unmatched experience delivering GPU-based instances that have pushed the scalability envelope with each successive generation, with many customers scaling machine learning training workloads to more than 10,000 GPUs today. With second-generation EFA, customers will be able to scale their P5 instances to over 20,000 NVIDIA H100 GPUs, bringing supercomputer capabilities on demand to customers ranging from startups to large enterprises."

"Accelerated computing and AI have arrived, and just in time. Accelerated computing provides step-function speed-ups while driving down cost and power as enterprises strive to do more with less. Generative AI has awakened companies to reimagine their products and business models and to be the disruptor and not the disrupted," said Jensen Huang, founder and CEO of NVIDIA. "AWS is a long-time partner and was the first cloud service provider to offer NVIDIA GPUs. We are thrilled to combine our expertise, scale, and reach to help customers harness accelerated computing and generative AI to engage the enormous opportunities ahead."

New Supercomputing Clusters

New P5 instances are built on more than a decade of collaboration between AWS and NVIDIA on AI and HPC infrastructure, and follow four previous collaborations across the P2, P3, P3dn, and P4d(e) instances. P5 instances are the fifth generation of AWS offerings powered by NVIDIA GPUs, coming almost 13 years after AWS's initial deployment of NVIDIA GPUs with the CG1 instances.

P5 instances are ideal for training and running inference for increasingly complex LLMs and computer vision models behind the most demanding, compute-intensive generative AI applications, including question answering, code generation, video and image generation, speech recognition, and more.

Specifically built for both enterprises and startups racing to bring AI-fueled innovation to market in a scalable and secure way, P5 instances feature eight NVIDIA H100 GPUs capable of 16 petaFLOPS of mixed-precision performance, 640 GB of high-bandwidth memory, and 3,200 Gbps of networking connectivity (8x more than the previous generation) in a single EC2 instance. The increased performance of P5 instances accelerates time-to-train for machine learning (ML) models by up to 6x (reducing training time from days to hours), and the additional GPU memory helps customers train larger, more complex models. P5 instances are expected to lower the cost to train ML models by up to 40% over the previous generation, providing customers greater efficiency over less flexible cloud offerings or expensive on-premises systems.

Amazon EC2 P5 instances are deployed in hyperscale clusters called EC2 UltraClusters, which comprise the highest-performance compute, networking, and storage in the cloud. Each EC2 UltraCluster is one of the most powerful supercomputers in the world, enabling customers to run their most complex multi-node ML training and distributed HPC workloads. The clusters feature petabit-scale non-blocking networking powered by AWS EFA, a network interface for Amazon EC2 instances that enables customers to run applications requiring high levels of inter-node communication at scale on AWS.

EFA's custom-built operating system (OS) bypass hardware interface and integration with NVIDIA GPUDirect RDMA enhance the performance of inter-instance communication by lowering latency and increasing bandwidth utilization, which is critical to scaling training of deep learning models across hundreds of P5 nodes. With P5 instances and EFA, ML applications can use the NVIDIA Collective Communications Library (NCCL) to scale up to 20,000 H100 GPUs. As a result, customers get the application performance of on-premises HPC clusters with the on-demand elasticity and flexibility of AWS.

On top of these cutting-edge computing capabilities, customers can use the industry's broadest and deepest portfolio of services, such as Amazon S3 for object storage, Amazon FSx for high-performance file systems, and Amazon SageMaker for building, training, and deploying deep learning applications. P5 instances will be available in the coming weeks in limited preview. To request access, visit https://pages.awscloud.com/EC2-P5-Interest.html.
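As a hedged illustration of the software side, multi-node training jobs on GPU clusters typically initialize NCCL as the communication backend, as in the PyTorch sketch below. The launcher environment variables are assumptions about a torchrun-style setup; EFA-specific tuning is omitted and would vary by deployment.

```python
# Minimal sketch of selecting the NCCL backend for multi-node training.
import os

import torch
import torch.distributed as dist

def init_distributed():
    # A launcher such as torchrun sets RANK, WORLD_SIZE, and LOCAL_RANK.
    dist.init_process_group(backend="nccl")   # NCCL handles GPU collectives
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)
    return local_rank
```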

With the new EC2 P5 instances, customers like Anthropic, Cohere, Hugging Face, Pinterest, and Stability AI will be able to build and train the largest ML models at scale. The collaboration through additional generations of EC2 instances will help startups, enterprises, and researchers seamlessly scale to meet their ML needs.

Anthropic builds reliable, interpretable, and steerable AI systems that will have many opportunities to create value commercially and for public benefit. "At Anthropic, we are working to build reliable, interpretable, and steerable AI systems. While the large, general AI systems of today can have significant benefits, they can also be unpredictable, unreliable, and opaque. Our goal is to make progress on these issues and deploy systems that people find useful," said Tom Brown, co-founder of Anthropic. "Our organization is one of the few in the world that is building foundational models in deep learning research. These models are highly complex, and to develop and train these cutting-edge models, we need to distribute them efficiently across large clusters of GPUs. We are using Amazon EC2 P4 instances extensively today, and we are excited about the upcoming launch of P5 instances. We expect them to deliver substantial price-performance benefits over P4d instances, and they'll be available at the massive scale required for building next-generation large language models and related products."

Cohere, a leading pioneer in language AI, empowers every developer and enterprise to build incredible products with world-leading natural language processing (NLP) technology while keeping their data private and secure. "Cohere leads the charge in helping every enterprise harness the power of language AI to explore, generate, search for, and act upon information in a natural and intuitive manner, deploying across multiple cloud platforms in the data environment that works best for each customer," said Aidan Gomez, CEO at Cohere. "NVIDIA H100-powered Amazon EC2 P5 instances will unleash the ability of businesses to create, grow, and scale faster with their computing power combined with Cohere's state-of-the-art LLM and generative AI capabilities."

"Hugging Face is on a mission to democratize good machine learning. As the fastest growing open source community for machine learning, we now provide over 150,000 pre-trained models and 25,000 datasets on our platform for NLP, computer vision, biology, reinforcement learning, and more," said Julien Chaumond, CTO and co-founder at Hugging Face. "With significant advances in large language models and generative AI, we're working with AWS to build and contribute the open source models of tomorrow. We're looking forward to using Amazon EC2 P5 instances via Amazon SageMaker at scale in UltraClusters with EFA to accelerate the delivery of new foundation AI models for everyone."

"Today, more than 450 million people around the world use Pinterest as a visual inspiration platform to shop for products personalized to their taste, find ideas to do offline, and discover the most inspiring creators. We use deep learning extensively across our platform for use cases such as labeling and categorizing billions of photos that are uploaded to our platform, and visual search that gives our users the ability to go from inspiration to action," said David Chaiken, Chief Architect at Pinterest. "We have built and deployed these use cases by leveraging AWS GPU instances such as P3 and the latest P4d instances. We are looking forward to using Amazon EC2 P5 instances featuring H100 GPUs, EFA, and UltraClusters to accelerate our product development and bring new Empathetic AI-based experiences to our customers."

As the leader in multimodal, open-source AI model development and deployment, Stability AI collaborates with public- and private-sector partners to bring this next-generation infrastructure to a global audience. "At Stability AI, our goal is to maximize the accessibility of modern AI to inspire global creativity and innovation," said Emad Mostaque, CEO of Stability AI. "We initially partnered with AWS in 2021 to build Stable Diffusion, a latent text-to-image diffusion model, using Amazon EC2 P4d instances that we employed at scale to accelerate model training time from months to weeks. As we work on our next generation of open-source generative AI models and expand into new modalities, we are excited to use Amazon EC2 P5 instances in second-generation EC2 UltraClusters. We expect P5 instances will further improve our model training time by up to 4x, enabling us to deliver breakthrough AI more quickly and at a lower cost."

New Server Designs for Scalable, Efficient AI

Leading up to the release of the H100, NVIDIA and AWS engineering teams with expertise in thermal, electrical, and mechanical fields collaborated to design servers that harness GPUs to deliver AI at scale, with a focus on energy efficiency in AWS infrastructure. GPUs are typically 20x more energy efficient than CPUs for certain AI workloads, and the H100 is up to 300x more efficient than CPUs for LLMs.

The joint work has included developing a system thermal design, integrated security and system management, security with the AWS Nitro hardware-accelerated hypervisor, and NVIDIA GPUDirect optimizations for the AWS custom EFA network fabric.

Building on their work on server optimization, AWS and NVIDIA have begun collaborating on future server designs to increase scaling efficiency with subsequent-generation system designs, cooling technologies, and network scalability.

Visit link:
AWS and NVIDIA Collaborate on Next-Generation Infrastructure for ... - NVIDIA Blog

Biological research and self-driving labs in deep space supported … – Nature.com

Visit link:
Biological research and self-driving labs in deep space supported ... - Nature.com

How Deep Learning is Revolutionizing Biology – BBN Times

In recent years, the field of biology has been rapidly transformed by the use of deep learning technology.

Deep learning algorithms have revolutionized the way we understand and analyze biological data, providing powerful tools for drug discovery, genomics, disease diagnosis, and protein folding. With the ability to quickly and accurately analyze vast amounts of data, deep learning is helping researchers identify patterns, make predictions, and develop new treatments for a variety of diseases.

Drug discovery is one of the most promising applications of deep learning in biology. Traditionally, drug discovery has been a time-consuming, expensive, and often unreliable process, involving testing thousands of compounds for their potential to treat a specific disease. However, deep learning algorithms can analyze large amounts of data from drug trials, animal models, and clinical studies to identify promising drug candidates. For example, the pharmaceutical company Atomwise has developed a deep learning algorithm that can predict the efficacy of potential drug compounds by analyzing their chemical structures. By using deep learning, researchers can reduce the time and cost of drug discovery, while also increasing the chances of success.
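To make the idea concrete, the sketch below scores hypothetical drug candidates from numeric molecular descriptors with a simple classifier. It is illustrative only: Atomwise's actual models and features are proprietary, and the descriptors and labels here are invented.

```python
# Illustrative sketch: scoring candidate compounds from toy descriptors.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
# Hypothetical descriptors: molecular weight, logP, H-bond donor count
X = rng.normal([350.0, 2.5, 2.0], [50.0, 1.0, 1.0], size=(200, 3))
y = rng.integers(0, 2, size=200)   # 1 = active in assay (toy labels)

model = LogisticRegression().fit(X, y)
candidate = np.array([[410.0, 3.1, 1.0]])
print(model.predict_proba(candidate)[0, 1])   # predicted activity probability
```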

Another area where deep learning is making a significant impact is in genomics. Genomics involves the analysis of the human genome, which is composed of over three billion base pairs. Traditional methods of analyzing this data are slow and inefficient, but deep learning algorithms can quickly and accurately analyze genomic data, allowing researchers to identify genetic mutations associated with diseases such as cancer. For example, a team of researchers from the University of California, San Francisco, and Stanford University used deep learning to identify a genetic mutation that increases the risk of breast cancer. Deep learning has the potential to transform our understanding of genetics, leading to new treatments and therapies for a variety of diseases.

Deep learning is also being used to improve disease diagnosis. Traditionally, doctors have relied on their experience and medical training to diagnose diseases. However, deep learning algorithms can analyze large amounts of patient data, including medical histories, laboratory results, and imaging scans, to identify patterns and make accurate diagnoses. For example, a team of researchers from Stanford University developed a deep learning algorithm that can diagnose skin cancer with the same accuracy as board-certified dermatologists. By using deep learning, doctors can make more accurate diagnoses, leading to better patient outcomes and improved healthcare.

One of the most challenging problems in biology is predicting the three-dimensional structure of proteins. The shape of a protein determines its function, and understanding protein structure is critical for developing new drugs and understanding disease. Deep learning is being used to tackle this problem by analyzing large datasets of protein structures to identify patterns and predict the structure of unknown proteins. For example, Google's DeepMind developed a deep learning algorithm called AlphaFold that can accurately predict protein structure, outperforming traditional methods. By using deep learning, researchers can accelerate their understanding of protein folding, leading to new treatments and therapies for a variety of diseases.

Deep learning has many benefits for the field of biology. It allows researchers to analyze vast amounts of data quickly and accurately, leading to breakthroughs in our understanding of complex biological problems. By using deep learning, researchers can develop new treatments and therapies for a variety of diseases, leading to improved healthcare and better patient outcomes. Deep learning also has the potential to accelerate our understanding of genetics and protein folding, leading to new discoveries and innovations in the field of biology.

While deep learning has many benefits for the field of biology, there are also limitations and challenges in implementing this technology. One of the main challenges is the need for large amounts of high-quality data to train deep learning models. Biological data is often noisy and incomplete, which can make it difficult to train accurate deep learning models. Additionally, deep learning algorithms are often considered "black boxes" because they can be difficult to interpret, making it challenging to understand how they arrived at their conclusions. This can make it difficult for researchers to replicate results and ensure that deep learning models are making accurate predictions.

The future of deep learning in biology looks promising. As technology continues to improve, we can expect to see even more powerful deep learning algorithms developed specifically for biological applications. With the continued growth of big data and advances in machine learning algorithms, we can expect deep learning to become an increasingly important tool for researchers in the field of biology. Deep learning has the potential to revolutionize our understanding of complex biological problems, leading to new treatments and therapies for a variety of diseases.

Deep learning is revolutionizing the field of biology, providing researchers with powerful tools for drug discovery, genomics, disease diagnosis, and protein folding. By analyzing vast amounts of data quickly and accurately, deep learning is helping researchers identify patterns, make predictions, and develop new treatments for a variety of diseases. While there are limitations and challenges in implementing deep learning in biology, the future looks promising. With continued advancements in technology and machine learning algorithms, we can expect deep learning to become an increasingly important tool for researchers in the field of biology.

Read the original:
How Deep Learning is Revolutionizing Biology - BBN Times

AI finds the first stars were not alone – Science Daily

By using machine learning and state-of-the-art supernova nucleosynthesis models, a team of researchers has found that the majority of observed second-generation stars in the universe were enriched by multiple supernovae, reports a new study in The Astrophysical Journal.

Nuclear astrophysics research has shown that the elements carbon and heavier are produced in stars. But the first stars, born soon after the Big Bang, did not contain such heavy elements, which astronomers call 'metals'. The next generation of stars contained only a small amount of the heavy elements produced by the first stars. Understanding the universe in its infancy requires researchers to study these metal-poor stars.

Luckily, these second-generation metal-poor stars can be observed in our Milky Way Galaxy, and they have been studied by a team of Affiliate Members of the Kavli Institute for the Physics and Mathematics of the Universe (Kavli IPMU) to close in on the physical properties of the first stars in the universe.

The team, led by Kavli IPMU Visiting Associate Scientist and The University of Tokyo Institute for Physics of Intelligence Assistant Professor Tilman Hartwig, including Visiting Associate Scientist and National Astronomical Observatory of Japan Assistant Professor Miho Ishigaki, Visiting Senior Scientist and University of Hertfordshire Professor Chiaki Kobayashi, Visiting Senior Scientist and National Astronomical Observatory of Japan Professor Nozomu Tominaga, and Visiting Senior Scientist and The University of Tokyo Professor Emeritus Ken'ichi Nomoto, used artificial intelligence to analyze elemental abundances in more than 450 extremely metal-poor stars observed to date. Based on the newly developed supervised machine learning algorithm trained on theoretical supernova nucleosynthesis models, they found that 68 per cent of the observed extremely metal-poor stars have a chemical fingerprint consistent with enrichment by multiple previous supernovae.
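In spirit, the approach is supervised classification: train a model on abundance patterns generated by theoretical nucleosynthesis models, then apply it to observed stars. The sketch below is a toy stand-in with invented abundance ratios, not the authors' actual pipeline or data.

```python
# Hedged sketch: classify stars as enriched by one vs. multiple supernovae
# from (toy) elemental abundance ratios.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(4)
# Toy features: [C/Fe], [Mg/Fe], [Ca/Fe] abundance ratios
mono_enriched = rng.normal([0.8, 0.40, 0.30], 0.1, size=(200, 3))
multi_enriched = rng.normal([0.3, 0.50, 0.45], 0.1, size=(200, 3))

X = np.vstack([mono_enriched, multi_enriched])
y = np.array([0] * 200 + [1] * 200)   # 1 = multiple supernovae

clf = SVC(probability=True).fit(X, y)
print(clf.predict_proba([[0.35, 0.50, 0.44]]))   # probability of multiplicity
```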

The team's results give the first quantitative constraint based on observations on the multiplicity of the first stars.

"Multiplicity of the first stars were only predicted from numerical simulations so far, and there was no way to observationally examine the theoretical prediction until now," said lead author Hartwig. "Our result suggests that most first stars formed in small clusters so that multiple of their supernovae can contribute to the metal enrichment of the early interstellar medium," he said.

"Our new algorithm provides an excellent tool to interpret the big data we will have in the next decade from on-going and future astronomical surveys across the world" said Kobayashi, also a Leverhulme Research Fellow.

"At the moment, the available data of old stars are the tip of the iceberg within the solar neighborhood. The Prime Focus Spectrograph, a cutting-edge multi-object spectrograph on the Subaru Telescope developed by the international collaboration led by Kavli IPMU, is the best instrument to discover ancient stars in the outer regions of the Milky Way far beyond the solar neighborhood.," said Ishigaki.

The new algorithm introduced in this study opens the door to making the most of the diverse chemical fingerprints of metal-poor stars discovered by the Prime Focus Spectrograph.

"The theory of the first stars tells us that the first stars should be more massive than the Sun. The natural expectation was that the first star was born in a gas cloud containing the mass million times more than the Sun. However, our new finding strongly suggests that the first stars were not born alone, but instead formed as a part of a star cluster or a binary or multiple star system. This also means that we can expect gravitational waves from the first binary stars soon after the Big Bang, which could be detected future missions in space or on the Moon," said Kobayashi.

Original post:
AI finds the first stars were not alone - Science Daily

7 free learning resources to land top data science jobs – Cointelegraph

Data science is an exciting and rapidly growing field that involves extracting insights and knowledge from data. To land a top data science job, it is important to have a solid foundation in key data science skills, including programming, statistics, data manipulation and machine learning.

Fortunately, there are many free online learning resources available that can help you develop these skills and prepare for a career in data science. These resources include online learning platforms such as Coursera, edX and DataCamp, which offer a wide range of courses in data science and related fields.

Data science and related subjects are covered by a variety of courses on the online learning platform Coursera. These courses are often taught by academics from prestigious universities and cover subjects such as machine learning, data analysis, and statistics.

Here are some examples of data science courses on Coursera:

One can apply for financial aid to earn these certifications for free. However, taking a course just for the certification may not land you a dream job in data science.

Kaggle is a platform for data science competitions that provides a wealth of resources for learning and practicing data science skills. One can refine one's skills in data analysis, machine learning, and other branches of data science by participating in the platform's challenges and exploring its host of datasets.

Here are some examples of free courses available on Kaggle:

Related: 9 data science project ideas for beginners

EdX is another online learning platform that offers courses in data science and related fields. Many of the courses on edX are taught by professors from top universities, and the platform offers both free and paid options for learning.

Some of the free courses on data science available on edX include:

All of these courses are free to audit, meaning that you can access all the course materials and lectures without paying a fee. Nevertheless, there will be a cost if you wish to access additional course features or receive a certificate of completion. In addition to these courses, edX also offers a comprehensive selection of paid courses and programs in data science, machine learning, and related topics.

DataCamp is an online learning platform that offers courses in data science, machine learning and other related fields. The platform offers interactive coding challenges and projects that can help you build real-world skills in data science.

The following courses are available for free on DataCamp:

All of these courses are free and can be accessed through DataCamp's online learning platform. In addition to these courses, DataCamp also offers a wide range of paid courses and projects covering topics such as data visualization, machine learning, and data engineering.

Udacity is an online learning platform that offers courses in data science, machine learning and other related fields. The platform offers both free and paid courses, and many of the courses are taught by industry professionals.

Here are some examples of free courses on data science available on Udacity:

Related: 5 high-paying careers in data science

MIT OpenCourseWare is an online repository of course materials from courses taught at the Massachusetts Institute of Technology. The platform offers a variety of courses in data science and related fields, and all of the materials are available for free.

Here are some of the free courses on data science available on MIT OpenCourseWare:

GitHub is a platform for sharing and collaborating on code, and it can be a valuable resource for learning data science skills. However, GitHub itself does not offer free courses. Instead, one can explore the many open-source data science projects hosted on GitHub to learn how data science is used in practical situations.

Scikit-learn is a popular Python library for machine learning, which provides a range of algorithms for tasks such as classification, regression, and clustering, along with tools for data preprocessing, model selection, and evaluation. The project is open source and available on GitHub.

Jupyter is an open-source web application for creating and sharing interactive notebooks. Jupyter notebooks provide a way to combine code, text and multimedia content in a single document, making it easy to explore and communicate data science results.

These are just a few examples of the many open-source data science projects available on GitHub. By exploring these projects and contributing to them, one can gain valuable experience with data science tools and techniques, while also building their portfolio and demonstrating their skills to potential employers.

Read the original here:
7 free learning resources to land top data science jobs - Cointelegraph

Autonomous shuttle gets new capabilities through machine learning … – Fleet World

Autonomous transport company Aurrigo has improved its driverless vehicles' capabilities in a project with Aston University.

Aurrigo's airport Auto-Dolly is now able to differentiate between many different objects

The two-year Knowledge Transfer Partnership (KTP) with the university developed a new machine vision solution, using machine learning and artificial intelligence, that means the Coventry-based company's driverless vehicles are now able to see and recognise objects in greater detail. This results in improved performance across a wider spectrum of test situations.

Previously, the company's driverless vehicles were only capable of detecting that there was an object in their path, not what type of object it was, so they would simply stop when they encountered something in their way.

The new computer vision systems, coupled with machine learning and artificial intelligence, can now tell objects apart, enabling Aurrigo's airport Auto-Dolly to differentiate between the many different objects found airside.
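For a sense of what such a capability looks like in code, the hedged sketch below runs a pretrained, general-purpose object detector over a single image so that each obstacle comes back with a class label rather than a bare 'something is there' signal. It uses an off-the-shelf COCO-trained torchvision model and a hypothetical image file; Aurrigo's actual system is proprietary.

```python
# Illustrative only: per-object classification with a pretrained detector.
import torch
from torchvision.io import read_image
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import convert_image_dtype

model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()
img = convert_image_dtype(read_image("airside_scene.jpg"), torch.float)

with torch.no_grad():
    detections = model([img])[0]   # boxes, class labels, confidence scores

# Unlike a stop-on-anything obstacle sensor, each detection carries a
# label, so the vehicle can respond differently per object type.
print(detections["labels"], detections["scores"])
```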

Professor David Keene, CEO of Aurrigo, said: "This partnership has allowed us to produce a system which has resulted in our vehicles becoming smarter and more capable, and enabled us to expand our operations, particularly with baggage handling in airports worldwide."

Dr George Vogiatzis, senior lecturer in computer science at Aston University, added: "This KTP has been a great way for us to work with a new industrial partner whilst applying our expertise in deep learning and robotics to the exciting field of autonomous vehicles.

"It is very rewarding to see the success of this collaboration."

The project findings will also be applied to other vehicles in the Aurrigo product range.

Read the original:
Autonomous shuttle gets new capabilities through machine learning ... - Fleet World

Data Annotation and Labeling Global Market Report 2023 … – GlobeNewswire

Dublin, March 23, 2023 (GLOBE NEWSWIRE) -- The "Data Annotation and Labeling Market Component, Data Type, Application (Dataset Management, Sentiment Analysis), Annotation Type, Vertical (BFSI, IT and ITES, Healthcare and Life Sciences) and Region - Global Forecast to 2027" report has been added to ResearchAndMarkets.com's offering.

The global data annotation and labeling market is projected to grow from USD 0.8 billion in 2022 to USD 3.6 billion by 2027, at a CAGR of 33.2% during the forecast period.

Any model or system that relies on computer-driven decision-making must have its data annotated and labeled in order to guarantee that its decisions are accurate and pertinent. Businesses use massive datasets when building an ML model, carefully customizing them according to the model's training needs.

As a result, machines can detect data that has been annotated in a variety of comprehensible formats, including images, text, and video. This explains why AI and ML firms seek out this type of annotated data to feed into their ML algorithms, training them to learn and detect recurrent patterns and ultimately employing the same to make accurate estimates and predictions.
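As a concrete (and entirely hypothetical) example, an annotated image record consumed by an ML training pipeline often looks something like the following structure; field names vary by tool.

```python
# Hypothetical example of a labeled image record for model training.
annotation = {
    "image": "frame_000123.jpg",   # hypothetical file name
    "labels": [
        {"class": "car",        "bbox": [34, 50, 210, 180]},   # x, y, w, h
        {"class": "pedestrian", "bbox": [250, 60, 40, 120]},
    ],
    "annotator": "worker_17",
    "reviewed": True,
}
```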

The major market players, such as Google, Appen, IBM, Oracle, TELUS International, Adobe, and AWS, have adopted numerous growth strategies, including acquisitions, new product launches, product enhancements, and business expansions, to enhance their market shares.

By organization size, SMEs are anticipated to grow at the highest CAGR during the forecast period

The increasingly competitive market scenario is expected to prompt SMEs to invest in cutting-edge technologies and adopt go-to-market strategies for making informed business decisions. SMEs are more open to adopting new technology to improve and streamline business operations as well as to expand their presence in the global economy. During the forecast period, SMEs are anticipated to grow at the highest CAGR.

By application, catalogue management segment to register the highest CAGR during the forecast period

The catalogue management tool helps businesses handle enormous amounts of unstructured data across many AI and ML projects. Teams that work on data annotation need strong tools that can gather all kinds of data and information from various sources into a single, searchable database. Companies such as LabelBox have developed data annotation tools powered by catalogue management with the intent of filtering unstructured data based on metadata properties. Among applications, catalogue management is projected to register the highest CAGR during the forecast period.

Asia Pacific market to register highest CAGR during the forecast period

The data annotation and labeling market is projected to register the highest CAGR in the Asia Pacific region during the forecast period. The rapid industrialization of countries across the Asia Pacific and the increasing digitalization trend are producing large amounts of unstructured data. Since the Asia Pacific region has shown untapped potential in the increased adoption of data annotation and labeling solutions, many organizations are moving there to extend their market reach. Due to growing awareness of corporate productivity and the competently designed data annotation and labeling solutions offered by vendors in this market, the Asia Pacific has emerged as a very promising region.

Market Dynamics

Drivers

Restraints

Opportunities

Challenges

Key Attributes:

Key Topics Covered:

1 Introduction

2 Research Methodology

3 Executive Summary

4 Premium Insights

5 Market Overview and Industry Trends

6 Data Annotation and Labeling Market, By Component
6.1 Introduction
6.2 Solutions
6.3 Services
6.3.2 Professional Services
6.3.2.1 Training and Consulting
6.3.2.2 System Integration and Implementation
6.3.2.3 Support and Maintenance
6.3.3 Managed Services

7 Data Annotation and Labeling Market, By Data Type
7.1 Introduction
7.2 Text
7.3 Image
7.4 Video
7.5 Audio

8 Data Annotation and Labeling Market, By Deployment Type
8.1 Introduction
8.2 On-Premises
8.3 Cloud

9 Data Annotation and Labeling Market, By Organization Size
9.1 Introduction
9.2 Small and Medium-Sized Enterprises
9.3 Large Enterprises

10 Data Annotation and Labeling Market, By Annotation Type
10.1 Introduction
10.2 Manual
10.3 Automatic
10.4 Semi-Supervised

11 Data Annotation and Labeling Market, By Application
11.1 Introduction
11.2 Dataset Management
11.3 Security and Compliance
11.4 Data Quality Control
11.5 Workforce Management
11.6 Content Management
11.7 Catalogue Management
11.8 Sentiment Analysis
11.9 Other Applications

12 Data Annotation and Labeling Market, By Vertical
12.1 Introduction
12.2 BFSI
12.3 Healthcare and Life Sciences
12.4 Telecom
12.5 Government, Defense, and Public Agencies
12.6 IT and ITES
12.7 Retail and Consumer Goods
12.8 Automotive
12.9 Other Verticals

13 Market, By Region

14 Competitive Landscape

15 Company Profiles

16 Adjacent and Related Markets

17 Appendix

Companies Mentioned

For more information about this report visit https://www.researchandmarkets.com/r/j2el21

About ResearchAndMarkets.com

ResearchAndMarkets.com is the world's leading source for international market research reports and market data. We provide you with the latest data on international and regional markets, key industries, the top companies, new products and the latest trends.

Read the original post:
Data Annotation and Labeling Global Market Report 2023 ... - GlobeNewswire

Machine Intelligence and Humanity Benefit From "Spiral" of Mutual Learning – Neuroscience News

Summary: Humans and computers can interact via multiple modes and channels to respectively gain wisdom and deepen intelligence.

Source: Intelligent Computing

Deyi Li from the Chinese Association for Artificial Intelligence believes that humans and machines have a mutually beneficial relationship.

His paper on machine intelligence, published in Intelligent Computing, builds on five groundbreaking works by Schrödinger, the father of quantum mechanics; Turing, the father of artificial intelligence; and Wiener, the father of cybernetics.

Schrödinger and beyond: Machines can think and interact with the world as time goes by.

Inspired by Schrödinger's book What is Life? The Physical Aspect of the Living Cell, Li believes that machines can be considered living things. That is, like humans, they decrease the amount of entropy, or disorder, in their environment through their interactions with the world.

"The machines of the agricultural age and the industrial age existed only at the physical level, but now, in the age of intelligence, machines consist of four elements at two different levels: matter and energy at the physical level, and structure and time at the cognitive level. The machine can be the carrier of thought, and time is the foundation of machine cognition," Li explained.

Turing and beyond: Machines can think, but can they learn?

In 1936, Turing published what has been called the most influential mathematics paper, establishing the idea of a universal computing machine able to perform any conceivable computation. Such hypothetical computers are called Turing machines.

His 1950 paper Computing Machinery and Intelligence introduced what is now known as the Turing test for measuring machine intelligence, sparking a debate over whether machines can think. A proponent of thinking machines, Turing believed that a child machine could be educated and eventually achieve an adult level of intelligence.

However, given that cognition is only one part of the learning process, Li pointed out two limitations of Turing's model in achieving better machine intelligence. First, the machine's cognition is disconnected from its environment rather than connected to it.

This shortcoming has also been highlighted in a paper by Michael Wooldridge titled What Is Missing from Contemporary AI? The World. Second, the machine's cognition is disconnected from memory and thus cannot draw on memories of past experiences.

As a result, Li defines intelligence as the ability to engage in learning, the goal of which is to be able to explain and solve actual problems.

Wiener and beyond: Machines have behavioral intelligence.

In 1948, Wiener published a book that served as the foundation of the field of cybernetics, the study of control and communication within and between living organisms, machines and organizations.

In the wake of the book's success, he published another, focusing on the problems of cybernetics from the perspective of sociology and suggesting ways for humans and machines to communicate and interact harmoniously.

According to Li, machines follow a control pattern similar to the human nervous system. Humans provide missions and behavioral features to machines, which must then run a complex behavior cycle regulated by a reward and punishment function to improve their abilities of perception, cognition, behavior, interaction, learning and growth.

Through iteration and interaction, the short-term memory, working memory and long-term memory of the machines change, embodying intelligence through automatic control.

"In essence, control is the use of negative feedback to reduce entropy and ensure the stability of the embodied behavioral intelligence of a machine," Li concluded.
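A toy numeric illustration of negative feedback: a proportional controller feeds the error back with the opposite sign, so deviations shrink and the system settles. The gain and setpoint below are arbitrary values chosen for the sketch.

```python
# Toy proportional (negative feedback) control loop.
def feedback_step(state, setpoint, gain=0.5):
    error = setpoint - state       # measure deviation
    return state + gain * error    # correction opposes the error

state = 10.0
for _ in range(10):
    state = feedback_step(state, setpoint=0.0)
print(round(state, 4))   # converges toward the setpoint
```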

The strength of contemporary machines is deep learning, which still requires human input, but leverages the ability of devices to use brute force methods of solving problems with insights gleaned directly from big data.

A joint future: from learning to creating

Machine intelligence cannot work in isolation; it requires human interaction. Furthermore, machine intelligence is inseparable from language, because humans use programming languages to control machine behavior.

The impressive performance of ChatGPT, a chatbot showcasing recent advances in natural language processing, proves that machines are now capable of internalizing human language patterns and producing appropriate example texts, given the appropriate context and goal.

Since AI-generated texts are increasingly indistinguishable from human-written texts, some are saying that AI writing tools have passed the Turing test. Such declarations provoke both admiration and alarm.

Li is among the optimists who envision artificial intelligence in a natural balance with human civilization. He believes, from a physics perspective, that cognition is based on a combination of matter, energy, structure and time, which he calls hard-structured ware, and expressed through information, which he calls soft-structured ware.

He concludes that humans and machines can interact through multiple channels and modes to gain wisdom and intelligence, respectively. Despite their different endowments in thinking and creativity, this interaction allows humans and machines to benefit from each others strengths.

Author: Xuwen Liu
Source: Intelligent Computing
Contact: Xuwen Liu, Intelligent Computing
Image: The image is credited to Deyi Li

Original Research: Open access. "Cognitive Physics: The Enlightenment by Schrödinger, Turing, and Wiener and Beyond" by Deyi Li. Intelligent Computing

Abstract

Cognitive Physics: The Enlightenment by Schrödinger, Turing, and Wiener and Beyond

In the first half of the 20th century, 5 classic articles were written by 3 outstanding scholars, namely, Wiener (1894 to 1964), the father of cybernetics, Schrödinger (1887 to 1961), the father of quantum mechanics, and Turing (1912 to 1954), the father of artificial intelligence.

The articles discuss concepts such as computability, life, machines, control, and artificial intelligence, establishing a solid foundation for machine intelligence (how can machines recognize as humans do?) and its future development.

Continued here:
Machine Intelligence and Humanity Benefit From "Spiral" of Mutual Learning - Neuroscience News

Workday’s Response To AI and Machine Learning: Moving Faster Than Ever – Josh Bersin

This week we met with Workday at the company's annual Innovation Summit, and I walked away very impressed. Not only is Workday clear-eyed and definitive about its AI product strategy; the company is also entering one of its strongest product cycles in years. I have never seen so many Workday features reach maturity, and it's clear to me that the platform is hitting on all cylinders.

Let me start with an overview: the ERP market is big, important, and changing. Every company needs a financial and human capital system, and these platforms are being asked to do hundreds of things at once. We expect them to be easy to use, fast, and instantly configurable for our company. But we also want them to be easy to extend, open to integration with many other systems, and built on a modern architecture. How can Workday, a company founded 18 years ago, stay ahead in all these areas?

It's actually pretty simple. Workday is not an ERP or software applications company: it's a technology company that builds platforms for business solutions. In other words, Workday thinks architecture first, applications second, and this was reinforced again and again as we went through Workday's offerings. Let me give you a few insights on what we learned, and I encourage you to contact us or read more from Workday on many of the things below.

First, Workday is quite clear that AI and machine learning will, over time, reinvent what business systems do. The traditional ERP world was a set of core business applications including Financials, Human Capital (HCM), Supply Chain, and Manufacturing, and later Marketing, Customer Analysis, and others. Almost every vendor that starts in one of these areas tries to move into adjacencies, primarily with the goal of selling more software to existing customers.

Today, while companies want to consolidate these applications (a big opportunity for Workday), the bigger goal is reinventing how these applications work together. As Workday describes it, its goal is to help businesses improve planning, execution, and analysis. When it's hard to hire, as it will likely continue to be for years, we want the HCM system to help us find contractors, look at alternative work arrangements, arrange financial and billing solutions to outsource work or tasks, and also find and develop internal candidates. So the red lines between these applications are blurring, and Workday understands this well.

In a sense this is the core of our new Systemic HR Operating Model. We want these various HCM systems, for example, to look at all four of these elements and help us manage them together. Workday's new HCM demo actually showed some of this in action.

Beyond ERP To AI And ML At The Core

But the platform market is moving even faster. Not only do companies want a suite of apps that work together (Workday, Oracle, SAP, and others do this), they want AI and machine learning to operate across the company. And this will change what ERP systems do. Workday listed more than 50 different machine learning experiences the company is already delivering, and they take the form of recommendations, pre-filled forms, or pre-designed workflows that don't look like magic; they just look like intelligent systems that help you run your company better. And this is where Workday is focused.
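
One of the simplest of those patterns is pre-filling a form field with the value a model considers most likely. Here is a minimal sketch of the idea, using a plain frequency count as a stand-in for a real recommendation model; the field names and data are illustrative assumptions, not anything Workday ships.

```python
# Sketch: suggest a pre-filled default for a form field based on a user's
# past entries. A frequency count stands in for a real model.
from collections import Counter

past_expense_categories = ["travel", "travel", "meals", "travel", "software"]

def suggest_default(history: list) -> str:
    """Return the most common past value as the pre-filled default."""
    return Counter(history).most_common(1)[0][0]

print(suggest_default(past_expense_categories))  # -> "travel"
```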

The new Workforce Management system (labor optimization), for example, can predict hiring and staffing needs based on month, weather, and other external inputs. It can then schedule workers based on their availability, skills, and wages. And it can automatically create a workforce schedule, decide when contract labor is needed, and then automatically create hiring portals and candidate experiences to find people. This is really AI-enabled ERP, not a fancy demo of generative AI to make emails easier to write.
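
To make that two-step pattern concrete (forecast demand, then fill the roster), here is a minimal sketch under stated assumptions: a toy demand model driven by month and weather, and a greedy scheduler that picks the cheapest qualified, available workers. Nothing here reflects Workday's actual models or data.

```python
# Sketch: predict staffing demand from external signals, then greedily
# assign the lowest-cost qualified, available workers. All names and
# numbers are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Worker:
    name: str
    skills: set
    hourly_wage: float
    available: bool

def predict_staff_needed(month: int, rain_expected: bool) -> int:
    """Toy demand model: baseline plus seasonal and weather effects."""
    baseline = 10
    seasonal_bump = 5 if month in (11, 12) else 0   # holiday season
    weather_drop = -2 if rain_expected else 0       # rain cuts foot traffic
    return baseline + seasonal_bump + weather_drop

def schedule(workers: list, needed: int, required_skill: str) -> list:
    """Pick the lowest-cost available workers with the required skill."""
    qualified = [w for w in workers if w.available and required_skill in w.skills]
    qualified.sort(key=lambda w: w.hourly_wage)
    return qualified[:needed]

workers = [
    Worker("Ana", {"register", "stocking"}, 18.0, True),
    Worker("Ben", {"register"}, 16.5, True),
    Worker("Chi", {"stocking"}, 15.0, False),
]
needed = predict_staff_needed(month=12, rain_expected=False)
roster = schedule(workers, needed, required_skill="register")
print(f"Need {needed} staff; scheduled: {[w.name for w in roster]}")
# A shortfall between `needed` and the roster is what could trigger the
# contract-labor requisitions the article describes.
```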

Workday HCM Continues To Mature

The Workday HCM suite is in the strongest shape I've seen in years. The Workday Skills Cloud is maturing into a skills intelligence platform, and it now has features that make it almost essential for a Workday customer. It can import data from any vertical or specialized skills database, it gives companies multiple ways to infer or assess skills, and it gives you dozens of ways to report on skills gaps, predict skills deficiencies, and create upskilling pathways for each employee or workforce group. I've watched this technology grow over the years, and never before have I seen it so well put together and positioned to do what companies want.

This is not to say, by the way, that companies no longer need specialized skills systems for recruiting (Eightfold, Beamery, Phenom, Seekout, Paradox, iCims, others), mobility (Gloat, Fuel50), learning (Cornerstone, Docebo, Degreed), pay equity (Syndio, Trusaic, Salary.com), and many more. In some sense every HR tech platform now has a skills engine under the covers (remember, a skill is a series of words that describes attributes of a person; see the sketch below), and these systems leverage those data elements for very specific purposes. Skills Cloud, in its more mature position in the market, is intended to be a consolidation point that brings the taxonomy into one place. (And it's the skills engine that the Workday HCM tools rely upon.)
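
To illustrate the parenthetical point that a skill is essentially a set of words describing a person, here is a minimal sketch of inferring skills by matching a tiny hand-built taxonomy against free text. The taxonomy, aliases, and matching rule are illustrative assumptions, not how Skills Cloud actually works.

```python
# Sketch: infer skills by matching taxonomy aliases against profile text.
import re

SKILL_TAXONOMY = {
    "python": ["python", "pandas", "numpy"],
    "project management": ["project management", "scrum", "agile"],
    "financial analysis": ["financial analysis", "forecasting", "budgeting"],
}

def infer_skills(profile_text: str) -> set:
    """Return taxonomy skills whose alias words appear in the text."""
    text = profile_text.lower()
    found = set()
    for skill, aliases in SKILL_TAXONOMY.items():
        if any(re.search(r"\b" + re.escape(a) + r"\b", text) for a in aliases):
            found.add(skill)
    return found

resume = "Led budgeting and forecasting for a scrum team using pandas."
print(sorted(infer_skills(resume)))
# ['financial analysis', 'project management', 'python']
```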

I know, by the way, that all Workday customers have a multitude of other HCM systems. Given the innovation cycle taking place (vendors are getting on the AI bandwagon in very creative ways), this is going to continue. But Workday's role as the core remains strong, particularly because of my next point.

Workday Is Now Truly Open

I was also impressed with Workday's progress with Extend and Orchestrate, the external APIs and development tools that enable customers and partners to build add-on applications. Workday as a company is not planning on building a lot of vertical solutions; rather, it is now pushing partners (Accenture, PwC, and clients) to contribute to the app ecosystem. This creates a force-multiplier effect where third parties can make money by building a dev team around Workday. (This, by the way, is why Microsoft is so ubiquitous: its reseller and partner network is massive.)

In addition to these programming interfaces, Workday has made a serious commitment to Microsoft Teams (Workday Everywhere). You can now view Workday cards within Teams and click on deep links within Teams that take you right to Workday transactions. While the company is still committed to continuous improvements in its user interface, I think Workday now understands that users will never spend all day figuring out how Workday works. I believe this trend will continue, and I encouraged Workday to consider ChatGPT as the next major interface to build. (They were non-committal.)

Vertical Applications

I asked the management team: what do you think about Oracle's decision to buy Cerner, one of the leaders in clinical patient management? Do you think this threatens your vertical strategy? Aneel Bhusri jumped up to argue that Workday would never buy an old legacy company like that; it would never integrate into Workday's architecture. This matters because Workday's integrated architecture lets the company deliver AI at scale. In other words, Workday intends to be the pure-play architectural leader and let the vertical applications come over time.

Today Workday focuses on the education market and has several vertical solutions in financial services, insurance, and healthcare (many built by partners). I don't think the company is going to follow the SAP or Oracle strategy of building deep vertical apps. And this strategy, that of staying pure to the core architecture, may play out well in the long run. So for those of you who want to build add-ons, Workday is opening up faster than ever.

What About AI In The Core?

Now let's talk about AI, the most important technology innovation of our time. Sayan Chakraborty, the new co-president and a recognized academic expert on AI, has a very strong position. He believes that Workday's 60 million users (many of whom have opted in to anonymous neural network analysis) give the company a massive AI-enabled platform already. So the company's strategy is to double down on declarative AI (machine learning) and then look at generative AI as a new research effort.

In many ways Workday has been doing AI since it acquired Identified in 2014, and many AI algorithms are built into the Skills Cloud, the sourcing and recruiting tools, and a myriad of tools for analytics, adaptive planning, and learning. Most of the product managers have AI-related features on their plates, and David Somers, who runs the HCM suite, told us there are hundreds of ideas for new AI features floating around. So in many ways Workday has been an AI platform for years; it's just now starting to market it.

That said, Workday's real data assets are not that big. Assume that 30 million Workday users have opted in to Workday's AI platform, and let's assume that the Skills Cloud has tried to index their skills and possibly look at career paths or other attributes. Compared to the data resident in Eightfold (over a billion user records), Seekout (nearly a billion), and systems like Retrain.ai, Skyhive, and sourcing systems like Beamery or Phenom, this is a very small amount of data. At some point Workday is going to have to understand that the HCM AI platforms of today are really global workforce data systems, not just customer data systems. So most of the AI we'll see in Workday will make your version of Workday run a bit better.

Prism: Workday's Strategy To Consolidate Data

Finally, let me mention the growth of Prism Analytics (now referred to as just Prism), Workday's open data platform for analytics and third-party data. When the company acquired Platfora, the original need was to give Workday customers a place to put non-Workday data. Since the Workday data platform is a proprietary, object-based database, there was no way to directly import data into Workday, so the company needed a scalable data platform.

Since then, Prism has grown exponentially. Initially positioned as an analytics system (you could put financial data into Prism and cross-correlate it with HR data), it is now a big data platform that companies can use for financial applications, HR applications, and just about anything else. It's not designed to compete with Google BigQuery or Redshift from AWS (at least not at the moment), but for Workday customers who want to leverage their investment in Workday security and existing applications, it's pretty powerful.

One of the customers who spoke at the conference was Fannie Mae, which has more than $4 trillion in mortgages and loans in its risk-managed portfolio. It is using Prism along with Workday Financials to manage its complex month-end close and other financial analysis. Last year I met a large bank that was using Prism to manage, price, and analyze complex banking securities with enormous amounts of calculation built in. Because Prism is integrated into the Workday platform, any Prism application can leverage any Workday data object, so it's really a big data extension to the Workday platform.
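
The cross-correlation idea is easy to picture. Here is a minimal sketch of joining external transaction data with HR-style headcount data to derive a blended metric, using pandas as a stand-in; the column names and figures are illustrative assumptions, not Prism's actual schema or API.

```python
# Sketch: join non-Workday data (transactions) with HR-style data
# (headcount) on shared keys, then compute a cross-domain metric.
import pandas as pd

# External (non-Workday) data, e.g. branch-level transaction volume.
transactions = pd.DataFrame({
    "branch": ["NYC", "NYC", "SF", "SF"],
    "month": ["2023-01", "2023-02", "2023-01", "2023-02"],
    "volume": [120, 150, 80, 60],
})

# HR-style data, e.g. headcount per branch per month.
headcount = pd.DataFrame({
    "branch": ["NYC", "NYC", "SF", "SF"],
    "month": ["2023-01", "2023-02", "2023-01", "2023-02"],
    "staff": [10, 11, 9, 7],
})

# Join on shared keys, then derive a blended productivity metric.
joined = transactions.merge(headcount, on=["branch", "month"])
joined["volume_per_head"] = joined["volume"] / joined["staff"]
print(joined)
```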

And that leads back to AI. If Sayan's vision comes true, the Workday platform could become a place where customers take their transactional data, customer data, and other important business data and correlate it with Workday financial and HCM data, using AI to find patterns and opportunities. While AWS, Google Cloud, and Azure will offer these services too, none of these vendors has any business applications to offer. So part of Workday's AI strategy is to enable companies to build their own AI-enabled apps, implemented through Extend and Orchestrate and fueled with data from Prism.

This is going to be a crowded space. Microsoft's new Power Platform Copilot and the Azure OpenAI Service also give companies a place (and a method) to build enterprise AI apps. And Google will likely soon launch many new AI services as well. But for companies that have invested in Workday as their core finance or HCM platform, there are going to be new AI apps that wind up in the Workday platform, and that drives utilization, revenue (through Extend, Prism, and Orchestrate), and even vertical apps for Workday.

Workday's Position For The Future

In summary, Workday is well positioned for this new technology revolution. I did challenge the management team to consider ChatGPT as a new conversational front end to the whole system, and they agreed that it is on their list of things to look at.

(By the way, the creative solutions coming to HR in generative AI are going to blow your mind. I'll share more soon.)

For enterprise buyers, Workday remains rock solid. With only a few major competitors to think about (Oracle, SAP, UKG, Darwinbox, ADP), the company is likely to continue to grow market share among large companies. There will be pricing pressure because of the economy, but for companies that want a first-class technology platform for core finance and HR, Workday will continue to be a leader.

Additional Resources

The Role Of Generative AI And Large Language Models in HR

New MIT Research Shows Spectacular Increase In White Collar Productivity From ChatGPT

LinkedIn Announces Generative AI Features For Career, Hiring, and Learning

Microsoft Launches OpenAI CoPilots For Dynamics Apps And The Enterprise.

Understanding Chat-GPT, And Why It's Even Bigger Than You Think (*updated)

Microsoft's Massive Upgrade: OpenAI CoPilot For Entire MS 365 Suite.

Originally posted here:
Workday's Response To AI and Machine Learning: Moving Faster Than Ever - Josh Bersin