Category Archives: Machine Learning
Application of machine learning in predicting non-alcoholic fatty liver … – Nature.com
Read more:
Application of machine learning in predicting non-alcoholic fatty liver ... - Nature.com
Learning to grow machine-learning models | MIT News | Massachusetts Institute of Technology – MIT News
It's no secret that OpenAI's ChatGPT has some incredible capabilities; for instance, the chatbot can write poetry that resembles Shakespearean sonnets or debug code for a computer program. These abilities are made possible by the massive machine-learning model that ChatGPT is built upon. Researchers have found that when these types of models become large enough, extraordinary capabilities emerge.
But bigger models also require more time and money to train. The training process involves showing hundreds of billions of examples to a model. Gathering so much data is an involved process in itself. Then come the monetary and environmental costs of running many powerful computers for days or weeks to train a model that may have billions of parameters.
It's been estimated that training models at the scale of what ChatGPT is hypothesized to run on could take millions of dollars, just for a single training run. "Can we improve the efficiency of these training methods, so we can still get good models in less time and for less money? We propose to do this by leveraging smaller language models that have previously been trained," says Yoon Kim, an assistant professor in MIT's Department of Electrical Engineering and Computer Science and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL).
Rather than discarding a previous version of a model, Kim and his collaborators use it as the building blocks for a new model. Using machine learning, their method learns to grow a larger model from a smaller model in a way that encodes knowledge the smaller model has already gained. This enables faster training of the larger model.
Their technique saves about 50 percent of the computational cost required to train a large model, compared to methods that train a new model from scratch. Plus, the models trained using the MIT method performed as well as, or better than, models trained with other techniques that also use smaller models to enable faster training of larger models.
Reducing the time it takes to train huge models could help researchers make advancements faster with less expense, while also reducing the carbon emissions generated during the training process. It could also enable smaller research groups to work with these massive models, potentially opening the door to many new advances.
"As we look to democratize these types of technologies, making training faster and less expensive will become more important," says Kim, senior author of a paper on this technique.
Kim and his graduate student Lucas Torroba Hennigen wrote the paper with lead author Peihao Wang, a graduate student at the University of Texas at Austin, as well as others at the MIT-IBM Watson AI Lab and Columbia University. The research will be presented at the International Conference on Learning Representations.
The bigger the better
Large language models like GPT-3, which is at the core of ChatGPT, are built using a neural network architecture called a transformer. A neural network, loosely based on the human brain, is composed of layers of interconnected nodes, or neurons. Each neuron contains parameters, which are variables learned during the training process that the neuron uses to process data.
Transformer architectures are unique because, as these types of neural network models get bigger, they achieve much better results.
This has led to an arms race of companies trying to train larger and larger transformers on larger and larger datasets. "More so than other architectures, it seems that transformer networks get much better with scaling. We're just not exactly sure why this is the case," Kim says.
These models often have hundreds of millions or billions of learnable parameters. Training all these parameters from scratch is expensive, so researchers seek to accelerate the process.
One effective technique is known as model growth. Using the model growth method, researchers can increase the size of a transformer by copying neurons, or even entire layers of a previous version of the network, then stacking them on top. They can make a network wider by adding new neurons to a layer or make it deeper by adding additional layers of neurons.
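The copy-based growth described above can be sketched in a few lines. This toy version uses plain Python lists standing in for weight tensors and glosses over the matching resize that the next layer's inputs would need; it illustrates the general idea of model growth, not the researchers' actual procedure.

```python
import copy

def widen_layer(weights, times=2):
    """Duplicate each neuron's weight row, multiplying the layer width."""
    wider = []
    for row in weights:
        for _ in range(times):
            wider.append(list(row))
    return wider

def deepen_network(layers, extra_copies=1):
    """Stack fresh copies of the top layer onto the network to add depth."""
    grown = [copy.deepcopy(layer) for layer in layers]
    for _ in range(extra_copies):
        grown.append(copy.deepcopy(layers[-1]))
    return grown

# A tiny "network": one layer with two 2-input neurons.
small = [[[0.1, 0.2], [0.3, 0.4]]]
big = deepen_network(small, extra_copies=1)   # now 2 layers deep
big[0] = widen_layer(big[0], times=2)         # first layer now 4 neurons wide
# (a real implementation would also resize the next layer's input dimension)
print(len(big), len(big[0]))                  # 2 4
```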
In contrast to previous approaches for model growth, parameters associated with the new neurons in the expanded transformer are not just copies of the smaller network's parameters, Kim explains. Rather, they are learned combinations of the parameters of the smaller model.
Learning to grow
Kim and his collaborators use machine learning to learn a linear mapping of the parameters of the smaller model. This linear map is a mathematical operation that transforms a set of input values, in this case the smaller model's parameters, to a set of output values, in this case the parameters of the larger model.
Their method, which they call a learned Linear Growth Operator (LiGO), learns to expand the width and depth of a larger network from the parameters of a smaller network in a data-driven way.
But the smaller model may actually be quite large (perhaps it has a hundred million parameters), and researchers might want to make a model with a billion parameters. So the LiGO technique breaks the linear map into smaller pieces that a machine-learning algorithm can handle.
LiGO also expands width and depth simultaneously, which makes it more efficient than other methods. A user can tune how wide and deep they want the larger model to be when they input the smaller model and its parameters, Kim explains.
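The core operation can be sketched as a plain matrix-vector product: the larger model's parameters come out as a linear function of the smaller model's. The 4x2 matrix below is hand-written purely for illustration; in LiGO the operator would be learned from data and factorized into separate width- and depth-expansion pieces rather than stored as one dense matrix.

```python
def matvec(M, v):
    """Multiply matrix M (list of rows) by vector v."""
    return [sum(m_ij * v_j for m_ij, v_j in zip(row, v)) for row in M]

# Toy parameters of the "small" model.
theta_small = [1.0, 2.0]

# A growth operator mapping 2 parameters to 4. The first two rows copy the
# old parameters; the last two are learned-style combinations of them.
M = [[1.0, 0.0],
     [0.0, 1.0],
     [0.5, 0.5],
     [0.5, 0.5]]

theta_large = matvec(M, theta_small)
print(theta_large)   # [1.0, 2.0, 1.5, 1.5]
```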
When they compared their technique to the process of training a new model from scratch, as well as to model-growth methods, it was faster than all the baselines. Their method saves about 50 percent of the computational costs required to train both vision and language models, while often improving performance.
The researchers also found they could use LiGO to accelerate transformer training even when they didn't have access to a smaller, pretrained model.
"I was surprised by how much better all the methods, including ours, did compared to the random-initialization, train-from-scratch baselines," Kim says.
In the future, Kim and his collaborators are looking forward to applying LiGO to even larger models.
The work was funded, in part, by the MIT-IBM Watson AI Lab, Amazon, the IBM Research AI Hardware Center, Center for Computational Innovation at Rensselaer Polytechnic Institute, and the U.S. Army Research Office.
Machine Learning Programs Predict Risk of Death Based on Results From Routine Hospital Tests – Neuroscience News
Summary: Using ECG data, a new machine learning algorithm was able to predict death within 5 years of a patient being admitted to hospital with 87% accuracy. The AI was able to sort patients into 5 categories ranging from low to high risk of death.
Source: University of Alberta
If you've ever been admitted to hospital or visited an emergency department, you've likely had an electrocardiogram, or ECG, a standard test involving tiny electrodes taped to your chest that checks your heart's rhythm and electrical activity.
Hospital ECGs are usually read by a doctor or nurse at your bedside, but now researchers are using artificial intelligence to glean even more information from those results to improve your care and the health-care system all at once.
In recently published findings, the research team built and trained machine learning programs based on 1.6 million ECGs done on 244,077 patients in northern Alberta between 2007 and 2020.
The algorithm predicted each patient's risk of death from all causes within one month, one year, and five years of that point, with an 85 percent accuracy rate, sorting patients into five categories from lowest to highest risk.
The predictions were even more accurate when demographic information (age and sex) and six standard laboratory blood test results were included.
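As a rough sketch of the kind of post-processing the article describes, a model's predicted mortality probabilities can be bucketed into five risk bands from lowest to highest. The cutoff values below are invented for illustration and are not taken from the study.

```python
def risk_category(prob, cutoffs=(0.02, 0.05, 0.15, 0.40)):
    """Map a predicted probability of death to a risk band 1 (lowest) to 5 (highest).

    The cutoffs are hypothetical; a real system would calibrate them on data.
    """
    for band, cut in enumerate(cutoffs, start=1):
        if prob < cut:
            return band
    return 5

# Five hypothetical patients with increasing predicted risk.
preds = [0.01, 0.04, 0.12, 0.30, 0.70]
print([risk_category(p) for p in preds])   # [1, 2, 3, 4, 5]
```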
The study is a proof-of-concept for using routinely collected data to improve individual care and allow the health-care system to learn as it goes, according to principal investigatorPadma Kaul, professor of medicine and co-director of theCanadian VIGOUR Centre.
"We wanted to know whether we could use new methods like artificial intelligence and machine learning to analyze the data and identify patients who are at higher risk for mortality," Kaul explains.
"These findings illustrate how machine learning models can be employed to convert data collected routinely in clinical practice to knowledge that can be used to augment decision-making at the point of care as part of a learning health-care system."
A clinician will order an electrocardiogram if you have high blood pressure or symptoms of heart disease, such as chest pain, shortness of breath or an irregular heartbeat. The first phase of the study examined ECG results in all patients, but Kaul and her team hope to refine these models for particular subgroups of patients.
They also plan to focus the predictions beyond all-cause mortality to look specifically at heart-related causes of death.
"We want to take data generated by the health-care system, convert it into knowledge and feed it back into the system so that we can improve care and outcomes. That's the definition of a learning health-care system."
Author: Ross Neitz
Source: University of Alberta
Contact: Ross Neitz, University of Alberta
Image: The image is in the public domain
Original Research: Open access. "Towards artificial intelligence-based learning health system for population-level mortality prediction using electrocardiograms" by Padma Kaul et al. npj Digital Medicine
Abstract
Towards artificial intelligence-based learning health system for population-level mortality prediction using electrocardiograms
The feasibility and value of linking electrocardiogram (ECG) data to longitudinal population-level administrative health data to facilitate the development of a learning healthcare system has not been fully explored. We developed ECG-based machine learning models to predict risk of mortality among patients presenting to an emergency department or hospital for any reason.
Using the 12-lead ECG traces and measurements from 1,605,268 ECGs from 748,773 healthcare episodes of 244,077 patients (2007–2020) in Alberta, Canada, we developed and validated ResNet-based Deep Learning (DL) and gradient boosting-based XGBoost (XGB) models to predict 30-day, 1-year, and 5-year mortality. The models for 30-day, 1-year, and 5-year mortality were trained on 146,173, 141,072, and 111,020 patients and evaluated on 97,144, 89,379, and 55,650 patients, respectively. In the evaluation cohort, 7.6%, 17.3%, and 32.9% of patients died by 30 days, 1 year, and 5 years, respectively.
ResNet models based on ECG traces alone had good-to-excellent performance, with area under the receiver operating characteristic curve (AUROC) of 0.843 (95% CI: 0.838–0.848), 0.812 (0.808–0.816), and 0.798 (0.792–0.803) for 30-day, 1-year, and 5-year prediction, respectively, and were superior to XGB models based on ECG measurements, with AUROC of 0.782 (0.776–0.789), 0.784 (0.780–0.788), and 0.746 (0.740–0.751).
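For readers unfamiliar with the metric, AUROC equals the probability that a randomly chosen positive case (here, a patient who died) receives a higher model score than a randomly chosen negative one. A minimal pure-Python version of that pairwise definition (not the study's code, which used ResNet and XGBoost models at scale) looks like this:

```python
def auroc(labels, scores):
    """AUROC via the pairwise definition: fraction of (positive, negative)
    pairs where the positive scores higher; ties count as half a win."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: two deaths (label 1), two survivors (label 0).
y = [0, 0, 1, 1]
s = [0.1, 0.4, 0.35, 0.8]
print(auroc(y, s))   # 0.75
```

In practice a library routine (e.g. scikit-learn's `roc_auc_score`) would be used; the hand-rolled version just makes the probabilistic interpretation concrete.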
This study demonstrates the validity of ECG-based DL mortality prediction models at the population-level that can be leveraged for prognostication at point of care.
Read the original:
Machine Learning Programs Predict Risk of Death Based on Results From Routine Hospital Tests - Neuroscience News
A.I. and machine learning are about to have a breakout moment in finance – Fortune
Good morning,
There's been a lot of discussion about the use of artificial intelligence and the future of work. Will it replace workers? Will human creativity be usurped by bots? How will A.I. be incorporated into the finance function? These are just some of the questions organizations will face.
I asked Sayan Chakraborty, copresident at Workday (sponsor of CFO Daily), who also leads the product and technology organization, for his perspective on a balance between tech and human capabilities.
"Workday's approach to A.I. and machine learning (ML) is to enhance people, not replace them," Chakraborty tells me. "Our approach ensures humans can effectively harness A.I. by intelligently applying automation and providing supporting information and recommendations, while keeping humans in control of all decisions." He continues, "We believe that technology and people, working together, can allow businesses to strengthen competitive advantage, be more responsive to customers, deliver greater economic and social value, and generate more meaning and purpose for individuals in their work."
Workday, a provider of enterprise cloud applications for finance and HR, has been building and delivering A.I. and ML to customers for nearly a decade, according to Chakraborty. He holds a seat on the National Artificial Intelligence Advisory Committee (NAIAC), which advises the White House on policy issues related to A.I. (And as much as I pressed, Chakraborty is not at liberty to discuss NAIAC efforts or speak for the committee, he says.) But he did share that generative A.I. continues to be a growing part of policy discussions both in the U.S. and in Europe, which has embraced a risk-based approach to A.I. governance.
Techs future in finance
Chakraborty's Workday colleague Terrance Wampler, group general manager for the Office of the CFO at Workday, has further thoughts on how A.I. will impact finance. "If you can automate transaction processes, that means you reduce risk because you reduce manual intervention," Wampler says. Finance chiefs are also looking for the technology to help in accelerating data-based decision-making and recommendations for the company, as well as to play a role in training people with new skills, he says.
Consulting firm Gartner recently made three predictions on financial planning and analysis (FP&A) and controller functions and the use of technology:
By 2025, 70% of organizations will use data-lineage-enabling technologies, including graph analytics, ML, A.I., and blockchain, as critical components of their semantic modeling.
By 2027, 90% of descriptive and diagnostic analytics in finance will be fully automated.
By 2028, 50% of organizations will have replaced time-consuming bottom-up forecasting approaches with A.I.
Workday thinks about and implements A.I. and ML differently than other enterprise software companies, Wampler says. I asked him to explain. Enterprise resource planning (ERP) is a type of software that companies use to manage day-to-day business activities like accounting and procurement. "What makes Workday's ERP for finance and HR different is that A.I. and ML are embedded into the platform," he says. "So, it's not like the ERP is just using an A.I. or ML program. It is actually an A.I. and ML construct." And having ML built into the foundation of the system means there's quicker adaptation of new ML applications when they're added. For example, Workday Financial Management allows for faster automation of high-volume transactions, he says.
ML gets better the more you use it, and Workday has over 60 million users representing about 442 billion transactions a year, according to the company. So ML improves at a faster rate. The platform also allows you to use A.I. predictively. Let's say an FP&A team has its budget for the year. Using ML, they can predictively identify reasons why they would meet that budget, he says. And Workday works on a single cloud-based database for both HR and financials. You have all the information in one place. For quite some time, the company has been using large language models, the technology that has enabled generative A.I., Wampler says. Workday will continue to look into use cases where generative A.I. can add value, he says.
It will definitely be interesting to have a front-row seat as technology in the finance function continues to evolve over the next decade.
Sheryl Estrada
sheryl.estrada@fortune.com
Upcoming event: The next Fortune Emerging CFO virtual event, "Addressing the Talent Gap with Advanced Technologies," presented in partnership with Workday (a CFO Daily sponsor), will take place from 11 a.m. to 12 p.m. EST on April 12. Matt Heimer, executive editor of features at Fortune, and I will be joined by Katie Rooney, CFO at Alight Solutions; and Andrew McAfee, cofounder and codirector of MIT's Initiative on the Digital Economy and principal research scientist at MIT Sloan School of Management. Click here to learn more and register.
"The race to cloud: Reaching the inflection point to long-sought value," a report by Accenture, finds that over the past two years there's been a surge in cloud commitment, with more than 86% of companies reporting an increase in cloud initiatives. To gauge how companies today are approaching the cloud, Accenture asked them to describe the current state of their cloud journeys. Sixty-eight percent said they still consider their cloud journeys incomplete. About a third of respondents (32%) see their cloud journeys as complete and are satisfied with their abilities to meet current business goals. However, 41% acknowledge their cloud journeys are ongoing and continue to evolve to meet changing business needs. The findings are based on a global survey of 800 business and IT leaders in a variety of industries.
"The workforce well-being imperative," a new report by Deloitte, explores three factors that have a prominent impact on well-being in today's work environment: leadership behaviors at all levels, from a direct supervisor to the C-suite; how the organization and jobs are designed; and the ways of working across organizational levels. Deloitte refers to these as "work determinants of well-being."
Lance Tucker was promoted to CFO at Papa John's International, Inc. (Nasdaq: PZZA). Tucker succeeds David Flanery, who will retire from Papa John's after 16 years with the company. Flanery will continue at the company through May, during a transition period. Tucker, 42, has served as Papa John's SVP of strategic planning and chief of staff since 2010. He has 20 years of finance and management experience, including previously serving in manager and director of finance roles at Papa John's from 1994 to 1999. Before Papa John's, Tucker was CFO of Evergreen Real Estate, LLC.
Narayan Menon was named CFO at Matillion, a data productivity cloud company. Menon brings over 25 years of experience in finance and operations. Most recently, Menon served as CFO of Vimeo Inc., where he helped raise multiple rounds of funding and took the company public in 2021. He's also held senior executive roles at Prezi, Intuit, and Microsoft. Menon also served as an advisory board member for the Rutgers University Big Data program.
"This was a bank that was an outlier."
Federal Reserve Chair Jerome Powell said this of Silicon Valley Bank in a press conference following a Fed decision to hike interest rates 0.25%, Yahoo Finance reported. Powell referred to the bank's high percentage of uninsured deposits and its large investment in bonds with longer durations. "These are not weaknesses that are there at all broadly through the banking system," he said.
See the rest here:
A.I. and machine learning are about to have a breakout moment in finance - Fortune
Machine Learning Finds 140000 Future Star Forming Regions in the Milky Way – Universe Today
Our galaxy is still actively making stars. We've known that for a while, but sometimes it's hard to grasp the true scale in astronomical terms. A team from Japan is trying to help with that by using a novel machine-learning technique to identify soon-to-be star-forming regions spread throughout the Milky Way. They found 140,000 of them.
The regions, known in astronomy as molecular clouds, are typically invisible to humans. However, they do emit radio waves, which can be picked up by the massive radio telescopes dotted around our planet. Unfortunately, the Milky Way is the only galaxy close enough for us to pick up those signals, and even in our home galaxy, the clouds are spread so far apart that it has been challenging to capture an overall picture of them.
So a team from Osaka Metropolitan University turned to machine learning. They took a data set from the Nobeyama radio telescope, located in Nagano prefecture, and looked for the prevalence of carbon monoxide molecules. That resulted in an astonishing 140,000 visible molecular clouds in just one quadrant of the Milky Way.
As a next step, the team looked deeper into the data and figured out how large the clouds were, as well as where they were located in the galactic plane. Given that there are three more quadrants to explore, there's a good chance there are significantly more to find.
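The cloud-finding task can be illustrated with a much simpler stand-in than the team's deep-learning method: threshold a carbon-monoxide intensity map, then group neighbouring bright pixels into contiguous blobs. The tiny map and threshold below are made up for the example; the flood-fill just shows the "find contiguous emission" idea.

```python
def find_clouds(grid, threshold):
    """Return connected groups of pixels at or above threshold (4-neighbour)."""
    rows, cols = len(grid), len(grid[0])
    seen = set()
    clouds = []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] >= threshold and (r, c) not in seen:
                # Flood-fill one new cloud starting from this bright pixel.
                stack, cloud = [(r, c)], []
                seen.add((r, c))
                while stack:
                    y, x = stack.pop()
                    cloud.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and grid[ny][nx] >= threshold
                                and (ny, nx) not in seen):
                            seen.add((ny, nx))
                            stack.append((ny, nx))
                clouds.append(cloud)
    return clouds

# A made-up 3x4 CO intensity map: one L-shaped blob and one lone pixel.
co_map = [[0, 5, 5, 0],
          [0, 5, 0, 0],
          [0, 0, 0, 7]]
print(len(find_clouds(co_map, threshold=3)))   # 2
```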
But to access at least two of those quadrants, they need a different radio telescope. Nobeyama is located in Japan, in the northern hemisphere, and can't see the southern sky. Plenty of radio telescopes, such as ALMA, are already online in the southern hemisphere. Others are on the horizon, such as the Square Kilometre Array, which could provide an even deeper look at the southern hemisphere's portion of the galactic plane. The team just needs to pick which one they would like to use.
One of the great things about AI is that once you train it, which can take a significant amount of time, analyzing similar data sets is a breeze. Future work on more radio data should take advantage of that fact and allow Dr. Shinji Fujita and his team to quickly analyze even more star-forming regions. With some additional research, we'll be able to truly understand our galaxy's creation engine sometime in the not-too-distant future.
Learn More:
Osaka Metropolitan University: "AI draws most accurate map of star birthplaces in the Galaxy"
Fujita et al.: "Distance determination of molecular clouds in the first quadrant of the Galactic plane using deep learning: I. Method and results"
UT: "One of the Brightest Star-Forming Regions in the Milky Way, Seen in Infrared"
UT: "Speedrunning Star Formation in the Cygnus X Region"
Lead Image: Star-forming region Sharpless 2-106, about 2,000 light-years from Earth. Credit: NASA, ESA, STScI/AURA
Crypto AI Announces Its Launch, Using AI Machine Learning to … – GlobeNewswire
LONDON, UK, March 23, 2023 (GLOBE NEWSWIRE) -- Crypto AI ($CAI), an AI-powered NFT generator that uses machine learning algorithms to create unique digital assets, has announced its official launch in March 2023. The project aims to revolutionize the NFT space by combining the power of artificial intelligence and machine learning.
Crypto AI ($CAI) is a software application that generates NFTs through a proprietary algorithm that creates unique digital assets. These assets can then be sold on various NFT marketplaces or used as part of a larger project.
Discover What Crypto AI Does
Crypto AI strives to disrupt the NFT and ChatGPT space using artificial intelligence and machine learning.
Martin Weiner, the CEO of Crypto AI, stated, "We are excited to announce the official launch of Crypto AI, an AI-powered NFT generator that uses machine learning algorithms to create unique digital assets. Our goal is to disrupt the NFT space by offering a product that can generate truly unique NFTs that stand out in the marketplace."
Weiner went on to explain the key features of Crypto AI that set it apart from other NFT generators. "What sets Crypto AI apart is the power of our proprietary algorithm. Our algorithm uses advanced machine learning techniques to create unique digital assets that are truly one-of-a-kind. Our AI-powered NFT generator is not only faster than traditional methods, but it is also more accurate and efficient."
Crypto AI aims to offer a new way for artists and creators to monetize their work through NFTs. The project believes that AI-powered NFTs will help increase the value of digital assets and make them more accessible to a broader audience.
Weiner added, "We believe that AI-powered NFTs have the potential to revolutionize the art world by making it more inclusive and accessible to a wider audience. Our platform offers a new way for artists and creators to monetize their work and showcase it to the world."
Crypto AI is also committed to sustainability and plans to use renewable energy sources for its operations. The project believes that it is essential to minimize the environmental impact of its operations and is actively exploring ways to reduce its carbon footprint.
"We understand the importance of sustainability, and we are committed to minimizing our environmental impact. We plan to use renewable energy sources for our operations and explore ways to reduce our carbon footprint," Weiner stated.
Crypto AI's launch is highly anticipated by the NFT community, and the project has already gained significant interest from artists and collectors worldwide. The project's innovative approach to NFT creation and its commitment to sustainability have made it stand out in a crowded marketplace.
About Crypto AI
Crypto AI ChatGPT Bot is an AI-powered bot that assists users in their conversations with automated and intelligent responses. We use natural language processing and machine learning algorithms to generate meaningful and relevant responses to user queries.
Social Links
Twitter: https://twitter.com/CryptoAIbsc
Telegram: https://t.me/CryptoAI_eng
Medium: https://medium.com/@CryptoAI
Discord: https://github.com/crypto-ai-git
Media Contact
Brand: Crypto AI
E-mail: team@cai.codes
Website: https://cai.codes
SOURCE: Crypto AI
Unlock the Next Wave of Machine Learning with the Hybrid Cloud – The New Stack
Machine learning is no longer about experiments. Most industry-leading enterprises have already seen dramatic successes from their investments in machine learning (ML), and there is near-universal agreement among business executives that building data science capabilities is vital to maintaining and extending their competitive advantage.
The bullish outlook is evident in the U.S. Bureau of Labor Statistics' predictions for the data science career field: "Employment of data scientists is projected to grow 36% from 2021 to 2031, much faster than the average for all occupations."
The aim now is to grow these initial successes beyond the specific parts of the business where they had initially emerged. Companies are looking to scale their data science capabilities to support their entire suite of business goals and embed ML-based processes and solutions everywhere the company does business.
Vanguards within the most data-centric industries, including pharmaceuticals, finance, insurance, aerospace and others, are investing heavily. They are assembling formidable teams of data scientists with varied backgrounds and expertise to develop and place ML models at the core of as many business processes as possible.
More often than not, they are running headlong into the challenges of executing data science projects across the regional, organizational, and technological divisions that abound in every organization. Data is worthless without the tools and infrastructure to use it, and both are fragmented across regions and business units, as well as in cloud and on-premises environments.
Even when analysts and data scientists overcome the hurdle of getting access to data in other parts of the business, they quickly find that they lack effective tools and hardware to leverage the data. At best, this results in low productivity, weeks of delays, and significantly higher costs due to suboptimal hardware, expensive data storage, and unnecessary data transfers. At worst, it results in project failure, or not being able to initiate the project to begin with.
Successful enterprises are learning to overcome these challenges by embracing hybrid-cloud strategies. Hybrid cloud, the integrated use of on-premises and cloud environments, also encompasses multicloud, the use of cloud offerings from multiple cloud providers. A hybrid-cloud approach enables companies to leverage the best of all worlds.
They can take advantage of the flexibility of cloud environments, the cost benefits of on-premises infrastructure, and the ability to select best-of-breed tools and services from any cloud vendor and machine learning operations tooling. More importantly for data science, hybrid cloud enables teams to leverage the end-to-end set of tools and infrastructure necessary to unlock data-driven value everywhere their data resides.
It allows them to arbitrage the inherent advantages of different environments while preserving data sovereignty and providing the flexibility to evolve as business and organizational conditions change.
While many organizations try to cope with disconnected platforms spread across different on-premises and cloud environments, today the most successful organizations understand that their data science operations must be hybrid cloud by design. That is, they implement end-to-end ML platforms that support hybrid cloud natively and provide integrated capabilities that work seamlessly and consistently across environments.
In a recent Forrester survey of AI infrastructure decision-makers, 71% of IT decision-makers say hybrid cloud support by their AI platform is important for executing their AI strategy, and 29% say it's already critical. Further, 91% said they will be investing in hybrid cloud within two years, and 66% said they already had invested in hybrid support for AI workloads.
In addition to the overarching benefit of a hybrid-cloud strategy for data science (the ability to execute data science projects and implement ML solutions anywhere in your business), there are three key drivers accelerating the trend:
Data sovereignty: Regulatory requirements like GDPR are forcing companies to process data locally, under threat of heavy fines, in more and more parts of the world. The EU Artificial Intelligence Act, which triages AI applications across three risk categories and calls for outright bans on applications deemed the riskiest, will go a step further than fines. Gartner predicts that 65% of the world's population will soon be covered by similar regulations.
Cost optimization: The size of ML workloads grows as companies scale data science because of the increasing number of use cases, larger volumes of data and the use of computationally intensive deep learning models. Hybrid-cloud platforms enable companies to direct workloads to the most cost-effective infrastructure, e.g., optimize utilization of an on-premises GPU cluster and mitigate rising cloud costs.
Flexibility: Taking a hybrid-cloud approach allows for future-proofing to address the inevitable changes in business operations and IT strategy, such as a merger or acquisition involving a company that has a different tech stack, expansion to a new geography where your default cloud vendor does not operate or even a cloud vendor becoming a significant competitor.
Implementing a hybrid-cloud strategy for ML is easier said than done. For example, no public cloud vendor offers more than token support for on-premises workloads, let alone support for a competitor's cloud, and the range of tools and infrastructure your data science teams need scales as you grow your data science rosters and undertake more ML projects. Here are the three essential capabilities for which every business must provide hybrid-cloud support in order to scale data science across the organization:
Full data science life cycle coverage: From model development to deployment to monitoring, enterprises need data science tooling and operations to manage every aspect of data science at scale.
Agnostic support for data science tooling: Given the variety of ML and AI projects and the differing skills and backgrounds of the data scientists across your distributed enterprise, your strategy needs to provide hybrid-cloud support for the major open-source data science languages and frameworks, and likely a few proprietary tools, not to mention the extensibility to support the host of new tools and methods that are constantly being developed.
Scalable compute infrastructure: More data, more use cases and more advanced methods require the ability to scale up and scale out with distributed compute and GPU support, but this also requires an ability to support multiple distributed compute frameworks, since no single framework is optimal for all workloads. Spark may work perfectly for data engineering, but you should expect that you'll need a data-science-focused framework like Ray or Dask (or even OpenMPI) for your ML model training at scale.
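The scale-out pattern described above — fanning many independent training tasks out to whatever compute is available — can be sketched with Python's standard library alone; frameworks like Ray and Dask apply the same idea across clusters of machines rather than local threads. Everything here, including the toy mean-predictor "model" and the shard layout, is an illustrative assumption, not any framework's actual API:

```python
from concurrent.futures import ThreadPoolExecutor

def train_on_shard(shard):
    """Fit a trivial 'model' (a mean predictor) on one data shard.

    Stands in for an expensive per-partition training job; returns the
    fitted parameter and its training error on that shard.
    """
    mean = sum(shard) / len(shard)
    mse = sum((y - mean) ** 2 for y in shard) / len(shard)
    return {"model": mean, "mse": mse}

def train_distributed(shards):
    # Dispatch one training task per shard to a pool of workers.
    # Ray/Dask generalize exactly this map step across many machines.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(train_on_shard, shards))

shards = [[1.0, 2.0, 3.0], [10.0, 12.0], [5.0, 5.0, 5.0]]
for result in train_distributed(shards):
    print(result)
```

The design point the article makes still applies to the sketch: the task function is framework-agnostic, so the same `train_on_shard` could be submitted to a local pool, a Ray cluster, or a Dask scheduler depending on where the data lives.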
Embedding ML models throughout your core business functions lies at the heart of AI-based digital transformation. Organizations must adopt a hybrid-cloud or equivalent multicloud strategy to expand beyond initial successes and deploy impactful ML solutions everywhere.
Data science teams need end-to-end, extensible and scalable hybrid-cloud ML platforms to access the tools, infrastructure and data they need to develop and deploy ML solutions across the business. Organizations need these platforms for the regulatory, cost and flexibility benefits they provide.
The Forrester survey notes that organizations that adopt hybrid cloud approaches to AI development are already seeing the benefits across the entire AI/ML life cycle, experiencing 48% fewer challenges in deploying and scaling their models than companies relying on a single cloud strategy. All evidence suggests that the vanguard of companies who have already invested in their data science teams and platforms are pulling even further ahead using hybrid cloud.
Scientists are using machine learning to forecast bird migration and … – Yahoo News
With chatbots like ChatGPT making a splash, machine learning is playing an increasingly prominent role in our lives. For many of us, it's been a mixed bag. We rejoice when our Spotify For You playlist finds us a new jam, but groan as we scroll through a slew of targeted ads on our Instagram feeds.
Machine learning is also changing many fields in ways that may seem surprising. One example is my discipline, ornithology, the study of birds. It isn't just solving some of the biggest challenges associated with studying bird migration; more broadly, machine learning is expanding the ways in which people engage with birds. As spring migration picks up, here's a look at how machine learning is influencing ways to research birds and, ultimately, to protect them.
Most birds in the Western Hemisphere migrate twice a year, flying over entire continents between their breeding and nonbreeding grounds. While these journeys are awe-inspiring, they expose birds to many hazards en route, including extreme weather, food shortages and light pollution that can attract birds and cause them to collide with buildings.
Our ability to protect migratory birds is only as good as the science that tells us where they go. And that science has come a long way.
In 1920, the U.S. Geological Survey launched the Bird Banding Laboratory, spearheading an effort to put bands with unique markers on birds, then recapture the birds in new places to figure out where they traveled. Today researchers can deploy a variety of lightweight tracking tags on birds to discover their migration routes. These tools have uncovered the spatial patterns of where and when birds of many species migrate.
However, tracking birds has limitations. For one thing, over 4 billion birds migrate across the continent every year. Even with increasingly affordable equipment, the number of birds that we track is a drop in the bucket. And even within a species, migratory behavior may vary across sexes or populations.
Further, tracking data tells us where birds have been, but it doesn't necessarily tell us where they're going. Migration is dynamic, and the climates and landscapes that birds fly through are constantly changing. That means it's crucial to be able to predict their movements.
This is where machine learning comes in. Machine learning is a subfield of artificial intelligence that gives computers the ability to learn tasks or associations without explicitly being programmed. We use it to train algorithms that tackle various tasks, from forecasting weather to predicting March Madness upsets.
But applying machine learning requires data, and the more data, the better. Luckily, scientists have inadvertently compiled decades of data on migrating birds through the Next Generation Weather Radar system. This network, known as NEXRAD, is used to measure weather dynamics and help predict future weather events, but it also picks up signals from birds as they fly through the atmosphere.
BirdCast is a collaborative project of Colorado State University, the Cornell Lab of Ornithology and the University of Massachusetts that seeks to leverage that data to quantify bird migration. Machine learning is central to its operations. Researchers have known since the 1940s that birds show up on weather radar, but to make that data useful, we need to remove nonavian clutter and identify which scans contain bird movement.
This process would be painstaking by hand, but by training algorithms to identify bird activity, we have automated it and unlocked decades of migration data. And machine learning allows the BirdCast team to take things further: by training an algorithm to learn which atmospheric conditions are associated with migration, we can use predicted conditions to produce forecasts of migration across the continental U.S.
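The "atmospheric conditions in, migration forecast out" idea can be illustrated with a toy logistic classifier trained from scratch. The features (tailwind, temperature), data points and thresholds below are invented for illustration only and bear no relation to BirdCast's actual models or inputs:

```python
import math

# Hypothetical toy data: (tailwind_mps, temp_c) -> heavy migration (1) or not (0).
DATA = [((8.0, 15.0), 1), ((6.0, 12.0), 1), ((7.0, 14.0), 1),
        ((1.0, 4.0), 0), ((0.5, 2.0), 0), ((2.0, 5.0), 0)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(data, lr=0.01, epochs=5000):
    """Fit logistic-regression weights by stochastic gradient descent."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in data:
            p = sigmoid(w1 * x1 + w2 * x2 + b)
            err = p - y  # gradient of the log loss w.r.t. the logit
            w1 -= lr * err * x1
            w2 -= lr * err * x2
            b -= lr * err
    return w1, w2, b

def forecast(params, conditions):
    """Probability of heavy migration under the given conditions."""
    w1, w2, b = params
    x1, x2 = conditions
    return sigmoid(w1 * x1 + w2 * x2 + b)

params = train(DATA)
print(forecast(params, (7.5, 14.0)))  # favorable night: probability near 1
print(forecast(params, (0.5, 3.0)))   # unfavorable night: probability near 0
```

Real forecast models are far richer (radar-derived labels, many more atmospheric variables, spatial structure), but the training loop follows the same shape: labeled past conditions in, a predictive function out.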
BirdCast began broadcasting these forecasts in 2018 and has become a popular tool in the birding community. Many users may recognize that radar data helps produce these forecasts, but fewer realize that it's a product of machine learning.
Currently these forecasts can't tell us what species are in the air, but that could be changing. Last year, researchers at the Cornell Lab of Ornithology published an automated system that uses machine learning to detect and identify nocturnal flight calls. These are species-specific calls that birds make while migrating. Integrating this approach with BirdCast could give us a more complete picture of migration.
These advancements exemplify how effective machine learning can be when guided by expertise in the field where it is being applied. As a doctoral student, I joined Colorado State University's Aeroecology Lab with a strong ornithology background but no machine learning experience. Conversely, Ali Khalighifar, a postdoctoral researcher in our lab, has a background in machine learning but has never taken an ornithology class.
Together, we are working to enhance the models that make BirdCast run, often leaning on each others insights to move the project forward. Our collaboration typifies the convergence that allows us to use machine learning effectively.
Machine learning is also helping scientists engage the public in conservation. For example, forecasts produced by the BirdCast team are often used to inform Lights Out campaigns.
These initiatives seek to reduce artificial light from cities, which attracts migrating birds and increases their chances of colliding with human-built structures, such as buildings and communication towers. Lights Out campaigns can mobilize people to help protect birds at the flip of a switch.
As another example, the Merlin bird identification app seeks to create technology that makes birding easier for everyone. In 2021, the Merlin staff released a feature that automates song and call identification, allowing users to identify what they're hearing in real time, like an ornithological version of Shazam.
This feature has opened the door for millions of people to engage with their natural spaces in a new way. Machine learning is a big part of what made it possible.
"Sound ID is our biggest success in terms of replicating the magical experience of going birding with a skilled naturalist," Grant Van Horn, a staff researcher at the Cornell Lab of Ornithology who helped develop the algorithm behind this feature, told me.
Opportunities for applying machine learning in ornithology will only increase. As billions of birds migrate over North America to their breeding grounds this spring, people will engage with these flights in new ways, thanks to projects like BirdCast and Merlin. But that engagement is reciprocal: The data that birders collect will open new opportunities for applying machine learning.
"Computers can't do this work themselves. Any successful machine learning project has a huge human component to it. That is the reason these projects are succeeding," Van Horn said.
This article is republished from The Conversation, an independent nonprofit news site dedicated to sharing ideas from academic experts. Like this article? Subscribe to our weekly newsletter.
It was written by: Miguel Jimenez, Colorado State University.
Miguel Jimenez receives funding from the National Aeronautics and Space Administration.
Striveworks Partners With Carahsoft to Provide AI and Machine … – PR Newswire
AUSTIN, Texas, March 23, 2023 /PRNewswire/ -- Striveworks, a pioneer in responsible MLOps, today announced a partnership with Carahsoft Technology Corp., The Trusted Government IT Solutions Provider. Under the agreement, Carahsoft will serve as Striveworks' public sector distributor, making the company's Chariot platform and other software solutions available to government agencies through Carahsoft's reseller partners and its NASA Solutions for Enterprise-Wide Procurement (SEWP) V, Information Technology Enterprise Solutions Software 2 (ITES-SW2), OMNIA Partners, and National Cooperative Purchasing Alliance (NCPA) contracts.
"We are excited to partner with Carahsoft and its reseller partners to leverage their public sector expertise and expand access to our products and solutions," said Quay Barnett, Executive Vice President at Striveworks. "Striveworks' inclusion on Carahsoft's contracts enables U.S. Federal, State, and Local Governments to make better models, faster."
Decision making in near-peer and contested environments requires end-to-end dynamic data capabilities that are rapidly deployed. Current solutions remain isolated, not scalable, and not integrated from enterprise to edge. The Striveworks and Carahsoft partnership helps simplify the procurement of Striveworks' AI and machine learning solutions.
Striveworks' Chariot provides a no-code/low-code solution that supports all phases of mission-relevant analytics including: developing, deploying, monitoring, and remediating models. Also available through the partnership is Ark, Striveworks' edge model deployment software for the rapid and custom integration of computer vision, sensors, and telemetry data collection.
"We are pleased to add Striveworks' solutions to our AI and machine learning portfolio," said Michael Adams, Director of Carahsoft's AI/ML Solutions Portfolio. "Striveworks' data science solutions and products allow government agencies to simplify their machine learning operations. We look forward to working with Striveworks and our reseller partners to help the public sector drive better outcomes in operationally relevant timelines."
Striveworks' offerings are available through Carahsoft's SEWP V contracts NNG15SC03B and NNG15SC27B, ITES-SW2 contract W52P1J-20-D-0042, NCPA contract NCPA001-86, and OMNIA Partners contract R191902. For more information contact Carahsoft at (888) 606-2770 or [emailprotected].
About Striveworks
Striveworks is a pioneer in responsible MLOps for national security and other highly regulated spaces. Striveworks' MLOps platform, Chariot, enables organizations to deploy AI/ML models at scale while maintaining full audit and remediation capabilities. Founded in 2018, Striveworks was highlighted as an exemplar in the National Security Commission on Artificial Intelligence's 2020 Final Report. For more information visit http://www.striveworks.com.
About Carahsoft
Carahsoft Technology Corp. is The Trusted Government IT Solutions Provider, supporting Public Sector organizations across Federal, State and Local Government agencies and Education and Healthcare markets. As the Master Government Aggregator for our vendor partners, we deliver solutions for Artificial Intelligence & Machine Learning, Cybersecurity, MultiCloud, DevSecOps, Big Data, Open Source, Customer Experience and more. Working with resellers, systems integrators and consultants, our sales and marketing teams provide industry leading IT products, services and training through hundreds of contract vehicles. Visit us at http://www.carahsoft.com.
Media Contact: Mary Lange, (703) 230-7434, [emailprotected]
SOURCE Striveworks, Inc.
Applied Intuition Acquires the SceneBox Platform to Strengthen … – PR Newswire
MOUNTAIN VIEW, Calif., March 21, 2023 /PRNewswire/ -- Applied Intuition, Inc., a simulation and software provider for autonomous vehicle (AV) development, has acquired SceneBox, a data management and operations platform built specifically for machine learning (ML). The core team of Caliber Data Labs, Inc., the creator of SceneBox, will join the Applied team.
The SceneBox platform enables engineers to train better, more accurate ML models with a data-centric approach. To successfully train production-grade ML models, teams rely heavily on high-quality datasets. When working with enormous unstructured data, finding the right datasets can be difficult, time-consuming, and costly. SceneBox lets engineers explore, curate, and compare datasets rapidly, diagnose problems, and orchestrate complex data operations. The platform offers a rich web interface, extensive APIs, and advanced features such as embedding-based search.
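Embedding-based search of the kind mentioned above can be sketched in a few lines: each scene is stored as a vector produced by some perception model, and a query embedding is ranked against the index by cosine similarity. The file names, 3-dimensional embeddings and function names below are hypothetical illustrations, not SceneBox's actual API:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def search(index, query, k=2):
    # Rank every stored scene by similarity to the query embedding
    # and return the k closest matches.
    ranked = sorted(index.items(), key=lambda kv: cosine(kv[1], query), reverse=True)
    return [name for name, _ in ranked[:k]]

# Hypothetical scene embeddings (real systems use hundreds of dimensions
# and approximate nearest-neighbor indexes rather than a full sort).
index = {
    "rainy_night.png": [0.9, 0.1, 0.0],
    "sunny_highway.png": [0.1, 0.9, 0.1],
    "foggy_bridge.png": [0.8, 0.2, 0.1],
}
print(search(index, [0.85, 0.15, 0.05], k=2))
```

The payoff described in the article follows directly: given an embedding of a problematic scene, an engineer can retrieve the most similar scenes from a massive unstructured dataset to curate targeted training data.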
"We are thrilled to welcome Yaser and the SceneBox team to Applied," said Qasar Younis, Co-Founder and CEO of Applied Intuition. "When we learned of Yaser's vision and our complementary product strategies, we immediately wanted to join forces. The SceneBox team brings a wealth of knowledge and experience in ML and data ops that will help strengthen our offerings. We look forward to working together and better serving our customers."
"We are proud to be a part of the Applied team and the company's mission to accelerate the world's adoption of safe and intelligent machines," said Yaser Khalighi, Founder and CEO of Caliber Data Labs. "Autonomy is a data problem. I am confident that our joint expertise will allow customers to spend less time wrangling data and more time building better ML models."
DLA Piper LLP (U.S.) served as legal counsel to Applied Intuition. Fasken served as legal counsel to Caliber Data Labs.
About Applied Intuition
Applied Intuition's mission is to accelerate the world's adoption of safe and intelligent machines. The company's suite of simulation, validation, and data management software makes it faster, safer, and easier to bring autonomous systems to market. Autonomy programs across industries and 17 of the top 20 global automotive OEMs rely on Applied's solutions to develop, test, and deploy autonomous systems at scale. Learn more at https://applied.co.
About SceneBox
SceneBox is a Software 2.0 data engine for computer vision engineers. The Caliber Data Labs team built SceneBox as a modular and scalable platform that enables engineers to quickly search, curate, orchestrate, visualize, and debug massive perception datasets (e.g., camera and lidar images, videos, etc.). Teams can measure the performance of their ML models and fix problems using the right data. By helping engineers spend more time building ML models and less time wrangling data, SceneBox aims to fundamentally change the way perception data is managed at a global scale.
SOURCE Applied Intuition