Category Archives: Data Science

Nimble Gravity Acquires mDEVZ to Strengthen Its Data, AI and Software Engineering Capabilities – Yahoo Finance

DENVER, Colo. and BUENOS AIRES, Argentina, Sept. 29, 2023 /PRNewswire/ -- Today, Nimble Gravity LLC and mDEVZ announced that Nimble Gravity has successfully completed its acquisition of mDEVZ, a data science and application engineering consultancy based in Buenos Aires, Argentina. mDEVZ brings a deep heritage in finance, gaming, and retail and will expand Nimble Gravity's Data Science and Engineering practices, continuing the company's strategic 2023 growth initiative.

mDEVZ strengthens Nimble Gravity's ability to help customers transform their businesses with AI, bringing data science capabilities, additional expertise in application development, and net new capabilities in computer vision, rendering & optimization, and Unity3D.

"We are excited to bring mDEVZ onboard as we continue to advance our growth strategy for 2023," said Tony Aug, co-founder, and chief executive officer of Nimble Gravity. "Their expertise in data science, artificial intelligence and software engineering will further strengthen our position in the market and enhance our ability to deliver comprehensive solutions to clients looking to leverage cutting-edge technology for their businesses."

"We are proud of the results we've delivered to our customers during our 10-year history and are immensely appreciative of all their support," said Mauro Lopez, founder and CEO of mDEVZ. "Joining Nimble Gravity represents a unique opportunity for our team to scale the impact of our work to an even broader customer base. Together, we can drive forward the strategy and execute the most innovative tech solutions to achieve unparalleled success and to give our team new opportunities to grow their careers and deepen the valuable skills at the company."

mDEVZ builds on Nimble Gravity's global operations, augmenting its team of professionals ready to tackle the hardest challenges businesses are facing in today's digital landscape.


About Nimble Gravity:

Founded in 2019, Nimble Gravity is an international consultancy firm that specializes in Strategy, E-Commerce, Digital Transformation, Data Science, Analytics, and BI, as well as Software Development and Tech Design. Nimble Gravity believes in the power of data and evidence-based approaches to drive growth, transform businesses, and create winning solutions for a diverse clientele.

Headquartered in Denver, Colorado, with offices in Mexico City, Guadalajara, Buenos Aires, and Medellín, Nimble Gravity is a rapidly growing consulting firm ready to tackle the hardest challenges your business is facing.

For more information, please contact sales@nimblegravity.com


View original content: https://www.prnewswire.com/news-releases/nimble-gravity-acquires-mdevz-to-strengthen-its-data-ai-and-software-engineering-capabilities-301942583.html

SOURCE Nimble Gravity


AI and Data to look at use of data analytics to address long-term … – Digital Health

Still in its early stages, one especially promising application of AI in health is its use with population health data sets to help address the growth of chronic diseases. A session on Day Two of Digital Health AI and Data will focus on how AI can meet the challenges faced by patients with multiple long-term conditions (MLTC).

Simon Fraser, professor of public health at the School of Primary Care, Population Science and Medical Education at the University of Southampton, has spent much of his career as a public health specialist looking at the epidemiology of long-term conditions.

He will join Dr Gyucha Thomas Jun, professor of socio-technical system design at the School of Design and Creative Arts at Loughborough; Krish Nirantharakumar, professor of health data science and public health at the University of Birmingham; and Professor Michael Barnes, professor of bioinformatics and director of the centre for translational bioinformatics at Queen Mary, University of London, for the AI and Data session, which looks at how four UK research groups are using AI and data analytics to study England's 14 million people living with MLTCs.

Fraser, who heads one of the National Institute for Health and Care Research's (NIHR) seven research consortia looking at multiple long-term conditions (multimorbidity), observes that understanding the development of long-term conditions to better inform prevention requires information from across the life course, but that electronic health records are recent enough that they cannot provide sufficient information on their own.

"We are exploring data that have the potential to look at the whole life course," Fraser told Digital Health News, adding that the research group is using both birth cohorts, groups of several thousand people born in the same week (for example, in 1970) for whom extensive data is collected every few years, and ordinary healthcare information in electronic health records.

Although social determinants of health, from upbringing to education to income, are known to influence the likelihood that an individual will develop health problems, that level of detail is often hard for researchers to access and such data is not routinely collected in health settings.

"The important thing is to fill in that gap and bring those determinants into the wider story of development of long-term conditions across the life course," Fraser said.

"Because of the complexity of those data, there is the potential for AI methods to help with clustering, sequencing and various other aspects of how people develop conditions over time."
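To make the idea concrete, here is a minimal, hypothetical sketch of the kind of clustering Fraser describes, using scikit-learn on synthetic data; the condition names and values are invented for illustration, and this is not the consortium's actual pipeline.

```python
# Illustrative only: cluster synthetic patient condition profiles to find
# groups of co-occurring long-term conditions (hypothetical data).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)

# Hypothetical binary indicators: does each patient have each condition?
conditions = ["diabetes", "hypertension", "copd", "depression", "ckd"]
X = rng.integers(0, 2, size=(1000, len(conditions)))

# Group patients into four clusters of co-occurring conditions.
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)

# Average condition prevalence within each cluster describes its profile.
for label in range(4):
    profile = X[kmeans.labels_ == label].mean(axis=0)
    print(label, dict(zip(conditions, profile.round(2))))
```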

Using technology to create data links

Technology also has the potential to learn across datasets, for example between birth cohorts and routine healthcare records. In secure data environments, linkage is also possible between electronic health records and other valuable sources, such as educational and census data.

This linkage can help researchers learn without incurring the risk of identifying individuals. "Ultimately, the goal is to learn at what periods in the life course it might be possible to intervene to prevent or delay the onset of multiple long-term conditions," he said.
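One widely used pattern for this kind of privacy-preserving linkage, sketched below purely for illustration (not a description of any specific secure data environment), is to join datasets on salted hashes of an identifier so that analysts work with pseudonyms rather than raw identities.

```python
# Illustrative only: pseudonymous record linkage via salted hashes.
# The identifiers, salt, and column names are hypothetical.
import hashlib
import pandas as pd

def pseudonymise(identifier: str, salt: str = "study-specific-secret") -> str:
    # A salted hash lets records be joined without exposing raw identifiers.
    return hashlib.sha256((salt + identifier).encode()).hexdigest()

health = pd.DataFrame({"nhs_number": ["111", "222"], "n_conditions": [3, 1]})
census = pd.DataFrame({"nhs_number": ["111", "222"], "education": ["degree", "gcse"]})

for df in (health, census):
    df["pid"] = df.pop("nhs_number").map(pseudonymise)

linked = health.merge(census, on="pid")  # joined on pseudonyms, not identities
print(linked)
```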

"There are challenges related to the very different data types we are using that only have some overlapping domains," he said. "The birth cohorts are very rich in these social data and wider determinants, but relatively limited on the long-term condition front. The electronic health data records are rich on long-term conditions but relatively limited on social data."

The challenge for the future, Fraser said, will be finding new ways of looking across both kinds of data sets in ways that are both reliable and secure in order to give a fuller picture of the life course.

AI and Data is from the organisers of the market-leading Digital Health Rewired and Digital Health Summer Schools events and includes a wide-ranging programme of events on two stages: AI and Analytics, and Data and Research.

All sessions are CPD accredited. AI and Data is free for the NHS, public sector, start-ups, charities, education and research. Commercial tickets start from £275 + VAT. Register here.


Data Science Platform Market Size Worth USD 942.76 Billion with Healthy CAGR of 29.00% by 2030 – Benzinga

"The Best Report Benzinga Has Ever Produced"

Massive returns are possible within this market! For a limited time, get access to the Benzinga Insider Report, usually $47/month, for just $0.99! Discover extremely undervalued stock picks before they skyrocket! Time is running out! Act fast and secure your future wealth at this unbelievable discount! Claim Your $0.99 Offer NOW!

Advertorial

Data Bridge Market Research analyses that the data science platform market, which was worth USD 122.94 billion in 2022, will rocket up to USD 942.76 billion by 2030, growing at a CAGR of 29.00% during the forecast period. In addition to market insights such as market value, growth rate, market segments, geographical coverage, market players, and market scenario, the market report curated by the Data Bridge Market Research team includes in-depth expert analysis, import/export analysis, pricing analysis, production consumption analysis, and PESTLE analysis.
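Those headline figures are at least internally consistent; a quick check of the implied compound annual growth rate over the eight-year forecast period:

```python
# Sanity-check the stated CAGR: USD 122.94B (2022) to USD 942.76B (2030).
start, end, years = 122.94, 942.76, 2030 - 2022
cagr = (end / start) ** (1 / years) - 1
print(f"implied CAGR: {cagr:.2%}")                        # ~29.0%
print(f"2030 value at 29%: {start * 1.29 ** years:.2f}")  # ~942.8
```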

Research and analysis of key developments in the market, key competitors, and a comprehensive competitor analysis included in the Data Science Platform report help businesses visualize the bigger picture of the marketplace and its products, which ultimately aids in defining superior business strategies. This market research report is comprehensive and covers various parameters of the market. The report can be used to obtain valuable market insights in a commercial way. The Data Science Platform Market report includes detailed market segmentation, systematic analysis of major market players, trends in consumer and supply chain dynamics, and insights about new geographical markets for this industry.

Get a Sample PDF of Data Science Platform Market Research Report: https://www.databridgemarketresearch.com/request-a-sample/?dbmr=global-data-science-platform-market&Somesh=

Global Data Science Platform Drivers

The volume of data captured by professionals is growing steadily because of the rise of social media, IoT, and other media, and data science platforms now handle a prodigious flow of data in both structured and unstructured forms. Machine-generated and human-generated data are growing roughly 10 times faster than traditional corporate data, with machine data alone growing around 50 times faster. This surge in data offers businesses opportunities to learn new things, driving demand for fresh approaches and playing a crucial role in the growth of the data science platform market.

Heavy investment in research and development has led to rapid technological progress. Modern data-handling coordination and solutions are significant for business growth, and demand for efficiency-enhancing technologies is rising with the increasing number of businesses. Data science platforms are in demand because they make models simpler to train, design, and scale. Technologies such as artificial intelligence, edge computing, and machine learning are still in their growth phase, which helps propel the data science platform market.

Top Leading Key Players of Data Science Platform Market:

Key Opportunities

The high investment in research and development is estimated to generate lucrative opportunities for the market, which will further expand the data science platform market's growth rate in the future. Moreover, the rapid advancements in technologies such as artificial intelligence (AI), machine learning (ML), and internet of things (IoT) further offer numerous growth opportunities within the market.

To Gain More Insights into the Market Analysis, Browse Summary of the Data Science Platform Market Report@ https://www.databridgemarketresearch.com/reports/global-data-science-platform-market?Somesh=

Key Market Segments Covered in Data Science Platform Industry Research

Component Type

Deployment and Integration

Function Division

Deployment Model

Organization Size

End User Application

Data Science Platform Market Country Level Analysis

The countries covered in the Data Science Platform Market report are: the U.S., Canada, and Mexico; China, Japan, India, South Korea, Australia, Singapore, Malaysia, Thailand, Indonesia, the Philippines, and the rest of Asia-Pacific; the U.K., Germany, France, Italy, Spain, Russia, the Netherlands, Switzerland, Turkey, Belgium, and the rest of Europe; South Africa, Egypt, the U.A.E., Saudi Arabia, Israel, and the rest of the Middle East and Africa; and Brazil, Argentina, and the rest of South America.

The region section of the report also provides individual market-impacting factors and changes in market regulation that affect the current and future trends of the market. Data points like downstream and upstream value chain analysis, technical trends, Porter's Five Forces analysis, and case studies are some of the pointers used to forecast the market scenario for individual countries. The presence and availability of global brands, the challenges they face due to heavy or scarce competition from local and domestic brands, the impact of domestic tariffs, and trade routes are also considered while providing forecast analysis of the region data.

New Business Strategies, Challenges & Policies are mentioned in Table of Content, Request TOC: https://www.databridgemarketresearch.com/toc/?dbmr=global-data-science-platform-market&Somesh=

Browse Related Reports:

https://www.databridgemarketresearch.com/reports/global-natural-language-processing-nlp-healthcare-life-sciences-market

https://www.databridgemarketresearch.com/reports/global-multi-cuvette-spectrophotometer-for-life-science-market

https://www.databridgemarketresearch.com/reports/global-multi-cuvette-spectrophotometer-for-forensic-science-market

https://www.databridgemarketresearch.com/reports/global-aerospace-and-life-sciences-tic-market

About Data Bridge Market Research, Private Ltd

Data Bridge Market Research Pvt Ltd is a multinational management consulting firm with offices in India and Canada. An innovative and neoteric market analysis and advisory company with unmatched durability and advanced approaches, we are committed to uncovering the best consumer prospects and fostering useful knowledge to help your company succeed in the market.

Data Bridge Market Research is a result of sheer wisdom and practice that was conceived and built in Pune in the year 2015. The company came into existence from the healthcare department with far fewer employees, intending to cover the whole market while providing best-in-class analysis. Later, the company widened its departments and expanded its reach by opening a new office in Gurugram in 2018, where a team of highly qualified personnel joined hands for the growth of the company. "Even in the tough times of COVID-19 where the Virus slowed down everything around the world, the dedicated Team of Data Bridge Market Research worked round the clock to provide quality and support to our client base, which also tells about the excellence in our sleeve."

Data Bridge Market Research has over 500 analysts working in different industries. We have catered to more than 40% of the Fortune 500 companies globally and have a network of more than 5,000 clients around the globe.

Contact Us

US: +1 888 387 2818
UK: +44 208 089 1725
Hong Kong: +852 8192 7475
Email: corporatesales@databridgemarketresearch.com


AI and science: what 1,600 researchers think – Nature.com


Artificial-intelligence (AI) tools are becoming increasingly common in science, and many scientists anticipate that they will soon be central to the practice of research, suggests a Nature survey of more than 1,600 researchers around the world.


When respondents were asked how useful they thought AI tools would become for their fields in the next decade, more than half expected the tools to be very important or essential. But scientists also expressed strong concerns about how AI is transforming the way that research is done.

The share of research papers that mention AI terms has risen in every field over the past decade, according to an analysis for this article by Nature.

Machine-learning statistical techniques are now well established, and the past few years have seen rapid advances in generative AI, including large language models (LLMs), that can produce fluent outputs such as text, images and code on the basis of the patterns in their training data. Scientists have been using these models to help summarize and write research papers, brainstorm ideas and write code, and some have been testing out generative AI to help produce new protein structures, improve weather forecasts and suggest medical diagnoses, among many other ideas.


With so much excitement about the expanding abilities of AI systems, Nature polled researchers about their views on the rise of AI in science, including both machine-learning and generative AI tools.

Focusing first on machine learning, researchers picked out many ways that AI tools help them in their work. From a list of possible advantages, two-thirds noted that AI provides faster ways to process data, 58% said that it speeds up computations that were not previously feasible, and 55% mentioned that it saves scientists time and money.

"AI has enabled me to make progress in answering biological questions where progress was previously infeasible," said Irene Kaplow, a computational biologist at Duke University in Durham, North Carolina.

The survey results also revealed widespread concerns about the impacts of AI on science. From a list of possible negative impacts, 69% of the researchers said that AI tools can lead to more reliance on pattern recognition without understanding, 58% said that results can entrench bias or discrimination in data, 55% thought that the tools could make fraud easier and 53% noted that ill-considered use can lead to irreproducible research.

"The main problem is that AI is challenging our existing standards for proof and truth," said Jeffrey Chuang, who studies image analysis of cancer at the Jackson Laboratory in Farmington, Connecticut.

To assess the views of active researchers, Nature e-mailed more than 40,000 scientists who had published papers in the last 4 months of 2022, as well as inviting readers of the Nature Briefing to take the survey. Because researchers interested in AI were much more likely to respond to the invitation, the results aren't representative of all scientists. However, the respondents fell into 3 groups: 48% who directly developed or studied AI themselves, 30% who had used AI for their research, and the remaining 22% who did not use AI in their science. (These categories were more useful for probing different responses than were respondents' research fields, genders or geographical regions; see Supplementary information for full methodology.)

Among those who used AI in their research, more than one-quarter felt that AI tools would become essential to their field in the next decade, compared with 4% who thought the tools essential now, and another 47% felt AI would be very useful. (Those whose research field was already AI were not asked this question.) Researchers who don't use AI were, unsurprisingly, less excited. Even so, 9% felt these techniques would become essential in the next decade, and another 34% said they would be very useful.

The chatbot ChatGPT and its LLM cousins were the tools that researchers mentioned most often when asked to type in the most impressive or useful example of AI tools in science (closely followed by protein-folding AI tools, such as AlphaFold, that create 3D models of proteins from amino-acid sequences). But ChatGPT also topped researchers' choice of the most concerning uses of AI in science. When asked to select from a list of possible negative impacts of generative AI, 68% of researchers worried about proliferating misinformation, another 68% thought that it would make plagiarism easier and detection harder, and 66% were worried about bringing mistakes or inaccuracies into research papers.

Respondents added that they were worried about faked studies, false information and perpetuating bias if AI tools for medical diagnostics were trained on historically biased data. Scientists have seen evidence of this: a team in the United States reported, for instance, that when they asked the LLM GPT-4 to suggest diagnoses and treatments for a series of clinical case studies, the answers varied depending on the patient's race or gender (T. Zack et al. Preprint at medRxiv https://doi.org/ktdz; 2023), probably reflecting the text that the chatbot was trained on.

"There is clearly misuse of large language models, inaccuracy and hollow but professional-sounding results that lack creativity," said Isabella Degen, a software engineer and former entrepreneur who is now studying for a PhD in using AI in medicine at the University of Bristol, UK. "In my opinion, we don't understand well where the border between good use and misuse is."

The clearest benefit, researchers thought, was that LLMs aided researchers whose first language is not English, by helping to improve the grammar and style of their research papers, or to summarize or translate other work. "A small number of malicious players notwithstanding, the academic community can demonstrate how to use these tools for good," said Kedar Hippalgaonkar, a materials scientist at the National University of Singapore.

Researchers who regularly use LLMs at work are still in a minority, even among the interested group who took Nature's survey. Some 28% of those who studied AI said they used generative AI products such as LLMs every day or more than once a week, 13% of those who only use AI said they did, and just 1% among others, although many had at least tried the tools.

Moreover, the most popular use among all groups was for creative fun unrelated to research (one respondent used ChatGPT to suggest recipes); a smaller share used the tools to write code, brainstorm research ideas and to help write research papers.

Some scientists were unimpressed by the output of LLMs. "It feels ChatGPT has copied all the bad writing habits of humans: using a lot of words to say very little," wrote one researcher who uses the LLM to help copy-edit papers. Although some were excited by the potential of LLMs for summarizing data into narratives, others had a negative reaction. "If we use AI to read and write articles, science will soon move from 'for humans by humans' to 'for machines by machines'," wrote Johannes Niskanen, a physicist at the University of Turku in Finland.

Around half of the scientists in the survey said that there were barriers preventing them from developing or using AI as much as they would like, but the obstacles seem to be different for different groups. The researchers who directly studied AI were most concerned about a lack of computing resources, funding for their work and high-quality data to run AI on. Those who work in other fields but use AI in their research tended to be more worried by a lack of skilled scientists and training resources, and they also mentioned security and privacy considerations. Researchers who didn't use AI generally said that they didn't need it or find it useful, or that they lacked the experience or time to investigate it.

Another theme that emerged from the survey was that commercial firms dominate computing resources for AI and ownership of AI tools, and this was a concern for some respondents. Of the scientists in the survey who studied AI, 23% said they collaborated with or worked at firms developing these tools (with Google and Microsoft the most often named), whereas 7% of those who used AI did so. Overall, slightly more than half of those surveyed felt it was very or somewhat important that researchers using AI collaborate with scientists at such firms.

"The principles of LLMs can be usefully applied to build similar models in bioinformatics and cheminformatics," says Garrett Morris, a chemist at the University of Oxford, UK, who works on software for drug discovery, but it's clear that the models must be extremely large. "Only a very small number of entities on the planet have the capabilities to train the very large models, which require large numbers of GPUs [graphics processing units], the ability to run them for months, and to pay the electricity bill. That constraint is limiting science's ability to make these kinds of discoveries," he says.

Researchers have repeatedly warned that the naive use of AI tools in science can lead to mistakes, false positives and irreproducible findings, potentially wasting time and effort. And in the survey, some scientists said they were concerned about poor-quality research in papers that used AI. "Machine learning can sometimes be useful, but AI is causing more damage than it helps. It leads to false discoveries due to scientists using AI without knowing what they are doing," said Lior Shamir, a computer scientist at Kansas State University in Manhattan.

When asked if journal editors and peer reviewers could adequately review papers that used AI, respondents were split. Among the scientists who used AI for their work but didn't directly develop it, around half said they didn't know, one-quarter thought reviews were adequate, and one-quarter thought they were not. Those who developed AI directly tended to have a more positive opinion of the editorial and review processes.

"Reviewers seem to lack the required skills and I see many papers that make basic mistakes in methodology, or lack even basic information to be able to reproduce the results," says Duncan Watson-Parris, an atmospheric physicist who uses machine learning at the Scripps Institution of Oceanography in San Diego, California. The key, he says, is whether journal editors are able to find referees with enough expertise to review the studies.

That can be difficult to do, according to one Japanese respondent who worked in earth sciences but didn't want to be named. "As an editor, it's very hard to find reviewers who are familiar both with machine-learning (ML) methods and with the science that ML is applied to," he wrote.

Nature also asked respondents how concerned they were by seven potential impacts of AI on society which have been widely discussed in the news. The potential for AI to be used to spread misinformation was the most worrying prospect for the researchers, with two-thirds saying they were extremely or very concerned by it. Automated AI weapons and AI-assisted surveillance were also high up on the list. The least concerning impact was the idea that AI might be an existential threat to humanity although almost one-fifth of respondents still said they were extremely or very concerned by this prospect.

Many researchers, however, said AI and LLMs were here to stay. "AI is transformative," wrote Yury Popov, a specialist in liver disease at the Beth Israel Deaconess Medical Center in Boston, Massachusetts. "We have to focus now on how to make sure it brings more benefit than issues."


Bryant welcomes 15 new faculty members to the university community – Bryant University

This fall, fifteen new faculty members joined Bryant's community of scholars. The group comprises dedicated educators who are also accomplished researchers and industry leaders. Their acumen ranges from using data science to combat societal problems to reshaping how we see ourselves and society.

"We are delighted to welcome new faculty members to the Bryant community who bring a wealth of expertise, experience, and perspectives that I am quite certain will enrich our learning environment and inspire our students," says Provost and Chief Academic Officer Rupendra Paliwal, Ph.D. "I am impressed by our new faculty's dedication to teaching and the pursuit of knowledge. I know they will create a transformative experience for our students, and I am very much looking forward to our shared academic journey together."

These new faculty members, Paliwal notes, join Bryant at a time of growth and innovation. The university's Vision 2030 Strategic Plan is ushering Bryant into a new era while supporting the university's transformational learning experiences, exceptional outcomes, and mission to develop passionate, purpose-driven leaders. Cutting-edge academic programs launching this fall include Exercise and Movement Science, Arts and Creative Industries, and Healthcare Analytics, among others, alongside a new general education curriculum organized around the United Nations Sustainable Development Goals.

The following faculty members have joined Bryant's College of Business, College of Arts and Sciences, and School of Health and Behavioral Sciences:

Barbara Byers, Lecturer of History, Literature, and the Arts, received her Ph.D. from the University of California. An accomplished scholar, performer, and manager, Byers is trained in vocal performance, composition, dance/physical theater, Oud, and several instruments, including the piano and guitar. Her dissertation, Helwalker, took the form of an experimental folk opera audio drama exploring nature, decay, and renewal in the context of a hero's journey narrative structure. She received her M.A. from the University of California and her B.A. from Bates College.

Kristen Falso-Capaldi, Lecturer of History, Literature, and the Arts, is a writer, educator, filmmaker, and artist. She has previously taught at New England Institute of Technology and the University of Rhode Island, in the Cranston public school system, and as part of Bryant's Writing Workshop course. Falso-Capaldi is the RI State Chair for Women in Film and Video of New England and was a speaker at this spring's TEDxBryantU 2023. She received her B.A. in English and Communication Studies from the University of Rhode Island and her MAT in English and Education from Rhode Island College.

Geri Louise Dimas, Assistant Professor of Information Systems and Analytics, received her Ph.D. in Data Science from Worcester Polytechnic Institute and teaches both undergraduate and graduate data science courses. The Co-Director of the Institute for the Qualitative Study of Inclusion, Diversity, and Equity's (QSIDE) Stopping Trafficking and Modern-day Slavery Project (STAMP) Lab, her research focuses on applications of applied analytics and data science at the intersection of societal issues such as immigration, anti-human trafficking, and homelessness. She received her B.A. from Roosevelt University, and her M.S. from Bowling Green State University.

Amanda Fontaine, Assistant Professor of Politics, Law, and Society, holds a Ph.D. from the University of New Hampshire. Her dissertation examined the interplay of personality, social support, and sociodemographics on college students' mental health. Her current work involves the dissemination of peer-reviewed and evidence-based suicide prevention research/practices. Fontaine also received a Cognate in College Teaching, an M.A. in Sociology, and an M.A. in Music Studies from the University of New Hampshire, as well as a B.A. in Psychology and Music Theory/Composition from Clark University.

Mary Ann Gallo, Lecturer of Communication and Language Studies, has taught at New England Institute of Technology, Community College of Rhode Island, the University of Rhode Island, Johnson and Wales University, Nichols College, and Roger Williams University. She has also taught several Communications courses at Bryant, including Intro to Communication, Public Speaking, Public Relations, and Interpersonal Communication, and has also worked as a freelance writer, information and public relations specialist, and news reporter/anchor at different points during her career. Gallo received both her B.A. and her M.S. from Northeastern University.

Yuan Guo, Visiting Assistant Professor of Information Systems and Analytics, received his Ph.D. from Northeastern University. His industry experience includes serving as a machine learning engineer for LZ Finance and as a senior software engineer at Baidu Research USA, a research and development center for Baidu, China's largest search engine provider. Guo's research has been published and presented in a number of different publications and forums. He received his M.A. in electrical engineering from Tsinghua University and his B.Sc. in electronics engineering from Huazhong University of Science and Technology.

Eun Kang, Associate Professor of Marketing, received her Ph.D. from the University of Texas at Austin. She has previously taught at Kutztown University of Pennsylvania and the University of Texas at Austin. Kang's research interests include digital marketing, influencer marketing, consumer psychology and behavior, and sustainability and ethical consumption, and her published work ranges from the motivations for binge watching to how advertising has affected alcohol sales. Kang received an M.A. and B.A. from Michigan State University and two B.S. degrees from Kyung Hee University.

Carrie Kell, Lecturer of History, Literature, and the Arts, received her Ed.D. in Learning, Design, and Technology from the University of Wyoming. Her teaching philosophy is founded on student-centered teaching strategies, and she has taught at the University of Toledo and the University of Rhode Island, among other schools. She has also served in a range of capacities at the Rhode Island School of Design, Middlebridge School, and several other learning institutions. Kell earned her M.Ed. from Northwestern University, and an M.A., B.E., and B.A. from the University of Toledo.

David Liao, Lecturer of History, Literature, and the Arts, earned his Ph.D. from Brown University. His research and teaching interests include multiethnic U.S. literatures, comparative race and ethnic studies, twentieth and twenty-first century U.S. literature and culture, histories of dictatorship and authoritarianism, twentieth century discourses of memory, and genre fiction. Liao's scholarly work includes examinations of the role the film Scarface has played in the evolution of hip-hop, the Godfather trilogy, and the fiction of John le Carré. He received his B.A. from the State University of New York at Binghamton.

Melanie Maimon, Assistant Professor of Psychology, earned her Ph.D. in Social Psychology from Rutgers University-New Brunswick. Her research examines the experiences and consequences of stigmatization and explores methods to improve the inclusion and belonging of people with minoritized identities across social environments. While completing her doctoral degree, Maimon worked with the TA Project at Rutgers University, leading inclusive teaching workshops and courses on teaching in higher education. Maimon earned her M.S. from Rutgers University-New Brunswick and her B.S. from the University of Massachusetts-Amherst.

Taylor Maroney, Lecturer of History, Literature, and the Arts, received their MFA in painting from the University of Massachusetts Dartmouth. They have been teaching nationally and internationally since 2010 in various environments and with diverse student demographics, from South Africa to San Francisco to rural North Dakota. Their research focuses on race and gender constructs within the United States, and their work has been featured in multiple publications and exhibitions. Maroney also received their M.A. from the University of Massachusetts Dartmouth and their BFA from the University of New Hampshire.

Eric Paul, Lecturer of History, Literature, and the Arts, received his MFA from Fairleigh Dickinson University. He has previously taught at Johnson and Wales and Dean College, as well as teaching various Bryant University courses. Paul's work has been published in a variety of publications, as well as in several collections of his poems (2019's A Suitcase Full of Dirt being the most recent), an audiobook, and a vinyl spoken word release. He has also served as the poetry editor for the three most recent volumes of the university's Bryant Literary Review. Paul received his B.A. from Rhode Island College.

Nafees Qamar, Associate Professor and Healthcare Informatics Director, received his Ph.D. from the University of Grenoble. With a comprehensive background spanning health informatics, applied computer science, and software security, Qamar is dedicated to bridging the gap between healthcare and technology to enhance patient care and data management. Over the course of his career, Qamar has held distinguished positions at a variety of institutions, including the State University of New York and Vanderbilt University, which has aided him in nurturing a multidisciplinary perspective. Most recently, he served as an associate professor at Governors State University in Chicago.

Jerrica Rowlett, Assistant Professor of Communication and Language Studies, received her Ph.D. from Florida State University. Rowlett's research explores the intersection of a range of topics, including gender, politics, pop culture, and social media, and her dissertation explored the role Snapchat Live Stories has played in the collective identity and action of offline communities. She has previously taught as an assistant professor of Communication and Media Studies at Georgetown College. Rowlett received her M.A. from Clemson University and a B.A. from Georgetown College.

Jason Sawyer, Associate Professor and Exercise and Movement Science Program Coordinator, received his Ph.D. from Springfield College. He has previously taught at numerous institutions, most recently Rhode Island College. An accomplished scholar and presenter, Sawyer's research interests have recently focused on the effects of exercise on depression in college-aged individuals. He has previously served as the Rhode Island state representative for the National Strength and Conditioning Association, and his coaching experience includes strength and conditioning, Olympic weightlifting, basketball, and martial arts. He received his B.S. from Plymouth State University and his M.S. from Springfield College.


Fish-ial recognition software aims to protect trout – EurekAlert

New research focused on brook trout is using artificial intelligence to identify individual fish, with the goal of building population models that track trout health and habitat changes.

This groundbreaking use of AI, a collaboration between data scientists at the University of Virginia and the U.S. Geological Survey, will create a more efficient and accurate way to track trout by using fish-ial recognition software.

Researchers are classifying fish in both controlled and natural environments in West Virginia and Massachusetts, building a unique database that has the potential to save the taxpayer millions of dollars and advance protective measures for trout and streams. They hope to engage anglers as boots-on-the-ground citizen scientists to assist with the project, creating an interactive application where fishermen can upload images of fish and participate in protecting the health of brook trout and preserving their natural environment.

Fish biologists have been studying climate change and conservation for decades, and tracking fish is not new. Previously, however, scientists have had to use markers or injections to identify individual fish, methods that are invasive, require minor surgery, and do not work on small fish. "The new frontier is individual recognition using AI technology," said Nathaniel Hitt, a research fish biologist with the U.S. Geological Survey.

The project originated during work at Shenandoah National Park by researchers from the U.S. Geological Survey's Ecological Science Center in West Virginia. "We were using video sampling in stream pools to estimate the abundance of brook trout. We would take underwater video and have human observers count fish," said Hitt. "We actually crowdsourced this to schools across the nation."

The success of the crowdsourcing got the fish biologists thinking about how they could automate the process. With the rise of AI and computer science applications like facial recognition software, they thought, why not apply it to fish? Brook trout have unique identifying markings, making them the perfect fish species to test this theory.

Brook trout are unique in that they are the only native trout of Appalachia and have been around for millions of years. Anglers for generations have come to love the fish and are invested in protecting its future. Brook trout have ecological importance as well, according to Hitt: "They're the canary in the coal mine for climate change."

Ben Letcher, a research ecologist at the Conte Research Laboratory in Turners Falls, Mass., who is partnering on the project, explains: "Each state in New England has cold-water criteria, and some states use the presence of a brook trout to identify a cold-water stream. Cold-water streams get special protections, so knowing where the trout are now and where they will be in the future is important for land protection and conservation."

To build a database of images large enough to be useful for prediction models, the researchers are capturing fish images in both controlled fisheries in West Virginia and in the wild streams of western Massachusetts, using different methods while working toward the same goal.

In Massachusetts, the team uses an electrofisher backpack to collect fish. They then place the caught fish in a bucket, anesthetize a few at a time, and then take measurements and photographs before releasing them back into the stream from which they came. In West Virginia researchers have used GoPro cameras to collect images of fish while they swim in tanks. The team then uses anesthetics to capture measurements and take additional photographs.

All of those images are then shared with data scientists at the University of Virginia who feed them into an image processing pipeline that identifies individual fish features. The team, led by Sheng Li, an assistant professor of data science at UVA, then trains the model to improve image recognition.

"It's quite challenging," said Li. "You see a large variation in fish appearance such as body size and other changes over time. We have had to develop multiple AI methods to improve the recognition of each individual fish."
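As a rough illustration of the general approach (this is not the UVA/USGS team's actual model), a re-identification pipeline can embed each photo with a pretrained convolutional network and match new captures to known individuals by embedding similarity; the file names below are hypothetical.

```python
# Illustrative re-identification sketch: embed photos with a pretrained CNN,
# then match a new capture to known individuals by cosine similarity.
import torch
import torchvision.models as models
from PIL import Image

weights = models.ResNet18_Weights.DEFAULT
backbone = models.resnet18(weights=weights)
backbone.fc = torch.nn.Identity()  # drop the classifier head; keep 512-d features
backbone.eval()

preprocess = weights.transforms()

def embed(path: str) -> torch.Tensor:
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return torch.nn.functional.normalize(backbone(img), dim=1)

# Hypothetical gallery of known individuals and one new capture photo.
gallery = {fish_id: embed(f"{fish_id}.jpg") for fish_id in ["trout_001", "trout_002"]}
query = embed("new_capture.jpg")
scores = {fid: float(query @ emb.T) for fid, emb in gallery.items()}
print(max(scores, key=scores.get), scores)
```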

The data scientists rely heavily on the images provided by the on-the-ground fish biologists and ecologists wading into streams, catching fish, and carefully and categorically photographing them.

Everyone involved credits the project's success to interdisciplinary collaboration. Fish experts like Hitt and Letcher work with computer and data scientists like Li and others to use new techniques to solve old, persistent problems.

Hitt believes the tools they are developing using AI could have applications toward research on any animal with spots. "We envision this transforming fish biology globally," he said.

But the challenge of amassing a large enough and current database of images remains, which is why the team hopes to appeal to citizen scientists and anglers to be active participants.

By letting anglers use their phones to capture and upload photos of caught fish, a future interactive database has the potential to identify a specific fish, trace its tracking history, and feed up-to-date information in real time. The U.S. Geological Survey is working with fishing expedition companies to test out this new method of collecting data.

Letcher predicts a phone application could be created where an angler takes a photo of a caught fish; uploads it to an open, shared database; and learns its exact identification and history. "This could be very valuable for collecting scientific information but also to engage anglers in new ways," he said.

"Using images, we can create individual fish ID and could monitor population trajectories," said Hitt, "but this also changes the relationship between anglers and these natural resources. It fosters a deeper sense of stewardship and connection to the streams and rivers."

When speaking of brook trout, Letcher and his colleagues become almost reverent. "You're taking something so ancient, so deeply rooted in the evolution of our planet, and developing a new appreciation and respect for it."



Giving students the computational chops to tackle 21st-century … – MIT News

Graduate student Nikasha Patel '22 is using artificial intelligence to build a computational model of how infants learn to walk, which could help robots acquire motor skills in a similar fashion.

Her research, which sits at the intersection of reinforcement learning and motor learning, uses tools and techniques from computer science to study the brain and human cognition.

It's an area of research she wasn't aware of before she arrived at MIT in the fall of 2018, and one Patel likely wouldn't have considered if she hadn't enrolled in a newly launched blended major, Course 6-9: Computation and Cognition, the following spring.

Patel was drawn to the flexibility offered by Course 6-9, which enabled her to take a variety of courses from the brain and cognitive sciences major (Course 9) and the computer science major (Course 6). For instance, she took a class on neural computation and a class on algorithms at the same time, which helped her better understand some of the computational approaches to brain science she is currently using in her research.

After earning her undergraduate degree last spring, Patel enrolled in the 6-9 master's program and is now pursuing a PhD in computation and cognition. While a PhD wasn't initially on her radar, the blended major opened her eyes to unique opportunities in cross-disciplinary research. In the future, she hopes to study motor control and the computational building blocks that our brains use for movement.

"Looking back on my experience at MIT, being in Course 6-9 really led me up to this moment. You can't just think of the world through one lens. You need to have both perspectives so you can tackle these complex problems together," she says.

Blending disciplines

The Department of Brain and Cognitive Sciences' Course 6-9 is one of four blended majors available through the MIT Schwarzman College of Computing. Each of the majors is offered jointly by the Department of Electrical Engineering and Computer Science and a different MIT department. Course 6-7, Computer Science and Molecular Biology, is offered with the Department of Biology; Course 6-14, Computer Science, Economics, and Data Science, is offered with the Department of Economics; and Course 11-6, Urban Science and Planning with Computer Science, is offered with the Department of Urban Studies and Planning.

Each major is designed to give students a solid grounding in computational fundamentals, such as coding, algorithms, and ethical AI, while equipping them to tackle hard problems in different fields like neurobiology, economics, or urban design, using tools and insights from the realm of computer science.

The four majors, all launched between 2017 and 2019, have grown rapidly and now encompass about 360 undergraduates, or roughly 8 percent of MIT's total undergraduate enrollment.

With so much focus on generative AI and machine learning in many disciplines, even those not traditionally associated with computer science, it is no surprise to associate professor Mehrdad Jazayeri that blended majors, and Course 6-9 in particular, have grown so rapidly. Course 6-9 launched with 40 students and has since quadrupled its enrollment.

"Many students who come to MIT are enamored with machine-learning tools and techniques, so the opportunity to utilize those skills in a field like neurobiology is a great opportunity for students with varied interests," says Jazayeri, who is also director of education for the Department of Brain and Cognitive Sciences and an investigator at the McGovern Institute for Brain Research.

"It is pretty clear that new developments and insights in industry and technology will be heavily dependent on computational power. Fields related to the human mind are no different from that, from the study of neurodegenerative diseases, to research into child development, to understanding how marketing affects the human psyche," he says.

Computation to improve medicine

Using the power of computer science to make an impact in biological research inspired senior Charvi Sharma to major in Course 6-7.

Though she was interested in medicine from a young age, it wasn't until she came to MIT that she began to explore the role computation could play in medical care.

Coming to college with interests in both computer science and biology, Sharma considered a double major; however, she soon realized that what really interested her was the intersection of the two disciplines, and Course 6-7 was a perfect fit.

Sharma, who is planning to attend medical school, sees computer science and medicine dovetail through her work as an undergraduate researcher at MIT's Koch Institute for Cancer Research. She and her fellow researchers seek to understand how signaling pathways contribute to a cell's ability to escape from cell cycle arrest, or the inability of a cell to continue dividing, after DNA damage. Their work could ultimately lead to improved cancer treatments.

The data science and analysis skills she has honed through computer science courses help her understand and interpret the results of her research. She expects those same skills will prove useful in her future career as a physician.

"A lot of the tools used in medicine do require some knowledge of technology. But more so than the technical skills that I've learned through my computer science foundation, I think the computational mindset (the problem solving and pattern recognition) will be incredibly helpful in treatment and diagnosis as a physician," she says.

AI for better cities

While biology and medicine are areas where machine learning is playing an increasing role, urban planning is another field that is rapidly becoming dependent on big data and the use of AI.

Interested in learning how computation could enhance urban planning, senior Kwesi Afrifa decided to apply to MIT after reading about the blended major Course 11-6, urban sciences and planning with computer science.

His experiences growing up in the Ghanaian capital of Accra, situated in the midst of a rapidly growing and sprawling metro area of about 5.5 million people, convinced Afrifa that data can be used to shape urban environments in a way that would make them more livable for residents.

The combination of fundamentals from Course 6, like software engineering and data science, with important concepts from urban planning, such as equity and environmental management, has helped him understand the importance of working with communities to create AI-driven software tools in an ethical manner for responsible development.

"We can't just be the smart engineers from MIT who come in and tell people what to do. Instead, we need to understand that communities have knowledge about the issues they face, and tools from tech and planning are a way to enhance their development in their own way," he says.

As an undergraduate researcher, Afrifa has been working on tools for pedestrian impact analysis, which has shown him how ideas from planning, such as spatial analysis and mapping, and software engineering techniques from computer science can build off one another.

Ultimately, he hopes the software tools he creates enable planners, policymakers, and community members to make faster progress at reshaping neighborhoods, towns, and cities so they meet the needs of the people who live and work there.


Duality Technologies Joins AWS Partner Network and Launches … – PR Newswire

Duality leverages modern PETs and privacy-preserving AI to deliver faster and more secure data collaboration for healthcare, financial services, government, and more.

HOBOKEN, N.J., Sept. 28, 2023 /PRNewswire/ -- Duality Technologies, the leader in secure data collaboration for enterprises and government agencies, today announced it has joined the Amazon Web Services (AWS) Partner Network (APN) and launched its secure data collaboration platform in AWS Marketplace, a digital catalog with thousands of software listings from independent software vendors that make it easy to find, test, buy, and deploy software that runs on AWS.

Duality protects the intellectual property of AI/ML models and the security and privacy of the data used for training.

The APN is a global community of AWS Partners that leverage programs, expertise, and resources to build, market, and sell customer offerings in diverse global markets. Duality Technologies underwent the comprehensive AWS Foundational Technical Review (FTR) to certify the enterprise readiness of its platform. As an APN member, Duality allows AWS users to securely collaborate on data without requiring direct access to the raw data, supporting privacy regulations and unlocking additional data sources not previously permitted. Duality is also being used to train models while protecting the intellectual property (IP) of the artificial intelligence and machine learning (AI/ML) models and maintaining the security and privacy of the protected health or personally identifiable information (PII/PHI) used for training and model personalization.

"Duality's inclusion in the APN and its availability in AWS Marketplace means AWS customers can more easily collaborate on data science projects utilizing sensitive and regulated data across their business ecosystem from a single location within AWS," said VP of Product, Adi Hirschstein, Duality Technologies. "This adds privacy and security guardrails required by various regulated industries and organizations to leverage AWS services like AWS Nitro Enclaves and Amazon SageMaker. Not only that, but AWS customers will find that by making it easier to work with sensitive, these integrations will accelerate data-driven innovations and growth strategies."

Duality's enterprise-ready secure data collaboration platform operationalizes Privacy-Enhancing Technologies (PETs) to empower users to unleash the full value of collaborative data science and AI while minimizing risk. Organizations can securely share, analyze, and enrich sensitive data to gain business value while raw data remains encrypted throughout the entire data science lifecycle, minimizing the risk of exposure and ensuring compliance with data protection and industry regulations. Duality's uniquely secure solution is made possible via leading-edge cryptographic and security technologies.
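Homomorphic encryption is one of the core PETs in this class of platform. The sketch below uses the open-source TenSEAL library rather than Duality's product, purely to illustrate the basic idea of computing on data that never leaves its encrypted form.

```python
# Illustrative only: a homomorphic-encryption round trip with TenSEAL (CKKS).
# This is not Duality's platform, just the class of technique it builds on.
import tenseal as ts

context = ts.context(
    ts.SCHEME_TYPE.CKKS,
    poly_modulus_degree=8192,
    coeff_mod_bit_sizes=[60, 40, 40, 60],
)
context.global_scale = 2**40
context.generate_galois_keys()

# Encrypt a sensitive feature vector; collaborators never see the plaintext.
enc = ts.ckks_vector(context, [0.5, 1.5, 2.5])

# Apply simple model weights directly on the ciphertext.
weights = [0.2, 0.3, 0.5]
enc_score = enc.dot(weights)

print(enc_score.decrypt())  # ~[1.8]; only the secret-key holder can decrypt
```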

Duality has integrated with both Amazon SageMaker and AWS Nitro Enclaves to enable seamless integration with AWS services. The integration with AWS Nitro Enclaves expands the privacy-enhancing capabilities of Duality's platform, allowing organizations to collaborate on any data type with any type of model. The integration with Amazon SageMaker now allows companies to benefit from AWS model outputs using data that would otherwise be off-limits due to IP/PII/PHI in the data set.

"As an unequivocal global leader in making privacy technology real and practical, we're thrilled to bring the power of secure data collaboration to AWS. Combining Duality and AWS allows data-first organizations to securely apply data science and machine learning on sensitive data, further breaking down silos that exist within and between organizations," said Prof.Kurt Rohloff, chief technical officer and co-founder of Duality.

AWS customers can utilize Duality's secure collaboration solution today through AWS Marketplace.

As an APN member, Duality joins a global network of 100,000 AWS Partners from more than 150 countries working with AWS to provide innovative solutions, solve technical challenges, and deliver value to mutual customers.

About Duality Technologies

Duality is the leader in privacy-enhanced secure data collaboration, empowering organizations worldwide to maximize the value of their data without compromising on privacy or regulatory compliance. Founded and led by world-renowned cryptographers and data scientists, Duality operationalizes privacy enhancing technologies (PETs) to accelerate data insights by enabling analysis and AI on sensitive data while protecting data privacy, compliance, and valuable IP. A World Economic Forum (WEF) Tech Pioneer and a Gartner Cool Vendor, Duality is recognized by numerous industry awards, including the Fast Company 2023 World Changing Ideas award, the 2023 CyberTech 100 Most Innovative Companies list, 2022 CB Insights' AI 100, the 2022 RegTech 100 Awards, and the AIFinTech100 2022 Awards. Learn more.

CONTACT: Derek Wood, +1 917-310-1175

SOURCE Duality Technologies, Inc.

More:

Duality Technologies Joins AWS Partner Network and Launches ... - PR Newswire

Deploying Your Machine Learning Model to Production in the Cloud – KDnuggets

AWS, or Amazon Web Services, is a cloud computing platform that many businesses use for storage, analytics, applications, deployment services, and much more. It's a platform that offers a wide range of services to support businesses in a serverless way, with pay-as-you-go pricing.

Machine learning is among the activities AWS supports: several of its services cover the modeling workflow, from developing a model to putting it into production. AWS has shown versatility here, which is essential for any business that needs scalability and speed.

This article will walk through deploying a machine learning model to production in the AWS cloud. How do we do that? Let's explore further.

Before you start this tutorial, you need to create an AWS account, as we will need it to access all the AWS services. I assume the reader will use the free tier to follow this article. I also assume the reader already knows the Python programming language and has basic knowledge of machine learning. We will focus on the model deployment part and will not concentrate on other aspects of data science, such as data preprocessing and model evaluation.

With that in mind, let's start our journey of deploying a machine learning model on AWS cloud services.

In this tutorial, we will develop a machine learning model to predict churn from the given data. The training dataset comes from Kaggle, which you can download here.

After we have acquired the dataset, we will create an S3 bucket to store it. Search for S3 among the AWS services and create the bucket.

In this article, I named the bucket telecom-churn-dataset and located it in the Singapore region. You can change these if you want, but let's go with this setup for now.

After you have finished creating the bucket and uploading the data into it, we will go to the AWS SageMaker service. There, we will use Studio as our working environment. If you have never used Studio, let's create a domain and user before proceeding further.

First, choose Domains within the Amazon SageMaker Admin configurations.

In the Domains screen, you will see several buttons. Select the Create domain button.

Choose the quick setup if you want to speed up the creation process. After it's finished, you should see the new domain in the dashboard. Select the domain you just created and then click the Add user button.

Next, name the user profile according to your preferences. For the execution role, you can leave the default for now, as it's the one that was created during the domain creation process.

Just click Next until you reach the Canvas settings. In this section, I turn off several settings that we don't need, such as Time Series Forecasting.

After everything is set, go to the Studio selection and select the Open studio button with the user name you just created.

Inside Studio, navigate to the sidebar with the folder icon and create a new notebook there. We can leave the notebook settings at their defaults.

With the new notebook, we will create a churn prediction model and deploy it as an API endpoint for inference that we can use in production.

First, let's import the necessary packages and read the churn data.
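Since the original code blocks did not survive extraction, here is a minimal sketch of this step. It assumes the telecom-churn-dataset bucket from earlier; the object key churn.csv is a placeholder for whatever you named the uploaded file.

```python
import boto3
import pandas as pd
import sagemaker
from sagemaker import get_execution_role

# Session and execution role that come with the Studio environment
sagemaker_session = sagemaker.Session()
role = get_execution_role()

# Read the churn dataset from the bucket created earlier;
# the key "churn.csv" is a placeholder for your uploaded file name
bucket = "telecom-churn-dataset"
obj = boto3.client("s3").get_object(Bucket=bucket, Key="churn.csv")
df = pd.read_csv(obj["Body"])
```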

Next, we split the data into training and testing sets with the following code.
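A minimal sketch of the split, assuming scikit-learn's train_test_split; the random seed is arbitrary.

```python
from sklearn.model_selection import train_test_split

# Hold out 30% of the rows for testing
train_df, test_df = train_test_split(df, test_size=0.3, random_state=42)
```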

We set the test set to 30% of the original data. With our data split, we upload both parts back into the S3 bucket.
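One way to do the upload, sketched with boto3; the object keys are illustrative.

```python
import boto3

# Persist the splits locally, then upload them next to the original dataset
train_df.to_csv("train.csv", index=False)
test_df.to_csv("test.csv", index=False)

s3 = boto3.client("s3")
s3.upload_file("train.csv", bucket, "train.csv")
s3.upload_file("test.csv", bucket, "test.csv")
```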

You can see the data inside your S3 bucket, which now consists of three datasets.

With our dataset ready, we can now develop the churn prediction model and deploy it. On AWS, we often use the script-mode method for machine learning training, so we will write a training script before starting the training job.

For the next step, we need to create an additional Python file, which I called train.py, in the same folder.

Inside this file, we will set up the model development process that creates the churn model. For this tutorial, I adapt some code from Ram Vegiraju.

First, we import all the packages necessary for developing the model.
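Since the original snippet is missing, here is a plausible set of imports for train.py. The random forest classifier is an assumption; any scikit-learn estimator would work the same way.

```python
# train.py -- imports for the training script
import argparse
import io
import os

import joblib
import pandas as pd
from sklearn.ensemble import RandomForestClassifier  # model choice is an assumption
```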

Next, we use argparse to control the variables we can pass into our training process. The overall code that goes into the script to train our model is shown below.
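A sketch of that script body. SM_MODEL_DIR and SM_CHANNEL_TRAIN are environment variables the SageMaker training container injects; the hyperparameter name, the Churn label column, and the file name are assumptions about the dataset.

```python
# train.py -- argument parsing and the training routine
if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    # Hyperparameter passed in from the estimator (name is illustrative)
    parser.add_argument("--n-estimators", type=int, default=100)
    # Input/output locations injected by the SageMaker training container
    parser.add_argument("--model-dir", type=str, default=os.environ.get("SM_MODEL_DIR"))
    parser.add_argument("--train", type=str, default=os.environ.get("SM_CHANNEL_TRAIN"))
    args, _ = parser.parse_known_args()

    # "Churn" as the label column is an assumption about the Kaggle dataset
    train_df = pd.read_csv(os.path.join(args.train, "train.csv"))
    X = train_df.drop("Churn", axis=1)
    y = train_df["Churn"]

    model = RandomForestClassifier(n_estimators=args.n_estimators)
    model.fit(X, y)

    # Anything written to the model directory is packaged as the model artifact
    joblib.dump(model, os.path.join(args.model_dir, "model.joblib"))
```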

Lastly, we need to define the four functions SageMaker requires for inference: model_fn, input_fn, output_fn, and predict_fn.
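A minimal version of those four handlers, matching the artifact name saved above; the CSV-only input handling is a simplification.

```python
# train.py -- inference handlers the SageMaker SKLearn container looks for

def model_fn(model_dir):
    # Load the artifact written at the end of training
    return joblib.load(os.path.join(model_dir, "model.joblib"))

def input_fn(request_body, request_content_type):
    # Accept CSV payloads and turn them into a DataFrame
    if request_content_type == "text/csv":
        return pd.read_csv(io.StringIO(request_body), header=None)
    raise ValueError(f"Unsupported content type: {request_content_type}")

def predict_fn(input_data, model):
    return model.predict(input_data)

def output_fn(prediction, content_type):
    # Serialize predictions as a comma-separated string
    return ",".join(str(p) for p in prediction)
```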

With our script ready, we can run the training process. We pass the script we created above into the SKLearn estimator, a SageMaker object that handles the entire training process; we only need to supply parameters similar to the code below.
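A sketch of the estimator call back in the notebook. The instance type, framework version, and hyperparameter value are illustrative; check which scikit-learn framework versions are available in your region.

```python
from sagemaker.sklearn.estimator import SKLearn

sklearn_estimator = SKLearn(
    entry_point="train.py",
    role=role,
    instance_count=1,
    instance_type="ml.m5.large",   # instance choice is illustrative
    framework_version="1.2-1",     # pick a version available in your region
    hyperparameters={"n-estimators": 200},
)

# Point the "train" channel at the training split uploaded earlier
sklearn_estimator.fit({"train": f"s3://{bucket}/train.csv"})
```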

If the training job succeeds, you will see its status report in the cell output.

If you want to check the Docker image used for the SKLearn training job and the location of your model artifact, you can access them with the following code.
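Assuming the estimator object from the previous step, something like:

```python
# Training container image and S3 location of the packaged model artifact
print(sklearn_estimator.training_image_uri())
print(sklearn_estimator.model_data)
```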

With the model in place, we deploy it to an API endpoint that we can use for prediction. To do that, we can use the following code.
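A sketch of the deployment call; the endpoint name and instance type are placeholders.

```python
# Stand up a real-time HTTPS endpoint backed by the trained model
predictor = sklearn_estimator.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.large",
    endpoint_name="churn-prediction-endpoint",  # placeholder name
)
```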

If the deployment is successful, the model endpoint is created, and you can call it to make predictions. You can also see the endpoint in the SageMaker dashboard.

You can now make predictions with this endpoint. To verify it, test the endpoint with the following code.
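One way to exercise the endpoint, assuming the test split and Churn label column from earlier; the serializers match the text/csv handling in the inference script.

```python
from sagemaker.serializers import CSVSerializer
from sagemaker.deserializers import CSVDeserializer

# Send a few unlabeled test rows to the endpoint as CSV
predictor.serializer = CSVSerializer()
predictor.deserializer = CSVDeserializer()
sample = test_df.drop("Churn", axis=1).head(5)
print(predictor.predict(sample.values))
```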

Congratulations! You have now successfully deployed your model in the AWS cloud. After you have finished testing, don't forget to clean up the endpoint. You can use the following code to do that.
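Something like the following, assuming the predictor object created at deployment:

```python
# Delete the endpoint (and its configuration) to stop incurring charges
predictor.delete_endpoint()
```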

Don't forget to shut down the instance you used and clean up the S3 storage if you don't need it anymore.

For further reading, you can read more about the SKLearn estimator and about Batch Transform inference if you prefer not to maintain a real-time endpoint.

The AWS cloud is a multi-purpose platform that many companies use to support their business. One of its most common uses is data analytics, especially putting models into production. In this article, we learned how to use AWS SageMaker and how to deploy a model to an endpoint.

Cornellius Yudha Wijaya is a data science assistant manager and data writer. While working full-time at Allianz Indonesia, he loves to share Python and data tips via social media and writing media.

Visit link:

Deploying Your Machine Learning Model to Production in the Cloud - KDnuggets

With the summer of data in the rear-view mirror, here are the key … – SiliconANGLE News

There's no talking about the Summer of Data without also talking about the year of artificial intelligence; the two are inextricably linked and will remain so in the months to come.

That's because the rise of AI has led to the need for incredible amounts of data, and projections indicate that data centers are set to become the world's largest energy consumers, rising from 3% of total electricity use in 2017 to 4.5% by 2025. Indeed, more companies are seeing their data needs grow on a yearly basis, leading to the characterization that every company is a data company.

With an eye on this challenge, various technological advancements have rolled out in 2023, including much-needed innovation in data storage. Next-generation storage solutions are estimated to be valued at more than $150 billion by 2032, according to a recent study from Global Market Insights Inc.

It's also no surprise that every vendor offering data-related solutions is striving to secure a share of what's estimated to be a total addressable market in the tens of billions of dollars for data platforms, noted Rob Strechay, lead analyst for the Collective from theCUBE, in an analysis for SiliconANGLE.

"The opportunities for storage platform vendors and data platform vendors lie in integrating data platforms as-a-service into their storage offerings," Strechay wrote.

With all of these changes and demands in mind, some of the major players in data, including Snowflake Inc., MongoDB Inc., VAST Data Inc. and Databricks Inc., spent the Summer of Data unveiling their strategies as data becomes even more important in support of AI's evolution.

Though all of these companies and others like them are responding to the same challenges, their solutions differ. That's why, with the Summer of Data in the rearview, it's worth recapping what we learned so far and where these companies could be heading next.

This feature is part of SiliconANGLE Media's ongoing series with theCUBE exploring the latest developments in the data storage and AI market.

Before this year's Snowflake Summit, the company's stated target of $10 billion in revenue for fiscal year 2028 left plenty of open questions about how it might get there. Over the course of this year, meanwhile, theCUBE has produced a number of in-depth analyses laying out a mental model for the future of data platforms.

In his post-summit analysis, theCUBE analyst Dave Vellante discussed the vision outlined by Snowflake during this year's Snowflake Summit, from its keynote presentations to product announcements. The company's intention was clearly to be the number one platform on which this new breed of data applications will be built, according to Vellante.

"This week's Snowflake Summit further confirmed our expectations with a strong top-line message of 'All Data/All Workloads,' and a technical foundation that supports an expanded number of ways to access data," Vellante wrote. "Squinting through the messaging and firehose of product announcements, we believe Snowflake's core differentiation is its emerging ability to be a complete platform for data applications. Just about all competitors either analyze data or manage data."

Other companies have also been weighing their strategies as the world of data storage evolves and as data and AI converge. For VAST, that looks like an evolution beyond being a storage company. In early August, VAST announced a new, global data infrastructure for AI called the VAST Data Platform, with an aim to unify data storage, database and virtualized compute engine services in a scalable system.

"By bringing together structured and unstructured data in a high-performance, globally distributed namespace with real-time analysis, VAST is not only tackling fundamental DBMS challenges of data access and latency but also offering genuinely disruptive data infrastructure that provides the foundation organizations need to solve the problems they haven't yet attempted," Market Strategy analyst Merv Adrian said at the time of the announcement.

Meanwhile, the realities of modern business, with challenges such as the skills shortage, mean developers must be kept happy. That has been good news for companies such as cloud database provider MongoDB.

The company recently saw its stock soar with blowout fiscal first-quarter earnings results, which posed an interesting question to watch in advance of MongoDB .local NYC in June: Was AI contributing to the surge in stock price?

DevOps democratization has surged over the past 20 years, but AI has posed a new wrinkle. Still, AI isn't the only thing to consider as developers seek to go next-level with their data, according to Mark Porter, chief technology officer of MongoDB, in an interview with theCUBE at MongoDB .local NYC.

"It is currently the thing that's really exciting, and being able to build great apps that do great things with your core data is always going to be important," he said. "But what's happening is people are enhancing their apps with AI."

With hundreds of people using MongoDB as the foundation of their AI apps, Porter pointed to the company's developer data platform as key to this arrangement.

Meanwhile, Databricks recently acquired Okera Inc., an AI-focused data governance platform, with the stated goal of expanding its own governance and compliance capabilities for machine learning and large language model AIs. Customers used to control access to their data with simple data controls that only needed to address one plane, such as a database.

"The rise of AI, in particular machine learning models and LLMs, is making this approach insufficient," the Databricks team, including Chief Executive Officer Ali Ghodsi, explained in the announcement.

Many, including Vellante, are closely watching what industry leader Databricks is doing. The big question for the company this summer was how it would execute its critical strategic decisions as hype and confusion continued to swirl around the world of AI.

"Emerging customer data requirements and market forces are conspiring in a way that we believe will cause modern data platform players generally and Databricks specifically to make some key directional decisions and perhaps even reinvent themselves," Vellante wrote in an edition of his Breaking Analysis series.

After the Data + AI Summit, those connections began to come into better view. In a new world where data is influenced by broader trends in AI, Databricks is back in its wheelhouse, according to Doug Henschen, vice president and principal analyst at Constellation Research Inc.

"I think generative AI, for the last three years, they've been building up the warehouse side of their Lakehouse and making a case," he said. "All this time data science has been their wheelhouse, and their strength and their customers are here, while others are making announcements of previews that'll help eventually down the road on AI. This is where it's really happening, and they're building generative models today."

The Summer of Data may be over, but it's clear that the evolution of AI will continue to shape the strategies of major players in data for many months to come. That will drive the adoption of next-generation storage solutions and their projected valuation of more than $150 billion by 2032.

Though the AI-powered hybrid-multi-super cloud comes with various demands on data, companies such as those mentioned above have laid out their plans during the Summer of Data, and the year ahead will be critical as those same companies are tasked to execute. So, too, will various data platforms continue to evolve.

"Most traditional applications are built on compute, networking and storage infrastructure, but the future will see applications program the real world," George Gilbert, a contributor to theCUBE, wrote in a recent analysis.

In that world, "data-driven digital twins representing real-world people, places, things and activities will be on the platform," Gilbert wrote, which explains the stakes at hand.

"On balance, we believe that the distributed nature of data originating from real-world things, combined with the desire for real-time actions, will further stress existing data platforms. We expect a variety of approaches will emerge to address future data challenges," he wrote. "These will come at the problem starting from a traditional data management perspective (such as Snowflake), a data science view of the world (such as Databricks) and core infrastructure prowess (cloud/infrastructure-as-a-service, compute and storage vendors)."

Clearly, the challenges around data remain the same as AI continues its meteoric rise. The upcoming months will be critical as the needs of companies in this new world continue to be of paramount importance.


See the original post here:

With the summer of data in the rear-view mirror, here are the key ... - SiliconANGLE News