Category Archives: Data Science
$424K grant to better predict weather, climate through machine … – University of Hawaii
Improved weather and climate forecasting using machine learning and artificial intelligence is the focus of a new University of Hawaii at Mānoa project. Results are expected to have a major impact in Hawaii and other tropical climate areas around the world.
Associate Professor Peter Sadowski from the Information and Computer Sciences Department in the College of Natural Sciences earned a five-year, $424,293 CAREER grant from the National Science Foundation (NSF). CAREER grants are designed to support early-career faculty to serve as academic role models in research and education.
"One of the risks of climate change for Hawaii is extreme weather events, and current scientific models are poor at estimating these risks," Sadowski said. "This project will provide a completely new approach to modeling these risks, using the latest advancements in AI (artificial intelligence)."
Sadowski's project will develop machine-learning methods to predict the risk of adverse weather and climate events. AI will be used to develop new data-driven computational methods for modeling risk and to apply these methods to weather applications.
In particular, these models will be applied to forecasting solar irradiance and precipitation, two areas that are particularly important for tropical islands such as the Hawaiian Islands. Estimating the risk of rapid changes in solar power generation is necessary for managing energy grids that are seeing a rapid increase in variable renewable sources, and floods claim hundreds of lives and billions in property damage each year in the U.S. alone.
Artificial intelligence methods have greatly improved at tasks ranging from translating text to making predictions from images and video. A key development is the ability to learn probabilistic models of images and video. The research will leverage existing data from numerical simulations of atmospheric variables, observations from satellites, and ground-based weather station data from the NSF-funded CHANGE-HI project. The machine-learning methods developed by this project will complement existing physics-based weather prediction models by providing location-specific forecasts with increased speed, higher resolution and probabilistic accuracy.
This research will be paired with an educational outreach program that includes a summer data science course for high school students and a workshop to share data science teaching materials with Hawaii's K-12 teachers.
The Ethical Imperative of AI Differential Privacy in Data Science – Fagen wasanni
The rapid advancements in artificial intelligence (AI) and data science have opened up new horizons for various industries, from healthcare to finance. The ability to analyze massive amounts of data has led to significant breakthroughs in areas such as personalized medicine, fraud detection, and even self-driving cars. However, with these technological advancements comes the responsibility to ensure that the privacy of individuals is protected. This is where the concept of differential privacy comes into play, serving as an ethical imperative in the realm of AI and data science.
Differential privacy is a mathematical framework that allows data scientists to analyze and share data while preserving the privacy of individuals within the dataset. It works by adding a carefully calculated amount of noise to the data, ensuring that the results of any analysis remain statistically accurate while making it virtually impossible to identify any individual's information. This approach has gained significant traction in recent years, with tech giants like Apple and Google adopting differential privacy techniques to protect user data.
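To make the noise-adding idea concrete, here is a minimal sketch of the classic Laplace mechanism in Python; the function and parameter names are illustrative, not Apple's or Google's actual implementations:

```python
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Release a numeric query result with epsilon-differential privacy.

    Noise is drawn from a Laplace distribution with scale sensitivity/epsilon:
    smaller epsilon means more noise and stronger privacy.
    """
    scale = sensitivity / epsilon
    return true_value + np.random.laplace(loc=0.0, scale=scale)

# Example: privately release a count of patients with a given condition.
# Adding or removing one person changes a count by at most 1, so the
# sensitivity of a counting query is 1.
true_count = 842
private_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5)
print(f"Private count: {private_count:.1f}")
```

Analyses run on such noised outputs stay statistically useful in aggregate, while any single individual's contribution is masked.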
The ethical imperative of implementing differential privacy in AI and data science stems from the potential harm that can be caused by privacy breaches. In today's digital age, personal information is more valuable than ever, and the consequences of mishandling such data can be severe. For instance, unauthorized access to medical records could lead to discrimination based on health conditions, while financial data breaches can result in identity theft and fraud. Furthermore, the misuse of personal information can have long-lasting psychological effects on individuals, leading to a loss of trust in institutions and a sense of vulnerability.
In addition to the potential harm caused by privacy breaches, there is also a growing concern about the potential for AI algorithms to perpetuate and even amplify existing biases and inequalities. This is particularly relevant in the context of machine learning, where algorithms are trained on large datasets to identify patterns and make predictions. If the data used to train these algorithms contains biases, the resulting AI systems can inadvertently perpetuate these biases, leading to unfair and discriminatory outcomes.
Differential privacy can help mitigate these concerns by ensuring that sensitive information is protected while still allowing for valuable insights to be gleaned from the data. By preserving individual privacy, differential privacy reduces the risk of harmful consequences resulting from privacy breaches. Moreover, by allowing data scientists to work with anonymized data, differential privacy can help to identify and address potential biases in AI algorithms, leading to more fair and equitable outcomes.
The ethical imperative of differential privacy in AI and data science is further underscored by the growing body of legislation aimed at protecting individual privacy. Regulations such as the European Union's General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) have placed stringent requirements on organizations to safeguard personal information and provide greater transparency and control to individuals over their data. Implementing differential privacy techniques can help organizations comply with these regulations while still enabling them to leverage the power of AI and data science to drive innovation and growth.
In conclusion, the ethical imperative of AI differential privacy in data science is clear. As the world becomes increasingly data-driven, it is crucial for organizations to strike a balance between harnessing the power of AI and data science and protecting the privacy of individuals. Differential privacy offers a promising solution to this challenge, enabling data scientists to gain valuable insights from data while preserving individual privacy and mitigating the risk of harmful consequences. By adopting differential privacy techniques, organizations can not only meet their ethical obligations but also build trust with their customers and stakeholders, ensuring the long-term success and sustainability of their AI and data science initiatives.
Argonne and University of Chicago researchers improve … – Argonne National Laboratory
The introduction of millions of electric vehicles (EVs) onto the power grid will create a transformational opportunity for America's decarbonization efforts. However, it also brings with it an important challenge. Scientists and engineers are looking for the best way to ensure that vehicles can be charged smartly, efficiently, cheaply and cleanly by a grid that may not be able to accommodate them all at once or all the time.
Researchers at the U.S. Department of Energy's Argonne National Laboratory and graduate students at the University of Chicago are collaborating on an exciting new project to tackle that challenge. This project will use a particular combination of computational rewards and punishments, a technique called reinforcement learning, to train an algorithm to help schedule and manage the charging of a diverse set of electric vehicles.
The first group of vehicles that the team is studying are those being charged by Argonne employees at the laboratory's Smart Energy Plaza, which offers both regular AC chargers and DC fast chargers. Because employees don't typically need their vehicles during the workday, there can be some flexibility in terms of when each car gets charged.
"There's a certain total amount of power that can be allocated, and different people have different needs in terms of when they need to have their cars available at the end of the day," said Argonne principal electrical engineer Jason Harper. "Being able to train a model to work within the constraints of a particular employee's departure time while being cognizant of peak demands on the grid will allow us to provide efficient, low-cost charging."
"When you have a lot of EVs charging at the same time, they can create a peak demand on the power station. This introduces increased charges, which we're trying to avoid," added Salman Yousaf. Yousaf is a graduate student in applied data science at the University of Chicago who is working on the project with three other students.
The reinforcement learning in the algorithm works by incorporating feedback from positive results, like an EV having the desired amount of charge at the designated departure time. It also incorporates negative results, like having to draw power past a certain peak threshold. Based on this data, the charge scheduling algorithm can make more intelligent decisions about which cars to charge when.
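The article does not spell out the reward function itself, but the feedback loop it describes can be illustrated with a toy sketch; the names, weights, and thresholds below are assumptions for illustration, not the Argonne team's actual design:

```python
def charging_reward(soc_at_departure: float, target_soc: float,
                    site_peak_kw: float, peak_threshold_kw: float) -> float:
    """Toy episode reward for an EV charge-scheduling agent.

    Rewards meeting the driver's target state of charge by departure and
    penalizes drawing power above the site's peak threshold. The weights
    and structure are illustrative only.
    """
    # Positive feedback for reaching (or approaching) the desired charge.
    reward = 10.0 * min(soc_at_departure / target_soc, 1.0)
    # Negative feedback for exceeding the peak demand threshold.
    if site_peak_kw > peak_threshold_kw:
        reward -= 5.0 * (site_peak_kw - peak_threshold_kw)
    return reward

# A day where the car reached 90% of its target but the site briefly
# exceeded a 100 kW threshold by 8 kW:
print(charging_reward(0.72, 0.80, 108.0, 100.0))  # 9.0 - 40.0 = -31.0
```

An agent trained against a signal like this learns to defer flexible charging rather than pile demand into the same hours.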
"Smart charge scheduling is really an optimization problem," Harper said. "In real time, the charging station is constantly having to make tradeoffs to make sure that each car is being charged as efficiently as possible."
Although the Argonne charging stations are the first location where the project's researchers are performing reinforcement learning, there is the potential to expand far beyond the laboratory's gates. "There's a lot of flexibility when it comes to charging at home, where overnight charging would allow for some ability to move around how the charging load is distributed," Yousaf said.
"True smart charging is really taking into consideration all of the actors in the ecosystem," Harper added. "That means the utility, the charging station owner and the EV driver or homeowner. We want to meet the needs of everyone while still being mindful of the restrictions that everyone faces."
Future work with the model will involve a simulation of a much larger charging network that will initially be based on data collected from Argonnes chargers.
Harper and his colleagues have also developed a mobile app called EVrest that allows users of networked charging stations (in this case, initially Argonne employees) to reserve stations and participate in smart charge scheduling. The EVrest platform collects data on charging behavior and will use that data to train future AI models to aid in smart charge management and vehicle grid integration.
Genedata Announces Licensing Agreement with Gilead Sciences to … – Bio-IT World
Genedata, the leading provider of enterprise software solutions for biopharmaceutical R&D, today announced a licensing agreement with Gilead Sciences, Inc. to help the company leverage the power of multi-omics data in the discovery of life-changing therapeutics across multiple indications. The agreement includes a software license to Genedata Profiler, a domain-specific, state-of-the-art data integration and analytics platform, and access to consulting resources to support the adoption and integration of the solution within the company's infrastructure.
"At Gilead, we work every day to discover innovative therapeutics for people with life-threatening diseases," said Li Li, Executive Director, Research Data Sciences at Gilead. "Utilizing Genedata Profiler will enhance our capabilities for integration and analysis of large-scale multi-modal datasets as we continue to accelerate research for the discovery of transformative medicines."
As a comprehensive, end-to-end computational solution, Genedata Profiler enables biopharmaceutical organizations to conduct data-rich discovery and translational research and make more informed decisions, ultimately supporting the development of more effective, targeted treatments. It empowers data scientists to decipher complex patterns associated with a specific disease from Next Generation Sequencing (NGS) and other high-dimensional data, facilitating target identification and evaluation and biomarker research for a range of therapeutic indications.
The main benefits of Genedata Profiler include:
Beyond software, Genedata provides scientific domain expertise and 26 years of experience in supporting the biopharma industry in increasing the productivity and quality of their research output through digital transformation.
"We are excited to have signed Gilead Sciences to our rapidly growing customer list for our Genedata Profiler platform," said Othmar Pfannes, Ph.D., CEO of Genedata. "Genedata Profiler democratizes access to R&D data and includes fit-for-purpose analytics, while automating highly complex data operations, all key requirements for enabling biopharmaceutical organizations to generate precision medicines as efficiently as possible."
About Gilead Sciences
Gilead Sciences, Inc. is a biopharmaceutical company that has pursued and achieved breakthroughs in medicine for more than three decades, with the goal of creating a healthier world for all people. The company is committed to advancing innovative medicines to prevent and treat life-threatening diseases, including HIV, viral hepatitis, COVID-19, and cancer. Gilead operates in more than 35 countries worldwide, with headquarters in Foster City, California.
LinkedIn | Twitter
About Genedata
Genedata transforms data into intelligence with innovative software solutions that incorporate extensive biopharma R&D domain knowledge. Multinational biopharmaceutical organizations and cutting-edge biotechs around the globe rely on Genedata to digitalize and automate data-rich and complex R&D processes. From early discovery all the way to the clinic, Genedata solutions help maximize the ROI in R&D expenditure. Founded in 1997, Genedata is headquartered in Basel, Switzerland with additional offices in Boston, London, Munich, San Francisco, Singapore, and Tokyo.
LinkedIn | Twitter | YouTube
Contact: Allison Kurz, Genedata Public Relations, pr@genedata.com
Disclaimer
The statements in this press release that relate to future plans, events or performance are forward-looking statements that involve risks and uncertainties, including risks associated with uncertainties related to contract cancellations, developing risks, competitive factors, uncertainties pertaining to customer orders, demand for products and services, development of markets for the Company's products and services. Readers are cautioned not to place undue reliance on these forward-looking statements, which speak only as of the date hereof. The Company undertakes no obligation to release publicly the result of any revisions to these forward-looking statements that may be made to reflect events or circumstances after the date hereof or to reflect the occurrence of unanticipated events.
All product and service names mentioned are the trademarks of their respective companies.
Research Rooted in Machine Learning Challenges Conventional … – National Institute of Justice
Researchers have developed a new analytical method to better understand how individuals move toward violent extremism.
Using machine learning, a form of artificial intelligence, the method reveals clusters of traits associated with possible pathways to terrorist acts. The resource may improve our understanding of how an individual becomes radicalized toward extremist violence.
The report on a scientific study that deploys those tools, blending elements of data science, sociology, and criminology, calls into question some common assumptions about violent extremism and the homegrown individuals who are motivated to engage in behaviors supporting violent jihadist ideologies. See Table 1.
Table 1 shows select key insights from the project aimed at developing a new computational methodology that can mine multiple large databases to screen for behaviors associated with violent extremism.
The study departs from the research community's common use of demographic profiles of extremist individuals to predict violent intentions. Profiling runs the risk of relying on ethnic stereotypes in extremism studies and law enforcement practices, particularly with respect to American Muslims. According to the researchers, the method isolated the behaviors associated with potential terrorist trajectories, after being trained with thousands of text data coded by researchers.
Machine learning is an application of artificial intelligence that uses existing data to make predictions or classifications about individuals, actions, or events. The machine learns by observing many examples until it can statistically replicate them.
Researchers scanned large datasets to spot traits or experiences that are collectively associated with terrorist trajectories, employing a process that blends machine learning (see "What Is Machine Learning?") with an evidence-based behavioral model of radicalization associated with violence and other terrorism-related activities.
The machine-learning computational method analyzes, while learning from, copious data to isolate behaviors associated with potential terrorist trajectories.
The graph component depicts clusters of behavioral indicators that reveal those trajectories. The datasets generating those indicators include investigator notes, suspicious activity reports, and shared information. See "What Do We Mean by Graph? Defining It in Context."
This tool for understanding violent extremism is the work of Colorado State University and Brandeis University investigators, supported by the National Institute of Justice. The tool aims to isolate somewhat predictable radicalization trajectories of individuals or groups who may be moving toward violent extremism.
A key element of the work was the development of a Human-in-the-Loop system, which introduces a researcher into the data analysis. Because the data are so complex, the researcher mitigates difficulties by assisting the algorithm at key points during its training. As part of the process, the researcher writes and rewrites an algorithm to pick up key words, phrases, or sentences in texts. Then the researcher sorts those pieces of text with other text segments known to be associated with radicalization trajectories.
The Human-in-the-Loop factor is designed to help researchers code data faster, build toward a law enforcement intelligence capability for capturing key indicators, and enable researchers to transform textual data into a graph database. The system relies on a software-based framework designed to help overcome challenges posed by massive data volumes and complex extremist behaviors.
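As a rough illustration of this kind of Human-in-the-Loop text coding, consider the sketch below; the cue phrases, labels, and function names are hypothetical, not drawn from the study's actual lexicon:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical cue phrases a researcher has associated with one behavioral
# indicator (e.g., planning travel abroad). Illustrative only.
TRAVEL_CUES = ["one-way ticket", "crossed the border", "travel to join"]

@dataclass
class Segment:
    text: str
    label: Optional[str] = None  # proposed by the algorithm, confirmed by a human

def propose_labels(segments: list) -> list:
    """First pass: flag segments containing known cue phrases."""
    for seg in segments:
        if any(cue in seg.text.lower() for cue in TRAVEL_CUES):
            seg.label = "travel-indicator?"  # awaiting human confirmation
    return segments

def human_review(seg: Segment, confirmed: bool) -> None:
    """The human in the loop accepts or rejects each proposal;
    confirmed examples become training data for the next round."""
    seg.label = "travel-indicator" if confirmed else None

docs = propose_labels([Segment("He bought a one-way ticket to Istanbul.")])
human_review(docs[0], confirmed=True)
print(docs[0].label)  # travel-indicator
```

Each cycle of machine proposal and human confirmation expands the labeled corpus, which is what lets the researcher code data faster over time.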
The research stems from the premise that radicalization is the product of deepening engagements that can be observed in changing behaviors. This concept is based on researchers observations that the radicalization process occurs incrementally.
The radicalization trajectory concept suggests that a linear pathway exists from an individual entertaining extremist ideas to ultimately taking extremist action marked by violence in the name of ideology.
The research findings validated that premise.
The researchers used 24 different behavioral indicators to search databases for evidence of growing extremism. Some examples of indicators are desire for action, issuance of threats, ideological rebellion, and steps toward violence. (See Figure 1 for an example of a set of cues, or behaviors, that the researchers associate with one behavioral indicator, planning a trip abroad.)
Source: Dynamic, Graph-Based Risk Assessments for the Detection of Violent Extremist Radicalization Trajectories Using Large Scale Social and Behavioral Data, by A. Jayasumana and J. Klausen, Table 5, p. 23.
Because violent extremism remains a relatively rare phenomenon, data on known individuals who committed terrorist acts were mined to identify cues representing behavioral extremist trajectories. To that end, researchers collected three types of data:
The sources of collected data were public documents ranging from news articles to court documents, including indictments and affidavits supporting complaints.
Of the 1,241 individuals studied, the researchers reported that 421 engaged in domestic terrorist violence, 390 became foreign fighters, and 268 became both foreign fighters and individuals engaged in domestic terrorism. A minority (162) were convicted of nonviolent terrorism-related offenses.
Researchers analyzed time-stamped behavioral data, such as travel abroad, a declaration of allegiance, information seeking, or seeking a new religious authority, using graph techniques to assess the order of subjects' behavioral changes and the most common pathways leading to terrorism-related action. See the sidebar "What Do We Mean by Graph? Defining It in Context."
The researchers made several notable findings beyond those presented in Table 1.
Although researchers found that terrorist crimes are often the work of older (at least 25 years old, on average) individuals, the age-crime relationship varied across types of terrorist offenses. They found that, on average, people who committed nonviolent extremist acts were 10 years older than those who became foreign fighters. Although younger men (median age 23) are more likely to take part in insurgencies abroad, slightly older men (median ages 25-26) who have adopted jihadist ideologies are more likely to engage in violent domestic terrorist attacks. Individuals who committed violent acts at home were, on average, four years older than foreign fighters.
Researchers also found that men and a few women at any age may engage in nonviolent criminal support for terrorism. Also, men are six times more likely than women to commit violent offenses, both in the United States and abroad.
According to this study, individuals who have adopted jihadist ideologies and who are immigrants are more likely than those who are homegrown to engage in domestic extremist violence.
The dataset, comprising more than 1,200 individuals who had adopted jihadist ideologies, was used to track radicalization trajectories. It was limited by the availability of sufficiently detailed text sources, which introduced an element of bias. Much of the public data on terrorism come from prosecutions, but not all terrorism-related offenses are prosecuted in state or federal U.S. courts. Some of the subjects died while fighting for foreign terror organizations, which limited the available information on them.
Although data from public documents may be freely shared, the researchers noted that research based on public sources can be extremely time consuming.
Often public education efforts on anti-terrorism take place at schools where children learn about recruitment tactics by extremist groups and warning signs of growing extremism. However, the study found that more than half of those who commit extremist violent acts in the United States are older than 23 and typically not in school. This suggests that anti-terrorism education efforts need to expand beyond school settings.
By using machine learning to identify persons on a trajectory toward extremist violence, the research supports a further move away from relying on user profiles of violent extremists and toward the use of behavioral indicators.
The research described in this article was funded by NIJ award 2017-ZA-CX-0002, awarded to Colorado State University. This article is based on the grantee report Dynamic, Graph-Based Risk Assessments for the Detection of Violent Extremist Radicalization Trajectories Using Large Scale Social and Behavioral Data, by A. Jayasumana and J. Klausen.
A graph, in the context of this research, is a mathematical representation of a collection of connections (called edges) between things (called nodes). Examples would be a social network or a crime network, or points on a map with paths connecting the points. The concept is analogous to cities on a map and the roads or flight paths connecting them. The researchers in this violent extremism study isolated clusters of traits representing a more likely pathway to violent extremism. The concept is similar to a map app choosing roads that are least congested (allowing for most traffic) between two points. Graphs in this sense can be quite visual and make good conventional graphics.
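For readers who want the idea in code, a graph of this kind reduces to nodes and edges; the adjacency-list sketch below uses made-up indicator names, not the study's data:

```python
from collections import defaultdict

# Nodes are behavioral indicators; an edge connects indicators observed
# in sequence for the same subject. Indicator names are invented here.
edges = [
    ("information seeking", "declaration of allegiance"),
    ("declaration of allegiance", "planning travel abroad"),
    ("planning travel abroad", "steps toward violence"),
]

graph = defaultdict(list)
for src, dst in edges:
    graph[src].append(dst)

# Walk one pathway through the graph from a starting indicator.
node = "information seeking"
while graph[node]:
    nxt = graph[node][0]
    print(node, "->", nxt)
    node = nxt
```

A trajectory through such a graph is just an ordered walk along the edges, which is what makes sequences of behavioral changes comparable across subjects.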
Discover the Golden Ratio: How to Perfectly Balance Data Science … – Medium
On a regular day, I found myself standing in front of an art piece at a local museum. As my eyes scanned the grandeur of the structure, a realization hit me. Business, like this work of art, also follows a golden ratio, a harmonious blend of data science and intuition that could alter the course of your decision-making journey.
Can you imagine two seemingly distinct entities like data science and intuition being the Da Vinci Code to effective business decision-making? It sounds absurd, doesn't it? The Gen Z in me wants to joke, "What's next? Yoga and quantum physics?"
Let me tell you a story. Sandra, an entrepreneur with a tech startup, was on the verge of collapse. She had all the data science insights but ignored her gut instincts. Then, she met an old mentor who revealed the golden ratio, the fusion of data and intuition. In no time, she turned the tables around. Her startup thrived, and she was hailed as a visionary.
Doesn't it sound like a plot twist? Welcome to the reality of today's business world, where data-crunching machines and human intuition come together to create magic. Here, one complements the other, filling in the gaps, and that's what we call the golden ratio.
Scientific evidence backs this approach. A study from the University of Amsterdam shows that intuitive decisions backed by data analysis are often more effective. When data science meets intuition, we get an amalgamation of raw numerical power and the human touch.
But, hold on! Don't pack your bags for a trip to a secluded mountain to hone your intuition. And don't invest all your money in the most advanced data analytics tool, either. The golden ratio isn't about extremes. It's about finding the perfect balance.
Now that we have the recipe, let's prepare the dish. Over time, with practice and reflection, you'll start seeing the golden ratio in your decisions. It's a continuous process, a journey towards harmonious decision-making.
So, are you ready to discover your golden ratio and revolutionize your decision-making?
Imagine a day when you stand before your business, seeing the golden ratio in action, driving success. Remember, the journey to finding the perfect balance can be as thrilling as standing in front of a masterpiece and realizing that it holds the secret to your business success.
Drop your experiences in the comments and let's discuss how the golden ratio is reshaping decision-making landscapes. Follow my Medium account for more unique insights and daring ideas. Let's embark on this exciting journey together!
Simplifying with Data-Driven Warehouse Optimization – Supply and Demand Chain Executive
There's a perfect storm brewing in the evolution of warehouses and DC businesses, and the optimization solution lies in creating a recipe in which technology is a main ingredient.
The industry faces a number of challenges. Increased customer expectations and the growth of ecommerce, magnified by labor issues, have put more pressure on financial performance and corporate transformation.
After the pandemic made distribution excellence even more essential to obtaining necessary goods and services, the warehouse function was elevated to a central strategy for many companies. Companies have changed their thinking from viewing distribution as a cost center to viewing it as a competitive advantage. There is an urgency to act on these changing market conditions, but some changes take a long time to implement.
So, what's the answer? In my eyes, it's a marriage of the old and new. If you use data science-led innovative software products as the platform and infrastructure for growth, you can apply them to the basic tasks in the DC: picking, batching, travel. To tie things together, it's a collaboration between workers and technology that enables workers to be more efficient and effective at their jobs.
A Lucas Systems survey found that nearly 3 out of 4 (74%) on-floor workers would consider a pay cut at another company for an opportunity to use technology if it helps them in their job. Workers also said they are physically spent, spending over a third of their day walking, and would welcome tech's help in the form of robots or other tech tools. This tells me that workers are eager to use technology in new, beneficial ways.
From constant enhancements focused on incremental process changes to continuous optimization and operational transformation, DCs need tools that can help them manage the week-to-week, day-to-day and even hour-to-hour changes that occur. It's about finding the real problems, not the symptoms of the problems. It is also about prioritizing solutions that optimize DC operations and processes and allow frontline workers to work at the top of their ability level.
The opportunity lies mainly in three areas:
There have generally been only slight gains in picker productivity or in slotting programs to reduce travel, with pickers in most facilities still spending more than 30% of their day traveling throughout the warehouse rather than picking product. But there are ways to optimize and reduce travel and generate immediate productivity gains without changing overall systems. That's where AI, software and technology come in.
Let's look at how applying technology could transform your operation. In batching work, there are quite a number of variables to consider, including:
Software can dynamically apply real-time optimization algorithms, putting work into priority sequence based on rules that you can configure and adjust, and run millions of possible combinations in a split second, so that users are never waiting for work. For example, maybe you concentrate pick density early in the day, when there is time before the first orders are due at the dock. As the day progresses, the software can automatically transition to optimize for priority as route cut-off times approach, based on the schedule for that day or week. The software then determines an optimized path for the user to take through the warehouse to complete their work, taking into account a number of factors such as:
What's important to understand is that you can generally achieve excellent results without completely changing and updating systems, like a brand-new WMS or ERP. The behind-the-scenes parts of the process, and how you get to the answers, are incredibly complex and best handled by software technology, ideally using artificial intelligence and machine learning to continually iterate and enhance the process.
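As a toy sketch of the kind of priority sequencing described above, the following blends order urgency with pick density; the scoring formula, weights, and names are invented for illustration and are not any vendor's actual algorithm:

```python
import heapq
import time
from dataclasses import dataclass, field

@dataclass(order=True)
class PickTask:
    score: float
    batch_id: str = field(compare=False)

def batch_score(due_time: float, now: float,
                pick_density: float, density_weight: float) -> float:
    """Blend urgency (route cut-off approaching) with pick density.
    Lower score = released to a picker sooner. Illustrative only."""
    urgency = max(due_time - now, 0.0)  # seconds until cut-off
    return urgency - density_weight * pick_density

now = time.time()
queue = []
# Dense batch due in 2 hours vs. sparse batch due in 30 minutes.
heapq.heappush(queue, PickTask(batch_score(now + 7200, now, 0.9, 600), "batch-A"))
heapq.heappush(queue, PickTask(batch_score(now + 1800, now, 0.2, 600), "batch-B"))
print(heapq.heappop(queue).batch_id)  # batch-B: its cut-off dominates
```

A real engine would re-score continuously through the day, shifting weight from density toward urgency as cut-off times approach, which is the transition the paragraph above describes.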
Using AI to optimize DC processes has proven to be powerful in ecommerce pick-to-cart operations, where customers report doubling productivity without any fundamental changes to their picking processes. Although the biggest productivity gains are seen in each picking, the AI-based software tools have proven equally applicable to case picking, replenishment and other activities where workers are visiting many locations per work assignment. In grocery and food DCs, AI-based optimization has demonstrated travel savings of 15%-30% in case pick to pallet applications.
The right software optimizes hands-on work and streamlines manual processes, thus making the work easier, faster and more accurate for workers. For managers, machine learning-based applications provide new insights and recommendations, empowering them rather than taking control out of their hands. Ultimately, that's better for all, and for the overall business.
OSU Marine and Geology Repository Scores $4.6 Million National … – The Corvallis Advocate
Oregon State University has been awarded $4.6 million in grants from the National Science Foundation to continue operating the Marine and Geology Repository, one of the nation's largest repositories of oceanic sediment cores, for the next five years.
Funds will expand access to more than 22 miles of oceanic sediment cores and tens of thousands of marine rock specimens that reveal Earth's history and document changes in climate, biology, volcanic and seismic activity, meteorite interactions and more.
The repository is one of four National Science Foundation-supported marine geology repositories in the country and the only one on the West Coast. Oregon State has operated the repository continuously since the 1970s. It is used by researchers and students from around the world.
The funds allow the repository to keep operating and to modernize, said Joseph Stoner, a geology and geophysics professor in the College of Earth, Ocean, and Atmospheric Sciences and co-director of the repository.
"We are continually digitizing our data holdings so that we can make it easier for researchers to find what they want to work on," Stoner said. "Our goal is to have baseline data for everything in the collection available online. This digitization will set us up for the future as data science evolves."
"In the past, researchers would hear about a specimen or a collection and come to the repository to find it and study it," said Anthony Koppers, the facility's other co-director. "But many sediment cores and rock samples in the collection are not well known to the national and international science communities and therefore not well studied."
Examples from the repository's inventory include a sediment core estimated at 25 million years old; a sediment core collected from the Peru-Chile Trench at a water depth of 26,500 feet; an Antarctic sediment core collected from a depth of 1,285 meters below the ice; and the oldest core in the collection, an Antarctic core collected on the icebreaker Burton Island in February 1962.
"Digitizing our records will enhance the whole collection and make it possible for more cores and samples to be discovered in our holdings and new research to be done," said Koppers, who also serves as associate vice president for research advancement and strategy in OSU's research office and as a professor of marine geology in the College of Earth, Ocean, and Atmospheric Sciences.
Of the total funds awarded, $2.5 million comes from NSF's Office of Polar Programs and $2.1 million comes from the NSF Division of Ocean Sciences.
Many of the cores in the collection are stored on racks in an 18,000-square-foot refrigerated warehouse in Corvallis, Oregon, near OSU's main campus. Some sediment cores, as well as a trove of Antarctic ice cores, are stored in large walk-in freezers kept at 20 degrees below zero Fahrenheit. Rock samples are stored at room temperature in hundreds of gray totes stacked on pallets.
The repository, which was expanded and relocated in 2020 to a facility on Research Way in Corvallis, also includes lab facilities for analyzing cores as well as seminar spaces and offices for faculty and visiting researchers.
With the new funding, the directors are also planning a series of summer workshops for graduate students, postdoctoral scholars and early-career researchers to learn more about the repository and how to process the cores and rock samples, work with these materials, perform non-destructive measurements and carry out scientific interpretations.
Students and researchers would be paired with mentors to learn more about working with sediment cores and rock samples and could explore a research topic using the repositorys facilities and equipment.
"The basic lessons on working with sediment cores are not available to students in every graduate program, so this can help fill that gap," Koppers said.
Jaywing welcomes Mustafa Khoshnaw and Abisola Akinjole – Consultancy.uk
Out of over 100 applicants for its student placement programme, Jaywing has selected Mustafa Khoshnaw and Abisola Akinjole to join its risk and data science advisory wing.
Jaywing is a professional services firm with 300+ employees in the UK, France and Australia. Founded in 1999, the company's heritage is in risk consulting, working to deliver data analysis skills to clients in the financial services industry.
As Jaywing expands its team, the firm has announced the recruitment of two junior consultants in its London office.
According to a release from the firm, the move aligns with Jaywing's vision to reinforce its workforce, embark on new large-scale projects, and provide senior staff with enhanced supervisory opportunities.
Abisola Akinjole, who is pursuing a degree in big data analytics at Sheffield Hallam University, will serve as a skilled data storyteller during her 11-month tenure with Jaywing's consulting wing. Meanwhile, Mustafa Khoshnaw, a data science student at Manchester Metropolitan University, will be joining Jaywing for a nine-month placement.
Following their placements with Jaywing, both students will return to university to complete their final dissertations, equipped with the invaluable experience gained during their time with the consultancy.
In the meantime, Jaywing will also welcome two additional students in September, further expanding the consultancy's team.
"The durations of these placements are designed to allow Abisola and Mustafa to seamlessly integrate into our projects, gain crucial industry experience, and contribute to the overall success of the team," said Katie Stones, an analytics director at Jaywing. "We are thrilled to welcome these talented masters' students and are committed to supporting their continued growth in their studies and future careers."
Welcoming students and young workers to placements, particularly in the summer months, is a common practice in the consulting sector. Jaywing alone received over 100 applications for the placement positions, and identified 12 promising candidates who were invited to participate in an assessment day at Jaywing's office in Sheffield, before making its selection.
Earlier in the year, Deloitte also added 450 university students across its UK offices, while BJSS invested £1 million in the creation of a new apprenticeship scheme.
Diabetes study finds a wealth of information at your fingertips – Hamilton Health Sciences
Hamilton Health Sciences researchers and data science experts are using artificial intelligence to examine whether the capillaries in nailbeds can be used to identify diabetes and its complications.
Despite being separate health conditions, diabetes and heart disease are strongly connected. According to the Heart and Stroke Foundation, people with diabetes are more likely to develop heart disease at a younger age, and are three times more likely to die of heart disease.
Identifying a patient's risk of these complications due to diabetes means preventative measures can be developed. While it has already been established that the capillaries in the retina of the eye can be used to assess the risk of retinal complications from diabetes, Hamilton Health Sciences' (HHS) Dr. Reema Shah wondered if the capillaries in nailbeds could be a novel, less invasive method of providing more information about diabetes.
With this device, images can be taken of the capillaries from the finger's nail fold.
During her research fellowship in 2016, Shah set out to find the answer in partnership with her supervisor Dr. Hertzel Gerstein, an HHS endocrinologist recognized as a leader in diabetes research by the Canadian and American Diabetes Associations.
Shah and Gerstein developed a research study using images from the finger's nail fold. The nail fold attaches the nail to the rest of the skin through the protective cuticle. The image is captured through a simple, non-invasive, portable diagnostic test called a capillaroscopy.
Then, in 2020, they partnered with the HHS Centre for Data Science and Digital Health (CREATE) to use artificial intelligence to analyze the images.
"We had been seeking partnerships with various external groups to do the machine learning aspects of the projects for quite some time with no success," says Shah. "Finding the CREATE team's expertise in-house finally helped us move the project forward."
The study looked at the capillaries of 120 adult patients with and without type 1 or type 2 diabetes, and with and without cardiovascular disease. CREATE's machine learning experts used a deep learning technique called convolutional neural networks to analyze a total of 5,236 nail fold images, approximately 44 images per participant.
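The published study is not described at this level of architectural detail, but a minimal convolutional classifier for images of this kind might look like the following PyTorch sketch; the layer sizes and class names are assumptions for illustration:

```python
import torch
import torch.nn as nn

class NailfoldCNN(nn.Module):
    """Minimal convolutional classifier sketch for nail fold images.
    Architecture is illustrative, not the study's actual network."""
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, n_classes)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x))

# One 224x224 RGB nail fold image -> logits for diabetes / no diabetes.
model = NailfoldCNN()
logits = model(torch.randn(1, 3, 224, 224))
print(logits.shape)  # torch.Size([1, 2])
```

With roughly 44 images per participant, per-image predictions would typically be aggregated (for example, averaged) into one patient-level result.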
The model accurately distinguished between patients who did and did not have diabetes, and could predict which patients were at greater risk of developing cardiovascular complications. These results were published in the Journal of Diabetes in February 2023.
"This proof-of-concept study demonstrates the potential of machine learning for identifying people with microvascular capillary changes from diabetes based on nail fold images, and for possibly identifying those most likely to have diabetes-related complications," says Shah.
The team is now looking to expand to a larger number of patients and broaden the scope to see if the risk of other complications from diabetes can also be assessed. Eventually, this discovery could be used in low- and middle-income countries to help provide access to screening tools where there is limited access to health-care professionals.
"This world-first innovation could lead to better management of diabetes and its complications," says Jeremy Petch, director of CREATE. "And it's happening right here in Hamilton, thanks to our in-house team of data scientists and AI experts who are building relationships with clinicians who ask great questions and need support finding solutions."