Category Archives: Data Science
Opinions expressed by Entrepreneur contributors are their own.
An article in Harvard Business Review once called being a data scientist "the sexiest job of the 21st century." So what does one have to do to earn that title?
A data scientist tackles multifaceted challenges by combining data with machine-learning approaches. Data science as a field of study, on the other hand, is multidisciplinary, combining computer science with statistical methodology and business competencies. To qualify as a data scientist, a candidate needs unique experience and expertise across the core areas of data science, including statistical analysis, data visualization, machine-learning methods, and the ability to understand and assess the conceptual challenges facing businesses.
What does the ideal future look like for the field? Enthusiasts would likely envision a steady progression of technology over the next five years. Science and technological innovation are continuously improving, new opportunities are being created, and newer techniques are opening up for enhancing business operations for individuals and organizations.
Many organizations are delving into data science as the key to increasing their competitiveness. As a result, production has also improved over the last few years. Take Apple and Amazon as examples. Both companies have improved their global brand positioning, realized steady profits and are on target to continue to grow, partly due to their heavy reliance on data science.
Related: Why 'Data Scientist' Will Continue to Be 'the Sexiest Job of the 21st Century'
We are constantly being faced with unpredictable situations like the Covid pandemic, which has called for businesses to do what they can to minimize human-to-human contact. Data science and rapidly changing technology have helped drive these changes and prove that a bright future exists. This will, however, depend on the quality and the extent of data that organizations can acquire.
Since there is a greater emphasis on consumer behavior data, organizations are constantly searching for the best way to collect this information. In addition, there have been more calls for ethics and legal compliance within every sector of the economy. This increases the need for data science to be utilized, ensuring the acquired data is safely and securely stored. Confidentiality is also of the utmost importance.
All this focus on data science makes data scientists pretty crucial for businesses of all sizes. These professionals have the competencies to develop machine-learning frameworks and extract value from the vast datasets at their disposal.
Despite the growing use of AI, the demand for data scientists should continue to rise. A data scientist generally focuses on analysis and its outputs, while AI acts as the key component of machine learning, which is based on developing self-sustaining frameworks that generate set outcomes without human interaction. Moreover, AI centers on an evolving framework rather than on analysis. However, its value is yet to be comprehensively explored, and this may pose a challenge for the future of data scientists.
Related: Reasons Why Data Science Will Continue to Be the Most Desirable Job of the Decade
But despite the projected setbacks for data scientists, various positives should keep hopes up. One is the increased granularization of data scientists' roles. Another is the increased need for expertise to attain unique workstreams and uphold competitiveness through specialized knowledge. Looking forward, there will be more significant opportunities for developing more advanced algorithms and pushing the field to showcase what data scientists can offer within the world of science and technology.
Renée Cummings arrived at the University of Virginia in October 2020 as the School of Data Science's first data-activist-in-residence. Cummings, who speaks internationally on artificial intelligence ethics and inclusive innovation, also lectures on big data ethics in the school's data science master's program.
The No. 1 question she gets from UVA students is also big: How do I make a difference? Students want to know how to think about making decisions to ensure their choices are ethical and serve society.
"Our work at UVA is to give students the confidence to act responsibly and on behalf of the public good," Cummings said. "We want students to understand why justice and social good and civic-mindedness are so critical to the work that we are doing in technology."
Now Cummings is furthering her contributions to this mission by helping lead UVA's role in the Public Interest Technology University Network, a consortium of 43 academic institutions focused on building the field of public interest technology and preparing the next generation of civic-minded technologists. Her co-leader at UVA, with whom she will serve a three-year term, is Jonathan L. Goodall, professor of civil engineering in the School of Engineering and Applied Science.
"Technology can help address many challenges facing cities and communities, but technological solutions must be developed in partnership with communities so that they are trusted and targeted in their use," Goodall said. "Public interest technology is a new field at this interface between technology and community engagement, with the aim of creating technology that best serves the public interest. It is exciting to work with Renée to build a community of folks from across Grounds engaging in this new field."
UVA was one of 21 college and university founding members of the network, convened in 2019 by New America, the Ford Foundation and the Hewlett Foundation. The goal of the Public Interest Technology University Network, which uses the abbreviation PIT-UN, is to collaborate on new curricula, faculty training, experiential learning opportunities and innovative ways to support students who enter public interest technology fields. The network provides grants to its members to support these efforts.
UVA's relationship with the network was initiated by Louis Nelson, vice provost for academic outreach, who quickly moved to recruit content experts from across Grounds to guide the work.
"While public service and community-facing programs are clearly in the academic outreach domain, UVA is well-positioned to grow a stronger footprint in technology and ethics," Nelson said. "Technology is going to shape the future, and I am thrilled that Renée and Jonathan are going to be leading and representing UVA in this space."
"As a founding partner, UVA has a critical role to play, particularly at this moment," Cummings said. "We have the ability to harness the power of the public in building justice-oriented, equitable, diverse and inclusive technology that is responsible, trustworthy and good for all."
Cummings, who started her career as a journalist to give a voice to the underserved, went on to advocate as a criminologist, criminal psychologist and AI ethicist. She brings to data science a passion for developing ideas around how to create principled technology.
Goodall comes to the co-leadership role as a 2020 recipient of a network grant, one of three received by UVA since the inaugural grant cycle. The funding will help strengthen the Community Fellows Program, which is jointly spearheaded by UVA Engineering's Link Lab for cyber-physical systems and the Center for Civic Innovation, a local nonprofit. The fellows program supports citizen-defined, civic innovation projects that serve the Charlottesville community. The 2021 cohort of fellows was announced Sept. 16.
A civil engineer by training, Goodall collaborates with cities facing flooding challenges due to climate change. He works in infrastructure, hydrology and technology, focusing on flood solutions and resiliency measures that best serve the localities.
Goodall is also the associate director of the Link Lab and leads research projects related to smart cities technology, one of the Link Lab's key research focus areas. Interaction with the local community is a critical component of the work, and this new role builds on that foundation.
As co-leaders of UVA's role in the network, Cummings and Goodall will promote opportunities for UVA peers to connect and cultivate public interest technology collaborations.
"Our purpose is to bring together researchers from a range of disciplines to imagine creative new solutions toward justness and fairness in the technology ecosystem," Cummings said. "We seek to inspire interdisciplinary approaches that leverage the extraordinary promise, potential and power of technology for the social good and for the public good."
Goodall and Cummings also will lead UVA teams in cooperative efforts with other network member schools aimed at supporting the use of data and technology to deliver better outcomes to the public.
"Working with peer institutions will be imperative in defining what public interest technology will look like in the future," Goodall said. "This problem is bigger than any one college or university, so collaborating across universities will be important."
"The Public Interest Technology University Network collaboration offers an extraordinary opportunity to reimagine the world in a way that technology can be used for the benefit of all," Cummings said. "Everything I have done in my past prepared me for that future."
CAMBRIDGE, Mass., Sept. 29, 2021 /PRNewswire/ -- MetaCell, an innovative life science software company specializing in creating cutting-edge research software for major pharma, biotech, and academic institutions, has launched MetaCell Cloud Hosting, a brand-new online product providing advanced cloud computing solutions to facilitate research and innovation in life science and healthcare organizations of all sizes.
Introducing MetaCell Cloud Hosting
MetaCell specializes in designing custom software services for the pharmaceutical industry, the healthcare sector, and for researchers in academia. In doing so, MetaCell helps its customers overcome challenging information management problems that they have found difficult to navigate with their IT departments and with large service providers like Amazon and Google. MetaCell Cloud Hosting provides scientists, pharmaceutical companies and research institutions with a turnkey online product to host and process their life sciences data and applications.
A unique feature provided by MetaCell Cloud Hosting is its customer-tailored capability for biomedical and life science data and software applications that delivers the optimal allocation of cloud resources based on the budget and performance goals of the researchers. This includes affordable storage on trusted servers and access to world-class computing resources, which can be efficiently scaled up to meet the growing need for big data analytics, bioinformatics, digital health, and artificial intelligence. Enabling hosted applications to comply with all major international regulatory frameworks such as GDPR, HIPAA, SOC 2, and relevant ISO standards is another significant value add that MetaCell brings to the market with the release of this new product.
Stephen Larson, CEO of MetaCell, said: "We're thrilled that we are officially launching our MetaCell Cloud Hosting product. From advanced custom software applications to single page websites showing off their work, Cloud Hosting will help researchers avoid the headaches associated with ongoing management of their online software and data holdings."
Dr. Rick Gerkin, Associate Research Professor at Arizona State University (ASU), commented: "Besides enabling us to host our research applications and data in a safe cloud infrastructure, MetaCell Cloud Hosting will save us precious time which we'll no longer spend trying to fix technical issues, and instead dedicate it to what we care about most: advancing our research. We've been partnering with MetaCell for a number of years and they have demonstrated their expertise in developing and maintaining our cloud software and databases. We look forward to taking advantage of their new product."
MetaCell is a life science-focused software company composed of scientists and software engineers with deep domain expertise in computational neuroscience, molecular biology, data science, and enterprise-grade online software development. Over the last ten years, MetaCell has established a global presence by partnering with the world's largest pharmaceutical companies including Pfizer and Biogen, leading universities such as Yale University, Princeton University, UCSD, UCL, ASU, SUNY Downstate, and University of Edinburgh, as well as innovative organizations such as CAMH, INCF, and EMBL-EBI.
View original content to download multimedia:https://www.prnewswire.com/news-releases/metacell-launches-innovative-cloud-hosting-for-life-science-and-healthcare-301387233.html
Top Data Scientists Recognized for Advanced Research and Applied Data Science in Topics Spanning COVID-19, Disaster Work Zones, and Diverse Time Range Queries
SAN DIEGO, Sept. 28, 2021 /PRNewswire/ -- The Association for Computing Machinery (ACM) Special Interest Group on Knowledge Discovery and Data Mining (SIGKDD) today announced the recipients of the SIGKDD Best Paper Awards, recognizing papers presented at the annual SIGKDD conference that advance the fundamental understanding of the field of knowledge discovery in data and data mining. Winners were selected from more than 2,200 papers initially submitted for consideration to be presented at KDD 2021, which took place Aug. 14-18. Of the 394 papers chosen for the conference, three awards were granted: Best Paper in the Research Track, Best Paper in the Applied Data Science Track, and Best Student Paper.
"Academic and industrial researchers from all over the world submitted papers to KDD 2021 to showcase the newest innovations in the field of machine learning knowledge discovery," noted Dr. Haixun Wang, chair of the SIGKDD award committee. "Those selected for recognition have pushed the frontier of machine learning especially in tackling real-world problems." The SIGKDD Best Papers of 2021 are as follows:
Research Track: "Fast and Memory-Efficient Tucker Decomposition for Answering Diverse Time Range Queries," by Jun-gi Jang and U Kang (both from Seoul National University) Having investigated methods that analyze dense tensors to discover hidden factors, researchers showed that the Zoom-Tucker is a fast and memory-efficient Tucker decomposition method for finding hidden factors of temporal tensor data in an arbitrary time range. The paper illustrates that by elaborately decoupling the preprocessed results included in diverse time range and carefully determining the order of computations, the Zoom-Tucker method is up to 171.9x faster and requires up to 230x less space than existing methods, providing a creative solution that yielded astounding results.
Research Track, Student Paper: "Spectral Clustering of Attributed Multi-Relational Graphs," by Ylli Sadikaj (University of Vienna), Yllka Velaj (University of Vienna), Sahar Behzadi Soheil (University of Vienna), and Claudia Plant (University of Vienna). Having investigated the challenge of graph clustering when complex data in many domains are represented as both attributed and multi-relational networks, the researchers proposed SpectralMix, a joint dimensionality reduction technique for multi-relational graphs with categorical node attributes. SpectralMix integrates all information available from the attributes, the different types of relations, and the graph structure to enable a sound interpretation of the clustering results.
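To give a flavor of the spectral approach, here is a minimal sketch that partitions a plain (single-relation, unattributed) graph using the Fiedler vector of its Laplacian. SpectralMix goes well beyond this by jointly embedding attributes and multiple relation types; the graph below is invented for illustration:

```python
import numpy as np

def spectral_bisect(A):
    """Split a graph into two clusters via the Fiedler vector of L = D - A."""
    L = np.diag(A.sum(axis=1)) - A
    _, vecs = np.linalg.eigh(L)      # eigenvectors sorted by eigenvalue
    fiedler = vecs[:, 1]             # vector for the second-smallest eigenvalue
    return (fiedler > np.median(fiedler)).astype(int)

# Toy graph: two 4-node cliques joined by a single bridge edge (3 <-> 4)
A = np.zeros((8, 8))
A[:4, :4] = 1.0
A[4:, 4:] = 1.0
np.fill_diagonal(A, 0.0)
A[3, 4] = A[4, 3] = 1.0

labels = spectral_bisect(A)
print(labels)  # the two cliques land in different clusters
```

The Fiedler vector encodes the "loosest" cut of the graph; thresholding it recovers the two densely connected groups.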
Applied Data Science Track: "Supporting COVID-19 Policy Response with Large-Scale Mobility-Based Modeling," by Serina Chang (Stanford University), Mandy Wilson (University of Virginia), Bryan Leroy Lewis (University of Virginia), Zakaria Mehrab (University of Virginia), Emma J. Pierson (Microsoft Research), Pang Wei Koh (Stanford University), Jaline Gerardin (Northwestern University), Beth Red Bird (Northwestern University), David Grusky (Stanford University), Madhav Marathe (University of Virginia), and Jure Lesovec (Stanford University) The authors introduced a decision-support tool that utilizes large-scale data and epidemiological modeling to quantify the impact of changes in mobility on infection rates. The model captured the spread of COVID-19 by using a fine-grained, dynamic mobility network that encodes the hourly movements of people from neighborhoods to individual places, with more than 3 billion hourly edges. The paper describes the robust computational infrastructure required to support millions of model realizations that can simulate a wide variety of reopening plans, giving policymakers an analytical tool to assess the tradeoffs between future infections and mobility restrictions.
Applied Data Science Track, Runner Up: "Energy-Efficient 3D Vehicular Crowdsourcing for Disaster Response by Distributed Deep Reinforcement Learning," by Hao Wang (Beijing Institute of Technology), Chi (Harold) Liu (Beijing Institute of Technology), Zipeng Dai (Beijing Institute of Technology), Jian Tang (DiDi Chuxing), and Guoren Wang (Beijing Institute of Technology). The authors presented DRL-DisasterVC(3D), a distributed deep reinforcement learning framework, to maximize the amount of collected data from unmanned vehicles in a 3-dimensional (3D) disaster work zone. The paper described a 3D convolutional neural network with multi-head relational attention for spatial modeling and auxiliary pixel control for spatial exploration, and a novel disaster response simulator, called "DisasterSim," used to conduct extensive experiments showing that DRL-DisasterVC(3D) maximizes data collection, geographical fairness, and energy efficiency, while minimizing data dropout due to limited transmission rate.
The technical program committees for the Research Track and the Applied Data Science Track identified and nominated a highly selective group of papers for the Best Paper Awards. The nominated papers were then independently reviewed by a committee led by Chair Haixun Wang, vice president of engineering and algorithms at Instacart; Professor Wei Wang, University of California, Los Angeles; Professor Beng Chin Ooi, National University of Singapore; Professor Jiawei Han, University of Illinois at Urbana-Champaign; and Sanjay Chawla, research director of Qatar Computing Research Institute's data analytics department.
For more information on KDD 2021, please visit: https://www.kdd.org/kdd2021/.
About ACM SIGKDD: ACM is the premier global professional organization for researchers and professionals dedicated to the advancement of the science and practice of knowledge discovery and data mining. SIGKDD is ACM's Special Interest Group on Knowledge Discovery and Data Mining. The annual KDD International Conference on Knowledge Discovery and Data Mining is the premier interdisciplinary conference for data mining, data science and analytics.
View original content to download multimedia:https://www.prnewswire.com/news-releases/kdd-2021-honors-recipients-of-the-sigkdd-best-paper-awards-301386915.html
SOURCE ACM SIGKDD
SAN JOSE, Calif. & HYDERABAD, India, September 29, 2021--(BUSINESS WIRE)--Analytics Insight has announced the 'Big Data Analytics Companies of the Year' in its September magazine issue. The issue focuses on trailblazing companies that are analyzing data to accelerate business development.
The magazine recognizes ten futuristic companies driving exponential growth in the big data analytics ecosystem through innovative solutions. They help businesses achieve success by understanding patterns and deriving meaningful answers from past, present, and future data. These companies use the latest tools and software to analyze data and provide a wide variety of insightful solutions that help organizations make the right business decisions. Here is the list of the top ten big data analytics companies that are redefining the industry by giving businesses an edge in 2021.
Featuring as the Cover Story is SWARM Engineering, a company transforming the way people solve problems, with a vision to democratize the use of AI and make it easily accessible to everyday business users. SWARM is focused on tackling inefficiencies in the agri-food supply chain, reducing food waste, and lowering carbon footprints, by finding more effective ways to operate processes such as forecasting yield or balancing supply and demand.
The issue features SDG Group and Qubedocs as the Companies of the Month.
SDG Group: SDG Group is a trailblazer in the data and analytics field. For the past 25 years, the company has been helping clients transform data into business decisions. SDG Group serves its vision through services and capabilities that support the whole data journey.
Qubedocs: Qubedocs provides automated documentation for IBM's TM1/Planning Analytics (PA) tool. Some of the USA's largest corporate budgeting and planning models rely on the power and versatility of the company's product.
Other honourable companies include:
VisionSoft: VisionSoft is one of the top big data analytics companies addressing the key challenges faced by the industry in controlling and managing big data, as well as complexities, with HANA Solutions, cloud managed services, and digitization.
Tenzai Systems: Tenzai Systems is a purpose-driven AI company founded by award-winning data science leaders with the vision of helping organizations realize the true potential of artificial intelligence. The company creates impactful AI solutions for clients that are accessible.
Stefanini Group: Stefanini Group is a US$1 billion global technology company that provides organizations of all shapes and sizes with a broad portfolio of digital transformation services and solutions. The company offers AI, automation, cloud, IoT, and user experience services.
Flip Robo: Flip Robo is an artificial intelligence and development company. It specializes in chatbots, web scraping, and building algorithms that help people scale up their businesses. The company also has expertise in web and mobile development.
Cloudera: Cloudera accelerates digital transformation for the world's largest enterprises. The company helps innovative organizations across all industries to tackle transformational use cases and extract real-time insights from an ever-increasing amount of data to drive value.
SAP: SAP is the market leader in enterprise application software, helping companies of all sizes and in all industries run at their best. SAP's machine learning, Internet of Things (IoT), and advanced analytics technologies help turn customers' businesses into intelligent enterprises.
Alteryx: Alteryx helps customers achieve outcomes from their data to create business-changing breakthroughs every day. The company's human-centered analytics automation platform unifies data science, analytics, and process automation to help clients harness complex data.
"The emergence of technology in diverse sectors has led to the generation of massive amounts of data every day. Without data analytics, businesses cannot understand what this information means. In this issue, Analytics Insight aims to recognize and celebrate big data analytics companies that are sophisticating the business decision-making process with the help of data," says Adilin Beatrice, Associate Manager at Analytics Insight.
Read the detailed coverage here. For more information, please visit https://www.analyticsinsight.net/.
About Analytics Insight
Analytics Insight is an influential platform dedicated to insights, trends, and opinions from the world of data-driven technologies. It monitors developments, recognition, and achievements made by AI, big data, and analytics companies across the globe. The Analytics Insight Magazine features opinions and views from top leaders and executives in the industry who share their journey, experiences, success stories, and knowledge to grow profitable businesses.
To set up an interview or advertise your brand, contact email@example.com
View source version on businesswire.com: https://www.businesswire.com/news/home/20210929005638/en/
Ashish Sukhadeve, Founder & CEO
Email: firstname.lastname@example.org
Tel: +91-40-23055215
http://www.analyticsinsight.net
Most serious data scientists prefer R to Python, but if you want to work in data science or machine learning in an investment bank, you're probably going to have to put your partiality to R aside. Banks overwhelmingly use Python instead.
"Python is preferred to R in banks for a number of reasons," says the New York-based head of data science at one leading bank. "There's greater availability of machine learning packages like sklearn in Python; it's better for generic programming tasks and is more easily productionized; plus Python's better for data cleaning (like Perl used to be) and for text analysis."
For this reason, he said banks have moved their data analysis to Python almost entirely. There are a few exceptions: some strats jobs use R, but for the most part Python predominates.
Nonetheless, R still has its fans. Jeffrey Ryan, the former star quant at Citadel, is a big proponent of R and runs an annual conference on R in finance (canceled this year due to COVID-19). "R was designed to be data-centric and was researcher-built," says Ryan. "Whereas Python co-opted R's data frame and time series, via Pandas [the open-source software library for data manipulation in Python built by Wes McKinney, a former software developer at Two Sigma]."
R is still used in statistical work and research, says Ryan. By comparison, Python is the tool of "popular data analysis," and is easy to use without learning statistics. "Python found a whole new audience of programmers at the exact right moment in history," Ryan reflects. "When programmers (more numerous than statisticians) want to work with data, Python has the appeal of a single language that 'does it all,' even if it technically does none of this by design."
Given the importance of data in financial services, it might be presumed that banks would favor the more capable language, even if it does require extra effort to master. However, Graham Giller, chief executive officer at Giller Investments and a former head of data science research at JPMorgan and Deutsche Bank, says banks have settled on Python over R because banks' IT departments are predominantly run by computer scientists rather than people who care a lot about data.
"Personally, I like R a lot," says Giller. "R is much more of a tool for professional statisticians, meaning people who are interested in inference about data, rather than computer scientists, who are people interested in code." As the computer scientists in banks have gained traction, Giller says banks have "replaced quants with IT professionals or with quants who deep down want to be IT professionals," and they've brought Python with them.
For the pure mathematicians in finance, it's all a bit frustrating. Pandas was built on the back of R, but has taken on a life of its own. "Pandas started out as a way to bring an R-like environment to Python," says Giller, observing that Pandas can be "horrifically slow and inefficient" by comparison.
Most people don't care about this though: the more that Python and Pandas are used, the more use cases they have. "R has a relatively smaller user base than Python at this point," says Ryan. "This in turn means a lot of tools start to get created around Python and data, and it builds upon its success."
The World Artificial Intelligence and Data Science Conference (ADC-2021) will take place virtually on 13th October 2021 at 9 a.m. The conference will be an amalgamation of next-generation technologies and strategies from across the Artificial Intelligence and Data Science world. This is a perfect opportunity to discover the practical side of implementing AI and Data Science to take your business ahead in 2021 and in the years to come.
Digital acceleration has completely changed the competitive landscape. In this scenario, as AI and data leaders work to expand their organizations and institutes, ADC-2021 is a great platform that can address the AI and data fundamentals surrounding diverse businesses without compromising on trust, quality, and integrity.
The program of ADC-2021 emphasizes the business-critical data management challenges, methodologies, tools, and strategies that organizations can apply to solve issues within their organizations. The conclave aims to bring all the people associated with AI and Data Science, such as data scientists, researchers, business leaders, C-level executives, innovators, and entrepreneurs from across the globe, under one umbrella to share experiences that can benefit the bigger ecosystem.
So what are you waiting for? It's a shoutout to all the enterprise executives, IT decision-makers, business leaders, heads of innovation, chief data scientists, chief data officers, data architects, data analysts, tech providers, tech start-ups, venture capitalists, and all the innovators to grab this wonderful opportunity and explore great aspects of the AI and Data Science streams from industry experts.
To know more and to be part of the conference, register here!
See you there at ADC-2021 on 13th October 2021 at 9 a.m.!
mRNA Could Fight Diseases Such as Alzheimer’s and Cancer, With Help of UVA Scientist – University of Virginia
This year, the public was introduced to messenger ribonucleic acid, or mRNA, when it became a hero in the worldwide race to develop COVID-19 vaccines. Scientists lab-engineered mRNA to instruct human cells how to recognize, and then destroy, the spike protein that is the entryway for the virus.
Highly effective and precise, the approach offered a glimpse into the power of mRNA technology. Messenger RNA could one day help the human body tackle diseases like cancer with the same effectiveness.
Yanjun Qi, an associate professor of computer science in the University of Virginia's School of Engineering and Applied Science, could hold the key to making that happen.
DNA, a human's genetic code, holds the instructions that direct cells in performing all biological functions. Messenger RNA carries those instructions to the cells. Scientists hope to harness the body's DNA-to-mRNA-to-cell action pathway, a process called gene expression, for precision medicine.
Before that potential can become a reality, however, researchers must discover what instructions the genes in our DNA are sending through messenger RNA.
Qi is on the leading edge of this discovery. She is using powerful deep-learning models to analyze biomedical data to uncover how genes and messenger RNA interact.
"The relationship between the DNA's instructions; their messengers, the mRNA; and how they direct cell activity is not really clear for the majority of diseases," Qi said. "What we are trying to understand is the DNA-to-the-messenger-RNA step, because it informs us how the genetic code is connected to the expression of disease."
Uncovering those connections could lead to a future with highly targeted therapies. Just as mRNA can instruct a cell to block a virus from invading the body, the DNA's messengers could one day arm cells with the relevant instructions to mount a front-line defense against disease, well before it can even take hold.
Qi stresses that the work is in the earliest stage of discovery.
"Huge amounts of data about genetic code are being compiled," she said. "The question is how to make sense out of that data for useful purposes. We are creating artificial intelligence tools to find things that are entire unknowns right now. We are talking about long time horizons, and we believe we are going to get there."
The far-reaching finish line reflects the sheer size of the task. A human's genetic code includes 6 billion data points that contribute to gene expression, which is then connected to the more than 10¹³ (10 million million) cells of the human body.
"There are biological pathways from genes to the mRNAs to proteins that perform millions of functions," Qi said. "Decoding such a massive amount of detail into specific pathways for disease is a gargantuan task."
That is where the powerful artificial intelligence-based computer models come into play. They can detect patterns in the data that make it easier to find those connections.
"A model can generalize inferences from what it has seen before, apply that to unknowns and more quickly recognize something new," Qi said. "Each new finding helps narrow the focus of the ongoing search because the computer is learning from the history of the data to recognize basic rules."
When new rules are uncovered, researchers can extrapolate them and go outside the existing knowledge.
In her role as adjunct faculty member in both the UVA School of Medicine's Center for Public Health Genomics and the UVA School of Data Science, Qi collaborates with biological researchers in different areas of medicine to model data for a better understanding of genetic code and its relationship to disease.
A lifelong interest in holistic and natural approaches to science, combined with a keen desire to create the mathematical tools to solve complex problems, put Qi on the path that overlaps AI with biology and medicine.
"Computer science is a skill and a tool because the algorithms we create are agnostic and can tackle any task," she said. "A good tool can profoundly solve the task."
Qi's longest-running collaboration since joining UVA Engineering in 2013 is with Center for Public Health Genomics resident member Clint L. Miller, an assistant professor in the School of Medicine's Department of Public Health Sciences. The two have created a tool that can be used on specific data to study genetic factors related to cardiovascular disease risk.
Miller initially reached out to Qi for her expertise when a student in his lab expressed interest in learning more about artificial intelligence methods.
"After the first meeting, we realized that we had complementary research programs and similar interests, so we naturally started collaborating," he said.
Miller points out that the fields of genomic medicine and genetic-informed drug discovery are rapidly evolving due to the plummeting cost of DNA sequencing combined with the rise of more scalable computational analysis tools.
"We are now at a key inflection point where the integration of large-scale human genetic datasets with AI-based predictive algorithms can be harnessed to develop the next generation of precision medicines," he said. "The goal of our work in this space is to accelerate the discovery and translation of genetic-based medicines."
Miller, who holds secondary appointments in biomedical engineering as well as biochemistry and molecular genetics, believes that the key to innovation lies in bridging knowledge gaps across disciplines. By combining his expertise in the biology of disease with Qi's knowledge of machine learning, they hope to answer the most pressing biomedical questions in the field.
In every collaboration, Qi equally weighs the power of listening, learning and forming understandings with the power of the AI models themselves.
"I am always trying to build good tools," she said. "In order to create good tools, you have to understand your user. I seek to understand the problems that the biologists I collaborate with are trying to solve, specifically what data they are using."
This year, Qi was recognized for her contributions to research that advances medicine when she was recruited as a National Scholar of Data and Technology by the National Institutes of Health. She will contribute ideas and tools that leverage large genomics datasets to provide a better understanding of Alzheimer's disease in the quest for effective treatments.
The food industry is one of the largest and most powerful industries today, facing extremely high demand to fulfill customer needs, especially in a densely populated country like India, where the market is massive.
The food industry spans many participants and stages, from producers such as farmers, to distributors such as ITC, to grocers such as Metro and More supermarkets, and on to restaurants. Food must be sourced, maintained, delivered and at times safeguarded. Managing such intricacy and complexity requires properly structured data and administration, and for this, advanced data science and computer science techniques such as AI and ML are needed.
The objective of AI is to explore a vast space of possibilities to arrive at the most appropriate solution to a problem. From grocery shopping to restaurant food deliveries, everything is being personalized for customers based on their purchasing behavior. Some of the biggest players in the food business are adopting AI to improve food manufacturing and processing.
Below are some of the ways data science is changing the food industry:
Food delivery is now a science and a critical differentiator for any food business. A great deal of planning, logistics and exception handling goes into simply getting a warm pizza from the restaurant to a customer's doorstep on time.
Big data and data science frameworks can be used to monitor and understand factors like traffic, weather, route changes, road construction and even distance. This data is used to build a more sophisticated model that estimates the time needed to travel to a delivery location.
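As a toy illustration of such a model, the travel-time factors above can be folded into a simple estimate. Everything here, the function name, the coefficients and the speed penalties, is an assumption made for the sketch, not drawn from any real delivery system:

```python
# Illustrative delivery-time estimate combining distance, traffic and weather.
# All coefficients are assumptions for this sketch.

def estimate_delivery_minutes(distance_km, traffic_level, bad_weather, prep_minutes=15):
    """Estimate door-to-door delivery time in minutes.

    traffic_level: 0.0 (empty roads) .. 1.0 (gridlock)
    bad_weather:   True if rain or storms slow the rider
    """
    base_speed_kmh = 30.0                                  # assumed average rider speed
    speed = base_speed_kmh * (1.0 - 0.5 * traffic_level)   # traffic slows travel
    if bad_weather:
        speed *= 0.8                                       # weather penalty
    travel_minutes = distance_km / speed * 60.0
    return prep_minutes + travel_minutes

print(round(estimate_delivery_minutes(5.0, 0.4, False)))  # about 28 minutes
```

A production system would learn these coefficients from historical delivery data rather than hard-coding them.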
One incredibly significant aspect of the modern market is customer sentiment: the overall attitude of customers toward a brand, its products and their individual experiences. Statistically interpretable data is created by monitoring activity across social media, which is then pulled, integrated, analyzed and even visualized. This information helps drive better business decisions. Sentiment analysis is one of the most significant tools for understanding trends and identifying popular products.
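A minimal sketch of lexicon-based scoring, the simplest form of the sentiment analysis described above. The word lists and the `sentiment_score` helper are invented for illustration; real systems use far larger lexicons or trained models:

```python
# Toy lexicon-based sentiment scoring for social-media posts.
# The word lists below are assumptions made for this sketch.

POSITIVE = {"great", "tasty", "fast", "fresh", "love"}
NEGATIVE = {"cold", "late", "stale", "awful", "slow"}

def sentiment_score(post: str) -> int:
    """Return (#positive - #negative) word hits for one post."""
    words = post.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

posts = ["Love the fresh pizza fast delivery", "Cold food and late again"]
print([sentiment_score(p) for p in posts])  # [3, -2]
```

Aggregating such scores over thousands of posts gives the brand-level trend signal the paragraph describes.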
Every business needs to spread awareness and build brand loyalty, and this is another area where data science comes into play. It offers incredible insights into marketing strategy: when and where your brand or products may be relevant, which advertising platforms to use, and who your potential customers are.
It is extremely important for today's supply chains to run smoothly and improve customer relations. This covers every participant and every product source. Through transparency, brands can build trust and good relations with customers, deliver better products and establish authority. Customers want assurance that their favorite brands use hygienic, environmentally friendly and cruelty-free practices. Big data lets organizations and suppliers track their sourced and shipped products, enabling that transparency and, in turn, earning customer trust and profits.
AI-powered weather forecasting is also being used by Indian farmers to cope with a climate as varied and unpredictable as India's. It helps farmers improve yields and helps transportation companies reduce delivery costs. Advances in robotics are also being applied across manufacturing, including food processing and distribution.
The following are additional ways in which data science is improving the Indian food business.
Data science is also used to understand regional tastes: India is extremely rich and varied in its cultures, and big data helps map taste and food preferences to the demographics and psychographics of the population.
The use of data science in the food business has a huge impact, from sourcing the right kind of products, to the quality of the food being served, to timely delivery at the customer's doorstep. The role of data cannot be ignored by anyone who wants to lead the food business in this digital economy.
Media advisory: Kevin Leicht to testify before congressional subcommittee about disinformation – University of Illinois News
CHAMPAIGN, Ill. — Kevin T. Leicht, a professor of sociology at the University of Illinois Urbana-Champaign, will testify before the U.S. House of Representatives Science, Space and Technology Committee's Subcommittee on Investigations and Oversight on Tuesday, Sept. 28.
Leicht's remarks will focus on the role of internet disinformation in fomenting distrust of experts and established scientific knowledge, in particular with respect to COVID-19 vaccines and miracle cures.
Leicht, Alan Mislove of Northeastern University, and Laura Edelson of New York University will co-present "The Disinformation Black Box: Researching Social Media Data" at 10 a.m. EDT. They will testify remotely via videoconferencing.
The hearing will be livestreamed at https://science.house.gov/hearings.
Leicht's research centers on the political and social consequences of social inequality and cultural fragmentation. His current work explores the growing skepticism toward scientists and attacks on experts and established scientific knowledge spread via social media.
Leicht is the principal investigator on the National Science Foundation-funded project "RAPID: Tracking and Network Analysis of the Spread of Misinformation Regarding COVID-19."
U. of I. faculty members Joseph Yun, the director of data science research services and a professor of accounting in the Gies College of Business, and Brant Houston, the Knight Chair Professor in Investigative and Enterprise Reporting in the College of Media, are co-principal investigators on the project.