Category Archives: Data Science
Heard on the Street 10/28/2021 – insideBIGDATA
Welcome to insideBIGDATA's Heard on the Street round-up column! In this new regular feature, we highlight thought-leadership commentaries from members of the big data ecosystem. Each edition covers the trends of the day with compelling perspectives that can provide important insights to give you a competitive advantage in the marketplace. We invite submissions with a focus on our favored technology topic areas: big data, data science, machine learning, AI and deep learning. Enjoy!
Paradigm shifts needed to establish greater harmony between artificial intelligence (AI) and human intelligence (HI). Commentary by Kevin Scott, CTO, Microsoft.
Comparing AI to HI has a long history of people assuming things that are easy for them will be easy for machines and vice versa, but it's more the opposite. Humans find becoming a Grand Master of chess, or performing very complicated and repetitive data work, difficult, whereas machines are easily able to do those things. But on things we take for granted, like common sense reasoning, machines still have a long way to go. AI is a tool to help humans do cognitive work. It's not about whether AI is becoming the exact equivalent of HI; that's not even a goal I'm working toward.
Importance of utilizing the objectivity of data to discover areas of opportunity within organizations. Commentary by Eric Mader, Principal Consultant, AHEAD.
During this era of accelerated tech adoption and digital transformation, more and more companies are turning to data collection and storage to fuel business decisions. This is, in part, because 2020 lacked the stability to serve as a functional baseline for business decisions in 2021. With this surge of organizations leaning on data and analytics, it's essential to have a clear understanding of how this information can be helpful, but also harmful. These companies may inherently understand the importance of analyzing their data, but their biggest problem lies in their approach to preventing data bias. Accepting the natural appetite within organizations to lead their data to some degree is an important first step in capitalizing on the true objectivity of data. Luckily, there is one piece of advice organizations must remember to avoid falling prey to confirmation bias in data analytics: be careful how you communicate your findings. It's a simple practice that can make all the difference. While the researchers analyzing the data should be aware of common statistical mistakes and their own biases toward a specific answer, careful attention should also be paid to how data is presented. Data teams have to avoid communicating their findings in ways that might be misleading or misinterpreted. By upholding a level of meticulousness within their data strategy, organizations can ensure that their data approach is working with them and not against them.
Delivering Better Patient Experiences with AI. Commentary by Joe Hagan, Chief Product Officer at LumenVox.
Call management is critical in the healthcare industry to support patient needs such as scheduling, care questions and prescription refills. However, data suggests that more than 50% of contact agents' time is spent on resetting passwords for patient applications and portals, with each password taking three or more minutes to reset. How can healthcare call centers overcome this time sink and better serve patients? AI-enabled technologies such as speech recognition and voice biometrics. According to a survey from LumenVox and SpinSci, nearly 40% of healthcare providers want to invest in AI for their contact centers in the next one to three years. The great digital disruption in healthcare that prioritizes the patient experience is here. As healthcare contact centers take advantage of technologies such as AI, they will be better equipped to deliver high-quality service to patients.
AI is the Future of Video Surveillance. Commentary by Rick Bentley, CEO of Cloudastructure.
Not long ago, if you wanted a computer to recognize a car then it was up to you to explain to it what a car looks like: it's got these black round things called wheels at the bottom, the windows kind of go around the top half-ish part, they're kind of rounded on top and shiny, the lights on the back are sometimes red, the ones up front are sometimes white, the ones on the side are sometimes yellow. As you can imagine, it doesn't work very well. Today you just take 10,000 pictures of cars and 50,000 pictures of close-but-not cars (motorcycles, jet skis, airplanes) and your AI/ML platform will do a better job of detecting cars than you ever could. Intelligent AI- and ML-powered video solutions are the way of the future, and if businesses don't keep up, they could be putting themselves and their employees at risk. Two years ago, more than 90% of enterprises were still stuck on outdated on-premises security methods, many with limited if any AI intelligence. The industry is undergoing rapid transformation as businesses take advantage of this technology and move their systems to the cloud. Enhanced AI functionality and cloud adoption in video surveillance have allowed business owners and IT departments to monitor the security of their businesses from the safety of their homes. The AI surveillance solution can be accessed from any authorized mobile device, and the video is stored safely off premises so that it cannot be hacked and is safe from environmental hazards. Additionally, powerful AI analytics allow intelligent surveillance systems to sort through large volumes of footage to identify interesting activity more than 10x faster and more accurately than manual, on-premises solutions. In the upcoming years, AI functionality will continue to get more and more advanced, allowing businesses to generate real-time insight and enable a rapid response to incidents, resulting in a more efficient and safer society.
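To make that contrast concrete, here is a minimal sketch of the "just show it examples" approach: fine-tuning a stock image classifier on labeled photos of cars and not-cars. This is an illustration, not Cloudastructure's pipeline; the folder layout, model choice, and hyperparameters are all assumed placeholders.

```python
# Illustrative sketch: train a car / not-car classifier from labeled example
# images instead of hand-written rules. Assumes a hypothetical folder layout
# like data/train/car/*.jpg and data/train/not_car/*.jpg.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

train_set = datasets.ImageFolder("data/train", transform=preprocess)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Start from a pretrained backbone; replace the head with a 2-class output.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:  # one pass over the labeled examples
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```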
How Businesses Can Properly Apply and Maximize AI's Impact. Commentary by Bren Briggs, VP of DevSpecOps at Hypergiant.
Businesses that do not utilize AI and ML solutions will become increasingly irrelevant. This is not because AI or ML are magic bullets, but rather because utilizing them is a hallmark of innovative, resilience-first thinking. For businesses to maximize the impact of AI, they must first pose critical business questions and then seek solutions that streamline data management as well as improve and strengthen business processes. AI, when utilized well, helps companies predict problems and then act to swiftly respond to those challenges. I always encourage companies to focus on the basics: hire the experts, set up the models for success, and pick the AI solutions that will most benefit their organization. In doing that, companies should ramp up their data science and MLOps teams, which can help assess which AI problems are most likely to be successful and have a strong ROI. A cost/benefit analysis will help you determine if an AI integration is actually the best use of your company's resources at any given time.
AI as a Software Developer in the Context of Product Development. Commentary by Jonathan Grandperrin, CEO, Mindee.
As artificial intelligence continues to take great strides and more machine learning-based products for engineering teams emerge, a challenge arises: a pressing need for software developers to skill up, understand how AI functions, and learn how to appropriately leverage it. AI's capabilities to automate tasks and optimize multiple if not all processes have revolutionized the world. To reach the promised efficiency, AI must be integrated into all day-to-day products, such as websites, mobile applications, even products like smart TVs. However, a problem comes up for developers in this context: AI does not rely on the same principles as software development. Unlike software development, which relies on deterministic functions, AI is most of the time based on a statistical approximation, which changes the whole paradigm from the point of view of a software developer. It is also the reason behind the rise of data science positions in software companies. To succeed, developers must hold the capacity to create AI models from scratch and provide technical teams with ML features that they can understand and utilize. Fortunately, it's becoming increasingly common to see ML libraries for developers. In fact, those looking to ramp up on their ML skills can participate in intro courses that can easily extend their skill set.
How to Solve ML's Long Tail Problem. Commentary by Russell Kaplan, Scale AI's Head of Nucleus.
The most common problem in machine learning today is uncommon data. With ML deployed in ever more production environments, challenging edge cases have become the norm instead of the exception. Best practices for ML model development are shifting as a result. In the old world, ML engineers would collect a dataset, make a train/test split, and then go to town on training experiments: tuning hyperparameters, model architectures, data augmentation, and more. When the test set accuracy was high enough, it was time to ship. Training experiments still help, but are no longer the highest leverage. Increasingly, teams hold their models fixed while iterating on their datasets, rather than the other way round. Not only does this lead to larger improvements, it's also the only way to make targeted fixes to specific ML issues seen in production. In ML, you cannot if-statement your way out of a failing edge case. To solve long tail challenges in this way, it's not enough to make dataset changes; you have to make the right dataset changes. This means knowing where in the data distribution your model is failing. The concept of one test set no longer fits. Instead, high-performing ML teams curate collections of many hard test sets covering a diversity of settings, and measure accuracy on each. Beyond helping inform what data to collect and label next to drive the greatest model improvement, the many-test-sets approach also helps catch regressions. If aggregate accuracy goes up but performance in critical edge cases goes down, your model may need another turn before you ship.
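A minimal sketch of the many-test-sets practice Kaplan describes might look like the following. The slice names, tolerance, and baseline values are hypothetical; the point is only the shape of the workflow: hold the model fixed, score every curated slice, and flag regressions even when aggregate accuracy improves.

```python
# Sketch: evaluate one fixed model on many named "hard" test sets and flag
# any slice whose accuracy regressed versus the previous model's baseline.
from typing import Callable, Dict, List, Tuple

def accuracy(model: Callable, examples: List[Tuple[object, object]]) -> float:
    correct = sum(1 for x, y in examples if model(x) == y)
    return correct / len(examples)

def evaluate_suite(
    model: Callable,
    test_sets: Dict[str, list],   # e.g. {"night_rain": [...], "occlusions": [...]}
    baseline: Dict[str, float],   # per-slice accuracy of the shipped model
    tolerance: float = 0.005,
) -> Dict[str, float]:
    scores = {name: accuracy(model, examples) for name, examples in test_sets.items()}
    for name, score in scores.items():
        if score < baseline.get(name, 0.0) - tolerance:
            print(f"REGRESSION on '{name}': {score:.3f} < baseline {baseline[name]:.3f}")
    return scores
```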
Top Data Privacy Considerations for M&As. Commentary by Matthew Carroll, CEO, Immuta.
M&A deals continue to soar globally, with the technology, financial services, and insurance industries leading the pack. However, with the increase in deals comes an increase in deals that fall through. Research shows that one of the primary reasons M&A deals fall through at a surprisingly high rate (between 70% and 90%) is data privacy and regulatory concerns as more companies move their data to the cloud. M&A transactions lead to an instantaneous growth in the number of data users, but the scope of data used is often complex and risky, especially when it involves highly sensitive personal, financial or health-related data. With two companies combining their separate vast data sets, it's imperative to find an efficient way to ensure that data protection methods and standards are consistent, that only authorized users can access data for approved uses, and that privacy regulations are adhered to across jurisdictions and the globe. Merging data is just the beginning. Once mergers are completed, the joint entities must be able to provide comprehensive auditing to prove compliance. Without a strong data governance framework, stakeholder buy-in, and automated tools that work across both companies' data ecosystems, this can lead to unmanageable and risk-prone processes that inhibit the value of the combined data and could lead to data vulnerabilities.
Why Training Your AI Systems Is More Complex Than You Think. Commentary by Doug Gilbert, CIO and Chief Digital Officer, Sutherland.
Few enterprises, if any, are ready to deploy AI systems or ML models that are completely free from any form of human intervention or oversight. When training algorithms, it's important to first understand the inherent risks of bias from the training environment, the selection of training data and algorithms based upon human expertise in that particular field, and the application of AI against the very specific problem it was trained to solve. Ignoring any or all of these can lead to unpredictable or negative outcomes. Human oversight using methods such as Human-in-the-Loop (HitL), Reinforcement Learning, Bias Detection, and Continuous Regression Testing helps ensure AI systems are trained adequately and effectively to deal with real-life interactions, work and use cases and create positive outcomes.
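As one illustration of the Human-in-the-Loop method Gilbert mentions (a generic sketch, not Sutherland's implementation), a deployed model can escalate low-confidence predictions to a reviewer and bank the corrections for the next retraining cycle:

```python
# Sketch of Human-in-the-Loop oversight: confident predictions flow through,
# uncertain ones are escalated to a person, and corrections are queued so the
# model can be retrained on them later. The threshold is an assumed value.
CONFIDENCE_THRESHOLD = 0.85

review_queue = []  # (example, corrected_label) pairs for the next retraining run

def predict_with_oversight(model, example, ask_human):
    label, confidence = model(example)               # model returns (label, score)
    if confidence >= CONFIDENCE_THRESHOLD:
        return label
    corrected = ask_human(example, suggested=label)  # escalate to a reviewer
    review_queue.append((example, corrected))        # fold back into training data
    return corrected
```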
Scientific vs. Data Science Use Cases. Commentary by Graham A. McGibbon, Director of Partnerships, ACD/Labs.
Current scientific informatics systems support electronic representations of data obtained from experiments and tests, which are often confirmatory analyses with interpretations. Data science is more often exploratory, and the supporting systems typically rely on data pipelines and the large amounts of clean, comprehensive data required for appropriate statistical treatments. Data science systems are founded on large volumes of data being well-identified via metadata, which is needed for the critical capability of machines to self-interpret these large datasets and subsequently derive correlations and predictions that are otherwise not obvious. Ultimately, some of these systems could cycle continuously and autonomously given sufficient coupling with automated data generation technologies. However, scientists want the ability to judge the output of their analyses and view and explore unanticipated features in their data along with any machine-derived interpretations. Consequently, these scientific consumers need representations of results that they can easily evaluate. When comparing the current output capabilities of data science systems to contemporary or historical scientific systems, they lack some of the semiotics that domain-specific scientists expect. As such, there remains a need to bridge data science and domain-specific science, particularly if changes are desired in the latter to make it machine-interpretable for further adoption. It's important to understand that data science and domain-specific science will likely have to make adjustments to accommodate each other to ultimately reap the full benefit of generating human-interpretable knowledge outputs.
Why Predictive Analytics are Increasingly Important for Smart, Sustainable Operations. Commentary by Steve Carlini, Vice President of Innovation and Data Center at Schneider Electric.
In the data center world, predictive analytics are used mainly for critical components in the power and cooling architecture to prevent unplanned downtime. For example, a DCIM solution can look at UPS system batteries and collect data on the number of discharges, temperatures, and overall age to come up with recommendations for battery replacement. These recommendations are based on different risk tolerances; for example, the software will say something like, "Battery System 4 has a 20% chance of failure next week and a 50% chance of failure within 2 months." Facility operators can then manage risk and make informed decisions regarding battery replacement. When using analytics on larger data centers, it is important that facility-level systems are included because they are the backbone of IT rooms. Power system software must cover the medium voltage switchgear, the busway, the low voltage switchgear, all the transformers, all the power panels, and the power distribution units. Cooling system software must cover the cooling towers, the chillers, the pumps, the variable speed drives, and the Computer Room Air Conditioners (CRACs). Due to the scale and level of machinery in larger data centers, it's necessary that all systems are included for comprehensive predictive analytics. As edge data centers become a critical part of the data center architecture and are deployed at scale, DCIM software benefits from unlimited cloud storage and data lakes. Predictive analytics become highly valuable as almost none of these sites are manned with service technicians. The DCIM system can leverage predictive analytics with a certain degree of linkage and automation to dispatch service personnel and replacement parts. As more data is collected, these analytics, leveraging machine learning models, will become more accurate and trusted. This is already in process: even today, operators of mission critical facilities have the ability to plan or design systems with less physical redundancy and rely on the software for advanced notifications regarding battery health.
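A toy version of the battery-health prediction Carlini describes could be built as a simple classifier over discharge count, temperature, and age. The features, training data, and output below are invented for illustration and are not Schneider Electric's DCIM logic:

```python
# Toy sketch: estimate a battery's probability of failing within two months
# from (discharge count, average temperature, age). All data is fabricated.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: discharges, avg_temp_C, age_months; label: failed within 2 months?
X = np.array([[12, 25, 18], [40, 35, 60], [5, 22, 6],
              [33, 31, 48], [28, 29, 36], [8, 24, 12]])
y = np.array([0, 1, 0, 1, 1, 0])

model = LogisticRegression().fit(X, y)

battery_4 = np.array([[30, 33, 42]])           # a hypothetical "Battery System 4"
p_fail = model.predict_proba(battery_4)[0, 1]
print(f"Battery System 4: {p_fail:.0%} chance of failure within 2 months")
```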
Insights on the Data Science Platform Global Market to 2027 – Featuring Microsoft, IBM and Google Among Others – Yahoo Finance
Dublin, Oct. 25, 2021 (GLOBE NEWSWIRE) -- The "Global Data Science Platform Market (2021-2027) by Component, Deployment, Organization Size, Function, Industry Vertical, and Geography, Competitive Analysis, Impact of Covid-19, Ansoff Analysis" report has been added to ResearchAndMarkets.com's offering.
The Global Data Science Platform Market is estimated to be USD 43.3 Bn in 2021 and is expected to reach USD 81.43 Bn by 2027, growing at a CAGR of 11.1%.
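Those figures are internally consistent: compounding the 2021 base over the six years to 2027 at the stated rate reproduces the 2027 estimate.

```latex
\mathrm{CAGR} = \left(\frac{81.43}{43.3}\right)^{1/6} - 1 \approx 0.111 = 11.1\%
```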
Key factors such as a massive increase in data volume due to increasing digitalization and automation of processes have been a crucial driver in the growth of data science platforms. Enterprises are also increasingly focusing on analytical tools for deriving insights into consumer behavior and purchasing patterns, which in turn has been shaping their business decisions and strategies to compete in the market. In addition, the adoption of data science platforms has found its way into various industry verticals such as manufacturing, IT, BFSI, retail, etc. All these factors have contributed to the growth of the data science platform market.
However, the costs attached to the deployment of these platforms, along with a shortage of workforce with domain expertise and threats to data privacy, have hindered the growth of the market.
Market Dynamics
Drivers
High Generation of Data Volumes
Rising Focus On Data-Driven Decisions
Increasing Adoption of Data Science Platforms Across Diversified Industry Verticals
Restraints
Opportunities
Increasing Adoption of Data-Driven Technologies by Enterprises
Increasing Demand for Public Cloud
Investments and Funding in Development of Big Data and Related Technologies by Public and Private Sectors
Challenges
Segments Covered
By Component, the market is classified as platform and services. Amongst the two, the Platforms segment holds the highest market share. With a rise in digitalization and automation in various processes, data has been at the forefront. With massive data being churned by the enterprises, the availability of data science platforms is proving beneficial to provide real-time insights and streamline the business processes accordingly. Therefore, enterprises are adopting these platforms to bring process uniformity and business efficiency. This has accelerated the demand for the platform segment.
By Deployment, the Cloud-based segment is estimated to hold the highest market share. Cloud-based platforms are comparatively cost-effective and scalable, with ease of deployment. Since they can be accessed with minimal capital requirements, they are considered a practical deployment option across varied industry sectors.
By Organization Size, Large Enterprises hold the highest market share. A data science platform has essential tools, such as predictive analytical tools, that can help an organization derive insights and provide meaningful business outcomes. Large-scale organizations have the financial backing to invest in such solutions and provide an enhanced customer experience. The real-time insights can help these enterprises improve their business processes too. Therefore, such enterprises hold a higher demand for data science platforms.
By Function, the Marketing and Sales segment is estimated to hold a high market share. Data science platforms in marketing and sales help decipher insights about buyer behavior patterns and marketing spending, and help enterprises generate more ROI. Enterprises also depend on these platforms for their reliability of service and reduction of financial risk, thereby generating higher revenues. Besides, these platforms are capable of providing an enhanced customer experience. This has led to a high adoption rate in the marketing and sales segment, resulting in market segment growth.
By Industry Vertical, the BFSI sector adequately implements such platforms for proactively engaging in fraud detection and providing customers the needed security. A data science platform can help manage customer data, reduce the complexity of operations, and provide insightful data for risk modeling for investment bankers. Also, banks are often engaged in providing their customers personalized services, which involves storing massive amounts of data. These platforms can be helpful in this regard, thereby supporting the BFSI market segment's growth. Besides the BFSI segment, the healthcare segment has also been drawing lucrative opportunities from these platforms. One of the prominent applications has been in medical imaging, where these platforms are being used in diagnostics to improve accuracy and efficiency.
By Geography, North America is projected to lead the market. The factors attributed to the growth of the market are the presence of capital-intensive industries seeking to deploy a data science platform, integrating it with their current IT infrastructure to gain a competitive edge. The region also has a comparatively faster adoption rate for newer technological solutions due to a solid technological infrastructure. This has further led to a rise in data science platform vendors offering new solutions to enterprises. All these factors have aided the growth of the data science platform market in this region.
Company Profiles
Some of the companies covered in this report are Microsoft Corporation, IBM Corporation, SAS Institute, Inc., SAP SE, RapidMiner, Inc., Dataiku SAS, Alteryx, Inc., Fair Isaac Corporation, MathWorks, Inc., Teradata, Inc., etc.
Competitive Quadrant
The report includes Competitive Quadrant, a proprietary tool to analyze and evaluate the position of companies based on their Industry Position score and Market Performance score. The tool uses various factors for categorizing the players into four categories. Some of these factors considered for analysis are financial performance over the last 3 years, growth strategies, innovation score, new product launches, investments, growth in market share, etc.
Why buy this report?
The report offers a comprehensive evaluation of the Global Data Science Platform Market. The report includes in-depth qualitative analysis, verifiable data from authentic sources, and projections about market size. The projections are calculated using proven research methodologies.
The report has been compiled through extensive primary and secondary research. The primary research is done through interviews, surveys, and observation of renowned personnel in the industry.
The report includes in-depth market analysis using Porter's 5 force model and the Ansoff Matrix. The impact of Covid-19 on the market is also featured in the report.
The report also contains the competitive analysis using Competitive Quadrant, Infogence's Proprietary competitive positioning tool.
Report Highlights:
A complete analysis of the market including parent industry
Important market dynamics and trends
Market segmentation
Historical, current, and projected size of the market based on value and volume
Market shares and strategies of key players
Recommendations to companies for strengthening their foothold in the market
Key Topics Covered:
1 Report Description
2 Research Methodology
3 Executive Summary
4 Market Overview
4.1 Introduction
4.2 Market Dynamics
4.2.1 Drivers
4.2.2 Restraints
4.2.3 Opportunities
4.2.4 Challenges
4.3 Trends
5 Market Analysis
5.1 Porter's Five Forces Analysis
5.2 Impact of COVID-19
5.3 Ansoff Matrix Analysis
6 Global Data Science Platform Market, By Component
6.1 Introduction
6.2 Platform
6.3 Services
6.3.1 Managed Services
6.3.2 Professional Services
6.3.2.1 Training and Consulting
6.3.2.2 Integration and Deployment
6.3.2.3 Support and Maintenance
7 Global Data Science Platform Market, By Deployment
7.1 Introduction
7.2 Cloud
7.3 On-premises
8 Global Data Science Platform Market, By Organization Size
8.1 Introduction
8.2 Large Enterprises
8.3 Small and Medium-sized Enterprises
9 Global Data Science Platform Market, By Function
9.1 Introduction
9.2 Marketing
9.3 Sales
9.4 Logistics
9.5 Finance and Accounting
9.6 Customer Support
9.7 Others
10 Global Data Science Platform Market, By Industry Verticals
10.1 Introduction
10.2 Banking, Financial Services, and Insurance (BFSI)
10.3 Telecom and IT
10.4 Retail and E-Commerce
10.5 Healthcare and Life Sciences
10.6 Manufacturing
10.7 Energy and Utilities
10.8 Media and Entertainment
10.9 Transportation and Logistics
10.10 Government
10.11 Others
11 Global Data Science Platform Market, By Geography
12 Competitive Landscape
12.1 Competitive Quadrant
12.2 Market Share Analysis
12.3 Competitive Scenario
12.3.1 Mergers & Acquisitions
12.3.2 Agreements, Collaborations, & Partnerships
12.3.3 New Product Launches & Enhancements
12.3.4 Investments & Fundings
13 Company Profiles
13.1 Microsoft Corporation
13.2 IBM Corporation
13.3 Google, Inc.
13.4 Wolfram
13.5 DataRobot Inc.
13.6 Sense Inc.
13.7 RapidMiner Inc.
13.8 Domino Data Lab
13.9 Dataiku SAS
13.10 Alteryx, Inc.
13.11 Oracle
13.12 Tibco Software Inc.
13.13 SAS Institute Inc.
13.14 SAP SE
13.15 The MathWorks, Inc.
13.16 Cloudera, Inc.
13.17 H2O.ai
13.18 Fair Isaac Corporation
13.19 Teradata, Inc.
13.20 Kaggle Inc.
13.21 Micropole S.A.
13.22 Continuum Analytics, Inc.
13.23 C&F Insight Technology Solutions
13.24 Civis Analytics, Inc.
13.25 VMware Inc.
13.26 Alpine Data Labs
13.27 Thoughtworks Inc.
13.28 MuSigma
13.29 Tableau Software LLC
14 Appendix
For more information about this report visit https://www.researchandmarkets.com/r/w83wb2
Ericsson helps Giga to map connectivity in more than a million schools – Ericsson
One year after Ericsson entered a global partnership with UNICEF to support the Giga Initiative's school connectivity mapping efforts, the initiative has reached a major milestone in mapping the location and connectivity status of one million schools.
Giga - founded by UNICEF and the International Telecommunication Union (ITU) in 2019 - aims to connect every school to the internet by 2030 and every young person to information, opportunity, and choice.
Mapping schools is a key pillar of Giga as it helps provide an understanding of the scale of investment, actions and partnerships needed to bridge the digital divide and provide all school children around the world with access to digital learning opportunities.
Ericsson's support for the initiative is in line with the company's vision to create a world in which limitless connectivity improves lives - including school and learning opportunities - redefines business and pioneers a sustainable future.
Over the past year, Ericsson has provided funding and applied data science to help map internet coverage in schools across seven countries. Along with contributions from multiple partners, this has helped Giga accelerate the mapping work and pass the one-million-school milestone. Under the partnership, Ericsson has committed to help map connectivity in schools across 35 countries by the end of 2023, supporting Giga's ambition of mapping every school in the world.
School connectivity breeds opportunity and fosters inclusivity
Giga works on the premise that connecting schools to the internet is one of the most impactful ways of improving life chances. Through school connectivity, children gain access to a wider pool of information and a range of learning styles, and receive a higher standard of education.
The improvement in learning and the understanding of technology which results from an internet-enabled education is vital to improving digital literacy and closing the digital divide. A workforce that has been educated to this higher standard is more likely to be innovative and foster ground-breaking ideas, leading to economic development and job creation.
The Economist Intelligence Unit (EIU) report - Connecting Learners: Narrowing the Educational Divide - sponsored by Ericsson in support of UNICEF, found that nations with low broadband connectivity have the potential to realize up to 20 percent GDP growth by connecting schools to the internet, if access is affordable and accompanied by investment in skills, content and devices.
While progress has been achieved in the first year of the partnership, to meet this global challenge, collective action is needed. Ericsson is calling on internet service providers and political stakeholders to join Giga and donate their time and resources to accelerate the bridging of the digital divide.
Heather Johnson, Vice President, Sustainability and Corporate Responsibility, Ericsson, says: "According to the ITU, 369 million young people don't have access to the internet and 260 million children aged 5-16 receive no schooling. This results in exclusion and fewer resources to learn and limits future potential for many young people. Mapping schools is a crucial first step in connecting every school to the internet and every student to opportunity and choice."
Johnson adds: "This milestone of over one million schools mapped is a testament to the power of public-private partnerships. It's the first step to achieving universal school connectivity. But there is more to be done and the industry must come together to play its part in closing the digital divide."
Building on a decade of digital inclusion
For more than a decade, Ericsson has worked to promote digital inclusion and increase opportunities for education worldwide. Ericsson's partnership with UNICEF, in support of Giga, reflects the ambitions of Ericsson Connect To Learn, which aims to empower teachers, students and schools with technology to deliver a quality education that's accessible to all.
This partnership with UNICEF has seen Ericsson commit resources for data engineering and data science capacity to accelerate school connectivity mapping. Ericsson lends technical expertise and assistance to the collection, validation, analysis, visualization and monitoring of school connectivity data. This data enables governments and the private sector to design and deploy digital solutions that enable online learning for children and young people.
Chris Fabian, Co-Lead Giga, UNICEF, says: "Ericsson's expertise has helped Giga's data science team build better models for school connectivity. Technical partnerships, like this one, are vital to Giga as we create an open-source resource of school locations and connectivity that, as of today, includes more than one million schools."
Related news: Ericsson and UNICEF launch global partnership to map school internet connectivity
New report: Connecting schools has the potential to boost GDP by up to 20 percent in the least connected nations
NOTES TO EDITORS:
See current school mapping efforts at http://www.projectconnect.world Or, for more information on this important initiative and how to get involved, please visit https://gigaconnect.org
FOLLOW US:
Subscribe to Ericsson press releases here
Subscribe to Ericsson blog posts here
https://www.twitter.com/ericsson
https://www.facebook.com/ericsson
https://www.linkedin.com/company/ericsson
MORE INFORMATION AT:
Ericsson Newsroom
media.relations@ericsson.com (+46 10 719 69 92)
investor.relations@ericsson.com (+46 10 719 00 00)
ABOUT ERICSSON: Ericsson enables communications service providers to capture the full value of connectivity. The company's portfolio spans the business areas Networks, Digital Services, Managed Services and Emerging Business. It is designed to help our customers go digital, increase efficiency and find new revenue streams. Ericsson's innovation investments have delivered the benefits of mobility and mobile broadband to billions of people globally. Ericsson stock is listed on Nasdaq Stockholm and on Nasdaq New York. http://www.ericsson.com
ABOUT THE GIGA INITIATIVE: Launched in 2019, Giga is a global initiative to connect every school to the Internet and every young person to information, opportunity and choice. It is a broad partnership led by UNICEF and the International Telecommunication Union (ITU) and harnesses engagement and leadership from governments, business, civil society, technology providers, donors, and finance experts. Giga maps school connectivity in real-time, creates models for innovative financing, and supports governments contracting for school connectivity. As a global UNICEF partner, Ericsson supports Giga's work to map connectivity levels in schools. https://gigaconnect.org Follow Giga on Twitter and LinkedIn
NIH awards nearly $75M to catalyze data science research in Africa – National Institutes of Health
News Release
Tuesday, October 26, 2021
New program will establish data science research and training network across the continent.
The National Institutes of Health is investing about $74.5 million over five years to advance data science, catalyze innovation and spur health discoveries across Africa. Under its new Harnessing Data Science for Health Discovery and Innovation in Africa (DS-I Africa) program, the NIH is issuing 19 awards to support research and training activities. DS-I Africa is an NIH Common Fund program that is supported by the Office of the Director and 11 NIH Institutes, Centers and Offices.
Awards will establish a consortium consisting of a data science platform and coordinating center, seven research hubs, seven data science research training programs and four projects focused on studying the ethical, legal and social implications of data science research. Awardees have a robust network of partnerships across the African continent and in the United States, including numerous national health ministries, nongovernmental organizations, corporations, and other academic institutions.
"This initiative has generated tremendous enthusiasm in all sectors of Africa's biomedical research community," said NIH Director Francis S. Collins, M.D., Ph.D. "Big data and artificial intelligence have the potential to transform the conduct of research across the continent, while investing in research training will help to support Africa's future data science leaders and ensure sustainable progress in this promising field."
The University of Cape Town (UCT) will develop and manage the initiative's open data science platform and coordinating center, building on previous NIH investments in UCT's data and informatics capabilities made through the Human Heredity and Health in Africa (H3Africa) program. UCT will provide a flexible, scalable platform for the DS-I Africa researchers, so they can find and access data, select tools and workflows, and run analyses through collaborative workspaces. UCT will also administer and support core resources, as well as coordinate consortium activities.
The research hubs, all of which are led by African institutions, will apply novel approaches to data analysis and AI to address critical health issues.
The research training programs, which leverage partnerships with U.S. institutions, will create multi-tiered curricula to build skills in foundational health data science, with options ranging from master's and doctoral degree tracks to postdoctoral training and faculty development. A mix of in-person and remote training will be offered to build skills in multi-disciplinary topics such as applied mathematics, biostatistics, epidemiology, clinical informatics, analytics, computational omics, biomedical imaging, machine intelligence, computational paradigms, computer science and engineering. Trainees will receive intensive mentoring and participate in practical internships to learn how to apply data science concepts to medical and public health areas including the social determinants of health, climate change, food systems, infectious diseases, noncommunicable diseases, health surveillance, injuries, pediatrics and parasitology.
Recognizing that data science research may uncover potential ethical, legal and social implications (ELSI), the consortium will include dedicated ELSI research addressing these topics. This will include efforts to develop evidence-based, context specific guidance for the conduct and governance of data science initiatives; evaluate current legal instruments and guidelines to develop new and innovative governance frameworks to support data science health research in Africa; explore legal differences across regions of the continent in the use of data science for health discovery and innovation; and investigate public perceptions and attitudes regarding the use of data science approaches for healthcare along with the roles and responsibilities of different stakeholder groups regarding intellectual property, patents, and commercial use of genomics data in health. In addition, the ELSI research teams will be embedded in the research hubs to provide important and timely guidance.
A second phase of the program is being planned to encourage more researchers to join the consortium, foster the formation of new partnerships and address additional capacity building needs. Through the combined efforts of all its initiatives, DS-I Africa is intended to use data science to develop solutions to the continent's most pressing public health problems through a robust ecosystem of new partners from academic, government and private sectors.
In addition to the Common Fund (CF), the DS-I Africa awards are being supported by the Fogarty International Center (FIC), the National Cancer Institute (NCI), the National Human Genome Research Institute (NHGRI), the National Institute of Allergy and Infectious Diseases (NIAID), the National Institute of Biomedical Imaging and Bioengineering (NIBIB), the Eunice Kennedy Shriver National Institute of Child Health and Human Development (NICHD), the National Institute of Dental and Craniofacial Research (NIDCR), the National Institute of Environmental Health Sciences (NIEHS), the National Institute of Mental Health (NIMH), the National Library of Medicine (NLM) and the NIH Office of Data Science Strategy (ODSS). The initiative is being led by the CF, FIC, NIBIB, NIMH and NLM.
More information is available at https://commonfund.nih.gov/AfricaData.
Photos depicting data science activities at awardee institutions are available for downloading at https://commonfund.nih.gov/africadata/images.
About the NIH Common Fund: The NIH Common Fund encourages collaboration and supports a series of exceptionally high-impact, trans-NIH programs. Common Fund programs are managed by the Office of Strategic Coordination in the Division of Program Coordination, Planning, and Strategic Initiatives in the NIH Office of the Director in partnership with the NIH Institutes, Centers, and Offices. More information is available at the Common Fund website: https://commonfund.nih.gov.
About the National Institutes of Health (NIH): NIH, the nation's medical research agency, includes 27 Institutes and Centers and is a component of the U.S. Department of Health and Human Services. NIH is the primary federal agency conducting and supporting basic, clinical, and translational medical research, and is investigating the causes, treatments, and cures for both common and rare diseases. For more information about NIH and its programs, visit http://www.nih.gov.
NIH...Turning Discovery Into Health
###
Multi-institution project to train Kenyan experts to bring social determinants to bear on modeling health outcomes – Newswise
Note: Images available at:
https://nyutandon.photoshelter.com/galleries/C0000xCFqnU2NY3U/G0000xjy6qJCcrTQ/Rumi-Kenya-NIH
Newswise BROOKLYN, New York, Tuesday, October 26, 2021 A data-science training program for equipping leaders to support the improvement of health outcomes in Kenya, led by a team from NYU, Brown University, and Moi University in Kenya, was chosen as one of 19 initiatives funded by The National Institutes of Health (NIH) under its new Harnessing Data Science for Health Discovery and Innovation in Africa (DS-I Africa) program.
The $1.7 million award, part of the NIH's mission to advance data science, catalyze innovation and spur health discoveries across Africa, establishes a consortium consisting of a data science platform and coordinating center, seven research hubs, seven data science research training programs, and four projects focused on studying the ethical, legal and social implications of data science research.
The main principal investigator for the NYU-Moi Data Science for Social Determinants Training Program (DSSD) is Rumi Chunara, associate professor of computer science and engineering and biostatistics at the NYU Tandon School of Engineering and NYU School of Global Public Health (NYU GPH). The DSSD training program represents a significant opportunity to leverage NYU's strengths in data science, machine learning and artificial intelligence in a collaborative fashion with global partners to improve data science capacity, specifically for health.
The goal of the project is to develop future leaders in data science who are equipped to gather and analyze data to better leverage deep and rich surveys, as well as internet and other digitized data sources that can help the collaborators capture information on the social determinants of health. The project includes researchers at NYU Courant, NYU GPH, NYU Wagner, the Center for Urban Science and Progress (CUSP), the NYU Center for Data Science, and the NYU Grossman School of Medicine. It constitutes an extension into a real-world training program of Chunara's previous work on incorporating social determinants into predictive modeling for individual health outcomes.
"To develop best practices in treatment and analytics for health outcomes, social determinants must be part of the data mix because they provide context on broader forces impinging on the health of both individuals and communities. I want to thank the NIH for their acknowledgment of this," said Chunara. "Besides advancing local efforts in Kenya in data science and health, we also envision our program will augment global knowledge on data science practices."
DSSD's design will rapidly expand the local base of expertise via curriculum development, resulting in two Ph.D. (4-year training) and a total of six postdoctoral (2-year) and faculty (12-14 month) trainees, who will study at NYU. Additionally, eight master's and two Ph.D. trainees will commence or complete training (2-year and 4-year training, respectively) through newly developed data science tracks at Moi University.
Connecting with data science industries and organizations with a presence in Kenya, including IBM, Deep Learning Indaba, DataKind, AI.Kenya and Aga Khan University Nairobi and Karachi, will create intellectual meeting spaces for a variety of talented trainees from both data science and health backgrounds, to propel and sustainably advance the field's capacity in Kenyan institutions as well as the DS-I consortium.
About the New York University Tandon School of Engineering
The NYU Tandon School of Engineering dates to 1854, the founding date for both the New York University School of Civil Engineering and Architecture and the Brooklyn Collegiate and Polytechnic Institute. A January 2014 merger created a comprehensive school of education and research in engineering and applied sciences as part of a global university, with close connections to engineering programs at NYU Abu Dhabi and NYU Shanghai. NYU Tandon is rooted in a vibrant tradition of entrepreneurship, intellectual curiosity, and innovative solutions to humanity's most pressing global challenges. Research at Tandon focuses on vital intersections between communications/IT, cybersecurity, and data science/AI/robotics systems and tools and critical areas of society that they influence, including emerging media, health, sustainability, and urban living. We believe diversity is integral to excellence, and are creating a vibrant, inclusive, and equitable environment for all of our students, faculty and staff. For more information, visit engineering.nyu.edu.
###
http://www.facebook.com/nyutandon @NYUTandon
Takeda’s Krista McKee on Shifting the Data Ecosystem – Bio-IT World
October 26, 2021 | TRENDS FROM THE TRENCHES. Krista McKee's 20-year career in pharma, spanning Genzyme, smaller biotechs, Novartis vaccine development, and the last 7 years at Takeda, has steadily taken her on a trajectory toward data science. She is currently Head of Insights and Analytics at Takeda's Data Sciences Institute (DSI).
McKee recently sat down with Stan Gloss, founding partner at BioTeam, to discuss how Takeda is recognizing the importance of data alignment and accessibility and the choreography necessary to keep it all moving forward. Bio-IT World was invited to listen in.
Editor's Note: Trends from the Trenches is a regular column from BioTeam, offering a peek into some of their most interesting case studies. A life science IT consulting firm at the intersection of science, data and technology, BioTeam builds innovative scientific data ecosystems that close the gap between what scientists want to do with data, and what they can do. Learn more at http://www.bioteam.net.
Stan Gloss: Can you tell me more about the Data Sciences Institute at Takeda?
Krista McKee: Absolutely. The Data Sciences Institute (DSI) is a collection of R&D functions that are focused on data. Some of those functions are the more known ones, like statistics, programming, and epidemiology. Others are less known and focused on things such as data architecture, data flows, digital endpoints and other applications of data and digital. Throughout these functions, we are working to apply data science principles to get medicines to patients faster.
What does your role look like in DSI and what sets it apart?
My group, called data architecture and digital solutions, falls into the lesser-known category. We are a group that enables R&D to interact with data in new ways. We do this by partnering across functions and into IT to deploy strategies around data architecture, data governance and access, data harmonization and enrichment, and data insights. When you think about that across R&D, the remit is quite broad. In the last five years, we've gone from showing what's possible through specific use cases, to now starting to lead in establishing a data ecosystem that is going to be scalable for R&D and get our data ready for AI. We're also working to enable an analyst and data science community throughout R&D who can readily access the right data.
Can you tell me more about the different data types and how DSI is prioritizing the exposure and organization of them?
When you think about R&D data, there's operations data, there's financial data, there's clinical trial data, there's research data, just to name a few. We have demonstrated that you get a lot of value if you can start to bring in and organize certain types of data for both learning and use in day-to-day decision making. And applying automation to make it easier to interact with data across the ecosystem is important.
How did your prior experience bring you into this role?
From biotech R&D, the vaccine development machine at Novartis, a leadership role in Takeda's oncology therapeutic area unit and especially seeing the accountability necessary in the clinical trial delivery world, I learned about the trade-offs and interdependencies of data management. And despite how complex and hard all this is, it could be much easier if we take better care of our data as an asset and manage it accordingly.
Once the DSI was created within Takeda, I quickly found myself just gravitating toward it and finally partnered with my current boss on a project called Platypus, a Bio-IT World award winner in 2018, where we effectively knocked down silos around the clinical data with the right controls and governance to give the right people modern access to do what Takeda needed to do from an oversight perspective in our trials.
We love knocking down silos. Did that project create momentum?
That success led to another project where we automated the aggregation of data within programs for more proactive safety signal management. This project is one I am most proud of because of its direct benefit to patients. It also led to a project where we're bringing the clinical data that we've harmonized on the cloud together with organized pre-clinical data to enable reverse translational insights. We've also supported efforts on operational data to give insights much more quickly and broadly than before.
Can you talk about the culture around this transformation?
At a large organization like Takeda, the old way was always hard; people owned data and thought of that data only in the context of their own use. The owners of a certain type of data never really appreciated that other people also needed these same data to effectively do their jobs.
Through our work, we are consistently showing people that we can use modern technology to organize and expose data, with the right controls of course. Sitting as a business function close to the core of R&D, we understand governance is a big part of what we do, and we're working to deploy an efficient governance less focused on "Why do you need it?" and more on "Why can't you have it?" That's been a huge challenge all along, but it's something we've created a framework around and are continuing to refine.
Aligned with this, we are starting to communicate three levels of data. Level one is raw or novel data. We need to make sure we have all this data coming into our cloud-based ecosystem. Level two, or aligned data, is the data we harmonize or enrich and organize into accessible data models. And level three, consumed data, refers to democratized use of data to inform the many facets of R&D. Our overall objective is to appropriately and efficiently allow insights from anywhere across R&D. And our hope is that on Day 1 at Takeda, any R&D analyst or data scientist can have an appropriately comprehensive and transparent view of all three levels of data.
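As a schematic of how those three levels might be cataloged (an assumed sketch for illustration, not Takeda's actual schema), each dataset could carry a level tag so analysts can discover what exists at every stage:

```python
# Assumed illustration: a tiny catalog tagging datasets with McKee's three
# levels so an analyst can filter for data at a given stage of readiness.
from dataclasses import dataclass

@dataclass
class DatasetRecord:
    name: str
    level: int      # 1 = raw/novel, 2 = aligned/harmonized, 3 = consumed
    owner: str
    location: str   # hypothetical cloud storage path

catalog = [
    DatasetRecord("trial_ops_raw", 1, "ClinOps", "s3://rd-lake/raw/trial_ops/"),
    DatasetRecord("clinical_harmonized", 2, "DSI", "s3://rd-lake/aligned/clinical/"),
    DatasetRecord("safety_signals_mart", 3, "DSI", "s3://rd-lake/consumed/safety/"),
]

aligned = [d for d in catalog if d.level == 2]  # everything ready for analysis
```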
So, would I be correct to hear you say that the culture around this is really shifting?
Yes, we're starting to shift and show the value of that deeper organization. In most meetings I attend right now, I refer to the three levels of data, and people get it. They understand the value in taking the time to get data ready so that those insights can be achieved.
Is this value mostly understood in the context of ML/AI?
When we talk about AI/ML, there's a lot of potential value there, but most of it is still very unrealized in the R&D space. There is a need for iterating and a lot of potential vendors with which to partner. We need to have an ecosystem where the lift to engage someone with our data isn't going to be so big every time.
Additionally, the questions that certain data types can answer are much broader than the discipline in which the data types themselves sit. When you think about machine learning algorithms, you need a lot of data. So making sure data scientists working on operational prediction algorithms, for instance, have access to the many types of data that could meaningfully inform their algorithms is important.
Can you take me through a little of the journey you took to transform just that little piece, moving away from "my data" and breaking down silos?
The brilliance of DSI at its inception was that we were a unique group in the organization, bigger than any one particular functional allegiance. We were chartered to serve Takeda R&D and were able to operate from the perspective that all data is Takeda's data. Anyone within R&D could come to us and find a listening ear. If someone had a data need, they had a place to go and a group that had both the incentive and the means to help them.
It's all about aligning people, technology, and processes. Other companies have data science centers of excellence and specific platforms; they talk about a data fabric. Does Takeda have a platform that all these data are fed into and that everybody can access?
What we have centers around AWS, the data lake approach, and mechanisms to enable consumption of data of various types, such as data marts or APIs. Until recently, my group was focused more on directly delivering insights based on our cloud-based data layers. Now we are pivoting to focus more completely on maturing the data foundation and governance so that insights can come from anywhere.
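For readers unfamiliar with the pattern, here is a minimal sketch of one such consumption mechanism: querying a cloud data lake with Amazon Athena via boto3. The database, table, and bucket names are placeholders, and this is a generic illustration rather than a description of Takeda's actual environment.

import boto3

# Athena runs SQL directly over data-lake files and writes results to S3.
athena = boto3.client("athena", region_name="us-east-1")

response = athena.start_query_execution(
    QueryString=(
        "SELECT study_id, COUNT(*) AS subjects "
        "FROM harmonized_clinical.subjects GROUP BY study_id"
    ),
    QueryExecutionContext={"Database": "harmonized_clinical"},
    ResultConfiguration={"OutputLocation": "s3://example-results-bucket/athena/"},
)

print("Query started:", response["QueryExecutionId"])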
Are there specific rules about which data are fed into this platform and how they are structured?
We are advanced in our culture and in what we have in the cloud-based environment today. We have a mandate across R&D that all R&D data go into our cloud-based ecosystem, and we are deploying both tools to expose what's there and teams to govern it appropriately. We are also working to ensure that these data are readily accessible in the modern data exploration and data science platforms in which Takeda has invested.
The investment of time required to structure and align data shouldn't be underestimated. How much of your data would you characterize as FAIR compliant?
That's a good question. I don't know that I could give a number I could hang my hat on. When we say all R&D data go into this ecosystem, there's just so much. So much remains untapped even with the good progress we've made. As I've talked to my counterparts at various companies, one thing that is noteworthy for Takeda is that even as we're doing the heavy lifting to organize the data, we're doing it in the context of delivering value along the way. That said, we still have a ton of work to do.
What are the biggest challenges that you face now in continuing the evolution of your digital transformation?
The art of bringing together people, process, and technology in a meaningful and enduring way that continues to improve over time is a challenge. There's certainly a sprint aspect and a marathon aspect to it, and finding that balance of defining the right initiatives with the right cross-functional attention and ownership, so that they endure, just takes a lot of effort.
Another challenge is effectively deploying efforts across a large organization to get things moving. Many people's day-to-day jobs don't change, but functional experts need to find space to be part of these initiatives and to really re-engineer how they work every day. People are willing, but there's a lot of choreographing, and I like to think of myself very much as a choreographer: bringing all these pieces together, securing the early wins, and maintaining the excitement over time to capture that long-term value for the organization.
Beautiful. What advice would you give organizations that are significantly behind you in terms of evolution?
What's been really critical, and what I've valued a lot at Takeda, has been the boldness of its leaders. In the beginning of my time at Takeda, that boldness was most noticeably a communicated one. Messages like "If you see a problem, feel empowered to find a way to fix it" resonated with me in my earlier days. Over time, the leadership messaging has been bolstered by bold action, which is critical to Takeda's success. Data and digital imperatives are now commonplace at Takeda. There's space for that innovation, but also acceptance that it is an iterative process that will include failures from which we will need to pivot. There is also a culture that notices and rewards progress. There is something about people coming together to drive forward innovation and progress here that, of all the places I have been, is unique, and it's why I love working for Takeda.
See the article here:
Takeda's Krista McKee on Shifting the Data Ecosystem - Bio-IT World
Stitch Fix CEO: ‘Data science and algorithms are at the core’ of the company – Oakland News Now
StitchFix #datascience: Yahoo Finance's Sibile Marcellus spoke with Stitch Fix CEO Elizabeth Spaulding about how the company utilizes data science and…
Continue reading here:
Stitch Fix CEO: 'Data science and algorithms are at the core' of the company - Oakland News Now
Miami Dade College to Host 10th Anniversary School of Science White Coat Ceremony and STEM Research Symposium – The Reporter
Miami, Oct. 26, 2021 - The School of Science at Miami Dade College (MDC) will host the 10th anniversary White Coat Ceremony at 6 p.m. on Thursday, Oct. 28, and the STEM Research Symposium at 9 a.m. on Saturday, Oct. 30, at the North Campus. Both events will focus on the training and recognition of up-and-coming researchers in the fields of science, technology, engineering, and mathematics (STEM).
"This 10th anniversary White Coat Ceremony and STEM Research Symposium is a milestone for the School of Science and a testament to Miami Dade College's focus on putting students first, especially in the area of STEM," said Dr. Victor Okafor, Dean of the School of Science.
More than 60 students will officially be inducted into MDC's Bachelor of Science in Biological Sciences program during the ceremony. During the event, students and other attendees will hear inspirational reflections from alumni such as Alejandro Tamayo, a 2020 bachelor's graduate who is currently a Ph.D. student in the molecular, cell and developmental biology graduate program at the University of Miami. In addition, Paul M.T. Pham, who serves as Director of Research and Development at New Vision Pharmaceuticals, where his mission is making products better through innovation, will deliver a keynote address. With several years of pharmaceutical industry experience, he is knowledgeable about manufacturing, quality control, quality assurance, regulatory affairs, safety, and government regulations, including FDA and DEA compliance.
On Oct. 30, the STEM Research Symposium will showcase the original research of more than 100 MDC students who worked alongside scientists from MDC, the University of Florida, the University of Miami, Nova Southeastern University, Florida Atlantic University, and St. Thomas University as part of the School of Science's Summer STEM Research Institute and year-round research program. Undergraduate research is a hands-on learning activity that enriches a student's undergraduate experience. Participation in research broadens and deepens classroom learning and supports the development of a range of professional skills for the workforce or for the pursuit of advanced degrees in STEM areas.
The featured research projects cover a wide variety of fields, including biology, physics, chemistry, microbiology, engineering, and genetics. The Symposium audience will have the exclusive opportunity to join genomics expert Carlos Bustamante for his keynote address. Bustamante is a scientist, investor, and entrepreneur focused on the application of data science and genomics technology to problems in medicine, agriculture, and biology. He is on leave from Stanford University, where he is a Professor of Biomedical Data Science, Genetics, and Biology. He was founding Director of the Stanford Center for Computational, Evolutionary, and Human Genomics and is Inaugural Chair of the Department of Biomedical Data Science. Bustamante has a passion for building new academic units, non-profits, and companies to solve pressing scientific challenges. He is also currently Founder and Chairman of the Board of Galatea Bio, Inc.; a Director at EdenRoc Sciences, LLC and Etalon DX; Founder of Arc Bio, LLC and CDB Consulting LTD; and has served as an SAB member for more than a dozen companies. Carlos received his Ph.D. in Biology and M.S. in Statistics from Harvard University.
"We congratulate undergraduate students for their commitment to scholarship and celebrate the dedication of the faculty mentors who ensured the research was realized," said Dr. Loretta Ovueraye, MDC's Vice-Provost of Workforce Programs and Professional Learning.
WHAT: 10th Annual White Coat Ceremony and STEM Research Symposium
WHEN: White Coat Ceremony-Thursday, Oct. 28, at 6 p.m.
STEM Research Symposium- Saturday, Oct. 30, at 9 a.m.
WHERE: MDC North Campus, 11380 NW 27th Avenue
White Coat Ceremony- Science Complex Plaza
STEM Research Symposium -Conference Center, Room 3249
Livestream of White Coat Ceremony will be available at http://www.mdc.edu/livestream
Livestream of STEM Research Symposium will be available at https://libraryguides.mdc.edu/STEMSymposium
For more information, please contact Dr. Victor I. Okafor, Dean of the School of Science, 305-237-1757, vokafor@mdc.edu.
See the rest here:
FDA takes hands-on approach to upskill workforce under data modernization action plan – Federal News Network
The Food and Drug Administration is giving employees a chance to learn data skills through hands-on projects, with less emphasis on traditional classrooms and coursework.
The FDA is launching a data modernization action plan focused in part on developing data skills within the existing FDA workforce and recruiting new hires with these in-demand skills.
FDA Chief Data Officer Ram Iyer, speaking Oct. 21 at an ACT-IAC event, said the data strategy looks to improve data capacity through driver projects, which add value to the agency but also help the workforce develop data skills through these hands-on tasks.
"Let's say an example is a supply chain project: We should deliver the supply chain needs and forecasting for the agency. But we should also take this as an opportunity to improve our master data management, or data cataloging, or pulling external data into the system. If you don't do that, then we are just becoming an episodic organization that maybe just develops use cases and use cases, and then we are not learning anything from that."
The strategy also focuses on data practices, which provide a reusable framework for how the agency should curate, use and share key data sets across multiple projects.
The data modernization strategy builds off momentum from a technology modernization plan the FDA launched in September 2019. Last month, the FDA also reorganized its IT, data and cybersecurity office to create the Office of Digital Transformation, which reports directly to the FDA commissioner.
Iyer said data modernization requires the agency to adopt a "product mindset" that leads the agency to think about the benefits to end users within the FDA and across its partner agencies.
"There is a very big emphasis that we are doing all of this not in isolation, but to help us collect better data and make better decisions," Iyer said.
Iyer said that this approach allows the FDA to pull the plug on projects that don't show results.
"The approach we take here is not to spend millions of dollars on the products and services, [but] what is the minimum product that we can actually respond to that, rather than making them wait six, nine, 12 months before we deliver something?"
For example, Iyer said he recently pulled back on a project the agency had already spent $150,000 on, after deciding it wasn't a good fit for its architecture.
"Some people might think of that as, 'Hey, you promised something and it was not delivered.' We look at it in a completely different direction. We say put something in the market and see how it works, and if it doesn't work, we pull it back or we pivot," Iyer said.
Iyer leads a data modernization steering committee of senior agency executives and subject-matter experts that oversees the rollout of the data strategy at FDA.
"Their role is also to engage and act as ambassadors, and because of their senior positions, they have [been] putting it in their town halls, putting it in their newsletters, and communication has really helped us to run this," Iyer said.
The FDA has stood up four working groups: one each for the workforce, data drivers, and data practices, plus an additional group focused on stakeholder engagement. Iyer said about 60 employees across the agency are currently participating in the working groups.
The workforce working group sees remote work as an opportunity to recruit prospective employees from across the country. Meanwhile, Iyer said he's rolling out a 70/20/10 model to train the workforce on data skills.
"We don't want to just roll out training after training to our team members. We want to have them learn 70% of their needs through projects, so we're going to identify the right projects, get the team engaged. We want about 20% to come from peer mentoring and coaching on certain skills, whether it's data mining skills or visualization techniques or storytelling," Iyer said.
For the remaining 10%, the FDA will focus on traditional classroom or online training.
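For concreteness, here is a trivial sketch of how the 70/20/10 split might translate into time, assuming a hypothetical 100-hour annual learning budget per employee; the budget figure is invented for illustration.

# Allocate a hypothetical learning budget across the 70/20/10 model.
annual_hours = 100
split = {
    "hands-on projects": 0.70,
    "peer mentoring and coaching": 0.20,
    "classroom or online training": 0.10,
}

for mode, share in split.items():
    print(f"{mode}: {annual_hours * share:.0f} hours")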
The FDA also launched a data science 101 program last month called Data Forward. Iyer said more than 1,400 employees signed up for the initial lunch and learn. About 33% of attendees said they were intimidated by data science at the beginning of the presentation, but nearly all participants said they had a better appreciation for data science by the end.
"Small wins that we think will help us to deliver the larger impact for the agency," Iyer said.
Iyer said the FDA's centers became increasingly cross-dependent on each other's data during the pandemic, from tracking infection rates among farm workers to tracking the status of the supply chain for pharmaceutical drugs.
But amid that increased demand for data sharing, Iyer said, the FDA's data and technology assets proved highly fragmented and not easily sharable among different parts of the agency.
The agency also encountered challenges sharing data externally with other agencies. Iyer said the FDA went through a different process to share data with the Department of Veterans Affairs, for example, than it did to share data with the Centers for Disease Control and Prevention.
"This was really becoming clear, that in a normal state, we could deal with these differentiated methods, but when you are in a crisis, you can't be creating [or] recreating these processes," Iyer said.
The rest is here: