Category Archives: Data Science
Structured dataset of human-machine interactions enabling adaptive … – Nature.com
This section describes the data collection process, beginning with the design and setup of the experiment and covering the acquisition and processing elements of the methodology.
The experiment was conducted on a machine with which multiple operators interacted through the same HMI to perform a mixture-creation task. In this scenario, an industrial mixing machine from the food sector was utilized, which offers the advantage of being regularly used throughout the day by several users across two working shifts. Each time a mixture was ordered, the operator carried out a series of individual interactions with the HMI. These interactions were related to adjusting various parameters, including additive quantity, mixture type, and the use of containers. These parameters directly influenced the properties of the final product.
Users interacted with the machine through a mobile app that was specifically designed for the experiment. Operators accessed the app by scanning a QR code, after which they proceeded to select the required mixture. The captured interactions included two key components: (i) the order and sequence of steps the user followed, and (ii) the time interval in which the user interacted with the machine.
Twenty-seven volunteer operators, aged between 23 and 45 years, participated in the experiment. Each operator granted formal consent to have their daily interactions recorded through the app. In total, 10,608 interactions were captured over a period of 151 days. All data was anonymized and does not contain sensitive user information.
Figure 1 illustrates the methodology for data acquisition, which begins with the preparation stage. This stage encompasses two steps: first, the user interface (UI) is formally described using a user interface description language (UIDL), a mark-up language that describes the entire HMI [12]. In this study, the JSON format was employed to represent each visual element in the HMI, with each element assigned a unique alphanumeric identifier. To provide an example of the UIDL utilized in this study, Fig. 2 displays a representation of the UI alongside its corresponding UIDL.
Data acquisition methodology.
UIDL JSON description example of a UI.
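To make the format concrete, a minimal sketch of such a UIDL fragment is shown below. The field names and every identifier except BTN1OK (which appears later in the text) are assumptions for illustration, not the study's actual schema.

```python
# Hypothetical UIDL fragment in the spirit of Fig. 2: each visual element of
# the HMI carries a unique alphanumeric identifier. Field names and all IDs
# except BTN1OK are illustrative assumptions.
import json

uidl = {
    "screen": "mixture_setup",
    "elements": [
        {"id": "SLD1QTY", "type": "slider",   "label": "Additive quantity"},
        {"id": "DRP1TYP", "type": "dropdown", "label": "Mixture type"},
        {"id": "CHK1CNT", "type": "checkbox", "label": "Use containers"},
        {"id": "BTN1OK",  "type": "button",   "label": "Confirm"},
    ],
}

print(json.dumps(uidl, indent=2))  # serialize to the JSON form the HMI generator consumes
```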
The HMI was implemented using Next.js (a React framework) and Chakra UI. A dedicated function was created to programmatically generate the HMI from the user interface descriptor. The interface is designed to be responsive and can be used on touch devices.
Next, the interaction process required to prepare a mixture in the machine is represented as a Finite State Machine (FSM), a model consisting of states, transitions, and inputs used to represent processes or systems. In this process, the user adjusts the parameters of a mixture until the values are considered correct (Fig. 3).
Interaction process representation (FSM).
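As a rough illustration of this model, the following is a minimal FSM sketch in Python; the state names and the set of parametrization elements are assumptions based on the description above, not the paper's exact model (only the BTN1OK button is named in the text).

```python
# Minimal FSM sketch of the mixture-parametrization process. States and the
# parametrization-element set are illustrative assumptions.
PARAM_ELEMENTS = {"SLD1QTY", "DRP1TYP", "CHK1CNT"}  # hypothetical parameter controls
CONFIRM_BUTTON = "BTN1OK"

def step(state: str, element: str) -> str:
    """Advance the FSM given the UI element the user interacted with."""
    if state == "idle" and element in PARAM_ELEMENTS:
        return "adjusting"                 # first parameter interaction
    if state == "adjusting" and element in PARAM_ELEMENTS:
        return "adjusting"                 # user keeps tuning parameters
    if state == "adjusting" and element == CONFIRM_BUTTON:
        return "done"                      # values accepted, process ends
    return state                           # other inputs leave the state unchanged

state = "idle"
for element in ["SLD1QTY", "DRP1TYP", "BTN1OK"]:
    state = step(state, element)
print(state)  # -> done
```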
During the active phase of the experiment, when users access the machine through the application, a non-intrusive layer captures the interactions and stores them in a database (capture interactions). The captured information includes the user identity, the timestamp of the interaction in epoch format, and the identifier of the interacted element (store raw interactions) (see Table 1). Once this information is collected, the data processing step generates the sequences.
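For illustration, a stored raw interaction of this shape might look like the following Python record; the field names are assumptions, with the actual schema given in Table 1.

```python
# Illustrative raw-interaction record with the three captured fields; the
# field names are assumptions (see Table 1 for the actual schema).
raw_interaction = {
    "user_id": "U07",          # anonymized operator identity (hypothetical format)
    "timestamp": 1699876543,   # interaction time in epoch (Unix) seconds
    "element_id": "DRP1TYP",   # ID of the interacted UI element (hypothetical)
}
```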
The goal of this step of the methodology is to generate valid sequences of interactions for each user. Perer & Wang [13] define a sequence of events $E = \langle e_1, e_2, \ldots, e_m \rangle$ ($e_i \in D$) as an ordered list of events $e_i$, where $D$ is a known set of events and the order is defined by $i$; that is, event $e_i$ occurs before event $e_{i+1}$. Additionally, this process requires that $E$ contain at least two events to be accepted as a sequence [9].
Using this definition and taking the raw interactions as input, it is possible to define valid interaction sequences as $s_i = \left[ e_{\mathrm{begin}}, e_1^i, \ldots, e_k^i, e_{\mathrm{end}} \right]$, where $s_i$ is an ordered list of events and:
The events $e_{\mathrm{begin}}$ and $e_{\mathrm{end}}$ are known, determining the beginning and the end of the interaction sequence
The variable $l$ determines the length of the interaction sequence, and its value must satisfy $l \geq 2$
The sequences are extracted using the Valid Sequences Extractor algorithm presented by Reguera-Bakhache et al. [9]. As shown in the FSM (Fig. 3), the interaction process initializes when an interaction occurs on any of the elements that allow parametrization of the mixture and finalizes when the user clicks the button BTN1OK.
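A simplified sketch of that extraction logic is shown below, reusing the hypothetical record fields and element identifiers from the earlier examples; it follows the spirit of the Valid Sequences Extractor of [9] rather than reproducing its exact implementation.

```python
# Simplified sketch of valid-sequence extraction in the spirit of [9].
# Element IDs other than BTN1OK, and the record fields, are the hypothetical
# ones from the earlier examples.
PARAM_ELEMENTS = {"SLD1QTY", "DRP1TYP", "CHK1CNT"}
END_ELEMENT = "BTN1OK"

def extract_sequences(interactions):
    """Group one user's time-ordered interactions into valid sequences.

    A sequence opens at the first parametrization interaction (e_begin),
    closes at BTN1OK (e_end), and is kept only if it has at least two events.
    """
    sequences, current = [], []
    for event in interactions:              # assumed sorted by timestamp
        element = event["element_id"]
        if not current and element in PARAM_ELEMENTS:
            current.append(event)           # e_begin opens a sequence
        elif current:
            current.append(event)
            if element == END_ELEMENT:      # e_end closes the sequence
                if len(current) >= 2:
                    sequences.append(current)
                current = []
    return sequences
```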
Of the 10,608 interactions recorded, 1,358 valid interaction sequences were generated. The composition of each interaction sequence is described in the following section.
AICTE plans to upskill in Artificial Intelligence and Data Science … – Education Times
The All India Council For Technical Education (AICTE) has directed colleges and technical institutions to widely disseminate the report National Program on Artificial Intelligence to promote upskilling in the technical sectors, with a renewed focus on ethics in AI.
The report was prepared by the committee constituted by the Union Ministry of Electronics and Information Technology (MeitY) and was issued in June 2023 as part of MeitY's National Program on Artificial Intelligence (NPAI). The committee listed several programmes on Artificial Intelligence (AI) and Data Sciences and other measures to promote upskilling.
The committee recommended that skilling of youth in AI and data science should start from the early school levels. Further, the report has suggested a basic curriculum for different levels, aligned with the National Higher Education Qualifications Framework (NHEQF) and the National Credit Framework (NCrF).
The union government aims to establish a comprehensive programme for leveraging transformative technologies to foster inclusion, innovation and adoption for social impact under the NPAI.
As per the NPAI, the ministry focuses on four pillars of the AI ecosystem: skilling in AI, responsible AI, a data management office, and setting up the national centre on AI. The report, along with the skilling programmes on AI and data science, has also stressed the need to dedicate at least 10% of each course to ethics in AI. "Every course, small or big, must have a module on ethical AI for a minimum of 10% of its duration. Ethical considerations, transparency, fairness, and privacy must be integrated into AI training programs to ensure that AI systems are developed and deployed responsibly," the report mentioned.
71% Of Employers May Be Left Behind In The Generative AI Race – Forbes
The 2023 Skills Index report from BTG (Business Talent Group) unearthed some striking data about artificial intelligence and its integration in today's workplace. Data science, artificial intelligence, and machine learning remain in-demand skills, with demand for data science and machine learning up more than 100% compared to previous years. And this is anticipated to remain the case for at least the next few years as AI tools continue to roll out following the emergence of ChatGPT in the market in November 2022.
However, while demand is at an all-time high, one year on from ChatGPT's launch approximately 71% of employers are still facing challenges due to a lack of internal expertise in how to effectively use artificial intelligence, and specifically generative AI, as part of their non-technical workflows.
Some of the core challenges highlighted by the report include a lack of clarity around AI regulations, little understanding among senior leadership teams, concerns related to data protection and security, being too busy with other important matters, and a lack of understanding of where AI can best be applied. This poses a significant challenge to AI integration, and means that the economic boost of up to $14.4 trillion that the World Economic Forum predicts generative AI could deliver may be held back by limited knowledge.
So what can be done to resolve this internal knowledge gap?
The very first step is for key internal stakeholders and business partners to develop awareness of AI and its capabilities through training, including how it can improve decision-making, forecasting, analysis, and everyday workflows. With this knowledge, leaders can be empowered to make the right choices for their organizations.
Another simple solution would be for employers to call in external AI consultants who are verified subject matter experts within this domain, and whose expertise relates specifically to AI ethics and regulations, and data protection and security. These consultants could work in collaboration with employers to advise them on how to integrate AI into their work without compromising data or trust.
Another, longer-term approach that might be more suitable for some employers would be to hire someone to oversee AI change management, or to set up an AI focus group. Although the concept is relatively new, this type of change management may prove highly effective in rolling out artificial intelligence usage, department by department, until everyone is using AI tools to boost their productivity.
You might want to consider running pilot projects to test user experience and acceptance before rolling out across the entire company, and collate this feedback to assess what tools are best for you and your organisation's objectives. Once you have done this, you can work on scaling gradually and gathering feedback for each user group.
Another important step would be to provide extensive training to employees at all levels, from senior leadership down to middle managers and entry-level staff, on how to deploy AI, and to establish ethical guidelines around its capabilities. This will help remove any misconceptions or worries surrounding the use of this technology.
Adopting and integrating AI should be a top priority for employers. If it's not yours right now, it will be when your competitors gain the upper hand and steal your talent and your customers. Through persistence, experimentation, and training, generative AI can become the new normal of work life, freeing up employees for more creative endeavors and enabling improved mental health and wellbeing through a reduced workload.
Who knows, this might even result in the long-awaited four-day work week?
As the 23-year-old Founder and CEO of Rachel Wells Coaching, I am dedicated to unlocking career and leadership potential for Gen Z and millennial professionals. I am a corporate career coach with over 8 years of experience. My clients range from professionals at graduate to senior executive level, in both the public and private sectors. I have coached clients in more than seven countries globally and counting, and I've also directed teams and operations in my previous roles as public sector contract manager, to deliver large-scale national educational, career development, and work-readiness programs across the UK. I am a LinkedIn Top Voice in Career Counseling, and LinkedIn Top Voice in Employee Training, and am a former contributor to the International Business Times. As an engaging motivational speaker, my passion is in delivering motivational talks, leadership and career skills masterclasses, corporate training, and workshops at events and in universities. I currently reside with my family in London, UK.
Navigating the Path to Mastery: Earning a Master's in Data Science – DNA India
In the fast-evolving landscape of technology, data has become the cornerstone of innovation, and those equipped with the skills to analyze and interpret it are in high demand. Pursuing a Master's in Data Science has emerged as a strategic move for individuals seeking to harness the power of data to drive decision-making processes across industries. This article will explore everything you need to know about earning a Master's in Data Science, from program essentials to crucial concepts like regression in machine learning.

The Rising Demand for Data Science Professionals
As organizations increasingly recognize the transformative potential of data, the demand for skilled data scientists has surged. This trend is evident across sectors, from finance and healthcare to marketing and technology. A Master's in Data Science is designed to equip individuals with the advanced skills to tackle complex data challenges and derive meaningful insights.
1. Foundational Courses:
Master's programs typically commence with foundational courses covering fundamental concepts in statistics, programming languages (such as Python or R), and database management. These lay the groundwork for more complex and advanced topics.
2. Advanced Analytics and Machine Learning:
Central to a Master's in Data Science is the exploration of advanced analytics and machine learning. Here, students delve into algorithms, predictive modeling, and statistical analysis. One key machine learning concept is regression, a statistical method for predicting outcomes based on historical data.
3. Regression in Machine Learning:
Regression in machine learning enables data scientists to understand the relationship between variables. Mastering regression models is essential for predicting numerical outcomes, making it a cornerstone of data science education.
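As a minimal illustration of the idea, the following fits a simple linear regression with scikit-learn on synthetic data; it is a toy sketch, not part of any particular curriculum.

```python
# Toy linear regression with scikit-learn: predict a numerical outcome
# from a single explanatory variable using synthetic data.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(100, 1))             # historical inputs
y = 3.0 * X[:, 0] + 2.0 + rng.normal(0, 1, 100)   # linear signal plus noise

model = LinearRegression().fit(X, y)
print(model.coef_, model.intercept_)   # recovered slope ~3.0 and intercept ~2.0
print(model.predict([[5.0]]))          # predicted outcome for a new observation
```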
4. Big Data Technologies:
As the volume and variety of data continue to grow, proficiency in big data technologies is paramount. Students often learn to work with tools like Hadoop and Spark to efficiently process and analyze vast datasets.
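For instance, a few lines of PySpark suffice to express a distributed aggregation over a dataset too large for one machine; the file path and column names below are placeholders, not a real dataset.

```python
# Minimal PySpark aggregation over a large CSV; the path and column names
# are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("big-data-example").getOrCreate()
df = spark.read.csv("s3://my-bucket/events.csv", header=True, inferSchema=True)

# Count events per day across the whole (distributed) dataset.
daily = df.groupBy("event_date").agg(F.count("*").alias("n_events"))
daily.show()
```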
5. Data Visualization:
The ability to communicate findings effectively is crucial for a data scientist. Programs usually include coursework on data visualization, emphasizing tools like Tableau or Matplotlib to create compelling visual representations of data insights.
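As a small example of the kind of output involved, the following builds a simple bar chart with Matplotlib; the model names and scores are made up.

```python
# Simple bar chart of made-up model scores with Matplotlib.
import matplotlib.pyplot as plt

models = ["linear", "tree", "ensemble"]
scores = [0.71, 0.78, 0.84]   # synthetic validation accuracies

fig, ax = plt.subplots()
ax.bar(models, scores)
ax.set_ylabel("Validation accuracy")
ax.set_title("Model comparison (synthetic data)")
plt.show()
```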
With the proliferation of data science programs, selecting the right one can be overwhelming. Consider factors such as curriculum depth, faculty expertise, industry partnerships, and opportunities for hands-on experience through projects or internships. Additionally, look for programs that align with your career goals, whether that involves specializing in a particular industry or gaining expertise in a specific aspect of data science.
While data science certifications can provide valuable skill validation, a Master's degree offers a comprehensive and in-depth education. The decision between the two depends on your career aspirations. A Master's program provides a holistic understanding of data science, combining theoretical knowledge with practical application, making graduates versatile professionals.
The dynamic nature of data science requires professionals to stay updated with the latest developments. Even after completing a Master's program, ongoing learning through workshops, webinars, and conferences is crucial. Platforms offering continued education and specialized courses can further enhance your expertise and keep you at the forefront of the field.
Pursuing a Master's in Data Science is a strategic investment in a future-proof career. Beyond a program's essential components, mastering concepts like regression in machine learning is key to becoming a proficient data science professional. As the field evolves, staying curious and committed to continuous learning will set you apart in the competitive data science landscape.
Pursuing a Master's in Data Science is a transformative step toward becoming a data-driven professional. Whether you're exploring regression in machine learning or immersing yourself in big data technologies, the journey promises to be intellectually rewarding, opening doors to many promising and rewarding opportunities in the exciting world of data science.
Disclaimer: The above article is a Consumer Connect initiative. It is a paid publication and does not have journalistic/editorial involvement of IDPL, and IDPL claims no responsibility whatsoever.
Analytics/AI conference brings new perspectives to businesses and … – University at Buffalo
BUFFALO, N.Y. – The University at Buffalo School of Management hosted the inaugural Eastern Great Lakes Analytics Conference on Nov. 3 and 4, marking the first formal gathering of industry experts and academic researchers in Western New York dedicated to exploring the frontiers of data analytics and artificial intelligence.
Organizers welcomed 130 attendees from more than 40 organizations, creating a dynamic platform for exchanging insights and shaping the future of these transformative technologies.
The first day of the event featured a prestigious lineup of industry speakers, including executives from M&T Bank, National Fuel, Hidden Layer and Lockheed Martin, who shared their expertise on real-world applications of data analytics and AI.
To complement the industry perspectives from day one, researchers from such renowned institutions as Cornell University, University of Rochester, University of Pittsburgh and the University of Toronto showcased their cutting-edge advancements in data analytics and AI on day two, sparking engaging discussions on the implications of these innovations for businesses.
"The blend of academic and industry participation fostered a stimulating environment, enabling attendees to delve into the latest advancements in data analytics and AI while exploring their practical applications," says Sanjukta Smith, chair and associate professor of management science and systems in the UB School of Management. "This synergy generated invaluable insights into the evolving landscape of these technologies and their potential to revolutionize business operations and strategic decision-making."
Smith co-chaired the conference with Kyle Hunt, assistant professor, and Dominic Sellitto, clinical assistant professor, both in the UB School of Management's Management Science and Systems department.
"The success of the Eastern Great Lakes Analytics Conference underscores our commitment to fostering innovation and collaboration in the field of data analytics and AI," says Ananth Iyer, dean of the UB School of Management. "As the region's premier business school, the School of Management is poised to continue leading the way in shaping the future of these transformative technologies and empowering businesses to harness their full potential."
Now in its 100th year, the UB School of Management is recognized for its emphasis on real-world learning, community and impact, and the global perspective of its faculty, students and alumni. The school also has been ranked by Bloomberg Businessweek, Forbes and U.S. News & World Report for the quality of its programs and the return on investment it provides its graduates. For more information about the UB School of Management, visit management.buffalo.edu.
Spatial Data Management For GIS and Data Scientists – iProgrammer
Videos of the lectures taught in Fall 2023 at the University of Tennessee are now available as a YouTube playlist. They provide a complete overview of the concepts of geospatial science using Google Earth Engine, PostgreSQL/PostGIS, DuckDB, Python, and SQL.
Taught on campus, but recorded for the rest of us to enjoy for free, by Dr. Qiusheng Wu, an Associate Professor in the Department of Geography & Sustainability at the University of Tennessee. Dr. Wu is also an Amazon Visiting Academic and a Google Developer Expert (GDE) for Earth Engine.
The target groups addressed by the course are GIScientists and geographers who want to learn about Data Science, and the other way around, data scientists who want to work with geographical data; and of course students in that area.
Geographical data nowadays is everywhere. In its simplest form you'll be familiar with it from Google Maps, mobile applications, and social media metadata, while at the more advanced end there's the need to model objects that exist in the real world and are location-aware. The software industry aside, lately many traditional businesses have started working with that kind of data.
In this course, then, you'll learn how to manage geospatial and big data using Google Earth Engine, PostgreSQL/PostGIS, DuckDB, Python, and SQL, which you will use to query, analyze, and manipulate spatial databases effectively. Note that PostGIS, the geospatial extension to Postgres, is the most popular Postgres extension. Seen from that perspective, the value of the course, which explores various techniques for efficiently retrieving and managing spatial data, increases manifold.
As such, students who successfully complete the course should be able to:
The tech stack used throughout the course is impressive too. Tools that are going to be used include:
The course makes use of that stack from very early on, as seen in the curriculum spanning 13 weeks:
Week 1: Course Introduction
Week 1: Spatial Data Models
Week 2: Installing Miniconda and geemap
Week 2: Introducing Visual Studio Code
Week 2: Setting Up PowerShell for VS Code
Week 2: Introducing Git and GitHub
Week 3: Python Basics
Week 3: Getting Started with Geemap
Week 4: Using Earth Engine Image
Week 4: Filtering Image Collection
Week 4: Filtering Feature Collection
Week 5: Styling Feature Collection
Week 5: Earth Engine Data Catalog
Week 5: Visualizing Cloud Optimized GeoTIFF (COG)
Week 6: Visualizing STAC and Vector Data
Week 6: Downloading OpenStreetMap Data
Week 6: Visualizing Earth Engine Data
Week 7: Timeseries visualization and zonal statistics
Week 7: Parallel processing with the map function
Week 7: Earth Engine data reduction
Week 8: Creating Cloud-free Imagery with Earth Engine
Week 9: Downloading Earth Engine Images
Week 9: Downloading Earth Engine Image Collections
Week 9: Earth Engine Applications
Week 10: DuckDB for Geospatial
Week 10: Introduction to DuckDB (CLI, Python API, VS Code, DBeaver)
Week 10: DuckDB CLI and SQL Basics
Week 10: Introducing SQL Basics with DuckDB
Week 11: Intro to the DuckDB Python API
Week 11: Importing Spatial Data Into DuckDB
Week 11: Exporting Spatial Data From DuckDB
Week 12: Working with Geometries in DuckDB
Week 13: Analyzing Spatial Relationships with DuckDB
Week 13: Visualizing Geospatial Data in DuckDB with leafmap and lonboard
Of course, 13 weeks was the duration on campus; the rest of us can enjoy it at our own pace. The videos are also accompanied by an online reference book in HTML format.
Quality-wise, Dr. Wu clearly explains the concepts and showcases the whole process of working with the tools that handle geodata. This means that even if you are not familiar with geoscience, the course is well worth attending due to the tech stack employed, especially the PostgreSQL part. If, on the other hand, you already are a data scientist, then this is a must-do.
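To give a flavor of the DuckDB material covered in Weeks 10-13, here is a minimal sketch of a spatial query issued through DuckDB's Python API with the spatial extension loaded; the coordinates are made up, and the result is a planar distance in degrees rather than metres.

```python
# Minimal DuckDB spatial example from Python; coordinates are made up.
import duckdb

con = duckdb.connect()
con.execute("INSTALL spatial;")  # one-time download of the spatial extension
con.execute("LOAD spatial;")

# Planar distance between two lon/lat points (in degrees; reproject for metres).
dist = con.execute("""
    SELECT ST_Distance(
        ST_Point(-83.92, 35.96),   -- roughly Knoxville, TN
        ST_Point(-84.39, 33.75)    -- roughly Atlanta, GA
    ) AS dist_degrees
""").fetchone()[0]
print(dist)
```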
Explore the List of Data Scientist Openings in the US – Analytics Insight
In the ever-expanding landscape of technology, data has emerged as the lifeblood of innovation, driving decision-making processes across industries. As businesses strive to harness the power of data, the demand for skilled professionals capable of extracting actionable insights continues to surge. Among the most sought-after roles in this data-driven era is that of a Data Scientist. In the United States, the pursuit of excellence in data science has given rise to a multitude of opportunities for individuals passionate about transforming raw data into meaningful narratives.
This article explores the landscape of Data Scientist openings in the US, shedding light on the diverse and exciting possibilities that await those keen on navigating the world of data analytics.
PitchBook
Job Responsibilities:
Work with cross-functional teams to analyze and evaluate data on customer behavior.
Create and deploy innovative data models and algorithms to gain valuable insights.
Predictive and prescriptive analytics may be used to lead data-driven initiatives for improving the customer journey.
Construct and maintain complex datasets derived from customer interactions and engagement. Communicate actionable insights to multiple stakeholders, demonstrating the impact of various business segments on Sales, Customer Success, and the company overall.
Keep abreast of market trends and develop analytics methodologies to continually improve our analytics skills.
Mentor and advise junior data scientists on the team
Analyze customer and marketing data to find patterns, trends, and opportunities throughout the customer's lifetime.
Investigate consumer touchpoints and interactions to learn about their preferences and behavior.
Create predictive models that estimate consumer behaviors like churn, lifetime value, conversion rates, and propensity to buy.
Apply here
US Foods
Responsibilities:
You will be required to do the following as an Associate Data Scientist:
Data Preparation: Extract data from diverse databases; do exploratory data analysis; cleanse, massage, and aggregate data.
Best Practices and Standards: Ensure that data science features and deliverables are adequately documented and executable for cross-functional consumption.
Collaboration: Work with more senior team members to do ad hoc analyses, collaborate on code and reviews, and provide data narrative.
Model Development and Execution: As needed, monitor model performance and retraining efforts.
Communication: Share findings and thoughts on various data science activities with other members of the data science and decision science teams.
Carry out additional responsibilities as assigned by the manager
Apply here
Disney Entertainment & ESPN Technology
San Francisco, CA
Required Qualifications:
7+ years of analytical experience is required, as well as a Bachelor's degree in advanced mathematics, statistics, data science, or a related field of study.
7+ years of expertise in building machine learning models and analyzing data in Python or R
5+ years of experience developing production-level, scalable code (e.g., Python, Scala)
5+ years of experience creating algorithms for production system deployment
In-depth knowledge of contemporary machine learning algorithms (such as deep learning), models, and their mathematical foundations
Comprehensive knowledge of the most recent natural language processing methods and contextualized word embedding models
Experience building and managing pipelines (AWS, Docker, Airflow) as well as designing big-data solutions with technologies such as Databricks, S3, and Spark
Knowledge of data exploration and visualization tools such as Tableau, Looker, and others
Knowledge of statistical principles (for example, hypothesis testing and regression analysis)
Apply here
Asurion
Nashville, TN, USA
Qualifications:
Drive a test-and-learn methodology with a Minimum Viable Product (MVP) and push to learn quickly
Candidate must have the ability to find the root cause, describe, and solve difficult problems in confusing settings
Ability to interact and cooperate with people from many departments inside the organization, ranging from operations teammates to product managers and engineers
Excellent communication (written and spoken) and presentation abilities, especially the ability to create and share complex ideas with peers
The candidate must have creative ideas and must not be hesitant to roll up their sleeves to get the job done
Requires a master's degree in analytics, computer science, electrical engineering, computer engineering, or a comparable advanced analytical & optimization discipline, as well as an open mind and an open heart
Familiarity with at least one deep learning framework, such as PyTorch or Tensorflow
Deep Learning and/or Machine Learning expertise earned via academic education or any amount of internship/work experience
Statistics, optimization theoretical principles, and/or optimization problem formulation knowledge acquired via academic coursework or any amount of internship/work experience.
Apply here
Jobs of 2030: skills to develop for the future in an increasingly competitive world – The Financial Express
By Sonya Hooja
It is true that the job market is ever-evolving. However, a child born in 2010 will likely face a vastly different reality than a young student who stepped out of college that same year, just 20 years apart. With the smartphone and internet revolution, and with artificial intelligence and data science having grown by leaps and bounds between 2010 and 2030, fresh graduates now face more and more uncertainty in the job landscape.
The job market is rapidly evolving, driven by technological advancements, changing demographics, and global economic shifts. Jobs that exist today may become obsolete, while new opportunities will emerge. To thrive in this increasingly competitive world, individuals need to develop diverse skills that align with the demands of the future job market.
According to Google's Skills of the Future report, the jobs of the future will place a premium on adaptability, digital literacy, and problem-solving abilities. The report highlights the importance of soft skills such as critical thinking, creativity, and emotional intelligence. In addition, it emphasizes the need for continuous learning and upskilling throughout one's career. The ability to navigate through an ever-changing technological landscape will be crucial, as new technologies emerge and replace traditional job roles.
India is experiencing a significant shift in its job market, with emerging sectors offering exciting opportunities. Some of these sectors include:
Data science and analytics: With the increasing reliance on data-driven decision-making, professionals skilled in data science, machine learning, and data analytics will be in high demand. The ability to derive meaningful insights from vast amounts of data will be crucial for organizations across industries.
AI and Machine Learning: To succeed in the job market of 2030, individuals must continuously learn, adapt, and embrace emerging trends in Artificial Intelligence (AI) and Machine Learning (ML). As per a 2023 WEF report, 85% of companies are looking to maximize AI usage in the next five years, predicting a rise in the hiring of AI graduates. In fact, AI and Machine Learning Specialists top the list of fast-growing jobs.
Cybersecurity: With the growing digitization of businesses and the increasing threat of cyber-attacks, cybersecurity professionals will play a vital role in safeguarding digital assets. Individuals with skills in cybersecurity, ethical hacking, and risk management will be in high demand.
Fintech: Technical expertise, financial knowledge, analytical skills, and problem-solving skills are the most in-demand skills in the fintech sector. As the world of finance takes to digital platforms, the Best Workplaces in BFSI 2023 report by Great Place To Work predicts that the BFSI sector in India will experience a significant increase in hiring activity, with companies planning to hire 26% more employees than the current year. Fintech firms, in particular, are leading the way with a 41% increase in hiring intent.
What remains crucial in the fast-paced and competitive job market of 2030 is the need for individuals to develop a diverse skill set to thrive. In India, emerging sectors offer promising career prospects. By continuously learning, adapting, and embracing emerging trends, individuals can position themselves for success in the jobs of the future.
The author is founder and COO of Imarticus Learning. Views are personal.
The Government Needs Fast Data: Why is the Federal Reserve … – insideBIGDATA
Back in May of this year, the Federal Reserve was deciding whether to hike interest rates yet again. Evercore ISI strategists said in a note that "the absence of any such preparation [for a raise] is the signal and gives us additional confidence that the Fed is not going to hike in June absent a very big surprise in the remaining data, though we should expect a hawkish pause."
Well, they were right. The Federal Reserve ultimately decided to keep its key interest rate at about 5% after ten consecutive meetings during which it was hiked. This brings about an important question: Should there ever be very big surprises (or any surprises, for that matter) in the data on which the Fed bases these critical decisions?
In my opinion, the answer is no. There shouldn't ever be a question of making an incorrect economic decision, because the right data is indeed available. But the truth is, the Federal Reserve has been basing most of its decisions on stale, outdated data.
Why? The Fed uses a measure of core inflation to make its most important decisions, and that measure is derived from surveys conducted by the Bureau of Labor Statistics. While they may also have some privileged information the public isn't privy to, by nature, surveys take a while to administer. By the time the data is processed and cleaned up, it's essentially already a month old.
Everyone can agree that having faster, more up-to-date data would be ideal in this situation. But the path to getting there isn't linear: it'll require some tradeoffs, taking a hard look at flaws in current processes, and a significant shift in mindset that the Fed may not be ready for.
Here are some things to consider:
Fast vs accurate: We need to find a happy medium
At some point, the Fed will need to decide whether it's worth trying a new strategy of using fast, imperfect data in place of the data generated by traditional survey methods. The latter may offer more statistical control, but it becomes stale quickly.
Making the switch to using faster data will require a paradigm shift: Survey data has been the gold standard for decades at this point, and many people find comfort in its perceived accuracy. However, any data can fall prey to biases.
Survey data isn't a silver bullet
There's a commonly held belief that surveys are conducted very carefully and adjusted for biases, while fast data that comes from digital sources can never be truly representative. While this may be the case some of the time, survey biases are a well-documented phenomenon. No one solution is perfect, but the difference is that the problems associated with survey data have existed for decades and people have become comfortable with them. When confronted with the issues posed by modern methods, they are much more risk-averse.
In my mind, the Fed's proclivity toward survey data has a lot to do with the fact that most people working within the organization are economists, not computer scientists, developers, or data scientists (who are more accustomed to working with other data sources). While there's a wealth of theoretical knowledge in this space, there's also a lack of data engineering and data science talent, which may soon need to change.
A cultural shift needs to occur
We need a way to balance both accuracy and forward momentum. What might this look like? To start, it would be great to see organizations like the U.S. Census Bureau, the Bureau of Labor Statistics, and the Bureau of Economic Analysis (BEA) release more experimental economic trackers. We're already starting to see this here and there: for example, the BEA released a tracker that monitors consumer spending.
Traditionally, these agencies have been very conservative in their approach to data, understandably shying away from methods that might produce inaccurate results. But in doing so, they've been holding themselves to an impossibly high bar at the cost of speed. They may be forced to reconsider this approach soon, though. For years, there's been a steady decline in federal survey response rates. How can the government collect accurate economic data if businesses and other entities aren't readily providing it?
When it comes down to it, we've become accustomed to methodologies that have existed for decades because we're comfortable with their level of error. But by continuing to rely solely on these methods, we may actually end up incurring more error as things like response rates continue to fall. We need to stay open to the possibility that relying on faster, external data sources might be the necessary next step to making more sound economic decisions.
About the Author
Alex Izydorczyk is the founder and CEO of Cybersyn, the data-as-a-service company making the world's economic data available to businesses, governments, and entrepreneurs on Snowflake Marketplace. With more than seven years of experience leading the data science team at Coatue, a $70 billion investment manager, Alex brings a wealth of knowledge and expertise to the table. As the architect of Coatue's data science practice, he led a team of over 40 people in leveraging external data to drive investment decisions. Alex's background in private equity data infrastructure also includes an investment in Snowflake. His passion for real-time economic data led him to start Cybersyn in 2022.
New training centre will bridge the gap between environmental … – University of Oxford
The UK is in a strong position to harness the power of AI to transform many aspects of our lives for the better. Crucial to this endeavour is nurturing the talented people and teams we need to apply AI to a broad spectrum of challenges, from healthy aging to sustainable agriculture, ensuring its responsible and trustworthy adoption.
Professor Dame Ottoline Leyser, UKRI Chief Executive
The Intelligent Earth Centre is one of a cohort of twelve new UK Research and Innovation (UKRI) CDTs in AI, based at 16 universities.
Michelle Donelan, Secretary of State for Science, Innovation, and Technology, said: "The UK is at the very front of the global race to turn AI's awesome potential into a giant leap forward for people's quality of life and productivity at work, all while ensuring this technology works safely, ethically, and responsibly. The plans we are announcing today will future-proof our nation's skills base, meaning we can reap the benefits of AI as it continues to develop. At the same time, we are taking the first steps to put the power of this technology to work, for good, across Government and society."
Addressing a skills gap between AI and environmental science
The remarkable breakthroughs in AI and machine learning over recent decades offer the potential to revolutionize environmental research and provide novel solutions to address Earth's environmental crises, from climate change and biodiversity loss to pollution and clean energy. However, this is currently restricted by a crucial skills gap: environmental scientists often lack expertise in data science, limiting their ability to leverage AI and machine learning tools, whereas data scientists typically do not have specific knowledge of environmental science.
Professor Philip Stier (Department of Physics, University of Oxford), Director for The Intelligent Earth Centre, said: "Traditional, siloed training in environmental and data science has created a bottleneck for UK leadership in science, innovation, and entrepreneurship in this emergent space. Hence, the Intelligent Earth Centre will meet the urgent need for interdisciplinary training at the interface between the environment and AI."
The new centre has been funded by a major £12 million grant from UKRI, with additional funding from the University of Oxford and a wide range of partners, including IBM, Google, DeepMind, the European Space Agency, Planet, the Met Office, Trillium Technologies (FDL Europe), and the Satellite Applications Catapult. These partners will host the Centre's students for placements, enabling them to develop their skills further.
An interdisciplinary initiative
The Intelligent Earth Centre will be intrinsically interdisciplinary, delivering tailored training in both environmental science and data science, and facilitating ambitious, intersectoral projects. Following a rigorous taught programme covering AI tools, frameworks, and environmental datasets, students will work in interdisciplinary groups to tackle grand challenges in environmental science with increasing complexity. Such applications of AI could include next generation climate models that run at a fraction of the computational cost and environmental footprint, automated tracking of biodiversity loss and unregulated pollution sources from space, or rapid alert systems for environmental disasters.
Professor Stier added: "Not only will The Intelligent Earth Centre provide highly qualified graduates for a wide range of industries, but we also expect our own students to drive innovation and found their own start-ups, supported by the programme's dedicated training in enterprise, impact, and responsible AI."
AI is rapidly transforming the environmental sciences, allowing existing research to scale to unseen levels and opening entirely new areas of research. Innovation in this area will increasingly be limited by access to highly skilled graduates; the graduates of The Intelligent Earth Centre will help fill this gap.
Professor Philip Stier, Director for The Intelligent Earth Centre
The Centre will have two entry streams for applicants: one for numerate candidates from environmental science backgrounds and the other for environmentally-driven candidates from computer science, data science, mathematics, statistics, or physics backgrounds. The first PhD positions will start in September 2024 and will open for applications soon with a deadline in January 2024. All details will be provided on The Intelligent Earth website.
Associate Professor Hannah Christensen (Department of Physics), who will lead on Equality, Diversity, and Inclusion for The Intelligent Earth Centre, said: "Equality, Diversity, and Inclusion spans all our activities, from the way we admit, teach, and assess our students, to the timing and choice of cross-cohort social events. We're also proud of our widening access initiatives, which include internships for candidates from underrepresented backgrounds, access scholarships through our Africa Oxford and Academic Futures programmes, as well as our ambitious partnership with the African Institute for Mathematical Sciences."
UKRI AI Centre for Doctoral Training in AI for the Environment (Intelligent Earth) involves the following University of Oxford Departments: Physics, Biology, Computer Science, Earth Sciences, Engineering Science, Statistics, and the School of Geography and the Environment. It is a collaboration with the following non-academic partners: IBM, Google, DeepMind, NVIDIA, ESA, Planet, Met Office, Trillium Technologies (FDL Europe), UK Centre for Ecology & Hydrology, National Centre for Atmospheric Science, On the Edge, Natural State, ConservationXLabs, and Satellite Applications Catapult.
More information can be found on The Intelligent Earth Centre website.