Category Archives: Data Science
Professor Carla Johnson Partners with College of Engineering … – NC State College of Education News
NC State College of Education Professor Carla Johnson is partnering with several national and international companies to help diversify the STEM workforce by preparing more than 30 people for careers in data science and artificial intelligence (AI).
Johnson is the principal investigator on Enabling Access for Historically Underserved and Underrepresented Groups to Experiential Learning and Credentials in Artificial Intelligence (ExLENT-AI), which is funded by a $1 million National Science Foundation ExLENT grant. Through the project, she, along with Professor Min Chi and Associate Professor Collin Lynch from NC State's College of Engineering, will develop an externship that provides experiential learning opportunities for people from a variety of backgrounds, including low-income and first-generation students, to broaden access to STEM careers.
Disparate access to computer science education and tools continues to be an important equity issue. "The digital divide, an opportunity gap for many economically disadvantaged students, continues to promulgate inequities in STEM," Johnson said. "Providing active, experiential, mentored learning opportunities for individuals from underserved groups can enhance participation in STEM careers generally and can be particularly influential in fostering positive identities and dispositions toward computer science."
Through a partnership with Delta Airlines, Lexmark, Randstad and other organizations, Johnson and the project team will design and implement a 40-week externship program that includes weekly workshop sessions alongside industry mentoring, job shadowing and the ability for participants to work on real-world, authentic industry tasks with partner companies.
"It is incredibly valuable for our participants to engage in authentic, real-world applications of the content and skills they are learning in our program. Our partners provide context for what people are learning and serve as important mentors and role models," Johnson said.
ExLENT-AI is an extension of Johnson's work on the Artificial Intelligence (AI) Academy, which is preparing 5,000 individuals for roles in the field of artificial intelligence through a $6 million grant from the U.S. Department of Labor. However, while the AI Academy has only been available to current employees of partner companies as an on-the-job training program, the new project will target individuals who are historically excluded or underserved in STEM and currently unemployed or underemployed.
In addition to using evidence-based best practices to attract and prepare a diverse group of learners to consider emerging technology careers, the ExLENT-AI project will also establish a community of learners through mentorship as well as support participants through their ultimate job search and placement process as they enter careers in the field.
"Our program will provide a pathway to careers for these individuals which have the potential to transform their lives," Johnson said.
Freakonomics author: Objections to data science in K-12 education make no sense – Fortune
The three-year battle over California's new math framework has produced calamity and confusion on all fronts. As fighting raged across op-ed pages and X (formerly known as Twitter), the fog of war obscured the inescapable truth: The data revolution is here and our kids are not prepared for it.
From ChatGPT to personal finance, nearly every decision we make in our daily lives is now dominated by data. Eight of the ten fastest-growing careers this year involve data science. A decade from now, it will be difficult to find any job that is not data-driven.
We need to equip our students for this new reality by teaching them basic data literacy in K-12. We can all see this, but somehow the politics of the moment have turned this idea into a raging debate.
The new critics of data science instruction seem to have three common objections. Their first claim is that data science programs are somehow watering down math. That is indeed possible, especially if districts treat data-related classes as a form of remediation, but this should not be the case. Data science is a very challenging subject, combining traditional math, statistics, computer programming, and complex datasets. In many ways, it demands more of students, requiring critical thinking, creativity, and a nuanced understanding of the context within which data have been generated.
A second objection is that learning data science in high school is somehow illegitimate because students won't yet have the mathematics skills required of professional data scientists. This is an odd argument. Can high school students never learn anything about physics because they don't understand differential calculus? Can they not find beauty in a Shakespearean sonnet if they don't know the rules of iambic pentameter?
The third claim is that data science coursework will crowd out calculus or some of the other math required for college STEM degrees. This is an important concern, but it assumes that every part of today's curriculum is absolutely critical to that path. Do we really think that is true? Having spent many nights at the kitchen table helping my kids with their homework, I suspect it's not. And we (parents) shouldn't ignore the more than 130 college disciplines that now require data and statistics basics as the world changes, including math and engineering.
We adults can stand around and dither, but young people are not waiting for us to figure this out. In college, students are rushing toward data science courses with astonishing speed. The number of data science undergraduate degree programs has exploded nationally and in every state. At the University of Wisconsin, Madison, it has quickly become the fastest-growing major. Not to be outdone, UC Berkeley recently launched an entire college dedicated to the subject. Our own institution, the University of Chicago, has hired 25 faculty in data science to keep up with student demand.
Sixteen other states have already officially launched or recommended data science in K-12. Some are creating full-year courses, while others are completely redesigning their math pathways. Leading STEM high schools throughout the country are teaching their students the UC Berkeley Data 8 program, one of the best collegiate data science courses in the country. Just recently, a group of AP Statistics teachers organized a national data science challenge that attracted more than 5,000 students.
Without leadership from policymakers and educators, this revolution will still happen, but the benefits will go disproportionately to the students who are already advantaged. Wealthy parents and tech employees will teach their kids these skills through summer and after-school programs. Is this what we want? Or do we want to ensure that every child gets at least a basic level of data literacy?
If this all rings true to you, if you believe that a modern K-12 education requires at least some data science instruction, then you can help move us toward action. Ask your local school to incorporate data across school subjects throughout K-12. Ask your teachers to bring modern data tools into the way they teach. And ask your school leaders to offer data-focused math courses, and support their educators with the right resources to do so.
Let's put down our weapons in this math war and start fighting again for our kids' futures.
Steven Levitt is an economist, the founder of The Center for Radical Innovation for Social Change (RISC) at UChicago, and the author of Freakonomics.
The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.
Haywood promoted to director of data science at Cape Fear Collective – Greater Wilmington Business Journal
Dante Haywood has been named director of data science for Cape Fear Collective.
Haywood has been with CFC since January 2021. He was originally drawn to the organization because of its commitment to using data to make unique positive impacts, according to a news release.
"It was not just that CFC had a data mission, it washowthat data would be collected, store, and used. Too often, data is an inaccessible and mysterious field of work, but during my interviews and conversations, CFC was breaking down those barriers to demonstrate that we can make more impact by humanizing the story of data in our everyday lives," Haywood said in a news release.According to the release, Haywood's proudest accomplishment at CFC was engineering the programming scripts that keep theCommunity Data Platformrunning.
"His favorite part of his job is the ability to remove technical barriers, secure funding and collaborate with peopleto achieve more than either party initially thought possible," the release stated.
Looking into the future, Haywood's goal is to use his new role to further strengthen CFC's data science capabilities and social impact measurement, according to the release.
"Data science is a swiftly moving field with new technologies always around the bend," he said in the release. "We have a huge horizon for bringing these technologies to the public. I want to rally our partners in a way that creates the most systemic impact."
Internet Of Things (IOT): Application In Hazardous Locations … – Data Science Central
Introduction to the Internet of Things (IoT):
The Internet of Things (IoT) represents the fourth-generation technology that facilitates the connection and transformation of products into smart, intelligent, and communicative entities. IoT has already established its footprint in various business verticals such as medical, health care, automobile, and industrial applications. IoT empowers the collection, analysis, and transmission of information across various networks, encompassing both server and edge devices. This information can then undergo further processing and distribution to multiple interconnected devices through cloud connectivity.
IoT is used in the oil and gas industry for two basic reasons: first, low-power design, a fundamental requirement for intrinsically safe products; and second, two-way wireless communication. These two advantages are a boon for products used in the oil and gas industry. The only challenge is for the product design to meet the hazardous location certification.
An intrinsic safety certification is mandatory for any device placed in a hazardous location. The certification code depends on the type of protection, the zone, and the region where the product will be installed.
In the US and Canadian markets, the area classification is divided into three classes:
Class I: Location where flammable gases and vapors are present.
Class II: Location where combustible dust is present.
Class III: Location where easily ignitable fibers or flyings are present.
The hazardous area is further divided into two divisions, based upon the probability that a dangerous fuel-to-air mixture will occur.
Division 1: A location where there is a high probability (by underwriting standards) that an explosive concentration of gas or vapor is present during normal operation of the plant.
Division 2: A location where there is a very low probability that the flammable material is present in an explosive concentration during normal operation of the plant; an explosive concentration is expected only in case of a failure of the plant containment system.
The GROUP designation is another meaningful element of hazardous-area nomenclature.
The four gas groups were created so that electrical equipment intended to be used in hazardous (classified) locations could be rated for families of gases and vapors and tested with a designated worst-case gas/air mixture to cover the entire group.
The temperature class definitions are used to designate the maximum operating temperatures on the surface of the equipment, which should not exceed the ignition temperature of the surrounding atmosphere.
Areas classified per NEC Article 505 are divided into three zones based on the probability of an ignitable concentration being present, rather than into two divisions as per NEC Article 500. Areas that would be classified Division 1 are further divided into Zone 0 and Zone 1. A Zone 0 area is more likely to contain an ignitable atmosphere than a Zone 1 area. Division 2 and Zone 2 areas are essentially equivalent.
Zone 0: An ignitable concentration of combustible gases and vapors is present continuously, or for long periods of time.
Zone 1: An intermittent hazard may be present.
Zone 2: A hazard will be present only under abnormal conditions.
IoT-based products can be designed for various applications; a few of them are listed below:
A typical block diagram of the IoT application is shown below:
Figure 1: IoT Block Diagram
An IoT product might use a battery as its power source, or it can be powered externally, either from the 9-36 V DC supply available in process control applications or from a 110/230 V AC input.
The microcontroller can be selected based on the application, power consumption, and peripheral requirements. The microcontroller converts the analog signal to digital and, based on the configuration, can send the data over a wired or wireless link to the remote station. Analog signal conditioning stands as a pivotal component of the product, bridging the sensor and the microcontroller and converting the analog signal into a form compatible with the microcontroller. The Bluetooth interface is suggested in the example because of its wide acceptance and low power consumption. The wireless interface depends on the end application of the product.
The electronics design of an IoT product for a hazardous location is very complex and needs a careful selection of the architecture and base components compared to an IoT product developed for commercial applications. The product must be intrinsically safe and must not cause an explosion under fault conditions. The product architecture should be designed considering the various mechanical and electronics requirements defined in the IEC 60079 standards, the certification requirements, and the functional specifications.
Power Source: This is one of the main elements in an IoT-based product. Battery selection should meet the overall power budget of the product, followed by the battery lifetime. In the case of intrinsic safety, special consideration is required for where the battery is charged. IEC 60079-11 clause 7.4 provides details on the type of battery and its construction. Separation distances between the battery and the electrical interface should follow Table 5 of IEC 60079-11. If the battery is used in a compartment, sufficient ventilation must be provided to ensure that no dangerous gas accumulation occurs during discharge or inactivity periods. In scenarios where the IoT device operates on DC power sources such as 9-36 V DC (nominal 24 V DC), the selection of power supply barrier protection becomes a critical consideration, particularly when catering to intrinsic safety norms. This necessitates a thorough analysis of the product's prerequisites and the mandatory certifications. Adding to the complexity, IoT devices functioning on 230 V AC demand intrinsic safety calculations and certifications aligned with Um = 250 V.
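To make the power-budget point concrete, here is a minimal back-of-the-envelope sketch of battery-lifetime estimation for a duty-cycled node. Every figure in it (cell capacity, sleep and active currents, awake time per hour) is an assumed placeholder for illustration, not a value from this article or from any standard:

```python
# Rough battery-lifetime estimate for a duty-cycled IoT node.
# All numbers below are hypothetical placeholders.

battery_mah = 2400.0       # assumed cell capacity, mAh
sleep_ua = 5.0             # assumed deep-sleep current, uA
active_ma = 18.0           # assumed MCU + radio current while awake, mA
active_s_per_hour = 2.0    # assumed seconds awake per hour

# Duty-cycle-weighted average current, in mA
avg_ma = (active_ma * active_s_per_hour
          + (sleep_ua / 1000.0) * (3600.0 - active_s_per_hour)) / 3600.0

lifetime_h = battery_mah / avg_ma
print(f"Average current: {avg_ma:.4f} mA")
print(f"Estimated lifetime: {lifetime_h:,.0f} h (~{lifetime_h / 8760:.1f} years)")

# A real design would derate this for self-discharge, temperature,
# battery aging, and the peak-current limits of the safety barrier.
```

Running this arithmetic across candidate batteries and duty cycles is typically how the overall power budget mentioned above gets checked early in a design.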
Microcontroller: This is the central processing unit of the IoT product. The microcontroller's architecture, power consumption, and clock frequency must be carefully selected for a particular application. The analog-to-digital converter (ADC) of the microcontroller should be selected based on the required accuracy, update rate, and resolution. The microcontroller should have adequate sleep modes so that power is optimally utilized for IoT applications, and it should have sufficient memory and peripheral interfaces to meet the product specifications.
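As a small illustration of the resolution criterion, the sketch below computes the smallest voltage step (one LSB) an ADC can resolve at a few common bit depths; the 2.5 V reference voltage is an assumed example value, not one from the article:

```python
# Smallest resolvable voltage step (1 LSB) for common ADC bit depths.
# The 2.5 V reference is an assumed example value.

vref_v = 2.5
for bits in (10, 12, 16):
    lsb_uv = vref_v / (2 ** bits) * 1e6  # LSB size in microvolts
    print(f"{bits:2d}-bit ADC: 1 LSB = {lsb_uv:.1f} uV")
```

Whether that step size is small enough depends on the sensor's full-scale output and the accuracy the application demands.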
Analog Signal Conditioning: The front-end block should meet the intrinsic safety requirements of the IEC 60079 standards and should also enable the product to pass EMI/EMC testing. The barrier circuit should provide enough isolation to meet the spark-ignition requirements and the impedance requirement of the transducer. Along with the safety requirements, the designer should ensure that the extracted sensor signal is not degraded by the excessive noise present in the outside environment. All the sensors used for collecting data from the process parameters into the signal conditioning block must be certified for the particular zone.
Wireless Communications: There are various wireless options available for sending data from the IoT product, such as 6LoWPAN, ZigBee, Z-Wave, Bluetooth, Wi-Fi, and WirelessHART. Selection of a particular wireless interface requires knowledge of the end application, RF power, antenna, and protocol, and these basics should guide the choice for a particular IoT application.
In the case of intrinsically safe applications, it's important to note that the use of certified modules does not directly confer suitability for deployment in hazardous locations. The product must undergo fresh testing within an intrinsic safety lab to assess the countable and non-countable faults, along with spark testing. The RF power transmitted from the device should be limited as per the applicable table of IEC 60079-0.
When building IoT solutions for hazardous locations, special conditions relating to creepage and clearance, encapsulation, and separation distance must be carefully considered. Also, when a battery and RF signals are used, the designer is expected to be aware of the applicable standards and the limitations of those standards for such products.
With more than 25 years of experience in mission-critical and consumer-grade embedded hardware design, eInfochips is well poised to make products that are smaller, faster, more reliable, efficient, intelligent, and economical. We have worked on developing complex embedded control systems for avionics and industrial solutions. At the same time, we have also developed portable and power-efficient systems for wearables, medical devices, home automation, and surveillance solutions.
eInfochips, as an Arrow company, has a strong ecosystem of manufacturing partners who can help right from electronic prototype design through manufacturing, production, and certification. eInfochips works closely with contract manufacturers to make sure that designs are optimized for testing (DFT) and manufacturing (DFM) to reduce design alterations on production transfer. To know more about this, contact us.
eInfochips can help product-based companies develop intrinsically safe products and get them certified by labs for various certifications like ATEX, IECEx, and CSA.
Kartik Gandhi, currently serving as Senior Director of Engineering, has a distinguished career spanning over two decades, marked by profound expertise in fields including business analysis, presales, and embedded systems. Throughout his professional journey, Mr. Gandhi has demonstrated proficiency across diverse platforms, notably Qualcomm and NXP, and has contributed his talents to several esteemed product-based organizations.
Dr. Suraj Pardeshi has more than 20 years of experience in research and development, product design and development, and testing. He has worked on various IoT-enabled platforms for industrial applications. He has more than 15 publications in national and international journals. He holds two Indian patents and is a gold medalist with a Ph.D. (Electrical) from M.S. University, Vadodara.
Data Architect Job Description, Skills, and Salary in 2023 | Spiceworks – Spiceworks News and Insights
Data architects are technical professionals in charge of an organization's data systems. They use their IT and design skills to plan, analyze, and implement data solutions for internal use and user-facing applications. Years of study and experience are required to become a data architect; however, it's an in-demand role that can get you a six-figure pay package.
Sample Data Architecture Certification from IBM (Source: Algirdas Javtokas' certifications)
Data is among the most valuable enterprise assets in the world today. NewVantage Partners' 11th annual survey of chief data officers (CDOs) and chief data and analytics officers indicates that 82% of organizations intend to increase investments in data modernization in 2023.
Investment in data products, artificial intelligence (AI), and machine learning was cited as a top priority. This suggests that most businesses require experts who can ensure that their data is readily accessible, organized, and constantly updated. That is where the role of a data architect comes in.
A data architect specializes in developing and optimizing database models to hold and easily access company data. These professionals analyze system specifications, build data models, and guarantee the data's integrity and security.
Data architects are typically part of a company's data science team. They oversee data system initiatives and work closely with data analysts, other data architects, and data scientists. They typically report to the data system and data science leaders. If a company has a CDO, a senior data architect could report directly to that individual.
So, why are data architects necessary in the first place, given that companies could hire data scientists, data analysts, and other data experts? The role of data architects is vital. Without these specialists, data security compliance and data flow can be compromised. This is because they provide the framework for data infrastructure protocols. The architect determines how the other members of the data science team and the company as a whole will construct these systems and manage stored data.
Data architects are specialists at collecting and storing enormous amounts of data. They use their understanding of data collection, analysis, and storage to create an enterprise-wide data structure. Being excellent at math, having the capacity to solve complex problems, and possessing programming expertise are all necessary for this position. They can work for government agencies, universities, IT, financial, or even technical services firms.
Data is at the core of all applications and software. Over time, improperly structured data can look like a pile of spaghetti. This can lead to extended disorder in the software development process and long-term issues for developers. This is where the role of a data architect is important.
Contrary to the widespread belief that data architects solely focus on databases, there is much more to their job than simply creating structured query language (SQL) tables. In the software development process, the role of a data architect is to convert business requirements into established guidelines and standards. This requires deciphering obscure details and transforming them into something logically tangible. Usually, this takes the shape of technical specifications: models of data structures, their interrelationships and connections, and their long-term viability against business requirements.
When coders create databases ad hoc, the resulting construction could become unstable over time. A data architect would analyze current requirements and design data pipelines that are versatile enough to accommodate future changes, feature additions, or any other need that may arise over time.
While the roles of a data architect and a data engineer are closely related, they are not the same.
Together, data architects and engineers create a companys data framework. The data architect conceptualizes the entire framework while the data engineer implements the plan. Data architects prepare the data and construct the framework that data scientists or analysts utilize. Data engineers assist data architects in developing the search and retrieval architecture.
Regarding the difference between data analysts and data architects, the former operates more on the business side of things. Their daily responsibilities include cost estimates, business case writing, stakeholder consultations, and high-level content, which is closer to marketing and sales functions.
The function of a data architect is more hands-on and positioned closely to the software development divisions. Their technical proposals often get turned into software. A data architect's day would be spent organizing, refactoring, and unraveling data at a macroscopic level. This includes reorganizing or establishing new data structures for an app and resolving how these models are standardized, passed on, and utilized.
However, like analysts, a data architect can navigate between the various business layers and stakeholders. This activity aims to collect specifications and convert them into a suitable format for software development.
To become a data architect, aspirants can follow these steps:
1. Obtain a bachelor's degree in a related discipline
A bachelor's degree in computer engineering, computer science, information technology, or a comparable field is typically necessary for data architects. Master's degrees can prove helpful but are not mandatory. Usually, data architects have many years of experience in application design, system development, and data management. Therefore, you should successfully finish coursework in these areas.
2. Apply for a summer internship while in college
Data architecture isn't generally an entry-level position. As such, you should gain as much experience as possible early on to prepare for this role. Look for apprenticeships in IT that will help you develop application frameworks and network administration skills. Most leading technology companies offer summer internships to seniors in college, which can give you a leg up in your career as a data architect.
3. Get certified
Being certified always helps. The Institute for Certification of Computing Professionals offers the most popular certification, Certified Data Professional (CDP). Before taking a certification test, applicants must possess at least two years of IT work experience and a bachelors degree.
4. Build on your experience
Those interested in data architecture may require three to five years of work experience and proven project success. Apply for entry-level positions in programming and database management. Keep honing your database development, design, management, modeling, and warehousing capabilities. This is a good time to take on gig projects to add to your portfolio, such as helping a small business migrate its data systems.
5. Apply for a data architect job
After four to five years of experience, you're ready to apply for a data architect position. Look for work in financial markets, educational institutions, healthcare and insurance firms, and other organizations that gather and analyze massive amounts of client data. Software as a service (SaaS) and artificial intelligence companies also employ data architects to power their applications.
Like a regular architect, a data architect designs the blueprint of an organization's data layout. These designs are then used to create databases as well as other systems. However, this is just a basic explanation; the roles and responsibilities of a data architect go much further.
A data architect bridges business and IT. Consequently, the data framework they create must conform both to their organization's goals and to broader industry standards. For instance, C-suite executives may want to enhance the accuracy and availability of data insights to make better decisions. As a result, a data architect will prioritize this. They will also offer counsel, or push back, if something is technically impossible.
The foundations of an organization's IT infrastructure are data models, metadata structures, and pipelines. Throughout the organization's life cycle, data architects recommend how data is collected, used, controlled, shared, and restored. In addition, they ensure compliance with regulatory requirements and data security. A data architect establishes and distributes a common data vocabulary alongside more technical artifacts. This helps maintain consistency across the organization, even in non-technical teams.
Data architects must track and sustain system health by performing regular tests, resolving issues, and quickly fixing bugs. In addition, they identify key performance indicators (KPIs) to gauge and track the efficacy of the data infrastructure and its individual components. If KPI targets aren't met, a data architect would need to suggest methods, such as new technologies, that can improve the current framework.
A data architect determines how information is safeguarded and who can control it. Further, this professional must ensure compliance with data-related rules, regulations, and guidelines. Consider healthcare data that contains confidential details, referred to as protected health information (PHI), and is bound by HIPAA regulations. If an organization works with medical records and paperwork not stored in medical facilities, it is the data architect's job to set up access controls, data encryption, anonymity, and additional security measures.
The General Data Protection Regulation (GDPR) governs the gathering, storing, and processing of personal information in the European Union. This privacy legislation must be taken into consideration when creating any data architecture.
A data architect oversees the tasks of data engineers, comparable to how a building architect supervises a construction crew setting the foundation for a new building. This ensures their databases, apps, or other data systems conform to the framework. Depending on the development of their data unit, they would also need to coordinate with third-party data suppliers to develop architecture-compliant guidelines.
A data governance policy is an annotated document that details the objectives, processes, and company standards for data management. It outlines metrics and best practices to guarantee data quality, confidentiality, and security. This document ensures all parties agree on who is liable, for what reasons, and how information must be administered at various phases of its lifecycle. While data architects aren't the only individuals who create policies, they significantly contribute to developing data-related regulations and norms.
Like most technical professionals, data architects need both hard and soft skills to succeed in their roles. The top skill requirements for a data architect include:
A data architect's daily tasks and responsibilities involve direct collaboration with data engineers or data scientists. This professional must, therefore, be intimately familiar with an extensive range of data-related technologies such as SQL/NoSQL databases, ETL/ELT tools, etc. Furthermore, a data architect's experience with popular tools such as Microsoft Power BI and Tableau is a major asset.
A data architect would frequently need to resolve complex issues with data systems, quickly locate the root cause of a problem, and create efficient solutions. In addition, data architects serve as mediators between organizations and data science experts. Their goal is to match technical specifications with business requirements. To succeed in this challenging endeavor, they must demonstrate critical thinking abilities. This facilitates the identification of a company's objectives and the use of technical expertise to reduce expenses and maximize profits.
Data management reveals the value of a company's data, and it is the duty of a data architect to ensure that metadata rules are relevant to all of the company's data. This means that a data architect must have a solid understanding of data lifecycle management (DLM) and how metadata is applied during each phase of DLM.
Even though data architects rarely need to write code, proficiency in various prominent programming languages is necessary. This is because they must adapt data architectures for various applications in different programming languages.
The top skills here include:
Data architects increasingly need to understand AI, machine learning, natural language processing, and pattern recognition. This is because AI is now applied to real-world use cases, including data-related issues. An understanding of these tools is also required since they facilitate data architects' use of clustering in text mining and data administration.
A key skill set for any data architect is data modeling. It entails depicting data flow with structured and architecturally correct diagrams to simplify an elaborate software system. Before creating an app, data models help stakeholders find and address flaws or vulnerabilities.
A data architect is a mid-level or a senior-level role. As a result, this professional commands a high salary of $131,375 annually in the US, according to Glassdoor data last updated on June 21, 2023. On top of that, these professionals can earn additional cash compensation of $23,271 on average from bonuses, commissions, etc.
Data architect salaries can be as high as $200,000 annually or more, depending on the company one joins. For example, on average, Cisco pays its data architects $224,214 annually, while IBM salaries are close to $180,000. Further, this is an in-demand role in the financial services industry, with jobs available at leading banks such as JP Morgan and Bank of America.
Healthcare providers such as HCA Healthcare and Intermountain Health also employ data architects at a six-figure salary. Therefore, it is worth putting in the hard work, getting certified, honing your data architecture skills, and gaining experience since there is much room to grow in this career.
Typically, data architects join as data architecture associates and move on to a more senior position until they ascend to the chief data officer (CDO) role. However, your skills as a data architect are highly transferable, and several other jobs can also be explored.
Data architecture is now an in-demand role that companies such as Salesforce and IBM offer certifications for. Information is now central to nearly every business process, and enterprise applications need to utilize data meaningfully. A data architect can be a valuable asset to an organization and command a high salary by successfully mobilizing and monetizing information.
The AUC Data Science Initiative partners with Mastercard to further … – PR Newswire
Through a $6.5M grant, Mastercard will support the expansion of data science education and research efforts across the nation's Historically Black Colleges and Universities.
ATLANTA, Oct. 16, 2023 /PRNewswire/ -- The Atlanta University Center (AUC) Data Science Initiative announces the launch of a new partnership with Mastercard at 12:00 PM on October 18th at the AUC Robert W. Woodruff Library. The event will detail the innovative partnership, which is supported by a $6.5 million grant from Mastercard to drive the expansion of data science across the nation's Historically Black Colleges and Universities (HBCUs).
"The AUC Data Science Initiative has had great success engaging AUC students and faculty resulting in significant national impacts, primarily increasing the presence and employment of Black data scientists in the workforce," said David Thomas, Ph.D., chair of the Atlanta University Center Consortium Board of Trustees and Morehouse College president. "This partnership with Mastercard will amplify these efforts by providing a resource to all HBCUs creating pathways of innovation in data science."
"As technology advancements in the field of data science impact both our local and global economic foundation, we need to ensure we are enabling the future workforce with pathways in data science knowledge that prioritize equitable access to opportunity for all," said Salah Goss, senior vice president for social impact for the Mastercard Center for Inclusive Growth.
The partnership seeks to develop new or reframed courses created across HBCUs guided by industry needs. New computer science faculty will be hired at an AUC institution and will work across HBCUs to strengthen data-specific curriculum and programming. This partnership will expand successful AUC Data Science Initiative programs.
Talitha Washington, Ph.D., director of the AUC Data Science Initiative, will lead collaboration with other HBCUs to create new innovations in curricula and research. "There is a growing workforce need for data scientists and other professionals who possess data science skills," said Washington. "Data science impacts everything that we do, and we need all talent at all HBCUs to drive innovations."
The $6.5 million investment builds on and is informed by Mastercard's previous work with HBCUs leveraging Mastercard's unique expertise to create industry-informed programs to increase student placement in the workforce.
Learn more about the AUC Data Science Initiative at https://datascience.aucenter.edu, and register for the Oct. 18 Mastercard partnership event at https://tinyurl.com/MastercardDSI.
Media Contact: [email protected]
SOURCE Atlanta University Center Data Science Initiative
Elke Rundensteiner Receives the Prestigious IEEE Test-of-Time … – WPI News
Elke Rundensteiner, the William Smith Dean's Professor in Computer Science and founding head of WPI's Data Science Program, recently received the InfoVis 20-Year Test-of-Time Award from the Institute of Electrical and Electronics Engineers (IEEE) for her pioneering work on data visualization and visual analytics in 2003.
This award honors articles published at previous IEEE VIS (Visualization) conferences, in this case in 2003, that have withstood the test of time by remaining useful 20 years later and that have had significant impact and influence on future advances within and beyond the visualization community, according to the award's organizers. Award selection is based on measures such as the number of citations, the quality and influence of its ideas, and other criteria.
Rundensteiner and her team, which included the late computer science professor Matthew Ward and former PhD students Jing Yang and Wei Peng, are being honored for their work on interactive hierarchical dimension ordering, spacing, and filtering for the exploration of high-dimensional datasets.
"I fondly remember my close research collaboration with my colleague Matt Ward over a 17-year time span from 1998 to 2014 that resulted in a series of 7 National Science Foundation (NSF) research grants and one National Security Agency (NSA) grant for our work at the intersection of visualization and data analytics," Rundensteiner said. "This allowed us to collaborate with countless joint PhD students, contributing cutting-edge advances to the then-newly emerging area of visual analytics, which led to this inspiring award. Matt was not only a creative thinker at the forefront of his time, he was a supportive colleague and generous friend and remains a true inspiration for me."
According to the award selection committee, the work that Rundensteiner and her team undertook presents a thoughtful, elegant, and powerful approach to managing the complexities of high-dimensional data and reducing clutter in visualizations such as parallel coordinates. The team's research provided insight by clustering the dimensions of high-dimensional data sets into a hierarchical structure (instead of just clustering the data itself), which then can be exploited to make sense of this complex data more efficiently. The paper, "Interactive hierarchical dimension ordering, spacing and filtering for exploration of high dimensional datasets," laid the groundwork for subsequent research and influenced the design of other tools and techniques, the award committee noted.
"Citations to the original paper have increased over time, showing evidence of lasting value, and the ideas introduced in the work are still relevant today," the award committee wrote. "The paper shows us how we can solve a problem through interactive visualization design and presents convincing options for future analysts and designers. These ideas underpin subsequent research on synthesizing new summary dimensions, contribute to contemporary thinking on explainability, and have influenced the design of many other high dimensional visualization tools and techniques."
UB takes AI chat series to Grand Island Central School District – University at Buffalo
BUFFALO, N.Y. Artificial intelligence has the potential to drastically alter education systems.
It can provide students personalized learning experiences and instantaneous feedback, as well as deliver data to teachers on how to better engage students and improve curriculum.
But there are concerns: everything from students using AI to write essays to unintentional bias within AI programs and job loss among teachers.
These topics and more will be discussed Thursday at Grand Island Central School District during the second installment of the UB | AI Chat Series, "Advancing Education with Responsible AI."
News media are invited to the panel discussion, as well as AI demonstrations and a poster session that will follow.
When: 6-7:30 p.m. on Thursday, Oct. 19.
Where: Grand Island Senior High School, 1100 Ransom Road, Grand Island, New York, 14072.
Best time for visuals: From 7-7:30 p.m., UB students will demonstrate AI programs and display posters that describe their work.
Who: The panel discussion will feature:
Suzanne Rosenblith, dean of the UB Graduate School of Education, will moderate the discussion. Brian Graham, superintendent of Grand Island Central School District, will deliver a welcome address. And Venu Govindaraju, SUNY Distinguished Professor and UB vice president for research and economic development, will provide opening remarks.
Background: The two-year AI chat series will feature faculty-led and moderated discussions that explore how UB researchers from a wide variety of academic disciplines are harnessing artificial intelligence for the betterment of society.
It will spotlight significant new projects underway at UB, such as the National AI Institute for Exceptional Education, which the National Science Foundation (NSF) funded with $20 million in January, as well as nearly $6 million in NSF-sponsored research to help older adults recognize and combat online scams and disinformation, among other endeavors.
Opinion: The Rise of the Data Physicist – American Physical Society
In the search for new physics, a new kind of scientist is bridging the gap between theory and experiment.
By Benjamin Nachman | October 13, 2023
Traditionally, many physicists have divided themselves into two tussling camps: the theorists and the experimentalists. Albert Einstein theorized general relativity, and Arthur Eddington observed it in action as bending starlight; Murray Gell-Mann and George Zweig thought up the idea of quarks, and Henry Kendall, Richard Taylor, Jerome Friedman, and their teams detected them.
In particle physics especially, the divide is stark. Consider the Higgs boson, proposed in 1964 and discovered in 2012. Since then, physicists have sought to scrutinize its properties, but theorists and experimentalists don't share Higgs data directly, and they've spent years arguing over what to share and how to format it. (There's now some consensus, although the going was rough.)
But there's a missing player in this dichotomy. Who, exactly, is facilitating the flow of data between theory and experiment?
Traditionally, the experimentalists filled this role, running the machines and looking at the data, but in high-energy physics and many other subfields, there's too much data for this to be feasible. Researchers can't just eyeball a few events in the accelerator and come to conclusions; at the Large Hadron Collider, for instance, about a billion particle collisions happen per second, which sensors detect, process, and store in vast computing systems. And it's not just quantity. All this data is outrageously complex, made more so by simulation.
In other words, these experiments produce more data than anyone could possibly analyze with traditional tools. And those tools are imperfect anyway, requiring researchers to boil down many complex events into just a handful of attributes, say, the number of photons at a given energy. A lot of science gets left out.
In response to this conundrum, a growing movement in high-energy physics and other subfields, like nuclear physics and astrophysics, seeks to analyze data in its full complexity to let the data speak for itself. Experts in this area are using cutting-edge data science tools to decide which data to keep and which to discard, and to sniff out subtle patterns.
Machine learning, in particular, has allowed scientists to do what they couldn't before. For example, in the hunt for new particles, like those that might comprise dark matter, physicists don't look for single, impossible events. Instead, they look for events that happen more often than they should. This is a much harder task, requiring data-parsing at herculean scales, and machine learning has given physicists an edge.
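To give a feel for what "more often than they should" means, here is a toy counting-experiment sketch. It is illustrative only, with invented numbers, and stands in for the far more careful likelihood-based statistics real analyses use:

```python
import math

# Toy counting experiment: is the observed event count a meaningful
# excess over the expected background? All numbers are invented.
expected_background = 10_000.0   # assumed expected event count
observed = 10_350.0              # assumed observed event count

excess = observed - expected_background
# Naive significance: the excess measured in Poisson standard
# deviations of the background, sqrt(b).
z = excess / math.sqrt(expected_background)
print(f"Excess: {excess:.0f} events, naive significance: {z:.1f} sigma")
```

The hard part is choosing which slices of the data to count in the first place, which is one place machine learning gives physicists an edge.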
Nowadays, the experimentalists who manage the control rooms of particle accelerators are seldom the ones developing the tools of machine learning. The former are certainly experts; they run colliders, after all. But in projects of such monumental scale, nobody can do it all, and specialization reigns. After the machines run, the data people step in.
The data people aren't traditional theorists, and they're not traditional experimentalists (though many identify as one or the other). But they're here already, straddling different camps and fields, proving themselves invaluable to physics.
For now, this scrappy group has no clear name. They are data scientists or specialized physicists or statisticians, and they are chronically interdisciplinary. It's high time we recognize this group as distinct, with its own approaches, training regimens, and skills. (It's worth noting, too, data physics' distinctness from computational physics. In computational physics, scientists use computing to cope with resource limitations; in data physics, scientists deal with data randomness, making statistics, what you might call phystatistics, a more vital piece of the equation.)
Naming delivers clout and legitimacy, and it shapes how future physicists are educated and funded. Many fields have fought to earn this recognition, like biological physics, sidelined for decades as an awkward meeting of two unlike sciences and now a full-fledged and vibrant subfield.
It's the data wranglers' turn. I propose that we give these specialists a clear identity: the data physicists. Unlike a traditional experimentalist, a data physicist probably won't have much hands-on experience with instrumentation. They probably won't spend time soldering together detector parts, a typical experience for experimentalists-in-training. And unlike a theorist, they may not have much experience with first-principles physics calculations, outside of coursework.
But the data physicist does have the core skills to understand and interrogate data, complete with a strong foundation in data science, statistics, and machine learning, as well as the computational and theoretical background to relate this data to underlying physical properties.
The data physicists have their work cut out for them, given the enormous amount of data being churned out by experiments in and beyond high-energy physics. Their efforts will, in turn, improve the development of new experimentation methods, which are today often developed from simpler, synthetic datasets that don't map perfectly to the real world.
But this data will go underutilized without a skilled cohort of scientists who can deftly handle it with new tools, like machine learning. In this sense, I'm not merely arguing for name recognition. We need to identify and then train the next generation, to tackle the data we have right now.
How? First, we need the right degrees: Universities should develop programs explicitly for data physicists in graduate school. I expect the data physicist to have a strong physics background and extensive training in statistics, data science, and machine learning. Take my own path as a starting point: I studied computational aspects of particle theory as a master's student and took many courses in statistics as a PhD student, which led to naturally interdisciplinary research between physics and statistics/machine learning and between theorists and experimentalists.
The right education is a start, but the field also needs tenure-track positions and funding. There are promising signs, including new federal funding to help institutions launch Artificial Intelligence Institutes dedicated to advancing this research. But while investments like this fuel interdisciplinary research, they don't support new faculty, not directly, at least. And if you're not at one of the big institutions that receive these funds, you're out of luck.
This is where small-scale funding must step in, including money for individual research groups, rather than for particular experiments. This is easier said than done, because a typical group grant, which a PI uses to fund themselves and a student or postdoc, forces applicants to adhere to the traditional divide: theory or experiment, or hogwash. The same goes for the Department of Energy's prestigious Early Career Award: there is no box to check for interdisciplinary data physics.
As tall an order as this funding is, it could be easier to achieve than a change in attitude. Physicists might well be famous for many of humanity's greatest discoveries, but they're also notorious for their exclusionary, if not outright purist, suspicion of interdisciplinary science. Physics that borrows tools and draws inspiration from other fields, from cells in biological physics, say, or from machine learning in data science, is often dressed down as not real physics. This is wrong, of course, but it's also a bad strategy: A great way to lose brilliant physicists is to scoff at them.
Not all are skeptical; far more, in fact, are excited. Within APS, the Topical Group on Data Science (GDS) is growing rapidly and might soon become a Division on Data Science, a reflection of the field's growing role in physics. My own excitement about working directly with data inspired me to become an experimentalist myself, although I realize now how restrictive that label was.
As available data grows, so does our need for data physicists. Let's start by calling them what they are. But then let's do the hard work: educating, training, and funding this brilliant new generation.
Benjamin Nachman is a Staff Scientist at Berkeley Lab, where he leads the Machine Learning for Fundamental Physics Group, and a Research Affiliate at the UC Berkeley Institute for Data Science. He is also a Secretary of the APS Topical Group on Data Science.
The author wishes to thank the Editor, Taryn MacKinney, for her work on this article, and David Shih for coining the term 'data physicist' at a recent Particle Physics Community Planning Exercise.
Ducera Partners and Growth Science Ventures Announce the Formation of Ducera Growth Ventures – Yahoo Finance
NEW YORK, October 16, 2023--(BUSINESS WIRE)--Ducera Partners LLC ("Ducera"), a leading investment bank, and Growth Science Ventures, a data science focused venture capital firm, today announced the launch of Ducera Growth Ventures ("Ducera Growth").
Ducera Growth Ventures will focus on identifying, analyzing, and managing innovation-based venture capital investments in funds that include strategic corporate clients. The platform will be led by Michael Kramer, Founding Partner and Chief Executive Officer of Ducera, and Thomas Thurston, one of the world's leading data scientists, Founder of Growth Science Ventures, and a Senior Advisor to Ducera.
Unlike traditional venture capital investing, Ducera Growth will combine Ducera's investment banking expertise with Growth Science's proprietary analytics and big data systems to identify unique early-stage growth companies. With an adherence to classic disruption theory and competitive threat regression analyses, the platform will deploy growth capital on behalf of its strategic corporate clients in future market leaders and next-generation companies that demonstrate the potential to produce new customers, reduce costs, and/or create new markets that are consistent with a client's long-term vision.
Michael Kramer, Chief Executive Officer of Ducera Partners, said, "Ducera continues to evolve as a full-service investment bank and strategic advisory firm, and we are focused on providing our clients with access to innovative solutions that we believe have the ability to add significant value to their businesses. Ducera Growth Ventures is an exciting new initiative that I believe will disrupt traditional venture capital investing and redefine how companies access and/or acquire innovative external technologies."
Thomas Thurston, Founder of Growth Science Ventures and Senior Advisor to Ducera, added, "Innovation is now happening at speeds, scales, and levels of complexity that only advanced computing can adequately analyze and make sense of. Yet with the right technological tools and a mastery of how to use them, companies can identify and capture growth from disruptive opportunities more rapidly and consistently than ever before. Utilizing data science and artificial intelligence presents a significant opportunity for companies to enhance their productivity and decision making in support of their organic and inorganic growth strategies. I am thrilled to partner with Michael to form Ducera Growth Ventures and look forward to working with a broad array of Duceras corporate clients in support of their venture investing interests."
Mr. Thurston and Ducera previously launched the first of a six-part mini-series focused on how Ducera is using data science and artificial intelligence to advise clients in their development of corporate innovation and growth. Learn more by visiting: https://ducerapartners.com/news/thomas-thurston-partner-and-founder-of-growth-science-ventures-has-joined-the-firm-as-a-senior-advisor/
About Ducera Partners
Ducera Partners is a leading investment banking advisory practice with expertise in restructuring, strategic advisory, liability management, capital markets, wealth management, and growth capital. Since its founding in June 2015, Ducera Partners has advised on over $750 billion in transactions across various industries. Ducera Partners has offices in New York, Los Angeles, and Stamford. For more information about Ducera Partners, please visit http://www.ducerapartners.com.
About Ducera Growth Ventures
Ducera Growth Ventures focuses on identifying, analyzing, and managing innovation-based investments across a broad array of market segments and industries. The platform seeks to combine Ducera's investment banking expertise with Growth Science Ventures' proprietary analytics and big data to identify early-stage growth companies that have the potential to be successful over the long term. With an adherence to classic disruption theory and competitive threat regression analyses, Ducera Growth Ventures invests growth capital on behalf of its strategic corporate clients in future market leaders and next-generation companies that have the potential to produce new customers for Ducera's clients, reduce costs, and/or create new markets that are consistent with a client's long-term vision. For more information about Ducera Growth Ventures, please visit http://www.ducerapartners.com.
About Growth Science Ventures
Growth Science Ventures was founded by Thomas Thurston, one of the world's leading data scientists. The firm utilizes data science to identify disruptive startups and counsels clients in connection with the development and launch of new products and services. For nearly 20 years, Growth Science has continued to evolve its proprietary analytics, capabilities, and AI infrastructure through research collaborations with more than 60 of the world's largest, market-leading multinational companies spanning more than 1,000 market segments. For more information about Growth Science Ventures, please visit http://www.gsventures.com.
Contacts
Mike Geller, Prosek Partners, mgeller@prosek.com