
Leaders in the evolution of the liberal arts and sciences: SCHEV approves new W&M school – William & Mary

The evolution of the liberal arts and sciences took a significant step forward Tuesday.

The State Council of Higher Education for Virginia (SCHEV) approved William & Mary's School of Computing, Data Sciences, and Physics. The school aligns with W&M's academic mission and expands the university's ability to prepare students to thrive in a data-rich world.

The school brings together four of the university's high-performing units: applied science, computer science, data science and physics. These will move into the new school in the fall of 2025. The school will be the sixth at W&M since its inception and the first in over 50 years. A national search for the dean of computing, data sciences, and physics is underway.

"I appreciate SCHEV's shared commitment to preparing broadly educated, forward-thinking citizens and professionals," said President Katherine A. Rowe. "The jobs of tomorrow belong to those prepared to solve tomorrow's problems. Machine learning, AI, computational modeling: these are essential modes of critical thinking and core to a liberal arts education in the 21st century."

While the school and its new administrative structure were officially approved Tuesday, its foundations are already in place. The school, brought to life by an extensive feedback and consultation process, will coalesce four programs currently operating within the Faculty of Arts & Sciences.

William & Mary's Board of Visitors unanimously approved the new administrative structure in November 2023. To be housed in the heart of campus upon completion of phase four of the Integrated Science Center in fall 2025, the school will be a space where graduate and undergraduate students excel in a combination of disciplines and where research opportunities will be expanded, continuing to attract world-class faculty and external investment.

"Innovation has been part of William & Mary since its inception, and this school will serve as the catalyst for countless new discoveries, partnerships and synergies," said Provost Peggy Agouris. "The School of Computing, Data Sciences, and Physics is launching at a pivotal time within these dynamic fields, and I'm incredibly proud to continue our journey of interdisciplinary growth and excellence across our undergraduate and graduate program offerings. I am grateful to SCHEV Council members for their belief in our vision and to all involved who made this a reality."

The university submitted the formal application to SCHEV, the state agency that governs new schools and new programs, earlier this spring.

In establishing a standalone school, William & Mary will grant more visibility and autonomy to these high-performing academic areas; it will also provide a single point of contact for external collaboration. The school will strengthen existing partnerships (for example, with the Thomas Jefferson National Accelerator Facility in Newport News) while facilitating cooperation with external parties promoting scientific and technological advancement.

The four academic areas in the new school are already experiencing strong growth in external investment (over $9 million in 2023) and student numbers. Master's students from the new school's constituent areas represented one-third of all Arts & Sciences master's students, with this proportion rising to almost two-thirds when considering doctoral programs.

In the new structure, high-impact research in data-intensive fields will further converge with academic and professional career preparedness, meeting increased student and employer demand while achieving goals from the university's Vision 2026 strategic plan.

Undergraduate candidates will not apply to the school directly. W&M second-year students in good standing will be able to enter the school as long as they meet criteria established by the school and the major, and they will continue to have the opportunity to double major or minor in areas offered by other W&M programs. Interdisciplinary collaborations between the school and the rest of the university will be expanded, combining cutting-edge innovation with William & Mary's distinctive strengths in the liberal arts and sciences.

"We do our best work when we do it together," Agouris said. "Aligning our computer science, data science, applied science and physics programs under one school will deepen the university's impact on fields that are rapidly changing and increasingly important. Our students come here wanting to understand and change the world. Now more than ever, they will leave better equipped to do just that."

Antonella Di Marzio, Senior Research Writer

See original here:

Leaders in the evolution of the liberal arts and sciences: SCHEV approves new W&M school - William & Mary

Read More..

I Used to Hate Overfitting, But Now I'm Grokking It | by Laurin Heilmeyer | Jul, 2024 – Towards Data Science

The surprising generalisation beyond overfitting

As someone who has spent considerable time with various computer science topics, where mathematical abstractions can sometimes be very dry, I find the practical, hands-on nature of data science to be a breath of fresh air. It never fails to amaze me how even the simplest ideas can lead to fascinating results.

This article explores one of these surprising revelations, one I only recently stumbled upon.

I'll never forget how the implementation of my Bachelor's thesis went. While it was not about machine learning, it had a formative effect on me, and I constantly remind myself of it when tinkering with neural networks. It was an intense time: the thesis was about an analytical model of sound propagation that aimed to run in the browser, which meant very limited performance and long-running simulations. It constantly failed to complete after running for many hours. But the worst experience was interpreting the confusing results of wrongly configured simulations, which often made me think the whole model was nonsense.

The same happens from time to time when I actually train neural networks myself. It can be exhausting to

Link:

I Used to Hate Overfitting, But Now I'm Grokking It | by Laurin Heilmeyer | Jul, 2024 - Towards Data Science

Read More..

WOC and BEYA STEM community mourn the loss of a pioneering data scientist and cybersecurity leader – BlackEngineer.com

Dr. Nefertiti Jackson, who was a prominent figure in data science and cybersecurity, has died.

Her obituary states that there will be a memorial service on Wednesday, July 24 at 11:00 AM at the Allen Temple Baptist Church, located at 8501 International Blvd, Oakland, CA. Additionally, there will be a viewing on Tuesday, July 23, at C. P. Bannon Mortuary, located at 6800 International Blvd, Oakland, CA.

Dr. Jackson was a distinguished data scientist and technical leader who made significant contributions to the field of cybersecurity.

Her career included roles at the National Security Agency (NSA) where she played a key role in identifying and addressing system inefficiencies.

She held degrees in mechanical engineering, biomedical engineering, and a Ph.D. in Applied Physics from the University of Michigan in conjunction with Howard University.

During her career, Dr. Jackson worked on various groundbreaking systems, including detecting anomalies in network traffic and developing alert mechanisms for health and continuous systems monitoring.

She was also passionate about sharing her knowledge and expertise, regularly participating in STEM conferences and educational outreach programs.

In addition to her professional accomplishments, Dr. Jackson was deeply involved in promoting STEM education.

She served as a board member for a local private school and was instrumental in creating opportunities for young leaders to engage with cybersecurity and data science.

Dr. Jackson's impact extended to her alma mater, Tuskegee University, where she worked to establish a pipeline for future cybersecurity leaders focused on artificial intelligence (AI) and machine learning (ML).

Dr. Nefertiti Jackson's contributions to the fields of data science and cybersecurity, as well as her dedication to education and outreach, leave a lasting legacy.

Link:

WOC and BEYA STEM community mourn the loss of a pioneering data scientist and cybersecurity leader - BlackEngineer.com

Read More..

UC San Diego Launches New School of Computing, Information and Data Sciences – HPCwire

July 22, 2024: The University of California Board of Regents has approved the creation of the new School of Computing, Information and Data Sciences (SCIDS) at UC San Diego, a critical advance in UC San Diego's long history of leading innovation and education in artificial intelligence, computing and data science, disciplines that are rapidly reshaping modern life.

One of 12 schools at UC San Diego, and just the fourth to be added in the 21st century, SCIDS will bring together faculty across disciplines to improve the human condition by better understanding how data shapes society, and to prepare the next generation of highly skilled workers driving artificial intelligence advancements.

"The School of Computing, Information and Data Sciences exemplifies UC San Diego's commitment to addressing one of the most compelling needs of modern times: transforming data into actionable knowledge," said Chancellor Pradeep K. Khosla. "Computing and data literacy are key to meeting the needs of students and the state of California, advancing critical research areas like the future of artificial intelligence, and bolstering the university's mission of public service."

SCIDS will be enlivened by an anticipated 8,000 students, including many who will come through a robust community college pipeline; and more than 50 faculty across 16 academic disciplines.

The school will play a key role in advancing data science across all disciplines, as well as advancing state-of-the-art computing applications. Additionally, it will serve as a catalyst for increasing collaborations across existing schools, academic departments and disciplines to establish new fields of inquiry.

By translating data science from the classroom to research and the broader workplace, the school will prepare students for their careers by providing opportunities for them to engage directly with industry and government partners, including emergency responders; municipal, state and national resource management organizations; and nonprofits. Students will learn first-hand how data science can allow organizations to better address societal problems ranging from climate change mitigation and social justice issues, to technical challenges and healthcare.

"Pursuing cross-collaborative research opportunities and creating interdisciplinary educational programs are integral parts of the UC San Diego community," said Executive Vice Chancellor Elizabeth H. Simmons. "The School of Computing, Information and Data Sciences is just the latest example of our commitment to working across disciplines to expand knowledge in a burgeoning field and improve our community and our world. It fits perfectly within our educational structure."

Foundational Pillars of the New School

The new school combines the strengths of the San Diego Supercomputer Center (SDSC), a national leader in high-performance and data-intensive computing, and the Halıcıoğlu Data Science Institute (HDSI), a pioneering interdisciplinary institute that advances data science and AI education and research.

Together these resources form the foundational pillars of SCIDS and will position UC San Diego to support the growing demand for data science and computing expertise across the research and educational mission of the university.

As part of a national effort to address a shortage of advanced computing resources, the U.S. National Science Foundation (NSF) established the General Atomics-affiliated SDSC in the 1980s, which transformed academic scientific communities like UC San Diego.

In its 40-year history, SDSC has provided computing resources to a range of domestic stakeholders and federal agencies, state agencies tackling crises like extreme weather and wildfire, and UC San Diego and the UC system, providing researchers with in-house computational and data resources to accelerate scientific discovery.

"The new school at UC San Diego will grow our impact on society via translational computing, information and data sciences, and bring AI education to community and teaching colleges across California and the nation via our AI infrastructure," said SDSC Director Frank Würthwein, a founding faculty member of HDSI. "Combining our strengths with those of HDSI optimizes our leadership in innovation for science, technology, education and society."

UC San Diego's depth in technology-related domains, anchored by engineering and mathematics, deepened further when the university established HDSI in 2018 with philanthropic support from computer science and engineering alumnus Taner Halıcıoğlu.

Educating the next generation of machine learning engineers and data analysts, HDSI brings together an interdisciplinary team of faculty and researchers from areas ranging from computer science to communications to medicine. Working together, these researchers explore new computational methods and new mathematical models, and guide the societal and ethical impacts of data science.

HDSI, for example, is home to the NSF-funded AI Institute for Learning-Enabled Optimization at Scale (TILOS), which explores AI optimization and advances chip design, networks and contextual robotics.

"HDSI and SDSC share the unique challenge of building transdisciplinary academics and research. Their coming together under SCIDS will involve new synergies and realize tremendous new possibilities in creating talent in emerging areas, including artificial intelligence," said HDSI Director Rajesh Gupta.

UC San Diego's undergraduate major in data science was first developed and shepherded in 2016 by the Department of Computer Science and Engineering before the degree program's transfer to HDSI. HDSI graduated its first class of bachelor's students in 2020, and initiated master's and doctoral programs in 2022. HDSI also offers a minor in data science, with a growing population of students, and it is in the process of launching a joint M.S.-M.D. program with health sciences. Currently, there are 51 faculty appointments in HDSI. Student graduates include 814 bachelor's students between 2020 and 2024, and 22 master's students between 2022 and 2024. One Ph.D. student graduated in 2024.

Source: UC San Diego

View original post here:

UC San Diego Launches New School of Computing, Information and Data Sciences - HPCwire

Read More..

Evolution of Data Science: New Age Skills for the Modern End-to-End Data Scientist | by Col Jung | Jul, 2024 – Towards Data Science


In the 1980s, Wall Street discovered that physicists were great at solving complex financial problems that made their firms a bucket load of money. Becoming a quant meant joining the hottest profession of the time.

Twenty years later, in the late 2000s, as the world was on the cusp of a big data revolution, a similar trend emerged as businesses sought a new breed of professionals capable of sifting through all that data for insights.

This emerging field became known as data science.

In 2018, while completing my PhD in modelling frontier cancer treatments, I transitioned from academia to industry, joining one of the largest banks in Australia.

I was joined by seven other PhD candidates from top universities across Australia, all specialising in different areas, from diabetes research and machine learning to neuroscience and rocket engineering.

Fascinatingly, all of us eventually found ourselves working in the bank's big data division, something we still joke about to this day.

Continued here:

Evolution of Data Science: New Age Skills for the Modern End-to-End Data Scientist | by Col Jung | Jul, 2024 - Towards Data Science

Read More..

Most Data Quality Initiatives Fail Before They Start. Here's Why. | by Barr Moses | Jul, 2024 – Towards Data Science

Show me your data quality scorecard and I'll tell you whether you will be successful a year from now.

Every day I talk to organizations ready to dedicate a tremendous amount of time and resources towards data quality initiatives doomed to fail.

It's no revelation that incentives and KPIs drive good behavior. Sales compensation plans are scrutinized so closely that they often become a topic at board meetings. What if we gave the same attention to data quality scorecards?

Even in their heyday, traditional data quality scorecards from the Hadoop era were rarely wildly successful. I know this because prior to starting Monte Carlo, I spent years as an operations VP trying to create data quality standards that drove trust and adoption.

Over the past few years, advances in the cloud and metadata management have made organizing silly amounts of data possible.

Data engineering processes are starting to trend towards the level of maturity and rigor of more longstanding engineering disciplines. And of course, AI has the potential to streamline everything.

While this problem isn't and probably never will be completely solved, I have seen organizations adopt best practices that are the difference between initiative success and having another kick-off meeting 12 months later.

Here are 4 key lessons for building data quality scorecards:

The surest way to fail any data-related initiative is to assume all data is of equal value. And the only way to determine what matters is to talk to the business.

Brandon Beidel at Red Ventures articulates a good place to start:

I'd ask:

Now, this may be easier said than done if you work for a sprawling organization with tens of thousands of employees distributed across the globe.

In these cases, my recommendation is to start with your most business-critical data and business units (if you don't know what those are, I can't help you!). Start a discussion on requirements and priorities.

Just remember: prove the concept first, scale second. You'd be shocked how many people do it the other way around.

One of the enduring challenges to this type of endeavor, in a nutshell, is that data quality resists standardization. Quality is, and should be, in the eye of the use case.

The six dimensions of data quality are a vital part of any data quality scorecard and an important starting point, but for many teams that's just the beginning, and every data product is different.

For instance, a financial report may need to be highly accurate with some margin for timeliness, whereas a machine learning model may be the exact opposite.

From an implementation perspective this means measuring data quality has typically been radically federated. Data quality is measured on a table-by-table basis by different analysts or stewards with wildly different data quality rules given wildly different weights.

This makes sense to a degree, but so much gets lost in translation.

Data is multi-use and shared across use cases. Not only is one person's yellow quality score another person's green, but it's often incredibly difficult for data consumers to even understand what a yellow score means or how it's been graded. They also frequently miss the implications of a green table being fed data by a red one (you know, garbage in, garbage out).

Surfacing the number of breached rules is important, of course, but you also need to:

So then what else do you need? You need to measure the machine.

In other words, measure the components in the production and delivery of data that generally result in high quality. This is much easier to standardize. It's also easier to understand across business units and teams.

Airbnb's Midas is one of the better-known internal data quality score and certification programs, and rightfully so. They lean heavily into this concept. They measure data accuracy, but reliability, stewardship and usability actually comprise 60% of the total score.

Many data teams are still in the process of formalizing their own standards, but the components we have found to correlate highly with data health include:

"Yay, another set of processes we're required to follow!" said no one ever.

Remember, the purpose of measuring data health isn't to measure data health. The point, as Clark at Airbnb put it, is to drive a preference for producing and using high quality data.

The best practices I've seen here are to have a minimum set of requirements for data to be on-boarded onto the platform (stick) and a much more stringent set of requirements to be certified at each level (carrot).

Certification works as a carrot because producers actually want consumers to use their data, and consumers will quickly discern and develop a taste for highly reliable data.

Almost nothing in data management is successful without some degree of automation and the ability to self-serve. Airbnb discarded any scoring criteria that 1) wasn't immediately understandable and 2) couldn't be measured automatically.

Your organization must do the same. Even if it's the best scoring criterion ever conceived, if you do not have a set of solutions that will automatically collect and surface it, into the trash bin it must go.

The most common ways I've seen this done are with data observability and quality solutions, and data catalogs. Roche, for example, does this and layers on access management as part of creating, surfacing and governing trusted data products.

Of course this can also be done by manually stitching together the metadata from multiple data systems into a homegrown discoverability portal, but just be mindful of the maintenance overhead.

Data teams have made big investments in their modern data and AI platforms. But to maximize this investment, the organization, both data producers and consumers, must fully adopt and trust the data being provided.

At the end of the day, what's measured is managed. And isn't that what matters?

View post:

Most Data Quality Initiatives Fail Before They Start. Here's Why. | by Barr Moses | Jul, 2024 - Towards Data Science

Read More..

From Ephemeral to Persistence with LangChain: Building Long-Term Memory in Chatbots – Towards Data Science


In a previous article I wrote about how I created a conversational chatbot with OpenAI. However, if you have used any of the chatbot interfaces like ChatGPT or Claude et al., you would notice that when a session is closed and reopened, the memory is retained, and you can continue the conversation from where you left off. That is exactly the experience I want to create in this article.

I will use LangChain as my foundation which provides amazing tools for managing conversation history, and is also great if you want to move to more complex applications by building chains.

Code for recreating everything in this article can be found at https://github.com/deepshamenghani/langchain_openai_persistence.

I will start by creating a loop for the user to input questions for the chatbot. I will assign this to the variable humaninput. For now, instead of an LLM output
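
A minimal sketch of such a loop might look like the following, with a placeholder standing in for the LLM output, as the excerpt describes. The variable name and exit keywords are illustrative; the article's full version wires this input into LangChain.

```python
# Minimal input loop for the chatbot (illustrative sketch, not the article's exact code).
while True:
    humaninput = input("You: ")            # collect the user's question
    if humaninput.lower() in {"quit", "exit"}:
        break                              # let the user end the session
    # Placeholder response; a LangChain/OpenAI call would replace this line.
    print(f"Bot: you said '{humaninput}'")
```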

Read more:

From Ephemeral to Persistence with LangChain: Building Long-Term Memory in Chatbots - Towards Data Science

Read More..

Three Mind-Blowing Ideas in Physics: The Stationary Action Principle, Lorentz Transformations, and the Metric Tensor – Towards Data Science

How mathematical innovations yield increasingly accurate models of the physical world

While physics arouses the curiosity of the general public, many find the math daunting. Yet many of the central ideas in physics arise from simpler principles that have been tweaked and modified into increasingly complex formalisms that better map physical phenomena.

While many physics graduates end up working in data science, can mathematical insights in physics inform and enrich the data scientist? I argue yes. Even though data science as a distinct discipline is relatively new, the collection and analysis of data pervades the history of physics, such as the collection of astronomical observations by Johannes Kepler, from which he derived his laws of planetary motion. Both physics and data science extract patterns from data, though typically data science deals with statistical patterns while physics deals with lawful or nomological patterns. An understanding of fundamental laws can help data scientists model complex systems and develop simulations of real-world phenomena.

In my own work, maintaining a strong interest in physics has helped me make important connections between information theory and statistical mechanics. Further, it has helped me understand the flexibility of mathematics, in particular linear algebra and calculus, in modelling physical systems constrained by spatial dimensions and more abstract multidimensional systems that include social and stochastic patterns. Moreover, it can be inspiring as well as intellectually gratifying to understand the rudiments of how physics models the world around us and how the incremental improvements of physics have required molding the math to fit and predict the data that nature supplies.

In this article, I odyssey through three mathematical ideas that underpin much of physics: the stationary action principle (also known as the principle of least action); Lorentz transformations, which describe time and space transformations in Einstein's special theory of relativity; and the metric tensor, which underlies the math of General Relativity (the theory of gravity as spacetime curvature).

The Stationary Action principle is perhaps the most important in all of physics because it threads through classical and quantum mechanics. It forms an alternative though equivalent formulation to the classical equations of motion invented by Newton for describing the evolution of a physical system. Specifically, it describes the motion of a physical system in time by determining the path that minimizes something called the action. The action is a functional, namely a function that takes functions as inputs, that describes the path of the system as stationary with respect to path variations between two points. Understanding the action as a functional, specifically as scoring the path variations, is key to understanding the concept behind it. The specifics of this will become clearer in the exposition below. This remarkable result articulates motion as a type of optimization function within given constraints.

Lorentz Transformations describe how the coordinates of time and space are intertwined into a unified metric that enables their measurements to proportionally change relative to observers in inertial frames of reference while conserving the speed of light. This formalism ensures that the speed of light remains constant across frames of reference, contrary to Newtonian assumptions that would have the speed of light change against invariable units of space and time. Before the theory of special relativity, the constancy of the speed of light was an experimentally observed phenomenon that did not fit into the framework of classical physics.

Finally, we explain the mathematical ideas behind the metric tensor, which describes length or distance in curved spaces. The metric tensor is a bilinear, symmetric form that generalizes the Pythagorean theorem underlying flat, Euclidean space to any possible space, including curved surfaces. Curved surfaces were used by Einstein to describe the distortion of spacetime in the presence of gravity. As data scientists, you're likely very familiar with the Euclidean distance and linear algebra, so appreciating the concepts behind the metric tensor should be a natural step. The metric tensor, developed by Bernhard Riemann, forms the foundation of non-Euclidean geometry and remarkably generalizes the notion of length to any underlying geometry.

The Principle of Least Action or the Stationary Action Principle constitutes the centrepiece of physics. It subsumes the equations of motion and mathematically articulates the transition rule of a physical system across time.

To begin to appreciate this principle, recall that Newton's second law computes the trajectory of a system of particles by taking three inputs: the masses of the particles, the forces acting on the system, and the initial positions and velocities, and determines the evolution rule through F = ma, where m denotes mass and a acceleration. In contrast to the Newtonian method, the principle of least action computes the trajectory of the system by taking in the initial and final positions, masses and velocities (and other constraints depending on the system) but omits forces. It subsequently selects the path that minimizes a quantity called the action. Before we explain exactly what the action consists of, we need to understand an alternative formulation of Newton's equations called the Lagrangian.

The Lagrangian L is computed as the difference between kinetic energy T and potential energy V, where T is given by the product of mass and velocity squared divided by 2 (the division by 2 reflecting the average of the initial and final velocities), and V by the product of the object's mass m, the gravitational acceleration g and its height above ground h (the computation of potential energy varies with the system).
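
In symbols, a standard rendering of the simple case described above (a particle of mass m with speed v at height h) is:

```latex
T = \tfrac{1}{2} m v^{2}, \qquad V = m g h, \qquad L = T - V = \tfrac{1}{2} m v^{2} - m g h
```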

Why is the Lagrangian computed as the difference between kinetic and potential energy? Because as the system moves it converts potential energy into kinetic and the difference between the two captures the dynamic interplay between these two types of energy. It is important to note conversely that the total energy is computed as the sum of these two values.

The inputs to the Lagrangian are the positions x and the velocities, denoted by ẋ ("x dot"), where the dot denotes the first time derivative. This is because velocity is computed as the first derivative of position.

To compute the Lagrangian we need to know, at a minimum, the velocities, general coordinates, positions and masses of the particles. Potential energy depends on the positions of the particles (or sets of particles), since it describes the potential work they can do, whereas kinetic energy depends on the particle velocities, since it describes the motion of the particles.

How does the action come into the picture? Imagine you have two points on a curved plane and you need to find the shortest distance. There are many paths between the two points, but only one path or line that represents the shortest distance. The action is analogous to this problem. In order to find the trajectory of the system, we need to select a path that minimizes the action. A corollary of this is that the action stays stationary through the evolution of the system.

Since the action must be stationary, its first-order variation must be zero:
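
Using A for the action (as later in the article) and δ for the variation:

```latex
\delta A = 0
```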

At a high level, the action is described by the path integral of the Lagrangian over a given time interval [t₁, t₂]. Even though the integral of a function from t₁ to t₂ is typically understood as the area under the curve, the path integral of the Lagrangian should not be intuitively thought of as an area, but rather as the integration of a functional, which is a function that takes another function (or functions) as input and outputs a scalar. The input will be the Lagrangian. The output defines the action. Across the many paths the system could take between t₁ and t₂, we will see that it takes precisely the path that minimizes the action.

Here's the simple formula for the action as the path integral of the Lagrangian:
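
In standard notation, with A the action and the Lagrangian evaluated along a path x(t):

```latex
A[x(t)] = \int_{t_1}^{t_2} L\big(x(t), \dot{x}(t)\big)\, dt
```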

Now, since the definite integral can be computed as the Riemann sum of products of the output f(x) and the change in x, denoted Δx, as the number of partitions k approaches infinity, we can compute the action as the Riemann sum of products of the Lagrangian and the time increment Δt. In other words, the definite integral of the Lagrangian can be approximated by summing the Lagrangian over small time steps.
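
Written as such a sum over N discrete time steps of size Δt (exact in the limit Δt → 0):

```latex
A \;\approx\; \sum_{k=1}^{N} L\big(x(t_k), \dot{x}(t_k)\big)\, \Delta t
```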

The action consists of the path integral of the Lagrangian between the initial position and the end position of the system. This means that the path integral accumulates the difference between kinetic and potential energy along the path. The fundamental theorem of calculus allows us to compute the action over a continuous interval between t₁ and t₂, even though it can also be computed in N discrete time steps. If we were to imagine the action as a sum of N discrete time steps, we would compute it as the sum of products of the value of the Lagrangian at each time step and the size of the time step.

The Lagrangian typically depends on positions and velocities but can also be time-dependent. The Lagrangian is said to be explicitly time-dependent if it changes with time even when its positions and velocities stay constant. Otherwise, the Lagrangian depends on time only implicitly, through changing positions and velocities. For the time-independent formulation, we write L(x, ẋ) to indicate dependence on positions and velocities:
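
With that notation, the action reads:

```latex
A = \int_{t_1}^{t_2} L(x, \dot{x})\, dt
```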

Now, we know from the law of conservation of momentum that the derivative of the sum of all momenta of a system is equal to zero. In other words, in an isolated system the total momentum is always conserved, or remains constant. The derivative of a constant is zero, since the rate of change is held ceteris paribus, or equal. In Newtonian mechanics, the third law of motion, which states that for every action there's an opposite and equal reaction, expresses the conservation of total momentum.

Similarly, the law of conservation of energy holds that the total energy of an isolated system is conserved across any transformation: the time derivative of total energy is zero. Unlike momentum, however, energy comes in different forms. It is the total of all these forms that is conserved. Articulated in terms of motion, there are only the forms of energy we've been talking about all along: kinetic and potential.

Since the Lagrangian is defined as the difference between these two forms of energy, when the Lagrangian is invariant under time translations, it implies the conservation of energy.

Something analogous to the conservation of energy occurs with respect to the action. In selecting a trajectory, nature picks the path that minimizes the value of the action. This minimization is similar to the minimization of a function in optimization problems, except that the action involves a multitude of variables, including all the coordinates at every instant of time. This extremizing character is expressed by the Euler-Lagrange equation, which forms the equation of motion.

What are the Euler-Lagrange equations? They are the differential equations that tell the system how to move from one instant in time to the next. Now, I'm not going to derive the equations here, but intuitively we set the variation of the action A with respect to the position x to zero. Put differently, we consider a small variation in the path and require that the resulting change in the action be zero.

This yields the two terms of the Euler-Lagrange equation: the time derivative of the partial derivative of the Lagrangian with respect to velocity, and the partial derivative of the Lagrangian with respect to position. Respectively, these represent the changes in kinetic energy (changes in momentum) and potential energy. Setting the difference between these two quantities to zero yields the action-minimizing Euler-Lagrange equation.

The Euler-Lagrange equation in a single coordinate, or degree of freedom, is given below, where L denotes the Lagrangian, ẋ the velocity and x the position.
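
In standard form:

```latex
\frac{d}{dt}\!\left(\frac{\partial L}{\partial \dot{x}}\right) - \frac{\partial L}{\partial x} = 0
```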

In natural language, this reads: the time derivative (d/dt) of the partial derivative of the Lagrangian with respect to velocity (∂L/∂ẋ), minus the partial derivative of the Lagrangian with respect to position (∂L/∂x), equals zero. Intuitively, this can be rephrased as: the instantaneous rate of change in time of the instantaneous rate of change of the Lagrangian with respect to velocity, minus the instantaneous rate of change of the Lagrangian with respect to position, vanishes.

Distilling it further, the Euler-Lagrange equation implies that the motion of a physical system corresponds to an extremum of the integral of the Lagrangian, which is the action.

The equation can be generalized to arbitrary coordinates (x₁, x₂, …, xₙ):
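
One such equation holds for each coordinate xᵢ:

```latex
\frac{d}{dt}\!\left(\frac{\partial L}{\partial \dot{x}_i}\right) - \frac{\partial L}{\partial x_i} = 0, \qquad i = 1, \dots, n
```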

In concrete scenarios, the action is a functional, that is to say a function of a function that involves the mapping from a function input (the Lagrangian) to a scalar output (the value of the action).

While the Stationary Action Principle enables efficient calculation of the trajectory of a physical system, it requires knowing the starting and ending positions. In lieu of this global picture, we can substitute the Newtonian formalism, which requires knowing the initial positions and velocities of the particles.

The Stationary Action principle can be adapted to quantum physics with important caveats, where all the possible paths between initial and final states are considered and the action takes the sum of the probability amplitudes of each path to compute the probabilistic evolution of the system.

Given this formulation, the classical stationary action principle can be thought of as a special case of the quantum formulation, in which, among all possible paths, the stationary-action paths dominate.

Understanding Lorentz Transformations is a portal into Einstein's Special Theory of Relativity. They constitute the mathematical framework for computing relativistic spacetime transformations in inertial or uniform frames of reference, that is, frames of reference that exclude gravity.

A crucial concept at the heart of special relativity is that motion can only be described with respect to some frame of reference and not in absolute terms. If I'm driving, for example, I'm standing still with respect to the car but moving with respect to my house.

The idea of relativistic motion exists in classical mechanics and was first described by Galileo.

The groundbreaking insight embedded in Special Relativity is not relativistic motion, but rather what stays the same or constant across space translations. In classical mechanics, all motion is indiscriminately relative, whereas the coordinates of space and time change only in additive fashion while remaining static and independent of each other for all observers.

The relative motion assumption in classical mechanics implied that the motion of light should obey relativistic laws. In other words, if I'm standing still and holding a flashlight, whereas you're driving and holding a flashlight, the motion of light from your flashlight should measure as the sum of the speed of light and your velocity.

Experimental evidence, however, contradicts this assumption. In reality, regardless of the frame of reference, light measures as a constant. In other words, empirical evidence attests to the speed of light being absolute.

Instead of finding error with the observation, Einstein posited the constancy of light speed as a law of nature. If light always measures the same, then what must change is the representation of the coordinates of space and time.

In order to understand how Einstein's theory of Special Relativity achieves this, it is important to have a cursory grasp of the simplified equations of motion described by classical mechanics. These will be modified so that relative motion between observers does not alter the speed of light but rather alters an interwoven metric of space and time. This has the peculiar consequence that the measures of time and distance will vary across observers when velocities approach the luminal limit.

The equations of motion are often condensed into the acronym SUVAT (s = distance, u = initial velocity, v = final velocity, a = acceleration, t = time):
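
For constant acceleration, the standard SUVAT relations are:

```latex
v = u + at, \qquad
s = ut + \tfrac{1}{2}at^{2}, \qquad
s = \tfrac{1}{2}(u + v)\,t, \qquad
v^{2} = u^{2} + 2as
```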

In order to make Lorentz transformations intelligible, we will be using spacetime diagrams. These reverse the axes of distance and time such that time is represented on the x-axis and distance on the y-axis. Further, we use the y-axis to represent large distance intervals, since we want to explain motion relative to the speed of light. Now, light travels at 3 × 10⁸ m/s. In our spacetime diagrams, one second will correspond exactly to this distance. This has the consequence that the straight diagonal of our diagram, situated at a 45° angle between our axes, represents the constancy of light speed across time. In fact, the diagonals across a Cartesian grid will represent the asymptotic limits of light speed, which will constrain our translations of time across the y-axis and translations of space across the x-axis.

Now, any straight line diagonal to our Cartesian grid not at a 45° angle will represent uniform motion at subluminal velocity. In the Newtonian picture, the speed of light is just like any other speed. This means that an angle larger than 45° would represent faster-than-light velocity. Furthermore, the speed of light will be relative to a frame of reference. If I'm travelling at half light velocity in the same direction as light, from my frame of reference I will observe light as moving at half light velocity, since I'm catching up to it with half its speed. The assumptions underlying this model involve retaining unchanging units of time and distance such that time and spatial intervals remain constant for all frames of reference.

The leap from regarding space and time as independent measures to integrating them into a continuum called spacetime involves transforming the variable of time into a measure of distance. We do this by weighting the time variable with c, the speed-of-light constant. When we multiply c by t we get ct, which has units of distance: one second of time corresponds to one light-second.

In the Newtonian-Galilean picture, two frames of reference S and S′ are given by the coordinates (x, t) and (x′, t′) respectively, where the apostrophe symbol, pronounced "x prime" and "t prime", serves to distinguish two relative frames of reference (and does not denote differentiation as in other contexts). These frames are invertible, and the inverses are equivalent to each other within Galilean relativity. From the frame of reference of S, the coordinates of S′, position and time, are given by x′ = (x − vt) and t′ = (t − vx/c²) respectively. Likewise, from the frame of reference of S′, the coordinates of S are given by x = (x′ + vt′) and t = (t′ + vx′/c²). However, these translations wind up making light relative rather than spacetime. The question arises as to how we can translate from S to S′ such that we conserve c (the speed of light) while proportionally scaling the time and distance variables (more correctly, the spacetime continuum).

A way of deriving these translations is to make use of the spacetime diagrams we introduced above, where we scaled time by the constant c ≈ 2.99 × 10⁸ m/s. The translation we're seeking is expressed as the following:
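
Namely, the γ-scaled pair that the next paragraphs derive step by step (with β = v/c):

```latex
x' = \gamma\,(x - vt), \qquad ct' = \gamma\,(ct - \beta x), \qquad \beta = \frac{v}{c}
```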

In fact, we will use this symmetry or equivalence between frames of reference to derive the gamma factor as the common scaling factor for spacetime translations between relative frames of reference such that they reflect luminal constancy. This Galilean symmetry of relative motion is illustrated by the graphs below expressing the two frames of reference we introduced as inverses of each other:

Since the speed of light is constant across all frames of reference, if we start from the origin for both frames of reference (x = 0 and t = 0), the path of light will satisfy the following equations (recall that the diagonal at 45° represents the speed of light, where one unit of time corresponds to one unit of distance travelled by light):
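
That is:

```latex
x = ct, \qquad x' = ct'
```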

The conversion from x to x′ is given by the equation below, where x′ is simply the difference between x and the product of velocity and time. Now, in order to derive the Lorentz transformation, we need some factor to scale our spatiotemporal transformation. The factor β equals v/c, the ratio of velocity to the speed of light, and is used to scale ct, the light-speed-scaled time. If we expand the expression, we find that it algebraically reduces to the Newtonian transformation in the brackets. As we will see, when the Lorentz factor approaches 1, the Lorentz transformations become equivalent to their Newtonian counterparts, which correspond to our everyday notion of the simultaneity of events. The formulas below demonstrate how we get from the initial formula to the gamma-scaled transformation formula for relative position:
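
Starting from the Galilean expression in the brackets and introducing the scaling factor γ:

```latex
x' = \gamma\,(x - vt) = \gamma\,(x - \beta\, ct), \qquad \beta = \frac{v}{c}
```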

Similarly, we can derive the time transformation from the t frame to t′ with the equation below. Since we're using spacetime diagrams, we start with ct′. We see that ct′ can be computed as the difference between ct and β-scaled x, with the whole expression scaled by the Lorentz factor γ. We can algebraically solve for t′ by expanding the expression, which reduces the solution for t′ to t − vx/c² scaled by γ:
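
In symbols:

```latex
ct' = \gamma\,(ct - \beta x) \quad\Longrightarrow\quad t' = \gamma\!\left(t - \frac{vx}{c^{2}}\right)
```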

When speeds are very small, vx/c² reduces to 0 and γ reduces to 1, yielding t′ = t. This result corresponds to our everyday Newtonian experience, where one second for me at rest is more or less equal to your second while you are moving at a constant velocity relative to me.

As you might have noticed, the transformation to x′ involves ct as a term and the transformation to t′ involves x as a term. By factoring in these terms in each other's reference-frame transformations, time and space become interwoven into a co-dependent continuum where a unit change in one variable corresponds to a unit change in the other. This interrelationship will account for the proportionality of time dilation and space contraction described by Lorentz transformations.

How do we ascertain the value of the Lorentz factor γ? One way is to multiply our translation equations and solve for the common factor. Remember that we can replace x and x′ with ct and ct′, respectively, due to the equality we introduced earlier. This will let us cancel out like terms and solve for γ:
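
Carrying out that multiplication and cancelling like terms yields the Lorentz factor:

```latex
\gamma = \frac{1}{\sqrt{1 - \dfrac{v^{2}}{c^{2}}}}
```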

Now we can express the x′ frame of reference by the following substitution:
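
Substituting the value of γ:

```latex
x' = \frac{x - vt}{\sqrt{1 - \dfrac{v^{2}}{c^{2}}}}
```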

And we can express the t′ frame of reference by the following substitution:
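
Likewise for time:

```latex
t' = \frac{t - \dfrac{vx}{c^{2}}}{\sqrt{1 - \dfrac{v^{2}}{c^{2}}}}
```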

In each equation, as the velocity v approaches the speed of light, v²/c² approaches 1 and the value of the denominator approaches 0. We know from E = mc² that objects with rest mass cannot, as a matter of physical principle, be accelerated to luminal speed. As such, it is not physically possible for the value of the denominator to equal 0. The zero limit corresponds to infinite rapidity (the quantity that denotes the angle of the transformation). As rapidity approaches infinity, time approaches rest and the measurement of length approaches zero.

On the other hand, when the velocity is small, v²/c² is a very small number and the value of the denominator approaches 1. When the denominator equals 1, or is very close to it, the Lorentz factor becomes insignificant and the equations approximate Newtonian motion. That is to say, the equations of motion are given by the numerator, which reduces to Newton's equations of motion.

The Lorentz factor constitutes the key to understanding Lorentz transformations. If you recall back to Galilean relativity, the interchangeability of inertial frames of reference is achieved through rotations. Rotations are described by trigonometric functions. Trigonometric functions conserve Euclidean distance. Specifically, rotations conserve the radius. This means that units of length remain constant across transformations.

Analogously, Lorentz transformations conserve the spacetime metric. Unlike the Euclidean metric, the spacetime metric makes all spatiotemporal transformations relative to the speed of light as an absolute value. For this reason, the speed of light forms an asymptote that Lorentz transformations approach but cannot equal. In the spacetime diagram the speed of light is denoted by the equalities x = ct and x′ = ct′. If you recall back to our spacetime diagram, the asymptotes consist of the diagonals cutting across both axes. Since the ranges of spacetime transformations are infinite (meaning that they output values from −∞ to +∞) yet asymptotic to our diagonals, they are described by hyperbolic functions or rotations. Hyperbolic rotations are functions analogous to the trigonometric functions but that use hyperbolas instead of circles. Unlike circles, which are finite, hyperbolic rotations can stretch over infinite ranges. Their equivalents to the trigonometric functions can be described as exponential operations on the special number e (≈ 2.718), where the analogue of sin(x) is denoted by sinh(x) and the analogue of cos(x) is denoted by cosh(x), described by the following functions respectively:
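
The standard definitions are:

```latex
\sinh(x) = \frac{e^{x} - e^{-x}}{2}, \qquad \cosh(x) = \frac{e^{x} + e^{-x}}{2}
```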

Just as (cos x, sin x) describes the points of a unit circle, (cosh x, sinh x) traces the right half of a unit hyperbola. The angle of hyperbolic rotations in the context of special relativity is called rapidity, denoted by the symbol η (eta). Here are the hyperbolic rotations equivalent to the Lorentz transformations we derived earlier:
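
In terms of the rapidity η, these take the form:

```latex
x' = x\cosh\eta - ct\,\sinh\eta, \qquad ct' = ct\,\cosh\eta - x\sinh\eta
```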

The relationship between the Lorentz factor and the rapidity of hyperbolic rotations is the following:
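
Namely:

```latex
\cosh\eta = \gamma, \qquad \sinh\eta = \gamma\beta, \qquad \tanh\eta = \beta = \frac{v}{c}
```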

If Galilean rotations conserve the radius or Euclidean distance, then what do Lorentzian transformations conserve? They conserve the Minkowski metric, given by the following equality which is analogous to Euclidean distance:
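
In one spatial dimension (using the convention in which the time term carries the positive sign), the invariant interval is:

```latex
s^{2} = (ct)^{2} - x^{2}
```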

Since actual Lorentz transformations occur in four dimensions, one of time and three of space, combined into four spacetime dimensions, the four-dimensional Minkowski interval is given by the following equation:
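
With the same convention:

```latex
s^{2} = (ct)^{2} - x^{2} - y^{2} - z^{2}
```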

Visualized on a spacetime diagram, these hyperbolic transformations appear as spacetime distortions in two dimensions that approach the diagonal asymptotes as velocity approaches the speed of light. The distortions of the grid indicate the distortions in the spacetime metric as a result of the relative speeds of observers. As speeds approach the luminal limit, space (the horizontal-axis hyperbolas) contracts and time (the vertical-axis hyperbolas) dilates. These intertwined transformations conserve the Minkowski interval s², which proportionally scales these transformations against the invariance of light speed.

Space contraction and time dilation can be inverted between observers at rest and observers moving at uniform or inertial speeds. If you're uniformly moving at close to the luminal limit relative to someone at rest, it is equally correct to describe you as at rest and the other person as moving at close to light speed.

Lorentz Transformations in Special Relativity occur in flat, pseudo-Euclidean space. What is a flat space? It is a geometry where the metric, or distance measure between points, is constant. The best-known metric of flat space is defined by the Pythagorean Theorem. Another flat metric is the Minkowski spacetime metric we discussed above.

The Euclidean metric defines the distance between two points as the square root of the sum of the squared lengths of the shortest sides of a right triangle. This follows from the Pythagorean Theorem: a² + b² = c².

Described geometrically, the Euclidean distance between two points is given by the square root of the sum of the squared differences between each coordinate (x, y).

The Pythagorean Theorem can be generalized to n dimensions:
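
For two points with coordinates (x₁, …, xₙ) and (y₁, …, yₙ):

```latex
d(\mathbf{x}, \mathbf{y}) = \sqrt{\sum_{i=1}^{n} (x_i - y_i)^{2}}
```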

Accordingly, we can express the Euclidean distance in three dimensions by the formula below:
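
Explicitly, for points (x₁, y₁, z₁) and (x₂, y₂, z₂):

```latex
d = \sqrt{(x_2 - x_1)^{2} + (y_2 - y_1)^{2} + (z_2 - z_1)^{2}}
```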

However, this generalization conserves distance as a property of Euclidean flat space. Put differently, the metric stays constant.

In order to understand the metric tensor, we need to learn to see the Pythagorean Theorem as a special case of flat or Euclidean space.

In other words, we need to define a value-neutral space such that Euclidean distance defined by the Pythagorean theorem can be derived as a special case.

Before we can do this, we must ask why the differences between the coordinates are squared in the Pythagorean theorem. This can be explained in any number of ways, but an intuitive explanation is geometric. They are squared because squaring produces geometric areas with equal side lengths, given that areas are products of length and width, which lets us compute the hypotenuse as the square root of the sum of squares of the right-angled sides. This answer is given by the metric tensor defined by the Kronecker delta, which outputs 1 if i = j and 0 if i ≠ j.

However, we can also demonstrate the result through the generalized metric of a space, where the metric tensor consists of a smoothly varying inner product on the tangent space.

What is a tangent space? A tangent space is the set of all vectors tangent to a point on a manifold.

The general form of the equation is given below, where g_μν represents the metric tensor, the indices μ and ν run over the coordinates, and dx^μ indicates an infinitesimal displacement along each coordinate:
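
In index notation, with the sum running over all index pairs:

```latex
ds^{2} = \sum_{\mu,\nu} g_{\mu\nu}\, dx^{\mu}\, dx^{\nu}
```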

Given the above equation, we can express the squared distance between two points in two dimensions as the following sum:
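
With the indices running over 0 and 1, the sum expands to:

```latex
ds^{2} = g_{00}\,dx^{0}dx^{0} + g_{01}\,dx^{0}dx^{1} + g_{10}\,dx^{1}dx^{0} + g_{11}\,dx^{1}dx^{1}
```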

In the above formula, the zeros and ones beside the g coefficients, as well as beside the x variables, are indices. Specifically, they run over the permutations of 0 and 1, namely: 00, 01, 10, 11.

The dx⁰ and dx¹ factors represent infinitesimal displacements of two different coordinates, where again 0 and 1 are indices. The product of the displacements of each pair of coordinates is multiplied by the corresponding value of g, the metric tensor.

Therefore, in the above formula, g represents a coefficient of the metric tensor for each pair of indices. Why are there four terms in the above formula? Because the two coordinates yield four index combinations. In Euclidean geometry, the implicit basis vectors are the tangent vectors (0, 1) and (1, 0). These tangent vectors span the entire Euclidean space. Now, g defines the inner product between tangent vectors at any point of the vector space, and the values of g are obtained through the inner products of all the possible combinations of the basis vectors.

When the values of the coefficients represent an orthonormal relationship between the basis vectors, g reduces to the identity matrix:
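
In two dimensions:

```latex
g_{\mu\nu} = \delta_{\mu\nu} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}
```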

In two dimensions, or a system of two coordinates, we can express the Euclidean distance as the metric tensor contracted with the displacements in each coordinate. Because for right angles in flat Euclidean space the metric tensor is the identity matrix, the squared distance between two points reduces to the Pythagorean Theorem, as shown below:
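
With the identity metric, the line element becomes:

```latex
ds^{2} = (dx^{0})^{2} + (dx^{1})^{2} = dx^{2} + dy^{2}
```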

The above formula can also be expressed as a linearly weighted combination expressed in our first formulation:
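
Written out with the general coefficients (diagonal terms first):

```latex
ds^{2} = g_{00}\,(dx^{0})^{2} + g_{11}\,(dx^{1})^{2} + g_{01}\,dx^{0}dx^{1} + g_{10}\,dx^{1}dx^{0}
```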

As you can see above, when the off-diagonal coefficients g₀₁ = g₁₀ = 0, we eliminate the latter two terms, reducing the equation to the Euclidean distance. We've therefore explained how the generalized form of the metric tensor implies Euclidean distance as a special or limiting case.

What about when the shortest distance cannot be expressed by the Euclidean distance? In our everyday intuitions, we presuppose the existence of right angles between the opposite and adjacent sides in order for the Pythagorean theorem to hold as the distance measure of the hypotenuse. In linear algebra, this is equivalent to assuming orthonormal bases for the metric of the space. Bases are defined as the set of linearly independent vectors that span the vector space. Orthonormal bases are perpendicular unit vectors, that is, unit vectors whose pairwise inner products are zero.

But this a priori assumption may be unfounded empirically. In fact, the underlying geometry may be curved or skewed in different ways. If this is the case, how do we then express the shortest distance between two points? To define a non-Euclidean space we take a different choice of basis vectors for our metric. The inner products of the permutation space of those basis vectors will output the metric tensor that defines distances and angles in that metric through a linear combination of infinitesimal displacements between two points, given by the formula:
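
This is the same general form introduced earlier:

```latex
ds^{2} = \sum_{\mu,\nu} g_{\mu\nu}\, dx^{\mu}\, dx^{\nu}
```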

Now, let's take a look at an example with polar coordinates (r, θ), where r denotes the radius and θ (theta) the angle. The metric tensor g is obtained through the inner products of the permutation space of the (r, θ) basis vectors, as shown below:
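
Writing ê_r and ê_θ for the two basis (tangent) vectors, the components are the pairwise inner products:

```latex
g = \begin{pmatrix}
\langle \hat{e}_r, \hat{e}_r \rangle & \langle \hat{e}_r, \hat{e}_\theta \rangle \\
\langle \hat{e}_\theta, \hat{e}_r \rangle & \langle \hat{e}_\theta, \hat{e}_\theta \rangle
\end{pmatrix}
```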

If we consider Euclidean polar coordinates, the metric tensor will come out to the matrix below:
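
Namely:

```latex
g = \begin{pmatrix} 1 & 0 \\ 0 & r^{2} \end{pmatrix}
```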

This is because distance is calculated through:
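
That is, the line element in polar coordinates:

```latex
ds^{2} = dr^{2} + r^{2}\, d\theta^{2}
```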

Now, the distance between two nearby points (r₁, θ₁) and (r₂, θ₂) is given by calculating the differences r₂ − r₁ and θ₂ − θ₁ and plugging them into the following formula:
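
For nearby points, this gives approximately:

```latex
\Delta s^{2} \approx \Delta r^{2} + r^{2}\, \Delta\theta^{2}, \qquad \Delta r = r_2 - r_1, \quad \Delta\theta = \theta_2 - \theta_1
```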

So far, all our examples have been in a two-dimensional space. Of course, we could extend the same ideas to three- or N-dimensional spaces. The metric tensor for a three-dimensional space will be a 3×3 matrix, and so on and so forth.

Understanding the metric tensor constitutes a major stepping stone in understanding General Relativity and Einsteins Field Equations.

In General Relativity, Einstein's field equations make use of the metric tensor to describe the curved geometry of spacetime.

Specifically, Einstein's field equations involve three tensors: 1) the Einstein tensor G, which describes the curvature of spacetime in terms of derivatives of the metric tensor, 2) the stress-energy tensor T, which describes the distribution of matter and energy in the universe, and 3) the metric tensor g, which defines the measure of lengths and angles in the curved geometry. Einstein's field equations are usually summarized by the equation below:
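
In the usual form,

G_{\mu\nu} = \frac{8\pi G}{c^4}\, T_{\mu\nu}

where the G in the coupling factor is Newton's gravitational constant; a cosmological-constant term \Lambda g_{\mu\nu} is often added to the left-hand side.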

In General Relativity, the metric tensor consists of a 4x4 matrix comprising 16 components. Just as in our two-dimensional example, the metric tensor is built from the permutation space of all the dimensions, in this case three of space and one of time, combined into four spacetime dimensions. However, since the matrix is necessarily symmetric, only 10 of these components are independent of each other (for an n x n symmetric matrix, n(n + 1)/2 components are independent, and 4 x 5 / 2 = 10).

Excerpt from:

Three Mind-Blowing Ideas in Physics: The Stationary Action Principle, Lorentz Transformations, and the Metric Tensor - Towards Data Science

Read More..

Freddy’s enhances analytics and insights with Domo data suite – Chain Store Age

Freddy's is expanding its tech partnership with Domo.

Freddy's Frozen Custard & Steakburgers has tapped a new data science solution to assist in its analytics operations.

Domo's Data Science Suite will assist the quick-serve chain, known for its cooked-to-order steakburgers, shoestring fries and freshly churned frozen custard treats, in gaining further insights into the restaurant's data, giving franchisees the ability to optimize pricing and model menu mix.

Freddy's has partnered with Domo since 2015, and is now incorporating the company's machine learning and artificial intelligence-powered suite to gain better analytics and insights about the business. Domo says that through the partnership, Freddy's has achieved several immediate wins and launched and optimized a new guest loyalty program to effectively incentivize its most high-value guests. In addition, the restaurant chain gained the ability to accurately score locations so it could conduct A/B testing initiatives like price optimization and menu mix modeling.

"With the number of guests Freddy's serves daily, it's important that they have the training and solutions they need to understand and leverage data at every Freddy's location, and we are proud that they are using Domo as a critical part of that solution," said Mark Maughan, chief analytics officer and senior VP of customer success at Domo.

Read the original here:

Freddy's enhances analytics and insights with Domo data suite - Chain Store Age

Read More..

ChatGPT tips and tricks for Beginners | by Mehul Gupta | Data Science in your pocket | Jul, 2024 – Medium

We have discussed the following topics:

Popularity and Usage of ChatGPT

Effectiveness in Handling Research

ChatGPT vs. Search Engines

Future of Search Engines

Impact on Software Development

Importance of Prompting

Writing Cover Letters Using ChatGPT

Detecting AI-Generated Content

ChatGPT and Mathematics

Future of ChatGPT

Let's get started!

Host: How popular is ChatGPT, and how often do you use it for personal or professional tasks?

Guest: Since ChatGPT's introduction, I've become quite dependent on it for various tasks, including coding and content creation. I write blogs on Medium and create videos on YouTube, and I find that ChatGPT can assist with nearly everything now. It simplifies coding and helps with reading lengthy research papers by summarizing them.

Host: How effective do you think ChatGPT is in handling research papers and summarizing articles?

Guest: A common mistake I see among beginners is blindly copying and pasting answers from ChatGPT. It's crucial to have a basic understanding of the topic. For instance, I recently worked with a complex research paper on a new methodology from Nvidia called DoRA. Instead of spending a week reading and summarizing it, I read a blog to grasp the concept and then used ChatGPT to get its perspective. However, I had to correct some misconceptions in its output. Thus, while ChatGPT is helpful, it's essential to verify the information.

Host: Can ChatGPT replace search engines, or is it merely an alternative?

Guest: In my experience, I've significantly reduced my use of search engines for finding blogs and research papers. I now rely on integrated search tools like Perplexity, which collate resources and provide summarized information. However, for discovering new online tools, I still find search engines more effective. While ChatGPT is great for reading and exploring, it doesn't yet excel at providing the best answers for tool recommendations.

Host: Do you think ChatGPT could completely replace search engines in the next five years?

Guest: I believe that within the next 2-3 years, we might see a significant shift. Maintaining traditional search engines could become less viable as more people prefer the quick, summarized answers provided by AI bots. The demand for detailed blog posts may decline as users seek concise information.

Host: How relevant are AI tools like ChatGPT for software development?

Guest: Initially, the coding abilities of AI tools were limited, but they have improved significantly. ChatGPT now provides useful code structures for proof-of-concept purposes and helps with understanding new libraries. While it may not handle every coding task perfectly, it can assist with 60-70% of the work, allowing developers to focus on refining their logic.

Host: How important are prompts for getting quality results from ChatGPT? Whats the best way to write a prompt?

Guest: Prompts are crucial for achieving good results. I recently wrote a book on LangChain, and I emphasized that two key factors for optimal output from a language model are a good model and a well-crafted prompt. If you input poor-quality prompts, you'll receive poor-quality responses. Think of it like working with an intern; while they may have technical knowledge, they need detailed instructions to perform well.

For example, if you ask ChatGPT to prepare an itinerary for the UK, you need to provide specifics like the number of days, check-in, and check-out dates. The more details you include, the better the results. Additionally, consider using prompt engineering techniques, such as asking the model to role-play.

For instance, if you want technical insights, you could instruct it to respond as a leading researcher in the field. When generating a blog post, first ask for the ideal structure, then request the post based on that structure. More detailed prompts yield better results.
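
As a rough illustration of the role-play and structure-first approach described above, here is a minimal sketch using the openai Python client; the model name, topic and prompt wording are placeholders rather than anything from the interview:

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

role = {"role": "system", "content": "You are an experienced technical writer."}  # role-play prompt

# Step 1: ask for the ideal structure first.
outline = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[role, {"role": "user", "content": "Propose an outline for a blog post on prompt engineering."}],
).choices[0].message.content

# Step 2: request the post based on that structure, with concrete constraints.
post = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[role, {"role": "user", "content": f"Write an 800-word blog post following this outline:\n{outline}"}],
).choices[0].message.content

print(post)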

Host: What about writing cover letters or statements of purpose for applications? Any suggestions?

Guest: When using ChatGPT for these documents, be smart. If the output seems too generic or bot-like, rephrase it. A trick I use is to take a response from ChatGPT and ask another AI tool, like Perplexity, to rephrase it. This helps mix ideas and makes detection more difficult. Use simple sentences to avoid sounding overly sophisticated, which could raise suspicion. There's also a new feature in ChatGPT called custom GPTs, which integrates with various third-party websites. For example, I used a resume improvement tool that analyzed my resume and suggested enhancements. If you struggle with writing prompts, consider using existing extensions and tools available online.

Host: Are there any tools for detecting AI-generated content that recruiters might use?

Guest: I've explored various detection tools, and most claim to identify AI-generated content, but many of them are ineffective. If you mix your own ideas into the AI-generated text, it becomes hard to detect. Even humans may struggle to identify slight modifications in the text. As people become more familiar with AI-generated patterns, it's essential not to copy blindly.

Host: How effective do you think ChatGPT is in handling mathematical or analytical problems?

Guest: Currently, ChatGPT struggles with complex mathematical problems. While it can handle basic calculations, it tends to break down with more complicated tasks. Mathematics is a language in itself, and the focus on training models for math is lacking. However, improvements are being made, and I believe that future iterations will handle math more effectively.

Host: What are your thoughts on the future power of ChatGPT and similar tools?

Guest: ChatGPT is already powerful, and claims about future versions, like GPT-5 having PhD-level intelligence, suggest significant advancements. If these tools become widely available, there could be economic implications, such as layoffs, as companies may prefer AI over human employees. The concept of Artificial General Intelligence (AGI) is still distant, but advancements in language models will continue. Basic machine learning models may become obsolete as more sophisticated tools like AutoGPT emerge, capable of executing tasks autonomously.

Hope you liked the episode

Read the original post:

ChatGPT tips and tricks for Beginners | by Mehul Gupta | Data Science in your pocket | Jul, 2024 - Medium

Read More..