Category Archives: Computer Science

Programmer: How We Know Computers Won’t Surpass the Human Mind – Walter Bradley Center for Natural and Artificial Intelligence

Here's a brief excerpt from Chapter 22 of Minding the Brain (Discovery Institute Press, 2023), "The Human Mind's Sophisticated Algorithm and Its Implications," by programmer Winston Ewert. His discussion is based on the halting problem: no general procedure can determine whether an arbitrary program will halt, though, Ewert argues, humans can.

Is the human mind a computer? If not, what is it? Before we can answer, we must first clarify, what exactly is a computer? Historically, the term computer actually referred not to machines but to humans. Typically, these were teams of people working together to perform long and tedious calculations. They helped with such tasks as computing the positions of planets, producing mathematical tables, and simulating fluid dynamics. What made them computers was that they were following a procedure. They were not expected or allowed to engage in creative thinking or problem-solving; instead, every action they took was guided by the procedure given to them. All that our modern computers do is automate this procedure-following activity. Human computers and machine computers are similar in that they operate strictly by following a procedure.

What exactly constitutes a procedure? A procedure provides a step-by-step method for solving a particular class of problems. The procedure defines how to proceed at every step of the task, leaving no decision up to the judgment of the person or machine following the procedure. In the context of computers, these procedures are typically called algorithms.

However, not every task can be reduced to a procedure. Researchers working in theoretical computer science have proven that a number of tasks cannot be reduced to a procedure: no procedure can be written that will reliably perform them. For example, there is no procedure that determines whether a logical statement, in first-order or higher-order logic, follows from a given set of premises.
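The halting problem is the canonical example of such a task. A minimal Python sketch (my illustration, not from Ewert's chapter) of Turing's diagonal argument shows why no general halting detector can exist: any candidate detector can be handed a program built to do the opposite of whatever the detector predicts about it.

```python
# Illustration of Turing's diagonal argument: any claimed halting
# detector is defeated by a program that inverts its own prediction.

def paradox_factory(halts):
    """Given a claimed detector halts(f) -> bool, build its nemesis."""
    def paradox():
        if halts(paradox):   # detector predicts "halts" ...
            while True:      # ... so loop forever, proving it wrong
                pass
        # detector predicts "loops forever" -> return at once, also wrong
    return paradox

def naive_halts(f):
    """A (wrong) detector that claims every program halts."""
    return True

p = paradox_factory(naive_halts)
# Calling p() would loop forever, contradicting naive_halts(p) == True.
# A detector that answers False instead is refuted the other way: its
# paradox function returns immediately.
```

The same construction works against any candidate detector, however sophisticated, which is the sense in which the task "cannot be reduced to a procedure."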

Is everything that the human mind can do reducible to a procedure or program, even if we are not consciously aware of the procedure? Could we, in principle, duplicate the abilities of the human using a computer program? Or are there at least some tasks that the human mind can accomplish which cannot be reduced to a procedure? Are there things that the human mind can do which could not be duplicated by any procedure or program?

We have sketched an argument that generating cognitive abilities requires greater cognitive ability. This has a number of interesting consequences: First, human cognitive ability will never be matched by artificial intelligence. We have argued that the only way to obtain an accurate partial halting detector is using a more powerful halting detector. When humans devise artificially intelligent systems, they use their internal powerful halting detection abilities to verify and/or construct the implicit halting detection present in the artificially intelligent system. However, they are only capable of devising a halting detector less powerful than the one they have. As such, we would expect that while humans will get better at building artificial intelligence systems, they will never be able to match themselves.

Second, the singularity will not happen. The idea of the singularity is that an artificially intelligent system will be able to build a slightly more intelligent artificial intelligence (AI) system. That system will, in turn, devise an even more intelligent system. This process, repeated over and over, will culminate in artificially intelligent systems which will leave humans far behind. However, the only way to obtain a partial halting detector is using a more powerful partial halting detector. An AI system cannot build a slightly more intelligent partial halting detector. Thus, the singularity will not occur.

Third, the human mind has a transcendent origin. Standard evolutionary theory claims that the human mind was produced by natural selection operating on random mutations. However, this would be a case of a very computationally simple process constructing an accurate, highly powerful halting detector. This cannot happen if the only way to obtain a partial halting detector is by using a more powerful halting detector. Instead, the human mind must have derived from something with more powerful halting detection abilities. Yet we cannot explain the human mind by an infinite regress of increasingly powerful partial halting detectors. Rather, the human mind must eventually be explained by a non-computational form of intelligence for whom the halting problem is no obstacle.

You may also wish to read: "Programmers: Why materialism can't explain human creativity." Eric Holloway and Robert Marks explain why it's unlikely that the mind that enables human creativity is merely the product of animal evolution. The total space-time information capacity of the universe falls significantly short of the ability to generate meaningful text of only a few hundred letters.

Read the rest here:

Programmer: How We Know Computers Won't Surpass the Human Mind - Walter Bradley Center for Natural and Artificial Intelligence

Generative AI: Unlocking the Power of Synthetic Data To Improve Software Testing – SciTechDaily

DataCebo, an MIT spinoff, leverages generative AI to produce synthetic data, aiding organizations in software testing, patient care improvement, and flight rerouting. Its Synthetic Data Vault, used by thousands, demonstrates the growing significance of synthetic data in ensuring privacy and enhancing data-driven decisions. Credit: SciTechDaily.com

MIT spinout DataCebo helps companies bolster their datasets by creating synthetic data that mimic the real thing.

Generative AI is getting plenty of attention for its ability to create text and images. But those media represent only a fraction of the data that proliferate in our society today. Data are generated every time a patient goes through a medical system, a storm impacts a flight, or a person interacts with a software application.

Using generative AI to create realistic synthetic data around those scenarios can help organizations more effectively treat patients, reroute planes, or improve software platforms, especially in scenarios where real-world data are limited or sensitive.

For the last three years, the MIT spinout DataCebo has offered a generative software system called the Synthetic Data Vault to help organizations create synthetic data to do things like test software applications and train machine learning models.

The Synthetic Data Vault, or SDV, has been downloaded more than 1 million times, with more than 10,000 data scientists using the open-source library for generating synthetic tabular data. The founders, Principal Research Scientist Kalyan Veeramachaneni and alumna Neha Patki '15, SM '16, believe the company's success is due to SDV's ability to revolutionize software testing.

DataCebo offers a generative software system called the Synthetic Data Vault to help organizations create synthetic data to do things like test software applications and train machine learning models. Credit: Courtesy of DataCebo. Edited by MIT News.

In 2016, Veeramachaneni's group in the Data to AI Lab unveiled a suite of open-source generative AI tools to help organizations create synthetic data that matched the statistical properties of real data.

Companies can use synthetic data instead of sensitive information in programs while still preserving the statistical relationships between datapoints. Companies can also use synthetic data to run new software through simulations to see how it performs before releasing it to the public.

Veeramachaneni's group came across the problem because it was working with companies that wanted to share their data for research.

"MIT helps you see all these different use cases," Patki explains. "You work with finance companies and health care companies, and all those projects are useful to formulate solutions across industries."

"In the next few years, synthetic data from generative models will transform all data work," Kalyan Veeramachaneni says. From left: Kalyan Veeramachaneni, co-founder; Andrew Montanez, director of engineering; and Neha Patki, co-founder and VP of product. Credit: Courtesy of DataCebo

In 2020, the researchers founded DataCebo to build more SDV features for larger organizations. Since then, the use cases have been as impressive as they've been varied.

With DataCebo's new flight simulator, for instance, airlines can plan for rare weather events in a way that would be impossible using only historical data. In another application, SDV users synthesized medical records to predict health outcomes for patients with cystic fibrosis. A team from Norway recently used SDV to create synthetic student data to evaluate whether various admissions policies were meritocratic and free from bias.

In 2021, the data science platform Kaggle hosted a competition for data scientists that used SDV to create synthetic data sets to avoid using proprietary data. Roughly 30,000 data scientists participated, building solutions and predicting outcomes based on the company's realistic data.

And as DataCebo has grown, it's stayed true to its MIT roots: All of the company's current employees are MIT alumni.

Although its open-source tools are being used for a variety of use cases, the company is focused on growing its traction in software testing.

"You need data to test these software applications," Veeramachaneni says. "Traditionally, developers manually write scripts to create synthetic data. With generative models, created using SDV, you can learn from a sample of data collected and then sample a large volume of synthetic data (which has the same properties as real data), or create specific scenarios and edge cases, and use the data to test your application."

For example, if a bank wanted to test a program designed to reject transfers from accounts with no money in them, it would have to simulate many accounts simultaneously transacting. Doing that with data created manually would take a lot of time. With DataCebo's generative models, customers can create any edge case they want to test.
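To make the fit-then-sample idea concrete, here is a toy sketch (my illustration, not DataCebo's actual SDV implementation): fit simple statistics to real tabular data, then draw fresh rows that match those statistics without copying any real record. Each column is modeled as an independent Gaussian; SDV's real models also capture cross-column dependencies. The example rows and column meanings are hypothetical.

```python
import random
import statistics

def fit(rows):
    """Estimate (mean, standard deviation) for each column of real data."""
    columns = list(zip(*rows))
    return [(statistics.mean(c), statistics.pstdev(c)) for c in columns]

def sample(model, n, seed=0):
    """Draw n synthetic rows, one Gaussian draw per column."""
    rng = random.Random(seed)
    return [[rng.gauss(mu, sigma) for mu, sigma in model] for _ in range(n)]

# Hypothetical "real" rows, e.g. (transfer amount, account age in years).
real = [[100.0, 5.0], [120.0, 6.0], [80.0, 4.0], [110.0, 5.5]]
synthetic = sample(fit(real), 1000)   # 1,000 statistically similar rows
```

A tester can then feed `synthetic` into an application in place of sensitive production records, or tweak the fitted parameters to manufacture edge cases that the real data never contained.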

"It's common for industries to have data that is sensitive in some capacity," Patki says. "Often when you're in a domain with sensitive data you're dealing with regulations, and even if there aren't legal regulations, it's in companies' best interest to be diligent about who gets access to what at which time. So, synthetic data is always better from a privacy perspective."

Veeramachaneni believes DataCebo is advancing the field of what it calls "synthetic enterprise data," or data generated from user behavior on large companies' software applications.

"Enterprise data of this kind is complex, and there is no universal availability of it, unlike language data," Veeramachaneni says. "When folks use our publicly available software and report back if it works on a certain pattern, we learn a lot of these unique patterns, and it allows us to improve our algorithms. From one perspective, we are building a corpus of these complex patterns, which for language and images is readily available."

DataCebo also recently released features to improve SDV's usefulness, including the SDMetrics library, a set of tools to assess the realism of generated data, as well as SDGym, a way to compare models' performances.

"It's about ensuring organizations trust this new data," Veeramachaneni says. "[Our tools offer] programmable synthetic data, which means we allow enterprises to insert their specific insight and intuition to build more transparent models."

As companies in every industry rush to adopt AI and other data science tools, DataCebo is ultimately helping them do so in a way that is more transparent and responsible.

"In the next few years, synthetic data from generative models will transform all data work," Veeramachaneni says. "We believe 90 percent of enterprise operations can be done with synthetic data."

View original post here:

Generative AI: Unlocking the Power of Synthetic Data To Improve Software Testing - SciTechDaily

Dealing with the limitations of our noisy world – MIT News

Tamara Broderick first set foot on MIT's campus when she was a high school student, as a participant in the inaugural Women's Technology Program. The monthlong summer academic experience gives young women a hands-on introduction to engineering and computer science.

What is the probability that she would return to MIT years later, this time as a faculty member?

That's a question Broderick could probably answer quantitatively using Bayesian inference, a statistical approach to probability that tries to quantify uncertainty by continuously updating one's assumptions as new data are obtained.
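That updating process can be shown in a few lines. In this minimal illustration (my example, not Broderick's code), a Beta prior over a coin's heads-probability is updated flip by flip, and the resulting posterior quantifies the uncertainty that remains.

```python
# Conjugate Beta-Binomial updating: start from a Beta(alpha, beta) prior
# over a coin's heads-probability and fold in observations one at a time.

def update(alpha, beta, flips):
    """Each heads observation adds to alpha, each tails to beta."""
    for heads in flips:
        if heads:
            alpha += 1
        else:
            beta += 1
    return alpha, beta

alpha, beta = 1, 1                               # uniform prior
alpha, beta = update(alpha, beta, [1, 1, 0, 1])  # observe H, H, T, H
posterior_mean = alpha / (alpha + beta)          # 4 / 6, about 0.667
```

The posterior is not just a point estimate: its spread (here, Beta(4, 2)) records how much uncertainty the four observations have left, which is exactly the "how well do we know it?" question.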

In her lab at MIT, the newly tenured associate professor in the Department of Electrical Engineering and Computer Science (EECS) uses Bayesian inference to quantify uncertainty and measure the robustness of data analysis techniques.

"I've always been really interested in understanding not just 'What do we know from data analysis?' but 'How well do we know it?'" says Broderick, who is also a member of the Laboratory for Information and Decision Systems and the Institute for Data, Systems, and Society. "The reality is that we live in a noisy world, and we can't always get exactly the data that we want. How do we learn from data but at the same time recognize that there are limitations and deal appropriately with them?"

Broadly, her focus is on helping people understand the confines of the statistical tools available to them and, sometimes, working with them to craft better tools for a particular situation.

For instance, her group recently collaborated with oceanographers to develop a machine-learning model that can make more accurate predictions about ocean currents. In another project, she and others worked with degenerative disease specialists on a tool that helps severely motor-impaired individuals utilize a computer's graphical user interface by manipulating a single switch.

A common thread woven through her work is an emphasis on collaboration.

"Working in data analysis, you get to hang out in everybody's backyard, so to speak. You really can't get bored because you can always be learning about some other field and thinking about how we can apply machine learning there," she says.

Hanging out in many academic backyards is especially appealing to Broderick, who struggled even from a young age to narrow down her interests.

A math mindset

Growing up in a suburb of Cleveland, Ohio, Broderick had an interest in math for as long as she can remember. She recalls being fascinated by the idea of what would happen if you kept adding a number to itself, starting with 1+1=2 and then 2+2=4.

"I was maybe 5 years old, so I didn't know what powers of two were or anything like that. I was just really into math," she says.

Her father recognized her interest in the subject and enrolled her in a Johns Hopkins program called the Center for Talented Youth, which gave Broderick the opportunity to take three-week summer classes on a range of subjects, from astronomy to number theory to computer science.

Later, in high school, she conducted astrophysics research with a postdoc at Case Western Reserve University. In the summer of 2002, she spent four weeks at MIT as a member of the first class of the Women's Technology Program.

She especially enjoyed the freedom offered by the program, and its focus on using intuition and ingenuity to achieve high-level goals. For instance, the cohort was tasked with building a device with LEGOs that they could use to biopsy a grape suspended in Jell-O.

The program showed her how much creativity is involved in engineering and computer science, and piqued her interest in pursuing an academic career.

"But when I got into college at Princeton, I could not decide: math, physics, computer science all seemed super-cool. I wanted to do all of it," she says.

She settled on pursuing an undergraduate math degree but took all the physics and computer science courses she could cram into her schedule.

Digging into data analysis

After receiving a Marshall Scholarship, Broderick spent two years at Cambridge University in the United Kingdom, earning a master of advanced study in mathematics and a master of philosophy in physics.

In the UK, she took a number of statistics and data analysis classes, including her first class on Bayesian data analysis in the field of machine learning.

It was a transformative experience, she recalls.

"During my time in the U.K., I realized that I really like solving real-world problems that matter to people, and Bayesian inference was being used in some of the most important problems out there," she says.

Back in the U.S., Broderick headed to the University of California at Berkeley, where she joined the lab of Professor Michael I. Jordan as a grad student. She earned a PhD in statistics with a focus on Bayesian data analysis.

She decided to pursue a career in academia and was drawn to MIT by the collaborative nature of the EECS department and by how passionate and friendly her would-be colleagues were.

Her first impressions panned out, and Broderick says she has found a community at MIT that helps her be creative and explore hard, impactful problems with wide-ranging applications.

"I've been lucky to work with a really amazing set of students and postdocs in my lab, brilliant and hard-working people whose hearts are in the right place," she says.

One of her team's recent projects involves a collaboration with an economist who studies the use of microcredit, or the lending of small amounts of money at very low interest rates, in impoverished areas.

The goal of microcredit programs is to raise people out of poverty. Economists run randomized controlled trials of villages in a region that receive or don't receive microcredit. They want to generalize the study results, predicting the expected outcome if one applies microcredit to other villages outside of their study.

But Broderick and her collaborators have found that results of some microcredit studies can be very brittle. Removing one or a few data points from the dataset can completely change the results. One issue is that researchers often use empirical averages, where a few very high or low data points can skew the results.

Using machine learning, she and her collaborators developed a method that can determine how many data points must be dropped to change the substantive conclusion of the study. With their tool, a scientist can see how brittle the results are.
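A toy version of that brittleness check (my sketch, not the group's actual method) greedily drops the data points that prop up a positive empirical average and counts how many removals flip its sign:

```python
# How few points must be dropped to flip the sign of a positive mean?
# Greedily remove the largest values, since they pull the mean up most.

def points_to_flip_mean(data):
    """Minimum removals to drive a positive mean to zero or below."""
    remaining = sorted(data)   # ascending; pop the largest from the end
    dropped = 0
    while remaining and sum(remaining) / len(remaining) > 0:
        remaining.pop()
        dropped += 1
    return dropped

# 19 mildly negative outcomes and one huge positive one: the apparent
# "effect" rests entirely on a single data point.
fragile = [-0.1] * 19 + [50.0]
print(points_to_flip_mean(fragile))  # -> 1: one removal flips the sign
```

A robust dataset forces many removals before its conclusion changes; a fragile one, like the skewed example above, flips after dropping a single extreme point, which is exactly the warning sign the method raises.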

"Sometimes dropping a very small fraction of data can change the major results of a data analysis, and then we might worry how far those conclusions generalize to new scenarios. Are there ways we can flag that for people? That is what we are getting at with this work," she explains.

At the same time, she is continuing to collaborate with researchers in a range of fields, such as genetics, to understand the pros and cons of different machine-learning techniques and other data analysis tools.

Happy trails

Exploration is what drives Broderick as a researcher, and it also fuels one of her passions outside the lab. She and her husband enjoy collecting patches they earn by hiking all the trails in a park or trail system.

"I think my hobby really combines my interests of being outdoors and spreadsheets," she says. "With these hiking patches, you have to explore everything and then you see areas you wouldn't normally see. It is adventurous, in that way."

They've discovered some amazing hikes they would never have known about, but also embarked on "more than a few total disaster hikes," she says. But each hike, whether a hidden gem or an overgrown mess, offers its own rewards.

And just like in her research, curiosity, open-mindedness, and a passion for problem-solving have never led her astray.

Continued here:

Dealing with the limitations of our noisy world - MIT News

Q&A: Is artificial intelligence defined the same way across disciplines? – Penn State University

UNIVERSITY PARK, Pa. Due to its rapid rise in everyday life, artificial intelligence (AI) technology has become increasingly relevant to social scientists. A team led by Penn State researchers reviewed a variety of social science literature and found that studies often defined AI differently. By drawing from some of these areas and computer science, the researchers created a single definition and framework that they said they hope will be compatible across disciplines.

Lead author Homero Gil de Zúñiga, Distinguished Professor in Media Effects and AI in the Donald P. Bellisario College of Communications at Penn State, said the definition is a starting point. It is purposefully broad so it can both adapt as AI evolves and boost interdisciplinary collaboration among researchers. The work, discussed by Gil de Zúñiga in the Q&A below, was published in the journal Political Communication with co-authors Timilehin Durotoye, a doctoral student in the Bellisario College, and Manuel Goyanes, assistant professor at the University Carlos III de Madrid.

Q: How did you identify the need for an artificial intelligence definition specifically for the social sciences?

Gil de Zúñiga: Obviously in society today, AI is picking up. It's not just scientific anymore. It has a human basis for all citizens. Regardless of the country that you're living in, AI is becoming more important. For computer scientists, it's been around for decades. But for us who are thinking about how it's going to be integrated in daily life, artificial intelligence is in its infancy. So, starting with computer science, we gathered different definitions from what had been written about AI. My co-authors and I found that there was not a large consensus about what AI is or what it might be. We realized that the definitions were not concrete and were often defined in a way so they fit a particular paper's study.

Q: What is the definition that emerged from your study?

Gil de Zúñiga: Our definition says: AI is "the tangible real-world capability of non-human machines or artificial entities to perform, task solve, communicate, interact and logically act akin to biological humans."

Q: How does your definition of AI differ from those used in disciplines outside the social sciences?

Gil de Zúñiga: If someone is writing a study on Alexa, they might define artificial intelligence in a very particular way. For example, they may say AI is a machine that performs smart tasks. Or they may base it on the system's ability to interpret external data. When it comes to journalism and communication, the definitions might abandon the machine and instead define AI as a set of algorithms designed to generate and distribute media, text and images. So, that's why we wanted to combine all of these definitions and generate something that will work across disciplines.

Originally posted here:

Q&A: Is artificial intelligence defined the same way across disciplines? - Penn State University

WilmU, Code Differently team up for 18 credit opportunity in tech – Milford LIVE

A new partnership between Wilmington University and a national coding group will provide Delaware residents the opportunity to earn a big chunk of college credits in concentrations like computer science, cybersecurity and data analysis.

The university has partnered with Code Differently, which provides hands-on training and education through coding classes that give participants the technical and cognitive skills they need to succeed in technology-driven workplaces.

The partnership, announced Tuesday, provides Code Differently participants up to 18 college credits at WilmU.

Since its establishment in 2018, 800 First State adults have received software development training from Code Differently, with an 89% completion rate and an 85% work-placement rate.

According to the organization, the most recent group of participants included 15 students who started the 20-week coding course in February.

"This collaboration with Code Differently speaks to our mission of providing opportunity and flexibility to students, and it also addresses our comprehensive focus on technology," said LaVerne Harmon, WilmU president.

Harmon said the school understands the high demand for skilled information technology professionals, and she suspects that need will continue to grow.

"This partnership reflects an opportunity for innovation to meet accessibility in higher education," she said.

Stephanie Eldridge, CEO and co-founder of Code Differently, said the organization wants to eliminate barriers to learning and success, and is committed to the advancement of all of its participants.

Lindsay Rice, WilmU's senior director of Academic Partnerships, said the partnership leverages what participants have learned and provides an easy transfer to bachelor's programs that connect directly to Code Differently programs.

As students embark on their educational journeys with WilmU, Rice said, they save time and money while earning a competitive degree.

"Upon completion of our 20-week full stack coding program, this agreement allows all of our participants, past, present and future, to earn 18 credits that can be applied directly to in-demand undergraduate computer science degree programs at WilmU," Eldridge said. "Our partnership with WilmU opens the door for all participants who realize that higher education is the other key to their success."

Raised in Doylestown, Pennsylvania, Jarek earned a B.A. in journalism and a B.A. in political science from Temple University in 2021. After running CNNs Michael Smerconishs YouTube channel, Jarek became a reporter for the Bucks County Herald before joining Delaware LIVE News.


See the original post here:

WilmU, Code Differently team up for 18 credit opportunity in tech - Milford LIVE

The best computer science universities in Latin America – AOL


The coveted computer science degree has promised graduates six-figure salaries and exciting careers in high-tech spaces since the personal computer took off in the 1990s.

As an academic discipline, computer science dates back even further. In the 1960s, Purdue University became the first major institution to found a department dedicated to the practice, complete with history books written by faculty members because, at that point, none existed.

These days, the U.S., China, and Singapore are often associated with top-tier computer science degrees, but there are highly respected universities in nearly every corner of the world offering an education in the field. Revelo collected rankings from U.S. News and World Report to identify the top 10 universities for computer science in Latin America as part of a larger global analysis.

Slowly but steadily, Latin America has developed some of the world's most promising tech hubs. The region is outpacing others to become a top destination for developers, according to a 2023 report by HackerRank. Students considering studying computer science abroad can add universities in this large, diverse region to their list thanks to its commitment to internationalizing.

Web designers, software developers, computer network architects, research scientists, and systems administrators all leverage computer science knowledge as the basis of their job. And it's a job that analysts project to be in high demand, even as layoffs at major tech companies dominate headlines.

The typical U.S. worker in a computer science-based career today earns an income of $100,530, according to the Bureau of Labor Statistics. The bureau also estimates the industry will require 377,500 new workers each year for the next decade, as the industry is set to grow faster than the average growth rate of all other industries.

With the tech industry facing criticism for lacking diversity and products like artificial intelligence built on mostly white male perspectives, employers could benefit from looking beyond traditional institutions in recruiting efforts. From 2010-2020, the number of women graduating from U.S. schools with computer science degrees rose only 3%, and graduates from underrepresented racial groups remained flat at just over 20% of all graduates.

U.S. News and World Report ranked 778 universities globally with at least 250 academic research papers, calculating its subject scores on a 0-100 scale based on the number of publications and citations an institution received, its global and regional research reputation, and other factors.

- Location: Niteroi, Brazil - Computer science score: 18.0 out of 100 (#710 globally) - Overall score: 38.4 out of 100 (#1,017 globally) - Enrollment: 49,554

- Location: Fortaleza, Brazil - Computer science score: 20.0 out of 100 (#683 globally) - Overall score: 39.2 out of 100 (#977 globally) - Enrollment: Not available

- Location: Sao Carlos, Brazil - Computer science score: 22.4 out of 100 (#650 globally) - Overall score: 41.0 out of 100 (#896 globally) - Enrollment: Not available

- Location: Curitiba, Brazil - Computer science score: 23.5 out of 100 (#630 globally) - Overall score: 25.5 out of 100 (#1,603 globally) - Enrollment: Not available

- Location: Florianopolis, Brazil - Computer science score: 23.8 out of 100 (#626 globally) - Overall score: 47.9 out of 100 (#618 globally) - Enrollment: Not available

- Location: Brasilia, Brazil - Computer science score: 25.2 out of 100 (#605 globally) - Overall score: 45.4 out of 100 (#710 globally) - Enrollment: Not available

- Location: Buenos Aires, Argentina - Computer science score: 26.5 out of 100 (#588 globally) - Overall score: 53.8 out of 100 (#426 globally) - Enrollment: Not available

- Location: Mexico City, Mexico - Computer science score: 28.4 out of 100 (#556 globally) - Overall score: 54.3 out of 100 (#405 globally) - Enrollment: 172,729

- Location: Santiago, Chile - Computer science score: 32.4 out of 100 (#472 globally) - Overall score: 57.7 out of 100 (#314 globally) - Enrollment: 31,579

- Location: Curitiba, Brazil - Computer science score: 32.7 out of 100 (#470 globally) - Overall score: 42.6 out of 100 (#816 globally) - Enrollment: Not available

- Location: Sao Paulo, Brazil - Computer science score: 33.7 out of 100 (#449 globally) - Overall score: 51.4 out of 100 (#497 globally) - Enrollment: Not available

- Location: Mexico City, Mexico - Computer science score: 33.7 out of 100 (#449 globally) - Overall score: 36.7 out of 100 (#1,095 globally) - Enrollment: Not available

- Location: Monterrey, Mexico - Computer science score: 34.6 out of 100 (#438 globally) - Overall score: 44.2 out of 100 (#759 globally) - Enrollment: 49,696

- Location: Rio de Janeiro, Brazil - Computer science score: 35.9 out of 100 (#412 globally) - Overall score: 54.1 out of 100 (#413 globally) - Enrollment: 45,964

- Location: Recife, Brazil - Computer science score: 37.4 out of 100 (#396 globally) - Overall score: 40.9 out of 100 (#901 globally) - Enrollment: Not available

- Location: Porto Alegre, Brazil - Computer science score: 39.6 out of 100 (#364 globally) - Overall score: 53.6 out of 100 (#432 globally) - Enrollment: Not available

- Location: Belo Horizonte, Brazil - Computer science score: 44.3 out of 100 (#301 globally) - Overall score: 52.6 out of 100 (#468 globally) - Enrollment: Not available

- Location: Santiago, Chile - Computer science score: 45.0 out of 100 (#285 globally) - Overall score: 54.4 out of 100 (#400 globally) - Enrollment: 37,286

- Location: Campinas, Brazil - Computer science score: 48.2 out of 100 (#246 globally) - Overall score: 58.7 out of 100 (#294 globally) - Enrollment: 31,199

- Location: Sao Paulo, Brazil - Computer science score: 55.1 out of 100 (#154 globally) - Overall score: 68.5 out of 100 (#120 globally) - Enrollment: 82,010

This story features data reporting by Paxtyn Merten, writing by Dom DiFurio, and is part of a series utilizing data automation across 5 regions.

This story originally appeared on Revelo and was produced and distributed in partnership with Stacker Studio.

Visit link:

The best computer science universities in Latin America - AOL

Computer science professor leaves MIT ‘dream job’ for Yeshiva due to Jew-hatred – JNS.org

(February 26, 2024 / JNS)

After quitting his dream job at Massachusetts Institute of Technology due to antisemitism on campus, Mauricio Karchmer is fitting in at his new job at Yeshiva University.

The computer scientist has, in his first two days at Yeshiva, already mentored students, taught courses in multiple domains of expertise, and helped both university leadership and the broader community understand the dynamics on college campuses outside of YU, Noam Wasserman, dean of Yeshiva's Sy Syms School of Business, told JNS.

Wasserman said Karchmer is already brainstorming with department chairs at the school about a course he is designing for the fall semester, which will bring together his expertise in financial engineering and computer science.

The professor also held a fireside chat at Yeshiva with Rabbi Ari Berman, the university president, about antisemitism on campus.

Karchmer observed that the stakes are much bigger than just the war with Hamas, because the Palestinians are a pawn and Israel is a proxy, Wasserman said.

Karchmer announced his move to Yeshiva, where he is a visiting guest faculty member, on LinkedIn. He said he was honored to be part of a deeply grounded institution with leaders who lead by living up to their values.

Also on LinkedIn, Berman wrote that it was a privilege to welcome Karchmer to the faculty. As a top-tier professor in his field and a leader who lives his values with integrity and authenticity, he is a role model to us all, Berman said.

In an article for The Free Press, Karchmer noted that MIT drew comparisons between Israel and Hamas. The head of his department and its diversity, equity and inclusion office sent out a message riddled with equivocations, without mentioning the barbarity of Hamas's attack, stating only that we are deeply horrified by the violence against civilians and wish to express our deep concern for all those involved, Karchmer wrote.

I was shocked that my institution, led by people who are meant to see the world rationally, could not simply condemn a brutal terrorist act, he added.

Wasserman told JNS he was very impressed with Karchmer's mix of humility and desire to learn, combined with steadfast adherence to his values, even when that meant having to leave his dream job at MIT after those values were threatened.


Read the original post:

Computer science professor leaves MIT 'dream job' for Yeshiva due to Jew-hatred - JNS.org

Earthly Exploration | College of Engineering & Applied Science – CU Boulder’s College of Engineering & Applied Science

From innovative underground drones and weather satellites to improved indoor air quality and climate prediction, researchers are finding new ways to look at the world.

Associate Professor Claire Monteleoni Computer Science

Monteleoni is a leading researcher in the new and interdisciplinary field of climate informatics, broadly defined as any research combining climate science with approaches from statistics, machine learning and data mining. She uses machine learning to combine climate models to get the best possible predictions of future outcomes and to forecast hurricane tracks to give communities more time to prepare. Her group also uses machine learning to explore how extreme weather events like drought are related to climate change overall. In September, Monteleoni brought to Boulder the eighth International Workshop on Climate Informatics, an event she co-founded in 2011.
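Monteleoni's published approach to combining climate models is built on online learning with expert advice. The sketch below is a generic multiplicative-weights version of that idea, not her actual algorithm; the function name, the learning rate `eta` and the squared-error loss are illustrative choices:

```python
import math

def multiplicative_weights_forecast(model_preds, observations, eta=0.5):
    """Online ensemble forecasting with multiplicative weights.

    model_preds  -- per time step, a list of each member model's forecast
    observations -- the observed value at each time step
    Returns the ensemble forecast made at each step (weighted mean of the
    members, using weights learned from past errors only).
    """
    n = len(model_preds[0])
    w = [1.0 / n] * n          # start with uniform trust in every model
    ensemble = []
    for preds, obs in zip(model_preds, observations):
        # Forecast first, using weights based only on past performance
        ensemble.append(sum(wi * p for wi, p in zip(w, preds)))
        # Then exponentially downweight each model by its squared error
        w = [wi * math.exp(-eta * (p - obs) ** 2)
             for wi, p in zip(w, preds)]
        total = sum(w)
        w = [wi / total for wi in w]
    return ensemble
```

Models that track the observations keep their weight, while persistently wrong models are exponentially downweighted, so the ensemble forecast converges toward the best-performing member.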

Assistant Professor Marina Vance Mechanical Engineering

Vance recently led the largest collaborative study to date on indoor air quality at a research house at the University of Texas at Austin. The project, titled HOMEChem, was conducted with 20 faculty members from 13 universities. Researchers outfitted the house with varying instrumentation and then performed everyday activities like cooking and cleaning, with the goal of understanding the chemical processes happening in indoor environments and how they may affect those inside.

Professor Rajagopalan Balaji Civil, Environmental and Architectural Engineering

Balaji's research is an interdisciplinary effort to ensure sustainable water quantity and quality for growing populations under increasing climate variability. His current research looks at how past societies responded to climate variations and how the lessons learned there can be applied to current natural resource management problems. Other projects model climate extremes at national parks and examine the health effects of climate change, like the growing risk of epidemics of chronic kidney disease.

Professor Albin Gasiewski Electrical, Computer and Energy Engineering

Thanks to Gasiewski's work, there will soon be a fleet of mini-satellites orbiting the Earth, providing improved weather forecasting to people who need it most, including farmers, airlines and shipping companies. His team's work, licensed to space technology company Orbital Micro Systems, would allow for observation of the Earth every 15 minutes using microwave eyes. Unlike the more common infrared or optical satellites, these passive microwave frequencies can see through clouds, detect water vapor and precipitation, and track weather conditions as they evolve. The work has also provided learning opportunities for dozens of undergraduate students in electrical engineering and the Colorado Space Grant, who have contributed to both the microwave sensing systems and the vehicles that will take them to low-Earth orbit.

Professor Michael D. McGehee Chemical and Biological Engineering

McGehee and his team are developing windows that can switch from clear to tinted when voltage is applied, depending on the season or time of day. The windows don't need blinds, allow for more natural light and cut down on glare. Another new technique they're working on could increase solar energy cell efficiency from 21 percent to 25 percent. It works by covering existing cells with a kind of perovskite, created from salt solutions, that offers great light absorption, among other properties prized in a variety of technologies. McGehee works closely with the National Renewable Energy Laboratory in Boulder.

Professor Sean Humbert Mechanical Engineering

Humbert is leading an interdisciplinary engineering team to design drones that can explore underground environments like subway tunnels, mines and caves. It's part of a $4.5 million grant from the U.S. Defense Advanced Research Projects Agency (DARPA) that challenges teams from across the country to complete three increasingly difficult underground tasks to discover the best systems and methods. The work may one day enable teams of flying and rolling drones to work together to search through dark and dangerous environments to find human survivors of earthquakes, chemical spills and more. The project starts in September, when Humbert's group will begin testing its robot on a mock search and rescue operation in miles of steam tunnels.

Read this article:

Earthly Exploration | College of Engineering & Applied Science - CU Boulder's College of Engineering & Applied Science

Spring 2024 Bethe Lecture bridges physics and computer science – The College of Arts & Sciences

Artificial intelligence applications perform amazing feats: winning at chess, writing college admission essays, passing bar exams. But the complexity of these systems is so large that it rivals that of nature, with all the challenges that come with understanding nature.

An approach to a better understanding of this computer science puzzle is emerging from an unexpected direction: physics. Lenka Zdeborová, professor of physics and computer science at École Polytechnique Fédérale de Lausanne in Switzerland, is using methods from theoretical physics to peer inside AI algorithms and answer fundamental questions about how they work and what they can and cannot do.

Zdeborová will visit campus in March to deliver the spring 2024 Bethe Lecture, Bridging Physics and Computer Science: Understanding Hard Problems, on Wednesday, March 13 at 7:30 p.m. in Schwartz Auditorium, Rockefeller Hall.

Zdeborová, who enjoys erasing boundaries between theoretical physics, mathematics and computer science, will explore how principles from statistical physics provide insights into challenging computational problems. Through this interdisciplinary lens, she has uncovered phase transitions that delineate the complexity of tasks, distinguishing between those easily tackled by computers and those posing significant challenges.

Lenka Zdeborová is a leading figure in the scientific community whose work I've long admired, solving problems in physics, computer science, artificial intelligence and many other fields, said James Sethna, the James Gilbert White Professor of Physical Sciences in the College of Arts and Sciences (A&S). Zdeborová's focus is on the study of machine learning, where she has been uncovering fundamental properties and limitations of the algorithms underlying the amazing performance of ChatGPT and related AI tools.

Zdeborová's expertise is in studying the properties of so-called hard problems near the transition between solvable and unsolvable. She and her colleagues can predict how quickly certain AI systems can learn from examples, and how well or poorly they generalize from those examples. She studies how challenging it is to extract information from noisy data, and where extracting this information is possible or infeasible, determining where an algorithm can do better than random guessing.
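A classic example of such a transition, standard in this field though not drawn from Zdeborová's own talks, is random 3-SAT: formulas with few clauses per variable are almost always satisfiable, while beyond a critical clause-to-variable ratio (around 4.27) they are almost never satisfiable, and the hardest instances concentrate near the threshold. A brute-force sketch in Python (all function names here are my own):

```python
import itertools
import random

def random_3sat(n_vars, n_clauses, rng):
    """Random 3-SAT formula: each clause picks 3 distinct variables,
    each negated with probability 1/2. A literal is (var, negated?)."""
    return [[(v, rng.random() < 0.5) for v in rng.sample(range(n_vars), 3)]
            for _ in range(n_clauses)]

def satisfiable(n_vars, formula):
    """Brute-force satisfiability check; fine for small n_vars."""
    for bits in itertools.product([False, True], repeat=n_vars):
        # A clause is satisfied when some literal's sign matches the bit
        if all(any(bits[v] != neg for v, neg in clause)
               for clause in formula):
            return True
    return False

def sat_fraction(n_vars, ratio, trials=20, seed=0):
    """Fraction of random formulas satisfiable at the given
    clauses-per-variable ratio."""
    rng = random.Random(seed)
    m = int(ratio * n_vars)
    hits = sum(satisfiable(n_vars, random_3sat(n_vars, m, rng))
               for _ in range(trials))
    return hits / trials
```

At sizes this small the transition is smeared out, but the qualitative picture already shows: well below the threshold almost every formula is satisfiable, and well above it almost none are.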

The deep networks that underlie much of modern AI are designed by humans, but their complexity, with hundreds of billions of parameters, is staggering, said Thorsten Joachims, associate professor of computer science in the Cornell Ann S. Bowers College of Computing and Information Science. Analyzing their complex behavior with approaches from physics provides exciting new insights and levels of understanding.

In addition to the public Bethe Lecture, Zdeborová will give a physics colloquium talk, From Bethe Lattice to Unraveling Complex Systems, on Monday, March 11 at 4 p.m. in Schwartz Auditorium, Rockefeller Hall.

Zdeborová will also participate in an AI Lunchtime Seminar, On Generalization and Uncertainty in Learning with Neural Networks, on Friday, March 15 at noon in 122 Gates Hall, with Zoom available in 700 Clark Hall.

At École Polytechnique Fédérale de Lausanne, Zdeborová leads the Statistical Physics of Computation Laboratory.

Zdeborová received a Ph.D. in physics from Université Paris-Sud and from Charles University in Prague in 2008. She spent two years at Los Alamos National Laboratory as a director's postdoctoral fellow, then spent 10 years as a researcher at the Centre National de la Recherche Scientifique (CNRS), working in the Institute of Theoretical Physics in Saclay, France. She has served as an editorial board member for Journal of Physics A, Physical Review E, Physical Review X, SIMODS, Machine Learning: Science and Technology, and Information and Inference.

Her honors include the CNRS bronze medal in 2014, the Philippe Meyer Prize in Theoretical Physics in 2016, the Irène Joliot-Curie Prize in 2018, the Gibbs Lectureship of the AMS and the Neuron Fund Award in 2021.

The Hans Bethe Lecture Series, established by the Department of Physics and the College of Arts and Sciences, honors Bethe, Cornell professor of physics from 1936 until his death in 2005. Bethe won the 1967 Nobel Prize in physics for his description of the nuclear processes that power the sun.

See original here:

Spring 2024 Bethe Lecture bridges physics and computer science - The College of Arts & Sciences