Category Archives: Computer Science

Recognizing fake news now a required subject in California schools – Pleasanton Weekly

Pushing back against the surge of misinformation online, California will now require all K-12 students to learn media literacy skills such as recognizing fake news and thinking critically about what they encounter on the internet.

Gov. Gavin Newsom last month signed Assembly Bill 873, which requires the state to add media literacy to curriculum frameworks for English language arts, science, math and history-social studies, rolling out gradually beginning next year. Instead of a stand-alone class, the topic will be woven into existing classes and lessons throughout the school year.

"I've seen the impact that misinformation has had in the real world how it affects the way people vote, whether they accept the outcomes of elections, try to overthrow our democracy," said the bill's sponsor, Assemblymember Marc Berman, a Democrat from Menlo Park. "This is about making sure our young people have the skills they need to navigate this landscape."

The new law comes amid rising public distrust in the media, especially among young people. A 2022 Pew Research Center survey found that adults under age 30 are nearly as likely to believe information on social media as they are from national news outlets. Overall, only 7% of adults have "a great deal" of trust in the media, according to a Gallup poll conducted last year.

Media literacy can help change that, advocates believe, by teaching students how to recognize reliable news sources and the crucial role that media plays in a democracy.

"The increase in Holocaust denial, climate change denial, conspiracy theories getting a foothold, and now AI ... all this shows how important media literacy is for our democracy right now," said Jennifer Ormsby, library services manager for the Los Angeles County Office of Education. "The 2016 election was a real eye-opener for everyone on the potential harms and dangers of fake news."

AB 873 passed nearly unanimously in the Legislature, underscoring the nonpartisan nature of the topic. Nationwide, Texas, New Jersey and Delaware have also passed strong media literacy laws, and more than a dozen other states are moving in that direction, according to Media Literacy Now, a nonprofit research organization that advocates for media literacy in K-12 schools.

Still, California's law falls short of Media Literacy Now's recommendations. California's approach doesn't include funding to train teachers, an advisory committee, input from librarians, surveys or a way to monitor the law's effectiveness.

Keeping the bill simple, though, was a way to help ensure its passage, Berman said. Those features can be implemented later, and he felt it was urgent to pass the law quickly so students can start receiving media literacy education as soon as possible. The law goes into effect Jan. 1, 2024, as the state begins updating its curriculum frameworks, although teachers are encouraged to teach media literacy now.

Berman's law builds on a previous effort in California to bring media literacy to K-12 classrooms. In 2018, Senate Bill 830 required the California Department of Education to provide media literacy resources (lesson plans, project ideas, background material) to the state's K-12 teachers. But it didn't make media literacy mandatory.

The new law also overlaps somewhat with California's effort to bring computer science education to all students. The state hopes to expand computer science, which can include aspects of media literacy, to all students, possibly even requiring it for high school graduation. Newsom recently signed Assembly Bill 1251, which creates a commission to look at ways to recruit more computer science teachers to California classrooms. Berman is also sponsoring Assembly Bill 1054, which would require high schools to offer computer science classes. That bill is currently stalled in the Senate.

Understanding media, and creating it

Teachers don't need a state law to show students how to be smart media consumers, and some have been doing it for years. Merek Chang, a high school science teacher at Hacienda La Puente Unified in the City of Industry east of Los Angeles, said the pandemic was a wake-up call for him.

During remote learning, he gave students two articles on the origins of the coronavirus. One was an opinion piece from the New York Post, a tabloid, and the other was from a scientific journal. He asked students which they thought was accurate. More than 90% chose the Post piece.

"It made me realize that we need to focus on the skills to understand content, as much as we focus on the content itself," Chang said.

He now incorporates media literacy in all aspects of his lesson plans. He relies on the Stanford History Education Group, which offers free media literacy resources for teachers, and took part in a KQED media literacy program for teachers.

In addition to teaching students how to evaluate online information, he shows them how to create their own media. Homework assignments include making TikTok-style videos on protein synthesis for mRNA vaccines, for example. Students then present their projects at home or at lunchtime events for families and the community.

"The biggest impact, I've noticed, is that students feel like their voice matters," Chang said. "The work isn't just for a grade. They feel like they're making a difference."

Ormsby, the Los Angeles County librarian, has also been promoting media literacy for years. Librarians generally have been on the forefront of media literacy education, and California's new law refers to the Modern School Library Standards for media literacy guidelines.

Ormsby teaches concepts like "lateral reading" (comparing an online article with other sources to check for accuracy) and reverse imaging (searching online to trace a photo to its original source or checking if it's been altered). She also provides lesson plans, resources and book recommendations such as "True or False: A CIA analyst's guide to spotting fake news" and, for elementary students, "Killer Underwear Invasion! How to spot fake news, disinformation & conspiracy theories."

She's happy that the law passed, but would like to see librarians included in the rollout and the curriculum implemented immediately, not waiting until the frameworks are updated.

The gradual implementation of the law was deliberate, since schools are already grappling with so many other state mandates, said Alvin Lee, executive director of Generation Up, a student-led advocacy group that was among the bill's sponsors. He's hoping that local school boards decide to prioritize the issue on their own by funding training for teachers and moving immediately to get media literacy into classrooms.

"Disinformation contributes to polarization, which we're seeing happen all over the world," said Lee, a junior at Stanford who said it's a top issue among his classmates. "Media literacy can address that."

In San Francisco Unified, Ricardo Elizalde is a teacher on special assignment who trains elementary teachers in media literacy. His staff gave out 50 copies of "Killer Underwear Invasion!" for teachers to build activities around, and encourages students to make their own media as well.

Elementary school is the perfect time to introduce the topic, he said.

"We get all these media thrown at us from a young age, we have to learn to defend ourselves," Elizalde said. "Media literacy is a basic part of being literate. If we're just teaching kids how to read, and not think critically about what they're reading, we're doing them a disservice."


Argonne receives funding to advance diversity in STEM – Argonne National Laboratory

The U.S. Department of Energy (DOE) has awarded DOE's Argonne National Laboratory funding as part of the Reaching a New Energy Sciences Workforce (RENEW) initiative, aimed at fostering diversity in STEM and advancing innovative research opportunities.

DOE announced $70 million to support internships, training programs and mentorship opportunities at 65 different institutions, including 40 higher-learning institutions that serve minority populations. By supporting these partnerships, DOE aims to create a more diverse STEM talent pool capable of addressing critical energy, environmental and nuclear challenges.

"To compete on the global stage, America will need to draw scientists and engineers from every pocket of the nation, and especially from communities that have been historically underrepresented in STEM," said U.S. Secretary of Energy Jennifer M. Granholm. "The RENEW initiative will support talented, motivated students to follow their passions for science, energy and innovation, and help us overcome challenges like climate change and threats to our national security."

RENEW will offer hands-on experiences and open new career avenues for young scientists and engineers from minority-serving institutions.

Argonne is partnering with six minority-serving institutions to mentor 24 undergraduate and eight doctoral students on research projects related to artificial intelligence (AI) and autonomous discovery (AD), an initiative that is harnessing the power of robotics, machine learning and AI to accelerate the pace of science. Computational biologist Arvind Ramanathan is co-PI on the project, which is called Mobilizing the Emerging Diverse AI Talent through Design and Automated Control of Autonomous Scientific Laboratories. Argonne will leverage its AD facilities, such as the Rapid Prototyping Lab, where researchers identify common issues that can arise during AD and then quickly create and test solutions.

The lead PI is Sumit Kumar Jha, professor of computer science at Florida International University. Other university partners include Bowie State University, Cleveland State University, Oakland University and University of Central Florida.

The RENEW initiative leverages the unique capabilities of DOE's national laboratories, user facilities and research infrastructure to provide valuable training opportunities for students, faculty and researchers from underrepresented backgrounds. This project is funded by the DOE Office of Science, Advanced Scientific Computing Research program.


Making Computing Sustainable, With Help from NSF Grant – Yale University

With research projects (including one that recently received a $1.3 million grant) and an upcoming course, Prof. Robert Soulé is looking at new ways to make computing more sustainable.

Working with Prof. Noa Zilberman from Oxford University, Soulé has received a grant jointly funded by the United Kingdom's Engineering and Physical Sciences Research Council (EPSRC) and the United States National Science Foundation (NSF) for work that aims to reduce the energy consumption of computing. Specifically, it sets its sights on computer networks, which consume an estimated one and a half times the energy of all data centers, according to some reports. In contrast to other large-scale computer infrastructures, accounting for the carbon emissions of the network is extremely hard.

The project is designed to collect information about the power consumption of network devices, specifically the computer hardware involved in connecting users to computer networks. This includes switches, which connect different computers together, and the network interface cards in computers or servers that connect users to the network. Traditional computer networks try to optimize the paths to reduce latency and achieve the fastest response possible.

"But what we're hypothesizing is that you could actually instead choose paths that would result in the lowest amount of power consumed, or maybe the greenest path," said Soulé, associate professor of computer science & electrical engineering. "We want to measure how much power they are consuming and the quality of the power that they're consuming. For example, did it come from a green energy source? So we're collecting the data that would allow you to make these informed decisions, and designing the network algorithms that would change a routing behavior in order to reduce the overall carbon footprint."

One possible way to do that is to develop systems that send computer traffic to a path that consumes energy from a green energy source. Another is a system that chooses a path that minimizes overall power consumption.
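
The path-selection idea Soulé describes can be sketched as an ordinary shortest-path computation where the edge weight is estimated link power rather than latency. This is only an illustrative sketch, not the project's actual algorithm; the topology, power figures and function names below are invented.

```python
import heapq

# Toy network: each directed link is annotated with an estimated power
# cost (in watts, invented numbers) for carrying traffic across it.
LINKS = {
    "A": {"B": 12.0, "C": 5.0},
    "B": {"D": 4.0},
    "C": {"D": 9.0, "B": 2.0},
    "D": {},
}

def lowest_power_path(links, src, dst):
    """Dijkstra's algorithm with link power (not latency) as the edge weight."""
    queue = [(0.0, src, [src])]
    seen = set()
    while queue:
        power, node, path = heapq.heappop(queue)
        if node == dst:
            return power, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, watts in links[node].items():
            if nxt not in seen:
                heapq.heappush(queue, (power + watts, nxt, path + [nxt]))
    return None

print(lowest_power_path(LINKS, "A", "D"))  # (11.0, ['A', 'C', 'B', 'D'])
```

With latency as the weight, a router might prefer the direct A-B hop; with power as the weight, the longer A-C-B-D route wins because its links draw less energy.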

Another component of Soulé's work in this area is a collaboration with Prof. Rajit Manohar, the John C. Malone Professor of Electrical Engineering and Computer Science. They're developing network hardware that can go into idle mode when it's inactive, much like some cars whose engines automatically turn off at red lights.

"There's a problem with current network hardware in that it's not really able to go into idle mode because a part of it is always running to see if information is arriving," he said. "So Rajit and I have been looking at whether we can design hardware devices, a new network switch, that would consume energy in proportion to the amount of traffic that it's seeing. And if it did see less traffic, it would go into idle mode."

Soulé is also co-teaching a course next year on sustainable computing with Dr. Eve Schooler, an IEEE Fellow and Yale alum. The course, they said, takes a broader view of the subject.

"We're trying to do more of a survey of different approaches to improving the carbon efficiency of computer networks in general," Soulé said. "But even beyond that, we're also looking at a broader discussion at the policy level, at the intersection of sustainability and technology."

Schooler said the course will cover a large swath of topics. For instance, it might explore issues like the role that computing can play in the Intergovernmental Panel on Climate Change or how large institutions perform carbon accounting.

"We'll also focus on networking, and on topics around the streaming infrastructure, the content distribution networks," she said. "Other topics will be about large algorithms, like large language models, the ChatGPTs of the world, and Bitcoin or some of the cryptocurrencies that are also large consumers of electricity."


The mind’s eye of a neural network system – Purdue University

WEST LAFAYETTE, Ind. – In the background of image recognition software that can ID our friends on social media and wildflowers in our yard are neural networks, a type of artificial intelligence inspired by how our own brains process data. While neural networks sprint through data, their architecture makes it difficult to trace the origin of errors that are obvious to humans (like confusing a Converse high-top with an ankle boot), limiting their use in more vital work like health care image analysis or research. A new tool developed at Purdue University makes finding those errors as simple as spotting mountaintops from an airplane.

"In a sense, if a neural network were able to speak, we're showing you what it would be trying to say," said David Gleich, a Purdue professor of computer science in the College of Science who developed the tool, which is featured in a paper published in Nature Machine Intelligence. "The tool we've developed helps you find places where the network is saying, 'Hey, I need more information to do what you've asked.' I would advise people to use this tool on any high-stakes neural network decision scenario or image prediction task."

Code for the tool is available on GitHub, as are use case demonstrations. Gleich collaborated on the research with Tamal K. Dey, also a Purdue professor of computer science, and Meng Liu, a former Purdue graduate student who earned a doctorate in computer science.

In testing their approach, Gleich's team caught neural networks mistaking the identity of images in databases of everything from chest X-rays and gene sequences to apparel. In one example, a neural network repeatedly mislabeled images of cars from the Imagenette database as cassette players. The reason? The pictures were drawn from online sales listings and included tags for the cars' stereo equipment.

Neural network image recognition systems are essentially algorithms that process data in a way that mimics the weighted firing pattern of neurons as an image is analyzed and identified. A system is trained to its task (such as identifying an animal, a garment or a tumor) with a training set of images that includes data on each pixel, tagging and other information, and the identity of the image as classified within a particular category. Using the training set, the network learns, or extracts, the information it needs in order to match the input values with the category. This information, a string of numbers called an embedded vector, is used to calculate the probability that the image belongs to each of the possible categories. Generally speaking, the correct identity of the image is within the category with the highest probability.
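
The final step, turning per-category scores derived from the embedded vector into probabilities, is conventionally done with a softmax function. The sketch below illustrates that step only; the category names and score values are invented, and a real network would produce the scores by applying a learned final layer to the embedded vector.

```python
import math

def softmax(scores):
    """Convert raw per-category scores into probabilities that sum to 1."""
    m = max(scores)                      # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Invented scores for three hypothetical categories.
categories = ["car", "cassette player", "garment"]
probs = softmax([2.0, 1.5, 0.2])
best = categories[probs.index(max(probs))]
print(best, [round(p, 3) for p in probs])
```

Note that the highest-probability category wins even when the runner-up is close, which is exactly the ambiguity the Purdue tool is designed to surface.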

But the embedded vectors and probabilities don't correlate to a decision-making process that humans would recognize. Feed in 100,000 numbers representing the known data, and the network produces an embedded vector of 128 numbers that don't correspond to physical features, although they do make it possible for the network to classify the image. In other words, you can't open the hood on the algorithms of a trained system and follow along. Between the input values and the predicted identity of the image is a proverbial black box of unrecognizable numbers across multiple layers.

"The problem with neural networks is that we can't see inside the machine to understand how it's making decisions, so how can we know if a neural network is making a characteristic mistake?" Gleich said.

Rather than trying to trace the decision-making path of any single image through the network, Gleich's approach makes it possible to visualize the relationship that the computer sees among all the images in an entire database. Think of it like a bird's-eye view of all the images as the neural network has organized them.

The relationship among the images (that is, the network's prediction of the identity classification of each of the images in the database) is based on the embedded vectors and probabilities the network generates. To boost the resolution of the view and find places where the network can't distinguish between two different classifications, Gleich's team first developed a method of splitting and overlapping the classifications to identify where images have a high probability of belonging to more than one classification.

The team then maps the relationships onto a Reeb graph, a tool taken from the field of topological data analysis. On the graph, each group of images the network thinks are related is represented by a single dot. Dots are color coded by classification. The closer the dots, the more similar the network considers groups to be, and most areas of the graph show clusters of dots in a single color. But groups of images with a high probability of belonging to more than one classification will be represented by two differently colored overlapping dots. With a single glance, areas where the network cannot distinguish between two classifications appear as a cluster of dots in one color, accompanied by a smattering of overlapping dots in a second color. Zooming in on the overlapping dots will show an area of confusion, like the picture of the car that's been labeled both car and cassette player.
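
The "splitting and overlapping" idea, flagging images whose probability is high for more than one class, can be illustrated with a few lines of code. This is only a simplified sketch of the concept; the probability tables, threshold and names are invented, and the published method builds a full Reeb graph rather than a flat list.

```python
# Invented per-image probability tables for three hypothetical classes.
predictions = {
    "img_001": {"car": 0.55, "cassette player": 0.42, "garment": 0.03},
    "img_002": {"garment": 0.97, "car": 0.02, "cassette player": 0.01},
    "img_003": {"car": 0.50, "cassette player": 0.48, "garment": 0.02},
}

def ambiguous(predictions, threshold=0.35):
    """Return images whose probability is high for more than one class."""
    flagged = {}
    for image, probs in predictions.items():
        high = [c for c, p in probs.items() if p >= threshold]
        if len(high) > 1:
            flagged[image] = sorted(high)
    return flagged

print(ambiguous(predictions))
# img_001 and img_003 straddle "car" and "cassette player"
```

In the Reeb-graph visualization, each such flagged image would show up as overlapping dots of two colors.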

"What we're doing is taking these complicated sets of information coming out of the network and giving people an 'in' into how the network sees the data at a macroscopic level," Gleich said. "The Reeb map represents the important things, the big groups and how they relate to each other, and that makes it possible to see the errors."

"Topological Structure of Complex Predictions" was produced with the support of the National Science Foundation and the U.S. Department of Energy.

Writer/Media contact: Mary Martialay; mmartial@purdue.edu

Source: David Gleich; dgleich@purdue.edu


Op-ed: Remember moderate opinions exist but are often concealed … – The Huntington News

After spending many hours grinding through my work, there's nothing I find more refreshing than opening up my laptop and scrolling through Reddit in the evening. My 2022 Reddit Recap affirmed that I spent the most time scrolling through the Northeastern subreddit, which makes sense considering how much I use it to navigate my academic journey.

Besides all the sh*t posts and yet another entertaining complaint about finding rats in an apartment, I take advantage of the fact that many Northeastern students go on Reddit to describe their experiences there. Hence, when building my schedule or figuring out which professors best align with how I learn, I read through subreddit posts meticulously.

Even though I use Reddit to, supposedly, relieve my stress about the unknowns of my college career, I have come to a point where I got so caught up in the opinions of other students about various classes that I lost my sense of identity and confidence about the path I could pursue in college. I let my potential be defined by the experiences of people on a social media platform.

Particularly, when I was stressed about which major I wanted to pursue after deciding that a path toward the medical field would be too stressful, I decided to browse through Reddit to learn about people's experiences with other majors at Northeastern. If you're an avid visitor of the Northeastern subreddit, you would know that there are countless posts about people's frustrations with Northeastern's computer science classes. The idea of spending at least 10 hours of work on classes with unsupportive professors, combined with the fact that computer science did not come naturally to me in high school, made me immediately dismiss the major as an option.

I didn't even bother to talk to people in real life about their experiences with the major, let alone discuss the major with an academic advisor from Khoury College of Computer Sciences. I had let the fear I felt from seeing many people complain about Fundamentals of Computer Science 1, or Fundies, on Reddit prevent me from potentially pursuing the field. In high school, there was no subreddit to prevent me from taking four AP classes during my senior year. I had a slight glimmer of confidence to get me through that pursuit.

Looking back at my initial doubts about Fundies, I realize I should've been more open toward my friend from another school who affirmed I would be fine, considering my strong work ethic. I consulted with my friend, a computer science and behavioral neuroscience major, over the summer about what she thought about the Fundies posts on Reddit, and talking to her just reaffirmed the fact that people tend to post online only if they have strong opinions.

It seems that people tend to post on Reddit, and even TRACE, another online resource I spend too much time stressing over, if they have extremely positive or (more likely) extremely negative experiences with a class or professor. Reddit and TRACE are double-edged swords. On one hand, they are accessible resources on which to gather up-to-date information about classes I'm interested in from a large pool of students. On the other hand, the opinions contained in those resources are biased and seem to often come out of spitefulness.

As a data science and psychology combined major, I remember feeling distraught when I learned I had to take Discrete Structures. Discrete is another computer science class that gets complained about on Reddit. I mentally prepared myself to receive a bad grade in the class but found that I was much more successful in it than expected. I definitely do not think that Discrete is an easy class. Still, I found it manageable if you're willing to review the lecture videos as needed, go to office hours as early and as often as needed, attend recitations and generally just work hard and responsibly.

After completing Discrete, I truly started to contemplate what my life would have been like if I had chosen to become a computer science major instead. I adore data science, but sometimes I do wonder what it would have been like if I had been a computer science and music technology combined major instead.

Northeastern's Reddit community does have its empowering moments. About a month ago, a Northeastern student posted alleging the school had commented out code that would have enabled students to automatically donate their leftover meal swipes for a week. I also often see posts from people expressing mental health concerns that receive heartwarming and reassuring comments. And the infinite comedic posts complaining about the quality of Snell's study environment always relieve any tension I feel after a rough workday.

Social media sites such as Reddit can be fun and even uplifting, but it's important to establish boundaries with them and not let them control the trajectory of your life.

Even though Im fortunate enough to be majoring in a field I can see myself enjoying in the future, I worry for other people who have the potential to excel at computer science, or other fields with a reputation for being difficult at Northeastern, but end up letting biased online opinions deteriorate their confidence.

Reddit and TRACE are accessible resources to gauge the difficulty of a class, but just because an overwhelming number of students complain about a class does not mean you won't turn out fine. Northeastern is a school that encourages students to engage in exploratory learning, and I don't wish for people's journeys at this school to be shaped by what other students have posted on the internet. Talk to your friends, advisors and other in-person resources if you're worried about what kind of path you want to pursue at this school. Once you've read a post, try not to dwell on it.

Jethro R. Lee is a third-year data science and psychology combined major. He can be reached at [emailprotected].


Carnegie Mellon Honors Three Faculty With Professorships – Carnegie Mellon University

Alex John London

London is an internationally recognized ethicist from the Department of Philosophy who is frequently called upon to address critical societal problems. He brings deep disciplinary expertise to bear and collaborates with the best technical and scientific minds to make an impact on policy, technology, medicine and science.

London joined CMU in 2000 and in 2016 was named the Clara L. West Professor of Ethics and Philosophy. He is the director of the Center for Ethics and Policy and chief ethicist at the Block Center for Technology and Society. An elected Fellow of the Hastings Center, London's work focuses on ethical and policy issues surrounding the development and deployment of novel technologies in medicine, biotechnology and artificial intelligence, on methodological issues in theoretical and practical ethics, and on cross-national issues of justice and fairness.

In 2022, Oxford University Press published his book, "For the Common Good: Philosophical Foundations of Research Ethics." It has been called "a philosophical tour de force," "a remarkable achievement" and "a vital foundation on which policy progress should, indeed must, be built." He also is co-editor of "Ethical Issues in Modern Medicine," one of the most widely used textbooks in medical ethics.

London is a member of the World Health Organization Expert Group on Ethics and Governance of AI. In addition, he is a member of the U.S. National Academy of Medicine Committee on Creating a Framework for Emerging Science, Technology, and Innovation in Health and Medicine. He also co-leads the ethics core for the National Science Foundation AI Institute for Collaborative Assistance and Responsive Interaction for Networked Groups.

For more than a decade, London has helped to shape key ethical guidelines for the oversight of research with human participants. He is currently a member of the U.S. National Science Advisory Board for Biosecurity, and he has served as an ethics expert in consultations with organizations including the U.S. National Institutes of Health, the World Medical Association and the World Bank.

Hoda Heidari

Heidari is faculty in the Department of Machine Learning and the Software and Societal Systems Department (S3D). She is also affiliated with the HCII, CyLab, the Block Center, Heinz College of Information Systems and Public Policy and the Carnegie Mellon Institute for Strategy and Technology.

Heidari's research broadly concerns the social, ethical and economic implications of artificial intelligence and, in particular, issues of fairness and accountability in the use of machine learning in socially consequential domains. Her work in this area has won a best paper award at the Association for Computing Machinery (ACM) Conference on Fairness, Accountability, and Transparency; an exemplary track award at the ACM Conference on Economics and Computation; and a best paper award at the IEEE Conference on Secure and Trustworthy Machine Learning.

Heidari co-founded and co-leads the university-wide Responsible AI Initiative. She has organized several scholarly events on topics related to responsible and trustworthy AI, including multiple tutorials and workshops at top-tier academic venues specializing in artificial intelligence.

She is particularly interested in translating research contributions into positive impact on AI policy and practice. She has organized multiple campus-wide events and policy convenings, bringing together diverse groups of experts to address such topics as AI governance and accountability and contribute to ongoing efforts in this area at various levels of government.

Brad Myers

Myers is director of the HCII in the School of Computer Science with an affiliated appointment in S3D. He received the ACM SIGCHI Lifetime Achievement Award in Research in 2017 for outstanding fundamental and influential research contributions to the study of human-computer interaction, and in 2022 SCS honored him with its Alan J. Perlis Award for Imagination in Computer Science "for pioneering human-centered methods to democratize programming." He is an IEEE Fellow, ACM Fellow, member of the CHI Academy, and winner of numerous best paper awards and most influential paper awards.

Myers has authored or edited more than 550 publications, and he has been on the editorial board of six journals. He has been a consultant on user interface design and implementation to over 90 companies and regularly teaches courses on user interface design and software. Myers received a Ph.D. in computer science at the University of Toronto, where he developed the Peridot user interface tool. He received master's and bachelor's degrees from the Massachusetts Institute of Technology, and belongs to the ACM, SIGCHI, IEEE and the IEEE Computer Society.


This 3D printer can watch itself fabricate objects – MIT News

With 3D inkjet printing systems, engineers can fabricate hybrid structures that have soft and rigid components, like robotic grippers that are strong enough to grasp heavy objects but soft enough to interact safely with humans.

These multimaterial 3D printing systems utilize thousands of nozzles to deposit tiny droplets of resin, which are smoothed with a scraper or roller and cured with UV light. But the smoothing process could squish or smear resins that cure slowly, limiting the types of materials that can be used.

Researchers from MIT, the MIT spinout Inkbit, and ETH Zurich have developed a new 3D inkjet printing system that works with a much wider range of materials. Their printer utilizes computer vision to automatically scan the 3D printing surface and adjust the amount of resin each nozzle deposits in real time to ensure no areas have too much or too little material.

Since it does not require mechanical parts to smooth the resin, this contactless system works with materials that cure more slowly than the acrylates which are traditionally used in 3D printing. Some slower-curing material chemistries can offer improved performance over acrylates, such as greater elasticity, durability, or longevity.

In addition, the automatic system makes adjustments without stopping or slowing the printing process, making this production-grade printer about 660 times faster than a comparable 3D inkjet printing system.

The researchers used this printer to create complex, robotic devices that combine soft and rigid materials. For example, they made a completely 3D-printed robotic gripper shaped like a human hand and controlled by a set of reinforced, yet flexible, tendons.

"Our key insight here was to develop a machine-vision system and completely active feedback loop. This is almost like endowing a printer with a set of eyes and a brain, where the eyes observe what is being printed, and then the brain of the machine directs it as to what should be printed next," says co-corresponding author Wojciech Matusik, a professor of electrical engineering and computer science at MIT who leads the Computational Design and Fabrication Group within the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL).

He is joined on the paper by lead author Thomas Buchner, a doctoral student at ETH Zurich; co-corresponding author Robert Katzschmann PhD '18, an assistant professor of robotics who leads the Soft Robotics Laboratory at ETH Zurich; and others at ETH Zurich and Inkbit. The research appears today in Nature.

Contact free

This paper builds off a low-cost, multimaterial 3D printer known as MultiFab that the researchers introduced in 2015. By utilizing thousands of nozzles to deposit tiny droplets of resin that are UV-cured, MultiFab enabled high-resolution 3D printing with up to 10 materials at once.

With this new project, the researchers sought a contactless process that would expand the range of materials they could use to fabricate more complex devices.

They developed a technique, known as vision-controlled jetting, which utilizes four high-frame-rate cameras and two lasers that rapidly and continuously scan the print surface. The cameras capture images as thousands of nozzles deposit tiny droplets of resin.

The computer vision system converts the image into a high-resolution depth map, a computation that takes less than a second to perform. It compares the depth map to the CAD (computer-aided design) model of the part being fabricated, and adjusts the amount of resin being deposited to keep the object on target with the final structure.

The automated system can make adjustments to any individual nozzle. Since the printer has 16,000 nozzles, the system can control fine details of the device being fabricated.
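The loop described above (scan the surface, convert the scan to a depth map, compare it with the CAD model, and correct each nozzle's next deposit) can be sketched as a simple proportional controller. This is an illustrative sketch only, not the actual vision-controlled jetting implementation; the function name, gain value, and droplet units are assumptions.

```python
import numpy as np

def adjust_deposition(depth_map, target_depth, base_droplet, gain=0.5):
    """Per-nozzle closed-loop correction: where the scanned surface sits
    below the CAD target, deposit more resin; where it sits above, less.
    depth_map and target_depth are 2D arrays, one cell per nozzle position."""
    error = target_depth - depth_map        # positive = too little material
    droplets = base_droplet + gain * error  # proportional correction
    return np.clip(droplets, 0.0, None)     # a nozzle cannot remove resin

# Toy layer: one cell under-filled (0.8 mm) and one over-filled (1.2 mm)
scan = np.array([[1.0, 1.0],
                 [0.8, 1.2]])               # measured heights (mm)
target = np.full((2, 2), 1.0)               # CAD layer height (mm)
print(adjust_deposition(scan, target, base_droplet=0.1))
```

The real system performs this comparison for all 16,000 nozzles in under a second per scan; the sketch only conveys the shape of the computation.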

"Geometrically, it can print almost anything you want made of multiple materials. There are almost no limitations in terms of what you can send to the printer, and what you get is truly functional and long-lasting," says Katzschmann.

The level of control afforded by the system enables it to print very precisely with wax, which is used as a support material to create cavities or intricate networks of channels inside an object. The wax is printed below the structure as the device is fabricated. After it is complete, the object is heated so the wax melts and drains out, leaving open channels throughout the object.

Because it can automatically and rapidly adjust the amount of material being deposited by each of the nozzles in real time, the system doesn't need to drag a mechanical part across the print surface to keep it level. This enables the printer to use materials that cure more gradually and would be smeared by a scraper.

Superior materials

The researchers used the system to print with thiol-based materials, which are slower-curing than the traditional acrylic materials used in 3D printing. However, thiol-based materials are more elastic and don't break as easily as acrylates. They also tend to be more stable over a wider range of temperatures and don't degrade as quickly when exposed to sunlight.

"These are very important properties when you want to fabricate robots or systems that need to interact with a real-world environment," says Katzschmann.

The researchers used thiol-based materials and wax to fabricate several complex devices that would otherwise be nearly impossible to make with existing 3D printing systems. For one, they produced a functional, tendon-driven robotic hand that has 19 independently actuatable tendons, soft fingers with sensor pads, and rigid, load-bearing bones.

"We also produced a six-legged walking robot that can sense objects and grasp them, which was possible due to the system's ability to create airtight interfaces of soft and rigid materials, as well as complex channels inside the structure," says Buchner.

The team also showcased the technology through a heart-like pump with integrated ventricles and artificial heart valves, as well as metamaterials that can be programmed to have non-linear material properties.

"This is just the start. There is an amazing number of new types of materials you can add to this technology. This allows us to bring in whole new material families that couldn't be used in 3D printing before," Matusik says.

The researchers are now looking at using the system to print with hydrogels, which are used in tissue-engineering applications, as well as silicone materials, epoxies, and special types of durable polymers.

They also want to explore new application areas, such as printing customizable medical devices, semiconductor polishing pads, and even more complex robots.

This research was funded, in part, by Credit Suisse, the Swiss National Science Foundation, the U.S. Defense Advanced Research Projects Agency, and the U.S. National Science Foundation.

Original post:

This 3D printer can watch itself fabricate objects - MIT News

Building next generation autonomous robots to serve humanity – CU Boulder’s College of Engineering & Applied Science

Featured on CBS Sunday Morning

Sean Humbert discusses the team's award-winning research developing autonomous robots that can navigate challenging conditions. The team demonstrated the robots for CBS during a recent visit to the Edgar Mine in Idaho Springs, CO.

Watch on CBS News

Since completion of the Subterranean Challenge, faculty and students have been conducting follow-on research and competitions with multiple corporate and government partners.

Research further advancing the capabilities of the Subterranean Challenge Robots is being led by numerous CU Boulder laboratories.

One thousand feet underground, a four-legged creature scavenges through tunnels in pitch darkness. With vision that cuts through the blackness, it explores a spider web of paths, remembering its every step and navigating with precision. The sound of its movements echoes eerily off the walls, but it is not to be feared: this is no wild animal; it is an autonomous rescue robot.

Initially designed to find survivors in collapsed mines, caves, and damaged buildings, that is only part of what it can do.

Created by a team of University of Colorado Boulder researchers and students, the robots placed third as the top U.S. entry and earned $500,000 in prize money at the Defense Advanced Research Projects Agency (DARPA) Subterranean Challenge competition in 2021.

Two years later, they are pushing the technology even further, earning new research grants to expand the technology and create new applications in the rapidly growing world of autonomous systems.

"Ideally you don't want to put humans in harm's way in disaster situations like mines or buildings after earthquakes; the walls or ceilings could collapse, and maybe some already have," said Sean Humbert, a professor of mechanical engineering and director of the Robotics Program at CU Boulder. "These robots can be disposable while still providing situational awareness."

The team developed an advanced system of sensors and algorithms to allow the robots to function on their own: once given an assignment, they make decisions autonomously on how best to complete it.

A major goal is to get them from engineers directly into the hands of first responders. Success requires simplifying the way the robots transmit data into something approximating plain English, according to Kyle Harlow, a computer science PhD student.

"The robots communicate in pure math. We do a lot of work on top of that to interpret the data right now, but a firefighter doesn't have that kind of time," Harlow said.

To make that happen, Humbert is collaborating with Chris Heckman, an associate professor of computer science, to change both how the robots communicate and how they represent the world. The robots' eyes, a LiDAR sensor, create highly detailed 3D maps of an environment, 15 cm at a time. That's a problem when they try to relay information: the sheer amount of data clogs up the network.

"Humans don't interpret the environment in 15 cm blocks," Humbert said. "We're now working on what's called semantic mapping, which is a way to combine contextual and spatial information. This is closer to how the human brain represents the world and is much less memory intensive."
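A toy comparison shows why a few labeled objects are so much lighter to transmit than a 15 cm occupancy grid. Everything here (the dimensions, labels, and map format) is hypothetical, chosen only to illustrate the memory argument, not the team's actual representation:

```python
from math import ceil

def voxel_count(dims_m, cell_m=0.15):
    """Number of 15 cm occupancy cells needed to cover a box of the given size."""
    n = 1
    for d in dims_m:
        n *= ceil(d / cell_m)
    return n

# A 30 m x 30 m x 3 m mine gallery as a raw voxel grid...
voxels = voxel_count((30.0, 30.0, 3.0))

# ...versus a semantic map: a handful of labeled objects with positions and extents.
semantic_map = [
    ("tunnel",   (0, 0, 0),   (30, 3, 3)),
    ("junction", (15, 15, 0), (5, 5, 3)),
    ("survivor", (22, 7, 0),  (1, 1, 2)),
]

print(voxels, len(semantic_map))  # 800000 cells vs 3 entries
```

Even before compression, the grid needs hundreds of thousands of cells where the semantic summary needs a handful of entries, which is the bandwidth gap the team is trying to close.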

The team is also integrating new sensors to make the robots more effective in challenging environments. The robots excel in clear conditions but struggle with visual obstacles like dust, fog, and snow. Harlow is leading an effort to incorporate millimeter wave radar to change that.

"We have all these sensors that work well in the lab and in clean environments, but we need to be able to go out in places such as Colorado, where it snows sometimes," Harlow said.

Where some researchers are forced to suspend work when a grant ends, members of the subterranean robotics team keep finding new partners to push the technology further.

Eric Frew, a professor of aerospace at CU Boulder, is using the technology in a new National Institute of Standards and Technology competition to develop aerial robots (drones, instead of ground robots) that autonomously map disaster areas indoors and outside.

"Our entry is based directly on the Subterranean Challenge experience and the systems developed there," Frew said.

Some teams in the competition will be relying on drones navigated by human operators, but Frew said CU Boulder's project is aiming for an autonomous solution that allows humans to focus on more critical tasks.

Although numerous universities and private businesses are advancing autonomous robotic systems, Humbert said other organizations often focus on individual aspects of the technology. The students and faculty at CU Boulder are working on all avenues of the systems and for uses in environments that present extreme challenges.

"We've built world-class platforms that incorporate mapping, localization, planning, coordination: all the high-level stuff, the autonomy. That's all us," Humbert said. "There are only a handful of teams across the world that can do that. It's a huge advantage that CU Boulder has."

Originally posted here:

Building next generation autonomous robots to serve humanity - CU Boulder's College of Engineering & Applied Science

New Tool for Building and Fixing Roads and Bridges: Artificial … – The New York Times

In Pennsylvania, where 13 percent of the bridges have been classified as structurally deficient, engineers are using artificial intelligence to create lighter concrete blocks for new construction. Another project is using A.I. to develop a highway wall that can absorb noise from cars and some of the greenhouse gas emissions that traffic releases as well.

At a time when the federal allocation of billions of dollars toward infrastructure projects would help with only a fraction of the cost needed to repair or replace the nation's aging bridges, tunnels, buildings and roads, some engineers are looking to A.I. to help build more resilient projects for less money.

"These are structures, with the tools that we have, that save materials, save costs, save everything," said Amir Alavi, an engineering professor at the University of Pittsburgh and a member of the consortium developing the two A.I. projects in conjunction with the Pennsylvania Department of Transportation and the Pennsylvania Turnpike Commission.

The potential is enormous. The manufacturing of cement alone makes up at least 8 percent of the world's carbon emissions, and 30 billion tons of concrete are used worldwide each year, so more efficient production of concrete would have immense environmental implications.

And A.I. (essentially, machines that can synthesize information and find patterns and conclusions much as the human mind can) could speed up and improve tasks like engineering challenges to an incalculable degree. It works by analyzing vast amounts of data and offering options that give humans better information, models and alternatives for making decisions.

It has the potential to be both more cost-effective (one machine doing the work of dozens of engineers) and more creative in coming up with new approaches to familiar tasks.

But experts caution against embracing the technology too quickly when it is largely unregulated and its payoffs remain largely unproven. In particular, some worry about A.I.'s ability to design infrastructure in a process with several regulators and participants operating over a long period of time. Others worry that A.I.'s ability to draw instantly from the entirety of the internet could lead to flawed data that produces unreliable results.

American infrastructure challenges have become all the more apparent in recent years: Texas' power grid failed during devastating ice storms in 2021 and continues to grapple with the state's needs; communities across the country, from Flint, Mich., to Jackson, Miss., have struggled with failing water supplies; and more than 42,000 bridges are in poor condition nationwide.

"A vast majority of the country's roadways and bridges were built several decades ago, and as a result infrastructure challenges are significant in many dimensions," said Abdollah Shafieezadeh, a professor of civil, environmental and geodetic engineering at Ohio State University.

The collaborations in Pennsylvania reflect A.I.s potential to address some of these issues.

In the bridge project, engineers are using A.I. technology to develop new shapes for concrete blocks that use 20 percent less material while maintaining durability. The Pennsylvania Department of Transportation will use the blocks to construct a bridge; there are more than 12,000 in the state that need repair, according to the American Road & Transportation Builders Association.

Engineers in Pittsburgh are also working with the Pennsylvania Turnpike Commission to design a more efficient noise-absorbing wall that will also capture some of the nitrous oxide emitted from vehicles. They are planning to build it in an area that is disproportionately affected by highway sound pollution. The designs will save about 30 percent of material costs.

"These new projects have not been tested in the field, but they have been successful in the lab environment," Dr. Alavi said.

In addition to A.I.'s speed at developing new designs, one of its largest draws in civil engineering is its potential to prevent and detect damage.

Instead of investing large sums of money in repair projects, engineers and transportation agencies could identify problems early on, experts say, such as a crack forming in a bridge before the structure itself buckled.

"This technology is capable of providing an analysis of what is happening in real time in incidents like the bridge collapse on Interstate 95 in Philadelphia this summer or the fire that shut down a portion of Interstate 10 in Los Angeles this month, and could be developed to deploy automated emergency responses," said Seyede Fatemeh Ghoreishi, an engineering and computer science professor at Northeastern University.

But, as in many fields, there are increasingly more conversations and concerns about the relationship between A.I., human work and physical safety.

Although A.I. has proved helpful in many uses, tech leaders have testified before Congress, pushing for regulations. And last month, President Biden issued an executive order for a range of A.I. standards, including safety, privacy and support for workers.

Experts are also worried about the spread of disinformation from A.I. systems. A.I. operates by integrating already available data, so if that data is incorrect or biased, the A.I. will generate faulty conclusions.

"It really is a great tool, but it really is a tool you should use just for a first draft at this point," said Norma Jean Mattei, a former president of the American Society of Civil Engineers.

Dr. Mattei, who has worked in education and ethics for engineering throughout her career, added: "Once it develops, I'm confident that we'll get to a point where you're less likely to get issues. We're not there yet."

Also worrisome is a lack of standards for A.I. The Occupational Safety and Health Administration, for example, does not have standards for the robotics industry. There is rising concern about car crashes involving autonomous vehicles, but for now, automakers do not have to abide by any federal software safety testing regulations.

Lola Ben-Alon, an assistant professor of architecture technology at Columbia University, also takes a cautionary approach when using A.I. She stressed the need to take the time to understand how it should be employed, but she said that she was "not condemning it" and that it had many great potentials.

Few doubt that in infrastructure projects and elsewhere, A.I. exists as a tool to be used by humans, not as a substitute for them.

"There's still a strong and important place for human existence and experience in the field of engineering," Dr. Ben-Alon said.

The uncertainty around A.I. could cause more difficulties for funding projects like those in Pittsburgh. But a spokesman for the Pennsylvania Department of Transportation said the agency was excited to see how the concrete that Dr. Alavi and his team are designing could expand the field of bridge construction.

Dr. Alavi said his work throughout his career had shown him just how serious the potential risks from A.I. are.

But he is confident about the safety of the designs he and his team are making, and he is excited for the technologys future.

"After 10, 12 years, this is going to change our lives," Dr. Alavi said.

Originally posted here:

New Tool for Building and Fixing Roads and Bridges: Artificial ... - The New York Times

Doctoral Candidate One of Only 30 Young Researchers Invited to … – Yeshiva University

Dear Students, Faculty, Staff and Friends,

I am pleased to present to you this Guide to our plans for the upcoming fall semester and reopening of our campuses. In form and in content, this coming semester will be like no other. We will live differently, work differently and learn differently. But in its very difference rests its enormous power.

The mission of Yeshiva University is to enrich the moral, intellectual and spiritual development of each of our students, empowering them with the knowledge and abilities to become people of impact and leaders of tomorrow. Next year's studies will be especially instrumental in shaping the course of our students' lives. Character is formed and developed in times of deep adversity. This is the kind of teachable moment that Yeshiva University was made for. As such, we have developed an educational plan for next year that features a high-quality student experience and prioritizes personal growth during this Coronavirus era. Our students will be able to work through the difficulties, issues and opportunities posed by our COVID-19 era with our stellar rabbis and faculty, as well as their close friends and peers at Yeshiva.

To develop our plans for the fall, we have convened a Scenario Planning Task Force made up of representatives across the major areas of our campus. Their planning has been guided by the latest medical information, government directives, direct input from our rabbis, faculty and students, and best practices from industry and university leaders across the country. I am deeply thankful to our task force members and all who supported them for their tireless work in addressing the myriad details involved in bringing students back to campus and restarting our educational enterprise.

In concert with the recommendations from our task force, I am announcing today that our fall semester will reflect a hybrid model. It will allow many students to return in a careful way by incorporating online and virtual learning with on-campus classroom instruction. It also enables students who prefer to not be on campus to have a rich student experience by continuing their studies online and benefitting from a full range of online student services and extracurricular programs.

In bringing our students back to campus, safety is our first priority. Many aspects of campus life will change for this coming semester. Gatherings will be limited, and larger courses will move completely online. Throughout campus, everyone will need to adhere to our medical guidelines, including social distancing, wearing facemasks, and our testing and contact tracing policies. Due to our focus on minimizing risk, our undergraduate students will begin the first few weeks of the fall semester online and move onto the campus after the Jewish holidays. This schedule will limit the amount of back-and-forth travel for our students by concentrating the on-campus component of the fall semester into one consecutive segment.

Throughout our planning, we have used the analogy of a dimmer switch. Reopening our campuses will not be a simple binary, like an on/off light switch, but more like a dimmer in which we have the flexibility to scale backwards and forwards to properly respond as the health situation evolves. It is very possible that some plans could change, depending upon the progression of the virus and/or applicable state and local government guidance.

Before our semester begins, we will provide more updates reflecting our most current guidance. Please check our website, yu.edu/fall2020, for regular updates. We understand that even after reading through this guide, you might have many additional questions, so we will be posting an extensive FAQ section online as well. Additionally, we will be holding community calls for faculty, students, staff and parents over the next couple of months.

Planning for the future during this moment has certainly been humbling. This Coronavirus has reminded us time and time again of the lesson from our Jewish tradition that we are not in full control of our circumstances. But our tradition also teaches us that we are in control of our response to our circumstances. Next semester will present significant challenges and changes. There will be some compromises and minor inconveniences; not every issue has a perfect solution. But faith and fortitude, mutual cooperation and resilience are essential life lessons that are accentuated during this period. And if we all commit to respond with graciousness, kindness and love, we can transform new campus realities into profound life lessons for our future.

Deeply rooted in our Jewish values and forward-focused in preparing for the careers and competencies of the future, we journey together with you, our Yeshiva University community, through these uncharted waters. Next year will be a formative year in the lives of our students, and together we will rise to the moment so that our students will emerge stronger and better prepared to be leaders of the world of tomorrow.

Best Wishes,

Ari Berman

Read more:

Doctoral Candidate One of Only 30 Young Researchers Invited to ... - Yeshiva University