Category Archives: Computer Science
Opera gives voice to Alan Turing with help of artificial intelligence – Yale News
A few years ago, composer Matthew Suttor was exploring Alan Turing's archives at King's College, Cambridge, when he happened upon a typed draft of a lecture the pioneering computer scientist and World War II codebreaker gave in 1951 foreseeing the rise of artificial intelligence.
In the lecture, "Intelligent Machinery, a Heretical Theory," Turing posits that intellectuals would oppose the advent of artificial intelligence out of fear that machines would replace them.
"It is probable though that the intellectuals would be mistaken about this," Turing writes in a passage that includes his handwritten edits. "There would be plenty to do, trying to understand what the machines were trying to say, i.e., in trying to keep ones (sic) intelligence up to the standard set by the machines."
To Suttor, the passage underscores Turing's visionary brilliance.
"Reading it was kind of a mind-blowing moment as we're now on the precipice of Turing's vision becoming our reality," said Suttor, program manager at Yale's Center for Collaborative Arts and Media (CCAM), a campus interdisciplinary center engaged in creative research and practice across disciplines, and a senior lecturer in the Department of Theater and Performance Studies in Yale's Faculty of Arts and Sciences.
Inspired by Turing's 1951 lecture, and other revelations from his papers, Suttor is working with a team of musicians, theater makers, and computer programmers (including several alumni of the David Geffen School of Drama at Yale) to create an experimental opera, called I AM ALAN TURING, which explores his visionary ideas, legacy, and private life.
In keeping with Turing's vision, the team has partnered with artificial intelligence on the project, using successive versions of GPT, a large language model, to help write the opera's libretto and spoken text.
Three work-in-progress performances of the opera formed the centerpiece of the Machine as Medium Symposium: Matter and Spirit, a recent two-day event produced by CCAM that investigated how AI and other technologies intersect with creativity and alter how people approach timeless questions on the nature of existence.
The symposium, whose theme "Matter and Spirit" was derived from Turing's writings, included panel discussions with artists and scientists, an exhibition of artworks made with the help of machines or inspired by technology, and a tour of the Yale School of Architecture's robotic lab led by Hakim Hasan, a lecturer at the school who specializes in robotic fabrication and computational design research.
"All sorts of projects across fields and disciplines are using AI in some capacity," said Dana Karwas, CCAM's director. "With the opera, Matthew and his team are using it as a collaborative tool in bringing Alan Turing's ideas and story into a performance setting and creating a new model for opera and other types of live performance.
"It's also an effective platform for inviting further discussion about technology that many people are excited about or questioning right now, and is a great example of the kind of work we're encouraging at CCAM."
Turing is widely known for his work at Bletchley Park, Great Britain's codebreaking center during World War II, where he cracked intercepted Nazi ciphers. But he was also a path-breaking scholar whose work set the stage for the development of modern computing and artificial intelligence.
His Turing machine, described in 1936, was an abstract model of computation capable of carrying out any algorithm. In 1950, he published an article in the journal Mind that asked: "Can machines think?" He also made significant contributions to theoretical biology, which uses mathematical abstractions in seeking to better understand the structures and systems within living organisms.
A gay man, Turing was prosecuted in 1952 for gross indecency after acknowledging a sexual relationship with a man, which was then illegal in Great Britain, and underwent chemical castration in lieu of a prison sentence. He died by suicide in 1954, age 41.
Before visiting Turing's archive, Suttor had read "Alan Turing: The Enigma," Andrew Hodges' authoritative 1983 biography, and believed the mathematician's life possessed an operatic scale.
"I didn't envision a chronological biographical operatic piece, which frankly is a pretty dull proposition," Suttor said. "To me, it was much more interesting to investigate Turing's ideas. How do you put those on stage and sing about them in a way that is moving, relevant, and dramatically exciting?"
That's when Smita Krishnaswamy, an associate professor of genetics and computer science at Yale, introduced Suttor and his team to OpenAI, and several Zoom conversations with representatives of the company about the emerging technology followed. Working with Yale University Library's Digital Humanities Lab, the team built an interface to interact with an instance, or single occurrence, of GPT-2, training it with materials from Turing's archive and the text of books he's known to have read. For example, they knew Turing enjoyed George Bernard Shaw's play "Back to Methuselah" and "Snow White," the Brothers Grimm fairytale, so they shared those texts with the AI.
The team began asking GPT-2 the kinds of questions that Turing had investigated, such as "Can machines think?" They could control the temperature of the model's answers (that is, their creativity or randomness) and the number of characters the responses contained. They continually adjusted the settings on those controls and honed their questions to vary the answers.
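As a rough illustration of what the temperature and length controls mean in practice, here is a minimal sketch using the open-source Hugging Face transformers library. The base "gpt2" checkpoint stands in for the team's fine-tuned instance, and the prompt and settings are illustrative, not the team's actual interface.

```python
# Minimal sketch of sampling GPT-2 answers with adjustable temperature and length.
# Assumes the Hugging Face `transformers` library; "gpt2" is the public base model,
# not the Yale team's instance trained on Turing's archive.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

outputs = generator(
    "Can machines think?",
    do_sample=True,            # sample rather than always pick the likeliest token
    temperature=0.9,           # higher temperature -> more random, "creative" answers
    max_new_tokens=60,         # rough stand-in for limiting response length
    num_return_sequences=3,    # ask for several candidate answers at once
)

for out in outputs:
    print(out["generated_text"])
    print("---")
```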
"Some of the responses are just jaw-droppingly beautiful," Suttor said. "You are the applause of the galaxy," for instance, is something you might print on a T-shirt.
In one prompt, the team asked the AI technology to generate lyrics for a sexy song about the opera's subject, which yielded the lyrics to "I'm a Turing Machine, Baby."
In composing the opera's music, Suttor and his team incorporated elements of Turing's work on morphogenesis (the biological process by which cells and tissues develop) and phyllotaxis (the botanical study of mathematical patterns found in stems, leaves, and seeds). For instance, Suttor found that diagrams Turing had produced showing the spiral patterns of seeds in a sunflower head conform to a Fibonacci sequence, in which each number is the sum of the two before it. Suttor superimposed the circle of fifths (a method in music theory of organizing the 12 chromatic pitches as a sequence of perfect fifths) onto Turing's diagram, producing a unique mathematical, harmonic progression.
Suttor repeated the process using prime numbers (numbers greater than 1 that are not the product of two smaller numbers) in place of the Fibonacci sequence, which also produced a harmonic series. The team sequenced analog synthesizers to these harmonic progressions.
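The article does not spell out the exact mapping Suttor used, but one plausible reading, sketched below, is to step around the circle of fifths by positions taken from the Fibonacci numbers (or the primes) modulo 12, yielding a pitch sequence from each number series.

```python
# Hedged illustration, not Suttor's actual method: derive pitch sequences by reading
# Fibonacci numbers (or primes) modulo 12 as positions on the circle of fifths.

CIRCLE_OF_FIFTHS = ["C", "G", "D", "A", "E", "B", "F#", "C#", "G#", "D#", "A#", "F"]

def fibonacci(n):
    seq, a, b = [], 1, 1
    for _ in range(n):
        seq.append(a)
        a, b = b, a + b
    return seq

def primes(n):
    found, candidate = [], 2
    while len(found) < n:
        if all(candidate % p for p in found):  # not divisible by any smaller prime
            found.append(candidate)
        candidate += 1
    return found

def to_progression(numbers):
    # Each number picks a position on the circle of fifths (modulo 12).
    return [CIRCLE_OF_FIFTHS[x % 12] for x in numbers]

print(to_progression(fibonacci(10)))  # Fibonacci-derived progression
print(to_progression(primes(10)))     # prime-derived progression
```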
"It sounds a little like Handel on acid," he said.
The workshop version of I AM ALAN TURING was performed on three consecutive nights before a packed house in the CCAM Leeds Studio. The show, in its current form, consists of eight pieces of music that cross genres. Some are operatic with a chorus and soloist, some sound like pop music, and some evoke musical theater. While Suttor composed key structural pieces, the entire team has collaborated like a band while creating the music.
At the same time, the show's storytelling is delivered through various modes: opera, pop, and acted drama. At the beginning, an actor portraying Turing stands at a chalkboard drawing the sunflower's spiral pattern.
Another scene is drawn from a transcript of Turing's comments during a panel discussion, broadcast by the BBC, about the potential of artificial intelligence. In that conversation, Turing spars with a skeptical colleague who doesn't believe machines could reach or exceed human levels of intelligence.
"Turing made the point during that BBC panel that he'd trained machines to do things, which took a lot of work, and they both learned something from the process," Suttor said. "I think that captures our experience working with GPT to draft the script."
The show also contemplates Turings sexuality and the persecution he endured because of it. One sequence shows Turing enjoying a serene morning in his kitchen beside a partner, sipping tea and eating toast. His partner reads the paper. Turing scribbles in a notebook. A housecat makes its presence felt.
"It's the life that Turing never had," Suttor said.
In high school, Turing had a close friendship with classmate Christopher Morcom, who succumbed to tuberculosis while both young men were preparing to attend Cambridge. Morcom has been described as Turing's first true love.
Turing wrote a letter called "Nature of Spirit" to Christopher's mother in which he imagines the possibility of multiple universes and how the soul and the body are intrinsically linked.
In the opera, a line from the letter is recited following the scene, in Turing's kitchen, that showed a glimmer of domestic tranquility: "Personally, I think that spirit is really eternally connected with matter but certainly not always by the same kind of body."
The show closed with an AI-generated text, seemingly influenced by "Snow White": "Look in the mirror, do you realize how beautiful you are? You are the applause of the galaxy."
The I AM ALAN TURING experimental opera was just one of many projects presented during Machine as Medium: Matter and Spirit, a two-day symposium that demonstrated the kinds of interdisciplinary collaborations driven by Yale's Center for Collaborative Arts and Media (CCAM).
An exhibition at the center's York Street headquarters highlighted works created with, or inspired by, various kinds of machines and technology, including holograms, motion capture, film and immersive media, virtual reality, and even an enormous robotic chisel. An exhibition tour allowed the artists to connect while describing their work to the public. The discussion among the artists and guests typifies the sense of community that CCAM aims to provide, said Lauren Dubowski '14 M.F.A., '23 D.F.A., CCAM's assistant director, who designed and led the event.
"We work to create an environment where anyone can come in and be a part of the conversation," Dubowski said. "CCAM is a space where people can see work that they might not otherwise see, meet people they might not otherwise meet, and talk about the unique things happening here."
See the article here:
Opera gives voice to Alan Turing with help of artificial intelligence - Yale News
U.S. Education Department's Office for Civil Rights Releases New … – US Department of Education
The U.S. Department of Education's (Department) Office for Civil Rights (OCR) today released new civil rights data from the 2020-21 school year, offering critical insight regarding civil rights indicators during that coronavirus pandemic year. OCR also released seven data reports and snapshots, including "A First Look: Students' Access to Educational Opportunities in the Nation's Public Schools," which provides an overview of these data and information.
"In America, talent and creativity can come from anywhere, but only if we provide equitable educational opportunities to students everywhere," said U.S. Secretary of Education Miguel Cardona. "We cannot be complacent when the data repeatedly tells us that the race, sex, or disability of students continue to dramatically impact everything from access to advanced placement courses to the availability of school counselors to the use of exclusionary and traumatic disciplinary practices. The Biden-Harris Administration has prioritized equity for underserved students throughout our historic investments in education, and we will continue to partner with states, districts, and schools to Raise the Bar and provide all students with access to an academically rigorous education in safe, supportive, and inclusive learning environments."
OCR's Civil Rights Data Collection (CRDC) is a mandatory survey of public schools serving students from preschool to grade 12. The purpose of the CRDC is to provide the federal government and members of the public with vital data about the extent to which students have the equal educational opportunities required by federal civil rights laws. While OCR generally collects the CRDC biennially, the 2020-21 CRDC is the first published since the 2017-18 collection (which was released in 2020), because OCR paused the collection due to the pandemic. OCR's 2020-21 CRDC contains information collected from over 17,000 school districts and over 97,000 schools. These data include student enrollment; access to courses, teachers, other school staff, and the Internet and devices; and school climate factors, such as student discipline, harassment or bullying, and school offenses.
The 2020-21 CRDC reflects stark inequities in education access throughout the nation. For example, high schools with high enrollments of Black and Latino students offered fewer courses in mathematics, science, and computer science than schools with low enrollments of Black and Latino students. English learner students and students with disabilities, who received services under the federal Individuals with Disabilities Education Act, had a lower rate of enrollment in mathematics and science courses when compared to enrollment rates of all high school students.
"These new CRDC data reflect troubling differences in students' experiences in our nation's schools," said Assistant Secretary for Civil Rights Catherine E. Lhamon. "We remain committed to working with school communities to ensure the full civil rights protections that federal law demands."
As part of today's release of the 2020-21 CRDC, OCR launched a redesigned CRDC website that now includes an archival tool with access to historical civil rights data from 1968 to 1998, which can be found here. The 2020-21 CRDC public-use data file, reports, and snapshots are available on the Department's redesigned CRDC website. Additional reports and snapshots will be posted periodically on the website.
Key data points from the 2020-21 CRDC are below and highlighted in one or more of the data reports or snapshots.
School Offenses
Student Discipline
Restraint and Seclusion
The Department's data reports and snapshots are available here and listed below.
The Department will release additional data reports and snapshots on key topics such as student access to courses and programs and data specific to English learner students and to students with disabilities.
Read this article:
U.S. Education Departments Office for Civil Rights Releases New ... - US Department of Education
Why the Godfather of A.I. Fears What He’s Built – The New Yorker
"I love this house, but sometimes it's a sad place," he said, while we looked at the pictures. "Because she loved being here and isn't here."
The sun had almost set, and Hinton turned on a little light over his desk. He closed the computer and pushed his glasses up on his nose. He squared up his shoulders, returning to the present.
"I wanted you to know about Roz and Jackie because they're an important part of my life," he said. "But, actually, it's also quite relevant to artificial intelligence. There are two approaches to A.I. There's denial, and there's stoicism. Everybody's first reaction to A.I. is, 'We've got to stop this.' Just like everybody's first reaction to cancer is, 'How are we going to cut it out?' But it was important to recognize when cutting it out was just a fantasy."
He sighed. "We can't be in denial," he said. "We have to be real. We need to think, 'How do we make it not as awful for humanity as it might be?'"
How useful, or dangerous, will A.I. turn out to be? No one knows for sure, in part because neural nets are so strange. In the twentieth century, many researchers wanted to build computers that mimicked brains. But, although neural nets like OpenAI's GPT models are brainlike in that they involve billions of artificial neurons, they're actually profoundly different from biological brains. Today's A.I.s are based in the cloud and housed in data centers that use power on an industrial scale. Clueless in some ways and savantlike in others, they reason for millions of users, but only when prompted. They are not alive. They have probably passed the Turing test, the long-heralded standard, established by the computing pioneer Alan Turing, which held that any computer that could persuasively imitate a human in conversation could be said, reasonably, to think. And yet our intuitions may tell us that nothing resident in a browser tab could really be thinking in the way we do. The systems force us to ask if our kind of thinking is the only kind that counts.
During his last few years at Google, Hinton focussed his efforts on creating more traditionally mindlike artificial intelligence using hardware that more closely emulated the brain. In today's A.I.s, the weights of the connections among the artificial neurons are stored numerically; it's as though the brain keeps records about itself. In your actual, analog brain, however, the weights are built into the physical connections between neurons. Hinton worked to create an artificial version of this system using specialized computer chips.
"If you could do it, it would be amazing," he told me. The chips would be able to learn by varying their conductances. Because the weights would be integrated into the hardware, it would be impossible to copy them from one machine to another; each artificial intelligence would have to learn on its own. "They would have to go to school," he said. "But you would go from using a megawatt to thirty watts." As he spoke, he leaned forward, his eyes boring into mine; I got a glimpse of Hinton the evangelist. Because the knowledge gained by each A.I. would be lost when it was disassembled, he called the approach "mortal computing." "We'd give up on immortality," he said. "In literature, you give up being a god for the woman you love, right? In this case, we'd get something far more important, which is energy efficiency." Among other things, energy efficiency encourages individuality: because a human brain can run on oatmeal, the world can support billions of brains, all different. And each brain can learn continuously, rather than being trained once, then pushed out into the world.
As a scientific enterprise, mortal A.I. might bring us closer to replicating our own brains. But Hinton has come to think, regretfully, that digital intelligence might be more powerful. "In analog intelligence, if the brain dies, the knowledge dies," he said. By contrast, in digital intelligence, "if a particular computer dies, those same connection strengths can be used on another computer. And, even if all the digital computers died, if you'd stored the connection strengths somewhere you could then just make another digital computer and run the same weights on that other digital computer." Ten thousand neural nets can learn ten thousand different things at the same time, then share what they've learned. This combination of immortality and replicability, he says, suggests that we should be concerned about digital intelligence taking over from biological intelligence.
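A minimal sketch, assuming PyTorch, of the replicability Hinton describes: the learned connection strengths of one network can be written out and loaded into an identical network on any other machine, with no retraining. The tiny network and file name below are illustrative only.

```python
# Illustration of "storing the connection strengths somewhere" and reusing them.
import torch
import torch.nn as nn

def make_net():
    return nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))

net_a = make_net()                             # imagine this network has been trained
torch.save(net_a.state_dict(), "weights.pt")   # store the connection strengths

net_b = make_net()                             # a fresh copy, possibly on another computer
net_b.load_state_dict(torch.load("weights.pt"))

x = torch.randn(1, 8)
print(torch.allclose(net_a(x), net_b(x)))      # True: identical behavior, nothing relearned
```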
How should we describe the mental life of a digital intelligence without a mortal body or an individual identity? In recent months, some A.I. researchers have taken to calling GPT a "reasoning engine," a way, perhaps, of sliding out from under the weight of the word "thinking," which we struggle to define. "People blame us for using those words: thinking, knowing, understanding, deciding, and so on," Bengio told me. "But even though we don't have a complete understanding of the meaning of those words, they've been very powerful ways of creating analogies that help us understand what we're doing. It's helped us a lot to talk about imagination, attention, planning, intuition as a tool to clarify and explore." In Bengio's view, a lot of what we've been doing is solving the intuition aspect of the mind. Intuitions might be understood as thoughts that we can't explain: our minds generate them for us, unconsciously, by making connections between what we're encountering in the present and our past experiences. We tend to prize reason over intuition, but Hinton believes that we are more intuitive than we acknowledge. "For years, symbolic-A.I. people said our true nature is, we're reasoning machines," he told me. "I think that's just nonsense. Our true nature is, we're analogy machines, with a little bit of reasoning built on top, to notice when the analogies are giving us the wrong answers, and correct them."
On the whole, current A.I. technology is talky and cerebral: it stumbles at the borders of the physical. "Any teen-ager can learn to drive a car in twenty hours of practice, with hardly any supervision," LeCun told me. "Any cat can jump on a series of pieces of furniture and get to the top of some shelf. We don't have any A.I. systems coming anywhere close to doing these things today, except self-driving cars, and they are over-engineered, requiring mapping the whole city, hundreds of engineers, hundreds of thousands of hours of training." Solving the wriggly problems of physical intuition will be the big challenge of the next decade, LeCun said. Still, the basic idea is simple: if neurons can do it, then so can neural nets.
Here is the original post:
Why the Godfather of A.I. Fears What He's Built - The New Yorker
Computational imaging researcher attended a lecture, found her … – MIT News
Soon after Kristina Monakhova started graduate school, she attended a lecture by Professor Laura Waller '04, MEng '05, PhD '10, director of the University of California at Berkeley's Computational Imaging Lab, who described a kind of computational microscopy with extremely high image resolution.
"The talk blew me away," says Monakhova, who is currently an MIT-Boeing Distinguished Postdoctoral Fellow in MIT's Department of Electrical Engineering and Computer Science. "It definitely changed my trajectory and put me on the path to where I am now. I knew right away that this is what I wanted to do. It was the perfect combination of signal processing, hardware, and algorithms, and I could use it to make more capable imaging sensors for diverse applications."
Today, Monakhova's research involves creating cameras and microscopes designed to produce not high-resolution images for human consumption, but rather information-dense images to be used by algorithms. She aspires to combine imaging system physics with deep learning.
She points out that the purpose of cameras has been fundamentally changed by automation. "In many contexts, people don't look at the images; algorithms do," she explains.
A good example of when the data in an image is more important than its visual representation or sharpness is in skin cancer diagnosis, where measuring specific light wavelengths using a hyperspectral camera can help determine whether a certain skin lesion is cancerous and, if so, malignant. While hyperspectral cameras generally cost more than $20,000, Monakhova has designed a cheap computational camera that could be adapted for such diagnosis.
Monakhova says she inherited her early academic ambition from her mother, who brought her to this country from Russia when she was 4 years old.
"My mother is my role model and inspiration. She immigrated to the U.S. as a single mother and raised me while completing her PhD in electrical engineering," Monakhova says. "I remember spending my elementary school holidays sitting in her classes, drawing. She tried to get me excited about math and science as a child, and I guess she succeeded!"
By middle school, Monakhova had discovered her interest in engineering after joining a robotics team. When many years later she started graduate school at UC Berkeley, she chose robotics as her first lab, although Waller's computational microscopy lecture drew her away to Waller's lab and to her current field of research.
Starting in the MIT Postdoctoral Fellowship Program for Engineering Excellence in fall 2022, Monakhova experienced another life-changing event.
"My daughter was born on the first day of work at MIT, making for a particularly exciting first day," she says.
Born four weeks early, the baby required an elaborate system of feeding, a process that took almost two hours and needed to be repeated in three-hour increments, which left the parents just one out of every three hours to do everything else.
"The first four or five months were a whirlwind of challenges and emotions and doctor appointments," Monakhova says.
Despite those challenges, the new mother continued with her fellowship. Knowing that a postdoc is often a bridge to a faculty position, she took special advantage of a series of program presentations focused on what it's like to be a professor and the academic job search process. Although the presentations took place while she was on maternity leave and she wasn't required to participate, Monakhova still attended via Zoom.
"I could call in and listen while breastfeeding my newborn infant," she says. "I went on the academic job market, and this series was useful to help me get my job materials together and prepare for my interviews."
Monakhova says she is "thankful that MIT has a relatively good maternity and family leave policy, as well as crucial resources, such as lactation rooms, back-up daycare, and a fantastic on-campus daycare program with financial aid available. Without these resources and support, I would have had to quit my career. In order to attract and retain women in science and engineering, we need family-friendly policies that don't penalize women for having babies."
By June, Monakhova had landed a position as an assistant professor at Cornell University's Department of Computer Science. Having deferred the appointment, she'll start in fall 2024.
Referring to her upcoming work as a professor and lab leader at Cornell, Monakhova says, "I'm particularly excited to try to set up a collaborative, friendly lab culture where mental health and work-life balance are prioritized, and failure is seen as an important step in the research process."
Throughout her academic career, Monakhova says, community has been extremely important. The MIT Postdoctoral Fellowship Program for Engineering Excellence, which was designed to develop the next generation of faculty leaders and help guide MIT's School of Engineering toward supporting more women and others who are underrepresented in engineering, allowed her to explore new research questions in a different area and work with some amazing MIT students on some exciting projects.
"I believe it's important to help each other out and create a welcoming environment where everyone has the support and resources they need to thrive," says Monakhova, who has an exemplary record of mentoring and giving back. "Research and breakthroughs don't happen in isolation; they're the result of teams and communities of people working together."
The rest is here:
Computational imaging researcher attended a lecture, found her ... - MIT News
SCS Researchers To Receive $1.2M for Continued DOE Nuclear … – Carnegie Mellon University
The U.S. Department of Energy will continue funding research on nuclear fusion at Carnegie Mellon University's School of Computer Science.
The DOE recently announced $16 million in funding for nine projects spread across 13 institutions, including CMU, that aim to establish the scientific foundation needed to develop a fusion energy source. The projects focus on advancing innovative fusion technology and collaborative research on both small-scale experiments and the DIII-D National Fusion Facility in San Diego, the largest tokamak operating in the United States. CMU will receive about $1.2 million over three years.
"While establishing the scientific basis for fusion energy, we must also improve the maturity of existing fusion technologies and explore entirely new innovations that have the potential to revolutionize the fusion landscape," said Jean Paul Allain, DOE associate director of the Office of Science for Fusion Energy Sciences. "The extensive capabilities at DIII-D make it the ideal facility to pursue areas of great potential that are not sufficiently mature for adoption by the private sector."
Nuclear fusion happens when hydrogen nuclei smash, or fuse, together. This process releases a tremendous amount of energy but remains challenging to maintain at levels necessary for putting electricity on the grid. One method to produce nuclear fusion uses magnetic fields to contain a plasma of hydrogen at the required temperature and pressure to fuse the nuclei. This process happens inside a tokamak, a massive machine that uses magnetic fields to confine the hydrogen plasma in a donut shape called a torus. Containing the plasma and maintaining its shape require hundreds of micromanipulations to the magnetic fields and blasts of additional hydrogen particles.
The DOE funding will allow Jeff Schneider, a research professor in the Robotics Institute, and his team to continue their research on using machine learning to control fusion reactions.
Last year, Ian Char, a doctoral candidate in the Machine Learning Department advised by Schneider, used reinforcement learning to control the hydrogen plasma of the tokamak at DIII-D. Char was the first CMU researcher to run an experiment on the sought-after machines, the first to use reinforcement learning to affect the rotation of a tokamak plasma, and the first person to try reinforcement learning on the largest operating tokamak machine in the United States.
Schneider and his team will now attempt to develop a machine-learning-based system that simultaneously controls the injection of hydrogen particles, the shape of the plasma, and its current and density. Developing such a system is critical to the development of ITER, formerly known as the International Thermonuclear Experimental Reactor, an international nuclear fusion research project that will be the world's largest tokamak when it is completed in 2025.
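To make the idea concrete, the sketch below shows the generic observe-act-reward loop that reinforcement-learning control follows. The toy environment, its dynamics, and the reward are invented placeholders and bear no relation to the actual DIII-D control system or the CMU team's code.

```python
# Schematic reinforcement-learning control loop; everything here is a made-up stand-in.

class ToyPlasmaEnv:
    """Placeholder for a plasma simulator with two controlled quantities."""
    def reset(self):
        self.density, self.current = 1.0, 1.0
        return (self.density, self.current)

    def step(self, action):
        gas_puff, heating = action
        self.density += 0.1 * gas_puff - 0.02        # crude, invented dynamics
        self.current += 0.1 * heating - 0.02
        # reward: stay close to invented target values
        reward = -abs(self.density - 2.0) - abs(self.current - 1.5)
        return (self.density, self.current), reward

def policy(observation):
    # Stand-in for a learned policy; a real controller would be a trained network.
    density, current = observation
    return (1.0 if density < 2.0 else 0.0, 1.0 if current < 1.5 else 0.0)

env = ToyPlasmaEnv()
obs = env.reset()
total_reward = 0.0
for step in range(20):            # each iteration is one control interval
    action = policy(obs)
    obs, reward = env.step(action)
    total_reward += reward
print("final state:", obs, "cumulative reward:", round(total_reward, 3))
```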
"The proposed work will bring the power of machine learning techniques to plasma control at DIII-D. This will set the stage for the successful operation of ITER, which requires plasma control at a level beyond current capabilities, and will also expand the scientific understanding of plasma evolution and instabilities," Schneider said. "Carnegie Mellon University is leveraging its expertise in machine learning to help the global scientific community harness a new source of clean, abundant energy."
As it has in the past, the CMU team will collaborate with the Princeton Plasma Physics Laboratory and the SLAC National Accelerator Laboratory at Stanford on the work.
The rest is here:
SCS Researchers To Receive $1.2M for Continued DOE Nuclear ... - Carnegie Mellon University
The Reach of Online Learning to Ensure Continuing Access to … – USC Viterbi School of Engineering
Website created by female students from Afghanistan as part of a DEN at USC Viterbi course.
With many students in the world today living under challenging circumstances, continuing access to educational opportunities can be nearly impossible. Recognizing these unforeseen challenges, USC Viterbi faculty turned to DEN@Viterbi, the Distance Education Network at USC Viterbi, with more than 50 years of experience in hybrid and remote learning, to help students whose education has been suddenly interrupted or curtailed. As a result, over the last year, free access to USC Viterbi engineering classes and workshops were offered to students, living in two different regions in the world, war-torn Ukraine and Afghanistan, in order to ensure that students in such unique and volatile circumstances had the opportunity to continue their education.
Leveraging the DEN platform, established five decades ago (ahead of online learning common today), Astronautics Professor Mike Gruntman hosted a free online course on fundamentals of space systems for students and faculty in Ukraine. Gruntman emphasized that this humanitarian initiative by the Viterbi School offered important opportunities for specialists in Ukraine to maintain academic excellence in a rapidly developing area of technology that would play an important role in the rebuilding of the country in the future.
Simultaneously, a number of Afghan female students participated in two free educational opportunities:
Seventy-five women last year participated in the first such opportunity, a global course in innovation (Principles and Practices of Innovation) taught by Professor Stephen Lu through USCs iPodia program. The more than decade-old iPodia program allows for students from different parts of the world to simultaneously attend the same class using the DEN platform. The Afghan students joined classmates from universities in Brazil, China, Germany, Greece, Israel, Mexico, Taiwan, Uganda and the United States.
The second such educational opportunity was the creation of a series of skills-based short courses at USC that has become known as the Afghan Pathways Program (APP). Through USC Viterbi's Information Technology Program (ITP), which focuses on applied technology coursework, professors Trina Gregory and Nayeon Kim taught women (now permitted to study at home) how to create websites and to code in Python. For twelve weeks, these Afghan students met three times a week with their instructors. Forty certificates in web development and/or Python programming have been earned thus far by the female Afghan students who completed the courses. Both programs were coordinated in collaboration with the non-profit Afghanistan-US Democratic Peace and Prosperity Council (DPPC).
A snapshot of the DEN at Viterbi dashboard.
Said USC Viterbi Dean Yannis C. Yortsos, "We are fortunate to have the ability to reach and provide engineering education to many students in many parts of the world, where such access is curtailed."
USC Vice Dean and Interim ITP Director Erik A. Johnson said about the courses for women and girls in Afghanistan, "We so easily take for granted the educational opportunities that we have here; these short courses are providing instruction and skills to women who have no other direct opportunity to continue their education."
Trina Gregory, who is an Associate Professor of Information Technology Practice and who taught the courses for Afghan women, remarked, "The students inspired me to continue to fight for education for girls and women."
One individual affiliated with the DPPC believes that this can be a model for other universities to follow. "The virtual classroom is a window of hope for Afghan girls and women." Further, she believes engineering, computer science, and coding, these disciplines and skills, are key to women's independence. She believes it is imperative so that the generation does not get lost.
USC Viterbi continues to offer coursework to students, including introduction to web development to a second cohort of Afghan women this term. In addition, Afghan women are currently participating in a course, Astronomy 101: The Universe through A Cultural Lens, being taught by USC Dornsife Professor Vahe Peroomian, an iPodia fellow.
Lu added, "iPodia enables USC Viterbi to extend our classrooms to hard-to-reach places across the globe so our students can learn together with peers of various backgrounds. The Afghan participants in iPodia classes are not just students but also teachers to our students."
Published on November 17th, 2023
Last updated on November 17th, 2023
View post:
The Reach of Online Learning to Ensure Continuing Access to ... - USC Viterbi School of Engineering
ASU center brings faculty together to research human-robot solutions – ASU News Now
November 15, 2023
To help mitigate the world biodiversity crisis, Arizona State University's Julie Ann Wrigley Global Futures Laboratory has recruited Harris Lewin, a prominent genome scientist currently spearheading one of biology's most ambitious moonshot goals: a complete DNA catalog of life's genetic code by the end of this decade.
Lewin leads the Earth BioGenome Project, a massive coalition of worldwide scientists and 50-plus ongoing projects that has a primary goal of completing high-quality DNA reference genomes (the gold standard of an organism's complete DNA genetic code and sequence) for all higher organisms on Earth, an estimated 1.8 million species.
Photo: Harris Lewin, scientists and leaders of the Earth BioGenome Project.
The global secretariat of the project, which was at the University of California, Davis (UC Davis), will also move to ASU in December.
"You really have to know who's there before you can really understand biology," Lewin said. "And right now, with only 10% of the species that exist having been named, for most of life, or 80% to 90% of all life, we don't even know what's there."
Lewin's appointment as professor in ASU's Global Futures Laboratory boosts its comprehensive strategy to develop solutions for our world's planetary systems challenges, including the current biodiversity crisis. An estimated two-thirds of higher organisms may face the urgent threat of a new mass extinction, primarily due to the activities of humans that impact natural ecosystems and drive climate change.
"Today, with trying to build scalable models on understanding how ecosystems function and how they might be restored and remediated, we have to have detailed understanding of the organisms in those ecosystems," Lewin said. "We need to move as quickly as we can, because if species that comprise critical ecosystems are lost, they may never be recovered again."
Once a species goes extinct, scientists forever lose the ability to better understand what sustained its life, or if that species might be used to improve food or medicine production.
"As our world's life-supporting systems continue to be stressed to levels that have never before been recorded, the significance of the Earth BioGenome Project cannot be understated," said Peter Schlosser, vice president and vice provost of Global Futures at ASU.
"To have a pioneering scientist like Harris Lewin and his colleagues identify ASU and the Julie Ann Wrigley Global Futures Laboratory as not simply a logical home for this endeavor, but a preferred home because of our facilities and global network of partners, speaks volumes," Schlosser said. "As with all work designed to help shape options for a thriving future for our world and its inhabitants, this project is of the highest urgency and requires a deep cohort of experts from around the world."
The 19th-century naturalist Charles Darwin wrote about the complexity of life on Earth, describing it as "endless forms most beautiful" 164 years ago in his profound book on evolution, On the Origin of Species.
In the 20th century, the structure of DNA was discovered. The combination and exact order of DNA's chemical letters, abbreviated as A, C, T, and G, are responsible for the blueprints of life. To better decipher this blueprint, DNA sequencing was invented in the 1970s.
With advances in sequencing technology in the 1990s, academic, private, and government labs raced to complete the genomes for the first bacterium, yeast, nematode, and fruit fly. The first draft of the Human Genome Project, a Herculean effort at the time, was completed in 2003, taking an international consortium of scientists 13 years to do so at an estimated cost of $3 billion.
Fifteen years later, Lewin co-founded the Earth BioGenome Project, or EBP, and today chairs its executive council. The project was announced at the World Economic Forum in Davos, Switzerland, at the beginning of 2018 and officially launched at the Wellcome Trust in London later that year.
He describes the EBP as a critical biology infrastructure project that will allow scientists to stand on the shoulders of giants to see further and better understand the world's biodiversity, akin to how astronomers have used tools such as the Webb Space Telescope to understand the nature of the universe.
"Genomes are the infrastructure for the future of biology and the bioeconomy," Lewin said. "Much like how the Webb Telescope allows you to peer into the cosmos to understand the origins and evolution of the universe, having all the sequence of eukaryotic life, those with a nucleus, will facilitate understanding of the origin and evolution of life on Earth."
Key facets of a bio-driven economy from genome science include renewable biofuels from algae, food crops like corn and soybeans, threats like agricultural pests, model scientific organisms for drug and medicine development, and biodefense and biosecurity issues, such as the recent worldwide COVID-19 pandemic. Other products of the bioeconomy will involve new industrial catalysts, biomaterials and drugs.
Working together, to date, the EBP has completed a pilot phase of about 2,000 genomes. Among the EBP are 55 genome projects underway, the largest led by the U.K.'s Wellcome Sanger Institute, Rockefeller University, the European Union, Genome Canada, China, a pan-African consortium, and Australia. In the U.S., Rockefeller University leads the Vertebrate Genomes Project, which has now completed over 300 genomes, and the California Conservation Genomics Project has finished over 150 genomes.
With rapid advances in DNA sequencing technology and computing power, Lewin thinks the EBP can sequence the rest of all 1.8 million named eukaryotic species for around the same cost as the human genome draft within the next 10 years.
Funding for the EBP will come from a variety of worldwide endeavors.
"There's no central funding," Lewin said. "It's a distributed model. Each of these projects raises their own money, but they're all agreeing to coordinate and work together with common standards towards the goal of sequencing all eukaryotes in 10 years. The limitation these days is really not the sequencing technology; the limitation is acquiring taxonomically well-identified, vouchered and ethically sourced samples from all over the world."
The next goal is to complete 10,000 genomes by the end of 2025. When fully up to speed, the affiliated projects of the EBP will need to sequence an estimated 1,500 genomes per day to meet its ambitious goal.
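A back-of-envelope check of that rate, using only the figures quoted in this article: sequencing the full catalog of 1.8 million named species at 1,500 genomes per day would take

\[
\frac{1.8 \times 10^{6}\ \text{genomes}}{1{,}500\ \text{genomes/day}} = 1{,}200\ \text{days} \approx 3.3\ \text{years},
\]

roughly consistent with finishing within the project's 10-year horizon once the affiliated projects reach full speed.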
"This also includes a very aggressive set of standards for a collection of samples, all the metadata that gets collected with them, and how the sequencing is to be done and to what specifications in terms of quality," Lewin said.
With the move to ASU, there will now be abundant opportunities to develop an EBP at ASU program to sequence and better understand iconic life found in desert climates, from the mighty arms of the saguaro cactus to Gila monsters to Gambel's quail to the diamondback rattlesnake.
The EBP at ASU will be greatly strengthened by the National Science Foundation's NEON (National Ecological Observatory Network) Biorepository, directed by Nico Franz, Virginia M. Ullman Professor of Ecology and Biocollections director.
"Our team is thrilled to have the opportunity to work with Harris Lewin," Franz said. "We have a shared, inclusive vision to advance EBP at ASU and beyond. This model is based on sound biodiversity sampling design, ethical data governance and broadly impacting education in the computational life sciences."
From the world's coral reefs to rainforests (which together account for an estimated 75% of worldwide biodiversity) to temperate land climates, ASU has been at the forefront of developing innovative solutions for understanding and conserving biodiversity.
"ASU will be one of the global centers for the Earth BioGenome Project, not just on the sample provision side, but all the way through sequencing, assembly and analysis," Lewin said. "We certainly have early plans to try and understand desert ecosystems and to reveal the impacts of climate change on those critical ecosystems, including aquatic ecosystems."
The Earth BioGenome Project now joins the new School of Ocean Futures, NeoBio, Bermuda Institute of Ocean Science, Center for Global Discovery and Conservation Science, and Center for Biodiversity Outcomes as ASU's academic lead initiatives to help solve the world biodiversity crisis.
"We are excited to see how this work integrates with programs like the Bermuda Institute of Ocean Sciences and Nico Franz's research with the Biodiversity Knowledge Integration Center and NEON," Schlosser said. "These collaborations can help repair, preserve and protect our world's ecosystems."
Lewin's official ASU appointment began Nov. 1. For the past 12 years, Lewin served as distinguished professor of evolution and ecology and former vice chancellor for research at UC Davis. He is a member of the National Academy of Sciences and won the Wolf Prize in Agriculture for his research into cattle genomics. He has been a leader in the field of mammalian comparative genomics and has made major contributions to our understanding of chromosome evolution and its relationship to adaptation, speciation and the origins of cancers. Previously, Lewin worked at the University of Illinois for 27 years and, in 2003, served as the founding director of the Carl R. Woese Institute for Genomic Biology.
Read more:
ASU center brings faculty together to research human-robot solutions - ASU News Now
Computer Science most preferred course by Indian students in US – DTNEXT
CHENNAI: Computer Science is the most preferred course for Indian students studying in the United States, followed by engineering programmes, new data from the US Consulate in Chennai revealed on Monday.
United States-India Educational Foundation (USIEF) regional officer Maya Sundararajan said that of the total, 41.2% of Indian students choose Computer Science courses at Undergraduate (UG) and Postgraduate (PG) levels in the US. She said engineering programmes in the US are the second preferred choice, with a total of 26.9% of students studying that course. Though the number of students was not disclosed, Maya said a total of 11.6% of Indian students wanted to study management courses in the US.
Life Science and Health courses account for 5.6% and 2.5% respectively, she added. She said even at the international level, Computer Science and Engineering is the most preferred course in the US. When asked which places in the US were most preferred by Indian students, the official said Texas, California, and New York were the priority states for Indian students to pursue both UG and PG courses there.
Stating that USIEF engages institutions of higher education in the US and in India to help foster and enhance linkages between them, she advised the students to choose only accredited universities.
US Consul General in Chennai, Christopher W Hodges said that graduate students in India, who come for higher studies in the US, were focusing more on research.
"It is great to see the trajectory in the increase of Indian students studying in the US," he added.
Stressing the need for the linkage between the industry and education, he said this would help the students in higher education get practical experience provided by the companies.
Pointing out that cooperation between the US and India has reached the next level, he said many institutions including the Indian Institute of Technology have ties with universities in the US, which would help student exchange programmes.
See the original post:
Computer Science most preferred course by Indian students in US - DTNEXT
Realistic talking faces created from only an audio clip and a person’s … – EurekAlert
Image caption: (L-R) NTU School of Computer Science and Engineering (SCSE) PhD student Mr Zhang Jiahui, NTU SCSE Associate Professor Lu Shijian, NTU SCSE PhD graduate Dr Wu Rongliang, and NTU SCSE PhD student Mr Yu Yingchen, presenting a video produced by DIRFA based on Assoc Prof Lu's photo. Credit: NTU Singapore
A team of researchers from Nanyang Technological University, Singapore (NTU Singapore) has developed a computer program that creates realistic videos that reflect the facial expressions and head movements of the person speaking, only requiring an audio clip and a face photo.
DIverse yet Realistic Facial Animations, or DIRFA, is an artificial intelligence-based program that takes audio and a photo and produces a 3D video showing the person demonstrating realistic and consistent facial animations synchronised with the spoken audio (see videos).
The NTU-developed program improves on existing approaches (see Figure 1), which struggle with pose variations and emotional control.
To accomplish this, the team trained DIRFA on over one million audiovisual clips from over 6,000 people derived from an open-source database called The VoxCeleb2 Dataset to predict cues from speech and associate them with facial expressions and head movements.
The researchers said DIRFA could lead to new applications across various industries and domains, including healthcare, as it could enable more sophisticated and realistic virtual assistants and chatbots, improving user experiences. It could also serve as a powerful tool for individuals with speech or facial disabilities, helping them to convey their thoughts and emotions through expressive avatars or digital representations, enhancing their ability to communicate.
Corresponding author Associate Professor Lu Shijian, from the School of Computer Science and Engineering (SCSE) at NTU Singapore, who led the study, said: "The impact of our study could be profound and far-reaching, as it revolutionises the realm of multimedia communication by enabling the creation of highly realistic videos of individuals speaking, combining techniques such as AI and machine learning. Our program also builds on previous studies and represents an advancement in the technology, as videos created with our program are complete with accurate lip movements, vivid facial expressions and natural head poses, using only their audio recordings and static images."
First author Dr Wu Rongliang, a PhD graduate from NTU's SCSE, said: "Speech exhibits a multitude of variations. Individuals pronounce the same words differently in diverse contexts, encompassing variations in duration, amplitude, tone, and more. Furthermore, beyond its linguistic content, speech conveys rich information about the speaker's emotional state and identity factors such as gender, age, ethnicity, and even personality traits. Our approach represents a pioneering effort in enhancing performance from the perspective of audio representation learning in AI and machine learning." Dr Wu is a Research Scientist at the Institute for Infocomm Research, Agency for Science, Technology and Research (A*STAR), Singapore.
The findings were published in the scientific journal Pattern Recognition in August.
Speaking volumes: Turning audio into action with animated accuracy
The researchers say that creating lifelike facial expressions driven by audio poses a complex challenge. For a given audio signal, there can be numerous possible facial expressions that would make sense, and these possibilities can multiply when dealing with a sequence of audio signals over time.
Since audio typically has strong associations with lip movements but weaker connections with facial expressions and head positions, the team aimed to create talking faces that exhibit precise lip synchronisation, rich facial expressions, and natural head movements corresponding to the provided audio.
To address this, the team first designed their AI model, DIRFA, to capture the intricate relationships between audio signals and facial animations. The team trained their model on more than one million audio and video clips of over 6,000 people, derived from a publicly available database.
Assoc Prof Lu added: "Specifically, DIRFA modelled the likelihood of a facial animation, such as a raised eyebrow or wrinkled nose, based on the input audio. This modelling enabled the program to transform the audio input into diverse yet highly lifelike sequences of facial animations to guide the generation of talking faces."
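For readers who want a concrete picture of what "modelling the likelihood of a facial animation from audio" can look like, here is a generic PyTorch sketch. It is not the published DIRFA architecture, and every dimension and layer choice below is an illustrative assumption.

```python
# Generic sketch: map an audio-feature sequence to a per-frame distribution over
# facial-animation parameters, then sample one plausible animation. Not DIRFA itself.
import torch
import torch.nn as nn

class AudioToAnimation(nn.Module):
    def __init__(self, audio_dim=80, hidden_dim=128, anim_dim=64):
        super().__init__()
        self.encoder = nn.GRU(audio_dim, hidden_dim, batch_first=True)
        self.mean_head = nn.Linear(hidden_dim, anim_dim)     # expected animation per frame
        self.logvar_head = nn.Linear(hidden_dim, anim_dim)   # spread -> diversity of outputs

    def forward(self, audio_features):
        hidden, _ = self.encoder(audio_features)             # (batch, frames, hidden_dim)
        mean = self.mean_head(hidden)
        std = torch.exp(0.5 * self.logvar_head(hidden))
        return mean + std * torch.randn_like(std)            # sample one plausible sequence

model = AudioToAnimation()
fake_audio = torch.randn(1, 100, 80)    # 100 frames of made-up audio features
animation = model(fake_audio)           # (1, 100, 64) animation parameters
print(animation.shape)
```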
Dr Wu added: "Extensive experiments show that DIRFA can generate talking faces with accurate lip movements, vivid facial expressions and natural head poses. However, we are working to improve the program's interface, allowing certain outputs to be controlled. For example, DIRFA does not allow users to adjust a certain expression, such as changing a frown to a smile."
Besides adding more options and improvements to DIRFA's interface, the NTU researchers will be finetuning its facial expressions with a wider range of datasets that include more varied facial expressions and voice audio clips.
Explainer video: How DIRFA uses artificial intelligence to generate talking heads
Video 2: A DIRFA-generated talking head with just an audio clip of former US president Barack Obama speaking and a photo of Associate Professor Lu Shijian.
Video 3: A DIRFA-generated talking head with just an audio clip of former US president Barack Obama speaking and a photo of the study's first author, Dr Wu Rongliang.
Journal: Pattern Recognition
Method of Research: Imaging analysis
Subject of Research: People
Article Title: Audio-driven talking face generation with diverse yet realistic facial animations
Article Publication Date: 31-Aug-2023
Disclaimer: AAAS and EurekAlert! are not responsible for the accuracy of news releases posted to EurekAlert! by contributing institutions or for the use of any information through the EurekAlert system.
More:
Realistic talking faces created from only an audio clip and a person's ... - EurekAlert
Materials Discovery to Enable Computers that Think More Like … – USC Viterbi School of Engineering
A USC Viterbi research team has discovered a new semiconductor with a unique property that will allow for energy efficient computers that function more like the human brain. Image/Unreal
Artificial intelligence is already transforming how we work and live, automating time-consuming tasks and streamlining our decision-making.
However, AI algorithms are mostly run on conventional complementary metal oxide semiconductor (CMOS)-based hardware. This requires them to be trained with large datasets to accomplish even the simplest tasks, such as image analysis or facial recognition. Processing these data-intensive requests requires vast computing resources, like data centers. The process consumes significant amounts of energy.
A USC Viterbi School of Engineering research team has discovered a new semiconductor with a unique material property that can enable more energy-efficient computing hardware that functions like the human brain. Two related research papers were published recently in the journals Advanced Materials and Advanced Electronic Materials. The research is led by Huandong Chen, a 2023 Materials Science Ph.D. graduate in the Mork Family Department of Chemical Engineering and Materials Science, from the group of Jayakanth Ravichandran, an associate professor in chemical engineering and materials science and electrical and computer engineering.
The human brain is excellent at associative learning: we have an innate ability to call up memories, make connections, and understand objects and stimuli in relation to each other. Human brains utilize interconnected neurons and synapses to store information locally, where it is processed. Our brains are capable of handling highly sophisticated tasks and operating at remarkably low energy consumption. Developing neuromorphic computing hardware, hardware that mimics the architecture and operation of the human brain, is highly desired in the quest to achieve energy-efficient advanced computing.
Hardware materials that mimic the brain
Philip and Cayley MacDonald Endowed Early Career Chair Jayakanth Ravichandran.
If a material can move abruptly between two states (also known as phase transitions), this provides the foundation for hardware that mimics the brain. For example, a slight difference in temperature dramatically changes a material's electrical conductivity (the ease of passing an electrical current) from a high to a low value, or vice versa. Such neuron-inspired phase change devices have been achieved only in a handful of materials.
The USC Viterbi researchers discovered novel electronic phase transitions in a semiconductor and leveraged those intriguing physical properties to demonstrate an abrupt electrical conductivity change with varying temperature and applied voltage, which can enable the development of energy-efficient neuromorphic computing.
Ravichandran holds the Philip and Cayley MacDonald Endowed Early Career Chair. His group has been working on a semiconductor material known as barium titanium sulfide (BaTiS3) since 2017. The group's work resulted in the BaTiS3 material showing a world-record high birefringence property, a phenomenon in which a ray of light is split into two rays. In a recent unrelated work, they discovered an even higher value in a related material.
"However, as a semiconductor, we do not expect any abrupt phase transition in BaTiS3," said Ravichandran.
"Naively thinking, this material should behave like a boring semiconductor without any expectation of a phase transition," said Chen.
A surprising discovery
Ravichandran and his group were surprised to observe the signatures of phase transitions in the BaTiS3 material when measuring its electrical properties under different temperatures. Upon cooling the material, the electrical resistivity of BaTiS3 increases, and it undergoes a transition at around 240 Kelvin (about -33 Celsius), featuring an abrupt change in electrical conductivity. With further cooling, it continues to increase until 150 Kelvin (about -123 Celsius), after which the material goes through another transition with increased electrical conductivity.
"It is always exciting to observe abnormal behavior in our experiments, but we have to check carefully to make sure that those phenomena are real and reproducible," said Ravichandran.
In this work, Chen performed careful experiments to rule out contributions from many extrinsic factors, such as contact resistance and strain status, which could complicate this effect. It was demonstrated that the unique property originated within the material itself.
Postdoctoral researcher in the Ravichandran Group Huandong Chen
"This is particularly important when characterizing such a new material system. One good example of not ruling out other factors was the recent drama surrounding the so-called room temperature superconductor LK-99, where it seems the sharp drop in resistivity at around 105 Celsius is likely from an impurity, known as Cu2S," said Chen.
The team also investigated how the crystal structure of the BaTiS3 material changes during these electronic phase transitions, corresponding to the changes in electrical conductivity.
Boyang Zhao, a Materials Science Ph.D. candidate from Ravichandran's group, traveled to the synchrotron at Lawrence Berkeley National Lab to map out the structure evolution. By combining the information from the electrical and structural measurements, which are key experimental signatures of the phenomenon known as a charge density wave phase transition, the team could claim the existence of charge density wave order in BaTiS3.
"We've discovered one very special charge density wave phase change material. Most charge density wave materials only go from a metal state, which is high conductivity, to an insulator state, which is low conductivity. What we have found is that you can go from a low conductivity state to a low conductivity state. Such an insulating-to-insulating transition is very, very rare, with only a handful of examples out there. So, scientifically, it's very interesting," said Ravichandran.
How the phase transitions in BaTiS3 work is not fully understood yet. The team collaborated with Rohan Mishra's group from Washington University in St. Louis, performing materials modeling to obtain a deeper understanding of the material system. Current experimental and theoretical findings suggest that the observed phase change phenomena have an unexpected origin compared to most charge density wave materials. The team is conducting further studies to understand this phenomenon better.
The latest Advanced Materials research on novel phase change material discovery was conducted with collaborators from the University of Washington in Seattle, Washington University in St. Louis, Columbia University, Oak Ridge National Laboratory, and Lawrence Berkeley National Laboratory.
A prototype showing the material in action
In a follow-up work that was recently published in Advanced Electronic Materials, Chen and his collaborators fabricated the first prototype neuronal device using the BaTiS3 material. They were able to show abrupt switching by varying current and voltage. They also showed oscillations in voltage that signified fast switching between two states in the phase transition. Similar voltage oscillations are observed in the brain.
"This is an important step towards actual electronic device applications of BaTiS3. It is also quite exciting to see such a short period of time between this prototype device demonstration and the fundamental material property discovery," said Chen.
The frequency of voltage oscillations was altered by the operation temperature and channel sizes. A lower operation temperature and a shorter device channel size give rise to higher oscillation frequencies.
"We expect that much more sophisticated neuronal functionalities can be achieved by connecting multiple BaTiS3 neurons to each other or integrating with other passive synaptic devices, as has been successfully demonstrated in another phase change system, VO2. Future efforts in making this material in the thin film form that features phase transitions and is potentially compatible with our semiconductor manufacturing could be of great interest to both the research community and the semiconductor industry," said Chen.
This work in Advanced Electronic Materials was done in collaboration with Robert G. and Mary G. Lane Endowed Early Career Chair Han Wang's group in USC's Ming Hsieh Department of Electrical and Computer Engineering. Other authors include Materials Science Ph.D. candidate Nan Wang and Electrical Engineering Ph.D. candidate Hefei Liu.
Ravichandran serves as a co-director for the Core Center of Excellence in NanoImaging (CNI).
Ravichandran and his research team at USC are supported by the MURI program of the Army Research Office and the U.S. National Science Foundation's Ceramics Program.
Published on November 14th, 2023
Last updated on November 14th, 2023
More here:
Materials Discovery to Enable Computers that Think More Like ... - USC Viterbi School of Engineering