Category Archives: Computer Science

Former professional drummer finds new groove in computer science … – Clemson News

June 26, 2023 (updated June 27, 2023)

Justin Cromer is trading in his drumsticks for silicon chips.

The 34-year-old Florence native worked as a session drummer in Nashville for five years before finding his way to Clemson University to pursue a Bachelor of Science in computer science.

He plans to finish his studies online from Redmond, Washington, where he has landed a job with SpaceX, working on the team that makes chips for Starlink satellites.

"I got my dream job twice," said Cromer, who is on track to graduate in December.

Cromer's path is a testament to the transformative power of education, especially when it is combined with determination. And it illustrates how Clemson University's inclusive community strives to open a place for all students, no matter where they are on the career ladder.

Jacob Sorber, a School of Computing associate professor who mentored Cromer, said his life experience and maturity are uncommon for undergraduates.

"He's very good at identifying what is needed to be successful," Sorber said. "Rather than say, 'I guess we'll see what they teach me in class,' he looks at what we didn't cover that would be useful. That makes a huge difference in the kind of student he is and the kinds of outcomes he is likely to have."

In his first chapter as an undergraduate, Cromer pursued a Bachelor of Music at the University of North Texas. Shortly before graduation in 2011, he moved back to Florence to finish his last two classes online and to live at home to save money.

Cromer taught music at a private elementary school in his first job out of college and tucked away about $1,200 to move to Nashville and become a professional musician. But when he took a closer look at the numbers, he could see it wasn't enough for a start in a new city.

Instead, Cromer set sail on cruise ships, playing drums while traveling the world. His room and board were included so he was able to save a good chunk of his pay before moving to the Music City.

When he first arrived in Nashville, Cromer hung out in places where he liked the music and started making connections to land his first few gigs. His classical training gave him an edge and helped firm up his reputation, which led to bigger gigs.

"I ended up getting hired at the last minute for work with larger artists," Cromer recalled. "They would say, 'Hey, we've got a big crowd in Indiana this weekend. Our drummer is having a baby. Can you show up?'"

Cromer found his priorities changing after about five years in Nashville. He wanted a family and a dog and saw that making a living as a musician was becoming harder.

Cromer's time in the music business coincided with the rise of file-sharing and streaming services, which radically changed how music is distributed and how artists are paid. And for those who do make a living as musicians, long stretches on the road away from home can be brutal, he said.

It was time for a change, so Justin headed back to Florence to come up with a new plan. That was where he was living when his uncle, Rodney Godwin, was in a tractor wreck.

With his extensive injuries, it would take months of recovery for Godwin just to drive again, and he needed someone to run his business, Dun-Rite Automotive & Transmissions. Godwin saw it as an opportunity for his nephew to lead and to push himself to reach for higher goals.

"He did, 100%," said Godwin, who owns six businesses in the Florence area. "He had operations running so smoothly I never did take back operations of that shop to this day. When he was leaving, he brought somebody else in and trained them how to do it."

Cromer said that while he was running the shop, he knew he wanted to eventually do something else for a living, and that is when he discovered computer science and software.

He understood he would have to go back to college to pursue those interests professionally and began to prepare for math placement tests by studying after work on the Khan Academy website.

The stakes in his personal life were getting higher. He had met a woman from Myrtle Beach, Annamarie Haines Guest, and wanted to marry her. He sold two of his four drum sets to help pay for the ring, and she said yes.

Annamarie, whose last name is now Cromer, said she never doubted that Justin would make the transition from drummer to computer scientist.

"He is such a driven individual," she said. "Once he puts his mind to something, there's nothing he can't do. He's going to dedicate all of his time and effort to it, and that's what he did."

Annamarie was a student at the University of South Carolina at the time, so Justin moved to Columbia and enrolled at Midlands Technical College to start treading the path to a computer science degree. He taught drum lessons on the side to earn money.

When Cromer decided he wanted to transfer to Clemson, he took a close look at the curriculum and started preparing before even setting foot on campus. He bought a book, did the exercises it offered and took university courses available online for free.

It was also around this time that Cromer discovered Sorber's YouTube channel. The channel, which has more than 124,000 subscribers, features videos of Sorber teaching programming and how to become a more confident software developer.

At first, Cromer had no idea Sorber taught at Clemson. But once Justin figured it out, he reached out to Sorber, who encouraged him to get in touch once on campus.

By the time Cromer arrived at Clemson, he knew he wanted to write software for spacecraft. He showed up in Sorber's office on his first day and told him about his plans and how enthusiastic he was.

Sorber found room for Cromer in the PERSIST lab, and he started conducting research on sensors.

"I didn't know what I was doing, but through his guidance and the friends and mentors I've met here in the lab, I've been able to learn a lot and get that much closer to making my dreams come true," Cromer said.

He is finding that his hard work and determination are starting to pay off.

In Sorber's lab, Justin has earned a reputation as an outstanding team member who regularly helps with debugging, data collection and other tasks while remaining upbeat, even with a heavy course load that keeps him busy as many as 80 hours a week.

At the end of the spring 2023 semester, Cromer won an award recognizing him as the Outstanding Senior in the School of Computing or Applied Sciences.

It's taken some work, but the dreams that started brewing years ago are coming true. Justin and Annamarie have two dogs: a golden retriever, Jenny, and a Jack Russell terrier, Zack. The dogs, along with all Justin has learned at Clemson and in the years before, are going with them when they leave for Washington this summer.

"The biggest change I've seen in myself since coming here is a real respect for perseverance," he said. "I found that success isn't always about being the smartest person in the room. A lot of times it's just about keeping on going when everybody else might give up."


Gao named interim chair of geosciences, geological and petroleum … – Missouri S&T News and Research

Dr. Stephen S. Gao, Curators' Distinguished Teaching Professor of geosciences and geological and petroleum engineering (GGPE) at Missouri University of Science and Technology, has accepted the role of interim chair of his home department effective Aug. 1.

For the past year, Gao has served as interim chair of computer science at Missouri S&T.

"Leading the computer science department has been a wonderful experience, but I am honored to now move back to my home department," Gao says. "Our GGPE department has some of the country's top scholars, and our students do tremendous work as well. I am excited to support everyone's great work, while also growing our student enrollment and research portfolio."

Gao is a renowned geophysicist who studies earthquakes, seismic anisotropy, receiver function analysis, and the structure and dynamics of the earth's crust and mantle. He has been a member of the Missouri S&T faculty since 2006 and is director of the university's High Performance Computing Research Center.

Gao earned a bachelor's degree in marine geology and geophysics from Ocean University of China, College of Marine Geosciences, in Qingdao. He earned a master's degree and a Ph.D. in geophysics and space physics from the University of California, Los Angeles, where he also served as a post-doctoral researcher.

Gao has won multiple awards at S&T for being an outstanding faculty member and teacher, and he was elected a fellow of the Geological Society of America in 2012.

Gao replaces Dr. Jeffrey Cawlfield, a professor of geological engineering, who has led the department since August 2021. Dr. David Borrok, vice provost and dean of the College of Engineering and Computing, says Cawlfield should be commended for his work as chair.

"Dr. Cawlfield has been a dedicated S&T faculty member since 1987, and he has done an outstanding job leading the GGPE department," Borrok says. "He has served S&T in a variety of leadership roles and has always worked hard to support his students, faculty and staff."

"Dr. Gao will have big shoes to fill, but I have no doubt that he will be up for the challenge," Borrok says. "He has already demonstrated his abilities as a leader for the computer science department, and now he will be able to guide the future of his home department."

For more information about Missouri S&T's GGPE programs, visit ggpe.mst.edu.

Missouri University of Science and Technology (Missouri S&T) is a STEM-focused research university of over 7,000 students. Part of the four-campus University of Missouri System and located in Rolla, Missouri, Missouri S&T offers 101 degrees in 40 areas of study and is among the nations top 10 universities for return on investment, according to Business Insider. For more information about Missouri S&T, visit http://www.mst.edu.


We are wasting up to 20 percent of our time on computer problems … – Science Daily

Even though our computers are better now than they were 15 years ago, they still malfunction between 11 and 20 per cent of the time, a new study from the University of Copenhagen and Roskilde University concludes. The researchers behind the study therefore see major gains for society in rethinking these systems and involving users more in their development.

An endlessly rotating beach ball, a program that crashes without saving data or systems that require illogical procedures or simply do not work. Unfortunately, struggling with computers is still a familiar situation for most of us. Tearing your hair out over computers that do not work remains very common among users, according to new Danish research.

So common, in fact, that on average we waste between 11 and 20 per cent of our time in front of our computers on systems that do not work or that are so difficult to understand that we cannot perform the tasks we want to. "And this is far from good enough," says Professor Kasper Hornbæk, one of the researchers behind the study.

"It's incredible that the figure is so high. However, most people experience frustration when using computers and can tell a horror story about an important PowerPoint presentation that was not saved or a system that crashed at a critical moment. Everyone knows that it is difficult to create IT systems that match people's needs, but the figure should be much lower, and one thing that it shows is that ordinary people aren't involved enough when the systems are developed," he says.

Professor Morten Hertzum, the other researcher behind the study, emphasises that most frustrations are experienced in connection with the performance of completely ordinary tasks.

"The frustrations are not due to people using their computers for something highly advanced, but because they experience problems in their performance of everyday tasks. This makes it easier to involve users in identifying problems. But it also means that problems that are not identified and solved will probably frustrate a large number of users," says Morten Hertzum.

The problems are only too recognisable

To examine this issue, the researchers have been assisted by 234 participants who spend between six and eight hours in front of a computer in their day-to-day work.

For one hour, the researchers asked them to report situations in which the computer would not work properly, or in which they were frustrated at being unable to perform the task they wanted.

The problems most often experienced by the participants included: "the system was slow," "the system froze temporarily," "the system crashed" and "it is difficult to find things." The participants had backgrounds as students, accountants and consultants, and several of them actually worked in the IT industry.

"A number of the participants in the survey were IT professionals, while most of the other participants were highly competent IT and computer users. Nevertheless, they encountered these problems, and it turns out that this involves some fundamental functions," says Kasper Hornbæk.

The participants in the survey also responded that 84 per cent of the episodes had occurred before and that 87 per cent of the episodes could happen again. And, according to Kasper Hornbæk, we are having the same fundamental problems today that we had 15-20 years ago.

"The two biggest categories of problems are still about insufficient performance and lack of user-friendliness," he says.

Morten Hertzum adds: "Our technology can do more today, and it has also become better, but, at the same time, we expect more from it. Even though downloads are faster now, they are often still experienced as frustratingly slow."

88 per cent use a computer at work

According to Statistics Denmark, 88 per cent of Danes used computers, laptops, smartphones, tablets or other mobile devices at work in 2018. In this context, the new study indicates that a half to a whole day of a normal working week may be wasted on computer problems.
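That "half to a whole day" figure follows directly from the study's own numbers. A back-of-envelope check in Python (the five-day week is an assumption for illustration; the hours and percentages are the ones quoted above):

```python
# Back-of-envelope check of the "half a day to a whole day per week"
# estimate, using the figures quoted in the article (an illustration,
# not the researchers' exact methodology).
hours_per_day = (6, 8)          # daily screen time reported by participants
waste_fraction = (0.11, 0.20)   # share of time lost to computer problems

low = hours_per_day[0] * 5 * waste_fraction[0]    # best case, 5-day week
high = hours_per_day[1] * 5 * waste_fraction[1]   # worst case, 5-day week
print(f"{low:.1f} to {high:.1f} hours lost per five-day week")
```

That range, roughly 3.3 to 8 hours, is indeed about half a working day at the low end and a full day at the high end.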

"There is a lot of productivity lost in workplaces throughout Denmark because people are unable to perform their ordinary work because the computer is not running as it should. It also causes a lot of frustrations for the individual user," says Kasper Hornbæk.

This means there are major benefits to be gained for society if we experience fewer problems in front of our computers. According to Kasper Hornbæk, the gains can, for example, be achieved if more resources are invested in rethinking how faults are presented to us on the computer.

"Part of the solution may be to shield us from knowing that the computer is working to solve a problem. In reality, there is no reason why we need to look at an incomprehensible box with commands or a frozen computer. The computer could easily solve the problems without displaying this, while it provided a back-up version of the system for us, so that we could continue to work on our tasks undisturbed," says Kasper Hornbæk.

At the same time, IT developers should involve the users even more when designing the systems to make them as easy to use -- and understand -- as possible. For, according to the researcher, there are no poor IT users, only poor systems.

"When we're all surrounded by IT systems that we're cursing, it's very healthy to ascertain that it's probably not the users that are the problem, but those who make the systems. The study clearly shows that there is still much room for improvement, and we therefore hope that it can create more focus on making more user-friendly systems in the future," concludes Kasper Hornbæk.


‘I’m not going to take it for granted’ – uta.edu

Wednesday, June 28, 2023 | Herb Booth

Minh Tram, a doctoral student in the Department of Computer Science and Engineering at The University of Texas at Arlington, has received a prestigious Science, Mathematics and Research for Transformation (SMART) scholarship from the U.S. Department of Defense (DoD).

"This is an amazing opportunity, and I'm not going to take it for granted," Tram said. "There aren't many chances for an academic to get involved with this type of program, and depending on where I'm needed, I may gain access to things that I'd never know about in the civilian world."

Tram, who has been a student at UTA since 2017, focuses his research on creating novel augmented and virtual reality capabilities for human-robot interaction. He earned bachelor's and master's degrees in computer science and just completed his first year as a doctoral student. For the last three years, he has worked in the Robotic Vision Lab, run by Assistant Professor William Beksi, which focuses on robot perception, human-robot interaction and autonomous systems.

Tram uses simulation software to observe and train humans and robots to perform more intuitively and alleviate the amount of processing necessary to do a task. His work could be used, for example, to make autonomous robots require less oversight and maintenance.

"Minh is one of my best students," Beksi said. "He is doing pioneering research in my lab that will enable us to improve machine learning via virtual reality. This will have broad positive impacts across a variety of fields, including robotics, education and workforce development."

SMART scholarship recipients earn full tuition, a significant annual stipend and a book and health allowance. They also complete a summer internship at a DoD facility and are assigned an experienced mentor. The full scholarship and other benefits allow participants to focus on complex research to further the DoD's mission and create a lasting impact. The program is a one-for-one commitment: For every year of degree funding, the recipient commits to working for a year with the DoD as a civilian employee.

Written by Jeremy Agor, College of Engineering


Diagnosed with a brain tumour at 12, hardworking Computer … – Derry Journal

Lauren Monaghan always took a keen interest in computers and maths.

Her decision to attend Ulster University was inspired by her software development teacher, who rated the course highly, and by encouragement from friends who were already at Magee.

But the road to university wasn't always smooth.

"I was diagnosed with a very rare brain tumour at age 12. Soon after, I went through two surgeries, with the second leading to a number of complications and further medical conditions.

"I had to take time out of school to travel to the USA for trials and treatment, as well as to adjust to new medications to keep my body functioning. I still have rough days coping with the complications of my tumour, but I've learned to adapt and make the most of the good days."

Lauren worked with a tutor from home to continue her studies. She was advised by her consultants to leave education after her GCSEs.

"Despite the doctors' advice, I knew I wanted to keep going. I wanted to do my A-Levels and go on to higher education. The complications after the surgery and radiotherapy treatment I had undergone were challenging, but I worked hard to catch up with classmates and get the grades I needed."

Lauren caught up with her classmates with the help of teachers and support staff and was on track to sit her A-Levels when another health issue emerged.

"Me and my family always joke that it wasn't the very serious brain tumour that held me back a year but an unrelated medical problem that required surgery. No one saw that coming."

Lauren received treatment, went back to school to sit her A-Levels and was successful in her application to Ulster University in 2019.

She found her lecturers and Student Wellbeing very supportive. Lauren enjoyed campus life. Her mum made the trip a few times a week from Ballymoney to bring her to Derry.

In first year, Lauren joined the Procraftination arts and crafts society to meet people and get involved in the social side of university life.

When things moved online, Lauren became chair, hosting Zoom meetings throughout lockdowns.

After completing a placement as a software engineer, Lauren returned to campus for her final year and began thinking about what she wanted to do next.

"I was contacted on LinkedIn to apply for a two-year graduate programme at Citi. It was my very first graduate job interview, so I was quite nervous! But I'm delighted to say I was successful with my application and I am due to start my role this September as a graduate software intern."

Outside of academia, Lauren is involved with various charities:

"I am currently a committee member for Brainwaves NI, a volunteer-led charity which funds research into brain tumours and provides support to brain tumour patients and their families in Northern Ireland.

"I also perform with Cancer Fund for Children's Care-Free Choir. This charity helped me connect with other young people who understood what I was going through. These charities, as well as Young Lives vs Cancer, supported me and my family, for which I am eternally grateful."


A quantum forest grows in New Haven – Yale News

On a recent foggy Friday night, with old hip hop songs filling the air and hundreds of happy people taking in the International Festival of Arts & Ideas, a small quantum forest sprouted up on the New Haven Green.

Dozens of illuminated beacons, each one 6 feet tall, were spread out in rows, not far from the festival's main stage. As people walked, danced, and posed for selfies among them, the beacons lit up in blue, green, red, or white.

"Can you tell us how this works?" asked Edward Sullivan, as he and Yvonne Benjamin, both of Shelton, Connecticut, approached Florian Carle, the official tour guide for this mini maze.

"What do you know about quantum physics?" asked Carle, who, in his day job, is manager of the Yale Quantum Institute (YQI).

As Carle explained it, what looked like a random light show was, in fact, the orchestration of a sophisticated quantum computing technology that has taken years to develop, and that may someday change daily life.

More specifically, the site-specific exhibit, called "Beneath the Green, the Quantum" and created by artist Stewart Smith during his year-long residency at YQI, employs Yale's latest research into quantum error correction, a concept in quantum computing research that is one of the biggest remaining hurdles in creating full-scale quantum computers that perform some calculations orders of magnitude faster than traditional computers.

Quantum computers store information in the form of quantum bits, known as qubits, that are notoriously finicky and susceptible to errors. Researchers are attempting to correct these errors with additional stabilizer qubits that are interspersed with the data qubits. Earlier this year, Yale researchers more than doubled the lifetime of a qubit, effectively showing that quantum error correction is a practical tool for building quantum computers.
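The idea of spending extra resources to protect stored data can be illustrated with the simplest classical ancestor of these schemes, the three-bit repetition code. This is only an analogy (real quantum error correction cannot simply copy a quantum state, and stabilizer measurements are more subtle), but it shows how redundancy plus a decoding rule turns an unreliable memory into a reliable one:

```python
# Classical analogy for the error-correction idea described above: a
# three-bit repetition code, in which redundancy plays a role loosely
# like the one stabilizer qubits play for data qubits. Illustrative
# only; this is not how quantum error correction is implemented.
def encode(bit: int) -> list[int]:
    return [bit, bit, bit]          # store three redundant copies

def decode(codeword: list[int]) -> int:
    return int(sum(codeword) >= 2)  # majority vote corrects a single flip

word = encode(1)     # [1, 1, 1]
word[0] ^= 1         # a single bit-flip error: [0, 1, 1]
print(decode(word))  # -> 1: the majority vote recovers the stored bit
```

With two or more flips the vote fails, which is why real codes monitor and correct errors continuously rather than waiting for them to accumulate.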

"For this piece, I knew I wanted to work with light, and I knew I wanted something that would be meaningful for the faculty and students," said Smith, a graphic designer who earned his MFA in graphic design from Yale in 2008. "Right now, there are faculty and students here who are working on algorithms to correct qubit errors; they are at the forefront of their field."

After consulting with YQI researchers, Smith worked with Carle and computer science graduate student Yue Wu to create the conceptual themes and software for the art installation. Alpay Kasal and his experiential design company Bignoodle in San Francisco created the custom hardware.

The beacons use proximity sensors and custom-built software that runs in a Web browser to allow spectators to become active participants. The movement of each participant through the art installation creates a simulation of quantum errors. The correction software goes to work correcting these errors, causing the beacons to change color.

A red light indicates an error has been located; a blue light indicates the error has been resolved. Green and purple lights signal that errors are overwhelming the system.
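Under the hood, that colour scheme is just a mapping from error-correction states to colours. A minimal sketch of the mapping described above (function and state names are illustrative, not the installation's actual code):

```python
# A minimal sketch of the beacon colour mapping described in the text.
# The states and names here are illustrative stand-ins, not the
# installation's actual software.
def beacon_colour(error_located: bool, error_resolved: bool,
                  overwhelmed: bool) -> str:
    if overwhelmed:
        return "green/purple"  # errors are overwhelming the system
    if error_resolved:
        return "blue"          # a located error has been resolved
    if error_located:
        return "red"           # an error has been located
    return "white"             # idle: nothing to signal

print(beacon_colour(error_located=True, error_resolved=False,
                    overwhelmed=False))  # -> red
```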

YQI likely will recreate the installation in the fall, on campus, Carle said.

For now, at the Green, YQI's quantum forest quickly attracted scientists and non-scientists alike.

"This is, by far, the most ambitious science outreach idea we've done," said A. Douglas Stone, the Carl A. Morse Professor of Applied Physics and Physics, and YQI's deputy director, as he surveyed the scene. "It has real complexity to it."

But not too complex for festivalgoers Sullivan and Benjamin, who happily danced their way through the lights after getting their tutorial.

"It was very interesting, especially after getting an explanation about how it works," Sullivan said. "We didn't get it at first, until we walked away from the first light, and it turned blue."

"This definitely helped connect the dots," Benjamin said.


Mosquito watch: USF researchers urge use of global dashboard in … – University of South Florida

By: Cassidy Delamarter, University Communications and Marketing

Researchers at the University of South Florida are urging the public to take photos of mosquitoes and share them to help track and mitigate the potential spread of malaria. The Florida Department of Health has issued a statewide mosquito-borne illness advisory after four confirmed cases of malaria in Sarasota County. An additional case has also been reported in Texas.

Ryan Carney, assistant professor of integrative biology, and Sriram Chellappan, professor of computer science and engineering, developed mosquitodashboard.org, which utilizes data provided by ordinary citizens and artificial intelligence to identify the location and species of disease-carrying mosquitoes.

Funded in part by the National Science Foundation, the public dashboard serves as an aggregation of data from multiple smartphone apps, including NASA's GLOBE Observer, iNaturalist and Mosquito Alert, where people are encouraged to be citizen scientists and upload photos of any mosquitoes that they find. The data from each app is displayed on the dashboard, which features an interactive map that allows users to analyze mosquitoes near them and around the world.

"It would be phenomenal for citizen scientists in Sarasota County and beyond to download and use our partner apps," Carney said. "Citizen scientists with smartphones can serve as extra sets of eyes to help monitor these malaria mosquitoes, in locations and at a scale otherwise impossible via traditional mosquito trapping methods. Importantly, by contributing valuable data on exactly where these malaria mosquitoes are found in their community, everyday citizens can help guide local mosquito surveillance and control programs."

By leveraging the photos uploaded, the team has gathered more than half a million images, allowing their artificial intelligence algorithm to better identify mosquitoes in the adult and larval stages, a critical element in mitigating mosquito-borne diseases. By identifying the species of mosquito, the team can determine its potential for carrying diseases and alert local authorities.

"Advances in artificial intelligence algorithms yield novel technologies for accurate, fast and large-scale surveillance of malaria-spreading mosquitoes," Chellappan said. "The impact of these technologies is significantly amplified when fueled by data from the general public, the consequence of which greatly strengthens our fight against malaria."

These technologies have proven successful locally and globally. In Tampa Bay, the team recently examined the abundance and ecological drivers of Aedes aegypti, a mosquito that carries dengue, yellow fever and Zika. They hope their study will serve as a framework for leveraging mosquito abundance data to inform habitat models and local control efforts. With that, the team will further examine the abundance of mosquitoes capable of transmitting malaria in Florida and Texas.

"Ultimately, the strategy is to further deploy our arsenal of next-generation digital technologies to enable more accurate and precise surveillance and control of mosquitoes carrying deadly diseases in Florida and beyond," Carney said.

Carney and Chellappan are working to obtain additional funding to make the technology even more accessible around the globe.


Professor AI: Harvard is planning to deploy ChatGPT-like bot as instructor in Computer Science – Firstpost

Harvard University is planning to deploy a ChatGPT-like AI bot as an instructor in one of its Computer Science courses. Human professors of the course say the aim is to achieve a teacher-to-student ratio of 1:1 in Harvard's CS50, one of its most popular courses, using AI

Having already taken away jobs in the IT industry, journalism and content creation, artificial intelligence is now posing a threat to the teaching profession, particularly in the field of coding.

Harvard University is at the forefront of incorporating AI into its coding program, as it plans to introduce an AI chatbot similar to ChatGPT as an instructor in its renowned Computer Science 50 course.

Making an AI-based instructor

The human instructors of the programme propose developing the AI teacher using OpenAI's advanced GPT-3.5 or GPT-4 models, showcasing Harvard's dedication to utilizing cutting-edge AI technology for educational purposes. The program is set to commence in September, and enrolled students will be encouraged to utilize this AI tool.

According to CS50 professor David Malan, the aim is to achieve a teacher-to-student ratio of 1:1 in CS50 by leveraging AI. The AI chatbot will provide students with software-based tools that can support their learning around the clock, accommodating their individual preferences and pace.

This personalised support has been challenging to deliver on a large scale through platforms like edX and OpenCourseWare, making these features beneficial for both on-campus and remote students.

AI's rising popularity

The introduction of the AI chatbot instructor aligns with the current surge in popularity of AI tools. OpenAI's ChatGPT, launched in November 2022, has quickly become the fastest-growing app ever, attracting a staggering 100 million active users within just two months. Users are drawn to its versatile functionality, which includes generating code, composing poetry, and writing essays.

However, concerns regarding the accuracy and potential hallucinations of AI persist, as acknowledged even by Google. The tech giant recently cautioned users that its AI-powered Bard may not always provide correct information.

AI has its pitfalls, but also limitless potential

Professor Malan acknowledges these limitations and stresses the importance of critical thinking for students when encountering AI-generated content. He emphasizes that students must exercise their own judgment when evaluating information.

Nevertheless, he remains optimistic about the future of these tools and highlights the value of feedback from both students and teachers in refining AIs capabilities. Active participation from educators and students will contribute to the ongoing improvement of this technology.


Updated Date: June 29, 2023 12:57:44 IST

View original post here:

Professor AI: Harvard is planning to deploy ChatGPT-like bot as instructor in Computer Science - Firstpost

New tool explains how AI ‘sees’ images and why it might mistake an … – Brown University

PROVIDENCE, R.I. [Brown University] Why is it that artificial intelligence systems can outperform humans on some visual tasks, like facial recognition, but make egregious errors on others such as classifying an image of an astronaut as a shovel?

Like the human brain, AI systems rely on strategies for processing and classifying images. And like the human brain, little is known about the precise nature of those processes. Scientists at Brown University's Carney Institute for Brain Science are making strides in understanding both systems, publishing a recent paper that helps to explain computer vision in a way the researchers say is accessible as well as more useful than previous models.

"Both the human brain and the deep neural networks that power AI systems are referred to as 'black boxes' because we don't know exactly what goes on inside," said Thomas Serre, a Brown professor of cognitive, linguistic and psychological sciences and computer science. "The work that we do at Carney's Center for Computational Brain Science is trying to understand and characterize brain mechanisms related to learning, vision, and all kinds of things, and highlighting the similarities and differences with AI systems."

"Deep neural networks use learning algorithms to process images," Serre said. They are trained on massive sets of data, such as ImageNet, which has over a million images culled from the web and organized into thousands of object categories. "The training mainly involves feeding data to the AI system," he explained.

"We don't tell AI systems how to process images, for example, what information to extract from the images to be able to classify them," Serre said. "The AI system discovers its own strategy. Then computer scientists evaluate the accuracy of what they do after they've been trained; for example, maybe the system achieves 90% accuracy on discriminating between a thousand image categories."
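The evaluation step Serre describes can be sketched in a few lines. The category names and scores below are purely illustrative, not output from any real model:

```python
# Hypothetical sketch of scoring a trained image classifier's accuracy.

def top1_accuracy(predictions, labels):
    """Fraction of images whose predicted category matches the true label."""
    correct = sum(1 for p, y in zip(predictions, labels) if p == y)
    return correct / len(labels)

# Toy example: 10 images, the model gets 9 category predictions right.
preds = ["tench", "shovel", "dog", "cat", "car", "fish", "tree", "ball", "cup", "hat"]
truth = ["tench", "astronaut", "dog", "cat", "car", "fish", "tree", "ball", "cup", "hat"]
print(top1_accuracy(preds, truth))  # 0.9
```

Note the second image: the true label is "astronaut" but the model answers "shovel", exactly the kind of egregious error described above, and the accuracy score alone does not reveal why it happened.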

Serre collaborated with Brown Ph.D. candidate Thomas Fel and other computer scientists to develop a tool that allows users to pry open the lid of the black box of deep neural networks and illuminate what types of strategies AI systems use to process images. The project, called CRAFT (Concept Recursive Activation FacTorization for Explainability), was a joint project with the Artificial and Natural Intelligence Toulouse Institute, where Fel is currently based. It was presented this month at the IEEE/CVF Conference on Computer Vision and Pattern Recognition in Vancouver, Canada.

Serre shared how CRAFT reveals how AI sees images and explained the crucial importance of understanding how the computer vision system differs from the human one.

CRAFT provides an interpretation of the complex and high-dimensional visual representations of objects learned by neural networks, leveraging modern machine learning tools to make them more understandable to humans. This leads to a representation of the key visual concepts used by neural networks to classify objects. As an example, let's think about a type of freshwater fish called a tench. We built a website that allows people to browse and visualize these concepts. Using the website, one can see that the AI system's concept of a tench includes sets of fish fins, heads, tails, eyeballs and more.

These concepts also reveal that deep networks sometimes pick up on biases in datasets. One of the concepts associated with the tench, for example, is the face of a white male, because there are many photos online of sports fishermen holding fish that look like tench. (Yet the system can still distinguish a man from a fish.) In another example, the predominant concept associated with a soccer ball in neural networks is the presence of soccer players on the field. This is likely because the majority of internet images featuring soccer balls also include individual players rather than solely the ball itself.
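CRAFT's actual recipe (recursive activation factorization across network layers) is more involved, but the core idea of decomposing an activation matrix into a small set of shared "concepts" can be sketched with plain non-negative matrix factorization. The toy activation matrix and the update scheme below are an assumption for illustration, not the paper's implementation:

```python
import random

def matmul(X, Y):
    """Naive matrix product of nested lists."""
    return [[sum(X[i][t] * Y[t][j] for t in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(row) for row in zip(*X)]

def nmf(A, k, iters=300):
    """Factor a non-negative matrix A (n x m) into W (n x k) and H (k x m)
    via multiplicative updates, so that A is approximately W @ H. Each of
    the k rows of H can then be read as a 'concept' shared across inputs."""
    random.seed(0)
    n, m = len(A), len(A[0])
    W = [[random.random() + 0.1 for _ in range(k)] for _ in range(n)]
    H = [[random.random() + 0.1 for _ in range(m)] for _ in range(k)]
    eps = 1e-9
    for _ in range(iters):
        WH, Wt = matmul(W, H), transpose(W)
        numH, denH = matmul(Wt, A), matmul(Wt, WH)
        H = [[H[i][j] * numH[i][j] / (denH[i][j] + eps) for j in range(m)]
             for i in range(k)]
        WH, Ht = matmul(W, H), transpose(H)
        numW, denW = matmul(A, Ht), matmul(WH, Ht)
        W = [[W[i][j] * numW[i][j] / (denW[i][j] + eps) for j in range(k)]
             for i in range(n)]
    return W, H

# Toy "activation matrix": 3 images x 4 features, built from one concept.
A = [[1.0, 2.0, 0.0, 1.0],
     [2.0, 4.0, 0.0, 2.0],
     [3.0, 6.0, 0.0, 3.0]]
W, H = nmf(A, k=1)
R = matmul(W, H)
err = max(abs(R[i][j] - A[i][j]) for i in range(3) for j in range(4))
print(err)  # reconstruction error shrinks toward 0
```

In a real setting, A would hold the network's internal activations for many image crops, and inspecting which crops load heavily on each row of H is what surfaces concepts like "fins" or "sports fisherman's face".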

Excerpt from:

New tool explains how AI 'sees' images and why it might mistake an ... - Brown University

Researchers teach an AI to write better chart captions – EurekAlert

Chart captions that explain complex trends and patterns are important for improving a reader's ability to comprehend and retain the data being presented. And for people with visual disabilities, the information in a caption often provides their only means of understanding the chart.

But writing effective, detailed captions is a labor-intensive process. While autocaptioning techniques can alleviate this burden, they often struggle to describe cognitive features that provide additional context.

To help people author high-quality chart captions, MIT researchers have developed a dataset to improve automatic captioning systems. Using this tool, researchers could teach a machine-learning model to vary the level of complexity and type of content included in a chart caption based on the needs of users.

The MIT researchers found that machine-learning models trained for autocaptioning with their dataset consistently generated captions that were precise, semantically rich, and described data trends and complex patterns. Quantitative and qualitative analyses revealed that their models captioned charts more effectively than other autocaptioning systems.

The team's goal is to provide the dataset, called VisText, as a tool researchers can use as they work on the thorny problem of chart autocaptioning. "These automatic systems could help provide captions for uncaptioned online charts and improve accessibility for people with visual disabilities," says co-lead author Angie Boggust, a graduate student in electrical engineering and computer science at MIT and a member of the Visualization Group in the Computer Science and Artificial Intelligence Laboratory (CSAIL).

"We've tried to embed a lot of human values into our dataset so that when we and other researchers are building automatic chart-captioning systems, we don't end up with models that aren't what people want or need," she says.

Boggust is joined on the paper by co-lead author and fellow graduate student Benny J. Tang and senior author Arvind Satyanarayan, associate professor of computer science at MIT, who leads the Visualization Group in CSAIL. The research will be presented at the Annual Meeting of the Association for Computational Linguistics.

Human-centered analysis

The researchers were inspired to develop VisText by prior work in the Visualization Group that explored what makes a good chart caption. In that study, researchers found that sighted users and blind or low-vision users had different preferences for the complexity of semantic content in a caption.

The group wanted to bring that human-centered analysis into autocaptioning research. To do that, they developed VisText, a dataset of charts and associated captions that could be used to train machine-learning models to generate accurate, semantically rich, customizable captions.

Developing effective autocaptioning systems is no easy task. Existing machine-learning methods often try to caption charts the way they would an image, but people and models interpret natural images differently from how we read charts. Other techniques skip the visual content entirely and caption a chart using its underlying data table. However, such data tables are often not available after charts are published.

Given the shortfalls of using images and data tables, VisText also represents charts as scene graphs. Scene graphs, which can be extracted from a chart image, contain all the chart data but also include additional image context.

"A scene graph is like the best of both worlds: it contains almost all the information present in an image while being easier to extract from images than data tables. As it's also text, we can leverage advances in modern large language models for captioning," Tang explains.

They compiled a dataset that contains more than 12,000 charts, each represented as a data table, image, and scene graph, as well as associated captions. Each chart has two separate captions: a low-level caption that describes the chart's construction (like its axis ranges) and a higher-level caption that describes statistics, relationships in the data, and complex trends.
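One way to picture a record in such a dataset is a small container holding all three chart representations plus the two caption levels. The field names and example values below are hypothetical, not the dataset's actual schema:

```python
from dataclasses import dataclass

# Hypothetical sketch of one VisText-style record; the field names and
# values are illustrative, not the real dataset format.
@dataclass
class ChartEntry:
    data_table: str          # underlying data, e.g. CSV text
    image_path: str          # rendered chart image
    scene_graph: str         # textual scene-graph representation
    low_level_caption: str   # construction details: axes, ranges, scales
    high_level_caption: str  # statistics, relationships, trends

entry = ChartEntry(
    data_table="year,sales\n2020,10\n2021,15",
    image_path="charts/0001.png",
    scene_graph="(chart (x-axis 'year' 2020 2021) (y-axis 'sales' 0 20))",
    low_level_caption="A bar chart of sales by year; the y-axis ranges from 0 to 20.",
    high_level_caption="Sales rose 50 percent from 2020 to 2021.",
)
```

Keeping all three representations side by side is what lets the researchers train otherwise-identical models on each one and compare the resulting captions.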

The researchers generated low-level captions using an automated system and crowdsourced higher-level captions from human workers.

"Our captions were informed by two key pieces of prior research: existing guidelines on accessible descriptions of visual media and a conceptual model from our group for categorizing semantic content. This ensured that our captions featured important low-level chart elements like axes, scales, and units for readers with visual disabilities, while retaining human variability in how captions can be written," says Tang.

Translating charts

Once they had gathered chart images and captions, the researchers used VisText to train five machine-learning models for autocaptioning. They wanted to see how each representation (image, data table, and scene graph), as well as combinations of the representations, affected the quality of the caption.

"You can think about a chart-captioning model like a model for language translation. But instead of saying 'translate this German text to English,' we are saying 'translate this chart language to English,'" Boggust says.

Their results showed that models trained with scene graphs performed as well or better than those trained using data tables. Since scene graphs are easier to extract from existing charts, the researchers argue that they might be a more useful representation.

They also trained models with low-level and high-level captions separately. This technique, known as semantic prefix tuning, enabled them to teach the model to vary the complexity of the captions content.
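At the input level, semantic prefix tuning can be pictured as prepending a short control prefix that tells the model which caption level to produce. The exact prefix strings below are assumptions for illustration, not necessarily the ones the VisText models use:

```python
# Hypothetical sketch of level-controlled inputs for a chart-captioning
# model; the prefix strings are assumed, not taken from the paper.

def build_input(scene_graph: str, level: str) -> str:
    """Prepend a complexity-control prefix to the chart representation."""
    prefixes = {
        "low": "translate chart to L1:",    # construction details
        "high": "translate chart to L2L3:", # statistics and trends
    }
    return f"{prefixes[level]} {scene_graph}"

src = "(chart (x-axis 'year') (y-axis 'sales'))"
print(build_input(src, "low"))
print(build_input(src, "high"))
```

At inference time, the same trained model can then emit either a low-level or a high-level caption simply by swapping the prefix, which is what makes the caption complexity customizable per user.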

In addition, they conducted a qualitative examination of captions produced by their best-performing method and categorized six types of common errors. For instance, a directional error occurs if a model says a trend is decreasing when it is actually increasing.

"This fine-grained, robust qualitative evaluation was important for understanding how the model was making its errors. For example, using quantitative methods, a directional error might incur the same penalty as a repetition error, where the model repeats the same word or phrase. But a directional error could be more misleading to a user than a repetition error. The qualitative analysis helped us understand these types of subtleties," Boggust says.
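A directional error of the kind described above can be flagged mechanically by comparing the trend a caption states against the sign of the change in the underlying data. This checker is a simplified illustration, not the researchers' evaluation code:

```python
# Hypothetical sketch of detecting a "directional error": the caption's
# stated trend disagrees with the actual direction of the data.

def trend_direction(values):
    """Direction of the overall change from first to last data point."""
    return "increasing" if values[-1] > values[0] else "decreasing"

def has_directional_error(caption: str, values) -> bool:
    actual = trend_direction(values)
    stated = "increasing" if "increasing" in caption else "decreasing"
    return stated != actual

# The data rises from 10 to 22, so this caption is a directional error.
print(has_directional_error("Sales are decreasing over time.", [10, 15, 22]))  # True
print(has_directional_error("Sales are increasing over time.", [10, 15, 22]))  # False
```

A string-level metric would penalize this one-word mistake about as much as a harmless repetition, which is exactly why the qualitative error categories matter.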

These sorts of errors also expose limitations of current models and raise ethical considerations that researchers must consider as they work to develop autocaptioning systems, she adds.

Generative machine-learning models, such as those that power ChatGPT, have been shown to hallucinate or give incorrect information that can be misleading. While there is a clear benefit to using these models for autocaptioning existing charts, it could lead to the spread of misinformation if charts are captioned incorrectly.

"Maybe this means that we don't just caption everything in sight with AI. Instead, perhaps we provide these autocaptioning systems as authorship tools for people to edit. It is important to think about these ethical implications throughout the research process, not just at the end when we have a model to deploy," she says.

Boggust, Tang, and their colleagues want to continue optimizing the models to reduce some common errors. They also want to expand the VisText dataset to include more charts, and more complex charts, such as those with stacked bars or multiple lines. And they would like to gain insights into what these autocaptioning models actually learn about chart data.

This research was supported, in part, by a Google Research Scholar Award, the National Science Foundation, the MLA@CSAIL Initiative, and the United States Air Force Research Laboratory.

###

Written by Adam Zewe

Paper: VisText: A Benchmark for Semantically Rich Chart Captioning

https://vis.mit.edu/pubs/vistext.pdf


See original here:

Researchers teach an AI to write better chart captions - EurekAlert