Category Archives: Computer Science

New CMU Robotics director says diversity is key to the institute’s future – 90.5 WESA

Matthew Johnson-Roberson, an autonomous vehicle and delivery robot developer, will be the new head of Carnegie Mellon's Robotics Institute, the university announced Thursday.

The appointment will see Johnson-Roberson return to the school where he earned a bachelor's degree in computer science in 2005. He cited a class with robotics pioneer William "Red" Whittaker as the origin of his love of robotics.

"It's an honor to come back and work with some of the same people who inspired me," Johnson-Roberson said about returning to CMU. "I couldnt ask to work with a more talented group of roboticists."

Among Johnson-Roberson's goals for the institute is bringing in new and diverse voices and ideas. Unique perspectives can generate new ideas about how to solve the world's problems with robotics, and do it better, he said.

According to Johnson-Roberson, meeting individually with students who haven't thought about robotics as a career is one way to achieve that.

"[Showing them] you could belong here. There's a space for you here and, when people do show up, making sure they feel supported," he said. "Make it a human place. A place where people feel like they have an opportunity to build robots, which is honestly one of the coolest things you could ever hope to do."

Johnson-Roberson, who earned a Ph.D. at the University of Sydney, is currently an associate professor of engineering at the University of Michigan's naval architecture and marine engineering department as well as the electrical engineering and computer science departments.

He co-directs the University of Michigan's Ford Center for Autonomous Vehicles and leads the Deep Robot Optical Perception lab. The DROP lab develops underwater robotics for ocean mapping and data collection.

He also co-founded Refraction AI, a delivery robotics company focusing on last-mile logistics. The company's four-foot-tall robots have been delivering food and other goods to customers in Ann Arbor, Mich., since 2019, and were deployed in Austin, Texas, earlier this year.

Johnson-Roberson's appointment to the Robotics Institute could mean Refraction's robots will begin to appear on Pittsburgh streets.

"I'm really hopeful!" he said, laughing. "I think that Pittsburgh is an amazing location for those kinds of deployments, because of the variety of weather and terrain."

Carnegie Mellon University officials cited Johnson-Roberson's wide-ranging robotics research and impressive resume in the university's announcement.

"Matt's expansive background and expertise equip him well to lead the development of robotic systems across RI and SCS," said Martial Hebert, dean of the school of computer science. "The Robotics Institute, the School of Computer Science and the entire Carnegie Mellon community are thrilled to welcome Matt back to campus and excited to work with him."

Johnson-Roberson said the sky is the limit when it comes to what kinds of research he will encourage students to pursue.

"We're at a really important inflection point in the trajectory of robotics," Johnson-Roberson said. "It is a larger field. There are more students interested in robotics, and people are building systems that work. We have an opportunity to determine how we want to deploy robotics in the world and how [we can] use that technology to produce the most good."

Johnson-Roberson will replace Srinivasa Narasimhan, who has directed the Robotics Institute since 2019, when Martial Hebert left the post to become dean of the School of Computer Science.


Simulating Galaxy Formation in Mesmerizing Detail for Clues to the Universe – SciTechDaily

"In astrophysics, we have only this one universe which we can observe," says Mark Vogelsberger, an MIT physics professor. "With a computer, we can create different universes, which we can check."

For all its brilliant complexity, the Milky Way is rather unremarkable as galaxies go. At least, that's how Mark Vogelsberger sees it.

"Our galaxy has a couple features that might be a bit surprising, like the exact number of structures and satellites around it," Vogelsberger muses. "But if you average over a lot of metrics, the Milky Way is actually a rather normal place."

He should know. Vogelsberger, a newly tenured associate professor in MIT's Department of Physics, has spent much of his career recreating the birth and evolution of hundreds of thousands of galaxies, starting from the very earliest moments of the universe on up to the present day. By harnessing the power of supercomputers all over the world, he has produced some of the most precise theoretical models of galaxy formation, in mesmerizing detail.

MIT Associate Professor Mark Vogelsberger has spent much of his career recreating the birth and evolution of hundreds of thousands of galaxies, starting from the very earliest moments of the universe, on up to the present day. In this portrait illustration, the background shows the topology of halo-scale gas flows around a single TNG50 system. Credit: Jose-Luis Olivares, MIT. Background figure courtesy of IllustrisTNG Collaboration.

His simulations of the universe have shown that galaxies can evolve into a menagerie of shapes, sizes, colors, and clusters, exhibiting a clear diversity in the galaxy population, which matches with what astronomers have observed in the actual universe. Using the simulations as a sort of computational movie reel, scientists can rewind the tape to study in detail the physical processes that underlie galaxy formation, as well as the distribution of dark matter throughout the universe.

At MIT, Vogelsberger is continuing to refine his simulations, pushing them farther back in time and over larger expanses of the universe, to get a picture of what early galaxies may have looked like. With these simulations, he is helping astronomers determine what sort of structures next-generation telescopes might actually be able to see in the early universe.

Vogelsberger grew up in Hackenheim, a small village of about 2,000 residents in western Germany, where nearly every night was a perfect night for stargazing.

"There was very little light pollution, and there was literally a perfect sky," he recalls.

When he was 10, Vogelsberger's parents gave him a children's book that included facts about the solar system, which he credits with sparking his early interest in astronomy. As a teenager, he and a friend set up a makeshift astronomy laboratory and taught themselves how to set up telescopes and build various instruments, one of which they designed to measure the magnetic field of different regions of the sun.

Germany's university programs offered no astronomy degrees at the time, so he decided to pursue a diploma in computer science, an interest that he had developed in parallel with astronomy. He enrolled at the Karlsruhe Institute of Technology for two semesters, then decided to pivot to a general physics diploma, which he completed at the University of Mainz.

He then headed to the University of Munich, where he learned to apply computer science techniques to questions of astronomy and astrophysics. His PhD work there, and at the Max Planck Institute for Astrophysics, involved simulating the detailed structure of dark matter and how it's distributed at small scales across the universe.

The numerical simulations that he helped to develop showed that, at small scales comparable to the size of the Earth, dark matter can clump and move through the universe in streams, which the researchers were able to quantify for the first time through their simulations.

"I always enjoyed looking through a telescope as a hobby, but using a computer to do experiments with the whole universe was just a very exciting thing," Vogelsberger says. "In astrophysics, we have only this one universe which we can observe. With a computer, we can create different universes, which we can check (with observations). That was very appealing to me."

In 2010, after earning a PhD in physics, Vogelsberger headed to Harvard University for a postdoc at the Center for Astrophysics. There, he redirected his research to visible matter, and to simulating the formation of galaxies through the universe.

He spent the bulk of his postdoc building what would eventually be Illustris, a highly detailed and realistic computer simulation of galaxy formation. The simulation starts by modeling the conditions of the early universe, around 400,000 years after the Big Bang. From there, Illustris simulates the expanding universe over its 13.8-billion-year evolution, exploring the ways in which gas and matter gravitate and condense to form stars, black holes, and galaxies.

"If you ran one of these simulations from beginning to end on a desktop computer, it would take a couple thousand years," Vogelsberger says. "So, we had to split this work among tens of thousands of computers to get to a reasonable run time of around six months."
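
As a rough sanity check on those figures: a couple thousand years of desktop time compressed into about six months of wall-clock time implies a speedup of roughly 4,000x. The snippet below works through that arithmetic; the core count is a hypothetical round number chosen only to match the "tens of thousands of computers" quoted above.

```python
# Back-of-the-envelope check of the quoted figures; both inputs are the
# article's rough numbers, not measured benchmarks.
desktop_years = 2000              # "a couple thousand years" on a single desktop
cluster_months = 6                # reported wall-clock time on supercomputers

speedup = desktop_years / (cluster_months / 12.0)
print(f"Implied speedup: {speedup:,.0f}x")          # about 4,000x

# Spread across "tens of thousands" of computers, that corresponds to well
# below perfect parallel efficiency, which is typical for tightly coupled
# gravity-plus-hydrodynamics codes. The core count here is hypothetical.
cores = 20_000
print(f"Implied parallel efficiency: {speedup / cores:.0%}")
```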

He and his colleagues ran the simulations on supercomputers in France, Germany, and the United States to reproduce the evolution of galaxies within a cubic volume of the universe measuring 350 million light years across, the largest simulation of the universe ever developed at the time.

The initial output from Illustris took the form of numbers. Vogelsberger went a step further to render those numbers into visual form, condensing the enormously complex computations into short, stunning videos of a rotating cube of the early expanding universe, sprouting seeds of swirling galaxies.

Vogelsberger and his colleagues published a paper in Nature in 2014, detailing the simulation's output, along with its visualizations. Since then, he has received countless requests for the simulations, from scientists, media outlets, and planetariums, where the visualizations of galaxy formation have been projected onto domes in high definition. The simulations have even been commemorated in the form of a German postal stamp.

In 2013, Vogelsberger joined the physics faculty at MIT, where he remembers having initial doubts over whether he could keep up with the top of the top.

"I realized very quickly that people have high expectations, but they also help you to achieve what you need to achieve, and the department is extremely supportive on all levels," he says.

At MIT, he has continued to refine computer simulations for both galaxy formation and dark matter distribution. Recently, his group released IllustrisTNG, a larger and more detailed simulation of galaxy formation. They are also working on a new simulation of radiation fields in the early universe, as well as exploring different models for dark matter.

"All these simulations start with a uniform universe, nothing but helium, hydrogen, and dark matter," Vogelsberger says. "And when I watch how everything evolves to resemble something like our universe, it makes me wonder at how far we have gotten with our understanding of physics. Humankind has been around for a short period; nevertheless, we've been able to develop all these theories and technologies to be able to do something like this. It's pretty amazing."


Collaboration on Data and Computational Sciences Announces 2021-2022 Projects to Advance Cancer Breakthroughs – HPCwire

Nov. 12, 2021 -- The Oden Institute for Computational Engineering and Sciences (Oden Institute), The University of Texas MD Anderson Cancer Center (MD Anderson) and the Texas Advanced Computing Center (TACC) have announced the second round of projects to be funded through their cooperative research and educational program in Oncological Data and Computational Sciences.

The strategic initiative between the three institutions was designed to align mathematical modeling and advanced computing methods with MD Anderson's oncology expertise to bring forward new approaches that can improve outcomes for patients with unmet needs.

Led by Karen Willcox, Ph.D., director of the Oden Institute, David Jaffray, Ph.D., chief technology and digital officer at MD Anderson, and Dan Stanzione, Ph.D., executive director at TACC, the collaborative effort continues to gain momentum with initial success of the first round of projects and its second annual retreat. Together, the institutions leverage their expertise to accelerate the development of innovative, data-driven solutions for patients, as well as to provide a solid foundation upon which further cancer research breakthroughs can be made. The initiative builds upon ongoing collaborations between the Oden Institute's Center for Computational Oncology, led by Tom Yankeelov, Ph.D., and MD Anderson's Department of Imaging Physics, led by John Hazle, Ph.D.

The latest projects -- which include new ways for identifying, characterizing and treating prostate cancer, blood-related cancer, liver cancer and skin cancer -- highlight the growing opportunity to bring computational approaches deep into cancer research and care.

"This is the beginning of a strong collaboration in oncological data and computational science between the Oden Institute, MD Anderson and TACC, and we look forward to our continued cooperation in advancing digitally-enabled efforts to end cancer," Yankeelov said.

Along with project award funding of $50,000, each collaborative team has access to 12,500 core computing hours at TACC. Ernesto Lima, Sc.D., research associate at the Oden Institute's Center for Computational Oncology and TACC, will assist all groups in their implementation of the computational aspects of the project on the high-performance computing platforms available at UT Austin.

Selected Projects

Prostate cancer is the fifth leading cause of cancer death in the United States. It affects one in every seven men and the causes are essentially unknown.

This study will use deterministic models of computational medicine, developed at the Oden Institute and informed by extensive clinical data obtained from MD Anderson. "We are integrating analyses of advanced multiparametric magnetic resonance imaging, or mpMRI, within a computational modeling framework," Venkatasen said. "The use of mpMRI has become integral to the diagnosis and monitoring of prostate cancer patients, both to assess tumor status and to guide clinical decision-making."
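
To give a flavor of what a deterministic, imaging-informed tumor model can look like, here is a minimal sketch: calibrate a logistic growth law to tumor cell counts estimated at two imaging time points, then forecast forward. The equation, parameter values and data below are generic textbook stand-ins, not the project's actual model, calibration procedure or patient data.

```python
# Minimal sketch (not the Oden Institute's actual model): fit a logistic
# growth law to tumor cell counts estimated from two hypothetical mpMRI
# scans, then forecast forward in time. All numbers are made up.
import numpy as np

def logistic_step(n, r, k, dt):
    """One forward-Euler step of dN/dt = r * N * (1 - N/k)."""
    return n + dt * r * n * (1.0 - n / k)

# Hypothetical tumor cell counts estimated from imaging at day 0 and day 30.
n0, n1 = 1.0e8, 1.6e8
k = 1.0e10                      # assumed carrying capacity (cells)
dt_obs = 30.0                   # days between scans

# Calibrate the growth rate r from the two observations (exponential-regime
# approximation, valid here since n << k).
r = np.log(n1 / n0) / dt_obs

# Forecast 90 more days in daily steps.
n, dt = n1, 1.0
for _ in range(90):
    n = logistic_step(n, r, k, dt)

print(f"Calibrated r = {r:.4f}/day, predicted cells at day 120: {n:.3e}")
```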

"The study will bring medical research one step closer to realizing patient-centric care. It will provide predictions of cancerous tumor growth for individual patients, rather than statistical guidelines," said Dr. Thomas J.R. Hughes from the Oden Institute.

Every three minutes, one person in the U.S. is diagnosed with a blood cancer. The nature of such diseases, including leukemia, lymphoma and myeloma, is complex as they tend to carry their own unique set of variables that must be considered when determining treatment. Finding more reliable ways to analyze malignancies at the single cell level would greatly assist in advancing individualized healthcare.

In this collaboration, the research team aims to map out the molecular state of hematopoietic malignancies (the presence of tumors affecting the process through which the body manufactures blood cells) in single-cell resolution.

"Advanced technologies have made it possible to achieve even higher resolution images of cellular differences, thereby providing a better understanding of the function of an individual cell in the context of its microenvironment, in this case the body's system for manufacturing blood cells," Yi said.

Chen and his team will work on the multi-omics reference atlas, a novel biomedical approach where the data sets of distinct omic groups -- genome, proteome, transcriptome, epigenome, microbiome, etc. -- are combined during analysis to provide a more comprehensive guide for studying the molecular state of hematopoietic malignancies.

"We have a novel computational methodology that allows us to integrate data produced by different single-cell modalities together to reveal novel cell populations and associated molecular signatures," Chen said. "This is particularly exciting because these capabilities are either not yet possible or are very costly to obtain without using the proposed methodology and validation strategies."
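
As a toy illustration of what "integrating data produced by different single-cell modalities" can mean in practice, the sketch below standardizes two simulated modalities per cell, concatenates them and computes a joint low-dimensional embedding with PCA. It is a schematic example on random data, not the team's methodology, which handles batch effects, sparsity and modality weighting far more carefully.

```python
# Toy multi-modal single-cell integration: z-score each modality, concatenate
# per cell, and compute a joint embedding. Illustrative only; real pipelines
# are considerably more sophisticated.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_cells = 500
rna = rng.poisson(3.0, size=(n_cells, 2000)).astype(float)       # gene expression counts
atac = rng.binomial(1, 0.05, size=(n_cells, 5000)).astype(float)  # chromatin accessibility

def zscore(x):
    return (x - x.mean(axis=0)) / (x.std(axis=0) + 1e-8)

joint = np.hstack([zscore(rna), zscore(atac)])      # cells x (genes + peaks)
embedding = PCA(n_components=20).fit_transform(joint)
print(embedding.shape)   # (500, 20) joint embedding, e.g. for clustering cell states
```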

Liver cancer remains an incurable disease that is fatal in the majority of cases, with more than 40,000 new cases estimated to be diagnosed in the U.S. in 2021.

This pilot project will focus on the liver with the interdisciplinary expertise in mathematical modeling, chemistry and interventional radiology at MD Anderson being leveraged to investigate a mechanistic framework for guiding therapy delivery.

"Thermoembolization provides a novel conceptual endovascular approach for treating primary liver tumors," Fuentes said. "The approach we are taking is unique in that it subjects the target tumor to simultaneous hyperthermia, ischemia and chemical denaturation in a single procedure."

"My team will determine the contribution of heat, pH, and hypoxia [the absence of enough oxygen in the tissues to sustain bodily functions] in the therapeutic outcome using unique in vitro platforms we have developed here at UT Austin," Rylander said.

The goal of this unique project is to develop computational tools to help pathologists make more accurate diagnoses for managing patients who encounter borderline melanocytic lesions that could potentially develop into more serious forms of skin cancer.

"Accurate discrimination between melanomas and benign nevi can be extremely difficult at times, even among expert dermatopathologists," Aung said.

Bajaj, who directs the Center for Computational Visualization in the Oden Institute, will oversee the development and application of artificial intelligence methods for the project.

"Our goals are to develop and train advanced machine (deep) learning algorithms to equivariantly transform stained images to embeddings where this discrimination is disentangled, thereby enabling rapid detection and accurate estimation of the percentage of melanocytes co-expressing MART1 and Ki67 in borderline melanocytic lesions, as well as PD-L1 in tumor cells for potential treatment with immunotherapy, using multiplex histochemical studies with tumor-specific markers," Bajaj said.
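
For readers unfamiliar with the pattern being described (stained-image patches mapped to embeddings in which benign and malignant cases separate, followed by a classifier), here is a bare-bones PyTorch sketch. It deliberately omits the equivariant transforms and multiplex-marker quantification that distinguish the actual project, and all shapes, data and class labels are hypothetical.

```python
# Bare-bones image -> embedding -> classifier sketch; not the project's
# equivariant, marker-quantifying approach. Data and shapes are hypothetical.
import torch
import torch.nn as nn

class PatchClassifier(nn.Module):
    def __init__(self, n_classes=2, embed_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, embed_dim), nn.ReLU(),
        )
        self.head = nn.Linear(embed_dim, n_classes)   # e.g. benign nevus vs. melanoma

    def forward(self, x):
        z = self.encoder(x)        # embedding in which the classes should separate
        return self.head(z)

model = PatchClassifier()
patches = torch.randn(8, 3, 128, 128)    # a batch of hypothetical stained-image patches
logits = model(patches)
print(logits.shape)                      # torch.Size([8, 2])
```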

Source: John Holden, Oden Institute for Computational Engineering and Sciences


Algorithms aren’t fair. Robin Burke wants to change that – CU Boulder Today

Scroll through an app on your phone looking for a song, movie or holiday gift, and an algorithm quietly hums in the background, applying data it's gathered from you and other users to guess what you like.

But mounting research suggests these systems, known as recommender systems, can be biased in ways that leave out artists and other creators from underrepresented groups, reinforce stereotypes or foster polarization.

Armed with a new $930,000 grant from the National Science Foundation, University of Colorado Boulder Professor Robin Burke is working to change that.

"If a system takes the advantage that the winners in society already have and multiplies it, that increases inequality and means we have less diversity in art and music and movie making," said Burke, who is also chair of the Information Science Department at CU Boulder. "We have a lot to lose when recommender systems aren't fair."

First developed in the 1990s, these complex machine-learning systems have become ubiquitous, using the digital footprints of what people clicked on, at what time and where to drive what apps like Netflix, Spotify, Amazon, Google News, TikTok and many others recommend to users. In the past decade researchers and activists have raised an array of concerns about the systems. One recent study found that a popular music recommendation algorithm was far more likely to recommend a male artist than a female artist, reinforcing an already biased music ecosystem in which only about a quarter of musicians in the Billboard 100 are women or gender minorities.

"If your stuff gets recommended, it gets sold and you make money. If it doesn't, you don't. There are important real-world impacts here," said Burke.

Another study found that Facebook showed different job ads to women than men, even when the qualifications were the same, potentially perpetuating gender bias in the workplace.

Meantime, Black creators have criticized TikTok algorithms for suppressing content from people of color.


And numerous social media platforms have been under fire for making algorithmic recommendations that have spread misinformation or worsened political polarization. "If a system only shows us the news stories of one group of people, we begin to think that is the whole universe of news we need to pay attention to," said Burke.

In the coming months, Burke, Associate Professor Amy Voida and colleagues at Tulane University will work alongside the nonprofit Kiva to develop a suite of tools companies and nonprofits across disparate industries can use to create their own customized fairness-aware systems (algorithms with a built-in notion of how to optimize fairness).

Key to the research, they said, is the realization that different stakeholders within an organization have different, and sometimes competing, objectives.

For instance, a platform owner may value profit-making, which might, in and of itself, lead an ill-crafted or unfair algorithm to show the most expensive product instead of the one that suits the user best.

An algorithm designed to assure that one underrepresented group of artists or musicians rises higher in a search engine might inadvertently end up making another group pop up less.

"Sometimes being fair to one group may mean being unfair to another group," said co-principal investigator Nicholas Mattei, an assistant professor of computer science at Tulane University. "The big idea here is to create new systems and algorithms that are able to better balance these multiple and sometimes competing notions of fairness."
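
To make "fairness-aware" concrete, here is a deliberately simple sketch of one common family of approaches: re-rank candidate items by relevance minus a penalty that grows with the exposure an item's provider group has already received, so high-exposure groups gradually yield slots to others. The scoring rule, weight and data are illustrative only and do not represent the algorithms the CU Boulder, Tulane and Kiva team is building.

```python
# Toy fairness-aware re-ranking: relevance score minus an exposure penalty
# per provider group. Illustrative only; weight and data are hypothetical.
from collections import defaultdict

def rerank(candidates, exposure, weight=0.1, k=10):
    """candidates: list of (item_id, group, relevance_score)."""
    def adjusted(item):
        _, group, score = item
        return score - weight * exposure[group]   # penalize already-exposed groups
    ranked = sorted(candidates, key=adjusted, reverse=True)[:k]
    for _, group, _ in ranked:                    # update exposure for the next request
        exposure[group] += 1
    return ranked

exposure = defaultdict(int)
candidates = [("a1", "majority", 0.92), ("a2", "minority", 0.90),
              ("a3", "majority", 0.88), ("a4", "minority", 0.85)]
print(rerank(candidates, exposure, k=2))   # repeated calls shift slots between groups
```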

In a unique academic-nonprofit partnership, Kiva -- which enables users to provide microloans to underserved communities around the globe -- will serve as co-principal investigator, providing data and the ability to test new algorithms in a live setting.

Researchers, with the help of students in the departments of Information Science and Computer Science, will conduct interviews with stakeholders to identify the organization's fairness objectives, build them into a recommender system, test it in real time with a small subset of users and report results.

No one-size-fits-all system will work for every organization, the researchers said.

But ultimately, they hope to provide a concrete set of open-source tools that both companies and nonprofits can use to build their own.

"We want to fill the gap between talking about fairness in machine learning and putting it into practice," said Burke.


Research Associate in Artificial Intelligence / Robotics, Multi-Agent Path Finding job with ROYAL HOLLOWAY, UNIVERSITY OF LONDON | 271630 – Times…

Department of Computer Science

Location: Egham
Salary: £36,438 to £43,061 per annum - including London Allowance
Post Type: Full Time
Closing Date: 23.59 hours GMT on Monday 03 January 2022
Reference: 1121-457

Full Time, Fixed Term

Outstanding candidates in Artificial Intelligence and Robotics are invited to apply for a post of Research Associate in the Department of Computer Science at Royal Holloway University of London.

The Department, established in 1968 (one of the oldest Computer Science departments in the world), is one of the top departments of Computer Science in the UK. In the most recent Research Excellence Framework (REF 2014), it ranked 11th for the quality of its research output, with a third of the publications recognised as world-leading and a further half internationally excellent (87% in total). The Department is home to outstanding researchers in algorithms and complexity, artificial intelligence, machine learning, bioinformatics, and distributed and global computing. Over the past five years, the department has undertaken an ambitious plan of expansion: fifteen new academic members of staff were appointed, new undergraduate and integrated master's programmes were created, and five new postgraduate-taught programmes were launched. The department is involved in multiple inter/multidisciplinary activities, from electrical engineering to psychology and social sciences.

The position is funded by the two-year Innovate UK grant "VersaTile -- Transforming warehousing and distribution with industry leading AI planning and system controls", held by Prof Sara Bernardini. The project will be undertaken jointly with the companies Tharsus, one of the UK's most advanced robotics companies, and JR Dynamics, an SME based in North-East England. The project stems from the significant pressure currently faced by warehousing and distribution operations to transform their systems and processes in response to increasing market demand and throughput, intense competition, and consumer preference for more variety and quicker delivery at a lower cost. The project will tackle this problem by creating and integrating an industry-leading AI-planning capability and a critical infrastructure-scale control system. The project spans service robotics, high-frequency logistics, and AI-based control system development.

The successful candidate will hold a PhD or equivalent, have a strong research record in Artificial Intelligence and/or Robotics and a strong interest in practice-driven research. Experts in AI Planning, Path Planning, Multi-Agent Systems, Autonomous Navigation and, in general, Autonomous Systems and Cognitive Robotics are particularly encouraged to apply. Experience in engaging with industry or contributing to outreach activities would also be valuable. The duties and responsibilities of this post include conducting individual and collaborative research on the themes of the project, developing software based on the needs of the project, writing reports on the advancement of the project, and producing high-quality outputs for publication in high-profile journals or conference proceedings.

This is a full-time, fixed-term post (extensions are possible), available from January 2022 or as soon as possible thereafter. We offer close interaction with industry via secondments, and collaborations with world-leading research institutions in the UK and abroad.

We also offer a highly competitive rewards and benefits package.

The post is based in Egham, Surrey, where the College is situated on a beautiful, leafy campus near Windsor Great Park and within commuting distance of London.

For an informal discussion about the post, please contact Prof Sara Bernardini on sara.bernardini@rhul.ac.uk or +44-1784-276792.

To view further details of this post and to apply please visit https://jobs.royalholloway.ac.uk. For queries on the application process the Human Resources Department can be contacted by email at: recruitment@rhul.ac.uk

Royal Holloway recognises the importance of helping its employees balance their work and home life by offering flexible working arrangements. We are happy to consider a request for flexible working for this post including part time, job share or compressed working hours.

Please quote the reference: 1121-457

Closing Date: Midnight, 3rd January 2022

Interview Date: TBC

We particularly welcome female applicants as they are under-represented at this level in the Department of Computer Science within Royal Holloway, University of London.

Further details: Job Description & Person Specification


Report: Southeast New Mexico’s rural school districts struggle in computer science – Carlsbad Current Argus


A recent report shows New Mexico has made progress in improving access to computer science education but may be struggling in some areas.

According to the 2021 State of Computer Science Education report, 44% of public high schools offered a foundational computer science course during the 2020-21 school year, compared to 32% in 2019-20 and 23% in 2018-19.

In Southeast New Mexico, 17 of the region's 33 school districts do not offer foundational computer science classes, according to Code.org's Computer Science Access Report.

Some of these districts include Jal Public Schools, Artesia Public Schools and Tularosa Municipal Schools.

In suburban areas, 50% of students have access to computer science classes. That number drops to 46% in urban areas, 43% in rural areas and 38% in towns.

Schools with a high number of students on the Free or Reduced Lunch (FRL) program also have less access, per the report.


Schools that have more than half of their students on FRL are 13% to 25% less likely to have computer science classes, according to the report.

Carlsbad Municipal Schools Superintendent Dr. Gerry Washburn said computer science and IT have become an essential part of the region's industries, like oil and gas, and there is a need for professionals in the field.

According to the report, there are an average of 2,925 job openings for computer science positions in New Mexico each month.

High schools across the region have created programs aimed at preparing students for these positions.


Carlsbad High School offers a variety of computer science classes as part of its new academy system that was implemented this year.

Under the Academy of Business Information Technology, students can take computer programming classes and even get paid to work as computer technicians for the district, according to the CHS curriculum guide.

High schools in Hobbs and Loving also offer computer science classes on topics like web design and computer graphics as an elective. Alamogordo High School and Roswell High School also have computer science classes as part of their career and technical training programs.

New Mexico is one of the states that have adopted plans to improve access to computer science education, along with Alabama, Massachusetts and Oklahoma.

"Computer science is vital for each New Mexico student's education, empowering them to skillfully navigate life, education and career opportunities," Public Education Secretary (Designate) Kurt Steinhaus said. "We are pleased to see continued improvement to computer science education in New Mexico. NMPED will continue to prioritize and promote computer science so all students have access to this promising career path."


The PED's computer science task force released a five-year plan in June 2021, which includes the creation of new policies and teacher certifications for computer science. Under the plan, every high school in the state will have higher-level computer science and IT classes by 2026.

In February 2021, state legislators passed House Bill 188, which aims to create a license endorsement in secondary computer science. New Mexico is also the first state to create two positions that oversee computer science, in the Math and Science Bureau and the College and Career Readiness Bureau.

Claudia Silva is a reporter from the UNM Local Reporting Fellowship. She can be reached at csilva2@currentargus.com, by phone at (575) 628-5506 or on Twitter @thewatchpup.


Washington People: Chenyang Lu – The Source – Washington University in St. Louis – Washington University in St. Louis Newsroom

Chenyang Lu is not a civil engineer.

For a computer scientist, though, he builds a lot of bridges. Particularly between the fields of computer science and health care.

Lu is the Fullgraf Professor in the Department of Computer Science & Engineering at Washington University in St. Louis' McKelvey School of Engineering. His research is concerned with the Internet of Things (IoT), cyber-physical systems and artificial intelligence, and he is particularly keen on how these technologies can improve healthcare.

As part of several teams with surgeons and physicians, Lu has been testing Fitbit activity trackers in studies that have shown that these relatively inexpensive wearable devices can play a valuable role in improving patient health.

"We can collect data such as step count, heart rate and sleep cycles, which we use with our machine-learning models to predict deterioration or improvement in a patient's health status," Lu said. "These efforts demonstrate tremendous potential for wearables and machine learning to improve health care."

Using data from Fitbits, for example, Lu and his collaborators have demonstrated the ability to predict surgical outcomes of pancreatic cancer patients with higher success than the current risk assessment tool.

The goal is improved health care, but where some of the most challenging problems arise, Lu finds engineering solutions. Obstacles can include subpar data, or simply not enough data to get useful information from wearable devices.

"You have to extract features using engineering techniques," Lu said. "How do we take this noisy, lousy data from wearables and extract robust and predictive features to generate something clinically meaningful and informative so we can actually predict something?"
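
A schematic of the pipeline Lu describes (noisy minute-level wearable streams condensed into a few robust daily features, then fed to a simple predictive model) might look like the sketch below. The specific features, simulated data and choice of logistic regression are hypothetical stand-ins, not the studies' actual variables or models.

```python
# Hedged sketch: minute-level wearable data -> robust daily features ->
# simple outcome classifier. All data and feature choices are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def daily_features(steps_per_min, hr_per_min):
    """Summaries chosen to be robust to spikes and gaps in the raw streams."""
    active = (steps_per_min > 0).astype(int)
    return np.array([
        np.nansum(steps_per_min),               # total daily steps
        np.nanmedian(hr_per_min),               # median heart rate (robust to spikes)
        np.nanpercentile(hr_per_min, 90),       # high-end heart rate
        np.abs(np.diff(active)).mean(),         # crude activity-fragmentation proxy
    ])

# Simulated cohort: 100 patients, one day (1,440 minutes) of data each.
X = np.vstack([
    daily_features(rng.poisson(0.5, 1440).astype(float),
                   rng.normal(75, 10, 1440))
    for _ in range(100)
])
y = rng.integers(0, 2, size=100)                # hypothetical outcome labels

model = LogisticRegression(max_iter=1000).fit(X, y)
print("training accuracy:", model.score(X, y))
```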

Getting useful information from messy data is one of the reasons his colleagues at the School of Medicine value his partnership.

"Chenyang has established himself as an expert in how to interpret and connect the dots for this high-dimensional data. That's why I think he's so prolific," said Philip Payne, the Janet and Bernard Becker Professor and director of the Institute for Informatics, associate dean for health information and data science and chief data scientist at the School of Medicine.

Prolific he may be, but Lu is eager to do more.

"We can scale this up to perfect the technology and broaden the scope so it can be used on larger groups of different types of patients," Lu said. "I look forward to collaborating with even more physicians and surgeons to expand this work."

Read more on the engineering website.


GA Dept of Ed Awards Third Round of Grants to Build Computer Science Teacher Capacity – All On Georgia

The Georgia Department of Education is awarding a third round of grants to school districts to help them build teacher capacity around computer science education, State School Superintendent Richard Woods announced today. Seventeen school districts will receive the grants, for a total allocation of $279,000.

Computer science (CS) has become a high-demand career across multiple industries, and includes skills all students need to learn. Thus far, the largest challenge for school districts in building this new discipline is building teaching capacity, though that capacity is expanding. There are currently 615 credentialed CS teachers and just under 1,000 middle and high schools in Georgia. That's up from 403 teachers in 2020 and 250 in 2019.

The grant provides funding for teachers to participate in professional learning opportunities, including credential programs, to help mitigate the remaining gap.

"Technology has emerged as a vital component of our daily lives, with its impact growing stronger each day," Lieutenant Governor Geoff Duncan said. "For Georgia to maintain its role as a national leader in economic innovation, we must continue to prioritize investments in education and workforce development. I commend the Georgia Department of Education for working to meet this growing demand for computer science professionals in the Peach State."

"We must continue preparing our children for the jobs and opportunities they'll encounter in the future, not just for the economy of today," Superintendent Woods said. "These CS Capacity Grants help school systems build a pipeline of qualified computer science teachers within their district to ensure children are prepared with the technical skills and the experience in problem-solving and real-world thinking that will serve them well in any career they choose."

Funds were awarded through a competitive application process, with priority given to school systems serving highly impoverished and/or rural communities.

The grant is aligned to GaDOE's Roadmap to Reimagining K-12 Education, which calls for setting the expectation that every child, in every part of the state, has access to a well-rounded education, including computer science.

The grant recipients are Cherokee County Schools, Colquitt County Schools, Dade County Schools, Decatur County Schools, Effingham County Schools, Emanuel County Schools, Fayette County Schools, Gainesville City Schools, Glynn County Schools, Greene County Schools, Jefferson City Schools, McIntosh County Schools, Pike County Schools, Pulaski County Schools, Seminole County Schools, Walker County Schools and Wayne County Schools.


Facebook Isn’t Shutting Down Its Facial Recognition System After All – News @ Northeastern – News@Northeastern

Facebook recently announced that it would be shutting down its facial recognition system and deleting its store of face-scan data from the billion people who opted in to the system. In a press release from Meta, the newly minted parent company of Facebook, the vice president of artificial intelligence heralded the change as one of the largest shifts in facial recognition usage in the technology's history.

Ari E. Waldman is a professor of law and computer science. He is also the faculty director for Northeastern's Center for Law, Information and Creativity. Courtesy photo

But Northeastern legal scholars see it differently, and are renewing their calls for government oversight of facial recognition and other advanced technologies.

"This is yet another example of Facebook misdirection," says Ari E. Waldman, professor of law and computer science at the university. "They're deleting the face scans but keeping the algorithmic tool that forms the basis of those scans, and are going to keep using the tool in the metaverse."

Indeed, in an interview shortly after the announcement, Meta spokesperson Jason Grosse told a Vox reporter that the company's commitment not to use facial recognition doesn't apply to its metaverse products -- a suite of interactive tools that Meta engineers are building to create a virtual, simulated environment layered over the physical one we interact in day to day.

"What this announcement amounts to is that Facebook is throwing out the data they don't need anymore," says Waldman, who also serves as faculty director for Northeastern's Center for Law, Information and Creativity. "This is like when your neighbor is playing music really loud, and they stop not because you're concerned about the sound, but because they're done practicing."

Woodrow Hartzog is a professor of law and computer science at Northeastern University. Photo by Matthew Modoono/Northeastern University

While the announcement isn't the paradigm-shifting change its Meta authors might have users believe, it does have an upside, says Woodrow Hartzog, professor of law and computer science at Northeastern.

"I appreciate the fact that this announcement gives a little bit more fuel for privacy advocates who argue that facial recognition is not inevitable, it's not invaluable, and it's not toothpaste that's out of the tube -- we can change the way we use this technology," he says.

Far from signaling an end to widespread facial recognition technology, the move by Facebook may create a vacuum in the market that other tech companies will race to fill with their own databases of paired face-name data, Hartzog says. "We may see other companies hit the accelerator now that there's a market opportunity opened up by Facebook," he says.

Together, the landscape of facial recognition and other biometric data collection software is one that requires lawmakers to create oversight, not rely upon tech companies to self-regulate, the law scholars say.

"It shows that existing laws continue to drive a need for a more robust and stable regulatory framework for biometrics and other surveillance tools," Hartzog says.

For media inquiries, please contact Shannon Nargi at s.nargi@northeastern.edu or 617-373-5718.


Tableau to add new business science tools to analytics suite – TechTarget

Self-service data science capabilities and integrations with Slack highlight the latest Tableau analytics platform upgrades.

Tableau, founded in 2003 and based in Seattle, unveiled new capabilities -- not all of which are generally available -- on Tuesday during Tableau Conference 2021, the vendor's virtual user conference.

Both the self-service data science capabilities and Slack integrations are slated for general availability in 2022. And according to Francois Ajenstat, Tableau's chief product officer, the impetus for developing both was Tableau's mission of enabling people to better see and understand data.

"What we're trying to do is make analytics easier to use for anyone, anywhere, with any data, and we're trying to do that by broadening access to analytics so that it's available to everyone," he said on Nov. 8 during a virtual press conference. "We need to make analytics dramatically easier to use -- more accessible and more collaborative."

The self-service data science capabilities aim to extend predictive analytics capabilities beyond data scientists while the integrations with Slack enable collaboration, Ajenstat continued.

"We're going to extend the advanced capabilities of the platform, [and] and we're going to bring analytics in the flow of business so that everyone can be a data person," he said.

Tableau, which was acquired by Salesforce in June 2019, introduced the concept of business science in March 2021 when it unveiled its first integration with Salesforce's Einstein Analytics in Tableau 2021.1. Business science, as defined by Tableau, means equipping business users without training in statistics or computer science with data science capabilities, using augmented intelligence and machine learning.

The integration added Einstein Discovery -- a no-code tool in Salesforce's Einstein Analytics platform -- to Tableau to give Tableau customers the ability to do predictive modeling and generate prescriptive recommendations.

In 2022, Tableau analytics platform updates will build on the initial business science capabilities enabled by Einstein Discovery with the additions of Model Builder and Scenario Planning.

Model Builder is designed to help teams collaboratively develop and consume predictive models within their Tableau workflows using Einstein Discovery. Scenario Planning, meanwhile, takes advantage of Einstein's AI capabilities to enable users to compare scenarios, build models and understand expected outcomes to better make data-driven decisions.


Once generally available next year, the two capabilities will be significant for users, according to Mike Leone, an analyst at Enterprise Strategy Group.

"The business science capabilities are very interesting to me," he said. "With solutions that enable business agility being so important today, Model Builder and Scenario Planning will prove to be a difference-maker in the future direction of modern businesses."

The key to their effectiveness will be their ability to simplify what are otherwise complex tasks, Leone continued.

"What-if analysis can be a difficult endeavor without the right data and tools, but with Tableau expanding their business science capabilities, they are looking to drastically simplify planning, prediction and outcome analysis in a collaborative way rooted in AI and self-service," he said.

Likewise, Boris Evelson, an analyst at Forrester Research, noted that Tableau is at the forefront of adding augmented analytics capabilities to its platform, but noted that the vendor is not alone in that respect.

"What we now call Augmented BI -- BI infused with AI -- is a capability that most modern BI platforms have already invested in," he said.

"In our recent [research], Tableau was found as one of the leaders in that evaluation, but so were Microsoft, Oracle, Tibco, Sisense and Domo," Evelson said, referring to other tech giants and independent vendors with popular analytics systems.

In addition to the added business science capabilities, Tableau analytics platform updates next year will include three new integrations with Slack, which Salesforce acquired for $27.7 billion in December 2020.

When ready for release, Tableau users will have access to Ask Data in Slack, Explain Data in Slack and Einstein Discovery in Slack.

Ask Data is an AI capability that enables users to query data using natural language rather than requiring them to write code, while Explain Data automatically generates explanations of data points. Both were first introduced in 2019 and were upgraded in Tableau's June 2021 analytics platform update.

"The Slack integration is very notable," Evelson said.

It's notable, he continued, because most people who use data to inform their decisions don't spend the majority of their time in their analytics platform environments. Instead, they spend most of their time in work applications and collaboration platforms.

"BI platforms are not what we call systems of work -- digital workspaces where people live 9 to 5," Evelson said. "These decision-makers' systems of work are ERP, CRM, planning/budgeting or other business applications, or productivity applications such as email, calendar, and increasingly -- due to remote work -- collaboration tools like Slack."

Similarly, Leone noted that Tableau's integrations with Slack are in line with what data workers now require and have the potential to enable new discovery.

"I think collaboration enablement is critical to the next level of data-centric adoption, and Tableau's announcements are aligned perfectly," he said. "Imagine being in a Slack channel and seeing a team member ask a data question with an instant answer. That answer could introduce a new data angle to attack a problem or spark an open discussion that leads down a new innovation path."

Beyond the expansion of its business science capabilities and Slack integrations, Tableau unveiled a host of other features both generally available now and scheduled for upcoming analytics platform updates.

Among them are security, administration, community engagement and collaboration tools.

Virtual Connections, Centralized Row-Level Security, the Tableau Exchange and Hire Me button in Tableau Public are now generally available.

Connected Applications will be available with the release of Tableau 2021.4 before the end of the year, and all other capabilities are expected to be generally available in 2022, according to Tableau.

Beyond helping users better see and understand their data, data governance is a priority for Tableau, according to Ajenstat. Features such as Centralized Row-Level Security and Share Prep Flows are aimed at addressing the trustworthiness and security of data.

"Having access to data is really important, but we need to also make sure people can trust the data," Ajenstat said. "If you can't trust the data, you can't trust the insight, and this is why we need to make trust a fundamental part of the Tableau platform."

Enterprise Strategy Group is a division of TechTarget.
