Category Archives: Computer Science
Notable Deaths 2023: Science and Technology – The New York Times
Luiz Barroso, 59
Engineer who widened Google's reach
William P. Murphy Jr., 100
An inventor of the modern medical blood bag
Don Walsh, 92
Record-breaking deep sea explorer
Frank Borman, 95
Astronaut who led first orbit of the moon
Roland Griffiths, 77
Psychedelics researcher who changed minds
Hoosen Coovadia, 83
Medical force in South Africa's H.I.V. fight
M.S. Swaminathan, 98
Scientist who helped conquer famine in India
Endel Tulving, 96
Influential theorist of the structure of memory
Ian Wilmut, 79
Scientist behind Dolly the cloned sheep
Ferid Murad, 86
Nobelist who saw how a gas can aid the heart
Douglas Lenat, 72
Scientist who tried to give A.I. common sense
John Warnock, 82
Inventor of the PDF
Sliman Bensmaia, 49
Neuroscientist who gave feeling to prosthetic limbs
W. Jason Morgan, 87
Theorist of plate tectonics
Kevin Mitnick, 59
Once the most wanted computer outlaw
Evelyn M. Witkin, 102
Geneticist who discovered how DNA repairs itself
Dr. Susan Love, 75
Public face of the war on breast cancer
John B. Goodenough, 100
Nobelist who created the lithium-ion battery
Donald Triplett, 89
Case 1 in the study of autism
Roger S. Payne, 88
Influential biologist who recorded whale serenades
Harald zur Hausen, 87
Nobelist who found the cause of cervical cancer
Virginia Norwood, 96
Inventor of a tool for mapping Earth from space
Gordon E. Moore, 94
Intel co-founder behind Moore's Law
Raphael Mechoulam, 92
Father of cannabis research
William A. Wulf, 83
Computer scientist who helped make the internet
Paul Berg, 96
Biochemist who launched genetic engineering
Charles Silverstein, 87
Psychologist who fought homophobia
K. Alex Müller, 95
Nobel-winning innovator in ceramic superconductors
Walter Cunningham, 90
Astronaut who helped pave the way to the moon
Computer Science Faculty job with KIMEP University | 37576602 – The Chronicle of Higher Education
KIMEP University
Computer Science Faculty Position Description
KIMEP University invites applications for faculty positions (Assistant/Associate/Full Professor) in Computer Science. The bachelor's program in Computer Science is newly created and will admit its first students in the 2024-2025 academic year. KIMEP University is the most prestigious and dynamic university in Kazakhstan and Central Asia and has a growing student body. Faculty are expected to teach various courses in the Computer Science program as well as some elective courses for the Business Information Systems program. The appointment will start in Fall 2024 (August 2024). Responsibilities involve teaching, research, and service.
Qualifications
Applicants must have earned a PhD in Computer Science or Software Engineering from an accredited institution and/or be working as faculty at such an accredited institution, and must have demonstrated consistent, high-quality scholarship published in SCI/SSCI and Scopus Q1-Q2 ranked journals. Candidates will be expected to continue publishing in such journals. The teaching load for this position is 3 courses (9 hours) per academic semester.
KIMEP University
KIMEP University is the leading American-style, internationally accredited, English-language academic institution. The university provides a world-class academic experience and a unique international environment to all its students and faculty. KIMEP was established in 1992 and has built a very strong regional reputation as a leading university in higher education. All academic programs are ranked among the top in Kazakhstan.
Almaty, Kazakhstan
The city of Almaty is a beautiful, modern and vibrant city situated at the base of the majestic Tien Shan Mountains in Southeast Kazakhstan. The city has a population of 2 million people and is the financial, cultural and cosmopolitan capital of Kazakhstan, as well as a center for summer and winter sports. Kazakhstan is located in the heart of Eurasia, with important commercial routes bridging Asia and Europe. Kazakhstan's dynamically changing economic, social, educational and cultural environment provides incredible opportunities for significant and original research.
Compensation
Rank and salary are competitive and commensurate with experience and qualifications. After-tax compensation compares favorably with net salaries in Western countries. Combined with a low cost of living, the salary is even more competitive in real terms.
Limited on-campus housing is available to rent. In addition to salary, the benefits package includes basic healthcare, reduced tuition rates for KIMEP courses, and a relocation allowance. Paid summer teaching is typically available. The salary is subject to a 10% income tax deduction.
Application Process
Please submit application documents via the KIMEP University HR portal: https://hr.kimep.kz/en-US/Home/Vacancy/567
Address any questions to: recruitment@kimep.kz
Closing date for submission of applications: January 31, 2024
Applications will be evaluated on an ongoing basis, and review will continue until the position is filled. Only shortlisted candidates will be contacted and invited for interviews by the BCB search committee.
Busting 3 Myths About Teaching AI in the Classroom – Education Week
The most common mental picture of an artificial intelligence lesson might be this: High schoolers in a computer science class cluster around pricey robotics equipment and laptops, solving complex problems with the help of an expert teacher.
While there's nothing wrong with that scenario, it doesn't have to look that way, educators and experts say. Teaching AI can start as early as kindergarten. Students can learn key AI concepts without plugging in a single device. And AI can be applied to just about any subject, even basketball practice.
Educators from around the world shared how they have been implementing AI in their classes on a webinar hosted earlier this month by the International Society for Technology in Education, a nonprofit that helps educators make the most of technology.
ISTE has offered professional development allowing educators to explore AI for six years, training some 2,000 educators. The nonprofit also offers sample lessons for students at every grade level that can be applied across a range of subjects.
Here's how educators who went through the training have used it in their classrooms, and busted three big myths about teaching AI concepts to K-12 students.
It's never too early to start teaching AI, educators and experts say.
Cameron McKinley, a technology integration coach for Alabama's Hoover City Schools, has taught AI concepts to kindergartners through 2nd graders. She starts by having students sort cards with pictures of different objects into categories, the same way intelligent machines sort data. Then, she has students use an AI computer program, Quickdrop. The students draw pictures for the technology to interpret.
It can be a good lesson in AI's potential for misunderstanding. For instance, the program asked one student to draw glasses, so she drew something she might drink milk or water out of. The machine, though, was looking for eyeglasses that can improve vision.
It was important that the student not get frustrated, McKinley said. "We encourage students to learn from failures of the technology," she said.
You don't need pricey devices to teach AI, educators argue.
Adam Brua, an information technology teacher at Rutland Intermediate School in Vermont, likes working on the unplugged activities ISTE recommends with his 6th grade students. In one activity, students create a graph featuring the characteristics of different animals, showing which animals have fur, four legs, a tail, and/or paws, for instance. That mirrors how machines learn to sort and categorize information.
"It's an activity any educator can do, almost anywhere," Brua said. "None of this requires expensive equipment or an advanced understanding of AI."
But these sorts of tasks still allow students to analyze AI's strengths and weaknesses, Brua said. AI technologies can do certain tasks extremely well, such as image and speech recognition, while other tasks, such as discerning emotions, are better left to humans, he said.
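As a rough, purely illustrative sketch of the idea behind that unplugged exercise, the same sorting can be expressed in a few lines of code: a simple classifier learns to group animals from their characteristics. The animals, features, and labels below are invented for the example and are not part of ISTE's lesson materials.

```python
from sklearn.tree import DecisionTreeClassifier

# Each row: [has_fur, four_legs, has_tail, has_paws]
features = [
    [1, 1, 1, 1],  # dog
    [1, 1, 1, 1],  # cat
    [0, 0, 1, 0],  # goldfish
    [0, 1, 1, 0],  # lizard
]
labels = ["mammal", "mammal", "fish", "reptile"]

# A decision tree "learns" which characteristics separate the groups,
# much like the students' graph of animal traits.
model = DecisionTreeClassifier().fit(features, labels)
print(model.predict([[1, 1, 1, 1]]))  # a furry, four-legged animal -> ['mammal']
```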
AI is a technology, sure, but there are ways to integrate it into all kinds of subjects, not just computer science.
For instance, Brandon Taylor, who volunteers as a teacher at Chicago Prep Academy, a school with a focus on student athletes, worked with his basketball player students to create an AI program that could analyze and provide feedback on skills such as shooting, dribbling, and agility through video recordings of students.
And Stacy George, an assistant professor at the University of Hawaii, worked with pre-service teachers on an AI social studies lesson. The budding teachers helped 2nd graders train a teachable machine to distinguish locally grown foods from those that must be flown into the state.
"It kept the students engaged," said one pre-service teacher in a video George shared on the webinar. "It was something different from what they're normally used to."
Internationally acclaimed computer science and health analytics expert named Dean of Ontario Tech’s Faculty of … – News
Dr. Carolyn McGregor, incoming Dean, Faculty of Business and Information Technology, Ontario Tech University
Ontario Tech University announces Dr. Carolyn McGregor AM as the new Dean of the Faculty of Business and Information Technology (FBIT), effective Monday, January 1, 2024.
Since moving from Australia to Canada in 2007 to join Ontario Tech as the university's Canada Research Chair (Health Informatics), Dr. McGregor has become an internationally renowned research leader in Big Data analytics, artificial intelligence (AI), edge (remote location) computing and data mesh (cross-domain) infrastructures.
She is the Ontario Tech Research Chair in Artificial Intelligence (AI) for Health and Wellness, and also the founding co-Director of the Joint Research Centre in Artificial Intelligence for Health and Wellness between Ontario Tech University and the University of Technology Sydney, Australia.
In addition to her academic role as a full Professor, she has held key administrative roles within FBIT, including Interim Dean since July 1, 2023, and previously Associate Dean, Research and Graduate Studies.
Dr. McGregor's leading-edge research achievements are highlighted by her international award-winning Artemis and Athena AI platforms for health, wellness, resilience and adaptation in critical care, astronaut health, firefighter training and tactical officer resilience assessment and development. Numerous film and television documentary profiles of her projects and partnerships by producers from around the world have earned major international exposure for her research and for Ontario Tech.
She has more than 200 refereed publications, more than $15 million in research funding, and three patents in multiple jurisdictions. She has deployed her Artemis platform in two hospitals in Ontario, and leads Canadian Department of National Defence research (in collaboration with Ontario Tech's ACE Core Research Facility) on new pre-deployment solutions for human performance in extreme weather. In 2022, she led a research study on the Axiom Ax-1 mission, the first all-private astronaut mission, in collaboration with the Canadian Space Agency (CSA) and NASA. She also leads the Space Health study, supported by the CSA, on the International Space Station.
She has served on national research grant-selection committees for Canada, France, Germany and the U.K., and is regularly called upon by national and global media to provide insight on the latest technology trends.
Among her many accolades, in 2014 she was named to the Order of Australia (AM), General Division, by Queen Elizabeth II for her significant service to science and innovation through health-care information systems. In 2017, she was featured in the 150 Stories series commissioned by the Lieutenant Governor of Ontario and the Government of Canada to commemorate Ontario's 150th anniversary. In 2018, she was named one of Digital Health Canada's Women Leaders in Digital Health.
She currently serves as a member of the Institute of Electrical and Electronics Engineers (IEEE) Computer Society's Board of Governors, and as a Director on the Board of Compute Ontario.
Prior to joining Ontario Tech, she led the strategic development of foundational business analytics strategies for one of the largest banks and the largest retail chain in Australia, along with many other large corporations. From these experiences in the business world, she envisioned an opportunity to reapply her analytics expertise and AI knowledge to the realm of health care, with an ongoing goal of improving health outcomes for all.
"Dr. Carolyn McGregor's pioneering research, together with her wealth of leadership experience and international recognition, position her well to lead Ontario Tech University's Faculty of Business and Information Technology as it pursues innovation to enhance the well-being of individuals, communities and our planet, and prepares its graduates to make a strong impact in their communities. The university's senior leadership team thanks Dr. McGregor for her leadership during her interim appointment, and looks forward to working with her as she takes on this new role." - Dr. Lori A. Livingston, Provost and Vice-President, Academic, Ontario Tech University
"Ontario Tech University's Faculty of Business and Information Technology is renowned for its transformative research that focuses on applying digital technologies for good; its strong industry partnerships; and its innovative undergraduate and graduate academic programs that prepare students to succeed in the workplace. I am thrilled to take on this leadership role and look forward to working with our team of diverse and innovative faculty members as we challenge and inspire students to push their own boundaries of thinking and learning, and build a brighter future together." - Dr. Carolyn McGregor, incoming Dean, Faculty of Business and Information Technology, Ontario Tech University
Modernizing the Internet’s architecture through software-defined networks – Tech Explorist
For about 30 years, the way data moves on the internet has mostly stayed the same. Now, researchers from Cornell and the Open University of the Netherlands are trying to update it. They've created a programmable network model that lets researchers and network administrators customize how data moves, giving them more control over the internet's "air traffic control" system. This could make the internet work better and be more adaptable.
When people started working on software-defined networking (SDN), they mainly focused on essential features to control how data moves through the network. However, recent efforts have looked into more advanced features, like packet scheduling and queueing, which impact performance.
One interesting concept is PIFO (push-in first-out) trees. They provide a flexible and efficient way to program how packets are scheduled. Previous studies have demonstrated that PIFO trees can express various practical scheduling algorithms, including strict priority, weighted fair queueing, and hierarchical schemes. However, the underlying properties and semantics of PIFO trees are still not well understood.
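To make the idea concrete, here is a minimal Python sketch of a single PIFO (push-in first-out) queue: packets are pushed in with a programmable rank, and the queue always releases the packet with the smallest rank first. This is an illustrative, assumption-level example rather than the researchers' implementation; the rank function is what a network operator would program to obtain strict priority, weighted fair queueing, or other policies.

```python
import heapq
from itertools import count

class PIFO:
    """A toy push-in first-out queue: enqueue anywhere (by rank), dequeue from the head."""
    def __init__(self):
        self._heap = []
        self._seq = count()  # tie-breaker keeps FIFO order among equal ranks

    def push(self, packet, rank):
        heapq.heappush(self._heap, (rank, next(self._seq), packet))

    def pop(self):
        return heapq.heappop(self._heap)[2] if self._heap else None

# Strict-priority scheduling: the rank is simply the packet's priority class.
pifo = PIFO()
pifo.push({"flow": "video", "class": 0}, rank=0)
pifo.push({"flow": "email", "class": 2}, rank=2)
pifo.push({"flow": "voip", "class": 0}, rank=0)
print(pifo.pop()["flow"], pifo.pop()["flow"], pifo.pop()["flow"])  # video voip email
```

In a PIFO tree, several such queues are composed hierarchically, so that, for instance, traffic classes are scheduled by one policy at the root while flows within a class are scheduled by another policy in the leaves.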
This new research studies PIFO trees from a programming language perspective. In the research paper, the researchers are setting the foundation for the next generation of networking technology. This includes the hardware (physical equipment) and the software (programs running on it). The goal is to create a system that can quickly adapt to different scheduling needs online.
Anshuman Mohan, a doctoral candidate in computer science in the Cornell Ann S. Bowers College of Computing and Information Science, said, "It takes time to design, test and deploy hardware. Once we've rolled it out, we are financially and environmentally incentivized to keep using that hardware. This is in tension with the ever-changing demands of those who manage networks running on that hardware."
In creating the next generation of networking technology, the research team focused on a crucial component: the network switch. This device, about the size of a small pizza box, plays a vital role in making networks and the internet work.
Switches connect devices to a computer network and manage the flow of data. They are responsible for packet scheduling, which determines how data moves through a network. Imagine the switch as handling packets of data from various users: emails, website visits, or video calls on Zoom. The switch's packet scheduler organizes and prioritizes these data clusters based on rules set by network managers. Finally, the switch sends these packets to other switches until they reach the user's device.
However, until now, it has not been easy to customize this air traffic control process. The reason is that scheduling parameters are traditionally baked into the switch by the manufacturer. Now, this rigidity doesn't work.
Mohan said, "Our work uses techniques from programming languages to explain how a wide variety of packet scheduling policies can be realized on a single piece of hardware. The users could reconfigure their scheduling policy every hour if they wanted, and, thanks to our work, find that each of those policies magically fits on the same piece of hardware."
What Are We Building, and Why? | TechPolicy.Press – Tech Policy Press
Audio of this conversation is available via your favorite podcast service.
At the end of this year in which the hype around artificial intelligence seemed to increase in volume with each passing week, it's worth stepping back and asking whether we need to slow down and put just as much effort into questions about what it is we are building and why.
In today's episode, we're going to hear from two researchers at two different points in their careers who spend their days grappling with questions about how we can develop systems and modes of thinking about systems that lead to more just and equitable outcomes, and that preserve our humanity and the planet: Batya Friedman and Aylin Caliskan of the University of Washington Information School.
What follows is a lightly edited transcript of the discussion.
Batya Friedman:
I'm Batya Friedman. I'm in the Information School at the University of Washington, a professor there and I co-direct both the value sensitive design lab and also the UW Tech Policy Lab.
Aylin Caliskan:
I am Aylin Caliskan and I'm an assistant professor at the Information School. I am an affiliate of the Tech Policy Lab right now. I am also part of the Responsible AI Systems and Experiences Center, the Natural Language Processing Group, as well as the Value Sensitive Design Lab.
Batya Friedman:
Aylin is also the co-director-elect for the Tech Policy Lab. As I am winding down on my career and stepping away from the university, Aylin is stepping in and will be taking up that pillar.
Justin Hendrix:
And we have a peculiar opportunity during this conversation to essentially talk about that transition, talk a little bit about what you have learned, and also to look at how the field has evolved as you make this transition and into retirement and turn over the reins as it were.
But Dr. Friedman, I want to start with you and just perhaps for my listeners, if they're not familiar with your career and research, just ask you for a few highlights from your career, how your work has influenced the field of AI bias and consideration around design and values and technological systems. What do you consider your most impactful contribution over these last decades?
Batya Friedman:
Well, I think one clear contribution was a piece of work that I did with Helen Nissenbaum back in the mid-nineties. Actually, we probably began in the very early nineties on bias in computer systems, published in 1996, and at that time I think we were probably a little bit all by ourselves working on that. I think the journal didn't quite know what to do with it at the time, and that's a paper that, if you look at the trajectory of its citations, had a very slow uptake. And then, as computing systems have spread in society over the last five to seven years, we've seen just an enormous reference back to it. Another sense of impact comes from the work I've done, which is not just around bias but around human values more generally and how to account for those in our technical work. Just as one example of evidence of impact, in the Microsoft responsible AI work and impact assessments that they published within the last year, they acknowledge heavily drawing on value-sensitive design and its entire framework in the work that they've done.
Justin Hendrix:
I want to ask you just to maybe reflect on that work with Helen Nissenbaum for a moment, and some of the questions that you were asking, what is a biased computer system? Your examples started off with a look at perhaps the way that flight reservation systems work. Can you kind of cast us back to some of the problems that you wanted to explore and the way that you were able to define the problem in this kind of pre-web moment?
Batya Friedman:
Well, we were looking already at that time at ways in which information systems were beginning to diffuse across society and we were beginning to think about which of those were visible to people and which of those were in some sense invisible because they were hidden in the technical code. In the case of airline reservation systems, this has to do with what shows up on the screen. And you can imagine too that algorithms, that technical algorithms where someone is making a technical decision. I have a big database of people who need organs and everyone in the database is stored in alphabetical order. I need to display some of those. And so it's just an easy technical decision to start at the beginning of the list and put those names up on the screen. The challenge comes when you have human beings and the way human beings work is once we find a match, we're kind of done.
So you sure wish in that environment, if you needed an organ, that your last name started with an A and not a Z. So it's starting to look at that and trying to sort out where the sources of bias are coming from: which are the ones that already pre-exist in society, like redlining, which we're simply embedding into the technology, almost bringing them over; which of them are coming from just making good technical choices without taking context into account, but then once you embed that in a social environment, bias may emerge. And then also starting to think about systems that, given the environment they were developed for, may do a reasonable job managing bias, which is never perfect, but then when you use them with a very different population, a very different context, different cultural assumptions, then you see what bias emerges. And so at that time we identified these three broad sources of bias in systems: pre-existing social bias, technical bias from just technical decisions, and then this category of emergent bias. And those categories have stood the test of time. So that was way back in the mid-nineties and I think they're still quite relevant and quite helpful to people working, say, in generative AI systems today.
Justin Hendrix:
That perhaps offers me the opportunity to ask Dr. Caliskan a question about your work, and maybe it's a compound question, which is to describe the work that you've been doing around AI bias and some of the work you've done looking specifically at translation engines. How do you see the frameworks, the ideas that come from Dr. Friedman's work sort of informing the research you're doing today and where do you see it going in future?
Aylin Caliskan:
This is a great question. In 2015 and '16, I was frequently using translation systems, statistical machine translation systems, and I kept noticing biased translation patterns. For example, one of my native languages is Turkish, and Turkish is a gender-neutral language. There is one pronoun, "o," meaning he, she, it or they. And my other native language is Bulgarian, and it's a grammatically gendered language, more gendered than English, and it has the Cyrillic alphabet. So I would frequently use translation to text my family, for example, in Bulgaria. And when I was translating sentences such as "O bir doktor" and "O bir hemşire," meaning he or she is a doctor, he or she is a nurse, the outcomes from translation systems were consistently "he is a doctor," "she is a nurse." And then we wanted to understand what is happening with natural language processing systems that are trained on large-scale language corpora and why they are exhibiting bias in decision-making processes such as machine translation generating outputs.
And we couldn't find any related work or any empirical studies except Batya's work from 1996, "Bias in Computer Systems." And then we decided to look into this in greater detail, especially as language technology started becoming very widely used since its performance was improving, given all the developments in artificial intelligence, computing and information systems. Studying the representations in the language domain, which you can think of as natural language processing models and the way they perceive the world, the way they perceive language, I found that perception is biased when it comes to certain concepts or certain social groups. For example, certain names that might be more representative of underrepresented groups or historically disadvantaged groups were closer in the representational space to more disadvantaging words, whereas historically dominant groups' representations, or words related to them, were mathematically closer in the representational space to more positive words.
And then we developed a principled and generalizable method to empirically study bias in computer systems, and found that large-scale language corpora are a source of the implicit biases that have been documented in social cognition in society for decades. And these systems that are trained on large-scale sociocultural data embed the biases that are in human-produced data, reflecting systemic inequities, historically disadvantaging data and biases related to the categories Batya mentioned. And over the years, we have shown that this generalizes to artificial intelligence systems that are trained on large-scale sociocultural data, because large-scale sociocultural data is a reflection of society, which is not perfect. And AI systems learn these reflections in imperfect ways, adding their own, for example, emergent biases and associations as well. And since then I have been focusing on this topic, and it is a great coincidence that the person who contributed foundational work in this area is at the same school as I am.
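For readers unfamiliar with how such associations are measured, here is a toy, illustrative sketch of the underlying idea: in an embedding space, association can be quantified as cosine similarity between word vectors. The vectors below are invented for the example and are not from the study; a real analysis would use embeddings trained on large corpora.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two word vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Made-up 2-d vectors purely for illustration.
emb = {
    "doctor": np.array([0.9, 0.2]),
    "nurse":  np.array([0.2, 0.9]),
    "he":     np.array([0.8, 0.3]),
    "she":    np.array([0.3, 0.8]),
}

# If "doctor" sits closer to "he" than to "she" in the learned space, a translation
# or generation system built on that space can reproduce the stereotyped association.
print(cosine(emb["doctor"], emb["he"]), cosine(emb["doctor"], emb["she"]))
print(cosine(emb["nurse"], emb["he"]), cosine(emb["nurse"], emb["she"]))
```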
Justin Hendrix:
Dr. Friedman, when you think about this trajectory of the initial foundational work that you were doing close to 30 years ago and the work that's just been described, do you think about this sort of trajectory of where we've got to in both our understanding of these issues and perhaps also our societal response or the industry response or even the policy response? Do you think we've really made progress? I mean, this question around bias and AI systems bias and technological systems generally it's better understood, but to some extent, I don't have exact numbers on this, it seems like a bigger problem today perhaps than it's ever been. Is that just a sort of function of the growth of the role of technology in so many aspects of society?
Batya Friedman:
Well, one, at a certain point just has to say yes to that, right? Because if we had, as was predicted many years ago, only five computers in the world and it wasn't used in very many sectors of society, then I don't think we would be as concerned. So certainly the pervasive, widespread, pervasive uptake is part of the motivation behind the concern. I think an interesting question here to ask is in what ways can we hold ourselves accountable, both as technologists and also as members of society and governments and private sector for being alert and checking these issues as they emerge? So for example, I talked about a situation where you have a database, everybody's in alphabetical order, and then you display that data in alphabetical order on a screen that can only list 20 names at a time. We know that's problematic. And initially, we didn't know that was problematic.
So if you did that say 30 years ago, there would be unfortunate biases that would result and it was problematic. But now that we know that's a problem, I would say any engineer who builds a system in that way should be held accountable, that would be actually negligent. And this is how we have worked in engineering for a long time, which is as we develop systems, as we gain experience with our methods and techniques, what we consider to be a best practice changes. And the same is true say for building reliable systems or correct systems. We can't build a reliable or correct system, a fully reliable or correct system yet we still hold that out as an ideal. And we have methods that we hold ourselves accountable to. And then if we have failures, we look to see if those methods were used and if they were, then we try to mitigate the harms but we don't cry negligence. And I think the same things can apply here.
So then to your question, I would say we have a lot less experience at this moment in time with understanding what are the methods that can really help us identify these biases early on and in what ways do we need to remain alert? How can we diagnose these things? How can we mitigate them as they unfold? We know that many of these things we will be able to identify or see in advance. So some we can, but other things are going to unfold as people take up these systems. And so we know that our processes also need to engage with systems as they're being deployed in society. And so that's a real... In some ways, a shift in terms of how we, at least with computational systems, how we think about our responsibilities towards them. If I were talking about building bridges, you would say, oh yes, of course you need a maintenance plan. You need people examining the bridge once a year to see if there are new cracks, mitigating those cracks when they happen. So we know how to do this as engineers with other kinds of materials. We're less experienced with digital materials.
Justin Hendrix:
We are though seeing a sort of industry pop up around some of these ideas, folks who are running consultancies, building technological tools, et cetera, to deliver on the ability to investigate various systems for bias. Some of that's being driven by laws. I'm sitting in New York City; there's a law around bias in automated employment decisions that's recently come into effect, for instance. What do you make of that? What do you think of the, I suppose, commercialization of some of your ideas?
Batya Friedman:
Let's go back to building bridges or I spent a lot of time actually in the Bay Area, so I'm familiar with earthquakes. And thinking by analogy, if we can build a building, if I use the very best techniques we know and it can withstand a 6.0 Earthquake, and we have a 3.0 Earthquake and the building collapses, then I'm going to be looking at, well, what were the processes that were used? And I'm going to make that cry of negligence and I have a sense of standards and the people who are doing the work are going to be held accountable. If on the other hand it was a 9.0 earthquake, and we actually don't know how to build for that, we're going to consider that a tragedy, but we aren't going to say that the engineers did a poor job. So I think one of the first things we need to be able to do is take an honest look at where we are with respect to our methods and techniques and best practices.
I think we're at the very beginning. Like anything, we will only get good at something if we work really hard at it. So I think that we need to be investing a lot of our resources in how to develop better methods for identifying or anticipating biases early on and techniques for having ways in which those can be reported and ways in which they can then be mitigated. And those techniques and processes need to take into account not just those people who are technically savvy and have access to technology, but to recognize that people who may never even put their hands on the technology might be significantly affected by biases in the system and that they need ways that are perhaps non-technical ways of communicating harms that they're experiencing and then having those be taken up seriously and addressed. So I would say that we're at the very beginning of learning how to do that.
I would also observe that many of the resources are coming from certain places and those places and people who are making those decisions have certain interests. And so can we impartially look at those things so that we take a broader swath of the stakeholders who are impacted, so that when we start to identify where and how the stakeholders need to be accounted for and where and how the resources are being allocated to ensure we develop methods that will account for these stakeholders, there's something even-handed happening there. So a lot of this is about process and a lot of the process that I've just sketched is fundamental to value-sensitive design, which is really how do we foreground these values and use them to drive the way in which we do our engineering work, or in this case we view policy as a form of technology. So one moves forward on the technical aspects and the policy and regulatory aspects as a whole, and that broadens your design space. So to your question, I would say we're at the very beginning and a really critical question to ask is, those forces that are moving forward, are they themselves representing too narrow a slice of society and how might we broaden there? How do we do that first assessment? And then how might we broaden that in an even-handed manner?
Justin Hendrix:
Dr. Caliskan, can I ask you, as you think about some of the things that are in the headlines today, some of the technologies that are super hot at the moment, large language models, generative AI more broadly, is there anything inherent perhaps in those technologies that makes looking for bias or even having some of these considerations any more difficult? There's lots of talk about the challenges of explainability, the black box nature of some of these models, the lack of transparency in training data, all the sorts of problems that would seem to make it more difficult to be certain that we're following the kinds of best practices that Dr. Friedman just discussed.
Aylin Caliskan:
This is a great question. Dr. Friedman just mentioned that we are trying to understand the landscape of risks and harms here. These are new technologies that became very popular recently. They've reached the public recently, although they have been developed for decades now. And in the past we have been looking at more traditional use cases based on, for example, decision-making systems, for example, in college admissions, resume screening, employment decisions, or representational harms that manifest directly in the outputs of AI systems. But right now, the most widely used generative AI systems are typically offered by just a few companies. And they have the data about what these systems are being used for. And since many of them are considered general-purpose AI systems, people might be using them for all kinds of purposes, to automate mundane tasks or to collaborate with AI. However, we do not have information about these use cases, and such information might be proprietary and might bear on market decisions in certain cases, but without understanding how exactly these generative AI systems are being used by millions if not billions of people, we cannot trivially evaluate potential harms and risks.
We need to understand the landscape better so that we can develop evaluation methods to measure these systems that are not transparent, that are not easy to interpret. And once we understand how society is co-evolving with these systems, we can develop methods not just to measure things and evaluate potential harms, but also think about better ways to mitigate these problems that are socio-technical, where technical solutions by themselves are not sufficient, and we need regulatory approaches in this space as well as raising public awareness, as Dr. Friedman mentioned. Stakeholders, users: how can they understand how these systems might be impacting them when they are using them for trivial tasks? What kinds of short-term and long-term harms might they experience? So we need a holistic approach to understand where these systems are deployed, how they are being used, how to measure them, and what can be done to understand and mitigate the harms.
Batya Friedman:
So I'd like to pick up on one of the things that you mentioned there, which is that there are very large systems, language systems that are being used for all kinds of mundane tasks. And I'd just like to think about that for a minute, have us think together about that. So I'm imagining a system, all kinds of things in my life, this system is now becoming the basis in which I am engaging in things. It begins to structure the language that I use. It not only structures language, but it structures in certain ways, thought. And I want to contrast that with a view of human flourishing where the depth and variety of human experience, the richness of human experience is the kind of world that we want to live in, where there are all kinds of different ways of thinking about things, cultural ways, language poetic ways, different kinds of expressions.
Even what Aylin was talking about in the beginning, she grows up speaking Turkish and Bulgarian and now English, right? Think of her ability for expression across those languages. That's something I'll never experience. So I think another question that we might ask separate from the bias question perhaps related, but separate has to do with a certain kind of homogenization as these technologies pervade so much of society and even cross national international boundaries embedded in them are ways of thinking and what happens when over time. And inter-generationally, you think of young people coming of age in these technologies and absorbing almost in the background, in their ocean behind them, a very similar way of thinking and being in the world. What are the other things that are beginning to disappear? And I wonder if there isn't a certain kind of impoverishment about our collective experience as human beings on the planet that can result from that.
And so I think that's a very serious concern that I have. And beyond that specific concern, what I want to point out is that arriving at that concern comes from a certain kind of, I would say principled systemic way of thinking about what does it mean if we take this technology seriously and think of it at scale in terms of uptake and over longer periods of time, what might those implications be? And then if we could agree on a certain notion of human flourishing that would allow for this kind of diversity of thought, then that might really change how we wanted to use and disseminate this kind of technology or integrate it into our everyday lives. And we might want to make a different set of decisions now then the set of decisions that seem to be unfolding.
Justin Hendrix:
I think that's a fairly bald critique of some of the language we're hearing from Silicon Valley entrepreneurs who are suggesting that AI is the path to abundance, that is the path to some form of flourishing that seems to be mostly about economic prosperity. Do you think of your ideas as sort of standing in opposition perhaps to some of the things that we're hearing from those Silicon Valley leaders?
Batya Friedman:
I guess, what I would step back and say is what are the things that are really important to us in our lives? If we think about that societally, we think about that from different cultural perspectives. What are the things that we might identify? And then to ask the question, how can we use this technology to help us realize those values that really matter to us? And I would also add to that thinking about the planet. Our planet is quite astonishing, right? It is both finite and regenerative to the extent that we don't destroy the aspects that allow for regeneration. And so I think another question we can also ask about this technology, it depends on data, right? And where does data come from? Well, data comes from measurement, right? Of something, somehow. Well, how do we get measurement? Well, somehow we have to have some kind of sensors or some kind of instrumentation or something such that we can measure things, such that we can collect those things all together and store them somewhere, such that we can operate on them with some kinds of algorithms that we've developed.
Okay, so you can see where this is going, which is if you take that at scale, there's actually a huge amount of physical infrastructure that supports the kind of computation we're talking about for the kind of artificial intelligence people are talking about now. So while on the one hand we may think about AI as something that exists in the cloud, and the cloud is this kind of ephemeral thing. In fact, what the cloud really is, is a huge number of servers that are sitting somewhere and generating a lot of heat, so need to be cooled, often cooled with water, often built with lots of cables, using lots of extractive minerals, etc, etc. And not only that, but the technology itself deteriorates and some needs to be replaced at a certain number of years, whether it's five years or 10 years or 25 years. When you think about doing this at scale, the magnitude of that is enormous.
So the environmental impact of this kind of technology is huge. And so we can ask ourselves, well, how sustainable, how wise is that of a choice to build our society based on these kinds of technologies that require that kind of relationship to materials? And by materials I mean the physical materials, the energy, the water, all of that. So when I step back and I think about the flourishing of our society and technologies, tools and technologies and infrastructure that can support that over time for myself, I'm looking for technologies that make sense on a finite and regenerative planet with the population scales that we have right now, right? We could shrink the population and that would change a lot of things as well. Those are the kinds of questions. So what I would say about many of the people making decisions around artificial intelligence right now is that I don't think they're asking those questions, at least seriously and in a way in which it would cause them to rethink how they are building and implementing those technologies.
So there are explorations. There are explorations about putting data centers at the bottom of the ocean because it's natural cooling down there. There are explorations around trying to improve, say, battery storage or energy storage. But the question is, do we invest and build a society that is dependent on these technologies before we've actually solved those issues, right? And just by analogy, think about nuclear power. When I was an undergraduate, there were discussions, nuclear power plants were being built, and the question of nuclear waste had not been solved. And the nuclear engineers I talked to at the time said, "Well, we've just landed on the moon. We're going to be able to solve that problem too in 10 years. Don't worry about it." Well, here it is, how many years later, decades later, and we still have nuclear waste sitting in the ground that will be around for enormous periods of time.
That's very dangerous, right? So how do we not make that same kind of mistake with computing technologies? We don't need to throw the baby out with the bathwater, but we can ask ourselves if this direction is a direction more like nuclear waste around nuclear power, or if there is an alternative way to go, and what would that be? And could we be having public conversation at this level? And could we hold our technical leaders, both the leaders of the companies, the CEOs and our technologists accountable to these kind of criteria as well? And that I think would be a really constructive way for us to be moving at this moment in time.
Justin Hendrix:
So just in the last few days, we've seen the EU agree on what is apparently a final version of its AI Act, which will eventually become law depending on whether it makes it through the last checks and processes there. We've seen the executive order from the Biden administration around artificial intelligence. We're seeing a slew of policies emerge across states in the US which are more likely perhaps to become law than anything in the US Congress. What do you make right now of whether the policy response to artificial intelligence in particular is sufficient to the types of problems and challenges that you're describing? And I might ask you both that question, but Dr. Caliskan, for you, how do you think about the role of the lab in engaging with these questions going forward in these next few years?
Aylin Caliskan:
We are at the initial stages of figuring out goals and standards moving forward with these powerful technologies. And we have many questions. In some cases, we do not exactly even know what the questions are, but the technology is already out there. It has been developed and deployed, and it is currently having an impact on individuals and society at scale. So regulation is behind, and accordingly nowadays we see a lot of work, interest and demand in this area to start understanding the questions and find some answers. But given that the technology is being deployed in all kinds of socio-technical contexts, understanding the impact and nuance in each domain sector will take time. Although the technology is still evolving very rapidly and proliferating in all kinds of application scenarios, it is great that there is some progress and there is more focus on this topic in society, in the regulatory space and in academia, in the sciences as well.
But it's moving very rapidly. So rapidly that we are not able to necessarily catch the problems on time to come up with solutions, and the problems are rapidly growing. So how can we ensure that when developing and deploying these systems, we have more robust standards and checkpoints before these systems are released and impact individuals, make decisions that change life's outcomes and opportunities? Is there a way to slow down so that we can have higher quality work in this space to identify and potentially come up with solutions to these problems? And I would also like to note that yes, the developments from the EU or the executive order are great, but even when we try to scratch the surface to find some solutions, they will not be permanent solutions. These are socio-technical systems that evolve with society, and we will have to keep dealing with any side effects dynamically on an ongoing basis. Similar to the analogy Dr. Friedman just made about bridges and their annual maintenance. You will need to keep looking into what kinds of problems and benefits might emerge from these systems. How can we amplify the benefits and figure out ways to mitigate the problems while understanding that these systems are impacting everyone and the earth with great scale?
Justin Hendrix:
That maybe gives me an opportunity to ask you, Dr. Friedman, a question about problem definition. So there's been a lot of discussion here about what are the right questions to ask? What are the ways that we can understand the problems and how best to address them? Close to 30 years on these questions, research career essentially about developing frameworks to put these questions into, what have you learned about problem definition and what do you hope to sort of pass along during this transition as you sort of pass the baton here?
Batya Friedman:
So I would just say that my work in value sensitive design is fundamentally about that. And we deploy human values as what is important to people in their lives, especially things with moral and ethical import. And that definition has worked well for us over time. And along with that, we've developed methods and processes. So I think of the work that we've done in terms of the adage: you can give a man a fish, or you can teach him how to fish and he'll feed himself for the rest of his life, or I suppose you could give that to any person and they will be able to do that. I think that the work that we've been involved in is thinking about, well, what does that fishing rod look like? Are there flies and what are those flies about? What are the methods for casting that make sense, and how can you pass along those methods? And also there's knowledge of the river and knowledge of the fish, and knowledge of the insects and the environment, and taking all of that into account, and also knowing that if you overfish the river, then there won't be fish next year.
And there may be people upstream who are doing something and people downstream who are doing something, and if you care about them, you want to ensure that your fishing is also not going to disrupt the things that are important to them in their lives. So you need to bring them into the conversation. And so I would say what my contribution has been, has been to frame things such that one understands the roles of the tools and technologies, those fishing rods, those flies, the methods, and also the understanding of the environment and how to tap into the environment and the knowledge there, and to broaden and have other kinds of tools for understanding and engaging with other stakeholders who might be impacted in these systemic things. So my hope is that I can pass that set of knowledge, tools, practices to Aylin, who will in her lifetime encounter the new technologies, whatever those happen to be as they unfold, so that she will not be starting from scratch and having to try and figure out for herself how to design a good fishing rod.
She can start with the ones that we've figured out, and she's going to need refinements on that, and she's going to decide that there are places where the methods don't yet do a good enough job. And there's other things that have happened. Maybe there was a massive flash flood and that's changed the banks and the river, and there's something different about the environment to understand, but I hope she's not starting from scratch, but could take those things, extend and build them, and has the broader ethos of the exploration as a way to approach these problems. So that's what I hope I'm passing, and trust that she will take up and make good wise choices with it. I think that's all we can do, right? We're all in this together through the long term.
Aylin Caliskan:
I am very grateful that I have this opportunity to learn from colleagues who have been deeply thinking about these issues for decades, when no one even had an idea about these topics that are so important for our society. And in this process, I am learning, and this is an evolving process, but I feel very grateful that I have the opportunity to work with the Tech Policy Lab community, including Batya, Ryan, Yoshi, who have been so caring, thoughtful, and humane in their research, always incorporating and prioritizing human values and providing a strong foundation, a good fishing rod, to tackle these problems. And I am very excited that we will continue to collaborate on these topics. It is very exciting, but at the same time, it is challenging because these impacts come with great responsibility, and I look forward to doing our best given our limited time and resources and figuring out ways to also pass these methodologies, these understandings, these foundational values to our tech policy community and future generations, as they will be the ones that will have these fishing rods to focus on these issues in the upcoming decades and centuries.
Batya Friedman:
I wanted to pick up on something also that Aylin had said, not in this comment, but the comment before about slowing things down. And I just wanted to make another observation for us. Historically, when people have developed new tools and technologies, they have been primarily of a physical sort, though I think something like writing in the printing press are a little bit different, but they take a certain amount of time to actually produce things, it takes a certain amount of time for things to be disseminated. And during that time, people have a chance to think and a chance to come to have a better understanding of what a wise set of decisions might be. We don't always make wise decisions, but at least we have some time and human thought seems to take time, right? Ultimately, we all have our brains and they operate at a certain kind of speed and a certain kind of way.
I think one of the things we can observe about our digital technologies and the way in which we have implemented them now is that if I have a really good idea this afternoon and I have the right set of technical skills, I can sit down and I can build something and by 7:00 in the morning, I can release that and I can release and broadcast that basically across the globe. And others who might encounter it, if the stars align, might pick that up. And by 5 o'clock in the evening, twenty-four hours from when I had the first idea, maybe this thing that I've built is being used in many places by hundreds, if not thousands or tens of thousands, hundreds of thousands of people, right? Where is there time for wisdom in that? Where is there time for making wise decisions? So I think in this moment we have a temporal mismatch, shall we say, between our capacity as human beings to make wise choices, to understand perhaps the moral and ethical implications of one set of choices versus another, and the speed at which new ideas can be implemented, disseminated, and taken up at scale.
And that is a very unique challenge I think, of this moment. And so thinking about that really carefully and strategically, I think would be hugely important. So without other very good ideas, one thing one might say is, well, what can we do to speed up our own abilities to think wisely? That might be one kind of strategy. Another strategy might be, well, can we slow this part down, the dissemination part down if we can't manage to make ourselves go more quickly here in terms of our own understandings of wisdom, but at least getting the clarity of that structural issue on the table and very visible, I think is also helpful. And from a regulatory point of view, I think understanding that is probably also pretty important. Usually when people say you're slowing down a technology, that's seen as quite a negative thing, I think it's squashing innovation. But I think when you understand that we are structurally in a different place and we don't have a lot of experience yet, maybe that's some additional argument for trying to use regulation to slow things in a substantial way. And what heuristics we might use, I don't know, but I think that is really worth attending to.
Justin Hendrix:
Well, I know that my listeners are also on the same journey that you've described of trying to think through these issues and trying to imagine solutions that perhaps, well maybe fit the bill of wisdom or a flourishing society, certainly a democratic and equitable and more just society.
I want to thank both of you for all of the work that you've done to advance these ideas, both the work that's come before and the work that's to come from both of you and from UW and the Tech Policy Lab more generally. I thank you both for talking to me today. Thank you so much.
Batya Friedman:
Thank you.
Aylin Caliskan:
Thank you.
How AI can help journalists find diverse and original sources – Tech Xplore
What would news stories be without proper sources? To tell a compelling story, reporters need to find newsworthy narratives and trustworthy information. Such information typically comes from a wide pool of publications, official records and experts, all with their own biases, expertise, opinions and backgrounds. The pool of interview candidates is plentiful yet overwhelming to navigate.
Artificial intelligence, however, may serve as a guide.
Researchers from the USC Information Sciences Institute are creating a source-recommendation engine designed to suggest references for journalists. "In practice, the software application would analyze a given text or topic and suggest relevant sources by cross-referencing against a database of potential interviewees, experts or informational resources," said Emilio Ferrara, a professor of computer science and communication at the USC Viterbi School of Engineering. "The tool could provide contact details, areas of expertise and previous work of the sources," he added.
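How such a cross-referencing step might look in practice: the sketch below ranks a small, made-up list of experts against an article text by embedding similarity. The article does not describe the team's actual implementation, so the encoder model, the expert records and the ranking logic here are all illustrative assumptions rather than details of the USC system.

```python
# Minimal sketch of an embedding-based source-recommendation step.
# The encoder, the in-memory "database" of experts and all fields are
# illustrative assumptions; the real tool's design is not described here.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed off-the-shelf encoder

# Hypothetical database of potential sources.
sources = [
    {"name": "Dr. A. Rivera", "expertise": "dairy supply chains in East Africa"},
    {"name": "Prof. B. Chen", "expertise": "computational journalism and NLP"},
    {"name": "J. Okafor", "expertise": "municipal budgets and local government"},
]

def recommend_sources(article_text, top_k=2):
    """Rank sources by semantic similarity between the article and their expertise."""
    article_emb = model.encode(article_text, convert_to_tensor=True)
    expertise_emb = model.encode([s["expertise"] for s in sources], convert_to_tensor=True)
    scores = util.cos_sim(article_emb, expertise_emb)[0]
    ranked = sorted(zip(sources, scores.tolist()), key=lambda p: p[1], reverse=True)
    return ranked[:top_k]

for source, score in recommend_sources("A new AI tool helps local reporters find interviewees."):
    print(f"{source['name']}: {score:.2f}")
```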
The tool's development is being led by Alexander Spangher, a computer science Ph.D. student at USC Viterbi who previously worked as a data scientist at the New York Times. While immersed in the journalism industry, Spangher witnessed the pressures on traditional newsrooms. "I haven't spoken to a single local journalist that was not totally overstretched," he remarked. "There have been news deserts and papers shutting down. It's areas like this that we really want to assist and build tools for."
Motivated to provide helpful resources for reporters, Spangher is creating various AI tools, including a source-recommendation system introduced in his paper, "Identifying Informational Sources in News Articles," which was accepted to the 2023 Conference on Empirical Methods in Natural Language Processing and is now posted to the arXiv preprint server.
To create an AI model that can suggest sources, the researchers first laid the groundwork: how are human journalists currently using sources in news writing? To study this, they gathered a dataset of sentences from over a thousand news articles and annotated the source of the information, as well as the sourcing category (e.g., "direct quotes," "indirect quotes," "published works" and "court proceedings").
A thousand annotated news articles, however, were not enough data for the researchers to draw firm conclusions about all the myriad ways journalists use sources across reporting genres. But, it was enough to train a language model (LM) to continue the annotation process. "Language models are AI frameworks that process and understand human language by analyzing large volumes of text for patterns and context," explained Ferrara, senior author of the paper.
The LMs the researchers trained could detect source attributions with 83% accuracy, the authors reported. Now equipped with these LMs, they annotated roughly 10,000 news articles and delved further into the compositionality of news writing: when and how do journalists currently use sources?
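For readers curious what training such an annotation model might involve, here is a minimal, hedged sketch of fine-tuning an off-the-shelf language model to label sentences with a sourcing category. The paper's exact architecture, label set and data are not reproduced here; the base model, toy examples and labels below are assumptions for illustration only.

```python
# Hedged sketch of a sentence-level sourcing classifier, loosely following the
# setup described above (annotated sentences -> language model). The base model,
# label set and toy data are assumptions, not the paper's actual configuration.
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)
from datasets import Dataset

labels = ["no source", "direct quote", "indirect quote", "published work", "court proceeding"]
train = Dataset.from_dict({
    "text": ['"We are overstretched," the editor said.',
             "The report was released Tuesday.",
             "Officials indicated the policy may change."],
    "label": [1, 3, 2],
})

tok = AutoTokenizer.from_pretrained("distilroberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilroberta-base", num_labels=len(labels))

def encode(batch):
    return tok(batch["text"], truncation=True, padding="max_length", max_length=64)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="sourcing-clf", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=train.map(encode, batched=True),
)
trainer.train()
```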
The AI models found that, on average, roughly half the information in news articles came from sources, and that each article usually has one to two major sources (i.e., those that contribute 20% or more of the information in the article) and two to eight minor ones (those that contribute less). "The AI also discovered that the first and last sentences were the most likely to be sourced," Spangher explained, adding that reporters often lead with cited information and close with a quote to send the reader off.
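A toy illustration of that compositional breakdown, assuming per-sentence attributions are already available (for example, from a classifier like the one sketched above): the snippet tallies each source's share of an article and applies the 20 per cent major/minor cut-off described above. The example attributions are invented.

```python
# Toy sketch of the "compositionality" statistics: given per-sentence source
# attributions for one article, compute each source's share and split sources
# into major (>= 20%) and minor. The attribution list is made up.
from collections import Counter

# One entry per sentence: the attributed source, or None if unsourced.
attributions = ["Mayor's office", None, "Mayor's office", "Budget report",
                None, "Resident interview", "Budget report", "Mayor's office"]

sourced = [a for a in attributions if a is not None]
shares = {src: n / len(attributions) for src, n in Counter(sourced).items()}

major = [s for s, share in shares.items() if share >= 0.20]
minor = [s for s, share in shares.items() if share < 0.20]

print(f"Sourced share of article: {len(sourced) / len(attributions):.0%}")
print("Major sources:", major)
print("Minor sources:", minor)
```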
The researchers challenged their new algorithm with one more test: could they detect if a source was missing? If AI can recognize when information is lacking, then it can be configured to know when to recommend a particular expert to complete the full picture.
Analyzing 40,000 articles with some sources randomly removed, the AI models easily noticed when a major source was absent but had difficulties with minor ones. Although they may be the least crucial to a story, less obvious sources may also be the most valuable recommendations that an AI could one day make, Spangher said.
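A rough way to picture that ablation test is sketched below: one source's sentences are dropped from an article and a detector is asked whether something is missing. The detector here is a deliberately crude stand-in based only on the article's overall sourced share; the researchers' actual system uses trained models, and the example article is invented.

```python
# A deliberately crude stand-in for the ablation test: drop one source's
# sentences from an article and ask a detector whether something is missing.
# The stub detector only checks whether the sourced share fell far below a
# typical value; the researchers' actual approach uses trained models.
import random

def remove_one_source(sentences, attributions):
    """Remove every sentence attributed to one randomly chosen source."""
    candidates = sorted({a for a in attributions if a is not None})
    dropped = random.choice(candidates)
    kept = [(s, a) for s, a in zip(sentences, attributions) if a != dropped]
    return [s for s, _ in kept], [a for _, a in kept], dropped

def detector_flags_missing(attributions, typical_share=0.75, slack=0.15):
    """Stub detector: flag the article if far less of it is sourced than usual."""
    sourced = sum(a is not None for a in attributions)
    return sourced / max(len(attributions), 1) < typical_share - slack

sentences = [f"S{i}" for i in range(1, 9)]
attributions = ["Mayor's office", "Mayor's office", None, "Mayor's office",
                "Resident", "Mayor's office", "Mayor's office", None]

_, ablated_attrs, dropped = remove_one_source(sentences, attributions)
# Dropping the major source ("Mayor's office") trips the flag; dropping the
# minor one ("Resident") usually does not, mirroring the finding above.
print(f"Dropped: {dropped}; flagged: {detector_flags_missing(ablated_attrs)}")
```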
"You're going to draw a lot of information from the main participants, but supplementary voices are going to provide extra color and details to the article," he noted. "It's going to be a challenge to get the engine to recognize and recommend minor sources, but they may be the most helpful."
The researchers also think the tool will be significant if it can diversely recommend sources. "It can introduce journalists to new, diverse voices beyond their usual network, thus reducing the reliance on familiar sources and potentially bringing in fresh perspectives," Ferrara said.
However, every AI system is prone to bias if not appropriately designed, he added. "To ensure diversity in source databases, standards should include representation from a wide range of demographics, disciplines and perspectives," he noted.
Jonathan May, a research associate professor of computer science at USC Viterbi and ISI lead researcher, imagines a future where the sourcing engine jumpstarts the reporting process, allowing journalists to be more efficient.
"Technology that can help us do creative work and be our creative best is a good thing," said May, a co-author of the paper. "That's why I'm hopeful for it."
The team plans to collaborate with journalists to gather feedback for further improvements.
"With projects like this, I really thrive off talking to journalists and understanding their needs, viewpoints and what they think will or won't work," Spangher said. "Any solution to local journalism will require a bunch of different people with a bunch of different backgrounds coming together."
More information: Alexander Spangher et al, Identifying Informational Sources in News Articles, arXiv (2023). DOI: 10.48550/arxiv.2305.14904
Journal information: arXiv
Read the original post:
How AI can help journalists find diverse and original sources - Tech Xplore
Computer Science students win top prize at the 30th annual Social … – University of Waterloo
On November 22, GreenHouse held its 30th Social Impact Showcase at United College, celebrating the next generation of innovators. GreenHouse is a social impact incubator for students and community members who want to create environmental or social change with their early-stage business venture ideas.
Richard Myers, principal at United College, delivering the opening remarks at the fall 2023 GreenHouse Social Impact Showcase.
"The University of Waterloo from its inception has been distinguished by a focus on innovation and entrepreneurship that is really unique in this country," said Richard Myers, principal at United College, during his opening remarks.
Waterloo is consistently ranked the top university in Canada for entrepreneurs by Pitchbook University Rankings.
"A decade ago, we noticed the outstanding achievements of Waterloo's first student incubator, Velocity, which tends to focus on Engineering and Computer Science students. GreenHouse was conceived as a kind of complementary piece to Velocity, focusing on innovation aimed at social or environmental progress. We're pleased to host students from all six Waterloo faculties in equal numbers over the years, and we have made our programming available at zero cost to student participants."
The Social Impact Showcase held two rounds of pitches, with 10 student-led startups competing for more than $18,000 in funding. From assistive technologies to community programs, GreenHouse announced the six winning teams, who received between $500 and $8,000 in funding to support their sustainable ideas. Rising SheFarmers was also awarded the fall 2023 People's Choice award and received an additional $1,000 in funding.
University of Waterloo and United College students pitch at the fall 2023 GreenHouse Social Impact Showcase.
Safi $8,000
Waterloo students Miraal Kabir, Martin Turuta and Daria Margarit created Safi, the world's first off-the-grid pasteurization monitoring unit, to prevent the spread of milk-borne diseases in East Africa.
"East Africa has the highest global incidence of ill health and death from heart disease, with 30 per cent of deaths in children under the age of five," explained Kabir, a computer science student and co-founder of Safi.
"That is because right now there's no access to safe and quality milk. There are many problems in the dairy supply chain, which is why at Safi, we have patented the world's first off-the-grid pasteurization control unit aimed towards smallholder farmers and vendors."
In 2023 alone, the Safi team travelled to Africa twice to build a co-creation network of more than 100 farmers in a WhatsApp group chat, who communicate back and forth with the Safi team to make a product that is catered towards them.
The Safi team previously pitched at the 2021 Concept $5K startup finals and the spring 2023 Social Impact Showcase, where they won funding towards their innovative startup. Now, Safi has received the go-ahead from the Rwandan dairy supply chain to run a pilot program with the six top sellers (all of whom are women) in Kenya and Rwanda.
Safi co-founders Martin Turuta (second left), Miraal Kabir, Filip Birger (head of Engineering at Safi) and Daria Margarit receive $8,000 in funding towards expanding Safi, the world's first off-the-grid pasteurization monitoring unit to prevent the spread of milk-borne diseases in East Africa.
Patient Companion $5,000
Founded by Engineering student Christy Lee, Patient Companion is an easy-to-use communication app between nurses and patients that not only improves the patient experience but also helps reduce stress and burnout for nurses.
"When I was volunteering at a hospital and at a long-term care center for two years, I saw a constant number of lights flashing across the hallway," Lee said. "With the current nurse call system, the nurses do not know what types of requests the patients are making, and which patients need immediate help."
Lee explained that, on average, nurses are assigned between five and nine patients each, and roughly 56 per cent of requests made by patients are non-urgent. As a result, 3 per cent of the time nurses forget to come back after asking what the patient needs, while 10 per cent of requests get cancelled.
Patient Companion lets patients make specific requests through the app, which then automatically prioritizes them on the nurses' end. Requests for water or blankets can be distributed among personal support workers, volunteers or other available staff, ultimately reducing the workload and stress for nurses.
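As a purely illustrative sketch of that triage idea (the app's real request categories, priority levels and routing rules are not described in the article), an automatically prioritized request queue might look something like this:

```python
# Illustrative triage sketch: requests are ranked automatically by type and
# non-urgent ones are routed to support staff. Categories, priorities and
# routing rules below are assumptions, not Patient Companion's actual design.
import heapq
import itertools

PRIORITY = {"pain": 0, "medication": 1, "bathroom": 1, "water": 3, "blanket": 3}
NON_URGENT = {"water", "blanket"}

queue, counter = [], itertools.count()  # counter breaks ties in arrival order

def add_request(patient, kind):
    route = "support staff / volunteer" if kind in NON_URGENT else "nurse"
    heapq.heappush(queue, (PRIORITY.get(kind, 2), next(counter), patient, kind, route))

for patient, kind in [("Room 12", "water"), ("Room 7", "pain"), ("Room 3", "medication")]:
    add_request(patient, kind)

while queue:
    priority, _, patient, kind, route = heapq.heappop(queue)
    print(f"{patient}: {kind} -> {route} (priority {priority})")
```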
Patient Companion will also be competing as a finalist team in the fall 2023 Velocity Pitch Competition.
Engineering student Christy Lee receives $5,000 in funding towards Patient Companion, an easy-to-use communication app between nurses and patients that not only improves the patient experience but also helps reduce stress and burnout for nurses.
Rising SheFarmers $2,000
Founded by master's student Lydia Madintin Konlan, Rising SheFarmers wants to empower rural women in Ghana to get out of poverty through mushroom farming.
"I counted a lot of women [in Ghana] who struggle a lot to get access to decent work and employment due to limited access to productive resources," Konlan explained. "We decided that the 57 per cent of women who remain having limited access to work can have something to do by producing mushrooms."
Rising SheFarmers has become a supplier to five women in five communities across Ghana, providing 1,500 bags of mushrooms to these women and making a huge impact in their lives. The market for mushrooms is growing, and Konlan and her team are committed to this transformative journey, making agriculture more sustainable for women to work in.
With the funding, Rising SheFarmers hopes to grow its reach from 200 to 300 women and become incorporated, so that rural women in Ghana can farm mushrooms for income.
More than 1,600 community members placed their votes for the People's Choice award. Rising SheFarmers received 54 per cent (900) of the total votes, winning the fall 2023 People's Choice award. Konlan received an additional $1,000 in funding to grow Rising SheFarmers.
Rising SheFarmers founder Lydia Madintin Konlan receives $2,500 in funding towards empowering rural women in Ghana to get out of poverty through mushroom farming. Konlan also received the People's Choice award with an additional $1,000 in funding.
Braille Buddy $1,500
Braille Buddy is designing computer-vision-powered braille books to help low-vision individuals learn braille independently. Led by Shaahana Naufal, Julia Turner, Mathurah Ravigulan and Ayla Orucevic, the group of fourth-year systems design engineering students hopes to address the declining literacy rate among American children who have visual impairments and live in low-income communities.
"We've completed our image classification model, where we essentially take an image of each page of our braille book that is being read and isolate each character into its own image," explained Orucevic, a Faculty of Engineering student.
"Then we would feed those images into our machine learning model, which converts them into English text. We're also building out the motion-tracking feature, which would allow us to detect the coordinates of finger movement along with paper recognition."
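Based on that description, a stripped-down version of the pipeline might look like the sketch below: segment a photographed page into candidate character crops with basic OpenCV operations, then hand each crop to a classifier. The team's actual models and training data are not public in this article, so the segmentation parameters are guesses and the classifier is a stub.

```python
# Rough sketch of the described pipeline: segment a page image into
# per-character crops, then map each crop to a letter. The segmentation uses
# plain OpenCV thresholding and contours, and the classifier is a stub; the
# team's real models are not described in this article.
import cv2
import numpy as np

def segment_characters(page_bgr, min_area=50):
    """Return cropped images of candidate braille cells, left to right."""
    gray = cv2.cvtColor(page_bgr, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]
    boxes.sort(key=lambda b: b[0])  # left-to-right reading order (single line assumed)
    return [page_bgr[y:y + h, x:x + w] for x, y, w, h in boxes]

def classify_cell(cell_img):
    """Stub for the learned braille-to-letter model."""
    return "?"  # a trained classifier would return a letter here

page = np.full((200, 600, 3), 255, dtype=np.uint8)  # stand-in for a photographed page
text = "".join(classify_cell(c) for c in segment_characters(page))
print(text)
```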
With the Social Impact Fund, Braille Buddy hopes to register its device with the Ontario Assistive Devices Program and partner with the various school boards and low-vision organizations the team is already in contact with, to pilot the device in real classrooms and make Braille Buddy a reality for many children.
Student venture Braille Buddy receives $1,500 from the Social Impact Fund towards designing computer-vision-powered braille books to help low-vision individuals learn braille independently.
WhereCafe $1,500
WhereCafe was founded by solo travellers Carla Castaneda and Wanetha Sudswong (MEng '23), who want to empower other solo female travellers to travel around the world safely by using artificial and authentic intelligence to do the research for them.
WhereCafe has developed a mobile application where solo female travellers can type in their location and destination, and the app will find the safest path from A to B. The app also provides multiple safety features to make travelling alone a bit easier. The first lets users set up an automated text to (or call with) someone in their contacts along the way, based on their GPS location. The second lets users add a report, after which the app notifies other travellers to avoid or navigate away from the reported area.
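One way such a "safest path" computation could work, under stated assumptions, is a weighted shortest-path search in which traveller reports penalize nearby street segments. The graph, penalty factor and report model below are invented for illustration; the article does not describe WhereCafe's actual routing logic.

```python
# Minimal "safest path" sketch under stated assumptions: streets form a graph,
# traveller reports add a penalty to their segments, and the route is the
# lowest-cost path. The graph and penalty model are invented for illustration.
import networkx as nx

G = nx.Graph()
edges = [("A", "B", 1.0), ("B", "D", 1.0), ("A", "C", 1.2), ("C", "D", 1.2)]
for u, v, length in edges:
    G.add_edge(u, v, length=length, reports=0)

G["A"]["B"]["reports"] = 3  # travellers reported an incident on this segment

REPORT_PENALTY = 0.5
for u, v, data in G.edges(data=True):
    data["cost"] = data["length"] * (1 + REPORT_PENALTY * data["reports"])

print(nx.shortest_path(G, "A", "D", weight="cost"))  # routes around the reported segment
```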
"Our competitors don't know that it is extremely valuable to hear from our solo female travellers, and we'll be the first to implement safe navigation and crowdsourcing in the same platform," Sudswong said.
Castaneda and Sudswong hope to tap into the $11.5 billion market, which saw a record of more than 900 million travellers worldwide in 2022 alone. In the next six months, WhereCafe will start building a community in the Kitchener-Waterloo region for solo female travellers interested in exploring the city.
Student venture WhereCafe receives $1,500 from the Social Impact Fund towards empowering other solo female travellers to travel around the world safely by using artificial and authentic intelligence to do the research for them.
Real Research $500
Real Research is led by Faculty of Science student Ria Menon, whose student-run venture program provides undergraduate students with more opportunities to get involved in scientific research labs on campus.
During the spring 2023 term, Real Research saw a 100 per cent recommendation rate from the program's first cohort of 20 students. Applications have since doubled in size, with a staggering 117 applicants for the fall cycle.
"That just demonstrates the need for the Real Research program to fill in the gap that is missing for undergraduate students and research to be connected," said Menon, who looks forward to expanding the Real Research program to help advance the future of research.
Menon previously pitched at the spring 2023 Social Impact Showcase, where Real Research received the People's Choice award. Since then, Real Research has launched its pilot program, received and incorporated feedback, and been allocated additional funding from the Faculty of Science Foundation.
Student venture Real Research receives $500 from the Social Impact Fund towards supporting the student-run program in providing undergraduate students with more opportunities to get involved in scientific research labs on campus.
During the Social Impact Showcase, Erin Hogan, GreenHouse Programs Manager, announced that United College and the Faculty of Arts will be launching a new Social Innovation and Impact minor in the Fall 2024 term.
The Social Innovation and Impact minor will open up pathways for existing GreenHouse students and beyond to engage in pitches and projects, while they receive an official academic credit towards their degree. The launch of the new minor program also provides other Waterloo students with the ability to research, design, launch and test social innovations through applied and experiential learning opportunities.
More information to come on the Social Innovation and Impact minor.
Interested in making social or environmental change? Learn more about the GreenHouse Social Impact Showcase that is held each term and get started on your venture idea today by visiting the United College website.
Excerpt from:
Computer Science students win top prize at the 30th annual Social ... - University of Waterloo
Practical Methods for Integrating Computer Science into Core … – Education Week
Michael Nagler
Superintendent, Mineola School District
He believes strongly in the district's mission to inspire students to become lifelong learners who exhibit strength of character and contribute positively to a global society. During his twenty-three years with the district, he has been a big proponent of using technology to engage students in rigorous content. All five schools in Mineola have been recognized as Apple Distinguished Schools. Mineola is also a member of the League of Innovative Schools, and Dr. Nagler is the chairperson of its Advisory Board.
Mineola was one of the first schools in the state to implement a comprehensive computer science curriculum starting in kindergarten. Mineola is also at the forefront of digital student portfolios; Dr. Nagler recently used the district's coding platform to create his own digital portfolio: http://michaelnagler.oyosite.com
Dr. Nagler was the 2020 New York State Superintendent of the Year and a finalist for the 2020 National Superintendent of the Year. He recently published a book entitled The Design Thinking, Entrepreneurial, Visionary Planning Leader: A Practical Guide for Thriving in Ambiguity.
Here is the original post:
Practical Methods for Integrating Computer Science into Core ... - Education Week
NHGRI selects Adam Phillippy as first director of new Center for … – National Human Genome Research Institute
The National Human Genome Research Institute (NHGRI), part of the National Institutes of Health (NIH), has selected Adam Phillippy, Ph.D., as the founding director of the new Center for Genomics and Data Science Research within the Institute's Intramural Research Program. In this role, he will provide scientific and administrative leadership, foster a collaborative and inclusive research environment and provide mentorship for researchers within the Center.
Since joining NHGRI in 2015, Dr. Phillippy has been an investigator and head of the Genome Informatics Section, where his research group develops and uses computational methods to sequence and analyze genome sequences. As a key leader of the Telomere-to-Telomere Consortium, he played a pivotal role in generating the first truly complete human genome sequence, which revealed the presence of over 200 million additional bases of DNA. He is also a major contributor in the international Human Pangenome Reference Consortium, which published the first draft of a human pangenome, a more complete collection of genome sequences that captures more human diversity.
"Adam's vision for our new Center will uniquely position NHGRI to lead the burgeoning computational genomics and data science fields," said Charles Rotimi, Ph.D., scientific director of NHGRI's Intramural Research Program. "Adam has established himself as an expert in genome sequence assembly and analysis not only within NIH but in the broader scientific community. I can't think of anyone more qualified and ready for this role."
Previously called the Computational and Statistical Genomics Branch, the Center for Genomics and Data Science Research represents a reconfiguration of the research program to meet current challenges and opportunities in using computational strategies to study human and other genomes. As a newly designated center, the program aims to eventually have collaborative connections with other NIH institutes and bring together a larger set of local experts.
The new Center will be developing and using cutting-edge computational approaches to analyze genome sequence data and conducting research in basic and applied genomics, comparative genomics, bioinformatics and genomic medicine. The newly established Center and its researchers are highly complementary with other components of NHGRI's Intramural Research Program, further enhancing their collective and collaborative abilities to address a fundamental challenge in genomics: understanding how genomic variants affect genome function in giving rise to phenotype.
"As the genomics field becomes increasingly more data-intensive, the development of more powerful computational tools and technologies is necessary for making continued research advances," said Dr. Phillippy. "I'm excited to lead this incredibly talented and interdisciplinary group of investigators as we bring new knowledge and approaches to genomics research and medicine."
After graduating from Loyola University Maryland with a B.S. in computer science, Dr. Phillippy worked at The Institute for Genomic Research (TIGR) before earning his M.S. and Ph.D. in computer science from the University of Maryland, College Park. Prior to joining NIH, he worked at the National Bioforensic Analysis Center, where he founded and led a bioinformatics group that developed genomic methods and analyzed DNA sequence data for the Federal Bureau of Investigation.
Dr. Phillippy has authored and co-authored over 130 peer-reviewed research papers and scientific reviews. He has received numerous awards, including the U.S. Presidential Early Career Award for Scientists and Engineers and the NIH Director's Award. He was also a finalist for the 2022 Samuel J. Heyman Service to America Medal and was named one of the world's most influential people of 2022 by TIME magazine for his work on completing the human genome sequence.
Dr. Phillippy will begin his appointment as Director of the Center for Genomics and Data Science Research in the near future.
See more here: