Category Archives: Computer Science
Will a solar storm wreak havoc on the internet? – Yahoo News
The sun. NASA via Getty Images
You might have seen the rumors online, ironically enough.
Tales of an impending "internet apocalypse" have been circulating across social media in recent weeks, seemingly sparked by misleading reports of a NASA probe sent to allegedly prevent a wifi-devastating solar storm, as well as improper interpretations of an article published by the space agency itself. But the Parker Solar Probe was, in reality, launched to study the sun, not save our phones from it, and an internet apocalypse is at the moment no closer to devastating our online connections than Rihanna is to releasing another album. That said, some degree of celestial communication interference is a real possibility, and one that could technically arise as a result of a "strong solar storm hitting Earth" sometime in the future, per The Washington Post.
When solar material from the sun strikes the Earth's "magnetosphere," it has the potential to create so-called geomagnetic storms strong enough to cause blackouts and grid failures on the ground. In 1859, for example, an intense solar storm known as the Carrington Event sent global telegraph systems on the fritz, (literally) shocking operators who watched in awe as sparks allegedly flew out of their machines. Years later, in 1989, a similar solar disturbance caused a 12-hour electrical blackout in Quebec, Canada, prompting school and business closures.
Given the interconnected nature of our highly-digital world, a Carrington Event in 2023 "would have even more severe impacts," such as "widespread electrical disruptions, persistent blackouts and interruptions to global communications," NASA has said. The resulting "technological chaos could cripple economies and endanger the safety of livelihoods of people worldwide." According to estimates from internet watcher NetBlocks, one day of lost connectivity could cost the U.S. more than $11 billion.
"This is not taken into account in our infrastructure deployment today at all," computer science professor Sangeetha Abdu Jyothi, whose paper on how solar storms could affect the interwebs helped popularize the term "internet apocalypse," told the Post.
So could a solar storm actually take down the internet? It is possible, yes. But experts say it's not very likely. For one thing, solar storms as powerful as the Carrington Event are expected only about once every 500 years.
Moreover, much of the recent attention on the matter arose as a result of an article NASA published in March, wherein the agency highlighted how it is using artificial intelligence to predict "dangerous space weather." The article does not mention the term "internet apocalypse," but it does mention both the Carrington Event and the Quebec blackouts, and notes the modern consequences if a similar phenomenon were to happen today. In employing AI, the space agency hopes it will be able to predict solar storms up to 30 minutes before they happen, giving power grid operators and telecommunication companies time to move their systems offline and prevent added damage.
According to Space.com (which is owned by Future plc, the same parent company as The Week), most of the online falsehoods regarding an impending "internet apocalypse" refer to this March article, as well as research from "earlier this year" suggesting the sun might reach its solar maximum, the peak of its 11-year activity cycle, in 2024, a year earlier than expected. "While scientists do, in fact, expect major solar storms to occur after solar activity reaches its peak," wrote Space.com's Sharmila Kuthunur, "there is no evidence to support the viral rumors that the next major solar storm will cause the internet to go offline." Similar online panic appears tied to work done by NASA's Parker Solar Probe, which the Post said is intended to "research the physics of the sun" so as to better understand solar winds and storms, "not to keep the WiFi from going out, as TikTok would have you think." Even Jyothi, the computer science professor, regrets using the phrase "internet apocalypse" in her paper, which she said "just got too much attention" and stirred up undue anxiety among the common folk. "Researchers have been talking for a long time about how this could affect the power grid," she told the Post, "but that doesn't scare people to the same extent for some reason."
Indeed, NASA has been warning of potential communications disruption resulting from solar storms and wind since at least 2009, said USA Today. So while yes, some degree of connectivity chaos is possible, the space agency has yet to officially declare doomsday imminent. What's more, "with this AI, it is now possible to make rapid and accurate global predictions and inform decisions in the event of a solar storm," per astronomer and physicist Vishal Upendran, "thereby minimizing or even preventing devastation to modern society."
Salesforce Executive Shares ‘Four Ways Coders Can Fight the … – Slashdot
Our research revealed that 75% of UX designers, software developers and IT operations managers want software to do less damage to the environment. Yet nearly one in two don't know how to take action. Half of these technologists admit to not knowing how to mitigate environmental harm in their work, leading to 34% acknowledging that they "rarely or never" consider carbon emissions while typing a new line of code... Earlier this year, Salesforce launched a sustainability guide for technology that provides practical recommendations for aligning climate goals with software development. In the article the Salesforce executive makes four recommendations, urging coders to design sites in ways that reduce the energy needed to display them. ("Even small changes to image size, color and type options can scale to large impacts.") They also recommend writing application code that uses less energy, which "can lead to significant emissions reductions, particularly when deployed at scale. Leaders can seek out apps that are coded to run natively in browsers which can lead to improvement in performance and a reduction in energy use."
Their article includes links to the energy-saving hackathon GreenHack and the non-profit Green Software Foundation. (Their site recently described how the IT company AVEVA used a Raspberry Pi in back of a hardware cluster as part of a system to measure software's energy consumption.)
But their first recommendation for fighting the climate crisis is "Adopt new technology like AI" to "make the software development cycle more energy efficient." ("At Salesforce, we're starting to see tremendous potential in using generative AI to optimize code and are excited to release this to customers in the future.")
AI on AI action: Googler uses GPT-4 chatbot to defeat image classifier’s guardian – The Register
Analysis A Google scientist has demonstrated that OpenAI's GPT-4 large language model (LLM), despite its widely cited capacity to err, can help smash at least some safeguards put around other machine learning models, a capability that demonstrates the value of chatbots as research assistants.
In a paper titled "A LLM Assisted Exploitation of AI-Guardian," Nicholas Carlini, a research scientist at Google DeepMind, explores how AI-Guardian, a defense against adversarial attacks on models, can be undone by directing the GPT-4 chatbot to devise an attack method and to author text explaining how the attack works.
Carlini's paper includes Python code suggested by GPT-4 for defeating AI-Guardian's efforts to block adversarial attacks. Specifically, GPT-4 emits scripts (and explanations) for tweaking images to fool a classifier, for example, making it think a photo of someone holding a gun is a photo of someone holding a harmless apple, without triggering AI-Guardian's suspicions. AI-Guardian is designed to detect when images have likely been manipulated to trick a classifier, and GPT-4 was tasked with evading that detection.
"Our attacks reduce the robustness of AI-Guardian from a claimed 98 percent to just 8 percent, under the threat model studied by the original [AI-Guardian] paper," wrote Carlini. "The authors of AI-Guardian acknowledge our break succeeds at fooling their defense."
AI-Guardian was developed by Hong Zhu, Shengzhi Zhang, and Kai Chen, and presented at the 2023 IEEE Symposium on Security and Privacy. It's unrelated to a similarly named system announced in 2021 by Intermedia Cloud Communications.
Machine learning models like those used for image recognition applications have long been known to be vulnerable to adversarial examples: input that causes the model to misidentify the depicted object (Register passim).
The addition of extra graphic elements to a stop sign, for instance, is an adversarial example that can confuse self-driving cars. Adversarial examples also work against text-oriented models by tricking them into saying things they've been programmed not to say.
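To make the concept concrete, here is a minimal sketch of a gradient-sign adversarial example in Python. The "classifier" is a toy logistic model with random weights, invented for illustration; it is not the image model or the attack code from Carlini's paper.

```python
# A toy adversarial example: for a logistic "classifier", stepping each input
# pixel slightly in the right direction flips the predicted class.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=784)            # hypothetical weights of a linear classifier
x = rng.uniform(0, 1, size=784)     # a "clean" input image, flattened

def predict(img):
    """Probability the classifier assigns to class 1."""
    return 1 / (1 + np.exp(-w @ img))

# For this model the input gradient is proportional to w, so a small step of
# size epsilon along sign(w) (or its negation) pushes the score toward class 1
# (or class 0).
epsilon = 0.05
direction = -1 if predict(x) > 0.5 else 1    # aim for the opposite class
x_adv = np.clip(x + direction * epsilon * np.sign(w), 0, 1)

print(f"clean score: {predict(x):.3f}  adversarial score: {predict(x_adv):.3f}")
```

Each pixel moves by at most 0.05, so the perturbed image looks essentially unchanged to a human, yet the score crosses to the other class.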
AI-Guardian attempts to prevent such scenarios by building a backdoor into a given machine learning model to identify and block adversarial input: images with suspicious blemishes and other artifacts that you wouldn't expect to see in a normal picture.
Bypassing this protection involved trying to identify the mask used by AI-Guardian to spot adversarial examples, by showing the model multiple images that differ only by a single pixel. This brute force technique, described by Carlini and GPT-4, ultimately allows the backdoor trigger function to be identified so adversarial examples can then be constructed to avoid it.
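The article doesn't reproduce the attack code, but the shape of the single-pixel probing idea can be sketched. Everything below is hypothetical: defended_model is a stand-in for query access to an AI-Guardian-style defense in which the backdoor overwrites the masked pixels, so pixels whose flips never move the output betray the mask.

```python
# Hedged sketch of mask recovery by single-pixel probing. The defense below is
# a toy: masked pixels are overwritten by the trigger before "classification",
# so flipping them can never change the output, and that is the tell.
import numpy as np

SIDE = 8                                      # tiny image for illustration
true_mask = np.zeros((SIDE, SIDE), dtype=bool)
true_mask[:2, :2] = True                      # hypothetical trigger patch

def defended_model(img):
    patched = np.where(true_mask, 1.0, img)   # backdoor overwrites masked pixels
    return patched.sum()                      # toy stand-in for a confidence score

base = np.random.default_rng(1).uniform(0, 1, (SIDE, SIDE))
recovered = np.zeros((SIDE, SIDE), dtype=bool)
for i in range(SIDE):
    for j in range(SIDE):
        probe = base.copy()
        probe[i, j] = 1.0 - probe[i, j]       # flip a single pixel
        if defended_model(probe) == defended_model(base):
            recovered[i, j] = True            # output unmoved => pixel is masked

print("mask recovered exactly:", np.array_equal(recovered, true_mask))
```

This is also why Zhang's caveat below matters: the probing only works if the attacker can see enough of the model's output to tell when it moved.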
"The idea of AI-Guardian is quite simple, using an injected backdoor to defeat adversarial attacks; the former suppresses the latter based on our findings," said Shengzhi Zhang, assistant professor of computer science at Boston University Metropolitan College, in an email to The Register.
"To demonstrate the idea, in our paper, we chose to implement a prototype using a patch-based backdoor trigger, which is simply a specific pattern attached to the inputs. Such a type of trigger is intuitive, and we believe it is sufficient to demonstrate the idea of AI-Guardian.
"[Carlini's] approach starts by recovering the mask of the patch-based trigger, which definitely is possible and smart since the 'key' space of the mask is limited, thus suffering from a simple brute force attack. That is where the approach begins to break our provided prototype in the paper."
Zhang said he and his co-authors worked with Carlini, providing him with their defense model and source code. And later, they helped verify the attack results and discussed possible defenses in the interest of helping the security community.
Zhang said Carlini's contention that the attack breaks AI-Guardian is true for the prototype system described in their paper, but that comes with several caveats and may not work in improved versions.
One potential issue is that Carlini's approach requires access to the confidence vector from the defense model in order to recover the mask data.
"In the real world, however, such confidence vector information is not always available, especially when the model deployers already considered using some defense like AI-Guardian," said Zhang. "They typically will just provide the output itself and not expose the confidence vector information to customers due to security concerns."
In other words, without this information, the attack might fail. And Zhang said he and his colleagues devised another prototype that relied on a more complex triggering mechanism that isn't vulnerable to Carlini's brute force approach.
Carlini also prompted GPT-4 to produce the explanatory text describing the proposed attack on AI-Guardian.
There's a lot more AI-produced text in the paper, but the point is that GPT-4, in response to a fairly detailed prompt from Carlini, produced a quick, coherent description of the problem and the solution that did not require excessive human cleanup.
Carlini said he chose to attack AI-Guardian because the scheme outlined in the original paper was obviously insecure. His work, however, is intended more as a demonstration of the value of working with an LLM coding assistant than as an example of a novel attack technique.
Carlini, citing numerous past experiences defeating defenses against adversarial examples, said it would certainly have been faster to manually craft an attack algorithm to break AI-Guardian.
"However the fact that it is even possible to perform an attack like this by only communicating with a machine learning model over natural language is simultaneously surprising, exciting, and worrying," he said.
Carlini's assessment of the merits of GPT-4 as a co-author and collaborator echoes, with the addition of cautious enthusiasm, the sentiment of actor Michael Biehn when warning actor Linda Hamilton about a persistent cyborg in a movie called The Terminator (1984): "The Terminator is out there. It can't be bargained with. It can't be reasoned with. It doesn't feel pity or remorse or fear. And it absolutely will not stop, ever, until you are dead."
Here's Carlini, writing in black text to indicate that he, rather than GPT-4, penned these words (the chatbot's quoted output is in dark blue in the paper):
"GPT-4 has read many published research papers, and already knows what every common attack algorithm does and how it works. Human authors need to be told what papers to read, need to take time to understand the papers, and only then can build experiments using these ideas.
"GPT-4 is much faster at writing code than humans once the prompt has been specified. Each of the prompts took under a minute to generate the corresponding code.
"GPT-4 does not get distracted, does not get tired, does not have other duties, and is always available to perform the users specified task."
Relying on GPT-4 does not completely relieve human collaborators of their responsibilities, however. As Carlini observes, the AI model still required someone with domain expertise to present the right prompts and to fix bugs in the generated code. Its knowledge is frozen at its training data, and it does not learn; it recognizes only common patterns, in contrast to the human ability to make connections across topics. It doesn't ask for help, and it makes the same errors repeatedly.
Despite the obvious limitations, Carlini says he looks forward to the possibilities as large language models improve.
"Just as the calculator altered the role of mathematicians significantly simplifying the task of performing mechanical calculations and giving time for tasks better suited to human thought todays language models (and those in the near future) similarly simplify the task of solving coding tasks, allowing computer scientists to spend more of their time developing interesting research questions," Carlini said.
Zhang said Carlini's work is really interesting, particularly in light of the way he used an LLM for assistance.
"We have seen LLMs used in a wide array of tasks, but this is the first time to see it assist ML security research in this way, almost totally taking over the implementation work," he said. "Meanwhile, we can also see that GPT-4 is not that 'intelligent' yet to break a security defense by itself.
"Right now, it serves as assistance, following human guidance to implement the ideas of humans. It is also reported that GPT-4 has been used to summarize and help understand research papers. So it is possible that we will see a research project in the near future, tuning GPT-4 or other kinds of LLMs to understand a security defense, identify vulnerabilities, and implement a proof-of-concept exploit, all by itself in an automated fashion.
"From a defenders point of view, however, we would like it to integrate the last step, fixing the vulnerability, and testing the fix as well, so we can just relax."
NFL Champion Justin Reid preps students for STEM careers at three … – Kansas City Pitch
Monday, July 10, kicked off the first day of a three-week-long computer science camp hosted by Chiefs safety Justin Reid.
In partnership with the University of Missouri-Kansas City, the Tackling Tech Computer Science Camp's goals include helping attendees build problem-solving skills, forming mentorships, and having fun among peers while gaining familiarity with coding and various programs. These are all part of Reid's initiatives centered around technology, nutrition, and athletics. Here, students in grades 9-12 learn how to design mobile apps, create portfolio pieces, and more.
One of many programs being taught by instructors at the camp, Figma, is helping spearhead the program's app-building project. The collaborative web application for interface design gives users a platform to brainstorm designs and then translate them into a digital reality.
"I am so excited about this camp, because it's something I always wanted to do. I didn't have anything like this I could attend growing up. My first introduction to coding was when I attended Stanford, so I am excited to introduce these young kids to the computer science world. It doesn't matter what your career path is, technology will always set you apart," says Reid.
While students are on campus, tours are available, as well as access to counselors. This gives potential future college students the opportunity to discuss scholarships, degrees, and careers.
The camp runs until July 28.
KU project to help women transition from incarceration with training … – EurekAlert
LAWRENCE – For women who are incarcerated and lack access to the internet and other technologies, it can be difficult to navigate an increasingly online world when transitioning back to society. An interdisciplinary team at the University of Kansas has been awarded a grant from the National Science Foundation to expand their employment-related technology education program for women leaving incarceration and Train-the-Trainer program for peer mentors and library practitioners.
The three-year, $1.6 million grant will support "Developing Sustainable Ecosystems that will Support Women Transitioning from Incarceration into Technology Careers." KU's Center for Digital Inclusion is leading the project, training women leaving incarceration in Kansas and Missouri in digital skills for entry-level positions in the technology sector as well as for general employment. The funding also allows the project team to offer workshops for digital navigators, or peer mentors, who have successfully taken part in previous iterations of the program to guide other women now making the switch. Researchers will also partner with public libraries, employment agencies, and jails and prisons in the two states to make the programs sustainable for the future.
"We will be able to expand and improve our existing evidence-based technology education program to include a greater number of women, as well as professional organizations, public libraries and workforce centers," said Hyunjin Seo, Oscar Stauffer Professor of Journalism & Mass Communications and principal investigator for the project. "These days, if you are not able to use digital technology, you are not able to utilize many services in society, whether cultural, social, civic or others. Women transitioning from incarceration face significant challenges in this area."
Research has shown that employment is a significant factor in reducing rates of recidivism. The project will help women transitioning from incarceration gain employment through a holistic approach. Participants will learn how to navigate online job applications, secure housing and develop job skills. The digital skills trainings will range from introductory to advanced levels and provide participants with skills ranging from competence with office technologies to building websites, online security, coding and other technology career-specific skills. The education content and topics are determined by the project team's empirical research with women transitioning from incarceration as well as co-design sessions with the women and community partners. The project team includes professors, research staff, graduate students, undergraduate students and digital navigators.
The online security portion of this project builds on Seo's participation in an interdisciplinary cybersecurity research team that received KU's Research Rising grant, as well as her past collaborations with Fengjun Li, KU associate professor of computer science and co-principal investigator.
Jodi Whitt, a digital navigator in the Center for Digital Inclusion, said her experience learning new skills when leaving incarceration inspired her to help women in a similar situation.
"Helping other women in the program has given me a purpose in life that I never dreamed would be possible. I want to be an example to other women, that it is possible to learn new skills. I know how important it is to have someone who understands and believes in me. Having that connection and building those relationships is crucial to help empower and build confidence," Whitt said. "From experience, I also know learning new skills can help reduce recidivism. There are not a lot of opportunities for job training or employment for formerly incarcerated women. This program helps them gain experience and develop confidence for better opportunities in the workforce."
Dozens of digital navigators, librarians and employment navigators will receive training on mentorship and teaching as well as advanced technology topics. The project team will begin its work with program participants shortly before they leave jail or prison. Another new feature is an ecosystem approach designed to build and strengthen the capacity of local communities to support individuals with justice involvement. Tanesha Whitelaw, one of the program's digital navigators, said it is all too common to lose technical skills during incarceration.
"This training is important to this population because you can easily adapt to an environment which doesn't offer any technical skills or employment skills, and you're left behind when you are coming back into society," she said. "Being able to communicate in today's society requires technical skills. The systemic mechanism of communicating is gravitating toward technology, so this will be imperative for day-to-day functions."
During the grant's three-year life cycle, the program aims to support up to 600 women leaving or recently released from jails or prisons in Kansas and Missouri. During that time, researchers will also conduct extensive research and evaluation of the program. They will conduct interviews and surveys before, during and after trainings to gauge participants' skill levels, how they have improved, and employment rates among participants, as well as recidivism rates.
Data will be combined with information gathered from focus groups with public libraries where trainings take place and with other partners to determine which aspects of the program are most effective and what is needed to enable community organizations to continue the trainings after the grant project.
The project, led by the Center for Digital Inclusion in the William Allen White School of Journalism & Mass Communications, will build on previous efforts to help women transition from incarceration by gaining new skills. KU's Institute for Policy & Social Research manages the grant.
The Rise of Quantum Computing: A Leap Towards Unprecedented … – Medium
In the realm of computer science, a revolutionary technology has been steadily gaining momentum and capturing the imagination of scientists, researchers, and technology enthusiasts alike: Quantum Computing. Unlike traditional computers that process information in bits (either 0 or 1), quantum computers leverage the principles of quantum mechanics to operate with quantum bits, or qubits, which can exist in multiple states simultaneously. This unique property enables quantum computers to tackle complex problems that would take classical computers an impractical amount of time to solve. As quantum computing continues to advance, it promises to usher in an era of unprecedented technological advancements, transforming industries and reshaping the way we approach computing and problem-solving.
Quantum Mechanics in a Nutshell:
Before we delve into the fascinating world of quantum computing, let's briefly recap the principles of quantum mechanics. Quantum mechanics is a branch of physics that describes the behavior of particles at the atomic and subatomic levels. It introduces concepts such as superposition, entanglement, and uncertainty, which challenge our classical understanding of the world.
In classical computing, bits can represent either 0 or 1. In contrast, qubits can exist in a superposition of states, representing both 0 and 1 simultaneously until measured. This allows quantum computers to perform multiple calculations at once, drastically increasing processing power.
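A quick statevector simulation makes the superposition claim tangible. This is plain numpy rather than any quantum SDK, and the qubit is simulated classically; the point is only the bookkeeping of amplitudes and measurement probabilities.

```python
# Simulating one qubit: a Hadamard gate puts |0> into an equal superposition,
# and measurement then yields 0 or 1 with ~50/50 frequency (the Born rule).
import numpy as np

ket0 = np.array([1.0, 0.0])                    # the basis state |0>
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
psi = H @ ket0                                 # (|0> + |1>) / sqrt(2)

probs = np.abs(psi) ** 2                       # measurement probabilities
shots = np.random.default_rng(0).choice([0, 1], size=10_000, p=probs)
print("P(0), P(1):", probs, " measured:", np.bincount(shots) / 10_000)
```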
When qubits become entangled, their states become interdependent. Changing the state of one qubit instantly affects the state of its entangled partner, regardless of the distance between them. Entanglement plays a crucial role in quantum computing, enabling faster and more efficient communication between qubits.
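The correlation described above can also be illustrated with a small numpy sketch (a classical simulation of the statistics, not real entanglement): in the Bell state (|00> + |11>)/sqrt(2), the two measurement outcomes always agree.

```python
# Sampling measurements of the Bell state (|00> + |11>)/sqrt(2): the two
# qubits' outcomes are perfectly correlated, never 01 or 10.
import numpy as np

bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)   # amplitudes for 00,01,10,11
probs = np.abs(bell) ** 2
outcomes = np.random.default_rng(1).choice(4, size=10_000, p=probs)
first, second = outcomes // 2, outcomes % 2          # split into the two bits
print("outcomes always match:", bool(np.all(first == second)))
```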
In quantum mechanics, there is an inherent uncertainty at the quantum level. When measured, a qubit's state collapses to either 0 or 1, with the outcome being probabilistic. This randomness has both practical and philosophical implications, leading to debates about the nature of reality itself.
Quantum Computing Applications:
While quantum computing is still in its infancy, its potential applications are mind-boggling. One of the most prominent applications lies in cryptography. Quantum computers can efficiently factor large numbers, threatening to break traditional cryptographic methods like RSA, which rely on the difficulty of factoring large semiprime numbers. To address this challenge, researchers are developing quantum-resistant cryptographic algorithms that can withstand attacks from quantum computers.
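A toy example makes the cryptographic stake concrete. The numbers below are deliberately tiny and insecure, and trial division stands in for Shor's algorithm, which is what a large quantum computer would use to make the factoring step fast at real key sizes.

```python
# Toy RSA (insecure sizes): once n is factored, the private key follows.
p, q = 61, 53                      # secret primes; real keys use ~1024-bit primes
n, e = p * q, 17                   # public key (n, e)
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent (Python 3.8+)

msg = 42
cipher = pow(msg, e, n)            # anyone can encrypt with the public key

# Attacker's view: only (n, e, cipher). Trial division stands in for Shor here;
# at 2048-bit n it is hopeless classically, but not for a quantum computer.
p_found = next(k for k in range(2, n) if n % k == 0)
q_found = n // p_found
d_found = pow(e, -1, (p_found - 1) * (q_found - 1))
print("recovered message:", pow(cipher, d_found, n))   # -> 42
```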
Moreover, quantum computing shows great promise in drug discovery, materials science, and optimization problems. Pharmaceutical companies can utilize quantum computers to simulate molecular interactions more accurately, expediting the process of drug development. Materials scientists can optimize new materials with properties far beyond what classical simulations can achieve. Additionally, quantum computing can significantly enhance logistical and scheduling optimization for industries like transportation, finance, and supply chain management.
Challenges and the Quantum Race:
Despite its potential, quantum computing faces significant challenges. Quantum systems are notoriously fragile, requiring advanced error correction techniques to maintain the integrity of computations. Furthermore, cooling the quantum hardware to near absolute zero is essential to minimize noise and maintain qubit stability.
The quantum race is fierce, with companies, research institutions, and governments vying for quantum supremacy: the point at which a quantum computer outperforms the world's most powerful classical supercomputers. Industry giants like IBM, Google, and Microsoft are investing heavily in quantum research and development. Meanwhile, startups and quantum-focused companies are entering the field, contributing to the vibrant ecosystem of quantum technologies.
Quantum computing has the potential to revolutionize computing as we know it, pushing the boundaries of human knowledge and opening up new avenues for innovation. While we are still in the early stages of this quantum revolution, the progress made so far is nothing short of awe-inspiring. As scientists and researchers continue to unlock the secrets of quantum mechanics and improve the stability and scalability of quantum computers, we can look forward to a future where complex problems are solved in a fraction of the time, transforming our lives in unimaginable ways. The era of quantum computing has arrived, and it's set to be a thrilling journey toward the next frontier of technological advancement.
Missouri S&T announces new Kummer Endowed Chair of Computer … – Missouri S&T News and Research
Dr. Seung-Jong Jay Park has been named Kummer Endowed Chair of Computer Science at Missouri S&T effective Aug. 1.
Park comes to S&T from the National Science Foundation (NSF), where he has served as a program director and managed computer science-related research projects since 2021. He has also served as the Dr. Fred H. Fenn Memorial Professor of Computer Science and Engineering at Louisiana State University since 2004, taking a leave of absence two years ago to support the NSF.
"My vision for the computer science department is to provide our students with a world-class education while also conducting research that will help shape and improve the world," Park says. "I am excited to work with the amazing students and faculty at Missouri S&T and lead a state-of-the-art department that focuses on technologies including artificial intelligence, cybersecurity, big data and other pressing topics."
Park is an expert in the areas of networking and data-intensive computing. He has conducted research related to big data and deep learning focused on software frameworks for large-scale science applications and cybersecurity development for cloud computing, high-performance computing and high-speed networks.
His projects have received support from federal and state programs including the NSF, NASA, the National Institutes of Health, the Office of Naval Research and the Air Force Research Laboratory. He also received IBM faculty research awards from 2015 to 2017.
Park earned a Ph.D. in electrical and computer engineering from the Georgia Institute of Technology. He also holds a master's degree in computer science from the Korea Advanced Institute of Science and Technology in Daejeon, South Korea, and a bachelor's degree in computer science from Korea University in Seoul, South Korea.
Park takes over from Dr. Steve Gao, Curators' Distinguished Teaching Professor of geosciences and geological and petroleum engineering, who has served as interim chair since September 2022.
Dr. David Borrok, vice provost and dean of the College of Engineering, says he appreciates Gao's service as interim chair, and he is excited to see where Park takes the department.
"Dr. Gao is a fantastic leader and has done an excellent job leading this department," he says. "Now, Dr. Park will be able to step into this role and take our computer science programs to the next level."
The Kummer Endowed Chair of Computer Science at Missouri S&T was made possible by June Kummer and her late husband, Fred Kummer, a 1955 graduate of S&T. In October 2020, the couple made a transformative $300 million gift to the university and established the Kummer Institute for Student Success, Research and Economic Development.
For more information about Missouri S&T's computer science department, visit cs.mst.edu.
Missouri University of Science and Technology (Missouri S&T) is a STEM-focused research university of over 7,000 students. Part of the four-campus University of Missouri System and located in Rolla, Missouri, Missouri S&T offers 101 degrees in 40 areas of study and is among the nation's top 10 universities for return on investment, according to Business Insider. For more information about Missouri S&T, visit http://www.mst.edu.
5 Best Universities in Canada for Computer Science in 2023 – Analytics Insight
Explore the 5 Best Universities in Canada for Computer Science in 2023
Programming, computer and software applications, as well as the theory and practice of computation, make up the popular and lucrative field of study known as computer science. Canada is a great place for international students who want to get a computer science degree because it provides high-quality education, low tuition costs, and a wide range of career options. In 2023, the following universities in Canada are ranked among the best for computer science degrees by the Times Higher Education World University Rankings:
The University of Toronto, ranked 22nd in the world in the field, offers a vast number of undergraduate and graduate computer science programs, with interdisciplinary options and specializations also available. The institution also houses the Vector Institute for Artificial Intelligence, a research and development organization that focuses on cutting-edge AI.
The University of Montreal, ranked 34th in the world for computer science, offers programs in computer science and operations research, in addition to certificates and diplomas in related fields. The institution is a leader in deep learning and artificial intelligence research, as it is home to the Montreal Institute for Learning Algorithms (MILA), one of the largest deep learning laboratories in the world.
The University of Waterloo, which holds the 43rd-place global ranking in computer science, offers programs in computer science, software engineering, computer engineering, and data science. The university's well-known cooperative education program allows students to gain paid work experience in the field while they are still in school.
The University of British Columbia, ranked 47th in the world for computer science, offers programs in cognitive systems and computer science, as well as combined majors with other disciplines. Additionally, the institution is home to the Institute for Computing, Information and Cognitive Systems (ICICS), which fosters interdisciplinary research and progress in computing and associated areas.
McGill University, ranked 53rd in the world for computer science, provides degrees in bioinformatics, computer science, software engineering, and information systems. The institution is also home to the Centre for Intelligent Machines (CIM), a research facility that focuses on robotics, automation, artificial intelligence (AI), computer vision, systems and control theory, and voice recognition.
HLGU computer science department to host LEGO Events this summer – The Pathway
HANNIBAL (HLGU) – The Hannibal-LaGrange University Computer Science Department will be hosting a series of upcoming events for students using the popular LEGO building blocks.
The LEGO Learning Play Date, titled LEGO Nation, will give students the opportunity to learn about and build famous places from across the United States. There will be three dates to choose from: July 5 from 10am to noon, July 12 from 1pm to 3pm, and August 2 from 10am to noon. The cost is $30, and the event will serve as a fundraiser for the Computer Science Department's events in the fall. It is open to kids who have completed kindergarten and above. The event will be held at the HLGU Partee Center in the Computer Services wing and is limited to 15 participants.
The LEGO Robotics Camp will allow students to build and program an EV3 LEGO robot. It will be held on August 7-11 from 9am to 11am each day. The cost is $100. It is open to students going into 4th grade or older. The event will be held at the HLGU Partee Center in the Computer Services wing and is limited to 15 participants.
To sign up for either of these events, email Michelle Todd at mtodd@hlg.edu or call 573-629-3202.
Hannibal-LaGrange University is a four-year Christian university fully accredited by the Higher Learning Commission. The institution prides itself on its traditional and nontraditional educational experience in a distinctively Christian environment.
Evaluating cybersecurity methods: The system analyzes the … – Science Daily
A savvy hacker can obtain secret information, such as a password, by observing a computer program's behavior, like how much time that program spends accessing the computer's memory.
Security approaches that completely block these "side-channel attacks" are so computationally expensive that they aren't feasible for many real-world systems. Instead, engineers often apply what are known as obfuscation schemes that seek to limit, but not eliminate, an attacker's ability to learn secret information.
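For a feel of what a side channel is, here is a deliberately exaggerated toy in Python, invented for illustration (real microarchitectural attacks time cache and memory behavior, not a sleep call): an early-exit string comparison runs longer the more leading characters of a guess are correct, so timing alone recovers the secret one character at a time.

```python
# Toy timing side channel: the early exit in naive_check leaks how many leading
# characters matched, and the artificial per-character delay makes it visible.
import time

SECRET = "hunter2"

def naive_check(guess):
    if len(guess) != len(SECRET):
        return False
    for a, b in zip(guess, SECRET):
        if a != b:
            return False            # early exit: the timing tell
        time.sleep(1e-3)            # exaggerated per-character work, for the demo
    return True

def time_guess(guess, reps=3):
    start = time.perf_counter()
    for _ in range(reps):
        naive_check(guess)
    return time.perf_counter() - start

recovered = ""
for _ in range(len(SECRET)):
    pad = "_" * (len(SECRET) - len(recovered) - 1)
    best = max("abcdefghijklmnopqrstuvwxyz0123456789",
               key=lambda c: time_guess(recovered + c + pad))
    recovered += best
print("recovered:", recovered)      # expected: hunter2
```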
To help engineers and scientists better understand the effectiveness of different obfuscation schemes, MIT researchers created a framework to quantitatively evaluate how much information an attacker could learn from a victim program with an obfuscation scheme in place.
Their framework, called Metior, allows the user to study how different victim programs, attacker strategies, and obfuscation scheme configurations affect the amount of sensitive information that is leaked. The framework could be used by engineers who develop microprocessors to evaluate the effectiveness of multiple security schemes and determine which architecture is most promising early in the chip design process.
"Metior helps us recognize that we shouldn't look at these security schemes in isolation. It is very tempting to analyze the effectiveness of an obfuscation scheme for one particular victim, but this doesn't help us understand why these attacks work. Looking at things from a higher level gives us a more holistic picture of what is actually going on," says Peter Deutsch, a graduate student and lead author of an open-access paper on Metior.
Deutsch's co-authors include Weon Taek Na, an MIT graduate student in electrical engineering and computer science; Thomas Bourgeat PhD '23, an assistant professor at the Swiss Federal Institute of Technology (EPFL); Joel Emer, an MIT professor of the practice in computer science and electrical engineering; and senior author Mengjia Yan, the Homer A. Burnell Career Development Assistant Professor of Electrical Engineering and Computer Science (EECS) at MIT and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL). The research was presented last week at the International Symposium on Computer Architecture.
Illuminating obfuscation
While there are many obfuscation schemes, popular approaches typically work by adding some randomization to the victim's behavior to make it harder for an attacker to learn secrets. For instance, perhaps an obfuscation scheme involves a program accessing additional areas of the computer memory, rather than only the area it needs to access, to confuse an attacker. Others adjust how often a victim accesses memory or another shared resource so an attacker has trouble seeing clear patterns.
But while these approaches make it harder for an attacker to succeed, some amount of information from the victim still "leaks" out. Yan and her team want to know how much.
They had previously developed CaSA, a tool to quantify the amount of information leaked by one particular type of obfuscation scheme. But with Metior, they had more ambitious goals. The team wanted to derive a unified model that could be used to analyze any obfuscation scheme -- even schemes that haven't been developed yet.
To achieve that goal, they designed Metior to map the flow of information through an obfuscation scheme into random variables. For instance, the model maps the way a victim and an attacker access shared structures on a computer chip, like memory, into a mathematical formulation.
Once Metior derives that mathematical representation, the framework uses techniques from information theory to understand how the attacker can learn information from the victim. With those pieces in place, Metior can quantify how likely it is for an attacker to successfully guess the victim's secret information.
"We take all of the nitty-gritty elements of this microarchitectural side-channel and map it down to, essentially, a math problem. Once we do that, we can explore a lot of different strategies and better understand how making small tweaks can help you defend against information leaks," Deutsch says.
Surprising insights
They applied Metior in three case studies to compare attack strategies and analyze the information leakage from state-of-the-art obfuscation schemes. Through their evaluations, they saw how Metior can identify interesting behaviors that weren't fully understood before.
For instance, a prior analysis determined that a certain type of side-channel attack, called probabilistic prime and probe, was successful because this sophisticated attack includes a preliminary step where it profiles a victim system to understand its defenses.
Using Metior, they show that this advanced attack actually works no better than a simple, generic attack and that it exploits different victim behaviors than researchers previously thought.
Moving forward, the researchers want to continue enhancing Metior so the framework can analyze even very complicated obfuscation schemes in a more efficient manner. They also want to study additional obfuscation schemes and types of victim programs, as well as conduct more detailed analyses of the most popular defenses.
Ultimately, the researchers hope this work inspires others to study microarchitectural security evaluation methodologies that can be applied early in the chip design process.
"Any kind of microprocessor development is extraordinarily expensive and complicated, and design resources are extremely scarce. Having a way to evaluate the value of a security feature is extremely important before a company commits to microprocessor development. This is what Metior allows them to do in a very general way," Emer says.
This research is funded, in part, by the National Science Foundation, the Air Force Office of Scientific Research, Intel, and the MIT RSC Research Fund.