Category Archives: Artificial Intelligence
Artificial intelligence is the future of cybersecurity – Technology Record
Cybercriminals are using artificial intelligence (AI) to evolve the sophistication of attacks at a rapid pace. In response, an increasing number of organisations are also adopting the technology as part of their cybersecurity strategies. According to research conducted in Mimecast's State of Email Security Report 2021, 39 per cent of organisations are utilising AI to bolster their email defences.
Although we're still in the early phases of these technologies and their application to cybersecurity, this is a rising trend. Businesses using advanced technologies such as AI and layered email defences, while also regularly training their employees in attack-resistant behaviours, will be in the best possible position to sidestep future attacks and recover quickly.
Mimecast is integrating AI capabilities to help halt some of cybersecurity's most pervasive threats. Take the use of tracking pixels in emails, for example, which both the BBC and ZDNet have called "endemic". Spy trackers embedded in emails have become ubiquitous, used often by marketers but also, increasingly, by cybercriminals looking to gather information to weaponise highly targeted business email compromise attacks.
Mimecast's CyberGraph uses machine learning, a subset of AI, to block these hard-to-detect email threats, thus limiting reconnaissance and mitigating human error. CyberGraph disarms embedded trackers and uses machine learning and identity graph technologies to detect anomalous malicious behaviour. Because the AI is continually learning, it requires no configuration, thus lessening the burden on IT teams and reducing the likelihood of unsafe misconfiguration. Plus, as an add-on to Mimecast Email Security, CyberGraph offers differentiated capability integrated into an existing secure email gateway, streamlining your email security strategy.
AI is here, and here to stay. Although its use is not a silver bullet, there's a strong case for it in the future of cybersecurity. Mimecast CyberGraph combines with many other layers of protection. It embeds colour-coded warning banners in emails to highlight detected risks, and it solicits user feedback. This feedback strengthens the machine learning model and can update banners across all similar emails to highlight the new risk levels.
As more cyber resilience strategies begin to adopt AI, it will be vital that people and technology continue to inform one another to provide agile protection against ever-evolving threat landscapes. Innovations such as CyberGraph provide evidence that AI has a promising value proposition in cybersecurity.
Duncan Mills is the senior product marketing manager at Mimecast
This article was originally published in the Summer 2021 issue of The Record. To get future issues delivered directly to your inbox, sign up for a free subscription.
South America Workplace Services Market Forecast to 2028: Unification of Artificial Intelligence (AI) to Revolutionize Workplace Services Business -…
DUBLIN--(BUSINESS WIRE)--The "South America Workplace Services Market Forecast to 2028 - COVID-19 Impact and Regional Analysis By Service Type, Organization Size and Large Enterprises, and Vertical" report has been added to ResearchAndMarkets.com's offering.
Consumer Goods and Retail Segment is expected to be the fastest growing during the forecast period for the SAM region.
The SAM workplace services market is expected to reach US$7,680.59 million by 2028 from US$3,365.00 million in 2021. The market is estimated to grow at a CAGR of 12.5% from 2021 to 2028.
The report provides trends prevailing in the SAM workplace services market along with the drivers and restraints pertaining to the market's growth. The rising significance of enterprise mobility is the major factor driving the growth of the SAM workplace services market. However, issues associated with escalating security concerns hinder the growth of the SAM workplace services market.
The SAM workplace services market is segmented by service type, organization size, vertical, and country. Based on service type, the market is segmented into end-user outsourcing services and tech support services. In 2020, the end-user outsourcing services segment held the largest share of the SAM workplace services market.
Based on organization size, the workplace services market is divided into small and medium-sized enterprises (SMEs) and large enterprises. Large enterprises are expected to be the fastest-growing segment over the forecast period. On the basis of vertical, the market is segmented into Media and Entertainment, BFSI, Consumer Goods and Retail, Manufacturing, Healthcare and Life Sciences, Education, Telecom, IT and ITES, Energy and Utilities, Government and Public Sector, and Others. The Telecom, IT and ITES segment accounted for the largest market share in 2020.
The presence of various developing countries in SAM makes this region one of the key markets for the future growth of the workplace services market. The growing population, rising disposable income, high demand for advanced technologies, and huge focus on digital transformation are some of the key factors expected to drive the growth of the workplace services market in SAM.
The high number of confirmed cases and deaths due to COVID-19 in major SAM countries such as Brazil, Peru, Chile, Ecuador, and Argentina affected the region in 2020. Following the coronavirus outbreak, IT and software businesses have received a lift in several SAM countries, as digital acceleration and the need for remote work have boosted market growth even during the pandemic. Thus, the workplace services market has not been majorly affected by the pandemic.
Accenture, Atos SE, Cognizant, Inc., Fujitsu Limited, HCL Technologies, IBM Corporation, NTT DATA Corporation, Tata Consultancy Services Limited, Unisys Corporation, and Wipro Limited are among some of the leading companies in the SAM workplace services market.
The companies are focused on adopting organic growth strategies such as product launches and expansions to sustain their position in the dynamic market. For instance, in 2020 Fujitsu announced that it had signed a two-year agreement with HMRC, under which it will provide its Digital Workplace Services to HMRC.
Market Dynamics
Market Drivers
Market Restraints
Market Opportunities
Market Trend
Companies Mentioned
For more information about this report visit https://www.researchandmarkets.com/r/rjhwae
Data, Algorithms and Artificial Intelligence: What Is The Problem? – The Costa Rica News
Data, algorithms and artificial intelligence (AI) are topics with a constant presence in many regions of the world in debates that range from futuristic technologies such as autonomous vehicles, to everyday applications that negatively affect our communities. As feminist, anti-capitalist and anti-racist activists, we must understand the implications and policies of these technologies, as in many cases they accentuate inequalities related to wealth and power and reproduce racial and gender discrimination.
Data, algorithms and artificial intelligence occupy more and more space in our lives, although, in general, we are hardly aware of their existence. Their impacts, at times, can be equally invisible, but they are related to all our struggles for a more just world. Access to these technologies is uneven, and the balance is increasingly tilting toward powerful institutions such as the Armed Forces, the police, and businesses.
Only a few private agents have the computational capacity to run the most robust AI models, so even universities depend on them for their research. As for data, we produce it every day, sometimes consciously, sometimes simply by carrying our smartphones with us all the time without even using them.
A few years ago, the Facebook-Cambridge Analytica scandal made headlines for using data to influence votes and elections in the UK and US. Generally, we only learn of such cases from whistleblowers [1], since there is a total lack of transparency around the algorithms and the data fed into them, which makes it difficult to understand their impact. Some examples help us understand how these technologies and the way they are implemented change decision-making methods, worsen working conditions, intensify inequality and oppression, and even damage the environment.
Automated decision making (ADM) systems use data and algorithms to make decisions on behalf of humans. They are changing not only how decisions are made, but also where and by whom. In some cases, they shift decision-making from public space to private spaces, or effectively place control over public space in the hands of private companies.
Some insurers have implemented ADM and AI technologies to determine the legitimacy of claims notices. According to them, this is a more efficient and profitable way to make these decisions. But, often, information about what data is used and what criteria are applied to these determinations is not made available to the public because it is considered a trade secret of a company.
In some cases, insurers even use data to forecast risks and calculate rates based on expected behaviors, which is just a new way of affecting the principle of solidarity, which is the basis of group insurance, and accentuating neoliberal and individualistic principles.
Furthermore, these models use data from the past to predict future outcomes, which makes them inherently conservative and predisposed to reproduce or even intensify forms of discrimination suffered in the past. Although they do not use race directly as identifiable data, indicators such as ZIP codes generally serve the same purpose, and these AI models tend to discriminate against racialized communities.
Not only private companies, but also governments have AI systems in place to provide services more efficiently and detect fraud, which is usually synonymous with cost reduction. Chile is among the countries that have started a program to use AI to manage healthcare, reduce waiting times, and make treatment decisions. Critics of the program fear that the system will cause harm by perpetuating prejudices based on race, ethnicity or country of origin, and gender.
Argentina developed a model in collaboration with Microsoft to prevent school dropouts and early pregnancy. Based on information such as neighborhood, ethnicity, country of origin or hot water supply, an algorithm predicts which girls are most likely to get pregnant, and based on that, the government directs services. But, in fact, the government is using this technology to avoid having to implement extensive sexuality education, which, incidentally, does not enter the model's calculations for predicting teenage pregnancy.
Under the banner of Smart Cities, city governments are handing over entire neighborhoods to private companies for experimentation with technologies. Sidewalk Labs, a subsidiary of Alphabet (the company that owns Google), wanted to develop a neighborhood in Toronto, Canada, and collect massive amounts of data on residents to, among other things, predict their movements in order to regulate traffic. The company even had plans to apply its own taxes and control some public services. If it weren't for the activists who mobilized against this project, the government would have simply handed over the public space to one of the largest and most powerful private companies in the world.
Putting decision-making power over public space in the hands of private companies is not the only problem with initiatives such as Smart Cities. An example from India shows that they also tend to create large-scale surveillance mechanisms. Lucknow City Police recently announced a plan to use cameras and facial recognition technology (FRT) to identify expressions of distress on women's faces.
Under the guise of combating violence against women, several cities in India have spent exorbitant amounts of money to implement surveillance systems, money that could have been invested in community-led projects to combat gender-based violence.
Rather than address the root of the problem, the government perpetuates patriarchal norms by creating surveillance regimes. Additionally, facial recognition technology has been shown to be significantly less accurate for non-cis white men, and emotion-sensing technology is considered highly flawed.
AI is causing heightened surveillance in many areas of life in many countries, but especially in liberal democracies: from software that monitors students during online exams to what is known as smart surveillance, which tends to intensify surveillance of already marginalized communities.
A good example of this is body cameras, which have been heralded as solutions to combat police brutality and serve as an argument against demands for budget cuts or even abolition of the police. From a feminist perspective, it should be noted that surveillance technologies not only exist in the public space, but also play an increasingly important role in domestic violence.
Law enforcement authorities also create gang databases that generate further discrimination in racialized communities. It is well known that private data mining companies such as Palantir or Amazon support immigration agencies in the deportation of undocumented immigrants. AI is used to foresee crimes that will occur and who will commit them. Because these models are based on past crime and criminal record data, they are highly skewed toward racialized communities. Furthermore, in fact, they can contribute to crime rather than prevent it.
Another example of how these AI surveillance systems support white supremacy and heterosexual patriarchy is airport security systems. Black women, Sikh men [3] and Muslim women are more frequently targeted by invasive inquiries. And because these models and technologies enforce cisnormativity, trans and non-binary people are identified as divergent and are inspected.
Surveillance technologies are not only used by the police, immigration agencies and the military. It is increasingly common for companies to monitor their employees using AI. As in any other context, surveillance technologies in the workplace reinforce existing discrimination and power disparities.
This development may have started within the big platform and big data companies [4], but data capitalism, the fastest-growing sector, imposes new working conditions not only on the workers of that sector; its scope is even greater. Probably the best-known example of this type of surveillance is Amazon, where employees are constantly monitored and, if their productivity rates continually fall below expectations, are automatically fired.
Other examples include the clothing retail sector, where tasks such as organizing merchandise for display are now decided by algorithms, depriving working people of their autonomy. Black people and other racialized people, especially women, are more likely to hold low-paid and unstable jobs and are therefore often the most affected by this dehumanization of work. Platform companies like Amazon or Uber, backed by huge amounts of capital, not only change their industries, but also manage to impose changes in legislation that weaken the protection of workers and affect entire economies.
That's what they did in California, claiming that the change would create better opportunities for racialized working women. However, a recent study concluded that this change actually legalized racial subordination.
So far we have seen that AI and algorithms contribute to power disparities, shift decision-making sites from public space to non-transparent private companies, and intensify the harms inherent in racist, capitalist, heteropatriarchal, and cisnormative systems. Furthermore, these technologies frequently attempt to give the impression that they are fully automated, when in reality they rely on a large amount of cheap labor.
And, when fully automated, they are capable of consuming absurd amounts of energy, as demonstrated in the case of some language processing models. Bringing these facts to light has cost leading researchers their jobs. So what strategies are activists using to resist these technologies and/or give visibility to the damage they cause?
In general, the first step in these strategies is to understand the damage that can result and to document where the technologies are being applied. The Our Data Bodies project produced the Digital Defense Playbook, a resource aimed at raising public awareness of how communities are affected by data-driven technologies.
The Not My AI platform, for example, has been mapping biased and harmful AI projects in Latin America. The group Organizers Warning Notification and Information for Tenants [OWN-IT!] built a data bank in Los Angeles to help tenants fight rent increases. In response to predictive policing technology, activists created the White Collar Crime Risk Zone Map to anticipate where in the US financial crime is most likely to occur.
Some people have decided to stop using certain tools, such as the Google search engine or Facebook, thus refusing to provide even more data to these companies. They argue that the problem is not individual data, but the datasets used to restructure environments that extract more from us in the form of data and labor, and that are becoming less and less transparent.
Another strategy is data obfuscation or masking: activists created plug-ins that randomly click on Google ads or randomly like Facebook pages to fool algorithms. There are also ways to prevent AI from recognizing faces in photos and using them to train algorithms. The Oracle for Transfeminist Technologies presents a totally different approach, a deck that invites the exercise of collective imagination for a different technology.
Indigenous people living on Turtle Island (US and Canada) are already very familiar with surveillance and with the collection of large volumes of data about them that are used against them. From this experience, they created approaches to First Nations data sovereignty: principles related to data collection, access, and ownership to prevent further harm and enable First Nations, Métis, and Inuit [5] peoples to benefit from their own data.
AI, algorithms, and data-driven technologies aren't just troublesome privacy issues. Much more is at stake. As we organize our struggles, we most likely use technologies that produce data for companies that profit from data capitalism. We need to be aware of the implications of this, the damage these technologies cause, and how to resist them so that our mobilizations are successful.
Artificial Intelligence and the Gods Behind the Masks – WIRED
"Why don't you go join them?" asked Ozioma. Showing up behind Amaka on the balcony, the landlady lit an English-brand cigarette, leaned against the railings, and peered down.
"I used to be the dance queen of our village," Ozioma went on, her eyes hazy with nostalgia. "Not trying to brag here, but not a single boy could take his eyes off me. My father hated when I danced, though. He threatened to hit me every time he caught me dancing."
"Did you listen to him?"
Ozioma laughed heartily. "Why on earth would a child give up what they love because their parents said no? Eventually, I found a way that could allow me to at least finish the dance."
"What was it?" asked Amaka.
"I would wear an Agbogho Mmuo every time I danced."
"What?" Amaka's eyes widened. The Agbogho Mmuo was the sacred mask of northern Igbo, representing maiden spirits as well as the mother of all living creation.
"See, my father had your exact expression when he saw me with the mask. He had no choice but to bow down, to show his respect to the mask and the goddess it embodies. Of course, after I was done with the dance, with the mask stripped off, I would get my share of scolding," said Ozioma, beaming with pride, as if the memory had temporarily brought her back to the days when she was a young girl.
Upon hearing Ozioma's story, Amaka felt an idea, blurry and shapeless, darting across his mind like a fish. He scrunched up his face, thinking. The mask ...
"Yes, child. The mask is where my power came from."
"Strip off the mask? Strip off the mask," murmured Amaka.
All of a sudden, he leapt to his feet and kissed Ozioma on the cheek. "Thank you, oh thank you, my dance queen!" He dashed back to his room, leaving behind the hustle and bustle of the parade and a very confused Ozioma.
"Maybe spinning a lie and putting it in FAKA's mouth won't make his followers abandon their idol," Amaka told Chi via video chat that afternoon, excited with his new discovery. "But stripping off its mask and revealing the hidden puppet master might."
"No one knows who the puppet master is, though," Chi replied.
"Exactly!" Amaka beamed. "Can't you see? It means that the puppet master can be anyone."
"So, you're suggesting that ..."
"I can strip off FAKA's mask and make him any person you want him to be."
Chi fell silent in the video chat.
"You're a fucking genius," Chi finally muttered.
"Ndewo," Amaka said, preparing to sign off.
"Wait," Chi looked up. "It means that you need to create a face that exists in reality."
"Yes."
"A face that can fool all the anti-fake detectors," added Chi, musing. "Think about the color distortion, the noise pattern, the compression rate variation, the blink frequency, the biosignal ... is it doable?"
"I need time," said Amaka. "And unlimited cloud AI computing power."
"I'll get back to you." Chi logged off.
Amaka gazed at his own reflection in the dimming monitor screen. The adrenaline rush that had initially washed over him had faded. He saw on his face not excitement, but exhaustion and an unsettled feeling, as if he had betrayed a guardian spirit watching from above.
In theory anyone could fake a perfect image or video, at least well enough to fool the existing anti-fake detectors. The problem was the cost: computing power.
Fakes and their detectors were engaged in an eternal battle, like Eros and Thanatos. Amaka had his work cut out for him, but he was determined to succeed in achieving his singular goal: the creation of a real, human face.
AIMe – A standard for artificial intelligence in biomedicine – Innovation Origins
An international research team from several universities, including Maastricht University (UM), has proposed a standardized registry for artificial intelligence (AI) work in biomedicine. The aim is to improve the reproducibility of results and create trust in the use of AI algorithms in biomedical research and, in the future, in everyday clinical practice. The scientists presented their proposal in the scientific journal Nature Methods.
In recent decades, new technologies have made it possible to develop a wide variety of systems that can generate huge amounts of biomedical data, for example in cancer research. At the same time, completely new possibilities have developed for examining and evaluating this data using artificial intelligence methods. AI algorithms in intensive care units, for example, can predict circulatory failure at an early stage, based on large amounts of data from several monitoring systems, by processing a lot of complex information from different sources at the same time.
Read the complete press release here.
This great potential of AI systems leads to an unmanageable number of biomedical AI applications. Unfortunately, the corresponding reports and publications do not always adhere to best practices or provide only incomplete information about the algorithms used or the origin of the data. This makes assessment and comprehensive comparisons of AI models difficult. The decisions of AIs are not always comprehensible to humans, and results are seldom fully reproducible. This situation is untenable, especially in clinical research, where trust in AI models and transparent research reports are crucial to increase the acceptance of AI algorithms and to develop improved AI methods for basic biomedical research.
To address this problem, an international research team including the UM has proposed the AIMe registry for artificial intelligence in biomedical research, a community-driven registry that enables users of new biomedical AI to create easily accessible, searchable and citable reports that can be studied and reviewed by the scientific community.
The freely accessible registry is available at https://aime-registry.org and consists of a user-friendly web service that guides users through the AIMe standard and enables them to generate complete and standardised reports on the AI models used. A unique AIMe identifier is automatically created, which ensures that the report remains persistent and can be specified in publications. Hence, authors do not have to cope with the time-consuming description of all facets of the AI used in articles for scientific journals and can simply refer to the report in the AIMe registry.
New Artificial Intelligence Technology Poised to Transform Heart Imaging – University of Virginia
A new artificial-intelligence technology for heart imaging can potentially improve care for patients, allowing doctors to examine their hearts for scar tissue while eliminating the need for contrast injections required for traditional cardiovascular magnetic resonance imaging.
A team of researchers who developed the technology, including doctors at UVA Health, reports the success of the approach in a new article in the scientific journal Circulation. The team compared its AI approach, known as virtual native enhancement, with contrast-enhanced cardiovascular magnetic resonance scans now used to monitor hypertrophic cardiomyopathy, the most common genetic heart condition. The researchers found that virtual native enhancement produced higher-quality images and better captured evidence of scar in the heart, all without the need for injecting the standard contrast agent required for cardiovascular magnetic resonance scans.
"This is a potentially important advance, especially if it can be expanded to other patient groups," said researcher Dr. Christopher Kramer, the chief of the Division of Cardiovascular Medicine at UVA Health, Virginia's only Center of Excellence designated by the Hypertrophic Cardiomyopathy Association. "Being able to identify scar in the heart, an important contributor to progression to heart failure and sudden cardiac death, without contrast, would be highly significant. Cardiovascular magnetic resonance scans would be done without contrast, saving cost and any risk, albeit low, from the contrast agent."
Hypertrophic cardiomyopathy is the most common inheritable heart disease, and the most common cause of sudden cardiac death in young athletes. It causes the heart muscle to thicken and stiffen, reducing its ability to pump blood and requiring close monitoring by doctors.
The new virtual native enhancement technology will allow doctors to image the heart more often and more quickly, the researchers say. It also may help doctors detect subtle changes in the heart earlier, though more testing is needed to confirm that.
The technology also would benefit patients who are allergic to the contrast agent injected for cardiovascular magnetic resonance scans, as well as patients with severely failing kidneys, a group that avoids the use of the agent.
The new approach works by using artificial intelligence to enhance T1-maps of the heart tissue created by magnetic resonance imaging. These maps are combined with enhanced MRI cines, which are like movies of moving tissue, in this case, the beating heart. Overlaying the two types of images creates the artificial virtual native enhancement image.
Based on these inputs, the technology can produce something virtually identical to the traditional contrast-enhanced cardiovascular magnetic resonance heart scans doctors are accustomed to reading, only better, the researchers conclude. "Avoiding the use of contrast and improving image quality in [cardiovascular magnetic resonance] would only help both patients and physicians down the line," Kramer said.
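As a rough, purely illustrative sketch (not the researchers' actual model or architecture), the core idea of stacking a native T1 map together with cine frames as input channels to a neural network could look something like the following; every layer size and name here is an assumption for demonstration only.

```python
# Illustrative only: a toy convolutional network that takes one T1-map channel
# plus a few cine frames stacked as input channels and predicts a single-channel
# "virtual enhancement"-style image. Not the published VNE architecture.
import torch
import torch.nn as nn

class ToyVNENet(nn.Module):
    def __init__(self, n_cine_frames: int = 4):
        super().__init__()
        in_channels = 1 + n_cine_frames  # 1 T1 map + cine frames
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=1),  # one output channel: enhanced image
        )

    def forward(self, t1_map: torch.Tensor, cine: torch.Tensor) -> torch.Tensor:
        # t1_map: (batch, 1, H, W); cine: (batch, n_cine_frames, H, W)
        x = torch.cat([t1_map, cine], dim=1)  # overlay the two image types as channels
        return self.net(x)

model = ToyVNENet()
t1 = torch.rand(2, 1, 128, 128)    # synthetic T1 maps
cine = torch.rand(2, 4, 128, 128)  # synthetic cine frames
vne = model(t1, cine)              # (2, 1, 128, 128) predicted images
```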
While the new research examined virtual native enhancement's potential in patients with hypertrophic cardiomyopathy, the technology's creators envision it being used for many other heart conditions as well.
"While currently validated in the [hypertrophic cardiomyopathy] population, there is a clear pathway to extend the technology to a wider range of myocardial pathologies," they write. "[Virtual native enhancement] has enormous potential to significantly improve clinical practice, reduce scan time and costs, and expand the reach of [cardiovascular magnetic resonance] in the near future."
The research team consisted of Qiang Zhang, Matthew K. Burrage, Elena Lukaschuk, Mayooran Shanmuganathan, Iulia A. Popescu, Chrysovalantou Nikolaidou, Rebecca Mills, Konrad Werys, Evan Hann, Ahmet Barutcu, Suleyman D. Polat, HCMR investigators, Michael Salerno, Michael Jerosch-Herold, Raymond Y. Kwong, Hugh C. Watkins, Christopher M. Kramer, Stefan Neubauer, Vanessa M. Ferreira and Stefan K. Piechnik.
Kramer has no financial interests in the research, but some of his collaborators are seeking a patent related to the imaging approach. A full list of disclosures is included in the paper.
The research was made possible by work funded by the British Heart Foundation, grant PG/15/71/31731; the National Institutes of Health's National Heart, Lung and Blood Institute, grant U01HL117006-01A1; the John Fell Oxford University Press Research Fund; and the Oxford BHF Centre of Research Excellence, grant RE/18/3/34214. The research was also supported by British Heart Foundation Clinical Research Training Fellowship FS/19/65/34692, National Institute for Health Research (NIHR) Oxford Biomedical Research Centre at The Oxford University Hospitals NHS Foundation Trust, and the National Institutes of Health.
Here’s how AI will accelerate the energy transition – World Economic Forum
The new IPCC report is unequivocal: more action is urgently needed to avert catastrophic long-term climate impacts. With fossil fuels still supplying more than 80% of global energy, the energy sector needs to be at the heart of this action.
Fortunately, the energy system is already in transition: renewable energy generation is growing rapidly, driven by falling costs and growing investor interest. But the scale and cost of decarbonizing the global energy system remain gigantic, and time is running out.
To date, most of the energy sector's transition efforts have focused on hardware: new low-carbon infrastructure that will replace legacy carbon-intensive systems. Relatively little effort and investment has focused on another critical tool for the transition: next-generation digital technologies, in particular artificial intelligence (AI). These powerful technologies can be adopted more quickly at larger scales than new hardware solutions, and can become an essential enabler for the energy transition.
Three key trends are driving AI's potential to accelerate the energy transition:
1. Energy-intensive sectors, including power, transport, heavy industry and buildings, are at the beginning of historic decarbonization processes, driven by growing government and consumer demand for rapid reductions in CO2 emissions. The scale of these transitions is huge: BloombergNEF estimates that in the energy sector alone, achieving net-zero emissions will require between $92 trillion and $173 trillion of infrastructure investments by 2050. Even small gains in flexibility, efficiency or capacity in clean energy and low-carbon industry can therefore lead to trillions in value and savings.
2. As electricity supplies more sectors and applications, the power sector is becoming the core pillar of the global energy supply. Ramping up renewable energy deployment to decarbonize the globally expanding power sector will mean more power is supplied by intermittent sources (such as solar and wind), creating new demand for forecasting, coordination, and flexible consumption to ensure that power grids can be operated safely and reliably.
3. The transition to low-carbon energy systems is driving the rapid growth of distributed power generation, distributed storage and advanced demand-response capabilities, which need to be orchestrated and integrated through more networked, transactional power grids.
Navigating these trends presents huge strategic and operational challenges to the energy system and to energy-intensive industries. This is where AI comes in: by creating an intelligent coordination layer across the generation, transmission and use of energy, AI can help energy-system stakeholders identify patterns and insights in data, learn from experience and improve system performance over time, and predict and model possible outcomes of complex, multivariate situations.
AI is already proving its value to the energy transition in multiple domains, driving measurable improvements in renewable energy forecasting, grid operations and optimization, coordination of distributed energy assets and demand-side management, and materials innovation and discovery. But while AIs application in the energy sector has proven promising so far, innovation and adoption remain limited. That presents a tremendous opportunity to accelerate transition towards the zero-emission, highly efficient and interconnected energy system we need tomorrow.
AI holds far greater potential to accelerate the global energy transition, but it will only be realized if there is greater AI innovation, adoption and collaboration across the industry. That is why the World Economic Forum has today released Harnessing AI to Accelerate the Energy Transition, a new report aimed at defining and catalysing the actions that are needed.
The report, written in collaboration with BloombergNEF and Dena, establishes nine 'AI for the energy transition principles' aimed at the energy industry, technology developers and policy-makers. If adopted, these principles would accelerate the uptake of AI solutions that serve the energy transition by creating a common understanding of what is needed to unlock AI's potential and how to safely and responsibly adopt AI in the energy sector.
The principles define the actions that are needed to unlock AI's potential in the energy sector across three critical domains:
1. Governing the use of AI:
2. Designing AI that's fit for purpose:
3. Enabling the deployment of AI at scale:
AI is not a silver bullet, and no technology can replace aggressive political and corporate commitments to reducing emissions. But given the urgency, scale, and complexity of the global energy transition, we can't afford to leave any tools in the toolbox. Used well, AI will accelerate the energy transition while expanding access to energy services, encouraging innovation, and ensuring a safe, resilient, and affordable clean energy system. It is time for industry players and policy-makers to lay the foundations for this AI-enabled energy future, and to build a trusted and collaborative ecosystem around AI for the energy transition.
Written by
Espen Mehlum, Head of Energy, Materials & Infrastructure Program - Benchmarking & Regional Action, World Economic Forum
Dominique Hischier, Program Analyst - Energy, Materials Infrastructure Platform, World Economic Forum
Mark Caine, Project Lead, Artificial Intelligence and Machine Learning, World Economic Forum
The views expressed in this article are those of the author alone and not the World Economic Forum.
An artificial intelligence approach for selecting effective teacher communication strategies in autism education | npj Science of Learning -…
Data collection
A data set was formed through structured classroom observations in 20 full-day sessions over 5 months in 2019 at a special school in East London where ASC is a criterion for admission. Participants included three teachers (one male, two females), their teaching assistants (all females), and seven children (four males, three females) aged from 6 to 12 years across 3 classes. The children's P-scales range from P3 to P6; the P-scale commonly ranges from P1 to P8, with P1-P3 being developmental non-subject-specific levels, and with P4-P8 corresponding to expected levels for typical development at ages 5-6 (ref. 48). In addition, the children are also described as social or language partners on the SCERTS scale used by the school. In our study, none of the participating students were classified as conversational partners. The attributes of the student cohort are presented in Supplementary Table 3.
A coding protocol was developed through an iterative process with the participating teachers, and a grid was used for recording teacher-student interaction observations. Comments and suggestions from the teachers were taken into consideration and reflected throughout the multiple revised drafts and the final versions of the coding protocol and recording grid. For each observation instance, we recorded the student identifier, time stamp, teaching objective, teaching type, the context for this teaching type, the student's observed emotional state, the teacher's communication strategy, and the corresponding student response (outcome). Where applicable we also recorded additional notes and the type of activity (e.g. yoga). Although notes were used for context and interpretation for the data analysis as a whole, they were not included in our machine learning function experiments given their free-form inconsistency. Table 1 details all the subcategories that were considered as inputs to the machine learning models. Up to two teaching types and teacher communications could be attributed to a single observation; the rest of the categories can only be represented by one subtype. For example, an observation coded as "3, academic, giving instruction/modelling, whole class, positive, verbal/gesture, full response" (the time stamp is omitted) represents that student no. 3, being in a positive emotional state, fully responded to a teacher's verbal and gesture instruction, while teaching was taking place in a whole-class environment, its type was modelling, and it had an overall academic objective. This may refer to an interaction instance where the teacher is delivering a yoga lesson to the whole class: the teacher is demonstrating a yoga move by gesturing while verbally explaining it and asking the students to do the same; the student then responds by doing the move with an observably happy expression.
All observed adult-student interactions during the school day, permitted by the teachers, were recorded. The aim was to rapidly record situation-strategy-outcome data points "in vivo" inside and outside the classroom. Locations of the observations outside the classroom include the playground, library, music room, main hall, canteen, therapy rooms, and garden. Overall, these resources were regularly used throughout the observational sessions. The instances recorded for each student vary slightly from 753 to 880 (μ = 780, σ = 45), and in total a sample of 5460 full observations was collected.
From the 5460 observations we collected, only 5001 are distinct. If we ignore the student's response, unique observations are reduced to 4880, and if we also ignore the teacher's communication strategy, then this number becomes 4357. Hence, there are instances in our data that are overlapping, but this is expected given that teachers and students may perform similarly throughout a specific teaching session. The level of support for each teacher communication strategy is equal to 3128 (709) times for a verbal communication, 1717 (357) for using an object, 1642 (181) for a gesture, 1465 (575) for a physical prompt, and 981 (165) for a picture, where in parentheses we report the number of times the underpinned communication was the only one performed (from a maximum of two communications). Although the small student and teacher sample does not allow for generalisations, we see that teachers tend to verbally engage with students quite frequently (57.29%), either in combination with another communication or as the sole means of communication. The full student response rate for each communication strategy (irrespective of co-occurrence with another one) is equal to 64.02% (64.90%, 60.68%) for a picture, 60.92% (62.48%, 57.73%) for an object, 60.61% (64.34%, 53.56%) for a physical prompt, 57.67% (59.67%, 51.80%) for a gesture, and 53.20% (55.21%, 46.45%) for a verbal communication; the rates in the parentheses are breakdowns for the language and social partner SCERTS classifications, respectively, reaffirming that language partners are in general more responsive, with a more pronounced relative difference when verbal or physical prompts are deployed. In addition, performing two versus one communication is more effective in producing a full student response. In particular, the full, partial, and no response breakdowns for single communications are 50.58%, 21.84%, and 27.58%, compared to 60.01%, 21.82%, and 18.17% for two teacher communications. Although the presence of two communications naturally increases the probability of choosing the correct means of interaction, the current outcome reaffirms the hypothesis that an incorrect communication strategy does not greatly affect the student when a desirable one co-occurs. The observed features with the greatest bivariate correlation with the student response are the negative emotional state of the student (r = 0.184, p < 0.001), the encouragement/praise teaching type (r = 0.124, p < 0.001), and the redirection teaching type (r = 0.124, p < 0.001).
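As an illustration of how such descriptive statistics could be derived from the coded observation grid, the following sketch uses invented column names and toy data (not the study's actual files) to compute per-strategy full-response rates and a point-biserial correlation between a binary feature and the binary full-response outcome.

```python
# Illustrative sketch with assumed column names and toy data, not the study's files.
import pandas as pd
from scipy import stats

# toy coded grid: one row per observation
df = pd.DataFrame({
    "verbal":         [1, 1, 0, 1, 0, 0],
    "picture":        [0, 0, 1, 0, 1, 1],
    "negative_state": [0, 1, 0, 0, 1, 0],
    "response":       ["full", "none", "full", "partial", "none", "full"],
})
df["full_response"] = (df["response"] == "full").astype(int)

# full-response rate whenever a given communication strategy was used
for strategy in ["verbal", "picture"]:
    rate = df.loc[df[strategy] == 1, "full_response"].mean()
    print(f"{strategy}: {rate:.2%} full responses")

# bivariate (point-biserial) correlation between a binary feature and the outcome
r, p = stats.pointbiserialr(df["negative_state"], df["full_response"])
print(f"negative emotional state vs. full response: r={r:.3f}, p={p:.3f}")
```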
A machine learning classification task aims to learn a function $f: \mathbf{X} \to \mathbf{y}$, where $\mathbf{X} \in \mathbb{R}^{m \times n}$ and $\mathbf{y} \in \{1, \ldots, k\}^m$ denote the observations (inputs) and the response variable (outcomes), respectively; $m$, $n$, $k$ represent the numbers of observations and outcomes, observation categories (features), and outcome classes, respectively. Here, in the most feature-inclusive case, we define $\mathbf{X}$ as an aggregation of six feature categories, namely student attributes (age, sex, P-level, SCERTS classification), teaching objective, teaching type, context for teaching type, the student's observed emotional state, and the teacher's communication strategy. All feature categories, apart from age, were coded as $c$-dimensional tuples of 1s and 0s, where $c$ is the respective number of different subtypes for each category (Table 1), and ones are used to denote the activated subtype(s). Student age was coded as a real number from 0 to 1, using a linear mapping scheme, where 0 and 1 represent 5 and 12 years of age, respectively. The response variable $\mathbf{y}$ takes a binary definition representing two classes, a full response output versus otherwise. The rationale behind this merging was to generate a more balanced classification task (56.59% full student response labels) as well as to alleviate any issues arising from a miscategorisation of partial (21.86%) or no response (21.55%) outcomes.
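A minimal sketch of this coding scheme is given below; the subtype lists are abbreviated and the names are illustrative rather than the study's exact category definitions.

```python
# Minimal sketch of the input coding described above (abbreviated, illustrative
# subtype lists; not the study's exact scheme).
import numpy as np

EMOTIONAL_STATES = ["positive", "neutral", "negative"]
COMMUNICATIONS = ["verbal", "gesture", "object", "picture", "physical_prompt"]

def one_hot(active, subtypes):
    """c-dimensional tuple of 1s and 0s; `active` may hold one or two subtypes."""
    return [1 if s in active else 0 for s in subtypes]

def scale_age(age_years, lo=5, hi=12):
    """Linearly map age so that 5 years -> 0 and 12 years -> 1."""
    return (age_years - lo) / (hi - lo)

def encode_observation(age_years, state, communications):
    features = [scale_age(age_years)]
    features += one_hot([state], EMOTIONAL_STATES)
    features += one_hot(communications, COMMUNICATIONS)  # up to two active entries
    return np.array(features, dtype=float)

x = encode_observation(9, "positive", ["verbal", "gesture"])
y = 1  # binary outcome: 1 = full response, 0 = otherwise
print(x)  # [0.5714... 1. 0. 0. 1. 1. 0. 0. 0.]
```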
We train and evaluate the performance of various machine learning functions in predicting the student's type of response. We deploy three broadly used classifiers from the literature: (a) a variant of logistic regression (LR)55 that uses elastic net regularisation56 for feature selection, (b) a random forest (RF)57 with 2000 decision trees, and (c) a Gaussian process (GP)58 with a composite covariance function (or kernel) that we describe below. We devise three problem formulations, where we incrementally add more elements to the observed data (input). In the first instance, we consider all observed categories apart from student attributes. Then, we include student attributes as part of the feature space and, to represent this change, augment the method abbreviations with "-". Finally, in both previous setups, we explore autoregression by including the observed data and student responses for up to the previous five teacher-student interactions. While performing autoregression, we maintain all three types of recorded student responses in the input data.
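The following sketch instantiates the three classifier families with scikit-learn; only the hyperparameters named in the text (elastic net regularisation, 2000 trees, a Laplace-approximated GP) are set explicitly, the data are synthetic stand-ins, and the single RBF kernel is a placeholder for the composite covariance functions defined below.

```python
# Sketch of the three classifier families named above, using scikit-learn.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

X = np.random.rand(200, 20)       # stand-in for the coded observations
y = np.random.randint(0, 2, 200)  # stand-in binary outcomes

models = {
    # elastic-net-regularised logistic regression (requires the saga solver)
    "LR": LogisticRegression(penalty="elasticnet", solver="saga",
                             l1_ratio=0.5, max_iter=5000),
    "RF": RandomForestClassifier(n_estimators=2000),
    # scikit-learn's GP classifier uses the Laplace approximation internally;
    # a single RBF (squared exponential) kernel here, not the paper's additive one
    "GP": GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=1.0)),
}

for name, model in models.items():
    model.fit(X, y)
    print(name, "training accuracy:", model.score(X, y))
```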
Although logistic regression and random forests treat the increased input space without any particular intrinsic additive modelling, the modularity of the GP allows us to specify more customised covariance functions on these different inputs. GP models assume that $f: \mathbf{X} \to \mathbf{y}$ is a probability distribution over functions denoted as $f(\mathbf{x}) \sim \mathrm{GP}(\mu(\mathbf{x}), k(\mathbf{x}, \mathbf{x}'))$, where $\mathbf{x}, \mathbf{x}'$ are rows of $\mathbf{X}$, $\mu(\cdot)$ is the mean function of the process, and $k(\cdot, \cdot)$ is the covariance function (or kernel) that captures statistical relationships in the input space. We assume that $\mu(\mathbf{x}) = 0$, a common setting for various downstream applications59,60,61,62, and use the following incremental (through summation) covariance functions:
$$k(\mathbf{x}, \mathbf{x}') = k_{\mathrm{SE}}(\mathbf{x}_c, \mathbf{x}_c'), \tag{1}$$
$$k(\mathbf{x}, \mathbf{x}') = k_{\mathrm{SE}}(\mathbf{a}, \mathbf{a}') + k_{\mathrm{SE}}(\mathbf{x}_c, \mathbf{x}_c'), \tag{2}$$
$$k(\mathbf{x}, \mathbf{x}') = k_{\mathrm{SE}}(\mathbf{x}_c, \mathbf{x}_c') + k_{\mathrm{SE}}(\mathbf{x}_p, \mathbf{x}_p') + k_{\mathrm{SE}}(\mathbf{y}_p, \mathbf{y}_p'), \text{ and} \tag{3}$$
$$k(\mathbf{x}, \mathbf{x}') = k_{\mathrm{SE}}(\mathbf{a}, \mathbf{a}') + k_{\mathrm{SE}}(\mathbf{x}_c, \mathbf{x}_c') + k_{\mathrm{SE}}(\mathbf{x}_p, \mathbf{x}_p') + k_{\mathrm{SE}}(\mathbf{y}_p, \mathbf{y}_p'), \tag{4}$$
where $k_{\mathrm{SE}}(\cdot, \cdot)$ denotes the squared exponential covariance function, $\mathbf{x}_c$ denotes the current observation including the teacher's communication strategy, $\mathbf{a}$ is the vector containing student attributes, and $\mathbf{x}_p$, $\mathbf{y}_p$ denote the past observations and student response outcomes, respectively. Therefore, Eq. (1) refers to the kernel in the simplest task formulation where only currently observed data are used, Eq. (2) expands on Eq. (1) by adding a kernel for student attributes, and Eqs. (3) and (4) add kernels for including previous observations and student responses (autoregression). Using an additive problem formulation, where a kernel focuses on a part of the feature space, generates a simpler optimisation task and tends to provide better accuracy63.
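A numerical sketch of this additive construction is given below: a separate squared exponential Gram matrix is computed on each block of features (attributes, current observation, past observations, past responses) and the blocks are summed, as in Eq. (4). The length scales, block sizes, and data are illustrative assumptions.

```python
# Sketch of the additive kernel in Eqs. (2)/(4): SE Gram matrices computed on
# feature blocks (a, x_c, x_p, y_p) and summed. Values are illustrative.
import numpy as np

def se_kernel(A, B, lengthscale=1.0, variance=1.0):
    """Squared exponential k(a, b) = variance * exp(-||a - b||^2 / (2 l^2))."""
    sq_dists = (
        np.sum(A**2, axis=1)[:, None]
        + np.sum(B**2, axis=1)[None, :]
        - 2.0 * A @ B.T
    )
    return variance * np.exp(-0.5 * sq_dists / lengthscale**2)

def additive_kernel(X1, X2, slices, lengthscales):
    """Sum of SE kernels, each acting only on its own block of features."""
    K = np.zeros((X1.shape[0], X2.shape[0]))
    for (start, stop), ls in zip(slices, lengthscales):
        K += se_kernel(X1[:, start:stop], X2[:, start:stop], lengthscale=ls)
    return K

# toy data: 4 attribute dims, 10 current-observation dims,
# 30 past-observation dims, 5 past-response dims
X = np.random.rand(50, 4 + 10 + 30 + 5)
slices = [(0, 4), (4, 14), (14, 44), (44, 49)]  # a, x_c, x_p, y_p
K = additive_kernel(X, X, slices, lengthscales=[1.0, 1.0, 2.0, 1.0])
print(K.shape)  # (50, 50)
```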
We apply 10-fold cross-validation as follows. We randomly shuffle the observed samples (5460 in total) and then generate 10 equally sized folds. We use 9 of these folds to train a model, and 1 to test, repeating this training-testing process 10 times, using all formed folds as test sets. By doing this we are solving a task whereby observations from the same student can exist in both the training and the test sets (although these observations are strictly distinct). That was an essential compromise here given the limited number of different students (7). The exact same training and testing process (and identical data splits) is used for all classification models and problem formulations. We learn the regularisation hyperparameters of logistic regression by cross-validating on the training data; this may result in potentially different choices for each fold. The hyperparameters of the GP models are learned using the Laplace approximation58,64. Performance is assessed using standard classification metrics, in particular accuracy, precision, recall, and the F1 score (the harmonic mean of precision and recall). For completeness, we also assess the best-performing model by testing on data from a single student who is not included in the training set, repeating the same process for all students in our cohort (leave-one-student-out, 7-fold cross-validation; see SI for more details).
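A sketch of this evaluation loop, using a stand-in classifier and synthetic data in place of the coded observations, might look as follows.

```python
# Sketch of the evaluation protocol: shuffle, split into 10 folds, train on 9
# and test on 1, and report accuracy, precision, recall and F1.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import KFold
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

X = np.random.rand(5460, 20)       # stand-in for the coded observations
y = np.random.randint(0, 2, 5460)  # stand-in binary outcomes

kf = KFold(n_splits=10, shuffle=True, random_state=0)
scores = []
for train_idx, test_idx in kf.split(X):
    model = RandomForestClassifier(n_estimators=100).fit(X[train_idx], y[train_idx])
    pred = model.predict(X[test_idx])
    scores.append([
        accuracy_score(y[test_idx], pred),
        precision_score(y[test_idx], pred),
        recall_score(y[test_idx], pred),
        f1_score(y[test_idx], pred),
    ])

print("accuracy, precision, recall, F1:", np.mean(scores, axis=0))
```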
Ethical approval was granted by the Research Ethics Committee at the Institute of Education, University College London (United Kingdom), where the research was conducted. The parents/guardians of the participating children, the school management, and their teachers gave their written informed consent. All participant information has been anonymised. Raw data and derived data sets were securely stored on the researchers' encrypted computer systems with password protection.
Further information on research design is available in the Nature Research Reporting Summary linked to this article.
New study will use artificial intelligence to improve treatments for people with multiple long-term conditions – University of Birmingham
The NIHR has awarded £2.5 million for new research led by the University of Birmingham that will use artificial intelligence (AI) to produce computer programmes and tools that will help doctors improve the choice of drugs in patients with clusters of multiple long-term conditions.
Called the OPTIMAL study (OPTIMising therapies, discovering therapeutic targets and AI assisted clinical management for patients Living with complex multimorbidity), the research aims to understand how different combinations of long-term conditions and the medicines taken for these diseases interact over time to worsen or improve a patients health.
The study will be led by Dr Thomas Jackson and Professor Krish Nirantharakumar at the University of Birmingham and carried out in collaboration with the University of Manchester, University Hospitals Birmingham NHS Foundation Trust, NHS Greater Glasgow & Clyde, the University of St Andrews, and the Medicines and Healthcare Products Regulatory Agency.
An estimated 14 million people in England are living with two or more long-term conditions, with two-thirds of adults aged over 65 expected to be living with multiple long-term conditions by 2035.
Dr Thomas Jackson, Associate Professor in Geriatric Medicine at the University of Birmingham, said: "Currently when people have multiple long-term conditions, we treat each disease separately. This means we prescribe a different drug for each condition, which may not help people with complex multimorbidity, which is a term we use when patients have four or more long-term health problems.
"A drug for one disease can make another disease worse or better; however, presently we do not have information on the effect of one drug on a second disease. This means doctors do not have enough information to know which drug to prescribe to people with complex multimorbidity."
Krish Nirantharakumar, Professor in Health Data Science and Public Health at the University of Birmingham, added: "Through our research, we can group such people based on their mixes of disease. Then we can study the effects of a drug on each disease mix. This should help doctors prescribe better and reduce the number of drugs patients need. This will lead to changes in healthcare policy which would benefit most people with complex multimorbidity."
The research is one of a number of studies being funded by the NIHR's Artificial Intelligence for Multiple Long-Term Conditions (AIM) call, which is aligned to the aims of the NHSX AI Lab, that combine data science and AI methods with health, care and social science expertise to identify new clusters of disease and understand how multiple long-term conditions develop over the life course.
The call will fund up to £23 million of research in two waves, supporting a pipeline of research and capacity building in multiple long-term conditions research. The first wave has invested nearly £12 million into three Research Collaborations, nine Development Awards and a Research Support Facility, including the University of Birmingham-led study.
Improving the lives of people with multiple long-term conditions and their carers through research is an area of strategic focus for the NIHR, with its ambitions set out in its NIHR Strategic Framework for Multiple Long-Term Conditions Research.
Professor Lucy Chappell, NIHR Chief Executive and chair of the AIM funding committee, said: "This large-scale investment in research will improve our understanding of clusters of multiple long-term conditions, including how they develop over a person's lifetime.
"Over time, findings from this new research will point to solutions that might prevent or slow down the development of further conditions. We will also look at how we shape treatment and care to meet the needs of people with multiple long-term conditions and carers."
To date, the NIHR has invested £11 million into research on multiple long-term conditions through two calls in partnership with the Medical Research Council, offering both pump-priming funds and funding to tackle multimorbidity at scale.
New Traffic Sensor Uses Artificial Intelligence to Detect Any Vehicle on the Road – autoevolution
And naturally, the closer we get to smart intersections becoming more mainstream, the more technologies to power them go live, some of them with insanely advanced capabilities that nobody would have imagined some 10 years ago.
Iteris, for example, a company providing smart mobility infrastructure management, has come up with the world's first 1080p high-definition (HD) video and four-dimensional (4D) radar sensor with integrated artificial intelligence (AI) algorithms.
In plain English, this is a traffic monitoring sensor that authorities across the world can install in their systems to get 1080p (that's HD resolution) video as well as 4D radar data using a technology bundling AI algorithms.
This means the new sensor is capable of offering insanely accurate detection, and just as expected, it can spot not only cars, but also trucks, bikes, and many other vehicle types. The parent company says the sensor has been optimized to also detect vulnerable road users, such as pedestrians.
In case you're wondering why a traffic management center (TMC) needs such advanced data, the benefits of this sensor go way beyond the simple approach where someone keeps an eye on the traffic in a certain intersection.
TMCs can be linked to connected cars, so the information collected by the sensor can be transmitted right back on the road where the new-generation vehicles can act accordingly based on the detected information. And this is why AI-powered detection is so important, as it offers extra accuracy, preventing errors and wrong information from being sent to connected cars.
In other words, it can help avoid collisions, reduce the speed when pedestrians are detected, and overall optimize the traffic flow because after all, everybody wants to get rid of traffic jams in the first place.
We're probably still many years away from the moment such complex sensors become more mainstream, but Iteris' new idea is living proof that the future is already here. Fingers crossed, however, for authorities across the world to notice how much potential is hiding in this new-gen technology.