
Enhanced Security, Streamlined Automation and Deployment Features Shine in Release 6.0 of StarlingX, the Open Source Platform for Edge – PR Web

The latest release of StarlingX marks another incredible milestone to advance cloud technologies for mission critical industries. ... We are encouraged to see continued ecosystem development from a thriving open source community.

AUSTIN, Texas (PRWEB) February 01, 2022

StarlingX, the open source edge computing and IoT cloud platform optimized for low-latency and high-performance applications, is available in its 6.0 release today. StarlingX combines Ceph, OpenStack, Kubernetes and more to create a full-featured cloud software stack that provides everything telecom carriers and enterprises need to deploy an edge cloud on a few servers or hundreds of them.

New features in StarlingX 6.0 include:

***Download StarlingX 6.0 at https://opendev.org/starlingx***

"Since StarlingX was first released in 2018, the StarlingX open source community has continued to advance and mature this unique cloud platform that offers high availability and low latency for edge workloads," said Ildiko Vancsa, Senior Manager, Community & Ecosystem for the Open Infrastructure Foundation. "It is exciting to see the community delivering more advanced functionality for a broad variety of edge applications. The sixth release of the project tackles security enhancements and takes crucial steps towards supporting zero touch deployment and management of edge sites at large scale, which delivers tremendous value as users deploy the platform in production."

Key Features of StarlingX 6.0

To further support the low-latency and distributed cloud requirements of edge computing and industrial IoT use cases, the community prioritized these features in StarlingX 6.0:

Learn more about these and other features of StarlingX 6.0 in the community's release notes.

OpenInfra Community Drives StarlingX Progress

The StarlingX project launched in 2018, with initial code for the project contributed by Wind River and Intel. Active contributors to the project include Wind River, Intel and 99Cloud. Well-known users of the software in production include T-Systems, Verizon and Vodafone. The StarlingX community is actively collaborating with several other groups such as the OpenInfra Edge Computing Group, ONAP, Akraino and more.

Community Accolades for StarlingX 6.0

"The latest release of StarlingX marks another incredible milestone to advance cloud technologies for mission critical industries. The community has seen tremendous growth in commercial adoption and investments across markets by major organizations and contributors. As a strong ongoing supporter of the project and original contributor to the code base, we look forward to continuing our collaboration and delivering expertise for the distributed cloud by drawing from our technologies such as Wind River Studio, as well as collaboration with key initiatives such as O-RAN. We are encouraged to see continued ecosystem development from a thriving open source community." - Paul Miller, Chief Technology Officer, Wind River

"The StarlingX community is continuously making significant progress. We're excited to see StarlingX 6.0 become available with a lot of enhancements and new features. As the 5G era approaches, StarlingX is a key component to meet edge computing requirements. 99Cloud has witnessed and participated in the StarlingX 6.0 release, which brings the maturity of the edge cloud platform to a new stage. As one of the leading contributors to StarlingX, we'll continue to contribute to the community and work with customers and partners to promote StarlingX 6.0 into more commercial deployments." - Shuquan Huang, Technical Director, 99Cloud Inc.

Project Resources

About StarlingX

StarlingX is the open source edge computing and IoT cloud platform optimized for low latency and high performance applications. It provides a scalable and highly reliable edge infrastructure, tested and available as a complete stack. Applications include industrial IoT, telecom, video delivery and other ultra-low latency use cases. StarlingX ensures compatibility among diverse open source components and provides unique project components for fault management and service management, among others, to ensure high availability of user applications. StarlingX is the ready-for-deployment code base for edge implementations in scalable solutions. StarlingX is an Open Infrastructure Foundation project. http://www.starlingx.io


Read more:
Enhanced Security, Streamlined Automation and Deployment Features Shine in Release 6.0 of StarlingX, the Open Source Platform for Edge - PR Web


Second Trojan asteroid confirmed to be leading our planet around the Sun – The Register

Scientists have confirmed the discovery of Earth's second Trojan asteroid leading the planet in its orbit around its nearest star.

Dubbed 2020 XL5, the hunk of space rock was discovered in December 2020. Although excitement surrounded the early observations of a second Earth Trojan, low observational coverage meant uncertainties in the data were too great for a scientific confirmation.

Trojan asteroids are small bodies sharing an orbit with a planet, which remain in a stable orbit approximately 60 degrees ahead of or behind the main body.

Venus, Mars, Jupiter, Uranus, and Neptune all have them, but it wasn't until 2011 that asteroid 2010 TK7 was found to be the first Earth could lay claim to. Now a second has been confirmed this week.

Around 1.18km across (give or take 80m), 2020 XL5 is probably made of carbon and is the larger of the two Earth Trojan asteroids discovered so far, according to the study published in Nature Communications. Both lead our planet in its trajectory around the Sun.

Toni Santana-Ros, a postdoctoral researcher at Barcelona University's Institut de Ciències del Cosmos, and his team used archival data from the Catalina Sky Survey, which revealed promising data from the Mount Lemmon telescope in Arizona, and the online repository of images from the Víctor M. Blanco Telescope in Chile. They combined this data with optical images of 2020 XL5 from 4m-class telescopes: the Southern Astrophysical Research telescope in Chile and the Lowell Discovery Telescope in Arizona.

They also made new observations using the European Space Agency's Optical Ground Station 1m telescope on Tenerife, Spain, watching the skies from February 9 last year until March 16. The integration of the orbit data employed ESA AstOD orbit determination software.

As well as confirming the finding, their study shows the Earth Trojan's orbit is likely to remain stable for at least 4,000 years.

They suggest the object may have been thrown out of the Solar System's main asteroid belt following an interaction with Jupiter, but more work is needed to confirm the idea.

Because it is bigger than its sibling, the newly confirmed space rock may be a better candidate for a future fly-by mission, the researchers suggested.

Read the original post:
Second Trojan asteroid confirmed to be leading our planet around the Sun - The Register


Another Massive Display as AMD hails ‘outstanding’ 2021, teases Genoa and Bergamo chips – The Register

AMD has hailed 2021 as an "outstanding" year with each of its business units growing significantly, thanks to strong sales of its Epyc server chips and data centre GPUs. The firm is hoping to continue this with its Genoa chips this year and Bergamo in 2023.

In a conference call to disclose AMD's Q4 and year-end financial results, president and CEO Lisa Su said the firm had exceeded its growth goals and delivered a record year. In particular, she claimed that data centre revenue had more than doubled year-on-year.

In servers, Su said revenue had more than doubled year-over-year and increased by a double-digit percentage sequentially, driven by demand across both cloud and enterprise customers. She also picked out data centre graphics revenue as more than doubling year-on-year, driven by HPC wins for AMD's latest Instinct MI200 accelerators, with platforms coming this quarter from Asus, Dell, HP, Lenovo, Supermicro, and others.

"We're still cloud-weighted relative to enterprise. But enterprise has made a really nice progress. It's a sizable business, and we've made progress with the larger OEMs as well as across a number of regional OEMs," Su said.

AMD's computing and graphics segment's revenues, meanwhile, were up 32 per cent to $2.584bn. Su said this was driven by sales of Ryzen processors and Radeon graphics processors. She also noted the "industry has seen some price increases across the supply chain."

When questioned about the company's own pricing strategies by an analyst, Su said that "without a doubt, the predominant growth is products. So it's units and average selling prices from the mix of the product, and that's the predominant growth."

Although data centre is not broken out into a specific business unit at AMD, Su claimed that revenue for data centre products constituted "a mid-20 percentage of overall revenue" for 2021, and indicated that the firm expected 2022 to be another year of growth based on signals it was getting from customers for current and next-generation products.

"Demand for our product is very strong, and we look forward to another year of significant growth and share gains as we ramp our current products and launch our next wave of Zen 4 CPUs and RDNA 3 GPUs. We have also made significant investments to secure the capacity needed to support our growth in 2022 and beyond," the CEO said.

Further to the supply chain issues, Su said that AMD has made significant investments in wafer capacity as well as substrate capacity, adding: "We feel very good about our progress in the supply chain to meet the 2022 guidance." Looking ahead, Su said that AMD is already sampling its Genoa Epyc processors to customers now and is on track to launch later this year, while shipments of the Bergamo chips are planned to follow in the first half of 2023.

Genoa is set to feature up to 96 Zen 4 cores and next-generation memory and I/O technologies, according to AMD, while Bergamo features a version of the Zen 4 core called Zen 4c that has been specifically optimised for cloud-native computing.

"Bergamo is a high core power-efficient CPU that can be used in the same platforms as Genoa. It will feature up to 128 CPU cores and deliver significant performance and power efficiency advantages for cloud workloads," Su claimed.

AMD also recently got clearance from the Chinese regulatory authorities for its planned takeover of FPGA maker Xilinx. Su said that she was "extremely excited about Xilinx" and the combination of AMD and Xilinx technology, saying that the firm has been planning for the integration and has had interest from customers anxious to talk about combined road maps.

Su hinted that there was an opportunity for edge deployments in communications and 5G networks, saying: "As we bring Xilinx into the equation, they have very deep relationships with a number of these accounts. And so we see that as an incremental positive as we think about EPYC in communications."

FPGAs have been finding new uses in the data centre over recent years, as accelerators for AI processing or as part of SmartNICs, and rival Intel has even offered Xeon chips combined with an FPGA for select customers.

For the longer term, Su expressed confidence in AMD's future, based on its roadmap and the commitments it has from customers.

"We are confident in our ability to continue growing significantly faster than the market, based on our expanded roadmap investments and the deep relationships we have established with a broad set of customers who view AMD as a strategic enabler of their success," she said.

Read more here:
Another Massive Display as AMD hails 'outstanding' 2021, teases Genoa and Bergamo chips - The Register


Machine Learning with Python Certification | freeCodeCamp.org


Machine learning has many practical applications that you can use in your projects or on the job.

In the Machine Learning with Python Certification, you'll use the TensorFlow framework to build several neural networks and explore more advanced techniques like natural language processing and reinforcement learning.

You'll also dive into neural networks, and learn the principles behind how deep, recurrent, and convolutional neural networks work.

TensorFlow is an open source framework that makes machine learning and neural networking easier to use.
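As a rough illustration of the kind of model this certification has you build, here is a minimal TensorFlow sketch; the dataset and layer sizes are arbitrary choices for the example rather than part of the curriculum.

import tensorflow as tf

# Load a small benchmark dataset bundled with Keras and scale pixels to [0, 1]
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# Define a simple feed-forward neural network
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Compile, train briefly, then evaluate on held-out data
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=2, validation_split=0.1)
model.evaluate(x_test, y_test)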

The following video course was created by Tim Ruscica, also known as Tech With Tim. It will help you to understand TensorFlow and some of its powerful capabilities.


Neural networks are at the core of what we call artificial intelligence today. But historically they've been hard to understand. Especially for beginners in the machine learning field.

Even if you are completely new to neural networks, these video courses by Brandon Rohrer will get you comfortable with the concepts and the math behind them.


Machine learning has many practical applications. By completing these free and challenging coding projects, you will demonstrate that you have a good foundational knowledge of machine learning, and qualify for your Machine Learning with Python certification.

See the rest here:
Machine Learning with Python Certification | freeCodeCamp.org


Machine learning helps improve the flash graphene process – Graphene-Info

Scientists at Rice University are using machine-learning techniques to fine-tune the process of synthesizing graphene from waste through flash Joule heating. In their new work, the researchers describe how machine-learning models that adapt to process variables and show them how to optimize procedures are helping push the technique forward.

Machine learning is fine-tuning Rice University's flash Joule heating method for making graphene from a variety of carbon sources, including waste materials. Credit: Jacob Beckham, from: Phys.org

The process, discovered by the Rice lab of chemist James Tour, has expanded beyond making graphene from various carbon sources to extracting other materials like metals from urban waste, with the promise of more environmentally friendly recycling to come. The technique is the same: blasting a jolt of high energy through the source material to eliminate all but the desired product. However, the details for flashing each feedstock are different.

"Machine-learning algorithms will be critical to making the flash process rapid and scalable without negatively affecting the graphene product's properties," Prof. Tour said.

"In the coming years, the flash parameters can vary depending on the feedstock, whether it's petroleum-based, coal, plastic, household waste or anything else," he said. "Depending on the type of graphene we wantsmall flake, large flake, high turbostratic, level of puritythe machine can discern by itself what parameters to change."

Because flashing makes graphene in hundreds of milliseconds, it's difficult to follow the details of the chemical process. So Tour and company took a clue from materials scientists who have worked machine learning into their everyday process of discovery.

"It turned out that machine learning and flash Joule heating had really good synergy," said Rice graduate student and lead author Jacob Beckham. "Flash Joule heating is a really powerful technique, but it's difficult to control some of the variables involved, like the rate of current discharge during a reaction. And that's where machine learning can really shine. It's a great tool for finding relationships between multiple variables, even when it's impossible to do a complete search of the parameter space". "That synergy made it possible to synthesize graphene from scrap material based entirely on the models' understanding of the Joule heating process," he explained. "All we had to do was carry out the reactionwhich can eventually be automated."

The lab used its custom optimization model to improve graphene crystallization from four starting materials (carbon black, plastic pyrolysis ash, pyrolyzed rubber tires and coke) over 173 trials, using Raman spectroscopy to characterize the starting materials and graphene products.

The researchers then fed more than 20,000 spectroscopy results to the model and asked it to predict which starting materials would provide the best yield of graphene. The model also took the effects of charge density, sample mass and material type into account in its calculations.

Last month, the Rice team developed an acoustic processing method to analyze LIG synthesis in real time.

Read more:
Machine learning helps improve the flash graphene process - Graphene-Info


Competitive programming with AlphaCode – DeepMind

Solving novel problems and setting a new milestone in competitive programming.

Creating solutions to unforeseen problems is second nature in human intelligence, a result of critical thinking informed by experience. The machine learning community has made tremendous progress in generating and understanding textual data, but advances in problem solving remain limited to relatively simple maths and programming problems, or else retrieving and copying existing solutions. As part of DeepMind's mission to solve intelligence, we created a system called AlphaCode that writes computer programs at a competitive level. AlphaCode achieved an estimated rank within the top 54% of participants in programming competitions by solving new problems that require a combination of critical thinking, logic, algorithms, coding, and natural language understanding.

In our preprint, we detail AlphaCode, which uses transformer-based language models to generate code at an unprecedented scale, and then smartly filters to a small set of promising programs.

We validated our performance using competitions hosted on Codeforces, a popular platform which hosts regular competitions that attract tens of thousands of participants from around the world who come to test their coding skills. We selected for evaluation 10 recent contests, each newer than our training data. AlphaCode placed at about the level of the median competitor, marking the first time an AI code generation system has reached a competitive level of performance in programming competitions.

To help others build on our results, we're releasing our dataset of competitive programming problems and solutions on GitHub, including extensive tests to ensure the programs that pass these tests are correct, a critical feature current datasets lack. We hope this benchmark will lead to further innovations in problem solving and code generation.

Continue reading here:
Competitive programming with AlphaCode - DeepMind


Using Deep Learning to Find Genetic Causes of Mental Health Disorders in an Understudied Population – Neuroscience News

Summary: A new deep learning algorithm that looks for the burden of genomic variants is 70% accurate at identifying specific mental health disorders within the African-American community.

Source: CHOP

Minority populations have been historically under-represented in existing studies addressing how genetic variations may contribute to a variety of disorders. A new study from researchers at Children's Hospital of Philadelphia (CHOP) shows that a deep learning model has promising accuracy when helping to diagnose a variety of common mental health disorders in African American patients.

This tool could help distinguish between disorders as well as identify multiple disorders, fostering early intervention with better precision and allowing patients to receive a more personalized approach to their condition.

The study was recently published in the journal Molecular Psychiatry.

Properly diagnosing mental disorders can be challenging, especially for young toddlers who are unable to complete questionnaires or rating scales. This challenge has been particularly acute in understudied minority populations. Past genomic research has found several genomic signals for a variety of mental disorders, with some serving as potential therapeutic drug targets.

Deep learning algorithms have also been used to successfully diagnose complex diseases like attention deficit hyperactivity disorder (ADHD). However, these tools have rarely been applied in large populations of African American patients.

In a unique study, the researchers generated whole genome sequencing data from blood samples of 4,179 African American patients, including 1,384 patients who had been diagnosed with at least one mental disorder. This study focused on eight common mental disorders, including ADHD, depression, anxiety, autism spectrum disorder, intellectual disabilities, speech/language disorder, delays in development and oppositional defiant disorder (ODD).

The long-term goal of this work is to learn more about specific risks for developing certain diseases in African American populations and how to potentially improve health outcomes by focusing on more personalized approaches to treatment.

"Most studies focus only on one disease, and minority populations have been very under-represented in existing studies that utilize machine learning to study mental disorders," said senior author Hakon Hakonarson, MD, Ph.D., Director of the Center for Applied Genomics at CHOP.

"We wanted to test this deep learning model in an African American population to see whether it could accurately differentiate mental disorder patients from healthy controls, and whether we could correctly label the types of disorders, especially in patients with multiple disorders."

The deep learning algorithm looked for the burden of genomic variants in coding and non-coding regions of the genome. The model demonstrated over 70% accuracy in distinguishing patients with mental disorders from the control group. The deep learning algorithm was equally effective in diagnosing patients with multiple disorders, with the model providing exact diagnostic matches in approximately 10% of cases.

The model also successfully identified multiple genomic regions that were highly enriched for mental disorders, meaning they were more likely to be involved in the development of these medical disorders. The biological pathways involved included ones associated with immune responses, antigen and nucleic acid binding, a chemokine signaling pathway, and guanine nucleotide-binding protein receptors.

However, the researchers also found that variants in regions that did not code for proteins seemed to be implicated in these disorders at higher frequency, which means they may serve as alternative markers.

"By identifying genetic variants and associated pathways, future research aimed at characterizing their function may provide mechanistic insight as to how these disorders develop," Hakonarson said.

Author: Press Office. Source: CHOP. Contact: Press Office, CHOP. Image: The image is in the public domain.

Original Research: Open access. "Application of deep learning algorithm on whole genome sequencing data uncovers structural variants associated with multiple mental disorders in African American patients" by Yichuan Liu et al. Molecular Psychiatry

Abstract

Application of deep learning algorithm on whole genome sequencing data uncovers structural variants associated with multiple mental disorders in African American patients

Mental disorders present a global health concern, while the diagnosis of mental disorders can be challenging. The diagnosis is even harder for patients who have more than one type of mental disorder, especially for young toddlers who are not able to complete questionnaires or standardized rating scales for diagnosis. In the past decade, multiple genomic association signals have been reported for mental disorders, some of which present attractive drug targets.

Concurrently, machine learning algorithms, especially deep learning algorithms, have been successful in the diagnosis and/or labeling of complex diseases, such as attention deficit hyperactivity disorder (ADHD) or cancer. In this study, we focused on eight common mental disorders, including ADHD, depression, anxiety, autism, intellectual disabilities, speech/language disorder, delays in developments, and oppositional defiant disorder in the ethnic minority of African Americans.

Blood-derived whole genome sequencing data from 4179 individuals were generated, including 1384 patients with the diagnosis of at least one mental disorder. The burden of genomic variants in coding/non-coding regions was applied as feature vectors in the deep learning algorithm. Our model showed ~65% accuracy in differentiating patients from controls. Ability to label patients with multiple disorders was similarly successful, with a hamming loss score less than 0.3, while exact diagnostic matches are around 10%. Genes in genomic regions with the highest weights showed enrichment of biological pathways involved in immune responses, antigen/nucleic acid binding, chemokine signaling pathway, and G-protein receptor activities.

A noticeable fact is that variants in non-coding regions (e.g., ncRNA, intronic, and intergenic) performed equally well as variants in coding regions; however, unlike coding region variants, variants in non-coding regions do not express genomic hotspots whereas they carry much more narrow standard deviations, indicating they probably serve as alternative markers.

See the original post:
Using Deep Learning to Find Genetic Causes of Mental Health Disorders in an Understudied Population - Neuroscience News


Artificial Intelligence Creeps on to the African Battlefield – Brookings Institution

Even as the world's leading militaries race to adopt artificial intelligence in anticipation of future great power war, security forces in one of the world's most conflict-prone regions are opting for a more measured approach. In Africa, AI is gradually making its way into technologies such as advanced surveillance systems and combat drones, which are being deployed to fight organized crime, extremist groups, and violent insurgencies. Though the long-term potential for AI to impact military operations in Africa is undeniable, AI's impact on organized violence has so far been limited. These limits reflect both the novelty and constraints of existing AI-enabled technology.

Artificial intelligence and armed conflict in Africa

Artificial intelligence (AI), at its most basic, leverages computing power to simulate the behavior of humans that requires intelligence. Artificial intelligence is not a military technology like a gun or a tank. It is rather, as the University of Pennsylvania's Mark Horowitz argues, a general-purpose technology with a multitude of applications, like the internal combustion engine, electricity, or the internet. And as AI applications proliferate to military uses, it threatens to change the nature of warfare. According to the ICRC, AI and machine-learning systems could have profound implications for the role of humans in armed conflict, especially in relation to: increasing autonomy of weapon systems and other unmanned systems; new forms of cyber and information warfare; and, more broadly, the nature of decision-making.

In at least two respects, AI is already affecting the dynamics of armed conflict and violence in Africa. First, AI-driven surveillance and smart policing platforms are being used to respond to attacks by violent extremist groups and organized criminal networks. Second, the development of AI-powered drones is beginning to influence combat operations and battlefield tactics.

AI is perhaps most widely used in Africa in areas with high levels of violence to increase the capabilities and coordination of law enforcement and domestic security services. For instance, fourteen African countries deploy AI-driven surveillance and smart-policing platforms, which typically rely on deep neural networks for image classification and a range of machine learning models for predictive analytics. In Nairobi, Chinese tech giant Huawei has helped build an advanced surveillance system, and in Johannesburg automated license plate readers have enabled authorities to track violent, organized criminals with suspected ties to the Islamic State. Although such systems have significant limitations (more on this below), they are proliferating across Africa.

AI-driven systems are also being deployed to fight organized crime. At Liwonde National Park in Malawi, park rangers use EarthRanger software, developed by the late Microsoft co-founder, Paul Allen, to combat poaching using artificial intelligence and predictive analytics. The software detects patterns in poaching that the rangers might overlook, such as upticks in poaching during holidays and government paydays. A small, motion-activated poacher cam relies on an algorithm to distinguish between humans and animals and has contributed to at least one arrest. It's not difficult to imagine how such a system might be repurposed for counterinsurgency or armed conflict, with AI-enabled surveillance and monitoring systems deployed to detect and deter armed insurgents.

In addition to the growing use of AI within surveillance systems across Africa, AI has also been integrated into weapon systems. Most prominently, lethal autonomous weapons systems use real-time sensor data coupled with AI and machine learning algorithms to select and engage targets without further intervention by a human operator. Depending on how that definition is interpreted, the first use of a lethal autonomous weapon system in combat may have taken place on African soil in March 2020. That month, logistics units belonging to the armed forces of the Libyan warlord Khalifa Haftar came under attack by Turkish-made STM Kargu-2 drones as they fled Tripoli. According to a United Nations report, the Kargu-2 represented a lethal autonomous weapons system because it had been programmed to attack targets without requiring data connectivity between the operator and munition. Although other experts have instead classified the Kargu-2 as a loitering munition, its use in combat in northern Africa nonetheless points to a future where AI-enabled weapons are increasingly deployed in armed conflicts in the region.

Indeed, despite global calls for a ban on similar weapons, the proliferation of systems like the Kargu-2 is likely only beginning. Relatively low costs, tactical advantages, and the emergence of multiple suppliers have led to a booming market for low- and mid-tier combat drones, currently dominated by players including Israel, China, Turkey, and South Africa. Such drones, particularly Turkey's Bayraktar TB2, have been acquired and used by well over a dozen African countries.

While the current generation of drones by and large do not have AI-driven autonomous capabilities that are publicly acknowledged, the same cannot be said for the next generation, which are even less costly, more attritable, and use AI-assisted swarming technology to make themselves harder to defend against. In February, the South Africa-based Paramount Group announced the launch of its N-RAVEN UAV system, which it bills as a family of autonomous, multi-mission aerial vehicles featuring next-generation swarm technologies. The N-RAVEN will be able to swarm in units of up to twenty and is designed for technology transfer and portable manufacture within partner countries. These features are likely to be attractive to African militaries.

AI's limits, downsides, and risks

Though AI may continue to play an increasing role in the organizational strategies, intelligence-gathering capabilities, and battlefield tactics of armed actors in Africa and elsewhere, it is important to put these contributions in a broader perspective. AI cannot address the fundamental drivers of armed conflict, particularly the complex insurgencies common in Africa. African states and militaries may overinvest in AI, neglecting its risks and externalities, as well as the ways in which AI-driven capabilities may be mitigated or exploited by armed non-state actors.

AI is unlikely to have a transformative impact on the outbreak, duration, or mitigation of armed conflict in Africa, whose incidence has doubled over the past decade. Despite claims by its makers, there is little hard evidence linking the deployment of AI-powered smart cities with decreases in violence, including in Nairobi, where crime incidents have remained virtually unchanged since 2014, when the city's AI-driven systems first went online. The same is true of poaching. During the COVID-19 pandemic, fewer tourists and struggling local economies have fueled significant increases, overwhelming any progress that has resulted from governments adopting cutting-edge technology.

This is because, in the first place, armed conflict is a human endeavor, with many factors that influence its outcomes. Even the staunchest defenders of AI-driven solutions, such as Huawei Southern Africa Public Affairs Director David Lane, admit that they cannot address the underlying causes of insecurity such as unemployment or inequality: "Ultimately, preventing crime requires addressing these causes in a very local way." No AI algorithm can prevent poverty or political exclusion, disputes over land or national resources, or political leaders from making chauvinistic appeals to group identity. Likewise, the central problems with Africa's militaries (endemic corruption, human rights abuses, loyalties to specific leaders and groups rather than institutions and citizens, and a proclivity for ill-timed seizures of power) are not problems that artificial intelligence alone can solve.

In the second place, the aspects of armed conflict that AI seems most likely to disrupt (remote intelligence-gathering capabilities and air power) are technologies that enable armies to keep enemies at arm's length and win in conventional, pitched battles. AI's utility in fighting insurgencies, in which non-state armed actors conduct guerilla attacks and seek to blend in and draw support from the population, is more questionable. To win in insurgencies requires a sustained on-the-ground presence to maintain order and govern contested territory. States cannot hope to prevail in such conflicts by relying on technology that effectively removes them from the fight.

Finally, the use of AI to fight modern armed conflict remains at a nascent stage. To date, the prevailing available evidence has documented how state actors are adopting AI to fight conflict, and not how armed non-state actors are responding. Nevertheless, states will not be alone in seeking to leverage autonomous weapons. Former African service members speculate that it is only a matter of time before the deployment of swarms or clusters of offensive drones by non-state actors in Africa, given their accessibility, low costs, and existing use in surveillance and smuggling. Rights activists have raised the alarm about the potential for small, cheap, swarming "slaughterbots" that use freely available AI and facial recognition systems to commit mass acts of terror. This particular scenario is controversial, but according to American University's Audrey Kurth Cronin, it is both technologically feasible and consistent with classic patterns of diffusion.

The AI armed conflict evolution

These downsides and risks suggest the continued diffusion of AI is unlikely to result in the revolutionary changes to armed conflict suggested by some of its more ardent proponents and backers. Rather, modern AI is perhaps best viewed as continuing and perhaps accelerating long-standing technological trends that have enhanced sensing capabilities and digitized and automated the operations and tactics of armed actors everywhere.

For all its complexity, AI is first and foremost a digital technology, its impact dependent on and difficult to disentangle from a technical triad of data, algorithms, and computing power. The impact of AI-powered surveillance platforms, from the EarthRanger software used at Liwonde to Huawei-supplied smart policing platforms, isn't just a result of machine-learning algorithms that enable human-like reasoning capabilities, but also of the ability to store, collect, process, collate and manage vast quantities of data. Likewise, as pointed out by analysts such as Kelsey Atherton, the Kargu-2 used in Libya can be classified as an autonomous loitering munition, such as Israel's Harpy drone. The main difference between the Kargu-2 and the Harpy, which was first manufactured in 1989, is that where the former uses AI-driven image recognition, the latter uses electro-optical sensors to detect and home in on enemy radar emissions.

The diffusion of AI across Africa, like the broader diffusion of digital technology, is likely to be diverse and uneven. Africa remains the world's least digitized region. Internet penetration rates are low and likely to remain so in many of the most conflict-prone countries. In Somalia, South Sudan, Ethiopia, the Democratic Republic of Congo, and much of the Lake Chad Basin, internet penetration is below 20%. AI is unlikely to have much of an impact on conflict in regions where citizens leave little in the way of a digital footprint, and non-state armed groups control territory beyond the easy reach of the state.

Taken together, these developments suggest that AI will cause a steady evolution in armed conflict in Africa and elsewhere, rather than revolutionize it. Digitization and the widespread adoption of autonomous weapons platforms may extend the eyes and lengthen the fists of state armies. Non-state actors will adopt these technologies themselves and come up with clever ways to exploit or negate them. Artificial intelligence will be used in combination with equally influential, but less flashy, inventions such as the AK-47, the nonstandard tactical vehicle, and the IED to enable new tactics that take advantage of or exploit trends towards better sensing capabilities and increased mobility.

Incrementally and in concert with other emerging technologies, AI is transforming the tools and tactics of warfare. Nevertheless, experience from Africa suggests that humans will remain the main actors in the drama of modern armed conflict.

Nathaniel Allen is an assistant professor with the Africa Center for Strategic Studies at National Defense University and a Council on Foreign Relations term member. Marian Ify Okpali is a researcher on cyber policy and the executive assistant to the dean at the Africa Center for Strategic Studies at National Defense University. The opinions expressed in this article are those of the authors.

Microsoft provides financial support to the Brookings Institution, a nonprofit organization devoted to rigorous, independent, in-depth public policy research.

Continued here:
Artificial Intelligence Creeps on to the African Battlefield - Brookings Institution


CEO of Alberta-based company says it’s time for Alberta, companies to invest in AI and machine learning – Edmonton Journal


Now is the time for Alberta-based companies and the province to invest more in AI and machine learning technology, said the CEO of an Edmonton company.


Cam Linke, CEO of the Alberta Machine Intelligence Institute (Amii), said it's a special time in AI and machine learning, with lots of advancements being made.

"This isn't just an academic thing, there is the ability and tools to be able to apply machine learning to a myriad of business problems," said Linke. "Right now, businesses don't have to make enormous investments upfront, they can make reasoned investments around a business plan that can have a meaningful business impact right now."

However, Linke said at the same time, the field is growing rapidly.

"It's kind of a special time where it's sitting right at the intersection of engineering, where it can be applied right now, and science, where the field's continuing to learn, grow and do new things," he said.


Linke said there is a carrot and a stick when it comes to regions and companies adopting machine learning, where the carrot is creating a lot of opportunity, business value and the ability to create a competitive advantage in your industry.

"The stick of it is that if you're not, your competitor is," he said. "You kind of have to, not just because there's great opportunity there, but someone in your industry and one of your competitors is going to take advantage of this technology and they will have a competitive edge over you if you're not making that investment."

Linke added Alberta is ahead of many provinces due to the province investing in machine learning since 2002 and the federal government's Pan-Canadian AI Strategy, announced five years ago.


Amii is a non-profit that supports and invests in world-leading research and training primarily done at the University of Alberta. Linke said the company has partnered with more than 100 companies, from small start-ups to multi-nationals like Shell, to help in the AI and machine learning fields.

Linke said Amii has worked with companies on implementing things such as predictive maintenance, which can predict when a machine may fail and helps a company get ahead of repairs before a more expensive incident occurs. Another example is the machine learning and reinforcement learning used at a water treatment plant to optimize the amount of water that can be treated while trying to reduce the amount of energy used.

Linke said Alberta is already seeing the impacts and work of more AI and machine learning being introduced.

"We're seeing it by the amount of investment by large companies in the area, the amount of investment in start-ups and the growth of start-ups in the area, and we're seeing it with the number of jobs and the number of people hired in the area," said Linke.

ktaniguchi@postmedia.com

twitter.com/kellentaniguchi


Go here to read the rest:
CEO of Alberta-based company says it's time for Alberta, companies to invest in AI and machine learning - Edmonton Journal


How to build healthcare predictive models using PyHealth? – Analytics India Magazine

Machine learning has been applied to many health-related tasks, such as the development of new medical treatments, the management of patient data and records, and the treatment of chronic diseases. To achieve success in these state-of-the-art applications, we must rely on the time-consuming process of model building and evaluation. To alleviate this load, Yue Zhao et al. have proposed PyHealth, a Python-based toolbox. As the name implies, this toolbox contains a variety of ML models and architecture algorithms for working with medical data. In this article, we will go through this toolbox to understand its working and application. Below are the major points that we are going to discuss in this article.

Let's first discuss the use cases of machine learning in the healthcare industry.

Machine learning is being used in a variety of healthcare settings, from case management of common chronic conditions to leveraging patient health data in conjunction with environmental factors such as pollution exposure and weather.

Machine learning technology can assist healthcare practitioners in developing accurate medication treatments tailored to individual features by crunching enormous amounts of data. The following are some examples of applications that can be addressed in this segment:

The ability to swiftly and properly diagnose diseases is one of the most critical aspects of a successful healthcare organization. In high-need areas like cancer diagnosis and therapy, where hundreds of drugs are now in clinical trials, scientists and computationalists are entering the mix. One method combines cognitive computing with genetic tumour sequencing, while another makes use of machine learning to provide diagnosis and treatment in a range of fields, including oncology.

Medical imaging, and its ability to provide a complete picture of an illness, is another important aspect in diagnosing an illness. Deep learning is becoming more accessible as data sources become more diverse, and it may be used in the diagnostic process, therefore it is becoming increasingly important. Although these machine learning applications are frequently correct, they have some limitations in that they cannot explain how they came to their conclusions.

ML has the potential to identify new medications with significant economic benefits for pharmaceutical companies, hospitals, and patients. Some of the world's largest technology companies, like IBM and Google, have developed ML systems to help patients find new treatment options. Precision medicine is a significant phrase in this area since it entails understanding mechanisms underlying complex disorders and developing alternative therapeutic pathways.

Because of the high-risk nature of surgeries, we will always need human assistance, but machine learning has proved extremely helpful in the robotic surgery sector. The da Vinci robot, which allows surgeons to operate robotic arms in order to do surgery with great detail and in confined areas, is one of the most popular breakthroughs in the profession.

These hands are generally more accurate and steady than human hands. There are additional instruments that employ computer vision and machine learning to determine the distances between various body parts so that surgery can be performed properly.

Health data is typically noisy, complicated, and heterogeneous, resulting in a diverse set of healthcare modelling issues. For instance, health risk prediction is based on sequential patient data, disease diagnosis based on medical images, and risk detection based on continuous physiological signals.

Examples include physiological signals such as the electroencephalogram (EEG) or electrocardiogram (ECG), and multimodal clinical notes (e.g., text and images). Despite their importance in healthcare research and clinical decision making, the complexity and variability of health data and tasks call for the long-overdue development of a specialized ML system for benchmarking predictive health models.

PyHealth is made up of three modules: data preprocessing, predictive modelling, and evaluation. Both computer scientists and healthcare data scientists are PyHealth's target users. They can run complicated machine learning pipelines on healthcare datasets in less than 10 lines of code using PyHealth.

The data preprocessing module converts complicated healthcare datasets such as longitudinal electronic health records, medical pictures, continuous signals (e.g., electrocardiograms), and clinical notes into machine learning-friendly formats.

The predictive modelling module offers over 30 machine learning models, including known ensemble trees and deep neural network-based approaches, using a uniform yet flexible API geared for both researchers and practitioners.

The evaluation module includes a number of evaluation methodologies (for example, cross-validation and train-validation-test split) as well as prediction model metrics.

There are five distinct advantages to using PyHealth. For starters, it contains more than 30 cutting-edge predictive health algorithms, including both traditional techniques like XGBoost and more recent deep learning architectures like autoencoders, convolutional based, and adversarial based models.

Second, PyHealth has a broad scope and includes models for a variety of data types, including sequence, image, physiological signal, and unstructured text data. Third, for clarity and ease of use, PyHealth includes a unified API, detailed documentation, and interactive examples for all algorithms; complex deep learning models can be implemented in less than ten lines of code.

Fourth, unit testing with cross-platform, continuous integration, code coverage, and code maintainability checks are performed on most models in PyHealth. Finally, for efficiency and scalability, parallelization is enabled in select modules (data preprocessing), as well as fast GPU computation for deep learning models via PyTorch.

PyHealth is a Python 3 application that uses NumPy, SciPy, scikit-learn, and PyTorch. As shown in the diagram below, PyHealth consists of three major modules. First, the data preprocessing module validates and converts user input into a format that learning models can understand.

Second, the predictive modelling module is made up of a collection of models organized by input data type into sequences, images, EEG, and text. For each data type, a set of dedicated learning models has been implemented. Third, the evaluation module can automatically infer the task type, such as multi-classification, and conduct a comprehensive evaluation by task type.

Most learning models share the same interface and are inspired by the scikit-learn API design and general deep learning practice: (i) fit learns the weights and saves the necessary statistics from the train and validation data; (ii) load_model chooses the model with the best validation accuracy; and (iii) inference predicts on the incoming test data.

For quick data and model exploration, the framework includes a library of helper and utility functions (check_parameter, label_check, and partition_estimators). For example, label_check can check the data label and infer the task type, such as binary classification or multi-classification, automatically.

PyHealth for model building

Now below well discuss how we can leverage the API of this framework. First, we need to install the package by using pip.

! pip install pyhealth

Next, we can load the data from the repository itself. For that, we need to clone the repository. After cloning it, inside the datasets folder there is a variety of datasets, such as sequence-based and image-based ones. We are using the mimic dataset, which comes as a zip archive, so we need to unzip it. Below is a snippet to clone the repository and unzip the data.
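Something like the following notebook commands would do it; the repository URL and the path to the zip archive are assumptions based on the project's public GitHub page and may differ.

# Clone the PyHealth repository (assumed URL) and unzip the bundled mimic demo data
! git clone https://github.com/yzhao062/PyHealth.git
! unzip PyHealth/datasets/mimic.zip -d .   # assumed archive path; extracts a mimic folder into the working directory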

The unzipped data is saved in the current working directory in a folder named mimic. Next, to use this dataset, we need to load the sequence data generator function, which prepares the dataset for experimentation, as sketched below.
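A minimal sketch of that step is shown here; the module path, experiment id and method names are assumptions drawn from PyHealth's published examples and may vary between versions.

# Prepare the unzipped mimic data with PyHealth's sequence data generator
# (names below are assumptions and may differ between PyHealth versions)
from pyhealth.data.expdata_generator import sequencedata as expdata_generator

expdata_id = '2021.0105.data.mortality.mimic.demo'         # illustrative experiment id
cur_dataset = expdata_generator(expdata_id, root_dir='./')
cur_dataset.get_exp_data(sel_task='mortality', data_root='./mimic')  # build train/valid/test splits
cur_dataset.load_exp_data()                                          # load the prepared splits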

Now that we have loaded the dataset, we can do further modelling as below.
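For sequence data such as mimic, a model from the predictive modelling module can then be fitted following the fit / load_model / inference interface described earlier; the class name and constructor arguments below are again assumptions based on PyHealth's examples.

# Fit an LSTM-based sequence model, reload the best checkpoint and predict on the test split
# (class name and arguments are assumptions and may differ between PyHealth versions)
from pyhealth.models.sequence.lstm import LSTM

clf = LSTM(expmodel_id='test.lstm.mimic.demo', n_epoch=10, use_gpu=False)
clf.fit(cur_dataset.train, cur_dataset.valid)   # learn weights and track validation statistics
clf.load_model()                                # reload the checkpoint with the best validation accuracy
clf.inference(cur_dataset.test)                 # predict on the held-out test data
results = clf.get_results()                     # assumed helper that collects predictions and labels
print(results)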

Here is the fitting result.

Through this article, we have discussed how machine learning can be used in the healthcare industry by observing various applications. As this domain is quite vast, with numerous applications, we have discussed a Python-based toolbox that is designed to build predictive models using various deep learning techniques, such as LSTM and GRU for sequence data and CNN for image-based data.

Read the original:
How to build healthcare predictive models using PyHealth? - Analytics India Magazine
