Category Archives: Artificial Intelligence
Texas A&M To Offer Courses On Responsible A.I. – Texas A&M University Today
Texas A&M University has joined a new nationwide program that aims to boost college-level curricula about responsible artificial intelligence. The university was selected as a participant in February through an application process headed by the College of Liberal Arts, the Glasscock Center for Humanities Research and the Department of Philosophy.
Maria Escobar-Lemmon, associate dean for research and graduate education in the College of Liberal Arts, highlighted two objectives of the program. The first is to bring different points of view into the topic of artificial intelligence.
"This program is being offered by the National Humanities Center, and it's an alliance between the National Humanities Center and Google that is intended to broaden the range of voices to include humanistic scholars so that we have people with different backgrounds, training and disciplinary perspectives engaging on the issue," Escobar-Lemmon said. "That way, it's not just those who are writing the code that tells these machines how to talk to each other. It's people who are thinking about what it means to be human and how humanity can benefit from this technology."
The second objective is to create a learning curriculum directly addressing these issues. Texas A&M's philosophy department was tasked with developing the course curriculum. Emily Brady, professor of philosophy and Susanne M. and Melbern G. Glasscock Director's Chair, feels that Texas A&M's past curricula make the department more than qualified for this unique opportunity.
"The philosophy department is really well positioned and certainly, it was an important part of the application that we had to submit to the National Humanities Center to be awarded this funding," Brady said. "They're well positioned already to offer a humanities-oriented course because they already have a lot of expertise in this area. There are scholars in the philosophy department who study applied ethics, ethics of technology, and ethics in relation to issues in engineering and computer science. Already, the department of philosophy has a very popular course in ethics in engineering that is taught jointly with the College of Engineering."
Brady is optimistic about the impact this program will have not only on students, but society as a whole.
"I think that it's a fantastic curriculum design project because it's thinking about the concept of responsibility and how that relates to questions about the role of artificial intelligence in society," Brady said. "It will certainly benefit students by enabling them to understand the role of technology in society better, so they will grasp ethical questions posed by advancements in science and ethical questions that arise as particular technological and scientific advancements take place. It's a really interesting way of trying to think about how the humanities and sciences can work together to understand the role of artificial intelligence in society. It will benefit both sides through learning about each other's research and methods."
Theodore George, department head and professor of philosophy, said the course is currently being developed in consultation with experts across the country, but is expected to be completed by the end of the calendar year. Once it has been approved by the university, the course, titled "Responsible Artificial Intelligence," will be available for all undergraduate students to take.
Read more:
Texas A&M To Offer Courses On Responsible A.I. - Texas A&M University Today
Boosting US Fighter Jets NASA Research Applies Artificial Intelligence To Hypersonic Engine Simulations – EurAsian Times
Researchers from the National Aeronautics and Space Administration (NASA) have teamed up with the US Department of Energy's Argonne National Laboratory (ANL) to develop artificial intelligence (AI) to enhance the speed of simulations used to study the behavior of air surrounding supersonic and hypersonic aircraft engines.
Fighter jets such as the F-15 regularly exceed Mach 2, twice the speed of sound, during flight, which is known as supersonic. In hypersonic flight, Mach 5 and beyond, an aircraft flies faster than 3,000 miles per hour.
Hypersonic speeds have been achievable since the 1950s thanks to the propulsion systems used for rockets. However, engineers and scientists are working on advanced jet engine designs to make hypersonic flight much less expensive than a rocket launch and more common, for purposes such as commercial flight, space exploration, and national defense.
The newly published paper by a team of researchers from NASA and ANL details machine learning techniques that reduce the memory and cost required to conduct computational fluid dynamics (CFD) simulations of fuel combustion at supersonic and hypersonic speeds.
The paper was previously presented at the American Institute of Aeronautics and Astronautics SciTech Forum in January.
Before building and testing any aircraft, CFD simulations are used to determine how the various forces surrounding an aircraft in flight will interact with it. CFD consists of numerical expressions representing the behavior of fluids such as air and water.
When an aircraft breaks the sound barrier, traveling at speeds surpassing that of sound, it generates a shock wave: a disturbance that makes the air around it hotter, denser, and higher in pressure, causing it to behave very violently.
At hypersonic speeds, the air friction created is so strong that it could melt parts of a conventional commercial plane.
Air-breathing jet engines draw in oxygen to burn fuel as they fly, so the CFD simulations have to account for major changes in the behavior of air, not only surrounding the plane but also as it moves through the engine and interacts with fuel.
While a conventional plane has fan blades to push the air along, in planes approaching Mach 3 and above, the aircraft's own forward motion compresses the air. These aircraft designs, known as scramjets, are important for attaining fuel-efficiency levels that rocket propulsion cannot.
So, when it comes to CFD simulations on an aircraft capable of breaking the sound barrier, all the above factors add new levels of complexity to an already computationally intense exercise.
"Because the chemistry and turbulence interactions are so complex in these engines, scientists have needed to develop advanced combustion models and CFD codes to accurately and efficiently describe the combustion physics," said Sibendu Som, a study co-author and interim center director of Argonne's Center for Advanced Propulsion and Power Research.
NASA has a hypersonic CFD code known as VULCAN-CFD, which is specifically designed for simulating combustion behavior in such a volatile environment.
This code uses so-called flamelet tables, where each flamelet is a small unit of a flame within the entire combustion model. The table collects many different snapshots of burning fuel in one huge dataset, which takes up a large amount of computer memory to process.
Therefore, researchers at NASA and the ANL are exploring the use of AI to simplify these CFD simulations by reducing the intensive memory requirements and computational costs, to increase the pace of development of barrier-breaking aircraft.
Computational scientists at ANL used a flamelet table generated by Argonne-developed software to train an artificial neural network that could be applied to NASA's VULCAN-CFD code. The AI used values from the flamelet table to learn shortcuts for determining combustion behavior in supersonic engine environments.
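In broad strokes, that approach amounts to training a regression surrogate on the tabulated flamelet data. The sketch below is an illustration only, not NASA's or Argonne's actual code: the table is synthetic, and the variable names (mixture fraction, progress variable, temperature) are assumptions chosen to mimic what a flamelet table stores.

```python
# Hypothetical sketch: train a neural-network surrogate for a flamelet lookup table.
# The table here is synthetic; a real flamelet table maps combustion-state variables
# to thermochemical quantities such as temperature.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 50_000
mixture_fraction = rng.uniform(0.0, 1.0, n)
progress_variable = rng.uniform(0.0, 1.0, n)
# Stand-in "temperature" response with a flame-like peak near stoichiometry.
temperature = 300 + 1800 * progress_variable * np.exp(-30 * (mixture_fraction - 0.06) ** 2)

X = np.column_stack([mixture_fraction, progress_variable])
X_train, X_test, y_train, y_test = train_test_split(X, temperature, test_size=0.2, random_state=0)

# A compact network can stand in for the bulky table: its weights take far less memory
# than storing every tabulated snapshot, which is the saving the paper targets.
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=300, random_state=0)
surrogate.fit(X_train, y_train)
print("R^2 on held-out table entries:", surrogate.score(X_test, y_test))
```

At run time, a CFD solver would query the trained network instead of the table, trading a large memory footprint for a small, fast function evaluation.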
"The partnership has enhanced the capability of our in-house VULCAN-CFD tool by leveraging the research efforts of Argonne, allowing us to analyze fuel combustion characteristics at a much-reduced cost," said Robert Baurle, a research scientist at NASA Langley Research Center.
Countries across the world are racing to achieve hypersonic flight capability, and an essential part of this race is simulation, where there is huge potential for the application of emerging tech such as AI and machine learning (ML).
According to a recent EurAsian Times report, Chinese researchers led by a top-level advisor to the Chinese military on hypersonic weapon technology claimed last month to have made a significant breakthrough: an AI system that can design new hypersonic vehicles autonomously.
Moreover, in February a Chinese space company called Space Transportation announced plans for tests beginning next year on a hypersonic plane capable of flying at 7,000 miles per hour.
The company claimed that their plane could fly from Beijing to New York in an hour.
Developing countries are being left behind in the AI race – and that’s a problem for all of us – Economic Times
By Joyjit Chatterjee and Nina Dethlefs, University of Hull Cottingham
Artificial Intelligence (AI) is much more than just a buzzword nowadays. It powers facial recognition in smartphones and computers, translation between foreign languages, systems which filter spam emails and identify toxic content on social media, and can even detect cancerous tumours. These examples, along with countless other existing and emerging applications of AI, help make people's daily lives easier, especially in the developed world.
As of October 2021, 44 countries were reported to have their own national AI strategic plans, showing their willingness to forge ahead in the global AI race. These include emerging economies like China and India, which are leading the way in building national AI plans within the developing world.
Notably, the lowest-scoring regions in global rankings of AI readiness include much of the developing world, such as sub-Saharan Africa, the Caribbean and Latin America, as well as some central and south Asian countries.
The developed world has an inevitable edge in making rapid progress in the AI revolution. With greater economic capacity, these wealthier countries are naturally best positioned to make large investments in the research and development needed for creating modern AI models.
In contrast, developing countries often have more urgent priorities, such as education, sanitation, healthcare and feeding the population, which override any significant investment in digital transformation. In this climate, AI could widen the digital divide that already exists between developed and developing countries.
The hidden costs of modern AI
AI is traditionally defined as "the science and engineering of making intelligent machines". To solve problems and perform tasks, AI models generally look at past information and learn rules for making predictions based on unique patterns in the data.
AI is a broad term, comprising two main areas - machine learning and deep learning. While machine learning tends to be suitable when learning from smaller, well-organised datasets, deep learning algorithms are more suited to complex, real-world problems - for example, predicting respiratory diseases using chest X-ray images.
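As a loose illustration of that split, the sketch below pairs a classical model with a small tabular dataset and a small convolutional network with image-like data. The datasets, model sizes, and library choices are illustrative assumptions, not anything taken from the article.

```python
# Hypothetical sketch: classical machine learning vs. deep learning, on synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from tensorflow import keras

rng = np.random.default_rng(0)

# 1) Classical ML: a small, well-organised tabular dataset (500 rows, 10 features).
X_tab = rng.normal(size=(500, 10))
y_tab = (X_tab[:, 0] + X_tab[:, 1] > 0).astype(int)
RandomForestClassifier(n_estimators=100).fit(X_tab, y_tab)

# 2) Deep learning: image-like data (think chest X-rays), where a convolutional
#    network learns spatial features directly from pixels.
X_img = rng.normal(size=(200, 64, 64, 1)).astype("float32")
y_img = rng.integers(0, 2, size=(200, 1))
cnn = keras.Sequential([
    keras.layers.Input(shape=(64, 64, 1)),
    keras.layers.Conv2D(16, 3, activation="relu"),
    keras.layers.MaxPooling2D(),
    keras.layers.Flatten(),
    keras.layers.Dense(1, activation="sigmoid"),
])
cnn.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
cnn.fit(X_img, y_img, epochs=1, verbose=0)
```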
Many modern AI-driven applications, from the Google translate feature to robot-assisted surgical procedures, leverage deep neural networks. These are a special type of deep learning model loosely based on the architecture of the human brain.
Crucially, neural networks are data hungry, often requiring millions of examples to learn how to perform a new task well. This means they require a complex infrastructure of data storage and modern computing hardware, compared to simpler machine learning models. Such large-scale computing infrastructure is generally unaffordable for developing nations.
Beyond the hefty price tag, another issue that disproportionately affects developing countries is the growing toll this kind of AI takes on the environment. For example, a contemporary neural network costs upwards of US$150,000 to train, and will create around 650kg of carbon emissions during training (comparable to a trans-American flight). Training a more advanced model can lead to roughly five times the total carbon emissions generated by an average car during its entire lifetime.
Developed countries have historically been the leading contributors to rising carbon emissions, but the burden of such emissions unfortunately lands most heavily on developing nations. The global south generally suffers disproportionate environmental crises, such as extreme weather, droughts, floods and pollution, in part because of its limited capacity to invest in climate action.
Developing countries also benefit the least from the advances in AI and all the good it can bring - including building resilience against natural disasters.
Using AI for good
While the developed world is making rapid technological progress, the developing world seems to be underrepresented in the AI revolution. And beyond inequitable growth, the developing world is likely bearing the brunt of the environmental consequences that modern AI models, mostly deployed in the developed world, create.
But it's not all bad news. According to a 2020 study, AI can help achieve 79 per cent of the targets within the sustainable development goals. For example, AI could be used to measure and predict the presence of contamination in water supplies, thereby improving water quality monitoring processes. This in turn could increase access to clean water in developing countries.
The benefits of AI in the global south could be vast - from improving sanitation, to helping with education, to providing better medical care. These incremental changes could have significant flow-on effects. For example, improved sanitation and health services in developing countries could help avert outbreaks of disease.
But if we want to achieve the true value of "good AI", equitable participation in the development and use of the technology is essential. This means the developed world needs to provide greater financial and technological support to the developing world in the AI revolution. This support will need to be more than short term, but it will create significant and lasting benefits for all. (This article is syndicated by PTI from The Conversation)
Go here to read the rest:
Developing countries are being left behind in the AI race - and that's a problem for all of us - Economic Times
Top 5 Benefits of Artificial intelligence in Software Testing – Analytics Insight
Have a look at the top 5 benefits of using Artificial intelligence in software testing
One of the recent buzzwords in the software development industry is artificial intelligence. Even though the use of artificial intelligence in software development is still in its infancy, the technology has already made great strides in automating software development. Integrating AI into software testing enhances the quality of the end product, as the systems adhere to basic standards and maintain company protocols. So, let us have a look at some of the other crucial benefits offered by AI in software testing.
A method of testing that is getting more and more popular every day is image-based testing using automated visual validation tools. Many ML-based visual validation tools can detect minor UI anomalies that human eyes are likely to miss.
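A bare-bones version of that idea is a pixel-level comparison between a baseline screenshot and a new one; ML-based visual validation tools go much further, but this illustrative Python sketch (the images here are generated in code as stand-ins for real screenshots) shows the underlying check:

```python
# Hypothetical sketch: a minimal visual-regression check with Pillow.
# ML-based visual validation tools add perceptual models on top of this raw pixel diff.
from PIL import Image, ImageChops, ImageDraw

# Stand-ins for a baseline screenshot and a new build's screenshot.
baseline = Image.new("RGB", (200, 100), "white")
candidate = Image.new("RGB", (200, 100), "white")
ImageDraw.Draw(candidate).rectangle([150, 40, 170, 60], fill="red")  # a shifted UI element

diff = ImageChops.difference(baseline, candidate)
bbox = diff.getbbox()  # None if the two images are pixel-identical

if bbox is None:
    print("UI unchanged")
else:
    print(f"Visual difference detected in region {bbox}")
```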
Shared automated tests can be used by developers to catch problems quickly before they are sent to the QA team. Tests can be run automatically whenever source code changes are checked in, notifying the team or the developer if they fail.
Manual testing is a slow process. And every code change requires new tests that consume the same amount of time as before. AI can be leveraged to automate the test processes. AI provides for precise and continuous testing at a fast pace.
AI/ML tools can read the changes made to the application and understand the relationships between them. Such self-healing scripts observe changes in the application, learn the pattern of those changes, and can then identify a change at runtime without any manual intervention.
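To make the "self-healing" idea concrete, the sketch below tries a list of fallback Selenium locators and reports which one succeeded. This is a crude simplification with invented locator values, not how any particular commercial tool works; real AI-driven tools learn new locators from observed UI changes rather than using a fixed list.

```python
# Hypothetical sketch: a fallback-locator helper, a simple stand-in for "self-healing" tests.
from selenium.common.exceptions import NoSuchElementException
from selenium.webdriver.common.by import By


def find_with_fallbacks(driver, locators):
    """Try each (By, value) locator in order and return the first element found."""
    for by, value in locators:
        try:
            element = driver.find_element(by, value)
            print(f"Located element using {by}='{value}'")
            return element
        except NoSuchElementException:
            continue
    raise NoSuchElementException(f"No locator matched: {locators}")


# Example usage (locator values are invented for illustration):
# login_button = find_with_fallbacks(driver, [
#     (By.ID, "login-btn"),
#     (By.NAME, "login"),
#     (By.XPATH, "//button[contains(text(), 'Log in')]"),
# ])
```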
With software tests repeated each time source code is changed, running those tests manually can be not only time-consuming but also expensive. Once created, automated tests can be executed over and over at a much quicker pace and with virtually no additional cost.
Conclusion: The future of artificial intelligence and machine learning is bright. AI and its adjoining technologies are making new waves in almost every industry and will continue to do so in the future.
More here:
Top 5 Benefits of Artificial intelligence in Software Testing - Analytics Insight
Insights on the Artificial Intelligence in Digital Genome Global Market to 2028 – by Offering, Technology, Functionality, Application, End-user, and…
Dublin, April 18, 2022 (GLOBE NEWSWIRE) -- The "Artificial Intelligence in Digital Genome Market, by Offering, by Technology, by Functionality, by Application, by End User, and by Region - Size, Share, Outlook, and Opportunity Analysis, 2021 - 2028" report has been added to ResearchAndMarkets.com's offering.
A digital genome is a comprehensive digital set of the genetic material that occurs in a cell or an organism. It offers a simpler way to gather information concerning chronic diseases and is used by experts to take a closer look at genetic disorders. A digital genome facilitates instant access to trait sequences to resolve custom queries.
In genomics, artificial intelligence (AI) focuses on the development of computer systems that can perform tasks such as mapping genomes. Artificial intelligence and machine learning methods are currently being used to overcome various problems faced by genomics, such as annotating genomic sequence elements and identifying splice sites, promoters, enhancers, and positioned nucleosomes.
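One simple way to see how machine learning enters a task like splice-site identification is to encode short DNA windows numerically and train a classifier on labeled examples. The sketch below uses random sequences and labels purely for illustration; real pipelines use curated genomic datasets and far richer models.

```python
# Hypothetical sketch: classifying DNA windows (e.g., splice site vs. not) from
# one-hot encoded sequence. Data here is random; real work uses curated genomes.
import numpy as np
from sklearn.linear_model import LogisticRegression

BASES = "ACGT"
rng = np.random.default_rng(0)

def one_hot(seq):
    """Encode a DNA string as a flat len(seq) x 4 binary vector."""
    vec = np.zeros((len(seq), 4))
    for i, base in enumerate(seq):
        vec[i, BASES.index(base)] = 1.0
    return vec.ravel()

# Synthetic 40-base windows with random labels, for illustration only.
sequences = ["".join(rng.choice(list(BASES), size=40)) for _ in range(1000)]
labels = rng.integers(0, 2, size=1000)

X = np.array([one_hot(s) for s in sequences])
model = LogisticRegression(max_iter=1000).fit(X, labels)
print("Training accuracy (meaningless on random labels):", model.score(X, labels))
```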
Market Dynamics
Key players in the market are focusing on growth strategies, such as developing AI tools and forming collaborations, which is expected to drive the growth of the global artificial intelligence in digital genome market over the forecast period.
For instance, in May 2020, NVIDIA, a U.S.-based multinational technology company, developed new artificial intelligence and genomic sequencing capabilities to help researchers track and treat COVID-19. Moreover, in September 2019, Novartis, an American-Swiss multinational pharmaceutical corporation, and Microsoft, a U.S.-based multinational technology corporation, announced a multiyear alliance that will leverage data and artificial intelligence (AI) to transform how medicines are discovered, developed, and commercialized.
Key features of the study:
Key Topics Covered:
1. Research Objectives and Assumptions
2. Market Purview
3. Market Dynamics, Regulations, and Trends Analysis
4. Global Artificial Intelligence in Digital Genome Market- Impact of Coronavirus (COVID-19) Pandemic
5. Global Artificial Intelligence in Digital Genome Market, By Offering, 2017 - 2028, (US$ Mn)
6. Global Artificial Intelligence in Digital Genome Market, By Technology, 2017 - 2028, (US$ Mn)
7. Global Artificial Intelligence in Digital Genome Market, By Functionality, 2017 - 2028, (US$ Mn)
8. Global Artificial Intelligence in Digital Genome Market, By Application, 2017 - 2028, (US$ Mn)
9. Global Artificial Intelligence in Digital Genome Market, By End User, 2017 - 2028, (US$ Mn)
10. Global Artificial Intelligence in Digital Genome Market, By Region, 2017 - 2028, (US$ Mn)
11. Competitive Landscape
12. Section
For more information about this report visit https://www.researchandmarkets.com/r/7y29sv
What Is Artificial Intelligence? – ExtremeTech
To many, AI is just a horrible Steven Spielberg movie. To others, it's the next generation of learning computers. But what is artificial intelligence, exactly? The answer depends on who you ask. Broadly, artificial intelligence (AI) is the combination of computer science and robust datasets, deployed to solve some kind of problem.
Many definitions of artificial intelligence include a comparison to the human mind or brain, whether in form or function. Alan Turing wrote in 1950 about thinking machines that could respond to a problem using human-like reasoning. His eponymous Turing test is still a benchmark for natural language processing. Later, Stuart Russell and Peter Norvig observed that humans are intelligent, but we're not always rational. Russell and Norvig saw two classes of artificial intelligence: systems that think and act like a human being, versus those that think and act rationally. Today, we've got all kinds of programs we call AI.
Many AIs employ neural nets, whose code is written to emulate some aspect of the architecture of neurons or the brain. However, not all intelligence is human-like. Nor is it necessarily the best idea to emulate neurobiological information processing. That's why engineers limit how far they carry the brain metaphor. It's more about how phenomenally parallel the brain is, and its distributed memory handling. As defined by John McCarthy in 2004, artificial intelligence is "the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable."
Moreover, the distinction between a neural net and an AI is often a matter of philosophy, more than capabilities or design. Many AI-powered systems are neural nets under the hood. We also call some neural nets AIs. For example, OpenAI's powerful GPT-3 AI is a type of neural net called a transformer (more on these below). A robust neural net's performance can equal or outclass a narrow AI. There is much overlap between neural nets and artificial intelligence, but the capacity for machine learning can be the dividing line.
Conceptually: In the sense of its logical structure, to be an AI, you need three fundamental parts. First, there's the decision process: usually an equation, a model, or just some code. AIs often perform classification or apply transformations; to do that, the AI must be able to pick out patterns in the data. Second, there's an error function, some way for the AI to check its work. And third, if the AI is going to learn from experience, it needs some way to optimize its model. Many neural networks do this with a system of weighted nodes, where each node has both a value and a relationship to its network neighbors. Values change over time; stronger relationships have a higher weight in the error function.
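Those three parts, a decision process, an error function, and a way to update the model, can be shown in a few lines. This is a generic gradient-descent sketch on made-up data, not code from any product mentioned in this article.

```python
# Minimal sketch of the three parts of an AI: decision process, error function, optimizer.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                     # data
y = (X[:, 0] * 1.5 - X[:, 1] > 0).astype(float)   # labels to learn

w = np.zeros(2)
b = 0.0
lr = 0.1

def decide(X, w, b):
    """Decision process: a model that turns inputs into a prediction."""
    return 1.0 / (1.0 + np.exp(-(X @ w + b)))     # logistic output in (0, 1)

for _ in range(500):
    p = decide(X, w, b)
    error = p - y                                  # error function: how wrong are we?
    w -= lr * X.T @ error / len(y)                 # optimization: nudge the weights to
    b -= lr * error.mean()                         #   reduce the error next time

print("Accuracy:", ((decide(X, w, b) > 0.5) == y).mean())
```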
Deep learning networks have more hidden layers than conventional neural networks. Circles are nodes, or neurons.
Physically: Typically, an AI is just software. AI-powered software services like Grammarly and Rytr use neural nets, like GPT-3. Those neural nets consist of equations or commands, written in things like Python or Common Lisp. They run comparisons, perform transformations, and suss out patterns from the data. They usually run on server-side hardware, but which hardware isn't important. Any conventional silicon will do, be it CPU or GPU. However, there are dedicated hardware neural nets, a special kind of ASIC called neuromorphic chips.
Not all ASICs are neuromorphic designs. However, neuromorphic chips are all ASICs. Neuromorphic design is fundamentally different from CPUs, and only nominally overlaps with a GPU's multi-core architecture. But it's not some exotic new transistor type, nor any strange and eldritch kind of data structure. It's all about tensors. Tensors describe the relationships between things; they're a kind of mathematical object that can have metadata, just like a digital photo has EXIF data.
Modern Nvidia RTX GPUs have a huge number of tensor cores. That makes sense if you're drawing moving polygons, each with some number of properties or effects that apply to it. But tensors can handle more than just spatial data. The ability to parallelize tensor calculations is also why GPUs get scalped for crypto mining, and why they're used in cluster computing, especially for deep learning. GPUs excel at organizing many different threads at once.
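That parallelism is easy to see with a batched tensor operation. This PyTorch sketch (the batch and matrix sizes are arbitrary) multiplies a whole stack of matrices in one call, exactly the kind of work tensor cores and GPU threads are built to spread out.

```python
# Hypothetical sketch: one batched tensor operation instead of a Python loop.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

# A "batch" of 1,024 independent 64x64 matrix multiplications.
a = torch.randn(1024, 64, 64, device=device)
b = torch.randn(1024, 64, 64, device=device)

c = torch.matmul(a, b)  # all 1,024 products computed in parallel on the device
print(c.shape, "computed on", device)
```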
But no matter how elegant your data organization might be, it still has to filter through multiple layers of software abstraction before it ever becomes binary. Intel's neuromorphic chip, Loihi 2, affords a very different approach.
Loihi 2 is a neuromorphic chip that comes as a package deal with a software ecosystem named Lava. Loihi's physical architecture invites, almost requires, the use of weighting and an error function, both defining features of AI and neural nets. The chip's biomimetic design extends to its electrical signaling. Instead of ones and zeroes, on or off, Loihi fires in spikes with an integer value capable of carrying much more data. It begs to be used with tensors. What if you didn't have to translate your values into machine code and then binary? What if you could just encode them directly?
Machine learning models that use Lava can take full advantage of Loihi 2's unique physical design. Together, they offer a hybrid hardware-software neural net that can process relationships between multiple entire multi-dimensional datasets, like an acrobat spinning plates.
AI tools like Rytr, Grammarly and others do their work in a regular desktop browser. In contrast, neuromorphic chips like Loihi aren't designed for use in consumer systems. (At least, not yet.) They're intended for researchers. Instead, neuromorphic engineering has a different strength: it can allow silicon to perform another kind of biomimicry. Brains are extremely cheap, in terms of power use per unit throughput. The hope is that Loihi and other neuromorphic systems can mimic that power efficiency to break out of the Iron Triangle and deliver all three: good, fast, and cheap.
If the three-part logical structure of an AI sounds familiar, that's because neural nets have the same three logical pillars. In fact, from IBM's perspective, the relationship between machine learning, deep learning, neural networks and artificial intelligence is a hierarchy of evolution. It's just like the relationship between Charmander, Charmeleon and Charizard. They're all separate entities in their own right, but each is based on the one before, and they grow in power as they evolve. We still have Charmanders even though we also have Charizards.
Artificial intelligence as it relates to machine learning, neural networks, and deep learning. Image: IBM
When an AI learns, it's different from just saving a file after making edits. To an AI, learning involves changing its process.
Many neural nets learn through a process called back-propagation. Typically, a neural net is a feed-forward process, because data only moves in one direction through the network. It's efficient, but it's also a kind of ballistic (unguided) process. In back-propagation, however, later nodes in the process get to pass information back to earlier nodes. Not all neural nets perform back-propagation, but for those that do, the effect is like changing the coefficients in front of the variables in an equation.
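A bare-bones two-layer network makes that "passing information back" concrete: the error at the output flows backward to adjust the earlier layer's weights, much like changing the coefficients in an equation. This is a generic NumPy sketch on a toy problem, not tied to any system named in this article.

```python
# Minimal back-propagation sketch: a two-layer network learning XOR.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR targets

W1 = rng.normal(size=(2, 8))
b1 = np.zeros((1, 8))
W2 = rng.normal(size=(8, 1))
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(5000):
    # Feed-forward pass: data flows input -> hidden -> output.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Back-propagation: the output error flows backward to the earlier weights.
    d_out = (out - y) * out * (1 - out)
    d_h = d_out @ W2.T * h * (1 - h)

    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(np.round(out, 2))  # should approach [[0], [1], [1], [0]]
```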
We also divide neural nets into two classes, depending on what type of problems they can solve. In supervised learning, a neural net checks its work against a labeled training set or an overwatch; in most cases, that overwatch is a human. For example, SwiftKey learns how you text, and adjusts its autocorrect to match. Pandora uses listeners' input to finely classify music, in order to build specifically tailored playlists. 3blue1brown even has an excellent explainer series on neural nets, where he discusses a neural net using supervised learning to perform handwriting recognition.
Supervised learning is great for fine accuracy on an unchanging set of parameters, like alphabets. Unsupervised learning, however, can wrangle data with changing numbers of dimensions. (An equation with x, y and z terms is a three-dimensional equation.) Unsupervised learning tends to win with small datasets. It's also good at recognizing patterns we might not even know to look for.
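The two classes look like this in practice (toy data, scikit-learn chosen just for brevity): supervised learning fits to labels and checks itself against that ground truth, while unsupervised learning finds structure with no labels at all.

```python
# Hypothetical sketch: supervised vs. unsupervised learning on toy data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Two blobs of points, one centered at (0, 0) and one at (4, 4).
X = np.vstack([rng.normal(0, 1, size=(100, 2)), rng.normal(4, 1, size=(100, 2))])
labels = np.array([0] * 100 + [1] * 100)

# Supervised: the model checks its work against the provided labels.
clf = LogisticRegression().fit(X, labels)
print("Supervised accuracy:", clf.score(X, labels))

# Unsupervised: no labels; the model looks for structure on its own.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("Discovered cluster sizes:", np.bincount(clusters))
```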
Transformers are a special, versatile kind of AI capable of unsupervised learning. They can integrate many different streams of data, each with its own changing parameters. Because of this, they're great at handling tensors. Tensors, in turn, are great for keeping all that data organized. With the combined powers of tensors and transformers, we can handle more complex datasets.
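At the heart of a transformer is scaled dot-product attention, which is just a few tensor operations: every element of a sequence is compared with every other element, and the results weight how much each one contributes. A minimal NumPy sketch of that core computation (random weights, not a full transformer) looks like this:

```python
# Minimal sketch of scaled dot-product attention, the tensor operation at the core
# of transformer models. Shapes: a sequence of 5 tokens, each a 16-dimensional vector.
import numpy as np

rng = np.random.default_rng(0)
seq_len, d_model = 5, 16
x = rng.normal(size=(seq_len, d_model))

# Learned projection matrices (random here) map inputs to queries, keys, and values.
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
Q, K, V = x @ Wq, x @ Wk, x @ Wv

scores = Q @ K.T / np.sqrt(d_model)              # compare every token with every other
weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)  # row-wise softmax
attended = weights @ V                           # weighted mix of the value vectors

print(weights.shape, attended.shape)             # (5, 5) attention map, (5, 16) output
```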
Video upscaling and motion smoothing are great applications for AI transformers. Likewise, tensors are crucial to the detection of deepfakes and alterations. With deepfake tools reproducing in the wild, it's a digital arms race.
The person in this image does not exist. This is a deepfake image created by StyleGAN, Nvidia's generative adversarial neural network.
Video signal has high dimensionality. It's made of a series of images, which are themselves composed of a series of coordinates and color values. Mathematically and in computer code, we represent those quantities as matrices or n-dimensional arrays. Helpfully, tensors are great for matrix and array wrangling. DaVinci Resolve, for example, uses tensor processing in its (Nvidia RTX) hardware-accelerated Neural Engine facial recognition utility. Hand those tensors to a transformer, and its powers of unsupervised learning do a great job picking out the curves of motion on-screen and in real life.
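Concretely, a clip is just a four-dimensional array: frames by height by width by color channels. The toy NumPy snippet below (dimensions invented for the example) shows the kind of object those tensor operations are applied to.

```python
# Hypothetical sketch: video as an n-dimensional array (frames, height, width, channels).
import numpy as np

rng = np.random.default_rng(0)
clip = rng.integers(0, 256, size=(30, 360, 640, 3), dtype=np.uint8)  # 30 frames of 360p RGB

print("Tensor shape:", clip.shape)
print("One frame:", clip[0].shape)                                    # (360, 640, 3)
print("Red channel over time at one pixel:", clip[:, 180, 320, 0].shape)  # (30,)
```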
In fact, that ability to track multiple curves against one another is why the tensor-transformer dream team has taken so well to things like natural language processing. And the approach can generalize. Convolutional transformers, a hybrid of a CNN and a transformer, excel at image recognition on the fly. This tech is in use today, for things like robot search and rescue or assistive image and text recognition, as well as the much more controversial practice of dragnet facial recognition, à la Hong Kong.
The ability to handle a changing mass of data is great for consumer and assistive tech, but it's also clutch for things like mapping the genome and improving drug design. The list goes on. Transformers can also handle different kinds of dimensions, not just the spatial, which is useful for managing an array of devices or embedded sensors, like weather tracking, traffic routing, or industrial control systems. That's what makes AI so useful for data processing at the edge.
Not only does everyone have a cell phone, there are embedded systems in everything. This proliferation of devices gives rise to an ad hoc global network called the Internet of Things (IoT). In the parlance of embedded systems, the edge represents the outermost fringe of end nodes within the collective IoT network. Edge intelligence takes two main forms: AI on edge and AI for edge. The distinction is where the processing happens. AI on edge refers to network end nodes (everything from consumer devices to cars and industrial control systems) that employ AI to crunch data locally. AI for the edge enables edge intelligence by offloading some of the compute demand to the cloud.
In practice, the main differences between the two are latency and horsepower. Local processing is always going to be faster than a data pipeline beholden to ping times. The tradeoff is the computing power available server-side.
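That split can be sketched as a simple routing decision on the device: run a small local model when possible, and ship the work to a cloud endpoint otherwise. Everything below (the endpoint URL, the model object, the size threshold) is an invented placeholder meant only to illustrate the idea.

```python
# Hypothetical sketch: choosing between on-device inference ("AI on edge") and
# offloading to the cloud ("AI for edge"). All names and the URL are placeholders.
import requests  # assumes the standard 'requests' HTTP library is available

CLOUD_ENDPOINT = "https://example.invalid/infer"   # placeholder, not a real service

def classify(sample, local_model, max_local_size=1_000):
    """Run small jobs locally; offload big ones to a (hypothetical) cloud service."""
    if len(sample) <= max_local_size:
        # AI on edge: low latency, limited horsepower.
        return local_model.predict([sample])[0]
    # AI for edge: higher latency (network round trip), more compute server-side.
    response = requests.post(CLOUD_ENDPOINT, json={"sample": sample}, timeout=5)
    response.raise_for_status()
    return response.json()["prediction"]
```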
Embedded systems, consumer devices, industrial control systems, and other end nodes in the IoT all add up to a monumental volume of information that needs processing. Some phone home, some have to process data in near real-time, and some have to check and correct their own work on the fly. Operating in the wild, these physical systems act just like the nodes in a neural net. Their collective throughput is so complex that, in a sense, the IoT has become the AIoT: the artificial intelligence of things.
As devices get cheaper, even the tiny slips of silicon that run low-end embedded systems have surprising computing power. But having a computer in a thing doesn't necessarily make it smarter. Everything's got Wi-Fi or Bluetooth now. Some of it is really cool. Some of it is made of bees. If I forget to leave the door open on my front-loading washing machine, I can tell it to run a cleaning cycle from my phone. But the IoT is already a well-known security nightmare. Parasitic global botnets exist that live in consumer routers. Hardware failures can cascade, like the Great Northeast Blackout of summer 2003, or when Texas froze solid in 2021. We also live in a timeline where a faulty firmware update can brick your shoes.
There's a common pipeline (hypeline?) in tech innovation. When some Silicon Valley startup invents a widget, it goes from idea to hype train to widgets-as-a-service to disappointment, before finally figuring out what the widget's actually good for.
Oh, okay, there is an actual hypeline. Above: The 2018 Gartner hype cycle. Note how many forms of artificial intelligence showed up on this roller coaster then and where they are now. Image: Gartner, 2018
This is why we lampoon the IoT with loving names like the Internet of Shitty Things and the Internet of Stings. (Internet of Stings devices communicate over TCBee-IP.) But the AIoT isn't something anyone can sell. It's more than the sum of its parts. The AIoT is a set of emergent properties that we have to manage if we're going to avoid an explosion of splinternets, and keep the world operating in real time.
In practice, artificial intelligence is often the same thing as a neural net capable of machine learning. They're both software that can run on whatever CPU or GPU is available and powerful enough. Neural nets often have the power to perform machine learning via back-propagation. There's also a kind of hybrid hardware-and-software neural net that brings a new meaning to machine learning. It's made using tensors, ASICs, and neuromorphic engineering by Intel. Furthermore, the emergent collective intelligence of the IoT has created a demand for AI on, and for, the edge. Hopefully we can do it justice.
Go here to see the original:
What Is Artificial Intelligence? - ExtremeTech
Stanford center uses AI and machine learning to expand data on women’s and children’s health, director says – The Stanford Daily
Stanford's Center for Artificial Intelligence in Medicine and Imaging (AIMI) is increasing engagement around the use of artificial intelligence (AI) and machine learning to build a better understanding of data on women's and children's health, according to AIMI Director and radiology professor Curt Langlotz.
Langlotz explained that, while AIMI initially focused on applying AI to medical imaging, it has since expanded its focus to applications of AI for other types of data, such as electronic health records.
"Specifically, the center conducts interdisciplinary machine learning research that optimizes how data of all forms are used to promote health," Langlotz said during a Monday event hosted by the Maternal and Child Health Research Institute (MCHRI). "And that interdisciplinary flavor is in our DNA."
The center now has over 140 affiliated faculty across 20 departments, primarily housed in the engineering department and the school of medicine at Stanford, according to Langlotz.
AIMI has four main pillars: building an infrastructure for data science research, facilitating interdisciplinary collaborations, engaging the community and providing funding.
The center provides funding predominantly through a series of grant programs. Langlotz noted that the center awarded seven $75,000 grants in 2019 to fund mostly imaging projects, but it has since diversified funding to go toward projects investigating other forms of data, such as electronic health records. AIMI also collaborated with the Human-Centered Institute for Artificial Intelligence (HAI) in 2021 to give out six $200,000 grants, he added.
Outside of funding, AIMI hosts a virtual symposium on technology and health annually and has a health-policy committee that informs policymakers on the intersection between AI and healthcare. Furthermore, the center pairs industry partners with laboratories to work on larger research projects of mutual interest as part of the only industry affiliate program for the school of medicine, Langlotz added.
"Industry often has expertise that we don't, so they may have expertise on bringing products to market as they may know what customers are looking for," Langlotz said. "And if we're building these kinds of algorithms, we really would like them to ultimately reach patients."
Heike Daldrup-Link, a professor of radiology and pediatrics, and Alison Callahan, a research scientist at the Center for Biomedical Informatics, shared their research funded by the AIMI Center that rests at the intersection of computer science and medicine.
Daldrup-Link's research involves analyzing children's responses to lymphoma cancer therapy with a model that examines tumor sites using positron emission tomography (PET) scans. These scans reveal the metabolic processes occurring within tissues and organs, according to Daldrup-Link. The scans also serve as a good source for building algorithms because there are at least 270,000 scans per year from lymphoma patients, resulting in a large amount of available data.
Callahan is building AI models to extract information from electronic health records to learn more about pregnancy and postnatal health outcomes. She explained that much of the health data available from records is currently unstructured, meaning it does not conform to a database or simple model. Still, AI methods "can really shine in extracting valuable information from unstructured content like clinical texts or notes," she said.
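As a toy illustration of pulling structure out of unstructured clinical text, the sketch below uses a simple rule-based extractor, far cruder than the AI models described here, and a fabricated note. It only shows the goal: turning free text into database-ready fields.

```python
# Hypothetical sketch: extracting structured fields from an unstructured clinical note.
# Real systems use trained NLP models; this rule-based version just illustrates the idea.
import re

note = "Patient presents at 32 weeks gestation. BP 118/76. Reports mild edema."  # fabricated

patterns = {
    "gestational_age_weeks": r"(\d+)\s*weeks gestation",
    "blood_pressure": r"BP\s*(\d+/\d+)",
}

extracted = {}
for field, pattern in patterns.items():
    match = re.search(pattern, note, flags=re.IGNORECASE)
    if match:
        extracted[field] = match.group(1)

print(extracted)  # {'gestational_age_weeks': '32', 'blood_pressure': '118/76'}
```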
Callahan and Daldrup-Link are just two examples of researchers who use AI and machine learning methods to produce novel research on women's and children's health. Developing new methods such as these is important in solving complex problems in healthcare, according to Langlotz.
"If you're working on difficult and interesting applied problems that are clinically important, you're likely to encounter the need to develop new and interesting methods," Langlotz said. "And that's proven true for us."
Read the original post:
Stanford center uses AI and machine learning to expand data on women's and children's health, director says - The Stanford Daily
Policy experts stress the need to regulate artificial intelligence in health care – Urology Times
Not having policies in place to regulate artificial intelligence (AI) and machine learning (ML) could have dire consequences across every sector of the health care industry.
That was the point made by Brian Scarpelli and Sebastian Holst during their presentation titled "A modest proposal for AI regulation in healthcare," held during the HIMSS22 Global Health Conference in Orlando. Scarpelli is the senior global policy counsel for the Connected Health Initiative, and Holst is principal with Qi-fense, a consulting group that works in AI and ML.
"ML properties do more than challenge domain-specific applications of technology," Scarpelli and Holst write. "Many of these properties will force an evaluation and retooling of core manufacturing, quality, and risk frameworks that have effectively served as the foundation of today's industry-specific regulations and policies."
Here are some key points from their presentation on the growth of AI/ML and the need for regulation.
AI can potentially revolutionize health care in all facets. It can reduce administrative burdens for providers and payers and allow resources to be deployed within a health system to serve vulnerable patient populations. It can help manage public health emergencies such as the COVID-19 pandemic and improve both preventive care and diagnostic efficiency.
According to Scarpelli and Holst, the growth in machine learning products has surged since 2015, starting first with processing applications, including products for processing radiological images, and has since progressed into diagnosis applications, particularly also in the radiological space to assist with triage and prioritization.
The number of patents coded to machine learning and health informatics has exploded, from 165 in 2017 to more than 1,100 in 2021.
While AI is promising, there are potential legal and ethical challenges that must be addressed. For example, one of the major themes of the HIMSS22 conference has been the challenge of achieving health equity and eliminating implicit bias. That's one of the major challenges of AI as well, since AI solutions can be biased. Many sessions focused on how diverse teams are needed when creating AI solutions to ensure that the programs don't carry the same biases as society, which could exacerbate current social problems, according to Tania M. Martin-Mercado, MS, MPH, a clinical researcher who presented on "How implicit bias affects AI in healthcare."
During her presentation, she pointed to an example of an online tool that estimates breast cancer risk yet calculates a lower risk for Black or Latinx women than for White women, even when every other risk factor is identical.
A diverse group of health agencies, including the FDA, HHS, CMS, FTC, and the World Health Organization, are developing regulations and asking for guidance from various stakeholders, including AI developers, physicians and other providers, patients, medical societies, and academic institutions.
Scarpelli says that the vision for successful AI follows four principles. It should:
This article originally appeared on the website MedicalEconomics.com
Here is the original post:
Policy experts stress the need to regulate artificial intelligence in health care - Urology Times
Artificial Intelligence: A game-changer for the Indian Education System – The Financial Express
With the rapid advancement of technology, Artificial Intelligence (AI) has become one of the key aspects of growth and innovation across industries. It is thus imperative that the youth are made familiar with the basic concepts of AI from childhood. In fact, it looks like the process has already started. The Madhya Pradesh government recently announced the introduction of an Artificial Intelligence course for students from class 8. Chief Minister Shivraj Singh Chouhan said that this is going to be the first such initiative in the country.
India has always advocated for universal learning, and Artificial Intelligence constitutes an integral part of that. It is important for educators across states in India to start integrating the topic of AI into their classrooms as it can definitely help the education system achieve the impossible.
Let's dive into the various advantages of introducing Artificial Intelligence in the Indian education system:
According to a UNESCO report released in 2021, there are about 1.2 lakh single-teacher schools in the country, of which 89 percent are in rural areas. The report suggests that India needs around 11.16 lakh additional teachers to meet this shortfall. AI can help overcome this shortage and can provide easy access to education for one and all.
For professors and teachers, focusing on every individual student's needs and requirements is difficult, and it is going to get tougher with the rapidly growing population. This problem can be resolved if our education system resorts to implementing AI programs in classrooms, which will not only help in assessing every student's learning graph but also help them navigate their weaknesses.
Artificial Intelligence can help teachers with administrative work like creating feedback for students, grading papers, arranging parent-teacher interactions, etc. AI applications like text-to-speech can help teachers save time on a daily basis, which will not only free up their schedules but also make room for them to focus more on the creative aspects of teaching.
AI programs like chatbots can also do the job of assisting students by answering and resolving their queries any time, any place. Students won't have to wait to see their teachers to get answers; they can easily move ahead with a simple click of a button.
In today's day and age, it is important to optimize the process of learning for each and every child. There are a number of possibilities as to what AI could do if introduced as an integral part of the education system. It is up to us to make the most of it.
Follow this link:
Artificial Intelligence: A game-changer for the Indian Education System - The Financial Express
UNSW researcher receives award recognising women in artificial intelligence – UNSW Newsroom
UNSW Engineering Professor Flora Salim has been honoured for her pioneering work in computing and machine learning by Women in AI, a global advocacy group for women in the artificial intelligence (AI) field.
The 2022 Women in AI Awards Australia and New Zealand recognised women across various industries committed to excellence in AI.
Finalists were judged on innovation, leadership and inspiring potential, global impact, and the ability of the AI solution to provide a social good for the community.
Prof. Salim was recognised for her AI achievements in the Defence and Intelligence award category.
The award acknowledged her research in the cross-cutting areas of ubiquitous computing and machine learning, with a focus on efficient, fair, and explainable machine learning for multi-dimensional sensor data, towards enabling situational and behaviour intelligence for multiple applications.
"I am thrilled and honoured to receive this award. This highlights our efforts in advancing AI and machine learning techniques for sensor data," Prof. Salim said.
"I would like to acknowledge my students, postdocs, collaborators, and mentors. I hope we can inspire more women to join us towards solving difficult AI problems that matter."
Prof. Salim is the inaugural Cisco Chair in Digital Transport in the School of Computer Science and Engineering at UNSW Sydney and a member of the Australian Research Council (ARC) College of Experts, having recently moved from RMIT University's School of Computing Technologies, Melbourne.
Her research on human-centred computing AI and machine learning for behaviour modelling with multimodal spatial-temporal data has received funding from numerous partners, resulting in more than 150 papers and three patents.
Research led by Prof. Salim with collaborators from Microsoft Research and RMIT University on task characterisation and automating task scheduling led to insights that influenced the research and development of several new Microsoft product features.
UNSW Dean of Engineering, Professor Stephen Foster congratulated Prof. Salim on receiving an award that promotes women in the AI sector.
"Artificial intelligence will reshape every corner of our lives in the coming years, so it's pleasing to see brilliant women recognised for shaping the future of AI," Prof. Foster said.
"I congratulate Prof. Salim for being at the forefront of AI today."
Women in AI is a global not-for-profit network working towards empowering women and minorities to excel and be innovators in the AI and Data fields.
The awards were held at a gala dinner at the National Gallery of Victoria in Melbourne.
Visit link:
UNSW researcher receives award recognising women in artificial intelligence - UNSW Newsroom