Category Archives: Artificial Intelligence
USA receives grant to research using artificial intelligence to forecast the weather – WKRG News 5
MOBILE, Ala. (WKRG) Meteorologists such as our First Alert Storm Team use computer models to help them forecast the weather. The University of South Alabama, along with four other universities, received a $5 million grant from the National Science Foundation to conduct research using machine learning and artificial intelligence to improve weather forecasting.
South's portion of the grant is almost $800,000 and will be used to teach the forecast models how to more accurately predict local weather such as sea breezes, weather events that could impact agriculture in our region, and large-scale weather such as hurricanes.
Dr. Sytske Kimball, chair of the Department of Earth Sciences at the University of South Alabama, said, "So if we can improve targeted forecasting... like conditions in this particular area are going to be such that people should evacuate, whereas over here, people can just shelter in place. That would make a HUGE difference in recovery and emergency management."
This project will train the models by feeding data from hundreds of thousands of past weather events into what are called neural networks. A neural network is one of the systems (algorithms) used in machine learning. The system is made up of many parts, and each part focuses on learning how to better forecast a different weather parameter (temperature, humidity, winds, pressure, etc.) by recognizing patterns in past weather events.
Dr. Tom Johnsten, Associate Professor in the School of Computing at South, said, "Each one will learn a particular weather parameter to predict. So we might have one neural network that would learn how to predict the temperature, or the humidity, pressure, wind speed and so forth."
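The one-network-per-parameter idea can be sketched in a few lines. This is a hypothetical illustration, not the project's actual code: a tiny NumPy network is trained on synthetic "past events" for each parameter, and the feature set and dimensions are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_parameter_net(X, y, hidden=16, lr=0.05, epochs=500):
    """Fit a tiny one-hidden-layer regression network to one weather parameter."""
    n, d = X.shape
    W1 = rng.normal(0.0, 0.5, (d, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.5, (hidden, 1)); b2 = np.zeros(1)
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)                    # hidden activations
        pred = (h @ W2 + b2).ravel()
        g_pred = (2.0 / n) * (pred - y)[:, None]    # grad of mean squared error
        g_h = g_pred @ W2.T * (1.0 - h ** 2)        # backprop through tanh
        W2 -= lr * h.T @ g_pred; b2 -= lr * g_pred.sum(0)
        W1 -= lr * X.T @ g_h;    b1 -= lr * g_h.sum(0)
    return lambda Xq: (np.tanh(Xq @ W1 + b1) @ W2 + b2).ravel()

# Synthetic "past weather events": three features per event (purely illustrative).
X = rng.normal(size=(200, 3))
targets = {
    "temperature": X @ np.array([1.0, -0.5, 0.2]) + 0.1 * rng.normal(size=200),
    "humidity":    X @ np.array([-0.3, 0.8, 0.4]) + 0.1 * rng.normal(size=200),
}
# One network per parameter, as Dr. Johnsten describes.
models = {name: train_parameter_net(X, y) for name, y in targets.items()}
```

Each trained model predicts only its own parameter; a full forecast would query every model.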
Students in the School of Computing and in meteorology at South will also be able to participate in this research, which includes gathering data from, and maintaining, the network of weather stations in our area.
Reply is "Best in Class" Provider of Artificial Intelligence According to teknowlogy Group/PAC – Business Wire
TURIN, Italy--(BUSINESS WIRE)--Reply is among the leading service providers for Artificial Intelligence in the PAC INNOVATION RADAR for AI-related services, an industry study conducted by the independent research and consulting company teknowlogy Group/PAC.
The survey analyses the service performance of 30 international consulting firms and service providers that implement projects in the field of Artificial Intelligence. Providers in the PAC INNOVATION RADAR "AI-related Services in Germany 2020" are compared across four main metrics: breadth of the service spectrum, local delivery capability, investments in AI-specific solutions and staff training.
The study certifies Reply as a "Best in Class" provider in the market for AI-related services in three categories: Sales, Service and Marketing; Logistics and SCM; and Production and IoT.
Filippo Rizzante, Reply CTO: "Reply has been following a clear AI strategy for many years: AI is firmly anchored in our corporate strategy and therefore part of numerous customer projects. Worldwide, our teams are working on the development of innovative projects using Artificial Intelligence and Machine Learning. This result underlines the extensive industry expertise and deep technology know-how of Reply".
For further information on the 2020 report: LINK
Reply
Reply [MTA, STAR: REY, ISIN: IT0005282865] specialises in the design and implementation of solutions based on new communication channels and digital media. As a network of highly specialised companies, Reply defines and develops business models enabled by the new models of AI, cloud computing, digital media and the internet of things. Reply delivers consulting, system integration and digital services to organisations across the telecom and media; industry and services; banking and insurance; and public sectors. http://www.reply.com
teknowlogy Group/PAC
teknowlogy Group is the leading independent European research and consulting firm in the fields of digital transformation, software, and IT services. It brings together the expertise of two research and advisory firms, each with a strong history and local presence in the fragmented markets of Europe: CXP and PAC (Pierre Audoin Consultants). http://www.teknowlogy.com and http://www.pac-online.com
Neuromorphic Computing: How the Brain-Inspired Technology Powers the Next-Generation of Artificial Intelligence – Interesting Engineering
As a remarkable product of evolution, the human brain has a baseline energy footprint of about 20 watts; this gives the brain the power to process complex tasks in milliseconds. Today's CPUs and GPUs dramatically outperform the human brain for serial processing tasks. However, the process of moving data from memory to a processor and back creates latency and, in addition, expends enormous amounts of energy.
Neuromorphic systems attempt to imitate how the human nervous system operates. This field of engineering tries to imitate the structure of biological sensing and information processing nervous systems. In other words, neuromorphic computing implements aspects of biological neural networks as analogue or digital copies on electronic circuits.
Neuromorphics are not a new concept by any means. Like many other emerging technologies gaining momentum just now, neuromorphics have quietly been under development for a long time. But it was not yet their time to shine. More work had to be done.
Over 30 years ago, in the late 1980s, Professor Carver Mead, an American scientist, engineer, and microprocessor pioneer, developed the concept of neuromorphic engineering, also known as neuromorphic computing.
Neuromorphic engineering describes the use of very-large-scale integration (VLSI) systems containing electronic analog circuits. These circuits were arranged in a way that mimics neuro-biological architectures present in the human nervous system.
Neuromorphic computing gets its inspiration from the human brain's architecture and dynamics to create energy-efficient hardware for information processing, making it capable of highly sophisticated tasks.
Neuromorphic computing includes the production and use of neural networks. It takes its inspiration from the human brain with the goal of designing computer chips that are able to merge memory and processing. In the human brain, synapses provide a direct memory access to the neurons that process information.
For decades, electrical engineers have been fascinated by bio-physics and neural computation, and by the development of practical mixed-signal circuits for artificial neural networks. The challenge lies in working across a broad range of disciplines, spanning from electron devices to algorithms. However, neuromorphic systems will prove useful in everyday life, and this alone makes the effort worthwhile.
"Artificial Intelligence (AI) needs new hardware, not just new algorithms. Were at a turning point, where Moores law is reaching its end leading to a stagnation of the performance of our computers. Nowadays, we are generating more and more data that needs to be stored and classified,"said Professor Dmitri Strukov, an electrical engineer at the University of California at Santa Barbara in an interview with Nature Communications about the opportunities and challenges in developing brain-inspired technologies, namely neuromorphic computing, when asked why we need neuromorphic computing.
Strukov went on to tell Nature Communications how recent progress in AI has allowed this process to be automated, with data centers multiplying at the cost of consuming an exponentially increasing amount of electricity, a potential problem for our environment. "This energy consumption mainly comes from data traffic between memory and processing units that are separated in computers," said Strukov.
"It wastes electrical energy and it considerably slows down computational speed. Recent developments in nanotechnology offer the possibility to bring huge amounts of memory close to processing, or even better, to integrate this memory directly in the processing unit,said Dmitri Strukov.
According to Strukov, the idea of neuromorphic computing is to take inspiration from the brain for designing computer chips that merge memory and processing. In the brain, synapses provide a direct memory access to the neurons that process information. That is how the brain achieves impressive computational power and speed with very little power consumption. By imitating this architecture, neuromorphic computing provides a path to building smart neuromorphic chips that consume very little energy and, meanwhile, compute fast.
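The neuron-and-synapse dynamics described here are commonly modeled with spiking units such as the leaky integrate-and-fire neuron, where state (the membrane voltage) lives right where the computation happens. A minimal sketch, not Intel's implementation, with illustrative parameter values:

```python
def simulate_lif(input_current, dt=1.0, tau=20.0, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron: the membrane voltage leaks toward
    rest, integrates its input, and emits a spike when crossing threshold."""
    v = 0.0
    spikes = []
    for t, i_in in enumerate(input_current):
        v += dt / tau * (-v + i_in)   # leaky integration of the input
        if v >= v_thresh:
            spikes.append(t)          # spike event: the neuron's only output
            v = v_reset               # reset after firing
    return spikes

# A constant drive above threshold produces a regular spike train.
spikes = simulate_lif([1.5] * 200)
```

Because the voltage is both memory and processing state, no data shuttles between a separate memory and processor, which is the efficiency argument Strukov makes.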
To some, it may seem that neuromorphic computing is part of a distant future. However, neuromorphic technology is here, closer than you think. Beyond research and futuristic speculation, Intel's Neuromorphic Lab created a self-learning neuromorphic research chip under the code name Loihi (pronounced low-ee-hee). Loihi, Intel's fifth neuromorphic chip, was announced in September 2017 as a predominantly research chip. Since then, it has come a long way.
As an interesting related fact, Intel's chosen name for the chip, Loihi (Lōʻihi), means "long" in Hawaiian and is also the name of the newest (sometimes referred to as the youngest) active submarine volcano in the Hawaiian-Emperor seamount chain, a string of volcanoes that stretches about 6,200 km (3,900 miles) to the northwest of Lōʻihi itself.
Now back to the chip. Loihi is a neuromorphic manycore processor with on-chip learning. Intel's 14-nanometer Loihi chip contains over 2 billion transistors, 130,000 artificial neurons, and 130 million synapses.
The Loihi chip integrates a wide range of novel features for the field, such as programmable synaptic learning rules. According to Intel, the neuromorphic chip is the next-generation Artificial Intelligence enabler.
The abstract of the paper "Loihi: A Neuromorphic Manycore Processor with On-Chip Learning", published in IEEE Micro, reads:
Loihi is a 60-mm² chip fabricated in Intel's 14-nm process that advances the state-of-the-art modeling of spiking neural networks in silicon. It integrates a wide range of novel features for the field, such as hierarchical connectivity, dendritic compartments, synaptic delays, and, most importantly, programmable synaptic learning rules. Running a spiking convolutional form of the Locally Competitive Algorithm, Loihi can solve LASSO optimization problems with over three orders of magnitude superior energy-delay-product compared to conventional solvers running on a CPU iso-process/voltage/area. This provides an unambiguous example of spike-based computation, outperforming all known conventional solutions.
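The Locally Competitive Algorithm mentioned in the abstract maps LASSO (sparse coding) onto neuron-like dynamics: each unit integrates its feedforward input while inhibiting units whose dictionary elements overlap with its own. A simplified non-spiking NumPy sketch of those dynamics, with illustrative dimensions and step sizes rather than the on-chip spiking version:

```python
import numpy as np

rng = np.random.default_rng(1)

def lca(Phi, s, lam=0.05, tau=10.0, steps=400):
    """Locally Competitive Algorithm: unit potentials u evolve while active
    coefficients a inhibit each other through dictionary correlations."""
    n_units = Phi.shape[1]
    u = np.zeros(n_units)
    drive = Phi.T @ s                         # feedforward input
    G = Phi.T @ Phi - np.eye(n_units)         # lateral inhibition weights
    soft = lambda v: np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)
    for _ in range(steps):
        a = soft(u)                           # thresholded (active) coefficients
        u += (1.0 / tau) * (drive - u - G @ a)
    return soft(u)

# A sparse signal, recovered from its projection through a random dictionary.
Phi = rng.normal(size=(30, 60))
Phi /= np.linalg.norm(Phi, axis=0)            # unit-norm dictionary columns
a_true = np.zeros(60)
a_true[[3, 17, 42]] = [1.0, -1.5, 2.0]
s = Phi @ a_true
a_hat = lca(Phi, s)
```

At convergence the thresholded outputs approximate the LASSO solution; on Loihi the same competition is carried by spikes rather than continuous values.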
Most recently, Intel and Sandia National Laboratories signed a three-year agreement to explore the value of neuromorphic computing for scaled-up Artificial Intelligence problems.
According to Intel, Sandia will kick off its research using a 50-million-neuron Loihi-based system delivered to its facility in Albuquerque, New Mexico. This initial work with Loihi will lay the foundation for the later phase of the collaboration, which is expected to include continued large-scale neuromorphic research on Intel's upcoming next-generation neuromorphic architecture and the delivery of Intel's largest neuromorphic research system to date, which could exceed 1 billion neurons in computational capacity.
Upon the release of the agreement, Mike Davies, Director of Intel's Neuromorphic Computing Lab, said: "By applying the high-speed, high-efficiency, and adaptive capabilities of neuromorphic computing architecture, Sandia National Labs will explore the acceleration of high-demand and frequently evolving workloads that are increasingly important for our national security. We look forward to a productive collaboration leading to the next generation of neuromorphic tools, algorithms, and systems that can scale to the billion neuron level and beyond."
Clearly, there are great expectations for what neuromorphic technology promises. While most neuromorphic research to date has focused on the technology's promise for edge use cases, new developments show that neuromorphic computing could also provide value for large, complex computational problems that require real-time processing, problem solving, adaptation, and fundamentally learning.
Intel, as a leader in neuromorphic research, is actively exploring this potential by releasing a 100-million-neuron system, Pohoiki Springs, to the Intel Neuromorphic Research Community (INRC). Initial research conducted on Pohoiki Springs demonstrates how neuromorphic computing can provide up to four orders of magnitude better energy efficiency for constraint satisfaction, a standard high-performance computing problem, compared to state-of-the-art CPUs.
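For readers unfamiliar with constraint satisfaction: a classic instance is graph coloring, shown here with a conventional backtracking solver. This sketch only illustrates the problem class; neuromorphic solvers attack it with parallel stochastic spiking dynamics rather than systematic search.

```python
def color_graph(edges, n_nodes, n_colors):
    """Backtracking search for a proper coloring: no edge may join
    two nodes of the same color. Returns a color list or None."""
    neighbors = [[] for _ in range(n_nodes)]
    for a, b in edges:
        neighbors[a].append(b)
        neighbors[b].append(a)
    colors = [None] * n_nodes

    def assign(node):
        if node == n_nodes:
            return True                       # every node colored consistently
        for c in range(n_colors):
            if all(colors[nb] != c for nb in neighbors[node]):
                colors[node] = c
                if assign(node + 1):
                    return True
                colors[node] = None           # undo and try the next color
        return False

    return colors if assign(0) else None

# A 5-cycle is a small instance: it needs 3 colors, 2 are not enough.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
coloring = color_graph(edges, 5, 3)
```

Systematic search like this scales poorly, which is why energy-efficient parallel hardware for such problems is attractive.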
One goal of the joint effort is to better understand how emerging technologies, such as neuromorphic computing, can be used as tools to address some of today's most pressing scientific and engineering challenges.
These challenges include problems in scientific computing, counterproliferation, counterterrorism, energy, and national security. The possibilities are diverse and perhaps unlimited; the applications already reach well beyond what one might have expected at the outset.
Advancing research in scaled-up neuromorphic computing is, at this point, paramount to determining where these systems are most effective and how they can provide real-world value. For starters, this upcoming research will evaluate the scaling of a variety of spiking neural network workloads, from physics modeling to graph analytics to large-scale deep networks.
According to Intel, these sorts of problems are useful for performing scientific simulations such as modeling particle interactions in fluids, plasmas, and materials. Moreover, these physics simulations increasingly need to leverage advances in optimization, data science, and advanced machine learning capabilities in order to find the right solutions.
Accordingly, potential applications for these workloads include simulating the behavior of materials, finding patterns and relationships in datasets, and analyzing temporal events from sensor data. This is just the beginning; it remains to be seen what real-life applications will emerge.
The fact that neuromorphic systems are designed to mimic the human brain raises important ethical questions. Neuromorphic chips used in Artificial Intelligence have, indeed, more in common with human cognition than with conventional computer logic.
What perceptions, attitudes, and implications might this bring in the future, when a person shares a room with a machine whose neural networks resemble a human's more than they do a microprocessor?
While neuromorphic technology is still in its infancy, the field is advancing rapidly. In the near future, commercially available neuromorphic chips will most likely have an impact on edge devices, robotics, and Internet of Things (IoT) systems. Neuromorphic computing is on its way toward low-power, miniaturized chips that can infer and learn in real time. Indeed, we can expect exciting times ahead in the field of neuromorphic computing.
Asia Pacific Digital Transformation Research Report 2020-2025: Focus on 5G, Artificial Intelligence, Internet of Things, and Smart Cities – PRNewswire
DUBLIN, Oct. 12, 2020 /PRNewswire/ -- The "Digital Transformation Asia Pacific: 5G, Artificial Intelligence, Internet of Things, and Smart Cities in APAC 2020 - 2025" report has been added to ResearchAndMarkets.com's offering.
This report identifies market opportunities for deployment and operations of key technologies within the Asia Pac region. While the biggest markets (China, Korea, and Japan) often get the most attention, it is important to also consider the fast-growing ASEAN region, including Indonesia, Malaysia, Philippines, Singapore, Thailand, Brunei, Laos, Myanmar, Cambodia, and Vietnam. In fact, many lessons learned in leading Asia Pac countries will be applied to the ASEAN region. By way of example, H3C Technologies Co. is planning to offer a comprehensive digital transformation platform within Thailand that includes core cloud and edge computing, big data, interconnectivity, information security, IoT, AI, and 5G solutions.
The AI segment is currently very fragmented, characterized by most companies focusing on siloed approaches to solutions. Longer-term, researchers see many solutions involving multiple AI types as well as integration across other key areas such as the Internet of Things (IoT) and data analytics. AI is expected to have a big impact on data management. However, the impact goes well beyond data management, as we anticipate that these technologies will increasingly become part of every network, device, application, and service.
Data analytics at the edge of networks is very different from centralized cloud computing, as data is contextual (for example, collected and computed at a specific location) and may be processed in real time (e.g. streaming data) via big data analytics technologies. Edge computing represents an important ICT trend in which computational infrastructure is moving increasingly closer to the source of data processing needs. This movement to the edge does not diminish the importance of centralized computing such as is found with many cloud-based services. Instead, computing at the edge offers many complementary advantages, including reduced latency for time-sensitive data and lower capital and operational expenditures due to efficiency improvements.
For both core cloud infrastructure and edge computing equipment, the use of AI for decision making in IoT and data analytics will be crucial for efficient and effective decision making, especially in the area of streaming data and real-time analytics associated with edge computing networks. Real-time data will be a key value proposition for all use cases, segments, and solutions. The ability to capture streaming data, determine valuable attributes, and make decisions in real-time will add an entirely new dimension to service logic. In many cases, the data itself, and actionable information will be the service.
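A toy example of the real-time decision logic described above: a sliding-window anomaly check that a hypothetical edge device could run locally on streaming sensor data, forwarding only flagged events upstream instead of the raw stream. The class name and thresholds are invented for illustration.

```python
from collections import deque

class StreamingAnomalyDetector:
    """Sliding-window z-score check: a minimal sketch of deciding at the
    edge, in real time, whether a new reading is worth acting on."""
    def __init__(self, window=50, threshold=3.0):
        self.buf = deque(maxlen=window)   # bounded memory: fits an edge device
        self.threshold = threshold

    def observe(self, x):
        buf = self.buf
        if len(buf) >= 10:                # require some history before judging
            mean = sum(buf) / len(buf)
            var = sum((v - mean) ** 2 for v in buf) / len(buf)
            std = var ** 0.5 or 1e-9      # guard against a zero-variance window
            anomalous = abs(x - mean) / std > self.threshold
        else:
            anomalous = False
        buf.append(x)
        return anomalous                  # True -> forward this event upstream
```

Each reading is classified the moment it arrives, with bounded memory, which is the essence of the streaming value proposition described above.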
Many industry verticals will be transformed through AI integration with enterprise, industrial, and consumer product and service ecosystems. It is destined to become an integral component of business operations including supply chains, sales, and marketing processes, product and service delivery, and support models. The term for AI support of IoT (or AIoT) is just beginning to become part of the ICT lexicon as the possibilities for the former adding value to the latter are only limited by the imagination.
AI, IoT, and 5G will provide the intelligence, communications, connectivity, and bandwidth necessary for highly functional and sustainable smart cities market solutions. These technologies in combination are poised to produce solutions that will dramatically transform all aspects of ICT and virtually all industry verticals undergoing transformation through AI integration with enterprise, industrial, and consumer product and service ecosystems. The convergence of these technologies will attract innovation that will create further advancements in various industry verticals and other technologies such as robotics and virtual reality.
In addition, these technologies are destined to become an integral component of business operations including supply chains, sales, and marketing processes, product and service delivery, and support models. There will be a positive feedback loop created and sustained by leveraging the interdependent capabilities of AI, IoT, and 5G (e.g. a term coined as AIoT5G). For example, AI will work in conjunction with IoT to substantially improve smart city supply chains. Metropolitan area supply chains represent complex systems of organizations, people, activities, information, and resources involved in moving a product or service from supplier to customer.
Smart cities in particular represent a huge market for Asia Pac digital transformation through a combination of solutions deployed in urban environments that are poised to transform the administration and support of living and working environments. Accordingly, Information and Communications Technologies (ICT) are transforming at a rapid rate, driven by urbanization, the industrialization of emerging economies, and the specific needs of various smart city initiatives. Smart city development is emerging as a focal point for growth drivers in several key ICT areas including 5G, AI, IoT, and the convergence of AI and IoT known as the Artificial Intelligence of Things, or simply AIoT.
Sustainable smart city technology deployments depend upon careful planning and execution as well as monitoring and adjustments as necessary. For example, feature/functionality must be blended to work efficiently across many different industry verticals as smart cities address the needs of disparate market segments with multiple overlapping and sometimes mutually exclusive requirements. This will stimulate the need for both cross-industry coordination as well as orchestration of many different capabilities across several important technologies.
For more information about this report visit https://www.researchandmarkets.com/r/7zosw9
Research and Markets also offers Custom Research services providing focused, comprehensive and tailored research.
SOURCE Research and Markets
http://www.researchandmarkets.com
This Week in Washington IP: Antitrust in the Ninth Circuit, Shaping Artificial Intelligence and Promoting Security in 5G Networks – IPWatchdog.com
This week in Washington IP events, the House of Representatives remains quiet during district work periods, while the Senate focuses this week on the nomination of Amy Coney Barrett to serve on the U.S. Supreme Court. Various tech-related events will take place at policy institutes this week, including several at the Center for Strategic & International Studies exploring efforts to maintain American leadership in semiconductor manufacturing and innovation in the intelligence community. The Hudson Institute is hosting a virtual event this week to discuss the impacts of the Ninth Circuit's recent decision to overturn Judge Lucy Koh's injunction against Qualcomm's patent licensing practices.
The Hudson Institute
Antitrust in the 21st Century: The Ninth Circuit's Decision in FTC v. Qualcomm
At 12:00 PM on Monday, online video webinar.
In early August, a panel of circuit judges in the U.S. Court of Appeals for the Ninth Circuit issued a unanimous 3-0 decision in favor of Qualcomm in its appeal against Judge Lucy Koh's ruling in favor of the Federal Trade Commission (FTC), which featured an injunction against Qualcomm for its patent licensing practices in the semiconductor industry. While the FTC pursues en banc review of the Ninth Circuit's decision, this event will explore the FTC's chances for success on that petition as well as current guiding principles for those operating at the intersection of intellectual property rights and antitrust law. Speakers at this event will include Judge Paul R. Michel (Ret.), Former Chief Judge, U.S. Court of Appeals for the Federal Circuit; Richard A. Epstein, Professor of Law, New York University, Senior Fellow, Hoover Institution, and Professor Emeritus and Senior Lecturer, University of Chicago; Dina Kallay, Head of Competition (IPR, Americas, and Asia-Pacific), Ericsson; and Urška Petrovčič, Senior Fellow, Hudson Institute.
Center for Strategic & International Studies
American Leadership in Semiconductor Manufacturing
At 2:00 PM on Tuesday, online video webinar.
This June, Representatives Michael McCaul (R-TX) and Doris Matsui (D-CA) introduced H.R. 7178, the Creating Helpful Incentives to Produce Semiconductors (CHIPS) for America Act. If enacted, the bill would create a tax credit for entities investing in semiconductor manufacturing facilities, among other incentives meant to support domestic chipmakers. This event, which will focus on the importance of maintaining dominance in the semiconductor sector in the face of growing challenges from China, will feature a discussion between Rep. McCaul, who is also Co-Chair, House Semiconductor Caucus & Lead Republican, House Foreign Affairs Committee; and James Andrew Lewis, Senior Vice President and Director, Technology Policy Program.
Information Technology & Innovation Foundation
How Will Quantum Computing Shape the Future of AI?
At 9:00 AM on Wednesday, online video webinar.
The power of quantum computing to compute algorithms more quickly than classical computing relies in large part upon the nascent technology's ability to model extremely complex problems, giving quantum computers the ability to create stronger forecasts in sectors where many variables come into play, such as weather predictions. In artificial intelligence (AI), quantum algorithms could be a great boon in solving complex problems like climate forecasts and discovering novel drug compounds, so those nations which can take the lead in quantum computing will also likely have an edge in AI development. This event will feature a discussion with a panel including Hodan Omaar, Policy Analyst, Center for Data Innovation, ITIF; Freeke Heijman, Director, Strategic Development, QuTech Delft; Joseph D. Lykken, Deputy Director of Research, Fermi National Accelerator Laboratory; Markus Pflitsch, Chairman and Founder, Terra Quantum AG; and moderated by Eline Chivot, Senior Policy Analyst, Center for Data Innovation, ITIF.
The Hudson Institute
The Future of American Spectrum Policy: Is DoD's Request for Information the Best Direction?
At 3:00 PM on Wednesday, online video webinar.
In early August, the White House and the U.S. Department of Defense (DoD) announced a plan to devise a spectrum sharing framework that frees up 100 megahertz (MHz) of contiguous mid-band spectrum currently held by the DoD to be auctioned by the Federal Communications Commission (FCC) for supporting the growth of 5G networks across the U.S. A request for information (RFI) issued by the DoD on September 18 seeks innovative solutions for dynamic spectrum sharing that effectively support national security while freeing up additional spectrum for use by the 5G industry. Speakers at this event will include Harold Furchtgott-Roth, Director, Center for the Economics of the Internet; Michael O'Rielly, Commissioner, FCC; Robert McDowell, Former Commissioner, FCC; and Grace Koh, Ambassador and Special Advisor, Bureau of Economic and Business Affairs.
U.S. Patent and Trademark Office
Hear From USPTO Experts at the State Bar of Texas Advanced Intellectual Property Litigation Course
At 9:00 AM on Thursday, online video webinar.
On Thursday morning, the USPTO kicks off a two-day series of intellectual property litigation workshops being offered in partnership with the Intellectual Property Law Section of the State Bar of Texas. USPTO experts speaking at this event will include Molly Kocialski, Director, Rocky Mountain Regional Office; Miriam L. Quinn, Administrative Patent Judge, Patent Trial and Appeal Board; Todd J. Reves, Office of Policy and International Affairs; and Megan Hoyt, Dallas Regional Outreach Officer.
Center for Strategic & International Studies
Innovation in the Intelligence Community
At 3:00 PM on Thursday, online video webinar.
The U.S. intelligence community is careful to maintain secrecy in its operations, but this can come at a cost to that sector's ability to support the development of innovative technologies like quantum computing and artificial intelligence. However, a recent report on the innovation race in the intelligence community, issued by the House Permanent Select Committee on Intelligence's Subcommittee on Strategic Technologies and Advanced Research, provides several recommendations for the intelligence community to support tech development in areas crucial for national security. This event will feature a discussion on the report between Representative Jim Himes (D-CT), Chairman, House Permanent Select Committee on Intelligence's Subcommittee on Strategic Technologies and Advanced Research; and James Andrew Lewis, Senior Vice President and Director, Technology Policy Program.
Center for Strategic & International Studies
Sharpening America's Innovative Edge
At 11:00 AM on Friday, online video webinar.
Although the United States leapt to the forefront of global tech dominance thanks in large part to federal investment in R&D programs, the nation's research funding continues to follow an outdated Cold War-era funding model for research. This event coincides with a report released by CSIS's Trade Commission on Affirming American Leadership which outlines a national strategy for developing important technology sectors so that the U.S. can remain ahead of its global counterparts in those fields. This event will feature a discussion with a panel including Ajay Banga, CEO, Mastercard; Richard Levin, Former President, Yale University; Kavita Shukla, Founder and CEO, The FRESHGLOW Co.; and moderated by Matthew P. Goodman, Senior Vice President for Economics and Simon Chair in Political Economy, CSIS.
The Heritage Foundation
5G: The Emerging Markets Trojan Horse
At 1:00 PM on Friday, online video webinar.
The United States and several governments across Europe have sounded the alarm in recent years over the risks of foreign surveillance of domestic networks enabled by the use of network infrastructure hardware developed by growing Chinese telecom firms like Huawei and ZTE, which have close ties with the Chinese communist government. While these developed nations have taken steps to prevent such issues in the 5G supply chain, governments in Africa and other developing areas of the world are forced to choose between protecting national security and building these crucial networks. Speakers at this event will include Bonnie Glick, Deputy Administrator, United States Agency for International Development; Joshua Meservey, Senior Policy Analyst, Africa and the Middle East; and hosted by Klon Kitchen, Director, Center for Technology Policy.
U.S. Patent and Trademark Office
2020 Patent Public Advisory Committee Annual Report Discussion
At 1:00 PM on Friday, online video webinar.
On Friday afternoon, the USPTO's Patent Public Advisory Committee (PPAC) will convene a meeting to discuss the annual report that the committee will prepare on the agency's policies, performance, and user fees, which will be delivered to the White House and Congress by the end of the fiscal year.
Prime Minister Janez Janša: Digitisation and artificial intelligence will become an essential integral part of the future – Gov.si
Today, Prime Minister Janez Janša gave an opening speech at the third Skills Summit, entitled "Skills Strategies for a World in Recovery". This year's summit, organised by the Organisation for Economic Co-operation and Development (OECD), is held virtually and hosted by Slovenia. The Prime Minister greeted the participants in a video address. In addition to Mr Janša, Deputy Secretary-General of the OECD Ulrik Vestergaard Knudsen also spoke at the opening of the event, which was attended by the Minister of Education, Simona Kustec, the Minister of Labour, Janez Cigler Kralj, the Minister of Health, Tomaž Gantar, and the Minister of Public Administration, Boštjan Koritnik.
The purpose of this year's Skills Summit is to foster a global discussion on how policies and practices, through a comprehensive approach and the cooperation of all stakeholders, can contribute to promoting a culture of lifelong learning and developing lifelong learning systems. This year, the Skills Summit is particularly focused on the recovery and resilience of individuals and systems in the context of the coronavirus epidemic.
In his opening speech, the Prime Minister pointed out that the COVID-19 epidemic revealed the advantages and the obstacles of a digital economy. "We have to learn from this difficult time and act accordingly," noted Mr Janša, adding that the new post-COVID-19 international environment requires us to embrace a pragmatic way of thinking and acting in the economy. He considers that we first need to enable the economic recovery and provide a flexible infrastructure. "Skills are at the centre of the long-term economic recovery and competitiveness of Slovenia, the European Union and the OECD countries," pointed out the Prime Minister, highlighting that it is precisely skills that are crucial for a long-term economic recovery and that we need to hone skills in all areas.
In his opinion, the world is on the verge of a fourth industrial revolution. "If we wish to be successful, we need to provide our workforce and economy with specific soft and hard skills related to digitisation and innovation. Success is not built only by software and hardware specialists, but also by people with knowledge of artificial intelligence, robotics and nanotechnology," he observed. Mr Janša maintained that we also need people with new management capabilities, adding that jobs based on communication skills, social perception, persuasion and negotiation cannot be replaced by robots. "To become successful, to fulfil our potential and to tackle the challenges ahead, we need to foster soft and hard skills alike. The content of the new curricula must be able to satisfy the demands of our new environment," Prime Minister Janez Janša stated clearly.
He also pointed out that numerous OECD studies showed that students lack mathematical skills, skills in reading comprehension, active listening and writing, judgment and decision-making skills, systemic analysis and evaluation skills, and complex problem-solving skills. In his opinion, the aforementioned skills should receive special attention in schools. Furthermore, accounting and business and project management should be incorporated in teaching and curricula.
Continuing his opening speech, the Prime Minister emphasised: "We know not what the future will bring; however, we do know that digitisation and artificial intelligence will be an integral part of it."
Many of the jobs most sought-after today did not exist ten years ago. To make individuals successful and competitive in the future, we must equip them with unconventional interdisciplinary skills covering a broad range of knowledge. In his opinion, what applies to individuals also applies to countries. "The adaptable, the flexible, the creative, the eager learners and the out-of-the-box thinkers will be the winners. The skills of charting unknown waters are valuable and essential for future economic and political well-being," he added.
Concluding the opening address, Prime Minister Janez Janša maintained that this is the policy that Slovenia is pursuing and that every effort will be made for this policy to be followed by the European Union and the OECD countries. "The EU budget and recovery fund are designed to support our efforts. Sufficient funds are available to facilitate urgent investments, to create the economy of the future and to provide individuals with the skills required in the future."
The event at Brdo pri Kranju was also attended, in person or virtually, by State Secretary at the Ministry of Economic Development and Technology, Simon Zajc, Vice President of the European Commission and European Commissioner for Promoting the European Way of Life, Margaritis Schinas, European Commissioner for Jobs and Social Rights, Nicolas Schmit, European Commissioner for Innovation, Research, Culture, Education and Youth, Mariya Gabriel, and Head of the Representation of the European Commission in the Republic of Slovenia, Zoran Stanič.
Artificial intelligence could speed up breast screening in north-east – Aberdeen Evening Express
Artificial intelligence could be used to carry out breast screenings in the north-east, a leading doctor has revealed.
During breast cancer awareness month, which runs for the duration of October, NHS Grampian is canvassing patients for their views on the introduction of pioneering technology.
Specialists believe it could reduce the need for a mammogram (an X-ray image of the breast) to be examined by up to three consultants.
They say that could, in turn, lead to the process speeding up, with faster screenings and test results.
Dr Gerald Lip, clinical director of north-east breast screening at NHS Grampian, said: "In the future, an artificial intelligence computer programme could examine a person's mammogram (an X-ray image of the breast).
"We want to see how our clients would like to see this technology used and are asking for their opinions on several scenarios as part of our survey.
"At the moment two specialists examine the images, and another senior specialist can then take a look at the image as well if they disagree with each other."
The breast screening unit at ARI already has experience of innovation, and last year unveiled the first X-ray integrated biopsy machine to be used in Scotland.
Dr Lip said: "One of the benefits of involving AI initially is it could replace one or both of the specialists at that first screening image examination stage.
"This would, in turn, free up specialists, which would allow us to increase capacity both in patient appointment clinics and across the system, which would ultimately benefit all patients, speed up the whole process and cut waiting times."
Dr Lip said there were a few ways AI could operate within the department in future.
He added: "It could replace one of the specialists at that initial stage. If the specialist and AI flag the image as not quite right or identify it as abnormal, the patient would be invited back to an appointment with a specialist. If the specialist and AI disagreed with each other, a second specialist would examine the image.
"Another scenario we are asking about is that the AI replaces both the specialists at the initial image examination and, if it flags up an abnormality, the patient is invited to an appointment.
"In either of these situations, AI would ultimately lead to faster screening and results for our clients."
The survey on the potential introduction of AI has been opened to patients on the NHS breast screening programme and, depending on the response, could be opened online in the future.
It will examine respondents knowledge of AI to understand how opinions vary in the screening population.
Read more from the original source:
Artificial intelligence could speed up breast screening in north-east - Aberdeen Evening Express
AI is the next national security frontier, but Israel may be losing its edge – Haaretz.com
Developing a national strategy for artificial intelligence, including its ethical aspects, is critical for Israel's future security, a study published last week by the Institute for National Security Studies argued.
"Proper management of the field of artificial intelligence in Israel holds great potential for preserving and improving national security," wrote Dr. Liran Antebi, an INSS research fellow, in the Hebrew-language study, which was prepared with assistance from experts in the high-tech industry, the defense establishment, the government and academia.
Titled "Artificial intelligence and Israeli national security," the study starts from the assumption that AI will eventually be of decisive importance worldwide in terms of both economics and security, especially if predictions that AI's capabilities will someday exceed human ones prove accurate.
"Artificial intelligence will create a new industrial revolution of the greatest scope in history," Antebi wrote. "And this will naturally widen the gaps between countries with high technological capabilities and those that are left behind."
Drones & big data
The study detailed numerous military applications, both extant and future, for AI. One example is autonomous weapons systems like robots and drones that are capable of searching for, identifying and attacking targets independently, with almost no human involvement.
But the revolution won't take place only on the battlefield, the study noted. Other examples include intelligence systems capable of processing vast quantities of video footage to identify targets autonomously; autonomous vehicles; drone swarms; improved logistics systems; cyberwarfare and cyber-defense; planning, decision making, command and control; and brain-machine interfaces (controlling machines and computers via the brain).
Therefore, the study argued, Israel must define AI as a strategic goal. To keep Israel from being left behind, decision makers must become familiar with the field and set policies that will enable it to cope with the enormous competition that is emerging and preserve its competitive advantage.
The study's main recommendation is to draft a national strategy and then set up an agency to manage its implementation, based on a multiyear plan that includes funding allocations.
"A field this important shouldn't be left to market forces," Antebi wrote. "Israel can't allow itself to delay, because a failure in this field may well have serious ramifications."
In recent years, two committees have been established with the goal of developing a national strategy and attendant policies for AI. The first was headed by Orna Berry and the second by Isaac Ben-Israel and Eviatar Matania. But the latter committee's preliminary conclusions were harshly criticized by several government agencies after being reported in the media.
Antebi argued that it's essential to set up an operative agency similar to the National Cyber Directorate, with a special emphasis on integrating AI into the defense establishment.
Many countries primarily China, the United States and some European states have already developed national strategies for AI and allocated funding for them, the study noted. As one example, Antebi cited the Joint Artificial Intelligence Center set up by the U.S. Department of Defense in 2018 to coordinate efforts to develop and apply AI systems.
In 2019, U.S. President Donald Trump signed an executive order for the American AI Initiative, whose goal is to promote AI technology. The Defense Department said that by 2023, it will invest $2 billion in projects in this field.
Haaretz has also reported in the past that the United Arab Emirates is trying to position itself as a leader in this field and has even appointed an AI minister.
National security multiplier
The study argued that Israel must encourage greater integration of AI capabilities in the defense establishment (the army, other security agencies and defense industries) in fields such as cyber, drones and intelligence.
"Israel has a comparative advantage in technological fields, among other things in unmanned vehicles and cyber, which are significant defense fields," it said. "Integrating them with AI as a force multiplier could greatly assist Israel in preserving and augmenting its national security, both through military means and due to its other economic and international ramifications."
Nevertheless, the study warned, the army and the defense establishment are having trouble keeping up with changes in the field. This is primarily due to the small amount of defense funding earmarked specifically for AI, as well as the difficulty of retaining high-quality personnel due to competition from the private sector.
Moreover, Antebi noted, there is bureaucratic resistance within the defense establishment to rapid technological change, a problem typical of many large organizations. This is evident in its reliance on legacy systems that it has used for many years. Such systems are very hard to replace.
She therefore recommended creating structural models that will enable the defense establishment to keep up with the pace of change, something that will require the system to be more flexible.
She also advised investing in training designated personnel and allocating funding to incentivize talented people to stay in the army. In addition, she wrote, it's necessary to train people who aren't high-tech experts to ensure that the spine of the army's chain of command is acquainted with AI.
A competitive advantage
Antebi's study poses a challenge that may come as a surprise to many people. The defense establishment, she wrote, "does almost no independent research and development that creates a basis for future capabilities. Instead, it relies on technology developed by commercial companies and academia."
Consequently, she recommended that the defense establishment invest more resources in basic research in general, and particularly in research and development in areas of AI where Israel already has a competitive advantage, like drones and cyber.
Another recommendation was to set up an orderly system for monitoring and analyzing the progress different players have made on AI and encouraging information sharing within the defense establishment.
The prospect of AI being integrated into the defense establishment naturally rouses quite a few fears. We're all familiar with the horrific scenarios from science fiction films: smart systems that get out of control and do things for their own reasons. That isn't likely to happen anytime soon, but the study did warn against integrating AI into the army too quickly, without any ability to understand the system and the factors that lead it to make decisions.
For instance, albeit in the context of the police rather than the army, it has become clear in recent years that existing facial recognition technologies discriminate against minorities. Very troubling scenarios obviously arise if smart weapons or intelligence systems were to be encoded with the same biases against minorities.
The study also discussed the moral dilemmas inherent in war in this regard. It noted that some people believe AI would be able to make better and more accurate decisions during combat, since it wouldnt be influenced by fear, fatigue or other emotions (like hatred) that affect human beings. But others argue that without human emotions, its impossible to make proper moral decisions about the use of the military, such as refraining from attacking civilian targets and not employing disproportionate force against the enemy.
Therefore, the study recommended that standards and supervisory mechanisms be based on ensuring safety, i.e., on ensuring that any AI tools developed comply with existing norms and principles. This should include drafting a code of ethics for the defense establishment's use of AI, Antebi said.
She also recommended focusing AI research and development on tools that assist people rather than replace them, until the safety and reliability of the technology has been proven. Finally, she advised that the administrative and legal questions arising from the use of AI systems be addressed as soon as possible.
See more here:
AI is the next national security frontier, but Israel may be losing its edge - Haaretz.com
Four Practical Applications Of Artificial Intelligence And 5G – Forbes
Pixabay
It is no secret that artificial intelligence (AI) has become a technical marketing whitewash. Many companies claim that their algorithms and data scientists enable a differentiated approach in the networking infrastructure space. However, what are the practical applications of AI for connectivity and, in particular, 5G? From my perspective, they fall into four key areas. Here I will provide my insights into each and highlight what I believe is the practical functionality for operators, subscribers and equipment providers.
Smart automation
Automation is all about reducing human error and improving network performance and uptime through activities such as low- to no-touch device configuration, provisioning, orchestration, monitoring, assurance and reactive issue resolution. AI promises to deliver the "smarts" in analyzing the tasks above, steering networking toward a more closed-loop process. Pairing all of this with 5G should help mobile service providers offer simpler activations, higher performance and the rapid deployment of new services. The result should be higher average revenue per subscriber (ARPU) for operators and, for subscribers, a more reliable connection and better user experience.
Predictive remediation
Over time, I believe AI will evolve to enable network operators to move from reactive to proactive issue resolution. They will be able to evaluate large volumes of data for anomalies and make course corrections before issues arise. 5G should enable networks to better handle the complexity of these predictive functions and support significantly more connected devices. We're beginning to see AI-powered predictive remediation applied to the enterprise networking sector with positive results, via some tier-one carriers and 5G infrastructure providers such as Ericsson. In my opinion, one of the most significant impacts of AI in mobile networks will be the reduction of subscriber churn. That is a huge consideration: carriers are spending billions of dollars building fixed and mobile 5G networks. They must be able to add and retain customers.
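At its core, predictive remediation rests on spotting anomalies in network telemetry before they become outages. As a minimal sketch of that idea (not any carrier's actual system; the latency readings below are invented for illustration), a simple z-score check over a stream of metrics looks like this:

```python
import statistics

def detect_anomalies(samples, threshold=2.0):
    """Flag readings whose z-score exceeds the threshold, i.e. values
    that deviate strongly from the mean of the observed window."""
    mean = statistics.mean(samples)
    stdev = statistics.stdev(samples)
    if stdev == 0:
        return []
    return [(i, x) for i, x in enumerate(samples)
            if abs(x - mean) / stdev > threshold]

# Hypothetical per-minute latency readings (ms) from one cell site.
latency = [12.1, 11.8, 12.3, 12.0, 11.9, 48.7, 12.2, 12.1]
print(detect_anomalies(latency))  # -> [(5, 48.7)]
```

A production AI system would replace the z-score with learned models and trigger remediation workflows automatically, but the reactive-to-proactive shift described above starts with exactly this kind of deviation-from-baseline check.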
Digital transformation acceleration
One of the pandemic's silver linings is the acceleration, out of necessity, of businesses' digital transformation. The distributed nature of work from home has put tremendous pressure on corporate and mobile networks from a scalability, reliability and security perspective. Many connectivity infrastructure providers are embracing AIOps for its potential to supercharge DevOps and SecOps. AI will also help operators better manage the lifecycle of 5G deployments from a planning, deployment, ongoing operations and maintenance perspective. For example, China Unicom leveraged AI to transform how it internally manages operations and how it interfaces with partners and customers. In 2019, the operator reported a 30% reduction in time to product delivery and a 60% increase in productivity for leased line activations.
Enhanced user experiences
The combination of AI and 5G will unlock transformative user experiences across consumer and enterprise market segments. I expanded on this topic in my Mobile World Congress 2019 analysis, which you can find here if interested. At a high level, AI has the potential to reduce the number of subscriber service choices, presenting the most relevant ones based on past behavior. I believe the result will be higher subscriber loyalty and operator monetization.
Wrapping Up
Though AI is hyped all around, there is particular synergy with 5G. Mobile networks are no longer just a "dumb pipe" for data access. AI can improve new device provisioning, deliver high application and connectivity performance, accelerate digital transformation and provide exceptional user experiences. For service providers, I also believe AI and 5G will result in operational expense savings and drive incremental investment in new service delivery. In my mind, that is a win-win for subscribers, operators, and infrastructure providers alike.
Disclosure: My firm, Moor Insights & Strategy, like all research and analyst firms, provides or has provided research, analysis, advising, and/or consulting to many high-tech companies in the industry, including Ericsson. I do not hold any equity positions with any companies cited in this column.
See the original post:
Four Practical Applications Of Artificial Intelligence And 5G - Forbes
What Is GPT-3 And Why Is It Revolutionizing Artificial Intelligence? – Forbes
There's been a great deal of hype and excitement in the artificial intelligence (AI) world around a newly developed technology known as GPT-3. Put simply, it's an AI that is better at creating content that has a language structure (human or machine language) than anything that has come before it.
GPT-3 has been created by OpenAI, a research business co-founded by Elon Musk, and has been described as the most important and useful advance in AI for years.
But there's some confusion over exactly what it does (and indeed doesn't do), so here I will try to break it down into simple terms for any non-techy readers interested in understanding the fundamental principles behind it. I'll also cover some of the problems it raises, as well as why some people think its significance has been somewhat overinflated by hype.
What is GPT-3?
Starting with the very basics, GPT-3 stands for Generative Pre-trained Transformer 3; it's the third version of the tool to be released.
In short, this means that it generates text using algorithms that are pre-trained: they've already been fed all of the data they need to carry out their task. Specifically, they've been fed around 570 GB of text information gathered by crawling the internet (a publicly available dataset known as CommonCrawl) along with other texts selected by OpenAI, including the text of Wikipedia.
If you ask it a question, you would expect the most useful response would be an answer. If you ask it to carry out a task such as creating a summary or writing a poem, you will get a summary or a poem.
More technically, it has also been described as the largest artificial neural network ever created; I will cover that further down.
What can GPT-3 do?
GPT-3 can create anything that has a language structure, which means it can answer questions, write essays, summarize long texts, translate languages, take memos and even create computer code.
In fact, in one demo available online, it is shown creating an app that looks and functions similarly to the Instagram application, using a plugin for the software tool Figma, which is widely used for app design.
This is, of course, pretty revolutionary, and if it proves to be usable and useful in the long-term, it could have huge implications for the way software and apps are developed in the future.
As the code itself isn't available to the public yet (more on that later), access is only available to selected developers through an API maintained by OpenAI. Since the API was made available in June this year, examples have emerged of poetry, prose, news reports, and creative fiction.
This article is particularly interesting: in it you can see GPT-3 making a quite persuasive attempt at convincing us humans that it doesn't mean any harm, although its robotic honesty means it is forced to admit that "I know that I will not be able to avoid destroying humankind" if evil people make it do so.
How does GPT-3 work?
In terms of where it fits within the general categories of AI applications, GPT-3 is a language prediction model. This means that it is an algorithmic structure designed to take one piece of language (an input) and transform it into what it predicts is the most useful following piece of language for the user.
It can do this thanks to the training analysis it has carried out on the vast body of text used to pre-train it. Unlike other algorithms that, in their raw state, have not been trained, OpenAI has already expended the huge amount of compute resources necessary for GPT-3 to understand how languages work and are structured. The compute time necessary to achieve this is said to have cost OpenAI $4.6 million.
To learn how to build language constructs, such as sentences, it employs semantic analytics - studying not just the words and their meanings, but also gathering an understanding of how the usage of words differs depending on other words also used in the text.
It's also a form of machine learning termed unsupervised learning, because the training data does not include any information on what is a "right" or "wrong" response, as is the case with supervised learning. All of the information it needs to calculate the probability that its output will be what the user needs is gathered from the training texts themselves.
This is done by studying the usage of words and sentences, then taking them apart and attempting to rebuild them itself.
For example, during training, the algorithms may encounter the phrase "the house has a red door." It is then given the phrase again, but with a word missing, such as "the house has a red X."
It then scans all of the text in its training data (hundreds of billions of words, arranged into meaningful language) and determines what word it should use to recreate the original phrase.
To start with, it will probably get it wrong, potentially millions of times. But eventually, it will come up with the right word. By checking its original input data, it will know it has the correct output, and weight is assigned to the algorithmic process that provided the correct answer. This means that it gradually learns which methods are most likely to come up with the correct response in the future.
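That fill-in-the-blank training idea can be caricatured in a few lines. The sketch below uses an invented four-sentence corpus and simple position-based counting in place of GPT-3's actual transformer network, so it only illustrates the shape of the task, not the real mechanism:

```python
from collections import Counter

# Toy stand-in for GPT-3's training text (the real model saw
# hundreds of billions of words).
corpus = [
    "the house has a red door",
    "the barn has a red roof",
    "the house has a blue door",
    "the shed has a red door",
]

def predict_masked(sentence_with_blank, corpus):
    """Guess the word marked 'X' by counting which words appear in
    the same position in training sentences whose other words match."""
    target = sentence_with_blank.split()
    blank = target.index("X")
    candidates = Counter()
    for line in corpus:
        words = line.split()
        if len(words) != len(target):
            continue
        # Count a candidate when every non-blank position matches.
        if all(w == t for i, (w, t) in enumerate(zip(words, target)) if i != blank):
            candidates[words[blank]] += 1
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_masked("the house has a red X", corpus))  # -> door
```

Where this toy merely counts exact matches, GPT-3 generalizes: it adjusts billions of numeric weights so that similar but unseen contexts also yield sensible completions.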
The scale of this dynamic "weighting" process is what makes GPT-3 the largest artificial neural network ever created. It has been pointed out that in some ways, what it does is nothing that new, as transformer models of language prediction have been around for many years. However, the number of weights the algorithm dynamically holds in its memory and uses to process each query is 175 billion, ten times more than its closest rival, produced by Nvidia.
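The notion of "assigning weight to the process that provided the correct answer" can be shown with a single weight. GPT-3 holds 175 billion of them and updates them via backpropagation through a transformer, so the one-parameter toy below (with made-up example pairs) conveys only the bare idea of nudging a weight toward known-correct outputs:

```python
def train_weight(pairs, lr=0.1, epochs=50):
    """Repeatedly nudge one weight so that w * x moves toward the
    known-correct target: a one-parameter caricature of training."""
    w = 0.0
    for _ in range(epochs):
        for x, target in pairs:
            prediction = w * x
            # Shift the weight in the direction that would have
            # produced the correct answer for this example.
            w += lr * (target - prediction) * x
    return w

# Learn the rule y = 2x from three worked examples.
print(round(train_weight([(1, 2), (2, 4), (3, 6)]), 2))  # -> 2.0
```

After each pass, the weight that best reproduced the training examples is reinforced; scaled up by eleven orders of magnitude, that is what the "dynamic weighting" above refers to.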
What are some of the problems with GPT-3?
GPT-3's ability to produce language has been hailed as the best that has yet been seen in AI; however, there are some important considerations.
The CEO of OpenAI himself, Sam Altman, has said, "The GPT-3 Hype is too much. AI is going to change the world, but GPT-3 is just an early glimpse."
Firstly, it is a hugely expensive tool to use right now, due to the huge amount of compute power needed to carry out its function. This means the cost of using it would be beyond the budget of smaller organizations.
Secondly, it is a closed or black-box system. OpenAI has not revealed the full details of how its algorithms work, so anyone relying on it to answer questions or create products useful to them would not, as things stand, be entirely sure how they had been created.
Thirdly, the output of the system is still not perfect. While it can handle tasks such as creating short texts or basic applications, its output becomes less useful (in fact, described as "gibberish") when it is asked to produce something longer or more complex.
These are clearly issues that we can expect to be addressed over time as compute power continues to drop in price, standardization around openness of AI platforms is established, and algorithms are fine-tuned with increasing volumes of data.
All in all, it's a fair conclusion that GPT-3 produces results that are leaps and bounds ahead of what we have seen previously. Anyone who has seen the results of AI language generation knows the results can be variable, and GPT-3's output undeniably seems like a step forward. When we see it properly in the hands of the public and available to everyone, its performance should become even more impressive.
View original post here:
What Is GPT-3 And Why Is It Revolutionizing Artificial Intelligence? - Forbes