Category Archives: Artificial Intelligence
8 Artificial Intelligence, Machine Learning and Cloud Predictions To Watch in 2020 – Irish Tech News
Artificial Intelligence, Machine Learning and Cloud Predictions by Jerry Kurata and Barry Luijbregts, Pluralsight. In this article, they share their predictions for the ways that AI, ML and the cloud will be used differently in 2020 and beyond.
This decade has seen a seismic shift in the role of technology, at work and at home. Just ten years ago, technology was a specialist discipline in the workplace, governed by experts. At home things were relatively limited and tech was more in the background. Today technology is at the centre of how everyone works, lives, learns and plays. This prominence is shifting the way we think about, use and interact with technology, and the expectations we have for it, and we wanted to share some reflections and predictions for the year ahead.
AI: Jerry Kurata
Increased User Expectations
As users experience assistants like Alexa and Siri, and cars that drive themselves, expectations of what applications can do have greatly increased. And these expectations will continue to grow in 2020 and beyond. Users expect a store's website or app to be able to identify a picture of an item and guide them to where the item, and accessories for it, are in the store. These expectations extend to business users of the information too, such as a restaurant owner.
This owner should rightfully expect the website built for them to help their business by keeping the site fresh. The site should drive business to the restaurant by determining the sentiment of reviews and automatically displaying the most positive recent reviews on the restaurant's front page.
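The review-ranking idea described above can be pictured in a few lines. The sketch below is purely illustrative (the word lists and function names are invented for the example); a production site would use a trained sentiment model rather than a hand-built lexicon:

```python
# Illustrative only: rank restaurant reviews by a naive lexicon-based
# sentiment score and surface the most positive ones.
POSITIVE = {"great", "delicious", "friendly", "amazing", "fresh"}
NEGATIVE = {"slow", "cold", "rude", "bland", "disappointing"}

def sentiment_score(review: str) -> int:
    # Count positive words minus negative words.
    words = review.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def top_reviews(reviews: list[str], n: int = 2) -> list[str]:
    # Return the n highest-scoring reviews.
    return sorted(reviews, key=sentiment_score, reverse=True)[:n]

reviews = [
    "Delicious food and friendly staff",
    "Service was slow and the soup was cold",
    "Amazing fresh ingredients",
]
print(top_reviews(reviews))
```

A real system would also weight recency, as the authors suggest, before choosing which reviews to display.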
AI/ML will go small scale
We can expect to see more AI/ML on smaller platforms, from phones to IoT devices. The hardware needed to run AI/ML solutions is shrinking in size and power requirements, making it possible to bring the power and intelligence of AI/ML to smaller and smaller devices. This is enabling new classes of intelligent applications and devices that can be deployed everywhere.
AI/ML will expand the cloud
In the race for the cloud market, the major providers (Amazon AWS, Microsoft Azure, Google Cloud) are doubling down on their AI/ML offerings. Prices are decreasing, and the number and power of services available in the cloud are ever increasing. In addition, the number of low cost or free cloud-based facilities and compute engines for AI/ML developers and researchers are increasing.
This removes many of the hardware barriers that have prevented developers in smaller companies, or in locales with limited infrastructure, from building advanced ML models and AI applications.
AI/ML will become easier to use
As AI/ML is getting more powerful, it is becoming easier to use. Pre-trained models that perform tasks such as language translation, sentiment classification, object detection, and others are becoming readily available. And with minimal coding, these can be incorporated into applications and retrained to solve specific problems. This makes it possible, for example, to build an English-to-Swahili translator quickly by taking a pre-trained translation model and passing it sets of equivalent phrases in the two languages.
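As a toy illustration of the retraining idea (not a real translation API; the phrase table and function names are invented), a pre-trained model can be thought of as existing knowledge that gets extended with task-specific phrase pairs. A real system would fine-tune a neural model's weights rather than update a lookup table:

```python
# Toy illustration: a "pre-trained" phrase table extended with new
# English-Swahili phrase pairs, standing in for model fine-tuning.
pretrained = {"hello": "habari", "thank you": "asante"}

def retrain(model: dict, phrase_pairs: list[tuple[str, str]]) -> dict:
    updated = dict(model)          # keep the pre-trained knowledge
    updated.update(phrase_pairs)   # add the task-specific phrase pairs
    return updated

def translate(model: dict, phrase: str) -> str:
    return model.get(phrase.lower(), "<unknown>")

model = retrain(pretrained, [("good morning", "habari za asubuhi")])
print(translate(model, "good morning"))  # habari za asubuhi
```

The point of the sketch is the workflow: start from broad pre-trained knowledge, then specialize it cheaply with a small amount of domain data.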
There will be greater need for AI/ML education
To keep up with these trends, education in AI and ML is critical. And the need for education extends beyond people developing AI/ML applications to C-suite execs, product managers, and other management personnel. All must understand what AI and ML technologies can do, and where their limits lie. The level of AI/ML knowledge required is, of course, even greater for people involved in creating products.
Regardless of whether they are a web developer, database specialist, or infrastructure analyst, they need to know how to incorporate AI and ML into the products and services they create.
Cloud: Barry Luijbregts
Cloud investment will increase
In 2019, more companies than ever adopted cloud computing and increased their investment in the cloud. In 2020, this trend will likely continue. More companies will see the benefits of the cloud and realize that they could never get the same security, performance and availability gains themselves. This new adoption, together with increased economies of scale, will lower prices for cloud storage and services even further.
Cloud will provide easier to use services
Additionally, 2020 will be the year when the major cloud providers offer more, and easier-to-use, AI services. These will provide drag-and-drop modelling features and more out-of-the-box, pre-trained models to make adoption and usage of AI accessible to the average developer.
Cloud will tackle more specific problems
On top of that, in 2020, the major cloud vendors will likely start providing solutions that tackle specific problems, in areas like climate change and self-driving vehicles. These new solutions can be implemented without much technical expertise and will have a major impact on these problem areas.
Looking further ahead
As we enter a new decade, we are on the cusp of another revolution, as we take our relationship with technology to the next level. Companies will continue to devote ever larger budgets to deploying the latest developments, as AI, machine learning and the cloud become integral to the successful running of any business, no matter the sector.
There have been murmurings that this increase in investment will have an impact on jobs. However, if the right technology is rolled out in the right way, it will only ever complement the human skillset, as opposed to replacing it. We have a crucial role to play in the overall process, and our relationship with technology must always remain as intended: a partnership.
Jerry Kurata and Barry Luijbregts are expert authors at Pluralsight and teach courses on topics including Artificial Intelligence (AI) and machine learning (ML), big data, computer science and the cloud. In recent years, both have seen first-hand the development of these technologies, the different tools that organisations are investing in and the changing ways they are used.
Tip: Seven recommendations for introducing artificial intelligence to your newsroom – Journalism.co.uk
Artificial intelligence is now commonly used in journalism for anything from combing through large datasets to writing stories.
To help you prepare for the future, the Journalism AI team at Polis, London School of Economics and Political Science (LSE), put together a training module on seven things to consider before adopting AI in your news organisation.
"Keep in mind that this is not a manual for implementation," writes professor Charlie Beckett, who leads Journalism AI.
"The recommendations will help you reflect on your newsroom's AI-readiness, but they won't tell you how to design a strategy. We link to more resources that might help you with that, and we hope to produce more training resources ourselves in the near future."
For more insights into the Journalism AI report, you can watch this three-minute video, as well as Charlie Beckett's presentation of the report at its launch event.
LTTE: It’s important to know of weaponized artificial intelligence – Rocky Mountain Collegian
Editor's Note: All opinion section content reflects the views of the individual author only and does not represent a stance taken by The Collegian or its editorial board. Letters to the Editor reflect the view of a member of the campus community and are submitted to the publication for approval.
To the Editor,
I am writing this essay to bring awareness and recognition to a fast-approaching topic in the field of military technology: weaponized artificial intelligence.
Weaponized AI is any military technology that operates off a computer system that makes its own decisions. Simply put, anything that automatically decides a course of action against an enemy without human control would fall under this definition.
Weaponized AI is a perfect example of a sci-fi idea that has found its way into the real world and is not yet completely understood. This said, weaponized AI places global security at risk and must be recognized by institutions like Colorado State University before it becomes widely deployed on the battlefield.
Nations are constantly racing to deploy the next best weapon as it is developed. AI is no exception. Currently, AI is responsible for one of the largest technology competitions since that of nuclear weapons during the Cold War. At the top of this competition are China and the United States.
With little to no international restriction on the deployment of AI weaponry, a modern arms race will continue to develop, creating tensions between world powers as each fears that the opposing side will reach the perfect AI weapon first.
The other inherent danger is the gap that is being created between advanced world powers and countries who are incapable of developing such technology. The tendency for global conflict to occur between these nations increases, as powers that wield weaponized AI have a distinct edge over countries that do not employ AI. This allows room for misuse of this power given the lack of international regulations on using this tech.
Going further, my studies have shown that this technology poses considerable risk to international human rights laws. In its current state, weaponized AI is found to be unreliable in doing what it is intended to do. As an example, Project Maven, a current AI used by the United States, only identifies military threats using complex algorithms.
While this seems harmless, the direction in which the world is taking this technology is not. What would happen if this technology's unreliability cost innocent lives due to a targeting error that AIs like Project Maven are prone to making? Likewise, who would take responsibility for the actions of a machine?
What we have is a blurring of moral boundaries as we come closer to allowing this technology to determine who is a true threat. These kinds of errors cannot be tolerated by the rules of modern warfare.
A final obstacle surrounding AI is the United Nations' inability to come to a consensus on its use. Researcher Eugenio Garcia of the United Nations stated, "Advanced military powers remain circumspect (guarded) about introducing severe restrictions on the use of these technologies."
Although people easily recognize the dangers that AI poses to national security, countries are not willing to restrict its development. Furthermore, with minimal current legislation addressing the technology's unreliability, weaponized AI will move further than we can control.
While I make these claims, one must recognize that the technology does offer the benefit of removing soldiers from the battlefield. However, nations around the world are not monitoring this rising issue.
Colorado State University, as a tier-one research institution with investment in military technology, should be the institution that steps up to the plate and recognizes catastrophe before it happens. These threats to global security may not be present now, but if we do not advocate for international legislation, these dangers will become reality.
Sincerely,
Thomas Marshall
Third-year mechanical engineering student at CSU
Working under Azer Yalin as an undergraduate research assistant exploring Air Force technology
The Collegian's opinion desk can be reached at letters@collegian.com. To submit a letter to the editor, please follow the guidelines at collegian.com.
Gympass Launches Lisbon Technology Hub With the Acquisition of Flaner, Emerging Artificial Intelligence Leader – Bonner County Daily Bee
Iktos and Almirall Announce Research Collaboration in Artificial Intelligence for New Drug Design – Business Wire
PARIS--(BUSINESS WIRE)--Iktos, a company specialized in Artificial Intelligence for novel drug design, and Almirall, S.A. (ALM), a leading skin-health focused global pharmaceutical company, today announced a collaboration agreement in Artificial Intelligence (AI), under which Iktos' generative modelling technology will be used to design novel optimized compounds to speed up the identification of promising drug candidates for undisclosed Almirall drug discovery program(s).
Iktos' AI technology, based on deep generative models, helps bring speed and efficiency to the drug discovery process by automatically designing virtual novel molecules that have all the desirable characteristics of a novel drug candidate. This tackles one of the key challenges in drug design: the rapid and iterative identification of molecules which simultaneously satisfy multiple bioactive attributes and drug-like criteria for clinical testing.
"This partnership is an example of how we intend to explore the enormous possibilities offered by technology to find new molecules and to speed up clinical development," said Dr. Bhushan Hardas, Executive Vice President R&D, Chief Scientific Officer of Almirall. "The health sector lags behind others in the digital world. Almirall wants to be at the forefront of innovation to develop holistic and transversal approaches. Artificial Intelligence will provide Almirall a unique opportunity to combine our proficiency with the preciseness and celerity to truly make a difference in patients' lives."
"We are thrilled to initiate a new research collaboration with Almirall," commented Yann Gaston-Mathé, President and CEO of Iktos. "This new collaboration is further testimony to the leadership position that Iktos has developed in the field of AI for de novo drug design, in little more than two years of existence. We are eager to demonstrate to our collaborators the power of Iktos' technology to accelerate their research, and to get the opportunity to improve further by confronting our approach with a new use case, consistent with our strategy of proving our value in real-life projects."
Iktos has recently announced several collaborations with biopharmaceutical companies in which its AI technology is used to accelerate the design of promising compounds, and has published, at the EFMC 2018 meeting, an experimental validation of the technology in a real-life drug discovery project. Iktos' generative modelling SaaS software, Makya, is now available on the market, and Iktos intends to release its retrosynthesis SaaS platform, Spaya, as a beta version before the end of 2019.
About Iktos
Incorporated in October 2016, Iktos is a French start-up company specialized in the development of artificial intelligence solutions applied to chemical research, more specifically medicinal chemistry and new drug design. Iktos is developing a proprietary and innovative solution based on deep learning generative models, which, using existing data, makes it possible to design molecules that are optimized in silico to meet all the success criteria of a small-molecule discovery project. The use of Iktos technology enables major productivity gains in upstream pharmaceutical R&D. Iktos offers its technology both as professional services and as a SaaS software platform, Makya.
About Almirall
Almirall is a leading skin-health focused global pharmaceutical company that partners with healthcare professionals, applying Science to provide medical solutions to patients and future generations. Our efforts are focused on fighting against skin health diseases and helping people feel and look their best. We support healthcare professionals by continuous improvement, bringing our innovative solutions where they are needed.
The company, founded almost 75 years ago with headquarters in Barcelona, is listed on the Spanish Stock Exchange (ticker: ALM). Almirall has been key in creating value for society in line with its commitment to its major stakeholders, through its decision to help others, to understand their challenges, and to use Science to provide solutions for real life. Total revenues in 2018 were 811 million euros. More than 1,800 employees are devoted to Science.
For more information, please visit almirall.com
Artificial Intelligence to be Used for Charting, Intel Collection – Department of Defense
Nautical, terrain and aeronautical charting is vital to the Defense Department mission. This job, along with collecting intelligence, falls to the National Geospatial-Intelligence Agency.
Two senior DOD officials think that artificial intelligence will aid NGA's mission.
Mark D. Andress, NGA's chief information officer, and Nand Mulchandani, chief technology officer of DOD's Joint Artificial Intelligence Center, spoke yesterday at the AFCEA International NOVA-sponsored 18th Annual Air Force Information Technology Day in Washington.
The reason charts are so vital is that they enable safe and precise navigation, Andress said. They are also used for such things as enemy surveillance and targeting, as well as precision navigation and timing.
This effort involves a lot of data collection and analysis, which is processed and shared through the unclassified, secret or top secret networks, he said, noting that AI could assist them in this effort.
The AI piece would involve writing smart algorithms that could assist data analysts and support leaders' decision-making, Andress said.
He added that the value of AI is that it will give analysts more time to think critically and advise policymakers while AI processes lower-order analysis that humans now do.
There are several challenges to bringing AI into NGA, he observed.
One challenge is that networks handle a large volume of data that includes text, photos and livestream. The video streaming piece is especially challenging for AI because it's so complex, he said.
Andress used the example of an airman using positioning, navigation and timing, flying over difficult terrain at great speed and targeting an enemy. "An algorithm used for AI decision making that is 74% efficient is not one that will be put into production to certify geolocation, because that's not good enough," he said.
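The certification bar Andress describes amounts to a simple gate: a model's measured accuracy must clear a mission-specific threshold before it is promoted to production. A hypothetical sketch (the threshold value and function name are invented for illustration):

```python
# Hypothetical certification gate: promote a model only if its measured
# accuracy clears the threshold set for the mission.
def certify(model_name: str, accuracy: float, threshold: float = 0.99) -> bool:
    """Return True only if the model meets the mission's accuracy bar."""
    return accuracy >= threshold

# A 74%-efficient algorithm, as in Andress' example, would be rejected.
print(certify("geolocation-model", 0.74))   # False
print(certify("geolocation-model", 0.995))  # True
```

The point is that the acceptable threshold depends on the stakes of the mission: a bar that is fine for recommending movies is nowhere near good enough for certifying geolocation.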
Another problem area is that NGA inherited a large network architecture from other agencies that merged into NGA, including organizations of the former Defense Mapping Agency.
The networks of these organizations were created in the 1990s and are vertically designed, he said, meaning they are not easily interconnected. That would prove a challenge, because AI would need to process information from all of these networks to be useful.
Next, all of these networks need to continuously run since DOD operates worldwide 24/7, he said. Pausing the network to test AI would be disruptive.
Therefore, Andress said AI prototype testing is done in pilots in isolated network environments.
However, the problem in doing the testing in isolation is the environments don't represent the real world they'll be used in, he said.
Nonetheless, the testing, in partnership with industry, has been useful in revealing holes and problems that might prevent AI scalability.
Lastly, the acceptance of AI will require a cultural shift in the agency. NGA personnel need to be able to trust the algorithms. He said pilots and experimentation will help them gain that trust and confidence.
To sum up, Andress said AI will eventually become a useful tool for NGA, but incorporating it will take time. He said the JAIC will play a central role in helping the agency get there.
Mulchandani said the JAIC was set up last year to be DOD's coordinating center to help scale AI.
Using AI for things like health records and personnel matters is a lot easier than writing algorithms for things that NGA does, he admitted, adding that eventually it will get done.
Mulchandani said last year, when he came to DOD from Silicon Valley, the biggest shock was having funding for work one day and then getting funding pulled the next due to continuing resolutions. He said legislators need to fix that so that AI projects that are vital to national security are not disrupted.
Tech experts agree it's time to regulate artificial intelligence, if only it were that simple – GeekWire
AI2 CEO Oren Etzioni speaks at the Technology Alliance's AI Policy Matters Summit. (GeekWire Photo / Monica Nickelsburg)
Artificial intelligence is here, it's just the beginning, and it's time to start thinking about how to regulate it.
Those were the takeaways from the Technology Alliance's AI Policy Matters Summit, a Seattle event that convened experts and government officials for a conversation about artificial intelligence. Many of those experts agreed that the government should start establishing guardrails to defend against malicious or negligent uses of artificial intelligence. But determining what shape those regulations should take is no easy feat.
"It's not even clear what the difference is between AI and software," said Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence, on stage at the event. "Where does something cease to be a software program and become an AI program? Google, is that an AI program? It uses a lot of AI in it. Or is Google software? How about Netflix recommendations? Should we regulate that? These are very tricky topics."
Regulations written now will also have to be nimble enough to keep up with the evolving technology, according to Heather Redman, co-founder of the venture capital firm Flying Fish Ventures.
"We've got a 30-40 year technology arc here and we're probably in year five, so we can't do a regulation that is going to fix it today," she said during the event. "We have to make it better and go to the next level next year and the next level the year after that."
With those challenges in mind, Etzioni and Redman recommend regulations that are tied to specific use cases of artificial intelligence, rather than broad rules for the technology. Laws should be targeted to areas like AI-enabled weapons and autonomous vehicles, they said.
"My suggestion was to identify particular applications and regulate those using existing regulatory regimes and agencies," Etzioni said. "That both allows us to move faster and also be more targeted in our application of regulations, using a scalpel rather than a sledgehammer."
He believes the rules should include a mandatory kill switch on all AI programs and requirements that AI notify users when they are not interacting with a human. Etzioni also stressed the importance of humans taking responsibility for autonomous systems, though it isn't clear whether the manufacturer or the user of the technology will be liable.
"Let's say my car ran somebody over," he said. "I shouldn't be able to say my dog ate my homework. Hey, I didn't do it, it was my AI car. It's an autonomous vehicle. We have to take responsibility for our technology. We have to be liable for it."
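Etzioni's "mandatory kill switch" can be pictured as a wrapper that refuses to invoke the underlying AI routine once a human operator has flipped the switch. A minimal sketch, with invented class and function names (this is not any specific product's design):

```python
# Minimal sketch of a kill-switch wrapper around an AI routine.
class KillSwitch:
    def __init__(self):
        self.engaged = False

    def engage(self):
        # A human operator flips the switch; it stays engaged.
        self.engaged = True

def run_with_kill_switch(switch: KillSwitch, ai_step, *args):
    # Refuse to run the wrapped routine once the switch is engaged.
    if switch.engaged:
        raise RuntimeError("kill switch engaged: AI halted")
    return ai_step(*args)

switch = KillSwitch()
print(run_with_kill_switch(switch, lambda x: x * 2, 21))  # 42
switch.engage()
# run_with_kill_switch(switch, lambda x: x * 2, 21) would now raise
```

The design question the article raises remains: who is accountable when the switch is never thrown, the manufacturer or the user.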
Redman also sees the coming tide of AI regulation as a business opportunity for startups seeking to break into the industry. Her venture capital firm is inundated with startups pitching an "AI and ML first" approach, but Redman said there are two other related fields, or "stacks" as she describes them, that companies should be exploring.
"If you talk to somebody on Wall Street, they don't care what tech stack they're running their trading on; they're looking at new evolutions in law and policy as big opportunities to build new businesses, or things that will kill existing businesses," she said.
"From a startup perspective, if you're not thinking about the law and policy stack as much as you're thinking about the tech stack, you're making a mistake," Redman added.
But progress toward a regulatory framework has been slow at the local and federal level. In the last legislative session, Washington state almost became one of the first to regulate facial recognition, the controversial technology that is pushing the artificial intelligence debate forward. But the bill died in the state House. Lawmakers plan to introduce data privacy and facial recognition bills again next session.
Redman said she's disappointed Washington state wasn't a first-mover on AI regulation, because the state is home to two of the tech giants consumers trust most with their data: Amazon and Microsoft. Amazon is in the political hot seat along with many of its tech industry peers, but the Seattle tech giant has not been implicated in the types of data privacy scandals plaguing Facebook.
"We are the home of trusted tech," Redman said, "and we need to lead on the regulatory frameworks for tech."
Artificial Intelligence Isn’t an Arms Race With China, and the United States Shouldn’t Treat It Like One – Foreign Policy
At the last Democratic presidential debate, the technologist candidate Andrew Yang emphatically declared that "we're in the process of potentially losing the AI arms race to China right now." As evidence, he cited Beijing's access to vast amounts of data and its substantial investment in research and development for artificial intelligence. Yang and others (most notably the National Security Commission on Artificial Intelligence, which released its interim report to Congress last month) are right about China's current strengths in developing AI and the serious concerns this should raise in the United States. But framing advances in the field as an arms race is both wrong and counterproductive. Instead, while being clear-eyed about China's aggressive pursuit of AI for military use and human-rights-abusing technological surveillance, the United States and China must find their way to dialogue and cooperation on AI. A practical, nuanced mix of competition and cooperation would better serve U.S. interests than an arms race approach.
AI is one of the great collective Rorschach tests of our times. Like any topic that captures the popular imagination but is poorly understood, it soaks up the zeitgeist like a sponge.
It's no surprise, then, that as the idea of great-power competition has reengulfed the halls of power, AI has gotten caught up in the race narrative. China, Americans are told, is barreling ahead on AI, so much so that the United States will soon be lagging far behind. Like the fears that surrounded Japan's economic rise in the 1980s or the Soviet Union in the 1950s and 1960s, anxieties around technological dominance are really proxies for U.S. insecurity about its own economic, military, and political prowess.
Yet as a technology, AI does not naturally lend itself to this framework and is not a strategic weapon. Despite claims that AI will change nearly everything about warfare, and notwithstanding its ultimate potential, for the foreseeable future AI will likely only incrementally improve existing platforms, unmanned systems such as drones, and battlefield awareness. Ensuring that the United States outpaces its rivals and adversaries in the military and intelligence applications of AI is important and worth the investment. But such applications are just one element of AI development and should not dominate the United States' entire approach.
The arms race framework raises the question of what one is racing toward. Machine learning, the AI subfield of greatest recent promise, is a vast toolbox of capabilities and statistical methods, a bundle of technologies that do everything from recognizing objects in images to generating symphonies. It is far from clear what exactly would constitute "winning" in AI, or even being "better" at a national level.
The National Security Commission is absolutely right that developments in AI cannot be separated from the emerging strategic competition with China and developments in the broader geopolitical landscape. U.S. leadership in AI is imperative. Leading, however, does not mean winning. Maintaining superiority in the field of AI is necessary but not sufficient. True global leadership requires proactively shaping the rules and norms for AI applications, ensuring that the benefits of AI are distributed worldwide, broadly and equitably, and stabilizing great-power competition that could lead to catastrophic conflict.
That requires U.S. cooperation with friends and even rivals such as China. Here, we believe that important aspects of the National Security Commission on AI's recent report have gotten too little attention.
First, as the commission notes, official U.S. dialogue with China and Russia on the use of AI in nuclear command and control, AI's military applications, and AI safety could enhance strategic stability, much as arms control talks did during the Cold War. Second, collaboration on AI applications by Chinese and American researchers, engineers, and companies, as well as bilateral dialogue on rules and standards for AI development, could help buffer the competitive elements of an increasingly tense U.S.-Chinese relationship.
Finally, if the United States sees AI as an arms race, the bar becomes much higher for sharing core AI inputs such as data and software and for building AI for shared global challenges. Although commercial and military applications for AI are increasing, applications for societal good (addressing climate change, improving disaster response, boosting resilience, preventing the emergence of pandemics, managing armed conflict, and assisting in human development) are lagging. These would benefit from multilateral collaboration and investment, led by the United States and China.
The AI arms race narrative makes for great headlines, but the unbridled U.S.-Chinese competition it implies risks pushing the United States and the world down a dangerous path. Washington and Beijing should recognize the fallacy of a generalized AI arms race in which there are no winners. Instead, both should lead by leveraging the technology to spur dialogue between them and foster practical collaboration to counter the many forces driving them apart, benefiting the whole world in the process.
Joint Artificial Intelligence Center Director tells Naval War College audience to ‘Dive In’ on AI – What’sUpNewp
Saying "the most important thing to do is just dive in," Lt. Gen. Jack Shanahan, director of the Department of Defense Joint Artificial Intelligence Center, talked to U.S. Naval War College students and faculty on Dec. 12 about the challenges and opportunities of fielding artificial intelligence technology in the U.S. military.
"On one side of the emerging tech equation, we need far more national security professionals who understand what this technology can do or, equally important, what it cannot do," Shanahan told his audience in the college's Mahan Reading Room.
"On the other side of the equation, we desperately need more people who grasp the societal implications of new technology, who are capable of looking at this new data-driven world through geopolitical, international relations, humanitarian and even philosophical lenses," he said.
At the Joint AI Center, established in 2018 at the Pentagon, Shanahan is responsible for accelerating the Defense Department's adoption and integration of AI in order to quickly affect national security operations at the largest possible scale.
He told the Naval War College audience that the most valuable contribution of AI to U.S. defense will be how it helps human beings to make better, faster and more precise decisions, especially during high-consequence operations.
"AI is like electricity or computers. Like electricity, AI is a transformative, general-purpose enabling technology capable of being used for good or for evil, but not a thing unto itself. It is not a weapons system, a gadget or a widget," said the Air Force general, whose prior position was director of Project Maven, a Defense Department program using machine learning to autonomously extract objects of interest from photos or video.
"If I have learned anything over the past three years, it's that there's a chasm between thinking, writing and talking about AI, and doing it," Shanahan said.
"There is no substitute whatsoever for rolling up one's sleeves and diving into an AI project," he said.
Shanahan said adapting the Department of Defense to the AI world will be a multigenerational journey, requiring both urgency and patience.
He compared this moment in history to the period between World War I and World War II, when new ideas led to an explosion not just in military innovation but in technology advancement that eventually helped create Silicon Valley.
Now, the private sector is leading the way on AI, which leaves the Defense Department playing catch-up, Shanahan said. However, he added that he expects the U.S. military's efforts to be running at a tempo comparable to commercial industry's within five years.
China, he said, sees AI as a way to leapfrog over the current U.S. defense advantages.
The Chinese military has identified "intelligentization" as a military revolution on par with the mechanization brought by the internal combustion engine, Shanahan said. "They are sprinting to incorporate AI technology in all aspects of their military, and the Chinese commercial industry is more than willing to help."
After the speech, in an interview, Shanahan said AI isn't an arms race, but it is a strategic competition.
"Regardless of what China does or does not do in AI, we have to accelerate our adoption of it. It's that important to our future," he said.
For example, Shanahan said: in 15 years, what if China has a fully AI-enabled military force, and the United States does not?
"To me that scenario brings us an unacceptably high risk of failure because of the speed of the fight in the future, which we have not been prepared for as a result of fighting in the Middle East for 20-some years," he said. "That, to me, is the starkest example of why we have to move in this direction."
Looking at the importance of military higher education in the effort, Shanahan said the role of institutions such as the Naval War College is to make a place for the military's rising stars to think about new ways to harness AI.
"What you are here to do is think strategy, the strategic and societal implications of using emerging and disruptive technology," he said.
"You will find somebody comes out of here that has a spark, a lightbulb moment, that wants to go back and try this idea they developed while they were here," said Shanahan, who is a 1996 graduate of the Naval War College's College of Naval Command and Staff.
The Joint AI Center director said another role for military higher-education institutions is research on practical applications of AI.
"It's the thinking about grand strategy and technology together that may be as important to the future of operating concepts as anything else," he said.
Source: USNWC Public Affairs Office | Jeanette Steele, U.S. Naval War College Public Affairs
Artificial intelligence is writing the end of Beethoven’s unfinished symphony – Euronews
In the run-up to Ludwig van Beethoven's 250th birthday, a team of musicologists and programmers is using artificial intelligence to complete the composer's unfinished tenth symphony.
The piece was started by Beethoven alongside his famous ninth, which includes the well-known Ode To Joy.
But by the time the German composer died in 1827, there were only a few notes and drafts of the composition.
The experiment risks failing to do justice to the beloved German composer: the team said the first few months yielded results that sounded mechanical and repetitive.
But now the project leader, Matthias Roeder, from the Herbert von Karajan Institute, insists the AI's latest compositions are more promising.
"An AI system learns an unbelievable amount of notes in an extremely short time," said Roeder. "And the first results are a bit like with people, you say 'hmm, maybe it's not so great'. But it keeps going and, at some point, the system really surprises you. And that happened the first time a few weeks ago. We're pleased that it's making such big strides."
The group is in the process of training an algorithm that will produce a completed symphony. They're doing this by playing snippets of Beethoven's work and leaving the computer to improvise the rest of it. Afterwards, they correct the improvisation so it fits with the composer's style.
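The snippet-and-continuation loop described above can be illustrated with a deliberately tiny sketch. The project itself uses sophisticated generative models trained on Beethoven's scores; the code below is only a toy stand-in, a first-order Markov chain over note names that learns which note tends to follow which, then improvises a continuation from a given snippet. All note data here is hypothetical.

```python
import random
from collections import defaultdict

def train(notes):
    """Record, for each note, which notes followed it in the training melody."""
    transitions = defaultdict(list)
    for current, following in zip(notes, notes[1:]):
        transitions[current].append(following)
    return transitions

def continue_melody(transitions, snippet, length, rng=None):
    """Given a snippet, improvise `length` more notes in the learned style."""
    rng = rng or random.Random(0)  # fixed seed so the sketch is reproducible
    melody = list(snippet)
    for _ in range(length):
        candidates = transitions.get(melody[-1])
        if not candidates:
            break  # no learned continuation for this note
        melody.append(rng.choice(candidates))
    return melody

# Toy training data: the opening of "Ode to Joy" as bare note names.
training_notes = ["E", "E", "F", "G", "G", "F", "E", "D",
                  "C", "C", "D", "E", "E", "D", "D"]
model = train(training_notes)
print(continue_melody(model, ["E", "E"], 8))
```

The correction step the team describes, editing the improvisation back toward the composer's style, has no analogue in this sketch; in practice it is the human musicologists' contribution on top of the model's raw output.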
Similar projects have been undertaken before. Schubert's eighth symphony was finished using AI developed by Huawei. It received mixed reviews.
The final result of the project will be performed by a full orchestra on 28 April next year in Bonn as part of a series of celebrations of Beethoven's work.
The year of celebrations begins on December 16th with the opening of his home in Bonn as a museum after renovation.