Category Archives: Artificial Intelligence

How to get past AI resume readers and get that job – The Burn-In

More and more job applicants need to persuade Artificial Intelligence (AI) of their qualities before they can reach the interview stage for a vacant position. It is often AI that reads resume submissions, which makes the job of writing a resume somewhat different from years ago. AI will be looking for keywords and phrases in the resume in order to find the candidates best suited to the position that is available. Where a company has some very successful employees, AI may be looking for similarities between an application and the resumes those employees submitted, and the latter's development once they joined the company.

This is not an exact science, and its success is a matter of opinion. Nevertheless, it is something that job applicants need to consider when presenting a resume for inspection. It raises the question of whether even the most able people need help to fashion their resume in such a way that it will trigger a positive response from the robot reading it.

Companies looking to recruit rarely understand the processes that AI follows, and do not really need to as long as they are persuaded that it works effectively to fill important vacancies. An added advantage of artificial intelligence is that it can identify an applicant who is suitable for another possible vacancy within a company, one of which the candidate is unaware.

Given that AI is a reality, it makes absolute sense to employ one of the top 5 resume writing services to adjust an existing resume so that it gets AI approval. Alternatively, a good writer will start from scratch to lay out a resume in the best way, highlighting the things that will get a positive response.

It may seem obvious that no one will get past AI without having the necessary skills for the vacancy. That does not mean skills alone will give an applicant the chance to get to the next stage. Specific words and phrases are still important, and professional resume writers understand them without having to refer to search engines. Should you wish to research them yourself, a word-cloud generator, for example, will identify them. Do not underestimate the value of repeating key skills throughout the resume.
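
A simple word-frequency pass over the job postings you are targeting is one way to surface the keywords worth echoing in a resume. The short Python sketch below is illustrative only; the sample postings and the stop-word list are invented for the example and are not part of any particular resume-screening product.

```python
import re
from collections import Counter

# Hypothetical job postings; in practice, paste in the text of the ads you are targeting.
postings = [
    "Seeking a data analyst with SQL, Python and stakeholder communication skills.",
    "Data analyst role: Python, SQL, dashboard reporting, cross-team communication.",
]

# Minimal stop-word list for illustration; a real list would be much longer.
STOP_WORDS = {"a", "and", "with", "the", "for", "of", "to", "in", "role"}

def keyword_counts(texts):
    """Count how often each non-trivial word appears across all postings."""
    words = re.findall(r"[a-z]+", " ".join(texts).lower())
    return Counter(w for w in words if w not in STOP_WORDS and len(w) > 2)

# The most frequent terms are candidates for inclusion in the resume.
for word, count in keyword_counts(postings).most_common(10):
    print(f"{word}: {count}")
```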

There are reasons why similarly qualified applicants stand out from one another. They include evidence that an applicant has worked well within a team in the past and can build good relationships with co-workers and business contacts. Communication skills have never been more important.

The resume itself must be clear and well laid out. Here are some tips to improve how it appears to AI, and to potential employers when it is finally read by humans.

Artificial Intelligence is a valuable resource that is widely used. By definition, it is able to sift the good from the not so good when it comes to reading resumes and creating a shortlist. If you want to succeed with a job application, you should master the requirements of AI or engage someone with the knowledge to do it for you. It is not cheating. After all, it is you who will have to do the job if you succeed.

Author bio: Alina Burakova is a very experienced writer and researcher who can spark a reader's interest in a given subject. She often turns her attention to in-depth analysis in the spheres of modern technology and artificial intelligence and the impact they are making on the modern world.

Read more here:
How to get past AI resume readers and get that job - The Burn-In

Artificial Intelligence Technology | VantagePoint

The Birth of Artificial Intelligence

Artificial intelligence (A.I.) is said to have had its humble beginning in 1956 at a summer-long conference called the Dartmouth Summer Research Project on Artificial Intelligence. The purpose of the conference was to investigate which aspects of human learning could be programmed into thinking machines, the precursor to computers. The intent was to find ways for these thinking machines to solve problems and learn from their mistakes, just as humans do.

Since then, artificial intelligence has not so quietly been the engine behind transformative changes in how people think, make buying decisions, and relate to one another, redefining the very definition of community, friendship and social discourse.

AI is disrupting the status quo across a broad cross-section of industries by demonstrating its superiority and effectiveness as a powerful data mining, pattern recognition and forecasting tool that can tackle challenges no human or other mathematics-based technology can.

Siri (Apple) and Alexa (Amazon) have already worked their way into our daily lives as A.I. personal assistants. Social media platforms like Facebook have helped warm up the public to this disruptive technology that is reshaping nearly every industry and government worldwide.

Now, with more than 60 years of progress, this transformative technology is poised to make the difference between growth and decline for businesses worldwide as it disrupts and transforms practically every aspect of the human experience touched by technology.

Artificial intelligence, as it has been understood to date by the public, conjures up fearful images of armies of robots rising up and challenging humankind for supremacy in intelligence as well as in physical prowess. Taken to its logical extreme, this doomsday perspective of this incredible predictive technology has been widely promoted over the years in science fiction books and in big-screen horror movies. While such fears are not entirely unfounded, they are, at this point in history, quite premature given the current state of the art in artificial intelligence.

Yet, despite these concerns and even fear of where this technology will take us over the next century, artificial intelligence has, not so quietly, been the engine behind transformative changes in how people think, make buying decisions, and relate to one another in a way which has been redefining the very definition of community, friendship and every other aspect of social discourse. These changes to date have been mainly due to the application of A.I. to such areas as facial recognition and speech recognition, within the overall general arena known as pattern recognition.

See more here:
Artificial Intelligence Technology | VantagePoint

5 ways artificial intelligence helps students learn | The …

Whether we discuss K-12 or academic students, the role of education is to prepare young minds for future development as best as possible. With the continued rise of artificial intelligence (AI) technologies, being familiar with emerging new tech has become a necessity for students across the globe. According to Adobe, AI has become a norm due to its application in data processing, with worldwide data growth projected to increase by 61 percent by 2025. Additionally, demand for AI-enabled talent has increased twofold in recent years, with tech and financial service companies absorbing 60 percent of young graduates.

This data paints a clear image of the world we live in, as we move away from manual labor and into automation and machine learning. However, simply plugging AI technologies into a classroom without rhyme or reason won't help students reach their zenith; quite the opposite, in fact.

Let's take a look at what role the teachers of today have in the world of tomorrow, as well as the benefits of AI in classrooms.

Before we discuss the net positives of AI in student learning, we should evaluate the position of teachers in the process. According to Upskilled, 48 percent of educators have a strong interest in professional development using digital learning technologies to increase their students' engagement and achievements. However, around 75 percent of teachers find their workloads unmanageable, as student personalization and curation demands increase each year.

When we discuss AI in education, our first thought goes to the ethical question of using digital technologies with minors who are unfamiliar with them. However, with the internet and social media at their fingertips, children of today are far more tech-savvy than previous generations, making them fit for AI-enabled education. Brian Sinclair, Academic Advisor at Trust My Paper, spoke on the matter recently: "The future is already here. Conservatism and hesitation in terms of active involvement of AI and digital technologies in classrooms can severely handicap students in regards to future career prospects. It falls on the shoulders of teachers and academia to bite the proverbial bullet and offer new learning opportunities to their student bodies."

In such an environment, the role of teachers is to act as supervisors, intermediaries, mentors, and advisors, not as de facto sources of knowledge. Thus, teachers' workloads become more manageable, while each student receives the attention they require to develop much-needed, future-proof skillsets.

So, what are the concrete ways in which classrooms benefit from AI integration in 2020 and beyond?

As with any aspect of personal development, every student requires a certain degree of personalization when it comes to his or her education. While some students may excel in STEM fields, others might lean toward arts and crafts, which poses a problem for teachers in regards to curation.

Luckily, by integrating AI technologies as a means to automate mundane tasks related to data processing, teachers can spend more time with their students. Thus, every student can receive more attention, guidance, and mentorship than they otherwise would, thanks to newly implemented AI-based assistance. It's worth noting, however, that teachers still have the final say on all AI findings and conclusions, relying on the technology only as a tool. This can help keep education grounded firmly in human relations and avoid dystopian situations where AI is solely responsible for the wellbeing of young minds.

Grading is an important yet taxing process that requires teachers, especially in K-12, to assess each student's body of work with great care and consideration. However, this is sometimes impossible due to the simple fact that teachers are human beings with limited time and energy.

AI technologies can thus find their place in the classroom environment as student assessment algorithms that can help grade individual projects, papers, and tests. Such assessment can be especially useful in STEM subjects, where solutions are somewhat predictable (less so at academic levels, however). Again, using AI in such a way can further direct teachers' attention toward students themselves rather than paperwork, the bane of formal education.

There are a plethora of benefits to be found in distance-enabled learning, not the least of which is the ability to facilitate a global classroom. Students from across the world can collaborate via AI-enabled platforms, with the supervision and guidance of their teachers, developing a variety of soft and hard skills.

Mathew Riley, Education Specialist at Top Essay Writing and Content Writer at Best Essay Education, spoke briefly: "The world has become smaller. Those of us who belong to the millennial and older generations don't have the perspective of today's youth when it comes to digital technologies. That being said, today's K-12 students are far more apt and welcoming of distance-learning platforms than any of those which preceded them."

As we've touched on previously, integrating AI into classrooms during K-12 education can drastically improve students' aptitude in IT and tech-related fields. This is further bolstered by the fact that today's employers already favor candidates with technological literacy over their peers.

A further study by Adobe shows that only 17 percent of adults are confident in their ability to use digital tools to pursue further career development. Additionally, 57 percent of employers reinforce the value of soft digital skills, in addition to hard skills like programming, when it comes to digital employee literacy. Thus, students who come into contact with meaningful AI-based learning at an early age won't have issues grasping more advanced technological concepts quickly.

While one-way feedback from teacher to student isn't anything new, the opposite direction can become a revolutionary educational change thanks to AI-based algorithms. Gathering student feedback on AI implementation, and subsequently reevaluating curriculum content, can become a reality.

Teachers can use the opportunity to enable chatbots or similar AI algorithms to engage students after each study session and gather comments and suggestions frequently. Thus, students become active participants in how and what they will learn going forward. This can result in much higher student interest in the learning process, as their voices will start to matter more than ever before.

The prospect of integrating artificial intelligence into classrooms around the world is as wonderful as it is demanding for teachers and students alike. Like any change, the transition toward digital technologies and away from analogue will take years until it functions as intended. However, pioneering an innovative way of teaching, and learning, should be enough to get anyone excited about AI in education.

Author Bio: Kristin Savage nourishes, sparks and empowers using the magic of a word. She now works as a freelance writer at Supremedissertations and ClassyEssay, and also does some editing work at GrabMyEssay. While pursuing her degree in Creative Writing, Kristin gained experience in the publishing industry, with expertise in marketing strategy for publishers and authors.

Read the original post:
5 ways artificial intelligence helps students learn | The ...

Second plenary meeting of the Ad Hoc Committee on Artificial Intelligence (CAHAI) – Council of Europe

Directorate General Human Rights and Rule of Law, Council of Europe

The second plenary meeting of the Ad Hoc Committee on Artificial Intelligence (CAHAI) will be held from 6 to 8 July 2020, bringing together representatives of the 47 Council of Europe member states, observer states (Canada, USA, Holy See, Japan, Mexico) as well as civil society, academia and the Council of Europe's Internet partners.

The CAHAI observer group is expanding with the participation, for the first time, of Israel and 12 new stakeholders. Other international organisations (EU, OECD, UNESCO) will also contribute to CAHAI's work.

CAHAI members will make concrete proposals on the feasibility study of a future legal framework on artificial intelligence (AI) based on human rights, democracy and the rule of law. In this connection, they will address issues such as the mapping of legal instruments applicable to AI and the opportunities and risks that the design, development and application of AI pose for human rights, the rule of law and democracy, which have already been the subject of a preliminary analysis.

Other issues, such as the scope and main elements of the above-mentioned legal framework, will also be discussed.

This will provide the necessary impetus for the preparation of the first draft of the feasibility study, which will be presented at the CAHAI plenary meeting in December 2020.


Go here to see the original:
Second plenary meeting of the Ad Hoc Committee on Artificial Intelligence (CAHAI) - Council of Europe

Artificial Intelligence and Global Security | Center for a …

The CNAS Artificial Intelligence and Global Security Initiative explores how the artificial intelligence (AI) revolution could lead to changes in global power, the character of conflict, and crisis stability. The Initiative also examines the security dimensions of AI safety and prospects for international cooperation.

This research initiative is informed by the Task Force on Artificial Intelligence and National Security and the Artificial Intelligence and International Stability Project. The AI Task Force, composed of private industry leaders, former senior government officials, and academic experts, is co-chaired by former Deputy Secretary of Defense Robert O. Work and Dr. Andrew Moore, Head of Google Cloud Artificial Intelligence. The AI and International Stability Project is building a community of practice from academia, business, and government policymakers to understand how AI is likely to develop and shape the international security environment.


Research agenda

The Initiative's research agenda covers a range of issues related to the implications of the AI revolution for global security.


Read the rest here:
Artificial Intelligence and Global Security | Center for a ...

How Artificial Intelligence will help Volkswagen boost production by 30 per cent – Hindustan Times

Volkswagen is looking to boost its production by as much as 30 per cent over the next five years by using Artificial Intelligence at its facilities. The Industrial Computer Vision AI technology will help the carmaker with image recognition processes and speed up production by reducing manual interventions.

The process extracts information from optical data, such as the real environment at the plant, which it then evaluates using artificial intelligence (AI). The procedure is similar to the human capability of recognising, processing and analysing images. Volkswagen has been working with this technology for several years and is now intensifying its efforts.

The first application, which is to be rolled out via the new Volkswagen Industrial Cloud throughout the Group next year, is currently being tested by Porsche in Leipzig. The application functions as follows: several labels are attached to each vehicle produced, for example with vehicle information or notes on airbags. Many of these labels contain country-specific information and are written in the customer's language. The proper application of these labels is ensured by Computer Vision.

At the Porsche plant in Leipzig, an employee on the production line now scans the vehicle identification number to ensure clear identification of the vehicle. Photos are taken of each label attached to the car. The app checks the images in real time to ensure that the labels have the correct content and are written in the appropriate language, and provides the production line employee with feedback on whether everything is correct. This saves several minutes per vehicle.
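
As a rough illustration of the kind of check described above, the Python sketch below extracts the text from a label photo with OCR and verifies that it contains the expected market-specific phrase. It is a minimal sketch assuming the open-source Pillow and pytesseract libraries; the expected phrases and file names are hypothetical, and Volkswagen's actual Industrial Computer Vision stack is not public.

```python
from PIL import Image   # Pillow, for loading the label photo
import pytesseract      # open-source OCR wrapper around Tesseract

# Hypothetical mapping from destination market to a phrase that must appear on the airbag label.
EXPECTED_PHRASES = {
    "DE": "Airbag-Warnhinweis beachten",
    "US": "Never place a rear-facing child seat on this seat",
}

def label_is_correct(image_path: str, market: str) -> bool:
    """OCR the label photo and check that it contains the market-specific text."""
    text = pytesseract.image_to_string(Image.open(image_path))
    return EXPECTED_PHRASES[market].lower() in text.lower()

# Example usage with a placeholder file name for one vehicle's airbag label.
if label_is_correct("vin_WVWZZZ_label_airbag.jpg", market="DE"):
    print("Label OK")
else:
    print("Label mismatch - flag vehicle for manual check")
```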

Another solution currently being prepared for use throughout the Group comes from Ingolstadt, where Audi uses it for quality testing at the press shop. Cameras combined with software based on machine learning detect the finest cracks and defects in components.

Volkswagen has set up a team of about 60 Computer Vision experts for the further development of the technology and the evaluation of new utilisation possibilities. In addition to the use of the technology in production, Volkswagen plans applications along the entire value stream, for example in sales and after-sales. For development work on the optical procedure, Volkswagen is recruiting experts for this area in Berlin, Dresden, Munich and Wolfsburg. In addition, the Group continues to build up its skills in the fields of camera technology, machine learning and the operation of Computer Vision solutions.

Read more:
How Artificial Intelligence will help Volkswagen boost production by 30 per cent - Hindustan Times

Security Think Tank: Artificial intelligence will be no silver bullet for security – ComputerWeekly.com


Published: 03 Jul 2020

Undoubtedly, artificial intelligence (AI) is able to support organisations in tackling their threat landscape and the widening of vulnerabilities as criminals have become more sophisticated. However, AI is no silver bullet when it comes to protecting assets and organisations should be thinking about cyber augmentation, rather than just the automation of cyber security alone.

Areas where AI can currently be deployed include training a system to identify even the smallest behaviours of ransomware and malware attacks before they enter the system, and then to isolate them from that system.

Other examples include automated phishing and data theft detection which are extremely helpful as they involve a real-time response. Context-aware behavioural analytics are also interesting, offering the possibility to immediately spot a change in user behaviour which could signal an attack.
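
One very simple form of behavioural analytics is flagging activity that deviates sharply from a user's historical baseline. The sketch below, which assumes nothing beyond NumPy and uses invented example numbers, scores today's activity against past behaviour with a z-score; real products use far richer, context-aware models.

```python
import numpy as np

# Hypothetical history: number of file downloads per day for one user over recent weeks.
history = np.array([12, 9, 14, 11, 10, 13, 12, 8, 11, 10])
today = 95  # today's count, e.g. a possible data-theft event

def anomaly_score(history: np.ndarray, value: float) -> float:
    """Return how many standard deviations 'value' sits above the user's baseline."""
    return (value - history.mean()) / history.std()

score = anomaly_score(history, today)
if score > 3.0:  # alert threshold chosen for illustration only
    print(f"Alert: activity {score:.1f} sigma above baseline")
```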

The above are all examples of where machine learning and AI can be useful. However, over-reliance and false assurance could present another problem: as AI gets better at safeguarding assets, it also gets better at attacking them. As cutting-edge technologies are applied to improve security, cyber criminals are using the same innovations to get an edge over these defences.

Typical attacks can involve the gathering of information about a system or sabotaging an AI system by flooding it with requests.

Elsewhere, so-called deepfakes are a relatively new area of fraud that poses unprecedented challenges. We already know that cyber criminals can litter the web with fakes that make it almost impossible to distinguish real news from fake.

The consequences are such that many legislators and regulators are contemplating the establishment of rules and laws to govern this phenomenon. For organisations, this means that deepfakes could lead to much more complex phishing in future, targeting employees by mimicking corporate writing styles or even individual writing styles.

In a nutshell, AI can augment cyber security so long as organisations know its limitations and have a clear strategy focusing on the present while constantly looking at the evolving threat landscape.

Ivana Bartoletti is a cyber risk technical director at Deloitte and a founder of Women Leading in AI.

More here:
Security Think Tank: Artificial intelligence will be no silver bullet for security - ComputerWeekly.com

Letters to the editor – The Economist

Jul 4th 2020

Artificial intelligence is an oxymoron (Technology quarterly, June 13th). Intelligence is an attribute of living things, and can best be defined as the use of information to further survival and reproduction. When a computer resists being switched off, or a robot worries about the future for its children, then, and only then, may intelligence flow.

I acknowledge Richard Sutton's bitter lesson, that attempts to build human understanding into computers rarely work, although there is nothing new here. I was aware of the folly of anthropomorphism as an AI researcher in the mid-1980s. We learned to fly when we stopped emulating birds and studied lift. Meaning and knowledge don't result from symbolic representation; they relate directly to the visceral motives of survival and reproduction.

Great strides have been made in widening the applicability of algorithms, but as Mr Sutton says, this progress has been fuelled by Moore's law. What we call AI is simply pattern discovery. Brilliant, transformative and powerful, but just pattern discovery. Further progress is dependent on recognising this simple fact, and abandoning the fancy that intelligence can be disembodied from a living host.

ROB MACDONALD
Richmond, North Yorkshire

I agree that machine learning is overhyped. Indeed, your claim that such techniques are loosely based on the structure of neurons in the brain is true of neural networks, but these are just one type among a wide array of different machine-learning methods. In fact, machine learning in some cases is no more than a rebranding of existing processes. If by machine learning we simply mean building a model using large amounts of data, then good old ordinary least squares (line of best fit) is a form of machine learning.
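
The letter writer's point is easy to demonstrate: fitting a line of best fit is "building a model using large amounts of data". Below is a minimal NumPy sketch with made-up data, included purely as an illustration of the claim.

```python
import numpy as np

# Made-up training data: y is roughly 2x + 1 plus noise.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=200)
y = 2.0 * x + 1.0 + rng.normal(scale=0.5, size=200)

# Ordinary least squares: solve for slope and intercept from the data.
X = np.column_stack([x, np.ones_like(x)])
(slope, intercept), *_ = np.linalg.lstsq(X, y, rcond=None)

# The fitted line is a "learned" model that predicts y for new x.
print(f"learned model: y = {slope:.2f} * x + {intercept:.2f}")
```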

TOM ARMSTRONG
Toronto

The scope of your research into green investing was too narrow to condemn all financial services for their woolly thinking ("Hotting up", June 20th). You restricted your analysis to microeconomic factors and to the ability of investors to engage with companies. It overlooked the bigger picture: investors can also shape the macro environment by structured engagement with the system itself.

For example, the data you used largely originated from the investor-led Carbon Disclosure Project (for which we hosted the first ever meeting, nearly two decades ago). In addition, investors have also helped shape sustainable-finance plans in Britain, the EU and UN. Investors also sit on the industry-led Taskforce on Climate-related Financial Disclosure, convened by the Financial Stability Board, which has proved effective.

It is critical that governments apply a meaningful carbon price. But if we are to move money at the pace and scale required to deal with climate risk, governments need to reconsider the entire architecture of markets. This means focusing a wide-angled climate lens on prudential regulation, listing rules, accounting standards, investor disclosure standards, valuation conventions and stewardship codes, as well as building on new interpretations of legal fiduciary duty. This work is done most effectively in partnership with market participants. Green-thinking investors can help.

STEVE WAYGOOD
Chief responsible investment officer
Aviva Investors
London

Estimating indirectly observable GDP in real time is indeed a hard job for macro-econometricians, or wonks, as you call us ("Crisis measures", May 30th). Most of the components are either highly lagged, as your article mentioned, or altogether unobservable. But the textbook definition of GDP and its components won't be changing any time soon, as the reader is led to believe. Instead, what has always changed, and will continue to change, are the proxy indicators used to estimate the estimate of GDP.

MICHAEL BOERMAN
Washington, DC

Reading Lexington's account of his garden adventures (June 20th) brought back memories of my own experience with neighbours in Twinsburg, Ohio, in the late 1970s. They also objected to vegetables growing in our front yard (the only available space). We were doing it for the same reasons as Lexington: pleasure, fresh food to eat, and a learning experience for our young children. The neighbours, recently arrived into the suburban middle class, saw it as an affront. They no longer had to grow food for their table. They could buy it at the store and keep it in the deep freeze. Our garden, in their face every day, reminded them of their roots in Appalachian poverty. They called us hillbillies.

Arthur C. Clarke once wrote: "Any sufficiently advanced technology is indistinguishable from magic." Our version read, "Any sufficiently advanced lifestyle is indistinguishable from hillbillies."

PHILIP RAKITA
Philadelphia

Bartleby (May 30th) thinks the benefits of working from home will mean that employees will not want to return to the office. I am not sure that is the case for many people. My husband is lucky. He works for a company that already expected its staff to work remotely, so had the systems and habits in place. He has a spacious room to work in, with an adjustable chair, large monitor and a nice view. I do not work so he is not responsible for child care or home schooling.

Many people are working at makeshift workspaces which would make an occupational therapist cringe. Few will have a dedicated room for their home office, so their work invades their mental and physical space.

My husband has noticed that meetings are being set up both earlier and later in the day because there is an assumption that, as people are not commuting, it is fine to extend their work day. Colleagues book a half-hour meeting instead of dropping by someone's desk to ask a quick question. Any benefit of not commuting is lost. My husband still struggles to finish in time to have dinner with our children. People with especially long commutes now have more time, but even the commute was a change of scenery and offered some incidental exercise.

JENNIFER ALLEN
London

As Bartleby pointed out, the impact of pandemic working conditions won't be limited to the current generation. By exacerbating these divides, will covid-19 completely guarantee a future dominated by the baby-Zoomers?

MALCOLM BEGG
Tokyo

The transition away from the physical office engenders a lackadaisical approach to the work day for many workers. It brings to mind Ignatius Reilly's reasoning for his late start at the office, from A Confederacy of Dunces:

I avoid that bleak first hour of the working day during which my still sluggish senses and body make every chore a penance. I find that in arriving later, the work which I do perform is of a much higher quality.

ROBERT MOGIELNICKI
Arlington, Virginia

This article appeared in the Letters section of the print edition under the headline "On artificial intelligence, green investing, GDP, gardens, working from home"

Continue reading here:
Letters to the editor - The Economist

Artificial Intelligence Systems Will Need to Have Certification, CISA Official Says – Nextgov

Vendors of artificial intelligence technology should not be shielded by intellectual property claims and will have to disclose elements of their designs and be able to explain how their offering works in order to establish accountability, according to a leading official from the Cybersecurity and Infrastructure Security Agency.

"I don't know how you can have a black-box algorithm that's proprietary and then be able to deploy it and be able to go off and explain what's going on," said Martin Stanley, a senior technical advisor who leads the development of CISA's artificial intelligence strategy. "I think those things are going to have to be made available through some kind of scrutiny and certification around them so that those integrating them into other systems are going to be able to account for what's happening."

Stanley was among the speakers on a recent Nextgov and Defense One panel where government officials, including a member of the National Security Commission on Artificial Intelligence, shared some of the ways they are trying to balance reaping the benefits of artificial intelligence with risks the technology poses.

Experts often discuss the rewards of programming machines to do tasks humans would otherwise have to labor on, for both offensive and defensive cybersecurity maneuvers, but the algorithms behind such systems and the data used to train them into taking such actions are also vulnerable to attack. And the question of accountability applies to users and developers of the technology.

Artificial intelligence systems are code that humans write, but they exercise their abilities and become stronger and more efficient using data that is fed to them. If the data is manipulated, or poisoned, the outcomes can be disastrous.

Changes to the data could be things that humans wouldn't necessarily recognize, but that computers do.

"We've seen ... trivial alterations that can throw off some of those results, just by changing a few pixels in an image in a way that a person might not even be able to tell," said Josephine Wolff, a Tufts University cybersecurity professor who was also on the panel.
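
To make the "few pixels" point concrete, the toy sketch below shows how a tiny, targeted nudge to an input can flip the decision of a simple linear classifier. It is a schematic illustration with invented weights and values, not a real image-recognition attack.

```python
import numpy as np

# Toy "image": four pixel intensities, plus a hypothetical linear classifier's weights and bias.
pixels = np.array([0.20, 0.70, 0.40, 0.90])
weights = np.array([1.0, -2.0, 0.5, 1.5])
bias = -0.2

def classify(x):
    """Return 1 ("benign") if the score is positive, else 0 ("malicious")."""
    return int(weights @ x + bias > 0)

print("original prediction:", classify(pixels))  # prints 1

# Adversarial nudge: move each pixel a tiny step in the direction that lowers the score
# (the sign of the gradient), analogous to imperceptible pixel changes in an image.
epsilon = 0.15
adversarial = pixels - epsilon * np.sign(weights)
print("perturbed prediction:", classify(adversarial))  # prints 0
```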

And while it's true that behind every AI algorithm is a human coder, the designs are becoming so complex "that you're looking at automated decision-making where the people who have designed the system are not actually fully in control of what the decisions will be," Wolff said.

This makes for a threat vector where vulnerabilities are harder to detect until it's too late.

"With AI, there's much more potential for vulnerabilities to stay covert than with other threat vectors," Wolff said. "As models become increasingly complex, it can take longer to realize that something is wrong before there's a dramatic outcome."

For this reason, Stanley said an overarching factor CISA uses to help determine which use cases AI gets applied to within the agency is to assess the extent to which they offer high benefits and low regrets.

"We pick ones that are understandable and have low complexity," he said.

Among the other things federal personnel need to be mindful of is who has access to the training data.

"You can imagine you get an award done, and everyone knows how hard that is from the beginning, and then the first thing that the vendor says is, 'OK, send us all your data,' how's that going to work so we can train the algorithm?" he said. "Those are the kinds of concerns that we have to be able to address."

"We're going to have to continuously demonstrate that we are using the data for the purpose that it was intended," he said, adding, "There's some basic science that speaks to how you interact with algorithms and what kind of access you can have to the training data. Those kinds of things really need to be understood by the people who are deploying them."

A crucial but very difficult element to establish is liability. Wolff said ideally, liability would be connected to a potential certification program where an entity audits artificial intelligence systems for factors like transparency and explainability.

That's important, she said, for answering the question of how we can incentivize companies developing these algorithms to feel really heavily the weight of getting them right and be sure to do their own due diligence, knowing that there are serious penalties for failing to secure them effectively.

But this is hard, even in the world of software development more broadly.

"Making the connection is still very unresolved. We're still in the very early stages of determining what would a certification process look like, who would be in charge of issuing it, what kind of legal protection or immunity might you get if you went through it," she said. "Software developers and companies have been working for a very long time, especially in the U.S., under the assumption that they can't be held legally liable for vulnerabilities in their code, and when we start talking about liability in the machine learning and AI context, we have to recognize that that's part of what we're grappling with, an industry that for a very long time has had very strong protections from any liability."

View from the Commission

Responding to this, Katharina McFarland, a member of the National Security Commission on Artificial Intelligence, referenced the Pentagon's Cybersecurity Maturity Model Certification program.

The point of the CMMC is to establish liability for Defense contractors, Defense Acquisitions Chief Information Security Officer Katie Arrington has said. But McFarland highlighted difficulties facing CMMC that program officials themselves have acknowledged.

"I'm sure you've heard of the [CMMC]; there's a lot of thought going on, the question is the policing of it," she said. "When you consider the proliferation of the code that's out there, and the global nature of it, you really will have a challenge trying to take a full thread and to pull it through a knothole to try to figure out where that responsibility is. Our borders are very porous, and machines that we buy from another nation may not be built with the same biases that we have."

McFarland, a former head of Defense acquisitions, stressed that AI is more often than not viewed with fear and said she wanted to see more of a balance in procurement considerations for the technology.

"I found that we had a perverse incentive built into our system, and that was that we took, sometimes, I think, extraordinary measures to try to creep into the one percent area for failure," she said. "In other words, we would want to 110% test a system, and in doing so, we might miss the venue where its applicability in a theater to protect soldiers, sailors, airmen and Marines is needed."

She highlighted upfront a need for testing and verification but said it shouldn't be done at the expense of adoption. To that end, she asked that industry help by sharing the testing tools they use.

"I would encourage industry to think about this from the standpoint of what tools we would need, because they're using them, in the department, in the federal space, in the community, to give us transparency and verification," she said, "so that we have a high confidence in the utility, in the data that we're using and the AI algorithms that we're building."

See the rest here:
Artificial Intelligence Systems Will Need to Have Certification, CISA Official Says - Nextgov

Increasing Transparency at the National Security Commission on Artificial Intelligence – Lawfare

In 2018, Congress established the National Security Commission on Artificial Intelligence (NSCAI), a temporary, independent body tasked with reviewing the national security implications of artificial intelligence (AI). But two years later, the commission's activities remain little known to the public. Critics have charged that the commission has conducted activities of interest to the public outside of the public eye, only acknowledging that meetings occurred after the fact and offering few details on evolving commission decision-making. As one commentator remarked, "Companies or members of the public interested in learning how the Commission is studying AI are left only with the knowledge that appointed people met to discuss these very topics, did so, and are not yet releasing any information about their recommendations."

That perceived lack of transparency may soon change. In June, the U.S. District Court for the District of Columbia handed down its decision in Electronic Privacy Information Center v. National Security Commission on Artificial Intelligence, holding that Congress compelled the NSCAI to comply with the Federal Advisory Committee Act (FACA). Under FACA, the commission must hold open meetings and proactively provide records and other materials to the public. This decision follows a ruling from December 2019, holding that the NSCAI must also provide historical documents upon request under the Freedom of Information Act (FOIA). As a result of these decisions, the public is likely to gain increased access to and insight into the once-opaque operations of the commission.

Lawmakers established the NSCAI in the John S. McCain National Defense Authorization Act (NDAA) for fiscal 2019, § 1051, which tasked the commission with "consider[ing] the methods and means necessary to advance the development of artificial intelligence, machine learning, and associated technologies to comprehensively address the national security and defense needs of the United States." The commission's purview includes an array of issues related to the implications and uses of artificial intelligence and machine learning for national security and defense, including U.S. competitiveness and leadership, research and development, ethics, and data standards.

The NSCAI is currently chaired by Eric Schmidt, the former executive chairman of Google's parent company, Alphabet. The commission's 15 members, appointed by a combination of Congress, the secretary of defense and the secretary of commerce, receive classified and unclassified briefings, meet in working groups and engage with industry. They report their findings and recommendations to the president and Congress, including in an annual report.

The Electronic Privacy Information Center (EPIC), a research center focused on privacy and civil liberties issues in the digital age, submitted a request to the NSCAI in September 2019, seeking access to upcoming meetings and records prepared by the commission under FACA and FOIA. In the six-month period prior to the request, the NSCAI held more than a dozen meetings and received over 100 briefings, according to EPIC. At the time it filed the lawsuit, EPIC noted that the commission's first major report was also one month overdue for release. When the commission did not comply with the requests under FOIA and FACA, EPIC brought suit under the two laws.

EPIC's complaint alleged that the NSCAI had conducted its operations opaquely in its short lifespan. Since its establishment, the commission "has operated almost entirely in secret with meetings behind closed doors[,]" and has failed to publish or disclose any notices, agendas, minutes, or materials. If Congress had intended the NSCAI to comply with FOIA and FACA, such activity would not satisfy the statutes' requirements. Given the potential implications of federal artificial intelligence decisions for privacy, cybersecurity, human rights, and algorithmic bias, EPIC argued that "[p]ublic access to the records and meetings of the AI Commission is vital to ensure government transparency and democratic accountability." The complaint also noted the potential ramifications of commission activities for the government, private sector, and public, as well as the importance of artificial intelligence safeguards in the national security context due to limited public oversight. According to EPIC, increasing public participation would permit greater input into the development of national AI policy by those whose privacy and data security could potentially be affected.

The U.S. District Court for the District of Columbia addressed EPIC's FOIA claim in a December 2019 decision. FOIA requires agencies to disclose their records to a party upon request, barring exemptions (including for information classified to protect national security). EPIC alleged that the NSCAI failed to uphold its obligations under FOIA to process FOIA requests in a timely fashion; to process EPIC's FOIA requests in an expedited manner, in accordance with EPIC's claims of urgency; and to make available for public inspection and copying its records, reports, transcripts, minutes, appendixes, working papers, drafts, studies, agenda, or other documents. The commission, which at the time did not have a FOIA processing mechanism in place or other pending FOIA requests, argued that it was not an agency subject to FOIA.

The court's inquiry centered on whether the NSCAI is an agency under FOIA. Comparing the language establishing the NSCAI with FOIA's definition of agency, the court held that the NSCAI is subject to FOIA. In his decision, District Judge Trevor McFadden noted that Congress "could have hardly been clearer." As a result, since that time, the commission has had to produce historical documents in response to FOIA requests.

FACA, by contrast, applies forward-looking requirements specifically to federal advisory committees. These mandates include requiring committees to open meetings to the public and announce them in the Federal Register, and to make reports, transcripts and other commission materials publicly available. The measures aim to inform the public about and invite public engagement with the committees that provide expertise to the executive branch. EPIC alleged that the NSCAI violated FACA by failing to hold open meetings and provide notice of them, and by failing to make records available to the public. EPIC sought mandamus relief pursuant to the alleged FACA violations.

In its June decision, the district court ruled that FACA applies to the NSCAI. The commission had filed a motion to dismiss the FACA claims, arguing that it could not be subject to both FOIA and FACA. Since the court had previously held the NSCAI to be an agency for purposes of FOIA, the commission reasoned that it could not simultaneously be an advisory committee under FACA. District Judge McFadden disagreed. Invoking the Roman god Janus's two faces, one forward-looking and the other backward-facing, he wrote, "[L]ike Janus, the Commission does indeed have two faces, and ... Congress obligated it to comply with FACA as well as FOIA." The court could not identify a conflict between the requirements of the two statutes, despite differences in their obligations and exceptions. Rather, it noted that if such conflicts arise, it will be incumbent on the parties and the court to resolve any difficulties. The court dismissed additional claims under the Administrative Procedure Act (APA) for lack of subject matter jurisdiction, as it determined that the commission is not an agency under the APA definition.

The court's decision turned on whether the NSCAI is an advisory committee subject to FACA. The court determined that the statutory text of the 2019 NDAA establishing the NSCAI "fit[s] the [FACA] definition of advisory committee like a glove." Furthermore, turning to the full text of the 2019 NDAA, the court noted that the law contains at least two instances in which it explicitly exempts a government body from FACA. The court read the 2019 NDAA as silent when FACA applies and explicit when FACA does not apply. Given Congress's silence on the applicability of FACA to the NSCAI in the 2019 NDAA, and again in the 2020 NDAA, the court reasoned that Congress intended the NSCAI to be subject to FACA.

In determining the NSCAI to be subject to FACA, in addition to FOIA, the court has compelled the commission to adopt a more transparent operating posture going forward. Since the December 2019 decision on FOIA, the NSCAI has produced a number of historical records in response to FOIA requests. The recent ruling on FACA grounds requires the NSCAI to hold open meetings, post notice of meetings in advance and make documents publicly available. As a result, the commissions process of compiling findings and developing recommendations for government action related to artificial intelligence and machine learning will likely become more accessible to the public.

The two court decisions come in time to have a noticeable impact on the remaining term of the temporary commission. While the NSCAI was previously due to disband later in 2020, the NDAA for fiscal 2020, § 1735, extended the commission's lifespan by one year, to October 1, 2021. Citing federal budgetary timelines and the pace of AI development, the commission released its first set of recommendations in March 2020 and expressed its intent to publish additional recommendations on a quarterly basis thereafter. The commission is due to submit its final report to Congress by March 1, 2021. As the NSCAI prepares to enter its final year of operations and develop its closing recommendations, the public will have a clearer window into the commission's work.

View original post here:
Increasing Transparency at the National Security Commission on Artificial Intelligence - Lawfare