The Not-So-Hidden FTC Guidance On Organizational Use Of Artificial Intelligence (AI), From Data Gathering Through Model Audits

Our last AI post on this blog, the New (if Decidedly Not 'Final') Frontier of Artificial Intelligence Regulation, touched on both the Federal Trade Commission's (FTC) April 19, 2021, AI guidance and the European Commission's proposed AI Regulation. The FTC's 2021 guidance referenced, in large part, the FTC's April 2020 post "Using Artificial Intelligence and Algorithms." The recent FTC guidance also relied on older FTC work on AI, including a January 2016 report, "Big Data: A Tool for Inclusion or Exclusion?," which in turn followed a September 15, 2014, workshop on the same topic. The Big Data workshop addressed data modeling, data mining and analytics, and gave us a prospective look at what would become an FTC strategy on AI.

The FTC's guidance begins with the data, and the 2016 guidance on big data and subsequent AI development addresses this most directly. The 2020 guidance then highlights important principles such as transparency, explainability, fairness, accuracy and accountability for organizations to consider. And the 2021 guidance elaborates on how consent, or opt-in, mechanisms work when an organization is gathering the data used for model development.

Taken together, the three sets of FTC guidance - the 2021, 2020 and 2016 guidance - provide insight into the FTC's approach to organizational use of AI, which spans a vast portion of the data life cycle, including the creation, refinement, use and back-end auditing of AI. As a whole, the various pieces of FTC guidance also provide a multistep process for what the FTC appears to view as responsible AI use. In this post, we summarize our takeaways from the FTC's AI guidance across the data life cycle to provide a practical approach to responsible AI deployment.

Evaluation of a data set should assess the quality of the data (including accuracy, completeness and representativeness) - and if the data set is missing certain population data, the organization must take appropriate steps to address and remedy that issue (2016).
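To make this concrete, the minimal sketch below shows one way such an assessment might be implemented. It is our own illustration, not an FTC-prescribed test; the "age_band" column, the benchmark population shares and the five-point underrepresentation threshold are all hypothetical assumptions.

    import pandas as pd

    # Hypothetical population benchmarks (e.g., census shares);
    # these figures are assumptions, not FTC-provided numbers.
    POPULATION_SHARES = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}

    def assess_data_quality(df: pd.DataFrame, group_col: str = "age_band") -> None:
        """Profile completeness and representativeness of a consumer data set."""
        # Completeness: flag columns with missing data.
        missing = df.isna().mean().sort_values(ascending=False)
        print("Share of missing values per column:")
        print(missing[missing > 0])

        # Representativeness: compare sample shares to population benchmarks.
        sample_shares = df[group_col].value_counts(normalize=True)
        for group, pop_share in POPULATION_SHARES.items():
            sample_share = sample_shares.get(group, 0.0)
            flag = "UNDERREPRESENTED" if sample_share - pop_share < -0.05 else "ok"
            print(f"{group}: sample {sample_share:.2%} vs population {pop_share:.2%} [{flag}]")

A gap flagged by a check like this would be the trigger for the "appropriate steps" the 2016 guidance contemplates, such as supplementing the data set or limiting the model's use.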

An organization must honor promises made to consumers and provide consumers with substantive information about the organization's data practices when gathering information for AI purposes (2016). Any related opt-in mechanisms for such data gathering must operate as disclosed to consumers (2021).

An organization should recognize the data compilation step as a "descriptive activity," which the FTC defines as a process aimed at uncovering and summarizing "patterns or features that exist in data sets" - a reference to data mining scholarship (2016) (note that the FTC's referenced materials originally at mmds.org are now redirected).

Compilation efforts should be organized around a life cycle model that provides for compilation and consolidation before moving on to data mining, analytics and use (2016).

An organization must recognize that there may be uncorrected biases in underlying consumer data that will surface in a compilation; therefore, an organization should review data sets to ensure hidden biases are not creating unintended discriminatory impacts (2016).
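One hedged sketch of such a review (again our own illustration; the group and outcome column names are hypothetical) compares favorable-outcome rates across groups in the compiled data, so imbalances surface before any model is trained on them:

    import pandas as pd

    def surface_hidden_bias(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.DataFrame:
        """Compare favorable-outcome rates across groups in a compiled data set."""
        rates = df.groupby(group_col)[outcome_col].mean().rename("favorable_rate")
        overall = df[outcome_col].mean()
        report = rates.to_frame()
        # Ratio of each group's rate to the overall rate; large deviations
        # from 1.0 suggest an imbalance worth investigating before training.
        report["ratio_to_overall"] = report["favorable_rate"] / overall
        return report.sort_values("ratio_to_overall")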

An organization should maintain reasonable security over consumer data (2016).

If data are collected from individuals in a deceitful or otherwise inappropriate manner, the organization may need to delete the data (2021).

An organization should recognize the model and AI application selection step as a predictive activity, where an organization is using "statistical models to generate new data" - a reference to predictive analytics scholarship (2016).

An organization must determine if a proposed data model or application properly accounts for biases (2016). Where there are shortcomings in the data model, the model's use must be accordingly limited (2021).

Organizations that build AI models may "not sell their big data analytics products to customers if they know or have reason to know that those customers will use the products for fraudulent or discriminatory purposes." An organization must, therefore, evaluate potential limitations on the provision or use of AI applications to ensure there is a "permissible purpose" for the use of the application (2016).

Finally, as a general rule, the FTC asserts that under the FTC Act, a practice is patently unfair if it causes more harm than good (2021).

Organizations must design models to account for data gaps (2021).

Organizations must consider whether their reliance on particular AI models raises ethical or fairness concerns (2016).

Organizations must consider the end uses of the models and cannot create, market or sell "insights" used for fraudulent or discriminatory purposes (2016).

Organizations must test the algorithm before use (2021). This testing should include an evaluation of AI outcomes (2020).
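By way of illustration, pre-deployment outcome testing is often implemented as a comparison of decision rates across groups. The sketch below is our own construction, not an FTC-prescribed test; the four-fifths (0.8) threshold is borrowed from employment-selection practice and is an assumption here, as are the column names in the usage note.

    import pandas as pd

    def outcome_test(decisions: pd.Series, groups: pd.Series, threshold: float = 0.8) -> bool:
        """Flag disparate outcomes: compare each group's approval rate to the
        highest group's rate; ratios below `threshold` warrant investigation."""
        rates = decisions.groupby(groups).mean()
        ratios = rates / rates.max()
        print(ratios.round(3))
        return bool((ratios >= threshold).all())

    # Usage with hypothetical pre-deployment test data:
    # passed = outcome_test(test_df["approved"], test_df["demographic_group"])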

Organizations must consider prediction accuracy when using "big data" (2016).

Model evaluation must focus on both inputs and outcomes, and AI models may not discriminate against a protected class (2020).

Input evaluation should include considerations of ethnically based factors or proxies for such factors.

Outcome evaluation is critical for all models, including facially neutral models.

Model evaluation should consider alternative models, as the FTC can challenge models if a less discriminatory alternative would achieve the same results (2020).
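The sketch below illustrates one way that comparison might be operationalized (our own hedged example; the disparity metric and the structure of the inputs are assumptions): score each candidate model on a common test set for both accuracy and group-level disparity, so a near-equally accurate but less discriminatory alternative is not overlooked.

    import pandas as pd

    def compare_alternatives(results: dict[str, tuple[pd.Series, pd.Series]],
                             groups: pd.Series) -> pd.DataFrame:
        """`results` maps model name -> (y_true, y_pred) on a common test set.
        Reports accuracy alongside the gap in approval rates across groups."""
        rows = []
        for name, (y_true, y_pred) in results.items():
            rates = y_pred.groupby(groups).mean()
            rows.append({"model": name,
                         "accuracy": float((y_true == y_pred).mean()),
                         "approval_rate_gap": float(rates.max() - rates.min())})
        # A model with similar accuracy but a smaller gap may be the
        # "less discriminatory alternative" the FTC has in mind.
        return pd.DataFrame(rows).sort_values("approval_rate_gap")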

If data are collected from individuals in a deceptive, unfair, or illegal manner, deletion of any AI models or algorithms developed from the data may also be required (2021).

Organizations must be transparent and not mislead consumers "about the nature of the interaction" - and not utilize fake "engager profiles" as part of their AI services (2020).

Organizations cannot exaggerate an AI model's efficacy or misinform consumers about whether AI results are fair or unbiased. According to the FTC, deceptive AI statements are actionable (2021).

If algorithms are used to assign scores to consumers, an organization must disclose key factors that affect the score, rank-ordered according to importance (2020).
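As an illustration of how key factors might be rank-ordered, the minimal sketch below (our own example, assuming a scikit-learn style estimator; it is not an FTC-endorsed methodology) uses permutation importance to order the features that most affect a score:

    from sklearn.inspection import permutation_importance

    def key_factors(model, X_test, y_test, feature_names: list[str]) -> list[tuple[str, float]]:
        """Rank features by how much shuffling each one degrades the model's score."""
        result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
        ranked = sorted(zip(feature_names, result.importances_mean),
                        key=lambda item: item[1], reverse=True)
        for name, importance in ranked:
            print(f"{name}: {importance:.4f}")
        return ranked

The rank-ordered output of a routine like this maps naturally onto the "key factors" disclosure the 2020 guidance describes.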

Organizations providing certain types of reports through AI services must also provide notices to the users of such reports (2016).

Organizations building AI models based on consumer data must, at least in some circumstances, allow consumers access to the information supporting the AI models (2016).

Automated decisions based on third-party data may require the organization using the third-party data to provide the consumer with an "adverse action" notice (for example, if under the Fair Credit Reporting Act, 15 U.S.C. 1681 (Rev. Sept. 2018), such decisions deny an applicant an apartment or charge them a higher rent) (2020).

General "you don'tmeet our criteria" disclosures are not sufficient. The FTCexpects end users to know what specific data areused in the AI model and how the data are used bythe AI model to make a decision (2020).

Organizations that change specific terms of deals based on automated systems must disclose the changes and reasoning to consumers (2020).

Organizations should provide consumers with an opportunity to amend or supplement information used to make decisions about them (2020) and allow consumers to correct errors or inaccuracies in their personal information (2016).

When deploying models, organizations must confirm that the AI models have been validated to ensure they work as intended and do not illegally discriminate (2020).

Organizations must carefully evaluate and select an appropriate AI accountability mechanism, transparency framework and/or independent standard, and implement as applicable (2020).

An organization should determine the fairness of an AI model by examining whether the particular model causes, or is likely to cause, substantial harm to consumers that is not reasonably avoidable and not outweighed by countervailing benefits (2021).

Organizations must test AI models periodically to revalidate that they function as intended (2020) and to ensure a lack of discriminatory effects (2021).
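One hedged way to operationalize periodic revalidation (our illustration; the baselines, tolerance and metrics are assumptions, not FTC requirements) is a recurring job that recomputes performance and disparity metrics on fresh data and raises alerts when either drifts past an agreed baseline:

    import pandas as pd

    BASELINE = {"accuracy": 0.90, "approval_rate_gap": 0.05}  # assumed baselines
    TOLERANCE = 0.02  # assumed drift tolerance

    def revalidate(y_true: pd.Series, y_pred: pd.Series, groups: pd.Series) -> dict:
        """Recompute core metrics on fresh data and flag drift from baseline."""
        rates = y_pred.groupby(groups).mean()
        current = {"accuracy": float((y_true == y_pred).mean()),
                   "approval_rate_gap": float(rates.max() - rates.min())}
        alerts = {
            "accuracy_drift": current["accuracy"] < BASELINE["accuracy"] - TOLERANCE,
            "disparity_drift": current["approval_rate_gap"] > BASELINE["approval_rate_gap"] + TOLERANCE,
        }
        return {"metrics": current, "alerts": alerts}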

Organizations must account for compliance, ethics, fairness and equality when using AI models, taking into account four key questions (2016; 2020):

How representative is the data set?

Does the AI model account for biases?

How accurate are the AI predictions?

Does the reliance on the data set raise ethical or fairness concerns?

Organizations must embrace transparency and independence, which can be achieved in part through the following (2021):

Using independent, third-party audit processes and auditors, which are immune to the intent of the AI model.

Ensuring data sets and AI source code are open to external inspection.

Applying appropriate recognized AI transparency frameworks, accountability mechanisms and independent standards.

Publishing the results of third-party AI audits.

Organizations remain accountable throughout the AI data life cycle under the FTC's recommendations for AI transparency and independence (2021).

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
