Building a Better U.S. Approach to TikTok and Beyond

One of the defining technology decisions of the Trump administration was its August 2020 ban on TikTok, an executive order whose legal challenges are still playing out in the courts. The incoming Biden-Harris administration, however, has indicated its intention to pivot away from Trump's approach on several key technology policies, from the expected appointment of a national cyber director to the reinvigoration of U.S. diplomacy to build tech coalitions abroad. President Biden will need to make policy decisions about software made by companies incorporated in foreign countries, and about the extent to which that software might pose national security risks. There may be a future TikTok policy, in other words, that isn't at all about, or at least isn't just about, TikTok.

In April 2020, Republican Rep. Jim Banks introduced legislation in the House of Representatives that sought to require developers of foreign software to provide warnings before consumers downloaded the products in question. It's highly likely that similar proposals will enter Congress in the next few years. On the executive branch side, the Biden administration has many decisions ahead on mobile app supply chain security, including whether to keep in place Trump's executive order on TikTok. These questions are also linked to foreign policy: President Biden will need to decide how to handle India's bans of Chinese software applications, as India will be a key bilateral tech relationship for the United States. And the U.S. government will also have to make choices in the near future about cloud-based artificial intelligence (AI) applications served from other countries, that is, where an organization's AI tools are run on third-party cloud servers.

In this context, what might a better U.S. policy on the security risks of foreign-made software look like? The Trump administration's TikTok executive order was more of a tactical move against a single tech firm than a fully developed policy. The new administration will now have the opportunity to set out a more fully realized, comprehensive vision for how to tackle this issue.

This analysis offers three important considerations for the U.S. executive branch, drawing on lessons from the Trump administration's TikTok ban. First, any policy needs to explicitly define the problem and what it sets out to achieve; simply asserting national security issues is not enough. Second, any policy needs to clearly articulate the alleged risks at play, because foreign software could be entangled with many economic and security issues depending on the specific case. And third, any policy needs to clearly articulate the degree to which a threat actor's supposed cost-benefit calculus makes those different risks likely. This is far from a comprehensive list. But failure to address these three considerations in policy design and implementation will only undermine the policy's ultimate effectiveness.

Defining the Problem

First, any policy on foreign software security needs to be explicitly clear about scope, that is, about what problem the government is trying to solve. Failure to properly scope policies on this front risks confusing the public, worrying industry, and obscuring the alleged risks the government is trying to communicate. This undermines the government's objectives on all three fronts, which is why scoping foreign software policies clearly and explicitly, in executive orders, policy memos and communication with the public, is critical.

Trump's approach to TikTok and WeChat provides a lesson in what not to do. Arguably, the TikTok executive order was not even a policy: It was more a tactical-level move against a single tech firm than a broader specification of the problem set and development of solutions. Trump had discussed banning TikTok in July 2020 as retaliation for the Chinese government's handling of the coronavirus. So, putting aside that this rationale undermined the alleged national security motives behind the executive order, the order issued on TikTok wasn't completely out of the blue. That said, the order on WeChat that accompanied the so-called TikTok ban was surprising, and its signing only created public confusion. Until then, much of the congressional conversation on Chinese mobile apps had focused on TikTok, and the Trump administration had given no warning that WeChat would be the subject of its actions too. What's more, even after the executive orders were signed in August, most of the Trump administration's messaging focused just on TikTok, ignoring WeChat. The administration also wrote the WeChat executive order with troublingly, and perhaps sloppily, broad language that scoped the ban as impacting Tencent Holdings, which owns WeChat and many other software applications, and thus concerned the gaming and other software industries, though the administration subsequently stated the ban was aimed only at WeChat.

Additionally, the Trump administration's decisions on U.S.-China tech often blurred together trade and national security issues. The Trump administration repeatedly suggested that TikTok's business presence in mainland China inherently made the app a cybersecurity threat, without elaborating on why the executive orders focused solely on TikTok and WeChat rather than on other software applications from China too. Perhaps the bans were a warning shot at Beijing about potential collection of U.S. citizen data, but it's worth asking whether that warning shot even worked, given the legal invalidations of the TikTok ban and the blowback even within the United States. Again, the overarching policy behind these tactical decisions was undeveloped. It was unclear if TikTok and WeChat were one-off decisions or the beginning of a series of similar actions.

Going forward, any executive branch policy on foreign software needs to explicitly specify the scope of the cybersecurity concerns at issue. In other words, the executive needs to clearly identify the problem the U.S. government is trying to solve. This will be especially important as the incoming Biden administration contends with cybersecurity risks emanating not just from China but also from Russia, Iran and many other countries. If the White House is concerned about targeted foreign espionage through software systems, for example, those concerns might very well apply to cybersecurity software developed by a firm incorporated in Russia, which would counsel a U.S. approach not limited to addressing popular consumer apps made by Chinese firms. If the U.S. is concerned about censorship conducted by foreign-owned platforms, then actions by governments like Tehran's would certainly come into the picture. If the problem is a foreign government potentially collecting massive amounts of U.S. citizen data through software, then part of the policy conversation needs to focus on data brokers, too: the large, unregulated companies in the United States that themselves buy up and sell reams of information on U.S. persons to anyone who's buying.

Software is constantly moving and often communicating with computer systems across national borders. Any focus on a particular company or country should come with a clear explanation, even if it seems relatively intuitive, as to why that company or country poses a distinctly different or elevated risk compared to other sources of technology.

Clearly Distinguish Between Different Alleged Security Risks

The Trump administration's TikTok ban also failed to clearly articulate and distinguish between its alleged national security concerns. Depending on one's perspective, concerns might be raised about TikTok collecting data on U.S. government employees, TikTok collecting data on U.S. persons not employed by the government, TikTok censoring information in China at Beijing's behest, TikTok censoring information beyond China at Beijing's behest, or disinformation on the TikTok platform. Interpreting the Trump administration's exact concerns was difficult, because White House officials were not clear and explicit about which risks most concerned them. Instead, risks were blurred together, with allegations of Beijing-compelled censorship thrown around alongside claims that Beijing was using the platform to conduct espionage against U.S. persons.

If there was evidence that these practices were already occurring, the administration did not present it. If the administration's argument was merely that such actions could occur, the administration still did not lay out its exact logic. There is a real risk that the Chinese government is ordering, coercing or otherwise compelling technology companies incorporated within its borders to engage in malicious cyber behavior on its behalf worldwide, whether for the purpose of censorship or cyber operations. Beijing quite visibly already exerts that kind of pressure on technology firms in China to repress the internet domestically. Yet to convince the public, industry, allies, partners, and even those within other parts of government and the national security apparatus that a particular piece or source of foreign software is a national security risk, the executive branch cannot overlook the importance of clear messaging. That starts with clearly articulating, and not conflating, the different risks at play.

The spectrum of potential national security risks posed by foreign software is large and depends on what the software does. A mobile app platform with videos and comments, for instance, might collect intimate data on U.S. users while also making decisions about content moderation; so in that case, it's possible the U.S. government could have concerns about mass data collection, censorship and information manipulation all at once. Or, to take another example, cybersecurity software that runs on enterprise systems and scans internal company databases and files might pose an array of risks related to corporate espionage and nation-state espionage, but this could have nothing to do with concerns about disinformation and content manipulation.

Software is a general term, and the types and degrees of cybersecurity risk posed by different pieces of software can vary greatly. Just as smartphones are not the same as computing hardware in self-driving cars, a weather app is not the same as a virtualization platform used in an industrial plant. Software could be integrated with an array of hardware components but not directly connect back to all of those components' makers: Think of how Apple, not the manufacturers of subcomponents for Apple devices, issues updates for its products. Software could also directly connect back to its maker in potentially untrusted ways, as with Huawei issuing software updates to 5G equipment. It could constantly collect information, as with the TikTok app itself, and it could learn from the information it collects, as with TikTok's use of machine learning and the way many smartphone voice-control systems collect data on user speech. This varied risk landscape means policymakers must be clear, explicit and specific about the different alleged security risks posed by foreign software.

Give Cost-Benefit Context on Security Risks

Finally, the U.S. government should make clear to the public the costs and benefits that a foreign actor might weigh in using a given piece of foreign software to spy. Just because a foreign government might hypothetically collect data via something like a mobile app, whether by directly tapping into specific devices or by turning to the app's corporate owner for data hand-overs, doesn't mean that the app is necessarily an optimal vector for espionage. It might not yield useful data beyond what the government already has, or it might be too costly relative to other active data collection vectors. Part of the U.S. government's public messaging on cyber risk management should therefore address why that particular vector of data collection would be more attractive than some other vector, or what supplementary data it would provide. In other words, what is the supposed value-add for the foreign government? This could also include consideration of controls offered by the software's country of origin (for example, transparency rules, mandatory reporting for publicly traded companies, or laws that require cooperation with law enforcement or intelligence services), much like the list of trust criteria under development as part of Lawfare's Trusted Hardware and Software Working Group.

In the case of the Trump administration's TikTok executive order, for example, there was much discussion by Trump officials about how Beijing could potentially use the app for espionage. But administration officials spoke little about why the Chinese intelligence services would elect to use that vector over others, or what about TikTok made its data a hypothetical value-add from an intelligence perspective.

If the risk concern is about targeted espionage against specific high-value targets, then the cost-benefit conversation needs to be about what data that foreign software provides, and how easily it provides that benefit, relative to other methods of intelligence collection. If the risk concern is about bulk data collection on all of the software's users, then the cost-benefit conversation needs to be about why that data is different from information that is openly available, was stolen via previous data breaches, or is purchasable from a U.S. data broker. That should include discussing what value that data adds to what has already been collected: Is the risk that the foreign government will develop microtargeted profiles on individuals, supplement existing data, or enable better data analytics on preexisting information?

The point, again, is not that TikTok's data couldn't add value, even if it overlapped with what Chinese intelligence services have already collected. Rather, the Trump administration did not clearly articulate Beijing's supposed cost-benefit calculus.

Whatever the specific security concern, managing the risks of foreign espionage and data collection through software applications is in part a matter of assessing the potential payoff for the adversary: not just the severity of the potential event, or the actor's capabilities, but why that actor might pursue this option at all. Policy messaging about these questions speaks to the government's broader risk calculus and whether the U.S. government is targeting the most urgent areas of concern. For instance, if the only concern about a piece of foreign software is that it collects data on U.S. persons, but it then turns out that data was already publicly available online or heavily overlaps with a foreign intelligence service's previous data theft, would limiting that foreign software's spread really mitigate the problems at hand? The answer might be yes, but these points need to be articulated to the public.

Conclusion

A key part of designing federal policies on software supply chain security is recognizing the globally interconnected and interdependent nature of software development today. Developers working in one country to make software for a firm incorporated in a second may sell their products in a third country and collect data sent to servers in a fourth. Software applications run in one geographic area may talk to many servers located throughout the world, whether for a Zoom call or Gmail, and the relatively open flow of data across borders has enabled the growth of many different industries, from mobile app gaming to a growing number of open-source machine-learning tools online.

If the U.S. government wants to draw attention to the security risks of particular pieces or kinds of foreign software, or of software coming from particular foreign sources, then it needs to be specific about why that software is being targeted. Those considerations go beyond the factors identified here. The WeChat executive order, for instance, wasn't just unclear in specifying the national security concerns ostensibly motivating the Trump administration; it also failed to discuss what a ban on WeChat in the United States would mean for the app's many users. Hopefully, greater attention paid to these crucial details will help better inform software security policies in the future.
