Thursday, May 13, 2021

Expected impact of the executive order improving cybersecurity

The President of the United States has issued an executive order concerning cybersecurity.  https://www.whitehouse.gov/briefing-room/presidential-actions/2021/05/12/executive-order-on-improving-the-nations-cybersecurity/

This is my rapid-response analysis.

The order was undoubtedly in preparation for a considerable time, but it was released just after the ransomware attack on the Colonial Pipeline, which carries petroleum products to much of the US East Coast.

Technically, the executive order applies only to US government contractors, but many of its provisions reach the entire supply chain leading up to those directly specified customers.  As a result, most of its provisions will affect, to varying degrees, any company that does business with a government contractor or supplier, as well as the contractors and suppliers themselves.  An executive order cannot compel companies that are not government contractors to change what they do, but those that do not comply may be excluded from doing business with the government and with any of its contractors or suppliers, so in effect the order applies to almost all companies.

The executive order is intended to do several things:

·       Remove barriers to sharing threat information, primarily between the government and private entities, but it will also have the effect of making it easier to share information among private entities.

·       Strengthen cybersecurity standards.

·       Mandate the wider use of zero-trust methods and architectures.

·       Require software developers to maintain greater visibility into their software.

·       Make security information public so that consumers can evaluate the security of a software system.  To that end, it establishes an “Energy Star”-like program for rating software security.

·       Mandate the use of multi-factor authentication where appropriate.

·       Strengthen the requirements around encryption at rest and for data in motion.

·       Establish a cybersecurity review board.

·       Create a standard playbook for responding to cyber-incidents. I predict that this will end up being a mandate that each company have a standard procedure for dealing with cyber-incidents.

·       Improve capabilities to detect cybersecurity incidents.

·       Improve investigative and remediation capabilities.

Analysis

The order provides many common-sense ideas for improving cybersecurity (common sense, that is, if you spend your time thinking about cybersecurity).  Nothing in the order seems outlandish or overly burdensome.  Cybersecurity is the grand challenge of the 21st century, and it is increasingly obvious that we need to pay much more attention to it.  Cybersecurity failures are expensive and highly damaging to the reputations of the organizations that are attacked.

The order discusses removing the contractual barriers that prevent companies from sharing information about cyberattacks.  Strictly speaking, these barriers include only those in US federal contracts, but there will be increasing pressure to share information among all concerned parties.  Any information relevant to cyber incidents or potential incidents must be reported promptly to the relevant government agencies, using industry-recognized formats.  The extent of sharing will certainly increase, but it will still require a careful balance among business interests, privacy, and coordinated defense.
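
For a sense of what an industry-recognized format looks like, here is a minimal sketch, in Python, of a shareable threat indicator loosely modeled on the STIX 2.1 “indicator” object; the field set is simplified, and every value is invented for illustration.

    # Sketch of a shareable threat indicator, loosely modeled on STIX 2.1.
    # Fields are simplified; all values are invented placeholders.
    import json

    indicator = {
        "type": "indicator",
        "spec_version": "2.1",
        "id": "indicator--00000000-0000-0000-0000-000000000000",  # placeholder
        "created": "2021-05-13T00:00:00Z",
        "name": "Suspected ransomware command-and-control address",
        "pattern": "[ipv4-addr:value = '198.51.100.7']",  # documentation IP
        "pattern_type": "stix",
        "valid_from": "2021-05-13T00:00:00Z",
    }

    # Serializing to JSON yields something any partner's tooling can ingest.
    print(json.dumps(indicator, indent=2))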

The focus of the order is to bring systems up to modern cybersecurity standards.  NIST, the National Institute of Standards and Technology, has been very active in creating these standards.  Organizations may need to review their security practices to be sure that they meet the current standards.  I would expect, in addition, that future standards will be developed that will require additional investments.  The order states an intention to invest in technology and personnel to match the modernization goals.  It will require congressional action, however, to actually fund these good intentions.

The order mandates transitioning to Zero Trust Architecture, which it defines as “a set of system design principles, and a coordinated cybersecurity and system management strategy based on an acknowledgement that threats exist both inside and outside traditional network boundaries.”  Under this framework, users get full access, but only to the specific computational resources they need to perform their jobs.  Traditional security architectures put all of their effort into defending the perimeter of a network.  Once through the firewall, an attacker would essentially have free rein, because every machine within the firewall was considered trusted.  Zero Trust Architecture reverses that assumption.  Every machine is suspect, no matter where it is located, until it is verified that the machine has a need for access to a resource and permission to access it.
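
To make the contrast concrete, here is a minimal sketch, in Python, of what a zero-trust access decision might look like.  The policy table, user names, and attributes are all hypothetical; the point is only that network location is never consulted.

    # Illustrative zero-trust access decision: every request is evaluated on
    # its own merits, and being "inside the network" confers no trust.
    from dataclasses import dataclass

    @dataclass
    class AccessRequest:
        user_id: str           # verified identity (e.g., after MFA)
        device_verified: bool  # device posture check passed
        resource: str          # the resource being requested
        network_zone: str      # "internal" or "external"; deliberately ignored

    # Hypothetical policy table: which verified users may reach which resources.
    POLICY = {
        ("alice", "payroll-db"): True,
        ("bob", "build-server"): True,
    }

    def authorize(req: AccessRequest) -> bool:
        # req.network_zone is never consulted: an attacker who breaches the
        # perimeter gains no implicit trust from being inside it.
        if not req.device_verified:
            return False
        return POLICY.get((req.user_id, req.resource), False)

    # A request from inside the firewall is denied unless explicitly permitted.
    print(authorize(AccessRequest("mallory", True, "payroll-db", "internal")))  # False
    print(authorize(AccessRequest("alice", True, "payroll-db", "external")))    # True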

Defenders have to defend their systems correctly every time, but attackers need only succeed once.  It is no longer a matter of whether attackers will pierce the firewall, but of when and how they will find a way to do it.  Therefore, internal as well as perimeter defenses are necessary, and Zero Trust Architecture provides a framework for that combined protection.

The order requires new documentation and compliance frameworks.  These may impose additional burdens on how companies document their processes and products.

One of the most impactful features of the new order is its focus on preventing supply chain attacks.  It requires software that can resist attacks and detect tampering.  Each provider will be required to verify that its software has not been compromised, including any software used for development and deployment as well as any components it incorporates.  The government, with the involvement of the relevant parties, will develop guidelines for evaluating software security, including the practices of developers and suppliers.  Those parties will need to demonstrate their conformance with secure practices.  The guidelines are expected to include (quoting from the order):
          (i)     secure software development environments, including such actions as:
              (A)  using administratively separate build environments;
              (B)  auditing trust relationships;
              (C)  establishing multi-factor, risk-based authentication and conditional access across the enterprise;
              (D)  documenting and minimizing dependencies on enterprise products that are part of the environments used to develop, build, and edit software;
              (E)  employing encryption for data; and
              (F)  monitoring operations and alerts and responding to attempted and actual cyber incidents;
          (ii)    generating and, when requested by a purchaser, providing artifacts that demonstrate conformance to the processes set forth in subsection (e)(i) of this section; 
          (iii)   employing automated tools, or comparable processes, to maintain trusted source code supply chains, thereby ensuring the integrity of the code;
          (iv)    employing automated tools, or comparable processes, that check for known and potential vulnerabilities and remediate them, which shall operate regularly, or at a minimum prior to product, version, or update release;
          (v)     providing, when requested by a purchaser, artifacts of the execution of the tools and processes described in subsection (e)(iii) and (iv) of this section, and making publicly available summary information on completion of these actions, to include a summary description of the risks assessed and mitigated;
          (vi)    maintaining accurate and up-to-date data, provenance (i.e., origin) of software code or components, and controls on internal and third-party software components, tools, and services present in software development processes, and performing audits and enforcement of these controls on a recurring basis;
          (vii)   providing a purchaser a Software Bill of Materials (SBOM) for each product directly or by publishing it on a public website;
          (viii)  participating in a vulnerability disclosure program that includes a reporting and disclosure process;
          (ix)    attesting to conformity with secure software development practices; and
          (x)     ensuring and attesting, to the extent practicable, to the integrity and provenance of open source software used within any portion of a product.
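
As one illustration of what item (iii), maintaining trusted source code supply chains, can mean in practice, here is a minimal sketch, in Python, of verifying a downloaded component against a digest published by its producer; the file name and the expected digest are placeholders invented for the example.

    # Sketch of integrity verification for a third-party component: compare
    # the artifact's SHA-256 digest against the value the producer published.
    import hashlib

    def sha256_of(path: str) -> str:
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                h.update(chunk)
        return h.hexdigest()

    # Placeholder: in practice the expected digest comes from the producer's
    # signed release notes or a trusted package index.
    EXPECTED = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"

    if sha256_of("component-1.0.tar.gz") != EXPECTED:
        raise RuntimeError("Component failed integrity check; do not install.")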

Companies will need to provide a software bill of materials that reflects all of the components included in the code.  Modern code often contains many components: some purchased, some open source, and some developed in house.  Each of those components could introduce malware in what is called a supply chain attack.  The attacker corrupts a component during its development, without the producer’s knowledge.  The producer then distributes the corrupted component and certifies that it actually comes from the producer, lulling users into a false sense of security.
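
To illustrate, an SBOM is at heart a structured inventory of components, their versions, and their origins.  The sketch below, in Python, shows the general shape; the product, component names, and advisory data are invented, and real SBOMs would use a standard format such as SPDX or CycloneDX.

    # Illustrative SBOM-like inventory: each component records what it is,
    # where it came from, and which version is included.
    sbom = {
        "product": "example-app",  # hypothetical product name
        "version": "2.3.1",
        "components": [
            {"name": "openssl",    "version": "1.1.1k", "origin": "open source"},
            {"name": "auth-lib",   "version": "4.2.0",  "origin": "purchased"},
            {"name": "report-gen", "version": "0.9.3",  "origin": "in house"},
        ],
    }

    # A consumer can then answer simple but important questions, such as
    # whether the product ships a component with a known advisory.
    vulnerable = {("openssl", "1.1.1k")}  # placeholder advisory data
    for c in sbom["components"]:
        if (c["name"], c["version"]) in vulnerable:
            print(f"{c['name']} {c['version']} has a known advisory.")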

The order includes an effort to build a security rating system that can be applied to IoT (Internet of Things) and other systems.  This rating system is intended to mimic the Energy Star ratings and make it easy for customers (individuals and government agencies) to determine whether a system has been evaluated for security and what its status is.

The order mandates the development of a standard set of procedures for dealing with cybersecurity incidents.  From the private sector point of view, I expect that this mandate will end up being a requirement that each party have in place standard operating procedures for dealing with these attacks and for communicating about them with the government and the public. 

An important opportunity for innovation is the mandate to improve detection of cybersecurity vulnerabilities.  Right now, we are very effective at blocking malicious activity at the perimeter (e.g., at the firewall), but we have seen that not all attacks come in through the same channels.  We would benefit from a capability that identifies evidence of an attack from within the network.  Eventually, attackers will get into the internal systems, and we will need Endpoint Detection and Response measures to detect their presence and remove them.
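
As a toy illustration of the endpoint-detection idea, the sketch below, in Python, checks running process names against a list of known-bad indicators.  Real EDR products correlate far richer telemetry (process trees, network activity, file hashes, behavior over time); every name here is invented.

    # Toy endpoint-detection sketch: flag processes that match known-bad
    # indicators. This shows only the shape of the idea, not real EDR.
    KNOWN_BAD = {"mimikatz.exe", "cryptolocker.exe"}  # placeholder indicators

    def scan(running_processes: list[str]) -> list[str]:
        return [p for p in running_processes if p.lower() in KNOWN_BAD]

    for name in scan(["explorer.exe", "Mimikatz.exe", "svchost.exe"]):
        print(f"ALERT: suspicious process detected: {name}")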

The order is very clear about the need for security-related logs to be protected by cryptographic methods.  Providers may need to adjust some logging procedures to meet this requirement.
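
One common cryptographic approach, offered here only as an illustration of what protecting logs can mean, is a hash chain: each entry’s digest incorporates the previous digest, so altering any past entry invalidates every digest that follows.

    # Sketch of a tamper-evident log using a SHA-256 hash chain.
    import hashlib

    def chain(entries: list[str]) -> list[str]:
        digests, prev = [], "0" * 64  # fixed genesis value
        for entry in entries:
            prev = hashlib.sha256((prev + entry).encode()).hexdigest()
            digests.append(prev)
        return digests

    log = ["alice logged in", "config changed", "alice logged out"]
    original = chain(log)

    log[1] = "nothing happened"            # an attacker edits history...
    assert chain(log)[1:] != original[1:]  # ...and later digests all change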

Conclusion

Overall, many of the mandates of this order involve features that are already known or are in development.  The mandates for how suppliers develop and deliver software are likely to be the most impactful.  If nothing else, this order highlights the need for enhanced cybersecurity, which should make it easier to persuade organizations of the importance of these measures. 

Tuesday, May 4, 2021

Regulating artificial intelligence: EU Proposal for a regulation laying down harmonised rules on artificial intelligence

In late April, the European Commission proposed new rules intended to support transforming Europe into a global hub for artificial intelligence (AI).  Unfortunately, from my perspective, these proposed rules fall far short of the mark.  The authors of the proposal do not seem to have an adequate understanding of what AI does, how it does it, or how to evaluate it.  They make supportive comments about funding future AI research, but the bulk of the document is about regulating AI rather than supporting it, and they miss the mark on those regulations.

The report begins with four examples of practices that would be prohibited under the regulation.  These include systems that seek “to manipulate persons through subliminal techniques beyond their consciousness” or to “exploit vulnerabilities of specific vulnerable groups such as children or persons with disabilities in order to materially distort their behaviour,” as well as systems that would “classify the trustworthiness of natural persons based on their social behaviour in multiple contexts or known or predicted personal or personality characteristics.”  They also highlight a prohibition against “the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement.”

The first of these highlighted prohibitions has nothing to do with artificial intelligence and is a long-debunked practice in any case.  Subliminal messaging is the idea, introduced by James Vicary in 1957, that one could present messages so briefly that they fall below the threshold of conscious detection (the meaning of “subliminal”), presumably so that the message could avoid filtering by the conscious mind and directly control behavior through the subconscious.  Vicary eventually admitted that he made up the original experiment on which the idea is based, and no one has ever demonstrated that it works.  Subliminal messaging does not work and never has.  How artificial intelligence would be used to disseminate it is a mystery to me, as is the report’s focus on this pseudo-phenomenon.

Similarly, the second practice that they highlight, exploiting the vulnerabilities of a person due to age or mental disability, also does not materially involve artificial intelligence.  First, it is insulting to imply that age alone makes one cognitively disabled.  Second, it is hard to see what role artificial intelligence might play in taking advantage of such a disability.  Third, there are already laws available to protect cognitively vulnerable individuals, and it is unclear why new laws, let alone new laws about artificial intelligence, are needed in this context.

Their third example also does not depend on artificial intelligence.  It prohibits the use of a social scoring system for evaluating the trustworthiness of people, in which the social score could lead to unjustified or disproportionately detrimental treatment.  One could use artificial intelligence, I suppose, to keep track of these evaluations, but they do not depend on any specific kind of computational system.  We have had creditworthiness scoring for more years than we have had effective computational systems to implement it.

Finally, their fourth highlighted example seeks to outlaw the use of “real-time” remote biometric identification.  That may be a worthy goal in itself, but again, it does not require artificial intelligence to implement it.

Except for the subliminal messaging concern, one could argue that artificial intelligence makes these to-be-prohibited activities more efficient and, therefore, more intrusive.  One could employ teams of expert face recognizers, so-called “super recognisers,” to monitor a public space, but even their prodigious capabilities, and the number of people they can monitor simultaneously, are limited relative to a machine’s.  I don’t think we know how the accuracy of these super recognisers compares with that of machines, but the machines never need breaks and never take vacations.  The real point of the proposed regulation, however, is to control surveillance, not the technology.  My point is that it is possible, and I think preferable, to make laws directly about prohibited activities rather than to try to regulate those activities through regulation of the technology.

Every form of communication has the potential to exploit people’s vulnerabilities, sometimes without their noticing it.  Influencing people’s behavior through communication is often called advertising.

Another problem with the proposal is its definition of artificial intelligence.  The authors note that the “notion of AI system should be clearly defined to ensure legal certainty.”  They then provide a definition in Annex I that is so broad that it would encompass every form of computer programming and statistical analysis.  It includes machine learning, logic- and knowledge-based approaches, knowledge representation, knowledge bases, deductive engines, statistical approaches, and search and optimization methods.  That is just about anything done with a computer.

All programming can be categorized as using logic methods.  All forms of knowledge must be stored in some kind of structure, which could be called a knowledge base.  Every form of statistical analysis is necessarily a statistical approach.

There are many other issues with these proposed regulations, but I want to focus on just one more here.  The authors intend for the proposed legislation to make sure that AI can be trusted.  But there is actually very little here about trusting AI, and much of what there is concerns trusting the people who develop and use the software.  The software is only as good as the uses to which it is put by the people who deploy it.  The report discusses several general principles with which AI usage should comply, but again these principles concern the use of the software rather than the software itself.  The closest they come, I think, to suggesting how the trustworthiness of software can be assessed is this: “For high-risk AI systems, the requirements of high quality data, documentation and traceability, transparency, human oversight, accuracy and robustness, are strictly necessary to mitigate the risks to fundamental rights and safety posed by AI and that are not covered by other existing legal frameworks.”  Other than saying that artificial intelligence systems should be trustworthy by complying with EU standards for privacy and so on, they offer almost nothing about how that trustworthiness could be effectively assessed.  They argue (correctly, I agree) that the use of such systems should not disadvantage any protected groups, but they do not offer any suggestions for how that disadvantage might be identified, and so offer no advice about how it might be mitigated.  If they want to build public trust in the use of AI and establish the legal certainty needed to promote the development of cutting-edge technology (which the regulation is intended to support), then they should provide more advice about how to determine the compliance of any system, together with its human appliers, with the principles that are so widely articulated.

The problem with this report is not any antipathy toward artificial intelligence, but rather a lack of understanding of the problems that the government really does need to solve and the means by which to solve them.  As it stands, its definition of artificial intelligence includes just about anything that can be done using rules, programs, automation, or any other systematic approach.  It seeks to regulate things that are unrelated to artificial intelligence.  I don’t argue that those things should not be regulated, but they should be regulated for their own consequences, not for the means by which they are implemented.

The key is to find a balance between the risks and benefits of any process.  How can the process be used for good while protecting the people who will use the system or be subjected to it?  How can we support innovation while providing protections?  One’s obligations to society do not depend on the technology used to identify or meet them.  On the other hand, it is very clear that artificial intelligence, and innovation in artificial intelligence, are likely to be huge economic drivers in the coming years.  If the EU, any of its member states, or any other country in the world is to succeed in that space, it will require government support, encouragement, and, again, a balance between opportunity and responsibility.

Artificial intelligence in this proposal is a kind of MacGuffin.  It is a “plot device” that brings together several disparate human ethical and legal responsibilities.  But it is the humans’ behavior that is regulated by law.  This proposal would benefit from a more clear-eyed analysis of just how that behavior should be guided for the benefit, rather than the detriment, of society.