Many of the problems posed by AI are rooted in the secrecy surrounding how it works and the data it feeds on, so allowing the national security community to take the lead on AI will only make matters worse.
President Joe Biden’s White House recently issued a memo on “strengthening US leadership in artificial intelligence,” which included, among other things, a directive for the national security apparatus to become a global leader in the use of AI.
At the direction of the White House, the national security state is expected to assume this leadership position by poaching great minds from academia and the private sector and, most worryingly, by leveraging privately operated AI models for national security purposes.
Private AI systems run by tech companies are already incredibly opaque, to our detriment. People are uncomfortable, and rightly so, with companies that use AI to decide all sorts of things about their lives, from how likely they are to commit a crime, to their eligibility for a job, to issues related to immigration, insurance, and housing.
For-profit companies rent out their automated decision-making services to all kinds of businesses and employers, and most of us affected will never know that a computer made a choice about us, let alone understand how the choice was made or be able to appeal it.
But it could get worse: The combination of private AI and national security secrecy threatens to make an already secretive system even more opaque and unaccountable.
The constellation of organizations and agencies that make up the national security apparatus is notoriously secretive. The Electronic Frontier Foundation and other civil liberties organizations have had to fight in court over and over again to try to uncover even the most basic outlines of its global surveillance of communications networks and the rules that govern it.
Giving this apparatus control over artificial intelligence would create a Frankenstein’s monster of secrecy, unaccountability, and decision-making power. As the executive branch pushes agencies to tap private-sector AI expertise, more and more information about how these AI models work will be hidden behind an impenetrable veil of government secrecy.
It’s like the old computer science axiom: “garbage in, garbage out.” Without transparency, data containing the systemic biases of our society will train AI to reproduce and amplify those biases. With secret training data and black-box algorithms that the public cannot scrutinize, bias becomes “tech-washed,” and oppressive decisions hide behind the supposed objectivity of code.
AI works by ingesting and processing vast amounts of data, so what information a model holds and how it reaches its conclusions will become fundamental to how the national security state thinks about these issues. This means the state is likely to argue not only that AI training data may need to be classified, but also that companies must, under penalty of law, keep their underlying algorithms secret as well.
As the memo says, “AI has emerged as an era-defining technology and has demonstrated significant and growing relevance to national security. The United States must lead the world in the responsible application of AI to appropriate national security functions.”
The default approach of national security agencies is to keep the public in the dark. The default approach to AI should be transparency and accountability in both training data and algorithmic decision-making. These are inherently contradictory postures, and moving AI’s rapidly expanding influence on our society into the murky realm of national security could spell disaster for decades to come.
Matthew Guariglia is a senior policy analyst at the Electronic Frontier Foundation, a digital civil rights organization based in San Francisco.