For now, the White House is keeping mum, but according to POLITICO, a draft executive order that may be published as soon as next week aims to set guidelines that would regulate and shape the AI landscape.
It would streamline high-skilled immigration, create a raft of new government offices and task forces, and set the stage for the use of more AI in nearly every facet of life touched by the federal government — from health care to education, trade to housing, and more — while asserting greater control over AI's impacts.
Mindful of the expansion of AI into every facet of life, business and officialdom, President Biden will deploy numerous federal agencies to monitor the risks of artificial intelligence and develop new uses for the technology while attempting to protect workers, according to a draft executive order obtained by POLITICO.
Though previous White House AI efforts have been criticized for lacking enforcement teeth, the new guidelines would give federal agencies influence in the US market through their buying power and their enforcement tools. Biden’s order specifically directs the Federal Trade Commission, for instance, to focus on anti-competitive behavior and consumer harms in the AI industry — a mission that Chair Lina Khan has already publicly embraced.
To coordinate the federal government’s AI activities, the order will also appoint a White House AI Council chaired by the White House Deputy Chief of Staff for Policy and staffed with representatives from every major agency.
It is widely acknowledged that the expansion of AI poses grave risks to society that must be addressed. Experts are already wrestling with the most obvious of these.
Ethical and moral dilemmas: AI systems may not share the same values, norms, and principles as humans, and may make decisions that harm or violate the rights of people or other living beings. For example, autonomous weapons may cause unintended casualties or escalate conflicts. Facial recognition may infringe on privacy or enable discrimination. Social media algorithms may manipulate users or spread misinformation.
Safety and reliability: AI systems may not behave as expected or intended, and may cause errors or failures with serious consequences. Prime examples of this risk: self-driving cars may malfunction or crash, medical diagnosis systems may misdiagnose or prescribe the wrong treatments, and financial trading systems may cause market instability or fraud.
On a broader scale, there may be social and economic impacts: AI systems may disrupt existing social and economic structures, and automation may displace workers or reduce their wages. Equally, surveillance may erode civil liberties or increase authoritarianism.
These are all serious risks to the functioning of society as we have known it, and the Biden draft order aims to contain, or even forestall, the disruptions caused by AI.