Biden Unleashes New Executive Order On AI

President Biden recently signed an executive order (EO) on artificial intelligence, which he described as a “landmark” move. The decision has sparked diverse reactions among experts in the rapidly evolving field of technology.

Christopher Alexander, Chief Analytics Officer of Pioneer Development Group, highlighted a key focus of the Biden AI executive order: the provision of “testing data” for federal government review. According to Alexander, allowing scrutiny of the proprietary “black box” algorithms is crucial to addressing potential biases in AI algorithms. He emphasized the need for a bipartisan and technocratic effort, cautioning against letting political ideology interfere, as it could exacerbate the threats posed by AI.

Biden’s executive order, touted as the “most sweeping actions ever taken to protect Americans from the potential risks of AI systems,” requires AI developers to share safety test results with the government. The order also aims to establish standards for monitoring AI safety and to implement safeguards protecting Americans’ privacy in the face of rapid technological advancement.

Jon Schweppe, Policy Director of American Principles Project, acknowledged the validity of concerns that prompted the executive order but argued that some aspects of the order prioritize the wrong issues. Schweppe advocated for direct government oversight in areas such as scientific research and homeland security but cautioned against excessive micromanagement.

He suggested a role for private oversight and proposed holding AI companies and creators liable for their AI’s actions, with citizens granted a private right of action in case of harm.

Ziven Havens, Policy Director of the Bull Moose Project, praised Biden’s order as a “decent first attempt at AI policy.” He highlighted the importance of guidelines and regulations on topics such as watermarks, workforce impact, and national security. However, Havens expressed concerns about the timeline for developing these guidelines, emphasizing the risk of falling behind in the global AI race due to bureaucratic inefficiencies.

Phil Siegel, Founder of the Center for Advanced Preparedness and Threat Response Simulation, commended the thoroughness of Biden’s order but questioned whether it attempted to address too much at once. Siegel outlined four pillars for AI regulation: protecting vulnerable populations, developing laws that account for AI’s scope, ensuring fair algorithms by eliminating bias, and ensuring trust and safety in algorithms.

While he praised the executive order’s treatment of the third and fourth pillars, he deemed it incomplete on the first two, emphasizing the need for congressional engagement to transform aspects of the order into law.