Automated Authority: AI Algorithms


At a Glance
AI algorithms can self-modify and create new code when fed new data, raising unprecedented questions about agency and accountability, even as AI produces documented harms across domains, from discriminatory hiring to dangerous content recommendations.
U.S. regulation relies primarily on industry self-governance, while the EU and China have developed more assertive oversight frameworks—the EU seeks regulatory influence, while China pursues technological self-sufficiency.
As corporate AI power concentrates, public interest approaches are needed to ensure democratic oversight and equitable benefits.
Meaningful AI governance must center on human autonomy, balancing innovation with protection against algorithmic harms.

Minds in the Machine

Within two months of its release, ChatGPT reached 100 million monthly active users by January 2023, a record pace of consumer adoption. AI-powered systems have leapt into our lives in forms ranging from speech recognition to autonomous driving to medical diagnosis. The breakneck speed and vast scope of adoption raise many policy questions, with AI algorithms at their core.

AI can be broadly defined as computer systems that perform tasks typically requiring human intelligence. Different technologies fall under this umbrella, from predictive AI (in hiring) to generative AI (ChatGPT). Algorithmic governance is a key layer in an “AI Sovereignty Stack,” as Luca Belli suggests, alongside energy, data, computing power, human talent, and cybersecurity.

Modern AI algorithms are fundamentally different from previous generations of software. Earlier algorithms included search systems designed for relevance or social media feeds coded to maximize engagement. Modern AI departs from that "rule-based" approach, as Kai-Fu Lee explains: rather than following instructions written by programmers, AI models leverage neural networks to develop pattern recognition capabilities by learning from enormous numbers of examples. In other words, AI models can self-modify and create new algorithms when fed new data.
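The contrast can be made concrete with a toy sketch (not from the article): a hand-written rule versus a simple perceptron, one of the earliest learning algorithms, that arrives at a similar decision boundary purely by adjusting weights from labeled examples. The task, line, and parameters here are illustrative assumptions.

```python
# Toy illustration: rule-based vs. learned decisions.
# Task: classify a point (x, y) as 1 if it lies above the line y = x, else 0.

def rule_based(x, y):
    """Earlier-generation approach: the programmer encodes the rule by hand."""
    return 1 if y > x else 0

def train_perceptron(examples, epochs=50, lr=0.1):
    """Learning approach: weights are adjusted from labeled examples,
    so the decision rule emerges from data rather than explicit code."""
    w0 = w1 = b = 0.0
    for _ in range(epochs):
        for (x, y), label in examples:
            pred = 1 if w0 * x + w1 * y + b > 0 else 0
            err = label - pred          # 0 when correct; +/-1 on a mistake
            w0 += lr * err * x
            w1 += lr * err * y
            b += lr * err
    return lambda x, y: 1 if w0 * x + w1 * y + b > 0 else 0

# Feed the learner examples labeled by a rule it never sees directly.
data = [((x, y), rule_based(x, y)) for x in range(-5, 6) for y in range(-5, 6)]
learned = train_perceptron(data)
accuracy = sum(learned(x, y) == label for (x, y), label in data) / len(data)
print(f"learned rule agrees with the hand-written rule on {accuracy:.0%} of points")
```

No one ever tells the perceptron "y > x"; the rule is recovered from data alone. Scaled up to billions of parameters and examples, this is the shift that lets modern models acquire behaviors their creators never explicitly programmed.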

Such abilities raise thorny questions of agency and accountability. Can a computer be an author, as Timothy Butler asked in 1982? Should AI become a legal person, as Lawrence Solum questioned in 1991? If self-driving cars cause harm, who is to blame? Should creators and marketers be held responsible for algorithmic harms? Might the Turing Test be misguided for determining agency, in light of the Chinese Room Argument, which holds that AI merely manipulates symbols without having a "mind" of its own? In one notable case, Thaler v. Perlmutter (2025), a U.S. federal appeals court ruled that artwork generated autonomously by an AI cannot be copyrighted, because the Copyright Act contemplates only human authors.

Source: New America