Blog 7: “Legal Doctrine” and “More to AI Bias” essays

     The Brookings article argues that the legal doctrine of disparate impact, which lets people challenge policies that unintentionally harm certain groups, could play a key role in preventing AI-driven discrimination. Since many algorithms don't have clear intent behind their decisions, this doctrine helps ensure that unfair outcomes are still addressed by requiring developers to explain their choices and find fairer alternatives. A related NIST report points out that AI bias isn't just about bad data; it also comes from human decisions, institutional habits, and social context. Together, these ideas highlight that holding AI accountable means looking beyond intent and focusing on the real-world effects of its decisions.

     Both articles make a strong case for rethinking how we hold AI accountable. The Brookings article rightly argues that disparate impact law could be one of the few practical tools we have to challenge algorithmic discrimination, especially when bias hides behind claims of "neutral" technology. The NIST report pushes this idea further by showing that AI bias isn't just a data problem, but also a human one. Fighting AI bias, then, should involve demanding accountability from the people and institutions that build and deploy these systems.

How might developers and companies resist or respond to being held accountable under disparate impact standards? Do you think existing legal frameworks like disparate impact are enough to regulate AI bias, or do we need new laws?
