BU Law Faculty and Students Take On Algorithmic Bias
Combating that bias is central to the work of the Technology Law Clinic and a core aspect of several other courses and initiatives at BU Law.
The white mask that Technology Law Clinic client Joy Buolamwini wore as part of her inquiry into artificial intelligence-powered gender classification products will soon be on display at the Barbican Centre in London, but BU Law students got an up-close-and-personal look last fall.
That’s because clinic students, under the guidance of Director Andrew Sellars, represented Buolamwini in her study of facial analysis technology, called Gender Shades. They helped her ensure that her research did not violate the Computer Fraud and Abuse Act, and they notified the companies involved of her results in advance, giving them an opportunity to address their products’ shortcomings, including poor performance in recognizing people of color and people with traditionally feminine features.
The research is a stark example of algorithmic bias, which has taken root in many of the technologies we use every day, guiding the online ads we see for products, jobs, or housing; determining the care we receive from government agencies; and even shaping how law enforcement agencies allocate crime-prevention resources.
Combating that bias is central to the Technology Law Clinic’s work with Buolamwini and other clients, and a core component of several other courses and initiatives at BU Law. The law school has a longstanding collaboration with the Hariri Institute for Computing and Computational Science & Engineering (the BU Cyber Security, Law, and Society Alliance) in which law professors, computer science researchers, and social scientists engage on critical questions involving technology and ethics. Last fall, two BU Law professors, Stacey Dogan and Daniela Caruso, teamed up with computer science Professor Ran Canetti and faculty from Harvard, Columbia, and the University of California at Berkeley to launch a new online course called Law for Algorithms. And this summer, Danielle Keats Citron, an internationally recognized information privacy expert and a leading scholar on algorithmic bias, is joining the BU Law faculty.
Algorithmic bias is a product of machine learning, which, “at its simplest,” Sellars says, means developing an algorithm “that can adapt based on the data you provide it.”
“If you feed it lots of data over a long period of time, it can, for lack of a better word, learn,” he says.
The problem, of course, is that the algorithms are learning from humans, who carry all kinds of implicit and explicit biases. Discrimination is hard enough to prove when a human is to blame; now, civil rights advocates and regulators are wrestling with how to prove discrimination by ever-changing algorithms.

In March, the US Department of Housing and Urban Development sued Facebook, claiming its algorithms allow advertisers to discriminate by only allowing certain types of people to see ads. In 2018, an Arkansas judge ordered that state’s Department of Human Services to stop using an algorithm to determine the number of at-home care hours people with disabilities receive because so many patients’ hours had decreased dramatically. And the ACLU and other organizations have raised questions about the use of algorithms by law enforcement agencies trying to predict where crimes will occur.
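To make that mechanism concrete, here is a minimal sketch in Python of how a model that simply adapts to past decisions can reproduce whatever bias those decisions contain. The lending scenario, the group names, and the numbers are all hypothetical, invented for illustration; they are not drawn from Buolamwini’s study or from any of the cases above.

```python
# A minimal, hypothetical sketch: an "algorithm that adapts based on the
# data you provide it" absorbing the skew in its training data.

from collections import defaultdict

# Hypothetical historical loan decisions made by human reviewers:
# (neighborhood, approved). Any disparity lives in the data, not the code.
history = [
    ("northside", True), ("northside", True), ("northside", True),
    ("northside", False),
    ("southside", True), ("southside", False), ("southside", False),
    ("southside", False),
]

# "Training": tally the historical approval rate for each neighborhood.
counts = defaultdict(lambda: [0, 0])  # neighborhood -> [approved, total]
for neighborhood, approved in history:
    counts[neighborhood][0] += int(approved)
    counts[neighborhood][1] += 1

def predict(neighborhood: str) -> bool:
    """Approve if the historical approval rate for this group is >= 50%."""
    approved, total = counts[neighborhood]
    return approved / total >= 0.5

# The learned model faithfully reproduces the skew it was trained on.
print(predict("northside"))  # True  (75% historical approval)
print(predict("southside"))  # False (25% historical approval)
```

Nothing in the code mentions race, gender, or intent; the disparity comes entirely from the historical decisions the model learned from, which is part of what makes algorithmic discrimination so difficult to detect and prove.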
More than a decade ago, Citron raised the idea of “technological due process,” the ability to have notice of and challenge decisions made by non-human arbiters. She argued that one way to ensure such due process would be to have routine algorithmic auditing by the US Federal Trade Commission, which protects consumers against unfair business practices. More recently, Citron has advocated for abandoning punitive algorithmic decision-making in favor of more “pro-social” uses of the technology, such as offering translation for non-English speakers interacting with government agencies.
“We’re at a moment of real uncertainty, and typically when it comes to technology, we adopt first and ask questions later,” Citron says. “The way we’re doing this has significant consequences for people’s lives and opportunities.”
Dogan agrees.
“The question of how the law should deal with this kind of bias is extremely difficult,” she says. “Part of the problem is that legal scholars and policymakers lack an appreciation of the complexities of machine learning. Technologists, on the other hand, don’t always appreciate the complexity of the legal and regulatory framework in which they work. We’re at a stage where we really need to learn from one another, to develop a sophisticated understanding on the legal side of what is happening in technology, and vice versa. That’s why we created this course, and it’s what we’re striving toward in our collaboration with the Hariri Institute.”
The approach seems to be working. Julia Schur, who wants to practice technology law, took the Law for Algorithms course last fall.
“I didn’t take that class looking for answers; I took it looking for which questions to ask,” she explains. This year, Schur has also been helping clients in the Technology Law Clinic, including with algorithmic bias issues. Next year, she will serve as a research assistant for Citron.
“To give computer scientists the best advice, you need to understand their language, and they need to understand yours,” she says. “I think that’s where the courses we’ve created are so successful: we’re teaching future lawyers not to be afraid of new things, but to meet the challenge head-on.”