Internet & Society: The Technologies and Politics of Control (Spring 2019)

“Our Machines Now Have Knowledge We’ll Never Understand” by David Weinberger, Wired (2017).

The format in which machine learning algorithms represent the “knowledge” that drives their predictions is often not human-readable. What are some of the dangers of this lack of interpretability? Are there deployment situations in which we might accept uninterpretable algorithms that nonetheless produce good results?
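
To make the first sentence concrete, here is a minimal sketch (not part of the assigned reading, and assuming the scikit-learn library is available). It fits a small decision tree, whose learned rules can be printed and read as if/then statements, alongside a small neural network, whose learned "knowledge" is a set of weight matrices with no obvious human-readable meaning.

    # Sketch only: contrasts an interpretable model with one whose learned
    # parameters are not meaningfully human-readable. Assumes scikit-learn.
    from sklearn.datasets import load_iris
    from sklearn.neural_network import MLPClassifier
    from sklearn.tree import DecisionTreeClassifier, export_text

    iris = load_iris()
    X, y = iris.data, iris.target

    # A shallow decision tree: its learned "knowledge" can be printed as
    # if/then rules that a person can follow and audit.
    tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
    print(export_text(tree, feature_names=iris.feature_names))

    # A small neural network: its learned "knowledge" lives in weight
    # matrices. Printing them yields arrays of numbers whose individual
    # entries have no clear human-readable interpretation.
    net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                        random_state=0).fit(X, y)
    for layer, weights in enumerate(net.coefs_):
        print(f"layer {layer} weight matrix, shape {weights.shape}:")
        print(weights)

Both models may classify the toy dataset well, but only the first produces a representation a human can inspect and explain, which is the gap the discussion questions ask you to weigh.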