TY - CONF
T1 - Politics of Adversarial Machine Learning
AU - Albert, Kendra
AU - Penney, Jonathon
AU - Schneier, Bruce
AU - Kumar, Ram Shankar Siva
N1 - Kendra Albert et al, "Politics of Adversarial Machine Learning" (Paper delivered at Towards Trustworthy ML: Rethinking Security and Privacy for ML Workshop, Eighth International Conference on Learning Representations (ICLR), Virtual Conference, 26 April 2020), archived online: ICLR 2020 <https://iclr.cc/virtual_2020/workshops_6.html> [perma.cc/6HDW-KCGV].
PY - 2020/04/26
Y1 - 2020/04/26
AB - In addition to their security properties, adversarial machine-learning attacks and defenses have political dimensions. They enable or foreclose certain options both for the subjects of machine learning systems and for those who deploy them, creating risks for civil liberties and human rights. In this paper, we draw on insights from science and technology studies, anthropology, and the human rights literature to examine how defenses against adversarial attacks can be used to suppress dissent and limit attempts to investigate machine learning systems. To make this concrete, we use real-world examples of how attacks such as perturbation, model inversion, or membership inference can be used for socially desirable ends. Although the predictions of this analysis may seem dire, there is hope: efforts to address human rights concerns in the commercial spyware industry provide guidance for similar measures to ensure that ML systems serve democratic, not authoritarian, ends.
KW - Artificial Intelligence
KW - AI
KW - Machine Learning
KW - ML
KW - Security
KW - Socio-Technical Systems
KW - Adversarial Machine Learning
KW - Privacy
KW - Human Rights
KW - Spyware
KW - Politics of Technology
KW - Politics of Machine Learning
UR - https://digitalcommons.schulichlaw.dal.ca/scholarly_works/1808
M3 - Presentation
ER -