Countering Privacy Nihilism
Of growing concern in privacy scholarship is artificial intelligence (AI), as a powerful producer of inferences. Taken to its limits, AI may be presumed capable of inferring "everything from everything," thereby making untenable any normative scheme, including privacy theory and privacy regulation, that rests on protecting privacy according to categories of data: sensitive versus non-sensitive, private versus public. Discarding data categories as a normative anchor in privacy and data protection, as a result of an unconditional acceptance of AI's inferential capacities, is what we call privacy nihilism. An ethically reasoned response to AI inferences requires a sober consideration of AI capabilities rather than issuing an epistemic carte blanche. We introduce the notion of conceptual overfitting to expose how privacy nihilism turns a blind eye toward flawed epistemic practices in AI development. Conceptual overfitting refers to the adoption of norms of convenience that simplify the development of AI models by forcing complex constructs to fit data that are conceptually under-representative or even irrelevant. Still, while conceptual overfitting serves as a helpful device for countering normative suggestions grounded in hyperbolic AI capability claims, AI inferences do strain any privacy regulation that hinges protections on restrictions around data categories. We therefore propose moving away from privacy frameworks that focus solely on data type and neglect all other factors. Theories such as contextual integrity instead evaluate the normative value of privacy across several parameters, including the type of data, the actors involved in sharing it, and the purposes for which the information is used.
Severin Engelmann, Helen Nissenbaum
Computing Technology, Computer Technology
Severin Engelmann, Helen Nissenbaum. Countering Privacy Nihilism [EB/OL]. (2025-07-24) [2025-08-10]. https://arxiv.org/abs/2507.18253.