Owen C. King | research overview

I work in two areas of ethics. First, at the intersection of normative ethics and metaethics, I examine kinds of value, like well-being, that pertain to individual persons and their lives. Second, in applied ethics, I investigate the moral issues raised by new computing technology, especially those related to the automated prediction of our attributes and thoughts.

Well-being and related kinds of value

In short, the central thesis of my dissertation was this: When philosophers and other people have talked about well-being, more than one kind of value has been in play. In the dissertation, I worked to tease apart these different kinds of value. I have been working on a few related papers, all of which share the goal of exposing the structure of the conceptual landscape around well-being. The payoff is greater clarity in our thinking about what benefits people. This, in turn, may make us (both as philosophers and citizens) more acute in reflecting on, and more effective in facilitating, greater good for people.

"Pulling Apart Well-being at a Time and the Goodness of a Life" Ergo (2018)

I argue that we must distinguish between a person's well-being at a time and the goodness of her life as a whole. Frequently, the concept of well-being and the concept of a good life are used interchangeably. And, even when they are distinguished, it is commonly assumed that the relationship between them is straightforward. I argue that this is a mistake. Of course it is true that the goodness of a person's life partly depends on her well-being at the moments of her life. But the goodness of a life depends also on facts other than momentary well-being. Although others have noted this and hence argued that the goodness of a life cannot simply be the sum of the well-being in the life, I show that the same considerations support a much stronger conclusion: We have no guarantee that increases in well-being, even all else equal, will result in a better life on the whole. The result is that we have at least two distinct concepts of what is good for a person, which ought to be theorized and assessed independently.

"The Good of Today Depends Not on the Good of Tomorrow: A Constraint on Theories of Well-Being" Philosophical Studies (2020)

This article addresses three questions about well-being. First, is well-being future-sensitive? I.e., can present well-being depend on future events? Second, is well-being recursively dependent? I.e., can present well-being (non-trivially) depend on itself? Third, can present and future well-being be interdependent? The third question combines the first two, in the sense that a yes to it is equivalent (given some natural assumptions) to yeses to both the first and second. To do justice to the diverse ways we contemplate well-being, I consider our thought and discourse about well-being in three domains: everyday conversation, social science, and philosophy. This article’s main conclusion is that we must answer the third question with no. Present and future well-being cannot be interdependent. The reason, in short, is that a theory of well-being that countenances both future-sensitivity and recursive dependence would have us understand a person’s well-being at a time as so intricately tied to her well-being at other times that it would not make sense to consider her well-being an aspect of her state at particular times. It follows that we must reject either future-sensitivity or recursive dependence. I ultimately suggest, especially in light of arguments based on assumptions of empirical research on well-being, that the balance of reasons favors rejecting future-sensitivity.

"De Se Pro-Attitudes and Distinguishing Personal Goodness" [under development]

Think of well-being and the goodness of a person's life as species of what is good for a person—i.e., personal goodness, in contrast to goodness simpliciter. And consider the class of response-dependence theories of personal goodness that say, roughly, that what is good for a person (in some way) is what she is disposed to desire or to favor under certain conditions. A prima facie problem for these theories is that not everything a person is disposed to desire or to favor seems good for her. A person may desire preservation of remote wetlands. But, if the wetlands are preserved, that is not in any straightforward sense good for her; it does not increase her well-being or improve her life, especially if she is unaware of it. The solution I advance is that the desires that are relevant to personal goodness have a special sort of content: they have de se or essentially indexical content. If this is right, it shows us something distinctive about what is good for persons; it shows how well-being, goodness of a life, and the like, are bound up with a distinctively first-personal kind of thinking.

Ethics of automated prediction: artificial social cognition

Modern AI systems based on machine learning excel at making predictions about unobserved phenomena on the basis of instances already observed. An ethical issue for automated prediction emerges when we notice that judgments and inferences, even when produced by a reliable mechanism, may not be ethically neutral. For example, judgments based on certain stereotypes may be objectionable, even when accurate. Humans tend to be guided and restrained by a sense of discretion when making these judgments. An automated system designed with a narrow set of objectives is not similarly restrained. I have become convinced that this gives rise to some morally worrisome scenarios—particularly, when machine learning systems are tasked with predicting, classifying, or describing what people are thinking. I have been working on several papers exploring these issues.

"Machine Learning and Irresponsible Inference: Morally Assessing the Training Data for Image Recognition Systems" In On the Cognitive, Ethical, and Scientific Dimensions of Artificial Intelligence, Berkich & d’Alfonso (eds.), Springer. (2019)

[chapter PDF]
Just as humans can draw conclusions responsibly or irresponsibly, so too can computers. Machine learning systems that have been trained on data sets that include irresponsible judgments are likely to yield irresponsible predictions as outputs. In this paper I focus on a particular kind of inference a computer system might make: identification, on the basis of photographic evidence, of the intentions with which a person acted. Such inferences are liable to be morally objectionable because of a particular way in which they are presumptuous. After elaborating this moral concern, I explore the possibility that carefully procuring the training data for image recognition systems could ensure that the systems avoid the problem. The lesson of this paper extends beyond just the particular case of image recognition systems and the challenge of responsibly identifying a person’s intentions. Reflection on this particular case demonstrates the importance (as well as the difficulty) of evaluating machine learning systems and their training data from the standpoint of moral considerations that are not encompassed by ordinary assessments of predictive accuracy.

"Presumptuous Aim Attribution, Conformity, and the Ethics of Artificial Social Cognition" Ethics and Information Technology (2020)

Imagine you are casually browsing an online bookstore, looking for an interesting novel. Suppose the store predicts you will want to buy a particular novel: the one most chosen by people of your same age, gender, location, and occupational status. The store recommends the book, it appeals to you, and so you choose it. Central to this scenario is an automated prediction of what you desire. This article raises moral concerns about such predictions. More generally, this article examines the ethics of artificial social cognition—the ethical dimensions of attribution of mental states to humans by artificial systems. The focus is presumptuous aim attributions, which are defined here as aim attributions based crucially on the premise that the person in question will have aims like those of superficially similar people. Several everyday examples demonstrate that this sort of presumptuousness is already a familiar moral concern. The scope of this moral concern is extended by new technologies. In particular, recommender systems based on collaborative filtering are now commonly used to automatically recommend products and information to humans. Examination of these systems demonstrates that they naturally attribute aims presumptuously. This article presents two reservations about the widespread adoption of such systems. First, the severity of our antecedent moral concern about presumptuousness increases when aim attribution processes are automated and accelerated. Second, a foreseeable consequence of reliance on these systems is an unwarranted inducement of interpersonal conformity.
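
To make the kind of prediction at issue concrete, the toy sketch below (my own illustration, not drawn from the article; all names and data are hypothetical) shows a store recommending the novel most often chosen by customers who match a shopper's demographic profile, which is exactly the presumptuous pattern described above.

    # Toy illustration (hypothetical names and data): the store "predicts" what a
    # shopper wants by looking only at what demographically similar customers
    # chose, and recommends on that basis.
    from collections import Counter

    # Each record: (age_band, gender, location, occupation, novel_purchased)
    purchase_log = [
        ("30-39", "F", "Toledo", "teacher", "Novel A"),
        ("30-39", "F", "Toledo", "teacher", "Novel A"),
        ("30-39", "F", "Toledo", "teacher", "Novel B"),
        ("20-29", "M", "Akron",  "nurse",   "Novel C"),
    ]

    def presumptuous_recommendation(shopper_profile, log):
        """Return the novel most often bought by customers matching the profile.

        The aim attribution is presumptuous in the sense defined above: it rests
        on the premise that this shopper will want what similar shoppers wanted.
        """
        matches = [novel for (*profile, novel) in log if tuple(profile) == shopper_profile]
        if not matches:
            return None  # no superficially similar customers on record
        return Counter(matches).most_common(1)[0][0]

    print(presumptuous_recommendation(("30-39", "F", "Toledo", "teacher"), purchase_log))
    # -> Novel A

Nothing in this procedure consults anything about the shopper herself beyond her resemblance to others, which is the source of the moral worry.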

"Ethics of Artificial Social Cognition: Groundwork and Exemplification" [under development; presented at the KU Leuven Technology and Society Conference]

This paper delineates and engages the ethics of artificial social cognition (EASC). Social cognition comprises the processes by which we understand one another and ourselves. Central questions in the psychology of social cognition concern how we figure out what others are thinking. Beyond psychological questions about how social cognition actually operates, we can raise moral questions about how it ought to operate. Indeed, a distinction between the psychology of social cognition and the ethics of social cognition is already evident in studies of stereotyping: Psychologists study how stereotypes work, while ethicists assess the moral status of stereotyping. We confront the ethics of artificial social cognition when noting that artificial systems—especially those that issue precise predictions about characteristics of individual persons on the basis of massive datasets—employ modes of inference quite different from our ordinary human modes of social cognition.

This paper offers two related arguments for attention to EASC: one focusing on the rationalizing role of the attribution of psychological states, and one resting on the fact that some social cognitive predictions are especially likely to constitute self-fulfilling prophecies. The paper culminates with two brief substantive inquiries in EASC: first, engaging recent controversies about the use of artificial intelligence for prediction of criminal recidivism; second, articulating a concern that automated systems, unlike humans, may exercise inadequate forbearance in the ascription of mental states to persons whose thoughts are evolving.

Ethics of automated prediction: self-fulfilling prophecies

Automated prediction also raises problems when the predictions it issues are self-fulfilling. Because these predictions affect the very outcomes they predict, they call for evaluation in terms of categories beyond accuracy, including ethical categories. I have been collaborating with Mayli Mertens on a couple of papers about self-fulfilling prophecies.

"Self-fulfilling Prophecy in Human and Machine Prediction" [under development]

We worry about self-fulfilling prophecies at moments when we suspect a purportedly innocent prediction may not be so innocent after all. A self-fulfilling prophecy is not simply a true prediction, but also a prediction that somehow brings about its own truth. As such, self-fulfilling prophecies constitute a distinct form of agency, with practical and ethical dimensions, which should be subject to the relevant standards of evaluation, not merely to the epistemic standards that bear on typical predictions. Although self-fulfilling prophecies are discussed in a variety of fields, including sociology, economics, medicine, and computing, there exists no account of self-fulfilling prophecies that combines precision with generality sufficient to illuminate the phenomenon in all of these areas. The aim of this article is to offer a satisfactorily precise and general account of self-fulfilling prophecies that allows us to discern exactly when and how they may be problematic. Crucially, our account encompasses the two interlocking sides of self-fulfilling prophecy: first, how the relevant prediction is put to practical use, which we call the prediction’s employment; second, how the subject of the prediction responds, which we call system sensitivity. Our normative critique demonstrates the distinctive tendency of self-fulfilling prophecies to generate undesirable feedback loops, without prompting any error signals. We close with a practical guide for catching and diagnosing self-fulfilling prophecies.

"Predicting Quality-of-Life for Prognostication and Policy: Two Levels of Self-Fulfilling Prophecy" (with Mayli Mertens) [under development]

Medical prognosis is commonly understood as an objective prediction of medical outcome. However, the act of prognostication itself may powerfully influence the very outcomes it predicts. Specifically, medical prognosis may sometimes constitute a self-fulfilling prophecy. We demonstrate this with a close examination of the medical and social consequences of the advent of particular prognostic technologies. Neuroprognostic tools such as somatosensory evoked potential tests (SSEP) and, very recently, continuous electroencephalogram monitoring (cEEG) are used to predict the neurological outcome of coma after cardiac arrest. We show that predictions of poor neurological outcome, when tied to end-of-life decisions such as withholding and withdrawal of life-sustaining treatment, create self-fulfilling prophecies at two distinct levels. First, we explicate the suggestion, already in the medical literature, that a particular act of prognosis may constitute a self-fulfilling prophecy. Second, self-fulfilling prophecies may also occur at the broader social level: predictions that certain lives will not be worth living shape policies that, in turn, affect whether those lives are worth living. These two levels of self-fulfilling prophecy complicate decision-making regarding innovation and policy in significant ways. We reflect on how best to move forward with the development of neuroprognostic tools in light of these effects.

Computing and professionalism

I am especially interested in one issue in computing ethics that, unlike the topics of the papers just described, is not directly related to automated prediction. The issue is whether, and under what conditions, computing should be considered a profession, in the way that medicine, engineering, librarianship, and law are professions.

"Anti-features, the Developer-User Relationship, and Professionalism" [under development]

This paper is an attempt to develop some conceptual resources—particularly the concept of an anti-feature—helpful for articulating ethically significant aspects of the relationship between software developers and end-users. After describing many examples of anti-features and considering several definitions proposed by others, I explain what all anti-features have in common. Roughly, an anti-feature is some software functionality that (1) is intentionally implemented, (2) is not intended to benefit the user, and (3) makes the software worse from the standpoint of the intended user. (This makes anti-features distinct from both features and bugs.) I argue that, if we are to consider software development a profession, a condition on a person having the status of professional software developer is that she exercise forbearance regarding the implementation of anti-features.
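
As a concrete, hypothetical illustration of the three conditions (my own example, not one from the paper), consider the following miniature sketch of an anti-feature:

    # Hypothetical illustration of an anti-feature: functionality that (1) is
    # intentionally implemented, (2) is not intended to benefit the user, and
    # (3) makes the software worse from the intended user's standpoint. It is
    # neither a feature nor a bug: extra code was deliberately written to
    # degrade the product.

    def export_document(text: str, user_has_paid: bool) -> str:
        """Export a document, intentionally degrading output for unpaid users."""
        if not user_has_paid:
            # The anti-feature: truncating and watermarking takes extra effort
            # and serves only to pressure an upgrade; omitting this branch
            # would have been both easier and better for the user.
            return "[UNREGISTERED VERSION]\n" + text[:200]
        return text
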

For a basic overview, see my essay "Is It a Feature? Is it a Bug? No, It’s an Antifeature" at the Center for Digital Ethics and Policy at Loyola University Chicago.