Retributivism, Punishment, and Moral Value

In the comments on another post (https://secularfrontier.infidels.org/2017/01/05/memoriam-derek-parfit-1942-2017/), the contrast between retributivist and consequentialist models of punishment came up. Here is a thought experiment I present to my classes on this contrast.

Suppose that, in lieu of life imprisonment for major crimes, the technology exists to plug offenders into a Matrix-like situation: they are to be confined for the rest of their lives in a completely virtual reality. Suppose that you are in charge of determining the character of the virtual-reality environment for offenders. To make things simple, suppose you have two basic choices:

(A) A virtual paradise – simulated natural beauty, sensations of pleasure and physical comfort, diverse and varied opportunities for exploration, and access to a wide variety of intellectual resources (books, movies, etc.).

(B) A virtual wasteland – a simulated world that is bleak, barren, boring, sparse, colorless, and physically uncomfortable (sensations of extreme cold or heat, hunger, etc.).

Which virtual reality do you think is morally appropriate for the worst criminal offenders?

Of course, if offenders knew they would end up in the virtual paradise, this would defeat the deterrent purpose of punishment, so we can stipulate that everyone but you believes offenders will be subjected only to the virtual wasteland. In fact, if the prohibition of cruel punishment were to be abolished, you could even make the virtual wasteland into a virtual hell, where offenders will suffer nothing but torment until death (after which, according to many theists, things will only get worse).

It is also necessary, for this thought experiment, to stipulate that offenders cannot be disconnected once they are wired in – any attempt to do so would kill the person.

It is also necessary to stipulate that each offender will be the sole “inhabitant” of the virtual world – offenders will not share their virtual prison with any other real individuals, though perhaps the world might be stocked with artificial inhabitants (what, in computer-gaming parlance, would be classified as NPCs).

So, if no positive consequentialist purpose would be served by subjecting offenders to the virtual wasteland rather than the virtual paradise, then are there any remaining moral considerations that would suggest the right thing to do is to choose the wasteland over the paradise?

Speaking for myself, I used to lean toward the retributivist model. The thought of, essentially, rewarding people for egregiously immoral behavior by wiring them up to the paradise situation just seemed wrong. I imagined an offender thinking something like “Hah! I killed all those orphans and they said they were going to send me to a wasteland, and look – this place is awesome! I wish I had killed even more of those kids!” It is hard not to feel revulsion at the character of such a person, even if it never leads to the performance of any further objectionable actions. I would like to say that having such a character is somehow intrinsically bad. But the more I have thought about how to justify such a stance, the less able I feel to do so. States of character just don’t seem to have intrinsic value or disvalue. Instead, they seem to have only instrumental value, insofar as they affect the ways people interact with or respond to others. But since there are no others in the virtual reality, states of character no longer have any moral value at all. Those states would be objectionable in someone still embedded in the real world, but for someone who will never interact with the real world again, whatever states of character they may have are morally irrelevant.

Is there anything in the virtual environment that WOULD have intrinsic and not merely instrumental value? I incline toward the view that only positive or negative experiential states (pleasure, pain, happiness, unhappiness) do. In a world with only one sentient individual, there is no right or wrong (unless the concept of self-wrong makes sense) – only good and bad. The choice to put someone in a permanent virtual reality in which they are, and forever will be, the only inhabitant is therefore nothing more or less than the choice of whether they will be in a morally better or a morally worse world.

What becomes, then, of the concept of desert? Doesn’t the offender deserve the worse world? I have come to think, though, that the concept of desert cannot plausibly be isolated from contexts of future interaction with others. When someone is given what we think they deserve, this signals affirmation or condemnation of certain strategies of interaction that we value or disvalue.