Tuesday, October 28, 2025

Book review: "Culpability," by Bruce Holsinger

This new novel is an excellently written and gripping tale of the possible moral consequences of AI development. At the end of each chapter I found it difficult to stop reading.

I don’t usually enjoy novels or articles about ethics or morality (ethics being personal codes of behavior, morality being group codes; for brevity, I’ll call both “morality” in this essay), because morality is subjective, expressing an individual’s reaction to a subject, whether attraction or revulsion. Thus many cultures have found human sacrifice to be moral, while we don’t (I hope). We can debate forever whether free speech is morally correct no matter what its subject, to no avail, because there is no absolute cosmic backdrop of right and wrong determining which subjects are permissible, whether political expression or pornography. We have intense arguments about abortion that we will never resolve, because the ultimate criterion is whether or not someone finds abortion sometimes necessary for ultimately humane reasons.

In Culpability, however, Holsinger taps into moral questions arising from AI, questions so new to human cultures that we may need to argue about an absolute right or wrong just so we can decide, as a group, how to handle the technology.

While avoiding spoilers, I’ll just reveal that the novel deals with “moral” questions concerning AI-powered auto-drive in cars, with applications to medical, military, and social questions (such as: Is it moral to create an AI “friend” for a vulnerable pre-teen girl?). Holsinger does not resolve such questions; he presents them as they present themselves.

There is this question: Would auto-drive be culpable if the familiar trolley conundrum arises, in which a trolley conductor must decide which track to switch to: one that will kill five children, or another that will kill one old man? Most people would choose to spare the children, but can killing the old man, taken in itself, be considered “good,” or “moral”? What if such a decision needs to be programmed into a car’s auto-drive, as it and many similar decisions surely will? Would the person programming the system bear some responsibility for killing the old man? At some point, as AI systems appear more conscious, will the actions they initiate be judged moral or immoral? Or is it fair to say that AI systems are and will remain amoral? In that sense, can a human be amoral, making decisions along practical guidelines, without personal reference to societal conceptions of “right” and “wrong”?

The novel outlines the state of current public discourse on AI: intense, but drawing no conclusions. There are constant calls for standards and limitations, but seemingly no progress in that direction.

For reference, we might consider the development of the atomic bomb at the close of World War II. There was no public discussion about the wisdom of this effort, about the state of constant tension and possible extinction it would throw our species into, a state that has endured to the present, and will continue into the future as far as we can see.

But what if we had publicly debated the development of nuclear weapons? Would it have made a difference? Given what we see in the world today, the bomb would have been developed regardless of any debate. Likewise with AI: no matter how much alarm we express or how many laws we pass to control it, every sci-fi application you can think of will be pursued, undertaken in secret as the atom bomb was.

Thus morality is expressed only where it has force.

Then, what is morality? It does not appear to entail universal agreement; it is subjective. Does this mean that “ethics” and “morality” are meaningless? I’d rather think it means that a culture should adopt a morality that serves it well and interacts positively with surrounding cultures and the world.

Can we achieve connections between public and covert moralities? Such a goal is impeded by leaders like President Trump, who promote one fantasy morality for the people on the receiving end (e.g., “We need weaponized AI technology to defend us from hostile countries that are developing it first”) while promoting a different morality for covert groups (e.g., “We need weaponized AI technology to control our own people”).

At least we have the ability to think about what’s happening, though this ability may not last. What we really need is a political force that, unlike such impotent relics as the Democratic and Republican parties, will have some power to determine our coming moralities.