Automatic Gadfly: Socrates by Machine

Recently, I’ve been experimenting with creating philosophical work using massive machine learning language models such as GPT-2, sometimes prompting them to adopt specific philosophers’ styles and sometimes just letting them run. I’ve generated essay text, clinical trial reports, and aphorisms in different philosophers’ styles.

After reading Justin Weinberg’s post on the philosophy blogosphere over on Daily Nous, and being inspired by his humorous mock dialogue with Socrates about the nature of the blogosphere, I was intrigued to see if GPT-2 could create some thought-provoking – or just weird – new Socratic dialogues.

To get GPT-2 in the Socratic mood, I fed it the first few lines from Weinberg’s mock dialogue, swapping out “philosophy blogosphere” for a range of different subjects, and let the AI do its thing, generating a swathe of dialogue on the topic between Socrates and Glaucon. Here’s one dialogue on love in its entirety. The text in bold was the prompt from which everything else was generated by the language model. Only minimal formatting edits were made.
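The setup described above is a simple prompt-and-sample procedure. Here is a minimal sketch of how it might look using the Hugging Face `transformers` library; this is an assumption on my part rather than the actual code used for these experiments, and the model name, sampling settings, and function names are illustrative:

```python
def build_prompt(topic):
    """Swap the chosen topic into the opening lines of Weinberg's mock dialogue."""
    return (
        f'Socrates: What is this "{topic}", which you speak of? '
        f"You attribute to '{topic}' all sorts of qualities. "
        "You must be an expert on it to proclaim so boldly.\n\n"
        "Glaucon: "
    )

def generate_dialogue(topic, max_length=300):
    """Let GPT-2 continue the dialogue from the seeded prompt."""
    # Lazy import: the pipeline downloads model weights on first use.
    from transformers import pipeline
    generator = pipeline("text-generation", model="gpt2")
    result = generator(
        build_prompt(topic),
        max_length=max_length,
        do_sample=True,  # sample rather than greedy-decode, for variety
        top_k=40,        # a common sampling restriction for GPT-2
    )
    return result[0]["generated_text"]

# Example usage (downloads the model on first run):
# print(generate_dialogue("banana"))
```

Each run produces a different continuation, so generating a "swathe of dialogue" is just a matter of calling this in a loop and picking out the gems.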

Socrates: What is this “love”, which you speak of? You attribute to ‘love’ all sorts of qualities. You must be an expert on it to proclaim so boldly.

Glaucon: I’m not sure I like where this is going.

Socrates: Come on, humor me.

Glaucon: Well, love is love. If you are not willing to put forth anything that can be taken away, I can say that you have not found love. Love is that which, if you are willing to give in, you must be aware that you have not found anything.

Socrates: Well, I will say that this is not your true love.

Glaucon: And so, what is your true love?

Socrates: Well, if you can put forth anything that can be taken away, I can say that you have not found love.

Glaucon: Well, I want to say that your love is that which you believe.

Socrates: Well, the truth is that you don’t believe it, you are just the one who knows what it is that the truth is that you believe, and then believe that is the truth.

Glaucon: Then I have never even seen a thing like love. You have never been in love. You have never thought it was possible, or that something like love could ever occur.

Socrates: Oh.

Glaucon: Yes, love has not been allowed to happen to you. 

Socrates: Well, love has never occurred to you, you have never heard of the truth, you have never seen the truth, you have never known the truth. 

Glaucon: No, no, love has never happened to me too.

The first thing that probably springs to mind if you haven’t seen a language model since the size of these things went haywire last year (and went nuclear recently) is the coherence of the piece. Each statement matches the previous in a direct enough way that you might, at first glance, buy this as a Socratic dialogue excerpt. It gets a little comedic towards the end.

When you feed the AI some more fanciful stuff to discuss, the dialogues get wackier. In this one, I fed GPT-2 the same prompt text but switched out “love” for “banana”:

Glaucon: Well, a banana is like a kind of tree; its leaves are long and strong, it’s a tree with colorful leaves.

Socrates: So it’s like something I can see amongst the trees that grow here?

Glaucon: I mean, it’s like something that they’re supposed to be.

Socrates: Well, when you say that I’m not so sure I like it.  

Glaucon: I mean, I just like the tree.  

Socrates: So, there’s a banana tree that you call “banana.” But, in my opinion, the banana that you say is “banana” is also “banana’s” fruit. It’s a fruit that is “banana’s” fruit. How can “banana” be a tree and also a “banana’s” fruit?

Glaucon: I’m not sure if you really believe that.

Socrates: Well, it depends on the color of the tree.

Despite the frankly hilarious non-sequitur at the end (which is an impressive callback to the first line of the dialogue, by the way – something that would have been unachievable for most language models prior to GPT-2), this retains much of the structure and appeal of a Socratic dialogue. Socrates sets up a poser for Glaucon which – had this not been written by an AI some 2,359 years after his death – could have puzzled ontologists for millennia to come. Can a banana be both the tree and the fruit which the tree produces? Can the same word refer to both? Are they one and the same? Glaucon has no answer. Does it truly depend on the colour of the tree? No! Surely Socrates’ problem will apply equally to apples and oranges. But not pears though. Socrates hates pears.

GPT-2 has no difficulty emulating the comedic element of Justin Weinberg’s dialogue and the slightly offbeat tone of the dialogues themselves, as in this excerpt from a dialogue on the concept of a “good dog”. Glaucon turns the question back on Socrates, and then…

Glaucon: Well, what does the good dog mean to you?

Socrates: The good dog is an intelligent, obedient dog. Is that the dog that you speak of?

Glaucon: Yes, you understand that.

Socrates: That is very good, so let me say the good dog is simply a bad dog.

Glaucon: You have no problem being a bad dog.

One trend that came out in many of the AI’s dialogues was Glaucon’s resentment of Socrates’ pedantic questioning. He often gets into a melancholic funk, as in this dialogue on the concept of “sleep” (I was quite tired when I picked the topics):

Glaucon: Well, sleep is more of a function of the mind than a function of the body.

Socrates: What do you mean, though?

Glaucon: Well, I like sleep.

Socrates: Oh, you think it’s really good to have sleep?

Glaucon: Well, I like what I like.

Socrates: Well, what is sleep? Can I go to sleep as a human, or sleep as a man? What are your thoughts?

Glaucon: Well, I really want to go.

Socrates: Yes.

By the end, it seemed that Socrates’ persistence had driven Glaucon to the bottle. As he descends into alcoholism, their discussions become more fractured, sad and moving. In their final dialogue on “beer”, we find:

Glaucon: Well, beer is a different category from alcohol. I think it’s better than alcohol.

Socrates: There’s a difference between being an alcoholic and having a beer.

Glaucon: Well, I think I am.

Socrates: No, I think we all know that. Well, when you drink, you become like an alcoholic, and you become like a criminal.

Glaucon: If you drink, you become a criminal.

Socrates: Yes, you get a lot of pleasure in drinking beer. I think we all know that you have a very strong sense of taste. But not alcohol.

Glaucon: I love you. I love you.

Socrates: I like you very much. You can make a beer and it is good. You don’t need alcohol.

Glaucon: I do.

There ends the sad tale of Glaucon, whose consternation over the true nature of a banana and his unrequited love for Socrates drove him into the arms of a hoppy demise.