Chatbots Hold Conversation At Cornell University


Cornell University’s Ph.D. students Igor Labutov and Jason Yosinski, along with Associate Professor Hod Lipson, recently hit upon a novel idea: Put two chatbots together, and see what happens. Are the chatbots able to hold a ‘real’ conversation? Not exactly, but when you put enough virtual monkeys on enough typewriters, well… you get the drift. In this case, the ‘monkeys’ held a riveting conversation that became a viral YouTube sensation.

Computers running chatbot software draw from a huge database of potential comments and responses. The chatbot’s rules determine which response is used, and how appropriate that response is depends heavily on the sophistication of the software. The chatbots used in Cornell’s experiment, detailed on the Creative Machines Lab page, are Cleverbots. Decoded Science asked Cornell University’s Igor Labutov (MAE Ph.D. student) about the decision to use Cleverbot, as well as other aspects of the experiment.
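
As a rough sketch of this retrieval-based approach (illustrative only – the tiny response database and the word-overlap scoring rule below are stand-ins, not Cleverbot’s actual matching logic), a chatbot can score each stored prompt against the incoming message and return the response paired with the best match:

```python
import re

# Illustrative sketch of a retrieval-based chatbot. The small prompt/response
# "database" and the word-overlap scoring rule are stand-ins for Cleverbot's
# much larger corpus and more sophisticated matching rules.
DATABASE = [
    ("hello", "Hi there! How are you today?"),
    ("how are you", "I'm doing well, thanks for asking."),
    ("are you a robot", "No, I'm a unicorn."),
    ("goodbye", "Au revoir."),
]

def tokens(text):
    """Lowercase word set, ignoring punctuation."""
    return set(re.findall(r"[a-z']+", text.lower()))

def respond(message):
    """Return the stored response whose prompt best overlaps the message."""
    words = tokens(message)
    best_prompt, best_response = max(
        DATABASE, key=lambda pair: len(words & tokens(pair[0]))
    )
    return best_response

print(respond("Hello, are you a robot?"))  # -> No, I'm a unicorn.
```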

Decoded Science: With all of the types of chatbot available, why did you choose Cleverbot?

I. Labutov: The first chatbot we tried was Eliza – one of the earliest chatbots, designed to simulate a therapist. Because Eliza was “stateless” (memory-less), conversations quickly turned into an infinite loop of questions and accusations. Cleverbot is the only “stateful” chatbot that we know of, and because of its rich repertoire of conversation snippets learned from people, it can produce unique and interesting conversations that sound more human than those of any other chatbot out there. In fact, just recently Cleverbot came close to passing the Turing test – meaning its conversation, as judged by human observers, was nearly indistinguishable from a human’s. It scored 59% (it fooled 59% of human judges), compared with the 63% of actual humans who convinced the judges that they were human.
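
In code terms, the “stateless” versus “stateful” difference Labutov describes is simply whether the program carries any memory of earlier turns into its next reply. The sketch below is hypothetical – it does not reflect the real Eliza or Cleverbot implementations – but it shows why a memory-less bot can fall straight into a loop while a stateful one can react to repetition:

```python
# Hypothetical sketch of the "stateless" vs. "stateful" distinction.
# Neither class reflects the real Eliza or Cleverbot code.

class StatelessBot:
    """Eliza-style: each reply depends only on the latest message."""

    def reply(self, message):
        if message.endswith("?"):
            return "Why do you ask?"
        return "Tell me more."

class StatefulBot:
    """Keeps a history of the conversation and can react to repetition."""

    def __init__(self):
        self.history = []

    def reply(self, message):
        seen_before = message in self.history
        self.history.append(message)
        if seen_before:
            return "You already said that."
        if message.endswith("?"):
            return "Why do you ask?"
        return "Tell me more."

stateless = StatelessBot()
print(stateless.reply("Why do you ask?"))   # -> Why do you ask? (forever)

stateful = StatefulBot()
print(stateful.reply("Why do you ask?"))    # -> Why do you ask?
print(stateful.reply("Why do you ask?"))    # -> You already said that.
```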

Decoded Science: Did you consider using two different chatbot platforms to produce a potentially more robust conversation?

I. Labutov: An A.L.I.C.E vs. Fake Captain Kirk conversation was an experiment carried out before ours (available on YouTube), and clearly does not produce as interesting a conversation. We haven’t tried combinations of different chatbots, but it made sense to pair the best chatbot out there with itself – namely, Cleverbot.

Decoded Science: Did you expect the degree of coherency in the conversation that actually occurred?

I. Labutov: No, we certainly didn’t. The conversation you heard in the video was the first conversation these bots had. We hooked them up, gave them voices and ran the program. We just sat back and listened. As it started with its greetings and pleasantries, we were certainly not expecting it to take such a sudden turn. We keep telling people that as we heard the conversation unfold, we nearly fell off our seats. This was also at 3 a.m., so we couldn’t sleep very well after hearing that either. In fact, the conversation did not end with the “Au revoir”. It continued well past that point, although it seemed appropriate to cut it at “goodbye”. These robots had a lot more to say to each other. Maybe we will release it under the title “what happened after…”

Decoded Science: Could you describe your role in the project?

I. Labutov: The realization that two chatbots could be hooked up to each other came to both of us almost simultaneously, as we were each running identical chatbot code alongside speech synthesis and speech recognition code on our laptops. Jason then said, “What if we were to slide these laptops together?” And so we did. The rest is history. I (Igor) then wrote the code to replace Eliza with the smarter Cleverbot, added text-to-speech and avatars, and then we both recorded it on video. There were no clear-cut roles; we just hacked this together for fun, and certainly never expected this…
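
The setup described here – two copies of the same chatbot, each one’s reply fed back as the other’s input, with text-to-speech on top – boils down to a short loop. The sketch below is a hypothetical reconstruction, not the authors’ code; cleverbot_reply() and speak() are placeholder names standing in for whatever chatbot API and speech-synthesis calls were actually used:

```python
import itertools

# Hypothetical reconstruction of the two-chatbot loop; cleverbot_reply()
# and speak() are placeholders, not the authors' actual code. The canned
# replies exist only so the sketch runs end to end - a real version would
# call a chatbot API and a text-to-speech engine instead.
_CANNED = itertools.cycle([
    "How are you?",
    "I'm fine. Are you a robot?",
    "No, I'm a unicorn.",
    "Au revoir.",
])

def cleverbot_reply(bot_id, message):
    """Stand-in for a chatbot call: ignores the input, returns a canned line."""
    return next(_CANNED)

def speak(voice, text):
    """Stand-in for text-to-speech: just prints the line with its speaker."""
    print(f"[{voice}] {text}")

def converse(turns=8):
    """Feed each bot's reply back to the other for a fixed number of turns."""
    bots = ("bot_a", "bot_b")
    utterance = "Hello there."
    for turn in range(turns):
        speaker = bots[turn % 2]          # whose turn it is to talk
        listener = bots[(turn + 1) % 2]   # who hears it and replies next
        speak(speaker, utterance)
        utterance = cleverbot_reply(listener, utterance)

converse()
```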

Decoded Science: What was the most surprising or interesting part of this project for you?

I. Labutov: Definitely the reaction of people to this video. It’s clear that these robots don’t possess any “intelligence” (they don’t plan what to say next, and they certainly don’t attribute any meaning to their words; they merely sample from a database of what humans have said to them before). But the conversation they produced was so human-like that it apparently struck a nerve with nearly everyone who has watched the video. The eerie feeling this conversation instills tells us more about ourselves than about robots. Will we be ready when the real “AI” arrives and has its first sentient conversation with us?


What do you think: Are we ready for sentient chatbots? Would you like to see the rest of the chatbot conversation? Share your questions and comments below, or join us in the Decoded Science forums for a more in-depth discussion!
