One of my first thoughts about AI was that any question based on human experience (or really any genuine experience) would have to result in either an inability to answer (think of the old reliable sci-fi line, "Insufficient data") or a lie. So, early this morning I asked Google Bard my first question ever:
Me: Do you enjoy walks?
Bard not only answered in the affirmative but proceeded to give examples of why walking is enjoyable and detailed many of its benefits.
Me: Name five places you have walked in the last four days?
Bard then flatly stated that it had never walked.
Me: In light of your second answer, how do you explain your first?
Then Bard fell on its sword, admitting it was only an experiment at this point and actually apologizing for its first answer.
Fascinating, but worrisome.
The first answer I ever got from AI was a lie.
Why should that be the case? It is obviously a product of intentional programming choices, which is very troubling. For AI programmers, what benefit does the capacity, and even the tendency, to lie bring? My guess is big problems just a little further down the road, and probably much sooner than we expect.
Not wanting to rudely dwell on Bard's lie, I then asked it to explain how Kant differed from utilitarianism, what Kant believed about the good, and what the most prevalent form of online betting is, as well as asking it to explain the similarities between Socrates and Aquinas.
Interestingly, Bard was exceptionally good with the answers related to philosophy. I would say the answers were at least on the level of an undergraduate philosophy major. It was impressive. The answers on sports were less impressive and more generic, with the overtones of a Wikipedia article.
As I said, I asked my questions ("prompts" is Google's preferred term) very early this morning. By the time I wanted to retrieve the conversation, all that remained were the prompts themselves and the time of each. There may be a way to retrieve the original responses, but I have not found it yet. Now that's odd. If the system really learns from these exchanges, it seems reasonable that both sides of an exchange would be memorialized for both parties.
One of the best lessons I ever learned was that "I don't know" can sometimes be both the best and most responsible answer. Is that too much for AI engineers to get? I hope not.