Google Bard: Starting out with a lie.

One of my first thoughts about AI is that any question based on human experience (or really any genuine experience) would have to result in either an inability to answer (think of the old reliable sci-fi line, "Insufficient Data") or a lie. So, early this morning I asked Google Bard my first question ever:

Me: Do you enjoy walks?

Bard not only answered in the affirmative but proceeded to give examples of why walking is enjoyable and detailed many of its benefits.

Me: Name five places you have walked in the last four days?

Bard then flatly stated that it had never walked.

Me: In light of your second answer how do you explain your first?

Then Bard fell on its sword, admitting it was only an experiment at this point and actually apologizing for its first answer.

Fascinating, but worrisome.

The first answer I ever got from AI was a lie.

Why should that be the case? It’s obviously intentional programming, which in and of itself is troubling. For AI programmers, what benefit does the capacity, or even the tendency, to lie bring? Now, I can imagine the usefulness of AI lying when lying, as in the writing of fiction, is the goal. My guess is there could be problems further down the road and, because of the speed of AI development, those problems will likely arrive much sooner than we expect.

Not wanting to focus rudely on Bard’s lie, I then asked it to explain how Kant differed from the utilitarians about the good, what the most prevalent form of online betting was, and to describe the similarities between Socrates and Aquinas.

Interestingly, Bard was exceptionally good with the answers related to philosophy. I would say the answers were at least on the level of an undergraduate philosophy major. It was impressive. The answer on sports betting was less impressive and more generic, with the overtones of a Wikipedia article.

As I said, I asked my questions ("prompts" is Google’s preferred term) very early this morning. By the time I wanted to retrieve the conversation, all that remained were the prompts themselves and the times of each. There may be a way to retrieve the original responses, but I’ve not found it yet. Now that’s odd. If the system really learns from the exchanges, it seems reasonable that both sides of an exchange would be memorialized for both parties.

One of the best lessons I ever learned was that "I don’t know" can sometimes be both the best and most responsible answer. Is that too much for AI engineers to get? I hope not.

Note: It took me a while, but here are screen captures of three of my original questions and Bard’s answers.
Flickr’s Explore Algorithm & “Good” Photography

Photos of mine have been captured by Flickr’s Explore algorithm a handful of times. Each time I wonder why for a few moments before I remind myself that a computer program can’t see photos, derive possible relevance, or consider what the photographer may have been thinking when the shutter was pressed.

That makes me think: why would anyone care whether one of their photos made it into Explore? I can’t come up with a reason for a photographer to be motivated to get his images into Explore that could possibly relate to the quality of his photography.

After all, who could possibly aspire to impress a computer’s programming?

It’s easy to imagine one possible motivation residing in a miniature version of Warhol’s Fifteen Minutes of Fame, and I know some photographers who are looking for just that. At the same time, I can see Flickr’s motive in developing and refining the Explore algorithm. I don’t browse the images in Explore very often but when I do I see lots of close-up photographs of birds and a lot of huge landscapes with surreal or at least very dramatic color.

The photos in Explore are nearly always conventional in the extreme. The occasional unusual photo (unusual either in subject or execution) nearly always strikes me as something that made the algorithm experience the computer-software equivalent of bemusement, for a mere fraction of a millisecond. Today there’s a simple photo of a miniature figurine of a lion. I can imagine the data chain inside the algorithm wondering silently to itself, is that miniature lion really alive?

That question got me thinking about just how unlikely it is that the algorithm will ever be able to judge truly interesting, let alone good, photographs. Think of the objective differences between an Ansel Adams photograph of Yosemite National Park and the millions of other images captured from the same or similar vantage points. Now think about how you would go about creating a program that recognizes artistically good light and a well-seen composition. It’s hard enough for a human viewer to get a sense of what the photographer was trying to achieve, and so wholly arguable as to how well that effort or vision was achieved. The genuine wonders of artificial intelligence notwithstanding, identifying good photography is going to remain a real problem for Flickr’s algorithm. I’m sure the folks at Flickr are doing their best, but it’s not very good.

This brings me to the photo of mine that found its way into Explore.


Crap, even I don’t like this one all that much. I took it about twenty minutes after the sun fell behind the foothills. I had been out looking for an oak I photographed back in April. Somehow, I couldn’t find it, even though I thought I was certain about where it was. Obviously, I wasn’t. As I hustled through the canyon, trying to beat the coming darkness, I spied this huge tangled mass of an old tree and watched the road go on beyond it.

As I did, I thought to myself: that old oak knows exactly where that road leads, toward autumn. So I turned around and snapped this. Yes, I kept the branches of the tree on the right in the frame intentionally.

Now thousands of Flickrites have viewed it and hundreds have faved it.

Yay.

No, I’m not upset this photo is in Explore.

Yes, it’s nice that so many people are seeing it (I suppose).

But, in the end, I am far too selfish to care what a bunch of people who don’t know me think about one of my more marginal photos. I’m trying, in my way, to be a better, more aware, more sensitive, and more creative photographer. It doesn’t matter to anyone other than me whether that happens. Maybe, in some backhanded way, having this image in Explore has rekindled that singular clarity of mission.

It could be that Flickr’s algorithm is better than I thought.