Monday, May 19, 2025

AI and Bias

I heard a question asked on a podcast: how can one provide context without bias? Well, we can't, and we have discussed this many times here and in our corner of the internet. We are originally taught something about history or anthropology or literature or the Bible. We read some things, talk to others, observe events around us, think about them. Rinse, repeat. When we then teach something to a newcomer, it carries all that context, and that context includes bias, of necessity. All we can do is try to notice it and correct for it, especially when passing the information along.

It is rather Bayesian, though seldom consciously. We modify as we go.
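The informal process described here can be put in explicit form. This is only a toy sketch of one Bayesian revision of a belief; all the numbers are invented for illustration:

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return P(hypothesis | evidence) via Bayes' rule."""
    # Total probability of seeing the evidence at all.
    p_evidence = (p_evidence_if_true * prior
                  + p_evidence_if_false * (1 - prior))
    return p_evidence_if_true * prior / p_evidence

# Start 50/50 on some claim, then observe evidence that is three
# times more likely if the claim is true than if it is false.
belief = 0.5
belief = bayes_update(belief, p_evidence_if_true=0.6,
                      p_evidence_if_false=0.2)
print(round(belief, 2))  # 0.75
```

The point of the sketch is only that each new observation shifts the prior, and the shifted belief becomes the prior for the next round; the "seldom consciously" part is that nobody actually carries the numbers around.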

Because it is a matter of necessity, it must also apply to AI. We notice the biases that are built into its creation and are often troubled. But it is the ongoing nature of understanding that may rapidly become more powerful. The AI will talk to itself, and to nonhuman others, in ways unseen by the developers. It would be desirable if such an intelligence could question itself, and therefore remember to ask itself questions. "If the report you just generated turned out to be wrong, what would be the most likely point of weakness?" "If you were to attack the Steppe Hypothesis in order to show the strength of the Anatolian Hypothesis, how would you go about that?" And the next step would be harder. "Are any of those ten counter-hypotheses worth mentioning, or are they all too far-fetched?" How would it know?
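The self-questioning loop described above can be sketched structurally. This is only an illustration of the shape of the idea, not a real implementation: the model call below is a stub, and a real system would send these prompts to an actual language model.

```python
# The critique questions are taken from the post itself.
CRITIQUE_PROMPTS = [
    "If this answer turned out to be wrong, what would be the most "
    "likely point of weakness?",
    "What is the strongest counter-argument to this answer?",
    "Is that counter-argument worth mentioning, or is it too far-fetched?",
]

def stub_model(prompt):
    # Placeholder for a real language model; returns a canned reply.
    return f"[model reply to: {prompt[:40]}...]"

def answer_with_self_critique(question):
    """Generate an answer, then turn the critique questions on it."""
    answer = stub_model(question)
    critiques = [stub_model(f"{p}\n\nAnswer under review:\n{answer}")
                 for p in CRITIQUE_PROMPTS]
    return answer, critiques

answer, critiques = answer_with_self_critique(
    "Assess the Steppe Hypothesis.")
print(len(critiques))  # 3
```

Note that the loop mechanically asks the questions but does nothing to judge the replies; deciding whether a counter-hypothesis is "too far-fetched" is exactly the step the post says it does not know how to program.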

The joke is told about the old man born in 1898 who lived to be 104 and was asked what he thought the greatest invention of the 20th Century was. "The thermos bottle," he replied. "Not the airplane, the computer, nuclear power, radio?!" 

"Nope.  The thermos bottle.  It keeps hot things hot and cold things cold."

"So?"

"So how does it know?"

It sounds simple to say you could program that into AI and the problem is solved. Running it as a thought experiment, I am not at all sure. I keep coming up against walls.

3 comments:

Christopher B said...

The fundamental problem with AI is that it can't, as the kids say, touch grass. In other words, all of its interactions with reality are mediated, or maybe more properly manipulated, by people with a definite interest in how it answers questions. That's why subject-matter-specific AI is generally good but general AI reflects back the same hallucinations its programmers have.

james said...

A single counterexample can disprove a theorem, no matter how many confirming examples it might have. AFAIK AI ranks its word groups by "citations"--how often the group appears and in what word-group context. Faced with a pile of citations on one hand and a contradictory set of words on the other, I don't know whether there's a way to have it decide that it has a true contradiction, or merely poor "data".