One of the difficulties in AI to date has been hallucination. That is, if something sounds plausible according to the huge number of texts that have been generated over the centuries, it might pass muster unnoticed when an AI is asked a question. Steve Hsu offered as an example "Is there a United Airlines flight landing in Boston around 6pm tomorrow and are there still tickets?" The answer could come back "Yes, there are a few such flights every weeknight, and there are still tickets," even if there are no flights, because it is the sort of thing that could very easily be true and thus end up as a prediction and answer. Or, asking an online AI manual for your new software how to fix something, you might be told there is a dropdown menu under Help and it is the fourth entry, even if in reality there are only three.
A few startups claim to have solved this by moving the AI off the Large Language Models and restricting the answers to a specific set of documents, which can be adjusted. If your company has new software named Simulacrum coming out, you can train your AI to use only the documents on your computer that contain any of a couple of dozen words, such as simulacrum, software, startup, manual, etc. Sounds plausible. I wouldn't know myself. But for help desks, currently staffed by overseas companies with low wages in countries where lots of people speak pretty good (or even excellent) English, such as India or the Philippines, the AIs are about to surpass them in effectiveness, precisely because they can be restricted to a narrower group of texts, and will not be bothered about whether the program knows what John Dryden, or even Spencer Dryden said.
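For what it's worth, here is a minimal sketch of that kind of document-restricted setup, assuming a local folder of text files and a placeholder ask_model() function standing in for whatever LLM the product actually calls; real products typically use embedding-based retrieval rather than a bare keyword filter, so treat this as an illustration only:

```python
# Minimal sketch of restricting an assistant to a curated set of local documents.
# Assumptions (mine, not the post's): docs live in ./docs as .txt files, and
# ask_model() is a stub standing in for a real LLM API call.
from pathlib import Path

KEYWORDS = {"simulacrum", "software", "startup", "manual"}

def load_relevant_docs(folder: str = "docs") -> list[str]:
    """Keep only documents that mention at least one allowed keyword."""
    docs = []
    for path in Path(folder).glob("*.txt"):
        text = path.read_text(encoding="utf-8", errors="ignore")
        if any(word in text.lower() for word in KEYWORDS):
            docs.append(text)
    return docs

def ask_model(prompt: str) -> str:
    # Stub: a real system would call its LLM here with the restricted context.
    return "(model answer based only on the supplied documents)"

def answer(question: str) -> str:
    context = "\n\n".join(load_relevant_docs())
    prompt = ("Answer ONLY from the documents below. If the answer is not there, "
              "say you don't know.\n\n" + context + "\n\nQuestion: " + question)
    return ask_model(prompt)

print(answer("How do I install Simulacrum?"))
```

The point of the design is the instruction plus the narrowed context: the model never sees anything outside the curated files, which is why it can't wander off into Dryden.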
That was my suspicion about hallucinations after the Google image debacle. Humans can avoid hallucinations because we are grounded in historical reality. AI has no such frame of reference.
The key issue is that the AI can't check outside of its model to see if the conclusions are accurate to the world it is describing. The first idea was to make the model as broad as possible, in the hope of capturing the world more accurately by including more information. The new idea is to take more care with the inputs, so that the model includes only information we have checked ourselves to be sure is accurate.
This won't work either, however, because the AI can't tell the difference between its hallucinations and the accurate information. It's all just data to the AI. If it pulls the words together in the wrong order, well, you'll find that out when you go out into the world and try to apply the lesson.
You can go back and tell the AI it got it wrong, but it can't tell the difference between you lying to it about the world and you reporting truthfully. Thus, either it is prone to learning bad information from interactions, or it can't learn to correct its errors. You can address this by allowing it to learn only from interactions with trusted persons, but that will in turn sharply limit the speed at which it can learn.
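A tiny sketch of that trusted-persons gate, purely as illustration; the allowlist, Correction class, and training queue are all my inventions, not anything from the comment:

```python
# Toy illustration of letting an AI learn only from corrections made by trusted people.
# Everything here (the allowlist, Correction, training_queue) is hypothetical.
from dataclasses import dataclass

TRUSTED_REVIEWERS = {"alice@example.com", "bob@example.com"}  # assumed allowlist

@dataclass
class Correction:
    reviewer: str
    prompt: str
    corrected_answer: str

training_queue: list[Correction] = []

def submit_correction(c: Correction) -> bool:
    """Queue a correction for future training only if the reviewer is trusted."""
    if c.reviewer not in TRUSTED_REVIEWERS:
        return False  # untrusted feedback is ignored, so it can't poison the model
    training_queue.append(c)
    return True

# The trade-off the comment describes: a short allowlist keeps bad data out,
# but it also means the model learns from very few interactions.
```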
I ran an experiment with ChatGPT a while back in which I had it diagnose a mechanical problem with my Jeep (which I have since sold, precisely because it produced so many mechanical problems and it was getting hard to find parts for the older model). It gave me a whole series of wrong answers, which would have been helpful only in narrowing down the actual problem by fixing a lot of plausible things that weren't actually it. It never did identify the actual problem, but when I told it what the problem proved to be, it agreed that was another possibility.
At the moment, AI is becoming better than the average overseas call center worker. If it does that - if it drives better than 90% of all drivers - if it is a better Air Traffic Controller, radiologist, police detective - isn't our objection then simply that we don't feel good about its mistakes, but are forgiving of worse mistakes from an actual human?
At first that would seem indefensible, as there are more dead humans from the human judgement than from the AI judgement. Yet is that so? Might we legitimately sacrifice those humans to the god of greater efficiency for some other reason? At what point does the abstract have to yield to the practical?
I recall, some time ago near the beginning of the autonomous car debate, that someone posited the bottleneck to deploying the technology would be assessing blame if something goes wrong. We don't always forgive human mistakes, either.
Who do you sue (or put in jail) if an AI-driven car runs down a kid bolting into the street after a ball? The 'driver' (who is really a passenger)? The owner of the car? The company that made the car? The company that wrote/trained the driving software? The programming team themselves?
Is a doctor assisted by an AI program solely liable for errors in treatment or diagnosis? Should he or his patients be able to sue the AI creator for malpractice?
Air traffic control and detective work might be more likely to use AI sooner because of the doctrine of qualified immunity.
AVI, I'm not really arguing against using AI, in limited cases with appropriate safeguards. For plenty of things like help desks, you're right: it's probably better than some guy in Pakistan trying to communicate with you in a language you don't completely share, as he pores over old manuals and tries to figure out your problem based on your incomplete and atechnical descriptions. If you'd gone through all the steps the AI suggested in fixing the Jeep, you wouldn't have fixed it and you'd have wasted a lot of time, perhaps purchased some parts you didn't need, but otherwise no harm would be done.
What I am interested in is this issue they call 'hallucination.' This is the part where the philosophical aspect of my character is more interested in the theory than the application. I also have a pragmatic aspect to my character that's happy to go with what works for now and get back to theorizing later; but once in a while I get interested in thinking something through. This one is kind of neat.
What's really happening here is a variation on a bigger problem with all formal systems of deduction, the kind Gödel's incompleteness theorems describe for axiomatic systems. A closed system like that cannot prove its own consistency, and there are things that are true that you can't prove within the system. What these LLMs are is a vast sort of algorithm (the sort of formal system Gödel's theorems bear on), but they're still closed even when you include everything anyone has ever written. They can't look outside and check themselves against the facts.
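For reference, here is a standard informal statement of the two theorems the analogy leans on; the paraphrase is mine, not the commenter's:

```latex
% Informal, standard statements of Gödel's incompleteness theorems.
\begin{itemize}
  \item \textbf{First incompleteness theorem:} if $F$ is a consistent, effectively
        axiomatized formal system strong enough to express elementary arithmetic,
        then there is a sentence $G_F$ in the language of $F$ such that
        $F \nvdash G_F$ and $F \nvdash \neg G_F$.
  \item \textbf{Second incompleteness theorem:} under the same hypotheses,
        $F \nvdash \mathrm{Con}(F)$; the system cannot prove its own consistency.
\end{itemize}
```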
So there are always going to be true things they can't tell you are true; what's interesting about hallucination is that there are also false things they can't tell aren't true. Perhaps even more interesting, once they have hallucinated, the hallucination gets folded into the data set, so it is a self-replicating error. There may be pragmatic workarounds for that, but so far, once they start to drift, the drift accelerates as old hallucinations get folded into the logic that produces new ones.
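A toy model of that compounding, with invented numbers, just to show the shape of the curve the comment describes:

```python
# Toy model of hallucination recycling; every number here is invented.
# Assumption: a fixed base rate of new errors each generation, plus recycled
# errors from earlier output that each tend to spawn more than one new error.

def simulate_drift(generations: int = 8,
                   base_error: float = 0.02,
                   amplification: float = 1.5) -> list[float]:
    """Return the error rate per generation; it grows faster each step until it saturates."""
    error = base_error
    history = []
    for _ in range(generations):
        error = min(1.0, base_error + amplification * error)
        history.append(round(error, 3))
    return history

print(simulate_drift())
# roughly [0.05, 0.10, 0.16, 0.26, 0.42, 0.64, 0.99, 1.0] -- each jump is bigger until saturation
```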
I guess I do have a pragmatic goal for all this theory, which is that I'm trying to figure out how to attack and destroy hostile AIs (e.g., ones deployed by China against Americans, say). We may need weapons against such things, and this looks like a promising one.
I can remember back in my college days in the early 1990s, when neural networks were coming into vogue, one of my professors somewhat denigrating them because it was much too easy for them to learn something other than what you were training them to do. The canonical example of the time was a system designed to recognize tanks in photographs that had been trained on a sample of photos where all the pictures of tanks were taken on sunny days; thus the network got to be very good at distinguishing which photos had been taken on sunny days rather than at recognizing tanks.
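To make the point concrete, here is a toy reconstruction of that failure mode; it is entirely my own construction (and accounts of the original tank study vary), using two made-up features, brightness and turret shape:

```python
# Toy reconstruction of the "tanks on sunny days" failure. The data and features
# (brightness, turret) are invented; accounts of the original anecdote vary.

TRAIN = [
    # (brightness, turret, label)  -- label 1 = tank, 0 = no tank
    (0.90, 1, 1), (0.80, 1, 1), (0.95, 1, 1),   # every tank photo is sunny
    (0.20, 0, 0), (0.30, 0, 0), (0.25, 0, 0),   # every non-tank photo is cloudy
]
TEST = [
    (0.20, 1, 1),   # a tank on an overcast day
    (0.90, 0, 0),   # an empty sunny field
]

def accuracy(data, feature_index):
    """Score a one-feature rule: predict 'tank' when the feature exceeds 0.5."""
    return sum((row[feature_index] > 0.5) == bool(row[2]) for row in data) / len(data)

# Both features fit the training set perfectly, so nothing in the data tells the
# learner which one is the real signal; the tie breaks toward brightness here,
# mirroring the network in the anecdote that learned sunshine instead of tanks.
chosen = max(range(2), key=lambda i: accuracy(TRAIN, i))
print("chosen feature:", "brightness" if chosen == 0 else "turret")
print("training accuracy:", accuracy(TRAIN, chosen))   # 1.0
print("test accuracy:", accuracy(TEST, chosen))        # 0.0 when brightness is chosen
```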
Analogously, I think that ChatGPT in particular, and likely other such engines, have been trained to produce answers the user expects to get, rather than answers that are factually true. Hence the observed phenomenon of producing non-existent cases that happen to stand for the exact principle the lawyer using the system went on to cite in court.