Wednesday, January 11, 2023

GPT-3

The opinions about AI or AGI have sounded very similar over the years.

1. It's never going to be quite as good as humans in some ways, so don't worry about it. (I don't think people believe the danger is that it will exceed us in every single way so much as that it may exceed us in some way that we can't control.)

2. I worked to help develop some of these technologies and only let my children use their devices one hour a week under supervision.

3. I worked to help develop some of these technologies and now am hiding out on an organic farm in Idaho or Kuala Lumpur and don't let my children have digital watches. (Well, that's helpful.)

4. It's going to be fine, because all the objections are from people who haven't seen the great stories of hybrid robot creatures in Japanese manga, or believe in fragments of stupid religions, or, like, must just hate technology or something. Look how great spreadsheets and email worked out! (I am not detecting that you answered the objections in that answer.)

5. Well, what is reality, anyway? What is humanity? Isn't your real objection just that people will feel nervous and worried about their jobs, which we could fix by adopting more progressive policies, like the Europeans do?

6. I understand lots of technical stuff but not AI so much, and frankly, I'm quite nervous.

7. (And finally, something I hope is close to true.) There will be major changes, but I think most of us will adjust and only some things will be terrible, and frankly, it's too late to worry about it because GPT-4 is just about ready and GPT-7 is already envisioned, and we're stuck with it either way. We who actually understand most of this know that a lot of it is uncertain, and still think it's going to be net positive.

6 comments:

David Foster said...

I've sometimes gotten quite impressive answers, sometimes a response like you'd expect from a student who didn't bother to study but doesn't want to admit it, and sometimes answers that range from wrong to completely ridiculous.

Venture capitalist Paul Graham analogized systems of this type to "artificial pseudo-intellectuals," trying to look like they understand something without really understanding it. But he still thinks the implications of these systems will be very significant.

Assistant Village Idiot said...

Yes, I've heard that at times it sounds like a college bullshitter, but I think that's actually pretty tough to do. If you screw with it and try to trap GPT-3, it can be wrong and ridiculous, but if you ask straight questions it's not bad. Or so I hear.

james said...

I'm thinking of explanations and assistance with procedures.

I figure that this will be helpful with boilerplate stuff. I've run across too many forms with "just-different-enough" explanatory jargon to believe that people always, or even usually, get it right.

But if there's a chance of confusion or subtlety, I'd not trust AI's without some cross-checking.

Anonymous said...

It's not artificial intelligence. It is machine learning, which done well is very powerful.

Deep Blue learned to beat the best chess players, but that is not all that big a deal, as a chess board is pretty small and the possibilities are limited. AlphaGo was very impressive: a computer learning to beat the best Go players was huge. The game is larger and the possibilities are as well.

AlphaGo played a lot of games with itself to get that good, and a look at the wiki page will teach you a lot about what AI really is.

Ganzir said...

I gave it some Hoeflin test items to gauge its 'reasoning' ability. It got about half of the verbal analogies right, but gave some silly answers to others. It also completely botched all the non-verbal problems I gave it, returning nonsensical answers. I was, however, surprised that it solved RUTHLESS : MYRMIDON :: IMITATIVE : EPIGONE.

After this experiment plus several hours' worth of other conversation, I think GPT-3 is perfectly described by what Arthur Jensen wrote about Williams syndrome patients in his book The g Factor (pages 258-259):

"It is much harder to imagine the behavior of persons who are especially deficient in all abilities involving g and all of the major group factors, but have only one group factor that remains intact. In our everyday experience, persons who are highly verbal, fluent, articulate, and use a highly varied vocabulary, speaking with perfect syntax and appropriate expression, are judged to be at least average or probably superior IQ. But there is a rare and, until recently, little-known genetic anomaly, Williams syndrome, in which the above-listed characteristics of high verbal ability are present in persons who are otherwise severely mentally deficient, with IQs averaging about 50. In most ways, Williams syndrome persons appear to behave with no more general capability of getting along in the world than most other persons with similarly low IQs. As adults, they display only the most rudimentary scholastic skills and must live under supervision. Only their spoken verbal ability has been spared by this genetic defect. But their verbal ability appears to be “hollow” with respect to g. They speak in complete, often complex, sentences, with good syntax, and even use unusual words appropriately. (They do surprisingly well on the Peabody Picture Vocabulary Test.) In response to a series of pictures, they can tell a connected and fully elaborated story, accompanied by appropriate, if somewhat exaggerated, emotional expression. Yet they have exceedingly little ability to reason, or to explain or summarize the meaning of what they say. On most spatial ability tests they generally perform on a par with Down syndrome persons of comparable IQ, but they also differ markedly from Down persons in peculiar ways. Williams syndrome subjects are more handicapped than IQ-matched Down subjects in figure copying and block designs.

Comparing Turner’s syndrome [the distribution of g among women with Turner’s syndrome is typical, but they almost always have impaired spatial ability – Ganzir] with Williams syndrome obviously suggests the generalization that a severe deficiency of one group factor in the presence of an average level of g is far less a handicap than an intact group factor in the presence of a very low level of g."

Assistant Village Idiot said...

Great stuff! I would not have thought to try Hoeflin's tests on GPT-3. If you have not read the Oliver Sacks books about neurological conditions and targeted deficiencies: I have given The Man Who Mistook His Wife for a Hat as a gift to several young people, and recommend it for its readability and as an introduction to the whole field of intactness vs. deficiency and what can be done about it.

I had patients with Turner's Syndrome, BTW, but I have never read up on it in the general population, only on what to do with them when they are in hospitals and supervised living.