The Dwarkesh Patel interview with Terence Tao. I am always surprised at how young Patel is: 25. He started this podcast when he was 19. Tao is my nomination for smartest person in the world, though I admit that is related to name recognition and my bias toward mathematicians. He achieved a 760 on his SAT math when he was 8. I still think of even him as a youngster, but he is 50 now.
Every minute or three during the interview I thought, "Huh. I didn't know that." Those moments come in a flood, so I could flood you with them, but I think that would just be a list without context. I'll try something else.
Peer review arose in an era when there were lots of theories and we had to sort signal from noise.* Lots of small experiments looked for signal, and detecting false signal was one of the constant reminders for experimenters and reviewers. Criticising a study is often an exercise in "that's a false signal." Improved communication, especially the internet, reduced the cost of communication to nearly zero. Even word processing alone allowed an incredible increase in the number of papers that could be generated, and increased their length. Peer review was overwhelmed by cheap, quick communication alone. In the same way, AI has driven the cost of isolating signal from noise basically to zero. Perhaps that is not the problem we think it is. We still need to verify theories against data. But the way we do that, peer review, may no longer be necessary, because it is simply overwhelmed anyway. AI makes everyone so productive, even with its semi-thinking, that checking it all is impossible. Tao mentioned that AI excels at breadth, human intelligence excels at depth, and so the two are complementary. He calls this Amazing and Disappointing, and compares it to the search engine, a stunning leap forward that was taken for granted just a few years later.
He considers it something of an accident that AI was developed around thousandfold increases in data, as with LLMs, rather than around first-principles reasoning. It's not immediately apparent that that's a good way to go, but it was the one we could do, and it got in first. He compared it to astronomy, when Kepler could not move forward on his theories without data, and Tycho Brahe had the data. We can't know which approach will prove more fruitful because we don't know the future. It still might be worthwhile to focus on teaching AI to reason from first principles instead. It made me think of Yogi Berra: "Prediction is difficult, especially about the future."
Bethany, 30 minutes in he discusses how the social aspect of science has a larger effect than we think. Newton wrote in Latin and did not write engagingly. Everyone was jealous of their discoveries and would not even commit ideas to paper for fear others would steal them. Because of this, Newton was not well understood or well explained in his day. Darwin, OTOH, wrote clearly and in English when there was an existing network for sharing information. Persuasiveness, having a narrative that could be grasped, even though he admitted there were all kinds of gaps in it, turned out to be key. Ideas bubble up but recede and disappear if they don't get picked up. Tao thought that AI might prove helpful in rediscovering things already discovered, because it does literature review so quickly. But what occurred to me was that the persuasiveness of a True Crime narrative, because it fits preexisting beliefs (see my previous confirmation bias post), can quickly overwhelm the data. In your discussion of base rates and known versus suspected data, the effect of AI will change the territory. I don't predict it will fix things, because the situation is dynamic. But it has to change it.
*Especially in the social sciences, that drove a lot of the replication crisis. It was widely considered respectable to engage in purely exploratory thinking and devise experiments to illustrate it rather than prove it. Wouldn't it be cool if people did even terrible things because an authority told them to? It would allow us to train humans to be what we want, so give us the power to do that. Or, in another area: I think matriarchal societies would be less violent, so let's find some and prove that they are, so we can make that part of Western education systems and bring peace on earth. The triumph of preferred narrative over data. Even in the hard sciences you get things like String Theory, the amyloid hypothesis, or universal grammar infected by preferences that win out for a time because they deny resources to less-preferred theories.