bmks270 said:
KingofHazor said:
I've used Claude and several other AIs quite a bit in an attempt to get help with scholarly research. The positive is that they, Claude in particular, can suggest ideas that I had not even considered. Nor, as best I can tell, has anyone else ever considered them. In other words, Claude appears to have original ideas.
The bad is that the net output is worthless. Every idea, no matter how original, has to be anchored in some reality. Claude will cite articles in support of its novel ideas, but the articles turn out not to exist. Claude readily admits that it is hallucinating, but does so in a very friendly, disarming manner.
It raises the question, in my mind at least, of how far the output of these AIs can be trusted. I came across an article recently in which the author claimed that these flaws cannot be cured but are baked into the very architecture of the AIs. Is that correct? I have no idea. But his thesis is that we are quickly approaching a ceiling for the AIs, rather than seeing the exponential improvement that many AI bros are claiming.
My personal experience using AIs for things like scholarly research, as well as mundane things like shopping for the best prices, is that their output cannot be trusted to be accurate at all.
Hallucinations are baked in because it's really a next-word predictor based on training data. It can't tell facts from fiction, and it doesn't use logic or reasoning. It's interesting that some of the ideas appear novel to you. Maybe because its training is on word associations and yours is in a research field?
AI is really good at code because code is so structured. Predicting the next word is a lot easier as a result.
It just returns words that look a lot like words in the training data.
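To make that concrete, here's a toy sketch in Python of what "next-word prediction" means. It's purely illustrative (a real model is a neural network over tokens, not a word-count table), but the core move is the same: pick whatever tends to come next, with no check against reality.

```python
import random

# Toy "next-word predictor": all it has are counts of which word
# followed which in its training text. No facts, no logic, no
# notion of truth. (Illustration only; real LLMs use neural
# networks over tokens, but the sampling idea is the same.)

training_text = (
    "the study found that the method works and the study showed "
    "that the article cited the study that found the result"
).split()

# Count word-to-next-word transitions.
follows = {}
for prev, nxt in zip(training_text, training_text[1:]):
    follows.setdefault(prev, {})
    follows[prev][nxt] = follows[prev].get(nxt, 0) + 1

def next_word(word):
    """Pick the next word in proportion to how often it followed
    `word` in training: plausible-looking, never fact-checked."""
    candidates = follows.get(word)
    if not candidates:
        return random.choice(training_text)
    words = list(candidates)
    weights = [candidates[w] for w in words]
    return random.choices(words, weights=weights)[0]

word = "the"
output = [word]
for _ in range(12):
    word = next_word(word)
    output.append(word)
print(" ".join(output))
# Prints something like "the study found the result ...": a fluent
# string assembled from statistics, which is why a citation that
# was never real can still look perfectly real.
```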
Good points, but it also just makes **** up. I can't figure out how that happens with a purely statistical algorithm, or however the AIs work.