World's top physicists say AI has won and to prepare for what comes after

21,575 Views | 290 Replies | Last: 1 mo ago by Rocky Rider
one safe place
I remember when the pencil-necked geeks were predicting we would all be flying around with jet packs on our backs and in flying cars.
McNasty
AG
TexasRebel said:

Who?

And, how do they fare offline, again?


Who cares about offline?
infinity ag
Spergin said:

TexasRebel said:

Logos Stick said:

Give this a listen, then mock it, dismiss it and hope.




All I know is AI is really bad at writing robust code.

AI also isn't much use offline.


Considering the fact that every tech company is now heavily using it for code, this is clearly and obviously false and out of date.


They aren't. Don't be fooled by press releases
infinity ag
TexasRebel said:

Spergin said:

TexasRebel said:

Logos Stick said:

Give this a listen, then mock it, dismiss it and hope.




All I know is AI is really bad at writing robust code.

AI also isn't much use offline.


Considering the fact that every tech company is now heavily using it for code, this is clearly and obviously false and out of date.


Have you seen how bad websites have gotten lately?

The companies relying on AI are doing terribly.
I'm not saying it can't lay a framework quicker than I can, but squishing bugs? Nope. Not a chance.


Microsoft claims 50% of its code is written by AI, but it's hiring 10,000 engineers in India, ha ha.

Lots of 60+ suckers to sell snake oil to.
hph6203
AG
Cynic said:

How will we know when AI gets something wrong if we no longer understand anything?
The entire economy is going to collapse into the limitations of physics. How will you know it made a mistake? It will be obvious, because it doesn't work. The scorecard is its alignment with physical laws.
Law-Apt_3G
Been prepping with lots of little bottles of booze for currency. Now I am hoarding 1GB SD cards to hand out like candy to the little AIs. Plan is to be they/their king.
Logos Stick
infinity ag said:

TexasRebel said:

Spergin said:

TexasRebel said:

Logos Stick said:

Give this a listen, then mock it, dismiss it and hope.




All I know is AI is really bad at writing robust code.

AI also isn't much use offline.


Considering the fact that every tech company is now heavily using it for code, this is clearly and obviously false and out of date.


Have you seen how bad websites have gotten lately?

The companies relying on AI are doing terribly.
I'm not saying it can't lay a framework quicker than I can, but squishing bugs? Nope. Not a chance.


Microsoft claims 50% of its code is written by AI, but it's hiring 10,000 engineers in India, ha ha.

Lots of 60+ suckers to sell snake oil to.



Can you post some links to your claims?
TexasRebel
AG
Law-Apt_3G said:

Been prepping with lots of little bottles of booze for currency. Now I am hoarding 1GB SD cards to hand out like candy to the little AIs. Plan is to be they/their king.


Give them "RAM chips" as treats like Gypsy, Crow, Tom Servo, & Cam-Bot.
TexasRebel
AG
hph6203 said:

Cynic said:

How will we know when AI gets something wrong if we no longer understand anything?
The entire economy is going to collapse into the limitations of physics. How will you know it made a mistake? It will be obvious, because it doesn't work. The scorecard is its alignment with physical laws.


Each human only gets one good attempt at breaking the physical laws. AI can do it over and over again.

bmks270
AG
Spergin said:

Logos Stick said:




The same will apply to regular engineering as well. Given design specs and regulations, there is no reason to assume the same cannot be done there too.


Engineering is creative, creating things that aren't in any training data.

Being creative in accounting lands you in prison.


bmks270
AG
Over_ed said:

bmks270 said:

A 5-10% bull**** rate of hallucinations is still too frequent to ever replace humans.

AI will spew a bunch of jargon, and humans will have to filter the outputs to determine what is nonsense and what is a true insight worth considering.

Problems arise when no more experts exist to do the filtering, because all of the roles were replaced by AI and no new experts were trained to replace those that retire.

They are only getting better. Yes, it still makes mistakes. But fewer than I saw from many of my data-centric peers in industry.

And if you are getting that many hallucinations, your prompts are crappy.

I have several different, multipage prompts. They all have the AI do validity checking on all inputs and data, as well as computations, table construction, source citations, ... while using multi-agent "teams" to handle various perspectives and/or sequencing.

And I am using AI for fun. I am sure the guys/gals doing this for real are generally doing a much better job than I.


I wouldn't be so sure others are much better at AI.

I think a large, complex prompting framework that can reduce errors is not widely utilized, and/or is only useful to a small number of work types.

It may also be more effort than it's worth if it is only for an infrequent problem and won't see much re-use.
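The arithmetic behind that 5-10% hallucination worry is easy to sketch: even a modest per-step error rate compounds quickly once outputs are chained. A back-of-the-envelope sketch, assuming (as a simplification) that errors are independent across steps:

```python
# How a per-step hallucination rate compounds across a chained workflow,
# assuming errors are independent (a simplification).

def chain_reliability(per_step_error: float, steps: int) -> float:
    """Probability that every step in the chain comes out correct."""
    return (1.0 - per_step_error) ** steps

for err in (0.05, 0.10):
    for steps in (1, 5, 10):
        ok = chain_reliability(err, steps)
        print(f"{err:.0%} error/step, {steps:2d} steps -> {ok:.1%} fully correct")
```

At 5% per step, a 10-step chain comes out fully correct only about 60% of the time, which is why the filtering role matters.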
doubledog
No need to wait. Just call your medical professional.
hph6203
AG
bmks270 said:

Spergin said:

Logos Stick said:




The same will apply to regular engineering as well. Given design specs and regulations, there is no reason to assume the same cannot be done there too.


Engineering is creative, creating things that aren't in any training data.

Being creative in accounting lands you in prison.



Engineering is not creating things that aren't in any of the training data. It's taking things in the training data and applying it in a new way. They're not creating new physics. Not only can AI do that, but it is going to understand physics better than the best engineer and then be able to apply that understanding as well as the best engineer.

I wish people would stop claiming that AI can't come up with novel approaches to problems. The first viral AI news story that kicked off this run of "this time might be different" was exactly that. AlphaGo took a massive amount of data from games played by humans, plus its own play of the game based upon that human-trained data, and created strategies never employed by humans. Things that looked like horrendous mistakes to people and turned out to be brilliant.

2014: "A computer can't beat a professional Go player."
2015: "A computer can't beat the best professional Go players."
2016: "…. What just happened?"
2017: "A human can't beat a computer in Go."

Follow that pattern for everything. Human knowledge IS pattern recognition and computers do it better and faster. The limiter is our datasets to feed the computers, not the computers themselves.

In ~7 years AlphaFold nearly doubled the progress in protein folding, taking it to near 100%. Literally Nobel Prize-level research, won by people who did not have a background in the field they won the prize for, because they understood AI. Not because they were expert chemists, but because they were experts in AI and therefore they became experts in chemistry.

The same thing is presently happening in math.

By 2030 I would be stunned if you're getting much in the way of prizes in STEM for projects completed without AI. Would be pretty surprised if humans are even involved in the process to a significant degree by 2035.

In the not-too-distant future we're going to have widespread embodied AIs (basically artificial humans) doing trillions of daily "experiments" on the physical world, making discoveries from pattern recognition that were previously not known. Just through observation and testing. People are under the misperception that it's going to be safe to be a plumber, a welder, or an HVAC tech, and while that may be temporarily true, I think within a decade, maybe 15 years, it won't be.

The robots are coming.

hph6203
AG
I'll add that people will say things like "It incorrectly referenced this paper, it's stupid!"

They're under the false impression that it's going to be referencing any human knowledge with respect to STEM fields in the long tail of time. The only remnant of its knowledge in 50 years derived from humans is going to be that the impetus of its creation was a human idea. All the physics, chemistry, engineering, etc. will be derived from its own observations, not ours.

It'll probably still be able to quote Shakespeare and speak Esperanto.
TexasRebel
AG
It's not that it incorrectly cites references.

It's that it cites references that don't even exist.
Sid Farkas
How long do you want to ignore this user?
AG
The economy is leveraged... We're in for an economic depression if these guys are just wrong about the timing of the payoff...



https://www.wsj.com/tech/ai/ai-spending-tech-companies-compared-02b90046?st=sS2dyq&reflink=desktopwebshare_permalink

There are more analytics in the article.
Quote:

It's bigger than the railroad expansion of the 1850s, the Apollo space program that put astronauts on the moon in the 1960s and the decadeslong build-out of the U.S. interstate highway system that ended in the 1970s.
We're talking about the data centers now being built and financed by some of the world's biggest companies in the artificial-intelligence boom.

bmks270
AG
hph6203 said:

bmks270 said:

Spergin said:

Logos Stick said:




The same will apply to regular engineering as well. Given design specs and regulations, there is no reason to assume the same cannot be done there too.


Engineering is creative, creating things that aren't in any training data.

Being creative in accounting lands you in prison.



Engineering is not creating things that aren't in any of the training data. It's taking things in the training data and applying it in a new way. They're not creating new physics. Not only can AI do that, but it is going to understand physics better than the best engineer and then be able to apply that understanding as well as the best engineer.

I wish people would stop claiming that AI can't come up with novel approaches to problems. The first viral AI news story that kicked off this run of "this time might be different" was exactly that. AlphaGo took a massive amount of data from games played by humans, plus its own play of the game based upon that human-trained data, and created strategies never employed by humans. Things that looked like horrendous mistakes to people and turned out to be brilliant.

2014: "A computer can't beat a professional Go player."
2015: "A computer can't beat the best professional Go players."
2016: "…. What just happened?"
2017: "A human can't beat a computer in Go."

Follow that pattern for everything. Human knowledge IS pattern recognition and computers do it better and faster. The limiter is our datasets to feed the computers, not the computers themselves.

In ~7 years AlphaFold nearly doubled the progress in protein folding, taking it to near 100%. Literally Nobel Prize-level research, won by people who did not have a background in the field they won the prize for, because they understood AI. Not because they were expert chemists, but because they were experts in AI and therefore they became experts in chemistry.

The same thing is presently happening in math.

By 2030 I would be stunned if you're getting much in the way of prizes in STEM for projects completed without AI. Would be pretty surprised if humans are even involved in the process to a significant degree by 2035.

In the not-too-distant future we're going to have widespread embodied AIs (basically artificial humans) doing trillions of daily "experiments" on the physical world, making discoveries from pattern recognition that were previously not known. Just through observation and testing. People are under the misperception that it's going to be safe to be a plumber, a welder, or an HVAC tech, and while that may be temporarily true, I think within a decade, maybe 15 years, it won't be.

The robots are coming.




The Go AI is not an LLM, and was not trained the way LLMs are trained.

Recent AI hype is all around LLMs and agents.

Your AI examples still have humans behind them that know what needs to be done, and know what's important. AIs don't have the agency to know what needs to be solved, or to know when their solution is good or bad.

AI is making humans more effective, but not replacing them.

Humanoid-robot embodied AI is a meme.

Specialized robots are a much more efficient way to manufacture and solve problems.

Show me humanoid robots applying more strength than carrying folded laundry.
TexasRebel
AG
Two human traits that lead to innovation are boredom and fatigue.

AI has neither.
DOG XO 84
AG
CrockerAg98 said:

Quote:

Looks like a great time to retire.

Officially retired for 7 days. Freshman…..Wildcat!!!

Huge electrical manufacturing company, really starting to immerse everyone in AI. Really neat technology, everything from summarizing meetings to composing emails. Just wasn't for me… too old I guess.


Retire from SE?

Roger that.
hph6203
AG
TexasRebel said:

It's not that it incorrectly cites references.

It's that it cites references that don't even exist.
It doesn't matter.
TexasRebel
AG
McNasty said:

TexasRebel said:

Who?

And, how do they fare offline, again?


Who cares about offline?


You can't think of any sectors that use cutting edge technology with an air-gap?
hph6203
AG
Tesla's and Waymo's autonomous driving aren't LLMs either. No one narrowed the field to just LLMs. Do you think LLMs are the only AI being developed? It's reinforcement learning. LLMs are just the most consumer-exposed version.

The robot example is about the capacity to escape the data center and observe the world for themselves. In order to launch itself into a round-off, it has to be able to lift more than laundry. Atlas has a rated lifting capacity of 66 lbs. That covers most physical labor, whether a single-individual or tandem lift; for objects heavier than 120 lbs you're not routinely having people do the job unaided by some other mechanical advantage (a trolley, etc.). It only gets more capable from here.

The reason we have LLMs is that they're the easiest way to expose the AI to the sum total of human intelligence. The next phase is the AI exposing itself to reality through robots/machines: observing it, recognizing patterns, improving, propagating, repeating. There will be AI-operated humanoid robots and AI-operated stoves and AI-operated toothbrushes, because the quantity of compute is expanding and the cost of compute is collapsing.
Deputy Travis Junior
You need to go read more about AI, and especially about the cutting-edge products. You're telling us that AI can't write good code or debug, but you haven't even heard of Anthropic (the leader in the AI coding space, valued at over half a trillion dollars). You also have an odd fixation with local AI, even though the overwhelming majority of white-collar use cases (the subject of the OP) will have internet access.
Rocky Rider
AG
I have no doubt that AI can code, but I won't be getting on a plane, putting my life in the hands of a medical device, etc. which is controlled by software written by AI until it's thoroughly tested by a human.
hph6203
AG
Instead of conceptualizing creativity as creation arising from nothing, conceptualize it as a spontaneous compression of the data a person has been exposed to into a concept, driven by a recognition of patterns, and it won't be quite as difficult to believe an AI can be "creative" (they can be; they already are), because that's roughly what creativity is.

The understanding of the AI of the future is going to arise from its observation of reality, not from explanations in human language. Language is a lossy process that can disrupt pattern recognition: Event -> data stream -> language -> compression -> data stream, as opposed to Event -> data stream -> compression -> data stream.

The utility of LLMs is not going to be the intelligence of the AI, but rather the ability of humans to convey to the AI what they want from it, and of the AI to explain to humans its solution to their request. I don't think an AI is going to design a working fusion reactor and also not be able to explain the way it works. Discovery is harder than understanding; you can explain a concept far more easily than you can create it.

People are 100% going to miscalculate the rate of advancement of AI, some overestimating and some underestimating. AI advancements are not continuous, and the data acquisition doesn't come all at once. Right now we're processing LLMs; when the data pipe explodes into self-acquisition through machines, the current rate of advancement will seem slow by comparison. When AI begins designing and building robots for more efficient data collection, it will go even faster.
TexasRebel
AG
Deputy Travis Junior said:

You need to go read more about AI, and especially about the cutting-edge products. You're telling us that AI can't write good code or debug, but you haven't even heard of Anthropic (the leader in the AI coding space, valued at over half a trillion dollars). You also have an odd fixation with local AI, even though the overwhelming majority of white-collar use cases (the subject of the OP) will have internet access.


White collar workers think AI is intelligent. That it's somehow more than threaded database recollection.
hph6203
AG
TexasRebel said:

Deputy Travis Junior said:

You need to go read more about AI, and especially about the cutting-edge products. You're telling us that AI can't write good code or debug, but you haven't even heard of Anthropic (the leader in the AI coding space, valued at over half a trillion dollars). You also have an odd fixation with local AI, even though the overwhelming majority of white-collar use cases (the subject of the OP) will have internet access.


White collar workers think AI is intelligent. That it's somehow more than threaded database recollection.
The error you're making is thinking that human intelligence is more than that. That's human intelligence. Exposure, pattern recognition and then application.

My recollection is you made arguments that computers can't drive cars (could've been another user with a similar name). That is being proven wrong in real time.

In the future we're going to look back and think "man we were dumb for thinking we were smart."
TexasRebel
AG
Which computers can actually drive cars?
Last I saw they're still killing people and misidentifying objects.

Human intelligence is more than pattern recognition. It's asking why a pattern happens. Plenty of humans don't do that.

I was actually shocked to learn, some time ago, that some humans have no inner monologue. Can you imagine?
Deputy Travis Junior
TexasRebel said:

Deputy Travis Junior said:

You need to go read more about AI, and especially about the cutting-edge products. You're telling us that AI can't write good code or debug, but you haven't even heard of Anthropic (the leader in the AI coding space, valued at over half a trillion dollars). You also have an odd fixation with local AI, even though the overwhelming majority of white-collar use cases (the subject of the OP) will have internet access.


White collar workers think AI is intelligent. That it's somehow more than threaded database recollection.


Every time a point you make is mercilessly shredded, you pivot to some other statement. I don't know what your goal is here.

Yes, AI is pattern recognition honed by thoughtful reward systems (reinforcement learning). But guess what? It works really, really well, especially in fields like programming and math where the desired outcomes are very easy to define.
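As a concrete illustration of an "easy to define" outcome: a reward signal for generated code can be as simple as the fraction of test cases a candidate passes. A minimal sketch in Python (the functions and tests here are toy stand-ins, not any real training pipeline):

```python
# Score a candidate solution by the fraction of test cases it passes --
# the kind of easily defined outcome reinforcement learning can reward.

def reward(candidate, test_cases) -> float:
    """Fraction of (args, expected) pairs the candidate gets right."""
    passed = 0
    for args, expected in test_cases:
        try:
            if candidate(*args) == expected:
                passed += 1
        except Exception:
            pass  # crashes earn no credit
    return passed / len(test_cases)

# Toy task: absolute value. One candidate mishandles negatives.
tests = [((3,), 3), ((-4,), 4), ((0,), 0)]
good = lambda x: x if x >= 0 else -x
buggy = lambda x: x

print(reward(good, tests))   # 1.0
print(reward(buggy, tests))  # 2/3 -- fails on -4
```

A math or coding model can be trained against millions of graded signals like this, which is exactly why those domains are moving fastest.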
TexasRebel
AG
I've never pivoted from the statement that "AI" is not intelligent.

Also, which part of mathematics do you think has a predetermined outcome?
Deputy Travis Junior
It doesn't matter if it's intelligent in the way we define and measure intelligence. The things that matter - and what was raised in the OP - are whether it can 1) exceed human performance on lots of white-collar work and 2) reduce the amount of labor required. The answer to both is a resounding yes. Yet you're trying to argue "no" while citing 2-year-old info and demonstrating no awareness of the major players in the space that are already doing the things you say AI can't do.

hph6203
AG
TexasRebel said:

Which computers can actually drive cars?
Last I saw they're still killing people and misidentifying objects.
Waymo has reduced the rate of severe accidents by 90%, has 2,000+ cars on the road driving 2+ million miles every week, and just raised $16 billion for expansion.

Tesla is in Austin with (some) vehicles without safety drivers and is scaling this year; its dedicated vehicle is to begin production this quarter, and it's expanding into Dallas, Houston, Orlando, Miami, Vegas, and Phoenix this quarter, likely first with safety drivers and then with the safety drivers removed.

Lemonade announced a 50% reduction in insurance premiums for miles driven with FSD active, because it is safer than human drivers alone.

Quote:

Human intelligence is more than pattern recognition. It's asking why a pattern happens. Plenty of humans don't do that.
"Why" is just more pattern recognition. Plenty of humans don't do that well, because the capacity for compression and attention is reduced relative to others.

Vocalizing why isn't necessary for intelligence to exist either. There are things my dog understands about the world that I don't, and he couldn't explain them, but he knows them nonetheless, because he is exposed to things I am not, with different sensory capacity than I have.

We literally manufacture capacity for pattern recognition for computers. Today is the dumbest they're going to be.

Quote:

I was actually shocked to learn, some time ago, that some humans have no inner monologue. Can you imagine?
Yes, and it's not necessary to be intelligent. Humans have a variety of different experiences of the world. The fastest speed readers in the world don't subvocalize words in their minds; they just absorb the information, text to concept.
reineraggie09
AG
GeorgiAg said:

TexasRebel said:

GeorgiAg said:

Tex117 said:

GeorgiAg said:

Tex117 said:

AozorAg said:

I've tried using the most expensive AI tools available in my law practice, and I would still be committing malpractice if I didn't redo most of it myself. Whatever everybody is seeing in the hard sciences, it's not showing up in the legal world. Also I expect we're going to get some state legislation prohibiting AI practice of law in various forms in the near future. I think my job is safe for another decade or so at least.

Yeah, it's not quite capable of high-level legal work yet. But is it as good as an actually good 1-3 year associate? Yes.

Is it a good editor in terms of writing your thoughts down and needing it streamlined? Absolutely.



Agree completely.

I have gone from review docs/facts -> traditional research -> drafting/writing -> review/final edits

to

Get facts/docs -> put into AI -> verify/edit.

It speeds everything up.

What it has done with document review is incredible. There is no question the legal field is going to change significantly. But man....as a law student right now...I would be VERY concerned about getting a job.



What still blows my mind is I can now upload X-rays, etc., and it can read them.


No it can't.

It can only regurgitate what the data says about similar X-rays.

The only fields that are in trouble are archaeology and paleontology.

Radiology is trained pattern recognition based upon prior examples. Humans learn from looking at prior films too. A computer will do this 1000X faster than a human. AI will complement radiologists, not replace them. You still have to check it.

Same thing we mentioned above with law and speeding up or complementing tasks.

For me, if someone comes in with a medmal file, I can upload the images to AI and get a $0 initial opinion. If it checks out, then I will spend the money for a radiologist review.


Human radiologists took a big hit with the "gorilla" study a few years ago. Researchers hid a gorilla image in lung radiographs and didn't tell the radiologists it was there. Radiologists were told they were doing metastatic checks on lung films. Something like 90% of radiologists missed the gorilla.

Agree that AI can take over radiology. I can't wait until it comes to vet med.
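The "$0 initial opinion, then pay for radiologist review" workflow described in the quote above is essentially a threshold gate. A hypothetical sketch (the score and the cutoff are invented for illustration, not from any real product):

```python
# Gate a study on an AI screening score: escalate likely findings to a
# (paid) human radiologist, file the rest as AI-only initial negatives.
# Both the score and the 0.3 cutoff are illustrative assumptions.

def triage(model_score: float, escalate_at: float = 0.3) -> str:
    """Route a film based on the AI's estimated probability of a finding."""
    if model_score >= escalate_at:
        return "escalate to radiologist review"
    return "file as initial negative (AI opinion only)"

print(triage(0.8))   # escalate to radiologist review
print(triage(0.05))  # file as initial negative (AI opinion only)
```

In practice the cutoff would be tuned so the cost of missed findings, not just the review budget, drives where the line sits.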
TexasRebel
AG
Was Waymo the one using the Indian call center equivalent of remote drivers?
TexasRebel
AG
Quote:

We literally manufacture capacity for pattern recognition for computers. Today is the dumbest they're going to be.



And they are no less dumb than those in 1953. Just quicker at it and smaller.
 