TexasRebel said:
Who?
And, how do they fare offline, again?
Spergin said:
TexasRebel said:
Logos Stick said:
Give this a listen, then mock it, dismiss it and hope.

Anthropic CEO: "Software engineering will be completely obsolete in 6-12 months…" pic.twitter.com/EwKq8l7HE7
— ₕₐₘₚₜₒₙ (@hamptonism) February 5, 2026

All I know is AI is really bad at writing robust code.
AI also isn't much use offline.
Considering the fact that every tech company is now heavily using it for code, this is clearly and obviously false and out of date.
TexasRebel said:
Spergin said:
Considering the fact that every tech company is now heavily using it for code, this is clearly and obviously false and out of date.
Have you seen how bad websites have gotten lately?
The companies relying on AI are doing terribly.
I'm not saying it can't lay out a framework quicker than I can, but squashing bugs? Nope. Not a chance.
Cynic said:
How will we know when AI gets something wrong if we no longer understand anything?

The entire economy is going to collapse into the limitations of physics. How will you know it made a mistake? It will be obvious, because it doesn't work. The scorecard is its alignment with physical laws.
infinity ag said:
TexasRebel said:
Have you seen how bad websites have gotten lately?
The companies relying on AI are doing terribly.
I'm not saying it can't lay out a framework quicker than I can, but squashing bugs? Nope. Not a chance.
Microsoft claims 50% of its code is written by AI, but it's hiring 10,000 engineers in India. Ha ha.
Lots of 60+ suckers to sell snake oil to.
Law-Apt_3G said:
Been prepping with lots of little bottles of booze for currency. Now I am hoarding 1GB SD cards to hand out like candy to the little AIs. Plan is to be they/their king.
hph6203 said:
Cynic said:
How will we know when AI gets something wrong if we no longer understand anything?

The entire economy is going to collapse into the limitations of physics. How will you know it made a mistake? It will be obvious, because it doesn't work. The scorecard is its alignment with physical laws.
Spergin said:
Logos Stick said:
Goldman Sachs is rolling out Anthropic’s AI model to automate accounting and compliance roles completely.
Anthropic engineers have been embedded at Goldman for 6 months, co-developing systems that act like “digital co-workers” for high-volume, process-heavy tasks.
The new setup… pic.twitter.com/KMvQkkTAMs
— Rohan Paul (@rohanpaul_ai) February 6, 2026
The same will apply to regular engineering as well. Given design specs and regulations, there is no reason to assume the same cannot be done there too.
Over_ed said:
bmks270 said:
A 5-10% bull**** rate of hallucinations is still too frequent to ever replace humans.
AI will spew a bunch of jargon, and humans will have to filter the outputs to separate what is nonsense from what is true insight worth considering.
Problems arise when no experts exist to do the filtering anymore, because all of those roles were replaced by AI and no new experts were trained to replace those who retire.
They are only getting better. Yes, it still makes mistakes. But fewer than I saw from many of my data-centric peers in industry.
And if you are getting that many hallucinations, your prompts are crappy.
I have several different multi-page prompts. They all have the AI do validity checking on all inputs and data, as well as computations, table construction, source citations, ... while using multi-agent "teams" to handle various perspectives and/or sequencing.
And I am using AI for fun. I am sure the guys/gals doing this for real are generally doing a much better job than I.
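For anyone curious what that multi-agent validity-checking pattern looks like in code, here is a minimal sketch. Everything in it is hypothetical: the agent roles (`drafter`, `fact_checker`, `reviser`), the check performed, and the pipeline itself are stand-ins for real model calls, just to show the draft → check → revise structure.

```python
# Minimal sketch of a multi-agent "team" with validity checking.
# Each "agent" is a plain function standing in for an LLM call,
# so the pipeline structure is runnable as-is.

def drafter(task: str) -> str:
    """First pass: produce a draft answer for the task."""
    return f"DRAFT: {task}"

def fact_checker(draft: str) -> list[str]:
    """Second pass: return a list of problems found in the draft."""
    issues = []
    if "citation" not in draft.lower():
        issues.append("no source citations")
    return issues

def reviser(draft: str, issues: list[str]) -> str:
    """Final pass: revise the draft to address each flagged issue."""
    for issue in issues:
        draft += f" [revised to address: {issue}]"
    return draft

def run_team(task: str) -> str:
    """Draft, check, and revise only if the checker found issues."""
    draft = drafter(task)
    issues = fact_checker(draft)
    return reviser(draft, issues) if issues else draft
```

In a real setup each function would be a separate prompt (or a separate model call with its own system instructions), and the checker would verify inputs, computations, and citations rather than a single keyword.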
bmks270 said:
Spergin said:
The same will apply to regular engineering as well. Given design specs and regulations, there is no reason to assume the same cannot be done there too.

Engineering is creative, creating things that aren't in any training data.
Being creative in accounting lands you in prison.

Engineering is not creating things that aren't in any of the training data. It's taking things in the training data and applying them in a new way. Engineers are not creating new physics. Not only can AI do that, but it is going to understand physics better than the best engineer and then be able to apply that understanding as well as the best engineer.
Quote:
It's bigger than the railroad expansion of the 1850s, the Apollo space program that put astronauts on the moon in the 1960s, and the decades-long build-out of the U.S. interstate highway system that ended in the 1970s.
We're talking about the data centers now being built and financed by some of the world's biggest companies in the artificial-intelligence boom.
hph6203 said:
Engineering is not creating things that aren't in any of the training data. It's taking things in the training data and applying them in a new way. Engineers are not creating new physics. Not only can AI do that, but it is going to understand physics better than the best engineer and then be able to apply that understanding as well as the best engineer.
I wish people would stop claiming that AI can't come up with novel approaches to problems. The first viral AI news story that kicked off this run of "this time might be different" was exactly that. AlphaGo took a massive amount of data from games played by humans, added its own play of the game on top of that human-trained data, and created strategies never employed by humans. Moves that looked like horrendous mistakes to people turned out to be brilliant.
2014: "A computer can't beat a professional Go player."
2015: "A computer can't beat the best professional Go players."
2016: "…. What just happened?"
2017: "A human can't beat a computer in Go."
Follow that pattern for everything. Human knowledge IS pattern recognition and computers do it better and faster. The limiter is our datasets to feed the computers, not the computers themselves.
In roughly 7 years, AlphaFold close to doubled the progress in protein folding, to near 100%. Literally Nobel Prize-level research, won by people who did not have a background in the field they won the prize for, because they understood AI. Not because they were expert chemists, but because they were experts in AI, and that made them experts in chemistry.
The same thing is presently happening in math.
By 2030 I would be stunned if you're getting much in the way of prizes in STEM for projects completed without AI. Would be pretty surprised if humans are even involved in the process to a significant degree by 2035.
In the not-too-distant future we're going to have widespread embodied AIs (basically artificial humans) doing trillions of daily "experiments" on the physical world, making discoveries from pattern recognition that were previously unknown, just through observation and testing. People are under the misperception that it's going to be safe to be a plumber, a welder, or an HVAC tech, and while that may be temporarily true, I think within a decade, maybe 15 years, it won't be.
The robots are coming.
CrockerAg98 said:
Quote:
Looks like a great time to retire.
Officially retired for 7 days. Freshman…..Wildcat!!!
Huge electrical manufacturing company, really starting to immerse everyone in AI. Really neat technology, everything from summarizing meetings to composing emails. Just wasn't for me… too old, I guess.
Retire from SE?
TexasRebel said:
It's not that it incorrectly cites references.
It's that it cites references that don't even exist.

It doesn't matter.
McNasty said:
TexasRebel said:
Who?
And, how do they fare offline, again?
Who cares about offline?
Deputy Travis Junior said:
You need to go read more about AI, and especially about the cutting-edge products. You're telling us that AI can't write good code or debug, but you haven't even heard of Anthropic (the leader in the AI coding space, valued at over half a trillion dollars). You also have an odd fixation with local AI, even though the overwhelming majority of white-collar use cases (the subject of the OP) will have internet access.
TexasRebel said:
White collar workers think AI is intelligent. That it's somehow more than threaded database recollection.

The error you're making is thinking that human intelligence is more than that. That's human intelligence: exposure, pattern recognition, and then application.
TexasRebel said:
Which computers can actually drive cars?
Last I saw they're still killing people and misidentifying objects.

Waymo has reduced the rate of severe accidents by 90%, and has 2,000+ cars on the road driving 2+ million miles every week. They just raised $16 billion for expansion.
Quote:
Human intelligence is more than pattern recognition. It's asking why a pattern happens. Plenty of humans don't do that.

"Why" is just more pattern recognition. Plenty of humans don't do it well, because their capacity for compression and attention is reduced relative to others.
Quote:
I was actually shocked to learn, some time ago, that some humans have no inner monologue. Can you imagine?

Yes, and it's not necessary to be intelligent. Humans have a variety of different experiences of the world. The fastest speed readers in the world don't subvocalize words in their mind; they just absorb the information, text to concept.
GeorgiAg said:
TexasRebel said:
GeorgiAg said:
Tex117 said:
GeorgiAg said:
Tex117 said:
AozorAg said:
I've tried using the most expensive AI tools available in my law practice, and I would still be committing malpractice if I didn't redo most of it myself. Whatever everybody is seeing in the hard sciences, it's not showing up in the legal world. Also I expect we're going to get some state legislation prohibiting AI practice of law in various forms in the near future. I think my job is safe for another decade or so at least.
Yeah, it's not quite capable of high-level legal work yet. But is it as good as an actually good 1-3 year associate? Yes.
Is it a good editor in terms of writing your thoughts down and needing it streamlined? Absolutely.
Agree completely.
I have gone from review docs/fact -> traditional research -> drafting/writing -> review/final edits
to
Get facts/docs -> put into AI -.> verify/edit.
It speeds everything up.
What it has done with document review is incredible. There is no question the legal field is going to change significantly. But man... as a law student right now... I would be VERY concerned about getting a job.
What still blows my mind is I can now upload X-rays, etc., and it can read them.
No it can't.
It can only regurgitate what data says about similar x-rays.
The only fields that are in trouble are archaeology and paleontology.
Radiology is trained pattern recognition based upon prior examples. Humans learn from looking at prior films too. A computer will do this 1000X faster than a human. AI will complement radiologists, not replace them. You still have to check it.
Same thing we mentioned above with law and speeding up or complementing tasks.
For me, if someone comes in with a medmal file, I can upload the images to AI and get a $0 initial opinion. If it checks out, then I will spend the money for a radiologist review.
Quote:
We literally manufacture capacity for pattern recognition for computers. Today is the dumbest they're going to be.