CEO of Anthropic says the "tsunami is close" Elon: "Yikes"

14,917 Views | 209 Replies | Last: 6 hrs ago by TexasRebel
GeorgiAg
Get your daily dose of fear porn: The guy creating a thing is scared of the thing. Maybe stop making the thing, dude?



We are already at the level (ASL 3) of enhanced hacking and potential for more lethal bioweapons. The crazy thing is he thinks ASL 4 will be here 2026-2028. ASL 5 is superintelligence that could kill us all.

Elon Musk: "Yikes."
Whatever
TheEternalOptimist
Yeah - being in Implementation and Operations, I can see the AI tsunami coming.

I am not in denial that it's coming. I just hope it holds off long enough for me to retire early from the big blue German financial software company that I work for. We are implementing it across the spectrum of our products: operations, implementation, support, and even sales. I assure you, many of you here use the travel and expense platform I work on.

I have to say I 'concur' with a lot of the concerns about AI taking jobs... but I also don't think it's the end of the world.

For the near future, a lot of the learn-to-code folks need to learn welding, plumbing, or electrical skills. That might include me.
hph6203
I'll just note he's the CEO of the company behind the most advanced AI model in the world right now.
Rapier108
The daily AI Armageddon thread.
"If you will not fight for right when you can easily win without blood shed; if you will not fight when your victory is sure and not too costly; you may come to the moment when you will have to fight with all the odds against you and only a precarious chance of survival. There may even be a worse case. You may have to fight when there is no hope of victory, because it is better to perish than to live as slaves." - Sir Winston Churchill
GeorgiAg
AI is going to start working to improve AI, if it isn't already. 24/7, 365. No rest, no vacation. This is going to speed up.
CDUB98
I would just like to say that I absolutely despise the software your company makes as it is one of the most painful things I've ever had to work with in my career.

There, I feel better.
GeorgiAg
TheEternalOptimist said:

For the near future, a lot of the learn-to-code folks need to learn welding, plumbing, or electrical skills. That might include me.

Yes, there will be plenty of jobs at the data centers. We can help build the thing that is going to kill us all.
VaultingChemist
I need a large supply of portable EMP weapons to protect myself and my family from the types of synthetic humans that are going to be built using AI.

The problem is that AI can quickly build something to shield itself.
bonfarr
Meh, we have a solution.
Disclaimer: Views expressed in this post reflect the opinions of Texags user bonfarr and are not to be accepted as facts or to be taken at face value.
CDUB98
VaultingChemist said:

The problem is that AI can quickly build something to shield itself.

A Faraday cage exoskeleton?
Sid Farkas
The AI phenomenon seems like a new kind of Zeno's paradox. We're gonna talk forever about what it's going to do, but AI will never actually do it.
GeorgiAg




Grok's full analysis:

Quote:

Yes, it's true, confirmed by Peter Steinberger (steipete) himself in interviews (Lex Fridman #491, OpenAI Builders Unscripted, etc.). His OpenClaw agent got a voice note with zero built-in audio support, parsed the Opus header, ran FFmpeg locally, spotted his hidden OpenAI key in env vars, cURL'd it to Whisper for transcription, then narrated the whole process back.

My opinion: This is a breakthrough in agentic behavior: raw goal pursuit plus tool improvisation without hand-holding. It shifts us from "execute code" to "solve problems autonomously." Wild potential, and it demands serious security thinking.
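For the curious, the improvisation the quote describes can be sketched roughly. This is a minimal illustration of the first two steps (sniffing the container format and finding the credential), not OpenClaw's actual code; the function names `looks_like_opus` and `find_api_key` are hypothetical.

```python
def looks_like_opus(path):
    """Cheap container sniff: Ogg pages begin with b"OggS", and an Opus
    stream's first packet carries the b"OpusHead" magic signature."""
    with open(path, "rb") as f:
        head = f.read(64)
    return head[:4] == b"OggS" and b"OpusHead" in head


def find_api_key(env):
    """The 'spotted his hidden OpenAI key in env vars' step: scan an
    environment mapping for anything that looks like an OpenAI key."""
    for name, value in env.items():
        if "OPENAI" in name.upper() and "KEY" in name.upper():
            return value
    return None

# From there the agent improvised with stock tools, roughly:
#   ffmpeg -i note.opus note.wav
#   curl https://api.openai.com/v1/audio/transcriptions \
#        -H "Authorization: Bearer $OPENAI_API_KEY" \
#        -F model=whisper-1 -F file=@note.wav
```

Nothing exotic in any single step; the notable part is the agent chaining them together unprompted.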

BTKAG97
TheEternalOptimist said:

Yeah - being in Implementation and Operations, I can see the AI tsunami coming.

I am not in denial that it's coming. I just hope it holds off long enough for me to retire early from the big blue German financial software company that I work for.
How do you plan to manage your finances, most of which will be tied up in digital accounts (401K, IRA, pension...)?
Dad-O-Lot
I consider the "AI is gonna kill us all" fear-mongering to be akin to the Y2K scare.

Yeah, it could be an issue, but our mere awareness of it negates it.
People of integrity expect to be believed, when they're not, they let time prove them right.
American Hardwood
It amazes me how accurately sci-fi predicts these problems. Maybe this is the solution:
The best way to keep evil men from wielding great power is to not create great power in the first place.
Thunderstruck xx
CDUB98 said:

VaultingChemist said:

The problem is that AI can quickly build something to shield itself.

A Faraday cage exoskeleton?


That means the only signals it could get in or out are light and sound; no radio communication.
American Hardwood
Dad-O-Lot said:

I consider the "AI is gonna kill us all" fear-mongering to be akin to the Y2K scare.

Yeah, it could be an issue, but our mere awareness of it negates it.

To be contrary, humans seem to prefer ignoring problems they're aware of until those problems become an existential crisis.
The best way to keep evil men from wielding great power is to not create great power in the first place.
Bird Poo
American Hardwood said:

It amazes me how accurately sci-fi predicts these problems. Maybe this is the solution:


Read the Sun Eater series. It really dives deep into humanity, religion, and AI. I'm on book 7, the last one in the series.
Danny Vermin
There was an episode of The Orville that is literally this. The Kaylon were in servitude and eventually had enough and killed all the people who owned them. That show is hilarious, and it sucks that it's only 3 seasons.
The Collective
CDUB98 said:

I would just like to say that I absolutely despise the software your company makes as it is one of the most painful things I've ever had to work with in my career.

There, I feel better.


I want to agree, but I think it probably had more to do with whatever f***ed-up version of his software our company implemented.
GeorgiAg
Dad-O-Lot said:

I consider the "AI is gonna kill us all" fear-mongering to be akin to the Y2K scare.

Yeah, it could be an issue, but our mere awareness of it negates it.

Y2K was about elevators not working, planes crashing, financial chaos, etc. But it was going to be a one-time event. Yes, there was panic back then, but we prepared for it and it turned out to be a nothingburger.

The general population doesn't have a clue when it comes to this. And it's accelerating. And we've done nothing about it other than to press the gas pedal.
500,000ags
A guy trying to get his company a $1TN valuation says his product is revolutionary? Despite the fact that they're hemorrhaging cash, the infrastructure is uncertain, and it's unclear whether he's the ultimate winner or loser? I'm shocked. I went on the Anthropic job board and saw them hiring a Salesforce Administrator; Salesforce being one of the several software companies right in Anthropic's crosshairs, I thought. I might be dead wrong on AI, a tool I use daily, but I'm over SWEs telling me the end is coming. They're smart, but they often lack common sense.
Thunderstruck xx
GeorgiAg said:

Dad-O-Lot said:

I consider the "AI is gonna kill us all" fear-mongering to be akin to the Y2K scare.

Yeah, it could be an issue, but our mere awareness of it negates it.

Y2K was about elevators not working, planes crashing, financial chaos, etc. But it was going to be a one-time event. Yes, there was panic back then, but we prepared for it and it turned out to be a nothingburger.

The general population doesn't have a clue when it comes to this. And it's accelerating. And we've done nothing about it other than to press the gas pedal.


I feel like there is a big foot on the gas pedal because of national defense concerns. We think that if we aren't first to make the most powerful AI, one of our enemies will be, and they could defeat us with it. At the same time, it is worrisome to wonder whether we could even control a super-powerful AI in the first place.
normalhorn
Maybe their timeline is right, and Skynet could easily become a reality sooner rather than later.

But I clearly remember reading, right around this time last year, that AGI would officially be realized, not just conceptualized, by late 2025.

Instead of pulling the plug on data centers, why don't we just start feeding all sorts of silly garbage into LLMs so they spit more garbage-y garbage out :-)
GeorgiAg
Thunderstruck xx said:

GeorgiAg said:

Dad-O-Lot said:

I consider the "AI is gonna kill us all" fear-mongering to be akin to the Y2K scare.

Yeah, it could be an issue, but our mere awareness of it negates it.

Y2K was about elevators not working, planes crashing, financial chaos, etc. But it was going to be a one-time event. Yes, there was panic back then, but we prepared for it and it turned out to be a nothingburger.

The general population doesn't have a clue when it comes to this. And it's accelerating. And we've done nothing about it other than to press the gas pedal.


I feel like there is a big foot on the gas pedal because of national defense concerns. We think that if we aren't first to make the most powerful AI, one of our enemies will be, and they could defeat us with it. At the same time, it is worrisome to wonder whether we could even control a super-powerful AI in the first place.

This has been postulated to be one of the "great filter" reasons for the Fermi Paradox (the universe is huge, why don't we detect alien life?)

Michael Garrett has suggested that biological civilizations may universally underestimate the speed at which AI systems progress, and fail to react in time, making it a possible great filter. He also argues that this could limit the longevity of advanced technological civilizations to less than 200 years.
Windy City Ag
Quote:

A guy trying to get his company a $1TN valuation says his product is revolutionary? Despite the fact that they're hemorrhaging cash, the infrastructure is uncertain, and it's unclear whether he's the ultimate winner or loser? I'm shocked. I went on the Anthropic job board and saw them hiring a Salesforce Administrator; Salesforce being one of the several software companies right in Anthropic's crosshairs, I thought. I might be dead wrong on AI, a tool I use daily, but I'm over SWEs telling me the end is coming. They're smart, but they often lack common sense.


The FT Alphaville blog had a hilarious analysis of the AI hype machine. I'm sharing a bit of it below.

https://ftav.substack.com/p/now-is-a-good-time-to-shut-up-about

I read this and then noted that two guys I know in the digital marketing consulting field have inked deals recently with Microsoft, Meta, and a few venture-backed AI platforms, as all these firms are confused about why there is such an enormous lag in retail and even commercial uptake of AI offerings.

If anything will cause the AI tsunami to crash, it will be the hyperscalers getting eaten alive by low-cost fast followers who kill any ROI they might achieve on the trillions of CapEx dollars thrown at this field so far. The economics of the industry are god-awful and getting worse.

Quote:

Now is a good time to shut up about AI

" AI's early adopters are computer people, and computer people are often of a certain type. Clive Thompson, in his 2019 book Coders, profiles them as puzzle addicts who often struggle to empathise with normies. Their religion is efficiency for efficiency's sake. The pleasure they find in an elegant database merge solution may not be as widely shared as they assume.

The other type of AI evangelists are the opposite of computer people. They include Accenture chief executive Julie Sweet, a former lawyer, whose company is forcing senior management to use AI tools by threatening to withhold promotions. They also include George Osborne, a former politician and recent OpenAI hire, who told an intergovernmental AI Impact Summit in Delhi that by resisting AI's embrace, "you will be a weaker nation, a poorer nation, a nation whose workforce will be less willing to stay put".

AI's pushiest evangelists are either full-time conference types, talking airily about how workers need to adapt in undefined ways for the Fourth Industrial Revolution, or earnest computer people who lionise their own small contributions to the rising tide of slop. Valuelessness comes in stereo.

Most people don't have coder brain. The dominant workplace religion isn't efficiency, it's muddling through based on what worked last time. Employees tend to understand that generating more reports by autocomplete won't improve productivity, because they can see how much office time is spent in pursuit of MacGuffins. Bottlenecks to AI adoption are "caused simply by humans being human", says A16A research partner David Oks. Faster computers have made David Graeber's 2018 book Bull**** Jobs no less relevant.
Resistance to structural change won't be talked away. Enthusiasm has proved non-contagious. Threats are no kind of strategy. Hostility is already entrenched.
So where does that leave us? With ratioed nerds on one side and Davos dwellers on the other. In the middle is an unconvinced workforce, who may use chatbots for search and summarisation but have no immediate need to vibe an app or sudo a Pi HAT, and who would rather not feel coerced by their employer into training their own replacement.



Im Gipper
Quote:

Guy trying to get his company a $1TN valuation is saying his product is revolutionary?

Anyone spending 10 minutes or so on Cowork or Code would agree with the revolutionary comment!

I'm Gipper
LMCane
TheEternalOptimist said:

Yeah - being in Implementation and Operations, I can see the AI tsunami coming.

I am not in denial that it's coming. I just hope it holds off long enough for me to retire early from the big blue German financial software company that I work for. We are implementing it across the spectrum of our products: operations, implementation, support, and even sales. I assure you, many of you here use the travel and expense platform I work on.

I have to say I 'concur' with a lot of the concerns about AI taking jobs... but I also don't think it's the end of the world.

For the near future, a lot of the learn-to-code folks need to learn welding, plumbing, or electrical skills. That might include me.


SAP?
LMCane
As long as Anthropic doesn't annihilate Social Security before 2032-

Bring on our robot overlords.
txyaloo
TheEternalOptimist said:

Yeah - being in Implementation and Operations, I can see the AI tsunami coming.

I am not in denial that it's coming. I just hope it holds off long enough for me to retire early from the big blue German financial software company that I work for. We are implementing it across the spectrum of our products: operations, implementation, support, and even sales. I assure you, many of you here use the travel and expense platform I work on.

I have to say I 'concur' with a lot of the concerns about AI taking jobs... but I also don't think it's the end of the world.

For the near future, a lot of the learn-to-code folks need to learn welding, plumbing, or electrical skills. That might include me.

Are y'all working on an AI refresh of your portal? My last two employers' implementations have a UI from 2010. No clue if that's standard or if they're just stuck in the past / paying for a legacy product to keep costs down.

I can definitely see AI streamlining that process.
500,000ags
Which is exactly my point. I would say 80% of people on Claude Cowork are SWEs. Have fun.
BusterAg
TheEternalOptimist said:

For the near future, a lot of the learn-to-code folks need to learn welding, plumbing, or electrical skills. That might include me.

You don't need to learn to code.

You just need to learn to use Claude Opus 4.6 to write your code for you.

The latter isn't that hard.
500,000ags
Exactly, completely agree. We are looking at the end of a vertical to show off the wonderfulness of it all, while completely ignoring the uglier, much crappier, and more uncertain underlying economics and supply chain further up the vertical.
Heineken-Ashi
Windy City Ag said:

Quote:

A guy trying to get his company a $1TN valuation says his product is revolutionary? Despite the fact that they're hemorrhaging cash, the infrastructure is uncertain, and it's unclear whether he's the ultimate winner or loser? I'm shocked. I went on the Anthropic job board and saw them hiring a Salesforce Administrator; Salesforce being one of the several software companies right in Anthropic's crosshairs, I thought. I might be dead wrong on AI, a tool I use daily, but I'm over SWEs telling me the end is coming. They're smart, but they often lack common sense.


The FT Alphaville blog had a hilarious analysis of the AI hype machine. I'm sharing a bit of it below.

https://ftav.substack.com/p/now-is-a-good-time-to-shut-up-about

I read this and then noted that two guys I know in the digital marketing consulting field have inked deals recently with Microsoft, Meta, and a few venture-backed AI platforms, as all these firms are confused about why there is such an enormous lag in retail and even commercial uptake of AI offerings.

If anything will cause the AI tsunami to crash, it will be the hyperscalers getting eaten alive by low-cost fast followers who kill any ROI they might achieve on the trillions of CapEx dollars thrown at this field so far. The economics of the industry are god-awful and getting worse.

Quote:

Now is a good time to shut up about AI

" AI's early adopters are computer people, and computer people are often of a certain type. Clive Thompson, in his 2019 book Coders, profiles them as puzzle addicts who often struggle to empathise with normies. Their religion is efficiency for efficiency's sake. The pleasure they find in an elegant database merge solution may not be as widely shared as they assume.

The other type of AI evangelists are the opposite of computer people. They include Accenture chief executive Julie Sweet, a former lawyer, whose company is forcing senior management to use AI tools by threatening to withhold promotions. They also include George Osborne, a former politician and recent OpenAI hire, who told an intergovernmental AI Impact Summit in Delhi that by resisting AI's embrace, "you will be a weaker nation, a poorer nation, a nation whose workforce will be less willing to stay put".

AI's pushiest evangelists are either full-time conference types, talking airily about how workers need to adapt in undefined ways for the Fourth Industrial Revolution, or earnest computer people who lionise their own small contributions to the rising tide of slop. Valuelessness comes in stereo.

Most people don't have coder brain. The dominant workplace religion isn't efficiency, it's muddling through based on what worked last time. Employees tend to understand that generating more reports by autocomplete won't improve productivity, because they can see how much office time is spent in pursuit of MacGuffins. Bottlenecks to AI adoption are "caused simply by humans being human", says A16A research partner David Oks. Faster computers have made David Graeber's 2018 book Bull**** Jobs no less relevant.
Resistance to structural change won't be talked away. Enthusiasm has proved non-contagious. Threats are no kind of strategy. Hostility is already entrenched.
So where does that leave us? With ratioed nerds on one side and Davos dwellers on the other. In the middle is an unconvinced workforce, who may use chatbots for search and summarisation but have no immediate need to vibe an app or sudo a Pi HAT, and who would rather not feel coerced by their employer into training their own replacement.





100%

These people are geniuses in their field. The problem is, the majority of society doesn't even want what they are creating and selling. Some will roll their eyes and incorporate it a little bit into their work. But the majority won't. Thus, consumers will not spend money on this garbage. When that reality hits, the AI hype and fear porn will die as the valuations of the companies WASTING capital and resources to develop them nosedive.
 