Hegseth vs. Anthropic

9,524 Views | 108 Replies | Last: 3 days ago by Captain Winky
chaca5151
This isn't about Anthropic or the specific issues involved. It's about the fundamental idea that technology integrated into our military should be exclusively managed by our elected or appointed leaders. No private company has the authority to set the rules for using our most sensitive national security systems, especially since those rules can change and are open to interpretation. The Department of War cannot rely on a system that a private company could disable at any time. A private company working for the DoD doesn't have the final say over what gets launched. I voted for one person to get that approval.
*** No one loves Mexico more than people who refuse to live there ***

Not everyone gets the same version of me. One person might tell you I have an amazing beautiful soul. Another might tell you I’m a cold-hearted a$$^ole. Believe them both. I don’t treat people badly. I treat them accordingly - unknown
fullback44
mickeyrig06sq3 said:

Stmichael said:

Someone needs to get through to Hegseth that no form of AI is even close to ready to handle this sort of thing correctly. It would be a colossal **** up to push for this kind of autonomous decision making to be given to a chat bot.

They'll never be ready, or at least they should never be trusted as ready. I don't care how sophisticated and accurate AI gets; there always needs to be a human in the decision-making tree for causing death.

How would AI cause people to die? I'm not understanding this
Jeeper79
chaca5151 said:

This isn't about Anthropic or the specific issues involved. It's about the fundamental idea that technology integrated into our military should be exclusively managed by our elected or appointed leaders. No private company has the authority to set the rules for using our most sensitive national security systems, especially since those rules can change and are open to interpretation. The Department of War cannot rely on a system that a private company could disable at any time. A private company working for the DoD doesn't have the final say over what gets launched. I voted for one person to get that approval.
You're missing the point. The person you elected wants to hand that approval off to a machine. Did you sign up for THAT?

Oh, and the whole Big Brother thing, which I honestly thought conservatives were against.

"Govern me harder, Daddy!"
Jeeper79
fullback44 said:

mickeyrig06sq3 said:

Stmichael said:

Someone needs to get through to Hegseth that no form of AI is even close to ready to handle this sort of thing correctly. It would be a colossal **** up to push for this kind of autonomous decision making to be given to a chat bot.

They'll never be ready, or at least they should never be trusted as ready. I don't care how sophisticated and accurate AI gets; there always needs to be a human in the decision-making tree for causing death.

How would AI cause people to die? I'm not understanding this
AI gets things wrong. Failure rates can be in the double digits. What if an AI-powered turret shot up our own guys? Or preemptively launched a counterstrike against missiles that aren't even there?

Plus we haven't actually ruled out a SkyNet scenario. Even this week, Anthropic conceded that it couldn't guarantee that its fail-safes work. And that's not just an Anthropic problem.
fullback44
Jeeper79 said:

fullback44 said:

mickeyrig06sq3 said:

Stmichael said:

Someone needs to get through to Hegseth that no form of AI is even close to ready to handle this sort of thing correctly. It would be a colossal **** up to push for this kind of autonomous decision making to be given to a chat bot.

They'll never be ready, or at least they should never be trusted as ready. I don't care how sophisticated and accurate AI gets; there always needs to be a human in the decision-making tree for causing death.

How would AI cause people to die? I'm not understanding this

AI gets things wrong. Failure rates can be in the double digits. What if an AI-powered turret shot up our own guys?

Plus we haven't actually ruled out a SkyNet scenario. Even this week, Anthropic conceded that it couldn't guarantee that its fail-safes work. And that's not just an Anthropic problem.

Ok I see, so if it were to be used for military equipment things could go wrong …
MaxPower
chaca5151 said:

This isn't about Anthropic or the specific issues involved. It's about the fundamental idea that technology integrated into our military should be exclusively managed by our elected or appointed leaders. No private company has the authority to set the rules for using our most sensitive national security systems, especially since those rules can change and are open to interpretation. The Department of War cannot rely on a system that a private company could disable at any time. A private company working for the DoD doesn't have the final say over what gets launched. I voted for one person to get that approval.
That's fine, but I'd be very concerned with whoever is willing to work with the DoD under those parameters. At best they will be incompetent…
Less Evil Hank Scorpio
Pichael Thompson said:

My guess is the msm took Hegseth's point way out of context as usual, but I'll wait to see


Do Trump and Hegseth's comments today change your mind? The reporting seems spot on.
Old McDonald
Mr.Milkshake said:

Lol just have to say I love reading the liberal tears over stuff like this
crazy that "no government mass surveillance or autonomous weapon targeting" is now the liberal position
Jeeper79
Old McDonald said:

Mr.Milkshake said:

Lol just have to say I love reading the liberal tears over stuff like this
crazy that "no government mass surveillance or autonomous weapon targeting" is now the liberal position
it is if Trump says it is.
A_Gang_Ag_06
fullback44 said:

mickeyrig06sq3 said:

Stmichael said:

Someone needs to get through to Hegseth that no form of AI is even close to ready to handle this sort of thing correctly. It would be a colossal **** up to push for this kind of autonomous decision making to be given to a chat bot.

They'll never be ready, or at least they should never be trusted as ready. I don't care how sophisticated and accurate AI gets; there always needs to be a human in the decision-making tree for causing death.

How would AI cause people to die? I'm not understanding this


43 years old and more relevant than ever.

Logos Stick
chaca5151 said:

This isn't about Anthropic or the specific issues involved. It's about the fundamental idea that technology integrated into our military should be exclusively managed by our elected or appointed leaders. No private company has the authority to set the rules for using our most sensitive national security systems, especially since those rules can change and are open to interpretation. The Department of War cannot rely on a system that a private company could disable at any time. A private company working for the DoD doesn't have the final say over what gets launched. I voted for one person to get that approval.


I disagree. I do not trust a Dem President with the ability to use AI at their will. The last president got a bunch of people fired and blacklisted for not injecting an experimental chemical.
Ozzy Osbourne
So the Army has to agree to the T&Cs that Anthropic comes up with? What a joke.

The whole "supply chain risk" thing is petty, but I agree with Hegseth that the military shouldn't be accountable to a corporate T&Cs agreement.

Maybe the government should fund their own open-weights model. China has put out some pretty good ones lately (GLM 5, Kimi K2.5, minimax m2.5). Models are a commodity.
Jeeper79
Ozzy Osbourne said:

So the Army has to agree to the T&Cs that Anthropic comes up with? What a joke.

The whole "supply chain risk" thing is petty, but I agree with Hegseth that the military shouldn't be accountable to a corporate T&Cs agreement.

Maybe the government should fund their own open-weights model. China has put out some pretty good ones lately (GLM 5, Kimi K2.5, minimax m2.5). Models are a commodity.
The military already isn't accountable unless they want to be. Anthropic isn't forcing them to do anything they don't want to do. It's a deal that fell through. I don't understand all this MAGA hatred for Anthropic all of a sudden. Nobody twisted their arm to take a deal they don't want.
Deputy Travis Junior
Ozzy Osbourne said:

So the Army has to agree to the T&Cs that Anthropic comes up with? What a joke.

The whole "supply chain risk" thing is petty, but I agree with Hegseth that the military shouldn't be accountable to a corporate T&Cs agreement.

Maybe the government should fund their own open-weights model. China has put out some pretty good ones lately (GLM 5, Kimi K2.5, minimax m2.5). Models are a commodity.


Anthropic is saying "our software cannot reliably perform the tasks you want it to perform and may malfunction and kill innocent people if you try to use it that way." They believe what they're saying so strongly that they're willing to lose hundreds of millions of sales to stand by it. Yes, the government should respect that assertion.

And if Trump had any brains he'd throw Hegseth out on his ass for whatever dystopian warrantless domestic surveillance program he's trying to cook up. The military has absolutely no business engaging in that.
TexasRebel
Jeeper79 said:

fullback44 said:

mickeyrig06sq3 said:

Stmichael said:

Someone needs to get through to Hegseth that no form of AI is even close to ready to handle this sort of thing correctly. It would be a colossal **** up to push for this kind of autonomous decision making to be given to a chat bot.

They'll never be ready, or at least they should never be trusted as ready. I don't care how sophisticated and accurate AI gets; there always needs to be a human in the decision-making tree for causing death.

How would AI cause people to die? I'm not understanding this
AI gets things wrong. Failure rates can be in the double digits. What if an AI-powered turret shot up our own guys? Or preemptively launched a counterstrike against missiles that aren't even there?

Plus we haven't actually ruled out a SkyNet scenario. Even this week, Anthropic conceded that it couldn't guarantee that its fail-safes work. And that's not just an Anthropic problem.


If fail-safes don't fail safely, they aren't.
Saxsoon
chaca5151 said:

This isn't about Anthropic or the specific issues involved. It's about the fundamental idea that technology integrated into our military should be exclusively managed by our elected or appointed leaders. No private company has the authority to set the rules for using our most sensitive national security systems, especially since those rules can change and are open to interpretation. The Department of War cannot rely on a system that a private company could disable at any time. A private company working for the DoD doesn't have the final say over what gets launched. I voted for one person to get that approval.


I bet you didn't vote for a machine intelligence to kill indiscriminately or for mass surveillance of our citizens. I thought Bush and Homeland Security were bad, but I guess since it's Trump, y'all get on your knees.

I voted for Trump but not for this
Mr.Milkshake
1. Dario is mostly full of ****
2. No one knows what the govt is actually doing, but safe to believe it's not in our best interest.
3. No one knows what Anthropic is actually doing, but safe to believe it's not in our best interest.
4. The US military cannot be beholden to a corp like Anthropic or any other
harge57
Logos Stick said:

There goes $200 mil. I think Pete was over the top here. Oh well, OpenAI will do what they want, I'm sure. I wonder how Elon feels about Trump's position.


It will be more than $200 mil.

This makes it very difficult for any contractor to the feds to roll out Anthropic.

If I roll it out to my 40k consultants I'll constantly have to be carving things out for fed contractors etc. Much easier for me to just roll out my agents with other models.

Side note: Anthropic has been a ***** to negotiate with on the commercial side as well. They won't even give us a lawyer to negotiate terms until they get a commitment of minimum 8-figure spend, and they're asking for 9.

I will admit Claude Code and cowork are the soup du jour right now, but give it 3 months and it's probably something else.
Yukon Cornelius
I think they want autonomous AI drones that can be launched, never communicated with again, and still carry out a search-and-destroy mission.
Logos Stick
Yeah, I didn't consider contractors. Gonna be a lot more than $200 mil.
Mr.Milkshake
There are plenty of good open-source alternatives to Claude Code. You can also rig up Claude Code to use someone else's model very easily.
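For anyone curious how that works: Claude Code honors a couple of environment variables, so you can point it at any proxy that speaks the Anthropic API (LiteLLM is one example that can front open-weights models). The URL and key below are placeholders, not real endpoints — just a sketch:

```shell
# Point Claude Code at an Anthropic-API-compatible proxy serving another model.
# Both values are placeholders -- substitute your own proxy address and key.
export ANTHROPIC_BASE_URL="http://localhost:4000"   # hypothetical local proxy
export ANTHROPIC_AUTH_TOKEN="sk-placeholder"        # whatever key the proxy expects
# then launch Claude Code as usual:
# claude
```

Unset the variables (or open a fresh shell) to go back to Anthropic's hosted models.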
tysker
Software is speech under the 1A. Does the government have the authority to compel software to engage in illegal activities and actions that would be considered unconstitutional?
BMX Bandit
tysker said:

Software is speech under the 1A. Does the government have the authority to compel software to engage in illegal activities and actions that would be considered unconstitutional?


They aren't compelling them to do anything.

They were negotiating a contract. There is no free speech implication
Jeeper79
Yukon Cornelius said:

I think they want autonomous drones running AI that be launched and not communicated with again and carry out the search and destroy mission.
You want SkyNet? This is how you get SkyNet.
Jeeper79
BMX Bandit said:

tysker said:

Software is speech under the 1A. Does the government have the authority to compel software to engage in illegal activities and actions that would be considered unconstitutional?


They aren't compelling them to do anything.

They were negotiating a contract. There is no free speech implication
Precisely. They were negotiating a contract. And it fell through. I don't know why one party decided the other was traitorous just because they didn't get what they wanted.
tysker
BMX Bandit said:

tysker said:

Software is speech under the 1A. Does the government have the authority to compel software to engage in illegal activities and actions that would be considered unconstitutional?


They aren't compelling them to do anything.

They were negotiating a contract. There is no free speech implication

If the government is compelling the AI to push the boundaries of constitutionality without human intervention, oversight, or sign-off, it is most certainly a 1A issue.

Don't be naive about the slippery slope of these agentic outcomes.
PDEMDHC
If only marvel made a movie about constant surveillance and how they can build a weapon to kill everyone…

Hail hydra!
HTownAg98
If the military is saying this won't be used for mass surveillance on US citizens, it will 100% be used for mass surveillance on US citizens.
Jeeper79
This forum loves to cite 1984, but when a real-world example of mass surveillance is presented by the current administration, they pretend like it's a good thing.

"The Party told you to reject the evidence of your eyes and ears. It was their final, most essential command." — George Orwell, 1984
Logos Stick
Jeeper79 said:

BMX Bandit said:

tysker said:

Software is speech under the 1A. Does the government have the authority to compel software to engage in illegal activities and actions that would be considered unconstitutional?


They aren't compelling them to do anything.

They were negotiating a contract. There is no free speech implication
Precisely. They were negotiating a contract. And it fell through. I don't know why one party decided the other was traitorous just because they didn't get what they wanted.


Why did Hegseth call it a betrayal? Unless Anthropic unofficially agreed to the terms early on - to get the initial government contract in the first place - and then refused when signature time came, I don't get that statement.
Deputy Travis Junior
From what I've read (and I want to note I haven't seen the primary source so it may be wrong), Anthropic's position has been consistent since the beginning.

I've loved what Hegseth has done to date - beginning to fix the wasteful procurement process and refocusing the military on its primary mission (not social justice) were both badly needed. But this beef with anthropic and his attempt to destroy a leading American company are the one "oh sh*t" that cancels out a hundred "atta boys."
Saxsoon
BMX Bandit said:

tysker said:

Software is speech under the 1A. Does the government have the authority to compel software to engage in illegal activities and actions that would be considered unconstitutional?


They aren't compelling them to do anything.

They were negotiating a contract. There is no free speech implication

Oh yes, blackballing them as a Supply Chain threat isn't compelling them after the negotiations fell through.

Sure Jan