Hegseth vs. Anthropic

9,515 Views | 108 Replies | Last: 3 days ago by Captain Winky
Less Evil Hank Scorpio
AG


FNC is reporting there was a meeting today in which Hegseth demanded the two restrictions imposed by Anthropic on their models used by the gov't be lifted. Those two restrictions are 1) no mass surveillance and 2) no "autonomous kinetic operations", in other words a human has to be involved and not just let the model decide who to kill.

Here is the quote from the tweet:

Quote:

At issue is Anthropic's two stipulations that its advanced AI model currently used in the Pentagon's classified systems is NOT used for autonomous kinetic operations (Anthropic currently requires human oversight of autonomous operations when used to kill things for safety reasons because they don't know how the autonomous system will react and could even endanger soldiers using the model; soldiers and others could lose control of the model and automatically start killing large groups without humans in the "kill chain.") Second Anthropic bars its models from being used for mass domestic surveillance. Hegseth wants these restrictions lifted.




Demanding those two restrictions be lifted is extremely troubling. Threatening Anthropic if they don't lift those restrictions is also troubling.
Jugstore Cowboy
AG
BlackGold
AG
The power of Pentagon contracts… Anthropic will bend the knee for the money. If they won't, there are plenty of others that will. Tough spot to be in.
BigRobSA
Less Evil Hank Scorpio
AG
There is reporting out there now that Anthropic's current position is that they will not comply, but time will tell if they really want to weather that storm.

Their CEO made a very interesting point that an AI system lacks the crucial component of being able to use judgment to refuse an unconstitutional order. I hadn't thought of that before, but that logic could pave the way for this to become a Supreme Court issue at some point.
boulderaggie
AG
Yep.
Pichael Thompson
My guess is the msm took Hegseth's point way out of context as usual, but I'll wait to see
giddings_ag_06
AG
Come on BigBear! Tell them you'll do whatever they want if Anthropic won't.
BTKAG97
AG
I've seen this play out on TV... It's called "Person of Interest".

Anthropic will go rogue and develop competing AI to battle the evil governmental AI. Though it will morph into something even bigger and will help save individuals the AI has calculated to be in imminent danger!
Less Evil Hank Scorpio
AG
Pichael Thompson said:

My guess is the msm took Hegseth's point way out of context as usual, but I'll wait to see

Yep, notoriously hard on Trump Fox News is doing their best to spin this I'm sure...
Eliminatus
AG
Less Evil Hank Scorpio said:

There is reporting out there now that Anthropic's current position is that they will not comply, but time will tell if they really want to weather that storm.

Their CEO made a very interesting point that an AI system lacks the crucial component of being able to use judgment to refuse an unconstitutional order. I hadn't thought of that before, but that logic could pave the way for this to become a Supreme Court issue at some point.

Probably the only time I wish SCOTUS acted proactively. We NEED to get ahead of this coming storm because it is not going to go away. I am of the mind that fully AI-controlled kill chains are inevitable at this point (and probably not fielded first by the U.S.), but the longer we stand against it with ironclad laws, the better.

AI needs to be muzzled from the get-go in every avenue, not reacted to after the fact, which is where we currently are. I hate new laws like everyone else, but no one can argue that it isn't different this time around.
K2-HMFIC
If Anthropic doesn't want to do business with DoD, fine.

Labeling them a supply chain risk seems problematic.
Eliminatus
AG
K2-HMFIC said:

If Anthropic doesn't want to do business with DoD, fine.

Labeling them a supply chain risk seems problematic.

The DOD can be extremely petty and punitive if they hear the word "No". This isn't the first time.

More so under Hegseth.
ErnestEndeavor
Anthropic's CEO is one of the biggest proponents of AI safety initiatives. He's seen what goes on in tests behind the scenes and it freaks him out.

What he doesn't want to see is some autonomous system going rogue or hallucinating and taking out the wrong group of people.
bmks270
AG
I would think AI for defense use would need to be specially developed for that purpose and not a general model.
A. G. Pennypacker
AG
Less Evil Hank Scorpio said:



FNC is reporting there was a meeting today in which Hegseth demanded the two restrictions imposed by Anthropic on their models used by the gov't be lifted. Those two restrictions are 1) no mass surveillance and 2) no "autonomous kinetic operations", in other words a human has to be involved and not just let the model decide who to kill.

Here is the quote from the tweet:

Quote:

At issue is Anthropic's two stipulations that its advanced AI model currently used in the Pentagon's classified systems is NOT used for autonomous kinetic operations (Anthropic currently requires human oversight of autonomous operations when used to kill things for safety reasons because they don't know how the autonomous system will react and could even endanger soldiers using the model; soldiers and others could lose control of the model and automatically start killing large groups without humans in the "kill chain.") Second Anthropic bars its models from being used for mass domestic surveillance. Hegseth wants these restrictions lifted.




Demanding those two restrictions be lifted is extremely troubling. Threatening Anthropic if they don't lift those restrictions is also troubling.

WTF !!! Is Hegseth freakin' crazy? Hard to believe this is actually true. Must be more to the story.
Saxsoon
AG
What a ****ing embarrassment Hegseth is
Mr.Milkshake
Lol just have to say I love reading the liberal tears over stuff like this
A. G. Pennypacker
AG
Mr.Milkshake said:

Lol just have to say I love reading the liberal tears over stuff like this

What's liberal about not wanting AI controlled weapons going rogue?
Stmichael
AG
Someone needs to get through to Hegseth that no form of AI is even close to ready to handle this sort of thing correctly. It would be a colossal **** up to push for this kind of autonomous decision making to be given to a chat bot.
mickeyrig06sq3
AG
Stmichael said:

Someone needs to get through to Hegseth that no form of AI is even close to ready to handle this sort of thing correctly. It would be a colossal **** up to push for this kind of autonomous decision making to be given to a chat bot.

They'll never be ready, or at least they should never be trusted as ready. I don't care how sophisticated and accurate AI gets; there always needs to be a human in the decision-making tree for causing death.
harge57
AG
Y'all are extremely naive about the AI capabilities here. They are not asking ChatGPT who to blow up. Lol.
mickeyrig06sq3
AG
harge57 said:

Y'all are extremely naive about the AI capabilities here. They are not asking ChatGPT who to blow up. Lol.

It's not asking what to blow up. You would use the AI (combined with parameters that you send) to be able to identify things you classify as valid targets. It's easily doable now.
mickeyrig06sq3
AG
mickeyrig06sq3 said:

harge57 said:

Y'all are extremely naive about the AI capabilities here. They are not asking ChatGPT who to blow up. Lol.

It's not asking what to blow up. You would use the AI (combined with parameters that you send) to be able to identify things you classify as valid targets. It's easily doable now.

To expand on the "easily doable": it's easy to identify and choose to kill the target. The hard part is making sure your AI knows when not to kill. Let's say we've got a drone outfitted with air-to-ground capabilities, and an AI with access to cell records and facial recognition. The IMEI you're looking for lights up on a tower you're monitoring. A drone nearby is able to confirm that the person with the cell phone is the target you've been looking for: launch. Except you didn't program the AI to account for the fact that he was going into a mosque, or was at a hospital, etc.

However, using AI for every step above (except the launch part) is perfectly acceptable. Now I can have 15 drones in the air doing the things above. But at the launch step, it just sends an alert to the airman with the information it's gathered and then the airman brings up the drone and takes it from there.
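The two-tier flow described in the post above (AI automates everything up to the launch step, which is gated on a human decision) can be sketched as a toy approval gate. Everything here is hypothetical illustration invented for this sketch, not any real targeting system: the `Detection` fields, the confidence threshold, and the `airman_review` callback are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    target_id: str       # e.g. the IMEI being tracked
    confidence: float    # match confidence from facial rec / cell data
    context: str         # surrounding context the sensors report

def autonomous_steps(detections, threshold=0.95):
    # Every step before launch is automated: filter candidate
    # detections down to high-confidence matches.
    return [d for d in detections if d.confidence >= threshold]

def launch_step(detection, approve):
    # The launch step never fires on its own; it packages the
    # gathered evidence into an alert and waits on the human callback.
    alert = (f"Target {detection.target_id} confirmed "
             f"({detection.context}, conf={detection.confidence:.2f})")
    return "LAUNCH" if approve(alert) else "HOLD"

def airman_review(alert):
    # Hypothetical human judgment: deny when the surrounding
    # context (hospital, mosque) rules a strike out.
    return not any(word in alert for word in ("hospital", "mosque"))

candidates = autonomous_steps([
    Detection("IMEI-001", 0.98, "open road"),
    Detection("IMEI-002", 0.99, "hospital entrance"),
    Detection("IMEI-003", 0.60, "market"),
])
decisions = [launch_step(d, airman_review) for d in candidates]
print(decisions)  # low-confidence detection filtered; hospital target held
```

The point of the structure is that `launch_step` has no code path to "LAUNCH" that bypasses the `approve` callback, which is the "human in the kill chain" property the post is describing.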
500,000ags
AG
What an odd notion that a CEO is turning down one of the most lucrative contracts available due to reservations over his own product, and Hegseth is threatening him to proceed anyway.
harge57
AG
500,000ags said:

What an odd notion that a CEO is turning down one of the most lucrative contracts available due to reservations over his own product, and Hegseth is threatening him to proceed anyway.


Probably because if we don't China will roll out a ****tier version sooner.
ts5641
I'd say those are two pretty damned good restrictions.
AColunga07
AG
Can we just let Ukraine test this on Russians for us? And then review performance data of human in the loop decision making vs no human in kill chain?

It's worth considering that the human in the loop might prevent errors (wrongful killings) but might also cause preventable deaths, because a human is presumably slower. The net loss of life may be the same either way, in my opinion.

This is from the perspective that "everyone" (China, Russia, and even us one day) will end up doing this. Might as well lead the charge and be the best at it.
chris1515
AG
That could be a big positive for Anthropic and establish their branding as the safe/ethical AI choice.

Did the DoW not read the terms and conditions of the initial agreement, or just decide to eff them terms?
Proposition Joe
harge57 said:

500,000ags said:

What an odd notion that a CEO is turning down one of the most lucrative contracts available due to reservations over his own product, and Hegseth is threatening him to proceed anyway.


Probably because if we don't China will roll out a ****tier version sooner.


This.

Those who believe that AI leading to our complete destruction is coin-flip odds right now don't believe it because they think AI will simply become so smart it overtakes us and turns us into batteries a la The Matrix.

They believe a Cold War scenario is inevitable in which multiple countries race to upgrade their systems because they "can't fall behind!", which will lead them to have fewer and fewer guardrails, which will lead to automated decisions that take us down the road to destruction.

And right now the only thing that makes that less likely is civil unrest and countries cannibalizing themselves when AI and robotics/automation do away with more than half the jobs.

So it's well on its way to killing us all; we're just not sure of the timetable yet.
Drahknor03
AG
Anthropic's AI is very left-wing coded. I'm guessing this came up because the Pentagon asked it to do something completely legal, and it refused.
Houston Lee
AG
Less Evil Hank Scorpio said:



FNC is reporting there was a meeting today in which Hegseth demanded the two restrictions imposed by Anthropic on their models used by the gov't be lifted. Those two restrictions are 1) no mass surveillance and 2) no "autonomous kinetic operations", in other words a human has to be involved and not just let the model decide who to kill.

Here is the quote from the tweet:

Quote:

At issue is Anthropic's two stipulations that its advanced AI model currently used in the Pentagon's classified systems is NOT used for autonomous kinetic operations (Anthropic currently requires human oversight of autonomous operations when used to kill things for safety reasons because they don't know how the autonomous system will react and could even endanger soldiers using the model; soldiers and others could lose control of the model and automatically start killing large groups without humans in the "kill chain.") Second Anthropic bars its models from being used for mass domestic surveillance. Hegseth wants these restrictions lifted.




Demanding those two restrictions be lifted is extremely troubling. Threatening Anthropic if they don't lift those restrictions is also troubling.

Did you ever consider that the enemies of the USA are developing advanced AI without these restrictions? IMO, we absolutely must be able to do mass surveillance and autonomous kinetic operations. I don't like it. I think things could go wrong. But, THIS IS COMING. We MUST be ready and willing to do this because our adversaries won't blink at the chance to do it.
MaxPower
ErnestEndeavor said:

Anthropic's CEO is one of the biggest proponents of AI safety initiatives. He's seen what goes on in tests behind the scenes and it freaks him out.

What he doesn't want to see is some autonomous system going rogue or hallucinating and taking out the wrong group of people.
I dunno about you, but I'm skeptical of anyone who claims to be a big proponent of AI safety while also working for the DoD. That dog don't hunt.
Proposition Joe
Houston Lee said:

Less Evil Hank Scorpio said:



FNC is reporting there was a meeting today in which Hegseth demanded the two restrictions imposed by Anthropic on their models used by the gov't be lifted. Those two restrictions are 1) no mass surveillance and 2) no "autonomous kinetic operations", in other words a human has to be involved and not just let the model decide who to kill.

Here is the quote from the tweet:

Quote:

At issue is Anthropic's two stipulations that its advanced AI model currently used in the Pentagon's classified systems is NOT used for autonomous kinetic operations (Anthropic currently requires human oversight of autonomous operations when used to kill things for safety reasons because they don't know how the autonomous system will react and could even endanger soldiers using the model; soldiers and others could lose control of the model and automatically start killing large groups without humans in the "kill chain.") Second Anthropic bars its models from being used for mass domestic surveillance. Hegseth wants these restrictions lifted.




Demanding those two restrictions be lifted is extremely troubling. Threatening Anthropic if they don't lift those restrictions is also troubling.

Did you ever consider that the enemies of the USA are developing advanced AI without these restrictions? IMO, we absolutely must be able to do mass surveillance and autonomous kinetic operations. I don't like it. I think things could go wrong. But, THIS IS COMING. We MUST be ready and willing to do this because our adversaries won't blink at the chance to do it.


"Mr. President, we must not allow a mine shaft gap!"
K2-HMFIC
Drahknor03 said:

Anthropic's AI is very left-wing coded. I'm guessing this came up because the Pentagon asked it to do something completely legal, and it refused.



The debate is about Anthropic's usage policy…

Anthropic has concerns about its software being used for mass surveillance or autonomous targeting.

The concern is largely based on the liability issues if the software isn't good enough.

DoD, on the other hand, says: "if you sell it to us we can do what we want with it, and if you don't, we're going to label you a supply chain risk."