Hegseth vs. Anthropic

9,523 Views | 108 Replies | Last: 3 days ago by Captain Winky
CDUB98
AG
Yeah, that's a no from me.

I love what Hegseth has done to return the military to actually being a military rather than a Dem social experiment, but this is a hard no.
K2-HMFIC
Proposition Joe said:

Houston Lee said:

Less Evil Hank Scorpio said:



FNC is reporting there was a meeting today in which Hegseth demanded the two restrictions imposed by Anthropic on their models used by the gov't be lifted. Those two restrictions are 1) no mass surveillance and 2) no "autonomous kinetic operations", in other words a human has to be involved and not just let the model decide who to kill.

Here is the quote from the tweet:

Quote:

At issue are Anthropic's two stipulations. First, that its advanced AI model currently used in the Pentagon's classified systems NOT be used for autonomous kinetic operations (Anthropic currently requires human oversight of autonomous operations when the model is used to kill things, for safety reasons: they don't know how the autonomous system will react, and it could even endanger soldiers using the model; soldiers and others could lose control of the model, which could start killing large groups without humans in the "kill chain"). Second, Anthropic bars its models from being used for mass domestic surveillance. Hegseth wants these restrictions lifted.




Demanding those two restrictions be lifted is extremely troubling. Threatening Anthropic if they don't lift those restrictions is also troubling.

Did you ever consider that the enemies of the USA are developing advanced AI without these restrictions? IMO, we absolutely must be able to do mass surveillance and autonomous kinetic operations. I don't like it. I think things could go wrong. But, THIS IS COMING. We MUST be ready and willing to do this because our adversaries won't blink at the chance to do it.


"Mr. President, we must not allow a mine shaft gap!"

I think we are rapidly approaching the point where AI will start being treated like nuclear weapons.
YouBet
AG
CDUB98 said:

Yeah, that's a no from me.

I love what Hegseth has done to return the military to actually being a military rather than a Dem social experiment, but this is a hard no. Hegseth can **** right off on it.


This is where I am unless new information comes out that suggests this is different than reported.

Also cognizant of the fact that China will do this with zero regard to safety and guardrails which then forces Hegseth's hand.

This is truly a Catch-22. I miss the simpler times of nuclear MAD with the USSR versus this MAD with China, where ultimately autonomous AI machines are involved.

Butlerian Jihad. Now.
Saxsoon
AG
Houston Lee said:

Less Evil Hank Scorpio said:



FNC is reporting there was a meeting today in which Hegseth demanded the two restrictions imposed by Anthropic on their models used by the gov't be lifted. Those two restrictions are 1) no mass surveillance and 2) no "autonomous kinetic operations", in other words a human has to be involved and not just let the model decide who to kill.

Here is the quote from the tweet:

Quote:

At issue are Anthropic's two stipulations. First, that its advanced AI model currently used in the Pentagon's classified systems NOT be used for autonomous kinetic operations (Anthropic currently requires human oversight of autonomous operations when the model is used to kill things, for safety reasons: they don't know how the autonomous system will react, and it could even endanger soldiers using the model; soldiers and others could lose control of the model, which could start killing large groups without humans in the "kill chain"). Second, Anthropic bars its models from being used for mass domestic surveillance. Hegseth wants these restrictions lifted.




Demanding those two restrictions be lifted is extremely troubling. Threatening Anthropic if they don't lift those restrictions is also troubling.

Did you ever consider that the enemies of the USA are developing advanced AI without these restrictions? IMO, we absolutely must be able to do mass surveillance and autonomous kinetic operations. I don't like it. I think things could go wrong. But, THIS IS COMING. We MUST be ready and willing to do this because our adversaries won't blink at the chance to do it.

Wait so we are back on the train of Homeland Security and Big Gubmint? I am getting lost on the spreadsheet
Drahknor03
AG
I know what the debate is about. I'd bet money that this came about because someone in the chain got slapped down by Claude on a work product.
cecil77
AG
So I guess it's "kill 'em all and let God sort 'em out".
Spergin
Less Evil Hank Scorpio said:



FNC is reporting there was a meeting today in which Hegseth demanded the two restrictions imposed by Anthropic on their models used by the gov't be lifted. Those two restrictions are 1) no mass surveillance and 2) no "autonomous kinetic operations", in other words a human has to be involved and not just let the model decide who to kill.

Here is the quote from the tweet:

Quote:

At issue are Anthropic's two stipulations. First, that its advanced AI model currently used in the Pentagon's classified systems NOT be used for autonomous kinetic operations (Anthropic currently requires human oversight of autonomous operations when the model is used to kill things, for safety reasons: they don't know how the autonomous system will react, and it could even endanger soldiers using the model; soldiers and others could lose control of the model, which could start killing large groups without humans in the "kill chain"). Second, Anthropic bars its models from being used for mass domestic surveillance. Hegseth wants these restrictions lifted.




Demanding those two restrictions be lifted is extremely troubling. Threatening Anthropic if they don't lift those restrictions is also troubling.

I'm not sure people realize what's about to happen. At some point over the next few years, they are going to impose ITAR requirements on all AI companies and declare them national security interests, because their impact on the world is going to be greater than nuclear weapons.

Anthropic and all of the other AI companies don't appear to get that we're in a zero-sum game now: whoever wins the AI race wins forever. The government cannot afford to leave it out of their wheelhouse. They may allow it to be used by the general public at full capacity, but they are not going to allow it to be completely unshackled without either some measure of control or backend access, not when the end result could be more damaging than nuclear war.
nai06
AG
chris1515 said:

That could be a big positive for Anthropic and establish their branding as the safe/ethical AI choice.

Did the DOW not read the terms and conditions of the initial agreement or just decide to eff them terms?

There is nothing ethical about Anthropic. They started by scanning pirated books to train their models. To train Claude they bought literally millions of used books, cut them up, scanned them, and then trashed them.


Anthropic exists because it stole the work of others.
Im Gipper
Quote:

They started by scanning pirated books to train their models. To train Claude they bought literally millions of used books

How is buying used books pirating?

On training AI/LLM, how else is it supposed to be done other than with information that already exists?

(Waaaaaaayyyyy outside my knowledge base here, trying to learn)

I'm Gipper
nai06
AG
Im Gipper said:

Quote:

They started by scanning pirated books to train their models. To train Claude they bought literally millions of used books

How is buying used books pirating?

On training AI/LLM, how else is it supposed to be done other than with information that already exists?

(Waaaaaaayyyyy outside my knowledge base here, trying to learn)


A judge ruled that buying the books was not on its face piracy but declined to issue a summary judgment for Anthropic related to the pirated books. From an ethical point of view (not legal), a lot of people have a problem with Anthropic using their works for training, as their ideas, stories, and work could appear directly in output produced by Claude or any of their models. Should that occur, it would likely trigger another lawsuit.

To give a similar example: say you write a book and want to include some song lyrics sung by a character. There is a maximum amount you can include before you trigger a royalty to the songwriter, and sometimes a single line may trigger a royalty if it's iconic enough. Think of Beyoncé's "Who run the world? Girls!" as an example. So if Anthropic is producing work that includes an author's original work without their permission, that would be a copyright violation. The problem then is: how do you know whether that is happening? It's kind of like a Schrödinger's cat situation. (I think people also aren't crazy about the idea of destroying millions of print books for the purpose of training AI.)

For the books they outright pirated, Anthropic agreed to a $1.5 billion settlement. Of the 7 million books they downloaded, it was agreed that 500,000 would be eligible for payout, so authors are getting around $3,000 for each work used. And honestly, that's a slap in the face for a lot of people.
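For what it's worth, the per-work figure checks out as simple division; a back-of-the-envelope sketch, assuming the numbers as reported above ($1.5B settlement, 500,000 eligible works out of 7 million downloaded):

```python
# Sanity-check the settlement arithmetic quoted above.
settlement_total = 1_500_000_000  # reported settlement amount, USD
eligible_works = 500_000          # works agreed eligible for payout
downloaded_works = 7_000_000      # books reportedly downloaded

per_work = settlement_total / eligible_works        # payout per eligible work
eligible_share = eligible_works / downloaded_works  # fraction of downloads covered

print(f"~${per_work:,.0f} per eligible work")         # ~$3,000
print(f"{eligible_share:.1%} of downloaded books eligible")  # 7.1%
```

So roughly 93% of the downloaded books aren't covered by the payout at all, which is part of why authors feel shortchanged.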


Your last question really is the key point in the debate about ethics and AI, in my opinion. There is a strong argument that AI does not exist without the work of actual humans, and that it would be unethical to use that work without permission or compensation. It's really a never-ending cycle: right now, as a lot of AI models begin to ingest AI-created work, their results begin to suffer and the model eventually collapses. To keep AI models current, they have to be continually trained on the work of humans. That doesn't even consider the massive amounts of resources required for AI models to run.


Im Gipper
Thanks for all that, but you kind of switched gears from my question about piracy on the input to talking about copyright violation on the output.


I'm Gipper
nai06
AG
To be more direct, buying the books and scanning them is not considered piracy from a legal standpoint.


And full disclosure: I am biased when it comes to this topic. I work in publishing, and my wife is an author whose books were pirated by Anthropic (she is part of the settlement).
BigRobSA
cecil77 said:

So I guess it's "kill 'em all and let God sort 'em out".


My Mexican pops had that and "I'm not racist....I hate everyone!" bumper stickers on his station wagon.

Really makes ya think!
tk for tu juan
DeschutesAg
"When HARLIE was one" was a good AI book. I read it the year it came out (1972).

U B M
I B M
We all B M
For IBM

was HARLIE's first humorous poem, iirc.

Btw, are there any self-aware, self-propagating AIs yet? I presume the answer is yes, but I haven't researched it yet.
AustinAg2K
Im Gipper said:

Quote:

They started by scanning pirated books to train their models. To train Claude they bought literally millions of used books

How is buying used books pirating?

On training AI/LLM, how else is it supposed to be done other than with information that already exists?

(Waaaaaaayyyyy outside my knowledge base here, trying to learn)


Buying books and scanning them isn't illegal. However, Anthropic went beyond that: they used torrents to illegally download books (and music and movies) without paying anything. That's why they ended up paying out $1.5 billion.
500,000ags
AG
The problem isn't the training, it's the commercialization of that information. They are taking others' ideas: they made a good search tool, that tool can reiterate and combine with other similar source material, and it spits out a better and faster answer than any single source. It's like Spotify with an amazing search, and without having to pay royalties. I think it's absolute BS, but that ship sailed years ago. That's also why I'm not high on LLMs curing cancer, or anything else important: they can't experiment and learn until they have access to the real world, not just the real world's content.
harge57
AG
Interesting timing.
https://time.com/7380854/exclusive-anthropic-drops-flagship-safety-pledge/

TexasRebel
AG
Asimov's Laws of Robotics already make this impossible.
Dr. Nefario
TexasRebel said:

Asimov's Laws of Robotics already make this impossible.


LLMs don't follow the same laws positronic brains do. If Susan Calvin wasn't a fictional character, she'd be very disappointed.
“You cannot strengthen the weak by weakening the strong.” -Abraham Lincoln

“Veganism is like communism. They’re both fine… unless you like food.”
TexasRebel
AG
LLMs cannot directly act in the physical world.
Less Evil Hank Scorpio
AG
nai06 said:

chris1515 said:

That could be a big positive for Anthropic and establish their branding as the safe/ethical AI choice.

Did the DOW not read the terms and conditions of the initial agreement or just decide to eff them terms?

There is nothing ethical about Anthropic. They started by scanning pirated books to train their models. To train Claude they bought literally millions of used books, cut them up, scanned them, and then trashed them.


Anthropic exists because it stole the work of others.


It's quite a leap from "they pirated books" to "there is nothing ethical about Anthropic". I think you're a little too close to the situation. Taking a stand against the feds wanting an algorithm to have full control of a killing apparatus is a good thing. Was stealing copyrighted works bad? Yes. It's almost like the world isn't totally black and white.
TexAgs91
AG
This sounds like the sequel to Terminator 3: Rise of the Machines. Maybe a Terminator 2.5?

No, I don't care what CNN or Miss NOW said this time
Ad Lunam
samurai_science
Less Evil Hank Scorpio said:

Pichael Thompson said:

My guess is the msm took Hegseth's point way out of context as usual, but I'll wait to see

Yep, notoriously hard on Trump Fox News is doing their best to spin this I'm sure...

" according to multiple sources familiar with the discussions"



Yeah right
nai06
AG
Less Evil Hank Scorpio said:

nai06 said:

chris1515 said:

That could be a big positive for Anthropic and establish their branding as the safe/ethical AI choice.

Did the DOW not read the terms and conditions of the initial agreement or just decide to eff them terms?

There is nothing ethical about Anthropic. They started by scanning pirated books to train their models. To train Claude they bought literally millions of used books, cut them up, scanned them, and then trashed them.


Anthropic exists because it stole the work of others.


It's quite a leap from "they pirated books" to "there is nothing ethical about Anthropic". I think you're a little too close to the situation. Taking a stand against the feds wanting an algorithm to have full control of a killing apparatus is a good thing. Was stealing copyrighted works bad? Yes. It's almost like the world isn't totally black and white.

When your entire business model relies on copying the work of others without their permission I think it's a fair assessment.


samurai_science
A. G. Pennypacker said:

Mr.Milkshake said:

Lol just have to say I love reading the liberal tears over stuff like this

What's liberal about not wanting AI controlled weapons going rogue?

"sources"
Deputy Travis Junior
Doesn't sound like Anthropic is backing down. Reading this statement, their position seems quite reasonable.

https://www.anthropic.com/news/statement-department-of-war
1981 Monte Carlo
Man, I always pegged Hegseth as a guy with a 1776 type of mentality. I would have suspected this from Trump, or pretty much anyone, before him. Hope there's more to the story; it doesn't compute, unless he conned us.
boulderaggie
AG
Trump tells Govt to stop using Anthropic AI: https://www.nbcnews.com/tech/tech-news/trump-bans-anthropic-government-use-rcna261055
Pichael Thompson
So big bear is about to fly, yea!?!?


Deputy Travis Junior
Damn, compare Dario's tone and approach in his statement (he explains exactly what the issues are and why Anthropic can't currently do what the gov wants it to do) to this:



Hegseth doesn't actually say anything. It's just politician bull**** and tropes.

Designating them as a supply chain risk is extremely alarming behavior, as this isn't something we normally do to domestic companies. This is "do what I say or I'll ruin your business and life" tinpot dictator stuff from the DOW.
Jeeper79
AG
Deputy Travis Junior said:

Damn, compare Dario's tone and approach in his statement (he explains exactly what the issues are and why Anthropic can't currently do what the gov wants it to do) to this:



Hegseth doesn't actually say anything. It's just politician bull**** and tropes.

Designating them as a supply chain risk is extremely alarming behavior, as this isn't something we normally do to domestic companies. This is "do what I say or I'll ruin your business and life" tinpot dictator stuff from the DOW.
Plenty of people will still eat this up.
Jeeper79
AG
Thank you, Anthropic, for not allowing mass surveillance of American citizens on your platform. And thank you for realizing that AI is not 100% trustworthy to have control over autonomous lethal weapons.
Logos Stick
There goes $200 mil. I think Pete was over the top here. Oh well, OpenAI will do what they want, I'm sure. I wonder how Elon feels about Trump's position.
Jeeper79
AG
Logos Stick said:

There goes $200 mil. I think Pete was over the top here. Oh well, OpenAI will do what they want, I'm sure. I wonder how Elon feels about Trump's position.
Sounds like OpenAI has taken the same stance. It just hasn't come to a head yet. I think xAI is cool with it.
 