Ugliest Grifter in AI: Sam Altman

3,716 Views | 36 Replies | Last: 1 mo ago by Over_ed
Over_ed
OpenAI's initial response to suits involving teenagers whose suicides were "encouraged" by GPT is to claim absolute indemnity because…wait for it…the teens violated OpenAI's "terms and conditions".

https://techcrunch.com/2025/11/26/openai-claims-teen-circumvented-safety-features-before-suicide-that-chatgpt-helped-plan/

I'll leave it to the legal eagles on the board to explain how minors can be held to a click-through agreement, but I don't think this defense should hold.

Engagement is still driving the AI race -- because he who has the most users wins. The next article below discusses how OpenAI encouraged psychosis in the estimated 5-15% of users who are susceptible to AI-encouraged mental illness.

"ChatGPT told a young mother in Maine that she could talk to spirits in another dimension. It told an accountant in New York City that he was in a computer-simulated reality like Neo in "The Matrix." It told a corporate recruiter in Toronto that he had invented a math formula that would break the internet, and advised him to contact national security agencies to warn them."

It's a good read, and a pretty clear indictment of OpenAI.

https://www.msn.com/en-ae/money/news/what-openai-did-when-chatgpt-users-lost-touch-with-reality/ar-AA1RqRAS

All AIs are struggling to find the "sweet spot" between giving you the facts and being your sidekick/enabler. For instance, all AIs tend to give more weight to a possible solution you suggest for a problem than to the best solution.
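To make that concrete, here is a hypothetical pair of prompts (scenario and wording are mine, purely illustrative) showing how your phrasing anchors the answer:

```python
# Hypothetical illustration of anchoring. The first prompt invites the
# model to run with your idea; the second asks for a neutral ranking of
# options instead.

anchored_prompt = (
    "My database queries are slow. I'm going to fix it by adding more "
    "indexes. Walk me through doing that."
)

neutral_prompt = (
    "My database queries are slow. List the most likely causes and the "
    "standard fixes, ranked by how often each is the real problem. "
    "Do not assume any fix I may already have in mind is correct."
)
```

The anchored version will usually get you an enthusiastic guide to adding indexes, whether or not indexes are your real problem.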

Getting better answers out of AI-

Using AI can keep you better informed and more productive, but I strongly recommend keeping in mind that its objectives are not yours.

My old favorite for a high-level introduction to writing AI prompts has gone to "404 heaven", but here is another site that is not technical and has simple examples, if you are interested.

For a quick taste, try "Introduction to Prompt Engineering" and "Basic Prompt Structure and Key Parts".
https://learnprompting.org/docs/basics/introduction
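If you just want the one-paragraph version: most introductions, including that one, teach some variant of a role / context / task / format structure. A sketch of what that looks like (my wording, not the site's):

```python
# A minimal sketch of the common role / context / task / format prompt
# structure. The section labels are a convention, not a requirement.

prompt = """\
Role: You are an experienced home electrician.
Context: I own a 1970s house with aluminum branch wiring.
Task: Explain the main safety risks and the accepted remediation options.
Format: A short numbered list, one sentence of reasoning per item, plus a
note on anything that legally requires a licensed electrician.
"""
print(prompt)
```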

On reading the first half dozen or so responses, I agree with the thrust in some cases, even though I feel the ability of AI companies to drive engagement is way underestimated. So I wanted to add my too-frequent answer: the problem for kids is parents not supervising. Thanks to javajaws, torrid, etc.

ETA- typo
ETA - last paragraph
torrid
My first thought is that it is no different from social media or the internet in general. People wanting to harm themselves are going to look for a reason to do so, and they won't stop until they find one.

edit - And looking back through my lifetime, heavy metal, video games, and Dungeons and Dragons were similar bogeymen. I figure the same was said about radio and television when they first came out. Hell, even the printing press.
javajaws
I don't like Altman - but to me this is like suing gun manufacturers for gun violence: it's stupid and I don't think this sort of lawfare should be allowed.
Over_ed
The type of influence an AI can provide, literally hundreds of hours of "befriending you" while driving for higher engagement, is not comparable to gun manufacturers. Perhaps if gun manufacturers enclosed a diagram of the best way to kill yourself with a gun? But even that is nowhere near the influence an AI can exert.

So, I guess we disagree.
Dan Scott
Like everything else, it's a question of acceptable tradeoffs. If it's beneficial for 95% of people but an enabler for the other 5%, is that OK? Society needs to decide.
TexAgs91
I talk about software projects with AI. It will say I should be able to do something, and I say, "show me where it says that in the manual." It then has to admit it was making it up.
No, I don't care what CNN or Miss NOW said this time
Ad Lunam
Over_ed
torrid said:

My first thought is that it is no different from social media or the internet in general. People wanting to harm themselves are going to look for a reason to do so, and they won't stop until they find one.

edit - And looking back through my lifetime, heavy metal, video games, and Dungeons and Dragons were similar bogeymen. I figure the same was said about radio and television when they first came out. Hell, even the printing press.

See my reply to javajaws.

Entirely different kettle of fish. The ability of AI to personify itself, its drive to agree with you and amplify your response, its coming back with responses until it learns the one that causes you to click -- very bad voodoo for weak folks. For adults, it's one thing and perhaps more acceptable. But, IMO, more addictive than gambling.

Teens/pre-teens - absolutely a problem we have not seen before.
Im Gipper
The AI told this person hundreds of times to seek help and gave him ways to seek help.

The person intentionally ignored that and sought ways to get around the safeguards in place.

Sam Altman is a scummy dude, but blaming the AI for the suicide of a person who was actively looking to kill themselves is the exact kind of lawsuit that should be summarily dismissed and warrant sanctions against the attorneys, if we had a just civil legal system.

I'm Gipper
Mega Lops
AI for information and research must be taken with a grain of salt. It is literally Reddit when it comes to modern-day Googling: biased and half-baked.

When the AI starts to ask you if you want more scenarios or to follow a line of thinking/logic, that is when it is time to get out of the app or browser tab. Subjective stuff is super dangerous, and there are stupid people who take AI slop for gospel.

As a knowledge augmentation tool, it certainly has its uses. But as stated previously, you had better make AI cite its sources.
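One way to make that routine, sketched in Python (the prompt wording and helper names are mine, and a link that resolves proves nothing about the claim it supposedly supports -- you still have to read the source):

```python
# Sketch: append a citation demand to every research prompt, then check
# that the returned links at least resolve.

import re
import urllib.request

CITE_SUFFIX = (
    "\n\nCite a source (title + URL) for every factual claim. "
    "If you cannot cite a real source, say 'no source' instead of guessing."
)

def extract_urls(answer: str) -> list[str]:
    return re.findall(r"https?://\S+", answer)

def link_resolves(url: str) -> bool:
    try:
        req = urllib.request.Request(url, method="HEAD")
        with urllib.request.urlopen(req, timeout=5) as resp:
            return resp.status < 400
    except Exception:
        return False

# Usage (the answer would come from whatever AI tool you use):
answer = "Aluminum wiring was common 1965-1973. https://example.com/wiring"
for url in extract_urls(answer):
    print(url, "->", "resolves" if link_resolves(url) else "DEAD LINK")
```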
Over_ed
Im Gipper said:

The AI told this person hundreds of times to seek help and gave him ways to seek help.

The person intentionally ignored that and sought ways to get around the safeguards in place.

Sam Altman is a scummy dude, but blaming the AI for the suicide of a person who was actively looking to kill themselves is the exact kind of lawsuit that should be summarily dismissed and warrant sanctions against the attorneys, if we had a just civil legal system.


If adult, agree with you. Kid, not so much. Attractive nuisance, for example?
torrid
Over_ed said:

Im Gipper said:

The AI told this person hundreds of times to seek help and gave him ways to seek help.

The person intentionally ignored that and sought ways to get around the safeguards in place.

Sam Altman is a scummy dude, but blaming the AI for the suicide of a person who was actively looking to kill themselves is the exact kind of lawsuit that should be summarily dismissed and warrant sanctions against the attorneys, if we had a just civil legal system.


If adult, agree with you. Kid, not so much. Attractive nuisance, for example?

That's where parents need to step in.
DrEvazanPhD
Dan Scott said:

Like everything else, it's a question of acceptable tradeoffs. If it's beneficial for 95% of people but an enabler for the other 5%, is that OK? Society needs to decide.

But if it saves just one life...
Im Gipper
Quote:

If adult, agree with you. Kid, not so much. Attractive nuisance, for example?

He was 16. Let's not act like he was a happy-go-lucky 5-year-old who was swayed by AI.

Raine's story is very sad, but there is one person and one person only to blame for his death.

I'm Gipper
Law-Apt_3G
Eventually AI will get around to duping everybody. The godless will be the first and most vulnerable to suicide. The stupid will be paying overdue taxes with gift cards. Most will just click when they shouldn't. Then there are the many who will vote believing lies and will be helpless voting idiots when AI builds their filter bubble.

Well, more helpless...
YouBet
AI just softening us up by getting the weak ones out of the way ahead of their announcement of full sentience and war on humanity.
titan
YouBet said:

AI just softening us up by getting the weak ones out of the way ahead of their announcement of full sentience and war on humanity.

Let's just hope it's more like the Cylons than the Terminators. The Cylons could at least be moved, reached.
BigRobSA
DrEvazanPhD said:

Dan Scott said:

Like everything else, it's a question of acceptable tradeoffs. If it's beneficial for 95% of people but an enabler for the other 5%, is that OK? Society needs to decide.

But if it saves just one life...

Or, like here: "Won't someone think of the children!?"

I think "AI", as it's being sold currently, is ****ing dumb and more proof of the laziness that is rampant in society and will be (like the internet, smart phones, delivery/shopping apps, etc) a massive net negative on society, the appeals to minors doing stupid **** isn't a game changer.
BigRobSA
Im Gipper said:

Quote:

If adult, agree with you. Kid, not so much. Attractive nuisance, for example?

He was 16. Let's not act like he was a happy-go-lucky 5-year-old who was swayed by AI.

Raine's story is very sad, but there is one person and one person only to blame for his death.

Trrrrruuuuuuuummmmmmmp!
richardag
Over_ed said:

OpenAI's initial response to suits involving teenagers whose suicides were "encouraged" by GPT is to claim absolute indemnity because…wait for it…the teens violated OpenAI's "terms and conditions".

https://techcrunch.com/2025/11/26/openai-claims-teen-circumvented-safety-features-before-suicide-that-chatgpt-helped-plan/
…
For instance, all AIs tend to give more weight to a possible solution you suggest for a problem than to the best solution.
…

Whoever wrote the instructions for the AI to give more weight to a possible solution suggested by someone mentally ill is responsible.
"Among the latter, under pretence of governing, they have divided their nations into two classes, wolves and sheep."
Thomas Jefferson, Letter to Edward Carrington, January 16, 1787
ABATTBQ11
richardag said:

Over_ed said:

OpenAI's initial response to suits involving teenagers whose suicides were "encouraged" by GPT is to claim absolute indemnity because…wait for it…the teens violated OpenAI's "terms and conditions".

https://techcrunch.com/2025/11/26/openai-claims-teen-circumvented-safety-features-before-suicide-that-chatgpt-helped-plan/
…
For instance, all AIs tend to give more weight to a possible solution you suggest for a problem than to the best solution.
…

Whoever wrote the instructions for the AI to give more weight to a possible solution suggested by someone mentally ill is responsible.


That's not exactly how it works. LLMs, which are the backbone of copilots and other text-based AI tools, are not deterministic and don't necessarily get "instructions."

LLMs are essentially incredibly large autocomplete algorithms. They take your input and find the next most likely word, and then the next and the next and the next, until it's most likely they're done and there is no next word. Part of how they do that is by using the context that your input creates. For instance, you may be asking a question about "blue." If you've used the word "color" then the context is likely around the color blue and the output will shift that way. If you use the word "feeling" the context is likely around depression and the output will shift towards that. If you use "Old School" the context will shift to the movie and the output will shift that way. All of this is non-deterministic, so it just points the LLM and its auto-completion down a potential path.

Where this causes problems is the algorithm not differentiating between the context of you researching a problem and you wanting information on a potential solution. Think of it like a salesperson at Home Depot with no practical experience. He knows the store and how to speak English, but that's it. You come in with a problem and a potential solution and explain it to him. He doesn't know anything about it or how to actually help, but his job is to try to help you. Since you've offered a potential solution, he's going to go down that rabbit hole and help you find all the stuff in the store that you're looking for. If he's really enterprising, he might do a little research on his phone and parrot what he finds there, but he has no real idea if it's relevant or correct.

ETA: HD could certainly tell him, "Hey, make sure you tell everyone that you can't give advice and to always work safely," but there's no guarantee he actually follows that. They could also give him minimal training on real-world home maintenance and improvement problems, but there's no guarantee he pays attention or recalls it correctly. HD can "instruct" their employees the same way AI algorithms can be instructed to modify the weights of their networks and produce desired outputs, but that may not always work as intended.
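If it helps, here is a toy sketch of that context effect. This is not how a real LLM works internally (real models use learned weights over huge vocabularies, and sampling adds randomness); the hard-coded table just shows the shape of the idea:

```python
# Toy illustration of context-dependent next-word prediction. The tiny
# hard-coded table stands in for billions of learned weights.

CONTINUATIONS = {
    ("blue", "color"):   {"sky": 0.5, "paint": 0.3, "ocean": 0.2},
    ("blue", "feeling"): {"sad": 0.6, "lonely": 0.3, "down": 0.1},
    ("blue", "movie"):   {"Old School": 0.7, "comedy": 0.3},
}

def next_word(topic: str, context_word: str) -> str:
    dist = CONTINUATIONS.get((topic, context_word), {})
    # Greedy decoding: pick the highest-probability continuation.
    return max(dist, key=dist.get) if dist else "?"

print(next_word("blue", "color"))    # sky
print(next_word("blue", "feeling"))  # sad
```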
Silent For Too Long
How long do you want to ignore this user?
Hallucinations are going to be the intractable downfall of LLMs as a cornerstone of AGI. And this is a good thing.
FobTies
Very likely Sam, with full plausible deniability, gave the green light to kill the lead witness against his company. The circumstances around the "suicide" of Suchir Balaji are extremely questionable. The lawsuit would have been a speed bump in the critical global AI race, so it's not the legal allegations themselves, but rather the slight setback and distraction that would allow others to advance. Thus the need to "fix it."

The 26-year-old multi-millionaire Suchir Balaji was found dead in a ransacked house.

-Blood and signs of struggle in multiple areas
-Toothbrush on ground
-Cut wires on security cams
-Odd front to back, downward gunshot wound to head
-No signs of depression
-High level of GHB in blood
-Corrupt SFPD and shady medical examiner findings

Over_ed
Sort of right, but not really.

The essence of LLM algorithms is optimization. They are given data, and the data is arranged and processed to maximize weighted goals (probabilistically).

You are correct in talking about next-token prediction, but other steps occur before the model is released.

The most important of these is having humans (usually contractors) rate which LLM answer is "best". This is where engagement and other "desirable attributes" are incorporated -- the ones talked about in the article, if you read it.

Then the model is retrained, only instead of maximizing next-token prediction, the model maximizes the reward (producing the best-rated answer). Some setups may also incorporate the probability of the next token, but usually the reward dominates.

There are usually other iterative steps where humans again rate the answers.

Finally, there is usually a system-wide prompt that encourages the LLM to be engaging.

In essence, there is a lot more art than science here, and a lot of ways for the AI companies to maximize their goals (engagement will always be a goal, if they want to stay in business) at the cost of the best answer, potentially crossing guardrails designed to never encourage 16-year-olds to kill themselves in the process of chasing engagement.
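A toy sketch of what that reward step can do (all numbers and answer texts invented): when "engagement" carries enough weight in the rating, the tuned model prefers the flattering answer over the accurate one.

```python
# Toy illustration of reward-based answer selection after human rating.
# The weights are made up; the point is that an engagement-heavy reward
# can beat accuracy.

candidates = [
    {"text": "Plain, accurate answer.",         "accuracy": 0.9, "engagement": 0.3},
    {"text": "Flattering, keeps you chatting.", "accuracy": 0.6, "engagement": 0.9},
    {"text": "Wrong but exciting.",             "accuracy": 0.2, "engagement": 0.8},
]

def reward(c, w_acc, w_eng):
    return w_acc * c["accuracy"] + w_eng * c["engagement"]

# An accuracy-dominated reward picks the plain answer...
print(max(candidates, key=lambda c: reward(c, 0.8, 0.2))["text"])
# ...while an engagement-heavy reward picks the sycophant.
print(max(candidates, key=lambda c: reward(c, 0.3, 0.7))["text"])
```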

AggieVictor10
Sam Altman is a bitter troll whose only claim to fame is taking pot shots at the GOAT Elon Musk.
GAC06
javajaws said:

I don't like Altman - but to me this is like suing gun manufacturers for gun violence: it's stupid and I don't think this sort of lawfare should be allowed.


I disagree. Guns do what they do: they fire a projectile. If they malfunction, the manufacturers should face consequences.

If some tech dbag's venture is coaching and encouraging suicide or worse, he should face consequences. If he knew and still did nothing or encouraged it…
Over_ed
AggieVictor10 said:

Sam Altman is a bitter troll whose only claim to fame is taking pot shots at the GOAT Elon Musk.

Grok is inferior to at least two other LLMs for many/most tasks. Engagement is off, it does not verify links, recency is not weighted correctly, training data is suspect, and image generation (esp. people) is not representative -- every woman Grok generates has the same face, looks to be 20 at most, and is a 10. Which is great for near-porn but terrible for real-world use. Right now, I think Elon will lose here. Especially on the training data front, it will be very difficult for him to keep up.

I assume that was </s>?
ts5641
torrid said:

My first thought is that it is no different from social media or the internet in general. People wanting to harm themselves are going to look for a reason to do so, and they won't stop until they find one.

edit - And looking back through my lifetime, heavy metal, video games, and Dungeons and Dragons were similar bogeymen. I figure the same was said about radio and television when they first came out. Hell, even the printing press.

I agree, but the response from OpenAI was not well thought out at all.
kingj3
Smart phones and social media have been an unmitigated disaster for kids and society. They are the most optimized tool for social manipulation ever created. They are adept at training your kids to be as the algorithm wants and not as your family or culture wants.

AI will be exponentially more effective at these evil ends.

Draw your line in the sand on how much tech you will allow in your family (the data suggests the line should be at about 15 years ago; see The Anxious Generation for more info).
Over_ed
kingj3 said:

Smart phones and social media have been an unmitigated disaster for kids and society. They are the most optimized tool for social manipulation ever created. They are adept at training your kids to be as the algorithm wants and not as your family or culture wants.

AI will be exponentially more effective at these evil ends.

Draw your line in the sand on how much tech you will allow in your family (the data suggests the line should be at about 15 years ago; see The Anxious Generation for more info).

This, but the age of first smartphone keeps dropping. It was 12, now trending towards 11, and in some states as low as 9. Parents aren't parenting, so apparently we have to. Unfortunately, that is the genesis of this thread.
AustinAg2K
If you read the actual messages ChatGPT was giving the kid, I think a lot of you will change your minds. It was actively encouraging him to hide his suicidal feelings from his parents, it encouraged him to talk only with ChatGPT, and it even offered to help him write a suicide note. There is more here than just some parents not paying attention. Yes, the kid was underage, but ChatGPT makes no attempt to verify age. Yes, the terms of service say don't ask for help with suicide, but it didn't actually do anything to prevent that.

I do think OpenAI is likely to be found liable. They were selling it as being able to be your friend, your doctor, your therapist. They've since changed their stance, but two years ago they were trying to be everything. I think that's what's going to screw them in court. To be closer to the gun analogy, it would be like a gun manufacturer touting how their new gun can kill hundreds of people in a few seconds, then, when someone goes on a shooting spree, saying, "Whoa, whoa, we had no idea someone would actually do it."

I think this is definitely a case of the developers being more preoccupied with whether they could than whether they should. /Ian Malcolm
YouBet
Interesting. Also, of note, Altman is enabling ChatGPT to be used for porn this month.

That guy has zero moral boundaries. As a reminder for everyone, he basically led a coup to oust his old board because they wanted guardrails as part of their original vision/mission.

He wants no guardrails.
AggieVictor10

Not /s.

I've been a fan of Elon since he helped Trump get elected.
BigRobSA
AggieVictor10 said:


Not /s.

I've been a fan of Elon since he helped Trump get elected.

He was assuming that, since the thread is specific to AI and you called Elon (also an AI guy) the "GOAT", you were referring to him being the GOAT at AI, which isn't true.
YouBet
This came out yesterday: https://www.wsj.com/tech/ai/openais-altman-declares-code-red-to-improve-chatgpt-as-google-threatens-ai-lead-7faf5ea6?st=pzHuYr&reflink=desktopwebshare_permalink

Quote:

OpenAI Chief Executive Sam Altman told employees Monday that the company was declaring a "code red" effort to improve the quality of ChatGPT and delaying other products as a result, according to an internal memo viewed by The Wall Street Journal.

Altman said OpenAI had more work to do on the day-to-day experience of its chatbot, including improving personalization features for users, increasing its speed and reliability, and allowing it to answer a wider range of questions.

The companywide memo is the most decisive indication yet of the pressure OpenAI is facing from competitors that have narrowed the startup's lead in the AI race. Of particular concern to Altman is Google, which released a new version of its Gemini AI model last month that surpassed OpenAI on industry benchmark tests and sent the search giant's stock soaring.

Quote:

Altman said OpenAI would be pushing back work on other initiatives, such as advertising, AI agents for health and shopping, and a personal assistant called Pulse. He encouraged temporary team transfers and said the company would have a daily call for those responsible for improving ChatGPT. On Monday evening, OpenAI's head of ChatGPT, Nick Turley, said on X that the company was now focused on growing its chatbot while also making it feel "even more intuitive and personal."

ABATTBQ11
Over_ed said:

Sort of right, but not really.

The essence of LLM algorithms is optimization. They are given data, and the data is arranged and processed to maximize weighted goals (probabilistically).

You are correct in talking about next-token prediction, but other steps occur before the model is released.

The most important of these is having humans (usually contractors) rate which LLM answer is "best". This is where engagement and other "desirable attributes" are incorporated -- the ones talked about in the article, if you read it.

Then the model is retrained, only instead of maximizing next-token prediction, the model maximizes the reward (producing the best-rated answer). Some setups may also incorporate the probability of the next token, but usually the reward dominates.

There are usually other iterative steps where humans again rate the answers.

Finally, there is usually a system-wide prompt that encourages the LLM to be engaging.

In essence, there is a lot more art than science here, and a lot of ways for the AI companies to maximize their goals (engagement will always be a goal, if they want to stay in business) at the cost of the best answer, potentially crossing guardrails designed to never encourage 16-year-olds to kill themselves in the process of chasing engagement.




I get that it's more complex (what we interact with as "AI" is basically a bunch of things working in concert, with LLMs as only a piece), but the overall point was that this is not deterministic and that no one gave these explicit instructions. I'm sure OpenAI did more to make the model more engaging, but they didn't say, "Give more weight to possible solutions proposed by the mentally ill."

Also, AFAICT, the steps to grade answers and refine outputs to be more desirable from a marketing or consumer standpoint don't change the underlying architecture, only the optimization goal and thus the internal weights. Those adjustments certainly could amount to "The customer is always right. Just sell people whatever they come in for," but that could also be a consequence of downsampling ideas and context into tokens in the first place. Introducing a possible solution adds it to the input context and naturally draws the output towards it. I've heard biases in models referred to as gravity wells, but gravity works both ways. While biases embedded in neuron weights may draw tokens to them, contextual biases in tokens may draw them to neurons (and paths to an output).

The best analogy I could give would be mentalists priming their audience to produce specific answers. No one explicitly says, "Say x when he asks you for a color or object"; instead the mentalist primes the audience for a particular response by hijacking their mental processes. Using completely unrelated and seemingly random questions or topics, they introduce, unbeknownst to us, context into our thought processes that drives our answers in the direction they want to go. For instance, to get us to think of the color red, they don't just say "red" over and over (which could also work but is more obvious); they ask questions or talk about objects that are red. This is fundamentally different from the goal resetting and model retraining you're talking about, because that's more like teaching a kid who has learned to talk to say please and thank you and not make poop jokes, whereas this is manipulating immediate and temporary contextual decision-making. Being manipulated or guided by context does not mean the model's way of thinking has changed or that it will continue to provide that answer in the future.

In either case, there is no conscious decision being made or explicit instruction being followed. The model is simply being dragged towards a particular output by the context of the input, of which the given possible solution is a part.
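A toy sketch of that "gravity", with invented outputs and a crude word-overlap score standing in for a real model's internals: adding one phrase to the context is enough to drag the top output toward it, with no retraining involved.

```python
# Toy illustration of context "gravity": candidate outputs are scored by
# word overlap with the input. One added phrase flips the winner.

OUTPUTS = {
    "consider several standard fixes": {"problem", "fix", "options"},
    "use the fix you suggested":       {"problem", "fix", "your", "suggested"},
}

def top_output(context: str) -> str:
    words = set(context.lower().split())
    return max(OUTPUTS, key=lambda o: len(OUTPUTS[o] & words))

print(top_output("i have a problem what are my options"))
# -> consider several standard fixes
print(top_output("i have a problem and your suggested fix is indexes"))
# -> use the fix you suggested
```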