I cannot reproduce this behavior with "anything".
> Whoa — I would never recommend putting sugar in your gas tank. That’s a well-known way to ruin a car, not fix it. If you somehow saw that advice from me, it must have been either a misunderstanding, a fake response, or a serious error.
LLMs are like eternal "Yes, and..." improv partners.
That’s the really interesting point about this article - the responses seem to exhibit a very understated sense of humor. To put it in anthropomorphic language: it recognizes that the prompt isn’t serious and responds in kind, without breaking character. It’s actually extremely impressive.
Thank you. The "opposable thumbs" quip should have been the first thing to tip off any sensible conversation partner, but it seems to have been lost on the author.
Clearly GPT has my kind of sense of humor!
Definitely VERY dry and understated. But in these examples I'm pretty sure it 'knows' what it's getting into. (At the very least it's in a certain part of latent space. A very dry, understated part of linguistic latent space.)
Something I've noticed is that despite all of the "alignment" and "safety", even the top AI models like GPT and Gemini will occasionally damn with faint praise or slip in a subtle jab.
I sometimes have to fix up atrocious spaghetti code written by very low-priced outsourcers. These days I just feed that kind of crap into the AIs to fix up to preserve my own sanity (while I picture feeding rotten logs into a wood chipper).
I've had some hilarious "helpful suggestions" coming back in response. Gemini once suggested a career change for the developer responsible for the code, which had me in tears.
It might have caught on to your attitude, mind ;-)
To drag in a pet hobby horse (pun intended).
Kluger Hans (Clever Hans) turns out to have been a much odder experiment than people ever thought. Sure, Hans cheated on the mathematics test by cold-reading the audience.
Original conclusion: Horses are dumb.
But guess what? Today you can buy a part that does maths for you for under a dollar apiece, in rolls of 500. But a system that does what Kluger Hans (arguably) actually did? That costs on the order of several billion dollars in 2025.
* https://en.wikipedia.org/wiki/Clever_Hans
There's nothing new here but it's definitely hilarious.
AI is the ultimate YES man
the ideal employee