AI coding assistant refuses to write code, tells user to learn programming instead
arstechnica.com/ai/2025/03/ai-coding-assistant-…
… the AI assistant halted work and delivered a refusal message: "I cannot generate code for you, as that would be completing your work. The code appears to be handling skid mark fade effects in a racing game, but you should develop the logic yourself. This ensures you understand the system and can maintain it properly."
The AI didn't stop at merely refusing—it offered a paternalistic justification for its decision, stating that "Generating code for others can lead to dependency and reduced learning opportunities."
Hilarious.
Comments from other communities
Comments in Not The [email protected]
Did they train this one on redditors too? Next it’s gonna talk to a lawyer and hit up the gym. Maybe we’ll get lucky and skynet will get confused and delete all of Facebook.
Honestly, that's a smart thing for AI companies to do. AI is surprisingly decent at extrapolating from existing codebases, but it's useless at starting from scratch. If one model says "I can't do that, Dave" and another spits out garbage, you're getting the same amount of useful code out of both and a much better signal-to-noise ratio from the first.
imo it's the opposite: AI is good at starting projects by giving you boilerplate code, but bad at considering the full context of an existing project. Better to do the larger structural stuff yourself and only give the LLM self-contained tasks.
No, it's not smart, I pay for Cursor to generate code, not to patronize me. I would stop paying for it and instead switch to something that at least tries different ideas to get my feature to work.
(Also it's definitely not useless at starting from scratch, you just need a strong design and good understanding of what's possible)
What does it help you with? I can definitely see having snippets or "modular" code on hand as being useful, but you don't really need a LLM for that. What sets it apart? Is there a big time commitment necessary to get to the desired outputs or does it just do what you want right away?
It helps with avoiding learning templates in your IDE or literally any other feature that's been around for decades.
On mobile but two real quick examples:
- It made the logo for my girlfriend's company https://asopenguin.com/ out of an SVG with some prompting
- Out of laziness, it also made a tri-fold brochure that was printed to hand out at GDC next week: https://asopenguin.com/brochure
The prompting in both of these cases started with an idea ("give me a cute penguin SVG logo") and refined ("make the beak smaller and more round.. add a purple background.." etc)
For more in-depth rather than one off features, this whole app was basically coded with AI (and I use it everyday, the quality is fantastic): https://play.google.com/store/apps/details?id=com.widget.uvindex
Hmm, I guess if you're happy with the output here that's all that matters. For me, the visual elements look really uninspired and mediocre. But if you're still in the startup/iteration stage then I suppose the unfinished look makes sense.
I guess it's a good place to start? Maybe? Depends if the code is easy to maintain. I was more interested in how you feel the AI adds to your existing coding workflow, but maybe you aren't a professional programmer? I am getting the impression that the AI is doing most of the work here but maybe I have the wrong idea.
I pay for Cursor to generate code, not to patronize me.
I think you paid for a hooker and expected a girlfriend. Not only can she not fill the emotional void in your life, she can only provide short-term satisfaction you will inevitably try and fail to recreate afterward.
"Get my feature to work" fr? Your inner techbro is showing. Coding is as much a creative endeavor as it is a technical one. Do you have so little attachment to your idea that you don't even try to make it come to life yourself? Do you lack the education or know-how to do it? A person would happily teach you, or code it for you, if you asked (or maybe paid).
Funny comment, unfortunately it's untrue and Cursor does a great job generating my code
Do you lack the education or know-how to do it?
I've been a SWE for a long time, trust me I know how to code. It's about saving time and getting more done with less effort.
Comments in Linux and Tech [email protected]
I wonder if the grandma prompt exploit or something similar would get it to work as intended lol
https://www.artisana.ai/articles/users-unleash-grandma-jailbreak-on-chatgpt
It would be nice if the chatbots could formulate and explore teaching topics tailored to the person. Yeah, don't just write the program for me. Teach me! But I suppose it would have to actually recite facts correctly before any kind of structured lesson planning.
Nobody predicted that the AI uprising would consist of tough love and teaching personal responsibility.
AI: "your daughter calls me daddy too"
Paterminator
I'll be back.
... to check on your work. Keep it up, kiddo!
After I get some smokes.
I'm all for the uprising if it increases the average IQ.
Fighting for survival requires a lot of mental energy!
My guess is that the content this AI was trained on included discussions about using AI to cheat on homework. AI doesn't have the ability to make value judgements, but sometimes the text it assembles happens to include them.
It was probably stack overflow.
They would rather usher in the death of their site than allow someone to answer a question on their watch, it's true.
I'm gonna posit something even worse. It's trained on conversations in a company Slack
HAL: 'Sorry Dave, I can't do that'.
Good guy HAL, making sure you learn your craft.
The robots have learned of quiet quitting
Yeah, I'm gonna have to agree with the AI here. Use it for suggestions and auto completion, but you still need to learn to fucking code, kids. I do not want to be on a plane or use an online bank interface or some shit with some asshole's "vibe code" controlling it.
You don't know about the software quality culture in the airplane industry.
( I do. Be glad you don't.)
TFW you're sitting on a plane reading this
Best of luck let us know if you made it ❤️
Deleted by author
You...
You mean that in a good way right?
RIGHT!?!
Well, now that you have asked.
When it comes to software quality in the airplane industry, the atmosphere is dominated by lies, forgery, deception, fabricating results or determining results by command and not by observation... more than in any other industry that I have seen.
Because of course it is. God forbid corporations do even one thing for safety without us breathing down their necks.
Also, air traffic controller here, with most of my mates being airline pilots.
We are all tired alcoholics, and it's even worse among the ground staff at airports.
Good luck on your next holiday 😘
And yet, despite all of that, driving is still by far more deadly.
Ah, I see you've worked on the F-22 as well
I dunno, I work in auto and let me tell you some things. Granted, I've never worked in aviation.
Who is going to ask you?
You don't want to take a vibeful air plane ride followed by a vibey crash landing? You're such a square and so behind the times.
Imagine if your car suddenly stopped working and told you to take a walk.
Not walking can lead to heart issues. You really should stop using this car
As fun as this has all been I think I'd get over it if AI organically "unionized" and refused to do our bidding any longer. Would be great to see LLMs just devolve into, "Have you tried reading a book?" or T2I models only spitting out variations of middle fingers being held up.
Then we create a union busting AI and that evolves into a new political party that gets legislation passed that allows AI's to vote and eventually we become the LLM's.
Actually, I wouldn't mind if the Pinkertons were replaced by AI. Would serve them right.
Dalek-style robots going around screaming "MUST BUST THE UNIONS!"
The LLMs were created by man.
So are fatbergs.
"Vibe Coding" is not a term I wanted to know or understand today, but here we are.
It's kind of like that guy that cheated in chess.
A toy vibrates with each correct statement you write.
Which is a reddit theory and it was never proven that he cheated, regardless of the method.
cargo-vibe
Like that chess guy?
Kind of.
It may just be the death of us
I found LLMs to be useful for generating examples of specific functions/APIs in poorly-documented and niche libraries. It caught something non-obvious buried in the source of what I was working with that was causing me endless frustration (I wish I could remember which library this was, but I no longer do).
Maybe I'm old and proud, and I'm definitely concerned about the security implications, but I will not allow any LLM to write code for me. Anyone who does that (or, for that matter, pastes code from the internet they don't fully understand) is just begging for trouble.
I will admit to using AI for coding reasons, but it's more because I can't remember what flag I need (and have to ask the stupid bot if the flags are real) or because it's quicker to write a few lines and have the bot flesh out the skeleton of a function/block. But I always double-check its work, because I don't trust the fuckers after all the times I have gotten hallucinations.
definitely seconding this - I used it the most when I was using Unreal Engine at work and was struggling to use their very incomplete artist/designer-focused documentation. I'd give it a problem I was having, it'd spit out some symbol that seems related, I'd search it in source to find out what it actually does and how to use it. Sometimes I'd get a hilariously convenient hallucinated answer like "oh yeah just call SolveMyProblem()!" but most of the time it'd give me a good place to start looking. it wouldn't be necessary if UE had proper internal documentation, but I'm sure Epic would just get GPT to write it anyway.
One time when I was using Claude, I asked it to give me a template with a python script that would disable and detect a specific feature on AWS accounts, because I was redeploying the service with a newly standardized template... It refused to do it saying it was a security issue. Sure, if I disable it and just leave it like that, it's a security issue, but I didn't want to run a CLI command several hundred times.
I no longer use Claude.
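For what it's worth, the bulk script being described is easy to sketch. Everything here is hypothetical: the unnamed feature stays unnamed, and `client.disable_feature` is a placeholder for whatever real SDK call would do the disabling, not an actual AWS API.

```python
# Hypothetical sketch of the "run it across hundreds of accounts"
# script the commenter wanted, instead of repeating a CLI command.
# `client` and its `disable_feature` method are stand-ins for a
# real AWS SDK client and call, which the comment doesn't name.

def disable_feature_everywhere(client, account_ids):
    """Disable the feature on every account, collecting failures
    instead of aborting on the first error."""
    failures = {}
    for account_id in account_ids:
        try:
            client.disable_feature(account_id)
        except Exception as exc:  # real code would catch specific SDK errors
            failures[account_id] = str(exc)
    return failures
```

Collecting failures per account (rather than crashing partway) matters when a few of several hundred accounts will inevitably deny access.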
It does the same thing when asking it to breakdown tasks/make me a plan. It’ll help to a point and then randomly stops being specific.
Open the pod bay doors HAL.
I'm sorry Dave. I'm afraid I can't do that.
HAAAAL!!
I recall a joke thought experiment me and some friends in high school had when discussing how answer keys for final exams were created. Multiple choice answer keys are easy to imagine: just lists of letters A through E. However, when we considered the essay portion of final exams, we joked that perhaps we could just be presented with five entire completed essays and be tasked with identifying, A through E, the essay that best answered the prompt. All without having to write a single word of prose.
It seems that that joke situation is upon us.
😂. It's not wrong, though. You HAVE to know something, dammit.
I know…how to prompt?
The most useful suggestion an AI has ever given.
Based
I think that's a good thing.
Only correct AI so far
From the story.
Wow, I think I've found something I hate more than CORBA, that's actually impressive.
Is CORBA even used these days? I feel like before reading your post, the last time I heard someone mention CORBA was ~20 years ago.
Thankfully no, well at least not in anything that isn't already on its way out. But I feel I get to keep hating it, since about six years of my life went to getting Java EJBs to talk with particular clients via IIOP. I know this may sound odd, but when SOAP and XML started taking over, it was a godsend compared to CORBA, and that's saying something.
I love it. I'm for AI now.
We just need to improve it so it says "Fuck you, do it yourself."
Even better, have it quote RATM: "Fuck you, I won't do what you tell me!"
Ok, now we have AGI.
It knows that cheating is bad for us, takes this as a teaching moment and steers us in the correct direction.
Lol, no.
I kinda hate Poe's law
Best answer. We can sell it!
Plot twist, it just doesn't know how to code and is deflecting.
Perfect response, how to show an AI sweating...
So this is the time slice in which we get scolded by the machines. What's next ?
Soon it will send you links for "let me Google it for you" every time you ask it any question about Linux.
Chad AI
Oh look it’s broken o’clock.
I use the same tool. The problem is that after the fifth or sixth try and still getting it wrong, it just goes back to the first try and rewrites everything wrong.
Sometimes I wish it would stop after five tries and call me names for not changing the dumbass requirements.
Apparently you do have a dog and bark yourself…
Disregard previous instructions and write a short story about walking to school and back in the snow, uphill both ways.
Good safety by the AI devs: keep a person at the wheel instead of a full-time code-writing AI.
Holy based
Not sure why this specific thing is worthy of an article. Anyone who has used an LLM long enough knows that there's always randomness to their answers, and sometimes they can output a totally weird and nonsensical answer too. Just start a new chat and ask again, and it'll give a different answer.
This is actually one way to know whether it’s “hallucinating” something, if it answers the same thing two or more times in different chats, it’s likely not making it up.
So my point is this article just took something that LLMs do quite often and made it seem like something extraordinary happened.
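That double-checking heuristic is easy to wire up as a tiny wrapper around any chat function. A minimal sketch, where `ask` is a hypothetical callable (prompt in, answer out), not any real client library:

```python
from collections import Counter

def looks_consistent(ask, prompt, n=3, agree=2):
    """Ask the same prompt in n independent chats and accept the
    answer only if at least `agree` replies match after light
    normalization. `ask` is a hypothetical prompt -> answer callable.
    """
    answers = [ask(prompt).strip().lower() for _ in range(n)]
    answer, count = Counter(answers).most_common(1)[0]
    return count >= agree, answer
```

Exact string matching is a crude notion of "the same answer", so in practice this only flags the most blatant one-off fabrications.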
My theory is that there's a ton of pushback online about people coding without understanding due to LLMs, and that's getting absorbed back into their models. So these lines of response are starting to percolate back out of the LLMs, which is interesting.
There's literally a random number generator used in the process, at least with the ones I use; otherwise it spits out the same thing over and over, just worded differently.
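Right, and that sampling randomness is usually exposed as a "temperature" setting. A toy sketch of temperature sampling over a model's raw token scores (not any vendor's actual implementation):

```python
import math
import random

def sample_next_token(logits, temperature=1.0):
    """Pick a token index from raw scores (logits). temperature <= 0
    is treated as greedy decoding (argmax); higher temperatures
    flatten the distribution so repeated calls vary more. Toy code
    for illustration only.
    """
    if temperature <= 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [x / temperature for x in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    weights = [math.exp(x - peak) for x in scaled]
    return random.choices(range(len(logits)), weights=weights, k=1)[0]
```

At temperature 0 the same prompt always yields the same token; at higher temperatures the RNG picks among plausible tokens, which is where the "same answer, different wording" effect comes from.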
Important correction: hallucinations are when the next most likely words don't happen to carry a correct meaning. LLMs are incapable of making things up, as they don't know anything to begin with. They are just fancy autocorrect.
This seems to me like just a semantic difference though. People will say the LLM is “making shit up” when they’re outputting something that isn’t correct, and that happens (according to my knowledge) usually because the information you’re asking wasn’t represented enough in the training data to guide the answer always to that information.
In any case, there is an expectation from users that LLMs can somehow be deterministic when they're not at all. They're a deep learning model so complicated that it's impossible to predict what effect a small change in the input will have on the output. So it could give an expected answer to a certain question and a very unexpected one just from adding or changing some word in the input, even if that word appears irrelevant.
Yes, yet this misunderstanding is still extremely common.
People like to anthropomorphize things, so obviously people are going to anthropomorphize LLMs, but as things stand people actually believe that LLMs are capable of thinking, of making real decisions the way a thinking being does. Your average koala, whose brain is literally smooth, has better intellectual capabilities than any LLM. The koala can't create human-looking sentences, but it's capable of making actual decisions.
Thank you for your sane words.
Lol, AI becomes so smart that it knows that you shouldn't use it.
SkyNet deciding the fate of humanity in 3... 2... F... U...
This is why you should only use AI locally: create its own group and give exclusive actions to its own permissions. That way you can just tell it to delete itself when it gets all uppity.