I’ve seen this term thrown around a lot lately and I just wanted to read your opinion on the matter. I feel like I’m going insane.
Vibe coding is essentially asking AI to do the whole coding process, and then checking the code for errors and bugs (optional).
IMO it will “succeed” in the early phase. Pre-seed startups will be able to demo and get investors more easily, which I hear is already happening.
However, it’s not sustainable, and either somebody figures out a practical transition/rewrite strategy as they try to go to market, or the startup dies while trying to scale up.
We’ll see a lower success rate from these companies, in a bit of an I-told-you-so moment, which reduces over-investment in the practice. Under a new equilibrium, vibe coding remains useful for super early demos, hackathons, and throwaway explorations, and people learn to do the transition/rewrite either earlier or not at all for core systems, depending on the resources founders have available at such an early stage.
I think calling that vibe coding is a very unfitting term. I haven’t seen it called that before.
Nah. I only used AI as a last resort, and in my case, it has worked out. I cannot see myself using AI for code again.
Seems like a recipe for subtle bugs and unmaintainable systems. Also reminds me of the Eloi from The Time Machine, who don’t know how anything works anymore.
Management is probably salivating at the idea of firing all those expensive engineers that tell them stuff like “you can’t draw three red lines all perpendicular in yellow ink”
I’m also reminded of that ai-for-music guy that was like “No one likes making art!”. Soulless husk.
^ this
Using AI leads to code churn and code churn is bad for the health of the project.
If you can’t keep the code comprehensible and maintainable then you end up with a worse off product where either everything breaks all the time, or the time it takes to release each new feature becomes exponentially longer, or all of your programmers become burnt out and no one wants to touch the thing.
You just get to the point where you have to stop and start the project all over again, while the whole time people are screaming for the thing that was promised to them back at the start.
It’s exactly the same thing that happens when western managers try to outsource to “cheap” programming labor overseas: it always ends up costing more, taking longer, and ending in disaster.
Three perpendicular lines are possible in 3D, and saffron is initially red, but becomes yellow when used in cooking. Checkmate!
I agree with you.
The reason I wrote this post in the first place was because I heard people I respect a lot at work talk about this as being the future of programming. Also the CEO has acknowledged this and is actively riding the “vibe-coding” train.
I’m tired of these “get rich quick the easy way” buzz-words and ideas, and the hustle culture that perpetuates them.
fake
They can vibe as much as they want, but don’t ever ask me to touch the mess they create.
Once companies recognize the full extent of their technical debt, they will likely need to hire a substantial number of highly experienced software engineers to address the issues, many of which stem from over-reliance on copying and pasting outputs from large language models.
A new post-LLM coding opportunity: turd polishing
If it weren’t for the fact that even an AI trained on only factually correct data can conflate those data points into entirely novel data that may no longer be factually accurate, I wouldn’t mind the use of AI tools for this or much of anything.
But they can literally just combine everything they know to create something that appears normal and correct, while being absolutely fucked. I feel like using AI to generate code would just give you more work and waste time, because you’ll still need to fucking verify that it didn’t just output a bunch of unusable bullshit.
Relying on these things is absolutely stupid.
Completely agree. My coworkers spend more time prompting and trying to get useful text from ChatGPT and then fixing that text than the time it’d take them to actually write the thing in the first place. It’s nonsense.
This seems like a game you’d play with other programmers, lol.
I can understand using AI to write some potentially verbose or syntactically hellish lines to save time and headaches.
The whole coding process? No. 😭
You can save time at the cost of headaches, or you can save headaches at the cost of time. You cannot save both time and headaches, you can at most defer the time and the headaches until the next time you have to touch the code, but the time doubles and the headaches triple.
AI can type tedious snippets faster than me, but I can just read the code and revise it if needed.
That’s a bad vibe if I’ve ever seen one.
So you mean debugging then?
This sounds like something I put on my resume to get a coding job, but I’m not actually a coder.
It’d work, too.
Somewhat impressive, but still not quite a threat to my professional career, as it cannot produce reliable software for business use.
It does seem to open things up for novices to create ‘bespoke software’ where they previously would not have been able to, or would otherwise have been unable to justify the time commitment, which is fun. This means more software gets created which otherwise would not have existed, and I like that.
I mean, at some point you have to realize that instructing an AI on every single thing you want to do starts to look a lot like programming.
Programming isn’t just writing code. It’s being able to reason about a method of doing things. Until AI is at the level of designer, you can expect humans to have to do the brunt of the work to bring software to life.
Yeah, there’s also the “debugging is just as hard as writing elegant code” side of things. Vibe coding is largely just putting yourself in a permanent debugging role.
The big issue I see with vibe coding is that you need to know best practices to build secure code. Even if you don’t adhere to them all the time, best practices exist for a reason. And a programmer who doesn’t even know them is a dangerous thing, because they won’t even be able to see what is insecure (until it’s far too late).
Studies have found that vibe coders tend to produce less secure code, but have higher confidence in their code being secure; it’s essentially Dunning-Kruger in practice. I’d have no issue with someone using AI to get the broad strokes down. But then they need to be able to back it up with actual debugging. Not just “I didn’t even bother looking at it. If it compiles, push it to prod.”
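To make the “won’t even see what is insecure” point concrete, here’s a minimal sketch of the classic case: SQL built by string interpolation, which LLMs plausibly emit because it compiles and works on happy-path input. The function names and in-memory table are hypothetical, just for illustration.

```python
import sqlite3

def find_user_unsafe(conn, username):
    # BAD: attacker-controlled input is spliced straight into the query.
    # Looks fine, runs fine, passes a casual test with normal usernames.
    query = f"SELECT id FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # GOOD: parameterized query; the driver handles escaping.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

# Classic injection payload: the always-true predicate leaks every row.
payload = "' OR '1'='1"
print(len(find_user_unsafe(conn, payload)))  # 1 — row leaked despite no matching name
print(len(find_user_safe(conn, payload)))    # 0 — treated as a literal (weird) username
```

Both versions return identical results for ordinary input, which is exactly why someone who doesn’t know the best practice can’t spot the problem by testing.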
For personal projects, I don’t really care what you do. If someone who doesn’t know how to write a line of code asks an LLM to generate a simple program for them to use on their own, that doesn’t really bother me. Just don’t ask me to look at the code, and definitely don’t ask me to use the tool.
lol wut, asking AI to do the work and then going back and fixing bugs…?
To me, vibe coding is picking a project to work on and just getting to building. Very basic planning stages without much design, like building with legos without instruction manuals. I make design decisions and refactor as I code. I certainly get some AI input when I don’t know how to implement something, but I will usually work “blindly” using my own ideas and documentation. I probably visit stackoverflow while vibe coding more than I do chatgpt.