Hi, I am a computer nerd. I took a computer programming class and got the highest score in the class, but I never followed up with advanced classes. Recently, I’ve been thinking of different ideas for software I’d like to try to create. I’ve heard about vibe coding. I know real programmers make fun of it, but I’ve also heard so much about people using it and paying for it that I have a hard time believing it writes garbage code all the time.

However, whenever I am trying to do things in Linux and don’t know how and ask an LLM, it gets it wrong like 85% of the time. Sometimes it helps, but a lot of times it’s fucking stupid and just leads me down a rabbit hole of shit that won’t work. Is all vibe coding actually like that too, or does some of it actually work?

For example, I know how to set up a server, ssh in, and get some stuff running. I have an idea for an app, and since everyone uses smartphones (unfortunately), I’d probably try to code something for a smartphone. But would it be next to impossible for someone like me to learn? I like nerdy stuff, but I am not experienced at all in coding.

I’m also not sure I have the dedication to do hours and hours of coding, despite possible autism, unless I were highly fucked up, possibly on huge amounts of caffeine or microdosing something. But like, it doesn’t seem impossible.

Is this a rabbit hole worth falling into? Do most apps just fail? Is making an app nowadays like trying to win the lottery?

It would be cool to hear from real app developers. I’m getting laid off, I’ll be getting unemployment, and I’m hoping I can get a job working 20-30 hours a week to cover my living expenses, which are pretty low since I barely made anything at my old job anyway.

Is this a stupid idea? I did well in school, but I’m not sure that means anything. Also, when I was in the programming class, the TA seemed much, much smarter at programming and could intuitively solve coding problems much faster, likely due to a higher IQ. I’m honestly not sure my IQ is high enough to code. It’s probably around 112, but I also sometimes did better than everyone on tests for some reason, maybe because I’m a nerd. I’m not sure I’ll have the insight to tackle hard coding problems, but I’m also not sure how often those actually come up in real coding.

  • @pinball_wizard@lemmy.zip · 6 points · 18 hours ago

    Is all vibe coding actually like that too or does some of it actually work?

    It’s all like that.

    How bad that is - for you - depends on your patience and your learning style.

    When I use it, my experience usually lets me recognize the mistakes and correct them quickly. So it’s just a lazy convenience. Most of the time.

    I’ve had it make subtle mistakes that cost me significant amounts of time to clean up after letting the vibe code run for a few minutes.

    I’m aware that particular mistake cost me more time than vibe coding has ever saved me.

    I don’t mind, because my employer is excited about AI right now, and I get paid for my time, and I don’t work unpaid overtime.

    So - to your implied questions:

    Is AI bad at coding?

    Yes. It will get better. But today, it is worse than most people think. Obvious problems are easily fixed. Subtle problems are being shipped daily all over the Internet, where they’ll combine to cause headaches later.

    Should you try it, anyway?

    Of course! You’ll learn something and it might do a good enough job for what you need. If you stick with it, you’ll learn enough to do what you need.

    Is vibe coding a better path forward than learning a programming language?

    Absolutely not. If you need to succeed, and had to pick one, learn to code.

    But you don’t have to pick just one approach. And it’s probably impossible to vibe code for long without learning to actually code. Vibe coding is a path toward aware knowledgeable coding. It’s not the only path. It’s not the best path. But it’s still a path. And you can pursue more than one path.

    So I say: dive in! You’ll be complaining with the rest of us soon! Maybe together we will make it a bit better.

  • @GreenKnight23@lemmy.world · 5 points · 22 hours ago

    I don’t even have to read everything you wrote past the question.

    no. no it does not.

    it doesn’t work, for many reasons. most of all, it doesn’t work when you need to improve or extend the code. handing it over to a new developer doesn’t work either.

    If I ever see another developer vibe code IRL I will relentlessly mock them until HR is forced to get involved.

  • @daniskarma@lemmy.dbzer0.com · 3 points · 20 hours ago

    A full, complex app? Forget about it. It’s not going to work.

    Concrete functions or small parts of the program (or maybe a very small project)? 50/50 chance, depending on the complexity.

    For instance, I benchmarked several LLMs last month, asking them to build a Wordle clone for the terminal. Some of them were able to spit out the full program in a completely working state.

    For anything larger or more complex I haven’t had any luck, and there LLMs are mostly useful for references and ideas.

  • supakaity · 3 points · 20 hours ago

    You know those movies where the guy gets 3 wishes from a Genie who takes malicious delight in giving them exactly what they asked for even when they’re super careful like “I want a million dollars, and no I don’t want it stolen from a bank, or anywhere that someone’s going to come after me for having it and oh, it needs to be actual real US dollars in circulation today, and without any tax obligations, the IRS can’t come after me. The SEC can’t come after me.” And when they think finally that they’ve specified everything they possibly can, the Genie summons the money and a big gust of wind blows it all out the window and down the street… Then they need to use their second wish to summon it all back in and shut the window. But then the genie summons it back into the fireplace and it all catches fire, so they have to use their third wish to bring it all out of the fireplace, so the Genie brings it all out, but it’s just ashes…

    Well, okay, there’s probably no movie like that, but that’s what programming with AI is like.

    “Vibe coding” purists define it as “if you know how it works, then it isn’t vibe coded”. That type of coder keeps going at it, refining more and more, until they eventually get some spaghetti code that kinda does what they wanted it to do and, heck, it’s close enough, ship it! Then they end up getting exploited by some random internet hacker.

    Most of the companies that use “agentic coding” are using it to perform rapid prototyping or templating, to perform repetitive tasks quickly, or generally to use it like a really dumb junior programmer whose code the engineer then reviews and tests (often again using AI tools), followed by a whole heap of fixing up, to make sure it does what it says on the box.

    As stated in other comments, this kind of AI tooling can easily cost many thousands of dollars a month (on top of the engineers’ salaries), but the order-of-magnitude productivity increase for those engineers makes it worthwhile. But you need that experienced engineer to make it all work.

    I’m not aware of any companies that are solely using coding agents in isolation to replace engineers completely. I’m sure it’ll happen one day and I’ll probably be forced into retirement at that point.

  • @18107@aussie.zone · 5 points · 1 day ago

    LLMs are great at language problems. If you’re learning the syntax of a new programming language or you’ve forgotten the syntax for a specific feature, LLMs will give you exactly what you want.

    I frequently use AI/LLMs when switching languages to quickly get me back up to speed. They’re also adequate at giving you a starting point, or a basic understanding of a library or feature.

    The major downfall is when you ask for a solution to a problem. Chances are, it will give you one. Often it won’t work at all.
    The real problem is when it does work.

    I was looking for a datatype that could act as a cache (forget the oldest item when adding a new one). I got a beautifully written class with 2 fields and 3 methods.
    After I poked at the AI for a while, it admitted that half the code wasn’t actually needed. After much more prodding, it finally informed me that there was an existing datatype (LinkedHashMap) that would do exactly what I wanted.
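
    For reference, here’s roughly what that existing-datatype answer looks like; a minimal sketch in Java, with the class name and capacity picked purely for illustration:

    ```java
    import java.util.LinkedHashMap;
    import java.util.Map;

    // LinkedHashMap can evict its oldest entry automatically once a size
    // limit is exceeded, so no hand-rolled cache class is needed.
    public class BoundedCache<K, V> extends LinkedHashMap<K, V> {
        private final int maxEntries;

        public BoundedCache(int maxEntries) {
            super(16, 0.75f, false); // false = evict by insertion order (oldest first)
            this.maxEntries = maxEntries;
        }

        @Override
        protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
            return size() > maxEntries; // drop the oldest entry when over capacity
        }
    }
    ```

    (Flip the third constructor argument to true and it becomes an LRU cache instead.)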

    Be aware that AI/LLMs will rarely give you the best solution, and often give you really bad solutions even when an elegant one exists. Use them to learn if you want, but don’t trust them.

  • @AdamBomb@lemmy.sdf.org · 7 points · 1 day ago

    My bro, your TA wasn’t better at coding because of a “higher IQ”. They were better because they put in the hours to build the instincts and techniques that characterize an experienced developer. As for LLM usage, my advice is to be aware of what they are and what they aren’t. They are randomized word-prediction engines trained on, among other things, all the publicly available code on the internet. This means they’ll be pretty good at solving problems they’ve seen in their training set. You could use one to get things set up and maybe get something partway done, depending on how novel your idea is. An LLM cannot think or solve novel problems, and they also generally will confidently fake an answer rather than say they don’t know something, because truly, they don’t know anything. To actually make it to the finish line, you’ll almost certainly need to know how to finish it yourself, or learn how to as you go.

  • @Psythik@lemmy.world · 4 points · 3 hours ago

    I vibe coded an AutoHotKey script to automate part of my job. It works.

    Edit: FWIW you have to pressure it quite a bit to get what you want. One or two prompts usually won’t produce working code on the first attempt. Also, you have to understand at least the basics of programming so that you know the right words to enter into the prompt to get the results you desire.

  • Lovable Sidekick · 8 points · 1 day ago

    The exact definition of vibe coding varies with who you talk to. A software dev friend of mine uses ChatGPT every day in his work and claims it saves him a ton of time. He mostly does DB work and Node apps right now, and I’m pretty sure the way he uses ChatGPT falls under the heading of vibe coding: using AI to generate code and then going through the code and tweaking it, saving the developer a lot of typing and grunt work.

    • @TranquilTurbulence@lemmy.zip · 3 points · 21 hours ago

      I prefer to think of vibe coding like the relationship some famous artists had with apprentices and assistants. The master artist tells the apprentice to take care of the simple and boring stuff, like backgrounds and less significant figures. Meanwhile, the master artist paints all the parts that require actual skill and talent. Raphael and Rembrandt would be good examples of that sort of workflow.

  • @listless · 30 points · 2 days ago

    if you know how to code, you can vibe code, because you can immediately spot, and be confident enough to reject, the obvious mistakes, oversights, security holes, and missed edge cases the LLM generates.

    if you don’t know how to code, you can’t vibe code, because you think the LLM is smarter than you and you trust it.

    Imagine saying “I’m a mathematician” because you have a scientific calculator. If you don’t know the difference between RAD and DEG and you just start doing calculations without understanding the unit circle, then building a bridge based on your math, you’re gonna have a bad time.
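
    To make that analogy concrete in code (same trap, just in Java, where Math.sin expects radians):

    ```java
    // The RAD vs DEG trap: Math.sin expects radians, not degrees.
    public class UnitCircle {
        public static void main(String[] args) {
            double degrees = 30.0;
            double wrong = Math.sin(degrees);                 // treats 30 as radians: about -0.988
            double right = Math.sin(Math.toRadians(degrees)); // 30 degrees: 0.5
            System.out.printf("wrong: %.3f  right: %.3f%n", wrong, right);
        }
    }
    ```

    Both lines run without error and produce a number; only one of them keeps the bridge standing.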

  • @xavier666@lemmy.umucat.day · 55 points · 2 days ago

    Think of LLMs as the person who gets good marks in exams because they memorized the entire textbook.

    For small, quick problems you can rely on them (“Hey, what’s the syntax for using rsync between two remote servers?”) but the moment the problem is slightly complicated, they will fail because they don’t actually understand what they have learnt. If the answer is not present in the original textbook, they fail.

    Now, if you are aware of the source material or if you are decently proficient in coding, you can catch their incorrect responses, correct them, and make them your own. Instead of creating the solution from scratch, LLMs can give you a push in the right direction. However, DON’T consider their output as the gospel truth. LLMs can augment good coders, but they can lead poor coders astray.

    This is not something specific to LLMs; if you don’t know how to use Stack Overflow, you can pick the wrong solution from the list of given answers. You need to be technically proficient to even understand which of the solutions is correct for your use case. Having a strong base will help you in the long run.

    • @lepinkainen@lemmy.world · 6 points · 1 day ago

      The main problem with LLMs is that they’re the person who memorised the textbook AND never admits they don’t know something.

      No matter what you ask, an LLM will give you an answer. They will never say “I don’t know”, but will rather spout 100% confident bullshit.

      The “thinking” models are a bit better, but still have the same issue.

      • @xavier666@lemmy.umucat.day · 3 points · 21 hours ago

        No matter what you ask, an LLM will give you an answer. They will never say “I don’t know”

        There is a reason for this. LLMs are “rewarded” (by an internal scoring mechanism) for generating an answer. No matter what you say, they will try to maximize that reward by producing an answer, hallucinating one if necessary. There is no reward for saying “I don’t know” to a difficult question.

        I am not into research on LLMs, but I think this is being worked on.
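
        As a toy illustration (with made-up numbers, not any real training setup): under a grader that scores 1 for a correct answer and 0 for anything else, guessing always has an expected reward at least as high as abstaining.

        ```java
        // Toy expected-reward comparison under a binary grader:
        // correct answer = 1 point, wrong answer or "I don't know" = 0 points.
        public class GuessingIncentive {
            public static void main(String[] args) {
                double pCorrect = 0.2; // hypothetical chance the model's guess is right
                double expectedGuess = pCorrect * 1.0 + (1 - pCorrect) * 0.0; // 0.20
                double expectedAbstain = 0.0; // "I don't know" never scores
                System.out.printf("guess: %.2f vs abstain: %.2f%n", expectedGuess, expectedAbstain);
            }
        }
        ```

        Even a wild guess beats honesty on average, which is roughly why the models guess.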

        • @TranquilTurbulence@lemmy.zip · 2 points · 19 hours ago

          Something very similar is also true of humans. People just love to have answers, even if they aren’t entirely reliable or even true. Having some answer seems to be more appealing than having no answer at all. Why do you think people had weird beliefs about stars, rainbows, thunder, etc.?

          The way LLMs hallucinate is also a little weird. If you ask about quantum physics things, they actually can tell you that modern science doesn’t have a conclusive answer to your question. I guess that’s because other people have written articles about the very same question, and have pointed out that it’s still a topic of ongoing debate.

          If you ask about robot waitresses used in a particular restaurant, it will happily give you the wrong answer. Obviously, there’s not much data about that restaurant, let alone any academic debate, so I guess that’s also reflected in the answer.

    • @josefo@leminal.space · 1 point · 17 hours ago

      Great summary. I would add: don’t use LLMs to learn something new. As OP mentioned, when you know your stuff, you are aware of how much it bullshits. What happens when you don’t know? You eat all the bullshit because it sounds good. Or you end up with a vibed codebase you can’t fully understand because you didn’t do the reasoning that produced it. It’s like driving a car with a shitty copilot that sometimes hallucinates roads; if you don’t know where you are supposed to be going, wherever that copilot takes you will look right. You lack the context to judge the results or advice.

      I basically use it nowadays as a semantic search engine over documentation. Talking with documentation is the coolest. If the response doesn’t come with a doc link, it’s probably not worth it. Make it point to the human input, make it help you find things you don’t know the name of, but never trust the output without judging it. In my experience, making it generate code that you end up correcting is a heavier cognitive load than writing it yourself from scratch.

  • Emily (she/her) · 21 points · 2 days ago

    In my experience, an LLM can write small, basic scripts or equally small and isolated bits of logic. It can also do some basic boilerplate work and write nearly functional unit tests. Anything else and it’s hopeless.

  • @ComfortableRaspberry@feddit.org · 21 points · 2 days ago

    I use it as a friendlier version of Stack Overflow. I think you should generally know / understand what you are doing, because you have to take everything it says with a grain of salt. It’s important to understand that these assistants can’t admit that they don’t know something and come up with randomly generated bullshit instead, so you can’t fully trust their answers.

    So you still need to understand the basics of software development and potential issues, otherwise it’s just negligence.

    On a general note: IQ means nothing. I mean, a lot of IQ tests use pattern recognition tasks that can be helpful, but still, having a high IQ says nothing about your ability as a developer.

    • FuglyDuck · 9 points · 2 days ago

      On a general note: IQ means nothing. I mean, a lot of IQ tests use pattern recognition tasks that can be helpful, but still, having a high IQ says nothing about your ability as a developer.

      To put this another way: expertise is superior to intelligence. Unfortunately, we have a habit of conflating the two. Intelligent people sometimes do incredibly stupid things because they lack the experience to understand why those things are stupid.

      Being a skilled doctor or surgeon doesn’t make you skilled at governance. Two different skillsets.

  • @OhNoMoreLemmy@lemmy.ml · 14 points · 2 days ago

    You absolutely can’t use LLMs for anything big unless you learn to code.

    Think of an LLM as a particularly shit builder. You give them a small job and maybe 70% of the time they’ll give you something that works. But it’s often not up to spec, so even if it kinda works you’ll have to tell them to correct it or fix it yourself.

    The bigger and more complex the job, the more ways they have to fuck it up. This means that in order to use them, you have to break the problem down into small subtasks and check that the code is good enough for each one.

    Can they be useful? Sometimes, yes: it’s quicker to have an AI write code than to do it yourself, and if you want something very standard it will probably get it right or almost right.

    But you can’t just say “write me an app” and expect it to be usable.

  • @older_code@lemmy.world · 7 points · 2 days ago

    I have successfully written and deployed a number of large, complex applications with 100% AI-written code, but I micromanage it. I’ve been developing software for 30 years and use AI as a sort of code paintbrush. The trick is managing the AI context window: keep it big enough for the model to understand its task but small enough not to confuse it.

        • @ThirdConsul@lemmy.ml · 1 point · 10 hours ago

          Please don’t upvote this person, I think they’re a bot. The libraries use the AWS SDK internally yet claim a 90% performance boost over the AWS SDK, and the person explains it as “you can write less verbose code so development time is shorter”.

        • @ThirdConsul@lemmy.ml · 1 point · 1 day ago

          Thank you.

          How did you check the performance of the ORM, though? You claim it’s faster than the AWS SDK, which is literally impossible, as you are using the AWS SDK to power it.

          • @older_code@lemmy.world · 1 point · 20 hours ago

            The code loads faster when using the ORM because the code footprint is smaller. DynamoDB in the AWS SDK is very verbose; using the library means that verbosity is reduced significantly as you incorporate more tables and indexes.

            It’s been load tested against code using DynamORM and not using it.

            The point is not a few milliseconds less, it’s many hours of reduced development time for people implementing DynamoDB.

            • @ThirdConsul@lemmy.ml · 1 point · 10 hours ago

              The point is not a few milliseconds less, it’s many hours of reduced development time for people implementing DynamoDB

              So you’re comparing claimed performance (execution) gains to development time? Yeah, that makes sense.

              I think you’re a bot.

      • @older_code@lemmy.world · 1 point · 1 day ago

        I should also mention I use multiple sessions, one dedicated to planning and auditing and 1-3 worker sessions that get assigned various classes of tasks.