Ask HN: Am I the only one not using AI?

I've tried using various AI tools and models over the past couple of years, but honestly they give me a false sense of confidence. Plus, the time I supposedly save building things gets eaten up debugging, correcting, improving the AI-generated slop.

Am I using the tools wrong or are others finding the same thing?

18 points | by acqbu 2 days ago

17 comments

  • nicohayes 1 day ago
    You're definitely not alone. Social media amplifies the "AI is everywhere" narrative, but in reality? Most people are still shipping code the old-fashioned way.

    I'd estimate maybe 20% of devs have actually integrated AI into their daily workflow beyond occasional ChatGPT queries. The other 80% either tried it and bounced off the friction, or are waiting to see which tools actually stick.

    Not using AI doesn't mean you're falling behind - it means you're avoiding cargo-culting. The real skill is knowing when it's worth the context-switching cost and when grep + your brain is faster.

  • softwaredoug 2 days ago
    AI ruins your flow. That's the biggest problem. I sit here and wait for Claude to do something. Then I get distracted by social media.

    No, these alternatives don't actually work if you study human psychology:

    * Switching to another work task (what, for like a minute?)

    * Playing chess or something (sure, it's better than social media, but still a distraction)

    But I do like AI tools that don't interfere with my flow, like GitHub Copilot, or even chatting with Claude / ChatGPT about a task I'm doing.

    • digital_sawzall 1 day ago
      I started doing pushups between Claude Code responses. I started with 10 but now I rip ~50 like nothing. I'm getting a pull-up bar and trying to do the same: pull-ups until it completes, then prompt and go again; squats, pushups, etc. I'm getting stronger and better at code.
    • objcts 1 day ago
      stare out the window. look at clouds. wonder how they take the shapes they do. think about water and how it moves through time and space. how those water molecules were once in a bowl of rice or loaf of bread. how many other things has this water been in? what about the water in my body, right now? holy shit, i’ve been a cloud before…

      oh, claude’s done now. how does this thing work?

    • bn-l 2 days ago
      E-e-e-xactly. It took an embarrassingly long time for me to come to this conclusion too. There's something hypnotising about seeing it work, which is also distracting.

      I wonder if I've actually saved time overall, or if, in an uninterrupted flow state, I would have done not just a better but also a quicker job.

    • galaxy_gas 1 day ago
      I just now asked Cursor-GPT where a service was being called from; it's been over 10 minutes and it hasn't come up with an answer. Just constant grepping and reading and planning next moves.

      So aggravating

  • tstrimple 2 days ago
    Just started a Claude Code experiment this week. I'm building a new NAS, but instead of using an appropriate off-the-shelf distro like TrueNAS, I just installed NixOS and I'm having Claude Code fully manage the entire operating system. It's going pretty well so far. Initially it would reach for tools like dig that weren't available on the install, but after a "# memorize We're on NixOS, you need to try to do things the NixOS way first. Including temporarily installing tools via nix-shell to run individual commands." those issues went away and it's doing NixOS things.

    From a clean NixOS command line install, we've got containers and vms handled. Reverse proxy with cloudflare tunnels with all endpoints automatically getting and renewing SSL certs. All the *arr stack tools and other homelab stuff you'd expect. Split horizon DNS with unbound and pihole running internally. All of my configurations backed up in github. I didn't even create the cloudflare tunnels or the github repos. I had claude code handle that via API and cli tools. The last piece I'm waiting on to tie it all together are my actual data drives which should be here tomorrow.
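
    A setup along these lines might look roughly like the sketch below in configuration.nix. The options shown are real NixOS options, but the selection is an illustrative guess, not the actual config described here:

```nix
# Illustrative sketch only: a few declarative pieces of a homelab NixOS
# config like the one described above. Not the commenter's actual setup.
{ config, pkgs, ... }:
{
  # Containers managed declaratively, so an agent edits config files
  # instead of running imperative docker commands.
  virtualisation.oci-containers.backend = "podman";

  # Internal resolver for the split-horizon DNS setup.
  services.unbound.enable = true;

  # Rollbacks are what make the experiment survivable:
  # `nixos-rebuild switch --rollback` restores the previous generation.
  system.stateVersion = "24.05";
}
```

    A nice property of this approach is that the whole system state lives in text files, which is exactly what makes "all of my configurations backed up in GitHub" possible.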

    Is this a smart thing to do? Absolutely not. Tons of things could go wrong. But NixOS is fairly resilient and rollbacks are easy. I don't actually have anything running on the NAS in use yet and I've got my synology limping along until I finish building this replacement. It's still an open question whether I'll use Claude Code like this to manage the NAS once I've migrated my data and my family has switched over. But I've had a very good experience so far.

  • Jeremy1026 1 day ago
    I've recently started to use it to help with writing tests. I'll write the code, then scaffold out the test scenarios that I want it to cover, give it my code and my scaffolding, and say: fill it in. It's done pretty well and saves me a ton of time doing the part of the job that I hate the most. I go through and tweak probably 10% of the generated code, and typically about 1 out of 30 tests will fail, badly, and I'll have to rewrite it from scratch.
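
    The scaffold-then-fill workflow might look something like this sketch. The `slugify` function and scenarios are hypothetical, not the commenter's actual code; the human writes the code and the empty scenario names, and the model only fills in the test bodies (shown here already filled):

```python
# Hypothetical example of the scaffold-then-fill testing workflow.
# The function under test and the scenario names are human-written;
# only the assertion bodies are the kind of thing the model fills in.

def slugify(title: str) -> str:
    """Turn a title into a URL slug (code written by hand)."""
    return "-".join(title.lower().split())

# Scaffold: scenario names and docstrings define what must be covered.
def test_basic_title():
    """Spaces become hyphens, everything lowercased."""
    assert slugify("Hello World") == "hello-world"

def test_single_word():
    """A single lowercase word passes through unchanged."""
    assert slugify("hello") == "hello"

def test_extra_whitespace():
    """Runs of whitespace collapse to a single hyphen."""
    assert slugify("  Hello   World  ") == "hello-world"
```

    Because the scenarios are fixed up front, a badly failing generated test sticks out immediately, which matches the roughly 1-in-30 rewrite rate described above.
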
  • prossercj 2 days ago
    I don't use it for large-scale code generation, but I do find it useful for small code snippets. For example, asking how to initialize a widget in Kendo UI with specific behavior. With snippets, I can just run the code and verify that it works with minimal effort. It's often more about reminding me of something I already knew rather than discovering something novel. I wouldn't trust it with anything novel.

    In general, I think of it as a better kind of search. The knowledge available on the internet is enormous, and LLMs are pretty good at finding and synthesizing it relative to a prompt. But that's a different task than generating its own ideas. I think of it like a highly efficient secretary. I wouldn't ask my secretary how to solve a problem, but I absolutely would ask if we have any records pertaining to the problem, and perhaps would also ask for a summary of those records.

  • dasefx 1 day ago
    My workflow is simple:

    1) THINK hard about the problem by yourself.

    2) Define rough sketches of function names, params, flow, etc. (adapt to your problem).

    3) Iterate with any LLM and create an action plan; this is where you correct everything, before any code is written.

    4) Send the plan to one of the CLI LLM thingies and attack the points one by one so you don't run out of context.

    So far it has been working beautifully for real work stuff. Sometimes the models do drift, but if you are actually paying attention to the responses, you should be able to catch it early.
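
    Step 2 might look like the sketch below: a hypothetical CSV-import feature (all names invented for illustration), with only signatures, params and flow written by hand before any LLM sees it:

```python
# Hypothetical "step 2" sketch for an imaginary CSV-import feature:
# names, params and flow only, written by hand before any LLM is involved.
from typing import Dict, Iterator, List, Tuple

def read_rows(path: str) -> Iterator[Dict[str, str]]:
    """Yield one dict per CSV row. Open questions: encoding? headers?"""
    raise NotImplementedError

def validate_row(row: Dict[str, str]) -> List[str]:
    """Return a list of validation errors; an empty list means valid."""
    raise NotImplementedError

def import_file(path: str) -> Tuple[int, int]:
    """Flow: read_rows -> validate_row -> insert. Returns (imported, rejected)."""
    raise NotImplementedError
```

    The model then iterates on this plan rather than on a blank page, and drift is easier to catch because the names and flow are already pinned down.
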

  • overvale 1 day ago
    “I've come up with a set of rules that describe our reactions to technologies:

    1. Anything that is in the world when you’re born is normal and ordinary and is just a natural part of the way the world works.

    2. Anything that's invented between when you’re fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it.

    3. Anything invented after you're thirty-five is against the natural order of things.”

    ― Douglas Adams

  • davydm 2 days ago
    I found the same thing, so I don't bother with AI-gen code AT ALL. I found that the time wasted fixing up the slop was not worth it - it's more efficient to code it yourself, as shown by studies (e.g. referred to here: https://www.linkedin.com/pulse/vibe-coding-myth-when-feeling...).

    No, your vibe-coding is not more productive, unless your only metrics for productivity are commit counts, PR counts, and deployment counts. I can commit, PR, and deploy crap all day long and "score well" - and this is what people are clinging to with their AI-gen defenses. I'm really sorry to inform you that your experienced "speed-up" is just a trick of the brain (recalling an article written, IIRC, by Gurwinder, but I'm having trouble finding it now) - you're actually going slower, and your brain is tricking you into thinking it's faster because while the AI was "coding", you didn't have to, so it feels like more of a win than it actually is, considering the results.

  • Ekaros 2 days ago
    I can't actually be bothered myself either...

    Did take a look at a Gemini result, but it was different from the immediate search results under it, which didn't leave a lot of confidence that it gets even the simplest things right.

  • sph 2 days ago
    I don’t and I won’t. My large clients do not care. The day they start to require any of that nonsense, I’ll drop them as a client. Simple as that.

    I have decided to be radical about AI and LLMs: I don't like them because they are a waste of time, and I would like them even less if they were this magical world-changing technology people want us to believe. I am at a point in my career where concerns about productivity or how to operate in large-scale tech companies are the least of my problems, while I increasingly appreciate the artistic craft of programming and computers, especially at small scale, to improve our lives rather than accumulate profit. So while I could admit LLMs have their uses, I want to consciously follow a path where human intelligence and well-being are of the utmost concern, and any attempt at creating intelligent machines is tantamount to blasphemy.

    Under this philosophy, seeing that all the talk about imminent AGI has led to creating spam and porn at large scale, I can only roll my eyes and remain on the sidelines while we continue down this idiotic path of resource extraction and devaluation of human ingenuity for the profit of the few.

    • propablythrown 2 days ago
      I use llms everyday for coding assistance the same way I used search engines in the past, and to that extent I do not see an issue. Why would you avoid that out of principle?
      • sph 1 day ago
        For the same reason Richard Stallman only uses free software. Sometimes, it's good to have a moral imperative and stick to it. This is mine.

        I'm far enough in my career to know that avoiding coding assistance or LLM-assisted "search" won't make my life or craft worse in any way. Quite the opposite, in fact.

        • bhag2066 1 day ago
          Shouldn't you refuse to use anything but pencil and paper by that logic? An abacus? No, not that, that's technology. Only your fingers? The godhead resides equally in the petal of a flower, the gears in an engine, the human-typed code on servers, as well as the machine-generated code on the very same servers.
        • propablythrown 1 day ago
          You might be right, as we don't know yet if these tools actually enhance or make our craft better in the long term.
    • sexyman48 2 days ago
      > I'll drop them as a client

      Not unless they drop you first.

  • incomingpain 2 days ago
    >Am I using the tools wrong or are others finding the same thing?

    Like any new tool, there is a learning curve. The curve is rather steep right now, with the horizon changing too quickly. The right tool also matters a great deal; right now you can run a model at home on 32 GB of VRAM that's objectively better than GPT-3.5 from 2023 or Grok 2.

    >Plus, the time I supposedly save building things gets eaten up debugging, correcting, improving the AI-generated slop.

    Those complaining about AI slop are almost certainly complaining about a lack of prompt-engineering skills.

    Let me also explain the proper evolution here.

    In 2021, you would go to Stack Overflow, copy some of your code or ask a question, and hopefully someone helped you at some point. Then you'd get the help and probably paste their code in.

    In 2024, you would go to an AI, copy some of your code, ask a question, and the AI responds quickly. The solution might be bad or buggy, and so you reprompt because your first prompt wasn't engineered well. You finally get good code and copy and paste.

    In 2025, why all this copy and paste? Why not go agentic, where the tool does the copy and paste for you? It knows what to read and what to do.

    Also in 2025: what if you have AI also orchestrating one level higher, verifying that it is itself doing a good job?

    • wara23arish 2 days ago
      One is passively receiving the answer and the other is actively reading and comparing multiple choices.

      If you were the type that would just copy-paste whatever came up first, then yeah, it's just quicker to do it that way.

    • jf22 2 days ago
      You forgot 2023, where you'd get half-lucid generated unit tests.
  • yodsanklai 19 hours ago
    In my company (big tech), the tools are integrated into our environment, most people use them, and actually 20-30% of the new code in prod is generated by AI. I was skeptical, and I don't like this new world, but it's happening.

    I don't know what will be the final form of this, how our jobs will be impacted, and how much more productive we really are with the tool. But it's not a hype, these tools are here to stay and have changed the way we code. I don't think they will replace coders but they will make the best programmers more efficient.

    As you said, it's easy to lose time with the generated slop, but someone who uses the tools wisely is more efficient.

  • supernes 2 days ago
    Every single time I try to use it for research or learning, it ends up spitting out subtly invalid code. Results range from imaginary APIs that don't exist to straight-up invalid syntax, not to mention outdated info, contradictory reasoning, and flat-out wrong explanations.

    Maybe spending $200/mo or whatever to access the top-of-the-line models would mitigate some of that, but I'd rather come up with the solution and gain the understanding myself, then spend the money on something worthwhile.

  • redhale 1 day ago
    Yes, you are using the tools wrong.

    These tools are _hard_ to use well. Very hard. Deceptively hard. So hard that smart engineers bounce off of them believing them to be all smoke and hype. But if you study the emerging patterns, experiment, and work through the difficulty and rough edges, you will come out the other side with a new skill that will have you moving faster than you believed possible.

    There are people who will think I'm either lying or delusional. It's fine. They just haven't made it through the fog yet. They'll get there eventually.

  • brettkromkamp 2 days ago
    It's a mixed bag. It depends on your problem domain, the problem you are trying to solve (within that domain), the context you provide the LLM, the output it generates (you are using libraries to coerce the output into predictable (JSON) structures, right?). What's more, based on what you are trying to do, the LLM you are using might have sufficient training data, but not necessarily so (resulting in possible hallucinations/confabulations). So, there's that. Also, LLMs are not deterministic, they can (and will) generate a different response every time you call them (even if the context you provided is the same). So, yeah... sometimes these things really deliver and other times, it's just... meh!
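
    The "coerce the output into predictable (JSON) structures" point can be done with dedicated libraries, or with plain stdlib validation and a retry, as in this sketch. `call_model` is a stand-in for a real LLM client, scripted so the first reply is malformed, mimicking the non-determinism described above:

```python
import json

# Sketch of coercing LLM output into a predictable JSON structure.
# `call_model` is a stand-in for a real client: the first scripted reply
# is malformed, the second is valid, mimicking run-to-run variation.
REPLIES = [
    'Sure! Here is the JSON: {"title": ...',          # chatty, unparseable
    '{"title": "Widget", "tags": ["a", "b"]}',        # valid structure
]

def call_model(prompt: str, attempt: int) -> str:
    return REPLIES[attempt]

def ask_structured(prompt: str, required: set, max_attempts: int = 3) -> dict:
    for attempt in range(max_attempts):
        try:
            data = json.loads(call_model(prompt, attempt))
        except json.JSONDecodeError:
            continue                    # malformed output: re-prompt
        if required <= data.keys():     # minimal schema check: keys present
            return data
    raise ValueError("no valid structured reply")

doc = ask_structured("Describe the product as JSON", {"title", "tags"})
```

    A retry-with-validation loop like this doesn't make the model deterministic, but it does make the shape of what reaches the rest of your program predictable.
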
  • curvaturearth 2 days ago
    Yep agreed
  • ajay_as 2 days ago
    [dead]