• tias@discuss.tchncs.de · 3 hours ago (edited)

    I’ll confess I only skimmed the article, but it seems like just a bunch of unsubstantiated opinions and I don’t buy it.

    Using AI-generated code is like pair programming with a junior programmer. You tell the junior what to do and then correct their mistakes by telling them how to do it better. In my experience, explaining things to someone else makes you better at your craft. Typically this cycle ends with me changing the code manually, and then possibly feeding it back to ChatGPT for another round of changes.

    Apart from letting me realize and test my ideas more quickly, this lets me raise the abstraction level of my thinking. I can spend more time on architecture and on seeing the bigger picture, and less time being blinded by the nitty-gritty details. I would say it makes me both a faster and a better programmer.

    • Sage1918@lemmy.world · 2 hours ago

      Bugs never occur in high-level/big-picture land; they usually come up in low-level/implementation land. Should you entrust those to AI?

      • tias@discuss.tchncs.de · 31 minutes ago (edited)

        Only because bugs are defined as errors in implementation details. You can still have errors in your design (sometimes called design bugs).

        It’s not about “entrusting” anything to AI, any more than I would entrust important code to a junior developer and let them push to production on their own. We still have code review, pair programming, etc. As I said, I read the output code, point out issues with it, and in the end make manual adjustments to fit what I want. It’s just a way of building up the bulk of the code more quickly and then refining it.