• @thesmokingman@programming.dev
    13 days ago

    Not yet, not for some time, and certainly not on a single local GPU running at minimal utilization. Both you and the commenter I was responding to seem to forget how massive the @home projects were.

    • @nagaram@startrek.website
      32 days ago

      For simple productivity tasks like Copilot or text generation like ChatGPT?

      It absolutely is doable on a local GPU.

      Source: I do it.

      Sure, I can’t run simulations to discover new drugs or sequence proteins or whatever. But it helps me code. It helps me digest software manuals. That’s honestly all I want.

      Also, aren’t massive compute projects like the @home ones a good thing?

      Local LLMs run fine on a five-year-old GPU, a 3060 with 12 GB of VRAM. I’m getting performance on par with cloud-hosted models. I’m upgrading to a 5060 Ti just because I want to play with image gen.
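
      For anyone curious what “it helps me code / digest manuals” looks like in practice, here’s a minimal sketch. It assumes the model is served locally with Ollama; the commenter never names their stack, so the endpoint, model name, and example prompt here are all illustrative, not their actual setup:

      ```python
      # Hypothetical example: query a locally hosted LLM over Ollama's
      # default HTTP API. Assumes `ollama serve` is running and the model
      # below has been pulled (e.g. `ollama pull llama3:8b`).
      import requests

      OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint


      def ask_local_llm(prompt: str, model: str = "llama3:8b") -> str:
          """Send a prompt to the local model and return its full reply."""
          resp = requests.post(
              OLLAMA_URL,
              json={"model": model, "prompt": prompt, "stream": False},
              timeout=300,  # local generation can be slow on older GPUs
          )
          resp.raise_for_status()
          return resp.json()["response"]


      if __name__ == "__main__":
          # The "digest software manuals" use case from the comment above.
          print(ask_local_llm("Summarize what `rsync -avz` does."))
      ```

      Nothing exotic is needed: a 12 GB card comfortably holds a quantized 7B–8B model, which is plenty for this kind of Q&A-style use.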