Is there any way to make it use less power as it gets more advanced, or will there be huge power plants dedicated to AI all over the world soon?

  • calamityjanitor@lemmy.world · 15 points · 2 days ago

    OpenAI noticed that Generative Pre-trained Transformers get better when you make them bigger. GPT-1 had 120 million parameters. GPT-2 bumped it up to 1.5 billion. GPT-3 grew to 175 billion. Now we have models with over 300 billion parameters.

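    For a rough sense of scale, here's a back-of-envelope sketch of how much memory it takes just to hold weights at those sizes (the 2-bytes-per-parameter figure is an assumption for fp16 weights, not something stated above):

    ```python
    # Back-of-envelope memory footprint for holding model weights.
    # Assumes 2 bytes per parameter (fp16/bf16), a common but not universal choice.
    BYTES_PER_PARAM = 2

    models = {
        "GPT-1": 120e6,
        "GPT-2": 1.5e9,
        "GPT-3": 175e9,
        "300B-class": 300e9,
    }

    for name, params in models.items():
        gib = params * BYTES_PER_PARAM / 2**30
        print(f"{name:>10}: {params:>15,.0f} params -> ~{gib:,.1f} GiB of weights")

    # GPT-3 alone works out to ~326 GiB just for weights, so it has to be
    # split across several accelerators before it can generate a single word.
    ```
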
    To generate each word, the model has to do math with every one of those parameters, which at today's sizes is a massive amount of work, running on the most power-hungry, top-of-the-line chips.

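    A minimal sketch of that per-word cost, assuming the common rule of thumb of roughly 2 floating-point operations per parameter per generated token; the chip throughput and power figures below are illustrative assumptions, not measurements:

    ```python
    # Rough inference cost per generated token: ~2 FLOPs per parameter
    # (one multiply and one add for each weight), a common rule of thumb.
    PARAMS = 175e9              # GPT-3-sized model
    flops_per_token = 2 * PARAMS

    # Illustrative accelerator figures (assumed, roughly a current top-end chip):
    CHIP_FLOPS_PER_SEC = 1e15   # ~1 PFLOP/s of dense low-precision throughput
    CHIP_WATTS = 700            # power draw under load

    seconds_per_token = flops_per_token / CHIP_FLOPS_PER_SEC
    joules_per_token = seconds_per_token * CHIP_WATTS

    print(f"{flops_per_token:.2e} FLOPs per token")
    print(f"~{joules_per_token:.2f} J per token at ideal utilization")
    # ~0.25 J per token sounds tiny, but multiplied by billions of tokens a day
    # across all users, the data-centre power draw adds up fast.
    ```
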
    There are efforts to make smaller models that are still effective, but you still need something in the 7-30 billion parameter range to get anything useful out of them.
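
    One reason those smaller models matter: with aggressive quantization, a 7 billion parameter model can fit on a single consumer GPU. A quick sketch of that arithmetic (the bit-widths are common choices, not tied to any particular model):

    ```python
    # Weight memory for a 7B-parameter model at different precisions.
    PARAMS = 7e9

    for label, bits in [("fp16", 16), ("int8", 8), ("int4", 4)]:
        gib = PARAMS * bits / 8 / 2**30
        print(f"{label}: ~{gib:.1f} GiB of weights")

    # At 4-bit precision that's ~3.3 GiB, small enough for a consumer GPU,
    # which is why the 7B range is roughly the floor for "useful".
    ```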