We all know there’s a lot of hype and skepticism around AI, and over the last year or so I’ve been hearing a lot about “Agentic” AI. I’ve struggled to get a real grasp on what that means without working examples; however, I’ve begun to see hints of something: videos mocking coders who scroll their phones while waiting for the AI to complete a task, peers claiming Claude but not GPT can do complex reasoning and planning. Not much, but enough for me to stop dismissing the term as pure buzzword.

Agentic AI is defined as “autonomous systems that act independently to achieve complex, multi-step goals without continuous human oversight.” This sounds fanciful, but my basic understanding is that these agentic systems do the large-scale reasoning, then use other apps to achieve smaller sub-goals. Essentially, they let you set up pipelines as verbal lists of tasks, and they work their way through those tasks with some (perhaps limited) problem solving. A crucial aspect seems to be that the more tools you give the bot, the more it can do and the more failures it can handle. Sometimes more tools means a textbook or document about your work to help it reason and plan. Sometimes more tools means writing a script for it to use in future analyses.
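To make that concrete, here’s a minimal sketch of the loop I’m describing: the model looks at each task in a verbal list, picks a tool, and the result gets fed back in as context. The `fake_model` function is a hypothetical stand-in for a call to a real local LLM (e.g. via Ollama); the tool names are made up for illustration.

```python
def fake_model(task, context):
    """Stand-in for a local LLM call: maps a task description
    to a (tool_name, argument) pair. A real agent would prompt
    the model with the task plus accumulated context."""
    if "sum" in task:
        return ("calculator", context.get("numbers", []))
    return ("noop", None)

# The "tools" the agent is allowed to use -- toy examples here.
TOOLS = {
    "calculator": lambda nums: sum(nums),
    "noop": lambda _: None,
}

def run_agent(tasks, context):
    """Work through a verbal list of tasks, one tool call per task,
    feeding each result back into the shared context."""
    results = []
    for task in tasks:
        tool, arg = fake_model(task, context)
        result = TOOLS[tool](arg)
        context[task] = result  # later tasks can see earlier results
        results.append(result)
    return results

print(run_agent(["sum the numbers"], {"numbers": [1, 2, 3]}))  # [6]
```

The point isn’t the toy tools; it’s the shape: the model only ever *chooses and parameterizes* tools, and the loop does the rest.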

Now, while these sound mildly interesting, they’re essentially useless if they’re locked behind a paywall. I’m not paying some company to think poorly for me. Someone else’s tools are not an extension of my skills or personal power, since I’d be neither able nor willing to build on them. However, the notion of local Agentic AI changes this. If it’s on my computer, then even if I don’t fully understand what it’s doing, I can build on it. I can control it and treat it as an extension of myself – as humans do with all tools.

I’m a modest coder, and even basic AI has expanded my abilities there, just by helping me find algorithms I wouldn’t have known how to look for before. I have run local LLMs, but I’ve not tried these agentic LLMs. I worry I was unimpressed too quickly and gave up on a potentially useful tool. If I can tell the local agent to make a rough version of a function that does XXXX, then I can get more done. If I can tell it to write a simple script that builds a table I’d normally make by hand, check the script myself, then link that script to a command for a task I wouldn’t normally trust the AI with, then the AI can handle a larger chunk of my work. The more scripts I make, the more the AI can do. The more scripts I download from open source communities, the more the AI can do. I don’t have to trust the AI if all it’s doing is moving information around and triggering scripts; I just have to check the scripts. If we start adding in robotics… yeah, I can see the hype.
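The “check the scripts, not the AI” idea can be sketched as a whitelist dispatcher: the model’s output is treated as a *choice* from human-vetted scripts, never as code to execute. Everything here is a toy stand-in – the script names and the `model_choice` value represent what a local LLM might return.

```python
# Human-checked scripts the agent is allowed to trigger.
# In practice these might be shell scripts or functions you wrote
# and reviewed; here a toy CSV-table builder stands in.
APPROVED = {
    "make_table": lambda rows: "\n".join(
        ",".join(str(cell) for cell in row) for row in rows
    ),
}

def dispatch(model_choice, payload):
    """Run the script the model chose -- but only if a human
    already put it on the approved list."""
    if model_choice not in APPROVED:
        raise ValueError(f"refusing unvetted script: {model_choice}")
    return APPROVED[model_choice](payload)

print(dispatch("make_table", [[1, 2], [3, 4]]))  # 1,2\n3,4
```

Under this arrangement the trust boundary sits at the whitelist, which is exactly the “I just have to check the scripts” claim.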

Of course, the counter-argument is that we’ve had IFTTT triggers and pipelines for decades. So maybe this isn’t fundamentally new, but is it still an impetus to download more tools and build more pipelines? Will I fall behind if I don’t figure out how to use this efficiently and effectively (FOMO)? Does anyone here have experience with agentic LLMs (especially local)? Also, what’s the best Lemmy community for learning more about this sort of thing, and maybe also hooking it up to basic robots?

  • TootSweet@lemmy.world · 24 hours ago

    This post isn’t a question. It’s pushing an agenda. It’s barely even disguised.

    But I’ll answer as if the title of the post is the question:

    What role could Agentic AI have in the future?

    General-purpose AI isn’t able to function reliably without a human driving it. The hype you’ve heard about AI (“agentic” or otherwise) is just hype. It’s a bubble. The bubble will continue for a time and then pop. After which time, the term “Agentic AI” will be looked back on with embarrassment, similarly to the way we look back on the Beanie Baby craze of the late 1990s.

    How should individuals prepare?

    By avoiding getting entangled/invested in anything connected to the AI bubble.

    • tristynalxander@mander.xyz (OP) · edited · 22 hours ago

      I apologize if the context/background comes off as an agenda. It’s not that I’m trying to convince people one way or another, but I am concerned the anti-AI sentiment may be causing people to dismiss useful tools. I was attempting to provide some of my thoughts including that concern as context to my more general question of what I might need to do to properly utilize the tools.

      If it helps, I agree that you shouldn’t spend any money on anything AI. To me, most “generative” AI is like a programming package. NumPy is genuinely a big deal, and coding without it is foolish, but people who don’t code shouldn’t worry about it. It’s not yet clear to me where AI agents fall as a tool in the world, and I’m genuinely trying to work that out. It might be useful purely as a coding tool – at the very least I think I want to try it as a coding tool. I’m also a biologist, so I’m very keen to use robots to automate routine tasks – I’m not sure whether the AI will be a tool to build that automation or be a part of it.