A robot trained on videos of surgeries performed a lengthy phase of a gallbladder removal without human help. The robot operated for the first time on a lifelike patient, and during the operation, responded to and learned from voice commands from the team—like a novice surgeon working with a mentor.

The robot performed unflappably across trials, matching the expertise of a skilled human surgeon even during the unexpected scenarios typical of real-life medical emergencies.

  • BrianTheeBiscuiteer@lemmy.world · 3 days ago

    My son’s surgeon told me about the evolution of one particular cardiac procedure. Most of the “good” doctors were laying many stitches in a tight fashion while the “lazy” doctors laid down fewer stitches a bit looser. Turns out that the patients of the “lazy” doctors had a better recovery rate so now that’s the standard procedure.

    Sometimes divergent behaviors can actually lead to better outcomes. An AI surgeon that is “lazy” probably wouldn’t exist; engineers would stamp out that behavior before it ever got to the OR.

    • Tattorack@lemmy.world · 3 days ago

      That’s just one case of professional laziness in an entire ocean of medical horror stories caused by the same.

      • snooggums@lemmy.world · 3 days ago

        Or, more likely, they weren’t actually being lazy: they knew they needed to leave room for swelling and healing. The surgeons who did tight stitches thought theirs were better because they looked better immediately after the surgery.

        Surgeons are actually pretty well known for being arrogant, and claiming anyone who doesn’t do their neat and tight stitching is lazy is completely on brand for people like that.

      • BrianTheeBiscuiteer@lemmy.world · 3 days ago

        Eliminating room for error (not to say AI is flawless, but that is the goal in most cases) is a good way to never learn anything new. I don’t completely dislike this idea, but I’m sure it will be driven toward cutting costs, not saving lives.

    • 𝕛𝕨𝕞-𝕕𝕖𝕧@lemmy.dbzer0.com · 3 days ago

      i mean, you could just as easily say professors and university would stamp those habits out of human doctors, but, as we can see… they don’t.

      just because an intelligence was engineered doesn’t mean it’s incapable of divergent behaviors, nor does it mean the ones it displays are of intrinsically lesser quality than those a human in the same scenario might exhibit. i don’t understand this POV because it’s the direct opposite of what most people complain about with machine learning tools… first they’re so non-deterministic as to be useless, but now they’re so deterministic as to be entirely incapable of diverging from their habits?

      setting aside that i just kind of disagree with your overall premise (that’s okay, that’s allowed on the internet, and we can continue not hating each other!), i find this “contradiction,” if you can even call it that, pretty funny to see pop up out in the wild.

      thanks for sharing the anecdote about the cardiac procedure, that’s quite interesting. if it isn’t too personal to ask, would you happen to know the specific procedure implicated here?

        • BrianTheeBiscuiteer@lemmy.world · 3 days ago

        Not specifically, but I think the guidance is applicable to most incisions of the heart. The fact that it’s a muscular and constantly moving organ makes it different from something like an epidermal stitch.

        And my post isn’t to say “all mistakes are good” but that invariability can lead to stagnation. AI doesn’t do things the same way every single time, but it also doesn’t aim to “experiment” as a way to grow, or to self-reflect on its own efficacy (which could lead to model collapse). That’s almost at the level of sentience.