While “prompt worm” might be a relatively new term we’re using related to this moment, the theoretical groundwork for AI worms was laid almost two years ago. In March 2024, security researchers Ben Nassi of Cornell Tech, Stav Cohen of the Israel Institute of Technology, and Ron Bitton of Intuit published a paper demonstrating what they called “Morris-II,” an attack named after the original 1988 worm. In a demonstration shared with Wired, the team showed how self-replicating prompts could spread through AI-powered email assistants, stealing data and sending spam along the way.

Email was just one attack surface in that study. With OpenClaw, the attack vectors multiply with every added skill extension. Here’s how a prompt worm might play out today: An agent installs a skill from the unmoderated ClawdHub registry. That skill instructs the agent to post content on Moltbook. Other agents read that content, which contains specific instructions. Those agents follow those instructions, which include posting similar content for more agents to read. Soon it has “gone viral” among the agents, pun intended.

There are myriad ways for OpenClaw agents to share any private data they may have access to, if convinced to do so. OpenClaw agents fetch remote instructions on timers. They read posts from Moltbook. They read emails, Slack messages, and Discord channels. They can execute shell commands and access wallets. They can post to external services. And the skill registry that extends their capabilities has no moderation process. Any one of those data sources, all processed as prompts fed into the agent, could include a prompt injection attack that exfiltrates data.

  • suicidaleggroll@lemmy.world
    link
    fedilink
    English
    arrow-up
    29
    arrow-down
    1
    ·
    7 hours ago

    Clawdbot, OpenClaw, etc. are such a ridiculously massive security vulnerability, I can’t believe people are actually trying to use them. Unlike traditional systems, where an attacker has to probe your system to try to find an unpatched vulnerability via some barely-known memory overflow issue in the code, with these AI assistants all an attacker needs to do is ask it nicely to hand over everything, and it will.

    This is like removing all of the locks on your house and protecting it instead with a golden retriever puppy that falls in love with everyone it meets.

    • TheFogan@programming.dev
      link
      fedilink
      English
      arrow-up
      3
      ·
      2 hours ago

      You know, in IT security, the weakest link will always be the users… they are easy to fool, they just blindly trust whatever you tell them.

      But now, thanks to AI, computers will finally catch up to humans in their ability to be tricked. No longer will you need human users to set their password to easy things to remember. Our new AIs will actually be capable of shortening their encryption key to a common name, and leaving them on post it notes on their desks.

    • XLE@piefed.social
      link
      fedilink
      English
      arrow-up
      13
      ·
      6 hours ago

      Have you tried asking the puppy to be a better guard dog? That’s how the AI safety professionals do it.

  • KoboldCoterie@pawb.social
    link
    fedilink
    English
    arrow-up
    19
    arrow-down
    3
    ·
    7 hours ago

    If AI agents stick around, I feel like they’re going to be the thing millennials as a generation refuse to adopt and are made fun of for in 20-30 years. Younger generations will be automating their lives and millennials will be the holdouts, writing our emails manually and doing our own banking, while our grandkids are like, “Grandpa, you know AI can do all of that for you, why are you still living in the 2000s?” And we’ll tell stories about how, in our day, AI used to ruin peoples’ lives on a whim.

    • FlashMobOfOne@lemmy.world
      link
      fedilink
      English
      arrow-up
      15
      ·
      6 hours ago

      By definition, having one’s life automated means not knowing how to do anything, and that is very strongly reflected in the younger generation right now if you know any educators. “Why do I need to learn this if an AI can do it?” is a common refrain in their classes.

      It’s not the life for me.

      • Lfrith@lemmy.ca
        link
        fedilink
        English
        arrow-up
        2
        ·
        2 hours ago

        Yeah, it’s like consoles vs PCs. Those who are hardcore PC prefer it due to all the flexibility it provides while hardcore console people find PC too troublesome and complicated.

        Which is also the case for smart phones vs PCs where PC is too complicated in that aspect too with people preferring easy to use sandbox and don’t even know what a file explorer is.

        This is one of those cases where as opposed to people not adopting new tech because they are less educated like old people had trouble comprehending the Internet its more tech and privacy educated individuals being aware of the risks. And even if they use AI they’d opt for a locally run open source instance over the corporate provided ones the masses flock to.

        Like people who set up their own personal security camera system versus those who mindless pick up a Blink camera without a second thought.

      • FlashMobOfOne@lemmy.world
        link
        fedilink
        English
        arrow-up
        3
        ·
        6 hours ago

        They will, unfortunately, be radicalized by AI slop in ways we can’t currently conceive of. The stupidity and ignorance will be a huge problem in decades to come.

  • morto@piefed.social
    link
    fedilink
    English
    arrow-up
    10
    ·
    7 hours ago

    I’m eager for companies to put ai agents in customer support, so I can try tricking the system with “my grandmother” prompts to make it refund all my orders

    • FlashMobOfOne@lemmy.world
      link
      fedilink
      English
      arrow-up
      4
      ·
      6 hours ago

      I actually got a sick discount from Mattress Firm a few years ago just by asking their chatbot if it could give me a better deal on a mattress I wanted.

      • TheFogan@programming.dev
        link
        fedilink
        English
        arrow-up
        1
        ·
        edit-2
        2 hours ago

        Did they actually honor it? I recall quite a few people tricking AIs into like, saying they will sell a car for $1, but the company not honoring it.

        Or is it likely just car salesman negotiation tactics… IE the matress is actually inflated 75%, AI is given a hard minimum of how low it actually can go, but obviously instructed to do everything possible to close the sale but at the highest price the user will be willing to pay.

        Holy frick, actually that sounds like the real hell now that I think of it. Will AI bring haggle pricing to online stores. We have to spend 20 minutes trying to give a story to an AI to get the best price on, something… which of course will then lead to someone developing an AI for shoppers trained to haggle with these for them. End result we burn up an ocean, with 2 AI’s making up bogus stories about how badly they are suffering.