I’ve been lucky enough in the past few years to have ridden my bike in a bunch of cities all over the world. Looking back, here are lessons about urban cycling I learned from 12 different ones. Thi…

  • Phoenixz@lemmy.ca · 17 days ago

    We don’t like cars

    Probably most of us don’t like AI that much either. Thank you for the transcript; in principle I’m fine with it, but since it’s AI-generated I have no idea whether it’s correct or not.

    • Multiplexer@discuss.tchncs.de · 17 days ago

      I totally understand that.

      But I personally like watching videos (instead of reading about the related topics) even less than I like using AI.
      So for me summarizing transcripts is the lesser evil.

      And if I do it anyway, I might as well share the result and spare people with similar views the effort.
      Does no harm and might help.

      • Phoenixz@lemmy.ca · 6 days ago

        But it does harm

        The ridiculously extreme power and water consumption for your translation alone does harm.

        The hardware requirements for AI have driven hardware costs through the roof; that’s harm.

        The incorrect results and constant hallucinations give people bad information. For this transcript, you’d have to check it by reading it while listening to the video, and at that point you might as well write it yourself.

        AI harms

        • Multiplexer@discuss.tchncs.de · 6 days ago

          The ridiculously extreme power and water consumption for your translation alone does harm.

          That’s a common misconception. The consumption for running an LLM once it exists (as opposed to training it) is actually astonishingly low. There are portals that let you calculate it (mainly for the open models, for which data is available). I did that for an estimated 20,000 tokens (I don’t have the original count any more) and a fitting model: it would have consumed ~0.5 Wh of energy.

          For comparison: streaming the 18-minute YouTube video just once would consume an estimated 250 Wh, about 500 times as much energy as my AI summary (based on the revised Shift Project study on streaming energy consumption).
          So in this case it prevented harm by giving people (including me) the option not to watch the video.
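
          The comparison above can be sketched as a few lines of arithmetic. The figures are the commenter’s own estimates (0.5 Wh for the summary, 250 Wh for one stream), not measured values:

```python
# Energy comparison from the comment above.
# Both figures are the commenter's estimates, not measurements.
summary_wh = 0.5    # ~20,000 tokens through an existing open LLM (inference only)
stream_wh = 250.0   # streaming the 18-minute YouTube video once

ratio = stream_wh / summary_wh
print(f"Streaming uses ~{ratio:.0f}x the energy of the AI summary")
# → Streaming uses ~500x the energy of the AI summary
```
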

          Concerning the hallucinations: we routinely use AI summaries of meeting transcripts at work and have never stumbled across hallucinations in this type of task.
          (It’s a completely different story when asking the model for intrinsic knowledge, where hallucinations occur regularly.)

          As for the hardware costs: using open models doesn’t really have an impact on those, as long as you stay away from the big players.