- cross-posted to:
- bicycles@lemmy.ca
I’ve been lucky enough in the past few years to have ridden my bike in a bunch of cities all over the world. Looking back, here are lessons about urban cycling I learned from 12 different ones. Thi…



But it does harm.
The ridiculously extreme power and water consumption for your translation alone does harm.
The hardware requirements for AI have driven hardware costs through the roof; that's harm.
The incorrect results and constant hallucinating feed people bad information. For this transcript, you'd have to check it by reading it while listening to the video. At that point, you might as well write it yourself.
AI harms
That’s a common misconception. The consumption for running an LLM once it exists (as opposed to training it) is actually astonishingly low. There are portals that let you calculate it (mainly for the open models, for which data is available). I did that for an estimated 20,000 tokens (don’t have the original count any more) and a fitting model. It would have consumed ~0.5 Wh of energy.
For comparison: actually streaming the 18-minute YouTube video just once would consume an estimated 250 Wh, so about 500 times as much energy as my AI summary (based on the revised Shift Project study of streaming energy consumption).
So in this case, it prevented harm by giving people (including me) the option not to watch the video.
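The comparison above boils down to simple arithmetic. Here is a quick sketch using the figures from the comment (the 0.5 Wh inference estimate and the ~250 Wh streaming estimate are the commenter's rough numbers, not measured values):

```python
# Back-of-the-envelope energy comparison from the comment's figures.
# All numbers are rough estimates pulled from the discussion above.

tokens = 20_000            # estimated token count of the summary job
llm_energy_wh = 0.5        # estimated inference energy for those tokens
stream_energy_wh = 250.0   # ~18 min of video streaming (revised Shift Project estimate)

ratio = stream_energy_wh / llm_energy_wh
per_token_wh = llm_energy_wh / tokens

print(f"LLM inference: {llm_energy_wh} Wh for {tokens} tokens "
      f"({per_token_wh * 1000:.3f} mWh per token)")
print(f"Streaming once: {stream_energy_wh} Wh")
print(f"Streaming uses ~{ratio:.0f}x the energy of the summary")
```

Of course, the conclusion is only as good as the two input estimates, and streaming-energy figures in particular vary a lot between studies.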
Concerning the hallucinations: we routinely use AI summaries of meeting transcripts at work and have never stumbled across hallucinations in this type of task.
(It's a completely different story when asking it for intrinsic knowledge, though, where hallucinations occur regularly.)
As for the hardware costs: using open models doesn't really have an impact on those, as long as you stay away from the big players.