- cross-posted to:
- bicycles@lemmy.ca
I’ve been lucky enough in the past few years to have ridden my bike in a bunch of cities all over the world. Looking back, here are lessons about urban cycling I learned from 12 different ones. Thi…



Personally I agree with his assessment of bicycling here in Germany where I live.
And for all of you who prefer reading to watching videos, I had an AI generate a transcript summary.
1. Montreal: Small changes build over time
2. Paris: It is about priorities, not space
3. New York: Political courage is essential
4. London: Benefits extend beyond transportation
5. Edmonton: Better cities can happen anywhere
6. Seattle: Think creatively about space
7. Oulu, Finland: You can build winter cycling cities
8. Ottawa & Washington D.C.: Progress is local, not national
9. Victoria: You can build a family bike city
10. Berlin & Hamburg: Normalizing cycling is not enough
11. Calgary: Build on what you have
We don’t like cars
Probably most of us don’t like AI that much either. Thank you for the transcript; in principle I’m fine with it, but since it’s AI-generated, I have no idea whether it’s correct or not.
I totally understand that.
But I personally like watching videos (instead of reading about the related topics) even less than I like using AI.
So for me summarizing transcripts is the lesser evil.
And if I do it anyway, I might as well share the result and spare people with similar views the effort.
Does no harm and might help.
But it does harm
The ridiculously extreme power and water consumption for your transcript alone does harm.
The hardware requirements for AI have driven hardware costs through the roof; that’s harm.
The incorrect results and constant daydreaming give people bad information. For this transcript, you’d have to check it by reading it while listening to the video. At that point, you might as well write it yourself.
AI harms
That’s a common misconception. The energy consumption for running an LLM once it exists (as opposed to training it) is actually astonishingly low. There are portals that let you calculate it (mainly for the open models, for which data is available). I did that for an estimated 20,000 tokens (I don’t have the original count any more) and a fitting model. It would have consumed ~0.5 Wh of energy.
For comparison: streaming the 18-minute YouTube video just once would consume an estimated 250 Wh, so about 500 times as much energy as my AI summary (based on the revised Shift Project study on streaming energy consumption).
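A quick back-of-envelope sketch of that comparison, using only the figures quoted above (both are rough estimates, not measurements):

```python
# Back-of-envelope check of the energy comparison.
# All figures are the estimates quoted in the comment, not independent data.

tokens = 20_000          # estimated token count of the summary job
llm_energy_wh = 0.5      # estimated inference energy for those tokens

video_minutes = 18       # length of the YouTube video
streaming_wh = 250       # estimated energy to stream it once
                         # (per the revised Shift Project figures)

ratio = streaming_wh / llm_energy_wh
print(f"Streaming once uses ~{ratio:.0f}x the energy of the summary")
# → Streaming once uses ~500x the energy of the summary
```

With these inputs the ratio comes out to exactly 500; different per-token or per-minute estimates would shift it, but the gap stays large.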
So in this case it prevented harm, by giving people (including me) the option not to watch the video.
Concerning the hallucinations: we routinely use AI summaries of meeting transcripts at work, and I’ve never stumbled across hallucinations for this type of task.
(It’s a completely different story when asking it for intrinsic knowledge, though, where hallucinations occur regularly.)
As for the hardware costs: Using open models doesn’t really have an impact on those, as long as you stay away from the big players.