I got a copy of the text from the email, and added it below, with personal information and link trackers removed.
Hello [receiver’s name],
I’ve long dreamed about working for Mozilla. I learned how to send encrypted e-mail using Mozilla Thunderbird, and I’ve been a Firefox user since almost as long as I can remember. In more recent years, I’ve been an avid follower of Mozilla’s advocacy work, and was lucky enough to partner with Mozilla on investigative journalism in my last job.
In many ways, Mozilla was the dream – and now, as the leader of the Foundation, my job is to make my dreams for Mozilla come true. What that means, though, is making your dreams come true – for a trustworthy and open future of technology; for tech that is a tool for liberation, not limitation; and for tech that values people over profit.
So I’m reaching out to technologists, activists, researchers, engineers, policy experts, and, most importantly, to you – the people who make up the Mozilla community – to ask a simple question.
[receiver’s name]. What is your dream for Mozilla? I invite you to take a moment to share your thoughts by completing this brief survey.
Let’s start with this question:
Question 1: What is most important to you right now about technology and the internet?
- Protecting my privacy online
- Avoiding scams
- Choosing products, apps, technology, and services that I can trust
- Keeping children safe online
- Responsible use of AI
- Keeping the internet open and free
- Knowing how to spot misinformation
- Other (please specify)
With your help, together we can imagine and create the Internet we want. Thank you for being a part of this.
Always yours,
Nabiha Syed
Executive Director
Mozilla Foundation
They seem to have a foregone conclusion that AI is a positive thing, rather than something that should be eradicated like smallpox or syphilis.
“Responsible use of AI” could mean things like providing small offline models for client-side translation. They’re actually building that feature and the preview is already amazing.
Not just building it; it’s shipping by default. That is, the language detection and the code that displays a popup asking whether you want to download the actual translation model ship by default. Each model is about twelve megabytes, so 24 MB for a language pair.
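For a sense of scale: the translation models themselves are small neural networks, but the detection step can be sketched with plain character statistics. Here’s a toy trigram-based language detector in Python. This is purely my own illustration of the idea, not Firefox’s actual implementation (which uses compact ML models):

```python
from collections import Counter

def trigrams(text: str) -> Counter:
    """Count overlapping character trigrams in lowercased text."""
    t = text.lower()
    return Counter(t[i:i + 3] for i in range(len(t) - 2))

def similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two trigram frequency profiles."""
    shared = set(a) & set(b)
    dot = sum(a[g] * b[g] for g in shared)
    norm = (sum(v * v for v in a.values()) * sum(v * v for v in b.values())) ** 0.5
    return dot / norm if norm else 0.0

# Tiny "training" corpora standing in for real per-language models.
PROFILES = {
    "en": trigrams("the quick brown fox jumps over the lazy dog and the cat"),
    "de": trigrams("der schnelle braune fuchs springt ueber den faulen hund"),
}

def detect_language(text: str) -> str:
    """Return the language whose trigram profile best matches the text."""
    probe = trigrams(text)
    return max(PROFILES, key=lambda lang: similarity(probe, PROFILES[lang]))
```

A real detector uses much larger profiles (or a small classifier model), but the principle is the same, and nothing about it requires a server.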
IMO, there’s no such thing as responsible AI use. All of the uses so far are bad, and I can’t see any that would work as well as a trained human. Even worse, there’s zero accountability; when an AI makes a mistake and gets people killed, no executives or programmers will ever face any criminal charges because the blame will be too diffuse.
There is no gray. Only black and white!
So who should be held accountable when (mis)use of AI results in a needless death? Or worse?
Let’s say a company creates an AI taxi that runs you over, leaving you without legs. Who are you going to sue?
“Oh it’s grey, so I’ll have a dollar from each shareholder.” That doesn’t sound right to me.
Who’s getting killed because of the “translate page” button in my browser?
I hate AI as much as the next AI-sceptic, but that argument is just nonsense. We already have plenty of machinery and other company-owned assets that could injure a human being without direct human intervention causing the injury. Every telephone pole rotting through and falling on someone would legally be a similar situation.
I’m no AI enthusiast, but this is clear hyperbole. Of course there are uses for it; it’s not magic, it’s just technology. You’ll have been using some of them for years before the AI fad came along and started labelling everything.
Translation services are a good example. Google Translate and Bing Translate have both been using machine learning neural networks as their core technology for a decade and more. There’s no other way of doing it that produces anything close to as good a result. And yes, paying a human translator might get you good results too, but realistically that’s not a competitive option for the vast majority of uses (nobody is paying a translator to read restaurant menus or train station signage to them).
This whole AI assistant fad can do one as far as I’m concerned, but the technologies behind the fad are here to stay.
Actually, the AI assistant fad isn’t all bad.
HomeAssistant has an open source assistant pipeline that integrates into the most flexible smart home software around. It is completely local and doesn’t rely on the cloud at all. Essentially it could make Alexas and Google Homes (which literally spy on you and send key phrases back to build your data collection profile) obsolete. That’s a way to have a good smart home without relying on corporate bullshit privacy invasion.
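The “completely local” part of a pipeline like that boils down to speech-to-text, intent matching, and a response, all on-device. Here’s a toy intent matcher in that spirit; it’s my own Python sketch of the concept, not HomeAssistant’s actual API:

```python
import re

# Toy local intent matcher: regex pattern -> handler, no cloud round-trip.
INTENTS = {
    r"turn (on|off) the (\w+)": lambda on_off, device: f"{device} {on_off}",
    r"what time is it": lambda: "checking the clock",
}

def handle(utterance: str) -> str:
    """Match a transcribed utterance against known intents, entirely locally."""
    for pattern, handler in INTENTS.items():
        m = re.fullmatch(pattern, utterance.lower().strip())
        if m:
            return handler(*m.groups())
    return "sorry, I didn't catch that"
```

The real pipeline swaps the regexes for a proper intent model and feeds the result to your smart home, but the point stands: nothing here needs to phone home.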
Indeed, transcribing and translating (and preserving dying languages and being able to re-teach them) are two of the best consumer uses for AI. Then there is accelerating disease and climate research.
If these were the use cases that were pushed instead of fucking conversational assistants, replacements for customer support that only direct to existing incomplete docs, taking away artists’ jobs, and creating 1984 “you can’t trust your own eyes and ears” in real time, then AI would actually be very worthwhile.
The “translate page” button in my browser is evil? Get a grip.
There are valid uses for AI. It is much better at pattern recognition than people. Apply that to healthcare and it could be a paradigm shift in early diagnosis of conditions that doctors wouldn’t think to look for until more noticeable symptoms occur.
It already has been applied to healthcare, and nearly every other industry, and has been for more than a decade.
The current LLM hype is the only thing most people know of when they hear “AI”. Which is a shame.
Peak hype-based ignorance 🤣
Being this confident while not knowing that AI has been in use for more than a decade, and going off on a rant about AI mistakes when a defining feature of AI is solving problems that classical programming cannot (just without guaranteed results), is cringe AF
You’re going to upset a lot of chess players if you get rid of all AI.
It’s because it is a positive thing. Just because awful businesses hijacked and abused it doesn’t mean it’s all bad. Mozilla is approaching it in a positive way imo.
And what, exactly, is positive about it, that has no associated negative outcomes?
Gee I dunno. Maybe that you get to translate web pages without sending that data to Google?
Gee I dunno. Maybe blind people being able to browse the web better?
Gee I dunno. Maybe to stop people being scammed?
E: I’m assuming you downvoted because you hate privacy, blind people, not being able to scam people with fake reviews, or some combination of the above lol.
Specific to generative AI, I think client side generation can be a good thing, such as sentiment analysis or better word suggestions/autocomplete.
A number of other helpful tasks have negative outcomes, but if someone is going to use it, then I prefer they use the version of the tech that minimizes those negative outcomes. Whether Mozilla should be focusing on building that is a different matter, though.
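To make “client-side” concrete, here’s a toy lexicon-based sentiment scorer. It’s a sketch I made up for this comment, not any shipping Mozilla feature, but it shows how this kind of thing can run entirely locally, with no text ever leaving the machine:

```python
# Minimal lexicon-based sentiment scorer: everything runs locally.
POSITIVE = {"good", "great", "love", "amazing", "trustworthy", "open"}
NEGATIVE = {"bad", "awful", "hate", "scam", "broken", "invasive"}

def sentiment(text: str) -> str:
    """Classify text as positive/negative/neutral by counting lexicon hits."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"
```

A small on-device model would replace the word lists, but the privacy property is identical: the input never touches a server.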
AI that isn’t generative AI has a lot of positive uses, but usually that’s not what these discussions are about
Interpreting MRI scans?
Translating language?
Object detection on assembly lines?
Object detection to sort recycling?
Identifying disease markers?
Classifying data?
…etc
Things that it’s been used for for ages now, and has become ubiquitous for.
I for one do know that and am not against AI religiously and have used it to great effect and STILL DON’T WANT TO DOWNLOAD IT WITH MY BROWSER. Just make it an addon.
I mean, generally, it is.
It’s just that the uneducated masses don’t realize that “AI” outside of today’s LLMs has been improving our technological life for well over a decade now.
And so abused and misused for just as long. LLMs and the hype and slop are a relatively new thing; this is old, useful technology.
“Eradicated” is literally impossible; entire swathes of industries can only operate at the levels of efficiency they have come to rely on because of specialized models. And they have for ages now, long before the hype and slop started.
Not every model is an LLM 🤦