You’re absolutely right and I think more people need to understand this. What we now call “AI” refers to a lot of things that are not new and have been happening for decades, just without the complete and callous disregard for humanity that the current AI companies are exemplifying.
The thing is, these techniques didn’t use to be called AI; nobody pretended machine learning models were intelligent. People really need to start learning the terminology around these technologies, because it matters, even if you hate them. You might even learn that you don’t have to hate all of them, because they’re not all the same. “AI” is an ugly label being painted onto everything in software nowadays with broad strokes. A lot of that is deserved, but there is significant room for nuance here, and people should work toward a more nuanced understanding of the topic than they currently have. AI is not LLMs, which are not GPT, which is not computer vision, which is not machine learning, which is not agentic coding, which is not tool usage, which is not generative AI, which is not chatbots, which is not sentience.
“ALL AI: BAD!” is the logic of a simpleton. Don’t be a simpleton. Educate yourself, and begin to understand what about it is bad, because there is plenty about it that is very bad indeed. This technology is going to be transformative whether you love it or hate it. Even if it’s 100% terrible (and honestly it’s not) you still need to know your enemy. Trying to fight against something you don’t understand is the first step to losing.
The whole concept of AI is evolving. When OCR was new, that was cutting edge AI. Nowadays, even your phone can pull text out of a photo. It’s still based on neural networks, but people don’t really think of it as AI any more.
What about the selective background blurring during a video call? Also AI. What about frame generation or resolution upscaling? Also AI. There are lots of examples like this, and people don’t really think of them as AI any more.
The actual issue is that they do think it’s AI, meaning they think the phone is running a local LLM and sending the image to it so it magically does the OCR or some shit.
They don’t know that there are specialized neural networks, whether CNNs for image processing or RNNs for language processing. And frankly they don’t need to know; it’s not their purview, as long as they don’t talk shit as if they knew 😅
Oh yeah, there’s so much AI in everything we do nowadays. For years now, Apple has been casually mentioning AI in pretty much every WWDC. Until recently, all of it has been easy-to-ignore type of AI. If it doesn’t generate images or text, it doesn’t really feel like “real AI”, and consequently, tends to fly under the radar.
If you don’t want to feed your texts to the corpus for AI training, you’d better go live in a forest, because anything that appears on the internet inevitably ends up in a training corpus.
If your distaste is specifically about translation apps, then you can self-host an LLM, even on your phone, and use it for translation. Even a really small one would be fine; language tasks like translation are exactly what these models were originally designed for, even though everyone seems to forget this.
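To make the “self-host a small model for translation” point concrete, here is a minimal sketch using the Hugging Face `transformers` library with the Helsinki-NLP opus-mt family (small MarianMT translation models that run entirely on-device once downloaded). Treat the specific library and model choice as an assumption for illustration, not as what the commenter had in mind.

```python
# Sketch: on-device translation with a small open model.
# Assumes the `transformers` package and the Helsinki-NLP/opus-mt
# checkpoints (small MarianMT models); nothing leaves your device
# after the one-time model download.

def model_for(src: str, dst: str) -> str:
    """Map a language pair to its opus-mt checkpoint name, e.g. en->de."""
    return f"Helsinki-NLP/opus-mt-{src}-{dst}"

def translate(text: str, src: str = "en", dst: str = "de") -> str:
    """Translate text locally using the matching opus-mt model."""
    # Imported lazily so the pure helper above works without transformers.
    from transformers import pipeline
    translator = pipeline("translation", model=model_for(src, dst))
    return translator(text)[0]["translation_text"]
```

Usage would be something like `translate("Hello, world!", "en", "de")`; the first call downloads the model, and every call after that is fully offline.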
Kagi Translate to DeepL.
But the first thing I recommend, is to replace TikTok with no TikTok.
And Roblox with no Roblox.
There’s also Loops as an alternative.
I’m not interested in having my texts used to train AI; are there any other options?
All translation services use AI, and have done for decades. There’s pretty much no other way to do it.
Diabolical to hit them with the “You’re absolutely right”
I’m glad it wasn’t lost on people. Still gotta have some fun sometimes, especially when I’m pissed off about the state of the world.
The one baked into Firefox downloads small, specialized models and runs them on your device.
https://f-droid.org/packages/dev.davidv.translator
Offline on-device translator. Also uses AI because what else?
A self-hosted AI?