Four months ago I asked if and how people used AI here in this community (https://lemmy.world/post/37760851).
Many people said they didn’t use it, or only consulted it a few times.
But in those 4 months AIs have evolved a lot, so I wonder: are there still people who don’t use AI daily for programming?
I use AI for small, atomic stuff that isn’t worth spending intellectual effort on.
Like “TypeScript. Find smallest element in an array.” “Python. Simulate keyboard event to avoid the computer going to sleep mode.” Or copy/pasting an error message because I missed an import and I just want to know which one.
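For a sense of scale, here’s what the first of those throwaway asks amounts to, sketched in Python rather than TypeScript (the explicit loop an LLM typically writes, even though the built-in `min` already does it; the function name is just for illustration):

```python
def smallest(xs):
    """Find the smallest element in a list.

    The kind of atomic, zero-intellectual-value snippet described above;
    Python's built-in min(xs) does the same thing in one call.
    """
    if not xs:
        raise ValueError("empty array")
    best = xs[0]
    for x in xs[1:]:
        if x < best:
            best = x
    return best
```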
I also use it sometimes for well-identified algorithms that could be interesting but are not the core of the problem. Like “C#. Clustering algorithm to group points together in a point cloud.”
The generated code is catastrophic in terms of performance/memory, but it’s good enough 80% of the time.
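To make that concrete, here’s a sketch of the sort of naive distance-threshold clustering an LLM tends to produce for that kind of prompt (in Python rather than C#, for brevity; the function name and `max_dist` parameter are my own illustration, not from the thread). It’s an O(n²) flood fill, so slow and allocation-heavy on a real point cloud, but correct enough for a one-off:

```python
import math

def cluster_points(points, max_dist):
    """Group points into clusters: two points share a cluster if they
    are connected by a chain of neighbors, each within max_dist of the
    next. Naive O(n^2) flood fill -- bad performance/memory, but "good
    enough 80% of the time", much like the generated code described above.
    """
    clusters = []
    unvisited = list(range(len(points)))
    while unvisited:
        # Seed a new cluster with the first unvisited point.
        frontier = [unvisited.pop(0)]
        cluster = list(frontier)
        while frontier:
            current = frontier.pop()
            # Linear scan over all remaining points for each frontier point.
            near = [i for i in unvisited
                    if math.dist(points[current], points[i]) <= max_dist]
            for i in near:
                unvisited.remove(i)
            cluster.extend(near)
            frontier.extend(near)
        clusters.append([points[i] for i in cluster])
    return clusters
```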
But every time I tried to use AI for higher-level stuff, or anything that requires several interdependent concepts, it ended up in a hallucination pit:
- I have this problem
- Cool! Use solution A!
- Doesn’t work
- My bad, use solution B!
- Doesn’t exist
- Indeed! For this problem you should apply method A, which will work!
- (-_-)’
Web dev here, never used it :) I like to think for myself
But in those 4 months AIs have evolved a lot
Has it really? I don’t feel like it’s much different for programming compared to 4 months ago.
Yes, it has evolved almost exponentially in these 4 months. It’s just bizarre what recent models can do and how consistently they do it.
If you never tried it, of course you won’t know the difference. But those who did surely saw a huge improvement.
It’s not that I’ve never tried it, I’ve dabbled in it consistently over the last few years. If you had said there was a major difference compared to 2 years or maybe even a year ago, sure. In the last 4 months, I guess we’ve gotten stuff like Claude 4.6, which saw an increase in coding performance by 2.5% according to SWE benchmarks. An improvement, sure, but certainly not an exponential one and not one which will fix the fundamental weaknesses of AI coding. Maybe I’m out of the loop though, so I’m curious, what are those exponential improvements you’ve seen over the last 4 months? Any concrete models or tools?
I decided to try Qwen 3.5 Plus via Qwen Code CLI (Gemini CLI fork) and it’s bizarre what it can do.
It can figure out when it’s struggling with something, and look on the internet for questions and docs to understand things better. It takes a lot of actions by itself, not like those bad models from 4 months ago that got stuck in endless thinking and tweaking and never fixed anything.
Recent models are thinking more and more like human programmers.
I think you’re mistaking improvements in tooling for improvements in the LLMs. LLMs are plateauing. The idea of exponential growth is an illusion. We took 20+ year old technology, geared it toward text (the LLM), and trained it on the entire Internet. Then its popularity grew exponentially.
This is the hype narrative that Altman, Dario, Jensen, etc. push. They are trying to convince everyone that what we have is the Model T Ford of AI. Just imagine where we’ll be in 6 months!
My company got me a license and there is a clear push to start using it for more mundane tasks (initial code review, migrations and so on). I use it whenever I think it will be faster but it rarely is. In personal projects I used it for some boring tasks like migrating scripts and it’s definitely faster than learning completely new tools but it sucks not to understand the code you’re using. Also, I know I would do it better myself (just 10x slower). I might use it for some other personal apps which are kind of ‘fire and forget’ tools, not something I’m planning on maintaining.
I mostly don’t use AI… At least not directly for programming. I use it for other things like translating, formatting text, etc. I sometimes ask AI to make something for prototyping purposes.
I will occasionally ask AI to solve programming problems, more to keep up with current trends. I like to keep informed with what AI can and cannot do because even if I choose not to use them, the same will not be true with my coworkers or other people I interact with. Having a good understanding of the current “meta” for AI lets me know what to look out for in the context of avoiding disasters.
I still refuse to use it for anything other than minor curiosity. I often read AI summaries on coding questions I throw at DDG or Google, but I’ll usually open the Stack Overflow link or Reddit post to read the actual answer, just to be sure.
Of course.
My reasons for not using AI are the same as they were four months ago and will be the same in four months, regardless of what the models can or can’t do.
Ask again in four years.
What are your reasons?
Doesn’t the place you work force you to use it?
I’ve been noticing that companies are forcing devs to use AI to be more productive, even for simple things like writing git commit messages.
I noticed how quickly my own skills started deteriorating when trying to work with it. I’m trying to build my skills, not outsource them.
I also don’t love the environmental impact, nor the immorality of how they got/get their training sets for the base models.
If my work tried to force me to use it, I would be looking to change employer. Or lie and say I use it. But our AI use is heavily regulated and generally discouraged, so luckily no issues there.
I don’t think your code being used for training is a concern anymore. They’ll keep finding new code until the models reach their peak. Refusing to share your code for training will just postpone the inevitable: AI code will improve to its peak sooner or later.
You replied to only one of my points, and that’s not even what I said…
They train new models on base models, and I’m talking about how they scraped the internet without permission, how websites sold their users’ data without compensation, and how no one was ever given any opportunity to opt out of having their work and their words used to train these base models.
Without that grand scale theft we would have no base models anywhere near what we have now.
I’m not opposed to willingly sharing, I’m opposed to profiting from stealing.
Your mistake is thinking that I want to prove something. I don’t need to address all your points; this is just a comment, not a scientific discussion.
Programmers working with obscure languages. LLMs will give broken code and hallucinate stuff a lot more in these cases.
Also, if you have already mastered the basics of your language and can quickly search for examples of how to use unknown functions/methods, LLMs become mostly useless.

I have consistently been trying to use AI for the actual tasks I need to complete for work over the last year or so (we are gently encouraged to try it, but thankfully not forced). I have found it to be “successful” at maybe 1 in 10 tasks I give it. Even when successful, the code quality is so low that I edit heavily before it’s pushed and attributed to me.
I think the problem I have is I rarely work on boilerplate stuff.
I’ve come around on it somewhat at work. Recent models really are getting pretty impressive. It’s at the point where I can tell it to read a Jira ticket and implement it, and for simple ones it basically just does it. I’m not sure it’s worth the massive environmental and infrastructure detriments (or rather, I’m pretty sure it’s not), but it’s definitely a productivity boost.
It’s also creating cognitive debt tho - every change it does for me automagically is one I don’t have to think about and ‘earn’ myself. You could argue the AI compensates for that by then explaining the code for you, but I think it will lead to some bad results in the mid-long term.
For any personal programming, I don’t/wouldn’t use it, beyond maybe replacing Google searches. It defeats the fun of it, and costs money on top of that.
At my job I don’t. I once used it for some open source code where I implemented a fairly complex one line formula; I did eventually figure out the problem and don’t remember how helpful the AI’s suggestions were.
I keep hearing “oh this new model is better!”
I have a test case I’ve been using. A real-world piece of code I needed, that isn’t something super original but has a few tricky steps in it.
The first time I’ve seen AI able to get even close to finishing the task was recently. Its code worked, and there were only a few minor tweaks I had to make before it was in a condition I’d consider acceptable to merge into my own work.
It took about 30 minutes to do its task, maybe longer. I spent 5 minutes reviewing it but it would have been longer if I hadn’t previously done the task myself and knew exactly what I was looking for. I think it took me about an hour when I did that task myself the first time. Ultimately using AI for this might have saved me about 15 minutes?
I guess it might be borderline useful at that rate. I might look into using it more in the future, but I still don’t really expect it to become a tool I regularly use.
I don’t use AI for programming ever
Yes. I have tried various agents over the last ~1.5 years on multiple occasions on a bunch of different kinds of engineering type tasks. So far there has been a total of 1 time where the output was reasonable enough that I could build on it and not feel ashamed of the result (and that time probably saved me like half an hour). All other times, I wasted a bunch of time debugging crap and then just wrote the thing from scratch myself.
The closest I’ve come to somewhat consistent success with them is when I struggled to come up with a good search query for an issue I was having and after asking a longer prompt to an LLM it either gave me a close enough answer that I could figure it out from there, or the answer included some keywords that helped me come up with a query that got the results I needed.
By and large, I consider them crap for anything beyond the basics. On the other hand, I absolutely understand why they may look great in cases where the person using them doesn’t have an idea of what the output should look like. They’re a minimal productivity boost at best, at an insane cost.







