Are all uses of AI out of the question?

I understand most of the reasoning around this. Training AI models requires gigantic datacenters that consume copious amounts of electricity and water, driving up costs for everyone for something that doesn’t have many benefits. If anything, it feels like it’s fueled a downturn in the quality of content, intelligence, and pretty much everything else.

With the recent job market in the US I’ve had little to no choice in what I work on, and I’ve ended up working on AI.

The more I learn about it, the angrier I get about things like generative AI: open-ended tasks like generating art or prose. AI should be a tool to help us, not take away the things that make us happy so others can make a quick buck taking a shortcut.

It doesn’t help that AI is pretty shit at these jobs, burning cycle after cycle of processing just to come up with hallucinated slop.

But I’m beginning to think that when it comes to AI refinement there’s actually something useful there. The idea is to use heuristics that run on your own machine to reduce the context and the number of iterations/cycles an AI/LLM spends on a specific query, which reduces the chance of hallucinations and cuts down on the slop. The caveat is that this can’t be used for artistic purposes, since it requires being able to digest and specify context and instructions, which is harder for generative AI (maybe impossible; I haven’t gone down that rabbit hole because I don’t think AI should be generating any type of art at all).
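To make that concrete, here’s a rough sketch of the kind of on-device heuristic I mean. Everything in it is illustrative: `ask_llm` is a stand-in for whatever model call you’d actually make, and the keyword-overlap scoring is deliberately crude; a real version would score relevance more carefully.

```python
# Sketch: prune context locally before any model call, and hard-cap the
# number of iterations, so the LLM only ever sees what plausibly matters.
# ask_llm() is a placeholder, not any real API.

def ask_llm(context: list[str], query: str) -> str:
    """Stand-in for an actual model call."""
    return f"answer to {query!r} using {len(context)} context chunks"

def relevance(chunk: str, query: str) -> int:
    """Crude relevance score: how many query words appear in the chunk."""
    return len(set(query.lower().split()) & set(chunk.lower().split()))

def refine_context(chunks: list[str], query: str, keep: int = 3) -> list[str]:
    """Keep only the `keep` most relevant chunks; drop everything else."""
    ranked = sorted(chunks, key=lambda c: relevance(c, query), reverse=True)
    return [c for c in ranked[:keep] if relevance(c, query) > 0]

def ask(chunks: list[str], query: str, max_rounds: int = 2) -> str:
    """Cheap local pruning first, then a hard cap on model iterations."""
    context = refine_context(chunks, query)
    answer = ""
    for _ in range(max_rounds):  # no open-ended looping
        answer = ask_llm(context, query)
        if answer:  # a real version would validate the answer here
            break
    return answer
```

The pruning and the iteration cap run locally and cost essentially nothing, and every token you don’t send is compute the model never burns.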

The ultimate goal behind refinement is making existing models more useful, reducing the need to come up with new models every season and consume all our resources generating more garbage.

And then making the models themselves need less hardware and fewer resources when they run.

I can come up with more examples if people want to hear more about it.

How do you all feel about that? Is it still a hard no for AI in that context?

All in all, hopefully I won’t be working in this space much longer, but I’ll make the most of what I’m contributing to it to make it better rather than furthering reckless consumption.

  • 6nk06@sh.itjust.works · 3 months ago

    AI for analysis (neural networks or machine learning) has been used for years and it’s a good tool to validate or confirm some data. It’s a useful tool meant for humans.

    Chatbots and waifu generators are only used by billionaires to brainwash people who are already brainwashed by TV and social media, and it’s very bad.

    Also LLMs hallucinate and choke on their own vomit. Who can accept such an unreliable application?

  • JakenVeina@midwest.social · 3 months ago

    Nope. Machine learning has been a legit comp sci field for decades. Tons of really cool applications in medicine, for example.

    For me, the current “AI” frenzy has a couple core problems. Avoid those, and you’re good.

    • LLM chat bots being treated as intelligent, when they fundamentally can NEVER be intelligent (i.e., researchers have determined that there IS no solution to hallucinations without starting over from scratch with a different modeling strategy).
    • Training machine models on public, or even copyrighted, works, and then thinking it’s okay to use that for personal profit. In particular, the stark inequality in how copyright is enforced against normal people but ignored for private “AI” enterprises.
    • The capacity of “AI” tools to cripple human potential, rather than reinforce or elevate it, and the fact that so many people confuse the former for the latter. Like, I can genuinely relate to the idea of having artistic ideas in your head and not having the skills to bring them into reality, and seeing image generators as a way to fill in the gap. There IS a promotion of creativity there, but being able to run prompts through an image generator is NOT the same thing as spending years developing actual skill, and too many people choosing the “AI” route would be a net negative for humanity. Similarly (and more intimately for me, as a software developer), having too many people rely too heavily on “AI” tools in software development is going to produce a generation of HORRIFICALLY incompetent developers. And that’s not just theoretical; we’re already seeing the impact of overreliance on “AI” in the industry.
  • technocrit@lemmy.dbzer0.com · 3 months ago

    The main problem with “AI” is that it doesn’t exist. It’s a grift based on pseudo-science and hype.

    As far as the diverse technologies falsely labelled as “AI”, they’ll largely suffer the same fate as any technology under capitalism: The technology will be used for resource extraction, extreme privilege, and violent enforcement.

    I agree that some of these technologies are actually useful and getting better. For example, I enjoy search summaries even though they’re not always perfect. However, there is absolutely no “intelligence” involved in these computer programs. Furthermore, I think the applications to grifting, violence, surveillance, cops/prisons, genocide, etc. far outweigh the almost insignificant progress in search summaries, auto-complete, chatbots, content generation, and other so-called “AI” programs. The term “AI” is used to lump these diverse programs together in order to claim collective success and deny failure, and to promote genocidal applications under the cover of applications for generating pictures of cats.

    I prefer more scientifically accurate terms like statistics, big data, etc., but those are much less useful for grifting and tend to expose the underlying grift (e.g. stealing/spying on data, using math/computation instead of magic, the value of individual programs, etc.). The term “AI” is used to obscure the scientific reality. None of these companies want to talk about where their data comes from or how it’s manipulated. It’s just “intelligence”, smh. “AI” is a whitewashing term meant to obscure the many underlying problems, which is why more scientifically valid terms are almost completely avoided.

  • artyom@piefed.social · 3 months ago

    Not at all. But it is almost exclusively abused. It literally makes people dumber if they use it to learn something.

    Some examples:

    1. Dictation and summaries. AI “note-takers” are being built into videoconferencing platforms, and this is super helpful, not only for later reference in a search but also for a supervisor to get a quick glimpse of what’s going on with employees.
    2. Searching a private database. Google Drive search is ironically horrendous: if you type in the exact name of a folder, it will show you a bunch of other shit and not that folder. Meanwhile, I can ask Gemini a very specific question whose answer may sit in a single cell of some spreadsheet somewhere, and it will not only give me the answer but also link to the associated spreadsheet and even the specific cell where it found it (toy sketch below).
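    To illustrate why question-style search beats exact-name matching, here’s a toy sketch. This is not how Gemini actually works; plain word overlap stands in for real embedding search, and the data is made up.

    ```python
    # Toy sketch: score every cell against the question and return the
    # best match with its exact location. Real tools use embeddings;
    # word overlap is enough to show the shape of it.

    def find_cell(sheets: dict[str, list[list[str]]], question: str):
        """Return (sheet, row, col, value) for the best-matching cell."""
        q_words = set(question.lower().split())
        best, best_score = None, 0
        for name, rows in sheets.items():
            for r, row in enumerate(rows):
                for c, cell in enumerate(row):
                    score = len(q_words & set(str(cell).lower().split()))
                    if score > best_score:
                        best, best_score = (name, r, c, cell), score
        return best

    sheets = {"budget.xlsx": [["item", "cost"], ["office chairs total", "$1,240"]]}
    print(find_cell(sheets, "what did the office chairs cost"))
    # ('budget.xlsx', 1, 0, 'office chairs total')
    ```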

    It just doesn’t have nearly as many applications as the techbros would dupe you into believing. And it’s producing harm on a massive scale, in so many different ways.