Early reactions to Nvidia’s DLSS 5 were swift and skeptical, with some observers likening the technology to an Instagram-style filter applied over gameplay footage. Nvidia CEO Jensen Huang disputed the comparison, but subsequent clarifications have helped outline how the system actually works – and where it can fall short.
So if I understood this correctly, because of hallucinations during gameplay there can be things which aren’t actually there?
Like you see that guy’s haircut from the front, you move to the side in-game, and it disappears? Or things in the distance change or disappear when you get close to them?
Yes, or like we saw in the demo, someone’s arm disappears, a ball becomes a blurry shapeless blob, and many others.
This tech is the same tech that powers other “Generative AI”, meaning the exact issues with asking for a hand and getting one with 7.5 fingers can now happen in real time, in video games supporting DLSS 5.
It is straight up an AI slop filter over top of a game. There’s not much more to say about it.
Dayum, they really fucked this up. I have been waiting for this tech for a decade now and all everyone can do is be angry. Now I get the binary opinions of idiots if I mention I will use it. Have fun hating, haters; I’ve got old-ass games to play with AI filters because FUN! Well, we’ll see how it goes lol.
I’m not sure why this is such a big deal. It’s only going to affect the 7 or 8 people in the world that can afford the 2 top of the line graphics cards and the RAM required to run it.
Well, I hate the AI-slop look it outputs, and until now DLSS was supposed to increase performance; now it does the exact opposite. Also, fuck Nvidia, they deserve all the hate.
This is the culmination of years in which gamers kept getting ignored when they asked for some consideration of their needs, like more memory, better Linux drivers, and smaller form factors. Instead we get hit with their server-farm AI slop runoff and told this is what we want. To me it’s an insult to anyone wanting something more specialized for gaming and graphical work rather than some LLM platform.
Ahh, I get it now. It’s not about them creating something that sucks, it’s about them creating something that sucks instead of something else we’ve been asking for that would be actually useful. Thanks for filling in the missing piece.
So they have this nice 3D card, which they had a hand in inventing and “perfecting” to render the entire 3D scene in beautiful, stunning detail, and then another card with AI instructions that totally ignores everything that just happened, takes a screenshot and puts a filter on it in real time, basically. What a massive waste of power and computation.
Best comment about this was from a video posted yesterday:
Nvidia keeps saying that this tech is still a work in progress, yet they made the decision to release a demo in its current state…
Yes because it makes people talk about that instead of their love affair with Palantir and their passionate support for Israel.
Whoever decided to showcase DLSS 5 in its current form is probably getting a raise and a nice bonus for this diversion.
All good, I can hate them for more than one thing!
GenAI is the ultimate demoware. Bro, it’ll get better. Just look how good it is now.
Just one more data center, bro! Promise!
What isn’t? Everything is growing, decaying, or changing in some way, honestly.
AI is the most rushed-to-market product I think I’ve ever seen in my life. It makes Cyberpunk’s release look like a polished gem in comparison. Yes, things evolve over their life cycle for better or worse, but none of those other things have been so ingrained in everything, cost even a fraction of what LLMs do, both monetarily and environmentally, or sucked as hard.
AI is a different monster. A shitty shitty monster.
The thing is, it’s not rushed. ‘AI’ as in what we call LLMs today has existed for quite a few years now and is just a tool that gets used for a lot of ill-fitting purposes. Training also has its limits, and it will not suddenly stop hallucinating or become sentient.
Yet it was sold this way. It’s the sunk cost fallacy, stupidity and the attempt to cut literally every corner. And it’s failing hard.
It’s because CEOs don’t play cyberpunk, but they did try chatgpt and got an immediate boner thinking about all the people they could lay off.
More like they saw the potential to ruin most plebs’ already limited abilities to figure out wtf is happening in the world, thought they would be too smart to fall for that shit, and decided they should open Pandora’s box and sell it like it’s Prometheus’s fire
Unlike Pandora’s box, though, a lot of the dumber applications of this stuff will go back in when the VC money dries up.
Can’t tell if serious.
As microslop was constantly saying last year, LLMs and their ilk are a product in search of an application.
Every company is desperate to find anything these garbage machines can do well enough to validate the trillion or so dollars pumped into them.
Late edit: Also, Salesforce is literally mostly about barely functional tech with shiny demos. That’s why there is a consulting and customization industry worth at least tens if not hundreds of billions that supports just their software.
But then Nvidia really does not need to. They sell hardware. They need to design new better hardware and make good drivers. But it’s never enough, is it? It always has to be more, like cancer.
That’s clearly the insane part. Like, okay, it can be a bit helpful in this or that scenario, but they spent as if every person on earth would want to pay 250 euros a month for it…
Well, if it isn’t little Lisa Slopson! The tech bros’ answer to a QUESTION NO ONE ASKED!?
Nslopia
Demos are very often an example of in progress works or technology. That literally happens all the time.
Doesn’t really matter IMO. If you have known bugs and flaws you don’t showcase those, or if they are present in the showcase you at least address them and show what is to be expected upon release. NVIDIA just flat out didn’t care. As soon as motion increases, the artifacting is crazy. How do you even decide that this is remotely good enough for a demo?


Nvidia hears people like motion blur and AI slop so they put some AI slop in their motion blur.
Ugh. “Everyone is doing BLOOM, let’s also do BLOOM but at +150% more!”
I remember that; motion blur came after, and now I guess AI 😓
:3 PS: The hallucinations are artistic freedom 😂
“Hallucinations” are an inherent part of the programming.
It is literally impossible to prevent them. The systems work on building the fuzzy average response to a query via complex statistics. There is no thinking or creativity.
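A minimal sketch of that point (my own illustration, not any vendor’s actual code, and the toy distribution is made up): each output token is sampled from a learned probability distribution, and nothing in that loop ever checks the result against reality, which is why hallucinations can’t simply be trained away.

```python
import random

# Toy illustration: a generative model picks each output token by sampling
# from a learned probability distribution. There is no fact-checking step,
# so a plausible-but-wrong answer can always come out.
def sample_next_token(probabilities: dict[str, float]) -> str:
    """Pick one token at random, weighted by the model's learned probabilities."""
    tokens = list(probabilities)
    weights = list(probabilities.values())
    return random.choices(tokens, weights=weights, k=1)[0]

# Hypothetical distribution for the prompt "a hand has ___ fingers":
next_token_probs = {"five": 0.62, "four": 0.21, "six": 0.12, "seven": 0.05}
print(sample_next_token(next_token_probs))  # usually "five", but not always
```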
Okay, but that is not what the person said or what the poster above quoted as being the best part. I’m not commenting on the overall performance; I’m just saying that demos very often are exactly what that sentence implies they shouldn’t be.
And yet, they chose to demo a broken technology with obvious bugs and flaws. The demos from tech companies are supposed to make people excited, not recoil in disgust.
This isn’t some tiny company, either. It’s fucking nVidia, who supposedly has the money to create a good demo.
Instead of doing this bullshit, can we just have regular DLSS be actually good? I can’t stand turning it on for my handheld because it’s a blurry, smeary mess as is.
You might be thinking of FSR? I don’t know of any handhelds that support DLSS, as they all use AMD hardware, and they also don’t support the latest AMD RDNA tech, so you’re stuck with crappy FSR 2 most of the time, which is indeed a horrible blurry glitchy mess.
The Switch 2 has it since it’s the only Nvidia handheld as far as I know.
Ah, you’re right. I was using DLSS as a catch-all term, forgetting FSR is the AMD version.
Further confirming this is not meant to ever be used by actual gamers, and instead exists only to advertise real time genAI modification to existing video media.
img2img slop filter for every frame in real time. Great job nvidia what a dumb waste of resources.
Wasn’t DLSS working fine before, wtf did they do to it?
The waste is the point.
It needs to be more expensive, because that can be leveraged for higher valuations.
Haven’t you heard? Everything must contain generative AI now.
DLSS stands for “Deep Learning Super Sampling.” It was always gen-AI. Those extra details weren’t being revealed, they were being generated.
While true, the way DLSS 2/3/4 does it is to take a bunch of low res renders of the game over time while wiggling the camera very slightly, and stitch them all together to generate a new, higher res image that very closely matches what the original would have looked like. The GenAI part is essentially just a very advanced temporal blending function that’s really good at detecting and smoothing out edges.
DLSS 5 then runs an AI Instagram filter on top of the frame for “enhanced visuals”, because obviously we want our games to look like cheap AI slop.
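To make the contrast concrete, here’s a rough toy sketch of the temporal-accumulation idea described above (a simplified illustration under my own assumptions, not Nvidia’s actual DLSS code; the function and parameter names are invented): each frame the camera is jittered by a sub-pixel offset, the low-res render is upscaled and shifted by that jitter, and the result is blended into a high-res history buffer over time.

```python
import numpy as np

def accumulate(history: np.ndarray, low_res: np.ndarray,
               jitter: tuple[int, int], scale: int, alpha: float = 0.1) -> np.ndarray:
    """Blend one jittered low-res frame into the high-res history buffer."""
    upscaled = np.kron(low_res, np.ones((scale, scale)))     # naive nearest-neighbour upscale
    upscaled = np.roll(upscaled, shift=jitter, axis=(0, 1))  # apply this frame's sub-pixel jitter
    return (1.0 - alpha) * history + alpha * upscaled        # exponential blend over frames

# Toy usage: 4x "upscale" of an 8x8 render target over a few jittered frames.
scale = 4
history = np.zeros((8 * scale, 8 * scale))
for frame in range(16):
    jitter = (frame % scale, (frame // scale) % scale)       # cycle through sub-pixel offsets
    low_res = np.random.rand(8, 8)                           # stand-in for the game's low-res render
    history = accumulate(history, low_res, jitter, scale)
```

Roughly speaking, the network in DLSS 2/3/4 replaces that naive blend with a learned per-pixel decision about how much history to trust, which is what makes it good at edges; the point is that it reconstructs detail the game actually rendered rather than inventing new content on top of the frame.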
Wtf did they do to it.
✨ AI ✨
But it was working fine and was probably cheaper; this makes it worse. Where the fuck is QA?
Where the fuck is QA?
✨ They replaced them with AI ✨
“Those responsible for sacking the people who have just been sacked have been sacked.”