  • MSAA is pretty solid, but it has its own quirks and it’s super heavy for how well it works. There’s a reason we moved on from it and towards TAA eventually. And DLSS is, honestly, just very good TAA, Nvidia marketing aside.
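
    For a sense of what TAA actually does under the hood, here’s a minimal sketch of the core accumulation idea. This is a toy illustration, not any vendor’s implementation: real TAA also reprojects the history buffer with motion vectors and clamps it to fight ghosting, and DLSS layers a learned model on top of the same reproject-and-accumulate loop.

    ```python
    import numpy as np

    # Toy temporal accumulation: blend each new frame into a running
    # history buffer so aliasing averages out over time. ALPHA is the
    # weight of the new frame; lower values smooth more but ghost more.
    ALPHA = 0.1

    def accumulate(history: np.ndarray, frame: np.ndarray) -> np.ndarray:
        return (1.0 - ALPHA) * history + ALPHA * frame

    history = np.zeros((1080, 1920, 3), dtype=np.float32)
    for _ in range(16):
        frame = np.random.rand(1080, 1920, 3).astype(np.float32)  # stand-in frame
        history = accumulate(history, frame)
    ```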

    I am very confused about the concept of “fake performance”. If the animation looks smooth to you, then it’s smooth. None of it exists in real life. Like every newfangled visual tech, it’s super in-your-face until you get used to it. Frankly, I’ve stopped thinking about it in the games where I do use it, and I use it whenever it’s available. If you want to argue about increased latency we can talk about it, but I personally don’t notice it much in most games as long as it’s relatively consistent.
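
    To put rough numbers on the latency point (back-of-the-envelope assumptions, not measurements of any particular implementation): interpolation-style frame generation has to hold the latest real frame until the generated in-between frame has been shown, so the added delay is on the order of one base frame time.

    ```python
    # Rough frame-generation latency math; the one-frame-held model is an
    # assumed approximation, not a measured figure for any specific tech.
    def added_latency_ms(base_fps: float) -> float:
        return 1000.0 / base_fps  # one base frame held back, in milliseconds

    for fps in (30, 60, 120):
        print(f"{fps:>3} fps base -> ~{added_latency_ms(fps):.1f} ms extra")
    # 30 fps -> ~33.3 ms, 60 -> ~16.7 ms, 120 -> ~8.3 ms. The higher the
    # base framerate, the smaller the penalty, which matches the intuition
    # that it is least noticeable when the framerate stays consistent.
    ```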

    I do understand that having to worry about performance, and being hyper-aware of it, is annoying, but as we’ve litigated up and down this thread, that ship has sailed for PC gaming. If you don’t want to have to worry, the real answer is getting a console, I’m afraid.


  • Yeah, optimizing for scalability is the only sane choice from the dev side when you’re juggling hardware ranging from the Switch and the Steam Deck to the bananas insanity that is the 4090. And like I said earlier, you often don’t even get different binaries or drivers for those; the same game has to support all of it at once.

    It’s true that there are still some set targets along the way. The PS5 is one, the Switch is one if you support it, and the Steam Deck is there if you’re aiming to support low-power gaming. But that’s beside the point: the PS5 alone requires two to three setups to be designed, implemented and tested. PC compatibility testing is a nightmare at the best of times, and with a host of display refresh rates, arbitrary resolutions and all sorts of integrated and dedicated GPUs from three different vendors expected to get support, it’s outright impossible to do granularly. The idea that PC games have become less supportive of scalability is absurd. I remember the days when a game would support one GPU. As in, the one. If you had any other one it was software rendering at best. Sometimes you had to buy a separate box for each supported card.
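
    To make the combinatorics concrete, here’s a rough sketch of the test matrix; every count below is made up but plausible, purely to show the scale of the problem.

    ```python
    # Hypothetical PC compatibility matrix; all counts are illustrative.
    gpu_vendors = 3        # Nvidia, AMD, Intel
    gpus_per_vendor = 8    # a modest sample of current and recent parts
    resolutions = 6        # 1080p, 1440p, ultrawide, 4K, handheld, etc.
    refresh_rates = 5      # 60, 120, 144, 165, 240 Hz
    presets = 4            # low / medium / high / ultra

    configs = gpu_vendors * gpus_per_vendor * resolutions * refresh_rates * presets
    print(configs)  # 2880 combinations, before CPUs, RAM or driver versions
    ```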

    We got used to the good stuff during Nvidia’s 900 and 1000 series, which basically ran console games maxed out at 1080p60, but that was a very brief slice of time; it’s gone and it’s not coming back.


  • Yeah, although I am always reluctant to quantify visual quality like that. What is “65% better” in terms of a game playing smoothly or looking good?

    The PS5 Pro reveal was a disaster, partially because if you’re trying to demonstrate how much nicer a higher resolution, higher framerate experience is, a heavily compressed, low bitrate YouTube video that most people are going to watch at 1080p or lower is not going to do it. I have no doubt that you can tell how much smoother or less aliased an image is on the Pro. But that doesn’t mean the returns scale linearly, you’re right about that. I can tell a 4K picture from a 1080p one, but I can REALLY tell a 480p image from a 1080p one. And it’s one thing to add soft shadows to a picture and another to add textures to a flat polygon.
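
    The diminishing returns show up even in raw pixel counts. Simple arithmetic, nothing vendor-specific:

    ```python
    # Pixel counts for common resolutions (480p as the 854-wide 16:9 variant).
    resolutions = {"480p": (854, 480), "1080p": (1920, 1080), "4K": (3840, 2160)}
    pixels = {name: w * h for name, (w, h) in resolutions.items()}

    print(pixels["1080p"] / pixels["480p"])  # ~5.1x: the jump you REALLY see
    print(pixels["4K"] / pixels["1080p"])    # 4.0x: a similar cost multiplier
                                             # for a much subtler visible gain
    ```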

    If anything, gaming as a hobby has been a tech thing for so long that we’re not ready for it to shift to being limited by money and artistic quality rather than processing power. Arguably this entire conversation is pointless in that the best looking game of 2024 is Thank Goodness You’re Here, and it’s not even close.


  • Yep. The thing is, even if you’re on high-end hardware doing offline CGI, you’re using these techniques for denoising. If you’re doing academic research, you’re probably upscaling with machine learning.

    People get stuck on the “AI” nonsense, but ultimately you need upscaling and denoising of some sort to render a certain tier of visuals. You want the highest quality version of that you can fit in your budgeted frame time. If that is using machine learning, great. If it isn’t, great as well. It’s all tensor math anyway; it’s about using your GPU compute in the most efficient way you can.
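
    As a concrete illustration of the “it’s all tensor math” point, here’s a toy sketch (not any shipping upscaler): fixed-function upscaling like bilinear is just interpolation with hand-picked weights, and the machine learning versions do the same kind of tensor arithmetic with learned weights instead.

    ```python
    import numpy as np

    # Toy 2x upscale expressed as plain array math. A learned upscaler
    # replaces these fixed midpoint weights with trained ones.
    def upscale2x(img: np.ndarray) -> np.ndarray:
        h, w = img.shape
        out = np.zeros((h * 2, w * 2), dtype=img.dtype)
        out[::2, ::2] = img                               # copy source pixels
        out[1::2, ::2] = (img + np.roll(img, -1, 0)) / 2  # row midpoints
        out[:, 1::2] = (out[:, ::2] + np.roll(out[:, ::2], -1, 1)) / 2  # columns
        return out  # edges wrap for simplicity; real filters clamp instead

    low_res = np.random.rand(540, 960).astype(np.float32)
    high_res = upscale2x(low_res)  # (1080, 1920)
    ```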




  • What do you mean “suddenly”? I was running path tracers back in 1994. It’s just that they took minutes to hours to generate a 480p image.

    The argument is that we’ve gotten to the point where new rendering features rely on a lot more path tracing and light simulation that used to not be feasible in real time. Pair that with the fact that displays have gone from 1080p60 vsync to 4K at arbitrarily high framerates and… yeah, I don’t think you realize how much additional processing power we’re requesting.
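
    Some rough arithmetic on the size of that request; the sample counts here are assumptions, purely to show the order of magnitude:

    ```python
    # Illustrative ray-budget math; samples-per-pixel values are assumed.
    pixels_480p = 640 * 480
    pixels_4k = 3840 * 2160

    offline_seconds = 10 * 60   # say, ten minutes per 480p frame back then
    rays_per_pixel = 100        # assumed samples per pixel in both cases

    offline_rate = pixels_480p * rays_per_pixel / offline_seconds
    realtime_rate = pixels_4k * rays_per_pixel * 120  # 4K at 120 fps

    print(f"{realtime_rate / offline_rate:,.0f}x more rays per second")
    # ~1,944,000x under these assumptions, which is exactly why real-time
    # path tracing leans on denoisers and upscalers.
    ```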

    But the good news is that if you were happy with 1080p60, you can absolutely render modern games like that on a modern GPU without needing any upscaling.


  • That’s fine, but definitely not a widespread stance. Like somebody pointed out above, most players are willing to lose some visual clarity for the sake of performance.

    Look, I don’t like the look of post-process AA at all. FXAA just seemed like a blur filter to me. But there was a whole generation of games out there where it was that or somehow finding enough performance to supersample a game and then endure the spotty compatibility of having to mess with custom unsupported resolutions and whatnot. It could definitely be done, particularly in older games, but for a mass market use case people would turn on SMAA or FXAA and be happy they didn’t have to deal with endless jaggies on their mid-tier hardware.
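
    For context on why post-process AA reads as a blur filter: it really is a pass over the finished image that finds high-contrast edges and blends across them, with no extra geometry samples at all. Here’s a toy sketch of the idea; the real FXAA and SMAA shaders use far smarter edge detection and blend weights:

    ```python
    import numpy as np

    # Toy post-process AA: blur only where neighboring luma differs a lot.
    def toy_post_aa(luma: np.ndarray, threshold: float = 0.1) -> np.ndarray:
        blurred = (luma
                   + np.roll(luma, 1, 0) + np.roll(luma, -1, 0)
                   + np.roll(luma, 1, 1) + np.roll(luma, -1, 1)) / 5.0
        edges = np.abs(blurred - luma) > threshold  # crude edge detector
        return np.where(edges, blurred, luma)       # soften edges, keep the rest
    ```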

    This is the same thing: it’s a remarkably small visual hit for a lot more performance, and particularly on higher resolution displays a lot of people are going to find it makes a lot of sense. Getting hung up on analyzing just “raw” performance, as opposed to weighing the final results independently of the method used to get there, makes no sense. Well, it makes no sense industry-wide; if you happen to prefer other ways to claw back that performance, you’re more than welcome to deal with bilinear upscaling, lower in-game settings or whatever you think your sweet spot is, at least on PC.



  • I don’t see how that’s the case. Most people prefer more fps over image quality, so minor artifacting from DLSS is preferable to the game running much slower with cleaner image quality. That is consistent with the PS data (which wasn’t a poll, to my understanding).

    I also dispute the other assumption, that “we suck at optimizing performance”. The difference between now and the days of the 1080 Ti, when you could just max out games and call it a day, is that we’re targeting 4K at 120fps and up, as opposed to every game maxing out at 1080p60. There is no set performance target on PC anymore; every game can be cranked higher. We are still using Counter-Strike for performance benchmarks, running at 400-1000fps. There will never be a set performance target again.
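
    The jump in targets is easy to quantify with pure pixel-rate arithmetic (ignoring that per-pixel shading cost has grown too):

    ```python
    pixels_per_sec_1080p60 = 1920 * 1080 * 60  # the old de facto target
    pixels_per_sec_4k120 = 3840 * 2160 * 120   # a common high-end ask today
    print(pixels_per_sec_4k120 / pixels_per_sec_1080p60)  # 8.0x raw throughput
    ```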

    If anything, optimization now is sublime. It’s insane that you can run most AAA games on both a Steam Deck and a 4090 out of the same set of drivers and executables. That is unheard of. Back in the day the types of games you could run on both a laptop and a gaming PC looked like WoW instead of Crysis. We’ve gotten so much better at scalability.



  • Well, sure, but that’s also because on PC I can choose to buy DRM-free games and have guaranteed backwards compatibility for the foreseeable future. Plus it’s not a closed system based on a console that launched with a drive. People (me included) already own PS5 discs, not from a previous generation but from this one. It’s bad enough that I need to keep my PS3 around to play PS3 games; it’d be absurd to not be able to play PS5 games I already own because the thing is physically unable to ingest them out of the box.

    So yeah, for people in that position the Pro is a hundred bucks more expensive than it says on the sticker, which is already a ridiculously high number.




  • It’s probably worth highlighting that, despite the clickbait headline, this guy has not been affiliated with Sony or PlayStation for almost twenty years, and these days he’s mostly an investor in multiple middleware, outsourcing and other videogame-adjacent companies.

    Also, to his credit, what he’s actually saying is that he’s optimistic that people and the industry will rebound fairly quickly and may need to bridge themselves over to the next gig somehow. The other thing he apparently proposes is “go lay on a beach somewhere”. But he’s not saying that you should go be an Uber driver if you lost your games industry job, he’s saying it’s likely that your games industry skillset will remain valuable and you’ll find something else soon-ish.

    I hate that the media keeps making me do this and defend people I disagree with. I think there’s an interesting debate here about whether the gig-fire-hire-repeat flow of the games industry is good or sustainable, and about what alternatives there are. But if you go and clickbait this hard, I’m kinda forced to point that out first, and now we’re all arguing about what was said and not about the underlying issue.



  • He shipped enough clunkers (and terrible design decisions) that I never bought the mythification of Jobs.

    In any case, the Deck is a different beast. For one, it’s the second attempt. Remember Steam Machines? But also, it’s very much an iteration on pre-existing products, where its biggest asset is leveraging an endless budget and first-party control of the platform to turn scale into a pricing advantage.

    It does prove that the system itself is not the problem, in case we hadn’t picked up on that with Android and ChromeOS. The issue is having a free, do-everything system where some of the do-everything requires you to intervene. That’s not how most people use Windows (or Android, or ChromeOS), and it’s definitely not how you use any part of SteamOS either, unless you want to tinker past the official support. That’s the big lesson, I think. Valve isn’t even trying to push Linux, beyond their Microsoft blood feud. As with Google, it’s just a convenient stepping stone in their product design.

    What the mainline Linux developer community can learn from it, IMO, is that, for onboarding, coupling the software and hardware closely is important, and Linux should find a way to do that in more product categories, even if it’s by partnering with manufacturers that won’t do it themselves.