- cross-posted to:
- gaming@beehaw.org
is there that much difference between 2015 and 2025 hardware? (like now people are saying Moore’s law is dead and stuff)
The fact that Steam reviews were mostly negative on launch day and suddenly flipped to mostly positive means Steam has once again tampered with the statistics. Fuck Valve as well.
I’m an ok gamer, maybe even good, but premium? I don’t know… I guess this game isn’t for me. Better spend my money on other games then.
The problem is Randy.
“we haven’t optimised this at all, lol, $80 please”
This reminds me of the “AAAA” game thing.
I’ll say it here again: I have a 7900 XTX and I expect it to run silky smooth.
If it doesn’t, that’s on you brother.
I also play at 1440p and it doesn’t reach a good enough framerate at ultra settings; I set it to high and lowered a few other settings to get a better framerate.
Premium gamer spotted
Sounds like it works on your machine. What’s your point?
Eh. I didn’t even finish borderlands 3. won’t bother getting 4
Did not read the article. I have a 3070 graphics card (Windows 10) and the game ran fine. I had a problem with not being able to select a different weapon until I messed with it quite a bit, and my friend had one crash. He has a 3050.
Frankly I expected much worse, but this is just not a good response. Is he using this as an excuse NOT to fix it?
While I have no desire to defend Randy, Twitter is as Twitter does, and unless you spend time looking at his whole timeline, it sounds like he’s saying only stupid shit like this. He did actually acknowledge the issues, and stated that they’re working on them but also that for now the best way to play is with FSR/DLSS and frame gen.
I disagree with this deeply. He makes arguments about the imperceptibility of latency in frame gen, but that’s only true when the base framerate is high enough. DLSS is probably fine, but it’s also pretty fair for those who are using an 80 or 90 class card to complain about struggling at 1440p native, let alone 4k.
How dare you have a reasonable response when we all just wanna be pissed. /s
But seriously some people should just not interact with the public. It’s a thankless job and you have to have very thick skin and the patience of a saint. I also have a theory that a large percentage of CEOs are sociopaths/psychopaths so …
Thanks.
That’s the thing, Randy, we aren’t interested in a monster truck, we just need a car. A 5090 is not a leaf blower, and your game looks more like a clown car than a monster truck given the way it runs. Thanks for telling us that you are basically only interested in whales, although I can’t imagine they are happy either regardless of how they’ve probably thrown money at you already.
What a total dick. Lots of people with high end PCs are all saying how it runs like shit, so what’s the next excuse?
They’re not premium gamers. It’s about what’s inside your soul, not the kind of PC you have.
Ah, my mistake. If only our souls were as pristine and premium as Randy Pitchford lmao
We can’t all be Randy Pitchford. I bet it runs so well for him. :(
Same argument Sony gave when EverQuest 2 launched with stupidly high required specs. World of Warcraft launched a month later, ran on any video card from the previous five years, and the EverQuest franchise still hasn’t recovered.
Well now I am giving even less of a fuck about this series that should’ve ended after the second title
Such a bad argument. There is no reason for the game not to support lower-end hardware except lazy development. Not a good sign for the future of Borderlands. This is also the type of game that really sucks if you don’t get a locked 60 FPS. Borderlands 3 is still a laggy mess on my Steam Deck. Sometimes it just stutters forever, constantly generating shaders or something. There is no reason for it to be that way, and lowering the settings has almost no effect on actual performance. This is 100% the fault of the devs. They are pushing half-complete products to market, not the consumer.
Also, as many others have stated, not everyone has $4000 to drop on a PC. My most powerful machine is a Skylake processor with a 1070. It runs most games fine; it’s just the handful of unoptimized Unreal Engine games that run badly. I have nearly limitless options to buy other games from devs who actually care about us poorer folk. A 4070 Ti is like $800-1000 right now and probably won’t even run this game well. This is in an age where we’ve had nearly 100% inflation in a few years, and most people can barely afford a car without spending half their paycheck. They should be trying to make their games work on older hardware now more than ever. RAM is cheap, so that’s one thing you can work around, and it’s usually not worth it to target low-RAM devices. But there is really no good reason for your game not to scale well to lower-end GPUs. You don’t have to ship only 4K textures on disk. You can easily automate a process to create lower-poly models and lower-resolution textures, and implement more modest lighting systems. It’s pretty easy relative to other things. CPUs are a middle ground: you shouldn’t necessarily target 10-year-old CPUs if that’s going to make the game worse, but your game also shouldn’t be so unoptimized that you need 5.5 GHz on a single core to get 60 FPS. You should at least have a proper LOD system in place so that you can support lower-end GPUs in the many cases where it’s not very difficult.
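To make the “automate lower-resolution textures” point concrete, here’s a rough sketch of my own (nothing from any real studio pipeline): each step of a 2x2 box filter halves the texture resolution, so a 4K source can yield 2K/1K/512 variants offline with zero artist time.

```python
# Hypothetical mip-style downscaler: average each 2x2 block of texels.
# Works on a 2D list of grayscale values with even dimensions.
def downsample(tex):
    h, w = len(tex), len(tex[0])
    return [
        [(tex[2 * r][2 * c] + tex[2 * r][2 * c + 1]
          + tex[2 * r + 1][2 * c] + tex[2 * r + 1][2 * c + 1]) / 4
         for c in range(w // 2)]
        for r in range(h // 2)
    ]
```

Run it repeatedly and you get the whole LOD chain for storage; real tools use fancier filters, but the automation idea is the same.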
The issue with many of these modern AAA games is that they’re trying to avoid as much work as possible. They avoid targeting lower-end hardware because it’s extra work and they’re already struggling to finish their games. They need to plan for these things from the start and build them into their process.
Hell, I don’t know many people willing to drop 4k on a gaming rig. Most people I know with a gaming computer are in the 1-2k range and miss when you could get decent performance under 1k. Like, if I can’t get playable performance out of a several-years-old midrange computer, I’m not buying your game, especially not for $70.
That was a reference to a video I saw, something like, trying to play borderlands at 60 FPS on a $4k computer.
2k is about the minimum these days for a full system when you include taxes and shipping. That will get you a midrange system. You can get lower end stuff or buy a used graphics card. Personally I’m still rocking a 1070 and it’s excellent for like 99% of games. I’m lucky that the handful of games that won’t run on it I don’t care about anyways.
Also, a less-known fact: real inflation, not the government-reported number, is probably close to 100% over the past 10 years. So really a $2000 machine today is the same as a $1000 computer 10-15 years ago. Our wages didn’t go up, of course. That’s the whole point of a fiat currency and inflation! It’s a clever and sneaky wealth tax, a way to cut your wages quickly in a way that 90% of people don’t understand. They just yell at the gas station clerk because their soda is nearly $5. Their poor little brains can’t conceive of a concept so sophisticated as the fact that they are actually being paid less; stuff doesn’t cost more. People aren’t going to make stuff for free and give it to them. It’s just not a simple number, so it confuses them. If businesses had to adjust your pay to match real inflation, guess what? There would be no inflation and no fiat currency. No reason for it to exist, because they couldn’t screw us out of our wages without telling us to our face. Just extra paperwork and time for managers, a little fake economic growth, and no unnatural bubbling of markets.
Ram is cheap
Kind of diverging from the larger point, but that’s true — RAM prices haven’t gone up as much as other things have over the years. I do kind of wonder if there are things that game engines could do to take advantage of more memory.
I think that some of this is making games that will run on both consoles and PCs, where consoles have a pretty hard cap on how much memory they can have, so any work that gets put into improving high-memory stuff is something that console players won’t see.
checks Wikipedia
The Xbox Series X has 16GB of unified memory.
The PlayStation 5 Pro has 16GB of unified memory and 2GB of system memory.
You can get a desktop with 256GB of memory today, about 14 times that.
Would have to be something that doesn’t require a lot of extra dev time or testing. Can’t do more geometry, I think, because that’d need memory on the GPU.
considers
Maybe something where the game can dynamically render something expensive at high resolution, and then move it into video memory.
Like, Fallout 76 uses, IIRC, statically-rendered billboards of the 3D world for distant terrain features, like, stuff in neighboring and further off cells. You’re gonna have a fixed-size set of those loaded into VRAM at any one time. But you could cut the size of a given area that uses one set of billboards, and keep them preloaded in system memory.
Or…I don’t know if game systems can generate simpler-geometry level-of-detail (LOD) objects in the distance or if human modelers still have to do that by hand. But if they can do it procedurally, increasing the number of LOD levels should just increase storage space, and keeping more preloaded in RAM should just require more RAM. You only have one level in VRAM at a time, so it doesn’t increase demand for VRAM. That’d provide for smoother transitions as distant objects come closer.
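For what it’s worth, procedural LOD generation is definitely doable. Here’s a deliberately crude sketch (my own toy, not any engine’s actual method): vertex clustering, where you snap vertices to a coarse grid and merge the ones that land in the same cell. Real decimation algorithms (quadric error metrics and the like) are far more careful, but this shows lower-poly levels can be derived automatically.

```python
# Toy vertex-clustering LOD: merge all vertices that fall into the same
# grid cell of size `cell`. Returns the coarse vertex list plus a remap
# table from original vertex index to coarse vertex index.
def cluster_vertices(vertices, cell=1.0):
    merged = {}   # grid cell -> index in the coarse vertex list
    remap = []    # original vertex index -> coarse vertex index
    for (x, y, z) in vertices:
        key = (round(x / cell), round(y / cell), round(z / cell))
        if key not in merged:
            merged[key] = len(merged)
        remap.append(merged[key])
    coarse = [(k[0] * cell, k[1] * cell, k[2] * cell) for k in merged]
    return coarse, remap
```

A bigger `cell` gives a coarser level, so you can produce the whole LOD ladder from one mesh by sweeping the cell size.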
You can divide stuff up in memory however you want: objects, arrays, whatever. Generally speaking, GPU memory is used for things that will run fast on the streaming processors of the GPU, which are small processors specialized for a limited set of tasks involving 3D rendering. The types of things you would have in GPU memory are textures, models, shader programs, and the various buffers created to store data for rendering passes like lighting and shadow: Z-buffers, the frame buffer, and so on.
Other things are kept in RAM and used by the CPU, which has many instruction sets and many optimizations for different types of tasks. CPUs are really good at running unpredictable code. They have very large and complex cores which do all kinds of things like branch prediction (guessing which path the code will take and speculatively running ahead when there is free time available). The CPU also has direct access to the PCI bus and can talk to things like the south and north bridge, the storage controller, IO devices, etc.
Generally, in a game engine most of the actual logic happens on the CPU, because it’s very complex and arbitrary code that is calculation-heavy. Things like level data, AI, collisions, physics, and data streaming are handled by the CPU. The CPU prepares frames by batching many things into one call to the GPU, because the GPU is good at taking a command from the CPU and performing that task many times simultaneously, on every pixel for example. If the CPU had to send every instruction to the GPU in sequence it would be very slow, partly because of the physical distance between the GPU and CPU and partly because a script would only do one thing at a time in a loop. Shaders are different: they are like running a function across a large data set utilizing the 1000+ cores in an average modern GPU.
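The batching idea above can be sketched in a few lines (hypothetical names, nothing engine-specific): instead of issuing one draw per object, you group objects by shared material so each material becomes a single submission to the GPU.

```python
# Toy draw-call batching: group a frame's draw list by material so the
# renderer issues one call per material instead of one per object.
def batch_by_material(draw_list):
    batches = {}
    for obj, material in draw_list:
        batches.setdefault(material, []).append(obj)
    return batches
```

Real engines batch on more than material (shader, state, instancing), but the payoff is the same: far fewer round trips from CPU to GPU per frame.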
There are other differences as well. The CPU has access to low latency memory where the GPU prefers higher latency but high bandwidth memory. This is because the types of operations the GPU is doing are much more predictable and consistent. CPUs are very arbitrary and often the CPU might end up taking a path that is unusual so the memory it has to access might be scattered and arbitrary.
So basically most of the game engine and game logic runs on the CPU out of system memory, because it’s essentially a sequential program that is very linear and arbitrary, and because the CPU has many tools in its toolbox for different tasks, like AVX, SSE, and so on. Most of the visual stuff, like 3D transformation, shading, and sampling, takes place on the GPU, because it’s high-bandwidth and highly parallel: the individual cores are simple, but you have many of them and they can operate independently.
RAM is very useful but is always limited by console tech. It is particularly important in more interactive, sandboxy games, stuff like voxels, and it also comes in handy when running sim or RTS games. Engines are usually designed around console specs so they can release on those platforms. System RAM can be used for anything, even rendering, but it is extremely slow compared to GPU memory in actual bandwidth. GPU memory usually sits less than an inch away from the actual GPU and has a large bus interface, something like 128-512 bits. That’s how many physical wires connect the memory chips to the GPU, and it limits how much data you can send in one chunk or cycle. With a 64-bit interface you can only send one 64-bit word at a time; a 256-bit bus can move four of those words in one transfer for a 4x speedup, and a 512-bit bus eight of them for 8x.
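The bus-width arithmetic above can be written out explicitly (illustrative numbers, not the specs of any real card):

```python
# Bytes moved per transfer scale linearly with bus width, so a 512-bit
# bus moves 8x what a 64-bit bus does in one transfer.
def bytes_per_transfer(bus_width_bits):
    return bus_width_bits // 8

# Effective bandwidth = transfers per second * bytes per transfer.
def bandwidth_gb_s(transfer_rate_hz, bus_width_bits):
    return transfer_rate_hz * bytes_per_transfer(bus_width_bits) / 1e9
```

So at the same transfer rate, widening the bus from 64 to 512 bits multiplies bandwidth by eight, which is exactly why GPU memory packages sit right next to the die on a wide interface.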
So you have high-bandwidth, high-latency memory on a wide bus feeding a very predictable set of many simple processors. Usually when you want to load memory onto the GPU you have to prepare it with the CPU and send it over the PCI bus. This is far too slow for system RAM to actually augment GPU RAM: it’s slow in both latency and bandwidth, so if you tried, your GPU would sit idle something like 80% of the time waiting on packets, and then it would only get a 64- or 128-bit packet from the RAM at a time, not to mention the CPU overhead of constantly managing the memory in real time.
Having high RAM requirements wouldn’t be the worst thing in the world, because RAM is cheap and can really help some types of games with large, complex worlds and lots of physics and things happening. Not optimizing for GPUs is pretty bad, though, especially with prices these days. High RAM usage won’t happen much, because games tend to be written in languages like C++ which manage memory in a very low-level way, so they take just about as much as they need. One of the biggest reasons you use a language like C++ to write game engines is that you can decide how and when to allocate and free memory. This prevents stuttering: if the system is handling memory you tend to get a good deal of stutter, because the CPU will get loaded for half a second here and there as the garbage collector tries to free 2 GB of memory or something. This tends to make game engines very structured about the amount of memory they use. Since they are mostly trying to reuse code as much as possible, and are targeting consoles, they usually just aim for the amount of RAM they know they will have on consoles. Things like extra draw distance on PC can use more memory.
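A minimal sketch of the “decide when to allocate” point, assuming a hypothetical particle system: everything is allocated up front, and during the frame the pool only hands out and reclaims slots, so no allocator (or garbage collector) ever runs mid-frame.

```python
# Preallocated object pool: all storage is created in __init__, so
# spawn/kill during gameplay never trigger a new allocation.
class ParticlePool:
    def __init__(self, capacity):
        self.free = list(range(capacity))   # indices of unused slots
        self.x = [0.0] * capacity           # preallocated attribute array
        self.alive = [False] * capacity

    def spawn(self, x):
        if not self.free:
            return None                     # pool exhausted: drop the particle,
        i = self.free.pop()                 # never allocate mid-frame
        self.x[i] = x
        self.alive[i] = True
        return i

    def kill(self, i):
        self.alive[i] = False
        self.free.append(i)
```

C++ engines do the same with arenas and custom allocators; the point is that the memory footprint is fixed and decided up front, which is why engine RAM usage tends to track the console target so closely.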
LODs can be generated in real time, but this is slow. You can do nearly anything with code; the question is whether it’s a good fit for your application. In a game engine every cycle is precious. You are updating the entire scene, moving all your data, preparing a frame, resolving all interactions, running scripts, and everything else in just over 16 ms for 60 FPS. The amount of data your PC is processing in just 16 ms will blow your mind, usually across 3-12 passes in the renderer.

A very simple engine will draw a Z-buffer: within that 16 ms it determines the distance to the closest object behind every pixel, then uses that data to figure out what actually needs to be drawn. Then it takes those objects and resolves the normals, basically figuring out whether each polygon faces towards or away from the player, which cuts out rendering the vast majority of polygons. Then the lighting data and everything else is combined with this and sent to the GPU, which goes through the list of polygons that need to be drawn, looks up the points to draw them, casts rays from the light sources, and shades the scene. That is very simple, basically a Quake- or Doom-like game. Modern games are much more complex: they draw each frame many times with many different buffers, generating different data in each pass and using it for the next.

Generating LODs in real time is just something that isn’t done unless it’s needed for some reason, like dynamic terrain or voxel terrain. In a game that is mostly static geometry, there is not really any reason to give up that compute time when you can just pregenerate them. Real-time LOD generation is quite a process: you have to load a region into memory, reduce its polygon count, downsize the texture, generate a new mesh and texture, and place it in the world, with back-and-forth between the GPU and CPU. Some games do it, however.
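The 16 ms figure is just frame-rate arithmetic; a tiny helper (hypothetical names, my own sketch) makes the budget concrete:

```python
# At 60 fps, each frame gets 1000/60 ≈ 16.7 ms for everything:
# simulation, scripts, physics, and render preparation combined.
def frame_budget_ms(fps):
    return 1000.0 / fps

# True if the listed per-stage costs (in ms) fit inside one frame.
def fits_in_frame(stage_times_ms, fps=60):
    return sum(stage_times_ms) <= frame_budget_ms(fps)
```

This is why something like on-the-fly LOD generation is usually pushed off the main thread: any stage that occasionally eats several milliseconds blows the whole budget and shows up as a stutter.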
7dtd, space engineers, Minecraft with a distant terrain mod, and I’m sure many others generate LODs on another thread, but these are usually fairly low quality meshes.
There aren’t enough monster truck owners to support his game. If he gets his wish, Gearbox is going to lose a whole lot of money
The reality is that it is a mass market game. It needs mass market adoption. Currently much of the market is locked out due to performance issues
Everyone without a 5090 should immediately refund the game and use these remarks as the justification.
Can we please standardize a pc parts price point? (Not sure I’m saying this right)
Like, “it doesn’t matter where technology is, $600 gets you ‘low’, $1000 gets you ‘high’, and $2000 gets you ‘ultra’.”
$2000 gets you just a high end graphics card. >.>’
Yeah so, either the prices need to come down, or devs optimize for those price points. Or both. Because two thousand dollars for a gpu is ridiculous.
When everyone stops chasing cryptocoin and AI the prices will come down.
As long as people will pay 2k for a video card, they’ll charge 2k for a video card :(
The market doesn’t discern between gamers, cryptobros, and corporations.
I thought we were about to have a break when everything went ASIC, but that just didn’t last.
No one will ever stop chasing AI. It’s the holy grail of corporate efficiency, just you… your shareholders… and some unpaid robot slaves. The dream.
What’s with this “you” middle man? Let’s just get rid of that and have shareholders-robots.
Eventually, the bubble will burst.
Venture Cap paid for the first round of hardware; it has to make real money for the second round.
Once the token price rises to the actual cost of buying the ephemeral hardware it’s running on, no one will want to use it for the hard stuff.