VR has much more stringent latency requirements than normal gaming, because head movement needs to stay in perfect sync with the world “staying still”. The manual calibration needed for this is around 8ms, which compensates for the render/presentation time of a single frame at 120fps.
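For reference, the per-frame budget at a given refresh rate is just 1000 / Hz milliseconds; a quick sketch (the refresh rates listed are just common examples):

```python
# Time available to render and present one frame, in milliseconds
for hz in (72, 90, 120, 144):
    print(f"{hz} Hz -> {1000 / hz:.2f} ms per frame")
```

At 120 Hz that works out to ~8.33 ms, which lines up with the ~8 ms calibration figure.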
It sounds like a nitpick, but I can tell you from experience that it makes a very significant difference.
At its most basic level, Slitterhead is clearly an action game.
Ok, so it’s not a horror game.
This is the same mistake people make with titles like Dead Space… They sell it as horror, but that just makes it disappointing for people who like actual horror games.
I ended up having to recreate it from memory in a DAW and used Google Assistant to identify it
So you’re telling me to stop paying my bills in protest of Google Chrome, because I’m not technically being forced to pay my bills
What a privileged life to lead, where you aren’t forced to use certain websites by (e.g.) your student loan servicer, or your utility company
This sounds like it was written by ChatGPT
Still waiting for them to fix latency reporting for AMD 7000 series GPUs
https://steamcommunity.com/app/250820/discussions/3/3802777845426075295/
HTTP is stored in the balls
Just because the other kids are doing it, that doesn’t make it ok
A delayed game is always better than a rushed game, thank you WB Games for letting the developers deliver something they’re happy with
stop dismissing performance questions
I did not dismiss it; I said to measure the performance yourself.
Performance matters, learning about performance matters
Which is why I said you should measure performance. It’s no use waffling about unmeasurable performance gains.
Did they ask if they should optimize, or did they ask which one generates more performant assembly?
To be pedantic, GDScript compiles to bytecode that is interpreted by the engine; it never generates native assembly. This means that the code’s performance is highly dependent on runtime conditions, and needs to be measured in the place where it’s used.
Maybe they already measured and already know this is a bottleneck.
If they already measured, then they would know which one is faster, because they measured it.
I swear half the reason every piece of modern software runs like shit is because nobody bothered to learn how to optimize
This is unrelated to what I said, which is “you should measure your performance to see what you need to optimize”.
There’s tons of little “premature” optimizations that you can do that aren’t evil.
And all of these optimizations are just as effective after you measure them to see if they’re needed, and they’re no longer premature.
Estimating time complexity and load size
Accurately estimating the performance impact of a design choice means the optimization is no longer premature. The rule of thumb is about using optimizations without taking appropriate time to assess their overall performance benefit. The particular question asked by the OP is very unlikely to have any significant performance impact at all, unless it’s in an extremely hot loop running millions of times per frame, at which point you should measure it to see which one is faster in your use case.
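A sketch of what “measure it in your use case” looks like in practice. Python’s `timeit` stands in here since GDScript profiling follows the same principle (time both variants under realistic conditions); the two snippets being compared are made-up examples, not the OP’s actual code:

```python
import timeit

# Two hypothetical ways of producing the same result
loop_version = "result = []\nfor i in range(100): result.append(i * 2)"
comp_version = "result = [i * 2 for i in range(100)]"

# Run each many times so the measurement isn't dominated by noise
t_loop = timeit.timeit(loop_version, number=10_000)
t_comp = timeit.timeit(comp_version, number=10_000)

print(f"loop: {t_loop:.4f}s  comprehension: {t_comp:.4f}s")
```

Whichever number comes out lower on your machine, in your actual workload, is the answer; guessing from the source alone isn’t.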
Rule #1 of programming: Write good code first, then measure performance.
I’m a sucker for this whole album https://open.spotify.com/album/5XPdkIryKSpTKW21HUtvV0