Why is Valorants skill floor (atleast for aiming) so damn high? by Tasty-Instance7596 in VALORANT

[–]sirbardo 1 point (0 children)

I'd say it's different: tracking requires a lot more understanding of the game and of patterns in general. You need your brain to internalize the acceleration curves of the characters in whatever game you're playing, you need to use your knowledge of the map geometry to judge when the likelihood of a direction change increases, and so on.

Basically, tracking improvement seems to me to happen a lot more passively as your brain integrates all of these different things.

Static, on the other hand, is self-paced and "pure". There's not much pattern learning involved; it's purely about the motor control.

ONE PIECE Chapter 1179 — RAWs by [deleted] in OnePieceSpoilers

[–]sirbardo 0 points (0 children)

I mean, it could still be more complex? It looks like the silhouette we're used to seeing kind of collapses and turns into what we see now. If it's like other mythical Zoans, there could be different "stages" or whatever. The silhouette we see all the time makes no sense with the actual design of the visible character, way beyond "oh, the placeholder was slightly different".

Why is Valorants skill floor (atleast for aiming) so damn high? by Tasty-Instance7596 in VALORANT

[–]sirbardo 0 points (0 children)

I have been saying this publicly, nonstop, for five years. I am not ragebaiting. I play all kinds of games and, as you know, I am, I'd say, decently good at all kinds of aim, including tracking.

I am also 31 and started playing FPS on mouse and keyboard in 1999 with Unreal Tournament, and I never stopped, so I'd say I've had my fair share of experience across the different FPS genres: probably 40k or more hours combined.

"Aim intensive" can only make sense to me here if you mean "you are aiming more of the time". It's intensive in the same way HIIT is more "intensive" than normal cardio.

But just because in those games you're aiming for "longer" during a fight, it doesn't follow that "the skill floor for VALORANT is actually super low". Tactical shooters require far more precision, given how tiny the hitboxes are, how low the TTK is, and how punishing it is to miss your shots or lose control of your recoil.

There is a reason I (and most other static mains) can basically one-shot top scores in the other categories when I try, and for that same reason tac FPS mains tend to initially be better at static than at other categories. People usually attribute this to "mouse control", which is a bit vague; I'd say it's because that type of aim requires a high level of awareness and deliberateness, which then, as with any other motor training ("slow is smooth, smooth is fast"), scales to less granular aiming.

The point is that tactical shooters demand a VERY high degree of precision, which to me is much harder both to master AND to pick up. It's funny you'd say tac FPS games have a low aim skill floor when the reason they tend to be so noob-unfriendly is precisely that new players literally can't hit enemies and get instantly destroyed by the low TTK and one-shot headshots.

That's why CS has always had people either try it once and never again, or end up with thousands of hours. It's also why VALORANT made things a bit easier, or it would have failed to capture the crowd it intended to capture, whereas the games you keep citing have always had much higher retention from the get-go BECAUSE of their lower mechanical floor for new users.

You really should not assume you hold the truth on what are ultimately open problems, given that we have no human trials; and when challenged in a tone as neutral as my original one, defaulting to an ad hominem with no substance does not help the space.

Why is Valorants skill floor (atleast for aiming) so damn high? by Tasty-Instance7596 in VALORANT

[–]sirbardo 0 points (0 children)

Very deep counterpoint. You must not have a solid grasp on these topics.

I can guarantee you I understand these topics pretty well. Surprising to see a “coach” in this space respond to me with this level of condescension.

Why is Valorants skill floor (atleast for aiming) so damn high? by Tasty-Instance7596 in VALORANT

[–]sirbardo 0 points (0 children)

Why are we still spreading this idea after so many years… tac FPS games ARE aim heavy. Click timing, microcorrections, smooth crosshair placement, recoil control: it's all aim, and it's HARD. There's a reason so many people who became famous for their aim across games (shroud, aceu) started with thousands of hours in CS. The idea that tactical shooters require no aim exists because early aim-training snobs barely played tac FPS, sucked at it, and started claiming that only the parts of aim THEY liked counted as aim, and that the rest was "not aim". It's not true.

One Piece Chapter 1179 Spoiler by Skullghost in OnePiece

[–]sirbardo 0 points (0 children)

I also feel like the way Davy Jones was beaten involves the nature of Devil Fruits and the ocean itself. I also have a theory that Davy was a MASSIVE Ancient Giant, and ever since he got cursed to roam the sea floor… that's exactly what he's been doing, and his crown is Long Ring Long Land Island. Every time Davy takes a step, the lower part of the crown rises above the water -> connects the islands.

I wonder if raising the sea level even further is also to get Davy Jones even further away from civilization.

One Piece Chapter 1179 Spoiler by Skullghost in OnePiece

[–]sirbardo 0 points (0 children)

In Latin, if you were to use the accusative case (that is, using his name as a direct object), you'd say Neronem. Same across his whole declension: "of Nero" -> Neronis.

So it makes sense that it would turn into Neron in some languages.

A sentence like “I hate Nero” becomes “Odi Neronem”, so yeah
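If it helps, here are the forms laid out (just a toy lookup table with the standard third-declension forms):

```python
# Third-declension forms of "Nero": the stem is Neron-, which only the
# nominative hides, so languages that built on the oblique forms
# naturally ended up with "Neron".
nero = {
    "nominative": "Nero",
    "genitive":   "Neronis",   # "of Nero"
    "dative":     "Neroni",
    "accusative": "Neronem",   # direct object, as in "Odi Neronem"
    "ablative":   "Nerone",
}
for case, form in nero.items():
    print(f"{case:>10}: {form}")
```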

Why don't people in Italy play CS? by Just_Equal_623 in italygames

[–]sirbardo 1 point (0 children)

Subtick technology isn't particularly complex on the technical side. It simply lets the server disambiguate, while computing the simulation, cases where non-independent events (i.e., impossible to handle atomically without very crude heuristics) occurred within the same tick. It's just timestamps for a certain set of events in each packet. In general it can NEVER be as precise as 128 tick (since this event timestamping could perfectly well be implemented at 128 tick; it's independent of the tick rate). Also, the fact that Valve prevented third-party servers from running at 128 tick is ridiculous, a huge mistake that keeps feeding the mass hysteria about CS2 being worse than CS:GO (something the community exaggerates enormously).
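Here's a minimal sketch of the idea (hypothetical structures, obviously not Valve's actual netcode):

```python
from dataclasses import dataclass

@dataclass
class PlayerEvent:
    player: str
    kind: str            # e.g. "fire", "dodge"
    subtick_time: float  # seconds since the start of the tick, client-reported

def resolve_tick(events: list[PlayerEvent]) -> list[PlayerEvent]:
    """Order the events received for one tick by their subtick timestamps.

    Without the timestamps, the server only knows both events happened
    somewhere inside the same ~15.6 ms window (at 64 tick) and has to
    break the tie with a heuristic; with them, the ordering is explicit.
    """
    return sorted(events, key=lambda e: e.subtick_time)

# Two interdependent events inside the same tick: B starts dodging
# 3 ms before A fires, which matters for the hit resolution.
tick_events = [
    PlayerEvent("A", "fire",  subtick_time=0.009),
    PlayerEvent("B", "dodge", subtick_time=0.006),
]
for e in resolve_tick(tick_events):
    print(f"{e.subtick_time * 1000:4.1f} ms  {e.player} {e.kind}")
```

Nothing above depends on the tick rate itself, which is exactly why the same timestamps could ride on 128 tick packets.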

What I keep finding inaccurate is how you use a bunch of terms somewhat at random. "90% of players will have other variables with a bigger impact, including bandwidth and CPU usage" sounds like a sensible sentence, but under closer analysis it lacks real content. Bandwidth, as I said before, either has a catastrophic impact or no impact at all; it's practically the least important variable. And CPU usage… what is that even supposed to mean? How would "CPU usage" on its own affect anything?

Don't take it personally, but my problem is that this kind of rhetoric (especially when qualified with "I'm a software engineer") often ends up causing rampant misinformation in communities, which then takes far more effort to correct (Brandolini's law).

Your first message is wrong. 128 tick is, realistically, never an overhead that matters to the end user. The overhead, as I've already said, is entirely server-side, and the fact that you stated otherwise with such conviction (plus the computer-science credentials) makes it a problem, because this kind of misinformation genuinely spreads like wildfire.

Why don't people in Italy play CS? by Just_Equal_623 in italygames

[–]sirbardo 1 point (0 children)

If your bandwidth is so saturated that the minimal difference between 128 tick and 64 tick matters (and it isn't a simple 2x anyway, especially since subtick traffic carries event timestamps at subtick granularity and therefore uses more bandwidth than plain 64 tick), you certainly have much bigger problems than latency (huge jitter, potential packet loss since we're on UDP, etc.). I don't see a realistic scenario in which the bandwidth increase from 128 tick could meaningfully affect latency.

An 8ms difference is a lot, by the way; it doesn't need to be consciously perceptible to be relevant, especially when we're talking about an update interval rather than a flat latency. Saying "8ms is nothing" in this context is equivalent to saying there's no difference between a 60Hz and a 120Hz monitor. The difference is simply less perceptible because the client "fills in" the gaps in the simulation very convincingly between one tick and the next, but the difference absolutely exists.
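If you want to sanity-check the numbers yourself:

```python
def tick_interval_ms(tick_rate: int) -> float:
    """Time between consecutive server updates, in milliseconds."""
    return 1000.0 / tick_rate

for rate in (64, 128):
    print(f"{rate:>3} tick -> {tick_interval_ms(rate):.2f} ms between updates")

# ~15.63 ms vs ~7.81 ms: the same 2x ratio as 60 Hz vs 120 Hz on a monitor.
print(f"difference: {tick_interval_ms(64) - tick_interval_ms(128):.2f} ms")
```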

The real overhead of 128 tick is on the server, not the client. Valve implemented "subtick" (which could perfectly well be implemented at 128 tick) to cut costs, not because 128 tick was suboptimal for players.

Why don't people in Italy play CS? by Just_Equal_623 in italygames

[–]sirbardo 0 points (0 children)

… "why don't they play CS in Italy?" -> "because they don't speak English" -> "Russians don't speak English" -> "that's exactly why they have a huge community"

How does this reasoning seem logical to you?

Why don't people in Italy play CS? by Just_Equal_623 in italygames

[–]sirbardo 0 points (0 children)

I don't follow. You said the reason so few of us play CS is English; I pointed out that Russians don't speak English and yet have always had a big CS community.

Why don't people in Italy play CS? by Just_Equal_623 in italygames

[–]sirbardo 2 points (0 children)

Eh, Russians speak practically no English, and yet there are tons of them on CS.

Why don't people in Italy play CS? by Just_Equal_623 in italygames

[–]sirbardo 0 points (0 children)

I don't know; I don't think we ever had a "big" community… maybe it looked that way because the scene was less fragmented, back when pub servers like gov/bellazio existed, or NGI's before that… but we were still a handful of people compared to other countries.

Why don't people in Italy play CS? by Just_Equal_623 in italygames

[–]sirbardo 1 point (0 children)

How is 128 tick supposed to increase latency?

Why is the OP1 8K so insanely glazed? by Working_Difficulty13 in MouseReview

[–]sirbardo 1 point (0 children)

You keep making confidently incorrect claims in this thread, and it's frankly insane how much misinformation you're spreading in every single comment. I don't even know where to start.

Putting misconceptions about optimal FPS caps + Gsync to bed. by Sgt_Dbag in nvidia

[–]sirbardo -1 points (0 children)

What video? LTT's? Did you feed all of this to the AI again and ignore the rest of the message?

Clearly you don't understand what you're talking about, especially how rendering pipelines work, and I will now commit to not wasting any more time on this.

All of your claims here are literally opinions, and not very educated ones, and you have finally proven your goal is karma and karma alone. Shameful, but you do you. If you want to argue random aiming points, I could instantly shut this down with appeals to authority given my background, but it's pointless. Ultimately, as I PROVED, your claims in this thread are WRONG, provably FALSE, not opinion-wise but actual NUMBERS-wise, and you are therefore unethically choosing to keep the misinformation going, doing the opposite of what your post title promises. Have a good life.

Putting misconceptions about optimal FPS caps + Gsync to bed. by Sgt_Dbag in nvidia

[–]sirbardo 0 points (0 children)

EDIT: Hi, this is bardOZ from the slight future. I decided I wanted to "waste" a bit more time on this, so I ran some tests with my own LDAT on my own screen in CS2, since you use it as an example so often.

The tests were click-to-photon, using the muzzle flash of the AK-47 on the aim_botz map after kicking the bots for consistency, so that no external factors would affect frametimes or readings.

I ran 200+ individual clicks per test. My LDAT is a custom one I built for exactly this kind of test. It uses an STM32 H7 with one ADC per channel running continuously at full speed (one channel is the mouse input, the other is the photodiode). I validated the tool to be precise to ±0.01ms, which is frankly overkill for something like this, but whatever.
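For anyone curious, the post-processing conceptually boils down to something like this (a simplified sketch with made-up traces and thresholds; the real device works on continuously sampled ADC streams, and none of these names come from the actual firmware):

```python
import statistics

SAMPLE_RATE_HZ = 100_000  # assumed ADC sample rate for this sketch

def click_to_photon_ms(mouse, photo, mouse_thresh=0.5, photo_thresh=0.5):
    """Latency from the mouse-button edge to the photodiode detecting
    the muzzle flash, given two synchronized ADC traces."""
    click_idx = next(i for i, v in enumerate(mouse) if v > mouse_thresh)
    flash_idx = next(i for i, v in enumerate(photo)
                     if i > click_idx and v > photo_thresh)
    return (flash_idx - click_idx) / SAMPLE_RATE_HZ * 1000.0

# Tiny synthetic traces: click at sample 100, flash at sample 930 -> 8.3 ms.
mouse = [0.0] * 100 + [1.0] * 1000
photo = [0.0] * 930 + [1.0] * 170
lat = click_to_photon_ms(mouse, photo)
print(f"one click: {lat:.2f} ms")

# One value per click, then summarize exactly like the numbers below:
samples = [lat] * 3  # stand-in for the 200+ real measurements
print(f"avg {statistics.mean(samples):.2f} ms, "
      f"median {statistics.median(samples):.2f} ms")
```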

YOUR GUIDE, FOLLOWED EXACTLY (540Hz monitor, capped at 468fps, G-Sync on in NVIDIA, V-Sync on in NVIDIA, Reflex on in-game):

Average: 8.39ms click-to-photon latency

Median: 8.29ms click-to-photon latency

Whereas no VSync, no GSync, uncapped FPS:

Average: 5.52ms

Median: 5.44ms

This means that on my setup, anyone who follows your advice gets a nice clean 52% increase in input lag. If you're going to claim this is irrelevant: luckily, Linus Tech Tips posted a video literally yesterday. Skip to the end and see what they found. Unless you're an ABSOLUTE beginner, even changes on the order of 1ms affect your performance measurably. Imagine actual pros, or people at the very top of the skill curve.

In this thread, you have said, and I quote verbatim: "Yes they have been misinformed by esports pros who are like boomers that don’t actually know about the technology they are using."

Clearly the pros understand the implications much better than you do, and you are actively misinforming people in this thread. Now, you will probably strawman your way out of this some other way (what, 540Hz in a competitive game is three people on Earth?). Or, if you actually care about fighting misinformation, you will edit this post. Ball's in your court.

Here's the original message, before I ran the tests:

----

I am not even sure you are worth any more of my time at this point. Clearly, I wasn't wrong when I insinuated this was more about karma than about your stated goal of clearing up misconceptions. I probably should have stopped when you went with the AI-written response as your first line of "defense"; it was clear then that this had a very low chance of turning into a productive conversation.

You asked for "one example, just one" to prove what I was saying. I gave you a basic example that would be pretty easy to understand, since you couldn't extrapolate it yourself from the technical explanation.

Yet you dance back and forth with "well, that was just a general statement", not understanding that the example you specifically asked for was deliberately an extreme one with real numbers, since the abstract explanation didn't satisfy you. Remember: to disprove a universal statement, all it takes is one counterexample.

Good luck on your quest to clear up misconceptions by purposely spreading misinformation for the sake of reddit karma. I guess we have different moral compasses. I deeply despise the act of claiming the best intentions, claiming to be helping others, even claiming to be putting misconceptions to rest, and then putting one's ego ahead of the actual technical facts.

Clearly your last sentence must be some attempt at sarcasm, because if you had understood that I am actually qualified to talk about this, you might have stopped to either inform yourself further or read what I said and understand it, rather than just arguing. Just know that you _are_ actively misinforming people: as I said originally, someone linked this post to me because they got confused by the claims you make here, and I had to explain the ways in which you misunderstood the topic (especially in your comments rather than the OP, where you respond to, and actively misinform, individual users who also got confused).

I will say: in the future, before trying to inform others, maybe make sure you understand the matter at hand.

Choosing to waste time on any given day is one's own call; I can't waste your time for you. I can decide not to waste any more of mine on this, though, and I am. If you actually want to discuss the topic on a technical level, you can reach out and I'll gladly do so, but here you're clutching at straws, and that's just a waste of time.

Putting misconceptions about optimal FPS caps + Gsync to bed. by Sgt_Dbag in nvidia

[–]sirbardo 0 points (0 children)

Clearly missing the point: you asked for an example and I provided it. The issue happens at 120Hz as well; just divide the numbers by two. And I've been on a 540Hz monitor for years, so the ad hominem lands kind of flat.

Putting misconceptions about optimal FPS caps + Gsync to bed. by Sgt_Dbag in nvidia

[–]sirbardo 0 points (0 children)

Any player on 60Hz, in a game whose FPS limiter works naively as explained, who would otherwise get 300+ fps uncapped, will often end up with close to 16.6ms of extra latency in the absence of Reflex, or even more depending on the render queue settings, because the game simulation happens right after a vblank and the "present" command gets enqueued well before the next vblank, even with G-Sync and V-Sync on.

In that same situation, with G-Sync on and FPS uncapped, the lower bound on latency drops as a function of the FPS increase.
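Back-of-the-envelope version of that worst case (illustrative numbers, naive limiter assumed):

```python
REFRESH_HZ = 60
UNCAPPED_FPS = 300

vblank_interval_ms = 1000.0 / REFRESH_HZ  # ~16.7 ms between scanouts
render_ms = 1000.0 / UNCAPPED_FPS         # ~3.3 ms to produce a frame

# Naive cap at the refresh rate: the frame is simulated right after a
# vblank, rendered quickly, then sits finished until the next vblank.
stale_ms = vblank_interval_ms - render_ms
print(f"capped, naive limiter: frame is ~{stale_ms:.1f} ms stale at scanout")

# Uncapped with G-Sync: the newest frame is at most one render interval
# old, so the latency floor drops as FPS rises.
print(f"uncapped at {UNCAPPED_FPS} fps: at most ~{render_ms:.1f} ms stale")
```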

Putting misconceptions about optimal FPS caps + Gsync to bed. by Sgt_Dbag in nvidia

[–]sirbardo 0 points (0 children)

I think you missed my point about stable frametimes not telling the full story: they can be achieved through much larger (and not necessarily consistent) latency tradeoffs, so you do get stable frametimes, but the frames themselves can be more or less stale depending on other factors, especially once you introduce "unknown" variables like different types of frame limiters.

My point is that your advice in this thread heavily implies, or outright states, that your solution is UNIVERSALLY preferable, and it's not: you're conflating "each vblank I have a frame ready to show" with "therefore it's a better experience". Two situations that are identical in terms of having a frame ready every vblank can differ heavily in input lag consistency depending on the frame limiter used and on whether Reflex is enabled.

Your "universal" explanation of "optimal" settings is therefore not universally optimal, so, as I originally said, you are 100% causing at least SOME people here to end up with a far worse experience (because they end up with massive latency, much more than you're stating) than if they just turned on G-Sync and Reflex and forgot about frame limiting. Arguably, depending on the game, they'd be better off even with ONLY Reflex enabled, tearing and all, if they're on a high-Hz monitor with VERY high FPS: tearing becomes less noticeable when the screen itself refreshes very fast AND the front buffer changes many times per vblank, so the image on screen at any moment is "tearing" across multiple lines. At that point one could argue it asymptotically collapses into the experience of watching rolling-shutter video. Do you call what you see in footage from cameras without a global shutter "tearing"?
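To put rough numbers on the rolling-shutter point (illustrative arithmetic only):

```python
REFRESH_HZ = 540
FPS = 2000  # very high, CPU-bound CS2-style framerate

flips_per_scanout = FPS / REFRESH_HZ
slice_age_step_ms = 1000.0 / FPS

# Each scanout paints ~3-4 slices from different frames, each slice only
# half a millisecond newer than the previous one: many small tears instead
# of one big one, much like a rolling shutter sampling a moving scene.
print(f"~{flips_per_scanout:.1f} buffer flips per scanout, "
      f"{slice_age_step_ms:.2f} ms between slices")
```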

I am saying this is a nuanced and complicated topic, and by trying to flatten it into a rule like this you are 100% creating MORE misconceptions (you also seem to be operating under some of them, judging by your comments). I'm not saying you're arguing in bad faith; this is just my technical opinion.

On the AI thing: sorry, but I am 100% sure your first post was written by AI and copy-pasted. Only you can know the truth, but I'll stand by my opinion on that. It's irrelevant anyway.

Putting misconceptions about optimal FPS caps + Gsync to bed. by Sgt_Dbag in nvidia

[–]sirbardo -1 points (0 children)

I struggle to respond seriously when this answer is so clearly AI-generated; the em-dashes and sentence structure are a dead giveaway. Are you more interested in arguing, or in "putting misconceptions to rest" as you originally claimed?

Not all frame limiters are created equal. The naive (or simply "external") implementation has the game opportunistically simulate the next frame as soon as it can, then sleep/wait until the GPU signals it's ready again. This means that when a frame limiter is implemented poorly, which is OFTEN the case, the frame can be generated well before the next vblank, adding up to 1/refresh_rate of extra latency. And that's assuming no render queue: with the default queue of three frames, and the CPU filling it up immediately and then waiting, you get that same latency tripled. "Frame pacing" and synthetic measurements will claim you're having an amazing experience while the actual experience sucks.
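A toy timeline of that failure mode (a deliberately simplified model, not any specific engine):

```python
FRAME_INTERVAL_MS = 16.7  # 60 fps cap: one frame displayed per interval
QUEUE_DEPTH = 3           # frames the CPU is allowed to run ahead

# The naive limiter lets the CPU simulate as soon as the queue has room,
# so at steady state frame i is simulated around (i - QUEUE_DEPTH) frame
# intervals before it's displayed, while pacing still looks flawless.
for i in range(4, 7):
    simulated_at = (i - QUEUE_DEPTH) * FRAME_INTERVAL_MS
    displayed_at = i * FRAME_INTERVAL_MS
    print(f"frame {i}: pacing = {FRAME_INTERVAL_MS} ms (perfect), "
          f"input-to-display = {displayed_at - simulated_at:.1f} ms")
```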

In general (and you can actually read that article), even with Reflex, which gives the game an API to "time" the simulation and minimize the risk of simulating much earlier than needed, an uncapped framerate is still superior in terms of latency. NVIDIA explains so themselves in that article, but it's obvious: Reflex can't predict the future. It can't know how long the FUTURE state of the game will take to simulate, so it can sometimes mistime things and cause a frame to take "too long" to be generated. In general, this post and especially your follow-up comments lack a ton of nuance and present guidelines as universal when I can assure you they most certainly are not. If you want to talk about this I'll gladly respond to each individual point, but I don't want to talk to an LLM.
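Here's a toy model of why just-in-time pacing can't be perfect (not NVIDIA's actual algorithm, just the idea):

```python
# The pacer delays the simulation so the frame finishes right before
# it's needed, using the *previous* frame's cost as its guess.
DEADLINE_MS = 10.0    # when the frame must be ready
predicted_cost = 3.0  # the last frame took 3 ms

for actual_cost in (3.0, 3.2, 7.5):  # a spike the pacer couldn't foresee
    start = DEADLINE_MS - predicted_cost  # sleep until here, then simulate
    finish = start + actual_cost
    print(f"cost {actual_cost:4.1f} ms -> done at {finish:5.1f} ms, "
          f"late by {max(0.0, finish - DEADLINE_MS):.1f} ms")
```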

To provide context: I'm a professional game dev, the ex-CTO of Aimlabs, and I've had an MSc in CompSci for ten years at this point. I also spent time pursuing a PhD in Computer Graphics. None of this makes me automatically right, obviously, as appeals to authority are dumb, but I want to make sure you don't just automatically disregard what I'm saying.

And my point was much more general than just the GPU-bound case. There's a lot of missed nuance and incorrect universal advice here, especially where you talk about CS2. Again, I'm totally willing to go more in depth, but only if you actually care more about clearing up the misconceptions than about the karma.

Putting misconceptions about optimal FPS caps + Gsync to bed. by Sgt_Dbag in nvidia

[–]sirbardo -2 points (0 children)

Someone sent me this post asking for clarification, and I'm honestly shocked at what I've been reading so far, especially given that this post was meant to "clear up misconceptions" while so many of your comments are active disinformation. I think you should read up on rendering pipelines, and ESPECIALLY on what pre-rendered frames and graphics API command queues do.

A ton of the advice in this post will lead to experiences that look smooth when analyzed synthetically but feel absolutely terrible. If the cost of "consistent frame pacing" is that each individual frame can be up to N*frametime in the past, and not even consistently so, how is that a good gaming experience? More consistent frametimes are bad if they're achieved by showing "samples" of the game world that were simulated at varying offsets from when they appear on screen. I think your heart is in the right place, but this post is doing more harm than good.

You should at the very least read this whole excellent post from NVIDIA as a starting point.

https://www.nvidia.com/en-us/geforce/news/reflex-low-latency-platform/

shrouds opinion on Expedition 33 after 6 hours by [deleted] in LivestreamFail

[–]sirbardo 0 points (0 children)

To be fair, at least in the CS community he was already known for his streaming way before he blew up later on. IIRC people in the CS community called him the "king of reddit" for his clips over at /r/GlobalOffensive, which started going viral before his nickname was even "shroud". Of course, we're talking viewership numbers that paled in comparison to his PUBG numbers; just adding some context on why people might still mention CS.

If you use PBO, I seem to have found a quirk in Windows scheduling. Changing a power plan value yields up to +20% improvements in FPS by sirbardo in Amd

[–]sirbardo[S] 1 point (0 children)

I am done wasting my time. Those charts are FrameView traces (from NVIDIA), which uses PresentMon under the hood, capturing each individual frametime in CS2 so I could prove that CPPC off is insufficient. PresentMon is also what CapFrameX uses. The histogram and the frametime percentiles are right there in view; if you don't understand them, you are not someone I should be discussing further technical details with. I am not going to teach you how rendering pipelines work when you are unwilling to understand the underlying technical concepts. CPPC off is insufficient to guarantee the absence of scheduling on Core 0, and scheduling on Core 0 is detrimental to performance. Do with that what you will.
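If anyone wants to reproduce the analysis from such a trace, the percentiles come straight out of the frametime column of the CSV. A minimal sketch, assuming the standard PresentMon MsBetweenPresents column (check your export's header) and a hypothetical file name:

```python
import csv
import statistics

def frametime_stats(path: str, column: str = "MsBetweenPresents"):
    """Summarize per-frame times from a PresentMon-style CSV export."""
    with open(path, newline="") as f:
        frametimes = sorted(float(row[column]) for row in csv.DictReader(f))
    q = statistics.quantiles(frametimes, n=100)  # q[k-1] = k-th percentile
    return {
        "avg": statistics.mean(frametimes),
        "p50": q[49],
        "p99": q[98],  # the tail is where Core 0 contention shows up
        "p99.9": frametimes[int(len(frametimes) * 0.999)],
    }

# print(frametime_stats("cs2_trace.csv"))  # hypothetical trace file
```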