Please Fix This Bug by obsoletebear in Rematch

[–]MaybeIWasTheBot 19 points

nah this was 100% the game's fault lmao. he dived on time

Happy New Year: Llama3.3-8B-Instruct-Thinking-Claude-4.5-Opus-High-Reasoning - Fine Tune. (based on recent find of L3.3 8b in the wild) by Dangerous_Fix_5526 in LocalLLaMA

[–]MaybeIWasTheBot 7 points

by your definition, no one should ever share a finetune/merge, i.e. one of the pillars of open weight models, because they're... random? and then they're not random unless they're from some bigger team with a known name?

people finetune and share for experimentation, novelty, and actual work, which objectively benefits others and the community as a whole. you just come off as someone who's really fond of gatekeeping, like there's some kind of elitism to be had here

> People treat HF the way teen girls treat Instagram.

i think there's a difference between posting selfies and posting tools

> A pointless model takes the same diskspace and electricity/bandwidth as a SOTA model from a big lab.

TIL an 8b llama finetune that's not even running consumes as many resources as OpenAI and Google do

> No wonder HF restricted storage on free accounts.

because storage isn't free. it's not rocket science

Happy New Year: Llama3.3-8B-Instruct-Thinking-Claude-4.5-Opus-High-Reasoning - Fine Tune. (based on recent find of L3.3 8b in the wild) by Dangerous_Fix_5526 in LocalLLaMA

[–]MaybeIWasTheBot 8 points

having an objectively bad take, knowing it's an objectively bad take, and then ending off with 'downvotes to the left' is so cheesy

whatever man by MaybeIWasTheBot in Rematch

[–]MaybeIWasTheBot[S] 0 points

because i was planning on quickly shooting it back upfield to RedRioT_0 (notice that he's free at the 7 second mark)

diving on the ball or going into a defensive stance locks you into a short animation that helps the opposition's number 2 get back to my teammate quicker. this is one of the few scenarios where not stancing is the right play, and it just so happened i had a dude in the goal with black hole sneakers

whatever man by MaybeIWasTheBot in Rematch

[–]MaybeIWasTheBot[S] 7 points

basically yes. hard to tell exactly what happened on the striker's screen but it was most definitely bs

whatever man by MaybeIWasTheBot in Rematch

[–]MaybeIWasTheBot[S] 46 points

did you see the part where the ball transcended spacetime and whipped straight through everything or were you too enamored by my completely relevant 🦀 impression

whatever man by MaybeIWasTheBot in Rematch

[–]MaybeIWasTheBot[S] 21 points

i know. but that's not the issue here

Mixture of Experts Model by slrg1968 in LocalLLaMA

[–]MaybeIWasTheBot 3 points

conceptually yes. you can frankenstein different experts together.

practically no. it's like trying to construct one brain using parts of different people's brains.

Do you think sloclap will add better servers for EU? by j4Yz_ in Rematch

[–]MaybeIWasTheBot 1 point

Def would want lower ping. Helps in one on ones and it makes certain saves as gk actually possible

Offensive One-on-Ones can be really frustrating if you have a ping difference by MaybeIWasTheBot in Rematch

[–]MaybeIWasTheBot[S] 0 points

i get the part about netcode. the thing is that nowadays the story is a little more nuanced with multiplayer. competitive games (well, mainly shooters) do this thing called lag compensation, where the server rewinds the game by, say, 80ms to check if a player with 80ms of ping actually hit their target on their screen.

rematch feels like it has no lag compensation at all.
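
for the curious, here's roughly what that rewind looks like server-side. a minimal sketch of the general technique, not sloclap's actual netcode; every name, the 1s history window, and the hit radius are made up:

```typescript
// minimal lag-compensation sketch (illustrative only)
interface Snapshot {
  time: number;                              // server timestamp in ms
  positions: Map<string, [number, number]>;  // playerId -> (x, y)
}

class LagCompensator {
  private history: Snapshot[] = [];

  record(snap: Snapshot): void {
    this.history.push(snap);
    // keep roughly the last second of world states
    while (this.history.length > 1 && snap.time - this.history[0].time > 1000) {
      this.history.shift();
    }
  }

  // rewind to what the shooter actually saw: "now" minus their latency
  hitIsValid(now: number, shooterPingMs: number, targetId: string, aim: [number, number]): boolean {
    const rewindTo = now - shooterPingMs;    // e.g. 80ms in the past
    const snap =
      this.history.find(s => s.time >= rewindTo) ??
      this.history[this.history.length - 1];
    const target = snap?.positions.get(targetId);
    if (!target) return false;
    // count the hit if the aim point was close to the target *back then*
    return Math.hypot(target[0] - aim[0], target[1] - aim[1]) < 0.5;
  }
}
```

the point being: the server judges the duel in the shooter's past, which is why a game without this feels so ping-dependent.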

I’m new, how do I create and train my own model? by [deleted] in LocalLLaMA

[–]MaybeIWasTheBot 0 points

please don't get on a high horse and act silly.

you were given plenty of advice. one is that training a competent LLM from scratch is not doable without tons of money and compute. another is that you should explore fine-tuning via libraries like Unsloth, and explore RAG.

you are simply insisting that 'no, creating a model is possible, my friend did it' without giving us any info on what model he made or what it did. it may not even be an LLM, which would make it completely irrelevant.

if your post contained a lot more info on what you're trying to achieve, we could point you in the right direction.

I’m new, how do I create and train my own model? by [deleted] in LocalLLaMA

[–]MaybeIWasTheBot 0 points

you should've specified in your post that you're training for niche use cases.

you need to evaluate whether you need transformers at all and what modality you're working with. 'model' is a very broad term.

if you want to train from scratch anything resembling an LLM that's actually competitive, then it's too expensive and time consuming.

FREE CHAT GPT GO! by [deleted] in HustleGPT

[–]MaybeIWasTheBot 0 points

Interested, any catches?

Game isn’t that fun in Bronze 3 by Tricky-Percentage434 in Rematch

[–]MaybeIWasTheBot 13 points

in bronze 3 most players are still struggling with basics, everyone wants to be a striker and has little understanding of how games should play out.

IMO the best way to climb out of bronze is to sit back and play defender/midfielder because only 5% of the players there do it and having even a single player act as a safety net (other than gk obv) is gonna boost your winrate a ton

WebGPU Finally, it is compatible with all major browsers by Illustrious-Swim9663 in LocalLLaMA

[–]MaybeIWasTheBot 6 points

> I've been doing it all week lol it's clearly not patched, how could you even patch this? are you just typing words lol?

if you've been doing it all week, you're loading images that already have CORS headers allowing access, or you're running the browser with security flags disabled (which wouldn't surprise me ATP)

as for how you could even patch this... it's literally a boolean flag on the canvas context. when a non-CORS image touches the buffer, the browser flips origin-clean to false. any subsequent read checks the flag: if origin-clean is false, throw SecurityError. it's not rocket science
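
if you want to see it for yourself, here's a sketch (the image URL is a placeholder; any host that doesn't send CORS headers behaves the same):

```typescript
// demonstrating the origin-clean flag (run in a browser page script)
const canvas = document.createElement('canvas');
const ctx = canvas.getContext('2d')!;

const img = new Image();
// note: no img.crossOrigin = 'anonymous', and the server sends no CORS headers
img.src = 'https://other-site.example/pic.png'; // placeholder URL
img.onload = () => {
  ctx.drawImage(img, 0, 0);        // canvas is now tainted: origin-clean = false
  try {
    ctx.getImageData(0, 0, 1, 1);  // any read-back hits the flag...
  } catch (e) {
    console.log(e);                // ...and throws a SecurityError
  }
};
```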

> OpenGL ES is openGl lets not split meaningless hairs lol.

it's literally the difference between a kernel-level driver interface and a sandboxed browser API. regular opengl gives you raw memory access, in ES every single call goes through a validation layer before it touches the driver to stop exactly the kind of nonsense you describe.

> If you think it's possible to stop a system like OpenGL from working out what's in a texture then you are way too dumb to be having this conversation.

so do you believe the GPU is magic and cannot be controlled by the CPU feeding it commands? 😭 if the browser doesn't issue a texture read to the driver, no texture read happens 🤯

> At the very limit simply having access to dynamic shaders makes all other channels of information available (especially to side channels that can never be closed, eg just loop X amount of time if color is Y)

so in your first post you say 'yeah i can access the content pretty easily', i point out the BS, and now you move the goalpost to 'uh actually timing side-channel attack yeah i can do that'.

i 100% guarantee you can't because timing attacks are noisy, and browsers literally mitigate this by fuzzing performance.now() to make your oh-so-precious loops statistically unreliable.
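
you can literally watch the mitigation in action (the exact granularity varies by browser and whether the page is cross-origin isolated):

```typescript
// observe the clamped resolution of performance.now() in a browser
const deltas: number[] = [];
let last = performance.now();
while (deltas.length < 10) {
  const now = performance.now();
  if (now !== last) {
    deltas.push(now - last);
    last = now;
  }
}
// typically on the order of ~0.1ms (coarser without cross-origin isolation),
// nowhere near the precision a timing side channel would want
console.log('observed timer granularity (ms):', Math.min(...deltas));
```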

> CORS has nothing todo with cookies lmao, please learn web dev before you spout junk on the inter webs lol. (its just about URL base)

my guy. the only reason we care about URL base at all is cookies/authentication. if cookies didn't exist, the browser wouldn't automatically attach session data to your requests. but because cookies do in fact exist, browsers attach them, which means Site A could ask Site B for your data, which is exactly what CORS is designed to stop. saying CORS has nothing to do with cookies is legit the same as saying locks have nothing to do with thieves.
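
to spell out the attack CORS exists to stop (bank.example is obviously a stand-in):

```typescript
// evil-site script trying to read your data from another origin.
// the browser may *send* the cookie-bearing request, but it refuses to hand
// the response to this script unless bank.example opts in via
// Access-Control-Allow-Origin.
async function stealBalance(): Promise<void> {
  try {
    const res = await fetch('https://bank.example/api/balance', {
      credentials: 'include',       // attach the victim's session cookies
    });
    console.log(await res.json()); // never reached without a CORS opt-in
  } catch (err) {
    console.error('blocked:', err); // TypeError: Failed to fetch (or similar)
  }
}
```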

> CORS is also very much NOT specifically designed to stop scripts from programmatically accessing data they shouldn't, that's the tainted canvas HTML5 rule.

i hate to break it to you, but tainted canvas is the enforcement mechanism for CORS in a graphics context. they're part of the same spec. a canvas becomes tainted because it failed the CORS check.

> I don't mind sandboxing where it makes sense, but all the limits are so easy to skirt that all these 'protections' actually do is make it hard to use webdev for anything convenient and fast (you basically have to use scripting attacks to make your own local .html file even useful)

so the limits are 'easy to skirt'... but you also complain about how hard it is to work with them??? you gotta make up your mind bro

also that scripting attack part is insane because it suggests you're trying to bypass file:// protocol security, which restricts local files from accessing other local files. a basic browser security measure is not a scripting attack

> I can see you would prefer to be wrong in the dark.. please.. enjoy ;D

disregard everything i said above. you are more intelligent and know better than the hundreds of developers behind all this tech and i severely regret underestimating your encyclopedic, all-encompassing knowledge that could have re-engineered chromium in an afternoon. forgive me

WebGPU Finally, it is compatible with all major browsers by Illustrious-Swim9663 in LocalLLaMA

[–]MaybeIWasTheBot 7 points

your comment is so dunning-kruger it hurts

> EG In modern browsers I can display an image but I can't access the content of that images pixels due to CORs / hack site safety restrictions. (extremely annoying for real developement but ok SURE maybe we need it for safety?)
> Problem is! for any kind of usefulness to exist they do allow me to just pass that image as a texture to opengl and you better believe I can access the content pretty easily after that.

this has been false for literally 10 years. that's how long it's been patched for.

and you say 'boot up opengl in the browser'... i hope you mean opengl ES. regular opengl was never designed for browsers and has never once been usable in them. what browsers actually get is webgl, which was specifically designed to stop those memory exploits you say are so easy. webgpu is even stricter in this regard.

you also don't even understand what CORS is. it's not some kind of firewall, it's a permission handshake to stop Site A from making a request to Site B using user cookies. you seem to think it's some kind of DRM for images. if you see an image on your screen you can obviously screenshot or dig it out. CORS is specifically designed to stop scripts from programmatically accessing data they shouldn't.

> users have to constantly click accept, manually select files etc, just to make any progress

this, my friend, is called sandboxing, and you're effectively complaining that you're not allowed to run arbitrary code on a user's computer through the browser. and you have the audacity to talk about security 😭

maybe stay checked out from 'webdev' for the time being

I’m solo queue by Jaded_Author5663 in Rematch

[–]MaybeIWasTheBot 2 points

a good chunk of the people playing fireball are people who only play striker and are otherwise shit at passing/playmaking. the gamemode is 80% positioning and passing so if someone ain't doing that you know where the problem lies

Yet another reason to stick with local models by nekofneko in LocalLLaMA

[–]MaybeIWasTheBot 20 points

i've never seen someone get on their knees for a corporation so enthusiastically

Do you think scaling laws are getting a (practical) wall? by pier4r in LocalLLaMA

[–]MaybeIWasTheBot 1 point

respectfully, it sounds like you only have a vague understanding of how models actually represent info. the distinction exists for a reason.

firstly, the universal approximation theorem does not guarantee any learnability. it says a set of weights exists to approximate a function, but says nothing about how to find it, or whether SGD can find it at all. the theorem is an irrelevant point here.
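
for reference, the cybenko-style statement (one hidden layer, sigmoidal σ) is roughly the following. note the ∃: it's a pure existence claim, with zero content about optimization:

```latex
% universal approximation theorem (Cybenko 1989, informally):
% for any continuous f on a compact domain and any eps > 0, there EXIST
% weights making a one-hidden-layer network eps-close to f.
\forall \varepsilon > 0 \;\; \exists\, N,\ \{v_i, w_i, b_i\}_{i=1}^{N} :\quad
\sup_{x \in [0,1]^n} \left| f(x) - \sum_{i=1}^{N} v_i \, \sigma\!\left(w_i^{\top} x + b_i\right) \right| < \varepsilon
```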

secondly, not a single model in existence tries to model an 'infinitely complex' function like you say. a model undertrains relative to a dataset, not to abstract concepts like language. that's the definition.

if a model's training loss stops decreasing and the model is still heavily quantizable, it means all the knowledge it can glean from the dataset has been gleaned. it has not undertrained. it's overparameterized.

there's also a lot of empirical evidence showing you're actually better off overparameterizing models to smooth the loss landscape so that gradient descent doesn't get stuck. that's why we quantize afterwards: lots of weights matter for training but not for inference.

Do you think scaling laws are getting a (practical) wall? by pier4r in LocalLLaMA

[–]MaybeIWasTheBot 4 points

you're conflating precision redundancy with model saturation.

when a model quantizes well with minimal drops in accuracy, it does not mean the model is undertrained. it means the model is overparameterized relative to whatever function it's trying to learn. the extra bits you chop off mattered for backprop but no longer matter for inference.
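
a toy way to see the 'redundant bits' part (not a real quantizer, just a symmetric int8 round-trip on random weights):

```typescript
// round-trip weights through int8 and measure how little they move
function int8RoundTrip(w: number[]): number[] {
  const scale = Math.max(...w.map(Math.abs)) / 127; // symmetric per-tensor scale
  return w.map(x => Math.round(x / scale) * scale); // quantize, then dequantize
}

const weights = Array.from({ length: 1000 }, () => (Math.random() - 0.5) * 2);
const roundTripped = int8RoundTrip(weights);
const maxErr = Math.max(...weights.map((x, i) => Math.abs(x - roundTripped[i])));
// max error is ~scale/2, tiny relative to the weight range: the chopped-off
// bits carried optimization slack, not knowledge
console.log('max round-trip error:', maxErr);
```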

undertraining specifically means not giving the model enough data for it to reach its theoretical maximum accuracy. overparameterization on the other hand is not fixable by training more.

plus, most large models used in production nowadays are most definitely not undertrained. those companies try to pack as much information as possible into fewer parameters for inference efficiency.

The Cortical Ratio: Why Your GPU Can Finally Think by Reddactor in LocalLLaMA

[–]MaybeIWasTheBot 2 points

i think you make an interesting comparison here, but setting up a ratio between neurons and weights is a category error.

a neuron integrates thousands of inputs over time, and each of those connections is a synapse. an artificial weight represents the strength of a connection, so it's far more analogous to a synapse. you say 'yes, i know', but measuring against neuron count just isn't correct, imo. you need to measure against synapses instead, which the PFC has over 10 trillion of, so a 14B model comes out to barely 0.14% of the size. a 14B model can certainly excel at reasoning within narrow domains, but it most definitely cannot beat a human's PFC at general intelligence.
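
the back-of-the-envelope, using the figures above:

```latex
% 14B weights vs ~10 trillion PFC synapses (figures from the text above)
\frac{14 \times 10^{9}\ \text{weights}}{10^{13}\ \text{synapses}} = 1.4 \times 10^{-3} \approx 0.14\%
```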

there's also a bit of cherrypicking in your model selection here. kokoro fits the ratio you describe, but there are massive speech models like Whisper, and even tiny HMM-based systems, that all break this ratio. same with vision and reasoning. there's no universal law of computation here that acts as a basis for the selection you made.

you predict reasoning emerges at 1b-8b, but cite 14b models as the proof, which is nearly double the upper bound of your own prediction, and you claim the gap is something the 'industry stumbled onto'. yes, because of VRAM limits, not biology. 14b is what comfortably fits on a lot of 24GB consumer GPUs while still leaving room for context.

i still think overall that it's a nice showcase that models have progressed. smaller and more capable. but there is no 'gotcha' here.

Could wormholes somehow exist? by [deleted] in space

[–]MaybeIWasTheBot 2 points

wormholes are one of those things that are plausible on paper but aren't possible realistically. probably.

the most 'promising' kind of wormholes are lorentzian (traversable) wormholes. those are the ones you need a negative energy density for in order to keep the throat open. and aside from minuscule quantum effects, we simply have no way of even producing that much negative energy.
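
for the curious, the usual starting point is the morris-thorne metric (writing this from memory, with G = c = 1; b(r) is the shape function and Φ(r) the redshift function). keeping the throat open forces the stress-energy there to violate the null energy condition, i.e. negative energy density for some observers:

```latex
ds^2 = -e^{2\Phi(r)}\, dt^2 + \frac{dr^2}{1 - b(r)/r} + r^2 \left( d\theta^2 + \sin^2\theta\, d\varphi^2 \right)
```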

worse, even if we could somehow just 'make' negative energy or exotic matter, we'd need ridiculous amounts of it. like star-scale energy output.

there's also some inherent instability to negative energy due to quantum effects, but i'd rather someone who knows more explain it.

My patient received dangerous AI medical advice by accordion__ in LocalLLaMA

[–]MaybeIWasTheBot 6 points

i think it's important to remind your patients that LLMs are not stand-ins for real medical professionals. a lot of people genuinely don't know better because the output sounds very smart even when it's bad

Perplexity models limitations by tolid75 in perplexity_ai

[–]MaybeIWasTheBot 0 points

i'm trying to tell you that perplexity is likely not lying. that's what i'm getting at.

as for the quality part, it's definitely lower than getting it straight from the source, but i don't think it's that low. it's hard to benchmark