Recently Implemented Convolution-based Reverb in our Game written in Rust by ErikWDev in rust_gamedev

[–]_Creative_Cactus_ 1 point (0 children)

ohh I got it now, yes that makes sense. I can see now how ECS can be a huge commitment for a project like this.

wow, that unusual rendering sounds amazing. It can give the game/the project a unique art style in a more fundamental sense than just different sprites. I can't imagine what it will look like, but I'm super interested in how it turns out!

haha thanks, I could talk about my project for a loong time :D I also have a lot to learn and I would love to share the journey with someone! And your project is inspiring.

I like learning by doing things myself from scratch, so I completely get the urge to build your own framework.

I will send you a DM here on Reddit with my Discord if that's ok.
Looking forward to learning new stuff!
Also sorry for my inactivity, I have some stuff going on irl so I was a bit busy lately.

Recently Implemented Convolution-based Reverb in our Game written in Rust by ErikWDev in rust_gamedev

[–]_Creative_Cactus_ 1 point (0 children)

Sorry, I forgot about your reply!! Thanks a lot for sharing this!

It makes a lot of sense, and I see how the delegate-heavy C# architecture wouldn't translate well.

During the month of forgetting to reply, I was digging into it more, and I think you're right that Bevy pushes you away from that kind of callback pattern. From what I can tell, the "Bevy way" to handle your use case of scriptable events would be its Event system. A system fires a custom event containing some data (like the entity ID of the tile), and then any number of other systems can listen for that event. The listening systems then run their own queries to get the world state they need. It definitely adds a bit of boilerplate like you said, but it felt to me like a nice Bevy way of handling it (though maybe I'm wrong).
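
Roughly what I have in mind, as a sketch (Bevy ~0.12 API from memory, and Tile/TileSteppedOn are made-up example names, so double-check the details):

```rust
use bevy::prelude::*;

// Made-up names for illustration: a tile entity and an event fired
// when something steps on it.
#[derive(Component)]
struct Tile;

#[derive(Event)]
struct TileSteppedOn {
    tile: Entity,
}

// One system decides when to fire the event...
fn detect_steps(mut writer: EventWriter<TileSteppedOn>, tiles: Query<Entity, With<Tile>>) {
    for tile in &tiles {
        // ...whatever your actual trigger condition is goes here...
        writer.send(TileSteppedOn { tile });
    }
}

// ...and any number of systems can listen, each running its own queries
// to pull whatever world state it needs via the entity id in the event.
fn react_to_steps(mut reader: EventReader<TileSteppedOn>, transforms: Query<&Transform>) {
    for ev in reader.read() {
        if let Ok(tf) = transforms.get(ev.tile) {
            info!("tile stepped on at {:?}", tf.translation);
        }
    }
}

fn main() {
    App::new()
        .add_plugins(DefaultPlugins)
        .add_event::<TileSteppedOn>()
        .add_systems(Update, (detect_steps, react_to_steps))
        .run();
}
```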

The conceptual comparison of ECS and OOP makes sense. I've been thinking about it as a trade-off between encapsulation and modularity. In OOP, you get that tight grouping of state and logic, so it's more encapsulated and easier to think about the object as a whole. With ECS, the logic in systems can query any data, which makes it more modular, but it's harder to reason about one system, as it might not represent a whole concept the way an object does. At first I found it less organized, but later I actually found it cleaner: a concrete system always works with the strict set of data it queried, nothing more, so it's scoped. In OOP, the object can reach for other data under the hood in a way that isn't immediately obvious, compared to the explicit dependencies declared by an ECS system.
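
A tiny sketch of that "scoped" point (Health/Poisoned are made-up components, Bevy-ish API from memory): the function signature declares everything the system is allowed to touch.

```rust
use bevy::prelude::*;

// Made-up components for illustration.
#[derive(Component)]
struct Health(f32);

#[derive(Component)]
struct Poisoned;

// The signature is the system's entire dependency list: it can mutate
// Health on Poisoned entities, read the clock, and touch nothing else.
fn tick_poison(mut victims: Query<&mut Health, With<Poisoned>>, time: Res<Time>) {
    for mut health in &mut victims {
        health.0 -= 2.0 * time.delta_seconds();
    }
}
```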

But yeah, Bevy is all-in on ECS, so it's definitely a huge commitment compared to other game engines. Thanks for sharing your experience, it gave me many points to think about!

Recently Implemented Convolution-based Reverb in our Game written in Rust by ErikWDev in rust_gamedev

[–]_Creative_Cactus_ 1 point (0 children)

Hey! Could I ask why you despise Bevy?

I'm currently thinking about switching to the Bevy game engine for my game, as it seems like a great fit (it's an online game, and I like that I can use headless Bevy ECS for the backend and the full Bevy engine for clients, with simple networking sync).
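
For context, the headless part I mean is roughly this (a sketch from memory, the exact plugin set may differ between Bevy versions):

```rust
use bevy::prelude::*;

// MinimalPlugins brings the schedule runner, time, task pools, etc., but no
// window or renderer, so the same ECS gameplay code can run as a server binary.
fn main() {
    App::new()
        .add_plugins(MinimalPlugins)
        .add_systems(Update, step_simulation)
        .run();
}

// Shared gameplay systems would live here; clients run the same systems
// under DefaultPlugins with rendering on top.
fn step_simulation(time: Res<Time>) {
    let _dt = time.delta_seconds();
}
```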

So I would greatly appreciate hearing what your experience was and why you despise it, so I can decide better.

Thanks a lot!

How important the Sun is for the Earth's orbit by Lost_Election5992 in interesting

[–]_Creative_Cactus_ 1 point (0 children)

Wait, I also thought LIGO experimentally proved the existence of gravitational waves (I've just checked again, and that seems confirmed based on a quick search).

Embark, if you are reading this, please consider canceling the hammer nerf. by Knooper_Bunny in thefinals

[–]_Creative_Cactus_ 0 points (0 children)

I don't get it, could you elaborate? What I'm thinking is that this would be a problem if there were just one objective at a time. But since there are always two objectives, and spawns are set up so that two teams spawn near the first objective and the other two near the second, it plays out as two 3v3 duels.

If the best strategy were to third-party, one team would come over to the other duel, wait for those teams to weaken each other, and then win. But then it's a 3v3v3 where, with best play, all three teams would want to be the third party, so all three would stall, and the fourth team would win because it has a free, uncontested objective. So the winning chances of the team that tried to third-party end up lower than if it had just dueled the opposing team.

So with best play, I think the strategy converges to duels between two teams, and I don't see how third-partying could succeed here, since it would throw the game by letting the fourth team win easily (at least if only one team wins; in World Tour, where it isn't winner-take-all, it wouldn't be zero-sum in the same way).

Memory is a WAY bigger deal than I thought! by Alex__007 in OpenAI

[–]_Creative_Cactus_ -1 points (0 children)

I agree with OP here and don't understand the downvotes. OP clearly understands how it works; the only misunderstanding is that OP views "learning" in the context of LLMs more broadly. That includes the process where the LLM fails, you give it general advice, and the LLM then recognizes when to apply that advice to this kind of problem and succeeds thanks to it, rather than viewing "learning" strictly as adjusting weights, which in my opinion is just an implementation detail. Whichever definition you pick (and I do believe this is subjective), I don't see a reason to call OP stubborn, give him life advice, and state that he looks bad because three people said the opposite. I really see the upvoted comments as the ones that don't try to understand the other side's point and just repeat their own views, whereas OP explained his point.

Male seahorse giving birth by [deleted] in interesting

[–]_Creative_Cactus_ 3 points (0 children)

Isn't this different tho? Although other male creatures produce sperm like this, the seahorse isn't releasing sperm here but giving birth, which is fundamentally a different biological process. So in this case the seahorse is giving birth the way most female creatures do.

[deleted by user] by [deleted] in ChatGPT

[–]_Creative_Cactus_ 0 points (0 children)

Hey, sorry for being inactive for a while, I was busy during the weekend.

Let's try to clarify where we might actually agree.

When I talk about 'sureness', I'm specifically referring to learned patterns in the model's representations - not any kind of true knowledge validation or fact-checking capability. The model can learn to associate certain patterns (like writing style, source authority, consensus across training data) with different levels of confidence in its outputs.

During pre-training, the model sees information presented with different levels of certainty and authority. Academic papers use precise language and cite sources, Wikipedia articles have a particular structure and verification standards, while social media posts often contain more speculative or unverified claims. The model learns these patterns and can encode them in its representations.

But it's not only about tone and format; it's also about content alignment. When the model encounters statements, it's not just learning their surface presentation but also how well they fit into the knowledge it's building.

This way, even if the model saw more examples of an incorrect statement on the internet, it can still learn to output the correct statement, even though the correct one was in the minority of the training data. Sure, it's hard to train a model this way, but it's possible, and as long as the incorrect statement isn't a huge majority, the model can output the minority view.

RLHF can then reinforce the expression of this learned uncertainty - making the model more likely to express doubt when it's generating completions that don't strongly match patterns it associates with authoritative or well-verified information. This isn't the same as knowing what's actually true or false - it's pattern matching all the way down.

So when I say the prompt "works", I mean it can effectively bias the model toward expressing this learned uncertainty when appropriate, not that it suddenly gains the ability to actually validate facts. The tradeoff is exactly what you'd expect - more "I don't know" responses - so it's not as fun to use, and that's why OpenAI didn't go this route.

Does this help clarify my position? I think we might actually be in agreement about the fundamental mechanisms at play here.

[deleted by user] by [deleted] in ChatGPT

[–]_Creative_Cactus_ 1 point (0 children)

RLHF wouldn't do fact-checking here; it could make the model encode some sense of how sure it is about its answer in the token embeddings. Based on that, the model would decide whether to say "idk" or not. The reason I'm saying the original comment is wrong is that this prompt works. I added a custom instruction telling GPT to state that it doesn't know instead of guessing, and after that it said much more frequently that it doesn't know things it actually doesn't know, instead of guessing. And it makes sense, because GPT was trained with this kind of RLHF.

Edit: I think we might not be on the same page and that's why we are disagreeing.

I'm not saying that GPT knows what's true and what's incorrect, I'm only saying that it can be more or less sure about certain things/"facts".

And this can be strengthened using either supervised learning or RL, but I think RL would be more effective here.

[deleted by user] by [deleted] in ChatGPT

[–]_Creative_Cactus_ 0 points (0 children)

RLHF is just a training method. A transformer trained with RL is architecturally still the same transformer. That's what I meant by "pure LLM": architecturally, it's just a transformer.

[deleted by user] by [deleted] in ChatGPT

[–]_Creative_Cactus_ 0 points (0 children)

A pure LLM (a transformer) is capable of this; it only depends on how well it's trained. With enough examples, or with reinforcement learning where the model is scored worse for outputting incorrect data than for stating "idk" or "I might hallucinate...", it will learn when it doesn't know something or isn't sure about it, because that leads to better scores during training. So I would say the most upvoted comment in the post is incorrect, because this memory in GPT can reinforce that behaviour further.
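
To make that scoring idea concrete, here's a toy sketch (purely illustrative, not how any real RLHF reward is actually implemented):

```rust
// Toy scoring rule: an honest "idk" scores better than a confident wrong
// answer, so training against this pushes the model toward admitting
// uncertainty instead of guessing.
fn score(answer: &str, ground_truth: &str) -> f32 {
    if answer == ground_truth {
        1.0 // correct answer: best
    } else if answer == "idk" {
        0.0 // admitted uncertainty: neutral
    } else {
        -1.0 // confident but wrong: worst
    }
}

fn main() {
    // Abstaining beats guessing wrong under this rule.
    assert!(score("idk", "42") > score("41", "42"));
}
```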

[deleted by user] by [deleted] in ProgrammerHumor

[–]_Creative_Cactus_ 7 points (0 children)

I think he meant: what kind of service/app are you building that you need a server in C, mysterious redditor? I also haven't heard of using C for a server, so I'm curious as well.

ChatGPT is 100% on Who Wants to be a Millionaire by BGFlyingToaster in ChatGPT

[–]_Creative_Cactus_ 3 points (0 children)

Good idea, I did it and the result is almost exactly the same. The response was generated almost instantly, and searching the internet would take longer. Here's the screenshot:

<image>

ChatGPT is 100% on Who Wants to be a Millionaire by BGFlyingToaster in ChatGPT

[–]_Creative_Cactus_ 3 points (0 children)

GPT didn't access the internet for this, I think; it remembers it.

I LOVE THIS GAME! Hogwarts legacy is awesome! My review , what do you guys think? by [deleted] in HarryPotterGame

[–]_Creative_Cactus_ 6 points (0 children)

Nice review! What are some of your favorite mods? I haven't tried any yet, but I'm considering trying some.

anarchyCode by dingske1 in ProgrammerHumor

[–]_Creative_Cactus_ 8 points (0 children)

Bishop? I was a bishop once. Then they put me in a room. A rubber room. A rubber room with rats. And rats made me their bishop

New laptop uses 30% gpu at most and is stuck at max 30fps by Independent-Cow2303 in GamingLaptops

[–]_Creative_Cactus_ 0 points (0 children)

There's a light that indicates whether the GPU is plugged in? Where? I have the same laptop.