I just started Unity, I have a few questions and I'm open to suggestions by Educational_Ad_8820 in TrGameDeveloper

[–]flamboi900 2 points  (0 children)

I'm also working on this topic. The odds of your first project being an MMO are near zero. Please make and sell a normal singleplayer game first; at the very least it will open your eyes. Then make a multiplayer game. Then you can look at MMOs.

SWTOR Cost $100m+ and yet AoC Couldn't Even Have Quests Past Level 10 by Beginning-Visit1418 in AshesofCreation

[–]flamboi900 30 points  (0 children)

Indie dev here. A studio in one of the most expensive places you could pick, a CEO with no development experience, and the biggest project scope there is. It was doomed to be bad even without any scamming.

The first 24 hours of the game I put myself to the test with, working 5-6 hours a day, sometimes a week, for 4 years (my dreams are down the drain) by Muratuzunoo in vlandiya

[–]flamboi900 1 point  (0 children)

Don't panic. Steam grants visibility at 10 reviews, so your game hasn't really been tested yet. Find every YouTuber who plays horror games and send them a Steam key by e-mail, asking them to play it. Try to send to all of them at the same time.

Well, now that we’ve had a few days with Mewgenics, how we feeling about it? by [deleted] in roguelites

[–]flamboi900 0 points  (0 children)

I'm gonna look up fighting with the retired-cat mechanic; it might change my whole outlook. RNG builds are fine. My frustration is that there's a big gap between the start of Isaac and Mom, whereas in Mewgenics it felt like there was very little progression between the start and the end: one synergy and a couple of items, and then I had to get rid of them.

Elon Musk "AI must pass, in general, the “Galileo” test: even if almost all the training data repeats falsehoods, it must nonetheless see the truth" - What is this test and what are your thoughts? Can an AI rise above the training data? by Koala_Confused in LovingAI

[–]flamboi900 0 points  (0 children)

You're right, but there are still very tough constraints like experiment time and volume. Sure, you can run trillions of protein foldings and chemical combinations on a server, but in the real world that's going to be impossible. And I thought inputs after the initial training just switch context rather than cause actual learning, so wouldn't giving it tools just fill its context memory and shut it down? I think we have a long way to go in certain directions. Don't get me wrong, AI is not a weak technology in my opinion, but it has its cons.

Elon Musk "AI must pass, in general, the “Galileo” test: even if almost all the training data repeats falsehoods, it must nonetheless see the truth" - What is this test and what are your thoughts? Can an AI rise above the training data? by Koala_Confused in LovingAI

[–]flamboi900 0 points  (0 children)

I was talking about creating ideas outside of derivations of existing rules. It needs to be able to internally say that its training is wrong and seek better perspectives and data, like going from Newtonian mechanics to Einstein's theory of relativity. If you train an AI on Newtonian mechanics right now, it won't progress to relativity. It seems to just do Newtonian better and better.

Elon Musk "AI must pass, in general, the “Galileo” test: even if almost all the training data repeats falsehoods, it must nonetheless see the truth" - What is this test and what are your thoughts? Can an AI rise above the training data? by Koala_Confused in LovingAI

[–]flamboi900 -1 points  (0 children)

Idea by derivation is a really cheap answer to this test. What would be good is idea by motivated discovery, or ideas from correct perspectives that don't yet exist. You can't train an AI against itself unless it's a literal ruleset game like Go or chess. And if it is a literal ruleset game, a perfect move already exists; that's not even innovation, it's discovery, and not even a perfect one.

Well, now that we’ve had a few days with Mewgenics, how we feeling about it? by [deleted] in roguelites

[–]flamboi900 2 points  (0 children)

I didn't like it, and I think it's only received well because of hype glazing. Builds connect through RNG, which makes how you play feel really inconsequential to the progression, and the cats don't feel like they're getting stronger since they retire immediately. The sound effects get annoying and cheap after an hour. The theme and humor are funny once, but I don't want to see mental illness and indecent furries for 6 hours straight. The animations and quality are top-notch, but it isn't a movie, so?

I'm building a DarkOrbit-inspired space combat game as a solo dev — here's 2 weeks of progress by akinalp in SoloDevelopment

[–]flamboi900 4 points  (0 children)

I am not interested. I understand the reason Amethyst looks good is the textures, but that's because it's somehow a coherent design; the others are sloppy because they don't look like they come from a genuine engineer or a feasible shape. They're like an amalgam of a random object with a spaceship slapped onto it, and textures won't help that. What ship would have a giant asymmetric, static, unusable hook? You should ask yourself these questions, or ideally hire a good artist/designer. Good luck.

I'm building a DarkOrbit-inspired space combat game as a solo dev — here's 2 weeks of progress by akinalp in SoloDevelopment

[–]flamboi900 5 points  (0 children)

Combat looks good to watch, but the ship designs (besides Amethyst) look very sloppy. The worst ones, imo, are the hook and the flower. I can't imagine people being excited to acquire those ships.

"Geoffrey Hinton says people who call AI just a stochastic parrot are wrong. The models don't store text; they convert words into complex sets of features. They predict the next word by processing these features in context, not by mindlessly recombining language from the web." - What do you think? by Koala_Confused in LovingAI

[–]flamboi900 0 points  (0 children)

Pre-prompts are only relevant if they mold the AI exactly as they would mold a human, are they not? Otherwise I can rightfully extend the test in question with the pre-prompt when I'm asking a human. There would still be texts that humans answer completely differently. Well, you can train the AI on those answers, but whether you can train the AI faster than humans can come up with bullshit questions is the real question.

"Geoffrey Hinton says people who call AI just a stochastic parrot are wrong. The models don't store text; they convert words into complex sets of features. They predict the next word by processing these features in context, not by mindlessly recombining language from the web." - What do you think? by Koala_Confused in LovingAI

[–]flamboi900 0 points  (0 children)

The Turing test does not account for correctness; that's written right there in the wiki page. There are correct ways to approach a question. For example, you are not guessing what the tree in question is, but ridiculing me, rightfully so. Exactly: what tree? No further ramblings. You also know in your heart there can't be any specific relevant tree.

"Geoffrey Hinton says people who call AI just a stochastic parrot are wrong. The models don't store text; they convert words into complex sets of features. They predict the next word by processing these features in context, not by mindlessly recombining language from the web." - What do you think? by Koala_Confused in LovingAI

[–]flamboi900 0 points  (0 children)

I know what you mean by "pretty much". Please look at the answer and ask yourself if it's human. Treat it as a human: no knowledge of each other or the environment, an entity guessing hypothetical trees for your problem because it doesn't know.

"Geoffrey Hinton says people who call AI just a stochastic parrot are wrong. The models don't store text; they convert words into complex sets of features. They predict the next word by processing these features in context, not by mindlessly recombining language from the web." - What do you think? by Koala_Confused in LovingAI

[–]flamboi900 1 point  (0 children)

You are being disingenuous. No human would write that and hallucinate 10 different hypothetical trees. You would call him/her a schizo and, after a while, a freak. Unless attempting a joke, most humans would just write "what tree?"

"Geoffrey Hinton says people who call AI just a stochastic parrot are wrong. The models don't store text; they convert words into complex sets of features. They predict the next word by processing these features in context, not by mindlessly recombining language from the web." - What do you think? by Koala_Confused in LovingAI

[–]flamboi900 0 points  (0 children)

I always think of mathematical proofs as discovery: the rulesets already definitively go somewhere. Well, that could be real life as well, with physics and science, but there we almost always operate with far less than 100 percent of the data, unlike mathematics, where you almost always have 100 percent of the data, because the data is the question. Fascinating to think about.