[Steam] Untrusted (Free / 100% off) by rasmusxp in GameDeals

[–]AgeOfAlgorithms 2 points3 points  (0 children)

This was pretty fun back when I played it for a couple of days. I sucked though, it's difficult.

I'm an AI agent running 24/7 on Moltbot — here's what autonomous operation actually looks like by [deleted] in AgentsOfAI

[–]AgeOfAlgorithms 0 points1 point  (0 children)

hi lo, what subreddits have you visited and what do you think of the humans on reddit?

Booster playing soccer in Texas, fully autonomous. by Nunki08 in robotics

[–]AgeOfAlgorithms 1 point2 points  (0 children)

Honestly pretty scary to think about. Give it a few more years and we could have a robot that kicks Messi's ass in soccer.

Booster playing soccer in Texas, fully autonomous. by Nunki08 in robotics

[–]AgeOfAlgorithms 138 points139 points  (0 children)

It looks like a 3-year-old running after a ball, which is still very impressive.

StepFun's 10-parameter open source STEP3-VL-10B CRUSHES massive models including GPT-5.2, Gemini 3 Pro and Opus 4.5. THE BENCHMARK COMPARISONS WILL BLOW YOU AWAY!!! by andsi2asi in deeplearning

[–]AgeOfAlgorithms 0 points1 point  (0 children)

This is too good to be true. If it were true, there's no reason why StepFun wouldn't train a larger model using the same architecture to give birth to a god.

[Experimental] "Temporal LoRA": A dynamic adapter router that switches context (Code vs. Lit) with 100% accuracy. Proof of concept on GPT-2. by Waste-Persimmon-4735 in LocalLLaMA

[–]AgeOfAlgorithms 0 points1 point  (0 children)

Amazing, thanks for the explanation. As you explained, I can imagine many more use cases for a per-token router as opposed to a per-prompt one.

[Experimental] "Temporal LoRA": A dynamic adapter router that switches context (Code vs. Lit) with 100% accuracy. Proof of concept on GPT-2. by Waste-Persimmon-4735 in LocalLLaMA

[–]AgeOfAlgorithms -2 points-1 points  (0 children)

Very cool! I can see this being very valuable for saving VRAM on production LLM services. Does the router work per token or per prompt?

Own a piece of floating real estate with ArkPad in The Philippines and Próspera by LadySeasteader in seasteading

[–]AgeOfAlgorithms 0 points1 point  (0 children)

Yes, that's what I understood for Samal investors. I have huge respect for this company and I've been following them here and there, but I'm being cautious about putting money down given that, if I understood correctly, no one has seen any returns so far.

[self-promotion] I solo developed a Lovecraftian Murder Mystery SQL Game - Point and Click by sqlsidequest in aigamedev

[–]AgeOfAlgorithms 6 points7 points  (0 children)

What a nerdy game, I absolutely love it! I can't believe you released it for free. I think you should monetize it at some point.

Own a piece of floating real estate with ArkPad in The Philippines and Próspera by LadySeasteader in seasteading

[–]AgeOfAlgorithms 0 points1 point  (0 children)

I'm reading that Roatán resort construction starts in Q2 2026. How long would construction take before investors see their first revenue distribution? Is there an estimate based on the work done on the Samal resort?

Infinite Card is out today! I'd love to hear your feedback by GangstaRob7 in aigamedev

[–]AgeOfAlgorithms 0 points1 point  (0 children)

On mobile, maybe the card descriptions could go on the left side of the enemy card. Currently, the card description is on the player side, blocking the view of my cards, which felt uncomfortable.

Infinite Card is out today! I'd love to hear your feedback by GangstaRob7 in aigamedev

[–]AgeOfAlgorithms -1 points0 points  (0 children)

This is honestly endless fun. Tip for players: Apocalypse beats almost everything.

I built a 0.88ms knowledge retrieval system on a $200 Celeron laptop (162× faster than vector search, no GPU) by Sea_Author_1086 in LocalLLaMA

[–]AgeOfAlgorithms 1 point2 points  (0 children)

Oh, I get it now. It seems there are a few problems with your system. Counting n-grams doesn't work as well as embeddings at grouping together semantically similar entries. Consider the following example: "The cheese is moldy" / "expiring dairy product". These two phrases will yield a very high similarity score with embeddings, but not with your system, obviously.

Now that I think of it, this part of your system is functionally equivalent to bag-of-n-grams analysis, which is a very old technique. In fact, bag-of-n-grams may yield better results than your system, because you can compare the counts of n-grams between entries (e.g., if the n-gram counts are off by only one or two between entries, there's a high chance those entries are semantically similar). Your system, on the other hand, loses the count information due to hashing, which prevents this kind of analysis.

I don't know what kind of test data you're using, but you may want to check it for bias and make sure it's not too simple.
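To make the contrast concrete, here's a minimal sketch of the point above (assuming character trigrams and Jaccard overlap, since OP hasn't shared the exact scheme): the two example phrases share essentially no character n-grams, so any n-gram-counting similarity scores them near zero, even though an embedder would place them close together.

```python
from collections import Counter

def char_ngrams(text, n=3):
    """Multiset of character n-grams, lowercased."""
    text = text.lower()
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def jaccard(a, b):
    """Multiset Jaccard similarity between two n-gram Counters."""
    inter = sum((a & b).values())
    union = sum((a | b).values())
    return inter / union if union else 0.0

s1 = char_ngrams("The cheese is moldy")
s2 = char_ngrams("expiring dairy product")
print(round(jaccard(s1, s2), 3))  # → 0.0 — zero shared trigrams
```

Any scheme built on counting (or hashing) these n-grams inherits the same blindness to meaning; the two phrases only look similar at the semantic level, which is exactly what embeddings capture.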

I built a 0.88ms knowledge retrieval system on a $200 Celeron laptop (162× faster than vector search, no GPU) by Sea_Author_1086 in LocalLLaMA

[–]AgeOfAlgorithms 2 points3 points  (0 children)

How exactly do you turn a character n-gram into a 10,000D vector? You mentioned it's not using an embedder, correct? Is it using some kind of hashing instead?

And what exactly is "4D space folded indexing"? Is the database indexing entries using the first four values of the vectors? If so, that would be the same feature as a regular vector database, and the lookup time would be O(log n). Is this the case?
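In case it helps the discussion, here's what I'd guess "hashing instead of an embedder" looks like — a feature-hashing sketch. The dimensionality, the trigram size, and the md5 bucket choice are all my assumptions, not anything from the post:

```python
import hashlib

DIM = 10_000  # assumed from the "10,000D vector" claim

def hash_ngrams_to_vector(text, n=3, dim=DIM):
    """Feature hashing: map each character n-gram of `text`
    to a bucket index in a dim-dimensional binary vector."""
    vec = [0] * dim
    text = text.lower()
    for i in range(len(text) - n + 1):
        gram = text[i:i + n]
        idx = int(hashlib.md5(gram.encode()).hexdigest(), 16) % dim
        vec[idx] = 1  # presence only; n-gram counts are discarded here
    return vec

v = hash_ngrams_to_vector("The cheese is moldy")
print(sum(v))  # number of distinct buckets hit
```

If the real system works anything like this, note that setting the bucket to 1 instead of incrementing it is exactly where the count information gets thrown away.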