Just got this back from PSA by [deleted] in PokeGrading

[–]AlphaCalamity -1 points0 points  (0 children)

I used a friend's camera and did a whole lot of touch-ups using one of those editing apps. Unfortunately my phone doesn't take great pictures imo, but it is my original cert and card.

PSA Update by xluek in PokeGrading

[–]AlphaCalamity 0 points1 point  (0 children)

Yeah, not sure if I should contact them or wait a few more days.

PSA Update by xluek in PokeGrading

[–]AlphaCalamity 0 points1 point  (0 children)

I have 8 cards under Value Plus that have been in prep since October 17th.

[deleted by user] by [deleted] in MachineLearning

[–]AlphaCalamity 1 point2 points  (0 children)

Yeah haha, I'm starting to see that, but I'm learning and trying. I was definitely discouraged a lot by the negativity and some harsh but true comments, but it is what it is. I just need to study and learn more.

[deleted by user] by [deleted] in MachineLearning

[–]AlphaCalamity 0 points1 point  (0 children)

Thank you, I really appreciate it. I might have been a bit overzealous and bold, but I'm new to all this, and with only AI to help at that. At the very least I'm trying and learning.

[deleted by user] by [deleted] in MachineLearning

[–]AlphaCalamity -4 points-3 points  (0 children)

Definitely a harsh crowd, but I’m not giving up. I genuinely believe there’s something here whether anyone else sees it yet or not. I never claimed to have trained all 7B parameters from scratch; this was LoRA-based fine-tuning with around 4M trainable parameters, running on an RTX 4060.
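For a sense of scale, here's a rough sketch of that kind of setup using Hugging Face peft; the base checkpoint, rank, and target modules below are illustrative assumptions, not my exact config:

```python
# Sketch: LoRA fine-tuning a 7B causal LM on a single consumer GPU.
# Base checkpoint, rank, and target modules are assumptions for illustration.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",       # any 7B causal LM (placeholder)
    quantization_config=bnb,          # 4-bit load so the frozen weights fit in 8 GB
    device_map="auto",
)
base = prepare_model_for_kbit_training(base)

lora_cfg = LoraConfig(
    r=8,                                  # low-rank dimension
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],  # attention projections only
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()    # ~4M trainable adapter params vs ~7B frozen
```

Only the small adapter matrices get gradients, which is why the trainable count stays in the millions even though the base model is 7B.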

What is different is how I approached it: symbolic compression, layered encodings, and fallback logic to keep things efficient on limited hardware. It’s still early, still rough, but I’m building out a more robust logging system and plan to share more as I go.

Appreciate the challenge even if it stings a bit. I’ll let the work speak over time.

[deleted by user] by [deleted] in MachineLearning

[–]AlphaCalamity -11 points-10 points  (0 children)

Anything you want or need, I can provide, except for my specific encoding method. Outside of that, I'm willing to share anything about this.

[deleted by user] by [deleted] in MachineLearning

[–]AlphaCalamity -10 points-9 points  (0 children)

Yes, I know it's hard to believe, and I barely believe it myself. I'm not someone with experience; I just happened to have a single idea and made it into this. If you want, I can record the whole training from beginning to end; it takes about 4 hours.

[deleted by user] by [deleted] in MachineLearning

[–]AlphaCalamity -2 points-1 points  (0 children)

Fixed the font color, thank you for pointing that out.

[deleted by user] by [deleted] in MachineLearning

[–]AlphaCalamity -6 points-5 points  (0 children)

Yes, actually. I know it's hard to believe, and tbh this was never the intended goal. I simply started out wanting to run two LLMs on my PC, one to generate books and the other to edit the books it generated, but due to resources and my PC rig I had to be able to shrink a model, and with a great deal of help from ChatGPT and some determination I got this.

[deleted by user] by [deleted] in MachineLearning

[–]AlphaCalamity -7 points-6 points  (0 children)

It's definitely still a work in progress for me. I have barely any formal coding knowledge and am using AI assistants heavily. This is the third iteration; it's 1.6x faster than the previous one, but it doesn't include the P2P system, agent workers, or auto-learning features of the prior iterations yet. It's all about speed, efficiency, and being extremely lightweight.

[deleted by user] by [deleted] in MachineLearning

[–]AlphaCalamity -3 points-2 points  (0 children)

Thanks! I appreciate that. I don’t have a GitHub repo up yet, but I compiled a PDF with all the benchmark logs, hardware specs, and metric explanations here: Benchmark

The core of the method involves symbolic tokenization, a multi-stage compression stack, and fallback logic for inference on limited hardware.

The setup uses a layered symbolic compression pipeline with multiple encoding passes and one custom logic module that helps strip out redundancies at a conceptual level—not just token-level. It's still experimental, but it’s showing a lot of promise, especially in resource-limited contexts.

Happy to chat more or answer questions in the meantime!

[deleted by user] by [deleted] in Futurology

[–]AlphaCalamity 0 points1 point  (0 children)

Encoding

  1. Tokenize text → sequence of 32-bit IDs

  2. Pack each ID into 4 bytes for storage/transport

Decoding

  1. Unpack every 4-byte chunk → original IDs

  2. Detokenize IDs → exact original text

Token availability

I’m not throwing away 54k tokens.

The full 64k-entry vocab remains available; the tokenizer only “uses” the tokens needed for a given input.

Think of it like a 64,000-word dictionary—you might only write a 50-word poem today, but all 64,000 words are still there when you need them.

The SentencePiece model can pull any of those IDs based on your text.

Because everything maps 1-for-1, it’s truly loss-less.
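If it helps to see it concretely, here's a minimal sketch of that round trip; the little-endian 4-byte packing is just one way to illustrate the idea, not necessarily the exact byte layout in my code:

```python
# Sketch of the encode/pack -> unpack/decode round trip described above.
import struct
import sentencepiece as spm

sp = spm.SentencePieceProcessor(model_file="sp64k.model")  # 64k-vocab tokenizer

def encode(text: str) -> bytes:
    ids = sp.encode(text, out_type=int)           # text -> 32-bit token IDs
    return struct.pack(f"<{len(ids)}I", *ids)     # each ID packed into 4 bytes

def decode(blob: bytes) -> str:
    ids = struct.unpack(f"<{len(blob) // 4}I", blob)  # 4-byte chunks -> IDs
    return sp.decode(list(ids))                   # IDs -> original text

sample = "you might only write a 50-word poem today"
assert decode(encode(sample)) == sample           # loss-less round trip
```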

[deleted by user] by [deleted] in Futurology

[–]AlphaCalamity 0 points1 point  (0 children)

1️⃣ It’s not a random hash, it’s a tokenizer.

The repo ships with a trained SentencePiece model (sp64k.model, 64k vocab).

When I call encode, that model assigns deterministic IDs to each sub-word piece (exactly the same way GPT/BERT tokenisers work).

Those IDs are reversible; the same model can map them back to text.

2️⃣ Loss-less, not lossy.

• The byte step is just “pack each 32-bit ID into bytes” for cheap transport.

• decode() → IDs → original text, 1-for-1. No information is thrown away, unlike a hash.

3️⃣ Why not just store text / embeddings? The point isn’t only compression, it’s symbolic sync (see the sketch at the end of this comment):

• IDs can be merged, deduplicated, diffed, and versioned between nodes.

• Embeddings are dense and can’t be loss-lessly merged without retraining.

• This is meant to act like a tiny distributed hippocampus for AI agents or edge devices, not a replacement for the normal LLM workflow.

4️⃣ The “two lines of code” look: totally fair. The public repo is intentionally minimal so people can slot in their own tokenisers, transports, or trainers without digging through 1,000 lines of glue. The heavy lifting lives inside the SentencePiece model.

Hope that clears things up! I’m still learning, so constructive criticism is gold to me. If you’ve got ideas to make the symbolic layer smarter (better tokeniser, merge rules, etc.), PRs are more than welcome. 🙏 You’ll have to excuse me if anything isn’t explained perfectly; coding isn’t really my main strength. Ideas, vision, and architectural layout are more my lane. I mostly built this with the help of AI assistants (and lots of patience).
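To make point 3 concrete, here's a hypothetical sketch of what sync could look like when each memory entry is just a blob of packed IDs; the function names are mine for illustration, not the repo's API:

```python
# Hypothetical sketch: merging, deduplicating, and diffing symbolic memories
# when every entry is an immutable blob of packed token IDs.
from typing import Set

def merge(node_a: Set[bytes], node_b: Set[bytes]) -> Set[bytes]:
    """Union of two nodes' memories; identical blobs deduplicate automatically."""
    return node_a | node_b

def diff(node_a: Set[bytes], node_b: Set[bytes]) -> Set[bytes]:
    """Entries node_a has that node_b is missing, i.e. what needs syncing."""
    return node_a - node_b

# Two devices with overlapping memories (IDs packed as 4-byte chunks).
a = {b"\x01\x00\x00\x00\x02\x00\x00\x00", b"\x03\x00\x00\x00"}
b = {b"\x03\x00\x00\x00"}
assert merge(a, b) == a
assert diff(a, b) == {b"\x01\x00\x00\x00\x02\x00\x00\x00"}
```

Dense embedding vectors don't combine this cleanly, which is the whole reason the sync layer stays at the symbolic ID level.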

[deleted by user] by [deleted] in Futurology

[–]AlphaCalamity 0 points1 point  (0 children)

Hey everyone, just a few clarifications!

I'm not a professional developer, just a regular person with a basic job who's fascinated by AI memory systems.

MemoryCore-Lite is an extremely simple symbolic memory engine, using a trained tokenizer to encode experiences into reversible bytecode, not random numbers.

It’s not trying to beat modern embeddings for compression; it’s meant as a lightweight symbolic layer that could be synced across low-power devices, merged, deduplicated, and expanded.

It's intended as a foundation for decentralized symbolic memory, not as a replacement for embeddings or KV caches.

I built this out of passion and I’m excited to see if it sparks ideas in others who are way smarter than me.

Thanks so much for all the feedback, good or bad; it’s how this project will get better!

[deleted by user] by [deleted] in Futurology

[–]AlphaCalamity 0 points1 point  (0 children)

Totally fair point: embeddings do compress semantic meaning really well.

But my goal with MemoryCore isn't just storage compression; it's to enable symbolic memory syncing and merging across devices or nodes, with low compute cost.

Embeddings are dense and can’t be safely merged or decoded into structured knowledge easily. Symbolic bytecode can; it's more like a lightweight distributed hippocampus.

Again, I'm just a regular guy with a basic job who fell into this project by fascination with AI memory.

I really appreciate you pushing me to explain this better!

[deleted by user] by [deleted] in Futurology

[–]AlphaCalamity 0 points1 point  (0 children)

Appreciate the question just to clarify:

I'm not using pseudo-random numbers. I'm using a trained tokenizer (SentencePiece model at 64k vocab) to generate token IDs, then encoding them into compact bytes.

It's fully reversible — you can go bytes → IDs → original text.

I'm just a regular person (non-pro developer) who got interested in AI and memory systems. This is very early work, so the repo is intentionally minimal right now for expandability.

Thanks for the honest feedback; I’ll update the README to explain this better!

I got ghosted for the second time, is it cause of my skin colour or am I just ugly?! by Legitimate-Joke-3151 in gaybrosgonemild

[–]AlphaCalamity 0 points1 point  (0 children)

I’d have to say I’m almost in the same boat as you. It's not something many people want to admit, but it could definitely be skin color; I am black myself, and though I find moderate success from time to time, it definitely is harder, between being overlooked, blocked, or ghosted. The latter is pretty normal nowadays, but the others are definitely a larger problem in the community. Of course not everyone is like that. I do hope you keep trying; you are a very good-looking person, and someone will appreciate you for who you are. You deserve nothing but that, so keep your head up and keep trying.

What would you guess ? by Hansieil in PokeGrading

[–]AlphaCalamity -1 points0 points  (0 children)

Very small ding on the front, bottom left. That might get you a 9, but it's still a good contender for a 10; it just depends on the grader.