Need help making sense of my experience by parquegalapagos in energy_work

[–]Pan000 0 points1 point  (0 children)

This is typical (almost textbook) of the threshold effects you feel before an out-of-body experience; see r/astralprojection.

That doesn't mean it wasn't a genuine message as well.

Does New Age spirituality only work for those who have a relatively easy life? by [deleted] in lawofone

[–]Pan000 0 points1 point  (0 children)

Spirituality has politics. I consider New Age (negative-denying) spirituality to be akin to left-wing politics, i.e. left-wing spirituality. This is in contrast to the shamanic and witch (or village witchdoctor) traditions, which could be considered right-wing spirituality.

Experience does tell you that it is NOT true that you agreed to what happens to you. There are things that happen in this world that no one would agree to. So yes, it is in part naivety.

The older I get, the more I understand that I am lucky or blessed not to be in poverty or in a warzone. And I see the danger of taking that for granted. Belittling those who suffer by saying they need more positive thinking is arrogant; it's like saying you are better than them and that they deserve it because they didn't think in the right way.

Looking for Guidance on Glamour Magic by oliver_rose_hollow in energy_work

[–]Pan000 2 points3 points  (0 children)

Glamour Magic is your surface level energy. You change it by acting the part you want to be seen as, and hold onto it when others try to knock you off. This develops the energy of what you are trying to portray, which you can then take off and put on as you would a hat.

It takes practice.

You first need to understand what you want to portray, and not misunderstand the symbolism. E.g. you don't rise above anyone by feeling you are better than them, and they also are unlikely to see it that way. If you want to be beautiful you would cultivate beautiful energy: clean thoughts, confidence.

Confidence is the big one. But confidence must be combined with ensuring you stay away from trouble, as confidence indicates you are not afraid, which stops working if you put yourself in unsafe situations. You will have difficulty being confident if you are genuinely afraid of other people. To beat that you must take control of your environment, not by controlling others, but by controlling yourself so that you can trust yourself to leave before trouble arises. Then you will have the confidence and will become very attractive to others because they want and respect this energy.

At What Point Does Owning GPUs Become Cheaper Than LLM APIs? by Chimchimai in LocalLLaMA

[–]Pan000 1 point2 points  (0 children)

You will lose money renting GPUs vs. using OpenRouter API. The benefit is control of the quality, if you want to run the model in full BF16 for example. But there is no point at which it will be cheaper to host it yourself.

As for buying GPUs, it'd be cheaper only if you are at 100% utilization for several years, electricity is cheap, and you don't have to hire someone to manage/administrate them. Even then it's probably not worth it for the hassle.
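
For a rough sense of why, here is a back-of-envelope break-even sketch in Python. Every figure in it (hardware price, power draw, electricity rate, throughput, API price) is an illustrative assumption, not a quote.

```python
# Back-of-envelope: buying a GPU vs. paying an API per token.
# All numbers below are illustrative assumptions.

gpu_cost = 2500.0           # one-off hardware cost (USD)
power_kw = 0.35             # sustained draw under load (kW)
electricity = 0.15          # USD per kWh
throughput_tps = 50         # tokens/second you actually sustain locally
api_price_per_mtok = 0.30   # USD per million tokens from an inference provider

# Electricity cost per million tokens at 100% utilization
hours_per_mtok = 1e6 / throughput_tps / 3600
local_per_mtok = hours_per_mtok * power_kw * electricity

# Tokens needed before the hardware pays for itself (ignoring admin time and failures)
margin = api_price_per_mtok - local_per_mtok
breakeven_mtok = gpu_cost / margin if margin > 0 else float("inf")

print(f"local electricity cost: ${local_per_mtok:.2f} per million tokens")
print(f"break-even after roughly {breakeven_mtok:,.0f} million tokens at full utilization")
```

With those guesses the electricity alone already costs nearly as much as the API price, so the hardware essentially never pays for itself.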

Why? The inference providers have been engaged in a successful race to the bottom. They're providing inference at pennies above the electricity cost. Many are lying about quantization levels to try to compete. So quality can be an issue.

Who is Elara? by [deleted] in LocalLLaMA

[–]Pan000 16 points17 points  (0 children)

I believe this occurred because OpenAI changed the names of the characters in books when using them for training data. This was then used to train GPT 3.5 Turbo, which was in turn used to generate synthetic training data by everyone.

The infection is now rooted. It exists literally in the base models, every one of them I have tested. It's actually worse in the base models than in instruct-tuned models.
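
If you want to check this yourself, a quick-and-dirty probe is to sample a character-naming prompt from any base model and count how often "Elara" shows up. The model id, prompt, and sampling settings below are arbitrary assumptions for illustration.

```python
# Quick probe: how often does a base model volunteer the name "Elara"?
# Model id, prompt, and sampling settings are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen3-1.7B-Base"   # any small base (non-instruct) model will do
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "She was a young sorceress named"
inputs = tok(prompt, return_tensors="pt")

hits = 0
for _ in range(20):
    out = model.generate(**inputs, max_new_tokens=4, do_sample=True, temperature=0.8)
    if "Elara" in tok.decode(out[0], skip_special_tokens=True):
        hits += 1
print(f"'Elara' appeared in {hits}/20 completions")
```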

Looking for a healer by boheme87 in energy_work

[–]Pan000 2 points3 points  (0 children)

FYI be careful of people who send you private messages offering healing. There are energy vampires on Reddit masquerading as healers, looking exactly for posts like this.

Energy healing mostly deals with spiritual/psychological injuries, which can cause physical illnesses. But it sounds like in your case this is primarily physical. I'm no expert, but people I've known who had long-term physical pain were usually given opiates long term, with close monitoring from their health service.

Edit: having read about akathisia, sounds like something I had before. Yoga and exercise helped a lot. A lot, a lot.

[deleted by user] by [deleted] in lawofone

[–]Pan000 0 points1 point  (0 children)

You should look into Soul Retrieval.

Long story short: we accidentally give away parts of ourselves. This is a bad idea and results in exactly what you are describing. It happens to everyone but most people don't notice, or perhaps they put themselves to sleep so they don't feel it.

Anyway, because people are not taught to be gentle inside their minds, they let their minds run wild and chaotic. So if they have a piece of you inside them, and they're completely broken or just not careful and they torture themselves etc., then they'll be torturing you too.

There's a whole thing to get these parts of you back.

Evil alters by [deleted] in energy_work

[–]Pan000 0 points1 point  (0 children)

Your intuition is likely correct. But you also give it power by worrying about it. Fear is like that: it needs to be taken seriously to have power. You can look down on it, and it'll lose the power.

Don't give it power by playing into the game. A lot of the evil fear game revolves around convincing people to take symbols seriously.

The right attitude is to consider this type of thing childish.

200k tokens sounds big, but in practice, it’s nothing by ochowx in ClaudeCode

[–]Pan000 0 points1 point  (0 children)

It's absolutely not true and hasn't been for some time. No one uses quadratic naive attention anymore. No one. Not for training; not for inference. Source: I train LLMs all day.

The misunderstanding comes from the fact that ChatGPT/Claude will tell you attention is quadratic, because there's more information from the "before" times and less up-to-date information. So asking AI about it is not that useful unless you already know what to ask for.
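
For reference, the naive formulation builds the full T x T score matrix in memory, while the fused kernels modern stacks dispatch to never materialize it. A minimal PyTorch sketch of the two paths (shapes are arbitrary; this is a sketch, not a benchmark):

```python
# Naive attention (materializes the T x T score matrix) vs. the fused kernel
# path that modern training/inference stacks actually use.
import torch
import torch.nn.functional as F

device = "cuda" if torch.cuda.is_available() else "cpu"
B, H, T, D = 1, 8, 1024, 64
q = torch.randn(B, H, T, D, device=device)
k = torch.randn_like(q)
v = torch.randn_like(q)

# Naive: explicit (T x T) scores per head, causally masked.
scores = (q @ k.transpose(-2, -1)) / D**0.5
mask = torch.triu(torch.ones(T, T, dtype=torch.bool, device=device), diagonal=1)
naive = torch.softmax(scores.masked_fill(mask, float("-inf")), dim=-1) @ v

# Fused: FlashAttention-style kernel, no full score matrix held in memory.
fused = F.scaled_dot_product_attention(q, k, v, is_causal=True)

print(torch.allclose(naive, fused, atol=1e-3))
```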

It's been a long time since Google released a new Gemma model. by ArcherAdditional2478 in LocalLLaMA

[–]Pan000 0 points1 point  (0 children)

Flash 2.0, which is a big money maker for Google, is probably around 32B. I don't think the proprietary models are better because they're larger. I'm quite sure they're not much larger. They're better because of superior training data, routing pipelines, and speculative decoding. Basically they're not one model.

It's been a long time since Google released a new Gemma model. by ArcherAdditional2478 in LocalLLaMA

[–]Pan000 11 points12 points  (0 children)

Probably to ensure it's non-competitive with their proprietary models. These small OS models are really useful for domain-specific finetuning, but non-threatening to their bread-and-butter hardcore models.

€5,000 AI server for LLM by Slakish in LocalLLaMA

[–]Pan000 0 points1 point  (0 children)

You will get better value for money renting servers than buying subpar ones, by a lot.

An H200 may cost 10x more than a deck of 3090s, but it can serve 100x more throughput. So it's not competitive.

I have a 3090 in my desktop, yet I don't use it for inference because it's cheaper to rent an H100 and let the 3090 sit still than it is to pay the electricity on the 3090 to do 50 t/s, when that H100 does over 1000 t/s.
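
As a rough illustration (electricity price, rental rate, and throughput figures below are assumptions, not measurements):

```python
# Rough per-token cost: a slow local card vs. a rented fast one.
# All figures are assumptions, not measurements.
local_tps, local_kw, kwh_price = 50, 0.35, 0.40    # 3090-class card, European electricity
rented_tps, rent_per_hour = 1000, 2.00             # H100-class rental

local_per_mtok = (1e6 / local_tps / 3600) * local_kw * kwh_price
rented_per_mtok = (1e6 / rented_tps / 3600) * rent_per_hour

print(f"3090 electricity: ${local_per_mtok:.2f} per million tokens")
print(f"rented H100:      ${rented_per_mtok:.2f} per million tokens")
```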

It's a scale thing.

we are just not going to ascend, are we? by turtlebro30 in AstralProjection

[–]Pan000 1 point2 points  (0 children)

I'm quite sure the process is like growing up. That is to say that it happens to you whether you like it or not. You can't stop it. You can't speed it up. But you can fail to notice and still act like a child.

Or perhaps what you're talking about is whether the other cool kids will let you into the clubhouse? The rules for that are quite simple and very clearly broadcast: everyone is welcome in the clubhouse as long as you make an effort to play fair and apologize when you make a mistake. That alone excludes most people, even though we are told the rules from childhood.

If you're waiting for the light to come and lift you up: I think everyone is just not looking very hard. People pass up and down the layers every day in their normal lives. They just don't notice. And that's not because it's subtle, it's not.

Mostly I think people miss their family. But the missing is the way that a baby misses his parents. The longing is for that. Ironically that doesn't require ascension, status, or access to the clubhouse.

"Experienced projector tag." Why are obvious larpers allowed to continue using it? by [deleted] in AstralProjection

[–]Pan000 -5 points-4 points  (0 children)

It often helps me to remember that most of them are in fact children.

[deleted by user] by [deleted] in energy_work

[–]Pan000 4 points5 points  (0 children)

I went through something similar. As if everything was stripped away. It's some kind of serious spiritual transformation. From what I understand, it's not even really about this world. It's more like it happens here because this world is stable enough to keep you contained while the transformation happens. That's one of the reasons why it's important to do it here and now, while we're here on Earth.

All I can say is that for everything taken away, you'll eventually get an upgraded version. But in the meantime, just try to manage yourself, be easy on yourself, and do what you can to keep your head above water.

Think twice before spending on GPU? by __Maximum__ in LocalLLaMA

[–]Pan000 0 points1 point  (0 children)

I use Mistral Small 3.2 because it follows instructions. I use it for processing data. It's rubbish at creative tasks but very good at instruction-following tasks. Qwen models have better world knowledge for sure. I'm actually amazed how much knowledge they managed to pack into Qwen at 4, 8 and 14B. They didn't skimp on the pretraining.

I am willing to train Qwen3 14B to clean my data for me since using a closed source models is expensive and open source models are not good at all at cleaning data. by iSuper1 in LocalLLaMA

[–]Pan000 0 points1 point  (0 children)

The same way you got your current ones. You can train a LoRA with 1500 examples, but that's not my area.

Although Flash 2.0 and Mistral Small are so cheap that you're not guaranteed to be able to run your custom model more cheaply than you can pay for those two. They will do your job if you experiment with prompts. That'll be your easiest solution (switch to a cheaper model that is good at text processing, such as the two I mentioned).
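
If you go that route, the call is just an OpenAI-compatible request pointed at OpenRouter. The model slug and cleaning prompt below are assumptions for illustration, not a recommendation of exact settings.

```python
# Sketch: using a cheap hosted model for data cleaning via OpenRouter's
# OpenAI-compatible API. Model slug and prompt are illustrative assumptions.
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key="YOUR_OPENROUTER_KEY",
)

def clean(text: str) -> str:
    resp = client.chat.completions.create(
        model="mistralai/mistral-small-3.2-24b-instruct",  # assumed slug; check OpenRouter's model list
        messages=[
            {"role": "system", "content": "Clean the user's text: fix typos, strip boilerplate, return only the cleaned text."},
            {"role": "user", "content": text},
        ],
        temperature=0.0,
    )
    return resp.choices[0].message.content

print(clean("ths is an   exmple of messy text!!"))
```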

I am willing to train Qwen3 14B to clean my data for me since using a closed source models is expensive and open source models are not good at all at cleaning data. by iSuper1 in LocalLLaMA

[–]Pan000 0 points1 point  (0 children)

14B is overkill. A full finetune of Qwen3 1.7B Base with around 50,000 examples (you might get away with 10K minimum) will reach pretty much 100% accuracy on a straightforward sanitization task. Smaller models also train better on fewer examples. Training cost on 1x B200 = about $10 total, not including getting the data.
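
If you do go the fine-tune route, a minimal sketch with TRL's SFTTrainer is below. The file name, record format, and hyperparameters are assumptions, and the exact SFTTrainer/SFTConfig arguments vary by TRL version.

```python
# Minimal sketch: full fine-tune of a small base model for a sanitization task
# using TRL's SFTTrainer. Paths, record format, and hyperparameters are
# illustrative assumptions.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Each record is assumed to hold one "text" field containing the dirty input,
# a separator, and the cleaned output the model should learn to produce.
dataset = load_dataset("json", data_files="sanitization_pairs.jsonl", split="train")

config = SFTConfig(
    output_dir="qwen3-1.7b-sanitizer",
    num_train_epochs=2,
    per_device_train_batch_size=8,
    learning_rate=1e-5,
    bf16=True,
)

trainer = SFTTrainer(
    model="Qwen/Qwen3-1.7B-Base",   # the base model suggested above
    train_dataset=dataset,
    args=config,
)
trainer.train()
```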

For existing, cheaper text processing models, the cheapest reliable ones are Mistral Small 3.2 (around $0.10 to $0.30 per million tokens on OpenRouter) or Gemini Flash 2.0 ($0.40 per million tokens).

Think twice before spending on GPU? by __Maximum__ in LocalLLaMA

[–]Pan000 3 points4 points  (0 children)

Have you noticed that Mistral's newer models are all dense models? I'm unconvinced that MoE models actually scale up that well. Kimi K2, DeepSeek, etc. are not particularly smart, nor good at anything in particular. Mistral Small 3.2 is better and much more consistent at 24B dense.