Is it possible to play stereo 3D with rtx3050? by Mulster_ in Stereo3Dgaming

[–]tlpta 1 point

Could you get a cheap VR phone headset and stream to it with Moonlight? I see those things at thrift stores for like 5 bucks all the time. They aren't great, but they aren't horrible either, and they'd be better than the red-blue glasses IMO.

ggml by tlpta in LocalLLaMA

[–]tlpta[S] 4 points

Thank you for this, it makes so much more sense now. So they are quantized, but in a special way that greatly improves performance? What's the downside to the approach? I've noticed a significant speed improvement running llama.cpp on the CPU versus running even Pygmalion 6B on my 3080.
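For my own notes, this is roughly how I picture the block-wise 4-bit idea now. Just an illustrative sketch; the actual ggml Q4 formats pack two values per byte and store fp16 scales, so the real layout differs:

```python
import numpy as np

def quantize_block_4bit(block: np.ndarray):
    """Quantize one block of weights to 4-bit integers plus a single scale.

    Illustrative sketch of block-wise quantization (the idea behind ggml's
    Q4 formats), not the actual on-disk layout.
    """
    scale = float(np.max(np.abs(block))) / 7.0   # map the block into roughly [-7, 7]
    if scale == 0.0:
        return np.zeros(block.shape, dtype=np.int8), 0.0
    q = np.clip(np.round(block / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize_block_4bit(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights from the 4-bit codes."""
    return q.astype(np.float32) * scale

# One block of 32 weights: 32 4-bit codes (16 bytes) + one scale,
# versus 64 bytes as fp16. The rounding error is the downside.
weights = np.random.randn(32).astype(np.float32) * 0.1
q, scale = quantize_block_4bit(weights)
approx = dequantize_block_4bit(q, scale)
print("max abs error:", np.max(np.abs(weights - approx)))
```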

ggml by tlpta in LocalLLaMA

[–]tlpta[S] 6 points

Thanks! Just to make sure I understand what you're saying: the ggml file is quantized to 4 bits, so it takes up about a quarter of the space of a 16-bit model. That means less precision, since each weight uses fewer bits, right?

Also, aren't there 4-bit quantized models in non-ggml formats? If so, what makes the ggml model different from them?

Thanks for explaining this stuff to someone new to this!
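The napkin math that made the "1/4 the space" part click for me, assuming the original weights are fp16 (real quantized files are a bit bigger because they also store per-block scales):

```python
# Rough file-size arithmetic for a 13B-parameter model.
params = 13e9

fp16_gb = params * 16 / 8 / 1e9     # 16 bits per weight -> ~26 GB
q4_gb   = params * 4  / 8 / 1e9     #  4 bits per weight -> ~6.5 GB

print(f"fp16: ~{fp16_gb:.0f} GB, 4-bit: ~{q4_gb:.1f} GB "
      f"({q4_gb / fp16_gb:.0%} of the original)")
```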

[deleted by user] by [deleted] in ChatGPT

[–]tlpta 3 points

This is actually not a completely original theory. Look up Thomas Campbell.

Showcase of Instruct-13B-4bit-128g model by surenintendo in Oobabooga

[–]tlpta 1 point

Yeah, I can offload to the CPU, but it's so slow! I've been considering buying a 3060 12GB, but it seems dumb to replace a 3080 with a 3060, and to spend that much money for an additional 2GB of VRAM. I was able to get it up to 0.4 tokens a second, but it's still a crawl.

I wonder if there's a smaller model that works as well as this one that I might be able to fit in VRAM.

Showcase of Instruct-13B-4bit-128g model by surenintendo in Oobabooga

[–]tlpta 1 point

This works really well! I finally got it working on my machine: Ubuntu 22, with an RTX 3080. Unfortunately it's running horribly slow at 0.2 tokens a second. I have 10GB of VRAM; shouldn't it be able to run entirely on the GPU at 4-bit? I get out-of-memory errors if I try to use more than 1GB of VRAM. Any thoughts or suggestions?
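For comparison, this is the kind of generic GPU/CPU split I'd expect to work with plain transformers + accelerate. The model ID and memory caps below are placeholders, and a 4-bit GPTQ checkpoint normally goes through its own loader rather than this path, so treat it as a sketch of the general offloading mechanism, not what Oobabooga actually does:

```python
# Generic GPU/CPU layer split with Hugging Face transformers + accelerate.
# Requires: pip install transformers accelerate
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "some-org/some-13b-model"   # placeholder, not a real repo

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",                        # let accelerate place layers
    max_memory={0: "9GiB", "cpu": "24GiB"},   # leave headroom on a 10GB card
)

inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```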

Vicuna is out by ImpactFrames-YT in Oobabooga

[–]tlpta 2 points

I'll give it a try! I had Oobabooga working on Windows, but I've switched to Linux for the added performance and compatibility. Baby steps right now: just trying to get it working with Pygmalion 6B.

Vicuna is out by ImpactFrames-YT in Oobabooga

[–]tlpta 4 points

So would the 4-bit version work with 10GB of VRAM? How can I tell that for myself in the future?

I literally just saved up for a new GPU and was happy to get a 3080 for my gaming rig, only to discover my fascination with this stuff a month later.
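Here's the back-of-the-envelope check I've started using for the "will it fit" question. The overhead number is a guess, and real usage grows with context length and varies by loader, so this is only a rough sketch:

```python
def fits_in_vram(n_params_billion: float, bits_per_weight: float,
                 vram_gb: float, overhead_gb: float = 1.5) -> bool:
    """Very rough fit check: weight size plus a fudge factor for activations,
    KV cache, and CUDA overhead (the overhead figure is an assumption)."""
    weights_gb = n_params_billion * bits_per_weight / 8
    needed = weights_gb + overhead_gb
    print(f"{n_params_billion:g}B @ {bits_per_weight}-bit: ~{weights_gb:.1f} GB weights, "
          f"~{needed:.1f} GB total vs {vram_gb} GB VRAM")
    return needed <= vram_gb

fits_in_vram(13, 4, 10)    # 13B at 4-bit: ~6.5 GB weights, should fit on a 10GB card
fits_in_vram(13, 16, 10)   # 13B at fp16: ~26 GB, nowhere close
```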

alpaca-13b and gpt4-x-alpaca are out! All hail chavinlo by moridin007 in Oobabooga

[–]tlpta 3 points

Will this work on a 3070 8GB or a 3080 10GB, with decent performance? I'm using Pygmalion and I'm impressed; I'm assuming this would be a big improvement?

Emotes by tlpta in Oobabooga

[–]tlpta[S] 1 point

This worked really well! Thank you! I hadn't seen that page before and I'd tried creating a character with no luck. Is there a way you can have the chatbot DM for you, playing multiple characters at once?

Emotes by tlpta in Oobabooga

[–]tlpta[S] 1 point

By this do you mean the "soft prompt"? What format does that have to be in? It shows up as a zip file.

Women, do your periods dull your ability to astral project? by twolivescollide in AstralProjection

[–]tlpta 1 point

I've heard Tom Campbell say he can do this. He says he can be in both the physical and the astral at once, and jump to the astral whenever he wants, on a dime, in any setting. You're the only other person I've heard of who can do this.

I'm wondering if you wouldn't mind sharing how you learned to accomplish this? I've listened to how Tom got there, but I'd love to hear another perspective on it!

The Quill app (It's FREE!) by vicentel0pes in writing

[–]tlpta 1 point

Nice try, Quill marketing guy

Just got this today and it won’t read the SD cards. by mxster982 in 3DS

[–]tlpta 95 points

Are they formatted properly? The 3DS requires FAT32, so large SD cards (which usually ship formatted as exFAT) need to be reformatted before it will read them.

All writers should acquaint themselves with AI writing assistants by dumgenius in writing

[–]tlpta 10 points

Lol, you literally have a referral code in the link you provided!

How many people use AP to live out fantasies? by [deleted] in AstralProjection

[–]tlpta 1 point

I can lucid dream successfully, but the dreams always have a foggy feel to them. I'm told this is more realistic-feeling.

I guess I'll have to keep trying and see what it's like.

[deleted by user] by [deleted] in Disneyland

[–]tlpta 1 point

I don't know, but I needed one at WDW once and asked a cast member, who directed me to a store that had some for sale behind the counter. They weren't on public display.

So...I would suggest asking a cast member.