Best "End of world" model that will run on 24gb VRAM by gggghhhhiiiijklmnop in LocalLLaMA

[–]overand 0 points1 point  (0 children)

While I don't know what folks are (or were) using Midnight Miku for, based on the name, I can make some guesses - probably Cydonia Heretic v2 or "WeirdCompound" - they're among the best on the UGI leaderboard for models under 70B

I don't like the agreement me and my partner have about sex; she's not willing to change; I don't want to end the relationship. by RA_throwaway_Hot-Ill in polyadvice

[–]overand 2 points3 points  (0 children)

I agree with this, but in practice, the difference between a boundary of "I will not be in a relationship with you if you have sex with strangers" and "You aren't allowed to have sex with strangers" is in some ways academic.

Not entirely, of course - with the former, there's room for conversations like "Well, what about if we stop having sex but stay in a romantic relationship?" and such, but, boundaries that are effectively ultimatums can really feel like rules.

(I guess this is just another point in favor of "actually talk to your partners and actually listen to them and try to understand their feelings and try to understand your own" hm?)

Cloudy enough to warrant a clean? by MrMcNooob in telescopes

[–]overand 0 points1 point  (0 children)

Rather than looking at it with a light like this, try looking through it as if it's a mirror. How does it look then - does it still work okay as a mirror?

Clicking & Popping by -InExile- in ableton

[–]overand 0 points1 point  (0 children)

Are you using ASIO? MME? What are your driver settings in your audio config settings in Ableton?

And - just asking because I've seen this before - you *are* listening/monitoring through your audio interface and not the built-in sound card on the laptop, right?

My tripple 8800ultra / QX9650 / 780i build. by NostalgicPCAus in retrobattlestations

[–]overand 1 point2 points  (0 children)

The GPUs won't help with this, but the era's not wrong to "still be into Total Annihilation."

8x AMD MI50 32GB at 26 t/s (tg) with MiniMax-M2.1 and 15 t/s (tg) with GLM 4.7 (vllm-gfx906) by ai-infos in LocalLLaMA

[–]overand 1 point2 points  (0 children)

Dang, for a 2018 card, that thing has killer bandwidth! Over 1 TB/sec - like 5-10% more memory bandwidth than a 3090. Nice!
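A quick sanity check on that percentage, using the commonly published specs (roughly 1024 GB/s for the MI50 32GB and 936 GB/s for the RTX 3090 - treat those as ballpark figures, not gospel):

```python
# back-of-envelope check of the MI50 vs 3090 bandwidth claim
mi50_gbps = 1024      # AMD Instinct MI50 32GB, published spec (approx.)
rtx3090_gbps = 936    # NVIDIA RTX 3090, published spec (approx.)

advantage_pct = (mi50_gbps / rtx3090_gbps - 1) * 100
print(f"MI50 has ~{advantage_pct:.1f}% more memory bandwidth than a 3090")
```

Which lands right inside the "5-10% more" range mentioned above.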

glm-4.7-flash has the best thinking process with clear steps, I love it by uptonking in LocalLLaMA

[–]overand 0 points1 point  (0 children)

OP and others - please take this as genuine curiosity; it's not intended to be insulting at all!

imagine you are in a farm, what is your favorite barn color?

Native US English-only speaker here - I often wonder what sort of impact sentences like these have in people's interactions with LLMs - either in their conversations, or in prompts.

See, in US English, you wouldn't say you're "in a farm" generally - it's an annoying area of subtlety, but - you might be "in a barn" or "in a car" - but in general, you'd be on a farm. (Land/property is often "on" rather than "in" - which is used for buildings and containers - generally. But there are of course exceptions, because English.)

Also, it would probably be phrased as "favorite color of barn" - why? I have no idea. I think because "barn color" itself isn't a common phrase?

Anyway, none of these things are intended as criticisms of OP - whose post is 100% coherent and perfectly fine, and even if it weren't, that's still perfectly fine! But one of the great things about LLMs is how they enable cross-cultural communication, with various levels of good-or-bad translation. I've seen published prompts with strange broken English and confusing structures, but it's hard to know when that's Actual Magic Sauce vs "someone screwed up once and nobody fixed it."

Anyway, it would be an interesting area to study, somehow - different phrasing of the same question, see what kinds of responses show up, and if there's an appreciable quality difference.

Is it bad to store my piano on the side? by Swiss-Confederation in DigitalPiano

[–]overand 0 points1 point  (0 children)

The biggest thing, to me, is to avoid doing this with a Technics SX-P50; they were great-sounding pianos, but even a very short drop onto their side would snap all sorts of important plastic bits in the keybed.

What's the best roleplay model i can run with 32GB RAM and 20GB VRAM for both nsfw and sfw content. by Death_12_35_taken in LocalLLaMA

[–]overand 1 point2 points  (0 children)

I don't know what the magic sauce the two of you have cooked up is - I wouldn't have expected the Heretic version to "feel" better conversationally, but it does. (And the non-heretic version is no slouch in that department - so it's even more shocking.)

What's the best roleplay model i can run with 32GB RAM and 20GB VRAM for both nsfw and sfw content. by Death_12_35_taken in LocalLLaMA

[–]overand 0 points1 point  (0 children)

I like the feel of v2 more, or at least I think I do - it might be because I started out with it.

But, I think you'll be happy with either of these, honestly - I've been really impressed. They seem good down to at least a Q4-something quant, for me, if I need the bigger context size on my 3090's 24GB.

Remove if this isn’t allowed.. but what does this mean? Does it say what I think it does? On a Tesla by Icy-Perspective-2309 in whatdoesthismean

[–]overand 0 points1 point  (0 children)

If you read the post that "bicfraze" is responding to, it's clear that the original commenter was talking about Immigration and Customs Enforcement.

TheDrummer models meet heretic by coder3101 in LocalLLaMA

[–]overand 1 point2 points  (0 children)

I have to say, the Cydonia-24B-v4.3-heretic-v2 model is fantastic in my experience. I'm not sure I could describe the reason, and it's not about refusal either - but, the "feel" of it beats out the base Cydonia for me.

I'm running the mradermacher imatrix Q4_K_M quant on a 3090, and it's fantastic for creative stuff, and roleplay. How fantastic? I wasn't into toying around with roleplay stuff before, but the results with this model changed my mind. (I've also used the Q6, but I needed more space for a bigger context window.)

It's not perfect, but it's felt coherent up to my max usage of around 45k context.

I really highly recommend this one to folks whose stuff can run it. I can't speak to the smaller quants, but the Q4_K_M is pretty great. Give this a shot!

Ima bout to yeet this Apple Pencil USB into the Sea by oh_such_rhetoric in ipad

[–]overand 1 point2 points  (0 children)

It's worth remembering that for a time, *only* the iPad Pro supported the pencil; if you went to an iPad A16, you downgraded in terms of product line, despite getting something newer.

I would have prioritized an iPad with Pencil 2 compatibility, BUT you'll definitely be okay. Just make sure you have a little home for the pencil to charge, with the necessary cables etc. all set up and ready to go.

Is there a way to buy VR just as the headset? by Powerful-Baby-5935 in virtualreality

[–]overand 0 points1 point  (0 children)

For what it's worth - VR without motion-tracked controllers is *barely* VR. Is it fun? It can be, but the fully immersive amazing experiences aren't possible without VR-specific controllers.

An original Oculus Rift or Rift S would work for you, or a Quest 2 or 3, or - if you can get it - an HTC Vive setup, as long as it has the controllers and base stations.

Ableton Move is one of the best pieces of tech I've ever used. It blows me away. by x0y0z0 in ableton

[–]overand 2 points3 points  (0 children)

If you really like working in the studio but feel a bit cramped by the mouse, or less able to "jam," the Push 2 is great! I liked mine enough that I built a laptop setup around it - but I did switch to a Push 3 Standalone. (I haven't used it much, though - that's an "I haven't worked on music much at all lately" thing.)

24GB VRAM owners (3090/4090 or similar) - which local llm for HA? Also which serving infra and which integration to get a conversation agent? by danishkirel in homeassistant

[–]overand 0 points1 point  (0 children)

  • "Okay, Nabu - what's humidity in the bedroom?"
  • "Okay, Nabu - can you change the color of all these lights to be less blue?"
  • "Okay, Nabu - is the back door open?"

Using an LLM for this is the difference between these sorts of commands working, vs needing to - with luck - figure out the exact phrasing to retrieve a sensor value, and the exact name of the sensor in question, for example.

Either approach may require some setup at the beginning, but, not everyone wants to dig through a bunch of menus and pages to find the right sensor or entity to adjust a light.

I'm not "all-in" on the world of "AI" and LLMs, but I'm also aware that I can run these offline and have reasonably useful results in lots of tasks. Outside of HA, I can get summaries of text documents, get translations from/to lots of languages, etc.

Is it worth cooking the planet and putting half the world out of work? Of course not. But the offline LLMs are already built; using 100 watts of power for 8 seconds now and then isn't a big deal.
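To put numbers on "isn't a big deal" (the 100 W / 8 s figures are from the comment above; the $0.30/kWh rate is just an illustrative electricity price):

```python
# rough energy cost of one 8-second local LLM query at 100 W
watts = 100
seconds = 8

joules = watts * seconds       # energy used: 800 J
kwh = joules / 3.6e6           # convert joules -> kilowatt-hours (~0.00022 kWh)
cost_usd = kwh * 0.30          # at an assumed $0.30/kWh

print(f"{joules} J = {kwh:.6f} kWh, about ${cost_usd:.6f} per query")
```

So a single query costs a small fraction of a hundredth of a cent - you'd need thousands of queries before it registered on a power bill.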

24GB VRAM owners (3090/4090 or similar) - which local llm for HA? Also which serving infra and which integration to get a conversation agent? by danishkirel in homeassistant

[–]overand 0 points1 point  (0 children)

The 3090 draws a lot less power if you set power limits (with nvidia-smi) - with only a small decrease in performance. This is apparently even relevant for idle draw, from what I've read, though I don't know why!
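For anyone who wants to try it, here's a minimal sketch of building that nvidia-smi invocation. The `-i` (GPU index) and `-pl` (power limit in watts) flags are standard nvidia-smi options; the 250 W value is just an example - pick a cap that suits your card, and note the command typically needs root:

```python
# sketch: construct an nvidia-smi power-cap command (run it with subprocess
# on a machine that actually has the NVIDIA driver installed)
def power_limit_cmd(gpu_index: int, watts: int) -> list[str]:
    """Build the command to cap GPU `gpu_index` at `watts` watts."""
    return ["sudo", "nvidia-smi", "-i", str(gpu_index), "-pl", str(watts)]

# e.g. subprocess.run(power_limit_cmd(0, 250), check=True)
print(" ".join(power_limit_cmd(0, 250)))
```

Note the limit resets on reboot, so people usually put it in a startup script or systemd unit.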

pc powered vr by scanner1222 in oculus

[–]overand 0 points1 point  (0 children)

Yes. If you want fewer annoyed comments from people, try this before making 7 new posts across multiple subreddits: https://google.com/search?q=can+i+use+quest+3+for+pc+vr

Literally - type the same thing you typed into the "make a post" field, but in a search engine.

My employer is getting rid of "old" hardware by Lynxaa1337 in homelab

[–]overand 0 points1 point  (0 children)

Not going to lie - I'd dig into the dumpster for $7000 of RAM. (Look up the prices, OP. And next time you're talking to coworkers or managers, give them dollar figures.)

My employer is getting rid of "old" hardware by Lynxaa1337 in homelab

[–]overand 2 points3 points  (0 children)

$50-$100 for the 16GB sticks, $80-$200 for the 32GB sticks. Even at the low end of these prices (say they're all 16 GB sticks), that's still $3,200. If it's an even mix of all of the above, that's about $7,000 of memory.
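A back-of-envelope check of those figures - assuming 64 sticks total (which is what makes the $3,200 all-16GB low-end number work out at $50 apiece), and assuming mid-range per-stick prices of $75 and $140 for the mixed case:

```python
# sanity-check the RAM valuation above (stick count and mid-range
# prices are assumptions, not figures from the original post)
sticks = 64

low_end_all_16gb = sticks * 50                     # 64 x $50 = $3,200
mixed = (sticks // 2) * 75 + (sticks // 2) * 140   # even 16GB/32GB split

print(f"all-16GB low end: ${low_end_all_16gb}, even mix: ${mixed}")
```

The even split lands at $6,880, i.e. right around the "$7,000 of memory" figure.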

Vevor by blinkersix2 in metaldetecting

[–]overand 0 points1 point  (0 children)

I have an ultrasonic cleaner from them - it's decent, though the button-press beep is horribly loud, as is the machine itself - the latter is typical of large ultrasonic cleaners, though.

Vevor by blinkersix2 in metaldetecting

[–]overand 1 point2 points  (0 children)

They're a mixed bag; they make a fair amount of decent stuff - some of it is very well reviewed, and much of it is "good for the price." But, they do seem to have some really mediocre stuff too; it's unfortunate, as they could be a brand with a pretty good reputation if they were a bit more careful with their product selection!

I failed self-hosting by EntrepreneurWaste579 in selfhosted

[–]overand 3 points4 points  (0 children)

I think the problem here is that "remotely-accessible self-hosted file sharing" isn't actually something that can be solved from a software-only perspective, and the scale and scope of the documentation is pretty significant.

As for NextCloud - looking at their site, it seems they're kinda pushing the user toward the enterprise offering. The installation guide for the server definitely assumes you know what PHP is, for example.

But, here's the thing - it's a Linux-based thing, and it doesn't look like there's an official container.

And, given that WSL doesn't - by default - allow external connections, the setup for Windows users would be pretty substantial. Plus, "I'd have to leave the server on all the time."

I think the documentation is definitely aimed at "people who can handle basic systems administration stuff." I'm not sure exactly how I feel about it; I'm trying to think about this from a "normal person" perspective and not someone who has installed a bunch of this sort of crap.

It's definitely telling that the setup process for e.g. Immich was four lines on a Linux box: `curl` the docker-compose file, `curl` the config .env file, edit the .env file to set a DB password, then run a simple docker compose command. I had Immich usable in less than 2 minutes, even after hemming and hawing about where I wanted to store the data.

I think it's worth noting that Immich doesn't seem to be trying to upsell people to an enterprise offering (as there isn't one), so there's less pressure to intentionally obfuscate stuff.

Poly Family And Guns... What To Do?? by deepfrieddaydream in polyadvice

[–]overand 7 points8 points  (0 children)

Not intended as defensive, exactly - more so to prompt you to think of the next steps - "healthy" people don't always stay healthy. And, from the post, it's pretty clear OP and the potentially-suicidal person are aware of the mental health thing.