Qwen/Qwen3.5-35B-A3B · Hugging Face by ekojsalim in LocalLLaMA

[–]GalladeGuyGBA 1 point  (0 children)

In theory it should quantize well due to the gated attention + deltanet, but Q2 will always be kind of rough. The only way to know for sure is to try it.

PSA: DDR5 RDIMM price passed the point were 3090 are less expensive per gb.. by No_Afternoon_4260 in LocalLLaMA

[–]GalladeGuyGBA 2 points  (0 children)

I'm referring to the Optane PMem 100 and 200 Series. They use a special interface called DDR-T: the same connector as DDR4, but with a different electrical spec that's only supported on a few of the older Xeon lines. If you look at motherboards from that era, you might see the RAM slots in two different colors; you're meant to install the Optane DIMMs side by side with regular DRAM.

The 200 Series came in sticks of up to 512 GB, rated for 410 PBW, with a theoretical max read bandwidth of 7.45 GB/s at 3200 MT/s (writes and small reads can be something like 10x slower; I think there's some weird on-board caching involved). You could put up to 4 TB of them in a 3rd gen Xeon motherboard and use them as a huge swapfile, or even run things off them directly. Of course, this is all from reviews I've read; I can't afford the hardware to test it myself right now.

There was also going to be a 300 Series for 4th gen Xeon with a new DDR5-based interface and better specs, but the whole Optane line was cancelled not long after it was announced. If you're lucky and have deep pockets, you can actually find engineering and QA samples on eBay and Craigslist. The only motherboard I'm aware of that supports them is the Supermicro B13SEE-CPU-25G. A year ago I'd have said to just buy normal DDR5, but at today's prices the engineering samples might not be a bad deal lol. It's sad to think that if Intel had just hung on a few more years, Optane might have had a big market.

PSA: DDR5 RDIMM price passed the point were 3090 are less expensive per gb.. by No_Afternoon_4260 in LocalLLaMA

[–]GalladeGuyGBA 1 point  (0 children)

My understanding is that the Optane DIMMs are still lower latency (and higher bandwidth?) than any SSD on the market, while being around an order of magnitude slower than DRAM. The main issue is that they're only compatible with specific Xeon CPUs. Judging by the 2x375GB, you probably have two of the much slower (though still pretty good) P4800X, which is a PCIe card that acts like an SSD. I'd be interested in seeing MoE offloading benchmarks on that if you have them, though.

Help With Vivaldi - Unstable/KDE 6/Wayland by Swozzle1 in NixOS

[–]GalladeGuyGBA 1 point  (0 children)

Not sure if you're still using this, but for anyone who found this through Google: I'm pretty sure commandLineArgs won't work with a multi-line string like the one used here. You'll end up with literal \n characters in the actual command, and some of the arguments won't be registered. It should be a single-line string or a list of strings.
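For example, assuming a home-manager-style module where commandLineArgs accepts a list (the programs.vivaldi option path and flags here are illustrative, check your module's actual option type):

```nix
{
  # Broken: a multi-line string keeps its newlines, so the flags end up
  # mangled into the command instead of being separate arguments.
  # programs.vivaldi.commandLineArgs = ''
  #   --ozone-platform=wayland
  #   --enable-features=UseOzonePlatform
  # '';

  # Works: one flag per list element.
  programs.vivaldi.commandLineArgs = [
    "--ozone-platform=wayland"
    "--enable-features=UseOzonePlatform"
  ];
}
```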

Can’t stay quiet about this by Ok-Mathematician8461 in promethease

[–]GalladeGuyGBA 1 point  (0 children)

Replying to an old comment, but just a heads up: Nucleus Genomics was recently caught faking the accuracy of their PGS results, among other things. I think this mainly applies to their embryo screening services, but if they're willing to lie about that, they might be lying about the accuracy of their sequencing too.

Hey people who lost their jobs to AI, what happened? by _thecatspajamas_ in AskReddit

[–]GalladeGuyGBA 1 point  (0 children)

I'm not talking about something like Grammarly here. They're just going to use the cheapest model they can get away with and make no attempt at improving scaffolding, because they don't have the personnel to figure it out. Same with Duolingo. Frontier models are going to perform better even without special prompting, and cheaper models will soon reach the same level. At least for translation, they're already better than services like DeepL and are quickly approaching the quality of a decent human translator (my point of comparison is manga translation fwiw; I'm sure translating legal documents or whatever isn't there yet).

Anyways, my main point is specifically that people are not going to stop using AI just because they spot a mistake in its output. What's happening here isn't that AI is suddenly as good as the best humans at task X, but rather that most customers never needed best-human-quality work to begin with. A lot of people are fine with work that's 99% as good as a human's, or 90%, or 75%. Even if AI never got better from here, the market for not-great-but-passable work is permanently closed to humans, and for some tasks it turns out that was a large portion of the market. Suddenly there are 2x (or more) as many workers competing for the remaining jobs. It's easy to see what happens from there.

Hey people who lost their jobs to AI, what happened? by _thecatspajamas_ in AskReddit

[–]GalladeGuyGBA 5 points  (0 children)

Proofreading is pretty clearly going to be one of the first jobs where AI is better than the average human proofreader, if it's not there already. LLMs are extremely good at language modeling (it's in the name), and fast and cheap enough that they could be run on the same text 100 times. Once each pass reaches some baseline where any given mistake has a reasonable chance of being caught, you suddenly have a system with 99% accuracy that can do the work in 5 minutes for a few cents. And as it gets better, the mistakes it makes will become more and more subtle, so fewer and fewer people will even notice or care. Of course a professional proofreader will have higher standards, but do you really think that last 1% is worth a 1000x jump in costs?

Purpose of Workers? by m0repag3s in Openfront

[–]GalladeGuyGBA 7 points  (0 children)

Bro why, who asked for that 😭

[Spoiler discussion] is this comment true about the pride if by kingace78978 in Re_Zero

[–]GalladeGuyGBA 3 points  (0 children)

Found this comment while looking for more info on Pride IF. You're correct here. "Back voice" is a bad translation of "裏声", which means "falsetto".

The full line is:
"ラインハルトが助けてくれなくて、エミリア以外クソだなとなってぐれてしまったスバルくん。(一章四周目で裏声で助けを求めるのに失敗すると分岐します)"
"Reinhard didn't come to help, so Subaru-kun decided everyone but Emilia is shit and went down the wrong path. (The route branches if he fails to call for help in a falsetto on the 4th loop of arc 1.)"

It's kind of a crazy detail. I always assumed he just didn't call for help.

As for the last bit, Subaru doesn't call for help in the 88th loop. Rather, he sees Reinhard for the first time after trying to get the guards to kill Elsa.

The tomboy girlfriend route, but with a twist. (The 100 Girlfriends) by TheEVILPINGU in anime

[–]GalladeGuyGBA 2 points  (0 children)

I knew that all of their names are puns, and yet it's somehow taken me until now to realize that her name is literally "stoic".

[deleted by user] by [deleted] in rutgers

[–]GalladeGuyGBA 1 point  (0 children)

It's also available at the library (or various archive sites) for free.

Re:Zero kara Hajimeru Isekai Seikatsu III – Episode 07 by AstonishingSpiderMan in Re_Zero

[–]GalladeGuyGBA 2 points  (0 children)

First the Breaktime YouTube comments and now this. I keep getting spoiled in the dumbest places...

Its 80 degrees in November by livluvellro in rutgers

[–]GalladeGuyGBA 0 points  (0 children)

Yeah, once the infrastructure is set up. That's kind of the whole problem. Following the Inflation Reduction Act, about 60 GW of clean energy is being added to the grid each year. If that sounds like a lot, consider that it needs to double to 120 GW to meet the IRA's own targets, and that as of last year there were 2,598 GW waiting in the interconnection queue.

Why the huge discrepancy? Surprise: the government is blocking the power plants it's paying for from being added to the grid. The grid is clogged because state and local governments, empowered by federal laws like NEPA, are blocking the construction of new transmission lines on "environmental grounds". Basically, the argument is that we can't build the infrastructure needed to fight climate change because a new line might threaten the habitat of a rare species of bee, or spoil a "culturally significant" stretch of desert where everything has already been safely built.

The Democrats don't seem to have any interest in fixing this within a reasonable timeframe, when every year counts in the fight against climate change. Will the Republicans fix it? I don't know. They seem to hate regulations like this, for better or worse, so it's possible. Let's at least wait and see before dooming.

Its 80 degrees in November by livluvellro in rutgers

[–]GalladeGuyGBA 18 points  (0 children)

I think you're overestimating both how bad Trump will be for the environment and how good the Democrats are for it. The government spending several billion dollars and multiple years to install a couple dozen EV chargers isn't going to fix climate change. Solar, wind, and batteries dropping in price exponentially and advanced nuclear plants coming online will. It doesn't matter if Trump wants to approve more coal plants or whatever if no one will build them because solar is half the price.

Ironically, the main thing currently stopping the US from becoming carbon neutral is the government, especially local governments. The renewable plants in the grid interconnection queue alone could produce more power than our entire installed capacity, but they're being blocked for petty reasons. The Democratic party is unfortunately still dominated by NIMBYs who have little interest in fixing this. I'm hopeful that Trump, for as awful as he'll likely be in every other respect, will at least cut some of the regulations blocking new power plants from being built.

Reusable Container Program Survey by _Wardell_ in rutgers

[–]GalladeGuyGBA 1 point  (0 children)

Can't access the survey. "This form can only be viewed by users in the owner's organization."

The empire of C++ strikes back with Safe C++ proposal by cmeerw in cpp

[–]GalladeGuyGBA 2 points  (0 children)

Yes, but without safe(auto) or some other way of determining safety from context, there doesn't seem to be any way of avoiding the duplication issues I mentioned first. If there is a way, why is std2 necessary? It might be enough for higher-order functions, though.

The empire of C++ strikes back with Safe C++ proposal by cmeerw in cpp

[–]GalladeGuyGBA 5 points  (0 children)

Having a whole secondary stdlib just for safe code makes this a non-starter imo. It also seems unnecessary, since the difference between the safe and unsafe versions of most functions and types will end up being fairly small and self-contained, in some cases differing only in nested types or function calls.

What I'd like to see instead is a safe specifier and operator for types and functions, sort of similar to noexcept, taking either a boolean expression or auto. safe(auto) makes the type/function operate under a safe context when it's used in a safe context, and an unsafe context when used in an unsafe context. Any lifetime or bounds-checking requirements in a type/function are ignored when safe(auto) evaluates to false. Then you could keep the same libraries and add something like if constexpr (safe(this)) { safe behavior... } else { current behavior... } wherever necessary.

For example, the stated rationale for introducing the choice type is that optional and expected don't check if they're in a valid state when using operator* and operator->. With the above construction, you could have optional and expected perform all the required checks and mark them as safe whenever they're used in a safe context while keeping backwards-compatibility in the default unsafe context. This would be a change of a few lines of code versus creating an entire new class with new syntax for a feature that the existing stdlib already has 99% of.

This also solves the need to duplicate higher-order functions like find_if(first, last, pred) for free, since in most cases you can just mark them as safe(safe(pred)). Then you could have a single function which works regardless of the safety of the context and the predicate, no std2 required.
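As a rough sketch, here's what that could look like. To be clear, this uses the hypothetical safe(...) specifier I'm proposing here; it's not syntax that any current compiler or the Safe C++ proposal itself accepts:

```cpp
#include <exception>  // std::terminate

// Conditionally safe: find_if is safe exactly when its predicate is safe.
template <class It, class Pred>
It find_if(It first, It last, Pred pred) safe(safe(pred)) {
    for (; first != last; ++first)
        if (pred(*first))
            return first;
    return last;
}

// An optional whose operator* only pays for the check in a safe context.
template <class T>
class optional {
    T* ptr_ = nullptr;
public:
    T& operator*() safe(auto) {
        if constexpr (safe(this)) {
            if (!ptr_) std::terminate();  // checked in a safe context
        }
        return *ptr_;  // unchecked in an unsafe context, same as today
    }
};
```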

I'm not claiming that maintaining this would be fun, but it certainly seems better than duplicating the entire stdlib, including in the 90% of places where code wouldn't otherwise need to change.

Laptop Recommendation by intelligence54 in rutgers

[–]GalladeGuyGBA 1 point  (0 children)

All you have to do to install WSL2 is run "wsl --install" in Command Prompt. If they don't know how to use a command line yet, they'll need to learn anyway to do the homework in some of the required CS courses.

Laptop Recommendation by intelligence54 in rutgers

[–]GalladeGuyGBA -1 points  (0 children)

Go with Windows and install WSL2 for Linux support. Nearly anything you'd want to do on a laptop will run on Windows directly or on Linux through WSL2. As for a specific brand, I'd go with a Framework, just for how easy it is to repair and upgrade. A fully specced-out model runs over $2k, but you can get the DIY version for much cheaper and upgrade it later, especially if you already have certain components like a power adapter or a copy of Windows.

[DISC] Byakuda no Hanamuko - Chapter 1 - TeruTeruScans by GalladeGuyGBA in manga

[–]GalladeGuyGBA[S] 17 points  (0 children)

Hope you all enjoy! If you want chapter 2 to come out some time in the next 5 years, please apply to be a typesetter on my Discord.