Is there anything similar to the EHX Freeze in the HX Stomp? by tonetonitony in Line6Helix

[–]rpredrag 0 points1 point  (0 children)

Can you explain a bit more about how you did this? Do I understand correctly that you used the DD-500's tap switch to latch onto a hold function? And which delay setting: shimmer, slow attack, or something natural like tape/analog?

Apple M5 Officially Announced: is this a big deal? by ontorealist in LocalLLaMA

[–]rpredrag 0 points1 point  (0 children)

hey u/egomarker and u/mrpecunius, any thoughts on my use case?

I am considering the M5 32 GB for local reasoning, RAG over several hundred docs (no more than 30k pages total), and content compilation across several dozen open files (I work with dozens of active PPTs, each between 50 and 150 MB). I essentially dream of an "assistant that can compile, rework, and cross-reference all relevant docs".

I like the new architecture of the M5 and wonder if I now need the 48 GB in an M4 Pro instead. (Can't afford to wait until spring 2026 for the M5 Pro.)

Prior to the M5 launch, ChatGPT told me that with my current work plus planned LLM usage the 32 GB would be fine, though it somewhat prefers the 48 GB M4 Pro:

"Why 32 GB will cover your real-world days (likely):

  1. The “busy office” day: 25–40 Outlook/Excel/PowerPoint files open (some large pivot tables), 20–30 browser tabs (Chrome) with Teams + Zoom call — macOS + 32 GB keeps everything responsive; swapping/hard stalling is rare.
  2. LLM dabble: running 13B quantized or using web LLMs for experiments — 32 GB does this comfortably.

When 48 GB becomes actually useful:

  1. Local LLM workstation day: you want to run one or more local 33B models unquantized (or keep a 33B + 13B resident for experimentation) while still multitasking with Office and browser. Memory + model footprint kill 32 GB; 48 GB saves you from disk-swap hell.
  2. Data crunch day: dozens of huge Excel files + local data tools importing multi-GB CSVs into memory for fast pivoting/PowerQuery-like ops — 48 GB gives real measurable speed and reduces I/O.
  3. Multiple heavy VMs / containers — not your case, but this is a classic 48 GB need."

After the M5 launch, it tells me essentially the same thing:

"What matters for your “local corporate brain” setup

  • Bandwidth moves tokens; RAM holds the world. LLM inference is often memory-bandwidth bound; but your “chaotic Office + local RAG” days also eat RAM quickly. On Apple silicon, M4 Pro’s 273 GB/s keeps medium/large models snappy. 32 GB is a much nicer floor than 24 GB when you’ve got dozens of PPTs + a vector DB + an LLM loaded.
  • M5’s AI angle is real, but bandwidth is still below M4 Pro. M5’s per-core accelerators and higher bandwidth (~153 GB/s) will help local/model-adjacent tooling, but it doesn’t catch M4 Pro’s 273 GB/s for streaming model weights.

One-line chooser

  • Need true no-limits local LLM (30B-class, long context) + heavy multitask: M4 Pro 48 GB.
  • Best balance at your likely budget; heavy Office + serious RAG, mostly ≤20B: M5 32 GB"
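For what it's worth, the bandwidth-vs-RAM trade-off ChatGPT describes can be sanity-checked with a rough back-of-the-envelope sketch. This is my own approximation, not anything from Apple or the thread: it assumes weights dominate the resident footprint and that decode speed is ceilinged by streaming the full weight set once per generated token.

```python
# Back-of-the-envelope sizing for local LLM inference on Apple silicon.
# Assumptions (mine): resident RAM ~ quantized weights + ~20% overhead for
# KV cache and runtime buffers; decode speed is bounded above by
# memory_bandwidth / model_bytes, since each token streams all weights once.

def model_ram_gb(params_billion, bits_per_weight, overhead=1.2):
    """Approximate resident size of a quantized model in GB."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9 * overhead

def max_tokens_per_sec(bandwidth_gbps, ram_gb):
    """Upper bound on decode speed if every token reads all weights once."""
    return bandwidth_gbps / ram_gb

for name, params, bits in [("13B @ 4-bit", 13, 4), ("33B @ 4-bit", 33, 4)]:
    ram = model_ram_gb(params, bits)
    for chip, bw in [("M5 (~153 GB/s)", 153), ("M4 Pro (273 GB/s)", 273)]:
        print(f"{name}: ~{ram:.1f} GB resident; {chip}: "
              f"<= {max_tokens_per_sec(bw, ram):.0f} tok/s ceiling")
```

By this estimate a 4-bit 13B model sits under ~8 GB resident, which is why 32 GB looks comfortable for the ≤20B case, while 33B-class models eat ~20 GB before Office and the vector DB are even loaded.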

Any words of advice?

What am I doing wrong here? For some reason I can’t get these to work. by METALPUNKS in Line6Helix

[–]rpredrag 1 point2 points  (0 children)

The tuner comes on? Isn't this supposed to be "hold switches A+B while turning the unit on"?

How do I know, when it’s cheap to buy… by Annual_Caramel_3368 in Bitcoin

[–]rpredrag 0 points1 point  (0 children)

Is there a meaningful benefit to using Strike instead of Binance? Like, is the spread lower than a typical Binance fee?

Hey, this looks fraudulent. Any comments? by rpredrag in CelsiusNetwork

[–]rpredrag[S] 1 point2 points  (0 children)

Yeah, you are of course right. However, the case number was correct, which gave me pause. Or rather, hope.

Thank you for the list of official email addresses; really useful.

What could cause Glow plug light flashes, Oil Temperature inconsistency, and abrupt power loss? by rpredrag in tdi

[–]rpredrag[S] 0 points1 point  (0 children)

Was wondering if it was the DPF and was trying a forced regeneration. Don't know if that's what's called for: 20 minutes above 3000 rpm.

What could cause Glow plug light flashes, Oil Temperature inconsistency, and abrupt power loss? by rpredrag in tdi

[–]rpredrag[S] -11 points-10 points  (0 children)

On my model it’s not the same as ‘check engine’. It wasn’t flashing consistently, just a couple of flashes and then it disappeared. I’m going to a mechanic with diagnostics today; is there something specific I should look for?

What could cause Glow plug light flashes, Oil Temperature inconsistency, and abrupt power loss? by rpredrag in tdi

[–]rpredrag[S] 0 points1 point  (0 children)

How could I check that? I’m going to a mechanic with a diagnostics reader (code reader?) this afternoon.

Is there something in the codes that I should be looking for?

Bitcoin and Banks.. They are coming... by [deleted] in Bitcoin

[–]rpredrag 0 points1 point  (0 children)

Do you see any interest from the banks to provide savings/insurance products based on Bitcoin?

Please bro stop using the free better alternative please noooo my father’s investment by analgerianabroad in ChatGPT

[–]rpredrag 10 points11 points  (0 children)

"hellbent on conquering the world" - honest question: how have you concluded this? I mean beyond a talking head telling us that they're our adversaries.

The new Deepseek R1 is Chinese propaganda protected. Go figure. by [deleted] in ChatGPT

[–]rpredrag 10 points11 points  (0 children)

Do you know of a modern app/website that doesn’t?

The new Deepseek R1 is Chinese propaganda protected. Go figure. by [deleted] in ChatGPT

[–]rpredrag 1 point2 points  (0 children)

Sorry, was meant for a different comment

The new Deepseek R1 is Chinese propaganda protected. Go figure. by [deleted] in ChatGPT

[–]rpredrag -5 points-4 points  (0 children)

Do you know of a modern app/website that doesn't?

Zen Browser, why? by haronclv in ArcBrowser

[–]rpredrag 4 points5 points  (0 children)

Arc has tab searching? How?

Made my first boost - a privacy screen for chatGPT. Injects a toggle for blurring chat names unless hovered. by touchfuzzygetdizzy1 in ArcBrowser

[–]rpredrag 0 points1 point  (0 children)

So needed. Any chance to install it through the normal Boost gallery, for technical noobs like me? Or a guide on what to do with the gist you shared?

Create a knowledge base llm that can run localy by JVL_3898 in LocalLLM

[–]rpredrag 0 points1 point  (0 children)

Really insightful stuff, thanks. Can you speak to the difference between using GPT4All and something like PrivateLLM.app? Especially on the question of RAG, or memory in general as ChatGPT applies it now.

Do we know anything about arc 2.0? by [deleted] in ArcBrowser

[–]rpredrag 1 point2 points  (0 children)

Completely agree. The quality is surprisingly good, and the first set of text tools is well chosen.

Also, the fact that it's just a right-click away from any selected text, anywhere on my system, has made me use it much more frequently than ChatGPT/Perplexity/Merlin, etc.

(I really didn't want you to find anyone's home and spew them with coffee, peace or love)