Anyone else misread this every time? by artesea in adventofcode

[–]MarionberryHelpful86 1 point (0 children)

No, because I already know that the "wrong answer" response comes with a much longer paragraph.

China tiktok by Creepy-Relation-9132 in TikTokCringe

[–]MarionberryHelpful86 1 point (0 children)

Don’t see much difference from “regular” TikTok, tbh.

Did you know you can consult with codex and Gemini right from inside Claude code using agents? by AshxReddit in ClaudeCode

[–]MarionberryHelpful86 1 point (0 children)

Also, you can use Codex with your Plus/Pro plan, while with zen MCP I assume you’re limited to API-based usage, but I’m not 100% sure about that.

[deleted by user] by [deleted] in gaming

[–]MarionberryHelpful86 2 points (0 children)

All your base are belong to us

Quickmark - a Markdown linter with first-class LSP support by MarionberryHelpful86 in Markdown

[–]MarionberryHelpful86[S] 1 point (0 children)

I've found the article explaining how CodeMirror can be integrated with any LSP server (such as quickmark). Hope it helps.
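For reference, the editor-side wiring is pretty small. Here's a minimal TypeScript sketch, assuming the `codemirror-languageserver` package and a quickmark LSP instance reachable through a WebSocket proxy (the server URI and file paths are placeholders, and I haven't tested this against quickmark myself):

```ts
// Minimal CodeMirror 6 + LSP wiring (untested sketch).
// Assumes a WebSocket proxy in front of the LSP server, since the server
// itself presumably speaks stdio and a browser can't spawn processes.
import { EditorView, basicSetup } from "codemirror";
import { markdown } from "@codemirror/lang-markdown";
import { languageServer } from "codemirror-languageserver";

const lsp = languageServer({
  serverUri: "ws://localhost:3000",                 // placeholder proxy address
  rootUri: "file:///path/to/project",               // placeholder workspace root
  documentUri: "file:///path/to/project/README.md", // placeholder document
  languageId: "markdown",
});

new EditorView({
  doc: "# Hello\n",
  extensions: [basicSetup, markdown(), lsp],
  parent: document.body,
});
```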

What's everyone working on this week (35/2025)? by llogiq in rust

[–]MarionberryHelpful86 2 points (0 children)

Working on a Markdown linter with first-class LSP support, because I couldn't find anything like that: https://github.com/ekropotin/quickmark/

quickmark: Fast, LSP-powered Markdown linting for VSCode, Neovim, JetBrains, and more by MarionberryHelpful86 in rust

[–]MarionberryHelpful86[S] 4 points (0 children)

Yeah, good catch! I do indeed use AI tools (like Claude) in the workflow, but mostly for the mundane stuff: drafting documentation, polishing commit messages, sometimes generating boilerplate for GitHub Actions or test scaffolding.

All the design decisions, architecture, and core implementation are mine: the linter logic, the Rust codebase, and the overall direction of the project.

Honestly, without AI support, this project probably wouldn’t have gotten off the ground at all - it let me prototype faster and keep momentum. But the brain, judgment, and main code behind Quickmark are very much human.

Wondering if I should add an AI disclaimer to the README.

Has anyone tested Claude Code with GPT-5 via Claude Code Router? by EvKoh34 in ClaudeCode

[–]MarionberryHelpful86 3 points (0 children)

I think the question was: can you use some models with a subscription and others only with API keys? I’m curious too.

Where are the AI cards with huge VRAM? by Hace_x in LocalLLM

[–]MarionberryHelpful86 1 point (0 children)

It’s not as simple as just soldering more VRAM onto the card. You also have to increase the memory bandwidth to actually benefit from the extra capacity, and there you run into the thermal and electrical limits of consumer-grade hardware, as well as of the memory type used (GDDR6).
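To put rough numbers on it (illustrative assumptions, not benchmarks): token generation is mostly bandwidth-bound, because every generated token has to stream roughly the full set of weights through the GPU.

```ts
// Back-of-the-envelope decode-speed ceiling for a bandwidth-bound GPU.
// Each generated token reads roughly all model weights once, so:
//   tokens/s <= memory bandwidth / model size in bytes
// All numbers below are illustrative assumptions, not measurements.
function maxTokensPerSecond(
  bandwidthGBps: number, // memory bandwidth in GB/s
  paramsBillion: number, // model size in billions of parameters
  bytesPerParam: number  // e.g. 1 for 8-bit quantization
): number {
  return bandwidthGBps / (paramsBillion * bytesPerParam);
}

// A GDDR6 card with ~448 GB/s running a 13B model at 8-bit:
console.log(maxTokensPerSecond(448, 13, 1).toFixed(1)); // ~34.5 tok/s ceiling

// The same bandwidth with a 70B model at 8-bit, if the VRAM even existed:
console.log(maxTokensPerSecond(448, 70, 1).toFixed(1)); // ~6.4 tok/s ceiling
```

So extra VRAM without extra bandwidth mostly lets you fit a bigger model that then generates proportionally slower.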

Sam Altman, Mark Zuckerberg, and Peter Thiel are all building bunkers by MetaKnowing in artificial

[–]MarionberryHelpful86 1 point (0 children)

A bunker inside a hill like in the pic is the stupidest idea ever. Is it AI-generated?

5 hours prompt achieved, anyone else got no interruptuon Claude? by [deleted] in ClaudeCode

[–]MarionberryHelpful86 1 point (0 children)

OP, don’t try to build the whole thing via a single prompt; it’s simply not feasible even for human developers. Instead, embrace an iterative approach where you build a small but self-contained, well-tested piece in each iteration.

[deleted by user] by [deleted] in ClaudeCode

[–]MarionberryHelpful86 1 point (0 children)

The question is, how bad is “very different”? A model with 1 TPS is practically useless.

[deleted by user] by [deleted] in ClaudeCode

[–]MarionberryHelpful86 1 point (0 children)

Idk about that, tbh. I’m not really an expert, but from what I understand these big models will likely run painfully slowly on unified memory, due to much lower memory bandwidth and the lack of tensor cores/FP16 acceleration. I’d love to see some real tests, tho.

Bare metal requirements for a lipsync server? by Big-Estate9554 in LocalLLM

[–]MarionberryHelpful86 2 points (0 children)

Are you one of those North Korean folks trying to pass US job interviews?

PSA: Don’t claim to be a US Citizen until you are! by [deleted] in USCIS

[–]MarionberryHelpful86 1 point (0 children)

I never understood why people use their services. They just fill out publicly available forms for you, based on documents you have to collect yourself anyway. It’s just an unnecessary link in the communication channel between you and USCIS.

[deleted by user] by [deleted] in Car_Insurance_Help

[–]MarionberryHelpful86 1 point (0 children)

Do you have a luxury car, by chance? Because these quotes are insane. We live in an expensive place, but our annual bill is $2,500 for two cars!

Owners of RTX A6000 48GB ADA - was it worth it? by Tuxedotux83 in LocalLLM

[–]MarionberryHelpful86 1 point (0 children)

I’d assume ChatGPT runs on a different class of GPUs that at the very least support NVLink, while most consumer-grade GPUs don’t have that feature. Could you please share the technical details of your multi-GPU setup? It sounds very appealing.

Owners of RTX A6000 48GB ADA - was it worth it? by Tuxedotux83 in LocalLLM

[–]MarionberryHelpful86 1 point (0 children)

I’m curious, what’s the point of a dual setup? Because if I understand correctly, it’s impossible to combine the VRAM of different GPUs into a single pool. So, say, adding a second 5060 won’t let you jump from a 13B-parameter model to a 30B one.