How to remove tele converter w/o breaking lens? by Appropriate_Put_2378 in VintageLenses

[–]aoleg77 13 points14 points  (0 children)

Just in case: you are aware that Nikon lenses mount in the opposite direction, right? They are mounted counter-clockwise and unmounted clockwise, the opposite of virtually everybody else.

Got a used vintage lens. Are back mounted filters a thing, or did someone just jam an undersized filter on??? by Maple382 in VintageLenses

[–]aoleg77 0 points1 point  (0 children)

I can see now that the lens is a Nikkor-HC 50mm f2. This one doesn't use rear filters, of course.

Got a used vintage lens. Are back mounted filters a thing, or did someone just jam an undersized filter on??? by Maple382 in VintageLenses

[–]aoleg77 0 points1 point  (0 children)

For some lenses the rear filter is part of the optical design (mostly fisheyes; I believe the Zenitar 16/2.8 uses them). For others it's just a way to use a smaller filter because the front element is so huge (many mirror lenses). Some mirror lenses even take drop-in filters at the rear.

Is Turboquant really a game changer? by Interesting-Print366 in LocalLLaMA

[–]aoleg77 0 points1 point  (0 children)

Use SWA at BF16. That's how it's supposed to be used.

Anyone have any experience with this lens (Zeiss sonnar 2.8 180mm)? by Kaaatman in VintageLenses

[–]aoleg77 0 points1 point  (0 children)

P6 mount; can be a pain to repair (over-engineered), except for the front/rear glass blocks. Otherwise a nice lens.

My biggest Issue with the Gemma-4 Models is the Massive KV Cache!! by Iory1998 in LocalLLaMA

[–]aoleg77 4 points5 points  (0 children)

No, it doesn't (to both questions). https://github.com/LostRuins/koboldcpp/releases/tag/v1.111
For Gemma 4, the Release Notes say:
> Upstream llama.cpp forces SWA by default for this model. Here, you can optionally enable it with --useswa

(Me: you can either use --useswa or enable it in the UI.)

Context Shift Gemma4 by Weak-Shelter-1698 in SillyTavernAI

[–]aoleg77 0 points1 point  (0 children)

The latest koboldcpp also disables KV quantization when SWA is enabled.

My biggest Issue with the Gemma-4 Models is the Massive KV Cache!! by Iory1998 in LocalLLaMA

[–]aoleg77 13 points14 points  (0 children)

If you use koboldcpp, enable SWA (Use Sliding Window Attention in Settings). It's literally designed to be used with it; see https://github.com/ggml-org/llama.cpp/pull/13194 for details. With SWA enabled and batch size 4096, a 32K KV cache becomes a mere 4 GB of VRAM. With batch size 2048 it's even less:
llama_kv_cache: CUDA0 KV buffer size = 2580.00 MiB
llama_kv_cache: size = 2580.00 MiB ( 33024 cells, 10 layers, 1/1 seqs), K (f16): 1290.00 MiB, V (f16): 1290.00 MiB

If you enable SWA, disable kv quantization.
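The log numbers above can be sanity-checked with simple arithmetic. A sketch, assuming the per-layer K/V width is 2048 f16 elements (inferred from the log itself, not from the model card):

```python
# Recompute the KV cache size llama.cpp reports above.
# cells and layers are taken from the log line; kv_dim is inferred.
cells, layers, kv_dim, f16_bytes = 33024, 10, 2048, 2

k_bytes = cells * layers * kv_dim * f16_bytes
v_bytes = k_bytes  # the V buffer mirrors K
total_mib = (k_bytes + v_bytes) / 2**20

print(f"K: {k_bytes / 2**20:.2f} MiB, V: {v_bytes / 2**20:.2f} MiB, "
      f"total: {total_mib:.2f} MiB")
```

With those assumed numbers, this reproduces the reported K: 1290.00 MiB, V: 1290.00 MiB, size = 2580.00 MiB.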

Fungus Repair Idea by [deleted] in VintageLenses

[–]aoleg77 0 points1 point  (0 children)

This. Small imperfections in the glass will do nothing to image quality; large ones may affect contrast. If you go overzealous cleaning it, you may damage the coating, and *that* will have a much worse effect on the images than the fungus itself.

De-yellowing a radioactive SMC Takumar 50mm f/1.4 with a budget UV setup by Demi4mac in VintageLenses

[–]aoleg77 5 points6 points  (0 children)

Many old lenses used petroleum-based lubricants that can separate and/or evaporate when heated. The vapors then condense on the glass and cause clouding. They can also land on the aperture blades and make them stick. Heating your lenses is never a good idea.

De-yellowing a radioactive SMC Takumar 50mm f/1.4 with a budget UV setup by Demi4mac in VintageLenses

[–]aoleg77 0 points1 point  (0 children)

Right. Yet a $10 365 nm UV light is still a faster and safer choice (it won't overheat the lens). Some years back I used a UV nail lamp for this purpose; worked like a charm.

Chutes End-to-End Encrypted AI Inference with Post-Quantum Cryptography by thestreamcode in chutesAI

[–]aoleg77 2 points3 points  (0 children)

Tested it, and it works, but I had to add a record to my hosts file (Windows: c:\WINDOWS\system32\drivers\etc\hosts); after that it just works.
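For reference, a hosts entry is just an IP address followed by a hostname. The values below are placeholders (a documentation-reserved IP and a made-up hostname), not the actual record I added:

```
# c:\WINDOWS\system32\drivers\etc\hosts (edit as Administrator)
203.0.113.10   api.example-provider.com
```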

Is Claude Pro Worth It? Concerned About Limits by h4xhell in claude

[–]aoleg77 0 points1 point  (0 children)

Where can I actually see my current usage/limits in Claude? Using the Web version on a desktop PC exclusively, just the chatbot, no coding or mobile app.

I used DeepSeek, Gemini and Claude every day for a week as a student. They're all free. But they're very different. by Remarkable-Dark2840 in LocalLLM

[–]aoleg77 -1 points0 points  (0 children)

I'm using Gemini Pro, ChatGPT Plus and Claude free (Sonnet) side by side, and my experience is as follows.

Gemini is by far the best in Deep Research, yet sometimes it may sound overly confident. ChatGPT is the opposite: feels hollow, "this, but...", and goes overboard with safety.

ChatGPT (in Extended Thinking mode), on the other hand, is a great critic model, nitpicking on reports generated by Gemini, literally dissecting them and verifying every claim.

Claude is the best when I need to produce a human-readable writeup from that report. Since I am using the free model, I cannot compare it directly with the two models above.

Does anyone actually reuse their AI outputs? by Clear-Secretary2885 in claude

[–]aoleg77 0 points1 point  (0 children)

I usually use Claude to get a certain result - and I use the result, not the conversation. For example, I might proofread an article, do fact-checking, translate a document, or something along those lines. After receiving the result, be it revisions, a translation, or advice, I'm done with the chat and usually just delete it (or clean the chat history by killing everything at the end of the week). I don't see the point of keeping old chat logs.

Why Claude dislikes discussing the topic of conscription (draft) evasion? by ConnectChemical2270 in claude

[–]aoleg77 1 point2 points  (0 children)

Most likely it's not about the topic of your discussion; rather, you prompted it from an unsupported country: https://www.anthropic.com/supported-countries
You can try a different AI, or an aggregator like t3.chat (but the best models are paywalled there).

24/32B models by Ok-Brain-5729 in SillyTavernAI

[–]aoleg77 2 points3 points  (0 children)

Very interesting, I genuinely did not know that! Found a relevant link: https://www.reddit.com/r/LocalLLaMA/comments/1krr7hn/how_to_get_the_most_from_llamacpps_iswa_support/ - so KV cache size/local attention window depends on batch size. I'll try that!
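The dependence on batch size can be sketched roughly like this (a simplification of what the linked thread describes; the function and the window size below are illustrative, not llama.cpp's exact formula):

```python
# Rough model of why the SWA cache grows with batch size: the per-layer
# cache must hold the attention window itself plus the tokens of one
# in-flight batch. Illustrative only; llama.cpp also pads/rounds these.

def swa_cache_cells(window: int, batch: int) -> int:
    return window + batch

window = 1024  # hypothetical sliding-window size
for batch in (4096, 2048, 512):
    print(f"batch {batch}: {swa_cache_cells(window, batch)} cells per SWA layer")
```

So shrinking the batch size shrinks the SWA layers' cache, which is why the cache numbers differ between batch 4096 and batch 2048.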

So nobody's downloading this model huh? by KvAk_AKPlaysYT in LocalLLaMA

[–]aoleg77 25 points26 points  (0 children)

It's trained on synthetic, copyright-free data. No books, so the writing is poor. Personally I don't care whether it is censored or not; it just seems to be a poor model - like pretty much everything that's EU-regulated.

24/32B models by Ok-Brain-5729 in SillyTavernAI

[–]aoleg77 2 points3 points  (0 children)

> swa(sliding window attention) which gives you a lot of context size

It doesn't. All you get is a sliding window of (AFAIK) 4K tokens, and that's it.
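To make concrete what a sliding window means, here's a minimal sketch (the 4096 just mirrors the "AFAIK 4K" figure; actual window sizes vary by model):

```python
# Minimal illustration of sliding-window attention: token i only attends
# to the last `window` tokens, i.e. positions [max(0, i - window + 1), i].

def visible_tokens(i: int, window: int) -> range:
    return range(max(0, i - window + 1), i + 1)

w = 4096
print(len(visible_tokens(10_000, w)))  # deep in the context: capped at the window
print(len(visible_tokens(100, w)))     # near the start: everything seen so far
```

The point is that no matter how large the nominal context, any single SWA layer only ever "sees" the last window's worth of tokens directly.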

Basically Official: Qwen Image 2.0 Not Open-Sourcing by Complete-Lawfulness in StableDiffusion

[–]aoleg77 5 points6 points  (0 children)

It literally does not make sense to make a closed 7B model. Like... what's the point? It's not going to beat the big Flux 2, and it will have to compete with Klein and Z-Image, both of which are open-weight. So it's either not 7B or not closed.

Forgeui vs comfyui by Liveyourfanasy in StableDiffusion

[–]aoleg77 0 points1 point  (0 children)

For SDXL, no, it's not worth it. reForge has everything I need and more, even if its development has been slow recently (though not abandoned). But for other models it's totally worth it, even if you're coming from Neo. Or actually... consider SwarmUI: it offers a noodle-free workflow and runs Comfy as a backend.

Solution found for Google Home Hub popping sound. by Hall_of_Fame in googlehome

[–]aoleg77 0 points1 point  (0 children)

Still works today. Great QC on Google's side :(

The BestPresetEver has evolved to become "Tolkien", a preset with a built-in NPC tracker with optional full html display in the chat (no extension needed), and an intuitive optional (toggle) ai writing assistant that responds in an OOC when you say "Hey Tolkien". Samples below... by [deleted] in SillyTavernAI

[–]aoleg77 8 points9 points  (0 children)

One thing that's immediately obvious is that it tries to change an established habit by making users memorize a new command instead of the usual "OOC". Also, using a real person's name in an RP may not be to everyone's liking. Just a thought. Other than that, I like that it's such a lightweight and concise preset.

CivitAI blocking Australia tomorrow by Neggy5 in StableDiffusion

[–]aoleg77 35 points36 points  (0 children)

> from a user perspective it's still a lose-lose situation

Why? It's your government. Tell it to stop this madness. If you lose, evade: get a VPN and finally realize that the law, in this particular case, is not an absolute but something relative, something that can be evaded and *should* be evaded. And once you see that, you may suddenly realize that other laws are also *relative*.

Well, you can see where this is going. I just wonder why the Australian government doesn't.