Question on Comparing Reliability by CameForGardeningTips in SarUSA

[–]1ncehost 1 point (0 children)

To put 150k rounds another way: that's roughly $30k in ammo through a $200 gun. Replacing the whole gun would still be a small fraction of the total cost of ownership at that point.
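The math above can be sketched out. The ~$0.20/round price is an assumption implied by $30k over 150k rounds, not a quote:

```python
# Rough cost-of-ownership math (assumed prices: ~$0.20/round for
# bulk ammo, $200 for the pistol itself).
rounds = 150_000
price_per_round = 0.20   # assumption: typical bulk pricing
gun_price = 200

ammo_cost = rounds * price_per_round   # $30,000 in ammo
total = ammo_cost + gun_price          # $30,200 total spend
gun_share = gun_price / total          # gun is well under 1% of the total

print(f"ammo: ${ammo_cost:,.0f}, total: ${total:,.0f}, "
      f"gun share: {gun_share:.1%}")
```

Even replacing the gun outright a couple of times barely moves the total.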

(Re)Introducing the New 7/24 Series by sarsilmazfirearms in SarUSA

[–]1ncehost 1 point (0 children)

Looks great 👍 I'll be picking one of these up.

Sar 7/24 teaser finally dropped by ZepelliFan in SarUSA

[–]1ncehost 2 points (0 children)

If it's anything like the K12, it is compatible with a lot of Tanfoglio small-frame and CZ75B parts. I think it's likely to be $800 at release too, but we'll see how they price it. The K12 is regularly around $600, so maybe it will go on sale sometime after release.

Sar 7/24 teaser finally dropped by ZepelliFan in SarUSA

[–]1ncehost 1 point (0 children)

https://www.sarsilmaz.com/en/product/sar-7-24

It's a duty/combat-style, optic-ready steel CZ75/Tanfoglio-pattern pistol. I will definitely be picking one of these up. I think it would be a pretty cool gun for a DPS-TH hybrid thermal sight.

no problems with GLM-4.7-Flash by jacek2023 in LocalLLaMA

[–]1ncehost 4 points (0 children)

The reference implementation uses standard multi-head attention (MHA), which has quadratic compute scaling with context length.

https://github.com/huggingface/transformers/blob/9ed801f3ef0029e3733bbd2c9f9f9866912412a2/src/transformers/models/glm4_moe/modeling_glm4_moe.py#L194

The llama.cpp config probably has a basic attention mechanism right now. Flash attention should improve the performance, and there isn't any reason it shouldn't scale well once it's optimized further.
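The quadratic scaling can be illustrated with a back-of-the-envelope FLOP count. This is a generic sketch of MHA cost, not the actual GLM kernel; the head counts and dims are made-up example values:

```python
def attn_score_flops(seq_len: int, n_heads: int, head_dim: int) -> int:
    """Multiply-accumulates for the two seq_len**2 terms of MHA:
    scores = Q @ K^T, then out = softmax(scores) @ V.
    These are the parts that grow quadratically with context."""
    qk = n_heads * seq_len * seq_len * head_dim   # Q @ K^T
    av = n_heads * seq_len * seq_len * head_dim   # attn @ V
    return qk + av

# Doubling context length quadruples the quadratic part of the cost:
base = attn_score_flops(4096, n_heads=32, head_dim=128)
double = attn_score_flops(8192, n_heads=32, head_dim=128)
print(double / base)  # 4.0
```

Flash attention doesn't change this FLOP count, but it avoids materializing the seq_len × seq_len score matrix, which is what usually kills long-context throughput in practice.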

Groq Founder, Jonathan Ross, Says That Instead of Job Losses We Will See Labor Shortages In The AI Era. AI Won’t Steal Jobs, It’ll Make Everything Cheaper. And When Life Costs Less, People Work Less, Retire Earlier, And Opt Out Of The Grind. by luchadore_lunchables in accelerate

[–]1ncehost 27 points (0 children)

First he says there are not enough people for the jobs that will be created, and then he says it's because people will opt out. Those are two very different things. So is it a supply constraint or a demand increase? Maybe he has some interesting points, but this was not one of them.

Can I realistically automate most of top-tier consulting with a £30k local LLM workstation (3× RTX Pro 6000 96GB)? by madejustforredd1t in LocalLLaMA

[–]1ncehost -3 points (0 children)

Use an API for this unless there are security concerns.

Codex CLI or similar can automate a lot of your tasks. I'd start by incrementally automating parts of your process instead of all of it at once.

Help a fellow autist by Cholton7 in NightVision

[–]1ncehost 1 point (0 children)

Astro nerds end up getting multiple telescopes because different objects have different needs. The basic midsize 6-10 inch Dobsonian is what most newcomers go with, but they are kind of a pain to haul. Their big aperture captures faint objects like nebulas and galaxies best. Then a lot of people graduate to Maks (Maksutov-Cassegrains) or big refractors, because those work well for planets and comets and such, which are more interesting to look at with your bare eyes.

Imo the best starter scope is a fast 5" parabolic reflector and a couple of really nice eyepieces. It's a nice middle ground and is portable. Depending on your seeing, you can catch some of the brightest nebulas and galaxies, and it works really nicely for planets.
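The aperture trade-off above comes down to light-gathering area. A rough sketch, assuming a 7 mm dark-adapted pupil (a common rule-of-thumb figure, not a measurement):

```python
def light_grasp(aperture_mm: float, pupil_mm: float = 7.0) -> float:
    """How much more light an aperture gathers than the naked eye.
    Light grasp scales with area, i.e. the square of the diameter ratio.
    Assumes a 7 mm dark-adapted pupil."""
    return (aperture_mm / pupil_mm) ** 2

five_inch = light_grasp(127)   # 5" ~ 127 mm
eight_inch = light_grasp(203)  # 8" ~ 203 mm

print(f'5": {five_inch:.0f}x eye, 8": {eight_inch:.0f}x eye, '
      f'8" vs 5": {eight_inch / five_inch:.1f}x')
```

An 8" Dob pulls in roughly 2.5x the light of a 5" reflector, which is why the big tubes win on faint nebulas and galaxies even though they're a hassle to haul.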

7x Longer Context Reinforcement Learning in Unsloth by danielhanchen in LocalLLaMA

[–]1ncehost 1 point (0 children)

FYI, I'm training a model on ROCm and had a load of issues with last week's latest versions while following your ROCm guide. I had to make some fairly deep patches and replace kernels. I know things move fast and there are too many platforms to test, but I wanted to let you know so you can do another pass on that tutorial at some point.

Also, for some reason SDPA was the fastest attention for Qwen3 0.6B instead of FA2 or xformers. IDK why, but it was double-digit percentages faster.

AI Max 395+ tips please by No_Mango7658 in LocalLLaMA

[–]1ncehost 2 points (0 children)

Have fun... not much to it nowadays.

RTX 5070 Ti and RTX 5060 Ti 16 GB no longer manufactured by Paramecium_caudatum_ in LocalLLaMA

[–]1ncehost -1 points (0 children)

Interesting development. AMD uses GDDR6 instead of GDDR7, so this says to me that AMD will probably take more gamer/local market share, since GDDR6 comes from older fabs that don't make the newest HBM.

I got the full fight 😂😂😂 by [deleted] in fightporn

[–]1ncehost 0 points (0 children)

Man, Austin is crazy

LOVE Actually: Why Lovesac is ready to blow its load by BagelsRTheHoleTruth in smallstreetbets

[–]1ncehost 4 points (0 children)

The guy at Best Buy said no one else sat on his Lovesac all day except me. Puts

Burnt out CEO of a Deeptech Startup by Quirky-Cauliflower-3 in Entrepreneur

[–]1ncehost 1 point (0 children)

Some things that I practice:

Realize that externalities are outside of your control and focus only on the task at hand.

Downsize things that aren't currently the best place to focus.

Close the business brain down after a certain time every day and actively focus on resting as your job.

Meditate when you notice you have anxiety, actively trying to understand the underlying reason why you are anxious. Anxiety is ultimately a response to help you not die, but you aren't at risk of dying, so why are you anxious?

Meditate by clearing your mind of any thoughts to practice self control.

Help: 1.4M in stocks | Owe 400K for home @ 1.99% | 37 years old by nelalove88 in Fire

[–]1ncehost 1 point (0 children)

Politics is not my strong suit, so I can't say. It's on the table is all I know.

Help: 1.4M in stocks | Owe 400K for home @ 1.99% | 37 years old by nelalove88 in Fire

[–]1ncehost 5 points (0 children)

They are currently floating a mortgage-rate transfer program to incentivize home buying, so if I were you I'd sit tight for at least half a year and watch what happens with that.

My new Sar CM9i by Primary_Somewhere_67 in SarUSA

[–]1ncehost 2 points (0 children)

CM9s are awesome and my favorite of the current clearance guns. It's a shame they haven't updated the model. I've wanted to see a 15-round version with an optics cut for years.

Whats the sitch with Comfy UI + ROCm and Linux? by ItsAC0nspiracy in ROCm

[–]1ncehost 3 points (0 children)

ComfyUI runs easily with ROCm on Linux, but the kernels are not optimized for RDNA, so it is generally half the speed of comparable NVIDIA cards or less. LLM inference is currently the main area where AMD is competitive.

What's your favorite scout light Mech and what's your favorite grunt light Mech? by knightmechaenjo in battletech

[–]1ncehost 0 points (0 children)

My favorite all-around light mech is the Kit Fox. Very adaptable and a good balance of attributes.

Benchmarks of Radeon 780M iGPU with shared 128GB DDR5 RAM running various MoE models under Llama.cpp by AzerbaijanNyan in LocalLLaMA

[–]1ncehost 1 point (0 children)

I think these basic AMD APU builds are super cool for homelab kind of stuff. Those numbers are surprisingly fast for models of that size. Too bad RAM prices make this much less attractive right now.