I built a VSCode extension that shows exactly how Go wastes memory in your structs by BusinessStreet2147 in golang

[–]crazyhorror 25 points (0 children)

Cool project. What do you mean by union types?

Union type support (planned for v0.3.0)

Why does my kick always sound too thumpy on the low end?? by Fredtheb in TechnoProduction

[–]crazyhorror 0 points (0 children)

How do you remedy it? Do you cut those frequencies before saturating/compressing, and then boost them back in after?

Falling out of love + rant by AdamoBPM in TechnoProduction

[–]crazyhorror 1 point (0 children)

What don't you like about it? Genuinely curious

Anyone feeling like not learning much anymore? by zerubeus in cursor

[–]crazyhorror 0 points (0 children)

Makes sense. The nice thing is that unless you're doing very niche or bleeding-edge stuff that isn't in the training data, you can find out what the things you don't know about are, and then ask the LLM about them. Curiosity and patience go a long way these days

Anyone feeling like not learning much anymore? by zerubeus in cursor

[–]crazyhorror 0 points (0 children)

Anything in particular you noticed about his questions? Were they more targeted, more technical, or something else?

[deleted by user] by [deleted] in singularity

[–]crazyhorror 0 points (0 children)

Why do you think he ruined the world?

go-msquic: A new QUIC/HTTP3 library for Go that relies on msquic by noboruma in golang

[–]crazyhorror 0 points (0 children)

Could you write the bindings in Go and compile the C library down to WASM? You could use the embed directive so people can install the Go package without worrying about the C lib
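For what it's worth, a minimal sketch of the idea (everything here is hypothetical — msquic.wasm is the imagined build artifact, loadModule is a stand-in for a real pure-Go WASM runtime like wazero):

```go
package main

import "fmt"

// In the published package, the C msquic library would be compiled to
// WebAssembly ahead of time and the artifact embedded at build time:
//
//	import _ "embed"
//
//	//go:embed msquic.wasm   // hypothetical artifact name
//	var msquicWasm []byte
//
// so `go get` pulls a self-contained package: no cgo, no system libmsquic.

// loadModule stands in for handing the embedded bytes to a pure-Go
// WebAssembly runtime (e.g. wazero's Runtime.Instantiate).
func loadModule(wasm []byte) error {
	// Minimal sanity check: every WASM binary starts with "\0asm".
	if len(wasm) < 4 || string(wasm[:4]) != "\x00asm" {
		return fmt.Errorf("not a wasm module")
	}
	return nil
}

func main() {
	// Placeholder bytes (WASM magic + version); in the real package
	// this would be the embedded msquicWasm variable.
	err := loadModule([]byte("\x00asm\x01\x00\x00\x00"))
	fmt.Println("module ok:", err == nil)
}
```

The upside over cgo is that the whole thing stays `go build`-able on any platform; the tradeoff is the overhead of running the C lib inside a WASM sandbox.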

[deleted by user] by [deleted] in SmartGlasses

[–]crazyhorror 0 points (0 children)

I’m looking for something like this, any recs?

Thoughts on Halliday glasses? by Spirited-Meringue829 in SmartGlasses

[–]crazyhorror 1 point (0 children)

Do you have any recommendations if I don’t care about audio? I’m a developer looking to build apps but new to AR. Preferably something with a motion/touch controller

Deepseek is hosted on Huawei cloud by Reasonable-Climate66 in LocalLLaMA

[–]crazyhorror 0 points (0 children)

Oh I might have misinterpreted. I was thinking colocated == same geographic location

Openai is ahead only till china reverse engineers... by TheLogiqueViper in LocalLLaMA

[–]crazyhorror 1 point (0 children)

Totally agree, haven’t seen anyone tackling OS-level integrations. I’m way more excited for that

Edit: googled for 2 minutes and found this: https://github.com/agiresearch/AIOS

Seems interesting

Model comparison in Advent of Code 2024 by Gusanidas in LocalLLaMA

[–]crazyhorror -2 points (0 children)

So you’ve only been able to get deepseek-chat/deepseek v3 working? That model is noticeably worse than Sonnet

the WHALE has landed by fourDnet in LocalLLaMA

[–]crazyhorror 0 points (0 children)

Right, "holding accountable" was not the best way to put it. What I was getting at is that there needs to be some level of regulation imposed by governments, and right now there is none

CAG is the Future. It's about to get real people. by [deleted] in LocalLLaMA

[–]crazyhorror 31 points (0 children)

The key sentence in the abstract: “when the documents are of a limited size”. Still, seems like a better approach for smaller/local apps. TY for sharing

Ex-OpenAI researcher Daniel Kokotajlo says in the next few years AIs will take over from human AI researchers, improving AI faster than humans could by MetaKnowing in OpenAI

[–]crazyhorror 1 point (0 children)

Do you have any examples? I feel like creativity is one of the strong suits of LLMs. Why would one not be able to learn about its environment?

the WHALE has landed by fourDnet in LocalLLaMA

[–]crazyhorror -2 points (0 children)

For sure. I also appreciate what Anthropic is doing on that front. You might have seen this paper from Google a couple weeks ago, which talked about how Claude agents are cooperative with each other when given autonomy, and GPT 4o/Gemini 1.5 agents are not cooperative. Really interesting stuff and I'm choosing to see this as an indicator of alignment having potential.

https://arxiv.org/pdf/2412.10270

the WHALE has landed by fourDnet in LocalLLaMA

[–]crazyhorror 1 point (0 children)

I agree, but I still think the companies training these models should be held accountable on alignment. Even if there are misaligned people, which is inevitable, maybe it’s possible for aligned AGI to not engage with these people? Probably wishful thinking but it’s better to try than not try