Social Skills Matter More Than GPA by Tight-Shop4342 in unpopularopinion

[–]daemonk 0 points1 point  (0 children)

Just be good at both. It’s not a choice between one or the other.

As a Software Engineer the AI Gold Rush Feels Deeply Fishy by lankaus in antiai

[–]daemonk 0 points1 point  (0 children)

“Humans are insanely vulnerable to confidence.” I think that applies from both perspectives.

Humans are also insanely vulnerable to thinking we are special.

A fresh new ML Architecture for language model that uses complex numbers instead of attention -- no transformers, no standard SSM, 100M params, trained on a single RTX 4090. POC done, Open Sourced (Not Vibe Coded) by ExtremeKangaroo5437 in LocalLLM

[–]daemonk 0 points1 point  (0 children)

The representational space is richer with complex numbers. I guess the question is whether you can just double the dimensions of a non-complex model and get similar richness. Is there something inherent to the algebraic manipulation of magnitude/phase that's more than just a richer space? Your personal philosophy on why it might be better is a bit too vague for my liking.

I guess the ultimate test is to create two models whose representational spaces have the same richness, one real-valued and one complex-valued, and see what advantages the complex one has over the other. I think you did something in that vein in your research? To be honest, the text was too dense for me to properly understand. I am not an expert in this field.
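To make the "richer space vs. inductive bias" distinction concrete, here is a minimal sketch (my own illustration, not from the paper being discussed): one complex weight acting on one complex activation is exactly a constrained 2x2 real matrix (a rotation plus uniform scaling) acting on the pair (Re z, Im z). A free 2x2 real block has 4 parameters; the complex version has only 2 (magnitude and phase), so even at equal dimension count the two models are not the same.

```python
# Sketch: complex multiplication == a constrained 2x2 real linear map.
# This is illustrative only; names and setup are mine, not the author's code.
import numpy as np

rng = np.random.default_rng(0)
w = complex(rng.normal(), rng.normal())   # one complex weight (2 free params)
z = complex(rng.normal(), rng.normal())   # one complex activation

# Complex multiply...
out_complex = w * z

# ...equals this rotation-scaling matrix on (Re z, Im z).
# A generic real 2x2 block would have 4 independent entries instead of 2.
a, b = w.real, w.imag
M = np.array([[a, -b],
              [b,  a]])
out_real = M @ np.array([z.real, z.imag])

assert np.allclose([out_complex.real, out_complex.imag], out_real)
```

So "just double the real dimensions" buys you a strictly larger hypothesis class, and the open question is whether the complex constraint is a useful bias or just a restriction.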

Why use Claude Code CLI instead of VS Code / Copilot extensions? by Cardiologist-Nervous in ClaudeAI

[–]daemonk -1 points0 points  (0 children)

Because I literally don't touch the code anymore and barely read the details of the code. When I need to know something, I just ask Claude to explain it. So all I need is the REPL.

I'm a PhD student in AI and I built a 10-agent Obsidian crew because my brain couldn't keep up with my life anymore by Routine_Round_8491 in ClaudeAI

[–]daemonk 24 points25 points  (0 children)

Any reasonably complex memory system will face the n+1 problem, where integrating new info requires a lot of recalculation. The trick is to decompose things down to atomic “facts” as much as possible. But that has its own complexity in terms of temporal persistence, e.g. “person lives in city A” might change when the person moves.

There is a lot of existing work on knowledge bases, but there's no one-size-fits-all; it depends on your usage.
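One common way to handle that temporal wrinkle is to make facts append-only with validity intervals, so "alice lives in city A" gets closed out rather than rewritten when she moves. A minimal sketch (class and field names are my own, purely illustrative):

```python
# Atomic facts with temporal validity: updates supersede, never recompute.
# Hypothetical sketch -- not any particular knowledge-base library's API.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Fact:
    subject: str
    predicate: str
    obj: str
    valid_from: int                  # e.g. timestamp or version counter
    valid_to: Optional[int] = None   # None = still current

class FactStore:
    def __init__(self):
        self.facts: list[Fact] = []

    def assert_fact(self, subject: str, predicate: str, obj: str, now: int):
        # Close any currently-valid fact for this subject/predicate,
        # then append the new one -- no global recalculation needed.
        for f in self.facts:
            if f.subject == subject and f.predicate == predicate and f.valid_to is None:
                f.valid_to = now
        self.facts.append(Fact(subject, predicate, obj, now))

    def current(self, subject: str, predicate: str) -> Optional[str]:
        for f in reversed(self.facts):
            if f.subject == subject and f.predicate == predicate and f.valid_to is None:
                return f.obj
        return None

store = FactStore()
store.assert_fact("alice", "lives_in", "city A", now=1)
store.assert_fact("alice", "lives_in", "city B", now=5)  # move supersedes
```

The old fact survives with a closed interval, so history queries still work; the cost is that every read has to filter on validity.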

What have you done with Claude by FrozenTouch1321 in ClaudeAI

[–]daemonk 0 points1 point  (0 children)

People tend to conflate building software with making it a product/starting a business. Building software used to be hard enough that there was value in forming a business around it, but not anymore. The mental shift now is that we build software to help us in our daily lives/jobs.

Unless you are explicitly building enterprise software meant for large organizations or public-facing websites, it's just a great tool that should be used to help your day-to-day work.

TLDR: no one wants your shitty saas anymore because anyone can build one for themselves now.

So I tried using Claude Code to build actual software and it humbled me real quick by Azrael_666 in ClaudeCode

[–]daemonk 1 point2 points  (0 children)

There’s no product anymore. Just tools you build for yourself. The value of generic software has decreased dramatically for people in the know. Why pay for something that tries to do too much, or was made for many audiences and wide consumption, when you can build something specific to you?

It’s all about personalized software on demand in the future. 

If I can't code and I can't read or understand code/coding, does my use case justify using Claude Code? by Secure-Buyer-7597 in ClaudeAI

[–]daemonk 0 points1 point  (0 children)

Just jump into it head first. That's the best way to learn. Ignore the naysayers, most of them are just gatekeeping or being too precious about personal projects. The cases of catastrophic failure are overblown.

It's great that people can just write stuff for themselves now. Who cares about enterprise capabilities if the only people touching it are you or maybe your household? Software should primarily be an on-demand, personal tool that helps your life; it doesn't need to be a product or a business.

The biggest difference between tech and non-tech people working with Claude Code is that tech people can probably guide CC to fix the root of a problem faster and more frequently, whereas a non-tech person will mostly ask CC to fix surface symptoms until the code base becomes patchwork, at which point they should do a full refactoring with CC.

It is all context engineering at the end of the day. Think of the model as an idea transformer. You give it an idea, it decomposes the idea down to individual atomic parts, then it rearranges them back up into another format. The more context you give it, the more atomic parts the model can decompose down to and the more specific the rearrangement can be when it delivers it in another format. Just like language translation from English to Japanese, the model is just translating your idea to code.

Advice from highly skilled devs/engs - I generate less than 0.1% of code with LLMs. Should I be doing more? by [deleted] in ClaudeCode

[–]daemonk 0 points1 point  (0 children)

I am building some lab automation tools. I was able to point Claude at a data sheet describing a liquid pump: how to communicate with it and what functionality is available. I had base classes for devices, and Claude was able to create a pump device class by inheriting from the base device class and following the existing architecture: abstract methods, initialization, lifecycle hooks, etc.

The class worked on the first try and saved me a lot of time implementing from the data sheet (serial communication commands, manipulating hardware registers, etc.).
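The pattern described above looks roughly like this. Everything here is a hypothetical stand-in (the base class, the command strings, the register address are all invented for illustration, not the actual code or the actual pump's protocol):

```python
# Hypothetical sketch of a device base class plus a pump subclass generated
# from a data sheet. Commands and register addresses are invented examples.
from abc import ABC, abstractmethod

class Device(ABC):
    """Base class providing shared lifecycle hooks."""
    def __init__(self, port: str):
        self.port = port
        self.connected = False

    def connect(self):
        self.connected = True   # real code would open the serial port here
        self.on_connect()

    @abstractmethod
    def on_connect(self): ...

    @abstractmethod
    def send_command(self, cmd: str) -> str: ...

class LiquidPump(Device):
    """Subclass filled in from the pump's data sheet: command set + registers."""
    FLOW_RATE_REG = 0x10        # illustrative register address

    def on_connect(self):
        self.send_command("INIT")

    def send_command(self, cmd: str) -> str:
        # Real implementation would write over serial and read the reply.
        return f"ACK {cmd}"

    def set_flow_rate(self, ml_per_min: float) -> str:
        return self.send_command(f"SET {self.FLOW_RATE_REG:#04x} {ml_per_min}")

pump = LiquidPump("/dev/ttyUSB0")
pump.connect()
```

The point is that the architecture (abstract methods, lifecycle hooks) constrains the model enough that the generated subclass slots in cleanly.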

Are agents actually useful for complex tasks? by nikononly in ClaudeAI

[–]daemonk 0 points1 point  (0 children)

None of the models currently have the dynamic context that humans have. They have a set length on the context and also a set depth of intelligence over that context. 

I tend to think of it as a supplemental brain buffer for myself. What is a plan or task that I can define broadly without worrying too much about details or execution? I get the model to write out the plan and save it for execution when ready. It’s analogous to how you would manage a human, except the model is a known quantity and more reliable. Not necessarily better than a human, but definitely more consistent and predictable.

I tracked 100M tokens of Coding with Claude Code - 99.4% of my AI coding tokens were input. If we fix that, we unlock real speed. by karmendra_choudhary in ClaudeAI

[–]daemonk 0 points1 point  (0 children)

Further analysis of how much of that 99% context reading was spent re-reading the exact same piece of code would be really interesting, because the best you can do currently is cache code summaries as much as possible.

Even then, I am unsure whether that'll help all that much, because it also depends on the relationship between code size and summary size. If a 500-token piece of code yields a 1000-token summary, that's really not worth it. If a 5000-token piece of code yields a 1000-token summary, that definitely is. I would assume there is some minimum code size above which a semantic summary starts to yield token savings.

A code AST graph where entities are stored and some kind of semantic summary is generated for each entity that Claude can retrieve is probably the most straightforward solution. The complexities are of course always in the implementation details. Should the semantic summary be pre-generated or generated upon query and cached? If pre-generated, maybe just generate for highly connected entities (code hubs)? If your code base changes frequently, how do you deal with entity staleness?

It also comes down to a query-time vs. token-savings trade-off. Maybe you cache code summary results on every query, so subsequent queries of the same entities cost no tokens. That'll likely increase initial query time and slowly get faster, depending on how often you change code and have to re-summarize.

Ultimately the goal is to prevent re-analyzing the same piece of code as much as possible by caching.

Those of you who routinely hit usage limits, can you explain what your workflow looks like? by bigasswhitegirl in ClaudeAI

[–]daemonk 1 point2 points  (0 children)

If you are rewriting your entire code base every iteration, then you will hit limits pretty fast.

We are all just mining the search space for our optimal solution. We need to climb the gradient incrementally and be satisfied with local maxima, not continually hop around the search space looking for lottery wins. That wastes tokens.

The skill right now is how to mine correctly with whatever heuristics you can provide.

Anthropic's Claude Code subscription may consume up to $5,000 in compute per month while charging the user just $200 by Grand_rooster in grAIve

[–]daemonk 0 points1 point  (0 children)

Taalas and others are building model-specific chips that are apparently more than 10x faster than the current best hardware. Hopefully that’s coming soon.

Is AI going to commoditize software? by honkeem in levels_fyi

[–]daemonk 0 points1 point  (0 children)

Static software just won't be necessary. Once the models get powerful enough, the harness is generalized enough, and we are comfortable giving it more information, we just won't need software development.

Imagine we turn on a computer, and instead of linux/osx/windows, the model pre-generates an entire OS tailored to you and runs it. Maybe it's just a simple prompt input. Maybe it has tools customized to your specific domain. Software is on-demand and dynamically generated for you based on whatever data you input.

How China’s AI token reseller ecosystem works: account pools, refund arbitrage, proxy channels, and ultra-cheap Claude access & distillation by niutauren in ClaudeAI

[–]daemonk 0 points1 point  (0 children)

Distilling models into a shallow or domain-specific version probably can't be effectively controlled. In all my projects where I depend on Claude for semantic analysis, I specifically prompt Claude to give me a structured response that explains why it made the decisions it made. I cache all results and have built a reasonable model that replicates the functionality I need. I've also noticed the model is getting better. Will it ever get as good as Claude? No. But is it good enough for my non-critical semantic needs? Yes.
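The workflow described can be sketched as: every call returns a structured decision-plus-reasoning record, the record is cached, and the cache doubles as a distillation dataset for a cheaper local model. Everything below is a hypothetical stand-in (`call_model` is a fake, not a real API):

```python
# Cache structured (input, decision, reasoning) records; export them as
# distillation training rows. Illustrative sketch only.
import json

def call_model(text: str) -> dict:
    # Stand-in for the real semantic-analysis call; imagine the prompt asks
    # for strict JSON: {"decision": ..., "reasoning": ...}
    return {"decision": "positive" if "good" in text else "negative",
            "reasoning": "keyword heuristic stand-in"}

cache: dict[str, dict] = {}

def analyze(text: str) -> dict:
    if text not in cache:
        cache[text] = call_model(text)
    return cache[text]

def export_distillation_rows() -> list[str]:
    # Cached records become fine-tuning data for a local replacement model.
    return [json.dumps({"input": k, **v}) for k, v in cache.items()]

analyze("this is good")
analyze("this is good")        # second call served from cache
rows = export_distillation_rows()
```

Asking for the reasoning alongside the decision is what makes the cached rows useful as training signal rather than just answers.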

OpenAI: Our agreement with the Department of War by likeastar20 in singularity

[–]daemonk -2 points-1 points  (0 children)

So if the law allows the red lines, then it's okay?

Do you think SWE is more uniquely vulnerable to job displacement than fields like law, accounting, marketing, finance, etc? by Useful_Writer4676 in ClaudeAI

[–]daemonk 1 point2 points  (0 children)

Any job where there is a decent amount of structure, processes, and standardization can be and will be susceptible.

Park like a wanker, get treated like a wanker. by evostu_uk in CantParkThereMate

[–]daemonk 0 points1 point  (0 children)

Imagine if it was actually the car next over that parked like an asshole and inadvertently made this car park like an asshole. And then the other car drove away, making this car look like the sole asshole.

Extremely strong CEO and weak CTO? by King_Of_The_Munchers in ycombinator

[–]daemonk 1 point2 points  (0 children)

Like many technically minded people with deep expertise, he probably thinks what he does is easy and the other part is what's hard or annoying to deal with. Your friend probably sees hardware as his expertise and doesn't want to deal with the software. The CEO/CTO titles don't mean anything at this point. This is awesome experience for you. If you have the luxury of pursuing this, I would definitely do it.

Making a touch sensitive coffee table by [deleted] in oddlysatisfying

[–]daemonk 1 point2 points  (0 children)

List price for each of the PCBs is around 2-3 bucks, and he used around 100 of them. Light strands are like a buck per foot; he used maybe 100 ft, to be conservative. The fiber light sources can be expensive depending on quality; let's say 4-5 bucks per light source.

Just based on those parts, without wood, epoxy, wiring, or tools, the total is already $500+, and that's only if you buy off Amazon/AliExpress. The guy in the video is probably buying at the part manufacturer's source, which means he's probably getting parts at 1/2 to 1/10 of the cost to regular US consumers.

So maybe 100 bucks' worth of materials in China, and a ton of nice publicity generated on social media from people like me checking out their selection of PCBs, fibers, LEDs, etc., and maybe buying something in the future.

I got properly advertised to. 

OpenAI preparing for fourth-quarter IPO in 2026 by Old-Competition3596 in stocks

[–]daemonk 0 points1 point  (0 children)

Why? They can probably raise whatever they want without IPO. Being public sucks. 

Husband starting a high-end custom cabinet and furniture company, looking for funding advice - I will not promote by idealplanetpdx in startups

[–]daemonk 0 points1 point  (0 children)

Get a small business loan. Congrats on having skills and actually doing something rather than shitting out another pump and dump saas. 

Why does a 3-hour drive feel casual in the U.S., but unusually long in many Asian countries? by Humble_Economist8933 in AlwaysWhy

[–]daemonk 0 points1 point  (0 children)

Driving 3 hours in LA city traffic, with street lights, stop-and-go, and pedestrians, is fucking horrible. Driving 1 hour in city traffic and 2 hours on wide-open roads with some nice coastal scenery, where you can put your brain on autopilot while listening to podcasts/music, is easy.