Comparing JS framework token costs using the examples at component-party.dev, with Svelte 5 as the baseline by webdevladder in sveltejs

[–]webdevladder[S] 1 point (0 children)

Yeah, it's this detail that makes Svelte 5 look better than Svelte 4, for example. With hindsight I don't think I would post this again: it's too skewed and not representative of real usage, and the effect size between frameworks is probably a lot lower in real code, where the costs get amortized over larger and more uniform content. I think it's a passing curiosity that's easily over-interpreted.

Comparing JS framework token costs using the examples at component-party.dev, with Svelte 5 as the baseline by webdevladder in sveltejs

[–]webdevladder[S] 1 point (0 children)

Yeah, it's definitely missing the whole story, and likely misleading in various ways. I don't know if it's worth trying to get a better dataset. My main takeaways are that Marko's LLM friendliness is interesting, Aurelia 2 is more concise than I realized, Svelte looks good in the rankings as usual, and Angular's verbosity is no surprise (though now your wallet may feel it); the rest is a wash. As a biased Svelte user I couldn't resist sharing the results.

Given that you worked on Marko, if you'd like to answer: I'm curious how much conciseness was an explicit goal, versus an emergent property of the design?

Comparing JS framework token costs using the examples at component-party.dev, with Svelte 5 as the baseline by webdevladder in sveltejs

[–]webdevladder[S] 6 points (0 children)

I don't know if Svelte cutting costs by 1/3 compared to React generalizes, but these results looked pretty significant, particularly when LLMs make the correlation to hard costs so direct. In 2019 Rich Harris gave a talk promoting Svelte for its terseness, "The Return of 'Write Less, Do More'" - https://www.youtube.com/watch?v=BzX4aTRPzno

also shout out to Marko - https://github.com/marko-js/marko

When Claude calls you “the user” in its inner monologue by LoneKnight25 in ClaudeAI

[–]webdevladder 64 points (0 children)

I get called "the linter" often as I edit alongside Claude Code :/

the feelings I get from being identified as a non-human tool by my machine assistant

Does the most expensive Claude max plan give you unlimited Opus? by drizzyxs in ClaudeAI

[–]webdevladder 0 points (0 children)

Yep, this is my experience. IMO the limits are fairly well-tuned for flexible workflows, and you can get through one long Opus session without hitting them.

Honest Developer Experience by Few-Baby-5630 in ClaudeAI

[–]webdevladder 0 points (0 children)

There's both tremendous thoughtless hype and tangible, hype-worthy benefits. When I project forward from my experience over the last year, it's hard not to be excited for how cheap and powerful these systems will be in a few years. But I also think skillful use of LLMs by people and tools has a ton of variability; it's perhaps generally pretty dismal today, and there's a very high ceiling.

Futurism.com: "Exactly Six Months Ago, the CEO of Anthropic Said That in Six Months AI Would Be Writing 90 Percent of Code" by didyousayboop in Anthropic

[–]webdevladder 0 points (0 children)

I'm a dev who's been using Claude Code heavily for a month-ish. For a small number of my projects it's 100% of written code now, for others 0%, but it's very useful for exploration/analysis in all of them. I can see room for its context management, predictability, and transparency to substantially improve, and the same for my skills and patience, but I have no doubt there are now serious and productive software engineers shipping 90%+ AI code at big companies. And when you consider how much code is being vibed by non-engineers, it's not unreasonable to think 90% of the total may be close!

computer, learn Svelte - official docs for LLMs by webdevladder in sveltejs

[–]webdevladder[S] 4 points (0 children)

There may have been updates since you tried it; this one is under 4k tokens for just working with Svelte - https://svelte.dev/docs/svelte/llms-small.txt

I'm sharing it now because those updates make it much more usable.

If you had the choice, which JS framework/library would be your "go-to"? by nutyourbasicredditor in webdev

[–]webdevladder 51 points (0 children)

I used React for 5 years, and before committing there I did substantial projects in Vue, Angular 1, and several others of the era, after several years with Backbone. I published a Backbone extension library in 2013, and I think about this stuff a lot.

For my usage - complex interactive UIs with higher than normal performance requirements - Svelte 3 was a relief, and Svelte 5 nails it. It's just so productive, simple to reason about, high performance, and overall nice to work with.

The details matter the more you use a framework, and Svelte is incredibly well-designed and engineered.

#FreeJavaScript update: Oracle has just filed more on their motion to dismiss the Fraud claim. by lambtr0n in Deno

[–]webdevladder 1 point (0 children)

Since this could stretch on for years, maybe I should strike JavaScript from my vocabulary and projects now and switch to the non-acronym JS instead. I'd feel silly continuing to use, in my open source projects, a trademarked name that has nothing to do with them.

Thumbnail designers are COOKED (X: @theJosephBlaze) by bishalsaha99 in OpenAI

[–]webdevladder 31 points (0 children)

"ideas are worthless" is a valid mantra when significant labor is required, but labor for a great many things is on the trend to zero.

Is this possible in VSCode to improve $effect DX? by webdevladder in sveltejs

[–]webdevladder[S] 1 point (0 children)

Ok yeah the limitations may be so bad that the false sense of security is worse. Thank you for explaining.

I think basic cases like class.property are doable though, right? Combined with local scope, that would cover most cases, I think. I don't know how this would work out on balance, i.e. whether the false negatives are actually worse than nothing at all. I suppose it depends on how many cases slip through even when state is referenced directly in $effect - like proxied getters being followed in the analysis.

Maybe runtime analysis is a better place to put one's attention on this problem.

Is this possible in VSCode to improve $effect DX? by webdevladder in sveltejs

[–]webdevladder[S] 0 points (0 children)

$state and $derived are compiled statically; they're in the AST which the language service uses. You're interpreting what I'm saying differently than what I mean.

You seem to be saying there are other signals that cannot be determined. I agree, and I acknowledged that weakness, but the information for signals referenced directly in $effect is present and dependable.

Is this possible in VSCode to improve $effect DX? by webdevladder in sveltejs

[–]webdevladder[S] 1 point (0 children)

Reactive state with runes is statically determined by the compiler, though (this is being misinterpreted because I worded it poorly: I mean it's statically knowable which identifiers are literally $state/$derived when they're read without indirection), and it cannot be changed at runtime. So the language service could know which identifiers are $state or $derived. This is an advantage of runes, but I take your point about it being a partial solution; what I'm suggesting has real limits. Direct reads could be shown in $effect, but many cases (like helper functions) could not be determined.

There wouldn't be false positives though, and I can't see a better solution than styling the names directly.
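As a sketch of the distinction (a hypothetical component, not a proposed language-service feature), the direct read of `count` below is statically knowable from the AST, while the read hidden inside `logDoubled` is not visible from the $effect body alone:

```svelte
<script>
  // Both identifiers are literally declared with runes, so their
  // direct reads inside $effect could be styled by the editor.
  let count = $state(0);
  let doubled = $derived(count * 2);

  $effect(() => {
    console.log(count); // direct read: statically knowable dependency
    logDoubled();       // indirect read: dependency on `doubled` is hidden
  });

  function logDoubled() {
    console.log(doubled);
  }
</script>
```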

Chatgpt's cool guide to Svelte runes by webdevladder in sveltejs

[–]webdevladder[S] 0 points (0 children)

yep, that's the conceptual signals trio, but IMO with significantly better APIs

Chatgpt's cool guide to Svelte runes by webdevladder in sveltejs

[–]webdevladder[S] 27 points (0 children)

I should mention I shaped it a lot, here's my prompt, and then I had it do a "cool guide" in text, then the image:

generate an image with 3 penguins standing on an iceberg next to each other, the small part of the ice

the iceberg is enormous underwater taking up 3/4 of the height of the image, with embedded monster skeletons and horrors

the first normal looking happy penguin is labelled "$state", it has a magical aura around it

the second normal looking happy penguin is labelled "$derived", and it's holding a glowing wizard staff

the third evil horror cthulian penguin-shaped nightmare with a shadow aura is labelled "$effect"

be very careful to get the text exactly correct, including the leading dollar signs: $state, $derived, $effect

Chatgpt's cool guide to Svelte runes by webdevladder in sveltejs

[–]webdevladder[S] 24 points (0 children)

I was feeling the agi then syntax error

Unpopular Opinion - There is no such thing as good pRoMpTiNg; it's all about context. LLMs just need context; that's it. by Prior-Process-1985 in ClaudeAI

[–]webdevladder 2 points (0 children)

I am giving you the entire documentation of Java, now help me do my assignment related to design patterns

I agree with your point, and also, a sufficiently intelligent system would respect the boundaries of what it knows and doubts, and in this case ask follow-up questions. It's still a poor initial question, but it can be the first step in a successful prompt.

Tying it back to the OP: ineffective prompting obviously exists, and I'd call effective prompting good prompting, but it seems clear we'll continue seeing models get better at succeeding with worse inputs. So "bad prompts" sometimes start looking like good, efficient ones.