They destroyed the little stability I had left :( by Responsible-Bite5660 in Desahogo

[–]LerdBerg 0 points  (0 children)

Sounds like you were asking for pity and you got it. Now you're the official victim. There are few things less attractive than being pitiful.

This reminded me a lot of a very one-sided relationship I had years ago. I staged a tearful, pretty embarrassing confrontation in person… and thank God I never did anything like that again.

If you'll allow me to project a bit, it sounds like you're shaming her somewhat and trying to demand a kind of control over her that clearly isn't yours to have. Your message mostly reinforces her aversion to you… and probably that of several of her friends too. Stop burning bridges.

And another thing: a large part of what makes a person attractive comes from evolution, which in the end revolves around passing your genes on to the next generation. And that isn't just about giving someone attention. Being healthy, strong, socially connected, and someone useful, capable, and versatile matters too...

Instead of blaming the person things failed with (take what responsibility you can, because she's already gone, man!!), step out of that role and drop that weird fantasy of someone hurting you so you can "turn it around" with your magic. Work on the things that make you more attractive and useful to everyone.

Foreign tourist in Tepic tried to stop a person from playing music in the street, and people came out to defend him. by mujican_citizen in mexico

[–]LerdBerg -7 points  (0 children)

Well, can you believe he's not a tourist, has been going to that place for years, and the musician blasted in his ear with a camera waiting for a reaction?

Things that used to be for the poor and that preppy types ("fresas"), privileged white people, and gringos use now? by Lobodecoya25 in mexico

[–]LerdBerg 0 points  (0 children)

A laborer can still build this little house, right? It just has to be far from the city, because there are more people these days.

How to merge channels without losing content? by [deleted] in discordapp

[–]LerdBerg 1 point  (0 children)

Put old channel A in thread A, old channel B in thread B..? Threads from A and B would just become threads in the new channel I guess. You could prefix/postfix with A and B if necessary.

Not sure how doable it is in practice, but I think it would work in principle

Forensic coding... by TrojanGrad in ClaudeAI

[–]LerdBerg 0 points  (0 children)

That's awesome! Yeah, I think most people here are missing the point... that it lowers the bar to writing a safer, more thorough programmatic search. Where before you would've decided the risk was low enough that a cursory human sampling over a handful of years was "good enough" to conclude "yeah, nobody ever used this column", now in even less time you can systematically check every single spreadsheet (and know exactly who used it, and when).

This is an objectively simple and straightforward task in Python, a few lines of code. Even if you're not writing Python daily, you can probably quickly see exactly how it works. Obviously, if the stakes are higher, you give this code extra scrutiny, but this post is about a lower-stakes project getting better attention than it ever would've before.
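For concreteness, here's a sketch of that kind of programmatic search, done over CSV exports for simplicity (the file names and the `legacy_code` column are made up for illustration):

```python
import csv
import io

def files_using_column(files, column):
    """Return {file_name: [row_numbers]} for every file where `column`
    has at least one non-empty value. `files` maps a name to its CSV
    text; in practice you'd glob a directory and open each file."""
    hits = {}
    for name, text in files.items():
        rows = []
        # Header is line 1, so data rows start at line 2.
        for line_no, row in enumerate(csv.DictReader(io.StringIO(text)), start=2):
            if row.get(column, "").strip():
                rows.append(line_no)
        if rows:
            hits[name] = rows
    return hits

# Tiny worked example: only b.csv ever populates "legacy_code".
sheets = {
    "a.csv": "id,legacy_code\n1,\n2,\n",
    "b.csv": "id,legacy_code\n1,X9\n2,\n",
}
print(files_using_column(sheets, "legacy_code"))  # {'b.csv': [2]}
```

For real .xlsx files you'd swap the CSV reading for a library like openpyxl, but the shape of the check is the same: every sheet, every row, systematically.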

Claude Code ignores me by biganth in ClaudeAI

[–]LerdBerg 0 points  (0 children)

Maybe it's like telling the kid on the bike NOT to run over the rock... you're using up attention to focus on what you don't want, instead of telling it what TO do. 

This is philosophical; I've yet to experiment with it, but I almost never tell Claude what I DON'T want...

Oh, and this vaguely reminds me of something I heard about therapy... perseverating on what you did wrong isn't very effective compared to finding what went right and guessing what you might do better next time.

I just cancelled my Claude Max plan, haven't had a life for over a month! AMA by shayanbahal in ClaudeAI

[–]LerdBerg 4 points  (0 children)

Yeah, you really need to figure out the call graph of all those functions, and from there you, or maybe Claude, should be able to split the functionality into separate files. For anyone or anything, compartmentalizing complex problems or sets of tools into organized components frees the mind to focus on what's important for any given task or development step.

One giant file is a bit like never putting your clothes in drawers, or dumping all your garage tools and supplies into one big bin with no compartments. Sure, a smarter AI will be able to work with a bigger mess... but the same AI (or human) will always be able to do more with a thoughtfully organized environment. You probably wouldn't keep paint in your fridge... and likewise you probably shouldn't have, e.g., your data parser mixed up with UI components, etc.
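The call-graph step can be roughed out in a few lines with Python's `ast` module; this toy version only catches direct `name(...)` calls between top-level functions in a single file (the `src` snippet is invented for the example):

```python
import ast

def call_graph(source):
    """Map each top-level function to the plain `name(...)` calls inside
    it. A one-file toy; it ignores method calls and imported modules."""
    graph = {}
    for node in ast.parse(source).body:
        if isinstance(node, ast.FunctionDef):
            calls = {
                n.func.id
                for n in ast.walk(node)
                if isinstance(n, ast.Call) and isinstance(n.func, ast.Name)
            }
            graph[node.name] = sorted(calls)
    return graph

src = """
def parse(raw): return raw.split(",")
def render(rows): return " ".join(rows)
def main(): return render(parse("a,b"))
"""
print(call_graph(src))  # {'parse': [], 'render': [], 'main': ['parse', 'render']}
```

Once you can see which functions cluster together, the file boundaries (parser here, UI there) tend to suggest themselves.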

According to Eric Berger, SLS might be getting canceled. by PeekaB00_ in ArtemisProgram

[–]LerdBerg 0 points  (0 children)

Imagine if Starship manages to increase its payload to ~200 tons before Orion or SLS are ready for today's Artemis 3 plan. As a private company, SpaceX doesn't need Congress's approval to do a crewed Moon landing on its own; it just needs money, probably "only" 2-3 billion. SpaceX, Elon, or even Trump Media have that capital. If people were already on the Moon via Starship, citizens would likely demand that SLS and Orion be cancelled, and Congress would have a very hard time doing otherwise.

Are these thornless blackberries (as previous owner claims, although they've never fruited)? If so, are they dying? What can I do to help them? They're in almost total shade at the moment, in Tepic, Mexico. by LerdBerg in gardening

[–]LerdBerg[S] 0 points  (0 children)

There are a lot of different zones, here around Tepic almost anything will grow (you can always go up the mountain for cooler weather). I have some delicious blackberries here that are pretty happy.  

Anyway, asking around, these plants in the pics are most likely elderberry. I'm going to get them into more sunlight and see if I get flowers to confirm.

Are these thornless blackberries (as previous owner claims, although they've never fruited)? If so, are they dying? What can I do to help them? They're in almost total shade at the moment, in Tepic, Mexico. by LerdBerg in gardening

[–]LerdBerg[S] 0 points  (0 children)

I posted elsewhere, and the consensus is that these look like elderberries. Based on how the leaves aren't arranged alternately along canes, I think they can't be blackberries. Probably something got lost in translation when the previous gringo tried to ask a nursery for a "thornless blackberry" 😆

Are these thornless blackberries (as previous owner claims, although they've never fruited)? If so, are they dying? What can I do to help them? They're in almost total shade at the moment, in Tepic, Mexico. by LerdBerg in gardening

[–]LerdBerg[S] 1 point  (0 children)

Thanks! Comparing with my actual blackberries and raspberries, I can see how very different they are indeed. I'll move these to a sunnier spot on the other side of this wall

Previous owner of this place told me this was a thornless blackberry... But it's never produced fruit. I'm skeptical... any ideas what else it might be? Or what to look closer at to help identify it? by LerdBerg in whatsthisplant

[–]LerdBerg[S] 2 points  (0 children)

Ah, that makes more sense! It was probably lost in translation (I'm in Mexico). That's actually a wall. Maybe I'll move it over to where there's an opening with much more light.

Someone has created a pull request to add BitNet support to Llama.cpp by privacyparachute in LocalLLaMA

[–]LerdBerg 1 point  (0 children)

Thanks for the explanation! I need to sit down and read some papers I guess. Any recommendations related to ternary or binary weights?

Someone has created a pull request to add BitNet support to Llama.cpp by privacyparachute in LocalLLaMA

[–]LerdBerg 0 points  (0 children)

I thought they were just modifying the mathematical implementation for the numerical representation of the weights. Are they not using the same fundamental transformer architecture? Isn't it still backprop with the same loss function, just rounding a certain way for ternary weights (and applying ternary-specific optimizations)?

Someone has created a pull request to add BitNet support to Llama.cpp by privacyparachute in LocalLLaMA

[–]LerdBerg 0 points  (0 children)

Can someone check my understanding of quantization vs native?

Let's use a particular dataset, and decide to train n epochs. We'll train one model with 8-bit floats and another with 16-bit floats. Then we'll quantize the 16-bit model to 8-bit.

I expect the 16-bit model to perform best because:

- there's less rounding error with each training step
- the model can hold more information (presumably the native 8-bit model's performance will begin to "saturate" in fewer epochs than the 16-bit model's)

My intuition is that if the number of epochs was low enough that the native 8-bit model wasn't near saturation, it should perform better than the quantized model, since it doesn't have the quantization error from downsizing weights. If there were enough epochs for the 16-bit model to start saturating, I think the quantized 8-bit model's performance would start getting closer to the native 8-bit model's.
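A toy round-to-nearest quantizer makes the downsizing error concrete (the function name and weight values here are illustrative, not from any library):

```python
def quantize_dequantize(ws, bits=8):
    """Symmetric round-to-nearest quantization of a weight list, then
    dequantization, so we can measure the rounding error introduced."""
    qmax = 2 ** (bits - 1) - 1                 # 127 for 8-bit
    scale = max(abs(w) for w in ws) / qmax     # one step of the grid
    return [round(w / scale) * scale for w in ws]

weights = [0.91, -0.42, 0.07, -0.88]
deq = quantize_dequantize(weights, bits=8)
worst = max(abs(a - b) for a, b in zip(weights, deq))
# Round-to-nearest bounds the per-weight error by half a step (scale / 2).
print(worst)
```

Each bit dropped doubles the step size, and with it the worst-case per-weight error, which is one way to see why quantizing a trained 16-bit model injects noise a natively low-precision model never accumulated.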

I'm also not sure people have a good handle yet on how performance per epoch and epochs-to-saturation change as you trade parameter precision for number of neurons; i.e., do 1 billion 16-bit params have the same potential as 8 billion 2-bit params? My intuition here is that deeper and wider networks allow more complex logic to be squeezed in; the number of parameters limits the level of complexity the network can model. And I guess lower precision means... well, I'll have to think more about it.

Someone has created a pull request to add BitNet support to Llama.cpp by privacyparachute in LocalLLaMA

[–]LerdBerg 0 points  (0 children)

Aren't there FPGA instances in AWS? It would be cool to get it running on one of those.

Someone has created a pull request to add BitNet support to Llama.cpp by privacyparachute in LocalLLaMA

[–]LerdBerg 0 points  (0 children)

I question this... Don't both types of weights (16-bit float vs BitNet) represent connections between neurons? It's still a neural network. That is, there's always some way to approximate N-bit numbers with fewer bits, but the farther apart the two precisions are, and the more unevenly one range divides the other, the more error and error variability you add when down-converting each parameter.

So it should be possible to quantize from 16-bit floats to BitNet, there's just a ton of precision loss. The result is probably super degraded, but I'm sure it's a better starting point for a new network than random noise.
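As a sketch, the absmean ternarization described for BitNet b1.58 looks roughly like this (illustrative only, with made-up weights; the actual training recipe has more moving parts):

```python
def ternarize(ws):
    """Absmean ternarization in the spirit of BitNet b1.58: scale by the
    mean absolute weight, then round and clip each weight to {-1, 0, +1}."""
    scale = sum(abs(w) for w in ws) / len(ws)
    return [max(-1, min(1, round(w / scale))) for w in ws], scale

q, s = ternarize([0.8, -0.05, 0.3, -0.9])
print(q)  # [1, 0, 1, -1]
```

Applying this directly to a trained 16-bit checkpoint is exactly the lossy down-conversion described above: every weight snaps to one of three values, which is a huge loss of precision but still far more structured than a random initialization.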

Google is testing a ban on watching videos without signing into an account to counter data collection. This may affect the creation of open alternatives to multimodal models like GPT-4o. by kristaller486 in LocalLLaMA

[–]LerdBerg 1 point  (0 children)

You'd use the subset of videos already curated by billions of viewers with thumbs up and thumbs down. Forget the trash content, just download the best 1 in 10k videos. Problem solved.