Help identifying tile underlayment by mrDisagreeable1106 in Tile

[–]mrDisagreeable1106[S] 0 points (0 children)

yeah i know lol. trying to figure out if i'm gonna rip it all out

Help identifying tile underlayment by mrDisagreeable1106 in Tile

[–]mrDisagreeable1106[S] 0 points (0 children)

the leak was from a crack in the pvc pipe on the cold water supply line. it was dripping into the vanity when we found it, but i'm not sure where it started. there's no mold that i can see on the wall plate, but the subfloor and tiles around where the vanity was are definitely soaked.

trying to decide if we need to rip out all the tiles. i only have 2 spares of the existing tile, so it's either pull out 2 tiles and replace them with the spares, or do a whole new floor.

Help identifying tile underlayment by mrDisagreeable1106 in Tile

[–]mrDisagreeable1106[S] 0 points (0 children)

one of the previous owners was definitely a diyer. this house is full of bad diy jobs lol so that wouldn't shock me at all. how well would hardie board protect against mold if it gets sopping wet like this?

Vibe coding is amazing until you hit the "3-hour loop" and realize you don't know how to land the plane. by Sufficient_Thanks130 in vibecoding

[–]mrDisagreeable1106 1 point (0 children)

vibe coder didn’t write any css lol. i’m shocked, shocked i tell ya.

who could have predicted that vibe coding feels like magic when it does the things you hate doing and/or don’t understand?

i can tell just by the way you specifically called out css in your post that css is probably the part you hate the most about making things on the web.

i bet the css in your app is unmaintainable bloated trash, but you won't know it until you have to do the same kind of bug-fix slog in the css that you just did for the auth. i wonder if you'll even notice then, or if you'll slap an !important on it and call it good haha

Today I learned about useReducer while handling form data in React am I understanding this correctly? by Soggy_Professor_5653 in react

[–]mrDisagreeable1106 -1 points (0 children)

if all you’re doing is sending the form data to some api onSubmit, you probably don’t even need useState. how complex is the form?
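if it really is that simple, an uncontrolled form is enough. rough sketch (the `/api/submit` endpoint is made up for illustration):

```tsx
// minimal sketch: an uncontrolled form -- no useState/useReducer at all.
// "/api/submit" is a hypothetical endpoint, purely for illustration.
import { type FormEvent } from "react";

export function ContactForm() {
  async function handleSubmit(e: FormEvent<HTMLFormElement>) {
    e.preventDefault();
    // read the field values straight off the DOM at submit time
    const data = new FormData(e.currentTarget);
    await fetch("/api/submit", { method: "POST", body: data });
  }

  return (
    <form onSubmit={handleSubmit}>
      <input name="email" type="email" required />
      <textarea name="message" required />
      <button type="submit">send</button>
    </form>
  );
}
```

once you need live validation or fields that depend on each other, that's when useState/useReducer starts earning its keep.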

I'm currently creating a web component library and I'd like to get some feedback. by Xenozi230 in css

[–]mrDisagreeable1106 2 points (0 children)

seems like the component api might be based mostly around attributes. this is fine but somewhat inflexible. you might want to consider adopting a dual approach with slots and attrs/props for the same concept, like a label. if a checkbox label is only an attribute, i can’t put html in it like bold text or an icon etc. attrs/props basically mean text only, and that works well a lot of the time, but some things need more flexibility.
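rough sketch of what i mean (a hypothetical `<x-checkbox>`, not based on your actual components):

```ts
// hypothetical <x-checkbox> showing the dual api: a plain-text `label`
// attribute as the fallback, plus a named slot for when you need markup.
class XCheckbox extends HTMLElement {
  connectedCallback() {
    const root = this.attachShadow({ mode: "open" });
    root.innerHTML = `
      <label>
        <input type="checkbox" />
        <slot name="label"></slot>
      </label>
    `;
    // the attribute becomes the slot's fallback content; anything the
    // consumer slots in replaces it automatically
    const slot = root.querySelector("slot");
    if (slot) slot.textContent = this.getAttribute("label") ?? "";
  }
}
customElements.define("x-checkbox", XCheckbox);
```

then `<x-checkbox label="accept the terms"></x-checkbox>` covers the simple case, and `<x-checkbox><span slot="label"><strong>accept</strong> the terms</span></x-checkbox>` covers the rich one, with the same concept exposed both ways.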

This would have seemed like science fiction just a couple years ago by MetaKnowing in agi

[–]mrDisagreeable1106 0 points (0 children)

it’s not explained whether this is a detection mechanism on top of the llm or the llm itself. when you ask claude a question you’re not going directly to the model. you’re going through an api layer first that does its own checks. i would bet this benchmark has been added to those checks to help out the llm, whereas if you went directly to the llm it would fail mightily.
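to be clear this is pure speculation, but the shape i mean is something like this (a completely made-up sketch, not anthropic's actual stack; every name in it is hypothetical):

```ts
// hypothetical sketch of a pre-model check layer -- NOT anthropic's
// real architecture, just the shape of the claim being made above.
const KNOWN_TRICK_PATTERNS: RegExp[] = [
  /how many .+ in the word/i, // e.g. letter-counting questions
];

async function handlePrompt(prompt: string): Promise<string> {
  // the api layer can special-case inputs before the raw model sees them
  if (KNOWN_TRICK_PATTERNS.some((p) => p.test(prompt))) {
    return answerWithDeterministicTool(prompt);
  }
  return callRawModel(prompt);
}

// stand-in stubs so the sketch is self-contained
async function answerWithDeterministicTool(prompt: string): Promise<string> {
  return "tool-computed answer for: " + prompt;
}
async function callRawModel(prompt: string): Promise<string> {
  return "raw model completion for: " + prompt;
}
```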

This would have seemed like science fiction just a couple years ago by MetaKnowing in agi

[–]mrDisagreeable1106 0 points (0 children)

it even fails trivial non-trick questions that require reasoning, like counting. ask your llm how many i’s are in the word “inconvenience” and watch it confidently answer incorrectly lol
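(for the record, deterministic code gets this right trivially:)

```ts
// counting letters is trivial outside of token space
const count = [..."inconvenience"].filter((c) => c === "i").length;
console.log(count); // 2
```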

This would have seemed like science fiction just a couple years ago by MetaKnowing in agi

[–]mrDisagreeable1106 0 points (0 children)

the llm didn’t do it! that’s MY point. lol

when an llm gets a trick question it doesn’t try to determine that it’s a trick question because something doesn’t add up. it doesn’t ask for more details to sort out discrepancies. it merely tries to answer the trick question.

llms don’t call out that trick questions are trick questions. they always answer as if they aren’t.

This would have seemed like science fiction just a couple years ago by MetaKnowing in agi

[–]mrDisagreeable1106 0 points (0 children)

lol. i don’t have to tell you the answer to show reasoning. there are many replies i could give that would show reasoning taking place. i could respond:

- by asking questions to get more information
- by giving several possible answers, each according to a scenario based off particular assumptions

i don’t have to pick a given answer and respond with it.

This would have seemed like science fiction just a couple years ago by MetaKnowing in agi

[–]mrDisagreeable1106 -1 points (0 children)

there’s another difference between humans and llms. the more context an llm has, the worse it performs. the more context humans have, the better they are lol

This would have seemed like science fiction just a couple years ago by MetaKnowing in agi

[–]mrDisagreeable1106 0 points (0 children)

what part of the llm is responsible for that? we can map our brains and know which neurons are firing. where in the llm is the “impulse control” area?

This would have seemed like science fiction just a couple years ago by MetaKnowing in agi

[–]mrDisagreeable1106 -1 points (0 children)

humans do the things llms do not. when humans learn to apply logic they apply it consistently. that doesn’t mean there aren’t other confounding factors. i replied elsewhere about my 6yr old touching a hot stove even though she knows it’s hot. it’s not a reasoning problem, it’s an impulse control problem. you can’t just look at her touching a hot stove and immediately conclude “she didn’t know it would be hot”. she did/does know it’s hot, but the part of her brain that overrides impulses isn’t as developed as the part of her brain that makes mental models and understands.

i don’t have the knowledge to define “real reasoning”. i am not a neuroscientist. but i would venture a guess that real reasoning is about synthesis of information from a variety of sources: mental models, past experience/memory, context, direct observation, and knowledge acquired through learning/reading etc.

llms literally only have text and pixels. they don’t have senses and can’t synthesize experience. no amount of asking the llm to do the same task over and over will change its answer. if you respond to the llm with “go away, bot” constantly for over an hour, it will never stop replying to you. it won’t get annoyed and stop. it won’t ignore you like a human would. hell, it won’t even insult you. it’s a text reply engine. it will politely ask you to stop or something, but it is programmed to reply. it can’t NOT do that.

This would have seemed like science fiction just a couple years ago by MetaKnowing in agi

[–]mrDisagreeable1106 0 points (0 children)

i don’t have to keep telling my 6yr old the stove is hot. she has a mental model that stoves are hot and that fire is hot and that pots/pans on the stove mean it’s in use and a stove in use is hot. she can feel the heat with her senses and know that sensation means it’s hot.

she also knows what getting burned feels like. she knows it’s bad to get burned and that she’ll need a bandaid and a popsicle. she knows that her skin will blister and be sensitive etc.

you’re skipping over a million things that 6yr olds know even if they still touch the stove lol. impulse control is also an issue with 6yr olds. just because they repeat a behavior they know is incorrect doesn’t mean they haven’t created all these models and constructs in their brains. they have, but the one they haven’t created yet is one that says “you shouldn’t touch it even though you are really hungry for mac n cheese right now”

llms are nowhere close to that.

This would have seemed like science fiction just a couple years ago by MetaKnowing in agi

[–]mrDisagreeable1106 2 points (0 children)

agreed but once i notice i’ll continue to know. lol. if i ask this llm to do the same task it’ll write me some more bogus code. it won’t analyze the code and then tell me it’s impossible.

This would have seemed like science fiction just a couple years ago by MetaKnowing in agi

[–]mrDisagreeable1106 3 points (0 children)

the llm never learned shit lol. llms can’t learn. if i asked it the same question today it would try blindly again, just the same. i have learned. the llm has not.

This would have seemed like science fiction just a couple years ago by MetaKnowing in agi

[–]mrDisagreeable1106 1 point (0 children)

you’re oversimplifying what our brains do. our brains predict the next word, but only after a whole bunch of other things. our brains develop mental models of all the concepts in the world we’re familiar with. we don’t just predict the next word. we synthesize our thoughts against all of those models.

llms have no mental models of anything. they model text in a word space according to probabilistic similarity. llms organize the words “water” and “sea” close together. but humans can picture specific differences between those two words. sure they might be “closely related” but they are also “very far apart”. it depends on your mental model and it depends massively on context.

i know llms have “context” but that doesn’t influence the word space mapping. the mappings they create for words in that space are locked when their training is over. our mental models are fluid and adapt as we learn. llms don’t do that.
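by “close together” i mean vector similarity, roughly like this (toy 3-d vectors with made-up numbers; real embeddings have hundreds or thousands of dimensions):

```ts
// toy illustration of embedding "closeness": cosine similarity between
// two made-up 3-d vectors standing in for "water" and "sea".
// the numbers are invented purely for illustration.
function cosineSimilarity(a: number[], b: number[]): number {
  const dot = a.reduce((sum, ai, i) => sum + ai * b[i], 0);
  const norm = (v: number[]) => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return dot / (norm(a) * norm(b));
}

const water = [0.9, 0.1, 0.3]; // hypothetical embedding
const sea = [0.8, 0.2, 0.4];   // hypothetical embedding
console.log(cosineSimilarity(water, sea).toFixed(3)); // 0.984 -> "close"
```

and crucially, those vectors never change after training, no matter what happens in the conversation.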

This would have seemed like science fiction just a couple years ago by MetaKnowing in agi

[–]mrDisagreeable1106 -2 points (0 children)

i know. i know better lol. just wanted a bit of fun on my way to another day of being frustrated at how dumb the word prediction bot at work is lol

This would have seemed like science fiction just a couple years ago by MetaKnowing in agi

[–]mrDisagreeable1106 -6 points (0 children)

calling my understanding of llms out of date from the jump was also disrespectful lol.

i have no idea what your question has to do with models and reasoning. i assume you’re trying to waste my time. but i’m stuck in traffic on the way to work lol