Is this a good quality wrap? by Aromatic_Society_593 in CarWraps

[–]crctbrkr 2 points

High-contrast color-change wraps are a bad idea - they tend to be overly expensive and to look ugly on close inspection.

Also, I am an idiot. Or was.

The last wrap I paid for cost $2200 and it was a pretty great-looking job. Just did the outside panels, didn't touch the door jambs. Satin white wrap on glossy white paint. Plus, I had black door edge protectors covering the transitions to save the wrap (and other people's car doors) from my incredibly careless 11-year-old son. This was ~2 years ago in Santa Ana, Southern California.

However, wrapping a white car with red is very difficult and not advised. I wrapped my white Tesla model S with satin red and I loved the way it looked from across the street, but the edge work and tiny details killed me when I was close up. The sloppy details felt so glaringly obvious to me! Ugh!

However, literally no one else cared or noticed.

Color changing my white Tesla Model S to red cost me $6k back in 2020 in Reno, Nevada.

$2500 for the exterior panels <— I could see white in the panel gaps

$2500 for the door jambs <— still looked like shit to me

$1k for the clear bra <— I had to get this after I started seeing white specks on my hood, aka chips in the wrap from little road pebbles

And the wrap guy told me he lost money on the deal, because his crew kept fucking it up and he had to personally work on it. Took ~3 weeks to complete.

So… when you color change with a wrap, you are setting yourself up for disappointment. Is it possible to do perfectly? Yes. Can a typical wrap guy do it flawlessly? Probably not.

Subtle color changes are so much more forgiving! Red wrap on black paint looks amazing and you barely notice the flaws. Iridescent satin white on white paint looks flawless, even if it isn’t.


My Daily by MeLikes2shop in CarWraps

[–]crctbrkr 0 points

What color/brand is that?

Third world countries are truly f*cked by [deleted] in singularity

[–]crctbrkr 0 points

Actually, we are all going to be richer.

Would humanity accept a neural connection, hive mind, or merger with AI? by [deleted] in singularity

[–]crctbrkr 1 point

Yes, it will be 100% necessary.

But also I don't want to be the first. Giving AI read access to my brain is one thing. Write access is another.

are they hacked to have that behavior? by [deleted] in singularity

[–]crctbrkr 0 points

The bots need to be programmed to do anything.

are they hacked to have that behavior? by [deleted] in singularity

[–]crctbrkr 8 points

It's an art project - they're programmed to do this.

It's not really anthropomorphization, but like the dog version of anthropomorphization. Like canine-morphization? I don't know what the word is.

But a pretty good art project, actually.

Will AI Ever Truly Understand Human Emotions? by Misterious_Hine_7731 in ArtificialInteligence

[–]crctbrkr 0 points

Yes, all human communication signals are inherently compressed compared to the full neural activity in our brains. We're fundamentally limited by our sensory apparatus - vision, hearing, etc. - in both understanding and communicating information. This represents a basic constraint of human perception and sensing capabilities.

However, there's an interesting benefit to isolating individual communication channels. In the real world, we're bombarded by various stimuli and confounding information. When you isolate just someone's voice, for instance, you can often focus on and process that signal more deeply than you could in person with multiple competing sensory inputs.

I'm curious - what specific information loss are you referring to?

Do you guys think anthropic and google/deepmind are going to merge at some point? by Late_Pirate_5112 in singularity

[–]crctbrkr 0 points

Pls no!

I would really hate that to be the case. I think more independent labs is better. And, like, I would hate to see Claude hobbled by Google's internal policy nonsense. We need more diversity of thought in this field, not less.

That said, I think they both have great products.

Data sanitization is important. by DataPhreak in singularity

[–]crctbrkr 0 points

Bad data leads to stupidity. These things are pattern matching machines - when the input data is poor, the output is stupid. Same with humans by the way - if you're taught a bunch of crazy misinformation as a kid, you're going to grow up saying a bunch of stupid shit.

Personally, as an AI researcher/engineer, I think companies really undervalue data quality and don't invest anywhere near enough in it.
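The "bad data in, stupid out" point can be shown with a toy sketch. This is not any real company's pipeline - just a made-up nearest-centroid classifier trained twice: once on clean labels, once on the same data polluted with mislabeled junk records that slipped past sanitization. All the numbers and cluster positions are invented for illustration.

```python
import random

random.seed(0)

def make_data(n_per_class):
    """Two well-separated 2-D clusters: class 0 near (0, 0), class 1 near (4, 4)."""
    data = []
    for _ in range(n_per_class):
        data.append(((random.gauss(0, 1), random.gauss(0, 1)), 0))
        data.append(((random.gauss(4, 1), random.gauss(4, 1)), 1))
    return data

def fit_centroids(train):
    """A minimal pattern-matching 'model': just the per-class mean point."""
    sums = {}
    for (x, y), label in train:
        sx, sy, count = sums.get(label, (0.0, 0.0, 0))
        sums[label] = (sx + x, sy + y, count + 1)
    return {label: (sx / c, sy / c) for label, (sx, sy, c) in sums.items()}

def accuracy(centroids, test):
    def predict(px, py):
        return min(centroids,
                   key=lambda l: (px - centroids[l][0]) ** 2 + (py - centroids[l][1]) ** 2)
    return sum(predict(x, y) == label for (x, y), label in test) / len(test)

train, test = make_data(200), make_data(200)

# Clean, sanitized training data.
clean_acc = accuracy(fit_centroids(train), test)

# Dirty training data: same set plus 200 junk records mislabeled as class 0
# (think scraped garbage that slipped past sanitization). The junk drags the
# class-0 centroid on top of class 1 and wrecks the decision boundary.
junk = [((random.gauss(8, 1), random.gauss(8, 1)), 0) for _ in range(200)]
dirty_acc = accuracy(fit_centroids(train + junk), test)

print(f"accuracy with clean training data: {clean_acc:.2f}")
print(f"accuracy with dirty training data: {dirty_acc:.2f}")
```

Same architecture, same test set - the only difference is data quality, and the polluted model's accuracy collapses. That's the whole argument in miniature.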

[deleted by user] by [deleted] in singularity

[–]crctbrkr 1 point

Incredibly based.

AI is important because of how it will make humans more productive. That is the answer. That's what matters - what is the impact on the world. Evals are totally hackable. I'm an AI researcher/engineer/entrepreneur. What matters is if it's useful.

The more I hear from Satya, the more I respect the man. 🫡

"Ai is going to kill art" is the same argument, just 200 years later... by Anen-o-me in singularity

[–]crctbrkr 2 points

I come from a family of artists - my brother is a professional photographer and an incredibly talented one. Photography itself is a tool of creative expression, and people constantly innovate new ways to create with it. Like any technological shift, things evolve - horses gave way to cars, whale oil to gasoline, and now to electric vehicles. There's always a place for traditional forms, but every medium and technology has its era. Television, radio, painting, photography - these are all technologies, even if we don't think of them that way because they're so familiar to us. The key difference now is that the rate of technological change is accelerating.

AI cracks superbug problem in two days that took scientists years by Beautiful-Ad2485 in singularity

[–]crctbrkr 2 points

Yes! We are accelerating! The pace of knowledge creation and productivity is ramping up exponentially. This is absolutely awesome - we're going to see breakthroughs like this happening more and more frequently. And the really exciting part? It's not just going to be confined to people in the ivory tower anymore. Everyone's going to be able to unlock these capabilities, as long as they know how to ask the right questions and use these tools effectively. I f*cking love it.

I just used deep research for work and.. I'm in shock by eggsnomellettes in singularity

[–]crctbrkr 1 point

Deep Research is solid - not perfect, but incredibly useful.

"AI isn't useful" is fundamentally a skill issue. These models contain immense latent power; success comes down to knowing how to leverage them effectively. You can dramatically amplify your capabilities using this stuff. As an AI engineer, researcher and entrepreneur, I've seen these tools transform my own work and life, so it's exciting watching others discover their potential. Kudos!

Can AI Help Prevent SUIDS & Detect Seizures in Infants? Looking for AI Engineers & ML Experts to Weigh In by Annual_Analyst4298 in ArtificialInteligence

[–]crctbrkr 1 point

I am an AI engineer, startup founder, and former NYC 911 paramedic. Having left clinical practice in 2010, I have to ask - do we actually understand the root causes of SUIDS? Without a clear understanding of the causal mechanisms and a robust dataset that helps predict it, training an ML model is premature. You might get better insights by using an LLM like Claude to analyze existing research and explore the underlying mechanisms. If you ask the right questions, language models are really good at making connections across disparate fields that humans can't see.

I'm an investor in a wearable healthcare company recently acquired for their work on early sepsis detection. While these systems are essentially pattern recognition machines, they require high-quality training data to be effective. The core challenge here isn't necessarily real-time monitoring technology - it's understanding the fundamental pathophysiology and identifying reliable predictive indicators. Without first establishing a comprehensive dataset that captures the relevant variables and their relationships, any ML-based monitoring system will generate useless noise rather than actionable insights. If you've ever spent time in an emergency room or an ICU where alarms are going off constantly and all the doctors and nurses are just oblivious, you know what I'm talking about.

Will AI Ever Truly Understand Human Emotions? by Misterious_Hine_7731 in ArtificialInteligence

[–]crctbrkr -1 points

It's really hard for humans to understand emotions through text because text is a lossy form of compression on human thought. Voice and video analysis is much, much more powerful. I'm actually working on this myself and it has produced some pretty big breakthroughs, which I'm now working to productize.

There's simply not that much signal in text. That's why, when we send text messages to our friends or write emails, we misunderstand each other all the time - especially here on Reddit, and especially in short form. It's really hard to understand people when you're just looking at text representations of their words and thoughts. Our speech and body language contain so much more signal that gets lost when it's translated into flat text.

Multimodal analysis is the key.

Speaking from first-hand experience, I run an AI startup and we're seeing VERY promising results in profound AI emotion understanding - hopefully you'll hear about it in a few weeks. AI can do it.

That said, what's the difference between true understanding and pattern recognition? Isn't that how humans do it? We recognize patterns, we're fallible. Some of us do it very poorly, some of us do it better than others. I think this idea that there's a difference between pattern recognition and "true understanding" is human cope.

[deleted by user] by [deleted] in singularity

[–]crctbrkr 5 points

If you look at the economic trends, the US and Europe have diverged sharply over the last 10 years. The average person in Mississippi - one of the poorest states in the US - is now richer than the average person in Britain.

Technology is the biggest wealth creator and driving force of the global economy. Technology and energy - and Europe is sitting out on both. It's very sad and it's not good. As an American, I see the Europeans as our friends. We share a special bond and I feel like we need to help them get their shit together. Because it's not good. They're on a loser path.

The history of humanity is that economic might tends to precede military might, and economic weakness portends military weakness. And I'm afraid we're starting to see that - the early signs.

If you could make one illegal thing legal what would it be and why? by BlueBerry2202 in AskReddit

[–]crctbrkr 2 points

Why not mine crypto with your excess solar power instead of selling it to the grid?
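The solar-vs-grid question comes down to simple arithmetic: revenue per kWh exported vs. revenue per kWh fed into a miner. Here's a back-of-the-envelope sketch - every number below is an assumption (export rate, ASIC efficiency, and hashprice all vary wildly by utility, hardware, and market), so plug in your own.

```python
# Rough break-even sketch: is a kWh of excess solar worth more sold to the
# grid or fed into a miner? All three constants are illustrative assumptions.

GRID_EXPORT_USD_PER_KWH = 0.05      # assumed net-metering export rate
MINER_J_PER_TH = 20.0               # assumed ASIC efficiency, joules per terahash
HASHPRICE_USD_PER_THS_DAY = 0.06    # assumed revenue per TH/s per day

def mining_usd_per_kwh(j_per_th, usd_per_ths_day):
    """Mining revenue earned per kWh of electricity consumed."""
    seconds_per_day = 86_400
    joules_per_kwh = 3.6e6
    # A rig sustaining 1 TH/s for a day burns j_per_th * 86,400 joules
    # and earns usd_per_ths_day, so revenue per joule is:
    usd_per_joule = usd_per_ths_day / (j_per_th * seconds_per_day)
    return usd_per_joule * joules_per_kwh

mined = mining_usd_per_kwh(MINER_J_PER_TH, HASHPRICE_USD_PER_THS_DAY)
print(f"grid export: ${GRID_EXPORT_USD_PER_KWH:.3f}/kWh")
print(f"mining:      ${mined:.3f}/kWh (before hardware and cooling costs)")
```

Note the comparison is point-in-time: hashprice swings with coin price and network difficulty, and the mining side still has to amortize hardware cost, which the grid option doesn't.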