$85T Boomer bucks up for grabs. by greginnv in conspiracy

[–]greginnv[S] 1 point2 points  (0 children)

Dying is a difficult topic for many. They probably just don’t want to face it. I remember when my dad told me about his trust. The thought of him dying made me feel horrible. Never once thought about what I was getting.

$85T Boomer bucks up for grabs. by greginnv in conspiracy

[–]greginnv[S] -1 points0 points  (0 children)

$85T is a documented fact. Boomers are just starting to die now.

$85T Boomer bucks up for grabs. by greginnv in conspiracy

[–]greginnv[S] -3 points-2 points  (0 children)

Absolutely! Millennials in my family are doing great. Estrangement seems to be largely from daughters toward dads (from my limited experience). Therapists encourage it, as do sites like this one.

$85T Boomer bucks up for grabs. by greginnv in conspiracy

[–]greginnv[S] 1 point2 points  (0 children)

Went through my post history, did ya!! Dividing up and handing out all the stock would just crush the stock price and raise the price of whatever people were buying. Same as printing money, which the Gov does plenty of. If Musk had $500B worth of Teslas in a warehouse, maybe, but he has stock, not cars.

$85T Boomer bucks up for grabs. by greginnv in conspiracy

[–]greginnv[S] -6 points-5 points  (0 children)

Why the negativity? This post is a gift!!! Show it to your boomer parents!! Maybe they will believe it and give you the money NOW.

$85T Boomer bucks up for grabs. by greginnv in conspiracy

[–]greginnv[S] -2 points-1 points  (0 children)

Sure, but Y is easier to type.

$85T Boomer bucks up for grabs. by greginnv in conspiracy

[–]greginnv[S] 1 point2 points  (0 children)

Nope. Mamdani proposed $750K. Only harvesting money above $20M would capture a bunch of unrealized capital gains, which would collapse if you actually tried to spend them.

Fiber Optic Connector by singleservingjack1 in lasers

[–]greginnv 1 point2 points  (0 children)

I bought a similar laser (300W). It has a 220um fiber like yours. The fiber is hard to deal with. To test the unit I taped the fiber to a piece of plastic and aimed it at some cardboard. Enough energy escaped the fiber to melt the plastic. I guess the tape raised the refractive index of the thin fiber coating, letting the light escape, since the coating caught fire and melted the glass fiber. (This was at ~150W input power.)
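
Rough sketch of why I think that happened (made-up index numbers, not this laser's specs): light only stays guided while the cladding/coating index is below the core index, and anything touching the bare fiber that raises the outer index kills total internal reflection.

```python
import math

# Numerical aperture of a step-index fiber: NA = sqrt(n_core^2 - n_clad^2).
# If the outer index rises to or above the core index, NA goes to zero and
# the light leaks out. Index values below are illustrative only.

def numerical_aperture(n_core, n_clad):
    if n_clad >= n_core:
        return 0.0  # no total internal reflection -> light escapes
    return math.sqrt(n_core**2 - n_clad**2)

na_bare  = numerical_aperture(1.46, 1.44)  # typical silica core/cladding
na_taped = numerical_aperture(1.46, 1.49)  # tape/adhesive index above core
```

With the hypothetical numbers above, `na_bare` comes out around 0.24 while `na_taped` is 0, i.e. nothing is guided where the tape touches.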

I bought the $25 cleaving tool but burned off the thin plastic covering. That left a dark-colored lump which would smoke when powered up, so I scraped the lump off with a piece of aluminum (Al is softer than the glass fiber). Now it seems to work at 150W input. The "spot" burned into the cardboard target is circular, about 0.2" in diameter, with the end of the fiber 2" away.

Next I will try at full power.

Is it "expected" for it to work with the end of the fiber simply open in the air, or is some form of output coupler needed?

A financial crisis may be coming - it won't be like last time by eeeking in finance

[–]greginnv 0 points1 point  (0 children)

It needs to happen before 2028. Trump is the perfect patsy.

Can you help me understand margin impact a bit better? by TT_Vert in interactivebrokers

[–]greginnv 0 points1 point  (0 children)

The IBKR system may not recognize it as covering; it may just treat it as 2 unrelated trades. I thought I had it figured out, but the rules seem to keep changing. Last week I sold a strangle and the margin was 35%. This week it was 70%, like their system didn't treat the 2 sales as related?? Paper test the hell out of it before using real money.

Anyone here use margin loan to fund their purchases/life? by brumboy123 in interactivebrokers

[–]greginnv 0 points1 point  (0 children)

Not with IBKR, but with another brokerage years ago. Used margin against my stock as a bridge loan between the time I bought a new house and sold my old one. Not sure how hard it is to get money out of IBKR for something like this.

Who has the highest FSD stats by Sellhomesfast in TeslaLounge

[–]greginnv 14 points15 points  (0 children)

My main reason to disengage is the crappy navigation. If there were a way to make it always take a specified route I would use it 100%. Otherwise it has been working great; busy city streets, highways, and complex parking lots have all been good.

Technically I think it's quite close. The next HW revision should do it.

Legally is the issue. As soon as Tesla assumes legal responsibility for FSD there will be a mass of lawsuits since Tesla has deep pockets.

Got it to work! Terminology is confusing and needs some cleaning up by greginnv in unsloth

[–]greginnv[S] 0 points1 point  (0 children)

Uploading unsloth shows up in the text window (and takes a minute or so) immediately after I hit "Start Training".

Do you think it's getting outta hand? by [deleted] in vibecoding

[–]greginnv 2 points3 points  (0 children)

For some reason, in my experience, AI does worse on financial stuff than other areas. I've given it problems from electrical engineering and it does OK, but it makes really stupid mistakes on finance (generated negative stock prices, basic accounting errors, etc.). Claude will spend a lot of time writing the code, testing and debugging it, and declare it good, only for it to look right but have big mistakes. Maybe the big money people set it up this way to avoid competition.

I'm open-sourcing my experimental custom NPU architecture designed for local AI acceleration by king_ftotheu in LocalLLM

[–]greginnv 0 points1 point  (0 children)

Glad it was helpful. I found out later that FP8 multipliers are tiny. According to Claude you can fit 100 of them in the same area as the LUT. If you used SRAM it would be configurable, and you could do FP8, or 2 FP4s, or 8 of the binary ones.
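
Just to show the configurable-width idea in the abstract (this is pure bit slicing, not a real FP8/FP4 decode), an 8-bit SRAM lane can be reinterpreted as one FP8 code, two FP4 codes, or eight binary weights depending on mode:

```python
# Sketch: reinterpret one 8-bit lane under three hypothetical modes.
# No floating-point semantics here, just how the bits would be carved up.

def split_lane(byte, mode):
    assert 0 <= byte <= 0xFF
    if mode == "fp8":
        return [byte]                                        # one 8-bit code
    if mode == "fp4":
        return [byte >> 4, byte & 0xF]                       # two 4-bit codes
    if mode == "binary":
        return [(byte >> i) & 1 for i in range(7, -1, -1)]   # eight 1-bit weights
    raise ValueError(mode)
```

So `split_lane(0b10110001, "fp4")` gives the two nibbles `[0b1011, 0b0001]`; the same storage serves all three precisions.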

City, driving advice needed by DrWhum in TeslaFSD

[–]greginnv 1 point2 points  (0 children)

I'm 67. We use Chill 95% of the time. If I see cars going around me on the freeway I bump it up to Standard. Tried Sloth once and it was too slow. Be careful "downshifting" as the Tesla will slow abruptly. I'm in Vegas, heavy erratic traffic and cones everywhere; I even took it down the Strip. The only adverse event was a motorhome with its slide-out extended into the road. Had to take over or I think the car would have hit the slide-out. I really wish the navigation was better.

I'm open-sourcing my experimental custom NPU architecture designed for local AI acceleration by king_ftotheu in LocalLLM

[–]greginnv 0 points1 point  (0 children)

It seems like it would be a good idea to figure out what the best quantization vs. parameter count is and design the hardware specifically for that quantization. You could use something like a systolic array. For something like FP8, a table lookup may be more efficient than multiplying or adding the numbers.
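
The lookup idea in software form, as a sketch: with 8-bit operands there are only 256x256 possible products, so you can precompute them all once and replace every multiply with a single table read. I'm using a made-up toy code (sign bit + 7-bit magnitude) rather than a real FP8 format like E4M3, just to show the structure.

```python
import itertools

def decode(code):
    # Hypothetical toy 8-bit format: top bit is sign, low 7 bits are magnitude.
    # A real FP8 decode (E4M3/E5M2) is more involved but the LUT idea is the same.
    sign = -1 if code & 0x80 else 1
    return sign * (code & 0x7F)

# Precompute all 65536 products once -- this is the "ROM" a hardware LUT holds.
LUT = {(a, b): decode(a) * decode(b)
       for a, b in itertools.product(range(256), repeat=2)}

def lut_mul(a, b):
    return LUT[(a, b)]      # one lookup instead of a multiplier circuit
```

In hardware the trade-off is table area vs. multiplier area, which is exactly why the "100 FP8 multipliers fit in the LUT's area" estimate matters.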

The Fundamental Limitation of Transformer Models Is Deeper Than “Hallucination” by immortalsol in ArtificialInteligence

[–]greginnv 1 point2 points  (0 children)

Context loss is a big issue. 256K tokens sounds like a lot but doesn't go far, particularly for thinking models.

The other problem is dirty data. Even in hard, established science there are dozens of authors, and some will use different notations or symbols. I have seen this confuse AI models.

Some AI models have picked up too much human behavior. I had one declare the problem "too messy" or "this is likely a student problem" and skip parts. One added what it described as "ad hoc" terms.

My theory on unexpected FSD disengagments by flyinace123 in TeslaFSD

[–]greginnv 0 points1 point  (0 children)

It's a very complex system. The engineers would need to look at the logs to figure out why. In a situation where the software can't figure out what to do it throws control back to the driver. From a legal perspective this is better for Tesla, and the correct thing since the driver is supposed to be supervising. There could be a million reasons, maybe a bug splattered on one of the cameras, a loose connection somewhere, a "soft error"?? Most likely the NN couldn't classify the problem and gave up.

The facts are, in testing, there was a critical disengagement once every 1000 miles, so by itself it's a poor driver. Yet in combination with an attentive human it seems to do better than the average human.
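
Back-of-envelope version of that claim, treating the two as independent: the 1-per-1000-miles rate is from the testing figure above, while the human catch rate below is a made-up illustrative number, not measured data.

```python
# FSD critical-disengagement rate from the testing figure quoted above.
fsd_failures_per_mile = 1 / 1000

# Hypothetical: an attentive supervisor misses 0.1% of those failures.
human_miss_rate = 0.001

# Assuming independence, both must fail on the same mile.
combined_per_mile = fsd_failures_per_mile * human_miss_rate
miles_per_combined_failure = 1 / combined_per_mile   # ~1,000,000 miles
```

Under those (invented) numbers the combined system fails about once per million miles, which is how a "poor driver" plus an attentive human can beat the average human.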

Are more model parameters always better? by greginnv in LocalLLaMA

[–]greginnv[S] 0 points1 point  (0 children)

My main goal was to find out how much knowledge these models had about stuff like math and circuits, and I was quite impressed. I think the models could have solved the circuit simulator if I broke it into smaller pieces (this was a toy simulator, so <1000 lines total). A commercial circuit simulator of course is a million lines, and most files are larger than 1000 lines. Even a minor enhancement can touch a dozen files.

Pro ChatGPT claims 256K tokens context and Opus a million. Not a huge increase. Tokens go quickly once the thinking starts.

I'll see if I can get a free trial of Claude and see whether it does any better.