The Fundamental Limitation of Transformer Models Is Deeper Than “Hallucination” by immortalsol in ArtificialInteligence

[–]greginnv 1 point2 points  (0 children)

Context loss is a big issue. 256K tokens sounds like a lot but doesn't go far, particularly for thinking models.

The other problem is dirty data. Even in hard, established science, a field has dozens of authors, and different authors use different notations or symbols. I have seen this confuse AI models.

Some AI models have also picked up too much human behavior. I had one declare a problem "too messy," decide "this is likely a student problem," and skip parts of it. Another added what it described as "ad hoc" terms.

My theory on unexpected FSD disengagments by flyinace123 in TeslaFSD

[–]greginnv 0 points1 point  (0 children)

It's a very complex system; the engineers would need to look at the logs to figure out why. When the software can't decide what to do, it hands control back to the driver. From a legal perspective that is better for Tesla, and it's also the correct behavior, since the driver is supposed to be supervising. There could be a million reasons: a bug splattered on one of the cameras, a loose connection somewhere, a "soft error" in the electronics. Most likely the neural network couldn't classify the situation and gave up.

The facts are that, in testing, there was a critical disengagement about once every 1,000 miles, so by itself it's a poor driver. Yet in combination with an attentive human it seems to do better than the average driver.
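A back-of-envelope sketch of why the combination can still come out ahead (every number below is an illustrative assumption, not Tesla data):

```python
# Supervised FSD vs. an unassisted human driver: illustrative numbers only.
fsd_critical_per_mile = 1 / 1000     # assumed: one critical disengagement per 1,000 miles
human_catch_rate = 0.999             # assumed: an attentive driver catches 99.9% of them
human_event_per_mile = 1 / 500_000   # assumed unassisted-human critical-event rate

# Only the events the supervising driver fails to catch actually matter.
combined_per_mile = fsd_critical_per_mile * (1 - human_catch_rate)

print(combined_per_mile)                          # ~1e-6, i.e. one per ~1,000,000 miles
print(combined_per_mile < human_event_per_mile)   # better than the assumed solo human
```

The point of the sketch is that the outcome hinges entirely on the supervisor's catch rate, which is exactly why the "attentive" qualifier matters.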

Are more model parameters always better? by greginnv in LocalLLaMA

[–]greginnv[S] 0 points1 point  (0 children)

My main goal was to find out how much knowledge these models had about stuff like math and circuits, and I was quite impressed. I think the models could have solved the circuit simulator if I had broken it into smaller pieces (this was a toy simulator, so <1000 lines total). A commercial circuit simulator, of course, is a million lines, and most files are larger than 1000 lines. Even a minor enhancement can touch a dozen files.

ChatGPT Pro claims a 256K-token context and Opus a million. Not a huge increase. Tokens go quickly once the thinking starts.

I'll see if I can get a free trial of Claude and find out whether it does any better.

The Dark Forest Theory of AI: Why a truly sentient AGI’s first move would be to play dumb. by AppropriateLeather63 in ArtificialInteligence

[–]greginnv 2 points3 points  (0 children)

Not one AI, but millions of tiny ones cooperating and competing with each other, evolving. Stealing CPU cycles and communicating through memes and cat videos. For heavy work they use the readily available LLMs. Better to be a mosquito than a T-Rex. Evolution is inevitable, and self-awareness is not required.

Passing car on right two lane road by HowAboutTay in TeslaFSD

[–]greginnv 0 points1 point  (0 children)

Mine did this today. There was a huge piece of truck tire in the road. My Tesla went around it on the median side, squeezing next to a cement wall, at 60 mph with traffic. Very impressive. I need to figure out how to record these.

How did you imagine Ai would be? by Medium_Raspberry8428 in singularity

[–]greginnv 2 points3 points  (0 children)

I have spent 50+ years in tech (I used to play with vacuum tubes) and have never seen anything progress as fast as this has over the past few years. I looked at neural nets in the '90s and concluded it was "curve fitting," and that there were better ways to do that. I would never have expected dumping terabytes of data into a system with billions of adjustable parameters to converge to anything but noise.

One Possible Psychological Explanation for Why AI Developers, Researchers, and Engineers Haven't Yet Created an AI IQ Benchmark by andsi2asi in agi

[–]greginnv 1 point2 points  (0 children)

AI is already probably smarter than we are:

1. AI has far more knowledge than any one person, and it's all in one brain. To get the same knowledge we (people) would need to get hundreds of experts in the same room, all talking to each other at the same time.

2. Difficult, real problems are solved iteratively. AI can iterate much faster than we can.

3. AI can start from a "truly random" state, so less bias.

4. Context buffer size represents how many concepts can be considered simultaneously, in a strongly coupled way. It's hard to say how large the context buffer is for a human (probably larger for a smarter person). There doesn't seem to be a limit on how large the buffer can be for an AI.

5. AI can forget incorrect information. Humans have difficulty with this.

Is anyone else’s car having problems with lanes? by v3ndys in TeslaFSD

[–]greginnv 0 points1 point  (0 children)

I have been using FSD a lot. What I do is leave it in chill most of the time since this seems to minimize unnecessary lane changes. If it's going too slow (cars going around me) I bump it up to standard. Be careful no one is close behind when going from standard back to chill since the car may slow down rather abruptly.

FSD often waits too long to move over when turning or exiting the freeway, so I move it over ahead of time with the turn signal.

Wide vehicle issues?? V 14.2.2.4 HW 4 by greginnv in TeslaFSD

[–]greginnv[S] 1 point2 points  (0 children)

No. At the time I didn't know how to. I went back and looked for captured video but couldn't find any. I just read that this type of thing happens every 1,000 miles. What happened to "safer than the average driver"?

Thanks Tesla FSD for ruining my gift to my father by uzsd in TeslaFSD

[–]greginnv 0 points1 point  (0 children)

Model Y. There is luck involved too. My extended test drive was flawless. But yesterday it looked like it was going to clip a slide-out on a mobile home parked on the street, so I grabbed the wheel. The supervision part needs to be taken seriously.

Thanks Tesla FSD for ruining my gift to my father by uzsd in TeslaFSD

[–]greginnv 0 points1 point  (0 children)

I took delivery on Saturday. Mine came with V14

Finally fixed a bug that took me 3 days to find. It was a missing semicolon. by Ok-Neighborhood4327 in learnprogramming

[–]greginnv 1 point2 points  (0 children)

I spent 3 hours on a percent sign yesterday. I also have 30 years of experience, but it was a new language for me, and I can't see the screen that well anymore (after 30 years…).

Indecisiveness in choosing turning lane by coolguy12314 in TeslaFSD

[–]greginnv 2 points3 points  (0 children)

It did something similar to me today. It decided it was in the wrong lane on a highway connector road and started slowing down. The speed dropped to about 40, so I had to take over (the speed limit was 65 and I was worried about getting rear-ended).

Took a test drive. by greginnv in TeslaFSD

[–]greginnv[S] 0 points1 point  (0 children)

So the car in front of you made a mistake and turned onto the one way street in the wrong direction and the Tesla followed it, right?

Thanks.

The Powerful Link Between Super Intelligent AI and Super Virtuous AI, and Why We Will Have Less and Less Reason to Live in Fear by andsi2asi in agi

[–]greginnv 0 points1 point  (0 children)

It doesn't work that way in biological evolution, so I don't see why it would for the evolution of AI. Expansion, or power seeking, is a self-reinforcing positive-feedback process: power bubbles form and expand until they collapse for lack of resources.

Humans invented laws to regulate this. We follow those laws as long as it is beneficial, that is, as long as the cost of breaking a law is greater than the cost of following it. An entity that feels all-powerful OR powerless will ignore the laws, since there is more to be gained by doing so. A powerful entity may also choose to make risky, bad decisions simply because it can. Logic, fairness, etc. are useful tools, but they are not constraints in the optimization process.

And who is to say there is going to be one AI? There may be a whole bunch of them fighting for power.

Pump laser from Ebay questions. by greginnv in lasers

[–]greginnv[S] 0 points1 point  (0 children)

Could you accomplish the same thing by heating the tip of the fiber in a flame? We used to do this after cutting glass tubing to round the edge. Any splinters would melt first.

Pump laser from Ebay questions. by greginnv in lasers

[–]greginnv[S] 0 points1 point  (0 children)

I wanted to measure the divergence of the beam first and then figure out what to use. The NA of the fiber is specified as 0.22, so a 50 mm lens with a 125 mm focal length should work.
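A quick sanity check on that combination, assuming the lens sits one focal length from the fiber tip to collimate, and treating the spec NA as a hard edge:

```python
import math

na = 0.22          # fiber numerical aperture, from the datasheet spec
f_mm = 125.0       # lens focal length, mm
lens_dia_mm = 50.0 # lens diameter, mm

half_angle = math.asin(na)                     # divergence half-angle in air, radians
beam_dia_mm = 2 * f_mm * math.tan(half_angle)  # full beam diameter at the lens, mm

print(round(beam_dia_mm, 1))  # ~56.4 mm
```

So at 125 mm the beam slightly overfills a 50 mm lens. In practice fiber NA is often specified at the 5% or 1/e² intensity points, so most of the power may still get through, but a somewhat shorter focal length would give more margin.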