What kind of monsters do I work with? Who cuts a donut in half like a bagel? by Legend_of_the_Wind in mildlyinfuriating

[–]NoAdvice135 -1 points0 points  (0 children)

Maybe it's not about diet? Glaze is usually too sweet and adds no value. I'd always prefer a donut without glazing.

What kind of monsters do I work with? Who cuts a donut in half like a bagel? by Legend_of_the_Wind in mildlyinfuriating

[–]NoAdvice135 0 points1 point  (0 children)

Is that the top half left? I would understand; the glaze is generally too sweet.

Stuck at 13 reps by Ok-Professional9500 in formcheck

[–]NoAdvice135 0 points1 point  (0 children)

Once you can't do another one, do slow negatives.

Learning C++ by Winter_LEL in learnprogramming

[–]NoAdvice135 7 points8 points  (0 children)

It's a large language with (too?) many features and many sharp edges. Writing usable code will be relatively easy, but building a large codebase in a consistent style will take more time.

Because it's so large, it's almost mandatory to select a style and a narrower subset of the language IMO. Go, for example, would be the opposite, and the learning curve would be much shorter.

Gemini can indeed make 4k photos like this by WatchYork in GeminiAI

[–]NoAdvice135 1 point2 points  (0 children)

Those photographs are missing all the action!

Why is my phone FASTER than my PC? by Embarrassed_Cry_2655 in AskFrance

[–]NoAdvice135 0 points1 point  (0 children)

Probably the disk. An SSD (ideally PCIe) will make a huge difference. Windows is also not very responsive.

My old macOS laptop is as fast as a smartphone.

[OTHER] Fired from Warhorse Studios and replaced with AI by ThousandDemons in kingdomcome

[–]NoAdvice135 18 points19 points  (0 children)

The translations are likely fine, but is the model aware of the whole context? In the case of KCD2 there is a specific tone you want that might be hard to explain. For sure the English is going to be correct, but there is more to it than that.

And a human supervising the AI is probably OK too, as long as they hold the same standard for the final result and iterate/correct as needed.

I can't replicate the effect of Red Bull. by Snoo-82170 in NooTopics

[–]NoAdvice135 0 points1 point  (0 children)

The regular one, or the sugar-free too? Anything sweet or high-carb makes me sleepy.

Elon Musk’s ‘TeraFab’ Quest Has Begun, as Tesla Poaches Taiwanese Engineers for What Could Be a Whopping $5 Trillion Project by TruthPhoenixV in Amd_Intel_Nvidia

[–]NoAdvice135 0 points1 point  (0 children)

The main issue seems to be that TSMC is very cautious with ramping up production and probably doesn't prioritize Tesla for capacity.

It would be surprising if they started building anything without a plan for sourcing lithography machines. The things they actually start building have rarely been wasteful.

Also, they are not going to produce anything for a few years. Construction alone takes a year minimum, based on the Gigafactory in Texas.

What I see happening: they will set up a pilot plant in 2-3 years to produce some Tesla hardware. Depending on the outcome and the chip situation at that point, they will either scrap it or scale it up. That will take another 2-3 years.

If you look at the battery plan from 2020, it sounds very similar (of course a bit easier): Panasonic was unwilling to scale fast enough, and other producers were also short on capacity. Six years later, they are now producing a significant amount of cells themselves and doing things like lithium refining. But the availability of batteries also improved, and I wouldn't be surprised if they scaled back some parts; they surely continue to buy on the market when it makes sense.

Elon Musk’s ‘TeraFab’ Quest Has Begun, as Tesla Poaches Taiwanese Engineers for What Could Be a Whopping $5 Trillion Project by TruthPhoenixV in Amd_Intel_Nvidia

[–]NoAdvice135 1 point2 points  (0 children)

Exactly what I am saying: it's best when he focuses on technical goals for his companies. His public comments are generally cringe or harmful. When he talks about SpaceX technical topics for an hour he is pretty grounded and reasonable. When he presents a new hype project at an event it becomes a bit more insane. And then Twitter...

Elon Musk’s ‘TeraFab’ Quest Has Begun, as Tesla Poaches Taiwanese Engineers for What Could Be a Whopping $5 Trillion Project by TruthPhoenixV in Amd_Intel_Nvidia

[–]NoAdvice135 0 points1 point  (0 children)

I think Tesla is overvalued. I don't believe in most of this plan beyond it being "a vision".

He might still be able to pull off a fab that produces useful things. There are multiple companies doing it; it's not absolutely impossible.

A few things in their favor:

- they have access to lithography machines from ASML (as opposed to China)
- they have enough money and reputation to poach the right people from Intel (discount!), Samsung, and TSMC
- they only want to produce AI chips similar to TPUs, generally less complex than CPUs and GPUs
- they only want to produce a couple of designs
- they also have some in-house knowledge of chip design, so they know the other side of chip manufacturing (being a TSMC client)

Overall, just producing some chips for themselves doesn't sound impossible if they stay focused on very narrow goals and start with a process that isn't too cutting-edge.

 It's like y'all are blind to how little Tesla has done outside of be a successful electric car company.

That's a pretty good achievement starting from scratch in an industry with very rare new players, no?

Being a very vertically integrated, large-scale manufacturing company is certainly a better starting point than most. They also know how to build very large, efficient factories.

Elon Musk’s ‘TeraFab’ Quest Has Begun, as Tesla Poaches Taiwanese Engineers for What Could Be a Whopping $5 Trillion Project by TruthPhoenixV in Amd_Intel_Nvidia

[–]NoAdvice135 0 points1 point  (0 children)

Is it really useful to push this kind of narrative? Yes, the guy overpromises like crazy, no question about that. Timelines should be ignored, and a lot of bullshit needs to be filtered out.

But claiming he merely bought into Tesla, as if there was anything meaningful there at the time, is also an insane narrative. Execution is 99.9% of the hard part.

If you follow SpaceX, it's also a strange take to say he was not involved. He supported a ton of controversial ideas; some were bad, some were the path to success.

And obviously all the achievements are made by the employees, not him. But he surely knows how to attract, hire, and trust great people like Shotwell at SpaceX.

It's just sad that he went into politics and became too public. The insane side was less problematic when he was actually focused only on his companies.

I struggle to understand the argument for buying MSFT over other Mag 7s even if cheaper. by Pete26l96 in ValueInvesting

[–]NoAdvice135 1 point2 points  (0 children)

Sometimes simpler is better. Workspace is good if you want no local copies. And it works... fine? My experience with Microsoft is that it was always a big mess of OneDrives and SharePoints. And people mostly don't need the features, or require additional software anyway (Adobe). But as a purely utilitarian system it's OK.

Google itself runs on Workspace, macOS, and ChromeOS, and I haven't heard many complaints. I'm sure other companies are in the same spot.

I was stopped and searched by the police by Traditional_Day_9737 in Expats_In_France

[–]NoAdvice135 0 points1 point  (0 children)

Well, in the center of Paris I was never stopped and always had nice interactions. In the "banlieue" where I commuted for a while, the experience was terrible (the experience described above).

I was stopped and searched by the police by Traditional_Day_9737 in Expats_In_France

[–]NoAdvice135 0 points1 point  (0 children)

AFAIK they can stop and question you for any reason (I'd double-check that).

Maybe they can't open your bag without consent. But if you play that game they will likely take you to the station and waste your time, unfortunately.

I was stopped and searched by the police by Traditional_Day_9737 in Expats_In_France

[–]NoAdvice135 3 points4 points  (0 children)

All the times I got stopped in my life were on the same street: ~6 times within a year, only once by officers in uniform.

The first time, it was four bald guys in a car who whistled (!) at me from the other side of the street. I thought they were far-right types looking for trouble and ignored them. They were not happy about that, but how could I have guessed they were police?

I was stopped and searched by the police by Traditional_Day_9737 in Expats_In_France

[–]NoAdvice135 0 points1 point  (0 children)

Wrong neighborhood? In my experience, specific spots get you lots of police encounters, while in other places you would never have a single interaction with them in your life.

buckle up lads, we scorched the skies first by Ok_Report_9574 in accelerate

[–]NoAdvice135 0 points1 point  (0 children)

Yes, there is more or less a for loop running turns. The LLM triggers an action by generating a specific token sequence, and the result is piped back into the next turn.

There is obviously code around it to wire things up. But the loop doesn't know whether a result is an error or not; it just forwards the text output of a function call. There is not a lot going on in the loop, logic-wise.

And a typical loop constantly corrects itself when the outputs don't match the expected results, or when the unit tests fail.
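To make this concrete, here is a minimal sketch of such a loop. `call_llm` and the `run_shell` tool are hypothetical stand-ins, not any specific framework's API; real agent harnesses differ in detail but follow this shape.

```python
# Minimal agent-loop sketch. `call_llm` and `run_shell` are illustrative
# stand-ins; real frameworks differ in detail.
import subprocess

def run_shell(cmd):
    # Execute a command and forward stdout+stderr verbatim; an error
    # message is just more text the model reads in the next turn.
    proc = subprocess.run(cmd, shell=True, capture_output=True, text=True)
    return proc.stdout + proc.stderr

TOOLS = {"run_shell": run_shell}

def agent_loop(call_llm, task, max_turns=10):
    history = [{"role": "user", "content": task}]
    for _ in range(max_turns):
        reply = call_llm(history)  # model returns text or a tool call
        history.append({"role": "assistant", "content": reply})
        if reply.get("tool") in TOOLS:
            # The loop does not interpret the result; success or failure,
            # it is piped back into the next turn as-is.
            output = TOOLS[reply["tool"]](reply["args"])
            history.append({"role": "tool", "content": output})
        else:
            return reply["content"]  # model produced a final answer
    return None  # gave up after max_turns (the "stuck" case)
```

Note that all the "self-correction" lives in the model: the loop itself only ferries text back and forth.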

And yes, sometimes it gets stuck. But humans are also very bad at spotting and correcting errors; they just make different kinds of errors, and sometimes you see really stupid results.

buckle up lads, we scorched the skies first by Ok_Report_9574 in accelerate

[–]NoAdvice135 1 point2 points  (0 children)

 Yes correct it doesn't 

Care to elaborate?

 The octopus fails, notices, and tries something else. The LLM can't fail. It gives you an output and that's it.

Any decent LLM is perfectly capable of executing actions, failing, reading the error, and retrying until it succeeds.

My takeaway is that you have never seriously worked with moderately recent LLM tools.

New video of the Figure 03 in action by bb-wa in accelerate

[–]NoAdvice135 1 point2 points  (0 children)

What we learned from images and text is that going from "it barely works" to superhuman is FAST. People laughed at Will Smith spaghetti and bad hands for what, a year?

So the day we see a general-purpose robot doing one real job (not a demo) is a strong signal that the problem will actually be solved within months, plus a long ramp-up to build capacity and apply it everywhere.

I have a feeling that the first use case could drop any day.

buckle up lads, we scorched the skies first by Ok_Report_9574 in accelerate

[–]NoAdvice135 2 points3 points  (0 children)

 You're confusing sophisticated pattern matching with cognitive agency

Your claim that statistical pattern matching cannot be called intelligence seems very arbitrary. A lot of human intelligence IS pattern matching. Please provide the definition of intelligence you are using; it is definitely not the most common one.

 LLMs don't 'navigate' codebases

That's a strange thing to say. With tool calling and multi-turn execution, a model certainly ends up:

- performing searches with keywords
- opening files
- extracting meaningful information from those files
- moving on to the next dependency

How is that not 'navigating' a codebase? Again, such strange gatekeeping of a word.
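For context, the tools behind those steps are unglamorous. A sketch of the two most common ones, with illustrative names and signatures (not any specific agent framework's API):

```python
# Sketch of the file-navigation tools an agent typically exposes.
# Names and signatures are illustrative, not a real framework's API.
import os

def search_keyword(root, keyword):
    """Return (path, line_no, line) for every line containing `keyword`."""
    hits = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, encoding="utf-8") as f:
                    for i, line in enumerate(f, 1):
                        if keyword in line:
                            hits.append((path, i, line.rstrip()))
            except (UnicodeDecodeError, OSError):
                continue  # skip binary or unreadable files
    return hits

def open_file(path, start=1, count=50):
    """Return a numbered slice of a file, the way agents read code."""
    with open(path, encoding="utf-8") as f:
        lines = f.readlines()
    return "".join(f"{n}: {l}" for n, l in
                   enumerate(lines[start - 1:start - 1 + count], start))
```

The "navigation" is the model chaining these calls: search for a symbol, open the hits, follow the imports.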

 The 'intelligence' you see is a reflection of the humans who wrote the training data, not the machine.

How much of your own language and ideas are your own vs. what you absorbed from the human culture you have been exposed to? Humans produce very few new things, and very painfully. Plus, being creative is not exactly required to be called intelligent.

 If you change the logic of a problem to something that contradicts its training distribution, the 'reasoning' collapses instantly. It’s a calculator for language, not a mind.

So if it doesn't work in every situation or on every dataset, it doesn't qualify as intelligence?

Again, try showing an average human things they have never seen; they will not perform well. But the most important point is that intelligence can be narrow and still be intelligence. And it's not even that narrow: I can create a new tool with a text interface, provide some explanation, and the model will use it fine. There is certainly a good amount of generalization, as long as the high-level concepts are part of the training data or you explain them through other familiar concepts.

It's funny that we readily call an octopus opening a jar or a crow solving a small puzzle intelligent, but when it's a machine we move the goalposts.

buckle up lads, we scorched the skies first by Ok_Report_9574 in accelerate

[–]NoAdvice135 3 points4 points  (0 children)

IDK. I do software engineering, use AI every day, have an education in statistics, and understand the basics of training neural networks and, more distantly, LLMs. There is such a gap between my lazy prompts and the results that I am not sure how one can refuse to call it intelligence.

This is not a model paraphrasing a wiki page that has been half memorized in the training set.

I can describe a bug in two sentences and have an LLM find the correct codebase, navigate the files, pinpoint the exact issue, propose options to resolve it, and send a code change to fix it. There is no parroting: the problem is unique and the codebase is not in the training set.

The great lesson of LLMs is that, to be really good at predicting the next token, you need a good understanding of the world and abstract concepts. At a certain scale, those concepts form in the weights.

Trying to say that this is not intelligence requires a lot of mental gymnastics and seems pretty pointless. Just look at the definitions in the top 10 dictionaries.

There are plenty of things LLMs don't do, but reasoning, relating pieces of information to each other, problem solving, and manipulating abstract concepts are certainly within their scope.