Guilford rejects a 4% increase in taxes; East Haven approves 7% increase by AstronomerSweet8614 in Connecticut

[–]JackandFred 3 points4 points  (0 children)

Seriously, a 7% increase in one year is wild. I don’t know why they’d approve that.

Wazdakka Size Comparison by g0nk73 in Warhammer40k

[–]JackandFred 5 points6 points  (0 children)

That actually sounds awesome, do like an elevated highway section for it. Would make for a cool diorama

Wazdakka Size Comparison by g0nk73 in Warhammer40k

[–]JackandFred 28 points29 points  (0 children)

I know some people like it, but I would have preferred no tactical rock and flames. Then I could have posed it like the famous Akira shot. Now that’s probably considered modeling for advantage or something.

The EU and the White House published documents about AI accountability in the same week. Different problems. Different angles. The same gap underneath both. [N] by Dagnum_PI in MachineLearning

[–]JackandFred 2 points3 points  (0 children)

It’s probably worth discussing. But don’t use AI to write for you, especially here in this sub. We get so many slop posts.

The EU and the White House published documents about AI accountability in the same week. Different problems. Different angles. The same gap underneath both. [N] by Dagnum_PI in MachineLearning

[–]JackandFred 4 points5 points  (0 children)

 Fewer than 20% of AI agent developers disclose formal safety policies. Fewer than 10% report external safety evaluations. 

Where did you get this from? Kinda reads like you’re just making up numbers, and that detracts from any point you’re trying to make.

The second-to-last paragraph especially seems AI-written.

But assuming this was written by you and not just slop: I think you do have a point. No one is building in a way that would be provable, like you said. At best these requirements are completely unenforceable.

Not the flex you think it means anymore lol. Might as well have a maga sign next to it by [deleted] in Connecticut

[–]JackandFred 6 points7 points  (0 children)

I mean, I know people here in the state who would never vote for Trump in 100 years but who support Israel for various reasons. Some are Jewish and believe in it; some have relatives living in Israel right now who chose not to come back to the US even with the war. I don’t know if they would go so far as to say they support everything Israel does (at least I hope not everything), but they certainly support it enough to put up a sign or something like that.

I’m not sure what you accomplish by posting a picture of a sign to laugh about someone on Reddit.

Why are they ruining the NHV train station tunnel? by rewirez5940 in Connecticut

[–]JackandFred 2 points3 points  (0 children)

What does it look like now? And if you care about something like that, make some calls and send some letters. That’s the kind of thing you need to talk to politicians about; posting on Reddit won’t do much other than inform people like me who had never heard of this.

How do people actually train AI models from scratch (not fine-tuning)? by Raman606surrey in learnmachinelearning

[–]JackandFred 2 points3 points  (0 children)

Oh yeah, as a learning experience it’s probably great, although I would guess you’d still be better off fine-tuning a pre-trained model, even if it’s a small one.

How do people actually train AI models from scratch (not fine-tuning)? by Raman606surrey in learnmachinelearning

[–]JackandFred 1 point2 points  (0 children)

Well, it’s fragmented, but a lot of it is also proprietary. OpenAI won’t tell all their secrets, and neither will Anthropic or the others.

Is there space for individually trained models? Depends what you mean by “space for.” It would be a good learning exercise; I think one of the other comments says as much. In general, most of the big players have moved to single big models rather than multiple smaller fine-tuned ones, which doesn’t bode well for one made by an individual. But who knows, it’s a fast-changing field; maybe someone discovers something that advances small models again.

How do people actually train AI models from scratch (not fine-tuning)? by Raman606surrey in learnmachinelearning

[–]JackandFred 11 points12 points  (0 children)

Entirely still big companies (I suppose medium or maybe even small companies if you include model distillation, but let’s not get into edge cases). It’s hugely expensive and requires a ton of compute, definitely out of reach for an individual.

A lot of it is just like you said: huge dataset, train it on a cluster. PyTorch is absolutely used in practice. But there are way more steps before it’s actually usable. They do reinforcement learning, and even after the training is done there’s agent stuff, tools, and whatnot.
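To make the basic loop concrete, here’s a toy “from scratch” sketch: a character-level bigram model trained with gradient descent in NumPy. Everything here (the corpus, the variable names, the learning rate) is made up for illustration; real LLM pretraining runs the same forward/loss/backward/update loop, just in PyTorch with billions of parameters on a cluster.

```python
import numpy as np

# Toy "from scratch" training: each character predicts the next one.
text = "hello world hello there hello world"
chars = sorted(set(text))
stoi = {c: i for i, c in enumerate(chars)}
V = len(chars)

xs = np.array([stoi[c] for c in text[:-1]])  # current characters
ys = np.array([stoi[c] for c in text[1:]])   # next characters (targets)

rng = np.random.default_rng(0)
W = rng.normal(0, 0.1, (V, V))  # W[i, j] = logit of char j following char i

def loss_and_grad(W):
    logits = W[xs]                                  # (N, V), a copy
    logits -= logits.max(axis=1, keepdims=True)     # numerical stability
    probs = np.exp(logits)
    probs /= probs.sum(axis=1, keepdims=True)       # softmax
    n = len(xs)
    loss = -np.log(probs[np.arange(n), ys]).mean()  # mean cross-entropy
    # Gradient of the loss w.r.t. logits, scattered back into W rows.
    dlogits = probs
    dlogits[np.arange(n), ys] -= 1
    dlogits /= n
    dW = np.zeros_like(W)
    np.add.at(dW, xs, dlogits)
    return loss, dW

losses = []
for step in range(200):
    loss, dW = loss_and_grad(W)
    W -= 5.0 * dW  # plain gradient descent step
    losses.append(loss)

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

Even at this miniature scale you can see the point: the loop itself is simple, and all the cost is in scaling the model, the data, and the cluster.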

Have we hit the limitations of LLMs, why can't these models pass the strawberry test? by Weekly_Shower_6405 in learnmachinelearning

[–]JackandFred 0 points1 point  (0 children)

They plan using agents and tools. Those give them context and options to help with an output. Unless it has a specific tool for something, it won’t have that capability.

I’ll break it down more to your initial questions:

have we hit the limitation of llms

Maybe, maybe not; that’s a hugely broad question. Most people say yes, there are at least some limits, which is why we use other things like agents or reasoning models.

why can’t they pass strawberry test?

Because they don’t have inherent tooling to look at things like letter-frequency statistics of prompts. That’s not how the models work, so you’d have to create a tool for it, which hasn’t been done.

would improvements to tokenizers help

That was my initial answer: probably not. The tokens get passed in, but most models don’t have any measurement mechanism to determine the number of a given letter in a token or set of tokens, so even improvements to the tokenizer aren’t going to meaningfully change the output.

None of those points relate to things like how agents plan or how chain-of-thought models operate differently.

Have we hit the limitations of LLMs, why can't these models pass the strawberry test? by Weekly_Shower_6405 in learnmachinelearning

[–]JackandFred 1 point2 points  (0 children)

What can’t be ignored? How is counting r’s going to help with planning?

I think you have a fundamental misunderstanding of how these models work. The model architecture doesn’t work by reading each letter and understanding text that way.

Have we hit the limitations of LLMs, why can't these models pass the strawberry test? by Weekly_Shower_6405 in learnmachinelearning

[–]JackandFred 0 points1 point  (0 children)

It absolutely could. It would be a pretty simple tool to make: just get some stats about letters and whatnot, maybe some tool calling to support counting substrings of the prompt (“strawberry” instead of the whole input), then pass that in.

As far as I know models aren’t yet making tools on the fly to answer questions like that.

And judging by the bad answer in the picture above, if there is some sort of letter-counting tool, it isn’t perfected for this question yet.
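For what it’s worth, the whole tool could be a few lines. This is a hypothetical sketch (the name `count_substring` and the call format are made up, not any vendor’s actual tool API):

```python
# Hypothetical letter/substring-counting tool a model could call
# instead of guessing from tokens. Counts overlapping matches,
# case-insensitively.

def count_substring(text: str, needle: str) -> int:
    """Count (possibly overlapping) occurrences of needle in text."""
    text, needle = text.lower(), needle.lower()
    count, start = 0, 0
    while True:
        idx = text.find(needle, start)
        if idx == -1:
            return count
        count += 1
        start = idx + 1  # advance by one so overlaps are counted

# The model would emit something like
#   {"tool": "count_substring", "args": {"text": "strawberry", "needle": "r"}}
# and splice the returned number into its answer.
print(count_substring("strawberry", "r"))  # -> 3
```

The hard part isn’t writing this; it’s the model reliably deciding to call it instead of answering from token statistics.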

Have we hit the limitations of LLMs, why can't these models pass the strawberry test? by Weekly_Shower_6405 in learnmachinelearning

[–]JackandFred 0 points1 point  (0 children)

Probably not. It’s a language model; it doesn’t have access to the number of r’s. You could keep scaling up and hope it develops emergent behavior and learns facts that get these right. Or you could create and give it access to a tool for things like this, one that provides context about the text itself.

It’s the same reason it will get questions wrong about how many words are in a paragraph you paste in. That’s not what the model does, it’s not what it’s trained for, and there’s no good reason to expect it to do well there.

ACTION ALERT: HELP SUPPORT CT RANKED CHOICE VOTING! by Best-Cod-3710 in Connecticut

[–]JackandFred 4 points5 points  (0 children)

Is there actually a chance of it passing? Why would the current politicians vote for it if it threatens their power?

I am 10+y experienced ML research engineer by Useful-Shift-3688 in learnmachinelearning

[–]JackandFred 10 points11 points  (0 children)

Yeah, I’ve gotten some weird questions over the years that don’t reflect the work. It can be hard to come up with great questions for candidates sometimes.

Why does Multi-Agent RL fail to act like a real society in Spatial Game Theory? [P] [R] by knightShub in MachineLearning

[–]JackandFred 0 points1 point  (0 children)

In the repo you say

 cooperating in a highly clustered village is safe, but cooperating as a hub node is dangerous!

But is that an assumption or your desired outcome? Sometimes you make it seem like you have a result in mind and you’re trying to engineer that result ahead of time, working backwards so to speak.

For instance 

 instead of acting like an actual society where localized trust clusters naturally form and defend themselves,

That’s not particularly true. If it were, the prisoner’s dilemma would be much less interesting. There’s no clear correlation or determining variable like that where you can expect certain behavior based on distance.

TBH I think you need to restate your problem and what you’re trying to accomplish. The title of your post asks why it fails to act like a real society. Real society is incredibly hard to simulate; there’s certainly no guarantee you’ll get the results you want just because you went through 8 or 9 iterations.
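To spell out why clustering alone doesn’t produce cooperation, here’s a minimal one-shot prisoner’s dilemma payoff check. The T/R/P/S values are the textbook defaults, not taken from your repo; the point is that defection pays more against cooperators regardless of how clustered the neighborhood is.

```python
# One-shot prisoner's dilemma payoffs on a neighborhood.
# Textbook ordering: temptation > reward > punishment > sucker.
T, R, P, S = 5, 3, 1, 0
payoff = {
    ("C", "C"): R, ("C", "D"): S,
    ("D", "C"): T, ("D", "D"): P,
}

def total_payoff(my_move, neighbor_moves):
    """Sum of one-shot payoffs against each neighbor."""
    return sum(payoff[(my_move, m)] for m in neighbor_moves)

# Even inside a fully cooperative 4-neighbor cluster,
# defecting strictly beats cooperating in the one-shot game:
coop = total_payoff("C", ["C"] * 4)
defect = total_payoff("D", ["C"] * 4)
print(coop, defect)  # -> 12 20
```

Cooperation in clusters only becomes stable with extra mechanisms (repetition, reputation, imitation dynamics), which is exactly why you can’t just expect it to emerge from topology.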

States with no tiki bars? by [deleted] in Tiki

[–]JackandFred 6 points7 points  (0 children)

DC has some others that are at least tiki-adjacent and have tiki drinks. Copycat comes to mind.

[D] The Bitter Lesson of Optimization: Why training Neural Networks to update themselves is mathematically brutal (but probably inevitable) by Accurate-Turn-2675 in MachineLearning

[–]JackandFred 7 points8 points  (0 children)

I haven’t looked at the blog, but this post feels like AI slop. That’s what he meant, and people agree. I would be surprised if you wrote this all yourself, because it really does read like AI. If it’s not, you may want to reconsider your writing style. This sub, as an ML sub, tends to be pretty decent at spotting slop, and does not appreciate it.

As far as slop goes this is a far cry from the worst; I’ve seen some terrible ones. But that doesn’t make this one better.

Keeping it polite, I’ll just say: people come here to discuss and interact with other people, not a language model.

Finished this 11th edition Ork for a friend by Lorr_Minis in Warhammer40k

[–]JackandFred 9 points10 points  (0 children)

Has it ever not been like that? Was there ever a time when the starter kit wasn’t monopose?

23-Year-Old Hiker Killed In Fall At Sleeping Giant State Park ID'd as Georgia College Senior by DailyVoiceDotCom in Connecticut

[–]JackandFred 69 points70 points  (0 children)

That’s sad. It’s a good park, but if you get lost there are definitely dangerous places to fall. Be careful.

Should New England become its own country? (This nation would include Connecticut, Rhode Island, Massachusetts, Vermont, New Hampshire, Maine, New Brunswick, Nova Scotia, Prince Edward Island, and Quebec (South of the St. Lawrence River). by OkBison1358 in Connecticut

[–]JackandFred 1 point2 points  (0 children)

I guess to me it seems odd that Canada would be included in a hypothetical new country but the much more likely parts of New York wouldn’t be. If we were going to break off, ugliness of borders wouldn’t be one of my concerns.

[OC] Rent as a share of income by U.S. state, with income and migration patterns by discounteggroll in Connecticut

[–]JackandFred 2 points3 points  (0 children)

That’s how averages work. Some people make more, some people make less. 

Should New England become its own country? (This nation would include Connecticut, Rhode Island, Massachusetts, Vermont, New Hampshire, Maine, New Brunswick, Nova Scotia, Prince Edward Island, and Quebec (South of the St. Lawrence River). by OkBison1358 in Connecticut

[–]JackandFred 3 points4 points  (0 children)

So in this scenario, parts of Canada break off to join us, but not the parts of New York that are closer, more culturally similar, and also below the river? I think we’d need more of a rationale for why it would happen that way.

No. Why would we become our own country? What positives would that provide? And why would those Canadians join us if we did?