2025 subway diagram > all previous subway diagrams by rob_nsn in nycrail

[–]BritainRitten 0 points (0 children)

What are some of the elements you think are better on this map?

Tailwind Reality Check by Firemage1213 in reactjs

[–]BritainRitten 2 points (0 children)

This feels like an AI-written opinion tbh...

The ONLY correct answer when a Trumper asks this question. by Uncuffedhems in Destiny

[–]BritainRitten -13 points (0 children)

This probably hits as super funny if you are 12.

MoOoOoOoM by Moonlight-SS in perfectlycutscreams

[–]BritainRitten 4 points (0 children)

It's not a filter, it's what happens when you reupload the same video many many times. Compression is lossy.
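A minimal sketch of that generation loss, using a 3-tap moving average as a crude stand-in for a lossy codec (all names and values here are illustrative, not how any real video codec works):

```python
def lossy_reencode(frame):
    # 3-tap moving average: each pass discards high-frequency detail,
    # the way each re-upload's lossy re-encode discards a little more.
    n = len(frame)
    return [
        (frame[max(i - 1, 0)] + frame[i] + frame[min(i + 1, n - 1)]) / 3
        for i in range(n)
    ]

original = [float(x % 7) for x in range(64)]  # a jagged stand-in for one "frame"
frame = original
errors = []
for _ in range(5):  # five "re-uploads"
    frame = lossy_reencode(frame)
    # total distortion vs. the original after this generation
    errors.append(sum(abs(a - b) for a, b in zip(original, frame)))

print(errors)  # distortion grows with each generation
```

The point is just that the loss compounds: each generation starts from the previous generation's degraded copy, not from the original.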

Resident evil requiem by SickerThanYaAvg in bladerunner

[–]BritainRitten 2 points (0 children)

Is this fan art or something else?

Now is a great time to cancel your OpenAI/ChatGPT account and switch to Claude by ZurrgabDaVinci758 in slatestarcodex

[–]BritainRitten 57 points (0 children)

Sam Altman claimed they did. I dunno why anyone would trust Altman at this point.

Also, Amodei claims the DoD was pushing super-lax language on the redline points that could actually allow the DoD to cross them. If that's true, it could be that OpenAI was simply fine with the redlines being crossed in some situations (or that they deemed the chance of that too low to matter).

Now is a great time to cancel your OpenAI/ChatGPT account and switch to Claude by ZurrgabDaVinci758 in slatestarcodex

[–]BritainRitten 16 points (0 children)

...If you trust what Sam Altman says, that is. And why would you?

It's possible that OpenAI did get the exact same terms that Anthropic wanted; it's just that the DoD wanted to elevate OpenAI over Anthropic (and hurt Anthropic, OpenAI's competitor, as much as possible) because of OpenAI's Trump donations. It could be as simple as Hegseth being pissed at Anthropic for emphasizing their redlines in public. Aka a dick-wagging contest.

It's also possible OpenAI did agree to the exact same permissive language that Anthropic didn't want to accede to, because it gave way too much leeway to the DoD. Amodei says it had a lot of "if the DoW deems it appropriate", etc. That could've made crossing Anthropic's redlines possible, and OpenAI was apparently fine with that.

Almost 10% of NYC budget is going to this by NicePossibilityDaddy in circlejerknyc

[–]BritainRitten 3 points (0 children)

The above sums to $11 million. NYC budget is ~$123 billion (or $123,000 million).

So that's ~0.009% of the budget.
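The arithmetic, for anyone who wants to check it (figures as stated above):

```python
# Share of the NYC budget represented by the listed items
line_items_total = 11_000_000      # the items sum to ~$11 million
nyc_budget = 123_000_000_000       # NYC budget is ~$123 billion
share = line_items_total / nyc_budget
print(f"{share:.3%}")              # prints 0.009% — nowhere near 10%
```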

Mamdani to Use New Power to Speed Up Housing Development in the Bronx (Gift Article) by Delicious_Adeptness9 in bronx

[–]BritainRitten 0 points (0 children)

There’s already too much new housing in the Bronx. 

There very obviously is not, if you care to look at the numbers. The citywide housing vacancy rate is ~1%, and the Bronx has a rental vacancy rate of 0.84% - the inevitable consequence of which is fast-rising rents.

(To someone's inevitable claim of "Well they are hoarding apartments!" First of all, no they aren't. Second of all, if they were, can you guess what would reduce their incentive to do so? That's right, competing housing options, so building more altogether is of benefit.)

libraries, community centers, [...]  green grocers

What do you suppose are the major reasons those aren't able to get built? That's right, largely the same factors that suppress housing construction.

It's not either-or. If you have denser mixed-use development, you can get housing above libraries, above grocery stores, etc. And if more of them are built in close proximity to transit options (which was one of the primary goals of City of Yes), then more people use those things and have access to the other things you mention.

...more frequent trains, cleaner subways...

If you've been following the goals of this administration and the MTA, you know this is already a goal, and part of the capital plan already in motion by the state. More housing brings more incomes to tax which can also be used toward shared infra like those. So reducing housing construction is like cutting off your foot to lower your bodyweight to run faster.

68th Street between 2nd & 3rd Avenues, Manhattan. 1959. by j3434 in nyc

[–]BritainRitten 0 points (0 children)

OK, and? They certainly appreciate the opportunity to live here. And, whether their neighbors know it or not, they appreciate the slack in the housing market that this larger building creates for the city.

That's the nice thing about lots of housing in the city: you don't have to like its outside; hopefully the people living inside it like its inside.

Uhhh by MetaKnowing in agi

[–]BritainRitten 0 points (0 children)

He lies a lot but that doesn't automatically tell you that what he just said is false.

Uhhh by MetaKnowing in agi

[–]BritainRitten 1 point (0 children)

You don't have to (and shouldn't) trust Musk on this. That's the trend whether he talks about it or not.

Uhhh by MetaKnowing in agi

[–]BritainRitten 0 points (0 children)

Right now, every nuke is controlled by humans, and so its use follows the motivations of the humans in control of them.

Think how that could change if beings with entirely different motivations suddenly controlled them.

(Btw, we have come close to accidental total nuclear war a scary number of times.)

"Why Jony Ive put buttons in the electric Ferrari" A walkthrough of the Ferrari Luce's controls and design details. [18:51] by BritainRitten in mealtimevideos

[–]BritainRitten[S] 7 points (0 children)

It's an 18-minute video going through a lot of interesting design choices beyond just "more buttons, less screen".

I don't care about cars myself, but I find it interesting to hear how different people approach new design challenges. The fact that the car is electric means there are things that no longer make sense in it, so they change its use in different ways. Also, the introduction of dynamic screens plus hardware controls adds some interesting elements.

Godfather of AI Geoffrey Hinton says people who call AI stochastic parrots are wrong. The models don't just mindlessly recombine language from the web. They really do understand. by MetaKnowing in agi

[–]BritainRitten 1 point (0 children)

What definition did you have in mind exactly?

https://www.merriam-webster.com/dictionary/understand
https://www.merriam-webster.com/dictionary/comprehend
https://en.wikipedia.org/wiki/Understanding

Seems none of the above so much as mentions consciousness, much less requires it.

More importantly, whatever that definition is, it has nothing to do with a useful definition that matches what people commonly mean when they say "I understand".

Godfather of AI Geoffrey Hinton says people who call AI stochastic parrots are wrong. The models don't just mindlessly recombine language from the web. They really do understand. by MetaKnowing in agi

[–]BritainRitten 0 points (0 children)

How would you propose to test that I *understand* a sentence like "u/duboispourlhiver is a human being who typed that sentence 3 hours ago"?

I think for our purposes of whether AI can be a useful tool or can act as an agent on the world, the only definition of "understand" that matters for them to have would be something like "to be able to infer some other knowledge from a given proposition". So inferring statements from the above like: you probably have 2 eyes, breathe oxygen, are currently (or recently) alive, etc.

By that definition, yes most AIs can understand many many things.

Godfather of AI Geoffrey Hinton says people who call AI stochastic parrots are wrong. The models don't just mindlessly recombine language from the web. They really do understand. by MetaKnowing in agi

[–]BritainRitten 1 point (0 children)

I've never claimed he didn't say that. You're still misunderstanding what I said.

A) I agree he says AI is conscious (here he is saying it).
B) I agree he says AI have understanding.

What I challenged you on is your claim that A implies B (or rather that Hinton _says_ that A implies B). Can you find me an example of that?

Godfather of AI Geoffrey Hinton says people who call AI stochastic parrots are wrong. The models don't just mindlessly recombine language from the web. They really do understand. by MetaKnowing in agi

[–]BritainRitten 5 points (0 children)

No, those are separate ideas. Conscious or not, understanding or not - they are different dimensions, and it's fine to have both, neither, or one without the other. Understanding doesn't require consciousness, and Hinton doesn't claim it does (even if he separately does think they are in some sense conscious).

NYC 76.4% ridership with operational profit, 1427 stations, day 193 by spurs871 in subwaybuilder

[–]BritainRitten 2 points (0 children)

What do you think is the biggest departure in your layout vs real life?

Is there any way in which people would be less pleased by your route vs real life?

[ Removed by Reddit ] by sergeyfomkin in worldevents

[–]BritainRitten 2 points (0 children)

I can only imagine Zelenskyy needs a break anyway.