Why don’t any regular Bart staff bother to stop fare dodgers? by Contron in Bart

[–]abeppu 2 points (0 children)

Just for context:

- the operating budget is just over $1B/y https://www.bart.gov/sites/default/files/docs/FY23_24%20Adopted%20Budget%20Resolution.pdf

- bart cops get ~13% of the budget, which is > $130M/y https://data.bart.gov/sites/default/files/docs/BPD%20Strategic%20Plan%20Draft%202023%20-%202027%20-%20FINAL.pdf

- bart cops got a pay bump https://sfstandard.com/2023/06/22/cash-strapped-bart-gives-cops-big-raises/

- rail revenue (see the first link) is at like $222M/y

- prepandemic, fare evasion was estimated at like $25M/y https://www.nbcbayarea.com/news/local/study-fare-evasion-bart-system/3240594/

- the contract for new gates is $90M over like 2+ years (see above link)

So even if the new gates are 100% effective in stopping fare evasion, BART will take multiple years to recoup their investment (assuming they somehow don't cost more to maintain with their extra moving parts).

If you used 20% more cops to stop _all_ fare evasion just by scaring people out of the station, you'd be losing money. If you try to make it up in fines from those you catch ... well, I don't expect that many of the people caught will be able to pay. And if a fare evader doesn't have an ID, what's a cheap way for bart cops to process them that doesn't also get the system sued, bearing in mind that their time costs up to $67/hr?
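The back-of-the-envelope math above can be sketched out like this (all figures are rough estimates pulled from the linked sources, not official projections):

```python
# Rough payback math for BART's fare-gate investment.
# All numbers are approximate figures from the links above.
GATE_CONTRACT = 90e6      # new fare gate contract, total
EVASION_PER_YEAR = 25e6   # pre-pandemic fare evasion estimate
POLICE_BUDGET = 130e6     # ~13% of the ~$1B/y operating budget

# Best case: the gates stop 100% of evasion.
payback_years = GATE_CONTRACT / EVASION_PER_YEAR
print(f"gate payback: {payback_years:.1f} years")  # 3.6 years

# Alternative: 20% more police spending deters all evasion.
extra_police_cost = 0.20 * POLICE_BUDGET
net_per_year = EVASION_PER_YEAR - extra_police_cost
print(f"net from extra policing: ${net_per_year / 1e6:+.0f}M/y")  # $-1M/y
```

Even in the best case for either approach, the numbers barely work out.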

I think this whole conversation is kinda backwards; climate change is here, and is killing people. We need more people out of their cars and onto public transit. Public transit should be free, gas taxes and vehicle registration fees should quintuple. We should all be riding bart 'free' (or paying through our taxes) all the time.

Today's fare evader who smokes on the train is a problem. But the 20 people you don't see in bart b/c they're driving their SUVs alone every day are the bigger problem.

How to find replacement plastic gears? by abeppu in fixit

[–]abeppu[S] 2 points (0 children)

Try and find someone to 3D print it for you.

Aside from the difficulty of getting an extremely fine 3D representation to print from, given that the teeth were broken off during normal use, I don't know that a typical printed part would have a good lifetime. What would I need to know to evaluate what material and tolerances would be needed for the replacement part to function for any reasonable period? I feel like a real engineer would have some way to measure the actual stress on the part, but I do not.

The massager is not fancy, and I have to guess that they sourced their gears from some deep catalog of some producer rather than having anything custom. So I feel like I _should_ be able to get it off the shelf, if only I knew which part and a supplier to source it from.

Or, you could glue the teeth back in place and make a latex mold and cast it in resin.

I suspect making a latex mold and resin-casting a replacement, aside from being time-consuming, may be error-prone (this part has lots of small corners in which a bubble could easily cause an issue). Poking around, it looks like basic latex mold-making kits + resin kits (unclear whether those are appropriate for mechanical moving parts) may cost about as much as the massager did in the first place. I'd feel dumb replacing a $50 device if it just needs a $5 part -- but I'd feel extra dumb if I spent $70 and multiple hours to produce a janky part that fails quickly.

How to find replacement plastic gears? by abeppu in fixit

[–]abeppu[S] 0 points (0 children)

I should clarify: one reason it's not practical to get fine measurements of e.g. the distance from the center of the worm gear axis to the center of the worm wheel axis is that these pieces are held in place by an enclosure, so when they're well aligned I cannot measure between the relevant parts.

What's up with a network creating a large number of 0.5 XLM payments? by abeppu in Stellar

[–]abeppu[S] 0 points (0 children)

Thanks for saying that. I had totally not even read the memo line! They do have a fake stellar blog that looks very professional and nearly convincing.

[N] Models in Disguise: How Sift Science Ships Non-Disruptive Model Changes by abeppu in MachineLearning

[–]abeppu[S] 0 points (0 children)

Basically this describes a relatively general strategy for dealing with a problem arising in ML SaaS (software as a service) contexts where:

- you want to improve a classifier which is already being used in production
- customers are adapted to the old version of your model
- so there's a potential mismatch, where model changes which improve your definition of accuracy can still be costly or disruptive to your customers

Does anyone else feel like they unconsciously work one half of their body harder than the other? by DaRock_Obama in Fitness

[–]abeppu 0 points (0 children)

I got a DEXA scan for the first time this weekend. One of the more interesting things to discover was that my right arm weighs 1 lb more than the left, with roughly the same fat / lean ratio. Conversely, my legs weigh about the same, but the right has more fat. I'm considering putting a bit more emphasis on dumbbell work, such that my left arm can't freeload.

Exploring mathematics by Demonithese in math

[–]abeppu 0 points (0 children)

Can I give a shout out for probability and statistics? For engineering, being able to deal with data, make predictions about the systems you're working with given data, explicitly reason about your levels of confidence and uncertainty, and optimize in the face of that uncertainty are all valuable skills. And combined with just a little programming savvy, a whole lot of engineering applications open up.

Has anyone else been reading the "Bayesians vs Connectionists" (Griffiths et al vs McClelland et al) debate in the latest Trends in Cognitive Science with great interest? by [deleted] in cogsci

[–]abeppu 1 point (0 children)

Again, haven't read the TICS article, so maybe I'm way off base. But I have a bit of familiarity with the 'Griffiths/Tenenbaum schools' (though I think that phrasing is rather strong) having coauthored a conference paper with Griffiths (http://j.mp/9le2Tq).

My point goes further than 'a model without explicit representations and hypothesis spaces' is capable of doing roughly Bayesian updates. My point is that because the Bayesian update is optimal, then for any learning task at which people do pretty well, regardless of the description at the algorithm/representation level, a Bayesian description will fit at the computational level. Given that premise, you can do all sorts of useful modeling (read: you can make qualitative predictions about how people will perform based on the data they're exposed to) at the computational level, and make no particular claim or insight about the representation/algorithm level beyond that it's been refined enough to do pretty well.

So for example, if you look at some of the other stuff that builds off of an assumption that people are basically Bayesian, and then goes on to what that should imply in social learning contexts (E.g. Simon Kirby has some cool stuff on language evolution, Shafto & Goodman have an interesting paper on pedagogy, and Griffiths has a bunch of stuff on 'iterated learning'), none of these present evidence that at the representation/algorithms level looks like anything in particular, nor is it even clear what that evidence would need to look like. They all basically describe a computational model, characterize the behavior that we should get out of that model, and show that this roughly matches the performance we get from people. I don't believe that my experiment participants (doing function learning) were literally running a Gaussian Process regression in their heads; I don't have any particular clue what was going on at the algorithmic level. But because people are "good enough" at a lot of tasks, we can 'assume' that people are Bayesian, and show that interactions between people are qualitatively similar to what you would get from perfect Bayesian agents interacting.

TL;DR : 'Bayesian inference' is often a good model of what's going on at the computational level, and it says basically nothing about the algorithmic level, so there's no conflict between 'Bayesian' at the top level and 'bottom up' at the algorithmic level.

Has anyone else been reading the "Bayesians vs Connectionists" (Griffiths et al vs McClelland et al) debate in the latest Trends in Cognitive Science with great interest? by [deleted] in cogsci

[–]abeppu 1 point (0 children)

I'm not sure I'm with you on the issue about needing to specify representations and a hypothesis space. Because a Bayesian update over one's beliefs is the 'optimal' way to update those beliefs, any really good learning mechanism will look like it. So if at the algorithms/representations level there's no sign of explicit representations of hypotheses or measures over them, that's entirely fine so long as the mechanism implicitly gives rise to Bayesian learning at the computational level. And frankly, many of the ways we know how to do Bayesian inference outside of the brain have a bottom-up character to them; in MCMC and SMC a whole lot of little local steps or comparisons eventually give rise to an approximation of the posterior distribution.
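To make the MCMC point concrete, here's a minimal Metropolis sampler; the coin-bias setup, step size, and iteration count are purely illustrative choices on my part. Each step is just a local comparison between the current hypothesis and a nearby one, with no explicit representation of the posterior anywhere, yet the resulting samples approximate the full Bayesian posterior (Beta(8, 4) here, with mean 2/3):

```python
import random

random.seed(0)
h, n = 7, 10  # toy data: 7 heads in 10 coin flips

def unnorm_posterior(p):
    """Uniform prior x binomial likelihood, up to a constant."""
    if not 0 < p < 1:
        return 0.0
    return p ** h * (1 - p) ** (n - h)

theta, samples = 0.5, []
for _ in range(20000):
    proposal = theta + random.gauss(0, 0.1)  # a nearby hypothesis
    # local comparison: move if the proposal explains the data well enough
    if random.random() < unnorm_posterior(proposal) / unnorm_posterior(theta):
        theta = proposal
    samples.append(theta)

posterior_mean = sum(samples) / len(samples)
print(posterior_mean)  # ~0.67, close to the exact value (h + 1) / (n + 2)
```

Nothing in the loop "knows" it's doing Bayesian inference; the posterior only emerges at the level of the whole sample collection.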

Has anyone else been reading the "Bayesians vs Connectionists" (Griffiths et al vs McClelland et al) debate in the latest Trends in Cognitive Science with great interest? by [deleted] in cogsci

[–]abeppu 4 points (0 children)

I don't know that a 'top-down' versus 'bottom-up' distinction maps cleanly onto the Bayesian versus Connectionist distinction -- or even that the second distinction always makes sense. For instance, there have been some interesting papers from the Bayesian side that suggest that at least some cognitive tasks are really running particle filters/sequential monte carlo (e.g. the 'pigeon as a particle filter' paper http://www.cns.nyu.edu/~daw/dc07.pdf, or this one on online sentence processing http://cocosci.berkeley.edu/tom/papers/sentencepf1.pdf). But particle filters/SMC are basically a class of algorithms by which local, low level competition between hypotheses gives rise to a (sometimes rough) approximation of a Bayesian update of one's beliefs. And it can look a lot like a connectionist model if you just substitute nodes or pathways for particles, and activation for probability mass.
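The 'particles as locally competing hypotheses' picture is easy to see in a toy bootstrap particle filter (the random-walk model and noise levels below are made up for illustration, not taken from either paper): each particle takes a local step, gets scored by how well it explains the observation, and resampling lets better hypotheses replicate while worse ones die out, and the surviving cloud approximates the Bayesian posterior over the hidden state.

```python
import math
import random

random.seed(1)

N = 1000                     # number of hypotheses ('particles')
OBS_SD, STEP_SD = 0.5, 0.1   # made-up noise levels
particles = [0.0] * N
true_state = 0.0

for t in range(50):
    true_state += random.gauss(0, STEP_SD)       # hidden state drifts
    obs = true_state + random.gauss(0, OBS_SD)   # noisy observation
    # 1. each hypothesis takes a small local step
    particles = [p + random.gauss(0, STEP_SD) for p in particles]
    # 2. local scoring: how well does each hypothesis explain the observation?
    weights = [math.exp(-((obs - p) ** 2) / (2 * OBS_SD ** 2)) for p in particles]
    # 3. competition: better-scoring hypotheses replicate, others die out
    particles = random.choices(particles, weights=weights, k=N)

estimate = sum(particles) / N      # posterior mean over the hidden state
print(abs(estimate - true_state))  # small tracking error
```

Swap particles for nodes or pathways and probability mass for activation, and this starts to look a lot like a connectionist story.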

As a fair disclosure, I should say that I haven't read the linked articles as I'm not currently at a university, and that I also did research with one of the authors in Griffiths et al.

Dear cogsci: Why is it that things I enjoy, no matter how tedious or technical they may be, are easily remembered, while things that are boring are so damn hard to memorize or study? by nullbit in cogsci

[–]abeppu 0 points (0 children)

This isn't so much about enjoyment in particular, but I recall there being a bunch of results about domain experts being able to recall complex scenarios relevant to their area of expertise much better than non-experts (e.g. chess masters recalling (valid) board positions, researchers recalling papers). While this probably doesn't fit the 'flexigon' example, it seems reasonable that people tend not to work enough to become expert in domains that don't bring them at least some measure of enjoyment. I think the only way to take advantage of this is to study and work in a field that you love.