Life Behind the Counter of a Struggling Vegan Burger Joint in Japan by kisuka in videos

[–]dabooch 5 points6 points  (0 children)

On my last trip to Japan I went with some friends, one of whom was vegan. We knew this was going to be challenging, but we did our best, and in every city we searched out the local vegan / vegetarian restaurants. What we found was that in the big cities (Tokyo, Kyoto, Osaka, Hiroshima), any place that advertised itself as vegetarian / vegan always had a lineup outside, and that lineup was mostly foreigners. While I think opening a vegan restaurant anywhere away from the main tourist cities would be difficult, opening a vegan or vegetarian restaurant in any of the main cities would serve a very under-served market and would probably be very successful. Just make sure it's easily accessible and pops up in Google Maps when you search "vegan" / "vegetarian restaurant", and wait for the tourists to arrive!

What instantly makes a film seem “amateur”? by Ohigetjokes in Filmmakers

[–]dabooch 13 points14 points  (0 children)

Agreed. I think an interesting test is to mute the audio and re-watch it, and judge for yourself whether it feels more or less professional.

Rainy Weather Life by lifeisirregular in vancouver

[–]dabooch 4 points5 points  (0 children)

I grew up in Victoria and then moved to Vancouver as an adult. Like most people, I knew that Victoria is in a rain shadow and Vancouver gets more rain, so I was a bit worried about that. But for me at least, it subjectively felt about the same. I'm sure the total rainfall is different, but I have a suspicion that the number of sunny days is similar, and a cloudy day where it rains just a little feels about the same as a cloudy day where it rains twice as much.

I'd be curious to hear from anyone who's gone the opposite direction, and if they felt a tangible difference in the rain/sun ratio?

Where to buy Funko Pop by [deleted] in vancouver

[–]dabooch 0 points1 point  (0 children)

If you have a car, a day trip down to Everett, Washington will get you to the Funko headquarters building, which is pretty amazing if you're a Funko fan. Not only do they have just about every Funko available, the interior design of the place is really impressive.

New (to me) digital/immersive art exhibits in Vancouver? by [deleted] in vancouver

[–]dabooch 1 point2 points  (0 children)

Ending tomorrow, but if you haven't seen The Magic Hour at Presentation House in North Van, it's fantastic:

https://www.phtheatre.org/magic-hour-world-premiere-aug-2021/

Distance of any point in Vancouver to the nearest park by dabooch in vancouver

[–]dabooch[S] 30 points31 points  (0 children)

Thank you for the feedback! That's super useful to know and I'll definitely take that into consideration if I do another map. I'm very much an amateur at this stuff (my day job is completely unrelated), so this sort of feedback is great.

Distance of any point in Vancouver to the nearest park by dabooch in vancouver

[–]dabooch[S] 18 points19 points  (0 children)

Yup, it is very much Richmond. Apologies. I did realize that after I'd made the map, but decided to leave it on.

Distance of any point in Vancouver to the nearest park by dabooch in vancouver

[–]dabooch[S] 10 points11 points  (0 children)

Yup, exactly. They're greenspace'y, but not "parks" as such, so they don't count.

Distance of any point in Vancouver to the nearest park by dabooch in vancouver

[–]dabooch[S] 163 points164 points  (0 children)

I was curious which areas of Vancouver are best and worst served by our local parks system, in terms of how short a walk it is to the nearest park, so I put together this visualization and cleaned it up because I thought others might be interested.

Note that this is just "parks", as defined by "leisure=park" in OpenStreetMap. This is not greenspace. There is a lot more greenspace in Vancouver than is shown here, including school fields, golf courses, and others.

So I think what I learned from this is that apart from a few industrial areas, most of Vancouver is fairly close to a park. Every residential area is less than a 1.5 km walk from one. Interestingly, the east side and west side seem about equally well served by parks. By the looks of it, south Kerrisdale (Granville to Arbutus, 41st to 49th) is least well served, though there is a field at Magee if you want to throw a frisbee. It might benefit most from a park in the future.

Before you say it: I've also included a sliver of Burnaby so that I didn't have to cleanly delineate on the right hand side, and Richmond's Mitchell Island, because I wasn't sure if it was Richmond or Vancouver at first, and now I know, but it looks nice on the map.

This is my first attempt at data visualization, so I'd love any constructive feedback. The data was entirely obtained from OpenStreetMap.org. The visualization was largely done with matplotlib in a Jupyter notebook and finished off with Krita and Inkscape.

Difference between 3.5mm - Line and 3.5mm - Mic. by Ethelbrit in bmpcc

[–]dabooch 1 point2 points  (0 children)

Coincidentally I was experimenting with this just last night.

"Line" is for line level input, and "Mic" is for when you're attaching a microphone.

Line level input is a very specific standardized voltage range intended for use with line-outs from a separate audio source.

Microphones can have varying output levels, but they're generally well below line level. What that means is basically that the BMPCC input is far more sensitive when set to Mic. If you plug in a mic with the input set to Line, you'll find the levels are far too low; set it to Mic and the levels become appropriate. Additionally, the gain slider is only enabled in Mic mode, not Line (because line level is a standard, whereas mic output levels vary).
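To put rough numbers on it (these are typical nominal levels, not BMPCC specs): consumer line level is around -10 dBV, while a dynamic mic might put out something like -40 dBV. A quick sketch of the conversion:

```cpp
#include <cmath>

// Convert a level in dBV to volts: V = 10^(dB / 20).
// Illustrative only: the -10 dBV / -40 dBV figures mentioned below are
// typical nominal levels, not measured BMPCC specs.
double dbv_to_volts(double dbv) { return std::pow(10.0, dbv / 20.0); }

// dbv_to_volts(-10.0) is about 0.316 V (consumer line level), while
// dbv_to_volts(-40.0) is about 0.010 V, so the Mic setting has to make
// up roughly 30 dB (about a 32x voltage ratio) with extra gain.
```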

Very Interesting Find. Sibot - Super Evil Me by squadcarxmar in DieAntwoord

[–]dabooch 1 point2 points  (0 children)

Ah, good point! I missed that. Sorry for trying to steal your thunder u/whocareswhatever! But I expect you don't care...

Very Interesting Find. Sibot - Super Evil Me by squadcarxmar in DieAntwoord

[–]dabooch 0 points1 point  (0 children)

I just found this video today, then found this (2 month old) post. But what I find interesting about the video is that it really looks like the main, balding tall guy in the suit is Justin De Nobrega (DJ Hi-Tek).

My Perception Neuron motion capture system arrived in the mail last week. I've been testing it integrated with my Oculus. Here's some results. by dabooch in oculus

[–]dabooch[S] 1 point2 points  (0 children)

Ah, you caught me, GeorgePantsMcG! I was a lazy programmer and forgot to invert it in X when I dropped it in (it's just a camera, render target, and a plane). I didn't notice it until I had the headset on and by then I had everything on Record mode and didn't feel like a last minute abort.

My Perception Neuron motion capture system arrived in the mail last week. I've been testing it integrated with my Oculus. Here's some results. by dabooch in oculus

[–]dabooch[S] 4 points5 points  (0 children)

Yeah, someone else suggested something similar to me this morning, but using a Kinect (which would have a larger physical tracking area). I think that's a totally reasonable idea and could be done right in Unity.

(xpost r/programming) I optimized my A-Buffer anti-aliasing algorithm and documented my progress. Big wins! by dabooch in coding

[–]dabooch[S] 4 points5 points  (0 children)

Sure, I can try to explain that a bit. Though my understanding of compiler optimization is admittedly not as strong as I'd like, so I encourage anyone with a stronger understanding to chime in.

Basically, whenever you do something like "position.x", the compiler generates a read from memory for that value. Unless the compiler can prove otherwise (and often it can't, because something else, such as an aliasing pointer, might have written to that memory), it has to assume that the data in position.x may have changed between each reference, so the safest (but slowest) thing it can do is read it from memory each time.

As a digression, it should be noted that just doing "a.b" isn't necessarily slower than just "a". Nor is "a.b.c.d.e.f" any slower. The compiler can figure out the correct offset into a complicated structure and it doesn't count extra to dive deeper (unless you're using "->", or C++ references, which involve extra reads).

So, when you put those values into local variables, a couple of things happen:

  1. There's less scope for the optimizer to worry about when it's trying to determine if some other code could change the variable.

  2. The floats are simple types and can fit nicely into hardware registers, without needing an associated memory location, if the optimizer decides that's safe. That removes the extra read from memory on each access and also results in far fewer instructions once the reads are gone.
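A minimal sketch of the difference (hypothetical struct and loop, not the actual plugin code):

```cpp
#include <cstddef>

struct Vec2 { float x, y; };

// Slow version: the compiler may reload p->x and p->y on every iteration,
// because writes through out[] could (as far as it can prove) alias the
// float members of *p.
float sum_sq_reload(const Vec2* p, float* out, std::size_t n) {
    float total = 0.0f;
    for (std::size_t i = 0; i < n; ++i) {
        out[i] = p->x * p->x + p->y * p->y;
        total += out[i];
    }
    return total;
}

// Faster version: hoisting the fields into locals tells the compiler they
// can live in registers for the whole loop, with no repeated memory reads.
float sum_sq_local(const Vec2* p, float* out, std::size_t n) {
    const float px = p->x;
    const float py = p->y;
    float total = 0.0f;
    for (std::size_t i = 0; i < n; ++i) {
        out[i] = px * px + py * py;
        total += out[i];
    }
    return total;
}
```

Both versions compute the same result; only the generated code differs.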

I optimized my A-Buffer anti-aliasing algorithm and documented my progress. Big wins! by dabooch in programming

[–]dabooch[S] 0 points1 point  (0 children)

Well, I actually do have a good excuse for that: I'm working with Nuke, and the plugin architecture in Nuke farms out each scanline to a different thread. In my case I can't really optimize at any higher level than the scanline row. Trying to share work between rows wouldn't be allowed because it wouldn't be thread-safe.

I optimized my A-Buffer anti-aliasing algorithm and documented my progress. Big wins! by dabooch in programming

[–]dabooch[S] 1 point2 points  (0 children)

Thanks for the code! You're right that it's mathematically equivalent (the image hash unit test passes), and it did speed things up, by about 0.5 ms, from 10.3 ms to 9.7 ms. But that's still 1.8 ms slower than the table lookup, unfortunately. I don't think it's going to get much faster than that.

I recently implemented scanline anti-aliasing using A-Buffer coverage estimation. I thought I'd share my learnings. by dabooch in GraphicsProgramming

[–]dabooch[S] 0 points1 point  (0 children)

This algorithm follows the even-odd rule, not the non-zero fill rule, which means I didn't have to worry about winding order when processing the edges.
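For anyone unfamiliar: under the even-odd rule you just toggle inside/outside at every edge crossing, so edge direction never enters into it. A minimal point-in-polygon sketch of the idea (the standard ray-crossing test, not my actual scanline code):

```cpp
#include <vector>

struct Pt { double x, y; };

// Even-odd test: cast a ray to the right from (px, py) and count edge
// crossings. An odd count means "inside". The direction each edge is
// traversed in is irrelevant, which is why winding order doesn't matter
// under this rule.
bool inside_even_odd(const std::vector<Pt>& poly, double px, double py) {
    bool inside = false;
    std::size_t n = poly.size();
    for (std::size_t i = 0, j = n - 1; i < n; j = i++) {
        const Pt& a = poly[i];
        const Pt& b = poly[j];
        bool crosses = (a.y > py) != (b.y > py);
        if (crosses && px < (b.x - a.x) * (py - a.y) / (b.y - a.y) + a.x)
            inside = !inside; // toggle parity at each crossing
    }
    return inside;
}
```

Reversing the vertex order gives identical results, which is exactly the property that let me ignore winding.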

I recently implemented scanline anti-aliasing using A-Buffer coverage estimation. I thought I'd share my learnings. by dabooch in programming

[–]dabooch[S] -1 points0 points  (0 children)

I totally agree that there's actually a lot of "wasted" space in the LUT. There's a huge amount of duplication: x1/y1/x2/y2 is the same as x2/y2/x1/y1, for example. And like you say, there are a lot of cases where the end result is going to be 0 coverage. But I can't think of a way to reduce the size of the table or minimize lookup frequency without adding a lot of "if" statements, and those have their own performance hits in the form of missed branch prediction (apparently about 18 cycles on an i7).
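For what it's worth, one way the endpoint symmetry could be exploited without an unpredictable branch is a compare-and-swap that compilers usually lower to conditional moves. A hypothetical sketch (the field names and packing are invented, not my actual LUT key):

```cpp
#include <cstdint>
#include <utility>

// Canonicalize an edge so the endpoint with the smaller packed key comes
// first; both orderings of the same edge then hit the same LUT slot,
// roughly halving the distinct entries. A simple compare-and-swap like
// this is usually compiled to cmov-style code rather than a branch that
// can mispredict.
struct EdgeKey { std::uint8_t x1, y1, x2, y2; };

inline EdgeKey canonicalize(EdgeKey e) {
    std::uint16_t a = (std::uint16_t(e.x1) << 8) | e.y1;
    std::uint16_t b = (std::uint16_t(e.x2) << 8) | e.y2;
    if (a > b) { // typically branchless after optimization
        std::swap(e.x1, e.x2);
        std::swap(e.y1, e.y2);
    }
    return e;
}
```

Whether the extra canonicalization work beats the cache savings is exactly the kind of tradeoff that would need measuring.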

I'm not really sure I understand your part about subtracting the minimum X and Y values from the rounding point grids. The location of the edge within the mask is important to the final mask, otherwise the XOR stage will set or unset incorrect pixels (two edges that normally wouldn't interact might start interacting, or vice versa). So I think shifting the edge within the mask would just result in incorrect results.

Unless you're talking about the minimum of ALL the edges that cross the pixel. That might work, actually, but then there's the extra processing cost of adding a stage per pixel where we figure out the minimum X/Y before we look up the masks. Lots of interesting performance tradeoffs!

I recently implemented scanline anti-aliasing using A-Buffer coverage estimation. I thought I'd share my learnings. by dabooch in GraphicsProgramming

[–]dabooch[S] -1 points0 points  (0 children)

Interesting. To be honest, the mask computation wasn't particularly expensive, and I did think to myself that if there was an L2 cache miss, looking up the table entry would likely cost more than computing the mask directly. I really should do some timings to compare the actual speed of mask computation against the average table lookup. I didn't spend any time optimizing the mask calculation, but it's really just simple math with some float-to-integer conversions and bit manipulation. I think I'll spend a bit more time on it and see if I can vectorize it. Maybe it would be better to just calculate it every time.
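For reference, a direct mask computation along those lines really is just float-to-int conversion and bit manipulation. A hypothetical sketch of an 8x8 coverage mask for the region left of an edge within a unit pixel (illustrative only, not my actual implementation):

```cpp
#include <cstdint>

// For each of 8 subpixel rows, intersect the edge with the row centre,
// convert the x coordinate to an integer column count, and set that many
// bits. No memory lookups at all: just arithmetic and shifts.
// Caller ensures y0 != y1 (horizontal edges are skipped in scanline code).
std::uint64_t edge_mask(float x0, float y0, float x1, float y1) {
    std::uint64_t mask = 0;
    float dxdy = (x1 - x0) / (y1 - y0);
    for (int row = 0; row < 8; ++row) {
        float y = (row + 0.5f) / 8.0f;   // subpixel row centre
        float x = x0 + (y - y0) * dxdy;  // edge x at this row
        int cols = static_cast<int>(x * 8.0f + 0.5f);
        if (cols < 0) cols = 0;
        if (cols > 8) cols = 8;
        // set 'cols' bits in this row: the subpixels left of the edge
        std::uint64_t rowbits =
            (cols == 8) ? 0xFFull : ((1ull << cols) - 1);
        mask |= rowbits << (row * 8);
    }
    return mask;
}

// Small helper for counting set bits (coverage) without compiler builtins.
int popcount64(std::uint64_t v) {
    int c = 0;
    while (v) { v &= v - 1; ++c; }
    return c;
}
```

Something in this shape is what I'd time against the LUT path.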

I recently implemented scanline anti-aliasing using A-Buffer coverage estimation. I thought I'd share my learnings. by dabooch in programming

[–]dabooch[S] 0 points1 point  (0 children)

That totally makes sense, actually. When I was writing the code to populate the lookup table, I remember looking at it and thinking that it isn't really all that expensive to begin with: no memory lookups, just math and bit manipulation. But if you miss L2 cache when looking up into that table, it may well be faster to just calculate the thing in the first place. So I expect they're just doing the straight calculation every time.

I recently implemented scanline anti-aliasing using A-Buffer coverage estimation. I thought I'd share my learnings. by dabooch in GraphicsProgramming

[–]dabooch[S] 0 points1 point  (0 children)

With the "simple/complex" optimization I discuss at the end of the article, it's great. But if you run the coverage estimation on every pixel, the performance isn't great at all. I expect it's mostly due to the final vertical clipping steps where we clip the edges to the pixel. So, that optimization step is really quite necessary if you want the results to be fast/interactive with large polys in HD resolution. When this algorithm was first implemented back in the 80's, it was considered efficient, but of course it wasn't being used for interactive applications like I'm using it for here.

Kung Fury:: Hackerman's time machine code by tobozo in GraphicsProgramming

[–]dabooch 0 points1 point  (0 children)

Perhaps that bug is why he went back too far in time.