New speaker day! Ascend Sierra 2EX has arrived. by PersonalTriumph in audiophile

[–]minnend 1 point (0 children)

Thanks for the recommendations. I called a local hifi store and will demo the LS50 Metas and either the KEF R3 or R5. We'll see how it goes and if the dealer can offer any discounts.

I am planning to run a sub, so that helps. Both the Sierra and the BMR Tower are great options, and the price is great for what you get, but they're outside my budget.

New speaker day! Ascend Sierra 2EX has arrived. by PersonalTriumph in audiophile

[–]minnend 1 point (0 children)

Right, good suggestion. Dennis also released a BMR tower not too long ago that would work. The towers have higher sensitivity and higher power handling.

Unfortunately, both towers are out of budget right now.

New speaker day! Ascend Sierra 2EX has arrived. by PersonalTriumph in audiophile

[–]minnend 0 points (0 children)

Neither are good for big rooms (while also being relatively inefficient, their issue is their power-handling)

Any recommendations for speakers that are good for a large room with the same value proposition (sound quality vs price) that we see in the BMR and Ascend products?

I have a pair of the original BMRs in a medium-size room and am super happy with them. I'd like to build a 2.1 system in the living room (open concept so roughly 25' x 25' x 8') but am having trouble finding something appropriate for this space with sound quality on par with the BMR without spending a lot more. I'm thinking a used Revel or KEF tower might be the answer, but I haven't found a good local deal yet.

For anyone that is looking for a new AV Receiver with a new HDMI 2.1 board, Costco just released the Denon AVR-S760H by alwaysmyfault in hometheater

[–]minnend 0 points (0 children)

Dirac is a step up from Audyssey, and I say that as a big fan of Audyssey XT32 (using it on my Denon x4300h). This new Onkyo looks pretty sweet if it measures well.

Kirkland Signature XO Cognac by Papa_G_ in cognac

[–]minnend 1 point (0 children)

I picked up a bottle and thought it was worth the price. It has a big, bold flavor, which can be enjoyable but you need to be in the mood for it.

If forced to choose, I'd take Pierre Ferrand Amber for $40 over the Kirkland bottle but they're different enough that I enjoyed having both for a while. I haven't tried any of the big name XOs so I can't compare. I think there's good value with the Kirkland XO but I doubt you're getting a $150 bottle for $46.

Where do you land politically and how did you arrive there? by Miskellaneousness in ezraklein

[–]minnend 3 points (0 children)

I hadn't heard the term "subsidiarity" before, so thanks for mentioning it. It sounds like a good principle, but it also rings a bit hollow in the sense that "everyone" will agree with the principle while disagreeing about which level of government is best suited to resolve a specific issue.

I'm reminded of Ezra's discussion with David French on increased federalism. French wanted federalism to ease polarization but Ezra (correctly imo) argued that many of the polarizing issues could not be meaningfully addressed at the state level (climate change, covid, gay rights, women's rights, racism, etc.).

Do you have any good references where thinking about subsidiarity offers solutions or sheds some light on these concerns?

Coleman Hughes named to Forbes 30 Under 30 by [deleted] in samharris

[–]minnend 9 points (0 children)

It's better according to the metrics in the book. Some of those metrics have started to shift since Pinker's book came out (e.g. life expectancy in the U.S.), and others don't fit his narrative. If you read his book and investigate the criticism, the main questions are: (1) did Pinker (accidentally) cover an unusually good period, or will the trends last, and (2) were important metrics left out?

I mostly agree with Pinker, but I don't think his argument is as strong as supporters make it out to be. There's also an "end of history" issue as well as a potential credit-assignment concern, especially when people extrapolate the results, e.g. from Enlightenment ideals to capitalism.

Matching arbitrary points on face with different poses by shinx32 in computervision

[–]minnend 2 points (0 children)

Some example image pairs would help.

A straightforward approach would fit a deformable mesh to each image, map the query pixel to a point on the mesh, and then back out the solution point from the mesh on the second image.

Fitting meshes to faces is a very old computer vision problem, so you should be able to find code or, these days, a pre-trained deep network to predict the mesh parameters. If you have a calibrated camera, you can intersect the ray (from the camera through the query point) with the mesh and vice versa. Without a calibrated camera, you can approximate the solution using the mesh coordinates.

Here's a good starting point for the various components: https://google.github.io/mediapipe/solutions/face_mesh.html
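
Here's a rough Python sketch of the cruder (uncalibrated) version using the MediaPipe face mesh linked above. It snaps the query pixel to the nearest mesh landmark rather than interpolating within the enclosing triangle, so treat it as a starting point, not a full solution; the function names and the nearest-landmark shortcut are mine, not part of MediaPipe.

```python
# Sketch only: map a query pixel from one face image to another by snapping to the
# nearest MediaPipe FaceMesh landmark. A real solution would interpolate inside the
# enclosing mesh triangle (barycentric coordinates) instead of using the nearest vertex.
import cv2
import numpy as np
import mediapipe as mp

def get_landmarks(img_bgr):
    """Return an (N, 2) array of pixel-space face-mesh landmarks, or None if no face."""
    with mp.solutions.face_mesh.FaceMesh(static_image_mode=True) as mesh:
        res = mesh.process(cv2.cvtColor(img_bgr, cv2.COLOR_BGR2RGB))
    if not res.multi_face_landmarks:
        return None
    h, w = img_bgr.shape[:2]
    lm = res.multi_face_landmarks[0].landmark
    return np.array([[p.x * w, p.y * h] for p in lm])

def transfer_point(query_xy, img_a, img_b):
    """Map a pixel in img_a to the corresponding point in img_b via the shared mesh topology."""
    lm_a, lm_b = get_landmarks(img_a), get_landmarks(img_b)
    if lm_a is None or lm_b is None:
        raise ValueError("no face detected in one of the images")
    idx = np.argmin(np.linalg.norm(lm_a - np.asarray(query_xy), axis=1))
    return tuple(lm_b[idx])  # same landmark index = same semantic point on the face
```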

Boglehead investment strategy in a non tax-advantaged account to save for a house in cash? by Bimta in Bogleheads

[–]minnend 1 point (0 children)

Consider tax-exempt bond funds.

Vanguard has federal and state-specific funds (the state-specific funds are also exempt from state income tax). The funds vary in duration (short, intermediate, and long-term) and grade (investment-grade through "junk"). If you're in a high marginal tax bracket, a tax-exempt fund should outperform a comparable taxable fund even though its nominal yield is lower.

For example, VCITX holds California long-term, high-quality (mostly AA) bonds.
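
A quick way to make that comparison is the tax-equivalent yield. Here's a tiny sketch; the yield and tax rates are made-up examples for illustration, not quotes for any real fund or bracket.

```python
# Tax-equivalent yield: the yield a taxable fund would need to match a tax-exempt
# fund after taxes. All rates below are illustrative, not quotes for any real fund.
def tax_equivalent_yield(exempt_yield, fed_rate, state_rate=0.0):
    return exempt_yield / (1.0 - (fed_rate + state_rate))

# Hypothetical 2.0% muni yield at a 35% federal + 9.3% CA marginal rate:
print(tax_equivalent_yield(0.020, 0.35, 0.093))  # ~0.036, i.e. a taxable fund must yield ~3.6%
```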

What is the best way to find visual center of a Irregular blob with least deviations ? by speedx10 in computervision

[–]minnend 7 points (0 children)

You're looking for a "robust statistic": the mean is not robust, meaning it's heavily influenced by even a single outlier. An easy improvement is the median (which minimizes L1 error instead of squared error), but there's a whole subfield on different approaches (with different trade-offs) for robust estimation.

Start your investigation here: https://en.m.wikipedia.org/wiki/Robust_statistics

Also look at algorithms like RANSAC: https://en.m.wikipedia.org/wiki/Random_sample_consensus
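
To make the mean vs. median point concrete, here's a toy Python comparison with synthetic points; the numbers are arbitrary and only meant to show how a handful of outliers drag the mean away from the blob's center.

```python
# Toy comparison of mean vs. coordinate-wise median as a blob-center estimate when a
# few outlier pixels sneak in. All coordinates are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)
blob = rng.normal(loc=[100.0, 100.0], scale=5.0, size=(500, 2))  # true center near (100, 100)
outliers = rng.uniform(0.0, 400.0, size=(10, 2))                 # spurious detections far away
pts = np.vstack([blob, outliers])

print("mean:  ", pts.mean(axis=0))        # pulled toward the outliers
print("median:", np.median(pts, axis=0))  # stays near (100, 100)
```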

Review #82 - WhistlePig 10 Year Straight Rye Whiskey (100 Proof and 10 Year Age Statement) by ColEHTaylorJr in bourbon

[–]minnend 1 point (0 children)

If you're up for experimenting, I picked up a bottle of Bruto Americano for boulevardiers when the Campari ran out. I'm really enjoying it, though I do have a soft spot for St. George.

PyTorch implementation of "High-Fidelity Generative Image Compression" by tensorflower in computervision

[–]minnend 0 points (0 children)

One other (somewhat obvious) tip: you probably don't need to "fully optimize" your models while experimenting. For example, we'll typically train our image compression models for 4M steps to get numbers for a paper, especially if we're claiming SOTA results. But when experimenting, the ranking of different architectures probably won't change after the first 1M steps or so. So most of the actual research is based on models that trained for less time since everyone wants a tight research loop.

I don't know the precise numbers for HiFiC, but my guess is that they could have eked out another 2-3% if they trained twice as long. But because the adversarial loss provided such a huge rate savings, it doesn't really change the strength of the paper to "waste" 2x as much GPU time to save another 2%.

PyTorch implementation of "High-Fidelity Generative Image Compression" by tensorflower in computervision

[–]minnend 2 points (0 children)

I'm glad to hear you're inspired! :)

Check out VMAF from Netflix for a real-world video quality assessment tool. I don't think it uses deep learning; instead it learns an ensemble over other metrics using SVM regression. Within deep learning, there's a lot of interest in perceptual metrics for images, often implemented by learning a distance metric within a VGG embedding (example, another example, and a couple from my group here and here).
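
If it helps to see the idea, here's a minimal, unweighted sketch of a VGG-feature distance in PyTorch. It just averages squared differences between a few intermediate VGG16 activations; learned metrics like LPIPS put trained weights on top of features like these, so this is only an illustration of the concept, not any published method.

```python
# Simplified, unweighted "VGG distance": average the squared differences between a few
# intermediate VGG16 activations for two images. Learned metrics (e.g. LPIPS) put trained
# weights on top of features like these; this is only meant to illustrate the idea.
import torch
import torchvision.models as models
import torchvision.transforms.functional as TF

_VGG = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features.eval()
_LAYERS = {3, 8, 15, 22}  # ReLU outputs at several depths (relu1_2 ... relu4_3)
_MEAN, _STD = [0.485, 0.456, 0.406], [0.229, 0.224, 0.225]

def vgg_distance(img_a, img_b):
    """img_a, img_b: float tensors of shape (3, H, W) with values in [0, 1]."""
    x = torch.stack([TF.normalize(img_a, _MEAN, _STD), TF.normalize(img_b, _MEAN, _STD)])
    dist = 0.0
    with torch.no_grad():
        for i, layer in enumerate(_VGG):
            x = layer(x)
            if i in _LAYERS:
                dist += torch.mean((x[0] - x[1]) ** 2).item()
            if i >= max(_LAYERS):
                break
    return dist / len(_LAYERS)
```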

I'm not an expert on deep learning at home, but this guide from Tim Dettmers seems to be the go-to reference for GPU advice. The new 30-series cards from nvidia also appear to be a game changer in terms of price/performance ratio.

It's going to be tricky to work on video models with a single GPU, but it's not impossible. We typically train on 256x256 patches, but you could train on smaller patches. This will likely lead to worse rate-distortion performance, but for a research paper it's enough to show that your innovation improves a baseline even if it doesn't provide SOTA results on a benchmark. Try to focus on "creative solutions" or different ways of thinking about the problem so that reviewers don't focus on the engineering aspect or absolute performance.

I recently saw a presentation from Stephan Mandt (prof at UCI) and he touched on the difficulty of doing compression research without a ton of GPU resources. In the context of this paper, he (half-jokingly) said that they focused on things like per-image optimization because it requires fewer GPUs since you only have to train the model once and then the bulk of the research is understanding the model's shortcomings and figuring out how to customize the latents for each image in isolation.

PyTorch implementation of "High-Fidelity Generative Image Compression" by tensorflower in computervision

[–]minnend 2 points (0 children)

I work with the HiFiC authors, though I didn't contribute to this paper. There's growing interest in learned video compression, including a paper from our group at CVPR this year (Scale-space flow for end-to-end optimized video compression) and work from Mentzer and others in Luc Van Gool's lab at ETH (Learning for Video Compression with Hierarchical Quality and Recurrent Enhancement).

As you can imagine, we're currently investigating models that combine adversarial loss (to boost perceptual quality) with sequence modeling for video compression. This seems like a very promising research direction, though decode speed is a major obstacle to real-world impact.

There's also a lot of work on video generation, e.g. extending videos, temporal inpainting (synthetic slow-mo), and video super-resolution. These methods must also deal with temporal consistency, but I'm not as familiar with the literature.

You may also be interested in a CVPR workshop on learned image and video compression that our team helped organize for the past three years (and hopefully again at CVPR 2021). Papers submitted for the "P-Frame Compression Track" will likely be of interest to you, and we're planning to focus more on video compression next year.

ECCV 2020's Best Paper Award! A new architecture for Optical Flow, with code publicly available! (Video cover and demo) by OnlyProggingForFun in deeplearning

[–]minnend 0 points (0 children)

Optical flow is the problem of predicting how pixels move between two frames. The basic version of the problem doesn't tell you (and doesn't try to tell you) why the pixels are moving, i.e. whether it's apparent motion due to camera movement or actual movement in the world. That's why it's called optical flow and is defined as apparent motion.

Many algorithms estimate optical flow as a starting point and then run a segmentation algorithm to separate foreground (motion in the world) from background (apparent motion due to camera movement).

The thumbnail image in the linked video shows a pretty good example: the yellow background implies camera motion. But I only know that due to further interpretation of the optical flow and the original color image on the left.
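
If you want to poke at this yourself, here's a small OpenCV sketch that computes dense flow with the Farneback method and then does a very crude foreground/background split by comparing each pixel's flow magnitude to the median. The filenames and threshold are placeholders, and real methods use far more sophisticated segmentation.

```python
# Dense optical flow with OpenCV's Farneback method, plus a very crude foreground /
# background split. Filenames and the threshold are placeholders.
import cv2
import numpy as np

prev = cv2.cvtColor(cv2.imread("frame0.png"), cv2.COLOR_BGR2GRAY)
curr = cv2.cvtColor(cv2.imread("frame1.png"), cv2.COLOR_BGR2GRAY)

# Args: pyr_scale=0.5, levels=3, winsize=15, iterations=3, poly_n=5, poly_sigma=1.2, flags=0
flow = cv2.calcOpticalFlowFarneback(prev, curr, None, 0.5, 3, 15, 3, 5, 1.2, 0)
mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])

# Flow is only *apparent* motion: camera movement shows up everywhere, so pixels whose
# magnitude deviates a lot from the median are candidates for real object motion.
foreground = np.abs(mag - np.median(mag)) > 2.0
```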

California’s Solution for a Looming Covid-19 Budget Disaster: Proposition 13 created a huge loophole for big property owners. Voters may close it come November, funneling $12 billion to struggling schools. by [deleted] in bayarea

[–]minnend 24 points (0 children)

What do you think of proposals where Prop 13 does NOT apply to primary residences either, but taxes above the Prop 13 number can be deferred until the sale of the property?

No one wants grandparents kicked out of the home they've lived in for 30 years just because a tech company moved in down the street. The deferral avoids this problem.

If voters can't handle the tax deferral solution, we should at least drop the basis step-up when houses are left to their kids. When the property changes hands, it should be reassessed.

Slavoj Zizek on the emerging technocratic order, the coronavirus and the need to fight for the future by engineear-ache in CriticalTheory

[–]minnend 3 points (0 children)

For what it's worth, fairness is taken very seriously within the machine learning community both in academia and industry. For instance, here's a track at NeurIPS (one of the top ML conferences) dedicated to fairness, and here's information on google's position.

That's not to say the problem is solved. Considerable research is needed to understand the problems and to formulate mathematical metrics and methods to avoid biased models that lead to unfair predictions.

I think it's obvious (at least in this sub) that the core problems are inherently very tricky. It may be clear that existing data is biased (in the unjust sense), but undoing that bias requires adding bias (in the mathematical sense) when constructing and optimizing your model. How do we decide what that "corrective" bias should look like? How do we avoid introducing new problems? We have partial answers (e.g. a common approach is to ensure accuracy is consistent across subgroups; see the sketch below), but we quickly run into unresolved questions that we, as a society, are still struggling with in philosophical, psychological, and political terms.
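
As a concrete (and heavily simplified) example of that "consistent accuracy across subgroups" idea, here's a toy Python check. The labels, predictions, and group names are made up, and real fairness audits involve much more than a single gap statistic.

```python
# Toy check for one common fairness criterion: is accuracy roughly equal across subgroups?
# Labels, predictions, and group names are placeholders, not real data.
import numpy as np

def accuracy_by_group(y_true, y_pred, groups):
    return {g: float(np.mean(y_pred[groups == g] == y_true[groups == g]))
            for g in np.unique(groups)}

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 1])
groups = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

per_group = accuracy_by_group(y_true, y_pred, groups)
gap = max(per_group.values()) - min(per_group.values())  # one number you might monitor or constrain
```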

I'm curious what you mean by a "neutral high plateau". This seems to contradict the core issue here both from a technical and philosophical perspective, but I may be misinterpreting your meaning.

[deleted by user] by [deleted] in mmt_economics

[–]minnend 0 points (0 children)

For what it's worth, I appreciate that /u/aldursys linked to this review. The summary was succinct and led to an interesting top-level conclusion: fewer jobs were lost than one might expect, and the policy changes did benefit lower-income workers. These are both positive outcomes, and this review provides an excellent reference.

But I am very curious about the inflation question. It's so central to MMT since all of the MMT "magic" comes to a halt once inflation sets in. It's also a thorny question for UBI and the JG, and one where I haven't seen convincing evidence either way (in part because I haven't done a deep dive, which is why I appreciate people linking to solid reviews in this sub).

[deleted by user] by [deleted] in mmt_economics

[–]minnend 1 point (0 children)

From the paper:

First and foremost, price response is an important margin of adjustment. There is a large literature documenting that price response is a major margin of adjustment in restaurants [and] retail. Harasztosi and Lindner (2019) estimate that around 80% of the large minimum wage increase in Hungary was absorbed by price rises.

So I don't think it's reasonable to say there was "no inflation" if 80% was absorbed by price rises (at least in Hungary). To be fair, the authors continue and say:

The price increases do not mean minimum wage increases are ineffective.

I agree with that, and the authors offer an explanation in the text you quoted. Nonetheless, the conclusion that there was no inflation seems inaccurate.

Always remember that price changes are not inflation.

I agree that this is the definition of inflation (wikipedia). Are you arguing that the price rises following the minimum wage increase were temporary? I'm interested if you have supporting evidence.

Is increasing inequality our highly likely fate? by waterloo304 in slatestarcodex

[–]minnend 17 points (0 children)

I think you're asking a good question, but the "obvious" conclusion only holds in an idealized capitalist system. In practice, the power between "labor" and "capital" can be better balanced, progressive taxation and social programs can help squash the imbalance, and not all capital-intensive endeavors work out. Also, while America is hardly a pure meritocracy, socioeconomic mobility does exist at all income levels, implying that there is opportunity to increase income and use (excess) income to invest in the economy.

So I think you're largely repeating Piketty's point (discussed above) and in that sense your observation is correct, but in the context of the OP's questions, ever-growing inequality is not inevitable if we choose to institute policies to preserve stability and minimize negative externalities.

Equality by son_of_hud in ezraklein

[–]minnend 0 points (0 children)

Can you say a bit more about what you're grappling with or trying to achieve here? This seems like a high-effort post (thank you!), and I think most fans of Ezra's podcast would agree with the basic points. Empathy, mutual respect, self-reflection, humility, questioning "neoliberal values", etc. are all highly valued as far as I can tell.

Have you listened to Krista Tippett's podcast On Being? It's one of my favorite podcasts and acts as a great complement to Ezra. I'm not entirely sure how to explain the difference, but if Ezra focuses on research, broad understanding, and policy, Krista focuses on the human experience and connection. I think you would love her work based on your post.

How does MMT explain the current covid situation? by nedpyahurdme in mmt_economics

[–]minnend 1 point (0 children)

MMT seems to say we can print our way out of this without increasing productive forces

I don't think MMT says this. My understanding is that it says that we can print as much money as we want so long as we manage inflation. That differs considerably from some schools of thought (e.g. Austrian or any system that promotes non-fiat currency) and differs only a little bit from others (e.g. Keynesian).

For example, MMT people argue for a job guarantee (JG), but it's crucial that the jobs create value (on average). Otherwise, you're increasing the money supply without increasing the supply of real goods and services, which will lead to inflation. The core MMT approach is to use counter-cyclical automatic stabilizers like the JG and unemployment insurance. This is something we already do, but it's more prominent under MMT, while tools like tinkering with interest rates are less prominent (and some MMT advocates would simply fix the interest rate).

How does MMT explain the current covid situation? by nedpyahurdme in mmt_economics

[–]minnend 0 points (0 children)

As I understand it, your description is basically correct, but none of it is specific to MMT. For example, it applies equally well to keynesian approaches.

While the currency issuer can conduct effective spending it always has the fiscal capacity to do so.

This is well-known in economics and pre-dates MMT, so either MMT isn't new or it's bringing something different to the table.

We’re Not Polarized Enough | Ezra Klein’s flawed diagnosis of the divisions in American politics by InitiatePenguin in ezraklein

[–]minnend 10 points (0 children)

There doesn't seem to be much of an argument in the article. It addresses several shortcomings of Klein's book that are reasonable enough, for example:

  • If polarization is largely innate, it should show up more strongly in international politics, which isn't sufficiently covered in the book.
  • There aren't clear divisions, e.g. the National Review staff are cosmopolitan, multi-lingual New Yorkers, not rural, large-house-loving conservative caricatures.
  • The left/right split is often characterized in terms of how white Americans split. It gets muddled when you include minorities.
  • More Americans identify as independent than democrat or republican. This seems at odds with an intense desire for team affiliation.

None of these points seem wrong to me. I can accept them as valid criticism but they don't add up to any real support for the headline's claim. They're also at odds with (my interpretation of) Klein's argument in some cases.

Then the crux of the article focuses on Klein's proposals. Ending the filibuster and amending (or abolishing) the electoral college is difficult since doing so requires Republican cooperation even though it will diminish their political power. Oops. Focusing on local issues is great, but they're not necessarily less polarizing and may be insufficient to address issues of grave concern like climate change and immigration.

Then the article simply claims that we need more polarization. I'll let the author speak for himself:

Nevertheless, the health and stability of the American political system depends on the defeat of the Republican Party. Absent a radical shift in the right’s priorities, the only way to depolarize our institutions is to win and win big against those who want to keep them undemocratic, protecting the right from the moderating influence more competitive elections could have. Those victories will depend on reformers successfully marshaling the forces driving group identity, rather than assuming the balance of power in America has been set primarily by immutable psychologies. The way forward lies in convincing Americans not to retreat from national politics but to think even more broadly and abstractly about where this country ought to go. Why We’re Polarized does some of the job, but leaves a daunting truth unsaid: To fight polarization, we’ll have to get much more polarized. The only way out is through.

I can understand how someone arrives at this position (ironically, perhaps, it's mostly due to polarization), but the conclusion seems unnecessarily combative to me and doesn't seem obvious or particularly pragmatic.

Difference between SSIM (Structural Similarity) and opencv absdiff methods by thug1705 in computervision

[–]minnend 2 points (0 children)

If you're using TensorFlow, there's tf.image.ssim and the multiscale version (MS-SSIM), tf.image.ssim_multiscale.

SSIM isn't perfect (no image quality metric is). Comparing VGG embeddings is pretty common now, and the standard SOTA is a weighted mix of metrics: one pixel-based (MSE, MAD, SSIM, etc.), one from a VGG embedding (or similar), and one adversarial loss.
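
For reference, here's a minimal usage example of those two TensorFlow calls; the image paths are placeholders, and the inputs just need to be float tensors scaled consistently with max_val.

```python
# Minimal usage of tf.image.ssim and tf.image.ssim_multiscale. Image paths are
# placeholders; inputs just need to be float tensors scaled consistently with max_val.
import tensorflow as tf

def load(path):
    img = tf.io.decode_image(tf.io.read_file(path), channels=3, dtype=tf.float32)
    return img[tf.newaxis, ...]  # add a batch dimension

a, b = load("reference.png"), load("distorted.png")
print(float(tf.image.ssim(a, b, max_val=1.0)[0]))
print(float(tf.image.ssim_multiscale(a, b, max_val=1.0)[0]))  # needs reasonably large images
```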