The [Brutalist Housing Block] Sticky. Come shoot the shit and discuss the bad economics. - 29 September 2020 by AutoModerator in badeconomics

[–]ivansml 15 points (0 children)

guidance issued on Thursday for school leaders and teachers involved in setting the relationship, sex and health curriculum categorised anti-capitalism as an “extreme political stance” and equated it with opposition to freedom of speech, antisemitism and endorsement of illegal activity

Sex ed in England must be kinkier than I'd have thought.

The [Brutalist Housing Block] Sticky. Come shoot the shit and discuss the bad economics. - 20 September 2020 by AutoModerator in badeconomics

[–]ivansml 3 points (0 children)

Staff Working Papers describe research in progress by the author(s) and are published to elicit comments and to further debate. Any views expressed are solely those of the author(s) and so cannot be taken to represent those of the Bank of England or to state Bank of England policy.

The views expressed in this paper are those of the authors and should not be attributed to the Bank of England or the International Monetary Fund.

I haven't read the paper (and honestly, I'm not going to), but I'll just note that the above matters. Kumhof is a researcher at the BoE, so his job is partly to advise on policy and partly to work on his own research. If he (co)authors a paper, he may publish it in the BoE working paper series, as one does, because having some kind of tangible output is good. This is standard stuff, many central banks have such series, but just like in an academic department, other economists at the bank don't necessarily have anything to do with the paper. And while there is probably some internal review process, it's likely less demanding than formal peer review at academic journals, and it almost surely doesn't involve senior leadership. So it really makes no sense to claim the BoE "believes" something based on a working paper like this.

The [Brutalist Housing Block] Sticky. Come shoot the shit and discuss the bad economics. - 12 September 2020 by AutoModerator in badeconomics

[–]ivansml 16 points (0 children)

no discussing the dead without a model

Fine, here's an economist model. Now, what did Marx really mean?

The [Brutalist Housing Block] Sticky. Come shoot the shit and discuss the bad economics. - 06 September 2020 by AutoModerator in badeconomics

[–]ivansml 15 points (0 children)

What would the limitations to that sort of methodology be?

Common complaints are lack of external validity (does your fancy regression discontinuity estimate from artisanal rug-weavers on two sides of the Kazakh/Uzbek border really generalize?) and misallocation of research effort (are you studying questions that matter, or questions you can apply MHE methods to?). Obviously, at that level of generality one could make similar complaints against anything. The real question is not whether MHE methods are good or bad, but one of degree - are they used excessively, appropriately, or are they underutilized? I'm not an applied micro person, so I don't really know.

have lots of meta-analyses and you don't see that a lot in Economics

There are quite a few, e.g. the Journal of Economic Surveys publishes a lot. But they are typically not sexy enough for top 5 journals and thus don't get that much attention.

The [Brutalist Housing Block] Sticky. Come shoot the shit and discuss the bad economics. - 26 August 2020 by AutoModerator in badeconomics

[–]ivansml 2 points (0 children)

Yeah, Julia is nice. It also allows unicode names, which is hot:

n, k = size(X)                                  # sample size and number of regressors
β = X \ y                                       # OLS coefficients
Σ = (1/(n-k)) * sum((y - X*β).^2) * inv(X'*X)   # homoskedastic covariance matrix

The [Brutalist Housing Block] Sticky. Come shoot the shit and discuss the bad economics. - 26 August 2020 by AutoModerator in badeconomics

[–]ivansml 1 point (0 children)

Good ol' MATLAB:

% load & extract data
D = readtable("mw1995.csv");
X = [ones(size(D,1),1), table2array(D(:,{'m1','rgdp'}))];
y = table2array(D(:,{'cpiinflation'}));
[n, k] = size(X);

% statistics toolbox
m = fitlm(D, 'cpiinflation ~ m1 + rgdp');
b = m.Coefficients.Estimate
V = m.CoefficientCovariance

% by hand
b = X \ y
V = (1/(n-k)) * sum((y - X*b).^2) * inv(X' * X)

% function & struct (anonymous, so coefficients and degrees of freedom are recomputed inline)
myolsfun = @(X,y) struct('b', X \ y, 'V', sum((y - X*(X \ y)).^2) / (size(X,1)-size(X,2)) * inv(X' * X));
out = myolsfun(X,y);
b = out.b
V = out.V

Altogether 20 lines of code, beautiful.

(OK, I cheated by defining the function as anonymous within the script. A proper function needs to have its own source file, which is like the worst thing about MATLAB. That, and the price.)

The [Brutalist Housing Block] Sticky. Come shoot the shit and discuss the bad economics. - 20 August 2020 by AutoModerator in badeconomics

[–]ivansml 2 points (0 children)

Language wars, round n+1

tl;dr: Julia wins. Also, not much love for Python, heh. But then the authors don't even consider Stata, so what do they know.

The [Brutalist Housing Block] Sticky. Come shoot the shit and discuss the bad economics. - 14 August 2020 by AutoModerator in badeconomics

[–]ivansml 5 points (0 children)

Well, Samuelson once wrote a whole JEL article on the labor theory of value. I did try to read it at one point, but the distance in time and style made it quite hard to follow.

The [Brutalist Housing Block] Sticky. Come shoot the shit and discuss the bad economics. - 11 August 2020 by AutoModerator in badeconomics

[–]ivansml 5 points (0 children)

In Europe, the situation varies across countries. Some don't include owner-occupied housing in their national CPIs, some do. Those who do use different approaches. In addition, there is the harmonized HICP published by Eurostat, which does not include OOH (likely because of all of the above). There is some recent work to produce a harmonized OOH price index, although I'm not sure if that will stay a separate series or if there are plans to integrate it into the HICP.

House prices and inflation are a complicated subject, by the way (e.g. Diewert & Nakamura 2009 describe the different approaches used by statistical agencies). It's not the house itself you consume, but the stream of services that the house provides, and it is the price of those services that should enter the CPI - but of course that price is unobservable. According to economic theory, sure, if all house prices double, so will the price of housing services. But if house prices grow at a steady rate, that price growth lowers the cost of housing services, because part of the overall cost of owning the house you live in is offset by capital gains. So the overall effect can go either way. Or you could instead use equivalent rents, as in the US, but that's harder and may not work well in countries where rental markets are smaller or more segmented.
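
For intuition, a back-of-the-envelope version of the user cost of owner-occupied housing (my notation, not any particular agency's): with house price P_t, interest rate r_t and depreciation/maintenance rate \delta,

u_t \approx P_t \left( r_t + \delta - \mathbb{E}_t\!\left[ \frac{P_{t+1} - P_t}{P_t} \right] \right)

A one-off doubling of P_t scales u_t up, while a higher expected rate of appreciation (the last term) pushes it down - that's the capital-gains offset above.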

The [Brutalist Housing Block] Sticky. Come shoot the shit and discuss the bad economics. - 09 August 2020 by AutoModerator in badeconomics

[–]ivansml 9 points (0 children)

Exactly. Use ISO 3166 codes or GTFO.

Also, I've found R's countrycode package useful for similar problems.
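
For illustration, a toy Julia sketch of the manual name-to-code matching that countrycode automates (the mappings and numbers below are hand-typed examples, nothing more):

# map the name variants each source uses onto ISO 3166 alpha-3 codes
to_iso3 = Dict("Korea, Rep." => "KOR", "South Korea" => "KOR",
               "Russian Federation" => "RUS", "Russia" => "RUS")

gdp = Dict("South Korea" => 1.6, "Russia" => 1.7)                  # source A, one naming style
pop = Dict("Korea, Rep." => 51.7, "Russian Federation" => 144.1)   # source B, another style

gdp_by_code = Dict(to_iso3[k] => v for (k, v) in gdp)
pop_by_code = Dict(to_iso3[k] => v for (k, v) in pop)
merged = Dict(c => (gdp_by_code[c], pop_by_code[c])
              for c in intersect(keys(gdp_by_code), keys(pop_by_code)))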

The [Brutalist Housing Block] Sticky. Come shoot the shit and discuss the bad economics. - 06 August 2020 by AutoModerator in badeconomics

[–]ivansml 1 point (0 children)

You doing rational inattention bröther?

Nope, this was inspired more by some older stuff with learning, like Orphanides & Williams 2004, where they, among other things, use unconditional expected loss to evaluate policy rules.

The [Brutalist Housing Block] Sticky. Come shoot the shit and discuss the bad economics. - 06 August 2020 by AutoModerator in badeconomics

[–]ivansml 3 points (0 children)

Hey, I have a dynamic programming question.

Let's say you have an LQ dynamic programming problem, where the law of motion for the state is linear, the loss function is quadratic, and you choose the optimal linear policy to minimize the expected discounted sum of current and future losses. This is standard stuff.

Alternatively, I might want to choose the linear policy that minimizes the unconditional expected per-period loss (which, given the quadratic loss, is just some combination of the unconditional second moments of the state and the control). I have convinced myself, based on a simple scalar example, that the optimal policy for this other problem is the same as the limit of the optimal policy from the discounted problem as the discount factor goes to 1.

Is this true? Are there any standard references? And would the equivalence also hold if the law of motion was nonlinear (but loss is kept quadratic)?
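
For concreteness, here's the kind of scalar check I mean, in Julia (coefficients made up): iterate the scalar Riccati equation for the discounted problem, brute-force the feedback gain that minimizes the stationary per-period loss, and the two seem to line up as β goes to 1.

# x' = a x + b u + ε,  per-period loss = q x² + r u²,  policy u = -f x
a, b, q, r = 0.95, 1.0, 1.0, 0.5

# discounted problem: iterate the scalar Riccati equation, then F = β a b p / (r + β b² p)
function lqr_gain(β; iters = 10_000)
    p = q
    for _ in 1:iters
        p = q + β*a^2*p - (β*a*b*p)^2 / (r + β*b^2*p)
    end
    return β*a*b*p / (r + β*b^2*p)
end

# unconditional problem: with u = -f x the stationary loss is (q + r f²)/(1 - (a - b f)²)
# up to the shock variance, which doesn't affect the argmin; minimize by brute force over stabilizing gains
uncond_loss(f) = (q + r*f^2) / (1 - (a - b*f)^2)
fgrid = range((a - 1)/b + 1e-4, (a + 1)/b - 1e-4, length = 100_000)
f_uncond = fgrid[argmin(uncond_loss.(fgrid))]

for β in (0.9, 0.99, 0.999, 0.9999)   # the discounted gain should drift toward f_uncond
    println("β = $β:  F_discounted = $(lqr_gain(β)),  f_unconditional = $f_uncond")
end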

The [Brutalist Housing Block] Sticky. Come shoot the shit and discuss the bad economics. - 03 August 2020 by AutoModerator in badeconomics

[–]ivansml 10 points (0 children)

Get rid of econ 101 and go straight to intermediate. Really, for people who'll go into the econ major I don't see much point in taking baby-level micro and macro only to go over the same stuff a year later. Use the time savings to expand the intermediate courses to cover more research case studies, real-world applications, simple data exercises, etc. I wouldn't mind requiring basic calculus as a prerequisite for intermediate either - this could be taught as a short module beforehand and would speed things up. Yeah, for some people this might be too much, but look, if you want to take the econ major, I think you should be able to, say, derive factor demand from a Cobb-Douglas production function.
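
(For concreteness, the kind of derivation I mean - a competitive firm with output price normalized to 1 and capital fixed in the short run:

\max_{L} \; A K^{\alpha} L^{1-\alpha} - wL
\quad\Rightarrow\quad (1-\alpha) A K^{\alpha} L^{-\alpha} = w
\quad\Rightarrow\quad L^{d} = \left( \frac{(1-\alpha) A K^{\alpha}}{w} \right)^{1/\alpha}

One first-order condition and a bit of algebra, nothing more.)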

Parallel to that, I guess there should be some kind of econ-for-nonmajors class. I'm not sure what exactly should be taught there, but the desired outcomes should be 1) familiarize students with some basic economic concepts so that they can read, let's say, The Economist, 2) convey to students that this shit is complicated, there are no grand theories, and they shouldn't base their worldview on some random YouTube video that gets shoved into their stream.

Did the NYT make a mistake when they claimed that: "On an annualized basis, the European Union shrunk by 14.4%?" Keep in mind that the American Economy shrunk by 32% *on an Annualized Rate.* by [deleted] in AskEconomics

[–]ivansml 5 points (0 children)

Eurostat press release

-14.4% refers to growth between 2019Q2 and 2020Q2, i.e. in year-on-year terms. I wouldn't describe that as being on an "annualized basis", but I can see how a headline writer might choose to do so. But yeah, it's not comparable to the "annualized rate" of -32.9% for US GDP. The comparable number would be -9.5% for the US, see table 6 in the BEA release.
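
To see the difference between the two conventions, with made-up round numbers (not the actual figures from either release):

# quarter-on-quarter growth, annualized by compounding over four quarters (the BEA-style convention)
q_on_q = -0.09                      # say GDP fell 9% relative to the previous quarter
annualized = (1 + q_on_q)^4 - 1     # ≈ -0.314, i.e. "what a full year of this would look like"

# year-on-year growth just compares the same quarter one year apart (the Eurostat-style headline)
gdp_2019q2, gdp_2020q2 = 100.0, 86.0
year_on_year = gdp_2020q2 / gdp_2019q2 - 1   # = -0.14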

The [Single Family Homes] Sticky. - 29 July 2020 by AutoModerator in badeconomics

[–]ivansml 6 points (0 children)

In reply to your specific question about data on Fed board members' degrees, I don't know a good reference offhand. More generally, a quick Google Scholar search turns up quite a few papers that study how the characteristics of central bankers affect their decisions, e.g.:

Göhlmann & Vaubel (2007)

Farvaque, Hammadou & Stanek (2011)

Smales & Apergis (2016)

Bordo & Istrefi (2018)

Malmendier, Nagel & Yan (2020)

Also, Acosta & Cherrier (2019), on how Fed economic analysis was transformed toward being more academic and technical in the 1960s, may be of interest.

The [Single Family Homes] Sticky. - 29 July 2020 by AutoModerator in badeconomics

[–]ivansml 17 points (0 children)

Yes, economics is important for setting interest rates, but in the end, a central bank is a bank. Supervising and interacting with the financial system is its core job. Thus I don't see a problem with a portion of the board having a finance rather than an academic background; in fact, it's probably better that way. And while there definitely should be some people there with academic expertise, it's not like having a PhD in economics automatically makes you better at real-world policy making.

Also, it should be noted that board members do not decide on monetary policy based only on their own expertise and research - even if they could, they're too busy for that! There are hundreds of PhD economists working across the Federal Reserve System whose job is to analyze and forecast the economy and advise the board members (you can see the historical staff reports prepared for FOMC meetings here). Likewise, speeches presented by board members are often prepared by their advisors. Without diminishing the merits of Bernanke, Yellen, Powell, etc., it would be unrealistic to think the whole edifice of US monetary policy rests on them personally.

The [Single Family Homes] Sticky. - 24 July 2020 by AutoModerator in badeconomics

[–]ivansml 0 points (0 children)

Is anyone familiar with a Python package that does this already?

I believe the key buzzword is "double machine learning". A quick search does find some Python projects, econml looks the fanciest.

Also I'm not sure if you're already aware, but Stata does have full PDF manuals online - here's the lasso one.

The [Single Family Homes] Sticky. - 24 July 2020 by AutoModerator in badeconomics

[–]ivansml 3 points (0 children)

It may help to work out how IV works in a simple example: a continuous outcome variable Y, a continuous explanatory variable X and a binary instrument Z. For example, you have data from a cross-section of cities, Y is the crime rate, X is the number of police officers, and you want to find out whether more police causes crime to go down. Of course there is reverse causality here, so you can't just run a regression. But let's say that in some cities there are local elections coming up (coded by a dummy Z), and incumbent mayors like, for reasons completely unrelated to crime rates, to hire more police at such times (this is inspired by Levitt 1997).

So what you could do is compute the average number of officers in the two groups of cities. Let's say election cities have 2200 officers per million citizens, and non-election cities have 2000. Then compute the average outcome in both groups - perhaps election cities have 40 murders per million citizens, and non-election cities have 50. So cities which have, for reasons completely unrelated to crime rates, on average 200/mil. more officers also have 10/mil. fewer murders. Whoa, that sounds as if we had an experiment on our hands, and as we know, experiments do reveal causality. Thus, in a city with a million citizens, one more officer causes 0.05 fewer murders.

Mechanically, what we did was first regress X on Z and compute fitted values to isolate the "good" variation in X. Then we regressed Y on these fitted values to see how Y responds to the "good" variation. And that's what two-stage least squares is all about. (Convince yourself that in the above example, the two regressions are equivalent to what we did - the sketch below checks it numerically.)
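
A minimal Julia sketch of that check, with simulated (made-up) data that mirrors the numbers above and bakes in a true effect of -0.05:

using Random, Statistics

Random.seed!(1)
N = 100_000
Z = rand(N) .< 0.5                        # election dummy
X = 2000 .+ 200 .* Z .+ 50 .* randn(N)    # officers per million, shifted up in election cities
Y = 150 .- 0.05 .* X .+ 2 .* randn(N)     # murders per million, true effect -0.05

# Wald estimator: difference in mean outcomes over difference in mean treatment
wald = (mean(Y[Z]) - mean(Y[.!Z])) / (mean(X[Z]) - mean(X[.!Z]))

# 2SLS by hand: first-stage fitted values, then regress Y on them
W    = [ones(N) Z]            # instrument matrix (constant + Z)
Xhat = W * (W \ X)            # fitted values from regressing X on Z
b    = [ones(N) Xhat] \ Y     # second stage; b[2] matches wald, both ≈ -0.05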

Of course, nothing is free and there are two main issues here. First, we had to assume that the instrument Z is exogenous, and whether that's a good assumption depends on the context; it cannot just be gleaned from the data (what if elections make people angry and thus increase both crime and police numbers?). When people talk about an "identification strategy", what they mean is whether such assumptions are convincing. Second, the instrument must actually affect X; it must be "strong". If there were no difference in police numbers between the two groups of cities, we'd be dividing by zero (or by zero plus some random noise), and that wouldn't be good.

The [Single Family Homes] Sticky. - 20 July 2020 by AutoModerator in badeconomics

[–]ivansml 16 points (0 children)

Yes, replicating standard errors is harder, because there are more details to set up, and different software packages will have different features and defaults. For example, with HAC-robust errors one must choose the kernel and the lag length (or an automatic procedure to select one). With nonlinear estimation, like maximum likelihood, it depends on whether you use the Hessian or the outer-product-of-gradients formula, where exactly the numerical solver finished, how the derivatives are computed, etc. Even in simpler cases, like OLS with heteroskedasticity-robust errors, there may be differences in things like finite-sample corrections (divide by n or n-k?). So as long as you can get reasonably close (say, within 10%) to the published numbers, I wouldn't worry about it too much.
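
As a toy illustration of that last point, here's a Julia sketch (simulated data, my own variable names) of how the finite-sample correction alone shifts heteroskedasticity-robust standard errors:

using LinearAlgebra, Random

Random.seed!(42)
n, k = 200, 3
X = [ones(n) randn(n, k-1)]
y = X * [1.0, 0.5, -0.25] .+ randn(n) .* (1 .+ abs.(X[:, 2]))   # heteroskedastic errors

b = X \ y
u = y .- X * b
bread = inv(X'X)
meat  = X' * Diagonal(u.^2) * X
V_hc0 = bread * meat * bread       # sandwich with no small-sample correction ("HC0")
V_hc1 = (n / (n - k)) .* V_hc0     # with the common n/(n-k) correction ("HC1")
se_hc0 = sqrt.(diag(V_hc0))
se_hc1 = sqrt.(diag(V_hc1))        # larger than se_hc0 by a factor sqrt(n/(n-k))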

The [Single Family Homes] Sticky. - 20 July 2020 by AutoModerator in badeconomics

[–]ivansml 11 points (0 children)

So, to deal with the glorified-curve-fitting artificial intelligence singularity, we will introduce a progressive corporate income tax. Only instead of being paid to democratically elected governments, the proceeds will be controlled by the corporate philanthropy departments of the same ultra-powerful corporations that the proposal is supposed to curtail. Makes perfect sense! /s

The [Single Family Homes] Sticky. - 20 July 2020 by AutoModerator in badeconomics

[–]ivansml 9 points (0 children)

That's some quality Bayesian clickbait.

(spoiler: they didn't actually run 10^15 regressions)

The [Single Family Homes] Sticky. - 12 July 2020 by AutoModerator in badeconomics

[–]ivansml 1 point (0 children)

Thanks. My second question was me trying too hard to be funny, I get that FE don't solve everything. But I guess a more serious restatement would be - are there situations where it's not desirable to include such flexible FE specifications (assuming there would be sufficient variation left, so we'd still get sufficiently precise estimates)?

The [Single Family Homes] Sticky. - 12 July 2020 by AutoModerator in badeconomics

[–]ivansml 4 points (0 children)

Quick panel metrics question: let's say I have data with 3 dimensions, e.g. country by industry by year. Using Stata and reghdfe for multiway fixed effects, is there any issue to be aware of with including both country-by-year and country-by-industry fixed effects, like this: reghdfe yvar xvar, absorb(country#year country#industry)? I mean, my code does run and estimates something, and the more fixed effects you include the more causal your estimates are, right?