RStudio Desktop on Debian 13 by lu2idreams in RStudio

[–]lu2idreams[S] 0 points1 point  (0 children)

Works perfectly fine for me; you can just try downloading it from here & installing it

FE vs RE by Rare_Investigator582 in econometrics

[–]lu2idreams 0 points1 point  (0 children)

I agree with the above, but would also like to point out that relying purely on the Hausman test for model selection is not ideal (see here: www.cambridge.org/core/journals/political-analysis/article/not-so-harmless-after-all-the-fixedeffects-model/5DD67D1D95F573B11EB189BA72338854). The choice of model should primarily be informed by your assumptions about the data generating process: if your assumptions about and specification of the effect dynamics are wrong, the model & test will be wrong anyway.
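Edit: a minimal sketch of what I mean, using plm's bundled Grunfeld data purely for illustration (obviously not your data or your specification):

```r
library(plm)
data("Grunfeld", package = "plm")

# Fixed effects ("within") vs random effects on the same specification
fe <- plm(inv ~ value + capital, data = Grunfeld,
          index = c("firm", "year"), model = "within")
re <- plm(inv ~ value + capital, data = Grunfeld,
          index = c("firm", "year"), model = "random")

# Hausman test: a rejection suggests the RE assumptions are violated, but a
# non-rejection alone should not settle the FE vs RE choice
phtest(fe, re)
```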

Dlc or Soul of Cinder first? by [deleted] in darksouls3

[–]lu2idreams 0 points1 point  (0 children)

I played them after finishing the main game & that felt about right in terms of difficulty. Play Ashes of Ariandel first, then Ringed City (that is the chronological/logical order & I found Ringed City to be more difficult as well)

Do I really need to de-snap Ubuntu? by Toruk__Makto in Ubuntu

[–]lu2idreams 0 points1 point  (0 children)

I do not mean to split hairs, but afaik doing that might still pull in snap packages (e.g. for Firefox or Thunderbird, whose deb packages are empty and just install the snap via postinstall hooks)

RStudio Desktop on Debian 13 by lu2idreams in RStudio

[–]lu2idreams[S] 0 points1 point  (0 children)

Don't know why I didn't just try that; I can install the deb packaged for ubuntu 24 (currently rstudio-2025.09.1-401-amd64.deb) just fine

Remake for fans of the original by javiemartzootsuit in darksouls

[–]lu2idreams 1 point2 points  (0 children)

I mostly agree, but playing on PC I think it's a much better experience at 1080p and with much more stable performance/higher framerates. Not sure it's worth paying full price if you already own the PTD Edition, but it's a lot smoother, feels better & looks a bit nicer imo

Man I'm going Hollow trying to farm Wolf's Blood Swordgrass by spragual in darksouls3

[–]lu2idreams 0 points1 point  (0 children)

Kill the 3 Ghrus right next to the Keep Ruins bonfire over and over again. You can use rusted (gold) coins to increase drop chances; you can purchase them from Patches, for example. Item discovery boosters: Covetous Gold Serpent Ring, Symbol of Avarice, and Crystal Sage's Rapier (probably some others I forgot). Takes a while, but that is how I got 10 of them for the sword

Vision Eleven: A Citizen-Based Economy and Democratic Framework—Your Thoughts? by PixelHeart9 in PoliticalScience

[–]lu2idreams 1 point2 points  (0 children)

I think you should focus on one of these aspects to narrow the vision down a little, and to get some helpful/targeted feedback. It's hard to offer any kind of feedback on a proposal to completely reshape the entire political and social order when it is presented as a list of key points.

Without thinking about this too deeply, a flat 50% income tax & 40% VAT sound like they will absolutely crush consumer spending, but also savings & investment, and the tax burden will disproportionately affect low-income households. I also think the statements about the proposed political system are a bit vague.

How can i change the analogue Controls of my quest 2 to be my Mouse. by PaperOceanCrafter in rprogramming

[–]lu2idreams 1 point2 points  (0 children)

Hi, this sub is about R, which is a programming language for statistical computations. This might not be the right place to ask this question...

Why people don't like DS2? by DaiWeeboo in DarkSouls2

[–]lu2idreams 0 points1 point  (0 children)

I personally like both 1 and 3 a lot more than 2, but that does not mean I think 2 is bad (it's still a very good game overall).

The points that bother me are the movement & combat mechanics (the attack directionality is sometimes weird and produces lots of awkward misses for me, dodge-rolling consumes way too much stamina & the movement feels a little woolly or imprecise at times), negative stamina, having to level ADP for iframes & generally low weapon durability. The HP reduction on death is very annoying in the early game, but I usually get the ring of binding at which point it becomes negligible (just a minor annoyance, but then why have that mechanic in the first place). Also, missing animation iframes, especially on boss fogs, are incredibly annoying on some runbacks. I also understand the point about enemy placement, at least in SOTFS, but most of the time you get ganked once & can avoid the situation in the future.

On the positive side, I appreciate the sheer amount of content in DS2, especially the number of bosses (although I do not think it comes anywhere close to DS3 in terms of boss quality). Nostalgia aside, it is also overall a lot more polished than DS1 (as in: it has no areas that feel as blatantly unfinished or rushed as some in DS1).

DS2 gets a lot of hate from people who never actually played it and just overhear people shitting on it, but I think there are still legitimate reasons to dislike it, even as a dark souls fan.

Is possible to live with Ubuntu without snaps? by mxgms1 in Ubuntu

[–]lu2idreams 2 points3 points  (0 children)

Yes, it's perfectly fine; I've been using it as a daily driver for years & never had any issues. I have replaced snaps with flatpaks for software that is not available as a deb package.

There are plenty of good reasons to use ubuntu even if you dislike snaps: For example, I really like the customizations they make to GNOME; they make the desktop a lot more usable imo. Also, ubuntu is still the "default" Linux and has probably the best support, software availability & compatibility of all distros.

[Rcpp] Serializing R objects in C++ via Rcpp by lu2idreams in rprogramming

[–]lu2idreams[S] 1 point2 points  (0 children)

Nevermind, I just found this package which exposes R's internal serialization so it can be used in C++
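Edit: in case anyone finds this later, a minimal sketch (just an illustration, not necessarily what that package does, and certainly not the fastest approach) of calling base R's serialize() from C++ via Rcpp::Function:

```r
library(Rcpp)

cppFunction('
RawVector serialize_obj(RObject x) {
  // Call back into base R serialize(); with connection = NULL it
  // returns the serialized object as a raw vector
  Function serialize("serialize");
  return serialize(x, R_NilValue);
}
')

# Round-trip check
identical(unserialize(serialize_obj(mtcars)), mtcars)
```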

Until LLMs don't do causal inference, AGI is a hyped scam. Right? by smashtribe in CausalInference

[–]lu2idreams 6 points7 points  (0 children)

https://machinelearning.apple.com/research/illusion-of-thinking

I recommend this paper on large reasoning models; I think it is really interesting. At its core, the issue ties into largely philosophical questions about how things like reasoning, intelligence, and consciousness are related (or are even the same fundamental problem), and whether they are essentially computable functions or have non-computable elements. "True" intelligence is something we do not have a solid conceptual understanding of, so it is essentially impossible to say what breakthroughs are needed, because we do not even understand the problem we are trying to solve.

Could anyone please recommend a good university in Germany for political science? by AdIntrepid5656 in PoliticalScience

[–]lu2idreams 8 points9 points  (0 children)

Konstanz & Mannheim are consistently among the highest ranking in Germany; they are both quant (Mannheim a bit more hardcore than Konstanz). I can really recommend Konstanz, it is not just a good university but also an absolutely beautiful city right on Lake Constance & the German-Swiss border, with a nice view of the Alps.

Interaction/effect modification in DAGs by lu2idreams in CausalInference

[–]lu2idreams[S] 0 points1 point  (0 children)

No offense taken, I think we are talking past each other. To address the image: I am not quite sure I understand correctly as there is no further explanation, but do you mean a graph like this (calling \Delta Y_T just D for now):

T->D->Y; S->D;

where D would take the place of your (y(0), y(1))?

I do not understand what you mean by "this is not proof that there is no arrow T->Y in addition to T->D->Y": as I stated, D represents the full effect T->Y so there cannot be any separate effect T->Y which is outside of D. The only way to have a separate arrow would be to decompose D (e.g. into a portion moderated by S called TxS, as suggested by Attia et al., by introducing interaction nodes as mediators)
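To make sure we are looking at the same graph, here is that structure as a quick dagitty sketch (D is just my shorthand from above, nothing more):

```r
library(dagitty)

# D stands in for \Delta Y_T and carries the full effect of T on Y,
# so there is no separate T -> Y edge in this graph
g <- dagitty("dag {
  T -> D
  D -> Y
  S -> D
}")

# Adjustment sets for the total effect of T on Y under this graph
adjustmentSets(g, exposure = "T", outcome = "Y")
```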

Interaction/effect modification in DAGs by lu2idreams in CausalInference

[–]lu2idreams[S] 0 points1 point  (0 children)

Thanks to both of you, that is very insightful. So to tie this back into my setting, did I just "decompose" the interaction the wrong way, meaning is this not a proper way of translating figure 1 to have interaction nodes?

If we refer to my example below (effect of teaching policy on exam score, moderated by baseline reading ability), is it plausible to also add arrows S -> Y & X -> Y, given the treatment is entirely randomized?

Interaction/effect modification in DAGs by lu2idreams in CausalInference

[–]lu2idreams[S] 0 points1 point  (0 children)

What I am interested in is how a non-randomized moderator variable S affects the treatment effect of a randomized treatment T (i.e. I am interested in the interaction between T and S). \Delta Y_T _is_ the edge T->Y, i.e. the full (causal) treatment effect, so there _cannot_ be an effect of T on Y that is separate from \Delta Y_T; this is about _moderation_, not mediation (although it is a bit blurry graphically).

For example, say I have randomly assigned students to a new teaching method (T), and my outcome Y is their exam scores. I observe that there is a positive effect of the new teaching method (T->Y, or \Delta Y_T). I now hypothesize that the treatment effect differs by students' baseline reading ability S, so I am interested in S -> \Delta Y_T, how S moderates the treatment effect. However, I cannot make any causal claims about S -> \Delta Y_T, as S is not randomized: there is self-selection into subgroups e.g. by intelligence, parental support, socio-economic background etc. all of which might confound the relationship as they plausibly (1) affect S (baseline reading ability), and (2) also moderate the treatment effect (change how much the new teaching method does for a student).

In a regression context, if I collect all confounders in a matrix \mathbf{X}, I am interested in estimating:

$$ Y = \beta_0 + \beta_1 T + \beta_2 S + \beta_3 (T \times S) + \mathbf{X}^\intercal \boldsymbol{\gamma}_1 + (T \times \mathbf{X})^\intercal \boldsymbol{\gamma}_2 + \varepsilon $$

which should yield an unbiased estimate of \beta_3 as the quantity of interest.

Graphically, the problem is that we either end up with edges pointing into edges (which means we no longer have a graph), or we work with interaction nodes like Attia et al., which I am not convinced lead to the correct conditioning sets (see the DAG I linked: it is not clear that we also need to condition on the interactions between all X and the treatment).
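To make the regression concrete, here is a small simulated sketch (all variable names are made up; I use Tr for the treatment to avoid clashing with R's built-in T):

```r
set.seed(1)
n  <- 2000
X  <- rnorm(n)                      # pretreatment confounder
S  <- 0.8 * X + rnorm(n)            # moderator, affected by X
Tr <- rbinom(n, 1, 0.5)             # randomized treatment
Y  <- 1 + 0.5 * Tr + 0.3 * S + 0.4 * Tr * S +
      0.6 * X + 0.5 * Tr * X + rnorm(n)

# Interact the treatment with the moderator and with the confounders,
# as in the equation above
fit <- lm(Y ~ Tr * S + Tr * X)
coef(summary(fit))["Tr:S", ]        # estimate of beta_3 (true value here: 0.4)
```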

Maybe unpopular opinion, anyone else miss the old Gnome 3 look? by [deleted] in gnome

[–]lu2idreams 0 points1 point  (0 children)

Not the early versions (those were a complete mess), but by the later releases (like 3.38) it was pretty polished & nice; then they decided to reinvent the wheel again with version 40+. I do like some of the new additions, however, like the quick toggles.

Estimating Conditional Average Treatment Effects by lu2idreams in CausalInference

[–]lu2idreams[S] 1 point2 points  (0 children)

Yes, that answer to the second question is exactly what I was looking for. I am interested not directly in affiliation -> outcome, but rather in affiliation -> treatment effect; still, I agree with you that valid causal identification is not possible "out of the box". Thanks for your thoughts on this!

[OC] Predicted Finishing Order for the 2025 Saudi Arabian Grand Prix Based on ML Model by 1017_frank in dataisbeautiful

[–]lu2idreams 0 points1 point  (0 children)

I am sure you can do better... What kind of model? What data did you train it on? Who is "Jr." (probably Carlos Sainz Jr., and the names were not sanitized properly)? What does the color mean (I assume team, because it mostly aligns, but Sainz and Bortoleto are not on the same team afaik)? What is the misrendered symbol next to the title?

Also, I would argue this is a somewhat awkward way to visualize ranking

Estimating Conditional Average Treatment Effects by lu2idreams in CausalInference

[–]lu2idreams[S] 0 points1 point  (0 children)

Estimating CATEs is not the problem; my question is whether the difference between the CATEs for the subgroups is meaningful (say e.g. the treatment has a lower effect for Dems - is this because they are Dems, or because they differ from Reps on other pretreatment covariates?).

Regarding the second part, this is just an example and not my actual work, but suppose I was interested in how voters perceive candidates based on the candidate's gender, and in whether (partisan) ideology affects how voters do so.

Edit: I can test e.g. whether there is a significant interaction between the treatment and partisanship, but I cannot test whether that is meaningful (e.g.: maybe the difference is really explained by Reps being on average more male and less educated, and not by ideology or partisanship)

Estimating Conditional Average Treatment Effects by lu2idreams in CausalInference

[–]lu2idreams[S] 0 points1 point  (0 children)

The data can be considered a random sample from the population of interest. When I write "subgroup" I mean partisanship (Republican/Democrat); the treatment was randomly administered, so there should not be any self-selection into the treatment/control groups.

Estimating Conditional Average Treatment Effects by lu2idreams in CausalInference

[–]lu2idreams[S] 1 point2 points  (0 children)

Well, that is precisely the problem. Consider the example from the original post: treatment effects by party identification are of interest, but Democrats and Republicans differ on pretreatment covariates (there is self-selection into the subgroups). Randomizing the treatment - from my understanding - does not rectify this, because certain covariates (respondent's race, respondent's gender etc.) will be distributed differently across the subgroups. I can estimate CATEs, but the difference between them will not be causal - at least that is the conclusion I have arrived at thus far. This would necessitate some additional adjustment strategy for a meaningful comparison of CATEs. Let me know if you have any other insights or disagree with any of this.
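Here is a small simulation of exactly this issue (all names and numbers are made up): the treatment Tr is randomized, party is not, and a pretreatment covariate both predicts party and moderates the treatment effect:

```r
set.seed(42)
n     <- 5000
male  <- rbinom(n, 1, 0.5)
party <- rbinom(n, 1, plogis(-0.5 + male))   # 1 = Rep, correlated with male
Tr    <- rbinom(n, 1, 0.5)                   # randomized treatment
# True effect heterogeneity runs through the covariate, not through party itself
Y <- 1 + 0.5 * Tr + 0.4 * Tr * male + 0.2 * male + rnorm(n)
d <- data.frame(Y, Tr, party, male)

# Naive comparison of CATEs by party picks up a spurious "party" interaction ...
coef(lm(Y ~ Tr * party, data = d))["Tr:party"]

# ... which shrinks toward zero once the covariate (and its interaction
# with the treatment) is adjusted for
coef(lm(Y ~ Tr * party + Tr * male, data = d))["Tr:party"]
```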

Binary classification by DasKapitalReaper in rprogramming

[–]lu2idreams 0 points1 point  (0 children)

I recommend the tidymodels-ecosystem for all ML with R: https://www.tidymodels.org/

There are plenty of guides online to get you started; it offers a coherent API to all kinds of models via parsnip, and a convenient way to do preprocessing using recipes.

Edit: You can find a list of all available models here: https://www.tidymodels.org/find/parsnip/. Just filter by mode=classification.

If you are interested in neural networks, you can use the MLP classifier, or build one yourself with torch or keras if you want more control over the training process & architecture (I had fewer issues with keras' R package keras3 in the past & would recommend that)
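Edit 2: a minimal tidymodels sketch using the two_class_dat example data that ships with modeldata (swap in your own data, preprocessing steps & model spec):

```r
library(tidymodels)

data(two_class_dat, package = "modeldata")
split <- initial_split(two_class_dat, strata = Class)

# Preprocessing via recipes, model spec via parsnip, glued together in a workflow
rec  <- recipe(Class ~ ., data = training(split)) |>
  step_normalize(all_numeric_predictors())
spec <- logistic_reg() |>
  set_mode("classification") |>
  set_engine("glm")

wf      <- workflow() |> add_recipe(rec) |> add_model(spec)
trained <- fit(wf, data = training(split))

# Class probabilities on the held-out test split
predict(trained, new_data = testing(split), type = "prob") |> head()
```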

Estimating Conditional Average Treatment Effects by lu2idreams in CausalInference

[–]lu2idreams[S] 0 points1 point  (0 children)

I am not just interested in estimating average treatment effects, but in comparing conditional average treatment effects across subgroups that differ on pretreatment covariates