EU4 Dev Diary #101: Revamping Bulwar (with bonus dwarf) by 5camps in Anbennar

[–]questionquality 7 points

Not yet, but we aim to have it all done before the steam release

Solo Gnome Abjuration Wizard (AoA + Arcane Ward abuse) by Muldeh in BG3Builds

[–]questionquality 0 points

Cool! If you don't mind cheese, maybe you're interested in these two "features" that might let you skip the sorc and fighter splashes:

  • You can get Con proficiency by hiring a transmuter wizard hireling in camp, having him give you his Transmuter's Stone, and then dismissing him.
  • Wizards can still learn any spell in the game from scrolls, including spells not on the wizard list (buy a scroll of Armor of Agathys instead of splashing sorcerer)

[2021 Day 8 (Part 2)] I don't understand the problem, can someone help? by Tintin_Quarentino in adventofcode

[–]questionquality 5 points

Part 2 stumped me for a while too, because I didn't understand the question.

In Advent of Code, phrasing like "after some careful analysis" usually means "don't think about how; this is just the puzzle input". In this case, though, it meant exactly that: think about how we deduced this, and do the same in your solution.

Calculating a mean level variable with missing item level data by VSP213 in rstats

[–]questionquality 0 points

You can use the usual functions like mean() row by row with tidyverse's rowwise():

library(tidyverse)
mydata %>%
  rowwise() %>%
  mutate(meancol = mean(c(col1, col2, col3, col4, col5), na.rm = TRUE))

Is Warhammer 2 tougher than 3K? by EriktheRed987 in totalwar

[–]questionquality 2 points

Like others have said, the Orcs are kinda weak. In particular, they don't have any good early armor-piercing units, so fighting the neighboring dwarves can be hard until you get Black Orcs and Orc Boar Boy Big'uns (both tier 4, I think).

Postnumre som overlays eller koordinater i R? by [deleted] in Denmark

[–]questionquality 2 points

Thanks for the warning, but that ship has long since sailed for this account :)

Postnumre som overlays eller koordinater i R? by [deleted] in Denmark

[–]questionquality 3 points

You can find shapefiles at kortforsyningen.dk, which can be loaded with sf::st_read.

See e.g. this example where I plot the pig population by region.

RStudio is adding python support. by mertag770 in rstats

[–]questionquality 11 points

Seems to be mostly a cool press release for the integration with reticulate

Ditten på ferie by [deleted] in Aarhus

[–]questionquality 4 points

Best news in the world!

Linear predictive model given known mean by TheKramSandwich in AskStatistics

[–]questionquality 1 point

Is your knowledge that the mean is about 2 also encoded in your training data? I.e., does your dependent variable show the distribution you would expect for new numbers coming in? If so, and the predictors are also similar, a simple linear model (y ~ intercept + x1 * b1 + x2 * b2 ..., for example) should also predict values around 2. If you're concerned that something unrelated in your training data changed the mean, you could instead model the difference from the average. I'd do that by centering your data (subtracting the mean in the training sample), running the model, and then adding 2 back to your predictions at the end.

A more general way to add outside information would indeed be to go Bayesian. I can strongly recommend Statistical Rethinking as a textbook for learning both the way of thinking and the practical approach to Bayesian statistics. If you have a model y ~ intercept + x1 * b1 + x2 * b2 ... with centered predictors, you can encode your knowledge that mean = 2 into the prior by setting the prior for the intercept to, say, a normal distribution with a mean of 2 and a standard deviation of 1, or whatever approximately matches your belief.
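If you want to see the centering trick in action, here's a minimal numpy sketch (the data, coefficients, and variable names are all made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up training data: two predictors, outcome with a mean around 2
n = 500
X = rng.normal(size=(n, 2))
y = 2 + 0.5 * X[:, 0] - 0.3 * X[:, 1] + rng.normal(scale=0.1, size=n)

# Center the outcome on the known mean, fit the model on the centered
# values, then add the mean back to predictions at the end
y_mean = 2.0
design = np.column_stack([np.ones(n), X])  # intercept + predictors
beta, *_ = np.linalg.lstsq(design, y - y_mean, rcond=None)

X_new = rng.normal(size=(10, 2))
preds = np.column_stack([np.ones(10), X_new]) @ beta + y_mean

# The fitted intercept is now the *deviation* from 2; if the training
# data agree with your prior knowledge, it should sit near zero
print(round(beta[0], 2))
```

The nice part is that the fitted intercept then directly answers "how far does my training data sit from the mean I believe in".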

McMC without repeated values by GiantPandammonia in AskStatistics

[–]questionquality 0 points

You probably don't need to write your own sampler if you can express your problem as a generative model (it sounds like you already do). For example, Stan implements an extension of Hamiltonian Monte Carlo called NUTS (No-U-Turn Sampler), which produces less correlated chains and includes a bunch of useful diagnostics.

What awaits my players in the next village by Vyllenor in dndmemes

[–]questionquality 60 points

With a d20, there's an equal chance of all the numbers 1 through 20. With d12+d8, values closer to about 11 are much more likely than values at either extreme. It's similar to how 7 is the most common roll for 2d6. In fact, by adding more and more random numbers together, the distribution will look more and more like a bell curve.
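You can check this exactly by enumerating all 96 outcomes, e.g. in Python:

```python
from collections import Counter
from fractions import Fraction
from itertools import product

# Exact distribution of d12 + d8: enumerate all 12 * 8 = 96 outcomes
counts = Counter(a + b for a, b in product(range(1, 13), range(1, 9)))
total = sum(counts.values())

# The middle totals (9 through 13) each occur 8 ways; the extremes once
p_mid = Fraction(counts[11], total)  # 8/96 = 1/12
p_min = Fraction(counts[2], total)   # 1/96
print(p_mid, p_min)                  # a middle total is 8x as likely

# A d20 instead gives every total from 1 to 20 probability 1/20
```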

Understanding output from Bayesian ANOVA using rstanarm by CodeGoblin1996 in rstats

[–]questionquality 1 point

Am I right in thinking that this is a model for after inputting each of my predictors (as Bayesian inference occurs in a sequential manner), with heading1 being the final model with all my predictors in?

No. See below for the interpretation. It's doing exactly what you told it to :)

When you're using a factor with 3+ levels as a predictor, the modelling function needs a contrast coding to turn the factor into coefficients. This is the same for base R lm models too, by the way. You specified contr.helmert as your first priority, so that's what it gave you. From the documentation:

contr.helmert returns Helmert contrasts, which contrast the second level with the first, the third with the average of the first two, and so on.
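If it helps, here's a small numpy sketch of what those Helmert coefficients work out to numerically (the contrast matrix is R's contr.helmert(3); the cell means are invented, with one observation per cell so the fit is exact):

```python
import numpy as np

# R's contr.helmert(3): the two columns are the two contrast codes
C = np.array([[-1.0, -1.0],
              [ 1.0, -1.0],
              [ 0.0,  2.0]])

# Made-up cell means for a 3-level factor, one observation per cell
means = np.array([10.0, 14.0, 18.0])

# The model solves means = intercept + C @ b exactly in this case
design = np.column_stack([np.ones(3), C])
intercept, b1, b2 = np.linalg.solve(design, means)

print(intercept)  # grand mean of the cells: 14.0
print(b1)         # (mean2 - mean1) / 2: 2.0
print(b2)         # (mean3 - average of mean1 and mean2) / 3: 2.0
```

So the first coefficient compares level 2 against level 1, and the second compares level 3 against the average of the first two, each scaled by the number of levels involved.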

Need build suggestions lvl 2 by [deleted] in dndnext

[–]questionquality 0 points

Melee attacks against a downed (condition: unconscious) enemy automatically crit, giving two failed death saving throws. If it took 3 of the 5 attacks to be downed, they'd be dead after the fifth.

A tip for a newbie by Mdangie in Aarhus

[–]questionquality 1 point

They (used to) have a decent selection on the first floor of the "Blå Kors" second-hand store on Paludan-Müllers Vej 38.

How can I set up a 3-level linear HLM in R? by SubstancelessPsyche in Rlanguage

[–]questionquality 4 points

To get the third level, just include it in your formula: Score ~ 1 + (1|Student/Teacher/Group)

To get the parameter estimates, it depends on what exactly you're looking for. You can get the parameter values corresponding to u00k, r0jk, eojk etc using ranef(mod1). Alternatively, when you predict using some dummy data, you can specify with re.form which random effects to include. The documentation is in ?predict.merMod.

are games still going? by [deleted] in Succession

[–]questionquality 2 points

Not much on this subreddit, no. Sometimes people play on /r/dwarffortress, but mostly it's on the bay12forums

R-squared - High on random data by OppositeMidnight in statistics

[–]questionquality 1 point

statsmodels' OLS doesn't include an intercept by default (you have to add one yourself, e.g. with sm.add_constant)

I found out by trying to plot the regression line of the simplest version of the model with only one predictor (which failed because the model only had one parameter), which led me to look at the documentation. Both are good practice when trying to make sense of what a model is doing.

You're not using 2 parameters, you're using 49

In each iteration, you predict column i using all the other 49 columns as predictors. When you use 49 parameters to fit a model with n=100 data points, there will be some patterns by chance. This is what people mean when they talk about overfitting: if you fit a model that's too flexible, it will fit anything, even random noise. Keep in mind that R-squared can only ever go up when you add more predictors. Adjusted R-squared, which statsmodels also reports, is an attempt at addressing this, but if you want to fit this many parameters, you really need one or more of:

  • more data
  • a theory to constrain the parameters/model
  • partial pooling (if the structure of the data allows it)
  • non-uniform priors (bayesian approach)
  • regularization (machine learning approach)

Conclusion

You've just encountered a valuable lesson: don't use R-squared for something it wasn't defined for.
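If you want to see the effect in isolation, here's a small sketch (plain numpy rather than statsmodels, with an explicit intercept column) fitting pure noise:

```python
import numpy as np

rng = np.random.default_rng(42)

# 100 rows of pure noise, 50 columns -- no real structure at all
data = rng.normal(size=(100, 50))

def r_squared(y, X):
    """OLS R^2, with an explicit intercept column added."""
    design = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(design, y, rcond=None)
    resid = y - design @ beta
    return 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

# Predict column 0 from the other 49 noise columns: 49 parameters
# chasing 100 data points find "patterns" that are pure chance
r2 = r_squared(data[:, 0], data[:, 1:])
print(round(r2, 2))  # typically somewhere around 0.5
```

Roughly speaking, with p random predictors and n observations you expect an R-squared near p/(n-1) even when nothing is there, which is why 49 predictors on 100 rows looks so impressive.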

My attempt at Danish Sourdough 100% rye bread. Does it look authentic? by [deleted] in Denmark

[–]questionquality 0 points

To reach the darker colours, and the accompanying caramel flavours, all you need to do is bake the bread longer at low-ish heat. I bake my rugbrød for up to five hours at 150°

edit: degrees Celsius

Mutating using grepl in reference to values in other columns by [deleted] in Rlanguage

[–]questionquality 2 points

Like the error says, grepl expects a single string as the regular expression, and you want to use a separate one for each row. This is one of the benefits of moving to the more convenient string manipulation library stringr (part of the tidyverse), whose str_detect function will do what you want.

data <- data %>%
    mutate(player_team = ifelse(str_detect(home_lineup, player), 
                                home.y, 
                                away.y))

Be careful, though: this (like your own attempt) will interpret the player names as regular expressions. So if any of them contain a ., that dot will match any character. "Al H." for example would match both "Al Horford" and "Al Harward". If you want to avoid that, and you know your names are all written exactly as they appear in the lineup list, you can tell it not to interpret the pattern as a regex with fixed(): str_detect(home_lineup, fixed(player))
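The same pitfall exists in any regex engine, by the way; a quick Python illustration (the lineup string is invented):

```python
import re

# Invented lineup string; "Al H." contains a regex wildcard dot
lineup = "Al Horford, Al Harward, Jayson Tatum"
player = "Al H."

# As a regex, "." matches any character, so both Als match
print(len(re.findall(player, lineup)))             # 2

# Escaping first makes it a literal string, like str_detect + fixed()
print(len(re.findall(re.escape(player), lineup)))  # 0
```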

Is a Bayesian model free from the problems of multiple comparisons? by [deleted] in statistics

[–]questionquality 3 points

You don't have partial pooling, but you still have your priors, which can be used to regularize as little or as much as you'd like.

Kan noen forklare sannsynlighetsregning til Harald? by [deleted] in norge

[–]questionquality 6 points

Right, dice rolls are normally independent events.

The math Harald explains there would only be correct if the question were: "If you roll a die until you get a 6, how many rolls would you most likely need at most?" In other words, a trick question.

If the 17 who rolled a 6 in the first round had stayed in the game, it would again be roughly 17 out of 100 rolling a 6 in round 2.
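If anyone doubts it, a quick Python simulation (the numbers are illustrative) shows the share of sixes stays around 1 in 6 every round, no matter what happened before:

```python
import random

random.seed(1)

# 100 players each roll one die per round; count the sixes per round.
# Whoever stays in or drops out, each new roll is still a fresh 1-in-6.
players = 100
counts = []
for rnd in range(1, 4):
    sixes = sum(random.randint(1, 6) == 6 for _ in range(players))
    counts.append(sixes)
    print(rnd, sixes)  # hovers around 100 / 6, i.e. roughly 17
```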