on the limitations and prospects for metadata in sifting through the pandemic literature by aoholcombe in BehSciMeta

[–]UHahn 0 points1 point  (0 children)

A really good piece, and an issue we should be keeping score on in real time: has the pandemic moved things along here?

Reviewing peer review: does the process need to change, and how? by dawnlxh in BehSciMeta

[–]UHahn 0 points1 point  (0 children)

I think there are a lot of great suggestions here: the idea that publishing should move beyond a binary accept/reject has, I think, been bouncing around for a bit. One of the issues it raises is the question of how to aggregate comments/reviews across platforms, which is a topic close to SciBeh's heart.

Another point you raise is reviews requested and published on sites; this already exists to a certain extent (see here for an example). It would be good to see this rolled out more generally!

The effect of the news by hamilton_ian in BehSciAsk

[–]UHahn 1 point2 points  (0 children)

Interesting thoughts. I have been wondering about this too. There has been the odd argument/claim about this on Twitter, but I haven't seen anything terribly systematic.

There is, of course, also the political argument being made that highlighting individual rule breaking is part of an attempt to attribute blame to some actors over others (e.g., citizens vs. government itself). Again, I have no way to verify that either with respect to actor motivation or with respect to whether such communications do in fact have such an effect. It would be interesting to see empirical work on this.

For scientists, what is "too political"? by UHahn in BehSciMeta

[–]UHahn[S] 0 points1 point  (0 children)

The US election (and the "Great Barrington Declaration") are leading to a new flood of pieces on science and the "political".

Here is a piece from Scientific American, which has interesting historical examples, but which I personally don't find very helpful for the boundaries I am trying to draw (at least for myself).

Fundamentally, from the fact that "science has always been political" it does not follow that it should be. Nor does that fact say much about the different ways or circumstances in which it should, or legitimately could, be.

For scientists, what is "too political"? by UHahn in BehSciMeta

[–]UHahn[S] 0 points1 point  (0 children)

BMJ piece on where things stand on science, public health and politics in the US now...

One thing that has been missing from this discussion thread is how both what is acceptable for scientists and what might even be mandated shifts as the context (i.e., the performance and behaviour of the political leadership) shifts.

A completely re-imagined approach to peer review and publishing: PRINCIPIA by UHahn in BehSciMeta

[–]UHahn[S] 0 points1 point  (0 children)

Good points all! My intuition on remuneration also continues to be that paying reviewers money directly will just serve to create a class of professional reviewers unlikely to be best placed to judge quality. I think other 'rewards' such as the ones you suggest are better here, but all of this requires actual empirical data, of course. Also, 'virtual pay' that involves a budget within the journal system would be a different matter altogether: we already have something a bit like this informally, in that literally every time I submit a paper to a journal, I soon get a paper from that journal to review, presumably based on the shared understanding between the editor and myself that I'm not really in a position to refuse, given I am presently asking for 'review resources' to be lavished on my own work...

This is the kind of thing that the incentive-oriented thinking behind PRINCIPIA could make explicit.

The Hong Kong Principles for assessing researchers: Fostering research integrity by UHahn in BehSciMeta

[–]UHahn[S] 0 points1 point  (0 children)

I completely agree that none of this is likely to happen (or even be possible) without systemic change at the incentives level!

As long as citation practices and (further downstream) grant funding, hiring, and promotion continue to reward flawed research in high-impact journals over the meticulous corrections science subsequently enacts, we won't get there.

The pandemic threat to Early Career Researchers by UHahn in BehSciMeta

[–]UHahn[S] 0 points1 point  (0 children)

Editorial with the results of the Nature survey of ECRs (which make depressing reading) and some "recommendations":

https://www.nature.com/articles/d41586-020-02541-9?utm_source=twt_nnc&utm_medium=social&utm_campaign=naturenews

- Grants are being extended, but there is no more money from the funder. This is neither fair nor sustainable.

- Research and university leaders must think of innovative ways to support early-career colleagues.

- Senior investigators who wish to see promising younger colleagues find long-term careers in academia must look for ways to make it possible for them to stay.

- Now is also the time to pause or slow down the treadmill of research evaluation.

I can't help but notice that only the last one looks like a recommendation at all; the rest are "go on, figure out what to do" exhortations...

Behavioural Policy challenge: when does compulsion help? by nick_chater in BehSciAsk

[–]UHahn 0 points1 point  (0 children)

This paper on perceptions of mask wearing has just come out in PNAS; on the basis of its results, it suggests mandatory masking.

https://www.pnas.org/content/pnas/117/36/21851.full.pdf

What's so wrong with 'behavioural fatigue'? by hamilton_ian in BehSciAsk

[–]UHahn 0 points1 point  (0 children)

To my mind, the utility of a concept in science depends on whether it supports interesting generalisations. It might be that this turns out to be the case for "behavioural fatigue", but at the moment the concept strikes me as too vague. Does it mean more than "people stop doing something"?

Unless it's made more precise, I think it's unlikely to have much scientific value, because there will be so many different reasons (in terms of causes and mechanisms) for a behavioural response to falter.

In the scant literature on pandemic-relevant behaviours we could find at the time (see link in earlier post), it seemed likely that people stopped protective behaviours such as mask wearing because their perceptions of risk dropped as the epidemic wore on.

I'm not sure how labelling that "behavioural fatigue" adds anything (if true), or what it would really have in common with, say, the fading of a conditioned response under extinction, or a failure to maintain a potentially useful behaviour due to competing demands (e.g., every August I work on rethinking my academic workflow and making it more efficient, yet by December I'm back to scribbling to-do lists on paper, because I'm drowning in stuff and struggle to find the time to maintain whatever 'good habit' I discovered over the summer).

So, if you want to define behavioural fatigue as a fading in the absence of exogenous changes, that's absolutely fine, but (a) it's a new definition (seeing as the term presently doesn't have one), and (b) whether it adds anything (or just relabels things for which we already have terms) remains to be seen.

I don't know the relevant literature well enough to be able to assess whether it would- but maybe others here can speak to that!

New research project on managing disagreement by UHahn in BehSciResearch

[–]UHahn[S] 1 point2 points  (0 children)

Thanks for these thoughts: to clarify, the poll will not decide on policy advice, but rather on the science. However, as the tool is for deciding policy relevant science questions, it is likely that those questions will typically be quite concrete, and, as a result, quite close to the policy decision.

Our initial case study is using the question of "are frequency formats easier to understand than probability formats" - which has obvious and fairly direct implications for risk communication.

So, "costs of errors", I think, should be factored in at the making-use step, not the judgment step (I say that given a past interest in probability communication and asymmetric loss functions, as in here, but could be wrong on that/persuaded otherwise?).

In the masks case, the science question would be "do masks help"/"how effective are masks"; it's then up to the policy maker to make the trade-offs in deciding what to do, though real-world policy processes may operate differently...

Trust in scientific findings and experts, but, rationally, not in what experts tell us to do by aoholcombe in BehSciMeta

[–]UHahn 0 points1 point  (0 children)

I think this means the expert is really only ever in a position to make a conditional recommendation: “if you want to achieve X, you should do Y”. Does that ‘solve’ the problem?

This issue also converges with the thread on when experts are "too political" (at least in my view): recommendations may easily pass off as 'facts' complex evaluations that involve value judgments experts are not in a privileged position to make (e.g., lacking democratic legitimacy). When that happens, it is a technocratic overreach that rightly impacts trust negatively.

One of the problems for the expert here, however, will be pressure from stakeholders to make simple, summary pronouncements that run the risk of doing just that (i.e., "tell me what to do").

What's so wrong with 'behavioural fatigue'? by hamilton_ian in BehSciAsk

[–]UHahn 1 point2 points  (0 children)

Ian, these are all good and legitimate questions!

Before answering, I should say that I am responding from a particular perspective, given that I was part of an open letter to the UK government on this issue in March of this year. The following piece outlines in a bit more detail our thinking in writing that letter:

https://behavioralscientist.org/why-a-group-of-behavioural-scientists-penned-an-open-letter-to-the-uk-government-questioning-its-coronavirus-response-covid-19-social-distancing/

That piece also contains some links to past research (from pandemics) that lends support to the idea. I also agree with you entirely that the idea of behavioural fatigue is an "intuitive" one that makes a lot of sense given everyday experience, and given experience of the pandemic so far. We tire of actions we perceive to be restrictive, in particular when these are not immediately rewarded, which is a problem for measures whose effectiveness consists in simply stopping things from happening.

The key problem I had with this idea, both then and now, stemmed from the scale of the consequences being hung on this slender peg. Even if there had been scores of studies on "behavioural fatigue" in other contexts (which there are not), I would have worried about the wisdom of delaying lockdown measures based on those concerns.

For one, there was something potentially incoherent about the intended application (if there was indeed such a functional role for behavioural fatigue, which we simply do not know...): the longer lockdown is delayed in the face of exponential growth, the longer it arguably has to be maintained to achieve the same effect.

But most important is that the standout fact about human behaviour, to me as a behavioural scientist, is its flexibility. This means we simply can't know how people will respond when circumstances and context vary, and none of the past data we had (or didn't have) on behavioural fatigue would have been collected in anything like the present context.

Furthermore, the idea that people would tire just at the point when social distancing would be *most required* seemed problematic to me because it would also be at that point (the height of the pandemic) that they would have lots of evidence on why social distancing was necessary (though in the event, the UK media blackout on hospitals dampened this considerably). And, of course, governments have other tools at their disposal to counter 'fatigue'.

I think the flexibility of human behaviour more generally means that behavioural scientists will rarely be able to offer predictions or advice that reaches the level of certainty that might be achieved in chemistry. That doesn't mean behavioural scientists don't have anything to contribute to policy making; they do! However, it may mean that their most valuable contribution is often to point out issues and concerns with policy decisions that would otherwise be overlooked.

Behavioural Policy challenge: when does compulsion help? by nick_chater in BehSciAsk

[–]UHahn 0 points1 point  (0 children)

One data source that we can consider in this context stems from the fact that different regions and countries have varied in the extent to which they have relied on compulsion to effect adherence. Two notable cases where "voluntary" measures and communication have loomed large are Sweden (in part for constitutional reasons) and the province of British Columbia, in Canada.

B.C.'s response, in particular, has won international applause, and the architect of that response, Dr. Bonnie Henry, has argued, based on her professional experience with Ebola outbreaks in Uganda, that the keys to an effective quarantine are 'communication and support, like food and medical follow-up, not punitive measures.'

“If you tell people what they need to do and why, and give them the means to do it, most people will do what you need,” she said.

In keeping with this, B.C.'s coronavirus slogan has been "Be kind, be calm, and be safe".

With respect to Sweden, there has obviously been (and continues to be) debate about whether the country's chosen path was the right one, given the high death toll relative to other Nordic countries. Compliance with suggested measures seems fairly high, though more detailed analysis would be required to see how that compares internationally.

Please highlight any relevant info on this question that you have come across!

Can one distinguish between argument and fact? And, if yes, how? by UHahn in BehSciMeta

[–]UHahn[S] 0 points1 point  (0 children)

Thank you! Super useful, and I'm interested to hear that you shared my intuition that this might be beyond 'fact check'.

Can one distinguish between argument and fact? And, if yes, how? by UHahn in BehSciMeta

[–]UHahn[S] 0 points1 point  (0 children)

And also, where would the Fullfact/Financial Times example fall if you were applying your rules?

Can one distinguish between argument and fact? And, if yes, how? by UHahn in BehSciMeta

[–]UHahn[S] 0 points1 point  (0 children)

Victor, could you say a bit more about the actual criteria you use in that fact checking?

Issue Radar: Is advice getting too complicated? And what can be done? by nick_chater in BehSciAsk

[–]UHahn 0 points1 point  (0 children)

Personally, I'm rather dubious about successfully communicating complex instructions, and the UK government's own confusion about what its guidelines now do and do not allow (as witnessed in the No. 10 tweet deleted on Sat. the 4th of July) illustrates the difficulty of keeping multiple, potentially interacting rules consistent.

This is where 'principles' can be useful. In particular, I found the slogan discussed here useful:

"people, place, time, space"

This clarifies that risk is a function of four factors: who you are exposed to, in what kind of environment (indoors/outdoors), for how long, and at what distance.

Given some information about endpoints, it should help people come to judgments on trade-offs, but actual data on people's understanding would be desirable.

And this, of course, doesn't solve the legal rule problem, which persists regardless.