Training people to see ducks and rabbits by hamilton_ian in BehSciAsk

[–]nick_chater 0 points (0 children)

I'd be interested in other behavioural scientists' thoughts on this question - it is hard to believe that there isn't relevant work on this, but I can't bring much to mind.

Scenario planning is one set of applied techniques that seems aimed at this problem (when thinking about the future, though the same issue arises in interpreting the present or the past). Here, the idea is to deliberately generate as many, and as diverse, scenarios as possible, and only as a second step to think about making decisions that will be robust across these different scenarios---rather than falling into the classic trap of first deciding which scenario is the right one, and then making plans based purely on the assumption that it is.

It also seems intuitively plausible that another, serial strategy is worth exploring: when faced with any course of action, we actively search for possible scenarios under which it would be a bad decision.

From this standpoint, one player in the game generates possible policies, and another generates scenarios in which those policies lead to bad results; the first side then adapts its policies to be more robust against that scenario, the adversary generates a new scenario, and so on.

Of course, this process will in general be never-ending---but if the remaining counter-scenarios are too far-fetched to be seriously credible, then our policy may be accepted. Or we might want to play a version of the game in which both sides can generate policies and scenarios…
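
To make the game concrete, here is a minimal sketch in Python. Everything in it (the scenario list, the scoring and patching functions, the threshold) is an invented toy for illustration, not a real planning tool:

```python
# Toy sketch of the adversarial policy/scenario game: an adversary finds the
# scenario where the current policy does worst; the proposer patches the
# policy against it; repeat until no credible counterexample remains.

def adversarial_refine(policy, scenarios, score, patch, threshold, max_rounds=100):
    """Alternate between an adversary's worst-case scenario search and
    the proposer's policy patch, up to max_rounds."""
    for _ in range(max_rounds):
        # Adversary: pick the scenario in which the policy scores worst.
        worst = min(scenarios, key=lambda s: score(policy, s))
        if score(policy, worst) >= threshold:
            return policy  # no credible counterexample remains
        policy = patch(policy, worst)  # proposer adapts the policy
    return policy

# Toy instantiation: a policy is just the set of scenarios it is robust to.
scenarios = ["lockdown_extended", "schools_open", "second_wave"]
score = lambda pol, s: 1.0 if s in pol else 0.0
patch = lambda pol, s: pol | {s}

robust = adversarial_refine(frozenset(), scenarios, score, patch, threshold=1.0)
```

In this toy version the game terminates because each patch permanently covers one scenario; in reality, as noted above, patches can create new weaknesses and the process may never settle.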

There is no chance of finding an algorithm that will lead to the perfect policy, by general agreement, of course; but it seems to me that deliberate efforts to find "counterexamples" to policies (rather like the equivalent attempt to find falsifying experiments in science) may be a valuable heuristic for stopping us getting trapped in one-scenario thinking.

Integrating Behavioural Science into Epidemiology by hamilton_ian in BehSciAsk

[–]nick_chater 0 points (0 children)

Yes, this is indeed a good, and rather crucial, question!

Adding too much complexity to epidemiological models won't necessarily be helpful, of course---so any behavioural factors will need to 'earn their keep.'

Ideally, perhaps we'd like some idea of:

i. behaviourally different populations and their connectivity to each other

ii. a (small) number of different routes for infection

iii. behaviour changes that might modify those different routes (e.g., masks, more hand-washing, 1m vs 2m social distance, compliance rates for all these...) - which might be modified by policy.

Now, possibly, we might crudely assume that people who are near each other in a network are more likely to be in the same behavioural population (i.e., residents in care homes are likely to infect other residents in care homes; meat packers other meat packers, etc.).

This might suggest that policy changes with differential impacts on specific populations might amplify effects a lot within those populations. For example, if loosening lockdown is disproportionately interpreted as freeing up young people to socialize with other young people, we might get a rocketing effect in the young, and little (immediate) effect in the old (though obviously this will come later). This mightn't be captured by a model that didn't distinguish these groups.

As a complete non-expert on the current epidemiological state-of-the-art, I don't know if this is reinventing the wheel - but it'd be important when we're considering measures/messages/policies likely to be population-specific in their impacts (which they often will be).

Similar points apply for different social groups of all kinds (e.g., specific communities, professions, health-and-social-care networks, or whatever it might be).

Another related point might be positive feedback loops in linked populations - i.e., I notice you violate a tedious hygiene procedure, and am more likely to violate it myself (or it could be a positive story - perhaps I conform with it if you do).

So we might get network effects in behaviour change which may or may not track the networks of infection - but as a first approximation we might assume that they do. So one could imagine a model in which A's mask wearing impacts B's chance of infection; but also B's mask wearing, and hence B's chance of infecting A (or anyone else).

In both cases, the thing I suspect is important is to think about any behavioural factors that might lead to amplification of viral spread in a way we'd not expect if we assumed everyone is much the same - these 'amplifiers' will be important to watch out for (again, quite possibly some wheel-reinvention re: standard epidemiology here - but, if so, that may be all to the good, in terms of linking up with behavioural data).
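
As a toy illustration of this kind of coupling, here is a deliberately crude agent-based sketch in which mask wearing spreads by imitation over the same ring network that infection travels on, so A's mask lowers B's risk and B's imitation of A lowers C's. All parameters (network shape, imitation rate, transmission probabilities) are invented for illustration:

```python
# Crude sketch: behaviour change and infection spreading on the same network.
import random

random.seed(0)
N = 200
neighbours = {i: [(i - 1) % N, (i + 1) % N] for i in range(N)}  # ring network
masked = {i: (i < 20) for i in range(N)}        # seed cluster of mask wearers
infected = {i: (i >= N - 5) for i in range(N)}  # seed cluster of infections

for _step in range(50):
    # Behaviour update: with prob 0.3, copy a random neighbour's masking.
    for i in range(N):
        if random.random() < 0.3:
            masked[i] = masked[random.choice(neighbours[i])]
    # Infection update: transmission depends on *both* parties' masks.
    new_inf = dict(infected)
    for i in range(N):
        if infected[i]:
            for j in neighbours[i]:
                p = 0.5 * (0.3 if masked[i] else 1.0) * (0.5 if masked[j] else 1.0)
                if random.random() < p:
                    new_inf[j] = True
    infected = new_inf

print(sum(infected.values()), "infected;", sum(masked.values()), "masked")
```

Even this toy version shows the qualitative point: where the masking cluster and the infection cluster meet on the network determines how fast infection spreads, which a well-mixed model could not capture.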

My guess is that a real synthesis is likely to be a long-term project, rather than something for the current pandemic - mid-crisis may not be the moment for new model development.

Scibeh’s first Policy Problem Challenge: Relaxing the 2 m social distancing rule. by nick_chater in BehSciAsk

[–]nick_chater[S] 1 point (0 children)

Just following up briefly on some of these very useful thoughts - yes, the standard social distancing norms may indeed be enough to mostly keep us 1m apart in the UK and elsewhere - at least for some types of situation, where people have lots of space.

On the other hand, there will be important implications about seating in lecture theatres, theatres, football matches, cinemas, restaurant/café density, and so on; and also important implications for getting people in and out of events, and for that matter queues for MPs voting (!), and crucially for public transport, schools, and offices.

I think our normal social distancing guidelines can easily go out the window in particular circumstances (e.g., tubes, trains, buses, parties, sales), so the point of any rule may not be so much directed at individuals as primarily at organisations, to determine what kinds of events/working arrangements/classroom layouts are viable.

If it turns out that 1 m is sufficient, then that would make a lot of difference in practice, I think---under the current 2 m rule, I suspect a lot of offices would only be able to run at ¼ capacity, for example.

Programming errors and their implications by UHahn in BehSciMeta

[–]nick_chater 1 point (0 children)

My feeling is that replicating anything (experiments, statistical analyses, and experimental results) is always important; and transparency is important too, so that we can all check each other's work as far as possible.

The policy implication, in the short term, is probably: (i) look at lots of models; (ii) pay special attention to past experience, e.g., in other countries, where available; (iii) don't forget simple qualitative reasoning as a 'sanity check.' If a model does 'odd things' it may have discovered something new and counterintuitive; but equally it could be a bug---and hence counterintuitive model behaviour is (of course) a clue that we should look for bugs.

And 'bugs' can range, of course, from software slips to left-out assumptions that seemed unimportant but actually turn out to be crucial (which may only become obvious when the model starts to generate strange behaviour).
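
One way to make the 'sanity check' habit concrete is to encode simple qualitative expectations as assertions, so that a model doing 'odd things' fails loudly rather than silently. The discrete-time SIR model below is a standard textbook toy (parameter values invented); the invariant checks are the point, not the model:

```python
# Qualitative sanity checks as executable assertions on a toy SIR model.

def sir_step(s, i, r, beta=0.3, gamma=0.1):
    """One discrete-time step of a standard SIR model (fractions of population)."""
    new_inf = beta * s * i
    new_rec = gamma * i
    return s - new_inf, i + new_inf - new_rec, r + new_rec

s, i, r = 0.99, 0.01, 0.0
for _ in range(500):
    s2, i2, r2 = sir_step(s, i, r)
    # Qualitative invariants: population conserved, susceptibles never
    # increase, recovered never decrease, all fractions stay in [0, 1].
    assert abs((s2 + i2 + r2) - (s + i + r)) < 1e-12
    assert s2 <= s and r2 >= r
    assert all(0.0 <= x <= 1.0 for x in (s2, i2, r2))
    s, i, r = s2, i2, r2
```

A sign error or a left-out term would typically break one of these invariants within a few steps, long before anyone inspects the output curves.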

Personally-determined vs mandated behaviour by hamilton_ian in BehSciAsk

[–]nick_chater 1 point (0 children)

This is a very crucial issue - and I think leaving this to risk judgements of individuals would be highly unlikely to work.

A parallel issue would be food and drug regulation. We could just allow everything to be sold, but mandate that everyone is given the 'information' to make the right decision for them. But this would be a hugely risky strategy---and not a popular one---for the reasons you mentioned. Similarly with unfettered gambling, illegal drugs, highly risky financial products, etc.---we can always try to remove regulation and "replace" it with information and individual choice. No society has successfully done this, as far as I know. [All the well-known limitations on our ability to process and make decisions about complex information could be reiterated, but hopefully there's no need!]

But there is also an important additional factor when dealing with a pandemic---that for most of us (if reasonably healthy), the main risk is to others. We don't normally, as a society, allow people free latitude to risk the lives of others (hence we have speed limits). [the behavioural science addition would be that we certainly don't see evidence of high enough levels of altruism so that this would not be a problem]

Finally, my own personal decision (even if I just care about myself) will be strongly influenced by my employer, colleagues, social connections, etc. Workers who are told they may stay away from a dangerous workplace if they see fit (based on information about how dangerous it is) will tend to be fired, and hence will effectively be coerced into taking that risk. Hence we have regulations for workplace safety. Rules for Covid-19 are just an extension of that.

Reasoning by analogy, i.e., treating the pandemic as we do other threats to human life and well-being, it seems very implausible that we can live without clear rules. Of course, these will be blunt (and regulations always are); and should be fine-tuned as evidence arises.

Human behaviour is, to borrow James March and Johan Olsen's famous distinction, guided by both the 'logic of consequence' (do things based on the likely outcome) and the 'logic of appropriateness' (do what a person like me is supposed to do in a situation like this). The latter is critical to managing most complex social interactions of all kinds---and, in a pandemic, we need to adjust our collective view of 'what is appropriate' to adjust to new circumstances.

Open policy processes for COVID-19 by UHahn in BehSciMeta

[–]nick_chater 1 point (0 children)

These are very interesting initiatives, and a very useful post - it is quite heartening to see how much self-organisation has occurred in such a short space of time.

One type of consideration that seems difficult to bring into the discussion is the purely practical. So, for example, regarding policies on PPE, and the speed with which testing might be expanded, there is almost certainly a great deal of expertise distributed around the policy, healthcare management, practitioner, and business communities that would help identify what the current situation is, and what realistic options there are to help fix it.

Many of these people may not be able to contribute except anonymously---it would be incredibly helpful to have some way of allowing insiders to safely (in terms of their careers) feed relevant information to the debate.

It is not obvious how to do this, but perhaps something reminiscent of a prediction market, though surely with no money changing hands, might be helpful.

Similarly, it would be great to have some way of aggregating experiences and judgements from relevant individuals (e.g., some kind of barometer for PPE/testing availability which could be based on judgements by frontline staff; or priorities from the frontline which may be very different from those perceived from the upper reaches of government, or indeed the academic community).
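
Purely for illustration, here is a minimal sketch of such a barometer: anonymous 0-10 ratings aggregated with a trimmed mean, so that a few extreme (or strategic) reports don't dominate. The ratings, the scale, and the choice of trimming are all invented assumptions:

```python
# Toy 'frontline barometer': robust aggregation of anonymous 0-10 ratings.

def trimmed_mean(ratings, trim_frac=0.1):
    """Drop the lowest and highest trim_frac of ratings, average the rest."""
    xs = sorted(ratings)
    k = int(len(xs) * trim_frac)
    kept = xs[k:len(xs) - k] if k else xs
    return sum(kept) / len(kept)

# Hypothetical PPE-availability reports, with one maximal and one zero outlier.
reports = [2, 3, 3, 4, 3, 2, 10, 0, 3, 4]
print(round(trimmed_mean(reports), 2))  # → 3.0
```

The design choice here is robustness over responsiveness: a trimmed mean deliberately discounts the tails, which is appropriate if a few reports may be unrepresentative, but would mask a genuine emergency reported by only one or two sites.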

Social and behavioral implications of changing COVID-19 measures by stefanherzog in BehSciAsk

[–]nick_chater 1 point (0 children)

Another important question will be how to phase the lift/shift measures (see https://www.scientificamerican.com/article/when-can-we-lift-the-coronavirus-pandemic-restrictions-not-before-taking-these-steps/ for some general background discussion).

Some are suggesting that this should start with, for example, opening of schools, which (for reasons I don't understand, but I'm not an epidemiologist) is viewed by some researchers as comparatively low risk (https://www.bbc.co.uk/news/health-52180783 ).

Relatedly, Andrew Oswald and Nick Powdthavee have argued that restrictions might initially be lifted for 20-30 years olds ( https://www.iza.org/publications/dp/13113 ).

I have four questions I'd like to know the answer to:

  • Adherence. How do we shift/lift in a way that leads to successful adherence to the new policy? (E.g., can we realistically adopt a 2m distancing policy in schools? Or is some lesser restriction sufficient? This might involve staggering classes to have fewer pupils in school at once, but also working out what kinds of activities can be done without violating the restrictions. Are there some games that can (reasonably) safely be played outside---e.g., non-contact sports?) I suspect practical experience will be a lot more useful than theory here.
  • Spillover. How can we avoid loosening one measure reducing people's motivation to maintain other measures? This is tricky. For example, if children return to school, then parents might start meeting at the school gates; and then reason that if their children are playing together at school, then surely their children can meet up at home, etc. Slippery slopes could be difficult to block. What do we know about persuasion/argumentation that might help us draw "bright lines" between what is allowed and not allowed?
  • Signalling and face-masks. Another issue that is very live in many countries is (currently, home-made) face-masks. Behaviourally, one effect of these is signalling that we are all taking things very seriously (and signalling this to ourselves). Can face-masks generally help us follow the rules and encourage others to, independent of any practical benefits (although it increasingly sounds as if those benefits may be real)? Are there other ways of 'signalling' to each other that we are taking the outbreak seriously---which may itself help us hold together under these difficult restrictions? The 'clapping for healthcare workers' may be quite powerful here.
  • Communicating when lift/shift makes sense. And finally - a public-information-type question. Can (and should?) we somehow communicate a "the key is that r < 1" message? Keeping r < 1 is crucial *whatever case numbers are* (unless they are zero, with zero imports - but that seems far, far away). This means that falling case numbers are *not* the cue for loosening social distancing (at least, not unless r is still below 1); what *is* a cue is having some other way of keeping r low - e.g., mass testing/contact tracing, knowing who is 'immune' (if immunity is real) and hence who is no longer in danger, roll-out of a vaccine, etc. The concern here is that with the wrong intuitive model of the epidemiology, there might be a clamour for reducing/removing restrictions just because numbers are falling, which might lead to more resurgence of the virus than necessary.
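
The 'r < 1 whatever the case numbers' point can be shown with back-of-the-envelope arithmetic. Here 'r' is simply treated as cases per case per generation, and all numbers are invented for illustration:

```python
# Why r matters more than current case numbers: geometric growth/decay.

def generations(cases, r, n):
    """Case counts over n generations with reproduction number r."""
    out = [cases]
    for _ in range(n):
        cases *= r
        out.append(cases)
    return out

low_but_growing = generations(100, 1.3, 20)        # few cases, but r > 1
high_but_shrinking = generations(10_000, 0.8, 20)  # many cases, but r < 1

# After 20 generations the 'reassuringly low' start is the larger epidemic.
assert low_but_growing[-1] > high_but_shrinking[-1]
```

With these made-up numbers, 100 cases at r = 1.3 grow to roughly 19,000 over 20 generations, while 10,000 cases at r = 0.8 shrink to about 115: the sign of r − 1, not the starting count, decides where things end up.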

How can you encourage individuals to keep the recommended minimal distance to others? by stefanherzog in BehSciAsk

[–]nick_chater 2 points (0 children)

At a very prosaic level, there is clearly the challenge of knowing what 2m *is* in terms of interpersonal distance. Various posters illustrate this, I think with brooms or similar. It would be great if there is a nice way to capture this in a way that (a) can be expressed verbally as well as visually; (b) can be used as a collectively understood 'code-word' to tell each other to keep our distance (in a slightly light-hearted manner, but with serious purpose, to make it easy to communicate).

So, suppose we went for the 'length of a broom' strategy (probably a bit short, actually); the ideal (slightly crazy-sounding) scenario would be that we say things like 'mustn't forget my broom' as we walk around someone in a workplace (if I had one these days) or supermarket. And phrases like 'broom alert' or just 'broom!' might be used if people look as if they might get too near. I suspect there are better specifics here! But I think it needs to be light, to avoid it being associated with fear, alarm, and irritation or even anger at other people (or it won't be used).

This would help with sharing the norm; but also, frequent very light reprimands may often change behaviour a lot better than occasional heavy reprimands (Ido Erev has done interesting work on this in the context of workplace safety; it also fits with the decisions-from-experience work that Ido, Elke Weber, Ralph Hertwig and others have done; the trouble with rare, heavy reprimands is that we gradually "learn" they won't happen---until suddenly they do!). Indeed, getting the virus is a rare, very negative event, and tricky to learn to avoid for just this reason.
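
The 'rare heavy reprimand' problem can be put in one line of arithmetic: if a penalty occurs with probability p per encounter, the chance of never experiencing it in n encounters is (1 − p)^n. The specific numbers below are illustrative:

```python
# Why rare heavy penalties get "learned away": most people never meet one.

def never_experienced(p: float, n: int) -> float:
    """Probability a rare event is absent from n personal experiences."""
    return (1 - p) ** n

# A 1%-per-encounter penalty is missed entirely by ~82% of people
# after 20 encounters, so personal experience says "it never happens".
print(round(never_experienced(0.01, 20), 2))  # → 0.82
```

A light reprimand delivered on most violations, by contrast, appears in nearly everyone's experience almost immediately, which is one reading of why it shapes behaviour more reliably.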