Comment author: Peter_Hurford  (EA Profile) 08 October 2018 05:12:04PM 1 point [-]

Instead, you could assign based on whether they have an odd or even number of letters in their name.

You could SHA-256 hash the names and then randomize based on that. Doing so should remove all chances of confounding effects.
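A minimal sketch of this idea in Python (the salt string and group labels are arbitrary illustrative choices):

```python
import hashlib

def assign_group(name: str, salt: str = "leaflet-study") -> str:
    """Deterministically assign a name to 'treatment' or 'control'.

    Hashing the salted name gives an assignment that is effectively
    random with respect to name features (length, first letter, etc.)
    yet reproducible by anyone who knows the salt.
    """
    digest = hashlib.sha256((salt + name).encode("utf-8")).hexdigest()
    # The low bit of the hash acts as a fair coin flip.
    return "treatment" if int(digest, 16) % 2 == 0 else "control"

groups = {n: assign_group(n) for n in ["Alice Smith", "Bob Jones", "Wei Chen"]}
```

Because the assignment is a pure function of (salt, name), it can be recomputed at the pigeonholes from a printed list, with no live randomisation needed during distribution.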

Comment author: Derek 11 October 2018 09:15:40PM 0 points [-]

It's been a long time since I wrote the comment, but I think I was under the impression the allocation had to happen at the point of distribution, using only the names on the pigeonholes. But if you could get a list of students that exactly matched the names on the pigeonholes in advance of distribution, then I agree randomising hashes would be ideal. I doubt you'd get this from administrators due to data protection issues, but presumably you could go round and manually record the names. That would be very time-consuming for a large study, but perhaps worth it to avoid any doubts about the randomisation.

Note that this would not remove all risk of selection bias because the allocation would still not be concealed, i.e. the people putting the leaflets in the pigeonholes would know who was assigned to each group. It is possible they would consequently give some of the leaflets to the wrong people, e.g. if they wanted to increase the effectiveness of the intervention they might be influenced by personal knowledge of individuals (better to give a leaflet to someone who is not already vegan) or presumed characteristics (gender, ethnicity, etc) that are correlated with likelihood of being influenced by the leaflets. This may seem far-fetched, but unconcealed allocation is associated with higher effect sizes in medical trials so we shouldn't be too quick to assume it wouldn't matter here.

https://handbook-5-1.cochrane.org/chapter_8/8_10_1_rationale_for_concern_about_bias.htm

https://handbook-5-1.cochrane.org/chapter_8/8_10_2_assessing_risk_of_bias_in_relation_to_adequate_or.htm

One solution is to give every student a leaflet inside an opaque sealed envelope, with some getting the 'treatment' and some a 'control'. But this introduces additional complexity, e.g. it could cause (further) 'contamination' as students compare what they got in that weird envelope, it reduces external validity (assuming leaflets would not normally be in envelopes), and the control leaflet would have to be very carefully designed so as not to affect the outcomes of interest while being very similar in every other respect (e.g. a leaflet promoting exercise may indirectly influence dietary choices).

Comment author: Derek 10 February 2017 03:21:12PM 3 points [-]

Good to see some thought being put into this topic. A few comments/concerns below.

To pseudo-randomise based only on names, I wouldn't use the first letter. I haven't checked, but I'd guess that some letters strongly correlate with confounders like ethnicity or class: many names starting with X could be Chinese, many with Z Arabic, etc.

Instead, you could assign based on whether they have an odd or even number of letters in their name. It seems unlikely that this would be correlated with confounders, or that there would be very uneven numbers in each group, but I suppose it's possible, e.g. if the most common few names all happen to have an even (or odd) number of letters and also be associated with class/ethnicity/sex. Using their full name would mitigate this but would slow down leafleting compared to only using the first or last name, and perhaps introduce counting errors. Maybe you can get a fairly random sample of student names from somewhere and check whether using just one name is likely to be problematic.
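A quick sketch of the parity rule; I've assumed you'd count only alphabetic characters, so spaces, hyphens and apostrophes can't change the result:

```python
def assign_by_name_parity(full_name: str) -> str:
    """Assign by whether the name has an even or odd number of letters.

    Only alphabetic characters are counted, so punctuation and spacing
    variants of the same name get the same assignment.
    """
    letters = sum(1 for c in full_name if c.isalpha())
    return "treatment" if letters % 2 == 0 else "control"
```

Note the dilution worry mentioned above still applies: short first names like Liz vs. Elizabeth land in different groups, which is another reason to fix one canonical name per pigeonhole.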

Obviously you need to ensure that the name they use on the survey is exactly the same as the one on the pigeonhole (Elizabeth/Beth/Liz/Lizzy; middle name included or not, etc), unless you specifically ask whether they received a leaflet, which introduces other problems. I would probably suggest adding to the questionnaire something like: "(Please write your name exactly as it is on your pigeonhole.)". Presumably family names are less likely to be modified/dropped, but as noted, only using those may hinder randomisation.
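Even with that instruction, some survey names won't match exactly. A fuzzy fallback can catch near-misses (typos, dropped middle names) and flag everything else for manual review; this sketch uses Python's stdlib difflib and a hypothetical roster:

```python
from difflib import get_close_matches

# Hypothetical pigeonhole roster for illustration.
pigeonholes = ["Elizabeth Harper", "James O'Neill", "Wei Chen"]

def match_to_pigeonhole(survey_name, roster=pigeonholes, cutoff=0.8):
    """Best-effort match of a survey name to the pigeonhole roster.

    Returns the closest roster name above the similarity cutoff,
    or None when nothing is close enough (manual review needed).
    """
    hits = get_close_matches(survey_name, roster, n=1, cutoff=cutoff)
    return hits[0] if hits else None
```

The cutoff is a judgement call: too low and you mis-link different students, too high and you discard genuine responses, so ambiguous cases are better resolved by hand.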

I'd guess that many of the leaflets would be seen by the control group. People will show them to their friends, leave them lying around, etc. This would dilute the observed effect of the intervention. I'm not sure how to avoid it without going for cluster randomisation, which is even more difficult to get right. I suppose it would at least give you some basis for saying your findings are conservative; Type 1 errors (false positives) are generally seen as worse than Type 2 errors (false negatives), and given the state of evidence for animal-focused interventions, it is probably wise to be cautious. There would still be some factors pushing the other way though, so I'm not sure it would actually be conservative.

Opportunity permitting, I would consider some follow-up research to establish mechanisms. For example, you could ask (a random subset of?) changers why they changed, whether they remember seeing the leaflet, whether they showed it to others, what they thought about it, whether they were planning to change anyway, etc. This could increase (or decrease) confidence that the result shows a true effect. You might be able to get one or two of these into the original survey; to avoid influencing responses, use a web form that requires them to submit the 'change' answers before viewing subsequent questions; this is quite a common design in my experience. You could also include a social desirability instrument, as in the ACE study.

The use of incentives could introduce response bias. Presumably the extent to which money motivates response will be associated with SES and perhaps ethnicity and other characteristics. Could still be justified, though, in order to boost response. Not sure. Given the huge impact this has on costs, it might make sense to do an unincentivised one first, then perhaps add the incentives in later studies (obviously at different institutions). This would also function as a cheap 'test run', allowing you to refine the methodology and logistics.

I suspect the referral system would introduce considerable sampling bias. Those who pass it on, and perhaps those who are friends with referrers and who respond to a referred task, are unlikely to be representative of the study population: they'd presumably be more conscientious, have more friends, have more interest in dietary change, etc. It seems odd to go to all that effort to randomise then include a method that would undermine it so much. I'd only do it if it was impossible to get an adequate response otherwise.

Likewise publishing articles about it in advance. Those who read them may not be representative, e.g. more likely to be native English speakers, conscientious/motivated, interested in dietary change (or study design), etc. Some people's decision to go veg could also be influenced by the articles themselves. And obviously you couldn't publish them if you were trying to conceal the purpose of the study as others have suggested.

When calculating sample sizes, I'm not sure it's a good idea to rely heavily on one estimate of base rate change. Presumably it fluctuates considerably in response to time of year (new year's resolutions), news stories, other campaigns, etc., and correlates with location and many other factors. It would be a shame to waste resources on an under-powered study.
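To see how sensitive the required sample size is to the assumed base rate, here is a standard two-proportion power calculation (pure stdlib; the base rates and the 1/200 effect are illustrative guesses, not figures from the proposal):

```python
from math import sqrt, ceil
from statistics import NormalDist

def n_per_group(p_control, p_treat, alpha=0.05, power=0.8):
    """Sample size per arm for a two-sided two-proportion z-test
    (textbook normal-approximation formula)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # critical value for alpha
    z_b = NormalDist().inv_cdf(power)          # critical value for power
    p_bar = (p_control + p_treat) / 2
    num = (z_a * sqrt(2 * p_bar * (1 - p_bar))
           + z_b * sqrt(p_control * (1 - p_control)
                        + p_treat * (1 - p_treat))) ** 2
    return ceil(num / (p_control - p_treat) ** 2)

# Same 1/200 absolute effect under different assumed base rates.
ns = {base: n_per_group(base, base + 1/200) for base in (0.01, 0.02, 0.04)}
```

The required n grows quickly as the assumed base rate rises (variance increases with p), which is exactly why pinning the power calculation to a single base-rate estimate is risky.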

I would be a little surprised if the true effect was more than 1/200 for any dietary change (at least a durable one), and not surprised if it was orders of magnitude smaller. If you want to look at subgroups (vegan, reducetarian, etc), and maybe even if you don't, I'd guess you'd need a much larger sample than proposed. But these are just hunches.

As usual with trials (and life), it seems there would be a number of hard trade-offs to be made here. I suppose whether it's worth doing depends on what results you expect and how they would influence action.

To play devil's advocate: You think a null result is likely but that it wouldn't justify stopping leafleting, in which case the study would not have been helpful in terms of allocating resources. Depending on the size and quality of the study, it would also be unwise to put much weight on a weak positive or negative effect. A strong negative is unlikely, and in such a case we should probably conclude that there was a problem with these particular leaflets or with the study design (or that we just got really unlucky).

So only a strong positive (>1/150?) seems like a very useful outcome. Even then, we should still suspect influence from chance, bias and/or confounding, but it would give some justification for continuing, and perhaps increasing, leaflet campaigns; and for investing in further studies to confirm their effectiveness and investigate what strategy works best, for whom, why, and in what circumstances. However, a strong positive seems unlikely (<15%?). Therefore, perhaps the question is whether it is worth doing the study if there is only a small chance of getting a result that would (or should) change our activities.

I suspect the answer to the last question is "yes", and I'm not actually sure the other results would be useless (e.g. a null or negative result might prevent inordinate future investment in leafleting based on poor quality past studies). There are also other benefits to running the study, like boosting the credibility of, and skill development within, the movement. But it's worth thinking carefully about what exactly we expect to get from the study before leaping into it.