Comment Re:Solved problem (Score 1) 162
tl;dr: Yes, I would modify your approach. You are proposing a solution that grossly oversimplifies the problem by resting on a huge assumption that rarely holds in real life. It's not even wrong.
Take 100 volunteers, divide them randomly into two groups...
...But if you're giving your advice to 50 people in Group 1, and someone else is giving different advice to 50 people in Group 2, the samples are large enough that the proportion of unmotivated people is going to be about the same in each group -
That is a huge assumption that will not hold. Simply dividing a group randomly does not make the raw results coming out the other end meaningful. Do the 50 people in Group 1 have the same starting weights as Group 2? The same disposable incomes? The same amount of free time? The same stress levels at home? The same family histories? What happens when people move out of the area or otherwise drop out of contact? If you select groups of 50 at random, you will almost certainly end up with different distributions of underlying factors, any of which could plausibly affect both compliance with the regimen and the effect of a well-followed treatment.
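You don't have to take my word for it; you can simulate it. Here's a minimal sketch (all numbers are made up for illustration, a starting weight around 90 kg with 15 kg spread) that splits 100 hypothetical volunteers into two groups of 50 over and over and checks how often the groups' mean starting weights differ noticeably:

```python
import random
import statistics

random.seed(0)

# Hypothetical volunteers: starting weight in kg, drawn from an
# illustrative distribution (not from any real study).
volunteers = [random.gauss(90, 15) for _ in range(100)]

# One random split into two groups of 50, as the parent proposes.
random.shuffle(volunteers)
group1, group2 = volunteers[:50], volunteers[50:]
gap = abs(statistics.mean(group1) - statistics.mean(group2))
print(f"mean starting-weight gap between groups: {gap:.1f} kg")

# Repeat the split many times: sizeable gaps are routine, not rare.
gaps = []
for _ in range(1000):
    random.shuffle(volunteers)
    gaps.append(abs(statistics.mean(volunteers[:50])
                    - statistics.mean(volunteers[50:])))
big = sum(g > 2 for g in gaps) / len(gaps)
print(f"fraction of splits with a >2 kg mean gap: {big:.2f}")
```

With these numbers, something like half the random splits start out with the groups differing by more than 2 kg on average, before anyone has taken any advice at all. And that's just one covariate.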
Maybe, with a sufficient budget, you can increase the sample size until you are more confident that the two groups overlap. But are you sure? You refuse to look at inputs and instead look only at end results, so you will never be sure. And even if you could increase the sample size, in a sufficiently high-dimensional problem (which is generally the most important kind of problem) you can never truly ensure equality between the groups.
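The dimensionality point is easy to demonstrate too. A hedged sketch, assuming 20 independent standardized traits per person (the count and threshold are arbitrary, just to make the effect visible): even if each individual trait is usually balanced by a random split, the chance that at least one of them ends up badly imbalanced grows quickly with the number of traits.

```python
import random

random.seed(1)

n = 50   # people per group
k = 20   # independent standardized traits per person (illustrative)

def imbalanced_fraction(trials=500, threshold=0.4):
    """Fraction of random 50/50 splits in which at least one trait's
    group means differ by more than `threshold` standard deviations."""
    bad = 0
    for _ in range(trials):
        # Fresh cohort of 2n people, each a vector of k traits
        # with mean 0 and standard deviation 1.
        people = [[random.gauss(0, 1) for _ in range(k)]
                  for _ in range(2 * n)]
        g1, g2 = people[:n], people[n:]
        for j in range(k):
            m1 = sum(p[j] for p in g1) / n
            m2 = sum(p[j] for p in g2) / n
            if abs(m1 - m2) > threshold:
                bad += 1
                break
    return bad / trials

frac = imbalanced_fraction()
print(f"splits with at least one badly imbalanced trait: {frac:.2f}")
```

Any single trait crosses that threshold only a few percent of the time, but across 20 traits a majority of splits fail on at least one. Add more traits and it only gets worse.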
So what would I do? First, I would do a better job of splitting up the groups. No, I won't explain exactly how, but you can find plenty of information on good experimental design elsewhere. Second, since even careful experimental design probably won't give perfect overlap, I would build a model of the effect of assignment (intent to treat) on compliance, and of compliance on the outcome variable, and then post-stratify the model against the population to estimate the intent-to-treat effect. The difference between what real researchers do and what you propose is the difference between shooting a bullet and throwing it.
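To give just a taste of the "better splitting" part: one textbook technique is blocked (stratified) randomization, where you pair people off on a key covariate and randomize within each pair. A minimal sketch, again with made-up weights and only a single blocking variable; real designs balance many factors at once:

```python
import random

random.seed(2)

# Hypothetical cohort: one covariate (starting weight in kg).
volunteers = [{"weight": random.gauss(90, 15)} for _ in range(100)]

# Sort by weight, pair off neighbors, and randomize within each pair,
# so both groups receive one member of every pair.
volunteers.sort(key=lambda v: v["weight"])
group1, group2 = [], []
for i in range(0, 100, 2):
    pair = [volunteers[i], volunteers[i + 1]]
    random.shuffle(pair)
    group1.append(pair[0])
    group2.append(pair[1])

def mean(g):
    return sum(v["weight"] for v in g) / len(g)

gap = abs(mean(group1) - mean(group2))
print(f"mean starting-weight gap after blocking: {gap:.2f} kg")
```

The residual gap is a small fraction of what a naive 50/50 shuffle leaves, and that is with exactly one covariate; the modeling and post-stratification step is what handles everything the blocking can't.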