Genuine apologies, I really do feel bad about the baiting; it wasn't my intent to waste time, only to facilitate discussion. Also, apologies for the tone of my previous response, which wasn't as objective as it could have been, partly a result of my almost daily three-hour drive home from work. I was also really hoping you would find something good. I'm not trying to disagree with you, or to defend the "pro violence in games" people. I'm trying to point out that, at the moment, it isn't possible to take sides on the issue objectively at all, and that the people who do so while claiming their results are absolutely conclusive are pushing an opinion and a social agenda rather than scientific research (whichever side they're on). This is also very relevant to the court case, and was mentioned specifically by the Justices: the definition of violence is too subjective and ill-defined to allow for arbitrary state regulation (http://www.supremecourt.gov/oral_arguments/argument_audio_detail.aspx?argument=08-1448).
That journals keep publishing these kinds of studies is a symptom of the larger problem. This is a relatively new area where the experimental method and design aren't entirely settled yet, and experts from other related fields get past peer review probably because 1) they have established credentials from their other work, and 2) the "peer" reviewers aren't always peers in game effects research but in related fields such as media effects and psychology, and they assume the same rules apply.
The larger problem itself is what I was getting at by deconstructing the methodology in these papers, and why I can't do what you're asking very well. The fields of game studies and video game effects are so new that a lot of really bad work is getting published just to get established, generally by people who don't know enough about games to begin understanding how to study them. There are a lot of people trying to do groundbreaking work without first figuring out where the ground actually is. This creates an easy opening for opportunists to get a lot of work published, even if it's not well done. That said, I'm not saying the other side of this debate is publishing better work - they suffer from the same problems. For example, this is a field where one of the primary measures used in many papers is "immersion," and everyone defines it differently and uses different scales to measure it, which makes it very difficult to create a standardized tool for generalizing results. In short, there isn't a good meta-analysis. There can't be a good meta-analysis, because the underlying research is not very good and the problem area isn't well defined.
If you summarize a pile of crap, you don't end up with gold. A good meta-analysis in this case would start by setting aside the usual meta-analytic tools, looking objectively at the methodology, participants, and results of each study, and throwing out the work that isn't very good before trying to summarize and meta-analyze what's left. It would require a lot of open discussion and dialogue with the authors to clarify results and methodology, which would take a considerable amount of time. It would also probably never be publishable, because it would be too easy to corrupt with opinion, so there is probably no one working on it; and on the off chance it did get published, the authors would probably be labeled "hacks" or some other derogatory term for not using a purely quantitative approach.
I'm also not trying to say that there isn't a violence effect with games - that would be silly. Basic cognitive science and psychology gives us stimulus-response effects, priming, training, and activation of similar and related mental schemas, so we know there is something there. What isn't clear, specifically when it comes to games, is what that effect is, how large it is, and how people are self-regulating, socializing, and compartmentalizing the experience - because it is clearly not the same cognitive or emotional process as watching TV. We have enough evidence to show that it's different, but not enough to understand exactly how different it is, or in what way.
This is a very messy area to work in and make claims about. We are still decades away from -really- starting to grasp the interactions between people and games, and unfortunately this is just how media effects research works in academia. If you go back, television effects research went through the same kinds of issues when it started. And, also unfortunately, the interactions between people and games, and the variety of games themselves, are generally turning out to be orders of magnitude more complex than in other media research. I would say it's similar to having to rebuild our understanding of psychology from scratch - not quite that bad, but definitely approaching it.
While I can't send you directly to any really well done studies, I can direct you to some to think about.
"Bridging the methodological divide in game research", Dmitri Williams, 2005 - a fairly good, short explanation of some of the problems we're seeing in game effects research. Primarily, that there isn't enough collaboration between quantitative and qualitative researchers, and that researchers actually need to play the games they're studying in order to understand what they're studying. Dmitri offers an over-simplified solution to the problem, but he gives a good starting point for actually addressing the issue. He's beginning to argue that game studies / game effects is not something you can just jump into from another field like psychology or even media effects, because it requires specialization and understanding beyond that. Based partially on this, other works, and this entire debate, it is becoming quite clear that the amount of time spent playing video games should be considered a required credential for this kind of research.
"The Short and Happy Life of Interdisciplinarity in Game Studies", Thomas Malaby and Timothy Burke, 2009. This is an introduction to an entire issue of a journal focused on beginning to examine some of the specific difficulties of researching games. If you want more depth, anything in "Games and Culture", vol. 4, no. 4, 2009.
More specifically on topic:
"Do aggressive people play violent computer games in a more aggressive way? Individual difference and idiosyncratic game-playing experience." Peng, W., Liu, M., & Mou, Y., 2008. A good article because it validates the GAM... without using the GAM directly. At the same time, it shows that the effect is moderated by external factors that the rest of the research is not accounting for (such as personality). A bad study because they don't do a good job of explaining what a "violent act" - the thing they're coding for - actually is.
"Vulnerability to violent video games: A review and integration of personality research", Markey, P.M., & Markey, C.N., 2010. This paper shows that violent video games do affect people - but that the effect is moderated by personality (using the Five Factor model), and that only people scoring high on _three intersecting dimensions_ of personality show significant effects. A look at those particular traits would lead one to believe that those individuals are significantly high in other anti-social risk factors as well. It suffers from the "non-comparable game" issue, which is a bit less relevant given the context of the study, but still an issue.
I don't mind providing more if needed. To me, this debate isn't a waste of time, it's very productive. I truly appreciate having someone on either side to discuss the issue with. Thank you.