
Comment Re:The danger of commonality (Score 2) 273

Radical Math, by itself, has nothing to do with Common Core; its existence predates CC by at least a few years.

That aside, integrating relevant real-world issues into math lessons seems like it would be a rather effective teaching mechanism. Not simply because such issues engage certain students, but because they make it possible to double up and teach multiple integrated concepts more quickly - associative memory is much more powerful than rote memorization.

Aligning these materials to CC is just a rather smart marketing move, hooking into a buzzword. You're not opposed to free-market capitalism, are you? I suppose you could always come out with CC-aligned materials that push your cause. How about math questions like "What percentage of a product description can be a lie before it becomes fraud?" Or perhaps topics such as "Maximizing profits by endangering employees – an introduction to economic thinking".

Comment Re:6 months? (Score 2) 311


Put the phone away. Talk to the child. You know: teach human interaction? This is a child, not your personal experiment.

A review of the evidence in the Archives of Disease in Childhood says children's obsession with TV, computers and screen games is causing developmental damage as well as long-term physical harm. Doctors at the Royal College of Paediatrics and Child Health, which co-owns the journal with the British Medical Journal group, say they are concerned. Guidelines in the US, Canada and Australia already urge limits on children's screen time, but there are none yet in Britain.

Why would you substitute the acquisition of developmental language skills, and the attendant ability to relate and empathise, with a fixation on shiny lights and noises?

I understand that this is Slashdot - but the value of the concept cannot be completely alien...

This. A thousand times this. It doesn't matter that they like it, it is still a bad idea to encourage it at that age. Too lazy to cite other sources, but they exist.

Comment Re:I hope.. (Score 3, Informative) 304

Castelfranchi & Falcone ( ) have a nice overview explaining how and why even the iterated prisoner's dilemma fails to explain any real-world human behavior. They provide a nice set of additional citations to go look at as well.

Comment Re:Propaganda (Score 1) 82

the average person is really bad at self medication.

And why is it our job to protect them?

Boxing is extremely dangerous. If two people make the choice to get in the ring, we may think that's unwise, but it's their decision. If you make the decision to do something that will harm you, you may be an idiot, but I don't have the moral right to stop you through means other than making an argument to try to change your mind.

When you get into things that have the potential of harming others, then that's another story. You're free to drink alcohol and use whatever other drugs you want to. You're not free to drive on public roads under their influence.

I'm unfamiliar with a theory of social morality that supports the line of reasoning you start from. Could you point me towards more information on this that is supported by contemporary social theory? Preferably grounded in a processual approach?


Comment Re:The Founding Fathers ... (Score 2) 616

Having access to information does not make one "well informed". One can have access to all the up-to-the-second information and news possible and still misconceive the facts. In a way, the internet has made this even more difficult. The raw amount of information available from different sources is even harder to process, not only because of its sheer volume, but also because of the amount of detail, spin, and misinformation provided. Instead of only having one or two sources of information to filter and decipher, now you have hundreds, thousands, millions!

No, this point is even more apt now.

Comment Re:Best suggestion is Kodu (Score 2) 237

I have to second Kodu. Very minimal learning curve, easy to make relatively fun games in a short amount of time. Options like Unity, Torque, and XNA are reasonable if you have the time to invest in teaching them programming on top of teaching them how to make a game (or need the advanced features, such as cross platform dev, which it sounds like you don't).

With Kodu, you can focus on the game development and/or production, rather than the programming behind it. There are some limitations, such as being stuck with the 3d models and assets Kodu gives you (unless this has changed recently), so consider that when making a decision.

Comment Ethernet / Infiniband Tradeoff (Score 1) 387

There are pros and cons for each link, and it depends on -which- speed of InfiniBand and how the links are bonded. InfiniBand can get quite fast in the right configuration, if you want to spend the money on it, and even then, you could build a similar setup with bonded 10GbE, which might be cheaper.

One advantage I think Ethernet has over InfiniBand (and correct me if I'm wrong here, someone) is that InfiniBand requires a specialized network protocol, whereas Ethernet can use standard TCP/IP sockets. That's perfectly fine for many cluster libraries, which can use InfiniBand directly, but using Ethernet would open up your cluster to situations and use-cases where something like MPI may not be an appropriate architecture. For instance, some types of cognitive modeling can benefit from the CPU resources available on a cluster, but their architectures don't always map well onto MPI metaphors (and for some programming languages and cognitive architectures, getting MPI to work in a cluster environment is non-trivial).
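To make the "standard TCP/IP sockets" point concrete: a node on an Ethernet cluster can talk to another with nothing but the stock sockets API, no MPI or InfiniBand verbs library required. This is a minimal Python sketch; the loopback address, port choice, and message are invented for the example:

```python
import socket

# A "worker" reports a result to a "coordinator" over a plain TCP socket.
# Over Ethernet this works out of the box; no cluster middleware needed.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))        # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))  # worker connects to coordinator
conn, _ = server.accept()

client.sendall(b"partial-result:42")  # worker sends its piece of the job
message = conn.recv(1024).decode()

client.close(); conn.close(); server.close()
print(message)  # -> partial-result:42
```

The same code runs unchanged on any TCP/IP network, which is exactly the flexibility argument above: your non-MPI architecture only needs sockets.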

Comment Re:The evidence for video game violence is solid (Score 1) 154

Genuine apologies; I really do feel bad about the baiting. It wasn't my intent to waste time, only to facilitate discussion. Also, apologies for the tone in the previous response, which wasn't as objective as it could have been, partially as a result of my almost daily 3-hour drive home from work. I was really hoping you would have found something good as well. I'm not trying to disagree with you, or to defend the "pro violence in games" people. I'm trying to point out that at the moment it isn't possible to objectively take sides on the issue at all, and that the people doing so and claiming their results are absolutely conclusive are pushing an opinion and social agenda, rather than scientific research (whatever side they're on). This is also very relevant to the court case, mentioned specifically by the Justices, in that the definition of violence is too subjective and ill-defined to allow for arbitrary state regulation.

The reason journals keep publishing these kinds of things is a symptom of the larger problem. It's a relatively new area where experimental method and design aren't entirely settled yet, and experts in other related fields get past peer review probably because 1) they have established credentials in their other work, and 2) the "peer" reviewers aren't always peers in game effects research but in related fields such as media effects and psychology, and they assume the same rules apply.

The larger problem itself is what I was getting at by deconstructing the methodology in these papers, and why I can't easily do what you're asking. The fields of game studies and video game effects are so new that they're getting a lot of really bad work published, just to get established, generally from people who don't know enough about games to begin understanding how to study them. There are a lot of people trying to do groundbreaking work without first figuring out where the ground actually is. This creates an easy place for opportunists to get a lot of work published, even if it's not well done. That said, I'm not saying that the other side of this debate is publishing better work; they suffer the same problems. For example, this is a field where one of the primary measures used in many papers is "immersion", and everyone defines it differently and uses different scales to measure it, which makes it very difficult to create a standardized tool to generalize the results. In short, there isn't a good meta-analysis. There can't be a good meta-analysis, because the underlying research is not very good and the problem area isn't well defined.

If you summarize a pile of crap, you don't end up with gold. A good meta-analysis in this case would consist of ignoring the normal meta-analytic tools at the beginning, actually looking objectively at methodology, participants and results of each study, and throwing out work that isn't very good before trying to summarize and meta-analyze what's left. It would require a lot of open discussion and dialog between authors for clarification of results and methodology, which would take a considerable amount of time. Additionally, this would probably never be publishable, because it would be too easy to corrupt with opinion, so there probably is no one working on it, and on the off chance it did get published, the authors would probably be labeled "hacks" or some other derogatory term for not using a purely quantitative approach.
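The "throw out the bad work before pooling" step can be sketched numerically. Below is a toy quality-screened fixed-effect meta-analysis on the Fisher z scale; every effect size, sample size, and exclusion flag is invented purely for illustration, not taken from any real study:

```python
import math

# Hypothetical per-study rows: (effect size r, sample size n, passes_quality).
# The False rows stand in for studies excluded on methodological grounds
# (e.g. non-comparable games, ad-hoc violence ratings).
studies = [
    (0.25, 120, True),
    (0.40,  35, False),
    (0.10, 300, True),
    (0.55,  20, False),
    (0.18,  90, True),
]

def pooled_effect(rows):
    """Fixed-effect inverse-variance pooling of correlations via Fisher's z."""
    num = den = 0.0
    for r, n, _ in rows:
        z = 0.5 * math.log((1 + r) / (1 - r))  # Fisher z-transform of r
        w = n - 3                              # inverse of var(z) = 1/(n-3)
        num += w * z
        den += w
    return math.tanh(num / den)                # back-transform pooled z to r

naive = pooled_effect(studies)                        # everything included
screened = pooled_effect([s for s in studies if s[2]])  # bad work dropped
print(round(naive, 3), round(screened, 3))
```

With these made-up numbers the screened estimate comes out smaller than the naive one, which is the point: small, low-quality studies with inflated effects can drag a summary upward, and no amount of pooling machinery fixes that after the fact.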

And I'm also not trying to say that there isn't a violence effect with games - that would be silly. Basic cognitive science and psychology give us stimulus-response effects, priming, training, and activation of similar and related mental schemas, so we know there is something there. What isn't clear, specifically when it comes to games, is what that effect is, how large it is, and how people are self-regulating, socializing, and compartmentalizing the experience - because it is clearly not the same cognitive or emotional process as watching TV. We have enough evidence to show that it's different, but not enough to understand exactly how different it is, or in what way.

This is a very messy area to work in and make claims about. We are still decades away from -really- starting to grasp the interactions between people and games, and unfortunately this is just how media effects works in academia. If you go back, television effects research went through the same kinds of issues when it started. And, also unfortunately, the interactions between people and games, and the variety of games, are generally turning out to be orders of magnitude more complex than other media research. I would say it's similar to having to rebuild our understanding of psychology from scratch - not quite that bad, but definitely approaching it.

While I can't send you directly to any really well done studies, I can direct you to some to think about.

"Bridging the methodological divide in game research", Dmitri Williams, 2005 - a fairly good, short explanation of some of the problems we're seeing in game effects: primarily that there isn't enough collaboration between quantitative and qualitative researchers, and that researchers actually need to play the games they're studying to understand what they're studying. Dmitri offers an over-simplified solution to the problem, but he gives a good starting point for actually addressing the issue. He's beginning to make the case that game studies/effects is not something you can just jump into from another field like psychology or even media effects, because it requires specialization and understanding beyond that. Based partially on this, other works, and this entire debate, it is becoming quite clear that time spent playing video games should be considered a required credential for this kind of research.

"The Short and Happy Life of Interdisciplinarity in Game Studies", Thomas Malaby and Timothy Burke, 2009. This is an introduction to an entire issue of a journal that focused on beginning to examine some of the specific difficulties with researching games. If you want more depth, anything in "Games and Culture", vol.4 no.4 2009.

More specifically on topic:

"Do aggressive people play violent computer games in a more aggressive way? Individual difference and idiosyncratic game-playing experience." Peng, W., Liu, M., & Mou, Y., 2008. A good article because it validates the GAM... without using the GAM directly. But, at the same time, it shows that the effect is moderated by other external factors that the rest of the research is not accounting for (such as personality). A bad study because they don't do a good job of explaining what a "violent act" is, which is what they are coding for.

"Vulnerability to violent video games: A review and integration of personality research", Markey, P.M., & Markey, C.N., 2010. This paper shows that violent video games do affect people - but that the effect is moderated by personality (Five Factor model used) and that only people with a high degree of _three intersecting dimensions_ of personality show significant effects. A look at those particular traits would lead one to believe that those individuals are significantly high in other anti-social risk factors as well. Suffers from the "non-comparable game" issue, which is a bit less relevant given the context of the study, but still an issue.

I don't mind providing more if needed. To me, this debate isn't a waste of time, it's very productive. I truly appreciate having someone on either side to discuss the issue with. Thank you.

Comment Re:The evidence for video game violence is solid (Score 1) 154

Apologies, but I trolled you, and was sort of hoping you would use these specific examples, as they are considerably bad research design. Before continuing, I should mention that I do a considerable amount of reading and research on video game effects.

I asked for an actual experiment, with very specific kinds of evidence. The book and the lit review do not provide any of these. The lit review I know first-hand, because I've spent quite a lot of time with it and the material it covers; the book, I admit, I'm assuming, because I've read pretty much everything else Anderson has written, and where video games are concerned he consistently does not provide proper controls for his experiments, among other issues, which are easier to point out from the rest of what you posted.

Let's start with "Longitudinal Effects of Violent Video Games on Aggression in Japan and the United States". Firstly, he provides little explanation of how he's classifying violence in games. In one study, the explanation is that the sample population was asked to arbitrarily judge the violence themselves, which they say is "standard practice" - a practice that, if you look at the citation, they established themselves in other very flawed studies. In another study in the same paper they say that specific GENRES of games have specific violence levels, which is absolutely not true. For instance, anyone claiming that Dead Space 2 and Unreal Tournament 2004 display the same relative level of violence has very clearly never played at least one of those games - but they're in the same basic genre, FPS. For those arguing that they aren't the same specific genre, I agree, they are not; but the video game violence researchers, particularly Anderson and Bushman, don't even seem to have a grasp of the basic classifications repeated hundreds and thousands of times across video game review websites.
Continuing: there is no list of the specific games mentioned at all. There is no information regarding how games that participants rated differently were controlled or averaged for. There is no explanation of whether they controlled for other risk factors in aggression - which they know full well how to do, if you read their aggression research that isn't attached to video games. Things like cultural background, income level, and really basic demographic information can all cause significant variance leading to exactly the same kinds of results they show in this paper. This paper also ignores the "violent people tend to play violent games" aspect, which is found in other literature, often ignored by this team, and of considerable impact.

Next, there is a major cultural issue in this paper which, thanks to doing linguistic and ethnographic work with Japanese individuals, I can point out for you: American and Japanese cultures have drastically different views of, and exposure to, violence, specifically at the ages of children and young adults. Anderson does not mention how this was controlled for. If you'd like a subtle example, watch something like Sailor Moon in the original Japanese cut.
Next, "Correlates and Consequences of Exposure to Video Game Violence: Hostile Personality, Empathy, and Aggressive Behavior"

Firstly, they're using the same rating system mentioned above. Replicating a flawed study does not make the results valid. To delve further into this: they ask for participants' 5 favorite video games. They don't ask how often they're played. Final Fantasy 7 is one of my top 5; I haven't played it in at least half a dozen years. If anything, what they've proven here is that there are personality factors involved in the selection of favorite games, and that those factors also reflect violent tendencies.

This paper is also one of my favorite counter-examples of how not to select games for a study. In another study in the same paper, they use Unreal Tournament as their violent game, and Myst as their non-violent game. I'm sorry, no, this is an apples-to-elephants comparison. They are not controlling for violence between these games. Firstly, they aren't even the same genre. They don't have similar game mechanics that can be compared. Myst is basically a slide-show with interactivity, and doesn't even allow the user to move freely in the environment. Additionally, Myst is a very slow-paced, narrative puzzle game, being compared to an incredibly fast-paced, action-oriented first person shooter. Any one of the differences between the two could have significant effects. They need to do a much better job of controlling for this before they can make any of the claims they make based off of the statistics they got.

"Causal effects of violent sports video games on aggression: Is it competitiveness or violent content?"

This one was new to me, thanks! I haven't fully read it yet, but let's start with the glaring errors found via a quick skim. Again, they're asking their research subjects to create the measurement instrument for them, with no reasonably objective analysis of the actual content. Instead of accounting for or explaining difficulty in the games themselves, they simply say "well, the violent one was rated more difficult". Yes, and that may actually be because it was more difficult. If that were the case, then the difficulty, not the violence, could very well be causing the arousal effect they measured. I'll go back and re-read this again, but parts of their setup and analysis are already seriously flawed.
None of the literature you posted meets the conditions I requested. There is an article with longitudinal data, but it does not explain the controls or the rating of violence in games. There is an article that uses what may be comparable games, but it does not show any longitudinal effect. It was a reasonable effort, and the literature listed provides excellent examples of how not to conduct research, for those interested in the area.

If you'd like to try again, I'd recommend trying to find a study that isn't authored by Anderson or Bushman, does not use the GAM, and does not use Anderson's "violence scale" - you'll be headed in the right direction.

Comment Re:The evidence for video game violence is solid (Score 1) 154

This is a great read, and does follow quite a bit of what I've read from media effects studies on violence. That said, there is a major issue here. If you follow the game effects literature, much of it indicates that video games do NOT produce the same effect sizes, or results, as the same levels of exposure that individuals get from other visual media.

One of the primary differences is that genre and game mechanics can drastically change the result (completely separate from the level of violent content). I want to be completely objective here. So, if you can find it, I would appreciate seeing an article that concludes that video games, (not media in general), can increase violent or aggressive tendencies for a duration longer than an hour, using a "violent" and "non-violent" game where the ONLY difference between the two is the violence factor. If that's too hard, then make it easier; find a study that uses games in the same genre.

Comment Re:TFA tells a whole different story (Score 1) 154

You apparently read a much different 2010 study by Anderson et al. than I did. The one I read, "Violent video game effects on aggression, empathy, and prosocial behavior in Eastern and Western countries", has several major flaws, many of which are pointed out by Ferguson in "Much ado about nothing: The misestimation and overinterpretation of violent video game effects in Eastern and Western nations: Comment on Anderson et al. (2010)". That received a reply, "Much ado about something: Violent video game effects and a school of red herring: Reply to Ferguson and Kilburn (2010)", by Bushman, which completely failed to address most of the issues Ferguson pointed out, and instead sought to attack Ferguson's credentials (sound familiar?). Furthermore, the studies cited by Anderson and his group generally have a major fatal flaw: they compare the results of playing incredibly different games, where it's not clear that they are actually controlling for violence, or whether one of the hundreds to thousands of other differences in the games could be causing incredible variance in their results. It's quite clear upon reading their work that 1) they do have a solid understanding of violence and aggression in media effects - good for them - but 2) they don't understand video games at all, and it seems like they haven't actually played any of the games they use in their experiments, or made any effort to understand the subject they're researching. It's like someone running experiments on television viewing who has never watched TV or a movie before: they make lots of mistakes in their experimental setup.

Comment Anderson and Bushman ~ Credentials (Score 1) 154

Having read quite a lot that these two have published, as well as literature from the opposing side, it's quite clear that these two don't really know what the hell they're talking about. They consistently create experimental conditions comparing games that aren't comparable to test for violence effects, with silliness like "let's see if Unreal Tournament causes people to have higher arousal than those playing FarmVille!". While they -may- be experts on aggression, they clearly have no idea of the mechanics or theory involved in actually playing video games. While "games research" is kind of new, it still requires specialization to work in, part of which is an actual understanding of games, gameplay, and mechanics. From the literature these two publish, it is fairly evident that they don't have it. Dmitri Williams said it best - "People researching games should PLAY GAMES" - and it seems like these guys haven't touched a video game in their lives.

Yes, the "video games are violent" people publish a lot. But the quality of their experiments and result reporting is very lacking. This is evident from the methodology mentioned in TFA, which "provides strong support" according to Bushman. No, it does not. If anything, it shows people staggering and backtracking to find support for something they suspect based on opinion, and realize they haven't done a reasonable job of proving.

If you actually -read- the literature and start poking it with a stick, some evidence starts to form a clear picture. That picture is that there -is- an effect of violence in video games; that the effect is incredibly short term (hours) and very small; and that there are cognitive -benefits- to playing video games that are quite a bit longer term. Additionally, the small, short-term effect seems to be a risk condition only for people with very specific personality issues, who would need to fit other risk factors anyway for it to actually be an issue.

Comment Re:Lisp "hackers" have hardly failed (Score 1) 98

I think we're basically on the same page here. Both isolation and collaboration have major benefits. I'm primarily an isolationist researcher myself, but I'm starting to find that I've replicated a lot of existing work (or planned to replicate), which to me is a sign that I should start looking externally to find more existing work that I can apply my unique and new ideas to. It's because of this, and since many of our goals are similar, that I've found there are a lot of existing pieces for what he wants to do.

The major problem with putting them together is that many of them are incompatible because the ideas have been developed in isolation too long. Some of the concepts may still be valid, but from a coding perspective may have to be entirely rewritten to be compatible at an architecture level, which can be a little frustrating because it seems wasteful, and because of the academia issues.


One of my major problems with how this stuff is being handled in academia is that there are a lot of people building small pieces and validating them against known data, and not a lot of people putting them together to do something useful or otherwise novel.

But at the same time, we end up with that huge table I linked to, where each researcher or group is pigeonholed into their own architecture, with limited actual collaboration -between- groups. Each is replicating the same thing that others have already done and moving forward very slowly because no one wants to use the other group's architecture. While at the same time, no one wants you to take their architecture or theory and "violate" it by re-implementing it.


Comment Lisp "hackers" have hardly failed (Score 1) 98

Quite a few cognitive architectures are written using Lisp, and many of them have produced significant results. Many of them could also be written in something other than Lisp, but for some reason it is easier for psychologists to write in Lisp than in other programming languages. As someone working very close to the subjects he mentions, most of TFA sounds like marketing, and any time he mentions something technical, it doesn't sound like he knows what he's talking about. Also...

What other artificial life/intelligence projects are you keeping tabs on? What should we be excited about?

Oh, I'm the wrong person to ask. I try not to look. For one thing I don't want my own thoughts to be polluted by other people's, and for another there's always a hundred people who claim to be doing exactly what I'm doing and it's kind of depressing to know that.

I understand the desire to have unique ideas - actual research has found that people can generate more unique ideas alone than in a group (you can go find the CSCW papers yourself, I'm lazy). That said, ignoring everything else isn't good either. There is a LOT of good work that has been done in this area. Is each individual project a complete solution to creating artificial life? No. But I suspect that many of the pieces are already out there, just waiting to be assembled.
