This is vastly better. Slashdot's summary is one of the most sensationalized non-issues I've seen on
/. in months. It didn't take long at all for bot conflicts to become obvious to bot authors, at which point they quickly put in code to notice edit conflicts. When the bots spot back & forth editing, they back off and alert the bot's maintainer. It took a little longer to notice loops that spanned the different language editions of articles, but that's because the relationship among those editions is usually pretty weak. This summary acts like a bot conflict spanning 3629 articles is something impressive. Over that time period, that represents around 0.01% of the article namespace when you span all language variants of WP, and the bots in question do seriously boring things like cleaning up redirect links or fixing named references that break as an unintended side effect of a user's edit.
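For the curious: the back-off logic is nothing exotic. Here's a toy sketch (all names hypothetical, not actual Wikipedia bot-framework code) of the basic idea: if the bot finds itself re-applying the same change to the same page, some other editor keeps undoing it, so it stops and flags the page for its maintainer.

```python
from collections import defaultdict, deque, Counter

class EditConflictGuard:
    """Toy sketch: if the bot keeps re-applying the same content to a page
    (meaning someone keeps reverting it), back off and flag the page."""

    def __init__(self, bot_name, max_reapplies=3, window=10):
        self.bot_name = bot_name
        self.max_reapplies = max_reapplies              # reapplies before backing off
        self.history = defaultdict(lambda: deque(maxlen=window))  # page -> recent edits
        self.flagged = set()                            # pages needing human review

    def record_edit(self, page, editor, content_hash):
        # Track (editor, content_hash) for recent revisions of each page.
        self.history[page].append((editor, content_hash))

    def should_edit(self, page):
        if page in self.flagged:
            return False
        # Count how often the bot has applied each content hash recently.
        bot_hashes = Counter(h for e, h in self.history[page] if e == self.bot_name)
        if bot_hashes and max(bot_hashes.values()) >= self.max_reapplies:
            # Revert loop detected: stop editing, alert the maintainer out of band.
            self.flagged.add(page)
            return False
        return True
```

Real bots vary in the details (some diff the revision history via the API instead of hashing), but the principle is the same: detect the loop, stop, escalate to a human.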
As for this better summary: looking at a longer write-up on the Alan Turing Institute website, it looks like it also inflates the implications of the study. It's certainly true that simple rules can produce complex unintended conflicts, but that's already a well-known idea. The specific novel lessons from this study have pretty weak implications for AI. And the cultural conclusions it draws are borderline silly: "the same technology leads to different outcomes depending on the cultural environment. An automated vehicle will drive differently on a German autobahn to how it will through the Tuscan hills of Italy." I'm gonna guess that this guy isn't a software developer. Upon checking, yup, he's a physicist turned social scientist.