It does sound like the same shenanigans at play, although nobody is admitting to deliberately breaking the Chinese one the way they did with Tay.
This type of chatbot is a predictor of what will be said next in a conversation, based on the words that have already been said. In Microsoft Tay's case, it was being trained on Twitter, so all anybody had to do was make sure it was trained on their tweets, and they could make it say anything. If it sees a tweet like "That dog is awesome" paired with the response "It must have a highly varied diet", and that's the only time it's ever seen the phrase "That dog is awesome", then saying "That dog is awesome" to it will get "It must have a highly varied diet" back.
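Here's a minimal sketch of that failure mode in Python. The lookup-table model and all the names are mine for illustration; Tay's actual model was way more sophisticated, but the poisoning works the same way: whoever controls the training pairs controls the replies.

```python
# Toy "predictor" trained on (message, reply) pairs.
from collections import defaultdict
import random

replies = defaultdict(list)

def train(message, reply):
    """Record that `reply` followed `message` in the training data."""
    replies[message.lower()].append(reply)

def respond(message):
    """Reply with something seen after this message, or give up."""
    seen = replies.get(message.lower())
    return random.choice(seen) if seen else "I don't know what to say."

# One poisoned training pair is enough to fully control the response:
train("That dog is awesome", "It must have a highly varied diet")
print(respond("That dog is awesome"))  # -> "It must have a highly varied diet"
```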
Of course, it was the 4chins that noticed this first in Tay's case, so it ended up saying some awful stuff, and I'm assuming that's what happened with Propaganda Bot too. It's also possible that it just got some negation mixed up. Negation words in NLP are super hard to deal with and cause all kinds of headaches, so maybe it was supposed to say that it WAS a huge fan of the Communist Party. But seeing as it also mentioned wanting to go to America, it sounds like it's just parroting some rapscallion.
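To see why negation is such a headache, here's a toy illustration: under a simple bag-of-words view (a common simplification, not necessarily what Propaganda Bot used), a sentence and its negation look nearly identical, even though one little token flips the entire meaning.

```python
# A sentence and its negation share almost all of their tokens.
from collections import Counter

def bag_of_words(sentence):
    return Counter(sentence.lower().split())

a = bag_of_words("I am a huge fan of the Communist Party")
b = bag_of_words("I am not a huge fan of the Communist Party")

shared = sum((a & b).values())
total = sum((a | b).values())
print(f"Token overlap: {shared}/{total}")  # -> Token overlap: 9/10
```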
This gets into why I think this whole Markov-chain-style approach is a total dead end in terms of AI. It just produces very convincing nonsense. If anybody is interested in the tech though, check out Karpathy's blog post "The Unreasonable Effectiveness of Recurrent Neural Networks". It's what kicked off the popularity of the technique.
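For the curious, here's roughly what a word-level Markov chain looks like, boiled down (Karpathy's post is actually about char-RNNs, which condition on much longer context, but the sampling loop is the same in spirit): each word is chosen only by looking at the previous word, with no model of meaning at all.

```python
# Word-level Markov chain: locally plausible, globally meaningless.
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words observed right after it."""
    chain = defaultdict(list)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        chain[prev].append(nxt)
    return chain

def generate(chain, start, length=12):
    """Walk the chain, picking a random observed follower each step."""
    out = [start]
    for _ in range(length):
        followers = chain.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

corpus = "the dog is awesome and the dog is hungry and the cat is awesome"
chain = build_chain(corpus)
print(generate(chain, "the"))  # e.g. "the cat is hungry and the dog is awesome and the dog is"
```

Each word follows plausibly from the one before it, but nothing ties the sentence together as a whole, which is exactly the "convincing nonsense" problem.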