Seems appropriate to dig this account out of hibernation now. I was one of the early editors here, from 1998 or so. Worked with Roblimo for years. Very decent guy, did a good job of lending an adult presence to the group that was sorely needed. I'm sad to see him go.
Bill Gates
Steve Ballmer
Larry Ellison
Lennart Poettering
Richard M. Stallman
Lucrezia Borgia
How can anybody call him a "denier" when he acknowledged global warming in the first twenty seconds of the cited video?
He is more of a lukewarmist, meaning that he agrees that the climate is changing, is not certain that's a bad thing, and reserves judgment on controlling emissions until there is more data to confirm the models' predictions.
OK, well, why the hell do they owe you an explanation of what they spend it on? I think the code they produce pretty much speaks for itself, don't you?
Dealing with infringements is expensive, too. They sued Cisco, as I recall. Others, too. Lawyers cost money.
I'm not particularly sanguine about the idea of importing the anal-retentive beancounter caste into libre project management, frankly.
What the fuck do you mean? It's a non-profit. They file regular reports.
Is this some kind of FUDtroll?
"Tracking the RNG" would help you win the game, but it doesn't tell you anything about how to play the game.
That would be my point.
This AI learns to play the game; it then wins the game using experience it gains the same way a human does - feedback from the game score.
That is one possible interpretation, but it is not supported by the statements so far. That is not to say it is not the case, only that nothing I have seen so far supports it; something along the lines of "We tested this against games with multiple RNGs with no perceptible change in AI performance" would support that interpretation. There are other interpretations. People are *assuming* that "wins" = "plays the game" - and the company that did it isn't relieving anybody of that perception (understandably). That's the point. Exploration of other explanations for success is warranted.
Consider that, for games which possess a weak RNG (i.e. predictable starting conditions and knowable changes in game play, i.e. most old console games), it is in theory possible to play *blind* - in other words, not actually paying attention to what's going on on the screen, but simply hitting buttons at precise enough intervals. If 'score' is taken as a proxy for 'how far you can get in the game' (ceteris paribus, someone with a higher score made it longer), then most known machine-learning methods will converge on that/those sequence(s) without any understanding of 'the game' per se. It may even be possible to do that for short gameplay sequences based on pattern matches to known game conditions.

While that does get off into the semantic weeds of what 'playing the game' is, it is difficult to differentiate between an AI which has 'learned' to play the game - in the sense that it understands abstract rules, interprets game state, and makes decisions about what to do based on that observed state - and a neural network which has converged on the correct list of keystrokes to pwn the computer given certain observed starting conditions. One of them is impressive; the other one isn't, quite so much.
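A minimal sketch of that 'blind play' argument, using a made-up toy game (nothing here comes from the original story; the scoring rule, seed, and button count are invented for illustration):

```python
import random

def play(seed, buttons):
    """Toy deterministic 'game': a weak PRNG re-seeded identically every
    power-on, as many old consoles effectively were. Score counts button
    presses that match each frame's target."""
    rng = random.Random(seed)            # stand-in for the weak console PRNG
    return sum(1 for b in buttons if b == rng.randrange(4))

# A memorized keystroke sequence, replayed 'blind' with no observation of
# the screen at all, earns a perfect score on every single run:
rng = random.Random(42)
combo = [rng.randrange(4) for _ in range(100)]
print(play(42, combo))   # 100, every time
```

The point of the sketch: nothing in `combo` encodes any understanding of the game; it is just the keystroke list a score-maximizing search would converge on when the game is deterministic.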
I find myself wondering about the following question:
How did they differentiate "learning to play the game" from "learning how to track the game's RNG"?
Most video games have ridiculously simplistic PRNGs embedded in them. An AI might get "sidetracked" and learn to play the underlying RNG output of the game, rather than the game itself. That would yield really good results for most arcade games of this type, I imagine (weak RNG, limited input and timing options, etc.). I don't know if they checked for that possibility.
Easy way to check, though: reach into the game and substitute a better RNG (cryptographically strong, hardware, or quantum) for the one in the game. That would let you quickly determine the difference. If the AI's game performance suddenly goes to shit, it wasn't a real game-playing AI. If it doesn't, well, all hail Skynet, I guess.
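A hedged sketch of that substitution test, using an invented toy game; `secrets` stands in for the cryptographically strong RNG, and the scoring rule is made up for illustration:

```python
import random
import secrets

def play(next_target, buttons):
    """Toy game: one point per button press matching that frame's target."""
    return sum(1 for b in buttons if b == next_target())

# The memorized 'perfect' sequence for the game's weak, fixed-seed PRNG:
weak = random.Random(42)
combo = [weak.randrange(4) for _ in range(100)]

# Against the weak RNG (re-seeded identically, as old consoles effectively
# do at power-on), the memorized combo stays perfect:
g = random.Random(42)
weak_score = play(lambda: g.randrange(4), combo)        # always 100

# Swap in a cryptographically strong RNG; the same combo collapses to
# roughly chance level, exposing it as RNG tracking rather than play:
strong_score = play(lambda: secrets.randbelow(4), combo)
print(weak_score, strong_score)
```

If performance survives the swap, the agent was reacting to game state; if it collapses, it had merely memorized the weak RNG's schedule.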
You can kludge encryption onto the pipeline.
If they told everybody "your info was hacked" while they hadn't cleaned it up yet, a bunch of folks would have logged on and changed their passwords, immediately exposing the NEW ones. You clean up first, then you engage the PR folks.
"... We wrapped a robot in a dead sparrow and decided to see if we could fool the other sparrows into interacting with our creepy, ghoulish automaton! It's *science*!"
And of course, it was COMPLETELY UNEXPECTED that the grisly abomination stapled to a tree branch triggered aggressive reactions from the other sparrows. Because every living thing JUST LOVES to be confronted with a soulless golem wrapped in the dead flesh of another of its kind. And that never causes pants-shitting terror or anything.
I can see it now:
Sparrow 1: "OH MY GOD! IS THAT... *THING*?"
Sparrow 2: "It's not him anymore. IT'S!"
Sparrow 1: "*snf* OK... OK... oh God, Frank... God help me..."
Yup. Science.
Is there, like, a review board or anything? Maybe it could screen some horror flicks before writing checks for this kind of bullshit. "New rule: if your study is substantially similar to the plot of any one of this library of 100 horror movies, or has a plausible chance of producing similar outcomes, we're not going to fund it."
My local ISP (slic.com) installed FTTH, and I'm getting 100Mbps to my house, so don't blame me for any drop in speed!
So does Adobe.
The inflexibility of Unity for anyone who uses two monitors and has Linux on the right-hand one led me to switch to KDE. This is most likely a permanent switch.
Center meeting at 4pm in 2C-543.