The company is hiring a Computer Vision Engineer (and several others) for this product line, so I'd say they have plans for significant improvement down the road: things like dynamic aging, expression rigging, rendering multiple facial variations to show how environmental variability could affect the same genetic plan (I expect a lot more data will need to be gathered before that particular feature is implemented), perhaps even computer-based recognition as suggested... Snapshot can definitely be misused, as other commenters have suggested, but it seems a useful tool within its limitations. Disclaimer: I personally know one of the scientists who worked on this, though we haven't really discussed it much, and we certainly haven't discussed the company's future plans for the product. That's all idle speculation.
A lamentable state of affairs that I mentioned in my Score:1 (yes, it was an ugly wall... but still!) reply, with the anecdote that even universities don't distinguish between these. Fortunately, a number of others did give some very good advice. I think perhaps people responded to the OP by misinterpreting the statement, "If it was only programming I could help him."
I know exactly where this kid is coming from! In high school, I filled notebooks with pseudocode for how I wanted some of my game ideas to work. Getting into Dungeons and Dragons was a helpful experience as well, especially in the Dungeon Master role; if you want to deal with lore or game balancing, there's the perfect opportunity!

I know from experience that finding resources on game DESIGN, as opposed to game PROGRAMMING, can be difficult. I spent a semester at university minoring in their brand-new Game Design program, which turned out to be exactly not that, and was highly disappointed. Fortunately, since you seem to be talking about games in general and not just video games, there's a wonderfully accepting and active community of designers and playtesters at BoardGameGeek.com: https://www.boardgamegeek.com/... There are frequent design competitions there as well, and board games are certainly an easier jumping-off point than most video games in terms of complexity. There are tons of excellent game design articles on BGG, as well as at Gamasutra, Polygon, and yes, MagicTheGathering.com as mentioned above.

It might also be enlightening to study the well-documented changes over time to the other long-lived juggernaut of gaming, World of Warcraft, with over a decade of design changes for various reasons - but then again, it may take more subject knowledge to really understand why that game evolved the way it did. MTG is a much more self-contained study, though I would say also less dynamic and interesting in its changes. The recent announcement to remove the summer Core Set and have two 2-set Blocks per year is a good example of a thoughtful change. A big part of good design is finding what parts of your creation don't work as well as they could, then ripping them out and fixing them.

Finally, the world's best prototyping tool for most games may well be Microsoft Excel.
If you want to quickly see how ideas interact, finding a way to describe and model them mathematically can be very useful.
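To make that concrete, here's a toy sketch (in Python rather than Excel, with entirely made-up mechanics and numbers) of the kind of quick modeling I mean: comparing two hypothetical attack designs by simulating their average damage.

```python
import random

# Toy example: compare two invented attack designs by simulated average damage.
# All mechanics and numbers here are made up purely for illustration.

def steady_attack():
    # Always hits, dealing 1d6+2 damage.
    return random.randint(1, 6) + 2

def swingy_attack():
    # Deals 2d6 damage, but misses entirely 25% of the time.
    if random.random() < 0.25:
        return 0
    return random.randint(1, 6) + random.randint(1, 6)

def average_damage(attack, trials=100_000):
    return sum(attack() for _ in range(trials)) / trials

print(f"steady: {average_damage(steady_attack):.2f}")  # ~5.5
print(f"swingy: {average_damage(swingy_attack):.2f}")  # ~5.25
```

Five minutes of this kind of modeling tells you the two designs are nearly balanced on average but "feel" completely different, which is exactly the sort of insight you want before playtesting.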
First off, kudos for sharing your codification of a useful and rational approach to evaluating advice, which I think is extensible to plenty of other things (don't buy your parents an extremely complicated DVR, don't buy your kid a manual-transmission car if you're not planning on teaching them how to use it, you probably don't need another infomercial kitchen appliance, etc.). But you missed a very important part of giving advice: don't treat your audience poorly! For example, don't claim that your version of good advice is "by definition" the better advice, or that those giving other advice are being both unhelpful and dicks. Clearly, the difference between the idea you put forth here and what you're railing against is the difference between "best effect on average for a random sample" and "best possible effect for an individual." That doesn't invalidate the "best possible effect" advice being the best advice, because - and this is the key that other commenters have hit upon already - your advice must be judged based on who it's being given to! In general, a /. audience will be more likely than the general population to take more complicated yet ultimately more effective tech advice, for example.
So I put forth that your entry here needs a simple revision, as follows: WABR should use a sufficiently large sample of people _similar to the one(s) being given the advice_, and when performing mental WABR estimations, remember that you may need some explicit weighting based on how important SOME positive effect for many people is, compared to how important THE LARGEST positive effect for a few is. The computer antivirus example is a good one, where cutting back on most viruses for most people is probably more important than avoiding almost all viruses for a few people. But in a real-world virus advice comparison between (A) "Avoid crowds and sick people when you can, OK?" and (B) "Go get vaccinated for stuff, dummy!", the less-likely-to-be-followed advice may still be the best to give, due to herd immunity effects.
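The revised WABR estimate can be sketched in a few lines of Python. Everything here is invented for illustration - the follow-through probabilities and benefit scores are made-up numbers, not data - but it shows the weighting I mean: each person's benefit counts only in proportion to how likely they are to actually follow the advice.

```python
# Toy WABR sketch: expected benefit of a piece of advice over a sample of
# people similar to the recipient, weighting each person's benefit by their
# probability of actually following the advice. All numbers are invented.

def expected_benefit(sample):
    # sample: list of (follow_probability, benefit_if_followed) pairs
    return sum(p * b for p, b in sample) / len(sample)

# Advice A: easy to follow, modest payoff (e.g. "run the built-in antivirus").
advice_a = [(0.90, 4), (0.80, 5), (0.95, 4)]
# Advice B: hard to follow, big payoff (e.g. "switch OSes and sandbox everything").
advice_b = [(0.20, 10), (0.10, 9), (0.30, 10)]

print(expected_benefit(advice_a))  # ≈ 3.8
print(expected_benefit(advice_b))  # ≈ 1.97
```

With these particular made-up numbers the easy advice wins, but note the model says nothing about herd-immunity-style externalities - that's the extra weighting you'd have to bolt on for the vaccine case.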
Then there are some lesser considerations that encourage sharing both WABR and CBR advice: 1) being given both a piece of difficult advice and a piece of easier advice makes the easier advice more likely to be followed, because it seems much more achievable than when presented on its own (thus increasing the value of the typical WABR advice by acknowledging CBR advice); 2) giving CBR advice will have a positive effect for some people whether or not they're already following WABR advice, because they will see its value immediately; 3) CBR advice gives people room to grow if they later decide that the WABR advice was not enough.
Which makes me wonder if we're discussing strategy for the wrong person here... maybe it's the advice _seeker_ who needs a new strategy: to ask advice of many people until they have a sufficiently-diverse set of options and frequency distributions for those options, before following any advice.
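That seeker strategy is simple enough to sketch too - poll a bunch of people, then look at the frequency distribution of answers before acting on any of them. The responses below are invented placeholders.

```python
from collections import Counter

# Sketch of the advice-seeker strategy: gather answers from many people,
# then examine the frequency distribution before following any of them.
# The responses below are made up for illustration.
responses = [
    "get vaccinated", "get vaccinated", "avoid crowds",
    "get vaccinated", "wash your hands", "avoid crowds",
]

distribution = Counter(responses)
for option, count in distribution.most_common():
    print(f"{option}: {count}/{len(responses)}")
```

Of course, the most frequent answer isn't automatically the best one - the distribution just tells the seeker where the consensus and the outliers are, which is the diversity they were after.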
I know that I'm missing the human-interest angle of the story here, but as someone who works at a company that has performed some large-scale DNA vaccine production research (Vandalia Research, but please don't google us because the website is an embarrassment), I'm a little disappointed that the article didn't try harder to explain the difference between these new vaccines and the old egg-grown ones. I think a little science education is a good thing to provide, to pull back the curtain on the good that genetic engineering can do. The first-pass explanation was "Flublok uses insect proteins instead of eggs. (The other is Flucelvax, which relies on animal proteins.)", which is rather poor, since the proteins don't replace the eggs; the insect/animal culture cells those proteins are grown in do. I don't expect an in-depth discussion of promoters or vectors, but something more about the recombinant engineering involved than "insect cells are used to cultivate hemagglutinin" would be nice. For anyone interested in a more academic explanation of Flublok's approach, along with several other possible vaccine design strategies that will hopefully be coming soon, a good page to read would be http://www.ncbi.nlm.nih.gov/pm...