Since non-GMO foods have been consumed over many lifetimes, their safety (or lack thereof) has been pretty well established.
We're constantly hybridizing and producing all manner of crazy new plants where God knows how many genes are combined in totally new ways. There are tons of hybrids we're eating right now that have nowhere near "many lifetimes" worth of consumption, and there's no good reason to think that we'll do a better job of predicting all of their subtle properties (especially "long term" safety) any better than if we take a known organism and add one carefully selected gene to it.
Perhaps you could explain how such a salary is justified? Without resorting to "well the Market says..."?
Because he's earning his employers shittons more than $6M per year by doing that work? Would the morally correct outcome be for him to cut his salary by, say, $3M so the owners of the company can pocket $3M more?
One could point out that there are fewer instances of white males being miscategorized.
White males are just about the easiest faces to categorize. They tend to have short hair that doesn't obscure facial features or create oddball shapes that confuse the classifiers. Their skin tone makes photographing them, finding edges, and extracting features easier than it is with darker skinned people. White people have a greater variety of eye colors that can be used to distinguish among them. "White guy face" is just about the optimal case for this problem. If I had to come up with a worst case that was also a photo of a fairly "common" person, I'd go with "dark skinned, brown eyed person with long hair and facial hair." That's a pretty clean sweep of all of the variables that make this a hard problem.
Good decisions? Sorry, but releasing poorly tested software like this was obviously a bad decision. The bad outcomes were a direct result of their poor decision making.
How good does the cutting edge of object recognition need to be before it's not "poorly tested" anymore, especially when it's for a silly photo app and not a medical or military application? I never hear this type of thing from people who have actually had to solve these types of problems. The reality is that objects are going to be confused with other objects. Lots of them, once we're talking about hundreds of millions or billions of samples. Some cases will fail with great regularity and patterns. The unfortunate fact here was that the pattern happened to coincidentally have really embarrassing cultural connotations.
This is one of the things I don't miss about working in machine vision. We'd run our algorithm over a zillion images and it would correctly handle all of them save a small handful and that small handful would be filed as bugs. OK, maybe we'll be able to handle that small handful at the expense of a smaller handful next time around. But the pass/fail criteria for the tool is in its overall results, not in the outliers.
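That aggregate pass/fail criterion might look something like this rough sketch (the names `evaluate` and `THRESHOLD` are hypothetical, not from any real tool):

```python
# Minimal sketch: release decision based on overall accuracy,
# with the outlier failures filed as bugs rather than blockers.
# THRESHOLD is an assumed, made-up bar for illustration.

THRESHOLD = 0.999  # ship if overall accuracy clears this bar

def evaluate(results):
    """results: list of (image_id, correct: bool) pairs."""
    failures = [image_id for image_id, correct in results if not correct]
    accuracy = 1 - len(failures) / len(results)
    # The handful of failures get filed as bugs for the next round,
    # but pass/fail for the tool itself is the aggregate number.
    return accuracy >= THRESHOLD, failures

# Toy run: 10 failures out of 10,000 images -> accuracy exactly 0.999
ok, bugs = evaluate([("img%d" % i, i % 1000 != 0) for i in range(10_000)])
```

The point being that the tool "passes" here even though ten images failed, and those ten failures may well cluster into an embarrassing pattern.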
Force needed to accelerate 2.2 lbs of cookies at 1 meter per second squared = 1 Fig-newton