
Comment Negative, tribal chimps (Score 1) 81

We have an intrinsic negativity bias and an intrinsic tribal nature. Any social interaction system we design should account for those biases, so it can either avoid creating feedback loops or deliberately exploit them, depending on the incentives of the developer. We are also social, and where the incentives are right we want to work together and even be altruistic at times. Systems could leverage those instincts and incentives too. But I'm guessing the tribal/negative ones are stronger, and maybe easier to poke with a stick to stimulate...?

Comment Re:Scary only because China lies (Score 1) 194

My guess is that the infection rate is underreported relative to deaths, so the mortality rate is NOT nearly as high as 8%. Deaths are more likely to be reported, for obvious reasons. Someone might get sick but never even go to the hospital, because for many people the virus is uncomfortable but doesn't require hospitalization. Then again, death rates could be intentionally underreported too. So who knows, really....
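The denominator effect described above can be shown with a couple of lines of arithmetic. The numbers here are made up purely for illustration; the point is just that the case fatality rate is deaths divided by *known* cases, so undercounting mild cases inflates it:

```python
# Hypothetical numbers: undercounting infections inflates the apparent rate.
deaths = 80
reported_cases = 1_000                 # mostly people sick enough to be tested
true_infections = 5 * reported_cases   # assume 4 of 5 mild cases go unreported

reported_rate = deaths / reported_cases    # the scary headline number
actual_rate = deaths / true_infections     # what the rate would really be

print(f"{reported_rate:.1%} vs {actual_rate:.1%}")  # prints "8.0% vs 1.6%"
```

Deaths are hard to miss, so the numerator is relatively solid; it's the denominator that's the guess.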

Comment Re:Ruination of the site (Score 1) 61

I don't know if more characters would help. Facebook allows longer responses and I think the lack of meaningful conversation is still there. I don't know if people *want* meaningful conversation on these platforms. It seems like mainly people want the instant gratification of quick likes/retweets. Anyway, that's my hot take. :)

Comment Re:Problematic (Score 2) 409

As I understand it, you're mixing up censorship with a violation of the First Amendment. First Amendment protection only applies to the government. Censorship is a more generic concept that can happen anywhere. In this case YouTube is definitely censoring, but it's not a First Amendment violation, because YouTube is not a governmental body.

Comment Re:"bias against certain demographic groups" (Score 2) 90

It doesn't have to know the demographics to have a bias. If the training dataset the AI uses is biased, then the AI will be biased. Here's an overly simplified example. Let's say the training dataset is all from kids from New York, and "good" essays by kids from New York (based on human evaluation) happen to often contain the words "foo" and "bar". So the AI learns that essays with the words "foo" and "bar" in them are "good". Now let's say "good" essays by kids from New Jersey (based on human evaluation) happen to often contain the words "goo" and "jar". But those essays were not put in the training dataset! Now the training dataset is biased towards kids from New York, even though the AI has NO CLUE what state the kids are from. Make sense?
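The toy example above can be sketched in a few lines. Everything here is made up for illustration: the "model" just memorizes which words appear in essays labeled "good" in its training set, the training set contains only (hypothetical) New York essays, and nowhere does the code ever see which state an essay came from:

```python
# Sketch of dataset bias: the model learns only from what it's shown.

def train(dataset):
    """Learn the set of words that appear in 'good' training essays."""
    good_words = set()
    for text, label in dataset:
        if label == "good":
            good_words |= set(text.split())
    return good_words

def predict(good_words, text):
    """Score an essay 'good' if it contains any learned 'good' word."""
    return "good" if good_words & set(text.split()) else "bad"

# Biased training data: New York essays only. No state label anywhere.
ny_training = [
    ("foo bar", "good"),
    ("baz qux", "bad"),
]
model = train(ny_training)

print(predict(model, "foo bar"))  # "good": matches the learned NY pattern
print(predict(model, "goo jar"))  # "bad": good by NJ standards, but unseen words
```

The New Jersey essay gets scored "bad" purely because its vocabulary never appeared in training, which is the whole bias in miniature.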
