
Comment Re:she will be able to use her mom's smartphone (Score 2) 156

"they" should be officially added to the English language as a suitable replacement for "he" and "she" so people can stop using these words to push stereotypes or agendas.

Instead of "added," how about restored, or perhaps simply "re-legitimized"?

More details at the link, but "singular they" has been in common English usage since the 14th century, and until the mid-1800s it was standard even among famous authors and educated folks.

Then, as with many supposed grammatical bugaboos, the Latin wackos got ahold of things and tried to claim that English should be more like some idealized version of Latin (which itself was largely a constructed, artificially policed version of Latin espoused by Cicero et al.). So blame the grammar wackos of the late 1800s for imposing an artificial rule on English usage.

Usually when you find some arbitrary grammatical rule, it's not that we need to "officially allow" other uses into English... Often we just need to stop grammar wackos from trying to enforce supposed "rules" that were simply made up by some uptight dude in the late 1800s who was bored and decided to write a grammar book to impose all of his (yes, almost always "his"!) artificial restrictions on the world.

Comment Re:Yeah, right (Score 2) 267

Third, you can spend hours fact checking the claim in order to eventually, finally, reassure yourself that yes, they are lying sacks of shit and no part of what they said was representative of the truth.

How often do you actually take the third option? How often can you, really? That's like asking someone how many EULAs they read.

All the time, actually.

Before the internet was so comprehensive, not very much, I'll admit. But now I can search for information on just about anything, and within a minute (not "hours"), I can be reading professional journal articles on the topic.

If I see a post I know is right (or at least includes a bunch of stuff I know is right already), I generally skim it or pass by. If I see a post that I know is wrong, I may reply with what I know, or I may just ignore it depending on how much I care.

But if I see a post making assertions that seem more speculative or which make strong claims that contradict what I thought I knew, I want to know the truth. So, I often go a-searching. Generally within a couple minutes, I can either locate a reputable source that seems to verify it, or a reputable source that shows the poster was an idiot -- or, I often find both the spurious claims the poster was making along with someone else who has better credentials or better data debunking it.

That's a primary way I learn new stuff in the internet age. You should try it sometime. Sure, I don't fact-check comments on things I don't care about at all, because I don't often read comments or stories I don't care about (or only briefly skim comments looking for anything interesting).

Anyhow, that's about the main reason I read comments -- I want someone to tell me something new. And if it seems legitimately new, I generally want to know more about it -- not just accept it as truth and go around telling people, "Yeah, I heard a guy on the internet talking about X, and you won't believe what he said! Let me tell you about it..."

That's useless and a waste of everyone's time. My default assumption is skepticism. If you're going to believe any useless crap on the internet without checking it yourself, then I have a bridge to sell you. What I often like about discussion here on Slashdot is that people don't have a lot of patience for that kind of nonsense. Yes, it gets modded up sometimes, but then someone else frequently comes along who does know something and can provide better citations. It's not a perfect system, but it works better than most.

Comment Re:The default state: Skeptical (Score 1) 267

This may just be my own unqualified opinion on the subject, but it seems like nothing turns people into a pack of complete idiots faster than anonymity.

Agreed -- which is the primary case for pseudonymity.

I agree that there is plenty of value in real names on the internet when someone is actually going to offer something in their official professional capacity or area of expertise. When some dude starts spouting medical advice and you can find out that he's using his real name AND is a doctor, maybe that can change your judgment.

But maybe that doctor also wants to offer other opinions on topics related to medical science, but maybe the issues are more controversial or perhaps he wants to say some things that he's not ready to stake his reputation on.

If he posts anonymously, you can't judge his opinion any better than that of any other wacko on the internet.

But if he has a durable pseudonym, people can go back and look at previous posts. If he has a record of saying reliable things about medical topics, maybe he does know something.

Or, if he wants to go posting on a chef site, he can adopt a different pseudonym, and again people can judge his reputation based on previous contributions. Regardless of whether he has "credentials," he can develop an online reputation for saying things that other actual experts agree with.

We do this all the time in real life -- we speak differently to the boss compared to our coworkers, to the old ladies at church compared to the guys at the bar, to our kids compared to a partner in the bedroom. If all of those are connected and mushed together into one "real-life" identity, it makes it nearly impossible to have the multitude of different types of interactions we do in the real world. This is one of the major problems with social networks like Facebook, which try to insist that you be a real person and that you are the same person to all people. (Zuckerberg is on record as saying that people who want to have different online identities must be inherently dishonest.)

But that's just not how things are in the real world. In the real world, you build up your reputation at the bar based on previous behavior, and a pseudonym online can approximate something like that.

Comment Re:Don't add Internet to everything. (Score 1) 267

Just talk to people and you will see the same thing. Be it in a meeting, in a pub or wherever.

That's somewhat true. Admittedly, I "cheated" here by reading TFA, but the summary is actually quite bad in this case. For example, the first half of TFA talks a lot about sexism issues in commenting and other things.

So it happens in the real world. It has happened for ages. Why would it surprise anybody that it happens on the Internet?

Well, as TFA points out, one thing that is different about the internet is that the more disconnected (and often anonymous) nature of internet commenting tends to lead people to have fewer inhibitions when commenting -- probably more so than even in a pub (to take your example) for many people.

It's not exactly a new observation, but it is something that potentially makes this interaction a little different. If people assume that commenters are authoritative and "like them" on the internet, then jerks and trolls who post nasty things -- like, say, racist comments on a story -- might gain a wider audience, who then feel justified in their own latent racism.

We can see these dynamics in action, for example in scandals in the past few years where the news has drawn attention to huge numbers of racist comments or tweets that might follow a somewhat innocuous event. (I recall a Hispanic kid singing the national anthem dressed in traditional Mexican clothing at a major national sports event bringing out a flurry of racist comments, for example -- even though the kid was born in the U.S., and I believe his father was in the U.S. military or a veteran or something.)

As I said, TFA focuses on sexism a bit rather than racism, but it's a similar issue. We all know that people are more likely to be jerks on the internet, particularly if they think they are relatively anonymous. However, if other people still read these jerks' comments as authoritative and "like them," it may reinforce opinions and ideas that are actually less mainstream or which would be kept out of normal "civil" discourse.

On the other hand, one might argue that such revelations also often show opinions that are "not politically correct" but people nevertheless hold -- so such extreme comments may also show a little more about what some people "really think" but normally wouldn't say. Whether or not that balances the trolls and flamers is another question -- but the point is that the internet does actually change these interactions in interesting ways, and TFA talks about some of them.

Comment Re:I blame the FDA (Score 1) 365

As a result, it's expensive as bloody hell to society, leading to a *deficit* in high-quality medical care in socialist countries.

I agree with most of what you said; however, this simply is not true. When you take a one-year snapshot and look at smokers -- yes, they cost more on average.

But if you look at lifetime total expense, smokers cost less because they die significantly earlier. Yes, a year or two of treatments for lung cancer can be expensive, but then many smokers die. Meanwhile, the healthy runner who needs a number of joint replacements, has a few random cancers in his 70s, and then spends the last 15 years in assisted care due to dementia can cost many times more.

Bottom line -- smokers may LOOK like a net deficit in the annual snapshot. But if you stopped ALL smoking today (somehow), you'd save money for a few years, and then 10-20 years down the road, all your socialist health costs would skyrocket... because the darn people didn't die.
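To make the snapshot-vs-lifetime distinction concrete, here's a toy back-of-the-envelope sketch. Every number in it is invented purely for illustration -- the actual studies use real actuarial data, discounting, and so on:

# Toy comparison of an annual snapshot vs. lifetime totals.
# All dollar figures and ages below are made up for illustration only.

def lifetime_cost(annual_cost, start_age, death_age):
    """Total spending from start_age until death at a flat annual cost."""
    return annual_cost * (death_age - start_age)

# Hypothetical smoker: higher annual spending, dies at 70.
smoker = lifetime_cost(annual_cost=6000, start_age=40, death_age=70)

# Hypothetical non-smoker: cheaper per year until 75, then a decade
# of expensive assisted care before dying at 85.
nonsmoker = lifetime_cost(4000, 40, 75) + lifetime_cost(50000, 75, 85)

print("Annual snapshot at 60: smoker $6,000 vs. non-smoker $4,000")
print(f"Lifetime totals: smoker ${smoker:,} vs. non-smoker ${nonsmoker:,}")
# -> smoker $180,000 vs. non-smoker $640,000

In the one-year snapshot the smoker looks more expensive; over a lifetime, the ordering flips. That's the whole argument in four lines of arithmetic.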

It's not a nice way to think about the argument, and most researchers stay away from this argument, because it seems to run counter to the anti-smoking campaigns most governments like these days. But there are plenty of studies out there which look at total life expenses and how smokers are cheaper. Spend some time looking, and you'll find them.

Comment Re:just ban it (Score 1) 365

The question is how many years of useless sucking on social security.

Why didn't you say you were a willfully ignorant sociopath to start with? Those people using the benefits they paid for are still buying cars, computers, and day-to-day goods. You know....putting money into the economy while no longer competing with younger workers for jobs.

Umm, no. Your logic doesn't make sense. I'm not a "sociopath" (or at least I don't think so), and I'm all in favor of valuing elderly people and their social contributions, but a NET monetary one is generally NOT one of them.

It's not like all of their assets magically disappear if they die at 60 or 65 or 70 or whatever. No -- that money, which was produced by ADDING value to society through work, is passed on to others when they die -- either to specific heirs or through taxes toward society's benefit in general.

And guess what -- OTHER people will then use that money, either directly in spending or investments or whatever. You don't need to prop up an 85-year-old to allow him to click on Amazon -- he can die, pass on the money to grandkids, and they can spend it just as easily.

So, what really matters is when an older person stops making a net positive contribution, which is generally around retirement. Sure, older people do often continue to do some stuff, like providing some help with childcare for grandkids or whatever, and some continue to do a lot of stuff in retirement -- but the majority stop actually generating net positive productivity at that point.

I'm NOT at all saying that they should "go ahead and die" or whatever. There are many reasons to value them as family members and other resources, but the simple fact is that most people past retirement cease to add net MONETARY value to society. Thus, from an economic standpoint, they are draining resources.

And that's why those who die young (whether from smoking, obesity, disease, whatever) are generally -- purely from an ECONOMIC balance sheet -- less of a drain on society than those who live into old age. Seriously -- there are a LOT of studies out there that show this, if you care to look. It's a little morbid, but it's the truth.

Comment Re:I'll take the wine instead (Score 1) 480

You're fortunate you don't have whatever it is that gives people a thrill from gambling. For those people, the worst thing that could happen is to win the first time. It ruins lives. I've seen it.

Yeah, I know it ruins lives. And I WAS excited by the win. I think I played another few dollars until the logic set in and I thought about the odds rationally and realized I was incredibly lucky to end up that far ahead so quickly... So I stopped. It's not that there wasn't a thrill. It's not that I have never fantasized about what it would be like to win the lottery either. But I also analyze it rationally, and for me, that rational analysis wins out.

Comment Re:I'll take the wine instead (Score 1) 480

I still maintain that by not buying a ticket my odds of winning are not significantly reduced.

Precisely true. I've won $175 in lottery money from scratch tickets, but I've never bought one.

Instead, some of my relatives have taken to giving a few of these as gifts at Christmas. I think it's ridiculous, but whatever. So, over the past few years, I've won on something like 5 or 6 tickets for a total of $175, including one ticket that got me $50 and another that got me $100.

No one else in the family has ever won more than $20-25 on a single ticket, despite some of them buying scratch tickets on a regular basis.

So yeah, I'd say your statement is definitely true. I've never bought a ticket, and I've had bigger winnings than the people I know who buy them regularly. I don't think I'm "lucky" (whatever that means, though this past year these people bought me EXTRA tickets because they're convinced I am)... it's just random chance.
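For the curious, here's roughly what "not significantly reduced" means numerically. I'm using the published Powerball jackpot odds (about 1 in 292,201,338) as a stand-in; any big lottery gives a similar picture:

# How much does buying one ticket actually change your odds?
p_with_ticket = 1 / 292_201_338
p_without_ticket = 0.0

print(f"P(jackpot, one ticket) = {p_with_ticket:.2e}")                  # ~3.42e-09
print(f"Absolute improvement   = {p_with_ticket - p_without_ticket:.2e}")
# Buying a ticket improves your odds by about 0.0000003 of a percent.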

Of course, I'm also the guy who has gambled in a casino only once, and it was when my Dad took me to one and gave me $20 "to get started." I went to a slot machine, and after spending about $5, I hit $75. I cashed out, paid my Dad back his $20, and kept the remainder. Never played again... have no desire to.

I guess the moral of my story is -- if possible, gamble with other people's money. It's been "lucky" for me, anyway. :)

(P.S. I'm not trying to be smug here. I have no issue with people who have enough money gambling for entertainment. People spend stupid amounts of money on all sorts of stuff for "entertainment," whether it's hundreds of dollars on tickets to a sporting event, a concert, an opera, whatever. Whatever floats your boat. I just personally don't find the entertainment value of lottery tickets that interesting.)

Comment Re:Except (Score 1) 480

You're missing the point. $2 is worthless to me. I can't buy a coffee (one I would drink) with $2, some places you won't even get a soda with it, and it maybe would get me breakfast if I ate a donut -- but if I'm honest, I have trouble thinking of a single restaurant I'd actually eat at with something on the menu for $2 or less.

No, $2 isn't "worthless" to you. It's worth precisely $2. You may not be able to buy much with it by itself, but if you're at a store trying to pay for your new tablet with cash and you only have $298 in your wallet instead of the $300 price, well, that $2 could cause you some annoying inconvenience, at a minimum.

Of course, if I only made minimum wage I might really care about those $2, but I don't.

And this is the real problem. The amount people spend on lottery tickets per year seems inversely proportional to their income (to a point). It's not that a $2 ticket is going to be a huge inconvenience for you or even for many lower-income people.

The problem is when the person who already has trouble making rent and having enough money to buy food for the kids this week decides to take $50 from their paycheck and buy extra lottery tickets this week. Or when they spend that money on scratch tickets in hopes that they'll get more back this week (which occasionally happens).

In other words, it's not the $2. But all money adds up with other money. The problem is not *a* $2 ticket, it's the people who end up spending hundreds or even thousands of dollars on lottery tickets every year (and there are more of those people out there than you might think), when they could be putting that money toward paying off debt or saving for retirement or just building up an emergency fund.
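As a rough illustration of how it adds up, here's a sketch assuming $20/week on tickets and a hypothetical 5% annual return on savings (both numbers are just assumptions for the sketch, not claims about any actual lottery or investment):

# "It's not the $2, it's the habit" in numbers. All inputs are assumptions.
weekly_spend = 20
years = 20
annual_spend = weekly_spend * 52          # $1,040 per year

spent_on_tickets = annual_spend * years   # essentially gone, minus tiny winnings

savings = 0.0
for _ in range(years):
    savings = (savings + annual_spend) * 1.05  # deposit, then 5% assumed growth

print(f"Spent on tickets over {years} years: ${spent_on_tickets:,}")
print(f"Same money saved at 5%/year:         ${savings:,.0f}")
# -> roughly $20,800 spent vs. ~$36,000 saved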

Comment Re: food pyramid vs calories (Score 1) 180

There was a recent case of a normal weight woman getting a fecal transplant from an obese donor, and now this woman has become obese but not changed her diet and lifestyle.

Yeah, I'd say a conclusion from this one case study is pretty darn premature. Here's the actual paper. Both the woman and her daughter (the donor) were borderline overweight (~BMI 26) before the transplant. Both the mother AND the daughter gained significant weight after the transplant (30-35 lbs. each).

By your logic, we could also attribute the daughter's weight gain to the fact that she donated stool -- which wouldn't make any sense, but is also consistent with the data.

I don't know where you get that she had "not changed her diet and lifestyle." That's not mentioned in the study. All it says is that the woman unintentionally gained a lot of weight (as did her daughter), and she apparently had tried to control it but was unsuccessful. Lots of people in their early 30s start to gain weight.

Also, I should note that the condition she was treated for caused severe digestive discomfort. Many of those symptoms lessened after treatment. Could it have been that she suddenly started eating more because it no longer made her feel terrible? (And it looks like her daughter joined her in the new binge, if her weight change shows anything....)

Anyhow, I have no idea what this one case study shows. But (1) the woman was borderline overweight before the treatment, (2) both she and her daughter the donor experienced similar weight gain, which could point to a common shift in eating habits in the household -- perhaps toward foods that were more caloric but would previously have caused the mother digestive distress, and (3) she had a condition that would have made eating too much unenjoyable, but that disease was mostly cured.

Suggestive? Perhaps. But I wouldn't conclude too much on the basis of this case study, unless there's information that's not in the official published account that could establish better causality.

Comment Re:Unsettling science (Score 1) 180

Don't blame the science - this is about taking science's name in vain and claiming something is proven when science has always been very up front about the limitations in what, for want of a better word, is called current knowledge. This is what always happens; people don't understand how science works or how scientists think and communicate. When the scientist says 'To the best of current knowledge, eating eggs is probably bad for you, although we really haven't researched that enough' it translates into 'Science says egg is bad for you'.

To be fair, there are PLENTY of scientists and studies which actively promote their results by emphasizing aspects that go beyond a reasonable interpretation of the data. It may be less common in "hard science" fields, but in "soft sciences" and things like nutritional studies, you'll often see "Discussion" sections at the end of the paper that claim, on the basis of some questionable stats and a sample group of 12, that they have discovered eggs are bad for you, found a cure for cancer, and suggest possible locations for the body of Jimmy Hoffa.

Okay, I exaggerate a bit. But I've literally had this exact conversation about an egg study in the past couple years with a vegetarian friend who started posting alarmist things on social media concerning eggs -- according to this recent study she read, eggs really WERE bad for you, and in her vegetarian diet, this seemed to be something to worry about since she tended to depend on eggs as a protein source.

Anyhow, I went and looked first at the press release she linked to. Not only the university promoting the study but the researchers themselves were quoted as saying almost verbatim, "There's been some question about this in the past, but we've shown here that eggs really are bad for you." Sure, there was some minor disclaimer at some point saying, "Further research is needed," etc., but the folks doing this study clearly had an agenda, which became clear when you read their paper.

I don't know what the agenda was -- maybe the director of the study is a militant vegan and hates the egg industry, or maybe they became convinced that eggs were terrible years ago and are fighting to hold onto their hypothesis, or maybe they just don't like eggs.

Or maybe, like many researchers, they just need grant money to keep their jobs or get tenure, and they want to draw attention to their work.

Regardless, the study clearly was full of holes, both from a statistical perspective and a design perspective. The sample size was small. They didn't try to control for the most obvious confounding variables (like, for example, what else the people ate in their diets -- it wasn't even mentioned). Etc.

Look -- I agree with you that media reports tend to exaggerate science and often don't hedge as much as real scientists do. What you fail to account for is that some scientists want their work to get attention (or are really proud of their pet theory or whatever), so while they may hedge officially in a sentence here or there, they may also be happy to have their results interpreted and cited as broadly as possible. And when they write up a press release or are interviewed, yes, they'll say "We still need further research," but they'll go on to provide all sorts of sweeping conclusions that their research may "suggest." It's no wonder then that media sources get confused.

(Again, I'm NOT saying all scientists are like this. But if you start reading things like discussion sections in nutrition papers, you'll quickly realize that (1) humans are complex systems, so designing a good experiment and analyzing the data fairly can be really hard, but (2) that often doesn't stop researchers from overstating the possible importance of their results significantly.)

Comment Re:TL;DR People doesn't understand the Turing test (Score 1) 129

The Turing Test is a thought experiment. It's just saying "if you can talk to this, and can't tell if it's a person or a computer, then it doesn't matter: it's intelligent." It's not a method for a scientific, practical process.

If that's true, then why did Turing claim in his original paper that by the year 2000, computers would be able to fool humans and "pass the test" 30% of the time? Why state such a specific prediction for a test that was not intended to be practical and only a "thought experiment"?

It's just something to think about when considering what might constitute intelligence.

Why can't it be both? In Turing's time (and still today) there were (and are) people who think real, strong, human-like AI is impossible. In order to evaluate "intelligence," though, we need a standard test that we could agree on. Turing attempted to roughly define the outlines of such a test, which also involved a lot of philosophical debate. On the other hand, he predicted that within 50 years of his paper there would be computers that could pass this test, which suggests he thought it was in fact a practical (if a little vague) way of gauging progress in AI.

Comment Re:TL;DR People doesn't understand the Turing test (Score 1) 129

It is not a test of whether an AI can fool an average person, but whether it can fool an expert.

You are not allowed to redefine the test just because it makes you more comfortable to do so. The original paper simply said "A man, a woman, and an interrogator". It did not qualify that interrogator as an expert, but simply as the one who poses the questions (thus, an interrogator).

Well, please re-read the original paper.

You are correct that the original test did not specify an AI expert as interrogator. On the other hand, read the types of dialogue Turing offers as examples. It's very clear that he is imagining "interrogators" (note that word -- it implies someone with a strong drive to ask probing questions) who are not only quite intelligent but also keep asking pointed questions designed to test the intellect of the person/thing on the other side.

The standard is clearly NOT, "Gee, can I have a nice small-talk conversation?" Instead, the "interrogator" uses questions ranging from computational problems to chess problems to composing a sonnet to detailed discussion of subtle linguistic meanings in English, related in abstract ways to classic literature.

That doesn't sound like your "average Joe" interrogator to me. Does it to you? I'm sure Turing didn't expect all his interrogators to be so intelligent, but they were clearly expected (based on his sample dialogues) to understand how to probe intelligence at a pretty sophisticated level.

Comment Re:TL;DR People doesn't understand the Turing test (Score 4, Insightful) 129

The pronoun disambiguation is a good test, because AI does that poorly, and humans do it well. But that is not a replacement for the Turing Test, that IS the Turing Test.

Indeed. Here's an excerpt from Turing's original paper describing the "imitation game," replying to a possible objection that his test could not be used to gauge true understanding as a human might:

Probably [the objector to the test] would be quite willing to accept the imitation game as a test. The game (with the player B omitted) is frequently used in practice under the name of viva voce to discover whether some one really understands something or has "learnt it parrot fashion." Let us listen in to a part of such a viva voce:

Interrogator: In the first line of your sonnet which reads "Shall I compare thee to a summer's day," would not "a spring day" do as well or better?

Witness: It wouldn't scan.

Interrogator: How about "a winter's day," That would scan all right.

Witness: Yes, but nobody wants to be compared to a winter's day.

Interrogator: Would you say Mr. Pickwick reminded you of Christmas?

Witness: In a way.

Interrogator: Yet Christmas is a winter's day, and I do not think Mr. Pickwick would mind the comparison.

Witness: I don't think you're serious. By a winter's day one means a typical winter's day, rather than a special one like Christmas.

And so on. What would Professor Jefferson say if the sonnet-writing machine was able to answer like this in the viva voce? I do not know whether he would regard the machine as "merely artificially signalling" these answers, but if the answers were as satisfactory and sustained as in the above passage I do not think he would describe it as "an easy contrivance."

THAT is the sort of standard of AI that Turing was envisioning could be passed in his "test." It isn't a computer pretending to be a non-responsive teenager with an attitude problem who doesn't really speak the same language as the interrogator (as some chatbots might claim).

It's an idea of AI as something that could debate word replacement in a Shakespearean sonnet, would understand and be able to process poetic scansion, understand the subtle word meanings and connotations in language, and be able to synthesize these various things together while applying such concepts to evaluations of classic literary references.

Turing's test thus assumes an AI competent enough to have a flawless conversation on the level of a bright university student or even a colleague of Turing's. Now, granted, we might find the literature quiz a little unnecessary, but in a more general sense this example gets at the idea of probing the AI's understanding of concepts, connecting disparate things together (like a literary character to an abstract concept to a matter of style or poetic form), and, in general, a fluent and adaptive recognition of linguistic meaning.

I think we would all agree that the various chatbots that have claimed in recent years to have "passed the Turing test" are NOWHERE near this level.

This is the kind of standard Turing himself explicitly mentioned in his original article on the test. And frankly, if I encountered an AI that could have a conversation this fluid and wide-ranging (even if not on literature specifically) in flawless English, I'd be happy to declare it "intelligent." But we don't have anything close to that -- and pretending the "Turing test" is obsolete and needs to be more strict is misunderstanding the ridiculously high expectations Turing himself set out many decades ago.

Comment Re:The sad part? (Score 1) 577

Please read the 9th and 10th amendments. Just because "rights" are not explicitly mentioned in the Constitution does NOT mean they aren't real or don't exist.

And it doesn't mean they do exist, either.

Absolutely true. But in federal law, by default, they exist until proven otherwise (at least according to the pre-1937ish Constitution).

I have no right to drive without a license.

Actually, you do, according to FEDERAL law (again, going with the pre-1937ish Constitution). The 9th and 10th amendments say that regulation of rights not enumerated is reserved to the states or to individuals. The STATES may regulate your right to drive. The federal government does not get to regulate that right.

By your logic, I'd have the right to drive without a license because the Constitution does NOT mention it.

Precisely right, since regulating driving or transportation is not an enumerated power of the federal government (pre-1937). Nowadays, and for the past 75 years or so, SCOTUS has just rolled over and let the federal government pretty much do what it wants, so the federal government is effectively no longer bound by enumerated powers. But back when it was, from the perspective of the FEDERAL government, they could not regulate your right to drive... only states or local governments.

The fact is that there were sensible gun laws for 200 years before the "2nd Amendment" movement started in the late '70s.

I'm assuming you mean the 1970s. 200 years before that was the 1770s. Please cite a federal law from the 1770s that qualifies as one of your "sensible gun laws." Or, well, for it even to be relevant to this conversation, it must post-date the enactment of the current Constitution, so cite one after 1789, I suppose.

Federal law has always been pretty severely restricted in terms of gun regulation. (Note, for example, SCOTUS's overruling of the Brady Bill's requirements for state and local governments to conduct background checks -- those are regulations that get to be determined by STATES, not by the federal government, according to the 10th amendment.) STATE laws were always allowed to regulate that right, since states are by default granted regulatory powers not assigned to the federal government.

What SCOTUS did in recent years was to INCORPORATE an explicit federal right into state and local law, continuing the gradual incorporation of the Bill of Rights that has been going on for the past 150 years or so. Before, only the federal government was bound to respect the 2nd amendment; now states and local governments must too. Just as state and local governments now must obey the 1st or 5th or whatever amendments (which wasn't always the case -- for example, there were states in the U.S. that had official established religions).

(For the record, I think we need lots of better gun regulation. I'm fully in favor of strict training requirements etc. to own such a weapon. But that has no bearing on the legal arguments here, which you're grossly misrepresenting.)
