
Submission + - I like my relationships like I like my source... open->

An anonymous reader writes: Researchers at the University of Edinburgh Informatics department have created joke-generating software in time to rival comedians at this month's Fringe Festival. Their system uses Google n-gram data and an unsupervised machine learning model to encode assumptions about what makes jokes of the form "I like my X like I like my Y, Z" funny. The paper is being presented at the 51st annual meeting of the Association for Computational Linguistics in Sofia, Bulgaria next week; read the paper here.
Link to Original Source

Comment Re: A good first step (Score 2) 121

Academic journals typically have an editor or group of editors who work for little or no pay. These editors decide whether a submission should proceed to peer review, select the reviewers, and oversee the communication between the reviewers and the submitting authors. Academics do this work for free because it is considered to be part of the vocation of creating and expanding knowledge. Publishers were necessary in the past because they handled the logistics of typesetting and printing and distributing the material, but now authors are able to typeset their own papers and distribute them through the internet.

The Journal of Machine Learning Research (JMLR) exemplifies this change. Much of the editorial board of the journal Machine Learning collectively resigned to form JMLR as an open-access journal. The new journal retained all of the prestige and experience of the old one, with virtually none of the costs, and is doing just fine.

Comment Re:further reason for a popular vote (Score 1) 642

First, it's going to get dumped the first time the "wrong" candidate wins the popular vote by 0.001% and some blue state has to vote all red or vice versa. Imagine all the whining about the 'stolen' election in Florida, but an order of magnitude more annoying.

Maybe, but I doubt it. Under this system, the electoral college becomes a mere formality. People will of course be curious about how their state voted, but the determining factor is the popular vote, not the electors. It's a lot easier to justify "one person, one vote" than "one person, a variable number of votes according to a 250-year-old compromise that depends on your state's relative population."

Secondly, it's a huge incentive to cheat wildly in counting the votes. In order to prevent rampant cheating, you'd have to get all the States to agree on a single voting procedure and/or control of their election systems by the Federal government. If the latter's the case, you're right back to needing to amend the Constitution.

I don't follow. How is it more of an incentive to cheat wildly when you have to fake a 1-2% swing in 122 million votes nationwide compared to, say, the 5.5 million votes in Ohio?

Finally, there are plenty of States that aren't going to want this. If urbanization continues then a small number of urban centers will be setting policy for vast areas of the US about which they know little and care less. How many bitter gun-clinging, religious, 'fly over' states want to give over their power of self-determination to LA or NY?

By the same logic, right now we have rural areas disproportionately setting policy for urban areas. Under a popular vote plan, the rural areas would receive attention that more closely reflects their population. Is this a problem? Moreover, those states, and rural regions of those states, would still have disproportionate representation in the Senate and gerrymandered congressional seats: this proposal is only for presidential elections.

Also, I doubt the opposition would be that stiff in most states. There were only 19 states, worth only 189 electoral college votes, with a partisan advantage of more than 20 points in 2012 (i.e. more partisan than 60/40, ignoring 3rd parties). A national popular vote would allow the votes of the losing 40%+ in the other states and districts to still count.

Comment Re:further reason for a popular vote (Score 2) 642

Hmm, 100% of the States agree to this to make the change...

Alternately, 75% of the States have to agree for a Constitutional Amendment.

Yah, it's sooooo much easier to get the States to bypass the amendment process....

Read the article (here's the link again). Only states worth 270 electoral votes need to agree for this change, because a state is constitutionally allowed to allocate its electors in any way it wants. Under the national popular vote compact, each state agrees to allocate its electors to the winner of the national popular vote, regardless of how its own citizens voted. Once enough states agree, it doesn't matter whether the states holding the other 268 electoral votes go along or not: the winner of the popular vote is guaranteed the 270 electoral college votes needed to win.

Comment Re:Couldn't we just charge them tuition? (Score 1) 689

It's worth pointing out that it goes the other way too. Any doctoral student from the US working/studying at a lab somewhere else will most likely be supported by grants to that lab, not with money from the US.

Where do you think Grant money comes from?

...from the government or other organizations of "somewhere else," of course. I'm an American pursuing a PhD in Scotland, and all of my fees and living expenses are covered by funds that come ultimately either from the Scottish government or from tuition paid to my university (and that tuition is zero for Scottish and EU students, though not for students from England, Wales, and Northern Ireland).

Comment Re:I'd hire him (Score 1) 368

As the saying goes, "the plural of anecdote is not data." Anecdotes by their nature are subject to sampling bias: an anecdote is not brought up unless it is somehow interesting. Taking a larger sample of anecdotes just inflates the sampling bias. You need to make sure your observations are representative, typically through taking a random sample or running a controlled experiment, to call it "data."
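As an illustrative sketch (not from the original comment; the population, threshold, and numbers here are all invented for the example), a tiny Python simulation shows why collecting more anecdotes doesn't help: if only "interesting" observations get retold, the bias survives no matter how large the sample grows, while a random sample converges to the truth.

```python
import random

random.seed(0)

# A population of 100,000 "experiences", uniform on [0, 1), true mean ~0.5.
population = [random.random() for _ in range(100_000)]
true_mean = sum(population) / len(population)

def anecdotal_mean(n):
    """Draw n observations, but only 'interesting' ones (> 0.8) get retold."""
    anecdotes = [x for x in random.sample(population, n) if x > 0.8]
    return sum(anecdotes) / len(anecdotes)

def random_sample_mean(n):
    """An unbiased random sample: every observation counts, interesting or not."""
    sample = random.sample(population, n)
    return sum(sample) / len(sample)

print(true_mean)                  # ~0.50
print(anecdotal_mean(10_000))     # ~0.90, and it stays there as n grows
print(random_sample_mean(10_000)) # ~0.50
```

Scaling `n` up by any factor leaves the anecdotal estimate stuck near 0.9: the selection rule, not the sample size, determines the bias.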

Comment Re:Congress Sucks (Score 1) 858

Health insurance is insurance. It survives because it takes calculated risks, and the general public is not a very good risk health-wise. The value and the problem with insurance is that it faces the reality that there are limited resources out there head-on. Now you may well be correct to say that using those resources for the benefit of only those who can pay is unfair, but what criteria do you use to ensure fair distribution?

The general public is a much better risk than the current system, which contains a disproportionate number of people who need more expensive treatments because they've been avoiding relatively cheap preventative care, or show up to the emergency room with no coverage at all. The health care reform prioritizes preventative care and universal coverage. You're right that the general public is a worse bet than only NBA players, but it's a much better bet than what we're covering now.

Comment Re:Supply and Demand (Score 1) 454

As a current PhD student (although not in astronomy), I think writing a dissertation is actually the most rewarding aspect of doing a PhD. First, during your PhD, you have a lot more freedom in determining the direction of your work than most researchers. As I understand it, funding agencies tend to require specific deliverables that constrain possible research questions after the PhD, but PhD research is much more open-ended. So a dissertation is an opportunity for a student to really spend some time thinking very carefully about something they care about.

However, this is only relevant if the student has the peace of mind to actually think carefully. I'm an American doing a PhD in the UK, and one of my main considerations for coming here was that UK PhD program(me)s are 3-4 years with no required courses. I did sit in on one course (for no credit) my first term, but was able to get started on my research right away, and will be submitting in December just over 3 years after starting. I've also been TA-ing (and tutoring, and marking) for one course, but it's been much less stressful than the American habit of throwing a grad student in front of 30 freshmen with little preparation.

Comment Re:Yes. (Score 1) 1127

Only a minority of men are involved in a disproportionate number of rapes. David Lisak has done some very eye-opening research, finding that most rapes are committed by about 5% of men, who rape again and again and again.

It turns out that if you ask these men questions like "Did you force someone to have sex with you, even though they didn't want to?" they are happy to say yes, they think other men would too, and they don't think that forcing somebody to have sex is rape. When people talk about "rape culture," this is what they mean. Rapists don't think that they are doing anything unusual, because they get repeated cues from the men around them that rape is OK. The vast majority of men who laugh at rape jokes, or other sexist jokes, don't actually believe the ideology behind them, but the 5% of men in the group see that laugh and think "that person is just like me; my attitudes and actions are not exceptional." Rape culture is real and has real, devastating consequences.

More to the point of TFA, or at least one of its points, when men overstep women's boundaries without actually raping them or sexually assaulting them, that reinforces the belief in the rape-y and assault-y minority of men that disrespecting women's boundaries is OK.

Comment Re:Yes. (Score 1) 1127

This study, conducted in the late 90's, found that about 17.6% of American women had experienced attempted or completed rape, with 13% experiencing completed rape (more summary stats here), and this study from 2007 found that 18% of American women had experienced rape. I'm not sure where Rei's number of 1 in 4 came from (a different country, perhaps?), but a rate of 1 in 6 is shockingly high.

Comment Re:Field dependent requirement (Score 1) 1086

Most (all?) modern approaches to artificial intelligence use calculus. If you're taking a maximum likelihood statistical approach, you'll (usually) be differentiating the probability of the data with respect to your parameters to find a good local maximum. If you're taking a Bayesian statistical approach, you'll be integrating out your model parameters to get an answer averaged over all models. If you're using a support vector machine, you'll be using Lagrange multipliers to solve the constrained optimization that minimizes your error.

There are well-studied special cases that probably wouldn't take too much understanding of calculus, because you can just use existing code out of the box, but most applications are going to require at least a basic understanding of what you're differentiating with respect to or integrating out, and how that is actually implemented in your code.
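To make the maximum-likelihood case concrete, here is a hedged sketch in Python (not from the comment; the Gaussian model, toy data, and learning rate are all assumptions chosen for illustration). It differentiates a Gaussian log-likelihood with respect to the unknown mean and follows that gradient uphill:

```python
import random

random.seed(0)

# Toy data: 1,000 points from a Gaussian with unknown mean (true value 2.0), sigma = 1.
data = [random.gauss(2.0, 1.0) for _ in range(1_000)]

def log_likelihood_grad(mu):
    """d/dmu of sum_i log N(x_i | mu, 1), which works out to sum_i (x_i - mu)."""
    return sum(x - mu for x in data)

# Gradient ascent on the log-likelihood: step in the direction of the derivative.
mu = 0.0
lr = 1e-4
for _ in range(200):
    mu += lr * log_likelihood_grad(mu)

print(mu)  # converges toward the sample mean, which is the closed-form MLE here
```

For this particular model the calculus gives a closed-form answer (the sample mean), which is exactly the "well-studied special case" situation: the library can hand you the result, but knowing what was differentiated with respect to what is what lets you adapt the approach when no closed form exists.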

"Once they go up, who cares where they come down? That's not my department." -- Werner von Braun