
Comment Re:How many other flaws (Score 2) 173

This is already the case for many minor and traffic offences (in Belgium these are handled in Police Court; I'm sure the US has a similar type of court). It's basically your word against theirs, and there is no way the judge will take yours. The same goes for slander or resisting arrest. Often the judge won't even hear your defence if you don't have a lawyer. You start talking and they just go: "Listen, this is the offence, that's the fine. Next!" They simply don't have time to devote proper attention to every case. Only when there's been a murder or something more serious does the full court system, with a proper defence and serious proceedings, come into play.

Comment Re:Poor quality of courses (Score 1) 145

Andrew Ng's "Machine Learning" on Coursera is also very well presented. It's maybe a bit light on the hardcore math side (which he acknowledges several times), but he gives a very good overview of what's available and of how and when to use different ML techniques. He never loses track of the big picture, which really is one of the most important aspects of tackling any problem space, because in the end you're not going to re-implement a neural network; you'll just use an existing package.
I did the PGM course (successfully), and Daphne Koller warns in the introduction that it is a hard course (even by Stanford CS standards) and that Stanford students devote a significant amount of time to it weekly (I think it was 15-20 hours on average). I did indeed often get lost in some of the ramblings, thinking "why is this necessary, and what are we trying to do here?". It was not always clear to me how some of the techniques connected to reality, or why one was better than another. Still, it's a very useful course that goes deep into Bayesian networks, Markov random fields, etc.
So your mileage may definitely vary, and some courses really do require you to be on top of your game and to have some serious prior background knowledge. But I love MOOCs, and how can one not be thankful for access to courses given by the most prominent researchers and professors in their fields?

Comment Liberal Arts (Score 1) 397

Except for maybe the hardcore nerds, I've noticed that most people in STEM are actually very interested in the liberal arts (literature, music, anthropology, history, graphical arts, ...) and enjoy experiencing and learning about them on their own time. Of the people who were into STEM in high school, most achieved higher grades in the liberal arts courses than the so-called liberal arts students did.

Comment Re:I find it interesting we are bashing tech (AGAI (Score 1) 349

Indeed. I know a 40-year-old woman who is a director at a technology company (a real director, not a 'director' at a ten-man startup, meaning a big payout and a big BMW company car), and she's also complaining about the glass ceiling because all the colleagues she works with at about the same level are VPs. She doesn't seem to get that in the last seven years she was promoted from software engineer to project management to director. That's a pretty steep promotion curve, and she still has a long career ahead of her to make it even bigger. I'm sure most of the former software engineers she worked with are still just that.

Comment Re:It's just hard work and machine learning (Score 1) 68

Well, machine learning doesn't exclude the use of semantic tools like ontologies. You can still use one as a gazetteer to seed your ML indexing process, run inference over the ontology hierarchy, and so on. The two aren't really mutually exclusive. However, I do think the idea of everyone annotating their web pages semantically is never going to take off. The closest thing we have successfully achieved on the interwebz in that sense is Wikipedia.
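The gazetteer idea can be sketched in a few lines: match document tokens against an ontology's concept labels to bootstrap annotations for an ML pipeline. The labels and concept types below are hypothetical placeholders, not drawn from any real ontology.

```python
# Minimal gazetteer sketch: map ontology labels to concept types,
# then tag any matching tokens in a document. (Hypothetical labels.)
ONTOLOGY = {
    "aspirin": "Drug",
    "ibuprofen": "Drug",
    "headache": "Symptom",
}

def annotate(text):
    """Return (token, concept_type) pairs for tokens found in the gazetteer."""
    tokens = text.lower().split()
    return [(tok, ONTOLOGY[tok]) for tok in tokens if tok in ONTOLOGY]

print(annotate("Take aspirin for a headache"))
# [('aspirin', 'Drug'), ('headache', 'Symptom')]
```

A real system would use the ontology's synonym lists and hierarchy (so "ASA" also maps to Drug, and Drug rolls up to its parent concepts), but the lookup-then-tag shape is the same.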

Comment Re:It's just hard work and machine learning (Score 1) 68

No you didn't :) It was a valid argument.
However, this semantic enhancement requires a couple of things. First, the model (the ontology) must be defined by consensus, and a model is by definition an imperfect representation of reality, so even a manually crafted ontology still won't be 'exact'. Apply this to the big medical ontologies, which may have hundreds of thousands of concepts, and you're really in trouble. That's the ontology part. Next there is the actual semantic annotation of the document, where you trust that the annotator's knowledge of the ontology is perfect and that they're doing a good job of annotating the document. That requires plenty of training.

Comment It's just hard work and machine learning (Score 1) 68

I think there are two reasons why rdf(s)/owl-annotated web pages never really gained traction. First, it's hard work if you have to do it manually, although most content management systems now offer some kind of keyword-tagging feature. The second reason, IMO, is that current Big Data and machine learning techniques (plus far more computing power, storage, and bandwidth than 15 years ago, when the whole rdf/owl thing took off) trump the whole categorization and knowledge-extraction / data-mining process anyway.
