
Comment Re:Early Crimefighting Crowdsourcing in Salem (Score 1) 270

But that's not the same as a lynch mob.

"Better than a lynch mob!" is hardly the standard the American legal system once aspired to. Although I guess people with darker hued skins might disagree.

There are innocent people being held in Guantanamo Bay without access to the rights that the American legal system was supposed to protect.

Shrugging and saying, "Well, at least we aren't burning anyone at the stake! I don't see what you're making such a big deal over!" is not a civilized response to this situation, and making out like the procedural snafus were the biggest issue kind of misses the point.

Comment Re:Will Box for Passport (Score 1) 1109

There's only one thing all terrorists have in common, and in light of recent events I thought it important to point it out. You know what I'm talking about, don't you? It's the one thing that unites terrorists all over the world, from the United States to Russia, India, the United Kingdom, Japan, Spain, Italy, Germany and even Canada.

In every case you find one and only one thing that is exactly the same amongst all of them. Every single one. You know what it is, don't you? It should be obvious now after decades of senseless attacks on innocent people. The thing that unites them all is only too clear.

It is the ONLY thing that they all have in common.

You've figured it out, haven't you?

That's right.

Every single one of those terrorist attacks was carried out by a human being.

Comment Re:Cataclysmic events may be required (Score 1) 272

This may be one factor (of possibly several) that explains the Fermi paradox.

Another factor is that specifically human intelligence of the kind that proves theorems and builds spaceships is almost certainly an accident of sexual selection. There was absolutely no utility in being able to prove theorems or build spaceships in the Stone Age, so there couldn't have been any selective pressure in favour of that type of specifically human intelligence.

This is likely why specifically human intelligence is so rare, despite all the apparent building blocks being common. Rudimentary tool use isn't especially rare, nor are the basic communication skills that appear to be the basis for language. But since the selection for these things is an accident of sexual selection and not a predictable product of natural selection, there are a lot of coincidences that have to happen to make beings like us.

It is quite likely, from what we know of abiogenesis and evolution, that life will prove to be quite common in the universe and intelligence extremely rare.

Comment Re:Looks like creationism... (Score 5, Interesting) 272

On the other hand evolutionists rarely notice that a process of natural selection doesn't create something "new", it only causes a (mathematically preexisting) potential arrangement of atoms, one of an infinite set, to actually appear.

The problem with "philosophical literacy" is that it makes you say things like "mathematically pre-existing" as if it meant something other than "non-existent".

You seem to want to reify the mathematical language we use to describe reality, as if the tool we invented and have adapted to describe the world ever more deeply somehow "predates" the world it was invented to describe.

I see no reason to privilege math over English in this regard. Both are just languages we use to describe, understand and communicate our understanding. Neither has any ontology apart from us, the beings who invented them, and to impute otherwise is both unwarranted and uninteresting. There is no explanatory need to do so, nor any operational test we can apply to test the validity of the hypothesis (although it would be damned interesting if you could come up with one.)

There are certainly many cases where our mathematical description has to be "fixed up" by hand to actually describe the world, the most obvious one being the excess of solutions to almost all the basic differential equations we use in physics, particularly the things like the backward-in-time solutions to any given wave equation. (That the time-reversed solutions of the Dirac equation can be given meaning does not change this, it merely emphasizes what a poor tool mathematics is for describing the universe in all the other cases where the advanced wave has no apparent physical meaning.)
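
To make the "excess solutions" point concrete, here is the standard textbook case, sketched from memory in LaTeX notation (the usual retarded/advanced pair for a wave equation with a source):

    \[
    \Box\, u = s(\mathbf{r}, t)
    \quad\Longrightarrow\quad
    u_{\pm}(\mathbf{r}, t) = \int \frac{s\left(\mathbf{r}',\, t \mp |\mathbf{r}-\mathbf{r}'|/c\right)}{4\pi\,|\mathbf{r}-\mathbf{r}'|}\, d^3r'
    \]

Both u_+ (retarded) and u_- (advanced) satisfy the same equation; nothing in the mathematics itself rules out the advanced solution, which depends on the source's future, so it gets discarded by hand.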

Given what a lousy tool math is for describing the world, it would be very, very weird if the world were somehow "following" math. The hypothesis that we invented math to describe the world, in much the same way we invented the stone ax to change the world, looks a lot more plausible.

Comment Re:High School Students (Score 2) 41

I was an FRC mentor for several years and it was both incredibly demanding and incredibly rewarding. You'll see high-school students go from clueless newbies in their first year with the team to competent, confident and capable young men and women by the time they're done.

A lot of it is the unplanned activities. One of my favourite memories is teaching a couple of students some vacuum technique for ensuring the pneumatic system was sealed properly. The students are motivated, interested and eager to learn, and you get to see their competencies undergo these sudden upward steps where they are frustrated and confused one minute and doing the job properly five or ten minutes later.

It's really worthwhile for everyone, and if anyone had told me beforehand how much fun it would be to work with teenagers I would have laughed my head off. But it turns out it really is.

Comment Re:Good thing it's dead (Score 1) 138

Although you're more correct than most of the people posting here, much of what you say is wrong.

SGML is a very flexible language created (pre-web) to be a universal document format - or perhaps a meta-format.

Meta-format is close. SGML is a language for defining markup languages. That's what the "G" is about (it stands for "Generalized" but should have been an "A" for "Abstract"). You're correct that with suitably clever tricks you can make almost anything a valid document against some SGML language. (The classic example is XML's redefinition of the null end tag (NET) so that an empty element can be written "<tag/>" instead of "<tag>", which is very clever but incompatible with HTML.)

SGML plays the same role in markup languages that EBNF syntax plays in programming languages.
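
To make the analogy concrete, here is a minimal sketch (a toy "memo" document type invented for illustration, not any real DTD): an SGML DTD declares what a conforming document may contain, exactly as an EBNF production declares what a conforming program may contain.

    <!-- DTD: a memo is a "to", then a "from", then a "body" -->
    <!ELEMENT memo  (to, from, body)>
    <!ELEMENT to    (#PCDATA)>
    <!ELEMENT from  (#PCDATA)>
    <!ELEMENT body  (#PCDATA)>

    (* The same shape written as an EBNF production: *)
    memo = to, from, body ;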

You're right about the power and flexibility, though: I once created a concrete syntax and DTD that would let me use SP to process RTF documents.

XML and HTML were both subsets of SGML.

"Subset" isn't the right word to be using here (why yes, I did take a double-dose of Pedantic Pills today!) XML and HTML are both concrete markup languages whose definitions are valid SGML DTDs and that use the SGML concrete reference syntax (mod the redefinition of NET used by XML).

XML somehow became popular for serializing data, but it's just not a very good tool for that. JSON is far simpler and less verbose for object serialization, but I couldn't see using it for sparse document markup.

XML is just fine for serialization, and no more verbose than JSON when used properly (contrived examples notwithstanding). What it lacks is a lightweight parser: the compelling advantage of JSON is that you don't have to pull a huge parser over the wire to handle a few hundred bytes of information. JSON would be a bad tool for document markup, though.
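
For what it's worth, here is a minimal sketch of the verbosity point (an invented record, Python standard library only): serialized with attributes, the XML form comes out about the same size as the JSON form.

    import json
    import xml.etree.ElementTree as ET

    record = {"id": 42, "name": "deuterium", "mass": 2.014}

    # JSON: {"id": 42, "name": "deuterium", "mass": 2.014}
    as_json = json.dumps(record)

    # XML with attributes: <isotope id="42" name="deuterium" mass="2.014" />
    elem = ET.Element("isotope", {k: str(v) for k, v in record.items()})
    as_xml = ET.tostring(elem, encoding="unicode")

    print(len(as_json), len(as_xml))  # within a few bytes of each other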

Comment Re:GASP we break the law all the time and no one d (Score 4, Insightful) 400

I make a distinction about that being a good safety regulation imposed by law, versus speed limits where one driver can be safer over the speed limit than a less capable driver under the speed limit.

There are no less capable drivers. I mean seriously, just ask any driver. They are all more capable than average, and therefore it's safe for them to flout the rules of the road, speed laws, you-name-it, because they feel safe, and really, when have feelings ever let anyone down as a means of perfectly objective self-assessment?

Comment Re:Yes, can detect buggy software. (Score 1) 68

The software world isn't as aware of the technology.

It's not "less aware of" so much as "this technology is less useful/more difficult to apply in the kind of complex semantic domains where software operates".

Hardware has very limited semantics compared to software: the number of distinct types on a chip is small compared to the number of distinct types in even a moderately complex application.

Each type of thing has its own set of behaviours that have to be encoded in the checking software. That encoding process has so far proven very difficult, and every time you introduce a custom type you have to specify it in the language of the checking program.
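
As a minimal illustration of that burden (a toy type with hand-written properties, in plain Python rather than any real checking tool):

    # A custom type arrives with its own behavioural contract...
    class BoundedQueue:
        def __init__(self, capacity):
            self.capacity = capacity
            self._items = []

        def push(self, x):
            if len(self._items) >= self.capacity:
                raise OverflowError("queue full")
            self._items.append(x)

        def pop(self):
            return self._items.pop(0)

    # ...and the properties a checker needs must be spelled out by hand:
    def prop_never_exceeds_capacity(q):
        return len(q._items) <= q.capacity

    def prop_fifo_order(xs):
        q = BoundedQueue(len(xs))
        for x in xs:
            q.push(x)
        return [q.pop() for _ in xs] == list(xs)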

None of this is to say that such work isn't making progress (particularly in the functional world), but the problem in the hardware domain really is easier, so we should expect software to lag, relatively speaking.

Comment Re:Can detect buggy software? (Score 3, Insightful) 68

But there is simply no way to prevent the program from doing something unintentional (like cutting the wrong thing) without prior detailed knowledge of the actual intent.

Back in the '80s, David Parnas argued that software verification was fundamentally different from hardware verification precisely because software has very nearly infinite bandwidth, the opposite of the argument being made in the article.

That is, for hardware, a small change in some parameter will in almost all practical cases result in behaviour that is not wildly different from the behaviour with the original value. This means that we can test it under a finite set of conditions and smoothly extrapolate to the rest.

With software, a single bit flipped can result in behaviour that is arbitrarily different from the original behaviour. As such, nothing short of exhaustive testing over all possible accessible states can prove software correct.
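
A minimal sketch of that discontinuity (Python standard library only, contrived by construction): flip one bit of a double and the resulting behaviour can be close to the original or unboundedly far from it.

    import struct

    def flip_bit(x, bit):
        """Return x with one bit of its IEEE-754 representation flipped."""
        (bits,) = struct.unpack("<Q", struct.pack("<d", x))
        (y,) = struct.unpack("<d", struct.pack("<Q", bits ^ (1 << bit)))
        return y

    print(flip_bit(1.0, 51))  # 1.5 -- a mantissa bit: nearby behaviour
    print(flip_bit(1.0, 62))  # inf -- an exponent bit: unboundedly different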

Algorithmic testing of the kind described here will catch some bugs, but provably correct algorithms can still run into practical implementation details that will result in arbitrarily large deviations from ideal behaviour. For example: the inertial navigation module on the Ariane V could have had provably correct code (for all I know it actually did) and the rocket still would have destroyed itself.
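
To sketch that failure mode concretely (a simulated 16-bit conversion in Python; the real flight code was Ada, and the numbers below are invented): the algorithm can be perfectly correct while the conversion feeding it traps on inputs outside the envelope it was qualified for.

    INT16_MIN, INT16_MAX = -32768, 32767

    def to_int16(x):
        """Convert as the flight hardware would: trap when x doesn't fit."""
        n = int(x)
        if not INT16_MIN <= n <= INT16_MAX:
            raise OverflowError(f"{x} does not fit in a signed 16-bit int")
        return n

    to_int16(12345.6)   # fine: inside the envelope the code was qualified for
    to_int16(65535.0)   # raises: a faster rocket, the very same "correct" code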

Most problems with embedded systems are of the kind, "The hardware did something unexpected and the software responded by doing something inappropriate." Anticipating the things the (incorrectly assembled, failure-prone, unexpectedly capable) hardware might do and figuring out what the appropriate response is constitutes the greater part of building robust embedded code, and this kind of verificationist approach, while useful as a starting point, won't address those issues at all.

Comment Re:Systemic Prejudice (Score 2) 188

I'm faced with a dilemma here: I'm an algorithmist, and believe most questions can be more accurately answered, in the long run at least, by a well developed algorithm than even the most skilled human being.

I agree, but there are a number of things about this case that are problematic for algorithmic analysis as such:

1) Ten years from now the primary growth industry in tech is going to be rasting, which won't be invented for another three years. How do you predict who is going to be successful in the highly competitive counter-rast and ablatives industries?

2) Related to that, Google is a stable corporate environment in which some kind of prediction has been possible for the past few years. The world is not stable. Never has been, never will be.

3) There is a huge industry of metrics-based success prediction, and it sucks. The entire SAT/GRE/MCAT thing is a lousy predictor of success, with only weak correlations between *AT scores and academic achievement, much less career achievement in the first ten years.

4) To validate such an algorithmic approach you would need to test it, likely by applying your metrics to people from ten to thirty years ago and seeing how well they did in the ensuing ten years. If you only apply the metric to people from ten years ago, you have no idea whether it is robust over time, and robustness over time (because: rast) is absolutely necessary.
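
(For what it's worth, a rough sketch of the backtest point 4 calls for, with invented field names and toy scoring; a real study would need far more care:)

    from statistics import correlation  # stdlib, Python 3.10+

    def backtest(people, predict, cohort_year):
        """Score a cohort using only data available in cohort_year, then
        compare against the outcomes actually observed ten years later."""
        cohort = [p for p in people if p["year"] == cohort_year]
        predicted = [predict(p["metrics"]) for p in cohort]
        actual = [p["outcome_ten_years_later"] for p in cohort]
        return correlation(predicted, actual)

    # Robustness over time is the whole game: the same predictor has to
    # hold up across many cohorts, not just the most recent one.
    # for year in (1985, 1990, 1995, 2000, 2005):
    #     print(year, backtest(people, predict, year))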

So the odds of this working well are small. Making algorithms that work in the real world is hard, and these guys seem to have picked one of the hardest problems going.

Comment Re:Amazing (Score 1) 127

Even cutlery... how long would it take for an Iron Age blacksmith to craft a single cutlery set?

The fact that most cutlery nowadays is rolled stainless steel still kinda blows me away. When I was a kid (40-odd years ago) stamped mild-steel flatware coated with peeling chrome was commonplace. Today it's almost unheard of, and you can get a decent stainless steel set for $100.

Back in the day we used sterling silver flatware for fancy occasions, which you do still see now and then, but it is so much inferior to stainless that it's extremely rare. (And honestly, making spoons designed to stir near-boiling liquids like tea out of the best elemental conductor of heat there is was never a particularly clever move...)

If you look around you, you'll see an incredible amount of amazing stuff that is of higher quality and lower cost--including lower environmental cost--than the things your parents and grandparents had, and it's only going to get better, assuming the people who don't find any of it amazing at all aren't given the power to screw things up.

Comment Re:Holy moly (Score 1) 116

I remember when this theoretical technology was proposed about a year ago, and figured it would be a decade before they could actually do it.

It would be fascinating to trawl through the /. archives and find out what fraction of the things that were predicted to come to market in timescale X actually did so. XKCD aside, I don't think this question has ever been properly addressed.

Comment Re:I don't like boost (Score 4, Insightful) 333

I think the reason for omission is simply that the operators are nothing more than syntactic sugar. Anyone that needs those operations can write them quickly without putting much thought into it.

Both the GP and you are wrong about this. Hardware support for exponentiation is completely irrelevant to it being a built-in operator rather than a function call.

FORTRAN, famously, has some extremely efficient tricks for implementing exponentiation by small integer exponents (up to 7, if memory serves) that are independent of MPU support; they are handled entirely by the compiler. Why C/C++ doesn't do the same is beyond me: writing these things efficiently for a given architecture is non-trivial, and it's better handled by the people porting the compiler than by application developers.
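
For illustration, a minimal sketch of what that compiler expansion looks like (generic binary exponentiation in Python; the FORTRAN cutoff of 7 is from memory):

    def pow_small_int(x, n):
        """Expand x**n into multiplications only, the way a compiler can
        for a small constant exponent, instead of calling into pow()."""
        result, square = 1.0, x
        while n:
            if n & 1:
                result *= square
            square *= square
            n >>= 1
        return result

    def pow7(x):
        # What a compiler can emit inline for x**7: four multiplications.
        x2 = x * x
        x4 = x2 * x2
        return x4 * x2 * x

    assert pow_small_int(2.0, 7) == pow7(2.0) == 128.0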
