Comment Re:Definitely not (Score 1) 427

It's an interdisciplinary field, but it is most definitely also a science, with proper academic Computer Science adhering to the scientific method. It is, in fact, the scientific study of information and computational processes. It is largely not a natural science, although it can be, because there are natural information processes (DNA, for example). Most people trained in Computer Science go on to practice Software Engineering rather than Computer Science, which is not unusual: many people trained in a science go on to apply that science as technology rather than do scientific research.

If you've ever worked in academic Computer Science, you'll know that:
- There is a vast amount of empirically collected data based on experiments.
- If you wish to publish, your work has to be reproducible, so other people can verify your data.
- Papers come with a falsifiable hypothesis.

Comment Re:Why assembly? (Score 1) 121

Most compilers (including GCC and MSVC) support these as intrinsics, which are usually fairly standardized per processor platform, so you don't actually have to drop down to the assembly level to access them. The same goes for SIMD instructions, which are another place you can get large gains over vanilla C. The intrinsics are exposed as normal functions and types.

Intrinsic code is also more standard than inline assembly, whose syntax differs between compilers. You can take x86 intrinsic code written for MSVC and compile it with GCC with relatively few problems.
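To make that concrete, here is a minimal sketch (the array names and values are purely illustrative) of SSE intrinsic code that should compile unchanged under both GCC and MSVC, with no inline assembly:

/* Portable x86 SSE intrinsics: the same source works for GCC, Clang and MSVC. */
#include <immintrin.h>  /* intrinsics header recognised by all three compilers */
#include <stdio.h>

int main(void)
{
    float a[4] = {1.0f, 2.0f, 3.0f, 4.0f};
    float b[4] = {10.0f, 20.0f, 30.0f, 40.0f};
    float sum[4];

    __m128 va = _mm_loadu_ps(a);     /* load four floats (unaligned) */
    __m128 vb = _mm_loadu_ps(b);
    __m128 vs = _mm_add_ps(va, vb);  /* one instruction adds all four lanes */
    _mm_storeu_ps(sum, vs);

    printf("%f %f %f %f\n", sum[0], sum[1], sum[2], sum[3]);
    return 0;
}

The intrinsic names and types (__m128, _mm_add_ps and so on) are the same on both compilers, which is exactly the portability you don't get from inline asm.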

Comment Re:Why assembly? (Score 4, Informative) 121

To get any performance boost over C, you have to be an extremely good assembly coder... to get a consistent 3x boost, you are either writing very sloppy C, or you're extremely good at assembly and using a pretty poor compiler or poor compiler settings. It actually takes an amazing amount of effort to beat a compiler these days, because compilers have rules to spot non-obvious stalls and the like, whereas a human has to analyze every bit of that by hand.

Also, a system where every component is 3x faster is still only 3x faster overall; there is no Captain Planet performance magic where, by the power of assembly combined, you get a 20x speed-up. Not to mention that many desktop operations are I/O limited (especially the ones where you actually notice the slowdown), and assembly doesn't magically make that faster.
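As a rough worked example of that last point (the 70/30 split below is just an assumption for illustration, not a measurement), Amdahl's law shows how little a 3x gain on the CPU-bound part buys you once I/O takes up part of the time:

/* Amdahl's law sketch: overall speedup = 1 / ((1 - p) + p / s),
 * where p is the fraction of run time we can speed up and s is the
 * speedup on that fraction. The 70% CPU / 30% I/O split is purely
 * an illustrative assumption. */
#include <stdio.h>

int main(void)
{
    double p = 0.7;  /* assumed fraction of run time spent in optimisable code */
    double s = 3.0;  /* hypothetical 3x gain from hand-written assembly */

    double overall = 1.0 / ((1.0 - p) + p / s);
    printf("Overall speedup: %.2fx\n", overall);  /* about 1.88x, well short of 3x */
    return 0;
}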

Finally, someone did try it - MenuetOS - and they were able to build quite a compact and fast OS, but they also cut out an awful lot of what goes into a modern OS to do so. Syllable itself is not written in assembly; MenuetOS, which is, was actually the example used above.

Comment Re:On no. 1 & 3: Never trust the client (Score 1) 265

That doesn't stop the client from very easily spoofing their own location coordinates before they are ever stored on the server, which is the problem the article is highlighting. The HTML5 geolocation API is called by client-side code, and the result exists in an intermediate form before being passed to the server for storage. With easily available debugging tools, you can pause the code and change that intermediate data before it gets sent off to the server. Of course, most of this is also possible with native applications; it is just considerably harder to do.
