
Comment What stupid patents? (Score 1) 99

A friend's boss saw him talking to a valve actuator using a tapping device and told him to talk to the patent lawyer about the invention. The "invention" was using a single wire to talk to something inside a containment area, where drilling holes is a bad thing, so wires can cost about a million dollars per conductor. The resulting patent application didn't have that bit in it. What it did have was the use of a single wire to send code, using a keying device, to another device. He ended up with a patent for using Morse code, complete with the encoding and everything else that was invented long ago. The bit about using the old technology in a unique way was missing.

Comment Can too healthy be bad? (Score 2) 134

There is an old test known as the Schneider Index which was used by the US Navy for divers and pilots in the 1940s. An old movie called "Dive Bomber" shows how the test was done at the time. The test ended the flying careers of many pilots if their score dropped much. It turns out that the guys who did best on the test were the ones most likely to pass out on dive bombing runs. The Schneider Index starts with reclining heart rate and blood pressure, repeats the measurements on standing and again after about 30 seconds of rapid activity, and then factors in the increase in pulse and blood pressure and the time taken to return to normal.

Comment Re:Refactoring done right happens as you go (Score 1) 247

Newton looked at the spectrum and saw that it contained six colours distinct to the human eye: red, orange, yellow, green, blue, purple. But his alchemist beliefs held 7 to be a magic number, so he wanted the spectrum to have seven colours. He decided that purple should be split into indigo and violet to reflect this, but didn't split any of the others (even where the difference is at least as pronounced), because that would have contradicted his mystical thinking.

If even Newton, 'one of the smartest men to ever live', couldn't manage to keep his science separate from his mysticism, what hope do you think other religious people have?

Comment Re:Uh, what? (Score 1) 91

This is a confusion in terms. Personally I blame Sun. An interpreter IS a form of compiler; it is the term used to refer to compilation at run time.

No, sorry. A compiler is, in theoretical terms, a partial application of an interpreter to a program. In practical terms, a compiler transforms the input into some other form, which is then executed, whereas an interpreter executes the input directly. JIT compilation is still compilation. A just-in-time compiler is the term given to compilers that produce their output just before it is executed, as opposed to ahead-of-time (AoT) compilers, which produce it all up front, even if some paths are never executed.

There's some complication, because most environments that do JIT compilation also include interpreters, which gather profiling information to incorporate into the JIT-compiled code and improve startup times. JavaScript implementations, in particular, often spend a reasonable amount of time in the interpreter because most web pages contain a load of JavaScript that's only run once or twice, and the time taken to compile it is more than the time saved in executing it. Some have multiple compilers: JavaScriptCore from the WebKit project has an interpreter and three different JIT compilers that sit at different points in the trade-off between compilation time and execution time, and it will recompile hot paths multiple times as they're executed more, with more optimisation each time.

The key difference between the interpreter and the compilers here is that the compilers are each invoked once on a segment of code, which is then executed without involving the compiler again. The interpreter is involved every time the bytecode is run: it reads a bytecode, jumps to the segment of interpreter code that executes it, and then returns. The compiler takes a sequence of bytecodes, generates a fragment of native code to execute them, and this fragment is then combined with other fragments to produce a running program.
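Roughly, the interpreter side of that looks like this toy dispatch loop (the opcodes and stack machine are invented purely for illustration, not anything from a real engine):

    #include <cstdint>
    #include <cstdio>
    #include <vector>

    // Hypothetical opcodes, made up for the example.
    enum Op : uint8_t { OP_PUSH, OP_ADD, OP_PRINT, OP_HALT };

    // An interpreter is involved on every run: read an opcode, jump to the
    // bit of interpreter code that handles it, loop. A compiler would instead
    // walk this sequence once and emit a native-code fragment for it.
    void interpret(const std::vector<uint8_t> &code) {
        std::vector<int64_t> stack;
        size_t pc = 0;
        for (;;) {
            switch (code[pc++]) {
            case OP_PUSH:                     // operand byte follows the opcode
                stack.push_back(code[pc++]);
                break;
            case OP_ADD: {                    // pop two values, push the sum
                int64_t b = stack.back(); stack.pop_back();
                stack.back() += b;
                break;
            }
            case OP_PRINT:
                std::printf("%lld\n", (long long)stack.back());
                break;
            case OP_HALT:
                return;
            }
        }
    }

    int main() {
        interpret({OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_PRINT, OP_HALT});  // prints 5
    }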

The shader compilers in drivers, however, are not JIT compilers. They are AoT compilers that are invoked at load time. They don't compile the code just before it's run; they typically compile it once and cache the result for multiple invocations of the program. Some drivers (Windows and Android come to mind) even have a mechanism that allows you to do the compilation at install time. Unlike most JIT environments, graphics drivers don't tend to use run-time profiling for optimisation; the bytecode exists solely to provide an ISA-neutral distribution format.
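From the application side, the compile-once-and-cache pattern looks roughly like this, using OpenGL's program-binary interface (a sketch only: it assumes a GL 4.1+/ARB_get_program_binary context and an extension loader such as glad, and it skips all error handling):

    #include <glad/glad.h>   // assumed loader providing the GL 4.1 entry points
    #include <vector>

    // Link once (the driver's AoT shader compiler runs here), then pull out the
    // native binary so later runs can skip the compile entirely.
    void compileAndCache(GLuint program, std::vector<char> &blob, GLenum &format) {
        glProgramParameteri(program, GL_PROGRAM_BINARY_RETRIEVABLE_HINT, GL_TRUE);
        glLinkProgram(program);                        // compilation happens once, here

        GLint length = 0;
        glGetProgramiv(program, GL_PROGRAM_BINARY_LENGTH, &length);
        blob.resize(length);
        glGetProgramBinary(program, length, nullptr, &format, blob.data());
        // Persist blob + format to disk; on the next launch, hand them back with
        // glProgramBinary(program, format, blob.data(), blob.size()) instead of
        // recompiling from source.
    }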

Comment Re:File extensions? (Score 1) 564

Ugh, trust MS to fuck up a reasonable UI choice. On OS X, by default, it only happens for programs and requires you to close the dialog and then bring up the context menu for the program while holding a modifier key. You don't know how to do it unless you've actually read all of the way to the end of the dialog, so it generally protects people.

There are some interesting corner cases, though, such as shell scripts. The file manager doesn't know whether the thing that you tell it to open a shell script with is a text editor or a script interpreter, so it may warn spuriously.

Comment Re:File extensions? (Score 2) 564

There are two problems. The first is that the OS allows you to run porn.jpg.exe after you've downloaded it from some random place on the 'net. I don't think that either OS X or Windows does: they'll both pop up a dialog saying 'You are trying to run a program downloaded from the Internet, do you really want to?', which isn't normally something that happens when people try to open a file, so it ought to prompt them to avoid it (if it doesn't, then seeing the .exe extension probably won't either).

The second is that the OS allows programs and other file types to set icons at all before their first run. This also leads to confused-deputy-like attacks, where you think you're opening a file with one program but are actually opening it with something that will interpret it as code. The solution to this is probably to have programs keep their generic program icon until after their first run. If you double-click on something that has a generic program icon, then you probably intend to run it...

Comment Uh, what? (Score 2) 91

an LLVM-based bytecode for its shading language to remove the need for a compiler from the graphics drivers

This removes the need for a shader language parser in the graphics driver. It still needs a compiler, unless you think the GPU is going to natively execute the bytecode. If you remove the compiler from a modern GPU driver, then there's very little left...

Comment Re:c++? (Score 2) 407

C++ is a language that is very good for generic programming. It doesn't really meet Alan Kay's definition of OO (and he's the one who coined the term), nor does it pass the Ingalls Test for OO. It has classes, but method dispatch is tied to the class hierarchy, so if you want to really adopt an OO style you need to use multiple inheritance and pure abstract base classes, which is a very cumbersome way of using C++.
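By 'multiple inheritance and pure abstract base classes' I mean something like the following sketch (names invented): all dispatch goes through stateless interfaces, and a concrete class picks up as many of them as it implements.

    #include <cstdio>
    #include <memory>

    // Pure abstract interfaces: no state, no implementation, only dispatch points.
    class Drawable {
    public:
        virtual ~Drawable() = default;
        virtual void draw() const = 0;
    };

    class Serializable {
    public:
        virtual ~Serializable() = default;
        virtual void save() const = 0;
    };

    // The concrete class multiply inherits the interfaces it implements.
    class Circle : public Drawable, public Serializable {
    public:
        void draw() const override { std::puts("circle"); }
        void save() const override { std::puts("saved circle"); }
    };

    // Clients only ever see the interface, which approximates the
    // message-passing style of Smalltalk-like OO in C++ terms.
    void render(const Drawable &d) { d.draw(); }

    int main() {
        std::unique_ptr<Drawable> shape = std::make_unique<Circle>();
        render(*shape);
    }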

The worst C++ code is written by people who are thinking in C when they write C++, but the second-worst C++ code is written by people who are really thinking in Smalltalk. If you're one of these people, then learn Objective-C: the language is far better at representing how you think about programs.

Any programming language can be used to write code in any style. You can write good OO code with a macro assembler if you want. However, every language has a set of styles that fit naturally with the language and ones that don't. You can force C++ to behave in an OO way, and it sort-of works, but it's not using the language in the most efficient way.

Comment Re:C++14 != C++98 (Score 1) 407

The main advantage of auto is in templates, where the type is something complex derived from template instantiations. You simply don't want to make it explicit. Oh, and with lambdas, where spelling out the type of the lambda isn't even possible (though you generally cast to std::function for assignment). As to using a float instead of an int... I have no idea how you'd do that with auto. The type of an auto variable is the type that you initialise it with. If you're using a platform where you don't have hardfloat, then why do you have APIs that return floats? There's your bug; the use of auto was just a symptom.
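The template and lambda cases look something like this (types invented just to make the point):

    #include <functional>
    #include <map>
    #include <string>
    #include <vector>

    int main() {
        std::map<std::string, std::vector<int>> table{{"a", {1, 2, 3}}};

        // Without auto, the iterator type here is a mouthful derived from the
        // template instantiation above.
        for (auto it = table.begin(); it != table.end(); ++it) {
            // ...
        }

        // A lambda's type can't be spelled out at all; auto (or a cast to
        // std::function) is the only way to hold one.
        auto add = [](int a, int b) { return a + b; };
        std::function<int(int, int)> add_fn = add;

        // auto takes the initialiser's type exactly: this is an int, and it
        // won't silently turn into a float.
        auto count = 42;

        return add(count, add_fn(1, 2)) > 0 ? 0 : 1;
    }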

And if you've never seen anyone use the algorithms library, then you must be using C++ in a very specialised environment. I've not seen any C++11 code that didn't use std::move. I've rarely seen a nontrivial C++ file that didn't use std::min or std::max, and most code will also use std::copy. The only code that I've seen that didn't use the algorithms library included its own poorly optimised, buggy versions of several of them.
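For the avoidance of doubt, I mean everyday stuff like this (trivial sketch, nothing clever):

    #include <algorithm>
    #include <string>
    #include <utility>
    #include <vector>

    int main() {
        std::vector<int> src{3, 1, 4, 1, 5};
        std::vector<int> dst(src.size());

        std::copy(src.begin(), src.end(), dst.begin());  // instead of a hand-rolled loop
        int largest  = std::max(dst[0], dst[1]);
        int smallest = std::min(dst[0], dst[1]);

        std::string name = "frame_buffer";
        std::vector<std::string> names;
        names.push_back(std::move(name));                // move, don't copy, the buffer

        return largest - smallest == 2 ? 0 : 1;
    }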

Comment Re:c++? (Score 2) 407

First, Window Maker doesn't use Objective-C; it's written in C. However, GNUstep, which is the open source implementation of the Cocoa frameworks (originally the OpenStep specification, but they're tracking Apple's changes), could use more help! Oh, and we support (on *NIX) a superset of the Objective-C language that Apple supports on their products, so I wouldn't say that Obj-C is more limited on Linux.

That said, and I say this as the maintainer of the GNUstep Objective-C implementation, I'd recommend C++, but with two caveats:

  • C++ is not an OO language. It sort-of supports OOP, but writing OO code in C++ is not the natural way of using the language.
  • Don't look at any version of the language before C++11. It's just terrible and will damage your brain.

C++11 and C++14 have cleaned up C++ a lot. With shared_ptr and unique_ptr, you can write code with sane memory management. With perfect forwarding, lambdas, and variadic templates, you can write code that has most of the benefits of a late-bound language. I like a lot of Objective-C, but Apple broke the 'simple, orthogonal syntax' when they added declared properties and a few other things. Any successful programming language eventually becomes a mess of compromises and ugly corners. Some, like Python and C++, start that way, but at least C++ has been slowly improving over the last couple of versions.
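As a flavour of what that buys you, here's a small made-up sketch using unique_ptr/shared_ptr, a lambda, and a perfectly-forwarding variadic factory:

    #include <memory>
    #include <string>
    #include <utility>

    struct Texture {
        std::string name;
        int width, height;
        Texture(std::string n, int w, int h) : name(std::move(n)), width(w), height(h) {}
    };

    // Variadic template + perfect forwarding: a make_unique-style factory that
    // passes any constructor arguments through without extra copies.
    template <typename T, typename... Args>
    std::unique_ptr<T> create(Args &&...args) {
        return std::unique_ptr<T>(new T(std::forward<Args>(args)...));
    }

    int main() {
        auto tex = create<Texture>("bricks", 256, 256);   // single owner, freed automatically
        std::shared_ptr<Texture> shared = std::move(tex); // ownership handed over; still no delete

        // The lambda captures the shared_ptr, so the texture lives as long as the closure does.
        auto describe = [shared] {
            return shared->name + ": " + std::to_string(shared->width) + "x" +
                   std::to_string(shared->height);
        };
        return describe().empty() ? 1 : 0;
    }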

The one thing where Objective-C is still a clear winner is in writing libraries that want to maintain a stable ABI. This is insanely difficult in C++ because the language doesn't have a clean separation of interface and implementation and relies a lot on inlining and static binding for performance. The downside, of course, is that once you have a library in Objective-C you're limited to consumers who also want to use Objective-C.
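The usual C++ workaround is the pimpl idiom, which buys some of that separation back at the cost of an extra indirection. A minimal sketch (both files shown together, names invented):

    // widget.h -- public header: only an opaque pointer to the implementation is
    // exposed, so private members can change without breaking client ABI.
    #include <memory>

    class Widget {
    public:
        Widget();
        ~Widget();
        void draw() const;
    private:
        struct Impl;                  // defined only in widget.cpp
        std::unique_ptr<Impl> impl_;
    };

    // widget.cpp -- implementation details, free to change between library versions.
    #include <cstdio>

    struct Widget::Impl {
        int colour = 0;
        void draw() const { std::printf("widget, colour %d\n", colour); }
    };

    Widget::Widget() : impl_(new Impl) {}
    Widget::~Widget() = default;      // defined where Impl is complete
    void Widget::draw() const { impl_->draw(); }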

Oh, and Qt GUIs suck beyond belief on OS X - not sure what they're like on Windows, but I wouldn't recommend them for a portable UI. Good MVC design and a native UI are the only way to go if you really want a cross-platform GUI app that doesn't suck.
