Comment Re:Quantum Computers (Score 1) 236

If you can read this... 01110101 01110010 00100000 01100001 00100000 01100111 01100101 01100101 01101011

While you, on the other hand, would need to learn to capitalize and write proper sentences. And yes, I read that using neither a calculator nor an ASCII chart. ;)
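
(For anyone who does want the computer to do it, a quick bash sketch along these lines decodes the parent's string; purely illustrative, of course:)

    s="01110101 01110010 00100000 01100001 00100000 01100111 01100101 01100101 01101011"
    # Convert each 8-bit group to its ASCII character (using bash's 2# base syntax).
    for b in $s; do printf "\\$(printf '%03o' $((2#$b)))"; done; echo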

Comment Re:Another thread, another flamewar (Score 1) 338

AFAICT, the only reason we're all using Flash is that it was a stop-gap measure to deal with the fact that normal video support in web browsers wasn't what it should have been.

What I don't understand, though, is what was wrong with the <object> tag. It could be used to embed the client's favorite media player into the page to play a video over HTTP, could it not? What does the <video> tag do that the <object> tag couldn't?
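
For reference, roughly what the two approaches look like side by side (the file name, MIME type and dimensions are just illustrative). As I understand it, with <object> the browser hands the resource to whichever plugin or helper claims the type, while with <video> the browser itself provides the decoder and controls:

    <!-- Plugin/helper route: delegate to whatever handles the MIME type -->
    <object data="movie.ogv" type="video/ogg" width="640" height="360">
      Fallback text if nothing can play it.
    </object>

    <!-- HTML5 route: the browser's own player, with a standard DOM API -->
    <video src="movie.ogv" width="640" height="360" controls>
      Fallback text if the element is unsupported.
    </video>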

Comment Re:Yes, but that's a lot more than just the GNU ut (Score 1) 747

You're just purposely trying to evade my point rather than meeting it. Yes, there are alternatives to many of the GNU utilities. I, too, could probably name a dozen or so embedded Linux distros that don't use GNU code to any great extent, but that's beside the point.

The point is that the systems the vast majority of people use (say, Debian, Ubuntu, Fedora, &c.) are heavily based on GNU. So much so, in fact, that if you really seek a single qualifier for such a system, it would be more appropriate to call it GNU than to call it Linux. At its core lies essentially all of the GNU system: the coreutils, GCC, binutils, bash, texinfo, gzip, glibc, and the reimplementations of basic system tools like grep, sed, awk and what have you. What you're normally actually using, as a user, is more often than not GNU code. (And that applies to a great many GUI users, too, seeing as GNOME is part of GNU.)

Also, I'm honestly not trying to force you to call the system GNU/Linux instead of Linux. I, too, usually call it Linux, but only because that's what I and others have become used to, not because I think it's the most accurate name. (Well, really only when I speak with laymen. When I speak with my friends, I can usually leave out the "Linux" part completely and just say that I use "Debian".) On the other hand, I certainly have no wish to actively discredit GNU's pivotal role in the system.

Comment Re:GNU/Linux is not the official name (Score 1) 747

Linux is not a GNU project, though. It's a kernel made from contributed code from many different people. Their ideas, their expertise, not the direct result of GCC. GCC is just the program used to compile their ideas. You could build it with ICC and it still is Linux, but it doesn't turn into Intel/Linux.

Certainly so, but no one, Stallman included, calls the kernel itself GNU/Linux. GNU/Linux is the term for an operating system based on the GNU userspace together with the Linux kernel.

Comment Re:Again, that argument is equally valid for the r (Score 1) 747

It's entirely possible to boot a Linux system with binutils or BSD userspace utilities.

Well yeah, sure, that's possible. Do you know anyone who uses such a distribution? I'm pretty sure RMS speaks of GNU/Linux because that's what people actually use.

Even if you say that the line between applications and operating system is fuzzy, I do think we can both agree that it is reasonable to count something required to build and boot the system as "part of the operating system", no? Especially so if we speak of the actual operating system distributions, like Debian or Fedora, which doubtless use those programs for exactly those purposes, and in which it would take a large amount of work to replace them.

(Furthermore, I'm pretty sure Linux (the kernel, that is) requires GCC, GNU ld and gmake to build. I might be wrong about that, though.)
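
(If memory serves, a typical kernel build goes roughly like the sketch below: GNU make drives the whole thing, gcc compiles every object, and GNU ld and the rest of binutils link and post-process the image. The source path and job count are illustrative.)

    cd /usr/src/linux
    make defconfig          # or: make menuconfig
    make -j4                # hundreds of gcc and ld invocations under the hood
    make modules_install
    make install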

Comment Re:That argument is equally valid (Score 1) 747

It certainly isn't. None of X, a desktop environment, Apache, MySQL, PHP, Perl or a web browser is necessary to so much as build or boot the system. They're all perfectly optional components that you can choose to run on top of your GNU/Linux operating system.

Sure, many people do run them, but others don't; either way, that's not the point. The point is that they aren't part of the operating system, and therefore there's no reason to name it after them. GNU software, on the other hand, is necessarily and intrinsically part of the operating system, and it is therefore reasonable to acknowledge it.

Even aside from that, I would still argue that it is reasonable to credit GNU, not only for the actual software but for the philosophy of the entire system. If it weren't for RMS, the FSF and GNU, free software as we know it would probably not exist.

Comment Re:GNU/Linux is not the official name (Score 2, Insightful) 747

I think you would have a more compelling argument the day you don't:

  • use the GNU coreutils and GNU bash to run your init scripts;
  • use GNU gzip and GNU cpio to create your ramdisks;
  • use GNU GRUB to load said ramdisk and boot your Linux kernel;
  • use GNU GCC and binutils to compile the entire system; or
  • use GNU's glibc as the C library for virtually every single process in the system.

Just sayin'. And I probably missed a few things.
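
(If you're curious how much of that list applies to your own machine, something along these lines will tell you; the exact commands and output vary by distro, so treat it as a sketch:)

    bash --version | head -1     # GNU bash
    ls --version | head -1       # GNU coreutils
    gcc --version | head -1      # GNU GCC
    ld --version | head -1       # GNU binutils
    ldd /bin/ls | grep libc      # almost certainly glibc
    grub-install --version       # GNU GRUB, on most desktop installs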

Comment Re:Adapt (Score 3, Interesting) 626

Everything you say is certainly true, but I would still argue that EPIC's greatest problem is its hard parallelism limit. True, it's not as hard as I made it out to be, since an EPIC instruction bundle has its non-dependence flag, but you cannot, for instance, make an EPIC CPU break off and execute two subroutines in parallel. Its parallelism lies only within a very small spatial window of instructions.

What I'd like to see instead is the CPU implementing a kind of "micro-thread" facility that would allow two larger code paths to run simultaneously -- larger than what EPIC can handle, but quite possibly still far smaller than what would be efficient to distribute across OS-level threads, with all the synchronization and scheduler overhead that implies.

Comment Re:Adapt (Score 4, Interesting) 626

As I mentioned briefly in my post, there was research into dataflow architectures in the '70s and '80s, and it turned out to be exceedingly difficult to do such things efficiently in hardware. It may very well be that they are still the final solution, but until such time as they become viable, I think doing the same thing in the compiler, as I proposed, is more than enough. That's still the computer doing it for you.

Comment Re:Adapt (Score 5, Interesting) 626

No, it's not about adaptation. The whole approach currently taken is completely, outright on-its-head wrong.

To begin with, I don't believe the article's claim about the operating systems being badly prepared. I can't speak for Windows, but I know for sure that Linux is capable of far heavier SMP operation than four CPUs.

But more importantly, many programming tasks simply aren't meaningful to break up into units as coarse as OS-level threads. Many programs would benefit from being able to run just some small operations (like iterations of a loop) in parallel, but the synchronization work required merely to wake a thread from a pool for such a task would greatly exceed the benefit.

People just think about this the wrong way. Let me restate the problem: CPU manufacturers have been finding it harder to scale clock frequencies higher, so they have started adding more functional units to CPUs to do more work per cycle instead. Since the normal OoO parallelization mechanisms don't scale well enough (probably for the same reasons people couldn't get data-flow architectures working at large scales back in the '80s), they add more cores instead.

The problem this gives rise to, as I stated above, is that the unit of parallelism you gain from more CPUs is far too coarse for the very small units of work that actually exist. What is needed, I would argue, is a way to express parallelism in the instruction set itself. HP's and Intel's EPIC idea (which is now Itanium) wasn't stupid, but it has a hard limit on how far it scales (currently four instructions simultaneously).

I don't have a final solution quite yet (though I am working on it as a thought project), but the problem we need to solve is getting a new instruction set that is inherently capable of parallel operation, not adding more cores and pushing the responsibility for multi-threading onto the programmers. This is the kind of thing the compiler could do just fine, even with compilers that exist today -- GCC's SSA representation of programs, for example, is excellent for this sort of analysis -- by isolating parts of the code that have no dependencies in the data flow and could therefore run in parallel; but the compiler needs support in the instruction set to be able to express that.
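
A trivial illustration of the kind of independence I mean, in plain C (nothing here is tied to any particular ISA): every iteration below reads and writes only its own elements, so the data flow splits into n independent strands. A compiler can see that directly from its SSA form, but today it can only exploit it within the narrow width of SIMD or VLIW bundles, or by paying for OS-level threads.

    #include <stddef.h>

    /* Each iteration touches only a[i], b[i] and c[i]; with the restrict
     * qualifiers the compiler can prove no iteration depends on another,
     * yet the instruction set gives it no general way to say "run these
     * strands concurrently". */
    void saxpy(size_t n, float alpha,
               const float *restrict a, const float *restrict b,
               float *restrict c)
    {
        for (size_t i = 0; i < n; i++)
            c[i] = alpha * a[i] + b[i];
    }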

Comment Re:You don't (Score 1) 904

How about "bash virus.txt"?

I'd like to see that "solved".

(It is far from trivial to make bash non-executable - you essentially need to make a "kiosk")

Then again, that would be no different from what you could do anyway if you got a shell prompt; shell scripts are just sequences of shell commands, after all. I don't see the problem. If you don't want your users to be able to do stuff, then naturally you need to restrict their environment, either by putting rbash in their passwd entry or by locking down Gnome for them.
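
(For the record, the rbash route is a one-liner; the username here is just an example, and rbash may also need to be listed in /etc/shells:)

    # rbash refuses cd, PATH changes, output redirection and command
    # names containing slashes.
    chsh -s /bin/rbash alice     # or: usermod -s /bin/rbash alice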

If you really feel the need to, that is; I never really understood the purpose of locking down a login session to begin with. Security problems shouldn't be solved that way anyway, and if it isn't security problems you're out to solve, then what is it that you're trying to do?
