Comment Re:Liberated s/w on unliberated OS, or vice versa? (Score 1) 150

I was talking about things like Libre-Office

So I did a little digging. It looks like the first WYSIWYG functionality on a personal computer appeared with the Apple Lisa in 1983, and WYSIWYG quickly made its way into the GUI releases of Word and WordPerfect around 1985. Still, this is 1985 we're talking about. You're lamenting the lack of consumer-oriented open-source desktop applications at a time when any commercial enterprise trying to build and sell such a product was taking a huge risk. The reason Microsoft's mission statement was "a computer on every desktop" wasn't because the hardware was already there! Even Scott Adams mocked Microsoft for that motto in his 1995 book "The Dilbert Principle"; the concept was still seen as far-fetched even then.

KPlayer, KMPlayer, Kaffeine, Dragon Player.

Incidentally, VLC was just one example - there are plenty of others like I listed above, and then Totem and MPlayer.

Reference as many different projects as you like; they all sit on the shoulders of the same group of people who've gone to monumental efforts collecting samples, reverse engineering, documenting and reimplementing a massive variety of codecs. It doesn't matter what skin you put on the thing when the hardest part is the compression and decompression of undocumented multimedia formats.

GNUCash is admittedly there, but I doubt that they're so much into working w/ institutions like banks or brokerages as into just being a personal finance manager, which I guess is fine. However, I was thinking not so much about tax software, but rather something like QuickBooks, which has no equivalent in Linux or BSD environments.

This is exactly what I thought you meant.

I agree that those monumental efforts would not have been easy. However, my argument in my previous post has been that instead of spinning a gazillion different Linux and somewhat fewer BSD distros, people would have done better in pooling their efforts towards making liberated applications software.

Open source and free software isn't powered by some top-down, collaborative idea saying "hey, we all need to work together to do $x"; it's powered by a bunch of people doing what interests them, for their own reasons. There's no one, two or even three people who could say "everybody, pull in this direction" and get more than a few hundred people rallying behind them, and certainly not for more than a very limited period of time.

Also, look at the reasons the various distros exist. They're iterative technological improvements on previous packaging methodologies (Slackware in response to linux-from-scratch, RPMs in response to Slackware's tarballs, Apt in response to pre-yum RPMs, Portage in response to unoptimized and inflexible binary distributions), philosophical differences (Debian's DFSG as opposed to RH's more lenient policies, Ubuntu's pragmatism vs Debian's strict policies), role- or niche-specific distros (FreeNAS, pfSense, netbook-targeted), or political splits (such as how OpenBSD split off from NetBSD because some of the core devs didn't get along with Theo, and he didn't want to jump through hoops to get work done, if I read the historical conversations right).

The core distros seem to continue on inertia and contributions from derivatives. Derivatives come and go as they experiment with basing themselves on different software...such as Mint's using Debian to build a rolling-release system.

This is how competition, invention and improvement work; you have to allow for things to break into many overlapping pieces if you want to see which ones work, which ones don't, and which ones beat the pants off of everything that came before.

I expect we're going to have to agree to disagree on this, because all of the solid intellectual reference material on this is grounded in command economies vs. free markets, and Hayek vs. Keynes. All I can suggest is that you read a book or two by F. A. Hayek.

I'm going to summarize this here, and it's the last time I'll go to the trouble of specifically spelling it out: you cannot focus the open source community, because everyone in the open source community works on what interests them, and not on what some central entity suggests.

On the bloat that you are talking about in Windows as a result of having to maintain compatibility, I'd argue that that's a good thing. Admittedly, even MS broke it when going from XP to Vista, but I'd argue that if compatibility had to break, the right place to break it was between 32-bit and 64-bit Windows; as it is, apps had to be re-written for Windows 7 anyway. But back to the question on the Unix side of things: having sophisticated applications that use the above libraries, like Qt, GTK, glibc and so on, break whenever the OS ships w/ changes to the userland is extremely disruptive.

Let me tell you a bit about myself. My day job is writing C++ code to run on Windows. Some of that is maintenance work for legacy applications whose earliest versions had them running on Windows 95. Some of that is ground-up writing of new programs. I've been doing this for five years, so, yeah, I've been around the block. Now, in my hobby time, I run Gentoo systems at home, administer a Debian system for my website, and do software dev on projects that strike my fancy.

Now let me tell you a little about the history of software dev on Windows, starting with Windows 3.1, since that's the code Vista really broke. Windows 3.1 could be extended with an API called Win32s (a subset of Win32 shipped as an add-on), direct (and mostly compatible) ancestors of which are still the way you do native-code GUI application dev on Windows, up through Windows 7. If you wrote a program that did not do direct hardware access, and ran under Windows 3.1, chances are it still worked fine up through Windows XP.

Windows Vista was not the first version of Windows to operate in 64-bit mode. There was a 64-bit version of Windows XP, but driver support was spotty. (Nobody was shipping XP systems with more than 3 GB of RAM, so nobody was shipping systems with XP 64-bit. Since nobody was shipping XP 64-bit, driver developers didn't need to build 64-bit versions of their drivers.) As far as I know, Windows 3.1 apps worked on this version of Windows.

If you wrote a 32-bit application to run on Windows 95, and you didn't do any fancy hardware access, chances are your application still works fine on Windows 7. It'll probably even work on x86 (not ARM)-based Windows 8 systems.

After all, even in the Linux world, distros don't usually maintain older versions except perhaps under LTS, and since the newer versions are 'free', they have a good reason not to. As a result, someone who's using, say, Mageia 1 today might decide to, for security updates, go to Mageia 2 (I just picked this distro @ random from distrowatch - use any that you feel like). In doing this transition, quite a few things change. GTK goes from 2.24.4 to 3.4.1. GLIBC goes from 2.12.1 to 2.14.1. GCC changes from 4.5.3 to 4.6.3. Qt goes from 4.7.3 to 4.8.1. The Linux kernel used goes from 2.6.38.7 to 3.3.6. I see what you said above about most programs linking or compiling just fine, but to use a phrase you used in our previous encounter that you cited, there are too many variables, or moving parts here, if you will. As a result, when they don't work, debugging them can be a bitch.

This is a big driver in the popularity of rolling-release distributions like Gentoo, Arch and the Linux Mint Debian Edition. Periodic atomic releases do indeed make porting software to subsequent versions of a distribution difficult, and a lot of work. That's why it's the responsibility of the package's maintainer within that distribution to ensure the package continues to work.

This is also why there are best practices in software engineering. You don't change your API unless there's either an immediate critical need, or unless you've given advance warning. That's why API developers have the word "deprecate", and that's why consumers of APIs should not use deprecated components.
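
To make that concrete, here's a minimal sketch of how a library author might do it in C++ (the names are made up, and it assumes a C++14-or-later compiler for the [[deprecated]] attribute):

    // Hypothetical library header: the old entry point survives for one more
    // major version, but anyone still calling it gets a compiler warning.
    #include <string>

    namespace mylib {

    // New, preferred interface.
    void write_log(const std::string& message);

    // Old interface kept for backwards compatibility; scheduled for removal
    // at the next major version bump.
    [[deprecated("use write_log(const std::string&) instead")]]
    void WriteLog(const char* message);

    }  // namespace mylib

Consumers who heed the warning migrate at their leisure; consumers who ignore it have only themselves to blame when the next major version drops the old call.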

Rolling-release distributions have the same issue, but it tends to happen in far less overwhelming chunks.

And, yes, when there are many moving parts, things inevitably break. The trick is to only have moving parts where necessary and useful. Unfortunately, this means you can't have a "one size fits all" system, as what's necessary and useful in a desktop environment isn't necessarily the same in a desktop environment shipped over a terminal server, and certainly not the same as on a headless server. Yet, with Linux, all these disparate environments sit on more or less the same core.

If you don't like that, you can try to build a one-size-fits-all distribution. Many have tried.

In this case, there are at least 4 such variables (I'm assuming that few apps will ever use GTK and Qt at the same time), which makes mere porting from one version to another a nightmare if each of them chooses to break compatibility in one aspect or another - all this despite all the source code being available.

Open-source isn't a panacea; it doesn't guarantee that things will continue to work. It makes understanding why they don't work easier. And, thus, it makes fixing things easier. In the proprietary software world, it's really not any easier. I don't even get debugging symbols for some of the crap components I have to bring into my address space.

For hardware, SSE continued to support the same instructions that MMX did, even if differently, so it's not like apps developed to take advantage of MMX no longer ran. It's one thing to argue that an app needs to be recompiled or re-linked to run optimally on a new platform. However, it's another thing to argue that an app needs to be recompiled to run at all on the new platform, if it happens to be an upgrade from the older one.

You missed the key point in my example: Let's say I have a library which needs to meet performance guarantees to users of my library. If the CPU changes to make things more expensive (while not raising an exception), then my performance guarantee is broken. This could be critical in real-time environments such as multimedia, machinery control or medicine. Yet the CPU manufacturer is free to make such changes, even if it blindsides the user of their product (as that particular case involving MMX did). (And machinery control and medical equipment manufacturers need to test the hell out of their platform, too.)

As far as linking goes, you can still run very, very old apps (at least, as long as they use ELF). Working around this kind of issue is what LD_PRELOAD is for; I've got a friend who managed to get a libc5 app running on a libc6 system this way. (Boy, was I impressed!)
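
For anyone who hasn't played with it, here's a rough sketch of the mechanism, under a completely made-up scenario: a legacy binary wants a symbol old_frobnicate() that the current library no longer exports, so the shim supplies it and forwards to the replacement (both function names are hypothetical).

    // compat_shim.cpp -- sketch of an LD_PRELOAD compatibility shim.
    //
    // Build:  g++ -shared -fPIC -o compat_shim.so compat_shim.cpp -ldl
    // Run:    LD_PRELOAD=./compat_shim.so ./legacy-app
    #ifndef _GNU_SOURCE
    #define _GNU_SOURCE
    #endif
    #include <dlfcn.h>
    #include <cstdio>

    extern "C" int old_frobnicate(int widget_id) {
        // Look up the replacement symbol in whatever library is already loaded.
        typedef int (*new_fn)(int, int);
        static new_fn frobnicate_v2 =
            reinterpret_cast<new_fn>(dlsym(RTLD_DEFAULT, "frobnicate_v2"));

        if (!frobnicate_v2) {
            std::fprintf(stderr, "compat_shim: frobnicate_v2 not found\n");
            return -1;
        }
        // The old API had no flags argument; pass a sensible default.
        return frobnicate_v2(widget_id, /*flags=*/0);
    }

The real libc5-to-libc6 cases are messier than this, of course, but the trick is the same: your symbol wins the lookup because your library is loaded first.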

"slotting" is another mechanism distros use to get around version incompatibilities. Gentoo uses a slotting system so that apps which are known to only be compatible with a range of versions of a given library are only linked against that version of the library...and it will compile and install different versions of libraries side by side in order to do this. Works pretty well, actually. (Though sometimes you wonder why ye olde version of blah is installed, because it's forcing a dependency on stuff you really don't want on your system)

Finally...

Understand that the current "to the cloud" movement is the immediate-term solution to what you're describing. Everything is being made cross-platform by shoving it into one platform...that has implementations on every consumer-facing system out there. Now, I get the impression that you're a multimedia consumer and creator, and that that's one of your primary interests. I understand that shoving a crapton of raw video up to some cluster a thousand miles away isn't something that's going to work for you.

I also understand that using a binary blob from a proprietary vendor can be a royal pain in the ass. What you need to do in that case is use exactly the distribution and software version the vendor certifies the product with. Thankfully, this is a lot easier these days with things like libvirt, VMware and VirtualBox.

If you take the time to learn Gentoo, you may also be able to get it to work there; that slotting system I was talking about works wonders with things like Skype, Flash and other binary blobs.

Comment Re:Liberated s/w on unliberated OS, or vice versa? (Score 1) 150

Had the FOSS movements (not talking about just the FSF, but everybody involved in having source code automatically available w/ binaries) actually started w/ useful apps and making those liberated or open-sourced - things like Office Suites,

emacs, vi, vim, latex

Image & video editing software

Ok, GIMP lagged Photoshop to market by six years. However, it's worth noting Photoshop was the first in its class.

Video editing? PCs really weren't up to snuff. I remember having to run an MPEG decoder in grayscale mode under Windows 3.1 just because decoding chroma made it run too slow.

Publishing software

LaTeX...which was (and is) a WYSIWYM system similar in some respects to WordPerfect for DOS (remember that?), but is really a more user-friendly means of working with TeX, the standard-bearer for publishing at the time.

financial software

Admittedly terribly lacking, currently. Discussion of this comes up again every year around tax season...

VLC

Go ask the VLC guys how easy it is to reverse-engineer codecs written at a time when everyone who needed FMV (so, game devs) built their own in-house, where each revision of a game had changes to the codecs, and where nobody wanted to use MPEG (too expensive). The precursors to Vorbis and Theora either weren't around, or hadn't caught on. (In fact, it wasn't until sometime in the last decade that I spotted Ogg files in a commercial game's media pack.)

Once that was out there, it would have been relatively easier to migrate them to FOSS OSs, be it Linux, BSD, osFree, ReactOS, et al. The initial port may have been a bitch - all those API translations and so on - but once that was done & out of the way

So once you've completed one monumental effort of inventing products which hadn't been invented yet (or which were very new), on personal computers (which weren't yet widespread), and once you've completed another monumental effort (porting from DOS, DOS/Win32 and NT/Win32 coding styles and APIs to POSIX), then everything's easy. Except you want these monumental efforts to have occurred 20-25 years ago. And even if it had happened, it still wouldn't be easy.

(incidentally, while on that subject, such software should not have to be re-written b/w different versions of glibc or GCC or GTK or Qt - once it's written in each library, it should automatically be supported by its successors)

First, that, right there, is why Windows is so bloated: it carries an incredible amount of layered-in support for older generations of its APIs.

Second, most programs don't need to be modified to support newer versions of glibc, GCC, Gtk or Qt, except possibly in response to bumps in the major revision number. There hasn't just been a Gtk+, a Gtk2 and a Gtk3; there've been dozens of revisions of each. Programs that compile (or even link!) against revisions within the same major version set don't normally notice a difference.

Third, understand that computer science and engineering knowledge is continually marching on. We're constantly learning and implementing new ways to do things better (R/W locks instead of simple mutexes, PIC, thread-local storage, RAII and garbage collection, embedding of domain-specific languages, new ways to compute, new ways to communicate, new ways to interact with the user...), and enforcing backwards compatibility constrains how efficient your product can be. Can we keep the thing small? Can we keep it fast? Must we avoid scalar floating-point operations because callers want to use MMX, and our ABI's calling convention never guaranteed the sanctity of that register over there?
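
To put a concrete (and entirely made-up) face on two of those items, here's a reader/writer lock managed through RAII guards, so the lock is released on every exit path, exceptions included. This sketch assumes a C++17 compiler for std::shared_mutex; it's nobody's production code.

    #include <map>
    #include <mutex>
    #include <shared_mutex>
    #include <string>

    class Settings {
    public:
        // Many readers may hold the lock at once; a plain mutex would
        // serialize them all.
        std::string get(const std::string& key) const {
            std::shared_lock<std::shared_mutex> guard(mutex_);
            auto it = values_.find(key);
            return it == values_.end() ? std::string() : it->second;
        }

        // Writers take the lock exclusively; the guard unlocks automatically
        // when it goes out of scope, even if an exception is thrown.
        void set(const std::string& key, const std::string& value) {
            std::unique_lock<std::shared_mutex> guard(mutex_);
            values_[key] = value;
        }

    private:
        mutable std::shared_mutex mutex_;
        std::map<std::string, std::string> values_;
    };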

What about when hardware changes out from under you? With the introduction of SSE, Intel changed another CPU instruction (I don't remember which one...but I think it had to do with manipulating MMX like a stack) from being a zero-cost instruction to costing a CPU cycle, as they wanted to dissuade people from using that instruction. If the hardware can change the performance characteristics of instructions on you (particularly when nobody expects it), and you made a promise in your library's API docs that depended on those characteristics...what then?

Blanket backwards compatibility is, short term, a laudable goal, and this is why most products distinguish between major, minor and revision components in their version numbering: it limits how long their API promises prevent them from sloughing off obsolete, broken (take a look at how many bugs Microsoft has to keep in place for backwards-compatibility reasons) or inefficient code.

Blanket backwards compatibility is, long term, why you ultimately run into bloat problems like Windows has. Or DLL hell, in both its Windows and Linux variants. And have you looked at how much disk space WinSxS takes on your system? WinSxS is a lot like sonames or Gentoo's slotting of packages: a way to get around packages specifically needing different versions of libraries.

(Ohai, by the way. I thought your username looked familiar. If we're going to keep seeing each other, I should mark you as a friend...)

Comment Re:so what is ipv6 good for? (Score 1) 236

The simple masking pattern that I used was only as an example. Use any pattern you like for masking. And yeah, it's static - that's what EUI-64 is as well, and I was proposing an alternative to that which doesn't just leak the MAC address. But if you want a dynamic address, just take EUI-64, add it to a selected function of the date-time stamp, and then run it. Whenever it needs to expire, or get updated, repeat the process.

Sure. I was discussing the particulars (particularly in case anyone else comes along and decides to implement that mask concept). I wasn't trying to be combative.

Incidentally, all that can be done statefully as well, in a dhcp6 server. Also, you can assign a pool of addresses - however large you like - to act as dynamic address.

Again, I really don't like adding more moving parts to a system. It might work fine for a wired Ethernet link with few clients, but it falls over quickly in wireless environments like apartment buildings and hotels. (It's ugly, but perhaps one in twenty DNS queries fail for me on my laptop at home, with the packets apparently lost between my laptop and my AP twenty feet away.)

Yes, I know about DHCP-PD. Using it to specify a pool of addresses to source from is an interesting idea. And, yes, if you use DHCP, you can push all kinds of configuration into a client.

But, again, I like SLAAC, because it's more reliable and generates less network traffic for me.

Comment Re:so what is ipv6 good for? (Score 1) 236

That private company (or their ISP) needed a special transit or tunnel setup between their AS and the AS their customers sit on so that intermediate networks didn't simply drop the packets. I expect the fan-out happened at the ISP in Australia; Australia doesn't have a very good connection to the rest of the Internet.

I suspect you don't have all the details. I know you haven't done anything in-depth with IPv6, based on your previous comments.

Comment Re:so what is ipv6 good for? (Score 1) 236

FYI, your SLAAC address is still static (if you're not using Win7 or Vista...and their quirk there can be configured out of them, too), even in the case of privacy extensions. Privacy extensions use a combination of a static IP and continually-changing temporary IPs.

Otherwise, I like most of what you describe. Still, you shouldn't use a simple masking pattern. At one end of the MAC address, entropy is reduced because of the presence of manufacturer ID information. At the other end of the MAC address, entropy is reduced because of the serial nature of their generation; when I see motherboards with two NICs, those NICs typically have adjacent numbers for MAC addresses. I've seen similar patterns with MAC addresses assigned to virtual machines.

That's why I suggested SHA1; every bit in the input data gets spread across every bit in the output data. Generate the SHA1, and then take however many of the bits you need off of one end or the other. Deterministic, yet sufficiently unlikely to result in a collision.
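
A minimal sketch of what I mean, using OpenSSL's SHA1 (the MAC address below is made up, and taking the first 64 bits of the digest is just one reasonable choice):

    // Build: g++ -o ifid ifid.cpp -lcrypto
    #include <openssl/sha.h>
    #include <cstdint>
    #include <cstdio>

    int main() {
        // Example MAC address (hypothetical).
        const unsigned char mac[6] = {0x52, 0x54, 0x00, 0xab, 0xcd, 0xef};

        unsigned char digest[SHA_DIGEST_LENGTH];  // 20 bytes
        SHA1(mac, sizeof(mac), digest);

        // Use the first 64 bits of the digest as the interface identifier.
        std::uint64_t iface_id = 0;
        for (int i = 0; i < 8; ++i)
            iface_id = (iface_id << 8) | digest[i];

        // Print it in the usual four-group notation, ready to append to a /64.
        std::printf("interface id: %04llx:%04llx:%04llx:%04llx\n",
                    (unsigned long long)((iface_id >> 48) & 0xffff),
                    (unsigned long long)((iface_id >> 32) & 0xffff),
                    (unsigned long long)((iface_id >> 16) & 0xffff),
                    (unsigned long long)(iface_id & 0xffff));
        return 0;
    }

Hash the prefix along with the MAC if you want a different interface ID per network; the point is simply that the mapping stays deterministic without exposing the MAC itself.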

Comment Re:Varying links (Score 1) 236

First, using a centralized box to continually shuffle client IP addresses adds a significant degree of complexity; you're either adding moving parts to an otherwise static system, or you're forcing existing moving parts to move much faster.

Neither is a good idea.

Second, IPv6 privacy extensions work by generating a permanent IP, as well as one or more temporary addresses that are used for outbound connections; those temporary addresses are generated, deprecated and expired by the host itself...but the host still keeps a permanent address. (Windows Vista and 7 use a fully random permanent address, but other stacks use deterministically-generated permanent addresses.)

Third, the 'type' information I was referring to describes how the IPs are generated. You can tell a SLAAC IP from a privacy-extension IP just by looking at them. That's different from keying on subnet.
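
A quick sketch of what I mean by "just by looking at them": an EUI-64/SLAAC interface ID carries the fixed ff:fe filler bytes in the middle of the host portion, while a privacy-extension temporary address won't. (Both example addresses below are made up, under the 2001:db8::/32 documentation prefix.)

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <cstdio>

    // Bytes 0..7 are the /64 prefix; bytes 8..15 are the interface ID.
    // The EUI-64 expansion inserts 0xff 0xfe between the two halves of the
    // MAC, which lands at bytes 11 and 12 of the full address.
    static bool looks_like_eui64(const in6_addr& addr) {
        return addr.s6_addr[11] == 0xff && addr.s6_addr[12] == 0xfe;
    }

    int main() {
        in6_addr addr;

        // Typical SLAAC address: prefix + MAC-derived EUI-64 interface ID.
        inet_pton(AF_INET6, "2001:db8::5054:ff:feab:cdef", &addr);
        std::printf("SLAAC-style? %s\n", looks_like_eui64(addr) ? "yes" : "no");

        // Typical privacy-extension temporary address: random interface ID.
        inet_pton(AF_INET6, "2001:db8::3d2f:91c4:7a05:1b6e", &addr);
        std::printf("SLAAC-style? %s\n", looks_like_eui64(addr) ? "yes" : "no");
        return 0;
    }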

Comment Re:so what is ipv6 good for? (Score 1) 236

OK, now that doesn't make sense.

First off, you're really not guaranteed that there's going to be a decent source of random numbers[1]. And there's no need for truly random numbers; you don't want to consume them if you don't need to. Besides...I like deterministic addresses; I can see traffic on my network and immediately know which machine it corresponds to, without having to use something stateful like DHCP. If your network has fewer than ten nodes on it, it doesn't take long at all to get to know it.

So instead take the SHA1 hash of the MAC address (or perhaps the MAC address appended to the prefix), and use the first N bits. Then, at least, you have a deterministic method to generate the IP address.

[1] Believe me, this is a problem I'm trying to provide solutions for. My etools/entmesh package has been stagnant for a while, but that's because I've been prepping for getting married, getting married, doing the honeymoon thing, and dealing with other life surprises. Happens.

Comment Re:so what is ipv6 good for? (Score 1) 236

Reasonable points, though I think it was more about easing hardware implementations of the stack than about dealing specifically with memory constraints. That said, I would still protest anything that breaks SLAAC; SLAAC is very useful. If you want to describe how SLAAC would operate in a completely classless system, I wouldn't mind discussing it.

What are these incorrect assumptions you describe, and can you point to some of the examples of them coming up? (Just honestly curious)

Comment Re:so what is ipv6 good for? (Score 1) 236

You are aware that all major IPv6 stacks have IPsec support, right? Forget that the standard doesn't explicitly require it...Linux, *BSD and Windows have long shipped with working versions.

You still haven't given any technical reason why "prefix-length === 64" should be dropped; you've called it stupid and unnecessary, and the best you've done is indicate you don't care that doing so would break SLAAC. I like and use SLAAC, which depends on that 64-bit minimum subnet size; your 'unnecessary' argument doesn't hold in that context.

Comment Re:Varying links (Score 1) 236

To begin with, DHCPv6 runs counter to privacy extensions. That rather sucks. Also, I recommend actually reading RFC 4941. There's more to privacy extensions than it appears you realize.

It's also worth noting that those 16 bits between your MAC address and the width of a /64 are used for encoding type information. That makes it a useful diagnostic, and not something to be discarded lightly.
