
Comment Re:google has no choice, like many others before t (Score 1) 128

Basically you have to pay them money in order to be allowed to do things that are already ethical, perhaps even legal to do. If you already can do these things, then you often have to put up lobbying efforts to make sure that you can continue doing them.

Paying an extortionist is unethical and illegal too. Laws punish both the extortionist and those who fail to report the extortion.

For example, recall how after Google introduced gmail, California senator Liz Figueroa wanted to ban it.

Presumably she was afraid that the average Gmail user wouldn't be aware that Google (and Google's unfaithful employees, and hackers, and the NSA, ...) would be able to read his email, and would continue to be able to do so for an unspecified amount of time after that mail was "deleted". Which is what actually happens today, only on a much wider scale, with people using the services of Google (Facebook, Bing, ...) without being aware of the massive and uncontrollable espionage that supports them, because the terms of service are spelled out in EULAs which are effectively incomprehensible to those users. Banning Gmail would have been useless and unjust; I'd rather have required them to explain this policy to users in the same font size they use to advertise the size of the storage space on offer, before the user signs the contract.

In that case, it took some heavy lobbying in order to keep gmail legal.

You mean that Google overrode the people's sovereign will, which they had expressed democratically by electing Liz Figueroa, by corrupting other politicians? If so, that's highly immoral and Google deserves to be punished for it. The government holds the monopoly on coercion in modern democracies, and this privilege stems from the fact that it represents the will of the people. Subverting that is one of the most serious crimes an entity can stain itself with.

Before gmail they used to suck horribly, the good ones gave you a whopping 10MB of storage

In 2005 my ISP gave me 300 MB of storage which, in a time of 56K modem dialup connections, was plenty. The free offer from the same provider was 100MB, which is still ten times bigger than 10MB.

and each action you took required an entire page reload, making them slow as fuck.

Did your webmail work like that? My ISP's webmail looked like MS Outlook and wasn't bad. Why, AJAX was invented by Microsoft for that exact purpose.

Comment Re:I'm disapointed in people (Score 1) 693

when what their user base wanted was yet another rehash of the win 95 desktop layout. The Gnome developers actually tried to do something new in desktop UIs, they actually tried to innovate

Even Windows 8, with all of Microsoft's economic and political prowess behind it, failed, because UI designers decided to drop the excellent "Windows 95 desktop layout" without having a proper replacement for it (Metro solved a different problem). Microsoft's remedies for this situation have all gone in the direction of restoring elements of the Windows 95 desktop layout.

Perhaps so many people want the "Windows 95 desktop layout" not because they dislike change or are irrational beings. Perhaps they want it because it works, and as with most things that work, its form follows its function, which may be why most traditional desktop environments tend to look similar. Most airplanes look the same, even though aviation is characterized by strong innovation.

Comment Open Source commercial (Score 1) 144

And while Apple can readily fix a bug in its own software, at least for users who keep up on patches, "Linux" refers to a broad range of systems and vendors, rather than a single company, and the affected systems include some of the biggest names in the Linux world, like Red Hat, Debian, and Ubuntu.

And thanks to the LGPL license of GnuTLS, all users have the ability to upgrade their systems, regardless of whether Red Hat, Debian, Ubuntu, Apple, or Microsoft believe that maintaining those systems is still commercially worthwhile. GPLv3 would be better, as it would give users the guarantee of actually being able to install the updated code onto their devices, which is important for non-PC hardware.

Comment Weasel words (Score 4, Insightful) 155

Just a few days ago we were already told that the Free Software Community hates Canonical. Then again, who is this Free Software Community? I've been using free software since before it was fashionable to call it that, so I think I use lots of software coming from the Free Software Community. Today I happen to use some pieces of free software from Canonical. Of the works by some of the people presented in TFA as speaking for the "Free Software Community", I use nothing, so I see more contribution to the Free Software Community from Canonical than from them.

Don't like software from Canonical? Don't use it. They're a commercial company, so ultimately they have to break even. I understand if, after listening to everyone, they make their own decision. Their Mir project is all about Ubuntu phones: should that platform succeed, they'll take the credit; should it fail, the Free Software Community will still have Android as its reference platform. Even though Google is a commercial company too, and compared to them Canonical is Candy Candy.

Comment Re: What an open source baseband can be. (Score 1) 137

No matter how many internet comments you post, you still can't prevent other people from posting their own. A badly behaving radio will prevent all other radios working on the same frequency bands from operating correctly. It doesn't take a very powerful radio to cause massive denial of service. Also, a hacked radio could make use of frequencies that its owner hasn't paid to use.

Comment Re:Irrational open source fanboys (Score 2) 137

because they can't just hard code that right into the chip and never let you see it ...

No, because we would see either the software interfacing with the hard-coded backdoor, or some undocumented hardware means of communication coming out from the chip, and we'd start asking questions.

So if I just embed my code into the processor itself, you won't bitch.

That's just silly.

Embedding code in (read-only or flash) ROMs is actually preferred from Stallman's point of view, because it allows the hardware to work out of the box when using free software to control it. Binary firmware is problematic for free software operating systems not because free software enthusiasts have some maniacal obsession with never running binaries they haven't compiled themselves, but because the copyright holders of the firmware blobs often attach very restrictive licensing conditions to them, making them very hard or impossible to redistribute.

Comment Re:I dont get it (Score 1) 551

Now name one that the UN left for the US to hop the bag on for over a decade instead of taking care of business

Then next time Russia or some other country you don't like "takes care of some business" without waiting for the U.N., don't act outraged. Principles can't be bent to one's convenience.

Comment Re:I dont get it (Score 1) 551

Iraq was invaded without U.N. authorization because the U.S. had produced false evidence of weapons of mass destruction being stored inside Iraq. The satellite countries that agreed to take a (minor) part in the invasion stated officially that they were convinced by the U.S.' false evidence; in reality, all they wanted was a seat at the feast of the war's aftermath: they gave so they could receive.

There are other countries that repeatedly defy the United Nations and that the U.S. would never invade.

Which is not to say that Saddam Hussein was a nice guy and the U.S. is the empire of evil - I'm certainly happier living under the U.S.' influence than Russia's - but let's not paint conflicts of political interest in Manichean terms.

Comment Google arrogance (Score 5, Informative) 363

They didn't address any of the problems. They just called them "myths" and said "don't worry, trust us, everything will be fine" for each one of them. And they did so in condescending, arrogant and insulting language (look, for example, at the passage where they declare that they want people to wear Google Glass inside locker rooms (!): "just bear in mind, would-be banners..."). This reinforces my distrust of the company and my concern about the product.

Comment Re:Why should I drop glibc? (Score 1) 134

Except statically linked binaries. Those were linked at build time, but they don't invoke the linker when they are executed. Guess what I use musl for? Building static binaries.

But you're moving the goalposts now. What I said was troublesome was using musl (or anything else that is glibc-incompatible) as a replacement for glibc. Using it for selected static binaries that you build yourself is another thing.

Nope. Binaries I only need one or the other, not both. Libraries only when I need a musl version of a library.

That's because I was talking about replacing glibc with musl and still being able to run binaries, over which you have no control, written against either of the two libraries. That is the only scenario that would make end users happy, should glibc coexist with musl.

No. Only for shit software that doesn't have any kind of protocol.

Again, that's stuff that is out of our control. Engineers have to design their systems for the worst case, not for the best.

Wrong. ISO C specifies stdint.h, int64_t is a standard way to get a 64-bit signed integer, and uint64_t is standard for 64-bit unsigned integers. The fact that you don't know ISO C is illuminating.

Microsoft's stuff is compatible with ISO C90. uint64_t was unsigned __int64 in their world until very recently.
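
For illustration, here's a minimal sketch of the kind of shim such code ends up carrying around; the typedef names are made up, and the _MSC_VER cutoff (Visual Studio 2010, which I believe is the first version shipping <stdint.h>) is an assumption:

/* Hypothetical portability shim for 64-bit integers. */
#if defined(_MSC_VER) && _MSC_VER < 1600   /* pre-VS2010: no <stdint.h> */
typedef __int64          my_int64_t;
typedef unsigned __int64 my_uint64_t;
#else
#include <stdint.h>                        /* C99 / ISO C: int64_t, uint64_t */
typedef int64_t  my_int64_t;
typedef uint64_t my_uint64_t;
#endif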

Never works in practice. If the prototype for a library function is the same, and the calling semantics is the same, then it can be relinked without recompilation whether it was statically (unless it was stripped) or dynamically linked, and if either has changed, the code needs to be reviewed, rewritten and recompiled in either case.

With versioned symbols (as glibc does) you don't have this problem. But if you add support for them in your library, then you add "bloat". So you have to decide whether you want to add "bloat" or make your users unhappy.
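
To give an idea of what that looks like from the library side, here's a rough sketch of GNU-style symbol versioning; foo, LIB_1.0 and LIB_2.0 are made-up names, and it assumes the GNU toolchain plus a linker version script declaring those version nodes:

/* libfoo.c - sketch of GNU symbol versioning (GNU as/ld only).
 * LIB_1.0 and LIB_2.0 must also appear in a version script passed
 * to the linker with -Wl,--version-script=libfoo.map */

int foo_old(int x) { return x + 1; }   /* old behaviour, kept for existing binaries */
int foo_new(int x) { return x * 2; }   /* new behaviour for newly linked programs */

/* bind both implementations to versioned names of "foo";
 * "@@" marks the default version seen at link time */
__asm__(".symver foo_old,foo@LIB_1.0");
__asm__(".symver foo_new,foo@@LIB_2.0");

Old binaries keep calling foo@LIB_1.0 while newly linked ones get foo@LIB_2.0, which is how glibc changes a function's semantics without forcing a relink.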

I don't know what adding getline has to do with existing programs. If they are not already making use of getline, then even if they are recompiled, they will still not make use of getline, and will not require its symbol for linking. The Austin group (POSIX) are careful to not break stuff when revising standards, though I'm sure you can still point to some breakage; in general they tend to revise things by adding new symbols, or assuming the greatest common behavior between implementations.

They added getline() to the standard, and in doing so they broke all the software that, perfectly standard-compliant until then, had used the getline() name for its own purposes. Which is to say: hoping to solve binary compatibility through standard compliance simply doesn't work. They're two different problems.
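
To make that concrete, here's a hedged sketch of the kind of pre-2008 program that stopped building once <stdio.h> started declaring the new function (the feature-test macro stands in for however your system exposes the 2008 APIs):

/* Perfectly legal before POSIX.1-2008: a program-private getline(). */
#define _POSIX_C_SOURCE 200809L    /* expose the 2008 interfaces */
#include <stdio.h>

/* Now clashes with the declaration added to <stdio.h>:
 *   ssize_t getline(char **lineptr, size_t *n, FILE *stream);
 * so the compiler rejects this definition with a conflicting-types error. */
char *getline(char *buf, int size)
{
    return fgets(buf, size, stdin);
}

int main(void)
{
    char buf[128];
    if (getline(buf, sizeof buf))
        fputs(buf, stdout);
    return 0;
}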

Autotools gives me, as a user, and have done so since the 90s: - cross compiling support;

Never works.

I've bootstrapped my entire 64-bit Linux installation from a 32-bit host using it. It has almost always worked for me, even for packages whose developers had never thought about the possibility of cross-compilation. I have no doubt that packages using autotools were the ones that gave me the fewest problems.

However plain old Makefiles, I just set CC, CFLAGS, LD, LDFLAGS and LIBDIR and things just work (also CXX and CXFLAGS for C++).

Only for the simplest cases. When things get more complex, you'll have to handle the difference between the host C compiler (which can be used to compile stuff which will run on the build host, such as code generators) and the target C compiler (which can only be used to produce binaries that won't run on the build host).

- ability to change any installation path;

I can do that with "PREFIX=/foo/bar make install" with any well written makefile.

That's the difference: with autotools you are almost sure to get those features out of the box, and they're standardized; with other systems you have to hope that the developers wrote their makefiles well, and I can tell you that the current trend is for developers to invest less and less time in packaging their source code. You also have to study the makefiles to see whether it's DESTDIR or INSTALL_ROOT or ROOT_PREFIX or something else. Also, consider the difference between PREFIX and DESTDIR: the first can end up in paths stored in the generated code, the second won't.
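
A tiny sketch of why that distinction matters; the -DPREFIX define and the config path are made up for illustration:

/* PREFIX is typically baked in at compile time, e.g. -DPREFIX='"/opt/foo"',
 * so it ends up in paths stored inside the binary.
 * DESTDIR only prefixes the destination paths while staging "make install";
 * the program itself never sees it. */
#include <stdio.h>

#ifndef PREFIX
#define PREFIX "/usr/local"
#endif

int main(void)
{
    printf("loading %s/share/foo/foo.conf\n", PREFIX);
    return 0;
}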

- support for building shared and static libraries simultaneously using the best compiler options for each case; and probably something else that I'm forgetting.

make staticlibs; make dynamiclibs

Then you have to take into account the different flags required for building static vs shared libraries on every platform that you want to target (e.g. Linux supports non-PIC code in shared libraries on i386, other OSes don't, x86_64 never supports it...). With autotools you get that for free and out-of-the-box.

Fixing a broken homemade Makefile takes me a few minutes. Fixing a broken autobarf takes me hours to days.

Modern .ac files are much, much simpler than the Makefiles and scripts they generate. I can't see how you find them more difficult to fix than the Makefiles themselves. Have you ever tried to debug a problem with a .cmake file and its arcane language? And what about SCons, which basically works with raw Python scripts...

Porting code to 9front is basically a rewrite as it's so alien, so let's put that to one side.

I don't think autotools work well, if at all, outside UNIX-ish systems. Even on Windows, they require impedance-matching layers such as Cygwin, MinGW or Interix.

Comment Re:Why should I drop glibc? (Score 2) 134

Virtually all binary software is distributed in enormous tarballs containing their own libc, and every other library they use, and a bunch they don't simply because static linking doesn't work against glibc properly (and violates the license for non-free software).

Please don't spread false information this way.

Take Firefox for example, from Firefox.org, not the version shipped with your distro. Every Firefox tarball includes their own build of GTK, GDK, glibc, libnspr, etc.

Yes, let's take the version from firefox.org that I'm using right now.

$ ldd /opt/firefox/firefox-bin
linux-gate.so.1 (0xb77bb000)
libpthread.so.0 => /lib/i386-linux-gnu/libpthread.so.0 (0xb776b000)
libdl.so.2 => /lib/i386-linux-gnu/libdl.so.2 (0xb7765000)
librt.so.1 => /lib/i386-linux-gnu/librt.so.1 (0xb775c000)
libstdc++.so.6 => /usr/lib/libstdc++.so.6 (0xb7675000)
libm.so.6 => /lib/i386-linux-gnu/libm.so.6 (0xb762f000)
libgcc_s.so.1 => /usr/lib/libgcc_s.so.1 (0xb7614000)
libc.so.6 => /lib/i386-linux-gnu/libc.so.6 (0xb746a000)
/lib/ld-linux.so.2 (0xb77bc000)

See? It's using my libc, so what you said is completely incorrect. Shall we check whether it uses its own GTK library, as you said? I'll paste the ldd result here: http://pastebin.com/UsP0MHUe , so you can see for yourself that Firefox uses the system libraries instead.

Because dynamic linking is so horribly broken on Linux that is the only way it could possibly work on more than one distro.

Dynamic linking on Linux is state of the art. If it doesn't work for you, look for the culprit somewhere other than the operating system. If anything, it's static linking that is badly supported (by glibc, not by Linux).

SORRY.

...

Comment Re:Why should I drop glibc? (Score 1) 134

Actually it is.

It's linked by every executable on your system, and it has the role of loading every single dynamic binary you have installed, including init. It's special.

I have both musl and glibc on my systems, and they have different filenames, so I can have both installed without conflicts.

Of course you can have both installed, but not without conflicts: you must be very careful to avoid the sea of conflicts that arise when you have two incompatible libraries with the same name.

Programs interoperate through IPC and the filesystem, so having programs built against different libc on the same system is not a problem.

You'll have to:
- duplicate every single binary and library to have glibc and musl versions;
- ensure that the version of each binary is the same for its glibc and musl image (otherwise IPC and filesystem communication will fail);
- ensure that the glibc and musl versions of a binary are installed in two different paths and don't interfere (otherwise a glibc program dlopening a musl plugin will fail - see the sketch after this list), and rewrite all the software that uses hardcoded paths (e.g. Python);
- ensure that glibc and musl types are binary-compatible (otherwise IPC and filesystem communication will fail);
- ensure that every package you compile finds the correct version of both glibc and musl when you compile it;
- have your machine withstand the memory pressure of two different and unshareable userspace images running simultaneously (so much for fixing "bloat");
- ...
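
Here's a hedged sketch of that dlopen() scenario; plugin.so and plugin_entry are made-up names, and the exact failure mode (unresolved libc dependency vs. two incompatible libcs loaded into one process) depends on how things are set up:

/* host.c - glibc-linked program loading a plugin with dlopen().
 * If plugin.so was built against musl, its libc dependency either fails
 * to resolve or drags a second, incompatible libc into the process.
 * Build with: cc host.c -ldl */
#include <dlfcn.h>
#include <stdio.h>

int main(void)
{
    void *h = dlopen("./plugin.so", RTLD_NOW);
    if (!h) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }
    void (*entry)(void) = (void (*)(void))dlsym(h, "plugin_entry");
    if (entry)
        entry();
    dlclose(h);
    return 0;
}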

And musl and glibc are both source-code compatible in that they both implement a POSIX and ISO compatible libc.

Even Microsoft Visual C++ is ISO C compliant. So you can write programs that will compile against both glibc and Visual C++. As long as you don't use anything newer than C90, such as 64-bit integer types.

The only place where you would get source incompatibilities is in non-conformant programs that take advantage of defects or non-standard APIs of one of the libc, and the real solution here is to have developers fix their code so that it conforms to the standard.

If I am a user, I want my binaries to run and that's it. I don't have the option to ask whoever coded the programs that I use to rewrite them because I don't like their choice of API usage.
Furthermore, any library that is actually deployed and used will get bugs fixed, which can cause incompatible changes, and standards will be amended in ways that can break compatibility (for instance, the addition of getline() to the C library). What do you do then? Ask users to rewrite all the programs that they use every time such a thing happens?

the headaches around trying to get Gnu's arcane and defective build system to work

I've heard a million times that autotools suck. That's true. The problem is that everything else (and I mean scons, cmake, waf, ...) sucks even more. Autotools have given me, as a user, since the '90s:
- cross compiling support;
- ability to change any installation path;
- ability to have a temporary installation path for packaging;
- ability to apply a transformation to program names;
- a standardized way to change any of the tools to be used during compilation;
- support for building shared and static libraries simultaneously using the best compiler options for each case;
and probably something else that I'm forgetting.

Comment Re:Why should I drop glibc? (Score 1) 134

You get your binaries from distributions anyway

No, I get binaries from wherever I want and it works.

and with musl your closed source bits can just statically link safely libc and live happy and isolated.

I can't live happily without shared libraries. They were invented for good reasons decades ago and have been used extensively ever since. Think about the "bloat" of statically linked executables and the fact that they don't get updates to the linked-in library code (including security fixes).
