
Comment Weasel words (Score 4, Insightful) 155

Just a few days ago we were told that the Free Software Community hates Canonical. Then again, who is this Free Software Community? I've been using free software since before it was fashionable to call it that, so I think I use lots of software coming from the Free Software Community. Today I happen to use some pieces of free software from Canonical. Of the works by the people quoted in TFA as speaking for the "Free Software Community", I use nothing, so I see more contribution to the Free Software Community from Canonical than from them.

Don't like software from Canonical? Don't use it. They're a commercial company, so ultimately they have to break even. I understand if, after listening to everyone, they make their own decision. Their Mir project is all about Ubuntu phones: should that platform be successful, they'll take the credit; should it fail, the Free Software Community will still have Android as its reference platform. Even though Google is a commercial company too, and compared to them Canonical is Candy Candy.

Comment Re: What an open source baseband can be. (Score 1) 137

No matter how many internet comments you post, you still can't prevent other people from posting their own. A badly behaving radio, on the other hand, will prevent all other radios working on the same frequency bands from operating correctly. It doesn't take a very powerful radio to cause massive denial of service. Also, a hacked radio could make use of frequencies that its owner hasn't paid to use.

Comment Re:Irrational open source fanboys (Score 2) 137

because they can't just hard code that right into the chip and never let you see it ...

No, because we would see either the software interfacing with the hard-coded backdoor, or some undocumented hardware means of communication coming out from the chip, and we'd start asking questions.

So if I just embed my code into the processor itself, you won't bitch.

Thats just silly.

Embedding code in (read-only or flash) ROMs is actually preferred from Stallman's point of view, because it allows the hardware to work out of the box when using free software to control it. Binary firmware is problematic for free software operating systems not because free software enthusiasts have some maniacal obsession about not running binaries they haven't compiled themselves, but because the copyright holders of the firmware blobs often attach very restrictive licensing conditions to them, making them very hard or impossible to redistribute.

Comment Re:I dont get it (Score 1) 551

Now name one that the UN left for the US to hop the bag on for over a decade instead of taking care of business

Then, the next time Russia or some other country you don't like "takes care of some business" without waiting for the U.N., don't act outraged. Principles can't be bent to one's convenience.

Comment Re:I dont get it (Score 1) 551

Iraq was invaded without U.N. authorization because the U.S. had produced false evidence of weapons of mass destruction being stored inside Iraq. The satellite countries that agreed to take a (minor) part in the invasion officially stated that they were convinced by the U.S.' false evidence; in reality, all they were after was a share of the feast in the war's aftermath - they gave so they could receive.

There are other countries that repeatedly defy the United Nations and that the U.S. would never invade.

Which is not to say that Saddam Hussein was a nice guy and the U.S. is the empire of evil - I am certainly happier living under the U.S.' influence rather than Russia's - but let's not paint conflicts of political interest with Manichaeism.

Comment Google arrogance (Score 5, Informative) 363

They didn't address any of the problems. They just called them "myths" and said "don't worry, trust us, everything will be fine" for each one of them. And they did so using condescending, arrogant and insulting language (look, for example, at the passage where they declare that they want people to wear Google Glass inside locker rooms (!): "just bear in mind, would-be banners..."). This reinforces my distrust of the company and my concern about the product.

Comment Re:Why should I drop glibc? (Score 1) 134

Except statically linked binaries. Those were linked at build time, but they don't invoke the linker when they are executed. Guess what I use musl for? Building static binaries.

But you're changing the goalpost now. What I said was troublesome was using musl (or anything else, if glibc-incompatible) as a replacement for glibc. Using it for selected static binaries that you build yourself is another thing.
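For that use case, something like this is all it takes (a minimal sketch; it assumes musl's musl-gcc wrapper is installed, and the file name is made up):

/* hello.c - trivial program to link statically against musl.
 * Build (assumption: musl's gcc wrapper is installed as musl-gcc):
 *   musl-gcc -static -Os hello.c -o hello
 * "ldd ./hello" then reports "not a dynamic executable", and the binary
 * runs on any Linux box of the same architecture, glibc-based or not. */
#include <stdio.h>

int main(void)
{
    puts("statically linked against musl");
    return 0;
}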

Nope. Binaries I only need one or the other, not both. Libraries only when I need a musl version of a library.

That's because I was talking about replacing glibc with musl, and being able to run binaries, over which you have no control, written for either of the two libraries. Which is the only scenario that would make end users happy, should glibc coexist with musl.

No. Only for shit software that doesn't have any kind of protocol.

Again, that's stuff that is out of our control. Engineers have to design their systems for the worst case, not for the best.

Wrong. ISO C specifies stdint.h, int64_t is a standard way to get a 64-bit signed integer, and uint64_t is standard for 64-bit unsigned integers. The fact that you don't know ISO C is illuminating.

Microsoft's stuff is compatible with ISO C90. uint64_t was unsigned __int64 in their world until very recently.
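To illustrate, a small sketch of the portable spelling next to the old Microsoft one (the _MSC_VER cutoff is an assumption about when their compiler gained stdint.h; the typedef names are made up):

/* fixed_width.c - 64-bit integers via ISO C99 stdint.h, with a
 * fallback for old MSVC compilers that lacked the header. */
#if defined(_MSC_VER) && _MSC_VER < 1600   /* assumption: pre-VS2010 */
typedef __int64 my_int64;
typedef unsigned __int64 my_uint64;
#else
#include <stdint.h>
typedef int64_t my_int64;
typedef uint64_t my_uint64;
#endif

int main(void)
{
    my_uint64 big = (my_uint64)1 << 40;    /* needs a real 64-bit type */
    return big > 0 ? 0 : 1;
}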

Never works in practice. If the prototype for a library function is the same, and the calling semantics is the same, then it can be relinked without recompilation whether it was statically (unless it was stripped) or dynamically linked, and if either has changed, the code needs to be reviewed, rewritten and recompiled in either case.

With versioned symbols (as glibc uses) you don't have this problem. But if you add support for them in your library, then you add "bloat". So you have to decide whether you want to add "bloat" or to make your users unhappy.
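Roughly, that "bloat" looks like this with the GNU toolchain (library name, versions and functions are made up; the mechanism is the .symver directive plus a linker version script):

/* libfoo.c - keep the old ABI alive while exporting a new default.
 * Build:  gcc -shared -fPIC libfoo.c -Wl,--version-script=libfoo.map */
__asm__(".symver foo_v1, foo@LIBFOO_1.0");   /* old binaries keep this one */
__asm__(".symver foo_v2, foo@@LIBFOO_2.0");  /* @@ marks the new default  */

int foo_v1(int x)        { return x; }
int foo_v2(int x, int y) { return x + y; }

/* libfoo.map:
 * LIBFOO_1.0 { global: foo; local: *; };
 * LIBFOO_2.0 { global: foo; } LIBFOO_1.0;
 */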

I don't know what adding getline has to do with existing programs. If they are not already making use of getline, then even if they are recompiled, they will still not make use of getline, and will not require it's symbol for linking. The Austin group (POSIX) are careful to not break stuff when revising standards, though I'm sure you can still point to some breakage; in general they tend to revise things by adding new symbols, or assuming the greatest common behavior between implementations.

They added getline() to the standard, and in doing so they broke all the software that, while perfectly standard-compliant until then, used the getline() name for its own purposes. This is to say that trying to solve binary compatibility through standard compliance simply doesn't work. They're two different problems.
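A sketch of the kind of code that got broken (the program itself is hypothetical, but it was legal C before POSIX.1-2008 claimed the name):

/* readline_clash.c - before POSIX.1-2008, defining your own getline()
 * was conformant; afterwards, <stdio.h> on a 2008-level libc may declare
 *   ssize_t getline(char **lineptr, size_t *n, FILE *stream);
 * and this definition suddenly conflicts with it at compile or link time. */
#include <stdio.h>

char *getline(void)                 /* clashes with the new standard symbol */
{
    static char buf[256];
    return fgets(buf, sizeof buf, stdin);
}

int main(void)
{
    char *line = getline();
    if (line)
        printf("you typed: %s", line);
    return 0;
}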

Autotools gives me, as a user, and have done so since the 90s: - cross compiling support;

Never works.

I've bootstrapped my entire 64-bit Linux installation from a 32-bit host using it. It has almost always worked for me, even for packages whose developers had never considered the possibility of cross-compilation. I have no doubt that the packages using autotools were the ones that gave me the fewest problems.

However plain old Makefiles, I just set CC, CFLAGS, LD, LDFLAGS and LIBDIR and things just work (also CXX and CXFLAGS for C++).

Only for the simplest cases. When things get more complex, you'll have to handle the difference between the host C compiler (which can be used to compile stuff which will run on the build host, such as code generators) and the target C compiler (which can only be used to produce binaries that won't run on the build host).
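For example (hypothetical file; the CC_FOR_BUILD spelling is a common convention in autotools-based packages, not a universal one), a package containing a code generator needs two compilers as soon as you cross-compile, which a plain CC=... makefile doesn't express:

/* gen_tables.c - a build-time code generator.  When cross-compiling, this
 * file must be built with the compiler for the BUILD machine (say, plain cc,
 * often spelled CC_FOR_BUILD) so it can run during the build, while the
 * header it emits is compiled into the package with the cross compiler
 * (say, aarch64-linux-gnu-gcc) for the target. */
#include <stdio.h>

int main(void)
{
    /* emit a header consumed by the rest of the (cross-compiled) package */
    for (int i = 0; i < 8; i++)
        printf("#define BIT%d (1u << %d)\n", i, i);
    return 0;
}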

- ability to change any installation path;

I can do that with "PREFIX=/foo/bar make install" with any well written makefile.

That's the difference: with autotools you are almost sure to get those features out of the box, and they're standardized; with other systems you have to hope that the developers wrote their makefiles well, and I can tell you that the current trend among developers is to invest less and less time in packaging their source code. You also have to study the makefiles to see whether it's DESTDIR or INSTALL_ROOT or ROOT_PREFIX or something else. Also, consider the difference between PREFIX and DESTDIR. The first can end up in paths stored in the generated code. The second won't.
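Something along these lines (the -DPREFIX define and the paths are made up):

/* paths.c - the prefix is typically baked into the binary at compile time,
 * e.g. with  cc -DPREFIX='"/usr"' -c paths.c,  so it ends up inside the
 * generated code.  DESTDIR, by contrast, only relocates the files during
 * "make DESTDIR=/tmp/pkgroot install" and never appears in the binary. */
#include <stdio.h>

#ifndef PREFIX
#define PREFIX "/usr/local"
#endif

int main(void)
{
    /* this path stays the same no matter where DESTDIR staged the files */
    printf("reading %s/share/myapp/data.conf\n", PREFIX);
    return 0;
}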

- support for building shared and static libraries simultaneously using the best compiler options for each case; and probably something else that I'm forgetting.

make staticlibs; make dynamiclibs

Then you have to take into account the different flags required for building static vs shared libraries on every platform that you want to target (e.g. Linux supports non-PIC code in shared libraries on i386, other OSes don't, x86_64 never supports it...). With autotools you get that for free and out-of-the-box.
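Done by hand, it looks roughly like this per library (typical ELF commands; the exact flags are an assumption and vary by platform, which is precisely what libtool hides):

/* libhello.c - the same source usually gets compiled twice:
 *   cc -O2 -c libhello.c -o libhello.o            (for the static archive)
 *   cc -O2 -fPIC -c libhello.c -o libhello.pic.o  (for the shared object)
 *   ar rcs libhello.a libhello.o
 *   cc -shared libhello.pic.o -o libhello.so.1
 * On i386 the non-PIC object may still work inside a .so (via text
 * relocations); on x86_64 it generally won't link at all. */
int hello(void)
{
    return 42;
}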

Fixing a broken homemade Makefile takes me a few minutes. Fixing a broken autobarf takes me hours to days.

Modern .ac files are much, much simpler than the Makefiles and scripts that they generate. I can't see how you find it harder to fix them than the Makefiles themselves. Have you ever tried to debug a problem with a .cmake file and its arcane language? And what about scons? It basically works with raw Python scripts...

Porting code to 9front is basically a rewrite as it's so alien, so let's put that to one side.

I don't think autotools work well, if at all, outside UNIX-ish systems. Even on Windows, they require impedance-matching layers such as Cygwin, MinGW or Interix.

Comment Re:Why should I drop glibc? (Score 2) 134

Virtually all binary software is distributed in enormous tarballs containing their own libc, and every other library they use, and a bunch they don't simply because static linking doesn't work against glibc properly (and violates the license for non-free software).

Please don't spread false information this way.

Take Firefox for example, from Firefox.org, not the version shipped with your distro. Every Firefox tarball includes their own build of GTK, GDK, glibc, libnspr, etc.

Yes, let's take the version from firefox.org that I'm using right now.

$ ldd /opt/firefox/firefox-bin
linux-gate.so.1 (0xb77bb000)
libpthread.so.0 => /lib/i386-linux-gnu/libpthread.so.0 (0xb776b000)
libdl.so.2 => /lib/i386-linux-gnu/libdl.so.2 (0xb7765000)
librt.so.1 => /lib/i386-linux-gnu/librt.so.1 (0xb775c000)
libstdc++.so.6 => /usr/lib/libstdc++.so.6 (0xb7675000)
libm.so.6 => /lib/i386-linux-gnu/libm.so.6 (0xb762f000)
libgcc_s.so.1 => /usr/lib/libgcc_s.so.1 (0xb7614000)
libc.so.6 => /lib/i386-linux-gnu/libc.so.6 (0xb746a000)
/lib/ld-linux.so.2 (0xb77bc000)

See? It's using my libc, so what you said is completely incorrect. Shall we see if it uses its own GTK library as you said? I'll paste the ldd result here: http://pastebin.com/UsP0MHUe , so you can see for yourself that Firefox uses the system libraries instead.

Because dynamic linking is so horribly broken on Linux that is the only way it could possibly work on more than one distro.

Dynamic linking on Linux is state of the art. If it doesn't work for you, look for the culprit somewhere other than the operating system. If anything, it's static linking that is badly supported (by glibc, not by Linux).

SORRY.

...

Comment Re:Why should I drop glibc? (Score 1) 134

Actually it is.

It's linked by virtually every executable on your system, and (through its dynamic loader) it has the job of loading every single dynamic binary you have installed, including init. It's special.

I have both musl and glibc on my systems, and they have different filenames, so I can have both installed without conflicts.

Of course you can have both installed, but not without conflicts: you must be very careful to avoid the sea of conflicts that arises when you have two incompatible libraries playing the same role.

Programs interoperate through IPC and the filesystem, so having programs built against different libc on the same system is not a problem.

You'll have to:
- duplicate every single binary and library to have glibc and musl versions;
- ensure that the version of each binary is the same for its glibc and musl image (otherwise IPC and filesystem communication will fail);
- ensure that the glibc and musl versions of a binary are installed in two different paths and don't interfere (otherwise a glibc program dlopening a musl plugin will fail), and rewrite all the software that uses hardcoded paths (e.g. Python);
- ensure that glibc and musl types are binary-compatible (otherwise IPC and filesystem communication will fail);
- ensure that every package you compile finds the correct one of the two libcs when you build it;
- have your machine withstand the memory pressure of two different and unshareable userspace images running simultaneously (so much for fixing "bloat");
- ...

And musl and glibc are both source-code compatible in that they both implement a POSIX and ISO compatible libc.

Even Microsoft Visual C++ is ISO C compliant. So you can write programs that will compile against both glibc and Visual C++, as long as you don't use "non-standard" stuff such as 64-bit integers.

The only place where you would get source incompatibilities is in non-conformant programs that take advantage of defects or non-standard APIs of one of the libc, and the real solution here is to have developers fix their code so that it conforms to the standard.

If I am a user, I want my binaries to run, and that's it. I don't have the option of asking whoever wrote the programs I use to rewrite them because I don't like their choice of APIs.
Furthermore, any library that is actually deployed and used will get bugs fixed, which can cause incompatible changes, and standards will be amended in ways that break compatibility (for instance, the addition of getline() to the C library). What do you do then? Ask users to rewrite all the programs they use every time such a thing happens?

the headaches around trying to get Gnu's arcane and defective build system to work

I've heard a million times that autotools suck. That's true. The problem is that everything else (and I mean scons, cmake, waf, ...) sucks even more. Autotools have given me, as a user, ever since the 90s:
- cross compiling support;
- ability to change any installation path;
- ability to have a temporary installation path for packaging;
- ability to apply a transformation to program names;
- a standardized way to change any of the tools to be used during compilation;
- support for building shared and static libraries simultaneously using the best compiler options for each case;
and probably something else that I'm forgetting.

Comment Re:Why should I drop glibc? (Score 1) 134

You get your binaries from distributions anyway

No, I get binaries from wherever I want and it works.

and with musl your closed source bits can just statically link safely libc and live happy and isolated.

I can't live happily without shared libraries. They were invented for good reasons decades ago and have been used extensively ever since. Think about the "bloat" of statically linked executables and the fact that they don't get updates to the linked-in library code (including security fixes).

Comment Re:Why should I drop glibc? (Score 2) 134

The libc is not a library like all the others. Proposing a binary- and source-incompatible replacement for glibc, as is being done here, means partitioning the Linux userspace, both binaries and source code, into two isolated subsystems, something we are already suffering from with Android. This is not a benefit; it damages the Linux community as a whole, and it will hurt me even if I don't want to switch. Casual PC users already run into enough problems when switching to Linux; asking them to check which libc is required before installing a program is the kind of nuisance that makes them run away. I don't contest the musl developers' freedom to code whatever they want, and I welcome their efforts. I do contest the stance of replacing glibc with an incompatible library, whoever it comes from.

Comment Re:Modularity (Score 1) 302

That's not really a problem on modern systems.....at least OSX

Linux is a modern system, yet that's a problem on Linux. I just had to recompile dozens of libraries because one of them had littered the namespace with its own unprefixed symbols, one of which clashed with another library's init function, which then didn't get called - so I got a segfault on exit from any application that was using that library. Fortunately the library in question was open source and its binary wasn't stripped, so the problem was easy to spot.
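A rough sketch of that failure mode (library and symbol names are made up):

/* liba.c (hypothetical) exports an unprefixed symbol; a second library,
 * libb.c, exports a function with the exact same name.  Build both:
 *   cc -shared -fPIC liba.c -o liba.so
 *   cc -shared -fPIC libb.c -o libb.so
 * With the default flat ELF namespace, every call to init_tables(),
 * including the one libb makes from its own code, binds to whichever
 * definition the dynamic linker saw first, so the other library's setup
 * silently never runs.  Prefixed names, -fvisibility=hidden, or a linker
 * version script would have avoided the clash. */
static int a_initialised;

void init_tables(void)          /* same name as libb's init_tables() */
{
    a_initialised = 1;          /* set up library A's tables */
}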

You can use versioned symbols on Linux, but they're not a design requirement, and most developers don't use them.

Of course, if you want to call a function that has the same name in both libraries, then you're in trouble, but that's a problem in Java, too.

Not in Java, because all symbols are namespaced by design. It would be a problem if you had two versions of the same library and wanted to use both of them at once.

Comment Re:Modularity (Score 2) 302

Meh, I wrote my own replacement for freeglut3 in a weekend. It's not hard to have a platform abstraction layer, and many already exist (I just needed my own lightweight one for my games). Since I started out with cross platform toolchain, I have no issue writing code that runs on multiple platforms.

Writing your code is only part of the problem. Things become funny when you have to use code that other people have already written. For example: you're using a shared library that exports a symbol that clashes with one from another library, which is used by yet another library that is dynamically loaded by a library you use. Without you knowing.

I get a native application without Java's huge runtime dependencies

What are Java's huge runtime dependencies? For instance, the Linux version only requires the X11 libraries, and only if you want to display graphics. It will run on a Pentium 1 machine with 16 MB of RAM.

Providing binaries for every current modern chipset including ARM and MIPS takes me about 30 minutes total to build with my cross compilers.

This assumes that you write code that doesn't interface with any existing software on the target, which is a rare occurrence. Do you talk directly to the hardware? Your cross compiler won't spare you from having to write hardware-dependent code for each flavour of your target.

However, saying that cross platform C/C++ is more of a headache than Java is ridiculous. They're all "write once, debug everywhere" options.

With C and C++, in the best case you get to fix up your application in order to port it to a new operating environment, which is what Java requires you to do only in the worst case. And we're not even considering the case of mutually incompatible runtime dependencies.
