
KDE and Canonical Developers Disagree Over Display Server 202

sfcrazy (1542989) writes "Robert Ancell, a Canonical software engineer, wrote a blog post titled 'Why the display server doesn't matter', arguing that: 'Display servers are the component in the display stack that seems to hog a lot of the limelight. I think this is a bit of a mistake, as it's actually probably the least important component, at least to a user.' KDE developers, who do have long experience with Qt (something Canonical is moving toward for its mobile ambitions), have disputed Ancell's claims, saying that the display server does matter."

Comment Google arrogance (Score 5, Informative) 363

They didn't address any of the problems. They just called them "myths" and said "don't worry, trust us, everything will be fine" for each one of them. And they did so using condescending, arrogant and insulting language (look, for example, at the passage where they declare that they want people to wear Google Glass inside locker rooms (!): "just bear in mind, would-be banners..."). This reinforces my distrust of the company and my concern about the product.

Comment Re:Why should I drop glibc? (Score 1) 134

Except statically linked binaries. Those were linked at build time, but they don't invoke the linker when they are executed. Guess what I use musl for? Building static binaries.

But you're moving the goalposts now. What I said was troublesome was using musl (or anything else that is glibc-incompatible) as a replacement for glibc. Using it for selected static binaries that you build yourself is another thing.

Nope. For binaries I only need one or the other, not both. Libraries only when I need a musl version of a library.

That's because I was talking about replacing glibc with musl while still being able to run binaries, over which you have no control, written for either of the two libraries. That is the only scenario that would make end users happy, should glibc coexist with musl.

No. Only for shit software that doesn't have any kind of protocol.

Again, that's stuff that is out of our control. Engineers have to design their systems for the worst case, not for the best.

Wrong. ISO C specifies <stdint.h>: int64_t is the standard way to get a 64-bit signed integer, and uint64_t is the standard 64-bit unsigned one. The fact that you don't know ISO C is illuminating.

Microsoft's stuff is compatible with ISO C90. uint64_t was unsigned __int64 in their world until very recently.
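For context, here's a minimal sketch of the shim portable projects carried for exactly this reason. The version check is real (<stdint.h> arrived in Visual Studio 2010, _MSC_VER 1600); the rest is illustrative:

/* Portable 64-bit integers: ISO C99 <stdint.h> where available,
 * Microsoft's __int64 on older MSVC. */
#if defined(_MSC_VER) && _MSC_VER < 1600  /* before Visual Studio 2010 */
typedef __int64          int64_t;
typedef unsigned __int64 uint64_t;
#else
#include <stdint.h>
#endif

#include <stdio.h>

int main(void) {
    uint64_t big = (uint64_t)1 << 40;       /* value that needs 64 bits */
    printf("%llu\n", (unsigned long long)big); /* old MSVC spelled this "%I64u" */
    return 0;
}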

Never works in practice. If the prototype for a library function is the same and the calling semantics are the same, then it can be relinked without recompilation whether it was statically (unless it was stripped) or dynamically linked; and if either has changed, the code needs to be reviewed, rewritten and recompiled in either case.

With versioned symbols (as in glibc) you don't have this problem. But if you add support for them in your library, then you add "bloat". So you have to decide whether you want to add "bloat" or to make your users unhappy.
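To make the trade-off concrete, here's a minimal sketch of glibc-style symbol versioning with the GNU toolchain; the library name and version tags are made up:

/* libfoo.c: two implementations of foo(), bound to different versions. */
__asm__(".symver foo_v1, foo@LIBFOO_1.0");    /* legacy binding */
int foo_v1(void) { return 1; }

__asm__(".symver foo_v2, foo@@LIBFOO_2.0");   /* @@ marks the default */
int foo_v2(void) { return 2; }

/* Version script (libfoo.map):
 *     LIBFOO_1.0 { global: foo; local: *; };
 *     LIBFOO_2.0 { global: foo; } LIBFOO_1.0;
 * Build:
 *     gcc -shared -fPIC -Wl,--version-script=libfoo.map -o libfoo.so libfoo.c
 */

Binaries linked against the old library keep resolving foo@LIBFOO_1.0 forever; newly linked ones get the default. That machinery is the "bloat" in question.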

I don't know what adding getline has to do with existing programs. If they are not already making use of getline, then even if they are recompiled they will still not make use of getline, and will not require its symbol for linking. The Austin Group (POSIX) is careful not to break stuff when revising standards, though I'm sure you can still point to some breakage; in general they revise things by adding new symbols, or by assuming the greatest common behavior between implementations.

They added getline() to the standard, and in doing so they broke all the software that, perfectly standard-compliant until then, used the getline() name for its own stuff. This is to say that imagining you can solve binary compatibility through standards compliance simply doesn't work. They're two different problems.
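A minimal sketch of the kind of program that broke: conformant when it was written, rejected after POSIX.1-2008 put the new name into <stdio.h>:

/* Perfectly legal under C90/POSIX-2001: getline was ours to define. */
#define _POSIX_C_SOURCE 200809L   /* request the 2008 interfaces */
#include <stdio.h>

/* POSIX.1-2008 made <stdio.h> declare
 *     ssize_t getline(char **lineptr, size_t *n, FILE *stream);
 * so this once-valid definition now conflicts and fails to compile. */
char *getline(char *buf, int size) {
    return fgets(buf, size, stdin);
}

int main(void) {
    char line[128];
    if (getline(line, sizeof line) != NULL)
        fputs(line, stdout);
    return 0;
}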

Autotools give me, as a user, and have done so since the 90s: - cross compiling support;

Never works.

I've bootstrapped my entire 64-bit Linux installation from a 32-bit host using it. It has almost always worked for me, even for packages whose developers had never thought about the possibility of cross-compilation. I have no doubt that packages using autotools were the ones that gave me the fewest problems.

However, with plain old Makefiles, I just set CC, CFLAGS, LD, LDFLAGS and LIBDIR and things just work (also CXX and CXXFLAGS for C++).

Only for the simplest cases. When things get more complex, you'll have to handle the difference between the host C compiler (which can be used to compile stuff which will run on the build host, such as code generators) and the target C compiler (which can only be used to produce binaries that won't run on the build host).

- ability to change any installation path;

I can do that with "PREFIX=/foo/bar make install" with any well written makefile.

That's the difference: with autotools you are almost sure to get those features out of the box, and they're standardized. With other systems you have to hope that the developers wrote their makefiles well, and I can tell you that the current trend among developers is to invest less and less time in packaging their source code. You also have to study the makefiles to see whether it's DESTDIR or INSTALL_ROOT or ROOT_PREFIX or something else. Also, consider the difference between PREFIX and DESTDIR: the first can end up in paths stored in the generated code, the second won't.

- support for building shared and static libraries simultaneously using the best compiler options for each case; and probably something else that I'm forgetting.

make staticlibs; make dynamiclibs

Then you have to take into account the different flags required for building static vs. shared libraries on every platform you want to target (e.g. Linux tolerates non-PIC code in i386 shared libraries, other OSes don't, and x86_64 never supports it...). With autotools you get that for free, out of the box.

Fixing a broken homemade Makefile takes me a few minutes. Fixing a broken autobarf takes me hours to days.

Modern .ac files are much, much simpler than the Makefiles and scripts they generate. I can't see how you find them more difficult to fix than the Makefiles themselves. Have you ever tried to debug a problem in a .cmake file and its arcane language? And what about scons? It basically works with raw Python scripts...

Porting code to 9front is basically a rewrite as it's so alien, so let's put that to one side.

I don't think autotools work well, if at all, outside UNIX-ish systems. Even on Windows, they require impedance-matching layers such as Cygwin, MinGW, or Interix.

Comment Re:Why should I drop glibc? (Score 2) 134

Virtually all binary software is distributed in enormous tarballs containing their own libc, and every other library they use, and a bunch they don't simply because static linking doesn't work against glibc properly (and violates the license for non-free software).

Please don't spread false information this way.

Take Firefox for example, from Firefox.org, not the version shipped with your distro. Every Firefox tarball includes their own build of GTK, GDK, glibc, libnspr, etc.

Yes, let's take the version from firefox.org that I'm using right now.

$ ldd /opt/firefox/firefox-bin
linux-gate.so.1 (0xb77bb000)
libpthread.so.0 => /lib/i386-linux-gnu/libpthread.so.0 (0xb776b000)
libdl.so.2 => /lib/i386-linux-gnu/libdl.so.2 (0xb7765000)
librt.so.1 => /lib/i386-linux-gnu/librt.so.1 (0xb775c000)
libstdc++.so.6 => /usr/lib/libstdc++.so.6 (0xb7675000)
libm.so.6 => /lib/i386-linux-gnu/libm.so.6 (0xb762f000)
libgcc_s.so.1 => /usr/lib/libgcc_s.so.1 (0xb7614000)
libc.so.6 => /lib/i386-linux-gnu/libc.so.6 (0xb746a000)
/lib/ld-linux.so.2 (0xb77bc000)

See? It's using my libc, so what you said is completely incorrect. Shall we check whether it uses its own GTK library, as you said? I'll paste the ldd result here: http://pastebin.com/UsP0MHUe , so you can see for yourself that Firefox uses the system libraries instead.

Because dynamic linking is so horribly broken on Linux that is the only way it could possibly work on more than one distro.

Dynamic linking on Linux is state of the art. If it doesn't work for you, look for the cause somewhere other than the operating system. If anything, it's static linking that is badly supported (by glibc, not by Linux).

SORRY.

...

Comment Re:Why should I drop glibc? (Score 1) 134

Actually it is.

It's linked by virtually every executable on your system, and it provides the loader that starts every single dynamic binary you have installed, including init. It's special.

I have both musl and glibc on my systems, and they have different filenames, so I can have both installed without conflicts.

Of course you can have both installed, but not without conflicts: you must be very careful to avoid the sea of conflicts that arises when you have two incompatible libraries with the same name.

Programs interoperate through IPC and the filesystem, so having programs built against different libc on the same system is not a problem.

You'll have to:
- duplicate every single binary and library to have glibc and musl versions;
- ensure that the version of each binary is the same for its glibc and musl images (otherwise IPC and filesystem communication will fail);
- ensure that the glibc and musl versions of a binary are installed in two different paths and don't interfere (otherwise a glibc program dlopening a musl plugin will fail; see the sketch after this list), and rewrite all the software that uses hardcoded paths (e.g. Python);
- ensure that glibc and musl types are binary-compatible (otherwise IPC and filesystem communication will fail);
- ensure that every package you compile finds the correct version of both glibc and musl when you compile it;
- have your machine withstand the memory pressure of two different and unshareable userspace images running simultaneously (so much for fixing "bloat");
- ...
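As a sketch of the dlopen point above (the plugin path and entry symbol are hypothetical):

/* host.c: a glibc-linked program loading a plugin built against another
 * libc. Build with: gcc host.c -ldl */
#include <dlfcn.h>
#include <stdio.h>

int main(void) {
    void *handle = dlopen("./plugin-musl.so", RTLD_NOW);
    if (handle == NULL) {
        /* Typical failure: undefined or misresolved libc symbols, since
         * the plugin expects a libc this process doesn't contain. */
        fprintf(stderr, "dlopen: %s\n", dlerror());
        return 1;
    }
    void (*entry)(void) = (void (*)(void))dlsym(handle, "plugin_entry");
    if (entry != NULL)
        entry();
    dlclose(handle);
    return 0;
}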

And musl and glibc are both source-code compatible in that they both implement a POSIX and ISO compatible libc.

Even Microsoft Visual C++ is ISO C compliant. So you can write programs that will compile against both glibc and Visual C++, as long as you don't use anything beyond C90, such as 64-bit integers.

The only place where you would get source incompatibilities is in non-conformant programs that take advantage of defects or non-standard APIs of one of the libc, and the real solution here is to have developers fix their code so that it conforms to the standard.

If I am a user, I want my binaries to run, and that's it. I don't have the option of asking whoever wrote the programs I use to rewrite them because I don't like their choice of API.
Furthermore, any library that is actually deployed and used will get bugs fixed, which can cause incompatible changes, and standards will be amended in ways that break compatibility (for instance, the addition of getline() to the C library). What do you do then? Ask users to rewrite all the programs they use every time such a thing happens?

the headaches around trying to get Gnu's arcane and defective build system to work

I've heard millions of times that autotools suck. That's true. The problem is that everybody else (and I mean scons, cmake, waf, ...) sucks even more. Autotools give me, as a user, and have done so since the 90s:
- cross compiling support;
- ability to change any installation path;
- ability to have a temporary installation path for packaging;
- ability to apply a transformation to program names;
- a standardized way to change any of the tools to be used during compilation;
- support for building shared and static libraries simultaneously using the best compiler options for each case;
and probably something else that I'm forgetting.

Comment Re:Why should I drop glibc? (Score 1) 134

You get your binaries from distributions anyway

No, I get binaries from wherever I want and it works.

and with musl your closed-source bits can just statically link libc safely and live happy and isolated.

I can't live happily without shared libraries. They were invented for good reasons decades ago and have been used extensively ever since. Think about the "bloat" of statically linked executables, and the fact that they don't get updates to the linked-in library code (including security fixes).

Comment Re:Why should I drop glibc? (Score 2) 134

The libc is not a library like all the others. Proposing a binary- and source-incompatible replacement for glibc, as is being done here, means partitioning the Linux userspace, both binaries and source code, into two isolated subsystems, something we are already suffering with Android. This is not a benefit; it is damaging to the Linux community as a whole, and it will hurt me even if I don't want to switch. Casual PC users already run into enough problems when switching to Linux; asking them to check which libc is required before installing a program is the kind of nuisance that makes them run away. I don't contest the musl developers' freedom to code whatever they want, and I welcome their efforts. I do contest the stance of replacing glibc with an incompatible library, whoever it comes from.

Earth Barely Dodged Solar Blast In 2012 202

Rambo Tribble (1273454) writes "Coronal mass ejections with severity comparable to the 1859 Carrington event missed Earth by only 9 days in 2012, according to researchers. The Carrington event caused widespread damage to the telegraph system in the U.S., and a similar occurrence would, it is thought, be devastating to modern electronics. From the Reuters article: 'Had it hit Earth, it probably would have been like the big one in 1859, but the effect today, with our modern technologies, would have been tremendous.' The potential global cost of such damage is pegged at $2.6 trillion."

Russian Civil Law Changed By Wikimedia 88

An anonymous reader writes "Changes to the Russian Civil Code, which include the recognition of open licenses and the right for libraries to make digital copies of certain works, have now been signed by the Russian President and come into force on October 1st. According to Wikimedia-RU member Linar Khalitov, 'these changes are a result of a lot of hard work on behalf of Wikimedia-RU ... proposing, discussing and defending amendments to the Code.'" The changes are pretty major: licenses no longer require a written contract to be enforced, and published works can no longer be retracted. The two combine to give Wikipedia RU authors stronger author rights. Pictures of architectural objects can be used freely without the permission of the architect, which will allow many images that were pulled from Wikimedia Commons to return, and new projects to add pictures of monuments to go forward.

Comment Re:Modularity (Score 1) 302

That's not really a problem on modern systems... at least OS X

Linux is a modern system, yet this is a problem on Linux. I just had to recompile dozens of libraries because one of them had littered the namespace with its own unprefixed symbols, one of which clashed with another library's init function; the clashing init never got called, so I got a segfault on exit from any application using that library. Fortunately the library in question was open source and its binary wasn't stripped, so the problem was easy to spot.
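A minimal sketch of that failure mode; the library and symbol names are made up:

/* liba.c exports an unprefixed symbol:
 *     void init(void) { ... set up liba ... }
 * libb.c, unrelated, happens to export the same name:
 *     void init(void) { ... set up libb ... }
 * app.c, linked against both: */
extern void init(void);

int main(void) {
    /* ELF has one flat global namespace: the first definition in link
     * order wins for every caller, including libb's own internal calls
     * to init(), so libb's initializer silently never runs. */
    init();
    return 0;
}

The usual defenses are prefixing every exported name, or building with -fvisibility=hidden and marking only the public API __attribute__((visibility("default"))).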

You can use versioned symbols on Linux, but that's not a design requirement, and the majority of developers don't do it.

Of course, if you want to call a function that has the same name in both libraries, then you're in trouble, but that's a problem in Java, too.

Not in Java, because all symbols are namespaced by design. It would be a problem if you had two versions of the same library and wanted to use both of them at once.

Comment Re:Modularity (Score 2) 302

Meh, I wrote my own replacement for freeglut3 in a weekend. It's not hard to have a platform abstraction layer, and many already exist (I just needed my own lightweight one for my games). Since I started out with a cross-platform toolchain, I have no issue writing code that runs on multiple platforms.

Writing your code is only part of the problem. Things get funny when you have to use code that other people have already written. For example: a shared library you use exports a symbol that clashes with one in another library, which is used by a third library that is dynamically loaded by yet another library that you use. Without you knowing.

I get a native application without Java's huge runtime dependencies

What are Java's huge runtime dependencies? For instance the Linux version only requires the X11 libraries if you want to display graphics. It will run on a Pentium 1 machine with 16 MB of RAM.

Providing binaries for every current modern chipset including ARM and MIPS takes me about 30 minutes total to build with my cross compilers.

This assumes you write code that doesn't interface with any existing software on the target, which is a rare occurrence. Do you talk directly to the hardware? Your cross compiler won't spare you from writing hardware-dependent code for each flavour of your target.

However, saying that cross platform C/C++ is more of a headache than Java is ridiculous. They're all "write once, debug everywhere" options.

With C and C++, in the best case you get to fix up your application to port it to a new operating environment, which is what Java requires of you in the worst case. And we're not even considering the case of mutually incompatible runtime dependencies.

Comment Re:Mexico City tried this... (Score 4, Informative) 405

This measure is not experimental; it has been used in Europe since the 80s. People won't buy another car to bypass the restriction, because owning a car is very expensive (insurance, taxes, ...), and if you can afford that then you can probably afford the fines for ignoring the law as well. Less environment-friendly vehicles often can't enter city centres at all, because there it's common to restrict car access based on the car's "euro rating".
