
Comment Read about it before commenting, people! (Score 4, Informative) 127

Wow, people commenting seem to have very little information about what this actually is. (Canonical is partly to blame, as usual, for doing a poor job of messaging.)

This is not replacing the Debian build system or Debian packages. Ubuntu will continue to be based on Debian.

This is an additional packaging system that makes it exceptionally easy to distribute Linux applications and services reliably. Underneath it uses LXC (whose main developers now work at Canonical), the same jail-like technology that powers Docker and LXD. It basically gives the application its own "view" of the operating system's filesystem (using AuFS), so that you can distribute required dependencies with the application. Of course it can't override the Linux kernel or other important system services, but it does solve a major hurdle in distributing software across various OS library baselines.

Until now, we've been using PPAs or other external Debian repositories to distribute software -- you can still use them if you prefer, but those are tied to the baseline and need constant tweaking by packagers. A Snappy package made now should be able to run years from now without a problem. The Snapcraft packaging tool is very easy to use and does much of the hard work for you: you can even just give it a git repository URL, and it will pull, build, and package. I see it being very useful for something like Steam.
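To make that concrete, here's a rough sketch of what a snapcraft.yaml for a simple service might look like. The package name, command, and source URL are made up for illustration, and the exact schema has changed between Snapcraft releases, so check the current docs rather than copying this verbatim:

```yaml
name: hello-server            # illustrative package name
version: "0.1"
summary: Example service packaged as a snap
description: Bundles the binary and its dependencies in one package.
confinement: strict           # run inside the sandbox

apps:
  hello-server:
    command: bin/hello-server
    daemon: simple            # run as a background service

parts:
  hello-server:
    plugin: autotools
    source: https://example.com/hello-server.git   # hypothetical repo
```

The `parts` section is where Snapcraft does the heavy lifting: point a plugin at a source tree (or a git URL, as above) and it pulls, builds, and stages the result into the package for you.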

Also, like Docker, Snappy uses SHA-signed diffs, so package updates will be very fast. It also makes it trivial to switch between versions.

The announcement is that Ubuntu 16.04 will come with Snappy built in, so you can immediately install Snappy packages if you want. You don't have to.

There is also a new flavor of Ubuntu called "Snappy Ubuntu Core" in which the base OS itself is a Snappy image, so that it gets updates the same way as the other packages, and in the same way you can switch between versions. It is useful for various special use cases. For example, a phone OS will have an easier and safer job upgrading while letting the user trivially revert back if things break. It is not the official Ubuntu recommended for all users, but rather a building block for developers to create specialized Ubuntu-and-Snappy-based distributions.

Comment CrossOver (Score 1) 889

I feel your pain. But: I run the complete Office 2010 suite using CrossOver without a hitch. It doesn't quite feel "native," but it works well enough for all my needs. CrossOver even creates links so that, for example, when you double-click on .docx documents, they will open up with Word 2010.

It actually works so well that I have a terminal-server-based office (based on LTSP) running Word 2010 over CrossOver.

This is not a great solution (you still have to buy a license from Microsoft), but it is a solution that lets my setup stay on Linux and still collaborate with others.


Comment Nimble? (Score 2) 161

"Nimble" does not mean that it performs well.

If "mainstream productivity" refers to word processing and web browsing, you're fine. But if you're doing photo, video, or audio editing, heavy software compilation, scientific simulation, or other such work, fast boot times are not what you're after. Gaming, too, while not usually CPU-heavy, demands GPUs that only high-end, very expensive laptops can deliver.

Yes, laptops keep getting better, but so do workstations. For the same money, you get much more bang from a desktop as compared to a laptop.

The real story is how the bottom end has reached decent levels for "mainstream productivity." Five years ago, a $200 netbook was really disappointing in terms of everyday performance: web browsing was slow, video playback was choppy at higher resolutions, and even word processing could get laggy. These days, machines in that price range are totally acceptable. Entry-level laptops like the Acer E3 or the HP Stream 11 are surprisingly good. Unless you're doing "workstation" work, they won't feel any slower than a laptop that costs 10 times as much.

I think that might actually be what this article is clumsily trying to say.

Comment Multiple meanings of "monolithic" (Score 3, Insightful) 551

Two different senses of "monolithic" are being conflated here, and unfortunately Poettering did not untangle them well:

1) "Monolithic" in terms of a single repository for all code. The systemd project is monolithic in this respect, and Poettering is absolutely correct that this is the classic Unix way. All the *BSDs are maintained this way. Linux is thus, as he correctly points out, the anomaly.

2) "Monolithic" in terms of tools that depend on each other. The systemd system is not monolithic in this respect. The only two required components are journald and udev. Everything else is entirely optional and replaceable, but "recommended" in the sense that the people working on the project really think that these components, written from scratch, are of better quality and consistency than the existing components they replace. But some hysterical people hear this recommendation as "forcing it down our throats". Distro makers will decide which components to use, whether those in the systemd project or the existing ones. Obviously, the existing ones have the benefit of maturity.

Also, he doesn't point this out in this interview, but these new components are also better at reporting errors, so that the whole init process can stay robust when certain components fail partially (and systemd knows how to deal with that). This is especially crucial for servers with complicated, layered network stacks. People say that systemd is for desktops, but robust initialization of services is just as important for servers.
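As a sketch of what that dependency and failure handling looks like in practice, here's a minimal unit file. The service name and binary path are hypothetical; the directives themselves (`After=`, `Wants=`, `Restart=`) are standard systemd ones:

```ini
# /etc/systemd/system/myapp.service -- illustrative name
[Unit]
Description=Example service that waits for the network stack
After=network-online.target
Wants=network-online.target

[Service]
ExecStart=/usr/local/bin/myapp     # hypothetical binary
Restart=on-failure                 # systemd restarts it on partial failure
RestartSec=5

[Install]
WantedBy=multi-user.target
```

The point is that ordering, dependencies, and restart policy are declared and handled by the init system itself, instead of being hand-rolled in a shell script that can't tell a clean exit from a crash.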

Comment #gamergate (Score 3, Funny) 642

OK, my fellow gamergaters, it's time to dox Sweden!

She lives just east of Norway and west of Finland. Make sure to visit that feminazi every day and teach her the consequences of trying to censor all games and force us to play Depression Quest!

Together we will fight to guarantee better ethics in game journalism.

Comment Ubuntu changed everything (Score 4, Insightful) 110

Ubuntu changed everything we've come to expect about free, general-purpose operating systems.

People don't give Launchpad enough credit: for the first time, we have an integrated build/test/deploy process for the whole operating system. It takes the solid Debian root and adds a layer of modern quality assurance that we've never seen before. There's still a ways to go, and I'm sure people will complain about one or other package being broken, but the fact is that Ubuntu raised the bar of what we've come to expect.

Slashdotters and others also love to complain about one particular package or another. Obviously, the desktop environment (or just the shell) is the first thing that most people see. But it's also a small project in the larger scope of Ubuntu. Don't like Unity or GNOME 3 or KDE or Xfce or LXDE or Enlightenment? You have lots of options. Don't like systemd? Well, Ubuntu devoted a lot of time and effort to Upstart, but made the mature decision to abide by Debian's decision to go with systemd (for now). Don't like either? Yeah, well, life these days must be truly hell for poor little you.

And now, Ubuntu may do for mobile what it did for the desktop. In 10 years, I hope we can celebrate the existence of truly free devices, onto which we can install any package we want -- including alternative UIs for those who will undoubtedly not like Unity.

Comment C is better than C++ for the kernel (Score 5, Insightful) 365

Having been on the fence about this for a while, my experiences convinced me that C++ is wrong for the kernel.

The problem is not the extra features. The problem is that the programmer has little control over exactly how they are implemented: the compiler decides how to handle virtual method tables, destructors, multiple inheritance, etc. In the recent past, C compiler bugs have caused serious problems with Linux development. C++ compilation is an order of magnitude more complex, and you can bet it would be less reliable. It also means that C++ compiles much more slowly: that doesn't sound like a big deal, but it's a cost to take into account.

The lack of a standard, clear ABI for C++ is also problematic. While it's true that Linux is monolithic, it still supports modules that interact with each other dynamically. Debugging C++ can be quite painful because of this. But it also means that it would be that much harder to contribute a module if it's not written exactly for the same compiler as the one used to build the kernel. Of course, it would have to be written in C++, too. This lack of flexibility can be quite painful in environments where you are limited to very specialized compilers (embedded). C has the most standard ABI of any language (well, C and Pascal). You can guarantee that *anything* would be able to interface with it.

So if you weigh the technical cons (losing control, flexibility, and debuggability) against the pros (cleaner syntax), it's right to pick C on technical grounds. As others have stated here, anything you can do in C++ you can do in plain C. It's a bit clumsier, but then you have complete control over the implementation. I do OOP in C all the time, and it's perfectly OK. If anything, it's a bit more powerful than C++, because I tailor the OOP features to exactly my needs and tastes.

Beyond that, there is the more controversial issue of programmer culture. C++ hides away implementation details, but for kernel development you want programmers who think about every tiny issue of implementation: exactly what is going on with the call stack, what is a pointer and what isn't? The more explicit nature of C encourages a more hard-nosed stickler for technical correctness, which is more important than pretty code for kernel work.

By the way, I'm writing this as a former C++ zealot. I even created something like this in the past, a C++ wrapper for Windows NT networking services. I found out the hard way that C++ takes more than it gives. I write all my code in C these days, and don't feel like I'm missing anything.

Comment Re:Unified Experience Across Devices (Score 1) 644

Windows 8 unified tablets and desktops. You can buy a 7" Atom-based tablet right now that you can connect to a dock and get a full desktop experience.

But Windows phones are still different. With Windows 10, you will be able to do the above with your phone.

That might not seem like a big deal to you, but it can radically change the computing market. For many people, owning a single computing device (a phone) will be enough. They will just get a dock for a tablet (BlackBerry has this) or for a laptop or for the living room and that's it. Enterprises, too, can invest in fewer gadgets per employee.

Today's phones are powerful enough to run most everyday computing tasks. Obviously not all, but for those people who need the power, workstations and gaming PCs will still be around.

Comment The Unix Way (Score 1) 613

It's the "Unix way" to make one tool do one thing well. The "systemctl" tool is not meant to show status.

The point in Unix is that tools are building blocks. You can create a higher-level tool (using a simple shell script) that uses these tools together to do cool things that the devs have not thought of.
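For example, a few lines of shell are enough to build a higher-level "show me what's broken" tool out of the low-level ones. The unit names and sample output below are made up, and in real use the input would come from `systemctl list-units` rather than a here-doc:

```shell
#!/bin/sh
# Sketch: compose small tools into a higher-level status report.
# The here-doc stands in for `systemctl list-units` output
# (columns: UNIT LOAD ACTIVE SUB DESCRIPTION) so this runs anywhere.

failed_units() {
    # Keep lines whose 4th column (the SUB state) is "failed",
    # and print just the unit name from column 1.
    awk '$4 == "failed" { print $1 }'
}

failed_units <<'EOF'
nginx.service     loaded active  running nginx
backup.service    loaded failed  failed  nightly backup
postgres.service  loaded active  running postgres
EOF
```

That's the building-block philosophy in action: systemctl emits plain text, and awk, grep, and friends turn it into whatever report the devs never thought to ship.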

Comment Re:bad for standards (Score 2) 194

This has nothing to do with the video tag itself, which does not specify codecs. Yes, this is still a compromise, but many of us have been compromising for years on various aspects of freedom and openness. Choose your battles carefully and you can win the war: Mozilla has already achieved so much for the open web, and I'm confident the upward slope will continue.

Comment Genymotion (Score 1) 167

I agree: Genymotion fills this gap perfectly, and I recommend it strongly for any Android dev. I've also found the Genymotion devs amazingly prompt in responding to bug reports. (I have no connection to the company; I'm just a fan of their work.)

But it's still surprising that the official Google toolkit doesn't have anything like it. Google, get on board! Just buy up Genymotion or license their tech.

By the way, emulation still has its uses in some cases... it's of course best to have both.
