
Comment Gnome 3.26 removes the Status Bar/System Tray (Score 1) 176

According to the GNOME developers, the removal of the system tray is so insignificant that it is not even worth mentioning in the short list of changes. It is mentioned at the end of the long list, outside of the bullet points.

GNOME 3.26 no longer shows status icons in the bottom-left of the screen. This prevents the status icon tray from getting in the way and is expected to provide a better overall experience. The lack of status icons is not expected to cause serious issues for users. However, if you do find that you need to access them, they can be restored using the TopIcons extension. More information about this change can be found in a blog post on the subject.

This means that if you don't already have the latest TopIcons extension installed, a lot of programs that minimize to the status tray will become inaccessible. That's mainly non-GNOME programs.

The GNOME developers are trying to force application developers to abandon the "pretty old" standard that "predated Gnome 2.0" and instead use GNOME-specific APIs like their notifications.

The big problem is that they do not seem to understand what the purpose of the status tray is, how people use it, and why it exists on all desktop platforms: Linux, Windows, and Mac.
The status tray is for checking the status of an application at a single glance, without requiring any action from the user: no moving the mouse to a specific position on the screen, no clicking, no switching desktops, no opening the program window.
Notifications, in comparison, are for signaling a change or an event. Not only is their use different; they can also be quite annoying and are actively ignored.

Here are a few more links to read:
https://blogs.gnome.org/aday/2017/08/31/status-icons-and-gnome/
https://lwn.net/Articles/732622/

Comment Re:Military propaganda movie for home consumption (Score 1) 726

Your observation is truly insightful.
But I have two nitpicks:

Are you sure the bugs don't have space flight capabilities?
They do appear to colonize planets, and the movie says they spread by spores. This implies the bugs do have a means of space travel. As their home planet is surrounded by an asteroid belt, it is only natural to use asteroids as spaceships: just hollow out the inside and put engines on the outside. Landing may be hard, but spores survive a lot of punishment.

Of course, none of this disproves the false-flag theory. I just checked, and the movie says "The meteor was shut out of orbit by bug plasma that derived from Kleondathu - the arachnid home planet". So it definitely was not a bug colonization ship. Even if the asteroid came from the arachnid quarantine zone, it most likely arrived by warp. (It also had no visible engines and rotated at the "wrong" angle.)

Here comes the second nitpick: the female "heroine" doesn't get "bollocked" (whatever that is); she is actually flirting with the instructor in the changed-course scene. The captain is never shown commenting on the course.

As for the sequels... you know there are other Starship Troopers movies... they definitely don't follow that scenario. (That probably explains why they are not as good. :))

Comment Bad Analogy Department (Score 1) 177

At first the system trims the "fat", and it seems to improve things, because corporations tend to accumulate fat. However, the system soon becomes a victim of its own success: there is no more fat to cut. So it starts to trim more and more "muscle" and less and less fat. This goes on until the corporation collapses, when it cannot support its own weight anymore. In the meantime it may show symptoms of anemia and massive internal infection.

Comment Re:Not just in the U.S. (Score 1) 273

This article talks about how in England there has been a huge increase in the number of measles cases since Wakefield published his claptrap about vaccines causing autism and other nonsense.

For those not bothering to read the article, this is the part you need to know:

This year, the U.K. has had more than 1,200 cases of measles, after a record number of nearly 2,000 cases last year. The country once recorded only several dozen cases every year. It now ranks second in Europe, behind only Romania.

http://upload.wikimedia.org/wikipedia/commons/a/a4/Measles_incidence_England%26Wales_1940-2007.png

Here is a graph of measles incidence in England and Wales. As you can see, even the 2,000 cases from last year are still fewer than the measles cases from 1998, when everybody was vaccinated and the fraudulent study was published.

I'd like to see the stats for the last 5 years too, but to me it is quite clear that this "outbreak" is more a PR scare than a real epidemic.

Comment Re:Good for Linux. (Score 1) 353

Without setting up a revenue sharing contract with the original publisher (which would be incredibly messy for reasons I'd be happy to elaborate if you can't think of them on your own), or else selling the rights to the Mac version back, they'd have no way to earn money from purchases on Steam. Thus, your grousing is entirely misplaced, since this is a problem with the way Steam is structured.

And why is Aspyr not setting up such a deal with the original publishers?
Honestly, what you said may apply to old titles, but if they know this is the problem, they may start pushing for such a clause in their new contracts.

Now, Steam could add their ports as a separate title from the main title. The problem is that in order to play the same game on two different devices you would have to purchase it twice, and people are not happy when they have to pay twice.

Hmm... maybe Steam could add the Mac port as DLC: DLC depends on the original title and can come from another publisher. The idea, of course, is for the "DLC" to cost only Aspyr's share. However, depending on their contract, they may not be allowed to sell it separately.

Comment Three problems. (Score 1) 167

1. Digital sound recording is usually lossy, meaning that a lot of "information" is discarded. A good audio codec tries to eliminate noise before going on to simplify the "relevant" audible information. You would need a raw PCM recording for proper analysis.
This method makes a lot more sense with analog recordings.

2. The method relies only on variations of the 50 Hz mains frequency used by the power grid... it is definitely not as precise.
First, low frequencies cause low induction. That's why AC/DC adapters (PC PSUs too) usually upconvert the frequency to around 50,000 Hz before using a small transformer.
Second, the grid frequency deviations change slowly because of the inertia of the generators that produce the electricity. High power consumption tends to slow down the rotation, and automatic feedback compensates by increasing power (i.e., more steam in the turbine).
This method may be good enough to pinpoint the possible time of day when a conversation happened, but it would not be good enough to say whether a few seconds here and there have been removed.
Even the article says "if you look at it over time, you can see minute fluctuations."

Actually, the method may detect the point of a cut if the 50 Hz mains frequency suddenly changes phase. However, this could easily be avoided if the person doing the edit makes the cuts in lengths that are multiples of 1/50 second (or just whole seconds).
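To make the phase-jump point concrete, here is a toy sketch (my own illustration with a synthetic hum, not the forensic tool's actual method): demodulate the recording against a 50 Hz reference window by window, and a splice whose length is not a multiple of 1/50 s shows up as a sudden phase step.

```python
import numpy as np

def mains_phase(signal, fs, f0=50.0, win=0.2):
    """Estimate the phase of the mains hum in consecutive windows
    by complex demodulation at the nominal 50 Hz frequency."""
    n = int(win * fs)  # samples per window (0.2 s = 10 mains cycles)
    phases = []
    for start in range(0, len(signal) - n, n):
        t = (start + np.arange(n)) / fs
        z = np.sum(signal[start:start + n] * np.exp(-2j * np.pi * f0 * t))
        phases.append(np.angle(z))
    return np.unwrap(np.array(phases))

fs = 8000
t = np.arange(0, 2.0, 1 / fs)
hum = np.sin(2 * np.pi * 50 * t)          # synthetic clean mains hum

# Splice out 37 ms -- deliberately NOT a multiple of 1/50 s.
cut = np.concatenate([hum[:fs], hum[fs + int(0.037 * fs):]])

jumps = np.abs(np.diff(mains_phase(cut, fs)))
# The splice shows up as a single large phase step towering over the
# near-zero drift elsewhere; a cut of exactly k/50 s would leave none.
```

Conversely, replacing 0.037 with 0.040 (exactly two mains cycles) makes the step vanish, which is the evasion described above.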

3. Since the forensic scientist logs the noise, he is in an excellent position to manipulate a recording so it appears to be from any time period he desires.
Also... to disprove his analysis, one needs access to the same noise logs, preferably ones made by somebody else.

As others have pointed out, you can take previous recordings, clear the noise from them, edit them, then add current hum from the power grid, and present the result as evidence that the conversation just happened.

When it comes to police recording suspects... there are much better digital methods to ensure that a recording has not been manipulated by a non-expert.

Comment Re:not impossible, but breaks existing drivers (Score 1) 152

What the udev guys are suggesting is that in the "module init" stage (where modules are loaded into the kernel) the module should not block waiting for firmware (because there may not be a filesystem yet, especially if the module is actually compiled into the kernel rather than loaded later). Rather the firmware should be loaded at "device open" time.

This is actually a reasonable position to take.

Unfortunately it breaks a number of (arguably misbehaving) modules, and among most linux kernel developers it is a BIG DEAL to break existing code.

No, this is not a reasonable position, and these modules were not broken in any arguable fashion.

First, udev didn't block firmware loading until filesystems were activated. It blocked until the parent module's init completed.

The actual case happened with a DVB-C/T capture dongle. It consists of (among other things) a USB multimedia controller (em28xx), a demodulator that needs firmware (drx-k), and a tuner connected to the demodulator (all with separate modules).
The problem is that in order to finish initialization, the device needs to know what the tuner is, but it can't init the tuner until the firmware is loaded. However, if the firmware is blocked by udev until em28xx finishes init, we just get a deadlock (resolved by a 30-second timeout). To avoid this deadlock, the driver has to stop init at firmware loading and create the device files, pretending that it knows what the actual device is. When a device file is opened, the driver loads the firmware and finishes the init. The minor problem is what happens if at that point it finds out that the tuner is not supported, and thus the whole device is not supported.
There is a reason why device initialization must be done at initialization time.
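The deadlock above can be sketched in a few lines. This is a toy model in Python (my own illustration of the described policy, not actual kernel or udev code): the "udev" side refuses to answer a firmware request until the module's init has returned, while init cannot return before it gets the firmware.

```python
import threading

class ToyUdev:
    """Stand-in for the udev policy described above: firmware requests
    are answered only after the requesting module's init completes."""
    def __init__(self):
        self.init_done = threading.Event()

    def load_firmware(self, timeout):
        # Blocks until init_done is set, or the timeout expires.
        return self.init_done.wait(timeout)

def module_init(udev, timeout):
    # Like the drx-k demodulator: the firmware is needed *during* init
    # to probe the tuner, but udev is waiting for init to finish first.
    got_firmware = udev.load_firmware(timeout)
    udev.init_done.set()    # init "completes" only after the wait
    return got_firmware

# A short timeout stands in for the real 30-second one.
result = module_init(ToyUdev(), timeout=0.1)
# result is False: the request can never succeed, only time out.
```

No ordering of these two parties can satisfy both constraints, which is why the driver resorts to deferring firmware loading to device-open time.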

Actually, there is no case where loading a firmware would cause any kind of problem. Yet udev enforced completely unneeded serialization that prevented exactly that.

As for having to load only the firmware from the filesystem: why would you do that? If you compile the module in, then do the same with the firmware. And if you don't, then load the firmware from the same filesystem as the module.

Comment Red herring (Score 2) 152

"If patent litigation caused by the U.S. patent system stifled innovation, U.S. software companies would not be the most successful in the world."

He is right. Patent litigation doesn't stifle innovation, it stifles competition.
And IBM knows that, because they stomped enough businesses back in the days when they were the big evil monopoly.

Innovation happens as a byproduct of working on a given problem. It will happen despite somebody having patented a portion or the whole of it. However, the patent may prevent the innovative company from selling its product, or increase the cost. The innovation could then be bought or outright stolen. Then the big and successful patent holders become bigger and even more successful.

Comment Re:And this is why (Score 1) 946

This is what NVidia wants removed. These functions are not special in any legal way. All kernel functions called by any module are covered under GPL, because the whole kernel (including the files from BSD) is GPL.

This is incorrect. The Linux kernel is GPL, but has exemptions for calling functions/syscalls that are part of the public interface (otherwise all Linux apps would have to be GPLv2). Binary-only drivers that call non-public code are only allowed so long as they are not distributed with the kernel.

The exact text is " NOTE! This copyright does *not* cover user programs that use kernel services by normal system calls - this is merely considered normal use of the kernel, and does *not* fall under the heading of "derived work". "

As you can see, there is nothing about calling kernel functions from kernel modules. NOT A THING. They are not specially exempted. Kernel functions are not exempt depending on whether they are a public or private interface, because a) modules are not user(space) programs; b) modules are not using system calls (syscalls).

There is not a single non-GPL binary-only kernel module that may be distributed with the kernel, no matter what API is used. This is even true for firmware, which by its nature doesn't run on the host CPU and doesn't use _any_ kernel API functions. (There is firmware that has been allowed to be included in GPL code.)

You are correct that binary-only drivers are only allowed so long as they are not distributed with the kernel, but this is exactly what the problem is. The license explicitly allows separate distribution, but the kernel's DRM system enforces additional arbitrary rules on top of that.

Comment Re:GPL API (Score 1) 946

NVidia could implement its own kernel (API- and ABI-compatible with Linux) that has its own DMA-BUF implementation using the same API. This module would work on it. However, it won't work on a normal Linux kernel.

The problem is that some kernel developers have implemented a DRM system that artificially limits what the user can do with his system. If he tries to compile a module that is not under the GPL license, a selected number of functions will cause the build to fail. The freedom of the user is artificially taken away. Well, thanks to the GPL, the user has the source and can hack the kernel to remove the marks on these functions, but this is a procedure that takes time and effort, which are basically wasted.

This is what NVidia wants removed. These functions are not special in any legal way. All kernel functions called by any module are covered by the GPL, because the whole kernel (including the files from BSD) is GPL. NVidia doesn't want "something" from the kernel to be relicensed; it just wants the DRM on that API removed.

I'm just going a step further, the kernel doesn't need digital restriction management.

As for the making-money part: nobody can distribute a GPL kernel and a non-GPL kernel module together. It must be the end user who creates this derivative work. So NVidia won't be able to use the "user loophole" on consumer devices like smartphones/tablets/etc.

Comment Re:And this is why (Score 1) 946

APIs GPL only? Seriously guys, WHAT THE FUCK?

That's not what's happening at all.

The basic fact is: the Linux kernel sources are GPL licensed. This was an early decision by Linus, and no amount of wishing will change that. There are just too many contributors that would have to approve a re-license.

Now, the GPL is very clear regarding derived work: if you distribute such a work, it needs to have a GPL-compatible license and provide sources.

What constitutes a derived work for a kernel? Basically, calling any code from the kernel would create a derived work, so the Linux license contains exemptions for user space code that calls the kernel through the public interface.

However, in this case the nVidia driver would call an internal kernel function, that is not exempted, so this would create a GPL derived work. The function is so low level that it would create an intimate bond between the Linux kernel and the nVidia binary driver.

Even if the Linux maintainers would allow this, anyone that wrote any part of Linux could start a court case against nVidia for breach of license. Would that be a better outcome?

If the above were even one bit true, non-GPL modules would have been completely forbidden from loading into the kernel at all. But they are not; here is the explanation why.

The derivative work of the GPL kernel and the non-GPL NVidia module is produced when the module is loaded. That is when the linking of the two happens. It is very important to note this, because the "calling" argument you are using is just a red herring.

This derivative work is allowed because it is created by the user of the system. The result never leaves the memory of the user's system, so it is never distributed. The GPL explicitly allows the user to do anything, as long as it doesn't involve distributing the result.
It also means that the GPL kernel + non-GPL NVidia module cannot be distributed together.

If you take a look at the code that compiles the NVidia module for your kernel, you will notice that it does not contain any GPL code itself. It may require some kernel headers and config files, but these have been established to not be copyrightable (and thus there is no need for a license for them).

So, what is the problem?

NVidia could implement its own kernel (API- and ABI-compatible with Linux) that has its own DMA-BUF implementation using the same API. This module would work on it. However, it won't work on a normal Linux kernel.

The problem is that some kernel developers have implemented a DRM system that artificially limits what the user can do with his system. If he tries to compile a module that is not under the GPL license, a selected number of functions will cause the build to fail. The freedom of the user is artificially taken away. Well, thanks to the GPL, the user can hack the kernel and remove the marks on these functions, but this is a procedure that takes time and effort, which are basically wasted.

This is what NVidia wants removed. These functions are not special in any legal way. All kernel functions called by any module are covered by the GPL, because the whole kernel (including the files from BSD) is GPL.
