I find it unfortunate that Freedesktop and later GTK chose to clone the system tray API. But I understand why the decision was made. I was involved in discussing this on the mailing lists. The RealPlayer use case was even expressed as a recommended one (I don't know if it still is), as was clicking close to "minimize to systray". There were better APIs mentioned, but the main reason for cloning this API (and mimicking Windows in general) was to make cross-platform development easier (this was before everyone started to copy OSX instead).
Before this, interaction with the CD-ROM drive and sound volume was done with explicitly added applets. Applications running in the background to make their startup time feel snappy were simply running in the background without showing any indication of life (which in itself could be considered bad behavior, of course). But for applications needing to show notifications there was nothing. I favored something like Growl on OSX (even though some applications are a bit too talkative), which later emerged for Linux as libnotify.
As far as I'm concerned this is no longer a problem on Linux. No application I currently use crashes when there is no system tray available. I personally prefer to use XFCE with Notion as window manager and global hotkeys for music player controls and such. (But I certainly wouldn't want to force that on everyone.)
What were the new UI concepts introduced in Windows 95 again? The Start button and the system tray are the only things I can think of. I never liked the Start button concept. But I guess it was a remedy for the complete mess some users were able to make in the Program Manager (thanks to its window-in-window MDI implementation). Perhaps it was kind of intuitive. But clicking "Start" to turn off the computer... I don't know. The system tray, on the other hand, is simply just bad. It presents a bunch of random icons to the user. A few of the icons may be useful, a few of them understood by the user, but most of them have no real purpose other than to expose some logo. (You can say the third-party applications displaying the icons are to blame, but I think the system tray is still responsible for providing an API that encourages it.)
Apart from that, Windows 95 tried to move from an application-centric paradigm to a document-centric one. But it only felt like a poor attempt to mimic OS/2. Instead of replacing the load/save pattern with the open/close pattern, they ended up just replacing the word "load" with the word "open", and (less consistently) the word "quit" with the word "close". They basically replaced established UI terminology with a new anything-goes policy. Not unlike iOS, when you think about it. No application can be too strange to feel out of place. Perhaps that was the biggest achievement of the Windows 95 UI?
But the believers in the placebo thermostat will step in and defend it, and thus absorb much of the complaints.
There are a lot of bad things in this new product. But just a handful of them are new and the rest are in older products as well. It's just getting slightly worse. That's almost like an improvement.
Yes, the Western scripts have separated more (and for a longer time). But those Eastern scripts have separated a bit too, at least when it comes to typesetting. It seems logical and convenient to have common code points for all CJK languages. But in reality it's actually causing problems, since the same symbol is expected to look one way for Chinese and slightly different for Japanese. It would probably be most convenient for everyone if they agreed on a common convention (as with the Latin script, as you mentioned), but apparently they haven't. One solution to this could be to have code points for language context hints. Another could be to have entirely separate sets of code points for the different languages. Both seem quite bad, but at least better than having algorithms trying to guess the language (which is still preferable to having suboptimal typesetting).
By the way, there are plenty of examples where the same symbols have different code points intended for different contexts (Greek letters used for math, etc). There are even Latin letters that look slightly different in different language contexts, like U+0152 (filtered out by Slashdot), Ø and Ö (they all stem from a combination of O and E, Ö from the convention of writing the E above the O). Agreeing on one of the symbols for all affected languages would be logical and fully intelligible for everyone, but it would look wrong. The difference might not be as big for CJK languages (I don't know), but apparently big enough for it to matter. It seems easy to distinguish between what is a typesetting detail (like bold, italic, and letters with or without serifs) and what is an entirely different symbol (like upper or lower case). But in many cases it isn't. And I expect the view on these matters will continue to change over time.
It's all a big mess. Not unicode specifically, but human writing in general.
That's roughly like saying that you need to render the words "automaton", "Tsirpas", and "Varoufakis" in Greek characters, and "Putin" and "Gorbachev" using Cyrillic characters, in Latin text: it serves little purpose and it would make the text unreadable for many readers.
Latin, Greek and Cyrillic scripts have their own code points for (historically) common characters. For example, the Latin letter B (U+0042), the Greek letter Beta (U+0392) and the Cyrillic letter Ve (U+0412) are historically the same symbol but have their own code points in Unicode. This makes it easy to embed snippets of Greek script in an English text, for example, since a Greek font will automatically be used for the Greek script (instead of getting randomly mixed fonts risking a suboptimal rendition).
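You can see this directly with Python's standard unicodedata module (a quick demonstration, not part of the original discussion):

```python
import unicodedata

# Three historically related, visually similar capital letters,
# one per script, each with its own distinct code point:
for ch in ["\u0042", "\u0392", "\u0412"]:
    print(f"U+{ord(ch):04X}  {unicodedata.name(ch)}")
# U+0042  LATIN CAPITAL LETTER B
# U+0392  GREEK CAPITAL LETTER BETA
# U+0412  CYRILLIC CAPITAL LETTER VE

# Even compatibility normalization (NFKC) does not merge them:
assert unicodedata.normalize("NFKC", "\u0392") == "\u0392"
```

Because the script identity is carried by the code point itself, a renderer can pick a Greek font for the Greek snippet without any language tagging, which is exactly what CJK unification gave up.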
Yes, but the kernel IS monolithic. No one is denying that. The same could be said about libc. But the GNU tools (coreutils) are a bit different since they are independent and have cleanly defined interfaces. You can easily pick the ones you like and use alternate implementations of others (busybox for example).
Could it be as simple as Poettering getting a kick out of pissing people off? From the way he argues, it definitely seems like it. He mentions that people don't find systemd "unix-like", and instead of addressing the actual critique he makes up his own definition of "unix-like" which he surely knows is different from what anybody else means. The same goes for coming up with his own definition of the word modular.
Can he really believe this political rhetoric is fooling anyone? Anyone who cares about an init system, that is. I find it more likely he is actively trying to piss people off. Because if he let the code speak for itself and was honest about it, it's really not that bad.
Linux has been faced with the same critique for not being modular. It has led to honest and interesting discussions about microkernel vs monolithic design. Torvalds would never enter such a discussion claiming Linux actually has the most pure microkernel design of any OS, and end the discussion by pulling a new definition of every established term out of his ass.
Instead of criticizing Torvalds for being bad at handling people Poettering could learn a few things. If he simply admitted that systemd is a big monolithic beast, just like the Linux kernel, the matter could at least be discussed (at a technical level). If that had been what he wanted...
I'm sure this "invention" will correctly attribute Snow White to the Brothers Grimm and not Disney. Right?
I guess that's why Disney prevents Google from implementing the algorithm by patenting it.
There was a similar case in Sweden which highlighted many of the problems with current child pornography laws. A manga translator was accused but was finally declared not guilty in the highest instance (högsta domstolen, the Swedish Supreme Court). The picture in question depicted a topless (relatively realistic-looking) manga girl standing alone in a field.
So what is child pornography exactly?
1) It depicts a child. A child is someone, real or fictional, under 18. This includes an adult pretending to be a child (also called age play), and also an adult who looks like a child (willingly or not), for example by dressing in childish clothes. One tool for deciding whether someone looks like a child is the Tanner scale (which was used in court).
2) It is pornographic. This is of course very subjective, and it is defined as what is commonly perceived as pornographic. An obvious problem with this definition is that something generally needs to contain adults (or at least teens) to be commonly perceived as pornographic in the first place. So one has to imagine being a paedophile in order to make the decision, which only leads to unnecessary sexualisation of children (for example, pictures of children playing on a beach become commonly perceived as pornographic).
The laws tend to get more and more inclusive, covering more and more as child pornography. And no one wants to pull the brakes, since doing so will get them accused of liking child pornography and being paedophiles themselves (an open goal for political opponents). Meanwhile, the real child pornography (with real children being real victims) simply gets dwarfed by the vast amount of cartoons and teens taking pictures of themselves. Which makes it difficult for the police to legally focus their resources.
These laws are expansions of laws against indecent behavior. You are not allowed to have sex in public -> you are not allowed to publicly display pictures of people having sex, or other pornographic images -> some pornographic images you are not allowed to distribute -> some pornographic images you are not allowed to possess.
It would make much more sense to instead expand the laws against sexual assault to forbid images of such assaults. There is not much point in determining whether someone may find them pornographic or not (from a legal perspective).
One key question here is of course what the relationship is between child pornography and paedophiles committing sexual assaults. One possibility is that the pornography inspires paedophiles to commit more sexual assaults. Another is that the pornography keeps the paedophiles occupied, so they commit fewer sexual assaults. The studies made on serial offenders point to the conclusion that pornography lessens the risk of repeat offenses. But it's uncertain whether this is also true for the first offense (which isn't as easy to study, for obvious reasons).
I have a theory that it's impossible to prove anything, but I can't prove it.