While I agree with your point, it is worth noting that there have been several studies over the last half-century disproving the diet-heart hypothesis.
You're conflating aspect ratio and resolution. If the 16:9 monitor is 1600x900, then a 4:3 monitor with the same horizontal resolution would be 1600x1200.
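To make that concrete, here's a quick sketch (the helper function is my own, purely illustrative):

```python
# Comparing panels that share a horizontal resolution but differ in
# aspect ratio. Purely illustrative numbers.

def height_for(width, aspect_w, aspect_h):
    """Vertical resolution of a panel with the given width and aspect ratio."""
    return width * aspect_h // aspect_w

w = 1600
h_16_9 = height_for(w, 16, 9)   # 900
h_4_3 = height_for(w, 4, 3)     # 1200

print(f"16:9 at width {w}: {w}x{h_16_9}")
print(f" 4:3 at width {w}: {w}x{h_4_3}")
print(f"extra rows on the 4:3 panel: {h_4_3 - h_16_9}")
```

Same width, but the 4:3 panel gives you 300 more rows of pixels - which is the fair comparison, since it's vertical space that widescreen trades away.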
I use 2x 1280x1024 on my desktop, and 1x 1920x1080 on my laptop. I agree that 16:9 is almost like having dual screens, but it's just not as good. If you're going to use dual screens anyway, then it makes more sense to go with 4:3 and have one window per screen.
A semi-related issue is that Linux HiDPI support isn't quite there yet (KDE5 and Wayland aren't mainstream yet), so there's little reason to upgrade until then.
The only reason, AFAIK, is because it's of strategic advantage to the systemd project, and by extension, Red Hat. (If someone has evidence to the contrary, I'd love to hear it.)
I've used systemd since mid-2013, and since then I've acquired a fair few reasons to dislike it, but it's the management of the project that bothers me more than any technical aspect. The systemd modules all seem to depend on the process manager and journal. The process manager requires that systemd also acts as init,* and user instances require a root instance. None of these dependencies need to exist - even the journalling library could be replaced by a shim that just forwards everything to stderr. Traditionally they would have been separate projects and such dependencies wouldn't exist.
* Systemd is a much better process manager than SysVinit, but there was never any reason to prevent the user from choosing another init.
How is that not fair? As a solar panel user, you’re no different from any other generator company.
The problem is that they aren't under the same contract as another generation company; they're just given a flat rate. The generators are subject to dynamic pricing that varies by the hour, and those prices take power factors into account. In certain regions, at certain times, the price of electricity can even go negative if the mismatch between demand and supply gets too great (this is typically associated with large wind/solar installations, which are inherently unpredictable).
Of course, the reason they're given a flat rate in the first place is because dynamic pricing would be too complex and unpredictable for the average consumer, even though it would result in a more efficient system. Ultimately I suspect they'll just offer a mean price that compensates for this, with dynamic pricing as a possible alternative.
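A toy sketch of the difference (all numbers invented for illustration - real feed-in tariffs and spot prices vary by market):

```python
# Revenue from selling the same hourly output under a flat feed-in
# rate vs. dynamic hourly prices. All figures are made up.

def revenue_flat(output_kwh, rate):
    """Flat rate: every kWh earns the same, whenever it's produced."""
    return sum(output_kwh) * rate

def revenue_dynamic(output_kwh, prices):
    """Dynamic pricing: each hour's output is sold at that hour's price.
    Prices can go negative when supply outstrips demand, in which case
    a generator effectively pays to offload power."""
    return sum(kwh * p for kwh, p in zip(output_kwh, prices))

# A sunny day: peak solar output coincides with low (even negative) prices.
output = [0, 1, 4, 6, 6, 4, 1, 0]                             # kWh per hour
prices = [0.12, 0.10, 0.04, -0.02, -0.01, 0.05, 0.11, 0.14]   # $/kWh

print(revenue_flat(output, 0.08))
print(revenue_dynamic(output, prices))
```

The point being: a flat rate quietly shifts the pricing risk onto the utility, which is why I'd expect the compensating mean rate to sit below the average spot price.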
Generalized Bechdel test: how many movies have conversations between minor characters that are about something other than the main character?
When you flick a pencil in the absence of gravity, you impart both translational and rotational momentum to it, both of which are conserved. Flick it in the center, and it will fly forward, and you backward. Flick it at an extreme*, and while it spins forward, you will spin backward. Flick it anywhere in between, and you'll get some combination of those.
* In practice you can't flick it without imparting translational momentum: a single impulse always carries linear momentum, so pure rotation would require a force couple (two opposing pushes), not one finger.
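The idealized mechanics can be sketched like this (uniform rod, single impulse; the pencil's dimensions and the impulse are made-up values):

```python
# An impulse J applied a distance d from the centre of mass of a
# uniform rod gives it linear momentum J and angular momentum J*d,
# both conserved afterwards. Illustrative numbers only.

def flick(J, d, m, L):
    """Return (linear velocity, angular velocity) of a uniform rod
    of mass m and length L after an impulse J applied offset d from
    the centre."""
    v = J / m              # linear momentum is J regardless of where you push
    I = m * L**2 / 12      # moment of inertia of a uniform rod about its centre
    w = J * d / I          # angular momentum is J*d
    return v, w

m, L, J = 0.005, 0.19, 0.01   # a 5 g, 19 cm pencil; a small impulse

print(flick(J, 0.0, m, L))    # flick at the centre: translation only
print(flick(J, L / 2, m, L))  # flick at the tip: translation *and* spin
```

Note that `v` comes out nonzero for any single impulse, wherever you apply it - which is exactly why the footnote's pure-rotation case is unreachable with one finger.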
Rather than choosing specific resources, you might find it more helpful to look for comprehensive collections. e.g.
- Wikipedia - because if anything comes close to being the sum of human knowledge, this is probably it. [10 GB]
- A complete mirror of the packages available for your distribution of choice (I suggest Debian stable [60 GB], though Gentoo [160 GB] might be worth considering if you want more flexibility and don't mind the compile times). This will allow you to experiment with a language/program/etc. even if you don't have it installed when you leave.
- Project Gutenberg [8 GB]
In addition to these, any of the standard comp sci books (e.g. The Art of Computer Programming) will give you something to mull over. Learn a functional language if you haven't used one before (I suggest Haskell).
I think D is spectacular and I'm sorry it hasn't seen more adoption.
I'll second that. It seems to be purely a matter of corporate backing - Go and Rust have received a significant amount of funding from Google and Mozilla, but D hasn't gotten much more than a few conferences sponsored by Facebook.
If it's not supported on WINE, then it's completely random whether or not the WINE demographic will buy it, and MBA-types tend to prefer predictability.
Furthermore, if it does work well under WINE, that implies the effort needed to port it is minimal - simply bundling WINE (or a similar translation layer) with it would be sufficient. In my experience, there isn't much support available for games anyway, unless it's a game-breaking bug that affects a large number of people.
Then you've got the people like me, who dual boot all of their systems (so we're already customers, anyhow).
Then, over in a tiny little corner, you've got the Linux users with a gamer-grade PC, no OS but Linux, no console, and pockets lined with cash earmarked for games if only publishers'd release them on their OS of choice!
I think you've mis-characterised the demographic. I single boot Linux (because dual booting is a pain), but I still buy Windows games via Steam that work via WINE. If a game isn't playable under WINE (which is increasingly uncommon these days) and doesn't have a native Linux port, I simply don't buy it.
Valve knows exactly how many people are doing this, and my guess is it's a small but non-trivial number.
That's actually a rather interesting question.
As with many things in law, it's the result of historical precedent (aka legacy code).
Historically, computer programs were not effectively protected by copyright because they were not viewed as fixed, tangible objects: object code was seen as a utilitarian good produced from source code, rather than as a creative work in its own right. Lacking precedent, the Copyright Office classified computer programs by analogy - source code is to object code as the blueprints of a bridge are to the resulting bridge - and on that basis issued copyright certificates under its "Rule of Doubt".
So basically, software is copyrightable because blueprints are copyrightable, and later legislation was passed to codify this (and legislation doesn't need to be well reasoned or justified, merely politically tenable).
This then leads to the question of why blueprints are copyrightable.
"Consistent with other provisions of the Copyright Act and copyright regulations, . . . protection [of architectural works] does not extend to standard features, such as common windows, doors, and other stable building components." As architect Michael Graves explained, copyright protection covers only the "poetic language" of an architectural work, which includes those parts of the design that are "responsive to issues external to the building, and incorporates the three-dimensional expression of the myths and rituals of society". It does not cover "internal language", which includes those parts of the design that are "intrinsic to the building in its most basic form – determined by its pragmatic, constructional, and technical requirements." Thus, for example, individual elements that are driven by function are not copyrightable, including the presence of doors and windows or those elements required by building codes. Accordingly, architectural designs must be analyzed to determine the scope of their functionality.
So basically, architecture is a combination of art and functional aspects, and only the artistic elements were ever intended to be covered. The problem is that because the judiciary (in most cases) don't understand programming, they are unable to distinguish between them adequately.
IMO, computer programs in general should never have been considered to have an artistic element (and I say that as someone who appreciates beautiful code). A building may be said to have artistic elements because it serves two purposes: a functional one (to provide shelter) and an artistic one (to look good). With the exception of examples in textbooks (which are copyrightable independently of this), code is almost never written to look good, merely to serve a functional purpose. While it may be beautiful, that is not its primary purpose. Programs should have been regarded as purely mechanical, and covered by patent law instead.* At this point, though, it is likely impossible to fix that flaw, given how disruptive doing so would be.
* I think that software patents should exist, but not in their current form. While computer programs are mathematical in nature, so are many other patentable creations. For example, the negative feedback amplifier was well-deserving of a patent (given how revolutionary it was at the time), yet each of the components in that circuit could be well-defined mathematically. If the requirements for a patent were enforced properly (i.e. novelty/non-obviousness, and the limitation to an implementation as opposed to a goal/idea), then software patents would actually be useful.
However, I still think you have missed my point, because you say I desire copyrighted APIs. I'd rather see copyright rolled back entirely, or at least greatly restricted along the lines Richard Stallman proposes. What I am saying is that as long as one supports copyright as it is now, and as it is being expanded, one has to accept that APIs should be copyrightable. In that sense, if you believe in the value of copyrighting computer software, Linux should *not* have been legal (ignoring that copyright violation used to be mostly just a civil matter until it recently became criminal, and that the UNIX copyright holders would have had to choose to pursue Linux in court).
You have a fairly well-written (if lengthy) post, but it is based on the assumption that the law should be consistent. However, pretty much every law has exceptions added. e.g. murder has self-defence and (in some jurisdictions) euthanasia, copyright has fair use, etc.
I consider APIs to be the digital equivalent of forms. You submit a form to a department that accepts it, and you get another form back. The layout of the form (which sections are on which pages, whether you have boxes or grids of bubbles) affects the efficiency with which the form may be processed.
Forms are not copyrightable, for many of the same reasons that APIs should not be copyrightable.
Copyright protects artistic expression. Copyright does not protect useful articles, or objects with some useful functionality. The Copyright Act states:
A “useful article” is an article having an intrinsic utilitarian function that is not merely to portray the appearance of the article or to convey information. An article that is normally a part of a useful article is considered a “useful article”.
“the design of a useful article, as defined in this section, shall be considered a pictorial, graphic, or sculptural work only if, and only to the extent that, such design incorporates pictorial, graphic, or sculptural features that can be identified separately from, and are capable of existing independently of, the utilitarian aspects of the article.”
However, many industrial designers create works that are both artistic and functional. Under these circumstances, Copyright Law only protects the artistic expression of such a work, and only to the extent that the artistic expression can be separated from its utilitarian function (what courts call "conceptual separability"). If the aesthetic aspects cannot be separated from the functional aspects, copyright protection is not available.
It can be difficult to gauge whether the artistic aspects of a work can be separated from its useful aspects. Courts often rely on the Denicola test, which asks whether the artistic design was significantly influenced by functional considerations. If so, copyrightability depends on the extent to which the work reflects artistic expression uninhibited by functional considerations. As discussed by Judge Oakes:
Copyrightability "ultimately should depend on the extent to which the work reflects artistic expression uninhibited by functional considerations." To state the Denicola test in the language of conceptual separability, if design elements reflect a merger of aesthetic and functional considerations, the artistic aspects of a work cannot be said to be conceptually separable from the utilitarian elements. Conversely, where design elements can be identified as reflecting the designer's artistic judgment exercised independently of functional influences, conceptual separability exists.
Full disclosure: I hold a bachelor's in CS from Stanford and have been an engineer for 14 years since then. I think my degree was, to be polite, poor preparation for any real-world work beyond teaching college CS courses, although I have also never seen any program I think is better.
That says it all, really. A CS degree is not supposed to be preparation for a career as a software engineer - it's preparation for a career in CS. Degrees in software engineering already exist and serve that purpose, though it sounds like Stanford didn't have one in 2000.
I agree. Systemd is now effectively a "2nd kernel", or "userspace plumbing" - essentially a redesign of Linux userspace.
The problem is that I don't know what there is for Debian to debate. They don't have the upstream influence. If systemd is expanding, and large numbers of upstream developers are going to be introducing dependencies on it, what is there for Debian to debate? The most they could do, in a practical sense, is create a subset of packages with no direct or indirect systemd dependencies - and frankly, that's pretty easy for anyone to do for themselves by creating a child distribution.
IMHO the anti-systemd people are mostly admins rather than developers, so I don't think they have the manpower to pull off what they want.
Debian is probably the second most influential distro, after Red Hat. One example of this is how Canonical dropped support for upstart after Debian adopted systemd as the default. (And don't forget how many distros are derivatives of Ubuntu.)
If Debian imposes a requirement that packages do not depend on systemd, then that will have a significant effect. That said, some larger projects do have enough influence to block them from doing so. e.g. not being able to run Gnome would be a pretty big blow to them. Ultimately, distros and upstream projects have a symbiotic relationship, so the larger players on either side can influence the outcome.
Regardless of what's running as PID 1, many administrators of Linux installations do not want any of Poettering's libs installed at all. With a codebase this polarizing, it's understandable that there would be a call for removing systemd libs as obligatory dependencies.
In any event, it's not GIMP itself that has the dependency but an intermediate library. I simply used it as an example of a *nix desktop application that pulls in systemd libs on Debian even if one is trying very hard to avoid everything systemd related.
This is Debian we're talking about, not Gentoo (and I say that as a Sabayon user). For most packages, you're limited to whatever configuration the maintainer decided on (which is usually the one with the most features), since the alternative would be to have a separate package for each version. e.g. vlc and vlc-nox.
If you really want that level of control, Debian is not the right distro to be using.