Because with a description of the problem he wants to solve, rather than his proposed solution, someone may be able to point out that there are better solutions which don't involve this kind of low-level coding.
Are you trolling? For anyone not already intimately familiar with the process, the vertical learning curve of writing Perl bindings for C++ code will cause more pain, anguish, wailing and gnashing of teeth than writing in either pure Perl or pure C++. You will also gain nothing in portability: in fact you will lose, because portability will be the lowest common denominator of both Perl and C++ (I won't argue over which is lower to start with, both can be high with the right libraries), with the added headache of having to deal with two orthogonal sets of problems, in different languages.
I don't disagree that nostalgia sells, but I do disagree that what we are seeing here is purely nostalgia-driven. I, for one, prefer unrealistic "drift-style" racers to simulations - I get a lot of enjoyment from going as fast as possible, negotiating courses through a mixture of careful positioning and controlled drifts, with the height of skill being completing a lap without releasing the accelerator, without crashing.
Games which deliberately ape the looks & sounds produced by old systems may indeed rely heavily on nostalgia, but there are plenty of other games out there maintaining the old-fashioned arcade driving mechanics, whilst taking full advantage of modern hardware. Personally I would put Mario Kart 8 in this category (although it is debatable whether the Wii U can be called "modern" in the graphics department). In TFA itself, the Power Drive 2000 trailer may have retro music and a retro *feel* to the graphics, but the graphical fidelity itself is not artificially restricted. Elsewhere on Kickstarter, Formula Fusion seeks to recreate the style and mechanics of the WipEout series, whilst not in any way pretending to be an old game - I for one am excited by the prospect of finally having what is essentially WipEout (in all but name) running on modern PC hardware, with all the bells, whistles and convenience that implies, but would probably be put off if they were to deliberately attempt an original-PlayStation aesthetic. The 90s Arcade Racer is definitely playing heavily on nostalgia, littered with references to (as you may have guessed) various 90s arcade games, but again, it seeks to make the best of the underlying hardware.
Nostalgia is certainly one aspect of all this, but don't underestimate the number of people who simply find these kinds of games fun, and want to be able to play them easily & legally on contemporary hardware! I suspect I am not alone in finding that simulation-style games are not enjoyable without matching realistic controls, but have neither the space to dedicate to wheels, joysticks, throttles, pedals etc. - nor do I particularly want to spend the money or devote the time. For example, much as I am pleased that Elite: Dangerous and Star Citizen exist, I personally am holding out for No Man's Sky, simply because releasing on PS4 first means it is far more likely to have a simple control scheme which works on common controllers. Many will probably decry it as "dumbed down" or "retro"; I say it is just a different design decision.
Do you honestly expect HTC and/or Valve to have invented some magic which somehow manages to render images at the same (or higher) resolution & framerate, with the same image quality and in-game graphics options, with any less beefy hardware to back it up? Or do you think Oculus are simply lying about what is needed for a good experience?
In this case, they are all things that require knowledge of who is logged in - functions to do with actually tracking creation/switching/ending of sessions, or things where admins may wish to change policy based on who is logged in (e.g. non-superuser can't reboot a shared machine whilst anyone else is using it). I agree it does seem like a bit of a kitchen sink, but it represents things that need to be considered in tandem for this functionality to work well on desktop systems, which have not traditionally been considered in tandem - on the one hand, this is the kind of consolidation systemd opponents complain about; on the other hand, in my experience, all this stuff now works better than ever before.
IMHO, logind is not a good example if you want to demonstrate feature creep - it is a good example of providing a unified solution to a bunch of related problems which were not previously addressed in a satisfactory way. Better examples of feature creep are things like networkd (for dynamic configurations on desktops/laptops we already have NetworkManager; for static configurations on servers you only need to get the distro-specific network set-up right once then leave it alone), or timesyncd (what's wrong with existing NTP clients?).
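For what it's worth, existing NTP clients and timesyncd all boil down to the same four-timestamp arithmetic from RFC 5905 - timesyncd just does it with less filtering sophistication than ntpd or chrony. A minimal sketch of that calculation (the timestamps below are invented illustrative values):

```python
# The core (S)NTP clock-sync maths (RFC 5905). Every client, from ntpd
# to timesyncd, computes the offset and round-trip delay this way:
# t0: client send time, t1: server receive time,
# t2: server send time, t3: client receive time.

def ntp_offset_and_delay(t0, t1, t2, t3):
    """Return (clock offset, round-trip delay) in seconds."""
    offset = ((t1 - t0) + (t2 - t3)) / 2.0
    delay = (t3 - t0) - (t2 - t1)
    return offset, delay

# Client clock 5 s behind the server; 0.25 s network latency each way.
offset, delay = ntp_offset_and_delay(100.0, 105.25, 105.25, 100.5)
print(offset, delay)  # -> 5.0 0.5
```

The serious clients differ in what they layer on top (sample filtering, drift estimation, leap handling), not in this core step - which is part of why "yet another client" raises eyebrows.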
Personally, I'm not strongly opposed to systemd, and have observed some benefits from it - but my usage of it is limited to my own desktop and laptop, I am not a sysadmin worried about having to re-learn how to administer an entire network. In the context of desktops & laptops, I would say systemd is a good thing; elsewhere, I don't consider myself qualified to have a strong opinion.
To provide a consistent, reliable way of tracking who is logged in, and interfaces for doing various things related to user sessions - this includes providing controlled access to things logged-in users might want to do, such as suspend, reboot, power off, access input devices (separately from those in use by other users who may be logged in to the same machine), switching between different sessions, inhibiting suspend (because I'm watching a movie), and so on. A whole load of stuff which has traditionally been unreliable on desktops, or only worked for one user at a time, or worked differently for each distribution, or had no consistent mechanism for controlling access to the functions.
It has a man page: http://www.freedesktop.org/sof...
Desktop environments choose to depend on logind because it frees them of the responsibility to implement all this stuff themselves - which has traditionally been a mess, because the way these things are handled across different distributions has always been subtly different. Which group do I need to be in to be allowed access to reboot? What are the permissions on the device node for the keyboard? How does a generic video player app tell the system not to turn the screen off, without individual support for the methods used by various disparate desktop environments? If you have a common interface which all the DEs can use (and other apps with no specific DE affiliation), it becomes very tempting to, you know, *use* it.
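As a concrete illustration of that common interface: logind exposes inhibitor locks over D-Bus, and the stock `systemd-inhibit` tool wraps them, so any app can block idle/suspend the same way on any distribution (the player and filename below are placeholders):

```shell
# Hold a logind inhibitor lock for the duration of one command:
# idle and sleep are blocked until the player exits.
systemd-inhibit --what=idle:sleep --why="Watching a movie" mpv film.mkv

# See who currently holds inhibitor locks, and why:
systemd-inhibit --list
```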
Its access control goes via Polkit, which is itself a generic system for controlling access to privileged operations. Polkit itself is not part of systemd.
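For illustration, a polkit rule gating logind's reboot action might look something like this (the file name and the `power` group are invented for the example; the action id is the real one logind consults):

```javascript
// /etc/polkit-1/rules.d/50-reboot.rules (example path and group)
// Allow members of the "power" group to reboot via logind;
// everyone else falls through to the default policy.
polkit.addRule(function(action, subject) {
    if (action.id == "org.freedesktop.login1.reboot" &&
        subject.isInGroup("power")) {
        return polkit.Result.YES;
    }
});
```

This is the answer to the "which group do I need to be in" question: one mechanism, expressed the same way everywhere, instead of per-distro group conventions.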
TFA and the summary make it sound as if it is the lack of support contract which makes these systems insecure. This is complete and utter nonsense - it is the fact that they are running Windows XP which makes them insecure. It's not as if malicious hackers around the world were sitting there rubbing their hands in glee, waiting for the day the support contract expired to plunder the systems, having previously been completely and utterly thwarted in their evil plans by the exchange of funds between the UK government and Microsoft.
But at least a support contract would get them fixes for any newly discovered vulnerabilities, right? Well, maybe. No software is perfect, but the world - and Microsoft's practices - have moved on, and realistically it would take a *lot* of money for MS to spend a meaningful fraction of their resources securing an OS past the end of its useful commercial life.
"because there is less flickering in DirectX games". DirectX games played under Wine, or are your problems with AMD/ATI not actually directly related to Linux at all? I'm not sure what you mean by "flickering", but the problem anti-aliasing is designed to solve is, well, aliasing - that is, jagged edges on objects caused by the unavoidable fact that the on-screen image is composed of individual pixels, which becomes noticeable whenever different coloured objects don't line themselves up perfectly along pixel boundaries (i.e. most of the time).
If the problem is not strictly to do with jagged edges on objects, you may also want to read up on mipmapping and/or anisotropic filtering:
You might be misunderstanding the problem and exacerbating things through poor graphics options, or you might simply be abnormally sensitive to the limitations of interactive 3D graphics rendering. Alternatively, if by "flickering" you mean entire objects are actually disappearing/reappearing, that sounds like an application bug, or a hardware failure waiting to happen (e.g. video memory corruption resulting from overheating).
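To make the aliasing point concrete, here is a toy sketch - nothing like a real GPU pipeline, just the underlying idea - of why one sample per pixel gives all-or-nothing jagged edges, while averaging a grid of sub-samples (the essence of supersampling) gives edge pixels intermediate shades:

```python
# A pixel occupies a 1x1 square; its shade is the fraction of sample
# points falling inside the object. One sample = all-or-nothing
# (jagged edges); a 4x4 sub-sample grid = fractional coverage.

def edge(x, y):
    """An object bounded by the sloped edge 2*y < x."""
    return 2 * y < x

def coverage(px, py, samples, inside):
    """Fraction of an n x n sub-sample grid inside the object."""
    hits = 0
    for i in range(samples):
        for j in range(samples):
            if inside(px + (i + 0.5) / samples,
                      py + (j + 0.5) / samples):
                hits += 1
    return hits / (samples * samples)

print(coverage(0, 0, 1, edge))  # -> 0.0  (centre sample misses: pixel fully off)
print(coverage(0, 0, 4, edge))  # -> 0.25 (edge pixel gets a 25% shade)
```

The pixel really is about a quarter covered by the object; single sampling throws that information away, which is exactly the "jaggies" anti-aliasing exists to soften.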
.. then this wins for me, hands down. I have been relatively lucky with HDDs over the years; only ever had one failure which I didn't see coming, and even that didn't result in any data loss (though it did result in an interesting afternoon's work resurrecting the drive). I had an overheating GPU once, but again, I was able to see the failure coming a mile off and replaced the card before it became unusable.
Anyone who has assembled their own machine and never had a BIOS or UEFI related problem - even if self-induced through misconfiguration - is extremely lucky indeed. Very recently I built myself a new box, and through combinations of various issues, have already rendered it unbootable several times within the space of a week:
- Booting from an external optical drive causes the Windows installer not to see the storage controller unless plugged into particular USB ports
- Needing so-called "legacy USB" support enabled to boot from external optical drives
- Poorly-labelled choice of legacy BIOS or UEFI boot modes in the configuration, which of course you can't switch between without replacing your bootloader
- A "fast boot" mode which doesn't always take effect, disables access to the settings menu and requires a Windows utility to turn it back off
- A BIOS update which stopped the machine booting entirely (thankfully I discovered at this point that the board has a second BIOS chip accessible by a physical switch)
I now have a working machine, booting in UEFI mode and reaching the desktop in just over 20 seconds (Windows 8.1 64-bit on an SSD - no Linux install yet). There is an "ultra fast" boot mode which in theory would reduce this further, but it seemingly requires cooperation from the graphics card, and mine does not cooperate.
I didn't. This was my first comment in this thread. My point is, you say you're off to play this game, but it hasn't yet released.
+1 for Das Keyboard. A Model S Professional with Cherry MX Red switches strikes a nice balance between firmness of action and volume. I also like the feel of the Microsoft Natural 4000, especially with the wedge installed to raise the front - it feels strange at first, but so comfortable once you get used to it! Sadly I found the quality of the mechanism lacking - too spongy and unsuited to long-term sustained use.
One day someone will make something with the shape of the 4000 (including wrist-rest and raised front) and decent mechanical keys, and I will have found my typing soulmate...
Now I'm going to go off to play some
Cool story, bro.
They usually say "that bug was fixed already in our latest version. Go bug your distro to update their packages." And thus the buck is ever passed.
That is not passing the buck, that is upstream developers correctly leaving packaging up to the distribution maintainers. If I release a piece of open-source software, and it gets packaged (by people other than myself) for multiple Linux distributions, do I - as upstream maintainer - suddenly become responsible for the care and maintenance of those packages, in distributions I may not even be aware were packaging my software, let alone have any sort of commit access to? Would you expect me to go through the rigmarole of becoming a Debian developer, for example, just so that I can ensure the Debian package of my software (which I didn't even create, I just release tarballs of code) is always bleeding edge?
If a distribution moves too slowly for your tastes, that is a problem with the distribution, and upstream are not beholden to you to fix it.
Umm... what? So nobody should develop for Android because people with Android phones are cheap bastards? This kind of subjective, offensive, blanket generalisation is exactly what draws accusations of butthurt amongst the iOS crowd.
I propose an alternative: coffee shops should focus on selling good coffee, and they will get my money by selling me coffee. These brand-specific apps are PR; they provide advertising, spread awareness and recognition through word of mouth, encourage customer loyalty, so on and so forth. If a coffee shop expects me to pay them for that, they can shove it, quite frankly. Being able to pay for coffee *via* an app could be interesting, especially if it makes the process quicker and more convenient, but paying for the app itself? No thanks.
I think you misunderstand what is going on here. Server push is basically a way for sites to pre-populate a browser's cache, by sending it suggested requests for things it doesn't yet know it will need ("push promises"), and the responses to said requests (the pushes themselves). If the server pushes responses to requests that the client never actually ends up making - not even to the extent of pulling the data from its local cache - then the pushed data will never be processed.
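To make that concrete, here is a toy model - not a real HTTP/2 implementation, the names are invented - of the cache semantics just described:

```python
# Toy model of HTTP/2 server push semantics: a push pre-populates the
# client's cache, but pushed data is only ever *processed* if the
# client later makes the matching request itself.

class Client:
    def __init__(self):
        self.cache = {}      # path -> pushed/fetched response body
        self.processed = []  # responses actually handed to the page

    def receive_push(self, path, body):
        """PUSH_PROMISE + pushed response: cached, nothing more."""
        self.cache[path] = body

    def request(self, path, fetch):
        """Serve from cache if pushed, otherwise fetch; then process."""
        if path not in self.cache:
            self.cache[path] = fetch(path)
        body = self.cache[path]
        self.processed.append(body)
        return body

client = Client()
client.receive_push("/style.css", "body { color: red }")
client.receive_push("/evil.js", "alert('pwned')")

# The page only ever references /style.css, so only it is processed;
# the unsolicited /evil.js push sits inertly in the cache.
client.request("/style.css", fetch=lambda p: "(network fetch)")
print(client.processed)  # -> ['body { color: red }']
```

The pushed-but-unrequested resource never reaches the page at all, which is why "the server can push me things" is not, by itself, an injection vector.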
Unless you are sitting on an unpublished proof-of-concept, the only malicious use I can see is filling up the cache of a poorly written browser with nonsense. This is already feasible with HTTP/1.x via any number of means.
To inject unsolicited, *processed* (as opposed to cached then ignored) data into a browser using HTTP/2 server push, an attacker would also need to control some resource which the client has already requested, and manipulate it in a way which results in the client needing to load the pushed resource. I don't see how this is any less onerous for an attacker than hijacking an HTTP/1.x resource: that initial hijacking, be it by XSS, rooting the server or any other already-extant method, must still be performed. In fact using HTTP/2 push arguably complicates things, since being able to inject the push itself implies either complete control over the server, or a hijack of the entire session!
Sure, the implementation of the feature itself in any particular HTTP/2 stack could be buggy, but so could anything else.