Comment Re:All your games... (Score 1) 217
Thank you so much for getting "Somebody set up us" correct, instead of writing "Somebody set us up"!
A strange pet peeve, I know, but there you have it.
I'm confused. How would a GPU with 32GB of "integrated memory" be news, but a GPU with 32GB of [non-integrated*] memory is not news? I'm not sure what you mean when you say "integrated memory". This is not the graphics half of an APU; it is a discrete card, and nowhere does the summary or the press release state otherwise. The term "compute GPU" just means it's targeted at computing workloads, not graphics workloads.
What exactly is your complaint?
* Not even sure what this means, but you seem to be contrasting "integrated memory" and "memory".
Because with a description of the problem he wants to solve, rather than his proposed solution, someone may be able to point out that there are better solutions which don't involve this kind of low-level coding.
Are you trolling? For anyone not already intimately familiar with the process, the vertical learning curve of writing Perl bindings for C++ code will cause more pain, anguish, wailing and gnashing of teeth than writing in either pure Perl or pure C++. You will also gain nothing in portability: in fact you will lose, because portability will be the lowest common denominator of both Perl and C++ (I won't argue over which is lower to start with, both can be high with the right libraries), with the added headache of having to deal with two orthogonal sets of problems, in different languages.
I don't disagree that nostalgia sells, but I do disagree that what we are seeing here is purely nostalgia-driven. I, for one, prefer unrealistic "drift-style" racers to simulations - I get a lot of enjoyment from going as fast as possible, negotiating courses through a mixture of careful positioning and controlled drifts, with the height of skill being completing a lap without releasing the accelerator, without crashing.
Games which deliberately ape the looks & sounds produced by old systems may indeed rely heavily on nostalgia, but there are plenty of other games out there maintaining the old-fashioned arcade driving mechanics, whilst taking full advantage of modern hardware. Personally I would put Mario Kart 8 in this category (although it is debatable whether the Wii U can be called "modern" in the graphics department). In TFA itself, the Power Drive 2000 trailer may have retro music and a retro *feel* to the graphics, but the graphical fidelity itself is not artificially restricted. Elsewhere on Kickstarter, Formula Fusion [1] seeks to recreate the style and mechanics of the WipEout series, whilst not in any way pretending to be an old game - I for one am excited by the prospect of finally having what is essentially WipEout (in all but name) running on modern PC hardware, with all the bells, whistles and convenience that implies, but would probably be put off if they were to deliberately attempt an original-PlayStation aesthetic. The 90s Arcade Racer [2] is definitely playing heavily on nostalgia, littered with references to (as you may have guessed) various 90s arcade games, but again, it seeks to make the best of the underlying hardware.
Nostalgia is certainly one aspect of all this, but don't underestimate the number of people who simply find these kinds of games fun, and want to be able to play them easily & legally on contemporary hardware! I suspect I am not alone in finding that simulation-style games are not enjoyable without matching realistic controls, but I have neither the space to dedicate to wheels, joysticks, throttles, pedals etc., nor any particular wish to spend the money or devote the time. For example, much as I am pleased that Elite: Dangerous and Star Citizen exist, I personally am holding out for No Man's Sky, simply because releasing on PS4 first means it is far more likely to have a simple control scheme which works on common controllers. Many will probably decry it as "dumbed down" or "retro"; I say it is just a different design decision.
[1] http://www.r8games.com/
[2] http://www.destructoid.com/rem...
Do you honestly expect HTC and/or Valve to have invented some magic which somehow manages to render images at the same (or higher) resolution & framerate, with the same image quality and in-game graphics options, with any less beefy hardware to back it up? Or do you think Oculus are simply lying about what is needed for a good experience?
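The arithmetic behind those requirements is not magic. A back-of-the-envelope sketch in Python (assuming the Rift CV1's published 2160x1200 panel at 90 Hz, compared against a typical 1080p monitor at 60 Hz):

```python
# Back-of-the-envelope pixel throughput comparison. Assumed specs:
# Rift CV1 panel: 2160x1200 at 90 Hz; baseline monitor: 1080p at 60 Hz.
vr = 2160 * 1200 * 90          # pixels per second the headset demands
monitor = 1920 * 1080 * 60     # pixels per second for ordinary 1080p60
ratio = vr / monitor
print(f"VR: {vr:,} px/s; 1080p60: {monitor:,} px/s; ratio: {ratio:.3f}x")
```

And that understates the gap: headsets also render a somewhat larger image than the panel to leave room for lens-distortion correction, and a missed frame is far more jarring inside a headset than on a monitor.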
In this case, they are all things that require knowledge of who is logged in - functions to do with actually tracking creation/switching/ending of sessions, or things where admins may wish to change policy based on who is logged in (e.g. non-superuser can't reboot a shared machine whilst anyone else is using it). I agree it does seem like a bit of a kitchen sink, but it represents things that need to be considered in tandem for this functionality to work well on desktop systems, which have not traditionally been considered in tandem - on the one hand, this is the kind of consolidation systemd opponents complain about; on the other hand, in my experience, all this stuff now works better than ever before.
IMHO, logind is not a good example if you want to demonstrate feature creep - it is a good example of providing a unified solution to a bunch of related problems which were not previously addressed in a satisfactory way. Better examples of feature creep are things like networkd (for dynamic configurations on desktops/laptops we already have NetworkManager; for static configurations on servers you only need to get the distro-specific network set-up right once then leave it alone), or timesyncd (what's wrong with existing NTP clients?).
Personally, I'm not strongly opposed to systemd, and have observed some benefits from it - but my usage of it is limited to my own desktop and laptop, I am not a sysadmin worried about having to re-learn how to administer an entire network. In the context of desktops & laptops, I would say systemd is a good thing; elsewhere, I don't consider myself qualified to have a strong opinion.
To provide a consistent, reliable way of tracking who is logged in, and interfaces for doing various things related to user sessions - this includes providing controlled access to things logged-in users might want to do, such as suspend, reboot, power off, access input devices (separately from those in use by other users who may be logged in to the same machine), switching between different sessions, inhibiting suspend (because I'm watching a movie), and so on. A whole load of stuff which has traditionally been unreliable on desktops, or only worked for one user at a time, or worked differently for each distribution, or had no consistent mechanism for controlling access to the functions.
It has a man page: http://www.freedesktop.org/sof...
Desktop environments choose to depend on logind because it frees them of the responsibility to implement all this stuff themselves - which has traditionally been a mess, because the way these things are handled across different distributions has always been subtly different. Which group do I need to be in to be allowed access to reboot? What are the permissions on the device node for the keyboard? How does a generic video player app tell the system not to turn the screen off, without individual support for the methods used by various disparate desktop environments? If you have a common interface which all the DEs can use (and other apps with no specific DE affiliation), it becomes very tempting to, you know, *use* it.
Its access control goes via Polkit, which is itself a generic system for controlling access to privileged things. Polkit itself is not part of systemd.
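For the curious, the inhibitor-lock idea behind "tell the system not to turn the screen off" can be sketched in a few lines. This is a toy Python stand-in, not the real API: logind actually exposes org.freedesktop.login1.Manager.Inhibit over D-Bus, which takes what/who/why/mode strings and returns a file descriptor, and closing that fd releases the lock. Here a plain integer handle stands in for the fd:

```python
class InhibitorManager:
    """Toy illustration of logind-style inhibitor locks (not the real
    D-Bus interface; an integer handle stands in for the returned fd)."""

    def __init__(self):
        self._next_handle = 0
        self._locks = {}  # handle -> (what, who, why, mode)

    def inhibit(self, what, who, why, mode="block"):
        # 'what' is a colon-separated list, e.g. "sleep:shutdown";
        # 'mode' is "block" (forbid the action) or "delay" (postpone it).
        self._next_handle += 1
        self._locks[self._next_handle] = (what, who, why, mode)
        return self._next_handle

    def release(self, handle):
        self._locks.pop(handle, None)

    def sleep_blocked(self):
        # Sleep is blocked while any "block"-mode lock covers "sleep".
        return any(mode == "block" and "sleep" in what.split(":")
                   for what, who, why, mode in self._locks.values())

mgr = InhibitorManager()
handle = mgr.inhibit("sleep", "video player", "Watching a movie")
print(mgr.sleep_blocked())  # True while the lock is held
mgr.release(handle)
print(mgr.sleep_blocked())  # False again
```

The point of the design is that a generic video player never needs to know which DE (if any) is running; it just takes the lock and holds it for the duration of playback.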
TFA and the summary make it sound as if it is the lack of support contract which makes these systems insecure. This is complete and utter nonsense - it is the fact that they are running Windows XP which makes them insecure. It's not as if malicious hackers around the world were sitting there rubbing their hands in glee, waiting for the day the support contract expired to plunder the systems, having previously been completely and utterly thwarted in their evil plans by the exchange of funds between the UK government and Microsoft.
But at least a support contract would get them fixes for any newly discovered vulnerabilities, right? Well, maybe. No software is perfect, but the world - and Microsoft's practices - have moved on, and realistically it would take a *lot* of money for MS to spend a meaningful fraction of their resources securing an OS past the end of its useful commercial life.
"because there is less flickering in DirectX games". DirectX games played under Wine, or are your problems with AMD/ATI not actually directly related to Linux at all? I'm not sure what you mean by "flickering", but the problem anti-aliasing is designed to solve is, well, aliasing - that is, jagged edges on objects caused by the unavoidable fact that the on-screen image is composed of individual pixels, which becomes noticeable whenever different coloured objects don't line themselves up perfectly along pixel boundaries (i.e. most of the time).
http://blender.stackexchange.c...
If the problem is not strictly to do with jagged edges on objects, you may also want to read up on mipmapping and/or anisotropic filtering:
http://en.wikipedia.org/wiki/A...
You might be misunderstanding the problem and exacerbating things through poor graphics options, or you might simply be abnormally sensitive to the limitations of interactive 3D graphics rendering. Alternatively, if by "flickering" you mean entire objects are actually disappearing/reappearing, that sounds like an application bug, or a hardware failure waiting to happen (e.g. video memory corruption resulting from overheating).
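To make the aliasing problem concrete, here is a toy Python sketch (the edge function and sample counts are made up for illustration) comparing one sample per pixel against 4x4 supersampling along a diagonal edge. With a single sample, each pixel is all-or-nothing; with supersampling, edge pixels get fractional coverage, which is what smooths the jaggies:

```python
def coverage(px, row, samples, inside=lambda x, y: x + y < 3.8):
    """Fraction of sample points within pixel (px, row) that fall inside
    a hypothetical diagonal edge. samples=1 mimics no anti-aliasing;
    larger values mimic supersampling."""
    hits = 0
    for i in range(samples):
        for j in range(samples):
            x = px + (i + 0.5) / samples   # sample positions spread
            y = row + (j + 0.5) / samples  # evenly inside the pixel
            if inside(x, y):
                hits += 1
    return hits / (samples * samples)

aliased = [coverage(x, 0.0, 1) for x in range(6)]   # hard 0/1 step
smoothed = [coverage(x, 0.0, 4) for x in range(6)]  # fractional at the edge
print(aliased)
print(smoothed)
```

In an animation, the hard 0/1 transition is what flips from frame to frame as objects move - the "crawling" or shimmering edges people often describe as flicker - whereas the fractional coverage values change gradually.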
...then this wins for me, hands down. I have been relatively lucky with HDDs over the years; only ever had one failure which I didn't see coming, and even that didn't result in any data loss (though it did result in an interesting afternoon's work resurrecting the drive). I had an overheating GPU once, but again, I was able to see the failure coming a mile off and replaced the card before it became unusable.
Anyone who has assembled their own machine and never had a BIOS or UEFI related problem - even if self-induced through misconfiguration - is extremely lucky indeed. Very recently I built myself a new box, and through combinations of various issues, have already rendered it unbootable several times within the space of a week:
I now have a working machine, booting in UEFI mode and reaching the desktop in just over 20 seconds (Windows 8.1 64-bit on an SSD - no Linux install yet). There is an "ultra fast" boot mode which in theory would reduce this further, but it seemingly requires cooperation from the graphics card, and mine does not cooperate.
I didn't. This was my first comment in this thread. My point is, you say you're off to play this game, but it hasn't yet released.
+1 for Das Keyboard. A Model S Professional with Cherry MX Red switches strikes a nice balance between firmness of action and volume. I also like the feel of the Microsoft Natural 4000, especially with the wedge installed to raise the front - it feels strange at first, but so comfortable once you get used to it! Sadly I found the quality of the mechanism lacking: too spongy and unsuited to long-term sustained use.
One day someone will make something with the shape of the 4000 (including wrist-rest and raised front) and decent mechanical keys, and I will have found my typing soulmate...
Now I'm going to go off to play some
Cool story, bro.
They usually say "that bug was fixed already in our latest version. Go bug your distro to update their packages." And thus the buck is ever passed.
That is not passing the buck, that is upstream developers correctly leaving packaging up to the distribution maintainers. If I release a piece of open-source software, and it gets packaged (by people other than myself) for multiple Linux distributions, do I - as upstream maintainer - suddenly become responsible for the care and maintenance of those packages, in distributions I may not even be aware were packaging my software, let alone have any sort of commit access to? Would you expect me to go through the rigmarole of becoming a Debian developer, for example, just so that I can ensure the Debian package of my software (which I didn't even create, I just release tarballs of code) is always bleeding edge?
If a distribution moves too slowly for your tastes, that is a problem with the distribution, and upstream are not beholden to you to fix it.
Top Ten Things Overheard At The ANSI C Draft Committee Meetings: (5) All right, who's the wiseguy who stuck this trigraph stuff in here?
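For anyone too young to be in on the joke: trigraphs are three-character sequences which C90 requires to be replaced before any other processing, so that source could be written on systems whose character sets lacked symbols like '#' and '{'. A toy Python sketch of the substitution (a real compiler does this in translation phase 1, scanning left to right):

```python
# The nine ANSI C trigraphs and their replacement characters.
TRIGRAPHS = {
    "??=": "#", "??(": "[", "??)": "]",
    "??<": "{", "??>": "}", "??/": "\\",
    "??'": "^", "??!": "|", "??-": "~",
}

def expand_trigraphs(source):
    """Naive substitution - good enough to decode the classic examples."""
    for trigraph, replacement in TRIGRAPHS.items():
        source = source.replace(trigraph, replacement)
    return source

print(expand_trigraphs("??=include <stdio.h>"))  # -> #include <stdio.h>
```

Which also demonstrates exactly why the committee's wiseguy was never forgiven: any innocent string literal containing "??!" silently turns into "|" under a conforming C90 compiler.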