There are two common problems with my game, both of which annoy the hell out of me, but neither of which I can really do anything about.
The first is issues with graphics rendering. It seems that, even though OpenGL has a way to report errors like "out of memory," a lot of graphics drivers simply don't bother to do so. Instead they just render shit incorrectly and generally fuck shit up. The result is that it looks like the game sucks, when the truth is the game isn't being told anything about what's wrong. The game presently doesn't pay attention to these errors other than to report them on stderr, but then to my knowledge, nothing useful has ever come out of OpenGL's error reporting functions, so it isn't like I'm missing out on a chance to give useful information to the user. Indeed, the graphics stacks that don't have these problems seem to deal with low memory not by reporting an error, but by swapping things to system memory, where, if memory usage grows further, the OS begins swapping out to disk. Thus the good drivers just get slower when they run low on memory, but otherwise present no errors. The bad ones just randomly drop textures and display lists, and again present no errors.
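For what it's worth, the checking itself is trivial; a minimal sketch of the kind of error-draining I mean (the function name is hypothetical, not the game's actual code):

    /* Sketch: drain OpenGL's error queue and log to stderr. This is
     * roughly all an application can do -- and the broken drivers
     * never set these flags in the first place. */
    #include <GL/gl.h>
    #include <stdio.h>

    static void check_gl_errors(const char *where)
    {
        GLenum err;
        /* glGetError() reports one flag per call, so loop until clear. */
        while ((err = glGetError()) != GL_NO_ERROR)
            fprintf(stderr, "GL error at %s: 0x%x%s\n", where, (unsigned)err,
                    err == GL_OUT_OF_MEMORY ? " (out of memory)" : "");
    }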
The second is issues with malloc(). Sure, malloc() will tell you when it cannot allocate memory. The problem is that there's generally nothing you can do in that case other than terminate the program. You can't even printf() about the error, because printf() uses malloc() internally, and so if malloc() isn't working, printf() won't work either. Never mind trying to do something more reasonable, like displaying a message to the user about what went wrong and what they might be able to do about it.
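About the only thing that reliably works at that point is a raw write() of a pre-formatted message, since that goes straight to the kernel without allocating anything. A minimal sketch, assuming a POSIX system:

    /* Sketch: report an allocation failure without touching malloc().
     * The message is pre-formatted because even the printf() family
     * may allocate; write(2) and _exit(2) never do. POSIX-only. */
    #include <unistd.h>

    void die_out_of_memory(void)
    {
        static const char msg[] = "fatal: out of memory\n";
        write(STDERR_FILENO, msg, sizeof msg - 1);
        _exit(1);
    }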
Unfortunately, many of the tools programmers rely upon to make software have been written lazily, and so even if you write your own code to detect as many errors as possible, there are still plenty you can't do anything about. My game checks the return value of everything. In most cases it just prints a message to stdout and exits, as many errors are so unlikely that it isn't worth the trouble of writing code to do anything else. Even so, I still get malloc() errors that don't make sense. I made the game track its memory usage and found that malloc() fails after allocating only 400 MB of data on systems with 4 GB of RAM and no other software running. I have no clue why it happens, and I can't do anything about it after it happens. It'd be nice if malloc() were more informative about why it's failing than simply returning NULL as a generic error. I've thought about doing one huge malloc() for everything I need and then writing my own memory allocator to allocate out of that chunk, so that I'd be guaranteed to get what I need, but even that doesn't solve problems like a later printf() or some other library call failing when it calls the real malloc(), and I'd also lose features like realloc() being able to move memory around by asking the OS to remap pages rather than copying them.
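The tracking was nothing fancy; something along the lines of this hypothetical wrapper (not my game's actual code):

    /* Hypothetical xmalloc() that keeps a running total, so the failure
     * report can at least say how much memory had been handed out.
     * (And yes, the fprintf() here may itself fail, per the above.) */
    #include <stdio.h>
    #include <stdlib.h>

    static size_t total_allocated; /* bytes successfully allocated */

    void *xmalloc(size_t size)
    {
        void *p = malloc(size);
        if (p == NULL) {
            fprintf(stderr, "malloc(%zu) failed after %zu bytes total\n",
                    size, total_allocated);
            exit(1);
        }
        total_allocated += size;
        return p;
    }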
Indeed, it would probably make a lot of sense if malloc(), like the good OpenGL stacks, simply started swapping to temporary files when it could no longer allocate more memory. Even better if it told the application about this, so that the application could drop things that aren't really necessary (like cached data) and rely on the swapping to keep running while it informs the user about the problem and offers a choice: continue on at a snail's pace, or give up and terminate.
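An application can fake something like this for its own allocations by falling back to a memory-mapped temporary file, though that does nothing for the mallocs buried inside libraries, and anything allocated this way has to be freed with munmap() rather than free(). A rough sketch, assuming POSIX:

    /* Sketch: when malloc() fails, fall back to memory backed by an
     * unlinked temporary file, letting the OS page it out to disk.
     * Slow, but the program keeps running. */
    #include <stdlib.h>
    #include <sys/mman.h>
    #include <unistd.h>

    void *alloc_with_file_fallback(size_t size)
    {
        void *p = malloc(size);
        if (p != NULL)
            return p;

        char path[] = "/tmp/swapXXXXXX"; /* template for mkstemp() */
        int fd = mkstemp(path);
        if (fd < 0)
            return NULL;
        unlink(path);                     /* file vanishes once fd closes */
        if (ftruncate(fd, (off_t)size) != 0) {
            close(fd);
            return NULL;
        }
        p = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        close(fd);                        /* the mapping keeps it alive */
        return p == MAP_FAILED ? NULL : p;
    }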
The state of error handling is really rotten all the way to the core. It isn't just individual pieces of software that suck at it. Even people who want to do it right are kind of screwed.
...or, at least, it isn't enough that you'll even notice.
I've been working on a free game for a while. Everything it requires is inside the executable, and the download is still just over 1 MB. It's small enough that, when I try to get people to test the game for me, I have issues with people assuming it must be some sort of trojan simply because of its incredibly small size.
Originally I was distributing it like most software, with an executable accompanied by many other files, but this just created issues with people copying the executable but not the graphics files, or copying everything but putting it in the wrong directory (which is quite amazing considering that all anyone had to do was extract the ZIP file and run the executable from where it was extracted). So I included the files in the executable, which brought it up to 10 MB. Later I made it so that most of the graphics come from the server and are cached locally, so that different servers can have different textures, and so now it's only 1 MB.
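The embedding itself is nothing exotic: running something like xxd -i player.png turns a file into a C array that compiles straight into the executable, and the loader then decodes from memory instead of opening a path (the file and symbol names here are hypothetical):

    /* player_png.c -- the sort of thing `xxd -i player.png` emits
     * (truncated; the real array is the whole file, byte by byte). */
    unsigned char player_png[] = {
        0x89, 0x50, 0x4e, 0x47, 0x0d, 0x0a, 0x1a, 0x0a /* , ... */
    };
    unsigned int player_png_len = 8; /* really the full file size */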
Also, until a week ago, the Linux version was dynamically linked against GLFW. However, I began hearing reports of people not having GLFW available, so I put the necessary apt-get command on the web page so that people would know how to install it.
I suppose I could have installed a 64-bit system somewhere, but honestly, every time I install Linux I have to go through hell figuring out how to make it stop asking for my password every three minutes, as well as changing numerous other idiotic default settings.
So instead I just statically linked GLFW as well. I'm sure it added something to the size of the executable, but at present, 25% of the executable is a single PNG image used for the player avatar. While 1 MB may seem small for executables these days, there was a time when our whole computers ran in a single megabyte. Code simply doesn't require that much memory. When people complain about some software's memory usage, it isn't using all that memory due to a bloated code base; it's using it to store data (or memory leaks). To put it into perspective, the RGB data of a 1920x1080 display requires about 6 MB of memory to store -- the entire executable for my game would fit into that multiple times. Thus, the memory saved by dynamically linking is inconsequential.
There are still things that aren't statically linked, like the X11 libraries and OpenGL, but I suspect that statically linking those would cause more problems than it would solve. After all, if someone doesn't have the X11 or OpenGL libraries, they probably don't have X11 or OpenGL. (I'd also have to find their various licenses to see if I'm even allowed to statically link -- the LGPL, despite its reputation for being nearly public domain, won't let you statically link into closed-source applications, and so it is useless for closed-source software.)
As for the topic of this article, I also realize my game desperately needs an installer, as many people who download it just don't know what the hell to do with a ZIP file. But despite the fact that all that needs to be done is to toss the executable somewhere and make a link to it in the start menu, I can't seem to find any installer that is simpler than this overcomplicated bullshit. Creating a game while suffering from a sleep disorder is difficult enough without having to waste time and energy figuring out how to configure something so complex to do something so simple. On the rare occasions that my mind is clear enough to work through shit like that, I'd much rather use that clarity to work on more complex features for my game, not waste it on something that's far more complex than it needs to be. So I spent about two hours working through that page and had something that would install the game, but it was clear that I would have to read more documentation to make sure I didn't run into weird issues, like having it think that each upgrade was a new installation, so I've decided to put it off indefinitely. I suspect it would be less work to simply write documentation for the kids about how to save and extract ZIP files for every browser / OS combination than to figure out how to create a WiX installer.
And that's probably the main problem with software in general. Developers are so familiar with their own shit that, to them, the complex steps necessary to make it work seem trivial. They don't look at it from the point of view of someone who isn't intimately familiar with their software, or someone who doesn't have the time and energy to spend days reading the manual to become intimately familiar with it. I would think that creating an installer would be as simple as creating a list of "this file goes here and this file goes there" but for some reason it's far more complex than that.
The prosecution certainly has as much time as it wants. It can gather evidence indefinitely, gaining as much of a head-start against the defense as it would like, before filing charges. In essence, when the prosecutor files charges, that's saying "OK, I'm ready, let's go."
Allowing the defense to have as much time as it would like, without requiring the defendant to give up their right to a speedy trial, would only make sense. It's already impossible for the defense to have as much time to prepare a case as the prosecution has. There's no reason someone should have to give up their right not to sit in jail indefinitely waiting for a trial just because their defense needs a little more time to prepare a response, when the prosecution essentially had all the time in the world.
I doubt nature intended us to overeat to the point that we can no longer catch more meals, or outrun predators.
The video I linked to explains things rather well: Sugar: The Bitter Truth. (It's a 90-minute lecture by a doctor who treats pediatric obesity.)
To sum it up as a non-doctor who doesn't remember all the details: the fructose half of sugar goes straight to the liver, since only the liver can metabolize fructose. Most of it follows a metabolic pathway that turns it directly into fat. The rest follows a pathway that's a complete trainwreck, producing chemicals which increase cholesterol, raise blood pressure, interfere with the hormone leptin (which tells your brain how much fat your body has), and also tell your body to store the glucose it got from the sugar as fat as well, rather than use it for energy. The result is that you may be eating a lot of calories, but your body is simply storing them all as fat (and setting itself up for a lot of medical problems), so you're still hungry, because with so much fat storage going on you don't have enough energy left over to do anything.
One important factor to consider is that how much you eat in a single sitting is just your brain's estimate of how much food you need at the moment to maintain your metabolism.
Your body makes up for this the next day. If you consumed more energy than it thought you needed, you'll be less hungry; if you consumed less, you'll be more hungry.
That this might work for a single meal isn't much of a surprise. I'd expect it to fail for any long-term use, however.
To lose weight, one would do much better to simply stop eating Oreos. See Sugar: The Bitter Truth for more information. After simply cutting sugar from my diet, but otherwise eating as much as I wanted to, I lost 75 pounds over 6 months. The only difficult part is the first two weeks, over which it becomes painfully obvious that sugar is addictive since, no matter how much you eat, you're still hungry until you eat something with sugar in it. Once you break that addiction, however, losing weight isn't hard at all. So just stock up on jalapeno poppers and other tasty sugar-free foods and over-consume them for the first two weeks so that you aren't tempted to consume any sugar. Once the addiction is broken, your brain will start regulating your appetite in response to your leptin levels exactly the way nature intended, and you'll just naturally no longer want to overeat.
I once made something similar, by attaching my telephone line to my sound card input and decoding the Caller ID information in software.
Rather than play the three tones, however, I simply attached a relay to my parallel port so that the computer could pick up and then hang up the line.
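Driving the relay is only a dozen lines on Linux; a sketch assuming the legacy 0x378 port address and the relay's driver wired to data bit D0 (the interface circuit itself is omitted, and ioperm() needs root):

    /* Sketch: pulse a relay on data bit D0 of a parallel port at the
     * legacy 0x378 base address: pick up the line, then hang up. */
    #include <stdio.h>
    #include <sys/io.h>
    #include <unistd.h>

    #define LPT_BASE 0x378 /* first parallel port, typically */

    int main(void)
    {
        if (ioperm(LPT_BASE, 1, 1) != 0) { /* request port access */
            perror("ioperm");
            return 1;
        }
        outb(0x01, LPT_BASE); /* energize relay: take line off-hook */
        sleep(1);             /* hold long enough to kill the call */
        outb(0x00, LPT_BASE); /* release relay: hang up */
        return 0;
    }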
That actually makes them stop calling as well. I guess they're smart enough to realize they're just wasting their time when they get hung up on every time, but not smart enough to realize they're wasting their time when you've ignored the previous hundred messages they've left on your answering machine.
You can also combine it with a phone that can be configured not to ring until it has Caller ID information (by setting distinctive rings for different callers, which means no ring at first, since the Caller ID info hasn't arrived yet), and you won't even hear the phone ring when the morons call.
My problem isn't that it's Google, but that it's anything at all. (And I already use DuckDuckGo, BTW.)
Pasting into the browser window isn't a good enough reason to send that data over the internet. If I paste into the URL bar, then perhaps parse it as a URL. If I paste into the search bar, then send it as a search query. However, Opera goes so far as to take data pasted anywhere it wouldn't otherwise do something with it and send it to Google as a search query. I sent them a bug report about this years ago, suggesting that they only do this for text pasted into the URL bar, but apparently they didn't think that was a very good idea.
I did find a work-around: if I delete all search providers from Opera except for "find in page," it will no longer do this. However, it's still a terrible default to have, regardless of which search provider it sends the data to, particularly considering how easy it is to trigger accidentally -- simply by middle-clicking on something you thought was a link but which actually isn't, or by missing a link by a pixel or two. Like I said, it'd make far more sense if I actually had to paste into the URL bar or the search bar to trigger this behavior -- you know, if I had to indicate that I wanted it to do this rather than it just assuming it knows what I want.
Opera has a similar nasty bug... If you middle-click almost anywhere within the browser window, it likes to take the last bit of text you highlighted with your mouse and send it to Google. It's wonderful when you're simply trying to middle-click a link to open it in a new tab, but you're off by a pixel and so instead Opera sends some secret text you didn't want anyone else to see to Google so that it can store it forever in its database of every search query ever submitted.
You can pray that a 100 ohm, 10% tolerance resistor is right at 100 ohms, and yeah, probably that's about what it'll be. Me, I'll measure the thing and I'll *know* what it is.
Obviously you've never done this. Manufacturers measure their resistors and sell the ones that come out closest to nominal at tighter tolerances, so what's left in the 10% bin tends to sit toward the edges of its band. Thus, if you did find one that measured exactly 100 ohms, suspecting holy intervention may be appropriate.
If you don't believe ALSA is just too complicated, look at this "simple" example:
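(The example itself isn't reproduced here, but it boils down to the usual hw_params dance. Here's a trimmed sketch that just plays a second of silence, with nearly all of the mandatory error checking omitted; the real thing has to check every one of these calls and handle underrun recovery on top:)

    /* Trimmed ALSA playback sketch: one second of stereo silence.
     * Build with -lasound. Every call below returns an error code
     * that real code must check. */
    #include <alsa/asoundlib.h>

    int main(void)
    {
        snd_pcm_t *pcm;
        snd_pcm_hw_params_t *hw;
        unsigned int rate = 44100;
        static short buf[44100 * 2]; /* zero-initialized: silence */

        snd_pcm_open(&pcm, "default", SND_PCM_STREAM_PLAYBACK, 0);
        snd_pcm_hw_params_alloca(&hw);
        snd_pcm_hw_params_any(pcm, hw);
        snd_pcm_hw_params_set_access(pcm, hw, SND_PCM_ACCESS_RW_INTERLEAVED);
        snd_pcm_hw_params_set_format(pcm, hw, SND_PCM_FORMAT_S16_LE);
        snd_pcm_hw_params_set_channels(pcm, hw, 2);
        snd_pcm_hw_params_set_rate_near(pcm, hw, &rate, 0);
        snd_pcm_hw_params(pcm, hw);

        snd_pcm_writei(pcm, buf, 44100); /* count is frames, not bytes */
        snd_pcm_drain(pcm);
        snd_pcm_close(pcm);
        return 0;
    }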
I can hear people now saying "so what if it's complex, people can always write wrapper libraries to create simpler interfaces."
There's always someone who says "sound has worked just fine for years!"
Obviously it works well for some people. It probably works well for the developers, since they can fix their own problems, and it probably works for some people who coincidentally have similar hardware...
It seems my copy of Linux Mint has no OSS support whatsoever by default. I was trying to play some NSF files the other day, but my NSF player wouldn't work, as there's no /dev/dsp for it to access. So I tried to find another player. I couldn't find one that worked. Eventually I had to just use my Windows laptop to play the files.
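For contrast, this is roughly all an OSS-era program needs, which is presumably why so many old programs like that NSF player talk to /dev/dsp directly (a sketch of the classic API, not the player's actual code):

    /* Sketch of classic OSS playback: the whole setup is an open()
     * plus three ioctl()s, which is why old programs break when
     * /dev/dsp simply doesn't exist. */
    #include <fcntl.h>
    #include <sys/ioctl.h>
    #include <sys/soundcard.h>
    #include <unistd.h>

    int main(void)
    {
        int fmt = AFMT_S16_LE, channels = 1, rate = 44100;
        static short buf[44100];             /* one second of silence */
        int fd = open("/dev/dsp", O_WRONLY); /* the missing device */

        if (fd < 0)
            return 1;
        ioctl(fd, SNDCTL_DSP_SETFMT, &fmt);
        ioctl(fd, SNDCTL_DSP_CHANNELS, &channels);
        ioctl(fd, SNDCTL_DSP_SPEED, &rate);

        write(fd, buf, sizeof buf);
        close(fd);
        return 0;
    }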
Honestly, Linux audio would be a lot less painful if they'd just start doing audio mixing in kernel space instead of treating the idea like it's some sort of sin (for fuck's sake, it isn't like Linux is a microkernel), and simply create an API (or better yet, an ABI) that doesn't require so much effort to learn. As it is, the few people who successfully learn it immediately think "I should create a wrapper for this," while the ones who find it too complex simply use one of the many (typically broken) wrappers.
They kind of want power handed to them just because they have good ideas, rather than to win it by convincing everyone that their ideas are the best.
Take legalization of marijuana for example. One might start a political party for that goal, put a candidate on the ballot, and lose.
As such, as soon as these people create a new political party, they've already kind of lost. You don't create change by taking your minority views, putting them on a ballot, and hoping that somehow people vote for them despite them being minority views.
I'll be the first to say that we need to switch to proportional representation and also use some form of Condorcet voting, but I don't believe that it's the two party system that is holding back political progress. Sure, it isn't helping, but eliminating it won't solve the real problem. I think our biggest problem is that people who are smart enough to have good ideas, and smart enough to evolve their ideas when they hear valid criticism, are also smart enough to realize just how annoying it is to listen to people talk about things they hardly know anything about, and so they keep their mouths shut. Thus stupid ideas have an evolutionary advantage.
Judging from what I read in your link, it sounds like they just put everyone in a single primary (regardless of party affiliation) and the top two winners of that primary go on the ballot for the main election. I don't see how that would make things harder for third-party candidates. Indeed, getting enough supporters to win a primary is probably easier than getting enough supporters to win a main election, since fewer people vote in primaries, and once on the main ballot, any candidate is likely to get about 50% of the vote simply because they aren't affiliated with whichever of the two main parties a given voter doesn't like.
For example, a Republican, a Democrat, and a Libertarian may be in the primary. If the Republican and Democrat are the top two, then only they will be on the ballot, but if the Libertarian can't beat either of them in the primary, they're probably not going to beat both of them in the main election, so it doesn't matter. However, if the Libertarian did beat one in the primary, then he'll be on the ballot with the other in the main election. In that case, someone may see a Republican and a Libertarian on the ballot and vote for the Libertarian simply because they hate Republicans, or see a Democrat and a Libertarian and vote Libertarian simply because they hate Democrats. This could easily give any third-party candidate about 50% of the vote, allowing their actual supporters to put them over the top.
Thus I fail to see how top-two primaries, as described in your link, would do anything but help third-party candidates.
I have one of these scripts on my web site. It isn't there to track whether people click the links; it's to let me link to shady web sites without Google knowing that I'm linking to shady web sites and penalizing me for doing so. (They are useful for discussion sometimes.) The script itself is blocked by robots.txt, so Google never sees that there's a redirect pointing to the site, since it never requests the script, whereas simply using a nofollow attribute would still let Google know the link exists, even if it doesn't follow it.
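The robots.txt side of it is a one-line disallow; assuming the script lived at /goto.php (a hypothetical name), it would look something like this:

    User-agent: *
    Disallow: /goto.php

Since the rule is a prefix match, it also covers /goto.php?url=..., so the crawler never requests the script and never learns where any of the redirects lead.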