
Comment: I bought some of those sets a few years ago... (Score 1) 335

by Sanians (#46778579) Attached to: Kids Can Swipe a Screen But Can't Use LEGOs

...which is why I know what to look for on the box. The web site doesn't mention what blocks you get, but some kind person listed them in their review:

This is what you get for the money: 132 1x1, 224 1x2 (that is 356 of the 650 pieces), 136 2x2, 36 1x3, 36 2x3, 26 1x4, 39 2x4, 10 1x6, 3 2x6, 6 1x8, and 2 2x8. Nice mix of colors. White, blue, red, and yellow were the only colors with pieces larger than 2x4. Wish LEGO would sell more larger pieces in their sets.

So you pay $30 for a set and you can probably throw away 100 of the pieces because you'll never find a use for that many 1x1 blocks. Just imagine what you end up with if you buy multiple sets because for some reason you need more than 39 pieces of 2x4. Buy ten sets so that you have a respectable collection of 390 pieces of 2x4 and you've now got 1,320 1x1 blocks. What the hell are you going to do with all of them? Maybe you can melt them down and turn them into the 165 pieces of 2x4 they should have been to begin with. ...or, no, most likely you'll end up using them as kitty litter.

Then there's all the 1x2 in the set. You can build a wall with the 1x2 that is almost three times the size of a wall built with the 2x4 included in the set, which explains why I remember suddenly developing a fondness for walls built from 1x2 after I bought my sets.

Last time I looked it was rather easy to purchase exactly the blocks you want, and there didn't seem to be any mark-up for the custom sets vs. an off-the-shelf set. I guess their machines that put the sets together have access to all of the blocks and so a custom set isn't a big deal. So if anyone is going to buy Lego, I'd recommend they go that route since, while it is still expensive, at least you'll only pay for what you can actually use.

Comment: Smaller Blocks (Score 1) 335

by Sanians (#46778121) Attached to: Kids Can Swipe a Screen But Can't Use LEGOs

Mega-Block is a parallel to Minecraft. The size of the blocks irritates me. I've seen some more recent games in development that allow multiple sizes of blocks. Those will be the better system. Minecraft is "fine," but the ball has been dropped and is being picked up slowly by other indie developers.

Being one of those "indie developers," I feel the need to spam about my own effort at smaller blocks, Multiplayer Map Editor. Just prepare yourself to be unimpressed. The game has existed for years yet basically has only one active player. ...and to be honest, I have no idea how he keeps himself amused. I feel like I'm punishing myself every time I start it up just to see if anything is happening.

...but, it does work in Linux. I think that counts for something around here, although that something may be immediately wiped away by its closed-source nature.

More to the point of the discussion, those smaller blocks aren't all you'd imagine they are. In my experience with this game, if I were to make suggestions to someone thinking of writing a similar game, I'd suggest they stick with the standard size, or at most cut it in half. I'm not sure I'd do the same if I were to start over with the game, but I'd definitely have to consider taking my own advice on the issue as it would be kind of silly to ignore what I learned the first time.

First of all, essentially no one actually wants smaller blocks. Even those who think they want smaller blocks don't want smaller blocks. If you actually want the ability to add more detail, what you're really looking for is something like Blender which, despite the steep learning curve, is perfectly usable if you're willing to spend a few weeks watching tutorial videos. My nephew used to play my game, and liked it since it was detailed enough that he could build complex objects like tanks, and he'd actually look up measurements on the internet to make them as realistic as he could. Then I taught him how to use Blender. Building with blocks just isn't good enough for him anymore. I don't blame him since I also never built anything in my game after I learned to use Blender. Using Blender is simply more rewarding when you look at what you get vs. the time you put into it.

So your players consist entirely of people who don't want to learn Blender. ...but that isn't all that they don't want to do. They're also not fond of measuring anything. So their ceiling heights are entirely random because you can't just eye-ball 24 blocks, you have to use the measuring info box that the cuboid tool provides, but first you have to know that ceilings should be 24 blocks tall. So how do you know that? Well, I provided a tiny example house, with proper measurements for ceiling height, door height, door width, chair height, etc., but no one takes the hints. They just build whatever they want. Then it looks awful and they lose interest and they go back to playing Minecraft. ...but again, I can't blame them. I use the measuring tools and what I build looks awful too. I tried building a full-size house and only got half way through before deciding I really didn't give a shit.

The only time I really built anything that looked interesting was early in the game's development when I needed a properly-sized house to verify that everything was sane (you can't really judge your speed or height above ground when you're looking at a flat surface and nothing else) and so I found blueprints for a house on the internet and drew a grid over them at the scale of the blocks in the game. That turned out nice and was rather impressive. It's been all downhill since then, particularly because I've never since found a freely available blueprint for a house on the internet.

The performance implications of smaller blocks also cannot be ignored. With 10 cm blocks, you're not looking at ten times as many blocks, you need 1000 times as many blocks for the same volume. This creates issues everywhere. The most obvious is that your maps are smaller, as you need 1000 bytes for every cubic meter instead of just 1 byte. ...but that isn't all. For example, Minecraft is able to implement its shading by allowing light to travel only 16 blocks. With 10 cm blocks, the distance from the floor to the ceiling in a normal house is 24 blocks, so the light wouldn't even reach the floor if it traveled only 16 blocks. So in my game the light on maps with 10 cm blocks travels 64 blocks, which requires 4 million bytes of map data to be examined each time a chunk is compiled, vs. only 0.1 million for Minecraft to do the same thing with its light that travels only 16 blocks. It's such an expensive calculation that, by default, shading is enabled only on maps with blocks 20 cm or larger since in that case it can have the same effect when it travels only 32 blocks, and so it isn't so painfully slow.

Overall, I'm left to think that Minecraft actually did things the correct way: Make the blocks as large as you can get away with, then make up for the deficiencies of large blocks by having some objects you can place that are smaller than blocks, e.g. stairs, fences, plants, etc. The only real mistake they made is writing the thing in Java. Given the performance I've seen from my own game, which is written in C, where viewing everything at a radius of 1024 blocks is no big deal, it just seems insane that I get into Minecraft and can't use the maximum setting of 16 chunks (256 blocks) without it tying itself up in knots after a while.

Comment: What the fuck? (Score 1) 284

by Sanians (#46765107) Attached to: OpenBSD Team Cleaning Up OpenSSL

If your language runtime has a bug instead, it's much more likely to be a very indirect one, because now not only do you likely have to cause a specific behavior in the program itself, but that behavior has to trip up the runtime in a way that causes that bug to lead to something bad.

Yeah and? Has that stopped all the exploits of the Flash runtime and the Sun/Oracle JVM? Nope. In fact, those two are among the most exploited pieces of userspace software on the OS.

How the fuck can everyone misunderstand this? Every time this topic comes up, almost everyone fails to get it.

If you were writing programs in Flash or Java and those programs received data from the internet, those programs would be less exploitable because they were written in a high-level language. However, that's not what people usually do with Flash and Java. They use Flash and Java to run programs they've downloaded from the internet, which means they're left to the security of Flash and Java which are written in C.

Honestly, I don't know why I'm even trying. I'm 100% certain that everyone is going to entirely misunderstand what I've just said.

Look, people. Try to understand: These are two different concepts:

1. It's hard to exploit programs written in high-level languages like Flash and Java.
2. It's easy to exploit high-level language interpreters written in low-level languages like C.

I shall now prepare to be down-modded as a troll, as that seems to be the usual response to someone saying something that everyone else is so incapable of understanding that they assume they must simply be trying to cause an argument.

Comment: sizeof() is ambiguous (Score 1) 171

by Sanians (#46754983) Attached to: First Phase of TrueCrypt Audit Turns Up No Backdoors

was expecting something esoteric but turned out to be really straightforward

I think you failed to notice that the page talks about two separate bugs. In the first one, the memset() really is completely removed by optimization.

the type of error you make at 2am, taking the size of the pointer instead of the actual size of the buffer

I'd argue that's an error one might make any time of the day. The sizeof() operator is ambiguous. Consider the following example:

#include <stdio.h>
int main(void) {
    char a[100];
    char *b = a;
    printf("address of a is %p\n", (void *)a);
    printf("address of b is %p\n", (void *)b);
    printf("size of a is %zu\n", sizeof(a));
    printf("size of b is %zu\n", sizeof(b));
    return 0;
}

One might assume that, since both "a" and "b" function identically (e.g., both "a[7] = 0" and "b[7] = 0" are valid, as are both "strlen(a)" and "strlen(b)"), then using the sizeof() operator on each of them should return similar results. However, that isn't the case, as sizeof(a) gives us the size of an array while sizeof(b) gives us the size of a pointer.

It would make more sense if sizeof(a[]) returned the size of the array while sizeof(a) and sizeof(b) both returned the size of a pointer. As it presently works, sizeof() is a somewhat scary operator to use. I usually end up using a printf() to verify that it is giving me the size of what I want the size of rather than assume I know what it is doing.

Comment: It Assumes Bounds Checking without Implementing It (Score 2) 171

by Sanians (#46754647) Attached to: First Phase of TrueCrypt Audit Turns Up No Backdoors

WTF?!?

WTF indeed.

There seems to be a major trend towards making compilers create code that is as different as possible from what the programmer wrote without being so different that the programmer actually notices. One might assume it's a secret NSA plot to defeat security measures in all software everywhere. You know, if one was incredibly paranoid, that is.

It's hard to say whether this is justified behavior. As an example, consider this code from a link an AC posted:

int
crypto_pk_private_sign_digest(....)
{
    char digest[DIGEST_LEN];
    ....
    memset(digest, 0, sizeof(digest));
    return r;
}

Exploit mitigation code like this is a case of writing code which we expect to never have any effect, just in case we're wrong and it does have an effect. Then the compiler comes along and decides for itself that the code we wrote will never have any effect and removes it. It's kind of hard to blame it for noticing the uselessness of the operation when we ourselves expected the code to likely have no effect when we wrote it, but then, the whole reason we wrote it is because we thought we might be wrong. Should the compiler then assume that we might be wrong as well, and that we might access that memory using a different pointer?

Does it make sense to compile with optimization enabled when, by including things like the memset() call to clear memory we're finished using, we clearly have goals other than optimization?

The article mentions the fix being the use of a different function which won't be optimized away, but I wonder if even that is a legitimate fix. Our "digest" array is just another variable that the compiler is free to do whatever it wants with in the name of optimization. If it will make the program run faster, it's free to make two copies of it. Then our new never-optimized-away function will end up erasing only one copy of the variable.

So the problem here isn't the use of memset() rather than some other function. The problem is that we're asking the compiler to create code that doesn't match what we've written. It should be no surprise then when it goes ahead and does that. Thus, I don't think it's correct to claim that the error here is the failure to use the correct function to clear the memory. I think the error is in asking the compiler to generate code that isn't identical to the source code.

The core of the problem is that C isn't a language that allows us to clearly tell the compiler exactly what we want to happen. Without bounds checking on pointer use, every pointer is effectively a pointer to all memory. Thus, when a pointer falls out of scope, it doesn't mean anything. That memory can still be accessed via any other pointer anywhere in the program. If C enforced bounds checking, such that accessing the data in "digest" via any other pointer was impossible, then the compiler could safely work under the assumption that once "digest" falls out of scope, the data it points to will never be accessed again, and thus removing the memset() call would be a safe thing to do since it truly would have no effect.

It really seems ridiculous when you think about it. Compilers assume that bounds on pointers will be respected, yet make no attempt whatsoever to enforce those bounds, essentially guaranteeing that they will not be respected since programmers are imperfect.

Consider what the compiler will do when it encounters code like this:

int a[4];
int b[4];
int c[4];
b[-1] = 0;

Despite the obvious error in the above code, GCC will compile it without error. It will then perform optimizations that assume that neither a[] nor c[] has been affected by the assignment to b[]. It seems rather ridiculous that anyone is expected to create secure software in such an environment. Either the compiler should enforce bounds checking, or it should assume that any pointer operation can affect any variable.

C could really use an extension for bounds checking. Even if it is purely optional, like a new type of pointer that includes a limit. In many cases the compiler could enforce the bounds at compile time simply by analyzing the program as it currently does when optimizing and so no additional machine code would be generated. In other cases, I think the security benefits would be worth the loss in efficiency. With bounds checking, the compiler could safely optimize away all of those memset() calls and other things programmers do to mitigate security issues since it would then actually know with certainty that the memory being cleared will never be accessed after the variable falls out of scope.

As things are now, we're currently benefiting from those optimizations despite not having modified the language such that the compiler can truly know that those optimizations are safe. Even without modifications to the language, the compiler should at least point out violations in how it assumes the programmer uses pointers, like the assignment in the code above. To optimize under the assumption that the programmer doesn't do such things while ignoring obvious violations of that assumption seems rather negligent.

Comment: Re:Pre-Existing vs. Post-Existing (Score 1) 721

by Sanians (#46732597) Attached to: Can the ObamaCare Enrollment Numbers Be Believed?

Did I get cancer while covered by the new, one day old policy? Clearly not.

It would probably have to be defined as such, for the idea to be workable at all. ...and it wouldn't be all that unfair. Companies would simply be inclined to have you checked out before they sign a policy with you. Presumably you wouldn't cancel your old insurance before getting new insurance, and so when you went for this check-up, the cancer would be discovered and your old insurance company would be responsible for paying for the treatment. Occasionally it wouldn't be detected, but the same would happen when customers were leaving them for another company, and so it would all balance out. Indeed, the more thorough the checkup, the more profitable the insurance company, and so switching insurance policies might become the best thing you can do for your health.

Sure, it wouldn't be a perfect solution, but I'm already presuming a perfect solution is unacceptable. So all that matters is that it is somehow workable and that it makes more sense than what we have now and what we had before.

Comment: Obfuscated Variable Names (Score 3, Informative) 149

by Sanians (#46732513) Attached to: NSA Allegedly Exploited Heartbleed

I challenge anybody to review it and find (or notice) the bug.

It's actually kind of easy to see. I just use the same trick I use when trying to read almost anyone's code: I assume that some jackass obfuscated all of his variable names and so I rename them as I figure out what they actually represent so that the new names actually describe the variable. Once that's complete, I'm left with "memcpy(pointer_to_the_response_packet_we_are_constructing, pointer_to_some_bytes_in_the_packet_we_received, some_number_we_read_out_of_the_packet_we_received)" and it immediately raises a red flag.

...but more seriously, the code in that check-in is why I hate to let anyone work on any programming projects with me. Worthless variable names create code that's as worthless as English text that refers to everything as "that stuff" and "those things." It's just a step away from choosing purposefully obfuscated variable names. If the variable is named "payload" then not only should it be the actual payload data, rather than just its size, but it should also be the only payload in existence such that no distinction needs to be made between "received_payload" and "payload_to_be_sent." ...and then there's the single-letter variables, some of which are incremented at times so that they don't even consistently refer to the same thing over time, creating a variable that not only doesn't indicate what it refers to, but one which actually might refer to anything.

I've read that the reason there's a packet length sent from the remote host is because this data is sent with random padding bytes added to each packet and so the packets need to indicate how much of the data is actually valid. So why isn't the packet size figured out closer to when the data first enters the program? First thing I would do when receiving a packet is read out this packet size, verify that the actual size of received packet is large enough to contain it, and toss the packet if it wasn't large enough since it was obviously corrupted (or malicious). Then I'd write the size into a structure for the packet's meta-data, along with any other data we find in every packet (like a packet type number), and every other part of the entire program would read the data from that structure. That's how you do these things. Everything received is "tainted" and, once you verify it isn't poisonous, you move it out into a data structure that the rest of your program trusts. Otherwise you have every piece of code that needs that data having to verify it every time it accesses it which just creates enormous opportunity for error.

So when you come across code like this which pulls data out of the packet and just uses it, it isn't just wrong, but it doesn't even resemble anything that might be correct. Thus, the poor variable naming just might be why this wasn't noticed. Since the data pulled out of the packet is stored into a variable named "payload" it's easy to imagine it's simply payload data, which doesn't have to be checked as it won't ever be used for anything other than being returned to the remote host, and so the absence of code that checks the validity of that data might be expected. If it were named even something as ambiguous as "payload_size" then you have to immediately wonder if it's a size that needs to be checked against anything when you see it being pulled out of a buffer of untrusted data. ...but then, you don't see that either, since the pointer is named "p" which doesn't scream "this is untrusted data" and, even if you look above to see that "p" was assigned from "&s->s3->rrec.data[0]" you're still left wondering what the fuck that might be. Maybe "rrec" refers to some sort of received record? Fuck, who knows.

I mean, right after the memcpy I see "RAND_pseudo_bytes(p, padding)." Is this even putting the padding bytes in the correct place? Well, "p" could be a pointer to anything so it's pretty easy to assume it could be correct. Hell, with those variable names, we could assume that "padding" is the pointer and "p" is the size if we wanted to, in which case it definitely looks like it's writing them to the correct place. ...but, regardless, how did this error happen? To write the bytes in the correct place requires not only using the correct pointer, but also calculating the correct offset, but not only is "p" the wrong pointer, but there's no attempt to increment any pointer by the size of the payload bytes. Maybe that omission just didn't stand out due to the s2n and n2s macros apparently (guessing based on how they're used) incrementing the pointer passed to them, creating a situation where sometimes pointers have to be incremented and sometimes they don't, which is just more sloppiness. You don't get error-free code by ensuring that every time you do something you have to do it differently.

...but don't think I'm blaming it all on the programmer to defend C. C's ways just aren't good enough anymore. If you want a good challenge for anyone, challenge them to write the same software in any high-level language and screw it up the same way. It just can't happen. They can type "substr(outgoing_packet, 2) = substr(incoming_packet, 2, whatever)" and pull the value for "whatever" from wherever they like, but they'll never manage to construct a packet that contains anything that wasn't in the packet they were sent. ...but that's never going to happen as long as we have people valuing speed above security. It also doesn't help that everyone seems incapable of designing a programming language that isn't completely asinine. No one's going to choose to program in a better language if attempting to do so just makes them want to shoot themselves.

I mean, I fucking hate C. I just use it because I hate everything else even more.

Comment: Some clarification... (Score 1) 721

by Sanians (#46719697) Attached to: Can the ObamaCare Enrollment Numbers Be Believed?

that still ignores the question of whether we want a future where adults with chronic illness are unable to get insurance because their parents didn't have coverage on them before they were born, but the only solution to that is single-payer

I didn't explain that clearly. What I meant to say was, like, say someone's born with cerebral palsy, but their parents didn't have coverage for them at the time. That's a pre-existing condition for that person's entire life, even though that person couldn't have planned ahead and obtained coverage for it before they were even born. So my post-existing coverage idea doesn't help them out at all, nor does it help with any childhood illness which the parents similarly didn't have coverage for. I suppose some people are OK with allowing children to suffer just because their parents are idiots, but it doesn't sit well with a lot of people who realize that they could just as easily have found themselves in the same position. When you allow people to suffer because of situations outside of their control, you're saying it's OK for people to allow you to suffer for reasons outside of your control. A lot of people think it's a good trade to help those people out if it means they'll similarly be helped out if they're in need. The big issue here is that this is being argued by adults, who already know they're not one of those unlucky children, and so they don't care that it could have happened to them because it didn't happen to them and there's no risk that it's going to.

With that in mind, since the whole point of insurance is that people pay into it before the risk is realized, it makes sense that people should be covered even before they're born since there's risk from the moment of conception. ...but an individual can't control what coverage they have until they're old enough to work and buy that coverage themselves. It's not really fair to just tell someone "you're fucked because your parents suck," especially when it's a problem that disappears when you acknowledge that no sane person really wants to not have health insurance, and so it makes sense to just provide it for everyone from the moment of conception and let them pay for it later in life via taxes, much like how our children pay for child services keeping them safe from abusive parents by paying for it for other people's children when they're old enough to pay taxes.

Comment: Pre-Existing vs. Post-Existing (Score 1) 721

by Sanians (#46719469) Attached to: Can the ObamaCare Enrollment Numbers Be Believed?

Apparently the possibility that people might take advantage of the "no pre-existing condition" clause of the ACA to get insurance when something catastrophic happens disturbs the insurance companies' bottom line deeply.

I'm still in disbelief that an idea that bad was able to become law. Especially when there's a similar idea that makes far more sense and solves many of the same problems: coverage for post-existing conditions.

We'd always hear stories of someone getting cancer and being unable to work during their treatment and therefore unable to pay the premiums for their insurance. Why the hell did they have to pay those premiums? They got cancer while they were paying for coverage, so that coverage should cover the treatment of that cancer no matter how long it takes regardless of whether they continue to pay or not.

Strangely, I never heard anyone debate that idea. Indeed, I've had essentially no success getting anyone on the internet to even understand what I'm talking about. I've made analogies to homeowners insurance and your house burning down, but people seem unable to comprehend that you buy insurance to cover the cost of illness, not the cost of doctor bills, and so once the illness occurs, the insurance company should pay up, even if you decide to switch to a different insurer during your treatment. ...and, naturally, the new insurer then wouldn't care about your pre-existing condition because your previous insurer, the one you were paying when you acquired the illness, would be the one paying for its treatment.

...but, no. The idea is apparently insane, and only makes sense to me because I am insane.

Not that I don't think single payer makes more sense, but if we're going to attempt a free market solution, we should at least attempt one that actually makes sense. Require coverage for post-existing conditions and require up-front pricing for all medical procedures. The free market's functionality is caused by consumers shopping around for the best price. If they aren't doing that, it's obviously going to fail to control prices. For that reason, flat copays (you pay $20 no matter what the visit costs) must also be illegal, and instead copays must always be a percentage of the final bill, to encourage consumers to continue to look for lower prices even if they aren't paying most of the bill. If you lower the cost of insurance to a point where people can afford it, and regulate it well enough that people know that they'll actually get what they think they're paying for (since otherwise the simple desire to not get screwed will deter people from buying insurance), all it would take would be a few well-designed ad campaigns to the effect of "even 18-year-olds can get cancer" to get everyone to buy some insurance.

Of course, that still ignores the question of whether we want a future where adults with chronic illness are unable to get insurance because their parents didn't have coverage on them before they were born, but the only solution to that is single-payer, and we've ruled that out for some fucking reason. Apparently there are people out there who would honestly rather just keep their money and just die if they get cancer, I guess, much like how there are people who'd rather not pay for the police and just get shot if someone doesn't like them. I guess we can't infringe on the rights of the insane by making them pay for things they don't want but would probably use anyway if they needed to.

Comment: I suspect people have money to waste. (Score 1) 641

by Sanians (#46699897) Attached to: Meet the Diehards Who Refuse To Move On From Windows XP

Upgrading to a new computer with Windows 7 may be the best idea overall, but it doesn't mean it's the best solution for everyone.

My mother has an old computer with XP on it. We were at the store one day and she asked if she should look at getting a new computer since XP support was ending. She really doesn't have the money for a new computer. I told her we'd just keep it as it is, and if in the future it becomes unusable, we'll just install Linux on it and see what she thinks of that. I suspect she'd be just fine with it. She does have a few games she's bought that require Windows, but given the choice between paying $400 to keep playing those games, or paying nothing and being limited to games on the internet, I suspect she'd rather keep her $400 and just play the games that are available on Facebook.

Indeed, waiting to see what happens before you spend your money is usually a wise thing to do. For example, a friend once told me of his plans to replace the tires on his car. I told him he shouldn't replace them, but instead wait until one of them goes flat, because "planning ahead" often just wastes money. He and another friend of mine insisted I was insane and that the tread on the tires was to the point that the tires needed to be replaced. So he replaced them, and a month later the car broke down and he never drove it again. Even if that hadn't happened, by replacing the old tires, he was throwing away a portion of their value. If they were 90% of the way to the point of being unusable, he was throwing away 10% of their value just because they were almost to the point of needing to be replaced and he wanted to think ahead and replace them now.

The same could easily be true with the end-of-life of Windows XP. Maybe it's doomed to be infested with malware within a year, but it might also be perfectly usable a year from now. ...and even if all anyone gets out of it is another year of use, they'll be able to get a better computer for their money a year from now than they can get right now. Who knows, maybe a year from now Microsoft will have given us our start menu back in Windows 8 and so they can get Windows 8 instead of Windows 7. If nothing else, waiting a year to buy a computer means that the computer you buy will likely still be working a year after one you might buy today ceases to be useful. The simple fact is that not upgrading makes more sense than upgrading because it doesn't make sense to upgrade until you have to.

Comment: Code Examples (Score 2) 303

by Sanians (#46699571) Attached to: OpenSSL Bug Allows Attackers To Read Memory In 64k Chunks
"In other news, we can check everything quickly by not checking everything!"

His comment is more interesting than your reply makes it out to be.

I've looked at a few disassemblies of compiled C code, and GCC is already pretty good at understanding how variables interact within for() loops. If you write two nested loops, one for each index of a two-dimensional array, it'll output code that simply increments a pointer through the array and checks it against a limit to determine when the loop is complete, such that the pair of for loops you wrote don't actually exist in the final code. It'll even switch the order of your nested loops if necessary to make this possible. If you perform a calculation inside both loops that can be moved outside of one of them, it'll spot that and move the calculation outside the inner loop.

Write an equation in an unnecessarily verbose way that uses multiple unnecessary temporary variables? It'll notice, and your temporary variables won't exist in the final code. Use the same sub-expression in multiple equations? It'll notice, calculate that sub-expression once, and store it in a temporary.

The optimizer isn't smart enough to turn your O(n^2) algorithm into an O(n*log(n)) algorithm, and indeed what it does is kind of basic, but it's still able to figure out quite a few of the more obvious optimizations, and in doing so it frees the programmer from worrying about these trivial things, which lets them focus their attention on the things that computers aren't smart enough to do for us.

Thus, when it comes to code like this:

int a[100][100];
for (int y = 0; y < 256; y++) {
    for (int x = 0; x < 256; x++) {
        a[x][y] = x * y;
    }
}

GCC wouldn't even think about outputting code to check the array bounds. It would check them as it compiles and decide whether to output the code without bounds checking, or, as in the case above, report that this code will always exceed the array bounds.

GCC already looks at code in enough detail that, if you call a small function that exists in the same source file, it'll notice its small size and inline it. I once had a pair of functions that each called the other recursively, and it chose to inline one of them within the other. So it would likely notice that the array bounds will be just fine even if you put your loop in one function and access the array in another, as long as both functions exist within the same source file so that it can examine them together. For the most part, then, it would only generate bounds checking where it's actually necessary: where the array index comes from outside the program, or where it comes from within the program but through a path complex enough that even a human couldn't be 100% certain the index will always be within bounds.

So, most of the time, the compiler would correctly determine whether or not bounds checking is even necessary. Sure, sometimes it might be wrong, but humans get this wrong as well. The difference is that when the compiler gets it wrong, it'll err on the side of performing the check even when it's unnecessary, whereas humans tend to often get it wrong by leaving out the check even when it is necessary, or by attempting to perform the check but writing the code incorrectly so that the check fails.

As long as our software is doomed to be screwed up in some manner, I'd prefer it be screwed up by being a bit slower than necessary, rather than by exposing data on my computer to any web site I visit.

Comment: It's sad no one agrees with this. (Score 2) 303

by Sanians (#46692147) Attached to: OpenSSL Bug Allows Attackers To Read Memory In 64k Chunks

I don't understand why this is controversial. People consider it a bad idea to roll your own encryption code. Why isn't it a bad idea to roll your own bounds checking? Because it's easy and you won't screw it up? I'm sure people writing their own hash functions feel the same way.

Do people seriously prioritize speed over security? Of all the things my computer might squander its gigahertz on, checking bounds on things that will never actually be out of bounds isn't something I can object to if it makes the software running on my computer more secure. It's kind of like how, when doing math problems on a test, you check your work to make sure you got the right answer. Technically it's a waste of time if you know what you're doing, but if you're concerned about your grade, you do it anyway just in case.

Besides, if you want to whine about unnecessary wasting of CPU cycles, just look at an assembly dump of floating-point equations being solved in C as compiled by GCC. Anyone who actually knows how to do floating point in assembly language will want to cry. The round() function is a good example: the documentation specifies that half-way cases round away from zero, which isn't what the FPU does by default, and anyone who knows how to use floating point math correctly will realize this is a complete fucking waste of time since the round-to-nearest-even that the FPU does do is 100% adequate.

Comment: We shouldn't need the warnings at all. (Score 1) 85

by Sanians (#45896109) Attached to: Creating Better Malware Warnings Through Psychology

The problem is that we shouldn't need the warnings at all.

Say your kid finds a web site that offers an awesome free game, and so he downloads it. Why shouldn't your computer be able to run that game (or virus) in such a way that it isn't able to take over your entire computer? The idea that programs should be able to do anything on a computer that the user running them is authorized to do is completely outdated.

When users want to access arbitrary files and make massive changes to their filesystem, they use a file browser provided by the OS, or a zip/unzip utility provided by the OS, so in both cases there's no concern about the security of these applications. Every other program anyone uses only needs to access files specifically selected by the user, so all that's needed is an API call to the effect of "open_whatever_file_the_user_selects()" which prompts the OS to display a file open dialogue and return handles for whichever files the user picks. The only other need for filesystem access I can think of is software which needs to cache data, but that doesn't require filesystem-wide access either. All it requires is that the OS give each application a folder specific to it, where it can store whatever data it wants inside that folder, but nothing outside it.

The present state of things where programs can do anything the user is allowed to do was created before anyone thought of viruses and so it's completely outdated. Why we haven't improved upon that situation, I have no idea. It seems easy enough to do, but instead we're fucking around with the wording of our "your stupid OS will let this program do anything to your computer that you're allowed to do, which could be disastrous if the program is evil, so do you want to twiddle your thumbs today or do you dare to attempt to use your computer?" dialogue boxes. People choose to run software because the reason they own a computer is that they want to run software. It's no surprise at all that they learn to ignore their OS's warnings about how incompetent it is because if they heeded the warnings they'd never get anything done.

Comment: It's probably a trojan. (Score 1) 237

by Sanians (#45854425) Attached to: There's Kanye West-Themed Crypto-Currency On the Way

We will be releasing password protected, encrypted archives containing binaries and source for the wallet and daemon BEFORE LAUNCH, with the passwords to be released at the specified time.

A.K.A. "let's see how many fools we can get to run our trojan all at the same instant."

To do this correctly, they'd have to release unencrypted source code and binaries before launch so that they can be examined and checked for security, then, at launch, simply release some necessary but harmless piece of data that is required to participate in the mining pool.

It'd be hilarious if this turned out to be a Bitcoin mining application which automatically forwards all coins harvested to a wallet somewhere.

Comment: Re:Saw this earlier (Score 1) 894

I'm pretty certain that TSA does not keep a yard debris chipper at each customs station.

No, but they do have incinerators.

However, this does remind me of when I was about 6 years old and some kid offered to trade me a bicycle lock for something. Using six-year-old logic, I decided that since I really wanted the bicycle lock, I had to trade something I really liked for it, and so I grabbed my favorite toy and made the trade. Not much later I discovered the lock had actually been stolen when it was taken from me by the parents of the kid who owned it. I went to the kid I traded with and demanded my toy back. He told me that he had rolled it down the stairs and it had broken. So I went home.

It wasn't until years later when thinking about it that it occurred to me that the toy wasn't broken. ...or, well, it may have been by that time, but it more than likely wasn't when I asked about it. That's the tricky thing about liars. When you're someone who doesn't lie, and everyone you know is similarly honest, it's all too easy to forget that not everything everyone tells you is true. (Perhaps that's the issue with the elderly. We're constantly told that intelligence doesn't decline with age. Perhaps the problem is that, with age, people become so good at avoiding bad people that they begin to forget to expect the things bad people do. That's probably especially easy when you're retired and the only people you see are your kids and grandchildren who wouldn't lie to you about anything.)

Anyway, I'm sure what happened here was that some asshole discovered some very valuable flutes while examining luggage, then noted that the luggage would not be examined again until it had sat around the airport for a while, then flown far away, and even then possibly not unpacked for days, and so he realized he could take the flutes and by the time anyone noticed they were missing he could simply claim that they were incinerated hours or days ago. Hell, perhaps to delay the discovery further, he replaced the usual "call this number to contest this confiscation" note with a bullshit "write to this phony address" note he'd created himself.
