Duh. He should've used NetBSD on it!
A programmer is a software construction worker.
It's easy, if someone designs a mechanism, he is no longer a programmer, he is a software engineer.
However amateur or inept.
The article mentions training as if it meant much, and as if bridges didn't crumble and batteries didn't explode all the time, all over the world.
Well, we are in the future now and we know it to suck.
unlink(2) is also known as rm(1).
They were removing the files as far as the general meaning is concerned.
The data on your disk is never purged unless you make an extra willing effort.
If their site was modeled after a filesystem, it's reasonable to expect that it would only remove the links.
It is also reasonable that someone would think about taking extra measures for child pornography, given that mere possession is a serious crime and digital forensics are regularly used in its prosecution.
Code Simian - (To self) We removed it but the data is still there, better purge it just in case.
Code Simian - Boss, should we purge all known child porn from our systems?
Kim Dotcom - Aye.
It is reasonable for someone to make that logical jump for child pornography while forgetting to do the same for copyright infringement.
Or... does the prosecution agree with child pornography?
We can then all agree that there is reasonable doubt that the defendant willingly kept the data for copyright infringement purposes.
Criminal Case *poof*
That defense doesn't account for e-mails, confessions and general dumbassery, though.
If you write software, you know what it is supposed to do.
I gather you have never worked as a software engineer, then?
At best we can infer what it's supposed to do based on a tissue-paper scribble by someone who has no technical expertise and was told about the project five minutes beforehand, using lipstick or crayons as available.
Usually we get an artist's rendition of the above, faxed, and then re-scanned and embedded into a pdf file.
True. This was also my first reaction.
If you read the whole post and speak BSD, however, you'll notice that full kernel-space ASLR is under way as well. So, once again, OpenBSD leads exploit mitigation.
Ed is the standard text editor.
You mean a simulation like this?
This is because the C standard is full of crap such as undead (maybe it was half-unsigned?) chars, non-zero NULL, and Harvard-architecture hacks. If you want to be sure your program will work as intended when some starry-eyed clang/gcc developer reasons he can optimize away your security code because it is undefined behavior, you must support all the brain-dead architectures that motivated the standard, so they can serve as canaries.
This is not related to supporting non-standard shitty libcs and OSes which run on 64-bit architectures and yet do not support 64-bit pointers.
Sorry I posted the wrong link.
No it's not; it is stated quite clearly that it is written for OpenBSD. OpenBSD is mostly "POSIX-compatible", but they aren't too shy to extend libc when there isn't a good alternative. The slides and the talk mention strlcpy/strlcat (unfortunately ignored by C11 but widely adopted everywhere but glibc) and reallocarray. Only obliquely referenced is a proper kernel-API (P)RNG, which is not available on most platforms.
However, like OpenSSH, you can expect the LibreSSL portability team to write wrappers to make the best of what there is in your OS. As opposed to the best Win16 could do.
Anyone who thinks all software has bugs has never written "Hello World" in assembly.
Perfect, trivial software is clearly possible. Perfect software that's slightly more complex is also clearly possible. We haven't yet accepted that perfect software is possible, but we should demand it (for moderately expensive software, or where bugs will cost you money, for instance). A reasonably intelligent programmer writing a modestly complex program should be able to do so perfectly. That he can't (because his tools don't help him do so) is infuriating.
Yes, almost all software has bugs. We are way too comfortable with the idea. Software doesn't need to have bugs. We just don't have toolchains and development stacks that encourage perfect software. It's as if engineers decided to only use modeling clay for buildings, because nobody sells steel, and it's too cumbersome to smelt their own.
The profession really is no better off for accepting this sorry state.
Sorry, but you are wrong. A perfect Hello World written in assembly according to specification and formally proven can still have bugs, in not one but two ways.
Hardware people can be clueless retards as well
CPU bugs are something you only read about in books until you actually try to do something non-trivial with the CPU.
We are following all these rules and so we are safe. We can save a penny per 1000 units sold using a crappy MMU-less CPU.
First of all, following the stupid rules requires you to use baroque lint imitations which will go off on every line of idiomatic C. You need a paper trail to justify every line of code. Seems about right, people's lives are in danger, right?
Now consider that the controller system is hundreds of thousands of LOC (for us it's more like millions). Most of that is crap boilerplate code required by the standards. This means that if you follow the methodology strictly, you need hundreds of people going through mind-numbing lists of "You are not using this argument" / "This code assigning an argument to itself does nothing". Given that most software developers are inept and overworked, I can give you a certificate that there will be bugs.
It took me two weeks with the code to find a checksum function, used all over the place, that had been "fixed" to detect offset data after an earlier corruption bug went undetected.
Every 256 bytes "checksummed", a bit from the input would be left unaccounted for (and it was actually used on data several times larger than that). I know for a fact that it had to go through at least three source and design reviews, and at least one more design review with some fat managers higher up.
Now tell me you feel safe.
Note to PHBs: Googling up a fucking working CRC and getting a CS PhD to write a formal proof that it will work as intended would have cost far less.
Also, you see, the crappy CPU vendors' stack-measurement tools - which the rules say we must use to guarantee safety - don't account for function pointers (they do show scary icons for recursive functions). They say foo(384), bar(uhhm... maybe 0?). I know to look for that when I add calls through function pointers, but I guess most people don't.
Now you add another rule. LOZRA 4092: You can't use function pointers at all.
Make my life more miserable, give the remaining work I will be unable to do to Dave, the monstera plant, or someone with the same programming aptitude.
I will give the crappy CPU/Compiler/RTOS vendors that should be sued free advice:
0- Add an MPU
1- Add canaries to every function call with any local variable at all (here it's not hackers, it's programmers following LOZRA 396: cast the shit off everything so the compiler can't tell)
2- Add stack-overflow canaries on every task switch. (Add an MPU and align to a page in the stack's growth direction.)
3- Add canaries to every memory-pool allocation. (Add MPU dead pages - you don't need RAM, just fucking address space, of which you are using like 2%.)
4- If any of the above traps, jump to a customer-defined function (stored in ROM that can only be physically modified by outside hardware) that puts all vital hardware in a safe state, adds a record to the black box, and resets the whole thing from scratch.
5- Forget about tasks and threads and move on to processes running in separate address spaces. If information must flow from a to b, it had better go through accepted channels.
6- Did I tell you to add a fucking MPU!?
Programmers used to batch environments may find it hard to live without giant listings; we would find it hard to use them. -- D.M. Ritchie