
Comment Re:A good visual why to move away from animal food (Score 1) 466

70-90% of all soy, corn and wheat grown in the US is fed to livestock... and the return on that (in calories or protein) is a fraction of what goes into it...

Of course. But the return on stuff our digestive system can actually process without difficulty is way bigger, or else we'd be eating dirt. After all, plants get their nutrients from the ground, right?

but that's not how 99% of the livestock in the US is raised

I'm assuming you have vast experience in cattle farming in the US, to come up with such a bold number. Also, the rest of the world isn't the US.

And imagine if it were: the majority of land is already used for this kind of agriculture.

You're assuming that if we just grew soy instead, we'd be better off. The fact is that what most animals do is pruning (they eat the greener leaves of selected species, but leave the older leaves intact, as well as the roots). Most farming techniques don't use pruning, so the stress on the soil is much greater - look at corn plantations for an example. Also, rations made by directly mashing corn use the whole plant (except the roots) while it is still green, while human food based on corn usually only uses the seed part (so half the plant is waste, or used as low-nutrient feed for cattle).

Again, just look at that graph.

Well, I cannot speak for graphs or metrics made by others, especially when they aren't peer-reviewed and relate to a specific population. Of course, the amount of water I was mentioning was direct consumption - there is a whole lot of water used for cleaning, irrigation and whatnot. That varies a lot with the location of the cattle, the heat, and the kind of feeding, among other factors. If you're worried about water usage, have a look at aluminum processing or lorries.

And that's not how much water goes into making milk. It's definitely not 1:1

Well, you can show me any blog articles you want. I can only speak from my own experience, working on a cow farm during the summers in my youth.

Comment Re:A good visual why to move away from animal food (Score 1) 466

Imagine the ratio of surface area to protein/vital nutrient intake once those animals are converted into plants. Do you know how much grass a cow eats and how efficient it is converting it and water into milk and tasty meat? Do you think you can do the same by just eating grass? Go, help yourself. An adult cow drinks 20-40l of water and produces around 20l of milk, while producing tasty meat. Do the math.

Comment Re:It's not all about flavor... (Score 1) 466

Well, it doesn't take more water for the same protein value. One of the reasons we eat livestock instead of grass is that we need more complex structures (e.g. proteins) to live. Animal shit may be a problem, but human shit is way worse, as it can't even be easily recycled as fertilizer. I'm not in favour of animal cruelty (so some heavily processed meat operations make me wonder where we're going - but on the other hand, recycling everything from a carcass is more than what I do with my vegetables), but your meat choices have no impact on life.

Comment Re:Sigh (Score 1) 141

1) You're a short-sighted moron, but that we already knew;
2) If crowdfunding of this kind of project works, we'll see many more in the future - even if this one fails to meet its goal;
3) Learning engineering tricks from a viable, proven piece of tech that went where no man has been before is priceless. You can simulate whatever you want, but in the end I'll take a proven piece of gear over a simulation any day;
4) Your opinion's worth is way inflated. Even for free, it's still overpriced.

Comment Re:de Raadt (Score 1) 304

Ah, ok. So like SEGMEXEC in PaX where the address space is dual-mapped and split 50/50, or W^X where part of the memory is NX.

Yeah, sort of exactly like that. But it has existed since "forever" in x86-32 and you can specify arbitrary segments per-task.

Code resides in segments in a binary. A library has data and code segments, mapped separately. They're mapped per-page for memory management: if you experience memory pressure, small pages (4K) can be swapped out. Code pages get invalidated and re-read from the original source on disk if needed.

Yeah, but because the address translation system used by segments sits on top of the actual paging mechanism, this is transparent :) It's all linear addresses.

LDT in 32-bit flat protected mode is an interface I never saw. Odd. I only knew about the single segment (code, data) layout using CS and DS and SS.

You're probably a typical unix guy :) And AFAIK, most academic literature does not cover this. It's odd how my path differs from the traditional "academic" way - I learned system architecture and multitasking from the CPU manuals, so you can imagine my surprise when I started looking into existing operating systems :D

Comment Re:de Raadt (Score 1) 304

There was no 'Executable' flag in the page table entry (page descriptor) of the 80386 and later x86 processors until AMD added a "no-execute" (NX) bit to the page table entry in its AMD64 architecture, to make this capability available to operating systems using the flat memory model - providing a mechanism that can control execution per page rather than per whole segment.

I'm not talking about page table entries. I'm talking about segment descriptors. Those are two completely distinct features/functionalities. Please check the *actual* reference manual link in my previous comment.

What happens is your program starts its VMA with 16MB of non-mapped memory. Then you have the executable .text segment. Directly above that is the non-executable brk() segment (heap). Directly above that are anonymous mappings, including executable shared library .text segments. Finally, you have the stack.

You still don't see the problem. Locking PER-SEGMENT has existed since the 80286. Not per-page but PER-SEGMENT. Because most operating systems use a half-assed, simplified, portable implementation of the x86 protection mechanism, this is not used - at all. So you have the stupid contiguous loading that you just described, with all the shitstorm that comes with it.

So memory looks approximately like: nnnnXXXXWWWWXXWXWXnnnnnnnWWWWW for non-mapped, eXecutable, and Writable memory. All mapped is readable to simplify this model.

Mapped and unmapped memory is not relevant for the protection mechanism, UNLESS you want per-page protection (which is kinda stupid anyway, code should reside in its own descriptor).

On x86, PROT_READ is also PROT_EXEC; there aren't 2 bits. While you can use a segmentation setup, it must be contiguous: setting the highest executable page to the top of the main executable makes the heap non-executable, but also makes all library code non-executable.

Again, I'm not talking about pages. Segment descriptors sit on top of the paging mechanism (i.e. all addresses are linear addresses when paging is used - and paging isn't even mandatory), and you CAN have execute-only segments on top of this. You can have multiple segment descriptors mapping the same linear addresses with different permissions, if you want. Even with different DPLs. Read the actual manual, and you'll quickly understand this.
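To make that concrete, here's a minimal sketch in C of how the access byte of an x86 code/data segment descriptor encodes this. The bit layout follows the Intel manuals (P | DPL | S | Type); the helper names are mine, not any real API. An execute-only code segment is simply a code descriptor with the "readable" type bit clear:

```c
#include <stdint.h>
#include <stdbool.h>

/* Decoding the access byte of an x86 code/data segment descriptor
 * (S bit set). Layout per the Intel SDM: P(7) DPL(6:5) S(4) Type(3:0).
 * For code segments, type bit 3 is set and type bit 1 means "readable". */

bool seg_is_code(uint8_t access)         { return (access & 0x08) != 0; }
bool seg_code_readable(uint8_t access)   { return (access & 0x02) != 0; } /* code segments only */
bool seg_is_execute_only(uint8_t access) { return seg_is_code(access) && !seg_code_readable(access); }
uint8_t seg_dpl(uint8_t access)          { return (access >> 5) & 0x03; }
```

With this encoding, 0x9A is the usual readable ring-0 code segment, while 0x98 is the execute-only variant that, as argued above, mainstream operating systems never bother to use.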

There is no bit to say, "This part is executable, and this part isn't, and this next part is, but this next part isn't, this part is, and this last part isn't."

There is a way of saying "this segment is executable, this one isn't", for an arbitrary number of segments. The GDT can hold up to 8191 usable descriptors (64 KiB of 8-byte entries gives 8192 slots, minus the mandatory null descriptor), and *each* LDT has the same limit. On modern systems, you still have the dumb-as-f**k 4-main-descriptor approach plus up to 4 for "userspace". This happens because the protection mechanisms of almost all OSes specifically DON'T make use of all the x86 protection features.

Hence the tricks with setting up ring-0 memory protection on data pages and then forcing a DTLB load if they're read/written, to act as a kernel-controlled NX bit.

If you're interested, take a couple of minutes to understand how the protection mechanism works on the 80386. All of it. You'll quickly realize how shitty most implementations are.
On a similar note, check "call gates" and then ask yourself why most modern CPUs have an instruction that implements what is a call gate. If you want an answer, have a look at the shitty shitty linux 2.0/2.2 code that deals with syscalls.

Comment Re:de Raadt (Score 1) 304

The problem was, originally, that the CPU itself did not have an NX bit

It doesn't need one, if a system actually implements protection according to the spec. Check http://www.intel.com/Assets/en..., section 3.4.5.1 (page 3-16, vol. 3A). If you're still in doubt, check the original 80386 manual, http://css.csail.mit.edu/6.858... (page 109).

because you could *read* program code and why would you want to do that except to execute it

Well, read and execute are two separate permissions even on unix systems :) (e.g. you may be unable to read a file, yet still order the system to execute it). There is a whole range of applications for read-only buffers beyond execution, especially if, say, my application deals with DMA transfers. On modern PCI Express systems, this makes even more sense.

Yes, and in 2001 no x86 CPUs were physically capable in hardware of marking executability in the LDT.

I actually find that quite hard to believe. The LDT was introduced with the 80286, in 1982. If you check both manuals I mentioned, you'll find indications that you cannot store a TSS in the LDT, along with some other special descriptor types - but not code/data ones. That would actually *void* the initial purpose of the LDT, which was providing a poor man's protection mechanism without a paging system. My memory isn't what it used to be, and sometimes I do get some things wrong. It happens. But I've spent over 10 years programming mainly in x86 assembly (through the end of the '90s), and - while I may not recall all the specifics - I still remember some stuff. The drawbacks of using LDTs have more to do with memory consumption and system complexity than anything else. If you have information that supports your claim, please share :)

Comment Re:de Raadt (Score 1) 304

Some of that was just spurting. Also the PaX stuff isn't unmapped pages; they're mapped, just they're ring-0 only, and so when the userspace ring-3 execution flow tries to execute it you get a protection fault.

This happens because all descriptors map the same area. They shouldn't. PaX may be a clever trick, but it exists to fix a software design flaw. Modern (ELF/COFF-based et al.) binaries have a fixup (relocation) table *precisely* so they can be loaded at a random address. Code segments and data segments are separated for the same reason. The same goes for the BSS info. It makes no sense *not* to use this, especially since most platforms are ELF-based.

Then the OS forces a ring-3 DTLB load if it's not an execution attempt, and the program continues--the protection isn't checked if there's a DTLB entry, so this actually works.

By design, in x86 systems, every application should have its own set of descriptors (by using an LDT). On most modern systems, it doesn't. Even if using just a set of 3 descriptors for all tasks (per CPU/execution unit, of course), saving/reloading of descriptor info could be done upon task switch (just like task state info is stored and a new TSS is loaded). This would allow every application to have a read/write stack that is not reachable through the code segment, and a data segment that does not map the same area as both the stack and the code segment. Linear addressing could also solve (at least partially) the shared library problem.

I would think anyone who goes as far as to use BSD in production would be leaning on OpenBSD.

Security is just one parameter of the equation. OpenBSD does security quite well, but the rest leaves a lot to be desired. I was a huge OpenBSD fan/user (and still buy the CDs whenever I can), but the 2-release support cycle isn't enough for my needs. And the lack of *any* kind of container or virtualization technology (like jails), the lack of a modern filesystem, and somewhat dismal SMP support make OpenBSD unsuitable for most of my needs. Regardless, I do know of companies using OpenBSD in production without any hiccups, and I'd consider it for some workloads.

Comment Re:de Raadt (Score 1) 304

I would expect OpenBSD to be the bigger fraction--who uses BSD on a server?

FreeBSD is the bigger fraction. And I use BSD on servers instead of Linux. So do companies like Yahoo and WhatsApp.

Thing is we know a lot of high-profile targets are straight Linux or they have dedicated appliances that were vulnerable (FortiNet products, SonicWall products, Cisco and Juniper gear even)

Well, some versions of JunOS are based on FreeBSD. And BSD-based appliances are quite popular (pfSense, m0n0wall, etc.).

(...) I read--16 byte objects are common, so I have a "picoheap" or whatever I called it that's just a collection of mmap() areas for holding 16 byte objects

That vaguely reminds me of the slab allocation available in the FreeBSD and Linux kernels (and Solaris) (http://en.wikipedia.org/wiki/Slab_allocation).
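For the curious, the freelist idea behind slab-style and "picoheap"-style allocation fits in a few lines of C. This is only a toy sketch under my own names and assumptions (single-threaded, fixed capacity, no growth) showing the O(1) push/pop that makes these allocators attractive - plus the zero-on-free scrubbing argued for elsewhere in this thread:

```c
#include <stddef.h>
#include <string.h>

/* Toy fixed-size object pool: free blocks are chained through their own
 * storage, so both allocation and free are O(1) pointer swaps. */
typedef struct pool {
    void  *free_head;  /* head of the freelist */
    size_t obj_size;   /* must be >= sizeof(void *) */
} pool_t;

void pool_init(pool_t *p, void *backing, size_t obj_size, size_t count) {
    p->obj_size = obj_size;
    p->free_head = NULL;
    char *base = backing;
    for (size_t i = 0; i < count; i++) {   /* thread every slot onto the list */
        void *slot = base + i * obj_size;
        *(void **)slot = p->free_head;
        p->free_head = slot;
    }
}

void *pool_alloc(pool_t *p) {
    void *slot = p->free_head;
    if (slot)
        p->free_head = *(void **)slot;     /* pop */
    return slot;                           /* NULL when exhausted */
}

void pool_free(pool_t *p, void *slot) {
    memset(slot, 0, p->obj_size);          /* scrub before reuse */
    *(void **)slot = p->free_head;         /* push */
    p->free_head = slot;
}
```

The real slab allocators do far more (per-CPU caches, constructor/destructor hooks, growth), but the freelist core is exactly this.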

Yes I know. I ingested enough about computer architecture in 1 month when I was 19 to have the Tetris Effect on a hilarious level.

So, do you also agree it's dumb? :D

But hey, did you know that on IA-32 you can mark a page with the Supervisor bit, and when you try to access it you cause a page fault exception into the kernel, which can then check if you want to READ/WRITE or EXECUTE the page, and kill the program if EXECUTE, else force a DTLB load (in the CPU) which gets cached and no longer raises an exception until the DTLB fills up and invalidates that entry?

Off the top of my head, no :) You're probably referring to the fact that accessing a linear address that doesn't map to actual 'real' memory results in a page fault, and then you can decide what to do. That's how virtual memory works, and it is a bit of an overkill to abuse it this way. On the other hand, COW uses a somewhat similar mechanism and no one is complaining :)

so the whole stack was automatically NX without all that extra overhead, and the heap was page-granularity protected.

Well, most modern x86 operating systems don't fully use the x86 protection mechanism. If they did, every program would have its own LDT, and each program would have its own set of descriptors to make sure you're not messing around where you shouldn't. This, of course, has its own limitations (LDT entries, for one, since the GDT is limited to 64K), but it also has its own advantages (call gates are awesome, especially when every program has its own stack segment descriptor mapped into the linear address space). The common approach is to use a limited set of descriptors in different execution rings, so that shared libraries and other stuff work without too many hiccups. If you saw how Linux handled syscalls in version 2.0/2.2 on x86, you'd laugh. Now modern CPUs have their own instruction to do this, instead of using call gates :)

And kept adding to it for about 3 months before cooling down. I know about word alignment.

Well, I've spent several years working in x86 assembly in IA-32 protected mode. Just not in unix environments :)

Comment Re:de Raadt (Score 1) 304

That they have an OpenBSD appliance somewhere in the rack doesn't protect them.

Well, that depends. Following your own broad generalizations about setups, it is common to have dedicated servers working as SSL proxies. Again, I'd be surprised if at least some of those companies weren't using OpenBSD for that - or some vendor product that leveraged BSD tech.

I'm assuming that there are so few that the probability of OpenBSD servers being the first round of exploits--or really in any significant way exploited early enough that the bug is exposed and understood before the damage is probably done

You cannot assume that. You don't know how long this vulnerability has been exploited, and it is reasonable to expect that some high-profile targets that use OpenBSD in their stack may have been compromised. It's not only about how OpenBSD could have mitigated this - it's about why a decade goes by and basic security measures are still the exception in other operating systems.

You mean like with object pooling, which is essentially what freelists are?

AFAIK the problem isn't trying to optimize thousands of a-couple-of-bytes allocations. Even if it were, it is a good idea to scrub freed blocks with zeros. If you're allocating a 4K page, you should probably use other mechanisms. The problem is that, on top of the fact that no bounds check was performed on a copy operation, this passed code review and was incorporated into the most widely used SSL library - one that reads and processes sensitive information.
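As an illustration of the class of bug under discussion - not OpenSSL's actual code, just a hypothetical handler for a [length][payload] record - the entire fix is one comparison between the length the packet claims and the bytes actually received:

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical "echo" handler for a record of the form
 * [2-byte big-endian length][payload]. Returns the number of bytes
 * echoed into `out`, or -1 for a malformed record. */
int echo_payload(unsigned char *out, const unsigned char *pkt, size_t pkt_len) {
    if (pkt_len < 2)
        return -1;
    size_t claimed = ((size_t)pkt[0] << 8) | pkt[1];  /* attacker-controlled */
    size_t actual  = pkt_len - 2;                     /* what really arrived */
    if (claimed > actual)                             /* the missing bounds check */
        return -1;
    memcpy(out, pkt + 2, claimed);
    return (int)claimed;
}
```

Without the `claimed > actual` check, the memcpy reads past the end of the received data and echoes whatever happens to sit next to it in memory - which is essentially the Heartbleed pattern.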

It's a common design pattern. Next you'll tell me Duff's Device is a dumb idea.

When applied to modern unix computing? It is. You can have a smart design around it, but as it is - it's dumb as fuck. E.g. accessing odd memory positions causes a stall in most modern CISC CPUs (and some ancient ones too), so your code will actually run slower. In some CPUs, anything that is not 16-byte aligned stalls. In the end, if you're using C and trying to be clever about the architecture, either you're doing critical kernel stuff or you should be using assembly anyway. Odds are, your very clever algorithm implementation will be a pile of poo on a newer generation of CPUs, and no compiler will save you from that (have a look at Bresenham's integer line-drawing algorithm for an example).
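For reference, the construct being argued about - Duff's Device - interleaves a switch with an unrolled do/while so the remainder bytes and the main loop share one body. A sketch in C (with a guard for the zero-length case, which the classic version mishandles):

```c
#include <stddef.h>

/* Duff's Device: the case labels jump into the middle of the unrolled
 * do/while, handling count % 8 leftover bytes on the first pass, then
 * full groups of 8 on each subsequent pass. */
void duff_copy(char *to, const char *from, size_t count) {
    if (count == 0)
        return;                       /* classic version loops ~SIZE_MAX times here */
    size_t n = (count + 7) / 8;       /* number of passes through the loop body */
    switch (count % 8) {
    case 0: do { *to++ = *from++;
    case 7:      *to++ = *from++;
    case 6:      *to++ = *from++;
    case 5:      *to++ = *from++;
    case 4:      *to++ = *from++;
    case 3:      *to++ = *from++;
    case 2:      *to++ = *from++;
    case 1:      *to++ = *from++;
            } while (--n > 0);
    }
}
```

Whether this byte-at-a-time unrolling helps or hurts on a given CPU is precisely the alignment argument made above; on modern hardware, memcpy will beat it.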

I had a memory allocator project 10 years ago that I abandoned which aimed for better memory density and high security by abandoning brk() and using mmap() and thread/class grouping (allocations from the same thread tend to allocate/free together; allocation fragmentation can be dealt with by grouping allocations of a given size together in a range of memory, rather than just buddy allocating in one big open field), and used all kinds of tests with canaries and dtags (deleted! double-free!) and other security bits.

That actually sounds interesting. What happened to it?

I got in an argument with Theo some 8 or 10 years ago about static checkers. He told me, rather loudly, that static checkers make programmers lazy because they start just writing to satisfy the checker and then get sloppy with actual code review

I'm a big fan of static analysis, but I do understand his point. Having these features available at development time can take a toll on the quality of the code. Having them available on a CI server or whatever is a very good idea: if an error is triggered, proper attention will be given to it, instead of someone just changing the code so it stops complaining.

His strong position was that static checkers INCREASE defects in code; but I see that somewhere in the past 5 years he's reversed that position sharply.

He probably tried using one :D

Comment Re:de Raadt (Score 3, Interesting) 304

OpenBSD is a hobby OS

*every* community-driven operating system is a hobby OS. Is that relevant?

It's like Linux with grsecurity

Maybe for you. Not for me. And it is actually easier to audit, and it has a smaller kernel. And a kernel debugger - something that is quite handy for finding and troubleshooting problems.

(...) I would avoid the bet.

There are also more Windows machines than *nix machines with an internet connection. Some little-known RTOSes are way more popular than Linux and BSD combined. Your point is?

OpenBSD's allocator is what we call "Proof of Concept".

Monolithic kernels are a proof of concept of monolithic designs. Every existing implementation of something is a proof of concept of the given concept. Again, what is the point?

It exists somewhere in real life, you can leverage it (I've leveraged proof-of-concept exploit code from Bugtraq in actual exploit kits), but it's not this ubiquitous thing that's out there enough to have an impact on the real world.

While OpenBSD itself is a niche product, its team is very well known for producing hugely popular products, including OpenSSH and PF. BSD server usage is low, but there are no real stats on middleware - routers, storage units, set-top boxes, closed devices, etc. FreeBSD is reportedly used in the PlayStation - that's more users than most Linux distros have. Is popularity relevant to the discussion? Not really.

Suntrust, Bank of America, slashdot, the NSA, Verisign, Microsoft, Google--is running a non-OpenBSD operating system with no such protections

I'd actually be very surprised if none of these companies used OpenBSD goodies - either OpenBSD itself, or middleware BSD products. And then you can add OpenSSH, OpenBGPD and a couple more interesting products to the list. Microsoft used OpenBSD as a basis for the Microsoft Services for Unix. But again - is it relevant to the discussion? Not really.

And again, the concept of allocation caching is common. Freelists are used when allocations are all the same size; that gripe is essentially that a valid data object is not valid because they dislike it. Plenty of software uses freelists, and freelists are a generalization of the object pool software design pattern used for database connection caching in ORMs, token caching in security systems, and network buffers (ring buffer...). I would be surprised if OpenBSD's libc and kernel didn't make use of freelists or object pools somewhere.

So, you're saying that optimizing memory allocation in privileged space is the same as optimizing memory allocation on a userland library? That managing fixed-sized, out-of-the-userspace-address-pool structures is the same as trying to be smarter than the local malloc implementation? No system is perfect, but it generally sounds like a very bad idea.

In short: there's a lot of whanging on that OpenSSL made OpenBSD's security allocator feature go away, and that (implication) if OpenSSL had not done that, then an exploit attempt would have come across one of the 0.01% of interesting servers running OpenBSD, and a child Apache process would have crashed, and some alarms would have gone off, and someone would have looked into the logs despite the server humming along just fine as if nothing had happened, and they would have seen the crash, and investigated it with a debugger, and then reproduced the crash by somehow magically divining what just happened, and done so BEFORE THE WHOLE WORLD HAD DEPLOYED OPENSSL 1.0.1.

So, you're assuming there are no compromised OpenBSD servers because of this, and that no one actually tried to exploit it on OpenBSD. The fact is that no one knows exactly the extent of the damage of this vulnerability, or whether it could have been detected way earlier by using OpenBSD, or Linux with grsecurity, or whatnot. And it still remains a fact that it is a dumb idea to try to micro-manage memory in your library.

Maybe you should whine that everyone else should do similar exploit mitigation techniques to OpenBSD first, instead of whining that OpenBSD could, in theory, possibly, have caught this if it was less-broken.

Actually, the whining I read was precisely that - why aren't OS developers doing the best they can to avoid and mitigate these kinds of errors? And if a hobby OS like OpenBSD can do it - and has done it years ago - why isn't this kind of protection mainstream?

Coverity missed it even with the freelist disabled; Frama-C would not shut up about Heartbleed, and should have detected this ages ago. So nobody ran Frama-C against OpenSSL. Apparently a static checker could find Heartbleed on a whim, and could find it even if it wasn't being exploited--which means it could find it faster than OpenBSD's allocator

That only demonstrates that someone needs to take a serious look at what's going on with OpenSSL and start fixing stuff. Oddly enough, it's the hobby-OS-that-no-one-uses that is picking up the task - by THEMSELVES. Theo may be a loudmouth, but they are actually trying to improve it. How many companies would benefit from that? And how many of those have started doing it? Yeah...

Comment It depends (Score 1) 119

I've seen advantages and disadvantages in both scenarios. It depends on the application and the profile of your production systems. As a rule of thumb, your test/dev systems should be as close as possible to the production machines: if you're deploying to cloud services, you should have your test and staging systems running on the same platform/provider; if you're deploying to bare metal, you should have dedicated servers for testing and staging.

Applications don't work by themselves, and e.g. controller/driver problems in production systems are quite difficult to diagnose if you can't replicate the problem in staging. So it doesn't really make sense to test something on a cloud provider (with "emulated" hardware) if you may need to be on the lookout for specific problems on your own hardware.

There is also the question of I/O - both local and network. Dedicated gear will always be faster than virtualized solutions, or at least cheaper for the same amount of IOPS. If you have an application that requires heavy I/O, cloud services are usually almost as expensive as they are useless. As an example, you can rent a couple of servers at Hetzner that will run *laps* around most EC2 instances, for a fraction of the price. And since test data is usually ephemeral, server reliability isn't really an issue - a competent sysadmin team will provision a cluster of those for a fraction of the price of a cloud service.

So, in short: if you're using cloud in production, it's the wise move; if not, it's probably not what you're looking for.

Comment Re:US Law (Score 1) 91

it's printed right on the money: "This note is legal tender for all debts, public and private". You're legally required to accept US Currency. It's up to you how much, but if you have a debt you don't get to say no. You can require that the currency be in certain denominations (e.g. you can decline a jar full of 10,000 pennies) but that's about it.

Someone already replied to this. And, as you may have guessed by now, the US didn't invent the concept of money (or markets, for what it's worth).

If you say no and it's enough money to go to court over, then sooner or later a judge is going to make you take the money. If you tell a judge no, he throws you in prison for contempt of court.

This is just silly. A judge CANNOT FORCE anyone to do anything. The judge has no power - the institution the judge represents is a different matter altogether. But, as an example, I can easily create a pizza restaurant that works exclusively via subscription and is paid via electronic transactions. AFAIK there is no law against this.

Oh, gold's intrinsic value is that rich people like to wear it.

You don't mean rich, you mean wealthy. And if they're wealthy, they probably already have gold, or some spare stuff they can trade for gold - that's WHY they're wealthy. And part of it is that people like yourself see a demand for gold (because rich people want it...), and so will gladly accept it as currency, thus giving it its intrinsic value. You would work for gold :)
