This seems to come out of the peculiar Microsoft feature of being an administrator user without administrator privileges most of the time, escalating only when needed, plus a lot of work to make that escalation happen in a non-intrusive fashion, or be faked, depending on context. It's a really complicated beast that no other platform tries to build.
MS, up to and including XP (excluding the DOS-based family), basically had the same model as everyone else: you either were an administrator or you weren't, with facilities like 'runas' to run as an elevated user when needed. The problem was that tons of software from the DOS-based days failed to use the right sections of the registry and filesystem, forcing people to go through pains to run a lot of applications as administrator. This meant that most XP users just logged in as administrator.
To mitigate it, they embarked upon crafting this fairly complex thing to make running as an administrator user safer most of the time. It's funny, because at the same time they started doing more and more to allow even poorly designed DOS-era software to run without administrator rights. They created union mounts to make an application think it can write to its application directory even when it cannot (and do sillier things, like making 'system32' a different directory depending on whether a 32- or 64-bit application is looking). I do the atypical thing of running as a non-administrator user full time, with UAC prompting me for a password when needed, and nowadays it doesn't nag any more than sudo does on a modern Linux desktop. If I understand this behavior correctly, that usage model might be immune to this risk factor.
While impersonation and other techniques are used a lot more now, covering larger portions of the API, impersonation itself has been around since NT 3.1. Are you a file server process serving a request from a client? Just create an impersonation context for the user who sent the request and pass it along to the file system. You only need to make sure that you create the right context and tell other services on whose behalf you are acting. This is not identical to setuid and similar, most importantly because a single thread can keep many impersonation contexts.
That this is part of the application compatibility cache service is almost coincidental; the real problem is that impersonation services are used, but used incorrectly. Impersonation was part of the original NT design, and for relatively good reason.
you first need to copy that data into another block, erase the original one, write all the data back, and then erase your "tmp" block. The churn on blocks happens a lot faster than you'd think.
If that's the case, then why aren't they copying the data to RAM contained on the drive itself? It seems like an awful waste of cycles with a relatively simple fix. Is it just a cost issue?
Any wear levelling worth its salt will not do what the grandparent describes: you simply do not rewrite one page in place within a block. If you write a single page, that is handled by mapping that page to another (free) block and maintaining a mapping table recording which LBAs are currently stored in which blocks. However, if you are doing single-sector writes, or repeated I/O flushes of the same sector, you still see a lot of write amplification. To keep data integrity, the mapping tables also need to be updated in a crash-safe way (or at least be uniquely recoverable by scanning through all blocks after a hard power-off).
But, well, the difference is that Oracle has actively asked everyone else to quote references to their book. Google has produced a product that only respects those references that Oracle has encouraged everyone to use. If Oracle starts pursuing anyone *writing* Java code for copyright infringement ("hey, you called all the methods of ArrayList, in the order they are declared"), that will be a different matter.
Has Google copied the Javadocs? Those texts are not necessary for technical interoperability, so that would be a very different thing. Public symbols should be just that: public.
I find it a bit suspicious that Africa handled many Ebola outbreaks before this one just fine. Sure, there were deaths, it's Ebola, but they handled it. Now we have 20+ medical companies with Ebola vaccines/cures untested on humans, all rushing in to "save" the day by testing their drugs on people outside a controlled environment and with no legal liability. If a survivor's next fetus grows a third eye and has the IQ of a sweet potato in a drought, are they going to take responsibility? I doubt it. How about getting the right gear and help to the health workers instead of pumping them full of stuff you couldn't legally test in your own country? Stop using Africa as a petri dish. Makes me wonder if they didn't start and help spread the epidemic in the first place. But then again, maybe I have played/watched too much Resident Evil.
Proper help to the health workers would include a vaccine. In an early phase there won't be enough vaccine doses for the general population anyway, but right now health workers are at high risk and the number of doctors is dwindling, not because they leave their posts, but because they die. Even if you have protective gear and practices that reduce the risk of infection by 99%, that is still a high-risk scenario when you are at a treatment center with multiple patient encounters a day. For any similar infection where some kind of vaccine or prophylaxis exists, health workers get it. The side effects could be quite dire and it could still be worth it, assuming they are going to do their job at all.
I think all first-year computer science / programming / engineering students should be introduced to this and learn how to write programs for this environment first, before moving on to modern systems. True power is being able to write useful stuff with only 64 KB of RAM and a 1 MHz processor, have it run in an acceptable time frame, and then take those skills and scale them up to today's multi-core, multi-gigahertz, multi-gigabyte address spaces.
Cheap memory accesses compared to instruction latency, across your whole memory space: memory locality basically doesn't matter. Branching is also almost free, since you are not pipelined at all. If you extrapolated a Z80 machine to multi-gigahertz and multi-gigabyte, you would get a much simpler low-level structure than you actually have. Some of the lessons about being frugal make sense, but you will also learn a lot of tricks that are either directly counter-productive, or at the very least steal your attention from what you should be concerned with, even in those cases where you really are focusing on performance and power efficiency (in addition to the myriad of valid cases where developer productivity is simply more important than optimality).
It used to be that you couldn't pre-calculate everything because you didn't have enough storage. These days, you shouldn't pre-calculate everything, because it will often be cheaper to recompute a value than to risk hitting main memory with a tremendous number of random accesses, or even worse, swap. Up to quite serious data set sizes, a flat data structure with fully sequential accesses can turn out to be preferable to something like a binary tree with multiple levels of indirection. (The insane number of references behind what should be a flat array in anything from Java to Python is one reason why even very good JITs fall short of well-written C, Fortran, or C++.)
If you clear out the various multi-platform work from OpenSSL, _of course_ it can progress more quickly and more securely. The multi-platform work is where so much of the effort has gone.
As a person making their living writing software for MacOS X and iOS, do I care about this code running on MacOS 9? Not one bit. They explain it very well: you don't need to be "multi-platform" if you are standard. Instead of "we have thirteen implementations of SSL_memcpy that run on a dozen completely outdated platforms nobody cares about", they use memcpy and say "if your platform doesn't support a standard C function correctly, fuck you and your platform". Which is the correct approach.
A slightly more pragmatic approach is to keep those implementations, or at least the most crucial ones, but make sure that you use memcpy etc. directly on any sane modern platform.
What is so difficult to understand, and why is everyone getting their knickers in a twist over it?
The OpenBSD project used to be pretty rude to a number of people (mostly you could understand why, but that doesn't justify it). While some of this is just ignorance, much of it is likely people wanting to get back at them, plus people from the various security services trying to sow dissent.
Any other bitching just shows what an idiot you are (not saying you're bitching; just pointing that out for the general peanut gallery).
He's bitching, whether he realises it or not, yet he didn't point to a single instance of Alpha support slowing down other platforms. His analogy doesn't apply just because they provide portability; it applies only if providing portability to old platforms such as the Alpha slows down the development of OpenBSD. That probably does happen occasionally, but on the other hand it probably keeps people with old Alpha machines interested and contributing, and so has a positive effect overall.
I haven't watched the full talk (yet), but previous outlines of what OpenSSL did reeked of a Javaesque approach: provide such a thick runtime library on top of the OS that you don't really need to care about what the platform gives you. That gives you a great portability story once the layer is in place. The bad part is that you don't benefit from the specifics of the platform, and if you let your portability/base layer rot, you are behind everyone else's game. What's happened during the last 5-10 years is a lot of work to make the C standard library (or slight variations of it), as well as base system calls, much more hardened, to some extent providing security in depth. The LibreSSL critique has been that OpenSSL went with its home-rolled, slightly inferior, slightly unpredictable layer (not handling NULL values in places, or at least not in the way any sane platform did, etc.) for far too many things, even on modern platforms. As a provider of a platform with security in mind, I can understand the frustration of having a crucial library say "hey, we don't care about that stuff, we can implement everything we need ourselves".
yes, you do. In a preemptible OS, in a multi-threaded app, you need atomic operations to share data between threads, because any read-modify-write operation on shared data gets wrecked when it is preempted between the read and the write.
Furthermore, what is atomic in terms of context-switch preemption is not necessarily atomic in terms of memory bus arbitration. The two usually coincide, but they don't have to.
The one question I still have is why the flow stops at 41,000 ft. I would have expected a kind of spring effect, followed by the lower portion of the siphon slowly descending as water vaporizes off the pre-apex portion, allowing the water in the lower part to descend while maintaining the same vapor pressure. I'm sure it is my failure to understand, so if anyone can offer a better explanation please do so!
I think it does. The time scale is just staggeringly different. Watching a water surface dry, and one with a low area relative to its volume at that, is a boring activity. Put some table salt in one glass, fill an identical glass with water, and put a lid over both. The equilibrium state will be more water in the initially empty, salt-containing glass than in the one that originally contained the water. Why? Because dissolved salt lowers the equilibrium vapor pressure of the water, so water slowly distils from the pure glass to the salty one. But that transfer, and the formation of a water film or droplets on all the other surfaces in the enclosed volume, is immensely slow anywhere near room temperature.