But those charities don't pay any taxes on such stock sales, right?
He'd probably quadruple his money if he began short-selling construction stocks in Turkey right now
On Windows, it's quite easy, actually. The non-IE browser plugin and the ActiveX controls are separate installs. Without the latter, you don't have issues outside the browser. The browser-plugin Flash is invisible to anything but the browsers. I don't recall whether recent IE uses the browser plugin or the ActiveX variant, but I recall that older ones needed the ActiveX version.
Look, I'm not advocating running the fucking filesystem or a network driver in the userspace. I advocate running leaf drivers there -- such as application-specific USB drivers and the console. The console "driver" doesn't provide services used by any other applications. It receives stuff from the kernel and does something with it, and there it ends. It's just like the audit daemon!
When you have exclusive device ownership by an application -- with app-specific USB devices, etc. -- there's absolutely no point in having a device-specific kernel driver. None. Those devices are generally data sinks or sources, and you don't need anything in the kernel besides the regular USB host stack. All of the requests can be handled by the userspace without loss of performance. I know, I've done that, and it works great.

FTDI's unix drivers are completely broken on OS X since they use libusb, and libusb doesn't know how to deal with sub-second timeouts on OS X because it uses the wrong APIs, which only time out with one-second granularity. Thus I've written my own userspace driver from scratch, and it performs beautifully. I can have a bog-standard userspace process get woken up every two milliseconds to process an incoming bulk transfer and respond to it. It allows regular userspace to provide soft-realtime kind of performance -- I can easily get 3ms device-to-device latency, with a userspace application in the middle. It's very useful for prototyping, where occasional glitches can be worked around or even ignored.
Same goes for Windows, where FTDI's stupid driver doesn't feel it necessary to inform you that the device got disconnected. Basically, FTDI's driver is written mostly for the braindead blocking kind of use, and they have half-heartedly added an event source that signals the arrival of data and changes in control-line status. I mean, give me a fucking break, how stupid is that -- obviously in their minds no one cares if the device has gone away, or when a data transfer has finished, etc.

Anyway, that driver code works on OS X using its USB API, on Linux using its API, and on Windows using UMDF. The performance on all of the platforms is exactly the same. So I specifically don't buy any bullshit arguments about userspace being slow. Yeah, if you need to go back and forth between the kernel, a userspace driver, the kernel again, and other applications -- as for networking, filesystems, block devices, etc. -- then sure, it's slow. If your application is the sole user of a device, there are few if any benefits to having a kernel driver -- at least if a lower-level kernel driver already handles the bus transaction queuing, as is the case on any modern device bus you can think of (ATA/SATA, USB, modern PCI, etc.).

If you want to stream data at low overhead between the USB device and the userspace, the userspace driver merely needs to keep the device's request queue in the host stack filled, and keep the finished requests drained. That's all there is to it, and it works well; I can easily keep the device saturated -- running at a rate limited by the device itself and the host USB stack. I can transfer about 33 megabytes/s when talking to an FT232H chip connected to a fast CPU -- and that's without trying particularly hard. So, again, I don't buy bullshit arguments that don't stand up to simple experimental confirmation. Remember: the ultimate test is the experiment. Nothing else matters.
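The "keep the request queue filled, keep completions drained" loop can be modeled in a few lines. This is a toy sketch, not real libusb code: `submit()` and `on_complete()` stand in for `libusb_submit_transfer()` and the transfer-completion callback, and the slot count and transfer size are made-up illustrative numbers.

```c
/* Toy model of the queue-keeping pattern: a fixed pool of transfer slots
 * is kept in flight at all times; each completion is drained and its slot
 * resubmitted immediately, so the device never starves. */
#define SLOTS 8          /* transfers kept in flight (illustrative)     */
#define XFER_BYTES 512   /* bulk transfer size (illustrative)           */

static int in_flight = 0;
static long bytes_done = 0;

static void submit(int slot) {      /* stands in for queuing to the host stack */
    (void)slot;
    in_flight++;
}

static void on_complete(int slot) { /* stands in for the completion callback   */
    in_flight--;
    bytes_done += XFER_BYTES;       /* process/drain the finished request      */
    submit(slot);                   /* resubmit right away: queue stays full   */
}

long run(int completions) {
    for (int s = 0; s < SLOTS; s++) /* prime the queue before any data flows   */
        submit(s);
    /* The host stack completes transfers at device speed; the driver only
     * has to keep up with the drain-and-resubmit bookkeeping. */
    for (int i = 0; i < completions; i++)
        on_complete(i % SLOTS);
    return bytes_done;
}
```

The point of the pattern is that throughput is set by the device and the host stack, not by the userspace process, as long as `in_flight` never hits zero.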
Never make it to userspace? What the heck? What prevents the kernel from starting the first process as soon as the scheduler is up? It's a happenstance, in a way, that linux starts all the drivers and only then launches init. There's no solid technical reason for it, other than inertia. Not anymore.
has specifically formed MPAA and RIAA to be, among others, their bully
Well, wrong choice of words. Of course MPAA and RIAA well predate the current bullying efforts (by a century or so), but the industry supports their existence and those relatively newfangled efforts.
But the industry doesn't care about those peanut payments! The industry has specifically formed MPAA and RIAA to be, among others, their bully! It's all about scaring people, not about extracting payments that would form any sort of a sustainable revenue stream. It's coincidental that MPAA has chosen extorting money as their modus operandi. The money they extort wouldn't pay their rent, much less the salaries of all the lawyers who work for them. If you look at cash only, it's all a money sink -- a ruse by the lawyers to get paid for doing something that is not only unproductive but in fact counterproductive for the industry! The industry was coaxed by lawyers into thinking that somehow the FUD campaign has a net positive financial effect. Of course we all know this is a load of bullshit, but the industry is none the wiser. If you trace the various efforts, they all have lawyers at the helm -- it's a long-running campaign by the IP lawyers to set themselves up as well-paid parasites on the industry.
The industry PAYS MPAA TO EXIST, and they are (in their self-delusion) happy to do so. MPAA has nothing to do with a debt collection agency, and I have no idea why anyone would think they are there to collect payments to distribute back to anyone in the industry, much less the artists! Again, the money the MPAA extorts wouldn't even remotely begin to cover their costs. In industrial terms, it's peanuts -- of course it can be life-wrecking for the poor souls at the receiving end of the lawsuits. But just the fact that it's a lot of money to a proverbial grandma doesn't mean it's of any consequence to MPAA or the industry proper. It's at the level of rounding errors in their financial reports. Seriously.
Uh, but you see, to the common folk it all sounds the same. Inventor, scientist, big deal.
fifty to a hundred years ago, a bunch of people who knew nothing about science and not much more about Christianity made a really stupid guess, and they're too proud to admit that they guessed wrong
This! Although the process really extends way past 100 years ago, I agree that the period you mention was perhaps the most influential.
Given that ideas only reach the status of theory if they have overwhelming evidence supporting them
Unfortunately, nobody fucking knows that, it seems. Nobody on the school board, certainly, and a whole lot of college graduates don't know that either. It's quite sad, really. People don't grok the simple fact that in reality, as opposed to the common man's fantasy, science's ultimate goal is to produce theories, and making a scientific theory is a crowning achievement. I'd be pretty damn proud of myself if I were a scientist and had produced an accepted theory.
The common man's fantasy about what "theory" means is the exact opposite of the true meaning as it applies when talking about science. I'm OK with common uses of the word "theory", but one has to understand that it all changes when the subject is science, including science education. I'm thinking I'd be all for getting a sledgehammer out and pounding it into some people's thick skulls until they get it.
It's better if those few crucial features are in a process that can be restarted without having to reboot the kernel after a panic! That's the whole point. If a console server process fails, it simply gets restarted and that's it. You get your consoles back. That's reliable software to me. The alternative would be a kernel oops or a panic. Even if the kernel keeps running after an oops, you should normally reboot at the earliest opportunity, because you can't be sure, without examining the oops in detail, to what extent the kernel's internal data structures are consistent.

A panic is simply an immediate forced reboot; an oops is a reboot that's forced but may be postponable -- IMHO an oops is only postponable if there's a pair of understanding eyes that can examine it, check where in the kernel it happened, and judge how safe it is to continue. Otherwise, an oops should result in an instant reboot, preceded by as much of an orderly shutdown as possible. For the most part this means: send SIGTERM to database and cache processes, send SIGKILL to all other processes, wait for the terminated processes to shut down within some window of opportunity, kill them after the window has passed, recursively unmount filesystems, reboot.
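The orderly-shutdown sequence above hinges on one primitive: terminate a process politely, then escalate after a grace window. A minimal sketch of that step, with a hypothetical helper name and a simple polling wait for brevity:

```c
/* Sketch of "SIGTERM, wait within a window, SIGKILL the stragglers".
 * The helper name and the 100 ms polling interval are illustrative;
 * a real shutdown path would batch this over many pids. */
#include <signal.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <time.h>
#include <unistd.h>

/* Returns 0 if the process exited within the window, 1 if it had to
 * be SIGKILLed after the window passed. */
int shutdown_with_grace(pid_t pid, int grace_sec) {
    kill(pid, SIGTERM);                       /* ask nicely first          */
    for (int i = 0; i < grace_sec * 10; i++) {
        int status;
        if (waitpid(pid, &status, WNOHANG) == pid)
            return 0;                         /* exited within the window  */
        struct timespec ts = {0, 100 * 1000 * 1000};
        nanosleep(&ts, NULL);                 /* poll every 100 ms         */
    }
    kill(pid, SIGKILL);                       /* window passed: force it   */
    waitpid(pid, NULL, 0);                    /* reap so it isn't a zombie */
    return 1;
}
```

Databases and caches get the SIGTERM-with-window treatment so they can flush; everything else can go straight to the SIGKILL branch.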
It's not a long stretch to imagine that you can start a userspace process long before all of the kernel drivers are initialized. It's basically a big waste of time that the kernel delays starting init until *after* all the drivers are initialized. The applications that depend on functionality of certain parts of the kernel should simply wait until those parts become available. That's all there is to it. Also, the drivers can be initialized in parallel: there's no reason for the network card driver's initialization not to run in parallel with waiting for the SCSI RAID driver to come up. The console doesn't need any of that and can be started as the very first thing, even if it were a userspace driver. The kernel usually starts off an initrd image; that's where the console application would be. I think it'd be wonderful if the kernel went in this direction, not only for the console but for all other drivers as well. The applications that need to wait for certain things can get notified when drivers become ready.
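The "start init early, let drivers come up in parallel, notify waiters" idea can be sketched with ordinary threads and a condition variable. Everything here is illustrative -- the "drivers" are just threads with made-up probe delays, not real kernel code:

```c
/* Two "drivers" initialize concurrently; "init" starts immediately and
 * blocks only on the subsystem it actually needs (the network), not on
 * the slow RAID probe. */
#include <pthread.h>
#include <stdbool.h>
#include <unistd.h>

static pthread_mutex_t lock  = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  ready = PTHREAD_COND_INITIALIZER;
static bool net_up = false, raid_up = false;

static void *init_net(void *arg) {            /* fast driver probe */
    (void)arg;
    usleep(10 * 1000);                        /* pretend probe time */
    pthread_mutex_lock(&lock);
    net_up = true;
    pthread_cond_broadcast(&ready);           /* notify any waiters */
    pthread_mutex_unlock(&lock);
    return NULL;
}

static void *init_raid(void *arg) {           /* slow driver probe */
    (void)arg;
    usleep(200 * 1000);
    pthread_mutex_lock(&lock);
    raid_up = true;
    pthread_cond_broadcast(&ready);
    pthread_mutex_unlock(&lock);
    return NULL;
}

/* What "init" would do: wait only for the dependency it needs, while the
 * other drivers keep probing in parallel. */
void wait_for_net(void) {
    pthread_mutex_lock(&lock);
    while (!net_up)
        pthread_cond_wait(&ready, &lock);
    pthread_mutex_unlock(&lock);
}
```

The point is that the waiter unblocks as soon as the network "driver" signals readiness, regardless of how long the RAID probe takes -- the two initializations overlap instead of serializing.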
It's fine if the userspace console process is less stable. It can restart without undue consequences! Fail fast, isolate failures. That's the way to reliable software. The Ericsson people knew what they were doing when they came up with Erlang's approach to handling software failure. It's 20 years later, and the Linux kernel people are finally getting the message. I applaud that.
What's wrong with it being in the userspace? At least if it crashes, it doesn't bring the whole kernel down. The process is relaunched by the kernel, and off you go. It's the Erlang mantra of reliable software: fail fast, in limited scope. I love it. That's why I'm a big fan of userspace drivers for all devices that are application-specific. Suppose you have a USB-based toy that has vendor-specific functionality and isn't one of the standard USB device classes, and you have an application for it. It should only have a userspace driver bundled with the application. You start the app, it claims any USB devices it can handle, and goes from there. That's often how it's done on OS X; it's quite unpopular on Windows, unfortunately.
All Finagle Laws may be bypassed by learning the simple art of doing without thinking.