Comment Re:We get cancer because we have linear DNA (Score 1) 185

That's easy to fix. If a cell has not just the existing error correction codes but digital ones as well, then damage from mutagenic substances (of which there are a lot) and telomere shortening can be fixed. Well, once we've figured out how to modify the DNA in-situ. Nanotech should have that sorted soonish.

The existing error correction is neither very good nor very reliable. This is a good thing, because it allows evolution. You don't want good error correction between generations. You just want it within a single person over their lifespan, and you want it restricted so that it doesn't clash with retrotransposons and other similar mechanisms. So, basically, one whole inter-gene gap or one whole gene protected by one code. Doable. You still need cell death - intercept the signal and use a guaranteed method.
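
To make the "one gene, one code" idea concrete, here's a minimal sketch in C. It is purely illustrative: the gene strings are made up, and a CRC only detects a mutation (flagging it for repair from a stored reference copy); a real scheme would need a proper error-correcting code to fix the damage in place.

-----
/* Hypothetical sketch: one whole gene protected by one digital checksum. */
#include <stdint.h>
#include <stdio.h>

/* CRC-32 (reflected, polynomial 0xEDB88320) over a base sequence. */
static uint32_t crc32_bases(const char *seq)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (; *seq; seq++) {
        crc ^= (uint8_t)*seq;
        for (int bit = 0; bit < 8; bit++)
            crc = (crc >> 1) ^ (0xEDB88320u & (uint32_t)-(int)(crc & 1));
    }
    return ~crc;
}

int main(void)
{
    const char *reference_copy = "ATGGCGTTAACC";   /* stored, protected copy of the gene */
    const char *current_copy   = "ATGGCGTTAACG";   /* copy read back from the cell */

    /* A checksum mismatch means a mutation crept in somewhere in this gene. */
    if (crc32_bases(reference_copy) != crc32_bases(current_copy))
        puts("mutation detected: schedule in-situ repair");
    else
        puts("gene intact");
    return 0;
}
-----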

Comment Exploit that which you cannot defeat (Score 1) 185

Here, in the year Lemon Meringue, we decided to solve the problem once and for all.

Instead of trying to kill cancer, we hijack its techniques. We start by putting nanocomputers in the vacuoles of each brain cell. These keep a continuous backup copy of the state of the brain, right up to death. Cancers disable the hard limit on cell duplication that can't otherwise be bypassed. Using the techniques of cell-devouring macrophages, the cancer "consumes" the old cells and replaces them with new ones. It can't spread anywhere else, because that's built into how this cancer is designed to spread. Once the body has been fully replaced, the cancer is disabled. The brain is then reprogrammed by the nanocomputers from the backup, and the remaining cells are specialized by means of chemical signals.

This does result in oddly-shaped livers and three-handed software developers, but so far this has boosted productivity.

Comment Re:It's not a kernel problem (Score 1) 727

The free market didn't provide alternatives. The free market created Microsoft and the other monopolies. Adam Smith warned against a free market.

The majority do not create alternatives, either. The majority prefer things not to change. The familiar will always beat the superior in the marketplace.

Alternatives are created by small groups of people who are disreputable, commercially unproductive and at total odds with the consumer. These alternatives will typically take 7-14 years to develop. Adoption will typically peak after another 7-14 years. By the 30th year after the first concept, the idea will be "obvious" and its destiny an "inevitable consequence" of how things are done.

In reality, it takes exceptional courage and a total disregard for "how things are done". 7-14 years with guaranteed losses is not how the marketplace works. Even thinking along those lines is often met with derision and calls of "Socialism!" by the market. No, real inventors are the enemy of the free market.

If you want a Linux desktop, you must forgo all dreams of wealth. You must subject yourself to the abject poverty that is the lot of an inventor in a market economy, or move to somewhere that supports the real achievers.

Comment The problem isn't X. (Score 1) 727

The problem is corruption. OSDL was working on a Linux desktop environment, but a key (financial) figure in the organization worked hard to kill off its success and left around the time the unit went bankrupt. Several organizations that person has been linked to have either gone belly-up or suffered catastrophic failure.

I won't name names; there's no point. What is the point is that such people exist in the Linux community at all: parasites who destroy good engineering and good work for some personal benefit of their own.

X is not great, but it's just a specification. People have developed PostScript-based GUIs using it. It's merely an API that you can implement as you like (someone ported it to Java) and extend as you like (Sun did that all the time). The reference implementation is just that. Supporting just the set of functions used by GLib/GTK and Qt would give you almost all the key software.
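
To illustrate the "it's just an API" point, here's a minimal X client in C (a sketch, not anyone's production code). It only talks to whatever server implements the protocol; it neither knows nor cares whether that server is the reference implementation, a Java port, or something homegrown.

-----
/* Minimal X11 client: opens a window and waits for a keypress.
 * Links with -lX11 and depends only on the Xlib API, not on any
 * particular X server implementation. */
#include <X11/Xlib.h>
#include <stdio.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);      /* whatever server $DISPLAY points at */
    if (!dpy) {
        fprintf(stderr, "cannot open display\n");
        return 1;
    }

    int screen = DefaultScreen(dpy);
    Window win = XCreateSimpleWindow(dpy, RootWindow(dpy, screen),
                                     10, 10, 320, 200, 1,
                                     BlackPixel(dpy, screen),
                                     WhitePixel(dpy, screen));
    XSelectInput(dpy, win, ExposureMask | KeyPressMask);
    XMapWindow(dpy, win);

    for (;;) {
        XEvent ev;
        XNextEvent(dpy, &ev);
        if (ev.type == KeyPress)            /* any key closes the window */
            break;
    }

    XCloseDisplay(dpy);
    return 0;
}
-----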

Alternatively, write a GUI that has a port of those three libraries. You could use Berlin as a starting point, or build off Linux framebuffers, or perhaps use SDL, or write something unique. If it supports software needing those libraries, then almost everything in actual use will be usable and almost everything written around X in the future will also be usable. If what you write is better than X, people will switch.

Comment Re:Nobody else seems to want it (Score 1) 727

Binary drivers exist and are loadable so long as they are properly versioned.

Block and filesystem drivers can always be pushed into userspace (FUSE for filesystems, NBD and the like for block devices).

Automatic builders can recompile a shim layer against each new kernel (or even the current git tree), and automatic test harnesses or a repurposed Linux Test Project can validate the shim. You don't need to re-validate the driver itself for every kernel: if it's totally isolated from the OS and worked before, it'll keep working.
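
For what it's worth, the "recompile a shim against each new kernel" part already has off-the-shelf tooling: DKMS does exactly this on module install and on every kernel update. A hypothetical dkms.conf for such a shim (package and module names here are invented) might look like:

-----
PACKAGE_NAME="examplehw-shim"
PACKAGE_VERSION="1.0"
BUILT_MODULE_NAME[0]="examplehw_shim"
DEST_MODULE_LOCATION[0]="/kernel/drivers/misc"
AUTOINSTALL="yes"
-----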

Automated distributors can then place the binaries in a corporate yum/apt repository.

What has an ABI got to do with it? Only gets in the way of writing clean code.

Comment Why? (Score 1) 727

The commands to the bus don't change.
The commands sent to the hardware don't change.
The internal logic won't change.

That leaves the specific hooks to the OS and the externally visible structures.

Nobody is insane enough to use globals directly, and structures are subject to change without notice. So external stuff will already be isolated.

If the hardware is available for any two of HyperTransport, PCI Express 2.x, VME/VXI or one of the low-power busses used on mobile hand-warmers, err, smart devices, then the actual calls to the bus hardware will be compartmentalized or go through an OS-based abstraction layer.

So 95% of a well-written driver is OS-agnostic and the remaining 5% is already isolated.

So either drivers are very badly written (which is a crime against sanity) or the hardware vendor could place the OS-dependent code in its own DLL at bugger-all cost to them. Since the OS-dependent code has nothing trade-secret in it, they can publish the source for the shim at no risk. Since the shim isn't the driver, there's no implication of support for OSes they don't know or understand. It's not their problem what the shim is used for.
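
As a rough sketch of what that split could look like on the Linux side (everything here is hypothetical: the "examplehw" names are invented, and the closed core is assumed to be shipped as a prebuilt object the shim links against):

-----
/* linux_shim.c -- hypothetical OS-specific shim for a closed, OS-agnostic core.
 * The core's entry points are resolved at link time from the vendor's blob;
 * only this small file needs rebuilding for new kernels, and its source can
 * be published without revealing anything about the hardware. */
#include <linux/module.h>
#include <linux/init.h>

/* OS-agnostic core (binary blob); names are made up for illustration. */
extern int  examplehw_core_init(void);
extern void examplehw_core_exit(void);

static int __init examplehw_shim_init(void)
{
    pr_info("examplehw: starting OS-agnostic core\n");
    return examplehw_core_init();
}

static void __exit examplehw_shim_exit(void)
{
    examplehw_core_exit();
    pr_info("examplehw: core stopped\n");
}

module_init(examplehw_shim_init);
module_exit(examplehw_shim_exit);
MODULE_LICENSE("Proprietary");
MODULE_DESCRIPTION("Hypothetical shim between a closed driver core and the kernel");
-----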

Everyone's happy. Well, happier. The companies don't get harassed, the Linux users get their drivers, Microsoft gets fewer complaints about badly-written drivers killing their software. It's not open, it's not supported, but it's good enough.

Comment Re:Blame them, not Heartbleed (Score 1) 89

Heartbleed may be a huge IT problem, but you seem to have forgotten that health care system decisions are not made by IT security managers. They are run by demi-gods that we mere mortals are instructed to refer to as "doctors." And the doctor's prioritized view of IT is this:

#1. Be Available. I may need this system right this second in order to save a life. I don't care if it's my kid's Nintendo DS, I'm telling you it might save a life.
#2. Stay The Hell Out Of My Way. Don't interrupt me when I'm saving someone's life. And you don't know when that is; just that if you're interrupting me, it probably is now.
#3. Give Me Exactly What I Want. For I am the giver of life and death, and you must respect me.

So unless a problem is currently causing them an outage (so not just any old problem, it has to be causing an actual outage), it won't rise to the level of severity that says "skip all quality control processes and immediately patch this."

It doesn't matter if the router is vulnerable to hacking. It doesn't matter if a hacker who pwns the router could brick it. It doesn't matter if he is stealing patient records. Those things aren't interfering with #1, 2, or 3. So follow procedures, deploy it in a lab, go through testing and QA, and install it only on Wednesday afternoons when the hospital admins are all on the back nine.

Comment Re:AdBlock = Inferior + 'Souled-Out' vs. hosts... (Score 1) 611

Incidentally, I also use the Linux kernel feature called Transparent Hugepage Support. I set it to "always" (as opposed to only when a program specifically asks for it). This is known to increase the memory footprint of applications, though by how much I couldn't tell you. The idea of the feature is that the kernel's memory management gains performance ("This feature can improve computing performance to certain applications by speeding up page faults during memory allocation, by reducing the number of tlb misses and by speeding up the pagetable walking") at the cost of higher memory usage.
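
In case it's useful to anyone, here's a small sketch in C of flipping that knob (it assumes the standard sysfs path and needs root to write; the same setting can also come from boot parameters):

-----
/* Read the current Transparent Hugepage setting, then set it to "always".
 * The active value is shown in brackets, e.g. "always [madvise] never". */
#include <stdio.h>

#define THP_KNOB "/sys/kernel/mm/transparent_hugepage/enabled"

int main(void)
{
    char current[128] = {0};

    FILE *f = fopen(THP_KNOB, "r");
    if (f) {
        if (fgets(current, sizeof current, f))
            printf("current setting: %s", current);
        fclose(f);
    }

    f = fopen(THP_KNOB, "w");
    if (!f) {
        perror("writing " THP_KNOB " (root required)");
        return 1;
    }
    fputs("always\n", f);   /* other accepted values: "madvise", "never" */
    return fclose(f) == 0 ? 0 : 1;
}
-----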

Just thought I'd mention that since it may be relevant.

Comment Re:AdBlock = Inferior + 'Souled-Out' vs. hosts... (Score 1) 611

* Addons slowup slower usermode browsers layering on more - & bloat RAM consumption too + hugely excessive cpu use (4++gb extra in FireFox https://blog.mozilla.org/nneth... [mozilla.org])

That this can happen, I do not dispute. But I believe the case for it is being severely overstated by people who *ahem* have a vested interest in promoting alternatives to browser add-ons.

I currently run Firefox with 24 addons installed and actively enabled. This is mostly for ad-blocking and privacy enhancement, with a few miscellaneous add-ons like one that restores the old-style Stop button behavior (stopping animated GIFs as well as page loads). Since you seem to appreciate bold: there is no slowdown or latency problem that I can subjectively notice. If my addons are "slowing down the browser", they're doing it below the threshold of what a human can detect. I consider that a good and reasonable trade-off to make on my own systems.

On memory... I have 26 tabs open with a wide variety of sites loaded, many of which are content-heavy. This browser instance has been running continuously for many days. KSysGuard gives a nice breakdown of the memory usage of my Firefox process and this is the summary:

-----

Summary

The process firefox (with pid 5618) is using approximately 993.9 MB of memory.
It is using 971.4 MB privately, 15.6 MB for pixmaps, and a further 26.5 MB that is, or could be, shared with other programs.
Dividing up the shared memory between all the processes sharing that memory we get a reduced shared memory usage of 7.0 MB. Adding that to the private and pixmap usage, we get the above mentioned total memory footprint of 993.9 MB.

-----

Another section mentions that the 15.6MB for pixmaps may be stored in the graphics card's memory. At any rate, this is nowhere near 4+ gigs. Nor have I ever, with any version of Firefox, experienced anything remotely like 4GB of memory usage. This is a 64-bit system running a 64-bit Firefox that I compiled from source (your article mentions the memory penalty for Adblock is higher on 64-bit systems, which makes sense when you understand what that means). This system has 8GB of RAM installed, so ~994MB is negligible to me. For a little perspective, about 6GB is currently being used for buffers and disk cache, since that is what Linux does with memory that would otherwise sit empty doing nothing. If I run a Windows game via WINE, that comes down to 4-5GB of buffers/cache, since another 1-2GB of memory gets used.

Incidentally, I don't run Windows so I don't use your hosts file tool (and even if I ran Windows I'd probably rather roll my own, nothing personal). But I do use a comprehensive /etc/hosts file. I believe that good security is done in overlapping, interlocking layers. "Security" does not mean just remote attackers, but also anything intrusive I don't want, like advertisers and their tracking. I use an /etc/hosts file AND Adblock Plus, NoScript, Privacy Badger, Ghostery, and several others. What one of them alone does not catch, another one will.
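
For anyone unfamiliar with the hosts-file layer, a couple of illustrative entries (the domains below are placeholders, not recommendations from any real blocklist):

-----
# Mapping unwanted hosts to 0.0.0.0 makes lookups resolve locally and fail fast.
0.0.0.0  ads.example.com
0.0.0.0  tracker.example.net
0.0.0.0  telemetry.example.org
-----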

Instead of viewing browser add-ons as an obstacle in your path to promoting your own solution, you could learn to work with them, use them effectively, and incorporate them into a multi-layered approach that includes all the work you've put into hosts files. Everyone would benefit that way, especially your users.

Comment Re:$230 (Score 2) 611

Don't get me wrong, DuckDuckGo sounds good. It sounds like they certainly don't actively track you. But I don't see them bragging that they "keep no data to hand over in the first place".

They don't use tracking cookies (their preference cookies aren't identifying; they're just a string of your options, if you've set them), so the most data they can have for identifying you is your IP address. They've been SSL by default for a long time (redirecting from http to https, and defaulting to https in search results where available, for example on Wikipedia), so you don't suddenly jump onto an unencrypted connection as soon as you leave.

It sounds much better than any other US-based search engine I'm aware of. But my own preference hasn't even logged IP addresses since 2009. You can also bookmark a URL generated with your preferences, so there's no need to accept even preference cookies from them (and the preferences include options like using POST instead of GET, so search terms stay out of other sites' logs). And the aforementioned deal about being outside US jurisdiction is nice too.

DuckDuckGo also does not appear to offer to act as your Web proxy the way Startpage does. I rarely ever use this feature, but it's nice that they offer it. Startpage also offers the option to act as your proxy only for image/video searches, so other sites don't even get that data from you. This is what I like about them: not only do they not log and track you themselves, they also go out of their way to enhance your privacy against third-party sites.

I'm not knocking DuckDuckGo by any means; in my opinion it's good but Startpage/Ixquick is great. Yet, I think all of us benefit from having multiple privacy-conscious options available. Choice is a good thing.
