
Comment Re:Who's to say? (Score 5, Insightful) 99

If it were true that long-term low level radiation were unquestionably harmful, you'd expect to find a clear negative trend.

No, that's not what we'd expect to find at all.

We'd expect to find, at the high end, a level of radiation that is absolutely lethal, and as the dose is reduced, the impact would drop off steadily until it reaches a zone where life expectancy is merely reduced. However, that reduction is more or less on an absolute scale, and has to be compared to the normal life expectancy of the species being exposed. An insect may survive high doses of radiation simply because it wouldn't normally live long enough to exhibit symptoms, while a longer-lived animal like a human will likely survive long enough to get a cancer that ultimately causes death.

At a very low dose, the chance of any noticeable symptom from radiation is small enough that any such symptom could just as easily have been caused by millions of other factors, so usually nobody cares. There is still a negative trend in survivability, but it's dwarfed by all of the other fatal conditions.

Too little radiation and the species dies due to inability to keep pace with changing environmental conditions.

Radiation isn't the only mechanism for mutation, though. Rather, it's the cheap way to make a lot of mutations very quickly, usually in places that cannot possibly contribute to evolution.

In order to change the species, an offspring's DNA must be mutated. That depends on a few thousand cells out of the trillions in a human body: the cells involved in meiosis, splitting and reassembling the DNA that will become half of the offspring. Most mutations happen during that reassembly process, usually through random chemical events rather than any radiation: this enzyme doesn't successfully react with that protein, so a gene gets skipped or altered or inserted... It is extremely rare that a gene is altered by radiation during the process.

Once an offspring's development begins, though, the effects of mutations become more pronounced. If radiation mutates a single cell during early stages of growth, that fetus will develop with a cluster of mutated cells. Unless those cells are destined to become a gonad, however, the mutation will die with that generation, and the species will not change.

Similarly, radiation affecting a mature individual is unlikely to have any positive effect, as the mutation is almost always either destructive or irrelevant. The proper functioning of a human body requires millions of interactions between tens of thousands of proteins, so randomly changing one protein is more likely to break something than to add new functionality. Of course, as before, even breaking something is only going to affect the species if it happens to occur in a cell involved in reproduction.

It is important to remember that evolution is never towards anything. It is away from an inability to reproduce (usually due to death). As an illustration, you must realize that you are the result of an unbroken line of millions of ancestors dating back millions of years, and every single one of those millions of ancestors was fertile and successful in mating. There is no scorecard in evolution. Either you pass on your genes, or you don't. It doesn't matter if your changing environment caused you severe illness or discomfort. As long as you manage to find a mate and make a child, you've won the natural selection game.

In short, radiation is a purely random occurrence with purely random effects, and the odds of any particular radiation-caused mutation being beneficial are so absurdly small that it is absolutely safe to say that overall, there is no safe dose.

Comment Re:Init alternatives (Score 1) 330

Has the development of new cinder block designs 'stagnated' because nobody has designed a new cinder block for probably a century? No, the existing design is fine and doesn't need replacement.

That's a very interesting analogy, since I'm currently in a Japanese hotel overlooking a construction site.

Rather than the typical cinder blocks, the bottom two floors are being built from large reinforced concrete slabs, about 1 meter by 3 meters, which interlock and have what appear to be plastic pieces at regular intervals. It has definitely sped up construction compared to laying cinder blocks, and I suspect the slabs and plastic provide some measure of safety in an earthquake.

Elsewhere on the technological spectrum, several years ago I volunteered in Africa, where buildings were built with more traditional cinder blocks. The blocks, though, were formed with poor-quality cement, and crumbled when put under load. In America in the 1980s, I was involved in a remodeling project that had to replace some 50-year-old blocks because they were falling apart. Even the modern blocks of 30 years ago had much more consistent quality, because manufacturers could chemically test the cement before using it.

In short, construction techniques and technology have indeed improved in the last century. Not even the traditional block dimensions are "fine" for all cases, as I see across the street, though in other areas compatibility is necessary. The requirements have changed, but you seem blissfully unaware. Fortunately, your ignorance of technological progress does not negate its existence.

Comment Re:We need a parts database for stuff. (Score 1) 273

But certainly there are plenty of components (such as the plastic drive gears in a garage door opener) which can be printed and replaced by consumers.

And how often does the average consumer need to print out weird parts? (And how many of them actually have the skills, experience, and tools to make use of them?)
 
That is the fundamental limitation of 3D printing - the average consumer doesn't have significant need and/or the relevant skills. The "needs" 3D enthusiasts keep positing will enable consumer (mass market) adoption are in fact edge cases.

Comment Re:The problems are many (Score 1) 273

Prusa Research has been pushing the technology closer to a consumer class appliance.

The problem isn't the lack of a consumer class appliance. Never has been. The problem is lack of consumer need or even desire - and that's going to be difficult to overcome. Most people don't need something printed daily, or even weekly. A significant percentage don't need something printed even monthly. There's just no mass market to be had. Other than the maker market (the folks who make cool stuff just because), the only real market in the near term (a decade or so) is other hobbyists (model railroaders, dollhouse builders, etc.), and that market isn't that big and is going to be very tough to crack. 3D printers are nowhere near capable of producing all the components required (and won't be for a good while yet), and the cost of learning a new skillset on top of the cash outlay will be a strong deterrent.
 
The only market for 3D printers in the near term isn't the individual consumer (and likely won't ever be), but the small manufacturer serving niche communities.

Comment Re:Init alternatives (Score 1) 330

You're reading far too much into one word. You should try reading the rest of that paragraph.

The first init systems were damned little more than just a shell. After that, we moved to running a single script at startup, and eventually went to runlevels with some common conventions. That's where development stagnated for a decade or two, and that's where I'm drawing my "antique" line. At the time, systems couldn't handle multitasking very well (mostly in terms of race conditions and programmers' sanity), and the massive university systems didn't really need to boot quickly, either, so there wasn't much development in parallel initialization.

Since then, Linux has been created and has moved to the desktop, and we have a whole slew of new init systems, most of which natively support modern expectations around parallelism, security, configuration, and hardware, along with other developments that have arrived since their predecessors left the design phase. It isn't so much that "old is bad" as that the new is more likely to have been designed with modern paradigms in mind. Despite your dismissal, parallelism in particular is important, especially as Linux has taken on the role of embedded OS of choice for smart devices and cheap laptops.

While "change for the sake of change" may be wasted effort, it must be compared against the effort of keeping the old system. For example, how much effort is required of a distro maintainer to write and maintain init scripts for all their packages, including functionality for checking that dependencies started correctly and that scripts follow current best practices? How much effort is required to even make sure that the scripts are numbered in order to start correctly? In an age where building a dependency tree is only a few milliseconds of work, I would say it is wasted effort to make a sysadmin figure it out.

On the side of systemd proponents, I don't think the argument has ever been that "old means bad". Rather, the argument has been that we've learned a few lessons over the past thirty years, and we ought to put those lessons into our software.

Comment Re:Init alternatives (Score 2) 330

So let me get this straight... in order to say "Foo depends on some kind of bar, which happens to be baz on this system", I need to write a "bar" definition that actually runs "baz", and go modify a completely separate dependency file to add "foo".

...and you're suggesting this is clean?

Comment Re:Init alternatives (Score 3, Informative) 330

With all due respect, that comparison is awful.

In the effort to make an "apples to apples" comparison, it uses only the bare minimum of functionality from each toolset. There's no illustration of dependencies or capability control. It is useful for getting a rough idea of how the init systems' config files look, but not really as the basis for any kind of comparison, especially with regard to advanced features.

Comment Re:Init alternatives (Score 2, Interesting) 330

Well, on my home rolled NAS appliance, I really like the ability to reboot all of my VMs very quickly when applying security updates, because I'm not the only one that uses it.

A fair point.

The thing is, there's so much damn drama over it that I'm curious what its detractors want to use in its place.

Typically sysvinit or mostly-compatible equivalents. From my perspective, they don't want to learn something new, and they don't see the existing system as broken.

And why are some people going to go out of their way to say "you don't need a faster boot time" when they don't know my use case?

The obligatory XKCD applies. Most boot processes are fast enough now that it's not really worthwhile for an end user to shave a few seconds off the time. On the other hand, doing something as a hobbyist is entirely about wasting time, so I won't hold that against you.

The biggest improvement over antique boot systems is the move to parallel boot chains. Rather than running scripts one at a time, in order, a tree is built to determine which services depend on which other services. For example, it doesn't make sense to start the SSH server until the network is live. Several init systems do this, differing mostly in how they define dependencies. Some rely on specific services ("openssh-server relies on network") while others work on more generic capabilities ("remote-shell relies on network, and openssh-server is what we'll use for remote-shell").
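As a rough, abridged sketch of the two flavors (service names are illustrative, not copied from any distro's actual packaging): a systemd-style unit typically names the concrete units it must follow, while an LSB-style header in a SysV script declares and consumes generic facilities:

    # Specific-service flavor: a systemd unit naming concrete units.
    [Unit]
    Description=OpenSSH server (example)
    After=network.target auditd.service

    [Install]
    WantedBy=multi-user.target

    # Capability flavor: an LSB header in a SysV init script,
    # depending on abstract facilities and advertising one itself.
    ### BEGIN INIT INFO
    # Provides:       sshd
    # Required-Start: $network $syslog
    # Required-Stop:  $network $syslog
    ### END INIT INFO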

After parallelism, it gets tricky and subtle. Maybe we don't need all of a service to finish starting before the things that depend on it can begin. For example, we don't necessarily need all of our DHCP leases assigned before we know which network interfaces are connected. That requires a more granular service definition, but provides a lot more power, especially for systems with very complicated startup procedures. With that power, we can shave a few more seconds off the boot time, because we aren't required to wait while services settle, improving overall parallelism. That's useful for me (professionally, I build systems that must boot within a strict time limit, and may reboot every few hours), but most folks don't benefit enough to justify the added complexity.
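One concrete example of that granularity, for what it's worth (systemd shown here only because it's the one under discussion; the target names are real, the rest is illustrative): the vague idea of "the network is up" gets split into separate milestones, so each service waits for only as much of the network as it actually needs:

    # Waits for addresses to be assigned (i.e. DHCP has finished):
    [Unit]
    After=network-online.target
    Wants=network-online.target

    # Only waits for the network stack to be initialized, not for leases:
    [Unit]
    After=network.target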

Furthermore, with the way I hack my Android smartphone, I'd love it if it booted faster.

I don't know much about Android init, but I think it uses its own system unrelated to systemd, sysvinit, or any of the alternatives listed in TFS.

Comment Re:Init alternatives (Score 4, Interesting) 330

I'm not sure if that's a serious question or an attempt to troll, but regardless...

Speed is not why you should want (or not) systemd. It's Linux. How often do you expect to reboot the thing, anyway?

In the spirit of "Do one thing and do it well", systemd's goal is "manage services and dependencies". To that end, the only real interaction you normally have with systemd is to start or stop a service, and to view the associated logs if some service is misbehaving. In my opinion, then, I don't really see the point in changing one's distro (including support lifecycles, development trust, and organizational philosophy) just to swap out init. It's just not that big a deal.
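For reference, that day-to-day interaction amounts to a handful of commands (the unit name is just an example; it varies by distro):

    systemctl start sshd.service      # start a service
    systemctl stop sshd.service       # stop it again
    systemctl status sshd.service     # is it running, and if not, why
    journalctl -u sshd.service        # read that service's logs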

Comment Re:Garages? (Score 1) 11

Think about the power-to-weight ratio. A plastic vehicle with a passenger or two would weigh very little on Ceres, so the ratio would be very high, especially after they found the ferromagnetics in the belt that could be magnetized a hundred times as strongly as today's (that story, "The Pirate", is still in edit). Replace the magnets in a 100 watt motor with them, and one watt will run that motor as well as 100 did the old.

They already had real moon buggies; they're still up there. They used wheels, but the moon is a LOT heavier than Ceres.

Imagine playing basketball on Ceres? I might add that to a story; there were microgravity sports in "Mars, Ho!".
