Comment: Same here. (Score 1) 542

I have similar issues:
  - Towing several tons (travel trailer or 23 foot trailerable-with-extreme-trailer deep-keel coastal-water-ocean-capable sailboat) up and down mountains and cross-country.
  - Going to/from the ranch - over 250 miles one way (over the Altamont grade, across the central valley, and through a pass in the Sierras) - with the last 0.7 miles sometimes hubcap-deep mud.
  - Carrying ranch groceries for several months and/or other supplies or equipment from the nearest supermarket etc. - 27 miles away.
  - Off-roading to visit ghost towns and other historic sites in the Nevada Desert - where "running out of gas" - in the absence of cell phone service - might mean your skeletons are discovered in a couple years.
and so on.

On the other hand, for trips during about 3/4 of the year, NOT towing, a plug-in hybrid or an all-electric vehicle with sufficient range, serious regenerative braking, and adequate cargo capacity for two weeks' groceries and luggage for two would be ideal. Charge it up at each end (off a windmill/solar at the Nevada end) to start full, use regenerative braking on the downslopes to power across the valley or up the next upslope. For a hybrid: Top off the batteries while cruising the central valley and use batteries plus engine to avoid being a creeping traffic hazard on the mountain roads.
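For a sense of scale on the charge-at-each-end, regenerate-on-the-downslopes strategy above, here's a back-of-the-envelope sketch. Every number in it - vehicle mass, elevation drop, 60% recovery, 0.2 kWh/km consumption - is an assumption for illustration, not a spec for any real vehicle:

```python
# Rough illustration: energy recoverable by regenerative braking on a long
# mountain descent, and roughly how far it stretches across the valley.
G = 9.81  # m/s^2

def descent_energy_kwh(mass_kg, drop_m, regen_efficiency=0.6):
    """Potential energy released on the descent, times an (assumed)
    round-trip regen efficiency, converted to kWh."""
    joules = mass_kg * G * drop_m * regen_efficiency
    return joules / 3.6e6  # 1 kWh = 3.6e6 J

# A ~2500 kg vehicle descending ~1000 m, recovering 60% of the drop:
recovered = descent_energy_kwh(2500, 1000)   # ~4.1 kWh
# At an assumed highway consumption of 0.2 kWh/km:
extra_km = recovered / 0.2                   # ~20 km of "free" range
```

A few kWh per big descent isn't a full charge, but it's exactly the cross-the-valley cushion described above.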

My cycle would be almost identical to a Silicon Valley worker who mostly commutes 25 miles each way and occasionally vacations at the Lake Tahoe ski resorts or Reno or camps in the Sierras. A single vehicle that could do both - rather than needing two vehicles to accommodate the use pattern - would be ideal.

Comment: Systemd, pass II (Score 1) 171

Sure, no problem. If you dislike systemd that much, it certainly makes sense to move to a different software platform.

I don't particularly dislike systemd per se. But the controversy around it, and the picture of it and its project painted by its opponents (some of whom have enough creds that they're unlikely to be talking through their hats), indicate that the claimed issues are likely to be real problems, and that this may be a tipping point for Linux adoption and for user choice among distributions or OSes.

Your Snowden argument isn't particularly applicable in this instance, as you have access to the full source code for systemd. If you're not comfortable looking through C code, then any init system would be a problem for you. ... If you think that porting your laptop, home servers and desktops to a completely different operating system is less effort than learning how systemd works, then I can only conclude you haven't tried to learn how systemd works. Or you've severely underestimated the work involved in moving to another OS.

I did my first Linux drivers (a PROM burner and a Selectric-with-solenoids printer) on my personal Altos ACS 68000 running System III, wrote a driver for a block-structured tape drive for A/UX - working from my own decompilation of their SCSI disk driver (since the sources weren't available to me initially), ported and augmented a mainframe RAID controller from SVR3 to SVR4, and so on, for nearly three decades, through hacking DeviceTree on my current project. I don't think C has many problems left for me, nor does moving to yet another UNIX environment - especially to one that is still organized in the old, familiar, fashion. B-)

As for trying to learn how systemd works, that's not the proper question. Instead, I ask what is so great about it that I should spend the time to do so, distracting me from my other work, and how doing this would meet my goals (especially the understand-the-security-issues goal), as compared to moving to a well-supported, time-proven, high-reliability, security-conscious alternative (which is also under a license that is less of a hard sell to the PHBs when building it into a shippable product.)

Snowden's revelations show that the NSA, and others like them, are adept at taking advantage of problems in obscure corners of systems, and at using that obscurity to avoid detection. The defence against this is simplicity and clarity: avoiding the complexity that creates subtle bugs and hides them by burying them in distractions. Bigger haystacks hide more needles.

The configuration for systemd isn't buried. It's there for all to see and change, in plain text. Logging in binary form is _optional_. You can choose to direct logged messages to syslog, or use both syslog and binary, to have the "best of both worlds", albeit with the worst of disk usage.
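For reference, that choice does live in plain text: journald's configuration file. A minimal sketch (this assumes a classic syslog daemon is also installed to receive the forwarded messages):

```ini
# /etc/systemd/journald.conf (excerpt)
[Journal]
# Keep the binary journal on disk...
Storage=persistent
# ...and also forward every logged message to a traditional syslog daemon,
# giving plain-text logs alongside the journal - at the cost of storing both.
ForwardToSyslog=yes
```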

Unfortunately, I don't get to make that choice myself. It's made by the distribution maintainers. My choice is to accept it, open the can of worms and redo the work of entire teams (and hope their software updates don't break things faster than I fix them), or pick another distribution or OS.

Again, why should I put myself on such a treadmill of unending extra work? If I could trust the maintainers to mostly make the right choices I could go along - with no more than an audit and perhaps an occasional tweak. But if they were making what I consider the right choices, I wouldn't expect to see such a debacle.

Entangling diverse processes into an interlocking mass is what operating systems are all about! ;)

No, it's not. The job of an operating system is to KEEP them from becoming an interlocking mass, while letting them become an interacting system to only the extent appropriate. It isolates them in their own boxes, protects them from each other, and facilitates their access to resources and ONLY their LEGITIMATE interaction wherever appropriate and/or necessary. The job is to Keep It Simple while letting it work.

Your phrasing, and making a joke of this issue, is symptomatic of what is alleged to be wrong with systemd and the engineering behind it.

Comment: Re:Routing around (Score 2) 197

At a large scale, the internet was designed to route around individual problems such as this.
Can't this same principle be applied on a smaller scale?

Yes, it can. Just dig a whole bunch MORE trenches around the country at enormous cost.

The SONET fiber networks were designed to be primarily intersecting rings. Most sites have fiber going in opposite directions (with a few having more than two fibers going off in more than two directions, so it's not just ONE big, convoluted, ring.) This is built right into the signaling architecture: Bandwidth slots are pre-assigned in both directions around the ring. Cut ONE fiber run and the signals that would have crossed the break are folded back at the boxes at each end of the break, run around the ring the other way, and get to where they're going after taking the long route. The switching is automatic and takes place in milliseconds. The ring approach means that the expensive cable runs are about as short and as separated as it's possible to make them.
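The fold-back behavior described above can be sketched as a toy model. This is an illustration of the routing idea only, not real SONET/APS signaling; `ring_path` and its node numbering are invented for the example:

```python
# Toy model of ring protection switching: traffic normally takes one way
# around the ring; if any link on that path is cut, it folds back and
# takes the long way around instead.
def ring_path(n, src, dst, cut=None):
    """Return the node sequence from src to dst on an n-node ring,
    avoiding the cut link (a frozenset of two adjacent nodes), if any."""
    def walk(step):
        path, node = [src], src
        while node != dst:
            nxt = (node + step) % n
            if cut and frozenset((node, nxt)) == cut:
                return None            # this direction crosses the break
            path.append(nxt)
            node = nxt
        return path

    return walk(+1) or walk(-1)        # try one way, fold back the other

# 8-node ring, node 1 -> node 3: normally the short way (1, 2, 3);
# with the 2-3 link cut, the signal runs the long way around.
normal = ring_path(8, 1, 3)
protected = ring_path(8, 1, 3, cut=frozenset((2, 3)))
```

Note what the model also shows about the second point below: a single cut always leaves a path, but cut the ring in two places and some node pairs have no route at all.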

But cut the ring in TWO places and it partitions into two, unconnected, networks. To get from one to the other you have to hope there's another run between the two pieces, and there's enough switching where they join to reroute the traffic.

IP WANs have, in some portions, also adopted the ring topology as they move to fiber, rather than sticking to the historic "network of intersecting trees" approach everywhere. That's partly because much of the long haul is done on formerly "dark fiber" laid down in bundles with the SONET rings from the great fiber buildout (or is carried in assigned bandwidth slots on the SONET networks themselves), partly because the same economics of achieving redundancy while minimizing costly digging apply to high-bandwidth networking regardless of the format of the traffic, and partly because routers that KNOW they're on a ring can reroute everything quickly when a fiber run fails, rather than rediscovering what's still alive and recomputing all the routing tables.

= = = = =

Personal note: Back when Pacific Bell was stringing its fibers around the San Francisco Bay Area, I was living in Palo Alto. They did their backbone as two rings. There was only one section, perhaps a mile long, where BOTH rings ran along the same route. It happened to go right past my house, with the big, many-manhole repeater vault right next to the house. (I used to daydream of running my own fiber the few feet into the vault. B-) The best I had available, in those pre-DSL days, were dialup with Telebit PEP modems (18-23 k half-duplex) and base-rate (128k) ISDN.)

Comment: Re: Thanks Linus! (Score 1) 171

Anyway, I digress. Advantages of systemd are: [long list]

Those are all very nice things to have.

Unfortunately, for my needs, simplicity and understandability are far more important than a fast boot and feature-rich management of the runtime environment. I need to KNOW that things are being handled properly and securely. That's become far more important since Snowden showed us, not that the spooks were getting into our computers (which we'd already figured was happening), but how DEEPLY and EFFECTIVELY their technology and personnel are able to do so.

If the improved functionality is at the cost of burying the configuration and logging in non-human-readable form and entangling diverse processes into an interlocking mass under a complex and ever growing manager, the shark has been jumped.

Though Linux has been becoming (MUCH!) more usable with time, its configuration has been buried progressively more deeply under more and more "convenient and simplifying", but non-transparent, configuration management tools. Systemd is the continuation of the trend. But it is also a quantum leap, rather than another thin slice off the salami. So it has apparently created the "Schelling point" where a lot of frogs simultaneously figure out that NOW is the time to jump out of the pot.

It's been a great ride. It had the potential to be even greater. But I think this is where it took the wrong turn and it's time for me to get serious about switching.

There's good reason to switch to NetBSD at work, on the product. (The code supporting the secret sauce is on the user side of the API and is Posix compatible, so it should be no big problem.) Porting my laptop, home servers, and desktops to OpenBSD now looks like it's worth the effort - and less effort than trying to learn, and keep abreast of, the internals of systemd.

Call me if somebody comes up with a way to obtain the key benefits of systemd in a simple and transparent manner, rather than creating an opaque mass reminiscent of Tron's Master Control Program. (Unfortunately, the downsides of systemd's approach seem to be built into its fundamental structure, so I don't expect it to evolve into something suitable, even if it's forked.)

Comment: The choice seems clear. (Score 1) 171

As I understand the three major forks:

One (OpenBSD) is for having as secure a desktop/server/embedded platform as the maintainers can manage - important in this post-Snowden era (as it was, all unknown, in the era preceding Snowden B-b). It is based outside the US so it can incorporate strong encryption without running afoul of US export controls.

One (NetBSD) is for developing network internals software and networking platforms (typically ported, when possible and not part of a proprietary product, to the others and other OSes.)

One (FreeBSD), now that its original purpose - getting the code disentangled from proprietary encumbrances - has been accomplished and the other two projects have forked from it, is for making an open unix-like system run on the widest range of hardware platforms and devices possible.

Unless you're using your machine for building networking equipment or it's a new hardware platform under development, the choice seems clear.

Comment: Re:... run away, screaming like little girl. (Score 0) 171

Are we allowed to say that out loud?

According to the first amendment, the government of the United States can't stop you.

If the denizens of the largest religion of the United States (Progressivism), or at least their media spokespreachers, decide to gang-shun you, there's still the other half of the population to interact with.

Fortunately, techies usually have to deal with real-world issues more than social ones. Unfortunately, PHBs have control of the money and have to interact with the fanatics. Fortunately, techies are noted for not being skilled in social fads and are given much slack. Unfortunately, that slack sometimes comes with a hook: The PHB tells his techies not to be a "lightning rod" and say/post things, in a way traceable to a particular employee of The Company, that might bring down the wrath of the pressure groups, make it look like his "herd of cats" really IS crazy and repel funders and customers, or otherwise make his job harder than it already is.

Which (mainly the "crazy cats" case) is why I started posting anything that MIGHT be controversial under pseudonyms. And a reference to the PHB's order is the origin of the slashdot pseudonym "Ungrounded Lightning Rod" (since slashed down to "Ungrounded Lightning" by changes to the slashcode that limited pseudonym size). And why, now that "ULR" has a large and valuable reputation (and though that reputation might help with job searches) I STILL don't out the corresponding "True Name" on any electronic medium.

(So now you know.)

In Linus' case, I doubt that even a gang-shun by the Politically Correct would have an impact, on his finances, his social standing, or the adoption of his work or technical ideas.

(Can you imagine, for instance, the luddites, or even Microsoft's PR department, trying to get people to avoid Linux and switch to Windows or MacOS, or avoid git and switch to ClearCase, BitKeeper, ... because Linus once said "... run away, screaming like little girl" and therefore must be a Sexist Pig? Especially, can you imagine ANY tech company using THAT slander and thus inviting that kind of scrutiny of their OWN people? B-) )

Comment: Re:He answered the most boring questions! (Score 2) 171

So have faith. Either he's right, and systemd will not turn out to be that bad, or his faith in systemd will end in tears, and then, he'll sit down and write a new startup management system that will kick everybody else's collective asses!

Or maybe somebody ELSE will write a kick-ass init system, and Linus will say "Hey, that's cool!" and promote it. Or the maintainers of a major distribution will adopt it. Or those of a MINOR distribution will - and users will migrate.

Linus is great. But why does THIS have to be HIS problem? The init system may have a bit of extra-special status and privilege, but it's largely NOT the kernel's problem. Along with the system call API it is THE boundary between the kernel guts and the user/demon/daemon firmament. It says to the kernel: "Thanks, I'll take it from here."

Comment: He's totally wrong. (Score 1) 288

When Bill Gates says:

"There's no battery technology that's even close to allowing us to take all of our energy from renewables and be able to use battery storage in order to deal not only with the 24-hour cycle but also with long periods of time where it's cloudy and you don't have sun or you don't have wind."

he's totally wrong.

For starters, there's Vanadium Redox. A flow battery (pumped electrolyte): Power is limited by the size of the reaction device's electrode and membrane assembly. Energy storage is limited by the size of the tanks. It's mainly used for utility-level energy storage down under (Oz or NZ, I think), because the patents are still fresh and the little startup doesn't want to license it to others. Vanadium is reasonably abundant in the Earth's crust, so there's no shortage. Using the same element (in different sets of oxidation states - vanadium has (at least) 6 of 'em) for BOTH electrodes means leakage of small amounts of the element through the dielectric membrane doesn't poison the battery.
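The power/energy decoupling described above is easy to put in numbers. The 25 Wh/L electrolyte figure below is an assumed order-of-magnitude value, not a vendor spec, and `flow_battery` is a made-up helper for illustration:

```python
# In a flow battery the stack sets power and the tanks set energy,
# so the two can be sized independently.
def flow_battery(stack_kw, tank_litres, wh_per_litre=25):
    """Energy from electrolyte volume (assumed ~25 Wh/L), power from the
    stack; returns (energy in kWh, hours of discharge at full power)."""
    energy_kwh = tank_litres * wh_per_litre / 1000
    hours_at_full_power = energy_kwh / stack_kw
    return energy_kwh, hours_at_full_power

# Same 100 kW stack; doubling the tanks doubles the storage without
# touching the (expensive) electrode/membrane assembly:
small = flow_battery(100, 20_000)   # 500 kWh, 5 h at full power
big   = flow_battery(100, 40_000)   # 1000 kWh, 10 h at full power
```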

Lithium cells are already good enough to run laptops, cars, and houses, and are improving at a Moore's-Law-like rate. The elements are also not rare, and the use of several nanotech techniques on the electrodes has drastically increased the lifetime and other useful properties. (We just had reports of yet another breakthrough within the last day or so, doubling the capacity and extending the life.) The fast-charge/discharge cells are also extremely efficient. (They have to be, because every horsepower is 3/4 of a kW, so even a few percent of loss would translate to enormous heat in an automotive application.) The main problem is to get companies to "pull the trigger" on deploying them - and risk their new production line being rendered obsolete, before the product hits the market, by NEXT month's breakthroughs.
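The parenthetical arithmetic above, sketched out. `waste_heat_kw` is a hypothetical helper and the 95% figure is an assumed round-trip efficiency, chosen only to show the scale:

```python
# 1 hp is about 0.75 kW, so even small pack inefficiency at automotive
# power levels becomes serious heat to dissipate.
HP_TO_KW = 0.7457

def waste_heat_kw(horsepower, efficiency):
    """Heat dissipated in the pack while it delivers the given mechanical
    power, for a pack of the given (fractional) efficiency."""
    delivered_kw = horsepower * HP_TO_KW
    electrical_kw = delivered_kw / efficiency   # what the cells must supply
    return electrical_kw - delivered_kw

# 200 hp delivered from a 95%-efficient pack: roughly 7.8 kW of heat -
# several household space heaters' worth, inside the battery box.
heat = waste_heat_kw(200, 0.95)
```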

Lead-acids need to be replaced once or twice per decade. But they have been the workhorses for off-grid since Edison's and Nikola Tesla's days, and still are today (though not for long, if Elon Musk and the five billion dollars of investments in his lithium battery plant have anything to say about it).

Nickel-Iron wet cells are a technology developed by Edison. They have more loss than lead-acids. But they literally last for centuries. If you have a moderately steady renewable source (like some combination of enough wind and a big enough windmill, enough sun and a big enough solar array, or a stream and a big enough hydro system) you'll have enough more power than you need to keep them topped off. They're just fine for covering days, or even a couple of weeks, of bad generation weather, or down-for-maintenance situations. That IS the role they played in at least one hydro plant I know of. (The problem is finding them: They last so long you only need to buy them ONCE, so there aren't many plants.)

That's just four FAMILIES of entirely adequate solutions. There ARE more.

So Bill is either uninformed, talking through his hat, or starting on the "embrace" stage of yet another:
  - Embrace
  - Extend
  - Extinguish

Comment: Second law of thermodynamics. (Score 2) 288

we have a way to turn electricity directly into heat. But there is no direct way to turn heat into electricity. It has to go thru a second step of mechanical energy to spin a magnet to create electricity.

You can go from electricity directly to heat because that increases entropy. You can't go from heat to anything useful because that decreases entropy, and entropy of a closed system only increases. The best you can do is a heat engine, working off a temperature DIFFERENCE. (Some of them also work backward as heat pumps, to go from electricity to heat more effectively, by also grabbing some heat from elsewhere to include in the hot end output.)
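The "temperature DIFFERENCE" point above is the Carnot limit, which a couple of lines make concrete (the temperatures chosen are illustrative):

```python
# A heat engine's best case is the Carnot limit, which depends only on
# the hot- and cold-side absolute temperatures.
def carnot_efficiency(t_hot_k, t_cold_k):
    """Maximum fraction of heat convertible to work (temperatures in kelvin)."""
    return 1 - t_cold_k / t_hot_k

# A steam plant, ~800 K boiler against ~300 K ambient:
steam = carnot_efficiency(800, 300)   # 0.625 at best
# Low-grade waste heat, 350 K against 300 K: barely worth extracting.
waste = carnot_efficiency(350, 300)   # ~0.14 at best
# No difference at all: no work, no matter how much heat is available.
none = carnot_efficiency(300, 300)    # 0.0
```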

There ARE at least two major forms of electronic heat engines - direct from temperature differences to electricity, with only charge carriers as the moving parts: Thermoelectrics (thermocouples, Peltier junctions, and thermopiles of them) and thermionics (both heat-driven vacuum diode generators and an FET-like semiconductor analog of them). Both are discussed in other responses to the parent post.

Comment: Thermionics (Score 3, Interesting) 288

TEs are ridiculously inefficient and aren't looking to be much better anytime soon

Because thermoelectric effect devices leak heat big time.

However there's also thermionics. The vacuum-tube version is currently inefficient - about as inefficient as slightly behind-the-curve solar cells - due to space charge accumulation discouraging current, but I've seen reports of a semiconductor close analog of it (as an FET is a semiconductor close analog of a vacuum triode) that IS efficient, encouraging the space charge to propagate through the drift region by doping tricks (that I don't recall offhand). The semiconductor version beats the problems that plague thermoelectrics because the only charge carriers crossing the temperature gradient are the ones doing so in an efficient manner, so the bulk of the thermal leakage is mechanical rather than electrical, and the drift region can be long enough to keep that fraction down.

Comment: Then again. (Score 1) 62

I got the impression from the (sketchy) article that repeater AMPLIFIERS were still needed but repeater REGENERATORS were not.

Then again - another part of the article makes it look like an additional result was that they could boost this less-subject-to-degradation-by-nonlinear-distortions signal at the start until the fibre itself was acting non-linearly, in order to get a signal strong enough to survive a much longer hop.

So it's not clear to me whether the distance was achieved by:
  - long hops enabled by strong signals, and NO amplifiers
  - longer propagation without regeneration, using JUST amplifiers
  - a combination of the two: Both getting long total length without regeneration AND being able to use stronger signals, and thus larger spacing between the amplifier-type repeaters.

Comment: but not amplifiers (Score 1) 62

Since the diameter of the earth is 7 926.3352 miles, this could conceivably remove any need for repeaters.

I got the impression from the (sketchy) article that repeater AMPLIFIERS were still needed but repeater REGENERATORS were not.

I.e. you still needed to boost the strength of the signal to make up for the losses. But the progressive degradation of the quality of the signal - with data from different frequency bands bleeding into other bands (especially in the amplifiers themselves) due to nonlinear "mixing" processes - had been headed off, by synchronizing the frequencies of all the carriers to exact multiples of a common basic difference-between-the-carriers frequency.

This apparently sets up a situation where the distortion products of each carrier's interaction with nonlinear processes cancel out with respect to trying to recover the signals on another carrier - much the way the modulation products do in OFDM modulation schemes. In OFDM it allows you to make essentially total use of the bandwidth. In this system it lets you use simple, cheap, amplifiers to get your signal boost, rather than ending the fibre before things get too intertwingled, demodulating all the signals back to data streams and recovered clocking, then generating a fresh set of modulated light streams for the next hop - MUCH more expensive and power hungry.
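The cancellation argument above rests on a comb property that's easy to check numerically: if every carrier sits at f0 + n*delta, the third-order mixing products land back on that same grid, where they can be dealt with coherently instead of smearing into neighboring channels. The frequencies below are illustrative, and `on_grid` is a made-up helper:

```python
# Carriers on a uniform grid: f0 + n*delta (working in GHz; ~193.1 THz
# with 25 GHz spacing is a plausible optical grid, chosen for illustration).
f0, delta = 193_100.0, 25.0
carriers = [f0 + n * delta for n in range(8)]

def on_grid(f, f0, delta):
    """True if frequency f coincides with some grid line f0 + n*delta."""
    n = round((f - f0) / delta)
    return abs((f0 + n * delta) - f) < 1e-9

# Third-order intermodulation products of grid-locked carriers:
# 2*fi - fj = f0 + (2i - j)*delta and fi + fj - fk = f0 + (i + j - k)*delta,
# i.e. every product is itself a grid frequency.
products = [2 * fi - fj for fi in carriers for fj in carriers]
products += [fi + fj - fk for fi in carriers
             for fj in carriers for fk in carriers]
assert all(on_grid(p, f0, delta) for p in products)
```

With unlocked carriers, by contrast, those products fall at arbitrary offsets and land inside other channels - which is the degradation the synchronization heads off.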

Comment: Re:Once all the data is in the cloud... (Score 1) 89

... government regulators couldn't possibly find financial irregularities by grabbing your documents from the cloud service provider, ...

The courts said you have no expectation of privacy once you put your data in the hands of a third party. Great! Let's convince all those "evil corporations" to store all their data in the cloud. Then the government can go after them any time they want. B-b

"Any excuse will serve a tyrant." -- Aesop