


Samba user survey results - Improve the documentation!

Submitted by Jeremy Allison - Sam
Jeremy Allison - Sam writes: Mark Muehlfeld of the Samba Team recently surveyed our user base and reported the results at the SambaXP conference in Germany.

They make fascinating reading, and include all the comments on Samba made by our users. Short answer — we must improve our documentation. Here are the full results:



                Jeremy Allison,
                Samba Team.


Comment: In particular, NO redundancy. Reliability drops. (Score 4, Informative) 211

Losing data goes with the territory if you're going to use RAID 0.

In particular, RAID 0 combines disks with no redundancy. It's JUST about capacity and speed, striping the data across several drives on several controllers, so it comes at you faster when you read it and gets shoved out faster when you write it. RAID 0 doesn't even have a parity disk to allow you to recover from failure of one drive or loss of one sector.

That means the failure rate is WORSE than that of an individual disk. If any of the combined disks fails, the total array fails.
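The arithmetic is easy to see with a toy calculation (a sketch, assuming independent disk failures - real disks in one enclosure often fail in correlated ways, which only makes it worse):

```python
def raid0_survival(p_disk_survives: float, n_disks: int) -> float:
    """A RAID 0 array survives only if EVERY member disk survives,
    so its survival probability is the per-disk probability raised
    to the number of disks."""
    return p_disk_survives ** n_disks

# Four disks, each 95% likely to survive a given year: the striped
# array survives that year with probability 0.95**4, worse than any
# single disk on its own.
print(round(raid0_survival(0.95, 4), 4))  # 0.8145
```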

(Of course it's still worse if a software bug injects additional failures. B-) But don't assume, because "there's a RAID 0 corruption bug", that there is ANY problem with the similarly-named, but utterly distinct, higher-level RAID configurations which are directed toward reliability, rather than ONLY raw speed and capacity.)

Comment: NetUSB=proprietary. Is there an open replacement? (Score 2) 69

It happens I could use remote USB port functionality.

(Right now I want to run, on my laptop, a device that requires a Windows driver and Windows-only software. I have remote access to a Windows platform with the software and driver installed. If I could export a laptop USB port to the Windows machine, it would solve my problem.)

So NetUSB is vulnerable. Is there an open source replacement for it? (Doesn't need to be interworking if there are both a Linux port server and a Windows client-pseudodriver available.)

Comment: Opportunity to detect MITM attacks? (Score 4, Interesting) 71

by Ungrounded Lightning (#49737679) Attached to: 'Logjam' Vulnerability Threatens Encrypted Connections

I skimmed the start of the paper. If I have this right:

  - Essentially all the currently-deployed web servers and modern browsers have the new, much better, encryption.
  - Many current web servers and modern browsers support talking to legacy counterparts that only have the older, "export-grade", crypto, which this attack breaks handily.
  - Such a server/browser pair can be convinced, by a man-in-the-middle who can modify traffic (or perhaps an eavesdropper-in-the-middle who can also inject forged packets), to agree to use the broken crypto - each being fooled into thinking the broken legacy method is the best that's available.
  - When this happens, the browser doesn't mention it - and indicates the connection is secure.
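The downgrade in the bullets above can be sketched as a toy negotiation (made-up suite labels, not real TLS names; in the actual attack the man-in-the-middle rewrites the ClientHello and the export-grade Diffie-Hellman group is what gets broken):

```python
# Hypothetical cipher-suite labels, listed strongest-first.
STRONG = ["ECDHE_P256", "DHE_2048"]
EXPORT_GRADE = ["DHE_EXPORT_512"]

def negotiate(client_offers, server_supports):
    """Agree on the first (i.e. strongest) mutually supported suite
    in the client's offer list."""
    for suite in client_offers:
        if suite in server_supports:
            return suite
    return None

def mitm_strip(client_offers):
    """The attacker deletes the strong suites from the client's offer,
    so both endpoints believe export-grade is the best available."""
    return [s for s in client_offers if s in EXPORT_GRADE]

offers = STRONG + EXPORT_GRADE   # modern browser, legacy fallback kept
server = STRONG + EXPORT_GRADE   # modern server, legacy fallback kept

print(negotiate(offers, server))              # ECDHE_P256 (no attacker)
print(negotiate(mitm_strip(offers), server))  # DHE_EXPORT_512 (downgraded)
```

Both endpoints run their normal logic; only the attacker's filtering of the offer list produces the weak outcome - which is why neither side complains.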

Then they go on to comment that the characteristics of the NSA programs leaked by Snowden look like the NSA already had the paper's crack, or an equivalent, and have been using it regularly for years.

But with a browser and a web server that are both capable of better encryption technologies, forcing them down to export-grade LEAKS INFORMATION TO THEM - namely, that they're being monitored.

So IMHO, rather than JUST disabling the weak crypto, a nice browser feature would be the option for it to pretend it is unpatched and fooled, but put up a BIG, OBVIOUS, indication (like a watermark overlay) that the attack is happening (or it connected to an ancient, vulnerable, server):
  - If only a handful of web sites trip the alarm, either they're using obsolete servers that need upgrading, or their traffic is being monitored by NSA or other spooks.
  - If essentially ALL web sites trip the alarm, the browser user is being monitored by the NSA or other spooks.

The "tap detector" of fictional spy adventures becomes real, at least against this attack.
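That detection logic is simple, because the client knows what it offered (a sketch with the same hypothetical suite labels as above - real TLS negotiation is more involved):

```python
STRONG = {"ECDHE_P256", "DHE_2048"}     # hypothetical suite labels
EXPORT_GRADE = {"DHE_EXPORT_512"}

def tap_alarm(offered, negotiated):
    """Trip the 'silent alarm' when we offered strong crypto but the
    connection ended up on an export-grade suite anyway: either the
    server is ancient, or someone downgraded us in transit."""
    return negotiated in EXPORT_GRADE and bool(STRONG & set(offered))

print(tap_alarm(["ECDHE_P256", "DHE_EXPORT_512"], "ECDHE_P256"))      # False
print(tap_alarm(["ECDHE_P256", "DHE_EXPORT_512"], "DHE_EXPORT_512"))  # True
```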

With this feature, a user under surveillance - by his country's spooks or internal security apparatus, other countries' spooks, identity thieves, corporate espionage operations, or what-have-you - could know he's being monitored, keep quiet about it, lie low for a while and/or find other channels for communication, appear to be squeaky-clean, and waste the tapper's time and resources for months.

Meanwhile, the NSA, or any other spy operation with this capability, would risk exposure to the surveilled every time it uses it. A "silent alarm" when this capability is used could do more to rein in improper general surveillance than any amount of legislation and court decisions.

With open source browsers it should be possible to write a plugin to do this. So we need not wait for the browser maintainers to "fix the problem", and government interference with browser providers will fail. This can be done by ANYBODY with the tech savvy to build such a plugin. (Then, if they distribute it, we get into another spy-vs-spy game of "is this plugin really that function, or a sucker trap that does tapping while it purports to detect tapping?" Oops! The source is open...)

Comment: Re:No Chicklets! (Score 1) 146

The inadequately-configurable trackpads, in positions where they detect the palm resting on the laptop (or brushing them) and randomly jump the cursor or highlight whole paragraphs so the next keystroke replaces them, are no help, either.

What do you mean by inadequately configurable? There's usually an option to disable while typing somewhere.

It's there. It's on. Didn't help. Don't know if it's that Ubuntu 14.04 doesn't support it properly on these two machines or if it doesn't do the job I want done.

What I'm looking for is NOT there: A threshold level for touch sensitivity. If you're going to put a BIG touchpad on a laptop's palm rest, you need to either put it where the palms won't brush it, or you need to make it possible to turn down the sensitivity so that a feather-light brushing of the pad doesn't register as a mouse motion or button click.

Two different manufacturers (Lenovo and Toshiba) have used exactly the same layout, and exactly the same hair trigger, non-adjustable, touchpad sensitivity. (Also exactly the same sort of wafer-thin flat tile keys, which is how we got into this digression.)

Comment: No Chicklets! (Score 3, Insightful) 146

The problem I have with current keyboards is not just the short travel and lack of clickyness, but the tiny height of the keys.

Instead of the tall keys with space between them for fingernail clearance, there are these thin squares maybe an eighth of an inch above a solid surface. If I don't keep all my fingernails cut short, when they go past the side of the key they hit the panel and the key doesn't "strike". Letters get dropped. (So I get to pick between typing well and playing the guitar. I pity those who must keyboard for a living but want long nails to maintain their social life.) The short travel means there's little margin for finger variation, so some letters, where my fingers don't depress the keys as far, normally, don't strike, while others, where I support the weight of my hands, do strike when they shouldn't, or strike multiply.

After over a year I haven't been able to adjust. You may have noticed that my spelling has gone to hell as a result: I have to do a lot more correction and sometimes miss fixing things up.

(The inadequately-configurable trackpads, in positions where they detect the palm resting on the laptop (or brushing them) and randomly jump the cursor or highlight whole paragraphs so the next keystroke replaces them, are no help, either.)

On the other hand, when the nails do hit the key, they quickly wear through the top level of black plastic, exposing the backlit transparent light below it. I replaced a laptop about a year ago and after about six months about a half-dozen heavily-used keys had their pretty letters obscured by the giant glow of the scoured away region.

I had been running on older thinkpads and toshibas, with classic keyboard-shaped keys, or at least the little fingertip cup and substantial fingernail clearance. Switching (in a two-dead-laptops-in-two-weeks emergency) to a lenovo z710, then to a company-supplied toshiba s75, both with the stupid "I'm so thin", square, low-travel, no-finger-cup keys has been a disaster.

Comment: Solar offgrid with NiFe battery backup. (Score 1) 401

A solar offgrid system (or a grid-tied system with standalone capability) would provide power locally until too much stuff failed.

Lead acid batteries last for several years, recent lithium probably for a couple of decades. Nickel-Iron batteries are more lossy, but last for centuries if provided with water to replace evaporation - potentially going decades between waterings if they have catalytic fill caps to recombine the lost hydrogen and oxygen or, say, a reservoir-based automatic watering system. (If their chemistry has a long-term unavoidable failure mode I'm not aware of it.)

Even with the batteries dead (NiFe or otherwise) the system will have power when the sun is out until at least one panel in every series substring is too degraded, shaded, or smashed to provide adequate power.
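That series/parallel failure logic can be written out as a toy check (hypothetical layout: parallel substrings of series-wired panels):

```python
def array_has_power(substrings):
    """Panels in a substring are wired in series, so one dead panel
    kills the whole string; substrings are in parallel, so the array
    still produces while ANY string is fully intact."""
    return any(all(panel_ok for panel_ok in string) for string in substrings)

# Two parallel strings of three panels; one smashed panel kills its
# string, but the other string keeps delivering:
print(array_has_power([[True, False, True], [True, True, True]]))   # True
print(array_has_power([[True, False, True], [True, True, False]]))  # False
```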

Semiconductor controllers might go for a decade to centuries, depending mainly on whether the conductive interconnects of the semiconductors are sized to avoid electromigration at the current levels used and what they're using for large capacitors.

Wind generators have several moving parts to screw up - how many depends on the design. For a simple homebrew one you have the main bearings, yaw bearing, and tail furling-system bearing. Any one of them failing will take it out. (Even the furling bearing: Once that screws up it doesn't furl right and tears apart in the next storm.) There's also the get-the-power-past-the-yawing mechanism (typically a long cable being twisted and manually "unwound" every few years, or a brush mechanism.) Call it a decade without maintenance at the outside.

So some of 'em may run until a nearby lightning strike fries something.

Comment: Maybe due to misclassifying, esp. the Big-P? (Score 1) 845

by Ungrounded Lightning (#49682577) Attached to: Religious Affiliation Shrinking In the US

I wonder what the numbers would be if "Progressivism" were also counted as a religion, rather than JUST a philosophy or political affiliation? B-)

Think about it: It claims to prescribe what behavior is good or bad, generally expects its adherents to take its pronouncements on faith, and has a lot to say against various religions - just like ("other") competing religions do to their opponents.

I could go on with the similarities. But since they include suppression of competing ideas by pretty much any available mechanism (including arbitrary down-moderation, personal attacks, and flame wars), I'd prefer to keep the discussion light.

They're not alone in this, either. (c.f. any of several political philosophies, right, left, libertarian, authoritarian, moderate, ...) But they're my current candidate for the largest not-advertised-as-religion-religion at the moment. B-)

Comment: Re:QoS is hard but necessary (Score 1) 133

My ISP uses an AQM and I can maintain about 10ms of additional latency even when my connection is flooded beyond 100%. ... When I manage my own AQMs on my network, I can maintain 0ms of additional latency, no QoS needed.

Latency is a problem, and as you mention, AQM can deal with it without packet-type distinctions. But it's not the BIG problem when TCP and streams are trying to divide a channel's bandwidth.

That problem is packet loss. TCP imposes it on streams. TCP is HAPPY to accept a little packet loss. Streams get into trouble quickly - and all the workarounds short of QoS packet-class distinctions on the pathway just push the problem around into other aspects (such as delay).

With QoS you can put the drops selectively into, first the TCP flows (which then throttle back), then already-delayed stream packets (which streams no longer need - when TCP could use the equivalent just fine.) In fact you could even give streams strict priority over TCP - provided they're within their bandwidth limit - and avoid dropouts and most of the jitter completely. Streams get the cream and TCP gets the whey, other stuff gets something in between.
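A minimal sketch of that drop policy (toy queue with made-up field names, not any real AQM implementation): prefer dropping TCP, then the stalest stream packet.

```python
from dataclasses import dataclass

@dataclass
class Packet:
    flow: str       # "tcp" or "stream"
    age_ms: float   # queueing delay accumulated so far

def pick_drop_victim(queue):
    """On congestion, drop a TCP packet first (TCP treats loss as a
    throttle signal and backs off); otherwise drop the most-delayed
    stream packet, which has likely missed its playout deadline anyway."""
    tcp = [p for p in queue if p.flow == "tcp"]
    if tcp:
        return tcp[0]
    return max(queue, key=lambda p: p.age_ms)

q = [Packet("stream", 5), Packet("tcp", 1), Packet("stream", 40)]
print(pick_drop_victim(q).flow)      # tcp

q2 = [Packet("stream", 5), Packet("stream", 40)]
print(pick_drop_victim(q2).age_ms)   # 40
```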

Comment: QoS is hard but necessary (Score 4, Informative) 133

ISP should be limited to purchasing more bandwidth and using anti-bufferbloat AQMs, but no throttling or QoS.

QoS may be hard. But it's necessary, because streaming and TCP don't play well together.

Streaming requires low latency, low jitter, low packet loss, and has a moderate and limited (in the absence of compression, typically constant) bandwidth. TCP, when being used for things like large file transfers, increases speed to consume ALL available bandwidth at the tightest choke point, and divides it fairly among all TCP connections using the choke point. It discovers the size of the choke point by expanding until packets are dropped, and signals other TCP connections by making their packets drop. The result is that TCP forces poor QoS onto streams unless the infrastructure is massively oversized.
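That probe-until-loss behavior is classic additive-increase/multiplicative-decrease, which a toy model shows (a sketch only - real TCP congestion control adds slow start, RTT dynamics, and much more):

```python
def aimd(capacity, rate=1.0, steps=60, add=1.0, cut=0.5):
    """Grow the send rate by a constant each tick until the choke point
    drops packets, then cut it multiplicatively - this sawtooth is how
    TCP discovers (and keeps re-testing) the available bandwidth."""
    history = []
    for _ in range(steps):
        if rate > capacity:   # past the choke point: loss detected
            rate *= cut       # multiplicative back-off
        else:
            rate += add       # additive probe upward
        history.append(rate)
    return history

rates = aimd(capacity=20)
print(max(rates))   # 21.0 - briefly overshoots the pipe, then backs off
```

The sawtooth keeps the pipe full on average, but every peak of it is a burst of loss that any stream sharing the choke point has to absorb.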

This can be fixed by a number of traffic management schemes. But they all have this in common:
  - They treat different packets differently.
  - The infrastructure can be misused for competitive advantage and other unfair business practices.

The PROBLEM is not the differing treatment of different packets (which can help consumers), but the misuse of the capability (to hurt consumers).

So IMHO an "appropriate legal remedy", under current legal theories, is not to try to force ISPs to treat all packets the same (and break QoS), but to limit the ISPs ability and incentives to misuse the capability.

So the appropriate regulation is not communications technical regulation, but consumer protection and antitrust law:
  - Consumer fraud law should already cover misbehavior that penalizes certain traffic flows improperly. (What is "internet service" if it doesn't handle whatever end-to-end traffic is thrown at it, just for starters?) Ditto charging extra for better packet treatment rather than just fatter pipes, charging anyone other than their base customers for the service, or heavily penalizing packets of customers (or the customers themselves) whose usage is problematic for the ISP but within the advertised service. If current law needs a tweak, the enforcement infrastructure is already there should Congress choose to make the tweak and use it.
  - Penalizing packets of competitors for its own services, or giving appropriate handling to its own packets of a type and not to that of others, is anticompetitive behavior. Indeed, having such services in the same company AT ALL, let alone forming conglomerates that include both "content" creation and Internet service distributing it, is a glaring conflict-of-interest, of the sort that led to the historic breakups of AT&T and Standard Oil. Antitrust law is up to the problem: Just use it.

(I put quotes around "appropriate legal remedy" above, because I think that a free market solution would be even better. Unfortunately, we don't have a free market in ISP services, due to massive, government-created or government-ignored barriers to entry. And we aren't likely to see one in the near future - or EVER, unless the government power-wielders get it through their skulls that "competition" and its free-market benefits don't kick in until there are at least three, and usually until there are four or more, competitors for each customer. (This "Two-is-competition, Hey! Where's the market benefits?" error has been built into communication law ever since the allocation of bandwidth for the early, analog, AMPS cellphone service.) With only two "competitors", market forces drive them to cartel-like behavior and all-the-market-will-bear pricing, without any collusion at all.)

Comment: Think of it as evolution in action. (Score 1) 29

by Ungrounded Lightning (#49637105) Attached to: Grooveshark Resurrected Out of US Jurisdiction

"...after music streaming service Grooveshark was shutdown"
Why in hell are you using a noun when a verb is required?

This is how language evolves.

Sometimes you can convince people to drop a useful construct or misspelling - like by telling them it makes their arguments less convincing. Other times it's like trying to sweep back the tide.

Comment: But it might actually cripple a magnetic sense. (Score 4, Interesting) 257

Come on. This misinformation is 30 years old already. Why can't we let it die already?

Contrary to popular belief, Haimes never claimed that a CAT scan had caused her to lose her psychic powers. In fact, the often alluded-to CAT scan never took place. Haimes only claimed that the headaches resulting from her allergic reaction prevented her from earning a living as a psychic.

On the other hand, I could see an MRI actually destroying a hypothetical human magnetic navigation sense.

  - A number of animals, including birds, are documented to have a magnetic sense they use in navigation.
  - Bacteria are known to migrate vertically using the earth's field to align them as "dipping needles" so their cilia drive them downward to lower-oxygen water.
  - The bacteria obtain their magnetic alignment by depositing crystals of magnetite of a size that will hold no more than a single magnetic domain, and thus be automatically magnetized. New crystals are deposited next to old, making them align in the same direction. The row of crystals is a strong enough magnet to align the bug like a compass needle. The row is normally split when the bug reproduces, so the two new bugs are both magnetized the same way, rather than one getting a 50/50 chance of swimming the wrong way. (No doubt the occasional offspring gets none and has to take the chance - which let the species survive magnetic reversal events.)
  - Some nerve cells in a number of animals contain such magnetite particles, leading to the speculation that these may be the basis for a magnetic sense.
  - Among such nerve types is one in the human nose, leading to the speculation that some humans may be able to "smell" magnetic fields (or have a magnetic sense in some OTHER group of neurons that ALSO produces the particles, with those in the nose being a vestigial mis-triggering of the mechanism, or that an organism in their ancestry may once have had a magnetic sense, of which this is a vestigial remnant.)
  - (I have a small number of personal, anecdotal, experiences that lead me to believe that I once had a magnetic sense that was input to my brain's location processing, but at a priority far below visual observation. These all occurred before I ever had an MRI.)
  - If some nerves do detect ambient magnetism by monitoring mechanical forces originating in magnetite particles, the strong magnetic field of an MRI machine might be expected to disrupt this by modifying the magnetization of the particles, or by yanking on them so strongly they disrupt, or even kill, the nerves in question.

So if humans DO have a magnetic sense of this form, it might actually be destroyed by exposure to, and especially testing in, an MRI machine.
