
Comment Re:More Info, and Announcement Content (Score 1) 101

The information I really want to see is a statement clarifying whether this was technical, as in a failure of the security software somewhere, or administrative, as in someone leaving something open through error or poor security design choices.

It's important to know whether this is a bug present on all Linux systems or simply a human error.

Comment Re:SSH keys? (Score 1) 101

That being said, it's important to use a different private key on each machine where you might ssh from...

Hell yes, and for key-management sanity it's good to put a clear comment on each key so you know what it's for.

However, in case of a compromise you'd still need to remove trust in the key pair of the impacted machine (so if it got hacked, you need to remove that machine's old public key from ~/.ssh/authorized_keys on every host it could log into).

You also have to audit authorized_keys to check that no new or modified keys have been added, since those would let the hackers right back in at some future time. I keep a copy and a cksum of authorized_keys for every machine where I use keys, just so I can check. Yes, I'm paranoid.
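A minimal sketch of that checksum idea (my names and example keys, not an actual tool): record a baseline digest of authorized_keys for each machine, then re-check it later to detect anything a hacker added or modified.

```python
import hashlib

def digest(contents: bytes) -> str:
    """Return a SHA-256 hex digest of an authorized_keys file's contents."""
    return hashlib.sha256(contents).hexdigest()

def unchanged(contents: bytes, baseline: str) -> bool:
    """True if the file still matches the digest recorded at baseline time."""
    return digest(contents) == baseline

# Example: record a baseline, then detect a key sneaked in later.
original = b"ssh-ed25519 AAAAC3NzaC1... alice@laptop\n"
baseline = digest(original)
tampered = original + b"ssh-rsa AAAAB3Nza... attacker@evil\n"
print(unchanged(original, baseline))  # True
print(unchanged(tampered, baseline))  # False
```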

Also, if the hacker got the server's /etc/ssh/ssh_host_rsa_key, then he could theoretically mount an MITM attack against its users later on, so the admins had better change that one as well (while publishing the new key's fingerprint on an SSL-protected server).

I don't know if that's the case, but they could certainly set up a fake server if they have your public key and the server host keys. So generating a new host key pair is really required, forcing every user to accept the new key.
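For the "publish the new key's fingerprint" step, here is a sketch of how that fingerprint is derived. OpenSSH's SHA256 fingerprint (what `ssh-keygen -lf` prints) is the base64-encoded SHA-256 of the raw key blob, with the trailing '=' padding stripped; the key material below is a toy blob of my own, not a real key.

```python
import base64
import hashlib

def sha256_fingerprint(pubkey_line: str) -> str:
    """Compute the OpenSSH-style SHA256 fingerprint of a public-key line."""
    blob = base64.b64decode(pubkey_line.split()[1])      # raw key blob
    raw = hashlib.sha256(blob).digest()
    return "SHA256:" + base64.b64encode(raw).decode().rstrip("=")

# Toy key blob for illustration only.
fake_blob = base64.b64encode(b"\x00\x00\x00\x0bssh-ed25519 toy key material").decode()
fp = sha256_fingerprint(f"ssh-ed25519 {fake_blob} root@server")
print(fp)  # e.g. "SHA256:..." (46 base64 chars after the prefix)
```

A user comparing this string against the one published on the SSL-protected page can detect a substituted host key.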

What a mess!

Comment High thruput needs network tuning (Score 1) 359

Actually some pretty modest hardware will generate very high thruput if tuned properly. The network stack default parameters we use today are remnants of what we did in the 90s, when 16MB of RAM was big, 4Mbit token ring was common, and 100k for a network buffer was a lot.

To increase thruput there are several things which can be done. The first is to increase the window size so that data can keep flowing until the ACK packets get back. The second is to increase the packet size (aka jumbo frames, by raising the MTU). After that you need to allocate enough buffer space to keep the pipe full on the transmit end and prevent buffer overflow on the receiving end. The OS needs to prioritize interrupt handling so the incoming data get handled; it doesn't need a lot of CPU, but it needs it NOW.
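The buffer-sizing step follows from the bandwidth-delay product: to keep the pipe full you need roughly bandwidth times round-trip time of in-flight data (1 Gbit/s at a 50 ms RTT is about 6 MB). A sketch of requesting bigger per-socket buffers, with sizes of my own choosing; note the kernel caps the request at net.core.rmem_max / net.core.wmem_max, so those system-wide limits may also need raising:

```python
import socket

BUF_SIZE = 4 * 1024 * 1024  # 4 MiB, sized to keep a fast, long pipe full

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Ask the OS for larger send/receive buffers than the defaults.
sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, BUF_SIZE)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, BUF_SIZE)

# Read back what was actually granted: Linux doubles the requested value
# for bookkeeping, and clamps it to the system-wide maximum.
granted = sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
print(granted)
sock.close()
```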

Finally, realize that the disk subsystem may become a bottleneck at Gbit speeds; sustained transfer to/from disk may take more than a minimal bargain drive. You don't need super hardware to use that Gbit, but you do need to make optimal use of the hardware you have.
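Back-of-envelope numbers for that disk bottleneck (my figures, not the poster's; the 80 MB/s drive rate is an assumption for a cheap disk of the era):

```python
# A saturated gigabit link moves more data per second than many
# bargain drives can sustain.
link_bits_per_s = 1_000_000_000
wire_rate_mb_s = link_bits_per_s / 8 / 1_000_000   # payload ceiling, ignoring headers
budget_drive_mb_s = 80                             # assumed sustained rate of a cheap drive

print(wire_rate_mb_s)                      # 125.0 MB/s on the wire
print(wire_rate_mb_s > budget_drive_mb_s)  # True: the disk becomes the bottleneck
```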

Comment Re:Not 'unprecedented' (Score 1) 78

They aren't exoplanets, and hence not a precedent.

You sound a bit like a patent attorney explaining why something isn't prior art. We have found exoplanets by star wobble and solar planets by planet wobble, so I think "unprecedented" is an overstatement; just my opinion. The method was certainly used previously in this system.

Comment Re:Be patient (Score 1) 394

So wait; we have a choice between a set of power sources which provide indefinite quantities of energy, where the installation, once done, lasts pretty much forever and just needs small-scale maintenance, and where the major influence on the environment is extremely localised and quite easy to understand and reduce; and another power source which provides energy now but leaves nuclear waste to look after for hundreds of thousands of years, where the major cost is decommissioning and clean-up at the end, and where almost all cost estimates basically assume the taxpayer covers that for free.

That sounds great; what energy source is that? Because people living on the east coast of the US would sure like to get all those nasty polluting coal plants in the west shut down. The ones that put so much sulfur in the air that the acid rain makes the limestone bubble? Similar to the ones in Japan, where you can develop photographs in some of the lakes?

Please let us know what power source you are talking about, because "Clean Coal" is an advertising slogan, not a reality. The technology to capture the SO2 and CO2 would raise the cost higher than buying politicians.

Comment Re:Good test. (Score 1) 204

Not true, at least in the UK:

Interfering with mail: Postal Services Act 2000, Section 84. Triable summarily (Magistrates' court); maximum penalty six months and/or a fine. A person commits an offence if they, without reasonable excuse, intentionally delay or open a postal packet in the course of transmission by post, or intentionally open a mail bag. A person also commits an offence if, intending to act to a person's detriment and without reasonable excuse, they open a postal packet which they know or suspect to have been incorrectly delivered.

And there's the rub: if the mail is delivered as addressed, can it be said to be delivered incorrectly? This is why lawyers exist, to convince a judge or jury that what the law says is not what it means.

If you work for the postal service you could commit other offences under Section 83, triable either way (Magistrates' or Crown court), with a sentence of up to 2 years and/or a fine.

Comment Why this is useful (Score 1) 148

Previous articles about phones being bricked by Exchange mail admins led VMware to develop a phone hypervisor to run a 2nd copy of the phone OS, so you could have a business phone VM (with a separate number), and if something was done to the business phone, the personal phone would still be functional. Note that clearing all mail and contact information seems to be a "feature" of Exchange, i.e., a requirement rather than optional. While this is acceptable for a company-provided phone, it's not for a personal phone being used for business for the benefit of the company.

Running a full-function OS on a phone may or may not be as useful; in general the UI is not optimized for a small touch screen, so usability might be less than desired. This would make more sense on a tablet, using a netbook spin of Fedora or Ubuntu as a base, or MeeGo, or one of the small distributions like Puppy (build it for ARM?).

Other than providing some extra CPU power, I don't see that being dual core is in any way a requirement, unless the HVM is missing in the single core models.

Comment Re:Easy. (Score 1) 73

When I was doing work requiring clearance (DoD and DoE at various times) there was a lot of stuff to understand about need to know. Having low-level clerks see things I would restrict to cabinet-level access is stupid, and fixing it needs no new research, just application of principles practiced in the 1970s.

Given the chance to design an access system, I would have a "can see" bit map and put characterizing bits (flags, whatever) on each item, so unless someone was cleared for all characteristics of a document or folder, they wouldn't even see that it exists.
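A minimal sketch of that "can see" bitmap idea; the characteristic names and bit assignments here are made up for illustration:

```python
# Each document carries characteristic flag bits; a user sees a document
# only when cleared for every one of its characteristics.
NUCLEAR    = 0b0001
DIPLOMATIC = 0b0010
SIGINT     = 0b0100
CABINET    = 0b1000

def can_see(user_bits: int, doc_bits: int) -> bool:
    # Visible only if the user's clearance covers ALL of the document's bits.
    return doc_bits & ~user_bits == 0

clerk   = NUCLEAR                                    # cleared for one characteristic
cabinet = NUCLEAR | DIPLOMATIC | SIGINT | CABINET    # cleared for everything

cable = DIPLOMATIC | SIGINT
print(can_see(clerk, cable))    # False: the cable doesn't even appear to exist
print(can_see(cabinet, cable))  # True
```

The point of the bitwise test is that any single uncleared characteristic hides the item entirely, rather than showing a redacted stub.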

I'm skipping some implementation details which are important; this isn't a technical forum, but I know they exist.

Comment This is a truly flawed view of the problem (Score 1) 287

The problem is not the media, but the access to data. Given the breadth of the information topics, no one below cabinet level should have been able to see it all, much less some low-level clerk. This was a failure of the need-to-know policy, and the attempt to blame WikiLeaks or the clerk for the release is clearly an attempt to disguise the failure of method. I covered the technology and ethical issues at length in a blog post when it happened.

I have held DoD and DoE clearance, and have worked with information control for companies like GE and SBC (now at&t).

Comment CPU-bound no better, disk & network worse (Score 1) 52

This comes as no surprise. In any activity which is mostly limited by CPU in user mode, not much changes; you can track that over a number of operating systems. What has gotten slower is disk io and network transfer time, and some tests, such as web serving, may be using all or mostly pages in memory, so this is not as obvious as it might be.

In addition, the test was run in a virtual machine, so to some extent the huge host memory provided extra resources, and the very fast disk hides poor choices in io scheduling while providing additional write cache and buffers. In other words, neither the tests chosen nor the environment used was typical of a small server or a generous desktop.

For a meaningful test, no more than four CPUs (or two with hyperthreading) should be used, and all io should go to a real rotating disk, like a $100 1TB WD or Seagate, with the filesystems on that rather than some fancy large SSD. Then the numbers would reflect performance on machines in the small-server or fast-desktop price range of a motivated home user or budget-limited small business. Then the limitations of the CPU and io scheduler changes would be more evident, and perhaps performance using the deadline scheduler should be included as well, since discussions on the Linux-RAID mailing list indicate that many of us find the default scheduler a bottleneck for typical loads (particularly raid-[56]).
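For anyone wanting to check which scheduler a benchmark actually ran under: on Linux the active I/O scheduler is reported in /sys/block/&lt;dev&gt;/queue/scheduler, with the current choice in brackets (e.g. "noop deadline [cfq]"), and it can be changed by writing a name to that same file as root. A small sketch, with the parsing kept separate so it can be tested without real hardware:

```python
import re
from pathlib import Path

def current_scheduler(text: str) -> str:
    """Extract the bracketed (active) scheduler from a sysfs scheduler line."""
    m = re.search(r"\[([\w-]+)\]", text)
    return m.group(1) if m else text.strip()

def disk_scheduler(dev: str = "sda") -> str:
    """Read the live value for a block device (requires a Linux /sys)."""
    path = Path(f"/sys/block/{dev}/queue/scheduler")
    return current_scheduler(path.read_text())

print(current_scheduler("noop deadline [cfq]"))   # cfq
print(current_scheduler("[mq-deadline] kyber none"))  # mq-deadline
```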

Comment Don't compare apples and oranges (Score 1) 606

It sounds like you could compete with Dell and that you should start a company. Maybe then you'd realise that US$1k isn't that much for a system.

The saving comes from the fact that he isn't starting a company. So he has no costs for inventory, advertising, shipping, distributor discounts, etc. If he is building a dozen systems he can't win on cost over the hardware lifetime, but if he is building hundreds, and he says he needs that many, he probably can shave quite a bit off the cost. But how much hardware is going into those machines to drive the cost that high? Dell sells a reasonable office machine for just under $600, without massive discounts. Companies like eMachines go lower than that, with similar performance. There is something driving up the cost we haven't been told.

The next obvious question is how much of the cost is software, and how much of that (possibly including the OS) is a required cost? The old "easier if they're all the same" argument is usually made by a salesman or a lazy purchasing agent, and often doesn't match reality. Data entry jobs which are poking numbers into web forms or spreadsheets don't require proprietary software. That doesn't mean there may not be some need for commercial software, just that there are a lot of tasks in most enterprises which don't need it.

And the "retraining cost" FUD is just that: people doing data entry, or any activity where the browser is the computer, need only learn to log in and start an application from a menu or icon. Just like Windows. And free software will read/write most proprietary formats, so the need for a proprietary data format doesn't mean proprietary software is necessarily needed. One size does not fit all; there is probably room for saving in software, too.

This might even be a case for thin clients and a few servers, getting the cost way down; there isn't enough information to guess, but it's a possible large saving. The problem is convincing management that the best approach is finding the most cost-effective solution, not finding the best price on the "way we always did it."
