Comment Re:A lot of to-do about $700 (Score 1) 245

Over $900, and he will match the donations with his own funds so... that's definitely enough for a pretty nice machine. And with the slashdotting, probably a lot more now.

The bigger problem is likely network bandwidth to his home if he's actually trying to run the server there. He'd need both uplink and downlink bandwidth, so if he doesn't have FiOS or Google Fiber, that will be the bottleneck.

-Matt

Comment Re:Git Is Not The Be All End All (Score 1) 245

A single point of failure is a big problem. The biggest advantage of a distributed system is that the main repo doesn't have to take a variable client load that might interfere with developer pushes. You can replicate the main repo to secondary servers and have the developers commit/push to the main repo, while all readers (including web services) simply access the secondary servers. This works spectacularly well for us.
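
A minimal sketch of wiring a developer clone up that way, with hypothetical mirror and main-repo URLs: git lets a remote have a separate push URL, so reads hit a secondary while pushes go to the main repo.

    import subprocess

    def git(*args):
        subprocess.run(['git', *args], check=True)

    # fetch (reads) from a secondary/mirror, push (writes) to the main repo;
    # both URLs are made up for illustration
    git('remote', 'add', 'origin', 'git://mirror.example.org/project.git')
    git('remote', 'set-url', '--push', 'origin',
        'ssh://git@main.example.org/project.git')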

The second biggest advantage is that backups are completely free. If something breaks badly, a repo will be out there somewhere (and for readers one can simply fail-over to another secondary server or use a local copy).

For most open source projects... probably all open source projects frankly, and probably 90% of the in-house commercial projects, a distributed system will be far superior.

I think people underestimate just how much repo searching costs when there is a single distribution point. I remember the days when the FreeBSD, NetBSD, and other CVS repos were constantly overloaded due to the lack of a distributed solution. And the mirrors generally did not work well at all, because cron jobs doing updates would invariably catch a mirror in the middle of an update and completely break the local copy. So users AND developers naturally gravitated to the original and overloaded it. SVN doesn't really solve that problem if you want to run actual repo commands, versus grepping one particular version of the source.

That just isn't an issue with git. There are still lots of projects not using git, and I had a HUGE mess of cron jobs that had to try very hard to keep their CVS or other trees in sync without blowing up and requiring maintenance every few weeks. Fortunately most of those projects now run git mirrors, so we can supply local copies of the git repos and broken-out sources for many projects on our developer box, which developers can grep through on our own I/O dime instead of on other projects' I/O dime.
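
As a sketch of why the git flavor of those cron jobs is so much less fragile (the paths and URL below are made up): git fetch updates refs atomically, so catching the mirror mid-update never leaves a broken copy.

    #!/usr/bin/env python3
    # keep local --mirror clones fresh; safe to re-run from cron because an
    # interrupted fetch leaves the previous refs intact
    import os
    import subprocess

    MIRRORS = {  # local mirror path -> upstream URL (both hypothetical)
        '/repos/project.git': 'git://git.example.org/project.git',
    }

    for path, url in MIRRORS.items():
        if not os.path.isdir(path):
            subprocess.run(['git', 'clone', '--mirror', url, path], check=True)
        subprocess.run(['git', '--git-dir', path, 'remote', 'update', '--prune'],
                       check=True, timeout=3600)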

-Matt

Comment Re:SVN and Git are not good for the same things (Score 1) 245

This isn't quite true. Git has no problem with large repos as long as system RAM and the kernel's caches can cover the data footprint that basic git commands need to touch. However, git *DOES* have an issue scaling to huge repos in general... it requires more I/O, certainly, and you can't easily operate on just a portion of a repo (a feature which I think Linus knows is needed). So repos well in excess of the RAM and OS resources required for basic commands can present a problem. Google has precisely this problem, and it is why they are unable to use git despite the number of employees who would like to.
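
For what it's worth, a shallow clone is one existing way to cut the data footprint when full history would blow out RAM and cache; a sketch (the repo URL is made up), though operating on a sub-tree of the repo is the piece that is still missing.

    import subprocess

    # history-truncated clone: far less data for basic git commands to touch
    subprocess.run(['git', 'clone', '--depth', '1',
                    'git://git.example.org/huge.git'], check=True)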

Any system built for home or server use by a programmer/developer in the last 1-2 years is going to have at least 16G of RAM. That can handle pretty big repos without missing a beat. I don't think there's much use complaining if you have a very old system with a tiny amount of RAM, and even then you can ease your problems by using an SSD as a cache. And if you are talking about a large company... having the repo servers deal with very large git repos generally just requires RAM (the client side is still a problem, though).

And, certainly, I do not know of a single open source project with this problem that couldn't solve it with a measly 16G of RAM.

-Matt

Comment It's not that big a deal (Score 1) 245

It's just that ESR has an old, decrepit machine to do it on. A low-end Xeon w/ 16-32G of ECC RAM and, most importantly, a nice SSD for the input data set, plus a large HDD for the output (so as not to wear out the SSD), would do the job easily on repos far larger than 16GB. The IPS of those CPUs is insane. Just one of our E3-1240v3 (Haswell) blades can compile the entire FreeBSD ports repo from scratch in less than 24 hours.

For quiet, nothing fancy is really needed. These CPUs run very cool, so you just get a big copper cooler (with a big, variable, but slow fan) and a case with a large (fixed, slow) 80mm intake fan and a large (fixed, slow) 80mm exhaust fan, and you won't hear a thing from the case.

-Matt

Comment Mobile links (Score 1) 174

For mobile internet connections... dual mobile internet connections specifically: I haven't done that, but I have used VPNs over mobile hotspots extensively. There is just no way to get low latency, even over multiple mobile links. The main problem is that the bandwidth capabilities of the links fluctuate all the time, and if you try to dup the packets you will end up randomly overloading one link or the other as time progresses, because TCP will get ACKs via the other link and thus not back off as much as it should. An overloaded mobile link will drop out, POOF. Dead for a while.

For VPN over mobile links, the key is to NOT run the VPN on the mobile devices themselves. Instead, run it on a computer (a laptop, etc.) that is connected to the mobile devices. Then use a standard link aggregation protocol with a ~1 second ping and a ~10 second timeout. You will not necessarily get better latency, but it should solve the dropout problem... it will glitch for a few seconds when it fails over, but the TCP connections will not be lost.
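
The monitor logic amounts to something like this Linux-flavored toy sketch (interface and gateway names are invented): ping the active link roughly once a second and swap the default route after ~10 missed replies, matching the ~1 second ping / ~10 second timeout above.

    #!/usr/bin/env python3
    # hypothetical failover monitor for two mobile uplinks
    import subprocess
    import time

    LINKS = [('wwan0', '192.168.8.1'), ('wwan1', '192.168.9.1')]  # made up

    def alive(iface, gw):
        # one ping, 1 second deadline, bound to the given interface
        r = subprocess.run(['ping', '-c', '1', '-W', '1', '-I', iface, gw],
                           stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
        return r.returncode == 0

    def use_link(iface, gw):
        subprocess.run(['ip', 'route', 'replace', 'default',
                        'via', gw, 'dev', iface], check=True)

    active, misses = 0, 0
    use_link(*LINKS[active])
    while True:
        if alive(*LINKS[active]):
            misses = 0
        else:
            misses += 1
            if misses >= 10:                      # ~10s of dead air: fail over
                active = (active + 1) % len(LINKS)
                use_link(*LINKS[active])
                misses = 0
        time.sleep(1)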

-Matt

Comment It can be done but... (Score 1) 174

I run a dual VPN link over two telcos (Comcast and U-Verse in my case) between my home and a colo. I don't try to repeat the traffic on both links, however, because they have different bandwidth capabilities and it just doesn't work well when a line becomes saturated. Instead I use PF and FAIRQ in both directions to remove packet backlogs at the border routers and to ensure that priority traffic gets priority. Either an aggregation-with-failover or a straight failover configuration works best (TCP connections aren't lost, since the endpoint is a VPN'd IP). That way if you lose one link, the other takes over within a few seconds.

The most important feature of using a VPN to a nearby colo is being able to prioritize and control the bandwidth in BOTH directions. Typically you want to reserve at least 10% for pure TCP ACKs in the reverse direction, and explicitly limit the bandwidth to just below the telco's capabilities to avoid backlogging packets on either your outgoing cablemodem/U-Verse/etc. router or the telco's incoming router (which you have no control over without a VPN). Then use fair queueing or some other mechanism to ensure that bulk connections (such as streaming movies) do not interfere with the latency of other connections.
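
Not my actual config, but a rough pf.conf sketch of the idea, assuming the ALTQ/FAIRQ syntax (DragonFly, or OpenBSD before 4.5); the interface and rates are invented, and the altq bandwidth is deliberately set a bit below what the uplink can really carry.

    ext_if = "em0"

    # queue just below the uplink's real capacity so the backlog forms here,
    # where we control it, rather than at the telco's router
    altq on $ext_if fairq bandwidth 9Mb queue { q_bulk, q_pri }
    queue q_bulk priority 1 fairq(buckets 64, default)
    queue q_pri  priority 7 fairq(buckets 64)

    # bulk TCP data goes to q_bulk; empty ACKs and lowdelay packets are
    # automatically lifted into the second (priority) queue
    pass out on $ext_if proto tcp from any to any queue (q_bulk, q_pri)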

In any case, what you want to do just won't work in real life when you are talking about two different telco links. I've tried it with TCP (just dup'ing the traffic). It doesn't improve anything. The reason is that one of the two is going to have far superior latency over the other. If you are talking Comcast cable vs U-Verse, for example, the Comcast cable will almost certainly have half the latency of the U-Verse link. If you are talking Comcast vs Verizon FiOS, then it is a toss-up. But one will win, and not just some of the time... 95% of the time. So you might as well route your game traffic over the one that wins.

-Matt

Comment Bash needs to remove env-based procedure passing (Score 4, Interesting) 236

It's that simple. Even with the patches, bash is still running the contents of environment variables through its general command parser in order to parse the procedure. That's ridiculously dangerous... the command parser was never designed to be secure in that fashion. The parsing of env variables through the command parser, whether to pass sh procedures OR FOR ANY OTHER REASON, should be removed from bash outright. Period. End of story. Someone light a fire under the authors. It was stupid to use env variables for exec-crossing parameters in the first place. No other shell that I know of does it.
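
For anyone who wants to see the problem first-hand, the widely circulated Shellshock probe can be driven from anything that controls the environment, e.g. this little Python wrapper; a vulnerable bash parses the function definition out of the variable and runs the trailing payload.

    import subprocess

    env = {
        'PATH': '/usr/bin:/bin',
        'x': '() { :;}; echo VULNERABLE',  # payload smuggled in an env variable
    }
    # an unpatched bash prints "VULNERABLE" before running the actual command
    subprocess.run(['bash', '-c', 'echo harmless'], env=env)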

This is a major attack vector against Linux. BSD systems tend to use bash only as an add-on, but even BSD systems could wind up being vulnerable due to third-party internet-facing utilities/packages which hard-code the use of bash.

-Matt

Comment Re:"unlike competitors" ??? (Score 1) 504

It's built into Android as well, typically accessible from the Settings / Security & Screen Lock menu. However, it is not the default on Android, the boot-up sequence is a bit hokey when you turn it on, it really slows down access to the underlying storage, and the keys aren't stored securely. Also, most telcos load crapware onto your Android phone that cannot be removed, and that often includes backing up to the telco or phone vendor... and those backups are not even remotely secure.

On Apple devices the encryption keys are stored on a secure chip, the encryption is non-optional, and telcos can't insert crapware onto the device to de-secure it.

The only issue with Apple devices is that if you use iCloud backups, the iCloud backup is accessible to Apple with a warrant. They could fix that too, and probably will at some point. Apple also usually closes security holes relatively quickly, which is why the credit card companies and banks prefer that you use an iOS device for commerce.

-Matt

Comment VPN is the only way to go, for those who care (Score 1) 418

I read somewhere that not only is Comcast doing their hotspot crap, but that they will also be doing JavaScript injection to insert ads for anyone browsing the web through it.

Obviously Comcast is sifting whatever data goes to/from their customers, not just for 'bots' but also for commercial and data broker value. Even this relatively passive activity is intolerable to me.

Does anyone even trust their DNS?

Frankly, these reported 'Tor' issues are just the tip of the iceberg, and not even all that interesting in terms of what customers should be up in arms about. It is far more likely to be related to abusing bandwidth (a legitimate concern for Comcast) than to actually running Tor.

People should be screaming about the level of monitoring that is clearly happening. But I guess consumers are mostly too stupid to understand just how badly their privacy is being trampled.

There is a solution. Run a VPN. If Comcast complains, cut the TV service and change to the business internet service (which actually costs less).

-Matt

Comment Re:That guy just wasted his time (Score 2) 314

By what strange theory does Slackware support systemd? And how is the conversation being "held back"? At least on LQ, I think it's been discussed to death to the point where there's really nothing new to say about it.

I can say one thing for certain: you do not know whether anything concerning systemd in Slackware is likely or not. Hell, *I* don't.

Comment High perf SMP coding is in a category of its own (Score 5, Informative) 195

Designing algorithms that play well in an SMP environment under heavy load is not easy. It isn't just a matter of locking within the protocol stack... contention between CPUs can get completely out of control even from small 6-instruction locking windows. And it isn't just the TCP stack which needs to be contention-free. The *entire* packet path, from the hardware all the way through to the system calls made by userland, has to be contention-free. Plus the scheduler has to be able to optimize the data flow to reduce unnecessary cache mastership changes.
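
You can see the shape of the effect even from userland. A toy Python sketch, nothing more: N workers hammering one lock-protected counter versus each keeping a private counter merged at the end, which is the userland analogue of the per-cpu structures a contention-free packet path relies on.

    #!/usr/bin/env python3
    # toy demo: one shared, lock-protected counter vs. private per-worker
    # counters merged at the end (the per-cpu pattern)
    import multiprocessing as mp
    import time

    N, ITERS = 4, 200_000

    def shared_worker(counter):
        for _ in range(ITERS):
            with counter.get_lock():   # every increment fights over one lock
                counter.value += 1

    def private_worker(slots, idx):
        local = 0
        for _ in range(ITERS):
            local += 1                 # nothing shared until the final merge
        slots[idx] = local

    def timed(target, args_of):
        procs = [mp.Process(target=target, args=args_of(i)) for i in range(N)]
        t = time.perf_counter()
        for p in procs:
            p.start()
        for p in procs:
            p.join()
        return time.perf_counter() - t

    if __name__ == '__main__':
        counter = mp.Value('i', 0)
        print('shared :', timed(shared_worker, lambda i: (counter,)), 'sec')
        slots = mp.Array('i', N)
        print('private:', timed(private_worker, lambda i: (slots, i)), 'sec')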

It's fun, but so many kernel subsystems are involved that it takes a very long time to get it right. And there are only a handful of kernel programmers in the entire world capable of doing it.

-Matt

Comment That's nothing (Score 4, Informative) 361

In the 80's it was well known that the CIA was monitoring the USENET. Apparently there was a list of keywords that they searched for that became well known, so we used them all the time. We had it on good authority that the CIA had become amused by our antics. It probably relieved the boredom.

-Matt

Comment Stupid argument (Score 4, Informative) 441

It's hilarious watching people argue over a topic that has already been shown to be a non-issue. The EIA (US) and German statistics show that, in aggregate, wind-energy sources produce a relatively steady amount of power. Individual turbines and even whole wind farms might not be deterministic, but all the wind farms taken together... are.
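
The statistics behind that are straightforward. A synthetic sanity check (invented numbers and fully independent sources, so real-world correlation between nearby farms will blunt the effect somewhat): the relative variability of the aggregate falls off roughly as 1/sqrt(N).

    #!/usr/bin/env python3
    # synthetic check: relative std dev of total output vs. number of farms
    import random
    import statistics

    def farm_output():
        # a single farm: wildly variable (values are invented, not real data)
        return max(0.0, random.gauss(1.0, 0.6))

    for n in (1, 10, 100, 1000):
        totals = [sum(farm_output() for _ in range(n)) for _ in range(2000)]
        rel = statistics.stdev(totals) / statistics.mean(totals)
        print(f'{n:5d} farms: relative std dev {rel:.3f}')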

-Matt
