Slashdot is powered by your submissions, so send in your scoop

Comment Re:Perl (Score 1) 99

For mart and everyone else: $@ is populated by eval, so if an error occurs, the error message is there. What he describes would look like this:

sub check {
my $ip = shift;
eval { lib::function($ip) };
return !$@;
}

So if $@ is empty it returns true, and otherwise false, since ! is the logical negation operator. The library can signal failure by calling "die" inside lib::function. But yes, that is rather silly. I'm hoping they couldn't control the other code for some reason. :)
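To show the die/$@ round trip end to end, here's a self-contained sketch; validate_ip (and its toy regex) is a made-up stand-in for lib::function:

```perl
use strict;
use warnings;

# Made-up stand-in for lib::function: die on anything that isn't
# four dot-separated groups of digits.
sub validate_ip {
    my $ip = shift;
    die "bad ip: $ip\n" unless $ip =~ /^\d{1,3}(?:\.\d{1,3}){3}$/;
    return 1;
}

sub check {
    my $ip = shift;
    eval { validate_ip($ip) };
    return !$@;    # $@ empty => the eval survived => true
}

print check('10.0.0.1')  ? "pass\n" : "fail\n";    # pass
print check('not an ip') ? "pass\n" : "fail\n";    # fail
```

The eval swallows the die, and !$@ turns "no error message" into a boolean.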

Comment Re:The Moose description says bad things about Per (Score 1) 99

It also seems to make people load frameworks rather than learn the language. I've fixed a few libraries I found that were using Moose by removing the framework and replacing it with 30 lines of OO code. They loaded another 35 files just to avoid a couple of @ISA statements and do some input validation. If people bothered to learn the language, they could make their code even faster and cleaner. This problem is even worse in PHP. And Java seems to require every project to write its own framework. :/
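For anyone curious what "30 lines of OO code" can look like, here's a minimal sketch of the plain-Perl pattern (class names and validation rules here are made up): bless for construction, one @ISA line for inheritance, and a couple of die checks instead of Moose type constraints:

```perl
use strict;
use warnings;

package Server;

sub new {
    my ($class, %args) = @_;
    # hand-rolled input validation in place of Moose type constraints
    die "host required\n"
        unless defined $args{host} && length $args{host};
    die "port must be numeric\n"
        unless defined $args{port} && $args{port} =~ /^\d+$/;
    return bless { %args }, $class;
}

sub host { $_[0]{host} }
sub port { $_[0]{port} }

package WebServer;

our @ISA = ('Server');    # plain inheritance: one line, no extra files loaded

sub url {
    my $self = shift;
    return "http://" . $self->host . ":" . $self->port . "/";
}

package main;

my $w = WebServer->new(host => 'example.com', port => 80);
print $w->url, "\n";    # http://example.com:80/
```

No 35 extra files, and the validation is right there where you can read it.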

Comment Re:Privacy (Score 1) 363

It's easy to infer data for the most part. The biggest challenge I see with your requirement for keys is the lack of interesting searches. If you encrypt everything and require keys, then people won't be able to search to see if they have the same hobbies as you. What you suggest isn't much of a social site, but more of a private BBS: only a few can dial in, and the rest of the world knows nothing of the content.

Comment slicehost works for me (Score 1) 456

I've got a HostGator account, a dedicated server in Germany, and a 1and1 developer account (for a few years now). I'm in the process of moving them all onto VPSes at slicehost.com. You can start the slices out cheap and work your way up to multiple slices with backnet and public interfaces, so if you need to grow you can run a load balancer on your primary IP and have other slices doing the work. Their admin tools are pretty easy to deal with, too. This option is best if you aren't afraid to administer your own box, though. They have chat support, forums, articles, etc. to help out people who are fairly new to system administration, with optional backups for your slices.

Comment We do a simple P2P at work... (Score 1) 291

I've thought about working with either RSS or pushing .torrent files and then having torrent daemons sync files for me. But so far that's been overly complicated, as I have ~1500 production web servers to keep up to date on different code releases depending on what cluster they're in.

Currently we have a database of our production equipment, and we keep track of what cluster/role a server is in. The boxes all run a cron job every minute that checks whether a new "version" of a release is available (any code change, whether a new code release or just an update, increments the version for that cluster).

Now, we use rsync for our transfers. We keep our /etc/rsyncd.conf up to date with the subnets we use internally, so random outside machines on the WAN can't rsync to our boxes. But we also keep a "lock" table in our database, along with the state of each server.
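For reference, the subnet restriction lives in the rsync daemon config; a sketch (the module name and path here are made up, and the real hosts allow list would be regenerated from the equipment database):

```
# rsyncd.conf -- illustrative only
[code]
    path = /var/www/releases
    read only = yes
    # internal subnets only; everything else is refused
    hosts allow = 10.0.0.0/8 192.168.0.0/16
    hosts deny = *
```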

So we have one super seed, if you will, which keeps a copy of the code for all clusters. When you update a release (kept elsewhere) and then "push", the change you made gets copied to the super seed machine. Once that is done, the version gets incremented, and then the cron jobs start to look for the new code.

The basic idea behind the locking is that we limit the number of peers a box is allowed to have; so as not to impact production traffic, it's currently set to 3 (plenty fast). If no "peers" (any machine in the cluster that is not the seed) have the code yet, the seed will be used first, and the boxes that fired off the cron last will sit in a queue (waiting to get a lock on any machine with the code that hasn't reached its limit of locks). So when a push first starts out, the first three boxes will fall back to the super seed. Then the peers will start to get the code from those boxes once they are updated.
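The lock/peer-selection step can be sketched roughly like this (a toy in-memory version; in the real system the equivalents of %locks and %has_code live in the database's lock and state tables, and the host names are made up):

```perl
use strict;
use warnings;

my $MAX_LOCKS = 3;                   # peer limit from the post
my %locks;                           # host => number of boxes pulling from it
my %has_code = ( superseed => 1 );   # hosts that already have the new version

# Pick a source for a box that wants the code: the least-loaded updated
# host under the lock limit, or undef (wait in the queue) if none.
sub pick_source {
    my @avail = grep { ($locks{$_} // 0) < $MAX_LOCKS } keys %has_code;
    return undef unless @avail;
    my ($src) = sort { ($locks{$a} // 0) <=> ($locks{$b} // 0) } @avail;
    $locks{$src}++;
    return $src;
}

# After a successful rsync, release the lock; the box now seeds its peers.
sub finish_pull {
    my ($host, $src) = @_;
    $locks{$src}--;
    $has_code{$host} = 1;
}
```

With only the super seed updated, the first three boxes lock it, the fourth queues, and as soon as one box finishes it becomes an eligible source itself, so fan-out grows with each completed pull.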

Doing a code push to our larger cluster used to take 45 minutes to an hour (it was done in sequence, not in parallel), but now it takes about 5 minutes.

My next goals are to distribute the super seeds and potentially use RPM-based distribution, since I'm working on making a code release restart only the fewest services necessary to pick up the changes. With RPM, I can put those commands in the install portion of the package. :)
