
Comment Re:I will be changing to FreeBSD too (Score 1) 450

There are definitely going to be some teething pains, which is why I'm not rolling out anything production on RHEL 7 until 7.2 or 7.3 comes out next year.

But I am looking forward to having one log file to dig through instead of two dozen or more, and to being able to easily pull it to a centralized log server (pull is more secure than push). I'm also looking forward to not having to write monit/Nagios scripts to restart services when other services restart.
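
For the centralized part, here's a rough sketch of what a pull setup might look like with the journald tooling; the hostname is made up and this assumes the gateway daemon is installed on each client:

    # on each client: expose the local journal over HTTP (default port 19531)
    systemctl enable systemd-journal-gatewayd.socket
    systemctl start systemd-journal-gatewayd.socket

    # on the central log server: pull that client's journal
    /usr/lib/systemd/systemd-journal-remote --url http://mailhost.example.com:19531/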

Comment Re:OpenPGP (Score 1) 63

The problem with Perfect Forward Secrecy (PFS) in the case of GPG/PGP-encrypted messages is that PFS requires two-way communication between the endpoints at the start, so they can securely negotiate an ephemeral key for that session.

That's not practical in the case of sending an encrypted email/file to someone. There is no "session" to speak of. There's no two-way conversation at the start before the file/information is transmitted.

GPG/PGP is designed to defend against disclosure of data at rest (i.e. an email body sitting on someone's server, or a file sitting on your hard drive). Because it protects data at rest, it also happens to protect the contents in transit. It's very good at what it does, but trying to use it in a situation where you want PFS is a misapplication of the technology.
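
To illustrate the data-at-rest use case: encrypting a file to a recipient's public key is a one-shot operation with no session setup at all (recipient address and filename are made up):

    gpg --encrypt --recipient alice@example.com report.pdf     # produces report.pdf.gpg
    gpg --decrypt report.pdf.gpg > report.pdf                  # recipient decrypts later, offline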

(So yeah... the EFF folks are idiots and are lumping together apples and oranges.)

Comment Re:Still a second class citizen (Score 1) 214

In general, if a device supports 64GB microSD cards, larger cards will work fine too.

The original SD spec was limited to 2GB. SDHC came out in 2006 and allowed for card capacities of up to 32GB. Most devices made in 2013 or earlier are SDHC-only, with a 32GB limit (such as my Thinkpad T61p laptop and my Asus TF700T tablet). That means putting a 64GB card into an SDHC-only slot is a bad idea (it will probably corrupt the data once it tries to write past the 32GB mark).

SDXC was introduced three years later, in 2009, and allows for cards up to 2TB in size. Often, manufacturers will only certify up to the largest size that was available when the device was released, so larger cards may very well work, up to the limits of the spec.

Comment Re:I have just one word for you (Score 1) 217

A lot of Java boilerplate code (and not just getters/setters) can be gotten rid of with a bit of AspectJ (Spring Roo leverages this heavily). With good use of AspectJ, your Java objects look like POJOs (plain old Java objects), with all of the extra stuff added at compile time by the .aj files.
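
Roughly what that looks like, as a minimal sketch (class and file names are made up, and this isn't Roo's actual generated code): the .java file stays a bare POJO and the accessors live in an inter-type declaration in a .aj file that the AspectJ compiler weaves in.

    // Person.java - stays a plain POJO
    public class Person {
        private String name;
    }

    // Person_Accessors.aj - woven in at compile time
    privileged aspect Person_Accessors {
        public String Person.getName() { return this.name; }
        public void Person.setName(String name) { this.name = name; }
    }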

Comment Re:Old saying (Score 4, Informative) 249

Best practice in the real world is four reference clocks or only one. With just three configured, you end up in the "just two clocks" situation more often than not, and with only two sources that disagree NTP has no majority to tell which one is the falseticker, so it is likely to oscillate between the two remaining candidates (unless you use the "prefer" keyword).

How you choose to configure NTP is a tricky art, depending on how resilient you want to be and whether you have a local time source or need better than 5 ms accuracy. For most situations (99% of servers), being within 500 ms of "internet time" is enough. Your goal is mostly to avoid the case where the clock is off by tens of seconds or worse.
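
A minimal ntp.conf along those lines, assuming you're fine with the public pool servers (the hostnames below are the standard pool aliases, not a recommendation for your particular environment):

    # four candidate sources, so a single falseticker can be voted out
    server 0.pool.ntp.org iburst
    server 1.pool.ntp.org iburst
    server 2.pool.ntp.org iburst
    server 3.pool.ntp.org iburst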

Comment Re:I send bulk email.. (Score 1) 139

I send bulk email for an opt-in list with Mailman (opt-in as in you have to walk into the store and physically write your email on our sign-up sheet).

It's not opt-in unless you send out a verification email to the address on the sign-up sheet. You have zero guarantee that the person writing down that address has the permission of the person who receives mail at that address. That verification email should explain how you obtained the address and require action on the recipient's part in order to remain on the list. If you get no response or the recipient takes no action, you should throw away that record.

No, you're not allowed to do advertising in that initial mailing either. And those "asking permission" emails should go out sooner (within a week) rather than later (months or more).

Comment Re:working as designed? (Score 1) 139

It breaks a few mailing (discussion, not advertising) list programs (such as my uni's) if you send from an SPF-protected address, because the list server forwards it with your address in the From box. Other than that it works well.

Then that mailing list is poorly maintained. I belong to dozens of mailing lists on a domain with very restrictive SPF records and have never had issues.

If you allow the mailing list to forge your email address, then *everyone* can forge your email address. The better mailing list software no longer forges your email address on outbound mail.

Comment Re:working as designed? (Score 1) 139

SPF is all about preventing joe-jobs where someone sends out malicious email and uses your email address to do it.

With properly configured SPF records (ending in "-all"), you're telling all of the mail servers of the world (or at least the majority that support SPF) that if an email doesn't come from a small, select group of IP addresses, they should discard it. A message that fails SPF verification is scored very badly by most spam-filtering software.
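
A hypothetical record of that shape (the IP address and include domain are made up), published as a TXT record on the sending domain:

    example.com.  IN  TXT  "v=spf1 ip4:192.0.2.10 include:_spf.example.net -all"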

That being said, SPF is not anti-spam - it's anti-forgery. DKIM is also anti-forgery.

(Yes, there are teething pains with putting SPF on your domain, and you don't have to use it. But if you can, you should.)

Comment Re:Are you sure? (Score 1) 863

Eh, I'm looking forward to systemd because it will be an improvement over init.d scripts, especially when you have multiple services that depend on other services being up and running.

In today's world, you have to write some other non-standard script, or some non-standard hack of the original init scripts, to make sure that X starts before Y and that Z also gets notified when X restarts. That's a major pain point for anyone who doesn't depend solely on monolithic apps, such as a mail server stack (clamd, amavisd, postfix, dovecot, and sogod all intertwined).
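
With unit files, that ordering and restart propagation become declarative. A hedged sketch (the service names and path are illustrative, not the stock RHEL units):

    # amavisd.service (fragment)
    [Unit]
    Description=Amavis content filter
    After=network.target clamd.service
    Requires=clamd.service
    PartOf=clamd.service        # a clamd restart is propagated to this service

    [Service]
    ExecStart=/usr/sbin/amavisd
    Restart=on-failure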

That said, there's no way I will roll out RHEL 7 or CentOS 7 until the 7.1 or 7.2 release (i.e. sometime in late 2015). I'm not yet convinced that systemd is fully baked. I have the same stance on btrfs, which is still a technology preview.

And binary logs are not a huge deal when they make it far easier to find an event without having to look at a dozen different log files, each with a slightly different naming scheme or location. While the current log viewing tools are rudimentary, I expect we'll see improved tools as people scratch their itches. The problem with binary logs is that people have really only dealt with Windows' proprietary implementation (which has been sucky for a decade-plus): there's no way to copy the log files off to a second server (if you can even get the drives mounted), and the built-in log viewing tool is just horrible.
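
Even the stock tooling already makes the single-log case workable; for example, pulling one service's messages for a window of time instead of grepping a dozen files (unit names are just examples):

    journalctl -u postfix.service --since "2014-07-01" --until "2014-07-02"
    journalctl -u dovecot.service -p err     # only error-priority entries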

Comment Re:WHY IS THE INTERNET FOCUSED ON THIS SHIT (Score 1) 223

Writing down a password is not the big bugaboo that you make it out to be.

Writing it down and leaving it stuck to the monitor / keyboard is a problem (a social problem). Writing it down and keeping it in a secure location is not such a big deal (password manager software falls into the second category).

The trap that many system admins fall into is thinking that requiring long and complex passphrases meshes well with forced password expiration at intervals of less than a few years. When you force password resets on everyone on a weekly/monthly/quarterly basis, your users will figure out some trivial pattern that gets past your rules, or resort to sticking passwords on notes stuck to monitors.

Far better to let them choose something reasonably complex (which is 14+ characters these days) and then monitor for signs of unauthorized activity. And add two-factor authentication, using their corporate-assigned phone or smart card or token thingy, that kicks in if things look iffy.

Comment Re:I am not going to convert (Score 1) 245

When every developer in the office pulls down everything in the repository and tries to check the whole repo in after modifying the 1-2 files they changed, that is the problem.

That causes zero issues in SVN, because SVN only commits the files that have changed. Now, if you have a developer editing dozens and dozens of files and doing a massive commit, that's a separate management issue (i.e. they should be working in a feature branch).

Where SVN falls down is in complicated branch/merge scenarios, and the SVN developers are constantly working to improve that. Git, Mercurial, and other DVCS systems are just better at it.

(shrugs) I've looked at git, Mercurial, and Subversion - and SVN is just easier for regular users to understand. The main hurdle they have is learning the update / modify / commit cycle, and that the shorter the cycle, the better things work. Plus, learning not to leave their working copies / development areas dirty with uncommitted changes.
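
The whole cycle they need to internalize is only a few commands (the repository URL is made up):

    svn checkout https://svn.example.com/repos/project/trunk project
    cd project
    svn update                              # pull in everyone else's changes first
    # ...edit the one or two files you're working on...
    svn status                              # see exactly what will be committed
    svn commit -m "Fix widget parsing"      # commits only the changed files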
