Comment: Re:geek or not (Score 1) 229

by WuphonsReach (#47897341) Attached to: Ask Slashdot: Advice On Building a Firewall With VPN Capabilities?
For DIY, the choice really does boil down to either pfSense or IPFire, depending on whether you want BSD or Linux underneath.

Personally, I went with a full-blown CentOS install with Shorewall / OpenVPN on top, but it was definitely not the easiest thing to set up. Next time around I'm strongly considering a firewall distro.

Comment: Re:Good decision? (Score 1) 345

There are really only three Linux distros... Red Hat, Debian, and everyone else.

Which is somewhat similar to the days where you had Windows 95/98 vs Windows NT - and you couldn't always run software from one on the other.

And really, once you get past the package manager, most of the differences between the distros are only skin-deep. It's all GNU/Linux underneath.

Comment: Re:Seems kind of pointless- the DNS has to be subv (Score 1) 67

by WuphonsReach (#47882731) Attached to: Mozilla 1024-Bit Cert Deprecation Leaves 107,000 Sites Untrusted
DANE is mostly to guard against rogue CAs. CA #1 can no longer get away with signing a certificate for a domain whose DANE record points at CA #2 - clients checking DNS will reject it. So it limits the amount of damage that a rogue CA can get away with.

It may also eliminate the need for CAs and certificates altogether. You just store the public half of your certs in the DNS system.
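For reference, pinning a server's own key via DANE looks something like this in a zone file (a sketch - the hash is a placeholder, and usage 3 / selector 1 / matching type 1 mean "match the SHA-256 of this end-entity certificate's public key"):

```
; TLSA record for HTTPS (TCP port 443) on example.com
_443._tcp.example.com. IN TLSA 3 1 1 (
        8cb0fc6c527506a053f4f14c8464bebbd6dede2738d11468dd953d7d6a3021f1 )
```

A validating client would then only accept a certificate whose public key hashes to that value, regardless of which CA signed it.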

Comment: Re:They declared that security required, https (Score 1) 67

by WuphonsReach (#47845873) Attached to: Mozilla 1024-Bit Cert Deprecation Leaves 107,000 Sites Untrusted
Even if you don't do financial transactions on your site - consumers / customers / users are getting more savvy and want *any* personal information to be encrypted in transit. Login details are naturally something that should always be encrypted, but that also extends to things as mundane as URL history or search terms.

I just wish DANE was farther along (plus DNSSEC).

Comment: Re:Can we have a [credible] MS Access equivalent? (Score 1) 185

by WuphonsReach (#47836703) Attached to: Why Munich Will Stick With Linux
The bigger issue with MS Access - and where other tools fall flat - is the ease of linking together multiple, disparate data sources (without having to register dozens or hundreds of ODBC drivers), mashing the data together, then sending it off to yet another destination.

This is especially critical when you work with ad-hoc data sets that differ somewhat or completely from job to job and client to client, so putting that data into a proper database and writing proper SQL queries to massage it - or slapping a web front end on it - is not worth the time investment.

I've looked at OpenOffice/LibreOffice Base over the years. It's still an infant, not even equivalent to the old MS Access 2.0 functionality yet. Import/export of CSVs is difficult - it won't create the tables for you or generate reasonable field definitions. Linking to another database requires an ODBC driver connection to be configured on the system.

Worse - it uses HSQLDB, where you have to put double quotes around all of your field/table identifiers. That makes it garbage - because you cannot prototype a SQL query in Base, then copy/paste it to another SQL-compliant database and get it to run without major changes.

Comment: Re:Isolating the problem (Score 1) 220

by WuphonsReach (#47819271) Attached to: Firefox 32 Arrives With New HTTP Cache, Public Key Pinning Support
I really cannot think of a reasonable workflow where that would make sense but I'm not trying to judge

The workflow is pretty much anyone who has to wear multiple hats during the day. Think of open tabs in background windows as short-term bookmarks.

One browser window with half a dozen tabs to keep an eye on the internal ticket system. Another window open with a dozen tabs to track stats on jobs in progress across multiple days (so that you can just alt-tab to that window and glance through the tabs, rather than rummage for bookmarks or use the awesome-bar). Then typically one window per task/project, with anywhere from 1-20 tabs.

As an example, let's say I need to look into GlusterFS. I can either re-purpose one of my existing browser windows, or better, open a new one and keep all tabs relating to GlusterFS in a single window. I'll start with Google or the GlusterFS home page, then start proliferating tabs as I find things that are interesting enough to read, but that I'm not ready to dive into yet, nor are they something I'll want as a long-term bookmark.

As I work through the various tabs, they either get bookmarked after I've read them or just closed.

Not hard to hit 100 tabs. Today is about average: I have 10 windows open, each with 1-15 tabs in it.

Comment: Re:Seemed pretty obvious this was the case (Score 1) 311

by WuphonsReach (#47819101) Attached to: Apple Denies Systems Breach In Photo Leak
Of course, you should keep a record of those questions and answers so you can correctly answer them if the need arises.

That's what GPG encrypted text files were invented for.

One text file per account; the contents are a GPG ASCII-armored encryption block containing things like the site name, password, account name, answers to security questions, or anything else.
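A minimal sketch of what the plaintext inside one of these files might look like before GPG encryption (the field names are just illustrative, not a required format):

```
site:     example.com
url:      https://www.example.com/login
account:  jdoe
password: <long random password>
q: mother's maiden name   a: <random string, recorded here>
notes:    2FA enabled via TOTP
```

Since the "answers" to security questions never have to be true, they can be random strings too - recorded here so they can be recovered if the need arises.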

I then store those text files in a version control system, which makes it easy to share across multiple machines.

(The weak link in all of this is the GPG key - but there are options to strengthen that like smartcards.)

Comment: Re: Too late (Score 1) 107

by WuphonsReach (#47806513) Attached to: Hackers Behind Biggest-Ever Password Theft Begin Attacks
Encrypt the tablet / phone - use a 6-9 digit PIN (which is a lot better than just a 4-digit PIN). Have the device wipe after 10 bad attempts (the default on Android).

Most thieves, when presented with that obstacle, will just reformat the device for sale rather than try to steal information off of it.
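As a rough sanity check on the PIN arithmetic (a sketch, assuming purely random digits):

```python
import math

# Keyspace of an all-digit PIN of length n is 10**n.
for digits in (4, 6, 9):
    keyspace = 10 ** digits
    bits = math.log2(keyspace)
    print(f"{digits}-digit PIN: {keyspace:>13,} combinations (~{bits:.1f} bits)")

# With a wipe after 10 bad attempts, a thief gets at most 10 guesses,
# so the odds of a lucky hit on a random 6-digit PIN are 10 / 10**6.
print(f"chance of guessing a 6-digit PIN in 10 tries: {10 / 10**6:.6f}")
```

The entropy numbers are small either way - the wipe-after-N-attempts limit is what actually does the work here.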

As for apps, KeePass / LastPass are frequently mentioned. My personal preference is a strong master password in Firefox, and just letting it remember the hundreds of secondary website account passwords (i.e. not my bank, webmail, or other financial sites). The best choices are those where you set up your own WebDAV cloud storage on your own hardware, and use that to keep things synchronized.

Comment: Re:Why? Simple bullshit is why. (Score 2) 107

by WuphonsReach (#47806467) Attached to: Hackers Behind Biggest-Ever Password Theft Begin Attacks
Four words strung together can have a key space as small as 3000^4 (roughly 46 bits of entropy), especially if they are chosen from the top 3000 words in the dictionary. That's nowhere near 6.2 * 10^36.

Misspellings can help a lot and make it a lot stronger (adding maybe 3-4 bits per word). Adding spaces or punctuation between them adds maybe 1 bit per word. Random capitalization of something other than the first letter adds 2 bits per word.

Basically, if you're using English-language phrases / words without any munging, you're only getting about 2 bits per character. A bit lower if it's a grammatically correct phrase (~1.5 bits/character), a bit higher if it's random words strung together (~2.3 bits/character). That puts a 26-character phrase like you provided at somewhere between 39-60 bits (and it is always better to assume the lower bound).

Most attackers will assume 2-6 words strung together, from the top N lists. So just tacking words together is not safe. Or they'll use N-grams (sort of like Markov chains, but more general) and go after the most common phrases.

In comparison, an 8-character password, chosen from a field of 64 possibles per character (6 bits), is 48 bits strong. If you managed to use one of 90 possible characters per position, that is 52 bits strong (6.5 bits/char * 8 chars).

48-52 bits is just not a lot these days, if the attacker gains access to the hashed password and can attack it offline. Minimum complexity really needs to be about 64 bits (10-12 characters, fully random) to deal with offline attacks, and 80 bits of entropy is far better.
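The arithmetic above can be double-checked in a few lines (a sketch; the bits-per-character figures for English are the rough estimates quoted above, not exact values):

```python
import math

# Four words drawn uniformly from a 3000-word list: 4 * log2(3000) bits
words_bits = 4 * math.log2(3000)
print(f"4 words from a top-3000 list: ~{words_bits:.0f} bits")  # ~46

# Random passwords: n characters from an alphabet of k symbols give n*log2(k) bits
print(f"8 chars, 64-symbol alphabet: ~{8 * math.log2(64):.0f} bits")  # 48
print(f"8 chars, 90-symbol alphabet: ~{8 * math.log2(90):.0f} bits")  # ~52

# English text at the rough per-character estimates above
for label, bpc in [("grammatical phrase", 1.5), ("random word string", 2.3)]:
    print(f"26-char {label}: ~{26 * bpc:.0f} bits")  # ~39 and ~60
```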

Comment: Re:Notified and ignored? (Score 1) 107

by WuphonsReach (#47806293) Attached to: Hackers Behind Biggest-Ever Password Theft Begin Attacks
These days the password on your email account is more important than your bank account password...

Because if they can gain access to your email, they can do password resets to gain access to dozens / hundreds of your accounts.

Some of the web email providers have 2FA (two-factor authentication) - those are probably better choices if you don't run your own email server.

Comment: Re:Final nail in the Itanium coffin (Score 2) 161

by WuphonsReach (#47776197) Attached to: Research Shows RISC vs. CISC Doesn't Matter
All of which paints a bleak picture for Itanium. There is no compelling reason to keep Itanium alive other than existing contractual agreements with HP. SGI was the only other major Itanium holdout, and they basically dumped it long ago. And Itaniums are basically just glorified space heaters in terms of power usage.

Itanium was dead on arrival.

It ran existing x86 code much slower. So if you wanted to move up to 64bit (and use Itanium to get there), you had to pay a lot more for your processors, just to run your existing workload.

Okay, you say, but everyone was supposed to stop running x86 and start running Itanium binaries! Please put down the pipe and come back to reality. No company is going to repurchase all of their software to run on a new platform, just because Intel says this is the way forward.

Maybe, maybe! If all of the business software had been open-source and easily portable to a different CPU architecture, it might have worked. But only if you'd gained a 3x-5x improvement in wall-clock performance by porting from x86 to the Itanium instruction set. (An advantage that never materialized.)

And once AMD started shipping AMD64 and Opterons that could run your existing x86 workload, on a 64bit CPU, at slightly faster speeds than your old kit for the same price - that buried any chance of Itanium ever succeeding in the market. Any forward-looking IT person, when it came time to upgrade old kit, chose AMD64 - because while they might be running 32bit OS/programs today, the 64bit train was rumbling down the tracks. So picking a chip that could do both, and do both well, was the best move.

Comment: Re:Switched double speed half capacity, realistic? (Score 1) 316

by WuphonsReach (#47769509) Attached to: Seagate Ships First 8 Terabyte Hard Drive
One thing I'd LOVE to see, and even think there's a market for, would be a single-platter drive suitable for mounting in the optical bay of mobile workstation laptops

ThinkPad T-series laptops have had that capability since the early 2000s. I'm pretty sure that current models still let you swap out the DVD drive for a 2nd SATA drive slot.

The problem with any solution that attempts to be multi-vendor is that every laptop has a slightly different form factor for their optical bay tray - there is no standard.

Comment: Re:Switched double speed half capacity, realistic? (Score 3, Interesting) 316

by WuphonsReach (#47762555) Attached to: Seagate Ships First 8 Terabyte Hard Drive
As you mention, 15k SAS drives are going to be rapidly undercut by SSDs. The price difference is no longer 10x or 20x when looking at cost/gigabyte; it's now only 2-3x.

Pay 2x-3x the amount for an SSD of the same size as the 15k SAS drive, and you gain a 50x improvement in your IOPS. For workloads where that matters, it's an easy choice to make now. As soon as you say something like "we'll short-stroke some 15k RPM SAS drives" - you should be considering enterprise-level SSDs instead. Fewer spindles needed, less power needed, and huge performance gains.

The only downside of SSDs is write endurance. A 600GB SSD can only handle about 120TB of writes over its lifespan (give or take 20-50%, depending on the controller, technology, etc). The question is - are you really writing more than 60GB/day to the drive? Because at that rate it will wear out in about 5.5 years.

And more importantly... will you care if it wears out in 4-5 years? Handling the same workload with fewer spindles and less power likely pays for itself, even including replacing the drives every 4-5 years.
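A back-of-the-envelope check on the endurance math (a sketch; the 120TB total-writes figure is the rough lifespan quoted above, not a spec for any particular drive):

```python
# Endurance math for the example above: a ~600GB SSD rated for ~120TB of writes.
endurance_tb = 120
daily_writes_gb = 60

days = endurance_tb * 1000 / daily_writes_gb  # total endurance / daily write rate
years = days / 365
print(f"{daily_writes_gb} GB/day exhausts {endurance_tb} TB of write endurance "
      f"in ~{years:.1f} years")
```

Writing 60GB/day means rewriting a tenth of the drive every day, which is already a heavy workload for most servers - lighter workloads push the wear-out date well past the warranty period.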

Comment: Re:Seagate failures (Score 3, Informative) 316

by WuphonsReach (#47762529) Attached to: Seagate Ships First 8 Terabyte Hard Drive
External 3.5" drives are generally put in junky enclosures with no cooling, iffy controller chips, and 1-year warranties. Since 3.5" hard drives are much more sensitive to heat than their 2.5" laptop cousins, you need active cooling (at least a minimal amount of airflow 24x7 over the drive).

One external drive enclosure that I've been happy with is a Mediasonic HF2-SU3S2. This is a USB 3.0 unit which can hold up to (4) 3.5" drives in a few different configurations (I use JBOD). Not that expensive, has a fan, and has good performance.

Stick some moderate-quality 3.5" drives in it (WD Red, Seagate Enterprise Capacity, Hitachi Ultrastar) and it should run fine for a few years. Most of those drives have 3 or 5 year warranties.

(For the 4-drive unit, we write to a different drive each day. And our backups are based on rdiff-backup, so each backup set has the full 53 weeks of change history for the source data.)
