Comment Re:Why should I care? (Score 1) 122

My physical credit card (normal mag stripe) is compromised at least once a year, sometimes more often. I might not be liable for the fraud, but it is VERY inconvenient when it happens: the card gets locked out, and it also causes my bank to start verifying more of my transactions, which is just as inconvenient if I don't answer the text message quickly enough and the card gets blocked for no reason.

It got so bad that around two years ago I got a second credit card so I could keep it on file with trusted sites like Amazon and with charities I donate to regularly, and not have to give them all new card numbers every time my over-the-counter card got compromised. That's how bad it has been.

Chip-and-pin is less convenient than ApplePay. Tap-and-pay cards are nominally as convenient as ApplePay but still have physical security issues. Mag stripe is clearly going to die soon... the data breaches are occurring so often now that not fixing it is no longer an option for merchants. They will have to go to NFC whether they like it or not.

I use ApplePay wherever I can now.

-Matt

Comment NFC alone isn't enough (Score 2, Informative) 122

You need NFC (which many Android devices have had for years)... but you also need an actual secure chip (not a software emulation or intermediary), the ability to initiate payment without having to turn on the phone or type in a security code (i.e. a fingerprint reader), and you have to be able to do it with the phone locked and the screen off (meaning you need low-power hardware to detect the NFC field and wake the phone up). Then you need the OS integration to make it all work together seamlessly. And it has to not leak information to anyone except your bank, which obviously needs the information anyway... and there is no smartphone payment app on the market other than ApplePay that can make that guarantee. Certainly not Google Wallet. Or CurrentC. Or anything else. It's also better than chip-and-pin and tap-to-pay, which both have physical security issues (though they are much better than mag stripe).
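Just to make that checklist concrete, here is a tiny illustrative Python sketch. All of the field names are hypothetical; this isn't any vendor's actual API.

```python
from dataclasses import dataclass

@dataclass
class PaymentStack:
    """Hypothetical checklist of the pieces described above."""
    has_nfc: bool
    has_secure_element: bool        # dedicated chip, not software emulation
    has_fingerprint_reader: bool    # authorize without typing a code
    wakes_on_nfc_field: bool        # low-power NFC detection while locked
    os_integration: bool            # everything wired together by the OS
    leaks_only_to_bank: bool        # no data shared with other parties

def ready_for_tap_to_pay(stack: PaymentStack) -> bool:
    # Every piece has to be present; missing any one of them breaks the flow.
    return all(vars(stack).values())
```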

Android is missing too many pieces, and it will be at least 1-2 years before it has them all. And even then, such a huge percentage of *new* Android phones won't have all the pieces that it will only create mass confusion for the general consumer.

The reason Google Wallet has been a failure to date is that it (and every other smartphone-based payment system except ApplePay) is simply not convenient to use compared to swiping a credit card. The reason ApplePay became the #1 smartphone payment mechanism overnight is that it's utterly trivial and convenient to use.

It took me exactly 3 seconds at the local Whole Foods to pull out my phone, tap it with my finger on the fingerprint reader, and put it back in my pocket. It takes me about as long to swipe my card if I don't have to sign, but half the time I do have to sign, so ApplePay immediately wins because I never have to sign (at least not so far).

Eventually all smartphones will do it the Apple way. For now, though, and for the next 1-2 years at a minimum, Apple is the only smartphone game in town that actually works well. Chip-and-pin and tap-to-pay cards work almost as well... they can even be more convenient in some situations, but they don't cover all the security bases.

-Matt

Comment Re:A lot of to-do about $700 (Score 1) 245

Over $900, and he will match the donations with his own funds so... that's definitely enough for a pretty nice machine. And with the slashdotting, probably a lot more now.

The bigger problem is likely network bandwidth to his home if he's actually trying to run the server there. He'd need both uplink and downlink bandwidth, so if he doesn't have FIOS or Google Fiber, that will be the bottleneck.

-Matt

Comment Re:Git Is Not The Be All End All (Score 1) 245

A single point of failure is a big problem. The biggest advantage of a distributed system is that the main repo doesn't have to take a variable client load that might interfere with developer pushes. You can distribute the main repo to secondary servers and have the developers commit/push to the main repo, but all readers (including web services) can simply access the secondary servers. This works spectacularly well for us.
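As a sketch of that split (the host names are hypothetical): a developer clone can fetch from a read-only secondary while pushes still go to the primary, using git's separate push URL.

```python
import subprocess

# Hypothetical hosts: 'mirror.example.org' serves readers, 'repo.example.org'
# is the primary repo that developers push to.
MIRROR = "git://mirror.example.org/project.git"
PRIMARY = "ssh://git@repo.example.org/project.git"

def clone_with_split_remotes(workdir: str = "project") -> None:
    # Clone (and later fetch) from the lightly loaded secondary...
    subprocess.run(["git", "clone", MIRROR, workdir], check=True)
    # ...but send pushes to the primary repo.
    subprocess.run(["git", "-C", workdir, "remote", "set-url",
                    "--push", "origin", PRIMARY], check=True)

if __name__ == "__main__":
    clone_with_split_remotes()
```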

The second biggest advantage is that backups are completely free. If something breaks badly, a repo will be out there somewhere (and for readers one can simply fail-over to another secondary server or use a local copy).

For most open source projects... probably all open source projects frankly, and probably 90% of the in-house commercial projects, a distributed system will be far superior.

I think people underestimate just how much repo searching costs when one has a single distribution point. I remember the days when the FreeBSD, NetBSD, and other CVS repos would be constantly overloaded due to the lack of a distributed solution. And the mirrors generally did not work well at all, because cron jobs doing updates would invariably catch a mirror in the middle of an update and completely break the local copy. So users AND developers naturally gravitated to the original and subsequently overloaded it. SVN doesn't really solve that problem if you want to run actual repo commands, versus just grepping one particular checked-out version of the source.

That just isn't an issue with git. There are still lots of projects not using git, and I had a HUGE mess of cron jobs that had to try very hard to keep their CVS or other trees in sync without blowing up and requiring maintenance every few weeks. Fortunately most of those projects now run git mirrors, so we can supply local copies of the git repo and broken-out sources for many projects on our developer box, which developers can grep through on our own I/O dime instead of on other projects' I/O dime.
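A minimal sketch of that kind of sync job, with example upstream URLs and a hypothetical local path. Unlike the old CVS mirrors, a git fetch that dies mid-run leaves the mirror consistent, because objects are written before any refs are updated.

```python
import os
import subprocess

# Example upstream projects to mirror locally.
UPSTREAMS = {
    "freebsd-src": "https://git.FreeBSD.org/src.git",
    "netbsd-src":  "https://github.com/NetBSD/src.git",
}
MIRROR_ROOT = "/build/mirrors"   # hypothetical local path

def sync_all() -> None:
    for name, url in UPSTREAMS.items():
        path = os.path.join(MIRROR_ROOT, name + ".git")
        if not os.path.isdir(path):
            # First run: create a bare mirror clone.
            subprocess.run(["git", "clone", "--mirror", url, path], check=True)
        else:
            # Subsequent runs (e.g. from cron): fetch and prune deleted refs.
            subprocess.run(["git", "--git-dir", path,
                            "remote", "update", "--prune"], check=True)

if __name__ == "__main__":
    sync_all()
```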

-Matt

Comment Re:SVN and Git are not good for the same things (Score 1) 245

This isn't quite true. Git has no problem with large repos as long as system RAM and the kernel's caches can scale to the data footprint the basic git commands need. However, git *DOES* have an issue with scaling to huge repos in general... it requires more I/O, certainly, and you can't easily operate on just a portion of a repo (a feature which I think Linus knows is needed). So repos well in excess of the RAM and OS resources required for basic commands can present a problem. Google has precisely this problem, and it is why they are unable to use git despite the number of employees who would like to.

Any system built for home or server use by a programmer/developer in the last 1-2 years is going to have at least 16GB of RAM. That can handle pretty big repos without missing a beat. I don't think there's much use complaining if you have a very old system with a tiny amount of RAM, but you can ease your problems by using an SSD as a cache. And if you are talking about a large company... having the repo servers deal with very large git repos generally just requires RAM (but the client side is still a problem).

And, certainly, I do not know of a single open source project with this problem that couldn't be solved by a measly 16GB of RAM.
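As a rough sanity check of whether a given repo fits comfortably in RAM, here's a sketch; the real working set depends on which git commands you actually run, so treat the threshold as hypothetical.

```python
import os

def dir_size(path: str) -> int:
    """Total bytes under a directory (e.g. a repo's .git/objects)."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for f in files:
            try:
                total += os.path.getsize(os.path.join(root, f))
            except OSError:
                pass
    return total

def fits_in_ram(repo_path: str) -> bool:
    objects = dir_size(os.path.join(repo_path, ".git", "objects"))
    # SC_PHYS_PAGES * SC_PAGE_SIZE = total physical memory (Linux/BSD).
    ram = os.sysconf("SC_PHYS_PAGES") * os.sysconf("SC_PAGE_SIZE")
    # Leave plenty of headroom for the OS and the rest of the page cache.
    return objects < ram // 2

if __name__ == "__main__":
    print(fits_in_ram("."))
```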

-Matt

Comment It's not that big a deal (Score 1) 245

It's just that ESR has an old, decrepit machine to do it on. A low-end Xeon with 16-32GB of ECC RAM and, most importantly, a nice SSD for the input data set, plus a large HDD for the output (so as not to wear out the SSD), would do the job easily on repos far larger than 16GB. The IPS of those CPUs is insane. Just one of our E3-1240v3 (Haswell) blades can compile the entire FreeBSD ports repo from scratch in less than 24 hours.

For quiet, nothing fancy is really needed. These CPUs run very cool, so you just get a big copper cooler (with a big, variable, slow fan) and a case with a large (fixed, slow) 80mm intake fan and a large (fixed, slow) 80mm exhaust fan, and you won't hear a thing from the case.

-Matt

Comment Mobile links (Score 1) 174

As for mobile internet connections... even dual mobile internet connections: I haven't done that, but I have used VPNs over mobile hotspots extensively. There is just no way to get low latency, even over multiple mobile links. The main problem is that the bandwidth capabilities of the links fluctuate all the time, and if you try to dup the packets you will end up randomly overloading one link or the other as time progresses, because TCP will get acks via the other link and thus not back off as much as it should. An overloaded mobile link will drop out, POOF. Dead for a while.

For VPN over mobile links, the key is to NOT run the VPN on the mobile devices themselves. Instead, run it on a computer (laptop, etc.) that is connected to the mobile devices. Then use a standard link aggregation protocol with a ~1 second ping and a ~10 second timeout. You will not necessarily get better latency, but it should solve the dropout problem... it will glitch for a few seconds when it fails over, but the TCP connections will not be lost.
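To make the "~1 second ping, ~10 second timeout" idea concrete, here is a rough sketch of a link health monitor. Everything in it is hypothetical (endpoint, addresses), it assumes that binding to a link's local address actually routes traffic out that link, and a real setup would rely on the aggregation protocol's own keepalives and reroute traffic rather than just print.

```python
import socket
import time

# Hypothetical: the VPN endpoint, and the local source IP assigned by each link.
ENDPOINT = ("vpn.example.org", 443)
LINKS = {"hotspot-A": "192.168.43.100", "hotspot-B": "172.20.10.2"}
PROBE_INTERVAL = 1.0    # ~1 second probe
DEAD_AFTER = 10.0       # ~10 second timeout

def probe(source_ip: str) -> bool:
    """TCP connect probe to the endpoint, forced out a specific link."""
    try:
        with socket.create_connection(ENDPOINT, timeout=1.0,
                                      source_address=(source_ip, 0)):
            return True
    except OSError:
        return False

def monitor() -> None:
    last_ok = {name: time.monotonic() for name in LINKS}
    while True:
        now = time.monotonic()
        for name, ip in LINKS.items():
            if probe(ip):
                last_ok[name] = now
        alive = [n for n in LINKS if now - last_ok[n] < DEAD_AFTER]
        print("usable links:", alive or ["<none>"])
        time.sleep(PROBE_INTERVAL)

if __name__ == "__main__":
    monitor()
```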

-Matt

Comment It can be done but... (Score 1) 174

I run a dual VPN link over two telcos (Comcast and U-Verse in my case), between my home and a colo. I don't try to repeat the traffic on both links, however, because they have different bandwidth capabilities and it just doesn't work well if the line becomes saturated. Instead I use PF and FAIRQ in both directions to remove packet backlogs at border routers in both directions, and to ensure that priority traffic gets priority. Either an aggregation-with-failover or a straight failover configuration works the best (the TCP connection isn't lost since it's a VPN'd IP). That way if you lose one link, the other will take over within a few seconds.

The most important feature of using a VPN to a nearby colo is being able to prioritize and control the bandwidth in BOTH directions. Typically you want to reserve at least 10% for pure TCP acks in the reverse direction, and explicitly limit the bandwidth to just below the telco's capabilities to avoid backlogging packets on either your outgoing cable modem / U-Verse / etc. router or on the telco's incoming router (which you have no control over without a VPN). Then use fair queueing or some other mechanism to ensure that bulk connections (such as streaming movies) do not interfere with the latency of other connections.
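The arithmetic is simple enough to sketch. The numbers below are hypothetical, and the real queues would be configured in pf.conf (or whatever shaper you use), not computed in a script.

```python
def shaper_plan(line_kbps: int, margin: float = 0.95, ack_share: float = 0.10):
    """Split a link's usable bandwidth into an ACK queue and a bulk queue.

    margin    -- shape to just below the telco's real capability so packets
                 queue on our router (which we control), not theirs.
    ack_share -- reserve a slice for pure TCP acks in the reverse direction.
    """
    usable = int(line_kbps * margin)
    ack_q = int(usable * ack_share)
    bulk_q = usable - ack_q          # fair-queued among bulk flows
    return {"ceiling_kbps": usable, "ack_kbps": ack_q, "bulk_kbps": bulk_q}

# Example: a hypothetical 6000 kbps uplink.
print(shaper_plan(6000))  # {'ceiling_kbps': 5700, 'ack_kbps': 570, 'bulk_kbps': 5130}
```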

In any case, what you want to do just won't work in real life when you are talking about two different telco links. I've tried it with TCP (just dup'ing the traffic). It doesn't improve anything. The reason is that one of the two links is going to have far superior latency over the other. If you are talking Comcast cable vs U-Verse, for example, the Comcast cable will almost certainly have half the latency of the U-Verse. If you are talking Comcast vs Verizon FIOS, then it is a toss-up. But one will win, and not just some of the time... 95% of the time. So you might as well route your game traffic over the one that wins.

-Matt

Comment Bash needs to remove env-based procedure passing (Score 4, Interesting) 236

It's that simple. Even with the patches, bash is still running the contents of environment variables through its general command parser in order to parse the procedure. That's ridiculously dangerous... the command parser was never designed to be secure in that fashion. Parsing env variables through the command parser to pass sh procedures, OR FOR ANY OTHER REASON, should be removed from bash outright. Period. End of story. Someone light a fire under the authors. It was stupid to use env variables for exec-crossing parameters in the first place. No other shell that I know of does it.

This is a major attack vector against Linux. BSD systems tend to use bash only as an add-on, but even BSD systems could wind up being vulnerable due to third-party internet-facing utilities/packages which hard-code the use of bash.
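Until bash is fixed, callers of those utilities can at least scrub the environment before handing it over. A minimal illustrative sketch, not a substitute for patching bash: the exported-function payloads are recognizable because the variable's value starts with "() {".

```python
import os
import subprocess

def scrubbed_env() -> dict:
    """Copy of the environment with bash-style exported functions dropped."""
    return {k: v for k, v in os.environ.items()
            if not v.lstrip().startswith("() {")}

def run_script(path: str, *args: str) -> int:
    # Run an internet-facing script with the scrubbed environment.
    return subprocess.run([path, *args], env=scrubbed_env()).returncode

if __name__ == "__main__":
    os.environ["EVIL"] = "() { :;}; echo pwned"   # shellshock-style payload
    print(run_script("/bin/sh", "-c", "echo environment scrubbed"))
```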

-Matt

Comment Re:"unlike competitors" ??? (Score 1) 504

It's built into Android as well, typically accessible from the Settings / Security & Screen Lock menu. However, it is not the default in Android, the boot-up sequence is a bit hokey when you turn it on, it really slows down access to the underlying storage, and the keys aren't stored securely. Also, most telcos load crapware onto your Android phone that cannot be removed, and that often includes backing up to the telco or phone vendor... and those backups are not even remotely secure.

On Apple devices the encryption keys are stored on a secure chip, the encryption is non-optional, and telcos can't insert crapware onto the device to de-secure it.

The only issue with Apple devices is that if you use iCloud backups, the iCloud backup is accessible to Apple with a warrant. They could fix that too, and probably will at some point. Apple also usually closes security holes relatively quickly, which is why the credit card companies and banks prefer that you use an iOS device for commerce.

-Matt

Comment VPN is the only way to go, for those who care (Score 1) 418

I read somewhere that not only is Comcast doing their hotspot crap, but that they will also be doing JavaScript injection to insert ads for anyone browsing the web through it.

Obviously Comcast is sifting whatever data goes to/from their customers, not just for 'bots' but also for commercial and data broker value. Even this relatively passive activity is intolerable to me.

Does anyone even trust their DNS?

Frankly, these reported 'Tor' issues are just the tip of the iceberg, and not even all that interesting in terms of what customers should be up in arms about. It is far more likely to be related to abusing bandwidth (a legitimate concern for Comcast) than to actually running Tor.

People should be screaming about the level of monitoring that is clearly happening. But I guess consumers are mostly too stupid to understand just how badly their privacy is being trampled.

There is a solution. Run a VPN. If Comcast complains, cut the TV service and change to the business internet service (which actually costs less).

-Matt

Comment Re:That guy just wasted his time (Score 2) 314

By what strange theory does Slackware support systemd? And how is the conversation being "held back"? At least on LQ, I think it's been discussed to death to the point where there's really nothing new to say about it.

I can say one thing for certain: you do not know whether anything concerning systemd in Slackware is likely or not. Hell, *I* don't.
