Comment NFC alone isn't enough (Score 2, Informative) 122

You need NFC (which many Android devices have had for years)... but you also need an actual secure chip (not a software emulation or intermediary), plus the ability to initiate payment without having to turn on the phone or type in a security code (i.e. a fingerprint reader), and you have to be able to do it with the phone locked and the display off (meaning you need low-power hardware to detect the NFC field and wake the phone up). Then you need the OS integration to make it all work together seamlessly. And it has to not leak information to anyone except your bank, which obviously needs the information anyway... and there is no smartphone payment app on the market other than ApplePay which can make that guarantee. Certainly not Google Wallet. Or CurrentC. Or anything else. And it's better than chip-and-pin and tap-to-pay, which both have physical security issues (though they are much better than mag stripe).

Android is missing too many pieces, and it will be at least 1-2 years before it has them all. And even then there will be such a huge percentage of *new* Android phones without all the pieces that it will only create mass confusion for the general consumer.

The reason Google Wallet has been a failure to date is that it (like every other smartphone-based payment system except ApplePay) is simply not convenient to use compared to swiping a credit card. The reason ApplePay became the #1 smartphone payment mechanism overnight is that it's utterly trivial and convenient to use.

It took me exactly 3 seconds at the local Whole Foods to pull out my phone, tap it with my finger on the fingerprint reader, and put it back in my pocket. Swiping my card takes about as long if I don't have to sign, but half the time I do have to sign, so ApplePay immediately wins because I never have to sign (at least not so far).

Eventually all smartphones will do it the Apple way. For now, though, and for at least the next 1-2 years, Apple is the only smartphone game in town that actually works well. Chip-and-pin and tap-to-pay cards work almost as well... they can even be more convenient in some situations, but they don't cover all the security bases.

-Matt

Comment Re:That's the part that "counts" (groan) (Score 4, Interesting) 443

Pretty sure NASA has blown more on Constellation, Orion, and SLS, launchers to nowhere that never launch, than SpaceX has spent on the successful development of 2 new rockets and Dragon 1, and will probably ever spend on Falcon Heavy, Dragon 2, and their reusability program.

NASA's problem is not insufficient funding. It's inefficiency, bureaucratic bloat, corrupt contractors, and an inability to build or do much of anything in the vicinity of its manned space program. JPL and a few other places are doing fine, but they are the exception to the rule.

Some people at Orbital probably do need to be sacked for trying to use 40+ year old Russian engines (the engines themselves are actually that old, not just the design). Some people at NASA probably should be sacked for buying into a contractor proposing such a flawed concept.

Comment Re:A lot of to-do about $700 (Score 1) 245

Over $900, and he will match the donations with his own funds so... that's definitely enough for a pretty nice machine. And with the slashdotting, probably a lot more now.

The bigger problem is likely network bandwidth to his home if he's actually trying to run the server there. He'd need both uplink and downlink bandwidth, so if he doesn't have FIOS or Google Fiber, that will be a bottleneck.

-Matt

Comment Re:Git Is Not The Be All End All (Score 1) 245

A single point of failure is a big problem. The biggest advantage of a distributed system is that the main repo doesn't have to take a variable client load that might interfere with developer pushes. You can distribute the main repo to secondary servers and have the developers commit/push to the main repo, but all readers (including web services) can simply access the secondary servers. This works spectacularly well for us.
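For the curious, here is roughly what that topology can look like: a minimal Python sketch (the hostname and paths are hypothetical, not our actual setup) of a secondary server keeping a read-only bare mirror of the main repo in sync, which readers and web services then hit instead of the box developers push to:

```python
#!/usr/bin/env python3
"""Minimal sketch of a read-only secondary git mirror.

Assumptions (hypothetical, not our actual hostnames/paths):
  - developers push to git@main.example.org:project.git
  - this box serves readers from /srv/mirror/project.git
Run it from cron (or a loop) every few minutes.
"""
import os
import subprocess

MAIN_REPO = "git@main.example.org:project.git"   # hypothetical
MIRROR    = "/srv/mirror/project.git"            # hypothetical

def sync_mirror():
    if not os.path.isdir(MIRROR):
        # First run: create a bare mirror clone of the main repo.
        subprocess.run(["git", "clone", "--mirror", MAIN_REPO, MIRROR],
                       check=True)
    else:
        # Subsequent runs: fetch all refs. Ref updates are atomic per-ref,
        # so readers never see a half-updated tree the way rsync'd CVS
        # mirrors used to.
        subprocess.run(["git", "--git-dir", MIRROR, "remote", "update",
                        "--prune"], check=True)

if __name__ == "__main__":
    sync_mirror()
```

Readers then clone or fetch from the secondary (over git-daemon, http, or ssh), while pushes still go only to the main repo.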

The second biggest advantage is that backups are completely free. If something breaks badly, a repo will be out there somewhere (and for readers one can simply fail-over to another secondary server or use a local copy).

For most open source projects... probably all open source projects frankly, and probably 90% of the in-house commercial projects, a distributed system will be far superior.

I think people underestimate just how much repo searching costs when there is only a single distribution point. I remember the days when the FreeBSD, NetBSD, and other CVS repos were constantly overloaded due to the lack of a distributed solution. And the mirrors generally did not work well at all, because cron jobs doing updates would invariably catch a mirror in the middle of an update and completely break the local copy. So users AND developers naturally gravitated to the original and subsequently overloaded it. SVN doesn't really solve that problem if you want to run actual repo commands, versus grepping one particular version of the source.

That just isn't an issue with git. There are still lots of projects not using git, and I used to have a HUGE mess of cron jobs that had to try very hard to keep their cvs or other trees in sync without blowing up and requiring maintenance every few weeks. Fortunately most of those projects now run git mirrors, so we can supply local copies of the git repos and broken-out sources for many projects on our developer box, which developers can grep through on our own I/O dime instead of on other projects' I/O dime.
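As an illustration of what those cron jobs reduce to once an upstream project offers a git mirror, here is a minimal sketch (the project names, URLs, and paths are made-up examples, not our real configuration):

```python
#!/usr/bin/env python3
"""Sketch of a cron job that keeps local mirrors of other projects plus
broken-out source trees developers can grep through locally.
Project list and paths are hypothetical examples."""
import os
import subprocess

PROJECTS = {
    # name: upstream git URL (hypothetical examples)
    "freebsd-src": "https://example.org/freebsd/src.git",
    "netbsd-src":  "https://example.org/netbsd/src.git",
}
MIRROR_BASE = "/build/mirrors"   # bare mirrors
TREE_BASE   = "/build/trees"     # broken-out sources for grepping

def update(name, url):
    mirror = os.path.join(MIRROR_BASE, name + ".git")
    tree   = os.path.join(TREE_BASE, name)
    if not os.path.isdir(mirror):
        subprocess.run(["git", "clone", "--mirror", url, mirror], check=True)
    else:
        subprocess.run(["git", "--git-dir", mirror, "fetch", "--prune",
                        "origin"], check=True)
    # Keep a plain working tree checked out from the local mirror so
    # grepping runs entirely on our own I/O.
    if not os.path.isdir(tree):
        subprocess.run(["git", "clone", mirror, tree], check=True)
    else:
        subprocess.run(["git", "-C", tree, "pull", "--ff-only"], check=True)

if __name__ == "__main__":
    for name, url in PROJECTS.items():
        update(name, url)
```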

-Matt

Comment Re:SVN and Git are not good for the same things (Score 1) 245

This isn't quite true. Git has no problem with large repos as long as system RAM and the kernel caches can scale to the data footprint the basic git commands need to access. However, git *DOES* have an issue with scaling to huge repos in general... it requires more I/O, certainly, and you can't easily operate on just a portion of a repo (a feature which I think Linus knows is needed). So repos well in excess of the RAM and OS resources required to run the basic commands can present a problem. Google has precisely this problem, and it is why they are unable to use git despite the number of employees who would like to.
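As a rough sanity check, here is a small sketch (assuming Linux with /proc/meminfo and git in the PATH) that compares a repo's packed object size against system RAM, which gives a decent first guess at whether the object store will sit comfortably in the page cache:

```python
#!/usr/bin/env python3
"""Rough sketch: compare a repo's object store size to system RAM.
Assumes Linux (/proc/meminfo) and that `git` is in the PATH."""
import subprocess

def repo_size_kib(repo_path):
    # `git count-objects -v` reports "size" and "size-pack" in KiB.
    out = subprocess.run(["git", "-C", repo_path, "count-objects", "-v"],
                         capture_output=True, text=True, check=True).stdout
    fields = dict(line.split(": ") for line in out.splitlines())
    return int(fields.get("size", 0)) + int(fields.get("size-pack", 0))

def mem_total_kib():
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith("MemTotal:"):
                return int(line.split()[1])   # reported in kB
    raise RuntimeError("MemTotal not found")

if __name__ == "__main__":
    repo = "."                                # hypothetical: current repo
    repo_kib, ram_kib = repo_size_kib(repo), mem_total_kib()
    print(f"repo objects: {repo_kib/2**20:.2f} GiB, RAM: {ram_kib/2**20:.2f} GiB")
    if repo_kib > ram_kib // 2:
        print("repo is large relative to RAM; expect heavy I/O on big operations")
```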

Any system built for home or server use by a programmer/developer in the last 1-2 years is going to have at least 16G of RAM. That can handle pretty big repos without missing a beat. I don't think there's much use complaining if you have a very old system with a tiny amount of RAM, and even then you can ease your problems by using an SSD as a cache. And if you are talking about a large company... having the repo servers deal with very large git repos generally just requires RAM (but the client side is still a problem).

And, certainly, I do not know of a single open source project with this problem that couldn't be solved with a measly 16G of RAM.

-Matt

Comment It's not that big a deal (Score 1) 245

It's just that ESR has an old, decrepit machine to do it on. A low-end Xeon with 16-32G of ECC RAM and, most importantly, a nice SSD for the input data set and a large HDD for the output (so as not to wear out the SSD) would do the job easily on repos far larger than 16GB. The IPS of those CPUs is insane. Just one of our E3-1240v3 (Haswell) blades can compile the entire FreeBSD ports repo from scratch in less than 24 hours.

For quiet, nothing fancy is really needed. These CPUs run very cool, so you just get a big copper cooler (with a big, variable, but slow fan) and a case with a large (fixed, slow) 80mm intake fan and a large (fixed, slow) 80mm exhaust fan, and you won't hear a thing from the case.

-Matt

Comment Re:No WMD's...Really? (Score 4, Insightful) 376

It's no secret Iraq had chemical weapons. They used them liberally against Iranian human-wave attacks during the Iran-Iraq war.

The reason they were hushed up is that they were provided by Western countries. You do know the U.S. and Europe backed Saddam in the Iran-Iraq war and most probably encouraged the use of chemical weapons against Iranian teenagers, right? Iran had a huge population advantage, and Iraqi Shias weren't that keen on fighting Iranian Shia, so Iraq needed technology to level the field and the West helped provide that edge.

The West was really happy about a lengthy, bloody stalemate in that war bleeding both countries white.

Comment Mobile links (Score 1) 174

For mobile internet connections... for dual mobile internet connections, that is. I haven't done exactly that, but I have used VPNs over mobile hotspots extensively. There is just no way to get low latency, even over multiple mobile links. The main problem is that the bandwidth capabilities of the links fluctuate all the time, and if you try to dup the packets you will end up overloading one or the other link randomly as time progresses, because TCP will get acks via the other link and thus not back off as much as it should. An overloaded mobile link will drop out, POOF. Dead for a while.

For VPN over mobile links, the key is NOT to run the VPN on the mobile devices themselves. Instead, run it on a computer (laptop, etc.) that is connected to the mobile devices. Then use a standard link aggregation protocol with a ~1 second ping and a ~10 second timeout. You will not necessarily get better latency, but it should solve the dropout problem... it will glitch for a few seconds when it fails over, but the TCP connections will not be lost.
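To illustrate the keepalive idea (not any particular VPN product's configuration), here is a toy Python monitor that probes each link roughly once a second and fails over after ~10 seconds of silence; the peer addresses are hypothetical, and a real setup would use the VPN software's own keepalive or a routing daemon rather than this:

```python
#!/usr/bin/env python3
"""Sketch of the keepalive/failover logic: ping each link's far end about
once a second and fail over when a link has been silent for ~10 seconds.
Peer addresses are hypothetical."""
import subprocess
import time

LINKS = {                       # hypothetical per-link peer addresses
    "link0": "10.0.0.1",
    "link1": "10.0.1.1",
}
PING_INTERVAL = 1.0             # seconds between probes
TIMEOUT = 10.0                  # seconds of silence before failover

last_ok = {name: time.time() for name in LINKS}
active = "link0"

def ping(addr):
    # One ICMP probe with a ~1 second deadline (Linux-style ping flags).
    r = subprocess.run(["ping", "-c", "1", "-W", "1", addr],
                       stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    return r.returncode == 0

while True:
    now = time.time()
    for name, addr in LINKS.items():
        if ping(addr):
            last_ok[name] = now
    if now - last_ok[active] > TIMEOUT:
        # Active link has dropped out; pick any link that is still alive.
        alive = [n for n in LINKS if now - last_ok[n] <= TIMEOUT]
        if alive:
            active = alive[0]
            print(f"failing over to {active}")  # real code would re-route here
    time.sleep(PING_INTERVAL)
```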

-Matt

Comment It can be done but... (Score 1) 174

I run a dual VPN link over two providers (Comcast and U-Verse in my case) between my home and a colo. I don't try to repeat the traffic on both links, however, because they have different bandwidth capabilities and it just doesn't work well if a line becomes saturated. Instead I use PF and FAIRQ to remove packet backlogs at the border routers in both directions and to ensure that priority traffic gets priority. Either an aggregation-with-failover or a straight failover configuration works best (the TCP connections aren't lost, since it's a VPN'd IP). That way if you lose one link, the other takes over within a few seconds.

The most important feature of using a VPN to a nearby colo is being able to prioritize and control the bandwidth in BOTH directions. Typically you want to reserve at least 10% for pure TCP acks in the reverse direction, and explicitly limit the bandwidth to just below the telco's capability to avoid backlogging packets either on your outgoing cable modem/U-Verse/etc. router or on the telco's incoming router (which you have no control over without a VPN). Then use fair queueing or some other mechanism to ensure that bulk connections (such as streaming movies) do not interfere with the latency of other connections.
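Just to make that budgeting concrete, here is a trivial sketch with made-up numbers: shape to a bit under the advertised uplink rate so the queue forms on your side, reserve ~10% for acks, and fair-share the rest between interactive and bulk traffic:

```python
#!/usr/bin/env python3
"""Sketch of the bandwidth budgeting described above, with made-up numbers."""

UPLINK_KBPS  = 5000             # hypothetical advertised uplink rate
CAP_FRACTION = 0.95             # shape below line rate so the telco router never queues
ACK_FRACTION = 0.10             # reserve at least 10% for pure TCP acks

cap_kbps  = UPLINK_KBPS * CAP_FRACTION
ack_kbps  = cap_kbps * ACK_FRACTION
rest_kbps = cap_kbps - ack_kbps

# Fair-queue the remainder so a bulk stream (e.g. a movie upload) cannot
# starve interactive traffic of latency.
queues = {
    "acks":        ack_kbps,
    "interactive": rest_kbps * 0.40,
    "bulk":        rest_kbps * 0.60,
}

for name, kbps in queues.items():
    print(f"{name:12s} {kbps:7.0f} kbit/s")
```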

In any case, what you want to do just won't work in real life when you are talking about two different telco links. I've tried it with TCP (just dup'ing the traffic). It doesn't improve anything. The reason is that one of the two is going to have far superior latency to the other. If you are talking Comcast cable vs U-Verse, for example, the Comcast cable will almost certainly have half the latency of the U-Verse. If you are talking Comcast vs Verizon FIOS, then it's a toss-up. But one will win, and not just some of the time... 95% of the time. So you might as well route your game traffic over the one that wins.

-Matt

Comment IronNet Cybersecurity is the ethics issue (Score 5, Informative) 59

Most of the ethics questions around Alexander involve his company, IronNet Cybersecurity, which he founded when he retired. He's charging big banks $1,000,000 a month to protect them in cyberspace, and it's not exactly clear what he has to offer to justify that price tag other than classified insider knowledge of cyber threats from his NSA years, which he probably shouldn't be selling to the highest bidder.
