
Comment Re:Cell phones are insecure. (Score 1) 46

no reason that an end-to-end secure cellphone network cannot exist.

The problem is, you will never, EVER control every single bit & atom along the signal path between your vocal cords and the recipient's ear. Without PKI, you're vulnerable to MITM. With PKI, you're vulnerable to compromise of the PKI itself, or of the layer that enforces PKI's use. The best you can ever really hope for is to eliminate enough failure points to at least NOTICE the possibility that your communication might be getting intercepted or compromised.
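To make the "at least NOTICE it" part concrete, here's a rough trust-on-first-use sketch (purely my own illustration -- the pin file and the example host are invented, and no real phone works this way): remember a server's certificate fingerprint the first time you talk to it, and scream if it ever changes. It doesn't *prevent* interception, it just gives you a fighting chance of spotting it.

```python
import ssl, hashlib, sys

PINS_FILE = "pinned_fingerprints.txt"   # hypothetical local pin store

def current_fingerprint(host: str, port: int = 443) -> str:
    # Grab the server's certificate and hash its DER form.
    pem = ssl.get_server_certificate((host, port))
    der = ssl.PEM_cert_to_DER_cert(pem)
    return hashlib.sha256(der).hexdigest()

def check_host(host: str) -> None:
    pins = {}
    try:
        with open(PINS_FILE) as f:
            pins = dict(line.split() for line in f if line.strip())
    except FileNotFoundError:
        pass
    fp = current_fingerprint(host)
    if host not in pins:
        print(f"{host}: first contact, pinning {fp[:16]}...")
        with open(PINS_FILE, "a") as f:
            f.write(f"{host} {fp}\n")
    elif pins[host] != fp:
        print(f"{host}: FINGERPRINT CHANGED -- possible MITM (or a legitimate re-key)")
    else:
        print(f"{host}: matches the pinned fingerprint")

if __name__ == "__main__":
    check_host(sys.argv[1] if len(sys.argv) > 1 else "example.com")
```

Note that this only moves the problem around: the first contact is still an act of faith, which is exactly the point above.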

Is absolute security between two people possible? Maybe... IF

* they know in advance that they're going to communicate with each other

* they have a way to exchange devices that isn't vulnerable to tampering during shipment or after receipt.

* they can implicitly trust everyone who had a role in the software running on the device

* they'd rather be left unable to communicate than communicate with the slightest risk of unauthorized disclosure.

The last one is the biggie. 99.999% of all security exploits exist because someone figured out how to use the emergency backdoor left in the code to deal with unforeseen future emergencies that might otherwise brick millions of dollars' worth of hardware.

Think of a building... you can armor-plate the windows, and weld all the doors shut except for one that's protected by an army of soldiers... then have 95% of the building's occupants die in a fire because they couldn't get out, due to all the escape routes being closed off. OR... you can design escape routes to maximize survivability, then have someone gain access to the building by triggering a false alarm & sneaking in through the escape routes while everyone else is trying to get out. The more you harden something to eliminate vulnerabilities, the more vulnerable you leave yourself to future device and data loss.

Comment Odroid U3 + separate access point (Score 1) 427

I hate to say it because it feels like partial defeat, but your best bet probably consists of two devices... something like an Odroid U3 acting as your router/application gateway/personal server/whatever, and a separate access point for wifi.

Why the separate access point? Thanks to closed drivers and a general lack of proper documentation, it's damn near IMPOSSIBLE to get best-of-breed wifi performance out of ANY open firmware. Go read the forums for any open firmware... broken 5GHz, no support for beamforming, and random weirdness that nobody can properly fix because everything they do is a stab in the dark. So, the next best thing is to hold your nose, isolate that specific functionality into a separate device, and concentrate on the one part of the equation you CAN control... the router/server/whatever itself.

Why Odroid U3, and not a Raspberry Pi? Much better hardware, and almost meaningless difference in price (once you factor in shipping, case, and everything else you're going to have to buy to make it work). Go ahead and use your Pi if you already have one gathering dust in a drawer somewhere, but IMHO, if you're buying everything new for this, the $25 or so extra is money well spent on better, more-capable hardware.

Comment Re:They deserve it (Score 1) 286

The problem isn't 1280x720@60fps video... the problem is that 1080i60 source gets butchered by the cheap & nasty way those TVs hack up the signal to make it displayable on the smaller screen.

Go ahead. Try this experiment. Create a complex animation with lots of motion, reflection, and detail using Blender (or find the source file to one that somebody else has already done), then render it as both 720p60 and 1080i60, and encode both to MPEG-2 with a max bitrate of 19.2Mbps. Then view the videos on an LCD TV.

I *guarantee* you'll like the 720p60 version better.

There's a reason why a CRT capable of handling 1080i60 only needed a ~33.75kHz horizontal scan rate, but a CRT capable of handling 720p60 needed 45kHz. You can't fool mother nature. 720p60 converted to 1080i60 is disappointing, but 1080i60 converted to 720p60 almost always looks like shit.
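If you want to sanity-check those numbers, the arithmetic is just total scan lines per frame times frames per second. The total line counts below (1125 and 750, counting blanking) are the standard figures for those formats, not something from this post:

```python
# Horizontal scan rate = total lines per frame (including blanking) x frames/sec.
formats = {
    "1080i60": {"total_lines": 1125, "frames_per_sec": 30},  # 60 fields = 30 frames
    "720p60":  {"total_lines": 750,  "frames_per_sec": 60},
}

for name, fmt in formats.items():
    khz = fmt["total_lines"] * fmt["frames_per_sec"] / 1000
    print(f"{name}: {khz:.2f} kHz")   # 1080i60 -> 33.75 kHz, 720p60 -> 45.00 kHz
```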

Comment Re:They deserve it (Score 3, Informative) 286

720p60 is absolutely true HD... but for various real-world technical reasons, natively-interlaced 1080i60 source that gets transcoded to faux 720p60 is NOT equal to native 720p60.

True 720p60 is a beautiful thing. It's sad to see how many people have forgotten what smooth, lifelike video is supposed to look like, because almost everything on TV now is stuttering 30fps (look at turn signals & railroad crossing lights for the most graphic example of why that's bad).

99 times out of 100, a nominally 1920x1080, 60 field/second video is going to REALLY be 1440x1080. To convert it to 720p60, it gets treated like 60fps 1440x540, with each field resampled down to 1280 horizontally and interpolated up to 720 vertically.
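As a quick illustration of what that does to the picture (my own numbers, not any particular broadcaster's encoder settings), here's the per-field arithmetic:

```python
# One field of nominally-1920x1080 (really 1440x1080) interlaced video is a
# 1440x540 image arriving 60 times a second. Turning it into a 1280x720 frame
# means throwing away horizontal detail and inventing vertical detail.
src_field = (1440, 540)
dst_frame = (1280, 720)

h_ratio = dst_frame[0] / src_field[0]   # ~0.889x: real detail discarded
v_ratio = dst_frame[1] / src_field[1]   # ~1.333x: interpolated, not real

print(f"horizontal: {src_field[0]} -> {dst_frame[0]} ({h_ratio:.3f}x, downsampled)")
print(f"vertical:   {src_field[1]} -> {dst_frame[1]} ({v_ratio:.3f}x, interpolated)")
```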

Comment Re:consolidate the legacy cell phone networks (Score 3, Interesting) 28

I'm sorry, but that's just stupid. If T-Mobile abolished 3G and 4G, they'd literally lose most of their customers almost OVERNIGHT.

It would be equally stupid for Sprint. For one thing, 3G GSM (UMTS/HSPA) is basically CDMA2000-1xRTT with wider channels and some minor refinement tweaks. In fact, with a little software effort, you could even overlay CDMA voice/1xRTT users on top of the same channel used for HSPA in a fringe-rural area (or interior femtocell). 4 CDMA2000/IS95 users, equally spread between 4 channels overlapping a single HSPA uplink or downlink channel, would just look like a second user of the same mode to the other phone simultaneously using the same frequency.

Legacy GSM is kind of brutish with its demands, and EVDO is bitchy in its own way, but 3G GSM and CDMA voice/1xRTT data are "kissing cousins".

If they were ever forced to make an absurd decision between legacy GSM and HSPA, T-Mobile would be better off abolishing legacy GSM, because they'd lose far fewer customers.

By the same token, god forbid, if the FCC approves a merger between T-Mobile and Sprint, the best thing Sprint could do is quickly release radio-modem upgrades for higher-end Android phones and iPhones to allow Sprint phones to be Canada-like and use CDMA (from Sprint towers) for voice, but HSPA (from T-Mobile towers) for data when LTE isn't available. AFAIK, every high-end Sprint Android phone since the Galaxy S2 and Motorola Photon has been capable of HSPA. If you really want to split hairs, even the now-ancient original HTC Evo had latent HSPA capabilities (but no SIM card, though I think some guys in India eventually found a way to hand-solder a USIM meant for embedded use into it... score one for the good guys subverting carrier-enforced arbitrary obsolescence...)

Frankly, Sprint doesn't need MORE spectrum so much as it needs BETTER spectrum. Sprint already has more licensed RF spectrum than it fucking knows what to do with. It's just that 100% of it is 1900MHz or above. Sprint could solve 100% of its coverage problems by throwing more towers at it. And even with more towers, what Sprint REALLY needs is more fiber backhaul. T-Mobile isn't sitting on ABUNDANT spectrum, but with the new spectrum they got from AT&T, they have enough to solve their own problems with more tower sites, too. See, that's the nice thing about CDMA (and HSPA) -- you can solve just about ANY capacity problem by simply throwing more fiber-connected towers at it. It's the killer feature of CDMA-based technologies.

In a rational & sane world, companies like Comcast & AT&T (U-verse) would start building agile picocells into their cable & VDSL2 modems, allocating a few Mbps of bandwidth (independently of what's available to customers) to them, and allowing AT&T, Verizon, Sprint, and T-Mobile users to roam on them at some cost low enough to be a better deal for cell carriers than deploying more towers (obviously, the carriers would have to delegate a small chunk of spectrum to them, too). If every cable & (V)DSL(2) modem was ALSO a mini cell tower with a few hundred feet of range, even fringe-suburbia would be blanketed with coverage within a few months. And the reduced penetration of 1700, 1900, and 2100MHz would become an advantage rather than a drawback.

Comment Re:Strength (Score 1) 62

Something like a phone case needs to be tough enough to resist abrasion or it will shred in contact with hard objects

Depends. If the case's destruction allowed it to dissipate enough instantaneous kinetic energy to save your phone's display from an expensive repair job, the loss of that 99c case might not necessarily be a bad thing. I've seen drops bad enough to crack the hard inner shell of an Otterbox Defender. Like the time my brother put his phone down on the roof of his car, then forgot about it before driving away. It hit the ground at a *minimum* of 20mph. The case was destroyed, but the phone inside was unscathed.
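For anyone who wants a rough feel for the energy involved (the phone mass and drop height below are assumptions I picked for illustration, not measurements from the actual incident):

```python
# Kinetic energy the case has to soak up: 20 mph slide-off vs. a waist-height fumble.
mass_kg = 0.15                 # assume a ~150 g phone
v = 20 * 0.44704               # 20 mph in m/s
ke_slide = 0.5 * mass_kg * v ** 2

g, drop_height = 9.81, 1.0     # ordinary 1 m drop for comparison
ke_drop = mass_kg * g * drop_height

print(f"20 mph off the car roof: ~{ke_slide:.1f} J")   # ~6 J
print(f"1 m static drop:         ~{ke_drop:.1f} J")    # ~1.5 J
```

Roughly four times the energy of an ordinary drop, which is why a sacrificial case can earn its keep.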

Comment Re:Thankfully those will be patched right in a jif (Score 2) 127

Find a popular ROM at XDA derived from whatever version you want to stick with and flash it (with a compatible kernel) to your phone.

Until you have a few months of reflashing experience, DO NOT attempt to flash any ROM that requires repartitioning the flash, and don't ask the recovery manager to wipe /system unless you really know what can happen & have a plan for dealing with it. This goes DOUBLE for anybody with a Samsung Galaxy S3.

Long story short: the eMMC is kind of like an SSD controller, and there are MAJOR known bugs (and plenty of poorly-understood ones, too) in the firmware. Basically, it's as if you tried to use Linux to create a new filesystem, but a bug caused it to just make all the old directories owned by some undefined user with impossible permissions instead... and do it in a way that made the drive initially LOOK reformatted, but spontaneously resurrect those corrupted files as more and more writes occurred.

Now for the bad news (if you have a Galaxy S3) -- the eMMC firmware installed with stock ROMs older than 4.3 is dangerously buggy with AOSP-derived ROMs, and getting rid of enough of those bugs to semi-safely do wholesale repartitioning almost requires installing a stock-derived (but hacked so it doesn't enforce Knox) ROM first to get the eMMC firmware updated. More confusingly, the eMMC firmware is part of the radio modem firmware, even though it doesn't really have anything to do with the radio modem itself. So, if you're running a 4.1 stock ROM and want to install a 4.1 AOSP-derived ROM, tread VERY carefully, and pay special attention to any warnings at XDA that involve the word "eMMC".

Comment Re:You can create a token but keep it off nets (Score 2) 113

Strictly speaking, a USB (or Bluetooth, or whatever) device has the potential to be MORE secure... IF it meets the following criteria:

* Negotiates directly with the remote service requesting authentication credentials, and has robust logic to detect MITM situations. For the purposes of this example, the local operating system is merely a bucket-brigade dumb transport layer that facilitates the delivery of packets between the token and remote login service.

* Has its own onboard hardkeys under the exclusive control of the token, with some degree of logic to verify that the user is deliberately consenting to the login attempt... preferably, enough to implement some kind of secondary authentication. I'm totally not a fan of biometrics, but if there's anyplace where a fingerprint sensor might be appropriate as the equivalent of a residential keyed non-deadbolt lock that says 'no' to casual attackers, without even pretending it could survive a full-on attack from someone willing to do something drastic (like break the door down), it's probably HERE.

* Has its own display, under the exclusive control of the token, and logic to display an appropriate level of concern to alert the user to unusual situations. For example, being asked to authenticate to ${some-specific-server} for ${limited-purpose} might merit full-on warnings the first time you authenticate, but require little more than a finger swipe or button press for subsequent uses that don't exceed some user-defined threshold.

Unfortunately, I've never even SEEN a hardware token available to non-enterprise customers even REMOTELY in the same ballpark as the feature set I've listed. Manufacturers just can't resist the temptation to eliminate the cost of an expensive dedicated display, or multiple hardkeys, or some comparable dedicated input and output hardware that's sealed, self-contained, and has no dependencies upon the security of anything beyond the token itself. It also assumes at least minimally-savvy users who'll take the time to at least read the first-time/threshold-exceeded warnings, and won't just blindly swat them away without independently contemplating their possible implications.
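For what it's worth, here's roughly what I mean by "negotiates directly" and "deliberate consent", boiled down to a toy sketch. Everything in it -- the device secret, the display call, the threshold -- is hypothetical, and all of it is meant to run on the token itself, never on the host PC:

```python
import hmac, hashlib, time

DEVICE_SECRET = b"burned-in-at-manufacture"   # never leaves the token
LOW_RISK_THRESHOLD = 5                        # logins before warnings relax

def show_on_token_display(text: str) -> None:
    # Stand-in for the token's own sealed display; print() is just for illustration.
    print("[token display]", text)

def handle_auth_request(server_id: str, challenge: bytes,
                        known_servers: dict, button_pressed: bool):
    """Runs entirely on the token; the host OS is only a dumb pipe for packets."""
    uses = known_servers.get(server_id, 0)
    if uses == 0:
        show_on_token_display(f"FIRST login to {server_id} -- are you sure?")
    elif uses < LOW_RISK_THRESHOLD:
        show_on_token_display(f"Login to {server_id}")
    if not button_pressed:            # no physical consent, no signature
        return None
    known_servers[server_id] = uses + 1
    # Binding the response to the server's identity means a MITM relaying this
    # challenge on behalf of some other server gets a useless signature.
    msg = server_id.encode() + b"|" + challenge + b"|" + str(int(time.time())).encode()
    return hmac.new(DEVICE_SECRET, msg, hashlib.sha256).digest()

# Example: the user pressed the hardkey, so the token answers.
servers = {}
print(handle_auth_request("bank.example.com", b"nonce123", servers, True).hex())
```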

Ideally, the token would also have some additional security layer that causes it to be disabled permanently if the person with whom it's associated ceases to be alive (to ensure that a robber couldn't force you to tell him your access code at gunpoint, then shoot you anyway). If he knows that his free fountain of money shuts down the moment you die, he'll have more incentive to employ heroic means to keep you alive, even if he's the reason you're in danger of death to begin with.

Finally, you'll want to have the token itself be a delegate of some master token, with a reissue procedure for replacing it with a new token that has multiple layers of identity-authorization, since there's always a very real risk of loss. It's little comfort knowing a thief can't get at your money if, from your perspective, it's as gone as if it were in a concrete vault at some unknown spot on the floor of the Pacific Ocean.

Comment Re:Best Wishes ! (Score 1) 322

Within a year... yeah, most decent peripherals had drivers. At midnight on the day Windows 95 went on sale across America? They were basically nonexistent. From what I remember, soundcards were a MAJOR pain point for YEARS. Gravis totally dropped the ball with the Ultrasound (eventually releasing crippled win32 drivers that sort of worked, but if you wanted to play .mid files with wavetable instruments, you were stuck with realmode SBOS), and my dad's soundcard was a source of misery for YEARS until he threw in the towel and bought an AWE32. From what I remember, unlike a real SBpro (which set the port, IRQ, and DMA via jumpers), my dad's stupid soundcard had to have the port, IRQ, and DMA set via realmode drivers at boot time. Yuck.

I seem to remember that CD-ROM drives were another source of realmode misery, but I'm not really sure *why*. I think it was because the drives themselves were IDE, but Adaptec held a patent on something and wouldn't allow Microsoft to bake support for CD-ROM drives into Windows without paying royalties, so Microsoft just left everyone to suffer with the Adaptec-licensed realmode drivers that came in the box with the drives (and began a 20-year tradition of always finding some petty way to cripple Windows' native handling of optical drives absent expensive thirdparty software).

Comment Re: Astronomy, and general poor night-time results (Score 1) 550

PRK also has a much higher incidence of starbursts and halos

Yes, but you're overlooking an important detail -- in the early 2000s, an average PRK (or LASEK) patient went into surgery with significantly worse vision than an average Lasik patient. Until fairly recently, the maximum amount of correction the FDA allowed for PRK & LASEK was a diopter or two HIGHER than the limit imposed for Lasik... but the maximum-allowed diameter of the ablation zone was about 2mm LESS. The net result is that patients who were disqualified for Lasik were able to get PRK/LASEK, but their blend zone was fairly steep, and was often smaller in diameter than many patients' pupils in the dark. Meanwhile, patients with milder vision problems ended up getting Lasik by default, because it healed faster & was more heavily advertised.

In other words, the PRK/LASEK patients who had the worst problems with halos are basically the ones who wouldn't have even been ALLOWED to get Lasik back in the early 2000s. I know, because I was one of 'em (1/2 diopter more astigmatism, and I would have been disqualified from PRK/LASEK too).

The good news is that the FDA finally raised the limits allowed for both maximum correction and ablation-zone diameter, and wavefront laser surgery can now fix most of the problems caused by the old FDA limits (enlarging the fully-corrected zone so it's as big as a darkness-accommodated pupil, and eliminating the halos in the process).

Comment Re: Astronomy, and general poor night-time results (Score 0) 550

Tell your mother to consider scleral gas-permeable lenses. Few people have ever heard about them, and they look kind of scary when you first see them being put in, but they're actually one of the most comfortable types of contacts you can wear:

* Gas-permeable lenses are more permeable to oxygen than soft lenses

* GP lenses don't dry out

* By having the lens rest on the sclera instead of the cornea, there's less sensation of motion from blinking (and less motion, period). The "pumping" motion of normal GP lenses drove me insane when I tried wearing them 20 years ago, and my dad admitted the same motion drove HIM crazy when HE tried wearing traditional hard lenses in the early 70s.

* The layer of tears between the bumpy cornea and rigid lens optically bridges the two (tears have almost exactly the same index of refraction as the cornea and GP lens), so they can fix (or at least greatly help) problems that are untreatable with glasses or soft lenses.

Scleral lenses are actually an old design, but making them with gas-permeable plastic is a relatively recent development.

Their only real downsides are that you pretty much HAVE to go to a real ophthalmologist, and they aren't cheap. But they're an awesome option for people who either can't stand normal lenses, or have problems that normal lenses can't effectively fix.

https://www.youtube.com/watch?...

https://www.youtube.com/watch?...

Comment Re:Customer service? (Score 1) 928

The REAL question is... why the FUCK do so many airlines seem to board planes from FRONT TO REAR? Is it just the gate crew being complete idiots, or is it an official policy dictated to them for some insane reason?

I mean, ok, fine... board first class first... then continue with passengers who'll be sitting in the rear so they won't be tripping over (and getting in the way of) passengers sitting closer to the front. The only thing I can think of is that they know they have to board first class first & they're too lazy to look up the number of rows, so they just start with first class, then keep calling rows ~10 at a time until ~80% of the people milling around near the line to board have entered the plane, then end with "all other passengers may now board".

Comment Re:Best Wishes ! (Score 5, Informative) 322

Yes... and no. In theory, if you did a virgin installation of Windows 95 onto a pristine new computer whose peripherals ALL had genuine Win32 drivers capable of running in 386Enh protected mode, and you ONLY ran "true" Winapps that bent over backwards to have no dependencies on realmode, DOS was basically a Grub-like stage 2 bootloader invoked by the BIOS that loaded Windows, kicked the PC into 386Enh protected mode, and handed control over to it. And you probably had a pet unicorn living in the back yard ;-)

From what I remember, the compelling feature of Windows 3.11 that distinguished it from Windows 3.1 was native 32-bit protected-mode code for reading & writing (V)FAT filesystems on IDE hard drives (which gave it a HUGE performance boost compared to 3.1).

I believe that one of Win95's launch-time features was that Microsoft re-implemented the VESA BIOS extensions (and original VGA BIOS) as proper protected-mode drivers, so that manufacturers like Tseng and S3 only had to provide them with "miniport" drivers that did the grunt work that would have otherwise required them to fall back to realmode. I'm pretty sure the 386Enh hooks for video BIOS emulation existed in 3.11, but the actual Microsoft-written code was given to vendors to distribute on their own disks & wasn't directly used by any video cards the day Win3.11 went to manufacturing. In a sense, Windows 3.11 existed to give videocard manufacturers a prototype platform so they could develop and test their protected-mode drivers on a released operating system.
