
Comment: Re: (pre-emptive to 'New-Age' gamers...) GOML! (Score 1) 163

Right? The 'oldies' really are the 'goodies' in gaming, as it turns out.

Well... let's not go overboard here. Even the most nostalgic X'er will admit that the 2600's graphics looked like total ass, even in 1980, and 98% of Atari 2600 games have almost zero enduring fun value. Seriously, play 'em for 5 minutes for the first time in 20 years, and the last minute before you hit reset will seem to LAST for 20 years.

Well, besides Circus Atari & Warlords (the original 4-player "party game"). It's kind of ironic that two of the 2600's least graphically-sophisticated games ended up among the small canon of unique 2600 games that are still kind of fun and have never really been improved upon on other platforms.

It's really a shame that Coleco's short-sighted licensing deals and messy bankruptcy left the ColecoVision's games covered in the legal equivalent of toxic sludge. Nobody will ever be able to scrub it away cheaply enough to make a $24.95 embedded ColecoVision-in-a-(joy)stick with the dozen or so most popular games viable.

Comment: Re:Switched double speed half capacity, realistic? (Score 1) 314

by Miamicanes (#47762517) Attached to: Seagate Ships First 8 Terabyte Hard Drive

Would it be trivial to design a drive that can be switched into a double-speed half-capacity mode?

There's a word for it... "Velociraptor".

There's even a word for a drive that's "triple" speed... "Cheetah".

In any case, you wouldn't decrease the capacity on account of the faster rotational speed... you'd just use a faster DSP capable of doing its thing in less than half the time of a slower drive. From what I recall, the Cheetah's storage density per platter was basically the same as any other 2.5" drive.

SSDs obviously made the highest-performance spinning disks almost irrelevant, but personally, I used to think it would have been awesome if Seagate had taken the Cheetah platform, added two more independent sets of actuators and read/write heads, and wired it all up to look like 3 SCSI drives with sequential SCSI IDs so you could have single-drive RAID-5 performance in a luggable laptop (think: inch-thick Alienware/Sager/Clevo) or SFF desktop. Heat would be an issue... but really, a Cheetah didn't throw off any more heat than the mini-PCIe discrete video cards found in some gamer/mobile-workstation laptops now. In MY laptop, at least, the GPU's cooling system is bigger than the CPU's.

One thing I'd LOVE to see, and even think there's a market for, would be a single-platter drive suitable for mounting in the optical bay of mobile workstation laptops (say, 120mm diameter, 7mm or thinner). I rarely use optical discs, but having another 4TB or so that's always with me would be nice to have. Basically, it would be a 7mm-thick Quantum Bigfoot from the late 90s... and Jesus, with that much diameter per platter, just imagine how many terabytes you could pack into a multi-platter drive that fully consumed a 5.25" quarter-height drive bay. It's almost scary to think about something like a 256TB 5.25" single-bay hard drive.

I'm also kind of surprised that nobody ever made a thin-but-3.5" drive for laptops (which would obviously need a larger drive bay... but modern laptops, even thin ones, have SHOCKING amounts of horizontal acreage under the keyboard that could easily be put to good use for bigger cheap drives).

Comment: Re:I quit buying Samsung (Score 1) 220

Motorola didn't take "too long" to roll out the "latest version of Android" for the Photon and Atrix2... Motorola promised POINT BLANK circa October 2011 that the Photon and Atrix2 would both get ICS eventually. Then ~8 months later, said, "Ok, we lied. But we'll give you $50 off the purchase of another Motorola phone."

I, for one, can't WAIT for the class-action lawsuit. Motorola's decision to cancel ICS for the Photon sucked, but the way they recklessly locked the bootloader to try and make sure nobody ELSE could do it EITHER was despicable.

Comment: Re:I quit buying Samsung (Score 1) 220

Actually, Motorola does a good job with pushing out updates (at least with Moto X, G, E).

Maybe... but they actively & intentionally FUCKED everyone unfortunate enough to buy one of their phones before then. As if it wasn't bad enough that they decided to break their promise (advertised, in writing) to eventually ship ICS for the Photon & Atrix2, they ALSO rolled out a trojan update (2.3.4) whose sole purpose was to permalock the bootloader and make sure end users couldn't upgrade the Photon/Atrix2 to ICS on their own, either.

It's widely believed among former Photon/Atrix2 owners that Moto deployed the new permalocking bootloader with protections to prevent future updates, then discovered (too late) that the new bootloader had a bug that rendered it unable to safely repartition the flash to accommodate the larger /system partition needed by ICS.

I really hope there's an extra-toasty spot in hell where Motorola's execs can burn forever as punishment for what they did to us.

#motofail #neveragain

Comment: Re:Cell phones are insecure. (Score 1) 46

by Miamicanes (#47643097) Attached to: Silent Circle's Blackphone Exploited at Def Con

no reason that an end-to-end secure cellphone network cannot exist.

The problem is, you will never, EVER control every single bit & atom along the signal path between your vocal cords and the recipient's ear. Without PKI, you're vulnerable to MITM. With PKI, you're vulnerable to compromise of the PKI infrastructure itself... or of the layer that enforces PKI's use. The best you can ever really hope for is to eliminate enough failure points to at least NOTICE the possibility that your communication might be getting intercepted or compromised.
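To make the MITM point concrete, here's a toy (deliberately insecure, tiny-parameter) unauthenticated Diffie-Hellman exchange in Python. Without some out-of-band way to authenticate public keys, a middleman who swaps in his own key ends up sharing one key with each side, and neither endpoint can tell:

```python
# Toy unauthenticated Diffie-Hellman, showing the classic MITM:
# Mallory substitutes her own public key in each direction, so she
# shares one key with Alice and another with Bob. (Demo-sized params;
# real DH uses 2048-bit+ groups.)
P, G = 2147483647, 5  # small prime modulus and generator (demo only)

a_priv, b_priv, m_priv = 1234, 5678, 9999   # secret exponents
a_pub = pow(G, a_priv, P)                   # Alice -> wire
b_pub = pow(G, b_priv, P)                   # Bob   -> wire
m_pub = pow(G, m_priv, P)                   # Mallory's swap-in

# Without PKI, Alice can't tell m_pub from b_pub (and vice versa):
alice_key = pow(m_pub, a_priv, P)   # Alice thinks she's keyed with Bob
bob_key   = pow(m_pub, b_priv, P)   # Bob thinks he's keyed with Alice

mallory_with_alice = pow(a_pub, m_priv, P)
mallory_with_bob   = pow(b_pub, m_priv, P)

assert alice_key == mallory_with_alice  # Mallory decrypts Alice's traffic...
assert bob_key == mallory_with_bob      # ...re-encrypts, forwards to Bob
print("MITM succeeds: neither endpoint notices")
```

Signing the public keys (PKI) closes this hole -- which is exactly why the PKI infrastructure itself then becomes the target.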

Is absolute security between two people possible? Maybe... IF

* they know in advance that they're going to communicate with each other

* they have a way to securely exchange devices in a way that's not vulnerable to tampering during shipment or after receipt.

* they can implicitly trust everyone who had a role in the software running on the device

* they'd rather be left unable to communicate than communicate with the slightest risk of unauthorized disclosure.

The last one is the biggie. 99.999% of all security exploits exist because someone figured out how to use the emergency backdoor left in the code to deal with unforeseen future emergencies that might otherwise brick millions of dollars worth of hardware. Think of a building... you can armor-plate the windows, and weld all the doors shut except for one that's protected by an army of soldiers... then have 95% of the building's occupants die in a fire because they couldn't get out due to all the escape routes being closed off. OR... you can design escape routes to maximize survivability, then have someone gain access to the building by triggering a false alarm & sneaking in through the escape routes while everyone else is trying to get out. The more you harden something to eliminate vulnerabilities, the more vulnerable you leave yourself to future device and data loss.

Comment: Odroid U3 + separate access point (Score 1) 427

by Miamicanes (#47639897) Attached to: Ask Slashdot: Life Beyond the WRT54G Series?

I hate to say it because it feels like partial defeat, but your best bet probably consists of two devices... something like an Odroid U3 acting as your router/application gateway/personal server/whatever, and a separate access point for wifi.

Why the separate access point? Thanks to closed drivers and a general lack of proper documentation, it's damn near IMPOSSIBLE to get best-of-breed wifi performance out of ANY open firmware. Go read the forums for any open firmware... broken 5GHz, no support for beamforming, and random weirdness that nobody can properly fix because everything they do is a stab in the dark. So, the next best thing is to hold your nose, isolate that specific functionality into a separate device, and concentrate on the one part of the equation you CAN control... the router/server/whatever itself.

Why Odroid U3, and not a Raspberry Pi? Much better hardware, and almost meaningless difference in price (once you factor in shipping, case, and everything else you're going to have to buy to make it work). Go ahead and use your Pi if you already have one gathering dust in a drawer somewhere, but IMHO, if you're buying everything new for this, the $25 or so extra is money well spent on better, more-capable hardware.

Comment: Re:They deserve it (Score 1) 286

The problem isn't 1280x720@60fps video... the problem is that 1080i60 source gets butchered by the cheap & nasty way those TVs hacked up the signal to make it displayable on the smaller screen.

Go ahead. Try this experiment. Create a complex animation with lots of motion, reflection, and detail using Blender (or find the source file to one that somebody else has already done), then render it as both 720p60 and 1080i60, and encode both to MPEG-2 with a max bitrate of 19.2Mbps. Then view the videos on an LCD TV.
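For the curious, here's a sketch of how the two encodes might be driven (hypothetical file names; assumes ffmpeg on the PATH and the Blender frames already rendered as numbered PNGs -- the flags shown are standard ffmpeg options, but tweak to taste):

```python
# Build the two ffmpeg command lines for the 720p60 vs 1080i60 test.
# File names and frame paths are hypothetical placeholders.
import subprocess

common = ["-c:v", "mpeg2video", "-maxrate", "19200k", "-bufsize", "9781k",
          "-b:v", "15000k"]  # cap at the ~19.2Mbps ATSC MPEG-2 ceiling

cmd_720p60 = (["ffmpeg", "-framerate", "60", "-i", "frames/%04d.png",
               "-s", "1280x720"] + common + ["out_720p60.mpg"])

# For the interlaced version, weave 60 progressive frames into 30
# interlaced frames and flag the stream for interlaced coding:
cmd_1080i60 = (["ffmpeg", "-framerate", "60", "-i", "frames/%04d.png",
                "-s", "1920x1080", "-vf", "tinterlace=interleave_top",
                "-flags", "+ilme+ildct"] + common + ["out_1080i60.mpg"])

for cmd in (cmd_720p60, cmd_1080i60):
    print(" ".join(cmd))
    # subprocess.run(cmd, check=True)  # uncomment to actually encode
```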

I *guarantee* you'll like the 720p60 version better.

There's a reason why a CRT capable of handling 1080i60 only needed a horizontal scan rate of ~33.75kHz, while a CRT capable of handling 720p60 needed 45kHz. You can't fool mother nature. 720p60 converted to 1080i60 is disappointing, but 1080i60 converted to 720p60 almost always looks like shit.
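Those two figures fall straight out of the standard HDTV line timings (1125 total scanlines per frame for 1080i, 750 for 720p); a quick sanity check:

```python
# Horizontal scan rates implied by standard HDTV timings: the CRT's
# deflection circuitry must sweep this many scanlines per second.
def h_scan_khz(total_lines, frames_per_sec):
    """Line rate in kHz: total scanlines per frame x full frames per second."""
    return total_lines * frames_per_sec / 1000.0

# 1080i60: 1125 total lines, 30 full frames/s (60 interlaced fields/s)
# 720p60:   750 total lines, 60 full frames/s
rate_1080i60 = h_scan_khz(1125, 30)
rate_720p60 = h_scan_khz(750, 60)

print(rate_1080i60, rate_720p60)  # 33.75 45.0
```

720p60 paints a third more lines per second than 1080i60, and every one of them is a "real" line rather than half of an interlaced pair.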

Comment: Re:They deserve it (Score 3, Informative) 286

720p60 is absolutely true HD... but for various real-world technical reasons, natively-interlaced 1080i60 source that gets transcoded to faux 720p60 is NOT equal to native 720p60.

True 720p60 is a beautiful thing. It's sad to see how many people have forgotten what smooth, lifelike video is supposed to look like, because almost everything on TV now is stuttering 30fps (look at turn signals & railroad crossing lights for the most graphic example of why that's bad).

99 times out of 100, a nominally 1920x1080 60 field/second video is going to REALLY be 1440x1080. To convert it to 720p60, each 1440x540 field gets treated like a frame of 60fps video, then resampled to reduce the horizontal resolution to 1280 and interpolate the vertical resolution up to 720.
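The arithmetic of that conversion path, spelled out:

```python
# The 1080i -> "720p" conversion path as arithmetic: each 1440x540
# field of the nominal 1080i source becomes one 1280x720 progressive
# frame, so horizontal detail is discarded while vertical lines are
# invented by interpolation.
from fractions import Fraction

src_w, src_h = 1440, 540   # one interlaced field of the 1440x1080 source
dst_w, dst_h = 1280, 720   # target 720p frame

h_scale = Fraction(dst_w, src_w)   # 8/9: real horizontal detail thrown away
v_scale = Fraction(dst_h, src_h)   # 4/3: vertical lines interpolated

print(h_scale, v_scale)  # 8/9 4/3
# Net effect: only 540 genuine lines of vertical detail per frame,
# versus 720 genuine lines in native 720p60 -- hence the visible gap.
```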

Comment: Re:consolidate the legacy cell phone networks (Score 3, Interesting) 28

I'm sorry, but that's just stupid. If T-Mobile abolished 3G and 4G, they'd lose most of their customers almost OVERNIGHT.

It would be equally stupid for Sprint. For one thing, 3G GSM (UMTS/HSPA) is basically CDMA2000-1xRTT with wider channels and some minor refinement tweaks. In fact, with a little software effort, you could even overlay CDMA voice/1xRTT users on top of the same channel used for HSPA in a fringe-rural area (or interior femtocell). 4 CDMA2000/IS95 users, equally spread between 4 channels overlapping a single HSPA uplink or downlink channel, would just look like a second user of the same mode to the other phone simultaneously using the same frequency.

Legacy GSM is kind of brutish with its demands, and EVDO is bitchy in its own way, but 3G GSM and CDMA voice/1xRTT data are "kissing cousins".

If they were ever forced to make an absurd decision between legacy GSM and HSPA, T-Mobile would be better off abolishing legacy GSM, because they'd lose far fewer customers.

By the same token, god forbid, if the FCC approves a merger between T-Mobile and Sprint, the best thing Sprint could do is quickly release radio-modem upgrades for higher-end Android phones and iPhones to allow Sprint phones to be Canada-like and use CDMA (from Sprint towers) for voice, but HSPA (from T-Mobile towers) for data when LTE isn't available. AFAIK, every high-end Sprint Android phone since the Galaxy S2 and Motorola Photon has been capable of HSPA. If you really want to split hairs, even the now-ancient original HTC Evo had latent HSPA capabilities (but no SIM card, though I think some guys in India eventually found a way to hand-solder a USIM meant for embedded use into it... score one for the good guys subverting carrier-enforced arbitrary obsolescence...)

Frankly, Sprint doesn't need MORE spectrum so much as it needs BETTER spectrum. Sprint already has more licensed RF spectrum than it fucking knows what to do with. It's just that 100% of it is 1900MHz or above. Sprint could solve 100% of its coverage problems by throwing more towers at it. And even with more towers, what Sprint REALLY needs is more fiber backhaul. T-Mobile isn't sitting on ABUNDANT spectrum, but with the new spectrum they got from AT&T, they have enough to solve their own problems with more tower sites, too. See, that's the nice thing about CDMA (and HSPA) -- you can solve just about ANY capacity problem by simply throwing more fiber-connected towers at it. It's the killer feature of CDMA-based technologies.

In a rational & sane world, companies like Comcast & AT&T (U-verse) would start building agile picocells into their cable & VDSL2 modems, allocating a few Mbps of bandwidth (independently of what's available to customers) to them, and allowing AT&T, Verizon, Sprint, and T-Mobile users to roam on them at some cost low enough to be a better deal for cell carriers than deploying more towers (obviously, the carriers would have to delegate a small chunk of spectrum to them, too). If every cable & (V)DSL(2) modem was ALSO a mini cell tower with a few hundred feet of range, even fringe-suburbia would become blanketed coverage zones within a few months. And the reduced penetration of 1700, 1900, and 2100MHz would become an advantage rather than a drawback.

Comment: Re:Strength (Score 1) 62

by Miamicanes (#47567965) Attached to: 3-D Printing Comes To Amazon

Something like a phone case needs to be tough enough to resist abrasion or it will shred in contact with hard objects

Depends. If the case's destruction allowed it to dissipate enough instantaneous kinetic energy to save your phone's display from an expensive repair job, the loss of that 99c case might not necessarily be a bad thing. I've seen drops bad enough to crack the hard inner shell of an Otterbox Defender. Like the time my brother put his phone down on the roof of his car, then forgot about it before driving away. It hit the ground at a *minimum* of 20mph. The case was destroyed, but the phone inside was unscathed.

Comment: Re:Thankfully those will be patched right in a jif (Score 2) 127

by Miamicanes (#47562723) Attached to: Old Apache Code At Root of Android FakeID Mess

Find a popular ROM at XDA derived from whatever version you want to stick with and flash it (with a compatible kernel) to your phone.

Until you have a few months of reflashing experience, DO NOT attempt to flash any ROM that requires repartitioning the flash, and don't ask the recovery manager to wipe /system unless you really know what can happen & have a plan for dealing with it. This goes DOUBLE for anybody with a Samsung Galaxy S3.

Long story short: the eMMC is kind of like an SSD controller, and there are MAJOR known bugs (and plenty of poorly-understood ones, too) in the firmware. Basically, it's as if you tried to use Linux to create a new filesystem, but a bug caused it to just make all the old directories owned by some undefined user with impossible permissions instead... and do it in a way that made the drive initially LOOK reformatted, but spontaneously resurrect those corrupted files as more and more writes occurred.

Now for the bad news (if you have a Galaxy S3) -- the eMMC firmware installed with stock ROMs older than 4.3 is dangerously buggy with AOSP-derived ROMs, and getting rid of enough of those bugs to semi-safely do wholesale repartitioning almost requires installing a stock-derived (but hacked so it doesn't enforce Knox) ROM first to get the eMMC firmware updated. More confusingly, the eMMC firmware is part of the radio modem firmware, even though it doesn't really have anything to do with the radio modem itself. So, if you're running a 4.1 stock ROM and want to install a 4.1 AOSP-derived ROM, tread VERY carefully, and pay special attention to any warnings at XDA that involve the word "eMMC".

Comment: Re:You can create a token but keep it off nets (Score 2) 113

by Miamicanes (#47559449) Attached to: Ask Slashdot: Open Hardware/Software-Based Security Token?

Strictly speaking, a USB (or bluetooth, or whatever) device has the potential to be MORE secure... IF it meets the following criteria:

* Negotiates directly with the remote service requesting authentication credentials, and has robust logic to detect MITM situations. For the purposes of this example, the local operating system is merely a bucket-brigade dumb transport layer that facilitates the delivery of packets between the token and remote login service.

* Has its own onboard hardkeys under the exclusive control of the token, with some degree of logic to verify that the user is deliberately consenting to the login attempt... preferably, enough to implement some kind of secondary authentication. I'm totally not a fan of biometrics, but if there's anyplace where a fingerprint sensor might be appropriate as the equivalent of a residential keyed non-deadbolt lock that says 'no' to casual attackers, without even pretending it could survive a full-on attack from someone willing to do something drastic (like break the door down), it's probably HERE.

* Has its own display, under the exclusive control of the token, and logic to display an appropriate level of concern to alert the user to unusual situations. For example, being asked to authenticate to ${some-specific-server} for ${limited-purpose} might merit full-on warnings the first time you authenticate, but require little more than a finger swipe or button press for subsequent uses that don't exceed some user-defined threshold.

Unfortunately, I've never even SEEN a hardware token available to non-enterprise customers even REMOTELY in the same ballpark as the feature set I've listed. Manufacturers just can't resist the temptation to eliminate the cost of an expensive dedicated display, or multiple hardkeys, or some comparable dedicated input and output hardware that's sealed, self-contained, and has no dependencies upon the security of anything beyond the token itself. It also assumes at least minimally-savvy users who'll take the time to at least read the first-time/threshold-exceeded warnings, and won't just blindly swat them away without independently contemplating their possible implications.
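For illustration only, here's a minimal Python sketch of the token model those bullets describe. Every name here is hypothetical, and a real design would need an authenticated channel (something FIDO2-like) rather than this toy shared-secret HMAC:

```python
# Sketch of a self-contained token: the host OS is a dumb pipe, the
# token binds every response to the requesting server's identity, and
# nothing gets signed without a physical button press. (Toy model --
# a real token would use asymmetric keys, not a server-shared secret.)
import hashlib
import hmac
import os
from typing import Optional

class Token:
    """Self-contained authenticator: the secret never leaves the device."""
    def __init__(self, secret: bytes):
        self._secret = secret
        self._seen_servers = set()

    def authorize(self, server_id: str, challenge: bytes,
                  button_pressed: bool) -> Optional[bytes]:
        # First-time server: the token's own display would show a full
        # warning; on later uses a button press alone suffices.
        first_time = server_id not in self._seen_servers
        if not button_pressed:
            return None  # no physical consent, no signature
        self._seen_servers.add(server_id)
        # Binding server_id into the MAC means a MITM relaying this
        # challenge on behalf of a different server gets a useless answer.
        msg = server_id.encode() + challenge
        return hmac.new(self._secret, msg, hashlib.sha256).digest()

# --- usage: the "OS" just ferries bytes between server and token ---
secret = os.urandom(32)
token = Token(secret)
challenge = os.urandom(16)

assert token.authorize("bank.example", challenge, button_pressed=False) is None
sig = token.authorize("bank.example", challenge, button_pressed=True)

# Server-side verification (the server holds the secret in this toy model):
expected = hmac.new(secret, b"bank.example" + challenge, hashlib.sha256).digest()
assert hmac.compare_digest(sig, expected)
```

The key design point is that consent (the button) and server identity (the MAC binding) are both enforced on hardware the host OS can't touch.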

Ideally, the token would also have some additional security layer that causes it to be disabled permanently if the person with whom it's associated ceases to be alive (to ensure that a robber couldn't force you to tell him your access code at gunpoint, then shoot you anyway). If he knows that his free fountain of money shuts down the moment you die, he'll have more incentive to employ heroic means to keep you alive even if he's the reason you're in danger of death to begin with.

Finally, you'll want to have the token itself be a delegate of some master token, with a reissue procedure for replacing it with a new token that has multiple layers of identity-authorization, since there's always a very real risk of loss. It's little comfort knowing a thief can't get at your money if, from your perspective, it's as gone as if it were in a concrete vault at some unknown spot on the floor of the Pacific Ocean.

Comment: Re:Best Wishes ! (Score 1) 322

by Miamicanes (#47528879) Attached to: Microsoft's CEO Says He Wants to Unify Windows

Within a year... yeah, most decent peripherals had drivers. At midnight on the day Windows 95 went on sale across America? They were basically nonexistent. From what I remember, soundcards were a MAJOR pain point for YEARS. Gravis totally dropped the ball with the Ultrasound (eventually releasing crippled win32 drivers that sort of worked, but if you wanted to play .mid files with wavetable instruments, you were stuck with realmode SBOS), and my dad's soundcard was a source of misery for YEARS until he threw in the towel and bought an AWE32. From what I remember, unlike a real SBpro (which set the port, irq, and DMA via jumpers), my Dad's stupid soundcard had to have the port, irq, and DMA set via realmode drivers at boot time. Yuck.

I seem to remember that CD-ROM drives were another source of realmode misery, but I'm not really sure *why*. I think it was because the drives themselves were IDE, but Adaptec held a patent on something and wouldn't allow Microsoft to bake support for CD-ROM drives into Windows without paying royalties, so Microsoft just left everyone to suffer with the Adaptec-licensed realmode drivers that came in the box with the drives (and began a 20-year tradition of always finding some petty way to cripple Windows' native handling of optical drives absent expensive third-party software).

Comment: Re: Astronomy, and general poor night-time results (Score 1) 550

by Miamicanes (#47528693) Attached to: Laser Eye Surgery, Revisited 10 Years Later

PRK also has a much higher incidence of starbursts and halos

Yes, but you're overlooking an important detail -- in the early 2000s, an average PRK (or LASEK) patient went into surgery with significantly worse vision than an average Lasik patient. Until fairly recently, the maximum amount of correction the FDA allowed for PRK & LASEK was a diopter or two HIGHER than the limit imposed for Lasik... but the maximum-allowed diameter of the ablation zone was about 2mm LESS. The net result is that patients who were disqualified for Lasik were able to get PRK/LASEK, but their blend zone was fairly steep, and was often smaller in diameter than many patients' pupils in the dark. Meanwhile, patients with milder vision problems ended up getting Lasik by default, because it healed faster & was more heavily advertised.

In other words, the PRK/LASEK patients who had the worst problems with halos are basically the ones who wouldn't have even been ALLOWED to get Lasik back in the early 2000s. I know, because I was one of 'em (1/2 diopter more astigmatism, and I would have been disqualified from PRK/LASEK too).

The good news is that the FDA finally raised the limits allowed for both maximum correction and ablation-zone diameter, and wavefront laser surgery can now fix most of the problems caused by the old FDA limits (enlarging the fully-corrected zone so it's as big as a darkness-accommodated pupil, and eliminating the halos in the process).

Comment: Re: Astronomy, and general poor night-time results (Score 0) 550

by Miamicanes (#47528369) Attached to: Laser Eye Surgery, Revisited 10 Years Later

Tell your mother to consider scleral gas-permeable lenses. Few people have ever heard about them, and they look kind of scary when you first see them being put in, but they're actually one of the most comfortable types of contacts you can wear:

* Gas-permeable lenses are more permeable to oxygen than soft lenses

* GP lenses don't dry out

* By having the lens rest on the sclera instead of the cornea, there's less sensation of motion from blinking (and less motion, period). The "pumping" motion of normal GP lenses drove me insane when I tried wearing them 20 years ago, and my dad admitted the same motion drove HIM crazy when HE tried wearing traditional hard lenses in the early 70s.

* The layer of tears between the bumpy cornea and rigid lens optically bridges the two (tears have almost exactly the same index of refraction as the cornea and GP lens), so they can fix (or at least greatly help) problems that are untreatable with glasses or soft lenses.

Scleral lenses are actually an old design, but making them with gas-permeable plastic is a relatively recent development.

Their only real downsides are that you pretty much HAVE to go to a real ophthalmologist, and they aren't cheap. But they're an awesome option for people who either can't stand normal lenses, or have problems that normal lenses can't effectively fix.

https://www.youtube.com/watch?...

https://www.youtube.com/watch?...
