

Comment: Re:Let's stay focused, people (Score 1) 134

by Miamicanes (#49156423) Attached to: Adjusting To a Martian Day More Difficult Than Expected

The problem with relying on that approach is that it completely breaks the ability of Martians to use the same internet as Earthlings. There's no getting around the latency problem, but the availability of nearly infinite bandwidth (over the span of a single 24-hour window, from the perspective of any individual user) can go a LONG way towards smoothing over the difference by allowing a degree of ad hoc websurfing: the user triggers a load operation, then the proxy recursively fetches not only that specific page, but every page and bit of content linked from it, several levels deep. Kind of like getting deliveries on a remote island where it takes weeks for goods to arrive... but when the ship DOES arrive, it's one of those Chinese mega-freighters that costs almost the same per trip whether it's full or empty.

More importantly, there's a potentially-lucrative market for such local caching services right here on Earth: cruise ships. When a ship's in port, it can have fiber-speed connectivity. When it's at sea, satellite data bandwidth is limited, but hard drive space is cheap. Instead of sending only videos explicitly requested by passengers on a specific ship TO that ship, you could just bundle up all of Youtube's daily updates requested by anyone on any ship that's a customer, and broadcast them once (plus enough extra data for forward error correction) to every ship watching that particular satellite (so that if a passenger goes to watch a video the next day, it'll already be cached locally).
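
The bundle-and-broadcast idea above can be sketched in a few lines. This is a minimal, hypothetical sketch (class and method names are invented, not from any real product): the shore-side uplink dedupes requests across the whole fleet, so each video crosses the satellite link exactly once no matter how many ships asked for it.

```java
import java.util.LinkedHashSet;
import java.util.Set;

// Hypothetical sketch of fleet-wide request deduplication: every ship's
// requests funnel into one queue, and the satellite broadcasts each
// unique video a single time (plus FEC overhead, not modeled here).
class BroadcastCache {
    // Videos queued for the next fleet-wide broadcast cycle.
    private final Set<String> broadcastQueue = new LinkedHashSet<>();

    // Returns true only the first time ANY ship requests a given video;
    // later requests for the same video cost no additional uplink bytes.
    boolean enqueue(String videoId) {
        return broadcastQueue.add(videoId);
    }

    int uniqueVideosQueued() {
        return broadcastQueue.size();
    }
}
```

A thousand passengers across ten ships requesting the same viral video would still cost a single satellite transmission, which is the whole economic argument for the broadcast model.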

The same approach would work for providing internet access in Antarctica. Run fiber to an island near the Antarctic Peninsula, then build microwave relay towers to handle inland backhaul. Strictly speaking, latency would be low (microwave to fiber), but bandwidth during the winter would still be scarce, because we're talking about a thousand-mile microwave-relay route from the South Pole to the nearest viable fiber drop... and snow does terrible things to link quality above 2GHz (think weather-radar frequencies: the signals hit snowflakes, ricochet, and get Doppler-shifted) during the time of year when residents need good internet access the most.

Comment: Re:Let's stay focused, people (Score 2) 134

by Miamicanes (#49152321) Attached to: Adjusting To a Martian Day More Difficult Than Expected

> It's not the length of a day that will impact Mars-dwellers the most, it will be their internet speed.

No, it'll be their latency. I believe the scenario informally tossed around by the IETF for "extraterrestrial internet" envisions three categories of latency: a relatively small amount of net bandwidth sent directly between Mars and Earth that enjoys the lowest possible latency, and two roughly equal amounts of bulk bandwidth with much longer latencies, handled by satellites at the L3, L4, and L5 Earth-Sun and Mars-Sun Lagrange points. For semi-ad-hoc websurfing, your request would get sent along the fast (bandwidth-limited) link, and the response would travel along one or both of the Lagrange paths. Someone like Akamai would come up with an open web standard that allowed sites to export themselves in their entirety (and stay synchronized in an rsync-like manner) so they'd run in a local VM on Mars.
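
The routing split could be sketched roughly like this (all names and the 64 KiB cutoff are invented for illustration, not from any IETF draft): small interactive requests take the scarce low-latency direct link, while everything bulky rides the high-latency Lagrange relay path.

```java
// Illustrative sketch: pick a link per message. Threshold and names
// are assumptions made up for this example.
class InterplanetaryRouter {
    enum Link { DIRECT, LAGRANGE_BULK }

    // Assumed cutoff: anything over 64 KiB counts as "bulk" traffic.
    static final long BULK_THRESHOLD_BYTES = 64 * 1024;

    // Interactive AND small -> the fast direct link; otherwise the
    // high-latency, high-capacity Lagrange relay path.
    static Link choose(long payloadBytes, boolean interactive) {
        if (interactive && payloadBytes <= BULK_THRESHOLD_BYTES) {
            return Link.DIRECT;
        }
        return Link.LAGRANGE_BULK;
    }
}
```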

Let's use StackOverflow as an example. To kick the whole thing off, SO would take a snapshot of itself (kind of like the data dumps it already publishes) and begin uploading it to the server on Mars along the longer, high-latency Lagrangian bulk-data path. Once the server on Mars had a complete copy, it would become the local mirror. Normal bulk updates would occur frequently and periodically along the longest Lagrangian path. Questions posted by someone on Mars (and the text of replies to them) might get expedited and sent along the shorter direct route.

Porn sites and YouTube would do the same thing. If a Martian wanted to visit some smaller site, he'd have to tag it for fetching and wait a few hours for it to become available. They'd probably follow a "Martians Pay" pricing model that split the bandwidth costs among everyone on Mars who accessed specific sites on a regular basis. Popular sites with lots of users (like Reddit and StackOverflow) would be cheap despite having lots of data, because the cost would be divided among lots of Martian users. More offbeat sites might force individual users to be somewhat selective and conservative about their bulk-fetches (or at least about keeping them updated in perpetuity if they're only interested in a one-shot viewing), and might rebate part of the initial acquisition cost if/when future users view the same site. You, the first user, might pay $5 to bulk-grab all the postings of {some-user}, but get $2 rebated when/if some future person pays $3, and you'd both get another buck (and further diminishing rebates) as more people paid diminishing prices for access.
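
One way the rebate scheme could work in practice (an assumed even-split model with invented names; the dollar figures above are only approximated by it): each new buyer drives everyone's net contribution down to an equal share of the original acquisition cost, with the difference refunded to earlier buyers.

```java
// Hedged sketch of a "Martians Pay" cost-sharing model. The rule is an
// assumption for illustration: after n buyers, everyone's net spend is
// cost / n, and the operator always holds exactly the acquisition cost.
class MartiansPay {
    // Net amount each of n buyers has effectively paid.
    static double netShare(double acquisitionCost, int buyers) {
        return acquisitionCost / buyers;
    }

    // Rebate owed to EACH existing buyer when buyer number n joins:
    // their old even share minus the new, smaller one.
    static double rebatePerBuyer(double acquisitionCost, int newBuyerCount) {
        return acquisitionCost / (newBuyerCount - 1)
             - acquisitionCost / newBuyerCount;
    }
}
```

With a $6 archive: the first buyer fronts $6; when a second buyer joins, each nets $3; a third buyer brings everyone down to $2, and so on, with diminishing rebates exactly as described above.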

By the same token, cable networks like HBO and SkyTV would bundle their new video content daily and bulk-upload it to their local affiliate on Mars (who'd make it initially available at some official scheduled time, and thereafter by streaming).

Ironically, the biggest single limiting factor on bandwidth wouldn't be between Earth and Mars, but between the surface of the Earth and a satellite in geostationary orbit. Between the L3 and L4 satellites, you can use 30GHz of spectrum if you've got the hardware & power budget to do it, then double it by sending half the data along the path in the other direction. The problem is the "last mile" between orbit and the surface, where something more exotic will likely be required (say, multiple satellites using tightly-focused lasers to ferry the bulk data between Earth and orbit, then bulk-uploading their chunks directly to the Lagrangian satellites).

In short, the future of interplanetary internet can be summed up as multipath, multilink, and Akamai-like CDNs hosting VMs for Earth websites on Mars. The biggest hard challenges aren't the technical ones... it'll be dealing with Hollywood lawyers and the copyright mafia, losing sleep at night over the idea that their precious content is being cached on Mars with insufficient DRM, or that someone, somewhere on Mars, is listening to an improperly-licensed song.

The resources to maintain this kind of large-scale local cache would be substantial... but they'd probably be agreed to without hesitation, on the grounds that they'd likely make a bigger difference to morale on Mars than anything not directly related to life-or-death.

Comment: Re: Just y'know... reconnect them spinal nerves (Score 4, Informative) 209

by Miamicanes (#49146987) Attached to: Surgeon: First Human Head Transplant May Be Just Two Years Away

Not exactly. Some organ systems have controls that are a bit more local. That's why a quadriplegic can still digest food and have a beating heart, but needs a ventilator unless he had enough nerve connectivity remaining post-injury to breathe on his own. It's also why someone who's paralyzed can still have sex & enjoy it (even though he can't feel the orgasm).

Comment: Re:Peanuts (Score 1) 411

by Miamicanes (#49036103) Attached to: Your Java Code Is Mostly Fluff, New Research Finds

Most of Java's problems lie with the fact that the designers of its original API made some unbelievably bad decisions early in its development. Like:

* specifying arguments as int or String values instead of enums... so every method call that explicitly needed UTF-8 had to be wrapped in try/catch (just in case you couldn't remember whether Java wanted you to call it "UTF-8", "UTF_8", or "UTF8", & threw an UnsupportedEncodingException). This particular thorn in my side was FINALLY fixed as of JDK 7.

Before anyone points out that enum is a semi-recent addition to Java, I'd like to remind everyone that even in 1996, you could declare a class with a private constructor, then use it to declare public static final instances of itself (which, I believe, is roughly what 'enum' actually does behind the scenes, anyway).

* Pre-JDK8, Java's handling of nearly everything related to the concept of a Date/Time (parsing, printing, calculating, the works) was completely fucked.

* Swing (Enough said. It speaks for itself... and does it almost as loudly and proudly as AWT.)

Even MORE tragic, though, is the way Android's API architects perpetuated the EXACT SAME anti-patterns (string/int constants as args) with the Android API... including brand new framework classes that didn't exist until Android did & had NO REASON to be that way.
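
For the curious, here's the pre-Java-5 "typesafe enum" idiom described above, alongside the JDK 7 fix (java.nio.charset.StandardCharsets). The Encoding class is an invented example of the pattern, not a real JDK class.

```java
import java.nio.charset.StandardCharsets;

// The idiom available since Java 1.0: a class with a private constructor
// exposing public static final instances of itself. This is roughly what
// the `enum` keyword compiles down to.
final class Encoding {
    public static final Encoding UTF_8 = new Encoding("UTF-8");
    public static final Encoding US_ASCII = new Encoding("US-ASCII");

    private final String charsetName;

    private Encoding(String charsetName) { this.charsetName = charsetName; }

    public String charsetName() { return charsetName; }
}

class CharsetDemo {
    // The JDK 7 fix: a Charset-typed constant means no misspelled
    // "UTF8"/"UTF_8" strings and no checked UnsupportedEncodingException.
    static byte[] encode(String s) {
        return s.getBytes(StandardCharsets.UTF_8);
    }
}
```

Nothing about the Encoding class required generics, annotations, or any post-1996 language feature; the API designers simply didn't use it.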

Comment: Re:Here's a great idea... (Score 1) 481

by Miamicanes (#48988935) Attached to: DOT Warns of Dystopian Future For Transportation

Let's not forget that Orlando's toll roads are violently expensive compared to the tolls just about everywhere in the US besides New York City.

Orlando also did some really STUPID things, like build 3/4 of the beltway before deciding on the final route for the northwestern quadrant. Take a look at the northern end of 429 & notice (via Google Earth) that 451 used to be its tail end. The idiots allowed developers to build a solid wall of neighborhoods in what was supposed to be its northward path, and they ended up having to back up 3 miles and find a new route north (and demolish & re-route a half mile of 429 to try and make it less obvious to future generations just how badly they fucked up).

Of course, Miami has done some things of epic stupidity, too... like allowing developers to build homes in what everybody knew was supposed to be the westward route of SR 836, leaving the Turnpike-836 interchange with a bizarre layout that makes absolutely no sense in its current configuration (a 2-mile loop from the northbound Turnpike to eastbound 836 that doesn't allow exits to NW 107th Avenue, and even 20 years ago involved a truly bizarre & surreal rigged-up semi-permanent detour through the FHP office's parking lot and a 2-lane road overgrown by trees on both sides that eventually led to 107th Avenue). Or the unfathomably stupid decisions that allowed the eastern termini of both the Gratigny Parkway and the Sawgrass Expressway to dump onto local roads 2 miles west of I-95 (guaranteeing that someday, FDOT will end up spending about a billion dollars apiece to finish them off properly and redo their interchanges with I-95).

Comment: Re:why google keeps microsoft away (Score 4, Informative) 280

by Miamicanes (#48938735) Attached to: Microsoft To Invest In Rogue Android Startup Cyanogen

No Android device running a stock carrier ROM ever used flash for swap (that I'm aware of), but ~2-3 years ago, just about everyone running Cyanogenmod (or some other AOSP-derived ROM) had swapfiles. And yes, we really DID destroy $80+ microSD cards. It caught almost everyone by surprise, because we all blindly believed the manufacturers' assertions that the flash would last "a lifetime of normal use", failing to note that manufacturers didn't consider paging virtual memory almost nonstop to be "normal use". It was literally a use case the manufacturers never designed for, that didn't even become *viable* until overclocked class 6 and class 10 microSD became fast enough to make swapping to it faster than killing & re-spawning activities.

Comment: Re:why google keeps microsoft away (Score 5, Insightful) 280

by Miamicanes (#48936831) Attached to: Microsoft To Invest In Rogue Android Startup Cyanogen

More specifically, because lots of Android's fundamental architecture was dictated by a perceived need to work on slow CPUs (as in, 400MHz ARMv6) with absurdly low-res displays (remember 240x360?). Literally NOBODY involved with Android's genesis would have believed you if you told them that 5 years after the HTC G-phone's arrival on T-mobile, a phone with 1280x800 display, 1Ghz dualcore CPU, a gig of RAM, and at least 4-8 gigs of flash would be considered uselessly ghetto and hopelessly obsolete.

Remember, the whole reason why Google made the Nexus One was its frustration with the wimpy hardware of the second-gen Android phones, and hints that the third-generation phones were only going to be another half-step better. On the day of its release, the Nexus One was literally leaps and bounds beyond any competing phone, and its popularity forced HTC and Samsung to throw away their roadmaps and race back to the drawing board to come up with the Evo4G and Galaxy S family.

Current things that make Android feel laggy:

* 30Hz touchscreen drivers and screen update rates are still the norm. 1/30th of a second is long enough to be perceptible as "lag", and when you factor triple-buffering into the equation, the lag is more like 1/15th of a second.

* The resolution and color depths of high-end Android phones have completely outstripped the dumb-framebuffer 3Dfx-heritage architecture behind most current hardware. Most video chipsets were optimized for 16-bit color at 1280x800 (more or less), but some high-end Android phones now ship with 2560x1600 displays running at 24-bit color and can barely sustain 30fps, let alone 60fps or faster. Basically, they're optimized for (and accelerate) the wrong thing. They might have great 3D graphics for games, but those capabilities are unusable and useless at higher-res/color. That's why some Android homescreen-replacement apps use 3D acceleration, but become fuzzy during transitions... they drop the resolution and color depth down to what the chips can handle, and don't go back to full-resolution until the transition completes. You can see it for yourself... do the "rotating cube" effect (or whatever you want to use), and notice that the moment the gesture begins, the resolution gets fuzzed in half, then snaps back into focus when you stop.

* Android's primitive garbage collection (compared to Java's since 1.4), which practically forces the OS to constantly kill off apps running in the background to reclaim their RAM, coupled with the real-world problems of trying to use a phone's flash for Linux-style virtual memory. If you aren't careful, you can literally burn through an eMMC's lifetime write count in a few months. MicroSD is even worse... more than a few guys at XDA have destroyed expensive SanDisk microSD cards with a few days of hard benchmarking and intensive swapping. That's why most Android ROMs no longer make it easy to enable swap, even though it can be a HUGE performance boost: too many users were destroying flash cards too quickly. Cyanogen with a large swapfile that's tweaked to abstain from killing off idle tasks will nuke a brand-new class-10 microSD card in about 3-8 months of normal daily use... and if you put the swapfile on the phone's INTERNAL flash, your phone would essentially be bricked once the counter tripped and the eMMC write-protected itself (because Android can't deal with booting into an environment where it literally can't write ANYTHING to disk).
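
A back-of-the-envelope sketch of the wear math (all endurance and write-amplification figures below are assumed round numbers for illustration, not manufacturer specs):

```java
// Rough model of flash wear under swap: a card can absorb roughly
// (capacity x rated P/E cycles) bytes of writes before the controller
// write-protects it. Write amplification multiplies the logical swap
// traffic into physical flash writes.
class FlashWear {
    // Days until the card's rated write endurance is exhausted.
    static long daysToWearOut(long cardBytes, long peCycles,
                              long swapBytesPerDay, long writeAmplification) {
        long totalWritableBytes = cardBytes * peCycles;
        return totalWritableBytes / (swapBytesPerDay * writeAmplification);
    }
}
```

Plugging in a 16 GB card rated for ~500 P/E cycles, absorbing ~8 GB of swap per day at 5x write amplification, gives roughly 200 days: squarely inside the "3-8 months" window described above.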

Comment: Re:Since when is AMT controversial? (Score 1) 179

by Miamicanes (#48936579) Attached to: FSF-Endorsed Libreboot X200 Laptop Comes With Intel's AMT Removed

As I understand it, at the bare-metal hardware level, AMT is basically a networked JTAG programmer grafted onto the ethernet controller that can do things like read & write values into RAM, stuff values into the CPU's registers, update the BIOS NVRAM, and override the normal boot process as long as you have physical ethernet access to the same network as the target computer & can present AMT with credentials it's satisfied with. It basically starts with the foundation provided by Wake-on-Lan & PXE, and adds the JTAG-like capabilities and security on top.

GNU is Not Unix

Serious Network Function Vulnerability Found In Glibc 211

Posted by Soulskill
from the audits-finding-gold dept.
An anonymous reader writes: A very serious security problem has been found and patched in the GNU C Library (Glibc). A heap-based buffer overflow was found in the __nss_hostname_digits_dots() function, which is used by the gethostbyname() and gethostbyname2() function calls. A remote attacker able to make an application call either of these functions could use this flaw to execute arbitrary code with the permissions of the user running the program. The vulnerability is easy to trigger, since gethostbyname() can be reached remotely by applications that do any kind of DNS resolution. Qualys, who discovered the vulnerability (nicknamed "Ghost") during a code audit, wrote a mailing list entry with more details, including in-depth analysis and exploit vectors.

Comment: Accounting formalities (Score 1) 200

by Miamicanes (#48619675) Attached to: NASA's $349 Million Empty Tower

Serious question: how much of that alleged $700k/year-to-mothball is real, hard cash NASA has to spend, vs accounting formalities like "how much would the site be worth if put to its highest and best use" (and taken as a paper loss because the site isn't being used)? Or one-time costs that were incurred for mothballing, but aren't likely to be repeated annually (like shuttering the building, building a fence around it, etc)?

Don't discount the accounting formalities. I once worked for a company where upper management directed us to immediately dispose of about 100 non-obsolete laptops... at a disposal cost of more than $900 apiece. Why? Because they were sitting in a stack in the middle of a mostly-empty datacenter covering most of a square block, and some idiot in the accounting department decided they were costing us $25,000/year to maintain, for no reason besides "they're taking up 100 square feet, and we're paying $250/foot per year in rent"... in a building that was about 95% empty & leased for 20 years at the height of the dotcom boom just because "it was there". Even if you take the fictional annual rent for the floorspace seriously, it took more than FIVE YEARS just to break even on the insane disposal fees. And in the meantime, we had to buy new laptops to replace the ones we were ordered to dispose of, because new people were still getting hired. Wait, it gets better. As a matter of policy, we were required to ship the laptops to the disposal center via FedEx. Priority Overnight. Individually. Almost a decade later, I *still* can't grasp how anybody could have possibly thought it was sane, let alone a *good* idea.

Comment: Re:Global Warming (Score 0) 47

Bzzzt. Florida is probably in the best position of any state (besides MAYBE New York) to deal with climate change. Why? Because we haven't had anything that vaguely resembles a natural river or coastline in almost a century. Our coastline is ALREADY fortified against flooding. Drive to South Beach sometime, and notice that West Avenue (the road along the western edge of the island) is already a few feet higher than the surrounding terrain. Then observe that there's another huge berm sitting between Ocean Drive and the ocean itself (the one covered in sea oats with boardwalks over it).

Then, while you're at it, take a peek at the western edge of urban Dade & Broward counties. Notice the HUGE-ass dike that keeps the "Everglades" side underwater, and the "human" side dry & suitable for condos, office parks, and golf courses.

It's the same as the Netherlands. Everyone likes to point to it as a country that's in peril of being submerged, but it's probably the *least* likely country in Europe to even *notice* rising sea levels, because the barriers around it were all solidly over-engineered with plenty of wiggle room to spare. And when the time comes to rebuild them in a century or so, they'll just get rebuilt a few feet higher.

Comment: Re:only for nerds (Score 1) 66

In theory, the answer is a qualified "maybe". Most new laptop discrete video cards connect via mini-PCIe, and I believe there's some anecdotal degree of physical compatibility between Alienware/Dell and someone else (Clevo, I think). As a practical matter, if you're talking about buying a better video card on eBay that was explicitly designed for your exact model (say, upgrading from the cheapest ATI card to the best Quadro), you'll probably be OK. Everything else is a crapshoot.

Apparently, screw holes are a big, big problem with cross-device compatibility... different laptops put them in different places, even when the electrical interface, shape, thickness, and cooling arrangements are compatible.

There are actually a lot of relatively upgradable laptops out there (as long as you don't insist on one that's a glued/laminated-together 1mm-thick Apple-inspired abomination that's built like a cell phone). The problem is, it's nearly impossible to make any kind of informed purchase decision in advance of actually buying anything. The information you need just plain isn't reliably available until some brave soul tries doing it, takes pics, measures things, and posts the pics to his blog. Thinkpads are somewhat of an exception... but Lenovo made a new mess of their own (and got lots & lots of hate) when they started whitelisting specific mPCIe cards in the EFI BIOS and refusing to enable cards not on the list.

Put another way, there's a lot that can go wrong, and you're at least as likely to burn cash on parts with limited resale value that won't ultimately work, and can often be purchased only used on eBay from sellers who harvested them from broken laptops bought for scrap.

Comment: Re:Go T-Mo (Score 1) 112

by Miamicanes (#48240491) Attached to: AT&T Locks Apple SIM Cards On New iPads

No need for a lawsuit. Just file a complaint with the FTC under the Magnuson-Moss Warranty Act, then sit back with a bowl of popcorn and watch the manufacturer beg for mercy. Or ask to speak to the front-line employee's supervisor, and just say the magic phrase that pays: "If you don't fix it, I'm going to file a Magnuson-Moss complaint with the FTC". They'll blanch, take the phone, charge the usual deductible if you let them, JTAG-reflash it back to stock, and proceed as normal.

The catch with Magnuson Moss is that the manufacturer is under no obligation to return a rooted or reflashed phone to you STILL rooted or reflashed. They're 100% unambiguously entitled to JTAG-reflash it to stock prior to returning it, even if the newer version to which they reflashed it doesn't have a working root exploit. So, 9 months from now, you COULD conceivably find yourself owning a rooted & reflashed phone with a flaky USB port that's eligible for warranty repair, but will be returned to you reflashed with unrootable Android L and a locked-down bootloader. You'd be stuck between two equally-shitty rocks and hard places... flaky USB with root, permissive SElinux, and ext2 microSD hacked back into the ROM... or working USB, but no root and Google-crippled microSD that only supports FAT32, and restricts what apps can do with it regardless.

Comment: Re:Go T-Mo (Score 3, Interesting) 112

by Miamicanes (#48227157) Attached to: AT&T Locks Apple SIM Cards On New iPads

What, exactly, does Verizon do that is so dishonest and earns them so much hate?

They lock down their phones, and in the past they've actively disabled features supported by their phones' hardware to force you to use their premium services (Bluetooth modes, Wifi, and GPS have all been casualties of Verizon's lockdown fetish in the past). Compounding matters, there are lots of semi-rural places where Verizon is the only carrier with viable service (or at least, viable service INDOORS). Verizon was also the only carrier who forced bootloader-locking up until AT&T joined the party last year.

That's why T-Mobile is the carrier everyone desperately wants to love, even in areas where their service is poor. They're the only carrier who DOESN'T lock down their phones & try to restrict what you can do with them.
