Comment Re:So far away (Score 1) 400

Well... actually... it wasn't "desktop publishing" that drove print shops out of business... it was cheap, fast photocopying by stores like Kinko's. If anything, desktop publishing GENERATED lots of business for small print shops by enabling them to make high-quality (or more cost-effective) prints for customers who did the layout themselves with PageMaker, and it enabled small print shops to offer layout & design as a profitable service, instead of having to settle for lame generic signs or outsource the typesetting to a service bureau.

I can't personally speak for small town America, but in South Florida, we still have a very strong local printing industry, if only because there's so much local demand for glossy real estate magazines, tourist magazines, and nightclub handouts. The barriers to entry are more formidable than they were 20 years ago (a 4-color high-res digital printing press is now non-negotiable), but the companies we have now are doing quite well.

Comment Re:Wow, that was so full of stupid... (Score 1) 449

I forgot to add the "best" part about the circumstances under which the outages occurred -- the storm's worst part was Sunday afternoon, but Comcast & U-verse went down on Monday morning. Why? Because the storm knocked out commercial power to their network centers on Sunday afternoon, and Monday morning is when they ran out of diesel for the generators. This seems to be the new normal with tropical storms. :-(

Comment Re:Fine, get rid of POTS, give us Net Neutrality (Score 1) 449

So if we take the opposite approach, we run Internet service as slow and rickety DSL (which is highly dependent on distance from the telco switch) over the POTS copper. Which would you really prefer?

VDSL2 over POTS copper, leased to a CLEC at rates that are open, published, and available to all on equal terms (i.e., if AT&T or Verizon charges themselves $19/month for a dry copper pair, they're required by law to lease it to any CLEC who wants to use it instead for the same $19/month).

With the best VDSL2 available today, 100mbps over two pairs (one for uplink, one for downlink) up to about 2,000 feet is quite do-able... and those are 100mbps that AT&T and Verizon can't fuck with, and are my inalienable right to use as intensively as I want to communicate with my ISP's VDSL2 backplane.

This isn't about un-burdening AT&T and Verizon of obsolete legacy infrastructure. This is about eliminating one of the few remaining back channels that motivated individuals can use to do an end run around them to avoid their metering & caps.

If Verizon wants to deploy ONLY wireless in Mantoloking, fine... let them. But apply the same regulatory standards to them that applied to POTS. Require 10 days of backup power, the way central offices used to keep gigantic arrays of lead-acid batteries to provide it. Force them to sell unbundled raw IP transit to any CLEC, with the same guaranteed and unmetered throughput that could be achieved via VDSL2, for the same price as unbundled dry copper.

The second part alone would probably stop them dead in their tracks, because the only way they COULD provide guaranteed hundred-megabit throughput (maybe pooled among 2-4 households, max) within the constraints of their spectrum licenses via LTE would be to lay new fiber to all the neighborhoods ANYWAY, and stick a microcell every 4 houses. And prohibit them from charging higher or new fees, so they can't just pass the costs off to customers.

If the up-front capital costs of deploying 14,000 fiber-networked picocells across Mantoloking to serve ~40,000 customers didn't stop Verizon in its tracks, the long-term maintenance costs of replacing 14,000 sets of backup batteries capable of supplying power for a week, plus the nontrivial number of picocells that would die due to lightning or ruptured Chinese electrolytic capacitors, *would*. Verizon barely has enough spectrum to feed any one tower site with 50mbps. If they had to potentially supply that much guaranteed sustained throughput to every single customer at the costs they now charge for a dry copper pair, their only option would be to settle for making literally the "last hundred feet" wireless and deploying a brand new fiber-networked nightmare of picocells serving 3-4 customers apiece.

For LESS than what it would cost them to purchase, deploy, and maintain an ungodly huge network with 14,000 fiber-connected neighborhood picocells, they could just skip the picocells and run fiber the last hundred feet to everyone's house. Actual fiber is now cheaper per linear foot than UTP copper wires, and a bundle of direct-burial cable with 8-16 fibers now costs less per linear foot than direct-burial cat5e. In contrast, if Verizon could deploy a remote picocell with fiber termination and enough battery backup power to run for a week without commercial power for less than $20,000, they'd be lucky. If they had to shoulder the cost of deploying all those picocells themselves as the price of eliminating copper, they'd NEVER go through with it.
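For what it's worth, the back-of-envelope math is brutal. A quick sketch using the figures above -- the $20,000-per-picocell and $19/month dry-pair numbers are the assumptions from this thread, not Verizon's actual books:

```python
# Back-of-envelope capex for the hypothetical picocell build-out,
# using the assumed figures from the discussion above (not real Verizon data).
picocells = 14_000
cost_per_picocell = 20_000           # USD, assumed: hardware + fiber + battery backup
customers = 40_000
dry_pair_lease = 19                  # USD/month, assumed dry-copper lease rate

capex = picocells * cost_per_picocell          # total up-front cost
capex_per_customer = capex / customers         # spread across all customers
payback_months = capex_per_customer / dry_pair_lease

print(f"Total capex:        ${capex:,}")                  # $280,000,000
print(f"Capex per customer: ${capex_per_customer:,.0f}")  # $7,000
print(f"Months to recoup at the dry-pair rate: {payback_months:.0f}")  # ~368
```

That's roughly 30 years just to recoup the hardware at dry-copper prices, before a single battery gets replaced.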

What REALLY needs to be done is another forced breakup of AT&T and Verizon to make them divest their ROW, wire, and fiber to a new company that's required by law to deal with them at arm's length, on equal terms with other wireless carriers, CLECs, and service providers. If Verizon and AT&T don't want to own wires anymore, fine... but make them sell them to someone who DOES, instead of allowing them to create artificial scarcity by decommissioning them, then hoarding the public right of way so nobody else can use it either. Any natural monopoly granted over public right of way should ALWAYS be on "use it, or lose it" terms, subject to revocation and re-allocation if the franchise holder isn't willing to put it to its highest and most productive use.

Comment Re:Wow, that was so full of stupid... (Score 4, Interesting) 449

The fundamental problem is that POTS sucks by any definition, but it rarely fails suddenly and catastrophically in areas where the phone lines are mostly underground (I don't know about the rest of the US, but in Florida, there are a LOT of places where the phone lines are buried, even though the power lines aren't). Most of what you describe is progressive deterioration over relatively long periods of time. Wireless networks, in contrast, tend to lose power suddenly, and stay down for at least the remainder of whatever catastrophe caused the failure in the first place.

Twenty years ago, it was almost UNHEARD of in Florida to actually lose phone service during anything short of an Andrew-like hurricane... and even in Andrew, few people actually lost phone service. When they did, it was almost always due to catastrophic destruction of their own home's demarc box. Two years ago, half of Dade & Broward counties lost Comcast & U-verse for half the day during a GODDAMN TROPICAL STORM (Isaac) that didn't even hit us directly. In fact, it seems like the most disruptive storms are, in fact, "slow & sloppy" tropical storms that have enough gusts to knock out commercial power early in the storm, then leave the area in limbo for another day and a half as the storm slowly passes through the area.

Comment Re:So far away (Score 1) 400

Just to add... frankly, I'm NOT happy with the current state of desktop publishing. PageMaker is gone, WordPerfect lobotomized itself, Word sucks, and MS Publisher sucks even more.

HTML, and the attitudes it encouraged (no, make that *demanded*) towards formatting, coupled with the dire state of publishing software today, have combined to give us ebooks that are ugly enough to make your eyes bleed, and printed books with sloppy typesetting that would have gotten people *fired* 20 years ago. 20 years ago, people would spend HOURS tweaking the layout of chapters until every page was *perfect* -- no widows, no orphans, no dangling paragraphs intruding into the visual space of a diagram or photo.

Years ago, I used to wonder how civilizations could fall and cause arts and technological advances to be lost. Now, I can say I've seen it happen firsthand with regard to desktop publishing. We reached the pinnacle sometime around the mid-90s, and we've been sliding downhill into ugly barbarism ever since.

Comment Re:So far away (Score 1) 400

> There were plenty of arguments against doing your own desktop publishing in the C64/Apple II days.

And most of them were 100% right. C64 and Apple II DTP almost without exception looked like total shit. And I'm writing that as someone who personally used both the Print Shop and Newsroom on both platforms from the day they arrived until the day I got my first Amiga in '86, and suffered *horribly* with a Star Gemini 10X connected to a C64 through a Cardco CardPrint+G. For those who never had the pain of using that particular combo, it had a design flaw, made a thousand times worse by rushed, buggy firmware, that caused the printhead to scrub back and forth thousands of times per line, printing only a single column of dots with each swipe. That made it basically IMPOSSIBLE to print even a single-page sign, because a single page took HOURS to finish and beat the printer half to death in the process.

At least the Print Shop's output looked halfway OK. The Newsroom was another matter entirely... my eyes started to bleed a few seconds ago just REMEMBERING how awful its print quality was.

Comment Re:Beta Sucks (Score 1) 400

> We live in an economy of mass computing, because it is way, way cheaper to perform a calculation on a mainframe than a microcomputer on your desk.

I disagree. If that were true, nobody would build Bitcoin-mining rigs. They'd just lease server resources from EC2.

Look what happened to aGPS the moment phones blew past a gigahertz -- the round-trip time it took to query the remote server after taking a reading from a local radio exceeded the time to just calculate it locally, and the idea of offloading the math to a remote server just quit making sense.

If we all had gigabit fiber connections to the internet and you could get the latency down to under ~50ms, it *might* be viable to offload OpenGL rendering tasks to remote server farms and simply stream the result back to a Chromebook as H.264 instead of spending $2,400 on an Alienware gaming laptop with a high-end discrete graphics card. At least, for games not involving hair-trigger reflex actions. But by the time we get to that point, Android watches will probably have a 3GHz 16-core processor, and will probably be able to do realtime raytracing at any meaningful resolution, color depth, and framerate the display is capable of.
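The latency budget is the killer for anything reflex-driven. A rough sketch of why ~50ms matters at a 60fps target -- the encode/decode figure is an illustrative assumption, not a measurement:

```python
# Rough input-to-photon latency budget for remote game rendering.
# The RTT and codec numbers are illustrative assumptions.
frame_time_ms = 1000 / 60        # ~16.7 ms per frame at 60 fps
network_rtt_ms = 50              # round trip to the render farm
encode_decode_ms = 10            # assumed H.264 encode + decode overhead

total_ms = network_rtt_ms + encode_decode_ms + frame_time_ms
frames_of_lag = total_ms / frame_time_ms
print(f"Added latency: ~{total_ms:.0f} ms, i.e. ~{frames_of_lag:.1f} frames behind your inputs")
```

Call it four-plus frames of lag before the first pixel moves -- tolerable for a turn-based game, fatal for a twitch shooter.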

Comment Re:So far away (Score 2) 400

Given the relatively low price of Lego blocks if you buy them in bulk (as opposed to buying the theme sets whose price is mostly licensing fees paid to Disney or someone like them), plus the amount of work you'll have to do to sand off the spurs and finish them off, is it *really* worth printing Lego blocks yourself? Especially if you're paying retail prices for the plastic filament in relatively small quantities, and making an effort to avoid plastic with dangerous (or unknown) amounts of lead?

Comment Re:This is why I started using MATLAB (Score 1) 391

> the HTML specification document neglected to mention which CSS should be used to get eg

From what I remember, there were quite a few things you could do with table/row/cell going all the way back to IE3, but COULDN'T do at all with CSS1, and couldn't reliably do with CSS2 in a cross-browser-compatible way without implementing multiple variants that were more trouble than they were worth. That meant either serving different HTML based on the browser-sniffed user agent string, or using conditional comments and the doctype to tell IE what your precise expectations were and hide whatever you did from Firefox & Opera -- and getting it to work with Firefox & Opera required major DOM manipulation via Javascript in onLoad().

I distinctly remember that we didn't start seeing articles saying, "There's now nothing you could do with tables that you can't do with CSS" until CSS3 became commonplace. The CSS1-era articles all basically said, "Sorry, can't do it", and the CSS2-era articles basically said, "You can sort of do it if you're feeling incredibly masochistic, but it's almost pointless to bother".

Comment Re:Um no (Score 1) 224

The biggest roadblock to metric adoption in the US was the insane idea that anything expressed in metric units had to be some whole multiple of 10 or 100. We weren't allowed to have 5mL and 15mL measuring spoons... they had to be 1mL and 10mL, bundled with an equally-useless 100mL measuring cup. Or at least, that was what you'd think if you saw the useless set of baking utensils my mom & grandmother got for Christmas at some point in the late 70s. It was like there was some unwritten rule banning 250mL measuring cups, because it wasn't a "proper" metric size.
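The irony is that the "odd" metric sizes are the ones that actually map onto American kitchen measures almost exactly. A quick check of the standard US customary conversions:

```python
# Why 5 mL / 15 mL / 250 mL are the *useful* metric kitchen sizes:
# they're within rounding distance of the US customary measures.
ML_PER_TSP = 4.92892    # 1 US teaspoon in mL
ML_PER_TBSP = 14.7868   # 1 US tablespoon in mL
ML_PER_CUP = 236.588    # 1 US cup in mL (the "metric cup" is defined as 250 mL)

print(f"teaspoon   = {ML_PER_TSP:.2f} mL  -> 5 mL spoon")
print(f"tablespoon = {ML_PER_TBSP:.2f} mL -> 15 mL spoon")
print(f"cup        = {ML_PER_CUP:.1f} mL  -> 250 mL metric cup")
```

A 1mL/10mL/100mL set, by contrast, corresponds to nothing any American recipe has ever called for.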

The sole exception to the "whole multiple of 10" rule was speed limits. There, without exception, the posted metric limit was always lower than the true conversion of the mph limit -- sometimes by a lot. Hence, signs that listed the metric equivalents of 55mph and 30mph as 88km/h and 40km/h. I specifically remember the news reports on TV about how the 30mph/40km/h signs were vandalized at an abnormally-high rate (in rural areas, they were usually shot full of bullet holes). Had the government given drivers a freebie, and listed the metric speed limits for 55 and 30mph as 90km/h and 50km/h respectively, I *guarantee* American drivers would have LOVED the metric system. At least, in that specific context.
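The arithmetic behind the round-down, for anyone who doubts it -- just the exact conversion versus the posted signs described above:

```python
# Exact mph -> km/h conversions vs. the posted signs of the era.
KMH_PER_MPH = 1.609344  # exact by definition: 1 mile = 1.609344 km

for mph, posted in [(55, 88), (30, 40)]:
    exact = mph * KMH_PER_MPH
    friendly = round(exact / 10) * 10   # nearest "round" metric limit
    print(f"{mph} mph = {exact:.1f} km/h; posted {posted} km/h, "
          f"but a driver-friendly sign would have said {friendly} km/h")
```

For the 30mph roads, the posted 40km/h sign quietly shaved more than 8km/h off what drivers were legally doing the day before the signs went up.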

Comment Re:Um no (Score 1) 224

> DST is an anachronism. Only old people (esp. those in congress) oppose getting rid of it.

I know plenty of people -- young AND old -- who'll happily support a proposal to abolish DST, as long as "abolish DST" means "stay in summer time all year".

40 years from now, DST will begin in early February and end the weekend after Thanksgiving. People will bitch about the stupidity of changing clocks for just 9 or 10 weeks, and the DST-abolitionists will be trying to abolish summer time and turn 9 weeks of winter time into 52... and wondering why everybody opposes them.

Comment Re:don't connect it (Score 2) 106

> why the hell would you connect your house to the internet or any appliance on the Internet anyway.

So you can check up on your cats during the day while you're at work, and reassure yourself that the house hasn't gotten broken into in a way that somehow managed to avoid setting off the alarm. And dispense treats for them from the Magic Invisible Food God if you start to feel guilty about leaving them home alone all day. And drive the Roomba-platform-mounted webcam around to their favorite hiding spot (still working on *that* one).

There's also the fact that more traditional means of remote home control (via phone) rarely work well with VoIP and voicemail. My alarm, for example, DOES have a telephone interface module... but it depends upon having an answering machine pick up the call so it can eavesdrop and listen for the triggering code. If the call rings until it goes to voicemail, the alarm never gets a chance to listen in and grab the call away from the answering machine. If the alarm answers the phone and it was a person calling, all it can do is play back a ~5-second .wav file apologizing and hang up on them. Did I mention yet that the way Android phones implement keypad DTMF (playing a short pre-generated sample, as opposed to generating the tones on the fly in realtime), coupled with the way most VoIP codecs and mobile phone networks mangle DTMF, causes roughly 1 or 2 digits per dozen to fail to get recognized?
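To put that digit-failure rate in perspective: assuming a 4- or 6-digit code and independent per-digit failures (both of which are my simplifying assumptions), the odds of a whole code getting through cleanly are lousy:

```python
# Probability a multi-digit DTMF code survives a lossy VoIP path,
# assuming ~1.5 failed digits per dozen and independent failures.
p_digit_fails = 1.5 / 12                  # ~12.5% per digit, per the experience above

for code_length in (4, 6):
    p_code_ok = (1 - p_digit_fails) ** code_length
    print(f"{code_length}-digit code gets through ~{p_code_ok:.0%} of the time")
```

In other words, under those assumptions a 4-digit code succeeds barely more than half the time, and a 6-digit one fails more often than it works -- so you end up redialing two or three times per command.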

As a practical matter, thanks to VoIP, voicemail, and mobile phones, you almost *have* to implement your controls via IP rather than dial-in unless you want to pay AT&T $35/month for a landline phone that you almost never actually use.

That said, most internet-interfaced home automation controls are HORRIFICALLY insecure. If their interface consists of a Wiznet serial-to-IP module, and actually depends upon Wiznet's own password-based security, you should probably just assume it's been pwn3d several times over. ESPECIALLY if whatever's connected to the serial port end of the Wiznet module was designed to be physically connected to a real RS-232 serial port inside a locked cabinet, and all they did was strap the Wiznet module onto it. A security-free serial port isn't a great idea, but if it's inside a locked cabinet inside your house, it's pretty low on the list of concerns unless you have servants spending time unsupervised inside your home. That same security-free serial port strapped onto a Wiznet module with an 8-character password (and with no rate-limit or lockout policy) can literally be bruteforced via UDP in a matter of days if the password is purely alphanumeric.
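The "matter of days" claim is easy to sanity-check. Assuming a case-insensitive alphanumeric password (36 symbols) and no lockout, the worst case depends entirely on the guess rate -- the rates below are my illustrative assumptions, and a real embedded module would likely saturate well below the top figure:

```python
# Keyspace math for brute-forcing an 8-character alphanumeric password.
# Guess rates are illustrative assumptions, not measured figures.
keyspace = 36 ** 8                  # case-insensitive letters + digits

for guesses_per_sec in (1e5, 1e6, 1e7):
    days = keyspace / guesses_per_sec / 86_400
    print(f"at {guesses_per_sec:,.0f} guesses/s: {days:,.1f} days worst case")
```

Even at a modest million guesses per second over UDP, the entire keyspace falls in about a month, and the *average* hit comes in half that.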

ARM-based modules aren't a whole lot better, because manufacturers try to shave 17c from the manufacturing costs and cram everything into a few megs of flash. Of course, the first thing that gets cut when the compiled code is a little too big is the security. To manufacturers, security isn't a quantifiable selling point compared to features, and strong security raises tech support costs anyway by making the device more likely to NOT work for some non-obvious reason.

IMHO, the only secure way to connect embedded hardware with minimal security to the internet is through a gateway appliance that shields them from direct contact with the internet, and acts as a proxy server/firewall/application level gateway. Preferably, running over a different physical network, and at the very least (if wire-sharing is inevitable), segregating the insecure devices into a different IP range that can communicate ONLY with that gateway.

Note that if 100mbit ethernet is fast enough, you can actually wire two electrically-independent 10/100 ethernet jacks with a single cat5e cable (use green & orange for one, blue & brown for the other).

If you pull two cat5e cables from every room to the wiring closet, you can use one for gigabit ethernet (possibly using a pair of VLAN-capable switches that support layer-2 IGMP snooping to isolate the "TV multicast network" from the "home internet network" if you have U-verse), and use the other to wire a pair of 10/100 networks... one for your security cameras, and one for your home automation gear. While you're at it, pull another cat5e, so you can use one pair for RS-485, another pair for 1-wire, and have two pairs left for "whatever". Or, just run smurf tube, so you can pull new cable as necessary.

For what it's worth, Cisco also has a family of in-wall ethernet switches (they look kind of like an oversized wall plate that sticks out about 1cm from the wall), and (surprisingly) gigabit ethernet can often work over cat-3 cable that fails with 100mbit ethernet (gigE uses 4 pairs instead of 2, and can more aggressively renegotiate lower speeds if the wires aren't good enough; 100mbit ethernet will just fail).

Comment Re:Limited LTE Network Support (Score 1) 217

On the other hand, the Find 7 has two things the Nexus 5 will never have:

* removable battery

* microSD card

And as a practical matter, it's about as open as the Nexus 5. I think I even read that it has open-source drivers for one or two chips that are proprietary binary blobs in the Nexus 5.

A Nexus 5 can run Cyanogenmod with minimal difficulty. A Find 7 gets to have Cyanogenmod as its official operating system right out of the box.

Over at XDA, the Find 7 is a popular choice for "next phone" among Nexus 5 owners, and became the #1 choice of users who WERE looking at the Galaxy S5 almost overnight, once news leaked that even the T-Mobile S5 would have a locked bootloader.
