
Comment: Re:Simple set of pipelined utilties! (Score 1) 284

by hey! (#47930309) Attached to: Torvalds: No Opinion On Systemd

I don't think people understand the Unix philosophy. They think it's about limiting yourself to pipelines, but it's not. It's about writing simple robust programs that interact through a common, relatively high level interface, such as a pipeline. But that interface doesn't have to be a pipeline. It could be HTTP Requests and Responses.

The idea of increasing concurrency in a web application through small, asynchronous event handlers has a distinctly Unix flavor. After all the event handlers tend to run top to bottom and typically produce an output stream from an input stream (although it may simply modify one or the other or do something orthogonal to either like logging). The use of a standardized, high level interface allows you to keep the modules weakly coupled, and that's the real point of the Unix philosophy.
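To make that concrete, here's a minimal sketch (in Python, purely illustrative) of what "simple programs interacting through a common, relatively high level interface" looks like: each stage only knows about a stream of lines in and a stream of lines out, so the stages compose like a shell pipeline, and the same handlers could just as easily be chained behind an HTTP request/response cycle.

    # Minimal sketch: small, single-purpose "filters" that only know about a
    # common interface (an iterable of lines in, an iterable of lines out),
    # composed the way a shell pipeline composes processes.
    import sys

    def grep(pattern, lines):
        # Pass through only the lines containing `pattern` (like grep).
        return (line for line in lines if pattern in line)

    def to_upper(lines):
        # Transform each line (like tr '[:lower:]' '[:upper:]').
        return (line.upper() for line in lines)

    def tee(lines, logfile):
        # An "orthogonal" stage: observe the stream without changing it (like tee).
        for line in lines:
            logfile.write(line)
            yield line

    if __name__ == "__main__":
        # Roughly: cat | grep ERROR | tr '[:lower:]' '[:upper:]' | tee errors.log
        stream = sys.stdin
        stream = grep("ERROR", stream)
        stream = to_upper(stream)
        stream = tee(stream, open("errors.log", "w"))
        for line in stream:
            sys.stdout.write(line)

The point isn't the pipeline mechanics; it's that because every stage depends only on that one narrow interface, any stage can be replaced, reordered, or reused without the others knowing about it.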

Comment: Re:Why dilute the brand? (Score 2) 250

by King_TJ (#47930229) Attached to: Is the Tesla Model 3 Actually Going To Cost $50,000?

I get your point, but I don't think that's the business model.

It looks to me like Tesla put out the high-dollar "elite" sports car as the first product, in order to generate enough revenue (higher profit margins on each one) to build more of a company aimed at the mass market.

So this isn't about "brand dilution" so much as the company knowing who it wants its customer to be -- and gradually lowering prices on the cars as the technology and profits from previous sales allow it to get there.

Tesla isn't trying to compete with Ferrari, Lamborghini, and the like. It wants to reach a point where it's considered a superior brand competing with brands like Nissan, Toyota, GM, Ford and Chrysler.

Comment: Re:Edge routers are expensive (Score 1) 83

by dgatwood (#47924071) Attached to: Why Is It Taking So Long To Secure Internet Routing?

I keep thinking that if an ISP really wanted to cut costs, they could proactively monitor their network for problems:

  • Provide the CPE preconfigured, at no additional cost to the customer. (Build the hardware cost into the price of service.)
  • Ensure that the CPE keeps a persistent capacitor-backed log across reboots. If the reboot was caused by anything other than the customer yanking the cord out of the wall or a power outage, send that failure info upstream. Upon multiple failures in less than a few weeks, assume that the customer's CPE is failing, and call the customer with a robocall to tell them that you're mailing them new CPE to improve the quality of their service.
  • Detect frequent disconnects and reconnects, monitor the line for high error rates, etc. and when you see this happening, treat it the same way you treat a CPE failure.
  • If the new hardware behaves the same way, silently schedule a truck roll to fix the lines.

If done correctly (and if clearly advertised by the ISP so that users would know that they didn't need to call to report any outages), it would eliminate the need for all customer service except for billing, and a decent online billing system could significantly reduce the need for that as well.
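Here's a rough sketch of that escalation policy, just to show how little logic it actually takes. The thresholds, field names, and data structures are my own illustrative assumptions, not anything a real ISP exposes:

    # Illustrative sketch of the escalation policy described above.
    # All thresholds and field names are assumptions made up for this example.
    from dataclasses import dataclass, field
    from datetime import datetime, timedelta
    from typing import List

    REBOOT_FAILURE_LIMIT = 3              # unexplained reboots before swapping hardware
    FAILURE_WINDOW = timedelta(weeks=3)   # "less than a few weeks"
    ERROR_RATE_THRESHOLD = 0.01           # fraction of errored seconds on the line

    @dataclass
    class CpeHistory:
        unexplained_reboots: List[datetime] = field(default_factory=list)  # from the capacitor-backed log
        line_error_rate: float = 0.0
        recently_replaced: bool = False   # already mailed new CPE?

    def next_action(history: CpeHistory, now: datetime) -> str:
        # Decide what to do for one customer, per the policy above.
        recent = [t for t in history.unexplained_reboots if now - t < FAILURE_WINDOW]
        flaky = (len(recent) >= REBOOT_FAILURE_LIMIT
                 or history.line_error_rate > ERROR_RATE_THRESHOLD)
        if flaky and not history.recently_replaced:
            return "robocall customer and mail replacement CPE"
        if flaky and history.recently_replaced:
            return "schedule truck roll to fix the line"
        return "no action"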

Comment: Re:So, a design failure then. (Score 1) 155

by hey! (#47921919) Attached to: Developing the First Law of Robotics

It depends on your design goals.

In Asimov's story universe, the Three Laws are so deeply embedded in robotics technology they can't be circumvented by subsequent designers -- not without throwing out all subsequent robotics technology developments and starting over again from scratch. That's one heck of a tall order. Complaining about a corner case in which the system doesn't work as you'd like after they achieved that seems like nitpicking.

We do know that *more* sophisticated robots can be designed with more subtle ethical systems -- which is another sign of a robust fundamental design. The simplistic ethics is what subsequent designers get "for free" when they use an off-the-shelf positronic brain to control a welding robot or bread-slicing machine.

Think of the basic positronic brain design as a design framework. One of the hallmarks of a robust framework is that easy things are easy and hard things are possible. By simply using the positronic framework the designers of the bread slicing machine don't have to figure out all the ways the machine might slice a person's fingers off. The framework takes care of that for them.
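If it helps, here's a toy software analogy (my own, not Asimov's) of "easy things are easy and hard things are possible": the framework's base class wraps every action in the safety check, so a simple appliance gets First Law behavior for free, while a more sophisticated design can refine the check.

    # Toy analogy only: a framework base class enforces the safety check around
    # every action, so simple subclasses inherit it for free, and sophisticated
    # subclasses can refine it. Names and logic are invented for illustration.

    class PositronicBrain:
        def harms_a_human(self, action) -> bool:
            # Baseline, conservative check every robot inherits.
            return action.get("near_human", False)

        def perform(self, action):
            if self.harms_a_human(action):
                raise RuntimeError("First Law: refusing %r" % (action,))
            self.execute(action)

        def execute(self, action):
            raise NotImplementedError

    class BreadSlicer(PositronicBrain):
        # Easy case: the designer writes only domain logic; the safety
        # interlock comes from the framework.
        def execute(self, action):
            print("slicing", action["loaf"])

    class SurgicalRobot(PositronicBrain):
        # Harder case: a more subtle ethical model is possible by refining
        # the inherited check (consented surgery is not "harm").
        def harms_a_human(self, action) -> bool:
            return super().harms_a_human(action) and not action.get("consented_surgery", False)

        def execute(self, action):
            print("operating on", action["patient"])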

Comment: Re:Article shows fundamental lack of understanding (Score 2) 178

by dgatwood (#47921615) Attached to: Why Apple Should Open-Source Swift -- But Won't

They won't see people switching to Swift uniformly. There are trillions of lines of code written in Objective-C, and programmers already know it and are comfortable with it. There are no tools for migrating code from Objective-C to Swift, much less the hodgepodge of mixed C, Objective-C, and sometimes C++ that quite frequently occurs in real-world apps. So for the foreseeable future, you'd end up just adding Swift to your existing apps, which means you now have three or four languages mixed in one app instead of two or three, and one of them looks completely different from the others. I just don't see very many developers seriously considering adopting Swift without a robust translator tool in place.

I do, however, expect to see Swift become the language of choice for new programmers who are coming from scripting languages like Python and Ruby, because it is more like what they're used to. In the long term, they'll outnumber the Objective-C developers, but the big, expensive apps will still mostly be written in Objective-C, simply because most of them will be new versions of apps that already exist.

BTW, Apple never really treated Java like a first-class citizen; it was always a half-hearted bolt-on language. My gut says that they added Java support under the belief that more developers knew Java than Objective-C, so it would attract developers to the platform faster. In practice, however, almost nobody ever really adopted it, so it withered on the vine. Since then, they have shipped and subsequently dropped bridges for both Ruby and Python.

Any implication that Swift will supplant Objective-C like Objective-C supplanted Java requires revisionist history. Objective-C supplanted C, not Java. Java was never even in the running. And Objective-C still hasn't supplanted C. You'll still find tons of application code for OS X written in C even after nearly a decade and a half of Apple encouraging developers to move away from C and towards Objective-C. (Mind you, most of the UI code is in Objective-C at this point.) And that's when moving to a language that's close enough to C that you don't have to retrain all your programmers.

Compared with the C to Objective-C transition, any transition from Objective-C to Swift is likely to occur at a speed that can only be described as glacial. IMO, unless Apple miraculously makes the translation process nearly painless, they'll be lucky to be able to get rid of Objective-C significantly before the dawn of the next century. I just don't see it happening, for precisely the same reason that nine years after Rails, there are still a couple orders of magnitude more websites built with PHP. If a language doesn't cause insane amounts of pain (e.g. Perl), people are reluctant to leave it and rewrite everything in another language just to obtain a marginal improvement in programmer comfort.

Comment: Re:The protruding lens was a mistake (Score 2) 391

by hey! (#47921441) Attached to: Apple Edits iPhone 6's Protruding Camera Out of Official Photos

I don't think you've really grasped Apple's design sensibility. Job one for the designers is to deliver a product that consumers want but can't get anywhere else.

The "camera bulge" may be a huge blunder, or it may be just a tempest in a teapot. The real test will be the user's reactions when they hold the device in their hand, or see it in another user's hand. If the reaction is "I want it", the designers have done their job. If it's "Holy cow, look at that camera bulge," then it's a screw-up.

The thinness thing hasn't been about practicality for a long, long time; certainly not since smartphones got thinner than 12mm or so. There have always been practical things they could have given us other than thinness, but what they want is for you to pick up the phone and say, "Look how thin they made this!" The marketing value of that is that it signals that you've got the latest and greatest device. There's a limit of course, and maybe we're at it now. Otherwise we'll be carrying devices in ten years that look like big razor blades.

At some point in your life you'll probably have seen so many latest and greatest things that having the latest and greatest isn't important to you any longer. That's when you know you've aged out of the demographic designers care about.

Comment: Re: Apple not in my best interests either (Score 1) 178

by dgatwood (#47919715) Attached to: Why Apple Should Open-Source Swift -- But Won't

No, they're saying Apple switched because GCC's core wasn't designed in a way that made it easy to extend the Objective-C bits in the way that Apple wanted. And that could well be part of it—I'm not sure.

But I think a bigger reason was that Apple could use Clang to make Xcode better, whereas GCC's parsing libraries were A. pretty tightly coupled to GCC (making it technically difficult to reuse them) and B. released under a license that made linking them into non-open-source software problematic at best.

Comment: Re:Where the pessimism comes from. (Score 4, Insightful) 185

by hey! (#47915329) Attached to: Sci-Fi Authors and Scientists Predict an Optimistic Future

I'd argue that we do try to write about the future, but the thing is: it's pretty damn hard to predict the future. ...
The problem is that if we look at history, we see it littered with disruptive technologies and events which veered us way off course from that mere extrapolation into something new.

I think you are entirely correct about the difficulty in predicting disruptive technologies. But there's an angle here I think you may not have considered: the possibility that the cultural values and norms of the distant future might be so alien to us that readers wouldn't identify with future people or want to read about them and their problems.

Imagine a reader in 1940 reading a science fiction story which accurately predicted 2014. The idea that there would be women working who aren't just trolling for husbands would strike him as bizarre and not very credible. An openly transgendered character who wasn't immediately arrested or put into a mental hospital would be beyond belief.

Now send that story back another 100 years, to 1840. The idea that blacks should be treated equally and even supervise whites would be shocking. Go back to 1740. The irrelevance of the hereditary aristocracy would be difficult to accept. In 1640, the secularism of 2014 society would be distasteful, and the relative lack of censorship would be seen as radical (Milton wouldn't publish his landmark essay Areopagitica for another four years). Hop back to 1340. A society in which the majority of the population is not tied to the land would be viewed as chaos, positively diseased. But in seven years the Black Death will arrive in Western Europe. Displaced serfs will wander the land, taking wage work for the first time in places where they find labor shortages. This is a shocking change that will resist all attempts at reversal.

This is all quite apart from the changes in values that have been forced upon us by scientific and technological advancement. The ethical issues discussed in a modern text on medical ethics would probably have frozen Edgar Allan Poe's blood.

I think it's just as hard to predict how the values and norms of society will change in five hundred years as it is to accurately predict future technology. My guess is that while we'd find things to admire in that future society, overall we would find it disturbing, possibly even evil according to our values. I say this not out of pessimism, but out of my observation that we're historically parochial. We think implicitly like Karl Marx -- that there's a point where history comes to an end. Only we happen to think that point is *now*. Yes, we understand that our technology will change radically, but we assume our culture will not.

Comment: Where the pessimism comes from. (Score 5, Insightful) 185

by hey! (#47914675) Attached to: Sci-Fi Authors and Scientists Predict an Optimistic Future

The pessimism and dystopia in sci-fi doesn't come from a lack of research resources on engineering and science. It mainly comes from literary fashion.

If the fashion with editors is bleak, pessimistic, dystopian stories, then that's what readers will see on the bookshelves and in the magazines, and authors who want to see their work in print will color their stories accordingly. If you want to see more stories with a can-do, optimistic spirit, then you need to start a magazine or publisher with a policy of favoring such manuscripts. If there's an audience for such stories it's bound to be feasible. There are a thousand serious sci-fi writers for every published one; most of them are dreadful, it is true, but there are sure to be a handful who write the good old stuff, and write it reasonably well.

A secondary problem is that misery provides many things that a writer needs in a story. Tolstoy once famously wrote, "Happy families are all alike; every unhappy family is unhappy in its own way." I actually think Tolstoy had it backwards; there are many kinds of happy families. Dysfunction, on the other hand, tends to fall into a small number of depressingly recognizable patterns. The problem with functional families from an author's standpoint is that they don't automatically provide something that he needs for his stories: conflict. Similarly a dystopian society is a rich source of conflicts, obstacles and color, as the author of Snow Crash must surely realize. Miserable people in a miserable setting are simply easier to write about.

I recently went on a reading jag of sci-fi from the 30s and 40s, and when I happened to watch a screwball comedy movie ("His Girl Friday") from the same era, I had an epiphany: the worlds of the sci-fi story and the 1940s comedy were more like each other than they were like our present world. In the roles of women and men, the prevalence of religious belief, the kinds of jobs people did, and what they did in their spare time, the future of 1940 looked an awful lot like 1940.

When we write about the future, we don't write about a *plausible* future. We write about a future world which is like the present or some familiar historical epoch (e.g. Roman Empire), with conscious additions and deletions. I think a third reason may be our pessimism about our present and cynicism about the past. Which brings us right back to literary fashion.
