
Comment Re:Why? (Score 1) 86

There are a couple of reasons.

For starters, I remember an interview from way back in the aughts, in which they asked an IE designer for his thoughts on the Firefox browser, which was at that point really cutting into IE's market share. I remember one comment along the lines of "really good browser: the only thing I would change is to put tabs on top. The address bar and everything else only affects the current tab, so you want tabs on top to give the impression that each tab is like its own, separate browser." At the time, IE didn't have tabs, so he could say this sort of thing without feeling he was shooting himself in the foot.

He did cite Microsoft usability studies (no specific study, just the nebulous term "usability studies") as part of that comment. Eventually Mozilla did its own study and concluded pretty much the same thing. There was also an argument that tabs on top would be easier to select while using less screen space, because of the "infinite space" of a screen-edge tab. You see, if you move the cursor toward a tab and go too far, one of two things can happen: you either overshoot the tab and land on something else (a miss), or you hit the edge of the screen and the cursor stops on the tab anyway. The argument was that this is in effect like having an infinitely tall tab, so it's easier to hit.
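The comment never names it, but the formal version of that argument is Fitts's law. A minimal sketch of the arithmetic, with pixel values I've made up purely for illustration:

```typescript
// Fitts's law: index of difficulty ID = log2(distance / width + 1).
// Lower ID means an easier target. A tab flush against the screen edge
// behaves as if it were enormously tall in the approach direction,
// because the cursor stops at the edge instead of overshooting.
function indexOfDifficulty(distancePx: number, widthPx: number): number {
  return Math.log2(distancePx / widthPx + 1);
}

console.log(indexOfDifficulty(400, 24));     // mid-screen 24 px tab: ~4.14
console.log(indexOfDifficulty(400, 10_000)); // edge tab, effectively unbounded: ~0.06
```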

Now, some personal comments on why I hate that entire line of reasoning:
First, back at the time I found the initial comments from an MS employee to be odd, because that's exactly the opposite of how MS had trained users to think of tabs in every one of their products (except for this hypothetical "tabs on top" browser, which didn't exist anywhere yet). Before tabbed browsing, you mostly saw tabs in OS preference dialogs, where sometimes the tabs were on top just because they were used as categorical dividers (you know, just like real tabs in a real filing cabinet were always meant to be). But just as often, there would be a small section of tabs embedded in some larger dialog pane. The only thing they had in common was the obvious "tabs are nested within windows". To the population of the time, window and browser were inextricably linked.

But that was then, and this is now. What about how people ten years later are used to interacting with the browser? Well, for one, most still don't actually think of each tab as a "mini-browser". If anything, they just expect the browser control elements to go away altogether to make room for the page. (In fact, the ease with which mobile browsers have hidden away such controls proves to me that taking up _any_ space within a tab is probably a losing proposition.) But where hiding elements isn't possible, the view is still generally that the window is a true "window" out onto some slice of the internet. To me personally, arguing that each tab should contain "its own" URL bar and buttons is sort of like arguing that each window in your car should have its own steering wheel and speedometer. It just doesn't follow for me to embed controls within content. But since the controls are being hidden away fast, it's largely a matter of choice.

So why should tabs-on-top be a good default choice, then? The argument is that it makes tabs easy to select with minimal real estate. The "infinite space of the screen-edge tab". Unfortunately, I don't think I have ever had a browser end at a screen edge. Either I'm on a Mac or a Linux variant that has a bar on top, or I'm in Windows where I never use full-screen mode. (I'm having trouble even thinking of a time when I would use full-screen mode for a browser now that screens are all obscenely wide.) So the infinite-space argument is DOA. And then there are the times when I'm on a half-foreign system (read: work laptop) where the touchpad cursor is slow and all I want is for the cursor to get there; overshooting is not a concern. In those cases, moving the tabs farther from center and making them shorter, while trying to make up for it with pseudo-infinite space, just makes them shorter and farther away.

And on a final, unrelated note, when the summary notes, "it is feasible to combine the address, search, and find box into one": of course it is. We did it before 2004. It was called a command line. It's trivially easy to designate some box as the "do things with this input" box. The actual command used to act on that input was still offloaded to other elements, such as the "go" button and the "search" spyglass. Even in 2004, it wasn't a matter of feasibility. The only reason the bars were ever separate in the first place is that more screen space for your "Ask Jeeves" toolbar meant more advertising for "Ask Jeeves".
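In case it isn't obvious how trivial that dispatch really is, here's a minimal sketch; the handler names are stand-ins I've invented, not any browser's actual internals:

```typescript
// Hypothetical dispatcher for a combined address/search/find box.
const navigateTo = (url: string) => console.log("navigate:", url);
const webSearch = (query: string) => console.log("search:", query);
const findInPage = (text: string) => console.log("find:", text);

function handleInput(raw: string): void {
  const input = raw.trim();
  if (input.startsWith("/")) {
    findInPage(input.slice(1));       // "/needle" finds text in the page
  } else if (/^[a-z]+:\/\//i.test(input) || /^\S+\.\S{2,}$/.test(input)) {
    navigateTo(input);                // looks like a URL
  } else {
    webSearch(input);                 // everything else goes to search
  }
}

handleInput("slashdot.org");          // navigate
handleInput("tabs on top usability"); // search
```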

Comment Re:Scalpel or gun can be used for good or bad ... (Score 4, Insightful) 406

When doctors or nurses use their knowledge of anatomy in order to torture or conduct medical experiments on helpless subjects, we are rightly outraged. Why doesn't society seem to apply the same standards to engineers?

Whenever I read something like this, I immediately think of Florman's "The Existential Pleasures of Engineering". Despite the title, Florman's book is actually a spirited apology for the engineering profession, written in an age when everyone was lamenting all the modern horrors that those damned engineers could have prevented if they had just been more ethical.

As Florman notes, there has been a large focus for the past half-century on making engineers more ethically aware, and it's mostly pointless. Despite what most people seem to believe, engineers are not philosopher kings, any more than Technology is some sort of self-sufficient, self-empowering beast working counter to the benefit of human society. Both do exactly what the rest of society tells (read: pays, begs, and orders) them to do, and nothing more. And while you don't see many engineers saying this -- because when someone tells you that you run the world and hold the future of all mankind in your hands, you're disinclined to temper your ego and deny it -- we only do what the suits pay us to do, and if we don't do that, they fire us and move on to someone else who will.

Let's ask this another way: why aren't businessmen considering the ethical implications of their investments? Why aren't militaries, bureaucracies, and governments considering the ethical implications of their orders? Why isn't the average person taking five minutes to understand a problem now, so he doesn't demand that government, the market, and God on high give him an answer that he's going to hate more than the original problem a year from now?

Every profession has ethical considerations. More ink has been spilled and time spent on the subject of ethics in engineering and the practical sciences than on any discipline save medicine. And yet it does not solve the problem, and will not solve the problem, because that is not where the problem lies.

Comment Re:Iridium + Something Else (Score 2) 175

I did some work with both Iridium and Inmarsat on a project a while back. It's been a while, so my comments are mostly qualitative, not quantitative.

Iridium offers a global constellation with redundant satellites (which is good, since they lost a few a few years back), while Inmarsat uses a directional antenna and relies on you being able to actually aim it. If you're in the Inmarsat range of coverage (and pretty much everyplace habitable is), I'd recommend it. You can get an ethernet-ready single-package antenna+modem (about the size of a thick laptop) that's pretty easy to aim (the unit provides some guidance). This assumes you're on foot, of course. If you have a dedicated vehicle, you might invest in a tracking antenna. The data rates we got were in the 35 Mbps range.

Iridium is literally dial-up over satellite. The service was designed for voice telephony, and it uses an analog signal until the satellite relays it to a base station with modems and an internet connection. It will be reliable, but very slow. The 0.0024 Mbps rate Spazmania gives below matches my recollection.

The two units are similarly portable: the Inmarsat unit is the size of a thick laptop, while the Iridium modem is half that, but you have to add an antenna.
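For a rough sense of what those rates mean in practice, some back-of-envelope arithmetic using the figures quoted above (my own illustration; the 50 KB email size is just an assumption):

```typescript
// Transfer time in seconds for a payload at a given link rate.
function transferSeconds(megabytes: number, megabitsPerSec: number): number {
  return (megabytes * 8) / megabitsPerSec; // 8 bits per byte
}

const emailMB = 0.05; // a ~50 KB email
console.log(transferSeconds(emailMB, 35));     // Inmarsat rate above: ~0.01 s
console.log(transferSeconds(emailMB, 0.0024)); // Iridium rate above: ~167 s
```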

Comment Re:Maybe (Score 1) 293

Here's the issue: either there is something out there we can't see (hence "dark matter") which takes on more and more fantastical properties the more we learn about it, or our understanding of the universe's mechanics on a grand scale is wrong (and wrong in such a way that it still lines up pretty well with our small-scale understanding). Or, for that matter, both could be true to some degree.

The scale is beyond the range of direct experimentation, so what can you do? In the former case, you can try to find some other way to observe the material. In the latter case, you can keep making observations until you have enough data to form a new understanding. That's about it. Until you have progress on one, it's very, very difficult to rule out the other.

At this rate, I am disinclined to believe in dark matter. Unfortunately, whether you believe in it or not you have to go through a similar search to determine what's true. This is the slog we're going through now.

Comment Re:GMA 600? Last years Atom? $200?!? (Score 2) 214

It's basically an MCU eval board: short PCB runs, low-quantity orders for every part in the BOM, very generic capabilities. EVBs are expensive. Sure, the engineering time that goes into it is pricey, but on a per-unit basis it's the fact that these are very general-use, quasi-custom toys that drives up the cost.

Comment It's about time (Score 3, Insightful) 192

Every time I look at my old engineering texts taking up shelf space I think, "I wish that someone could take all these, cut out about half of the valuable material, dice up the remainder among 30-odd sites and apps, and then tie it to a device with a 7-year shelf life."

As anyone who's dealt with education-oriented online media (such as Blackboard) can tell you, the products are not always stellar. You get less text, it's usually structured in such a way that it takes longer to read, the access is spotty, and it will probably not work as well a year from now. Even the number one benefit of digitization -- search -- tends to be awkward or incomplete.

They say the iPad is about half the cost of books. I can easily believe that, but it also means you don't get to buy used books, or re-sell your used books. They've streamlined the process in a way that either offers no benefit, or benefits suppliers more than students.

It did convince the university to buy its students' books for them, provided you don't consider being forced to buy an iPad to be the same as being forced to "rent" used books. Or, for that matter, so long as you don't consider going to a free library an option. And so long as you don't consider that buying an iPad and getting electronic copies of textbooks was always an option for most books. All the ways they've streamlined the process are for the primary benefit of the supplier of the material.

Overall, it seems workable for books that you have no interest in keeping beyond one semester (electives). But that is exactly the case where you can generally benefit from being flexible: buying bog-standard books from any store you please, buying a digital copy, or going to the library as needed. If you're talking about material that will actually continue to be relevant after a single semester, it sounds like a bad idea, putting a random-valued timer on your reference material.

Comment Re:interesting take. (Score 1) 158

the problem has always been that you have to be of questionable morality to harvest this data, get data of probably low quality, and piss people off

It may not solve the "current" problem, since advertisers won't even use this if they don't feel it will help them - they already have their means. But that doesn't mean it's useless. Don't be myopic.

No one said it's useless. It's very useful, primarily to advertisers, who will definitely make use of it if it's available. And it aims to solve _exactly_ the "current problem" of advertising, couched in language that presumes there is surely some other use out there, which we're to take as the primary use case once they figure out what it is. So far, their attempt to find another use case amounts to a news site that skips right to the sports section (or a site that knows that to you, news.tld really means news.tld/sports). At best (where the user is concerned), it's a solution in want of a problem. At worst, it's an attempt to obscure its primary purpose. And the morality of the purpose itself remains questionable.

Moreover, it could theoretically work for a lot more than just ads and marketing. It would basically permit third parties to be granted access to your data. So, for instance, you could grant access to your geolocation data to only certain mapping sites. Or your social media history only to a game site that will utilize it.

There are many applications that are appearing for this kind of information harvesting that aren't all malicious. Some of them are even exciting.

Except that's not what's being proposed at all. It's a good idea that any data sharing should require explicit user approval on a party-by-party basis: history, geolocation data, anything. But they're not talking about parsing your geolocation or social connections or anything like that. Their proposal is not "sharing certain things with certain parties over the internet". That's pointlessly broad. They're talking about divining general categories of interest, StumbleUpon-style, from your browser history -- the types of sites you'd like to see more of, and the stuff you're most likely to buy -- and sending that to designated parties. In other words, they propose giving you increased power to allow people to advertise to you. But, since they haven't fixed any other method of tracking, you don't have a corresponding increase in your power to disallow advertising. The benefits of this are 100% in favor of the advertiser.
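To make the distinction concrete, here's a hypothetical sketch of that category-divining step. The domain-to-category table and the category names are invented for illustration; this is not Mozilla's actual API or taxonomy:

```typescript
// Reduce raw browsing history to a few coarse interest labels.
const categoryOf: Record<string, string> = {
  "espn.com": "sports",
  "slashdot.org": "technology",
  "allrecipes.com": "cooking",
};

function topInterests(historyDomains: string[], n: number): string[] {
  const counts = new Map<string, number>();
  for (const domain of historyDomains) {
    const cat = categoryOf[domain];
    if (cat) counts.set(cat, (counts.get(cat) ?? 0) + 1);
  }
  return [...counts.entries()]
    .sort((a, b) => b[1] - a[1]) // most-visited category first
    .slice(0, n)
    .map(([cat]) => cat);
}

// Only these coarse labels, not the raw history, get handed out --
// which is precisely why they're useful to advertisers and little else.
console.log(topInterests(["slashdot.org", "slashdot.org", "espn.com"], 2));
// -> ["technology", "sports"]
```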

Um, they're proposing that you, the user, can tell sites what you want them to know. Nothing at all, or certain information for certain sites. This is opt-in. It will be useless to those of us who don't want it. It will help those who aren't as hardline about ads.

Respectfully, no technology in the history of the web has actually panned out this way. The web as it's consumed now is developer-skewed. Ultimately, you're looking to the developers to provide the medium, and often the content, for everything you consume. Your only ultimate freedom is to choose to participate on the developer's terms, or not. Everything else is either granted as a developer's favor (perhaps because he actually cares about a democratized, user-empowered web), pried out of the service under false pretenses, or just a result of a blind spot. If the developers don't want to support IE8 anymore, they have the power to lock you out (unless you spoof the UA string). If they want to force javascript on a site that doesn't even need it, they can detect non-compliance and refuse to serve you (unless you run NoScript and essentially lie about having a browser that runs javascript). If they want to require GPS location from your phone and refuse to let you use their location-driven service otherwise, they can. This is generally in the name of offering a uniform experience (read: creating complex applications with minimal testing), but the fact of the matter is that you still lock people out of the information they want because they aren't consuming it on the terms you want.

If everyone else is fine enabling cookies just to get any use out of, say, target.com, then you can bet you will eventually be expected to as well. Even if you're just "window shopping" and don't need cookies for any legitimate reason. It becomes the burden of the cookie-wary to either compromise their preferences or figure out a technical solution to bypass the developer's preferences. This is the meaning of my phrase above, "you are as free as your neighbors want to be".

Any useful technology enables new things that were not possible without it. It's a tautology, then, to say that such technology will become a requirement for something. If it's accepted as a requirement for enough things, it soon becomes an acceptable requirement for pretty much _everything_, the way cookies, CSS, javascript, and the right user agent string all are these days. That is why we have to be skeptical, and not just throw our arms lovingly around every shiny bauble that gets thrown our way.

Comment Re:interesting take. (Score 1) 158

This parallels the Mozilla "ping" fiasco of a few years back. In that case, someone at Mozilla Labs came to the conclusion that people were always going to be tracked somehow, and all the gyrations of cookies, 1x1-pixel images, and so on were just producing a lot of waste traffic. If it's going to happen anyway, why not make the tracking process as technically efficient as possible? Thus they proposed that links could carry an attribute telling the browser to "ping" a designated server with a minimal request whenever the link was followed. Naturally, this would be opt-out: optional because they don't believe that they are evil, and opt-out because if it were opt-in, no one would do it. Of course, users objected, the internet got angry, and it never happened.
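(For what it's worth, the attribute did eventually resurface in the HTML spec as ping on anchors, though Firefox itself ships with it disabled. A minimal sketch of what it looks like; tracker.example is a placeholder:)

```typescript
// Build a link carrying the HTML ping attribute. When the link is
// followed, supporting browsers POST a tiny "PING" request to each
// URL listed in ping= -- tracking with no cookie or 1x1 image needed.
const a = document.createElement("a");
a.href = "https://example.com/article";
a.setAttribute("ping", "https://tracker.example/hit");
a.textContent = "read the article";
document.body.appendChild(a);
```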

Now we have another proposal from Mozilla for making the user privacy/advertising money trade-off slightly more efficient, but no more palatable. In theory there might be some benefit: sites wouldn't _have_ to collect browser history the old-fashioned way, and maybe if enough people don't care enough to change, the advertising wolves will jump on them while the rest of us pass merrily on our way. In practice, that won't happen: advertisers will use tracking cookies, images, history sniffing, _and_ web interests, and people who opt out will probably get a snarky message about how any given website is not available unless they upgrade to a modern browser and enable cookies, javascript, and web interests. Traffic will not decrease. CPU cycles and terabytes of storage will not be saved. Users will have no more, and probably less, control over their privacy. And as with every new web technology, you're only as free as your neighbors want to be.

There are other applications which would be valuable: a suggestion engine for movies as Netflix uses, for music as the iTunes "genius" feature uses, for products as Amazon uses, for friends as Facebook uses, and for dates as way too many sites use. In fact, in terms of making the web useful, suggestion engines are probably behind only search and the sheer act of being able to fetch data from another computer. But none of those is really what's being proposed here. All of them require deeper and more specialized connections than would be available through browser history. No, this proposal is just about going out of one's way technically to help advertisers, while in the long run probably providing a net negative to the users and the internet infrastructure.

Comment Re:interesting take. (Score 4, Informative) 158

Yes, this is a more recent (on the scale of years) method: load a bunch of links, let the user's browser assign them properties based on whether they've been visited or not, then let the site's javascript read the properties back from the DOM. This is in addition to more direct methods such as cookies (we know where you've been because some party we have an agreement with has been keeping a log for us), super-cookies (we know where you've been through cookie-like files from Flash and other things that don't typically get cleared), and 1x1-pixel images (we know where you've been because you've been phoning home to an image server with every page load).
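A minimal sketch of the visited-link trick described above. It worked at the time; browsers have since made getComputedStyle lie about :visited styling precisely to close this hole:

```typescript
// Historical CSS :visited sniffing. Style visited links a unique color,
// then read the computed color back to learn whether a URL is in the
// user's history. Modern browsers report the unvisited style regardless.
const style = document.createElement("style");
style.textContent = "a:visited { color: rgb(1, 2, 3); }";
document.head.appendChild(style);

function probablyVisited(url: string): boolean {
  const a = document.createElement("a");
  a.href = url;
  document.body.appendChild(a);
  const visited = getComputedStyle(a).color === "rgb(1, 2, 3)";
  document.body.removeChild(a);
  return visited;
}

const hits = ["https://example.com/", "https://slashdot.org/"]
  .filter(probablyVisited); // URLs the sniffer thinks you've been to
```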

Comment Re:Retroactively? (Score 1) 279

It did. The idea, as Lucas originally described it in his draft (back when it was called "Journal of the Whills"), was that it was like picking a book off the shelf and finding it was the fourth volume of a history series. (Though he only said that later; it seems more likely that it was actually more like coming into a Buck Rogers serial halfway through.) Perhaps the original poster is referring to the way it was subtitled retroactively _on posters_. That is, until Episode I came out, it was just "Star Wars", with sequels "(Star Wars:) The Empire Strikes Back" and "(Star Wars:) Return of the Jedi".

Episodes 1 through 3 are a much more modern idea than Lucas would have many believe. He never really talked about them until he was into his second sequel. There was likely no master design for a 9-part arc as he described. There was just a convenient gap left by an earlier gimmick.

None of this really changes the point, though: Star Wars doesn't have a title problem. Well, other than the fact that it keeps changing titles on movies that already exist, a problem this proposed scheme exacerbates. Nobody cares whether a Star Wars movie follows the Skywalkers any more than they care whether every videogame, novel, cartoon, and comic follows them. Star Wars is the brand, not the Skywalkers. And while that makes it ever so slightly awkward for the movie to call itself "Star Wars 7", they never did that in the first place, and likely won't do it by the time the movie has a real title. So who cares? Star Wars: Number-free Title.

Comment Re:First (Score 1) 405

Microsoft actually filed a patent on this, for "courtesy-aware phones". Walk into a church or movie theater broadcasting the appropriate "quiet, please" alert, and cell phones could automatically go to silent/vibrate mode, or possibly receive-only mode.

Most people's reaction was "who needs that?" with a smattering of "sounds like something useful to those planning a theater shooting".

Comment Re:First (Score 1) 405

Pretty much true. My first thought when I'm outside downtown and I see someone smoking is, "who still smokes?" Of course, the fact that you have to stand 15 feet away from the door -- which is usually far enough out onto a busy sidewalk that it just encourages smokers to travel three more feet to the relative peace of a dark, dingy alley -- probably doesn't do much for the appearance of coolness either.

Comment Re:90% of new solutions ... (Score 2) 92

Innovation is extremely overrated. Most of our tech-driven culture is not based on innovation. Not even close. All the innovating was done decades ago, when people started dreaming up what might be possible if phenomena they had only a slippery grasp of were leveraged into machines that hadn't yet been built.

No, our culture is driven by cost reduction. Once something becomes cheap enough, we do it. If it's not, we put it on a shelf until it is. For the computing revolution, the internet bubble of the 90s, and the social web of today, the driving factors have all been about scale: quantity as a quality all its own. You could have done the same things decades earlier -- hell, we _were_ doing the same things decades earlier -- but only when it became cheap enough to increase the scale by three orders of magnitude or more did it become a game-changer.

There are plenty of cases in recent history of a product being invented "before its time". It exists in relative obscurity, figuratively collecting dust until it is brushed off to solve the problem that we only just got to at the present moment. You can't drive technology from behind; it has to be pulled along by the context of the situation. Technology by its very nature is very light on innovation, because it is firmly rooted in a practical context of costs and needs.

Now as to whether some technological innovation can be automated: definitely. All engineering is about working around the edges of a problem until you can describe what you need in terms of what you can do. You probe the problem, consider edge cases, and trace the general shape of a missing block until you can say, "what we really need here is some way to measure X and Y, figure out which is closer to Z, and then just give us that one." And then you have a block-level spec. And then you drill into that block as a device all its own, filling in the parts that are obvious from experience and education until you get to the difficult part, the truly novel part, and again say "what we need here is some way to measure X...." Once you get down to a block that only contains things that have already been invented, you're done. Then you have your new block, in your new device, solving your new problem. That's what invention is: work.
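Taking that block-level spec literally, the bottom of the recursion really is this mundane; a sketch of the "which is closer to Z" block from the paragraph above:

```typescript
// The spec, taken literally: measure X and Y, figure out which is
// closer to Z, and just give us that one.
function closerToZ(x: number, y: number, z: number): number {
  return Math.abs(x - z) <= Math.abs(y - z) ? x : y;
}

console.log(closerToZ(3, 10, 4)); // -> 3
```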

Get a "smart" enough robot, one which is flexible enough in its model of the world that you can teach it roughly like a human, and you can certainly train it the way you could a junior engineer. A simpler robot will have more limits, but can also be trained to do simpler, smaller steps of the same process. TFA basically describes a system for making a bot context aware... by trawling through records of what humans have done, it can recognize problems that humans have solved before to help another human solve a similar problem (even if he doesn't know it's similar). It can recognize that a margarita machine, a cement mixer, and washing machine all have similar problems to solve on some level, even though no one human really looked at any two of those problems. It looks at your statement of "what we need here...," and chimes in "oh, like a ____ but for _____."
