Comment Re:Whatever ... (Score 1) 141

It seems like we probably agree on the general idea here, but I was impressed on a recent visit to a museum where they had mobile apps you could download in advance and WiFi available on-site. Together these let you choose from a number of recommended tours based on duration and topic(s) and then guided you around with directions, highlights, and more in-depth background on various other exhibits you'd pass along the way if you were interested. It was a well made presentation that someone had obviously worked hard to put together, and the only thing that was a little awkward was walking around holding a tablet with headphones plugged in for the whole visit. That's an area where I could see an unintrusive headset might be an advantage.

Comment Re:Whatever ... (Score 2) 141

People were hostile to people with cell phones in the 1980s

And today there are quiet carriages on trains, coffee shops with no-phones policies, and generally if you're the guy who talks really loud on the phone then everyone around you still gets annoyed and may actually challenge you if you carry on for long.

And that's for a device that is just an interruption, not a device that a lot of people perceive to be an inherently creepy invasion of their privacy literally because someone just looked at them funny.

In general Google Glass may or may not make it.

I expect technology similar to Google Glass will make it, but I also suspect it will be used primarily for specific applications where it has a clear benefit. I don't think anything too similar will be worn by a lot of people all the time in the near future.

For example, someone walking around a museum might borrow some sort of headset that guides them on a tour and provides background information about each exhibit they are looking at. Staff at a warehouse used for on-line grocery shopping might have a headset that guides them to collect the purchased items in the most efficient way.

However, I think perhaps the tide is already starting to turn against mass surveillance culture, intrusive personalised advertising, and the like. Surely these practices will only attract more hostility as things people see directly in their bank balance, like insurance premiums, become ever more customised behind the scenes, and as more people either suffer significant problems due to identity theft or embarrassing disclosures themselves or know close friends or family members who have.

In fact, I wonder whether even the US government, not exactly a bastion of privacy advocacy, might be having second thoughts about how much personal data is casually thrown around, now that hostile forces are openly doxxing US service personnel and encouraging allies within the US to attack those people and their families at home, as was reported this week.

So if I were going to place a long-term bet on new technologies tomorrow, I certainly wouldn't be backing an obviously intrusive device like the previous Google Glass, complete with tiny camera, always-on microphone, and wireless connection to the mothership. On the other hand, build a device with similar useful features but a less goofy design, and then back it with a widely-advertised and genuine emphasis on privacy so it didn't engender the same degree of hostility from others nearby, and you might be on to something.

Comment Re:Plug-ins were scapegoats but now we can't go ba (Score 1) 237

Thanks. I'm really surprised by that. We've used onbeforeunload for ages to give genuine warnings about unsaved changes in web apps, but for security reasons browsers have long forced any prompt message into a standard format and only given pages one shot at blocking the user from leaving. It hadn't even occurred to me that they would still let you spend significant amounts of time in that function or do other user-hostile kinds of things.
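For anyone unfamiliar with the mechanism being discussed, a legitimate unsaved-changes warning looks roughly like this. This is only a sketch: modern browsers ignore any custom message text and show their own standard wording, and they give the page exactly one prompt per close attempt.

```javascript
// Track whether the page has unsaved changes.
let dirty = false;

function markDirty() { dirty = true; }
function markClean() { dirty = false; }

// Pure decision function: should closing/navigating away prompt the user?
function shouldWarnOnUnload() {
  return dirty;
}

// Register the handler only in a browser environment.
// Browsers force the prompt into a standard format; the returned
// string is ignored by most of them, but setting returnValue is
// still needed for older engines.
if (typeof window !== "undefined") {
  window.onbeforeunload = function (event) {
    if (!shouldWarnOnUnload()) return undefined; // no prompt
    event.preventDefault();
    event.returnValue = ""; // legacy requirement
    return "";
  };
}
```

The key point relative to the complaint above is that this API only lets a page *request* a confirmation dialog; it is not supposed to let a page run arbitrary slow work while the tab closes.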

Comment Re:Plug-ins were scapegoats but now we can't go ba (Score 1) 237

Go ahead, watch the bottom of the page when you close a Slashdot tab. When Slashdot is slow (and it often is) you'll see the "Working" indicator with the shitty little spinny wheel before your browser actually complies and closes the tab.

What is that, anyway? I have the usual blockers and such installed, but something there is still getting through.

And how the hell does it stop browsers from closing the window immediately when I tell them to? There is a defined mechanism for prompting users to confirm they want to leave a page, which Slashdot doesn't use. Why is anything else not just killed instantly when you close the tab, and who gave web pages a vote in whether or not I get to close them in my own browser?!

Comment Re:Plug-ins were scapegoats but now we can't go ba (Score 1) 237

If you're getting regular crashes in everything but IE, there's something wrong.

I'm a professional web developer, I work on a wide range of projects, and in recent versions I've seen fatal errors with Firefox and Chrome several times per week on average. Chrome comes up with a "didn't shut down properly" message a lot of the time just from a session loading it, leaving a real-time screen on Google Analytics open for a while, and then closing it down again!

It may be that we are using different definitions here. I'm including things like Firefox hanging (requiring the process tree to be killed) because of problems with add-ons rather than Firefox itself, Chrome shutting down a process but then complaining next time it starts up, or being forced to close and restart Chrome because it doesn't reset things reliably any more if you refresh a page using Java applets.

It's also fair to say that the projects I work on don't tend to be done-in-a-day tweak-a-template jobs. We're using relatively complicated page layouts in web apps, various HTML5 features in quite demanding ways, and handling non-trivial volumes of data in JS. None of this should be a problem with modern web technologies, but they are things a simple page for your local church or a basic e-commerce kind of site probably isn't doing.

Just to be clear, the general level of unreliability I'm talking about has been observed under controlled conditions, over an extended period, across multiple projects, testing by multiple people using multiple computers and operating systems. This is not one isolated case. Unfortunately, the behaviour often differs depending on the particular system used for testing, or might even not be reproducible from one test to the next, so it would often be difficult to file a useful bug report with a good test case, even if we had enough time to do so.

Comment Re:Plug-ins were scapegoats but now we can't go ba (Score 1) 237

Yes, it seems we do generally agree about this.

I have criticised the rapid release cycle of Chrome and now Firefox many times myself for the instability and crazy amount of regressions it seems to bring with it (though expressing this opinion seems almost guaranteed to get you down-modded/voted on almost any web development forum on-line) and I'm disappointed that Microsoft is reportedly going to move in the same direction with its new browser project.

Comment Re:Plug-ins were scapegoats but now we can't go ba (Score 1) 237

But performance of data parsing still largely sucks in all managed languages.

Are we talking about parsing the JS, or work being done by JS code here? I'm certainly not suggesting rewriting the browsers themselves or major components like the JS engine in a managed language. There are plenty of ways to make much more security-friendly languages than C++ that still compile to self-contained, native code without depending on a heavyweight VM.

This is a highly specific task, really. And browsers have already literally excluded themselves from the rest of the software ecosystem. They come with their own network libraries, DNS libraries, security libraries, video/audio decoding libraries, GUI libraries and so on.

I don't think it's as specific as you're suggesting. The same general balance between needing the control and speed vs. needing security and robust code applies to just about any system software or communications software, for a start.

Ironically, those dependencies on their own libraries (or reinventing all the wheels on the carriage, if you prefer) that were set up to promote portability mostly seem to have adverse consequences that would have been avoided if they'd actually used their host operating system instead of trying to be one. For example, Chrome infamously rendered text worse than the native system functions on all major platforms for a long time, while trying to actually build a site that uses HTML5 multimedia elements has been absurdly difficult for developers because there is so much variation in which exact A/V formats the different browsers support. (Did I mention that Flash could just download an audio or video file in one format and play it on all the platforms it supported?)
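To illustrate the format-variation pain: pages typically have to list several encodings of the same media and probe which one the browser claims to support via the standard canPlayType API, which only ever answers "probably", "maybe", or "". A hypothetical helper (the file names and MIME strings below are made up for illustration) might look like this:

```javascript
// Pick the first source the browser believes it can play.
// canPlayType returns "probably", "maybe", or "" per the HTML spec,
// so prefer a confident "probably" over a tentative "maybe".
function pickPlayableSource(sources, canPlayType) {
  for (const verdict of ["probably", "maybe"]) {
    for (const src of sources) {
      if (canPlayType(src.type) === verdict) return src.url;
    }
  }
  return null; // nothing playable: fall back to a download link, etc.
}

// Hypothetical browser usage:
// const video = document.createElement("video");
// const url = pickPlayableSource(
//   [
//     { url: "clip.webm", type: 'video/webm; codecs="vp8, vorbis"' },
//     { url: "clip.mp4",  type: 'video/mp4; codecs="avc1.42E01E"' },
//   ],
//   (t) => video.canPlayType(t)
// );
```

The fallback-list dance is exactly the per-browser variation being complained about: a plug-in could ship one decoder and accept one file format everywhere.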

Nope. Your memory is playing tricks with you.

Browser crashes due to plug-in bugs were the most frequent cause of browser crashes.

That may be true, but according to the objective data on the projects I work on (which include some going back nearly a decade and using plug-ins) today's browsers are significantly worse for crash bugs than they used to be.

Chrome comes up with some sort of "I didn't shut down properly last time" warning almost daily, often prompted by nothing but loading Google's own sites. It can't even reinitialise pages using Java applets properly after a page refresh any more.

Firefox has been hanging more for us in recent months than it has for years. This appears to be due to a couple of popular add-ons we use rather than Firefox itself, but the fact that a failing add-on can take out the entire Firefox process is itself a damning indictment of Firefox's basic process isolation and security model, which is still fundamentally flawed many years after every other major browser dealt with this issue.

I recently went travelling, and with mobile devices just a few years old, the built-in browser was crashing just from trying to access various private WiFi systems. Sure, the browser is a little out of date, but that's because to upgrade it we'd wind up upgrading the whole OS, which numerous sources report as basically rendering the device so slow and buggy as to be useless.

The only major browser that does not have major crash/hang bugs with any project I work on today is actually IE, which gets a bad rap for historical reasons but objectively has been vastly better in quality than Firefox, Chrome or Safari for several years now according to our bug trackers.

Here is a simple technical reason: keyboard input. There is no established interface, and generally the interface is highly OS specific, for a plug-in to pass an unhandled widget event (for example keyboard input) to the browser.

That's a fair example, though my immediate question is why these plug-ins ever had direct access to things like keyboard input in the first place, given the obvious stability and security issues you mention. We've been running Java applets embedded within web pages for around two decades, and it's kind of absurd that in all that time and despite the rise and fall of other plug-ins like Flash and Silverlight along the way, browsers and operating systems haven't come up with a better model.

Comment Re:Plug-ins were scapegoats but now we can't go ba (Score 1) 237

So tell me, what "suitable" language would allow the browser to parse 200-500K of minified JS code in under 0.5 second?

It's not as if I have a handy JS engine implemented in every safer language to benchmark, but there are plenty of them out there that compile down to speeds close enough to C that the difference is mostly academic. The trouble is, every one of them is currently in the range of "obscure" to "extremely obscure" and lacks the surrounding ecosystem to be a viable alternative today.

This is a big general problem with the software industry right now. There is so much momentum behind the C and C++ ecosystem that creating an alternative language that is also relatively fast/low-level/compiled-to-native but has better safety properties and all of the tools and libraries to go with it is a huge challenge. It doesn't really matter if there's some great language out there, if you have to reinvent every wheel in it to get anything useful done.

This is why I'm optimistic about newcomers like Rust, which is the first language I've encountered in recent years that seems to be qualitatively better in safety, similarly or more expressive despite the low-level/compile-to-native form, and well enough supported that it might actually go the distance.

Integration with 3rd parties is a bitch. That was and remains the main reason why plug-ins suck.

But going back, say, a decade, all of the major browsers integrated with all of the major plug-ins just fine. The problems have been caused by deliberate decisions to drop support for various long-standing mechanisms and/or an obvious lack of concern for even testing that basic integration works. I don't for a moment believe that this was all done purely for technical reasons.

Comment There was a happy middle ground (Score 2) 237

The thing is, that was all true with even relatively early browsers, because it's the uniform access to information that was the radical improvement on what we had before.

Nothing about that necessarily means moving complex executable software to the browsers or making browsers a thin client for code running in the cloud is a similarly significant improvement. Plenty of us would argue that in many ways it has been a huge step backward, leading to dumbed-down software, security and privacy concerns, rent-seeking behaviours, inherent unreliability, and so on.

Comment Plug-ins were scapegoats but now we can't go back (Score 1, Informative) 237

There's nothing stopping you from going back.

Actually, there is. You can't use any of the popular plug-ins on a lot of mobile devices. Chrome is so buggy that even the most basic functionality doesn't work with some of the plug-ins now. As a developer, trying to actually produce a good user experience using any of the formerly popular plug-ins is futile with all the security warnings and all-but-invisible switches to override them in modern browsers.

And yet, after all their bitching about how insecurity is Java's fault or Flash's fault or whatever, it turns out that the browser writers aren't doing much better, because now they also have that complexity to deal with, and they are also trying to write secure software in unsuitable programming languages like C++.

So now we can't use tried and tested plug-in technologies to actually make stuff, and we all have to use HTML5+JS instead, even though in some areas they are still far inferior to what we had before with Flash or Silverlight or Java applets. This is not progress, unless your goal is not to actually provide better results for users but merely to kill off technologies that can threaten your native apps (Apple) or that are not ideal for your commercial model (Google).

At least there have been some moves in the right direction. For example, it will be interesting to see whether the first browser or browser components written in Rust do better.

Comment Yes, a lot of people *do* have options (Score 2) 200

Alternative theory: I choose not to travel to somewhere where such mall cops have any authority, or where border authorities like to throw their weight around.

There are more places in the world that I would like to see than I will ever be able to in one lifetime. I choose to visit those where I feel welcome, and they get my tourism revenue in return.

There are more clients in the world than my company will ever be able to do business with. I choose to work with those in places where doing business is easy, and those places get more business and probably more tax revenues in return.

Of course there are some people who realistically need to travel to certain places, though I don't think it's nearly as many as the apologists tend to claim and I think the number is coming down as more convenient and much cheaper long-distance communications technology improves. And of course there are some people who are willing to put up with a lot because they really want to visit a certain place. But not everyone who travels is in these categories, and by making travel unpleasant and making a country unwelcoming, in the long run those places will lose out on the rest of the visitors they might have had.

I recently travelled from the UK to another country in Europe, and chose to go by train. It was significantly more expensive than flying with a budget airline, and of course the travel time itself was significantly longer. But it was so much more pleasant in all other respects than all the hassle that comes with flying these days that I did it anyway.

The thing I most noticed was that although I was going through several different countries, once I was out of the UK and into the Schengen Area I just got on a train to go from place to place and the fact that it was international was no big deal. And you know what? No-one died in a horrific terrorist incident on the train. The criminal underworld has not taken over half of Europe. They don't seem to have any worse problems with contraband and black markets and illegal immigrants than we have at home. I doubt anyone was sneaking state secrets (or a dodgy rip of the latest movie) out of the country on a USB stick hidden in their handbag. And at no point on the journey did I feel threatened or unsafe because of the lack of overt security.

In fact, the only times I felt threatened and unsafe on the entire trip were going out of and back into my own country, and that's because we're doing it wrong. But it was still far less unpleasant than flying and all that goes with it these days.

Comment Re:What What? (Score 1) 240

The Desktop was still a mainstream device in 2008.

I didn't say it wasn't. I just said those who had a business need for laptops typically all had them by then.

I also mentioned that even a lot of students were arriving at university with their own laptops by 2008, and those machines were perfectly capable of dealing with their coursework, even for the CS or math students who might need a bit of real processing power at times.

But we're drifting off the topic here. My point was that laptops have been just fine for many years at doing the kinds of work people used to need a desktop for. Even entry-level laptops today are absolute beasts in performance and storage compared to what the high-end machines had a few years ago, and somehow people still managed to type a document in Word using them. You don't need some magical new class of hybrid device to get work done.

[An iPad is] as portable as a Surface Pro.

Don't be silly. Just looking at the physical dimensions, the Surface Pro 3 is two inches longer, over an inch wider, about 50% deeper and about twice as heavy compared to the iPad Air 2. It needs to be to accommodate the keyboard and a screen large enough for laptop-style uses, and you see similar distinctions between most convertible/hybrid devices and most large tablets. And there are plenty of tablets that are a bit smaller for added convenience, such as the iPad Mini, Galaxy Tab 7", etc.

If you're carrying your gear around in a laptop case anyway, those differences might not matter. However, for ladies who prefer to carry something in their handbag, they make a huge difference, and for gents, the smaller tablets will even fit in a coat or suit jacket pocket.

It's also common for people to hold a tablet in one hand just like, say, an e-reader. It's hard to imagine many people doing that with these larger, convertible devices. They're just too big for that kind of use over extended periods, and anything with a full-size, standard-layout keyboard always will be.

You seem to be unaware of what constitutes "business-grade". Please list the actual models for comparison.

Seriously? Are you really arguing that an i5, 8GB RAM, SSD, 15" screen laptop (the spec I gave before) is not sufficient for everyday business use? It's a wonder we ever managed to get anything done on computers more than a couple of years ago. </sarcasm>

I'm not going to bother citing specific machines, because basically every machine I was looking at before -- the ones I could buy off the shelf in a few minutes for around the £500 mark, from well-known brands like HP and Lenovo -- had a higher spec than the entry-level Latitude 14 7000 for half its price, and getting anything close to spec parity would still more than double the cost.

Yes, you can buy expensive support plans from the likes of Dell, but again the Latitude 14 7000 right off Dell's site only includes a 3-year warranty for that price, and even the laptops I was looking at from my local John Lewis store -- hardly a business-centric supplier -- typically came with a minimum of a two-year warranty. I see no indication that the Microsoft Surface Pro 3 price I found, right on the Microsoft Store web site, comes with the kind of long-term, rapid-response business support you seem to think is essential either.

The only qualitative difference I can see with the Dell is that you get that next-business-day on-site support. But for a more than 100% mark-up and given that the dominant cost of hardware failures is usually the immediate downtime and then the recovery time, that seems like the kind of deal only a Corporate CIO who went to school with a Dell VP could think was a good investment. I've worked at big companies that used Dell as a supplier and talked with the IT guys who had to actually use those support contracts, and not one of them thought it was actually worth it.

In any case, again we're drifting off the topic. The original point was to do a like-for-like comparison, so we're just looking at the cost overhead of moving from laptop to hybrid. Obviously the hybrid-style devices I was looking at from the same stores for price comparison purposes were coming with a similar level of warranty terms and customer support, and they were still about 2x as expensive for like-for-like specs.

It is completely necessary if you work with professionals who spend 40 hours plus/week using their machines productively.

That sentence actually made me laugh out loud when I read it. If your professionals can't be productive without a shiny new Retina screen, even on a laptop with an otherwise much better specification, you need to fire them and hire competent staff. Otherwise, as I said, it's a wonder we ever managed to get anything done on computers more than a couple of years ago.

Comment Re:What What? (Score 1) 240

Laptops have only become really widespread in the last 6 or 7 years as prices came down. Prior to that it was desktops, with laptops only for the execs and road warriors, and before that typewriters.

Interesting comment. Here in the UK, I'd say laptops have been ubiquitous (meaning that anyone who had a plausible need for one would probably have one) since the early 2000s. Certainly that was true at every employer I worked for back then and more recently for every client I work with today. By 6-7 years ago, even the students all seemed to have laptops.

Why do people use iPads? You can do all that on a laptop too if you try hard enough.

An iPad is far more portable: you can't put a laptop in your handbag, or hold it in one hand while you sit on the sofa reading an e-book.

An iPad is also very much simpler and easier to use.

One of these is about the hardware, while the other is about the style of software you run on it, but both are fundamental, qualitative distinctions between today's tablets and laptops, and today's hybrids are still much closer to the laptop side.

Not sure what you classify as a business laptop. These sell in the region of $2k (Australian dollars) for HP, Dell or Lenovo with decent processor, memory and hi-res screen.

We are living in different worlds. I have just looked up what I could buy if I walked to multiple stores within 15 minutes of where I'm sitting now and picked something up off the shelf.

There are numerous models of laptop, from well-known brands such as those you mentioned, with at least an i5, 8GB of RAM, an SSD, and a 15" screen, for well under £500 (a little under AUS$1,000). Such a powerful machine is easily sufficient for any normal business use.

Even with a tighter budget, it looks like I could have a choice of laptops with a slightly lower spec (i3, 4GB, HDD, 13" screen), still easily capable of running things like Office and web browsers, in my hands in 20 minutes for not much over £300.

Of course if you do want to spend silly money, you can buy a laptop with a 300dpi class screen or go with Apple products, and they'll happily double the price or more, but that is totally unnecessary for a basic office system.

So, that's laptops, but what about hybrids? It's hard to do direct comparisons because of all the minor spec variations, but in general the price premium to get a convertible laptop/tablet device with an otherwise similar spec to the laptops appears to be about a factor of 2x today. The Surface Pro seems to be particularly bad: you can't even buy it from any of the local stores I looked up, and the on-line price for a Surface Pro 3 around the i5/8GB/SSD level (but with a much smaller screen than the laptops I was looking at) is well over £1,000.

Any way you look at it, the idea that a hybrid doesn't have a significant price premium just doesn't stack up. The only way hybrids are cheaper than laptops, at least here in the UK, is if you consider a close to top-of-the-range MacBook to be a normal business laptop, but you're willing to accept an overall much lower spec for your hybrid just to get the convertible form factor.

Comment Re:What What? (Score 1) 240

You can't argue it's a different operating system when I choose to install a different desktop environment.

I haven't argued that, or anything close to it, anywhere in this entire thread. The examples I've mentioned have mostly been about things like the process and security models, which would be the same regardless of any particular style of UI you build on top, but might have different priorities for mobile, touchscreen, very user-friendly devices than they have for desktop or server environments.

You're describing Windows 3.x, Windows 95, 98 and ME. They all distinguished the foreground process from the background ones.

And it worked so well that no popular desktop or server OS has promoted such a distinction for well over a decade.

However, if you're designing an OS where the user typically interacts with single apps in isolation -- as is the case for tablets and smartphones -- and where you have more limited system resources and a need to conserve battery power -- again, as is the case for tablets and smartphones -- then naturally your basic assumptions will be different.
