
Comment Re:What What? (Score 1) 240

No user of any popular tablet or smartphone today, Android or otherwise, is exposed to the kinds of internal implementation details you mentioned. The APIs provided by both iOS and Android are very much geared to the per-app/sandbox model used by apps on those systems, and even on Android, you wouldn't normally be messing around with a FHS-style directory tree the way Linux desktop or server applications would.

Anyway, I don't know whether you are wilfully missing the point just to troll or you really can't understand that there are fundamental differences between tablet/smartphone and desktop/laptop use cases beyond the cosmetic details of their shells, but further discussion here seems unlikely to get us anywhere useful.

Comment Re:What What? (Score 1) 240

In your first paragraph you say you don't think it's just about the shell, but what you've gone on to describe is the shell.

The shell is important, but it's only worth as much as the underlying models the operating system provides. For example, the underlying file system has nothing to do with the shell. Nor do the security model or any related control of application installation and permissions systems. Nor do the process model and any mechanisms for inter-process communication.

The rest of my post, referring to how applications are installed, was an example. The point is that on tablet/smartphone devices, everything is simplified/dumbed down [delete as applicable] in the same way. The average tablet/smartphone app is designed to do one simple task using simple interactions. To my knowledge, no-one has yet written a smartphone or tablet app that is anywhere close to the complexity and flexibility that major desktop apps routinely offer.

Comment Re:What What? (Score 1) 240

I've never owned a Windows 8 (or 8.1) anything. There are good reasons for that, starting with the fact that I've used Windows 8. ;-)

But seriously, you give an excellent example of the real problem here. Tablets are valuable precisely for their simplicity and because they work with minimal configuration. The idea that anyone should need to run a tool like regedit to be able to use a tablet-style device properly is just... bizarre.

Comment Re:What What? (Score 1) 240

The thing is, I don't think it's just the shell interfaces for Windows that matter. The entire way people use the device is usually different. Are you working with multiple applications at once? Is your data associated with a specific application or stored within a more structured filesystem that you can see? Do you install software with a lot of flexibility and many options, or does the software concentrate on simplicity and need minimal configuration?

Like smartphones, I think tablets have been successful precisely because of their simplicity. Tasks like installing software so you can use your device to do interesting things should not require any more effort than choosing the software you want and if necessary paying for it, but Windows is awful for this, OS X isn't much better, and Linux is OK as long as exactly the software you need is available from your distro but otherwise it's a joke. Smartphones and tablets came along, with their app store model but also with the simple "home screen" style of launching apps and usually with a single app visible at once, and made all that horrible complexity go away, and that's why non-geeks love them.

Of course, there is a price to pay for that simplicity: the software isn't as powerful and flexible as the kinds of applications we run on desktop/laptop systems. The tablet/smartphone UI style doesn't scale to more demanding tasks and can't cope with the kinds of complicated interactions that, for example, an office worker manipulating a spreadsheet needs all the time.

So for that, we come back to a desktop/laptop style of UI, with a real keyboard and mouse. Once you've got those, although a touch screen might be useful occasionally, it's mostly just a gimmick anyway.

Comment Re:What What? (Score 1) 240

Yes, but if you try and understand what sucks about Win 8, most of it will (hopefully) be fixed with Win 10.

I think the marketing problem for Microsoft is that, for desktop/laptop users, most of it was not broken in Windows 7 anyway. Windows 10 can't just be about fixing the things they got wrong with Windows 8. It needs to have some significant benefits as well, or everyone who's on Windows 7 today will just stay there and not upgrade.

No, not necessarily, but in most instances, Office people use Office apps (Outlook, Word, Excel, Web etc.), and for that a Surface does the job of a tablet, laptop and desktop quite sufficiently.

But for those uses, a laptop does the job just fine anyway, and people who go to a lot of meetings probably take their laptop with them already. What extra benefit do they get for having a more complicated and expensive device like a Surface?

Comment Re:What What? (Score 1) 240

What constitutes a "real PC" these days? Laptops are, for many, a desktop replacement.

True enough, but how much of that is because they're better at doing the job, and how much is just convenience for people making the purchasing decision?

If you actually do work away from your desk a significant amount of the time, or use your computer in different places around the home, a laptop offers a genuine advantage. And if you have an organisation where many/most of your staff are in that category, consistency among your users might be a genuine advantage for purchasing and technical support purposes as well.

Otherwise, compared to a "real PC", a laptop is often just a more expensive system with lower performance, lower storage capacity, fewer display options, worse ergonomics, limited connectivity... Of course as technology improves the distinctions will probably become finer and less of a concern, but we are still a long way from parity.

Touchscreens are becoming the norm because it's a 'value-add' that adds little to the purchase price.

A 'value-add', really? I suggest that touchscreens on laptops are becoming a common sight for much the same reason that "smart" TVs are: it's not because many customers actually want or need them, it's simply that a plain system the customer bought a couple of years ago is now perfectly capable of providing excellent results for several more years anyway, so manufacturers need to create a gimmick and then convince you via their advertising efforts that you need that gimmick so you should spend more money with them.

Should one device perform both functions, or do we stick with the Apple mantra that you need both an iPad AND a macbook? Or the Google mantra that, increasingly, you don't need a desktop OS altogether?

To me, an iPad (running iOS and simple apps) and a MacBook (running OS X and full applications) might both be useful for quite different tasks, so if fruity technology is your preference then I would tend to agree with Apple here.

I find Google's position on almost everything to be favourable to Google but rarely an acceptable alternative to incumbent technologies for everyone else. Google, like a lot of "cloud" services, provides the software equivalent of those touchscreens and smart TVs I mentioned above. Looking at it objectively, most of their web applications are so limited and often so short-lived that I find it hard to take them seriously.

Comment Re:What What? (Score 1) 240

What's wrong with having one device with the capability of satisfying both use cases?

Nothing, as long as it satisfies both use cases as well as two dedicated devices would, or at least close enough not to make any meaningful difference.

This is not what I saw with Windows 8, however. Instead, what you got was a least common denominator. Reducing desktop workstation and tablet to a least common denominator does to the workstation roughly what reducing gaming console and power-gamer PC to a least common denominator does to the power-gamer PC. That is, it's such a poor substitute in both power and usability that the serious end of the market doesn't really consider it an alternative at all. You're just hoping there's enough of the market willing to pay real money for something that isn't very good to get away with it.

Comment Re:What What? (Score 1) 240

Tablet in the meeting room, docked at desk to dual 24" monitors, keyboard and mouse giving 3 full HD desktops. This is the MS vision, if your use case doesn't suit, that doesn't mean most of the corporate world doesn't.

Given that Windows 8 has been a Vista-scale catastrophe for Microsoft, I think by now it's safe to say that, overall, the corporate world doesn't buy into that vision either.

Again, the things these devices are useful for in meetings are not necessarily the same things they are useful for when someone is working alone at a desk. So far, it appears that trying to fit the same modes of operation into both boxes just results in a mediocre compromise that isn't very good at either set of tasks.

Comment Re:What What? (Score 3, Insightful) 240

What do you do when you plug your tablet in a docking station and start using it with multiple displays, a keyboard and a mouse?

I have no idea. In probably 4-5 years of owning tablet-style devices, I have never once connected them to any external peripherals like that, nor wanted to.

Tablets are for convenient data access and occasional very light data entry. For the stuff that needs multiple displays and serious input devices, I have other tools that are much, much better at it than any tablet ever produced.

In other words, my use cases (and going by the Internet commentary, almost everyone else's use cases too) are completely different for tablets and real PCs. It makes absolutely no sense to run the same style of operating system on both of them -- not just the shell, but the file system, the process model, the security model, connectivity...

Comment Re:All it means is (Score 1) 292

This is a perfect example of just what I meant: the organization would have been better off if Bob had just done his job and left the system unsupported, thus forcing the management to formally give the responsibility to someone - which also means it's an officially acknowledged role within the organization.

But this makes all kinds of unstated assumptions, most obviously that management would in fact be forced to formally recognise the responsibility and to find or hire someone to do every useful little job that ever gets done. This is completely unrealistic. In practice, probably only the technical staff and maybe their technical leads or immediate line managers know these little details about how the jobs are being done. Even if more senior management were aware of them, the administrative overheads of documenting every last aspect of every job are prohibitive, no-one has the budget to hire dedicated staff for all of these things, and you'd spend forever trying to recruit idealised candidates whom you knew for sure had exactly the right balance of skills to fill the gaps in your team.

The kind of person who wants to run their department this way is the kind of person who says things like "Everything needs to be managed" and "You can't manage what you can't measure", yet is completely blind to the overheads their policies impose on the department as a whole and would apparently prefer to know exactly how fast their project is failing under the weight of those overheads than to let their people get on with their work and have the project succeed.

The simple, brutal fact is that from the organization's point of view, Bob is a liability. He might die, he might leave. The more responsibility he gets, the bigger liability he'll become.

That's a very strange argument, though it's not the first time I've seen it made. If Bob leaving for whatever reason would be a loss, then the work he was doing was valuable, and not letting him do it just guarantees you'll suffer the same loss voluntarily.

The logical conclusion of your argument is that no organisation should ever hire anyone with something unique to offer or let anyone make a useful contribution that is outside their formal job description. You should only hire completely unremarkable people and if you accidentally hire anyone with any sort of aptitude or ability you didn't expect and a willingness to use it to perform better, you should immediately suppress that instinct.

Good luck competing against any organisation that actually hires talent and rewards initiative with that policy -- you'll need it. But probably not for very long.

Comment Re: Aren't all (but one) popular languages like this? (Score 1) 757

No, anyone concerned with performance will write the code first, profile and then optimize.

Absolutely. That's why any good C++ programmer writes

void some_function(big_class data)

first, and only uses

void some_function(const big_class & data)

after profiling confirms that their code is as slow as everyone knew it would be.

Comment Aren't all (but one) popular languages like this? (Score 4, Insightful) 757

This is all true, but I'm not sure how it's any different to almost any other popular language.

Java and C# have also evolved a lot of new language features in recent years. For many types of software, the way the code looks will also be heavily influenced by which libraries and frameworks are used in that project's stack.

It's the same story for web development. We have different flavours of JavaScript (ES5 in most browsers today, but ES6 just around the corner and supporting a wider range of programming styles), Python (2 vs 3), and so on. And with these more dynamic languages, the style is often even more guided by a framework if you're using one.

Even if you're not using pervasive third party frameworks or libraries, any project of non-trivial size is going to adopt its own conventions and build its own abstractions to suit its particular needs, and then the rest of its code will again become its own dialect written in terms of those conventions and abstractions.

In fact, I can't think of any mainstream language except for C that doesn't suffer from the "dialect" problem to some extent. And that's because C is a 20th century language in a 21st century world, so lacking in expressive power that it can't support any of these modern, high-productivity development styles and abstraction tools. Its ubiquity, portability and simplicity are assets, but they are effectively its only redeeming features in 2015, and as time goes by it will be necessary for fewer and fewer projects to choose C for those reasons.

"There are only two kinds of languages: the ones people complain about and the ones nobody uses." -- Bjarne Stroustrup

"If you attack a tool based primarily on not liking the people who use it, you're still just a bigot, no matter how famous you are." -- Anonymous Slashdot poster
