Comment Re:If 95% of the best programmers are not in the U (Score 1) 294

There are many successful tech companies outside the US. Heck, Intel would be nearly dead if it weren't for their Israeli R&D team and the (British) ARM chip. And SAP is German. Samsung, Foxconn, Hitachi, Sony, Panasonic, Toshiba, LG, Nokia, Arduino, ... they all do tech, and they're all outside the US, right?

Yes, there are tons of US tech companies. But don't forget that the rest of the world exists, and has exciting stuff going on, too!

Comment Re:They want you there... (Score 1) 294

Exactly! The rule was supposed to be that managers didn't get overtime, but workers did. The salary threshold stayed fixed for many decades, though, while inflation pushed 90% of workers above the line, so now they're treated as if they were managers, who would at least be getting bonuses and the like based on company performance. But they're (typically) not.

Comment Re:Exactly this. (Score 1) 294

Don't bet on it. The large majority of real estate agents make very little money most of the time - big deals that pay well are relatively rare, so the cash flow is erratic. There's a small number of agents who are making huge money doing huge deals, of course.

Being a programmer gives you pretty good job security and a consistent income, provided you've stayed current with mainstream technologies.

Comment Re:Exactly this. (Score 1) 294

"But don't blame the process, blame the people who don't implement it well."

There's some truth to this, in that working remotely can work well, but there's a lot you lose by not being co-located with the people you work with. That can be fine if you're working relatively independently, but if you're part of a team and need to interact with it regularly, there's more friction when you're not co-located. It's not insurmountable, of course.

But it's a lot easier for someone working remotely to "disconnect": you don't see each other casually, only in scheduled meetings, so you lose all of the informal lunch/hallway discussions, which have a lot of value. And when someone is remote they have less direct oversight, which can, if they lack discipline, lead to them spending a lot of time not getting work done. There are tools that help - IM, video (Skype, FaceTime, Hangouts), etc.

But it's pretty consistent that, all else being equal, a team all in one room will generally work more effectively than a geographically dispersed one. There's an energy and momentum a team builds in its space, and a bonding and commitment that are hard to reproduce remotely. People aren't just "skills on legs"; they're social creatures, and being in the same place works better for most of them.

Of course, things aren't always equal. If the perfect developer with the needed skills is remote, and won't move for the job, and you can't find anyone local who can do the work, you're certainly better off with him remote than not having those skills on the team at all (and failing).

Comment Re:Poor slashdot... (Score 1) 449

Having worked on machines with thousands of CPUs, I disagree. The thing that Linus is missing (IMO) is that modern GPUs are no longer "graphics processors" but are actually quite powerful MPP supercomputers; there are millions of them out there, and applications are increasingly being written to take advantage of them.

He's right that putting many extremely expensive, power-hungry Intel CPUs in a single box isn't a good tradeoff except in very specific cases. Luckily it's actually quite cheap to add large numbers of cheap, high-performance CPUs to a computer, and in fact they're likely already there, so the cost of using them is $0 for hardware, just some developer effort. So the question is simply whether developers should ignore all those CPUs and use only the main CPU, or whether they should learn how to use the supercomputer sitting on the graphics card.
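For anyone curious what "using the supercomputer on the graphics card" actually looks like, here's a minimal CUDA sketch. Everything in it (the SAXPY kernel, the array size, the 256-thread block size) is just illustrative; the point is that the per-element work fans out across thousands of GPU threads while the main CPU only sets things up.

    #include <cuda_runtime.h>
    #include <cstdio>
    #include <vector>

    // Each GPU thread handles one element: y[i] = a * x[i] + y[i].
    __global__ void saxpy(int n, float a, const float* x, float* y) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) y[i] = a * x[i] + y[i];
    }

    int main() {
        const int n = 1 << 20;                       // ~1M elements (illustrative)
        std::vector<float> x(n, 1.0f), y(n, 2.0f);

        float *dx, *dy;
        cudaMalloc(&dx, n * sizeof(float));
        cudaMalloc(&dy, n * sizeof(float));
        cudaMemcpy(dx, x.data(), n * sizeof(float), cudaMemcpyHostToDevice);
        cudaMemcpy(dy, y.data(), n * sizeof(float), cudaMemcpyHostToDevice);

        // Launch enough 256-thread blocks to cover all n elements.
        saxpy<<<(n + 255) / 256, 256>>>(n, 3.0f, dx, dy);
        cudaMemcpy(y.data(), dy, n * sizeof(float), cudaMemcpyDeviceToHost);

        printf("y[0] = %f\n", y[0]);                 // expect 5.0
        cudaFree(dx);
        cudaFree(dy);
        return 0;
    }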

Comment Re:Build a PC: More RAM or more CPUs or more I/O? (Score 1) 449

It's not "hard to parallelize one application". It's just a matter of learning to think that way. Once you do, nearly all problems parallelize well.

For example, consider video games. Most of them have hundreds or thousands of AIs and game objects that can run in parallel. Heck, even word processing renders thousands of characters to the screen, which can be done in parallel. Sorting, searching, indexing, all parallelize. Of course, as long as it's considered "hard" developers won't do it, except in the highest-value cases (e.g. video processing, graphics), but that's a matter of tooling. In languages/compilers that are designed for parallelism, it's easy; it's only hard in C++ because the language itself makes parallelism hard. Compare to FORTRAN 90, or C*.
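To make the tooling point concrete: with a library built for parallelism, a parallel sort on the GPU is one call. A minimal sketch using Thrust (which ships with the CUDA toolkit); the data size and element type are just placeholders:

    #include <thrust/device_vector.h>
    #include <thrust/host_vector.h>
    #include <thrust/sort.h>
    #include <cstdlib>

    int main() {
        // Fill a host array with random keys (size is illustrative).
        thrust::host_vector<int> h(1 << 20);
        for (size_t i = 0; i < h.size(); ++i) h[i] = rand();

        // Copy to the GPU and sort there; one call fans out across
        // thousands of GPU threads.
        thrust::device_vector<int> d = h;
        thrust::sort(d.begin(), d.end());

        h = d;   // copy the sorted keys back if the host needs them
        return 0;
    }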

Comment Re:Pullin' a Gates? (Score 1) 449

This is only true if you're unable to use more than one CPU chip in your computer, a hurdle that was overcome 30 years ago. :-) People have been running multiple CPUs to improve performance for a _long_ time.

The real question is: would you rather have multiple CPUs at the price/performance peak, or one CPU that's a bit faster for a much higher price? Typically getting 2x performance costs 4x or so, making two cheap CPUs a much better deal than one really expensive CPU.

Comment Re:Pullin' a Gates? (Score 1) 449

In the real world the tradeoff is dollars (or power consumption, for mobile devices). So the question is: should you buy a 2x faster CPU for 4x the cost and 4x the power consumption, or should you buy two cores for 2x the cost and 2x the power consumption?

For applications that only run single-threaded, you don't have a choice - you have to buy the fastest CPU you can. But for well-written applications, more cores are a cheaper, more power-efficient way to scale performance.
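To put (made-up but representative) numbers on it: a single 2x-faster CPU buys 2x the throughput for 4x the cost and 4x the power, i.e. half the throughput per dollar and per watt. Two 1x CPUs buy the same 2x throughput, assuming the workload parallelizes, for 2x the cost and 2x the power, i.e. the same throughput per dollar and per watt as one cheap CPU. For parallel-friendly code the extra core wins on both counts.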

Comment Re:Pullin' a Gates? (Score 1) 449

It's not a technical issue, it's a "chicken and egg" market issue. Many desktop applications _would_ run very well on massively parallel hardware, but that's not what people have, so it's not what developers target. And since games are written not to use more CPUs, people don't buy computers with many CPUs. And because MPP hardware is a niche, mainstream developers have no idea how to program for it, much less how to think about which problems would run well in parallel.

From a technical perspective, which I think Linus is trying to argue from, many desktop applications could easily take advantage of massive parallelism. Once you start thinking in terms of data parallelism or agent parallelism, almost all problems decompose in ways that parallelize nicely. For example, many games have hundreds of AIs and simulation objects, and each could run on its own CPU (or process or thread). Video and image processing are "embarrassingly parallel", and now that people edit video at home, those applications can happily consume all the CPU you have. Sorting, searching, indexing, scrolling in documents, rendering characters to the screen - all very parallel.
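As a purely hypothetical sketch of the agent-parallel decomposition: give each game object its own GPU thread and step them all at once. The Agent struct, world size, and frame loop below are invented for illustration; the point is just how directly "one thread per AI/object" maps onto this kind of hardware.

    #include <cuda_runtime.h>
    #include <vector>

    // A toy game "agent": position and velocity. One GPU thread updates one agent.
    struct Agent { float x, y, vx, vy; };

    __global__ void step(Agent* a, int n, float dt) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;
        a[i].x += a[i].vx * dt;
        a[i].y += a[i].vy * dt;
        // Bounce off the edges of a 100x100 world.
        if (a[i].x < 0.f || a[i].x > 100.f) a[i].vx = -a[i].vx;
        if (a[i].y < 0.f || a[i].y > 100.f) a[i].vy = -a[i].vy;
    }

    int main() {
        const int n = 10000;                                   // thousands of agents
        std::vector<Agent> host(n, Agent{50.f, 50.f, 1.f, 0.5f});

        Agent* dev;
        cudaMalloc(&dev, n * sizeof(Agent));
        cudaMemcpy(dev, host.data(), n * sizeof(Agent), cudaMemcpyHostToDevice);

        for (int frame = 0; frame < 60; ++frame)               // one update per "frame"
            step<<<(n + 255) / 256, 256>>>(dev, n, 1.f / 60.f);

        cudaMemcpy(host.data(), dev, n * sizeof(Agent), cudaMemcpyDeviceToHost);
        cudaFree(dev);
        return 0;
    }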

Luckily the "graphics processors" are breaking out of the "chicken and egg" trap. The better GPUs are now not really "graphics processors", they are fully general MPP CPUs, and many applications are taking advantage of them. Interestingly this architecture is similar (at a high level) to the MPP supercomputers from decades ago. The Thinking Machines' Connection Machine had a fast front-end computer, controlling an array of thousands of tens of thousands of CPUs that did the heavy lifting, and now it's your CPU controlling an array of CPUs in your "GPU". So millions of PCs are MPP, even though their owners probably don't think of them that way. And this is leading to more and more applications taking advantage of MPP!

So I think that Linus is wrong, in that he's missed that what he's dismissing as GPUs are actually MPP co-processors that are astoundingly powerful and are increasingly being taken advantage of by developers when performance matters.

Comment Re:Pullin' a Gates? (Score 1) 449

Thinking Machines did this. We had one front-end CPU that ran the sequential process and controlled everything, and thousands of parallel CPUs that did all of the heavy lifting by processing the data in parallel. For large data problems it worked extremely well. Yes, at any given time some CPUs might not be doing work because they're waiting for other CPUs, but when you're pushing performance (e.g. processing TB of data, doing PFLOPS) the cost of making a single CPU faster rises much faster than the performance does, and eventually it becomes impossible, while piling on more CPUs scales performance roughly linearly. Of course, some problems don't parallelize in obvious ways, but IMO anything running on large data sets can be parallelized if you look at it right.

Luckily things like rendering graphics, sorting, searching, running web sites, many crypto problems, simulations, games, image processing, video processing, etc., parallelize really well. Admittedly it takes some cleverness to write a sort algorithm that runs on thousands of CPUs in parallel, but it's valuable to have a constant-time sort (i.e. you can scale hardware linearly with the data size, and sort arbitrary amounts of data in fixed time). The main challenge that parallel computing has, IMO, is that most programmers don't think that way, similar to how most programmers don't think in terms of multi-threading. But that's a matter of education. People used to be terribly confused by event-based programming frameworks, too!
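Here's a toy version of that idea in plain C++, with host threads standing in for "lots of processors" (the chunk scheme and sizes are invented for illustration). Each worker sorts a fixed-size slice in parallel; the final merge is left sequential here, and making that step parallel too is exactly the cleverness mentioned above:

    #include <algorithm>
    #include <random>
    #include <thread>
    #include <vector>

    // Toy "MPP" sort: give each worker a fixed-size chunk, sort the chunks in
    // parallel, then merge the sorted runs. The merge is sequential here; on
    // real parallel hardware that step is parallelized too (e.g. sample sort).
    void parallel_sort(std::vector<int>& data, unsigned workers) {
        workers = std::max(1u, workers);
        const size_t chunk = (data.size() + workers - 1) / workers;
        std::vector<std::thread> pool;
        for (unsigned w = 0; w < workers; ++w) {
            size_t lo = std::min<size_t>(w * chunk, data.size());
            size_t hi = std::min(lo + chunk, data.size());
            pool.emplace_back([&data, lo, hi] {
                std::sort(data.begin() + lo, data.begin() + hi);
            });
        }
        for (auto& t : pool) t.join();

        // Bottom-up merge of the sorted chunks.
        for (size_t width = chunk; width < data.size(); width *= 2)
            for (size_t lo = 0; lo + width < data.size(); lo += 2 * width)
                std::inplace_merge(data.begin() + lo, data.begin() + lo + width,
                                   data.begin() + std::min(lo + 2 * width, data.size()));
    }

    int main() {
        std::vector<int> data(1 << 22);                        // size is illustrative
        std::mt19937 rng(42);
        for (int& v : data) v = static_cast<int>(rng());
        parallel_sort(data, std::thread::hardware_concurrency());
        return std::is_sorted(data.begin(), data.end()) ? 0 : 1;
    }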

Once you start thinking in terms of having thousands or millions of (virtual) CPUs, and decomposing problems to run in parallel based on data or actors, pretty much everything becomes highly scalable.

Comment Re:Caveat emptor (Score 2) 325

The debate between 1K = 1,000 and 1K = 1,024 has been going on for decades. As long as the terms are precisely defined, I don't think there's a case there. And Apple documents exactly how much storage each of their devices comes with, including the footnote that "1GB = 1 billion bytes; actual formatted capacity less." I wouldn't expect a consumer device to get into the details of directory blocks, etc.

If a consumer wants to know how much storage the device has available, they can easily check by looking in Settings / General / Usage, which shows the exact storage used and available. It will even show how much storage is used by each app, and for some apps (e.g. videos, podcasts) you can drill down into individual files and delete them. It's really, really easy to manage storage in iOS 8.
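To put numbers on the two definitions (the 16GB figure is just an example): a device sold as 16GB holds 16,000,000,000 bytes; divide by 1,073,741,824 (2^30, the binary "GB") and the very same storage reads as roughly 14.9GB, before the OS and preinstalled apps take their share. Nothing is missing; the two numbers just use different units.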

Comment Re:MicroSD card? (Score 4, Interesting) 325

"You mean like how when Apple purposefully degrades the performance of older iOS devices when a new iOS version is out"

Example? So far (and I've run every iOS release) they do the opposite: they allow a much wider range of devices to upgrade than any other consumer electronics company. I have several Android devices, and new OS release support there is spotty because it depends on manufacturer and carrier QA. Apple, by contrast, is the manufacturer, and it got the carriers to let it push software straight to users without going through telco gatekeepers.

Apple does disable new features that run badly on older hardware, such as Siri only being available on newer phones, but that's the opposite of degrading - it's protecting users from degraded performance. So, as is typical with Apple, they'd rather deliver less functionality, with better performance, while Google goes the opposite direction - all sorts of functionality, but iffy performance. Both strategies are legitimate, and suit different kinds of users.
