
Comment Re:Hold on a minute (Score 1) 198

That would be fair to say if we (Americans) were free to emigrate to the countries that H-1Bs are being brought in from. We're not. The markets for labor and capital are both far from free (see 'capital controls' regarding the latter).

There is some truth to the claim that the H-1B influx puts a damper on job hunting. But, from experience, if a programmer feels so constantly threatened by that influx that his salary (or even his employment opportunities) is nosediving, I would question said person's skills.

The really good engineers in India either come here on scholarships or transition very quickly from H-1B to resident status. And these are not the majority. The bulk of H-1B holders are average or below average (with a good chunk being just atrocious coders), with very little work experience (most of it limited to web development), facing cultural barriers in communication and in delivery of work.

This is not a diss or intended as an insult to them. It is just a function of many things that affect their society (and I suspect that the quality of work will improve over the decades).

If you (the generic "you") are threatened by that, by the current quality of work presented by offshore/H-1B teams, then you are replaceable and possibly not that great at software/IT. Don't blame them. Blame your skills.

If you know your shit well, you will have no shortage of $$$ and work. That is a fact. Do the type of work that cannot be easily offshored/replaced/commoditized, and you will be fine.

Comment Re:A lot of to-do about $700 (Score 1) 245

Over $900, and he will match the donations with his own funds so... that's definitely enough for a pretty nice machine. And with the slashdotting, probably a lot more now.

The bigger problem is likely network bandwidth to his home if he's actually trying to run the server there. He'd need both uplink and downlink bandwidth, so if he doesn't have FiOS or Google Fiber, that will be a bottleneck.

-Matt

Comment Re:Hold on a minute (Score 2) 198

The guys getting 250k a year from Google are basically the SF version of high-end guys making 120-140 a year here in FL. Same take home and everything.

Bingo. This was my #1 reason for forgoing the idea of relocating from SoFla to the Valley. One thing I would add is that said programmer in Tampa can buy a *real* house in a decent school district as head of household with a stay-at-home spouse. In San Francisco, forget it. The equivalent programmer could only afford a hole in the wall, or would need his/her spouse to work in the same field just to be able to buy a *real* house in a good school district.

Denver, Dallas, or Seattle are much better relocation options, money-wise.

Comment Re:Git Is Not The Be All End All (Score 1) 245

A single point of failure is a big problem. The biggest advantage of a distributed system is that the main repo doesn't have to take a variable client load that might interfere with developer pushes. You can replicate the main repo to secondary servers and have the developers commit/push to the main repo, while all readers (including web services) simply access the secondary servers. This works spectacularly well for us.
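A minimal sketch of that read/write split, assuming hypothetical URLs: git lets a remote carry separate fetch and push URLs, so a client fetches from a lightly loaded mirror while pushes still land on the authoritative repo.

    #!/usr/bin/env python3
    """Point a working copy at a read-only mirror for fetches while
    keeping pushes aimed at the main repo. URLs are hypothetical."""
    import subprocess

    MAIN = "ssh://git@repo.example.org/project.git"    # developers push here
    MIRROR = "git://mirror.example.org/project.git"    # readers fetch here

    def git(*args):
        subprocess.run(["git", *args], check=True)

    git("remote", "set-url", "origin", MIRROR)          # fetch from the mirror
    git("remote", "set-url", "--push", "origin", MAIN)  # push to the main repo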

The second biggest advantage is that backups are completely free. If something breaks badly, a repo will be out there somewhere (and for readers one can simply fail over to another secondary server or use a local copy).

For most open source projects... probably all open source projects, frankly, and probably 90% of in-house commercial projects, a distributed system will be far superior.

I think people underestimate just how much repo searching costs when one has a single distribution point. I remember the days when FreeBSD, NetBSD, and other CVS repos would be constantly overloaded due to the lack of a distributed solution. And the mirrors generally did not work well at all, because cron jobs doing updates would invariably catch a mirror in the middle of an update and completely break the local copy. So users AND developers naturally gravitated to the original and subsequently overloaded it. SVN doesn't really solve that problem if you want to run actual repo commands, versus grepping one particular version of the source.

That just isn't an issue with git. There are still lots of projects not using git, and I had a HUGE mess of cron jobs that had to try very hard to keep their cvs or other trees in sync without blowing up and requiring maintenance every few weeks. Fortunately most of those projects now run git mirrors, so we can supply local copies of the git repo and broken-out sources for many projects on our developer box, which developers can grep through on our own I/O dime instead of on other projects' I/O dime.
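The git version of such a sync job is a trivial sketch like the following (paths and upstream URLs are hypothetical). A fetch either updates the refs or fails cleanly, so a reader never catches the mirror mid-update the way the old CVS mirrors were caught.

    #!/usr/bin/env python3
    """Cron-driven refresh of local bare mirrors. Paths and URLs are
    hypothetical; run this every few hours from crontab."""
    import os
    import subprocess

    MIRRORS = {
        "/repos/projectA.git": "https://git.example.org/projectA.git",
        "/repos/projectB.git": "https://git.example.net/projectB.git",
    }

    for path, url in MIRRORS.items():
        if not os.path.isdir(path):
            # First run: create a bare mirror clone.
            subprocess.run(["git", "clone", "--mirror", url, path], check=True)
        else:
            # Subsequent runs: update refs in place; readers never
            # see a half-synced tree.
            subprocess.run(["git", "-C", path, "fetch", "--prune"], check=True)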

-Matt

Comment Re:SVN and Git are not good for the same things (Score 1) 245

This isn't quite true. Git has no problem with large repos as long as system RAM and the kernel caches can scale to the data footprint that basic git commands need to touch. However, git *DOES* have an issue with scaling to huge repos in general... it requires more I/O, certainly, and you can't easily operate on just a portion of a repo (a feature which I think Linus knows is needed). So repos well in excess of the RAM and OS resources required to run basic commands can present a problem. Google has precisely this problem, and it is why they are unable to use git despite the number of employees who would like to.
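One client-side mitigation worth noting: a shallow clone trims history depth (though not the width of the tree), which can cut the data footprint dramatically on deep-history repos. A sketch, with a hypothetical URL:

    #!/usr/bin/env python3
    """Shallow-clone a deep-history repo: fetch only the most recent
    commits. Cuts the data footprint, but you still get the whole
    tree. The URL is hypothetical."""
    import subprocess

    subprocess.run(
        ["git", "clone", "--depth", "50",   # keep only the last 50 commits
         "https://git.example.org/huge-project.git"],
        check=True,
    )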

Any system built for home or server use by a programmer/developer in the last 1-2 years is going to have at least 16GB of RAM. That can handle pretty big repos without missing a beat. I don't think there's much use complaining if you have a very old system with a tiny amount of RAM, but you can ease your problems by using an SSD as a cache. And if you are talking about a large company... having the repo servers deal with very large git repos generally just requires RAM (but the client side is still a problem).

And, certainly, I do not know of a single open source project with this problem that couldn't be solved with a measly 16GB of RAM.

-Matt

Comment It's not that big a deal (Score 1) 245

It's just that ESR has an old, decrepit machine to do it on. A low-end Xeon with 16-32GB of ECC RAM and, most importantly, a nice SSD for the input data set plus a large HDD for the output (so as not to wear out the SSD) would do the job easily on repos far larger than 16GB. The IPS of those CPUs is insane. Just one of our E3-1240v3 (Haswell) blades can compile the entire FreeBSD ports repo from scratch in less than 24 hours.

For quiet, nothing fancy is really needed. These CPUs run very cool, so you just get a big copper cooler (with a big, variable, slow fan) and a case with a large (fixed, slow) 80mm intake fan and a large (fixed, slow) 80mm exhaust fan, and you won't hear a thing from the case.

-Matt

Comment Re:Newton anyone? (Score 1) 84

ARM exists largely because of Apple. Apple didn't want to buy mobile chips from a competitor (Acorn), so it invested in a joint venture under which Acorn would spin off its chip division into a company that would sell to both. Apple then ignored ARM after killing the Newton, though. Many of the people working on the current ARM cores at Apple formerly worked on a PowerPC processor at PA Semi. I think that if IBM and Freescale had been serious about selling desktop chips, Apple would have been happy to avoid a load of software costs by having a single CPU family across its entire product line. IBM didn't want to compete with Intel in mobile chips, and Freescale kept promising exciting parts and never quite bringing them to market.

Comment Re:Why fear the iMac? (Score 1) 355

They're still not offering a retina external display, but they're about the only manufacturer that isn't. I just got a 4K 27" display at work (and I'm now back to preferring to read text on the big screen instead of on the laptop screen). The panel quality is nice, but it lacks any bells and whistles. They're only £300 now, though, so we're buying them as the standard external display.

Comment Re:Is D3D 9 advantageous over 10? (Score 1) 55

Direct3D 10 is very different from Direct3D 9. The former was designed with modern GPUs in mind and so is based around an entirely programmable pipeline. DirectX 9 is predominantly a fixed-function API with various places where you can insert shader programs into the pipeline. This means that DirectX 10 is easier to support, because there's less provided by the API.

Supporting D3D 9 is akin to supporting OpenGL 2. You need to expose most of the programmable interfaces, but you also have to make a load of fixed-function stuff work, typically (on modern hardware) by providing shader programs that do the same thing. Supporting D3D 10 is more like supporting OpenGL 3, where most of the complexity is in moving data to and from the GPU and compiling shader programs.
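To make "fixed function via shaders" concrete, here is a rough sketch of the kind of shader a driver can synthesize to reproduce one classic stage, per-vertex diffuse lighting. The GLSL is purely illustrative (a real driver would emit its own IR rather than GLSL source) and is held in a Python string only for exposition.

    # Illustrative only: a shader equivalent of one fixed-function
    # stage (per-vertex diffuse lighting), using GLSL 1.20's
    # compatibility built-ins to stand in for the fixed pipeline state.
    FIXED_FUNCTION_DIFFUSE_VS = """
    #version 120
    uniform vec3 light_dir;   // normalized light direction, eye space
    void main() {
        vec3 n = normalize(gl_NormalMatrix * gl_Normal);
        float diffuse = max(dot(n, -light_dir), 0.0);
        gl_FrontColor = vec4(vec3(diffuse), 1.0) * gl_Color;
        gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    }
    """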

With Gallium, there are two aspects to supporting these APIs. The first is the compiler that turns programs in a source language (HLSL, GLSL) into TGSI. The second is the state tracker, which handles API-specific state. The former part is about as complex for D3D 9 as for D3D 10, as they have similar shader language support. The latter is a lot simpler for 10, as it is a much less stateful API.

Comment Designed in US, Built in EU, Filled in Iraq (Score 5, Informative) 376

The summary seems to have left out the most interesting tidbit:

According to the Times, the reports were embarrassing for the Pentagon because, in five of the six incidents in which troops were wounded by chemical agents, the munitions appeared to have been "designed in the US, manufactured in Europe and filled in chemical agent production lines built in Iraq by Western companies".

Where were they found? Next to the plants set up by Western companies that filled them in Iraq, of course. Who has control of those plants now? Why, ISIS of course. Don't worry, though, the people who thought it was better we didn't know about these things are assuring us that all those weapons were hurriedly destroyed.

Comment Re:It is serious but also concerning (Score 1) 571

A small reactor could power a U.S. Navy warship, and eliminate the need for other fuel sources that pose logistical challenges

A navy ship? What about a cruise liner? With cheap energy, you could process deuterium from seawater for fuel, grow food in artificially lit enclosures below decks, and have a self-sustaining artificial ecosystem that could spend years between trips to port.
