Comment Re:You know what would REALLY motivate kids? (Score 1) 208

Except that civil engineering is licensed by each individual state. There is also the requirement that, in order to get a PE, you have to have a certain number of years of experience under a PE who will sign off on it. Plus, there are basic competency tests and continuing education requirements.

That is why my wife, who has a PE in civil engineering, doesn't work with any H-1Bs while I do. She does work with a number of permanent residents, but most of them moved here, went to school here, and got their PEs after working in the field here for a few years.

Plus, the requirement that nearly any project of significance have a PE sign off on it keeps the field vibrant.

Comment Content companies? (Score 1) 244

Well, besides all the other listed problems with moving into the TV market, I'm sure Apple hit two major roadblocks for an uber-high-res TV: the questions of who would supply the glass and who would supply the content were probably insurmountable. It's not like Samsung or LG were going to sign exclusive deals to sell the panels only to Apple. Then there are the content providers, who probably refused to provide custom content for Apple devices, fearing a repeat of the iTunes situation where they became beholden to Apple.

Comment Re:Greedy Corporation (Score 2) 214

XP is crap. Its driver model and security model are a total joke.

Please be more specific, because the whole NT line shares the same driver model (the most significant changes actually came in Win2k with the addition of PnP) and the same security model. And that includes Windows 10... The addition of UAC dialogs instead of runas isn't a security "model" change so much as an implementation detail. The virtualized HKLM writes aren't really "security model" changes either, and they are probably the single largest security change in newer Windows that actually makes a difference over running as a restricted user in 2k, XP, or 2k3.

So, I'm curious what exactly you think is crap about the Windows security model and what exactly changed that had a meaningful impact. And no, simply changing the default user to a restricted one doesn't really count, because anyone with half a brain did the same thing to older Windows installs. Maybe the largest resulting change is that crap software now actually works consistently in such an environment without custom policies for busted applications. ASLR maybe? But that is application-specific, and there are third-party utilities that provide it for XP. Same thing for driver signature enforcement: it's possible to set a GPO to reject unsigned drivers. Something ACL-related maybe? Because in Microsoft's words, "The fundamental structure of access control lists (ACLs) has not changed much for Windows Vista".
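
To make the restricted-user point concrete, here's a minimal Python sketch (using the real, if deprecated, IsUserAnAdmin call from shell32) that checks whether the current process holds an admin token. It behaves the same on XP and on Windows 10, because the token model it queries is the same:

    import ctypes

    def running_as_admin() -> bool:
        # True if this process holds an administrator token.
        try:
            return bool(ctypes.windll.shell32.IsUserAnAdmin())
        except AttributeError:
            return False  # ctypes.windll only exists on Windows

    if running_as_admin():
        # The pre-Vista discipline: daily driver is a restricted account,
        # plus "runas" for the rare task that actually needs elevation.
        print("Admin token -- consider a restricted account and runas")
    else:
        print("Restricted token -- the same setup a careful XP admin used")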

You should really read this article http://www.windowsecurity.com/... which is a pretty good introduction to the security features of the NT kernel, so that you can communicate effectively about what you think is wrong with the Windows security model before making blanket statements about it.

Comment Re:What is with this "HD" (Score 2) 175

The 3d technology was too new back then. And they jumped to the technology without much insight into the quality of the universe.

I assume you're aware that there is a new King's Quest in the making... King's Quest: Your Legacy Awaits. When I initially saw the screenshots, I was really sad. I guess they think 3d technology has evolved, but it still looks like ass in comparison to KQ7, which runs at a much lower resolution.

There's definitely a place for good hand-drawn art in video games. See Machinarium, and a number of Fire Maple games like The Lost City. I found these much more satisfying than nearly any game using a 3d engine that I've played in the last 10 years. Even the old prerendered games like Riven look better, IMHO. Realtime 3D/polygon rendering is cool for things that need 3D, but this idea that even isometric games (StarCraft, etc.) need to be 3D takes away from the experience.

I really was excited about the new XCOM until I saw the screenshots and found out it was using CryEngine. Not that there is anything wrong with that engine; I just wish someone would do a big-budget game with something other than it, Unreal, or id Tech. The use of one of three game engines for 99% of the games released in the past few years means they all have the same look and feel in my book.

Comment Re:Without demand to prop up economies of scale (Score 1) 276

Without demand to prop up economies of scale, will prices of general-purpose computers rise to where they were before the late 1990s?

Maybe my memory is failing me, but I don't think the upper-midrange PC is less expensive now than it was in the late 90's. Back then a cheap PC could be had for $700-800 and a decent one for $1500-2000. Sure, you could go crazy and dump $5k, but it didn't get you much over the $2k one.

Same thing today: a cheap PC is probably $400, a decent one is $1500, and you can dump $5k on a really good one. So the largest change is on the low end, where prices are a half to a third of what they were. This isn't really the market that people who need a PC are apt to buy into anyway.

The one thing that has happened is that laptops have gone from premium devices to covering the midrange and the low end.

Comment Crap technology? (Score 2) 405

Or maybe it's all the crap, half-baked technology being used over the last few years. I think we are in a period like the mid/late 90's, when everyone was shoveling garbage Windows apps out the door before they were done baking (and Win9x itself was a pile of crap).

It seems to me that over half the "web stacks" are just steaming piles of unfinished garbage. Same with a lot of the core infrastructure technologies that are all the hotness (see Docker, OpenStack, etc.).

So it's no wonder these things get stressful: someone hits a bug, and suddenly they are trying to fix software that is way over their head, on a deadline.

Comment More idiots... (Score 1) 532

It probably takes all of a couple fields scattered around the database, or a code-to-human-description table somewhere.

Then, when it comes to printing, the result set gets joined to the human-readable table and printed as "code, human text".

Heck, it's hard to imagine that the table doesn't exist, which leaves you with the feeling that printing only the "codes" is on purpose.
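
For the skeptical, a minimal sketch of the join I'm talking about, in Python with SQLite; the table and column names are invented for illustration, but any billing system will have an equivalent:

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.executescript("""
        CREATE TABLE charges (patient_id INTEGER, code TEXT, amount REAL);
        CREATE TABLE code_descriptions (code TEXT PRIMARY KEY, description TEXT);
        INSERT INTO charges VALUES (1, 'X1234', 500.00);
        INSERT INTO code_descriptions VALUES ('X1234', 'Aspirin, single tablet');
    """)

    # The statement printer needs exactly one extra join to turn an
    # opaque code into "code, human text" on the bill.
    rows = db.execute("""
        SELECT c.code, d.description, c.amount
        FROM charges c JOIN code_descriptions d ON c.code = d.code
        WHERE c.patient_id = ?
    """, (1,))
    for code, desc, amount in rows:
        print(f"{code}, {desc}: ${amount:.2f}")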

Because it keeps those pesky customers from asking why they paid $500 for something they can buy over the counter at Walmart for $1, or why the chest x-ray cost $2000 when it's the same as the one their doctor ordered, which was billed at only $50.

Comment Re:Latency vs bandwidth (Score 1) 162

It only takes a queue depth of 2 or 3 for maximum linear throughput.

I have no idea why you are upvoted so much, because you're flat-out wrong; five minutes with a benchmark like ATTO lets you see the performance with small sequential IO at low queue depth. There are plenty of other benchmarks showing ATTO sequential IOs for small transfers, too.

And you're sort of right: the OS will do a certain amount of prefetch, etc., but that doesn't help when things are fragmented or the application is requesting things in a pattern that isn't easily predictable (say, booting without a ReadyBoot-optimized system).

Try it out yourself: get the old Sysinternals DiskMon and watch the size attribute. It's in 512-byte sectors, and on my machine probably a third of the IOs are listed as "8", a.k.a. 4k. Heck, the example screenshot on its page is all 8's except for one 16.

So yes, small IO transfers are still an issue, and will be until we get OSes that can solve the hard problem of consolidating unpredictable IO streams. Heck, a lot of people turn SuperFetch off because it slows things down; aggressive prefetch isn't necessarily faster.
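
If you want to see the queue-depth-1 behavior without any tooling, here is a rough Python sketch (POSIX pread, no O_DIRECT, so the page cache will flatter the numbers; testfile.bin is a placeholder for a large file you create yourself, ideally bigger than RAM):

    import os, random, time

    PATH, BLOCK, COUNT = "testfile.bin", 4096, 10_000

    fd = os.open(PATH, os.O_RDONLY)
    size = os.fstat(fd).st_size
    # 4k-aligned random offsets; queue depth 1 means one IO in flight at a time
    offsets = [random.randrange(size // BLOCK - 1) * BLOCK for _ in range(COUNT)]

    start = time.perf_counter()
    for off in offsets:
        os.pread(fd, BLOCK, off)
    elapsed = time.perf_counter() - start
    os.close(fd)

    print(f"avg latency: {elapsed / COUNT * 1e6:.1f} us/IO")
    print(f"throughput:  {BLOCK * COUNT / elapsed / 1e6:.1f} MB/s")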

Comment Re:Latency vs bandwidth (Score 2) 162

Gosh, stupid HTML tags ate most of my posting. Anyway, here it is.

I don't understand why people still don't understand the difference between latency and bandwidth, and the fact that a huge amount of the desktop IO load is still less than 4k with a queue depth of basically 1.

If you look at many of the benchmarks, you will notice that the 0.5-4k IO performance is pretty similar for all of these devices, and that is with deep queues. Why is that? Because the queue depth and the latency to complete a single command dictate the bandwidth. So you either need deeper queues or lower latency to go faster at those block sizes.
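
The back-of-the-envelope arithmetic, with an assumed (not measured) 100 microsecond per-command latency:

    # bandwidth = block_size * queue_depth / per-command latency
    block, latency = 4096, 100e-6  # bytes, seconds (latency is assumed)
    for qd in (1, 4, 32):
        print(f"QD{qd:>2}: {block * qd / latency / 1e6:7.1f} MB/s")
    # QD 1:    41.0 MB/s  <- latency-bound; what a single user mostly sees
    # QD 4:   163.8 MB/s
    # QD32:  1310.7 MB/s  <- the number on the box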

So the latency on PCIe is not that much better, but the queue depth can be much deeper than what is possible with a normal AHCI controller. This helps a lot with benchmarks, but not so much for a single user.

Anyway, boot times and general single-user performance are bottlenecked mostly by latency, especially when the throughput of larger transfers is greater than a few hundred MB/sec. So the pieces large enough to take advantage of the higher bandwidth are a smaller (and shrinking) portion of the pie.

Next time you start your favorite game, look at the CPU/disk IO. It's likely the game never gets anywhere close to the max IO performance of your disk, and if it does, it's only for a short period.

Anyway, it's like multicore: beyond a fairly low core count, most desktop-type operations are better off with faster CPUs rather than more of them.

And just as with desktop benchmarks, the guys running them seem loath to weight single-thread operations, or queue-depth-1 1k IO loads, heavily in the overall performance picture, even though they are a large portion of actual system performance on everyday tasks.

Comment Re:Probably best (Score 1) 649

You probably don't have to go that old; plenty of cars from the late 90's are both safer and get better gas mileage. Sure, they have ECUs too, but the ECU tends to handle only engine management, and it's built with 80's-era DIPs and 1 MHz processors, meaning you can reprogram it, repair it, etc.

I have a late-90's Toyota that is pretty open (or has been reverse engineered) and has airbags, etc. But it doesn't have TPSs that have to be replaced all the time, or an AC that decides I don't want recirculation on with the defroster, or a headlamp controller that is part of the ECU and won't let me have the car running with the headlights off. Its stereo is also standalone... Etc. All things the more recent Toyota I also own has, and it's a PITA.

Basically, there is nothing scary about late-80's to 90's cars with ECUs, when the ECUs did little more than timing advance and injector timing (no ABS). You could probably build a replacement for those functions with an Arduino and a couple weekends on a dyno, as sketched below.
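
For flavor, a toy sketch of the core calculation (table values invented for illustration, Python for readability; a real replacement would be interrupt-driven C reading a crank sensor):

    # Base injector pulse width (ms), indexed [load][rpm].
    # All numbers are made up, not from any real car.
    RPM_BINS  = [1000, 2000, 3000, 4000, 5000, 6000]
    LOAD_BINS = [20, 40, 60, 80, 100]   # % throttle / MAP
    FUEL_MAP = [
        [1.5, 1.6, 1.8, 2.0, 2.2, 2.4],
        [1.8, 2.0, 2.3, 2.6, 2.9, 3.2],
        [2.2, 2.5, 2.9, 3.3, 3.7, 4.1],
        [2.6, 3.0, 3.5, 4.0, 4.5, 5.0],
        [3.0, 3.5, 4.1, 4.7, 5.3, 5.9],
    ]

    def nearest(bins, value):
        # Index of the closest bin; a real ECU would interpolate.
        return min(range(len(bins)), key=lambda i: abs(bins[i] - value))

    def pulse_width_ms(rpm, load_pct, coolant_temp_c):
        base = FUEL_MAP[nearest(LOAD_BINS, load_pct)][nearest(RPM_BINS, rpm)]
        return base * (1.2 if coolant_temp_c < 60 else 1.0)  # crude warm-up enrichment

    print(pulse_width_ms(rpm=2500, load_pct=50, coolant_temp_c=85))  # 2.0 with these values

The couple of weekends on the dyno are for filling in the table; the code itself is the easy part.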

The problem is the modern computer-on-wheels vehicle, where everything is integrated into a network and your car refuses to start when it notices the gas cap hasn't been screwed in completely.

Comment Re:Valve needs to use their clout (Score 1) 309

You mention Intel but fail to acknowledge that they are probably the best bet on Linux right now. Their drivers are open and seem to actually work pretty well (in my fairly limited experience). I've even played a number of Humble Bundle games on my Intel-based laptop.

Maybe the performance isn't great, but at least they work well enough to get X running across a couple screens without crashing/stuttering/etc. like the open-source AMD/Nvidia drivers, or simply refusing to work (as the Nvidia proprietary drivers have done for me a couple times).

Comment Re: And it's not even an election year (Score 3, Insightful) 407

The biggest secret to having good people isn't hiring H-1Bs; it's working to retain the people you have.

But... This would imply that people aren't "human resources" that can be swapped with each other at will. It implies that someone who works on a project for a few years can contribute more meaningfully to a product than someone just hired.

I've seen this a few times in my career: an "average" developer with a few years of experience on a project may not be as celebrated as the rock star who was just hired, but a couple years down the line, when the rock star has moved on, it's the "average" developer's code that doesn't need weekly maintenance. It's often the guys who have been there for a couple years who get tasked with cleaning up the mess, a problem much harder than creating it in the first place. That is, if they are still around, because even an average developer can put their resume out there and get a pay bump if they put the effort into it.

Bottom line, I totally agree: retention of good, solid "average" developers is what companies should be focusing on. Everyone is looking for a magic solution, but in reality a lot of software development is just slogging through loads and loads of unstimulating work.
