Comment Re:Latency vs bandwidth (Score 1) 162

It only takes a queue depth of 2 or 3 for maximum linear throughput.

I have no idea why you're so upvoted, because you're flat out wrong. Five minutes with a benchmark like ATTO lets you see the performance of small sequential IO at low queue depth; any benchmark showing ATTO sequential IOs for small transfers tells the same story.
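If you want to see it without installing anything, here is a minimal queue-depth-1 sketch in Python. The file name and sizes are placeholder assumptions, it needs a big pre-made test file, it uses the POSIX-only os.pread, and it does not bypass the page cache the way a real benchmark would, so treat the numbers as illustrative only:

```python
# Rough QD1 sequential-read throughput sketch. TEST_FILE is a placeholder:
# point it at a large existing file. Results are optimistic because the OS
# page cache and readahead are not bypassed (a real tool would use O_DIRECT),
# and os.pread is POSIX-only (Windows needs a different read call).
import os, time

TEST_FILE = "testfile.bin"          # assumption: a multi-GB file you created beforehand
BLOCK_SIZES = [512, 4096, 65536, 1024 * 1024]
READ_BYTES = 256 * 1024 * 1024      # read 256 MiB per block size

fd = os.open(TEST_FILE, os.O_RDONLY)
for bs in BLOCK_SIZES:
    offset, done = 0, 0
    start = time.perf_counter()
    while done < READ_BYTES:
        buf = os.pread(fd, bs, offset)   # one outstanding request at a time = QD1
        if not buf:
            offset = 0                   # wrap around if the file is smaller than READ_BYTES
            continue
        done += len(buf)
        offset += len(buf)
    elapsed = time.perf_counter() - start
    print(f"{bs:>8} B blocks: {done / elapsed / 1e6:8.1f} MB/s at QD1")
os.close(fd)
```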

And you're sort of right that the OS will do a certain amount of prefetch/etc., but that doesn't help when things are fragmented or the application/whatever is requesting things in a pattern that isn't easily predictable (say, booting without a ReadyBoot-optimized system).

Try it out yourself: grab the old Sysinternals Disk Monitor and watch the size attribute. It's in 512-byte sectors, and on my machine probably a third of the IOs are listed as "8", aka 4k. Heck, the example screenshot on the tool's page is all 8s except for one 16.
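If you don't have Disk Monitor handy, a crude stand-in (a sketch assuming the third-party psutil package is installed) is to sample the system-wide IO counters and compute the average request size. It won't give you the per-IO histogram Disk Monitor shows, but it makes the same point:

```python
# Sample system-wide disk counters twice and report the average IO size.
# Assumes the third-party psutil package (pip install psutil). This is a
# rough average, not the per-request histogram a tracer like Disk Monitor
# or blktrace would give you.
import time
import psutil

before = psutil.disk_io_counters()
time.sleep(10)                       # do some "normal" work during this window
after = psutil.disk_io_counters()

reads = after.read_count - before.read_count
writes = after.write_count - before.write_count
read_bytes = after.read_bytes - before.read_bytes
write_bytes = after.write_bytes - before.write_bytes

if reads:
    print(f"avg read size:  {read_bytes / reads / 1024:.1f} KiB over {reads} reads")
if writes:
    print(f"avg write size: {write_bytes / writes / 1024:.1f} KiB over {writes} writes")
```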

So yes, small IO transfers are still an issue, and will be until we get OSes that can solve the hard problem of consolidating unpredictable IO streams. Heck, a lot of people turn SuperFetch off because it slows things down; aggressive prefetch isn't necessarily faster.

Comment Re:Latency vs bandwidth (Score 2) 162

Gosh, stupid HTML tags ate most of my posting. Anyway, here it is.

I don't understand why people still don't understand the difference between latency and bandwidth, and the fact that a huge amount of the desktop IO load is still less than 4k with a queue depth of basically 1.

If you look at many of the benchmarks, you will notice that the 0.5-4k IO performance is pretty similar for all of these devices, and that is with deep queues. Why? Because the queue depth and the latency to complete a single command dictate the bandwidth: you either need deeper queues or lower latency to go faster at those block sizes.
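As a back-of-the-envelope illustration (the latency numbers here are round assumptions, not measurements): if one 4 KiB command takes 100 microseconds to complete end to end, QD1 caps you at roughly 40 MB/s no matter how fast the flash is, and only deeper queues or lower latency move that number.

```python
# Toy model: sustained throughput when block size, per-command latency and
# queue depth are the only limits. Latencies are illustrative round numbers,
# and the model assumes the device can keep every queued command in flight.
def throughput_mb_s(block_bytes, latency_s, queue_depth):
    # queue_depth commands in flight, each completing every latency_s seconds
    return block_bytes * queue_depth / latency_s / 1e6

for qd in (1, 4, 32):
    print(f"4 KiB, 100 us latency, QD{qd:>2}: {throughput_mb_s(4096, 100e-6, qd):7.0f} MB/s")
    print(f"4 KiB,  20 us latency, QD{qd:>2}: {throughput_mb_s(4096, 20e-6, qd):7.0f} MB/s")
```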

So the latency on PCIe is not that much better, but the queue depth can be much deeper than what is possible with a normal AHCI controller (NCQ tops out at 32 outstanding commands, while NVMe allows far more). This helps a lot with benchmarks, but not so much for a single user.

Anyway, boot times and general single-user performance are bottlenecked mostly by latency, especially once the throughput of larger transfers exceeds a few hundred MB/sec. So the pieces large enough to take advantage of the higher bandwidth are a smaller (and shrinking) portion of the pie.

Next time you start your favorite game, look at the CPU/disk IO. It's likely the game never gets anywhere close to the max IO performance of your disk, and if it does, it's only for a short period.

Anyway, it's like multicore: beyond a fairly low core count, most desktop-type operations are better off with faster CPUs rather than more of them.

And just like with desktop CPU benchmarks, the guys running storage benchmarks seem loath to weigh single-threaded operations, or queue-depth-1 1k IO loads, heavily in the overall performance picture, even though they are a large portion of actual system performance on everyday tasks.

Comment Latency vs bandwidth (Score 1) 162

I don't understand why people still don't understand the difference between latency and bandwidth, and the fact that a huge amount of the desktop IO load is still a few hundred MB/sec. So the pieces large enough to take advantage of the higher bandwidth are a smaller (and shrinking) portion of the pie.

Next time you start your favorite game, look at the CPU/disk IO. It's likely the game never gets anywhere close to the max IO performance of your disk, and if it does, it's only for a short period.

Anyway, it's like multicore: beyond a fairly low core count, most desktop-type operations are better off with faster CPUs rather than more of them.

And just like with desktop CPU benchmarks, the guys running storage benchmarks seem loath to weigh single-threaded operations, or queue-depth-1 1k IO loads, heavily in the overall performance picture, even though they are a large portion of actual system performance on everyday tasks.

Comment Re:Probably best (Score 1) 649

You probably don't have to go that old; plenty of cars from the late 90s are both safer and get better gas mileage. Sure, they have ECUs too, but the ECU tends to be only for engine management, and it's built with 80s-era DIPs and 1 MHz processors, meaning you can reprogram it, repair it, etc.

I have a late-90s Toyota that is pretty open (or has been reverse engineered) and has airbags/etc. But it doesn't have TPSs that have to be replaced all the time, or an AC that decides I don't want recirculation on with the defroster, or a headlamp controller that is part of the ECU and won't let me run the car with the headlights off. Its stereo is also standalone... etc. All things the more recent Toyota I also own has, and it's a PITA.

Basically, there is nothing scary about late-80s/90s cars with ECUs, back when the ECU did little more than timing advance and injector timing (no ABS). You could probably build a replacement for those functions with an Arduino and a couple of weekends on a dyno.
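To make "little more than timing advance and injector timing" concrete, here's a toy sketch of that core logic. The table values, names, and constants are invented purely for illustration, and it's Python rather than Arduino C, so don't go flashing anything with it:

```python
# Toy illustration of what an old-school ECU mostly does: look up spark
# advance and injector pulse width from small RPM/load tables. All numbers
# here are made up for illustration only.
from bisect import bisect_right

RPM_BINS = [800, 1600, 2400, 3200, 4000, 4800, 5600, 6400]
ADVANCE  = [10,  14,   18,   22,   26,   30,   32,   32]    # degrees BTDC (fictional)
BASE_PULSE_MS = 2.2                                         # fictional base injector pulse

def spark_advance(rpm):
    """Linear interpolation into the (fictional) advance table."""
    if rpm <= RPM_BINS[0]:
        return ADVANCE[0]
    if rpm >= RPM_BINS[-1]:
        return ADVANCE[-1]
    i = bisect_right(RPM_BINS, rpm) - 1
    frac = (rpm - RPM_BINS[i]) / (RPM_BINS[i + 1] - RPM_BINS[i])
    return ADVANCE[i] + frac * (ADVANCE[i + 1] - ADVANCE[i])

def injector_pulse_ms(manifold_pressure_kpa):
    """Scale the base pulse width by load (crudely approximated by MAP)."""
    return BASE_PULSE_MS * (manifold_pressure_kpa / 100.0)

for rpm, mapk in [(900, 35), (2500, 60), (5200, 95)]:
    print(f"{rpm} rpm, {mapk} kPa -> {spark_advance(rpm):.1f} deg advance, "
          f"{injector_pulse_ms(mapk):.2f} ms pulse")
```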

The problem is the modern computer-on-wheels vehicles, where everything is integrated into a network and your car refuses to start when it notices the gas cap hasn't been screwed on completely.

Comment Re:Valve needs to use their clout (Score 1) 309

You mention Intel, but fail to acknowledge that they are probably the best bet on Linux right now. Their drivers are open and seem to actually work pretty well (in my fairly limited experience). I've even played a number of Humble Bundle games on my Intel-based laptop.

Maybe the performance isn't great, but at least they work well enough to get X running across a couple of screens without crashing/stuttering/etc. like the open source AMD/Nvidia drivers, or simply refusing to work (as the Nvidia proprietary drivers have done for me a couple of times).
 

Comment Re: And it's not even an election year (Score 3, Insightful) 407

The biggest secret to having good people isn't hiring H-1Bs; it's working to retain the people you have.

But... This would imply that people aren't "human resources" that can be swapped with each other at will. It implies that someone who works on a project for a few years can contribute more meaningfully to a product than someone just hired.

I've seen this a few times in my career: an "average" developer with a few years of experience on a project may not be as celebrated as the rock star who was just hired, but a couple of years down the line, when the rock star has moved on, it's the "average" developer's code that doesn't need weekly maintenance. It's often the guys who have been there for a couple of years who get tasked with cleaning up the mess, a problem much harder than creating it in the first place. That is, if they are still around, because even an average developer can put their resume out there and get a pay bump if they put the effort into it.

Bottom line: I totally agree, retention of good, solid, "average" developers is what companies should be focusing on. Everyone is looking for a magic solution, but in reality a lot of software development is just slogging through loads and loads of unstimulating work.

Comment edgerouter.. (Score 1, Interesting) 225

I have the EdgeRouter PoE, which is a fantastic piece of hardware, but it still doesn't support proper VLAN tagging controls on the embedded switch ports. It's a feature I would add myself, but the hardware isn't open enough to do it without a lot of reverse engineering.

So this makes me wonder if they are sort of stuck between stupid hardware companies and the GPL. They may not be able to publish changes to the open source products without violating their NDAs with the manufacturers of the assorted chips/etc. they use.

I'm not trying to defend them, just pointing out a situation I've found myself in. GPL software is great for bootstrapping a project, but for some of these platforms it can be a real PITA. I feel for small companies like Ubiquiti, but I'm pretty irritated by Sony, Broadcom, Cisco, etc., which are playing the same game.

Comment Re:Only 8K? (Score 1) 263

I already have >1 GB of RAM on my video card. And besides, that means we might be catching up, proportionally, to the early 90s, when I had a 4 MB computer with a 1 MB graphics card.

Bring it on. I for one welcome having a 50+" display that I can't see the pixels on from 2' away, even if the video card burns 200 W just to refresh a 2D screen. That is why I have a desktop. Leave the boys with their laptops to crummy resolutions.

About time.

Comment Re:Too many pixels = slooooooow (Score 2) 263

Without knowing the size of the display, the whole discussion is pointless: 8k in 2" or 8k in 50"? (Rough numbers on that below.)

'Cause there is a world of difference, and humans have pretty good spatial memory. Having a monitor larger than what can be seen without moving your eyes/head is a good thing. In fact, that is what I'm using right now: four monitors are already more than I can see at the same time. With my focus on the left monitor, I can't really see anything on the right, but that doesn't make it less useful for having a PDF open, or another window of code. I can flip my eyes back and forth between the right and the left far faster (and less disruptively) than I can swap virtual desktops.
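Here are those rough numbers on the 2" vs 50" question, assuming a 7680x4320 panel and a 16:9 aspect ratio purely for illustration:

```python
# Back-of-the-envelope pixel density for an 8K (7680x4320, 16:9) panel
# at two very different diagonal sizes. Illustrative arithmetic only.
import math

H_PIX, V_PIX = 7680, 4320

def ppi(diagonal_inches):
    diag_pixels = math.hypot(H_PIX, V_PIX)   # pixels along the diagonal
    return diag_pixels / diagonal_inches

for diag in (2, 50):
    print(f'8K at {diag}" diagonal: ~{ppi(diag):,.0f} PPI')
```

That works out to roughly 4,400 PPI at 2" versus about 175 PPI at 50", which is the "world of difference" in question.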

Comment Re:scientific computing (Score 2) 125

I disagree, speaking as someone who has in fact had a weeks-long job running on my desktop before. I mean, if you have a fast PC (desktop processors are often faster per core, at the expense of fewer cores)

Well, you have to differentiate whether you're talking about an Intel desktop machine or a "workstation"-class machine. The difference at this point is that the "workstation" is using Xeon-class processors and has ECC. The problem is that the "workstation" has exactly the same processors as the rack-mount machine running in the server room, which has a much better power and cooling environment.

That said, it is possible to get something like the Xeon E5-2637 v3, which is a quad-core, 3.5 GHz (+turbo) CPU. Sure, it's not the 4+ GHz you can get on a "desktop" CPU, but it is LGA 2011-v3, which gives you significantly more memory bandwidth plus ECC.

Frankly, while I think running anything besides a desktop workload on a "desktop" is silly, because those kinds of workloads tend to be better handled by servers, it does seem that Intel has lost its way a little when it comes to extreme single-threaded performance, particularly in the server space. Why they don't offer a 200+ watt pig clocked a few percent higher seems strange; pretty much everyone else does it (POWER8, z13 at 5.2 GHz, AMD, etc.).

Comment Re:*sighs* (Score 1) 150

I saw Korn back in '98 and bought a Korn bottle opener at the show. I went back to see them again in '99; the bottle opener was on my keychain.

Did you actually think the security at a concert is there to protect the concert attendees?

Eye rolling... Their primary job is to protect the revenue streams inside the concert. Hence the focus on busting people with hip flasks and the like.

Comment Re:We need hardware write-protect for firmware (Score 1) 324

Unless the virus is resident in the BIOS... before booting into your OS.

Well, for this to work, you had better have write-protect jumpers on all your PCI/Thunderbolt/etc. devices with option ROMs as well. Otherwise your BIOS is going to get owned during POST.

But all that was a fine plan before we got EFI stuffed down our throats. Now you had better make sure to unplug whatever device holds the EFI System Partition as well, because you may be loading EFI "drivers"/etc. from there.

But there is a gotcha there too, because lots of machines now have the primary storage soldered to the mainboard...

Comment Re:To answer your question (Score 1) 279

X86 was a poor ISA when the first 8086 chips were made (but good, given hardware capabilities at the time). That was about 40 years ago. MIPS and Sparc (and ARM) are all better than x86.

You speak like it's 1995, before anyone fully understood OoO or started decoupling the micro-ISA from the architectural ISA. The core x86 arch (ignoring the 286/386 protected-mode instructions, which are very complex and mostly unused) turns out to be fairly simple when compared with MIPS/SPARC/ARM: three architectures that all made small but hard-to-overcome decisions when it comes to building large, superscalar, renamed, OoO CPUs. Take, for example, the fact that traditionally all of ARM's instructions can be conditionally executed. This complicates long pipelines, especially OoO ones, because now you have to resolve an additional dependency for every instruction before it's retired. If you look at the optimization guides for Cortex you will see that the basic ideas of ARM had to be "evolved" a little in order to make it fast.
Similarly: register windows (SPARC), load/store-multiple instructions (which need a more complex exception mechanism), etc., etc.

So to say that x86 is somehow "worse", or that any of those named architectures is "better", evokes the very wrongheaded RISC-vs-CISC stupidity of the 1990s. This has been understood for nearly a decade by anyone close to the development of actual CPUs. It's similar to the discussion ten years ago about how x86 could never be power-competitive with ARM because there was some "fundamental" problem with the ISA.

ISAs are now "good" when they remain flexible enough to deal with multiple different microarchitectural implementations without imposing handicaps that limit the designs. It turns out that x86 isn't that bad; it seems to be a bit of luck that the src/dest register model can be renamed easily, and that it has some higher-level instructions (like rep movsd) that can be optimized really well in microcode.

Comment node.js (eye rolling) (Score 1, Interesting) 319

I could go into a dozen technical reasons why JavaScript is a terrible, horrible, outrageously bad language here, but this post would be TL;DR for most people. Let's just settle for googling "javascript terrible" and reading the first couple of links.

Or for some silly (but not really deal-killing) things, watch https://www.destroyallsoftware...

The fact that there are actually people who think using it on the server is a good idea proves there are insane or completely incompetent developers out there. If someone actually approaches me with this idea, I immediately think they are an idiot.

See, in the browser we basically have to deal with JavaScript because there aren't any real alternatives. But things that are just "issues" or "irritations" in the browser quickly blossom into product-killing problems when used on the server.

Oh, and yes, I've written my fair share of JavaScript (and other languages), so don't think I'm talking out of my ass here.
