
Comment Re:massive parallel processing=limited application (Score 1) 108

That's why many-core server CPUs have massive L3 caches and quad-channel memory. A 24-core x86 CPU with around 60 MiB of L3 cache? Why not? More memory channels allow more concurrent access. Intel NICs support writing packets directly into L3 cache so as to skip main-memory writes, and large on-NIC buffers make better use of DMA by collecting packets and transferring them in larger chunks, reducing memory operations and putting that high-bandwidth memory to work.

In case it's not clear, I'm not saying your point isn't valid; I'm saying it explains a lot of the current features in high-end components.

Comment Re:massive parallel processing=limited application (Score 1) 108

Also, multicore designs can have separate memory.

NUMA comes to mind, but it adds complexity to both the OS and the application. Accessing another CPU's memory is expensive, so the OS needs to try to keep threads close to their data. Applications need to handle cross-socket communication through a system API, assuming one exists, to find out which socket a thread is on and do their best to stay within the current socket's threads. Cross-socket communication is probably best done by passing messages instead of references, because copying and storing the data locally, even if duplicated, will be faster than constantly accessing the other socket's memory.

Then you have the issue of load-balancing memory allocations. It may not always be a problem, but it can become one if you consume all of one socket's memory. There are other issues too, like one socket having direct access to the NIC while the other has direct access to the hard drives. Topology is important.

As soon as you step out of a cache-coherent system, you run into even more fun problems. Stale data is a huge issue: you need to make sure you're always getting the current data and not some local copy. At the same time, without cache coherency, cross-core communication is very high latency. Most x86 CPUs can keep caches coherent with latencies of just a few cycles. While copying the data may not be any faster, you know very quickly whether the data changed. If the data is read often and rarely changed, you get nice guarantees about how quickly you learn of a change, and you only pay the access cost when that event happens. Without coherency, you are forced to go out to memory every time, incurring high latency on every access.

With many-core systems, cache coherency has an N^2 problem. I'm sure someone will come up with the idea of "channels" to facilitate low-latency inter-core communication while keeping normal memory access separate, possibly even islands of cache coherency in an ocean of cores, each island being a small group. Some of the many-core designs with 80+ cores have heavy locality issues. Adjacent cores are fast to access and far-away cores are very expensive: pretty much think of each core as only able to talk to its neighbors, with requests to far-away cores needing many "hops". Even worse, cores physically nearer the memory controller have faster access to memory, and all memory requests have to go through those cores. Lots of fun issues that require custom OS designs.

Comment Re: massive parallel processing=limited applicatio (Score 1) 108

I've managed super-linear speedups a few times with multi-threading. It required good use of cache. If you can get the threads pseudo-synchronized without using any actual synchronization, then what the first thread reads from memory, the other threads can benefit from. This only applies to cores that share the same cache. The super-linear part no longer applied when adding more sockets/CPUs, and adding more cores had diminishing returns, approaching a fixed percentage improvement over a single thread.

Then I tell people I code in C# and they don't understand how someone who writes in a high-level language knows how to think so low-level. Let's just say I'm the go-to guy when you can't empirically find out why your code is slow. Many hard performance issues cannot be measured, because measuring changes the outcome. At that point you need a good mental model of how CPUs, caches, memory, OSes, thread schedulers, I/O schedulers, hard drives, SSDs, and networks interact to produce strange slowness when no one part is the bottleneck. It's almost always latency vs. throughput, with different parts of the system having different throughput and latency characteristics.

Comment Re:So let me get this straight (Score 1) 188

$180 PSU, $150 mobo, $150 memory, $400 for a few SSDs, $60 case, $200 monitor, $300 GPU, $70 Intel network card. Not to mention $30 each for the mag-lev bearing fans. Yep, I really want to save $50 on a CPU with heat and power issues.

I came from a poor family; I had to earn my own money to buy computer parts as a kid, and I've learned to appreciate quality. If AMD can get within 10% of Intel in per-core performance and efficiency, I will support the underdog. I really want a bunch of cores and ECC memory on my desktop, but Xeons are too expensive.

Comment Re:Reliability (Score 1) 209

I've been building and repairing computers for 25+ years and have worked in IT for quite a few. I have never seen a hard drive die from power issues. I have seen burnt motherboards and melted traces where the power comes in, but never an HD killed by a surge or lightning strike; pretty much only unexpected shutdowns in need of a scandisk. I have seen drives die for a myriad of other reasons.

How common are surge/lightning/PSU-blow-up HD deaths? My limited experience is "not often" since I've never seen one.

Comment Re:comment (Score 1) 209

To change any earlier block: changing earlier data requires the later data to be re-written, because the write head is wider than the read head. As long as you only append data, you're fine. Therein lies the rub: how do you know if you're near the front or back of a shingled region? If regions are always per-track, that information is available, but even then most (if not all) file systems don't care. OpenZFS will care in the future; its CoW nature plays well with almost always appending to these regions, reducing the amount of re-writing.

Comment Re:comment (Score 1) 209

OpenZFS has been working to become aware of shingled storage. The CoW nature of ZFS already plays well with shingled recording, but it will get much better once the FS is aware of the layout. In theory it's not much work; in practice, it's a lot of refactoring.

Comment Re:Protection plans (Score 1) 90

That's pretty crappy. My ISP uses Ethernet for everything, including voice and TV. They had to run Cat5e throughout my house. My friend built a new house with no CAT, and he said they ran the cabling through his drywall like pros. All "free", of course. If my podunk ISP can afford to run CAT to every house in the city, then Comcast can easily afford to fix or re-run coax for a small number of customers.

Comment Re:Not tech crisis - it's a general crisis (Score 1) 118

Which do you believe is the cause? Do you believe such people resorted to reason when their natural behavior was not accepted, or that their natural behavior was not accepted because they reasoned through their decisions? Which one you assume will greatly affect how you perceive the world.

Probably, like nature vs. nurture, it's a mix of both.

I have been starting to believe the main difference is personality. Potential be damned: if you don't have the obsession or the passion, you'll never exercise your ability. Hypothetical: two people with equal potential, but one has been fervently exercising their ability to think critically since the age of two, while the other only started to notice at 20 and even then only exercises it when required or forced. Will the latter ever catch up to being remotely close?

Everyone I know has had traits, from a young age, that earned their parents' disdain. No one is an easy child, though some mindsets are easier to accommodate at certain ages than others.

I mean they were so different that they were either ignored, because their parents couldn't relate, or actively discouraged, because their parents tried too hard to "help". A sub-optimal to hostile learning environment.

And I've met people who were absolute trained cogs in the system until they were faced with a question they could not dismiss, a concept that did not fit their lessons

So they do exist. The (not many) people I've ever met who were smart enough to recognize a question they could not dismiss, but weren't critical thinkers, just didn't care. Everyone else doesn't seem to recognize when something doesn't fit their lessons. I guess I subscribe to a bimodal "bathtub" distribution: two widely separated modes with few people in between.

dismissing a population as having no potential to think for themselves just makes you a modern eugenicist

Yeah, I don't like that either. On the other hand, I firmly believe there are many forms of intelligence, critical thinking being only one of them. I also believe perfect is the enemy of good. Eugenics tries to create perfection using flawed metrics of intelligence and usefulness. This is one topic where I would rather use science to create a pound of cure than an ounce of prevention (eugenics).

Comment Re:Not tech crisis - it's a general crisis (Score 1) 118

Overall I highly agree with you, but only insofar as teaching children to question "truth" is a good trait. On the other hand, that treats children as if they're pets that need to be trained. There are many forms of intelligence, but I'll focus on critical thinking and fluid intelligence. All the intelligent people I know had strong critical thinking skills from a young age, even to their parents' disdain. Many of them were treated as "different", and in spite of all the social pressure to fit in and teachers trying to make them like everyone else, they continued to do what they enjoyed: learning.

Should I think of the general populace as a bunch of idiots that need to be trained, or as intelligent people who will be smart regardless of the education system because they can self-educate? If you need your hand held (trained/educated), you'll never fully learn critical thinking, because by definition you cannot think for yourself. Of course education/training at its core is very important, since there are a lot of skills necessary to function as a society, like communication and basic math, but when it comes to higher learning it's not an issue of being taught, it's an issue of being able.

Comment Re: Translation: More H-1Bs (Score 1) 118

You can't train CS. Ever seen the damning stats on CS and programming in general? An 80% failure rate in the first two semesters among people who WANT to get into CS or programming. Then another 20% drop out along the way, and of the remaining 16% of applicants, half only pass through sheer willpower but are otherwise terrible. Of the remaining 8% who have even the slightest knack, skills are distributed on a power curve, leaving 80% of them below average.

We don't need a strong push to get more people into CS; we need better ways to find those who would be good at it and help them get into it and afford it.
