


Comment Open sourcing your private toolset would help (Score 1) 480

Before signing the contract, open source the private code library/snippets you intend to use under a BSD license and put them into a well-known repository like GitHub or SourceForge. The BSD license is just to reassure your employer/client that all modifications will remain their sole property, and that what they get is some debugged and cleaned code for free, with no strings attached. After that, whenever the question of authorship arises, you can point to the traces of the BSD code and the date of submission to the repository, and you can also ask the employer/client to add a BSD copyright notice with your name to their code.

Comment The same way as with younger programmers (Score 1) 509

There are always programmers who never bothered to retain their math coursework. They usually have enough coding knowledge to provide some value, but from a technical perspective they are a slowly-increasing liability. As an example: I work with a developer who is 10 years younger than me, but he still doesn't understand how to extract a rotation matrix from a coordinate transformation matrix and cannot be trusted to code gradient descent without causing a mess that somebody else will have to clean up. On top of that, he is really resistant to the idea of refreshing his math skills; I suspect he dislikes having people he considers junior to him making suggestions about how to improve his math knowledge. How do you help somebody like this improve their skill-set? And, most importantly, how do you do so without stepping on anybody's feelings?
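For reference, the extraction mentioned above really is only a few lines. Here is a minimal sketch in Python/NumPy; the `transform = translation * rotation * scale` decomposition (no shear) is my own assumption, not something stated in the comment:

```python
import numpy as np

def extract_rotation(transform):
    """Pull the pure rotation out of a 4x4 affine transform.

    Assumes transform = T @ R @ S (rotation plus per-axis scale, no
    shear): the upper-left 3x3 block is then R @ S, so dividing each
    column by its length removes the scale and leaves the rotation.
    """
    m = np.asarray(transform, dtype=float)[:3, :3]
    scale = np.linalg.norm(m, axis=0)   # per-column lengths = scale factors
    return m / scale                    # unit columns -> rotation matrix

# Quick check: recover a known rotation from a scaled, translated transform.
angle = 0.7
R = np.array([[np.cos(angle), -np.sin(angle), 0.0],
              [np.sin(angle),  np.cos(angle), 0.0],
              [0.0,            0.0,           1.0]])
T = np.eye(4)
T[:3, :3] = R @ np.diag([2.0, 3.0, 0.5])   # anisotropic scale
T[:3, 3] = [5.0, -1.0, 2.0]                # translation
assert np.allclose(extract_rotation(T), R)
```

If the transform may contain shear or reflection, a polar or SVD decomposition is the safer route, but the column-normalization trick covers the common rigid-plus-scale case.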

Comment No (Score 2) 209

Deep learning systems are not really simulations of biological neural nets. The breakthrough in DL happened when researchers stopped trying to emulate neurons and instead applied a statistical (energy function) approach to a simple, refined model. The modern "ANNs" used in deep learning are in fact mathematical optimization procedures: gradient descent on a hierarchy of convolutional operators, with more in common with numerical analysis than with biological networks.
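To make the "optimization, not biology" point concrete, here is plain gradient descent on a least-squares objective in Python/NumPy; the toy data, learning rate, and iteration count are my own choices for illustration:

```python
import numpy as np

# Toy problem: fit w in y = X @ w by minimizing the mean squared error.
# Nothing neural here, just numerical optimization.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
w_true = np.array([2.0, -1.0, 0.5])
y = X @ w_true

w = np.zeros(3)
lr = 0.1
for _ in range(500):
    grad = 2.0 * X.T @ (X @ w - y) / len(y)  # gradient of the MSE
    w -= lr * grad                           # the entire "learning" step

assert np.allclose(w, w_true, atol=1e-6)
```

Training a deep net is this same loop with a more elaborate objective (a stack of convolutions and nonlinearities) and gradients computed by backpropagation.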

Comment GPU/GPGPU bottleneck (Score 1) 128

In my experience the GPU, and especially GPGPU, bottleneck is not the amount of memory but memory access bandwidth. A 256-512 bit bus is not adequate for existing apps. Before the amount of memory becomes important, manufacturers should move to at least a 2048-bit memory bus and also increase the number of registers per core several times over.

Comment Re:Hiring assholes is never worth it. (Score 1) 400

However, some projects require the code to do something specific and nontrivial, and to be finished in finite time. Readable, commented, understandable code that does nothing is acceptable only on very long projects, where the managers and team leaders change jobs before the project is cancelled.

Comment Funny thing about those tests (Score 1) 776

I was consulting for a division of a big corporation and was asked several times to help with interviews. Candidates who did well, or at least not badly, on my questions/tests were invariably rejected by the division's HR: they were too expensive. Workers without experience or knowledge of the field were hired instead because they were cheap. Happily, they are not asking me any more. The whole endeavor is depressing.

Comment There is no "realistic HPC workload" (Score 2) 95

Specific algorithmic implementations, and the limitations hardware imposes on them, are wildly different. Some algorithms don't need ifs and branch prediction; others do. Different algorithms have different memory access patterns, different kernel complexity (for GPGPU), and different memory bandwidth requirements. Even on a GPGPU, algorithms doing the same thing in CUDA and in OpenCL can differ in performance by several times. Some algorithms consist mostly of matrix multiplication, and quite a number of useful methods reach peak GPGPU performance (for example, PDE solvers). You can't find a consistent "average" HPC algorithm.

Comment GPU don't do "efficiency" (Score 1) 563

Why do you think we still use CPUs? A GPU core can't stop a thread once it has started it; it can only discard the result on an early return. Nor does it do branch prediction or efficient caching. Basically, throw big data or small data at it: if the size is not hardcoded, it will do the same amount of work either way. Simplifying somewhat, you could say it recalculates the whole screen buffer to change a single pixel.
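A rough way to picture that lockstep execution model is the following Python/NumPy sketch. This is a simplification of real SIMT predication, and the toy branch functions are my own invention, but it captures the key point: on a divergent branch, both sides get computed for every thread and a mask merely selects which result survives.

```python
import numpy as np

# 8 "threads" in lockstep. With a divergent branch, the hardware runs
# BOTH paths over the whole warp and masks out the unwanted results.
# np.where behaves the same way: both branch expressions are evaluated
# for every element before the mask picks the survivors.
data = np.arange(8.0)
mask = data < 3.0                            # per-thread "if" condition
cheap = data + 1.0                           # branch A: done for all 8
costly = np.sqrt(data) * np.log1p(data)      # branch B: also done for all 8
result = np.where(mask, cheap, costly)       # mask selects; work is already spent
```

Whether one element or seven take the expensive branch, the amount of computation is identical, which is why early exits buy you so little on a GPU.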
