Before signing the contract, open-source the private code library/snippets you intend to use under the BSD license and put them into a well-known repository like GitHub or SourceForge. The BSD license is just to calm down your employer/client: all modifications will remain their sole property, and what they get is some debugged and cleaned code for free, with no strings attached. After that, whenever a question of authorship arises, you can point to the traces of BSD code and the date of submission to the repository, and also require the employer/client to add a BSD copyright notice with your name to their code.
Pay a bonus for a product of acceptable quality, and make sure contractors confirm online that you actually pay bonuses. Build yourself a reputation as a trustworthy client and you will attract qualified contractors.
There are always programmers who never bothered to retain their math coursework. They usually have enough coding knowledge to provide some value, but from a technical perspective they are a slowly increasing liability. As an example: I work with a developer who is 10 years younger than me, but still doesn't understand how to extract the rotation matrix from a coordinate transformation matrix, and cannot be trusted to code gradient descent without causing a mess that somebody else will have to clean up. On top of that, he is really resistant to the idea of refreshing his math skills; I suspect he dislikes people he considers senior to him making suggestions about how to improve his math knowledge. How do you help somebody like this improve their skill set? And, most importantly, how do you do so without stepping on anybody's feelings?
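For reference, "extracting the rotation" from a transform really is only a few lines; a minimal sketch in plain Python, assuming a 4x4 column-vector homogeneous transform with per-axis scale but no shear:

```python
import math

def extract_rotation(T):
    # T: 4x4 nested-list homogeneous transform [[S*R | t], [0 0 0 1]].
    # Assumption: no shear, so normalizing each column of the upper-left
    # 3x3 block strips the scale and leaves the pure rotation.
    R = [[T[r][c] for c in range(3)] for r in range(3)]
    for c in range(3):
        s = math.sqrt(sum(R[r][c] ** 2 for r in range(3)))
        for r in range(3):
            R[r][c] /= s
    return R

# Toy transform: rotation about z by 30 degrees, uniform scale 2, translation.
a = math.radians(30)
rot = [[math.cos(a), -math.sin(a), 0.0],
       [math.sin(a),  math.cos(a), 0.0],
       [0.0,          0.0,         1.0]]
T = [[2 * rot[r][c] for c in range(3)] + [t]
     for r, t in zip(range(3), (5.0, -1.0, 3.0))]
T.append([0.0, 0.0, 0.0, 1.0])
```

Verifying that `extract_rotation(T)` recovers `rot` is exactly the kind of sanity check one would hope a developer could write unaided.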
Deep learning systems are not really simulations of biological neural nets. The breakthrough in DL happened when researchers stopped trying to emulate neurons and instead applied a statistical (energy function) approach to a simple, refined model. The modern "ANNs" used in deep learning are in fact mathematical optimization procedures, gradient descent on some hierarchy of convolutional operators, with more in common with numerical analysis than with biological networks.
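A minimal sketch of what that optimization actually is, run on a toy one-dimensional quadratic rather than a real network loss:

```python
def gradient_descent(grad, x0, lr=0.1, steps=200):
    # The same update rule that, scaled to millions of parameters and a
    # far more complicated loss surface, trains a deep network: step
    # against the gradient until the objective stops decreasing.
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# Toy objective f(x) = (x - 3)^2, gradient 2 * (x - 3); minimum at x = 3.
x_min = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
```

Nothing in the loop refers to neurons or spikes: it is numerical analysis through and through.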
In my experience the GPU, and especially GPGPU, bottleneck is not the amount of memory but memory access bandwidth. A 256-512 bit bus is not adequate for existing apps. Before the amount of memory becomes important, manufacturers should move to at least a 2048-bit memory bus and also increase the number of registers per core several times over.
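For a rough sense of scale, peak bandwidth is just bus width times effective transfer rate; a back-of-the-envelope sketch with hypothetical numbers (the 7 GT/s rate is an assumption for illustration, not a spec of any particular card):

```python
def peak_bandwidth_gbs(bus_width_bits, rate_gtps):
    # Peak memory bandwidth in GB/s: (bytes moved per transfer) times
    # (effective billions of transfers per second).
    return bus_width_bits / 8 * rate_gtps

# Hypothetical figures for illustration only:
narrow = peak_bandwidth_gbs(512, 7.0)   # 448.0 GB/s
wide = peak_bandwidth_gbs(2048, 7.0)    # 1792.0 GB/s
```

At the same transfer rate, quadrupling the bus width quadruples the peak bandwidth, which is the point of the argument above.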
However, some projects require the code to do something specific and nontrivial, and to be finished in finite time. Readable, commented, understandable code that does nothing is acceptable only for very long projects where managers and team leads change jobs before the project is cancelled.
I was consulting for a division of a big corporation and was asked several times to help with interviews. Candidates who did well, or at least not badly, on my questions/tests were invariably rejected by the division's HR: they were too expensive. Workers without experience or knowledge of the field were hired instead because they were cheap. Happily, they are not asking me any more. The whole endeavor was depressing.
Specific algorithmic implementations, and the limitations imposed on them by hardware, are wildly different. Some algorithms don't need ifs and branch prediction; others do. Different algorithms have different memory access patterns, different kernel complexity (for GPGPU), and different memory bandwidth requirements. Even on a GPGPU, algorithms doing the same thing in CUDA and in OpenCL can differ in performance several times over. Some algorithms consist mostly of matrix multiplication, and quite a number of useful methods reach peak GPGPU performance (for example, PDE solvers). You can't find a consistent "average" HPC algorithm.
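A toy illustration of what "different memory access pattern" means: the two traversals below compute the same sum, but one walks memory with stride 1 while the other jumps a full row every step, and on real hardware (far more so on a GPU, where it decides whether accesses coalesce) that difference alone can swing performance several times. Sizes here are made up:

```python
# A flat row-major "matrix" of ROWS x COLS elements.
ROWS, COLS = 256, 256
data = [float(i) for i in range(ROWS * COLS)]

def sum_row_major(a):
    # stride-1 access: consecutive elements, cache/coalescing friendly.
    return sum(a[r * COLS + c] for r in range(ROWS) for c in range(COLS))

def sum_col_major(a):
    # stride-COLS access: identical arithmetic, hostile access pattern.
    return sum(a[r * COLS + c] for c in range(COLS) for r in range(ROWS))
```

Same result, same operation count, very different memory behavior; that is why there is no single "average" algorithm to benchmark.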
Looks like Ballmer's definition of "people" is not the one we have in the dictionary. I wonder if he means lizards, or maybe ant folk...
You can opt out of our new experimental nuclear reactor program near your home. To opt out, buy a license for going nuclear-free from Intellectual Ventures.
Why do you think we still use CPUs? A GPU core can't stop a thread once it has started it; it can only ignore the result on an early return. Neither does it do branch prediction or efficient caching. Basically, throw big data or small data at it: if the size is not hardcoded, it will do the same amount of work. Simplifying somewhat, you could say it recalculates the whole screen buffer to change a single pixel.
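A sketch of that "same amount of work either way" behavior, mimicking GPU-style predicated execution in plain Python (the masking scheme is a simplification for illustration, not real hardware semantics):

```python
def predicated(xs):
    # GPU-style "branch": every lane computes BOTH sides of the if, and a
    # mask selects the result. The arithmetic done is the same whether the
    # condition is rare or common; nothing is actually skipped.
    out = []
    for x in xs:
        cond = x < 0        # the "if"
        a = -x              # then-branch, always computed
        b = x * 2           # else-branch, always computed
        out.append(a if cond else b)  # select, like a hardware mask
    return out
```

A CPU with real branches and prediction would skip the untaken side entirely, which is one reason it still wins on irregular, data-dependent code.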
As a side benefit, your computer doubles as a heater as soon as it is turned on.
It gives you flexibility. BI would at best give you the same chances as CS with some clients/employers, and a lot fewer with others. An MBA could actually be considered detrimental for science/research-heavy projects; it could make an employer doubt your commitment.
Santino was castrated. It seems the zookeepers decided his planning ability was too advanced for their liking. Something to remember for anyone intending to show advanced planning ability to a more technologically advanced species.