Agreed. The field is just too large for people to have *any* chance of keeping a useful handle on what is going on. You are going to have to specialise. The trick, I think, is to know what you don't know, not to keep up with everything. I was recently asked by a potential client if I could produce a mobile phone app for them - 'no chance, I'm not competent at that' was my reply. I pointed them at someone who specialises in this work. Knowing what you don't know (and who does!) is a much more useful skill than attempting to keep up with everything...
Actually, no, what lots of us do is solve real-world business problems by automating them using computers. Knowing what the business problem is is just as valuable as understanding how computers work, and you can approach the solution from either direction - neither is inherently better than the other.
BTW, I'm a CompSci, but I work with Physics PhD types who understand the problems I work on better than I ever will, and we have complementary skills.
It has always interested me to know what drives companies to upgrade their systems. Let's say you have a farm of 1,000 servers that you've had for 5 years, doing useful stuff, running 12.04 - what incentive is there for you to upgrade?
If they are web facing, and under attack - sure, I get it.
If you are developing cutting edge software for deployment to other hosts - I get it.
But if you are using them to actually do work for your company, say, running some data mining, or hosting a big kafka cluster, why change? The logical point is when you rip the lot out and install new hardware (and decide on a new machine config, including OS) but for existing hardware, shouldn't the OS choice live for the life of the server?
Well, it might introduce latency, or it might not; it really does depend on exactly how it's all set up.
I can imagine VS working on a shared filesystem, so that sshing in and invoking the build (probably via cmake, as cmake support is new in VS2017) is pretty quick and painless if the machine is nearby. If it runs the build and integrates build/test results with the IDE, then it's going to be a performance win for some people who are very VS-centric and who are developing cross-platform libraries.
Anything that reduces the pain of cross-platform development is welcome - it's not easy to do well, and you really don't want to be switching between multiple toolchains too much, as it'll make your brain hurt. Each additional platform takes considerable extra effort, and keeping that number in check will really help.
HP DL370 - industry-standard 2-socket (2S) Xeon server
HP DL570 - industry-standard 4-socket (4S) Xeon server
Sure, getting hold of a mass-produced 4S motherboard is difficult, and these 4S servers aren't cheap, but if you are intending to buy $20k of processors to run on it, the difficulties can also be solved by throwing money at the problem.
I'm not sure what world you are living in, but in the one I'm in, we have CPUs with a lot more cores than 4, 6 or 8.
For starters, mainstream Intel dual-socket-capable processors have 22-core options - the E5-2699 v4, for example. So you can get 44 cores into a dual-socket machine.
Sun/Oracle got into this game in a big way with their T series processors, and blurred threads vs cores (in a very interesting way), producing things like the T5 with 16 cores and 128 threads - it's like hyperthreading, but very cleverly done: instead of relying on out-of-order execution to keep the execution units humming, you use multiple threads. Of course you can get multi-socket machines for these too, such as the T5-8 with 8 sockets (128 cores, 1024 threads).
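To illustrate the idea of using many threads to keep execution units busy while some threads stall on memory, here's a minimal generic sketch in C with POSIX threads. It isn't tied to any particular SPARC or Xeon part; the thread count and chunk size are arbitrary choices for the example.

```c
#include <assert.h>
#include <pthread.h>
#include <stdint.h>
#include <stdlib.h>

#define NTHREADS 8           /* arbitrary for the sketch */
#define CHUNK    (1 << 16)   /* elements per thread */

static uint64_t data[NTHREADS][CHUNK];
static uint64_t partial[NTHREADS];

/* Each thread sums its own chunk; while one thread stalls on a load,
 * the hardware can run another - the effect the T series exploits. */
static void *sum_chunk(void *arg)
{
    size_t id = (size_t)(uintptr_t)arg;
    uint64_t s = 0;
    for (size_t i = 0; i < CHUNK; i++)
        s += data[id][i];
    partial[id] = s;
    return NULL;
}

uint64_t parallel_sum(void)
{
    pthread_t tid[NTHREADS];
    /* Fill every element with 1 so the expected total is easy to check. */
    for (size_t t = 0; t < NTHREADS; t++)
        for (size_t i = 0; i < CHUNK; i++)
            data[t][i] = 1;
    for (size_t t = 0; t < NTHREADS; t++)
        pthread_create(&tid[t], NULL, sum_chunk, (void *)(uintptr_t)t);
    uint64_t total = 0;
    for (size_t t = 0; t < NTHREADS; t++) {
        pthread_join(tid[t], NULL);
        total += partial[t];
    }
    return total;
}
```

On a many-thread part the scheduler interleaves these threads per core, so memory stalls in one thread overlap with useful work in another; on a desktop chip the same code just runs across the available cores.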
So, high core counts are out there, you just have to look a little further than Intel processors aimed at the desktop market.
Actually, svn is just about the perfect source control system if you want something quick and dirty that you can understand. git (I presume that is what you would propose as an alternative) adds nothing that many small software development teams actually need. Fortunately, the svn->git migration path is well trodden.
If you had mentioned cvs, or rcs, then I'd agree.
The Z600 and Z620 are a real joy to use and sit by - they are full of low-speed fans and sensible airflow design, so you can get a dual Xeon in an under-desk format, and these would be my preference. I've actually got one of each; the Z620 is my normal desktop machine, with dual E5-2670s and 48GB of RAM. It has plenty of CPU and RAM to throw at VMs. The Z600 I have has X5650 processors in it, so it's dual six-core, and plenty fast enough.
The DL380s (G6/G7/G8) are not silent, and you'd get annoyed if you tried to live with one next to you all day. The fans get loud and blowy, but can be set to throttle back. The iLO stuff allows you to remote power-cycle these servers and check their overall happiness, which makes remote operation much more manageable. I've got 4 (two G6s and two G7s) in my loft, and they are great, doing both CI duty and, in the case of two of them, serving as a test platform for some performance benchmarking. Power use is around 80-100 watts per server, but that's because I have them throttling back to conserve power, as they are for development; normally one is on, and the others I power up when I need them.
So if you have space, DL380s are great, but if you want to sit next to it all day, go Z600/Z620.
My preferred choice of machines from that era would be the HP Z600 and Z620 - dual socket capable machines, taking X5600 series in the case of the Z600, and E5 and E5 v2 processors in the Z620 (v2 only if it's the later BIOS version).
Here in the UK, the Z600 with dual 6-core processors can be picked up with relative ease, and these work well. The Z620 can be harder to find, but can run with dual 8-core processors (say, E5-2670) or dual 10-core processors (E5-2670 v2). The v2s are still very expensive on the second-hand market, so stick with an E5 for now. Both machines take heaps of standard DDR3 registered RAM, so it's easy to come by.
If you want rack mount, get an HP DL380, say a G7 or a G8. The G7s are very cheap these days, whilst the G8s tend to hold their value a bit more. Again, dual-socket boards, so you get lots of CPU and plenty of DDR3 slots in them.
If you worry that you're going to miss out with the slower RAM that these machines use, then you're doing it wrong - get your working set down into L3 cache (20MB or so), that'd be my advice.
Realistically, what is going on here is that China has started investing in this area because it is massively behind - the US and Europe are miles ahead, and China isn't even on the radar. I'd suggest London is the centre for development, whilst the US provides the hardware expertise. I'm pretty certain I've not worked with any Chinese kit over the years in trading systems, and I'd be surprised if this became common, for various reasons (and let's face it, security concerns would be on the list).
Well, I believe them - it will be encrypted before it leaves your machine. The real question is who will have the key.
Indeed, a successful model. The right team mix, with the right level of experience and enthusiasm, can really churn through the work and get stuff sorted. Team of 8 where I am - four under 25, two in their mid-30s, two over 45. The trick is to understand what everyone brings to the team (invariably different skills), and to totally ignore upper management's belief that everyone is equally capable of and interested in tackling every problem (you're all developers, right?).
C is a small language, certainly compared to the others at the top of the list (the Java and Python libraries are just huge). Because they use postings and general 'chat' about languages to gauge interest, I'm a little worried that chatter from people attempting to get their heads around the larger language libraries will be counted as equivalent to what will most likely be more targeted chat about C. To me this would suggest that C is probably underscored, whilst larger languages will be overscored, if using this sort of approach (not that I've dug properly into their methodology!).
Sounds rather confusing then.
Waste not, get your budget cut next year.