Comment they're sniper rounds...you don't shoot many (Score 1) 216
Even at 10K/round that wouldn't be crazy compared to the cost of training the sniper and getting them in and out of the area.
Now try it ten feet away from a 120" projector screen. (Or 2 feet away from a 24" monitor, which is the same relative size.)
Linux internal APIs get changed from time to time. Drivers that are part of the kernel tree get updated by whoever is changing the API. Drivers that are outside the kernel tree have to be updated to work with the API changes.
I've had my distro ship a new kernel and then had to wait weeks for the Nvidia driver to be updated. (Admittedly they're getting better at it, but it has happened.)
The rule against using rebase really only applies if you are publishing your git repo (and specifically the branch in question) to other people.
The reason for the rule is that if you rebase your changes on top of the latest upstream, anyone pulling in your branch is then forced to rebase as well since you've essentially rewritten history. (Doing a rebase changes the hashes on all your local commits.) If you instead merge upstream work onto your local branch then your history is preserved and everyone downstream from you can also just do a merge.
However, for any development branch that you are not exporting to other people, "git rebase" is a very useful tool to keep your own development at the tip of your local development branch. When other people push changes you can pull them in and rebase your work on top of theirs.
I *never* do development on an upstream branch. So instead of the above I would always check out the local "master" branch, do a "git pull", then check out my development branch and rebase my work on top of the latest local "master" branch.
The nice thing about this is that the local "master" branch is always identical to some version of the upstream "master" branch, I never need to worry about it getting polluted with my development work.
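That workflow can be tried end-to-end in a throwaway sandbox. Everything here is hypothetical (the "upstream"/"local" repo names, the "devwork" branch, the demo identity), and `git init -b` assumes git 2.28 or newer:

```shell
set -e
dir=$(mktemp -d); cd "$dir"

# Hypothetical "upstream" repo with one commit on master
git init -q -b master upstream        # -b needs git >= 2.28
cd upstream
git config user.email demo@example.com && git config user.name demo
echo base > file && git add file && git commit -qm base

# Clone it and start a local development branch with one commit
cd "$dir"
git clone -q upstream local
cd local
git config user.email demo@example.com && git config user.name demo
git checkout -qb devwork
echo mine > mine && git add mine && git commit -qm mywork

# Meanwhile upstream gains a new commit
cd "$dir/upstream"
echo more >> file && git commit -qam upstreamwork

# The workflow from the comment: update local master, then rebase onto it
cd "$dir/local"
git checkout -q master
git pull -q                  # fast-forwards master to match upstream
git checkout -q devwork
git rebase -q master         # replays mywork on top of the new master tip
git log --oneline            # mywork now sits above upstreamwork
```

Note that local "master" never carried any development work, so the pull is always a clean fast-forward.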
I prefer the model where the "master" branch is the continuous trunk of development. When you want to release, you branch off for that release.
Functionally equivalent, but has the nice property that most stuff goes into "master" and only bugfixes/backports or special-case stuff goes onto the release branch.
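A minimal sandbox of that model, with made-up branch and file names: trunk work stays on "master", the release branch is frozen off it, and a later fix is backported with cherry-pick:

```shell
set -e
dir=$(mktemp -d); cd "$dir"
git init -q -b master                 # -b needs git >= 2.28
git config user.email demo@example.com && git config user.name demo

echo v1 > app && git add app && git commit -qm "feature work"
git branch release-1.0                # freeze the release from trunk
echo v2 >> app && git commit -qam "trunk work"   # master keeps moving

# A bugfix lands on master (touching its own file), then gets backported
echo patched > hotfix && git add hotfix && git commit -qm "bugfix"
fix=$(git rev-parse HEAD)
git checkout -q release-1.0
git cherry-pick "$fix" >/dev/null     # only the fix, not the trunk work
git log --oneline
```

The release branch ends up with "feature work" plus the cherry-picked "bugfix", while "trunk work" stays on master only.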
In git a "branch" is literally a mapping between a name and the commit ID of the commit at the head of the branch. If you've merged your commit into the main development trunk (submitting the bugfix) there is essentially zero overhead to keeping the branch around in your repository.
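You can see that "a branch is just a name-to-commit mapping" directly on disk. With git's default loose-ref storage, a freshly created branch is a tiny file under .git/refs/heads containing nothing but a commit hash (names here are placeholders):

```shell
set -e
dir=$(mktemp -d); cd "$dir"
git init -q -b master                 # -b needs git >= 2.28
git config user.email demo@example.com && git config user.name demo
echo hi > f && git add f && git commit -qm first

git branch bugfix-123                 # "creating" a branch writes one small file
cat .git/refs/heads/bugfix-123        # the branch is literally just a commit ID
```

The file's contents match `git rev-parse bugfix-123`, which is why keeping dozens of merged branches around costs essentially nothing.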
Some CI systems have a mechanism where you create what you think will be the fix, then push it up to a common area for testing, review, etc. Then you respin your fix based on feedback, send it back in. Rinse, wash, repeat until everyone is satisfied, then merge for real. In that sort of environment it can be handy to have one local branch per bug that you're working on.
So one branch per bug you're working on, one branch per feature you're working on, one branch per released version, etc. I just checked and one local git repo I'm working on is sitting at around 40 local branches currently.
4K = 3840 x 2160, or in other words 1920x1080 doubled in both directions. I've always thought calling it 4K was a bit dubious; yes, it's four times the number of pixels, but it's only twice the linear resolution.
The "4K" refers to the number of horizontal pixels: cinema 4K is 4096 pixels across.
The use of "4K" for 16:9 consumer displays is a bit of a misnomer.
An ASA 25 slide projected in a dark room looks *awesome*. A VCR on a CRT looks crap.
"And I have no use whatever for "integrated" graphics."
I, on the other hand, am a software development guy. As long as it's snappy in 2D and supports multiple big monitors, I'm totally down with integrated graphics, since it usually uses a lot less power than a discrete card and Intel has really good Linux driver support.
As far as I can tell the cost of a Toyota Corolla is basically the same number of dollars as it was 10 years ago. Which means that after factoring in inflation the car is significantly cheaper than it used to be.
I have a 2005 Toyota Matrix, and aside from oil changes and tires I've only had to replace one part (the airbag clockspring) which cost a few hundred bucks and which I installed myself.
I worked for a large company (thousands of employees) and was laid off with almost twenty others on my team because we had basically completed the project that we had been working on for some years. We were given notice that it would be happening a month in advance, some of us were asked to stay on longer (with pay of course) for knowledge transfer purposes. They had meetings going over all the expectations, all the necessary paperwork, tax implications, etc.
It was all very civilized, nobody got booted out. People finished up their immediate work and gave training sessions to the people that were going to be staying on to maintain the project. Everything went smoothly.
Imagine if there was a precision guided tactical nuke that was basically equivalent to 10 conventional precision guided bombs. People would be much more likely to use it.
Imagine that a nation had a small "clean" nuke that could be delivered with pinpoint precision. At that point it's basically just a more efficient form of high explosive. Why *wouldn't* they use it? (As opposed to tens or hundreds of conventional bombs.)
The issue with nukes is that they're WMDs. If they got to the point where they were no longer WMDs but rather just a very efficient way of blowing up a relatively small area (a single remote military installation, for example) then people are going to use them.
Kinetic energy goes with the square of the speed, so a 40 mph crash has not quite 2X as much energy as a 30 mph crash: 40²/30² = 1600/900, a 16:9 ratio.
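The arithmetic, as a one-liner (awk used only as a calculator here):

```shell
# Ratio of kinetic energy, 40 mph crash vs 30 mph crash: v^2 scaling
ratio=$(awk 'BEGIN { printf "%.2f", (40*40)/(30*30) }')
echo "$ratio"    # 1.78, i.e. 16:9 -- short of double
```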
For God's sake, stop researching for a while and begin to think!