The limiting factor of shared fiber broadband is packet turnaround time, just like coax and radio, combined with scheduling the upstream data. The *PON standards were designed for sending lots of cable-TV bits in one direction while coping with a small percentage going the other way. There are all sorts of techniques to fix that problem, and all of them fail in different ways. So far the fastest home internet isn't PON based but dedicated point-to-point links to a somewhat local fiber switch with massive amounts of upload capacity. It would be very interesting for Google to release some documents about the different types of technology they are using in their Kansas City experiment. I've heard they are using at least 4 different types of connections.
German and English won in the engineering world because of compound words. You can invent a new device and create a name that literally describes it.
English wins over German because of the relative lack of gendered words. Genders can get very messed up when using compound words. As an example, if a boat is female and a trailer is male, what gender should a boat-trailer be?
I suspect this debacle has been a massive setback for Linux on the desktop. I'm as hardcore an open-source advocate as you'll find; I haven't run a closed-source OS in over 20 years, but I was almost ready to throw in the towel and install Windows at the height of this!
I did exactly this... I run Linux on just about every non-GUI bit of equipment I have (virtualisation, the lot), but for everything I actually have to look at a screen for, I use Windows 7 again. Gnome 3 killed it for me... I have 3 x 24" 1920x1080 screens that Gnome 3 could never handle right. I was running Fedora 20 until Gnome 3 drove me off.
TBH, XFCE would be perfect if it were using Wayland. The graphics tearing issues I had with my tri-head video card + XFCE were horrible. The sad fact is that the only real fix is for XFCE to get newer graphics handling. One day it'll get there - and hopefully one day it becomes nice to use again - but until then, I went back to Windows 7 and the amount of work I actually get done is amazing...
The other benefit is that the dual GPUs (Intel + ATI) in my laptop actually work properly so I can play some TF2 on my laptop in the downtime - with VERY good performance. I couldn't get anywhere near that on Linux - even with the ATI binary drivers.
This is what is wrong with SystemD.... Do ONE job, do it well. Not replace the entire ecosystem.....
I haven't flown in the US in the last year. I've been on commercial aircraft in Australia where the pilot got the wrong frequency while the controller was using "dec-ee-mal". A friend had his class run an experiment where students wrote down numbers read out in different styles. With the Aussie students there were substantially more errors with the ICAO way of reading numbers than with the older FAA style.
In the US they use "point" which is one syllable. There is no place in aviation radio where the decimal point isn't implied which makes using "decimal" a bigger waste of radio time.
You may not have a choice. My last power bill had a connection charge that was higher than the energy consumption charge, and I pay $0.22 per kWh. That will be the trend in the future. In places where the grid is still locally owned, I can see the connection charge being added to property taxes as the cost of batteries comes down to where people can go off-grid.
We just put in 6 250 W panels. They cost less than $190 each, but installing the frame and the wiring cost more. The MPPT module happens to plug into our existing telco-grade -48 V DC power supply; it was only $800, but it plugged into a nice $5k system. The batteries that will run one of our racks of gear for 8 hours cost $250 each for 8 of them. The silicon bits aren't a major part of the cost of going off-grid now.
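For a rough sense of the numbers above, here is the tally (prices are the ones quoted in this comment; the frame, wiring, and install labor are left out because no figure was given):

```python
# Rough parts tally for the setup described above; prices are the ones
# quoted in the comment, install labor and frame excluded.
panels = 6 * 190        # 6 x 250 W panels at just under $190 each
mppt = 800              # MPPT module feeding the -48 V DC plant
batteries = 8 * 250     # 8 batteries, ~8 hours of runtime for one rack
total = panels + mppt + batteries
print(panels, mppt, batteries, total)  # 1140 800 2000 3940
```

The panels plus electronics come to about $1,940 against $2,000 of batteries, which is the point: the silicon is no longer where most of the money goes.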
The USDA's budget is 100x that of the BLM. Sure, one is the Dept. of Agriculture and the other is the Dept. of the Interior, but I'm not sure it matters much, since I think the USDA has a claim on all BLM land as well.
Work recently spent about $5k for a Cube, and it isn't printing any better than the 4 other 3D printers I've used, 3 of which cost less than $1000.
You can't do this with systemd. A kill to a process group is an atomic operation in POSIX, so if you do a kill -9 -1 (i.e. send a SIGKILL to init and all of its children), the kernel will not return from the kill syscall until it has sent the signal to all of the processes. That syscall also prevents any other task switches until it is done, so the result is that no process (other than init) ever runs again, even if it is in the middle of a forkbomb. A kill -1 -1 (send SIGHUP to everything via init's process group) has traditionally told all user-level programs that the user logged out, and all daemons that they should reload their config files.
Killing a process group (the negative process id, which is what the original commenter was talking about, not a SIGHUP) is used all the time on real systems. It is how apachectl (and most other forking daemons and their control programs) tells its children to reload the config file or exit in a controlled way. It is used every time a user logs out to make sure all their processes go away. Signals are the oldest and most reliable of the IPC mechanisms, and they are great when the set of messages you need to send is a tiny number of options.
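The process-group pattern is easy to demonstrate. This is a minimal sketch (not apachectl itself): a parent puts a child, and the child's own fork, into a fresh process group, then a single killpg call signals every member at once:

```python
import os
import signal
import time

# Minimal sketch of signalling a whole process group at once, the way a
# forking daemon's control program signals all of its workers.
pid = os.fork()
if pid == 0:
    os.setpgid(0, 0)            # child: become leader of a new group
    worker = os.fork()          # a "worker" inside the same group
    if worker == 0:
        time.sleep(60)          # workers just idle until signalled
        os._exit(0)
    time.sleep(60)
    os._exit(0)

os.setpgid(pid, pid)            # parent sets the group too, avoiding a race
time.sleep(0.2)                 # give the group a moment to populate
os.killpg(pid, signal.SIGTERM)  # one call delivers SIGTERM to every member
_, status = os.waitpid(pid, 0)
print(os.WIFSIGNALED(status))   # True: the group leader died from the signal
```

Both members die from the one killpg; no loop over child PIDs, and no way for a freshly forked worker in the group to be missed.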
Over the past 3 decades, versions of the inittab syntax have allowed entries in the 2nd and 3rd fields to say that things can run in the background or depend on other named states, which is why the 1st field is a name.
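For reference, a typical SysV-style /etc/inittab looks like this (the entries are illustrative, not from any particular distro):

```
# Format: id:runlevels:action:process  (the 1st field is the entry's name)
si::sysinit:/etc/rc.d/rc.sysinit
# "wait": init blocks until the runlevel-5 scripts have finished
l5:5:wait:/etc/rc.d/rc 5
# "respawn": runs in the background and is restarted whenever it exits
1:2345:respawn:/sbin/mingetty tty1
```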
It is amazing how many properly run systems can cope with a "kill -1 -1" to reset everything without a reboot.
Yet a 1 mm thick plastic bezel around the glass would be nearly invisible and would protect the glass too. Done right, it might even make manufacturing cheaper.
The problem is that the new Xcode will soon drop support for the 32-bit versions of the OS, and for some reason mac developers can't figure out how to make a fat binary that runs on everything from about 10.0.0 to 10.11, even though that only requires having 3 versions of Xcode running on two or 3 different (virtual?) machines and then copying a few files. It is amazing how many open source packages just compile with an older version of Xcode if you add a few #defines for things that aren't used anyway.
Oddly enough, pushing pixels is about the only sane reason for doing 64-bit operations on a handheld device. If you're not using more than a 4 GB address space, going from 32 bits to 64 tends to mean you spend far more time moving pointers whose top halves are all zeros. Old stats showed that at best a 64-bit CPU tends to do about 6% worse on average loads, and on workloads with lots of indirection (like Java) it can be far worse.
And who controls the names and how much does it cost to be a data producer?