Yeah, and don't forget that "loud pipes save lives" around typical inattentive drivers. This thing is silent but deadly.
It would be more like Cisco requiring you to buy a software/feature license to use the 10Gbps ports on hardware you already paid for.
Oh wait, they already do that (e.g. on the ASA-5585-X, and probably other ASA models too).
More pseudoscience. They say that they're not sure whether this means that porn shrinks your brain, or if the shrunken brain causes porn viewing. But, this leaves out the very real possibility that this correlation means nothing whatsoever. The site below collects correlations that look pretty convincing in the graphs, but quite obviously are unlikely to be cases of causation in either direction:
It could be cleverly disguised as a bit of MD5 but actually be something encrypted with a 33-character one-time pad.
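To illustrate the point (not anyone's actual scheme): ciphertext from a one-time pad is indistinguishable from random bytes, so 16 pad-encrypted bytes hex-encode to 32 characters with exactly the shape of an MD5 digest. A stdlib-only sketch:

```python
import os

msg = b"attack at dawn!!"       # 16-byte message, same length as an MD5 digest
pad = os.urandom(len(msg))      # one-time pad: random bytes, used once

# "encrypt" by XORing each message byte with the corresponding pad byte
ct = bytes(m ^ p for m, p in zip(msg, pad))
print(ct.hex())                 # 32 hex chars -- looks just like an MD5 hash

# XORing again with the same pad recovers the plaintext
pt = bytes(c ^ p for c, p in zip(ct, pad))
assert pt == msg
```

Without the pad there is nothing to distinguish this from a real digest.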
This technique works best when combined with cold fusion. Also, don't forget about step 3.
They may not even need any fine print. Accepting compensation can affect your right to seek damages later.
1) You can use a different logger with systemd
2) To watch log messages with the journal: journalctl -f
There are still some things I don't like about the journal (I haven't seen how to specify different retention rules for logs of different applications), but then I've only spent a few minutes actively using it.
Maybe the thing that irritates me about journal is I don't know what previously unsolved problem it is trying to solve, while making some log processing difficult.
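For what it's worth, the retention knobs journald does expose (in /etc/systemd/journald.conf) are global, not per-application, which matches the complaint above. A sketch of the relevant settings:

```ini
# /etc/systemd/journald.conf -- retention is journal-wide, not per-app
[Journal]
SystemMaxUse=500M        # cap total disk usage of persistent journals
SystemMaxFileSize=50M    # rotate individual journal files at this size
MaxRetentionSec=1month   # drop entries older than this, regardless of source
```

If you need different retention per application, you're back to shipping those logs to a separate logger.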
But if you are booting from CDs, and the CD holds the rest of the media, why do you need the signature-verification utility on the boot media (the 1.44MB image)? Bootstrap the installation image from the iso9660 part of the CD (or from the network, in the case of a network install) and have that contain the signature verification utility.
Hint: RPM-based distros have been doing this since rpm 3.x, i.e. since about 1999.
Really, who uses floppies for installation these days? Sure, maybe floppy emulation on a DRAC or iLO or ILOM, but those all
- support CDROM or DVD emulation
- support PXE boot (with relatively large images possible via TFTP)
If none of these are options, just write the whole (hybrid) ISO image to a 4GB USB flash disk and be done with it.
I personally haven't used an actual CD-RW or DVD to install a system in about 5 years. Either network install booted via PXE for servers, or USB flash disk for laptops.
I am on Android, and I don't see any way to see the video from m.slashdot.org.
Oh, and that still doesn't answer why laptops are trickier than desktops in this regard.
Unless your boot is "Please enter password to boot up computer" before it can boot the OS.
Of course it is. Any other FDE is the sprinkling of magic encryption dust kind of FDE. Both initscripts (on RH-style systems) and systemd support this, and have for years.
c) full-disk encryption can be tricky to do right on laptops, which are the main user of WiFi.
I have been using full (or, full enough,
I have used KDE for a long time. My laptop has an embedded 3G card that works better / more easily with NetworkManager/ModemManager than with more traditional (e.g. pppd, wvdial etc.) setups. Thus, I tried KNetworkManager.
However, I use WiFi networks with both WPA2 Personal, and WPA2 Enterprise, security. I don't mind my WiFi keys for the WPA2 Personal networks being stored somewhere, but I don't want my passwords for WPA2 Enterprise networks stored *anywhere*. Before trying NetworkManager/KNetworkManager, I would have all the WiFi configuration in
However, with KNetworkManager, my options are:
In the 'Store' case, due to my KDE Wallet settings (including 'close when screensaver starts'), now every time I resume my laptop, I will be prompted to enter my KDE wallet password (longer/more complex than the WPA Enterprise password).
In the 'Always Ask' case, I am required to enter my password *every* *time* I associate to the SSID.
So, maybe it is better than nm-applet (I haven't used nm-applet *that* much) or the Gnome 3 integration (which I only see when trying to help a colleague), but it most definitely isn't better than the old
At present, I don't care about having a WiFi network connected before a user is logged in. Surely on a typical laptop, that occurs once a month or so? We have network authentication with cached credentials, and I can kinit after logging in anyway. If this is really a requirement, using TPM (with all of its failings) would probably be a better approach.
The question will actually be more like "would you keep driving manually if it meant 80% higher insurance rates?"
Putting aside, for the moment, all the Slashdot griping about whether this is or is not a productive use of human time and energy (I agree it's probably not in a macro sense, but hey, that's the world we live in), this is indeed an old idea.
I worked as a consultant on a hardware-based HFT system back in 2007 for a Silicon Valley company called Xambala. They were using reconfigurable logic-style chips of their own design, specialized for text processing applications, rather than more general purpose FPGAs, though we discussed chaining their chips with FPGAs for more computationally intensive algorithms.
The edge you are going to get from doing processing in silico is quite limited. You can conceivably cut a few tens of microseconds, maybe even 100 microseconds, out of a computation - you still have to have all the other pieces of the puzzle just right. If you are doing straight news/information driven trades in situ at an exchange and can get the same timing of feed data to respond to, then you'll have a good edge (i.e. "Buy if X>0.2, Sell if X<0.1, do nothing otherwise").
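That kind of rule really is just a couple of threshold comparisons, which is exactly why it maps so well onto hardware. A toy sketch (thresholds and names are illustrative, not anyone's real strategy):

```python
def signal_decision(x, buy_above=0.2, sell_below=0.1):
    """Toy threshold rule from the comment above: trade on a scalar signal x."""
    if x > buy_above:
        return "buy"
    if x < sell_below:
        return "sell"
    return "hold"       # signal in the dead zone: do nothing

print(signal_decision(0.30))  # buy
print(signal_decision(0.05))  # sell
print(signal_decision(0.15))  # hold
```

The whole decision is two comparators; the hard part is everything around it, as the rest of the comment says.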
If you are trying to do intermarket arb (futures/ETF arb, for example) your edge is smaller, since differences in network route, networking hardware, other infrastructure are generally larger in magnitude than what you gain from cutting a few tens of microseconds out of the picture in hardware - but this edge would probably serve existing players well who already have top tier infrastructure.
For the more sophisticated, "game"-driven trading algorithms out there in equity markets, how much value doing stuff in hardware gives you is variable. There's a lot of decision logic involved in spiking orders around, changing behavior states based on other participants, and so on. A better set of algorithms running on top tier infrastructure in software will probably do better than inferior algorithms running in hardware without top tier infrastructure.
Other than Xambala, I am sure there are other players doing similar things. I've also used CUDA on NVIDIA GPUs for calculating option market prices really fast. These are just tools and other people definitely are using these tools in the right scenarios. What really matters in making money is combining the right tools with good implementation, excellent infrastructure, and testing and adaptiveness to market conditions.
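Option pricing is a good fit for GPUs because it's embarrassingly parallel: the same closed-form evaluated over thousands of strikes/expiries. As a rough CPU-side illustration of the kernel involved (stdlib-only Black-Scholes, not Xambala's or anyone's production code):

```python
import math

def norm_cdf(x):
    # standard normal CDF via the error function (stdlib only)
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call, no dividends.

    S: spot, K: strike, T: years to expiry, r: risk-free rate, sigma: vol.
    """
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

price = bs_call(S=100, K=100, T=1.0, r=0.05, sigma=0.2)
print(round(price, 4))  # about 10.45
```

On a GPU you'd evaluate this same formula once per thread across the whole option chain, which is where the "really fast" comes from.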