happened. NX-OS is the Cisco datacenter OS that is *already* based on a Linux kernel. Geez, Cisco's ASA appliances made the move from IOS to Linux years ago. Your next network operating system = your existing network operating system. Wake up/Redundancy/Get a life/I pity you because you've wasted everyone's time.
First they turned on their capitalist landowners, and now they have turned on their submarine Internet cable! Don't you know what's good for you, Cuba?
In the Future, *all* restaurants are Taco Bell!
For employers, or even police: you could easily detect emotional flushes in someone's face when asked certain questions, i.e., a lie detector of sorts. Also, think poker players with this software built into their "Google Glasses".
yep, you nailed it.
Hey guys - let's just combine the "good" with the "bad" and they'll neutralize each other! All in favor of "Genetically Modified Organic" say aye! Or wait a sec - is that what GMO already stands for?? I am confused.
There is science to back up the existence of your peanut allergy. There is no science to back up harm caused by GM foods - that's the point of TFA.
Absence of evidence is not evidence of absence.
Perhaps, but crappy evidence is evidence of crap, IMHO. Take a look at the dude's screenshots. Any power company using such poorly put together screens, with no interesting status info, no proper overview screen with worthwhile data, isn't really a power company, but some kiddie's dream.
Allen-Bradley is out there quite heavily. In fact, I saw far more of it than Siemens stuff.
AB is big in the US only. Siemens is by far the largest controls systems provider internationally.
ATTN: Systems Integrators.
Guys, we can’t ignore this one. Stuxnet has taught the whole world what can be done. So it is now orders of magnitude more likely that an attacker could develop a modified version of it, or design something similar in nature, with the potential of doing much more damage than Stuxnet actually caused.
Here’s a worst-case scenario:
We’re now in a situation (unlikely, but potential) where an American systems integrator could connect his laptop to a plant in India, pick up something like this, and then bring it back to our in-house systems, where it would then spread to every system they ship. The control systems then start failing, accidents occur, etc.
I don’t think Systems Integrators are at risk to this particular threat (the original Stuxnet) for the following reasons:
The antivirus vendors are all over this one. It’s probably in every signature scanner, and its behavioral tricks are probably being watched by all of the behavior-based malware products.
Microsoft issued a fix for the Windows exploit Stuxnet uses in early August (or earlier). So if you’ve done Windows Update since then, you’re protected regardless of antivirus status.
The quick policy change I think we need to make is this:
1. Control systems products and Internet surfing must be 100% separated. So if you run Step7 or RSLogix on your native boot laptop, then you need to surf inside a VM. Or, if you surf on your main machine, all your controls programs must run inside VMs.
2. Develop a good firewall procedure for when we connect laptops to foreign plant networks (especially International). We need to block the laptop from accepting inbound IP traffic from any addresses other than the ones in our own panel. This won’t be a big deal to implement and maintain as we travel to different networks.
3. Keep all hosts and VMs current on Critical updates from Microsoft.
4. Keep current updates on whichever antivirus or antimalware program you’re using. I actually think we’re safer overall if we keep a mix of security products in use (different ones on different machines) rather than picking one single vendor’s solution, because we’re more likely to learn we’ve been infected, even if it’s just one of the products we’re using that detected it. Then we can use appropriate measures to remove it from any systems that didn’t detect it.
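The inbound whitelist in item 2 is just "accept from our own panel, drop everything else." A minimal sketch of that decision logic (the panel subnet here is a made-up example, not an address from any real job):

```python
import ipaddress

# Hypothetical subnet for our own control panel; an assumption for
# illustration only -- substitute the actual panel addressing on site.
PANEL_SUBNET = ipaddress.ip_network("192.168.10.0/24")

def allow_inbound(src_ip: str) -> bool:
    """Accept inbound traffic only if the source sits inside our panel."""
    return ipaddress.ip_address(src_ip) in PANEL_SUBNET

print(allow_inbound("192.168.10.7"))  # our PLC -> True
print(allow_inbound("10.44.2.13"))    # unknown host on the plant LAN -> False
```

In practice you'd express the same rule in the laptop's host firewall (default-deny inbound, one allow rule for the panel range), which is easy to re-point as you travel between plant networks.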
Is this good enough for now? Too extreme? Other ideas?
I think the issue is that at some point one of your internal network computers will need to communicate with a server that's out on the Internet - and that server will resolve to an IPv6 address. So how does a PC with no IPv6 stack attempt to communicate with such an address outside your NAT'd IPv4->IPv6 firewall? Perhaps if the NAT on the router is sophisticated enough, it could translate the DNS lookup to some fake IPv4 address and then your internal network computers wouldn't know the difference, but that seems like a stretch. In the end, the easiest way to NAT IPv6 is probably going to be with full-blown end-to-end IPv6.
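The "fake IPv4 address" trick described above amounts to the DNS proxy handing each IPv6-only destination a placeholder from a private pool, and the NAT box remembering the mapping so it can rewrite packets. A toy sketch of that bookkeeping (the pool range and function name are invented for illustration):

```python
import ipaddress
import itertools

# Private pool the NAT device hands out to IPv4-only clients.
# The 10.64.0.0 range is an arbitrary assumption for this sketch.
_pool = (str(ipaddress.IPv4Address("10.64.0.1") + i) for i in itertools.count())
_mapping = {}  # fake IPv4 -> real IPv6, consulted when rewriting packets

def resolve_for_v4_client(aaaa_record: str) -> str:
    """Return a fake IPv4 for an IPv6-only destination, reusing mappings."""
    for fake, real in _mapping.items():
        if real == aaaa_record:
            return fake
    fake = next(_pool)
    _mapping[fake] = aaaa_record
    return fake

print(resolve_for_v4_client("2001:db8::1"))  # first destination -> 10.64.0.1
```

Even this toy version shows the pain points: the pool can exhaust, mappings need timeouts, and anything that embeds literal addresses in the payload breaks - which is why end-to-end IPv6 ends up simpler.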
I'm using a backup scenario which uses swapping out mirrors (RAID 1) in combination with Windows Volume Shadow Copy services. Twice per day my system (Server 2003 or Server 2008) takes a Volume Shadow Copy snapshot (VSS), which is how you get that "oops" protection - it's similar to having an always-available, instant-restore tape library built right into the filesystem (BTW, Novell had this feature 10 or more years before Microsoft, but I digress). You just need to make sure you have plenty of free space on the drive to accommodate the snapshots, but the algorithm is very efficient since it only grows when files have changed.
Then, once per week, for the off-site disaster protection, I swap the external eSATA software mirror drive out, remove the broken mirror under disk manager, import the foreign disk from last week, and recreate the mirror. Bingo - just a single drive to keep up with. I have a hot on-site mirror and an off-site mirror no more than a week old. It's quick and convenient, performance is excellent, and cost is minimal since the features are built into Windows.
I'm not sure if there's an equivalent to Volume Shadow Copies under Linux, but the software mirroring is there and works quite well.
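That "only grows when files have changed" behavior is classic copy-on-write: a snapshot starts empty and records a block's old contents only the first time that block is overwritten. A toy sketch of the idea (not the actual VSS mechanism, just the principle):

```python
class SnapshotStore:
    """Toy copy-on-write store: snapshots hold only blocks that changed."""

    def __init__(self):
        self.live = {}        # block id -> current data
        self.snapshots = []   # each snapshot: {block id -> pre-change data}

    def snapshot(self):
        self.snapshots.append({})  # costs nothing until a write happens

    def write(self, block, data):
        # Preserve old contents in the latest snapshot before overwriting.
        if self.snapshots and block not in self.snapshots[-1]:
            self.snapshots[-1][block] = self.live.get(block)
        self.live[block] = data

store = SnapshotStore()
store.write("b1", "v1")
store.snapshot()
store.write("b1", "v2")          # one block changed since the snapshot...
print(len(store.snapshots[-1]))  # ...so the snapshot stores just 1 entry
```

On the Linux side, LVM snapshots work on the same copy-on-write principle, which is the closest built-in analogue to Volume Shadow Copies.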
Whoa, hold the boat. I've had a lot of experience with Dell & HP/Compaq (ProLiant) provided RAID systems, and they are not sensitive to disks with vastly different innards. All that matters is block count, and software mirroring doesn't even care about that, because you'll simply be limited to the size of the smaller disk. If you're using mirroring or RAID, try to go with different makes of the same size. This article talks about MTBF. It turns out that if two drives of the same exact model come off the line and end up in your PC, there is a chance they could fail very close in time to one another. So your mirror or RAID could fail permanently while rebuilding from the first failure. But if all your drives are of a different make, chances are they won't fail at the same time and you'll get the critical time needed to rebuild your array.
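A rough back-of-envelope shows why uncorrelated drives buy you that rebuild time (the failure rate here is an assumed illustrative number, not a figure from the article):

```python
# Assumed annualized failure rate per drive -- illustrative only.
afr = 0.03
rebuild_days = 1  # time the array spends degraded while rebuilding

# If the surviving drive fails *independently*, the chance it dies
# inside the one-day rebuild window is tiny:
p_independent = 1 - (1 - afr) ** (rebuild_days / 365)
print(f"{p_independent:.2e}")  # on the order of 1 in 10,000

# Same-batch drives share defects and wear patterns, so their failures
# are correlated and the effective risk in that window can be far higher.
```

The independent-failure number is what RAID's redundancy math assumes; mixed makes keep reality closer to that assumption.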
When I'm going to do mirroring or RAID on hardware that doesn't have high-end dedicated server RAID controller, I use Windows or Linux software RAID. Performance is surprisingly good and I'm not married to a specific hardware implementation. I've had _none_ of the issues you've described with Linux software RAID on several servers for several years. Mdadm has only whined after a power outage or genuine disk failure.