I am not a US taxpayer so I don't give a shit how much such a bullet costs. All I know is that sometimes the SEALs or other special ops units serve to protect civilians. Hard to believe, but that is their function. Put aside "the bad terrorists" and just focus on some scenarios in which such a weapon would be extremely useful despite its cost... like, I don't know... maybe it is a stupid Hollywood-style example, but - the Maersk Alabama incident. AFAIK the snipers did an excellent job there, and if such a weapon could help in such situations, I like it.
The idea is good in itself, but unless your OS vendor starts using it, it is worthless IMHO. Let's take RHEL for example:
* it raises security issues - crucial stuff like kernel code comes from a third party, and that party does not give any SLA or other agreement - I don't think the security guys will like that
* it raises support issues - do e.g. RH or Oracle support systems patched this way?
* it (paradoxically) raises the complexity of running the systems, since it involves yet another patch, test, deploy cycle
So it is a cool feature to have, e.g. for a home server, but I won't pay 4 bucks for it. It is cool from a technical standpoint. But unless the operating system vendor itself supports it, it is worthless from my point of view.
Also, I don't see RH or Novell (SUSE) even touching this stuff - I wonder why?
> When a stock broker's trading floor system goes down, the loss is
> measured in millions of dollars per second
Ksplice does not protect you from servers going down.
> Downtime is just not acceptable under some circumstances.
Still - Ksplice does not make your servers highly available or fault tolerant. It just allows you to patch the server without rebooting.
Any decently designed HA or FT system should have things like service reboots implemented by design, since it is natural and obvious that you will need to reboot some nodes sometimes. Usually this is referred to as maintenance or planned downtime - which is quite a different thing from unplanned downtime or disaster recovery - and Ksplice does not deal with those.
I personally don't see much use for such a service. If you need an FT or HA system, you need to design it as such from the ground up. In that case, paying 4 bucks just solves some problems with rebooting after a kernel upgrade. I don't have a problem with that - I just reboot in the next service window. Normally, mission-critical systems have some sort of redundancy, not only to cope with planned service reboots but also with unplanned disasters. So usually you have an N+1 redundant cluster in which you can reboot the servers using a procedure that was worked out while DESIGNING the system. Also, I see quite a few security issues with patching the kernel this way. In mission-critical services you usually test everything before rolling it out, so using such a feature just makes things more complicated (compared to simply rebooting the machine with my current procedures).
I cannot find any security details on their webpage. They state: "Ksplice Uptrack uses cryptography to authenticate the update feed." So what? Fedora also used cryptography, and once their servers got rooted the whole chain collapsed. So if I were to use their service, I would want to know exactly how their security is implemented, since I would be getting kernel patches (quite critical stuff) from them. At least with RHEL I know about their security procedures (quite rigorous). And from a support point of view: do e.g. Red Hat or Oracle support systems patched this way?
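That is exactly the kind of detail I would want spelled out. For comparison, on RPM-based distros you can at least verify package signatures yourself before installing anything - a minimal sketch, where the key path and package name are just placeholders:

```shell
# Import the vendor's signing key (path is illustrative, CentOS 5 style).
rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-5

# Check a downloaded package against the imported key before installing.
# A bad or missing signature shows up here instead of at install time.
rpm -K kernel-2.6.18-164.el5.x86_64.rpm
```

With a live-patch feed you never get this manual checkpoint, so the vendor's infrastructure security matters even more.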
It is a nice feature but IMO not suitable for enterprises yet.
At the launch of Bing I used it for testing, and I didn't find any feature that would break my addiction to Google. Even if Bing were as good as Google, it is still different and requires me to learn a new tool. The only reason I would learn a new tool would be if it were better - but it is not. At least in my opinion.
So my question is: does anybody even use Bing? The only time I recall using Bing recently was via the search box on MSFT KB/Support pages (which use Bing), and it just failed for simple queries like "download something-microsoftish". Google is much better even when searching MSFT sites.
Yes and I know that Google != privacy. But I can cope with that if it works OK.
> Any admin worth their pay can run rings around a net-blocker.
What admin? An Oracle admin? AIX admin? SharePoint admin? SAP admin? There are a lot of different types of admins now, and what makes them worth their pay is that they help you run your business and earn money. The ability to run rings around a net-blocker is not something you put on your resume.
Also, in a well-implemented network it is not that easy to run around it *undetected*.
Also, by doing so you are clearly breaking the rules that your supervisor set for you - what for? So they can fire you easily if they wish? Mobile broadband internet is like 10 bucks a month (at least here in Poland). Just get your own netbook or laptop and use it for your unauthorized Internet access.
Where is the logic in that?
- you use SPF for your own domains
- your school's Zimbra installation scores mail from your domains as spam
Based on the above facts, how have you come to the conclusion that SPF doesn't work in general? The fact that your school's Zimbra scores your mail as spam is just a single case and most probably not related to SPF in general.
Have you looked at the headers of those messages marked as spam? Have you contacted the postmaster?
Some spam filters score on SPF, so not having it increases the chance of false positives for your legitimate mail. And since SPF is free and painless to implement (just a few DNS records), I don't see any reason not to use it. Not that it is hugely significant either way.
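For the record, publishing SPF really is just one TXT record per domain - a minimal sketch, with a made-up domain and IP:

```shell
# A minimal SPF policy for a hypothetical domain: allow the domain's
# MX hosts plus one fixed address, hard-fail everything else.
SPF_RECORD='v=spf1 mx ip4:192.0.2.10 -all'
echo "$SPF_RECORD"

# In a BIND zone file this is a single TXT record:
#   example.com.  IN  TXT  "v=spf1 mx ip4:192.0.2.10 -all"

# Once published, check it from any machine with:
#   dig +short TXT example.com
```

Use `~all` (softfail) instead of `-all` while testing, so a mistake doesn't get your own mail rejected outright.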
Well, if I do, then SSL/TLS certificates and cryptography in general are the means to authenticate someone's (or some server's) identity.
So my question is: if the sites on my intranet use proper PKI and SSL/TLS mechanisms, am I still vulnerable to this flaw?
Srsly - great.
Well, you don't clearly state what you wish to accomplish nor how much money you have, so it is hard to answer. But maybe a setup like this will be OK.
Build yourself custom PCs.
Storage server:
- a good, big enclosure that can fit a large amount of drives
- a moderate 64-bit AMD processor (really, any - you will not be doing any serious processing on the storage server)
- any amount of RAM (really, 1 or 2 gigs will be enough)
- a mobo with good SATA AHCI support (for RAID) and an onboard NIC (any - for management)
- one 1Gb PCI-* NIC with two ports
- 6x SATA2 NCQ HDDs (any size you need) dedicated to a software-based (dmraid) RAID1+0 array
Virtualization servers (2 or more):
- you need the virtualization servers to have the same config
- any decent enclosure you can get
- the fastest 64-bit AMD processor you can get, preferably tri- or quad-core (it will do the processing for the guests), with VT extensions
- as much RAM as you can get/fit into the machine
- mobo with VT support, one (any - for management) NIC onboard
- one 1Gb PCI-* NIC with two ports
- one moderate SATA disk for local storage (you will be using it just to boot the hypervisor) or disk-on-chip module
Network switch and cables:
- any managed 1Gb switch with VLAN and EtherChannel support; HP ones are quite good and not as expensive as Cisco
- good CAT6 FTP patchcords
General notes for hardware:
- make sure all of the PC hardware is *well* supported by Linux since you will be using Linux
- get better (quality-wise) components if you can - good enclosures, power supplies, drives etc. - since it is a semi-server setup, you don't want it to fail for some stupid reason
- make two VLANS - one for storage, other for management
- plug onboard NICs into management VLAN
- plug HBA NICs into storage VLAN
- configure ports for EtherChannel and use bonding on your machines for greater throughput
- for storage server just use Linux
- for the virtualization servers use Citrix XenServer 5 (it is free, has nice management options, supports shared storage and live migration) or vanilla Xen on Linux; don't bother with VMware Server, and VMware ESX and Microsoft solutions are expensive
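The bonding part of the notes above could be sketched like this, CentOS-5 style - interface names, addresses and the bond mode are my assumptions, and 802.3ad needs the matching EtherChannel/LACP config on the switch ports:

```shell
# /etc/modprobe.conf - load the bonding driver in 802.3ad (LACP) mode,
# with link monitoring every 100 ms
alias bond0 bonding
options bond0 mode=802.3ad miimon=100

# /etc/sysconfig/network-scripts/ifcfg-bond0
#   DEVICE=bond0
#   IPADDR=10.0.0.11        # storage VLAN address (made up)
#   NETMASK=255.255.255.0
#   ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-eth1 (and the same for eth2)
#   DEVICE=eth1
#   MASTER=bond0
#   SLAVE=yes
#   ONBOOT=yes
```

Do the same on the storage server, so both ends of the storage VLAN get the doubled throughput.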
Storage server setup:
- install any Linux distro you like (CentOS would not be a bad choice)
- use 64bit version
- use dmraid for RAID and LVM for volume management
- share your storage via iSCSI (iSCSI Enterprise Target is, in my opinion, the best choice)
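The storage bullets above could look roughly like this - note I am sketching it with mdadm rather than dmraid, and all device names, sizes and the IQN are made up:

```shell
# RAID1+0 across the six data disks (device names are assumptions)
mdadm --create /dev/md0 --level=10 --raid-devices=6 /dev/sd[b-g]

# LVM on top, so VM volumes can be grown or added later
pvcreate /dev/md0
vgcreate vg_storage /dev/md0
lvcreate -L 200G -n vm_store vg_storage

# Export the volume with iSCSI Enterprise Target:
# /etc/ietd.conf (fragment)
#   Target iqn.2009-01.local.storage:vmstore
#       Lun 0 Path=/dev/vg_storage/vm_store,Type=blockio
```

The virtualization servers then attach to that target over the storage VLAN as their shared storage repository.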
Virtualization servers setup:
- install XenServer5 (or any distro with Xen - CentOS won't be bad)
- use interface bonding
- don't use local storage for VMs - use the storage network instead
Well, here it is: quite a powerful and cheap virtualization solution for you.
An off-the-shelf NAS device will not only be slow but also full of various bogus bugs, for which you have to wait for the vendor to issue a firmware update...
Just build it yourself - build a PC. You have plenty of options:
1. If you have a rack somewhere, buy a low-end 2U rack server with enclosures for SATA disks and a decent RAID controller.
2. Build yourself a PC in a tower enclosure. Get some (cheapest) Core 2 Duo mobo and a mediocre amount of RAM - SMB, NFS and AppleTalk servers on a Linux system will eat up something like 80MB for the system and 10MB per client computer - go figure; the rest of the RAM goes to I/O buffers. Stuff as many SATA disks into it as you can (like 4x 1TB). Set it up with software RAID, and you are done. It will probably be much cheaper than a decent NAS box (the so-called SoHo boxes are not even worth looking at).
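The software RAID and sharing steps from option 2 might look roughly like this - disk names, share paths and the RAID level are my assumptions (RAID5 over four 1TB disks gives about 3TB usable):

```shell
# Four 1TB disks in software RAID5 (device names are assumptions)
mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[b-e]

# Filesystem and mount point for the shares
mkfs.ext3 /dev/md0
mkdir -p /srv/storage
mount /dev/md0 /srv/storage

# /etc/samba/smb.conf (fragment) - the SMB share
#   [storage]
#   path = /srv/storage
#   read only = no

# /etc/exports - the NFS share for the local subnet
#   /srv/storage 192.168.1.0/24(rw,sync)
```

If you would rather trade capacity for rebuild simplicity, RAID1+0 over the same four disks (~2TB usable) works with the same mdadm command and `--level=10`.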
Do that and you will have decent storage that is more efficient than your network.
You mentioned network efficiency? Well, that has nothing to do with the NAS box. You can have the best-performing NAS box, but if your network is weak - well, there goes your efficiency.
So for the network, buy a managed switch that can cope with Linux channel bonding - with that you can bond N Ethernet channels and get network transfers somewhat lower than N * interface speed.