Comment Re:Other Motivation? (Score 5, Insightful) 101

Wish I had mod points. This is the crux of the entire problem. These satellite downlink frequencies were originally set up by the FCC for only that use. Now that the FCC has messed up and allowed this to proceed, we have a completely different ballgame: satellite downlink frequencies being transmitted from terrestrial locations at high power levels, while the existing receiving equipment (some of it 10-15 years old) is supposed to continue to work in an environment like this?

Existing receivers do not expect that kind of high-power, close-neighbor interference because (a) having to filter it would reduce received signal and sensitivity anyway (lower performance), (b) any such filtering would be more expensive (power and cost), and (c) no filtering was required, since the FCC had already made sure no one would be swamping the signal by effectively keeping this area of spectrum "quiet" (or at least ensuring the received signals are all at similar power levels with sufficient guard bands).

There are other frequencies and better receivers, but those are not your cheapo handheld battery-powered GPS receivers. So while technical solutions might be found going forward, the real problem is that most commercial GPS equipment will basically stop working. So who should pay to replace everyone's GPS units (from handhelds to in-car units to iPhones)?

Comment Re:No the solution is to go yell at IT (Score 1) 287

Seriously, your problems there sound 100% the fault of incompetent IT, not Windows (I say this as an IT guy). If your system runs a virus scan on login, that is fucking retarded. There is NO reason to do such a thing. If they want to run a regular full scan (something I am not convinced is useful with on-access scanning), it should be done at night when nobody is around.

If your IT is anything like my IT, a full virus scan scheduled to run at "night" means it begins at 5pm. I cannot kill the process, but at least I can suspend it, then resume it when I really leave for the evening. (It really sucks when you try to remote in from home in the evening to get some work done and the remote connection is slow as heck because of the AV scan.) I would think 2am would be a better time, but I have no control over it.

Comment Re:SSDs are a better overall solution (Score 1) 287

but Windows has this brain-dead idea that it should save it first to Temporary Internet Files (under c:\Users !!), and only THEN transfer it over the network to the NAS

Using IE to download the files?

You give me hope. Are you saying this is just stupid IE behavior, not Windows 7? Good; I haven't gotten around to installing Firefox yet. I had to use IE on the initial install to download motherboard drivers (though obviously not the Intel NIC drivers), video drivers and such (and the 64-bit Slackware 13.37 ISO). So as soon as I get Firefox installed, I can avoid that nonsense. Of course, these days, which version of Firefox to install? 3.6? 4? 5? 6? Yeesh. That's a whole 'nother debate. Heh.

Comment Re:Boot time isn't Window's problem (Score 2) 287

Don't replace the 9TB RAID, just add an SSD for Windows (120GB or so). Get two and RAID0 them if you want, for even more speed (however, be warned that RAID with SSDs will most likely lose TRIM support).

Keep the 9TB RAID array. My current motherboard (ASUS P8P67 Deluxe) has 4 SATA3G ports, 2 Intel SATA6G ports (RAID-able) and 2 Marvell SATA6G ports (RAID-able). Systems with 4 or 6 HDDs are completely possible now without having to get an add-on PCIe controller, assuming your case has room for the drives.

Comment Re:SSDs are a better overall solution (Score 1) 287

Totally agree. This inflexibility (and the surprising disk space requirement: 25GB!) forced me to scrap my 15K RPM SCSI drive as my boot drive when I upgraded my computer to a Core i7. I typically had my OSes partitioned onto the 72GB SCSI disk, applications on another drive, and user data on yet another HDD. But it was really clear early on that splitting a 72GB drive between Windows 7 and Linux was going to be too small, especially if I couldn't get ALL the user data off of C:\. Since application load times would benefit from an SSD, I finally caved a little, got the 120GB SSD, let Windows and all its applications just go to C:\, and then went through the admittedly painful process of relocating every folder that has a "Location" tab to another disk. There are about 10 or so directories to relocate in Windows 7, per user, rather than just the one "My Documents" as in XP.

On the Linux side, root is on the SSD, /var is on the 15K RPM SCSI disk, /opt and /home are on my 7200 RPM SATA disk (750GB), and /tmp is on tmpfs (16GB RAM!). It's not that hard to partition properly for performance and ease of backup on the Linux side. But my Windows 7 partition is just a mishmash of everything, so I guess if I want to back Windows up, I have to concede to creating a 30GB-or-larger partition image file, just because everything is thrown into C:\.

On Linux, I back up user data with rsync to external drives, and make partition images of the root and boot drives so that recovering onto a new bare-metal drive is really fast (just restore the partition image, then boot). Then create a new /home partition and rsync the data back. The partition images are small enough to put on a USB stick or DVD-R. But my Windows 7 image has to stay on the NAS until I get a BDXL burner!!!
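The rsync half of that backup scheme boils down to "copy only what's new or changed." A minimal sketch of the idea in Python (the paths in the test usage are placeholders, and this is illustrative only; the real backups here use rsync itself):

```python
# Minimal sketch of the rsync idea: mirror a source tree into a backup
# tree, copying only files that are new or have changed content.
# Illustrative only -- real backups use rsync -a to an external drive.
import filecmp
import os
import shutil

def mirror(src, dst):
    """Recursively copy files from src to dst, skipping unchanged ones."""
    os.makedirs(dst, exist_ok=True)
    for entry in os.scandir(src):
        target = os.path.join(dst, entry.name)
        if entry.is_dir(follow_symlinks=False):
            mirror(entry.path, target)
        elif not os.path.exists(target) or not filecmp.cmp(
                entry.path, target, shallow=False):
            shutil.copy2(entry.path, target)  # preserves mtimes, like rsync -a
```

Unlike real rsync, this sketch doesn't prune files deleted from the source (rsync's --delete) or preserve hard links (-H), which is part of why rsync is the right tool for the job.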

Comment Re:SSDs are a better overall solution (Score 1) 287

It's not as nice as Linux or Mac OS X, but you can change the "Location" of most directories under C:\Users\. You cannot relocate Application Data, unfortunately, but you can move just about everything else (Documents, Downloads, Videos, Photos, Music, etc.).

My biggest annoyance with Windows 7 and user storage so far is downloading large files off the Internet. I have an Atom-based NAS that can sustain about an 85MB/sec transfer rate. When downloading something like a 4GB DVD ISO file, I choose to save it directly to the NAS, but Windows has this brain-dead idea that it should save it first to Temporary Internet Files (under C:\Users !!), and only THEN transfer it over the network to the NAS. When that temporary download area is on the SSD, it really irritates me: 4GB of unnecessary writes, followed immediately by 4GB of deletes (hopefully with TRIM, right?). Either way it's totally unnecessary, since my NAS link is more than fast enough to keep up with a download coming in off the Internet at about 1MB/sec.
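What I'd want instead is a straight streamed copy to the final destination, with no staging copy on the SSD. A sketch of that in Python (the in-memory BytesIO source and the "dvd.iso" filename are stand-ins for a real HTTP stream and a real NAS path):

```python
# Sketch: stream a "download" directly to its final destination in
# fixed-size chunks, with no intermediate temp file on the SSD.
# io.BytesIO stands in for an actual network stream; "dvd.iso" stands
# in for a path on the NAS.
import io
import shutil

def stream_to_destination(source, dest_path, chunk_size=1 << 20):
    """Copy a file-like source straight to dest_path, 1MB at a time."""
    with open(dest_path, "wb") as dest:
        shutil.copyfileobj(source, dest, chunk_size)

# Simulated 4MB download written straight to the target path
stream_to_destination(io.BytesIO(b"\0" * (4 << 20)), "dvd.iso")
```

The point of the chunked copy is that only chunk_size bytes are ever buffered in memory, so the destination can be any path (including a network share) with nothing staged locally first.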

But I agree: since Windows 7 now clearly has all user data neatly tucked into C:\Users, it should not be much of a stretch to support having \Users on a different drive, like E:\Users. But alas, that's not easy to do unless you think of it before you install Windows (there are hacks to move the entire Users directory, but they must be applied during installation; once Windows is installed, it is not possible to relocate the C:\Users folder).

Comment Re:SSDs are a better overall solution (Score 1) 287

Then you're going to be waiting a while. Even before SSDs, there were obvious advantages to running more than one HDD in a system. Back when RAM was very expensive, it was usually worth it to have a second HDD just to hold the swap file.

Tiered storage is done in the enterprise, and there is no reason a similar approach cannot work for the home user. You can easily get 2TB drives cheap today; you will not see SSDs at that capacity and price point for a while. The performance benefits of an SSD are not really needed for bulk-storing your DVD rips, but application loading, or anything else that does lots of small random reads/writes, will benefit greatly from one.

For me, the small (120GB-ish) SSD replaces my old strategy, dating to the mid-90s, of using a smaller, more expensive 15K RPM SCSI disk as the primary boot drive and a cheaper, larger, slower IDE or SATA disk for storage. These days even a 3-tier storage solution is very practical: a fast SSD OS drive, a larger 7200 RPM SATA HDD for data and games, and an external 2TB+ NAS for bulk storage, archiving and backups. I don't think $200 is unreasonable for a performance-oriented drive. Trying to do 15K RPM SAS for $200 is practically impossible (given that a good PCIe SAS controller alone will probably equal or exceed the cost of a SATA-III SSD).

I guess technically I have a 4-tiered system, since after getting the SSD I didn't junk my 15K RPM SCSI drive; at only 72GB, it was just inadequate for a Windows 7 + Linux dual-boot. So: SSD for the OSes, 15K RPM SCSI for user data and video editing/encoding, internal 7200 RPM SATA for normal data, games, photos, etc., and then the 2TB external NAS. High-performance drives are always going to carry a price premium (15K SAS), but on a modern Sandy Bridge system (easily run at 4.5GHz), the old spinning HDD is really the bottleneck for almost all tasks. Yes, it's currently too expensive to replace ALL the spinning HDDs in a system with SSDs, but I would contend that a $100 price premium for something that can quintuple your disk performance is not a bad investment.

If you are the type who wants just a single HDD in their system, then yes, SSDs (or any other performance-oriented drive, e.g., 15K RPM SAS) will simply be too expensive if you want more than 100-300GB of storage.

Comment SSDs are a better overall solution (Score 1) 287

My current Windows 7 install boots to the login screen in about 8 seconds, and after logging in, it's about 2 seconds to a usable desktop. Of course, this is on a new SF-2281 SSD that pumps out about a 511MB/sec read rate on a SATA-III controller. If you want fast boot times these days, consider an SSD OS drive (120-240GB) and a spinning disk for everything else (data, games, photos, movies, etc.). The SSD improves many aspects of performance, much more than just the initial boot time. Of course, booting from an SSD should be darn near instantaneous, unless it's simply not possible to speed up that swirling MS Windows logo on boot. (Is that the bootup-time bottleneck? Heh.)

With the memory footprint of something like 64-bit Windows 7 these days, a partial hibernation might be a good idea, since full hibernation may require writing an 8GB or larger file to disk, depending on how many applications are open. If you leave everything open and then hibernate, cold-booting might actually be faster, especially from an SSD OS drive.

Comment Re:IBM did the same (Score 1) 394

No kidding. I tried to get my HP OfficeJet 7410i to work with my new 64-bit Windows 7 machine, and neither the in-OS driver nor HP's 350MB printer driver package will print to the darn thing in network mode (IP address, port 9100). After trying to uninstall the bloated HP drivers, I still had HP folders and junk all over the HDD, so I just wiped the disk and reinstalled Windows 7. Ironically, when I plug the printer into my machine via USB, the in-OS Windows 7 driver just works. On Mac OS X and Linux (Slackware), however, it has always been about a one-minute affair to install a new printer, enter the IP address, print a test page, and be done with it, even with said 7410i.
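For reference, that network mode (JetDirect-style raw printing on port 9100) is nothing more than bytes sent over a TCP connection, which is why it's hard to see how drivers manage to break it. A Python sketch, simulating the printer with a local listener (against a real printer you would connect to its IP address on port 9100 instead):

```python
# Raw "JetDirect"-style printing is just bytes sent to TCP port 9100.
# A local listener stands in for the printer here; with a real printer
# you would connect to its IP address on port 9100.
import socket
import threading

received = bytearray()

def fake_printer(server_sock):
    """Accept one connection and collect whatever the 'driver' sends."""
    conn, _ = server_sock.accept()
    while True:
        chunk = conn.recv(4096)
        if not chunk:
            break
        received.extend(chunk)
    conn.close()

server = socket.socket()
server.bind(("127.0.0.1", 0))      # a real printer listens on port 9100
server.listen(1)
port = server.getsockname()[1]
t = threading.Thread(target=fake_printer, args=(server,))
t.start()

job = b"%!PS\n% minimal PostScript job\nshowpage\n"
with socket.create_connection(("127.0.0.1", port)) as s:
    s.sendall(job)                 # the entire "driver" for raw printing
t.join()
server.close()
```

This is essentially what `nc printer-ip 9100 < job.ps` does from a Unix shell, and why the one-minute Linux setup works.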

I recall having major problems back in the day getting the Windows XP driver to print to the 7410 in network mode, too. HP printers + Windows + networking is a total joke.

I miss my good ol' HP IIIP from 1990. I used it all the way up to 2006, with a PostScript cartridge and a parallel-port-to-RJ45 print server. Under Linux it was awesome as a networked JetDirect PostScript printer. I had to recycle it when the fuser failed after 16 years and replacing the fuser would have cost more than a new printer. I have been pretty disappointed with this OfficeJet 7410.

I do really like my HP ProCurve 1810G switch, however. The only thing I have from HP right now that I am proud to own (well, and my aging 48GX).

Comment Re:Warranty (Score 1) 244

Not at all. I just installed a Core i7-2600K with the stock cooler a week ago in my home computer, and it was installed correctly. I am not a fan of the 1155 socket's cooler mounting, with those "springy" clips that are supposed to stay on tight, but for whatever that's worth, my CPU was running around 65-70C upon POST and going into the BIOS.

As soon as the system was powered on, the CPU fan was making a nasty clicking noise. The fan then failed after 2 minutes of uptime, and the temperature proceeded to skyrocket to 80C and then 95C, at which point I shut the system down. On repeated attempts to power up, it jumped to 95C within 30-40 seconds, and the motherboard (ASUS P8P67 Deluxe) was quite adamantly trying to warn me of the CPU fan failure; it was nice to see the temperature right on the main EFI BIOS screen. On a new build, the first thing I want to check is that the CPU is not overheating and all the fans are spinning; then I play around in the BIOS a bit before going on to the OS install.

I didn't bother trying to reapply thermal grease and reseat the stock cooler, since the fan had failed outright anyway. I looked at some 95W stock replacements but decided they just weren't up to par, so instead I bought a Zalman CNPS10 Flex and added my own silent 120mm fan. After this installation, the temperatures reported in the BIOS are 41-45C at the standard 3.4GHz clock. I don't plan to OC much, but with temps around 39-41C in Linux, there is plenty of headroom now (compared to 70C on the stock cooler before its fan failed).

It would have been in my best interest if the CPU just hadn't come with a heatsink/fan and I had been forced to buy a decent aftermarket one from the get-go. I usually do; this was the first time I decided to just go with the stock heatsink/fan. Bzzt. Mistake!

Comment Re:So... practical linux attacks next? (Score 1) 281

If you consider corporations that employ hundreds or thousands of engineers (hardware, software, ASIC, etc.), it is quite conceivable that an engineer's primary desktop is a workstation running Linux (where it would have been HP-UX or Solaris a decade ago). That would qualify as thousands of Linux clients in an enterprise environment, and such is the case at my company. We also have Windows PCs, but I have never seen a Mac here, ever.

Comment Re:Rethinking my pro-nuclear stance (Score 1) 580

Unless they intentionally put the storage pools high up so they couldn't be flooded out by a tsunami? I thought most reactors had their storage pools in the ground or underground. (IANANP)

Hindsight is 20/20, but a lot of buildings have their backup generators on the roof for a reason. It also makes it easier to airlift in more diesel fuel. In any place that can flood, it makes no sense to put the backup generators in the basement.

But even so, what about backup contingencies? There was a guy on the news the other night talking about Southern California's reactors who stated what I think is obvious to most /. readers (and most people in general): why can't you airlift in backup generators? The representative for the California reactor said they have contingency plans to bring in generators stored off-site if the local generators fail. If you can get the generators there and installed before the batteries run out, then there should be no incident.

Of course, the best long-term solution (if you are still a proponent of nuclear power, which I am) is to require cooling systems that do not need electricity to function (passive safety systems), or reactor designs that have no major failure scenario when an active safety system is lost (can't get hot enough, self-shutting-down, etc.).

Comment Re:Robots are the Answer (Score 1) 580

I believe I read somewhere that it would take at least 200 air drops of water to control the situation in just one storage pool. It sounds like they did just a few drops, then abandoned the idea because the radiation was too high to fly helicopters directly over the reactor site.

So yes, a handful of airdrops was completely useless; it needed a fire brigade of continuous drops: 400-600 air drops, depending on how many storage pools had to be filled that way.

I would think that with Japan's robotic technology, they could get the water cannons closer without adding more risk to the workers. But if it wasn't already thought of, it's too late now.

One still has to respect and honor the workers on-site trying to control this situation. I do fear that many of them will end up with serious health issues, or even die, from the exposure, especially if the storage pool situation continues out of control (which now seems to be the much more serious problem).
